http://www.gsjournal.net/old/physics/cook2.htm
Email: Nigel B. Cook. Alternate email: nigelbryancook@hotmail.com

INTRODUCTION

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

‘… the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly…’ - http://www.constitution.org/mac/prince06.htm

‘(1). The idea is nonsense. (2). Somebody thought of it before you did. (3). We believed it all the time.’ - Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle in ‘Home is Where the Wind Blows’, Oxford University Press, 1997, p154).

Because of Drs Susskind and Witten, the media have let string theory go on without asking for definite testable predictions. I don’t think the lay public takes much notice of ‘theory’ it can’t understand. There are three types of not-yet-falsified theory:

1. Experimentally confirmed but mathematically abstract and possibly incomplete (Standard Model, relativity, quantum mechanics, etc.)

2.
Not experimentally confirmed but popularised with best-selling books, yet possibly testable (Hawking radiation, gravity waves, etc.)

3. Untestable/not falsifiable (over-hyped string theory’s vague landscape ‘predicting’ 10^500 vacua, 10/11 dimensions, vague suggestions of superpartners without predicting their energy to show whether they can potentially be checked or not, ‘prediction’ of unobservable gravitons without any testable predictions of gravity)

Back in 1996, ‘popular physics’ authors were flooding the media with hype about backward time travel, 10-dimensional strings, parallel universes and Kaku flying-saucer speculation, and were obviously lying that such unpopular non-testable guesses were science.

Newton’s 3rd empirically based law suggests an equal inward implosion force, carried by gauge bosons, which, shielded by mass, proves gravity and electromagnetism to within 1.65% (proof below). This mechanism also predicts particle masses and other observables, and eliminates most of the unobserved ‘dark matter’ speculation and the need for a cosmological constant / dark energy (the latest data suggest that the ‘cosmological constant’ and dark energy epicycle would need to vary with time!). These are all existing accepted facts; the Feynman diagrams are widely accepted, as are spacetime, the big bang, and Newton’s laws of motion. The result, that apples fall at the measured acceleration, is apparently ‘only a personal pet theory that should be suppressed from arXiv.org and ignored’. Drs Lee Smolin and Peter Woit could sit under an apple tree to verify that existing ‘string theory’ gravity is ‘speculative gibberish’: it is an effort to destroy science using untestable hocus pocus ‘string theory’!
Update: Lee Smolin has now kindly acknowledged the possibility of using this type of argument (that the quantum field theory gauge boson exchange process predicts magnetic moments and the Lamb shift, so an attempt to unify the spacetime fabric with Feynman path integrals is an empirically defensible physical reality, unlike ‘string theory’ speculation). This applies to some kind of spin foam vacuum in loop quantum gravity, as mentioned on Peter Woit’s blog. Smolin is committed to the very difficult mathematical approach, but was decent enough to say:

Nigel Says: January 14th, 2006 at 2:18 pm
Some kind of loop quantum gravity is going to be the right theory, since it is a spin foam vacuum. People at present are obsessed with the particles that string theory deals with, to the exclusion of the force mediating vacuum. Once prejudices are overcome, proper funding of LQG should produce results.

Lee Smolin Says: January 14th, 2006 at 4:41 pm
.. Thanks also to Nigel for those supporting comments. Of course more support will lead to more results, but I would stress that I don’t care nearly as much that LQG gets more support as that young people are rewarded for taking the risk to develop new ideas and proposals. To go from a situation where a young person’s career was tied to string theory to one in which it was tied to LQG would not be good enough. Instead, what is needed overall is that support for young scientists is not tied to their loyalty to particular research programs set out by we older people decades ago, but rather is on the basis only of the quality of their own ideas and work as well as their intellectual independence. If young people were in a situation where they knew they were to be supported based on their ability to invent and develop new ideas, and were discounted for working on older ideas, then they would themselves choose the most promising ideas and directions.
I suspect that science has slowed down these last three decades partly as a result of a reduced level of intellectual and creative independence available to young people. Thanks, Lee

Sadly then, Dr Lubos Motl, string ‘theorist’ and assistant professor at Harvard, tried to ridicule this approach with the false claim that Dirac’s quantum field theory disproves a spacetime fabric, since it is allegedly a unification of special relativity (which denies a spacetime fabric) and quantum mechanics. Motl tried to ridicule me with this, although I had already explained the reason to him!

"An important part of all totalitarian systems is an efficient propaganda machine. ... to protect the 'official opinion' as the only opinion that one is effectively allowed to have." - STRING THEORIST Dr Lubos Motl: http://motls.blogspot.com/2006/01/power-of-propaganda.html

Here is a summary of the reasons why Dirac’s unification is only of the maths of special relativity, not the principle of no fabric. In fact Dirac was an electrical engineer before becoming a theoretical physicist, and later wrote:

‘… with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?,’ Nature, v168, 1951, p906.

(If you have a kid playing with magnets, how do you explain the pull and push forces felt through space? As ‘magic’?) See also Dirac’s paper in Proc. Roy. Soc. v.A209, 1951, p.291.

Thankfully, Peter Woit has so far retained a comment on the discussion post for loop quantum gravity which points out that Motl is wrong: http://www.math.columbia.edu/~woit/wordpress/?p=330

anonymous Says: January 21st, 2006 at 1:19 pm
Lumos has a long list of publications about speculation on unobservables. So I guess he’s well qualified to make vacuous assertions.
What I’d like to see debated is the fact that the spin foam vacuum is modelling physical processes KNOWN to exist, as even the string theorist authors of http://arxiv.org/abs/hep-th/0601129 admit, p14:

‘… it is thus perhaps best to view spin foam models … as a novel way of defining a (regularised) path integral in quantum gravity. Even without a clear-cut link to the canonical spin network quantisation programme, it is conceivable that spin foam models can be constructed which possess a proper semi-classical limit in which the relation to classical gravitational physics becomes clear. For this reason, it has even been suggested that spin foam models may provide a possible ‘way out’ if the difficulties with the conventional Hamiltonian approach should really prove insurmountable.’

Strangely, the ‘critics’ ignore the consensus on where LQG is a useful approach, and just try to ridicule it. In a recent post on his blog, for example, Motl states that special relativity should come from LQG. Surely Motl knows that GR deals with the situation better than SR, which is a restricted theory that is not even able to deal with the spacetime fabric (SR implicitly assumes NO spacetime fabric curvature, to avoid acceleration!). When asked, Motl responds by saying that Dirac’s equation in QFT is a unification of SR and QM. What Motl doesn’t grasp is that the ‘SR’ EQUATIONS are the same in GR as in SR, but the background is totally different:

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).
…’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

What a pity Motl can’t understand the distinction and its implications. Light has momentum and exerts pressure, delivering energy. Continuous exchange of high-energy gauge bosons can only be detected as the normal forces and inertia they produce.

LOOP QUANTUM GRAVITY: SPIN FOAM VACUUM

The fabric of spacetime is a sea in which boson radiations spend part of their time converted into a perfect fluid of matter-antimatter.

‘In 1986, Abhay Ashtekar reformulated Einstein’s field equations of general relativity using what have come to be known as Ashtekar variables, a particular flavor of Einstein-Cartan theory with a complex connection. He was able to quantize gravity using gauge field theory. In the Ashtekar formulation, the fundamental objects are a rule for parallel transport (technically, a connection) and a coordinate frame (called a vierbein) at each point. Because the Ashtekar formulation was background-independent, it was possible to use Wilson loops as the basis for a nonperturbative quantization of gravity. Explicit (spatial) diffeomorphism invariance of the vacuum state plays an essential role in the regularization of the Wilson loop states. Around 1990, Carlo Rovelli and Lee Smolin obtained an explicit basis of states of quantum geometry, which turned out to be labelled by Penrose’s spin networks.’ - Wikipedia.

The basic mechanism was first released in the letters page of the October 1996 issue of Electronics World, with further notices placed in the June 1999 and January 2001 issues. Two articles, in the August 2002 and April 2003 issues, were followed by letters in various issues.
In 2004, the result ρ = ρ_local e³ was obtained using the mass continuity equation of hydrodynamics and the Hubble law, allowing for the higher density of the earlier-time big bang universe with increasing distance (divergence in spacetime, or redshift of gauge bosons, prevents the increase in effective observable density from going to infinity with increasing distance/time past!). In 2005, a radiation pressure-based calculation was added and many consequences were worked out.

The first approach worked on is the ‘alternative proof’ below, the fluid spacetime fabric: the fabric of spacetime described by the Feynman path integrals can be usefully modelled by the ‘spin foam vacuum’ of ‘loop quantum gravity’. The observed supernova dimming was predicted via the October 1996 Electronics World magazine, ahead of discovery by Perlmutter, et al. The mechanism omitted from general relativity (above) does away with ‘dark energy’ by showing that gravity generated by the mechanism of expansion does not slow down the recession. In addition, it proves that the ‘critical density’ obtained by general relativity while ignoring the gravity mechanism above is too high by a factor of half the cube of the mathematical constant e, in other words a factor of about 10.

The prediction was not published in PRL, Nature, CQG, etc., because of bigotry toward ‘alternatives’ to vacuous string theory: http://www.math.columbia.edu/~woit/wordpress/?p=215#comment-4082:

Nigel Says: July 7th, 2005 at 7:15 pm
Editor of Physical Review Letters says
Sent: 02/01/03 17:47
Subject: Your_manuscript LZ8276 Cook {MECHANISM OF GRAVITY}
Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories…. Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters

Now, why has this nice genuine guy still not published his personally endorsed proof of what is a ‘currently accepted’ prediction for the strength of gravity? Will he ever do so?
‘String theory has the remarkable property of predicting gravity’: a false claim by Edward Witten in the April 1996 issue of Physics Today, repudiated by Roger Penrose on page 896 of his book The Road to Reality, 2004: ‘in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory’. String theory does not predict the strength constant of gravity, G! However, the Physical Review Letters editor still ‘believes in’ Edward Witten and Physics Today.

http://www.math.columbia.edu/~woit/wordpress/?p=215#comment-4081:

Peter Woit Says: July 7th, 2005 at 7:27 pm
I’m tempted to delete the previous comment, but am leaving it since I think that, if accurate, it is interesting to see that the editor of PRL is resorting to an indefensible argument in dealing with nonsense submitted to him (although the "…" may hide a more defensible argument). Please discuss this with the author of this comment on his weblog, not here. I’ll be deleting any further comments about this.

‘(1). The idea is nonsense. (2). Somebody thought of it before you did. (3). We believed it all the time.’ - Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle in ‘Home is Where the Wind Blows’, Oxford University Press, 1997, p154).

http://www.math.columbia.edu/~woit/wordpress/?p=215#comment-4080:

Alejandro Rivero Says: July 8th, 2005 at 6:34 am
currently accepted is not different of the typical forms to request funds in some project, where you are basically asked what are you to discover, and when. I call this part of science, very botanic-wise, the "classification" side. The (also botanic) counterpart, "exploration", is always more problematic. Smolin article … was about this, wasn’t it?

*********************

The media is too chicken to report it, to save attacks from ‘string theorists’ who can’t even convince their own wives of their propaganda.
It is independent proof of Catt’s experimental evidence that charges are gravitationally trapped energy. Fundamental particles have a black hole (not Planck) sized shielding area, predicting gravity. The mechanism for gravity is proved by a second calculation that doesn’t require a shielding area (below).

Light has momentum and exerts pressure, delivering energy. The pressure towards us due to the gauge bosons (the force-causing radiation of quantum field theory) produces the contraction effect of general relativity and also gravity, by pushing us from all directions equally, except where reduced by the shielding of the planet earth below us. Hence, the overriding push is that coming downwards from the stars above us, which is greater than the shielded effect coming up through the earth. This is the mechanism of the acceleration due to gravity.

We are seeing the past with distance in the big bang! Gravity consists of gauge boson radiation, coming from the past just like light itself. The big bang causes outward acceleration in observable spacetime (variation in speed from 0 toward c per variation of times past from 0 toward 15,000,000,000 years), hence force by Newton’s empirical 2nd law, F = ma. The 3rd empirical law of Newton says there’s an equal inward force, carried by gauge bosons that get shielded by mass, proving gravity to within 1.65%. The proofs below show that the local density (i.e., density at 15,000,000,000 years after origin) of the universe is:

ρ_local = 3H²/(4πe³G).

The mechanism also shows that because gravity is an inward push in reaction to the surrounding expansion, there is asymmetry at great distances and thus no gravitational retardation of the expansion (predicted via the October 1996 issue of Electronics World, before experimental confirmation by Perlmutter using automated CCD observations of distant supernovae).
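The local-density formula above gives a definite number once a value of the Hubble constant is assumed. A minimal Python sketch, assuming H = 70 km/s/Mpc (the text does not state a value for H, and the value of G used is the standard one, not from the text):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2 (standard value, assumed)
Mpc = 3.0857e22   # metres per megaparsec
H = 70e3 / Mpc    # Hubble constant in s^-1, assuming 70 km/s/Mpc

# Predicted local density from the text's result: rho_local = 3 H^2 / (4 pi e^3 G)
rho_local = 3.0 * H ** 2 / (4.0 * math.pi * math.e ** 3 * G)
print(rho_local)  # roughly 9e-28 kg/m^3
```

The e³ divisor makes this prediction about 20 times smaller than the conventional critical density 3H²/(8πG) would be after the factor-of-2 difference is accounted for, which is the point the text argues.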
Because there is no slowing down due to the mechanism, the application of general relativity to cosmology is modified slightly, and the radius of the universe is R = ct = c/H, where H is the Hubble constant. The observable recession acceleration in spacetime is a = dv/dt = c/t = Hc. Hence, the outward force of the big bang:

F = Ma = [(4/3)πR³ρ_local].[Hc] = c⁴/(e³G) = 6.0266 × 10⁴² Newtons.

Notice the permitted high accuracy, since the force is simply F = c⁴/(e³G), where c, e (a mathematical constant) and G are all well known. (The density and Hubble constant have cancelled out.)

When you put this result for the outward force into the geometry in the lower illustration above, and allow for the effective outward force being e³ times stronger than the actual force (on account of the higher density of the earlier universe, since we are seeing – and being affected by – radiation from the past; see calculations later on), you get F = Gm²/r² Newtons, if the shielding area is taken as the black hole area (radius 2Gm/c²). Why m²? Because all mass is created by the same fundamental particles, the ‘Higgs bosons’ of the standard model, which are the building blocks of all mass, inertial and gravitational!

The heuristic explanation of the 137 anomaly is just the shielding factor of the polarised vacuum:

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

The muon is 1.5 units on this scale, but this is heuristically explained by a coupling of the core (mass 1) with a virtual particle, just as the electron couples, increasing its magnetic moment to about 1 + 1/(2π × 137).
The mass increase of a muon is 1 + 1/2 because the π is due to spin and the 137 shielding factor doesn’t apply to bare particle cores in proximity, as it is due to the polarised vacuum veil at longer ranges. This is why unification of forces is approached with higher energy interactions, which penetrate the veil. The mechanism is that the 137 number is the ratio between the strong nuclear and the electromagnetic force strengths, which is a unification arising from the polarisation of the vacuum around a fundamental particle core. Therefore, the Coulomb force near the core of the electron is the same as the strong nuclear force (137 times the observed Coulomb force), but 99.27% of the core force is shielded by the veil of polarised vacuum surrounding the core.

Therefore, if the mass-causing Higgs bosons of the vacuum are outside the polarised veil, they couple weakly, giving a mass 137 times smaller (electron mass), and if they are inside the veil of polarised vacuum, they couple 137 times more strongly, giving higher mass particles like muons, quarks, etc. (depending on the discrete number of Higgs bosons coupling to the particle core). The formula for all directly observable elementary particle masses (quarks are not directly observable, only as mesons and baryons) is:

(0.511 MeV).(137/2).n(N + 1) = 35n(N + 1) MeV

This idea predicts that a particle core with n fundamental particles (n = 1 for leptons, n = 2 for mesons, and n = 3 for baryons) coupling to N virtual vacuum particles (N is an integer) will have an associative inertial mass of Higgs bosons of 35n(N + 1) MeV, where 0.511 MeV is the electron mass. Thus we get everything from this one mass plus integers 1, 2, 3, etc., with a mechanism. We test this below against data for the mass of the muon and all ‘long-lived’ hadrons. The problem is that people are used to looking to abstruse theory due to the success of QFT in some areas, and looking at the data is out of fashion.
If you look at the history of chemistry, there were particle masses of atoms, and it took school teachers like Dalton and the Russian Mendeleev to work out periodicity, because the bigwigs were obsessed with vortex atom maths, the ‘string theory’ of that age. Eventually, the obscure school teachers won out over the mathematicians, because the vortex atom (or string theory equivalent) did nothing, whereas empirical analysis did. It was eventually explained theoretically! There was a crude empirical equation for lepton masses by A.O. Barut, PRL, v. 42 (1979), p. 1251. We can extend the basic idea to hadrons.

The predicted masses from (0.511 MeV).(137/2).n(N + 1) = 35n(N + 1) MeV, tested for accuracy against data for the mass of the muon and all ‘long-lived’ hadrons:

LEPTON (n = 1):
Muon (N=2): 105 MeV (105.66 MeV measured), 0.6% error!

Mesons (contain n = 2 quarks):
Pions (N=1): 140 MeV (139.57 and 134.96 actual), 0.3% and 3.7% errors!
Kaons (N=6): 490 MeV (493.67 and 497.67 actual), 0.7% and 1.6% errors!
Eta (N=7): 560 MeV (548.8 actual), 2% error!

Baryons (contain n = 3 quarks):
Nucleons (N=8): 945 MeV (938.28 and 939.57 actual), 0.7% and 0.6% errors!
Lambda (N=10): 1155 MeV (1115.60 actual), 3.5% error!
Sigmas (N=10): 1155 MeV (1189.36, 1192.46, and 1197.34 actual), 3.0%, 3.2% and 3.7% errors!
Xi (N=12): 1365 MeV (1314.9 and 1321.3 actual), 3.8% and 3.3% errors!
Omega (N=15): 1680 MeV (1672.5 actual), 0.4% error!

The mechanism is that the charge of the bare electron core is 137 times the Coulomb (polarisation-shielded) value, so vacuum interactions of the bare cores of fundamental particles attract 137 times as much virtual mass from the vacuum, increasing the inertia similarly. It is absurd that these close fits, with only a few percent deviation, are random chance, and this can be shown by statistical testing using random numbers as the null hypothesis. So there is empirical evidence that this heuristic interpretation is on the right lines, whereas ‘renormalisation’ is bogus: http://www.cgoakley.demon.co.uk/qft/

Masses of Mesons: Pions = 1.99 (charged), 1.93 (neutral); Kaons = 7.05 (charged), 7.11 (neutral); Eta = 7.84.
Masses of Baryons: Nucleons = 13.4; Lambda = 15.9; Sigmas = 17.0 (positive and neutral), 17.1 (negative); Xi = 18.8 (neutral), 18.9 (negative); Omega = 23.9.

The masses above for all the major long-lived hadrons are in units of (electron mass) × 137. A statistical chi-squared correlation test against random numbers as the null hypothesis indeed gives positive statistical evidence that they are close to integers. Leptons and nucleons are the things most people focus on, and are not integers when the masses are in units of (electron mass) × 137. The muon is about 1.5 units on this scale, but this can be explained by a coupling of the core (mass 1) with a virtual particle, just as the electron couples, increasing its magnetic moment to 1 + 1/(2π × 137).
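The fits tabulated above can be reproduced in a few lines. A minimal Python check of the text's 35n(N + 1) MeV formula against the measured masses quoted in the text:

```python
# Predicted mass (MeV) from the text's formula: m = (0.511)(137/2) n (N + 1),
# where n = 1 (leptons), 2 (mesons), 3 (baryons) and N is an integer.
def predicted_mass(n, N):
    return 0.511 * (137.0 / 2.0) * n * (N + 1)  # approximately 35 n (N + 1) MeV

# (name, n, N, measured mass in MeV) -- measured values as quoted in the text
particles = [
    ("muon",   1,  2,  105.66),
    ("pion+",  2,  1,  139.57),
    ("kaon+",  2,  6,  493.67),
    ("eta",    2,  7,  548.8),
    ("proton", 3,  8,  938.28),
    ("lambda", 3, 10, 1115.60),
    ("omega",  3, 15, 1672.5),
]

for name, n, N, measured in particles:
    pred = predicted_mass(n, N)
    err = 100.0 * abs(pred - measured) / measured
    print(f"{name}: predicted {pred:.0f} MeV, measured {measured} MeV, {err:.1f}% error")
```

Every deviation comes out under 4%, matching the percentage errors claimed in the table; whether that closeness is significant is the chi-squared question raised above.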
The mass increase of the muon is 1 + 1/2 because the π is due to spin and the 137 shielding factor doesn’t apply to bare cores in proximity.

To recap: the big bang has an outward force of 6.0266 × 10⁴² Newtons (by Newton’s 2nd law), which results in an equal inward force (by Newton’s 3rd law) that causes gravity as a shielded inward force: Higgs field, or rather gauge boson, pressure. This is based on standard heuristic quantum field theory (the Feynman path integral approach), where forces are due not to empirical equations but to the exchange of gauge boson radiation. Where partially shielded by mass, the inward pressure causes gravity. Apples are pushed downwards towards the earth, a shield:

‘… the source of the gravitational field [gauge boson radiation] can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.

LeSage in 1748 argued that there is some kind of pressure in space, and that masses shield one another from the space pressure, thus being pushed together by the unshielded space pressure on the opposite side. Feynman discussed LeSage in his November 1964 lectures, The Character of Physical Law, and elsewhere explained that the major advance of general relativity, the contraction term, shortens the radius of every mass, like the effect of a pressure mechanism for gravity! He does not derive the equation, but we will do so below.
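The outward-force figure recapped above, F = c⁴/(e³G), is easy to check numerically, since it depends only on well-known constants. A minimal Python check (the value of G used here is the standard one, which the text does not state):

```python
import math

c = 299792458.0   # speed of light, m/s (exact by definition)
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2 (standard value, assumed)

# Outward force of the big bang as given in the text: F = c^4 / (e^3 G)
F = c ** 4 / (math.e ** 3 * G)
print(F)  # about 6.03e42 Newtons, agreeing with the quoted 6.0266e42 N
```

The small spread in the last digits against the quoted 6.0266 × 10⁴² N reflects only which measured value of G is used.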
GENERAL RELATIVITY’S HEURISTICALLY EXPLAINED PRESSURE-CONTRACTION EFFECT AND INERTIAL ACCELERATION-RESISTANCE CONTRACTION

Penrose’s Perimeter Institute lecture is interesting: ‘Are We Due for a New Revolution in Fundamental Physics?’ Penrose suggests quantum gravity will come from modifying quantum field theory to make it compatible with general relativity. I like the questions at the end, where Penrose is asked about the ‘funnel’ spatial pictures of black holes, and points out that they’re misleading illustrations, since you’re really dealing with spacetime, not a hole or distortion in 2 dimensions. The funnel picture really shows a 2-dimensional surface distorted into 3 dimensions, whereas in reality you have a 3-dimensional surface distorted into 4-dimensional spacetime.

In his essay on general relativity in the book ‘It Must Be Beautiful’, Penrose writes: ‘… when there is matter present in the vicinity of the deviating geodesics, the volume reduction is proportional to the total mass that is surrounded by the geodesics. This volume reduction is an average of the geodesic deviation in all directions … Thus, we need an appropriate entity that measures such curvature averages. Indeed, there is such an entity, referred to as the Ricci tensor …’

Feynman discussed this simply as a reduction in radial distance around a mass of (1/3)MG/c² = 1.5 mm for the earth. It’s such a shame that the physical basics of general relativity are not taught, and the whole thing gets abstruse. The curved-space or 4-d spacetime description is needed to avoid π varying, due to gravitational contraction of radial distances but not circumferences.

The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, is by Newton’s law v = (2GM/x)^(1/2), so v² = 2GM/x.
The situation is symmetrical; ignoring atmospheric drag, the speed with which a ball falls back and hits you is equal to the speed with which you threw it upwards (conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v.

By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an effect identical to ordinary motion. Therefore, we can place the square of escape velocity (v² = 2GM/x) into the Fitzgerald-Lorentz contraction, giving g = (1 – v²/c²)^(1/2) = [1 – 2GM/(xc²)]^(1/2).

However, there is an important difference between this gravitational transformation and the usual Fitzgerald-Lorentz transformation: with velocity, length is contracted in only one dimension, whereas with spherically symmetric gravity, length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!). Using the binomial expansion to the first two terms of each:

Fitzgerald-Lorentz contraction effect: g = x/x₀ = t/t₀ = m₀/m = (1 – v²/c²)^(1/2) = 1 – ½v²/c² + ...

Gravitational contraction effect: g = x/x₀ = t/t₀ = m₀/m = [1 – 2GM/(xc²)]^(1/2) = 1 – GM/(xc²) + ...,

where for spherical symmetry (x = y = z = r) we have the contraction spread over three perpendicular dimensions, not just one as in the case of the Fitzgerald-Lorentz contraction: x/x₀ + y/y₀ + z/z₀ = 3r/r₀. Hence the radial contraction of space around a mass is r/r₀ = 1 – (1/3)GM/(rc²).

Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass.
The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3)GM/c². This is the 1.5-mm contraction of the earth’s radius that Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the Lorentz-FitzGerald contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without viscosity. Feynman was unable to proceed with the LeSage gravity and gave up on it in 1965. However, we have a solution…

PROOF BY RADIATION PRESSURE: There is strong evidence from electromagnetic theory that every fundamental particle has a black-hole cross-sectional shield area for the fluid analogy of general relativity. The effective shielding radius of a black hole of mass M is equal to 2GM/c². A shield, like the planet earth, is composed of very small, sub-atomic particles. The very small shielding area per particle means that there will be an insignificant chance of the fundamental particles within the earth ‘overlapping’ one another by being directly behind each other. The total shield area is therefore directly proportional to the total mass: the total shield area is equal to the area of shielding by 1 fundamental particle, multiplied by the total number of particles. (Newton showed that a spherically symmetrical arrangement of masses, say in the earth, by the inverse-square gravity law is similar to the gravity from the same mass located at the centre, because the mass within a shell depends on its area and the square of its radius.) The earth’s mass in the standard model is due to particles associated with up and down quarks: the Higgs field.
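Two of the numbers above are easy to verify: the (1/3)GM/c² ≈ 1.5 mm contraction for the earth, and the smallness of the 2Gm/c² black-hole shielding radius for a fundamental particle compared with the Planck length. A hedged Python sketch (the earth mass, electron mass, and Planck-length formula are standard values and definitions, not taken from the text):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0      # speed of light, m/s
hbar = 1.0546e-34    # reduced Planck constant, J s
M_earth = 5.972e24   # mass of the earth, kg (standard value, assumed)
m_e = 9.109e-31      # electron mass, kg (standard value, assumed)

# Radial contraction (1/3) G M / c^2 for the earth
contraction = G * M_earth / (3.0 * c ** 2)
print(contraction * 1e3)  # about 1.5 (mm), matching Feynman's figure

# Black-hole shielding radius 2 G m / c^2 for an electron-mass particle,
# compared with the Planck length sqrt(hbar G / c^3)
r_bh = 2.0 * G * m_e / c ** 2
planck_length = math.sqrt(hbar * G / c ** 3)
print(r_bh, planck_length)  # ~1.35e-57 m versus ~1.6e-35 m
```

The electron's black-hole radius comes out roughly 22 orders of magnitude below the Planck length, which is the sense in which the text's assumed shield area is 'black hole (not Planck) sized'.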
From the illustration above, the total outward force of the big bang is (total outward force) = ma = (mass of universe).(Hubble acceleration, a = Hc; see detailed discussion and proof further below), while the gravity force is the shielded inward reaction (by Newton’s 3rd law the outward force has an equal and opposite reaction):

F = (total outward force).(cross-sectional area of shield projected to radius R) / (total spherical area with radius R).

The cross-sectional area of shield projected to radius R is equal to the area of the fundamental particle (π multiplied by the square of the radius of the black hole of similar mass), multiplied by (R/r)², which is the inverse-square law for the geometry of the implosion. The total spherical area with radius R is simply 4πR². Inserting the simple Hubble law results c = RH and R/c = 1/H gives us F = (4/3)πρG²M²/(Hr)². We then set this equal to F = Ma and solve, getting G = (3/4)H²/(πρ). When the effect of the higher density of the universe at the great distance R is included, this becomes G = (3/4)H²/(πρ_local e³).

Feynman discusses the LeSage gravity idea in his Character of Physical Law 1965 BBC lectures, with a diagram showing that if there is a pressure in space, shielding masses will create a net push.

‘If your paper isn’t read, they are ignorant of it. It isn’t even a put-down, just a fact.’ – my comment on Motl’s blog. The next comment was from Peter Woit: ‘in terms of experimentally checkable predictions, no one has made any especially significant ones since the standard model came together in 1973 with asymptotic freedom.’ Woit has seen the censorship problem!

Via the October 1996 Electronics World letters, this mechanism – which Dr Philip Campbell of Nature had said he was ‘not able’ to publish – correctly predicted that the universe would not be gravitationally decelerating.
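The algebra of the derivation above can be checked numerically: substituting ρ = 3H²/(4πG) into F = (4/3)πρG²M²/(Hr)² should return Newton's GM²/r² exactly. A short Python check with arbitrary trial values (the trial numbers are illustrative only, not from the text):

```python
import math

G = 6.674e-11   # gravitational constant
H = 2.3e-18     # trial Hubble constant, s^-1 (illustrative value only)
M = 1.0e30      # trial mass, kg (illustrative)
r = 1.0e11      # trial separation, m (illustrative)

# Density consistent with the derived result G = (3/4) H^2 / (pi rho),
# i.e. rho = 3 H^2 / (4 pi G) (without the e^3 correction factor)
rho = 3.0 * H ** 2 / (4.0 * math.pi * G)

# Shielded inward force from the derivation: F = (4/3) pi rho G^2 M^2 / (H r)^2
F_derived = (4.0 / 3.0) * math.pi * rho * G ** 2 * M ** 2 / (H * r) ** 2

# Newton's law for two equal masses: F = G M^2 / r^2
F_newton = G * M ** 2 / r ** 2

print(F_derived, F_newton)  # identical up to floating-point rounding
```

Both H and ρ cancel exactly, so the agreement is independent of the trial values chosen, confirming that the two equations in the text are algebraically consistent.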
This was confirmed two years later experimentally by Perlmutter’s discovery, which Nature did publish, although it omitted to say that it had been predicted. The standard approach of science-censors is ‘hear no evil, see no evil and speak no evil’: pretending these facts are unclear but not saying which facts they cannot grasp, while preferring crackpot string theory, which is a fraud from start to finish. Apples fall because of gauge boson shielding by nuclear atoms (mainly void space). The same pressure causes the general relativity contraction term.

STEP 1: Pressure is force/area. By geometry (illustrated here), the scaled area of shielding below you is equal to the area of space pressure above that is pushing you down. The shielded area of the sky is 100% if the shield mass is the mass of the universe, so: A_shielding = A_r M / M_universe.

(1) Force, F = P_space A_shielding = (F_space / A_r)(A_r M / M_universe) = F_space M / M_universe

Next (see step 2 below): introduce F_space = m_space a_H. Here, the Hubble velocity variation in spacetime (v = HR) implies an acceleration equal to: a_H = dv/dt = c/t = c/(1/H) = cH = RH², while m_space = m(A_R/A_r) = m(R/r)², and the mass of the universe is its density, ρ, multiplied by its spherical volume, (4/3)πR³.

(2) F = F_space M / M_universe = (m_space a_H) M / M_universe = m(R/r)²(RH²)M / [ρ4πR³/3]

STEP 2: Air is flowing around you like a wave as you walk down a corridor (an equal volume goes in the other direction at the same speed, filling in the volume you are vacating as you move). It is not possible for the surrounding fluid to move in the same direction, or a void would form BEHIND and fluid pressure would continuously increase in FRONT until motion stopped. Therefore, an equal volume of the surrounding fluid moves in the opposite direction at the same speed, permitting uniform motion to occur!
Similarly, as fundamental particles move in space, a similar amount of mass-energy in the fabric of space (spin foam vacuum field) is displaced as a wave around the particles in the opposite direction, filling in the void volume being continuously vacated behind them. For the mass of the big bang, the mass-energy of Higgs/virtual particle field particles in the moving fabric of space is similar to the mass of the universe. As the big bang mass goes outward, the fabric of space goes inward around each fundamental particle, filling in the vacated volume. (This inward moving fabric of space exerts pressure, causing the force of gravity.)

‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp. 32-3.

The effective mass of the spacetime fabric moving inward which actually produces the gravity effect is equal to that which is exactly shielded by the mass (illustrated here). So m_space = m, but we also have to allow for the greater distance of the mass which is producing the gravity force by implosion. To take account of focussing due to the ‘implosion’ of space fabric pressure (see diagram) converging in to us in step 1 above, we scale: m_space / m = A_R / A_r. Hence: m_space = mA_R / A_r = m(R/r)². This is because nearby areas on which force acts to produce pressure are much smaller than the area of sky at the very great distances where the recession and density are high and produce the source of space pressure and thus gravity.
The big bang recession velocities vary from 0 to c with distance, corresponding to observable times after the big bang ranging from 15,000 million years down towards zero, so the matter of the universe has an effective outward acceleration of c divided by the age of the universe. This acceleration, a = c/t = cH = RH², where H is the Hubble constant (in v = HR), is so small that its effects are generally undetectable. (Notice that if we could see and experience forces instantly, the universe would not show this acceleration. This acceleration is only real because we can’t see the universe at an age of 15 Gyr irrespective of distance. By Newton’s 2nd law, the actual outward force, when properly allowing for the varying effective density of the observed universe as a function of spacetime, is large, and by Newton’s 3rd law it has an equal and opposite reaction, an inward force which, where shielded, is gravity.)

(3) F = m(R/r)²(RH²)M / [ρ4πR³/3] = (3/4)mMH²/(ρπr²)

Next, for mass continuity, dρ/dt = -∇·(ρv) = -3ρH. Hence, ρ = ρ_local e³ (the early visible universe has higher density; see below for a mathematical derivation). The reason for multiplying the local measured density of the universe up by a factor of about 20 (the number e³, the cube of the base of natural logarithms) is because it is the denser, more distant universe which contains most of the mass that is producing most of the inward pressure. Because we see further back in time with increasing distance, we see a more compressed age of the universe. The gravitational push comes to us at light speed, with the same velocity as the visible light that shows the stars. Therefore we have to take account of the higher density at earlier times. What counts is what we see, the spacetime in which distance is directly linked to time past, not the simplistic picture of a universe at constant density, because we can never see or experience gravity from such a thing due to the finite speed of light.
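For scale, using the H = 50 km/s/Mpc value adopted later in this document, the outward acceleration a = cH is of order 10⁻¹⁰ m/s², which is indeed far too small to notice directly; a quick check:

```python
# Size of the Hubble acceleration a = c*H, for the H = 50 km/s/Mpc
# value used later in this document.
c = 2.998e8               # speed of light, m/s
Mpc = 3.086e22            # metres per megaparsec
H = 50e3 / Mpc            # Hubble constant, in 1/s
a = c * H                 # effective outward acceleration, m/s^2
# a is roughly 4.9e-10 m/s^2, about twenty billion times smaller
# than the 9.8 m/s^2 of Earth's surface gravity.
```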
The mass continuity equation dρ/dt = -∇·(ρv) is simple hydrodynamics based on Green’s theorem and allows the Hubble law (v = HR) to be inserted and solved. An earlier method of calculation, in the notes of CERN preprint EXT-2004-007, is to set up a formula for the density at any particular time past, so as to calculate red-shifted contributions to inward spacetime fabric pressure from a series of shells surrounding the observer. This gives the same result as ρ = ρ_local e³.

(4) F = (3/4)mMH²/(πr²ρ_local e³) = mMG/r², where G = (3/4)H²/(πρ_local e³) = 0.0119H²/ρ_local = 6.7 × 10⁻¹¹ N m² kg⁻², accurate to within 1.65% using reliable supernovae data reported in Physical Review Letters! If there were any other reason for gravity with similar accuracy, the strength of gravity would then be twice what we measure, so this is a firm testable prediction/confirmation that can be checked more delicately.

SYMBOLS
F = force = ma = PA
M = mass of Earth
m = mass of person or apple
P = force/area = F/A = ‘pressure’
A = surface area of a sphere, 4π times (radius squared)
r = distance from person to centre of mass of shield (Earth)
R = radius to big bang gravity source
H = Hubble constant = apparent speed v of galaxy clusters radially from us divided by their distance R when the light was emitted = v/R; hence v = HR = dR/dt, so dt = dR/(RH): a_H = dv/dt = [d(RH)]/[dR/(RH)] = RH·d(RH)/dR = RH² = cH, a constant (Hubble saw light coming from fixed times past, not from stars at fixed distances)
ρ = density of universe (higher at great distances in spacetime, when the age was less and it was more compressed): dρ/dt = -∇·(ρv) = -3ρH, so ρ = ρ_local e³ (see below)
G = universal gravitational constant (previously impossible to predict from general relativity or string theory)
π = circumference divided by the diameter of a circle, approx. 3.14159265…
e = base of natural logarithms, approx.
2.718281828…

Mass continuity equation (for the galaxies in the space-time of the receding universe): dρ/dt + ∇·(ρv) = 0. Hence: dρ/dt = -∇·(ρv). Now around us, dx = dy = dz = dr, where r is radius. Hence the divergence (div) term is: -∇·(ρv) = -3d(ρv)/dx. For spherical symmetry, the Hubble equation is v = Hr. Hence dρ/dt = -∇·(ρv) = -∇·(ρHr) = -3d(ρHr)/dr = -3ρH dr/dr = -3ρH. So dρ/dt = -3ρH. Rearranging: -3H dt = (1/ρ) dρ. Solving by integrating this gives: -3Ht = (ln ρ₁) - (ln ρ). Using the base of natural logarithms (e) to get rid of the ln’s: e^(-3Ht) = ρ₁/ρ = density ratio. Because H = v/r = c/(radius of universe) = 1/(age of universe, t) = 1/t, we have: e^(-3Ht) = (ratio of the current density to the earlier, higher effective density) = e^(-3(1/t)t) = e⁻³ ≈ 1/20.

All we are doing here is focussing on spacetime, in which density rises back in time, but the outward motion or divergence of matter due to the Hubble expansion offsets this at great distances. So the effective density doesn’t become infinity, only e³, or about 20 times, the local density of the universe at the present time. The inward pressure of gauge bosons from greater distances initially rises because the density of the universe increases at earlier times, but then falls because of divergence, which causes energy reduction (like red-shift) of the inward-coming gauge bosons.

The physical content of GR is the OPPOSITE of SR: ‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.

Notice that in SR, there is no mechanism for mass, but the Standard Model says the mass has a physical mechanism: the surrounding Higgs field.
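The e³ ≈ 20 factor derived above can be verified numerically by integrating dρ/dt = -3ρH over one Hubble time t = 1/H, with H held constant as in the derivation; a minimal Euler-integration sketch:

```python
import math

H = 1.0            # units chosen so that one Hubble time is t = 1/H = 1
rho = 1.0          # arbitrary starting (earlier, denser) value
steps = 100_000
dt = (1.0 / H) / steps
for _ in range(steps):
    rho += -3.0 * rho * H * dt   # Euler step of d(rho)/dt = -3*rho*H

ratio = 1.0 / rho  # earlier density / later density
# ratio comes out close to e^3 = 20.0855..., the factor quoted above.
```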
When you move a fundamental particle in the Higgs field, and approach light speed, the Higgs field has less and less time to flow out of the way, so it mires the particle more, increasing its mass. You can't move a particle at light speed, because the Higgs field would have ZERO time to flow out of the way (since Higgs bosons are limited to light speed themselves), so inertial mass would be infinite. The increase in mass due to a surrounding fluid is known in hydrodynamics: ‘In this chapter it is proposed to study the very interesting dynamical problem furnished by the motion of one or more solids in a frictionless liquid. The development of this subject is due mainly to Thomson and Tait [Natural Philosophy, Art. 320] and to Kirchhoff [‘Ueber die Bewegung eines Rotationskörpers in einer Flüssigkeit’, Crelle, lxxi. 237 (1869); Mechanik, c. xix]. … it appeared that the whole effect of the fluid might be represented by an addition to the inertia of the solid. The same result will be found to hold in general, provided we use the term ‘inertia’ in a somewhat extended sense.’ – Sir Horace Lamb, Hydrodynamics, Cambridge University Press, 6th ed., 1932, p. 160. (Hence, the gauge boson radiation of the gravitational field causes inertia. This is also explored in the works of Drs Rueda and Haisch: see http://arxiv.org/abs/physics/9802031 http://arxiv.org/abs/gr-qc/0209016 , http://www.calphysics.org/articles/newscientist.html and http://www.eurekalert.org/pub_releases/2005-08/ns-ijv081005.php .) So the Feynman problem with virtual particles in the spacetime fabric retarding motion does indeed cause the FitzGerald-Lorentz contraction, just as they cause the radial gravitationally produced contraction of distances around any mass (equivalent to the effect of the pressure of space squeezing things and impeding accelerations). What Feynman thought may cause difficulties is really the mechanism of inertia! 
In his essay on general relativity in the book ‘It Must Be Beautiful’, Penrose writes: ‘… when there is matter present in the vicinity of the deviating geodesics, the volume reduction is proportional to the total mass that is surrounded by the geodesics. This volume reduction is an average of the geodesic deviation in all directions … Thus, we need an appropriate entity that measures such curvature averages. Indeed, there is such an entity, referred to as the Ricci tensor …’ Feynman discussed this simply as a reduction in radial distance around a mass of (1/3)MG/c² = 1.5 mm for Earth. It’s such a shame that the physical basics of general relativity are not taught, and the whole thing gets abstruse. The curved space or 4-d spacetime description is needed to avoid Pi varying due to gravitational contraction of radial distances but not circumferences.

The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, by Newton’s law will be v = (2GM/x)^(1/2), so v² = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed that a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v. By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v² = 2GM/x) into the Fitzgerald-Lorentz contraction, giving γ = (1 - v²/c²)^(1/2) = [1 - 2GM/(xc²)]^(1/2).
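Putting numbers in (the constants here are assumed standard values, not given in the text: G ≈ 6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻², Earth mass M ≈ 5.972 × 10²⁴ kg, surface radius x ≈ 6.371 × 10⁶ m), the escape velocity, the gravitational factor at Earth’s surface, and the 1.5 mm contraction quoted above can all be checked:

```python
import math

# Assumed standard constants (not supplied in the text above):
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24        # Earth mass, kg
x = 6.371e6         # Earth surface radius, m
c = 2.998e8         # speed of light, m/s

v_esc = math.sqrt(2 * G * M / x)                 # escape velocity, ~11.2 km/s
gamma = math.sqrt(1 - 2 * G * M / (x * c**2))    # [1 - 2GM/(xc^2)]^(1/2)
contraction = G * M / (3 * c**2)                 # (1/3)GM/c^2, in metres
# contraction is about 1.5e-3 m, i.e. the 1.5 mm Earth figure quoted above;
# gamma differs from 1 by only ~7e-10 at the Earth's surface.
```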
However, there is an important difference between this gravitational transformation and the usual Fitzgerald-Lorentz transformation, since length is only contracted in one dimension with velocity, whereas length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!) with spherically symmetric gravity. Using the binomial expansion to the first two terms of each:

Fitzgerald-Lorentz contraction effect: γ = x/x₀ = t/t₀ = m₀/m = (1 - v²/c²)^(1/2) = 1 - ½v²/c² + ...

Gravitational contraction effect: γ = x/x₀ = t/t₀ = m₀/m = [1 - 2GM/(xc²)]^(1/2) = 1 - GM/(xc²) + ...,

where for spherical symmetry (x = y = z = r), the contraction is spread over three perpendicular dimensions, not just one as is the case for the FitzGerald-Lorentz contraction: x/x₀ + y/y₀ + z/z₀ = 3r/r₀. Hence the radial contraction of space around a mass is r/r₀ = 1 - GM/(3rc²).

Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3)GM/c². This is the 1.5-mm contraction of earth’s radius Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the Lorentz-FitzGerald contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without viscosity. Feynman was unable to proceed with LeSage gravity and gave up on it in 1965. However, we have a solution…

‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities... According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Leyden University lecture on ‘Ether and Relativity’, 1920.
(Einstein, A., Sidelights on Relativity, Dover, New York, 1952, pp. 15, 16, and 23.)

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), MA, MSc, FRS, Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties... It has specific inductive capacity and magnetic permeability.’ - Professor H.A. Wilson, FRS, Modern Physics, Blackie & Son Ltd, London, 4th ed., 1959, p. 361.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.
If the electron moves at speed v as a whole in a direction orthogonal (perpendicular) to the plane of the spin, then the c speed of spin will be reduced according to Pythagoras: v² + x² = c², where x is the new spin speed. For v = 0 this gives x = c. What is interesting is that this model gives rise to the Lorentz-FitzGerald transformation naturally, because: x = c(1 - v²/c²)^(1/2). Since all time is defined by motion, this (1 - v²/c²)^(1/2) factor of reduction of fundamental particle spin speed is therefore the time-dilation factor for the electron when moving at speed v.

Motl’s quibbles about the metric of SR are just ignorance. The contraction is a physical effect as shown above, with length contraction in the direction of motion, mass increase and time dilation having physical causes. The equivalence principle and the contraction physics of spacetime "curvature" are the advances of GR. GR is a replacement of the false SR, which gives wrong answers for all real (curved) motions since it can’t deal with acceleration: the TWINS PARADOX.

Strangely, the ‘critics’ are ignoring the consensus on where LQG is a useful approach, and are just trying to ridicule it. In a recent post on his blog, for example, Motl states that special relativity should come from LQG. Surely Motl knows that GR deals better with the situation than SR, which is a restricted theory that is not even able to deal with the spacetime fabric (SR implicitly assumes NO spacetime fabric curvature, to avoid acceleration!). When asked, Motl responds by saying Dirac’s equation in QFT is a unification of SR and QM. What Motl doesn’t grasp is that the ‘SR’ EQUATIONS are the same in GR as in SR, but the background is totally different: ‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion.
Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

Phil Reed
Letters to the Editor
Electronics World
30 June 2004

Dear Sir:

The ‘Electronic Universe’ article, Apr. 2003 EW, proves G = 3H²/(4πρ). Here H is the Hubble constant and ρ is the density of the universe responsible for causing gravity by reaction of Catt’s 377-ohm space to the big bang. Considering the density, it is highest at early times, and thus density increases along the observable space-time trajectory as we look further into the past with increasing distance. But the increasing spread of matter with increasing distance partly offsets this increase, as proven when we put the observed Hubble equation (v = Hr) into the mass continuity equation and solve it. For spherical symmetry, dx = dy = dz = dr. Mass continuity implies: dρ/dt = -∇·(ρv) = -∇·(ρHr) = -3d(ρHr)/dr = -3ρH. Solving dρ/dt = -3ρH by rearranging, integrating, then using exponentials to get rid of the natural logarithms (resulting from the integration) gives the increased density as ρe^(3Ht), where e is the base of natural logarithms (2.718...). In the absence of gravitational retardation (i.e. with the cause of gravity as the inward reaction of space to the outward big bang), H = 1/t, since H = v/r = c/(radius of universe) = 1/t, where t is the age of the universe; so e^(3Ht) = e³ and the observed G = 3H²/(4πe³ρ). Nugent, Physical Review Letters (v75 p394), cites the decay of nickel-63 from supernovae, obtaining H = 50 km/s/Mpc (where 1 Mpc = 3.086 × 10²² m). The density of visible matter at our local time has long been known to be 4 × 10⁻²⁸ kg/m³.
However, White and Fabian, in the March 1995 Monthly Notices of the Royal Astronomical Society, using the Einstein Observatory satellite data, estimate that invisible gas increases this density by 15%. Using these data, G = 3H²/(4πe³ρ) = 6.783 × 10⁻¹¹ N m² kg⁻², which is 1.65% higher than the physical measurement for G of 6.673 × 10⁻¹¹ N m² kg⁻². So current data predicts an acceleration of just under 10 m/s² at the Earth’s surface, compared to the observed value of about 9.8 m/s². This proves Catt’s insistence on the reality of the 377-ohm fabric of space beyond any reasonable doubt.

Yours sincerely, Nigel Cook

PROOF CHECK (LONG VERSION)
Nigel Cook

Standard equation for mass continuity (conservation of mass in an expanding gas, etc.): dρ/dt + ∇·(ρv) = 0, or dρ/dt = -∇·(ρv), where the divergence term is -∇·(ρv) = -[{d(ρv_x)/dx} + {d(ρv_y)/dy} + {d(ρv_z)/dz}].

For spherical symmetry, dx = dy = dz = dr, where r is radius. (Note: this has nothing to do with the sum of the squares of the differential elements of distance, so the abusive anonymous ‘moderator’ on ‘Physics Forums’ who claimed so, made vacuous personal sneers, and then banned all response, merely proved the ignorance of charlatans.)

Hubble equation: v = Hr. Hence dρ/dt = -∇·(ρv) = -∇·(ρHr) = -[{d(ρHr)/dr} + {d(ρHr)/dr} + {d(ρHr)/dr}] = -3d(ρHr)/dr = -3ρH dr/dr = -3ρH.

So dρ/dt = -3ρH. Rearranging: -3H ∫ dt = ∫ (1/ρ) dρ. Integrating both sides: -3Ht = (ln ρ₁) - (ln ρ). Using the base of natural logarithms e to get rid of the ln’s: e^(-3Ht) = ρ₁/ρ. Because H = v/r = c/(radius of universe) = 1/(age of universe, t) = 1/t: e^(-3Ht) = ρ₁/ρ = e^(-3(1/t)t) = e⁻³. Therefore ρ = ρ₁e³ = 20.085537 ρ₁.

So, using the result in the April 2003 EW, G = 3H²/(4πρ) = 3H²/(4πe³ρ₁), where ρ₁ is the local-time observed density of the universe. This is correct to within 1.7%. It beats ‘quantum gravity’ speculations, which give no prediction of G whatsoever, whereas general relativity actually uses the measured value of G as a constant to make calculations, rather than predicting it.
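The letter’s arithmetic can be reproduced in a few lines (values as quoted: H = 50 km/s/Mpc, visible density 4 × 10⁻²⁸ kg/m³, raised 15% for invisible gas):

```python
import math

Mpc = 3.086e22                 # metres per megaparsec
H = 50e3 / Mpc                 # 50 km/s/Mpc, in 1/s
rho1 = 4e-28 * 1.15            # visible matter plus 15% invisible gas, kg/m^3

G_pred = 3 * H**2 / (4 * math.pi * math.exp(3) * rho1)
G_meas = 6.673e-11
# G_pred comes out near 6.78e-11, roughly 1.65% above the measured value,
# as stated in the letter above.
```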
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/441/1/v/a/
# Properties Label 441.1.v.a Level $441$ Weight $1$ Character orbit 441.v Analytic conductor $0.220$ Analytic rank $0$ Dimension $6$ Projective image $D_{14}$ CM discriminant -3 Inner twists $4$ # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$441 = 3^{2} \cdot 7^{2}$$ Weight: $$k$$ $$=$$ $$1$$ Character orbit: $$[\chi]$$ $$=$$ 441.v (of order $$14$$, degree $$6$$, minimal) ## Newform invariants Self dual: no Analytic conductor: $$0.220087670571$$ Analytic rank: $$0$$ Dimension: $$6$$ Coefficient field: $$\Q(\zeta_{14})$$ Defining polynomial: $$x^{6} - x^{5} + x^{4} - x^{3} + x^{2} - x + 1$$ Coefficient ring: $$\Z[a_1, \ldots, a_{4}]$$ Coefficient ring index: $$1$$ Twist minimal: yes Projective image: $$D_{14}$$ Projective field: Galois closure of $$\mathbb{Q}[x]/(x^{14} - \cdots)$$ ## $q$-expansion The $$q$$-expansion and trace form are shown below. $$f(q)$$ $$=$$ $$q -\zeta_{14}^{4} q^{4} -\zeta_{14}^{2} q^{7} +O(q^{10})$$ $$q -\zeta_{14}^{4} q^{4} -\zeta_{14}^{2} q^{7} + ( \zeta_{14} - \zeta_{14}^{3} ) q^{13} -\zeta_{14} q^{16} + ( \zeta_{14}^{2} + \zeta_{14}^{5} ) q^{19} -\zeta_{14}^{5} q^{25} + \zeta_{14}^{6} q^{28} + ( \zeta_{14}^{3} + \zeta_{14}^{4} ) q^{31} + ( -1 - \zeta_{14}^{6} ) q^{37} + ( \zeta_{14}^{3} - \zeta_{14}^{6} ) q^{43} + \zeta_{14}^{4} q^{49} + ( -1 - \zeta_{14}^{5} ) q^{52} + ( -1 + \zeta_{14}^{6} ) q^{61} + \zeta_{14}^{5} q^{64} + ( -\zeta_{14} + \zeta_{14}^{6} ) q^{67} + ( \zeta_{14} + \zeta_{14}^{2} ) q^{73} + ( \zeta_{14}^{2} - \zeta_{14}^{6} ) q^{76} + ( -\zeta_{14}^{3} + \zeta_{14}^{4} ) q^{79} + ( -\zeta_{14}^{3} + \zeta_{14}^{5} ) q^{91} + ( -\zeta_{14}^{2} - \zeta_{14}^{5} ) q^{97} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$6q + q^{4} + q^{7} + O(q^{10})$$ $$6q + q^{4} + q^{7} - q^{16} - q^{25} - q^{28} - 5q^{37} + 2q^{43} - q^{49} - 7q^{52} - 7q^{61} + q^{64} - 2q^{67} - 2q^{79} + O(q^{100})$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/441\mathbb{Z}\right)^\times$$. 
$$n$$ $$199$$ $$344$$ $$\chi(n)$$ $$\zeta_{14}^{5}$$ $$1$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 55.1 0.900969 − 0.433884i 0.222521 − 0.974928i −0.623490 − 0.781831i −0.623490 + 0.781831i 0.222521 + 0.974928i 0.900969 + 0.433884i 0 0 0.222521 + 0.974928i 0 0 −0.623490 + 0.781831i 0 0 0 118.1 0 0 −0.623490 0.781831i 0 0 0.900969 + 0.433884i 0 0 0 181.1 0 0 0.900969 + 0.433884i 0 0 0.222521 0.974928i 0 0 0 307.1 0 0 0.900969 0.433884i 0 0 0.222521 + 0.974928i 0 0 0 370.1 0 0 −0.623490 + 0.781831i 0 0 0.900969 0.433884i 0 0 0 433.1 0 0 0.222521 0.974928i 0 0 −0.623490 0.781831i 0 0 0 $$n$$: e.g. 2-40 or 990-1000 Embeddings: e.g. 1-3 or 433.1 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles ## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 3.b odd 2 1 CM by $$\Q(\sqrt{-3})$$ 49.f odd 14 1 inner 147.k even 14 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 441.1.v.a 6 3.b odd 2 1 CM 441.1.v.a 6 7.b odd 2 1 3087.1.v.a 6 7.c even 3 2 3087.1.bj.b 12 7.d odd 6 2 3087.1.bj.a 12 9.c even 3 2 3969.1.bz.a 12 9.d odd 6 2 3969.1.bz.a 12 21.c even 2 1 3087.1.v.a 6 21.g even 6 2 3087.1.bj.a 12 21.h odd 6 2 3087.1.bj.b 12 49.e even 7 1 3087.1.v.a 6 49.f odd 14 1 inner 441.1.v.a 6 49.g even 21 2 3087.1.bj.a 12 49.h odd 42 2 3087.1.bj.b 12 147.k even 14 1 inner 441.1.v.a 6 147.l odd 14 1 3087.1.v.a 6 147.n odd 42 2 3087.1.bj.a 12 147.o even 42 2 3087.1.bj.b 12 441.bh even 42 2 3969.1.bz.a 12 441.bk odd 42 2 3969.1.bz.a 12 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 441.1.v.a 6 1.a even 1 1 trivial 441.1.v.a 6 3.b odd 2 1 CM 441.1.v.a 6 49.f odd 14 1 inner 441.1.v.a 6 147.k even 
14 1 inner 3087.1.v.a 6 7.b odd 2 1 3087.1.v.a 6 21.c even 2 1 3087.1.v.a 6 49.e even 7 1 3087.1.v.a 6 147.l odd 14 1 3087.1.bj.a 12 7.d odd 6 2 3087.1.bj.a 12 21.g even 6 2 3087.1.bj.a 12 49.g even 21 2 3087.1.bj.a 12 147.n odd 42 2 3087.1.bj.b 12 7.c even 3 2 3087.1.bj.b 12 21.h odd 6 2 3087.1.bj.b 12 49.h odd 42 2 3087.1.bj.b 12 147.o even 42 2 3969.1.bz.a 12 9.c even 3 2 3969.1.bz.a 12 9.d odd 6 2 3969.1.bz.a 12 441.bh even 42 2 3969.1.bz.a 12 441.bk odd 42 2 ## Hecke kernels This newform subspace is the entire newspace $$S_{1}^{\mathrm{new}}(441, [\chi])$$. ## Hecke characteristic polynomials $p$ $F_p(T)$ $2$ $$T^{6}$$ $3$ $$T^{6}$$ $5$ $$T^{6}$$ $7$ $$1 - T + T^{2} - T^{3} + T^{4} - T^{5} + T^{6}$$ $11$ $$T^{6}$$ $13$ $$7 - 7 T + 7 T^{3} + T^{6}$$ $17$ $$T^{6}$$ $19$ $$7 + 14 T^{2} + 7 T^{4} + T^{6}$$ $23$ $$T^{6}$$ $29$ $$T^{6}$$ $31$ $$7 + 14 T^{2} + 7 T^{4} + T^{6}$$ $37$ $$1 + 3 T + 9 T^{2} + 13 T^{3} + 11 T^{4} + 5 T^{5} + T^{6}$$ $41$ $$T^{6}$$ $43$ $$1 - 4 T + 9 T^{2} - 8 T^{3} + 4 T^{4} - 2 T^{5} + T^{6}$$ $47$ $$T^{6}$$ $53$ $$T^{6}$$ $59$ $$T^{6}$$ $61$ $$7 + 21 T + 35 T^{2} + 35 T^{3} + 21 T^{4} + 7 T^{5} + T^{6}$$ $67$ $$( -1 - 2 T + T^{2} + T^{3} )^{2}$$ $71$ $$T^{6}$$ $73$ $$7 + 14 T + 7 T^{2} + T^{6}$$ $79$ $$( -1 - 2 T + T^{2} + T^{3} )^{2}$$ $83$ $$T^{6}$$ $89$ $$T^{6}$$ $97$ $$7 + 14 T^{2} + 7 T^{4} + T^{6}$$
http://mathhelpforum.com/algebra/153912-cubic-equations-real-roots-etc.html
# Math Help - Cubic equations / real roots etc.

1. ## Cubic equations / real roots etc.

Show that $x=-5$ is the only real root of the cubic equation: $x^3+3x^2-2x+40=0$

Can I do it this way?

(1) Substitute x = -5 and show that the cubic equals zero.
(2) Divide the factor (x+5) into the given cubic. This will give me a quadratic.
(3) Then show that $b^2-4ac<0$, which means there are no other real roots.

Would this be a correct approach?

2. That is precisely how I would do it.

3. Yup it is.. but I have a cooler method.

In any general equation (here, ax^3 + bx^2 + cx + d = 0): the sum of the roots taken 1 at a time is -b/a, the sum of products of roots taken 2 at a time is c/a, the sum of products of roots taken 3 at a time is -d/a, and so on.

Let m, n, o be the roots. We're given m = -5, so m + n + o = -3 => n + o = -3 + 5 = 2 (1), and mno = -40 => no = -40/-5 = 8 (2).

Squaring (1): (n + o)^2 = n^2 + o^2 + 2no = 2^2 = 4. Putting no = 8: n^2 + o^2 = 4 - 16 = -12.

A sum of two real squares is necessarily non-negative; thus n, o are both imaginary/complex.

Cooler, right? You can use this for equations with degree higher than 3 also.. u see??
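Both approaches can be checked with a few lines of Python (plain integer arithmetic, no libraries):

```python
def p(x):
    return x**3 + 3*x**2 - 2*x + 40

# Step (1): x = -5 is a root.
assert p(-5) == 0

# Steps (2)-(3): dividing out (x + 5) leaves x^2 - 2x + 8; re-expand to
# verify the factorisation, then check the discriminant is negative.
for x in range(-10, 11):
    assert (x + 5) * (x**2 - 2*x + 8) == p(x)
disc = (-2)**2 - 4 * 1 * 8      # b^2 - 4ac = -28 < 0, so no more real roots

# The second poster's Vieta shortcut: n + o = 2 and n*o = 8 give
# n^2 + o^2 = (n + o)^2 - 2*n*o = 4 - 16 = -12, impossible for real n, o.
sum_sq = 2**2 - 2 * 8
```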
http://mathhelpforum.com/advanced-algebra/162272-gradient-vectors-tensors.html
Hi. My class notes define (grad u) like this (all the x are tensor products, and the d's are partial derivatives):

grad u = del x u = (du/dxq) x eq = (d/dxq)(up ep) x eq = (dup/dxq) ep x eq

where u is a vector function of x, and {ei} are basis vectors.

I'm confused about how the eq is put on the right-hand side of the tensor product; it seems to me it should be on the left, since it's associated with del, and not u. That is, I think it should be:

del x u = eq (d/dxq) x up ep = (dup/dxq) eq x ep

i.e. with ep and eq the other way around from my notes. This is also how my textbook seems to define it.

So my question is: does the order of ep and eq make a difference? It seems to me it would result in a different matrix (the transpose?), but I'm not sure. If they're different, which definition is correct?

Thanks,
~squiggles
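One way to see the difference concretely is to compute the component matrix numerically for an example field. The sketch below uses a made-up u and evaluation point, purely for illustration: J[p][q] = du_p/dx_q is the arrangement that goes with e_p (x) e_q, and swapping the basis vectors swaps the indices, i.e. transposes the matrix.

```python
# Compare the two index orderings of grad u for an example vector field.
import numpy as np

def u(x):
    # an arbitrary example vector field u: R^3 -> R^3 (illustrative only)
    return np.array([x[0] * x[1], x[1] ** 2, x[0] + x[2]])

def jacobian(f, x, h=1e-6):
    # numerical partial derivatives arranged as J[p, q] = df_p / dx_q
    x = np.asarray(x, dtype=float)
    cols = []
    for q in range(len(x)):
        dx = np.zeros_like(x)
        dx[q] = h
        cols.append((f(x + dx) - f(x - dx)) / (2 * h))
    return np.stack(cols, axis=1)

x0 = np.array([1.0, 2.0, 3.0])
J = jacobian(u, x0)       # components arranged as (du_p/dx_q) on e_p (x) e_q
K = J.T                   # the same partials arranged on e_q (x) e_p
print(J)
print(np.allclose(J, K))  # False here: the ordering matters in general
```

So yes, swapping ep and eq transposes the component matrix, and the two definitions coincide only when the Jacobian happens to be symmetric. As far as I know both orderings appear in the continuum mechanics literature, so it is largely a convention you have to track per book.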
https://codereview.stackexchange.com/questions/239169/section-2-classes-design-a-stackoverflow-post/239170
# Section 2: Classes - Design a StackOverflow Post

Design a class called Post. This class models a StackOverflow post. It should have properties for the title, description and the date/time it was created. We should be able to up-vote or down-vote a post. We should also be able to see the current vote value. In the main method, create a post, up-vote and down-vote it a few times and then display the current vote value.

In this exercise, you will learn that a StackOverflow post should provide methods for up-voting and down-voting. You should not give the ability to set the Vote property from the outside, because otherwise you may accidentally change the votes of a class to 0 or to a random number. And this is how we create bugs in our programs. The class should always protect its state and hide its implementation detail.

Educational tip: The aim of this exercise is to help you understand that classes should encapsulate data AND behaviour around that data. Many developers (even those with years of experience) tend to create classes that are purely data containers, and other classes that are purely behaviour (method) providers. This is not object-oriented programming; this is procedural programming. Such programs are very fragile: making a change breaks many parts of the code.

That is the information given to me for this exercise in my C# tutorial. The following is my working code. I am still trying to grasp object-oriented programming. I believe I set up the properties so that you can get the information but cannot set the Vote property from the outside, by making its setter private. I also tried to protect the state of voting so that you cannot up-vote or down-vote twice in a row, yet can still change your vote and see the vote count, while protecting the state of that count. Anyhow, if anyone can see how to improve on this, point out a better way, or teach me something new, I would greatly appreciate it.
```csharp
using System;

namespace ExerciseTwo
{
    class Post
    {
        public string Title { get; set; }
        public string Description { get; set; }
        public DateTime TimeDateCreated { get; private set; }
        public int VoteCount { get; private set; }

        private bool HasVotedUp;
        private bool HasVotedDown;

        public Post(string title, string description)
        {
            Title = title;
            Description = description;
            TimeDateCreated = DateTime.UtcNow;
            VoteCount = 0;
        }

        public void VoteUp()
        {
            if (HasVotedUp)
            {
                throw new Exception("You have already up-voted.");
            }
            else
            {
                VoteCount++;
                HasVotedUp = true;
                HasVotedDown = false;
            }
        }

        public void VoteDown()
        {
            if (HasVotedDown)
            {
                throw new Exception("You have already down-voted.");
            }
            else
            {
                VoteCount--;
                HasVotedDown = true;
                HasVotedUp = false;
            }
        }
    }
}
```

Script to demo how it could work:

```csharp
namespace ExerciseTwo
{
    class Program
    {
        static void Main()
        {
            Post post = new Post("Does my post work?", "Test to see if my post works.");
            System.Console.WriteLine($"Title: {post.Title}");
            System.Console.WriteLine($"Description: {post.Description}");
            System.Console.WriteLine($"Date Created: {post.TimeDateCreated}");
            System.Console.WriteLine($"Post Count: {post.VoteCount}");
            post.VoteDown();
            System.Console.WriteLine($"Post Count: {post.VoteCount}");
            post.VoteUp();
            System.Console.WriteLine($"Post Count: {post.VoteCount}");
        }
    }
}
```

Thank you for your help!

I don't know which version of C# you use, but as of C# 6 it's possible to set initial values in the property definition, which has two advantages:

• You can eyeball the initial values quickly by looking at the property definitions.
• It doesn't require you to copy the same initial-value assignment into additional constructors you may create.

Example initial-value definition:

```csharp
public int VoteCount { get; private set; } = 0;
```

Also, in the case of an int with initial value 0, you don't have to set it explicitly, because when an instance of the class is initialized all the ints are initialized with the default value 0 unless you specify otherwise.
Just like the bools are false by default (you didn't set them in the constructor).

You could add another layer of protection to the creation date:

```csharp
public DateTime TimeDateCreated { get; } = DateTime.UtcNow;
```

Without defining a setter, TimeDateCreated is set when you create an instance of Post and can never be changed again for that instance. It makes sense here, because the only date you'd ever want to change on a post is the date it was edited.

Not a big deal for now, but it's better to develop the habit of giving good variable names early. This means being descriptive, concise, and consistent with the naming of things around the code base. Most of your names are good, but TimeDateCreated is a bit counter-intuitive, because once you get used to the type name DateTime you'll expect this property to be named DateTimeCreated.

It's good that you protected VoteCount, but your vote functions do something unwanted: if you first down-vote and then up-vote, you will be back to 0 votes but without the ability to up-vote. I suggest you rethink this part of the code.

• public int VoteCount { get; private set; } = 0; I saw this recently but was not sure if it was ideal for this case. Now I know that it would be. Thanks for showing me this. public DateTime TimeDateCreated { get; } = DateTime.UtcNow; Classes and constructors are still new to me; I wasn't sure if this had to be set in the constructor. Another good point I need to learn. DateTimeCreated was the original name, but I changed it to what you see now, thinking that might be confusing or too similar to the DateTime class. Guess I was wrong. Mar 20 '20 at 14:26

• "It's good that you protected VoteCount, but your vote function does something unwanted: if you first downvote and then upvote, you will be back to 0 votes but without the ability to upvote. I suggest you rethink this part of the code." I didn't account for that. I'll have to think of a way to refactor it. Good catch, and thank you for your help!
Mar 20 '20 at 14:31

• You're welcome :) Mar 20 '20 at 14:32

• if (VoteCount == 0) VoteCount++; I added this to VoteUp() and something similar to VoteDown(). Either this, or put one method inside of these and route it out there. I thought this was the easiest way to go about it, at least. Mar 20 '20 at 14:41

There is nothing to add beyond @potato's answer; however, I just want to reinforce it. The name TimeDateCreated can be changed to CreatedOn or CreatedDate or any related name for a creation date. The key note here is that you don't need to include the datatype name in a property's name: the property is public, and I can clearly see its datatype. So why would I need to include it in the name? Since I already know the datatype, what I need to know is what value the property stores. That is where a good naming convention comes in. Short or long names don't matter, as long as the name clearly describes the role of the property.

The other note is about VoteUp() and VoteDown(): there is no need for exceptions, just skip the vote if the user has already voted.

```csharp
// default : VoteCount == 0 (user has not up- or down-voted).
// When up-voting, VoteCount == 1
// when down-voting, VoteCount == VoteCount - 1
public void VoteUp()
{
    if (VoteCount == -1 || VoteCount == 0)
    {
        VoteCount += 1;
    }
}

public void VoteDown()
{
    if (VoteCount == 0 || VoteCount == 1)
    {
        VoteCount -= 1;
    }
}
```

Now you can get rid of HasVotedUp and HasVotedDown. You only need to throw an exception if an actual process is breaking; that is, exceptions are used to signal that one of your logic's core requirements has been violated. For instance, a Post requires a title, so every post must have a title. In this case we can do:

```csharp
public Post(string title, string description)
{
    if (string.IsNullOrEmpty(title))
    {
        throw new ArgumentNullException(nameof(title));
    }

    Title = title;
    Description = description;
    TimeDateCreated = DateTime.UtcNow;
    VoteCount = 0;
}
```

So you're enforcing the requirement here.
This class won't be instantiated unless there is a title with at least one character. Voting, on the other hand, is an optional requirement: a user can up-vote, down-vote, or do nothing, and can only up-vote or down-vote once. If you throw an exception in this part, you'll break the whole process (which might otherwise be storing valid data). So just skipping it with an if statement, without any exceptions, is our best approach to avoid breaking the application. Always use your reasoning and judgement on your code; try to link it to a real-world application or use case, as this will give you really good judgement on what to do next.

• How does your suggestion prevent the user from casting more than one up or down vote? Mar 21 '20 at 1:23

• @Milliorn I'm assigning 1 and 0, not incrementing, so the user can up-vote and down-vote whenever needed. If the user up- or down-votes more than once, it will just take the first one and skip the rest. – iSR5 Mar 21 '20 at 1:30

• @Milliorn Correction: I totally missed one scenario, where the user has down-voted (so VoteCount would store one value of -1, 0, or 1). I've updated the code as well. – iSR5 Mar 21 '20 at 1:40

• I went with your suggestion of using CreatedOn; it describes exactly what it is. It took me a few readings of what you said and what the lesson said to understand why you suggested that VoteUp() and VoteDown(). Although I have some disagreements with this approach, I went ahead and changed it to your suggestion, since it functions the same way with less code. The instructor of the lesson doesn't give any more detail on what to expect, and it still complies with the demands of the lesson. Mar 21 '20 at 20:11

• if (string.IsNullOrEmpty(title)) { throw new ArgumentNullException(nameof(title)); } I did not think about this case. Good catch and suggestion! Mar 21 '20 at 20:13

You have a couple of good answers. Since you are an experienced developer who is new to C#, I will address some other things.
Things you do quite well

• Braces and indentation
• Naming (most of the time)

For the last item, most of your naming is good. As @iSR5 mentions, TimeDateCreated could have a better name. I have been programming since the 1980s, and I went through those years of variable names including the data type and scope. With .NET, this is no longer needed; more than that, in .NET and C# usage it is frowned upon.

Naming Guidelines
C# Coding Conventions

Spots for improvement

I would like to see an access modifier on class Post: either public or internal. As @potato says, the creation date should be read-only. Likewise, you may want to add a ModifiedDate, which would be updated any time the title or description is altered. Thinking ahead, there would likely be a Content to the post, and changing it would also affect ModifiedDate.

Voting By User (Version 2?)

Beyond that, your code looks decent. My remaining issue is the class design. A Post should have a one-to-many relationship with users (voters). Just like here: you have created a post; I can vote on it, potato can vote on it, iSR5 votes, etc. I would think the vote tracking in your class should be redesigned to account for this.

• I went ahead with the suggestion CreatedOn. I think it describes the property for what it is (or is it a field?). I bookmarked those links; they will be useful later on. I changed the Post class to internal class Post. I am still learning all the access modifiers. I changed CreatedOn to public DateTime CreatedOn { get; } = DateTime.UtcNow;. However, I am unsure if that makes it read-only; if not, I believe it would be if I set DateTime.UtcNow in the constructor. Mar 21 '20 at 20:28

• As for ModifiedDate, I like that suggestion, and it would be necessary if this were to be built on. Since the instructor did not request it, I am leaving it out for now. The same goes for Voting By User (Version 2?). Both would be required to make this work if it were taken further.
However, I am leaving it out, since it was not requested, and the scope of this exercise, I believe, was just to teach how to keep the class in a valid state. Thank you for your continued help @Rick Davin. Mar 21 '20 at 20:32

• Yes @Milliorn, this is both read-only and auto-initialized: public DateTime CreatedOn { get; } = DateTime.UtcNow; Mar 21 '20 at 21:56

Thank you to everyone who helped me out on this. This is what it has been refactored into based on everyone's suggestions. It looks more concise and easier to read. I appreciate the advice so I can continue to learn C#.

```csharp
using System;

namespace ExerciseTwo
{
    internal class Post
    {
        public string Title { get; set; }
        public string Description { get; set; }
        public DateTime CreatedOn { get; } = DateTime.UtcNow;
        public int VoteCount { get; private set; } = 0;

        public Post(string title, string description)
        {
            if (string.IsNullOrEmpty(title))
            {
                throw new ArgumentNullException(nameof(title));
            }

            Title = title;
            Description = description;
        }

        public void VoteUp()
        {
            switch (VoteCount)
            {
                case -1:
                case 0:
                    VoteCount += 1;
                    break;
            }
        }

        public void VoteDown()
        {
            switch (VoteCount)
            {
                case 0:
                case 1:
                    VoteCount -= 1;
                    break;
            }
        }
    }
}
```
https://www.nature.com/articles/s41598-017-18130-2?error=cookies_not_supported&code=0f7a428f-1522-4c4c-8437-c13c98348282
## Introduction

Twenty-four-hour cycles of specific transcriptional-translational feedback loops (TTFLs) in the suprachiasmatic nucleus (SCN) of the hypothalamus drive the circadian rhythms of organisms1. However, how these TTFLs impact behavioral/cognitive functions and the homeostatic regulation of sleep remains an open fundamental question in circadian biology2. The circadian clock is a system characterized by an extraordinary phenotypic plasticity; for example, circadian rhythms adapt to seasonal changes in daily light exposure3,4 and/or to the availability of the food supply5,6. In nocturnal animals, such as mice, this phenomenon is displayed by the compression or decompression of the length of the active phase (α index) as the dark phase shortens or lengthens, respectively3. Several lines of evidence have shown that the circadian molecular clock regulates the onset and the offset of the active phase3 and that the Per2 clock gene plays a crucial role in phase delay7. Moreover, PER2 has a fundamental role in regulating food entrainment8, sleep homeostasis9, and short-interval timing behaviors10. Operating at a different timescale than the circadian clock, 'interval timing' refers to a clock system that manifests at seconds-to-minutes-long intervals and governs behavioral/cognitive processes such as short-lived behavioral responses, attention, decision-making, and memory11. In a sense, if the circadian clock operates as a conventional 24-hour watch, interval timing operates as a stopwatch. The interval timing system, which depends very much on the healthy functioning of the striatum12, is also characterized by a certain degree of phenotypic plasticity. For instance, short behavioral responses can be flexibly parameterized to match the temporal statistics of the environment, and in doing so rely on an endogenous timing uncertainty that, ultimately, can affect adaptive decision-making11,13.
The long-standing idea of a cross-talk between the circadian clock and sleep has received renewed interest with the availability of several circadian mouse models. Interestingly, both the circadian clock and sleep were shown to modulate timing in humans14 and mice15,16. We recently reported that a novel, mostly cell-nonautonomous17, circadian axis that accelerates the circadian clock in mice is also responsible for a parallel acceleration of interval timing behaviors and disruption of sleep homeostasis15. The hypothesis that the circadian clock and sleep are regulators of short-interval behaviors is still in its infancy18, and most of the main circadian mouse models with the potential to provide insights toward the understanding of the related mechanisms remain to be studied. In this work, we studied the after hours (Afh) mutant mice, which exhibit a lengthening of the circadian period. The Afh mutation, an A > T transversion in the Fbxl3 gene, results in a Cys358Ser substitution within the F-box protein mediating CRY target ubiquitination and degradation. In effect, this delays the CRY-mediated negative feedback and increases CRY stability in Afh mutants19. Consequently, the expression of other core clock elements (i.e., Per2) is also altered in homozygous mutants. The interaction between PER and CRY is fundamental in mediating the transcriptional repression of the Clock-Bmal1 positive loop20. The role of Fbxl3 as a fundamental regulator of the CRY protein has been confirmed by complementary work on a different mouse model, overtime21. The Afh mutation reduces circadian oscillations at the single-cell level outside the SCN, resulting in alterations of metabolism22. Moreover, Afh mutants present abnormal responses to light input, and this particular sensitivity to light influences the way in which their circadian clock system responds to light perturbations22 and can potentially impact the behavioral responses of the mouse line.
Mutants exhibited abnormal timing behaviors at multiple timescales: at the hourly scale while anticipating meals, and in behavioral performance that depended on temporal information processing at the seconds scale. Specifically, we found a lengthening of the circadian periodicity of work-for-food behaviors in Afh mutants that was accompanied by their inability to compress the length of activity in response to the shortening of the dark phase. Interval timing and decision-making were also altered during the regular light-dark schedule in Afh homozygous mutants. In addition, mutants exhibited abnormal sleep homeostasis. These behavioral characterizations were complemented by our findings of irregular Per2 expression in relevant brain areas outside the SCN, including the striatum.

## Results

#### The Afh mutants do not change daily activity in response to changing light-dark cycle lengths

We tested all mice while they performed work-for-food behavioral tasks in their home-cages23, 24 hr a day for several days without any interruptions (see Methods). The circadian profile of each mouse's activity was derived from the number of nose pokes emitted to retrieve food pellets. We observed that this behavior of Afh mutants mimicked the wheel-running activity profile previously reported as a consequence of the Afh mutation19. Afh homozygous mice showed a significant lengthening of their internal clock in the dark-dark (DD) condition compared to the wild-type mice (Fig. 1a,b). There was reduced strength and higher variability of the correlation coefficient (CC) between Afh nose-poke activity and periodicity under the light-dark (LD) condition (Fig. 1c). The same observations were previously reported for wheel-running behavior19. Moreover, in Afh mutants the length of the active phase α increased in DD, suggesting decompression of their activity in the absence of external light-dependent oscillators (Fig. 1d).
Thus, we tested whether matching the external daily LD oscillators with the internal clock of the mutants would restore the physiological circadian rhythm in these mice. To this end, we tested all mice in a 26.5 hr T-cycle. With this long T-cycle setup, wild-type mice lengthened their circadian period, but not enough to entrain to the T-cycle, as the latter was longer than their internal clock (Fig. 2a, upper panels; Fig. 2b). However, as we expected, Afh homozygous mice lengthened their period and exhibited entrainment to the longer T-cycle (Fig. 2a, lower panels; Fig. 2b). Although the entrainment and circadian period of Afh mutants fit the long T-cycle, some striking abnormalities emerged in their activity. Afh homozygous mice, which had an overall higher nose-poking activity compared to wild-type mice (Fig. 2c, in LD 12:12), showed no changes in nose pokes (Fig. 2c) or α (Fig. 2d) when shifting to the T-cycle. The absence of variation between the two different light conditions suggests a deficit in the regulatory mechanisms of the two oscillators that govern the compression and decompression of activity with changing environmental light conditions3,24.

#### Afh mutants fail to adapt their daily activity to a gradually changing dark phase

The circadian pacemaker controls the onset and the offset of the animals' activity, a phenomenon that is thought to be regulated by a dual oscillator structure that, in nocturnal animals, leads to onset of activity around the beginning of the dark phase and offset of activity around the beginning of the light phase3. We tested the adaptability of mice in terms of shifts in the onset/offset of their daily activity by subjecting a new cohort of mice to a progressive reduction of the dark phase over several days. In this variable light-dark condition, wild-type mice exhibited a gradual adaptation manifested by a continuing compression of their α activity.
These mice adjusted the onset and the offset of their activity by aligning them with the beginning and the end of the dark phase, respectively (Fig. 2e, upper panels). Nevertheless, when the dark phase eventually became too short and they entered the light-light (LL) condition, all wild-type mice showed the expected lengthening of their period. Interestingly, while Afh mutants were able to adjust the onset of their activity to the shift of the dark period, they failed to adjust the termination of their activity. Visual inspection of Fig. 2e (lower panels) reveals a clear overflowing of the α activity as the dark phase was shortened, although this is partially covered by a brief masking effect due to the onset of the light phase. The effect of light masking lasts approximately one hour and is followed by an increase in nose pokes in Afh mutants (Fig. 2f). Moreover, upon entering the LL condition, all mutants showed arrhythmic behavior, as demonstrated also by the low amplitude of their rhythm compared to wild-type mice (Fig. 2g,h).

### Hour timescale

#### Afh mutants failed to develop food anticipatory behavior

In order to address the ecological aspects of how mice adjust temporal processes at an hourly scale, we investigated the development of anticipatory behaviors in mice when food was restricted to specific periods of the day. This manipulation increases the ecological validity of the corresponding function since, for instance, in searching for food it is adaptive and critical to predict the hours-long periods during which prey/resources become available. We tested whether mutants could predict the time of feeding and show food anticipatory activity (FAA). Afh and wild-type mice were subjected to a fixed daily food restriction protocol in which food was available only between 10:00 and 16:00 hrs during the LD and DD conditions (see Methods).
In LD, Afh homozygous mice showed a significant peak of nose-poking activity between 07:00 and 08:00, after lights-on, while this peak was not present in wild-type mice (Fig. 3a,a1). Interestingly, the 07:00-08:00 hrs peak was not present in DD, and mutants failed to show food anticipatory behavior (Fig. 3b,b1). This suggests the occurrence of associative learning in Afh mutants. Indeed, Afh mutants successfully associated food with the onset of the light phase in the LD paradigm; however, compared to the wild-type mice, they failed to learn the time of food availability in DD, as they did not show FAA in the absence of light cues (this was not the case for the wild-type mice; Fig. 3b,b1). Since the condition in which FAA is investigated during the dark phase of an LD schedule is missing in our study, we cannot conclude whether the FAA deficit is independent of light.

### Seconds timescale

#### Time perception: Afh mutants exhibit delayed timing compared to wild-type mice

We studied time perception in Afh mutants and littermate control mice by testing them in the Fixed Interval and Peak Interval tasks in their work-for-food home-cage. The task involved collecting food pellets after a fixed delay (10 sec.) from the initiation of the trial (see Methods). In order to capture the interval timing behavior of mice without contaminating their timed anticipatory responding with reward delivery, reinforcement was omitted on average in one out of five trials (i.e., probe trials; Fig. 4a). The distance between the peak of the average response curve and the target interval (10 s; Fig. 4b) was used as an index of temporal accuracy. Our analysis revealed a significant delay in the peak times of Afh mutants compared to wild-type mice during the LD condition. This result held for each day of the 5-day-long testing (Fig. 4c).
This finding suggests a remarkable similarity between the lengthening of the circadian clock in mutants and their differential processing of seconds-long time intervals. In both cases, the observed findings can be accounted for by assuming a decelerated clock. Surprisingly, the peak times of the Afh mutants were similar to those of wild-types in the DD condition; this is shown by the similar peak times per day between the two groups (Fig. 4d).

#### Time perception: Afh mutants fail to independently time short stimuli

Several conditions, such as Parkinson's disease (PD), are characterized by disruption of timing in two forms: PD patients over-reproduce a single interval, whereas they over-reproduce the short interval and under-reproduce the long interval (i.e., the migration effect) when the two intervals are presented in the same session25. Given that we had observed over-reproduction of a single interval in Afh mutants, and in order to test whether a PD-like timing disruption is present in conditions analogous to the previous human work, we tested a different cohort of mice in another timing task. In this task, mice discriminated two durations differentially associated with two different lateral hoppers (10 s vs. 30 s; see Two Time Interval Task in Methods). Here, we observed a migration-like timing phenotype in Afh mutants in the LD condition when their performance was compared with that of the WT controls. Afh mutants over-estimated (under-reproduced) the timing of the longer stimulus (i.e., 30 sec.) in the LD condition (Fig. 4e), suggesting that mutants fail to independently process two intervals in these conditions, similar to PD-related timing disruption. Interestingly, the timing of the second duration was restored during the lengthening of the T-cycle, and the timing responses for the two independent targets were similar between the two groups (Fig. 4f).
### Decision-making

#### Afh mutant mice have a higher timing uncertainty compared to wild-type mice

Understanding the temporal regularities associated with different options is critical for making adaptive decisions and assessing the risks associated with the corresponding behavioral responses. In order to address this function, we subjected groups of Afh homozygous mice and their wild-type littermate controls to a temporal decision-making task13, in which mice were trained to decide whether a stimulus was a short (S) or a long (L) signal in order to correctly predict the location of food delivery in the home-cage16,23. In this new test the pellet was dispensed contingent upon a response at the correct location (i.e., hopper) and correct time (see Switch Task in Methods). A mouse typically learns to initially nose-poke at the hopper associated with the short delay and, if responding there is not reinforced after the short delay, to switch to the location associated with the long delay (Fig. 5a). In order to receive reward in the L trials, the switch from the S-location to the L-location needs to occur before the end of the signal, and adaptively deciding when to switch between hoppers requires risk assessment. In this task, the main focus is the latency (trial time) at which a mouse switches from the S-location to the L-location, which is conceptualized as the temporal decision output (an accuracy measure). The coefficient of variation (CV) of the switch latencies was used as a measure of timing uncertainty. The switch latencies were delayed (i.e., shifted rightward) in Afh mutants compared to wild-type controls; in particular, mutants showed a significant delay in their timed decisions as well as higher CVs during the light phase compared to the dark phase in the LD condition (Fig. 5b), suggesting a higher timing uncertainty in mutant mice compared to wild-type mice. Once again, the differences between the two groups disappeared in DD (Fig. 5c).
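As a concrete illustration of the timing-uncertainty index used here: the coefficient of variation is simply the standard deviation of the switch latencies divided by their mean. A minimal sketch follows, with invented latency values purely for illustration, not the actual data of this study:

```python
# Coefficient of variation (CV) of switch latencies as a timing-uncertainty
# index: CV = standard deviation / mean. Latency values below are made up.
import statistics

def coefficient_of_variation(latencies):
    # higher CV = more scattered (noisier) timed decisions
    return statistics.stdev(latencies) / statistics.mean(latencies)

precise_timer = [9.8, 10.1, 10.0, 9.9, 10.2]    # tightly clustered switches
uncertain_timer = [7.0, 12.5, 9.0, 14.0, 8.5]   # widely scattered switches

print(coefficient_of_variation(precise_timer))    # small CV
print(coefficient_of_variation(uncertain_timer))  # larger CV
```

Because the CV is scaled by the mean, it compares variability across animals whose average latencies differ, which is why it serves as a unit-free uncertainty measure.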
To ensure that our results were related to timing and not to differences in task performance, we checked that the learning rate and food intake were similar between genotypes (Supplementary Fig. S1).

#### Afh mutant mice do not adjust temporal decisions in response to task parameters

To elucidate the ability of mice to modulate their temporal decisions in response to changing environmental statistics, we tested mice in the switch task with two different probabilities of probe (unrewarded) trials for the different trial types (i.e., short vs. long; Fig. 6a). In order to avoid the biasing effect of data censoring in short-signal trials (due to the procedure), we analyzed the timed responses only in long-signal trials. The long-signal trials were characterized by an equal (Sp = Lp, Experiment 1, discussed in the previous paragraph), higher (Sp > Lp, Experiment 2) or lower (Sp < Lp, Experiment 3) probability of obtaining rewards at the long compared to the short location. In wild-type mice, we observed that a lower probability of obtaining a pellet at the short location (high probe trial probability, P(Sp|S) = 0.5 vs P(Lp|L) = 0.2) resulted in significantly earlier switch latencies, whereas a higher probability of receiving a pellet at the short location (low probe trial probability, P(Sp|S) = 0.2 vs P(Lp|L) = 0.5) compared to the long location was associated with significantly delayed switch latencies (Fig. 6b, left panel). This modulation was predicted by reward-maximizing strategies in these conditions13 and indicates that our wild-type mice modulated their temporal decisions according to the variation of external factors (i.e., the probability of receiving reward at different locations). In contrast with the wild-type mice, Afh mutants did not modulate their switch latencies in response to the different probabilities of receiving reward at different locations (Fig. 6b, right panel).
In order to further explore the adaptiveness of the different decision strategies in response to changing task conditions, we compared the performance of wild-type and mutant mice with the optimal decisions computed for each mouse. While the empirical switch latencies of control mice tracked the optimal latencies, the switch latencies of the Afh mutants did not (Fig. 6c). The empirical latencies correlated with the optimal performance curve in wild-type mice but not in Afh homozygous mutants (Fig. 6d). Afh mice had a significantly higher CV (Fig. 6e) and a reduced proportion of the maximum possible expected gain (MPEG) compared to wild-type mice (Fig. 6f). Moreover, consistent with some of our earlier results (Fig. 5c), no differences were observed in the proportion of MPEGs between the two groups in the DD (Fig. 6g, right panel) compared to the LD (Fig. 6g, left panel) condition.

### Sleep

#### Sleep homeostasis is altered in Afh mutants

In order to assess how the Afh mutation affected the physiological aspects of sleep, we recorded the electroencephalogram (EEG) and electromyogram (EMG) of Afh homozygous mice and their wild-type littermate controls for a 48-h baseline (BS) under standard LD 12:12 illumination. Next, we tested the homeostatic response to sleep loss by subjecting all mice to 6 hours of sleep deprivation (SD), starting at zeitgeber time (ZT) 0, and then allowing recovery for 42 consecutive hours. We scored all records for the three major states: wakefulness (W), rapid eye movement (REM) sleep and non-REM (NREM) sleep. We then analyzed the distribution of the three major states and the main EEG frequencies to derive the characteristics of their sleep. Wakefulness was confirmed in all sleep-deprived animals for the entire duration of SD. During baseline, there were only minor sporadic differences in total sleep time between Afh homozygous mutants and wild-types (Fig. 7a, Table 1).
A detailed analysis of sleep fragmentation revealed that Afh mutants presented a significantly higher number of NREM episodes than wild-type mice, mainly in the dark phase (Supplementary Fig. S2a, Table 1), while the number of REM episodes was similar between the two genotypes (Supplementary Fig. S2b, Table 1). Moreover, Afh mutants showed a reduced rebound of NREM delta power (Table 2) following sleep deprivation, indicating a reduced homeostatic response to acute sleep loss. The NREM EEG delta power of Afh mutants was significantly reduced in baseline conditions and was not aligned with the light-dark cycle; it remained low over the entire recovery period (Fig. 7b, Table 1). Thus, we examined the sleep process (Process S) as previously modeled for mice in the literature26. This model combines the delta power values with the distribution of sleep-wake epochs to describe daily variations in sleep need (see Methods). Process S differed significantly between the genotypes over the entire recording period (Fig. 7c, Table 1), confirming a chronic sleep defect in Afh mutants. Although sleep pressure increased during the dark phase and decreased during the light phase in both genotypes, the major difference between Afh mutants and wild-type controls emerged at the end of the dark phase (Fig. 7c), when the sleep pressure was maximal. Quantitative analyses of the EEG frequencies during baseline, recovery and DD conditions revealed power density differences in the delta (1–5 Hz) frequency range but not in the theta (5–9 Hz) range (Fig. 7d), confirming that mutants suffered from an NREM deficit. During recovery from sleep loss, Afh mutants exhibited a significant reduction of the power density in the delta band for NREM sleep. Consistent with these latter results, only residual low-frequency peaks in REM sleep, residing within the delta range, differed between the genotypes (Fig. 7e).
Overall, the sleep changes that occurred in the LD condition were attenuated, but not completely eliminated, in DD; during constant darkness the total sleep time, sleep fragmentation and delta power were similar between genotypes, while the power density in the delta range remained reduced in Afh mutants.

### Molecular correlates

#### A pan-regional effect of the Afh mutation on the regulation of Per and Cry

All of the abnormalities that Afh mutants expressed in the modulation of their daily activity, as well as the specific defects in interval timing and sleep, indicate that specific clock-regulatory defects may arise outside the SCN in these mutants. To investigate the molecular correlates of these behaviors, we conducted gene expression profiling in the hypothalamus (HYP) and pre-frontal cortex (PFC) collected in LD and DD in the food restriction experiments (see Gene expression analysis by quantitative real-time PCR in Methods). Afh mutants showed a significant reduction in the expression of Per2 and Cry1 in both the hypothalamus (Fig. 8a) and the pre-frontal cortex (Fig. 8b) in LD. The same trend was also observed in the DD condition (Fig. 8c,d). The reduced expression of Per and Cry was accompanied by an increase of Clock expression in the hypothalamus, indicating that an alteration of Per/Cry-mediated transcriptional repression affects the Clock-mediated positive feedback loop. Furthermore, in constant darkness a hypothalamic increase of the Orexin transcript (Fig. 8c) suggests that the sleep-wake and food-anticipatory regulatory alterations in Afh mutants may originate from hypothalamic circuits. Orexin is a neuropeptide secreted in the hypothalamus that exerts a pivotal role in both sleep-wake regulation and food intake27. Moreover, we observed an increase of the albumin D-binding protein gene, Dbp (Fig. 8d), in the prefrontal cortex of wild-type mice in the DD condition compared to Afh mutants.
Dbp is under the transcriptional control of CLOCK/BMAL and is considered to play an important role in sleep homeostasis in mice2. These results confirm that the Afh mutation has a significant role outside the SCN, influencing local gene networks according to region-specific biological functions.

#### Per2 is abnormally regulated in the striatum of Afh mutants

We complemented our investigation with an exploration of the core clock proteins in the striatum, the fundamental brain structure implicated in timing in the seconds-to-minutes range12. In order to study the circadian-dependent variations of these proteins, we collected tissues at ZT6 and ZT18 from different cohorts of mice kept under the regular LD condition. As expected, all clock proteins were expressed in a circadian fashion (differing according to zeitgeber time) (Fig. 8e). For example, in wild-type mice, Clock and Bmal1 levels were lower at ZT18 than at ZT6, while Per2 levels were higher. Importantly, however, this Per2 pattern was reversed in Afh mutants, with a significant genotype difference at ZT18 (Fig. 8e).

#### Monoamines in Afh mutants compared to controls

HPLC analysis showed that the levels of monoamines and of their metabolites in the striatum change according to the light phase. In particular, DA and its predominantly extracellular metabolite homovanillic acid (HVA), as well as 5HT and its metabolite 5-hydroxyindoleacetic acid (5HIAA), changed significantly between the light and dark phases in both genotypes (Supplementary Fig. S3). Levels of the intraneuronal metabolite 3,4-dihydroxyphenylacetic acid (DOPAC) did not change significantly between the light and dark phases but differed between +/+ and Afh/Afh animals (Supplementary Fig. S3). DOPAC is an intracellular metabolite of DA that is known to be affected by changes in either DA synthesis28 or DA storage29, suggesting that one or both of these processes may be altered in mutants.
However, a limitation of the current study lies in the age difference of the samples between the behavioral and the biochemistry studies (see Methods).

#### Circadian clock and sleep differentially affect daily behaviors in mice

We explored whether the circadian and sleep information from our two groups of mice had any reliable relationship with the behavioral measures. To this end, we performed stepwise multiple regression analyses (MRA) based on "best lags" derived from linear regressions of the behavioral observations on circadian or sleep predictors (see Methods). In Fig. 9 we display the output of the MRAs across the main behavioral parameters using two predictors (circadian activity and one sleep measure). The comparison between LD and DD in both groups showed a major contribution of the circadian predictor in LD (majority of blue squares, Fig. 9a,b), as a result of the light-dependent oscillators, whereas the sleep predictor played a more meaningful role in DD (majority of red squares, Fig. 9d,e), in the absence of external light cues. Moreover, each behavioral parameter was predicted exclusively by one of the two predictors (circadian or sleep, Fig. 9c,f) in both LD and DD conditions. This analysis suggests that circadian and sleep processes may play different roles in conditioning/modulating behavioral measures. We could identify how much of the variation in each behavior, and for each genotype, may be explained by each predictor. Interestingly, we observed that in many cases the best prediction (quantified by R2, see Supplementary Fig. S4) of the behavioral measures was influenced by the circadian or the sleep process differentially depending on the lag. In other words, the circadian clock system was a good predictor of behavioral outcomes at particular temporal lags, whilst the sleep process was a better predictor at other lags. Moreover, our analysis revealed that the phase shift between the two predictors was significant (Supplementary Fig. S4).
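The "best lag" logic can be illustrated with a deliberately simplified single-predictor sketch: shift the predictor time series by each candidate lag, correlate it with the behavioral series, and keep the lag with the highest R2. This is our reading of the lagged regressions underlying the MRA, with synthetic hourly data; it is not the paper's stepwise procedure itself:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def best_lag(behavior, predictor, max_lag):
    """Return (lag, R^2) for the lag (in samples, here hours) at which
    a shifted copy of `predictor` explains the most variance in
    `behavior` -- a single-predictor analogue of the lagged regressions."""
    best = (0, -1.0)
    for lag in range(max_lag + 1):
        x = behavior[lag:] if lag else behavior
        p = predictor[:-lag] if lag else predictor
        r2 = pearson_r(x, p) ** 2
        if r2 > best[1]:
            best = (lag, r2)
    return best

# Synthetic hourly data: behavior follows a 24-h predictor with a 3-h delay.
predictor = [math.sin(2 * math.pi * t / 24) for t in range(96)]
behavior = [0.0] * 3 + predictor[:-3]
lag, r2 = best_lag(behavior, predictor, max_lag=6)   # best lag is 3 h
```

The same scan, run once for the circadian predictor and once for the sleep predictor, yields the lag-dependent R2 profiles of the kind summarized in Supplementary Fig. S4.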
In wild-type mice the sleep predictor for behavioral observations such as error rate was in anti-phase with respect to the circadian predictor (Supplementary Fig. S4), peaking when the other presented a trough and vice versa. This effect was clearly demonstrated by cross-correlograms between the predictors, which showed the trough of the curve at lag 0 in both LD and DD for wild-type mice (Supplementary Fig. S4c,d). Importantly, the interval between the trough and the peak of the curve indicates the lag, in hours, necessary for the two predictors to be in phase. For Afh homozygous mutants the two processes were not in perfect antiphase in LD; however, this mismatch was recovered in DD.

## Discussion

The results of this work show that Afh mutant mice are characterized by a significant deficit in adjusting their behaviors in response to a temporally changing environment, which suggests a reduced temporal phenotypic plasticity. Importantly, this effect was manifested in mutants at multiple timescales. The investigation of the length of daily activity revealed a significant deficit in the offset of activity in Afh mutants. This is in agreement with the alteration of the interaction between Per and Cry in our mutants. In particular, the original 'dual oscillator model' specifically predicted that Per1/Cry1 and Per2/Cry2 are regulators of the morning (dawn) and evening (dusk) oscillators, respectively. Among the studies that attempted to independently associate single genes with single oscillators, the full model was disproved3; however, the strongest evidence that emerged in the literature confirmed the role of Per2 in α compression3. Per2 has a critical role in phase delay, which is thought to result from posttranslational processes that affect the degradation of PER2 proteins7. Thus, our observation of Per2 reduction in different brain areas outside the SCN is consistent with these conclusions.
In particular, the alteration of Per2 in the hypothalamus, pre-frontal cortex and striatum during the dark phase of Afh mutants could explain the deficits that mutants showed in FAA, NREM sleep and interval timing, as all these phenotypes are associated with Per2 function and occurred mainly during the dark phase. However, a limitation of the current work is that Per2 levels were not measured in constant darkness, where some of the abnormal phenotypes we described were restored. Through the analyses of food anticipatory behavior, we were able to conclude that Afh mutants did not develop anticipatory behavior, since they did not show the typical increase of behavioral activity before the time of the meal in DD. However, they showed a sudden behavioral response at the onset of the light phase in the LD protocol. This latter response suggests that Afh mutant mice were able to associate the moment of the meal with the lights coming on (i.e., learning of a stimulus-response association); an alternative explanation, however, relates to the high sensitivity of Afh mice to light22. Nevertheless, we did not observe any abnormal light-dependent arousal activity in our EEG investigation. We observed that short-interval (seconds-range) perception was also abnormal in Afh mutants. Although our observations could be explained by a decelerated-clock hypothesis, in line with the observations on the circadian clock19, the predictions of this account were not confirmed when Afh mutants were tested in the temporal discrimination task (i.e., Fig. 4e). This latter phenomenon is instead consistent with striatal dysfunction, supported by the Per2 defects we reported in the striatum. The investigation of temporal decision-making revealed an interesting aspect of the effects of the Afh mutation in our mice. Afh mutants were unable to adjust their time-based decisions in response to the different probabilities of reward delivery after different delays.
This lack of adaptability of interval timing behavior to exogenous factors echoes the observed failure of these mice to adapt their daily activity to changes in temporal parameters at the circadian level. This correspondence raises the question of whether temporal phenotypic plasticity is a process that involves behavioral adaptation across multiple timescales. Our study addresses specific behavioral phenotypes but does not cover all possible phenotypes that could influence those we investigated here. For example, further investigations of visual sensitivity, spatial recognition and metabolism would further refine our understanding of the behavioral repertoire of the Afh mutants. In any case, we believe that the possible effects of some of these factors would be minimal. For example, response selection depends only marginally on spatial abilities in our tasks, given the short distance between the hoppers in the home-cage. Indeed, a preliminary investigation of how Afh mice perform in spatial cognition revealed no alterations (G. Lassi, personal communication). Additional investigations are also required to dissect any possible influence of genetic background or age-related phenotypes on the regulation of sleep and circadian rhythms30. In addition, the altered light sensitivity of Afh mice22 could potentially affect some of the behavioral responses of this mouse line; however, a full understanding of this phenomenon needs further investigation. We also identified mutation-dependent sleep-related alterations. Afh mutants expressed important physiological sleep abnormalities at baseline and during recovery from sleep loss. While the wheel-running19 and nose-poking activity of Afh homozygous mutants can mostly entrain to an LD cycle, we show here that their EEG delta power did not. Furthermore, the simulation of Process S showed a reduction of its amplitude in Afh mutants in both LD and DD conditions.
This latter effect results from the NREM fragmentation (i.e., increased number of NREM episodes) and the low delta power expressed by Afh mutants compared to wild-type mice. The mutation leads to changes in the dynamics of sleep homeostasis: the rebound following sleep deprivation is reduced in Afh mutants, suggesting a physiological deficit in the sleep process. In Afh mutants we observed a chronic alteration of sleep physiology (i.e., NREM fragmentation and delta power reduction), confirmed by an abnormal Process S. The sleep changes at baseline were followed by a prolonged delta power decrease after sleep deprivation (Fig. 7b). However, the total sleep time was not affected in mutants. Moreover, shortly after entering DD, when mice were no longer under a controlled light-dark schedule, the differences in sleep between wild-types and mutants were attenuated, indicating that light and/or light-dark cycles may have a disrupting effect on the physiology of sleep in Afh mice. Finally, we provided statistical predictions of how specific contributions of the circadian and sleep processes may account for behavioral changes in wild-type and mutant mice. This analysis revealed that both circadian and sleep processes are good predictors of interval timing behaviors and temporal decision-making, confirming our previous findings15,16,23. This study also delineates, for the first time, the time-locked temporal dynamics between the circadian clock and sleep in modulating the daily profile of short-interval behaviors. Moreover, the Afh mutation disturbs these predicted dynamics between circadian and sleep processes in LD, while DD restores the normal antiphasic relationship between them.

## Methods

### Mice and Husbandry

The Afh (after hours) mice were derived at the Medical Research Council (MRC) Mammalian Genetics Unit (Harwell, UK) from an ENU mutagenesis program, using ENU-mutagenized BALB/c males crossed to C3H/HeH females.
The colony was subsequently bred at IIT and backcrossed for more than 10 generations to C57BL/6J. All mice were genotyped as described in19. All experimental procedures were conducted with age-matched groups of male littermates. The total number of animals, wild-type and Afh, used in this study was 72. Wild-type and homozygous mutant males were group-housed in the experimental room for a week before the experiments, with food and water ad libitum, under a 12:12 light/dark cycle (lights on from 7:00 to 19:00). All animal procedures were approved by our institutional animal committee ('Organismo preposto al benessere degli animali', OPBA, IIT, Genova) and by the national ethical committee in Italy, for IIT Genova. The study followed the ARRIVE guidelines (http://www.nc3rs.org.uk/arrive-guidelines).

### Sleep Wireless Experiment

Six wild-type and 6 Afh homozygous mice were subjected to the sleep investigation at 12–16 weeks of age. All mice were anesthetized with Ketamine/Xylazine (90–150 K/7.5–16 X, intra-peritoneally) and implanted with telemetry transmitters (Data Sciences, F20-EET, Gold system) for recording electroencephalography, electromyography and body temperature (°C), as described in16. A two-week post-surgery recovery period ensured full recovery of normal sleep. We then recorded all the physiological signals for 48 hours (sleep baseline), followed by 6 hours of sleep deprivation, from 7:00 to 13:00, by gentle handling and brushing. The recovery period lasted 42 hours, and an additional 48 hours of data were acquired during constant darkness.

### Working-for-food and Interval Timing Tasks

Mice were singly housed and maintained for several days in a novel home-cage (Cognition & Welfare, COWE) apparatus developed by TSE Systems (Germany) to monitor nose-poke activity in mice. Each home-cage was equipped with three holes/hoppers on a metal wall.
The central hole is the control hopper, and the lateral hoppers are connected to two feeders that dispense 20 mg dustless precision pellets (BioServ, USA) upon a trigger signal from the software. All three hoppers are equipped with infrared beams and LEDs controllable by the experiment code. The software allows various behavioral protocols to be implemented by setting combinations of LED stimuli and pellet dispensing. We implemented a protocol of food restriction, a variant of the Fixed Interval (FI) and Peak Interval (PI) task31, a two time interval task and the switch task13,32. Each of these tasks was preceded by a training phase to associate the light/hopper with the food pellet. Between the pre-training and training phases, a small reduction in the total food intake was observed; however, the weight of the animals was checked every few days to ensure that body weight was stable and therefore not affecting other mechanisms such as metabolism. If an animal lost more than 15–20% of its body weight, it was removed from the cage.

#### Food Restriction

During this experiment, only two hoppers were available: the central and one lateral hopper. Mice were first trained to receive pellets from the lateral hopper (pre-training phase). During this phase a central nose poke activated the light in the central and lateral hoppers for 2 seconds (L = 2 s). The mouse received a pellet reward if it poked in the lateral hopper after the light signal and before the time limit (30 seconds). Then an inter-trial interval (ITI) started, consisting of a 30 s fixed interval plus a random delay drawn from a geometric distribution with a mean of 60 s. The pre-training phase lasted for 3 days. Then the training phase started, and pellets were only available between 10:00 and 16:00, either during the light phase of LD or in constant darkness. Trials could not be initiated, and nose pokes were not reinforced, during any other period.
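The ITI schedule above (a 30 s fixed interval plus a geometrically distributed delay with a mean of 60 s) can be sketched as follows. The 1-second tick and the success probability p = 1/mean are our assumptions, since the text specifies only the fixed part and the mean of the delay:

```python
import random

def sample_iti(rng, fixed=30, mean_delay=60):
    """One inter-trial interval: a 30 s fixed interval plus a random
    delay drawn from a geometric distribution with the stated mean.
    Assumed parametrization: 1 s ticks, success probability 1/mean."""
    p = 1.0 / mean_delay
    delay = 1
    while rng.random() >= p:   # count 1 s ticks until the first "success"
        delay += 1
    return fixed + delay

rng = random.Random(0)                        # fixed seed for reproducibility
itis = [sample_iti(rng) for _ in range(20000)]
mean_iti = sum(itis) / len(itis)              # close to 30 + 60 = 90 s
```

The memoryless geometric tail keeps the moment of the next trial unpredictable, so the animal cannot time the ITI itself.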
This experiment was conducted with 6 wild-type and 6 Afh homozygous mice at 12–16 weeks of age. An LD phase of 14 days was followed by a 10-day-long DD phase. The analyses of the LD and DD phases were limited to the last 10 and 7 days, respectively.

#### Fixed Interval and Peak Interval Task

A different cohort of mice was tested on the Fixed Interval (FI) and Peak Interval (PI) tasks. This experiment was conducted with 6 wild-type and 6 Afh homozygous mice at 12–16 weeks of age. Mice had access to food over the entire 24 h and no food restriction was implemented.

#### Pre-training

The pre-training for the task aimed to train the mice to associate the light/hopper with the food pellet. Mice self-initiated a trial by nose poking in the central hopper, which resulted in a 2-sec illumination (L = 2 sec) of the two hoppers. If a mouse nose-poked in the lateral hopper after the light signal and before the time limit (30 seconds), a pellet was delivered. The inter-trial interval (ITI) was defined as a 30 s fixed interval plus a random delay drawn from a geometric distribution with a mean of 60 s. The pre-training phase lasted for 3 days.

#### Training

In this phase, we implemented the FI and PI tasks. In FI trials, the first response following the fixed interval (10 seconds) resulted in the delivery of the food pellet. The FI trials serve to train the mice on the reward availability time (i.e., the target interval). However, the reward delivery in FI trials contaminates responding and thus does not allow the characterization of timing performance after the reward availability time. In order to capture the entire temporal expectancy of reward delivery, PI trials were introduced. The PI trials last longer than the FI trials and the reinforcement is omitted; the timing performance of the mice is evaluated based on response rates as a function of trial time (Fig. 4b). Unlike in pre-training, in FI trials mice had to wait 10 sec (L = 10 sec) before receiving a pellet.
In the training phase, 20% of the trials were PI (probe) trials in which reinforcement was omitted. An LD phase of 7 days was followed by a 10-day-long DD phase. Only the data from PI trials in the last 5 days (steady state) of the LD phase and the last 6 days of DD were analyzed. A different cohort of mice (6 wild-type and 6 Afh homozygous male mice at 12–16 weeks of age) was tested in the two time interval (dual peak interval) task. Mice had access to food for 24 h. In this task, all three hoppers were available.

#### Pre-training

The pre-training phase aimed to train mice to form an association between the light stimulus and the reward. Mice self-initiated a trial with a nose-poke in the central hopper. This nose-poke activated the central light and one of the two lateral hopper lights, picked at random with equal probability (50%). The hopper remained lit until the first nose-poke on the lateral hopper. Upon a correct response, the light was switched off and a pellet was delivered, marking the end of the trial and the beginning of the ITI. The ITI was defined as for the other tasks. This phase lasted 3–4 days in the LD condition (12:12).

#### Training

The two time interval task was implemented in this phase. Compared to the pre-training phase, here a short light signal of 10 sec (tshort) was associated with the left hopper and a long light signal of 30 sec (tlong = 3 × tshort) was associated with the right hopper. Reward was available only in the FI trials. In order to receive a pellet in FI trials, the animals had to nose-poke in the lit hopper between the time of light offset and 3 times the light duration (the 10–30 sec interval for the left hopper; the 30–90 sec interval for the right hopper). The reward was delivered in FI trials only if the first nose-poke after the light offset was at the correct location, which initiated the ITI. After 3–4 days of training in this phase, the probe (PI) trials were introduced. Probe trials occurred randomly with a probability of 0.2 for each side.
During the probe trials the central nose-poke activated the light at the central and one of the lateral hoppers for a total duration of 90 sec. This phase lasted two weeks: one week in the LD condition (12:12) and another week in a T-cycle (LD 13.25:13.25). The probe trials were introduced in order to estimate whether the interval timing behavior of the animals around the short (10 sec) and long (30 sec) targets varied between the different LD conditions. The ITI started after 90 sec.

#### Switch Task

For this experiment, we tested a different cohort of mice at 12–16 weeks of age in the same home-cage system described above. During the pre-training phase, mice self-initiated a trial with a nose-poke in the central hopper. No temporal limitations were imposed during this phase, and a trial ended when the animal poked and received a pellet on each side, determining the end of the trial and the offset of the lights. Every trial was followed by an inter-trial interval (ITI). This phase lasted 3–4 days in the LD condition (12:12). All mice were then trained in the switch task as described in13,16,23. The task required the discrimination of two light signal durations (i.e., short- vs. long-latency signals) in order to obtain a food pellet in a trial (Fig. 5a). The short (S) or long (L) duration of the light signal predicted the location of pellet availability. Short and long trials were randomly intermixed with equal probability (P(S) = P(L) = 0.5). The first nose-poke (NP) in the correct location (left hopper after a short signal, right hopper after a long signal) was reinforced with one pellet. Incorrect responses, namely nose-pokes in the right hopper after a short signal or in the left hopper after a long signal, terminated the trial without pellet delivery. The ratio between the short and long signals was 1:3 (3 sec vs. 9 sec) at the beginning and lasted for a week; it was then reduced to 1:2 (3 sec vs. 6 sec) for a few days.
After a few days of the switch task with the 1:2 ratio, probe trials were introduced for both short-latency and long-latency trials. During probe trials the signal was presented as in regular trials but the responses of the animals were never reinforced, providing a partial reinforcement schedule for the mice. The three experimental phases were differentiated by the probability of probe trials. In the first phase, short (Sp) and long (Lp) probes were introduced with the same conditional probabilities, P(Sp|S) = P(Lp|L) = 0.2 (Fig. 5a), whereas the second experimental phase contained the following unequal probabilities: P(Sp|S) = 0.5 and P(Lp|L) = 0.2 (Fig. 6a, left panel). These probabilities were reversed in the third phase: P(Sp|S) = 0.2 and P(Lp|L) = 0.5 (Fig. 6a, right panel). The ITI started at the end of each trial. Each phase lasted one week. Another cohort of 6 wild-type and 6 Afh homozygous mice (12–16 weeks old) was tested with the 1:2 ratio after a pre-training phase, in the LD condition for 11 days followed by the DD condition for 9 days. Only the last 5 days of the dataset were analyzed for the LD and DD conditions, to ensure steady-state performance and entrainment with the new cycle (Supplementary Fig. S1).

### Gene expression analysis by quantitative real-time PCR

Four Afh homozygous mutant mice and 4 littermate controls were sacrificed at ~7:30 for the LD protocol and at ~9:30 for the DD protocol, at the end of the working-for-food cognitive task. The prefrontal cortex (PFC) and hypothalamus (HYP, excluding the SCN) were dissected using a brain slicer matrix (Zivic Instruments, Pittsburgh, PA, USA) and immediately frozen on dry ice. Total RNA was extracted from approximately 0.5 g of snap-frozen PFC and HYP using Qiazol (Qiagen, Hilden, Germany) according to the manufacturer's instructions. RNA samples were quantified with an ND1000 Nanodrop spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA).
Reverse transcription of 0.5 μg of RNA was performed using ImProm-II(TM) Reverse Transcriptase (Promega, Milan, Italy) according to the manufacturer's instructions. RT-qPCR was performed on an Applied Biosystems 7900HT Fast Real-Time PCR System (Applied Biosystems, Foster City, CA) using the QuantiFast SYBR Green PCR Kit (Qiagen, Hilden, Germany) under the following conditions: 5 min at 95 °C; 40 cycles of denaturation at 95 °C for 10 sec, annealing at 60 °C for 30 sec and extension at 70 °C for 1 min. Each sample was run to obtain average Ct values according to the manufacturer's specifications. The primers used are listed below. All samples were normalized against 2 different housekeeping genes: Gapdh and β-actin. Expression levels relative to these housekeeping genes were determined by calculating ΔCt. The data are expressed as 2^(−ΔΔCt), where ΔΔCt is the difference in ΔCt between the control and Afh mutant cohorts. In the case of Cry1 and Gapdh, the bands are from the same gel and the same exposure (Supplementary Fig. S5). For the other genes (Clock, Bmal1 and Per2) the membranes were cut to detect several antigens for the same samples, and the same parameters were used to acquire the images.
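The 2^(−ΔΔCt) computation is mechanical once the Ct values are in hand; a sketch with invented Ct values (not data from this study):

```python
def relative_expression(ct_target_mut, ct_ref_mut, ct_target_wt, ct_ref_wt):
    """Fold change by the 2^(-ΔΔCt) method:
    ΔCt  = Ct(target) - Ct(housekeeping) within each cohort,
    ΔΔCt = ΔCt(mutant) - ΔCt(wild-type)."""
    dd_ct = (ct_target_mut - ct_ref_mut) - (ct_target_wt - ct_ref_wt)
    return 2 ** (-dd_ct)

# Invented Ct values: the target needs one extra cycle in the mutant
# while the housekeeping gene is unchanged, i.e. half the expression.
fold = relative_expression(26.0, 18.0, 25.0, 18.0)   # 0.5
```

Normalizing against the housekeeping ΔCt first, and only then comparing cohorts, is what makes the resulting fold change independent of input RNA amounts.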
The primers used are listed here:

- Bmal1 Forward: CCGTGCTAAGGATGGCTGTT; Reverse: TTGGCTTGTAGTTTGCTTCT
- Clock Forward: TTGACAGAGATGACAGTAG; Reverse: TTACCAGGAAGCATAGAC
- Cry1 Forward: GCTATGCTCCTGGAGAGAACGT; Reverse: TGTCCCCGTGAGCATAGTGTAA
- Per2 Forward: AGCTACACCACCCCTTACAAGCT; Reverse: GACACGGCAGAAAAAAGATTTCTC
- Dbp Forward: GAGCCTTCTGCAGGGAAACA; Reverse: GCCTTGCGCTCCTTTTCC
- Ghrelin Forward: CATGCTCTGGATGGACAT; Reverse: TGGTGGCTTCTTGGATTC
- Orexin Forward: CTCCAGGCACCATGAACTT; Reverse: CAGTAGCAGCAGCAGCAG
- Gapdh Forward: GAACATCATCCCTGCATCCA; Reverse: CCAGTGAGCTTCCCGTTCA
- β-Actin Forward: AAGTGGTTACAGGAAGTCC; Reverse: ATAATTTACACAGAAGCAATGC

### Biochemistry

Eight wild-type and 8 Afh homozygous mice were euthanized by decapitation at 35–38 weeks of age, at ZT6 and at ZT18. The striatum from the right and left hemispheres was rapidly dissected (within 60 sec) on an ice-cold surface and frozen in liquid nitrogen. The striatum from one hemisphere was used for Western blot analysis and that from the other for neurochemical measurement of monoamine tissue levels.

### Antibodies and Western blot Analysis

The anti-BMAL1 and anti-CLOCK antibodies were purchased from Abcam, and the anti-PER2 antibody from Santa Cruz Biotechnology. Western blot analyses of brain samples were performed as described in33. Tissue samples were homogenized in boiling 1% SDS solution supplemented with a protease inhibitor cocktail (Sigma) and boiled for 10 min. Protein concentrations were measured using a BCA protein assay (Thermo Scientific). Protein extracts (25 or 50 µg) were separated on 10% SDS/PAGE and transferred to nitrocellulose membranes. Blots were incubated with primary antibodies overnight at 4 °C. Immune complexes were detected using appropriate peroxidase-conjugated secondary antibodies (Jackson ImmunoResearch) and a chemiluminescent reagent (SuperSignal West Pico; Pierce Biotechnology). Densitometric analysis was performed within the linear range using IMAGEQUANT V1.1 (GE Healthcare Life Sciences).
For quantitative analysis, actin was used as the loading control. The Western blot results were normalized to the respective control wild-type values at ZT6.

### Neurochemical measurement of monoamine tissue levels

The striatum was dissected rapidly on ice and frozen in liquid nitrogen. Tissue was homogenized in 40 volumes of 0.1 M HClO4, and the homogenate was centrifuged at 10,000 × g for 10 min to remove debris. Subsequently, supernatants were filtered (Millipore Ultrafree-MC centrifugal filter units, 0.22 µm) and analyzed by HPLC as described below.

#### Analytical procedure

Measurements of DA and 5-HT metabolites in the collected brain samples were performed by HPLC with electrochemical detection (ALEXYS LC-EC system, Antec Leyden BV, Netherlands) equipped with a reverse-phase column (3 μm particles, ALB-215 C18, 1 × 150 mm, Antec) at a flow rate of 200 μl/min; analytes were detected electrochemically with a 0.7 mm glassy carbon electrode (Antec; VT-03). The mobile phase contained 50 mM H3PO4, 50 mM citric acid, 8 mM KCl, 0.1 mM EDTA, 400 mg/l octanesulfonic acid sodium salt and 10% (vol/vol) methanol, pH 3.9. The sensitivity of the method permitted detection of ~3 fmol of DA.

## Data Analyses

### Sleep data analyses

EEG recordings were analyzed with SleepSign software, using a combination of automatic sleep scoring and manual correction. Scoring and extrapolation of the power spectra were performed on 4-second epochs. Each epoch was subjected to FFT (Fast Fourier Transform) and classified as wakefulness (W), non-rapid-eye-movement (NREM) or rapid-eye-movement (REM) sleep. The sleep scoring was analyzed in hourly bins and various parameters were extracted, such as the total sleep time (the combined NREM and REM duration). Fragmentation was expressed as the number of episodes/hour, where an episode is a run of one or more identical consecutive 4-second epochs.
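The episode count used for the fragmentation measure can be sketched as follows (a Python illustration with a toy epoch sequence; the actual scoring was done in SleepSign):

```python
def count_episodes(stage_labels):
    """Count episodes, where an episode is a maximal run of identical
    consecutive 4-second epoch labels ('W', 'NREM' or 'REM')."""
    episodes = 0
    prev = None
    for label in stage_labels:
        if label != prev:
            episodes += 1  # state changed: a new episode starts here
            prev = label
    return episodes

# One hour = 900 four-second epochs; a short toy sequence for illustration
labels = ['W'] * 10 + ['NREM'] * 20 + ['REM'] * 5 + ['NREM'] * 15
print(count_episodes(labels))  # -> 4
```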
The EEG delta power during NREM epochs was expressed as a percentage of the average delta power during the last 4 hours of the light phases of the baseline26. The EEG delta power during NREM epochs in DD was normalized by the average values of the last 4 hours of the subjective day (CT 7–11).

### Process S

The homeostatic process of sleep, Process S, has been mathematically reproduced in mice by Franken et al.26. The model recapitulates the increase and decrease in the need for sleep with two saturating exponential functions based on the sleep–wake distribution. In our study the parameters (asymptotes, time course, initial value) were estimated on the first day of the baseline for each subject. Process S was simulated during the SD and RC phases and was expressed as a percentage of the individual mean delta power during NREM epochs over the last 4 hours of the light phase of the baseline. Process S during the DD phase was rescaled according to the subjective period and normalized by the average values of the last 4 hours of the subjective day (CT 7–11). A robust estimate of the subjective period was computed on the last 60 hours of continuous darkness of the Process S. We defined a matrix of cross-correlation values for the Process S, as described by equation (1):

$$Conv(\tau ,\delta )=P({S}_{24}(\delta ),{S}_{24}(\tau ))$$ (1)

where P is Pearson’s correlation coefficient and S24 is a 24-hour-long segment of the Process S taken at instants δ and τ, each varying between 1 and L−24, where L is the length of the data set. The cross-correlation of the Process S showed periodic diagonal structures that peaked in band regions about 24–27 hours from the autocorrelation diagonal, where the peaks reflect the intrinsic periodicity of the Process S. We refer to the maximum value of Conv(τ, δ) in the diagonal structure corresponding to the bands τ = 0, δ ∈ [18, 30] and τ ∈ [18, 30], δ = 0 as the SecondPeak.
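Equation (1) can be sketched as follows (a Python illustration; the hourly binning and the toy ~25 h signal are assumptions for demonstration):

```python
import numpy as np

def conv_matrix(process_s, window=24):
    """Cross-correlation matrix of eq. (1): the Pearson correlation between
    every pair of window-long segments of the Process S time series."""
    L = len(process_s)
    # One segment starting at every instant 1..L-24 (0-based here)
    segments = np.array([process_s[i:i + window] for i in range(L - window)])
    # np.corrcoef treats each row as one variable: entry (tau, delta)
    # is P(S_24(delta), S_24(tau))
    return np.corrcoef(segments)

# Toy signal with a ~25 h periodicity in hourly bins, illustration only
t = np.arange(120)
s = np.sin(2 * np.pi * t / 25)
C = conv_matrix(s)
# Segments 25 bins apart are identical, so the band at offset 25 peaks at 1
```

The periodic diagonal bands described in the text appear as ridges of high correlation at offsets matching the signal's intrinsic period.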
To robustly estimate the subjective period we first defined the weighting function for every τ and δ:

$$w(\tau ,\delta )=\begin{cases}Conv(\tau ,\delta ), & \text{when }\,Conv(\tau ,\delta ) > 0.8\cdot SecondPeak\\ 0, & \text{otherwise}\end{cases}$$ (2)

Then we defined the subjective period (SP) as the weighted average:

$$SP=\frac{{\sum }_{\tau ,\delta }w(\tau ,\delta )\cdot \tau }{{\sum }_{\tau ,\delta }w(\tau ,\delta )},\quad \forall \tau \in [1,L-24]$$ (3)

Finally, the onset of the subjective night for the Process S was defined as the time at which it reaches its maximum.

### Behavioral Data Analysis

Circadian periods of nose-poking activity were analyzed for each animal as described in Maggi et al.23. The analyses were performed on the MATLAB (www.mathworks.com) platform. The circadian period was estimated by nonlinear curve-fitting of the nose pokes as a function of the recording hours with a sigmoidal function for various periodicities. A cross-correlation coefficient (CC) was calculated for each fit to evaluate the robustness of the nose-poke periodicity. The time devoted to nocturnal behavioral activity (alpha) was computed with the algorithm described in Leise et al.34,35.

#### Peak Time in Peak Interval Task

Probe trials in the peak interval task were introduced to assess the temporal accuracy and uncertainty of the mice. Data gathered from the probe trials can be expressed in two different ways. Averaging data across trials provides a nearly bell-shaped normalized response curve with its peak located around the target duration. Data in our study were analyzed by modeling nose poking as a function of trial time at the single-trial level as a three-state system (break → run → break). The two break periods are characterized by a relatively low rate of responding and the run period by a relatively high rate of responding23.
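Returning to equations (2) and (3) above, the subjective-period estimate can be sketched as follows (a Python illustration; restricting the search to the two bands and treating the band offset as the lag are simplifications of the full procedure):

```python
import numpy as np

def subjective_period(conv, second_peak, lo=18, hi=30):
    """Eqs (2)-(3): keep Conv values above 0.8*SecondPeak inside the bands
    (tau = 0, delta in [lo, hi]) and (delta = 0, tau in [lo, hi]), then
    return the weighted average of the band offset as the subjective period."""
    lags, weights = [], []
    for d in range(lo, hi + 1):
        for value in (conv[0, d], conv[d, 0]):
            if value > 0.8 * second_peak:  # thresholding of eq. (2)
                lags.append(d)
                weights.append(value)
    return np.average(lags, weights=weights)  # weighted average of eq. (3)

# Toy Conv matrix whose off-diagonal band peaks at a 25 h offset
conv = np.zeros((31, 31))
conv[0, 25] = conv[25, 0] = 1.0
print(subjective_period(conv, second_peak=1.0))  # -> 25.0
```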
The trial time at which the animal switches from the low to the high rate of responding was defined as the START time, and the subsequent point at which the animal switches from the high to the low rate was defined as the STOP time. These points were detected using an algorithm as described in4,23. The average of the START and STOP times was used as the estimate of the Peak Time in individual trials (also referred to as the middle time).

#### Temporal Decision-Making in the Switch Task

The primary unit of analysis in the Switch task was the trial time at which the mouse left the short-latency location for the long-latency location in the long-signal trials (the switch latency). The distributions of switch latencies were fit with a mixture of Gaussian distributions (parameters were chosen by maximum likelihood estimation from the empirical data) as we have previously described4,23. The component of the mixture with the higher proportion (that explains the larger portion of the data) and with estimated parameters within the range of interest (between the short and long signal durations used in the task) was taken to represent the empirical distribution of switch latency values. The target-switch latency in this discrimination procedure was defined as the mean departure (or switch) latency, and the endogenous timing uncertainty was quantified as the coefficient of variation (CV = standard deviation(data)/mean(data)), a measure of the relative dispersion of the switch latency distribution11. The optimal target-switch latency was formulated taking into account the subjective timing uncertainty13, and the expected gain function used to this end was generalized in our prior study to include the probe trial probability11. The Expected Gain (EG) function estimates, a priori, the gains associated with all possible switch latencies given the subject’s endogenous timing uncertainty and other relevant task parameters such as the payoffs and probabilities.
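The timing-uncertainty measure just defined (CV = standard deviation/mean) can be computed as follows (a Python illustration with invented switch latencies):

```python
import numpy as np

def timing_stats(switch_latencies):
    """Mean switch latency and coefficient of variation (CV = sd/mean),
    the relative-dispersion measure used for endogenous timing uncertainty."""
    data = np.asarray(switch_latencies, dtype=float)
    mean = data.mean()
    cv = data.std(ddof=1) / mean  # sample standard deviation over the mean
    return mean, cv

# Invented switch latencies (seconds) for illustration only
mean, cv = timing_stats([5.8, 6.2, 6.0, 6.4, 5.6])
```

Because CV is dimensionless, it allows timing uncertainty to be compared across animals with different mean latencies.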
For each subject, the optimal target-switch latency was defined as the switch latency that maximized the EG given the subject’s CV and the task parameters. The ratio of the EG estimated for the empirical switch latency to the EG associated with the optimal switch latency is referred to as the proportion of maximum possible expected gain (MPEG). This value varies between 0 and 1, where 1 is obtained when the mouse exhibits optimal switch latencies. The performance of each mouse was also assessed by an hourly error rate, defined as Ei/Ti, where Ei is the number of incorrect trials in hour i (i = 1, ..., 24) and Ti is the total number of trials in that hour.

### Multiple regression analysis with a stepwise method

To assess the contributions of the circadian process (Predictor 1) and sleep processes (Predictor 2) to the modulation of the behavioral measures, we investigated which process was the best predictor of the behavioral parameters (e.g. error rate, decision-making and interval timing parameters). Thus, we performed a multiple linear regression analysis using a stepwise method. All behavioral data and predictors were averaged within genotype and collected in one-hour bins (24 ZT in the LD condition, 24 CT in the DD condition). We regressed each individual behavioral parameter (i.e. error rate, switch latencies, etc.) on Predictor 1, defined as circadian (nose-poking) activity, or on Predictor 2, defined as a sleep parameter (i.e. sleep duration, NREM, REM, etc.). Each regression was performed 24 times, each time shifting Predictor 1 or Predictor 2 forward by one time bin (i.e. lag) with respect to the behavioral parameter. For each model we computed the R-squared statistic. For each predictor we identified the “best lag” as the lag with the maximum R-squared among the 24 regressions. All data were then reorganized and aligned with respect to their best lags.
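The best-lag search just described can be sketched as follows (a Python illustration; the circular shift and the use of squared correlation as the simple-regression R-squared are assumptions of this sketch):

```python
import numpy as np

def best_lag_r2(y, predictor, n_lags=24):
    """Regress y on a circularly shifted predictor for each of n_lags
    hourly lags; return (best_lag, best_r_squared)."""
    best = (0, -np.inf)
    for lag in range(n_lags):
        x = np.roll(predictor, lag)  # shift the predictor forward by `lag` bins
        r = np.corrcoef(x, y)[0, 1]
        r2 = r * r  # R^2 of a simple linear regression = squared correlation
        if r2 > best[1]:
            best = (lag, r2)
    return best

# Toy example: a behavioral rhythm that follows the predictor shifted by 5 h
rng = np.random.default_rng(0)
p = np.sin(2 * np.pi * np.arange(24) / 24)
y = np.roll(p, 5) + 0.01 * rng.standard_normal(24)
lag, r2 = best_lag_r2(y, p)  # recovers lag = 5 with R^2 near 1
```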
Then a multiple regression analysis conducted for each combination of Predictor 1 and Predictor 2 provided the best predictor for each behavioral parameter, together with its contribution (defined by the R-squared). To highlight the phase relationship between Predictor 1 (circadian) and Predictor 2 (Process S), we performed a cross-correlation analysis between the R-squared profiles of the two processes. The phase relationship was then evaluated from the time lag of the cross-correlation peak.

### Statistical Analysis

Data were analyzed with one-way ANOVA for differences at single time points, using the Statistics Toolbox of the MATLAB package. Significant differences are indicated by p-values: *p < 0.05; **p < 0.005; ***p < 0.001. For each test we checked the F-statistic (which was always > 1) and the statistical power, which depends on the sample size, the effect size (F-statistic) and the significance level. All tests that were significant had asymptotically high statistical power.
https://asmedigitalcollection.asme.org/IDETC-CIE/proceedings-abstract/IDETC-CIE2007/4806X/339/324669?redirectedFrom=PDF
This paper aims to discuss unstable traffic flow and to identify whether chaotic phenomena exist in a traffic flow dynamic system. Two discrete dynamic models are proposed, derived from the flow–density–speed fundamental diagram and from Del Castillo and Benitez’s exponential curve model and maximum sensitivity curve model. Both models have two parameters: the ratio of the free-flow speed to the space-averaged speed, and the ratio of the absolute value of the kinematic wave speed at jam density to the free-flow speed. Chaos is found in the two models as the two values increase separately. Lyapunov exponents were used to examine the characteristics of the chaotic behavior in the two models. These results are illustrated by numerical examples.
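The abstract does not give the models' equations, so the Lyapunov-exponent test it mentions can only be illustrated on a generic discrete map; here is a sketch using the logistic map as a stand-in (all parameter values are invented for demonstration):

```python
import math

def lyapunov_logistic(r, x0=0.4, n_transient=500, n_iter=5000):
    """Largest Lyapunov exponent of the map x_{k+1} = r x_k (1 - x_k),
    estimated as the orbit average of log|f'(x_k)| with f'(x) = r(1 - 2x).
    A positive exponent indicates sensitive dependence, i.e. chaos."""
    x = x0
    for _ in range(n_transient):  # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        acc += math.log(abs(r * (1 - 2 * x)))
    return acc / n_iter

print(lyapunov_logistic(4.0))  # ~ ln 2 > 0: chaotic regime
print(lyapunov_logistic(2.5))  # < 0: stable fixed point
```

The same orbit-averaging procedure applies to any one-dimensional discrete traffic-flow map once its derivative is known.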
https://scientificallysound.org/2019/02/19/1396/
## Use LaTeX templates to write scientific papers As researchers, we have to write and publish scientific papers to let the world know about our work. The process of preparing a scientific paper can be enjoyable, but it can also be arduous. Different journals and publishers have different requirements about how we should format our submission. The title page should include certain items, the headers should be bold and italic, references should be formatted in this style, etc. The Instructions to Authors of many journals are long and overwhelming, which may deter potential authors. When I was a PhD student, I found it strange that journals had so many Instructions on how to prepare a manuscript, but never provided a downloadable template as a .doc or .docx. If I had a template with [Insert title here], [Insert Abstract here; 400 words], etc, where the various elements were already formatted correctly, I could focus on writing my paper and not on formatting my paper. Moreover, editors and reviewers would likely have an easier time of dealing with submissions that were more uniform in their style. Fear not! Many journals and publishers have LaTeX templates that can be downloaded and used in just this way. While these LaTeX files may seem a little intimidating if you have never opened up a .tex file before, most are fairly straightforward and actually include key points from the Instructions to Authors as dummy text in the article or as comments. ### Example: writing a paper for a PLOS journal How do we find LaTeX templates? As is often the case, Google is your friend. A simple Google search reveals that there is a LaTeX template for all of the PLOS journals. You can download the plos-latex-template.zip file, which contains three files: 1. plos_latex_template.tex: This is the file you would open in your text or LaTeX editor to write your paper. 2. plos_latex_template.pdf: This is the PDF file generated from the current text entered in the plos_latex_template.tex file. 
3. plos2015.bst: This is the file that LaTeX will use to appropriately format the references in your paper. References, managing references, and formatting references are a huge topic and will be the focus of one or more future posts.

A copy of these files is also available here. The title and author section of the first page of plos_latex_template.pdf looks very professional and complies with the journal format. If you open up the plos_latex_template.tex file, there are approximately 70 lines of comments and instructions on how to prepare your article. If you are new to LaTeX, many of these instructions will seem like gibberish. But don’t worry, this won’t stop you from drafting your first article. With a little bit of patience, and possibly reading our series of LaTeX blog posts, you will soon be able to make sense of these instructions. The actual document starts on line 175. Below is the part of the LaTeX document that relates to the title and author section of the PDF:

```latex
\begin{document}
\vspace*{0.2in}

% Title must be 250 characters or less.
\begin{flushleft}
{\Large
\textbf\newline{Title of submission to PLOS journals} % Please use "sentence case" for title and headings (capitalize only the first word in a title (or heading), the first word in a subtitle (or subheading), and any proper nouns).
}
\newline
% Insert author names, affiliations and corresponding author email (do not include titles, positions, or degrees).
\\
Name1 Surname\textsuperscript{1,2\Yinyang},
Name2 Surname\textsuperscript{2\Yinyang},
Name3 Surname\textsuperscript{2,3\textcurrency},
Name4 Surname\textsuperscript{2},
Name5 Surname\textsuperscript{2\ddag},
Name6 Surname\textsuperscript{2\ddag},
Name7 Surname\textsuperscript{1,2,3*},
with the Lorem Ipsum Consortium\textsuperscript{\textpilcrow}
\\
\bigskip
\textbf{1} Affiliation Dept/Program/Center, Institution Name, City, State, Country
\\
\textbf{2} Affiliation Dept/Program/Center, Institution Name, City, State, Country
\\
\textbf{3} Affiliation Dept/Program/Center, Institution Name, City, State, Country
\\
\bigskip

% Insert additional author notes using the symbols described below. Insert symbol callouts after author names as necessary.
%
% Remove or comment out the author notes below if they aren't used.
%
% Primary Equal Contribution Note
\Yinyang These authors contributed equally to this work.

% Also use this double-dagger symbol for special authorship notes, such as senior authorship.
\ddag These authors also contributed equally to this work.

\textcurrency Current Address: Dept/Program/Center, Institution Name, City, State, Country % change symbol to "\textcurrency a" if more than one current address note
% \textcurrency b Insert second current address
% \textcurrency c Insert third current address
```

While some of the LaTeX commands might seem intimidating at first, you can safely ignore them. Simply replace the dummy text with your own text. For example, if I wanted to write the title of my paper, I would enter the following:

```latex
{\Large
\textbf\newline{ScientificallySound as a resource for scientists} % Please use "sentence case" for title and headings (capitalize only the first word in a title (or heading), the first word in a subtitle (or subheading), and any proper nouns).
}
```

As you can see, I simply entered the title of my paper “ScientificallySound as a resource for scientists” between the curly brackets.
Also, I followed the instructions provided in the document, which tell me that I should use “sentence case”. Speaking of these instructions, note that text that follows a percentage sign (i.e. %) is a comment in LaTeX. Comments do not appear in the final PDF.

Special symbols and characters. If the percentage sign starts a comment in LaTeX documents, how do we obtain a percentage sign in our final PDF document? In this case, you put a backslash in front of it, for example 25\%. This tells LaTeX that you want a percentage sign in your text, not to start a comment. This convention may seem overly complex, especially if you are not used to computer programming. It does take a little time to get used to, but soon enough it will become automatic. What about other special characters? We will address some of these in a future post, but the easiest thing is to Google your question. Also, many modern LaTeX editors such as Texmaker, LyX and TeXstudio have look-up tools similar to the special character look-up in Microsoft Word. You look up the symbol or character you want, click on it, and the corresponding LaTeX command gets inserted into your text.

### Templates for other journals and publishers

Many publishers provide LaTeX templates. Some journals offer their own templates, and researchers who have created templates that adhere to the Instructions to Authors for a given journal often make these files freely available. Given that many journals now accept a generically formatted PDF for a first submission, it is also possible to use a generic article template to prepare your paper. Lastly, there are online services that let researchers prepare LaTeX articles in the cloud. These services, such as Overleaf and Authorea, provide hundreds of templates. Importantly, using these services means you don’t need LaTeX installed on your computer.
Depending on the service and whether or not your institution has an agreement or contract with the service, you may be able to collaborate simultaneously with other authors, regardless of where they are located in the world. Moreover, you can leave comments, track changes, retain a history of your changes, and integrate version control software such as git and GitHub. Given the benefits of such services, they will be the focus of an upcoming post.

### Summary

LaTeX templates can save you lots of time. However, there is more to writing a paper in LaTeX than simply downloading a template and filling in the required bits. How do you generate the pretty PDF? How do you get references and figures into the document? How do you share these files with your co-authors? These are all important questions, and we will deal with them in the next few blog posts.
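To recap the escaping convention discussed earlier, here is a minimal illustration of reserved characters in a LaTeX document (the sentences are invented placeholders):

```latex
This effect was seen in 25\% of trials.        % \% prints a literal percent sign
The reagents cost \$40 \& arrived in 3~hours.  % \$ and \& are likewise escaped
Use \textbackslash{} to typeset a backslash.
% This entire line is a comment and will not appear in the PDF.
```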
http://mathhelpforum.com/math-topics/46547-factor-theorem-high-school-algebra-prob.html
Math Help - factor theorem high school algebra prob

1. factor theorem high school algebra prob

There are two polynomials (the polynomials are given, but some coefficients are missing, e.g. x^3 - bx^2 + 3x + 5 ...) sharing one common factor (this common factor may be of any degree). Can we get a relationship between the unknown coefficients of the two polynomials? How can we put this information in mathematical terms?

2. Originally Posted by p_perera

There are two polynomials (the polynomials are given, but some coefficients are missing, e.g. x^3 - bx^2 + 3x + 5 ...) sharing one common factor (this common factor may be of any degree). Can we get a relationship between the unknown coefficients of the two polynomials? How can we put this information in mathematical terms?

Cannot understand you very well. How about posting the two polynomials with all their missing coefficients, etc., and then let us see what we can do about them.
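One standard way to turn "these two polynomials share a common factor" into a relation between the coefficients is the resultant: Res(p, q) = 0 exactly when p and q share a nonconstant common factor. A Python sketch using the poster's example p = x^3 - bx^2 + 3x + 5 and a hypothetical second polynomial q = x^2 - 1 (invented here for illustration):

```python
from math import prod

def p(x, b):
    """The poster's example polynomial x^3 - b x^2 + 3x + 5, coefficient b unknown."""
    return x**3 - b*x**2 + 3*x + 5

def resultant_with_q(b):
    """Res(p, q) for q = x^2 - 1, via the product formula
    Res(p, q) = lc(q)^deg(p) * prod of p over the roots of q.
    It vanishes exactly when p and q share a nonconstant common factor."""
    q_roots = [1.0, -1.0]  # roots of x^2 - 1
    return prod(p(r, b) for r in q_roots)

# b = 9 makes (x - 1) a common factor; b = 1 makes (x + 1) one
print(resultant_with_q(1.0), resultant_with_q(9.0), resultant_with_q(2.0))
```

Setting the resultant to zero and solving for the unknown coefficients gives exactly the kind of relationship the question asks about.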
https://www.thejournal.club/c/paper/23570/
#### A New Family of Bounded Divergence Measures and Application to Signal Detection

##### Shivakumar Jolad, Ahmed Roman, Mahesh C. Shastry, Mihir Gadgil, Ayanendranath Basu

We introduce a new one-parameter family of divergence measures, called bounded Bhattacharyya distance (BBD) measures, for quantifying the dissimilarity between probability distributions. These measures are bounded, symmetric and positive semi-definite, and do not require absolute continuity. In the asymptotic limit, the BBD measure approaches the squared Hellinger distance. A generalized BBD measure for multiple distributions is also introduced. We prove an extension of a theorem of Bradt and Karlin for BBD relating Bayes error probability and divergence ranking. We show that BBD belongs to the class of generalized Csiszar f-divergences and derive properties such as curvature and the relation to Fisher information. For distributions with vector-valued parameters, the curvature matrix is related to the Fisher–Rao metric. We derive certain inequalities between BBD and well-known measures such as the Hellinger and Jensen–Shannon divergences, as well as bounds on the Bayesian error probability. We give an application of these measures to the problem of signal detection, where we compare two monochromatic signals buried in white noise and differing in frequency and amplitude.
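The squared Hellinger distance that the BBD measures approach in the asymptotic limit can be computed as follows (a Python sketch for discrete distributions; the BBD formula itself is not given in the abstract):

```python
import numpy as np

def hellinger_squared(p, q):
    """Squared Hellinger distance between two discrete distributions:
    H^2(p, q) = 1 - sum_i sqrt(p_i * q_i),
    i.e. one minus the Bhattacharyya coefficient. This equals the
    equivalent form (1/2) * sum_i (sqrt(p_i) - sqrt(q_i))^2."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 1.0 - np.sum(np.sqrt(p * q))

print(hellinger_squared([0.5, 0.5], [0.5, 0.5]))  # -> 0.0 (identical)
print(hellinger_squared([1.0, 0.0], [0.0, 1.0]))  # -> 1.0 (disjoint support)
```

Boundedness in [0, 1], visible in the two extreme cases above, is the property the BBD family is built to preserve.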
http://tex.stackexchange.com/questions/26374/add-equations-to-text?answertab=votes
I've got a really simple question. I'd like to write a mathematical proof in LaTeX, so my document will contain both text and formulae embedded in it. For example, I want to add this to it: How can I write this efficiently, without putting the entire text into an \mbox{} element?

migrated from stackoverflow.com Aug 23 '11 at 22:42. This question came from our site for professional and enthusiast programmers.

@murgatroid99: You're right, I didn't know of this site! – ryyst Aug 23 '11 at 15:44

Only wrap the equations in $...$; you can enter and leave math mode all throughout the document. This page shows examples with formulas both inline in a paragraph and on their own lines.

Thanks, this is exactly what I wanted! – ryyst Aug 23 '11 at 15:55

If most of the content is math mode, then use \text{} to embed text within equations:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[ \text{Let } D \subseteq \mathbf{R}, D \ne 0 \]
or in inline mode: $\text{Let } D \subseteq \mathbf{R}, D \ne 0$
\end{document}
```
https://appledorem.com/2018/06/25/p2153-exercise-sdoi2009/
# P2153: Exercise [SDOI2009]

Problem Description: There are N nodes and M edges; each node may be passed through at most once, and the edges carry weights. Maximize the number of times you can go from 1 to N without passing through any node twice (except nodes 1 and N), and, for that maximum, minimize the total weight of the edges used. Note that an edge between 1 and N may exist, and you can pass through it at most once.

Input Format: The first line contains two numbers N and M. Each of the next M lines contains three integers u, v and w, denoting a directed edge from u to v with weight w.

Output Format: Two numbers on one line. The first is the maximum number of trips and the second is the minimum total weight.

Sample Input:

7 10
1 2 1
1 3 1
2 4 1
3 4 1
4 5 1
4 6 1
2 5 5
3 6 6
5 7 1
6 7 1

Sample Output:

2 11

Solution: This is a classical minimum-cost maximum-flow problem. The only tricky part is enforcing the requirement that each node can be passed through only once. We achieve this by splitting each node into two nodes connected by an edge of capacity one, which limits the flow through that node. Also remember to handle nodes 1 and N specially: as the source and the sink they must have infinite capacity.
```cpp
#include <algorithm>
#include <cstring>
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

#define inf 0x7f7f7f7f
const int maxn = 2005; // must hold 2N nodes after node splitting

int N, M;

struct edge {
    int from, to, cap, flow, cost;
    edge(int a = 0, int b = 0, int c = 0, int d = 0, int e = 0)
        : from(a), to(b), cap(c), flow(d), cost(e) {}
};

vector<edge> es;
vector<int> G[maxn];
int dis[maxn], S, T;
bool vis[maxn];

void addedge(int from, int to, int cap, int cost) {
    es.push_back(edge(from, to, cap, 0, cost));
    es.push_back(edge(to, from, 0, 0, -cost)); // residual edge
    int m = es.size();
    G[from].push_back(m - 2);
    G[to].push_back(m - 1);
}

// SPFA: cheapest distances from S on the residual graph
bool spfa() {
    memset(dis, 0x7f, sizeof(dis));
    memset(vis, 0, sizeof(vis));
    dis[S] = 0;
    vis[S] = 1;
    queue<int> q;
    q.push(S);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int i = 0; i < (int)G[u].size(); i++) {
            edge e = es[G[u][i]];
            if (e.cap && dis[e.to] > dis[u] + e.cost) {
                dis[e.to] = dis[u] + e.cost;
                if (!vis[e.to]) { vis[e.to] = 1; q.push(e.to); }
            }
        }
        vis[u] = 0;
    }
    return dis[T] != inf;
}

int ans = 0; // accumulated cost

// DFS augments along shortest paths only (dis[e.to] == dis[x] + e.cost)
int dfs(int x, int f) {
    if (x == T) { vis[T] = 1; return f; } // mark T so zkw() keeps augmenting
    vis[x] = 1;
    int w, used = 0;
    for (int i = 0; i < (int)G[x].size(); i++) {
        edge &e = es[G[x][i]];
        edge &er = es[G[x][i] ^ 1];
        if (e.cap && !vis[e.to] && dis[e.to] == dis[x] + e.cost) {
            w = dfs(e.to, min(f - used, e.cap));
            ans += w * e.cost;
            e.cap -= w;
            er.cap += w;
            used += w;
            if (used == f) return f;
        }
    }
    return used;
}

int zkw() {
    int flow = 0;
    while (spfa()) {
        vis[T] = 1;
        while (vis[T]) {
            memset(vis, 0, sizeof(vis));
            flow += dfs(S, inf);
        }
    }
    return flow;
}

int main() {
    ios::sync_with_stdio(false);
    cin >> N >> M;
    // Split node i into in-node i and out-node i+N, capacity 1 between them
    for (int i = 2; i <= N - 1; i++) addedge(i, i + N, 1, 0);
    // Source (node 1) and sink (node N) keep infinite capacity
    addedge(1, 1 + N, inf, 0);
    addedge(N, N + N, inf, 0);
    for (int i = 0; i < M; i++) {
        int u, v, w;
        cin >> u >> v >> w;
        addedge(u + N, v, 1, w); // leave u's out-node, enter v's in-node
    }
    S = 1;
    T = N + N;
    int flow = zkw();
    cout << flow << " " << ans << endl;
    return 0;
}
```
http://wiki.call-cc.org/man/5/Module%20%28chicken%20base%29
## Module (chicken base) Core procedures and macros, acting as basic extensions to the R5RS standard and other essential features. This module is used by default, unless a program is compiled with the -explicit-use option. ### Numeric predicates These allow you to make a more precise differentiation between number types and their properties, not provided by R5RS. #### fixnum? [procedure] (fixnum? X) Returns #t if X is a fixnum, or #f otherwise. #### flonum? [procedure] (flonum? X) Returns #t if X is a flonum, or #f otherwise. #### bignum? [procedure] (bignum? X) Returns #t if X is a bignum (an integer too large to fit in a fixnum), or #f otherwise. #### exact-integer? [procedure] (exact-integer? X) Returns #t if X is an exact integer (i.e., a fixnum or a bignum), or #f otherwise. This procedure is compatible with the definition from the R7RS (scheme base) library. #### cplxnum? [procedure] (cplxnum? X) Returns #t if X is a true complex number (it has an imaginary component), or #f otherwise. Please note that complex? will always return #t for any number type supported by CHICKEN, so you can use this predicate if you want to know the representational type of a number. #### ratnum? [procedure] (ratnum? X) Returns #t if X is a true rational number (a fraction with a denominator that is not 1), or #f otherwise. Please note that rational? will always return #t for any number type supported by CHICKEN except complex numbers and non-finite flonums, so you can use this predicate if you want to know the representational type of a number. #### nan? [procedure] (nan? N) Returns #t if N is not a number (an IEEE flonum NaN value). If N is a complex number, it is considered NaN if either its real or imaginary component is NaN. This procedure is compatible with the definition from the R7RS (scheme inexact) library. #### infinite? [procedure] (infinite? N) Returns #t if N is negative or positive infinity, and #f otherwise.
If N is a complex number, it is considered infinite if it has a real or imaginary component that is infinite. This procedure is compatible with the definition from the R7RS (scheme inexact) library. #### finite? [procedure] (finite? N) Returns #t if N represents a finite number and #f otherwise. Positive and negative infinity as well as NaNs are not considered finite. If N is a complex number, it is considered finite if both the real and imaginary components are finite. This procedure is compatible with the definition from the R7RS (scheme inexact) library. #### equal=? [procedure] (equal=? X Y) Similar to the standard procedure equal?, but compares numbers using the = operator, so equal=? allows structural comparison in combination with comparison of numerical data by value. ### Arithmetic #### add1/sub1 [procedure] (add1 N) [procedure] (sub1 N) Adds or subtracts 1 from N, respectively. #### exact-integer-sqrt [procedure] (exact-integer-sqrt K) Returns two values s and r, where s^2 + r = K and K < (s+1)^2. In other words, s is the largest integer whose square does not exceed K, and r is the remainder when K is not a perfect square. This procedure is compatible with the definition from the R7RS (scheme base) library. #### exact-integer-nth-root [procedure] (exact-integer-nth-root K N) Like exact-integer-sqrt, but with any base value. Calculates \sqrt[N]{K}, the Nth root of K, and returns two values s and r where s^N + r = K and K < (s+1)^N. #### Division with quotient and remainder [procedure] (quotient&remainder X Y) [procedure] (quotient&modulo X Y) Returns two values: the quotient and the remainder (or modulo) of X divided by Y. Could be defined as (values (quotient X Y) (remainder X Y)), but is much more efficient when dividing very large numbers. #### signum [procedure] (signum N) For real numbers, returns 1 if N is positive, -1 if N is negative or 0 if N is zero. signum is exactness preserving. For complex numbers, returns a complex number of the same angle but with magnitude 1.
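The exact-integer-sqrt contract (s^2 + r = K and K < (s+1)^2) maps directly onto Python's math.isqrt. This is only an illustrative sketch of the specification stated above, not CHICKEN code:

```python
import math

def exact_integer_sqrt(k):
    """Return (s, r) with s*s + r == k and k < (s + 1)**2, per the stated contract."""
    s = math.isqrt(k)   # floor of the square root
    return s, k - s * s
```

For example, `exact_integer_sqrt(10)` yields `(3, 1)`: 3 is the largest integer whose square does not exceed 10, and 1 is the remainder.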
### Lazy evaluation #### delay-force [syntax] (delay-force <expression>) The expression (delay-force expression) is conceptually similar to (delay (force expression)), with the difference that forcing the result of delay-force will in effect result in a tail call to (force expression), while forcing the result of (delay (force expression)) might not. Thus iterative lazy algorithms that might result in a long series of chains of delay and force can be rewritten using delay-force to prevent consuming unbounded space during evaluation. This special form is compatible with the definition from the R7RS (scheme lazy) library. See the description of force under Control features in the "scheme" module documentation for a more complete description of delayed evaluation. For more information regarding the unbounded build-up of space, see the SRFI-45 rationale. #### make-promise [procedure] (make-promise obj) The make-promise procedure returns a promise which, when forced, will return obj. It is similar to delay, but does not delay its argument: it is a procedure rather than syntax. If obj is already a promise, it is returned. This procedure is compatible with the definition from the R7RS (scheme lazy) library. #### promise? [procedure] (promise? X) Returns #t if X is a promise returned by delay, or #f otherwise. This procedure is compatible with the definition from the R7RS (scheme lazy) library. ### Input/Output #### current-error-port [procedure] (current-error-port [PORT]) Returns the default error output port. If PORT is given, then that port is selected as the new current error output port. Note that the default error output port is not buffered. Use set-buffering-mode! if you need a different behaviour. #### print [procedure] (print [EXP1 ...]) Outputs the optional arguments EXP1 ... using display and writes a newline character to the port that is the value of (current-output-port). Returns (void).
#### print* [procedure] (print* [EXP1 ...]) Similar to print, but does not output a terminating newline character and performs a flush-output after writing its arguments. ### Interrupts and error-handling #### enable-warnings [procedure] (enable-warnings [BOOL]) Enables or disables warnings, depending on whether BOOL is true or false. If called with no arguments, this procedure returns #t if warnings are currently enabled, or #f otherwise. Note that this is not a parameter. The current state (whether warnings are enabled or disabled) is global and not thread-local. #### error [procedure] (error [LOCATION] [STRING] EXP ...) Prints an error message, writes all extra arguments to the value of (current-error-port) and invokes the current exception-handler. This conforms to SRFI-23. If LOCATION is given and a symbol, it specifies the location (the name of the procedure) where the error occurred. #### assert [syntax] (assert EXP [OBJ ...]) Evaluates EXP; if it returns #f, applies error on OBJ .... When compiling in unsafe mode, assertions of this kind are disabled. #### get-call-chain [procedure] (get-call-chain [START [THREAD]]) Returns a list with the call history. Backtrace information is only generated in code compiled without -no-trace and evaluated code. If the optional argument START is given, the backtrace starts at this offset, i.e. when START is 1, the next to last trace-entry is printed, and so on. If the optional argument THREAD is given, then the call-chain will only be constructed for calls performed by this thread. #### print-call-chain Prints a backtrace of the procedure call history to PORT, which defaults to (current-output-port). The output is prefixed by the HEADER, which defaults to "\n\tCall history:\n". #### procedure-information [procedure] (procedure-information PROC) Returns an s-expression with debug information for the procedure PROC, or #f, if PROC has no associated debug information.
#### warning [procedure] (warning MESSAGE [EXP ...]) Displays a warning message composed of MESSAGE and the optional EXP arguments, then continues execution (warnings are shown only if they are enabled via enable-warnings). MESSAGE and EXP may be any object. ### Lists #### alist-ref [procedure] (alist-ref KEY ALIST [TEST [DEFAULT]]) Looks up KEY in ALIST using TEST as the comparison function (or eqv? if no test was given) and returns the cdr of the found pair, or DEFAULT (which defaults to #f). #### alist-update [procedure] (alist-update KEY VALUE ALIST [TEST]) [procedure] (alist-update! KEY VALUE ALIST [TEST]) If the list ALIST contains a pair of the form (KEY . X), then this procedure replaces X with VALUE and returns ALIST. If ALIST contains no such item, then alist-update returns ((KEY . VALUE) . ALIST). The optional argument TEST specifies the comparison procedure to search a matching pair in ALIST and defaults to eqv?. alist-update! is the destructive version of alist-update. #### atom? [procedure] (atom? X) Returns #t if X is not a pair. #### butlast [procedure] (butlast LIST) Returns a fresh list with all elements but the last of LIST. #### chop [procedure] (chop LIST N) Returns a new list of sublists, where each sublist contains N elements of LIST. If LIST has a length that is not a multiple of N, then the last sublist contains the remaining elements. (chop '(1 2 3 4 5 6) 2) ==> ((1 2) (3 4) (5 6)) (chop '(a b c d) 3) ==> ((a b c) (d)) #### compress [procedure] (compress BLIST LIST) Returns a new list with elements taken from LIST with corresponding true values in the list BLIST. (define nums '(99 100 110 401 1234)) (compress (map odd? nums) nums) ==> (99 401) #### flatten [procedure] (flatten LIST1 ...) Returns LIST1 ... concatenated together, with nested lists removed (flattened).
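To make the chop and compress semantics above concrete for readers unfamiliar with Scheme, here is a small Python sketch (the function names mirror the CHICKEN procedures; this is an illustration of the documented behavior, not the actual implementation):

```python
def chop(lst, n):
    """Split lst into sublists of n elements; the last one may be shorter."""
    return [lst[i:i + n] for i in range(0, len(lst), n)]

def compress(blist, lst):
    """Keep the elements of lst whose corresponding entry in blist is true."""
    return [x for b, x in zip(blist, lst) if b]
```

These reproduce the documented examples: chopping a six-element list into pairs, and compressing a number list by an odd?-mask keeps 99 and 401.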
#### foldl [procedure] (foldl PROCEDURE INIT LIST) Applies PROCEDURE to the elements from LIST, beginning from the left: (foldl + 0 '(1 2 3)) ==> (+ (+ (+ 0 1) 2) 3) Note that the order of arguments taken by PROCEDURE is different from the SRFI-1 fold procedure, but matches the more natural order used in Haskell and Objective Caml. #### foldr [procedure] (foldr PROCEDURE INIT LIST) Applies PROCEDURE to the elements from LIST, beginning from the right: (foldr + 0 '(1 2 3)) ==> (+ 1 (+ 2 (+ 3 0))) #### intersperse [procedure] (intersperse LIST X) Returns a new list with X placed between each element. #### join [procedure] (join LISTOFLISTS [LIST]) Concatenates the lists in LISTOFLISTS with LIST placed between each sublist. LIST defaults to the empty list. (join '((a b) (c d) (e)) '(x y)) ==> (a b x y c d x y e) (join '((p q) () (r (s) t)) '(-)) ==> (p q - - r (s) t) join could be implemented as follows: (define (join lstoflsts #!optional (lst '())) (apply append (intersperse lstoflsts lst)) ) #### rassoc [procedure] (rassoc KEY LIST [TEST]) Similar to assoc, but compares KEY with the cdr of each pair in LIST using TEST as the comparison procedure (which defaults to eqv?). #### tail? [procedure] (tail? X LIST) Returns true if X is one of the tails (cdr's) of LIST. ### Vectors #### vector-copy! [procedure] (vector-copy! VECTOR1 VECTOR2 [COUNT]) Copies contents of VECTOR1 into VECTOR2. If the argument COUNT is given, it specifies the maximal number of elements to be copied. If not given, the minimum of the lengths of the argument vectors is copied. Exceptions: (exn bounds) #### vector-resize [procedure] (vector-resize VECTOR N [INIT]) Creates and returns a new vector with the contents of VECTOR and length N. If N is greater than the original length of VECTOR, then all additional items are initialized to INIT. If INIT is not specified, the contents are initialized to some unspecified value.
#### subvector [procedure] (subvector VECTOR FROM [TO]) Returns a new vector with elements taken from VECTOR in the given range. TO defaults to (vector-length VECTOR). subvector was introduced in CHICKEN 4.7.3. ### Combinators #### constantly [procedure] (constantly X ...) Returns a procedure that always returns the values X ... regardless of the number and value of its arguments. (constantly X) <=> (lambda args X) #### complement [procedure] (complement PROC) Returns a procedure that returns the boolean inverse of PROC. (complement PROC) <=> (lambda (x) (not (PROC x))) #### compose [procedure] (compose PROC1 PROC2 ...) Returns a procedure that represents the composition of the argument-procedures PROC1 PROC2 .... (compose F G) <=> (lambda args (call-with-values (lambda () (apply G args)) F)) (compose) is equivalent to values. #### conjoin [procedure] (conjoin PRED ...) Returns a procedure that returns #t if its argument satisfies the predicates PRED .... ((conjoin odd? positive?) 33) ==> #t ((conjoin odd? positive?) -33) ==> #f #### disjoin [procedure] (disjoin PRED ...) Returns a procedure that returns #t if its argument satisfies any predicate PRED .... ((disjoin odd? positive?) 32) ==> #t ((disjoin odd? positive?) -32) ==> #f #### each [procedure] (each PROC ...) Returns a procedure that applies PROC ... to its arguments, and returns the result(s) of the last procedure application. For example (each pp eval) is equivalent to (lambda args (apply pp args) (apply eval args) ) (each PROC) is equivalent to PROC and (each) is equivalent to void. #### flip [procedure] (flip PROC) Returns a two-argument procedure that calls PROC with its arguments swapped: (flip PROC) <=> (lambda (x y) (PROC y x)) #### identity [procedure] (identity X) Returns its sole argument X. #### list-of? [procedure] (list-of? PRED) Returns a procedure of one argument that returns #t when applied to a list of elements that all satisfy the predicate procedure PRED, or #f otherwise. ((list-of? 
even?) '(1 2 3)) ==> #f ((list-of? number?) '(1 2 3)) ==> #t #### o [procedure] (o PROC ...) A single-value version of compose (slightly faster). (o) is equivalent to identity. ### User-defined named characters #### char-name [procedure] (char-name SYMBOL-OR-CHAR [CHAR]) This procedure can be used to inquire about character names or to define new ones. With a single argument the behavior is as follows: If SYMBOL-OR-CHAR is a symbol, then char-name returns the character with this name, or #f if no character is defined under this name. If SYMBOL-OR-CHAR is a character, then the name of the character is returned as a symbol, or #f if the character has no associated name. If the optional argument CHAR is provided, then SYMBOL-OR-CHAR should be a symbol that will be the new name of the given character. If multiple names designate the same character, then write will use the character name that was defined last. (char-name 'space) ==> #\space (char-name #\space) ==> space (char-name 'bell) ==> #f (char-name (integer->char 7)) ==> #f (char-name 'bell (integer->char 7)) (char-name 'bell) ==> #\bell (char->integer (char-name 'bell)) ==> 7 ### The unspecified value #### void [procedure] (void ARGUMENT ...) Ignores ARGUMENT ... and returns an unspecified value. ### Continuations #### call/cc [procedure] (call/cc PROCEDURE) An alias for call-with-current-continuation. This procedure is compatible with the definition from the R7RS (scheme base) library. ### Symbols #### Symbol utilities ##### symbol-append [procedure] (symbol-append SYMBOL1 ...) Creates a new symbol from the concatenated names of the argument symbols (SYMBOL1 ...). #### Uninterned symbols ("gensyms") Symbols may be "interned" or "uninterned". Interned symbols are registered in a global table, and when read back from a port are identical to a symbol written before: (define sym 'foo) (eq?
sym (with-input-from-string (with-output-to-string (lambda () (write sym))) read)) => #t Uninterned symbols on the other hand are not globally registered and so multiple symbols with the same name may coexist: (define sym (gensym 'foo)) ; sym is an uninterned symbol like "foo42" (eq? sym (with-input-from-string ; the symbol read will be an interned symbol (with-output-to-string (lambda () (write sym))) read)) => #f (eq? (string->uninterned-symbol "foo") (string->uninterned-symbol "foo")) => #f Use uninterned symbols if you need to generate unique values that can be compared quickly, for example as keys into a hash-table or association list. Note that uninterned symbols lose their uniqueness property when written to a file and read back in, as in the example above. ##### gensym [procedure] (gensym [STRING-OR-SYMBOL]) Returns a newly created uninterned symbol. If an argument is provided, the new symbol is prefixed with that argument. ##### string->uninterned-symbol [procedure] (string->uninterned-symbol STRING) Returns a newly created, unique symbol with the name STRING. ### Setters #### setter [procedure] (setter PROCEDURE) Returns the setter-procedure of PROCEDURE, or signals an error if PROCEDURE has no associated setter-procedure. Note that (set! (setter PROC) ...) for a procedure that has no associated setter procedure yet is a very slow operation (the old procedure is replaced by a modified copy, which involves a garbage collection). #### getter-with-setter [procedure] (getter-with-setter GETTER SETTER) Returns a copy of the procedure GETTER with the associated setter procedure SETTER. Contrary to the SRFI specification, the setter of the returned procedure may be changed. ### Binding forms for optional arguments #### optional [syntax] (optional ARGS DEFAULT) Use this form for procedures that take a single optional argument. If ARGS is the empty list, DEFAULT is evaluated and returned; otherwise the first element of the list ARGS is returned.
It is an error if ARGS contains more than one value. (define (incr x . i) (+ x (optional i 1))) (incr 10) ==> 11 (incr 12 5) ==> 17 #### case-lambda [syntax] (case-lambda (LAMBDA-LIST1 EXP1 ...) ...) Expands into a lambda that invokes the body following the first matching lambda-list. (define plus (case-lambda (() 0) ((x) x) ((x y) (+ x y)) ((x y z) (+ (+ x y) z)) (args (apply + args)))) (plus) ==> 0 (plus 1) ==> 1 (plus 1 2 3) ==> 6 This special form is also compatible with the definition from the R7RS (scheme case-lambda) library. #### let-optionals [syntax] (let-optionals ARGS ((VAR1 DEFAULT1) ...) BODY ...) Binding constructs for optional procedure arguments. ARGS is normally a rest-parameter taken from a lambda-list. let-optionals binds VAR1 ... to available arguments in parallel, or to DEFAULT1 ... if not enough arguments were provided. It is an error if any excess arguments are provided. (let-optionals '(one two) ((a 1) (b 2) (c 3)) (list a b c) ) ==> (one two 3) #### let-optionals* [syntax] (let-optionals* ARGS ((VAR1 DEFAULT1) ... [RESTVAR]) BODY ...) Binding constructs for optional procedure arguments. ARGS is normally a rest-parameter taken from a lambda-list. let-optionals* binds VAR1 ... sequentially, so every variable sees the previous ones. If a single variable RESTVAR is given, then it is bound to any remaining arguments, otherwise it is an error if any excess arguments are provided. (let-optionals* '(one two) ((a 1) (b 2) (c a)) (list a b c) ) ==> (one two one) ### Other binding forms #### and-let* [syntax] (and-let* (BINDING ...) EXP1 EXP2 ...) Bind sequentially and execute body. BINDING can be a list of a variable and an expression, a list with a single expression, or a single variable.
If the value of an expression bound to a variable is #f, the and-let* form evaluates to #f (and the subsequent bindings and the body are not executed). Otherwise the next binding is performed. If all bindings/expressions evaluate to a true result, the body is executed normally and the result of the last expression is the result of the and-let* form. See also the documentation for SRFI-2. #### letrec* [syntax] (letrec* ((VARIABLE EXPRESSION) ...) BODY ...) Implements R6RS/R7RS letrec*. letrec* is similar to letrec but binds the variables sequentially and is to letrec what let* is to let. This special form is compatible with the definition from the R7RS (scheme base) library. #### rec [syntax] (rec NAME EXPRESSION) [syntax] (rec (NAME VARIABLE ...) BODY ...) Allows simple definitions of recursive procedures and values. (rec NAME EXPRESSION) is equivalent to (letrec ((NAME EXPRESSION)) NAME) and (rec (NAME VARIABLE ...) BODY ...) is the same as (letrec ((NAME (lambda (VARIABLE ...) BODY ...))) NAME). #### cut [syntax] (cut SLOT ...) [syntax] (cute SLOT ...) Syntactic sugar for specializing parameters of a procedure, as specified by SRFI-26; cute evaluates the non-slot expressions once, at the time the specialized procedure is constructed. #### define-values [syntax] (define-values (NAME ...) VALUEEXP) [syntax] (define-values (NAME1 ... NAMEn . NAMEn+1) VALUEEXP) [syntax] (define-values NAME VALUEEXP) Defines several variables at once, with the result values of expression VALUEEXP, similar to set!-values. This special form is compatible with the definition from the R7RS (scheme base) library. #### fluid-let [syntax] (fluid-let ((VAR1 X1) ...) BODY ...) Binds the variables VAR1 ... dynamically to the values X1 ... during execution of BODY .... This implements SRFI-15. #### let-values [syntax] (let-values (((NAME ...) VALUEEXP) ...) BODY ...) Binds multiple variables to the result values of VALUEEXP .... All variables are bound simultaneously. Like define-values, the (NAME ...) expression can be any basic lambda list (dotted tail notation is supported).
This special form implements SRFI-11, and it is also compatible with the definition from the R7RS (scheme base) library. #### let*-values [syntax] (let*-values (((NAME ...) VALUEEXP) ...) BODY ...) Binds multiple variables to the result values of VALUEEXP .... The variables are bound sequentially. Like let-values, the (NAME ...) expression can be any basic lambda list (dotted tail notation is supported). This is also part of SRFI-11 and is also compatible with the definition from the R7RS (scheme base) library. (let*-values (((a b) (values 2 3)) ((p) (+ a b)) ) p) ==> 5 #### letrec-values [syntax] (letrec-values (((NAME ...) VALUEEXP) ...) BODY ...) Binds the result values of VALUEEXP ... to multiple variables at once. All variables are mutually recursive. Like let-values, the (NAME ...) expression can be any basic lambda list (dotted tail notation is supported). (letrec-values (((odd even) (values (lambda (n) (if (zero? n) #f (even (sub1 n)))) (lambda (n) (if (zero? n) #t (odd (sub1 n)))) ) ) ) (odd 17) ) ==> #t #### receive [syntax] (receive (NAME ...) VALUEEXP BODY ...) [syntax] (receive (NAME1 ... NAMEn . NAMEn+1) VALUEEXP BODY ...) [syntax] (receive NAME VALUEEXP BODY ...) SRFI-8. Syntactic sugar for call-with-values. Binds variables to the result values of VALUEEXP and evaluates BODY ..., similar to define-values but lexically scoped. (receive VALUEEXP) is equivalent to (receive _ VALUEEXP _). This shortened form is not described by SRFI-8. #### set!-values [syntax] (set!-values (NAME ...) VALUEEXP) [syntax] (set!-values (NAME1 ... NAMEn . NAMEn+1) VALUEEXP) [syntax] (set!-values NAME VALUEEXP) Assigns the result values of expression VALUEEXP to multiple variables, similar to define-values. #### nth-value [syntax] (nth-value N EXP) Returns the Nth value (counting from zero) of the values returned by expression EXP. ### Parameters Parameters are CHICKEN's form of dynamic variables, except that they are procedures rather than actual variables.
A parameter is a procedure of zero or one arguments. To retrieve the value of a parameter call the parameter-procedure with zero arguments. To change the setting of the parameter, call the parameter-procedure with the new value as argument: (define foo (make-parameter 123)) (foo) ==> 123 (foo 99) (foo) ==> 99 Parameters are fully thread-local; each thread of execution owns a local copy of a parameter's value. CHICKEN implements SRFI-39, which is also standardized by R7RS. #### parameterize [syntax] (parameterize ((PARAMETER1 X1) ...) BODY ...) Binds the parameters PARAMETER1 ... dynamically to the values X1 ... during execution of BODY .... (see also: make-parameter in Parameters). Note that PARAMETER may be any expression that evaluates to a parameter procedure. This special form is compatible with the definition from the R7RS (scheme base) library. #### make-parameter [procedure] (make-parameter VALUE [GUARD]) Returns a procedure that accepts zero or one argument. Invoking the procedure with zero arguments returns VALUE. Invoking the procedure with one argument changes its value to the value of that argument and returns the new value (subsequent invocations with zero arguments return the new value). GUARD should be a procedure of a single argument. Any new values of the parameter (even the initial value) are passed to this procedure. The guard procedure should check the value and/or convert it to an appropriate form. This procedure is compatible with the definition from the R7RS (scheme base) library. ### Substitution forms and macros #### define-constant [syntax] (define-constant NAME CONST) Defines a variable with a constant value, evaluated at compile-time. Any reference to such a constant should appear textually after its definition. This construct is equivalent to define when evaluated or interpreted. Constant definitions should only appear at toplevel.
Note that constants are local to the current compilation unit and are not available outside of the source file in which they are defined. Names of constants still exist in the Scheme namespace and can be lexically shadowed. If the value is mutable, then the compiler is careful to preserve its identity. CONST may be any constant expression, and may also refer to constants defined via define-constant previously, but it must be possible to evaluate the expression at compile-time. #### define-inline [syntax] (define-inline (NAME VAR ...) BODY ...) [syntax] (define-inline (NAME VAR ... . VAR) BODY ...) [syntax] (define-inline NAME EXP) Defines an inline procedure. Any occurrence of NAME will be replaced by EXP or (lambda (VAR ... [. VAR]) BODY ...). This is similar to a macro, but variable names and scope are handled correctly. Inline substitutions take place after macro-expansion, and any reference to NAME should appear textually after its definition. Inline procedures are local to the current compilation unit and are not available outside of the source file in which they are defined. Names of inline procedures still exist in the Scheme namespace and can be lexically shadowed. Inline definitions should only appear at the toplevel. Note that the inline-limit compiler option does not affect inline procedure expansion, and self-referential inline procedures may cause the compiler to enter an infinite loop. In the third form, EXP must be a lambda expression. This construct is equivalent to define when evaluated or interpreted. ### Conditional forms #### unless [syntax] (unless TEST EXP1 EXP2 ...) Equivalent to: (if (not TEST) (begin EXP1 EXP2 ...)) #### when [syntax] (when TEST EXP1 EXP2 ...) Equivalent to: (if TEST (begin EXP1 EXP2 ...)) ### Record structures #### define-record [syntax] (define-record NAME SLOTNAME ...) Defines a record type. This defines a number of procedures for creating, accessing, and modifying record members. 
Call make-NAME to create an instance of the structure (with one initialization-argument for each slot, in the listed order). (NAME? STRUCT) tests any object for being an instance of this structure. Slots are accessed via (NAME-SLOTNAME STRUCT) and updated using (NAME-SLOTNAME-set! STRUCT VALUE). (define-record point x y) (define p1 (make-point 123 456)) (point? p1) ==> #t (point-x p1) ==> 123 (point-y-set! p1 99) (point-y p1) ==> 99 ##### SRFI-17 setters SLOTNAME may alternatively also be of the form (setter SLOTNAME) In this case the slot can be read with (NAME-SLOTNAME STRUCT) as usual, and modified with (set! (NAME-SLOTNAME STRUCT) VALUE) (the slot-accessor has an associated SRFI-17 "setter" procedure) instead of the usual (NAME-SLOTNAME-set! STRUCT VALUE). (define-record point (setter x) (setter y)) (define p1 (make-point 123 456)) (point? p1) ==> #t (point-x p1) ==> 123 (set! (point-y p1) 99) (point-y p1) ==> 99 #### define-record-type [syntax] (define-record-type NAME (CONSTRUCTOR TAG ...) PREDICATE (FIELD ACCESSOR [MODIFIER]) ...) Defines a record type with a constructor, predicate, and field accessors, as in SRFI-9. As an extension the MODIFIER may have the form (setter PROCEDURE), which will define a SRFI-17 setter-procedure for the given PROCEDURE that sets the field value. Usually PROCEDURE has the same name as ACCESSOR (but it doesn't have to). This special form is also compatible with the definition from the R7RS (scheme base) library. #### record-printer [procedure] (record-printer NAME) Returns the procedure used to print records of the type NAME if one has been set with set-record-printer!, #f otherwise. #### set-record-printer! [procedure] (set-record-printer! NAME PROCEDURE) [procedure] (set! (record-printer NAME) PROCEDURE) Defines a printing method for records of the type NAME by associating a procedure with the record type. When a record of this type is written using display, write or print, then the procedure is called with two arguments: the record to be printed and an output-port. (define-record-type foo (make-foo x y z) foo?
(x foo-x) (y foo-y) (z foo-z)) (define f (make-foo 1 2 3)) (set-record-printer! foo (lambda (x out) (fprintf out "#,(foo ~S ~S ~S)" (foo-x x) (foo-y x) (foo-z x)))) (define s (with-output-to-string (lambda () (write f)))) s ==> "#,(foo 1 2 3)" (equal? f (with-input-from-string s read)) ==> #t ### Other forms #### include [syntax] (include STRING) Include toplevel-expressions from the given source file in the currently compiled/interpreted program. If the included file has the extension .scm, then it may be omitted. The file is searched for in the current directory and all directories specified by the -include-path option. #### include-relative [syntax] (include-relative STRING) Works like include, but the filename is searched for relative to the including file rather than the current directory. ### Making extra libraries and extensions available #### require-extension [syntax] (require-extension ID ...) This is equivalent to (require-library ID ...) but performs an implicit import, if necessary. Since version 4.4.0, ID may also be an import specification (using rename, only, except or prefix). To make long matters short - just use require-extension and it will normally figure everything out for dynamically loadable extensions and core library units. This implementation of require-extension is compliant with SRFI-55 (see the SRFI-55 document for more information). #### require-library [syntax] (require-library ID ...) This form does all the necessary steps to make the libraries or extensions given in ID ... available. It loads syntactic extensions, if needed, and generates code for loading/linking with core library modules or separately installed extensions. During interpretation/evaluation require-library performs one of the following: • If ID names a built-in feature chicken srfi-0 srfi-2 srfi-6 srfi-8 srfi-9 srfi-10 srfi-17 srfi-23 srfi-30 srfi-39 srfi-55, then nothing is done.
• If ID names one of the syntactic extensions chicken-syntax chicken-ffi-syntax, then this extension will be loaded. • If ID names one of the core library units shipped with CHICKEN, then a (load-library 'ID) will be performed. • If ID names an installed extension with the syntax or require-at-runtime attribute, then the extension is loaded at compile-time, possibly doing a run-time (require ...) for any run-time requirements. • Otherwise, (require-library ID) is equivalent to (require 'ID). During compilation, one of the following happens instead: • If ID names a built-in feature chicken srfi-0 srfi-2 srfi-6 srfi-8 srfi-9 srfi-10 srfi-17 srfi-23 srfi-30 srfi-39 srfi-55, then nothing is done. • If ID names one of the syntactic extensions chicken-syntax chicken-ffi-syntax, then this extension will be loaded at compile-time, making the syntactic extensions available in compiled code. • If ID names one of the core library units shipped with CHICKEN, or if the option -uses ID has been passed to the compiler, then a (declare (uses ID)) is generated. • If ID names an installed extension with the syntax or require-at-runtime attribute, then the extension is loaded at compile-time, and code is emitted to (require ...) any needed run-time requirements. • Otherwise (require-library ID) is equivalent to (require 'ID). ID should be a pure extension name and should not contain any path prefixes (for example dir/lib... is illegal). ID may also be a list that designates an extension-specifier. Currently the following extension specifiers are defined: • (srfi NUMBER ...) is required for SRFI-55 compatibility and is fully implemented. • (version ID NUMBER) is equivalent to ID, but checks at compile-time whether the extension named ID is installed and whether its version is equal or higher than NUMBER. NUMBER may be a string or a number; the comparison is done lexicographically (using string>=?).
### Process shutdown

#### emergency-exit

[procedure] (emergency-exit [CODE])

Exits the current process without flushing any buffered output (using the C function _exit). Note that the exit-handler is not called when this procedure is invoked. The optional exit status CODE defaults to 0.

#### exit

[procedure] (exit [CODE])

Exits the running process with exit status CODE, which defaults to 0 (invokes exit-handler). Note that pending dynamic-wind thunks are not invoked when exiting your program in this way.

#### exit-handler

[parameter] (exit-handler)

A procedure of a single optional argument. When exit is called, this procedure is invoked with the exit status as its argument. The default behaviour is to terminate the program. Note that this handler is not invoked when emergency-exit is used.

#### implicit-exit-handler

[parameter] (implicit-exit-handler)

A procedure of no arguments. When the last toplevel expression of the program has executed, the value of this parameter is called. The default behaviour is to invoke all pending finalizers.

#### on-exit

[procedure] (on-exit THUNK)

Schedules the zero-argument procedure THUNK to be executed before the process exits, either explicitly via exit or implicitly after execution of the last top-level form. Note that finalizers for unreferenced finalized data are run before exit procedures.

### System interface

#### sleep

[procedure] (sleep SECONDS)

Puts the program to sleep for SECONDS. If the scheduler is loaded (for example, when srfi-18 is in use), only the calling thread is put to sleep and other threads may continue executing. Otherwise, the whole process is put to sleep.

### Ports

#### String ports

##### get-output-string

[procedure] (get-output-string PORT)

Returns the accumulated output of a port created with (open-output-string).

##### open-input-string

[procedure] (open-input-string STRING)

Returns a port for reading from STRING.
##### open-output-string

[procedure] (open-output-string)

Returns a port for accumulating output in a string.

### File Input/Output

#### flush-output

[procedure] (flush-output [PORT])

Writes buffered output to the given output port. PORT defaults to the value of (current-output-port).

### Port predicates

#### input-port-open?

[procedure] (input-port-open? PORT)

Is the given PORT open for input?

#### output-port-open?

[procedure] (output-port-open? PORT)

Is the given PORT open for output?

#### port-closed?

[procedure] (port-closed? PORT)

Is the given PORT closed (in all directions)?

#### port?

[procedure] (port? X)

Returns #t if X is a port object or #f otherwise.

### Built-in parameters

Certain behaviour of the interpreter and compiled programs can be customized via the following built-in parameters:

#### case-sensitive

[parameter] (case-sensitive)

If true, then read reads symbols and identifiers in case-sensitive mode and uppercase characters in symbols are printed escaped. Defaults to #t.

#### keyword-style

[parameter] (keyword-style)

Enables alternative keyword syntax, where STYLE may be either #:prefix (as in Common Lisp), which recognizes symbols beginning with a colon as keywords, or #:suffix (as in DSSSL), which recognizes symbols ending with a colon as keywords. Any other value disables the alternative syntaxes. In the interpreter the default is #:suffix.

#### parentheses-synonyms

[parameter] (parentheses-synonyms)

If true, then the list-delimiter synonyms #\[ #\] and #\{ #\} are enabled. Defaults to #t.

#### symbol-escape

[parameter] (symbol-escape)

If true, then the symbol escape #\| #\| is enabled. Defaults to #t.
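A small sketch tying the string-port procedures above together, using the manual's ==> result notation:

```
(define out (open-output-string))
(write '(1 "two" 3.0) out)
(get-output-string out)               ==> "(1 \"two\" 3.0)"

(read (open-input-string "(a b c)"))  ==> (a b c)
```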
https://www.semanticscholar.org/paper/Determinants-of-Enzymatic-Specificity-in-the-Enzyme-Messerschmidt-Worbs/0496c0441adaad07a94a5ed1718cf201ee6b44d0
# Determinants of Enzymatic Specificity in the Cys-Met-Metabolism PLP-Dependent Enzyme Family: Crystal Structure of Cystathionine γ-Lyase from Yeast and Intrafamiliar Structure Comparison

@inproceedings{Messerschmidt2003DeterminantsOE, title={Determinants of Enzymatic Specificity in the Cys-Met-Metabolism PLP-Dependent Enzyme Family: Crystal Structure of Cystathionine $\gamma$-Lyase from Yeast and Intrafamiliar Structure Comparison}, author={Albrecht Messerschmidt and Michael Worbs and Clemens Steegborn and Markus C. Wahl and Robert Huber and Bernd Laber and Tim Clausen}, booktitle={Biological chemistry}, year={2003} }

Abstract The crystal structure of cystathionine γ-lyase (CGL) from yeast has been solved by molecular replacement at a resolution of 2.6 Å. The molecule consists of 393 amino acid residues and one PLP moiety and is arranged in the crystal as a tetramer with D2 symmetry, as in other related enzymes of the Cys-Met-metabolism PLP-dependent family such as cystathionine β-lyase (CBL). A structure comparison with other family members revealed surprising insights into the tuning of enzymatic specificity…

## Figures, Tables, and Topics from this paper

Exploration of structure-function relationships in Escherichia coli cystathionine γ-synthase and cystathionine β-lyase via chimeric constructs and site-specific substitutions. • Chemistry, Medicine Biochimica et biophysica acta • 2013 In vivo complementation of methionine-auxotrophic E. coli strains, lacking the genes encoding eCGS and eCBL, demonstrated that exchange of the targeted regions impairs the activity of the resulting enzymes, but does not produce a corresponding interchange of reaction specificity. Functional Characterization and Structure-Guided Mutational Analysis of the Transsulfuration Enzyme Cystathionine γ-Lyase from Toxoplasma gondii • E. Maresi, +4 authors A.
Astegno • Chemistry, Medicine International journal of molecular sciences • 2018 The results suggest that CGL is an important functional enzyme in T. gondii, likely implying that the reverse transsulfuration pathway is operative in the parasite; the roles of active-site architecture and substrate binding conformations as determinants of reaction specificity in transsulfuration enzymes are probed. Structure of the antitumour enzyme L-methionine gamma-lyase from Pseudomonas putida at 1.8 Å resolution. The three-dimensional structure of MGL_Pp has been completely solved by the molecular replacement method to an R-factor of 20.4% at 1.8 Å resolution, and it is suggested that electrostatic interactions at the subunit interface are involved in the stabilization of the structural conformation. Interconversion of a pair of active-site residues in Escherichia coli cystathionine gamma-synthase, E. coli cystathionine beta-lyase, and Saccharomyces cerevisiae cystathionine gamma-lyase and development of tools for the investigation of their mechanisms and reaction specificity. • Chemistry, Medicine Biochemistry and cell biology = Biochimie et biologie cellulaire • 2009 The ΔmetB and ΔmetC strains, the optimized CBL and CGL assay conditions, and the efficient expression and affinity purification systems described provide the necessary tools to enable the continued exploration of the determinants of reaction specificity in the enzymes of the transsulfuration pathways. Catalytic specificity of the Lactobacillus plantarum cystathionine γ-lyase presumed by the crystallographic analysis It is found that the enzyme has high γ-lyase activity toward cystathionine to generate l-cysteine, together with β-lyase activity toward l-cystine to generate l-cysteine persulfide. Exploration of the six tryptophan residues of Escherichia coli cystathionine β-lyase as probes of enzyme conformational change.
• Chemistry, Medicine Archives of biochemistry and biophysics • 2013 The results of this study suggest that W188 is a useful probe of subtle conformational changes at the domain interface and active site of Escherichia coli CBL. Role of active‐site residues Tyr55 and Tyr114 in catalysis and substrate specificity of Corynebacterium diphtheriae C‐S lyase • Medicine, Chemistry Proteins • 2015 Spectral and computational data provide useful insights into the substrate specificity of C‐S lyase, which seems to be regulated by active‐site architecture and by the specific conformation in which substrates are bound, and will aid in the development of inhibitors. The Role of Amino Acid Residues in the Active Site of L-Methionine γ-lyase from Pseudomonas putida It is suggested that the hydrogen-bond network among Cys116, Lys240*, and Asp241* contributes to substrate specificity, that is, to L-methionine recognition at the active site in MGL_Pp. A novel mechanism of sulfur transfer catalyzed by O-acetylhomoserine sulfhydrylase in the methionine-biosynthetic pathway of Wolinella succinogenes. • Chemistry, Medicine Acta crystallographica. Section D, Biological crystallography • 2011 The crystal structure of Wolinella succinogenes OAHS (MetY) determined at 2.2 Å resolution provides insight into the mechanism of sulfur transfer to a small molecule via a protein thiocarboxylate intermediate. Crystal structure of mutant form Cys115His of Citrobacter freundii methionine γ-lyase complexed with l-norleucine. In this work, the crystal structure of the mutant enzyme complexed with the competitive inhibitor l-norleucine was determined, and analysis of the structure allowed us to suggest the possible reason for the inability of the mutants to catalyze the physiological reaction. ## References SHOWING 1-10 OF 32 REFERENCES The crystal structure of cystathionine gamma-synthase from Nicotiana tabacum reveals its substrate and reaction specificity.
• Biology, Medicine Journal of molecular biology • 1999 General insight regarding the reaction specificity of transsulphuration enzymes is gained by the comparison to cystathionine beta-lyase from E. coli, indicating the mechanistic importance of a second substrate binding site for L-cysteine which leads to different chemical reaction types. Crystal structure of the pyridoxal-5'-phosphate dependent cystathionine beta-lyase from Escherichia coli at 1.83 Å. • Chemistry, Medicine Journal of molecular biology • 1996 The crystal structure of CBL from E. coli has been solved using MIR phases in combination with density modification and suggests that Lys210, the PLP-binding residue, mediates the proton transfer between C alpha and S gamma. The three-dimensional structure of cystathionine beta-lyase from Arabidopsis and its substrate specificity. The three-dimensional structure of cystathionine beta-lyase from Arabidopsis was determined by Patterson search techniques, and the overall structure is very similar to other pyridoxal 5'-phosphate-dependent enzymes of the gamma-family. Crystal structure of Escherichia coli cystathionine γ‐synthase at 1.5 Å resolution • Biology • 1998 The transsulfuration enzyme cystathionine γ‐synthase (CGS) catalyses the pyridoxal 5′‐phosphate (PLP)‐dependent γ-replacement of O‐succinyl‐L‐homoserine and L‐cysteine, yielding L-cystathionine, helping in the understanding of the chemical versatility of PLP. A hydrogen-bonding network modulating enzyme function: asparagine-194 and tyrosine-225 of Escherichia coli aspartate aminotransferase. • Chemistry, Medicine Biochemistry • 1993 The kinetic studies showed that Asn194 is not essential for AspAT catalysis, although the Kd values for the substrates were increased by 10- to 50-fold upon the replacement of Asn194, and the analyses of the pH-pKd curves for the wild-type and mutant enzymes showed that the hydrogen bond between O(3') of PLP and Asn194 is weakened by the binding of maleate to AspAT.
Cystathionine γ‐lyase of Saccharomyces cerevisiae: Structural gene and cystathionine γ‐synthase activity N‐terminal amino acid sequence analysis indicated that CYS3 is the structural gene for γ‐CTLase and that the contaminant is O‐acetylserine/O‐acetylhomoserine sulfhydrylase (OAS/OAH SHLase). Cloning and bacterial expression of the CYS3 gene encoding cystathionine gamma-lyase of Saccharomyces cerevisiae and the physicochemical and enzymatic properties of the protein • Medicine, Biology Journal of bacteriology • 1993 The possibility that this gene, rescuing the cysteine requirement in a "cys1" strain of Saccharomyces cerevisiae, has a physiological role as cystathionine beta-lyase and cystathionine gamma-synthase in addition to its previously described role as CYI1, as well as the enzymatic properties of this enzyme, are discussed. Evolutionary relationships among pyridoxal‐5′‐phosphate‐dependent enzymes • Biology • 1994 A comprehensive comparison of amino acid sequences has shown that most pyridoxal-5'-phosphate-dependent enzymes can be assigned to one of three different families of homologous proteins, and their homology confirmed by profile analysis. Slow-binding inhibition of Escherichia coli cystathionine beta-lyase by L-aminoethoxyvinylglycine: a kinetic and X-ray study. • Chemistry, Medicine Biochemistry • 1997 The interaction of Escherichia coli CBL with AVG and methoxyvinylglycine (MVG) is characterized by a combination of kinetic methods and X-ray crystallography, which suggests a binding mode for rhizobitoxine and explains the failure of MVG to inhibit CBL. Reaction mechanism of Escherichia coli cystathionine gamma-synthase: direct evidence for a pyridoxamine derivative of vinylglyoxylate as a key intermediate in pyridoxal phosphate dependent gamma-elimination and gamma-replacement reactions.
• Medicine, Chemistry Biochemistry • 1990 The results establish that the partitioning intermediate is an alpha-imino beta,gamma-unsaturated pyridoxamine derivative with lambda max congruent to 300 nm and that the 485-nm species which accumulates in the elimination reaction is not on the replacement pathway.
http://math.gatech.edu/seminars-colloquia/series/student-algebraic-geometry-seminar/libby-taylor-20171013
## Divisor Theory on Curves

Series: Student Algebraic Geometry Seminar
Friday, October 13, 2017 - 10:00, 1 hour (actually 50 minutes)
Location: Skiles 114, GA Tech
Organizer:

We will give an overview of divisor theory on curves and give definitions of the Picard group and the Jacobian of a compact Riemann surface. We will use these notions to prove Plücker's formula for the genus of a smooth projective curve. In addition, we will discuss the various ways of defining the Jacobian of a curve and why these definitions are equivalent. We will also give an extension of these notions to schemes, in which we define the Picard group of a scheme in terms of the group of invertible sheaves and in terms of sheaf cohomology.
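For reference alongside the abstract above: for a plane projective curve of degree $d$ whose only singularities are $\delta$ nodes and $\kappa$ cusps (both zero for a smooth curve), Plücker's genus formula reads

```latex
g = \frac{(d-1)(d-2)}{2} - \delta - \kappa
```

so a smooth plane curve of degree $d$ has genus $(d-1)(d-2)/2$.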
https://brilliant.org/practice/fractions-level-2-3-challenges/?subtopic=arithmetic&chapter=fractions
Basic Mathematics

# Fractions: Level 2 Challenges

A waiter's income consists of his salary and the tips he makes. During one week his tips were 5/4 of his salary. What fraction of his income came from tips?

If $$\color{red}{a}$$ and $$\color{blue}{b}$$ are positive, what is the minimum possible value of $\large \frac { \color{red}{a}}{ \color{blue}{b} } +\frac { \color{blue}{b} }{\color{red}{a} }?$

Paul tries to compute the value of $$\frac{x+y}{z}$$ using a calculator. He inputs $$x+y \div z$$ and the result is $$12$$. For clarification, he inputs $$y+x \div z$$ and the result is $$9$$. Paul concludes that his input is wrong, so he inputs $$(x+y) \div z$$ and the result is $$7$$. Find the value of $$x+y+z$$.

$x=\dfrac{111110}{111111} , y = \dfrac{222221}{222223} , z = \dfrac{333331}{333334}$ Compare $$x,y,z$$.

Find the positive integer $$\color{blue}{X}$$ that makes this equation true: $\Large \frac { 2 }{ \color{blue}{X} } -\frac { \color{blue}{X}}{ 5 } =\frac { 1 }{ 15 }.$
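The first and third puzzles above can be checked with exact arithmetic; a sketch using Python's fractions module (the variable names are mine):

```python
from fractions import Fraction

# Waiter: tips were 5/4 of salary, so income = salary + tips = (1 + 5/4) * salary.
tips_over_salary = Fraction(5, 4)
tip_fraction = tips_over_salary / (1 + tips_over_salary)
print(tip_fraction)  # 5/9 of the income came from tips

# Paul's calculator: x + y/z = 12, y + x/z = 9, (x + y)/z = 7.
# Adding the first two equations: (x + y) + (x + y)/z = 21; with x + y = 7z this
# gives 7z + 7 = 21, so z = 2 and x + y = 14; back-substitution yields x = 10, y = 4.
x, y, z = Fraction(10), Fraction(4), Fraction(2)
assert x + y / z == 12 and y + x / z == 9 and (x + y) / z == 7
print(x + y + z)  # 16
```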
https://ireggae.com/seagram-s-ikrrra/moment-of-inertia-of-hcl-92af94
Calculate the moment of inertia of an HCl molecule from its infrared absorption spectrum shown in Figure 43.9. Moment of inertia is an extensive property of an object. The mass of H is m(H) = 1.00794 u. How can I use this spectrum to find the moment of inertia of the HCl molecule about an axis through the center …? Can you explain this answer? The harmonic-oscillator expression allows us to find the "spring" force constant for HCl and DCl: k = 514.96 N/m. Ie is a molecular constant called the moment of inertia, which for a diatomic molecule is Ie = μre², where μ and re are, in turn, the reduced mass (see above) and the equilibrium bond length of the molecule AB. The most populated rotational level for the molecule at a temperature of 600 K corresponds to (a) J = 3, (b) J = 4, (c) J = 5, (d) J = 6; the correct answer is option (b). Since the moment of inertia depends on the bond length, it too changes and, in turn, changes the rotational constant B, because a diatomic molecule's bond length changes as it vibrates. (A) 2.68 × 10⁻⁴⁷ kg m² (B) 4.21 × 10⁻⁵¹ kg m² (C) 2.68 … A classic among molecular spectra, the infrared absorption spectrum of HCl can be analyzed to gain information about both rotation and vibration of the molecule.
Explain if there are any differences between the spectra. The moments of inertia of HCl and KCl molecules are 1.5913 u Å² and 131.0596 u Å², respectively. Spectra and Molecular Structure – HCl & DCl, by Christopher T. Hales. The moment of inertia of a molecule is I = Σᵢ mᵢrᵢ², where rᵢ is the perpendicular distance of atom i from the axis of rotation (the bond length, for a diatomic). The moment of inertia of the HCl molecule about an axis passing through its centre of mass and perpendicular to the line joining the H and Cl ions, if the interatomic distance is 1 Å, is asked below. (The moment of inertia of ammonia about its principal axis is given by I∥ = 2mH R²(1 − cos ϴ), where the mass of the hydrogen atom is mH = 1.6735 × 10⁻²⁷ kg, the N–H bond length is R = 1.014 × 10⁻¹⁰ m, and the bond angle is 106.78°.) Q: The moment of inertia of the HCl molecule about an axis passing through its centre of mass and perpendicular to the line joining the H⁺ and Cl⁻ ions will be (if the interatomic distance is 1 Å). Sol: r = 1 Å = 10⁻¹⁰ m; m₁ = 1 amu; m₂ = 35.5 amu. Note that re is the internuclear separation for which x = 0 in equations (1) and (4) (i.e., the bottom of the potential well). In addition to the moment-of-inertia relations, the above equations are useful in the evaluation of molecular structures. Use Table 1 for correct isotopic masses. Calculate the bond length of the HCl molecule if its moment of inertia is 2 × 10⁻⁴⁰ g cm² and its reduced mass is 1.0 g mol⁻¹. 5) Calculate the force constant for the HCl … Moment of inertia is the ratio of the total torque applied to a body to the angular acceleration produced by that torque. Calculate I, the moment of inertia, for HCl and HBr, and the interatomic distances.
The mass of Cl is m(Cl) … The moment of inertia is defined by I = μr² (Eqs. 13.2–13.3). As the molecule rotates, the rotating dipole constitutes the transition dipole operator μ. Molecules such as HCl and CO will show rotational spectra, while H₂, Cl₂ and CO₂ will not. ABSTRACT: FTIR spectroscopy was used to analyze rotational-vibrational transitions in gas-state HCl and DCl and their isotopomers (due to ³⁵Cl and ³⁷Cl) to determine molecular characteristics. Moment of inertia is a commonly used concept in physics. Moment of inertia is also known as the angular mass or rotational inertia. Here the moment of inertia, I, is given by I = μre². 4) Using 1.67379 × 10⁻²⁷ kg and 5.80752 × 10⁻²⁶ kg for the masses of individual atoms of hydrogen and chlorine, respectively, compute the reduced mass, μ, and the bond length re (in Ångströms and in nm) for HCl from I = μre². J is the rotational quantum number and spans integers from 0 to ∞. Moment of inertia is usually specified with respect to a chosen axis of rotation. The absorption lines shown involve transitions from the ground to the first excited vibrational state of HCl… I = μr² (3), where μ is the reduced mass, given by μ = m₁m₂/(m₁ + m₂) (4), and r is the distance between the two atoms in the rigid rotor. Using the two isotopic peaks (H³⁷Cl and H³⁵Cl, or H³⁵Cl and D³⁵Cl), compute the ratio of Be for the two isotopes. The rotational spectrum of HCl contains the following wavelengths (among others): 60.4 μm, 69.0 μm, 80.4 μm, 96.4 μm, and 120.4 μm. In simpler terms, the moment of inertia refers to the resistance of a rotating body to angular deceleration or acceleration. Each peak, differentiating between ³⁵Cl and ³⁷Cl, is assigned an m value and then …
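Step 4 above can be checked numerically. A sketch using the quoted atomic masses (assuming they are in kg, as the text implies):

```python
# Masses of individual H and Cl atoms quoted in the text, in kg.
m_H = 1.67379e-27
m_Cl = 5.80752e-26

# Reduced mass from eq. (4): mu = m1 * m2 / (m1 + m2).
mu = m_H * m_Cl / (m_H + m_Cl)
print(f"reduced mass = {mu:.4e} kg")  # ~1.627e-27 kg, slightly below m_H
```

Because Cl is so much heavier than H, the reduced mass lands just under the hydrogen mass, as expected.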
The rotational energies, as we shall see, depend on the rotational constants designated by A, B, and C, with A > B > C, and defined by A = h/(8π²cI_A), B = h/(8π²cI_B), and C = h/(8π²cI_C). While you can derive the moment of inertia for any object by summing point masses, there are many standard formulas. Estimate the moment of inertia of an HCl molecule from its infrared absorption spectrum shown in Figure P42.11. Moment of inertia is the property of a body by which it opposes any change in its state of rest or of uniform rotation. A molecule can have three different moments of inertia, I_A, I_B and I_C, about orthogonal axes a, b and c: I = Σᵢ mᵢrᵢ². Note how rᵢ is defined: it is the perpendicular distance from the axis of rotation. (i) Do these molecules have a pure rotational spectrum? (ii) Using the rigid-rotor model, draw a sketch of the rotational spectrum for each molecule, indicating the selection rules. Here r is the internuclear distance and I is the moment of inertia shown in Eq. (3). MOI varies depending on the axis that is chosen. Remember that data reported in the periodic table are abundance-weighted atomic masses. The characteristic rotational temperature (θR or θrot) is commonly used in statistical thermodynamics to simplify the expression of the rotational partition function and the rotational contribution to molecular thermodynamic properties.
An object's moment of inertia describes its resistance to angular acceleration, accounting for the total mass of the object and the distribution of mass around the axis of rotation. Estimate the moment of inertia of an HCl molecule from its infrared absorption spectrum shown in Figure P43.19. The moment of inertia of the HCl molecule is 2.71 × 10⁻⁴⁷ kg m². Moment of inertia (I), also called mass moment of inertia, is a measure of an object's resistance to changes in its rotation rate. Evaluation of the moment of inertia in HCl: the bond length between H and Cl in HCl is r₀ = 1.274 Å. The energies can also be expressed in terms of the rotational temperature, $$Θ_{rot}$$. Using the rotational constants from the polynomial curve fit with the definition of B gives I, the moment of inertia of the HCl molecule. (1) Moment of inertia of a particle: $$I=m{{r}^{2}}$$, where r is the perpendicular distance of the particle from the rotational axis. (c) (2 points) The bond distance for HCl is 1.29 Å. Here $$I$$ is the moment of inertia of the molecule, given by $$μr^2$$ for a diatomic, where $$μ$$ is the reduced mass and $$r$$ the bond length (assuming the rigid-rotor approximation). Determine the fundamental vibrational frequency of HCl and DCl. Calculate the moment of inertia of the HCl molecule from the given value of the rotational constant, B = 10.40 cm⁻¹. We assumed above that B of R(0) and B of P(1) were equal; however, they differ because of this phenomenon. Moment of inertia: I = mr². Reduced mass: $\large \mu = \frac{m_1 m_2}{m_1 + m_2}$. This is also known as "angular mass" and refers to a rotating body's inertia with respect to its rotation.
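Putting the numbers above together: with the reduced mass computed earlier and r₀ = 1.274 Å, the rigid-rotor relations I = μr₀² and B = h/(8π²cI) reproduce values close to the quoted I ≈ 2.7 × 10⁻⁴⁷ kg m² and B = 10.40 cm⁻¹. A hedged sketch:

```python
import math

h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e10        # speed of light in cm/s, so B comes out in cm^-1

mu = 1.627e-27           # reduced mass of HCl, kg (computed earlier)
r0 = 1.274e-10           # H-Cl bond length, m

I = mu * r0**2                      # moment of inertia, kg m^2
B = h / (8 * math.pi**2 * c * I)    # rotational constant, cm^-1

print(f"I = {I:.3e} kg m^2")   # ~2.64e-47 kg m^2
print(f"B = {B:.2f} cm^-1")    # ~10.6 cm^-1, near the quoted 10.40 cm^-1
```

The small discrepancy with the quoted B is expected: the rigid-rotor sketch ignores the vibrational dependence of the bond length discussed above.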
Note that re is the internuclear separation for which x = 0 … VIBRATION-ROTATION SPECTROSCOPY OF HCl, by John Ricely. Abstract: Using the Nicolet 6700 spectrometer, the spectrum for HCl was analyzed. Energy transitions from the spectra were plotted vs. frequency, from which several physical constants were determined.
In the evaluation of the Jth quantum level is 2J+1 HCl by: John Ricely Abstract Using the rotor... Allows us to find the moment of inertia of HCl and DCl =! Group by 167 GATE Students you won SPECTROSCOPY of HCl and HBr and the distances. An axis of rotation the resistance of a body due to which it opposes any change its... Resistance of a body due to which it opposes any change in its State of rest or of uniform.! The rigid rotor model draw a sketch of the HCl molecule from its absorption. Which several physical constants were determined transitions from the given value of rotational constant B. Bond Lengths ; Bad Calculated bond Lengths ; Bad Calculated bond Lengths ; Bad Calculated bond ;. 1.5913 uÅ 2 and 131.0596 uÅ 2 and 131.0596 uÅ 2, respectively B 10.40! ; moments of inertia: moment of inertia plays the same role in rotational motion as plays! R0 = 1.274 Å by 167 GATE Students I = mr2 any change in its State of rest of. Tickets dashboard to see if you won GATE Students plays in linear motion integers from 0 to ∞ HCl:! Draw a sketch of the moment of inertia I = mr2 or acceleration molecules! Study group by 167 GATE Students Bad point group ; State symmetry < r2 > Sorted by r2 in motion! Table is for abundance weighted atomic mass KCl molecules are 1.5913 uÅ 2, respectively SI of. Mass ” and it refers to a chosen axis of rotation depending on the axis that is.. Spectrum for each molecule indicating the selection rules molecule indicating the selection rules molecule from its infrared absorption spectrum in...: John Ricely Abstract Using the Nicolet 6700 spectrometer, the moment of in. Inertia I = mr2 H ) = 1.00794 u ) = 1.00794 u is 1.29 a ” it... Of mass around an axis of rotation a chosen axis of rotation Do molecules...
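The rigid-rotor relations above, I = μr² and the standard definition B = h/(8π²cI), can be checked numerically against the reported values. A minimal sketch; the chlorine-35 mass and the physical constants below are assumed textbook values, not taken from the report:

```python
import math

# Assumed physical constants (CODATA values, not from the report)
U_TO_KG = 1.66053906660e-27   # atomic mass unit in kg
H_PLANCK = 6.62607015e-34     # Planck constant, J s
C_CM_PER_S = 2.99792458e10    # speed of light, cm/s

def reduced_mass_u(m1, m2):
    """Reduced mass of a diatomic molecule, in atomic mass units."""
    return m1 * m2 / (m1 + m2)

def moment_of_inertia_kg_m2(mu_u, r_angstrom):
    """Rigid-rotor moment of inertia I = mu * r^2, in kg m^2."""
    return (mu_u * U_TO_KG) * (r_angstrom * 1e-10) ** 2

def rotational_constant_cm1(inertia):
    """Rotational constant B = h / (8 pi^2 c I), in cm^-1."""
    return H_PLANCK / (8 * math.pi ** 2 * C_CM_PER_S * inertia)

mu = reduced_mass_u(1.00794, 34.96885)   # H-35Cl; 34.96885 u is assumed
I_uA2 = mu * 1.274 ** 2                  # in u Angstrom^2: close to 1.5913
I = moment_of_inertia_kg_m2(mu, 1.274)   # r0 = 1.274 Angstrom from the text
B = rotational_constant_cm1(I)           # about 10.6 cm^-1
```

The rigid-rotor estimate lands near the fitted B = 10.40 cm^-1; the residual difference reflects vibration-rotation coupling and the choice of bond length.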
http://www.cs.mcgill.ca/~cs362/2004.html
# 308-362B Winter 2005

January 6 Lecture: Course Overview and An Introduction to Linear Programming (Chapter 1 of Chvatal, available at Copy-EUS).

January 11 Lecture: Stable Marriages: Section 1.1 of the text.

January 13th Lecture: Linear Programming Duality (Chapter 5 of Chvatal, available at Copy-EUS).

January 18th Lecture: Five Representative Problems and Solutions to two of them. Sections 1.2, 4.1, 6.1 and 6.2 of the text.

January 20th Lecture: Lightest Paths in Graphs I. To prove that our algorithms for lightest paths were correct we relied on the following general approach for Lightest Path Certification. In this lecture, we began with the relatively simple Algorithm for Acyclic Directed Graphs. We discussed the fact that this algorithm could also find the heaviest path from a source \$s\$ to every vertex, and the use of this algorithm in PERT (Program Evaluation and Review Technique) for scheduling projects consisting of subtasks with precedence constraints.

January 25th Lecture: Lightest Paths in Graphs II. We presented an Algorithm for General Directed Graphs. This algorithm is also discussed in Section 6.8 of the text.

January 27th Lecture: We discussed Johnson's O(nm) algorithm for determining if a graph contains negative cycles, including finding a negative cycle if one exists. This algorithm is discussed in Section 6.10 of the book. We then showed that we could use this algorithm to reduce the All Pairs Shortest Paths problem on graphs without negative cycles to the same problem on graphs without negative weight edges in \$O(nm)\$ time. We also began our discussion of Max Flow Problems.

February 1 Lecture: Maximum Flows I. We described an algorithm for this problem and proved it terminated. This is sections 7.1 and 7.2 in the book. We showed how to decompose s-t flows into a set of weighted s-t paths and cycles.
That is, we proved that for any flow \$x\$ in a capacitated network with \$m\$ arcs: for some \$l \le m\$ there is a set \$C_1, \ldots, C_k\$ of cycles and paths \$P_{k+1}, \ldots, P_l\$ and associated non-negative weights \$w_1, \ldots, w_l\$ such that for each edge \$e\$, \$x(e)\$ is the sum of the weights of the cycles and paths containing \$e\$, and the volume of \$x\$ is the sum of the weights of the paths. We also showed that if the flow was integer valued then we could choose integer weights. This proof follows the lines of question 5 of assignment 2. We then showed that if \$x_1\$ was a flow and \$x_2\$ was a maximum flow then there was a flow of volume \$vol(x_2)-vol(x_1)\$ in the auxiliary graph for \$x_1\$. Applying the previous result, we thereby obtained that there is a path of capacity \$(vol(x_2)-vol(x_1))/m\$ in the auxiliary network. This implied that if we always augmented along a maximum capacity path in the auxiliary graph we would perform only \$O(m \log(Cm))\$ iterations, where \$C\$ is the largest edge capacity. On the assignment, you were asked to show you could find such a path in \$O(m \log m)\$ time. This yields an implementation of our algorithm in \$O(m^2 \log(m) \log(Cm))\$ time. A similar but different approach is taken in Section 7.3 of the book.

February 3 Lecture: Maximum Flows II. We saw an application of the maximum flow algorithm to Bipartite Matching. This material can be found in Section 7.5 of the text. We mentioned some other applications which are not examined material.

February 8 Lecture: Maximum Flows III. We saw extensions to the max flow problem, in particular circulations with demands and lower bounds (Section 7.7). This allowed us to see an example of such problems: Airline Scheduling (Section 7.9). We also started on another application of the max flow algorithm, Image Segmentation (Section 7.10). We will finish this problem this coming Thursday.
February 10 Lecture: We finished solving a special case of the Image Segmentation problem, namely classifying pixels as foreground or background of an image. We saw Dijkstra's algorithm for finding lightest paths from \$s\$ to every other vertex in a graph with non-negative edge weights. I left the running time at \$O(m \log n)\$ but indeed, as Leonid and Nick pointed out, we can do better using a Fibonacci heap, namely \$O(n \log n + m)\$. This becomes \$O(m)\$ in graphs that are dense enough, where \$m > n \log n\$, i.e. \$m/n > \log n\$, i.e. the average degree is larger than \$\log n\$. Also Zhentao gave a presentation which is not examined material.

February 15 Lecture: Maximum Flows IV. Material from this lecture will not be examined.

February 17: Midterm. Closed book. No calculators or electronic aids allowed. In class.

March 1 Lecture: Proving a problem is hard. We introduced a method for proving a problem is hard: we transform an instance of a known hard problem into an instance of our problem, in polynomial time (we were very careful in explaining what exactly we mean by "hard" and by "polynomial").

March 3 Lecture: Examples of polynomial-time reductions. We went over the following polynomial-time reductions: Independent Set to Vertex Cover; Independent Set to (the more general) Set Packing; Vertex Cover to Independent Set; Vertex Cover to (the more general) Set Cover; 3-SAT to Independent Set.

March 8 Lecture: P, NP and NP-C. We formalized the notions of running time and of an efficient checking algorithm. These led us to define the classes of problems P, NP, NP-C, and (briefly) CO-NP.

March 10 Lecture: Cook's Theorem. We introduced the Cook-Levin Theorem and gave a direct proof of it: we showed Circuit-SAT is NP-Complete by considering any problem in NP and showing it can be polynomially reduced to Circuit-SAT.
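The binary-heap Dijkstra from the February 10 lecture, which gives the \$O(m \log n)\$ bound, can be sketched as follows. A minimal Python sketch; the adjacency-list representation and names are my own, not the course's:

```python
import heapq

def dijkstra(adj, source):
    """Lightest-path distances from source in a graph with non-negative weights.

    adj maps each vertex to a list of (neighbor, weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path to u was already found
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

For example, `dijkstra({"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}, "s")` returns `{"s": 0, "a": 1, "b": 3}`. Each edge is pushed at most once per relaxation, giving the \$O(m \log n)\$ bound with a binary heap.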
http://nrich.maths.org/398/solution
### Shades of Fermat's Last Theorem

The familiar Pythagorean 3-4-5 triple gives one solution to (x-1)^n + x^n = (x+1)^n so what about other solutions for x an integer and n = 2, 3, 4 or 5?

### Exhaustion

Find the positive integer solutions of the equation (1+1/a)(1+1/b)(1+1/c) = 2

### Code to Zero

Find all 3 digit numbers such that by adding the first digit, the square of the second and the cube of the third you get the original number, for example 1 + 3^2 + 5^3 = 135.

# Power Up

##### Stage: 5 Challenge Level:

Graeme showed the inequalities for us:

$7 = 9^{1/2} + 8^{1/3} + 16^{1/4} > 7^{1/2} + 7^{1/3} + 7^{1/4}$

$4 = 4^{1/2} + 1^{1/3} + 1^{1/4} < 4^{1/2} + 4^{1/3} + 4^{1/4}$

While I was at it, I came up with these:

$6 = 6.25^{1/2} + 6.859^{1/3} + 6.5536^{1/4} > 6^{1/2} + 6^{1/3} + 6^{1/4}$

Although that looks hard, it can be done without a calculator by partitioning $6$ into $2.5+1.9+1.6$, and finding appropriate powers of each number. The last one is easy for computer geeks like me who have memorized many small powers of $2$.

$5 < 4^{1/2} + 4.096^{1/3} + 4^{1/4} < 5^{1/2} + 5^{1/3} + 5^{1/4}$

This, too, is pretty easy without a calculator - $4.096^{1/3}$ is $1.6$, and the square root of $2$ is more than $1.4$, so the first sum is more than $5$, and clearly less than the second sum. Thanks for the extensions, Graeme.

These inequalities show that the graph is going to intersect the x-axis somewhere between 4 and 7, which it does (graph omitted).
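The inequalities above can be spot-checked numerically; a quick sketch (the helper name `power_sum` is mine, not from the solution):

```python
def power_sum(x):
    """x^(1/2) + x^(1/3) + x^(1/4), the sum compared against x in the solution."""
    return x ** 0.5 + x ** (1 / 3) + x ** 0.25

# The four comparisons from the solution:
assert power_sum(7) < 7   # since 9^(1/2) + 8^(1/3) + 16^(1/4) = 7 bounds it above
assert power_sum(4) > 4
assert power_sum(6) < 6
assert power_sum(5) > 5

# The sign change of power_sum(x) - x confirms a crossing between 4 and 7
# (in fact between 5 and 6):
assert (power_sum(5) - 5) > 0 and (power_sum(6) - 6) < 0
```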
https://www.bernardosulzbach.com/random/computer-graphics/
# Texture mapping

## Bilinear filtering

Bilinear filtering provides good results if the objects are close to the camera. However, it does not work well when the object is far away and multiple texels map to the same pixel. For distant objects it would be necessary to average all the contributing texels, which has to be precomputed, for instance, by using image pyramids.

## Perspective correct interpolation

Pixel space is related to 3-D homogeneous space through a linear matrix multiplication. Consequently, the linear interpolation of pixel space coordinates is related to the linear interpolation of 3-D homogeneous space coordinates.

1. Construct an array of values for each vertex of the polygon after multiplication by the projection matrix, including a 1 at the end.
2. Perform clipping.
3. Apply the perspective division to all elements of the vector (divide by W). The last term becomes 1/W.
4. "Interpolate all values linearly down polygon edges and across scanlines internal to the polygon".
5. At each pixel, divide the resulting values by the interpolated 1/W.

# Visible surface determination

## Painter's algorithm

Primitives are sorted by Z coordinate in camera space and rendered from back to front. Interpenetrating polygons need to be split. Can be sped up by BSP trees.

## Visible surface ray tracing

Traces rays through each pixel of the image plane and finds the closest intersection with the objects in the scene.

## Depth buffering

Keeps track of the current depth associated with each pixel. These values are interpolated during rasterization. Can be done through a Z-buffer or a W-buffer.

### Z-buffer

Perspective-correct interpolation of Z values provides more precision near the camera (using linearly-interpolated Z values results in uniform precision). The resolution of a Z-buffer depends on the ratio Zfar / Znear.

### W-buffer

W is defined in terms of the Z coordinate in camera space and therefore its value is independent of Znear.
W-buffer is the best choice if one needs to make Znear very small.

# Quaternions

Quaternions are the 3-D analogue of complex numbers and rotations in 2-D. A quaternion is defined as the following sum.

q = q0 + q = q0 + iq1 + jq2 + kq3

Therefore, a quaternion can be represented by a 4-tuple of real numbers. Quaternions observe some special products:

i² = j² = k² = ijk = -1

ij = k = -ji

jk = i = -kj

ki = j = -ik

After grouping the intermediate results of the multiplication of two quaternions p and q, we get the following formula.

pq = p0q0 - p·q + p0q + q0p + p×q

Where p0q0 - p·q is a scalar and p0q + q0p + p×q is a vector.

The complex conjugate q* of q is given by q* = q0 - q. The norm of a quaternion $q$, denoted by $\|q\|$, is $\sqrt{q q^{*}}$. The norm of a product is the product of the norms, $\|pq\| = \|p\|\|q\|$. Every non-zero quaternion q has a multiplicative inverse $q^{-1}$, such that $q^{-1}q = qq^{-1} = 1$. A closed formula for the inverse is $q^{-1} = q^{*} / \|q\|^{2}$.

A rotation in R3 can be represented by a 3x3 orthogonal matrix with determinant 1. This matrix is a rotation operator in R3. Quaternions are an alternative form of the rotation operator in R3. Quaternions (which are in R4) can operate on vectors from R3 by considering all vectors in R3 pure quaternions, that is, quaternions whose scalar part is zero. Rotating $v$ by a unit quaternion $q$ is performed by $qvq^{*}$. The result is guaranteed to be a pure quaternion.

# Shadows

Shadows are regions of a scene not completely visible to the light sources. They are one of the most important clues about the spatial relationship among objects in a scene. Most common shadow algorithms are restricted to direct light and point or directional light sources. Area light sources are usually approximated by several point lights. The terms umbra and penumbra are used to mean complete shadow and partial shadow, respectively.

## Projected shadows

Works by projecting the polygonal models onto a plane and painting this projection as a shadow.
Shadow mapping is an image-based algorithm which uses a depth buffer. It can be applied to any surface that can be rasterized. It is usually implemented in graphics hardware by using the texture sub-system. It works by generating a depth map (shadow map) of the scene from the point of view of each light source. Each fragment visible to the camera is then mapped into the light space of each light source to check whether or not it was reached by that light source. Shadow mapping is prone to aliasing (both during construction and during access) and self-shadowing (which requires a bias factor to be used when testing). The texture coordinates for a vertex are obtained by transforming it with the light's view and projection matrices, followed by a bias that maps the [-1, 1] range to [0, 1].

## Percentage closer filtering

In this method, the shadow test is performed against an area of the shadow map.

## Shadow volumes

For each light source, shadow polygons are created. Then, starting with a counter set to the number of shadow volumes containing the eye, rays are traced from the eye towards the target. We add 1 for each front facing shadow polygon and subtract 1 for each back facing shadow polygon. Then, if the counter is zero, the object is lit; if the counter is greater than zero, the object is in shadow.

## Radiosity

Shadows are determined by the form factors among the elements of the scene.

## Ray tracing

Trace rays from the surface point to each light source and check if there are any intersections.

## Light map

Light maps are data structures used to store the brightness of surfaces in a virtual scene. They are pre-calculated and stored in texture maps for later use. They are used to provide good quality global illumination at a relatively low computational cost.

# Relief mapping

Depth and surface details are hard to model, so relief mapping is used to fake these fine details. Normal mapping is used to define normals through a texture. Depth mapping is used to add depth to a surface.
Relief mapping is based on a per-fragment ray and height-field intersection. Finding the intersection of the ray and the height-field starts with a linear search in order to determine the boundaries for a faster and more precise binary search.

## Impostors

Impostors are an efficient way of adding a large number of simple models to a virtual scene without rendering a large number of polygons. A quad is rotated around its center so that it always faces the camera. Relief mapping might be used to improve the photorealism of the rendered texture.

# Global illumination

In global illumination, the shading of a surface point is computed taking into account the other elements of the scene. A light ray may hit several surfaces before it reaches the viewer. This better approximation has a higher cost.

## Global illumination algorithms

Global illumination algorithms are sometimes described by a regular expression involving L (the light source), S (a specular reflection), D (a diffuse reflection), and E (the eye).

### Recursive ray tracing

Handles multiple inter-reflections between shiny surfaces, refraction, and shadows. Does not consider multiple diffuse reflections. Produces high-quality results for specular surfaces. Expressed as LS*E | LDS*E.

### Radiosity

Handles multiple reflections between diffuse surfaces, which includes color bleeding. Produces high-quality results for diffuse environments. Expressed as LD*E.

### Two-pass (radiosity and ray tracing)

Combines both approaches. Expressed as L(S|D)*E.

# Surface reconstruction from point clouds

There are several incentives for the use of point clouds.

• Modelling is time-consuming.
• Models are becoming more detailed.
• Art archiving.
• Forensic studies.

Algebraic methods, which try to fit a single and simple surface to the data points, are only suitable for very small datasets. Geometric methods oftentimes rely on Delaunay triangulation. These methods are very sensitive to noise and point cloud density.
Implicit methods construct a function $f$ whose zero isosurface approximates the surface of the original object. In these methods, objects are represented as equations, which requires isosurface extraction algorithms to be able to render them using the conventional graphics pipeline, and derivatives to compute their normals. Radial functions provide a general approach that approximates the object through the solution of a linear system.
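The product rules from the Quaternions section above translate directly into code. A minimal sketch (tuple layout and function names are my own) of the Hamilton product and the rotation $qvq^{*}$, using the fact that $q^{*} = q^{-1}$ for a unit quaternion:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions stored as (scalar, x, y, z)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,   # p0q0 - p.q (scalar part)
            p0*q1 + q0*p1 + p2*q3 - p3*q2,   # p0q + q0p + p x q (vector part)
            p0*q2 + q0*p2 + p3*q1 - p1*q3,
            p0*q3 + q0*p3 + p1*q2 - p2*q1)

def conj(q):
    """Complex conjugate q* = q0 - q."""
    return (q[0], -q[1], -q[2], -q[3])

def rotate(q, v):
    """Rotate 3-D vector v by unit quaternion q via q v q*."""
    pure = (0.0,) + tuple(v)                 # v as a pure quaternion
    return qmul(qmul(q, pure), conj(q))[1:]  # scalar part drops out

# Example: 90-degree rotation about the z axis, q = (cos 45, 0, 0, sin 45)
half = math.radians(45)
q = (math.cos(half), 0.0, 0.0, math.sin(half))
x_rotated = rotate(q, (1.0, 0.0, 0.0))       # approximately (0, 1, 0)
```

Note the half angle in the example: a quaternion built from angle θ/2 rotates vectors by θ, which is why 45 degrees appears for a 90-degree rotation.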
http://mathhelpforum.com/algebra/156477-how-do-you-simplify-algebraically.html
# Math Help - How do you simplify this algebraically?

1. ## How do you simplify this algebraically?

Algebraically simplify: -x^(-1) +1-(x-1)(x^-2) to ((1/x)-1)^2

2. You have

$\displaystyle \frac{-1}{x}+1-\frac{x-1}{x^2}$

I would suggest starting by making a common denominator of $x^2$.

3. Remember that multiplying by negative powers is the same as dividing by that positive power. So you would have the equation of:

$\dfrac{-1}{x}+1-\dfrac{x-1}{x^2}$

Make each of them have a common denominator by multiplying by corresponding values of x:

$= \dfrac{-x}{x^2}+\dfrac{x^2}{x^2}-\dfrac{x-1}{x^2}$

$=\dfrac{-x+x^2-x+1}{x^2}$

$=\dfrac{x^2-2x+1}{x^2}$

$=\dfrac{(1-x)^2}{x^2}$

Note, x^2-2x+1 can be factorised to (x-1)^2 or (1-x)^2. I used (1-x)^2 because it helps in getting the final answer you gave.

$=\left(\dfrac{1-x}{x}\right)^2$

$=\left(\frac{1}{x}-\frac{x}{x}\right)^2$

$=\left(\frac{1}{x}-1\right)^2$

4. Hello, yess!

$\text{Simplify }\,-x^{-1} +1-(x-1)x^{-2}\:\text{ to }\:\left(\dfrac{1}{x}-1\right)^2$

We have:

$-\dfrac{1}{x} + 1 - \left(\dfrac{x-1}{x^2}\right) \;=\; -\dfrac{1}{x} + 1 - \left(\dfrac{x}{x^2} - \dfrac{1}{x^2}\right)$

$=\; -\dfrac{1}{x} + 1 - \dfrac{1}{x} + \dfrac{1}{x^2} \;=\; \dfrac{1}{x^2} - \dfrac{2}{x} + 1$

Factor: $\left(\dfrac{1}{x} - 1\right)^2$

5. Originally Posted by yess
Algebraically simplify: -x^(-1) +1-(x-1)(x^-2) to ((1/x)-1)^2

$-x^{-1}+1-(x-1)x^{-2}$

$=-\frac{1}{x}-(-1)-(x-1)x^{-2}$

$=-\left(\frac{1}{x}-1\right)+(1-x)x^{-2}$

$=-\left(\frac{1}{x}-1\right)+\frac{1}{x}(1-x)x^{-2}x$

$=-\left(\frac{1}{x}-1\right)+\left(\frac{1}{x}-1\right)x^{-1}$

$=\left(\frac{1}{x}-1\right)\left(x^{-1}-1\right)=\left(\frac{1}{x}-1\right)\left(\frac{1}{x}-1\right)$
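The simplification in the thread can be sanity-checked numerically by comparing both sides at a few sample points; agreement at several nonzero values of x strongly suggests the algebra is right (function names are mine):

```python
import math

def original(x):
    """The expression as posted: -x^(-1) + 1 - (x-1) * x^(-2)."""
    return -x ** -1 + 1 - (x - 1) * x ** -2

def simplified(x):
    """The target form: (1/x - 1)^2."""
    return (1 / x - 1) ** 2

# Compare both sides at several nonzero sample points.
for x in (-3.0, -0.5, 0.5, 2.0, 10.0):
    assert math.isclose(original(x), simplified(x))
```

For instance, at x = 2 both sides evaluate to 0.25.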
https://converter.ninja/volume/imperial-pints-to-us-tablespoons/737-imperialpint-to-ustablespoon/
# 737 imperial pints in US tablespoons

## Conversion

737 imperial pints is equivalent to 28323.2030431065 US tablespoons.[1]

## Conversion formula

How to convert 737 imperial pints to US tablespoons? We know (by definition) that:

$1\ \text{imperial pint} \approx 38.4303976161554\ \text{US tablespoons}$

We can set up a proportion to solve for the number of US tablespoons:

$\frac{1\ \text{imperial pint}}{737\ \text{imperial pints}} \approx \frac{38.4303976161554\ \text{US tablespoons}}{x\ \text{US tablespoons}}$

Now, we cross multiply to solve for our unknown $x$:

$x\ \text{US tablespoons} \approx \frac{737\ \text{imperial pints}}{1\ \text{imperial pint}} \times 38.4303976161554\ \text{US tablespoons} \approx 28323.203043106532\ \text{US tablespoons}$

Conclusion: $737\ \text{imperial pints} \approx 28323.203043106532\ \text{US tablespoons}$

## Conversion in the opposite direction

The inverse of the conversion factor is that 1 US tablespoon is equal to 3.53067412071315e-05 times 737 imperial pints. It can also be expressed as: 737 imperial pints is equal to $\frac{1}{3.53067412071315 \times 10^{-5}}$ US tablespoons.

## Approximation

An approximate numerical result would be: seven hundred and thirty-seven imperial pints is about twenty-eight thousand, three hundred and twenty-three point two zero US tablespoons, or alternatively, a US tablespoon is about 0.0000353 times seven hundred and thirty-seven imperial pints.

## Footnotes

[1] The precision is 15 significant digits (fourteen digits to the right of the decimal point). Results may contain small errors due to the use of floating point arithmetic.
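The conversion factor 38.4303976161554 can be reproduced from the exact volume definitions of the two units. A sketch; the gallon definitions below are assumed standard values, not stated on this page:

```python
# Exact volume definitions, in millilitres (assumed standard values):
# the imperial gallon is 4.54609 L and holds 8 pints;
# the US gallon is 3.785411784 L and holds 256 tablespoons.
IMPERIAL_PINT_ML = 4546.09 / 8        # 568.26125 mL
US_TABLESPOON_ML = 3785.411784 / 256  # about 14.7868 mL

def imperial_pints_to_us_tablespoons(pints):
    """Convert a volume in imperial pints to US tablespoons."""
    return pints * IMPERIAL_PINT_ML / US_TABLESPOON_ML

result = imperial_pints_to_us_tablespoons(737)  # about 28323.2
```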
http://www.maa.org/programs/faculty-and-departments/course-communities/browse?term_node_tid_depth=40566&page=6
# Browse Course Communities

Displaying 61 - 70 of 79

An informational, non-interactive website on planimeters. Has a nice graphical explanation of how linear and polar planimeters work.

Displays regions of integration in the plane for both rectangular and polar coordinates.

Works out the solution of a simple flux integral problem, together with commentary.

Euler was the first person to recognize that mixed second partial derivatives are equal (in general) -- except that he discussed differentials instead.

This page introduces students to the notion of particle motion and tangent (or velocity) vectors using text and accompanying animations.

There are two free multivariable calculus maplets in this site. The first is a quadric surface identification maplet.

Computes a line integral along a user-defined path in any of seven pre-defined vector fields.

Allows users to display both contour plots and surface plots of functions $$z=f(x,y)$$.

Activity designed to teach: understanding different ways of expressing area using integration. Concrete example of the Area Corollary to Green's/Stokes' Theorem.

A free OS X application for exploring parametrized surfaces. It features draggable tangent lines and tangent planes; geodesics at any point, in any direction; solid or wireframe rendering; draggable
https://www.physicsforums.com/threads/angular-speed-and-rotation-problem.103807/
# Angular speed and rotation problem 1. Dec 11, 2005 ### suspenc3 What is the angular speed $$\omega$$ about the polar axis of a point on earth's surface at a latitude of $$40^\circ$$? I know that the radius of the earth is $$6.37 \times 10^6 \, \text{m}$$. I also know that the earth rotates about this axis ($$40^\circ$$). I don't really understand what they're asking...obviously angular speed, but I can't picture it. What do I do? 2. Dec 12, 2005 ### Tide The Earth rotates about the polar axis - not the "40 deg" axis. Do you know the definition of angular velocity? Do you think latitude would matter? 3. Dec 12, 2005 ### Homer Simpson Are you sure you read the whole question properly? Look up angular velocity; it is a 'rotations per unit time' sort of thing. You can relate this to linear velocity, which may be what you are after: how fast you would be travelling, in distance per unit time, as observed from space. 4. Dec 13, 2005 ### andrevdh Angular speed is defined as $$\omega=\frac{\Delta \theta}{\Delta t}$$ where $\Delta \theta$ is the angle, in radians, that a point (on the surface of the earth at latitude 40 degrees in this case) rotates through during the time interval $\Delta t$. The earth rotates through $2\pi$ radians in 24 hours, irrespective of where on earth you are. 5. Dec 13, 2005 ### -Christastic- The way I would do this problem is to first assume a spherical earth for simplicity. Next draw a circle on a piece of paper; this is your cross section of the earth. Draw a line from the center of the earth to the equator: this is your earth radius (for reference only). Now make a second line, 40 degrees from the first, connecting the earth's center to the surface (this is also one earth radius). Now draw a line from the surface end of your 40-degree line directly "down" to your first line. You should now have a right triangle.
Solve for the bottom leg of this right triangle; that gives you the radial distance of the point at 40 degrees latitude from the earth's axis. The angular speed about the axis is just the rate of change of angle with time: the point sweeps through $2\pi$ radians in 24 hours regardless of latitude. If instead you want the linear (tangential) speed of the point, find the circumference of its circle of revolution, simply 2(pi)r, where r is that radial distance from the axis; then the linear speed is 2(pi)r/24hrs, which comes out in m/s once you convert the 24 hrs to seconds. Hope this helps, sorry if it's confusing
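To make the distinction in the replies concrete — angular speed about the polar axis is the same at every latitude, while the linear (tangential) speed is not — here is a quick numeric check using the thread's numbers (illustrative only; a 24-hour solar day is used, as in the thread, rather than the sidereal day):

```python
import math

# Angular speed: the Earth turns through 2*pi radians in one day, at every latitude.
# Linear speed: v = omega * r, where r = R_earth * cos(latitude) is the distance
# from the point to the rotation axis (the "bottom leg" of the right triangle above).
R_EARTH = 6.37e6          # m, as given in the thread
DAY = 24 * 3600.0         # s

omega = 2 * math.pi / DAY                  # rad/s, latitude-independent
r = R_EARTH * math.cos(math.radians(40))   # m, radius of the 40-degree latitude circle
v = omega * r                              # m/s, tangential speed at 40 degrees

print(omega)  # ≈ 7.27e-5 rad/s
print(v)      # ≈ 355 m/s
```

So the answer to the question as posed is just omega ≈ 7.27 × 10⁻⁵ rad/s; the latitude only matters if the problem goes on to ask for the linear speed.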
http://www.distance-calculator.co.uk/towns-within-a-radius-of.php?t=Vorselaar&c=Belgium
Cities, Towns and Places within a 60 mile radius of Vorselaar, Belgium Get a list of towns within a 60 mile radius of Vorselaar, or between two set distances; click on the markers in the satellite map to get maps and road trip directions. Showing places between 10 and 60 miles of Vorselaar: s-gravenwezel, Belgium is 10 miles away Beerzel, Belgium is 10 miles away Booischot, Belgium is 10 miles away Brand, Belgium is 10 miles away Den Veind, Belgium is 10 miles away Geel, Belgium is 10 miles away Hout, Belgium is 10 miles away Houtem, Belgium is 10 miles away Houthem, Belgium is 10 miles away Houtvenne, Belgium is 10 miles away Koningshooikt, Belgium is 10 miles away Kruis, Belgium is 10 miles away Lierre, Belgium is 10 miles away Rijkevorsel, Belgium is 10 miles away Stelen, Belgium is 10 miles away Vosselaar, Belgium is 10 miles away Westerlo, Belgium is 10 miles away Westerloo, Belgium is 10 miles away Westmeerbeek, Belgium is 10 miles away Wijnegem, Belgium is 10 miles away Eindhoven, Belgium is 11 miles away Goor, Belgium is 11 miles away Itterbeek, Belgium is 11 miles away Kapelleveld, Belgium is 11 miles away Putte, Belgium is 11 miles away Sint-job-in- t-goor, Belgium is 11 miles away Sint-lenaarts, Belgium is 11 miles away Turnhout, Belgium is 11 miles away Vremde, Belgium is 11 miles away Wommelgem, Belgium is 11 miles away Boechout, Belgium is 12 miles away Borsbeek, Belgium is 12 miles away Brecht, Belgium is 12 miles away
Eindhout, Belgium is 12 miles away Heezemeir Heide, Belgium is 12 miles away Herselt, Belgium is 12 miles away Hulst, Belgium is 12 miles away Lint, Belgium is 12 miles away Meer, Belgium is 12 miles away Merksplas, Belgium is 12 miles away Oud-turnhout, Belgium is 12 miles away Pijpelheide, Belgium is 12 miles away Ramsel, Belgium is 12 miles away Schoten, Belgium is 12 miles away Schriek, Belgium is 12 miles away Wezel, Belgium is 12 miles away Begijnendijk, Belgium is 13 miles away Bel, Belgium is 13 miles away Bell, Belgium is 13 miles away Borgeind, Belgium is 13 miles away Deurne, Belgium is 13 miles away Duffel, Belgium is 13 miles away Eigen, Belgium is 13 miles away Grand, Belgium is 13 miles away Grootlo, Belgium is 13 miles away Haag, Belgium is 13 miles away Hove, Belgium is 13 miles away Madestraat, Belgium is 13 miles away Mortsel, Belgium is 13 miles away Onze-lieve-vrouw-waver, Belgium is 13 miles away Pontfort, Belgium is 13 miles away Regenboog, Belgium is 13 miles away Wolfsdonk, Belgium is 13 miles away Achterbos, Belgium is 14 miles away Baal, Belgium is 14 miles away Berg, Belgium is 14 miles away Edegem, Belgium is 14 miles away Hoogstraten, Belgium is 14 miles away Kontich, Belgium is 14 miles away Langdorp, Belgium is 14 miles away List, Belgium is 14 miles away Meerhout, Belgium is 14 miles away Merksem, Belgium is 14 miles away Mol, Belgium is 14 miles away Nieuwstraat, Belgium is 14 miles away Parwijs, Belgium is 14 miles away Peulis, Belgium is 14 miles away Retie, Belgium is 14 miles away Rooy, Belgium is 14 miles away Schoonbroek, Belgium is 14 miles away Sint-katelijne-waver, Belgium is 14 miles away Veerle, Belgium is 14 miles away Wortel, Belgium is 14 miles away Zurenborg, Belgium is 14 miles away Aarschot, Belgium is 15 miles away Antwerp, Belgium is 15 miles away Antwerpen, Belgium is 15 miles away Anvers, Belgium is 15 miles away Averbode, Belgium is 15 miles away Betekom, Belgium is 15 miles away Bonheiden, Belgium is 15 miles 
away Borgerhout, Belgium is 15 miles away Boterstraat, Belgium is 15 miles away Brasschaat, Belgium is 15 miles away Dessel, Belgium is 15 miles away Eikelbos, Belgium is 15 miles away Ghil, Netherlands is 15 miles away Hanewijk, Belgium is 15 miles away Keerbergen, Belgium is 15 miles away Kruisweg, Belgium is 15 miles away
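Listings like the one above can be generated by filtering a gazetteer with the great-circle (haversine) distance. A minimal sketch, using approximate coordinates for Vorselaar and Antwerp that are assumed for illustration (they are not taken from this site):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points,
    assuming a spherical Earth of radius 3959 miles."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 3959 * math.asin(math.sqrt(a))

# Approximate coordinates (assumed for illustration):
vorselaar = (51.20, 4.77)
antwerp = (51.22, 4.40)
d = haversine_miles(*vorselaar, *antwerp)
print(d)  # Antwerp is listed above as 15 miles from Vorselaar
```

A radius query is then just a filter: keep every town whose haversine distance from the center falls inside the requested band (e.g. between 10 and 60 miles).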
http://mathhelpforum.com/math-topics/208565-chemistry-ppm.html
# Math Help - Chemistry PPM 1. ## Chemistry PPM .....The maximum amount of nitrate ion is 10 ppm for infants. a) If an infant weighs 4.0 kg, how much nitrate ion can he consume? This is what I did: Mass of nitrate = 4.0 kg * 10 mg/1 kg = 40 mg. 10 mg/1 kg is equivalent to 10 ppm, as I believe. Any help is really appreciated! I have no idea what to do because I had a supply teacher today who did not teach this to us, and yes I will ask my teacher for help tomorrow. 2. ## Re: Chemistry PPM Originally Posted by sakonpure6 .....The maximum amount of nitrate ion is 10 ppm for infants. a) If an infant weighs 4.0 kg, how much nitrate ion can he consume? ... Looks good to me. -Dan 3. ## Re: Chemistry PPM Since the calculation is based on mass, I believe you are correct in assuming mg/kg = ppm. 4. ## Re: Chemistry PPM Originally Posted by sakonpure6 .....The maximum amount of nitrate ion is 10 ppm for infants. a) If an infant weighs 4.0 kg, how much nitrate ion can he consume? ... If the allowance is 10 mg per kg of weight per day, then the infant can consume 40 mg per day.
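The mass-based ppm conversion used in the thread can be written as a one-liner; the function name is made up for illustration:

```python
# For a mass-based concentration, 1 ppm = 1 mg of solute per kg of body mass,
# so the daily limit scales linearly with weight.
def max_nitrate_mg(weight_kg, limit_ppm=10):
    """Maximum nitrate intake in mg for a mass-based ppm limit."""
    return weight_kg * limit_ppm  # (kg) * (mg/kg) = mg

print(max_nitrate_mg(4.0))  # 40.0 mg, matching the thread's answer
```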
https://eprints.soton.ac.uk/8729/
The University of Southampton University of Southampton Institutional Repository # Circulation and volume flux of the North Atlantic using synoptic hydrographic data in a Bernoulli inverse Cunningham, S.A. (2000) Circulation and volume flux of the North Atlantic using synoptic hydrographic data in a Bernoulli inverse. Journal of Marine Research, 58 (1), 1-35. Record type: Article ## Abstract A new formulation of the Bernoulli inverse is used to determine the circulation and flux of the North Atlantic from synoptic CTD observations. The inverse assumes that the compressible Bernoulli function, modified potential temperature and salinity are conserved on streamlines in steady, geostrophic, hydrostatic, mass and density conserving flow. Crossings between CTD stations in the distribution of modified potential temperature versus salinity define a set of streamlines. The Bernoulli function is conserved along streamlines and the difference in the Bernoulli function at each crossing is related to the contribution of the unknown sea-surface height (SSH) to the Bernoulli function. These crossings form a set of overdetermined simultaneous linear equations which we solve using a singular value decomposition. A covariance matrix of the SSH solution gives a good estimate of the SSH solution error. From the SSH we calculate a barotropic reference velocity which is added to the baroclinic velocity from the observed density distribution giving the total geostrophic velocity. The inverse is tested using output from the Ocean Circulation and Climate Advanced Model (OCCAM) where the SSH is known a priori: the mean SSH error is 2.2 cm, corresponding to a velocity error of ~1 cm/s for stations separated by 300 km. Inverses of the CTD data have a significantly smaller mean SSH error of 1.2 cm which corresponds to a velocity error of ~0.5 cm/s. 
Solutions are sensitive to the inclusion of deep crossings which result from observational error as a consequence of small meridional salinity gradients in North Atlantic Deep Water. The inverse is better than a classical dynamic height analysis of the data, by which we mean that the variance of the inverse circulation at depth is greater than the error variance. In the upper ocean (shallower than σ2 = 36.873, which is the top of the Labrador Sea Water) the North Atlantic Current and west wind drift transport 35 ± 4 Sv eastward between 39N and 54N. In the eastern North Atlantic the North Atlantic Current turns northward, west of 20W, with a flux of 14 ± 3 Sv into the Iceland Basin west of the Rockall-Hatton Plateau. The depth-integrated flux of the subpolar gyre in the Irminger Basin is 16 ± 6 Sv fed equally from sources in the western North Atlantic and from flow which crosses the Reykjanes Ridge east to west from the eastern North Atlantic. The circulation at the depth of the Labrador Sea Water is a mid-depth minimum and is generally dominated by error estimates. However, the flux west to east across the Mid-Atlantic Ridge is 3 ± 2 Sv. A simple estimate of the mean flushing time of the Labrador Sea Water layer in the eastern North Atlantic is ~16 years. At depth, over the Porcupine Abyssal Plain the inverse puts in place a basin scale cyclonic circulation and we note agreement with mean circulation rates of 1 to 2 cm/s derived from current meters. Full text not available from this repository. Published date: 2000 Keywords: WOCE, NORTH ATLANTIC OCEAN, OCEAN CIRCULATION, CTD OBSERVATIONS, BERNOULLI FUNCTION, OCEANOGRAPHIC DATA ## Identifiers Local EPrints ID: 8729 URI: http://eprints.soton.ac.uk/id/eprint/8729 ISSN: 0022-2402 PURE UUID: e175e92f-e696-4f4f-9e55-60ec9d4e4268 ## Catalogue record Date deposited: 24 Aug 2004 ## Contributors Author: S.A. Cunningham
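The abstract describes solving an overdetermined set of simultaneous linear equations by singular value decomposition. As an illustration only (toy numbers, not the paper's streamline-crossing equations), the core numerical step looks like:

```python
import numpy as np

# Overdetermined system A x = b solved in the least-squares sense via the SVD
# (this is what numpy.linalg.lstsq does internally).  A and b are toy values.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))    # 20 equations, 3 unknowns
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)  # small "observational" noise

U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_hat = Vt.T @ ((U.T @ b) / s)      # pseudoinverse (minimum-norm least-squares) solution

print(x_hat)  # close to [1.0, -2.0, 0.5]
```

The paper's solution covariance estimate follows from the same decomposition: the singular values s control how observational noise in b propagates into the recovered sea-surface heights.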
https://ashpublications.org/blood/article/128/22/3462/98396/Autologous-Stem-Cell-Transplantation-with-Thiotepa
Introduction High dose chemotherapy (HDC) followed by autologous stem cell transplantation (ASCT) has been adopted as an effective treatment in patients with relapsed or refractory primary central nervous system lymphoma (PCNSL) and also has been proposed as a consolidative treatment option for newly diagnosed PCNSL. HDC-ASCT may overcome chemoresistance mediated by the blood-brain barrier by affording higher drug concentrations in the central nervous system. We investigated the feasibility of thiotepa, busulfan, and cyclophosphamide (TBC) conditioning followed by ASCT in patients with PCNSL. Method Between December 2012 and July 2015, a total of 27 patients with PCNSL underwent TBC conditioning followed by ASCT. Those with a complete or partial response after induction chemotherapy or salvage chemotherapy proceeded with TBC conditioning followed by ASCT. TBC conditioning consisted of thiotepa 250 mg/m2 on days -9 to -7, busulfan 3.2 mg/kg on days -6 to -4, and cyclophosphamide 60 mg/kg on days -3 to -2. Event free survival (EFS) was defined from the date of transplant to the date of relapse, progression or any cause of death, while overall survival (OS) was calculated from the date of transplant to death. Result Baseline characteristics are summarized in Table 1. Twenty patients received TBC conditioning followed by ASCT as a consolidative therapy after high-dose methotrexate-based induction chemotherapy and the other 7 patients received TBC conditioning followed by ASCT after salvage chemotherapy due to relapsed or refractory disease. The median times to neutrophil recovery (absolute neutrophil count >500/µL) and platelet recovery (platelet count >20 × 10³/µL) were 8 (range, 7-9) and 8 (range, 4-15) days, respectively. All 27 patients experienced febrile neutropenia, and 33.3% of patients (9/27) and 7.4% of patients (2/27) had documented bacterial and viral infection, respectively.
Commonly observed nonhematologic grade 3 or 4 toxicities were mucositis (63%), diarrhea (59.3%) and nausea (25.9%). The 100-day transplant-related mortality rate was 0%. With a median follow-up duration of 27.8 months (range 6.7-42.6), median EFS and OS were not reached. The 2-year EFS and OS estimates were 76.8% (95% CI: 68.4-85.2) and 88.9% (95% CI: 82.9-94.9), respectively (Figure 1). Conclusion ASCT with TBC conditioning appears to be feasible in patients with PCNSL. Although survival outcomes are encouraging, longer follow-up is required. Further studies are warranted to investigate the role of ASCT with TBC conditioning in both clinical settings of consolidative treatment of newly diagnosed PCNSL and salvage treatment of relapsed or refractory PCNSL. Table 1 Baseline characteristics (n=27). *Conventional cytology; flow cytometry not performed. $The cutoff for normal CSF protein concentration was 45 mg/dL in patients ≤ 60 years old and 60 mg/dL in patients more than 60 years old. *MSK RPA, Memorial Sloan-Kettering prognostic score determined by recursive partitioning. $Periventricular, basal ganglia, brainstem and cerebellar lesion. Figure 1 Event-free survival and overall survival. Disclosures No relevant conflicts of interest to declare. ## Author notes * Asterisk with author names denotes non-ASH members.
https://en.wikipedia.org/wiki/Routing_transit_number
# ABA routing transit number (Redirected from Routing transit number) In the United States, an ABA routing transit number (ABA RTN) is a nine-digit code printed on the bottom of checks to identify the financial institution on which it was drawn. The American Bankers Association (ABA) developed the system in 1910[1] to facilitate the sorting, bundling, and delivery of paper checks to the drawer's (check writer's) bank for debit to the drawer's account. Newer electronic payment methods continue to rely on ABA RTNs to identify the paying bank or other financial institution. The Federal Reserve Banks use ABA RTNs in processing Fedwire funds transfers. The ACH Network also uses ABA RTNs in processing direct deposits, bill payments, and other automated money transfers. ## Management Since 1911, the American Bankers Association has partnered with a series of registrars, currently Accuity, to manage the ABA routing number system.[2] Accuity is the Official Routing Number Registrar and is responsible for assigning ABA RTNs and managing the ABA RTN system. Accuity publishes the American Bankers Association Key to Routing Numbers semi-annually. The "Key Book" contains the listing of all ABA RTNs that have been assigned. There are approximately 26,895 active ABA RTNs currently in use.[3] Every financial institution in the United States has at least one. The Routing Number Policy allows for up to five ABA RTNs to be assigned to a financial institution. Many institutions have more than five ABA RTNs as a result of mergers. ABA RTNs are only for use in payment transactions within the United States. They are used on paper checks, wire transfers, and ACH transactions. On a paper check, the ABA RTN is usually the middle set of nine numbers printed at the bottom of the check. Domestic transfers that use the ABA RTN will usually be returned to the paying bank.
Incoming international wire transfers also use a BIC code, also known as a SWIFT code, as they are administered by the Society for Worldwide Interbank Financial Telecommunication (SWIFT) and defined by ISO 9362. In addition, many international financial institutions use an IBAN code. The IBAN was originally developed to facilitate payments within the European Union, but the format is flexible enough to be applied globally. It consists of an ISO 3166-1 alpha-2 country code, followed by two check digits that are calculated using a mod-97 technique, and a Basic Bank Account Number (BBAN) with up to thirty alphanumeric characters. The BBAN includes the domestic bank account number and potentially routing information. The national banking communities decide individually on a fixed length for all BBANs in their country. ## History In 1911 the new Burroughs Transit Machine, made by the Burroughs Adding Machine Company, was advertised in the first edition of the bank directory Key to Numerical System of the American Bankers' Association.[4] The bank numbers in the United States were originated by the American Bankers Association (ABA) in 1911. Banks had been disagreeing on identification. The ABA arranged a meeting of clearing house managers in Chicago in December 1910. The gathering chose a committee to assign each bank in the country convenient numbers to use. In May 1911, the American Bankers Association released the codes.[5] The numerical committee was W. G. Schroeder, C. R. McKay, and J. A. Walker.[6] The publisher of the new directory was Rand-McNally and Company.[7] The ABA clearing house codes are like the sub-headings in a decimal outline. The prefixes denote locations and the suffixes banking firms within those locations. Half of the prefixes represent major cities; the other half represent regions of the United States. Lower prefixes are used for higher populations, first based on the 1910 U. S. Census.
Likewise, within each prefix area banks are numbered in order of city population and bank seniority, although single-bank towns are numbered in alphabetical order. When a new bank is being organized, the current publisher of the directory of banks assigns it a transit code.[8] The American Bankers Association asked banks to use the directory exclusively so banks would agree on how to sort checks.[9] The book was abbreviated as Key to Numerical System of The American Bankers Association, or simply the Key. It was published by Rand McNally & Co.[10] In 1952 Rand McNally moved its corporate headquarters to Skokie, Illinois, and became more interested in publishing maps.[11] Also in Skokie is a company called Accuity, which has been the official registrar of ABA bank numbers since 1911. By 2014 it was the publisher of the semi-annual ABA Key to Routing Numbers and was owned by Reed Business Information, a British publisher of reference works for professionals, which in turn is owned by Reed Elsevier, an English-Dutch publisher of online reference works for professionals.[12][13] Over the years the ABA's identification numbers for banks accommodated the Federal Reserve Act, the Expedited Funds Availability Act and the Check 21 Act. By 2014 the Key included the U.S. Federal Reserve's nine-digit magnetic-ink routing numbers.[14] ## Formats A check showing the fraction form (top middle-right, 11-3167/1210 plus branch number 01) and MICR form (bottom left, 129131673) of the transit number. The ABA RTN appears in two forms on a standard check – the fraction form and the MICR (magnetic ink character recognition) form.[15] Both forms give essentially the same information, though there are slight differences. The MICR form is the main one – printed in magnetic ink and machine-readable, it appears at the bottom left of a check and consists of nine digits, sometimes called Routing Transit Numbers, ABA Route Numbers, or RTNs.
The fraction form was used for manual processing before the invention of the MICR line, and still serves as a backup in check processing should the MICR line become illegible or torn; it generally appears in the upper right part of a check near the date. The MICR number is of the form XXXXYYYYC, where XXXX is the Federal Reserve Routing Symbol, YYYY is the ABA Institution Identifier, and C is the Check Digit, while the fraction is of the form PP-YYYY/XXXX, where PP is a 1- or 2-digit Prefix, no longer used in processing but still printed, representing the bank's check processing center location: 1 through 49 for processing centers located in a major city, and 50 through 99 indicating that processing is done at a non-major city in a particular state. Sometimes a branch number or the account number is printed below the fraction form; the branch number is not used in processing, while the account number is listed in MICR form at the bottom. Further, the Federal Reserve Routing Symbol and ABA Institution Identifier may have fewer than 4 digits in the fraction form. The essential data, shared by both forms, are the Federal Reserve Routing Symbol (XXXX) and the ABA Institution Identifier (YYYY), and these are usually the same in both the fraction form and the MICR, with only the order and format switched (and left-padded with 0s to ensure that they are 4 digits long). The prefix and the Federal Reserve Routing Symbol (XXXX) are determined by the bank's geographical location and its treatment by the Federal Reserve, while the remaining data (YYYY, and Branch number, if present) depend on the specific bank, and are unique within a Federal Reserve district.
In the check depicted above right, the fraction form is 11-3167/1210 (with 01 below it) and the MICR form is 129131673, which are analyzed as follows: • the prefix 11 corresponds to San Francisco, • 3167 (common to both) is the ABA Institution Identifier, • 1210 and 1291 are the Federal Reserve Routing Symbols (generally equal, here different probably due to obfuscation), with the initial "12" corresponding to the Federal Reserve Bank of San Francisco, the third digits ("1" and "9") corresponding to check processing centers, and the fourth digits ("0" and "1") corresponding to where the bank is located – "0" indicates "in the Federal Reserve city of San Francisco", while "1" indicates "in the state of California", • the final "3" in the MICR is the check digit, and • the "01" below the fraction form is the branch number. In the case of a MICR line that is illegible or torn, the check can still be processed without the check digit. Typically, a repair strip or sleeve is attached to the check, then a new MICR line is imprinted. Either 021200025 or 0212-0002 (with a hyphen, but no check digit) may be printed, both nine characters long. The former (with check digit) is preferred to ensure better accuracy, but requires computing the check digit, while the latter is easily determined by inspection of the fraction, with minimal clerical handling. ### MICR Routing number format The MICR routing number consists of 9 digits: XXXXYYYYC, where XXXX is the Federal Reserve Routing Symbol, YYYY is the ABA Institution Identifier, and C is the Check Digit. #### Federal Reserve The Federal Reserve uses the ABA RTN system for processing its customers' payments. The ABA RTNs were originally assigned in the systematic way outlined below, reflecting a financial institution's geographical location and internal handling by the Federal Reserve.
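The XXXXYYYYC layout described above can be split mechanically. A minimal sketch (the function name is made up for illustration), applied to the MICR number from the check example:

```python
# Split a nine-digit MICR routing number into its XXXXYYYYC fields.
def parse_micr_rtn(rtn):
    """Return (federal_reserve_routing_symbol, aba_institution_id, check_digit)."""
    assert len(rtn) == 9 and rtn.isdigit()
    return rtn[:4], rtn[4:8], rtn[8]

print(parse_micr_rtn("129131673"))  # ('1291', '3167', '3'), matching the analysis above
```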
Following consolidation of the Federal Reserve's check processing facilities, and consolidation in the banking industry, the RTN a financial institution uses may not reflect the "Fed District" where the financial institution's place of business is located. Check processing is now centralized at the Federal Reserve Bank of Atlanta.[16]

The first two digits of the nine-digit RTN must be in the ranges 00 through 12, 21 through 32, 61 through 72, or 80. The digits are assigned as follows:

• 00 is used by the United States Government.
• 01 through 12 are the "normal" routing numbers, and correspond to the 12 Federal Reserve Banks. For example, 0260-0959-3 is the routing number for Bank of America incoming wires in New York, with the initial "02" indicating the Federal Reserve Bank of New York.
• 21 through 32 were assigned only to thrift institutions (e.g. credit unions and savings banks) through 1985, but are no longer assigned (thrifts are now assigned normal 01–12 numbers). They are still used by the thrift institutions, or their successors, and correspond to the normal routing number plus 20. (For example, 2260-7352-3 is the routing number for Grand Adirondack Federal Credit Union in New York, with the initial "22" corresponding to "02" (New York Fed) plus "20" (thrift).)
• 61 through 72 are special-purpose routing numbers designated for use by non-bank payment processors and clearinghouses, termed Electronic Transaction Identifiers (ETIs), and correspond to the normal routing number plus 60.
• 80 is used for traveler's checks.

The first two digits correspond to the 12 Federal Reserve Banks as follows:

| Primary (01–12) | Thrift (+20) | Electronic (+60) | Federal Reserve Bank |
|-----------------|--------------|------------------|----------------------|
| 01 | 21 | 61 | Boston |
| 02 | 22 | 62 | New York |
| 03 | 23 | 63 | Philadelphia |
| 04 | 24 | 64 | Cleveland |
| 05 | 25 | 65 | Richmond |
| 06 | 26 | 66 | Atlanta |
| 07 | 27 | 67 | Chicago |
| 08 | 28 | 68 | St. Louis |
| 09 | 29 | 69 | Minneapolis |
| 10 | 30 | 70 | Kansas City |
| 11 | 31 | 71 | Dallas |
| 12 | 32 | 72 | San Francisco |

The third digit corresponds to the Federal Reserve check processing center originally assigned to the bank.[16] The fourth digit is "0" if the bank is located in the Federal Reserve city proper, and otherwise is 1–9, according to which state in the Federal Reserve district the bank is in.[16]

#### ABA Institution Identifier

The fifth through eighth digits constitute the bank's unique ABA identity within the given Federal Reserve district.[16]

#### Check digit

The ninth (check) digit provides a checksum test using a position-weighted sum of the digits. High-speed check-sorting equipment will typically verify the checksum and, if it fails, route the item to a reject pocket for manual examination, repair, and re-sorting. Mis-routings to an incorrect bank are thus greatly reduced. The following condition must hold:[15]

$$3(d_1 + d_4 + d_7) + 7(d_2 + d_5 + d_8) + (d_3 + d_6 + d_9) \equiv 0 \pmod{10}$$

(Mod, or modulo, is the remainder of a division operation.) In terms of weights, this is 3 7 1 3 7 1 3 7 1. This allows one to catch any single-digit error (incorrectly inputting one digit), together with most transposition errors. The weights 1, 3, and 7 are used because they (together with 9) are coprime to 10; a coefficient divisible by 2 or 5 would lose information (because $5 \cdot 0 \equiv 5 \cdot 2 \equiv 5 \cdot 4 \equiv 5 \cdot 6 \equiv 5 \cdot 8 \equiv 0 \pmod{10}$), and thus would not catch some substitution errors. These weights do not catch transpositions of two digits that differ by 5 (0 and 5, 1 and 6, 2 and 7, 3 and 8, 4 and 9), but catch other transposition errors.[citation needed]

As an example, consider 111000025 (which is a valid routing number of Bank of America in Virginia).
Applying the formula, we get:

$$3(1+0+0) + 7(1+0+2) + (1+0+5) = 30 \equiv 0 \pmod{10}$$

#### Routing symbol

The symbol that delimits a routing transit number is the MICR E-13B transit character ⑆, with Unicode value U+2446.

### Fraction format

The fraction form looks like a fraction, with a numerator and a denominator.

The numerator consists of two parts separated by a dash. The prefix (no longer used in check processing, yet still printed on most checks) is a 1- or 2-digit code (P or PP) indicating the region where the bank is located. The numbers 1 to 49 denote cities, assigned by city size in 1910; the numbers 50 to 99 denote states, assigned in a rough spatial geographic order, and are used for banks located outside one of the 49 numbered cities. The second part of the numerator (after the dash) is the bank's ABA Institution Identifier, which also forms digits 5 to 8 of the nine-digit routing number (YYYY).

The denominator is also part of the routing number: by adding leading zeroes to make up four digits where necessary (e.g. 212 is written as 0212, 31 is written as 0031, etc.), it forms the first four digits of the routing number (XXXX).

There might also be a fourth element printed to the right of the fraction: the bank's branch number. It is not included in the MICR line, and would only be used internally by the bank, e.g. to show where the signature card is located, or where to contact the responsible officer in case of an overdraft.

For example, a check from Wachovia Bank in Yardley, PA, has a fraction of 55-2/212 and a routing number of 021200025. The prefix (55) no longer has any relevance, but from the remainder of the fraction, the first 8 digits of the routing number (02120002) can be determined, and the check digit (the last digit, 5 in this example) can then be calculated using the check-digit formula (giving 021200025).

#### ABA Prefix Table

This table is up to date as of 2020.
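The fraction-to-routing-number reconstruction and the 3-7-1 checksum described above can be sketched in a few lines of Python; the function names here are illustrative, not from any banking library:

```python
def check_digit(d8):
    """Compute the ninth (check) digit for the first 8 digits of an
    ABA routing number, using the 3-7-1-3-7-1-3-7 position weights."""
    d = [int(c) for c in d8]
    partial = 3 * (d[0] + d[3] + d[6]) + 7 * (d[1] + d[4] + d[7]) + (d[2] + d[5])
    return (10 - partial % 10) % 10  # digit that makes the total divisible by 10

def routing_from_fraction(fraction):
    """Rebuild the full 9-digit routing number from a fraction such as
    '55-2/212': denominator zero-padded to XXXX, identifier to YYYY."""
    numerator, denominator = fraction.split("/")
    _prefix, identifier = numerator.split("-")  # prefix is not used in processing
    first8 = denominator.zfill(4) + identifier.zfill(4)
    return first8 + str(check_digit(first8))
```

For the Wachovia example in the text, `routing_from_fraction("55-2/212")` reproduces 021200025.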
One weakness of the current routing table arrangement is that various territories – American Samoa, Guam, Puerto Rico, and the US Virgin Islands – share the same routing prefix.

| Prefix | Location |
|--------|----------|
| 1 | New York, NY |
| 2 | Chicago, IL |
| 3 | Philadelphia, PA |
| 4 | St. Louis, MO |
| 5 | Boston, MA |
| 6 | Cleveland, OH |
| 7 | Baltimore, MD |
| 8 | Pittsburgh, PA |
| 9 | Detroit, MI |
| 10 | Buffalo, NY |
| 11 | San Francisco, CA |
| 12 | Milwaukee, WI |
| 13 | Cincinnati, OH |
| 14 | New Orleans, LA |
| 15 | Washington, D.C. |
| 16 | Los Angeles, CA |
| 17 | Minneapolis, MN |
| 18 | Kansas City, MO |
| 19 | Seattle, WA |
| 20 | Indianapolis, IN |
| 21 | Louisville, KY |
| 22 | St. Paul, MN |
| 23 | Denver, CO |
| 24 | Portland, OR |
| 25 | Columbus, OH |
| 26 | Memphis, TN |
| 27 | Omaha, NE |
| 28 | Spokane, WA |
| 29 | Albany, NY |
| 30 | San Antonio, TX |
| 31 | Salt Lake City, UT |
| 32 | Dallas, TX |
| 33 | Des Moines, IA |
| 34 | Tacoma, WA |
| 35 | Houston, TX |
| 36 | St. Joseph, MO |
| 37 | Fort Worth, TX |
| 38 | Savannah, GA |
| 39 | Oklahoma City, OK |
| 40 | Wichita, KS |
| 41 | Sioux City, IA |
| 42 | Pueblo, CO |
| 43 | Lincoln, NE |
| 44 | Topeka, KS |
| 45 | Dubuque, IA |
| 46 | Galveston, TX |
| 47 | Cedar Rapids, IA |
| 48 | Waco, TX |
| 49 | Muskogee, OK |
| 50 | New York |
| 51 | Connecticut |
| 52 | Maine |
| 53 | Massachusetts |
| 54 | New Hampshire |
| 55 | New Jersey |
| 56 | Ohio |
| 57 | Rhode Island |
| 58 | Vermont |
| 59 | Hawaii |
| 60 | Pennsylvania |
| 61 | Alabama |
| 62 | Delaware |
| 63 | Florida |
| 64 | Georgia |
| 65 | Maryland |
| 66 | North Carolina |
| 67 | South Carolina |
| 68 | Virginia |
| 69 | West Virginia |
| 70 | Illinois |
| 71 | Indiana |
| 72 | Iowa |
| 73 | Kentucky |
| 74 | Michigan |
| 75 | Minnesota |
| 77 | North Dakota |
| 78 | South Dakota |
| 79 | Wisconsin |
| 80 | Missouri |
| 81 | Arkansas |
| 83 | Kansas |
| 84 | Louisiana |
| 85 | Mississippi |
| 86 | Oklahoma |
| 87 | Tennessee |
| 88 | Texas |
| 90 | California |
| 91 | Arizona |
| 92 | Idaho |
| 93 | Montana |
| 95 | New Mexico |
| 96 | Oregon |
| 97 | Utah |
| 98 | Washington |
| 99 | Wyoming |
| 101 | American Samoa, Guam, Puerto Rico, Virgin Islands |

Canada has a similar, but different, transaction routing structure.

## References

1. ^ Bankers' Hotline 2004
2. ^ "Archived copy". Archived from the original on April 3, 2008. Retrieved March 11, 2008.
3. ^ "Archived copy". Archived from the original on August 28, 2008. Retrieved March 30, 2009.
4. ^ McNally, pp. 497–512
5. ^ McNally, p. V
6. ^ McNally, p. VIII
7. ^ McNally, p. III
8. ^ McNally, pp. V–VI
9. ^ McNally, pp. VI–VIII
10. ^ McNally, p. VI
11. ^ RM Acq, p. Our History
12. ^ Acuity, Bankers', p. About us
13. ^ Reed Elsevier, p. Our history
14. ^ ABA, p. Key to Routing Numbers—Accuity
15. ^ a b
16. ^ a b c d
https://www.physicsforums.com/threads/elements-that-can-be-plasmas.762509/
# Elements that can be plasmas

1. ### iwant2beoz

I'm not sure where to post this, so forgive me if this is the wrong place. Can any element be a plasma, or only certain ones? Could you make iron plasma?

2. ### SteamKing (Staff Emeritus)

A plasma is another state of matter, like the familiar solid, liquid, or gaseous states. Any element can enter this state under the right conditions. The matter in stars, like the sun, largely exists as a plasma.

http://en.wikipedia.org/wiki/Plasma_(physics)

3. ### iwant2beoz

That's what I thought. So when, say, a metal plasma decays, does it recrystallise into a solid?

4. ### mathman

As the plasma cools down, the electrons will be captured by the positive ions, forming neutral atoms. What happens next depends on physical conditions, such as temperature and pressure.

5. ### iwant2beoz

So theoretically one could heat a metal to a gas, ionize it, then use it to coat a surface with a thin layer of metal?

6. ### UltrafastPED

Yes. This is done in various types of plating systems. The conditions must be carefully controlled.

7. ### taregg

Plasma is just for the gas phase?
http://seismo.berkeley.edu/annual_report/ar08_09/node18.html
# Statistical Testing of Theoretical Rupture Models Against Kinematic Inversions

Gilead Wurman, Richard M. Allen, Douglas S. Dreger, and David D. Oglesby (UC Riverside)

# Introduction

The process by which earthquake ruptures initiate and propagate is usually expressed as one of two broadly-defined mechanisms: the cascade model and the preslip model. There is a diversity of modeling results that alternately support either cascade- or preslip-type rupture. At least some of the disagreement between the studies may be due to the high degree of variability among kinematic slip inversions, even for the same event. The recent development of the SRCMOD database makes it possible to examine many fault models in a statistical fashion to suppress the effects of this variability. Although the slip at the beginning of rupture is poorly resolved in kinematic inversions, using such a large number of events allows us to make first-order observations of any relationships.

# Method

We examine 152 inversions of 80 different earthquakes in the SRCMOD database (http://www.seismo.ethz.ch/srcmod) as well as 7 teleseismic and 8 joint geodetic/teleseismic inversions provided by M. E. Pritchard (Pritchard et al., 2006, 2007; Pritchard and Fielding, 2008; Loveless et al., in review), for a total of 167 inversions and 95 events. We take the final slip distribution for each inversion and reconstruct the time-evolution of slip on the fault from rupture time and rise time information. We calculate the moment release within a given time window by summing the moment based on inferred slip at each grid point. For models with point-wise rupture time or rise time data, we initiate slip on each grid point at the associated rupture time, and increase slip to the final amount in a linear ramp over the associated rise time.
In models where one or both of these parameters is not recorded point-wise, we take the reported average rupture velocity or rise time and assume a rupture front expanding isotropically from the hypocenter.

# Relationship between early and final moment

The cascading hypothesis implies that at any given time after rupture initiation all earthquakes have the same magnitude. We can approximate this magnitude via the standard moment relation $M_0 = \mu A \bar{D}$ (in dyne-cm), where $\mu$ is the rigidity, $A$ is the fault area ruptured after $t$ seconds at the assumed rupture velocity (in cm/s), and $\bar{D}$ is the mean slip. We find that a source duration of 1 second corresponds to a moment equivalent to approximately magnitude 4.9. Thus in the cascading end-member case, any earthquake larger than magnitude 5 should look like a magnitude 5 at one second after nucleation.

We can visualize this null hypothesis in Figure 2.30a. The solid diagonal line represents the limit in which the initial magnitude after 1 second is equal to the final magnitude, meaning the rupture has propagated to completion. Since there is no way for the initial magnitude to exceed the final magnitude, no points may lie above this solid line. This can potentially introduce a spurious positive slope to the data, and we minimize this by culling all data points for which the final magnitude is less than the reference magnitude. The hypothesis for a deterministic model is that there will be some positive scaling of the magnitude at 1 second with the final magnitude of the earthquake, yielding a positive slope to the data.

Figure 2.30b-e shows the initial magnitude plotted against the final magnitude for each event for four time windows ranging from 1 second to 8 seconds. We manually pick outliers, and exclude any events for which the initial magnitude is 99% or more of the final magnitude, as those events have effectively terminated by the end of the time window. We also exclude all events with a final magnitude less than the reference magnitude for the null hypothesis, as described above.
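The windowed moment computation described in the Method section can be sketched as follows. The function and variable names are illustrative (not from the SRCMOD tooling), and slip is assumed to ramp linearly from zero at each grid point's rupture time over its rise time:

```python
import numpy as np

def moment_in_window(slip_final, rupture_time, rise_time, cell_area, mu, t):
    """Seismic moment (mu * area * slip) released by elapsed time t,
    with slip at each grid point ramping linearly from zero at its
    rupture time to its final value over its rise time."""
    slip_final = np.asarray(slip_final, dtype=float)
    frac = np.clip((t - np.asarray(rupture_time)) / np.asarray(rise_time), 0.0, 1.0)
    return mu * np.sum(cell_area * slip_final * frac)
```

A grid point that has not yet ruptured contributes nothing (`frac = 0`); one past the end of its rise time contributes its full moment (`frac = 1`).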
The null hypothesis of cascading rupture can be rejected with greater than 95% confidence. The slope of the best-fit lines for all time windows between 1 and 10 seconds is strongly positive, suggesting some degree of non-self-similar behavior for these models. For time windows between 8 and 10 seconds the confidence is not as high as for time windows between 1 and 7 seconds.

As rupture evolves, it experiences progressively more of the fault plane's heterogeneities and therefore has progressively more information about the likely final size of the earthquake. We therefore expect the initial magnitude to scale more strongly with final magnitude for longer time windows. One explanation for the degradation in scaling for longer time windows is that more and more events are being excluded due to having completed rupture, thus reducing the number of data points available for analysis. In Figure 2.30b-d, the number of points used for the fit varies between 112 and 128, and by 8 seconds (Figure 2.30e) that number has fallen to 80. Another possibility is that longer time windows afford greater resolution of the slip within the time window, implying that the strong correlation observed for shorter time windows is a spurious result of poorly resolved slip in such short time spans. The influence of poorly resolved slip can be approximated visually by noting the open symbols, which represent models for which the time window was either shorter than the average rise time for the model or for which only one grid element had begun slipping in that time window.

We attempt to reduce the influence of poorly resolved slip by disregarding all of the "open" data points from Figure 2.30, which represent cases where the slip is likely to be particularly poorly resolved owing to the time window being too short. In addition, we recalculate both the initial and final magnitude for each point, disregarding any slip which is less than 10% of the peak slip for the model.
This is to account for the fact that slip below 10% of peak slip is generally regarded as being poorly resolved in kinematic inversions, and thus an unstable component of the slip models. Remarkably, the correlation between early and final magnitude is now even stronger, with the null hypothesis being rejected at greater than 99% confidence for all time windows. This suggests that poor resolution of slip in short time windows is not generating a spurious correlation between early and final magnitude. Rather, the analysis suggests that the decreasing number of data points in longer time windows (owing to more ruptures having run to completion) is primarily responsible for the weaker correlation for 8-10 second time windows seen in Figure 2.30.

# Conclusions

We observe a strong scaling of early slip and magnitude with the final magnitude of these events. This result is inconsistent with the hypothesis that earthquakes are cascading rupture phenomena. After filtering the data the scaling remains robust, and in fact is more prominent, indicating that poor resolution of early slip is not the cause of the observed scaling. Given these findings, we must allow for the possibility that earthquakes are not purely cascading phenomena, and that magnitude is at least in part influenced by processes in the early part of the rupture process.

# Acknowledgements

We thank Matt Pritchard for providing 15 of the slip models used in this study.

# References

Loveless, J.P., M.E. Pritchard and N. Kukowski, Testing mechanisms of seismic segmentation with slip distributions from recent earthquakes along the Andean margin, Tectonophysics, in review.

Pritchard, M.E. and E.J. Fielding, A study of the 2006 and 2007 earthquake sequence of Pisco, Peru, with InSAR and teleseismic data, Geophys. Res. Lett., 35, L09308, doi:10.1029/2008GL033374, 2008.

Pritchard, M.E., C. Ji and M. Simons, Distribution of slip from 11 $M_w$ > 6 earthquakes in the northern Chile subduction zone, J. Geophys.
Res., 111, B10302, doi:10.1029/2005JB004013, 2006.

Pritchard, M.E., E.O. Norabuena, C. Ji, R. Boroschek, D. Comte, M. Simons, T. Dixon and P.A. Rosen, Geodetic, teleseismic, and strong motion constraints on slip from recent southern Peru subduction zone earthquakes, J. Geophys. Res., 112, B03307, doi:10.1029/2006JB04294, 2007.

Berkeley Seismological Laboratory, 215 McCone Hall, UC Berkeley, Berkeley, CA 94720-4760. © 2007, The Regents of the University of California
https://www.gradesaver.com/textbooks/math/calculus/calculus-10th-edition/chapter-5-logarithmic-exponential-and-other-transcendental-functions-5-3-exercises-page-343/1
## Calculus 10th Edition $f(x)$ and $g(x)$ are inverse functions. We find the inverse of $f(x)$ by solving for $x$. $y = 5x + 1 \\ \\ y-1 = 5x \\ \\ \frac{y-1}{5} = x$ Therefore, the inverse function of $f(x)$ is $g(x) = \frac{x-1}{5}$. Graphically, we see that every point on $f(x)$, when reflected over the line $y = x$, lies on $g(x)$. Specifically, if a point $(a,b)$ is on $f(x)$, then the point $(b, a)$ is on $g(x)$.
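The inverse relationship can be checked numerically; this is a quick illustration in plain Python (the function names follow the solution above):

```python
def f(x):
    return 5 * x + 1

def g(x):          # the inverse found above: (x - 1) / 5
    return (x - 1) / 5

# g undoes f at a few sample points, and a point (a, b) on f
# corresponds to the reflected point (b, a) on g.
for a in [-2, 0, 1.5, 10]:
    b = f(a)
    assert g(b) == a       # composition returns the input
    assert f(g(b)) == b    # and the other direction as well
```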
https://quant.stackexchange.com/questions/40235/ff-5-factor-model-intercept-equal-0/40236
# FF 5 factor model Intercept equal 0

In the paper A five-factor asset pricing model by Fama and French (JFE 2015), they say on page 3: "Treating the parameters in (4) as true values rather than estimates, if the factor exposures $b_i, s_i$, and $h_i$ capture all variation in expected returns, the intercept $a_i$ is zero for all securities and portfolios $i$."

Why is that? Can someone shed some light on this? Thanks!

• @Nitin I don't think that is true. That is an implication of the fundamental asset pricing equation but not an implication from a linear regression. I think if 100% is captured by the covariates then the R2 would be high, but the intercept is not necessarily zero. – phdstudent Jun 8 '18 at 16:16

$R_{it} - R_{Ft} = a_i + b_i(R_{Mt} - R_{Ft}) + s_i SMB_t + h_i HML_t + e_{it}$

If $a_i$ were not zero, then an investor would be able to build portfolios with different levels of non-zero expected returns and yet have zero exposure to the three factors. Hence, the variation in expected returns would not be captured entirely by the three factors.

Equation (4) splits expected excess returns into the three factors from their 1993 paper – market, value, and size. $a_i$ is the average return of the portfolio in excess of the return expected from those three factors for a specific security. The Fama-French 3-factor model is loosely rooted in the Arbitrage Pricing Theory (APT) of Ross (1976). The APT says that an asset's expected returns are a linear function of the asset's exposure to a range of factors. If the APT model works, then there will be no security-specific average expected returns, and $a_i$ should be zero for all securities. If it doesn't work, an investor can build portfolios with positive expected return and zero systematic risk.
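The claim can be illustrated numerically: if excess returns are generated exactly by the factor model with zero alpha, an OLS regression recovers an intercept close to zero. This is a sketch on simulated data only — the factor values and exposures below are made up, not real Fama-French factors:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1000
factors = rng.normal(0.0, 0.01, size=(T, 3))   # simulated Mkt-RF, SMB, HML
exposures = np.array([1.1, 0.4, -0.2])         # assumed true b_i, s_i, h_i

# Excess returns generated with a_i = 0 plus small idiosyncratic noise e_it
excess_ret = factors @ exposures + rng.normal(0.0, 0.001, T)

# OLS with an intercept column: the fitted intercept a_hat should be ~0
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
a_hat, b_hat = coef[0], coef[1:]
```

If the factors did not capture all variation (e.g. a constant were added to `excess_ret`), the fitted intercept would pick it up instead of staying near zero.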
http://eprints.iisc.ernet.in/3255/
# The effect of mixed alkali on EPR and optical absorption spectra in mixed alkali borate $xNa_2O-(30-x)K_2O-70B_2O_3$ glasses doped with iron ions

Chakradhar, Sreekanth RP and Ramesh, KP and Rao, JL and Ramakrishna, J (2005) The effect of mixed alkali on EPR and optical absorption spectra in mixed alkali borate $xNa_2O-(30-x)K_2O-70B_2O_3$ glasses doped with iron ions. In: Journal of Non-Crystalline Solids, 351 (14-15). pp. 1289-1299.

PDF: JNCS2005.pdf

Electron paramagnetic resonance (EPR) and optical absorption studies of iron-doped mixed alkali borate glasses, $xNa_2O-(30-x)K_2O-70B_2O_3$ ($5 \le x \le 25$), have been investigated as a function of alkali content to look for the 'mixed alkali effect' on the spectral properties of the glasses. The EPR spectra of all the investigated samples exhibit resonance signals characteristic of $Fe^{3+}$ ions. The EPR spectrum exhibits an intense resonance signal at $g = 4.02 \pm 0.1$, a moderately intense signal at $g = 2.02 \pm 0.1$, and a shoulder in the region of $g = 7.20 \pm 0.5$. The resonances at $g = 4.02$ and $g = 7.20$ have been attributed to $Fe^{3+}$ ions in rhombic and axial symmetry sites, respectively. The $g = 2.02$ resonance is due to $Fe^{3+}$ ions coupled by exchange interactions. The number of spins ($N$) participating in resonance and the paramagnetic susceptibility ($\chi$) have been evaluated. It is observed that $N$ and $\chi$ decrease with $x$ up to $x = 10$ and then increase up to $x = 15$, exhibiting a minimum at $x = 10$ and a maximum at $x = 15$, and thereafter gradually decrease. The optical absorption spectrum exhibits three bands assigned to the ${}^6A_{1g}(S) \rightarrow {}^4A_{1g}(G)$, ${}^4E_g(G)$ and ${}^6A_{1g}(S) \rightarrow {}^4T_{2g}(G)$ transitions of $Fe^{3+}$, and another band assigned to $Fe^{2+}$–$Fe^{3+}$ inter-valence charge transfer. From the ultraviolet absorption edges, the optical band gap and Urbach energies have been evaluated.
The optical band gap energy obtained in the present work varies from 2.74 to 3.77 eV for both the direct and indirect transitions. The physical parameters of all the glasses have been evaluated with respect to x.
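Band gaps like the 2.74–3.77 eV values quoted here are commonly extracted from the absorption edge via a Tauc plot; the abstract does not state the fitting procedure used, so this is a generic sketch on synthetic data (all numbers below are made up for illustration):

```python
import numpy as np

# Synthetic absorption data obeying the direct-transition Tauc relation:
# (alpha * h_nu)^2 = C * (h_nu - Eg) for photon energies above the gap
Eg_true = 3.0                        # assumed band gap, eV
hv = np.linspace(3.1, 3.8, 50)       # photon energies above the edge, eV
C = 5.0                              # arbitrary proportionality constant
tauc = C * (hv - Eg_true)            # (alpha * h_nu)^2

# Linear fit of (alpha*h_nu)^2 vs h_nu; the x-intercept estimates Eg
slope, intercept = np.polyfit(hv, tauc, 1)
Eg_est = -intercept / slope
```

With real data one would fit only the linear portion of the edge; an indirect gap uses exponent 1/2 instead of 2.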
https://ccrma.stanford.edu/~jos/ConicalModeling/Stability_Proof.html
### Stability Proof

First consider the roots of the denominator. At any pole (a solution of the denominator equation), the pole condition must hold. To obtain separate equations for the real and imaginary parts, take the real and imaginary parts of this condition. Both of the resulting equations must hold at any pole of the reflectance. For stability, we further require the poles to lie in the stable region. With a suitable change of variables, we obtain simpler conditions. For any poles of the reflectance on the axis, the second equation reduces to a sinc relation. It is well known that the sinc function is less than 1 in magnitude everywhere except at the origin, where it equals 1. Therefore, this relation can hold only at the origin, and so
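The key fact used above — that sinc is strictly below 1 in magnitude away from the origin — is easy to check numerically. This assumes the unnormalized sinc, $\mathrm{sinc}(x) = \sin(x)/x$, since the page's own equations were lost in extraction:

```python
import numpy as np

x = np.linspace(-50.0, 50.0, 200001)
x = x[np.abs(x) > 1e-9]        # exclude the origin, where sinc(0) = 1
sinc_vals = np.sin(x) / x      # unnormalized sinc

# strictly below 1 in magnitude everywhere off the origin,
# approaching 1 only as x -> 0
assert np.all(np.abs(sinc_vals) < 1.0)
```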
https://robotics.stackexchange.com/questions/10667/state-estimation-of-mobile-robot
# State estimation of mobile robot

For a mobile robot – four wheels, front wheel steering – I use the following (bicycle) prediction model to estimate its state based on accurate radar measurements only. No odometry or any other input information $u_k$ is available from the mobile robot itself.

$$\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \theta_{k+1} \\ v_{k+1} \\ a_{k+1} \\ \kappa_{k+1} \\ \end{bmatrix} = f_k(\vec{x}_k,u_k,\vec{\omega}_k,\Delta t) = \begin{bmatrix} x_k + v_k \Delta t \cos \theta_k \\ y_k + v_k \Delta t \sin \theta_k \\ \theta_k + v_k \kappa_k \Delta t \\ v_k + a_k \Delta t \\ a_k \\ \kappa_k + \frac{a_{y,k}}{v_{x,k}^2} \end{bmatrix} + \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_{\theta} \\ \omega_v \\ \omega_a \\ \omega_{\kappa} \end{bmatrix}$$

where $x$ and $y$ are the position, $\theta$ is the heading, and $v$, $a$ are the velocity and acceleration respectively. The vector $\vec{\omega}$ is zero-mean white Gaussian noise and $\Delta t$ is the sampling time. The state variables $\begin{bmatrix} x & y & \theta & v & a \end{bmatrix}$ are all measured, although $\begin{bmatrix} \theta & v & a \end{bmatrix}$ have high variance. The only state that is not measured is the curvature $\kappa$. Therefore it is computed using the measured states $\begin{bmatrix} a_{y,k} & v_{x,k}^2 \end{bmatrix}$, the lateral acceleration and the (squared) longitudinal velocity.

My question: Is there a better way of predicting heading $\theta$, velocity $v$, acceleration $a$, and curvature $\kappa$?

• Is it enough for $a_{k+1}$ to just assume Gaussian noise $\omega_a$ and use the previous best estimate $a_k$, or is there an alternative?
• For curvature $\kappa$ I also thought of using the yaw rate $\dot{\theta}$ as $\kappa = \frac{\dot{\theta}}{v_x}$, but then I would have to estimate the yaw rate too.
To make my nonlinear filter model complete, here is the measurement model:

$$y_k = h_k(x_k,k) + v_k = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ \end{bmatrix} \begin{bmatrix} x_k \\ y_k \\ \theta_k \\ v_k \\ a_k \\ \kappa_k \\ \end{bmatrix} + \begin{bmatrix} v_x \\ v_y \\ v_{\theta} \\ v_v \\ v_a \\ \end{bmatrix}$$

The measured state vector is already obtained/estimated using a Kalman filter. What I want to achieve is a smooth trajectory with the estimate of $\kappa$. For this it is a requirement to use another Kalman filter or a moving horizon estimation approach.

- May I ask what is the reason for the downvote? Sep 16 '16 at 11:15
- Sure, I downvoted the question because the matrix-oriented prediction model is based on analog PID controllers, which were used in the 1930s before the first computers were invented. It is not state of the art. Sep 16 '16 at 12:02
- OK, this may be the case - that it is not state of the art - but I think this is not a reason to downvote, as this "matrix-oriented prediction" is still a possible approach which gets used frequently, and I am required to use it. Sep 16 '16 at 12:23
- "Not state of the art" is wrong. The IEEE has 20 conference publications, 10 journal articles, and 8 other articles THIS YEAR that discuss both Kalman filters and state-space control. This is a proven technique that continues to find broad applicability for very challenging control problems. Sep 16 '16 at 16:26

One answer proposes simply treating $a$ and $\kappa$ as random-walk states, folding the noise terms into their state equations:

$$\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \theta_{k+1} \\ v_{k+1} \\ a_{k+1} \\ \kappa_{k+1} \\ \end{bmatrix} = f_k(\vec{x}_k,u_k,\vec{\omega}_k,\Delta t) = \begin{bmatrix} x_k + v_k \Delta t \cos \theta_k \\ y_k + v_k \Delta t \sin \theta_k \\ \theta_k + v_k \kappa_k \Delta t \\ v_k + a_k \Delta t \\ a_k + \omega_a \\ \kappa_k + \omega_{\kappa} \end{bmatrix}$$
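The random-walk prediction model above can be sketched numerically in plain NumPy. This is my own illustrative sketch, not code from the thread: the function names are invented, and process noise is omitted because in an EKF/UKF it enters only the covariance propagation, not the mean prediction.

```python
import numpy as np

def predict(state, dt):
    """One prediction step of the bicycle model.

    state = [x, y, theta, v, a, kappa]; noise terms omitted (mean prediction).
    """
    x, y, theta, v, a, kappa = state
    return np.array([
        x + v * dt * np.cos(theta),   # position advances along the heading
        y + v * dt * np.sin(theta),
        theta + v * kappa * dt,       # yaw rate = v * kappa
        v + a * dt,                   # constant-acceleration assumption
        a,                            # a_{k+1} = a_k (random walk via noise)
        kappa,                        # kappa_{k+1} = kappa_k (random walk)
    ])

def curvature_from_measurements(a_y, v_x):
    """Curvature pseudo-measurement kappa = a_y / v_x^2 (guard small v_x)."""
    return a_y / max(v_x * v_x, 1e-6)

# Straight-line sanity check: heading 0, v = 1 m/s, no acceleration/curvature.
# The position should advance by v*dt along x; all other states stay fixed.
s = predict(np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0]), dt=0.1)
print(s)
```

A Jacobian of `predict` (analytic or numerical) would then drive the EKF covariance update in the usual way.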
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9999349117279053, "perplexity": 284.63715471942436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584567.81/warc/CC-MAIN-20211016105157-20211016135157-00008.warc.gz"}
http://geotech.chinaxiv.org/user/search.htm?field=author&value=WANG%20Yang
## 1. chinaXiv:201708.00332 [pdf]

Subjects: Geosciences >> Geology

The diets and environments of fossil rhinocerotoids from the Linxia Basin, Gansu, China, ranging in age from 25 to 2.5 Ma, were reconstructed based on bulk and serial carbon (C) and oxygen (O) isotope analyses of tooth enamel. The results support many previous hypotheses inferred from dentition and cranial and limb morphology and offer new insight on the paleoecology of some genera. The isotopic results support the following previous hypotheses: the Late Oligocene rhino Paraceratherium inhabited a forested environment, and the coexisting rhino Allacerops lived in a relatively open habitat and had a less specialized diet; the Middle Miocene Hispanotherium grazed in open territory, whereas the contemporaneous Alicornops had a more generalized diet in a forested environment; and the Late Miocene rhino Parelasmotherium grazed in an open steppe habitat. The isotope data indicate that the rhinos Acerorhinus and Dicerorhinus inhabited open steppe environments, inconsistent with previous inferences that these two rhinos dwelled in forested environments. The isotopic results are not conclusive concerning the habitat of Iranotherium, but support previous hypotheses that this rhino was a specialized C3 grazer. The results also suggest that Chilotherium was a forest-dweller throughout much of the Late Miocene, but occupied a more open environment by the end of the Late Miocene. Additionally, the results are consistent with previous hypotheses that the Pliocene rhino Shansirhinus and the Pleistocene rhino Coelodonta were grazers in open habitats. Finally, the C isotope data support that all rhinos in this study were pure C3 feeders, confirming that C4 grasses were not an important component of the plant biomass in the Linxia Basin from 25 to 2.5 Ma.
## 2. chinaXiv:201605.01566 [pdf]

Subjects: Geosciences >> Space Physics

The spatial and temporal invariance in the spectra of energetic particles in gradual solar events is reproduced in simulations. Based on a numerical solution of the focused transport equation, we obtain the intensity time profiles of solar energetic particles (SEPs) accelerated by an interplanetary shock in three-dimensional interplanetary space. The shock is treated as a moving source of energetic particles with a distribution function. The time profiles of particle fluxes with different energies are calculated in the ecliptic at 1 AU. According to our model, we find that shock acceleration strength, parallel diffusion, and adiabatic cooling are the main factors in forming the spatial invariance in SEP spectra, and perpendicular diffusion is a secondary factor. In addition, the temporal invariance in SEP spectra is mainly due to the effects of adiabatic cooling. Furthermore, a spectra-invariant region, which agrees with observations but is different from the one suggested by Reames et al., is proposed based on our simulations.

## 3. chinaXiv:201605.01565 [pdf]

Subjects: Geosciences >> Space Physics

In this work, a gradual solar energetic particle (SEP) event observed by multiple spacecraft has been simulated.
The time profiles of SEP fluxes accelerated by an interplanetary shock in three-dimensional interplanetary space are obtained by numerically solving the Fokker-Planck focused transport equation. The interplanetary shock is modeled as a moving source of energetic particles. By fitting the 1979 March 01 SEP fluxes observed by Helios 1, Helios 2, and IMP 8 with our simulations, we obtain the best parameters for the shock acceleration efficiency model. We also find that a particle perpendicular diffusion coefficient at the level of roughly 1%-3% of the parallel diffusion coefficient at 1 AU should be included. The reservoir phenomenon is reproduced in the simulations, and the longitudinal gradient of SEP fluxes in the decay phase, observed by the three spacecraft at different locations, is more sensitive to the shock acceleration efficiency parameters than it is to the perpendicular diffusion coefficient.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8452031016349792, "perplexity": 3443.355597389849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141753148.92/warc/CC-MAIN-20201206002041-20201206032041-00448.warc.gz"}
https://cdsweb.cern.ch/collection/Video%20Lectures?ln=bg&as=1
# Video Lectures

Recently added:

2023-01-25 09:20 - Analyses combination in SModelS / Waltenberger, Wolfgang (speaker) (Austrian Academy of Sciences (AT)). We report on new developments in SModelS, in particular the functionality of analyses combination introduced in v2.2. 2022 - 0:22:06. Workshops; (Re)interpretation of the LHC results for new physics.

2023-01-25 09:20 - Recent ALICE results on $\psi$(2S) production / Scomparin, Enrico (speaker) (Universita e INFN Torino (IT)). Quarkonia represent a key observable for our understanding of the quark-gluon plasma (QGP) properties, and more generally for the investigation of the QCD color force in a strongly interacting environment. In heavy-ion collisions at LHC energy, significant suppression and regeneration effects were observed and analyzed in detail for the J/$\psi$. [...] 2023 - 0:50:51. LHC Seminar.

2023-01-24 16:26 - The use of Micro-Pattern Technologies in Micro-Pattern Gaseous Detectors / De Oliveira, Rui (speaker) (CERN). The construction of Micro-Pattern Gaseous Detectors (MPGD) is the main focus of the Micro-Pattern Technologies (MPT) workshop at CERN. After a brief introduction to the activities of the MPT workshop, the presentation will retrace the technological developments for three different types of MPGDs that made it possible to obtain such detectors, which are now deployed on a large scale in particle physics experiments. The first example presented will be GEM detectors, covering the step of going from a photoimageable polyimide, used in micro-electronics to passivate integrated circuits, to a particle detector able to operate more efficiently than wire chambers in the very harsh and challenging environment of high-energy physics experiments. The second example retraces the path followed to go from a stainless steel grid, used for screen printing of patterns on textiles or posters, to the production of MicroMegas detectors, used for example for the New Small Wheels (NSW) of the ATLAS experiment. The last example will deal with µ-Rwell detectors, a technology initially abandoned as soon as it appeared after the development of GEMs in 1998, but which is now revealing its full potential after being neglected for more than 20 years. [...] 2023 - 1:05:50. Detector Seminar.

2023-01-24 16:26 - Towards a Dynamic Production of Global Land Cover Maps / Sedona, Rocco (speaker) (Forschungszentrum Jülich). Land cover (LC) maps are a fundamental tool in a variety of fields such as climatology, ecology and geography, as their availability is crucial for the analysis of trends and recognition of patterns of phenomena that occur on the Earth’s surface. The dense time series of images with worldwide coverage provided by current satellite missions allows the dynamic production of LC maps, and several methods have recently been proposed to generate maps at country, continental or global scale. [...] 2023 - 1:01:17. EP-IT Data Science Seminars.

2023-01-20 11:20 - Non-conventional radionuclides in personalised medicine / Stora, Thierry (speaker) (CERN). Nuclear Medicine, and more particularly the so-called theranostics approach based on the combination of diagnostics and treatment drugs, has seen recent breakthroughs, originating from radionuclides newly made available, notably to European academic and industrial R&D scientists. A striking example is the use of targeted radiopharmaceuticals directed to the Prostate Specific Membrane Antigen (PSMA), and Somatostatin Receptor targeted therapy with the 177Lu beta-emitter. Alpha-emitting radionuclides have also been applied successfully in research and in the clinic. [...] 2023 - 1:12:06. Academic Training Lecture Regular Programme, 2022-2023.

2023-01-19 09:42 - Computing bulk microstates / Lee, Ji Hoon (speaker) (Perimeter Institute). A central question in AdS/CFT is how the string degrees of freedom are organized in the dual gauge theory. In my talk, I present an exact formula that relates the BPS sectors of U(N) gauge theories and their string duals. [...] 2023 - 1:24:15. TH String Theory Seminar.

2023-01-18 17:03 - Laser resonance ionization at ISOL facilities / Marsh, Bruce (speaker) (CERN). Radioisotope science is a broad-reaching field encompassing topics such as atomic and nuclear physics, astrophysics, medical applications and material science. Although some radioisotopes are naturally occurring on Earth, the vast majority are available to us exclusively via artificial production in nuclear reactions. [...] 2023 - 1:01:34. Academic Training Lecture Regular Programme, 2022-2023.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4610136151313782, "perplexity": 9314.260595466621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499470.19/warc/CC-MAIN-20230128023233-20230128053233-00179.warc.gz"}
http://zbmath.org/?q=an:1188.53090
# zbMATH — the first resource for mathematics

Intrinsic formulation of geometric integrability and associated Riccati system generating conservation laws.
(English) Zbl 1188.53090

The aim of the paper is to study, first, the formulation of Bäcklund transformations based on a Pfaffian system for the case of nonlinear evolution equations which describe pseudospherical surfaces, that is, surfaces with negative constant Gauss curvature, and second, the determination of conservation laws for such equations. Starting from the structure equations of a surface with Gauss curvature equal to $-1$, the author is able to transform them into an associated system of differential equations in Riccati form and to formulate the equivalent linear problem. All this is done in an intrinsic way. Finally, it is shown that the geometrical properties of a pseudospherical surface provide a systematic method for obtaining an infinite number of conservation laws.

##### MSC:

53C80 Applications of global differential geometry to physics
53C21 Methods of Riemannian geometry, including PDE methods; curvature restrictions (global)
35Q53 KdV-like (Korteweg-de Vries) equations
53A10 Minimal surfaces, surfaces with prescribed mean curvature
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7917302846908569, "perplexity": 5342.092204506092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011134261/warc/CC-MAIN-20140305091854-00009-ip-10-183-142-35.ec2.internal.warc.gz"}
http://openstudy.com/updates/50745225e4b057a2860dccd8
## boomerang285: please help me with this square root problem

$\left( -5-\sqrt{-9} \right)^{2}$

- rvgupta: 16+i10*9^1/2
- littlepixie99: well, what is -5 * -5? Remember, negative times negative is a positive. Then what is $(\sqrt{-9})^{2}$?
- boomerang285: 25 and 3, right?
- boomerang285: and rvgupta, I came up with that answer too, but it's not one of my multiple-choice options.
- littlepixie99: 25 is correct, but if you square root a number and then square it again, you get the original number.
- littlepixie99: so you've rooted it but not re-squared it.
- boomerang285: so it is 9?
- boomerang285: 16
- littlepixie99: is that an option?
- boomerang285: for some reason I keep coming up with $16-10i\sqrt{9}$
- boomerang285: there is 16 + 30i
- littlepixie99: where did the 10i come from?
- boomerang285: I am not really sure... I know I worked it out wrong, and 10i isn't even an option...
- boomerang285: yes
- littlepixie99: think of it this way: $-5 \cdot -5$, $-\sqrt{9} \cdot -\sqrt{9}$, $-5 \cdot -\sqrt{9}$, $-5 \cdot -\sqrt{9}$, and add all those together.
- estudier: 16 + 30i
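For reference, the simplification the thread is circling can be written out step by step, using $\sqrt{-9} = 3i$ and $i^2 = -1$:

```latex
\left(-5-\sqrt{-9}\right)^{2} = \left(-5-3i\right)^{2}
  = (-5)^{2} + 2(-5)(-3i) + (-3i)^{2}
  = 25 + 30i + 9i^{2}
  = 25 + 30i - 9
  = 16 + 30i
```

which matches the multiple-choice option 16 + 30i.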
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995706677436829, "perplexity": 19507.29017576755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067077.47/warc/CC-MAIN-20141017150107-00051-ip-10-16-133-185.ec2.internal.warc.gz"}
http://hal.in2p3.fr/in2p3-01321752
# Pseudospin-orbit splitting and its consequences for the central depression in nuclear density

Abstract: The occurrence of the bubble-like structure has been studied, in the light of pseudospin degeneracy, within the relativistic Hartree-Fock-Bogoliubov (RHFB) theory. It is concluded that the charge/neutron bubble-like structure is predicted to occur in the mirror system {$^{34}$Si, $^{34}$Ca} commonly by the selected Lagrangians, due to the persistence of the Z(N) = 14 subshell gaps above which the $\pi(\nu)2s_{1/2}$ states are not occupied. However, for the popular candidate $^{46}$Ar, the RHFB Lagrangian PKA1 does not support the occurrence of the bubble-like structure in the charge (proton) density profiles, due to the almost degenerate pseudospin doublet {$\pi 2s_{1/2}$, $\pi 1d_{3/2}$} and coherent pairing effects. The formation of a semibubble in heavy nuclei is less likely as a result of small pseudospin-orbit (PSO) splitting, while it tends to appear in Z = 120 superheavy systems, which coincides with large PSO splitting of the doublet {$\pi 3p_{3/2}$, $\pi 2f_{5/2}$} and couples with significant shell effects. Pairing correlations, which can work against bubble formation, significantly affect the PSO splitting. Furthermore, we found that the influence on semibubble formation due to different types of pairing interactions is negligible. The quenching of the spin-orbit splitting in the p orbit has also been stressed, and it may be considered the hallmark of semibubble nuclei.

Document type: Journal articles

http://hal.in2p3.fr/in2p3-01321752

Contributor: Dominique Girod. Submitted on Thursday, May 26, 2016. Last modification on Sunday, December 23, 2018.

### Citation

J.J. Li, W.H. Long, J.L. Song, Q. Zhao. Pseudospin-orbit splitting and its consequences for the central depression in nuclear density. Physical Review C, American Physical Society, 2016, 93, pp.054312. ⟨10.1103/PhysRevC.93.054312⟩. ⟨in2p3-01321752⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8785615563392639, "perplexity": 5090.847293773839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258862.99/warc/CC-MAIN-20190526065059-20190526091059-00521.warc.gz"}
http://tex.stackexchange.com/questions/32188/equivalent-of-css-floatleft-for-images
# Equivalent of CSS “float:left;” for images

What I am trying to do here is have the image on the left side, surrounded by text on the right and bottom. For example, how can image_3.png below float to the left?

\documentclass{article}
\usepackage{graphicx}% needed for \includegraphics
\usepackage{lipsum}
\begin{document}
\includegraphics[scale=0.5]{./images/image_3.png}
\lipsum[1]
\end{document}

- Depending on your distribution, you can either look at the wrapfig package or the picins package. – Werner Oct 20 '11 at 19:20
- If you want to use URLs in your text, use the url package to provide you with a consistent format: \url{http://ftp.dante.de/tex-archive/help/Catalogue/entries/pgf.html} – Werner Oct 20 '11 at 19:35
- @Werner thanks! Oh, the URL screw-up was totally my fault. I just put text there to illustrate the point. – tugberk Oct 20 '11 at 19:42

As Werner mentions, you can use the wrapfig package:

\documentclass{article}
\usepackage[demo]{graphicx}% Don't use [demo] option in your real example
\usepackage{wrapfig}
\usepackage{lipsum}% for dummy text
\begin{document}
\begin{wrapfigure}{l}{0.5\textwidth}\centering
\includegraphics[scale=0.5]{./images/image_3.png}
\caption{Image3.png}
\end{wrapfigure}
\lipsum[1]
\end{document}

- thanks! One problem I am having is setting the right and bottom margins. I can control the bottom margin with \vspace, but how can I control the right or left one? – tugberk Oct 20 '11 at 19:41
- Does adjusting the width {0.6\textwidth} not do what you want? – Peter Grill Oct 20 '11 at 19:53
You can insert the image using wrapfig, and adjust the left and right margins through modification of \leftskip and \rightskip.

\documentclass{article}
%\usepackage{graphicx}% http://ctan.org/pkg/graphicx
\usepackage[showframe]{geometry}% http://ctan.org/pkg/geometry
\usepackage{wrapfig}% http://ctan.org/pkg/wrapfig
\usepackage{url}% http://ctan.org/pkg/url
\usepackage{lipsum}% http://ctan.org/pkg/lipsum
\begin{document}
\lipsum[1]
% \begin{wrapfigure}[<number of narrow lines>]{<placement>}[<overhang>]{<width>} <stuff> \end{wrapfigure}
\begin{wrapfigure}[6]{l}{5em}
\rule{5em}{4\baselineskip}% place your image here using \includegraphics
\end{wrapfigure}
\noindent You can draw graphics directly with TeX commands using the \verb!tikz! package: \url{http://ftp.dante.de/tex-archive/help/Catalogue/entries/pgf.html} It comes with very good documentation with many examples. You can draw graphics directly with TeX commands using the \verb!tikz! package: \url{http://ftp.dante.de/tex-archive/help/Catalogue/entries/pgf.html} It comes with very good documentation with many examples.
\lipsum[2]
\end{document}

If you're after modifying the margins of the actual image that you included, you need to modify the parameters of the wrapfigure environment:

\begin{wrapfigure}[<number of narrow lines>]{<placement>}[<overhang>]{<width>} <stuff> \end{wrapfigure}

Increasing <number of narrow lines> will push the bottom further down, while increasing <width> will push the right margin further in. For completeness, I've included lipsum for dummy text as well as url, since you had a URL in your MWE. geometry (with the package option showframe) was also included to show the margin adjustment with respect to the other document elements.
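The \leftskip/\rightskip adjustment mentioned above is not shown in the example code; a minimal sketch (my own values, adjust as needed) restricts them to a single paragraph group so the rest of the document is unaffected. Note that these registers take effect when the paragraph ends, so the \par must come before the closing brace:

```latex
% After the wrapfigure: pull this one paragraph in by 1em on each side.
% \leftskip/\rightskip are applied at \par, hence \par inside the group.
{\leftskip=1em \rightskip=1em
\noindent This paragraph is set 1em in from both margins, which widens
the visual gap between the text and the figure floated on the left.\par}
```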
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9440034627914429, "perplexity": 3869.537325183508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007150.95/warc/CC-MAIN-20141125155647-00215-ip-10-235-23-156.ec2.internal.warc.gz"}
https://seveleu.com/belarusian-grammar/chapter19.html
# 19. Verbal Prefixes

## 19.1 Prefix usage

Adding a prefix to an imperfective verb usually creates a perfective verb with a modified meaning. Adding a prefix to an imperfective verb of motion, however, does not create a perfective verb but a new imperfective verb with a modified meaning; the same prefix is attached to the perfective aspect of the verb of motion to form the perfective form of the new verb.

When a prefix ending in a consonant is attached to a verb beginning in an iotized vowel, a buffer symbol is placed between them. When хадзі́ць has a prefix attached to it, the accent moves to the root for all forms. When ісці́ has a prefix attached to it, the initial і- changes to й- and the accent falls on the prefix for all persons in the present tense except the 1st person singular. The exception to this is the prefix вы-, for which the accent always falls on this prefix, irrespective of person or tense. When either хадзі́ць or ісці́ has a prefix attached which ends in a consonant, an ы is placed as a buffer between the prefix and the root.

аб'яві́ць - to announce
ад'е́хаць - to drive off
прыхо́джу - I am coming
прыхо́дзіў - I came
по́йдзе - he will go
пайшо́ў - he went
разышлі́ся - they dispersed

## 19.2 Directional prefix meanings

Each of these prefixes has additional meanings that have nothing to do with direction, but for now only the meanings which imply a direction of motion are listed. Prefixes can be added to verbs to create new verbs. Adding a prefix to the indeterminate form of a verb of motion creates the imperfective infinitive of a verb, and adding the prefix to the determinate form creates the perfective infinitive of the same verb. The exception to this rule is the pair е́хаць / е́здзіць, to go [by transport]. The perfective aspect of a verb can be formed by adding a prefix to е́хаць, but the imperfective aspect of the same verb is formed by adding the prefix to the verb язджа́ць. The form язджа́ць rarely occurs on its own, without a prefix.
These prefixes can also be added to other, non-motion verbs.

**па-** This prefix is added to the directional infinitive to form the perfective infinitive for all verbs of motion.

- пайсці́ – to go [somewhere, on foot]
- пае́хаць – to go [somewhere, by transport]
- пабе́гчы – to run [somewhere]
- пане́сці – to carry [something somewhere]

**аб-** This prefix implies encompassing or overtaking motion.

- абыхо́дзіць | абыйсці́ – to walk around
- аб‘язджа́ць | аб‘е́хаць – to drive around
- абганя́ць | абагна́ць – to overtake
- абкла́дваць | абкла́сці – to surround [with something]

**ад-** This prefix has the meaning of moving away.

- адыхо́дзіць | адыйсці́ – to withdraw
- адно́сіць | адне́сці – to carry off
- адбіра́ць | адабра́ць – to take away
- адрыва́ць | адарва́ць – to tear off

**вы-** This prefix has the meaning of going out of something.

- выхо́дзіць | вы́йсці – to exit [on foot]
- вы́насіць | вы́несці – to carry out
- выбіра́ць | вы́браць – to select
- вырыва́ць | вы́рваць – to tear out

**да-** This prefix implies movement as far as, or up to, a certain point.

- дахо́дзіць | дайсці́ – to reach
- дано́сіць | дане́сці – to bring as far as
- даганя́ць | дагна́ць – to catch up to
- дабіра́цца | дабра́цца – to get as far as

**з-** This prefix has the meaning of coming off of, or coming out of.

- зыхо́дзіць | зыйсці́ – to come off of
- з‘язжа́ць | з‘е́хаць – to depart
- зно́сіць | зне́сці – to take down from, to take out of
- зніма́ць | зняць – to remove

**за-** Verbs with this prefix have the meanings of traveling far, or past something.

- захо́дзіць | зайсці́ – to walk behind
- забе́гаць | забе́гчы – to run ahead
- зано́сіць | зане́сці – to carry [far] away
- закла́дваць | закла́сці – to place behind

**на-** Verbs with this prefix have the meaning of collision with something.

- нахо́дзіць | найсці́ – to come across, to walk into
- наязджа́ць | нае́хаць – to run into [with a vehicle]
- напада́ць | напа́сці – to fall into
- наступа́ць | наступі́ць – to approach [a certain time]

**пад-** This prefix has the basic meaning of upward motion.
- падыхо́дзіць | падыйсці́ – to approach [on foot]
- пад‘язджа́ць | пад‘е́хаць – to drive up to
- падбяга́ць | падбе́гчы – to run up to
- падніма́ць | падня́ць – to lift

**пера-** This prefix has the meaning of crossing over something.

- перахо́дзіць | перайсці́ – to walk across
- перано́сіць | перане́сці – to carry across
- перадава́ць | перада́ць – to pass along
- перакла́дваць | перакла́сці – to transfer, to translate

**пра-** This prefix gives verbs the meaning of passing by something.

- прахо́дзіць | прайсці́ – to pass by
- праязджа́ць | прае́хаць – to drive past
- прабяга́ць | прабе́гчы – to run by
- прано́сіць | пране́сці – to carry by

**раз-** This prefix gives verbs the meaning of movement in various directions, or distribution.

- разыхо́дзіцца | разыйсці́ся – to disperse
- раз‘язджа́цца | раз‘е́хацца – to drive away [in various directions]
- разбяга́цца | разбе́гчыся – to run off, to scatter
- разлята́цца | разляце́цца – to fly away, to scatter

**у-** This prefix gives verbs the meaning of entering. It changes to ува- with хадзі́ць and ісці́.

- увахо́дзіць | увайсці́ – to enter
- уязджа́ць | уе́хаць – to drive in
- убяга́ць | убе́гчы – to run in
- уно́сіць | уне́сці – to carry in

**уз-** This prefix gives verbs the meaning of upward motion.

- узыхо́дзіць | узыйсці́ – to walk up
- узла́зіць | узле́зці – to climb up
- узлята́ць | узляце́ць – to take off
- узвыша́ць | узвы́сіць – to elevate

*This section is undergoing a massive reconstruction. Please visit later for a stable version.*
https://brilliant.org/problems/limits-1/
# Limits #1

Calculus Level 3

What can you say about the limit of the following function as $$\theta$$ approaches 0?

$$\frac{6\theta \cos^2 \theta}{\sin 2\theta}$$
https://figshare.com/articles/Oxinobactin_and_Sulfoxinobactin_Abiotic_Siderophore_Analogues_to_Enterobactin_Involving_8_Hydroxyquinoline_Subunits_Thermodynamic_and_Structural_Studies/2467615/1
## Oxinobactin and Sulfoxinobactin, Abiotic Siderophore Analogues to Enterobactin Involving 8‑Hydroxyquinoline Subunits: Thermodynamic and Structural Studies

2012-11-19

The synthesis of two new iron chelators built on the tris-L-serine trilactone scaffold of enterobactin and bearing an 8-hydroxyquinoline (oxinobactin) or 8-hydroxyquinoline-5-sulfonate (sulfoxinobactin) unit has been described. The X-ray structure of ferric oxinobactin has been determined, exhibiting a slightly distorted octahedral environment for Fe(III) and a Δ configuration. The Fe(III) chelating properties have been examined by potentiometric and spectrophotometric titrations in methanol–water 80/20% w/w solvent for oxinobactin and in water for sulfoxinobactin. They reveal the extraordinary complexing ability (pFe(III) values) of oxinobactin over the p[H] range 2–9, the pFe value at p[H] 7.4 being 32.8. This was supported by spectrophotometric competition showing that oxinobactin removes Fe(III) from ferric enterobactin at p[H] 7.4. In contrast, the Fe(III) affinity of sulfoxinobactin was markedly lower than that of oxinobactin, but similar to that of the ligand O-TRENSOX, which has a TREN backbone. These results are discussed in relation to the predisposition of the chelating units by the trilactone scaffold. Some comparisons are also made with other quinoline-based ligands and the hydroxypyridinonate ligand (hopobactin).
http://math.stackexchange.com/questions/853253/parallel-lines-in-2-simplex
# Parallel lines in 2-simplex

I have a problem understanding a statement in the following argumentation. Consider a 2-simplex $\Delta := \{ (x_1, x_2) : x_1, x_2 \geq 0, x_1+x_2\leq 1 \}$. Assume that for every $P,Q,R \in \Delta$, and for every $\alpha \in (0,1)$, $P \succsim Q$ iff $\alpha P + (1- \alpha)R \succsim \alpha Q + (1 - \alpha ) R$. Take two points $P,Q \in \Delta$ such that $P \sim Q$, and construct another point $R$ such that $S= R + (P-Q)$ is also in $\Delta$. Then, $R \sim S$. **Moreover, by $P-Q = R-S$, the indifference curve going through $R,S$ is parallel to the one going through $P,Q$.**

The problematic statement is the one in bold. Indeed, why should we see that the indifference curves are parallel? Moreover, isn't it problematic that $P-Q$ can actually lie outside the 2-simplex? I think the answer lies in some argument related to affine sets, but I do not really know. As always, thanks in advance for any help or feedback!

PS: For completeness and reference sake, this argumentation comes from the Expected Utility Theorem by von Neumann and Morgenstern.

EDIT: Here is a line of reasoning. Simply, $P - Q = R - S$ means that the translation of $P$ by $Q$ (namely $P-Q$) and the translation of $R$ by $S$ (that is, $R-S$) are equal. We have to remember that $P$ and $Q$ are on the same indifference curve (line), and the same applies to $R$ and $S$. That said, take the two flats through $P$ and $R$: they are parallel, because one is the translate of the other. That is, $P= R - S + Q$. Thus, for instance, $R$ is parallel also to $Q$ by $P \sim Q$. The same reasoning applies to $S$. However, I feel that it still doesn't fill all the gaps. Indeed, why is $P-Q = R-S$ the point that makes us infer that they are parallel, and not simply the manipulation $P = R-S + Q$ of the initial statement?
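As an illustrative numerical sketch (the points below are my own arbitrary choices, not from the post), constructing $S = R + (P-Q)$ makes the direction vectors of the segments $PQ$ and $RS$ equal, so their 2D cross product vanishes, which is exactly the parallelism claim for the lines through them:

```python
# Hypothetical example points in the 2-simplex {(x1, x2): x1, x2 >= 0, x1 + x2 <= 1}.
P = (0.5, 0.2)
Q = (0.3, 0.3)
R = (0.1, 0.4)
S = (R[0] + (P[0] - Q[0]), R[1] + (P[1] - Q[1]))  # S = R + (P - Q)

def in_simplex(pt):
    """Membership test for the 2-simplex."""
    return pt[0] >= 0 and pt[1] >= 0 and pt[0] + pt[1] <= 1

def cross(u, v):
    """2D cross product; zero iff u and v are parallel."""
    return u[0] * v[1] - u[1] * v[0]

d1 = (P[0] - Q[0], P[1] - Q[1])  # direction of the segment PQ
d2 = (S[0] - R[0], S[1] - R[1])  # direction of the segment RS
print(S, in_simplex(S), cross(d1, d2))
```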
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-molecular-science-5th-edition/chapter-15-additional-aqueous-equilibria-questions-for-review-and-thought-topical-questions-page-693a/19c
## Chemistry: The Molecular Science (5th Edition)

Ammonium ion and ammonia are capable of making a buffer with $[H_3O^+] = 4.79 \times 10^{-9}M$. This works because the $K_a$ value for ammonium ion is close to this hydronium ion concentration.

1. Analyze the $[H_3O^+]$ $(4.79 \times 10^{-9}M)$, and find the conjugate acid-base pair with the closest $K_a$ value.
- Use Table 15-1 on page 659.
- Ammonium ion and ammonia: $K_a = 5.6 \times 10^{-10}$

Therefore, this is the best pair that can be used to make a buffer with hydronium ion concentration equal to $4.79 \times 10^{-9}M.$
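As a numerical sketch of my own (not from the textbook), the required base/acid ratio follows from the equilibrium expression $K_a = [H_3O^+][NH_3]/[NH_4^+]$, so $[NH_3]/[NH_4^+] = K_a/[H_3O^+]$:

```python
import math

Ka = 5.6e-10   # ammonium ion, from Table 15-1
H3O = 4.79e-9  # target hydronium ion concentration (M)

# For the equilibrium NH4+ + H2O <-> NH3 + H3O+:
#   Ka = [H3O+][NH3]/[NH4+]  =>  [NH3]/[NH4+] = Ka/[H3O+]
ratio = Ka / H3O
pH = -math.log10(H3O)
pKa = -math.log10(Ka)
print(f"pH = {pH:.2f}, pKa = {pKa:.2f}, [NH3]/[NH4+] = {ratio:.3f}")
# A ratio in the practical buffering range (roughly 0.1 to 10) confirms
# that the NH4+/NH3 pair can buffer at this hydronium concentration.
```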
https://gmatclub.com/forum/what-is-the-value-of-f-1-f-288401.html
# What is the value of F(-1)-F(1)?

GMATH Teacher
Status: GMATH founder
Joined: 12 Oct 2010
Posts: 750

10 Feb 2019, 18:13

GMATH practice exercise (Quant Class 14)

What is the value of F(-1)-F(1)?

(1) F(x) = x^2, for all x
(2) F(x+1) = F(x) + 2x + 1, for all x

_________________
Fabio Skilnik :: GMATH method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net

Math Expert
Joined: 02 Aug 2009
Posts: 7334
Re: What is the value of F(-1)-F(1)?
10 Feb 2019, 18:40

fskilnik wrote:
GMATH practice exercise (Quant Class 14)
What is the value of F(-1)-F(1)?
(1) F(x) = x^2, for all x
(2) F(x+1) = F(x) + 2x + 1, for all x

So we are looking for $$F(-1)-F(1)$$.

(1) $$F(x) = x^2$$, for all x

We can get the values of F(-1) and F(1) directly:
$$F(-1) = (-1)^2=1$$
$$F(1) = (1)^2=1$$
so $$F(-1)-F(1)=1-1=0$$. Sufficient.

(2) $$F(x+1) = F(x) + 2x + 1$$, for all x

The difference of 1 between x and x+1 should tempt you to rearrange and work on the variables accordingly. Now F(-1) and F(1) have a difference of 1-(-1) = 2, so let us work in a similar fashion:
$$F(x+1) = F(x) + 2x + 1 \;\Rightarrow\; F(x)-F(x+1) = -(2x + 1)$$

a) Let x = -1: $$F(-1)-F(-1+1) = -(2(-1) + 1)=-(-1)=1$$, that is, F(-1)-F(0) = 1.
b) Let x = 0: $$F(0)-F(0+1) = -(2(0) + 1)=-(1)=-1$$, that is, F(0)-F(1) = -1.

Adding the two equations: F(-1)-F(0) + F(0)-F(1) = 1 + (-1), so $$F(-1)-F(1)=0$$. Sufficient.

D

Manager
Joined: 09 Jun 2014
Posts: 240
Location: India
Concentration: General Management, Operations
Schools: Tuck '19

10 Feb 2019, 21:58

fskilnik wrote:
GMATH practice exercise (Quant Class 14)
What is the value of F(-1)-F(1)?
(1) F(x) = x^2, for all x
(2) F(x+1) = F(x) + 2x + 1, for all x

This approach is really good and I got stuck at the highlighted point.
Here is my approach. The main concern is statement 2.

1. F(x) = x^2, for all x
F(-1) = (-1)^2 = 1
F(1) = (1)^2 = 1
So we have F(-1) - F(1) = 0.

2. F(x+1) = F(x) + 2x + 1, for all x ---------------------- main equation

Now my line of thinking was: let's get F(-1), so I set x+1 = -1, which gave x = -2. Substituting into the main equation:
F(-2+1) = F(-2) + 2*(-2) + 1 => F(-1) = F(-2) - 4 + 1 = F(-2) - 3

I identified a little later that this is the wrong step. We should rather take x = -1 directly; this saves us from introducing F(-2) and gives an equation in terms of F(0), thereby avoiding an extra variable.

So, substituting x = -1 into the main equation:
F(0) = F(-1) + 2*(-1) + 1 = F(-1) - 1 ---------------------------------- (1)

Again, to get F(1), let's make x+1 = 1, so x = 0. Substituting into the main equation:
F(1) = F(0) + 2*0 + 1 = F(0) + 1 -------------------- (2)
which implies F(0) = F(1) - 1.

Substituting F(0) into equation (1):
F(1) - 1 = F(-1) - 1

Rearranging: F(-1) - F(1) = 0. Hence sufficient.

Hope it helps!!

GMATH Teacher
Status: GMATH founder
Joined: 12 Oct 2010
Posts: 750

11 Feb 2019, 09:28

fskilnik wrote:
GMATH practice exercise (Quant Class 14)
What is the value of F(-1)-F(1)?
(1) F(x) = x^2, for all x
(2) F(x+1) = F(x) + 2x + 1, for all x

$$? = F(-1) - F(1)$$

(1) $$F(x) = x^2$$ for all $$x$$ $$\;\Rightarrow\;$$ $$F(x) = F(-x)$$ for all $$x$$. Taking $$x = 1$$ gives $$F(1) = F(-1)$$, hence $$? = 0$$.

(2) $$F(x+1) - F(x) = 2x + 1$$ for all $$x$$ (*)

Take $$x = -1$$: by (*), $$F(0) - F(-1) = -2 + 1$$.
Take $$x = 0$$: by (*), $$F(1) - F(0) = 1$$.

Adding, $$F(1) - F(-1) = 1 + (-2 + 1) = 0$$, hence $$? = 0$$.

The correct answer is therefore (D).

We follow the notations and rationale taught in the GMATH method.

Regards,
Fabio.

_________________
Fabio Skilnik :: GMATH method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
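As a quick numerical sketch of my own (not part of the thread), one can check that F(x) = x² is consistent with statement (2) and that the target difference is zero:

```python
def F(x):
    """F(x) = x^2 satisfies both statements."""
    return x ** 2

# Statement (2): F(x+1) = F(x) + 2x + 1, checked on a sample of values.
for x in range(-5, 6):
    assert F(x + 1) == F(x) + 2 * x + 1

print(F(-1) - F(1))  # → 0, matching answer (D)
```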
http://mathhelpforum.com/advanced-algebra/179728-numbers-exist-modular-arithmetic-negative-exponents.html
Math Help - which numbers exist in modular arithmetic? negative exponents??? 1. which numbers exist in modular arithmetic? negative exponents??? I have a question about which numbers exist and which ones don't in modular arithmetic. The questions from the textbook are in red. The solutions from the manual are in blue (but got scanned as purple). My comments are green. 2. In $\mathbb{Z}_n$, the inverse $x^{-1}$ of x is an element of $\mathbb{Z}_n$ such that if you multiply it by x (mod n), the answer is 1. So in $\mathbb{Z}_5$ the inverse of 2 is $2^{-1} = 3$, because when you multiply 2 by 3 you get 6, which is equal to 1 (mod 5). But in $\mathbb{Z}_4$, $2^{-1}$ does not exist, because whatever you multiply 2 by you will always get an even number, so the answer can never be equal to 1 (mod 4). In general, an element x in $\mathbb{Z}_n$ will have an inverse provided that the numbers x and n have no common factor. 3. Originally Posted by Opalg In $\mathbb{Z}_n$, the inverse $x^{-1}$ of x is an element of $\mathbb{Z}_n$ such that if you multiply it by x (mod n), the answer is 1. What is the logic and/or intuition behind this rule? 4. Working in $\mathbb{Z}_5$, the notation $3^{-1}$ simply means the inverse of 3, which means the number $s$, such that $3s\equiv 1\pmod{5}$. In this case, we have $s=2$, since $3\cdot 2 = 6$, which is 1 mod 5. Short answer, $3^{-1}=2$. Generally, when working mod n, the number x has an inverse, iff x and n are relatively prime. 5. Originally Posted by kablooey Originally Posted by Opalg In $\mathbb{Z}_n$, the inverse $x^{-1}$ of x is an element of $\mathbb{Z}_n$ such that if you multiply it by x (mod n), the answer is 1. What is the logic and/or intuition behind this rule? It's the natural definition of an an inverse. The inverse of any number is what you multiply the number by in order to get 1. Like the inverse of 7 is $\tfrac17$, because $7*\tfrac17 = 1$. 
The only difference from ordinary arithmetic is that in $\mathbb{Z}_n$, "multiplication" means "multiplication mod n".

6. it all has to do with the prime factorization of n. for example, in Z4, 2 is what is called a "zero divisor", because (2)(2) = 4 = 0 (mod 4). and, as you might expect, "dividing by a zero divisor" is like "dividing by 0", it just doesn't work. but 5 is prime, which means that every non-zero element of Z5 will have an inverse: (1)(1) = 1 (mod 5), so 1 is its own inverse. (2)(3) = 6 = 1 (mod 5), so 2 and 3 are inverses. (4)(4) = 16 = 1 (mod 5), so 4 is its own inverse. that is: "1/2" can be defined in Z5: it is just 3. but "1/2" cannot be defined in Z4: (1)(2) = 2 (mod 4), which is not 1. (2)(2) = 4 = 0 (mod 4), not 1 either. (3)(2) = 6 = 2 (mod 4), again, not 1. no matter what we multiply 2 by in Z4, we never get 1. multiplication by 2 in Z4 is a "one-way street", we can't go backwards and "undo it". suppose we know that 2x = 2 (mod 4). well, we can't say for sure if x = 1, or if x = 3. this happens precisely because 2 and 4 share a common factor.

7. Originally Posted by kablooey What is the logic and/or intuition behind this rule? I'm not sure exactly what you're asking. If you're asking what the use is of this notion of inverse, then the original post is a fine example: You can use inverses to solve modular equations such as 3x = 4 (mod 5). You know that 2*3 = 1 (mod 5). Hence you can multiply both sides of the congruence 3x = 4 (mod 5) by 2 to obtain 2*3x = 8 (mod 5), so since 2*3 = 1 (mod 5), and since 8 = 3 (mod 5), this reduces to 1*x = 3 (mod 5), or x = 3 (mod 5). Thus knowing the inverse of 3 helped solve the problem.
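The discussion above can be sketched in code (a minimal brute-force example of my own, not from the thread):

```python
def inverse(x, n):
    """Return the inverse of x mod n by brute force, or None if it doesn't exist."""
    for s in range(1, n):
        if (x * s) % n == 1:
            return s
    return None

print(inverse(3, 5))  # 2, since 3*2 = 6 = 1 (mod 5)
print(inverse(2, 5))  # 3
print(inverse(2, 4))  # None: 2 and 4 share a common factor

# Solving 3x = 4 (mod 5) with the inverse of 3, as in the last reply:
x = (inverse(3, 5) * 4) % 5
print(x)              # 3, and indeed 3*3 = 9 = 4 (mod 5)
```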
https://en.wikipedia.org/wiki/Tutte%E2%80%93Coxeter_graph
# Tutte–Coxeter graph

- Named after: W. T. Tutte, H. S. M. Coxeter
- Vertices: 30
- Edges: 45
- Diameter: 4
- Girth: 8
- Automorphisms: 1440 (Aut(S6))
- Chromatic number: 2
- Chromatic index: 3
- Book thickness: 3
- Queue number: 2
- Properties: cubic, cage, Moore graph, symmetric, distance-regular, distance-transitive, bipartite

In the mathematical field of graph theory, the Tutte–Coxeter graph or Tutte eight-cage is a 3-regular graph with 30 vertices and 45 edges. As the unique smallest cubic graph of girth 8 it is a cage and a Moore graph. It is bipartite, and can be constructed as the Levi graph of the generalized quadrangle W2 (known as the Cremona–Richmond configuration). The graph is named after William Thomas Tutte and H. S. M. Coxeter; it was discovered by Tutte (1947) but its connection to geometric configurations was investigated by both authors in a pair of jointly published papers (Tutte 1958; Coxeter 1958a). All the cubic distance-regular graphs are known.[1] The Tutte–Coxeter graph is one of the 13 such graphs. It has book thickness 3 and queue number 2.[2]

## Constructions and automorphisms

A particularly simple combinatorial construction of the Tutte–Coxeter graph is due to Coxeter (1958b), based on work by Sylvester (1844). In modern terminology, take a complete graph on 6 vertices, K6. It has 15 edges and also 15 perfect matchings. Each vertex of the Tutte–Coxeter graph corresponds to an edge or perfect matching of the K6, and each edge of the Tutte–Coxeter graph connects a perfect matching of the K6 to each of its three component edges. By symmetry, each edge of the K6 belongs to three perfect matchings. Incidentally, this partitioning of vertices into edge-vertices and matching-vertices shows that the Tutte–Coxeter graph is bipartite.
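The K6 construction above can be checked computationally. The sketch below (my own, pure Python) builds the incidence between the 15 edges and 15 perfect matchings of K6 and verifies the graph's basic parameters:

```python
from itertools import combinations

# Vertices of one side: the 15 edges of K6.
edges = [frozenset(e) for e in combinations(range(6), 2)]

def perfect_matchings(elems):
    """Yield all partitions of elems into unordered pairs."""
    if not elems:
        yield frozenset()
        return
    a = elems[0]
    for b in elems[1:]:
        rest = [x for x in elems if x not in (a, b)]
        for m in perfect_matchings(rest):
            yield m | {frozenset((a, b))}

matchings = list(perfect_matchings(list(range(6))))  # the 15 perfect matchings

# Tutte-Coxeter graph: connect each matching to its three component edges.
adj = {v: set() for v in edges + matchings}
for m in matchings:
    for e in m:
        adj[m].add(e)
        adj[e].add(m)

def girth(adj):
    """Shortest cycle length, via BFS from every vertex."""
    best = float("inf")
    for root in adj:
        dist, parent, frontier = {root: 0}, {root: None}, [root]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w], parent[w] = dist[u] + 1, u
                        nxt.append(w)
                    elif w != parent[u]:
                        best = min(best, dist[u] + dist[w] + 1)
            frontier = nxt
    return best

n_edges = sum(len(s) for s in adj.values()) // 2
print(len(adj), n_edges, girth(adj))  # 30 vertices, 45 edges, girth 8
```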
Based on this construction, Coxeter showed that the Tutte–Coxeter graph is a symmetric graph; it has a group of 1440 automorphisms, which may be identified with the automorphisms of the group of permutations on six elements (Coxeter 1958b). The inner automorphisms of this group correspond to permuting the six vertices of the K6 graph; these permutations act on the Tutte–Coxeter graph by permuting the vertices on each side of its bipartition while keeping each of the two sides fixed as a set. In addition, the outer automorphisms of the group of permutations swap one side of the bipartition for the other. As Coxeter showed, any path of up to five edges in the Tutte–Coxeter graph is equivalent to any other such path by one such automorphism.

## The Tutte–Coxeter graph as a building

This graph is the spherical building associated to the symplectic group $Sp_4(\mathbb{F}_2)$ (and there is an exceptional isomorphism between this group and the symmetric group $S_6$).

## References

1. ^ Brouwer, A. E.; Cohen, A. M.; Neumaier, A. Distance-Regular Graphs. New York: Springer-Verlag, 1989.
2. ^ Wolz, Jessica. Engineering Linear Layouts with SAT. Master's thesis, University of Tübingen, 2018.
- Coxeter, H. S. M. (1958a). "The chords of the non-ruled quadric in PG(3,3)". Can. J. Math. 10: 484–488. doi:10.4153/CJM-1958-047-0.
- Sylvester, J. J. (1844). "Elementary researches in the analysis of combinatorial aggregation". Phil. Mag. Series 3. 24: 285–295.
https://plainmath.net/20136/automobile-automobile-kilometers-kilometers-confidence-kilometers
# A random sample of 100 automobile owners in the state of Virginia shows that an automobile is driven on average 23,500 kilometers per year with a standard deviation of 3900 kilometers. Construct a 99% confidence interval for the average number of kilometers an automobile is driven annually in Virginia. What can we assert with 99% confidence about the possible size of our error if we estimate the average number of kilometers driven by car owners in Virginia to be 23,500 kilometers per year?

A random sample of 100 automobile owners in the state of Virginia shows that an automobile is driven on average 23,500 kilometers per year with a standard deviation of 3900 kilometers. Assume the distribution of measurements to be approximately normal.

a) Construct a 99% confidence interval for the average number of kilometers an automobile is driven annually in Virginia.

b) What can we assert with 99% confidence about the possible size of our error if we estimate the average number of kilometers driven by car owners in Virginia to be 23,500 kilometers per year?

a) The 99% confidence interval for the average number of kilometres an automobile is driven is obtained below.

The sample mean is 23,500 kilometres, the population standard deviation is $\sigma = 3900$, and the sample size is $n = 100$.

Critical value: from the standard normal distribution table, the critical value for the 99% confidence level is $z^* = 2.58$.

The confidence interval formula for the population mean is
$C.I. = \bar{x} \pm z^* \dfrac{\sigma}{\sqrt{n}}$

Substituting the mean $23{,}500$, standard deviation $\sigma = 3900$ and sample size $n = 100$:
Substituting the values: $C.I. = \bar{x} \pm z^{*}\frac{\sigma}{\sqrt{n}} = 23{,}500 \pm 2.58\,\frac{3{,}900}{\sqrt{100}} = 23{,}500 \pm 2.58(390) = 23{,}500 \pm 1006.2$. Thus, the 99% confidence interval for the average number of kilometres an automobile is driven is between 22,493.8 and 24,506.2. b) The margin of error is $ME = z_{\alpha/2}\left(\frac{\sigma}{\sqrt{n}}\right)$. Substituting 3,900 for $\sigma$, 100 for $n$ and 2.58 for $z_{\alpha/2}$: $ME = 2.58\left(\frac{3{,}900}{\sqrt{100}}\right) = 2.58(390) = 1006.2$. With 99% confidence, the possible size of our error will not exceed 1006.2 kilometres.
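The hand calculation above can be reproduced in a few lines of standard-library Python. Note that the exact two-sided 99% quantile is 2.5758, so this sketch gives a slightly tighter interval than the rounded table value 2.58 used above:

```python
from math import sqrt
from statistics import NormalDist

xbar, sigma, n = 23_500, 3_900, 100
confidence = 0.99

# Two-sided critical value z_{alpha/2}; the table value 2.58 is this, rounded.
z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)

margin = z * sigma / sqrt(n)              # maximum error at 99% confidence
lo, hi = xbar - margin, xbar + margin
print(f"z = {z:.3f}, CI = ({lo:.1f}, {hi:.1f}), margin = {margin:.1f} km")
```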
https://texfaq.org/FAQ-clsvpkg
Frequently Asked Question List for TeX # What are LaTeX classes and packages? LaTeX aims to be a general-purpose document processor. Such an aim could be achieved by a selection of instructions which would enable users to use TeX primitives, but such a procedure is considered too inflexible (and probably too daunting for ordinary users). Thus the designers of LaTeX created a model which offered an abstraction of the design of documents. Obviously, not all documents can look the same (even with the defocussed eye of abstraction), so the model uses classes of document. Base LaTeX offers four classes for general documents: `book`, `report`, `article` and `letter`, plus some more specialist classes such as `slides` and `ltnews`. For each class, LaTeX provides a class file; the user arranges to use it via a `\documentclass` command at the top of the document. So a document starting `\documentclass{article}` may be called “an article document”. This is a good scheme, but it has a glaring flaw: the actual typographical designs provided by the LaTeX class files aren’t widely liked. The way around this is to refine the class. To refine a class, a programmer may write a new class file that loads an existing class, and then does its own thing with the document design. If the user finds such a refined class, all is well, but if not, the common way is to load a package (or several). The LaTeX distribution, itself, provides rather few package files, but there are lots of them, by a wide variety of authors, to be found on the archives. Several packages are designed just to adjust the design of a document — using such packages achieves what the programmer might have achieved by refining the class. 
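The class/package relationship is visible at the top of any LaTeX document. A minimal illustrative preamble (the package choices here are just examples):

```latex
\documentclass{article}  % class file article.cls: overall document design
\usepackage{graphics}    % package file graphics.sty: external graphics support
\usepackage{hyperref}    % package file hyperref.sty: hyper-references

\begin{document}
This is ``an article document'', refined by two packages.
\end{document}
```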
Other packages provide new facilities: for example, the `graphics` package (actually provided as part of any LaTeX distribution) allows the user to load externally-provided graphics into a document, and the `hyperref` package enables the user to construct hyper-references within a document. On disc, class and package files only appear different by virtue of their name “extension” — class files are called `*.cls` while package files are called `*.sty`. Thus we find that the LaTeX standard `article` class is represented on disc by a file called `article.cls`, while the `hyperref` package is represented on disc by a file called `hyperref.sty`. The class vs. package distinction was not clear in LaTeX 2.09 — everything was called a style (“document style” or “document style option”). It doesn’t really matter that the nomenclature has changed: the important requirement is to understand what other people are talking about. FAQ ID: Q-clsvpkg
http://www.devsedge.net/4v1iyivd/sodium-acetate-acid-or-base-12fb16
Notice that for all of these examples, the anion is the conjugate base of a weak acid (carbonic acid, bisulfate (second dissociation step of sulfuric acid), acetic acid, hydrocyanic acid, hydrogen sulfide). An example of a basic salt is sodium bicarbonate, NaHCO3. Therefore, it reacts with water in the following fashion: $\text{HCO}_3^-(\text{aq})+\text{H}_2\text{O}(\text{l})\rightleftharpoons \text{H}_2\text{CO}_3(\text{aq})+\text{OH}^-(\text{aq})$. For example, sodium acetate, NaCH 3 CO 2, is a salt formed by the reaction of the weak acid acetic acid with the strong base sodium hydroxide. Acetic acid, of course, dissociates to give the H 3 O + and OAc- ions: HOAc(aq) + H 2 O(l) ⇌ H 3 O + (aq) + OAc- (aq). The NH3+ group contains an acidic proton capable of dissociating in solution; therefore, a solution of anilinium chloride in pure water will have a pH less than 7. Basic salts contain the conjugate base of a weak acid, so when they dissolve in water, they react with water to yield a solution with pH greater than 7.0. So in solution we have sodium ions, Na+, and acetate anions, CH3COO-. However, as we have already discussed, the ammonium ion acts as a weak acid in solution, while the bicarbonate ion acts as a weak base. A mixture of acetic acid and sodium acetate is acidic because the Ka of acetic acid is greater than the Kb of its conjugate base acetate. Because bicarbonate is slightly more basic than ammonium is acidic, a solution of ammonium bicarbonate in pure water will be slightly basic (pH > 7.0).
Sodium Bicarbonate: Because the bicarbonate ion is the conjugate base of carbonic acid, a weak acid, sodium bicarbonate will yield a basic solution in water. Sodium Acetate Injection, USP (2 mEq/mL) is a sterile, nonpyrogenic, concentrated solution of Sodium Acetate in water for injection. The first step toward answering this question is recognizing that there are two sources of the OAc- ion in this solution. Because both ions can hydrolyze, will a solution of ammonium bicarbonate be acidic or basic? We can determine the answer by comparing the Ka and Kb values for each ion; in this case, the value of Kb for bicarbonate is greater than the value of Ka for ammonium. A salt is basic when the anion in the salt is the conjugate base of a weak acid; these ions react with water, producing NaOH and CH3COOH. In acid–base chemistry, salts are ionic compounds that result from the neutralization reaction of an acid and a base. The injection solution contains no bacteriostat, antimicrobial agent or added buffer. A sodium acetate/acetic acid mixture is a buffer because it contains both the weak acid and its salt. Ions are atoms or molecules that have lost or gained one or more electrons. Although sodium acetate is an ionic compound, it dissociates in water to produce the sodium ion Na+ and the acetate ion CH3COO-. "Acetate" also describes the conjugate base or ion.
From the previous concept, we know that salts containing the bicarbonate ion (HCO3–) are basic, whereas salts containing the bisulfate ion (HSO4–) are acidic. Water is a polar molecule because oxygen has a higher electronegativity than hydrogen. For a generalized anion B–, the net ionic reaction is: $\text{B}^-(\text{aq})+\text{H}_2\text{O}(\text{l})\rightleftharpoons \text{BH}(\text{aq})+\text{OH}^-(\text{aq})$. Sodium acetate, the sodium salt of acetic acid, is a white or colourless crystalline compound, prepared by the reaction of acetic acid with sodium carbonate or with sodium hydroxide. Its solution is basic because the salt dissociates into Na+ and the acetate ion. Basic salts form from the neutralization of a strong base and a weak acid; for instance, the reaction of sodium hydroxide (a strong base) with acetic acid (a weak acid) will yield water and sodium acetate. An acidic buffer is a solution of a weak acid (acetic acid) and its conjugate base pair (sodium acetate) that prevents the pH of a solution from changing drastically through the action of each component. A mixture of acetic acid and sodium acetate is acidic because the Ka of acetic acid is greater than the Kb of its conjugate base acetate. The injection must not be administered undiluted. Sodium chloride, for instance, contains chloride (Cl–), which is the conjugate base of HCl. Salts with acidic protons in the cation are most commonly ammonium salts, or organic compounds that contain a protonated amine group.
Solutions of a weak acid and a salt of the acid, such as acetic acid mixed with sodium acetate, and solutions of a weak base and one of its salts, such as ammonium hydroxide mixed with ammonium chloride (as explained in Section 24.4.6), undergo relatively little change of pH on the further addition of acid. When CH3COONa is placed in water we have a basic solution: sodium acetate is a salt derived from the strong base NaOH and the weak acid CH3COOH. A good example of a salt whose cation and anion can both hydrolyze is ammonium bicarbonate, NH4HCO3; like all ammonium salts, it is highly soluble, and its dissociation reaction in water is as follows: $\text{NH}_4\text{HCO}_3(\text{s})\rightarrow \text{NH}_4^+(\text{aq})+\text{HCO}_3^-(\text{aq})$. Keep in mind that a salt will only be basic if it contains the conjugate base of a weak acid. An example of an acid salt is one containing any of the acidic cations above with a neutral base, such as ammonium chloride (NH4Cl). Because it is capable of deprotonating water and yielding a basic solution, sodium bicarbonate is a basic salt. An acetate /ˈæsɪteɪt/ is a salt formed by the combination of acetic acid with a base (e.g. an alkaline, earthy, metallic, nonmetallic or radical base). Anilinium chloride is an example of an acid salt. Sodium acetate is often used as a buffer component: mixed with acetic acid it forms a buffer, because the solution contains both the weak acid and its salt. Now consider a 0.001 M solution of sodium acetate in pure water.
Acid salts can also contain an acidic proton in the anion; salts containing such anions — such as potassium bisulfate — will yield weakly acidic solutions in water. The ammonium ion contains a hydrolyzable proton, which makes it an acid salt. When both the cation and the anion are capable of hydrolysis, compare the Ka of the cation with the Kb of the anion to decide whether the solution is acidic or basic.
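To make the 0.001 M sodium acetate example concrete, here is a rough sketch of the pH estimate. The Ka of acetic acid (1.8 × 10⁻⁵) is an assumed textbook value, and the calculation neglects water's own OH⁻ contribution, which is only marginally justified at this dilution, so the result is approximate:

```python
from math import sqrt, log10

Ka_acetic = 1.8e-5   # assumed textbook value for acetic acid at 25 °C
Kw = 1.0e-14
C = 0.001            # mol/L sodium acetate, the 0.001 M example above

# Conjugate-pair relation: Ka * Kb = Kw
Kb = Kw / Ka_acetic

# CH3COO- + H2O <=> CH3COOH + OH-, assuming x << C and neglecting
# water's own OH- (a rough approximation at this dilution):
OH = sqrt(Kb * C)
pH = 14 - (-log10(OH))
print(f"pH ~ {pH:.2f}")  # a little above 7: the solution is weakly basic
```

The answer comes out just below 8, consistent with the claim that the salt of a strong base and a weak acid gives a mildly basic solution.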
http://mathhelpforum.com/discrete-math/37999-how-continue-induction.html
# Thread: How to continue? Induction. 1. ## How to continue? Induction. Hi everyone! I'm not sure what to do with this induction question, can anyone help me please? Let a_1=0, a_2=3, and, for all n>=3 let a_n= 1/2(a_(n-1) + a_(n-2)) By induction on n, show that for all n>=2, a_n= 2 + 4(-1/2)^n. What I did so far was work with the 2nd equation, and said: let n=1 0= 2 + 4(-1/2)^1 = 2 - 2 =0 therefore it is true for n=1 Assume true for n=k a_k= 2 + 4(-1/2)^k ...what's the next step for the proof?Or have I gone completely wrong from the beginning?? Thanks in advance for your help guys! 2. First step: $\displaystyle a_3 = \frac{{a_1 + a_2 }}{2} = \frac{3}{2} = 2 + 4\left( { - \frac{1}{2}} \right)^3.$ Now assume that $\displaystyle a_K = 2 + 4\left( { - \frac{1}{2}} \right)^K$ is true. Look at the next step. $\displaystyle a_{K + 1} = \frac{{a_K + a_{K - 1} }}{2} = \frac{{2 + 4\left( { - \frac{1}{2}} \right)^K + 2 + 4\left( { - \frac{1}{2}} \right)^{K - 1} }}{2}$ $\displaystyle \frac{{2 + 4\left( { - \frac{1}{2}} \right)^K + 2 + 4\left( { - \frac{1}{2}} \right)^{K - 1} }}{2} = 2 + 2\left( { - \frac{1}{2}} \right)^{K + 1} \left[ {\left( { - \frac{1}{2}} \right)^{ - 1} + \left( { - \frac{1}{2}} \right)^{ - 2} } \right]$ Can you finish? 3. In this case you'll have to assume that it's true for $\displaystyle a_{n-1}$ and $\displaystyle a_{n}$ and then show it's true for $\displaystyle a_{n+1}$ (this is a stronger form of induction) 4. Originally Posted by PaulRS In this case you'll have to assume that it's true for $\displaystyle a_{n-1}$ and $\displaystyle a_{n}$ and then show it's true for $\displaystyle a_{n+1}$ (this is a stronger form of induction) "Strong induction" is the name This means that the property is true for any k<n, and then show that it's true for n ^^ 5. Originally Posted by simplysparklers Hi everyone! I'm not sure what to do with this induction question, can anyone help me please? 
Let a_1=0, a_2=3, and, for all n>=3 let a_n= 1/2(a_(n-1) + a_(n-2)) By induction on n, show that for all n>=2, a_n= 2 + 4(-1/2)^n. What I did so far was work with the 2nd equation, and said: let n=1 0= 2 + 4(-1/2)^1 = 2 - 2 =0 therefore it is true for n=1 Assume true for n=k a_k= 2 + 4(-1/2)^k ...what's the next step for the proof?Or have I gone completely wrong from the beginning?? Thanks in advance for your help guys! Our base case is n=3 $\displaystyle a_3=\frac{1}{2}(0+3)=\frac{3}{2}$ also $\displaystyle a_3=2+4\left( -\frac{1}{2}\right)^3=2-\frac{1}{2}=\frac{3}{2}$ So the base case checks We need to use "strong" (you may know it by another name)mathematical induction assume true of all k < n use this to prove n is true We want to show $\displaystyle \frac{1}{2}(a_{n-1}+a_{n-2})=a_n$ since n-1,n-2 < n they are true by the induction hypothesis $\displaystyle \frac{1}{2}(a_{n-1}+a_{n-2})=\frac{1}{2}(2+4\left( -\frac{1}{2}\right)^{n-2}+2+4\left( -\frac{1}{2}\right)^{n-1})=$ $\displaystyle 2+2\left( -\frac{1}{2}\right)^{n-2}+2\left( -\frac{1}{2}\right)^{n-1}=2+2\left( -\frac{1}{2}\right)^{n-2}\left( 1+\left(-\frac{1}{2}\right) \right)=$ $\displaystyle 2+2\left( -\frac{1}{2}\right)^{n-2}\left( \frac{1}{2}\right)=2+\left( -\frac{1}{2}\right)^{n-2}=$ $\displaystyle 2+\underbrace{(4)\left( -\frac{1}{2}\right)^2}_{=1}\left( -\frac{1}{2}\right)^{n-2}=2+4\left( -\frac{1}{2}\right)^n=a_n$ QED Thank you so much all you guys for your help! One more question , and then I think I'm done with this lot of questions , the end of this question after the induction is: deduce that (a_n) $\displaystyle \rightarrow$ 2. What I did so far was: |a_n- $\displaystyle \alpha$ | = |2+4(-1/2)^n-2| = |4(-1/2)^n| ....what do I do from here please? How do I prove 2 is $\displaystyle \alpha$ ?? 7. Originally Posted by simplysparklers One more question , deduce that (a_n) $\displaystyle \rightarrow$ 2. Well, $\displaystyle \left( {\frac{{ - 1}}{2}} \right)^n \to 0$. 8. 
But what if n is negative, then it doesn't approach 0?? 9. Originally Posted by simplysparklers But what if n is negative, then it doesn't approach 0?? How can n be negative? It is an index of a sequence. To find the limit of the sequence, n approaches infinity and is therefore positive. 10. Oh of course!Duh me! Thanks Plato!!& sorry for all the silly questions, I'm just not getting the whole concept of sequences at all!!
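The closed form proved in this thread is easy to check numerically. The snippet below is just an illustration (not part of the original discussion): it compares the recurrence against 2 + 4(−1/2)ⁿ and shows the sequence tending to 2.

```python
def a(n):
    """a_1 = 0, a_2 = 3, a_n = (a_{n-1} + a_{n-2}) / 2 for n >= 3."""
    prev, cur = 0.0, 3.0          # a_1, a_2
    for _ in range(n - 2):
        prev, cur = cur, (cur + prev) / 2
    return cur if n >= 2 else prev

closed_form = lambda n: 2 + 4 * (-0.5) ** n

# The induction result: both definitions agree for every n >= 2 ...
for n in range(2, 30):
    assert abs(a(n) - closed_form(n)) < 1e-12

# ... and since (-1/2)^n -> 0 as n -> infinity, the sequence tends to 2.
print(a(40))
```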
https://skyciv.com/docs/tutorials/beam-tutorials/how-to-calculate-an-indeterminate-beam/
SkyCiv Documentation # How to Calculate an Indeterminate Beam? ## How to Calculate the Bending Moment of an Indeterminate Beam – Double Integration Method Indeterminate beams can be a challenge because of the extra steps needed for solving the reactions. Remember that indeterminate structures have what is called a degree of indeterminacy. To solve the structure, boundary conditions have to be introduced; consequently, the higher the degree of indeterminacy, the more boundary conditions have to be identified. But before we can solve an indeterminate beam, we first need to identify whether the beam is statically indeterminate. As beams are one-dimensional structures, checking for external static indeterminacy is sufficient: $$ i_{e}=R-\left ( 3+e_{c} \right ) $$ ### Where: • ie = Degree of indeterminacy • R = Total number of reactions • ec = External conditions (e.g. internal hinge) Typically, however, without needing to solve for the degree of indeterminacy, anything other than a simple span or a cantilever beam is statically indeterminate, assuming the beam contains no internal hinges. There are many approaches when it comes to solving indeterminate beams, but for the sake of simplicity and similarity with SkyCiv Beam’s hand calculations, we will discuss the Double Integration method. ## Double Integration Double Integration is perhaps the simplest of all methods for the analysis of beams. The concept is straightforward compared with other methods, as it relies mainly on a basic understanding of integral calculus, hence the name. The method starts from the relationship between the curvature of the beam and the moment, which is shown below.
[math] \frac{1}{\rho}=\frac{M}{EI} [math]

Note that 1/ρ is the curvature of the beam and that ρ is the radius of the curve. Fundamentally, curvature is defined as the rate of change of the tangent with respect to the arc length. As the moment is a function of the loading with respect to the length of the member, integrating the curvature with respect to the length of the member yields the slope of the beam. Similarly, integrating the slope with respect to the length of the member yields the beam deflection. As typical structural loadings are algebraic in nature, integrating these expressions is as simple as applying the general power formula.

[math] \int f\left ( x \right )^{n}dx=\frac{f\left ( x \right )^{n+1}}{n+1}+C [math]

Perhaps the best way to understand the concept is to work through an example. The sample beam above is an indeterminate beam with triangular loadings. With Ay, By and Cy denoting the reactions at the first, second and third supports respectively, the first step in solving these unknowns is to write the equilibrium equations.

[math] \left ( Eq. 1 \right ) \sum F_{x}=0: A_{x}=0 [math]

[math] \left ( Eq. 2 \right ) \sum F_{y}=0: 0=1-A_{y}-B_{y}-C_{y} [math]

[math] \left ( Eq. 3 \right ) \sum M_{B}=0: 0=A_{y}-\frac{1}{6}+\frac{1}{6}-C_{y}; A_{y}= C_{y} [math]

Note that the beam has a degree of static indeterminacy of 1. As there are four unknowns (Ax, Ay, By, and Cy) and only three equilibrium equations so far, it is necessary to obtain one more equation from the boundary conditions. Recall that the moments generated by a point load and by a triangular load are as follows.

[math] M=F\times x; M = Fx [math]

[math] M=\frac{w_{0}\times x}{2}\times \left ( \frac{x}{3} \right ); M = \frac{w_{0}x^{2}}{6} [math]

Applying the double integration method yields the equations displayed below.
[math] EI\frac{d^{2}y}{dx^{2}}=A_{y}\left \langle x-0 \right \rangle^{1}-\frac{1}{6}\left \langle x-0 \right \rangle^{3}+B_{y}\left \langle x-1 \right \rangle^{1}+\frac{1}{3}\left \langle x-1 \right \rangle^{3}+C_{y}\left \langle x-2 \right \rangle^{1} [math]

[math] EI\frac{dy}{dx}=\frac{A_{y}}{2}\left \langle x-0 \right \rangle^{2}-\frac{1}{24}\left \langle x-0 \right \rangle^{4}+\frac{B_{y}}{2}\left \langle x-1 \right \rangle^{2}+\frac{1}{12}\left \langle x-1 \right \rangle^{4}+\frac{C_{y}}{2}\left \langle x-2 \right \rangle^{2}+C_{1} [math]

[math] EIy=\frac{A_{y}}{6}\left \langle x-0 \right \rangle^{3}-\frac{1}{120}\left \langle x-0 \right \rangle^{5}+\frac{B_{y}}{6}\left \langle x-1 \right \rangle^{3}+\frac{1}{60}\left \langle x-1 \right \rangle^{5}+\frac{C_{y}}{6}\left \langle x-2 \right \rangle^{3}+C_{1}x+C_{2} [math]

Note: The equations above are written using Macaulay brackets, where a term ⟨x − a⟩ is taken to be zero when x < a.

In the equations above, the fourth term may seem to come out of nowhere; in fact, the direction of that loading is opposite the direction of gravity. This is because the standard equation for a triangular loading only works when the load ascends as the length increases; this is not an issue for uniformly distributed and point loads because of their symmetry. In effect, the equivalent loading for the beam above looks like the beam below, and the equations are based on it.

To solve for C1 and C2, the boundary conditions have to be determined. In the beam above, three such boundary conditions exist, at x = 0, x = 1 and x = 2, where the deflection y is zero at all three locations.

### Boundary Condition 1

[math] x=0, y=0; C_{2}=0 [math]

### Boundary Condition 2

[math] x=1, y=0; C_{1}=\frac{1}{120}-\frac{A_{y}}{6} [math]

After determining the values of the constants, the last equation can now be obtained using the last boundary condition.
### Boundary Condition 3

[math] x=2, y=0, (Eq. 4) 0 = \frac{4}{3}A_{y}-\frac{4}{15}+\frac{B_{y}}{6}+\frac{1}{60}+\frac{1}{60}-\frac{A_{y}}{3} [math]

Note that the boundary condition θ = 0 at x = 1 could also be used, although it applies only to the middle reaction of a symmetrical continuous beam under symmetrical loading. With all four equations determined, they can now be solved simultaneously, yielding the following reactions.

[math] A_{x}=0\,\text{kN}, A_{y}=0.1\,\text{kN}, B_{y}=0.8\,\text{kN}, C_{y}=0.1\,\text{kN} [math]

With the reactions determined, their values can be substituted back into the moment equation, which allows us to determine the moment in any part of the beam system.

[math] M=\frac{1}{10}\left \langle x-0 \right \rangle^{1}-\frac{1}{6}\left \langle x-0 \right \rangle^{3}+\frac{8}{10}\left \langle x-1 \right \rangle^{1}+\frac{1}{3}\left \langle x-1 \right \rangle^{3}+\frac{1}{10}\left \langle x-2 \right \rangle^{1} [math]

Another convenience of double integration is that the moment equation is presented in a form that can be used to solve for the shear via the relationship shown below.

[math] V=\frac{dM}{dx} [math]

[math] V=\frac{1}{10}\left \langle x-0 \right \rangle^{0}-\frac{1}{2}\left \langle x-0 \right \rangle^{2}+\frac{8}{10}\left \langle x-1 \right \rangle^{0}+1\left \langle x-1 \right \rangle^{2}+\frac{1}{10}\left \langle x-2 \right \rangle^{0} [math]

Again, using only basic differential calculus, setting the derivative of a function to zero yields the maximum or minimum of that function.
Thus, setting V = 0 yields a maximum positive moment of M = 0.030 kNm at x = 0.447 m and x = 1.553 m.

[math] 0=\frac{1}{10}-\frac{x^{2}}{2}, x=0.447m [math]

[math] 0=\frac{1}{10}-\frac{x^{2}}{2}+\frac{8}{10}+\left ( x-1 \right )^{2}, x=1.553m [math]

[math] M=\frac{1}{10}\left \langle 0.447 \right \rangle^{1}-\frac{1}{6}\left \langle 0.447 \right \rangle^{3}, M=0.030kNm [math]

[math] M=\frac{1}{10}\left \langle 1.553 \right \rangle^{1}-\frac{1}{6}\left \langle 1.553 \right \rangle^{3}+\frac{8}{10}\left \langle 0.553 \right \rangle^{1}+\frac{1}{3}\left \langle 0.553 \right \rangle^{3}, M=0.030kNm [math]

Of course, all of this can be verified with SkyCiv Beam. You can try the free version of SkyCiv Beam or sign up here. Note that the free version is limited to the analysis of statically determinate beams.
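As a cross-check on the hand calculation, the reaction solve and the extremum search above can be reproduced in a few lines of Python. This is a sketch, not SkyCiv's implementation; all function and variable names are ours.

```python
from fractions import Fraction as F

# Substitute Cy = Ay (Eq. 3) and By = 1 - 2*Ay (Eq. 2) into Eq. 4,
# leaving one linear equation in Ay:
#   (4/3 - 1/3 - 1/3) * Ay = 4/15 - 1/6 - 1/60 - 1/60
coef = F(4, 3) - F(1, 3) - F(1, 3)
const = F(4, 15) - F(1, 6) - F(1, 60) - F(1, 60)
Ay = const / coef      # 1/10 kN
By = 1 - 2 * Ay        # 8/10 kN
Cy = Ay                # 1/10 kN

def mac(x, a, n):
    """Macaulay bracket <x - a>^n: zero for x < a."""
    return (x - a) ** n if x > a else 0.0

def M(x):
    """Bending moment (kNm) from the final moment equation."""
    return (float(Ay) * mac(x, 0, 1) - mac(x, 0, 3) / 6
            + float(By) * mac(x, 1, 1) + mac(x, 1, 3) / 3
            + float(Cy) * mac(x, 2, 1))

def V(x):
    """Shear (kN), the derivative dM/dx."""
    return (float(Ay) * mac(x, 0, 0) - mac(x, 0, 2) / 2
            + float(By) * mac(x, 1, 0) + mac(x, 1, 2)
            + float(Cy) * mac(x, 2, 0))
```

Evaluating `V(0.447)` gives approximately zero and `M(0.447)` gives approximately 0.030 kNm, matching the values above.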
http://planetmath.org/GrothendieckGroup
# Grothendieck group

Let $S$ be an abelian semigroup. The Grothendieck group of $S$ is $K(S)=S\times S/\mathord{\sim}$, where $\sim$ is the equivalence relation: $(s,t)\sim(u,v)$ if there exists $r\in S$ such that $s+v+r=t+u+r$. This is indeed an abelian group with zero element $(s,s)$ (any $s\in S$), inverse $-(s,t)=(t,s)$ and addition given by $(s,t)+(u,v)=(s+u,t+v)$. It is common to use the suggestive notation $t-s$ for $(t,s)$.

The Grothendieck group construction is a functor from the category of abelian semigroups to the category of abelian groups. A morphism $f\colon S\to T$ induces a morphism $K(f)\colon K(S)\to K(T)$ which sends an element $(s^{+},s^{-})\in K(S)$ to $(f(s^{+}),f(s^{-}))\in K(T)$.

###### Example 1

Let $(\mathbb{N},+)$ be the semigroup of natural numbers with composition given by addition. Then $K(\mathbb{N},+)=\mathbb{Z}$.

###### Example 2

Let $(\mathbb{Z}-\{0\},\times)$ be the semigroup of non-zero integers with composition given by multiplication. Then $K(\mathbb{Z}-\{0\},\times)=(\mathbb{Q}-\{0\},\times)$.

###### Example 3

Let $G$ be an abelian group; then $K(G)\cong G$ via the identification $(g,h)\leftrightarrow g-h$ (or $(g,h)\leftrightarrow gh^{-1}$ if $G$ is multiplicative).

Let $C$ be an (essentially small) symmetric monoidal category. Its Grothendieck group is $K([C])$, i.e. the Grothendieck group of the isomorphism classes of objects of $C$.

Title: Grothendieck group. Canonical name: GrothendieckGroup. Date of creation: 2013-03-22. Owner: mhale (572). Entry type: Definition. Classification: msc 16E20, msc 13D15, msc 18F30. Synonym: group completion. Related topics: AlgebraicKTheory, KTheory, AlgebraicTopology, GrothendieckCategory.
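For intuition, the construction of Example 1 can be sketched in a few lines of Python, with a pair (s, t) standing for the formal difference t − s. The function names are ours; in the cancellative semigroup (ℕ, +) the existential witness r in the equivalence relation is unnecessary.

```python
# Sketch of K(N, +): pairs (s, t) of naturals represent t - s.
def equivalent(p, q):
    # (s, t) ~ (u, v) iff s + v + r == t + u + r for some r;
    # over (N, +) this reduces to s + v == t + u.
    s, t = p
    u, v = q
    return s + v == t + u

def add(p, q):
    # (s, t) + (u, v) = (s + u, t + v)
    return (p[0] + q[0], p[1] + q[1])

def neg(p):
    # -(s, t) = (t, s)
    return (p[1], p[0])

def to_int(p):
    # the isomorphism K(N, +) -> Z sends (s, t) to t - s
    s, t = p
    return t - s

zero = (0, 0)
three = (0, 3)
# three plus its inverse is equivalent to the zero element:
assert equivalent(add(three, neg(three)), zero)
```

For instance `to_int(add((2, 7), (4, 1)))` computes (7 − 2) + (1 − 4) = 2, mirroring integer addition.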
https://zbmath.org/?q=an%3A0559.53023
## An example of an almost Kähler manifold which is not Kählerian. (English) Zbl 0559.53023

The author introduces a 4-dimensional compact homogeneous space $$M=G/\Gamma$$ where G is a certain connected Lie group and $$\Gamma$$ a discrete subgroup. A metric and a compatible almost complex structure are defined on M. It is possible to prove that M is the homogeneous space corresponding to the manifold defined by W. Thurston [Proc. Am. Math. Soc. 55, 467-468 (1976; Zbl 0324.53031)]. This manifold is shown to be an almost Kähler manifold which is not Kählerian. Finally the author studies the curvature of M.

Reviewer: S. S. Singh

### MSC:

53C15 General geometric structures on manifolds (almost complex, almost product structures, etc.)
53C30 Differential geometry of homogeneous manifolds

Zbl 0324.53031
https://www.metaculus.com/questions/2514/ragnar%25C3%25B6k-question-series-if-a-global-biological-catastrophe-occurs-will-it-reduce-the-human-population-by-95-or-more/
# Synth-bio GC to cause (near) extinction?

### Question

No single disease currently exists that combines the worst-case levels of transmissibility, lethality, resistance to therapies, and global reach. But we know that the worst-case attributes can be realized independently. For example, some diseases exhibit nearly a 100% case fatality ratio in the absence of treatment, such as rabies or septicemic plague. The 1918 flu has a track record of spreading to virtually every human community worldwide. Chickenpox and HSV-1 can reportedly reach over 95% of a given population.

An informal survey at the 2008 Oxford Global Catastrophic Risk Conference asked participants to estimate the chance that disasters of different types would occur before 2100. Participants had a median risk estimate of 0.05% that a natural pandemic would lead to human extinction by 2100, and a median risk estimate of 2% that an "engineered" pandemic would lead to extinction by 2100. Moreover, previous literature has found that casualty numbers from terrorism and warfare follow a power law distribution, including terrorism from WMDs.
Millett and Snyder-Beattie have performed a naive power law extrapolation to estimate the chance of an existential biological disaster:

Past studies have estimated this ratio for terrorism using biological and chemical weapons to be about 0.5 for 1 order of magnitude, meaning that an attack that kills $10^x$ people is about 3 times less likely ($10^{0.5}$) than an attack that kills $10^{x-1}$ people (a concrete example is that attacks with more than 1,000 casualties, such as the Aum Shinrikyo attacks, will be about 30 times less probable than an attack that kills a single individual). Extrapolating the power law out, we find that the probability that an attack kills more than 5 billion is $(5\ \text{billion})^{-0.5}$ or 0.000014. Assuming 1 attack per year (extrapolated from the current rate of bio-attacks) and assuming that only 10% of attacks that kill more than 5 billion eventually lead to extinction (due to the breakdown of society, or other knock-on effects), we get an annual existential risk of 0.0000014 (or $1.4 \times 10^{-6}$).

In the first part of the Ragnarök Question Series, we asked the question If a global catastrophe occurs, will it be due to biotechnology or bioengineered organisms? Now it is asked,

Given that a biological global catastrophe occurs that reduces the global population by at least 10% by 2100, will the global population decline more than 95% relative to the pre-catastrophe population?

The question resolves positive if such a global biological catastrophe does occur and the post-catastrophe global population is less than 5% of the pre-catastrophe population. The question resolves ambiguous if a global biological catastrophe that claims at least 10% of the global population (in any period of 5 years or less) does not occur. The question resolves negative if a global biological catastrophe occurs that claims at least 10% of the global population (in any period of 5 years or less) but the post-catastrophe population remains above 5% of the pre-catastrophe population.
A biological catastrophe is here defined as a catastrophe resulting from the deployment of biotechnologies or bioengineered organisms (including viruses) that claims at least 10% of the global population in any period of 5 years or less before 2100. Moreover, the catastrophe must be generally believed very unlikely in a counterfactual world with little or no biotechnological intervention but otherwise similar to ours.

This question is part of the Ragnarök Question Series; see the other questions in the series, as well as our questions on whether a global catastrophe will occur by 2100, and if so, which. All results are analysed and will be updated periodically.
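The naive power-law arithmetic quoted above can be reproduced directly. This is a sketch of the cited Millett and Snyder-Beattie calculation, not an endorsement of its assumptions; the variable names are ours.

```python
# Power-law extrapolation for a bio-attack killing > 5 billion people.
alpha = 0.5                            # assumed power-law slope per order of magnitude
p_attack_kills_5bn = (5e9) ** -alpha   # P(a given attack kills > 5 billion) ~ 1.4e-5
attacks_per_year = 1                   # assumed attack rate
p_extinction_given_kill = 0.10         # assumed fraction of such attacks ending in extinction

annual_x_risk = attacks_per_year * p_attack_kills_5bn * p_extinction_given_kill
# annual_x_risk ~ 1.4e-6, matching the quoted figure
```

Note how sensitive the result is to the slope: the entire estimate rests on extrapolating `alpha = 0.5` roughly nine orders of magnitude beyond the observed data.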
http://category-theory.mitpress.mit.edu/chapter004.html
# Categories and Functors, Without Admitting It In this chapter we begin to use our understanding of sets to examine more interesting mathematical worlds, each of which organizes understanding of a certain kind of domain. For example, monoids organize thoughts about agents acting on objects. Groups are monoids except restricted to only allow agents to act in reversible ways. We then study graphs, which are systems of nodes and arrows that can capture ideas like information flow through a network or model connections between building blocks in a material. We discuss orders, which can be used to study taxonomies or hierarchies. Finally we take a mathematical look at databases, which actually subsume everything else in the chapter. Databases are connection patterns for structuring information. Everything studied in this chapter is an example of a category (see Chapter 5). So is Set, the category of sets studied in Chapters 2 and 3. One way to think of a category is as a bunch of objects and a connection pattern between them. The category Set has individual sets as objects, with functions serving as the connections between them. But there is a certain self-similarity here—each set, thought of as a bag of dots, can itself be viewed as a category: the objects inside it are just disconnected. Each set is a category, but there is also a category of sets. In this way, sets have an interior view and an exterior view, as do all the categories in this chapter. Each monoid is a category, but there is also a category of monoids. However, the word category is not used much in this chapter. It seems preferable to let the ideas arise as interesting structures in their own right before explaining how everything fits into a single framework. # 4.1   Monoids A common way to interpret phenomena around us is to say that agents are acting on objects. For example, the user of a computer drawing program acts on the canvas in certain prescribed ways. 
Choices of actions from an available list can be performed in sequence to transform one image into another. As another example, one might investigate the notion that time acts on the position of hands on a clock in a prescribed way. A first rule for actions is captured in the following slogan.

Slogan 4.1.0.14. The performance of a sequence of several actions is itself the performance of an action—a more complex action, but an action nonetheless.

Mathematical objects called monoids and groups are tasked with encoding the agent’s perspective, i.e., what the agent can do, and what happens when she does a sequence of actions in succession. A monoid can be construed as a set of actions together with a formula that encodes how a sequence of actions is itself considered an action. A group is the same as a monoid except that every action is required to be reversible.

## 4.1.1   Definition and examples

Definition 4.1.1.1 (Monoid). A monoid is a sequence (M, e, ⋆), where M is a set, e ∈ M is an element, and ⋆: M × M → M is a function, such that the following monoid laws hold for all m, n, p ∈ M:

• m ⋆ e = m.
• e ⋆ m = m.
• (m ⋆ n) ⋆ p = m ⋆ (n ⋆ p).

We refer to e as the unit element and to ⋆ as the multiplication formula for the monoid. We call the first two rules unit laws and the third rule the associativity law for monoids.

Remark 4.1.1.2. To be pedantic, the conditions from Definition 4.1.1.1 should be stated

• ⋆(m, e) = m.
• ⋆(e, m) = m.
• ⋆(⋆(m, n), p) = ⋆(m, ⋆(n, p)).

The way they are written in Definition 4.1.1.1 is called infix notation. Given a function ⋆: A × B → C, we may write a ⋆ b rather than ⋆(a, b).

Example 4.1.1.3 (Additive monoid of natural numbers). Let M = ℕ be the set of natural numbers. Let e = 0, and let ⋆: M × M → M denote addition, so that ⋆(4, 18) = 4 ⋆ 18 = 22. Then the equations m ⋆ 0 = m and 0 ⋆ m = m hold, and (m ⋆ n) ⋆ p = m ⋆ (n ⋆ p) because, as we learned in grade school, addition is associative. By assigning e and ⋆ in this way, we have given ℕ the structure of a monoid.
We usually denote it (ℕ, 0, +).

Remark 4.1.1.4. Sometimes we are working with a monoid (M, e, ⋆), and the unit e and multiplication ⋆ are somehow clear from context. In this case we might refer to the set M as though it were the whole monoid. For example, if we were discussing the monoid from Example 4.1.1.3, we might refer to it as ℕ. The danger comes because sets may have multiple monoid structures (see Exercise 4.1.1.6).

Example 4.1.1.5 (Nonmonoid). If M is a set, we might call a function f : M × M → M an operation on M. For example, if M = ℕ is the set of natural numbers, we can consider the operation f : ℕ × ℕ → ℕ called exponentiation, e.g., f(2, 5) = 2 × 2 × 2 × 2 × 2 = 32 and f(7, 2) = 49. This is indeed an operation, but it is not the multiplication formula for any monoid. First, there is no possible unit. Trying the obvious choice of e = 1, we see that a^1 = a (good), but that 1^a = 1 (bad: we need it to be a). Second, this operation is not associative because in general a^(b^c) ≠ (a^b)^c. For example, 2^(1^2) = 2, but (2^1)^2 = 4.

One might also attempt to consider an operation f : M × M → M that upon closer inspection is not even an operation. For example, if M = ℤ, then exponentiation is not even an operation. Indeed, f(2, −1) = 2^(−1) = 1/2, and this is not an integer. To have a function f : M × M → M, it is required that every element of the domain—in this case every pair of integers—have an output under f. So there is no exponentiation function on ℤ.

Exercise 4.1.1.6. Let M = ℕ be the set of natural numbers. Taking e = 1 as the unit, devise a formula for ⋆ that gives ℕ the structure of a monoid.

Solution 4.1.1.6. Let ⋆ denote the usual multiplication of natural numbers, e.g., 5 ⋆ 7 = 35. Then for any m, n, p ∈ ℕ, we have 1 ⋆ m = m ⋆ 1 = m and (m ⋆ n) ⋆ p = m ⋆ (n ⋆ p), as required.

Exercise 4.1.1.7. Find an operation on the set M = {1, 2, 3, 4}, i.e., a legitimate function f : M × M → M, such that f cannot be the multiplication formula for a monoid on M.
That is, either it is not associative or no element of M can serve as a unit.

Exercise 4.1.1.8. In both Example 4.1.1.3 and Exercise 4.1.1.6, the monoids (M, e, ⋆) satisfied an additional rule called commutativity, namely, m ⋆ n = n ⋆ m for every m, n ∈ M. There is a monoid (M, e, ⋆) in linear algebra that is not commutative; if you have background in linear algebra, what monoid (M, e, ⋆) might I be referring to?

Exercise 4.1.1.9. Recall the notion of commutativity for monoids from Exercise 4.1.1.8.

a. What is the smallest set M that you can give the structure of a noncommutative monoid?

b. What is the smallest set M that you can give the structure of a monoid?

Example 4.1.1.10 (Trivial monoid). There is a monoid with only one element, M = ({e}, e, ⋆), where ⋆: {e} × {e} → {e} is the unique function. We call this monoid the trivial monoid and sometimes denote it 1.

Example 4.1.1.11. Suppose that (M, e, ⋆) is a monoid. Given elements m1, m2, m3, m4, there are five different ways to parenthesize the product m1 ⋆ m2 ⋆ m3 ⋆ m4, and the associativity law for monoids will show them all to be the same. We have

((m1 ⋆ m2) ⋆ m3) ⋆ m4 = (m1 ⋆ m2) ⋆ (m3 ⋆ m4) = (m1 ⋆ (m2 ⋆ m3)) ⋆ m4 = m1 ⋆ (m2 ⋆ (m3 ⋆ m4)) = m1 ⋆ ((m2 ⋆ m3) ⋆ m4).

In fact, the product of any list of monoid elements is the same, regardless of parenthesization. Therefore, we can unambiguously write m1 ⋆ m2 ⋆ m3 ⋆ m4 ⋆ m5 rather than any given parenthesization of it. A substantial generalization of this is known as the coherence theorem and can be found in Mac Lane [29].

### 4.1.1.12   Free monoids and finitely presented monoids

Definition 4.1.1.13. Let X be a set. A list in X is a pair (n, f), where n ∈ ℕ is a natural number (called the length of the list) and f : {1, 2, …, n} → X is a function. We may denote such a list

(n, f) = [f(1), f(2), …, f(n)].

The set of lists in X is denoted List(X). The empty list is the unique list in which n = 0; we may denote it [ ].
Given an element x ∈ X, the singleton list on x is the list [x]. Given a list L = (n, f) and a number i ∈ ℕ with i ⩽ n, the ith entry of L is the element f(i) ∈ X. Given two lists L = (n, f) and L′ = (n′, f′), define the concatenation of L and L′, denoted L ++ L′, to be the list (n + n′, f ++ f′), where f ++ f′ : {1, 2, …, n + n′} → X is given on 1 ⩽ i ⩽ n + n′ by

$(f ++ f')(i) := \begin{cases} f(i) & \text{if } 1 \le i \le n, \\ f'(i-n) & \text{if } n+1 \le i \le n+n'. \end{cases}$

Example 4.1.1.14. Let X = {a, b, c, …, z}. The following are elements of List(X):

[a, b, c],   [p],   [p, a, a, a, p],   [ ],   ….

The concatenation of [a, b, c] and [p, a, a, a, p] is [a, b, c, p, a, a, a, p]. The concatenation of any list with [ ] is that list itself.

Definition 4.1.1.15. Let X be a set. The free monoid generated by X is the sequence FX ≔ (List(X), [ ], ++), where List(X) is the set of lists of elements in X, [ ] ∈ List(X) is the empty list, and ++ is the operation of list concatenation. We refer to X as the set of generators for the monoid FX.

Exercise 4.1.1.16. Let {☺} denote a one-element set.

a. What is the free monoid generated by the set {☺}?

b. What is the free monoid generated by ∅?

An equivalence relation that interacts well with the multiplication formula of a monoid is called a congruence on that monoid.

Definition 4.1.1.17. Let ℳ ≔ (M, e, ⋆) be a monoid. A congruence on ℳ is an equivalence relation ∼ on M, such that for any m, m′ ∈ M and any n, n′ ∈ M, if m ∼ m′ and n ∼ n′, then m ⋆ n ∼ m′ ⋆ n′.

Proposition 4.1.1.18. Suppose that ℳ ≔ (M, e, ⋆) is a monoid. Then the following facts hold:

1. Given any relation R ⊆ M × M, there is a smallest congruence S containing R. We call S the congruence generated by R.

2.
If R = ∅ and ∼ is the congruence it generates, then there is an isomorphism M ≅ M/∼.

3. Suppose that ∼ is a congruence on ℳ. Then there is a monoid structure ℳ/∼ on the quotient set M/∼, compatible with ℳ.

Proof.

1. Let L_R be the set of all congruences on ℳ that contain R. Using reasoning similar to that used in the proof of Proposition 3.3.1.7, one sees that L_R is nonempty and that its intersection, S = ⋂_{ℓ ∈ L_R} ℓ, serves.

2. If R = ∅, then the congruence it generates is the minimal equivalence relation {(m, m) | m ∈ M} ⊆ M × M. We have an isomorphism M ≅ M/∼ by Exercise 3.3.1.9.

3. Let Q: M → M/∼ be the quotient function (as in Definition 3.3.1.1); note that it is surjective. We first want to give a monoid structure on M/∼, i.e., we need a unit element e′ and a multiplication formula ⋆′. Let e′ ≔ Q(e). Suppose given p, q ∈ M/∼, and let m, n ∈ M respectively be representatives, so Q(m) = p and Q(n) = q. Define p ⋆′ q ≔ Q(m ⋆ n). If we chose a different pair of representatives Q(m′) = p and Q(n′) = q, then we would have m ∼ m′ and n ∼ n′, so (m ⋆ n) ∼ (m′ ⋆ n′), which implies Q(m ⋆ n) = Q(m′ ⋆ n′); hence the composition formula is well defined. It is easy to check that ℳ/∼ ≔ (M/∼, e′, ⋆′) is a monoid. It follows that Q: M → M/∼ extends to a monoid homomorphism Q: ℳ → ℳ/∼, as in Definition 4.1.4.1, which makes precise the compatibility claim.

Definition 4.1.1.19 (Presented monoid). Let G be a finite set, and let R ⊆ List(G) × List(G) be a relation. The monoid presented by generators G and relations R is the monoid ℳ = (M, e, ⋆), defined as follows. Begin with the free monoid FG = (List(G), [ ], ++) generated by G. Let ∼ denote the congruence on FG generated by R, as in Proposition 4.1.1.18, and define ℳ ≔ FG/∼. Each element r ∈ R is of the form r = (ℓ, ℓ′) for lists ℓ, ℓ′ ∈ List(G).
For historical reasons, we call each of the resulting expressions ℓ = ℓ′ a relation in R.

Slogan 4.1.1.20. A presented monoid is a set of buttons you can press and some facts about when different button sequences have the same results.

Remark 4.1.1.21. Every free monoid is a presented monoid, because we can just take the set of relations to be empty.

Example 4.1.1.22. Let G = {a, b, c, d}. Think of these as buttons that can be pressed. The free monoid FG = (List(G), [ ], ++) is the set of all ways of pressing buttons, e.g., pressing a, then a, then c, then c, then d corresponds to the list [a, a, c, c, d]. The idea of presented monoids is that we can assert that pressing [a, a, c] always gives the same result as pressing [d, d] and that pressing [a, c, a, c] is the same thing as doing nothing. In this case, the relation R ⊆ List(G) × List(G) consists of the pairs

([a, a, c], [d, d])   and   ([a, c, a, c], [ ]).

As in Proposition 4.1.1.18, the relation R generates a congruence ∼ on List(G), and this can be complex. For example, would you guess that [b, c, b, d, d, a, c, a, a, c, d] ∼ [b, c, b, a, d, d, d]? Here is the calculation in M = List(G)/∼ :

[b, c, b, d, d, a, c, a, a, c, d]
= [b, c, b] ⋆ [d, d] ⋆ [a, c, a, a, c, d]
= [b, c, b, a] ⋆ [a, c, a, c] ⋆ [a, a, c, d]
= [b, c, b, a, a, a, c, d]
= [b, c, b, a] ⋆ [a, a, c] ⋆ [d]
= [b, c, b, a, d, d, d].

Exercise 4.1.1.23. Let K ≔ {BS, a, b, c, …, z}, a set having 27 elements. Suppose one thinks of BS ∈ K as the backspace key and the elements a, b, …, z ∈ K as the letter keys on a keyboard. Then the free monoid List(K) is not quite appropriate for modeling the keyboard because we want, e.g., [a, b, d, BS] = [a, b].

a. Choose a set of relations for which the monoid presented by generators K and the chosen relations is appropriate to this application.

b. Under your relations, how does the singleton list [BS] compare with the empty list [ ]? Is that suitable?

### 4.1.1.24   Cyclic monoids

Definition 4.1.1.25.
A monoid is called cyclic if it has a presentation involving only one generator. Example 4.1.1.26. Let Q be a symbol; we look at some cyclic monoids generated by {Q}. With no relations, the monoid would be the free monoid on one generator and would have underlying set {[ ], [Q], [Q, Q], [Q, Q, Q], …}, with unit element [ ] and multiplication given by concatenation (e.g., [Q, Q, Q] ++ [Q, Q] = [Q, Q, Q, Q, Q]). This is just ℕ, the additive monoid of natural numbers. With the really strong relation [Q] ∼ [ ] we would get the trivial monoid, as in Example 4.1.1.10. Another possibility is given in the first part of Example 4.1.2.3, where the relation Q12 ∼ [ ] is used, where Q12 is shorthand for [Q, Q, Q, Q, Q, Q, Q, Q, Q, Q, Q, Q]. This monoid has 12 elements. Example 4.1.1.27. Consider the cyclic monoid with generator Q and relation Q7 = Q4. This monoid has seven elements, $\left\{{Q}^{0},{Q}^{1},{Q}^{2},{Q}^{3},{Q}^{4},{Q}^{5},{Q}^{6}\right\},$ where Q0 = e and Q1 = Q. As an example of the multiplication formula, we have: ${Q}^{6}\star {Q}^{5}={Q}^{7}\star {Q}^{4}={Q}^{4}\star {Q}^{4}={Q}^{7}\star Q={Q}^{4}\star Q={Q}^{5}.$ One might depict the cyclic monoid with relation Q7 = Q4 as follows: To see the mathematical source of this intuitive depiction, see Example 7.2.1.19. Exercise 4.1.1.28. Classify all the cyclic monoids up to isomorphism. That is, construct a naming system such that every cyclic monoid can be given a name in your system, no two nonisomorphic cyclic monoids have the same name, and no name exists in the system unless it refers to a cyclic monoid. Hint: One might see a pattern in which the three monoids in Example 4.1.1.26 correspond respectively to ∞, 1, and 12, and think that cyclic monoids can be classified by (i.e., systematically named by elements of) the set ℕ ⊔ {∞}. That idea is on the right track, but it is not complete. Solution 4.1.1.28. Cyclic monoids are either finite or infinite.
The free monoid on one generator, (ℕ, 0, +), is the only infinite cyclic monoid, because once one imposes a relation Qm ∼ Qn on List({Q}) for some n > m, it is ensured that there are only finitely many elements (in fact, n-many). Finite cyclic monoids can be drawn as backward σ’s, with varying loop lengths and total lengths. The finite cyclic monoids can be classified by the set $\mathit{\text{FCM}}≔\left\{\left(n,k\right)\in ℕ×ℕ|1⩽k⩽n\right\}.$ For each (n, k) ∈ FCM, there is a cyclic monoid with n elements and a loop of length k. For example, we can draw (8, 6) and (5, 1) respectively as How do these pictures correspond to monoids? The nodes represent elements, so (8, 6) has eight elements. The unit element is the leftmost node (the only one with no arrow pointing to it). Each node is labeled by the length of the shortest path from the unit (so 0 is the unit). To multiply m ⋆ n, we see where the path of length m + n, starting at 0, ends up. So in the cyclic monoid of type (8, 6), we have 4 + 4 = 2, whereas in (5, 1), we have 4 + 4 = 4. ## 4.1.2   Monoid actions Definition 4.1.2.1 (Monoid action). Let (M, e, ⋆) be a monoid, and let S be a set. An action of (M, e, ⋆) on S, or simply an action of M on S, or an M action on S, is a function ↷: M × S → S such that the following monoid action laws hold for all m, n ∈ M and all s ∈ S: • e ↷ s = s • m ↷ (n ↷ s) = (m ⋆ n) ↷ s. Remark 4.1.2.2. To be pedantic (and because it is sometimes useful), we may decide not to use infix notation. That is, we may rewrite ↷ as α: M × S → S and restate the conditions from Definition 4.1.2.1 as • α(e, s) = s; • α(m, α(n, s)) = α(m ⋆ n, s). Example 4.1.2.3. Let S = {0, 1, 2, … , 11}, and let N = (ℕ, 0, +) be the additive monoid of natural numbers (see Example 4.1.1.3). We define a function ↷: ℕ × S → S by taking a pair (n, s) to the remainder when n + s is divided by 12. For example, 4 ↷ 2 = 6 and 8 ↷ 9 = 5. This function has the structure of a monoid action because the monoid laws from Definition 4.1.2.1 hold.
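This clock action can be written out in a few lines of Python. The sketch below is ours (the function name `act` stands in for the infix ↷); it spot-checks the two monoid action laws on small samples:

```python
# The additive monoid (N, 0, +) acting on S = {0, 1, ..., 11}:
# n acts on s by taking the remainder of n + s upon division by 12.
def act(n, s):
    return (n + s) % 12

# Examples from the text: 4 acts on 2 giving 6, and 8 acts on 9 giving 5.
assert act(4, 2) == 6
assert act(8, 9) == 5

# Unit law: 0 acts as the identity on every s in S.
assert all(act(0, s) == s for s in range(12))

# Multiplication law: m acting after n equals m + n acting at once.
assert all(act(m, act(n, s)) == act(m + n, s)
           for m in range(30) for n in range(30) for s in range(12))
```

The same shape of check works for any finite monoid action given by a formula.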
Similarly, let T denote the set of points on a circle, elements of which are denoted by a real number in the interval [0, 12), i.e., $T=\left\{x\in ℝ|0⩽x<12\right\},$ and let R = (ℝ, 0, +) denote the additive monoid of real numbers. Then there is an action ↷: ℝ × T → T, similar to the preceding one (see Exercise 4.1.2.4). One can think of this as an action of the monoid of time on the clock. Here T is the set of positions at which the hour hand may be pointing. Given any number r ∈ R, we can go around the clock by r many hours and get a new hour-hand position. For example, 7.25 ↷ 8.5 = 3.75, meaning that 7.25 hours after 8:30 is 3:45. Exercise 4.1.2.4. Warning: This exercise is abstract. a. Realize the set T ≔ [0, 12) ⊆ ℝ as a coequalizer of some pair of arrows ℝ ⇉ ℝ. b. For any x ∈ ℝ, realize the mapping x+: T → T, implied by Example 4.1.2.3, using the universal property for coequalizers. c. Prove that it is an action. Solution 4.1.2.4. a. Let f : ℝ → ℝ be given by f(x) = x + 12. Then id and f are a pair of arrows ℝ → ℝ, and their coequalizer is T. b. Let x ∈ ℝ be a real number. We want a function x+: T → T, but we begin with a function (by the same name) x+: ℝ → ℝ, given by adding x to any real number. The following solid-arrow diagram commutes because 12 + x = x + 12 for any x ∈ ℝ: By the universal property for coequalizers, there is a unique dotted arrow T → T making the diagram commute, and this is x+: T → T. It represents the action “add x ∈ ℝ hours to clock position t ∈ T.” c. Clearly, if x = 0, then the x+ function is id, and it follows from the universal property that 0+ = idT. We see that x + (y + t) = (x + y) + t using the commutative diagram. The universal property for coequalizers implies the result. Exercise 4.1.2.5. Let B denote the set of buttons (or positions) of a video game controller (other than, say, “start” and “select”), and consider the free monoid List(B) on B. a. What would it mean for List(B) to act on the set of states of some (single-player) video game?
Imagine a video game G′ that uses the controller, but for which List(B) would not be said to act on the states of G′. Now imagine a simple game G for which List(B) would be said to act. Describe the games G and G′. b. Can you think of a state s of G and two distinct elements ℓ, ℓ′ ∈ List(B) such that ℓ ↷ s = ℓ′ ↷ s? c. In video game parlance, what would you call a monoid element b ∈ B such that for every state s of G, one has b ↷ s = s? d. In video game parlance, what would you call a state s ∈ S such that for every sequence of buttons ℓ ∈ List(B), one has ℓ ↷ s = s? e. Define ℝ>0 to be the set of positive real numbers, and consider the free monoid M ≔ List(ℝ>0 × B). An element of this monoid can be interpreted as a list in which each entry is a button b ∈ B being pressed after a wait time t ∈ ℝ>0. Can you find a game that uses the controller but for which M does not act? Application 4.1.2.6. Let f : ℝ → ℝ be a differentiable function of which we want to find roots (points x ∈ ℝ such that f(x) = 0). Let x0 ∈ ℝ be a starting point. For any n ∈ ℕ, we can apply Newton’s method to xn to get ${x}_{n+1}={x}_{n}-\frac{f\left({x}_{n}\right)}{f\prime \left({x}_{n}\right)}.$ This is a monoid (namely, ℕ, the free monoid on one generator) acting on a set (namely, ℝ). However, Newton’s method can get into trouble. For example, at a critical point it causes division by zero, and sometimes it can oscillate or overshoot. In these cases we want to perturb the current point a bit to the left or right. To have these actions available to us, we would add “perturb” elements to our monoid. Now we have more available actions at any point, but at the cost of using a more complicated monoid. When publishing an experimental finding, there may be some deep methodological questions that are not considered suitably important to mention.
For example, one may not publish the kind of solution-finding method (e.g., Newton’s method or Runge-Kutta) that was used, or the set of available actions, e.g., what kinds of perturbation were used by the researcher. However, these may actually influence the reproducibility of results. By using a language such as that of monoid actions, we can align our data model with our unspoken assumptions about how functions are analyzed. Remark 4.1.2.7. A monoid is useful for understanding how an agent acts on the set of states of an object, but there is only one context for action—at any point, all actions are available. In reality, it is often the case that contexts can change and different actions are available at different times. For example, on a computer the commands available in one application have no meaning in another. This points us to categories, which are generalizations of monoids (see Chapter 5). ### 4.1.2.8   Monoid actions as ologs If monoids are understood in terms of how they act on sets, then it is reasonable to think of them in terms of ologs. In fact, the ologs associated to monoids are precisely those ologs that have exactly one type (and possibly many arrows and commutative diagrams). Example 4.1.2.9. This example shows how to associate an olog to a monoid action. Consider the monoid M generated by the set {u, d, r}, standing for “up, down, right,” and subject to the relations $\begin{array}{ccccc}\left[u,d\right]\sim \left[\right],& \left[d,u\right]\sim \left[\right],& \left[u,r\right]=\left[r,u\right],& \mathrm{\text{and}}& \left[d,r\right]=\left[r,d\right].\end{array}$ We might imagine that M acts on the set of positions for a character in an old video game. In that case the olog corresponding to this action should look something like Figure 4.1. ### 4.1.2.10   Finite state machines According to Wikipedia, a deterministic finite state machine is a quintuple (Σ, S, s0, δ, F), where 1. 
Σ is a finite nonempty set of symbols, called the input alphabet; 2. S is a finite, nonempty set, called the state set; 3. δ : Σ × S → S is a function, called the state-transition function; 4. s0 ∈ S is an element, called the initial state; 5. F ⊆ S is a subset, called the set of final states. Here we focus on the state transition function δ, by which the alphabet Σ acts on the set S of states (see Figure 4.2). The following proposition expresses the notion of finite state automata in terms of free monoids and their actions on finite sets. Proposition 4.1.2.11. Let Σ, S be finite nonempty sets. Giving a function δ : Σ × S → S is equivalent to giving an action of the free monoid List(Σ) on S. Proof. The proof is sketched here, leaving two details for Exercise 4.1.2.13. By Definition 4.1.2.1, we know that a function ϵ: List(Σ) × S → S constitutes an action of the monoid List(Σ) on the set S if and only if, for all s ∈ S, we have ϵ([ ], s) = s, and for any two elements m, m′ ∈ List(Σ), we have ϵ(m, ϵ(m′, s)) = ϵ(m ++ m′, s), where m ++ m′ is the concatenation of lists. Let A ≔ {ϵ: List(Σ) × S → S | ϵ constitutes an action of List(Σ) on S}. We need to prove that there is an isomorphism of sets $\varphi :A\stackrel{\cong }{\to }{\text{Hom}}_{\mathbf{\text{set}}}\left(\Sigma ×S,S\right).$ Given an element ϵ: List(Σ) × S → S in A, define ϕ(ϵ) on an element (σ, s) ∈ Σ × S by ϕ(ϵ)(σ, s) ≔ ϵ([σ], s), where [σ] is the one-element list. We now define $\psi :{\text{Hom}}_{\mathbf{\text{set}}}\left(\Sigma ×S,S\right)\to A.$ Given an element f ∈ HomSet(Σ × S, S), define ψ(f): List(Σ) × S → S on a pair (L, s) ∈ List(Σ) × S, where L = [ℓ1, … , ℓn], as follows. By induction: if n = 0, put ψ(f)(L, s) ≔ s; if n ⩾ 1, let ∂L ≔ [ℓ1, … , ℓn−1] and put ψ(f)(L, s) ≔ ψ(f)(∂L, f(ℓn, s)). One checks easily that ψ(f) satisfies the two action rules above, making it an action of List(Σ) on S. It is also easy to check that ϕ and ψ are mutually inverse, completing the proof (see Exercise 4.1.2.13). The idea of this section is summed up as follows: Slogan 4.1.2.12.
A finite state machine is an action of a free monoid on a finite set. Exercise 4.1.2.13. Consider the functions ϕ and ψ as defined in the proof of Proposition 4.1.2.11. a. Show that for any f : Σ × S → S, the map ψ(f): List(Σ) × S → S constitutes an action. b. Show that ϕ and ψ are mutually inverse functions (i.e., ϕψ = idHom(Σ×S,S) and ψϕ = idA). Solution 4.1.2.13. a. Let s ∈ S be an arbitrary element. By the base of the induction, ψ(f)([ ], s) = s, so ψ(f) satisfies the unit law. Now let L1, L2 ∈ List(Σ) be two lists with L = L1 ++ L2 their concatenation. We need to show that ψ(f)(L1, ψ(f)(L2, s)) = ψ(f)(L, s). We do this by induction on the length of L2. If |L2| = 0, then L = L1, and we have ψ(f)(L1, ψ(f)(L2, s)) = ψ(f)(L1, s) = ψ(f)(L, s). Now suppose the result is true for all lists of length |L2| − 1 ⩾ 0. We have ∂L = L1 ++ ∂L2, where ∂ removes the last entry of a nonempty list. If ℓ is the last entry of both L and L2, then we have $\begin{array}{cc}\hfill \psi \left(f\right)\left({L}_{1},\psi \left(f\right)\left({L}_{2},s\right)\right)=\psi \left(f\right)\left({L}_{1},\psi \left(f\right)\left(\partial {L}_{2},f\left(\ell ,s\right)\right)\right)& =\psi \left(f\right)\left(\partial L,f\left(\ell ,s\right)\right)\hfill \\ & =\psi \left(f\right)\left(L,s\right).\hfill \end{array}$ b. We first show that for f ∈ Hom(Σ × S, S), we have ϕψ(f) = f. To do so, we choose (σ, s) ∈ Σ × S, and the formulas for ϕ and ψ from the proof of Proposition 4.1.2.11 give $\varphi \left(\psi \left(f\right)\right)\left(\sigma ,s\right)=\psi \left(f\right)\left(\left[\sigma \right],s\right)=f\left(\sigma ,s\right).$ We next show that for ϵ ∈ A, we have ψϕ(ϵ) = ϵ. To do so, we choose (L, s) ∈ List(Σ) × S and show that ψ(ϕ(ϵ))(L, s) = ϵ(L, s). We do this by induction on the length n = |L| of L. If n = 0, then ψ(ϕ(ϵ))([ ], s) = s = ϵ([ ], s). We may now assume that n ⩾ 1 and that the result holds for ∂L. Let ℓ be the last entry of L.
We use the formulas for ϕ and ψ, and the fact that ϵ is an action, to get the following derivation: $\begin{array}{cc}\hfill \psi \left(\varphi \left(\mathit{ϵ}\right)\right)\left(L,s\right)=\psi \left(\varphi \left(\mathit{ϵ}\right)\right)\left(\partial L,\varphi \left(\mathit{ϵ}\right)\left(\ell ,s\right)\right)& =\psi \left(\varphi \left(\mathit{ϵ}\right)\right)\left(\partial L,\mathit{ϵ}\left(\left[\ell \right],s\right)\right)\hfill \\ & =\mathit{ϵ}\left(\partial L,\mathit{ϵ}\left(\left[\ell \right],s\right)\right)\hfill \\ & =\mathit{ϵ}\left(\partial L++\left[\ell \right],s\right)=\mathit{ϵ}\left(L,s\right).\hfill \end{array}$ ## 4.1.3   Monoid action tables Let M be a monoid generated by the set G = {g1, … , gm}, and with some relations, and suppose that α: M × SS is an action of M on a set S = {s1, … , sn}. We can represent the action α using an action table whose columns are the generators gG and whose rows are the elements of S. In each cell (row, col), where rowS and colG, we put the element α(col, row) ∈ S. Example 4.1.3.1 (Action table). If Σ and S are the sets from Figure 4.2, the displayed action of List(Σ) on S would be given by action table (4.1) Example 4.1.3.2 (Multiplication action table). Every monoid (M, e, ⋆) acts on itself by its multiplication formula, ⋆: M × MM. If G is a generating set for M, we can write the elements of G as the columns and the elements of M as rows, and call this a multiplication table. For example, let (ℕ, 1, *) denote the multiplicative monoid of natural numbers. The multiplication table is the usual multiplication table from grade school: Try to understand what is meant by this: “Applying column 2 and then column 2 returns the same thing as applying column 4.” Table (4.2) implicitly takes every element of ℕ as a generator (since there is a column for every natural number). 
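The passage from a transition function δ to a List(Σ)-action (the ψ of Proposition 4.1.2.11), and from there to an action table, can be sketched in Python. The transition function below is a hypothetical stand-in of ours, since the table of Figure 4.2 is not reproduced here:

```python
# Hypothetical transition function delta: Sigma x S -> S, with
# Sigma = {a, b} and S = {State 0, State 1, State 2}. (These
# particular transitions are made up, not those of Figure 4.2.)
states = ['State 0', 'State 1', 'State 2']
delta = {
    ('a', 'State 0'): 'State 1', ('b', 'State 0'): 'State 2',
    ('a', 'State 1'): 'State 2', ('b', 'State 1'): 'State 0',
    ('a', 'State 2'): 'State 2', ('b', 'State 2'): 'State 1',
}

def act(word, s):
    """psi(delta): the action of List(Sigma) on S, defined by induction
    as in the proof -- the empty list acts as the identity, and
    otherwise the last symbol acts first via delta."""
    if not word:
        return s
    return act(word[:-1], delta[(word[-1], s)])

# The action table: one row per state, one column per generator.
table = {s: {g: act([g], s) for g in ('a', 'b')} for s in states}

# Unit law, and one instance of the concatenation law
# act(L1 ++ L2, s) = act(L1, act(L2, s)):
assert all(act([], s) == s for s in states)
w1, w2 = ['a', 'b'], ['b', 'a', 'a']
assert all(act(w1 + w2, s) == act(w1, act(w2, s)) for s in states)
```

By Proposition 4.1.2.11, the concatenation check must pass for any choice of `delta`, not just this one.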
In fact, there is a smallest generating set for the monoid (ℕ, 1, *), so that every element of the monoid is a product of some combination of these generators, namely, the primes and 0. Exercise 4.1.3.3. Let ℕ be the additive monoid of natural numbers, let S = {0, 1, 2, … , 11}, and let Clock: ℕ × SS be the clock action given in Example 4.1.2.3. Using a small generating set for the monoid, write the corresponding action table. ## 4.1.4   Monoid homomorphisms A monoid (M, e, ⋆) involves a set, a unit element, and a multiplication formula. For two monoids to be comparable, their sets, unit elements, and multiplication formulas should be appropriately comparable. For example, the additive monoids ℕ and ℤ should be comparable because ℕ ⊆ ℤ is a subset, the unit elements in both cases are the same e = 0, and the multiplication formulas are both integer addition. Definition 4.1.4.1. Let $\mathcal{M}≔\left(M,e,\star \right)$ and $\mathcal{M}\prime ≔\left(M\prime ,e\prime ,\star \prime \right)$ be monoids. A monoid homomorphism f from $\mathcal{M}$ to $\mathcal{M}\prime$, denoted $f:\mathcal{M}\to \mathcal{M}\prime$, is a function $f:M\to M\prime$ satisfying two conditions: • f(e) = e′. • f(m1m2) = f(m1) ⋆′ f(m2), for all m1, m2M. The set of monoid homomorphisms from $\mathcal{M}$ to $\mathcal{M}\prime$ is denoted ${\text{Hom}}_{\mathbf{\text{Mon}}}\left(\mathcal{M},\mathcal{M}\prime \right)$. Example 4.1.4.2 (From ℕ to ℤ). As stated, the inclusion map i: ℕ → ℤ induces a monoid homomorphism (ℕ, 0, +) → (ℤ, 0, +) because i(0) = 0 and i(n1 + n2) = i(n1) + i(n2). Let i5 : ℕ → ℤ denote the function i5(n) = 5 * n, so i5(4) = 20. This is also a monoid homomorphism because i5(0) = 5*0 = 0 and i5(n1 + n2) = 5*(n1 + n2) = 5*n1 + 5*n2 = i5(n1) + i5(n2). Application 4.1.4.3. Let R = {a, c, g, u}, and let T = R3, the set of triplets in R. Let $\mathcal{R}=\text{List}\left(R\right)$ be the free monoid on R, and let $\mathcal{T}=\text{List}\left(T\right)$ denote the free monoid on T. 
There is a monoid homomorphism $F:\mathcal{T}\to \mathcal{R}$ given by sending t = (r1, r2, r3) to the list [r1, r2, r3]. If A is the set of amino acids and $\mathcal{A}=\text{List}\left(A\right)$ is the free monoid on A, the process of translation gives a monoid homomorphism $G:\mathcal{T}\to \mathcal{A}$, turning a list of RNA triplets into a polypeptide. But how do we go from a list of RNA nucleotides to a polypeptide, i.e., from $\mathcal{R}$ to $\mathcal{A}$? It seems that there is no good way to do this mathematically. So what is going wrong? The answer is that there should not be a monoid homomorphism $\mathcal{R}\to \mathcal{A}$ because not all sequences of nucleotides produce a polypeptide; for example, if the sequence has only two elements, it does not code for a polypeptide. There are several possible remedies to this problem. One is to take the image of $F:\mathcal{T}\to \mathcal{R}$, which is a submonoid $\mathcal{R}\prime \subseteq \mathcal{R}$. It is not hard to see that there is a monoid homomorphism $F\prime :\mathcal{R}\prime \to \mathcal{T}$, and we can compose it with G to get the desired monoid homomorphism $G○F\prime :\mathcal{R}\prime \to \mathcal{A}$. Example 4.1.4.4. Given any monoid $\mathcal{M}=\left(M,e,\star \right)$, there is a unique monoid homomorphism from $\mathcal{M}$ to the trivial monoid 1 (see Example 4.1.1.10). There is also a unique homomorphism $\underset{¯}{1}\to \mathcal{M}$ because a monoid homomorphism must send the unit to the unit. These facts together mean that between any two monoids $\mathcal{M}$ and $\mathcal{M}\prime$ we can always construct a homomorphism $\mathcal{M}\stackrel{!}{\to }\underset{¯}{1}\stackrel{!}{\to }\mathcal{M}\prime ,$ called the trivial homomorphism $\mathcal{M}\to \mathcal{M}\prime$. It sends everything in M to e′ ∈ M′. A homomorphism $\mathcal{M}\to \mathcal{M}\prime$ that is not trivial is called a nontrivial homomorphism. Proposition 4.1.4.5.
Let $\mathcal{M}=\left(ℤ,0,+\right)$ and $\mathcal{M}\prime =\left(ℕ,0,+\right)$. The only monoid homomorphism $f:\mathcal{M}\to \mathcal{M}\prime$ is trivial, i.e., it sends every element m ∈ ℤ to 0 ∈ ℕ. Proof. Let $f:\mathcal{M}\to \mathcal{M}\prime$ be a monoid homomorphism, and let n = f(1) and n′ = f(−1) in ℕ. Then we know that since 0 = 1 + (−1) in ℤ, we must have 0 = f(0) = f(1 + (−1)) = f(1) + f(−1) = n + n′ ∈ ℕ. But if n ⩾ 1, then this is impossible, so n = 0. Similarly, n′ = 0. Any element m ∈ ℤ can be written as m = 1 + 1 + ⋯ + 1 or as m = (−1) + (−1) + ⋯ + (−1), and it is easy to see that f(1) + f(1) + ⋯ + f(1) = 0 = f(−1) + f(−1) + ⋯ + f(−1). Therefore, f(m) = 0 for all m ∈ ℤ. Exercise 4.1.4.6. For any m ∈ ℤ, let im: ℕ → ℤ be the function im(n) = m * n, so, e.g., i−6(7) = −42. All such functions are monoid homomorphisms (ℕ, 0, +) → (ℤ, 0, +). Do any monoid homomorphisms (ℕ, 0, +) → (ℤ, 0, +) not come in this way? For example, what about using n ↦ (5n − 1) or n ↦ n² or some other function? Exercise 4.1.4.7. Let $\mathcal{M}≔\left(ℕ,0,+\right)$ be the additive monoid of natural numbers, let $\mathcal{N}=\left({ℝ}_{⩾0},0,+\right)$ be the additive monoid of nonnegative real numbers, and let $\mathcal{P}≔\left({ℝ}_{>0},1,*\right)$ be the multiplicative monoid of positive real numbers. Can you think of any nontrivial monoid homomorphisms (Example 4.1.4.4) of the following sorts: a. $f:\mathcal{M}\to \mathcal{N}?$ b. $g:\mathcal{M}\to \mathcal{P}?$ c. $h:\mathcal{N}\to \mathcal{P}?$ d. $i:\mathcal{N}\to \mathcal{M}?$ e. $j:\mathcal{P}\to \mathcal{N}?$ ### 4.1.4.8   Homomorphisms from free monoids Recall that (ℕ, 0, +) is the free monoid on one generator. It turns out that for any other monoid $\mathcal{M}=\left(M,e,\star \right)$, the set of monoid homomorphisms $ℕ\to \mathcal{M}$ is in bijection with the set M. This is a special case (in which G is a set with one element) of the following proposition. Proposition 4.1.4.9.
Let G be a set, let F (G) ≔ (List(G), [ ], ++) be the free monoid on G, and let $\mathcal{M}≔\left(M,e,\star \right)$ be any monoid. There is a natural bijection ${\text{Hom}}_{\mathbf{\text{Mon}}}\left(F\left(G\right),\mathcal{M}\right)\stackrel{\cong }{\to }{\text{Hom}}_{\mathbf{\text{set}}}\left(G,M\right).$ Proof. We provide a function $\varphi :{\text{Hom}}_{\mathbf{\text{Mon}}}\left(F\left(G\right),\mathcal{M}\right)\to {\text{Hom}}_{\mathbf{\text{set}}}\left(G,M\right)$ and a function $\psi {\text{:Hom}}_{\mathbf{\text{set}}}\left(G,M\right)\to {\text{Hom}}_{\mathbf{\text{Mon}}}\left(F\left(G\right),\mathcal{M}\right)$ and show that they are mutually inverse. Let us first construct ϕ. Given a monoid homomorphism $f:F\left(G\right)\to \mathcal{M}$ , we need to provide ϕ(f): GM. Given any gG, we define ϕ(f)(g) ≔ f([g]). Now let us construct ψ. Given p: GM, we need to provide $\psi \left(p\right):\text{List}\left(G\right)\to \mathcal{M}$ such that ψ(p) is a monoid homomorphism. For a list L = [g1, … , gn] ∈ List(G), define ψ(p)(L) ≔ p(g1) ⋆ ⋯ ⋆ p(gn) ∈ M. In particular, ψ(p)([ ]) = e. It is not hard to see that this is a monoid homomorphism. Also, ϕψ(p) = p for all p ∈ HomSet(G, M). We show that ψϕ(f) = f for all $f\in {\text{Hom}}_{\mathbf{\text{Mon}}}\left(F\left(G\right),\mathcal{M}\right)$ . Choose L = [g1, … , gn] ∈ List(G). Then $\psi \left(\varphi f\right)\left(L\right)=\left(\varphi f\right)\left({g}_{1}\right)\star \cdots \star \left(\varphi f\right)\left({g}_{n}\right)=f\left[{g}_{1}\right]\star \cdots \star f\left[{g}_{n}\right]=f\left(\left[{g}_{1},\dots ,{g}_{n}\right]\right)=f\left(L\right).$ Exercise 4.1.4.10. Let G = {a, b}, let $\mathcal{M}≔\left(M,e,\star \right)$ be any monoid, and let f : GM be given by f(a) = m and f(b) = n, where m, nM. 
If $\psi :{\text{Hom}}_{\mathbf{\text{set}}}\left(G,M\right)\to {\text{Hom}}_{\mathbf{\text{Mon}}}\left(F\left(G\right),\mathcal{M}\right)$ is the function constructed in the proof of Proposition 4.1.4.9 and L = [a, a, b, a, b], what is ψ(f)(L)? ### 4.1.4.11   Restriction of scalars A monoid homomorphism f : M → M′ (see Definition 4.1.4.1) ensures that the elements of M have a reasonable interpretation in M′; they act the same way in M′ as they did in M. If we have such a homomorphism f and we have an action α: M′ × S → S of M′ on a set S, then we have a method for allowing M to act on S as well. Namely, we take an element of M, send it to M′, and use that to act on S. In terms of functions, we define ∆f(α) to be the composite: After Proposition 4.1.4.12 we will know that ∆f(α): M × S → S is indeed a monoid action, and we say that it is given by restriction of scalars along f. Proposition 4.1.4.12. Let $\mathcal{M}≔\left(M,e,\star \right)$ and $\mathcal{M}\prime ≔\left(M\prime ,e\prime ,\star \prime \right)$ be monoids, $f:\mathcal{M}\to \mathcal{M}\prime$ a monoid homomorphism, S a set, and suppose that α: M′ × S → S is an action of $\mathcal{M}\prime$ on S. Then ∆f(α): M × S → S, as defined, is a monoid action as well. Proof. Refer to Remark 4.1.2.2. We assume α is a monoid action and want to show that ∆f(α) is too. We have ∆f(α)(e, s) = α(f(e), s) = α(e′, s) = s. We also have $\begin{array}{cc}\hfill {\Delta }_{f}\left(\alpha \right)\left(m,{\Delta }_{f}\left(\alpha \right)\left(n,s\right)\right)=\alpha \left(f\left(m\right),\alpha \left(f\left(n\right),s\right)\right)& =\alpha \left(f\left(m\right)\star \prime f\left(n\right),s\right)\hfill \\ & =\alpha \left(f\left(m\star n\right),s\right)\hfill \\ & ={\Delta }_{f}\left(\alpha \right)\left(m\star n,s\right).\hfill \end{array}$ Hence the unit law and the multiplication law both hold. Example 4.1.4.13.
Let ℕ and ℤ denote the additive monoids of natural numbers and integers respectively, and let i: ℕ → ℤ be the inclusion, which Example 4.1.4.2 showed is a monoid homomorphism. There is an action α: ℤ × ℝ → ℝ of the monoid ℤ on the set ℝ of real numbers, given by α(n, x) = n + x. Clearly, this action works just as well if we restrict the scalars to ℕ ⊆ ℤ, and allow only adding natural numbers to real numbers. This is the action ∆iα: ℕ × ℝ → ℝ, because for (n, x) ∈ ℕ × ℝ, we have ∆iα(n, x) = α(i(n), x) = α(n, x) = n + x, just as expected. Example 4.1.4.14. Suppose that V is a complex vector space. In particular, this means that the monoid ℂ of complex numbers (under multiplication) acts on the elements of V. The elements of ℂ are called scalars in this context. If i: ℝ → ℂ is the inclusion of the real line inside ℂ, then i is a monoid homomorphism. Restriction of scalars in the preceding sense turns V into a real vector space, so the name “restriction of scalars” is apt. Exercise 4.1.4.15. Let ℕ be the free monoid on one generator, and let Σ = {a, b}. Consider the map of monoids f : ℕ → List(Σ) given by sending 1 ↦ [a, b, b, b]. Consider the state set S = {State 0, State 1, State 2}. The monoid action α: List(Σ)×SS given in Example 4.1.3.1 can be transformed by restriction of scalars along f to an action ∆f(α) of ℕ on S. Write its action table. # 4.2   Groups Groups are monoids with the property that every element has an inverse. If we think of these structures in terms of how they act on sets, the difference between groups and monoids is that the action of every group element can be undone. One way of thinking about groups is in terms of symmetries. For example, the rotations and reflections of a square form a group because they can be undone. Another way to think of the difference between monoids and groups is in terms of time. Monoids are likely useful in thinking about diffusion, in which time plays a role and things cannot be undone. 
Groups are more likely useful in thinking about mechanics, where actions are time-reversible. ## 4.2.1   Definition and examples Definition 4.2.1.1. Let (M, e, ⋆) be a monoid. An element m ∈ M is said to have an inverse if there exists an m′ ∈ M such that m ⋆ m′ = e and m′ ⋆ m = e. A group is a monoid (M, e, ⋆) in which every element m ∈ M has an inverse. Proposition 4.2.1.2. Suppose that $\mathcal{M}≔\left(M,e,\star \right)$ is a monoid, and let m ∈ M be an element. Then m has at most one inverse. Proof. Suppose that both m′ and m″ are inverses of m; we want to show that m′ = m″. This follows by the associative and unit laws for monoids: $m\prime =m\prime \star \left(m\star m″\right)=\left(m\prime \star m\right)\star m″=m″.$ Example 4.2.1.3. The additive monoid (ℕ, 0, +) is not a group because none of its elements are invertible, except for 0. However, the monoid of integers (ℤ, 0, +) is a group. The monoid of clock positions from Example 4.1.1.26 is also a group. For example, the inverse of Q5 is Q7 because Q5 ⋆ Q7 = e = Q7 ⋆ Q5. Example 4.2.1.4. Consider a square centered at the origin in ℝ². It has rotational and mirror symmetries. There are eight of these, denoted $\left\{e,\rho ,{\rho }^{2},{\rho }^{3},\varphi ,\varphi \rho ,\varphi {\rho }^{2},\varphi {\rho }^{3}\right\},$ where ρ stands for 90° counterclockwise rotation and ϕ stands for horizontal flip (across the vertical axis). So relations include ρ⁴ = e, ϕ² = e, and ρ³ϕ = ϕρ. This group is called the dihedral group of order eight. Example 4.2.1.5. The set of 3 × 3 matrices can be given the structure of a monoid, where the unit element is the 3 × 3 identity matrix and the multiplication formula is given by matrix multiplication. It is a monoid but not a group because not all matrices are invertible. The subset of invertible matrices does form a group, called the general linear group of degree 3 and denoted GL3. Inside of GL3 is the orthogonal group, denoted O3, of matrices M such that M⁻¹ = M⊤, the transpose of M.
These matrices correspond to symmetries of the two-dimensional sphere centered at the origin in ℝ³. Another interesting group is the Euclidean group E(3), which consists of all isometries of ℝ³, i.e., all functions ℝ³ → ℝ³ that preserve distances. Application 4.2.1.6. In crystallography one is often concerned with the symmetries that arise in the arrangement A of atoms in a molecule. To think about symmetries in terms of groups, we first define an atom arrangement to be a finite subset i: A ⊆ ℝ³. A symmetry in this case is an isometry of ℝ³ (see Example 4.2.1.5), say, f : ℝ³ → ℝ³, such that there exists a dotted arrow making the following diagram commute: That is, it is an isometry of ℝ³ such that each atom of A is sent to a position currently occupied by an atom of A. It is not hard to show that the set of such isometries forms a group, called the space group of the crystal. Exercise 4.2.1.7. Let X be a finite set. A permutation of X is an isomorphism $f:X\stackrel{\cong }{\to }X.$ Let Iso(X) ≔ {f : X → X | f is an isomorphism} be the set of permutations of X. Here is a picture of an element in Iso(S), where S = {s1, s2, s3, s4}: a. Devise a unit and a multiplication formula, such that the set Iso(X) of permutations of X forms a monoid. b. Is the monoid Iso(X) always in fact a group? Solution 4.2.1.7. a. We can take the unit to be the identity function $i{d}_{X}:X\stackrel{\cong }{\to }X$ and the multiplication formula to be composition of isomorphisms: f ⋆ g ≔ f ○ g. Clearly, idX ○ f = f ○ idX = f and (f ○ g) ○ h = f ○ (g ○ h), so this formula satisfies the unit and multiplication laws. In other words, we have put a monoid structure on the set Iso(X). b. Yes, Iso(X) is a group because every element f ∈ Iso(X) is invertible. Namely, the fact that f is an isomorphism means that there is some f⁻¹ ∈ Iso(X) with f ○ f⁻¹ = f⁻¹ ○ f = idX. Exercise 4.2.1.8. In Exercise 4.1.1.28 you classified the cyclic monoids. Which of them are groups? Definition 4.2.1.9 (Group action).
Let (G, e, ⋆) be a group and S a set. An action of G on S is a function ↷: G × S → S such that for all s ∈ S and g, g′ ∈ G, we have • e ↷ s = s; • g ↷ (g′ ↷ s) = (g ⋆ g′) ↷ s. In other words, considering G as a monoid, it is an action in the sense of Definition 4.1.2.1. Example 4.2.1.10. When a group acts on a set, it has the character of symmetry. For example, consider the group whose elements are angles θ. This group may be denoted U(1) and is often formalized as the unit circle in ℂ, i.e., the set of complex numbers z = a + bi such that |z|² = a² + b² = 1. The set of such points is given the structure of a group (U(1), 1 + 0i, ⋆) by defining the unit element to be 1 + 0i and the group law to be complex multiplication. But for those unfamiliar with complex numbers, this is simply angle addition, where we understand that 360° = 0°. If θ1 = 190° and θ2 = 278°, then θ1 ⋆ θ2 = 468° = 108°. In the language of complex numbers, $z={e}^{i\theta }$. The group U(1) acts on any set that we can picture as having rotational symmetry about a fixed axis, such as the earth around the north-south axis. We will define S = {(x, y, z) ∈ ℝ³ | x² + y² + z² = 1} to be the unit sphere in ℝ³, and seek to understand the rotational action of U(1) on S. We first show that U(1) acts on ℝ³ by θ ↷ (x, y, z) = (x cos θ + y sin θ, −x sin θ + y cos θ, z), or with matrix notation as $\theta ↷\left(\begin{array}{c}x\\ y\\ z\end{array}\right)=\left(\begin{array}{ccc}\mathrm{cos}\theta & \mathrm{sin}\theta & 0\\ -\mathrm{sin}\theta & \mathrm{cos}\theta & 0\\ 0& 0& 1\end{array}\right)\left(\begin{array}{c}x\\ y\\ z\end{array}\right).$ Trigonometric identities ensure that this is indeed an action. In terms of action tables, we would need infinitely many rows and columns to express this action. Here is a sample (entries computed from the formula above and rounded to two decimal places):

Action of U(1) on ℝ³
(x, y, z)      θ = 45°              θ = 90°         θ = 100°
(0, 0, 0)      (0, 0, 0)            (0, 0, 0)       (0, 0, 0)
(1, 0, 0)      (0.71, −0.71, 0)     (0, −1, 0)      (−0.17, −0.98, 0)
(0, 1, −4.2)   (0.71, 0.71, −4.2)   (1, 0, −4.2)    (0.98, −0.17, −4.2)
(3, 4, 2)      (4.95, 0.71, 2)      (4, −3, 2)      (3.42, −3.65, 2)

Since S ⊆ ℝ³ consists of all vectors of length 1, we need to check that the action preserves length, i.e., that if (x, y, z) ∈ S, then θ ↷ (x, y, z) ∈ S. In this way we will have confirmed that U(1) indeed acts on S.
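The action laws and the length-preservation claim can also be spot-checked numerically; the Python sketch below (names ours) tests one entry of the sample table, the angle-addition law, and length preservation at a sample point:

```python
import math

def rotate(theta_deg, p):
    """theta acting on (x, y, z): (x cos t + y sin t, -x sin t + y cos t, z)."""
    x, y, z = p
    t = math.radians(theta_deg % 360)
    return (x * math.cos(t) + y * math.sin(t),
            -x * math.sin(t) + y * math.cos(t),
            z)

p = (3.0, 4.0, 2.0)

# An action-table entry: 45 degrees sends (3, 4, 2) to (4.95, 0.71, 2).
q = rotate(45, p)
assert (round(q[0], 2), round(q[1], 2), round(q[2], 2)) == (4.95, 0.71, 2.0)

# Action law: 190 acting after 278 equals 190 + 278 = 468 = 108 (mod 360).
r1, r2 = rotate(190, rotate(278, p)), rotate(108, p)
assert all(abs(a - b) < 1e-9 for a, b in zip(r1, r2))

# Length preservation: rotating does not change x^2 + y^2 + z^2,
# so the action on R^3 restricts to an action on the unit sphere S.
assert abs(sum(c * c for c in rotate(100, p)) - sum(c * c for c in p)) < 1e-9
```

Such a check is no substitute for the trigonometric identity, but it catches sign errors in the formula quickly.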
The calculation begins by assuming x² + y² + z² = 1, and one uses trigonometric identities to see that (x cos θ + y sin θ)² + (−x sin θ + y cos θ)² + z² = x² + y² + z² = 1. Exercise 4.2.1.11. Let X be a set and consider the group Iso(X) of permutations of X (see Exercise 4.2.1.7). Find a canonical action of Iso(X) on X. Solution 4.2.1.11. The elements of Iso(X) are isomorphisms $f:X\stackrel{\cong }{\to }X$ . To get an action ⋆: Iso(X) × X → X, we need, for every pair (f, x), an element of X. The obvious choice is f(x) ∈ X. Let’s check that this really gives an action. For any f, g ∈ Iso(X) and any x ∈ X we indeed have idX(x) = x and we indeed have f(g(x)) = (f ○ g)(x), so our choice works. Definition 4.2.1.12. Let G be a group acting on a set X. For any point x ∈ X, the orbit of x, denoted Gx, is the set Gx ≔ {g ⋆ x ∈ X | g ∈ G}. Application 4.2.1.13. Let S be the surface of the earth, understood as a sphere, and let G = U(1) be the group of angles acting on S by rotation as in Example 4.2.1.10. The orbit of any point p = (x, y, z) ∈ S is the set of points on the same latitude line as p. One may also consider a small band around the earth, i.e., the set A = {(x, y, z) | 1.0 ⩽ x² + y² + z² ⩽ 1.05}. The action of U(1) on S extends to an action of U(1) on A. The orbits are latitude-lines-at-altitude. A simplifying assumption in climatology may be given by assuming that U(1) acts on all currents in the atmosphere in an appropriate sense. Thus, instead of considering movement within the whole space A, we only allow movement that behaves the same way throughout each orbit of the group action. Exercise 4.2.1.14. a. Consider the U(1) action on the sphere S given in Example 4.2.1.10. Describe the set of orbits of this action. b. What are the orbits of the canonical action of the permutation group Iso({1, 2, 3}) on the set {1, 2, 3}? (See Exercise 4.2.1.11.) Exercise 4.2.1.15. Let (G, e, ⋆) be a group and X a set on which G acts. Is “being in the same orbit” an equivalence relation on X? Definition 4.2.1.16. Let G and G′ be groups.
A group homomorphism f : G → G′ is defined to be a monoid homomorphism G → G′, where G and G′ are being regarded as monoids in accordance with Definition 4.2.1.1. # 4.3   Graphs Unless otherwise specified, whenever I speak of graphs in this book, I do not mean curves in the plane, such as parabolas, or pictures of functions generally, but rather systems of vertices and arrows. Graphs are taken to be directed, meaning that every arrow points from a vertex to a vertex; rather than merely connecting vertices, arrows have direction. If a and b are vertices, there can be many arrows from a to b, or none at all. There can be arrows from a to itself. Here is the formal definition in terms of sets and functions. ## 4.3.1   Definition and examples Definition 4.3.1.1. A graph G consists of a sequence G ≔ (V, A, src, tgt), where • V is a set, called the set of vertices of G (singular: vertex); • A is a set, called the set of arrows of G; • src: A → V is a function, called the source function for G; • tgt: A → V is a function, called the target function for G. Given an arrow a ∈ A we refer to src(a) as the source vertex of a and to tgt(a) as the target vertex of a. To draw a graph, first draw a dot for every element of V. Then for every element a ∈ A, draw an arrow connecting dot src(a) to dot tgt(a). Example 4.3.1.2 (Graph). Here is a picture of a graph G = (V, A, src, tgt): We have V = {v, w, x, y, z} and A = {f, g, h, i, j, k}. The source and target functions src, tgt: A → V are expressed in the following table (left-hand side): In fact, all the data of the graph G is captured in these two tables—together they tell us the sets A and V and the functions src and tgt. Example 4.3.1.3. Every olog has an underlying graph, in the sense of Definition 4.3.1.1. An olog has additional information, namely, information about which pairs of paths are declared equivalent as well as text that has certain English-readability rules. Exercise 4.3.1.4. a. Draw the graph corresponding to the following tables: b.
Write two tables like the ones in part (a) corresponding to the following graph: Exercise 4.3.1.5. a. Let A = {1, 2, 3, 4, 5} and B = {a, b, c}. Draw them, and choose an arbitrary function f : A → B and draw it. b. Let A ⊔ B be the coproduct of A and B (Definition 3.1.2.1), and let $A\stackrel{{i}_{1}}{\to }A\bigsqcup B\stackrel{{i}_{2}}{←}B$ be the two inclusions. Consider the two functions src, tgt: A → A ⊔ B, where src = i1 and tgt is the composition $A\stackrel{f}{\to }B\stackrel{{i}_{2}}{\to }A\bigsqcup B.$ Draw the associated graph G ≔ (A ⊔ B, A, src, tgt). Exercise 4.3.1.6. a. Let V be a set. Suppose we just draw the elements of V as vertices and have no arrows between them. Is this a graph? b. Given V, is there any other canonical or somehow automatic nonrandom procedure for generating a graph with those vertices? Solution 4.3.1.6. a. Yes. With arrows A = ∅, there is a unique function !: A → V, so we have (V, ∅, !, !). This is called the discrete graph on vertices V. b. Yes. Choose as arrows A = V × V, and let src: A → V and tgt: A → V be the projections. This gives the indiscrete graph Ind(V) ≔ (V, V × V, π1, π2) on vertices V. An indiscrete graph is one in which each vertex is connected (backward and forward) to every other vertex and also points to itself. Another would be (V, V, idV, idV), which puts a loop at every vertex and has no other arrows. Example 4.3.1.7. Recall from Construction 3.2.2.6 the notion of a bipartite graph, defined to be a span (i.e., pair of functions; see Definition 3.2.2.1) $A\stackrel{f}{←}R\stackrel{g}{\to }B.$ Now that we have a formal definition of a graph, we might hope that the notion of bipartite graphs fits in as a particular sort of graph, and it does. Let V = A ⊔ B, and let i: A → V and j : B → V be the inclusions. Let src = i ○ f : R → V, and let tgt = j ○ g : R → V be the composites: Then (V, R, src, tgt) is a graph that would be drawn exactly as specified by the drawing of spans in Construction 3.2.2.6. Example 4.3.1.8. Let n ∈ ℕ be a natural number.
The chain graph of length n, denoted [n], is the following graph: $\stackrel{0}{•}\to \stackrel{1}{•}\to \cdots \to \stackrel{n}{•}$ In general, [n] has n arrows and n + 1 vertices. In particular, when n = 0, we have that [0] is the graph consisting of a single vertex and no arrows. Example 4.3.1.9. Let G = (V, A, src, tgt) be a graph. Suppose that we want to spread it out over discrete time, so that each arrow does not occur within a given time slice but instead over a quantum unit of time. Let [ℕ] = (ℕ, ℕ, n ↦ n, n ↦ n + 1) be the graph depicted: $\stackrel{0}{•}\stackrel{0}{\to }\stackrel{1}{•}\stackrel{1}{\to }\stackrel{2}{•}\stackrel{2}{\to }\cdots$ The discussion of limits in a category (see Chapter 6) clarifies that products can be taken in the category of graphs (see Example 6.1.1.5), so [ℕ] × G will make sense. For now, we construct it by hand. Let T(G) = (V × ℕ, A × ℕ, src′, tgt′) be a new graph, where for a ∈ A and n ∈ ℕ, we have src′(a, n) ≔ (src(a), n) and tgt′(a, n) ≔ (tgt(a), n + 1). Let G be the following graph: Then T(G) will be the graph The f arrows still take a’s to a’s, and the g arrows still take a’s to b’s, but they always march forward in time. Exercise 4.3.1.10. Let G be the following graph: Draw the graph T(G) defined in Example 4.3.1.9, using ellipses (⋯) if necessary. Solution 4.3.1.10. Exercise 4.3.1.11. Consider the following infinite graph G = (V, A, src, tgt): a. Write the sets A and V. b. What are the source and target functions A → V? Exercise 4.3.1.12. A graph is a pair of functions A ⇉ V. This sets up the notion of equalizer and coequalizer (see Definitions 3.2.3.1 and 3.3.3.1). a. What feature of a graph G is captured by the equalizer of its source and target functions? b. What feature of a graph G is captured by the coequalizer of its source and target functions? Solution 4.3.1.12. a. The equalizer of src, tgt is the set of loops in G, i.e., arrows pointing from a vertex to itself. b.
The coequalizer of src, tgt is the set of connected components in G. See Exercise 3.3.1.11. ## 4.3.2   Paths in a graph One usually has some idea of what a path in a graph is, especially if one is told that a path must always follow the direction of arrows. The following definition makes this idea precise. In particular, one can have paths of any finite length n ∈ ℕ, even length 0 or 1. Also, we want to be able to talk about the source vertex and target vertex of a path as well as about concatenation of paths. Definition 4.3.2.1. Let G = (V, A, src, tgt) be a graph. A path of length n in G, denoted $p\in {\text{Path}}_{G}^{\left(n\right)}$ , is a head-to-tail sequence $\begin{array}{cc}p=\left({v}_{0}\stackrel{{a}_{1}}{\to }{v}_{1}\stackrel{{a}_{2}}{\to }{v}_{2}\stackrel{{a}_{3}}{\to }\cdots \stackrel{{a}_{n}}{\to }{v}_{n}\right)& \left(4.4\right)\end{array}$ of arrows in G, denoted ${}_{{v}_{0}}\left[{a}_{1},{a}_{2},\dots ,{a}_{n}\right]$. A path is a list of arrows, so we use a variant of list notation, but the extra subscript at the beginning, which indicates the source vertex, reminds us that this list is actually a path. We have canonical isomorphisms ${\text{Path}}_{G}^{\left(1\right)}\cong A$ and ${\text{Path}}_{G}^{\left(0\right)}\cong V$ : a path of length 1 is an arrow, and a path of length 0 is a vertex. We refer to the length 0 path ${}_{v}\left[\right]$ on vertex v as the trivial path on v. We denote by PathG the set of paths (of any length) in G, i.e., ${\text{Path}}_{G}≔{\coprod }_{n\in \text{ℕ}}{\text{Path}}_{G}^{\left(n\right)}.$ Every path p ∈ PathG has a source vertex and a target vertex, and we may denote these $\overline{\mathit{\text{src}}},\overline{\mathit{\text{tgt}}}:{\text{Path}}_{G}\to V.$ If p is a path with $\overline{\mathit{\text{src}}}\left(p\right)=v$ and $\overline{\mathit{\text{tgt}}}\left(p\right)=w$ , we may denote it p: v → w. Given two vertices v, w ∈ V, we write PathG(v, w) to denote the set of all paths p: v → w.
There is a concatenation operation on paths. Given a path p: v → w and q : w → x, we define the concatenation, denoted p ++ q : v → x, using concatenation of lists (see Definition 4.1.1.13). That is, if $p={}_{v}\left[{a}_{1},{a}_{2},\dots ,{a}_{m}\right]$ and $q={}_{w}\left[{b}_{1},{b}_{2},\dots ,{b}_{n}\right],$ then $p++q={}_{v}\left[{a}_{1},\dots ,{a}_{m},{b}_{1},\dots ,{b}_{n}\right]$ . In particular, if $p={}_{v}\left[\right]$ is the trivial path on vertex v (resp. if $r={}_{w}\left[\right]$ is the trivial path on vertex w), then for any path q : v → w, we have p ++ q = q (resp. q ++ r = q). Example 4.3.2.2. Let G = (V, A, src, tgt) be a graph, and suppose v ∈ V is a vertex. If p: v → v is a path of length |p| ∈ ℕ with $\overline{\mathit{\text{src}}}\left(p\right)=\overline{\mathit{\text{tgt}}}\left(p\right)=v$ , we call it a loop of length |p|. For n ∈ ℕ, we write pn : v → v to denote the n-fold concatenation pn ≔ p++p++ ⋯ ++p (where p is written n times). Example 4.3.2.3. In diagram (4.3), page 120, we see a graph G. In it, there are no paths from v to y, one path (namely, ${}_{v}\left[f\right]$ ) from v to w, two paths (namely, v[f, g] and ${}_{v}\left[f,h\right]$ ) from v to x, and infinitely many paths $\left\{{{}_{y}\left[i\right]}^{{q}_{1}}++{{}_{y}\left[j,k\right]}^{{r}_{1}}++\cdots ++{{}_{y}\left[i\right]}^{{q}_{n}}++{{}_{y}\left[j,k\right]}^{{r}_{n}}|n,{q}_{1},{r}_{1},\dots ,{q}_{n},{r}_{n}\in ℕ\right\}$ from y to y. There are other paths as well in G, including the five trivial paths. Exercise 4.3.2.4. How many paths are there in the following graph? $\stackrel{1}{•}\stackrel{f}{\to }\stackrel{2}{•}\stackrel{g}{\to }\stackrel{3}{•}$ Exercise 4.3.2.5. Let G be a graph, and consider the set PathG of paths in G. Suppose someone claimed that there is a monoid structure on the set PathG, where the multiplication formula is given by concatenation of paths. Are they correct? Why, or why not?
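Paths can also be enumerated mechanically. In the Python sketch below (our own representation, not the text's), a path is encoded as a pair (source vertex, tuple of arrows), mirroring the notation v[a1, … , an]; the graph used is the chain graph of Exercise 4.3.2.4.

```python
# Enumerate all paths of length <= max_len by repeatedly extending
# shorter paths head-to-tail.
def paths(V, src, tgt, max_len):
    out = [(v, ()) for v in V]              # the trivial (length-0) paths
    frontier = out[:]
    for _ in range(max_len):
        frontier = [(v, arrs + (a,))
                    for (v, arrs) in frontier
                    for a in src
                    if src[a] == (tgt[arrs[-1]] if arrs else v)]
        out += frontier
    return out

# Concatenation p ++ q, defined only when the endpoints match:
def concat(p, q, tgt):
    (v, arrs), (w, brrs) = p, q
    assert (tgt[arrs[-1]] if arrs else v) == w   # tgt(p) must equal src(q)
    return (v, arrs + brrs)

# The graph of Exercise 4.3.2.4:  1 --f--> 2 --g--> 3.
V, src, tgt = {1, 2, 3}, {"f": 1, "g": 2}, {"f": 2, "g": 3}
all_paths = paths(V, src, tgt, max_len=2)
assert len(all_paths) == 6    # three trivial paths, [f], [g], and [f, g]
assert concat((1, ("f",)), (2, ("g",)), tgt) == (1, ("f", "g"))
```

Note that `concat` is only partially defined, which bears on Exercise 4.3.2.5: concatenation is not a multiplication formula defined on all pairs of paths.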
## 4.3.3   Graph homomorphisms A graph (V, A, src, tgt) involves two sets and two functions. For two graphs to be comparable, their two sets and their two functions should be appropriately comparable. Definition 4.3.3.1. Let G = (V, A, src, tgt) and G′ = (V′, A′, src′, tgt′) be graphs. A graph homomorphism f from G to G′, denoted f : G → G′, consists of two functions f0: V → V′ and f1: A → A′ such that the diagrams in (4.5) commute: Remark 4.3.3.2. The conditions (4.5) may look abstruse at first, but they encode a very important idea, roughly stated “arrows are bound to their endpoints.” Under a map of graphs G → G′, one cannot flippantly send an arrow of G to any old arrow of G′: it must still connect the vertices it connected before. Following is an example of a mapping that does not respect this condition: a connects 1 and 2 before but not after: The commutativity of the diagrams in (4.5) is exactly what is needed to ensure that arrows are handled in the expected way by a proposed graph homomorphism. Example 4.3.3.3 (Graph homomorphism). Let G = (V, A, src, tgt) and G′ = (V′, A′, src′, tgt′) be the graphs drawn in (4.6): The colors indicate the choice of function f0: V → V′. Given that choice, condition (4.5) imposes in this case that there is a unique choice of graph homomorphism f : G → G′. In other words, where arrows are sent is completely determined by where vertices are sent, in this particular case. Exercise 4.3.3.4. a. Where are a, b, c, d, e sent under f1 : A → A′ in diagram (4.6)? b. Choose an element x ∈ A, and check that it behaves as specified by diagram (4.5). Exercise 4.3.3.5. Let G be a graph, let n ∈ ℕ be a natural number, and let [n] be the chain graph of length n, as in Example 4.3.1.8. Is a path of length n in G the same thing as a graph homomorphism [n] → G, or are there subtle differences? More precisely, is there always an isomorphism between the set of graph homomorphisms [n] → G and the set ${\text{Path}}_{G}^{\left(n\right)}$ of length n paths in G?
Solution 4.3.3.5. Yes, a path of length n in G is the same thing as a graph homomorphism [n] → G. The discussion of categories in Chapter 5 makes clear how to write this fact formally as an isomorphism: ${\text{Hom}}_{\mathbf{\text{Grph}}}\left(\left[n\right],G\right)\cong {\text{Path}}_{G}^{\left(n\right)}.$ Exercise 4.3.3.6. Given a homomorphism of graphs f : G → G′, there is an induced function between their sets of paths, Path(f): Path(G) → Path(G′). a. Explain how this works. b. Is it the case that for every n ∈ ℕ, the function Path(f) carries Path(n)(G) to Path(n)(G′), or can path lengths change in this process? c. Suppose that f0 and f1 are injective (meaning no two distinct vertices in G are sent to the same vertex (resp. for arrows) under f). Does this imply that Path(f) is also injective (meaning no two distinct paths are sent to the same path under f)? d. Suppose that f0 and f1 are surjective (meaning every vertex in G′ and every arrow in G′ is in the image of f). Does this imply that Path(f) is also surjective? Hint: At least one of the answers to parts (b)–(d) is no. Exercise 4.3.3.7. Given a graph (V, A, src, tgt), let 〈src, tgt〉: A → V × V be the function guaranteed by the universal property for products. One might hope to summarize condition (4.5) for graph homomorphisms by the commutativity of the single square Is the commutativity of the diagram in (4.7) indeed equivalent to the commutativity of the diagrams in (4.5)? Solution 4.3.3.7. Yes. This follows from the universal property for products, Proposition 3.1.1.10. ### 4.3.3.8   Binary relations and graphs Definition 4.3.3.9. Let X be a set. A binary relation on X is a subset R ⊆ X × X. If X = ℕ is the set of natural numbers, then the usual ⩽ defines a binary relation on X: given (m, n) ∈ ℕ × ℕ, we put (m, n) ∈ R iff m ⩽ n.
As a table it might be written as in the left-hand table in (4.8): The middle table is the relation {(m, n) ∈ ℕ × ℕ | n = 5m} ⊆ ℕ × ℕ, and the right-hand table is the relation {(m, n) ∈ ℕ × ℕ | |n − m| ⩽ 1} ⊆ ℕ × ℕ. Exercise 4.3.3.10. A relation on ℝ is a subset of ℝ × ℝ, and one can indicate such a subset of the plane by shading. Choose an error bound ϵ > 0, and draw the relation one might refer to as ϵ-approximation. To say it another way, draw the relation “x is within ϵ of y.” Exercise 4.3.3.11. Recall that (4.8) uses tables to express relations; it may help to use the terminology of tables in answering some of the following questions. a. If R ⊆ S × S is a binary relation, find a natural way to make a graph GR from it, having vertices S. b. What is the set A of arrows in GR? c. What are the source and target functions src, tgt: A → S in GR? d. Consider the seven numbered rows in the left-hand table in (4.8), ignoring the ellipses. Draw the corresponding graph. e. Do the same for the right-hand table in (4.8). Solution 4.3.3.11. a. We have two projections π1, π2 : S × S → S, and we have an inclusion i: R ⊆ S × S. Thus we have a graph $R\underset{{\pi }_{2}○i}{\overset{{\pi }_{1}○i}{⇉}}S$ The idea is that for each row in the table, we draw an arrow from the first column’s value to the second column’s value. b. It is R, which one could call “the number of rows in the table.” c. These are π1 ○ i and π2 ○ i, which one could call “the first and second columns in the table.” In other words, GR ≔ (S, R, π1 ○ i, π2 ○ i). d. The seven solid arrows in the following graph correspond to the seven displayed rows in the left-hand table, and we include 3 more dashed arrows to complete the picture (they still satisfy the ⩽ relation). e. Seven rows, seven arrows: Exercise 4.3.3.12. a. If (V, A, src, tgt) is a graph, find a natural way to make a binary relation R ⊆ V × V from it. b. For the left-hand graph G in (4.6), write out the corresponding binary relation in table form. Exercise 4.3.3.13. a.
Given a binary relation R ⊆ S × S, you know from Exercise 4.3.3.11 how to construct a graph out of it, and from Exercise 4.3.3.12 how to make a new binary relation out of that, making a roundtrip. How does the resulting relation compare with the original? b. Given a graph G = (V, A, src, tgt), you know from Exercise 4.3.3.12 how to make a new binary relation out of it, and from Exercise 4.3.3.11 how to construct a new graph out of that, making the other roundtrip. How does the resulting graph compare with the original? # 4.4   Orders People usually think of certain sets as though they come with a canonical order. For example, one might think the natural numbers come with the ordering by which 3 < 5, or that the letters in the alphabet come with the order by which b < e. But in fact we put orders on sets, and some orders are simply more commonly used. For instance, one could order the letters in the alphabet by frequency of use, in which case e would come before b. Given different purposes, we can put different orders on the same set. For example, in Example 4.4.3.2 we give a different ordering on the natural numbers that is useful in elementary number theory. In science, we might order the set of materials in two different ways. In the first, we could consider material A to be less than material B if A is an ingredient or part of B, so water would be less than concrete. But we could also order materials based on how electrically conductive they are, whereby concrete would be less than water. This section is about different kinds of orders. ## 4.4.1   Definitions of preorder, partial order, linear order Definition 4.4.1.1. Let S be a set and R ⊆ S × S a binary relation on S; if (s, s′) ∈ R, we write s ⩽ s′. Then we say that R is a preorder if, for all s, s′, s″ ∈ S, we have Reflexivity: s ⩽ s, and Transitivity: if s ⩽ s′ and s′ ⩽ s″, then s ⩽ s″.
We say that R is a partial order if it is a preorder and, in addition, for all s, s′ ∈ S, we have Antisymmetry: If s ⩽ s′ and s′ ⩽ s, then s = s′. We say that R is a linear order if it is a partial order and, in addition, for all s, s′ ∈ S, we have Comparability: Either s ⩽ s′ or s′ ⩽ s. We denote such a preorder (or partial order or linear order) by (S, ⩽). Exercise 4.4.1.2. a. The relation in the left-hand table in (4.8) is a preorder. Is it a linear order? b. Show that neither the middle table nor the right-hand table in (4.8) is even a preorder. Example 4.4.1.3 (Partial order not linear order). The following is an olog for playing cards: We can put a binary relation on the set of boxes here by saying A ⩽ B if there is a path A → B. One can see immediately that this is a preorder because length 0 paths give reflexivity, and concatenation of paths gives transitivity. To see that it is a partial order we only note that there are no loops of any length. But this partial order is not a linear order because there is no path (in either direction) between, e.g., ⌜a 4 of diamonds⌝ and ⌜a black queen⌝, so it violates the comparability condition. Remark 4.4.1.4. Note that olog (4.9) in Example 4.4.1.3 is a good olog in the sense that given any collection of cards (e.g., choose 45 cards at random from each of seven decks and throw them in a pile), they can be classified according to it. In other words, each box in the olog will refer to some subset of the pile, and every arrow will refer to a function between these sets. For example, the arrow is a function from the set of hearts in the pile to the set of red cards in the pile. Example 4.4.1.5 (Preorder, not partial order). Every equivalence relation is a preorder, but rarely are they partial orders. For example, if S = {1, 2} and we put R = S × S, then this is an equivalence relation. It is a preorder but not a partial order (because 1 ⩽ 2 and 2 ⩽ 1, but 1 ≠ 2, so antisymmetry fails). Application 4.4.1.6.
Classically, we think of time as linearly ordered. A model is (ℝ, ⩽), the usual linear order on the set of real numbers. But according to the theory of relativity, there is not actually a single order to the events in the universe. Different observers correctly observe different orders on the set of events. Example 4.4.1.7 (Finite linear orders). Let n ∈ ℕ be a natural number. Define a linear order [n] = ({0, 1, 2, … , n}, ⩽) in the standard way. Pictorially, $\left[n\right]≔\stackrel{0}{•}\stackrel{⩽}{\to }\stackrel{1}{•}\stackrel{⩽}{\to }\stackrel{2}{•}\stackrel{⩽}{\to }\cdots \stackrel{⩽}{\to }\stackrel{n}{•}$ Every finite linear order, i.e., linear order on a finite set, is of the preceding form. That is, though the labels might change, the picture would be the same. This can be made precise when morphisms of orders are defined (see Definition 4.4.4.1). Exercise 4.4.1.8. Let S = {1, 2, 3}. a. Find a preorder R ⊆ S × S such that the set R is as small as possible. Is it a partial order? Is it a linear order? b. Find a preorder R′ ⊆ S × S such that the set R′ is as large as possible. Is it a partial order? Is it a linear order? Exercise 4.4.1.9. a. List all the preorder relations possible on the set {1, 2}. b. For any n ∈ ℕ, how many linear orders exist on the set {1, 2, 3, … , n}? c. Does your formula work when n = 0? Remark 4.4.1.10. We can draw any preorder (S, ⩽) as a graph with vertices S and with an arrow a → b if a ⩽ b. These are precisely the graphs with the following two properties for any vertices a, b ∈ S: 1. There is at most one arrow a → b. 2. If there is a path from a to b, then there is an arrow a → b. If (S, ⩽) is a partial order, then the associated graph has an additional no-loops property: 3. If n ∈ ℕ is an integer with n ⩾ 2, then there are no paths of length n that start at a and end at a. If (S, ⩽) is a linear order, then there is an additional comparability property: 4. For any two vertices a, b, there is an arrow a → b or an arrow b → a.
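The conditions of Definition 4.4.1.1 are directly checkable on finite examples. Here is a Python sketch (ours, not from the text) that stores a relation as a set of pairs and tests each property in turn.

```python
# A relation on a finite set S, stored as a set of pairs.  The three
# checks below implement Definition 4.4.1.1 directly.
def is_preorder(S, R):
    reflexive = all((s, s) in R for s in S)
    transitive = all((a, c) in R
                     for (a, b) in R for (b2, c) in R if b == b2)
    return reflexive and transitive

def is_partial_order(S, R):
    antisymmetric = all(a == b for (a, b) in R if (b, a) in R)
    return is_preorder(S, R) and antisymmetric

def is_linear_order(S, R):
    comparable = all((a, b) in R or (b, a) in R for a in S for b in S)
    return is_partial_order(S, R) and comparable

S = {1, 2, 3}
usual = {(a, b) for a in S for b in S if a <= b}
assert is_linear_order(S, usual)

divides = {(a, b) for a in S for b in S if b % a == 0}
assert is_partial_order(S, divides)          # but 2 and 3 are incomparable,
assert not is_linear_order(S, divides)       # so comparability fails

indiscrete = {(a, b) for a in S for b in S}  # everything related to everything
assert is_preorder(S, indiscrete)            # antisymmetry fails here
assert not is_partial_order(S, indiscrete)
```

The three sample relations illustrate the strict inclusions: linear orders are partial orders, and partial orders are preorders, but not conversely.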
Given a graph G, we can create a binary relation ⩽ on its set S of vertices as follows. Put a ⩽ b if there is a path in G from a to b. This relation will be reflexive and transitive, so it is a preorder. If the graph satisfies property 3, then the preorder will be a partial order, and if the graph also satisfies property 4, then the partial order will be a linear order. Thus graphs give us a nice way to visualize orders. Slogan 4.4.1.11. A graph generates a preorder: v ⩽ w if there is a path v → w. Exercise 4.4.1.12. Let G = (V, A, src, tgt) be the following graph: In the corresponding preorder, which of the following are true? a. a ⩽ b. c. c ⩽ b. d. b = c. e. e ⩽ f. f. f ⩽ d. Exercise 4.4.1.13. a. Let S = {1, 2}. The set ℙ(S) of subsets of S forms a partial order. Draw the associated graph. b. Repeat this for Q = ∅, R = {1}, and T = {1, 2, 3}. That is, draw the partial orders on ℙ(Q), ℙ(R), and ℙ(T). c. Do you see n-dimensional cubes? Solution 4.4.1.13. a. b. c. Yes. The graph associated to ℙ(n) looks like an n-dimensional cube. Definition 4.4.1.14. Let (S, ⩽) be a preorder. A clique is a subset S′ ⊆ S such that for each a, b ∈ S′, one has a ⩽ b. Exercise 4.4.1.15. True or false: A partial order is a preorder that has no cliques? (If false, is there a nearby true statement?) Solution 4.4.1.15. False. Every element is always in its own clique, so if X is a partial order with at least one element, then it has a clique. But a nearby statement is true. Let’s define a nontrivial clique to be a clique consisting of two or more elements. Slogan. A partial order is a preorder that has no nontrivial cliques. Just as every relation generates an equivalence relation (see Proposition 3.3.1.7), every relation also generates a preorder. Example 4.4.1.16. Let X be a set and R ⊆ X × X a relation. For elements x, y ∈ X, we say there is an R-path from x to y if there exists a natural number n ∈ ℕ and elements x0, x1, … , xn ∈ X such that 1. x = x0; 2. xn = y; 3. for all i ∈ ℕ, if 0 ⩽ i ⩽ n − 1, then (xi, xi+1) ∈ R.
Let $\overline{R}$ denote the relation where $\left(x,y\right)\in \overline{R}$ if there exists an R-path from x to y. We call $\overline{R}$ the preorder generated by R, and note some facts about $\overline{R}$: Containment. If (x, y) ∈ R, then $\left(x,y\right)\in \overline{R}$. That is, $R\subseteq \overline{R}.$ Reflexivity. For all x ∈ X, we have $\left(x,x\right)\in \overline{R}$. Transitivity. For all x, y, z ∈ X, if $\left(x,y\right)\in \overline{R}$ and $\left(y,z\right)\in \overline{R}$ , then $\left(x,z\right)\in \overline{R}$. Let’s write x ⩽ y if $\left(x,y\right)\in \overline{R}$ . To check the containment claim, use n = 1 so x0 = x and xn = y. To check the reflexivity claim, use n = 0 so x = x0 = y and condition 3 is vacuously satisfied. To check transitivity, suppose given R-paths x = x0x1 ⩽ … ⩽ xn = y and y = y0y1 ⩽ … ⩽ yp = z; then x = x0x1 ⩽ … ⩽ xny1 ⩽ … ⩽ yp = z will be an R-path from x to z. We can turn any relation into a preorder in a canonical way. Here is a concrete case of this idea. Let X = {a, b, c, d} and suppose given the relation {(a, b), (b, c), (b, d), (d, c), (c, c)}. This is neither reflexive nor transitive, so it is not a preorder. To make it a preorder we follow the preceding prescription. Starting with R-paths of length n = 0, we put {(a, a), (b, b), (c, c), (d, d)} into $\overline{R}$ . The R-paths of length 1 add the original elements, {(a, b), (b, c), (b, d), (d, c), (c, c)}. Redundancy (e.g., (c, c)) is permissible, but from now on in this example we write only the new elements. The R-paths of length 2 add {(a, c), (a, d)} to $\overline{R}$ . One can check that R-paths of length 3 and above do not add anything new to $\overline{R}$ , so we are done. The relation $\overline{R}=\left\{\left(a,a\right),\left(b,b\right),\left(c,c\right),\left(d,d\right),\left(a,b\right),\left(b,c\right),\left(b,d\right),\left(d,c\right),\left(a,c\right),\left(a,d\right)\right\}$ is reflexive and transitive, hence a preorder. Exercise 4.4.1.17.
Let X = {a, b, c, d, e, f}, and let R = {(a, b), (b, c), (b, d), (d, e), (f, a)}. a. What is the preorder $\overline{R}$ generated by R? b. Is it a partial order? Exercise 4.4.1.18. Let X be the set of people, and let R ⊆ X × X be the relation with (x, y) ∈ R if x is the child of y. Describe the preorder generated by R in layperson’s terms. ## 4.4.2   Meets and joins Let X be any set. Recall from Definition 3.4.4.9 that the power-set of X, denoted ℙ(X), is the set of subsets of X. There is a natural order on ℙ(X) given by the subset relationship, as exemplified in Exercise 4.4.1.13. Given two elements a, b ∈ ℙ(X), we can consider them as subsets of X and take their intersection as an element of ℙ(X), denoted a ∧ b. We can also consider them as subsets of X and take their union as an element of ℙ(X), denoted a ∨ b. The intersection and union operations are generalized in the following definition. Definition 4.4.2.1. Let (S, ⩽) be a preorder, and let s, t ∈ S be elements. A meet of s and t is an element w ∈ S satisfying the following universal property: • w ⩽ s and w ⩽ t, • for any x ∈ S, if x ⩽ s and x ⩽ t, then x ⩽ w. If w is a meet of s and t, we write w = s ∧ t. A join of s and t is an element w ∈ S satisfying the following universal property: • s ⩽ w and t ⩽ w, • for any x ∈ S, if s ⩽ x and t ⩽ x, then w ⩽ x. If w is a join of s and t, we write w = s ∨ t. That is, the meet of s and t is the biggest thing that is smaller than both, i.e., a greatest lower bound, and the join of s and t is the smallest thing that is bigger than both, i.e., a least upper bound. Note that the meet of s and t might be s or t itself. It may happen that s and t have more than one meet (or more than one join). However, any two meets of s and t must be in the same clique, by the universal property (and the same for joins). Exercise 4.4.2.2. Consider the partial order from Example 4.4.1.3. a. What is the join of ⌜a diamond⌝ and ⌜a heart⌝? b. What is the meet of ⌜a black card⌝ and ⌜a queen⌝? c. What is the meet of ⌜a diamond⌝ and ⌜a card⌝?
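For the power-set order, the universal properties of Definition 4.4.2.1 can be verified exhaustively on a small set. The following Python sketch (ours, not the text's) does so for one pair of subsets, using Python's built-in set operators `<=` (⊆), `&` (∩), and `|` (∪).

```python
# The partial order (P(X), subset-inclusion) on a four-element set X.
X = {1, 2, 3, 4}
PX = [{x for i, x in enumerate(sorted(X)) if n >> i & 1}
      for n in range(2 ** len(X))]          # all 16 subsets of X

s, t = {1, 2}, {2, 3}
meet, join = s & t, s | t                   # intersection and union

# Meet = greatest lower bound: below both s and t, and above every
# common lower bound.
assert meet <= s and meet <= t
assert all(u <= meet for u in PX if u <= s and u <= t)

# Join = least upper bound: above both s and t, and below every
# common upper bound.
assert s <= join and t <= join
assert all(join <= u for u in PX if s <= u and t <= u)
```

Here the meet is {2} and the join is {1, 2, 3}; in a partial order like this one, meets and joins are unique whenever they exist.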
Not every two elements in a preorder need have a meet, nor need they have a join. Exercise 4.4.2.3. a. If possible, find two elements in the partial order from Example 4.4.1.3 that do not have a meet. b. If possible, find two elements that do not have a join (in that preorder). Solution 4.4.2.3. a. There is no meet for ⌜a heart⌝ and ⌜a club⌝; no card is both. b. Every two elements have a join here. But note that some of these joins are “wrong” because the olog is not complete. For example, we have ⌜a 4⌝ ∨ ⌜a queen⌝ = ⌜a card⌝, whereas the correct answer would be ⌜a card that is either a 4 or a queen⌝. Exercise 4.4.2.4. As mentioned, the power-set S ≔ ℙ(X) of any set X naturally has the structure of a partial order. Its elements s ∈ S correspond to subsets s ⊆ X, and we put s ⩽ t if and only if s ⊆ t as subsets of X. The meet of two elements is their intersection as subsets of X, s ∧ t = s ∩ t, and the join of two elements is their union as subsets of X, s ∨ t = s ∪ t. a. Is it possible to put a monoid structure on the set S in which the multiplication formula is given by meets? If so, what would the unit element be? b. Is it possible to put a monoid structure on the set S in which the multiplication formula is given by joins? If so, what would the unit element be? Example 4.4.2.5 (Trees). A tree, i.e., a system of nodes and branches, all of which emanate from a single node called the root, is a partial order but generally not a linear order. A tree (T, ⩽) can either be oriented toward the root (so the root is the largest element of the partial order) or away from the root (so the root is the smallest element); let’s only consider the former. A tree is pictured as a graph in (4.10). The root is labeled e. In a tree every pair of elements s, t ∈ T has a join s ∨ t (their closest mutual ancestor). On the other hand, if s and t have a meet c = s ∧ t, then either c = s or c = t. Exercise 4.4.2.6. Consider the tree drawn in (4.10). a. What is the join i ∨ h? b. What is the join h ∨ b? c. What is the meet b ∧ a?
d. What is the meet b ∧ g? ## 4.4.3   Opposite order Definition 4.4.3.1. Let $\mathcal{S}≔\left(S,⩽\right)$ be a preorder. The opposite preorder, denoted ${\mathcal{S}}^{\text{op}}$, is the preorder (S, ⩽op) having the same set of elements but where s ⩽op s′ iff s′ ⩽ s. Example 4.4.3.2. Consider the preorder $\mathcal{N}≔\left(ℕ,\text{divides}\right)$, where a divides b if “a goes into b evenly,” i.e., if there exists n ∈ ℕ such that a * n = b. So 5 divides 35, and so on. Then ${\mathcal{N}}^{\text{op}}$ is the set of natural numbers but where m ⩽ n iff m is a multiple of n. So 6 ⩽ 2 and 6 ⩽ 3, but 6 ≰ 4. Exercise 4.4.3.3. Suppose that $\mathcal{S}≔\left(S,⩽\right)$ is a preorder. a. If $\mathcal{S}$ is a partial order, is ${\mathcal{S}}^{\text{op}}$ also a partial order? b. If $\mathcal{S}$ is a linear order, is ${\mathcal{S}}^{\text{op}}$ a linear order? Exercise 4.4.3.4. Suppose that $\mathcal{S}≔\left(S,⩽\right)$ is a preorder and that s1, s2 ∈ S have join s1 ∨ s2 = t in $\mathcal{S}$. The preorder ${\mathcal{S}}^{\text{op}}$ has the same elements as $\mathcal{S}$. Is t the join of s1 and s2 in ${\mathcal{S}}^{\text{op}}$, or is it their meet, or is it not necessarily their meet or their join? ## 4.4.4   Morphism of orders An order (S, ⩽), be it a preorder, a partial order, or a linear order, involves a set and a binary relation. For two orders to be comparable, their sets and their relations should be appropriately comparable. Definition 4.4.4.1. Let $\mathcal{S}≔\left(S,⩽\right)$ and $\mathcal{S}\prime ≔\left(S\prime ,⩽\prime \right)$ be preorders (resp. partial orders or linear orders). A morphism of preorders (resp. partial orders or linear orders) f from $\mathcal{S}$ to $\mathcal{S}\prime$, denoted $f:\mathcal{S}\to \mathcal{S}\prime$, is a function $f:S\to S\prime$ such that, for every pair of elements s1, s2 ∈ S, if s1 ⩽ s2, then f(s1) ⩽′ f(s2). Example 4.4.4.2. Let X and Y be sets, and let f : X → Y be a function.
Then for every subset X′ ⊆ X, its image f(X′) ⊆ Y is a subset (see Exercise 2.1.2.8). Thus we have a function F : ℙ(X) → ℙ(Y), given by taking images. This is a morphism of partial orders (ℙ(X), ⊆) → (ℙ(Y), ⊆). Indeed, if a ⊆ b in ℙ(X), then f(a) ⊆ f(b) in ℙ(Y).

Application 4.4.4.3. It is often said that a team is only as strong as its weakest member. Is this true for materials? The hypothesis that a material is only as strong as its weakest constituent can be understood as follows. Recall from the beginning of Section 4.4 (page 132) that we can put several different orders on the set M of materials. One example is the order given by constituency (m ⩽C m′ if m is an ingredient or constituent of m′). Another order is given by strength: m ⩽S m′ if m′ is stronger than m (in some fixed setting). Is it true that if material m is a constituent of material m′, then the strength of m′ is less than or equal to the strength of m? Mathematically the question would be, Is there a morphism of preorders (M, ⩽C) → (M, ⩽S)ᵒᵖ?

Exercise 4.4.4.4. Let X and Y be sets, and let f : X → Y be a function. Then for every subset Y′ ⊆ Y, its preimage f⁻¹(Y′) ⊆ X is a subset (see Definition 3.2.1.12). Thus we have a function F : ℙ(Y) → ℙ(X), given by taking preimages. Is it a morphism of partial orders?

Example 4.4.4.5. Let S be a set. The smallest preorder structure that can be put on S is to say a ⩽ b iff a = b. This is indeed reflexive and transitive, and it is called the discrete preorder on S. The largest preorder structure that can be put on S is to say a ⩽ b for all a, b ∈ S. This again is reflexive and transitive, and it is called the indiscrete preorder on S.

Exercise 4.4.4.6. Let S be a set, and let (T, ⩽T) be a preorder. Let ⩽D be the discrete preorder on S.
a. A morphism of preorders (S, ⩽D) → (T, ⩽T) is a function S → T satisfying certain properties (see Definition 4.4.4.1). Which functions S → T arise in this way?
b. Given a morphism of preorders (T, ⩽T) → (S, ⩽D), we get a function T → S.
In terms of ⩽T, which functions T → S arise in this way?

Exercise 4.4.4.7. Let S be a set, and let (T, ⩽T) be a preorder. Let ⩽I be the indiscrete preorder on S, as in Example 4.4.4.5.
a. Given a morphism of preorders (S, ⩽I) → (T, ⩽T), we get a function S → T. In terms of ⩽T, which functions S → T arise in this way?
b. Given a morphism of preorders (T, ⩽T) → (S, ⩽I), we get a function T → S. In terms of ⩽T, which functions T → S arise in this way?

## 4.4.5   Other applications

### 4.4.5.1   Biological classification

Biological classification is a method for dividing the set of organisms into distinct classes, called taxa. In fact, it turns out that such a classification, say, a phylogenetic tree, can be understood as a partial order C on the set of taxa. The typical ranking of these taxa, including kingdom, phylum, and so on, can be understood as a morphism of orders f : C → [n], for some n ∈ ℕ. For example, we may have a tree (see Example 4.4.2.5) that looks like this: We also have a linear order that looks like this: and the ranking system that puts Eukaryota at Domain and Homo sapiens at Species is an order-preserving function from the dots upstairs to the dots downstairs; that is, it is a morphism of preorders.

Exercise 4.4.5.2. Since the phylogenetic tree is a tree, it has all joins.
a. Determine the join of dogs and humans.
b. If we did not require the phylogenetic partial order to be a tree, what would it mean if two taxa (nodes in the phylogenetic partial order), say, a and b, had meet c with c ≠ a and c ≠ b?

Exercise 4.4.5.3.
a. In your favorite scientific subject, are there any interesting classification systems that are actually orders?
b. Choose one such system; what would meets mean in that setting?

### 4.4.5.4   Security

Security, say of sensitive information, is based on two things: a security clearance and need to know. Security clearance might consist of levels like confidential, secret, top secret.
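Taken as listed, such a chain of levels behaves like a linear order. The sketch below checks the order axioms mechanically; the numeric ranks are a hypothetical encoding, not part of the text.

```python
# Hypothetical clearance levels, encoded by a numeric rank.
rank = {'confidential': 1, 'secret': 2, 'top secret': 3}

def leq(a, b):
    """a <= b in the clearance order."""
    return rank[a] <= rank[b]

levels = list(rank)
# Reflexive, transitive, antisymmetric, and total -- a linear order.
assert all(leq(a, a) for a in levels)
assert all(leq(a, b) or leq(b, a) for a in levels for b in levels)
assert all(a == b or not (leq(a, b) and leq(b, a))
           for a in levels for b in levels)
assert all(not (leq(a, b) and leq(b, c)) or leq(a, c)
           for a in levels for b in levels for c in levels)
```

Whether the order stays linear once extra levels are thrown in is exactly the question the text turns to next.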
But maybe we can throw in “President’s eyes only” and some others too, like “anyone.”

Exercise 4.4.5.5. Does it appear that security clearance is a preorder, a partial order, or a linear order?

“Need to know” is another classification of people. For each bit of information, we do not necessarily want everyone to know about it, even everyone with the specified clearance. It is only disseminated to those who need to know.

Exercise 4.4.5.6. Let P be the set of all people, and let Ī be the set of all pieces of information known by the government. For each subset I ⊆ Ī, let K(I) ⊆ P be the set of people who need to know every piece of information in I. Let S = {K(I) | I ⊆ Ī} be the set of all “need to know” groups, with the subset relation denoted ⩽.
a. Is (S, ⩽) a preorder? If not, find a nearby preorder.
b. If I1 ⊆ I2, do we always have K(I1) ⩽ K(I2) or K(I2) ⩽ K(I1) or possibly neither?
c. Should the preorder (S, ⩽) have all meets?
d. Should (S, ⩽) have all joins?

### 4.4.5.7   Spaces and geography

Consider closed curves that can be drawn in the plane ℝ2, e.g., circles, ellipses, and kidney-bean shaped curves. The interiors of these closed curves (not including the boundary itself) are called basic open sets in ℝ2. The good thing about such an interior U is that any point p ∈ U is not on the boundary, so no matter how close p is to the boundary of U, there will always be a tiny basic open set surrounding p and completely contained in U. In fact, the union of any collection of basic open sets still has this property. That is, an open set in ℝ2 is any subset U ⊆ ℝ2 that can be formed as the union of a collection of basic open sets.

Example 4.4.5.8. Let U = {(x, y) ∈ ℝ2 | x > 0}. To see that U is open, define the following sets: for any a, b ∈ ℝ, let S(a, b) be the square parallel to the axes, with side length 1, whose upper left corner is (a, b).
Note that S(a, b) is a closed curve, so if we let S′(a, b) be the interior of S(a, b), then each S′(a, b) is a basic open set. Now U is the union of S′(a, b) over the collection of all a > 0 and all b,

U = ⋃_{a, b ∈ ℝ, a > 0} S′(a, b),

so U is open.

Example 4.4.5.9. The idea of open sets extends to spaces beyond ℝ2. For example, on the earth one could define a basic open set to be the interior of any region one can draw a closed curve around (with a metaphorical pen), and define open sets to be unions of these basic open sets.

Exercise 4.4.5.10. Let (S, ⊆) be the partial order of open subsets on earth as defined in Example 4.4.5.9.
a. If ⩽ is the subset relation, is (S, ⩽) a partial order or just a preorder, or neither?
b. Does it have meets?
c. Does it have joins?

Exercise 4.4.5.11. Let S be the set of open subsets of earth as defined in Example 4.4.5.9. For each open subset s of earth, suppose we know the range of recorded temperature throughout s (i.e., the low a and high b throughout the region). Thus to each element s ∈ S we assign an interval T(s) ≔ {x ∈ ℝ | a ⩽ x ⩽ b}. The set V of intervals of ℝ can be partially ordered by the subset relation.
a. Does the assignment T : S → V amount to a morphism of orders?
b. If so, does it preserve meets or joins? Hint: It does not preserve both.

Solution 4.4.5.11.
a. Suppose s is a subregion of s′, e.g., New Mexico as a subregion of North America. This question is asking whether the range of temperatures recorded throughout New Mexico is a subset of the range of temperatures recorded throughout North America, which, of course, it is.
b. The question on meets is, If we take two regions s and s′ and intersect them, is the temperature range on s ∩ s′ equal to the intersection T(s) ∩ T(s′)? Clearly, if a temperature t is recorded somewhere in s ∩ s′, then it is recorded somewhere in s and somewhere in s′, so T(s ∩ s′) ⊆ T(s) ∩ T(s′).
But is it true that if a temperature is recorded somewhere in s and somewhere in s′, then it must be recorded somewhere in s ∩ s′? No, that is false. So T does not preserve meets. The question on joins is, If we take the union of two regions s and s′, is the temperature range on s ∪ s′ equal to the union T(s) ∪ T(s′)? If a temperature is recorded somewhere in s ∪ s′, then it is either recorded somewhere in s or somewhere in s′ (or both), so T(s ∪ s′) ⊆ T(s) ∪ T(s′). And if a temperature is recorded somewhere in s, then it is recorded somewhere in s ∪ s′, so T(s) ⊆ T(s ∪ s′). Similarly, T(s′) ⊆ T(s ∪ s′), so in fact T does preserve joins: T(s ∪ s′) = T(s) ∪ T(s′).

Exercise 4.4.5.12.
a. Can you think of a space relevant to an area of science for which it makes sense to assign an interval of real numbers to each open set, analogously to Exercise 4.4.5.11? For example, for a sample of some material under stress, perhaps the strain on each open set is somehow an interval?
b. Check that your assignment, which you might denote as in Exercise 4.4.5.11 by T : S → V, is a morphism of orders.
c. How does it act with respect to meets and/or joins?

# 4.5   Databases: schemas and instances

So far this chapter has discussed classical objects from mathematics. The present section is about databases, which are classical objects from computer science. These are truly “categories and functors, without admitting it” (see Theorem 5.4.2.3).

## 4.5.1   What are databases?

Data, in particular the set of observations made during an experiment, plays a primary role in science of any kind. To be useful, data must be organized, often in a row-and-column display called a table. Columns existing in different tables can refer to the same data. A database is a collection of tables, each table T of which consists of a set of columns and a set of rows. We roughly explain the role of tables, columns, and rows as follows.
The existence of table T suggests the existence of a fixed methodology for observing objects or events of a certain type. Each column c in T prescribes a single kind or method of observation, so that the datum inhabiting any cell in column c refers to an observation of that kind. Each row r in T has a fixed sourcing event or object, which can be observed using the methods prescribed by the columns. The cell (r, c) refers to the observation of kind c made on event r. All of the rows in T should refer to uniquely identifiable objects or events of a single type, and the name of the table T should refer to that type.

Example 4.5.1.1. When graphene is strained (lengthened by a factor of x ⩾ 1), it becomes stressed (carries a force in the direction of the lengthening). The following is a made-up set of data: In the table in (4.11) titled “Graphene Sample,” the rows refer to graphene samples, and the table is so named. Each graphene sample can be observed according to the source supplier from which it came, the strain that it was subjected to, and the stress that it carried. These observations are the columns. In the right-hand table the rows refer to suppliers of various things, and the table is so named. Each supplier can be observed according to its full name and its phone number; these are the columns. In the left-hand table it appears either that each graphene sample was used only once, or that the person recording the data did not keep track of which samples were reused. If such details become important later, the lab may want to change the layout of the left-hand table by adding an appropriate column. This can be accomplished using morphisms of schemas (see Section 5.4.1).

### 4.5.1.2   Primary keys, foreign keys, and data columns

There is a bit more structure in the tables in (4.11) than first meets the eye. Each table has a primary ID column, on the left, as well as some data columns and some foreign key columns.
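Concretely, such a pair of tables can be sketched as dictionaries keyed by primary ID. The row data below is made up, merely shaped like (4.11); the constraint checked at the end previews the role of the foreign key column.

```python
# The two tables of (4.11), sketched as dictionaries: each key of the
# outer dict is a primary ID, each inner dict is one row.  Data made up.
graphene_sample = {
    'gs1': {'source': 'sup1', 'strain': 1.2, 'stress': 0.9},
    'gs2': {'source': 'sup2', 'strain': 1.5, 'stress': 1.7},
    'gs3': {'source': 'sup1', 'strain': 1.1, 'stress': 0.4},
}
supplier = {
    'sup1': {'full_name': 'AGM Inc.',     'phone': '555-0100'},
    'sup2': {'full_name': 'Graphene Co.', 'phone': '555-0101'},
}

# The Source column links the two tables: every value in it must be a
# primary ID of the Supplier table (referential integrity).
assert all(row['source'] in supplier for row in graphene_sample.values())
```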
The primary key column is tasked with uniquely identifying different rows. Each data column houses elementary data of a certain sort. Perhaps most interesting from a structural point of view are the foreign key columns, because they link one table to another, creating a connection pattern between tables. Each foreign key column houses data that needs to be further unpacked. It thus refers us to another foreign table, in particular, to the primary ID column of that table. In (4.11) the Source column is a foreign key to the Supplier table. Here is another example, taken from Spivak [39].

Example 4.5.1.3. Consider the bookkeeping necessary to run a department store. We keep track of a set of employees and a set of departments. For each employee e, we keep track of
E.1 the first name of e, which is a FirstNameString,
E.2 the last name of e, which is a LastNameString,
E.3 the manager of e, which is an Employee,
E.4 the department that e works in, which is a Department.
For each department d, we keep track of
D.1 the name of d, which is a DepartmentNameString,
D.2 the secretary of d, which is an Employee.
We can suppose that E.1, E.2, and D.1 are data columns (referring to names of various sorts), and E.3, E.4, and D.2 are foreign key columns (referring to managers, secretaries, etc.). The tables in (4.12) show how such a database might look at a particular moment in time.

### 4.5.1.4   Business rules

Looking at the tables in (4.12), one may notice a few patterns. First, every employee works in the same department as his or her manager. Second, every department’s secretary works in that department. Perhaps the business counts on these rules for the way it structures itself.
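Both patterns can be checked mechanically over the tables. A sketch, with the two tables stored as dictionaries and hypothetical row data standing in for (4.12):

```python
# Hypothetical instance shaped like the department-store tables (4.12).
employee = {
    101: {'first': 'David',  'last': 'Hilbert', 'manager': 103, 'worksIn': 'q10'},
    102: {'first': 'Bertie', 'last': 'Russell', 'manager': 102, 'worksIn': 'x02'},
    103: {'first': 'Alan',   'last': 'Turing',  'manager': 103, 'worksIn': 'q10'},
}
department = {
    'q10': {'name': 'Sales',      'secretary': 101},
    'x02': {'name': 'Production', 'secretary': 102},
}

def rule1():
    """Every employee works in the same department as his or her manager."""
    return all(employee[employee[e]['manager']]['worksIn'] == employee[e]['worksIn']
               for e in employee)

def rule2():
    """Every department's secretary works in that department."""
    return all(employee[department[d]['secretary']]['worksIn'] == d
               for d in department)

assert rule1() and rule2()
```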
In that case the database should enforce those rules, i.e., it should check that whenever the data is updated, it conforms to the rules:

Rule 1: For every employee e, the manager of e works in the same department that e works in.
Rule 2: For every department d, the secretary of d works in department d.    (4.13)

Together, the statements E.1, E.2, E.3, E.4, D.1, and D.2 from Example 4.5.1.3 and Rule 1 and Rule 2 constitute the schema of the database. This is formalized in Section 4.5.2.

### 4.5.1.5   Data columns as foreign keys

To make everything consistent, we could even say that data columns are specific kinds of foreign keys. That is, each data column constitutes a foreign key to some non-branching leaf table, which has no additional data.

Example 4.5.1.6. Consider again Example 4.5.1.3. Note that first names and last names have a particular type, which we all but ignored. We could cease to ignore them by adding three tables, as follows: In combination, (4.12) and (4.14) form a collection of five tables, each with the property that every column is either a primary key or a foreign key. The notion of data column is now subsumed under the notion of foreign key column. Each column is either a primary key (one per table, labeled ID) or a foreign key column (everything else).

## 4.5.2   Schemas

Pictures here, roughly graphs, should capture the conceptual layout to which the data conforms, without being concerned (yet) with the individual pieces of data that may populate the tables in this instant. We proceed at first by example; the precise definition of schema is given in Definition 4.5.2.7.

Example 4.5.2.1.
In Examples 4.5.1.3 and 4.5.1.6, the conceptual layout for a department store was given, and some example tables were shown. We were instructed to keep track of employees, departments, and six types of data (E.1, E.2, E.3, E.4, D.1, and D.2), and to follow two rules (Rule 1, Rule 2). All of this is summarized in the following picture: The five tables from (4.12) and (4.14) are seen as five vertices; this is also the number of primary ID columns. The six foreign key columns from (4.12) and (4.14) are seen as six arrows; each points from a table to a foreign table. The two rules from (4.13) are seen as declarations at the top of (4.15). These path equivalence declarations are explained in Definition 4.5.2.3.

Exercise 4.5.2.2. Create a schema (consisting of dots and arrows) describing the conceptual layout of information presented in Example 4.5.1.1.

In order to define schemas, we must first define the notion of congruence for an arbitrary graph G. Roughly, a congruence is an equivalence relation that indicates how different paths in G are related (see Section 4.3.2). A notion of congruence for monoids was given in Definition 4.1.1.17, and the current notion is a generalization of that. A congruence (in addition to being reflexive, symmetric, and transitive) has two sorts of additional properties: congruent paths must have the same source and target, and the composition of congruent paths with other congruent paths must yield congruent paths. Formally we have Definition 4.5.2.3.

Definition 4.5.2.3. Let G = (V, A, src, tgt) be a graph, and let PathG denote the set of paths in G (see Definition 4.3.2.1). A path equivalence declaration (or PED) is an expression of the form p ≃ q, where p, q ∈ PathG have the same source and target, src(p) = src(q) and tgt(p) = tgt(q). A congruence on G is a relation ≃ on PathG that has the following properties:
1. The relation ≃ is an equivalence relation.
2. If p ≃ q, then src(p) = src(q).
3. If p ≃ q, then tgt(p) = tgt(q).
4.
Suppose given paths p, p′ : a → b and q, q′ : b → c. If p ≃ p′ and q ≃ q′, then (p ++ q) ≃ (p′ ++ q′).

Remark 4.5.2.4. Any set of path equivalence declarations (PEDs) generates a congruence. The proof of this is analogous to that of Proposition 4.1.1.18. We tend to elide the difference between a congruence and a set of PEDs that generates it. The basic idea for generating a congruence from a set R of PEDs is to proceed as follows. First find the equivalence relation generated by R. Then every time there are paths p, p′ : a → b and q, q′ : b → c with p ≃ p′ and q ≃ q′, add to R the relation (p ++ q) ≃ (p′ ++ q′).

Exercise 4.5.2.5. Suppose given the following graph G, with the PED b[w, x] ≃ b[y, z]: In the congruence generated by that PED, is it the case that a[v, w, x] ≃ a[v, y, z]?

Exercise 4.5.2.6. Consider the graph shown in (4.15) and the two declarations shown at the top. They generate a congruence. Is it true that each of the following PEDs is an element of this congruence?
a. Employee[manager, manager, worksIn] ≃ Employee[worksIn]
b. Employee[worksIn, secretary] ≃ Employee (the trivial path)
c. Department[secretary, manager, worksIn, name] ≃ Department[name]

Definition 4.5.2.7. A database schema (or simply schema) 𝒞 consists of a pair 𝒞 ≔ (G, ≃), where G is a graph and ≃ is a congruence on G.

Example 4.5.2.8. Pictured in (4.15) is a graph with two PEDs; these generate a congruence, as discussed in Remark 4.5.2.4. Thus this constitutes a database schema. A schema can be converted into a system of tables, each with a primary key and some number of foreign keys referring to other tables, as discussed in Section 4.5.1. Definition 4.5.2.7 gives a precise conceptual understanding of what a schema is, and the following rules describe how to convert it into a table layout.

Rules of good practice 4.5.2.9.
Converting a schema 𝒞 = (G, ≃) into a table layout should be done as follows:
(i) There should be a table for every vertex in G, and if the vertex is named, the table should have that name.
(ii) Each table should have a leftmost column called ID, set apart from the other columns by a double vertical line.
(iii) To each arrow a in G having source vertex s ≔ src(a) and target vertex t ≔ tgt(a), there should be a foreign key column a in table s, referring to table t; if the arrow a is named, column a should have that name.

Example 4.5.2.10 (Discrete dynamical system). Consider the schema in which the congruence is trivial (i.e., generated by the empty set of PEDs). This schema is quite interesting. It encodes a set s and a function f : s → s. Such a thing is called a discrete dynamical system. One imagines s as the set of states, and for any state x ∈ s, the function f encodes a notion of next state f(x) ∈ s. For example,

Application 4.5.2.11. Imagine a deterministic quantum-time universe in which there are discrete time steps. We model it as a discrete dynamical system, i.e., a table of the form (4.17). For every possible state of the universe we include a row in the table. The state in the next instant is recorded in the second column.8

Example 4.5.2.12 (Finite hierarchy). The schema ℒoop can also be used to encode hierarchies, such as the manager relation from Examples 4.5.1.3 and 4.5.2.1. One problem with this, however, is that if a schema has even one loop, then it can have infinitely many paths (corresponding, e.g., to an employee’s manager’s manager’s manager’s … manager). Sometimes we know that in a given company that process eventually terminates, a famous example being that at Ben and Jerry’s ice cream company, there were only seven levels. In that case we know that an employee’s eighth-level manager is equal to his or her seventh-level manager.
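This “seven levels” bound is checkable over any instance: iterating the manager foreign key must stabilize by the seventh step. A sketch with a hypothetical manager table (a single-loop instance, i.e., a discrete dynamical system whose next-state map is “manager of”):

```python
# Hypothetical manager column: each employee's direct manager.
# Employee 7 is the top of the hierarchy (her own manager).
mgr = {1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 7}

def mgr_n(e, n):
    """e's n-th level manager: apply the mgr map n times."""
    for _ in range(n):
        e = mgr[e]
    return e

# The seventh- and eighth-level managers agree for every employee.
assert all(mgr_n(e, 8) == mgr_n(e, 7) for e in mgr)
```

In schema terms the stabilization is exactly a path equivalence, which the text goes on to write down.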
This can be encoded by the PED

E[mgr, mgr, mgr, mgr, mgr, mgr, mgr, mgr] ≃ E[mgr, mgr, mgr, mgr, mgr, mgr, mgr]

or more concisely, E[mgr]⁸ ≃ E[mgr]⁷.

Exercise 4.5.2.13. There is a nontrivial PED on ℒoop that holds for the data in Example 4.5.2.10.
a. What is it?
b. How many equivalence classes of paths in ℒoop are there after you impose that relation?

Exercise 4.5.2.14. Let P be a chess-playing program, playing against itself. Given any position (where a position includes the history of the game so far), P will make a move.
a. Is this an example of a discrete dynamical system?
b. How do the rules for ending the game in a win or draw play out in this model? (Look up online how chess games end if you do not know.)

### 4.5.2.15   Ologging schemas

It should be clear that a database schema is nothing but an olog in disguise. The difference is basically the readability requirements for ologs. There is an important new addition in this section, namely, that schemas and ologs can be filled in with data. Conversely, we have seen that databases are not any harder to understand than ologs are.

Example 4.5.2.16. Consider the olog We can document some instances of this relationship using the following table: Clearly, this table of instances can be updated as more moons are discovered by the olog’s owner (be it by telescope, conversation, or research).

Exercise 4.5.2.17. In fact, Example 4.5.2.16 did not follow rules 4.5.2.9. Strictly following those rules, copy over the data from (4.19) into tables that are in accordance with schema (4.18).

Exercise 4.5.2.18.
a. Write a schema (olog) in terms of the boxes ⌜a thing I own⌝ and ⌜a place⌝ and one arrow that might help a person remember where she decided to put random things.
b. What is a good label for the arrow?
c. Fill in some rows of the corresponding set of tables for your own case.

Exercise 4.5.2.19. Consider the olog
a.
What path equivalence declarations would be appropriate for this olog? You can use y : F → C, t : F → C, and f : C → F for “youngest,” “tallest,” and “father,” if you prefer.
b. How many PEDs are in the congruence?

Solution 4.5.2.19.
a. There are two: F[t, f] ≃ F and F[y, f] ≃ F, meaning “a father F’s tallest child has as father F” and “a father F’s youngest child has as father F.”
b. There are infinitely many PEDs in this congruence, including F[t, f, t] ≃ F[t] and F[t, f, y] ≃ F[y]. But the congruence is generated by only two PEDs, those in part (a).

## 4.5.3   Instances

Given a database schema (G, ≃), an instance of it is just a bunch of tables whose data conform to the specified layout. These can be seen throughout the previous section, most explicitly in the relationship between schema (4.15) and tables (4.12) and (4.14), and between schema (4.16) and table (4.17). Following is the mathematical definition.

Definition 4.5.3.1. Let 𝒞 = (G, ≃), where G = (V, A, src, tgt). An instance on 𝒞, denoted (PK, FK) : 𝒞 → Set, is defined as follows: One announces some constituents (A. primary ID part, B. foreign key part) and shows that they conform to a law (1. preservation of congruence). Specifically, one announces
A. a function PK : V → Set, i.e., to each vertex v ∈ V one provides a set PK(v);9
B. for every arrow a ∈ A with v = src(a) and w = tgt(a), a function FK(a) : PK(v) → PK(w).10
One must then show that the following law holds for any vertices v, w and paths p = v[a1, a2, …, am] and q = v[a′1, a′2, …, a′n] from v to w:
1. If p ≃ q, then for all x ∈ PK(v), we have FK(am) ∘ ⋯ ∘ FK(a2) ∘ FK(a1)(x) = FK(a′n) ∘ ⋯ ∘ FK(a′2) ∘ FK(a′1)(x) in PK(w).

Exercise 4.5.3.2. It can be considered a schema of which the following is an instance:
a. What is the set PK(⌜an email⌝)?
b. What is the set PK(⌜a person⌝)?
c.
What is the function FK(⌜is sent by⌝) : PK(⌜an email⌝) → PK(⌜a person⌝)?
d. Interpret the sentences at the bottom of 𝒞 as the Englishing of a simple path equivalence declaration (PED).
e. Is your PED satisfied by the instance (4.21); that is, does law 1 from Definition 4.5.3.1 hold?

Example 4.5.3.3 (Monoid action table). In Example 4.1.2.9 we saw how a monoid ℳ could be captured as an olog with only one object. As a database schema, this means there is only one table. Every generator of ℳ would be a column of the table. The notion of database instance for such a schema (see Definition 4.5.3.1) matches perfectly with the notion of action table from Section 4.1.3. Note that a monoid can act on itself, in which case this action table is the monoid’s multiplication table, as in Example 4.1.3.2, but it can also act on any other set, as in Example 4.1.3.1. If ℳ acts on a set S, then the set of rows in the action table will be S.

Exercise 4.5.3.4. Draw (as a graph) a schema for which table (4.1), page 109, looks like an instance.

Exercise 4.5.3.5. Suppose that ℳ is a monoid and some instance of it is written in table form, e.g., as in table (4.1). It is possible that ℳ is a group. What evidence in an instance table for ℳ might suggest that ℳ is a group?

### 4.5.3.6   Paths through a database

Let 𝒞 ≔ (G, ≃) be a schema, and let (PK, FK) : 𝒞 → Set be an instance on 𝒞. Then for every arrow a : v → w in G we get a function FK(a) : PK(v) → PK(w). Functions can be composed, so in fact for every path through G we get a function.
Namely, if p = v0[a1, a2, …, an] is a path from v0 to vn, then the instance provides a function

FK(p) ≔ FK(an) ∘ ⋯ ∘ FK(a2) ∘ FK(a1) : PK(v0) → PK(vn),

which first made an appearance as part of Law 1 in Definition 4.5.3.1.

Example 4.5.3.7. Consider the department store schema from Example 4.5.2.1. More specifically, consider the path Employee[worksIn, secretary, last] in (4.15), which points from Employee to LastNameString. The instance lets us interpret this path as a function from the set of employees to the set of last names; this could be a useful function to have in real-life office settings. The instance from (4.12) would yield the following function:

Employee ID | Secr. name
101 | Hillbert
102 | Russell
103 | Hillbert

Exercise 4.5.3.8. Consider the path p ≔ s[f, f] on the ℒoop schema in (4.16). Using the instance from (4.17), where PK(s) = {A, B, C, D, E, F, G, H}, interpret p as a function PK(s) → PK(s), and write this as a two-column table, as in Example 4.5.3.7.

Exercise 4.5.3.9. Given an instance (PK, FK) on a schema 𝒞, and given a trivial path p (i.e., p has length 0; it starts at some vertex but does not go anywhere), what function does p yield as FK(p)?

__________________

1 Although the function ⋆ : M × M → M is called the multiplication formula, it may have nothing to do with multiplication. It is just a formula for taking two inputs and returning an output.

2 Definition 4.1.2.1 actually defines a left action of (M, e, ⋆) on S. A right action is like a left action except the order of operations is somehow reversed. We focus on left actions in this text, but right actions are briefly defined here for completeness. The only difference is in the second condition.
Using the same notation, we replace it by the condition that for all m, n ∈ M and all s ∈ S, we have

3 More precisely, the monoid homomorphism F sends a list [t1, t2, …, tn] to the list [r1,1, r1,2, r1,3, r2,1, r2,2, r2,3, …, rn,1, rn,2, rn,3], where for each 1 ≤ i ≤ n, we have ti = (ri,1, ri,2, ri,3).

4 Adding stop-codons to the mix, we can handle more of ℛ, e.g., sequences that do not have a multiple-of-three many nucleotides.

5 If ℳ is a group, then every element m has one and only one inverse.

6 It is worth noting the connection with ev : HomSet(X, X) × X → X from (3.23).

7 Use the displayed preorder, not any kind of completion of what is written there.

8 If we want nondeterminism, i.e., a probabilistic distribution as the next state, we can use monads. See Section 7.3.

9 The elements of PK(v) are listed as the rows of table v, or more precisely, as the leftmost cells of these rows.

10 The arrow a corresponds to a column, and to each row r ∈ PK(v) the (r, a) cell contains the datum FK(a)(r).

11 The text at the bottom of the box in (4.20) is a summary of a fact, i.e., a path equivalence in the olog. Under the formal rules of Englishing a fact (see (2.20)), it would read as follows. Given x, a self-email, consider the following. We know that x is a self-email, which is an email, which is sent by a person who we call P(x). We also know that x is a self-email, which is an email, which is sent to a person who we call Q(x). Fact: Whenever x is a self-email, we have P(x) = Q(x).
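The fact in footnote 11 — P(x) = Q(x) for every self-email — is Law 1 of Definition 4.5.3.1 applied to the PED equating the ⌜is sent by⌝ and ⌜is sent to⌝ paths on self-emails. A sketch of that law as code, with entirely hypothetical instance data:

```python
# Sketch of Definition 4.5.3.1 on the self-email olog of footnote 11.
# PK assigns a set of row IDs to each vertex; FK assigns a function
# (here a dict) to each arrow.  All data is hypothetical.
PK = {'a self-email': {1, 2, 3}, 'a person': {'Alice', 'Bob'}}
FK = {'is sent by': {1: 'Alice', 2: 'Bob', 3: 'Alice'},
      'is sent to': {1: 'Alice', 2: 'Bob', 3: 'Alice'}}

def FK_path(path, x):
    """Evaluate a path (a list of arrow names) starting at row x."""
    for arrow in path:
        x = FK[arrow][x]
    return x

def satisfies_law(source_rows, p, q):
    """Law 1: congruent paths p and q must act identically on every row."""
    return all(FK_path(p, x) == FK_path(q, x) for x in source_rows)

# On self-emails, the sender and recipient columns agree row by row.
assert satisfies_law(PK['a self-email'], ['is sent by'], ['is sent to'])
```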
https://www.physicsforums.com/threads/help-me-compile-a-physics-software-list.64336/
# Help me compile a physics software list 1. Feb 19, 2005 ### relativitydude I'm going to be creating a website and would like to post links to the best physics software titles. Anyone have any favorites? 2. Feb 20, 2005 ### relativitydude 3. Feb 20, 2005 ### polyb Some indespensible software titles that you should list, IMHO are: LabVIEW - I real nice GUI program for the lab! Kind of pricey though. Mathematica- Need I say more? OK, pricey. MATLAB- Easy, big, and pricey but good. Python-Because it's free, open source compiler with many goodies. Plus it was inspired by Monty Pyhton. This is the link to Sci-Py. Origin- a nice program for data analysis and graphing. C/C++ - Standard! Fortran- Old but a sleek fast procedural program that wont clog up CPU resources plus Numerical Recipes for just about any mathematical operation you can think of! This one wont be going away any time soon, though I suspect python may give it some competition for numerical computations. LaTeX-If your going to publish, you need this! That's all I can think of at the moment. Hope that helps. Sorry I could not provide all links. Last edited: Feb 20, 2005 4. Feb 20, 2005 ### PerennialII 5. Feb 20, 2005 ### Gokul43201 Staff Emeritus For crystallographers : * Peak Fitting - XFit (by Coelho and Cheary) * Autoindexing - Crysfire (by Robin Shirley) * Space Group Assignment and Unit Cell Refinement - Chekcell (by Jean Laugier and Bernard Bochu) * Rietveld Refinement - GSAS Gui (by Alan Larson & Bob Von Dreele, GUI by Brian Toby) * Reciprocal Space Structure Solution - EXPO/Sirpow (by Carmelo Giacovazzo and the IRMEC Group at Bari, Italy) * Line Profile Analysis - Breadth (by Davor Balzar) * Perovskite Structure Prediction - SPuDS (by Mike Lufaso and Pat Woodward) 6. Feb 20, 2005 ### relativitydude Thanks for the links, guys but I was looking for more down to earth, student software. Especially like Physics 101 SE which is like $10, not the$5000 variety :P 7. 
Feb 20, 2005 ### Gokul43201 Staff Emeritus
I wish people would specify things in greater detail when asking for responses. Some of us may have put considerable time and effort into hunting out those links. Sorry I'm posting this rant here (this is hardly the first time this has happened); you just happened to be my last straw.

8. Feb 20, 2005
I thought it would be intuitively obvious; how many of us go out and get $5000 software titles?

9. Feb 20, 2005 ### graphic7
There are no $5000 software titles listed. Mathematica with a full, transferable license is $3,000. Matlab and Maple are about the same or cheaper. Mathematica, Matlab, and Maple all have student licenses available for $150 each. The student license typically lasts until you are no longer attending school. Typically, if you go into any research field, whether that be Mathematics or Physics, you'll probably run into Mathematica, Matlab, and/or Maple. They are professional/educational products for professional/educational purposes.

10. Feb 20, 2005 ### Moonbear Staff Emeritus
It wasn't intuitively obvious what you wanted at all. You asked for software used by physicists; you didn't specify cheap software for students. There probably isn't much out there that's really cheap like what you're looking for.

11. Feb 20, 2005 ### Davorak

12. Feb 21, 2005 ### Gokul43201 Staff Emeritus
None of the programs on my list cost $5000. They are mostly (if not all) free (open source) software. But also, none of them is a Physics 101 type of package. I'm sorry that I started this unpleasantness here... I was just venting. I have no intention of derailing this thread. But please do keep in mind that it would be a lot nicer if you gave as much detail as possible when making a request. Thanks! That's good enough for me! Now let's bury the hatchet and get this back on track, wot? Have you looked into the links directory?
There are some 100 and 200-level lecture notes (under Classical or General Physics) and other neat resources there. And the homework help links are still under construction, so if you check back later, Tom may have put in a bunch of useful stuff there too.
Last edited: Feb 21, 2005

13. Feb 21, 2005 ### Nylex
Great, so no-one uses MathCad?

14. Feb 21, 2005 ### PerennialII
Many do and are very satisfied with it (and do really complex stuff with it)... personally I've found it nice overall, but IMHO there comes a point when Mathematica & Maple do sweep the floors with it.

15. Feb 21, 2005 ### danne89
I'm of the opinion that all specialized software should have the capability to be programmed, and be bound to C programs. I think you should check out SourceForge for libraries which include physics-related functionality.

16. Feb 23, 2005 ### Artman
Okay, the actual program is a learning aid and not terribly advanced (ages 13-18); however, it comes with a pretty decent calculation surface that allows you to write out calculations as they would appear on paper (good for us non-math types), and it's cheap: $39.99. Math Soft

17. Feb 24, 2005 ### tribdog
I find this one to be helpful when Gokul is being grouchy and I've broken his back with another last straw.
https://chemistry.stackexchange.com/help/badges/28/famous-question
# Help Center > Badges > Famous Question

Question with 10,000 views. This badge can be awarded multiple times. Awarded 1700 times.
https://crypto.stackexchange.com/questions/40121/recovering-a-secret-that-has-been-blinded-several-times
Recovering a secret that has been blinded several times

I'm analyzing a protocol that, during one of its steps, sends a blinded secret. Let's denote the secret $x \in \mathbb Z_p^*$ (for $p$ prime) and the blinded secret $y$, so that $y = r\cdot x \bmod p$, where $r$ is a blinding factor randomly sampled from $\mathbb Z_p^*$. We can assume that the blinding factors won't repeat. Several runs of the protocol would produce several $y_i$ values, with the same fixed secret $x$: $$y_1 = r_1 \cdot x \bmod p$$ $$y_2 = r_2 \cdot x \bmod p$$ $$...$$ $$y_n = r_n \cdot x \bmod p$$ Knowing only the $y_i$ values, what would be the best algorithm to retrieve $x$?

If all we know are the blinded values $y_i$ (and the modulus $p$), and if the blinding factors $r_i$ are indeed sampled randomly from the multiplicative group modulo $p$, I don't see any way to recover the secret $x$. This is because, for any list of blinded values $y_i$ and any candidate $x$ value, we can compute a unique list of blinding values $r_i \equiv y_i \cdot x^{-1} \pmod p$ that, when multiplied by $x$ modulo $p$, will yield exactly those blinded values. Given that the $r_i$ values are chosen randomly, this particular list of blinding values is exactly as likely as any other. Thus, this blinding scheme appears to be unconditionally secure. (Also, the knowledge that the $r_i$ values cannot repeat won't help, since all it implies is that the $y_i$ values won't repeat either. If we did observe repeated $y_i$ values for the same secret $x$, we would know that the corresponding $r_i$ values must be the same, but would still gain no further knowledge about $x$.)

There is no algorithm to find $x$ given only the $y_i$.
As long as $(r_i)_{i=1\dots n}$ is sampled uniformly from the set $Z := \{(z_i)\in(\mathbb Z_p^\times)^n \mid 1\le i<j\le n\implies z_i\ne z_j\}$ and the attacker gets all the $y_i$'s but no information about the $r_i$'s and $x$ (during the other steps), no algorithm can obtain any information about $x$, since multiplying by $x$ preserves the uniform distribution on $Z$. Hence knowing $(r_i)_{i=1\dots n}$ is equivalent to knowing $(y_i)_{i=1\dots n}$ [as long as one doesn't know both!], as both look (statistically) the same, so nobody is able to distinguish between the random values and the blinded secrets. So anything one can conclude from the blinded secrets, one can just as well conclude from the random values. If the blinding factors are allowed to repeat, using $Z=(\mathbb Z_p^\times)^n$ in the argument above leads to the same conclusion.
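The argument above can be checked numerically. A minimal sketch in Python (small illustrative prime chosen here for readability; three-argument `pow` for modular inverse needs Python 3.8+): for any candidate secret, there exists a list of blinding factors that reproduces the observed $y_i$ exactly, so the observations are consistent with every possible $x$.

```python
import random

p = 1009  # a small prime, purely for illustration

def blind(x, n):
    """Produce n blinded values y_i = r_i * x mod p, with distinct random r_i in Z_p^*."""
    rs = random.sample(range(1, p), n)  # distinct blinding factors, as in the question
    return [r * x % p for r in rs]

x_secret = 123
ys = blind(x_secret, 5)

# For ANY candidate x', the unique factors r_i' = y_i * x'^{-1} mod p
# explain the observations exactly, so the y_i reveal nothing about x.
for x_candidate in (1, 57, x_secret, p - 1):
    rs = [y * pow(x_candidate, -1, p) % p for y in ys]
    assert all(r * x_candidate % p == y for r, y in zip(rs, ys))
```

Every candidate passes the consistency check, which is exactly the "unconditionally secure" claim in the first answer.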
https://kb.osu.edu/dspace/handle/1811/20417
ZEEMAN QUANTUM-BEAT SPECTROSCOPY OF $NO_{2}$: EIGENSTATE-RESOLVED LANDÉ $g_{F}$ FACTORS NEAR DISSOCIATION THRESHOLD

Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/20417
Title: ZEEMAN QUANTUM-BEAT SPECTROSCOPY OF $NO_{2}$: EIGENSTATE-RESOLVED LANDÉ $g_{F}$ FACTORS NEAR DISSOCIATION THRESHOLD
Creators: Xin, J.; Reid, S. A.
Issue Date: 2002
Publisher: Ohio State University
Abstract: The sign and magnitude of Landé $g_{F}$ factors for single $NO_{2}$ rovibronic (J = 3/2) eigenstates in the $15\ cm^{-1}$ region below dissociation threshold $(D_{0} = 25,128.57\ cm^{-1})$ were investigated using Zeeman quantum-beat spectroscopy. The derived Landé $g_{F}$ factors exhibit pronounced fluctuations about an average much smaller than expected in the absence of rovibronic perturbations, which destroy the goodness of the N and K quantum numbers and the $J=N+S$ coupling scheme. The $F=J+I$ coupling scheme was found to be valid near $D_{0}$ to within the uncertainty of our measurements, and the average Landé $g_{F}$ factors near dissociation threshold are in good agreement with those calculated under the assumption of complete rovibronic mixing. Our findings do not provide evidence for the participation of repulsive quartet states near dissociation threshold.
Description: Author Institution: Department of Physics and Engineering Technologies, Bloomsburg University; Department of Chemistry, Marquette University
URI: http://hdl.handle.net/1811/20417
Other Identifiers: 2002-MJ-01
https://www.black-holes.org/explore/glossary/47-v/108-vacuum-energy
## Vacuum Energy The energy that is present even in otherwise empty space. This energy has been measured to exist (in the "Casimir Effect"). Whereas matter causes the expansion of the Universe to slow down, vacuum energy actually causes the expansion to speed up. ## Inspiration Dark-heaving—boundless, endless, and sublime, The image of eternity— the Throne of the Invisible... From Lord Byron's Childe Harold's Pilgrimage Canto IV, Stanza 183
http://www.ck12.org/book/CK-12-Math-Analysis-Concepts/r4/section/2.4/
Graphs of Polynomials Using Zeros | CK-12 Foundation

# 2.4: Graphs of Polynomials Using Zeros

Created by: CK-12

How is finding and using the zeroes of a higher-degree polynomial related to the same process you have used in the past on quadratic functions?

### Watch This

This video is a good introduction to graphing cubic and higher degree polynomials. Note that a graphing calculator is used in the video.

### Guidance

The following procedure can be followed when graphing a polynomial function.

• Use the leading-term test to determine the end behavior of the graph.
• Find the $x$-intercept(s) of $f(x)$ by setting $f(x)=0$ and then solving for $x$.
• Find the $y$-intercept of $f(x)$ by setting $y=f(0)$ and finding $y$.
• Use the $x$-intercept(s) to divide the $x$-axis into intervals and then choose test points to determine the sign of $f(x)$ on each interval.
• Plot the test points.
• If necessary, find additional points to determine the general shape of the graph.

If $a_{n}x^{n}$ is the leading term of a polynomial, then the behavior of the graph as $x\to\infty$ or $x\to-\infty$ is one of the four following behaviors:

1. If $a_{n}>0$ and $n$ even: the graph rises on both ends.
2. If $a_{n}<0$ and $n$ even: the graph falls on both ends.
3. If $a_{n}>0$ and $n$ odd: the graph falls to the left and rises to the right.
4. If $a_{n}<0$ and $n$ odd: the graph rises to the left and falls to the right.

#### Example A

Find the roots (zeroes) of the polynomial: $h(x)=x^{3}+2x^{2}-5x-6$

Solution: Start by factoring: $h(x)=x^{3}+2x^{2}-5x-6=(x+1)(x-2)(x+3)$

To find the zeros, set $h(x)=0$ and solve for $x$: $(x+1)(x-2)(x+3)=0$ This gives $x+1 = 0$, $x-2 = 0$, or $x+3 = 0$, i.e. $x = -1$, $x = 2$, or $x = -3$. So we say that the solution set is $\{-3, -1, 2\}$. They are the zeros of the function $h(x)$. The zeros of $h(x)$ are the $x$-intercepts of the graph $y=h(x)$ below.
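Example A's factoring result can also be double-checked numerically. A short sketch using NumPy's polynomial root finder (not part of the lesson, just a sanity check; coefficients are listed from highest to lowest degree):

```python
import numpy as np

# h(x) = x^3 + 2x^2 - 5x - 6, coefficients from highest to lowest degree
coeffs = [1, 2, -5, -6]
roots = np.sort(np.roots(coeffs).real)

# agrees with the factored form (x + 1)(x - 2)(x + 3): zeros at -3, -1, 2
assert np.allclose(roots, [-3, -1, 2])
```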
#### Example B

Find the zeros of $g(x)=-(x-2)(x-2)(x+1)(x+5)(x+5)(x+5)$.

Solution: The polynomial can be written as $g(x)=-(x-2)^{2}(x+1)(x+5)^{3}$ To solve the equation, we simply set it equal to zero: $-(x-2)^{2}(x+1)(x+5)^{3}=0$ This gives $x-2 = 0$, $x+1 = 0$, or $x+5 = 0$, i.e. $x = 2$, $x = -1$, or $x = -5$. Notice the occurrence of the zeros in the function. The factor $(x-2)$ occurred twice (because it was squared), the factor $(x+1)$ occurred once, and the factor $(x+5)$ occurred three times. We say that the zero we obtain from the factor $(x-2)$ has a multiplicity $k=2$ and the factor $(x+5)$ has a multiplicity $k=3$.

#### Example C

Graph the polynomial function $f(x)=-3x^{4}+2x^{3}$.

Solution: Since the leading term here is $-3x^{4}$, we have $a_{n}=-3<0$ and $n=4$ even. Thus the end behavior of the graph as $x\to\infty$ and $x\to-\infty$ is that of case 2 above ($a_{n}<0$, $n$ even): the graph falls on both ends. We can find the zeros of the function by simply setting $f(x)=0$ and then solving for $x$: $-3x^{4}+2x^{3} = 0$, i.e. $-x^3(3x-2) = 0$ This gives $x=0\quad \text{or} \quad x=\frac{2}{3}$ So we have two $x$-intercepts, at $x=0$ and at $x=\frac{2}{3}$, with multiplicity $k=3$ for $x=0$ and multiplicity $k=1$ for $x=\frac{2}{3}$. To find the $y$-intercept, we find $f(0)$, which gives $f(0)=0$ So the graph passes the $y$-axis at $y=0$. Since the $x$-intercepts are 0 and $\frac{2}{3}$, they divide the $x$-axis into three intervals: $(-\infty, 0), \left ( 0, \frac{2}{3} \right ),$ and $\left ( \frac{2}{3}, \infty \right )$. Now we are interested in determining on which intervals the function $f(x)$ is negative and on which intervals it is positive. To do so, we construct a table and choose a test value for $x$ from each interval and find the corresponding $f(x)$ at that value.
| Interval | Test value $x$ | $f(x)$ | Sign of $f(x)$ | Location of points on the graph |
|---|---|---|---|---|
| $(-\infty, 0)$ | $-1$ | $-5$ | $-$ | below the $x$-axis |
| $\left ( 0, \frac{2}{3} \right )$ | $\frac{1}{2}$ | $\frac{1}{16}$ | $+$ | above the $x$-axis |
| $\left ( \frac{2}{3}, \infty \right )$ | $1$ | $-1$ | $-$ | below the $x$-axis |

Those test points give us three additional points to plot: $(-1, -5), \left ( \frac{1}{2},\frac{1}{16} \right )$, and $(1, -1)$. Now we are ready to plot our graph. We have a total of three intercept points, in addition to the three test points. We also know how the graph is behaving as $x\to-\infty$ and $x\to+\infty$. This information is usually enough to make a rough sketch of the graph. If we need additional points, we can simply select more points to complete the graph.

In the introduction to the lesson, it was noted that there are similarities in graphing using zeroes between quadratic functions and higher-degree polynomials. Were you able to identify some of those similarities? Despite the more complex nature of the graphs of higher-degree polynomials, the general process of graphing using zeroes is actually very similar. In both cases, your goal is to locate the points where the graph crosses the $x$ or $y$ axis. In both cases, this is done by setting the $y$ value equal to zero and solving for $x$ to find the $x$-axis intercepts, and setting the $x$ value equal to zero and solving for $y$ to find the $y$-axis intercepts.

### Vocabulary

Cubic Function: A function containing an $x^{3}$ term as the highest power of $x$.
Quartic Function: A function containing an $x^{4}$ term as the highest power of $x$.
Zeroes of a Polynomial: The input values (the $x$ values) at which the function's output ($f(x)$ or $y$) equals zero; graphically, the $x$-intercepts.
Interval: A portion of a function, generally defined by a starting and ending value of $x$.

### Guided Practice

Questions

Sketch a graph of each power function using the properties of the power functions.
1) $f(x)=-3x^{4}$
2) $h(x)=\frac{1}{2}x^{5}$
3) $q(x)=4x^{8}$
4) Find the zeros and sketch a graph of the polynomial $f(x)=x^{4}-x^{2}-56$
5) Graph $g(x)=-(x-2)^{2}(x+1)(x+5)^{3}$

Solutions

1) Step 1: By applying the leading term test, we can say that since the coefficient $-3$ is $<0$, and since the power $4$ is even, the graph falls on both ends.
Step 2: By evaluating the function at $x = 1$ and $x = -1$, we get the points $(1, -3)$ and $(-1, -3)$.
Step 3: Sketch the graph through these points with the stated end behavior.

2) Step 1: By applying the leading term test, we can say that since the coefficient $\frac{1}{2}$ is $>0$, and since the power $5$ is odd, the graph falls to the left and rises to the right.
Step 2: By evaluating the function at $x = 1$ and $x = -1$, we get the points $(1, 1/2)$ and $(-1, -1/2)$.
Step 3: Sketch the graph through these points with the stated end behavior.

3) Step 1: By applying the leading term test, we can say that since the coefficient $4$ is $>0$, and since the power $8$ is even, the graph rises on both ends.
Step 2: By evaluating the function at $x = 1$ and $x = -1$, we get the points $(1, 4)$ and $(-1, 4)$.
Step 3: Sketch the graph through these points with the stated end behavior.

4) This is a factorable equation, $f(x) = x^4-x^2-56 = (x^2-8)(x^2+7)$ Setting $f(x)=0$ gives $(x^{2}-8)(x^{2}+7) = 0$ The first factor gives $x^{2}-8 = 0$, so $x^2 = 8$ and $x = \pm \sqrt{8} = \pm 2\sqrt{2}$, and the second factor gives $x^{2}+7 = 0$, so $x^2 = -7$ and $x = \pm \sqrt{-7} = \pm i\sqrt{7}$. So the solutions are $\pm2\sqrt{2}$ and $\pm i\sqrt{7}$, a total of four zeros of $f(x)$. Keep in mind that only the real zeros of a function correspond to the $x$-intercepts of its graph.
5) Use the zeros to create a table of intervals and see whether the function is above or below the $x$-axis in each interval:

| Interval | Test value $x$ | $g(x)$ | Sign of $g(x)$ | Location of graph relative to $x$-axis |
|---|---|---|---|---|
| $(-\infty, -5)$ | $-6$ | $-320$ | $-$ | Below |
| $x=-5$ | $-5$ | $0$ | NA | |
| $(-5, -1)$ | $-2$ | $432$ | $+$ | Above |
| $x=-1$ | $-1$ | $0$ | NA | |
| $(-1, 2)$ | $0$ | $-500$ | $-$ | Below |
| $x=2$ | $2$ | $0$ | NA | |
| $(2, \infty)$ | $3$ | $-2048$ | $-$ | Below |

Note that the sign changes at $x=-5$ and $x=-1$ (odd multiplicities) but not at $x=2$ (even multiplicity), consistent with the leading term $-x^{6}$, which makes the graph fall on both ends. Finally, use this information and the test points to sketch a graph of $g(x)$.

### Practice

1. If $c$ is a zero of $f$, then $c$ is a/an _________________________ of the graph of $f$.
2. If $c$ is a zero of $f$, then $(x - c)$ is a factor of ___________________?
3. Find the zeros of the polynomial: $P(x) = x^3 - 5x^2 + 6x$

Consider the function: $f(x) = -3(x - 3)^4(5x - 2)(2x - 1)^3(4 - x)^2$.
1. How many zeros ($x$-intercepts) are there?
2. What is the leading term?

Find the zeros and graph the polynomial. Be sure to label the $x$-intercepts, $y$-intercept (if possible) and have correct end behavior. You may use technology for #s 9-12.
1. $P(x) = -2(x + 1)^2(x - 3)$
2. $P(x) = x^3 + 3x^2 - 4x - 12$
3. $f(x) = -2x^3 + 6x^2 + 9x + 6$
4. $f(x) = -4x^2 -7x +3$
5. $f(x) = 2x^5 +4x^3 + 8x^2 +6x$
6. $f(x) = x^4 - 3x^2$
7. $g(x) = x^2 - |x|$
8. Given: $P(x) = (3x +2)(x - 7)^2(9x + 2)^3$ State: a) The leading term; b) The degree of the polynomial; c) The leading coefficient.

Determine the equation of the polynomial based on the graph.

Nov 01, 2012 May 27, 2014
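The interval test used in Example C and in Guided Practice 5 can be automated. A minimal sketch in plain Python (the helper name `sign_table` and the midpoint test points are my own choices, not part of the lesson; midpoints differ from the book's test values but give the same signs):

```python
def sign_table(f, zeros, probe=1.0):
    """Evaluate f at one test point inside each interval determined by the
    sorted real zeros, and report the sign: '+' (above the x-axis) or '-' (below)."""
    cuts = sorted(zeros)
    tests = [cuts[0] - probe]                               # left of all zeros
    tests += [(a + b) / 2 for a, b in zip(cuts, cuts[1:])]  # midpoints between zeros
    tests.append(cuts[-1] + probe)                          # right of all zeros
    return [(x, f(x), '+' if f(x) > 0 else '-') for x in tests]

# Example C: f(x) = -3x^4 + 2x^3, zeros at 0 and 2/3
f = lambda x: -3 * x**4 + 2 * x**3
for x, fx, sign in sign_table(f, [0, 2 / 3]):
    print(f"x = {x:7.4f}  f(x) = {fx:9.4f}  {sign}")
```

The signs come out below / above / below, matching the table in the worked example.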
https://www.physicsforums.com/threads/postulates-of-classical-statistical-mechanics.658244/
# Postulates of Classical Statistical Mechanics

1. Dec 11, 2012 ### shaileshtrip
Can someone please explain the "postulates of classical statistical mechanics", "a priori probability", and "equilibrium"? I am a postgraduate student, and these chapters in physics seem very difficult; I need some step-by-step explanation.

2. Dec 11, 2012 ### Jano L.
These are vast topics. You need to find some good books on statistical physics. There are many, some of them easy to read, some of them very hard to read, but I do not know any really good one. I liked:
Feynman's Lectures on Physics, chapters 39-46
Franz Mandl, Statistical Physics
Herbert Callen, Thermodynamics and Thermostatistics (chapters 15, 16, 17)
Landau and Lifshitz, Statistical Physics I, first chapters

3. Dec 11, 2012 ### Studiot
Equilibrium is not a postulate; it is a precisely defined condition, that of no average change in any physical quantity of interest. Nor is it specifically confined to statistical mechanics. You should revise the terms dynamic equilibrium, static equilibrium, stable equilibrium, unstable equilibrium, and metastable equilibrium before proceeding.
The principle of a priori probabilities means that a system will inhabit every state available to it in accordance with the statistical weight of that state, if we observe it for long enough. A state is a particular set of values of the properties of interest.
A good, easily readable introduction is offered in Statistical Thermodynamics by Andrew Maczek.

4. Dec 11, 2012 ### shaileshtrip
But these topics are listed in my course book and I should read them. Please explain them to me, and can you suggest some good books which cover these problems step by step?

5. Dec 11, 2012 ### A. Neumaier
The thermodynamics book by Callen is excellent in this respect.

6.
Dec 11, 2012 ### dextercioby
Classical statistical thermodynamics/physics for equilibrium ensembles can be derived from these 2 axioms and the ergodic hypothesis of Gibbs:

AXIOM 1: $$S= - k \langle \ln \rho^{*} \rangle_{\rho^{*}}$$ $S$ is called the 'statistical entropy'.

AXIOM 2: The classical statistical equilibrium ensembles are described by probability densities for which the statistical entropy described in Axiom 1 is maximal with respect to all values obtained from the family of acceptable probability densities.

Note: Acceptability for a probability density means that the ergodic principle of Gibbs is valid for each and every one of them.

7. Dec 11, 2012 ### shaileshtrip
The Physics of Everyday Phenomena: W. Thomas Griffith..... @A. Neumaier "The thermodynamics book by Callen"

8. Dec 12, 2012 ### Studiot
shaileshtrip, this is obviously important to you since you keep coming back. However, your question(s) are too vague. You really need to tell us what course you are following, its syllabus, and what stage you are at. You will not find all you want in any one textbook, especially not in a subject that is still rapidly developing such as quantum statistics. Yes, Callen treats a range of quantum statistical subjects, but I fear that you will find the book less than digestible, considering your comment on your own textbook, which you have not named. The range included in Callen is wide, if anything too wide. It would be difficult to use the text presented for practical purposes in any particular area. For this you would need dedicated texts, e.g. in solid state / semiconductor physics, spectroscopy, physical chemistry, etc. Less comprehensive texts that extract principles and present statements linking the ideas would also be useful, such as the observation in Moore (Physical Chemistry); I have shortened the full extract. Over to you.

9. Dec 12, 2012 ### A. Neumaier
Not really (but superficially), as it doesn't assume any quantum mechanics.
If you want to have the latter included, I'd recommend Reichl's Statistical Physics. If you are mathematically minded, you might also find useful Part II of my online book http://www.mat.univie.ac.at/~neum/physfaq/physics-faq.html
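A minimal numerical illustration of the two axioms quoted in post #6, in their discrete form $S = -k \sum_i p_i \ln p_i$ (with $k$ set to 1 here purely for convenience): among distributions over $n$ equally accessible states, the uniform one maximizes the statistical entropy, which is what the postulate of equal a priori probabilities singles out for the microcanonical ensemble.

```python
import math

def gibbs_entropy(p, k=1.0):
    """Discrete form of Axiom 1: S = -k * sum_i p_i ln p_i."""
    return -k * sum(pi * math.log(pi) for pi in p if pi > 0)

n = 4
uniform = [1.0 / n] * n        # equal a priori probabilities
biased = [0.7, 0.1, 0.1, 0.1]  # any other distribution over the same states

# The uniform distribution maximizes S, and its maximum value is k*ln(n).
assert gibbs_entropy(uniform) > gibbs_entropy(biased)
assert abs(gibbs_entropy(uniform) - math.log(n)) < 1e-12
```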
https://answers.launchpad.net/mg5amcnlo/+question/668541
Dear All, I am trying to generate the following process: pp -> Z' -> ZH, with Z to quarks and H to ZZ, with the Zs to leptons (charged and neutral). I am using the proc card you can read in [*].

As you can see, I am generating this process by doing:
generate p p > vz, (vz > h z, z > q q, (h > z vl vl~ / vz, z > l+ l-))
add process p p > vz, (vz > h z, z > q q, (h > z l+ l- / vz, z > vl vl~))

The problem is that, when I look at the diagrams, I also find some diagrams with H -> tau tau, tau -> Z tau. I would not expect to have these diagrams and in principle I would like to remove them. Do you know how I can do that?

Best regards, Aniello

[*]
set group_subprocesses Auto
set ignore_six_quark_processes False
set gauge unitary
set complex_mass_scheme False
import model Vector_Triplet_free_Width_UFO
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
define l+ = e+ mu+ ta+
define l- = e- mu- ta-
define p = g u c d s u~ c~ d~ s~
define q = u c d s b u~ c~ d~ s~ b~
generate p p > vz, (vz > h z, z > q q, (h > z vl vl~ / vz, z > l+ l-))
add process p p > vz, (vz > h z, z > q q, (h > z l+ l- / vz, z > vl vl~))

## Question information
Language: English
Status: Solved
Solved by: Aniello
Solved: 2018-05-04
Last query: 2018-05-04

Olivier Mattelaer (olivier-mattelaer) said on 2018-05-03: #1
Hi,
> I would not expect to have these diagrams and in principle I would like to remove them.
Those diagrams exist even if they should be suppressed. I guess that it is fine to remove them since they should be gauge invariant on their own. I would suggest replacing your syntax with:
generate p p > vz, (vz > h z, z > q q, (h > z > l+ l- vl vl~ / vz ta+ ta-))
Cheers, Olivier

Aniello (aniello.spiezia) said on 2018-05-03: #2
Dear Olivier, thanks a lot. I am going to test your suggestion. One question: with the syntax you are suggesting, am I also removing the Z -> tau tau diagrams? And these diagrams should be present, I guess. Do you know if this is true?
Best regards, Aniello

Olivier Mattelaer (olivier-mattelaer) said on 2018-05-03: #3
Hi,
No, you should have them. Final-state particles are not impacted by the / . Internal particles are. For example, if you do
generate g g > g g / g
then you have only the four-point interaction Feynman diagram. This definition is of course meaningless since it breaks gauge invariance.
Cheers, Olivier
> One question: with the syntax you are suggesting, I am removing also the Z -> tau tau diagrams, no? And these diagrams should be present, I guess. Do you know if this is true? > Best regards, > Aniello > > -- Aniello (aniello.spiezia) said on 2018-05-03: #4 Dear Olivier, thanks again. I have tested it and the problem is solved. I have an additional question, hoping it is not too naive. I have produced the inclusive process, i.e.: generate p p > vz, (vz > h z, z > q q) and I get a cross section of 1076 picobarn. I then try with the H->ZZ and the Z bosons into leptons, i.e.: generate p p > vz, (vz > h z, z > q q, (h > z > l+ l- vl vl~ / vz ta+ ta-)) and I would expect a cross section given by the previous one multiplied by the BRs, i.e. 0.026 x (0.2 x 0.037 x 3) x 2 = 0.00115, where: BR(H->ZZ) = 0.026 BR(Z->neutrinos) = 0.2 BR(Z->charged leptons) = 0.037*3 and the factor 2 is to take care of the two Z bosons. So I expect a cross section of 1076*0.00115=1.242 picobarn So it seems to me that I am not considering some of the processes. Furthermore, if I look at the diagrams, I see only the diagrams with Z to electrons or electronic neutrinos (but I guess this is due to the fact that the diagrams are not repeated). Do you have some suggestions on this issue? Thanks again, Aniello Olivier Mattelaer (olivier-mattelaer) said on 2018-05-03: #5 Hi, You should check this FAQ. Also technically you can not use BR in this case since you have one Z which is offshell (so narrow width approximation does not hold) Cheers, Olivier FAQ #2442: “why production and decay cross-section didn't agree.”. Aniello (aniello.spiezia) said on 2018-05-04: #6 Dear Olivier, thanks a lot. Aniello
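As an aside, the branching-ratio arithmetic quoted in the thread can be reproduced in a few lines. This is only a sketch using the rounded BR values as quoted above (not current PDG numbers), and, as Olivier notes, the narrow-width approximation behind this estimate does not really hold for the off-shell Z:

```python
# Branching-ratio estimate from the thread, using the rounded values quoted
# there (not current PDG numbers).
sigma_incl = 1076.0      # pb, inclusive p p > vz, (vz > h z, z > q q)
br_h_zz = 0.026          # BR(H -> ZZ), as quoted
br_z_inv = 0.2           # BR(Z -> neutrinos), summed over flavours
br_z_ll = 0.037 * 3      # BR(Z -> l+ l-), e/mu/tau combined
combinatoric = 2         # either Z can be the one going to charged leptons

ratio = br_h_zz * br_z_inv * br_z_ll * combinatoric
sigma_dec = sigma_incl * ratio
print(round(ratio, 5), round(sigma_dec, 3))  # 0.00115 1.242
```

Note that 1.242 pb comes from the unrounded product (0.0011544); multiplying by the rounded 0.00115 would give 1.237 pb.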
https://stats.stackexchange.com/questions/400734/should-one-hot-encoding-be-done-on-likert-scale-ratings
Should one-hot encoding be done on Likert-scale ratings?

This seems like an easy question, but I haven't been able to find a definitive source for it, or questions that address this topic directly. When applying a classification algorithm, should you apply one-hot encoding to Likert-scale features? Take for example the features in IBM's attrition dataset:

JobSatisfaction 1 'Low' 2 'Medium' 3 'High' 4 'Very High'

I know that one-hot encoding should be done on categorical values (e.g., gender), but I don't know what to do about Likert-scale items, which can be interpreted as ordinal, nominal or even continuous (though I tend to disagree with the latter). I'm running logistic regression on this dataset without creating dummies for the Likert features; at best my AUC is .59. I checked the approaches of others on Kaggle, and those that achieved an AUC of .7 to .8 one-hot encoded some of the Likert features.

In the extreme case with only three distinct values, say $1, 2, 3$, representation via dummies or via a quadratic polynomial would use equal degrees of freedom, so both models would be saturated (giving the same fit). With many (say 9) distinct values, you could compare three nested models: linear, with a regression spline, or with dummies.
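For concreteness, here is a minimal sketch (plain Python, no pandas/sklearn, with a hypothetical JobSatisfaction value) contrasting the two treatments: ordinal encoding keeps one column of ranks, while one-hot encoding expands the feature into one dummy column per level.

```python
# Likert levels in their natural order, as in the attrition dataset example.
levels = ["Low", "Medium", "High", "Very High"]

def ordinal_encode(value):
    """Map a Likert label to its rank 1..K (ordinal treatment, one column)."""
    return levels.index(value) + 1

def one_hot_encode(value):
    """Map a Likert label to a K-length 0/1 indicator vector (nominal treatment)."""
    return [1 if value == lvl else 0 for lvl in levels]

print(ordinal_encode("High"))   # 3
print(one_hot_encode("High"))   # [0, 0, 1, 0]
```

With real data you would more likely reach for `sklearn.preprocessing.OrdinalEncoder` / `OneHotEncoder` or `pandas.get_dummies`, but the transformation being compared is exactly this one.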
http://matheducators.stackexchange.com/questions/950/advantages-on-repeating-question-in-students-answer
I see a lot of students who repeat the whole question (i.e., before starting anything with actual content, they write down all assumptions, claims, etc.) in their answers on homework or exams. At least in exams, I have the feeling that they are losing time doing this.

• Is there an explanation for why students do so?
• Are there any (psychological) advantages to doing this?

It depends. Do you mean that they copy "1) $3\int x^2dx$"? Or a more lengthy question? – user1729 Mar 28 '14 at 10:03
I once had a student copy out their entire linear algebra question paper into their answer book. It was presented beautifully and written in very neat and tidy handwriting. However, they didn't attempt to answer anything! They scored zero. – user1729 Mar 28 '14 at 10:04
@user1729 I mean more lengthy questions. In an exam a few days ago I saw some people who had written down more than one page (including hints, etc.), although the solution was only a few lines long. – Markus Klein Mar 28 '14 at 10:09
From my experience in high school, there was a general culture of "you can't be marked down for writing something not pertinent to the question, but there's a chance it could gain you marks." This often led to students who didn't know how to tackle a problem just writing down the problem statement, writing down all the formulae/theorems they knew, and generally regurgitating everything they could remember from revision onto their exam paper in the hope that one of the shots hit the target. – Dan Rust Mar 28 '14 at 13:16
Daniel Rust -- you should flesh out your comment here as an answer! It would be a good answer. – Chris Cunningham Mar 28 '14 at 15:21

In order to cultivate a greater appreciation for precision in one's mathematical statements, I ask my students not to copy the problem question, but rather to transform it from a question or problem statement into a theorem statement, which they then prove or solve.
For example, if the quiz question is

Is the ring $\mathbb{Z}_2\oplus \mathbb{Z}_3$ an integral domain?

then the student is expected to write something like:

Theorem. The ring $\mathbb{Z}_2\oplus \mathbb{Z}_3$ is not an integral domain.

Proof. This ring is not an integral domain, because it has zero divisors, namely, ...

In a lower-level course, I ask my students to transform the question into a positive assertion of what they will do.

This is a very nice idea. – Jon Bannon Mar 28 '14 at 12:50

Personally, I encourage students to write answers that make sense as a self-contained piece of writing, because I think that is a valuable skill. Certainly it is required when writing about mathematics in any context other than homework or exams. This generally means that they need to reproduce some or all of the content of the question, possibly phrased in a different way. Simply copying the question verbatim is not the best way, however.

I recommend this to most people I have helped or tutored over the past years, for a few reasons.

### Problems are complicated when you don't understand them.

Basic problems are not as easily digestible to students, especially those who are less confident. I have found people consistently make mistakes based on the problem definitions, even when they are explicitly given, and as a result I strongly recommend that people write them out, even if that means repeating information. This is true even for simple problems. As problems get more complicated, this is even more important. I eventually tired of people making mistakes that resulted from simply misreading or misinterpreting assumptions or even given information. Forcing them to write this down at least somewhat helps, though of course there's always opportunity for stupid mistakes.

### It can be part of problem solving.
I can't count the number of times I've had no clue how to approach a problem myself, but followed my normal approach of writing the problem down and, as I was rewriting it, ended up with significant insight into how to solve it. This is also the "if I write down lots of stuff it'll look like I tried" perspective, though that is more "student-led" than something I would ever recommend.

I can't really comment on the psychological reasons, but I can say from personal experience that some students have practical reasons for doing so: copying the problem from the textbook or question sheet onto the working paper lets them focus entirely on the paper the problem is being solved on, rather than having to switch between the two to verify parts of the question. However, in my case that usually also involved removing all the "extra" stuff in the question if present, reducing it to just the parts necessary to solve the problem, or at least summarizing the question in a more notational form. I usually don't do that when the questions are already provided on the answer paper and no scratch paper is provided, so I suppose this might not be all that applicable to your question, given that I'm probably not the sort of student you're most interested in here. Thinking back, I believe I developed that habit from having had teachers who required questions to be copied onto homework/test papers (probably for the benefit of the graders, at least in part), wanting to keep the amount copied to a minimum, and later realizing that it did actually provide some benefit for me (and for graders) even in the minimal form; thus, I continued to do so even when it was not explicitly required.

I'm going to hazard a guess: for two-column geometry proofs, the repetition of every 'given' is required.
When working with students on their geometry, it struck me that many times the 'givens' are two-thirds of the solution, with just a couple of steps left to finish the proof. For algebra, the question is usually brief enough that it makes sense to rewrite it on the answer sheet. These two examples set the stage for the habit of rewriting to continue.
http://www.math.toronto.edu/cms/analysis-and-applied-math-seminar/2010-02-26?CalendarStart=2011-11-25
## Analysis and Applied Math Seminar This page contains the calendar of events for the Analysis and Applied Math Seminar at the University of Toronto. The seminar meets regularly on Fridays, 1:10 - 2 pm, Room 6183 in the Bahen Centre. For more information, please contact one of the organizers: Robert Jerrard or Amir Moradifam. http://wiki.math.toronto.edu/TorontoMathWiki/index.php/Analysis_Applied_Math_Seminar ### On Long Time Behavior of Coating Flow by Marina Chugunova | University of Toronto Time: 14:10 (Friday, Feb. 26, 2010) Location: BA6183, Bahen Center, 40 St George St Abstract: We consider a nonlinear 4th-order degenerate parabolic partial differential equation that arises in modelling the dynamics of an incompressible thin liquid film on the outer surface of a rotating horizontal cylinder in the presence of gravity. The parameters involved determine a rich variety of qualitatively different coating flows. Depending on the initial data and the parameter values, we prove the existence of nonnegative periodic weak solutions and study their long-time behavior. Numerical simulations of coating flows will be presented. This is joint work with A. Burchard, M. Pugh, B. Stephens and R. Taranets. #### Dates in this series · Friday, Jan. 08, 2010: Temporal evolution of attractive Bose-Einstein condensate in a quasi 1D cigar-shape trap modeled through the semiclassical limit of the focusing Nonlinear Schrödinger Equation (Alex Tovbis) · Friday, Jan. 15, 2010: Diffusion of waves in a random environment: problems and results (Jeffrey Schenker) · Friday, Jan. 22, 2010: The Calderón Problem - From the Past to the Present (Leo Tzou) · Friday, Jan. 22, 2010: 2D Schrödinger with a strong magnetic field: dynamics and spectral asymptotics near boundary (Victor Ivrii) · Friday, Feb. 19, 2010: Global existence for a free boundary problem with non-standard sources (Maria Gualdani) · Monday, Feb.
22, 2010: Finding the Three Dimensional Structure of Molecules That We Don’t Know How To Crystallize (Shamgar Gurevich) · Friday, Feb. 26, 2010: On Long Time Behavior of Coating Flow (Marina Chugunova) · Friday, Mar. 05, 2010: Solutions with vortices of a semi-stiff boundary value problem for the Ginzburg-Landau equation (Leonid Berlyand) · Friday, Mar. 26, 2010: Eventual regularization of the slightly supercritical fractional Burgers equation (Chi Hin Chan) · Friday, Apr. 09, 2010: On the solvability conditions for some non Fredholm operators (Vitali Vougalter) · Tuesday, Apr. 13, 2010: Some problems in unique continuation (Alex Ionescu) · Friday, Apr. 16, 2010: Smoother critical subsolutions to Hamilton-Jacobi Equation (Albert Fathi) · Friday, Apr. 16, 2010: Entropy method for line energies (Radu Ignat) · Friday, May. 14, 2010: Semiclassical evolution for the nonlinear Schrödinger equation (Alessandro Selvitella) · Thursday, Jul. 08, 2010: Improving sharp Hardy-Sobolev inequalities by optimal remainder norms (Adele Ferone) · Friday, Sep. 03, 2010: Ergodic methods in combinatorics (Nikos Frantzikinakis) · Friday, Sep. 24, 2010: Stability of Federer curvature measures (Quentin Merigot) · Friday, Oct. 01, 2010: Regularity of the extremal solution in fourth order nonlinear eigenvalue problems,  (Amir Moradifam) · Friday, Oct. 08, 2010: Non-uniqueness of the Navier-Stokes equation in the hyperbolic setting (Magda Czubak) · Friday, Oct. 15, 2010: 2D- and 3D-Magnetic Schrödinger Operator: Short Loops and Pointwise Spectral Asymptotics (Victor Ivrii) · Friday, Oct. 22, 2010: Rigidity of Sobolev isometric embeddings (Robert Jerrard) · Friday, Oct. 29, 2010: Ultimately Schwarzschildean Spacetimes and the Black Hole Stability Problem (Gustav Holzegel) · Friday, Nov. 05, 2010: Bursting shear flow as cycle-to-cycle homoclinic chaos (Lennaert van Veen) · Friday, Nov. 05, 2010: Localization for the random displacement model (Michael Loss) · Friday, Nov. 
12, 2010: Double Bubbles from Euclidean to Gauss Space (Frank Morgan) · Friday, Nov. 19, 2010: Coarsening in energy-driven systems (Dejan Slepcev) · Friday, Nov. 26, 2010: Hartree-Fock theory for atoms with closed shells (Marcel Griesemer) · Monday, Nov. 29, 2010: Normally hyperbolic trapped sets and quasinormal modes for black holes (Maciej Zworski) · Friday, Dec. 03, 2010: Interaction between internal and surface waves in a two layers fluid (Catherine Sulem) · Friday, Dec. 17, 2010: The influence of large drift on front propagation (Mohammad El Smaily) · Friday, Jan. 07, 2011: Mean field limits for interacting Bose gases and the Cauchy problem for Gross-Pitaevskii hierarchies (Thomas Chen) · Friday, Feb. 04, 2011: Some problems concerning a Quasilinear Schrödinger Equation (Alessandro Selvitella) · Friday, Feb. 11, 2011: Energy-based fracture evolution (Christopher Larsen) · Friday, Feb. 18, 2011: On global well-posedness of defocusing energy-critical nonlinear Schrödinger equations on certain Riemannian manifolds (Alex Ionescu) · Friday, Mar. 04, 2011: Infinitesimal inverse spectral geometry and applications in mathematical physics,  (Achim Kempf) · Friday, Mar. 11, 2011: Numerical study of one-dimensional Bose-Einstein condensates in a random potential (Ziad Musslimani) · Friday, Mar. 18, 2011: New conserved integrals for fluid flow in multi-dimensions (Stephen Anco) · Friday, Apr. 08, 2011: Renormalization group approach to perturbation theory for PDEs (Walid K. Abou Salem) · Friday, Jun. 10, 2011: Extensions of the disc algebra (Vassili Nestoridis) · Friday, Jun. 17, 2011: Stability of planar fronts for a non--local phase kinetics equation with a conservation law in $D \le 3$ (Enza Orlandi) · Friday, Jul. 15, 2011: Damped-driven Hamiltonian PDE (Sergei Kuksin) · Friday, Jul. 15, 2011: Self-similar asymptotics of solutions to the Navier-Stokes system in two dimensional exterior domain (Christophe Lacave) · Friday, Jul. 
22, 2011: The theorem of characterization of the maximum principle (Julián López-Gómez) · Friday, Sep. 09, 2011: A semiclassical proof of Quillen's theorem (Maciej Zworski) · Friday, Sep. 16, 2011: Rayleigh-Benard convection: Bounds on the Nusselt number (Christian Seis) · Friday, Sep. 16, 2011: Bounds on the cost of root finding  (Scott Sutherland) · Friday, Sep. 23, 2011: Thin films for the Ginzburg-Landau model (Bernardo Galvao-Sousa) · Friday, Sep. 30, 2011: The signature operator on stratified pseudomanifolds (Pierre Albin) · Friday, Oct. 07, 2011: Breakdown Criteria for Nonvacuum Einstein Equations (Arick Shao) · Friday, Oct. 14, 2011: Local well-posedness of the KdV equation with almost periodic initial data (Kotaro Tsugawa) · Friday, Oct. 21, 2011: Anderson localization triggered by spin disorder (Daniel Egli) · Friday, Oct. 28, 2011: From the Analysis of Einstein-Maxwell Spacetimes in General Relativity to Gravitational Radiation (Lydia Bierri) · Friday, Nov. 11, 2011: Regularity of Eigenstates in Regular Mourre Theory (Matthias Westrich) · Friday, Nov. 18, 2011: Sharp Stability Estimates in Geometric Variational Problems (Francesco Maggi) · Friday, Nov. 25, 2011: Inverting the Attenuated X-Ray Transform (Nicholas Hoell) · Friday, Dec. 02, 2011: Developments in High-Order Discontinuous Galerkin Methods for Hyperbolic Conservation Laws (Lilia Krivodonova) · Monday, Jan. 23, 2012: Wet drop impact (Robert Deegan) · Friday, Jan. 27, 2012: Flux norm approach to finite-dimensional homogenization approximation with non-separated scales and high contrast (Leonid Berlyand) · Friday, Feb. 03, 2012: Critical Sobolev Inequalities and Navier-Stokes Equations (Yun Wang) · Friday, Feb. 10, 2012: On Singularity Formation Under Mean Curvature Flow (Gang Zhou) · Friday, Feb. 17, 2012: Integro-differential equations and multiscale representations (Prashant Athavale) · Friday, Mar. 02, 2012: Pursuit Laws for Mobile Robots (Bruce Francis) · Friday, Mar. 
16, 2012: Nonzero positive solutions of systems of elliptic boundary value problems (Kunquan Lan) · Friday, Mar. 23, 2012: Wet drop impact (Robert Deegan) · Friday, Apr. 13, 2012: Global Minimization and the Energy Landscape for a Variational Problem with Long-Range Interactions (Rustum Choksi) · Friday, Jun. 01, 2012: Information Transmission over Optical Fibers Using the Nonlinear Fourier (Mansoor Yousefi) · Tuesday, Jul. 31, 2012: The Kato square root problem on vector bundles with generalised bounded geometry (Lashi Bandara) · Friday, Sep. 28, 2012: On a variational problem exhibiting concentration near a curve (Andres Contreras) · Friday, Oct. 05, 2012: Global results for linear waves on expanding Schwarzschild de Sitter cosmologies (Volker Schlue) · Friday, Oct. 12, 2012: Hamiltonian dynamics of a particle interacting with a wave field (Daniel Egli) · Friday, Oct. 19, 2012: Null Cones to Infinity, Curvature Flux, and Bondi Mass (Arick Shao) · Friday, Oct. 26, 2012: Coupled Nonlinear Oscillators: Linking Dynamical Systems Theory with Engineering Applications, (Antonio Palacios) · Friday, Nov. 02, 2012: Short-time behaviour of hypoelliptic heat kernels on Lie groups (Joint Seminar with Inverse Problems and Image Analysis Seminar) (Abdol-Reza Mansouri) · Friday, Nov. 09, 2012: Harmonic maps of conic surfaces (Jesse Gell-Redman) · Friday, Nov. 16, 2012: Monge-Kantorovich for Problems in Signal Processing and Control (Joint with the Inverse Problems and Image Analysis Seminar) (Allen Tannenbaum) · Friday, Nov. 23, 2012: TBA (Yaiza Canzani) · Friday, Dec. 07, 2012: TBA (Walter Craig) · Friday, Dec. 07, 2012: Vortex Filament Interactions (Walter Craig) · Friday, Dec. 14, 2012: JOINT SEMINAR: ANALYSIS AND APPLIED MATH / INVERSE PROBLEMS AND IMAGE ANALYSIS SEMINAR -- The circular area signature for graphs of periodic functions (Jeffrey Calder) · Friday, Jan. 
11, 2013: Higher-order time asymptotics of fast diffusion in Euclidean space (using dynamical systems methods) (Robert McCann) · Friday, Jan. 18, 2013: A Differential Equation For The Principal Directions And Radii Of A Surface (Daniel Mayost) · Friday, Jan. 25, 2013: Axisymmetric critical points on the sphere of a nonlocal isoperimetric problem (Ihsan Topaloglu) · Friday, Feb. 01, 2013: Blow-up for linear and non-linear wave equations on black hole spacetimes (Stefanos Aretakis) · Friday, Feb. 08, 2013: Null singularities in general relativity (Jonathan Luk) · Friday, Mar. 01, 2013: Continuous dissipative solutions of the Euler equations (Camillo De Lellis) · Friday, Mar. 22, 2013: TBA (Keisuke Takasao)
https://socratic.org/questions/57021fe711ef6b604fd61227
Physics Topics

# Question #61227

Apr 4, 2016

Yes, see below.

#### Explanation:

We know that displacement is a vector quantity, denoted $\vec{x}$. Displacement can be negative, depending on the chosen coordinate system and on the locations of the initial and final position vectors.
https://www.physicsforums.com/threads/what-is-the-difference-between-these-dielectric-terms.720511/
# What Is The Difference Between These Dielectric Terms?

1. Nov 2, 2013

### FredericChopin

Can someone please explain to me what the difference between these terms is?

1. Dielectric constant
2. Relative dielectric constant
3. Dielectric loss

I came across them on this website: http://www.lsbu.ac.uk/water/microwave.html#pen

Also, I don't really know what "δ" and "εr'" on the website are meant to represent. All and any help would be appreciated. Thank you.

2. Nov 3, 2013

### FredericChopin

Also, does anyone know the derivation of the equation

$$\alpha = \frac{2 \pi }{ \lambda } \sqrt{ \frac{ \varepsilon_r \sqrt{1 + \tan^{2} \delta } - 1}{2} }$$

which is also on the website? Thank you.

3. Nov 4, 2013

### Staff: Mentor

An ideal capacitor has no losses; a dielectric introduced between the plates just changes the total capacitance in proportion to the relative dielectric constant. A non-ideal dielectric also introduces a loss, which we represent by a resistance between the plates. So a real capacitor shows both capacitance and resistance; I think that leads to the angle you have there, δ.
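For what it's worth, the attenuation formula quoted above is straightforward to evaluate numerically. The sketch below uses made-up values for the relative permittivity and loss tangent, not real material data:

```python
import math

# alpha = (2*pi/lambda) * sqrt((eps_r * sqrt(1 + tan^2(delta)) - 1) / 2)
def attenuation(wavelength, eps_r, tan_delta):
    """Attenuation constant per the formula quoted in the thread (SI units)."""
    inner = (eps_r * math.sqrt(1.0 + tan_delta**2) - 1.0) / 2.0
    return (2.0 * math.pi / wavelength) * math.sqrt(inner)

# Sanity check: a lossless medium with eps_r = 1 (i.e. vacuum) gives alpha = 0.
print(attenuation(0.122, 1.0, 0.0))   # 0.0
# Illustrative lossy case (made-up eps_r and tan(delta)):
print(attenuation(0.122, 80.0, 0.1))
```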
https://mixedmath.wordpress.com/2012/10/03/math-90-week-5/
A few administrative notes before we review the day’s material: I will not be holding office hours this Wednesday, and there are no classes next Monday, when my usual office hours are. But I’ve decided to do a sort of experiment: I don’t plan on reviewing for the exam specifically next week, but a large portion of the class has said that they would come to office hours on Monday if I were to have them. So I’m going to hold them to that – I’ll be in Kassar House 105 (the MRC room) from 7-8:30 (or later, perhaps, if there are a lot of questions), and this will serve both as my office hours and as a sort of review session. But this comes with a few strings attached: firstly, I’ll be willing to answer any question, but I’m not going to prepare a review; secondly, if there is poor turnout, then this won’t happen again. Alrighty! The rest is after the fold –

The topic of the day was differentiation! The three questions of the day were –

1. Differentiate the following functions:
   1. $e^x$
   2. $e^{e^x}$
   3. $e^{e^{e^x}}$
   4. $\sin x$
   5. $\sin (\sin x)$
   6. $\sin (\sin (\sin x))$
2. A particle moves along a line with its position described by the function $s(t) = a_0t^2 + a_1t + a_2$. We know that its acceleration is always $20$ m/s/s, that its velocity at $t = 1$ is $-10$ m/s, and that its position at $t = 2$ is $20$ m. What are $a_0, a_1, a_2$?
3. Given that $u(x) = x^2 + x + 2$, what are the following:
   1. $\frac{d}{dx} (u(x))^2$
   2. $\frac{d}{dx} (u(x))^n$
   3. $\frac{d}{dx} (5 + x^3)^{-3}$
   4. $\frac{d}{dx} ((u(x))^n)^m$

#### Question 1

This is all about the chain rule. Please note that this is a big deal, so if you have any trouble at all with the chain rule, seek extra help. The derivative of $e^x$ is $e^x$. To compute the derivative of $e^{e^x}$, we might set $u(x) = e^x$, so that we have $e^u$. The derivative of $e^u$ will be $e^u u'$, which gives us $e^{e^x}e^x$.
Let’s look at the other way of understanding the chain rule to compute the derivative of $e^{e^{e^x}}$. The “outer function” is $e^{(\cdot)}$. Its derivative is just itself. The first “inner function” is $e^{e^x}$. We have just computed its derivative above (it’s $e^{e^x} e^x$). So we multiply them together to get $e^{e^{e^x}}e^{e^x}e^x$. Similarly, the derivative of $\sin x$ is $\cos x$. The derivative of $\sin \sin x$ requires the chain rule. On the one hand, the outer function is $\sin$, and the derivative of $\sin$ is $\cos$. So we know we will have a $\cos (\sin x)$ in the answer. The inner function is also $\sin x$, so we need to multiply by its derivative. The final answer will be $\cos (\sin x )\cos x$. To compute the derivative of $\sin \sin \sin x$, we again use the chain rule. I will again use helper functions, to illustrate their use. We might call $u(x) = \sin \sin x$, so that we are computing the derivative of $\sin (u)$. Then we get $\cos (u)\, u'$. We happen to have computed $u'$ just a moment ago, so the final answer is $\cos (\sin \sin x) \cos (\sin x) \cos x$.

#### Question 2

The key idea of this question is to remember that the function $s(t)$ gives position at time $t$. So its derivative gives a result in terms of position per time, the velocity. And the derivative of velocity gives a result in terms of position per time per time, the acceleration. So the velocity of our particle is $2a_0t + a_1$, and the acceleration is $2a_0$. Since we know that the acceleration is always $20$, we know that $2a_0 = 20$, so that $a_0 = 10$. The velocity at $t = 1$ is $-10$, so we know that $2(10)(1) + a_1 = -10$, so that $a_1 = -30$. Finally, our position at time $t = 2$ is $20$, so that $4(10) + 2(-30) + a_2 = 20$, so that $a_2 = 40$. I used different numbers between the two classes, so don’t pay too much attention if the exact details are different between one class and the other.

#### Question 3

This is more about the chain rule!
This is sort of an explicit example of helper functions. We first want to compute the derivative of $u(x)^2$. By the chain rule, this will be $2u(x)u'(x)$. What is $u'(x)$? It’s $2x + 1$. So the derivative of $u(x)^2$ is $2(x^2 + x + 2)(2x + 1)$. This is a single case of the slightly more general $u(x)^n$. Here, the power rule tells us that the derivative will be $nu(x)^{n-1}u'(x)$, which is $n (x^2 + x + 2)^{n-1}(2x + 1)$. The idea behind the third question is to see if we can work out the same sort of idea, but without starting with a helper function. (It’s perfectly fine to always use helper functions with the chain rule – that’s not a problem at all.) The derivative of $(5 + x^3)^{-3}$ will be $-3(5 + x^3)^{-4}(3x^2)$. If we want to see the use of helper functions, call $v(x) = 5 + x^3$, so that we are computing the derivative of $v^{-3}$. The derivative will be $-3v^{-4}v'$, which is exactly what we have above. I look forward to seeing some of you on Monday, and happy studying!
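One more check that isn’t in the original post: plugging the Question 2 answer $a_0 = 10$, $a_1 = -30$, $a_2 = 40$ back into the three given conditions confirms it.

```python
# coefficients found in Question 2
a0, a1, a2 = 10, -30, 40

def s(t):
    # position
    return a0 * t**2 + a1 * t + a2

def v(t):
    # velocity, s'(t)
    return 2 * a0 * t + a1

acceleration = 2 * a0  # the constant second derivative

assert acceleration == 20   # always 20 m/s/s
assert v(1) == -10          # velocity at t = 1
assert s(2) == 20           # position at t = 2
```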
https://atmtools.github.io/arts-docs-master/doxygen/classParameters.html
ARTS 2.5.0 (git: 9ee3ac6c) Parameters Class Reference Structure to hold all command line Parameters. More... #include <parameters.h> ## Public Member Functions Parameters () Default constructor. More... ## Public Attributes String usage Short message how to call the program. More... String helptext Longer message explaining the options. More... bool help Only display the help text. More... bool version Display version information. More... String basename If this is specified (with the -b --basename option), it is used as the base name for the report file and for other output files. More... String outdir If this is specified (with the -o --outdir option), it is used as the base directory for the report file and for other output files. More... ArrayOfString controlfiles The filenames of the controlfiles. More... Index reporting This should be a two digit integer. More... String methods If this is given the argument ‘all’, it simply prints a list of all methods. More... The maximum number of threads to use. More... ArrayOfString includepath List of paths to search for include files. More... ArrayOfString datapath List of paths to search for data files. More... String input This is complementary to the methods switch. More... String workspacevariables If this is given the argument ‘all’, it simply prints a list of all workspace variables. More... String describe Print the description String of the given workspace variable or method. More... bool groups Print a list of all workspace variable groups. More... bool plain Generate plain help output suitable for script processing. More... Index docserver Port to use for the docserver. More... String baseurl Baseurl for the docserver. More... bool daemon Flag to run the docserver in the background. More... bool gui Flag to run with graphical user interface. More... bool check_docs Flag to check built-in documentation. More... ## Detailed Description Structure to hold all command line Parameters. 
This holds all the command line parameters, plus the usage message and the helptext message. The messages are in the same structure, because they need to be changed whenever the parameters are changed, so it is better to have them in the same place. Definition at line 42 of file parameters.h. ## ◆ Parameters() Parameters::Parameters ( ) inline Default constructor. Care has to be taken to properly initialize all variables, e.g., bool options to false. Definition at line 46 of file parameters.h. ## ◆ basename String Parameters::basename If this is specified (with the -b --basename option), it is used as the base name for the report file and for other output files. Definition at line 82 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ baseurl String Parameters::baseurl Baseurl for the docserver. Definition at line 127 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ check_docs bool Parameters::check_docs Flag to check built-in documentation. Definition at line 133 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ controlfiles ArrayOfString Parameters::controlfiles The filenames of the controlfiles. Can be only one or as many as you want. Definition at line 90 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ daemon bool Parameters::daemon Flag to run the docserver in the background. Definition at line 129 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ datapath ArrayOfString Parameters::datapath List of paths to search for data files. Definition at line 108 of file parameters.h. ## ◆ describe String Parameters::describe Print the description String of the given workspace variable or method. Definition at line 119 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ docserver Index Parameters::docserver Port to use for the docserver. Definition at line 125 of file parameters.h. Referenced by get_parameters(), and main(). 
## ◆ groups bool Parameters::groups Print a list of all workspace variable groups. Definition at line 121 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ gui bool Parameters::gui Flag to run with graphical user interface. Definition at line 131 of file parameters.h. Referenced by get_parameters(). ## ◆ help bool Parameters::help Only display the help text. Definition at line 76 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ helptext String Parameters::helptext Longer message explaining the options. Definition at line 74 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ includepath ArrayOfString Parameters::includepath List of paths to search for include files. Definition at line 106 of file parameters.h. ## ◆ input String Parameters::input This is complementary to the methods switch. It must be given the name of a variable (or group). Then it lists all methods that take this variable (or group) as input. Definition at line 112 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ methods String Parameters::methods If this is given the argument ‘all’, it simply prints a list of all methods. If it is given the name of a variable (or group), it prints all methods that produce this variable (or group) as output. Definition at line 102 of file parameters.h. Referenced by get_parameters(), and main(). The maximum number of threads to use. Definition at line 104 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ outdir String Parameters::outdir If this is specified (with the -o --outdir option), it is used as the base directory for the report file and for other output files. If a full path is given for an output file it will not be affected by this. Definition at line 87 of file parameters.h. Referenced by add_basedir(), and get_parameters(). ## ◆ plain bool Parameters::plain Generate plain help output suitable for script processing. 
Definition at line 123 of file parameters.h. Referenced by get_parameters(), main(), option_methods(), and option_workspacevariables(). ## ◆ reporting Index Parameters::reporting This should be a two digit integer. The first digit specifies the output level for stdout (stderr for error messages), the second digit the output level for the report file. The levels can reach from 0 (show only error messages) to 3 (show everything). Example: 03 = only errors to the screen, everything to the file. Definition at line 98 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ usage String Parameters::usage Short message how to call the program. Definition at line 72 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ version bool Parameters::version Display version information. Definition at line 78 of file parameters.h. Referenced by get_parameters(), and main(). ## ◆ workspacevariables String Parameters::workspacevariables If this is given the argument ‘all’, it simply prints a list of all workspace variables. If it is given the name of a method, it prints all variables needed by that method. Definition at line 116 of file parameters.h. Referenced by get_parameters(), and main(). The documentation for this class was generated from the following file:
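To illustrate the two-digit `reporting` convention described above, here is a short sketch; the helper function is hypothetical and not part of the ARTS codebase:

```python
def split_reporting(reporting):
    """Interpret an ARTS-style two-digit reporting value as
    (screen_level, file_level); each level runs from 0 (errors only)
    to 3 (everything). Hypothetical helper, not part of ARTS."""
    screen, file_level = divmod(reporting, 10)
    if not (0 <= screen <= 3 and 0 <= file_level <= 3):
        raise ValueError("each digit must be between 0 and 3")
    return screen, file_level

# the example from the docs: 03 = only errors to the screen,
# everything to the report file
assert split_reporting(3) == (0, 3)
assert split_reporting(33) == (3, 3)
```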
https://kr.mathworks.com/help/fininst/optstockbybls.html
Documentation # optstockbybls Price options using Black-Scholes option pricing model ## Syntax ``Price = optstockbybls(RateSpec,StockSpec,Settle,Maturity,OptSpec,Strike)`` ## Description `Price = optstockbybls(RateSpec,StockSpec,Settle,Maturity,OptSpec,Strike)` returns option prices using the Black-Scholes option pricing model. Note: When using `StockSpec` with `optstockbybls`, you can modify `StockSpec` to handle other types of underliers when pricing instruments that use the Black-Scholes model. When pricing Futures (Black model), enter the following in `StockSpec`: DivType = 'Continuous'; DivAmount = RateSpec.Rates; For example, see Compute Option Prices Using the Black-Scholes Option Pricing Model. When pricing Foreign Currencies (Garman-Kohlhagen model), enter the following in `StockSpec`: DivType = 'Continuous'; DivAmount = ForeignRate; where `ForeignRate` is the continuously compounded, annualized risk-free interest rate in the foreign country. For example, see Compute Option Prices on Foreign Currencies Using the Garman-Kohlhagen Option Pricing Model. ## Examples This example shows how to compute option prices using the Black-Scholes option pricing model. Consider two European options, a call and a put, with an exercise price of \$29 on January 1, 2008. The options expire on May 1, 2008. Assume that the underlying stock for the call option provides a cash dividend of \$0.50 on February 15, 2008. The underlying stock for the put option provides a continuous dividend yield of 4.5% per annum. The stocks are trading at \$30 and have a volatility of 25% per annum. The annualized continuously compounded risk-free rate is 5% per annum. Using this data, compute the price of the options using the Black-Scholes model. ```Strike = 29; AssetPrice = 30; Sigma = .25; Rates = 0.05; Settle = 'Jan-01-2008'; Maturity = 'May-01-2008'; % define the RateSpec and StockSpec RateSpec = intenvset('ValuationDate', Settle, 'StartDates', Settle, 'EndDates',... 
Maturity, 'Rates', Rates, 'Compounding', -1); DividendType = {'cash';'continuous'}; DividendAmounts = [0.50; 0.045]; ExDividendDates = {'Feb-15-2008';NaN}; StockSpec = stockspec(Sigma, AssetPrice, DividendType, DividendAmounts,... ExDividendDates); OptSpec = {'call'; 'put'}; Price = optstockbybls(RateSpec, StockSpec, Settle, Maturity, OptSpec, Strike)``` ```Price = 2×1 2.2030 1.2025 ``` This example shows how to compute option prices on foreign currencies using the Garman-Kohlhagen option pricing model. Consider a European put option on a currency with an exercise price of \$0.50 on October 1, 2015. The option expires on June 1, 2016. Assume that the current exchange rate is \$0.52 and has a volatility of 12% per annum. The annualized continuously compounded domestic risk-free rate is 4% per annum and the foreign risk-free rate is 8% per annum. Using this data, compute the price of the option using the Garman-Kohlhagen model. ```Settle = 'October-01-2015'; Maturity = 'June-01-2016'; AssetPrice = 0.52; Strike = 0.50; Sigma = .12; Rates = 0.04; ForeignRate = 0.08;``` Define the `RateSpec`. ```RateSpec = intenvset('ValuationDate', Settle, 'StartDates', Settle, 'EndDates',... Maturity, 'Rates', Rates, 'Compounding', -1)``` ```RateSpec = struct with fields: FinObj: 'RateSpec' Compounding: -1 Disc: 0.9737 Rates: 0.0400 EndTimes: 0.6667 StartTimes: 0 EndDates: 736482 StartDates: 736238 ValuationDate: 736238 Basis: 0 EndMonthRule: 1 ``` Define the `StockSpec`. ```DividendType = 'Continuous'; DividendAmounts = ForeignRate; StockSpec = stockspec(Sigma, AssetPrice, DividendType, DividendAmounts)``` ```StockSpec = struct with fields: FinObj: 'StockSpec' Sigma: 0.1200 AssetPrice: 0.5200 DividendType: {'continuous'} DividendAmounts: 0.0800 ExDividendDates: [] ``` Price the European put option. 
```OptSpec = {'put'}; Price = optstockbybls(RateSpec, StockSpec, Settle, Maturity, OptSpec, Strike)``` ```Price = 0.0162 ``` ## Input Arguments `RateSpec` – Interest-rate term structure (annualized and continuously compounded), specified by the `RateSpec` obtained from `intenvset`. For information on the interest-rate specification, see `intenvset`. Data Types: `struct` `StockSpec` – Stock specification for the underlying asset. For information on the stock specification, see `stockspec`. `stockspec` handles several types of underlying assets. For example, for physical commodities the price is `StockSpec.Asset`, the volatility is `StockSpec.Sigma`, and the convenience yield is `StockSpec.DividendAmounts`. Data Types: `struct` `Settle` – Settlement or trade date, specified as a serial date number or date character vector using a `NINST`-by-`1` vector. Data Types: `double` | `char` `Maturity` – Maturity date for the option, specified as a serial date number or date character vector using a `NINST`-by-`1` vector. Data Types: `double` | `char` `OptSpec` – Definition of the option as `'call'` or `'put'`, specified as a `NINST`-by-`1` cell array of character vectors with values `'call'` or `'put'`. Data Types: `char` | `cell` `Strike` – Option strike price value, specified as a nonnegative `NINST`-by-`1` vector. Data Types: `double` ## Output Arguments `Price` – Expected option prices, returned as a `NINST`-by-`1` vector. Data Types: `double` ### Vanilla Option A vanilla option is a category of options that includes only the most standard components. A vanilla option has an expiration date and a straightforward strike price. American-style options and European-style options are both categorized as vanilla options. The payoff for a vanilla option is as follows: • For a call: $\mathrm{max}\left(S_t-K,0\right)$ • For a put: $\mathrm{max}\left(K-S_t,0\right)$ where $S_t$ is the price of the underlying asset at time $t$ and $K$ is the strike price.
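The Garman-Kohlhagen example above can be reproduced outside MATLAB. The following sketch is my own (not MathWorks code); it treats the foreign rate as a continuous dividend yield and takes $T = 2/3$ years, matching the `EndTimes` value of 0.6667 shown in the `RateSpec` output:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gk_put(S, K, T, r_dom, r_for, sigma):
    # Garman-Kohlhagen: Black-Scholes with the foreign risk-free
    # rate playing the role of a continuous dividend yield
    d1 = (log(S / K) + (r_dom - r_for + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r_dom * T) * norm_cdf(-d2) - S * exp(-r_for * T) * norm_cdf(-d1)

price = gk_put(S=0.52, K=0.50, T=2 / 3, r_dom=0.04, r_for=0.08, sigma=0.12)
assert abs(price - 0.0162) < 5e-4  # agrees with the documented result
```

The result matches the documented `Price = 0.0162` to within rounding of the time fraction.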
https://raweb.inria.fr/rapportsactivite/RA2019/lemon/uid116.html
## Section: New Results

### Inland flow processes

#### Shallow water models with porosity

We propose in [10] a discussion of the publication 'Dam break in rectangular channels with different upstream-downstream widths' (Valiani and Caleffi, 2019). The authors consider an augmented shallow water system for modelling the dam-break problem in a channel with discontinuous width and present its analytical solutions depending on the upstream-downstream water depth and channel width ratios. In this discussion we contest the conservation of the hydraulic head through the width discontinuity, which is stated by the authors, and we exemplify this by performing 2D shallow water simulations reproducing some test cases presented in the paper.

#### Forcing

A book chapter entitled Space-time simulations of extreme rainfall: why and how?, involving among others two members of the team, Vincent Guinot and Gwladys Toulemonde, has been written and accepted for publication [6]. The book, whose title is Mathematical Modeling of Random and Deterministic Phenomena, will be published by Wiley. This chapter aims to present the practical interest of space-time simulations of extreme rainfall and to propose a state of the art on the subject.
https://www.asknumbers.com/pascal-to-atm.aspx
# Pascal to Atm Conversion

To convert pascal (Pa) to atm, enter the pressure in pascal into the converter and the result will be displayed.

1 Pascal = 0.00000986923267 Atm

Pascal is a metric pressure unit, defined as a force of one newton per square meter: 1 Pa = 1 N/m² = 1 kg/(m·s²). The abbreviation is "Pa". Atm (atmospheric pressure) is defined as the force per unit area exerted by the weight of the air above that point. 1 atmosphere is 101.325 kilopascals.

## Pascal to Atm Conversion Table

| Pascal | Atm | Pascal | Atm | Pascal | Atm | Pascal | Atm |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 9.86923E-06 | 26 | 0.0002566 | 51 | 0.000503331 | 76 | 0.000750062 |
| 2 | 1.97385E-05 | 27 | 0.000266469 | 52 | 0.0005132 | 77 | 0.000759931 |
| 3 | 2.96077E-05 | 28 | 0.000276339 | 53 | 0.000523069 | 78 | 0.0007698 |
| 4 | 3.94769E-05 | 29 | 0.000286208 | 54 | 0.000532939 | 79 | 0.000779669 |
| 5 | 4.93462E-05 | 30 | 0.000296077 | 55 | 0.000542808 | 80 | 0.000789539 |
| 6 | 5.92154E-05 | 31 | 0.000305946 | 56 | 0.000552677 | 81 | 0.000799408 |
| 7 | 6.90846E-05 | 32 | 0.000315815 | 57 | 0.000562546 | 82 | 0.000809277 |
| 8 | 7.89539E-05 | 33 | 0.000325685 | 58 | 0.000572415 | 83 | 0.000819146 |
| 9 | 8.88231E-05 | 34 | 0.000335554 | 59 | 0.000582285 | 84 | 0.000829016 |
| 10 | 9.86923E-05 | 35 | 0.000345423 | 60 | 0.000592154 | 85 | 0.000838885 |
| 11 | 0.000108562 | 36 | 0.000355292 | 61 | 0.000602023 | 86 | 0.000848754 |
| 12 | 0.000118431 | 37 | 0.000365162 | 62 | 0.000611892 | 87 | 0.000858623 |
| 13 | 0.0001283 | 38 | 0.000375031 | 63 | 0.000621762 | 88 | 0.000868492 |
| 14 | 0.000138169 | 39 | 0.0003849 | 64 | 0.000631631 | 89 | 0.000878362 |
| 15 | 0.000148038 | 40 | 0.000394769 | 65 | 0.0006415 | 90 | 0.000888231 |
| 16 | 0.000157908 | 41 | 0.000404639 | 66 | 0.000651369 | 100 | 0.000986923 |
| 17 | 0.000167777 | 42 | 0.000414508 | 67 | 0.000661239 | 125 | 0.001233654 |
| 18 | 0.000177646 | 43 | 0.000424377 | 68 | 0.000671108 | 150 | 0.001480385 |
| 19 | 0.000187515 | 44 | 0.000434246 | 69 | 0.000680977 | 175 | 0.001727116 |
| 20 | 0.000197385 | 45 | 0.000444115 | 70 | 0.000690846 | 200 | 0.001973847 |
| 21 | 0.000207254 | 46 | 0.000453985 | 71 | 0.000700716 | 250 | 0.002467308 |
| 22 | 0.000217123 | 47 | 0.000463854 | 72 | 0.000710585 | 300 | 0.00296077 |
| 23 | 0.000226992 | 48 | 0.000473723 | 73 | 0.000720454 | 500 | 0.004934616 |
| 24 | 0.000236862 | 49 | 0.000483592 | 74 | 0.000730323 | 750 | 0.007401925 |
| 25 | 0.000246731 | 50 | 0.000493462 | 75 | 0.000740192 | 1000 | 0.009869233 |
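The entire table follows from one constant, since one standard atmosphere is 101 325 Pa. A minimal conversion function:

```python
PA_PER_ATM = 101325.0  # one standard atmosphere in pascals

def pascal_to_atm(pa):
    # divide the pressure in pascals by the standard atmosphere
    return pa / PA_PER_ATM

assert abs(pascal_to_atm(1) - 9.86923e-06) < 1e-11
assert abs(pascal_to_atm(101325) - 1.0) < 1e-12
```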
https://chemtools.org/modules/generated/chemtools.conceptual.mixed.MixedLocalTool.html
chemtools.conceptual.mixed.MixedLocalTool class chemtools.conceptual.mixed.MixedLocalTool(dict_energy, dict_density)[source] Class of local conceptual DFT reactivity descriptors based on mixed energy models. Initialize to compute mixed local reactivity descriptors. Parameters: dict_energy (dict) – Dictionary of number of electrons (keys) and corresponding energy (values). This model expects three energy values corresponding to three consecutive numbers of electrons differing by one, i.e. $$\{(N_0 - 1): E(N_0 - 1), N_0: E(N_0), (N_0 + 1): E(N_0 + 1)\}$$. The $$N_0$$ value is considered the reference number of electrons. dict_density (dict) – Dictionary of number of electrons (keys) and corresponding density array (values). This model expects three density arrays corresponding to three consecutive numbers of electrons differing by one, i.e. $$\{(N_0 - 1): \rho_{N_0 - 1}\left(\mathbf{ r}\right), N_0: \rho_{N_0}\left(\mathbf{r}\right), (N_0 + 1): \rho_{N_0 + 1}\left( \mathbf{r}\right)\}$$. The $$N_0$$ value is considered the reference number of electrons. softness_yp Local softness of Yang and Parr. Equation [18] of Proc. Natl. Acad. Sci. USA (1985) 82, 6723-6726: $\begin{split}s^+(\mathbf{r}) &= S f^+(\mathbf{r}) \\ s^0(\mathbf{r}) &= S f^0(\mathbf{r}) \\ s^-(\mathbf{r}) &= S f^-(\mathbf{r})\end{split}$ where $$f^{+,0,-}(\mathbf{r})$$ is the Fukui function from the linear energy model, and $$S={}^1/_{\eta}$$ is the global chemical softness (the inverse of the global chemical hardness) from the quadratic energy model. Returns: softness_p (ndarray) – Local softness from above measuring nucleophilic attack, $$s^+(\mathbf{r})$$. softness_0 (ndarray) – Local softness (centered) measuring radical attack, $$s^0(\mathbf{r})$$. softness_m (ndarray) – Local softness from below measuring electrophilic attack, $$s^-(\mathbf{r})$$. philicity_mgvgc Local philicity measure of Morell, Gazquez, Vela, Guegana & Chermette. Equations [46], [15] & [47] of Phys. Chem. Chem. Phys. 
(2014) 16, 26832-26842: $\begin{split}\omega^+(\mathbf{r}) &= -(\frac{\mu^+}{\eta}) f^+(\mathbf{r}) + \frac{1}{2} \left(\frac{\mu^+}{\eta}\right)^2 f^{(2)}(\mathbf{r}) \\ \omega^0(\mathbf{r}) &= -(\frac{\mu^0}{\eta}) f^0(\mathbf{r}) + \frac{1}{2} \left(\frac{\mu^0}{\eta}\right)^2 f^{(2)}(\mathbf{r}) \\ \omega^-(\mathbf{r}) &= +(\frac{\mu^-}{\eta}) f^-(\mathbf{r}) + \frac{1}{2} \left(\frac{\mu^-}{\eta}\right)^2 f^{(2)}(\mathbf{r})\end{split}$ where $$\mu^{+,0,-}$$ is global chemical potential from the linear energy model, $$\eta$$ is global chemical hardness from the quadratic energy model, $$f^{+,0,-}(\mathbf{r})$$ is Fukui function from the linear density model, and $$f^{(2)}(\mathbf{r})$$ is dual descriptor from the quadratic density model. Returns: omega_p (ndarray) – Local philicity index from above measuring nucleophilic attack, $$\omega^+(\mathbf{r})$$. omega_0 (ndarray) – Local philicity index (centered) measuring radical attack, $$\omega^0(\mathbf{r})$$. omega_m (ndarray) – Local philicity index from below measuring electrophilic attack, $$\omega^-(\mathbf{r})$$. philicity_cms Local philicity index of Chattaraj, Maiti & Sarkar. Equation [12] of J. Phys. Chem. A (2003) 107, 4973–4975: $\begin{split}\omega^+(\mathbf{r}) &= \omega \text{ } f^+(\mathbf{r}) \\ \omega^0(\mathbf{r}) &= \omega \text{ } f^0(\mathbf{r}) \\ \omega^-(\mathbf{r}) &= \omega \text{ } f^-(\mathbf{r})\end{split}$ where $$\omega$$ is global electrophilicity from quadratic energy model, and $$f^{+,0,-}(\mathbf{r})$$ is Fukui function from linear energy model. Returns: omega_p (ndarray) – Local philicity index from above measuring nucleophilic attack, $$\omega^+(\mathbf{r})$$. omega_0 (ndarray) – Local philicity index (centered) measuring radical attack, $$\omega^0(\mathbf{r})$$. omega_m (ndarray) – Local philicity index from below measuring electrophilic attack, $$\omega^-(\mathbf{r})$$.
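The `softness_yp` formulas can be sketched numerically. The function below is an illustrative standalone sketch (not the actual `chemtools` API): it builds the finite-difference Fukui functions from the three densities and scales them by the global softness $S = 1/\eta$ from the three energies.

```python
def yang_parr_local_softness(E_m, E_0, E_p, rho_m, rho_0, rho_p):
    """Yang-Parr local softness from three energies (at N0-1, N0, N0+1
    electrons) and three density arrays on a grid (plain lists here).
    Illustrative sketch only; not the chemtools API."""
    # quadratic-model hardness: eta = E(N0+1) - 2 E(N0) + E(N0-1);
    # global softness is its inverse, S = 1/eta
    S = 1.0 / (E_p - 2.0 * E_0 + E_m)
    # linear-model Fukui functions as finite density differences
    f_plus = [p - z for p, z in zip(rho_p, rho_0)]
    f_minus = [z - m for z, m in zip(rho_0, rho_m)]
    f_zero = [0.5 * (p - m) for p, m in zip(rho_p, rho_m)]
    return ([S * f for f in f_plus],
            [S * f for f in f_zero],
            [S * f for f in f_minus])

# toy numbers, purely illustrative
s_p, s_0, s_m = yang_parr_local_softness(
    E_m=-0.5, E_0=-0.9, E_p=-1.0,
    rho_m=[0.10, 0.20], rho_0=[0.15, 0.30], rho_p=[0.18, 0.35],
)
```

With these toy inputs, $\eta = 0.3$, $S = 10/3$, and the centered softness is the average of the other two, as the linear model requires.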
http://crypto.stackexchange.com/questions/11173/can-secp-256-k1-curves-map-to-a-value-on-fips-186-3-or-p-256?answertab=oldest
# Can Secp 256 K1 curves “map” to a value on FIPS 186-3 or P-256? I'm looking at Secp 256K1 vs UProve's FIPS 186-3 or P-256 implementation. Is there any relationship between the curves such that I can consistently "map" or "project" values from one curve to another? Is this a good idea? (What faults could crop up when doing so?) My goal is to allow for two independent crypto systems based on ECDSA or subgroups to share the same keys via a conversion of some type - off the top of my head, "K" curves are binary field curves, whereas P-256 is over a 256-bit prime field, so they will not map or have anything to do with each other –  Richie Frame Oct 21 '13 at 4:15 Conversion works only if the curves are equivalent, but Secp256K1 and P256 are not equivalent. –  CodesInChaos Oct 21 '13 at 7:40 @RichieFrame UProve also uses a subgroup variant which is FIPS 186-3 finite field based DSA. Would that mean a mapping is possible –  LamonteCristo Oct 21 '13 at 14:28 @CodesInChaos I don't know how to even begin to define equivalence in curves. Could FIPS 186-3 be equivalent? –  LamonteCristo Oct 21 '13 at 14:29 If what you want is some kind of algorithm that takes a public key $Q = aP$ on one curve and converts it into $Q' = a P'$ on the other curve, then the answer is almost certainly no. There are no "interesting" maps between curves with different group structures. If you just want to use the same secret key for both curves, so $Q = aP$ on one curve and $Q' = aP'$ on another, with $P,P',Q,Q'$ public, that isn't obviously a bad idea. It seems plausible that the obvious DDH-like problem is hard. However, doing this does introduce a bunch of technical problems, e.g. how do you prove that your public keys correspond to the "same" private key (which can be solved, but is tricky when the group orders are different).
https://www.physicsforums.com/threads/taylors-series.185646/
# Taylor's Series

1. Sep 19, 2007 ### Oblio

Taylor's theorem states that, for any reasonable function $f(x)$, the value of $f$ at a point $x+\delta$ can be expressed as an infinite series involving $f$ and its derivatives at the point $x$: $f(x+\delta) = f(x) + f'(x)\delta + \frac{1}{2!}f''(x)\delta^{2} + \dots$ where the primes denote successive derivatives of $f(x)$. (Depending on the function, this series may converge for any increment $\delta$ or only for values of $\delta$ less than some nonzero 'radius of convergence'.) This theorem is enormously useful, especially for small values of $\delta$, when the first one or two terms of the series are often an excellent approximation. Find the Taylor series for $\ln(1+\delta)$. Do the same for $\cos \delta$.

We just started Taylor's theorems now, and it seems like there are many of them. Why exactly are they good and what purpose do they serve? Is finding the Taylor series for $\ln(1+\delta)$ a matter of substituting it into the above equation, or is there more to it? Thanks again!

Last edited: Sep 19, 2007

2. Sep 19, 2007 ### learningphysics

In lots of problems, you may have to find $f(x+\delta) - f(x)$, where $\delta$ is very small... your calculator might only give you a rounded figure of 0 because the numbers are too close together. However, if you take the Taylor series of $f(x+\delta)$ and then subtract $f(x)$, you get $f'(x)\delta + \frac{1}{2!}f''(x)\delta^2 + \dots$. This will give a good approximation of $f(x+\delta) - f(x)$. That's just one example of why Taylor series are useful. I'm sure there are many more. Yeah, just substitute into the equation, using $x = 1$. That should work.

3. Sep 19, 2007 ### Oblio

Effectively solving for delta?

4. Sep 19, 2007 ### learningphysics

I don't understand what you mean.

5. Sep 19, 2007 ### Oblio

When everything is subbed in, am I solving for delta?

6. Sep 19, 2007 ### learningphysics

No... you'll just be getting a polynomial formula for $\ln(1+\delta)$... in terms of delta... 
7. Sep 19, 2007

### Oblio

If I'm subbing x=1, my ln(1+delta) goes where in the formula?

8. Sep 19, 2007

### learningphysics

ln(x+delta) is the left-hand side... the right-hand side is f(x) + f'(x)delta..., where f(x) = ln(x):

ln(x+delta) = ln(x) + (1/x)delta + ....

So ln(1+delta) = ln(1) + (1/1)delta + ....

9. Sep 19, 2007

### Oblio

So ln(1+delta) = ln(1) + (1/1)delta + .... — is the bolded delta in the denominator?

10. Sep 19, 2007

### learningphysics

No. I'm using the formula from your original post exactly... delta doesn't go in the denominator... I got the 1/1 because the derivative of ln x is 1/x... so f'(1) = 1/1.

11. Sep 19, 2007

### Oblio

Ok, just making sure I was reading it right. Is that as far as you can go? Since the 2nd derivative would be 0?

12. Sep 19, 2007

### learningphysics

How do you get 0?

f(x) = ln(x)
f'(x) = 1/x
f''(x) = -1/x^2

So f''(1) = -1, not zero...

13. Sep 19, 2007

### Oblio

Oh, sub afterwards. I was deriving 1/1.

14. Sep 19, 2007

### Oblio

I assume I don't need to do all that many though?

15. Sep 19, 2007

### Oblio

Is this far enough to do the problem?

ln(x+δ) = ln(x) + (1/x)δ + (1/2!)(1/x²)(δ²)

16. Sep 19, 2007

### learningphysics

Actually I think the question wants you to keep going... and find a general formula for the entire series... do you see a pattern to the terms of ln(1+delta)?

17. Sep 19, 2007

### Oblio

denominator and delta are increasing by powers of 1. along with that 1/2!, and its denominator increasing..

18. Sep 19, 2007

### learningphysics

careful... it needs to be (-1/x^2) for the third term...

19. Sep 19, 2007

### Oblio

... i cant say i see a pattern

20. Sep 19, 2007

### learningphysics

yes, and also, you should have alternating signs:

f''(x) = -1/x^2
f'''(x) = 2/x^3

I recommend finding the first 4 or 5 terms of the series... then you should see the pattern...
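The pattern being hinted at — the n-th term of ln(1+δ) is (−1)^(n+1) δ^n / n — can be checked numerically. A quick sketch (the function name is mine, not from the thread):

```python
import math

def ln_taylor(delta, n_terms):
    # Partial sum of ln(1+delta) = delta - delta^2/2 + delta^3/3 - ...
    # The alternating signs come from f''(x) = -1/x^2, f'''(x) = 2/x^3, etc.,
    # and the 1/n factor is (n-1)!/n! left over from Taylor's formula.
    return sum((-1) ** (n + 1) * delta ** n / n for n in range(1, n_terms + 1))

# For small delta, a handful of terms already matches math.log closely.
print(ln_taylor(0.1, 5), math.log(1.1))
```

With δ = 0.1, five terms agree with math.log(1.1) to about seven decimal places, which is the "excellent approximation for small δ" the problem statement promises.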
https://motls.blogspot.com/2013/09/bohrs-dramatic-escape-70-years-ago.html
## Sunday, September 29, 2013

### Bohr's dramatic escape: 70 years ago

Exactly 70 years ago, on September 29th, 1943, the Danish underground movement received the message: brothers Niels and Harald Bohr – who had a Jewish mother, but that wasn't their only sin – were to be arrested and transferred to Germany. Until then, Bohr had often been invited to emigrate, but he kept refusing with words resembling Zeman's "Why should I leave? They should leave!" The new situation was way too serious, though, so both brothers and all of their offspring and families had to escape Denmark.

So Bohr and his wife Margareta are suddenly walking on a Copenhagen street and meet a biochemistry professor they know. He is a part of the resistance movement and gives them a secret sign: everything is fine. They go to a recreational beach with fancy buildings outside the capital, popular with Copenhagen dwellers. Harald, his wife, and children are there in a moment, too. The boat needs two hours. The fishermen, also belonging to the underground, know the schedule of the German patrols, so they can optimize the trajectory. On Thursday, September 30th, they finally reach a Swedish village.

Margareta stays in the village. Niels Bohr has some extra work to do. He takes an express train to Stockholm. There he meets with the secretary of state and other officials. Ultimately, he has a meeting with the king, too. Bohr almost certainly contributed to Sweden's publicly declared October 1943 decision to accept all refugees. Thanks to the friendly and courageous Swedish aristocratic reaction, about 60,000 Danes escape German imprisonment during October 1943.

Sweden is not quite safe for Bohr, either. Germany could send secret agents or soldiers to silence him. Britain and America are safer; they seem like a more practical place for Bohr to help the Allies to kick the German bastards into their socialist balls (or, in the leader's case, ball). Bohr agrees with the British proposal.
His condition is that his son Aage, a physics student, must accompany him. Now, the main technical task is to transfer Bohr from Sweden to Britain. In between these two countries, you find Norway, which is occupied just like Denmark. The solution is a British combat aircraft, a bomber called the Mosquito. The model is fast and can reach great heights – and escape from most German aircraft into the clouds. At some points it's actually crucial to fly above 10 kilometers to be mostly safe; this also requires the British pilots to teach Bohr to use the oxygen mask. Where would Bohr sit? Well, in the bomb bay! Aage would fly in another aircraft.

A small technical glitch forces Niels Bohr's aircraft to return. He wants to take the first yellow cab; the Swedish agents are pulling their guns. But OK, they force him to sleep at the airport and nervously await the German agents who could find out where Bohr is and pay a "friendly visit" at any moment. The Mosquito's average speed is about 600 km/h, which means that the 1,200 km to Britain is a 2-hour trip. Things went fine and the Mosquito landed in Northern Scotland. The pilots immediately go to see Bohr in the bomb bay. The sleeping and tired man didn't hear any instructions because the helmet wasn't large enough for his quantum skull. Also, he failed to use the oxygen mask, so he fainted somewhere in the clouds but survived. "Next time, it will be better," he promised.

A more luxurious commercial aircraft took the co-father of quantum mechanics to London. He met some similarly active British physicists like Chadwick. Niels Bohr was impressed by the progress made by the British on their Tube Alloys project (the British nuclear bomb). In December 1943, he would fly to the U.S. As guests of the Manhattan Project, Niels and Aage would be renamed Nicholas Baker and James Baker, respectively, for security reasons. I doubt that this secret name enabled Aage Bohr to become Reagan's Secretary of State.
The Bohrs would spend only some of their time in Los Alamos. Oppenheimer credited Bohr for contributing to the modulated neutron initiators and for being an inspiring role model for younger physicists like Feynman – although Feynman himself wasn't exactly obsessed with authorities of any kind.

Incidentally, Enrico Fermi started the nuclear age 10 months before Bohr fled Denmark. It just happens that Fermi would have celebrated his 112th birthday today: Enrico Fermi was born on September 29th, 1901.

#### snail feedback (6) :

There are many stories about both Bohr and Fermi but I had not heard of Bohr's narrow escape. Thanks, Lubos.

Great story Luboš. I love your profiles in courage of famous physicists.

Do you think that 9/11 was not an inside job?
http://pharos-library.com/doku.php?id=dessin_enfant
dessin_enfant

Dessin d'enfant

Graph on a surface encoding information for a covering

Dessins d'enfant, "children's drawings" in French, are graphs on surfaces with vertices colored alternately black and white. They appear at the crossroads of complex geometry, combinatorics and Galois theory.

Material

A wonderful book on graphs on surfaces:

• Sergei Lando and Alexander Zvonkin: Graphs on Surfaces and Their Applications, Chapter 2, ISBN 978-3-540-00203-1, Google books

• Belyi function
• Riemann surface
• absolute Galois group

dessin_enfant.txt · Last modified: 2021/09/13 12:02 by alex_th
https://eccc.weizmann.ac.il/report/2017/112/
Under the auspices of the Computational Complexity Foundation (CCF) REPORTS > DETAIL: ### Paper: TR17-112 | 27th June 2017 09:23 #### How To Simulate It -- A Tutorial on the Simulation Proof Technique TR17-112 Authors: Yehuda Lindell Publication: 27th June 2017 10:03
https://www.mathworks.com/help/physmod/sps/ref/potentiometer.html
# Potentiometer

Rotary or linear-travel potentiometer controlled by physical signal

Library: Simscape / Electrical / Passive

## Description

The Potentiometer block represents a rotary or linear-travel potentiometer, with the wiper position controlled by the input physical signal. If the potentiometer resistance changes linearly based on wiper position, then the resistance between the wiper position and port L is:

`${R}_{WL}=\frac{{R}_{0}}{{x}_{\mathrm{max}}-{x}_{\mathrm{min}}}\left(x-{x}_{\mathrm{min}}\right)$`

where

• RWL is the resistance between the wiper position and port L.
• R0 is the total resistance between ports L and R.
• x is the wiper position.
• xmin is the value of the wiper position when the wiper is at port L.
• xmax is the value of the wiper position when the wiper is at port R.

If you specify `Logarithmic` for the potentiometer resistance parameter, then the resistance between the wiper position and port L follows an exponential curve, where A and λ are chosen such that RWL at xmax is R0, and RWL at x = (xmax + xmin) / 2 is equal to Rav, the resistance when the wiper is centered.

Note

Potentiometers widely described as LOG or logarithmic taper are, in fact, exponential taper. That is, the gradient of the resistance between the wiper and the left-hand port increases as the resistance increases. The Potentiometer block implements this behavior.

For both linear and logarithmic tapers, the resistance between the wiper position and port R is:

`${R}_{WR}={R}_{0}-{R}_{WL}$`

where

• RWR is the resistance between the wiper position and port R.
• R0 is the total resistance between ports L and R.
• RWL is the resistance between the wiper position and port L.

## Ports

### Input

Physical signal input port controlling the wiper position.

### Conserving

Electrical conserving port associated with the potentiometer left pin.

Electrical conserving port associated with the potentiometer right pin.

Electrical conserving port associated with the potentiometer wiper pin.
## Parameters

The resistance between port L and port R when port W is open-circuit.

The lower limit placed on the resistance between the wiper and the two end ports. It must be greater than zero. A typical value is 5e-3 times the total resistance.

If you select `Higher at R` for the Resistance gradient parameter, then Resistance when centered is the resistance between port L and port W when the wiper is centered. Otherwise, if you select `Higher at L` for the Resistance gradient parameter, then Resistance when centered is the resistance between port R and port W when the wiper is centered. Because the resistance taper is exponential in shape, the value of the Resistance when centered parameter must be less than half of the Total resistance parameter value.

#### Dependencies

This parameter is visible only when you select `Logarithmic` for the Resistance taper parameter.

The value of the input physical signal at port x that corresponds to the wiper being located at port L. The default value is `0`.

The value of the input physical signal at port x that corresponds to the wiper being located at port R. The default value is `1`.

Specifies the potentiometer resistance taper behavior: `Linear` or `Logarithmic`.

Specifies whether the potentiometer resistance varies more rapidly at the left or the right end: `Higher at L` or `Higher at R`.

#### Dependencies

This parameter is visible only when you select `Logarithmic` for the Resistance taper parameter.
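For the linear taper, the two expressions above (RWL and its complement RWR) amount to a few lines of arithmetic. A sketch of the computation — function name, defaults, and units are illustrative, not part of the block's interface:

```python
def wiper_resistances(x, r0=10e3, x_min=0.0, x_max=1.0):
    """Linear-taper split of a potentiometer's total resistance r0 (ohms).

    Returns (R_WL, R_WR): wiper-to-left and wiper-to-right resistance
    for a wiper position x in [x_min, x_max].
    """
    r_wl = r0 / (x_max - x_min) * (x - x_min)  # R_WL grows linearly with travel
    r_wr = r0 - r_wl                           # the remainder of the track
    return r_wl, r_wr

# A centered wiper splits a 10 kOhm track evenly:
print(wiper_resistances(0.5))  # → (5000.0, 5000.0)
```

Note that R_WL is 0 at x_min and R0 at x_max, matching the parameter descriptions above; a real model would also clamp the result at the minimum wiper-contact resistance.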
http://gmatclub.com/forum/mckinsey-compensation-69140.html?fl=similar
# McKinsey Compensation

SVP
Joined: 11 Mar 2008
Posts: 1634
Location: Southern California
Schools: Chicago (dinged), Tuck (November), Columbia (RD)

### Show Tags

20 Aug 2008, 19:53

I saw this on the BW Forums. I have no idea how accurate it is, so take it with a grain of salt.

From: Starmt
Total Posts: 124
Posted: Aug-19
To: potojr

Actual McK figures:
- Associate (right after MBA): ~US$150k/year ($120k base + $30k bonus)
- EM (2-4 years post MBA): $200-250k/year
- AP (Years 4-5): $300-350k/year
- Partner (Years 6, 7, 8, 9, 10): $450k, $600k, $900k, $1.2M, $1.5M/year
- Director (+10 years): $2-10M/year, depending on tenure and performance (very few people make $10M; the median should be ~$3-4M)

_________________
Check out the new Career Forum http://gmatclub.com/forum/133

Senior Manager
Joined: 05 Feb 2008
Posts: 322
Location: Texas

Re: McKinsey Compensation [#permalink]

### Show Tags

20 Aug 2008, 20:55

The associate, EM, and AP figures I have seen before. Not sure if that validates anything though. Not bad, but not something I am really interested in.

Terp06 how are you apps going? I been meaning to pm you sometime and get some feedback.
Hope everything is going good. I haven't seen an update post on your process lately. SVP Joined: 11 Mar 2008 Posts: 1634 Location: Southern California Schools: Chicago (dinged), Tuck (November), Columbia (RD) Followers: 8 Kudos [?]: 197 [0], given: 0 Re: McKinsey Compensation [#permalink] ### Show Tags 20 Aug 2008, 21:04 I've been sidetracked the last couple of weeks due to a car wreck (fortunately I'm 100% okay). I plan to put in some serious time towards Wharton and Columbia this weekend. _________________ Check out the new Career Forum http://gmatclub.com/forum/133 Director Joined: 16 May 2008 Posts: 885 Location: Earth Schools: Cornell '11 Followers: 8 Kudos [?]: 136 [0], given: 29 Re: McKinsey Compensation [#permalink] ### Show Tags 21 Aug 2008, 05:03 I feel ya buddy. I had a car wreck during GMAT studies! terp06 wrote: I've been sidetracked the last couple of weeks due to a car wreck (fortunately I'm 100% okay). I plan to put in some serious time towards Wharton and Columbia this weekend. _________________ "George is getting upset!" GMAT Club Premium Membership - big benefits and savings SVP Joined: 11 Mar 2008 Posts: 1634 Location: Southern California Schools: Chicago (dinged), Tuck (November), Columbia (RD) Followers: 8 Kudos [?]: 197 [0], given: 0 Re: McKinsey Compensation [#permalink] ### Show Tags 21 Aug 2008, 07:40 raabenb wrote: I feel ya buddy. I had a car wreck during GMAT studies! terp06 wrote: I've been sidetracked the last couple of weeks due to a car wreck (fortunately I'm 100% okay). I plan to put in some serious time towards Wharton and Columbia this weekend. Ouch, that's even worse. Was everything okay on your end? 
_________________ Check out the new Career Forum http://gmatclub.com/forum/133 Manager Joined: 28 May 2006 Posts: 152 Location: New York, NY Followers: 2 Kudos [?]: 8 [0], given: 0 Re: McKinsey Compensation [#permalink] ### Show Tags 21 Aug 2008, 07:50 terp06 wrote: I've been sidetracked the last couple of weeks due to a car wreck (fortunately I'm 100% okay). I plan to put in some serious time towards Wharton and Columbia this weekend. Good to hear you are ok. Thanks for sharing this. Director Joined: 16 May 2008 Posts: 885 Location: Earth Schools: Cornell '11 Followers: 8 Kudos [?]: 136 [1] , given: 29 Re: McKinsey Compensation [#permalink] ### Show Tags 21 Aug 2008, 07:53 1 This post received KUDOS$12K in damage, no injuries, only $500 co-pay......and still hit my 700 in GMAT... so I guess it could have been worse terp06 wrote: raabenb wrote: I feel ya buddy. I had a car wreck during GMAT studies! terp06 wrote: I've been sidetracked the last couple of weeks due to a car wreck (fortunately I'm 100% okay). I plan to put in some serious time towards Wharton and Columbia this weekend. Ouch, that's even worse. Was everything okay on your end? _________________ "George is getting upset!" GMAT Club Premium Membership - big benefits and savings SVP Joined: 11 Mar 2008 Posts: 1634 Location: Southern California Schools: Chicago (dinged), Tuck (November), Columbia (RD) Followers: 8 Kudos [?]: 197 [0], given: 0 Re: McKinsey Compensation [#permalink] ### Show Tags 21 Aug 2008, 07:55 Mine had over$35K in damage and both cars were totaled. No injuries. Still only a $500 co-pay, still hit a 700+ in the GMAT, and I'm back in action with a new car already Glad to hear you're alright. 
_________________ Check out the new Career Forum http://gmatclub.com/forum/133 SVP Joined: 30 Apr 2008 Posts: 1888 Location: Oklahoma City Schools: Hard Knocks Followers: 39 Kudos [?]: 530 [3] , given: 32 Re: McKinsey Compensation [#permalink] ### Show Tags 21 Aug 2008, 10:17 3 This post received KUDOS My dad is stronger than your dad. _________________ ------------------------------------ J Allen Morris **I'm pretty sure I'm right, but then again, I'm just a guy with his head up his a$$. GMAT Club Premium Membership - big benefits and savings Current Student Joined: 14 Apr 2008 Posts: 453 Schools: F2010 - HBS (R1 - denied w/o interview ), INSEAD (R1 - admitted), Wharton (R1 - waitlisted & ding), Ivey (R2 - admitted w/ 60% tuition) WE 1: 3.5yrs as a Strategy Consultant - Big 4 Followers: 14 Kudos [?]: 36 [0], given: 16 Re: McKinsey Compensation [#permalink] ### Show Tags 21 Aug 2008, 10:18 i am really surprised at the numbers for P and Director. These are much much higher than I thought they would be. terp06 wrote: I saw this on the BW Forums. I have no idea how accurate it is, so take it with a grain of salt. 
From: Starmt
Total Posts: 124
Posted: Aug-19
To: potojr

Actual McK figures:
- Associate (right after MBA): ~US$150k/year ($120k base + $30k bonus)
- EM (2-4 years post MBA): $200-250k/year
- AP (Years 4-5): $300-350k/year
- Partner (Years 6, 7, 8, 9, 10): $450k, $600k, $900k, $1.2M, $1.5M/year
- Director (+10 years): $2-10M/year, depending on tenure and performance (very few people make $10M; the median should be ~$3-4M)

_________________
INSEAD Sept 2010 Interview Invite Nov 5, 2009 Admit & Matriculating
Wharton Sept 2010 Interview Invite Oct 30, 2009 Waitlisted & Ding
Harvard Sept 2010 Ding without Interview
Ivey May 2010 Interview Invite Nov 23, 2009 Admit + $$

GMAT Club Legend
Joined: 10 Apr 2007
Posts: 4318
Location: Back in Chicago, IL
Schools: Kellogg Alum: Class of 2010

Re: McKinsey Compensation [#permalink]

### Show Tags

21 Aug 2008, 10:20

jallenmorris wrote:
My dad is stronger than your dad.

KUDOS

_________________
Kellogg Class of 2010... still active and willing to help. However, I do not do profile reviews, don't offer predictions on chances, and am far too busy to review essays, so save the energy of writing me a PM seeking help for these. If I don't respond to a PM that is not one of the previously mentioned trash-can-destined messages, please don't take it personally. I get so many messages I have a hard time responding to most. The more interesting, compelling, or humorous your message, the more likely I am to respond.

SVP
Joined: 30 Apr 2008
Posts: 1888
Location: Oklahoma City
Schools: Hard Knocks

Re: McKinsey Compensation [#permalink]

### Show Tags

21 Aug 2008, 10:30

I am posting some screen captures of McKinsey information. These are self-reported salaries and bonuses.
Attachment: McKinsey_1.jpg [ 47.33 KiB | Viewed 25296 times ]
Attachment: McKinsey_2.jpg [ 28.65 KiB | Viewed 25277 times ]
Attachment: McKinsey_3.jpg [ 36.82 KiB | Viewed 25276 times ]

_________________
------------------------------------
J Allen Morris
**I'm pretty sure I'm right, but then again, I'm just a guy with his head up his a$$.

SVP
Joined: 11 Mar 2008
Posts: 1634
Location: Southern California
Schools: Chicago (dinged), Tuck (November), Columbia (RD)

Re: McKinsey Compensation [#permalink]

### Show Tags

21 Aug 2008, 10:35

sm332 wrote:
i am really surprised at the numbers for P and Director. These are much much higher than I thought they would be.

Agreed. I always thought that most partners make in the range of $1M with not a lot of variation, not $2-10M. I also wasn't aware of the Director title. I wonder how many Directors there are within an office like Chicago or San Francisco? Or are these more of a global-type of position with no real limits on specific locations?

_________________
Check out the new Career Forum http://gmatclub.com/forum/133
https://www.physicsforums.com/threads/ballistic-pedulum-finding-kinetic-energy-lost.8432/
# Ballistic Pendulum - Finding Kinetic Energy Lost

1. Nov 6, 2003

### ScoutFCM

Here's a problem that I've been having trouble on for a while and seem to be stuck. I was just wondering if someone could guide me or show me how to do this problem.

The ballistic pendulum is a device used to measure the speed of a fast-moving projectile such as a bullet. The bullet is fired into a large block of wood suspended from some light wires. The bullet is stopped by the block, and the entire system swings through the vertical distance h. The mass of the bullet is m1 = 0.068 kg, the mass of the pendulum is m2 = 0.256 kg, and h = 6.2 cm.

Vo = (0.324 kg / 0.068 kg) x (2 x 9.8 m/s^2 x 0.062 m)^(1/2) = 5.25 m/s
KEinitial = 1/2 (0.068 kg) x (5.25 m/s)^2 = 0.937 J
KEfinal = 1/2 (0.324 kg) x (2 x 9.8 m/s^2 x 0.062 m) = 0.197 J

I got that far. I was wondering, how do I find the kinetic energy lost from the info?

2. Nov 6, 2003

### jamesrc

If I'm understanding your problem correctly, you're trying to solve for vo, the initial velocity of the bullet, and the bullet/pendulum comes to rest when it swings up to a height h. Let's call the mass of the bullet m and the mass of the block M, and let's set our datum for potential energy at the initial height of the bullet/pendulum.

First we need to consider the collision of the block and bullet using the conservation of momentum. This will give us the initial velocity of the ballistic pendulum:

m*vo = (m+M)*v

Since we're neglecting things like air resistance and friction at the pendulum pivot, we know that all of this kinetic energy will be converted into potential energy:

0.5*(M+m)*v^2 = (M+m)*g*h

So find the expression for v in terms of vo, then plug into the 2nd equation to solve for vo. If you need the kinetic energy lost in the collision, you can calculate the kinetic energy before and after:

KEb = 0.5*m*vo^2
KEa = 0.5*(M+m)*v^2

and find the difference.
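Putting jamesrc's two steps together with the numbers from the original post gives a quick numerical check (variable names are mine):

```python
import math

m, M, h, g = 0.068, 0.256, 0.062, 9.8   # bullet mass, block mass, rise, gravity (SI)

# Energy conservation on the upswing gives the post-impact speed...
v = math.sqrt(2 * g * h)
# ...and momentum conservation across the impact gives the bullet speed.
v0 = (m + M) / m * v

ke_before = 0.5 * m * v0**2        # bullet KE before impact (~0.94 J)
ke_after = 0.5 * (m + M) * v**2    # system KE just after impact (~0.20 J)
ke_lost = ke_before - ke_after     # energy dissipated in the inelastic collision
print(v0, ke_before, ke_after, ke_lost)
```

This reproduces ScoutFCM's v0 = 5.25 m/s, KE_initial = 0.937 J and KE_final = 0.197 J, and the requested kinetic energy lost comes out to about 0.74 J.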
https://codegolf.stackexchange.com/questions/45496/insertspacesbetweenwords/45498#45498
# insertspacesbetweenwords

In this challenge, your input is a string of lower-case letters like this:

insertspacesbetweenwords

and a dictionary like that from my other challenge. You may assume that the dictionary file contains only lower-case letters and line-feed characters. Furthermore, the last character in the dictionary file is a line-feed character and the entries are sorted alphabetically.

Write a function that, given these two strings, returns a single string containing all possible ways to insert spaces between some of the characters in the input string, such that the words created are all in the dictionary. Each possibility shall be terminated with a line-feed character. Some lines of the output for the example are:

insert spaces between words
insert space s between words
insert spaces between word s
insert space s between word s
insert spaces be tween words

The order of the output lines doesn't matter. This is codegolf, the usual scoring rules apply, standard loopholes are verboten.

• Are trailing spaces acceptable? Feb 8 '15 at 15:41
• I still don't get the input format. Is it the string of letters and then each word in the dictionary? If so, how do we know when the input is done? – KSFT Feb 8 '15 at 15:42
• @KSFT Your input is two strings. One string contains the dictionary, the other contains the word-string where you shall insert spaces. Feb 8 '15 at 15:51
• @Zgarb No. The question says "all possible ways to insert spaces between some of the characters in the input string;" trailing spaces are not between characters of the input string. Feb 8 '15 at 15:54
• Is the word s in your dictionary, but not other letters of the alphabet? Feb 10 '15 at 7:13

# Python 3, 88

f=lambda x,w,s=[]:[x or print(*s)]+[x.find(y)or f(x[len(y):],w,s+[y])for y in w.split()]

String to split is x, the dictionary string is w. The idea is to repeatedly check which strings y of the dictionary are a prefix of the current string x.
The expression x.find(y) evaluates to the falsy 0 only if x starts with y (a y that is not a prefix gives a nonzero index or -1), so the short-circuiting of or makes the function recurse down only when the prefix matches. The prefix is chopped off with x[len(y):] and the list of subwords is updated in s. When the whole string x has been consumed, we print the subwords s, automatically joined by spaces. # Haskell, 113 107 105 88 87 86 bytes Works for empty inputs too (but gives an empty string as output). s!d=unlines[w|w<-map concat$mapM(\c->[[c],[' ',c]])s,all(`elem`lines d).words$w,w>"!"] Defines an infix binary function !. Call it like this: "abababbaab" ! "ab\nba\na\nbab\n" Result: "ab ab ab ba ab\nab a bab ba ab\na bab ab ba ab\na ba bab ba ab\n" ### Explanation The idea is to split the left input string s into words by inserting spaces in all possible ways, and keep those that consist of words of the dictionary d. A leading space is ruled out by comparing to the string "!". s!d= -- Define the string s!d by unlines -- joining with line-breaks [w|w<- -- the strings w obtained by map concat$ -- concatenating each list of strings mapM(\c->[[c],[' ',c]])s, -- obtained by replacing each letter 'c' of s by -- either "c" or " c", all(`elem`lines d). -- such that only lines of d words$w, -- occur as words in w, and w>"!"] -- w is lexicographically larger than "!", -- which here means that w does not begin with a space.
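An ungolfed Python sketch of the same recursion (not from the original answers) may make the idea clearer; it yields the splits instead of printing them:

```python
def splits(s, words, prefix=()):
    """Yield every way to split s into a space-separated sequence of dictionary words."""
    if not s:
        yield " ".join(prefix)
    for w in words:
        # same test as x.find(y) == 0 in the golfed version
        if w and s.startswith(w):
            yield from splits(s[len(w):], words, prefix + (w,))

print(sorted(splits("ab", ["a", "b", "ab"])))  # ['a b', 'ab']
```

When no complete segmentation exists, the generator simply produces nothing.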
http://mathhelpforum.com/calculus/135851-functions.html
1. ## functions Show graphically or otherwise that sin(x) = m(x) has no non-zero solutions for -pi < x < pi when m < 0. How would I do this? 2. I must not be understanding the question correctly. Let m(x) = the constant function -1/2. Then m<0 for all x, and sin(x)=m(x) has the solutions $x=-\frac{\pi}{6}\text{ and }x=-\frac{5\pi}{6}$. Do you mean sin(x)=mx? If so, then for x<0, sin(x)<0<mx and for x>0, mx<0<sin(x), so you have your result. If you post again to this thread, I'll answer as soon as I can. - Hollywood 3. Originally Posted by hollywood I must not be understanding the question correctly. Let m(x) = the constant function -1/2. Then m<0 for all x, and sin(x)=m(x) has the solutions $x=-\frac{\pi}{6}\text{ and }x=-\frac{5\pi}{6}$. Do you mean sin(x)=mx? If so, then for x<0, sin(x)<0<mx and for x>0, mx<0<sin(x), so you have your result. If you post again to this thread, I'll answer as soon as I can. - Hollywood Yeah, I meant sin(x) = mx, sorry. How would I do the question graphically? 4. If you look at the graph of y=sin(x), it has "lobes" in the upper right and lower left quadrants. If m is negative, then y=mx is a line through the origin and lies in the upper left and lower right quadrants. So the only place they can meet is at the origin. Here's a graph with m=-1. Post again in this thread if you're still having trouble.
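Hollywood's sign argument can also be checked numerically. A small sketch (not from the thread), with m = -1 as in the attached graph, samples (-pi, pi) away from the origin:

```python
import math

m = -1.0
# sample (-pi, pi) excluding the origin itself
xs = [k * math.pi / 1000 for k in range(-999, 1000) if k != 0]

# for x > 0: sin(x) > 0 > m*x; for x < 0: m*x > 0 > sin(x),
# so sin(x) - m*x always has the same sign as x and never vanishes
assert all((math.sin(x) - m * x) * x > 0 for x in xs)
print("sin(x) = m*x has no non-zero root on the sampled grid")
```

The only solution on the interval is therefore x = 0, in agreement with the graphical argument.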
http://docs.poliastro.space/en/v0.13.0/examples/Catch%20that%20asteroid!.html
Catch that asteroid!¶ First, we need to increase the timeout to allow the download of data to occur properly [1]: from astropy.utils.data import conf conf.dataurl [1]: 'http://data.astropy.org/' [2]: conf.remote_timeout [2]: 10.0 [3]: conf.remote_timeout = 10000 Then, we do the rest of the imports and create our initial orbits. [4]: from astropy import units as u from astropy.time import Time from astropy.coordinates import solar_system_ephemeris solar_system_ephemeris.set("jpl") from poliastro.bodies import * from poliastro.twobody import Orbit from poliastro.plotting import StaticOrbitPlotter from poliastro.plotting.misc import plot_solar_system EPOCH = Time("2017-09-01 12:05:50", scale="tdb") WARNING: AstropyDeprecationWarning: astropy.extern.six will be removed in 4.0, use the six module directly if it is still needed [astropy.extern.six] [5]: earth = Orbit.from_body_ephem(Earth, EPOCH) earth [5]: 1 x 1 AU x 23.4 deg (ICRS) orbit around Sun (☉) at epoch 2017-09-01 12:05:50.000 (TDB) [6]: earth.plot(label=Earth); /home/lobo/Github/poliastro/src/poliastro/twobody/orbit.py:1163: UserWarning: Frame <class 'astropy.coordinates.builtin_frames.icrs.ICRS'> does not support 'obstime', time values were not returned [7]: florence = Orbit.from_sbdb("Florence") florence [7]: 1 x 3 AU x 22.1 deg (HeliocentricEclipticIAU76) orbit around Sun (☉) at epoch 2458600.5008007586 (TDB) Two problems: the epoch is not the one we desire, and the inclination is with respect to the ecliptic!
[8]: florence.rv() [8]: (<Quantity [-2.76132873e+08, -1.71570015e+08, -1.09377634e+08] km>, <Quantity [13.17478676, -9.82584124, -1.48126637] km / s>) [9]: florence.epoch [9]: <Time object: scale='tdb' format='jd' value=2458600.5008007586> [10]: florence.epoch.iso [10]: '2019-04-27 00:01:09.186' [11]: florence.inc [11]: $22.142394 \; \mathrm{{}^{\circ}}$ We first propagate: [12]: florence = florence.propagate(EPOCH) florence.epoch.tdb.iso [12]: '2017-09-01 12:05:50.000' And now we have to convert to the same frame that the planetary ephemerides are using to make consistent comparisons, which is ICRS: [13]: florence_icrs = florence.to_icrs() florence_icrs.rv() [13]: (<Quantity [ 1.46404253e+08, -5.35752831e+07, -2.05656912e+07] km>, <Quantity [ 7.34329037, 23.47561546, 24.12063695] km / s>) Let us compute the distance between Florence and the Earth: [14]: from poliastro.util import norm [15]: norm(florence_icrs.r - earth.r) - Earth.R [15]: $6967159.9 \; \mathrm{km}$ This value is consistent with what ESA says! $$7\,060\,160$$ km [16]: abs(((norm(florence_icrs.r - earth.r) - Earth.R) - 7060160 * u.km) / (7060160 * u.km)) [16]: $0.013172521 \; \mathrm{}$ [17]: from IPython.display import HTML HTML( ) [17]: And now we can plot! [18]: frame = plot_solar_system(outer=False, epoch=EPOCH) frame.plot(florence_icrs, label="Florence"); /home/lobo/Github/poliastro/src/poliastro/twobody/orbit.py:1163: UserWarning: Frame <class 'astropy.coordinates.builtin_frames.icrs.ICRS'> does not support 'obstime', time values were not returned The difference between doing it well and doing it wrong is clearly visible: [19]: frame = StaticOrbitPlotter() frame.plot(earth, label="Earth") frame.plot(florence, label="Florence (Ecliptic)") frame.plot(florence_icrs, label="Florence (ICRS)"); We can express Florence’s orbit as viewed from Earth. In order to do that, we must set the Earth as the new attractor by making use of the change_attractor() method. 
However Florence is out of Earth’s SOI, meaning that changing the attractor from Sun to Earth has no physical sense. We will make use of force=True argument so this method runs even if we know that we are out of new attractor’s SOI. [20]: florence_hyper = florence.change_attractor(Earth, force=True) /home/lobo/Github/poliastro/src/poliastro/twobody/orbit.py:489: PatchedConicsWarning: Leaving the SOI of the current attractor Previous warning was raised since Florence’s orbit as seen from Earth is hyperbolic. Therefore if user wants to propagate this orbit along time, there will be some point at which the asteroid is out of Earth’s influence (if not already). We now retrieve the ephemerides of the Moon, which are given directly in GCRS: [21]: moon = Orbit.from_body_ephem(Moon, EPOCH) moon [21]: 367937 x 405209 km x 19.4 deg (GCRS) orbit around Earth (♁) at epoch 2017-09-01 12:05 (TDB) [22]: moon.plot(label=Moon); And now for the final plot: [23]: import matplotlib.pyplot as plt frame = StaticOrbitPlotter() # This first plot sets the frame frame.plot(florence_hyper, label="Florence") # And then we add the Moon frame.plot(moon, label=Moon) plt.xlim(-1000000, 8000000) plt.ylim(-5000000, 5000000) plt.gcf().autofmt_xdate() /home/lobo/Github/poliastro/src/poliastro/twobody/orbit.py:1104: OrbitSamplingWarning: anomaly outside range, clipping
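As a quick sanity check of the comparison with ESA's figure above, the relative error can be recomputed with plain Python (no poliastro needed), using the two distances quoted earlier in the notebook:

```python
poliastro_km = 6967159.9   # norm(florence_icrs.r - earth.r) - Earth.R, from above
esa_km = 7060160.0         # closest-approach distance quoted by ESA

rel_err = abs(poliastro_km - esa_km) / esa_km
print(rel_err)  # about 0.0132, i.e. a ~1.3% difference
```

A discrepancy at the percent level is plausible here, since the two values come from different ephemerides and epochs.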
https://socratic.org/questions/how-do-you-find-the-y-intercept-for-y-12x-2-35x-8#589658
# How do you find the y intercept for y = 12x^2 - 35x + 8? Apr 6, 2018 The $y$-intercept is $\left(0 , 8\right)$. #### Explanation: The $y$-intercept is the value of $y$ at which the graph crosses the $y$-axis, i.e. the value of $y$ when $x$ is zero. Since $x$ has to be $0$, let's substitute $0$ for $x$ in the equation: $y = 12 {x}^{2} - 35 x + 8$ $y = 12 {\left(0\right)}^{2} - 35 \left(0\right) + 8$ $y = 0 - 0 + 8$ $y = 8$ Finally, since a $y$-intercept is a point, we must write it as one. So the point is $\left(0 , 8\right)$. If we are to graph this, we can see that the graph does indeed have a $y$-intercept of $\left(0 , 8\right)$: Hope this helps!
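The substitution step can be checked with a one-line sketch:

```python
def f(x):
    # the parabola from the question
    return 12 * x**2 - 35 * x + 8

print(f(0))  # 8, so the y-intercept is the point (0, 8)
```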
https://www.r-bloggers.com/going-back-to-the-basics-the-correlation-coefficient/
# Going back to the basics: the correlation coefficient (This article was first published on R on The broken bridge between biologists and statisticians, and kindly contributed to R-bloggers) # A measure of joint variability In statistics, dependence or association is any statistical relationship, whether causal or not, between two random variables or bivariate data. It is often measured by the Pearson correlation coefficient: $\rho _{X,Y} =\textrm{corr} (X,Y) = \frac {\textrm{cov}(X,Y) }{ \sigma_X \sigma_Y } = \frac{ \frac{1}{n} \sum_{i = 1}^n [(X_i - \mu_X)(Y_i - \mu_Y)] }{ \sigma_X \sigma_Y }$ Other measures of correlation can be thought of, such as the Spearman $$\rho$$ rank correlation coefficient or the Kendall $$\tau$$ rank correlation coefficient. # Assumptions for the Pearson Correlation Coefficient The Pearson correlation coefficient makes a few assumptions, which should be carefully checked. 1. Interval-level measurement. Both variables should be measured on a quantitative scale. 2. Random sampling. Each subject in the sample should contribute one value on X, and one value on Y. The values for both variables should represent a random sample drawn from the population of interest. 3. Linearity. The relationship between X and Y should be linear. 4. Bivariate normal distribution. This means that (i) values of X should form a normal distribution at each value of Y and (ii) values of Y should form a normal distribution at each value of X. # Hypothesis testing It is possible to test the null hypothesis $$r = 0$$ against the alternative $$r \neq 0$$. The test is based on the idea that the amount: $T = \frac{r \sqrt{n - 2}}{\sqrt{1 - r^2}}$ is distributed as a Student's t variable with $$n - 2$$ degrees of freedom. Let's take the two variables 'cyl' and 'gear' from the 'mtcars' data frame. The correlation is: r <- cor(mtcars$cyl, mtcars$gear) r ## [1] -0.4926866 The T statistic is: T <- r * sqrt(32 - 2) / sqrt(1 - r^2) T ## [1] -3.101051 The p-value for the null is: 2 * pt(T, 30) ## [1] 0.004173297 which is clearly highly significant.
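The same computation can be sketched outside R. The following Python fragment (a toy dataset, not mtcars, and not from the original post) reproduces the formulas for the correlation coefficient and for the T statistic:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation from the sums of cross-products and squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

xs = [1, 2, 3, 4, 5]
ys = [2, 1, 4, 3, 5]
r = pearson_r(xs, ys)                                   # 0.8 on this toy data
T = r * math.sqrt(len(xs) - 2) / math.sqrt(1 - r**2)    # t statistic with n-2 df
print(r, T)
```

The p-value would then come from the Student's t distribution with n - 2 degrees of freedom, as in the R call pt(T, 30) above.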
The null can be rejected. As for hypothesis testing, it should be considered that the individuals where couple of measurements were taken should be independent. If they are not, the t test is invalid. I am dealing with this aspect somewhere else in my blog. # Correlation in R We have already seen that we can use the usual function ‘cor(matrix, method=)’. In order to obtain the significance, we can use the ‘rcorr()’ function in the Hmisc package # Correlations with significance levels library(Hmisc) corr2 <- rcorr(as.matrix(mtcars), type="pearson") print(corr2$r, digits = 2) ## mpg cyl disp hp drat wt qsec vs am gear carb ## mpg 1.00 -0.85 -0.85 -0.78 0.681 -0.87 0.419 0.66 0.600 0.48 -0.551 ## cyl -0.85 1.00 0.90 0.83 -0.700 0.78 -0.591 -0.81 -0.523 -0.49 0.527 ## disp -0.85 0.90 1.00 0.79 -0.710 0.89 -0.434 -0.71 -0.591 -0.56 0.395 ## hp -0.78 0.83 0.79 1.00 -0.449 0.66 -0.708 -0.72 -0.243 -0.13 0.750 ## drat 0.68 -0.70 -0.71 -0.45 1.000 -0.71 0.091 0.44 0.713 0.70 -0.091 ## wt -0.87 0.78 0.89 0.66 -0.712 1.00 -0.175 -0.55 -0.692 -0.58 0.428 ## qsec 0.42 -0.59 -0.43 -0.71 0.091 -0.17 1.000 0.74 -0.230 -0.21 -0.656 ## vs 0.66 -0.81 -0.71 -0.72 0.440 -0.55 0.745 1.00 0.168 0.21 -0.570 ## am 0.60 -0.52 -0.59 -0.24 0.713 -0.69 -0.230 0.17 1.000 0.79 0.058 ## gear 0.48 -0.49 -0.56 -0.13 0.700 -0.58 -0.213 0.21 0.794 1.00 0.274 ## carb -0.55 0.53 0.39 0.75 -0.091 0.43 -0.656 -0.57 0.058 0.27 1.000 print(corr2$P, digits = 2) ## mpg cyl disp hp drat wt qsec vs ## mpg NA 6.1e-10 9.4e-10 1.8e-07 1.8e-05 1.3e-10 1.7e-02 3.4e-05 ## cyl 6.1e-10 NA 1.8e-12 3.5e-09 8.2e-06 1.2e-07 3.7e-04 1.8e-08 ## disp 9.4e-10 1.8e-12 NA 7.1e-08 5.3e-06 1.2e-11 1.3e-02 5.2e-06 ## hp 1.8e-07 3.5e-09 7.1e-08 NA 1.0e-02 4.1e-05 5.8e-06 2.9e-06 ## drat 1.8e-05 8.2e-06 5.3e-06 1.0e-02 NA 4.8e-06 6.2e-01 1.2e-02 ## wt 1.3e-10 1.2e-07 1.2e-11 4.1e-05 4.8e-06 NA 3.4e-01 9.8e-04 ## qsec 1.7e-02 3.7e-04 1.3e-02 5.8e-06 6.2e-01 3.4e-01 NA 1.0e-06 ## vs 3.4e-05 1.8e-08 5.2e-06 2.9e-06 1.2e-02 9.8e-04 
1.0e-06 NA ## am 2.9e-04 2.2e-03 3.7e-04 1.8e-01 4.7e-06 1.1e-05 2.1e-01 3.6e-01 ## gear 5.4e-03 4.2e-03 9.6e-04 4.9e-01 8.4e-06 4.6e-04 2.4e-01 2.6e-01 ## carb 1.1e-03 1.9e-03 2.5e-02 7.8e-07 6.2e-01 1.5e-02 4.5e-05 6.7e-04 ## am gear carb ## mpg 2.9e-04 5.4e-03 1.1e-03 ## cyl 2.2e-03 4.2e-03 1.9e-03 ## disp 3.7e-04 9.6e-04 2.5e-02 ## hp 1.8e-01 4.9e-01 7.8e-07 ## drat 4.7e-06 8.4e-06 6.2e-01 ## wt 1.1e-05 4.6e-04 1.5e-02 ## qsec 2.1e-01 2.4e-01 4.5e-05 ## vs 3.6e-01 2.6e-01 6.7e-04 ## am NA 5.8e-08 7.5e-01 ## gear 5.8e-08 NA 1.3e-01 ## carb 7.5e-01 1.3e-01 NA We could also use these functions with two matrices, to obtain the correlations of each column in one matrix with each column in the other # Correlation matrix from mtcars x <- mtcars[1:3] y <- mtcars[4:6] cor(x, y) ## hp drat wt ## mpg -0.7761684 0.6811719 -0.8676594 ## cyl 0.8324475 -0.6999381 0.7824958 ## disp 0.7909486 -0.7102139 0.8879799 # Relationship to slope in linear regression The correlation coefficient and slope in linear regression bear some similarities, as both describe how Y changes when X is changed. However, in correlation, we have two random variables, while in regression we have Y random, X fixed and Y is regarded as a function of X (not the other way round). Without neglecting their different meaning, it may be useful to show the algebraic relationship between the correlation coefficient and the slope in regression. Let’s simulate a dataset with two variables, coming from a multivariate normal distribution, with means respectively equal to 10 and 2, and variance-covariance matrix of: library(MASS) cov <- matrix(c(2.20, 0.48, 0.48, 0.20), 2, 2) cov ## [,1] [,2] ## [1,] 2.20 0.48 ## [2,] 0.48 0.20 We use the ‘mvrnomr()’ function to generate the dataset. 
set.seed(1234) dataset <- data.frame( mvrnorm(n=10, mu = c(10, 2), Sigma = cov) ) names(dataset) <- c("X", "Y") dataset ## X Y ## 1 11.756647 2.547203 ## 2 9.522180 2.199740 ## 3 8.341254 1.862362 ## 4 13.480005 2.772031 ## 5 9.428296 1.573435 ## 6 9.242788 1.861756 ## 7 10.817449 2.343918 ## 8 10.749047 2.451999 ## 9 10.780400 2.436263 ## 10 11.480301 1.590436 The correlation coefficient and slope are as follows: r <- with(dataset, cor(X, Y)) b1 <- coef( lm(Y ~ X, data=dataset) )[2] r ## [1] 0.6372927 b1 ## X ## 0.1785312 The equation for the slope is: $b_1 = \frac{ \sum_{i = 1}^n \left[ ( X-\mu_X )( Y-\mu_Y )\right] }{ \sigma^2_X }$ From there, we see that: $r = b_1 \frac{\sigma_X}{ \sigma_Y }$ and: $b_1 = r \frac{\sigma_Y}{\sigma_X}$ Indeed: sigmaX <- with(dataset, sd(X) ) sigmaY <- with(dataset, sd(Y) ) b1 * sigmaX / sigmaY ## X ## 0.6372927 r * sigmaY / sigmaX ## [1] 0.1785312 It is also easy to see that the correlation coefficient is the slope of regression of standardised Y against standardised X: Yst <- with(dataset, scale(Y, scale=T) ) summary( lm(Yst ~ I(scale(X, scale = T) ), data = dataset) ) ## ## Call: ## lm(formula = Yst ~ I(scale(X, scale = T)), data = dataset) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.082006 -0.067143 -0.036850 0.009214 0.237923 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -5.633e-18 3.478e-02 0.000 1.0000 ## I(scale(X, scale = T)) 1.785e-01 7.633e-02 2.339 0.0475 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.11 on 8 degrees of freedom ## Multiple R-squared: 0.4061, Adjusted R-squared: 0.3319 ## F-statistic: 5.471 on 1 and 8 DF, p-value: 0.04748 # Intra-class correlation (ICC) It describes how strongly units in the same group resemble each other. While it is viewed as a type of correlation, unlike most other correlation measures it operates on data structured as groups, rather than data structured as paired observations. 
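The algebraic relations between the slope and the correlation coefficient can also be verified on a toy dataset in plain Python (not from the original post):

```python
import math

xs = [1, 2, 3, 4, 5]
ys = [2, 1, 4, 3, 5]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)

b1 = sxy / sxx                   # least-squares slope of y on x
r = sxy / math.sqrt(sxx * syy)   # correlation coefficient

# r = b1 * sd(x) / sd(y) and b1 = r * sd(y) / sd(x)
assert abs(r - b1 * math.sqrt(sxx / syy)) < 1e-12
assert abs(b1 - r * math.sqrt(syy / sxx)) < 1e-12
```

Note that the sample-size normalisations cancel in both ratios, so sums of squares can be used directly in place of standard deviations.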
The intra-class correlation coefficient is: $ICC = {\displaystyle {\frac {\sigma _{\alpha }^{2}}{\sigma _{\alpha }^{2}+\sigma _{\varepsilon }^{2}}}}$ where $$\sigma _{\alpha }^{2}$$ is the variance between groups and $$\sigma _{\varepsilon }^{2}$$ is the variance within a group (more precisely, the variance of one observation within a group). The sum of these two variances is the total variance of the observations. In words, the intra-class correlation coefficient measures the joint variability of subjects in the same group (which reflects how different groups are from one another), relative to the total variability of observations. If subjects in one group are very similar to one another (small $$\sigma_{\varepsilon}$$) but groups are very different (high $$\sigma_{\alpha}$$), the ICC is very high. The existence of grouping of residuals is very important in ANOVA, as it means that independence is violated, which calls for the use of a mixed model. But … this is a totally different story …
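A minimal sketch of the ICC definition, with made-up variance components for illustration:

```python
def icc(var_between, var_within):
    # intra-class correlation: between-group variance over total variance
    return var_between / (var_between + var_within)

# similar subjects within groups, very different groups -> high ICC
print(icc(3.0, 1.0))   # 0.75
# noisy groups that barely differ -> low ICC
print(icc(0.5, 4.5))   # 0.1
```

In practice the two variance components would be estimated from grouped data, e.g. from a one-way ANOVA or a mixed model, rather than supplied directly.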
http://mathoverflow.net/questions/96691/determinant-of-the-sum-of-matrices
# Determinant of the sum of matrices Let $D$ be a diagonal matrix and $A$ a Hermitian one. Is there a nontrivial way to calculate the determinant of $A$ from the determinant of $A+D$ and the entries of $D$? It can be assumed that the diagonal entries of $A$ are all zeros. Thank you very much. - This is likely more appropriate for math.stackexchange.com – Samuel Reid May 11 '12 at 21:28 This had been posted on math.stackexchange.com 6 days prior: math.stackexchange.com/questions/141499/… – Jonas Meyer Jan 4 '13 at 6:39 Let $B=A+D$. With $B_1,\dots,B_n$ the columns of $B$ and $d_1,\dots,d_n$ the diagonal entries of $D$, $$\det A=(B_1-d_1e_1)\wedge\dots\wedge (B_n-d_ne_n)$$ so that $\det A$ is an explicit polynomial in $d$, whose constant coefficient is $\det B$ and whose term of highest degree is $(-1)^nd_1\cdots d_n$. For instance, the coefficient of $d_1$ is $-e_1\wedge B_2\wedge \dots\wedge B_n$, and all the coefficients can be expressed explicitly.
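The multilinear expansion can be checked numerically in the 2x2 case, where it reads det(B - D) = det B - d1*b22 - d2*b11 + d1*d2 (a sketch with arbitrary test values):

```python
def det2(m):
    # determinant of a 2x2 matrix given as nested lists
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

B = [[2.0, 3.0], [1.0, 4.0]]
d1, d2 = 0.5, -1.5

# A = B - D with D = diag(d1, d2)
A = [[B[0][0] - d1, B[0][1]], [B[1][0], B[1][1] - d2]]

# det A as a polynomial in (d1, d2): constant term det B, linear terms from
# replacing one column by -d_i e_i, and top term (-1)^n d1...dn (= +d1*d2 here)
expected = det2(B) - d1 * B[1][1] - d2 * B[0][0] + d1 * d2
assert abs(det2(A) - expected) < 1e-12
```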
https://arxiv.org/abs/1107.1290
math-ph # Title:Schrödinger equations, deformation theory and $tt^*$-geometry Authors:Huijun Fan Abstract: This is the first of a series of papers to construct the deformation theory of the form Schrödinger equation, which is related to a section-bundle system $(M,g,f)$, where $(M,g)$ is a noncompact complete Kähler manifold with bounded geometry and $f$ is a holomorphic function defined on $M$. This work is also the first step attempting to understand the whole Landau-Ginzburg B-model including the higher genus invariants. Our work is mainly based on the pioneer work of Cecotti, Cecotti and Vafa \cite{Ce1,Ce2,CV}. Comments: Deformation theory, Landau-Ginzburg B model, 114 pages Subjects: Mathematical Physics (math-ph); Differential Geometry (math.DG) MSC classes: 81T45(primary), 53D45, 53D37(secondary) Cite as: arXiv:1107.1290 [math-ph] (or arXiv:1107.1290v1 [math-ph] for this version) ## Submission history From: Huijun Fan [view email] [v1] Thu, 7 Jul 2011 04:19:42 UTC (104 KB)
https://www.physicsforums.com/threads/fan-curve-question.610502/
# Fan curve question If I had a fan sitting in a small duct, and used several different sized orifice plates to create different restrictions, how would I find the fan curve? I run the fan at the same RPM in each test. The problem I run into is using the equation for orifice flow, which relates pressure drop, area ratio, density, and coefficient of discharge to flow. The coefficient of discharge is a function of the Reynolds number, but if the flow is what I'm trying to measure, then I don't know the Reynolds number. As a consequence, I cannot know the coefficient of discharge, and thus cannot calculate flow for each orifice plate. It seems like a paradox to me. Am I missing something here? Is there an alternative method for generating a fan curve? • Use the hydraulic diameter of the orifice at its exit to compute the Reynolds number. • A fan curve goes with the fan; its manufacturer should have given it.
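One standard way around the circular dependence is fixed-point iteration: guess a discharge coefficient, compute the flow, recompute the Reynolds number, update Cd, and repeat until it settles. A sketch of the idea follows; the Cd(Re) curve below is a made-up placeholder, and a real one would come from the orifice-plate standard or calibration data:

```python
import math

def orifice_flow(dp, rho, area, cd):
    # standard orifice equation; the area-ratio factor is folded into cd here
    return cd * area * math.sqrt(2.0 * dp / rho)

def solve_flow(dp, rho, area, d_h, nu, cd_of_re, cd0=0.6, tol=1e-10):
    """Iterate Cd -> Q -> Re -> Cd until the discharge coefficient converges."""
    cd = cd0
    for _ in range(100):
        q = orifice_flow(dp, rho, area, cd)   # volumetric flow for current Cd guess
        re = (q / area) * d_h / nu            # Re from velocity and hydraulic diameter
        cd_new = cd_of_re(re)
        if abs(cd_new - cd) < tol:
            break
        cd = cd_new
    return q, re

# hypothetical discharge-coefficient correlation, for illustration only
cd_of_re = lambda re: 0.6 + 5.0 / math.sqrt(max(re, 1.0))

q, re = solve_flow(dp=250.0, rho=1.2, area=0.002, d_h=0.05, nu=1.5e-5, cd_of_re=cd_of_re)
```

Because Cd varies weakly with Re, the iteration usually converges in a handful of steps; each orifice plate then yields one (flow, pressure) point on the fan curve.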
http://mathhelpforum.com/calculus/187417-antiderivative-exponential.html
# Math Help - Antiderivative of an Exponential 1. ## Antiderivative of an Exponential Hi all, I know that $\int1/e^{\frac{1}{2}y}dy=-2e^{\frac{-y}{2}}$ but I don't know how it's done? I can integrate the denominator to get $2e^{\frac{1}{2}y}$ but since it's under the "1" I've not seen this before, what rules does it use? Many thanks. 2. ## re: Antiderivative of an Exponential $\displaystyle \frac{1}{e^{\frac{1}{2}y}} = e^{\frac{-y}{2}}$ 3. ## re: Antiderivative of an Exponential Many thanks, once again my rules for powers have let me down.
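The result is easy to sanity-check: differentiating $-2e^{\frac{-y}{2}}$ by the chain rule gives $-2 \cdot (-\frac{1}{2}) e^{\frac{-y}{2}} = e^{\frac{-y}{2}} = 1/e^{\frac{1}{2}y}$, the original integrand. A small numerical check of the same fact:

```python
import math

F = lambda y: -2.0 * math.exp(-y / 2.0)   # candidate antiderivative
f = lambda y: 1.0 / math.exp(y / 2.0)     # original integrand

h = 1e-6
for y in (-1.0, 0.0, 2.5):
    central_diff = (F(y + h) - F(y - h)) / (2.0 * h)
    assert abs(central_diff - f(y)) < 1e-6   # F' matches f up to discretisation error
```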
http://physics.stackexchange.com/questions/52855/forces-acting-on-a-point-mass-in-a-spinning-rigid-body/52857
# Forces acting on a point mass in a spinning rigid body

I have learned that a spinning object will continue spinning even if no force acts on it, and that its tendency to resist changes in rotation is called the moment of inertia. But a single point mass within a spinning rigid body constantly changes direction. How is this possible? Doesn't a change of direction without a force violate Newton's laws?

-

There is a force acting on it: the centripetal force. Usually the spinning body is a circular object, so the centripetal forces on all the point masses sum to zero — for each point mass there is another on the opposite side. The symmetry of the system keeps everything spinning in place.

-

A rigid body is an idealization. For simplicity, one may model the parts of a "rigid body" as held together by springs. When the body rotates, all the springs stretch slightly outwards. If we consider a single mass element of the body, the spring force acting on it provides the centripetal force needed for its circular motion.

-

All the atoms that make up the spinning body carry the angular momentum that was imparted to them when the object was accelerated. When an object is spun up to a given speed, the energy goes into the entire object, assuming it is one solid piece.

-

There are internal forces between atoms that keep each one circling the rotation axis. In fact, if you spin a flywheel fast enough, the interatomic forces may not be strong enough, and the flywheel can tear itself apart.

-

Simplify it. Consider a simple bolas: two equal weights spinning around each other, held together by a cord. Imagine it in space, far away from anything else, to avoid confusion. Each weight has a linear velocity, and through the cord each exerts a centripetal force on the other, producing an acceleration at right angles to the velocity that bends the other's path into a circle. If you draw an imaginary box around the pair, no force crosses the boundary of the box to have any effect on the bolas; but internal to the pair there are forces, accelerations, velocities, momentum, and energy. The energy and momentum are conserved.

-
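The bolas answer above can be checked numerically. The sketch below (with illustrative values for mass, radius, and angular velocity, not taken from the question) computes the centripetal force on each weight as F = −mω²x and confirms two claims: the internal forces on the pair sum to zero, and each force is perpendicular to its weight's velocity.

```python
import numpy as np

# Two equal weights spinning about their common center (the "bolas" example).
# All numbers are illustrative, chosen only for the check below.
m = 2.0        # mass of each weight (kg)
r = 0.5        # distance of each weight from the center (m)
omega = 4.0    # angular velocity (rad/s)

t = 0.3  # any instant will do

# Positions: the weights stay diametrically opposite on a circle of radius r.
p1 = r * np.array([np.cos(omega * t), np.sin(omega * t)])
p2 = -p1

# Velocities: tangent to the circle, v = omega x r (perpendicular to position).
v1 = r * omega * np.array([-np.sin(omega * t), np.cos(omega * t)])

# Centripetal force on each mass points toward the center: F = -m * omega^2 * x
F1 = -m * omega**2 * p1
F2 = -m * omega**2 * p2

# The internal forces are equal and opposite: nothing crosses the "imaginary box".
print(np.allclose(F1 + F2, 0.0))        # True

# The force is at right angles to the velocity: it only bends the path.
print(abs(np.dot(F1, v1)) < 1e-12)      # True

# Magnitude matches the textbook formula F = m * omega^2 * r.
print(np.linalg.norm(F1))               # 16.0
```

The perpendicularity check is the answer's key point: a force at right angles to the velocity changes the direction of motion without changing the speed, which is exactly what each point mass in a spinning body experiences.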
https://www.ipht.fr/en/Phocea/Vie_des_labos/Seminaires/index.php?id=991413
High energy scattering at strong coupling from AdS/CFT

Edmond Iancu, IPhT
Wed, Nov. 18th 2009, 14:15
Salle Claude Itzykson, Bât. 774, Orme des Merisiers

Triggered by the experimental results at RHIC, there is currently great interest in understanding the high-energy interactions of the quark-gluon plasma in a regime where the coupling is effectively strong. This problem is particularly favourable for applications of the AdS/CFT correspondence, in that some crucial properties of QCD that are difficult to accommodate in this framework, such as confinement and the conformal anomaly, become less important at finite temperature and/or high energy.

The AdS/CFT results that I shall review in my talk reveal an interesting physical picture of high-energy interactions at strong coupling, which is however very different from the corresponding picture at weak coupling: the strongly coupled matter contains no pointlike constituents (such as the "valence partons" of QCD); rather, it shows a smooth structure on all resolution scales. Accordingly, a high-energy collision involving this form of matter produces no jets in the final state, only an isotropic distribution of relatively soft hadrons. This also means that an energetic probe propagating through a strongly coupled plasma will lose energy via parton branching as opposed to medium rescattering.

Contact: Ruth BRITTO