New Zealand Level 6 - NCEA Level 1: Energy of Consumption. Interactive practice questions. Use the fact that $1$ kWh $= 3.6\times10^6$ J to calculate how many joules $4$ kWh is equal to. (Easy, approx. 2 minutes.) Use the fact that $1$ kWh $= 3.6\times10^6$ J to calculate how many joules $104.23$ kWh is equal to. An appliance consumes energy at a rate of $1200$ watts. How many joules of energy does it use if it runs for $29$ seconds? A microwave oven uses $0.95$ kilowatts each hour. In a month, this appliance runs for $8$ hours. Outcomes: GM6-2, apply the relationships between units in the metric system, including the units for measuring different attributes and derived measures; 91030, apply measurement in solving problems.
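The conversions in these exercises are single multiplications. A small Python sketch (my own illustration, not part of the original exercises):

```python
# Convert kilowatt-hours to joules: 1 kWh = 3.6e6 J.
KWH_TO_J = 3.6e6

def kwh_to_joules(kwh):
    """Return the energy in joules for a given number of kilowatt-hours."""
    return kwh * KWH_TO_J

def energy_joules(power_watts, seconds):
    """Energy in joules: power (W) times time (s)."""
    return power_watts * seconds

print(kwh_to_joules(4))          # 4 kWh expressed in joules
print(kwh_to_joules(104.23))     # 104.23 kWh expressed in joules
print(energy_joules(1200, 29))   # a 1200 W appliance running for 29 s
print(kwh_to_joules(0.95 * 8))   # microwave: 0.95 kW for 8 hours
```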
## Two applications of Schur’s lemma Posted: July 11, 2010 in Noncommutative Ring Theory Notes, Ring of Endomorphisms Let $k$ be an algebraically closed field, $A$ a $k$ algebra and $V$ a simple $A$ module with $\dim_k V < \infty.$ We know, by Schur’s lemma, that every element of $D = \text{End}_A(V)$ is of the form $\mu 1_D,$ for some $\mu \in k.$ Application 1. If $A$ is commutative, then $\dim_k V = 1.$ Proof. Let $a \in A$ and let $W \neq \{0\}$ be a $k$ subspace of $V.$ Define the map $f: V \longrightarrow V$ by $f(v)=av,$ for all $v \in V.$ Clearly $f$ is $k$ linear and, for any $b \in A$ and $v \in V,$ we have $f(bv)=a(bv)=(ab)v=(ba)v=b(av)=bf(v).$ That means $f \in D$ and hence $f = \mu 1_D,$ for some $\mu \in k.$ Thus if $w \in W,$ then $aw=f(w)=\mu w \in W.$ Since $a \in A$ was arbitrary, $W$ is an $A$ submodule of $V$ and so $W=V,$ because $V$ is simple over $A.$ So every non-zero $k$ subspace of $V$ is equal to $V,$ and hence $\dim_k V = 1.$ Application 2. Let $Z(A)$ be the center of $A.$ For every $a \in Z(A)$ there exists a unique $\mu_a \in k$ such that $av=\mu_a v,$ for all $v \in V,$ and the map $\chi_V : Z(A) \longrightarrow k$ defined by $\chi_V(a)=\mu_a$ is a $k$ algebra homomorphism. Proof. Define the map $f_a : V \longrightarrow V$ by $f_a(v)=av,$ for all $v \in V.$ Then $f_a \in D$ because $a \in Z(A).$ Thus $f_a = \mu_a 1_D,$ for some $\mu_a \in k,$ and therefore $av=f_a(v)=\mu_a v,$ for all $v \in V.$ The uniqueness of $\mu_a$ is clear. To show that $\chi_V$ is a homomorphism, let $\lambda \in k, \ a,b \in Z(A).$ Then $\mu_{\lambda a + b} v= (\lambda a + b)v=\lambda (av) + bv = \lambda \mu_a v + \mu_b v,$ and so $\mu_{\lambda a + b} = \lambda \mu_a + \mu_b.$ Similarly $\mu_{ab} v = (ab)v = a(bv)=a (\mu_b v) = \mu_a (\mu_b v)=(\mu_a \mu_b)v,$ and so $\mu_{ab}=\mu_a \mu_b.$  $\Box$ Definition. The homomorphism $\chi_V$ is called the central character of $V.$
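As a concrete numerical sanity check of Application 2 (my own illustration, not from the post): take $A = M_2(\mathbb{C})$ acting on the simple module $V = \mathbb{C}^2$. The center $Z(A)$ consists of the scalar matrices $\mu I$, and the central character just reads off the scalar, $\chi_V(\mu I) = \mu = \operatorname{tr}(a)/2$. A sketch with numpy:

```python
import numpy as np

# A = M_2(C) acting on the simple module V = C^2. The center Z(A) is the
# scalar matrices mu*I, and chi_V sends mu*I to mu (= trace(a)/n here).
n = 2
mu = 3.5
a = mu * np.eye(n)            # a central element of M_n(C)

v = np.array([1.0, -2.0])     # an arbitrary vector in V
assert np.allclose(a @ v, mu * v)   # a acts as multiplication by chi_V(a) = mu

# chi_V is multiplicative: chi_V(ab) = chi_V(a) chi_V(b).
b = 1.5 * np.eye(n)
assert np.isclose(np.trace(a @ b) / n, (np.trace(a) / n) * (np.trace(b) / n))
```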
# Position of a particle as a function of time is given as $\vec r=\cos\omega t\,\hat i+\sin\omega t\,\hat j$. Choose the correct statement about $\vec r$, $\vec v$ and $\vec a$, where $\vec v$ and $\vec a$ are the velocity and acceleration of the particle at time $t$.

Updated On: 27-06-2022

Text Solution: $\vec v$ is perpendicular to $\vec r$ and $\vec a$ is towards the origin; $\vec v$ and $\vec a$ are perpendicular to $\vec r$; $\vec v$ is parallel to $\vec r$ and $\vec a$ is parallel to $\vec r$; $\vec v$ is perpendicular to $\vec r$ and $\vec a$ is away from the origin.

Solution: The position vector is $\vec r = \cos\omega t\,\hat i + \sin\omega t\,\hat j.$ Differentiating with respect to time (using the chain rule) gives the velocity:

$\vec v = \dfrac{d\vec r}{dt} = \omega(-\sin\omega t\,\hat i + \cos\omega t\,\hat j).$

Differentiating again gives the acceleration:

$\vec a = \dfrac{d\vec v}{dt} = \omega(-\omega\cos\omega t\,\hat i - \omega\sin\omega t\,\hat j) = -\omega^2(\cos\omega t\,\hat i + \sin\omega t\,\hat j) = -\omega^2\,\vec r.$

Since $\vec a = -\omega^2 \vec r$, the acceleration is antiparallel to $\vec r$: it always points towards the origin. To check whether $\vec v$ is perpendicular to $\vec r$, recall that two non-zero vectors are perpendicular exactly when their dot product vanishes:

$\vec v \cdot \vec r = \omega(-\sin\omega t\cos\omega t + \cos\omega t\sin\omega t) = 0.$

Both $\vec v$ and $\vec r$ are non-zero, so $\vec v$ is perpendicular to $\vec r$. Hence $\vec v$ is perpendicular to $\vec r$ and $\vec a$ is towards the origin, which is the first option.
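The kinematics above can also be checked numerically. A minimal sketch with numpy (the values of $\omega$ and $t$ are arbitrary test choices):

```python
import numpy as np

w = 2.0   # angular frequency omega (arbitrary test value)
t = 0.7   # an arbitrary time

# r = cos(wt) i + sin(wt) j, with v = dr/dt and a = dv/dt (chain rule):
r = np.array([np.cos(w*t), np.sin(w*t)])
v = w * np.array([-np.sin(w*t), np.cos(w*t)])
a = w * np.array([-w*np.cos(w*t), -w*np.sin(w*t)])

assert np.isclose(np.dot(v, r), 0.0)   # v is perpendicular to r
assert np.allclose(a, -w**2 * r)       # a = -w^2 r: antiparallel to r, towards the origin
```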
## The Dynamics of Viscous Fibers • This work deals with the mathematical modeling and numerical simulation of the dynamics of a curved inertial viscous Newtonian fiber, which is practically applicable to the description of centrifugal spinning processes of glass wool. Neglecting surface tension and temperature dependence, the fiber flow is modeled as a three-dimensional free boundary value problem via the unsteady incompressible Navier-Stokes equations. From regular asymptotic expansions in powers of the slenderness parameter, leading-order balance laws for mass (cross-section) and momentum are derived that combine the unrestricted motion of the fiber center-line with the inner viscous transport. The physically reasonable form of the one-dimensional fiber model thereby results from the introduction of the intrinsic velocity that characterizes the convective terms. For the numerical simulation of the derived model a finite volume code is developed. The results of the numerical scheme for high Reynolds numbers are validated by comparison with the analytical solution of the inviscid problem. Moreover, the influence of parameters such as viscosity and rotation on the fiber dynamics is investigated. Finally, an application based on industrial data is performed. • German title: Dynamik viskoser Fäden
## Tipping Points in Climate Systems 4 March, 2013 If you’ve just recently gotten a PhD, you can get paid to spend a week this summer studying tipping points in climate systems! They’re having a program on this at ICERM: the Institute for Computational and Experimental Research in Mathematics, in Providence, Rhode Island. It’s happening from July 15th to 19th, 2013. But you have to apply soon, by the 15th of March! For details, see below. But first, a word about tipping points… in case you haven’t thought about them much. ### Tipping Points A tipping point occurs when adjusting some parameter of a system causes it to transition abruptly to a new state. The term refers to a well-known example: as you push more and more on a glass of water, it gradually leans over further until you reach the point where it suddenly falls over. Another familiar example is pushing on a light switch until it ‘flips’ and the light turns on. In the Earth’s climate, a number of tipping points could cause abrupt climate change. They include: • Loss of Arctic sea ice. • Melting of the Greenland ice sheet. • Melting of the West Antarctic ice sheet. • Permafrost and tundra loss, leading to the release of methane. • Boreal forest dieback. • Amazon rainforest dieback. • West African monsoon shift. • Indian monsoon chaotic multistability. • Change in El Niño amplitude or frequency. • Change in formation of Atlantic deep water. • Change in the formation of Antarctic bottom water. • T. M. Lenton, H. Held, E. Kriegler, J. W. Hall, W. Lucht, S. Rahmstorf, and H. J. Schellnhuber, Tipping elements in the Earth’s climate system, Proceedings of the National Academy of Sciences 105 (2008), 1786–1793. Mathematicians are getting interested in how to predict when we’ll hit a tipping point: • Peter Ashwin, Sebastian Wieczorek and Renato Vitolo, Tipping points in open systems: bifurcation, noise-induced and rate-dependent examples in the climate system, Phil. Trans. Roy. Soc. 
A 370 (2012), 1166–1184. Abstract: Tipping points associated with bifurcations (B-tipping) or induced by noise (N-tipping) are recognized mechanisms that may potentially lead to sudden climate change. We focus here on a novel class of tipping points, where a sufficiently rapid change to an input or parameter of a system may cause the system to “tip” or move away from a branch of attractors. Such rate-dependent tipping, or R-tipping, need not be associated with either bifurcations or noise. We present an example of all three types of tipping in a simple global energy balance model of the climate system, illustrating the possibility of dangerous rates of change even in the absence of noise and of bifurcations in the underlying quasi-static system. We can test out these theories using actual data: • J. Thompson and J. Sieber, Predicting climate tipping points as a noisy bifurcation: a review, International Journal of Bifurcation and Chaos 21 (2011), 399–423. Abstract: There is currently much interest in examining climatic tipping points, to see if it is feasible to predict them in advance. Using techniques from bifurcation theory, recent work looks for a slowing down of the intrinsic transient responses, which is predicted to occur before an instability is encountered. This is done, for example, by determining the short-term auto-correlation coefficient ARC in a sliding window of the time series: this stability coefficient should increase to unity at tipping. Such studies have been made both on climatic computer models and on real paleoclimate data preceding ancient tipping events. The latter employ re-constituted time-series provided by ice cores, sediments, etc., and seek to establish whether the actual tipping could have been accurately predicted in advance. One such example is the end of the Younger Dryas event, about 11,500 years ago, when the Arctic warmed by 7 °C in 50 years. 
A second gives an excellent prediction for the end of ’greenhouse’ Earth about 34 million years ago when the climate tipped from a tropical state into an icehouse state, using data from tropical Pacific sediment cores. This prediction science is very young, but some encouraging results are already being obtained. Future analyses will clearly need to embrace both real data from improved monitoring instruments, and simulation data generated from increasingly sophisticated predictive models. The next paper is interesting because it studies tipping points experimentally by manipulating a lake. Doing this lets us study another important question: when can you push a system back to its original state after it’s already tipped? • S. R. Carpenter, J. J. Cole, M. L. Pace, R. Batt, W. A. Brock, T. Cline, J. Coloso, J. R. Hodgson, J. F. Kitchell, D. A. Seekell, L. Smith, and B. Weidel, Early warnings of regime shifts: a whole-ecosystem experiment, Science 332 (2011), 1079–1082. Abstract: Catastrophic ecological regime shifts may be announced in advance by statistical early-warning signals such as slowing return rates from perturbation and rising variance. The theoretical background for these indicators is rich but real-world tests are rare, especially for whole ecosystems. We tested the hypothesis that these statistics would be early-warning signals for an experimentally induced regime shift in an aquatic food web. We gradually added top predators to a lake over three years to destabilize its food web. An adjacent lake was monitored simultaneously as a reference ecosystem. Warning signals of a regime shift were evident in the manipulated lake during reorganization of the food web more than a year before the food web transition was complete, corroborating theory for leading indicators of ecological regime shifts. 
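Both mechanisms in these abstracts can be seen in a toy model. The sketch below (my own illustration, not taken from either paper) uses the standard double-well system $dx/dt = x - x^3 + p$ with weak noise: ramping $p$ slowly through the saddle-node bifurcation at $p = 2/(3\sqrt 3) \approx 0.385$ makes the lower stable state disappear and the system tip to the upper branch, while beforehand the lag-1 autocorrelation in a sliding window creeps toward 1, which is the early-warning indicator described above:

```python
import numpy as np

# B-tipping toy model: dx/dt = x - x^3 + p plus weak noise, with p ramped
# slowly through the saddle-node bifurcation at p = 2/(3*sqrt(3)) ~ 0.385.
rng = np.random.default_rng(1)
dt, noise = 0.01, 0.05

def lag1(x):
    """Lag-1 autocorrelation estimate of a window of the time series."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

p_ramp = np.linspace(0.0, 0.5, 50000)    # slow ramp through the bifurcation
x = np.empty(len(p_ramp))
x[0] = -1.0                               # start on the lower stable branch
for i, p in enumerate(p_ramp[:-1]):
    x[i+1] = x[i] + dt*(x[i] - x[i]**3 + p) + noise*np.sqrt(dt)*rng.standard_normal()

assert x[0] < 0 and x[-1] > 0.5           # the state tips to the upper branch

# Early-warning signal: autocorrelation in a window near the bifurcation
# (p ~ 0.3) exceeds that in a window far from it (p ~ 0.05).
early = lag1(x[3000:8000])
late = lag1(x[28000:33000])
assert late > early
```

Here the rising autocorrelation is the "critical slowing down" signature: as the bifurcation approaches, the restoring force around the equilibrium weakens, so perturbations decay more slowly.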
### IdeaLab program If you’re seriously interested in this stuff, and you recently got a PhD, you should apply to IdeaLab 2013, which is a program happening at ICERM from the 15th to the 19th of July, 2013. Here’s the deal: The Idea-Lab invites 20 early career researchers (postdoctoral candidates and assistant professors) to ICERM for a week during the summer. The program will start with brief participant presentations on their research interests in order to build a common understanding of the breadth and depth of expertise. Throughout the week, organizers or visiting researchers will give comprehensive overviews of their research topics. Organizers will create smaller teams of participants who will discuss, in depth, these research questions, obstacles, and possible solutions. At the end of the week, the teams will prepare presentations on the problems at hand and ideas for solutions. These will be shared with a broad audience including invited program officers from funding agencies. Two Research Project Topics: • Tipping Points in Climate Systems (MPE2013 program) • Towards Efficient Homomorphic Encryption IdeaLab Funding Includes: • Travel support • Six nights accommodations • Meal allowance The Application Process: IdeaLab applicants should be at an early stage of their post-PhD career. Applications for the 2013 IdeaLab are being accepted through MathPrograms.org. Application materials will be reviewed beginning March 15, 2013. ## Successful Predictions of Climate Science 5 February, 2013 guest post by Steve Easterbrook In December I went to the 2012 American Geophysical Union Fall Meeting. I’d like to tell you about the Tyndall lecture given by Ray Pierrehumbert, on “Successful Predictions”. You can watch the whole talk here: But let me give you a summary, with some references. Ray’s talk spanned 120 years of research on climate change. 
The key message is that science is a long, slow process of discovery, in which theories (and their predictions) tend to emerge long before they can be tested. We often learn just as much from the predictions that turned out to be wrong as we do from those that were right. But successful predictions eventually form the body of knowledge that we can be sure about, not just because they were successful, but because they build up into a coherent explanation of multiple lines of evidence. Here are the successful predictions: 1896: Svante Arrhenius correctly predicts that increases in fossil fuel emissions would cause the earth to warm. At that time, much of the theory of how atmospheric heat transfer works was missing, but nevertheless, he got a lot of the process right. He was right that surface temperature is determined by the balance between incoming solar energy and outgoing infrared radiation, and that the balance that matters is the radiation budget at the top of the atmosphere. He knew that the absorption of infrared radiation was due to CO2 and water vapour, and he also knew that CO2 is a forcing while water vapour is a feedback. He understood the logarithmic relationship between CO2 concentrations in the atmosphere and surface temperature. However, he got a few things wrong too. His attempt to quantify the enhanced greenhouse effect was incorrect, because he worked with a 1-layer model of the atmosphere, which cannot capture the competition between water vapour and CO2, and doesn’t account for the role of convection in determining air temperatures. His calculations were incorrect because he had the wrong absorption characteristics of greenhouse gases. And he thought the problem would be centuries away, because he didn’t imagine an exponential growth in use of fossil fuels. Arrhenius, as we now know, was way ahead of his time. Nobody really considered his work again for nearly 50 years, a period we might think of as the dark ages of climate science. 
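The logarithmic CO2 relationship that Arrhenius identified is nowadays often summarized by a simplified fit for radiative forcing, $\Delta F = 5.35 \ln(C/C_0)$ W/m² (Myhre et al., 1998). To be clear, this is a modern expression used here purely as an illustration, not Arrhenius's own formula:

```python
import math

# Simplified logarithmic CO2 forcing fit (Myhre et al. 1998), used only to
# illustrate the logarithmic relationship -- not Arrhenius's own formula.
def co2_forcing(c, c0=280.0):
    """Radiative forcing in W/m^2 relative to a reference concentration c0 (ppm)."""
    return 5.35 * math.log(c / c0)

# The logarithm means each doubling adds the same forcing, ~3.7 W/m^2:
f_doubling = co2_forcing(560.0)
assert abs(f_doubling - 5.35 * math.log(2)) < 1e-12
assert abs(co2_forcing(1120.0) - 2 * f_doubling) < 1e-12
```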
The story perfectly illustrates Paul Hoffman’s tongue-in-cheek depiction of how scientific discoveries work: someone formulates the theory, other scientists then reject it, ignore it for years, eventually rediscover it, and finally accept it. These “dark ages” weren’t really dark, of course—much good work was done in this period. For example: • 1900: Frank Very worked out the radiation balance, and hence the temperature, of the moon. His results were confirmed by Pettit and Nicholson in 1930. • 1902-14: Arthur Schuster and Karl Schwarzschild used a 2-layer radiative-convective model to explain the structure of the sun. • 1907: Robert Emden realized that a similar radiative-convective model could be applied to planets, and Gerard Kuiper and others applied this to astronomical observations of planetary atmospheres. This work established the standard radiative-convective model of atmospheric heat transfer. This treats the atmosphere as two layers; in the lower layer, convection is the main heat transport, while in the upper layer, it is radiation. A planet’s outgoing radiation comes from this upper layer. However, up until the early 1930′s, there was no discussion in the literature of the role of carbon dioxide, despite occasional discussion of climate cycles. In 1928, George Simpson published a memoir on atmospheric radiation, which assumed water vapour was the only greenhouse gas, even though, as Richardson pointed out in a comment, there was evidence that even dry air absorbed infrared radiation. 1938: Guy Callendar is the first to link observed rises in CO2 concentrations with observed rises in surface temperatures. But Callendar failed to revive interest in Arrhenius’s work, and made a number of mistakes in things that Arrhenius had gotten right. Callendar’s calculations focused on the radiation balance at the surface, whereas Arrhenius had (correctly) focussed on the balance at the top of the atmosphere. 
Also, he neglected convective processes, which astrophysicists had already resolved using the radiative-convective model. In the end, Callendar’s work was ignored for another two decades. 1956: Gilbert Plass correctly predicts a depletion of outgoing radiation in the 15 micron band, due to CO2 absorption. This depletion was eventually confirmed by satellite measurements. Plass was one of the first to revisit Arrhenius’s work since Callendar; however, his calculations of climate sensitivity to CO2 were also wrong, because, like Callendar, he focussed on the surface radiation budget, rather than the top of the atmosphere. 1961-2: Carl Sagan correctly predicts very thick greenhouse gases in the atmosphere of Venus, as the only way to explain the very high observed temperatures. His calculations showed that greenhouse gases must absorb around 99.5% of the outgoing surface radiation. The composition of Venus’s atmosphere was confirmed by NASA’s Venus probes in 1967-70. 1959: Bert Bolin and Erik Eriksson correctly predict the exponential increase in CO2 concentrations in the atmosphere as a result of rising fossil fuel use. At that time they did not have good data for atmospheric concentrations prior to 1958, hence their hindcast back to 1900 was wrong, but despite this, their projection for changes forward to 2000 was remarkably good. 1967: Suki Manabe and Dick Wetherald correctly predict that warming in the lower atmosphere would be accompanied by stratospheric cooling. They had built the first completely correct radiative-convective implementation of the standard model applied to Earth, and used it to calculate a +2 °C equilibrium warming for doubling CO2, including the water vapour feedback, assuming constant relative humidity. The stratospheric cooling was confirmed in 2011 by Gillett et al. 
1975: Suki Manabe and Dick Wetherald correctly predict that the surface warming would be much greater in the polar regions, and that there would be some upper troposphere amplification in the tropics. This was the first coupled general circulation model (GCM), with an idealized geography. This model computed changes in humidity, rather than assuming them, as had been the case in earlier models. It showed polar amplification, and some vertical amplification in the tropics. The polar amplification was measured, and confirmed by Serreze et al. in 2009. However, the height gradient in the tropics hasn’t yet been confirmed (nor has it yet been falsified; see Thorne 2008 for an analysis). 1989: Ron Stouffer et al. correctly predict that the land surface will warm more than the ocean surface, and that the southern ocean warming would be temporarily suppressed due to the slower ocean heat uptake. These predictions are correct, although these models failed to predict the strong warming we’ve seen over the Antarctic Peninsula. Of course, scientists often get it wrong: 1900: Knut Ångström incorrectly predicts that increasing levels of CO2 would have no effect on climate, because he thought the effect was already saturated. His laboratory experiments weren’t accurate enough to detect the actual absorption properties, and even if they were, the vertical structure of the atmosphere would still allow the greenhouse effect to grow as CO2 is added. 1971: Rasool and Schneider incorrectly predict that atmospheric cooling due to aerosols would outweigh the warming from CO2. However, their model had some important weaknesses, and was shown to be wrong by 1975. Rasool and Schneider fixed their model and moved on. Good scientists acknowledge their mistakes. 1993: Richard Lindzen incorrectly predicts that warming will dry the troposphere, according to his theory that a negative water vapour feedback keeps climate sensitivity to CO2 really low. 
Lindzen’s work attempted to resolve a long-standing conundrum in climate science. In 1981, the CLIMAP project reconstructed temperatures at the last glacial maximum, and showed very little tropical cooling. This was inconsistent with the general circulation models (GCMs), which predicted substantial cooling in the tropics (e.g. see Broccoli & Manabe 1987). So everyone thought the models must be wrong. Lindzen attempted to explain the CLIMAP results via a negative water vapour feedback. But then the CLIMAP results started to unravel, and newer proxies demonstrated that it was the CLIMAP data that was wrong, rather than the models. It eventually turned out that the models were getting it right, and it was the CLIMAP data and Lindzen’s theories that were wrong. Unfortunately, bad scientists don’t acknowledge their mistakes; Lindzen keeps inventing ever more arcane theories to avoid admitting he was wrong. 1995: John Christy and Roy Spencer incorrectly calculate that the lower troposphere is cooling, rather than warming. Again, this turned out to be wrong, once errors in satellite data were corrected. In science, it’s okay to be wrong, because exploring why something is wrong usually advances the science. But sometimes, theories are published that are so bad, they are not even wrong: 2007: Courtillot et al. predicted a connection between cosmic rays and climate change. But they couldn’t even get the sign of the effect consistent across the paper. You can’t falsify a theory that’s incoherent! Scientists label this kind of thing as “not even wrong”. Finally, there are, of course, some things that scientists didn’t predict. The most important of these is probably the multi-decadal fluctuations in the warming signal. If you calculate the radiative effect of all greenhouse gases, and the delay due to ocean heating, you still can’t reproduce the flat period in the temperature trend that was observed from 1950 to 1970. 
While this wasn’t predicted, we ought to be able to explain it after the fact. Currently, there are two competing explanations. The first is that the ocean heat uptake itself has decadal fluctuations, although models don’t show this. If climate sensitivity is at the low end of the likely range (say 2 °C per doubling of CO2), it’s possible we’re seeing a decadal fluctuation around a warming signal. The other explanation is that aerosols took some of the warming away from GHGs. This explanation requires a higher value for climate sensitivity (say around 3 °C), but with a significant fraction of the warming counteracted by an aerosol cooling effect. If this explanation is correct, it’s a much more frightening world, because it implies much greater warming as CO2 levels continue to increase. The truth is probably somewhere between these two. (See Armour & Roe, 2011 for a discussion.) To conclude, climate scientists have made many predictions about the effect of increasing greenhouse gases that have proven to be correct. They have earned the right to be listened to, but is anyone actually listening? If we fail to act upon the science, will future archaeologists wade through AGU abstracts and try to figure out what went wrong? There are signs of hope: in his re-election acceptance speech, President Obama revived his pledge to take action, saying “We want our children to live in an America that isn’t threatened by the destructive power of a warming planet.” ## Milankovich vs the Ice Ages 30 January, 2013 guest post by Blake Pollard Hi! My name is Blake S. Pollard. I am a physics graduate student working under Professor Baez at the University of California, Riverside. I studied Applied Physics as an undergraduate at Columbia University. 
As an undergraduate my research was more on the environmental side: working as a researcher at the Water Center, part of the Earth Institute at Columbia University, I developed methods using time-series satellite data to keep track of irrigated agriculture over northwestern India for the past decade. I am passionate about physics, but have the desire to apply my skills in more terrestrial settings. That is why I decided to come to UC Riverside and work with Professor Baez on some potentially more practical cross-disciplinary problems. Before starting work on my PhD I spent a year surfing in Hawaii, where I also worked in experimental particle physics at the University of Hawaii at Manoa. My current interests (besides passing my classes) lie in exploring potential applications of the analogy between information and entropy, as well as in understanding parallels between statistical, stochastic, and quantum mechanics. Glacial cycles are one essential feature of Earth’s climate dynamics over timescales on the order of 100s of kiloyears (kyr). It is often accepted as common knowledge that these glacial cycles are in some way forced by variations in the Earth’s orbit. In particular many have argued that the approximate 100 kyr period of glacial cycles corresponds to variations in the Earth’s eccentricity. As we saw in Professor Baez’s earlier posts, while the variation of eccentricity does affect the total insolation reaching Earth, this variation is small. Thus many have proposed the existence of a nonlinear mechanism by which such small variations become amplified enough to drive the glacial cycles. Others have proposed that eccentricity is not primarily responsible for the 100 kyr period of the glacial cycles. Here is a brief summary of some time series analysis I performed in order to better understand the relationship between the Earth’s Ice Ages and the Milankovich cycles. 
I used publicly available data on the Earth’s orbital parameters computed by André Berger (see below for all references). This data includes an estimate of the insolation derived from these parameters, which is plotted below against the Earth’s temperature, as estimated using deuterium concentrations in an ice core from a site in the Antarctic called EPICA Dome C: As you can see, it’s a complicated mess, even when you click to enlarge it! However, I’m going to focus on the orbital parameters themselves, which behave more simply. Below you can see graphs of three important parameters: • obliquity (tilt of the Earth’s axis), • precession (direction the tilted axis is pointing), • eccentricity (how much the Earth’s orbit deviates from being circular). You can click on any of the graphs here to enlarge them: Richard Muller and Gordon MacDonald have argued that another astronomical parameter is important: the angle between the plane of the Earth’s orbit and the ‘invariant plane’ of the solar system. This invariant plane of the solar system depends on the angular momenta of the planets, but roughly coincides with the plane of Jupiter’s orbit, from what I understand. Here is a plot of the orbital plane inclination for the past 800 kyr: One can see from these plots, or from some spectral analysis, that the main periodicities of the orbital parameters are: • Obliquity ~ 42 kyr • Precession ~ 21 kyr • Eccentricity ~ 100 kyr • Orbital plane ~ 100 kyr Of course the curves clearly are not simple sine waves with those frequencies. Fourier transforms give information regarding the relative power of different frequencies occurring in a time series, but there is no information left regarding the time dependence of these frequencies, as the time dependence is integrated out in the Fourier transform. The Gabor transform is a generalization of the Fourier transform, sometimes referred to as the ‘windowed’ Fourier transform. 
For the Fourier transform: $\displaystyle{ F(\omega) = \dfrac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t) e^{-i\omega t} \, dt}$ one may think of $e^{-i\omega t}$, the ‘kernel function’, as the guy acting as your basis element in both spaces. For the Gabor transform, instead of $e^{-i\omega t}$ one defines a family of functions $g_{(b,\omega)}(t) = e^{i\omega(t-b)}g(t-b),$ where $g \in L^{2}(\mathbb{R})$ is called the window function. Typical windows are square windows and triangular (Bartlett) windows, but the most common is the Gaussian: $\displaystyle{ g(t)= e^{-kt^2} }$ which is used in the analysis below. The Gabor transform of a function $f(t)$ is then given by $\displaystyle{ G_{f}(b,\omega) = \int_{-\infty}^\infty f(t) \overline{g(t-b)} e^{-i\omega(t-b)} \, dt }$ Note the output of a Gabor transform, like the Fourier transform, is a complex function. The modulus of this function indicates the strength of a particular frequency in the signal, while the phase carries information about the… well, phase. For example the modulus of the Gabor transform of $\displaystyle{ f(t)=\sin\left(\dfrac{2\pi t}{100}\right) }$ is shown below. For these I used the package Rwave, originally written in S by Rene Carmona and Bruno Torresani; the R port is by Brandon Whitcher. You can see that the line centered at a frequency of .01 corresponds to the function’s period of 100 time units. A Fourier transform would do okay for such a function, but consider now a sine wave whose frequency increases linearly. As you can see below, the Gabor transform of such a function shows the linear increase of frequency with time: The window parameter in both of the above Gabor transforms is 100 time units. Adjusting this parameter affects the vertical blurriness of the Gabor transform. For example here is the same plot as above, but with window parameters of 300, 200, 100, and 50 time units: You can see that as you make the window smaller the line gets sharper, but only to a point. 
When the window becomes approximately smaller than a given period of the signal the line starts to blur again. This makes sense, because you can’t know the frequency of a signal precisely at a precise moment in time… just like you can’t precisely know both the momentum and position of a particle in quantum mechanics! The math is related, in fact. Now let’s look at the Earth’s temperature over the past 800 kyr, estimated from the EPICA ice core deuterium concentrations: When you look at this, first you notice spikes occurring about every 100 kyr. You can also see that the last 5 of these spikes appear to be bigger and more dramatic than the ones occurring before 500 kyr ago. Roughly speaking, each of these spikes corresponds to rapid warming of the Earth, after which occurs slightly less rapid cooling, and then a slow decrease in temperature until the next spike occurs. These are the Earth’s glacial cycles. At the bottom of the curve, where the temperature is about 4 °C cooler than the mean of this curve, glaciers are forming and extending down across the northern hemisphere. The relatively warm periods at the top of the spikes, about 10 °C hotter than the glacial periods, are called the interglacials. You can see that we are currently in the middle of an interglacial, so the Earth is relatively warm compared to the rest of the glacial cycles. Now we’ll take a look at the windowed Fourier transform, or Gabor transform, of this data. The window size for these plots is 300 kyr. Zooming in a bit, one can see a few interesting features in this plot: We see one line at a frequency of about .024 which, with a sampling rate of 1 kyr, corresponds to a period of about 42 kyr, close to the period of obliquity. We also see a few things going on around a frequency of .01, corresponding to a 100 kyr period. The band at .024 appears to be relatively horizontal, indicating an approximately constant frequency. Around the 100 kyr periods there is more going on. 
At a slightly higher frequency, about .015, there appears to be a band of slowly increasing frequency. Also, around .01 it’s hard to say what is really going on. It is possible that we see a combination of two frequency components, one increasing and one decreasing, but almost symmetric. This may just be an artifact of the Gabor transform or of the window and frequency parameters.

The window size for the plots below is slightly smaller, about 250 kyr. If we put the temperature and obliquity Gabor transforms side by side, we see this:

It’s clear the lines at .024 line up pretty well. Doing the same with eccentricity:

Eccentricity does not line up well with temperature in this exercise, though both have bright bands above and below .01. Now for temperature and orbital inclination:

One sees that the frequencies line up better here than for eccentricity, but one has to keep in mind that there is a nonlinear transformation performed on the ‘raw’ orbital plane data to project it down into the ‘invariant plane’ of the solar system. While this is physically motivated, it surely nudges the spectrum.

The temperature data clearly has a component with a period of approximately 42 kyr, matching well with obliquity. If you tilt your head a bit you can also see an indication of a fainter response at a frequency a bit above .04, corresponding roughly to a period just below 25 kyr, close to that of precession.

As far as the 100 kyr period goes, which is the periodicity of the glacial cycles, this analysis confirms much of what is known, namely that we can’t say for sure. Eccentricity seems to line up well with a periodicity of approximately 100 kyr, but on closer inspection there seem to be some discrepancies if you try to understand the glacial cycles as being forced by variations in eccentricity. The orbital plane inclination has a Gabor transform modulus more similar to the temperature’s than eccentricity does.
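The frequency-to-period conversions used in this discussion are simple but worth making explicit: with a sampling interval of 1 kyr, a spectral frequency $f$ (in cycles per sample) corresponds to a period of $1/f$ kyr. A quick sketch (the .042 figure for the precession band is my own illustrative reading of the plot, not a value quoted above):

```python
def period_kyr(freq, sampling_interval_kyr=1.0):
    """Convert a spectral frequency (cycles per sample) to a period in kyr."""
    return sampling_interval_kyr / freq

print(round(period_kyr(0.024), 1))  # 41.7 kyr -- close to obliquity's ~41 kyr
print(round(period_kyr(0.010), 1))  # 100.0 kyr -- the glacial-cycle period
print(round(period_kyr(0.042), 1))  # 23.8 kyr -- roughly the precession band
```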
A good next step would be to look at the relative phases of the orbital parameters versus the temperature, but that’s all for now. If you have any questions or comments or suggestions, please let me know!

### References

The orbital data used above is due to André Berger et al and can be obtained here: Orbital variations and insolation database, NOAA/NCDC/WDC Paleoclimatology.

The temperature proxy is due to J. Jouzel et al, and it’s based on changes in deuterium concentrations from the EPICA Antarctic ice core dating back over 800 kyr. This data can be found here: EPICA Dome C – 800 kyr deuterium data and temperature estimates, NOAA Paleoclimatology.

Here are the papers by Muller and MacDonald that I mentioned:

• Richard Muller and Gordon MacDonald, Glacial cycles and astronomical forcing, Science 277 (1997), 215–218.

• Richard Muller and Gordon MacDonald, Spectrum of 100-kyr glacial cycle: orbital inclination, not eccentricity, PNAS 94 (1997), 8329–8334.

They also have a book:

• Richard Muller and Gordon MacDonald, Ice Ages and Astronomical Causes, Springer, Berlin, 2002.

You can also get files of the data I used here: Berger et al orbital parameter data, with explanatory text here. Jouzel et al EPICA Dome C temperature data, with explanatory text here.

## Anasazi America (Part 2)

24 January, 2013

Last time I told you a story of the American Southwest, starting with the arrival of small bands of hunters around 10,000 BC. I focused on the Anasazi, or ‘ancient Pueblo people’, and I led up to the Late Basketmaker III Era, from 500 to 750 AD.

The big invention during this time was the bow and arrow. Before then, large animals were killed by darts thrown from slings, which required a lot more skill and luck. But even more important was the continuing growth of agriculture: the cultivation of corn, beans and squash. This fueled a period of dramatic population growth. But this was just the start!

### The Pueblo I and II Eras

The Pueblo I Era began around 750 AD.
At this time people started living in ‘pueblos’: houses with flat roofs held up by wooden poles. Towns became bigger, holding up to 600 people. But these towns typically lasted only 30 years or so. It seems people needed to move when conditions changed.

Starting around 800 AD, the ancient Pueblo people started building ‘great houses’: multi-storied buildings with high ceilings, rooms much larger than those in domestic dwellings, and elaborate subterranean rooms called ‘kivas’. And around 900 AD, people started building houses with stone roofs. We call this the start of the Pueblo II Era.

The center of these developments was the Chaco Canyon area in New Mexico: Chaco Canyon is 125 kilometers east of Canyon de Chelly. Unfortunately, I didn’t see it on my trip—I wanted to, but we didn’t have time.

By 950 AD, there were pueblos on every ridge and hilltop of the Chaco Canyon area. Due to the high population density and unpredictable rainfall, this area could no longer provide enough meat to sustain the needs of the local population. Apparently they couldn’t get enough fat, salt and minerals from a purely vegan diet—a shortcoming we have now overcome! Yet the population continued to grow until 1000 AD. In his book Anasazi America, David Stuart wrote:

Millions of us buy mutual funds, believing the risk is spread among millions of investors and a large “basket” of fund stocks. Millions divert a portion of each hard-earned paycheck to purchase such funds for retirement. “Get in! Get in!” hawk the TV ads. “The market is going up. Historically, it always goes up in the long haul. The average rate of return this century is 9 percent per year!” Every one of us who does that is a Californian at heart, believing in growth, risk, power. It works—until an episode of too-rapid expansion in the market, combined with brutal business competition, threatens to undo it.
That is about what it was like, economically, at Chaco Canyon in the year 1000—rapid agricultural expansion, no more land to be gotten, and deepening competition. Don’t think of it as “romantic” or “primitive”. Think of it as just like 1999 in the United States, when the Dow Jones Industrial Average hit 11,000 and 30 million investors held their breath to see what would happen next.

### The Chaco phenomenon

In 1020 the rainfall became more predictable. There wasn’t more rain, it was simply less erratic. This was good for the ancient Pueblo people. At this point the ‘Chaco phenomenon’ began: an amazing flowering of civilization. We see this in places like Pueblo Bonito, the largest great house in Chaco Canyon:

Pueblo Bonito was founded in the 800s. But starting in 1020 it grew immensely, and it kept growing until 1120. By this time it had 700 rooms, nearly half devoted to grain storage. It also had 33 kivas, which are the round structures you see here.

But Pueblo Bonito is just one of a dozen great houses built in Chaco Canyon by 1120. About 215 thousand ponderosa pine trees were cut down in this building spree! Stuart estimates that building these houses took over 2 million man-hours of work. They also built about 650 kilometers of roads! Most of these connect one great house to another… but some mysteriously seem to go to ‘nowhere’.

By 1080, however, the summer rainfall had started to decline. And by 1090 there were serious summer droughts lasting for five years. We know this sort of thing from tree rings: there are enough ponderosa logs and the like that archaeologists have built up a detailed year-by-year record. Thanks to overpopulation and these droughts, Chaco Canyon civilization was in serious trouble at this point, but it charged ahead:

Parts of Chacoan society were already in deep trouble after AD 1050 as health and living conditions progressively eroded in the southern districts’ open farming communities.
The small farmers in the south had first created reliable surpluses to be stored in the great houses. Ultimately, it was the increasingly terrible conditions of those farmers, the people who grew the corn, that had made Chacoan society so fatally vulnerable. They simply got back too little from their efforts to carry on. [....]

Still, the great-house dwellers didn’t merely sit on their hands. As some farms failed, they used farm labor to expand roads, rituals, and great houses. This prehistoric version of a Keynesian growth model apparently alleviated enough of the stresses and strains to sustain growth through the 1070s. Then came the waning rainfall of the 1080s, followed by drought in the 1090s. Circumstances in farming communities worsened quickly and dramatically with this drought; the very survival of many was at stake. The great-house elites at Chaco Canyon apparently responded with even more roads, rituals, and great houses. This was actually a period of great-house and road infrastructure “in-fill”, both in and near established open communities.

In a few years, the rains returned. This could not help but powerfully reinforce the elites’ now well-established, formulaic response to problems. But roads, rituals, and great houses simply did not do enough for the hungry farmers who produced corn and pottery. As the eleventh century drew to a close, even though the rains had come again, they walked away, further eroding the surpluses that had fueled the system. Imagine it: the elites must have believed the situation was saved, even as more farmers gave up in despair.

Inexplicably, they never “exported” the modest irrigation system that had caught and diverted midsummer runoff from the mesa tops at Chaco Canyon and made local fields more productive. Instead, once again the elites responded with the sacred formula—more roads, more rituals, more great houses.
So, Stuart argues that the last of the Chaco Canyon building projects were “the desperate economic reactions of a fragile and frightened society”. Regardless of whether this is true, we know that starting around 1100 AD, many of the ancient Pueblo people left the Chaco Canyon area. Many moved upland, to places with more rain and snow. Instead of great houses, many returned to building the simpler pit houses of old.

Tribes descending from the ancient Pueblo people still have myths about the decline of the Chaco civilization. While such tales should be taken with a huge grain of salt, they are too fascinating not to repeat. Here are two quotes:

In our history we talk of things that occurred a long time ago, of people who had enormous amounts of power, spiritual power and power over people. I think that those kinds of people lived here in Chaco…. Here at Chaco there were very powerful people who had a lot of spiritual power, and these people probably used their power in ways that caused things to change, and that may have been one of the reasons why the migrations were set to start again, because these people were causing changes that were never meant to occur.

My response to the canyon was that some sensibility other than my Pueblo ancestors had worked on the Chaco great houses. There were the familiar elements such as the nansipu (the symbolic opening into the underworld), kivas, plazas and earth materials, but they were overlain by a strictness and precision of design that was unfamiliar…. It was clear that the purpose of these great villages was not to restate their oneness with the earth but to show the power and specialness of humans… a desire to control human and natural resources… These were men who embraced a social-political-religious hierarchy and envisioned control and power over places, resources and people.

These quotes are from an excellent book on the changing techniques and theories of archaeologists of the American Southwest:

• Stephen H.
Lekson, A History of the Ancient Southwest, School for Advanced Research, Santa Fe, New Mexico, 2008.

What these quotes show, I think, is that the sensibility of current-day Pueblo people is very different from that of the people who built the great houses of Chaco Canyon. According to David Stuart, the Chaco civilization was a ‘powerful’ culture, while their descendants became an ‘efficient’ culture:

… a powerful society (or organism) captures more energy and expends (metabolizes) it more rapidly than an efficient one. Such societies tend to be structurally more complex, more wasteful of energy, more competitive, and faster paced than an efficient one. Think of modern urban America as powerful, and you will get the picture. In contrast, an efficient society “metabolizes” its energy more slowly, and so it is structurally less complex, less wasteful, less competitive, and slower. Think of Amish farmers in Pennsylvania or contemporary Pueblo farms in the American Southwest. In competitive terms, the powerful society has an enormous short-term advantage over the efficient one if enough energy is naturally available to “feed” it, or if its technology and trade can bring in energy rapidly enough to sustain it. But when energy (food, fuel and resources) becomes scarce, or when trade and technology fail, an efficient society is advantageous because its simpler, less wasteful structure is more easily sustained in times of scarcity.

### The Pueblo III Era, and collapse

By 1150 AD, some of the ancient Pueblo people began building cliff dwellings at higher elevations—like Mesa Verde in Colorado, shown above. This marks the start of the Pueblo III Era. But this era lasted a short time. By 1280, Mesa Verde was deserted!

Some of the ruins in Canyon de Chelly also date to the Pueblo III Era. For example, the White House Ruins were built around 1200. Here are some of my pictures of this marvelous place.
Click to enlarge:

But again, they were deserted by the end of the Pueblo III Era. Why did the ancient Pueblo people move to cliff dwellings? And why did they move out so soon? Nobody is sure.

Cliff dwellings are easy to defend against attack. Built into the south face of a cliff, they catch the sun in winter to stay warm—it gets cold here in winter!—but they stay cool when the sun is straight overhead in summer. These are good reasons to build cliff dwellings. But these reasons don’t explain why cliff dwellings were so popular from 1150 to 1280, and then abandoned!

One important factor seems to be this: there was a series of severe droughts starting around 1275. There were also raids from other tribes: speakers of Na-Dené languages, who eventually became the current-day Navajo inhabitants of this area. But drought alone may be unable to explain what happened.

There have been some fascinating attempts to model the collapse of the Anasazi culture. One is called the Artificial Anasazi Project. It used ‘agent-based modeling’ to study what the ancient Pueblo people did in Long House Valley, Arizona, from 200 to 1300. The Villages Project, a collaboration of Washington State University and the Crow Canyon Archaeological Center, focused on the region near Mesa Verde. Quoting Stephen Lekson’s book:

Both projects mirrored actual settlement patterns from 800 to 1250 with admirable accuracy. Problems arose, however, with the abandonments of the regions, in both cases after 1250. There were unexplained exceptions, misfits between the models and reality. Those misfits were not minor. Neither model predicted complete abandonment. Yet it happened. That’s perplexing.
In the Scientific American summary of the Long House Valley model, Kohler, Gumerman, and Reynolds write, “We can only conclude that sociopolitical, ideological or environmental factors not included in our model must have contributed to the total depopulation of the valley.” Similar conundrums beset the Villages Project: “None of our simulations terminated with a population decline as dramatic as what actually happened in the Mesa Verde region in the late 1200s.”

These simulation projects look interesting! Of course they leave out many factors, but that’s okay: it suggests that one of those factors could be important in understanding the collapse. For more info, click on the links. Also try this short review by the author of a famous book on why civilizations collapse:

• Jared Diamond, Life with the artificial Anasazi, Nature 419 (2002), 567–569.

From this article, here are the simulated versus ‘actual’ populations of the ancient Pueblo people in Long House Valley, Arizona, from 800 to 1350 AD:

The so-called ‘actual’ population is estimated using the number of house sites that were active at a given time, assuming five people per house. This graph gives a shocking and dramatic ending to our tale! Let’s hope our current-day tale doesn’t end so abruptly, because in abrupt transitions much gets lost. But of course the ancient Pueblo people didn’t disappear. They didn’t all die. They became an ‘efficient’ society: they learned to make do with diminished resources.

## Why It’s Getting Hot

22 January, 2013

The Berkeley Earth Surface Temperature project concludes: carbon dioxide concentration and volcanic activity suffice to explain most of the changes in earth’s surface temperature from 1751 to 2011. Carbon dioxide increase explains most of the warming; volcanic outbursts explain most of the bits of sudden cooling. The fit is not improved by the addition of a term for changes in the behavior of the Sun! For details, see:

• Robert Rohde, Richard A.
Muller, Robert Jacobsen, Elizabeth Muller, Saul Perlmutter, Arthur Rosenfeld, Jonathan Wurtele, Donald Groom and Charlotte Wickham, A new estimate of the average earth surface land temperature spanning 1753 to 2011, Geoinformatics & Geostatistics: An Overview 1 (2012).

The downward spikes are explained nicely by volcanic activity. For example, you can see the 1815 eruption of Tambora in Indonesia, which blanketed the atmosphere with ash. 1816 was called The Year Without a Summer: frost and snow were reported in June and July in both New England and Northern Europe! Average global temperatures dropped 0.4–0.7 °C, resulting in major food shortages across the Northern Hemisphere. Similarly, the dip in 1783–1785 seems to be due to Grímsvötn in Iceland. (Carbon dioxide goes up a tiny bit in volcanic eruptions, but that’s mostly irrelevant. It’s the ash and sulfur dioxide, forming sulfuric acid droplets that help block incoming sunlight, that really matter for volcanoes!)

It’s worth noting that they get their best fit if each doubling of carbon dioxide concentration causes a 3.1 ± 0.3 °C increase in land temperature. This is consistent with the 2007 IPCC report’s estimate of a 3 ± 1.5 °C warming for land plus oceans when carbon dioxide doubles. This quantity is called the climate sensitivity, and determining it is very important. They also get their best fit if each extra 100 gigatonnes of atmospheric sulfates (from volcanoes) causes 1.5 ± 0.5 °C of cooling.

They also look at the left-over temperature variations that are not explained by this simple model: 3.1 °C of warming with each doubling of carbon dioxide, and 1.5 °C of cooling for each extra 100 gigatonnes of atmospheric sulfates. Here’s what they get:

The left-over temperature variations, or ‘residuals’, are shown in black, with error bars in gray. On top is the annual data; on the bottom you see a 10-year moving average.
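The two-term fit just described (3.1 °C of land warming per doubling of carbon dioxide, and 1.5 °C of cooling per 100 gigatonnes of sulfates) is simple enough to sketch in a few lines of Python. The pre-industrial CO2 baseline of 278 ppm is my own illustrative assumption, not a number from the paper:

```python
import math

# Sketch of the BEST two-term fit. The coefficients are from the text;
# the 278 ppm pre-industrial CO2 baseline is an illustrative assumption.
SENSITIVITY = 3.1        # deg C of land warming per doubling of CO2
SULFATE_COOLING = 1.5    # deg C of cooling per 100 Gt of atmospheric sulfates

def land_temp_anomaly(co2_ppm, sulfates_gt=0.0, co2_baseline=278.0):
    return (SENSITIVITY * math.log2(co2_ppm / co2_baseline)
            - SULFATE_COOLING * sulfates_gt / 100.0)

# Doubling CO2 with no volcanic aerosols gives the sensitivity itself:
print(round(land_temp_anomaly(556.0), 2))                     # 3.1
# 100 Gt of sulfates cancels about half of that warming:
print(round(land_temp_anomaly(556.0, sulfates_gt=100.0), 2))  # 1.6
```

The logarithm is the key point: each doubling of concentration adds the same temperature increment, which is why the sensitivity is quoted ‘per doubling’ rather than per ppm.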
The red line is an index of the Atlantic Multidecadal Oscillation, a fluctuation in the sea surface temperature in the North Atlantic Ocean with a rough ‘period’ of 70 years. Apparently the BEST team places more weight on the Atlantic Multidecadal Oscillation than most climate scientists do. Most consider the [El Niño Southern Oscillation](http://www.azimuthproject.org/azimuth/show/ENSO) to be more important in explaining global temperature variations! I haven’t seen why the BEST team prefers to focus attention on the Atlantic Multidecadal Oscillation. I’d like to see some more graphs…

## Anasazi America (Part 1)

20 January, 2013

A few weeks ago I visited Canyon de Chelly, which is home to some amazing cliff dwellings. I took a bunch of photos, like this picture of the so-called ‘First Ruin’. You can see them and read about my adventures starting here:

• John Baez, Diary, 21 December 2012.

Here I’d like to talk about what happened to the civilization that built these cliff dwellings! It’s a fascinating tale full of mystery… and it’s full of lessons for the problems we face today, involving climate change, agriculture, energy production, and advances in technology.

First let me set the stage! Canyon de Chelly is in the Navajo Nation, a huge region with its own laws and government, not exactly part of the United States, located at the corners of Arizona, New Mexico, and Utah:

The hole in the middle is the Hopi Reservation. The Hopi are descended from the people who built the cliff dwellings in Canyon de Chelly. Those people are often called the Anasazi, but these days the favored term is ‘ancient Pueblo peoples’. The Hopi speak a Uto-Aztecan language, and so presumably did the Anasazi. Uto-Aztecan speakers were spread out like this shortly before the Europeans invaded:

with a bunch more down in what’s now Mexico. The Navajo are part of a different group, the Na-Dené language group:

So, the Navajo aren’t a big part of the story in this fascinating book:

• David E.
Stuart, Anasazi America, University of New Mexico Press, Albuquerque, New Mexico, 2000.

Let me summarize this story here!

### After the ice

The last Ice Age, called the Wisconsin glaciation, began around 70,000 BC. The glaciers reached their maximum extent about 18,000 BC, with ice sheets down to what are now the Great Lakes. In places the ice was over 1.6 kilometers thick! Then it started warming up. By 16,000 BC people started cultivating plants and herding animals.

Around 12,000 BC, before the land bridge connecting Siberia and Alaska melted, people from the so-called Clovis culture came to the Americas. It seems likely that other people got to America earlier, moving down the Pacific coast before the inland glaciers melted. But even if the Clovis culture didn’t get there first, their arrival was a big deal. They can be traced by their distinctive and elegant spear tips, called Clovis points:

After they arrived, the Clovis people broke into several local cultures, roughly around the time of the Younger Dryas cold spell beginning around 10,800 BC. By 10,000 BC, small bands of hunters roamed the Southwest, first hunting mammoths, huge bison, camels, horses and elk, and later—perhaps because they killed off the really big animals—the more familiar bison, deer, elk and antelopes we see today. For about 5000 years the population of current-day New Mexico probably fluctuated between 2 and 6 thousand people—a density of just one person per 50 to 150 square kilometers! Changes in culture and climate were slow.

### The Altithermal

Around 5,000 BC, the climate near Canyon de Chelly began to warm up, dry out, and become more strongly seasonal. This epoch is called the ‘Altithermal’. The lush grasslands that once supported huge herds of bison began to disappear in New Mexico, and those bison moved north. By 4,000 BC, the area near Canyon de Chelly had become very hot, with summers often reaching 45 °C, and sometimes 57 °C at the ground’s surface.
The people in this area responded in an interesting way: by focusing much more on gathering, and less on hunting. We know this from their improved tools for processing plants, especially yucca roots. The yucca is now the state flower of New Mexico. Here’s a picture taken by Stan Shebs:

David Stuart writes:

At first this might seem an unlikely response to unremitting heat and aridity. One could argue that the deteriorating climate might first have forced people to reduce their numbers by restricting sex, marriage, and child-bearing so that survivors would have enough game. That might well have been the short-term solution [....] When once-plentiful game becomes scarce, hunter-gatherers typically become extremely conservative about sex and reproduction. [...]

But by early Archaic times, the change in focus to plant resources—undoubtedly by necessity—had actually produced a marginally growing population in the San Juan Basin and its margins in spite of climatic adversity. [....] Ecologically, these Archaic hunters and gatherers had moved one entire link down the food chain, thereby eliminating the approximately 90-percent loss in food value that occurs when one feeds on an animal that is a plant-eater. [....]

This is sound ecological behavior—they could not have found a better basic strategy even if they had the advantage of a contemporary university education. Do I attribute this to their genius? No. It is simply that those who stubbornly clung to the traditional big game hunting of their Paleo-Indian ancestors could not prosper, so they left fewer descendants. Those more willing to experiment, or more desperate, fared better, so their behavior eventually became traditional among their more numerous descendants.

### The San Jose Period

By 3,000 BC the Altithermal was ending, big game was returning to the Southwest, yet the people retained their new-found agricultural skills. They also developed a new kind of dart for hunting, the ‘San Jose point’.
So, this epoch is called the ‘San Jose period’. Populations rose to maybe 15 to 30 thousand people in New Mexico, a vast increase over the earlier level of 2–6 thousand. But still, that’s just one person per 10 or 20 square kilometers!

The population increased until around 2,000 BC. At this point population pressures became acute… but two lucky things happened. First, the weather got wetter. Second, corn was introduced from Mexico. The first varieties had very small cobs, but gradually they were improved. The wet weather lasted until around 500 BC. And at just about this time, beans were introduced, also from Mexico. Their addition was critical:

Corn alone is a costly food to metabolize. Its proteins are incomplete and hard to synthesize. Beans contain large amounts of lysine, the amino acid missing from corn and squash. In reasonable balance, corn, beans and squash together provide complementary amino acids and form the basis of a nearly complete diet. This diet lacks only the salt, fat and mineral nutrients found in most meats to be healthy and complete.

By 500 BC, nearly all the elements for accelerating cultural and economic changes were finally in place—a fairly complete diet that could, if rainfall cooperated, largely replace the traditional foraging one; several additional, modestly larger-cobbed varieties of corn that not only prospered under varying growing conditions but also provided a bigger harvest; a population large enough to invest the labor necessary to plant and harvest; nearly 10 centuries of increasing familiarity with cultigens; and enhanced food-processing and storage techniques. Lacking were compelling reasons to transform an Archaic society accustomed to earning a living with approximately 500 hours of labor a year into one willing to invest the 1,000 to 2,000 hours common to contemporary hand-tool horticulturalists.

Nature then stepped in with one persuasive, though not compelling, reason for people to make the shift.
Namely, droughts! Precipitation became very erratic for about 500 years. People responded in various ways. Some went back to the old foraging techniques. Others improved their agricultural skills, developing better breeds of corn, and tricks for storing water. The latter are the ones whose populations grew.

This led to the Basketmaker culture, where people started living in dugout ‘pit houses’ in small villages. More precisely, the Late Basketmaker II Era lasted from about 50 AD to 500 AD. New technologies included the baskets that gave this culture its name:

Pottery entered the scene around 300 AD. Have you ever thought about how important this is? Before pots, people had to cook corn and beans by putting rocks in fires and then transferring them to holes containing water!

Now, porridge and stews could be put to boil in a pot set directly into a central fire pit. The amount of heat lost and fuel used in the old cooking process—an endless cycle of collecting, heating, transferring, removing and replacing hot stones just to boil a few quarts of water—had always been enormous. By comparison, cooking with pots became quick, easy, and far more efficient. In a world more densely populated, firewood had to be gathered from greater distances. Now, less of it was needed. And there was newer fuel to supplement it—dried corncobs.

Not all the changes were good. Most adult skeletons from this period show damage from long periods spent stooping—either using a stone hoe to tend garden plots, or grinding corn while kneeling. And as they ate more corn and beans and fewer other vegetables, mineral deficiencies became common. Extreme osteoporosis afflicted many of these people: we find skulls that are porous, and broken bones. It reminds me a little of the plague of obesity, with its many side effects, afflicting modern Americans as we move to a culture where most people work sitting down.

On the other hand, there was a massive growth in population.
The number of pit-house villages grew nine-fold from 200 AD to 700 AD! It must have been an exciting time.

In only some 25 generations, these folks had transformed themselves from foragers and hunters with a small economic sideline in corn, beans and squash into semisedentary villagers who farmed and kept up their foraging to fill in the economic gaps.

But this was just the beginning. By 1020, the ancient Pueblo people would begin to build housing complexes that would remain the biggest in North America until the 1880s! This happened in Chaco Canyon, 125 kilometers east of Canyon de Chelly. Next time I’ll tell you the story of how that happened, and how later, around 1200, these people left Chaco Canyon and started to build cliff dwellings.

For now, I’ll leave you with some pictures I took of the most famous cliff dwelling in Canyon de Chelly: the ‘White House Ruins’. Click to enlarge:

## Our Galactic Environment

27 December, 2012

While I’m focused on the Earth these days, I can’t help looking up and thinking about outer space now and then. So, let me tell you about the Kuiper Belt, the heliosphere, the Local Bubble—and what may happen when our Solar System hits the next big cloud! Could it affect the climate on Earth?

### New Horizons

We’re going on a big adventure! New Horizons has already taken great photos of volcanoes on Jupiter’s moon Io. It’s already closer to Pluto than we’ve ever been. And on 14 July 2015 it will fly by Pluto and its moons Charon, Hydra, and Nix! But that’s just the start: then it will go to see some KBOs!

The Kuiper Belt stretches from the orbit of Neptune to almost twice as far from the Sun. It’s a bit like the asteroid belt, but much bigger: 20 times as wide and 20–200 times as massive. But while most asteroids are made of rock and metal, most Kuiper Belt Objects or ‘KBOs’ are composed largely of frozen methane, ammonia and water.

The Earth’s orbit has a radius of one astronomical unit, or AU.
The Kuiper Belt goes from 30 AU to 50 AU out. For comparison, the heliosphere, the region dominated by the energetic fast-flowing solar wind, fizzles out around 120 AU. That’s where Voyager 1 is now. New Horizons will fly through the Kuiper Belt from 2016 to 2020… and, according to plan, its mission will end in 2026. How far out will it be then? I don’t know! Of course it will keep going… For more see:

### The heliosphere

Here’s a young star zipping through the Orion Nebula. It’s called LL Orionis, and this picture was taken by the Hubble Telescope in February 1995:

The star is moving through the interstellar gas at supersonic speeds. So, when this gas hits the fast wind of particles shooting out from the star, it creates a bow shock half a light-year across. It’s a bit like when a boat moves through the water faster than the speed of water waves.

There’s also a bow shock where the solar wind hits the Earth’s magnetic field. It’s about 17 kilometers thick, and located about 90,000 kilometers from Earth:

For a long time scientists thought there was a bow shock where nearby interstellar gas hit the Sun’s solar wind. But this was called into question this year when a satellite called the Interstellar Boundary Explorer (IBEX) discovered that the Solar System is moving more slowly relative to this gas than we thought!

IBEX isn’t actually going to the edge of the heliosphere—it’s in Earth orbit, looking out. But Voyager 1 seems close to hitting the heliopause, where the Sun’s solar wind comes to a stop. And it’s seeing strange things!

### The Interstellar Boundary Explorer

The Sun shoots out a hot wind of ions moving at 300 to 800 kilometers per second. They form a kind of bubble in space: the heliosphere. These charged particles slow down and stop when they hit the hydrogen and helium atoms in interstellar space.
But those atoms can penetrate the heliosphere, at least when they’re neutral—and a near-earth satellite called IBEX, the Interstellar Boundary Explorer, has been watching them! And here’s what IBEX has seen: In December 2008, IBEX first started detecting energetic neutral atoms penetrating the heliosphere. By October 2009 it had collected enough data to see the ‘IBEX ribbon’: an unexpected arc-shaped region in the sky that has many more energetic neutral atoms than expected. You can see it here! The color shows how many hundreds of energetic neutral atoms are hitting the heliosphere per second per square centimeter per keV. A keV, or kilo-electron-volt, is a unit of energy. Different atoms are moving with different energies, so it makes sense to count them this way. You can see how the Voyager spacecraft are close to leaving the heliosphere. You can also see how the interstellar magnetic field lines avoid this bubble. Ever since the IBEX ribbon was detected, the IBEX team has been trying to figure out what causes it. They think it’s related to the interstellar magnetic field. The ribbon has been moving and changing intensity quite a bit in the couple of years they’ve been watching it! Recently, IBEX announced that our Solar System has no bow shock—a big surprise. Previously, scientists thought the heliosphere created a bow-shaped shock wave in the interstellar gas as it moved along, like that star in the Orion Nebula we just looked at.

### The Local Bubble

Get to know the neighborhood! I love the names of these nearby stars! Some I knew: Vega, Altair, Fomalhaut, Alpha Centauri, Sirius, Procyon, Denebola, Pollux, Castor, Mizar, Aldebaran, Algol. But many I didn’t: Rasalhague, Skat, Gacrux, Pherkad, Thuban, Phact, Alphard, Wazn, and Algieba! How come none of the science fiction I’ve read uses these great names? Or maybe I just forgot.
The Local Bubble is a bubble of hot interstellar gas 300 light years across, probably blasted out by the supernova called Geminga near the bottom of this picture.

### Geminga

Here’s the sky viewed in gamma rays. A lot come from a blazar 7 billion light years away that erupted in 2005: a supermassive black hole at the center of a galaxy, firing particles in a jet that happens to be aimed straight at us. Some come from nearby pulsars: rapidly rotating neutron stars formed by the collapse of stars that went supernova. The one I want you to think about is Geminga. Geminga is just 800 light years away from us, and it exploded only 300,000 years ago! That may seem far away and long ago to you, but not to me. The first Neanderthalers go back around 350,000 years… and they would have seen this supernova in the daytime, it was so close. But here’s the reason I want you to think about Geminga. It seems to have blasted out the bubble of hot low-density gas our Solar System finds itself in: the Local Bubble. Astronomers have even detected micrometer-sized interstellar meteor particles coming from its direction! We may think of interstellar space as all the same—empty and boring—but that’s far from true. The density of interstellar space varies immensely from place to place! The Local Bubble has just 0.05 atoms per cubic centimeter, but the average in our galaxy is about 20 times that, and we’re heading toward some giant clouds that are 2000 to 20,000 times as dense. The fun will start when we hit those… but more on that later.

### Nearby clouds

While we live in the Local Bubble, several thousand years ago we entered a small cloud of cooler, denser gas: the Local Fluff. We’ll leave it in at most 4 thousand years. But that’s just the beginning! As we pass the Scorpius-Centaurus Association, we’ll hit bigger, colder and denser clouds—and they’ll squash the heliosphere. When will this happen? People seem very unsure.
I’ve seen different sources saying we entered the Local Fluff sometime between 44,000 and 150,000 years ago, and that we’ll stay within it for between 4,000 and 20,000 years. We’ll then return to the hotter, less dense gas of the Local Bubble until we hit the next cloud. That may take at least 50,000 years. Two candidates for the first cloud we’ll hit are the G Cloud and the Apex Cloud. The Apex Cloud is just 15 light years away:

• Priscilla C. Frisch, Local interstellar matter: the Apex Cloud.

When we hit a big cloud, it will squash the heliosphere. Right now, remember, this is roughly 120 AU in radius. But before we entered the Local Fluff, it was much bigger. And when we hit thicker clouds, it may shrink down to just 1 or 2 AU! The heliosphere protects us from galactic cosmic rays. So, when we hit the next cloud, more of these cosmic rays will reach the Earth. Nobody knows for sure what the effects will be… but life on Earth has survived previous incidents like this, and other problems will hit us much sooner, so don’t stay awake at night worrying about it! Indeed, ice core samples from the Antarctic show spikes in the concentration of the radioactive isotope beryllium-10 in two separate events, one about 60,000 years ago and another about 33,000 years ago. These might have been caused by a sudden increase in cosmic rays. But nobody is really sure. People have studied the possibility that cosmic rays could influence the Earth’s weather, for example by seeding clouds:

• K. Scherer, H. Fichtner et al., Interstellar-terrestrial relations: variable cosmic environments, the dynamic heliosphere, and their imprints on terrestrial archives and climate, Space Science Reviews 127 (2006), 327–465.

• Benjamin A. Laken, Enric Pallé, Jaša Čalogović and Eimear M. Dunne, A cosmic ray-climate link and cloud observations, J. Space Weather Space Clim. 2 (2012), A18.
Despite the title of the second paper, its conclusion is that “it is clear that there is no robust evidence of a widespread link between the cosmic ray flux and clouds.” That’s clouds on Earth, not clouds of interstellar gas! The first paper is much more optimistic about the existence of such a link, but it doesn’t provide a ‘smoking gun’. And—in case you’re wondering—variations in cosmic rays this century don’t line up with global warming: The top curves are the Earth’s temperature as estimated by GISTEMP (the brown curve), and the carbon dioxide concentration in the Earth’s atmosphere as measured by Charles David Keeling (in green). The bottom ones are galactic cosmic rays as measured by CLIMAX (the gray dots), the sunspot cycle as measured by the Solar Influences Data Analysis Center (in red), and total solar irradiance as estimated by Judith Lean (in blue). But be careful: the galactic cosmic ray curve has been flipped upside down, since when solar activity is high, then fewer galactic cosmic rays make it to Earth! You can see that here: I’m sorry these graphs aren’t neatly lined up, but you can see that peaks in the sunspot cycle happened near 1980, 1989 and 2002, which is when we had minima in the galactic cosmic rays. For more on the neighborhood of the Solar System and what to expect as we pass through various interstellar clouds, try this great article:

• Priscilla Frisch, The galactic environment of the Sun, American Scientist 88 (January-February 2000).

I have lots of scientific heroes: whenever I study something, I find impressive people have already been there. This week my hero is Priscilla Frisch. She edited a book called Solar Journey: The Significance of Our Galactic Environment for the Heliosphere and Earth. The book isn’t free, but this chapter is:

• Priscilla C. Frisch and Jonathan D. Slavin, Short-term variations in the galactic environment of the Sun.

For more on what the heliosphere might do when we hit the next big cloud, see:

• Hans-R. Mueller, Priscilla C. Frisch, Vladimir Florinski and Gary P. Zank, Heliospheric response to different possible interstellar environments.

### The Aquila Rift

Just for fun, let’s conclude by leaving our immediate neighborhood and going a bit further out. Here’s a picture of the Aquila Rift, taken by Adam Block of the Mt. Lemmon SkyCenter at the University of Arizona: The Aquila Rift is a region of molecular clouds about 600 light years away in the direction of the star Altair. Hundreds of stars are being formed in these clouds. A molecular cloud is a region in space where the interstellar gas gets so dense that hydrogen forms molecules, instead of lone atoms. While the Local Fluff near us has about 0.3 atoms per cubic centimeter, and the Local Bubble is much less dense, a molecular cloud can easily have 100 or 1000 atoms per cubic centimeter. Molecular clouds often contain filaments, sheets, and clumps of submicrometer-sized dust particles, coated with frozen carbon monoxide and nitrogen. That’s the dark stuff here! I don’t know what will happen to the Earth when our Solar System hits a really dense molecular cloud. It might have already happened once. But it probably won’t happen again for a long time.

## Teaching the Math of Climate Science

18 December, 2012

When you’re just getting started on simulating the weather, it’s good to start with an aqua-planet. That’s a planet like our Earth, but with no land! Click on this picture to see an aqua-planet created by H. Miura: Of course, it’s important to include land, because it has huge effects. Click on this to see what I mean: This simulation is supposed to illustrate a Madden–Julian oscillation: the largest form of variability in the tropical atmosphere on time scales of 30-90 days! It’s a pulse that moves east across the Indian Ocean and Pacific ocean at 4-8 meters/second. It manifests itself as patches of anomalously high rainfall… but also patches of anomalously low rainfall.
Strong Madden-Julian Oscillations are often, but not always, seen 6-12 months before an El Niño starts. Wouldn’t it be cool if math majors could learn to do simulations like these? If not of the full-fledged Earth, at least of an aqua-planet? Soon they will.

### Climate science at Cal State Northridge

At the huge fall meeting of the American Geophysical Union, I met Helen Steele Cox from the geography department at Cal State Northridge. She was standing in front of a poster describing their new Climate Science Program. They got a ‘NICE’ grant from NASA to develop new courses—where ‘NICE’ means NASA Innovations in Climate Education. This grant also helps them run a seminar every other week where they invite climate scientists and the like from JPL and other nearby places to talk about their work. What really excited me about this program is that it includes courses designed to teach math majors—and others—the skills needed to go into climate science. Since I’m supposed to be developing the syllabus for an undergraduate ‘Mathematics of the Environment’ course, I’m eager to hear about such things. She told me to talk to David Klein in the math department there. He used to work on general relativity, but now—like me—he’s gotten interested in climate issues. I emailed him, and he told me what’s going on. They’ve taught this course twice:

Phys 595 CL. Mathematics and Physics of Climate Change. Atmospheric dynamics and thermodynamics, radiation and radiative transfer, green-house effect, mathematics of remote sounding, introduction to atmospheric and climate modeling. Syllabus here.

They’ve just finished teaching this one:

Math 396 CL. Introduction to Mathematical Climate Science. This course in applied mathematics will introduce students to applications of vector calculus and differential equations to the study of global climate. Fundamental equations governing atmospheric dynamics will be derived and solved for a variety of situations.
Topics include: thermodynamics of the atmosphere, potential temperature, parcel concepts, hydrostatic balance, dynamics of air motion and wind flows, energy balance, an introduction to radiative transfer, and elementary mathematical climate models. Syllabus here.

In some ways, the most intriguing is the one they haven’t taught yet:

Math 483 CL. Mathematical Modeling. Possible topics include fundamental principles of atmospheric radiation and convection, two dimensional models, varying parameters within models, numerical simulation of atmospheric fluid flow from both a theoretical and applied setting.

There’s no syllabus for it yet, but they want to focus the course on four projects:

1. Modeling a Lorenz dynamical system, using the trajectories as analogies to weather and the attractor as an analogy to climate.

2. Modeling a land-sea breeze.

3. Creating a 2d model of an aqua-planet: that is, one with no land.

4. Doing some projects with EdGCM, a proprietary ‘educational general climate model’.

It would be great to take student-made software and add it to the Azimuth Code Project. If these programs were well documented, future generations of students could go ahead and improve on them. And an open-source GCM would be a wonderful thing. As more and more schools teach climate science—not just to Earth scientists, but also to math and computer science students—this sort of ‘open-source climate modeling software’ should become more and more common. Some questions: Do you know other schools that are teaching climate modeling in the math department? Do you know of efforts to formalize the sharing of open-source climate software for educational purposes?

## Mathematics of the Environment (Part 10)

4 December, 2012

There’s a lot more to say, but just one more class to say it! Next quarter I’ll be busy teaching an undergraduate course on evolutionary game theory and a grad course on Lagrangian methods in classical mechanics, together with this seminar and weekly meetings with my students.
So, to keep from burning out, I’m going to temporarily switch this seminar to a different topic, where I have a textbook all lined up:

• John Baez and Jacob Biamonte, A Course on Quantum Techniques in Stochastic Mechanics.

I will stop putting up online notes. I’ll also teach classical mechanics using a book I helped write:

• John Baez and Derek Wise, Lectures on Classical Mechanics.

This should make my job a bit easier: explaining climate physics is a lot more work, since I’m just an amateur! But I hope to come back to this topic someday. In this final class let’s talk a bit about recent work on glacial cycles and changes in the Earth’s orbit. To keep my job manageable, I’ll just talk about one paper.

### The work of Didier Paillard

We’ve seen a few puzzles about how Milankovitch cycles are related to the glacial cycles. There are many more I haven’t even gotten around to explaining: Milankovitch cycles: problems, Wikipedia. But let’s dive in and look at a model that tries to solve some:

• Didier Paillard, The timing of Pleistocene glaciations from a simple multiple-state climate model, Nature 391 (1998), 378–381.

Paillard starts by telling us the good news: The Earth’s climate over the past million years has been characterized by a succession of cold and warm periods, known as glacial–interglacial cycles, with periodicities corresponding to those of the Earth’s main orbital parameters; precession (23 kyr), obliquity (41 kyr) and eccentricity (100 kyr). The astronomical theory of climate, in which the orbital variations are taken to drive the climate changes, has been very successful in explaining many features of the palaeoclimate records. I’m not including reference numbers, but here he cites a famous paper which we discussed in Part 8:

• J. D. Hays, J. Imbrie, and N. J. Shackleton, Variations in the earth’s orbit: pacemaker of the Ice Ages, Science 194 (1976), 1121–1132.
The main result of this paper was to find peaks in the power spectrum of various temperature proxies that match some of the periods of the Milankovitch cycles. This has repeatedly been confirmed. In fact, one of the students in this course, Blake Pollard, has already checked this. I want to pressure him to write a blog article including the nice graphs he’s generated. But then comes the bad news: Nevertheless, the timing of the main glacial and interglacial periods remains puzzling in many respects. In particular, the main glacial–interglacial switches occur approximately every 100 kyr, but the changes in insolation forcing are very small in this frequency band. Here’s an article on the first problem: 100,000-year problem, Wikipedia. The basic idea is that during the last million years, the glacial cycles seem to be happening roughly every 100 thousand years: The Milankovitch cycles that most closely match this are two cycles in the eccentricity of the Earth’s orbit which have periods of 95 and 123 thousand years. But as we saw last time, these have very tiny effects on the average solar energy hitting the Earth year round. The obliquity and precession cycles have no effect on the average solar energy hitting the Earth, but they have a noticeable effect on how much hits it in a given latitude in a given season! Alas, we didn’t get around to calculating that yet. But this gives you a sense of it: As is common in paleontology, time here goes from right to left. The yellow curve shows the amount of solar power hitting the Earth at a latitude of 65° N at the summer solstice. This quantity is often called simply the insolation, though that term also means other things. The insolation curve most closely resembles the red curve showing precession cycles, which have periods near 20 thousand years. But during this stretch of time, ice ages have been happening roughly once every 100 thousand years! Why? That’s the 100,000 year problem.
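Hays, Imbrie and Shackleton’s trick of spotting orbital periods as peaks in a power spectrum is easy to try on fake data. Here’s a minimal sketch in Python; the ‘proxy’ signal, its periods and its amplitudes are all invented for illustration, not taken from any real core:

```python
import math, random

# Synthetic 'temperature proxy': Milankovitch-like periods of 23, 41
# and 100 kyr plus noise.  Purely illustrative -- not real core data.
random.seed(1)
dt = 1.0                      # one sample per kyr
n = 1024                      # 1024 kyr of record
signal = [
    math.sin(2 * math.pi * t / 23.0)
    + math.sin(2 * math.pi * t / 41.0)
    + 2.0 * math.sin(2 * math.pi * t / 100.0)
    + 0.5 * random.gauss(0.0, 1.0)
    for t in (i * dt for i in range(n))
]

def periodogram_power(x, period):
    """Power of the signal at a given period, via a direct Fourier sum."""
    w = 2.0 * math.pi / period
    c = sum(v * math.cos(w * i * dt) for i, v in enumerate(x))
    s = sum(v * math.sin(w * i * dt) for i, v in enumerate(x))
    return (c * c + s * s) / len(x)

# Scan periods from 10 to 200 kyr; the buried periods stand out as peaks.
periods = [float(p) for p in range(10, 201)]
power = {p: periodogram_power(signal, p) for p in periods}
best = max(periods, key=lambda p: power[p])
print(best)   # the strongest peak sits near the 100 kyr component
```

With a real proxy record the peaks are messier, and one has to worry about uneven sampling and dating errors, which is part of why the spectral analysis in the original paper was a real achievement.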
Continuing the quotation: Similarly, an especially warm interglacial episode, about 400,000 years ago, occurred at a time when insolation variations were minimal. If you look at the graph above, you’ll see what he means. Next, he sketches what he’ll do: Here I propose that multiple equilibria in the climate system can provide a resolution of these problems within the framework of astronomical theory. I present two simple models that successfully simulate each glacial–interglacial cycle over the late Pleistocene epoch at the correct time and with approximately the correct amplitude. Moreover, in a simulation over the past 2 million years, the onset of the observed prominent 100-kyr cycles around 0.8 to 1 million years ago is correctly reproduced.

### Paillard’s model

I’ll just talk about his first, simpler model. It assumes the Earth can be in three different states:

i: interglacial

g: mild glacial

G: full glacial

In this model:

• The Earth goes from i to g as soon as the insolation goes below some level $i_0.$

• The Earth then goes from g to G as soon as the volume of ice goes above some level $v_{\mathrm{max}}.$

• The Earth then goes from G to i as soon as the insolation goes above some level $i_1.$

Only the transitions i → g, g → G and G → i are allowed! The reverse transitions G → g and g → i are forbidden. Paillard draws a schematic picture of the model, like this: Of course, he must also specify how the ice volume grows when the Earth is in its mild glacial g state. He says: I assume that the ice sheet needs some minimal time $t_g$ in order to grow and exceed the volume $v_{\mathrm{max}}$ [...] and that the insolation maxima preceding the g → G transition must remain below the level $i_3.$ The g → G transition then can occur at the next insolation decrease, when it falls below $i_2$. Being a mathematician rather than a climate scientist, I can think of more than one way to interpret this. I think it means: 1.
If the Earth is in its g state and the insolation stays below some value $i_3$ for a time $t_g,$ then the Earth jumps into the G state. 2. If the Earth is in its g state and the insolation rises above $i_3,$ we wait until it drops below some value $i_2,$ and then the Earth jumps into its G state. An alternative interpretation is: 2′. If the Earth is in its g state and the insolation rises above $i_3,$ we wait until it drops below some value $i_2.$ Then we ‘reset the clock’ and proceed according to rule 1. I’ll try to sort this out. Now, the insolation as a function of time is known—you can compute it using the formula and the data here: Insolation, Azimuth Project. So, the only thing required to complete Paillard’s model is a choice of these numbers:

$i_0, i_1, i_2, i_3, t_g$

He likes to measure insolation in terms of its standard deviation from its mean value. With this normalization he takes:

$i_0 = -0.75, \qquad i_1 = i_2 = 0 , \qquad i_3 = 1$

and

$t_g = 33 \; \mathrm{kyr}$

Then his model gives these results: (Click to enlarge.) The bottom graph shows temperature as measured by the extra amount of oxygen-18 in some geological records. So, we can see that the Earth often pops rather suddenly into a warm interglacial state and cools a bit more slowly into a glacial state. In the model, this ‘popping into a warm state’ happens instantaneously in the middle graph. The main thing is to compare this to the bottom graph! The way the model pops suddenly into the very cold G state does not look quite so good. But still, it’s exciting how such a simple model fits the overall profile of the glacial cycle—at least for the last million years. Paillard says his model is fairly robust, too: This model is not very sensitive to parameter changes. Different threshold values will slightly offset the transitions by a few hundred years, but the overall shape will remain the same for a broad range of values.
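To make interpretations 1 and 2 concrete, here is a toy implementation. The thresholds and $t_g$ are Paillard’s, but the `insolation` function below is my own crude two-sine stand-in for the real orbital forcing, so only the qualitative behavior is meaningful: the state can cycle only one way, from i to g to G and back to i.

```python
import math

# Toy version of Paillard's three-state model (i, g, G).  Thresholds
# are his; the forcing is an invented stand-in, NOT the real insolation.
I0, I1, I2, I3 = -0.75, 0.0, 0.0, 1.0   # thresholds (std.-dev. units)
TG = 33.0                                # kyr the ice sheet needs to grow

def insolation(t):
    """Stand-in forcing: a 23 kyr cycle modulated on a 100 kyr envelope."""
    return 1.2 * math.sin(2 * math.pi * t / 23.0) * \
           (0.6 + 0.4 * math.cos(2 * math.pi * t / 100.0))

def run(t_end=500.0, dt=0.1):
    state, t_in_g, above_i3 = "i", 0.0, False
    history = []
    t = 0.0
    while t < t_end:
        f = insolation(t)
        if state == "i":
            if f < I0:                    # i -> g when forcing drops low
                state, t_in_g, above_i3 = "g", 0.0, False
        elif state == "g":
            t_in_g += dt
            if f > I3:                    # rule 2: an insolation maximum
                above_i3 = True           # exceeded i3, so wait for i2
            if above_i3:
                if f < I2:
                    state = "G"
            elif t_in_g >= TG:            # rule 1: ice grew long enough
                state = "G"
        else:  # state == "G"
            if f > I1:                    # G -> i at the next insolation rise
                state = "i"
        history.append((t, state))
        t += dt
    return history

states = run()
```

Swapping in interpretation 2′ would just mean resetting `t_in_g` instead of setting a flag, which is a one-line change.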
There are no significant changes when $i_0$ is between -0.97 and -0.64, $i_1$ between -0.23 and 0.32, $i_2$ between -0.30 and 0.13, $i_3$ between 0.97 and 1.16, and $t_g$ between 27 kyr and 60 kyr. Even when the parameters are out of these bounds, the changes are minor: when $i_0$ is between -0.63 and -0.09, the succession of regimes remains the same except for present time, which becomes a g regime. When $i_1$ is chosen between 0.33 and 0.87, only the duration of stage 11.3 changes to become more comparable to other interglacial stages.

### Marine isotope stages

There’s a lot more to say. For example, what does the model say about the time more than a million years ago, when the glacial cycles happened roughly every 41 thousand years, instead of every 100? I won’t answer this. Instead, I’ll conclude by explaining something very basic—but worth knowing. What’s ‘stage 11.3’? This refers to the numbers down at the bottom of Paillard’s chart: these numbers are Marine Isotope Stages. 11.3 is a ‘substage’, not shown on the chart. Marine Isotope Stages are official periods of time used by people who study glacial cycles. The even-numbered ones roughly correspond to glacial periods, and the odd-numbered ones to interglacials. By now over a hundred stages have been identified, going back 6 million years! Just to give you a little sense of what’s going on, here are the start dates of the last 11 stages, with hot ones in red and the cold ones in blue:

MIS 1: 11 thousand years ago. This marks the end of the last glacial cycle. More precisely, this is about 500 years after the end of the Younger Dryas event.

MIS 2: 24 thousand years ago. The Last Glacial Maximum occurred between 26.5 and 19 thousand years ago. At that time we had ice sheets down to the Great Lakes, the mouth of the Rhine, and covering the British Isles. Homo sapiens arrived in the Americas later, around 18 thousand years ago.

MIS 3: 60 thousand years ago.
For comparison, Homo sapiens arrived in central Asia around 50 thousand years ago. About 35 thousand years ago the calendar was invented, Homo sapiens arrived in Europe, and Homo neanderthalensis went extinct.

MIS 4: 71 (or maybe 74) thousand years ago.

MIS 5: 130 thousand years ago. The Eemian, the last really warm interglacial period before ours, began at this time and ended about 114 thousand years ago. If you look at this chart, you’ll see MIS 3 was a much less warm interglacial: (Now time is going to the right again. Click for more details.)

MIS 6: 190 thousand years ago.

MIS 7: 244 thousand years ago. The first known Homo sapiens date back to 250 thousand years ago.

MIS 8: 301 thousand years ago.

MIS 9: 334 thousand years ago.

MIS 10: 364 thousand years ago. The first known Homo neanderthalensis date back to about 350 thousand years ago.

MIS 11: 427 thousand years ago. This stage is supposedly the most similar to MIS 1, and looking at the graph above you can see why people say that.

I hope you agree that it’s worth understanding the glacial cycles, not just because we need to understand how the Earth will respond to the big boost of carbon dioxide that we’re dosing it with now, but because it’s a fascinating physics problem—and because glaciation has been a powerful force in Earth’s recent history, and the history of our species. For your convenience, here are links to all the notes for this course:

• Part 1 – The mathematics of planet Earth.

• Part 2 – Simple estimates of the Earth’s temperature.

• Part 3 – The greenhouse effect.

• Part 4 – History of the Earth’s climate.

• Part 5 – A model showing bistability of the Earth’s climate due to the ice albedo effect: statics.

• Part 6 – A model showing bistability of the Earth’s climate due to the ice albedo effect: dynamics.

• Part 7 – Stochastic differential equations and stochastic resonance.

• Part 8 – A stochastic energy balance model and Milankovitch cycles.
• Part 9 – Changes in insolation due to changes in the eccentricity of the Earth’s orbit.

• Part 10 – Didier Paillard’s model of the glacial cycles.

## Mathematics of the Environment (Part 9)

27 November, 2012

I didn’t manage to cover everything I intended last time, so I’m moving the stuff about the eccentricity of the Earth’s orbit to this week, and expanding it.

### Sunshine and the Earth’s orbit

I bet some of you are hungry for some math. As I mentioned, it takes some work to see how changes in the eccentricity of the Earth’s orbit affect the annual average of sunlight hitting the top of the Earth’s atmosphere. Luckily Greg Egan has done this work for us. While the result is surely not new, his approach makes nice use of the fact that both gravity and solar radiation obey an inverse-square law. That’s pretty cool. Here is his calculation with some details filled in. Let’s think of the Earth as moving around an ellipse with one focus at the origin. Its angular momentum is then

$\displaystyle{ J = m r v_\theta }$

where $m$ is its mass, $r$ and $\theta$ are its polar coordinates, and $v_\theta$ is the angular component of its velocity:

$\displaystyle{ v_\theta = r \frac{d \theta}{d t} }$

So,

$\displaystyle{ J = m r^2 \frac{d \theta}{d t} }$

and

$\displaystyle{\frac{d \theta}{d t} = \frac{J}{m r^2} }$

Since the brightness of a distant object goes like $1/r^2$, the solar energy hitting the Earth per unit time is

$\displaystyle{ \frac{d U}{d t} = \frac{C}{r^2}}$

for some constant $C.$ It follows that the energy delivered per unit of angular progress around the orbit is

$\displaystyle{ \frac{d U}{d \theta} = \frac{d U/d t}{d \theta/ dt} = \frac{C m}{J} }$

Thus, the total energy delivered in one period will be

$\begin{array}{ccl} U &=& \displaystyle{ \int_0^{2 \pi} \frac{d U}{d \theta} \, d \theta} \\ \\ &=& \displaystyle{ \frac{2\pi C m}{J} } \end{array}$

So far we haven’t used the fact that the Earth’s orbit is elliptical. Next we’ll do that.
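Before bringing in the ellipse, we can sanity-check the key step numerically: integrate a Kepler orbit and confirm that the energy received per unit angle stays constant at $Cm/J$. The sketch below uses arbitrary units with $GM = m = C = 1$; it’s just a check, not part of the derivation.

```python
import math

# Numerical check that dU/dtheta = C*m/J along a Kepler orbit.
# Arbitrary units: GM = m = C = 1; semi-major axis 1, eccentricity 0.5.
GM, m, C = 1.0, 1.0, 1.0
a, e = 1.0, 0.5
x, y = a * (1.0 - e), 0.0                 # start at perihelion
vx, vy = 0.0, math.sqrt(GM / a * (1.0 + e) / (1.0 - e))
J = m * (x * vy - y * vx)                 # angular momentum, conserved

dt = 1e-4
theta_prev = math.atan2(y, x)
ratios = []                               # per-step estimates of dU/dtheta
t = 0.0
while t < 2.0:                            # a fraction of the orbital period
    # one kick-drift-kick (leapfrog) step under inverse-square gravity
    r3 = (x * x + y * y) ** 1.5
    vx -= 0.5 * dt * GM * x / r3
    vy -= 0.5 * dt * GM * y / r3
    x += dt * vx
    y += dt * vy
    r3 = (x * x + y * y) ** 1.5
    vx -= 0.5 * dt * GM * x / r3
    vy -= 0.5 * dt * GM * y / r3

    # energy received this step is (C / r^2) dt; divide by the angle swept
    theta = math.atan2(y, x)
    dtheta = (theta - theta_prev) % (2.0 * math.pi)
    ratios.append((C / (x * x + y * y)) * dt / dtheta)
    theta_prev = theta
    t += dt

expected = C * m / J                      # every entry of ratios should match
```

Near perihelion the Earth moves fast but sweeps angle quickly too, and the two inverse-square effects cancel exactly, which is what the constancy of `ratios` shows.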
Our goal will be to show that $U$ depends only very slightly on the eccentricity of the Earth’s orbit. But we need to review a bit of geometry first.

### The geometry of ellipses

If the Earth is moving in an ellipse with one focus at the origin, its equation in polar coordinates is

$\displaystyle{ r = \frac{p}{1 + e \cos \theta} }$

where $e$ is the eccentricity and $p$ is the somewhat dirty-sounding semi-latus rectum. You can think of $p$ as a kind of average radius of the ellipse—more on that in a minute. Let’s think of the origin in this coordinate system as the Sun—that’s close to true, though the Sun moves a little. Then the Earth gets closest to the Sun when $\cos \theta$ is as big as possible. So, the Earth is closest to the Sun when $\theta = 0$, and then its distance is

$\displaystyle{ r_1 = \frac{p}{1 + e} }$

Similarly, the Earth is farthest from the Sun when $\theta = \pi$, and then its distance is

$\displaystyle{ r_2 = \frac{p}{1 - e} }$

We call $r_1$ the perihelion and $r_2$ the aphelion. The semi-major axis is half the distance between the opposite points on the Earth’s orbit that are farthest from each other. This is denoted $a.$ These points occur at $\theta = 0$ and $\theta = \pi$, so the distance between these points is $r_1 + r_2$, and

$\displaystyle{ a = \frac{r_1 + r_2}{2} }$

So, the semi-major axis is the arithmetic mean of the perihelion and aphelion. The semi-minor axis is half the distance between the opposite points on the Earth’s orbit that are closest to each other. This is denoted $b.$

Puzzle 1. Show that the semi-minor axis is the geometric mean of the perihelion and aphelion:

$\displaystyle{ b = \sqrt{r_1 r_2} }$

I said the semi-latus rectum $p$ is also a kind of average radius of the ellipse. Just to make that precise, try this:

Puzzle 2.
Show that the semi-latus rectum is the harmonic mean of the perihelion and aphelion: $\displaystyle{ p = \frac{1}{\frac{1}{2}\left(\frac{1}{r_1} + \frac{1}{r_2}\right) } }$ This puzzle is just for fun: the Greeks loved arithmetic, geometric and harmonic means, and the Greek mathematician Apollonius wrote a book on conic sections, so he must have known these facts and loved them. The conventional wisdom is that the Greeks never realized that the planets move in elliptical orbits. However, the wonderful movie Agora presents a great alternative history in which Hypatia figures it all out shortly before being killed! And the mathematician Sandro Graffi (who incidentally taught a course I took in college on the self-adjointness of quantum-mechanical Hamiltonians) has claimed: Now an infrequently read work of Plutarch, several parts of the Natural History of Plinius, of the Natural Questions of Seneca, and of the Architecture of Vitruvius, also infrequently read, especially by scientists, clearly show that the cultural elite of the early imperial age (first century A.D.) were fully aware of and convinced of a heliocentric dynamical theory of planetary motions based on the attractions of the planets toward the Sun by a force proportional to the inverse square of the distance between planet and Sun. The inverse square dependence on the distance comes from the assumption that the attraction is propagated along rays emanating from the surfaces of the bodies. I have no idea if the controversial last part of this claim is true. But it’s fun to imagine! More importantly for what’s to come, we can express the semi-minor axis in terms of the semi-major axis and the eccentricity. 
Since

$\displaystyle{ r_1 = \frac{p}{1 + e} , \qquad r_2 = \frac{p}{1 - e} }$

we have

$\displaystyle{ r_1 + r_2 = \frac{p}{1 + e} + \frac{p}{1 - e} = \frac{2 p}{1 - e^2} }$

so the semi-major axis is

$\displaystyle{ a = \frac{p}{1 - e^2} }$

while

$\displaystyle {r_1 r_2 = \frac{p^2}{1 - e^2} }$

so the semi-minor axis is

$\displaystyle { b = \frac{p}{\sqrt{1 - e^2}} }$

and thus they are related by

$b = a \sqrt{1 - e^2}$

Remember this!

### How total annual sunshine depends on eccentricity

We saw a nice formula for the total solar energy hitting the Earth in one year in terms of its angular momentum $J$:

$\displaystyle{ U = \frac{2\pi C m}{J} }$

How can we relate the angular momentum $J$ to the shape of the Earth’s orbit? The Earth’s energy, kinetic plus potential, is constant throughout the year. The kinetic energy is $\frac{1}{2}m v^2$ and the potential energy is

$\displaystyle{ -\frac{G M m}{r} }$

At the aphelion or perihelion the Earth isn’t moving in or out, just around, so by our earlier work

$\displaystyle{v = v_\theta = \frac{J}{m r} }$

and the kinetic energy is

$\displaystyle{ \frac{J^2}{2 m r^2} }$

Equating the Earth’s energy at aphelion and perihelion, we thus get

$\displaystyle{\frac{J^2}{2m r_1^2} -\frac{G M m}{r_1} = \frac{J^2}{2m r_2^2} -\frac{G M m}{r_2} }$

and doing some algebra:

$\displaystyle{\frac{J^2}{2m} \left(\frac{1}{r_1^2} - \frac{1}{r_2^2}\right) = G M m \left( \frac{1}{r_1} - \frac{1}{r_2} \right) }$

$\displaystyle{\frac{J^2}{2m} \left(\frac{r_2^2 - r_1^2}{r_1^2 r_2^2}\right) = G M m \left( \frac{r_2 - r_1}{r_1 r_2} \right) }$

$\displaystyle{\frac{J^2}{2m} \left(\frac{r_1 + r_2}{r_1 r_2}\right) = G M m }$

and solving for $J,$

$\displaystyle{ J = m \sqrt{\frac{2 G M r_1 r_2}{r_1 + r_2}} }$

But remember that the semi-major and semi-minor axis of the Earth’s orbit are given by

$\displaystyle{ a=\frac{1}{2} (r_1+r_2)} , \qquad \displaystyle{ b=\sqrt{r_1 r_2} }$

respectively!
So, we have $\displaystyle{ J = mb \sqrt{\frac{GM}{a}} }$ This lets us rewrite our old formula for the energy $U$ in the form of sunshine that hits the Earth each year: $\displaystyle{ U=\frac{2\pi C m}{J} = \frac{2\pi C}{b} \sqrt{\frac{a}{G M}} }$ But we've also seen that $b = a \sqrt{1 - e^2}$ so we get the formula we've been seeking: $\displaystyle{U=\frac{2\pi C}{\sqrt{G M a (1-e^2)}}}$ This tells us $U$ as a function of semi-major axis and eccentricity. As we'll see later, the semi-major axis $a$ is almost unchanged by small perturbations of the Earth's orbit. The main thing that changes is the eccentricity $e$. But if $e$ is small, $e^2$ is even smaller, so $U$ doesn't change much when we change $e.$ We can make this more quantitative. Let's work out how much the actual changes in the Earth's orbit affect the amount of solar radiation it gets! As we'll see, the semi-major axis is almost constant, so we can ignore that. Complicated calculations we can't redo here show that the eccentricity varies between 0.005 and 0.058. We've seen the total energy the Earth gets each year from solar radiation is proportional to $\displaystyle{ \frac{1}{\sqrt{1-e^2}} }$ When the eccentricity is at its lowest value, $e = 0.005,$ we get $\displaystyle{ \frac{1}{\sqrt{1-e^2}} = 1.0000125 }$ When the eccentricity is at its highest value, $e = 0.058,$ we get $\displaystyle{\frac{1}{\sqrt{1-e^2}} = 1.00168626 }$ So, the solar power hitting the Earth each year changes by a factor of $\displaystyle{1.00168626/1.0000125 = 1.00167373 }$ In other words, it changes by merely 0.167%. That's very small. And the effect on the Earth's temperature would naively be even less! Naively, we can treat the Earth as a greybody: an ideal object whose tendency to absorb or emit radiation is the same at all wavelengths and temperatures.
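Here's that arithmetic spelled out in a few lines of Python, in case you want to reproduce the numbers:

```python
# Total yearly sunshine U is proportional to 1/sqrt(1 - e^2),
# so compare this factor at the two extreme eccentricities.
def sunshine_factor(e):
    return 1 / (1 - e**2) ** 0.5

lo = sunshine_factor(0.005)   # lowest eccentricity
hi = sunshine_factor(0.058)   # highest eccentricity
ratio = hi / lo

print(lo)                     # ≈ 1.0000125
print(hi)                     # ≈ 1.0016866
print(ratio)                  # ≈ 1.0016741 — about a 0.167% change
```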
Since the temperature of a greybody is proportional to the fourth root of the power it receives, a 0.167% change in solar energy received per year corresponds to a percentage change in temperature roughly one fourth as big. That’s a 0.042% change in temperature. If we imagine starting with an Earth like ours, with an average temperature of roughly 290 kelvin, that’s a change of just 0.12 kelvin! The upshot seems to be this: in a naive model without any amplifying effects, changes in the eccentricity of the Earth’s orbit would cause temperature changes of just 0.12 °C! This is much less than the roughly 5 °C change we see between glacial and interglacial periods. So, if changes in eccentricity are important in glacial cycles, we have some explaining to do. Possible explanations include season-dependent phenomena and climate feedback effects, like the ice albedo effect we’ve been discussing. Probably both are very important! Why does the semi-major axis of the Earth’s orbit remain almost unchanged under small perturbations? The reason is that it’s an ‘adiabatic invariant’. This is basically just a fancy way of saying it remains almost unchanged. But the point is, there’s a whole theory of adiabatic invariants… which supposedly explains the near-constancy of the semi-major axis. According to Wikipedia: The Earth’s eccentricity varies primarily due to interactions with the gravitational fields of Jupiter and Saturn. As the eccentricity of the orbit evolves, the semi-major axis of the orbital ellipse remains unchanged. From the perspective of the perturbation theory used in celestial mechanics to compute the evolution of the orbit, the semi-major axis is an adiabatic invariant. According to Kepler’s third law the period of the orbit is determined by the semi-major axis. It follows that the Earth’s orbital period, the length of a sidereal year, also remains unchanged as the orbit evolves. 
As the semi-minor axis is decreased with the eccentricity increase, the seasonal changes increase. But the mean solar irradiation for the planet changes only slightly for small eccentricity, due to Kepler's second law. Unfortunately, even though I understand a bit about the general theory of adiabatic invariants, I have not gotten around to convincing myself that the semi-major axis is such a thing, for the perturbations experienced by the Earth. Here's something easier: checking that the semi-major axis of the Earth's orbit determines the period of the Earth's orbit, say $T$. To do this, first relate the angular momentum to the period by integrating the rate at which orbital area is swept out by the planet: $\displaystyle{\frac{1}{2} r^2 \frac{d \theta}{d t} = \frac{J}{2 m} }$ over one orbit. Since the area of an ellipse is $\pi a b$, this gives us: $\displaystyle{ J = \frac{2 \pi a b m}{T} }$ On the other hand, we've seen $\displaystyle{J = m b \sqrt{\frac{G M}{a}}}$ Equating these two expressions for $J$ shows that the period is: $\displaystyle{ T = 2 \pi \sqrt{\frac{a^3}{G M}}}$ So, the period depends only on the semi-major axis, not the eccentricity. Conversely, we could solve this equation to see that the semi-major axis depends only on the period, not the eccentricity. I'm treating $G$ and $M$ as constants here. If the mass of the Sun decreases, as it eventually will when it becomes a red giant and puffs out lots of gas, the semi-major axis of the Earth's orbit will change. It will actually increase! This is one reason people are still arguing about just when the Earth will get swallowed up by the Sun: • David Appell, The Sun will eventually engulf the Earth—maybe, Scientific American, 8 September 2008. And, to show just how subtle these things are, if the mass of the Sun slowly changes, while the semi-major axis of the Earth's orbit will change, the eccentricity will remain almost unchanged. Why?
Because for this kind of process, it's the eccentricity that's an adiabatic invariant! Indeed, I got all excited when I started reading a homework problem in Landau and Lifshitz's book Mechanics, which describes adiabatic invariants for the gravitational 2-body problem. But I was bummed out when they concluded that the eccentricity was an adiabatic invariant for gradual changes in $M$. They didn't discuss any problems for which the semi-major axis was an adiabatic invariant. I'll have to get back to this later sometime, probably with the help of a good book on celestial mechanics. If you're curious about the concept of adiabatic invariant, start here:
# What is the intuition behind a function that is not continuous at a point but its partial derivatives at that point exist? I can't really understand how the partial derivatives of a function can exist at a point where the function is not continuous. For example, say we have a function $f(x,y)$ whose domain is all of $\Bbb R^2$ except $(0,0)$. Then in the definition of the partial derivatives, how can we even compute the limit if $f(0,0)$ does not exist? Thank you. NOTE: The particular example I am bothered by can be found in Colley's "Vector Calculus" at p.123, example 7. The example used there is for the function $\dfrac{x^2y^2}{x^4+y^4}$ • That's because it is continuous restricted to the coordinate directions, but it's not continuous in all directions. Aug 20 '16 at 16:50 • Make a picture. Aug 20 '16 at 22:51 The existence of partial derivatives for $f:\Bbb R^n\to \Bbb R^p$ at a point $a=(a_1,\dots,a_n)$ where $f$ is defined (see edit and comments below) corresponds to the differentiability of the (single-variable) functions $$f_i : x\mapsto f(a_1,\dots,a_{i-1},a_i+x,a_{i+1},\dots,a_n)$$ for $1\leq i\leq n$. In other words, $f$ is differentiable (hence continuous) when restricted to the lines parallel to the $n$ coordinate axes of $\Bbb R^n$ passing through $a$. But this doesn't suffice to ensure continuity of $f$ because this continuity means that $f$ is continuous when restricted to all the directions around $a$ (not only the directions of the coordinate axes). For example, see my answer in Differentiability of Multivariable Functions Edit: A function which is undefined at a point $a$ can't have partial derivatives at this point (simply because the functions $f_i$ of my answer above are undefined and hence, as in single-variable calculus, they have no derivative at this point because the quotient $\dfrac{f_i(h)-f_i(0)}{h}$ doesn't make sense, since $f_i(0)=f(a)$ doesn't exist!).
2nd Edit: In the example given, there is no problem: the function is well-defined at $(0,0)$ by $f(0,0)=0$ (see 2nd line after the brace). By definition, we have $f(0,0)=0$ (this has nothing to do with the definition of $f$ at the other points, she may have chosen another arbitrary value for $f(0,0)$ ). Hence the computation of the limits of $\dfrac{f_i(h)-f_i(0)}{h}$ (for $i=1$ or $2$) makes sense and it gives 0 because, as it's explained in your book, $\forall x\in\Bbb R, f(x,0)=0$ and $\forall y\in\Bbb R, f(0,y)=0$. • But, if say the point (0,0) is not defined for a function f: R^2--> R , meaning f is not continuous at (0,0), then doesn't that mean that any partial derivative at (0,0) should not exist? I have found that there are cases that they do exist at points like that, but I don't quite understand why Aug 20 '16 at 17:00 • A function which is undefined at a point can't have partial derivatives at this point (simply because the functions $f_i$ of my answer above are undefined and hence, as in single-variable calculus, they have no derivative at this point). – paf Aug 20 '16 at 17:09 • But I found an example of such a function where the partials at the origin do exist. See Colley p.123, example 7. Aug 20 '16 at 17:11 • Could you edit your question by adding a description of Colley's example? I don't have this book with me. – paf Aug 20 '16 at 17:13 • I edited. Sorry for not using LaTex Aug 20 '16 at 17:16
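To see Colley's example in action numerically, here's a small Python sketch (with $f(0,0)$ defined to be $0$, as in the book): the difference quotients along both axes vanish, so both partials at the origin exist and equal $0$, yet along the line $y=x$ the function is constantly $1/2$, so it cannot be continuous at the origin.

```python
def f(x, y):
    # Colley's example 7: f(0, 0) is *defined* to be 0
    if x == 0 and y == 0:
        return 0.0
    return x**2 * y**2 / (x**4 + y**4)

# f vanishes identically on both coordinate axes, so the
# partial derivatives at the origin exist and are both 0
h = 1e-6
fx = (f(h, 0) - f(0, 0)) / h
fy = (f(0, h) - f(0, 0)) / h
print(fx, fy)           # 0.0 0.0

# but along the diagonal y = x the function is constantly 1/2,
# so f has no limit at (0, 0) and is not continuous there
print(f(1e-9, 1e-9))    # ≈ 0.5
```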
# 2. Solve the following trigonometric equations for the given regions

(a) $\sin^2(x) + 3\sin(x) = -2$

(b) $\sin(\theta) = \cos(2\theta)$, $0 \le \theta < 2\pi$

(c) $2\cos(x) + 3\tan(x) = 0$
# If $\;(2 + \large\frac{x}{3})^{55}\;$ is expanded in ascending powers of $x$ and two consecutive terms of the expansion are equal, then these terms are: $(a)\;7^{th}\;and\;8^{th} \qquad(b)\;8^{th}\;and\;9^{th}\qquad(c)\;28^{th}\;and\;29^{th}\qquad(d)\;27^{th}\;and\;28^{th}$
# How do you write an equation for a circle with center (-2,5) and radius of 5? Jan 28, 2016 ${x}^{2} + {y}^{2} + 4 x - 10 y + 4 = 0$ #### Explanation: The standard form of a circle's equation is ${\left(x - h\right)}^{2} + {\left(y - k\right)}^{2} = {r}^{2}$ Here, $h = - 2$ $k = 5$ $r = 5$ so the equation is ${\left(x + 2\right)}^{2} + {\left(y - 5\right)}^{2} = {5}^{2}$ or, ${x}^{2} + 4 x + 4 + {y}^{2} - 10 y + 25 = 25$ or, ${x}^{2} + {y}^{2} + 4 x - 10 y + 4 = 0$
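As a quick check, every point at distance $5$ from the centre $(-2,5)$ should satisfy both forms of the equation. Here's a tiny Python verification using four such points:

```python
# Verify that the centre-radius form and the expanded general form
# agree on four points that are exactly 5 units from (-2, 5).
def general_form(x, y):
    return x**2 + y**2 + 4*x - 10*y + 4

for (x, y) in [(-2, 10), (-2, 0), (3, 5), (-7, 5)]:
    assert (x + 2)**2 + (y - 5)**2 == 25   # centre-radius form
    assert general_form(x, y) == 0         # expanded form
print("both forms agree on all four points")
```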
BLAST parallelisation best practices 2 0 Entering edit mode 10 months ago Hi All, I'm trying to speed up a BLASTP call as part of a bigger RBH workflow to detect orthologs, and I'm in the process of testing different approaches with a 100K sequence database and a 1107 sequence query (real database will be 350K, queries will differ in size). My function splits the query into separate files and processes them separately using python multiprocessing (Process or Pool), and I'm also looking at combining that with BLASTP's -num_threads parameter to increase speed further. I'm very new to parallelisation in general (both threading and multiprocessing) but am keen to know more! I posted these questions together as they all relate to the same code and generally continue on from each other, and I can accept multiple answers (unlike stack overflow), but please let me know if you'd suggest posting them separately. I'd be grateful for any answers, doesn't have to cover every point in one go :D Question 1 - I'm running local BLASTP and was wondering about the details for the num_threads parameter. Am I right in thinking that (as the name suggests), it spreads the workload across multiple threads across a single CPU, so is kind of analogous to Python's threading module (as opposed to the multiprocessing module, which spreads tasks across separate CPUs)? I've heard BLAST only goes above 1 thread when it 'needs to', but I'm not clear on what this actually means - what determines if it needs more threads? Does it depend on the input query size? Are threads split at a specific step in the BLASTP program? Question 2 - To check I have the right ideas conceptually, if the above is correct, would I be correct to say that BLAST itself is I/O bound (hence the threading), which makes sense as it's processing thousands of sequences in the query etc so lots of input? But if you want to call BLAST in a workflow script (e.g.
using Python's subprocess module), then the call is CPU bound if you set num_threads to a high number, as it spreads the work across multiple threads in a single CPU, which takes a lot of the CPU power? Or does the fact that blastP is not taking full advantage of the threading mean that the CPU is not actually getting fully utilised, so a call will still be input/output bound independent of num_threads? If that's correct, then maybe I could use threading to process the split queries separately rather than multiprocessing... Question 3 - Are there any suggestions for how to get the best core and thread parameters for general use across different machines without relying on individual benchmarking (I want it to work on other people's machines with as little tuning and optimisation as possible). Is it just cores = as many cores as you have (i.e. multiprocessing.cpu_count()) and threads = cores + 1 (defined by the BLASTP parameter num_threads)? Would this still be true on machines with more/fewer cores? Question 4 - for benchmarking, how do external programs affect multiprocessing speed - would scrolling the web with 100 tabs open impact multiprocessing speed by increasing the work done by one of the CPUs, taking away resources from one of the processes running my script? If the answer is yes, what's the best way to benchmark this kind of thing? I'm including this question to give context on my benchmarking questions below (i.e. the numbers I am throwing around may be crap). I tried to include graphs of the numbers but they won't copy in; however, I found a post explaining how to add pics so if they are helpful I can add them in. Question 5 - Perhaps a more general question, I'm only splitting my query into 4 processes so would have thought multiprocessing.Process would be better (vs multiprocessing.Pool, which seems the preferred choice if you have lots of processes).
But this isn't the case in my benchmarks, for multiprocessing using blastP_paralellised_process and blastP_paralellised_pool - any idea why? Timewise the process to pool 'seconds' ratio hovers around 1 with no obvious pattern for all num_threads (1-9) and core (1-5) combinations. Question 6 - why does increasing the number of cores used to process number of cores * split BLASTP queries not result in obvious speed improvements? I would expect this with cores set >4, as my PC is a 4-core machine, but there seems to be little difference between processing 1/4 query files across 4 cores vs processing 1/2 query files across 2 cores. Is my assumption for Question 2 incorrect? There is a little bit of slowdown for running on a single core and a dramatic increase for 1 core with 2 and 1 threads (1618 seconds and 2297 seconds), but for 2-5 cores with 1-9 threads the time for each blastP run is around 1000 seconds with some small random fluctuations (e.g. 4 cores 1 thread is 1323 seconds, but the other multicore single-thread runs are normal timewise relative to the baseline of the other values). I've copied my code below. I've not included functions like split_fasta etc, as both they and BLASTP seem to be working (in the sense that I'm getting XML results files that I haven't started parsing yet but they look OK when I open them in Notepad) and I don't want to add 100 lines of unnecessary code and comments. Also, they're used in the same way for both blastP_paralellised_process and blastP_paralellised_pool, so I don't think they are causing the time differences. Please let me know if including these would help though!
def blastP_paralellised_process(evalue_user, query_in_path, blastp_exe_path, results_out_path, db_user, num_cores, thread_num):
    # function to split fasta query into 1 txt file per core
    filenames_query_in_split = fasta_split(query_in_path, num_cores)
    # function to construct result names for blastp parameter 'out'
    filenames_results_out_split = build_split_filename(results_out_path, num_cores)
    # copy a makeblastdb database given as input. generate one database per core.
    # Change name of file to include 'copy' and keep original database directory for quality control.
    delim = db_user.rindex('\\')
    db_name = db_user[delim:]
    db_base = db_user[:delim]
    databases = copy_dir(db_base, num_cores)  # 1 db per process or get lock
    # split blastp params across processes.
    processes = []
    for file_in_real, file_out_name, database in zip(filenames_query_in_split, filenames_results_out_split, databases):
        # 'blastP_subprocess' is a blast-specific subprocess call that sets the environment to have
        # env={'BLASTDB_LMDB_MAP_SIZE':'1000000'} and has some diagnostic error management.
        blastP_process = Process(target=blastP_subprocess,
                                 args=(evalue_user, file_in_real, blastp_exe_path,
                                       file_out_name, database + db_name, thread_num))
        blastP_process.start()
        processes.append(blastP_process)
    # let processes all finish
    for blastP_process in processes:
        blastP_process.join()

def blastP_paralellised_pool(evalue_user, query_in_path, blastp_exe_path, results_out_path, db_user, num_cores, thread_num):
    #### as above ####
    filenames_query_in_split = fasta_split(query_in_path, num_cores)
    filenames_results_out_split = build_split_filename(results_out_path, num_cores)
    delim = db_user.rindex('\\')
    db_name = db_user[delim:]
    db_base = db_user[:delim]
    databases = copy_dir(db_base, num_cores)
    ################
    # build params for blast
    params_new = list(zip([evalue_user] * num_cores,
                          filenames_query_in_split,
                          [blastp_exe_path] * num_cores,
                          filenames_results_out_split,
                          [database + db_name for database in databases],
                          [thread_num] * num_cores))
    # feed each param to a worker in pool
    with Pool(num_cores) as pool:
        blastP_process = pool.starmap(blastP_subprocess, params_new)

if __name__ == '__main__':
    # make blast db
    makeblastdb_exe_path = r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Executables\NCBI\blast-2.10.1+\bin\makeblastdb.exe'
    input_fasta_path = r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Precomputed_files\fasta_sequences_SMCOG_efetch_only.txt'
    db_outpath = r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Intermediate_files\BLASTP_queries\DEMgenome_old\database\smcog_db'
    db_type_str = 'prot'
    start_time = time.time()
    makeblastdb_subprocess(makeblastdb_exe_path, input_fasta_path, db_type_str, db_outpath)
    print("--- makeblastdb %s seconds ---" % (time.time() - start_time))
    # get blast settings
    evalue_user = 0.001
    query_user = r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Intermediate_files\BLASTP_queries\DEMgenome_old\genome_1_vicky_3.txt'
    blastp_exe_path = r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Executables\NCBI\blast-2.10.1+\bin\blastp.exe'
    out_path = r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Intermediate_files\BLASTP_results\blastresults_genome_1_vicky_3.xml'  #xml?
    num_cores = os.cpu_count()
    num_threads = 1  #threads per BLAST process (not set anywhere in the original post)

    #benchmarking
    for num_cores in range(1, 6)[::-1]:
        print()
        start_time = time.time()
        blastP_paralellised_process(evalue_user, query_user, blastp_exe_path,
                                    out_path, db_outpath, num_cores, num_threads)
        end_time = time.time()
        print("--- process: %s cores, %s seconds ---" % (num_cores, end_time - start_time))

        start_time = time.time()
        blastP_paralellised_pool(evalue_user, query_user, blastp_exe_path,
                                 out_path, db_outpath, num_cores, num_threads)
        end_time = time.time()
        print("--- pool: %s cores, %s seconds ---" % (num_cores, end_time - start_time))
        print()

Tags: multiprocessing python 3.7 num_threads BLASTP

2
Entering edit mode
10 months ago
Mensur Dlakic ★ 15k

First, beware that I am no expert on running these kinds of jobs from python scripts.

In my experience, it is best to run this as a single job with the maximum available num_threads. If you have a single -query group of sequences and a single -db database, I think the program will load the database into memory once and keep it there for all subsequent sequences. Any other solution that splits your sequences into multiple jobs will have to deal with loading the database multiple times, and I think that is likely to be the slowest part of this process even if you have a solid-state drive and really fast memory.

That said, I don't think you need to worry too much with a database of 350K sequences and ~1000 queries. That will probably be done in a couple of hours on any modern computer. In other words, you may spend more time thinking (and writing) about it than what the actual run will take.

0
Entering edit mode

I think you're right - I managed to find some more discussions where this was identified as an issue for multiprocessing. I might try splitting the database rather than the query, which could ameliorate the issue (and seems to be a general approach used).
I agree it's not a big deal for this specific example, but it's all going to end up as part of a bigger, generic application, so I would like to scale it up to bigger queries/databases as much as possible (although something like DIAMOND might be more useful for that). I'm also treating it as a learning exercise for Python multiprocessing - I'm not very good at it, but it seems a useful library to know :P Thanks again, I completely disregarded the impact of data sharing between processes!

2
Entering edit mode
10 months ago
xonq ▴ 40

Q1: Generally, I think your interpretation of the difference between threading and multiprocessing is correct. For BLAST, num_threads also has to do with spreading the workload of aligning across the threads you've allotted and the query sequences you've provided, e.g. if you're querying nr, more threads will increase the throughput simply by expanding how many sequences can be aligned at once. Nevertheless, threading and multiprocessing are more nuanced than that and sometimes don't scale linearly when you think they should, so there is typically an asymptote of performance gains when increasing threads... beyond the asymptote you will likely decrease throughput.

Q2: I'm not sure I follow what you're saying, but let's say that you subprocessed a bunch of BLAST searches with one thread allotted per search; the number of concurrent BLAST queries you are running would scale with the subprocessing, and the speed of each of those queries should scale, to a point, with the threading. However, you may be starting to unnecessarily convolute things here - why not just compile your queries into a single fasta and run one BLAST search that uses all available threads, instead of relying on yourself to implement the subprocessing optimally?
BLAST is an extensively tested program - probably one of the best tested, if not the best, of all bioinformatics software - and I trust their implementation of multithreading over my own; anything well written at the C level will surely be faster than Python. The only exception I can think of here is if you are reaching the peak threading improvement for BLAST; then it may make sense to start multiprocessing the workflow, with each subprocess allocating the optimal number of threads.

Q3: For this you will have to turn to the literature or user forums to find actual graphs of how BLAST scales with multithreading.

Q4: If you are spreading your workflow across all your CPUs and maximizing CPU usage within the workflow, then of course you are going to cut into performance once you start performing other computer processes. In my experience it is best practice to leave 1 to 2 CPU cores free during these analyses if you aren't using an HPC, so that you don't interfere with system processes and potentially crash your system. With respect to many tabs, you may run into RAM issues before CPU problems.

Q5: No clue; I simply use pool so that I can scale in whichever way I choose. Most scripts I've reviewed do the same.

Q6: Again, if at all possible, leave the threading to the program that's been deployed around the globe and used for a couple of decades haha. Multithreading in BLAST is tailored toward the operation; multiprocessing by you opens a black box of different things that can affect performance. Unless you're at the asymptote of performance increase vs. threads, prefer BLAST's implementation of multithreading over multiprocessing, in my opinion.

0
Entering edit mode

Hi xonq,

Thanks for your reply - it covered every question! I agree it's probably not absolutely necessary here, but I'm quite keen to get more familiar with Python's parallelisation libraries, so I thought it would be a good opportunity :P We'll see how it goes - you've definitely given me some food for thought!
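Following the advice in the answers above, the simplest route is to let BLAST do the threading itself: one blastp call over the whole query file with -num_threads set. Below is a minimal sketch of driving that from Python with only the standard library; the paths, database name, and thread count are placeholder assumptions, not values taken from the post (the flags themselves, -query, -db, -out, -evalue, -outfmt and -num_threads, are standard BLAST+ options):

```python
import subprocess

def build_blastp_cmd(blastp_exe, query, db, out, evalue, num_threads):
    """Assemble the argv list for a single blastp run that relies on
    BLAST's own multithreading instead of Python multiprocessing."""
    return [blastp_exe,
            "-query", query,
            "-db", db,
            "-out", out,
            "-evalue", str(evalue),
            "-outfmt", "5",              # XML output, as in the post
            "-num_threads", str(num_threads)]

# Placeholder paths/values for illustration only.
cmd = build_blastp_cmd("blastp", "queries.fasta", "smcog_db",
                       "results.xml", 0.001, 8)
print(cmd)
# subprocess.run(cmd, check=True)  # uncomment where BLAST+ is installed
```

Because the whole query file goes into one process, the database is loaded into memory once, which is exactly the cost the first answer warns about duplicating.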
# Simple and effective coin segmentation using Python and OpenCV

The new generation of OpenCV bindings for Python is getting better and better with the hard work of the community. The new bindings, called "cv2", are the replacement for the old "cv" bindings. In this new generation of bindings, almost all operations now return native Python objects or Numpy objects, which is pretty nice since it simplifies a lot of code and also improves performance in some areas: you can use the optimized operations from Numpy, and integration with other frameworks such as scikit-image (which also uses Numpy arrays for image representation) becomes straightforward.

In this example, I'll show how to segment coins present in images or even real-time video capture with a simple approach using thresholding, morphological operators, and contour approximation. This approach is a lot simpler than the approach using Otsu's thresholding and Watershed segmentation in the OpenCV Python tutorials, which I highly recommend you read due to its robustness. Unfortunately, the approach using Otsu's thresholding is highly dependent on illumination normalization. One could extract small patches of the image to implement something similar to an adaptive Otsu's binarization (like the one implemented in Leptonica, the framework used by Tesseract OCR) to overcome this problem, but let's see another approach. For reference, see the output of Otsu's thresholding using an image taken with my webcam under non-normalized illumination:
## 1. Setting the Video Capture configuration

The first step to create a real-time video capture using the Python bindings is to instantiate the VideoCapture class, set the properties, and then start reading frames from the camera:

import numpy as np
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 720)

In newer versions (unreleased yet), the constants for CV_CAP_PROP_FRAME_WIDTH are in the cv2 module; for now, let's just use the cv2.cv module.

## 2. Reading the frames

The next step is to use the VideoCapture object to read the frames and then convert them to gray (we are not going to use color information to segment the coins):

while True:
    ret, frame = cap.read()
    roi = frame[0:500, 0:500]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

Note that here I'm extracting a small portion of the complete image (where the coins are located), but you don't have to do that if you only have coins in your image. At this moment, we have the following gray image:

## 3. Blurring and Adaptive Thresholding

In this step we will apply a Gaussian Blur kernel to eliminate the noise that we have in the image, and then apply Adaptive Thresholding:

gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
thresh = cv2.adaptiveThreshold(gray_blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 1)

See the effect of the Gaussian kernel on the image:

And now the effect of the Adaptive Thresholding on the blurred image:

Note that at this point we already have the coins segmented, except for the small noise inside the center of the coins and in some places around them.

## 4. Morphology

The Morphological Operators are used to dilate, erode, and perform other operations on the pixels of the image. Here, because the camera can sometimes present artifacts, we will use the Morphological Operation of Closing to make sure that the borders of the coins are always closed; otherwise, we may find a coin with a semi-circle or something like that.
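The fill-in behaviour of closing is easy to see without OpenCV at all. The toy sketch below is my own minimal implementation (plain Python, 3x3 square structuring element, out-of-bounds neighbours ignored), not OpenCV's: it dilates a binary grid and then erodes the result, which fills the one-pixel hole in a blob:

```python
def dilate(img):
    """3x3 square dilation: a pixel becomes 1 if any in-bounds neighbour is 1."""
    h, w = len(img), len(img[0])
    return [[int(any(img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w))
             for x in range(w)] for y in range(h)]

def erode(img):
    """3x3 square erosion: a pixel stays 1 only if all in-bounds neighbours are 1."""
    h, w = len(img), len(img[0])
    return [[int(all(img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w))
             for x in range(w)] for y in range(h)]

def closing(img):
    # Closing = dilation followed by erosion.
    return erode(dilate(img))

# A 5x5 blob with a one-pixel hole in the middle: closing fills it.
blob = [[1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 0, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1]]
print(closing(blob))
```

The dilation step grows the foreground over the hole, and the erosion step shrinks the outline back, leaving the interior filled; that is exactly why the coin borders end up solid.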
To understand the effect of the Closing operation (which is an erosion of the pixels already dilated), see the image below:

You can see that after some iterations of the operation, the circles start to become filled. To use the Closing operation, we'll use the morphologyEx function from the OpenCV Python bindings:

kernel = np.ones((3, 3), np.uint8)
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=4)

See now the effect of the Closing operation on our coins:

Morphological Operators are conceptually very simple: the main principle is the application of a structuring element (in our case, a 3x3 block element) to the pixels of the image. If you want to understand it better, please see this animation explaining the operation of Erosion.

## 5. Contour detection and filtering

After applying the morphological operators, all we have to do is find the contour of each coin and then filter out the contours whose area is smaller or larger than a coin's area. You can imagine the procedure of finding contours in OpenCV as the operation of finding connected components and their boundaries. To do that, we'll use the OpenCV findContours function:

cont_img = closing.copy()
contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

Note that we made a copy of the closing image because findContours changes the image passed as the first parameter. We're also using the RETR_EXTERNAL flag, which means that only the extreme outer contours are returned, and the CHAIN_APPROX_SIMPLE flag, which returns a compact representation of the contour (for more information, see here).

After finding the contours, we need to iterate over each one and check its area, to filter out contours whose area is greater or smaller than the area of a coin. We also need to fit an ellipse to each contour found.
We could have done this using the minimum enclosing circle, but since my camera isn't perfectly above the coins, the coins appear with a small inclination that describes an ellipse.

for cnt in contours:
    area = cv2.contourArea(cnt)
    if area < 2000 or area > 4000:
        continue
    if len(cnt) < 5:
        continue
    ellipse = cv2.fitEllipse(cnt)
    cv2.ellipse(roi, ellipse, (0,255,0), 2)

Note that in the code above we iterate over each contour and filter out coins with an area smaller than 2000 or greater than 4000 (these are hardcoded values I found for the Brazilian coins at this distance from the camera). We then check the number of points of the contour, because fitEllipse needs at least 5 points, and finally we use the ellipse function to draw the ellipse in green over the original image.

To show the final image with the contours, we just use imshow to open a new window with the image:

cv2.imshow('final result', roi)

And finally, this is the result at the end of all the steps described above:

The complete source code:

import numpy as np
import cv2

def run_main():
    cap = cv2.VideoCapture(0)
    cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 720)

    while(True):
        ret, frame = cap.read()
        roi = frame[0:500, 0:500]
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

        gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
        thresh = cv2.adaptiveThreshold(gray_blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, 11, 1)

        kernel = np.ones((3, 3), np.uint8)
        closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=4)

        cont_img = closing.copy()
        contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)

        for cnt in contours:
            area = cv2.contourArea(cnt)
            if area < 2000 or area > 4000:
                continue
            if len(cnt) < 5:
                continue
            ellipse = cv2.fitEllipse(cnt)
            cv2.ellipse(roi, ellipse, (0,255,0), 2)

        cv2.imshow("Morphological Closing", closing)
        cv2.imshow('Contours', roi)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_main()

Christian
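A side note on API drift, since the code above targets the old cv2.cv constant names: in OpenCV 3 the capture properties became cv2.CAP_PROP_FRAME_WIDTH / cv2.CAP_PROP_FRAME_HEIGHT, and cv2.findContours returned a 3-tuple (image, contours, hierarchy) before returning to a 2-tuple (contours, hierarchy) in OpenCV 4. A small version-agnostic helper (the same idea as imutils.grab_contours; the function name here is mine) lets the rest of the script run unchanged on any of these versions:

```python
def grab_contours(res):
    """Return the contour list from a cv2.findContours result regardless of
    whether the installed OpenCV returns a 2-tuple or a 3-tuple."""
    if len(res) == 2:      # OpenCV 2.4 and 4.x: (contours, hierarchy)
        return res[0]
    if len(res) == 3:      # OpenCV 3.x: (image, contours, hierarchy)
        return res[1]
    raise ValueError("unexpected findContours return value")

# Usage in the script above would be (sketch, not run here):
# contours = grab_contours(cv2.findContours(cont_img, cv2.RETR_EXTERNAL,
#                                           cv2.CHAIN_APPROX_SIMPLE))
```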
S. Perone

1. very nice! will use it in a demo (and give credit)
   1. KN says: Hi, it is great for a demo because of the plain white background. Do you think we can get a very accurate edge of the coin against a busy background? I am talking about a professional application…
2. Syed says: One of the most comprehensive posts that I could find on this topic. Thanks.
3. I just realized that a few weeks ago I was reading this post to better understand image segmentation in OpenCV, and now I'm about to test PyEvolve. Small world? Great work.
4. One day all of Matlab's functions and programs will turn to Python and OpenCV thanks to you and people like you. Great thanks.
5. Hi, can you please tell me how to extract each coin after finding the contours?
6. Juan Galarza says: Nice tutorial! Everything is very clear, thanks!
7. Andre says: Very nice tutorial!!! Is there any way to measure the coin from this? Best regards
8. Faruq says: Hello, I've tried this program but there is an error that occurs, can you help me? Traceback (most recent call last): File "C:/Python27/Scripts/CountoursF.py", line 15, in NameError: name 'cap' is not defined — and an error again at: if cv2.waitKey(1) & 0xFF == ord('q'): break
   1. Without the entire code it is hard to say, but this error occurs because the variable "cap" isn't defined. Are you sure that you defined it like in line 5 of my code?
9. fatima says: Hi, nice tutorial!!! Can you please tell me how to extract rectangles instead of ellipses after finding the contours? And if we have many rectangles in an image, how do we extract the largest one? Thanks. Regards
10. Hi, very nice tutorial. Can you tell me how to display a bounding rectangle that has all the coins inside? I mean, all these coins in one large rectangle. Thanks. Regards
11. Anonymous says: Hello. Excellent tutorial, I'm starting to program with OpenCV-Python and have a question: how could I count the number of coins in the picture? Thank you very much.
12. Alejandro says: Hello.
Excellent tutorial, I'm starting to program with OpenCV-Python and have a question: how could I count the number of coins in the picture? Thank you very much.
13. Rafi says: Hello Christian. Thank you for sharing this, it is very helpful. Quick question: I am interested in counting the number of coins. Can this be done in your code as well? (My apologies, I have not been programming for the past 9 years.) –rafi–
14. Rob says: I have to extract just the asphalt from photos of a street. How do I do that?
15. simerya says: Hello, I am trying this on Android:

List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(closing, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
for (int idx = 0; idx < contours.size(); idx++) {
    MatOfPoint2f temp = new MatOfPoint2f(contours.get(idx).toArray());
    RotatedRect elipse1 = Imgproc.fitEllipse(temp);
    Imgproc.ellipse(originalMat, elipse1, new Scalar(0, 255, 0), 2);
}

and nothing displays. What am I missing?
16. Jon says: Your code is incredible. But I have a question: how could you recognize geometric shapes other than circles? Thanks, and I'm sorry for the English from Google.
    1. Danat Sandibay says: Hi, do you have a coin detector and counting program in Python?
17. jimmy says: Hello, what is the equivalent of numpy.ones() in C++?
18. yash singhai says: It says that roi is not defined when I try to display it. Everything runs, including the for loop, but for some reason when I try to display roi it throws an error. Any suggestions?
In 5210, we learned some standard library functions for reading in data, including read.csv and read.table. We also have a number of libraries/data formats supported by RStudio:

• tibble. A re-imagining of the data frame that keeps "what time has proven to be effective". See https://r4ds.had.co.nz/tibbles.html
• readr. A powerful library for reading and writing raw data files: csv, tsv, fixed-width, delimited, and other table specifications. Rather than creating standard data frames, it creates tibbles. See https://r4ds.had.co.nz/data-import.html
• haven. Includes functions to read in Stata, SPSS, and SAS files, and to do some data cleaning like transforming missing/empty values to NA.
• readxl. Libraries for reading in blocks of data from Excel files.
• curl. Loads web pages, including web-hosted data files.
• zip/unzip. Functions in the core utils package that read and write zip files. On Windows, this "relies on a zip program (for example that from Rtools) being in the path".
• gdata. A data manipulation library. Includes time/date handling, xls file handling, and a number of data reorganization/filtering functions that are probably replaced by dplyr, reshape, and similar tidyverse libraries. I discuss some of these in the optional section below.
• webreadr. Built on readr, it reads in various computer log files, like access logs for web servers.
• prepdat. A special-purpose data-reading library focused on problems in psychology, where you process response times and accuracies and have multiple participants with data saved in separate files. It will read in and merge multiple files, and it also has response-time outlier-removal procedures.

The material below covers the gdata package, which is fairly well replaced by haven/readxl and the like, along with prepdat, a special-purpose library that combines many different data files.
## Using readr and tibbles

If you use the readr data libraries instead of the base R data-reading functions, they create a new type of data structure called a 'tibble'. A tibble is a data frame that generally works a bit better. Some of the advantages of a tibble: it (usually) works as a drop-in replacement for a data frame, and it prints out more information when you view it, but only 5-10 rows and not all variables. This will hopefully prevent you from printing out markdown files that are hundreds of pages long because a single print function is hidden somewhere in your code. Tibbles are also a bit stricter, which can avoid some errors that are hard to track down. For example, if you try to access a variable name in a data frame that does not exist, you silently get an empty (NULL) result; a tibble will complain instead. Finally, subsetting a column of a tibble returns another tibble, whereas something like x[, 1] on a data frame will return a vector. The authors suggest that reading data into a tibble can be a lot faster as well. Tibbles also better support subsetting via pipes, an advanced syntax that some people feel is more powerful.

Storing data in a tibble is usually seamless, but there are sometimes packages out there that will fail when given a tibble where they expect a data frame, so be on the lookout for that.
library(tibble)

## NOTICE THE DIFFERENCE HERE:
iris[, 1]

 [1] 5.1 4.9 4.7 4.6 5.0 5.4 4.6 5.0 4.4 4.9 5.4 4.8 4.8 4.3 5.8 5.7 5.4 5.1 5.7
[20] 5.1 5.4 5.1 4.6 5.1 4.8 5.0 5.0 5.2 5.2 4.7 4.8 5.4 5.2 5.5 4.9 5.0 5.5 4.9
[39] 4.4 5.1 5.0 4.5 4.4 5.0 5.1 4.8 5.1 4.6 5.3 5.0 7.0 6.4 6.9 5.5 6.5 5.7 6.3
[58] 4.9 6.6 5.2 5.0 5.9 6.0 6.1 5.6 6.7 5.6 5.8 6.2 5.6 5.9 6.1 6.3 6.1 6.4
[ reached getOption("max.print") -- omitted 75 entries ]

as_tibble(iris)[, 1]

# A tibble: 150 x 1
   Sepal.Length
          <dbl>
 1          5.1
 2          4.9
 3          4.7
 4          4.6
 5          5
 6          5.4
 7          4.6
 8          5
 9          4.4
10          4.9
# … with 140 more rows

## Using readxl

The gdata library uses a third-party system in Perl, so you may be prompted to update that library, but it should work out of the box to read in xls and xlsx files. It is kind of slow for large files such as this one (with more than 1000 observations and 50+ variables). It creates a .csv file along the way that gets deleted, but you can use a related function to convert automatically to csv, as that may be faster.
library(readxl)
library(tibble)
library(formatR)

data2 <- read_excel("bigfive-codingcomplete.xlsx")
glimpse(data2)

Rows: 1,017
Columns: 53
$ Subnum      <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17…
$ Gender      <chr> "F", "M", "F", "M", "M", "F", "F", "M", "F", "F", "F", "F…
$ Education   <dbl> 2, 2, 3, 3, 1, 2, 3, 3, 1, 3, 3, 1, 2, 2, 2, 1, 3, 2, 1, …
$ Q1          <dbl> 3, 4, 2, 3, 2, 5, 4, 1, 4, 4, 3, 2, 3, 2, 2, 4, 3, 4, 5, …
$ Q2          <dbl> 4, 4, 2, 3, 4, 2, 3, 4, 4, 2, 3, 3, 2, 2, 2, 3, 3, 5, 4, …
$ Q3          <dbl> 4, 3, 5, 4, 4, 4, 5, 4, 4, 5, 5, 5, 4, 3, 4, 4, 3, 4, 4, …
$ Q4          <dbl> 2, 2, 1, 4, 4, 1, 3, 4, 3, 2, 4, 2, 1, 4, 2, 2, 3, 2, 3, …
$ Q5          <dbl> 3, 3, 3, 2, 4, 3, 1, 2, 3, 4, 2, 4, 4, 5, 2, 3, 4, 4, 4, …
$ Q6          <dbl> 4, 3, 4, 4, 5, 3, 3, 5, 3, 2, 2, 4, 3, 4, 3, 2, 4, 3, 2, …
$ Q7          <dbl> 5, 3, 4, 5, 3, 4, 4, 3, 3, 4, 4, 4, 4, 3, 4, 4, 4, 3, 5, …
$ Q8          <dbl> 4, 4, 4, 2, 2, 4, 4, 4, 2, 2, 2, 3, 2, 4, 2, 3, 4, 2, 3, …
$ Q9          <dbl> 5, 3, 5, 4, 1, 2, 4, 4, 3, 4, 4, 3, 4, 1, 3, 2, 2, 3, 3, …
$ Q10         <dbl> 5, 5, 3, 4, 4, 4, 5, 5, 4, 5, 4, 5, 5, 5, 4, 5, 3, 4, 3, …
$ Q11         <dbl> 3, 3, 2, 1, 1, 4, 5, 3, 3, 5, 2, 3, 4, 5, 3, 4, 3, 4, 3, …
$ Q12         <dbl> 2, 3, 2, 2, 2, 2, 2, 4, 4, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, …
$ Q13         <dbl> 2, 5, 5, 4, 4, NA, 4, 5, 4, 4, 5, 5, 4, 3, 5, 5, 2, 4, 4,…
$ Q14         <dbl> 4, 5, 1, 2, 4, 3, 4, 4, 4, 3, 5, 3, 3, 4, 3, 4, 4, 5, 3, …
$ Q15         <dbl> 4, 3, 2, 2, 4, 4, 4, 5, 4, 2, 3, 5, 3, 5, 3, 3, 3, 4, 5, …
$ Q16         <dbl> 2, 4, 1, 4, 2, 4, 3, 2, 3, 4, 3, 3, 3, 5, 3, 5, 5, 3, 4, …
$ Q17         <dbl> 5, 3, 3, 5, 2, 4, 5, 3, 2, 4, 5, 5, 4, 4, 4, 3, 5, 4, 5, …
$ Q18         <dbl> 4, 1, 3, 4, 5, 2, 1, 4, 1, 2, 5, 2, 1, 5, 1, 3, 4, 1, 4, …
$ Q19         <dbl> 1, 5, 1, 4, 5, 3, 5, 1, 5, 2, 5, 4, 3, 5, 2, 3, 4, 5, 3, …
$ Q20         <dbl> 4, 4, 4, 3, 5, 3, 2, 4, 4, 4, 5, 5, 4, 5, 2, 4, 1, 4, 3, …
$ Q21         <dbl> 2, 4, 4, 4, 4, 1, 2, 5, 4, 2, 3, 5, 3, 4, 4, 2, 5, 3, 2, …
$ Q22         <dbl> 5, 4, 4, 5, 1, 5, 5, 2, 4, 1, 5, 4, 4, 1, 4, 5, 4, 4, 4, …
$ Q23         <dbl> 3, 5, 2, 4, 4, 4, 1, 2, 4, 2, 4, 1, 2, 3, 3, 3, 4, 2, 3, …
$ Q24         <dbl> 5, 3, 5, 4, 2, 3, 4, 3, 2, 4, 5, 4, 4, 1, 3, 2, 1, 4, 3, …
$ Q25         <dbl> 4, 3, 2, 2, 4, 3, 3, 2, 3, 5, 3, 5, 4, 5, 2, 3, 2, 3, 4, …
$ Q26         <dbl> 4, 5, 2, 1, 1, 4, 2, 4, 4, 4, 1, 1, 3, 2, 3, 4, 3, 5, 5, …
$ Q27         <dbl> 4, 5, 3, 3, 5, 3, 2, 3, 4, 1, 2, 5, 3, 5, 3, 4, 4, 4, 5, …
$ Q28         <dbl> 3, 4, 4, 2, 3, 3, 2, 4, 5, 4, 2, 4, 2, 1, 4, 4, 5, 2, 2, …
$ Q29         <dbl> 4, 3, 4, 4, 2, 3, 5, 5, 4, 4, 5, 5, 3, 3, 3, 4, 4, 3, 4, …
$ Q30         <dbl> 2, 4, 1, 4, 4, 4, 1, 4, 5, 2, 3, 3, 3, 4, 4, 3, 5, 5, 4, …
$ Q31         <dbl> 5, 4, 2, 4, 5, 4, 2, 3, 5, 2, 1, 5, 4, 5, 4, 5, 3, 3, 4, …
$ Q32         <dbl> 3, 5, 3, 4, 4, 3, 4, 4, 4, 2, 4, 4, 4, 5, 4, 2, 5, 3, 2, …
$ Q33         <dbl> 5, 5, 4, 5, 4, 4, 5, 2, 4, 4, 5, 4, 4, 4, 4, 5, 4, 4, 5, …
$ Q34         <dbl> 4, 5, 3, 3, 3, 4, 5, 5, 3, 5, 4, 5, 4, 2, 3, 4, 2, 4, 5, …
$ Q35         <dbl> 5, 4, 5, 4, 2, 4, 4, 4, 4, 4, 5, 4, 3, 3, 4, 4, 5, 4, 4, …
$ Q36         <dbl> 3, 5, 4, 3, 2, 3, 4, 5, 5, 3, 5, 1, 2, 2, 4, 3, 5, 3, 4, …
$ Q37         <dbl> 3, 5, 2, 2, 1, 5, 2, 2, 4, 5, 3, 2, 4, 1, 3, 5, 2, 4, 5, …
$ Q38         <dbl> 2, 4, 3, 2, 3, 3, 2, 4, 4, 2, 2, 2, 2, 2, 2, 3, 4, 4, 1, …
$ Q39         <dbl> 3, 4, 2, 3, 4, NA, 5, 5, 3, 4, 4, 5, 4, 3, 3, 4, 4, 4, 4,…
$ Q40         <dbl> 1, 4, 1, 3, 5, 3, 3, 3, 4, 2, 5, 4, 4, 5, 3, 2, 2, 2, 3, …
$ Q41         <dbl> 4, 3, NA, 4, 4, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 5, 4, 4, 4,…
$ Q42         <dbl> 2, 5, 5, 4, 1, 2, 5, 2, 3, 4, 5, 1, 2, 1, 2, 2, 2, 3, 4, …
$ Q43         <dbl> 4, 4, 2, 4, 2, 4, 3, 4, 3, 4, 3, 3, 4, 4, 4, 5, 4, 3, 4, …
$ Q44         <dbl> 4, 5, 4, 4, 4, 4, 5, 3, 3, 2, 2, 4, 3, 5, 4, 3, 5, 2, 4, …
$ Extra       <dbl> 2.750, 3.500, 2.375, 2.250, 1.500, 3.750, 3.625, 2.500, 3…
$ Agreeable   <dbl> 3.444444, 3.111111, 3.888889, 4.111111, 2.888889, 3.11111…
$ Consc       <dbl> 2.777778, 3.444444, 3.777778, 3.000000, 3.222222, 3.00000…
$ Neuro       <dbl> 2.250000, 3.250000, 1.750000, 3.000000, 3.875000, 2.71428…
$ Openness    <dbl> 3.0, 3.6, 2.7, 2.8, 4.0, 3.2, 2.8, 3.3, 3.4, 3.0, 2.9, 3.…
$ extrabinary <chr> "I", "E", "I", "I", "I", "E", "E", "I", "I", "E", "E", "I…

print(data2)

# A tibble: 1,017 x 53
   Subnum Gender Education    Q1    Q2    Q3    Q4    Q5    Q6    Q7    Q8    Q9
    <dbl> <chr>      <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
 1      1 F              2     3     4     4     2     3     4     5     4     5
 2      2 M              2     4     4     3     2     3     3     3     4     3
 3      3 F              3     2     2     5     1     3     4     4     4     5
 4      4 M              3     3     3     4     4     2     4     5     2     4
 5      5 M              1     2     4     4     4     4     5     3     2     1
 6      6 F              2     5     2     4     1     3     3     4     4     2
 7      7 F              3     4     3     5     3     1     3     4     4     4
 8      8 M              3     1     4     4     4     2     5     3     4     4
 9      9 F              1     4     4     4     3     3     3     3     2     3
10     10 F              3     4     2     5     2     4     2     4     2     4
# … with 1,007 more rows, and 41 more variables: Q10 <dbl>, Q11 <dbl>,
#   Q12 <dbl>, Q13 <dbl>, Q14 <dbl>, Q15 <dbl>, Q16 <dbl>, Q17 <dbl>,
#   Q18 <dbl>, Q19 <dbl>, Q20 <dbl>, Q21 <dbl>, Q22 <dbl>, Q23 <dbl>,
#   Q24 <dbl>, Q25 <dbl>, Q26 <dbl>, Q27 <dbl>, Q28 <dbl>, Q29 <dbl>,
#   Q30 <dbl>, Q31 <dbl>, Q32 <dbl>, Q33 <dbl>, Q34 <dbl>, Q35 <dbl>,
#   Q36 <dbl>, Q37 <dbl>, Q38 <dbl>, Q39 <dbl>, Q40 <dbl>, Q41 <dbl>,
#   Q42 <dbl>, Q43 <dbl>, Q44 <dbl>, Extra <dbl>, Agreeable <dbl>, Consc <dbl>,
#   Neuro <dbl>, Openness <dbl>, extrabinary <chr>

There are sometimes multiple sheets in a spreadsheet. By default this will grab the first sheet, but a specific sheet can be specified too. This reads the second sheet, which contains just the composite (mean) scores on the five personality dimensions:

# data2 <- read.xls('bigfive-codingcomplete.xlsx', sheet = 2)
data2 <- read_excel("bigfive-codingcomplete.xlsx", sheet = 2)
head(data2)

The documentation warns that strings will be quoted, and you may need to play with the quote argument if you have quoted text in the spreadsheet, but this should work reasonably well for simple files.

## Using curl

Curl is a widely used library and program for downloading files from the web. R provides a library that uses curl, which allows you to specify a web URL and a local file name, and it will download and save the file. For example, here is an xlsx file that shows data about student health at an Australian university:

library(curl)

## This won't work!
## data3 <- read_excel('https://lo.unisa.edu.au/pluginfile.php/1020313/mod_book/chapter/106604/HLTH1025_2016.xlsx')

curl_download("https://lo.unisa.edu.au/pluginfile.php/1020313/mod_book/chapter/106604/HLTH1025_2016.xlsx",
    destfile = "HLTH1025_2016.xlsx")
data3 <- read_excel("HLTH1025_2016.xlsx", sheet = "Data")
codesheet <- read_excel("HLTH1025_2016.xlsx", sheet = 1, skip = 2)

You can verify that the xlsx file has been downloaded to your working directory and look at it. Note that it has two worksheets. The first worksheet is 'Metadata', which is the code book for the survey. The second sheet is 'Data', which is the actual data table. We need to specify sheet = 2 or sheet = "Data" to get the right data sheet.

## The haven library

Haven can actually read a web URL directly. Here is the same data, made available as an SPSS file. Notice it looks identical to the xls sheet.

library(haven)
data3.spss <- read_spss("https://lo.unisa.edu.au/pluginfile.php/1020313/mod_book/chapter/106604/HLTH1025_2016.sav")

You need to be careful when using other people's data, especially when it is in SPSS format, because SPSS has a tradition of using a number like 999 or -999 as the code for missing data. I've heard claims that a depressingly large number of high correlations reported in the literature occur because people used 999 as missing data in two different variables and failed to treat it as missing, which inflates the correlations substantially. If we look carefully, we can see 99s in mntlcurr and sf1, and 9999 in weight. Luckily, they did not choose to use 99 as a missing value for weight, because there were legitimate values of 99 for weight. We'd hope that the SPSS format would store this information. Haven has some functions to handle user-specified NA values, but at least for this file it does not flag these values as missing, so we have to do it by hand.
Note that almost every variable has one or more missing codes, so there will be a lot of data recoding if we want to analyze this data.

data3.spss$weight[data3.spss$weight == 9999] <- NA
data3.spss$mntlcurr[data3.spss$mntlcurr == 99] <- NA
data3.spss$sf1[data3.spss$sf1 == 99] <- NA

# Older materials

## The gdata package

gdata has a lot of data management tools related to sampling, summarizing, and the like. It has a number of useful functions for reading in .xls and .xlsx files. Note that the similar 'foreign' package includes ways of reading in additional file formats, such as SPSS, SAS, Stata, and Octave, as does the readxl package, which is built in to RStudio.

# install.packages('gdata')
library(gdata)

If loading the library returns an error, it is likely that you need to install Perl on your computer. Instructions (courtesy Raghavendran Shankar):

Step-by-step procedure:

1. I referred to the CRAN R page: https://cran.r-project.org/web/packages/gdata/INSTALL
2. On that page, I used the link for installing Perl (http://www.activestate.com/activeperl/) and downloaded and installed Perl for 64-bit Windows.
3. After installation, there was a folder called Perl in the C drive. In it, I went to the bin folder, where there was an .exe file.
4. I copied the path and used it in read.xls as shown below:
   data <- read.xls("bigfive-codingcomplete", perl = "C:/Perl64/bin/Perl.exe")
5. The data gets imported in R.

## The prepdat package

The prepdat package is a new package that has two interesting functions: one that merges multiple files together, and another that aggregates summary statistics, which is especially useful for response times.

# install.packages('prepdat')
library("prepdat")
## This might be needed on windows: install.packages('rtools')

utils::unzip("data.zip")  ## unzip the data file. This may not work on some platforms, and you may need to do it by hand.
The file_merge function will merge multiple files (possibly in nested directories) with specific formats/matching strings into a single data frame:

data <- file_merge(folder_path = "data", has_header = T,
    raw_file_extension = "csv", raw_file_name = "globallocal*")
head(data)

  subnum block trial code type correctresp localstim globalstim consistency
1    105     1     1    0    0           E         E          O           0
2    105     1     2    0    0           E         E          O           0
3    105     1     3    0    0           H         H          O           0
4    105     1     4    0    0           E         E          O           0
  correctLocal correctGlobal positionX positionY response correct  time1    rt
1            1            NA       960       540 <rshift>       1  97295 15634
2            1            NA       960       540 <rshift>       1 114447  9514
3            1            NA       960       540 <lshift>       1 125478  1454
4            1            NA       960       540 <rshift>       1 128449   669
[ reached 'max' / getOption("max.print") -- omitted 2 rows ]

There is also a prep function that will summarize your data:

data$within <- as.numeric(data$correctresp)
dat2 <- prep(dataset = data, dvc = "rt", id = "subnum",
    within_vars = c("within"), save_results = F,
    save_summary = T, results_path = "data")

  subnum block trial code type correctresp localstim globalstim consistency
1    105     1     1    0    0           E         E          O           0
2    105     1     2    0    0           E         E          O           0
3    105     1     3    0    0           H         H          O           0
4    105     1     4    0    0           E         E          O           0
  correctLocal correctGlobal positionX positionY response correct  time1    rt
1            1            NA       960       540 <rshift>       1  97295 15634
2            1            NA       960       540 <rshift>       1 114447  9514
3            1            NA       960       540 <lshift>       1 125478  1454
4            1            NA       960       540 <rshift>       1 128449   669
  within
1     NA
2     NA
3     NA
4     NA
[ reached 'max' / getOption("max.print") -- omitted 2 rows ]
  subnum block trial code type correctresp localstim globalstim consistency
1    105     1     1    0    0           E         E          O           0
2    105     1     2    0    0           E         E          O           0
3    105     1     3    0    0           H         H          O           0
  correctLocal correctGlobal positionX positionY response correct  time1    rt
1            1            NA       960       540 <rshift>       1  97295 15634
2            1            NA       960       540 <rshift>       1 114447  9514
3            1            NA       960       540 <lshift>       1 125478  1454
  within within_condition
1     NA             <NA>
2     NA             <NA>
3     NA             <NA>
[ reached 'max' / getOption("max.print") -- omitted 3 rows ]
  subnum block trial code type correctresp localstim globalstim consistency
1    105     1     1    0    0           E         E          O           0
2    105     1     2    0    0           E         E          O           0
3    105     1     3    0    0           H         H          O           0
  correctLocal correctGlobal positionX positionY response correct  time1    rt
1            1            NA       960       540 <rshift>       1  97295 15634
2            1            NA       960       540 <rshift>       1 114447  9514
3            1            NA       960       540 <lshift>       1 125478  1454
  within within_condition
1     NA             <NA>
2     NA             <NA>
3     NA             <NA>
[ reached 'max' / getOption("max.print") -- omitted 3 rows ]
    subnum    mdvc1    sdvc1 meddvc1 t1dvc1 t1.5dvc1   t2dvc1 n1tr1 n1.5tr1
105    105 756.2044 739.9412     650    NaN 756.2044 756.2044     0       0
106    106 515.7235 164.8607     482    NaN 515.7235 515.7235     0       0
110    110 565.1750 190.7407     535    NaN 565.1750 565.1750     0       0
    n2tr1 ndvc1 p1tr1 p1.5tr1 p2tr1   rminv1 p0.05dvc1 p0.25dvc1 p0.75dvc1
105     0   680     0       0     0 641.7693    416.85     539.5    803.25
106     0   680     0       0     0 488.6712    370.95     431.0    565.00
110     0   680     0       0     0 299.7960    397.00     472.0    616.25
    p0.95dvc1
105   1257.25
106    711.05
110    792.35
[ reached 'max' / getOption("max.print") -- omitted 3 rows ]

# The Tidyverse

The tibble library is part of a set of packages called the tidyverse. Tidyverse is a meta-library that loads and installs a bunch of other packages. These include:

• ggplot2, for fancy plotting
• dplyr, for data manipulation
• tidyr, for 'tidying' messy data
• forcats, for dealing with factors
• purrr, for functional programming: ways of applying functions to data.

These replace and improve on a lot of the data management functions we have already used, like aggregate, apply/tapply/lapply, filtering, and sorting. There are a number of other secondary and related libraries in the tidyverse that do not get loaded. Most of these are led by Hadley Wickham and RStudio.

library(tidyverse)
# Deutsch-Jozsa Algorithm

The Deutsch-Jozsa algorithm was the first to show a separation between the quantum and classical difficulty of a problem. It demonstrates the significance of allowing quantum amplitudes to take both positive and negative values, as opposed to classical probabilities, which are always non-negative.

The Deutsch-Jozsa problem is defined as follows. Consider a function $$f(x)$$ that takes as input $$n$$-bit strings $$x$$ and returns $$0$$ or $$1$$. Suppose we are promised that $$f(x)$$ is either a constant function that takes the same value $$c\in \{0,1\}$$ on all inputs $$x$$, or a balanced function that takes each value $$0$$ and $$1$$ on exactly half of the inputs. The goal is to decide whether $$f$$ is constant or balanced by making as few function evaluations as possible. Classically, this requires $$2^{n-1}+1$$ function evaluations in the worst case. Using the Deutsch-Jozsa algorithm, the question can be answered with just one function evaluation. In the quantum world the function $$f$$ is specified by an oracle circuit $$U_f$$ (see the previous section on Grover’s algorithm) such that $$U_f |x\rangle =(-1)^{f(x)} |x\rangle$$.

To understand how the Deutsch-Jozsa algorithm works, let us first consider a typical interference experiment: a particle that behaves like a wave, such as a photon, can travel from the source to an array of detectors by following two or more paths simultaneously. The probability of observing the particle is concentrated at those detectors where most of the incoming waves arrive with the same phase.

Imagine that we can set up an interference experiment as above, with $$2^n$$ detectors and $$2^n$$ possible paths from the source to each of the detectors. We shall label the paths and the detectors with $$n$$-bit strings $$x$$ and $$y$$ respectively.
Suppose further that the phase accumulated along a path $$x$$ to a detector $$y$$ equals $$C(-1)^{f(x)+x\cdot y}$$, where $$x\cdot y=\sum_{i=1}^n x_i y_i$$ is the binary inner product and $$C$$ is a normalizing coefficient. The probability of observing the particle at a detector $$y$$ can be computed by summing the amplitudes of all paths $$x$$ arriving at $$y$$ and taking the absolute value squared:

$$\mathrm{Pr}(y)=| C\sum_x (-1)^{f(x)+x\cdot y} |^2$$

The normalization condition $$\sum_y \mathrm{Pr}(y)=1$$ then gives $$C=2^{-n}$$. Let us compute the probability $$\mathrm{Pr}(y=0^n)$$ of observing the particle at the detector $$y=0^n$$ (the all-zeros string). We have

$$\mathrm{Pr}(y=0^n)=| 2^{-n}\sum_x (-1)^{f(x)} |^2$$

If $$f(x)=c$$ is a constant function, we get $$\mathrm{Pr}(y=0^n)=|(-1)^c |^2 =1$$. However, if $$f(x)$$ is a balanced function, we get $$\mathrm{Pr}(y=0^n)=0$$, since all the terms in the sum over $$x$$ cancel each other. We can therefore determine whether $$f$$ is constant or balanced with certainty by running the experiment just once.

Of course, this experiment is not practical, since it would require an impossibly large optical table! However, we can simulate it on a quantum computer with just $$n$$ qubits and access to the oracle circuit $$U_f$$. Indeed, consider the following algorithm:

Step 1. Initialize $$n$$ qubits in the all-zeros state $$|0,\ldots,0\rangle$$.

Step 2. Apply the Hadamard gate $$H$$ to each qubit.

Step 3. Apply the oracle circuit $$U_f$$.

Step 4. Repeat Step 2.

Step 5. Measure each qubit. Let $$y=(y_1,\ldots,y_n)$$ be the list of measurement outcomes. We conclude that $$f$$ is a constant function if $$y$$ is the all-zeros string.

Why does this work? Recall that the Hadamard gate $$H$$ maps $$|0\rangle$$ to the uniform superposition of $$|0\rangle$$ and $$|1\rangle$$. Thus the state reached after Step 2 is $$2^{-n/2} \sum_x |x\rangle$$, where the sum runs over all $$n$$-bit strings.
The oracle circuit maps this state to $$2^{-n/2} \sum_x (-1)^{f(x)} |x\rangle$$. Finally, let us apply the layer of Hadamards at Step 4. It maps a basis state $$|x\rangle$$ to the superposition $$2^{-n/2}\sum_y (-1)^{x\cdot y} |y\rangle$$. Thus the state reached after Step 4 is $$|\psi\rangle =\sum_y \psi(y) |y\rangle$$, where $$\psi(y)=2^{-n}\sum_x (-1)^{f(x)+x\cdot y}$$. This is exactly what we need for the interference experiment described above. The final measurement at Step 5 plays the role of detecting the particle. As was shown above, the probability of measuring $$y=0^n$$ at Step 5 is one if $$f$$ is a constant function and zero if $$f$$ is a balanced function. Thus we have solved the Deutsch-Jozsa problem with certainty by making just one function evaluation.

## Example circuits

Suppose $$n=3$$ and $$f(x)=x_0 \oplus x_1 x_2$$. This function is balanced, since flipping the bit $$x_0$$ flips the value of $$f(x)$$ regardless of $$x_1,x_2$$. To run the Deutsch-Jozsa algorithm we need an explicit description of the oracle circuit $$U_f$$ as a sequence of quantum gates. To this end we need a $$Z_0$$ gate such that $$Z_0|x\rangle =(-1)^{x_0} |x\rangle$$ and a controlled-Z gate $$CZ_{1,2}$$ such that $$CZ_{1,2} |x\rangle =(-1)^{x_1x_2} |x\rangle$$. Using basic circuit identities (see the Basic Circuit Identities and Larger Circuits section), one can realize the controlled-Z gate as a CNOT sandwiched between two Hadamard gates.

DJ N=3 Example

DJ N=3 Constant
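The interference formula $$\psi(y)=2^{-n}\sum_x (-1)^{f(x)+x\cdot y}$$ can be checked classically for small $$n$$. A minimal sketch in plain Python (no quantum SDK; the function names are illustrative):

```python
from itertools import product

def dj_distribution(f, n):
    """Output distribution of the Deutsch-Jozsa circuit, computed directly
    from psi(y) = 2^-n * sum_x (-1)^(f(x) + x.y)."""
    xs = list(product([0, 1], repeat=n))

    def amplitude(y):
        dot = lambda x: sum(xi * yi for xi, yi in zip(x, y))
        return sum((-1) ** (f(x) + dot(x)) for x in xs) / 2 ** n

    return {y: amplitude(y) ** 2 for y in xs}

# The balanced example from this section, f(x) = x0 XOR (x1 AND x2),
# and a constant function for comparison.
balanced = lambda x: x[0] ^ (x[1] & x[2])
constant = lambda x: 0

print(dj_distribution(constant, 3)[(0, 0, 0)])   # 1.0 -> constant
print(dj_distribution(balanced, 3)[(0, 0, 0)])   # 0.0 -> balanced
```

Summing the dictionary values returns 1, confirming that the distribution is normalized; of course this brute-force check takes $$2^n$$ evaluations of $$f$$, which is exactly the cost the quantum algorithm avoids.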
Visual Diary (ongoing)

/ongoing/ These are a series of ongoing explorations that serve as much as a visual diary. The mind is constantly reshaping the way one thinks, and as such there are bound to be lies here and there, but these are my truth; as close as I can get to them.
# Distributed Tracing, OpenTracing and Elastic APM

## The World of Microservices

Enterprises are increasingly adopting microservice architectures, developing and deploying more microservices every day. Often, these services are developed in different programming languages, deployed into separate runtime containers, and managed by different teams and organizations. Large enterprises like Twitter can have tens of thousands of microservices, all working together to achieve their business goals. As discussed in this Twitter blog post, visibility into the health and performance of the diverse service topology is extremely important for quickly determining the root cause of issues, as well as for increasing Twitter’s overall reliability and efficiency. This is where distributed tracing can really help.

Distributed tracing helps with two fundamental challenges faced by microservices:

1. Latency tracking

One user request or transaction can travel through many different services in different runtime environments. Understanding the latency of each of these services for a particular request is critical to understanding the overall performance characteristics of the system as a whole, and it provides valuable insight for possible improvements.

2. Root cause analysis

Root cause analysis is even more challenging for applications built on top of large ecosystems of microservices. Anything can go wrong with any of the services at any time. Distributed tracing is of crucial importance when debugging issues in such a system.

Taking a step back, tracing is only one piece of the puzzle among the Three Pillars of Observability: logging, metrics, and tracing. As we will discuss briefly, the Elastic Stack is a unified platform for all three pillars of observability. When logs, metrics, and APM data are stored in the same repository, and analyzed and correlated together, you gain the most context-rich insight into your business applications and systems.
In this blog, we will focus solely on the tracing aspect.

## Distributed Tracing with Elastic APM

Elastic APM is an application performance monitoring system built on the Elastic Stack. It allows you to monitor software services and applications in real time, collecting detailed performance information on response times for incoming requests, database queries, calls to caches, external HTTP requests, and more. Elastic APM agents offer rich auto-instrumentation out of the box (e.g. timing of db queries) for supported frameworks and technologies, and you can also add custom instrumentation for custom purposes. This makes it much easier to pinpoint and fix performance problems quickly.

Elastic APM supports distributed tracing and is OpenTracing compliant. It enables you to analyze performance throughout your microservice architecture, all in one view. Elastic APM accomplishes this by tracing all of the requests, from the initial web request to your front-end service through the queries made to your back-end services. This makes finding possible bottlenecks throughout your application much easier and faster. The Timeline visualization in the APM UI shows a waterfall view of all of the transactions from individual services that are connected in a trace.

The Elastic Stack is also a great platform for log aggregation and metrics analytics. Having logs, metrics, and APM traces all stored and indexed in Elasticsearch is super powerful. Being able to quickly correlate data sources like infrastructure metrics, logs, and traces enables you to debug the root cause much faster. In the APM UI, when looking at a trace, you can quickly jump to the host or container metrics and logs by clicking the Actions menu, if these metrics and logs are also collected.

It would be wonderful if everybody was using Elastic APM to instrument their applications and services. However, Elastic APM is not the only distributed tracing solution available today. There are other popular open source tracers like Zipkin and Jaeger.
Concepts like polyglot programming and polyglot persistence are well known and well accepted in the world of microservices. Similarly, “polyglot tracing” is going to be more common than not. Because of the independent and decoupled nature of microservices, the people responsible for different services will likely use different tracing systems.

## Challenges for Developers

With many different tracing systems available, developers are faced with real challenges. At the end of the day, tracers live inside the application code. Some common challenges are:

1. Which tracing system should I use?
2. What if I want to change my tracer? I don’t want to change my entire source code.
3. What do I do with shared libraries that might be using different tracers?
4. What if my third-party services use different tracers?

Not surprisingly, we need standardization to address these concerns. Before discussing where we are with standardization, let’s take a step back and look at distributed tracing from an architectural perspective, in a holistic manner, to understand what’s required to achieve the distributed tracing “nirvana”.

## Architectural Components of Distributed Tracing

Modern software systems can be broken down into a few high-level components, typically designed and developed by different organizations and run in different runtime environments:

• Your own application code and services
• Shared libraries and services
• External services

To monitor such a system in a holistic and integrated fashion with distributed tracing, we need four architectural components:

1. Standardized distributed tracing API. A standardized, vendor-neutral tracing API allows developers to instrument their code in a standardized way, no matter what tracer they choose to use later at runtime. This is the first step towards everything else.

2. Standardized tracing context definition and propagation.
For a trace to cross from one runtime to another, the tracing context has to be understood by both parties, and there has to be a standard way of propagating that context. At a minimum, the context carries a trace ID.

3. Standardized tracing data definition. For trace data from one tracer to be understood and consumed by another tracer, there has to be a standardized and extensible format for it.

4. Interoperable tracers. Finally, to achieve 100% runtime compatibility, different tracers have to provide mechanisms to both export and import trace data from other tracers in an open way. Ideally, a shared library or service instrumented by a tracer like Jaeger should be able to have its tracing data sent directly to Elastic APM or another tracer via the Jaeger agent through a configuration change.

Now, enter OpenTracing.

## The OpenTracing Specification

The OpenTracing specification defines an open, vendor-neutral API for distributed tracing. It enables users to avoid vendor lock-in by allowing them to switch the OpenTracing implementer at any time. It also enables developers of frameworks and shared libraries to provide tracing functionality out of the box, in a standard fashion, to enable better insight into the frameworks and libraries. Web-scale companies like Uber and Yelp are using OpenTracing to get deeper visibility into their highly distributed and dynamic applications.

### The OpenTracing Data Model

The basic concepts of OpenTracing and its fundamental data model came from Google’s Dapper paper. Key concepts include trace and span.

1. A trace represents a transaction as it moves through a distributed system. It can be thought of as a directed acyclic graph of spans.

2. A span represents a logical unit of work that has a name, start time, and duration. Spans may be nested and ordered to model relationships. Spans accept key:value tags as well as fine-grained, time-stamped, structured logs attached to the particular span instance.

3.
Trace context is the trace information that accompanies the distributed transaction, including as it passes from service to service over the network or through a message bus. The context contains the trace identifier, span identifier, and any other data that the tracing system needs to propagate to the downstream service.

### How does it all fit in?

Ideally, with standardization, tracing information from custom application code, shared libraries, and shared services developed and run by different organizations would be exchangeable and runtime-compatible, no matter which tracer each of these components chose to use. However, OpenTracing only addresses the first of the four architectural components we discussed above. So, where are we today with the other components, and what does the future hold?

### Where are We Today?

As discussed, OpenTracing defines a standard set of tracing APIs for different tracers to implement, which is a great start and very encouraging. However, we still need tracing context standardization and tracing data standardization for tracers to be compatible and exchangeable with each other.

1. The OpenTracing API provides a standard set of APIs. This is pretty much the only standardization we have as of today. There are limitations to the specification, too; for example, it does not cover all programming languages. Nevertheless, it’s a wonderful effort and is gaining great traction.

2. No standardized tracing context definition yet. The W3C Distributed Tracing Working Group is in the process of standardizing the tracing context definition in the W3C Trace Context specification. The specification defines a unified approach to context and event correlation within distributed systems, and will enable end-to-end transaction tracing within distributed applications across different monitoring tools. Elastic APM supports the W3C Trace Context working group's effort to standardize the HTTP header format for distributed tracing.
Our agent implementations closely follow the Trace Context draft specification, and we intend to fully support the final specification. As an example of the incompatibility of tracing context today, here are the HTTP headers used by Elastic APM and Jaeger for the trace ID. As you can see, both the name and the encoding of the ID are different. When different tracing headers are used, traces will break when they cross the boundaries of the respective tracing tools.

Jaeger:

```
uber-trace-id: 118c6c15301b9b3b3:56e66177e6e55a91:18c6c15301b9b3b3:1
```

Elastic APM:

```
elastic-apm-traceparent: 00-f109f092a7d869fb4615784bacefcfd7-5bf936f4fcde3af0-01
```

There are other challenges beyond the definition itself. For example, not all HTTP headers are automatically forwarded by service infrastructure, routers, and so on. Whenever headers are dropped, the trace will break.

3. No standardized tracing data definition yet. As stated by the W3C Distributed Tracing Working Group, the second piece of the puzzle for trace interoperability is “a standardized and extensible format to share trace data -- full traces or fragments of traces -- across tools for further interpretation”. As you can imagine, with many open source and commercial players involved, agreeing on a standard format is not an easy thing. Hopefully we will get there soon.

4. Tracers are not runtime-compatible. Because of everything we discussed above, plus the mixed motivations around making systems open and compatible with the rest of the world, tracers are simply not compatible with each other at runtime today, and it will likely stay that way for the foreseeable future.

## How Elastic APM Works with Other Tracers Today

Even though we are not close to 100% compatibility among tracers today, there is no need to be discouraged. The Elastic Stack can still work with other tracers in a couple of different ways.

1. Elasticsearch as the scalable backend data store for other tracers.
Not surprisingly, Elasticsearch has been used as the backend data store for other tracers like Zipkin and Jaeger, due to its massive scalability and rich analytics capabilities. Shipping Zipkin or Jaeger tracing data into Elasticsearch is a simple configuration for both of them. Once the tracing data is inside Elasticsearch, you can use the powerful analytics and visualization capabilities of Kibana to analyze your tracing information and create eye-catching visualizations that provide deep insight into your application performance.

2. Elastic OpenTracing Bridge

The Elastic APM OpenTracing bridge allows you to create Elastic APM transactions and spans using the OpenTracing API. In other words, it translates calls to the OpenTracing API into Elastic APM, and thus allows for reusing existing instrumentation. For example, existing instrumentation done with Jaeger can be replaced with Elastic APM by changing a couple of lines of code.

Original instrumentation with Jaeger:

```java
import io.opentracing.Scope;
import io.opentracing.Tracer;
import io.jaegertracing.Configuration;
import io.jaegertracing.internal.JaegerTracer;
...

private void sayHello(String helloTo) {
    Configuration config = ...
    Tracer tracer = config.getTracer();

    try (Scope scope = tracer.buildSpan("say-hello").startActive(true)) {
        scope.span().setTag("hello-to", helloTo);
    }
    ...
}
```

Replace Jaeger with the Elastic OpenTracing bridge:

```java
import io.opentracing.Scope;
import io.opentracing.Tracer;
import co.elastic.apm.opentracing.ElasticApmTracer;
...

private void sayHello(String helloTo) {
    Tracer tracer = new ElasticApmTracer();

    try (Scope scope = tracer.buildSpan("say-hello").startActive(true)) {
        scope.span().setTag("hello-to", helloTo);
    }
    ...
}
```

With this simple change, the tracing data will be happily flowing into Elastic APM, without you having to modify other tracing code. That’s the power of OpenTracing!
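The W3C `traceparent` header quoted earlier in this post has a fixed layout: `version-trace_id-parent_id-flags`, all lowercase hex. A small hand-rolled parser illustrates the structure (this sketch is not part of any Elastic agent API; the regex and field names are illustrative):

```python
import re

# version(2) - trace_id(32) - parent_id(16) - flags(2), lowercase hex
TRACEPARENT = re.compile(
    r"^(?P<version>[0-9a-f]{2})-"
    r"(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<parent_id>[0-9a-f]{16})-"
    r"(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(value):
    """Split a traceparent header value into its four fields."""
    m = TRACEPARENT.match(value)
    if not m:
        raise ValueError("malformed traceparent: %r" % value)
    return m.groupdict()

header = "00-f109f092a7d869fb4615784bacefcfd7-5bf936f4fcde3af0-01"
ctx = parse_traceparent(header)
print(ctx["trace_id"])   # f109f092a7d869fb4615784bacefcfd7
```

A propagating service would validate the incoming header like this, create a new parent id for its own span, and forward the header downstream with the trace id unchanged, which is exactly the context propagation the specification standardizes.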
## Elastic APM Real User Monitoring

While we mostly focus on backend services when discussing tracing and context propagation, there is great value in starting the trace on the client side, in the browser. When doing so, you get trace information from the moment a user clicks on something in the browser. That trace information represents the “real user experience” of your applications from a performance perspective. Unfortunately, again, there is no standardized way of forwarding that information today. The W3C group does intend to extend the trace context all the way to the browser in the future.

Elastic APM Real User Monitoring (RUM) provides exactly that functionality today. The RUM JS agent monitors the real user experience within your client-side application. You will be able to measure metrics such as "Time to First Byte", domInteractive, and domComplete, which helps you discover performance issues within your client-side application as well as issues that relate to the latency of your server-side application. Our RUM JS agent is framework-agnostic, which means that it can be used with any JavaScript-based frontend application.

## Summary

Hopefully this blog helped you understand the landscape of distributed tracing a bit better and clarified some of the confusion about where we are with OpenTracing today. Let’s call it a wrap with a brief summary:

1. Distributed tracing provides invaluable performance insight for microservices.
2. OpenTracing is the industry’s first step towards standardization for distributed tracing. We still have a long way to go for full compatibility.
3. Elastic APM is OpenTracing compliant.
4. The Elastic OpenTracing bridge allows instrumentation reuse.
5. The Elastic Stack is great scalable long-term storage for other tracers like Zipkin and Jaeger, even without full runtime compatibility today.
6. Elastic provides rich analytics for tracing data, Elastic or not. Shipping Zipkin or Jaeger tracing data into Elasticsearch is a simple configuration.
7.
Elastic APM Real User Monitoring (RUM) monitors the real user experience within your client-side application.
8. All in all, Elastic is a massively scalable, feature-rich, and unified analytics platform for all three pillars of observability: logging, metrics, and tracing.

As always, reach out on the Elastic APM forum if you want to open up a discussion or have any questions. Happy tracing!
Learn about the process of identifying and eliminating noise in data capture. In the previous articles, I discussed mechanical considerations, schematic design, PCB layout, and Firmware. One of the greatest anxieties of designing a custom PCB is the testing phase. I designed a precision inclinometer subsystem and received my brand new board. In a “Hard-way Hughes” first, the board worked on the first spin. Now it’s time to create a suitable test environment and determine the measurement resolution that the board can achieve. If you'd like to catch up on the precision inclinometer series overall, please check out the links below: The Linear LTC2380IDE-24, a 24-bit ADC, was always known to be overkill for this project (the LTC2380IDE-16 is a pin-compatible replacement). My initial hope, and the “best-guess” order-of-magnitude estimate that I discussed with AAC’s Technical Director Robert Keim, was that I could statistically tease about 17-18 bits of resolution out of the device. We agreed that the best way to build this one-time board was to go with a higher-precision ADC, even if we knew we’d be throwing bits of data away. This board will never go into production, and the small increase in price of a single part is insignificant in the context of a prototype board. The 11-bit ADC built into the SCA103T is only really useful for calibration at the factory. A 16-bit ADC would have been fine for this project, though it’s possible that under ideal conditions the sensor could provide more than 16 bits of resolution. ### Understanding the Specifications: Why I Chose a 24-bit Device The SCA103T-D04 datasheet indicates that the noise density is 0.0004°/√Hz. If we limit the bandwidth to 8 Hz, a quick multiplication of the noise density by the square root of the bandwidth indicates that the analog output resolution is in the range of 0.001°. 
$$0.0004\frac°{\sqrt{Hz}} \sqrt{8\;Hz}=0.0011°$$

The ADC produces a 24-bit conversion value, and that value covers 30° of measurement range, meaning the LSB (least significant bit) is as small as 1.8×10⁻⁶°. So, how many bits will be useful before the ADC encounters the noise floor of the inclinometer?

$$\frac{30°}{2^n}=0.0004\frac°{\sqrt{Hz}}\sqrt{8\; Hz}$$

$$2^n=26516.5$$

$$n\cdot Log(2)=Log(26516.5)$$

$$n\approx 14.7\;\text{bits}$$

##### Calculating how many bits we can use before hitting the noise floor

Based on these equations, a 16-bit inclinometer would be suitable for this job. However, a precision inclinometer that is used to calibrate a scientific instrument might experience very slow changes in inclination, which means that we can reduce the bandwidth (to 1 Hz, for example) and thereby decrease the noise floor.

$$0.0004\frac°{\sqrt{Hz}}\sqrt{1\;Hz}=0.0004°$$

$$\frac{30°}{2^n}=0.0004\frac{°}{\sqrt{Hz}}\sqrt{1\; Hz}$$

$$2^n=75000$$

$$n\cdot Log(2)=Log(75000)$$

$$n\approx 16.2\;\text{bits}$$

##### The same calculations with a reduced bandwidth. If we change the bandwidth, our bit needs also change. In this example, a 16-bit ADC would be insufficient.

By repeating and averaging multiple measurements, I should be able to squeeze a bit more performance out of my design. The initial prediction of a maximum resolution of 17-18 bits came from a back-of-the-envelope calculation at the beginning of the design process based on the uncertainty of measurement. The uncertainty of measurement indicates the range that a measurement lies within. For example, if we know that an object has a length between 11 cm and 13 cm, we can report the measurement as 12 cm ± 1 cm, where 1 cm is the uncertainty of measurement. The uncertainty of measurement is generally taken to be the ratio of the standard deviation to the square root of the number of measurements.
$$u=\frac{\sigma}{\sqrt{n}}$$

I won’t know the actual standard deviation until I have data, but to get an approximation, I will assume it to be in the range of 0.001°, and I’ll arbitrarily choose 1024 measurements to get an estimate of my uncertainty of measurement.

$$u=\frac{0.001°}{\sqrt{1024}}\approx 3.1\times 10^{-5}°$$

Rounding that up to a conservative 0.0001°:

$$\frac{Log\left(\frac{30°}{0.0001°}\right)}{Log(2)}\approx 18.2\;\text{bits}$$

Even if I don’t need the additional bits of resolution for the conversion, I will need the additional bits of resolution for the digital averaging filter. And 16-bit ADCs don’t come with 24-bit digital averaging filters.

To be clear, 0.0001° is a practically useless and wholly unreasonable standard in most non-scientific and non-military applications. In other words, it’s a perfect goal for “Hard-way” Hughes. To put this type of angular displacement into perspective, 0.0001° corresponds to a change in elevation of ~2 mm over a 1 km distance. A device attempting this level of precision cannot sit on a desk, because the force of a hand on the mouse will deflect the desk enough to be detected. The device cannot sit on the ground in the office next to the desk, because the weight of a person shifting in a chair will disturb the floor, as will the movement of elevators in the building, the movement of coworkers as they move around the office, and so on. The PCB cannot simply be placed randomly in the room, because the force of HVAC air currents on the connecting wires will provide enough force to measurably disturb the PCB. In other words, this is an unattainable goal, and I’m going to spend countless hours and untold dollars of company resources trying to achieve it. I don’t want it ever to be said that I was unable to find a hundred-dollar solution to a twenty-dollar problem.

The things I can do to mitigate external effects on the device include adding mass and rigidity. So I created a ~275 g aluminum PCB holder (~1 cm thick to provide rigidity and weight) with an adjustable differential screw mechanism.
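The bandwidth and averaging arithmetic above is easy to double-check with a few lines of Python (a sketch using the datasheet noise density and the estimates from the text, not project firmware):

```python
import math

noise_density = 0.0004   # deg / sqrt(Hz), SCA103T-D04 datasheet figure
span = 30.0              # deg of range covered by the ADC conversion

def usable_bits(bandwidth_hz):
    """Bits of resolution available before hitting the sensor's noise floor."""
    noise_floor = noise_density * math.sqrt(bandwidth_hz)  # deg RMS
    return math.log2(span / noise_floor)

print(round(usable_bits(8), 1))   # 14.7 bits at 8 Hz bandwidth
print(round(usable_bits(1), 1))   # 16.2 bits at 1 Hz bandwidth

# Averaging N samples shrinks the uncertainty by sqrt(N); with the
# conservative 0.0001 deg uncertainty estimate used above:
u = 0.0001
print(round(math.log2(span / u), 1))   # 18.2 bits
```

Rerunning `usable_bits` with other bandwidths makes the trade-off explicit: every factor-of-four reduction in bandwidth buys one extra bit before the noise floor.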
The PCB is held rigidly in place on the board holder, and the board holder makes contact atop three polished round points (one differential screw and two 3 mm acorn nuts) that touch a ground 4″×4″×1″ steel plate that supports it. The steel plate sits atop a 1 mm-thick piece of neoprene. That whole assembly is placed atop a 9″×12″×2″ granite surface plate that sits atop another 1 mm-thick piece of neoprene. Finally, that entire assembly sits atop an oak floor on a 4″ concrete slab. If I had a larger piece of granite, or a more appropriate vibration-damping material, I would have used it.

##### The aluminum PCB holder was placed atop a steel block that sat on a granite surface plate.

USB cables running to the device (one for the JTAG programmer, one for the data port) were taped to the granite surface plate approximately two inches from the device. The device was powered for at least 12 hours prior to data collection, allowing any thermal variations to stabilize. A FLIR TG130 thermal camera showed no thermal hot spots on the PCB. The temperature of the voltage-reference and inclinometer ICs could not be measured due to reflective package enclosures, but the sides of the ICs appeared to be at the same temperature as the rest of the PCB.

### Noise Characterization

The part of the circuit that I’m most concerned about is the voltage reference, so that’s the measurement I focused on first. This is the part of the circuit that I would redesign if I remade the PCB. The sensor provides ratiometric output and the ADC performs ratiometric measurement; since the same voltage-reference signal is sent to both ICs, any noise in the reference itself should theoretically be self-negating. At first glance, the signal appears fine.

##### Tektronix MDO3104 view of the VREF net measured at a test point on the PCB near the potentiometer.

The next image shows 0.1 V/division at 200 ns/division. Most of the variation seems to be bound within 40 mV.
And with the way the test point is attached to the board, it is impossible to know whether that is indeed fluctuating voltage or RF energy absorbed by the probe. So far, so good, right? Perhaps. But until we enable “normal” triggering mode and set the scope to single-shot capture, we won’t know if there are any anomalies. So I set the scope to single-shot mode and let it run for several seconds. And eventually, I picked up a hiccup.

That hiccup repeats periodically (~10 second interval) while the device is running. Unfortunately, I don’t have enough test points on my PCB to find any event that accompanies the noise. Noise like this doesn’t magically appear; there is always a source. Additional test points might allow me to see a correlation between the noise and the SPI or UART lines, or the wall-wart power supply, or perhaps the fluorescent lights near my desk. I once investigated a noise source that turned out to coincide with the building’s AC compressor motor. If this board were ever meant for production, I would certainly need to find out what's going on.

If the noise occurred every time an SPI transaction or UART transmission occurred (it doesn’t), then the noise wouldn’t really matter, since the sensor measurement does not occur during those times. If the noise occurs during the SAR data-acquisition phase, it’s going to be a big problem. The next article in the series will present data captured by the device in greater detail. The resolution of the data was such that this noise was clearly not affecting the sensor in any meaningful way.

### Conclusion

The test points that I did include on the board showed that the noise on the PCB was quite low. While I did detect a spurious event on VREF, I could not locate where it originated. If I were to redesign the board, I would provide more test points, including coaxial taps to watch sensitive aspects of the board (such as the inclinometer outputs and the VREF inputs).

• kubeek 2019-01-17

You write “...or RF energy absorbed by the probe”.
What actually is the probe? How and where is it connected? This is much-needed information that should be carefully described along with the measurement. I was looking for ways to measure noise in a circuit - mainly reference and power supply - but there is little information about how best to do it. I got nice results measuring PSU noise with just a piece of coax straight into the 1 Meg input of the scope, without much in the way of common-mode issues. Your blips are in the tens-of-MHz range, so I guess a series 50 ohm resistor and a 50 ohm termination in the scope would be the better way, but for higher voltages the scope can't handle that, and you would need a rather large decoupling capacitor. Could you make an article about that?

• Mark Hughes 2019-01-18 @kubeek, You’re right that I didn’t include much information about the probe.  To be perfectly honest, once I saw that I was getting good data off of the board, I went back looking for noise as an afterthought.  And that could have bitten me in the butt—badly.  If I were going to redesign this board today, I’d likely toss some better test points on the board. Maybe something like this (https://www.digikey.com/product-detail/en/murata-electronics-north-america/MM8030-2610RK0/490-11804-2-ND/4421021). As it stands, the test points I chose were too small, and there were too few of them.  The first one I attached my scope probe to actually broke right off the board and I had to resolder it. I know you had some questions you left in the forum—I’ll go visit you there as it’s easier to have a back and forth.  Thanks for the comment and the article suggestion.  We definitely can do something in test-and-measurement! Mark
Bug 103395 - [9/10/11/12/13 Regression] ICE on qemu in arm create_fix_barrier

Status: NEW
Alias: None
Product: gcc
Classification: Unclassified
Component: target
Version: 12.0
Priority: P2
Severity: normal
Target Milestone: 9.5
Assignee: Not yet assigned to anyone
Keywords: ice-on-valid-code
Reported: 2021-11-23 19:32 UTC by Jakub Jelinek
Modified: 2022-04-28 15:02 UTC
CC: 11 users (berrange, fche, fw, ktkachov, marxin, pbrobinson, ramana, rearnsha, rjones, scox, wcohen)
Target: armv7hl-linux-gnueabi
Last reconfirmed: 2021-11-24

Attachments: qemuice.i.xz (124.19 KB, application/octet-stream), 2021-11-24 13:18 UTC, Jakub Jelinek

Jakub Jelinek 2021-11-23 19:32:46 UTC

Since GCC 5 (r208990 still works fine, r215478 already ICEs, up to current trunk) the following testcase ICEs on armv7hl-linux-gnueabi with -O2:

void foo (void)
{
  __asm__ ("\n\n\n\n\n\n\n\n\n\n"
           "\n\n\n\n\n\n\n\n\n\n"
           "\n\n\n\n\n\n\n\n\n\n"
           "\n\n\n\n\n\n\n\n\n\n"
           "\n\n\n\n\n\n\n\n\n\n"
           "\n\n\n\n\n\n\n\n\n\n"
           "\n" : : "nor" (0), "nor" (0));
}

Removing just one newline makes the ICE go away. get_attr_length for such inline asm is 248 bytes (an estimate) and the arm minipool code is apparently upset about it.

Richard W.M. Jones 2021-11-23 22:07:29 UTC

Nice reproducer! Here's the original thread where the bug was reported when compiling qemu on Fedora Rawhide for armv7: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/GD3ABSWD6HHTNEKV2EJY4PXABQ245UCZ/

Andrew Pinski 2021-11-24 09:32:37 UTC

Confirmed.

Jakub Jelinek 2021-11-24 09:40:03 UTC

Note, in qemu it isn't just those newlines but simply a long inline asm. It can very well be something that is empty or very short in the .text section, e.g. it can .pushsection; .section whatever; put lots of stuff in there; .popsection.
But as inline asm is a black box to gcc, we have only a very rough estimate of the inline asm text length, which just counts how many newlines (and, typically, ';' characters) there are and multiplies that by some insn size.

Richard Earnshaw 2021-11-24 12:16:48 UTC

I suspect the best we're likely to be able to do is to downgrade the ICE into an error if there's a long inline asm in the code, much like the way we handle invalid constraints in inline asms.

Jakub Jelinek 2021-11-24 12:25:50 UTC

That will still mean qemu will not compile. I admit I know next to nothing about the arm minipool stuff, but can't whatever needs to be inserted be inserted either before or after the inline asm? If it is needed for the asm inputs, then likely before it.

Richard Earnshaw 2021-11-24 13:05:08 UTC

It depends on the reference. Some minipool references can only be forwards due to the nature of the instruction making the reference. I'll need to take a look, and I might also need a real testcase, not just the reduced one below.

Jakub Jelinek 2021-11-24 13:18:25 UTC

Created attachment 51867 [details] qemuice.i.xz — Unreduced preprocessed source.

Richard Earnshaw 2021-11-24 16:35:26 UTC

OK, so the real problem in the test case is that the constraint ("nor") is meaningless on Arm (because there is no instruction in the architecture that can accept both a constant and a memref), and this confuses the minipool code, because it exploits this restriction to detect insns that need to be reworked by the md_reorg pass. When processing an asm we allow only a forward literal pool reference, and it must be less than 250 bytes beyond the /start/ of the pattern (because we don't know where in the asm it gets used). So we have to deduct from that range 4 bytes for every asm statement: add too many lines to the asm and we reach the point where it is impossible to place the literal pool even directly after the asm. So I think really this is an ICE on invalid, because the constraint is not meaningful on Arm.
Jakub Jelinek 2021-11-24 16:52:48 UTC

CCing Frank as this is systemtap's sys/sdt.h, which has:

# ifndef STAP_SDT_ARG_CONSTRAINT
# if defined __powerpc__
# define STAP_SDT_ARG_CONSTRAINT nZr
# else
# define STAP_SDT_ARG_CONSTRAINT nor
# endif
# endif

All of n, o and r are generic constraints and const0_rtx is valid for the "n" constraint, so why is the backend trying to put it into memory at all? What systemtap is trying to do is not to use those operands in any instruction, but to note for the debugger how to find the value of the asm input operand (read some register, some memory, or the immediate constant).

Richard Earnshaw 2021-11-24 18:17:21 UTC

It's been this way now for over 20 years. A single instruction cannot take a constant and a memory for the same operand, so this has been used in the backend to trigger the minipool generation: a constant in an operand that takes a memory triggers a minipool spill to be pushed. If we changed this now, we risk breaking existing inline asms that exploit this feature (good or bad) and expect a constant to be pushed into a minipool entry.

Jakub Jelinek 2021-11-24 18:57:22 UTC

Inline asm that only works with memory but whose constraints say it accepts both an immediate constant and memory is IMNSHO just broken; it is just fine if the compiler makes a different choice. If "nor" with a constant input on arm has actually meant just "or", then sure, systemtap could be changed, and after a couple of years it would propagate to all stap copies used in the wild, but it is a quite severe misoptimization of one of the most common cases. The systemtap macros don't really know what argument will be passed to them (a constant, something that lives in memory, or something that lives in a register) and ideally want as few actual instructions as possible before those macros to arrange the arguments so that the debugger can inspect them.

Jakub Jelinek 2021-11-24 18:58:54 UTC

Note, sys/sdt.h has been using the "nor" constraints for 11 years already.
Florian Weimer 2021-11-25 10:03:07 UTC Maybe it's possible to provide specific, architecture-independent constraints for Systemtap-like use cases? Jakub Jelinek 2021-11-25 10:16:45 UTC If it can be proven that all gcc versions until now treat "nor" constraint as ignoring the n in there and pushing all constants into constant pool, I think it could be changed into "or" for arm32. But it would be IMNSHO unnecessary pessimization (but it could e.g. be done for GCC < 12 or whenever this would be fixed). Another option is to tweak whatever generates those large inline asms. In the qemu case it is created with /usr/bin/python3 ../scripts/tracetool.py --backend=dtrace --group=util --format=h /builddir/build/BUILD/qemu-6.1.0/util/trace-events trace/trace-util.h whatever that is (but that means I haven't actually seen what it generates). Note, apparently several other packages are affected, so not sure what changed recently in systemtap-sdt-devel or whatever else that adds up to this. In the preprocessed source I got I see several blocks of ".ascii \"\\x20\"" "\n" "_SDT_SIGN %n[_SDT_S4]" "\n" "_SDT_SIZE %n[_SDT_S4]" "\n" "_SDT_TYPE %n[_SDT_S4]" "\n" ".ascii \"%[_SDT_A4]\"" "\n" It already uses assembler macros, perhaps adding a macro to do all 3 at once or perhaps with something extra could bring the number of newlines down... Jakub Jelinek 2021-11-25 11:11:57 UTC Apparently the change on the systemtap side was: https://sourceware.org/git/?p=systemtap.git;a=commit;f=includes/sys/sdt.h;h=eaa15b047688175a94e3ae796529785a3a0af208 which indeed adds a lot of newlines to the inline asms. 
But when already using gas macros, I wonder if all of

#define _SDT_ASM_TEMPLATE_1 _SDT_ARGFMT(1)
#define _SDT_ASM_TEMPLATE_2 _SDT_ASM_TEMPLATE_1 _SDT_ASM_BLANK _SDT_ARGFMT(2)
#define _SDT_ASM_TEMPLATE_3 _SDT_ASM_TEMPLATE_2 _SDT_ASM_BLANK _SDT_ARGFMT(3)
#define _SDT_ASM_TEMPLATE_4 _SDT_ASM_TEMPLATE_3 _SDT_ASM_BLANK _SDT_ARGFMT(4)
#define _SDT_ASM_TEMPLATE_5 _SDT_ASM_TEMPLATE_4 _SDT_ASM_BLANK _SDT_ARGFMT(5)
#define _SDT_ASM_TEMPLATE_6 _SDT_ASM_TEMPLATE_5 _SDT_ASM_BLANK _SDT_ARGFMT(6)
#define _SDT_ASM_TEMPLATE_7 _SDT_ASM_TEMPLATE_6 _SDT_ASM_BLANK _SDT_ARGFMT(7)
#define _SDT_ASM_TEMPLATE_8 _SDT_ASM_TEMPLATE_7 _SDT_ASM_BLANK _SDT_ARGFMT(8)
#define _SDT_ASM_TEMPLATE_9 _SDT_ASM_TEMPLATE_8 _SDT_ASM_BLANK _SDT_ARGFMT(9)
#define _SDT_ASM_TEMPLATE_10 _SDT_ASM_TEMPLATE_9 _SDT_ASM_BLANK _SDT_ARGFMT(10)
#define _SDT_ASM_TEMPLATE_11 _SDT_ASM_TEMPLATE_10 _SDT_ASM_BLANK _SDT_ARGFMT(11)
#define _SDT_ASM_TEMPLATE_12 _SDT_ASM_TEMPLATE_11 _SDT_ASM_BLANK _SDT_ARGFMT(12)

couldn't be rewritten to use another .macro, _SDT_ASM_TEMPLATE, that just takes the arguments and emits it all. See the

.macro sum from=0, to=5
.long \from
.if \to-\from
sum "(\from+1)",\to
.endif
.endm

macro from the gas documentation for inspiration.

Jakub Jelinek 2021-11-25 11:25:27 UTC

Note, the %n[_SDT_S##no] in there need to stay (dunno about the _SDT_ASM_SUBSTR(_SDT_ARGTMPL(_SDT_A##no)) stuff), but that could be achieved by giving the macro from, to, arg, args:vararg arguments and using it like:

_SDT_ASM_TEMPLATE 1, 4, %n[_SDT_S1], %n[_SDT_S2], %n[_SDT_S3], %n[_SDT_S4]

Daniel Berrange 2021-11-26 09:38:42 UTC

FYI, I've opened a bug against systemtap in Fedora to track this problem on that side too: https://bugzilla.redhat.com/show_bug.cgi?id=2026858

Jakub Jelinek 2021-11-26 13:46:29 UTC

Note that we document how the size of an asm is estimated: https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html and unfortunately asm inline ("..." ...) makes the size estimate 0 only for inlining purposes and not for others too.
So, for systemtap it is still desirable to use as few newline/';' characters in the pattern as possible. If one macro is used to handle 1-12 operands through recursion, one way to save a few newlines would be to avoid all those other 5 macros; each one is used just once per parameter, and avoiding their .macro, .endm and .purgem lines would get rid of 15 lines. Similarly, not doing .pushsection/.popsection 3 times for each argument but just once would help, etc.
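For readers wondering where the 248-byte figure in the opening comment comes from: GCC's documented heuristic prices an inline asm at one maximum-size instruction per statement, counting newline and (on most targets) ';' separators. The sketch below illustrates that rule; the function name and the flat 4-byte-per-statement charge are my own illustration of the documented behavior, not GCC's actual code.

```python
def estimate_asm_length(template: str, insn_size: int = 4) -> int:
    """Rough upper bound on an inline asm's size, in the spirit of GCC's
    documented heuristic: count statement separators (newlines and
    semicolons) and charge one maximum-size instruction per statement.
    insn_size=4 matches a 32-bit ARM instruction."""
    statements = template.count("\n") + template.count(";") + 1
    return statements * insn_size

# The reduced testcase above contains 61 newlines, which this heuristic
# prices at 62 * 4 = 248 bytes: the get_attr_length value quoted in the
# report, just under the 250-byte forward range of an ARM literal-pool
# reference.
```

Removing one newline drops the estimate to 244 bytes, which is why the ICE disappears.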
## LSU Doctoral Dissertations

#### Title

Excluding a Weakly 4-connected Minor

#### Identifier

etd-04042016-220803

#### Degree

Doctor of Philosophy (PhD)

Mathematics

Dissertation

#### Abstract

A 3-connected graph $G$ is called weakly 4-connected if $\min (|E(G_1)|, |E(G_2)|) \leq 4$ holds for all 3-separations $(G_1,G_2)$ of $G$. A 3-connected graph $G$ is called quasi 4-connected if $\min (|V(G_1)|, |V(G_2)|) \leq 4$ holds for all 3-separations $(G_1,G_2)$ of $G$. We first discuss how to decompose a 3-connected graph into quasi 4-connected components. We then establish a chain theorem that allows us to easily generate the set of all quasi 4-connected graphs. Finally, we apply these results to characterize all graphs that do not contain the Pyramid as a minor, where the Pyramid is the weakly 4-connected graph obtained by performing a $\Delta Y$ transformation on the octahedron. This result can be used to give an interesting characterization of quasi 4-connected, outer-projective graphs.

2016

#### Document Availability at the Time of Submission

Release the entire work immediately for access worldwide.

Ding, Guoli
## Lecture 32. The Malliavin derivative

The next lectures will be devoted to the study of the problem of the existence of a density for solutions of stochastic differential equations. The basic tool for studying such questions is the so-called Malliavin calculus. Let us consider a filtered probability space $(\Omega, (\mathcal{F}_t)_{0 \le t \le 1}, \mathbb{P})$ on which is defined a Brownian motion $(B_t)_{0 \le t \le 1}$. We assume that $(\mathcal{F}_t)_{0 \le t \le 1}$ is the usual completion of the natural filtration of $(B_t)_{0 \le t \le 1}$. An $\mathcal{F}_{1}$ measurable real valued random variable $F$ is said to be cylindric if it can be written

$F=f \left( \int_0^{1} h^1(s) dB_s,...,\int_0^{1} h^m(s) dB_s \right)$

where $h^i \in \mathbf{L}^2 ([0,1], \mathbb{R}^n)$ and $f:\mathbb{R}^m \rightarrow \mathbb{R}$ is a $C^{\infty}$ function such that $f$ and all its partial derivatives have polynomial growth. The set of cylindric random variables is denoted by $\mathcal{S}$. It is easy to see that $\mathcal{S}$ is dense in $L^p$ for every $p \ge 1$.

The Malliavin derivative of $F \in \mathcal{S}$ is the $\mathbb{R}^n$ valued stochastic process $(\mathbf{D}_t F )_{0 \leq t \leq 1}$ given by

$\mathbf{D}_t F=\sum_{i=1}^{m} h^i (t) \frac{\partial f}{\partial x_i} \left( \int_0^{1} h^1(s) dB_s,...,\int_0^{1} h^m(s)dB_s \right).$

We can see $\mathbf{D}$ as an (unbounded) operator from the space $\mathcal{S} \subset L^p$ into the Banach space

$\mathcal{L}^p=\left\{ (X_t)_{0 \le t \le 1},\mathbb{E}\left( \left( \int_0^1 \| X_t \|^2 dt\right)^{p/2} \right) < +\infty \right\}.$

Our first task will be to prove that $\mathbf{D}$ is closable. This will be a consequence of the following fundamental integration by parts formula, which is interesting in itself.

Proposition. (Integration by parts formula) Let $F \in \mathcal{S}$ and let $(h(s))_{0 \le s \le 1}$ be a progressively measurable process such that $\mathbb{E}\left( \int_0^1 \| h(s)\|^2 ds \right) < +\infty$.
We have

$\mathbb{E} \left( \int_0^1( \mathbf{D}_s F)h(s) ds \right)=\mathbb{E}\left( F \int_0^{1} h(s)dB_s\right).$

Proof. Let $F=f \left( \int_0^{1} h^1(s) dB_s,...,\int_0^{1} h^m(s) dB_s \right) \in \mathcal{S}.$ Let us now fix $\varepsilon \ge 0$ and denote

$F_\varepsilon =f \left( \int_0^{1} h^1(s) d\left( B_s +\varepsilon \int_0^{s} h(u)du \right),...,\int_0^{1} h^m(s) d\left( B_s +\varepsilon \int_0^{s} h(u)du \right) \right).$

From Girsanov’s theorem, we have

$\mathbb{E} ( F_\varepsilon)=\mathbb{E} \left(\exp \left(\varepsilon \int_0^{1} h(u)dB_u -\frac{\varepsilon^2}{2}\int_0^{1} \|h(u)\|^2du \right) F \right).$

Now, on one hand we compute

$\lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \left( \mathbb{E} ( F_\varepsilon)-\mathbb{E} (F) \right) =\mathbb{E} \left( \int_0^1\sum_{i=1}^{m} \frac{\partial f}{\partial x_i} \left( \int_0^{1} h^1(s)dB_s,...,\int_0^{1} h^m(s) dB_s \right) h^i(s)h(s) ds \right)$

$=\mathbb{E} \left( \int_0^1( \mathbf{D}_s F)h(s) ds \right)$,

and on the other hand, we obtain

$\lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \left( \mathbb{E} ( F_\varepsilon)-\mathbb{E} (F) \right)=\mathbb{E}\left( F \int_0^{1} h(s)dB_s\right).$

Comparing the two expressions for the limit gives the result. $\square$

Proposition. Let $p \ge 1$. As a densely defined operator from $L^p$ into $\mathcal{L}^p$, $\mathbf{D}$ is closable.

Proof. Let $(F_n)_{n \in \mathbb{N}}$ be a sequence in $\mathcal{S}$ that converges in $L^p(\mathcal{B}_{1})$ to $0$ and such that $\mathbf{D}F_n$ converges in $\mathcal{L}^p$ to $X$. We want to prove that $X=0$. Let $(h(s))_{0 \le s \le 1}$ be a function in $L^2([0,1])$. Let us first assume $p > 1$. We have

$\lim_{n \to \infty} \mathbb{E} \left( \int_0^1( \mathbf{D}_sF _n)h(s) ds \right)=\mathbb{E} \left( \int_0^1 X_s h(s) ds \right),$

and

$\lim_{n \to \infty}\mathbb{E}\left( F_n \int_0^{1} h(s)dB_s\right)=0.$

As a consequence, we obtain

$\mathbb{E} \left( \int_0^1 X_s h(s) ds \right)=0.$

Since $h$ is arbitrary, we conclude $X=0$. Let us now assume $p=1$.
Let $\eta$ be a smooth and compactly supported function and let $\Theta=\eta \left( \int_0^{1} h(s)dB_s \right) \in \mathcal{S}$. We have

$\mathbf{D} (F_n \Theta)=F_n (\mathbf{D} \Theta )+( \mathbf{D}F_n) \Theta.$

As a consequence, we get

$\mathbb{E}\left(\int_0^1 \mathbf{D}_s (F_n \Theta) h(s)ds \right) =\mathbb{E}\left(F_n \int_0^1 (\mathbf{D}_s \Theta ) h(s)ds \right) +\mathbb{E}\left( \Theta \int_0^1 ( \mathbf{D}_sF_n) h(s)ds \right),$

and thus

$\lim_{n \to \infty} \mathbb{E} \left( \int_0^1 \mathbf{D}_s(F _n \Theta) h(s) ds \right)=\mathbb{E} \left( \Theta \int_0^1 X_s h(s) ds \right).$

On the other hand, we have

$\mathbb{E} \left( \int_0^1 \mathbf{D}_s(F _n \Theta) h(s) ds \right)=\mathbb{E} \left( F _n \Theta \int_0^1 h(s) dB_s\right) \to_{n \to \infty} 0.$

We conclude

$\mathbb{E} \left( \Theta \int_0^1 X_s h(s) ds \right)=0.$

Since $\eta$ and $h$ are arbitrary, this forces $X=0$. $\square$

The closure of $\mathbf{D}$ in $L^p$ shall still be denoted by $\mathbf{D}$. Its domain $\mathbb{D}^{1,p}$ is the closure of $\mathcal{S}$ with respect to the norm

$\left\| F\right\| _{1,p}=\left( \mathbb{E}\left( |F|^{p}\right) + \mathbb{E}\left( \left\| \mathbf{D} F\right\|_{\mathbf{L}^2 ([0,1], \mathbb{R}^n)}^{p}\right) \right)^{\frac{1}{p}}.$

For $p > 1$, we can consider the adjoint operator $\delta$ of $\mathbf{D}$. This is a densely defined operator $\mathcal{L}^q \to L^q(\mathcal{B}_{1})$ with $1/p+1/q=1$ which is characterized by the duality formula

$\mathbb{E} (F \delta u)=\mathbb{E} \left(\int_0^1 (\mathbf{D}_s F) u_s ds \right) , \quad F \in \mathbb{D}^{1,p}.$

From the integration by parts formula and the Burkholder-Davis-Gundy inequalities, it is clear that the domain of $\delta$ in $\mathcal{L}^q$ contains the set of progressively measurable processes $(u_t)_{0 \le t \le 1}$ such that $\mathbb{E} \left(\left( \int_0^1 \| u_s \|^2ds\right)^{q/2} \right) < + \infty$ and in that case,

$\delta u =\int_0^1 u_s dB_s.$

The operator $\delta$ can thus be thought of as an extension of the Itô integral.
It is often called the Skorokhod integral.

Exercise. (Clark-Ocone formula) Show that for $F \in \mathbb{D}^{1,2}$,

$F=\mathbb{E}(F)+\int_0^1 \mathbb{E} \left( \mathbf{D}_tF \mid \mathcal{F}_t \right)dB_t.$

This entry was posted in Stochastic Calculus lectures.
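A quick worked example may help connect the definitions. Take $F=B_1^2$, i.e. $f(x)=x^2$ and $h^1 \equiv 1$ in the definition of a cylindric random variable; the Malliavin derivative and the Clark-Ocone formula can then be checked by hand against Itô's formula:

```latex
% Example: F = B_1^2, i.e. f(x) = x^2 and h^1 \equiv 1.  Then
\mathbf{D}_t F = 2 B_1, \qquad 0 \le t \le 1.
% The Clark-Ocone formula gives, since \mathbb{E}(2B_1 \mid \mathcal{F}_t) = 2B_t
% and \mathbb{E}(B_1^2) = 1,
F = \mathbb{E}(F) + \int_0^1 \mathbb{E}\left( \mathbf{D}_t F \mid \mathcal{F}_t \right) dB_t
  = 1 + 2 \int_0^1 B_t \, dB_t,
% which agrees with Ito's formula applied to t \mapsto B_t^2.
```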
## Fibonacci prime > 3 expressed as the sum of two squares of distinct Fibonacci numbers

Can you prove that any Fibonacci prime > 3 can be expressed as the sum of the squares of two distinct Fibonacci numbers?

The first few examples:

$F_5 \; = \; 5 \; = \; 1^2 \; + \; 2^2 \; = \; F^2_2 \; + \; F^2_3$

$F_7 \; = \; 13 \; = \; 2^2 \; + \; 3^2 \; = \; F^2_3 \; + \; F^2_4$

$F_{11} \; = \; 89 \; = \; 5^2 \; + \; 8^2 \; = \; F^2_5 \; + \; F^2_6$

$F_{13} \; = \; 233 \; = \; 8^2 \; + \; 13^2 \; = \; F^2_6 \; + \; F^2_7$

$F_{17} \; = \; 1597 \; = \; 21^2 \; + \; 34^2 \; = \; F^2_8 \; + \; F^2_9$

$F_{23} \; = \; 28657 \; = \; 89^2 \; + \; 144^2 \; = \; F^2_{11} \; + \; F^2_{12}$
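The pattern in the examples is explained by the classical identity $F_{2n+1}=F_n^2+F_{n+1}^2$, which covers every Fibonacci prime $> 3$ because a prime $F_k$ with $k > 4$ forces $k$ itself to be prime, hence odd. A small numerical check of the identity (a verification, not a proof; function names are my own):

```python
def fib(n: int) -> int:
    """F_0 = 0, F_1 = 1, matching the indexing used above."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def is_prime(m: int) -> bool:
    # Simple trial division; adequate for the small values checked here.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Check F_{2n+1} = F_n^2 + F_{n+1}^2 for all odd indices up to 29.
for k in range(1, 30, 2):
    n = (k - 1) // 2
    assert fib(k) == fib(n) ** 2 + fib(n + 1) ** 2
```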
Dissemination of IT for the Promotion of Materials Science (DoITPoMS)

# Questions

### Quick questions

You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!

1. Which of the following materials has the highest Coefficient of Thermal Expansion (CTE)?

a alumina
b aluminium
c copper
d mild steel

2. In the experiment to determine Young's Modulus, hanging a weight on the cantilever beam leads to a measurement of vertical displacement, d. This value is related to the applied load and the Young's Modulus by the following equation:

$\delta = \frac{1}{3}\frac{{P{L^3}}}{{EI}}$

What does I represent in the equation?

a the applied load
b Young's Modulus
c the second moment of area
d the distance between the clamp and the position of the weight on the bimetallic strip

3. Which of the following materials has the highest value of Young's Modulus?

a alumina
b aluminium
c copper
d mild steel

4. What do αA and αB represent with respect to materials A and B in the following equation, which calculates the misfit strain?

Δε = (αA - αB)ΔT

a the thickness of the two materials
b the thermal diffusivities of the materials
c temperature changes
d thermal expansivities

### Deeper questions

The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.

1. Explain in terms of atomic structure and bonding why polymers tend to have lower stiffnesses and higher expansivities than metals and ceramics.

2. From the data given in the properties table suggest a suitable pair of metals for the construction of a bimetallic strip. What other factors do you think might in practice be relevant for making such a choice?
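The two formulas quoted in the quick questions can be explored numerically. A minimal sketch (the numbers in the comments are illustrative choices of mine, not values from the TLP):

```python
def cantilever_deflection(P, L, E, I):
    # delta = P L^3 / (3 E I): tip deflection of an end-loaded cantilever,
    # with P the applied load, L the beam length, E the Young's modulus
    # and I the second moment of area.
    return P * L**3 / (3 * E * I)

def misfit_strain(alpha_A, alpha_B, dT):
    # delta_epsilon = (alpha_A - alpha_B) * dT: the strain mismatch that
    # drives the bending of a bimetallic strip on a temperature change dT.
    return (alpha_A - alpha_B) * dT

# Illustrative: aluminium (~23e-6 /K) against mild steel (~12e-6 /K)
# over a 100 K temperature rise gives a misfit strain of about 1.1e-3.
strain = misfit_strain(23e-6, 12e-6, 100)
```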
# What are the problems in trying to interpret the Klein-Gordon equation as a single-particle equation?

What is the problem if we try to interpret the KG equation as a single-particle equation? Also, I wish to know whether the Born interpretation of the wavefunction is applicable in relativistic quantum mechanics.

If you try to construct the probability current for the KG equation, its zero component $j^0$ is not positive definite, even though you want to interpret it as a probability density.
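To make the answer concrete, here is the standard conserved current of the Klein-Gordon equation, sketched with $\hbar = c = 1$ and signature $(+,-,-,-)$; normalization conventions vary between textbooks:

```latex
% Conserved current of the Klein-Gordon equation:
j^{\mu} = \frac{i}{2m}\left( \phi^{*}\,\partial^{\mu}\phi
        - \phi\,\partial^{\mu}\phi^{*} \right),
\qquad \partial_{\mu} j^{\mu} = 0 .
% For a plane wave \phi = N e^{-iEt + i\mathbf{p}\cdot\mathbf{x}}
% with E = \pm\sqrt{\mathbf{p}^{2} + m^{2}}:
j^{0} = \frac{E}{m}\,|N|^{2},
% which is negative on the negative-energy branch, so j^0 cannot serve
% as a probability density for a single particle.
```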
Yes, I know. I should’ve posted something on the 27th. I didn’t forget about my loyal readers, but I had to write a paper for school. And since I won’t have time to write something original (Let’s learn together: AI #2) until January, I thought I could share the paper I worked on with you.
# Trig Identity Homework due 5-7-14 (MRS22-3) or 5-8-14 (MRS22-1) Prove each identity. 1. $\frac{\cot\theta}{\csc\theta}=\cos\theta$ 2. $\cos^{2}x\tan^{2}x=\sin^{2}x$ 3. $\frac{1-\sin^{2}\beta}{\cos\beta}=\cos\beta$ 4. $\frac{\tan^{2}\alpha+1}{\sec\alpha}=\sec\alpha$ 5. $1-\csc{x}\sin^{3}x=\cos^{2}x$ 6. $\cos^{2}\theta(\tan^{2}\theta+1)=1$ 7. $\sin^{2}\beta(1+\cot^{2}\beta)=1$ 8. $\cot\theta+\tan\theta=\csc\theta\sec\theta$ 9. $\sin^{2}\alpha+\tan^{2}\alpha+\cos^{2}\alpha=\sec^2\alpha$ 10. $1+\csc^{2}x\cos^{2}x=\csc^{2}x$
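These identities should be proved algebraically (typically from the Pythagorean and quotient identities), but a numerical spot check can catch a mis-copied identity before you try to prove it. A sketch checking identities 1, 8 and 9 at a few angles (not a proof):

```python
import math

def cot(x): return math.cos(x) / math.sin(x)
def csc(x): return 1.0 / math.sin(x)
def sec(x): return 1.0 / math.cos(x)

# Spot checks at angles where every function involved is defined.
for t in (0.3, 1.0, 2.5):
    assert math.isclose(cot(t) / csc(t), math.cos(t))                  # identity 1
    assert math.isclose(cot(t) + math.tan(t), csc(t) * sec(t))         # identity 8
    assert math.isclose(math.sin(t)**2 + math.tan(t)**2 + math.cos(t)**2,
                        sec(t)**2)                                     # identity 9
```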
# I'm having trouble with some notation in algebraic topology.

The statement of my problem: If $f:(X,A)\to(Y,B)$ is a homotopy equivalence of pairs, then so is the induced map $\hat{f}:(X/A, *)\to (Y/B, *)$.

I'm confused about what the star stands for. I think with that I can infer how the induced map works. So if I have the quotient map $q:X\to X/A$, $q(A) =\{A\}$, then my induced map can be written $\hat{f}:(X/A, \{A\})\to (Y/B, *)$? Would it make sense to replace the second star by $\{B\}$?

With some confidence: The first star stands for the image of $A$ in the quotient $X/A$, which is a point. –  Dylan Moreland Feb 2 '12 at 20:06

What you say is correct. I didn't mention $B$ out of laziness, really. –  Dylan Moreland Feb 2 '12 at 20:19
# Question #4a240

$V_2 \cong 40\ \mathrm{L}$

$\frac{P_1V_1}{T_1} = \frac{P_2V_2}{T_2}$

All temperatures are reported on the absolute scale; for pressure we use the relationship $760\ \mathrm{mm\ Hg} \equiv 1\ \mathrm{atm}$.

$V_2 = \frac{T_2}{P_2}\times\frac{P_1V_1}{T_1} = \frac{256.8\ \mathrm{K}}{\frac{385}{760}\ \mathrm{atm}}\times\frac{1}{303.5\ \mathrm{K}}\times\frac{731}{760}\ \mathrm{atm}\times 26.5\ \mathrm{L} \cong 40\ \mathrm{L}$
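Note that the pressure unit cancels in the ratio $P_1/P_2$, so the mm Hg to atm conversion is not strictly required. A quick sketch of the computation (the function name is mine); it evaluates to about 42.6 L, which the answer above quotes as $\cong 40$ L only to one significant figure:

```python
def combined_gas_law_v2(P1, V1, T1, P2, T2):
    # P1 V1 / T1 = P2 V2 / T2  =>  V2 = V1 * (P1 / P2) * (T2 / T1)
    # Pressures may be in any single consistent unit (mm Hg here),
    # since only their ratio enters; temperatures must be absolute (K).
    return V1 * (P1 / P2) * (T2 / T1)

V2 = combined_gas_law_v2(P1=731, V1=26.5, T1=303.5, P2=385, T2=256.8)
# V2 is about 42.6 L
```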
# Math Help - Finding Sample size? (Normal Distribution)

1. ## Finding Sample size? (Normal Distribution)

I'm not really sure what to name this, and I'm pretty sure it belongs here; I'm sorry if it doesn't.

Question: The number of loaves of white bread demanded daily at a bakery is normally distributed with mean 6800 loaves and variance 84000. The company decides to produce a sufficient number of loaves so that it will fully supply demand on 95% of the days.

(a) How many loaves of bread should the company produce?

I know I need to find n = CI * standard dev. / E .. I can figure out CI and SD, but I can't find E..

(b) Based on (a), on what percentage of days will the company be left with more than 500 loaves of unsold bread?

I'm not sure how to do this one! Any amount of help would be amazing!!

2. For part (a): you want to find $y_1$ such that $P(Y < y_1) = 0.95$, so by standardising, you want $\frac{y_1-\mu}{\sigma}=z_{.95}$

So $y_1=1.6449\cdot\sigma+\mu$

$y_1=1.6449\cdot 289.83+6800=7276.737$

So they must make 7277 loaves.

(b) Since they bake 7277 loaves, you want the probability that they sell $Y$ less than $7277-500=6777$, so $P(Y<6777)$...

3. Originally Posted by superjen [question quoted in full above]

(a) Let Pr(Z > z*) = 0.05 and Pr(N > n*) = 0.05. Then $z^* = \frac{n^* - 6800}{\sqrt{84000}}$. Your job is to get z* and then solve for n*.
(b) Calculate Pr(N < n* - 500), the probability that demand falls at least 500 loaves short of production, and multiply the answer by 100.
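The two replies above can be checked with the standard library's `NormalDist` (a sketch; variable names are my own):

```python
from math import sqrt
from statistics import NormalDist

# Daily demand: mean 6800 loaves, variance 84000.
demand = NormalDist(mu=6800, sigma=sqrt(84000))

# (a) Smallest production level covering demand on 95% of days.
loaves = demand.inv_cdf(0.95)      # about 7276.7, so bake 7277 loaves

# (b) P(more than 500 loaves unsold) = P(demand < 7277 - 500).
p_unsold = demand.cdf(7277 - 500)  # about 0.468, i.e. roughly 47% of days
```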
# Big-list of classical facts

When we type an answer, we sometimes need classical facts which would be long to expand and are well known. For example, a bounded measurable function can be approximated uniformly by simple functions, or the fact that convergence in $L^p$ implies almost-everywhere convergence of a subsequence. I suggest writing in the answers a collection of these facts by field (like "measure-theory", "real-analysis", etc.), including the fact and a link on the site to a question which deals with it. Since these fields are far from being disjoint, the same fact could belong to several answers. The point is that if someone is looking for a proof of a fact, he/she will include a link in his/her answer after having found it on this page.

• I think it should be CW, but I don't see how to do it. – Davide Giraudo Apr 14 '13 at 10:41
• I don't think this should be on the meta site; and I also don't really think that we need this. One can cite "classical facts" and if anyone wants to know more about these facts, one can ask a separate question and receive an answer (or ask in the comments, in some cases). – Asaf Karagila Apr 14 '13 at 10:43
• Most are also on wikipedia, which can be linked to of course! – Thomas Rot Apr 14 '13 at 12:17
• While I think this is an interesting idea, I think it'd be quite hard to make it work in reality – Lost1 Apr 14 '13 at 12:20
• @Will WTF?!?${}{}$ – Michael Greinecker Apr 14 '13 at 21:33
# Assume that Sivart Corporation has 2019 taxable income of $1,750,000 for purposes of computing the §179 expense

###### Question:

Assume that Sivart Corporation has 2019 taxable income of $1,750,000 for purposes of computing the §179 expense and acquired several assets during the year. Assume the delivery truck does not qualify for bonus depreciation. (Use MACRS Table 1, Table 2, Table 3, Table 4 and Table 5.)

| Asset | Placed in Service | Basis |
| --- | --- | --- |
| Machinery | June 12 | $1,440,000 |
| Computer equipment | February 10 | $70,000 |
| Delivery truck (used) | August 21 | $93,000 |
| Furniture | April 2 | $310,000 |
| Total | | $1,913,000 |

a. What is the maximum amount of §179 expense Sivart may deduct for 2019?

b. What is the maximum total depreciation (§179, bonus, MACRS) that Sivart may deduct in 2019 on the assets it placed in service in 2019?
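For part (a) of the Sivart question, a rough sketch of the §179 mechanics. The 2019 figures used here ($1,020,000 expensing limit, $2,550,000 phase-out threshold) are my recollection of that year's inflation-adjusted amounts; verify them against the IRS tables before relying on this:

```python
def section_179_limit(basis_total, taxable_income,
                      dollar_limit=1_020_000, phaseout_start=2_550_000):
    # Assumed 2019 figures: $1,020,000 expensing limit, reduced
    # dollar-for-dollar once total qualifying property placed in service
    # exceeds $2,550,000; the deduction is further capped by taxable income.
    limit = max(0, dollar_limit - max(0, basis_total - phaseout_start))
    return min(limit, basis_total, taxable_income)

# $1,913,000 placed in service is below the assumed $2,550,000 threshold,
# so there is no phase-out and the cap stays at the full dollar limit.
max_179 = section_179_limit(basis_total=1_913_000, taxable_income=1_750_000)
```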
# Unbiased estimator by happyg1 Tags: estimator, unbiased P: 308 Hi, I'm working on the following problem and I need some clarification: Suppose that a sample is drawn from a $$N(\mu,\sigma^2)$$ distribution. Recall that $$\frac{(n-1)S^2}{\sigma^2}$$ has a $$\chi^2$$ distribution. Use Theorem 3.3.1 to determine an unbiased estimator of $$\sigma$$. Theorem 3.3.1 states: Let X have a $$\chi^2(r)$$ distribution. If $$k>-\frac{r}{2}$$ then $$E(X^k)$$ exists and is given by: $$E(X^k)=\frac{2^k\Gamma(\frac{r}{2}+k)}{\Gamma(\frac{r}{2})}$$ My understanding is this: an unbiased estimator has expectation equal to exactly what it's estimating, and $$E\left(\frac{(n-1)S^2}{\sigma^2}\right)=n-1$$, the mean of a $$\chi^2(n-1)$$ distribution. Am I going the right way here? CC P: 308 Ok, so after hours of staring at this thing, here's what I did: I let k=1/2 and r=n-1, so the thing looks like this: $$E[S]=\sigma\sqrt{\frac{2}{n-1}}\frac{\Gamma(\frac{n}{2})}{\Gamma(\frac{n-1}{2})}$$ so I use the property of the gamma function that says: $$\Gamma(\alpha)=(\alpha-1)!$$ which leads to: $$E[S]=\sigma\sqrt{\frac{2}{n-1}}(n-1)$$ So now do I just flip over everything on the RHS, leaving $$\sigma$$ by itself, and that's the unbiased estimator, i.e. $$\sqrt{2(n-1)}E[S]=\sigma$$? Any input will be appreciated. CC P: 308 OK, anyone who looked and ran away, here at last is the solution: (finally) $$E[S]=\sigma\sqrt{\frac{2}{n-1}}\frac{\Gamma(\frac{n}{2})}{\Gamma(\frac{n-1}{2})}$$ is indeed correct; however, my attempt to reduce the RHS with the properties of the Gamma function was wrong. The unbiased estimator is obtained by isolating $$\sigma$$ on the RHS and then using properties of the expectation to get: $$E\left(\sqrt{\frac{n-1}{2}}\frac{\Gamma(\frac{n-1}{2})}{\Gamma(\frac{n}{2})}S\right)=\sigma$$ So at last it has been resolved. WWWWEEEEEEEEEEEeeeeeeee CC
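The correction factor in the final post can be evaluated directly. A small sketch (not from the thread) that uses `math.lgamma` to avoid overflow of `math.gamma` at large $n$, and compares against the standard control-chart constant $c_4$ (its reciprocal):

```python
import math

def unbias_factor(n):
    """b(n) such that E[b(n) * S] = sigma for a normal sample of size n."""
    # lgamma avoids overflow: gamma() fails for arguments beyond ~171
    return math.sqrt((n - 1) / 2) * math.exp(math.lgamma((n - 1) / 2) - math.lgamma(n / 2))

# 1/b(n) is the c4 constant from control-chart tables, e.g. c4(5) ~ 0.9400
print(1 / unbias_factor(5))
print(unbias_factor(1000))   # -> 1 as n grows: the bias of S vanishes
```

For small samples the factor matters: at $n=5$ it inflates $S$ by about 6%.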
# Submultiplicity of Operator Norm ## Theorem Let $H, K$ be Hilbert spaces, and let $A: H \to K$ be a bounded linear transformation. Let $\norm A$ denote the norm of $A$ defined by: $\norm A = \inf \set {c > 0: \forall h \in H: \norm {A h}_K \le c \norm h_H}$ Then: $\forall h \in H: \norm {Ah}_K \le \norm A \norm{h}_H$ ## Proof Since $A$ is bounded, $\norm A = \inf \set {c > 0: \forall h \in H: \norm {A h}_K \le c \norm h_H}$ exists and $\norm A < \infty$. Let $x \in H \setminus \set{0_H}$. Let $\lambda \in \set {c > 0: \forall h \in H: \norm {A h}_K \le c \norm h_H}$. Then: $\ds \norm {A x}_K$ $\le$ $\ds \lambda \norm x_H$ $\ds \leadstoandfrom \ \$ $\ds \dfrac {\norm{A x}_K} {\norm x_H}$ $\le$ $\ds \lambda$ As $\lambda$ was arbitrary, then: $\forall \lambda \in \set {c > 0: \forall h \in H: \norm {A h}_K \le c \norm h_H}: \dfrac {\norm{A x}_K} {\norm x_H} \le \lambda$ By the definition of the infimum: $\dfrac {\norm{A x}_K} {\norm x_H} \le \norm A$ Hence: $\norm{A x}_K \le \norm A \norm x_H$ Since $x$ was arbitrary: $\forall h \in H \setminus \set{0_H}: \norm{A h}_K \le \norm A \norm h_H$ Lastly, we have: $\ds \norm{A 0_H}_K$ $=$ $\ds \norm{0_K}_K$ $\ds$ $=$ $\ds 0$ $\ds$ $=$ $\ds \norm A \cdot 0$ $\ds$ $=$ $\ds \norm A \norm{0_H}$ It follows that: $\forall h \in H: \norm {Ah}_K \le \norm A \norm{h}_H$ $\blacksquare$
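The inequality is easy to sanity-check numerically; a small sketch (not part of the proof) using the spectral norm, which for finite-dimensional operators coincides with the operator norm induced by the Euclidean norm:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))      # a bounded operator R^3 -> R^4
op_norm = np.linalg.norm(A, 2)   # operator (spectral) norm: largest singular value

# ||A h|| <= ||A|| * ||h|| for every h (tolerance for float round-off)
for _ in range(100):
    h = rng.normal(size=3)
    assert np.linalg.norm(A @ h) <= op_norm * np.linalg.norm(h) + 1e-12
```

Equality is approached when $h$ is aligned with the top right singular vector of $A$.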
# Chapter 5 - Review Exercises: 49 $x=\left\{ -3,-2,2 \right\}$ #### Work Step by Step $\bf{\text{Solution Outline:}}$ To solve the given equation, $-x^3-3x^2+4x+12=0 ,$ express it first in factored form. Then equate each factor to zero using the Zero Product Property. Finally, solve each resulting equation. $\bf{\text{Solution Details:}}$ Grouping the first and second terms and the third and fourth terms, the given expression is equivalent to \begin{array}{l}\require{cancel} (-x^3-3x^2)+(4x+12)=0 .\end{array} Factoring the $GCF$ in each group results in \begin{array}{l}\require{cancel} -x^2(x+3)+4(x+3)=0 .\end{array} Factoring the $GCF= (x+3)$ of the entire expression above results in \begin{array}{l}\require{cancel} (x+3)(-x^2+4)=0 \\\\ (x+3)(-1)(x^2-4)=0 \\\\ \dfrac{(x+3)(-1)(x^2-4)}{-1}=\dfrac{0}{-1} \\\\ (x+3)(x^2-4)=0 .\end{array} The expressions $x^2$ and $4$ are both perfect squares (the square root is exact) and are separated by a minus sign. Hence, $x^2-4$ is a difference of $2$ squares. Using the factoring of the difference of $2$ squares, which is given by $a^2-b^2=(a+b)(a-b),$ the expression above is equivalent to \begin{array}{l}\require{cancel} (x+3)(x+2)(x-2)=0 .\end{array} Equating each factor to zero (Zero Product Property), the solutions to the equation above are \begin{array}{l}\require{cancel} x+3=0 \\\\\text{OR}\\\\ x+2=0 \\\\\text{OR}\\\\ x-2=0 .\end{array} Solving each equation results in \begin{array}{l}\require{cancel} x+3=0 \\\\ x=-3 \\\\\text{OR}\\\\ x+2=0 \\\\ x=-2 \\\\\text{OR}\\\\ x-2=0 \\\\ x=2 .\end{array} Hence, $x=\left\{ -3,-2,2 \right\} .$
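A quick substitution check (not part of the printed solution) confirms the three roots obtained from the factored form $(x+3)(x+2)(x-2)=0$:

```python
# Substitute each candidate root into -x^3 - 3x^2 + 4x + 12
p = lambda x: -x**3 - 3*x**2 + 4*x + 12
roots = [-3, -2, 2]
print([p(r) for r in roots])  # [0, 0, 0]
```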
# duration for a signal after filtering [duplicate] I have a signal $$s(t)$$ of duration $$T$$. This signal is not band-limited. To solve this problem, I filter it with a pulse-shaping (raised cosine) filter. My question: does filtering change the duration of the signal?
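One way to see the effect: linear filtering convolves the signal with the filter's impulse response, so the output lasts the signal duration plus the impulse-response duration (an ideal raised cosine response is infinitely long, though it decays quickly). A hedged sketch with a finite stand-in filter, not a true raised cosine:

```python
import numpy as np

s = np.ones(100)               # a 100-sample rectangular "signal"
h = np.hamming(31)             # stand-in 31-tap pulse-shaping filter
y = np.convolve(s, h)          # full linear convolution

print(len(s), len(h), len(y))  # 100 31 130: duration grows by len(h) - 1 samples
```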
# Gravity with Zero Distance ## Main Question or Discussion Point If the separation between two objects (say, me and my chair) is zero, shouldn't the gravitational force between those two objects be infinite because in the equation for force you divide by r? And I understand that the centers of my atoms are not literally touching the chair, so maybe the separation is minuscule... but as they get so close that they are only TINY distances apart, the gravitational force should at least APPROACH infinity? But this seems to defy logic... They are tiny distances apart but the mass of those atoms is also tiny and the gravitational constant is also tiny, so you get a tiny number. Ok well take the mass of my foot and the mass of the ground with which it is in contact. Shouldn't this be a magnificently large force? HallsofIvy Homework Helper The gravitational force between two extended (not point) masses is $$\frac{GmM}{r^2}$$. where r is the distance between their centers of mass. For your body and the earth, that is a very large distance. If you want to calculate forces between, say, atoms that are very close together then you have to deal with those very, very small masses at very small distances. So would this be an example of the equations of classical mechanics breaking down when you get to quantum levels? Ok well take the mass of my foot and the mass of the ground with which it is in contact. Shouldn't this be a magnificently large force? No, only a few atoms are close to the ground, the rest aren't. It's simple: Newton's law of gravitation breaks down when you look at very small distances. Greetings! you're entering the realm of quantum effects, just as Newton's laws of motion break down when you travel at velocities close to the speed of light. Say hello to the world of relativity! :)
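The first reply can be made quantitative. A rough order-of-magnitude sketch (values assumed: two hydrogen-atom masses roughly one atomic spacing apart):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m = 1.67e-27    # kg, roughly one hydrogen atom
r = 1e-10       # m, a typical atomic spacing

F = G * m * m / r**2
print(F)        # ~2e-44 N: utterly negligible despite the "tiny" distance
```

Even though $r$ is small, $m^2$ in the numerator is smaller by far more orders of magnitude.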
# DAVIDSTUTZ 02ndOCTOBER2019 Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization. AISec@CCS 2017. Muñoz-González et al. propose a multi-class data poisoning attack against deep neural networks based on back-gradient optimization. They consider the common poisoning formulation stated as follows: $\max_{D_c} \min_w \mathcal{L}(D_c \cup D_{tr}, w)$ where $D_c$ denotes a set of poisoned training samples and $D_{tr}$ the corresponding clean dataset. Here, the loss $\mathcal{L}$ used for training is minimized in the inner optimization problem. As a result, as long as learning itself does not have a closed-form solution, e.g., for deep neural networks, the problem is computationally infeasible. To resolve this problem, the authors propose using back-gradient optimization. Then, the gradient with respect to the outer optimization problem can be computed while only running a limited number of iterations of the inner problem; see the paper for details. In experiments on spam/malware detection and digit classification, the approach is shown to increase the test error of the trained model with only a few training examples poisoned. Also find this summary on ShortScience.org. What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below:
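The bilevel structure can be illustrated on a toy problem where the inner training problem has a closed form. The sketch below is my own construction, not the paper's back-gradient algorithm: it uses 1-D ridge regression and a finite-difference surrogate for the outer gradient to move a single poisoned point:

```python
import numpy as np

# clean data for the deterministic toy problem y = 2x
x = np.linspace(-1.0, 1.0, 21)
lam = 0.1

def train(xc, yc):
    """Closed-form ridge fit on the clean data plus one poisoned point (xc, yc)."""
    num = np.dot(x, 2.0 * x) + xc * yc
    den = np.dot(x, x) + xc * xc + lam
    return num / den

def val_loss(w):
    return float(np.mean((w * x - 2.0 * x) ** 2))

xc, yc = 0.5, 1.0                 # initial poisoned point
eps, step = 1e-4, 10.0
before = val_loss(train(xc, yc))
for _ in range(100):
    # finite-difference approximation of the attacker's "hypergradient" w.r.t. xc
    g = (val_loss(train(xc + eps, yc)) - val_loss(train(xc - eps, yc))) / (2 * eps)
    xc += step * g                # gradient ASCENT: maximize validation loss
after = val_loss(train(xc, yc))
print(before, after)              # the poisoned point drifts to increase test error
```

The paper replaces this finite-difference step with gradients obtained by reversing a truncated inner optimization, which is what makes the idea scale to neural networks.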
# example of paracompact topological spaces ## 0.1 Paracompact space examples Locally compact Hausdorff spaces that are also $\sigma$-compact can be shown to be paracompact, and, therefore, also normal. Other particular cases of paracompact spaces are second countable, locally compact Hausdorff spaces (which are automatically $\sigma$-compact), and their related locally compact groupoids. (PlanetMath entry "ExampleOfParacompactTopologicalSpaces" by bci1, 2013-03-22; MSC 55-00, 54-00.)
## Tuesday, October 20, 2015 ### Deriving the Product Formula: The Easy Way Recall from this post that: $\sum_{n=1}^{\infty} \frac{1}{x^2+n^2}=\frac{\pi}{2x} \coth(\pi x)-\frac{1}{2x^2}$ We then substitute $x=i z$: $\sum_{n=1}^{\infty} \frac{1}{n^2-z^2}=-\frac{\pi}{2z} \cot(\pi z)+\frac{1}{2z^2}$ We then go down the following line of calculation: $\sum_{n=1}^{\infty} \frac{2z}{n^2-z^2}=\frac{1}{z}-\pi\cot(\pi z)$ $\int\sum_{n=1}^{\infty} \frac{2z}{n^2-z^2}dz=C+\int \frac{1}{z}-\pi\cot(\pi z) dz$ $\sum_{n=1}^{\infty} -\ln \left (1-\frac{z^2}{n^2} \right )=C+\ln (z) - \ln (\sin (\pi z) )$ $\sin(\pi z)=C' z\prod_{n=1}^{\infty}\left ( 1-\frac{z^2}{n^2} \right )$ We can find $C'$ by looking at the behavior near zero, and so find that: $\sin(\pi z)=\pi z\prod_{n=1}^{\infty}\left ( 1-\frac{z^2}{n^2} \right )$ Therefore: $\sin(z)=z\prod_{n=1}^{\infty}\left ( 1-\frac{z^2}{\pi^2 n^2} \right )$ ### Deriving the Product Formula: The Overkill Way, by Weierstrass' Factorization Theorem Suppose a function can be expressed as $f(x)=A\frac{\prod_{n=1}^{M}\left ( x-z_n \right )}{\prod_{n=1}^{N}\left ( x-p_n \right )}$ Where $M \leq N$ and $N$ can be arbitrarily large, even tending to infinity. Assuming there are no poles of degree >1 (all poles are simple), we can rewrite this as $f(x)=K+\sum_{n=1}^{\infty} \frac{b_n}{x-p_n}$ Where some of the $b_n$ may be zero. We can also write this as $f(x)=f(0)+\sum_{n=1}^{\infty} b_n \cdot \left ( \frac{1}{x-p_n}+\frac{1}{p_n} \right )$ Suppose $f(0) \neq 0$, and that $f$ is an integral function (i.e. an entire function). In that case, the logarithmic derivative $f'(x)/f(x)$ has poles of degree 1. Moreover, $\lim_{x \rightarrow z_n} (x-z_n)\frac{f'(x)}{f(x)}=d_n$ Where $d_n$ is the degree of the zero at $z_n$. 
Thus: $\frac{f'(x)}{f(x)}=\frac{f'(0)}{f(0)}+\sum_{n=1}^{\infty} d_n \cdot \left ( \frac{1}{x-z_n}+\frac{1}{z_n} \right )$ Integrating: $\ln(f(x))=\ln(f(0))+x \frac{f'(0)}{f(0)}+\sum_{n=1}^{\infty} d_n \cdot \left ( \ln \left (1-\frac{x}{z_n} \right ) +\frac{x}{z_n} \right )$ $f(x)=f(0) e^{x \frac{f'(0)}{f(0)}} \prod_{n=1}^{\infty} \left (1-\frac{x}{z_n} \right )^{d_n} e^{x\frac{d_n}{z_n}}$ This is our main result, called the Weierstrass factorization theorem. In particular, for the function $f(x)=\sin(x)/x$ $\frac{\sin(x)}{x}=\prod_{n=-\infty, n \neq 0}^{\infty} \left (1-\frac{x}{n \pi} \right ) e^{x\frac{1}{n \pi}}=\prod_{n=1}^{\infty} \left (1-\frac{x^2}{n^2 \pi^2} \right )$ Thus $\sin(x)=x\prod_{n=1}^{\infty} \left (1-\frac{x^2}{\pi^2 n^2 } \right )$ ### Corollary 1: Wallis Product Let us plug in $x=\pi/2$: $\sin(\pi/2)=1=\frac{\pi}{2}\prod_{n=1}^{\infty} \left (1-\frac{1}{4 n^2 } \right )$ $\pi=2\prod_{n=1}^{\infty} \left (\frac{4 n^2}{4 n^2-1 } \right )=2\frac{2 \cdot 2}{1 \cdot 3} \cdot \frac{4 \cdot 4}{3 \cdot 5} \cdot \frac{6 \cdot 6}{5 \cdot 7} \cdot \frac{8 \cdot 8}{7 \cdot 9} \cdots$ More generally: $\pi=\frac{N}{M} \sin(\pi M/N) \prod_{n=1}^{\infty} \left (\frac{N^2 n^2}{N^2 n^2 -M^2} \right )$ This is useful when $\sin(\pi M/N)$ is easily computable, such as when $\sin(\pi M/N)$ is algebraic (e.g. $M=1$, $N=2^m$ ). 
For example: $\pi=2 \sqrt{2} \prod_{n=1}^{\infty} \left (\frac{4^2 n^2}{4^2 n^2 -1^2} \right )$ $\pi=\frac{2}{3} \sqrt{2} \prod_{n=1}^{\infty} \left (\frac{4^2 n^2}{4^2 n^2 -3^2} \right )$ $\pi=\frac{3}{2} \sqrt{3} \prod_{n=1}^{\infty} \left (\frac{3^2 n^2}{3^2 n^2 -1^2} \right )$ $\pi=\frac{3}{4} \sqrt{3} \prod_{n=1}^{\infty} \left (\frac{3^2 n^2}{3^2 n^2 -2^2} \right )$ $\pi=3 \prod_{n=1}^{\infty} \left (\frac{6^2 n^2}{6^2 n^2 -1^2} \right )$ $\pi=\frac{3}{5} \prod_{n=1}^{\infty} \left (\frac{6^2 n^2}{6^2 n^2 -5^2} \right )$ $\pi=3\sqrt{2}(-1+\sqrt{3}) \prod_{n=1}^{\infty} \left (\frac{12^2 n^2}{12^2 n^2 -1^2} \right )$ ### Corollary 2: Product Formula for Cosine Let us evaluate the sine formula at $x+\pi/2$: $\sin(x+\pi/2)=\cos(x)=\left (x+\frac{\pi}{2} \right )\prod_{n=-\infty, n \neq 0}^{\infty} \left (1-\frac{x+\pi/2}{\pi n } \right )$ $\cos(x)=\frac{\sin(x+\pi/2)}{\sin(\pi/2)}=\left (1+\frac{x}{\pi/2} \right )\prod_{n=-\infty, n \neq 0}^{\infty} \frac{\left (1-\frac{x+\pi/2}{\pi n } \right )}{\left (1-\frac{\pi/2}{\pi n } \right )}$ $\cos(x)=\left (1+\frac{x}{\pi/2} \right )\prod_{n=-\infty, n \neq 0}^{\infty} \left (1-\frac{x}{\pi (n-1/2) } \right )=\prod_{n=-\infty}^{\infty} \left (1-\frac{x}{\pi (n-1/2) } \right )$ $\cos(x)=\prod_{n=1}^{\infty} \left (1-\frac{x^2}{\pi^2 (n-1/2)^2 } \right )$ Alternatively, we can derive this directly from the Weierstrass factorization theorem. 
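Truncated versions of these products give slowly converging numerical approximations of $\pi$, and the cosine product can be checked the same way. A quick sketch:

```python
import math

N = 20_000

# pi = 3 * prod 36 n^2 / (36 n^2 - 1)   (the M=1, N=6 variant above)
p = 3.0
for n in range(1, N + 1):
    p *= 36 * n * n / (36 * n * n - 1)

# cos(x) = prod (1 - x^2 / (pi^2 (n - 1/2)^2)) evaluated at x = 1
x, c = 1.0, 1.0
for n in range(1, N + 1):
    c *= 1 - x * x / (math.pi**2 * (n - 0.5) ** 2)

print(p, math.pi)       # agree to a few parts in 1e5
print(c, math.cos(1.0))
```

The truncation error of the $\pi$ product is roughly $\pi/(36N)$, so many terms are needed per digit.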
Additionally, by using imaginary arguments, we can derive the formulae: $\sinh(x)=x\prod_{n=1}^{\infty} \left (1+\frac{x^2}{\pi^2 n^2 } \right )$ $\cosh(x)=\prod_{n=1}^{\infty} \left (1+\frac{x^2}{\pi^2 (n-1/2)^2 } \right )$ ### Corollary 3: Sine is Periodic Let us evaluate the sine formula at $x+\pi$: $\sin(x+\pi)=\left (x+\pi \right )\prod_{n=-\infty, n \neq 0}^{\infty} \left (1-\frac{x+\pi}{\pi n } \right )$ $\sin(x+\pi)=\cdots \left (1+\frac{x+\pi}{3\pi} \right ) \left (1+\frac{x+\pi}{2\pi} \right )\left (1+\frac{x+\pi}{\pi} \right )\left (x+\pi \right ) \left (1-\frac{x+\pi}{\pi} \right )\left (1-\frac{x+\pi}{2\pi} \right ) \left (1-\frac{x+\pi}{3\pi} \right ) \cdots$ $\sin(x+\pi)=\cdots \left (\frac{4}{3}+\frac{x}{3\pi} \right ) \left (\frac{3}{2}+\frac{x}{2\pi} \right )\left (2+\frac{x}{\pi} \right ) \pi \left (1+\frac{x}{\pi}\right ) \left (\frac{-x}{\pi} \right )\left (\frac{1}{2}-\frac{x}{2\pi} \right ) \left (\frac{2}{3}-\frac{x}{3\pi} \right ) \cdots$ $\sin(x+\pi)=\cdots \frac{4}{3}\left (1+\frac{x}{4\pi} \right ) \frac{3}{2}\left (1+\frac{x}{3\pi} \right )2\left (1+\frac{x}{2\pi} \right ) \pi \left (1+\frac{x}{\pi}\right ) \left (\frac{-x}{\pi} \right ) \frac{1}{2}\left (1-\frac{x}{\pi} \right ) \frac{2}{3}\left (1-\frac{x}{2\pi} \right ) \cdots$ $\sin(x+\pi)=-2x\left ( \prod_{k=2}^{\infty} \frac{k^2-1}{k^2} \right ) \left ( \prod_{n=1}^{\infty} \left (1-\frac{x^2}{n^2 \pi^2} \right ) \right )=-\sin(x)$ As the first product easily telescopes. Thus $\sin(x+2\pi)=\sin((x+\pi)+\pi)=-\sin(x+\pi)=\sin(x)$. Therefore, sine is periodic with period $2\pi$. 
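These product representations are easy to sanity-check numerically; for instance, truncating the $\sinh$ product above at $N=20{,}000$ factors:

```python
import math

x, N = 2.0, 20_000
p = x
for n in range(1, N + 1):
    p *= 1 + x * x / (math.pi**2 * n * n)
print(p, math.sinh(x))   # agree to roughly 1e-4
```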
### Corollary 4: Some Zeta Values Let us begin expanding the product for sine in a power series $\sin(x)=x\prod_{n=1}^{\infty} \left (1-\frac{x^2}{\pi^2 n^2 } \right )=x-\frac{x^3}{\pi^2}\left (\frac{1}{1^2}+\frac{1}{2^2}+\cdots \right )+\frac{x^5}{\pi^4}\left (\frac{1}{1^2 \cdot2^2}+\frac{1}{1^2 \cdot3^2}+\cdots \frac{1}{2^2 \cdot3^2}+\frac{1}{2^2 \cdot4^2}+\cdots \right )+\cdots$ $\sin(x)=x-\frac{x^3}{\pi^2}\left (\sum_{k=1}^{\infty}\frac{1}{k^2} \right )+\frac{x^5}{\pi^4}\left (\sum_{m=1,n=1, m < n}^{\infty}\frac{1}{m^2n^2} \right )+\cdots$ $\sin(x)=x-\frac{x^3}{\pi^2}\left (\sum_{k=1}^{\infty}\frac{1}{k^2} \right )+\frac{x^5}{2\pi^4}\left (\left (\sum_{k=1}^{\infty}\frac{1}{k^2} \right )^2- \sum_{k=1}^{\infty}\frac{1}{k^4} \right )+\cdots$ By comparing this to the Taylor series for sine, we find: $\frac{1}{3!}=\frac{1}{\pi^2}\left (\sum_{k=1}^{\infty}\frac{1}{k^2} \right )$ $\frac{1}{5!}=\frac{1}{2\pi^4}\left (\left (\sum_{k=1}^{\infty}\frac{1}{k^2} \right )^2- \sum_{k=1}^{\infty}\frac{1}{k^4} \right )$ From which it follows that $\sum_{k=1}^{\infty}\frac{1}{k^2}=\frac{\pi^2}{6}$ $\sum_{k=1}^{\infty}\frac{1}{k^4}=\frac{\pi^4}{90}$ In fact, for the fourth term, we find, similarly, that $\frac{1}{7!}=\frac{1}{6\pi^6}\left ( \left (\sum_{k=1}^{\infty}\frac{1}{k^2} \right )^3-3\left (\sum_{k=1}^{\infty}\frac{1}{k^2} \right )\left (\sum_{k=1}^{\infty}\frac{1}{k^4} \right )+2\left (\sum_{k=1}^{\infty}\frac{1}{k^6} \right ) \right )$ From which it follows that $\sum_{k=1}^{\infty}\frac{1}{k^6}=\frac{\pi^6}{945}$
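Partial sums confirm all three values quickly; a short sketch:

```python
import math

N = 200_000
z2 = sum(1 / k**2 for k in range(1, N + 1))
z4 = sum(1 / k**4 for k in range(1, N + 1))
z6 = sum(1 / k**6 for k in range(1, N + 1))
print(z2, math.pi**2 / 6)    # tail error of z2 is ~1/N
print(z4, math.pi**4 / 90)
print(z6, math.pi**6 / 945)
```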
# Transport phenomena in micro- and nanoscales Transport phenomena at dimensions between 1 and 100 μm are different from those at larger scales. At these scales, phenomena that are negligible at larger scales become dominant, but the macroscopic transport theory is still valid. One example of these phenomena in multiphase systems is surface tension. In larger scale systems, the hydrostatic pressure or dynamic pressure effects may dominate over pressure drops caused by surface tension, but at smaller scales, the pressure drops caused by surface tension can dominate over hydrostatic and dynamic pressure effects. Transport phenomena at these scales are still regarded as macroscale, because the classical transport theory is still valid. As systems scale down even further to the nanoscale, 1-100 nm, or for ultrafast processes (e.g., materials processing using picosecond or femtosecond lasers), the fundamental theory used in larger scale systems breaks down because of fundamental differences in the physics. The purpose of research in micro-/nanoscale heat transfer is to exploit their differences from the macroscopic systems to create and improve materials, devices and systems. It is also the objective of this research to understand when system performance will begin to degrade because of the adverse effects of scaling. One large field in which microscale heat transfer is of great interest is energy and thermal systems. Microscale energy and thermal systems include thin film fuel cells, thin film electrochemical cells, photon-to-electric devices, micro heat pipes, bio-cell derived power, and microscale radioisotopes (Peterson, 2004). Other devices that are of a slightly larger scale but require components on the microscale include miniaturized heat engines as well as certain combustion-driven thermal systems. These energy systems can power anything from MEMS sensors and actuators to cell phones.
Thermal management of thermally based energy systems is extremely important. To emphasize its importance, consider a micro heat engine. The most general components of a micro heat engine are a compressor, a combustor, and a turbine. Generally, the compressor runs much cooler than the turbine, because the gases expand in the turbine to drive its shaft. However, the turbine and the compressor are linked by this shaft. The shorter the distance between the turbine and the compressor, the less thermal resistance through the shaft. Therefore, the temperature difference between the turbine and the compressor is much less, which decreases the system’s efficiency. Another form of energy conversion is thermoelectric energy conversion, in which thermal energy is directly converted to electricity. There are three thermoelectric effects: the Seebeck effect, the Peltier effect, and the Thomson effect. The Seebeck effect occurs when electrons flow from the hot side to the cold side of a material under a temperature gradient. The result is an electric potential field (voltage) balancing the electron diffusion. The Peltier effect occurs when heat is carried by electrons in an electrical current in a material held at a constant temperature. The Thomson effect is seen when current flows through a conductor under a temperature gradient. The suitability of a thermoelectric material for energy conversion is based on the figure of merit Z, $Z = \frac{{{\alpha ^2}}}{{{R_e}k}}\qquad \qquad(1)$ where the Seebeck coefficient, electrical resistivity and thermal conductivity are α, Re and k, respectively. Materials with a high figure of merit are difficult to find in bulk form; therefore, nanostructures provide additional parameter space (Chen, 2004). To manipulate the nanostructures of certain materials, the electron and phonon thermoelectric transport must first be understood.
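As an illustration of eq. (1), a sketch with rough, assumed property values in the vicinity of a good room-temperature thermoelectric (order-of-magnitude numbers only, not measured data):

```python
alpha = 200e-6   # Seebeck coefficient, V/K (assumed)
R_e = 1.0e-5     # electrical resistivity, ohm*m (assumed)
k = 1.5          # thermal conductivity, W/(m*K) (assumed)
T = 300.0        # K

Z = alpha**2 / (R_e * k)   # eq. (1), units 1/K
ZT = Z * T                 # dimensionless figure of merit
print(Z, ZT)               # ZT ~ 0.8, typical of good bulk materials
```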
Conduction heat transfer in a solid at a small scale is analogous to the kinetic theory of gases (see Section 1.3.2). However, instead of molecules transporting momentum, there are electrons and phonons transferring heat. The classical heat conduction theory is a macroscopic model based on the equilibrium assumption. From the microscopic perspective, the energy carriers in a substance include phonons, photons, and electrons. Depending on the nature of heating and the structure of the materials, the energy can be deposited into materials in different ways: it can be simultaneously deposited to all carriers by direct contact, or only to a selected carrier by radiation (Qiu and Tien, 1993). For short-pulsed laser heating of metal, the energy deposition involves three steps: (1) deposition of laser energy on electrons, (2) exchange of energy between electrons, and (3) propagation of energy through media. If the pulse width is shorter than the thermalization time, which is the time it takes for the electrons and lattice to reach equilibrium, the electron and lattice are not in thermal equilibrium, and a two-temperature model is often used. If the laser pulse is shorter than the relaxation time, which is the mean time required for electrons to change their states, the hyperbolic conduction model must be used. More insights about the microscale heat transfer can be found in Tzou (1996) and Majumdar (1998). In addition to the two-temperature and hyperbolic models, which are still continuum models, another approach is to understand heat and mass transfer at the molecular level using the molecular dynamics method (Maruyama, 2001). The conduction of heat is caused by electrons and by phonons. Therefore, the thermal conductivity, k, in solids can be broken into two components, the thermal conduction by electrons (ke) and by phonons (kph). $k = {k_{{e^ - }}} + {k_{ph}}\qquad \qquad(2)$ From kinetic theory, the thermal conduction of each component is given by (Flik et al. 
1992), ${k_{{e^ - }}} = \frac{1}{3}{c_{p,{e^ - }}}{\bar c_{{e^ - }}}{\lambda _{{e^ - }}}\qquad \qquad(3)$ ${k_{ph}} = \frac{1}{3}{c_{p,ph}}{\bar c_{ph}}{\lambda _{ph}}\qquad \qquad(4)$ The subscripts e- and ph refer to an electron and a phonon, respectively. The specific heats of the electron and the phonon are ${c_{p,{e^ - }}}$ and cp,ph, respectively. The average velocities of an electron and a phonon are ${\bar c_{{e^ - }}}$ and ${\bar c_{ph}}$. A phonon is a sound particle, and the speed at which it travels is the speed of sound in that material. Finally, the mean free paths of an electron and a phonon are ${\lambda _{{e^ - }}}$ and λph, respectively. The mean free path is the distance one of the conduction energy carriers (electrons or phonons) travels before it collides with an imperfection in a material. Defects, dislocations, impurities and boundaries within a solid structure all have an effect on the phonon transport in a solid. The effect of impurities in a solid material can be described in terms of the acoustic impedance (Z) of the lattice waves (Huxtable et al., 2004). The acoustic impedance is $Z = \rho {\bar c_{ph}}\qquad \qquad(5)$ The speed of sound (mean velocity of a phonon) is related to the elastic stiffness of a chemical bond (E) by ${\bar c_{ph}} = \sqrt {\frac{E}{\rho }}\qquad \qquad(6)$ The acoustic impedance is a function of both the stiffness of a bond and the density. When a phonon encounters a change in the acoustic impedance, it may scatter. Scattering of this nature can change the direction in which a phonon is traveling and also its mean free path or wavelength. The wavelength or mean free path of a material also has temperature dependence that can be approximated by ${\lambda _{ph}} \approx \frac{{h{{\bar c}_{ph}}}}{{3{k_B}T}}\qquad \qquad(7)$ where h is Planck’s constant. From this equation, it can be seen that the mean free path has an inverse dependence on temperature. 
Therefore, with a decrease in temperature, the wavelength will increase, which increases the probability that a particle will be affected by either an imperfection or a boundary. [Figure: Mean free path of electrons or phonons (a) far away from boundaries and (b) in the presence of boundaries.] Physical boundaries in a material have the effect of scattering the energy carriers. The effect of a boundary on the mean free path is illustrated in the figure. The scattering characteristics of boundaries either reflect or transmit energy carriers. The probability that a phonon will transmit from material A to material B, ${P_{A \to B}}$, at normal incidence is a function of the impedance of both materials. ${P_{A \to B}} = \frac{{4{Z_A}{Z_B}}}{{{{\left( {{Z_A} + {Z_B}} \right)}^2}}}\qquad \qquad(8)$ The effect of boundaries can reduce the effective mean free path in the vertical and horizontal directions for phonons traveling at all incident angles. Examining eqs. (3) and (4) indicates that reducing the mean free path will reduce the thermal conductivity. Physical boundaries in a system can be grain boundaries or the surfaces of an extremely thin solid film. The effective conductivity in a thin film was related to the isotropic bulk conductivity of a material by Flik and Tien (1990). The effective conductivities normal to the thin film keff,n and along the thin film layer keff,t are: $\frac{{{k_{eff,n}}}}{k} = 1 - \frac{\lambda }{{3\delta }}\qquad \qquad(9)$ $\frac{{{k_{eff,t}}}}{k} = 1 - \frac{{2\lambda }}{{3\pi \delta }}\qquad \qquad(10)$ where δ is the film thickness. The electrical component of thermal conductivity in a solid can also be approximated by the Wiedemann-Franz law, which is valid up to the metal-insulator transition (Castellani et al., 1987). This transition occurs when the electrical conductivity of a metal suddenly changes from high to low conductivity due to a decrease in temperature.
$k = {\sigma _{{e^ - }}}{L_0}T\qquad \qquad(11)$ where L0 is the Lorenz number and ${\sigma _{{e^ - }}}$ is the electrical conductivity. This equation is valid because, in a highly electrically conductive material, essentially all of the thermal transport is carried through electrons. For semi-conductor materials, it is expected that the contribution to heat conduction by the electrons will be small, because the electrical conductivity is small. Therefore, molecular dynamics simulations have been used to predict the effect of system size on the thermal conductivity by considering only the phonon transport. Such analyses were done by Poetzsch and Böttger (1994), Schelling et al. (2002) and McGaughey and Kaviany (2004). The methods used in these studies are nonequilibrium molecular dynamics (NEMD) or equilibrium molecular dynamics (EMD). The NEMD approach to computing thermal conductivity is called the “direct method,” which imposes a temperature gradient across a simulation cell and is analogous to the experimental set-up. However, due to computational capacity, the cell is very small, resulting in extremely high temperature gradients in which Fourier’s law of heat conduction may break down. A commonly used approach to EMD is the Green-Kubo method, which uses heat current fluctuations to compute thermal conductivity by the fluctuation-dissipation theorem. This theorem captures the linear response of a system subjected to an external perturbation and is expressed in terms of fluctuation properties of the thermal equilibrium. Heat conduction in a liquid is a combination of the random movement of molecules, similar to the kinetic theory of gases, as well as the movement of phonons and/or electrons. Therefore, thermal transport through conduction is much more complex in the liquid state than in the gaseous or solid states; the theory is in its infancy, and is therefore not included. The discussion of microscale and nanoscale systems thus far has been primarily theoretical.
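The closing relations of this discussion are easy to evaluate. A sketch with rough, assumed handbook-style values (used only for illustration): the phonon transmission probability of eq. (8) for a Si/Ge interface, the thin-film reduction of eqs. (9)-(10), and a Wiedemann-Franz estimate, eq. (11), for copper:

```python
import math

# eq. (5): acoustic impedance Z = rho * c_ph (assumed densities, sound speeds)
Z_si = 2330.0 * 8400.0           # silicon, kg/(m^2 s)
Z_ge = 5320.0 * 4900.0           # germanium

# eq. (8): phonon transmission probability at normal incidence
P = 4 * Z_si * Z_ge / (Z_si + Z_ge) ** 2
print(P)                         # ~0.98: Si and Ge are acoustically well matched

# eqs. (9)-(10): conductivity reduction for an assumed 40 nm mean free path
lam, delta = 40e-9, 100e-9       # mean free path and film thickness, m
ratio_n = 1 - lam / (3 * delta)              # normal to the film
ratio_t = 1 - 2 * lam / (3 * math.pi * delta)  # along the film
print(ratio_n, ratio_t)          # suppression is stronger normal to the film

# eq. (11): Wiedemann-Franz estimate for copper (assumed sigma, standard L0)
sigma, L0, T = 5.96e7, 2.44e-8, 300.0
k_wf = sigma * L0 * T
print(k_wf)                      # ~436 W/(m K), near copper's measured ~400
```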
Therefore, a discussion of experimental measurements of these systems is needed. The most common tool for sensing and actuating at the nanometer scale is atomic force microscopy (AFM) (King and Goodson, 2004). The basic structural design of an AFM consists of a micromachined tip at the end of a cantilever beam. A motion control stage is attached to the cantilever beam. The motion control stage brings the tip into contact with a surface, and moves the tip laterally over the surface. As the tip follows the surface, small changes in the vertical position of the cantilever beam are detected. The result is a topographic map of a surface with a resolution as good as 1 nm. The AFM can be used to intentionally modify a surface over which it scans in order to study the effects of certain modifications. This research includes local chemical delivery, thermally-assisted indentation of soft materials, direct indentation of soft materials, and guiding electromagnetic radiation into photoreactive polymers. ## References Castellani, C., DiCastro, C., Kotliar, G., and Lee, P.A., 1987, “Thermal Conductivity in Disordered Interacting-Electron Systems,” Physical Review Letters, Vol. 59, pp. 477-480. Chen, G., 2004, Nano-To-Macro Thermal Transport, Oxford University Press. Faghri, A., and Zhang, Y., 2006, Transport Phenomena in Multiphase Systems, Elsevier, Burlington, MA. Faghri, A., Zhang, Y., and Howell, J. R., 2010, Advanced Heat and Mass Transfer, Global Digital Press, Columbia, MO. Flik, M.I., Choi, B.I. and Goodson, K.E., 1992, “Heat Transfer Regimes in Microstructures,” ASME Journal of Heat Transfer, Vol. 114, pp. 666-674. Huxtable, S.T., Abramson, A.R. and Majumdar, A., 2004, “Heat Transport in Superlattices and Nanowires,” Heat and Fluid Flow in Microscale and Nanoscale Structures, eds. Faghri, M. and Sundén, B., Chapter 3, Southampton, UK. King, W.P. 
and Goodson, K.E., 2004, “Thermomechanical Formation and Thermal Detection of Polymer Nanostructures,” Heat and Fluid Flow in Microscale and Nanoscale Structures, eds. Faghri, M. and Sundén, B., Chapter 4, Southampton, UK. Majumdar, A., 1998, “Microscale Energy Transport in Solids,” Microscale Energy Transport, edited by Majumdar, A., Gerner, F., and Tien, C. L., Taylor & Francis, New York. Maruyama, S., 2001, “Molecular Dynamics Method for Microscale Heat Transfer,” Advances in Numerical Heat Transfer, edited by Minkowycz, W.J., and Sparrow, E.M., Taylor & Francis, New York, pp. 189-226. McGaughey, A.J.H. and Kaviany, M., 2004, “Quantitative Validation of the Boltzmann Transport Equation Phonon Thermal Conductivity Model under the Single-Mode Relaxation Time Approximation,” Physical Review B, Vol. 69, 094303. Peterson, R.B., 2004, “Miniature and Microscale Energy Systems,” Heat and Fluid Flow in Microscale and Nanoscale Structures, eds. Faghri, M. and Sundén, B., Chapter 1, Southampton, UK. Poetzsch, R.H.H. and Böttger, H., 1994, “Interplay of Disorder and Anharmonicity in Heat Conduction: Molecular-Dynamics Study,” Physical Review B, Vol. 50, pp. 15757-15763. Qiu, T.Q., and Tien, C.L., 1993, “Heat Transfer Mechanism during Short Pulsed Laser Heating of Metals,” ASME Journal of Heat Transfer, Vol. 115, pp. 835-841. Schelling, P.K., Phillpot, S.R., and Keblinski, P., 2002, “Comparison of Atomic-Level Simulation Methods for Computing Thermal Conductivity,” Physical Review B, Vol. 65, 144306. Tzou, D.Y., 1996, Macro- to Microscale Heat Transfer, Taylor & Francis, New York.
2013-12-26

# Flipping Burned Pancakes

The cook at the Frobbozz Magic Pancake House sometimes falls asleep on the job while cooking pancakes. As a result, one side of a stack of pancakes is often burned. Clearly, it is bad business to serve visibly burned pancakes to the patrons. Before serving, the waitress will arrange the stacks of pancakes so that the burned sides are facing down. You must write a program to aid the waitress in stacking the pancakes correctly. We start with a stack of N pancakes of distinct sizes, each of which is burned on one side. The problem is to convert the stack to one in which the pancakes are in size order with the smallest on the top and the largest on the bottom and burned side down for each pancake. To do this, we are allowed to flip the top k pancakes over as a unit (so the k-th pancake is now on top and the pancake previously on top is now in the k-th position, and the burned side goes from top to bottom and vice versa). For example (+ indicates burned bottom, – a burned top): You must write a program which finds a sequence of at most (3n − 2) flips, which converts a given stack of pancakes to a sorted stack with burned sides down. The first line of the input contains a single decimal integer, N, the number of problem instances to follow. Each of the following N lines gives a separate dataset as a sequence of numbers separated by spaces. The first number on each line gives the number, M, of pancakes in the data set. The remainder of the data set is the numbers 1 through M in some order, each with a plus or minus sign, giving the initial pancake stack. The numbers indicate the relative sizes of the pancakes and the signs indicate whether the burned side is up (-) or down (+). M will be, at most, 30.
3 3 +1 -3 -2 4 -3 +1 -2 -4 5 +1 +2 +3 +4 -5 1 6 2 1 3 1 2 1 2 6 4 1 4 3 1 2 3 3 5 1 5

1. Shouldn't the answer to Problem 3 be 1/4? Because whether or not the three cut segments can form a triangle, x, y-x and 1-y must all be greater than 0, so x < y; the base region should be a large triangle, and the small triangle is 1/4 of that large one.
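Returning to the pancake-flipping problem above: a direct way to meet a flip bound is the classical three-flips-per-pancake strategy. For each size m from largest to smallest, bring pancake m to the top, flip it alone if its burned side is down, then flip the top m pancakes into place. The Python sketch below implements this naive version, which uses at most 3 flips per pancake (3M in total); the small refinements needed to reach the stated 3n − 2 bound are omitted.

```python
def flip(stack, k):
    """Flip the top k pancakes: reverse their order and their burned sides."""
    return [-p for p in reversed(stack[:k])] + stack[k:]

def solve(stack):
    """Sort burned-side down using at most 3 flips per pancake (<= 3M total)."""
    stack = list(stack)
    flips = []
    for m in range(len(stack), 0, -1):
        if stack[m - 1] == m:          # pancake m already in place, burned down
            continue
        i = next(j for j, p in enumerate(stack) if abs(p) == m)
        if i > 0:                      # bring pancake m to the top
            stack = flip(stack, i + 1)
            flips.append(i + 1)
        if stack[0] == m:              # burned side down while on top: flip it
            stack = flip(stack, 1)     # alone so the final flip lands it burned
            flips.append(1)            # side down
        stack = flip(stack, m)         # flip into its final position
        flips.append(m)
    return flips, stack
```

On the first sample stack `+1 -3 -2` this happens to use exactly 7 = 3·3 − 2 flips.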
# Properties Label 3.2.ac_h_ai Base Field $\F_{2}$ Dimension $3$ $p$-rank $2$ Does not contain a Jacobian ## Invariants Base field: $\F_{2}$ Dimension: $3$ Weil polynomial: $( 1 + 2 x^{2} )( 1 - x + 2 x^{2} )^{2}$ Frobenius angles: $\pm0.384973271919$, $\pm0.384973271919$, $\pm0.5$ Angle rank: $1$ (numerical) ## Newton polygon $p$-rank: $2$ Slopes: $[0, 0, 1/2, 1/2, 1, 1]$ ## Point counts This isogeny class does not contain a Jacobian, and it is unknown whether it is principally polarizable. $r$ 1 2 3 4 5 6 7 8 9 10 $A(\F_{q^r})$ 12 576 1764 2304 15972 254016 2601156 18662400 137650212 1020419136 $r$ 1 2 3 4 5 6 7 8 9 10 $C(\F_{q^r})$ 1 15 19 7 11 63 155 287 523 975 ## Decomposition 1.2.ab 2 $\times$ 1.2.a ## Base change This is a primitive isogeny class.
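The two point-count tables can be reproduced from the Weil polynomial. Writing the polynomial as a product over Frobenius eigenvalues, the eigenvalues are the roots of the reversed polynomial, the abelian-variety counts are products of (1 − α^r), and (since this class contains no Jacobian) the C row is the virtual curve count q^r + 1 − Σα^r. A numerical sketch, using the expanded form P(x) = 1 − 2x + 7x² − 8x³ + 14x⁴ − 8x⁵ + 8x⁶:

```python
import numpy as np

q = 2
# P(x) = (1 + 2x^2)(1 - x + 2x^2)^2 = 1 - 2x + 7x^2 - 8x^3 + 14x^4 - 8x^5 + 8x^6;
# the Frobenius eigenvalues alpha_i are the roots of the reversed polynomial.
alpha = np.roots([1, -2, 7, -8, 14, -8, 8])

def A_count(r):
    """#A(F_{q^r}) = prod_i (1 - alpha_i^r)."""
    return round(np.prod(1 - alpha**r).real)

def C_count(r):
    """Virtual curve count q^r + 1 - sum_i alpha_i^r."""
    return round((q**r + 1 - np.sum(alpha**r)).real)
```

For example, A_count(1) recovers 12 and C_count(2) recovers 15, matching the tables.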
# What can be a good equation to encode a natural number to a bigger, visually random natural number and decode it back?

I need to visually encrypt (make the information uninterpretable by an unprepared human) a natural number (known to be below some x), representing it as a much bigger, seemingly (to an unprepared observer) random (so just a+x doesn't work) natural number, and then decrypt it back.

a = the source value 0 < a < x b = the encrypted value x < b < y y = 2147483647

UPDATE: @chris-taylor has answered the question, but more answers are still welcome. For example (my dilettantish expectations on fancy math suggest it can be possible and easy for math geeks) it would be cool to have a simple formula (without a huge key mapping) that also beats the law that b(a2) is always greater or always smaller than b(a1) whenever a1 < a2. But this is not necessary, I am just curious.

- You need to read about steganography. Optimally, there would be a key needed to even be able to show that there is a message (i.e. smaller number) hidden. –  Paŭlo Ebermann Oct 16 '11 at 23:38

It depends on how 'random' you want it to appear. You could use a combination of multiplication and linear shift: $$b = ha + k$$ and then the decrypting is easy: $$a = (b - k) / h$$ For this to satisfy your condition that $b<y$ you need to choose $h$ and $k$ such that $hx+k<y$, which gives you quite a bit of freedom. Or you could use multiplication modulo $y$, such as $$b = ma \mod y$$ To decrypt you multiply by any inverse of $m$, i.e. any number for which $nm=1$ $(\textrm{mod } y)$, to get $$nb = nma = a \mod y$$ However, this method doesn't guarantee that $b>a$ (although if $x$ is sufficiently small then 'most of the time' you will have $b>a$).
Since you are coming at this from a programming point of view, perhaps the best method is to store two maps (or dicts, depending on your language) encrypt and decrypt such that the result of encrypt.get(a) is greater than x and the expression decrypt.get(encrypt.get(a)) == a is true for all a. Then your encryption can be as random-looking as it is possible to be. This does depend somewhat on the size of x. A map with a million keys is small fry, but a map with a billion keys is starting to take up decent amounts of space. -
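Both suggestions from the answer fit in a few lines of Python. The constants below are hypothetical keys chosen for illustration, not part of the question; the modular variant exploits the fact that the given bound y = 2147483647 = 2³¹ − 1 happens to be prime, so every multiplier has an inverse mod y. As the answer notes, the modular variant does not guarantee b > a.

```python
Y = 2147483647                 # the given bound y = 2**31 - 1, which is prime
H, K = 99991, 12345            # hypothetical affine key; needs H*x + K < Y
M = 48271                      # hypothetical multiplier for the modular variant
N_INV = pow(M, -1, Y)          # modular inverse of M (Python 3.8+)

def encode_affine(a):
    return H * a + K           # b = h*a + k

def decode_affine(b):
    return (b - K) // H        # a = (b - k) / h

def encode_mod(a):
    return (a * M) % Y         # b = m*a mod y

def decode_mod(b):
    return (b * N_INV) % Y     # a = n*b mod y
```

With these keys the affine form stays below Y for any a up to roughly 20000.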
# Eigenvectors of $\begin{bmatrix}a&-b\\b&a\end{bmatrix}$ I am currently confused as to what the correct eigenvectors are for $\begin{bmatrix}a&-b\\b&a\end{bmatrix}$. I confirmed through my own calculations that the eigenvalues are a$\pm$bi. My textbook, Linear Algebra and its Applications, states that the corresponding eigenvectors are $\begin{bmatrix}1\\-i\end{bmatrix}$ and $\begin{bmatrix}1\\i\end{bmatrix}$. This makes sense when verifying that $Ax = \lambda x$, as $Ax = \begin{bmatrix}a&-b\\b&a\end{bmatrix} \begin{bmatrix}1\\-i\end{bmatrix} = \begin{bmatrix}a+bi\\b-ai\end{bmatrix} = (a+bi) \begin{bmatrix}1\\-i\end{bmatrix}$. However, upon performing the calculations myself, I repeatedly found the eigenvectors to be $\begin{bmatrix}-i\\1\end{bmatrix}$ and $\begin{bmatrix}i\\1\end{bmatrix}$ rather than the given solution. Thinking that I could have made a calculation error, I plugged this into WolframAlpha and got the same values. My calculation process was to solve for $(A-(a+bi)I)x = 0$, which I reduced down to $\begin{bmatrix}-i&-1\\1&-i\end{bmatrix}$. After multiplying the top equation by $i$, I got $\begin{bmatrix}1&-i\\1&-i\end{bmatrix}$ ~ $\begin{bmatrix}1&-i\\0&0\end{bmatrix}$. Thus, $x_1 = ix_2$ and $x_2$ is free. So $x = \begin{bmatrix}x_1\\x_2\end{bmatrix} = \begin{bmatrix}i\\1\end{bmatrix} x_2$, so the eigenvector corresponding to $\lambda = a+bi$ is $\begin{bmatrix}i\\1\end{bmatrix}$. Similarly, the eigenvector corresponding to $\lambda = a-bi$ is $\begin{bmatrix}-i\\1\end{bmatrix}$. Where is the discrepancy between my calculated answer and the one given in the textbook? Eigenvectors are unique up to some (non-zero) constant. Note that the two eigenvectors that are listed as answers are just $\pm i$ times the eigenvectors you found. There are infinitely many eigenvectors related to a given eigenvalue (in fact a whole vector subspace).
Let's check that $$\begin{bmatrix} 1\\-i \end{bmatrix}=-i\cdot\begin{bmatrix}i \\1\end{bmatrix}$$ So "your" eigenvector is proportional to the one "of the textbook". Similarly for the other eigenvalue. There is none: $(-i,1)$ is a scalar multiple of $(1,i)$, that is: $-i(1,i) = (-i,-i^2) = (-i,-(-1)) = (-i,1)$, and similarly for your other eigenvector. Let $r$ be the modulus and $\theta$ the argument of the complex number $a+bi$, so that $$a=r \cos \theta \ \ \ \text{and} \ \ \ b=r \sin \theta$$ No answer mentions the geometrical interpretation of the matrix $$S=\begin{bmatrix}a&-b\\b&a\end{bmatrix}=r\begin{bmatrix}\cos \theta&-\sin \theta\\\sin \theta&\cos \theta\end{bmatrix}$$ that helps to understand why such matrices are bound to have non-real eigenvalues. The interpretation of $S$ as a geometrical transformation is clearly a rotation followed (or preceded) by a homothetic transform, i.e., a similitude. It is clear that (unless $\theta=0$!) no real vector can be transformed by $S$ into a real multiple of itself. Therefore, the eigenvalues "have to be" complex.
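All of the above is easy to confirm numerically. A small numpy check with the assumed sample values a = 3, b = 2: both the textbook's vector and the poster's vector satisfy Av = (a+bi)v, and they differ only by the scalar −i.

```python
import numpy as np

a, b = 3.0, 2.0                      # assumed sample values
A = np.array([[a, -b], [b, a]])
lam = a + 1j * b                     # eigenvalue a + bi

v_book = np.array([1, -1j])          # the textbook's eigenvector
v_mine = np.array([1j, 1])           # the poster's eigenvector

# Both are eigenvectors for a + bi, and v_book = -i * v_mine.
print(A @ v_book, lam * v_book)
print(A @ v_mine, lam * v_mine)
```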
# Is $\varnothing$ an affine variety with these definitions? A topological space $X$ is called reducible if $X=X_1 \cup X_2$ where $X_1, X_2$ are non-empty, proper subsets of $X$ and closed. An irreducible algebraic set in $\mathbb{A}^n$ is called an affine variety. $\varnothing = Z(1)$, so $\varnothing$ is an algebraic set, and obviously $\varnothing \subseteq \mathbb{A}^n$. If we suppose that $\varnothing$ is reducible, then the only options for $X_1$ and $X_2$ are $\varnothing$, but this can't happen, because $X_1$ and $X_2$ must be non-empty. So, am I right saying that $\varnothing$ is an affine variety? • possible duplicate of is the empty set an (irreducible) variety? – Alex Kruckman Mar 9 '15 at 21:04 • I saw that question but the definition of ''reducible'' is different, I don't suppose that $X \neq \varnothing$ – Leafar Mar 9 '15 at 21:08 • @Leafar. You are not entitled to change the meaning of definitions. This is like asking "if I call $3$ the number $4$, is it true that $2+2=3$" ? So, yes, $\emptyset=Z(1)$ is algebraic, not irreducible and that's the end of the (uninteresting) story. – Georges Elencwajg Mar 9 '15 at 21:23 • Why uninteresting? There are many things associated, like the dimension, we have to consider all pathological cases... – Leafar Mar 9 '15 at 21:27 However, there are good reasons for considering this to be the wrong definition of "irreducible", as explained in this question. Here's another: the defining property of sober spaces (examples include algebraic varieties with the Zariski topology and Hausdorff spaces) is "every irreducible closed set has a unique generic point". This is nonsense if we allow $\emptyset$ to be irreducible.
# Algorithm for Removing Inverted Elements from a Permutation I currently have a problem whose solution requires removing from a permutation of $\lbrace 1,\ \dots,\ n\rbrace$ those values that are to the left of a smaller one. My idea was to remove the complement of the longest increasing subsequence, but I am not quite sure whether that yields the correct result (which I doubt, because the longest increasing subsequence need not be unique, whereas the inversion relation is) and/or whether it would be the most efficient way to remove all inversions. Questions: • does the longest increasing subsequence of a permutation represent the complement of all inversions? • what is the fastest known algorithm for removing all "inverted" elements from a permutation? • The answer to the first question is certainly "no", as the permutation $[2,3,1,6,4,5]$ shows: in this case, the longest increasing subsequence is even unique, it is $2,3,4,5$. If I understand correctly, you want to remove $2$,$3$ and $6$, right? – Martin Rubey Apr 20 '17 at 6:51 • @MartinRubey yes, you have put it correctly; I want to remove 2,3 and 6. And thanks for the example with unique longest increasing subsequence. – Manfred Weis Apr 20 '17 at 6:54 • Find the smallest element $s_1=1$. Find the smallest element $s_2$ that comes after $s_1$; find smallest $s_3$ that comes after $s_2$, etc. until you get $s_k$ equal the last element in the permutation. Then $(s_1,s_2,\dots,s_k)$ is the required subpermutation, isn't it?
– Max Alekseyev Apr 20 '17 at 21:26 • @MaxAlekseyev your algorithm will work, but you always would have to search for the next smallest unremoved elemement; could require $O(n^2)$ steps or, you could use a second array with values $[1,\ \dots,\ n]$, in which the removed elements are marked; in that array you could go from left to right to identify the next smallest element and then remove all elements of the primary array until you have encountered the identified next minimal value, requiring $O(n)$ steps. – Manfred Weis Apr 21 '17 at 5:00 • @ManfredWeis: So, there is a linear algorithm. What is your question then? – Max Alekseyev Apr 21 '17 at 12:12 Create a doubly-linked list $L$, initially empty. Scan a given permutation $p=(p_1,p_2,\dots,p_n)$ from left to right, and for each element $p_i$ perform the following operations: 1. (loop) While $L$ is not empty and $p_i$ is smaller than the last element of $L$, remove this last element from $L$. 2. Append $p_i$ to $L$. At the end, $L$ will contain a required subpermutation of $p$. Notice that visiting (i.e., comparison to) every element of $L$ results in either appending or removal of an element to/from $L$. The total number of such operations does not exceed $2n$, since every element of $p$ may be appended to $L$ only once and removed from $L$ only once. So, this is a linear-time algorithm. • does "last element" denote the element to the left of $p_i$ or the element at the end of the list? – Manfred Weis Apr 21 '17 at 19:26 • "last element" refers to the last element of $L$ – Max Alekseyev Apr 21 '17 at 21:14
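The accepted linear-time procedure translates almost verbatim into Python. A plain list can serve as the doubly-linked list L, since only its tail is ever appended to or removed from:

```python
def visible_increasing(p):
    """Keep exactly the elements with no smaller element to their right."""
    out = []                      # plays the role of the doubly-linked list L
    for x in p:
        while out and x < out[-1]:
            out.pop()             # x is smaller and to the right: drop out[-1]
        out.append(x)
    return out
```

On the example from the comments, [2, 3, 1, 6, 4, 5], this removes 2, 3 and 6 and returns [1, 4, 5]. Each element is appended once and popped at most once, so the running time is O(n).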
# II. System Build Types

## 2.1 Using Build Types

You will never know how excellent you are unless you impel yourself once. When you don't call others a bigwig or a guru, you have made a lot of progress!
# How to calculate $\lim_{x\to 1}\frac{x^{2}-3x+2}{(x-1)(x^{3}-1)}$

How to calculate $\underset{x\to 1}{lim}\frac{\left({x}^{2}-3x+2\right)}{\left(x-1\right)\left({x}^{3}-1\right)}$

Isla Klein

Notice that by writing $f\left(x\right)=\frac{x-2}{{x}^{2}+x+1}$ you have that $\frac{x-2}{{x}^{3}-1}=\frac{x-2}{{x}^{2}+x+1}\cdot \frac{1}{x-1}=\frac{f\left(x\right)}{x-1}$ Notice, moreover, that as $x\to 1$, $f\left(x\right)\to -1/3$. Meanwhile, depending on which direction you approach from, $1/\left(x-1\right)\to ±\mathrm{\infty }$. The end result is clear: your one-sided limits go to $+\mathrm{\infty }$ and $-\mathrm{\infty }$ and hence the limit overall does not exist.
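The answer is easy to check numerically: approaching x = 1 from either side, the function blows up with opposite signs, and multiplying by (x − 1) recovers the finite factor −1/3.

```python
def g(x):
    """The function whose limit at x = 1 is in question."""
    return (x**2 - 3*x + 2) / ((x - 1) * (x**3 - 1))

for h in (1e-2, 1e-3, 1e-4):
    # Large negative from the right, large positive from the left
    print(h, g(1 + h), g(1 - h))
```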
# Qm problem 1. Dec 6, 2007 ### ehrenfest 1. The problem statement, all variables and given/known data I am working on 6.16 at the following site: http://mikef.org/files/phys_4241_hw14.pdf I think that the solution given is wrong. I can get part a), however, I am just getting stuck on part b). So, the wavefunction in r < r_0 is R(r) = A/r sin(k_1*r) and the solution in r > r_0 is given by R(r) = B/r exp(-k_2*r) I have no idea how to do what they are asking in part b) since we have three unknowns, A, B and V_0 and only two equations: namely continuity at r_0 of R and R'. Is there something that I am missing? 2. Relevant equations 3. The attempt at a solution 2. Dec 6, 2007 ### dwintz02 What I'm not understanding is the fact that they tell you to assume you are in a bound state, but then they find oscillatory solutions even as r goes to infinity. Did they just say the nuclear force is binding for all r?? Since they are modeling the nuclear force, R(r) should decrease exponentially outside of r0. If you're dying to come up with a solution, use m(D) = 2.014102 u m(p) = 1.00727647 u m(n) = 1.00866501 u Where 931.502 MeV = 1 u That should give you the actual binding energy of a deuterium nucleus, although I know this doesn't help you solve it the way they want you to. 3. Dec 6, 2007 ### ehrenfest As I said, I think that solution is totally wrong as you can see from the statement "Then we must have E + V_0 > 0 or E > V_0." But I think there is still a way to do the problem... Here is more of my work: For r < r_0 the only solution is R(r) = A/r sin (k_1*r), where k_1 = sqrt(2m(E+V_0))/h-bar. For r > r_0, I get R(r) = B/r sin(k_2 r) + C/r cos (k_2 r) if E > 0 and R(r) = D/r exp(-k_2 r) if E < 0, where k_2 = sqrt(2m|E|)/h-bar. So, for part b, since E is less than 0, I can use R(r) = D/r exp(-k_2 r) for r > r_0. But, then there are 3 unknowns, A, D and V_0, and I do not understand how I can solve for any of them using only continuity. Last edited: Dec 6, 2007 4.
Dec 6, 2007 ### dwintz02 Yes, I think it is wrong. It says assume a bound solution but at the beginning of the last paragraph they assume that E > 0 which should not be true for a bound solution in a potential of -V. But, have you tried using normalization, continuity, and smoothness on Psi to give you 3 equations for your 4 unknowns A, B, C, and V? 5. Dec 6, 2007 ### ehrenfest Part a says that I should not normalize the solutions. Anyway, do you think what I wrote in my last post is correct, reducing it to 3 unknowns? 6. Dec 6, 2007 ### ehrenfest anyone see what is going on here? 7. Dec 6, 2007
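For what it's worth, the "three unknowns, two equations" worry resolves itself if one matches the logarithmic derivative R'/R (equivalently, of u = rR) at r_0: the amplitudes A and D cancel, leaving the single transcendental equation k_1 cot(k_1 r_0) = −k_2 for V_0 at a given E. A hedged numerical sketch follows; the well radius r_0 = 2.1 fm is an assumed value not given in the thread, while the masses and binding energy are standard deuteron figures.

```python
import math

hbarc = 197.327          # MeV*fm
mu = 938.92 / 2.0        # n-p reduced mass in MeV/c^2 (approximate)
E = -2.225               # deuteron binding energy in MeV
r0 = 2.1                 # ASSUMED well radius in fm

k2 = math.sqrt(2 * mu * abs(E)) / hbarc     # exterior decay constant

def f(V0):
    """Log-derivative matching of u(r) = r R(r) at r0: k1*cot(k1*r0) + k2 = 0."""
    k1 = math.sqrt(2 * mu * (V0 + E)) / hbarc
    return k1 / math.tan(k1 * r0) + k2

# Bisection on a bracket where f changes sign (k1*r0 stays within (pi/2, pi))
lo, hi = 26.0, 60.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
V0 = 0.5 * (lo + hi)
print(round(V0, 1))      # well depth in MeV; roughly 30-40 MeV for this r0
```

The resulting depth of a few tens of MeV is the textbook order of magnitude for a square-well deuteron model; a different assumed r_0 shifts it accordingly.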
# #7: The Visible Grid Point Problem Here is a difficult probability question: Suppose you are standing on an infinitely large square grid at the point (0,0), and suppose that you can see infinitely far but cannot see through grid points. Given a random grid point z = (x,y), where x and y are integers, what is the chance you can see z? As far as I know, there is no “easy” way to solve this problem. We could try picking random pairs of points and testing a large number of them, and could even write a computer program to do this, testing perhaps billions and billions of cases. But even then, we would not know what the exact answer is supposed to be. Solution In any case, the solution to the question is the elegant $\displaystyle\frac{6}{\pi^2},$ which is approximately 0.608…. Two methods of solving this problem are given below. They both use the following result: Lemma 1 The point z = (x,y) is visible if and only if gcd(x,y) = 1, where gcd(x,y) is the greatest common divisor of x and y. Proof. Let d = gcd(x,y). If d > 1, then let x' = x/d and y' = y/d. The line between (0,0) and (x,y) intersects the lattice point (x',y'), so (x,y) is not visible. Conversely, if d = 1, then suppose there is a lattice point (x',y') on the line strictly between (0,0) and (x,y). Let r = x'/x = y'/y, and note that 0 < r < 1. Write r as a fraction in lowest terms, r = s/t, where gcd(s,t) = 1. Since 0 < r < 1, it must be that t > 1. This gives the equations sx = tx' and sy = ty', which implies, based on gcd(s,t) = 1, that x and y are both divisible by t, which contradicts gcd(x,y) = 1. Notation If gcd(m,n) = 1, we say that m and n are relatively prime. Let φ(n) denote the Euler totient function, which counts the number of elements less than or equal to n which are relatively prime to n. For example, φ(4) = 2 as gcd(1,4) = 1 and gcd(3,4) = 1. On the other hand, gcd(0,4) = 4, gcd(2,4) = 2, and gcd(4,4) = 4. The pictures below illustrate this example.
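Lemma 1 turns the computer experiment suggested in the introduction into a few lines: sample coordinate pairs uniformly from a large box and count the coprime ones. With the seed fixed below, 200,000 samples already land close to 6/π² ≈ 0.6079.

```python
import math
import random

random.seed(1)                      # fixed seed for reproducibility
R, N = 10**6, 200_000               # coordinate range and sample count
hits = sum(
    1
    for _ in range(N)
    if math.gcd(random.randint(1, R), random.randint(1, R)) == 1
)
estimate = hits / N
print(estimate)                     # close to 6/pi^2
```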
Method 1: Prime Factors This method uses the fact that two numbers are relatively prime if and only if they share no common prime factor. So for a random point z = (x,y), the chance that x and y are relatively prime is the chance that they share no common prime factor. For each prime p, the chance that a random integer n is divisible by p is 1/p, so the chance that both are divisible by p is 1/p². Thus, the chance that not both are divisible by p is 1 – 1/p². The chance that x and y share no common prime factor is the chance they do not share each prime, multiplied out for each prime, or $\displaystyle Prob = \prod_p \left(1 - \frac{1}{p^2}\right)$ where the product ranges over all primes p. To calculate this value, note that inverting each term gives a geometric series for each prime p, that is, $\displaystyle\frac{1}{1 - \frac{1}{p^2}} = 1 + \frac{1}{p^2} + \frac{1}{(p^2)^2} + \cdots.$ Since each positive integer n is uniquely represented as the product of primes, this implies that each term in the final product of series is 1/n² for some unique n, and furthermore, since this product contains every prime power, we have all the positive numbers n. Hence the inverse of the probability is actually $\displaystyle\frac{1}{Prob} = \prod_p \left(\frac{1}{1 - \frac{1}{p^2}}\right) = \sum_{n=1}^{\infty} \frac{1}{n^2}$ where the second equality is a case of the Euler product formula. The Riemann zeta function ζ(s) is defined for s > 1 by $\displaystyle\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s},$ so that $\displaystyle\frac{1}{Prob} = \zeta(2)$. Since ζ(2) = π²/6, this implies $\displaystyle Prob = \frac{6}{\pi^2}$. This method has the advantage that it is relatively short, but it is not geometrically intuitive. Method 2: Geometry and Limits The second method is initially geometric, but still involves number theory. 
The strategy is to consider ever-larger squares around the origin point (0,0) and consider the probability of visible grid points within each square and see what value that probability converges to. Let Sr be the square from -r to r in both coordinates. Let N(r) denote the number of grid points in Sr and N'(r) denote the number of grid points in Sr which are also visible from the origin. Then $\displaystyle Prob = \lim_{r \to \infty} \frac{N'(r)}{N(r)}.$ The following diagram is due to Apostol (see references). The shaded region consists of the points in the form (x,y) where 2 ≤ x ≤ r and 1 ≤ y ≤ x. For r = 1, all 8 points around (0,0) are visible. For r ≥ 2, N'(r) is 8 (the closest 8 points) plus 8 times the number of visible points in the shaded region. This is because the infinite grid is symmetric with respect to reflection across axes and diagonals (think UK flag), so all 8 sub-parts will have the same result. To compute the number of visible points in the shaded region we sum the number of points in the form (x,y) for each x in 2 ≤ x ≤ r. But a point of the form (x,y) is visible if and only if x and y are relatively prime, which means for each x, the contribution to the number of visible points in the shaded region is φ(x). Thus the number of points in the shaded region is $\displaystyle\sum_{n=2}^r \varphi(n)$ so that $\displaystyle N'(r) = 8 + 8 \sum_{n=2}^r \varphi(n) = 8\sum_{n=1}^r \varphi(n)$ as φ(1) = 1. Now we use the big-O notation to compare the growth rates of functions. We say that f(x) = O(g(x)) if there exist two constants x0 and M such that for all x ≥ x0, f(x) ≤ Mg(x). There is a theorem of analytic number theory that states for x > 1, $\displaystyle\sum_{n \leq x} \varphi(n) = \frac{3}{\pi^2}x^2 + O(x \log x)$, which implies N'(r) = 24r²/π² + O(r log r). Since N(r) is just the number of points in a square, it is given by N(r) = 4r² + O(r). 
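Before passing to the limit, the counting formula N'(r) = 8 Σ_{n≤r} φ(n) can be sanity-checked against a direct gcd count over the square S_r, using Lemma 1 for visibility. Even at r = 50 the ratio is already near 6/π².

```python
import math

def phi(n):
    """Euler's totient, computed directly from the definition above."""
    return sum(1 for k in range(1, n + 1) if math.gcd(n, k) == 1)

r = 50
formula = 8 * sum(phi(n) for n in range(1, r + 1))

# Direct count: a point is visible iff gcd(|x|, |y|) = 1 (Lemma 1)
brute = sum(
    1
    for x in range(-r, r + 1)
    for y in range(-r, r + 1)
    if (x, y) != (0, 0) and math.gcd(abs(x), abs(y)) == 1
)
ratio = brute / ((2 * r + 1) ** 2 - 1)
print(formula, brute, ratio)
```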
Then we have the ratio $\displaystyle \frac{N'(r)}{N(r)} = \frac{\frac{24r^2}{\pi^2} + O(r \log r)}{4r^2 + O(r)} = \frac{\frac{6}{\pi^2} + O\left(\frac{\log r}{r}\right)}{1 + O\left(\frac{1}{r}\right)}$ which leads to the result $\displaystyle Prob = \lim_{r\to\infty} \frac{N'(r)}{N(r)} = \frac{6}{\pi^2}.$ References
# Iteratively Replacing Substrings This problem came up while I was helping someone. Informally, we have a string of characters and a "rule" which replaces a specific string with another one, and we repeat this rule until we can no longer do so (obviously if you replace a string with another which contains the first as a substring, this won't terminate). The question is, under what conditions will this terminate, and is there a general bound on the length of the resulting string? Specifically, let's try this with two characters, $a$ and $b,$ and let's assume there's only one rule. As an example, consider the rule $ab \mapsto bba.$ It isn't hard to see for a string of length $n,$ the maximum length of the resulting string is $2^{n-1} + n -1.$ The point here is that this rule essentially moves all $a$'s to the right (and it doesn't change the number of $a$'s), so the most it can do is double the number of $b$'s for each $a.$ A string which achieves this upper bound is $aaa\ldots ab.$ Another question is, why is it true that the order of replacement doesn't matter (this shouldn't be too hard, assuming it terminates... I think)? • "why is it true that the order of replacement doesn't matter" Consider the rule $aa\to b$ and the string $aaaa$. If we first apply the rule to the second and third $a$'s, we get $aba$ and we stop. If we first apply the rule to the first two $a$'s, we get $baa$; applying the rule again leads to $bb$. The order of replacement mattered. – Joel Reyes Noche Feb 24 '14 at 1:43 • nice, thanks for the example – cats Feb 24 '14 at 1:49 • Note that if your rule replaces a specific letter with a string, then there would be no ambiguity in which replacement should be done first. That is, if your rule is a morphism (see here for a definition) then things would be much easier. – Joel Reyes Noche Feb 24 '14 at 14:49 (1) Your first question (whether a single-rule string rewriting system $u \to v$ is terminating) appears to be a long-standing open problem. 
(2) There are some known results for your second question when $|u| = |v|$. In this case the upper bound is $n^{|u|}$, where $n$ denotes the length of the initiating string. If the alphabet has only two letters, then the upper bound is $n^2/4$ and this bound is tight.
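The $ab \mapsto bba$ example from the question is easy to simulate. The sketch below applies the rule at the leftmost occurrence until none remains; for this particular rule the final string turns out not to depend on the order (all $b$'s end up before all $a$'s, with the counts fixed by the doubling argument), and the maximal length $2^{n-1}+n-1$ is attained by $aa\ldots ab$.

```python
def rewrite(s, lhs="ab", rhs="bba"):
    """Apply the rule lhs -> rhs at the leftmost occurrence until none remains."""
    while lhs in s:
        s = s.replace(lhs, rhs, 1)
    return s
```

For instance, rewrite("aaab") terminates at the normal form with 8 b's followed by 3 a's, of length 2³ + 3 = 11.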
My Math Forum - Automata question

Computer Science Forum

December 12th, 2015, 01:57 AM #1 Member Joined: Mar 2015 From: USA Posts: 34 Thanks: 1

Automata question

The question: Let $L$ be a regular language. A few definitions: $p(L)$: the minimal natural number for which $L$ satisfies the pumping lemma. $n(L)$: the size of a minimal NFA that accepts $L$. $m(L)$: $Rank(L)$, the number of equivalence classes of $L$. Let $k>0$ be an integer. Find an example of a language $L$ such that $p=n=m=k$.

My attempt: At first, I thought of $L=\{w: |w|\bmod k=0\}$, $\Sigma=\{ a\}$. But then I realized that \$p
• Regional Development

• ### Combined Evaluation Analysis of the Agricultural Investment Environment in Central Asia Based on an Improved Fuzzy Borda Method

1. School of Economics and Trade, Xinjiang Agricultural University, Urumqi 830052, Xinjiang, China
• Received: 2015-01-17 Revised: 2015-05-02 Published: 2015-09-25
• Corresponding author: MA Hui-lan (1962-), female, professor and doctoral supervisor; research interests: agricultural economics and regional economics. Email: mhl2020@sina.com
• About the first author: WANG Jing-jing (1988-), male, from Huangshan, Anhui; Ph.D. candidate at Xinjiang Agricultural University; research interest: investment and management. Email: 499854745@qq.com
• Funding: National International Science and Technology Cooperation Program (2010DFA92720-13); 2014 Ministry of Agriculture International Exchange and Cooperation Project; supported by the Research Center for Rural Development in Arid Areas, a key research base for humanities and social sciences in Xinjiang

### Combined evaluation analysis on agricultural investment environment in Central Asia based on improved fuzzy Borda method

WANG Jing-jing, MA Hui-lan

1. School of Economics and Trade, Xinjiang Agricultural University, Urumqi 830052, Xinjiang, China
• Received: 2015-01-17 Revised: 2015-05-02 Online: 2015-09-25

Abstract: Central Asia is considered the region with the best investment potential along the "Silk Road Economic Belt". Evaluating its agricultural investment environment can help Chinese enterprises "go out" to Central Asia and develop agricultural investment and cooperation there. In order to comprehensively evaluate the agricultural investment environment in Central Asia and reveal its spatial differences and temporal trends, this paper takes the five Central Asian countries over 2008-2012 as its research subjects. Based on a comprehensive consideration of the general factors that affect an investment environment, and with reference to the evaluation indices of the FAO and of relevant scholars, the paper establishes an evaluation index system consisting of 27 indicators in 4 aspects: the political and legal environment, the economic and opening-up environment, the infrastructure and public service environment, and the agricultural production environment.
To take full advantage of simple evaluation methods while overcoming their shortcomings to some degree, four multi-factor comprehensive evaluation methods (principal component analysis, the entropy method, the mean-variance method and the maximum deviation method) were first applied to obtain comprehensive evaluation values and rankings. After passing an a priori test of combined evaluation, the results of the four methods were combined with the improved fuzzy Borda method to obtain the combined evaluation result. The results are as follows. First, the economic and opening-up environment, the infrastructure and public service environment, and the agricultural production environment are the main aspects influencing the Central Asian agricultural investment environment. Second, according to the post test, the combined evaluation model based on the improved fuzzy Borda method is effective. Third, in the spatial dimension, the average combined evaluation value of Kazakhstan's agricultural investment environment is the highest (254.056); Uzbekistan (131.110) and Kyrgyzstan (107.158) come next; Tajikistan (17.673) and Turkmenistan (15.886) are the lowest. Fourth, in the temporal dimension, the agricultural investment environments of Kazakhstan, Kyrgyzstan and Uzbekistan have generally improved, while those of Tajikistan and Turkmenistan are relatively worse and unstable. In addition, this paper analyzes the reasons that lead to the differences in agricultural investment environment.

• F323.9
# Inverting a colormap in pgfplots I am trying to obtain an inverted version of a given colormap to use in the colorbar of a plot. The code below achieves this specifically for the blackwhite colormap. However, I was looking for a more general method which would only require the name of the colormap to be inverted. \documentclass{standalone} \usepackage{pgfplots} \begin{document} \begin{tikzpicture} \begin{axis}[colorbar, colormap={}{ gray(0cm)=(1); gray(1cm)=(0);}] \end{axis} \end{tikzpicture} \begin{tikzpicture} \begin{axis}[colorbar, colormap={}{ gray(0cm)=(0); gray(1cm)=(1);}] \end{axis} \end{tikzpicture} \end{document} • If I understand your question correctly. You could define a new colormap named {<nameINV>} via colormap={<name>}{<color specification>} in pgfplotsset{...} and activate it by using colormap name=<nameINV>. See pages 87-88 in PGFPLOTS for detail. – Jesse Oct 29 '13 at 5:21 • colormap={<name>}{<color specification>} is used in the example above, but this has the inconvenience of having to manually define the colour specification. I was looking for something that would provide a general solution for inverting any existing colormap. For example, in Matlab, this is achieved by flipud(colormap). – Pedro Oct 29 '13 at 8:42 ## EDIT Recent versions of pgfplots allow to answer the question in simpler form without resorting to custom macro coding. 
Here is the answer based on a recent version of pgfplots:

\documentclass{standalone}
\usepackage{pgfplots}
\usepgfplotslibrary{colormaps}
\begin{document}
\pgfplotsset{
%colormap={X}{ gray(0cm)=(1); gray(1cm)=(0);},
colormap/winter,
}
\begin{tikzpicture}
\begin{axis}[colorbar]
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[colorbar,
colormap={reverse winter}{
indices of colormap={
\pgfplotscolormaplastindexof{winter},...,0 of winter}
},
]
\end{axis}
\end{tikzpicture}
\end{document}

Details about this approach can be found in the pgfplots manual, subsection "Building Colormaps based on other Colormaps".

## This here is the original answer

One could write such a utility macro as follows. Note that this comes without any warranty, i.e. if the internals might change sometime, this will break (although it is unlikely that they will change soon).

\documentclass{standalone}
\usepackage{pgfplots}
\usepgfplotslibrary{colormaps}
\makeatletter
\def\customrevertcolormap#1{%
\pgfplotscolormapassertexists{#1}%
\pgfplotsarraycopy{pgfpl@cm@#1}\to{custom@COPY}%
\c@pgf@counta=0
\c@pgf@countb=\pgfplotsarraysizeof{custom@COPY}\relax
\c@pgf@countd=\c@pgf@countb
\advance\c@pgf@countd by-1
\pgfutil@loop
\ifnum\c@pgf@counta<\c@pgf@countb
\pgfplotsarrayselect{\c@pgf@counta}\of{custom@COPY}\to\pgfplots@loc@TMPa
\pgfplotsarrayletentry\c@pgf@countd\of{pgfpl@cm@#1}=\pgfplots@loc@TMPa
\advance\c@pgf@counta by1
\advance\c@pgf@countd by-1
\pgfutil@repeat
%\pgfplots@colormap@showdebuginfofor{#1}%
}%
\makeatother
\begin{document}
\pgfplotsset{
%colormap={X}{ gray(0cm)=(1); gray(1cm)=(0);},
colormap/winter,
}
\begin{tikzpicture}
\begin{axis}[colorbar]
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
%\customrevertcolormap{X}
%\customrevertcolormap{jet}
\customrevertcolormap{winter}
\begin{axis}[colorbar]
\end{axis}
\end{tikzpicture}
\end{document}

• This only seems to work for the colormap hot. For other colormaps, say winter, the macro fails, probably because \pgfplotscolormapassertexists{winter} gives an undefined error. Any ideas? – Pedro Oct 30 '13 at 8:44
• The color maps need to be defined when you invoke the command.
In my example, you can include the X colormap by uncommenting the associated lines. You can also say colormap/winter right where the X colormap is currently uncommented - that will work. – Christian Feuersänger Oct 30 '13 at 19:24 • Using a key such as colormap/winter would be better than having to manually define each colormap, but I can't get it to work. Could you please update the example for this case? – Pedro Oct 30 '13 at 22:04 • see my edit. Note the use of \usepgfplotslibrary{colormaps} – Christian Feuersänger Oct 31 '13 at 18:35
Percentage difference tells us to what extent a reference value differs from a measured value, expressed as a percentage. It is the difference between the two given numbers divided by their average. The percentage difference is given by

Percentage difference = $\frac{Reference\ Number - Relative\ Number}{\frac{Reference\ Number + Relative\ Number}{2}}$ $\times$ 100 %

In terms of an input reference number V1 and relative number V2, it is given by

Percentage difference = $\frac{V_1 - V_2}{\frac{V_1 + V_2}{2}}$ $\times$ 100 %

The Percentage Difference Calculator is an online tool for calculating the percentage difference. You just have to enter the given values V1 and V2 to get their percentage difference instantly.

## Percentage Difference Steps

Let's see how to find the percentage difference:

Step 1: Read the given problem and note down the reference number V1 and the relative number V2.

Step 2: To calculate the percentage difference use the formula

Percentage difference = $\frac{V_1 - V_2}{\frac{V_1 + V_2}{2}}$ $\times$ 100 %

Substitute the values into the formula above to get the answer.
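The two steps above translate directly into code. A minimal sketch in Python (my own illustration; note that the formula as given is signed, so swapping V1 and V2 flips the sign — many references take the absolute value instead):

```python
def percentage_difference(v1, v2):
    """Difference of v1 and v2 divided by their average, times 100."""
    average = (v1 + v2) / 2
    return (v1 - v2) / average * 100

# difference 10 over average 25 gives 40 %
print(percentage_difference(30, 20))  # → 40.0
```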
# Re: [HM] Malfatti's problem

Subject: Re: [HM] Malfatti's problem
From: Antreas P. Hatzipolakis (xpolakis@otenet.gr)
Date: Thu Mar 23 2000 - 03:58:08 EST

[Kotera Hiroshi]
> In 1803, G.F. Malfatti proposed the problem of cutting three right
> circular cylinders of maximum total volume from a given right
> triangular prism.

[John Conway]
> Let's call this problem (1).

[Kotera Hiroshi]
> The problem was intuitively reduced to the following:
> To inscribe three circles in a given triangle so that each circle
> will be tangent to two sides of the triangle and to the other two
> circles.

[John Conway]
> Let's call this problem (2).

[Kotera Hiroshi]
> In "A Survey of Geometry", H. Eves said that Malfatti gave a prolix
> and incomplete analytical solution of the reduced problem.

[Antreas P. Hatzipolakis]
> I don't know which is H. Eves' source, but mine doesn't say that
> Malfatti's solution was wrong.

[John Conway]
> What Malfatti did correctly was to solve problem (2). But this
> doesn't happen to give the solution to problem (1), for which
> the incircle is one of the 3 circles involved. Get it??

[Antreas P. Hatzipolakis]
..................................................

Sure!! :-)

The first to point out that problem (1) is not equivalent to problem (2) were Lob and Richmond [1], by means of the counterexample of the equilateral triangle. Michael Goldberg [2,3] showed that the solution of Problem (2) is NEVER the best solution for Problem (1). As Ogilvy [4, p. 147] remarks: "Goldberg's conclusions are based on calculations and graphs; a purely mathematical proof is doubtless difficult and has not yet been published." It was not until 1992 that a COMPLETE solution of (1) appeared, by Zalgaller and Los' [5].

References:

[1] Lob, H. - Richmond, H. W.: On the Solution of Malfatti's Problem for a Triangle. Proc. London Math. Soc. 2 (1930) 287-304.
[2] Goldberg, M.: On the Original Malfatti Problem. Math. Mag. 40 (1967) 241-247.
[3] Goldberg, M.: The Converse Malfatti Problem. Math. Mag. 41 (1968) 262-266.
[4] Ogilvy, C. Stanley: Excursions in Geometry. Dover, 1990 [First publ.: Oxford U. Press, 1969]
[5] Zalgaller, V. A. - Los', G. A.: Solution of the Malfatti Problem. (Russian) Ukr. Geom. Sb. 35, 14-33 (1992). Translation: J. Math. Sci., New York 72, No. 4, 3163-3177 (1994)

The authors give the solution of the Malfatti problem: to place three non-overlapping discs of maximal total area into a triangle.

The main result: Let $2\alpha$, $2\beta$, $2\gamma$ be the angles of the triangle $ABC$, where $0 < \alpha \leq \beta \leq \gamma < {\pi \over 2}$. Let $K_1$ be the circle inscribed in the triangle $ABC$, $K_2$ the circle tangent to $AB$, $AC$ and $K_1$, and $K_3$ the circle either tangent to $AB$, $BC$ and $K_1$, for $\sin \alpha \geq \operatorname{tg} {\beta \over 2}$, or tangent to $AB$, $AC$ and $K_2$ otherwise. The discs bounded by the circles $K_1$, $K_2$, $K_3$ are the solution of the Malfatti problem. [ P. Burda (Ostrava) ] (From Zbl)

Antreas

This archive was generated by hypermail 2b28 : Thu Mar 23 2000 - 11:01:46 EST
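The Lob–Richmond counterexample mentioned above can be checked with a little arithmetic. For an equilateral triangle of unit side, the three congruent Malfatti circles have radius (√3 − 1)/4, while the "greedy" arrangement — the incircle plus two circles wedged into corners — covers strictly more area. A numeric sketch (the radii below come from elementary geometry worked out here, not from the cited papers):

```python
import math

# Malfatti circles in a unit equilateral triangle: three congruent circles,
# each tangent to two sides and to the other two circles.
r_malfatti = (math.sqrt(3) - 1) / 4
area_malfatti = 3 * math.pi * r_malfatti**2

# Greedy packing: the incircle plus two circles, each tangent to two sides
# at a corner and externally tangent to the incircle.
r_in = 1 / (2 * math.sqrt(3))
r_corner = 1 / (6 * math.sqrt(3))
area_greedy = math.pi * (r_in**2 + 2 * r_corner**2)

# the greedy packing wins by a bit over 1 %, so the Malfatti circles
# do not solve the maximal-area problem (1)
print(round(area_malfatti, 4), round(area_greedy, 4))
```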
Mechanics problem: a ball rolling off the edge of a table

1. Mar 17, 2005

ribod

I'm requesting help with a physics problem I have made up.

Let's say we put a ball on the edge of a table, with the center of the ball a teeny weeny bit outside the edge of the table. In other words, if we have a coordinate system, the edge of the table is at (0,0) and the center of the ball is an extremely small bit to the right of (0,r) (where r is the radius of the ball). It looks something like this:

''''''''''''''''''''''''ball
_________(_'''')
table'''''edge^

What will happen now is that the ball will roll down over the edge, to the right, because of gravity. My question is, how can I set up two functions for the movement of the center of the ball -- x(t) and y(t), where t is the time -- valid only while the ball is rolling over the edge?

My thinking is that the movement of the ball will follow the shape of the ball. At any given coordinate, the force affecting the ball can be calculated simply by taking the gravity vector downwards and the normal vector pointing from where the ball touches the edge to the center of the ball, and from these getting the resultant vector. This resultant will be a tangent to the ball's circular shape at the point where the ball touches the edge. This means that the acceleration will change over time, and thus is not constant.

Let's say the gravitational acceleration is a constant G, the radius of the ball is r, the center of the ball is (x,y), and the coordinates where the ball touches the edge of the table are (h,H). We ignore air resistance, and such things that can be ignored.

I cannot use, for example, s=vt, because v is not constant. I cannot use s=ut+at^2/2, because the acceleration is not constant either.

so... How will the ball move with time? How do I do the equations?

2.
Mar 17, 2005

Crosson

The answer comes from looking more deeply at Newton's 2nd law: F = ma

Write this as:

$$\Sigma F_x = m\frac{d^2 X(t)}{dt^2}$$

$$\Sigma F_y = m\frac{d^2 Y(t)}{dt^2}$$

Now you need to find the force as a function of position and velocity!

$$\Sigma \vec{F}(\vec{r},\vec{v})$$

Using Newton's 2nd, you will have a set of (coupled) differential equations.

$$F_x (X,Y,V_x,V_y) = m\frac{d^2 X(t)}{dt^2}$$

$$F_y (X,Y,V_x,V_y) = m\frac{d^2 Y(t)}{dt^2}$$

Then you have to know how to solve differential equations.

3. Mar 17, 2005

ribod

OK, could you explain it in more detail, please? First of all, what is d? Then how do you get ma = d^2X(t)/(dt^2)?

4. Mar 17, 2005

reilly

The keys are the constraints. As you have set it up, there is gravity acting on the CM, and a force exerted by the table edge on the ball. (Often this problem is done starting with the ball rolling on the table toward the edge.) Ask yourself, does the ball roll down the edge as it moves under gravity? This is, more or less, a standard constraint problem, best done with a Lagrangian approach and Lagrange multipliers.

Regards, reilly Atkinson

5. Mar 17, 2005

Loren Booda

Is there ever a discontinuity in the ball's motion?

6. Mar 17, 2005

Crosson

Think of "d" like you think of a "change in", as in velocity = dx/dt. dx/dt is called the "derivative" of x with respect to t, and it means "the instantaneous rate of change of x". I am not trying to sound mean, but you cannot solve this problem, because you have not studied calculus.

7. Mar 18, 2005

Antiphon

Force the ball to not slip on the corner as it rolls off. The equations for the center of the ball become x = r cos(theta), y = r sin(theta), where the origin is on the corner of the table. As for how it moves with time, you need the above posts to help you out with the forces. It's not a simple problem.

8.
Mar 18, 2005

whozum

Any x component the ball experiences will be a result of the normal force from the table pushing the ball out of the table's way as the ball falls. This will be the most difficult part to figure out, though it will still be a function of time, since the ball drops at a constant acceleration.

x(t) = some function of y(t), where y(t) = g*sin(theta)

Notice sin(theta) will not be constant, since the ball experiences different forces as it passes the table. For every instant t where the ball is pulled down, the table exerts a force equal in magnitude to the force imparted on it by gravity. There is a torque applied to the ball; it will spin around its CM as it falls off the table. As to how to calculate each of these components, I'm not really sure how you would go about doing so.

9. Mar 19, 2005

ribod

Crosson, I know what a derivative is. If it had said dx/dt I would have known it was the derivative. However, I don't remember any use of d squared. It was some years ago that I studied this. I found your formula online, however, and I want to make sure whether there is an algebraic solution to my problem. Do I have to use a numerical method?

10. Mar 19, 2005

Crosson

What is it about the motion that is important? The method I described will give you all the information about the ball's position at any time, but if all you want to know is the speed it falls with, or how many times it spins, that problem is easier.

11. Mar 19, 2005

Loren Booda

Has anyone considered introducing the simplification of zero friction?

12. Mar 19, 2005

ribod

Crosson, as I understand it you showed that s''=v'=a? This doesn't make a function of time. Can anyone solve the problem? It might be trickier than you think if you don't really solve it.

13. Mar 20, 2005

Crosson

That's why I said more than that.
Realizing that acceleration is defined as a derivative does not solve problems, but Newton's law does: F = ma

$$F(x,v) = m \frac{d^2 X}{dt^2}$$

Can you find the forces as a function of position and velocity? Let me give you an example. Suppose we are talking about a spring-mass setup. Then the force due to the spring is proportional to how stretched out it is: F = -k*x(t)

So to determine the motion we just use:

$$F(x,v) = m \frac{d^2 X}{dt^2}$$

$$-kx = m \frac{d^2 X}{dt^2}$$

This is a functional equation (where the function x(t) is the unknown). We can't solve for x(t) using algebra, because we have to deal with the second derivative of x. We need a function x, -k times which will equal m times its second derivative. It turns out the solution is:

x(t) = A cos(wt) + B sin(wt), with w = sqrt(k/m)

Springs are oscillators!

14. Mar 20, 2005

ribod

Yes, I can make a function of position, F(y) or F(x), but I have no idea how you get from -kx = m*a to x(t) = A cos(wt) + B sin(wt) (let alone where A and B came from).

15. Mar 20, 2005

Crosson

Good point, why do something at all if I am not going to do it right?

$$-kX(t) = m \frac{d^2 X(t)}{dt^2}$$

Look at what this equation says: we need a function x which is (up to positive constants) the negative of its second derivative. Most functions do not have this property. For example, 5t^3 is not equal to the negative of its second derivative. Neither is e^t, ln(t), or t^(-2/3). But sin(t) and cos(t) are the negatives of their second derivatives! So they are the "solution" to the above differential equation. (We have uniqueness theorems that tell us these are the only solutions; A and B are fixed by the initial position and velocity.)

I think this problem is one of the most beautiful of them all, judged by its power and simplicity.

16. Mar 20, 2005

Loren Booda

Crosson, why didn't I think of it?

17. Mar 20, 2005

whozum

Crosson: Is that what differential equations is pretty much all about?

18. Mar 21, 2005

Crosson

Yes, this is a differential equation.
Notice we guessed the solution; applying (difficult) special techniques to solve them is what differential equations is all about.

19. Mar 22, 2005

PBRMEASAP

What did you get? I don't know how to set up force equations at the edge of the table. If you imagine that the ball is rolling off a table with a rounded "edge" (like a quarter-circle), and require that the ball is rolling without slipping, you can find the point at which the ball loses contact with the table in terms of the initial velocity. This is when the force of gravity exactly balances the centrifugal force, which makes the normal force zero. However, if the initial velocity is too large, the ball may lose contact immediately upon reaching the rounded part. Here is the condition that must be satisfied for the ball to stay in contact with the rounded part for some finite time:

$$\frac{V_0^2}{gR} \leq 1$$

where V_0 is the initial speed of the ball and R is the radius of the rounded edge. As R approaches zero, as in your case, no finite velocity will allow the ball to stay in contact with the edge.

20. Mar 22, 2005

ribod

My reasoning, which can be illustrated easily visually by using a CD rolling off the table: the corner is like a point, and does not affect the geometry of the movement. Since the CD is round, however, the movement will be the same as if a point is rolling off a sphere.

Applying the force is easy. At any position of the ball, or CD, with the force down, the ball can't move down, so it has to move as much down as it can. And that is always along the tangent line at the point where the ball touches the table. The length of this vector is calculated the same way as when a ball is rolling on a slope.

Let's say g is gravity, pointing down, with the center of the ball at (0,0) in a coordinate system. That means a gravity vector ends at (0,g). Geometry shows that the normal force will always point from the center of the wheel towards the point where the ball hits the edge.
This gives you two coordinates to calculate the line of the normal vector. If you don't trust this, you can just find the line of the normal knowing it passes through the y axis at x=0, y=g, and using the coordinates where the ball touches the edge. This gives us a function for the normal in the form of:

y = kx + g

That gives you two lines, gravity down and the normal, intersecting at (0,g). Now draw the resultant force line from (0,0) to where it intersects with the normal force. This coordinate will be the vector of the resultant force, at any coordinate. To find out the value, we simply use y = kx + g and y = bx, where b is the coefficient for the line that represents the direction of the resultant force. Both y's and x's are equal, of course, because we want to know where these two lines meet. Solving this:

kx + g = bx
x = g/(b-k)

gives us the force on the x axis. For the y axis you simply do:

x = (y-g)/k; x = y/b
y/b = (y-g)/k; yk = yb - gb; gb = yb - yk = y(b-k)
y = gb/(b-k)

Now I can simply find out how a force relates to the distance of x or y, because b and k use x and y. The normal force's line:

k = (y-H)/(x-h)

where (x,y) is the center of the wheel, and (h,H) are the coordinates of the contact point. Also:

y = kx + g; k = (y-g)/x

b = -1/k, because the normal is perpendicular to the resultant.
b = x/(g-y)

Since we know how x relates to y at this point, we make these formulas depend only on y:

y = bx; x = y/b
k = (y-g)/x = (y-g)/(y/b) = -1/b
b = -sqrt(-y)/sqrt(y-g) OR b = sqrt(-y)/sqrt(y-g)

so:

k = -1/b = +sqrt(y-g)/sqrt(-y) OR -sqrt(y-g)/sqrt(-y)

Now we insert these formulas into the force formula for y: y = gb/(b-k). This y is not the same as above, though, so I will call it f:

f = gb/(b-k)

Inserting y:

f = g(sqrt(-y)/sqrt(y-g))/((sqrt(-y)/sqrt(y-g))-(-sqrt(y-g)/sqrt(-y)))
OR
f = g(-sqrt(-y)/sqrt(y-g))/((-sqrt(-y)/sqrt(y-g))-(sqrt(y-g)/sqrt(-y)))

...

I can edit this post later to explain some details, too tired now. BTW Crosson, I get it now, thanks for your help.
I did find an alternate solution however, by using s = v0*t + a*t^2/2 and finding the average acceleration with the formula for f(y).

Last edited: Mar 22, 2005
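For what it's worth, the ODE route sketched in the thread can be carried out numerically. The sketch below (my own formulation, not from any post here) assumes a uniform solid ball that pivots about the corner without slipping until contact is lost (a real ball also slips near the end): the center moves on a circle of radius r, the angle from the vertical obeys θ'' = (5/7)(g/r)·sin θ, and the ball leaves the corner when the normal force m(g cos θ − r θ'²) drops to zero — energy conservation predicts departure at cos θ = 10/17:

```python
import math

g, r = 9.81, 0.05         # gravity (m/s^2) and ball radius (m), illustrative values
theta, omega = 1e-4, 0.0  # angle from vertical; start almost balanced, at rest
dt = 1e-5

def acc(th):
    # rolling without slipping about the corner: I_corner = (7/5) m r^2,
    # gravity torque m g r sin(th)  =>  th'' = (5/7) (g/r) sin(th)
    return (5.0 / 7.0) * (g / r) * math.sin(th)

# semi-implicit Euler until the normal force m (g cos(th) - r omega^2) vanishes
while g * math.cos(theta) - r * omega**2 > 0:
    omega += acc(theta) * dt
    theta += omega * dt

# energy conservation predicts cos(theta) = 10/17 ~ 0.588 at departure
print(math.cos(theta))
```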
# Linux background changing daemon in Rust

I'm working on a program that updates the background of an x11 desktop at a specified interval. However, it eats large amounts of my CPU just sitting idle. I know this is due to the main loop running faster than I want it to. I've used a couple of shameful hacks to quell usage down to about 15% but now I am searching for a long-term solution. Here is the function that contains the main loop:

    pub fn run_mapped(&mut self) {
        info!(self.logger, "Running in mapped mode!");
        let wait_d = Duration::new(0, 500);
        let long_wait_d = Duration::new(0, 750);
        loop {
            self.img_dir.1 = Instant::now();

            let cd = self.x.get_current_desktop();
            // change background if timeout is reached
            if self.since_timeout.elapsed() > self.timeout {
                self.change_backgrounds();
                let ref mut current_bg = self.image_map[cd];
                self.since_timeout = Instant::now();
                self.x.change_background(current_bg);
            }
            match self.x.next_event() {
                Some(_) => {
                    let ref mut current_bg = self.image_map[cd];
                    self.x.change_background(current_bg);
                },
                None => {
                    sleep(wait_d);
                }
            }
            sleep(long_wait_d);
        }
    }

Not shown, but I have additionally set the process priority lower to give more important processes priority over this one. I added the sleep calls to prevent >50% CPU usage, but it still consumes ~20%, which seems a little excessive for what I want this program to do. I'm looking at some of the CPU usage from other programs I'm running in my DE (candybar) and they're nowhere near as resource-hungry as my program.

All that being said, my two questions are:

1. Why is this? Meaning, why is my program using so much CPU since it is basically 'sleeping' most of the time (in its current state)?
2. How can I fix this? The only thing I can think of now is scrapping it and abusing cron jobs to the same effect.

The full code can be found on Github.

• Does Rust have a Timer you can use that fires an event periodically?
– user34073 Dec 30 '16 at 5:48
• I don't believe so, and the only external timing libraries appear to be basic FPS counters :/. But I don't think that would work, because it needs to monitor "_NET_CURRENT_DESKTOP", which could change at any time. But I could be wrong? – user7008548 Dec 30 '16 at 5:55

> since it is basically 'sleeping' most of the time

That's not really the most accurate thing...

    impl Duration {
        fn new(secs: u64, nanos: u32) -> Duration
    }

Duration's second argument is nanoseconds, thus your "long wait" is 750 nanoseconds. That's 0.75 microseconds / 0.00075 milliseconds / 0.00000075 seconds. For reference, a cycle of a 3 GHz computer is 0.3333 nanoseconds. If your code takes zero time, you are waking over a million times per second! This is the very definition of a busy wait loop.

So how do you fix it? You need to rearchitect. You state:

> and abusing cron jobs to the same effect

Cron jobs have a resolution of a minute (8×10^7 times bigger!). If you can live with that, sleep for that long.

Another solution is to move to an event-driven system. You mention:

> it needs to monitor _NET_CURRENT_DESKTOP which could change at anytime.

If that's the case, you'd have two sources of events: a timer and whatever provides the _NET_CURRENT_DESKTOP event. The current front-runners for this type of code in Rust are futures and mio. I don't know exactly how the X code works, but it's probably event driven (GUI code usually is), and there's probably some mio/futures periodic timer. You "simply" need to configure both of them and then respond whenever either triggers.
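The event-driven idea is language-neutral: instead of sleeping in a loop, block on the event source with a timeout, so the process wakes only when an event arrives or the timer expires. A minimal sketch in Python (an ordinary socket pair stands in for the X connection here; the names and the 0.1 s interval are made up for illustration):

```python
import selectors
import socket

TIMEOUT_S = 0.1  # hypothetical background-change interval

# the X connection is itself a file descriptor; a socketpair stands in for it
event_src, event_sink = socket.socketpair()
sel = selectors.DefaultSelector()
sel.register(event_src, selectors.EVENT_READ)

def wait_once():
    """Block until an event arrives or the timeout expires -- no busy loop."""
    ready = sel.select(timeout=TIMEOUT_S)
    if ready:
        return event_src.recv(4096)  # e.g. a _NET_CURRENT_DESKTOP change
    return None                      # timer expired: rotate the background

# nothing queued: returns None after ~0.1 s of *sleeping*, not spinning
print(wait_once())

# queue an "event" and observe it is picked up immediately
event_sink.send(b"desktop-changed")
print(wait_once())
```

The key design point is that the kernel, not the program, decides when to wake the process, so idle CPU usage drops to essentially zero.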
# State rent

This article discusses state rent, in particular state rent with asynchronous garbage collection. Compared with traditional state rent with deterministic garbage collection, asynchrony allows more predictable block processing performance estimation, and freer node implementations.

## Special thanks

The earliest state rent work was done by Alexey Akhunov; it highly inspired this article.

## State rent with asynchronous garbage collection

Consider a simple account-based blockchain like Ethereum or Substrate. The world state consists of accounts. Accounts consist of some basic information such as balance, nonce and code, together with an account storage. Over the lifetime of a blockchain, the collection of accounts can become huge, while a lot of them remain unused. The goal of state rent is to recycle storage space occupied by unused accounts.

Some state rent systems, like the one proposed for Ethereum by Alexey Akhunov, use a deterministic garbage collection routine. Accounts pay a certain amount of rent to exist for a period of time. This causes the account balances to gradually decrease. If an account's balance is too low to pay the rent, it is marked to be evicted. An operational transaction is then issued to deterministically evict the account from the world state.

Deterministic garbage collection presents some challenges:

• Accounts constantly need to pay rent. While the number of rent payments can be reduced (by charging only when an account is updated), this still causes unnecessary state updates.
• Eviction is costly. All accounts that need to be evicted must be evicted deterministically. While we can limit the number of evictions per block, this can still cause massive merkle tree updates.

State rent with asynchronous garbage collection aims at solving those challenges.

### Protocol Rule

Asynchronous garbage collection requires an append-only merkle tree. Existing accounts can be modified.
However, new accounts can only be appended at the end of the merkle tree. Any index-based merkle tree is suitable for this purpose, but not a key-value-based merkle tree. Below, we use an index-based binary merkle tree to demonstrate asynchronous garbage collection, but note that the same system can be used for a hex merkle tree.

The protocol rules that constrain block processing are as follows:

• Each account has a keep-alive period. Pre-defined operations on a blockchain can extend an account's keep-alive period.
• If an account is accessed after its keep-alive period, the transaction that accessed it must be accompanied by a witness proof of the account, with a reinitiation fee and operations to reinitiate the corresponding storage values. Otherwise, the whole transaction fails with no state changes, other than consuming all gas.

We don't explicitly define garbage collection within the protocol rules, but the rules above make it safe for a node implementation to perform it.

### Garbage collection

Once an account reaches the end of its keep-alive period, a node implementation can safely (but optionally) evict it. It does so by removing the corresponding merkle node from the database; no merkle hash change is required. Consider the following merkle tree, where we have a world state of 4 accounts D, E, F, G:

       A
      / \
     B   C
    / \ / \
   D  E F  G

When D is evicted, no garbage collection is possible. Then, when E is also evicted, merkle nodes D and E can be removed from the database, leaving only B. No operation is able to modify evicted accounts and the merkle tree is append-only for new accounts, so B is fixed and will never be modified, unless D or E is reinitiated.

If the node has not invoked garbage collection, then it can use the account information to check the keep-alive period. If an access reaches an evicted node, then the node knows that the keep-alive period must have passed, resulting in reversion of the transaction.
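The pruning argument can be made concrete with a toy binary merkle tree. The sketch below (my own illustration, not part of any protocol specification) builds the four-leaf tree above, evicts the D and E leaves from the node store, and checks that the remembered hash of B still lets the root be recomputed unchanged:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# leaves D, E, F, G (stand-ins for account data)
store = {name: h(name.encode()) for name in ("D", "E", "F", "G")}

# interior nodes: B = parent of (D, E), C = parent of (F, G), A = root
store["B"] = h(store["D"] + store["E"])
store["C"] = h(store["F"] + store["G"])
root = h(store["B"] + store["C"])

# evict accounts D and E: a node may drop their leaves and keep only B,
# because the tree is append-only and nothing below B can ever change
del store["D"], store["E"]

# the root is still recomputable from the pruned store -- no hash changed
root_after_pruning = h(store["B"] + store["C"])
print(root_after_pruning == root)  # True
```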
# How are censorings before the first event dealt with in survival analysis

If you have a dataset, sorted into ascending order by survival time (the minimum of censoring and event time), and this dataset contains at least one censoring before the first event (so that the start of the censoring indicator looks similar to 0 0 0 1, for example), how are these censored observations dealt with? Do they count towards the risk set for the first event? Do they contribute to the estimation of survival rates or model fitting at all? Any information on how different methods deal with censorings before the first event would be appreciated.

The risk set for the first event only includes cases still at risk at that time, so any cases censored earlier than that time do not enter that (or any) risk set, at least in standard Kaplan-Meier or Cox proportional hazards analyses. You can check this by comparing results on a data set where you either include or exclude such early-censored cases.

It's possible that some methods of fitting parametric models to survival data might include information on early-censored cases, but I don't have expertise in that type of modeling. Others on this site are better equipped to address parametric survival modeling.

To complete @EdM's answer, if an observation is right censored before the first observed event time, it will contribute to the log likelihood function, i.e. it will contribute $\log(S(t_i | \theta))$ where $\theta$ is the parameter of interest. However, this contribution will be very minimal; for most values of $\theta$ implied by the rest of the data, $S(t_i | \theta) \approx 1$, implying little impact on the likelihood function, and thus very little impact on the estimation of $\theta$ itself.
To tie back into EdM's answer, the reason these observations do not affect the semi- and non-parametric models is that in those cases, $S(t_i|\hat \theta) = 1$, as assigning probability mass before the first event would only decrease the likelihood function.
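The risk-set point is easy to check by hand. In the sketch below (a toy illustration, not any particular package's API), the Kaplan–Meier product-limit estimate is computed for a dataset with observations censored before the first event and for the same dataset with those observations dropped; the estimates coincide, because the early-censored cases never belong to any risk set:

```python
def kaplan_meier(data):
    """data: list of (time, event) with event=1 for death, 0 for censoring.
    Returns {event_time: S(t)} via the product-limit formula."""
    surv, s = {}, 1.0
    for t in sorted({t for t, e in data if e == 1}):
        at_risk = sum(1 for time, _ in data if time >= t)
        deaths = sum(1 for time, e in data if time == t and e == 1)
        s *= 1 - deaths / at_risk
        surv[t] = s
    return surv

# event indicator starts 0, 0 before the first event at t=4
full = [(1, 0), (2, 0), (4, 1), (5, 0), (6, 1), (7, 1)]
trimmed = [(t, e) for t, e in full if not (e == 0 and t < 4)]

print(kaplan_meier(full))
print(kaplan_meier(trimmed))
# identical: the cases censored at t=1 and t=2 never enter a risk set
```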
# Is the deceleration of a body coming to rest due to friction completely smooth right up to the end? Referring to the (empirical) laws of (dry) friction from Wikipedia it would appear that the deceleration is constant until the speed is zero. "Coulomb's Law of Friction: Kinetic friction is independent of the sliding velocity." I'm wondering though whether there is a threshold (speed) below which the body will suddenly stick (i.e. stop), similar to the static friction (force) threshold before which motion doesn't start. • This question could be rephrased this way: A bonding occurs between objects that are at rest and in contact with each other (adhesion). This is the reason that the coefficient of static friction is higher than the coefficient of kinetic friction (due to "peaks and valleys" of the surfaces of the materials). Do the two objects have to be totally at rest with respect to each other for this bonding to occur?? – Jack R. Woods May 23 '17 at 14:00 • Intuition tells me that it would depend on the materials and the speed would be extremely slow. Otherwise, the bonding would never take place at all. – Jack R. Woods May 23 '17 at 14:13 • So the question boils down to: What is the maximum relative speed at which adhesion bonds can form? This is quite an interesting question. – Steeven Jul 5 '17 at 8:29 • I would assume that dependent on the surface and the object, you could conclude that there would be a sticking action at the point when the object is moving slow enough to form bonds with the surface. Essentially, it would make sense that when the kinetic friction becomes static, there would be a sticking action, but static friction only exists in still systems, so there is an argument either way. – BooleanDesigns Feb 7 '18 at 15:22 • Re: the relative speed at which bonds can form — the formation of bonds, i.e. rearrangement of electrons, once the atoms are more or less in place, is fast. Significant fraction of c sort of fast, I think. 
– colinh Feb 19 '18 at 14:53

I think this is a complex situation that cannot be resolved by considering only rigid bodies. IRL everything is "deformable", and as a result not all parts of a sliding body may move with exactly the same speed. As a body slides under friction, it is reasonable to assume that the force of friction deforms the body in a shear fashion like this:

Now, at the moment when the object stops (or rather starts to stop), the friction force might instantaneously jump up in order to remove enough momentum from the bottom part of the object to stop it (at an instant). But the rest of the body will still be in motion, and a damped oscillation would occur during which the contact friction force will vary. In the end the motion and friction will die down.

Qualitatively I would describe the situation as follows: you can imagine the molecules binding up at some very small speed at the bottom, and the top part swinging back and forth for a short amount of time.

Both static and dynamic friction are limited, but to go from speed $v$ to 0 in $t$ seconds requires an average acceleration of ${v\over t}\ \mathrm{m/s^2}$. But there is a limit to how small $t$ can be, because the acceleration is limited by the limit on the friction force. So there are no jumps in the speed, as it will always take some time to change the speed.

• I think by smooth, the OP meant zero jerk, as opposed to the mathematical definition of the term: continuous. – JEB Feb 28 '18 at 17:48
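For reference, under the textbook Coulomb model the deceleration really is constant all the way down: a body sliding with kinetic friction coefficient μ decelerates at μg until v = 0, with no threshold speed. A quick numeric sketch with illustrative values (μ and v0 chosen arbitrarily):

```python
MU = 0.4   # kinetic friction coefficient (illustrative)
G = 9.81   # gravitational acceleration, m/s^2
V0 = 2.0   # initial sliding speed, m/s

# constant deceleration a = MU * G until the body stops
a = MU * G
t_stop = V0 / a            # from v(t) = V0 - a*t = 0
d_stop = V0**2 / (2 * a)   # from v^2 = V0^2 - 2*a*d

print(round(t_stop, 3), round(d_stop, 3))
```

Any sticking threshold of the kind asked about would show up as a departure from this constant-deceleration prediction in the last instants of the motion.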
# More accurate acceleration

1. Mar 17, 2004

### salamander

Hi! I'm quite new here, and I'm not sure whether this should go here or in the math section, so I'll just post it here since I guess nobody really cares anyway. I've been thinking of this for a while, but I can't seem to get it right. (I'm not that good at maths, but I'm learning.)

Consider dropping a rock from a rather high altitude down to the ground. Now, using Newtonian theory, find an expression for the rock's velocity as a function of time that includes the fact that the gravitational attraction must become greater as the distance to Earth shrinks.

Can somebody give me a hint? I know g=MG/r^2. What confuses me is how to integrate time into this expression, since r=r0-gt^2/2.

Finally, I don't know why but I just like these guys:

2. Mar 17, 2004

### Palpatine

You need to set up a differential equation relating the distance from the earth to the acceleration and solve it.

3. Mar 17, 2004

### HallsofIvy

Staff Emeritus

Letting r be the distance from the center of the earth,

$$m\frac{d^2r}{dt^2}= \frac{-GmM}{r^2}$$

That's a non-linear differential equation, but we can use the fact that t does not appear explicitly in it: Let v= dr/dt. Then (chain rule):

$$\frac{d^2r}{dt^2}= \frac{dv}{dt}= \frac{dr}{dt}\frac{dv}{dr}= v\frac{dv}{dr}$$

so the differential equation becomes

$$v\frac{dv}{dr}= \frac{-GM}{r^2}$$

and then separate:

$$v dv= -GM \frac{dr}{r^2}$$

Integrating:

$$\frac{1}{2}v^2= \frac{GM}{r}+ C$$

(Notice that that first integral is the same as $$\frac{1}{2}v^2- GM/r= C$$, conservation of energy, since the first term is kinetic energy and the second potential energy.) That is the same as

$$v= \frac{dr}{dt}= \sqrt{2\frac{GM}{r}+ C}$$

(absorbing the factor of 2 into C), which is also integrable. You can use the fact that $GM/r_0^2 = g$ ($r_0$ is the radius of the earth) to simplify.

Last edited: Mar 17, 2004
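HallsofIvy's first integral can be checked numerically: integrate r'' = −GM/r² from rest at some altitude and verify that ½v² − GM/r stays (nearly) constant down to the ground. A small sketch with illustrative Earth numbers:

```python
GM = 3.986e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6   # Earth's radius, m
r, v = R_EARTH + 100e3, 0.0  # drop from 100 km altitude, at rest
dt = 0.01

E0 = 0.5 * v**2 - GM / r     # conserved quantity (per unit mass)

# semi-implicit Euler integration of  r'' = -GM / r^2  until impact
while r > R_EARTH:
    v += (-GM / r**2) * dt
    r += v * dt

E1 = 0.5 * v**2 - GM / r
print(abs(E1 - E0) / abs(E0))  # relative energy drift: tiny
print(-v)                      # impact speed in m/s (roughly 1.4 km/s)
```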
AST_DELFITS Delete the current FITS card in a FitsChan Description: This routine deletes the current FITS card from a FitsChan. The current card may be selected using the Card attribute (if its index is known) or by using AST_FINDFITS (if only the FITS keyword is known). After deletion, the following card becomes the current card. Invocation CALL AST_DELFITS( THIS, STATUS ) Arguments THIS = INTEGER (Given) Pointer to the FitsChan. STATUS = INTEGER (Given and Returned) The global status. Notes: • This function returns without action if the FitsChan is initially positioned at the " end-of-file" (i.e. if the Card attribute exceeds the number of cards in the FitsChan). • If there are no subsequent cards in the FitsChan, then the Card attribute is left pointing at the " end-of-file" after deletion (i.e. is set to one more than the number of cards in the FitsChan).
# Proof of Definite Integral

Let's assume that $f$ is continuous and positive on the interval $[a, b]$. Then the definite integral $\int^b_a f(x) dx$ represents the area of the region bounded by the graph of $f$ and the x-axis, from x = a to x = b.

First, we partition the interval $[a, b]$ into n subintervals, each of width $\Delta x = (b - a)/n$, such that

$a = x_0 < x_1 < x_2 < \dots < x_n = b$

Then we can form a trapezoid on each subinterval; the area of the ith trapezoid is $[\frac{f(x_{i-1}) + f(x_i)}{2}](\frac{b-a}{n})$. This implies that the sum of the areas of the n trapezoids is

Area $= \frac{b - a}{2n}[f(x_0) + 2f(x_1) + 2f(x_2) + \dots + 2f(x_{n-1}) + f(x_n)]$

$= \frac{b - a}{2n}[f(x_0) + f(x_n) + 2\displaystyle\sum\limits_{i=1}^{n-1} f(x_i)]$

$= \frac{b - a}{2n}(f(x_0) + f(x_n)) + \displaystyle\sum\limits_{i=1}^{n-1} f(x_i)\left(\frac{b - a}{n}\right)$

$= \frac{b - a}{2n}(f(x_0) + f(x_n) - 2f(x_n)) + \displaystyle\sum\limits_{i=1}^{n} f(x_i)\left(\frac{b - a}{n}\right)$

(extending the sum to $i = n$ adds $f(x_n)\frac{b-a}{n} = 2f(x_n)\frac{b-a}{2n}$, which is subtracted back inside the first parentheses)

$= \frac{b - a}{2n}(f(x_0) - f(x_n)) + \displaystyle\sum\limits_{i=1}^{n} f(x_i)\Delta x$

Taking the limit as $n \to \infty$, the first term vanishes and the second is a Riemann sum:

$\lim_{n\to\infty}\frac{b - a}{2n}(f(x_0) - f(x_n)) + \lim_{n\to\infty}\displaystyle\sum\limits_{i=1}^{n} f(x_i)\Delta x = 0 + \lim_{n\to\infty}\displaystyle\sum\limits_{i=1}^{n}f(x_i)\Delta x = \int^b_a f(x) dx$

# Exercise on Relations

Let S and S' be the following subsets of the plane:

$S = \{(x,y) \mid y = x+1, 0 < x < 2\}$ and $S' = \{(x,y) \mid y-x \in \mathbb{Z}\}$

a) Show that S' is an equivalence relation on the real line and that $S \subset S'$.

Proof:

Reflexivity: $x - x = 0 \in \mathbb{Z}, \forall x \in \mathbb{R}$

Symmetry: $z \in \mathbb{Z} \Rightarrow -z \in \mathbb{Z}$, so $x - y \in \mathbb{Z} \Rightarrow y - x \in \mathbb{Z}$

Transitivity: If $x \sim y$ and $y \sim z$, then $z - x = (z - y) + (y - x)$, and thus $z - x$ is the sum of two integers, which implies that $z - x$ is itself an integer.
To show that $S \subset S'$, we note that $y = x + 1 \Rightarrow y - x = 1 \in \mathbb{Z}$.

b) Show that given any collection of equivalence relations on a set A, their intersection is an equivalence relation on A.

Proof: Let $\{R_\alpha\}_{\alpha \in J}$ be a nonempty collection of equivalence relations on $A$ and let $\Omega = \cap_{\alpha \in J} R_\alpha$.

Reflexivity - For each $x \in A$, $(x,x) \in R_\alpha$ for all $\alpha \in J$, so $(x,x) \in \Omega$.

Symmetry - If $(x,y) \in \Omega$, then $(x,y) \in R_\alpha$ for all $\alpha \in J$, hence $(y,x) \in R_\alpha$ for all $\alpha \in J$, and so $(y,x) \in \Omega$.

Transitivity - If $(x,y), (y,z) \in \Omega$, then $(x,y), (y,z) \in R_\alpha$ for all $\alpha \in J$, hence $(x,z) \in R_\alpha$ for all $\alpha \in J$, and so $(x,z) \in \Omega$. $\therefore$ the intersection is an equivalence relation on A.
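The intersection argument in (b) can be illustrated concretely by representing relations on a small finite set as sets of ordered pairs. This is a sketch of mine; the set and the two sample relations are invented for illustration:

```python
from itertools import product

def is_equivalence(A, R):
    # Check reflexivity, symmetry and transitivity of relation R on set A
    reflexive = all((x, x) in R for x in A)
    symmetric = all((y, x) in R for (x, y) in R)
    transitive = all((x, z) in R for (x, y) in R for (w, z) in R if y == w)
    return reflexive and symmetric and transitive

A = {0, 1, 2, 3, 4, 5}
# Two equivalence relations on A: congruence mod 2 and congruence mod 3
R2 = {(x, y) for x, y in product(A, A) if (x - y) % 2 == 0}
R3 = {(x, y) for x, y in product(A, A) if (x - y) % 3 == 0}
assert is_equivalence(A, R2) and is_equivalence(A, R3)

# Their intersection is again an equivalence relation (here it collapses to equality,
# since |x - y| < 6 forces x == y when 6 divides x - y)
assert is_equivalence(A, R2 & R3)
print(sorted(R2 & R3))  # → the diagonal [(0, 0), (1, 1), ..., (5, 5)]
```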
# Tilde

A tilde is a diacritic mark (~) put over a letter (usually a vowel) to indicate nasalization. For example, in Portuguese, ã and õ are nasalized a and o. In Spanish, the tilde over n (ñ) forms a separate letter (called eñe), which represents a palatal nasal (SAMPA J, IPA [ɲ]), pronounced like nh in Portuguese.

In the International Phonetic Alphabet (IPA), the tilde is used to mark nasalization, and is placed above any phone that is nasalized.

A similar symbol, written on the line (ASCII: 126, hex 7E), is used in logic as one way of representing negation: thus ~p means "it is not the case that p".

In Japanese, this symbol is used to indicate ranges: 12 ~ 15 means "12 to 15", ~ 3 means "up to three", and 100 ~ means "100 and greater".

See also punctuation, Õ, Special characters
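In Unicode terms, the over-letter tilde of ã and ñ also exists as a combining character (U+0303 COMBINING TILDE), which NFC normalization composes with the base letter, while the on-the-line tilde is a distinct ASCII character. A quick Python check (my own illustration, not part of the original article):

```python
import unicodedata

# "n" + U+0303 COMBINING TILDE composes under NFC to the precomposed "ñ" (U+00F1)
composed = unicodedata.normalize("NFC", "n\u0303")
print(unicodedata.name(composed))  # → LATIN SMALL LETTER N WITH TILDE

# The on-the-line tilde used in logic and Japanese ranges is plain ASCII 126 (hex 7E)
print(ord("~"))                    # → 126
```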
2019 CMS Winter Meeting

Toronto, December 6 - 9, 2019

Complex Analysis and Operator Theory
Org: Ilia Binder (University of Toronto), Damir Kinzebulatov (Université Laval) and Javad Mashreghi (Université Laval) [PDF]

IEVGEN BILOKOPYTOV, University of Manitoba
Maximum modulus principle for multipliers  [PDF]

In this talk I will show that (not necessarily holomorphic) multipliers of a wide class of normed spaces of continuous functions over a connected Hausdorff topological space cannot attain their multiplier norms, unless they are constants.

THOMAS BLOOM, University of Toronto
Asymptotic zero distribution of random orthogonal polynomials  [PDF]

We consider random polynomials of the form $H_n(z)=\sum_{j=0}^n \xi_jq_j(z)$ where the $\{\xi_j\}$ are i.i.d. non-degenerate complex random variables, and the $\{q_j(z)\}$ are orthogonal polynomials (deg $q_j=j$) with respect to an appropriate compactly supported measure in the plane. The problem is to understand (probabilistically) the behavior of the zeros of $H_n$ as $n\rightarrow\infty.$ Study of the Kac ensemble (when $q_j(z)=z^j$) goes back to the 1950's. I will present recent results on the almost sure convergence and the convergence in probability of the zeros. This is work of Ibraguimov and Zaporozhets, D. Dauvergne and myself.

ALEX BRUDNYI, University of Calgary
TOWARD SOLUTION OF THE MULTIVARIATE CORONA PROBLEM  [PDF]

We present a new approach to the multivariate corona problem for the algebra of bounded holomorphic functions on an open polydisk.

CHENG CHU, Université Laval
Which de Branges-Rovnyak spaces have complete Nevanlinna-Pick property?  [PDF]

Complete Nevanlinna-Pick kernels are related to the Nevanlinna-Pick interpolation problem. Many properties of the Hardy space carry over to other spaces with the complete Nevanlinna-Pick property. A natural question is to decide which reproducing kernel Hilbert spaces have the complete Nevanlinna-Pick property.
We characterize the de Branges-Rovnyak spaces with the complete Nevanlinna-Pick property. Our method relies on the general theory of reproducing kernel Hilbert spaces.

GALIA DAFNI, Concordia University
Extension and approximation for VMO on a domain  [PDF]

In joint work with Almaz Butaev, we study the question of extension of functions of vanishing mean oscillation on a domain $\Omega \subset {\mathbb R}^n$ to VMO(${\mathbb R}^n$). For the case of bounded mean oscillation (BMO), it was shown by P. Jones (1980) that a bounded extension is possible if and only if $\Omega$ is a uniform domain. By suitably modifying Jones' extension we are able to use a single operator to extend BMO and VMO as well as other function spaces. This question turns out to be related to approximation of VMO functions by Lipschitz functions. We also study the question in the local setting.

RICHARD FOURNIER, Dawson College and CRM (Montréal)
On a polynomial inequality of Schur  [PDF]

In this talk, I shall compare a polynomial inequality due to Dryanov, Fournier and Ruscheweyh (see J. Approx. Theory 136 (2005), 84-90) to another one attributed to Isaï Schur (see J. Approx. Theory 182 (2014), 103-109) and observe that they are indeed independent of each other.

PAUL GAUTHIER, Université de Montréal
Universality, polynomial approximation and the Riemann Hypothesis  [PDF]

Two observations. Firstly, the universality of the Riemann zeta function easily yields a conjecture equivalent to the Riemann Hypothesis. Secondly, we recall an interesting and plausible conjecture of Andersson on polynomial approximation, which has been partially confirmed with much effort and which (if correct) suggests the Riemann Hypothesis fails.

Growth of measurably entire functions and related questions  [PDF]

Let $T$ be the action of the complex plane on the space of entire functions defined by translations, i.e. $T_w$ takes the entire function $f(z)$ to the entire function $f(z+w)$.
B. Weiss showed in '97 that there exists a probability measure defined on the space of entire functions which is invariant under this action. In this talk I will present (almost) optimal bounds on the minimal possible growth of functions in the support of such measures, and discuss other growth-related problems inspired by this work. The talk is partly based on joint work with L. Buhovsky, A. Logunov, and M. Sodin.

ISAO ISHIKAWA, RIKEN
On the boundedness of composition operators on reproducing kernel Hilbert spaces with analytic positive definite functions  [PDF]

In this talk, I will explain our result, which says that boundedness of the composition operator of a map implies the map is affine in certain situations. Our problem originally comes from applied mathematics. Composition operators (Koopman operators) are classically investigated in the theory of function spaces and complex analysis, but they have become popular in the context of machine learning and data analysis these days. Moreover, reproducing kernel Hilbert spaces with analytic positive definite functions on Euclidean spaces are utilized in many fields of engineering and statistics. On the other hand, although it is important to establish the relation between the properties of maps and those of their composition operators in order to guarantee theoretical validity, such relations are currently not well understood. In some important situations, we prove that a map must be an affine map if its composition operator is bounded on an RKHS associated with an analytic positive definite function on a Euclidean space.
This is joint work with Masahiro Ikeda (RIKEN/Keio University) and Yoshihiro Sawano (Tokyo Metropolitan University/RIKEN).

ERIC SAWYER, McMaster University
Two weight testing theory  [PDF]

We report on recent results in the theory of two weight testing, aka T1 theorems, including joint works with Tuomas Hytonen, Kangwei Li, Chun-Yen Shen, Ignacio Uriarte-Tuero, Robert Rahm and Brett Wick.

RASUL SHAFIKOV, University of Western Ontario
Polynomially and rationally convex embeddings  [PDF]

I will discuss recent developments in polynomially and rationally convex embeddings of real submanifolds into complex Euclidean spaces.

Grunsky Operator and Inequality for Open Riemann Surfaces with Finite Borders  [PDF]

Consider an open Riemann surface $\Sigma$ of genus $g>0$ with $n>1$ borders, each one homeomorphic to the unit circle. The surface $\Sigma$ can be described as a compact Riemann surface $\mathcal{R}$ of the same genus $g$, from which $n$ simply-connected domains $\Omega_1, \dots, \Omega_n$ are removed; that is, $\Sigma=\mathcal{R}\backslash \cup cl(\Omega_k)$. Fix conformal maps $f_k$ from the unit disc $\mathbb{D}$ onto $\Omega_k$, $k=1, \dots, n$. We may assume each $f_k$ has a quasiconformal extension to an open neighbourhood of $\mathbb{D}$. Let $\mathbf{f}=\left(f_1, \dots, f_n\right)$. I will define the Grunsky operator $Gr_{\mathbf{f}}$ corresponding to $\mathbf{f}$ (equivalently to $\Sigma$) on some Dirichlet spaces when all the boundary curves are quasicircles in $\mathcal{R}$. I will show that the norm of the Grunsky operator is less than or equal to one. This is a generalization of the classical Grunsky inequalities from the planar case to the bordered Riemann surfaces described above. Joint work with E. Schippers and W. Staubach.

IGNACIO URIARTE-TUERO, Michigan State University
Two weight norm inequalities for singular and fractional integral operators in $R^n$.
[PDF]

I will report on recent advances on the topic, related to proofs of T1 type theorems in the two weight setting for Calderón-Zygmund singular and fractional integral operators, with side conditions, and related counterexamples. Mostly joint work with Eric Sawyer and Chun-Yen Shen. The talk will be self-contained and provide a general overview of the area plus some recent advances. Eric Sawyer's talk in this same session will provide more specifics on the latest advances.
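As a toy numerical illustration of the Kac ensemble mentioned in Thomas Bloom's abstract above (my own sketch, not from the talk; the degree and annulus bounds are arbitrary choices): for $H_n(z)=\sum_{j=0}^n \xi_j z^j$ with i.i.d. Gaussian coefficients, the zeros concentrate near the unit circle as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150
coeffs = rng.standard_normal(n + 1)  # Kac polynomial: i.i.d. standard normal coefficients
roots = np.roots(coeffs)             # zeros via companion-matrix eigenvalues

moduli = np.abs(roots)
frac_near_circle = np.mean((moduli > 0.8) & (moduli < 1.25))
print(f"{frac_near_circle:.2f} of {len(roots)} zeros lie in the annulus 0.8 < |z| < 1.25")
```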
Contact

Description

The Contact block can be used to specify parameters related to mechanical contact enforcement in MOOSE simulations. The ContactAction is associated with this input block, and is the class that performs the associated model setup tasks. Use of the ContactAction is not strictly required, but it greatly simplifies the setup of a simulation using contact enforcement. A high-level description of the contact problem is provided here.

This block can be used to specify mechanical normal and tangential contact using several possible models for the physical behavior of the interaction:

• frictionless
• glued
• coulomb (frictional)

Contact enforcement using node/face primary/secondary algorithms is available using the following mathematical formulations:

• kinematic
• penalty
• tangential penalty (kinematic normal constraint with penalty tangential constraint)
• augmented Lagrange
• reduced active nonlinear function set (RANFS)

Constructed Objects

The primary task performed by this action is creating the Constraint classes that perform the contact enforcement. The type of Constraint class(es) constructed depends on the formulation and physical interaction model specified using the formulation and model parameters. Table 1 shows the Constraint classes that can be created for various types of contact enforcement.

Table 1: Constraint objects constructed by ContactAction

In addition to the Constraint classes, several other objects are created, as shown in Table 2.

Table 2: Other objects constructed by ContactAction

Constructed Object | Purpose
ContactPressureAux | Compute contact pressure and store in an AuxVariable
Penetration | Compute contact penetration and store in an AuxVariable
NodalArea | Compute nodal area and store in an AuxVariable

Notes on Node/Face Contact Enforcement

The node/face contact enforcement is based on a primary/secondary algorithm, in which contact is enforced at the nodes on the secondary surface, which cannot penetrate faces on the primary surface.
As with all such approaches, for the best results, the primary surface should be the coarser of the two surfaces.

The contact enforcement system relies on MOOSE's geometric search system to provide the candidate set of faces that can interact with a secondary node at a given time. The set of candidate faces is controlled by the patch_size parameter and the patch_update_strategy options in the Mesh block. The patch size must be large enough to accommodate the sliding that occurs during a time step. It is generally recommended that patch_update_strategy = auto be used.

The formulation parameter specifies the technique used to enforce contact. The DEFAULT option uses a kinematic enforcement algorithm that transfers the internal forces at secondary nodes to the corresponding primary face, and forces the secondary node to be at a specific location on the primary face using a penalty parameter. The converged solution with this approach results in no penetration between the surfaces. The PENALTY algorithm employs a penalty approach, where the penetration between the surfaces is penalized, and the converged solution has a small penetration, which is inversely proportional to the penalty parameter.

Regardless of the formulation used, the robustness of the mechanical contact algorithm is affected by the penalty parameter. If the parameter is too small, there will be excessive penetration with the penalty formulation, and convergence will suffer with the kinematic formulation. If the parameter is too large, the solver may struggle due to poor conditioning.

System Parameter

The system parameter is deprecated and currently defaults to Constraint.

Gap offset parameters

Gap offset can be provided to the current contact formulation enforced using the MechanicalContactConstraint. It can be either secondary_gap_offset (gap offset from the secondary side) or mapped_primary_gap_offset (gap offset from the primary side but mapped to the secondary side).
Use of these gap offset parameters treats the surfaces as if they were virtually extended (positive offset value) or narrowed (negative offset value) by the specified amount, so that the surfaces are treated as if they are closer or further away than they actually are. There is no deformation within the material in this gap offset region.

Example Input syntax

Node/face frictionless contact:

```
[Contact]
  [./leftright]
    secondary = 3
    primary = 2
    model = frictionless
    penalty = 1e+6
    normal_smoothing_distance = 0.1
  [../]
[]
```

(modules/contact/test/tests/sliding_block/sliding/frictionless_kinematic.i)

Node/face frictional contact:

```
[Contact]
  [./leftright]
    secondary = 3
    primary = 2
    model = coulomb
    penalty = 1e+7
    friction_coefficient = 0.2
    formulation = penalty
    normal_smoothing_distance = 0.1
  [../]
[]
```

(modules/contact/test/tests/sliding_block/sliding/frictional_02_penalty.i)

Normal (frictionless) mortar contact:

```
[Contact]
  [frictionless]
    mesh = simple_mesh
    primary = 2
    secondary = 1
    formulation = mortar
  []
[]
```

(modules/contact/test/tests/mechanical-small-problem/frictionless-nodal-lm-mortar-disp-action.i)

Normal and tangential (frictional) mortar contact:

```
[Contact]
  [frictional]
    mesh = revised_file_mesh
    primary = 20
    secondary = 10
    formulation = mortar
    model = coulomb
    friction_coefficient = 0.1
  []
[]
```

(modules/contact/test/tests/bouncing-block-contact/frictional-nodal-min-normal-lm-mortar-fb-tangential-lm-mortar-action.i)

Gap offset:

```
[Contact]
  [./leftright]
    master = 2
    slave = 3
    model = frictionless
    penalty = 1e+6
    secondary_gap_offset = secondary_gap_offset
    mapped_primary_gap_offset = mapped_primary_gap_offset
  [../]
[]
```

(modules/contact/test/tests/mechanical_constraint/frictionless_kinematic_gap_offsets.i)

Input Parameters

• active (default: __all__, type: std::vector): If specified, only the blocks named will be visited and made active
• al_frictional_force_tolerance (type: double): The tolerance of the frictional force for the augmented Lagrangian method
• al_incremental_slip_tolerance (type: double): The tolerance of the incremental slip for the augmented Lagrangian method
• al_penetration_tolerance (type: double): The tolerance of the penetration for the augmented Lagrangian method
• c_normal (default: 1, type: double): Parameter for balancing the size of the gap and contact pressure
• c_tangential (default: 1, type: double): Parameter for balancing the contact pressure and velocity
• capture_tolerance (default: 0, type: double): Normal distance from surface within which nodes are captured
• disp_x (type: VariableName): The x displacement
• disp_y (type: VariableName): The y displacement
• disp_z (type: VariableName): The z displacement
• displacements (type: std::vector): The displacements appropriate for the simulation geometry and coordinate system
• formulation (default: kinematic, type: MooseEnum; options: ranfs kinematic penalty augmented_lagrange tangential_penalty mortar): The contact formulation
• friction_coefficient (default: 0, type: double): The friction coefficient
• inactive (type: std::vector): If specified, blocks matching these identifiers will be skipped
• mapped_primary_gap_offset (type: VariableName): Offset to gap distance mapped from primary side
• mesh (type: MeshGeneratorName): The mesh generator for the mortar method
• model (default: frictionless, type: MooseEnum; options: frictionless glued coulomb): The contact model to use
• normal_lm_scaling (default: 1, type: double): Scaling factor to apply to the normal LM variable
• normal_smoothing_distance (type: double): Distance from edge in parametric coordinates over which to smooth contact normal
• normal_smoothing_method (type: MooseEnum; options: edge_based nodal_normal_based): Method to use to smooth normals
• normalize_penalty (default: False, type: bool): Whether to normalize the penalty parameter with the nodal area
• order (default: FIRST, type: MooseEnum; options: CONSTANT FIRST SECOND THIRD FOURTH): The finite element order: FIRST, SECOND, etc.
• penalty (default: 1e+08, type: double): The penalty to apply. This can vary depending on the stiffness of your materials
• ping_pong_protection (default: False, type: bool): Whether to protect against ping-ponging, e.g. the oscillation of the secondary node between two different primary faces, by tying the secondary node to the edge between the involved primary faces
• primary (type: BoundaryName): The primary surface
• primary_secondary_jacobian (default: True, type: bool): Whether to include Jacobian entries coupling primary and secondary nodes
• secondary (type: BoundaryName): The secondary surface
• secondary_gap_offset (type: VariableName): Offset to gap distance from secondary side
• tangential_lm_scaling (default: 1, type: double): Scaling factor to apply to the tangential LM variable
• tangential_tolerance (type: double): Tangential distance to extend edges of contact surfaces
• tension_release (default: 0, type: double): Tension release threshold. A node in contact will not be released if its tensile load is below this value. No tension release if negative.
• use_dual (default: False, type: bool): Whether to use the dual mortar approach
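As an additional illustration, the augmented Lagrange formulation can be requested by combining the formulation parameter with the al_* tolerances documented above. This sketch is not taken from the MOOSE test suite; the boundary IDs, penalty, and tolerance values here are hypothetical:

```
[Contact]
  [./leftright]
    primary = 2
    secondary = 3
    model = frictionless
    formulation = augmented_lagrange
    penalty = 1e+7
    al_penetration_tolerance = 1e-8
  [../]
[]
```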
# Canonical question for two masses falling in Newtonian gravity

Some variation of this question gets asked a lot:

We have 2 point masses, $m$ and $M$ in a perfect world separated by radius $r$. Starting from rest, they both begin to accelerate towards each other. So we have the gravitational force between them as: $$F_g = \frac{GMm}{r^2}$$ How do we find out at what time they will collide?

There are two instances that seem to be emerging as "canonical", or at least that both seem to be the target of frequent duplicate closures:

1. The Time That 2 Masses Will Collide Due To Newtonian Gravity (where the above quote comes from)
2. Don't heavier objects actually fall faster because they exert their own gravity? (has more upvotes)

I think we should pick one of these (or potentially another version of this same question) to be the canonical target of duplicate closures. I would probably go ahead and pick #2 except that I happen to have the accepted answer on that question, and I wouldn't want people to think I'm doing it to accumulate reputation. Besides, there are some advantages to #1, namely that the question is more cleanly asked (IMO) and the accepted answer is much more direct. What should we go with? #1 or #2, or another existing version of this, or write an entirely new canonical question for it?

A canonical question, that is, a question specifically intended to act as a definitive reference, is distinct from a question that just happens to cover the same topic. If I were writing a canonical question I would tailor the question to make it absolutely clear that the question was intended as a reference and to explain what it did and didn't cover. So in this case I would write a brand new question and answer that combined the best aspects of the two questions you mention, the question Qmechanic links and indeed any other related questions.
I don't feel desperately strongly about this (at least, not strongly enough to volunteer to do it :-) and it's a lot of work writing a canonical Q/A in this fashion. However, if you are attempting to provide the best possible reference I think a new question and answer are the best course.

• What I had in mind was really a canonical target for duplicate closures - maybe not "canonical question" in the sense that you mean it. Given that we have several good questions and answers on the topic already, I'd want to establish that there is no suitable candidate currently before starting on a brand new question and answer. So my response to this answer (not that I necessarily expect you to address this) would be, what are the shortcomings of each of the existing questions that make them unsuitable for being the target of future duplicate closures? – David Z Apr 21 '16 at 12:06

This question comes in different versions:

• with or without reduced mass.
• with or without non-radial motion.
• with or without relativistic effects.
• with or without initial velocity.
• taking the finite radii of the two bodies into account.
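For the radial version of the quoted problem, the standard closed-form answer for two point masses released from rest at separation $r_0$ is $t = \frac{\pi}{2}\sqrt{\frac{r_0^3}{2G(M+m)}}$. Here is a quick numerical cross-check of that formula in units where $\mu = G(M+m) = 1$ (a sketch of mine, not code from either linked post):

```python
import math

def collision_time_numeric(r0, mu, n=200_000):
    # Energy conservation for the relative coordinate gives
    # v(r) = sqrt(2*mu*(1/r - 1/r0)), with mu = G*(M + m),
    # so the fall time is t = integral_0^r0 dr / v(r).
    h = r0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h  # midpoint rule keeps clear of both singular endpoints
        total += h / math.sqrt(2.0 * mu * (1.0 / r - 1.0 / r0))
    return total

mu, r0 = 1.0, 1.0
analytic = (math.pi / 2.0) * math.sqrt(r0**3 / (2.0 * mu))
numeric = collision_time_numeric(r0, mu)
print(analytic, numeric)  # the two values agree to about 1e-3
```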
Mathwizurd.com is created by David Witten, a mathematics and computer science student at Vanderbilt University. For more information, see the "About" page.

# Parametric Functions

Suppose you have this set of equations: $x = \cos{t}$ and $y = \sin{t}$, and you want to find the area enclosed by that set of parametric equations. We could do

$$\text{The area is } \int\mathrm{y}\, \mathrm{d}x$$
$$x = \cos{t},\ dx = -\sin{t}\,dt \rightarrow \int\mathrm{\sin{t}\cdot-\sin{t}}\, \mathrm{d}t = \int\mathrm{-\sin^2{t}}\, \mathrm{d}t$$
$$\cos{2t} = 1 - 2\sin^2{t} \rightarrow -\sin^2{t} = \frac{1}{2}(\cos{2t} - 1)$$
$$=\frac{1}{2}\int\mathrm{\cos{2t} - 1}\, \mathrm{d}t = \frac{\sin{2t}}{4} - \frac{t}{2} + C$$
$$\text{If you graph the system, you get that it is a unit circle. So, you can integrate from 0 to } 2\pi$$
$$\left(\frac{\sin{2t}}{4} - \frac{t}{2}\right)\Big|^{2\pi}_{0} = -\pi$$
$$\text{Note that that is the integral, not the area. The area is the absolute value, which is } \pi$$

# Polar Equations

You can think about the area under a polar function as the sum of many thin sectors. A sector subtending angle $\theta$ has area $\frac{\theta}{2\pi}(\pi{}r^2) = \frac{r^2\theta}{2}$, so the area of the entire region is $\frac{1}{2}\int_{a}^{b}\mathrm{r(\theta{})^2}\, \mathrm{d}\theta{}$

## Examples

r = 1 - 2cos(θ)

Find the area of the outer loop of $r = 1 - 2\cos{\theta{}}$. For this problem, you have to take the full sweep minus the inner loop. First, you have to figure out that the inner loop goes from $-\frac{\pi}{3}$ to $\frac{\pi}{3}$ (where $1 - 2\cos\theta \le 0$).

$$\text{The outer loop = } \frac{1}{2}\int_{\frac{\pi}{3}}^{\frac{5\pi{}}{3}}\mathrm{(1-2\cos{\theta{}})^2}\, \mathrm{d}\theta{}$$
$$\text{The inner loop = } \frac{1}{2}\int_{\frac{-\pi{}}{3}}^{\frac{\pi}{3}}\mathrm{(1-2\cos{\theta})^2}\, \mathrm{d}\theta{}$$

To calculate the integral of $(1 - 2\cos{x})^2$, you expand and power reduce:

$$\int\mathrm{1 - 4\cos{x} + 4\cos^2{x}}\, \mathrm{d}x = \int\mathrm{3 - 4\cos{x} + 2\cos{2x}}\, \mathrm{d}x = 3x - 4\sin{x} + \sin{2x}$$

Evaluating, the outer loop is $\frac{1}{2}\left[3\theta - 4\sin{\theta} + \sin{2\theta}\right]_{\pi/3}^{5\pi/3} = 2\pi + \frac{3\sqrt{3}}{2}$ and the inner loop is $\frac{1}{2}\left[3\theta - 4\sin{\theta} + \sin{2\theta}\right]_{-\pi/3}^{\pi/3} = \pi - \frac{3\sqrt{3}}{2}$, so

The outer loop - the inner loop $= \pi + 3\sqrt{3} \approx 8.34$

David Witten
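The signed-area computation for the unit circle in the parametric section ($\int y\, dx = -\pi$, area $\pi$) is easy to sanity-check numerically; here is a small midpoint-Riemann-sum sketch of mine:

```python
import math

# x = cos(t), y = sin(t): the signed area integral int y dx becomes
# int_0^{2pi} sin(t) * (-sin(t)) dt
n = 100_000
h = 2.0 * math.pi / n
signed_area = sum(-math.sin((i + 0.5) * h) ** 2 * h for i in range(n))
print(signed_area)  # → -3.14159..., i.e. the integral is -pi and the area is pi
```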
# Veritas Prep 10 Year Anniversary Promo Question #6

bb (Founder, GMAT Club), 19 Sep 2012, 10:00:

One quant and one verbal question will be posted each day starting on Monday Sept 17th at 10 AM PST/1 PM EST, and the first person to correctly answer the question and show how they arrived at the answer will win a free Veritas Prep GMAT course ($1,650 value). Winners will be selected and notified by a GMAT Club moderator. For more questions and details please check here: veritas-prep-10-year-anniversary-giveaway-138806.html

To participate, please make sure you provide the correct answer (A, B, C, D, E) and an explanation that clearly shows how you arrived at it. Winners will be announced the following day at 10 AM Pacific/1 PM Eastern Time. Good Luck! May the best and fastest win!
The deer, despite having traveled hundreds of miles from their home to reach the Canadian wilderness and therefore being free to roam without fear of highway traffic or other man-made dangers, struggled to acclimate to the habitat that wildlife biologists had predicted would enable it to thrive. (A) despite having traveled hundreds of miles from their home to reach the Canadian wilderness and therefore being free to roam without fear of highway traffic or other man-made dangers (B) despite having traveled hundreds of miles from home to the Canadian wilderness where they would now be free to roam without fear of highway traffic or other man-made dangers (C) despite having traveled hundreds of miles from home to reach the Canadian wilderness that offered freedom to roam without fear of highway traffic or other man-made dangers (D) even after traveling hundreds of miles from their home to reach the Canadian wilderness where they could freely roam without fear of highway traffic or other man-made dangers (E) who had traveled hundreds of miles from home to the Canadian wilderness that would offer them freedom to roam without fear of highway traffic or other man-made dangers [Reveal] Spoiler: Official Answer & Explanation OA: C Explanation: This sentence is entirely about the advanced GMAT theme of “The Whole Sentence Matters”. Without recognizing the pronoun “it” toward the end of the sentence, far from the underline, you’d struggle to gain a foothold here. “Deer” is both singular and plural, so if you didn’t notice that defining singular pronoun “it” you wouldn’t be turned off by the presence of “they”, “their”, and “them” in choices A, B, D, and E. But the word “it” makes “deer” singular, and only choice C (which avoids the use of a second pronoun) can then be correct. [Reveal] Spoiler: OA _________________ Founder of GMAT Club US News 2008 - 2017 Rankings progression - New! Just starting out with GMAT? Start here... Need GMAT Book Recommendations? 
Best GMAT Books Co-author of the GMAT Club tests GMAT Club Premium Membership - big benefits and savings Last edited by bb on 20 Sep 2012, 08:32, edited 2 times in total. posted the explanation Veritas Prep GMAT Discount Codes Jamboree Discount Codes Magoosh Discount Codes Founder Affiliations: AS - Gold, HH-Diamond Joined: 04 Dec 2002 Posts: 14102 Location: United States (WA) GMAT 1: 750 Q49 V42 GPA: 3.5 Followers: 3598 Kudos [?]: 21720 [1] , given: 4394 Re: Veritas Prep 10 Year Anniversary Promo Question #6 [#permalink] ### Show Tags 19 Sep 2012, 10:00 1 This post received KUDOS Expert's post 1 This post was BOOKMARKED OA: C Official Explanation: This sentence is entirely about the advanced GMAT theme of “The Whole Sentence Matters”. Without recognizing the pronoun “it” toward the end of the sentence, far from the underline, you’d struggle to gain a foothold here. “Deer” is both singular and plural, so if you didn’t notice that defining singular pronoun “it” you wouldn’t be turned off by the presence of “they”, “their”, and “them” in choices A, B, D, and E. But the word “it” makes “deer” singular, and only choice C (which avoids the use of a second pronoun) can then be correct. _________________ Founder of GMAT Club US News 2008 - 2017 Rankings progression - New! Just starting out with GMAT? Start here... Need GMAT Book Recommendations? Best GMAT Books Co-author of the GMAT Club tests GMAT Club Premium Membership - big benefits and savings Last edited by bb on 20 Sep 2012, 08:31, edited 1 time in total. posted the explanation Director Status: Done with formalities.. and back.. Joined: 15 Sep 2012 Posts: 647 Location: India Concentration: Strategy, General Management Schools: Olin - Wash U - Class of 2015 WE: Information Technology (Computer Software) Followers: 44 Kudos [?]: 521 [0], given: 23 Re: Veritas Prep 10 Year Anniversary Promo Question #6 [#permalink] ### Show Tags 19 Sep 2012, 10:04 ! 
Congratulations - you are the winner! Actually, you have won both the Quant and the Verbal challenge (very impressive!). You will be awarded the course/prize for the Quant; per the rules, only one prize per person.

The answer is C. "The deer" is singular here, so A, B, D, and E are incorrect.

bb wrote: One quant and one verbal question will be posted each day starting on Monday, Sept 17th at 10 AM PST / 1 PM EST, and the first person to correctly answer the question and show how they arrived at the answer will win a free Veritas Prep GMAT course ($1,650 value). Winners will be selected and notified by a GMAT Club moderator. Winners will be announced the following day at 10 AM Pacific / 1 PM Eastern Time. Good luck! May the best and fastest win!
"Who" cannot refer to the deer. Answer is C.
Answer C. It is grammatically correct, precise, and concise, with the correct pronoun usage. A, B, and D are wrong because a plural pronoun is wrongly used; E is wrong because it is lengthy and wordy.

Answer C. A, B, D, and E are eliminated because of agreement errors.

C. All the other options use a plural pronoun, which doesn't go along with the singular "deer".

The answer is C because "it", located at the end of the sentence, is singular and refers to the deer. "The deer", at the beginning of the sentence, can be plural or singular; the "it" tells us it is singular. Therefore we can eliminate all the choices with "they", "their", or "them".
We can arrive at the answer by looking at the pronoun used to refer to "the deer" in each option. All the options apart from C have pronoun errors: "the deer" is singular here and therefore cannot be referred to with the plural pronouns "they" and "them", as in options A, B, D, and E. Option C refers to "the deer" without using a pronoun or introducing any other errors, making C the right option.
The answer is C. A: eliminate, as "the deer" is singular here, not plural, and "their" is the wrong usage. B: eliminate, as the deer is again referred to with a plural pronoun, and "where they would now be" is not appropriate. D: eliminate because using "their" to refer to the deer is wrong. E: "who had traveled" is the wrong usage. C: the pronoun usage and the tense are right.
The "it" in the sentence lets me know that "the deer" is singular, not plural (while "deer" can be both singular and plural), so eliminate A, B, and D because of "their"/"they", and E because of "them".

Pronoun agreement: Option A: "their" incorrectly refers to the singular deer. Option B: "they" in the second part incorrectly refers to the singular deer. Option D: "they" in the second part incorrectly refers to the singular deer. Option E: the second part describing the Canadian wilderness should be in the present tense. Option C: correct.

(A) and (B): "the deer" is a singular noun here, so the plural pronouns are wrong.
(C) is correct. (D): "the deer" is a singular noun, so "their"/"they" are wrong. (E): using "who" is wrong, and "the deer" is a singular noun.

All the other answers refer to the singular subject "deer" with plural pronouns such as "they" and "them".

I will choose option C. If you look closely, the non-underlined part has "The deer" at the beginning and "would enable it" at the end of the sentence. "Enable it" confirms that the subject "the deer" is singular. Options A, B, D, and E use either "they" or "them" and hence can be eliminated.
(C) is correct: there is no "they" or "them", and not even "being".

Option C: it avoids the pronoun that creates ambiguity, and the sentence is clear and concise.

C. The rest all have plural pronouns for "the deer".

C, for a couple of reasons:
A) It has "being" in it, and how much the GMAT dislikes "being" (redundancy, passive construction) is well known; eliminate A. B) Agreement mismatch ("the deer"/"they"), and "now" is not needed in the sentence; eliminate B. C) "The Canadian wilderness ... offered freedom" works just as "Starbucks offered coffee" does, and "that" correctly introduces the subordinate clause; keep C. D) Again an agreement mismatch; eliminate D. E) Doesn't show us the contrast that was expected, thus changing the meaning; eliminate E. The answer is C.

C. "The deer" is singular, as written in the non-underlined portion; all the other answer choices have "they" or "them".

A, B, D, and E are out, as all of them use a plural pronoun (they/them) for the singular "the deer". C is correct, and the clause starting with "that" is kept close to "wilderness", which it modifies.
## Section: New Results

### Static Analysis and Abstract Interpretation

Participants: Alain Girault, Bertrand Jeannet [contact person], Lies Lakhdar-Chaouch, Peter Schrammel, Pascal Sotin.

#### Numerical and logico-numerical abstract acceleration

Acceleration methods are used for computing precisely the effects of loops in the reachability analysis of counter machine models. Applying these methods to synchronous data-flow programs with Boolean and numerical variables, e.g., Lustre programs, firstly requires the enumeration of the Boolean states in order to obtain a control graph with numerical variables only. Secondly, acceleration methods have to deal with the non-determinism introduced by numerical input variables. Concerning the latter problem, we pushed further the work presented in [90], which extended the concept of abstract acceleration of Gonnord et al. [69], [68] to numerical input variables, and we wrote a journal version [13]. The original contributions of [13] compared to [91] are abstract backward acceleration (for backward analysis) and a detailed comparison of the abstract acceleration approach with the derivative closure approach of [39], which is related to methods based on transitive closures of relations.

We then worked more on the first point, which is to apply acceleration techniques to data-flow programs without resorting to an exhaustive enumeration of Boolean states. To this end, we introduced (1) logico-numerical abstract acceleration methods for CFGs with Boolean and numerical variables and (2) partitioning techniques that make logico-numerical abstract acceleration effective. Experimental results showed that incorporating these methods in a verification tool based on abstract interpretation provides not only a significant advantage in accuracy but also a gain in performance compared to standard techniques. This work was published in [28]. This line of work is part of the PhD thesis of Peter Schrammel.
#### Improving dynamic approximations in static analysis

Abstract interpretation [51] formalizes two kinds of approximations that can be made in the static analysis of programs:

- Static approximations, defined by the choice of an abstract domain of abstract properties (for instance, intervals or convex polyhedra that approximate sets of points in numerical spaces), and the definition of sound approximations in this domain of the concrete operations (variable assignments, tests, ...). These abstract properties and operations are substitutes for the concrete properties and operations defined by the semantics of the analyzed program. This stage results in an abstract fixpoint equation $Y = G(Y)$, $Y \in A$, where $A$ is the abstract domain. The best (least) solution of this equation can be obtained by Kleene iteration, which consists in computing the sequence $Y_0 = \perp_A$, $Y_{n+1} = G(Y_n)$, where $\perp_A$ is the least element of the lattice $A$.
- Dynamic approximations, which make the Kleene iteration sequence converge in finite time by applying an extrapolation operator called widening and denoted $\nabla$. This results in a sequence $Z_0 = \perp_A$, $Z_{n+1} = Z_n \nabla G(Z_n)$ that converges to a post-fixpoint $Z_\infty \sqsupseteq G(Z_\infty)$.

For instance, for many numerical abstract domains (like octagons [86] or convex polyhedra [75]), the "standard" widening $\nabla : A \times A \to A$ consists in keeping in the result $R = P \nabla Q$ the numerical constraints of $P$ that are still satisfied by $Q$. The problem addressed here is that the extrapolation performed by widening often loses crucial information for the analysis goal.

##### Widening with thresholds.

A classical technique for improving the precision is "widening with thresholds", which bounds the extrapolation.
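To make these notions concrete, here is a minimal interval-domain sketch (illustrative only, not the team's Apron-based implementation): Kleene iteration made convergent by the standard interval widening, optionally refined by a set of threshold bounds.

```python
# Minimal interval-domain sketch: Kleene iteration made convergent by widening.
# Illustrative only; real analyzers (e.g. the Apron library) are far richer.

def widen(p, q, thresholds=()):
    """Standard interval widening: keep each bound of p that is still
    satisfied by q; an unstable bound jumps to the nearest enclosing
    threshold, or to infinity if none remains."""
    (plo, phi), (qlo, qhi) = p, q
    lo = plo if plo <= qlo else max([t for t in thresholds if t <= qlo],
                                    default=float("-inf"))
    hi = phi if phi >= qhi else min([t for t in thresholds if t >= qhi],
                                    default=float("inf"))
    return (lo, hi)

def analyze(step, init=(0, 0), thresholds=(), max_iter=100):
    """Iterate Z' = Z widen G(Z) until a post-fixpoint of `step` is reached."""
    z = init
    for _ in range(max_iter):
        g = step(z)
        nxt = widen(z, g, thresholds)
        if nxt == z and g[0] >= z[0] and g[1] <= z[1]:  # G(z) included in z
            return z
        z = nxt
    return z

# Transfer function of the loop "x = 0; while x < 10: x += 1":
# join of the entry state [0,0] with the guarded, incremented body state.
def G(z):
    lo, hi = z
    body_lo, body_hi = lo + 1, min(hi, 9) + 1
    return (min(0, body_lo), max(0, body_hi))

print(analyze(G))                    # (0, inf): plain widening overshoots
print(analyze(G, thresholds=(10,)))  # (0, 10): the threshold 10 is kept
```

On this loop, plain widening jumps straight to an unbounded interval, while the single threshold constraint `x <= 10` is retained and gives the exact invariant.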
The idea is to parameterize $\nabla$ with a finite set $\mathcal{C}$ of threshold constraints, and to keep in the result $R = P \nabla_{\mathcal{C}} Q$ those constraints $c \in \mathcal{C}$ that are still satisfied by $Q$: $P \nabla_{\mathcal{C}} Q = (P \nabla Q) \sqcap \{ c \in \mathcal{C} \mid Q \vDash c \}$. In practice, one extrapolates up to some threshold; in the next iteration, either the threshold is still satisfied and the result is better than with the standard widening, or it is violated and one extrapolates up to the remaining thresholds. The benefit of this refinement strongly depends on the choice of relevant thresholds. In [33], [26] we proposed a semantics-based technique for automatically inferring such thresholds, which applies to any control graph, be it intraprocedural, interprocedural or concurrent, without specific assumptions on the abstract domain. Despite its technical simplicity, we showed that our technique is able to infer the relevant thresholds in many practical cases.

##### Policy Iteration.

Another direction we investigated for solving the fixpoint equation $Y = G(Y)$, $Y \in A$ is the use of Policy Iteration, a method for the exact solving of optimization and game-theory problems formulated as equations over min-max affine expressions. In this context, a policy $\pi$ is a strategy for the min-player, which gives rise to a simplified equation $X = F^\pi(X)$, with $F^\pi \ge F$ and $X \in \mathbb{R}^n$, that is easier to solve than the initial equation $X = F(X)$, $X \in \mathbb{R}^n$. Policy iteration iterates on policies rather than iterating the application of $F$ (as in Kleene iteration), using the property that the least fixpoint of $F$ corresponds to the least fixpoint of $F^\pi$ for some $\pi$.
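A toy, single-variable illustration of the idea (hypothetical code, not the Apron/BddApron integration): for the equation $x = \min(10, x + 1)$, a policy selects one branch of the min; that branch's least fixpoint is computed exactly and then checked for optimality against the other branches.

```python
# Toy policy iteration on the min-affine equation x = min(10, x + 1), least
# fixpoint taken above the bottom value 0. Hypothetical sketch, restricted
# to one variable and branch coefficients 0 <= a <= 1.
import math

def lfp_branch(branch, bottom=0.0):
    """Exact least fixpoint (>= bottom) of one affine branch x -> a*x + b."""
    a, b = branch
    if a == 1.0:
        return bottom if b <= 0 else math.inf  # x = x + b diverges if b > 0
    return max(b / (1.0 - a), bottom)

def policy_iteration(branches, bottom=0.0):
    """Solve the current policy's branch exactly, then switch to a branch
    giving a strictly smaller value at the solution, until none exists."""
    policy = 0
    for _ in range(len(branches) + 1):
        x = lfp_branch(branches[policy], bottom)
        values = [b if a == 0.0 else a * x + b for a, b in branches]
        best = min(range(len(branches)), key=lambda i: values[i])
        if values[best] >= x:   # min_i F_i(x) = x: fixpoint of the full F
            return x
        policy = best
    return x

branches = [(0.0, 10.0), (1.0, 1.0)]   # the two branches of min(10, x + 1)
print(policy_iteration(branches))      # 10.0, solving a single policy
```

Where Kleene iteration would need ten ascending steps (or a widening that loses precision), the policy "take the constant branch" is solved exactly in one step and immediately certified as a fixpoint of the full min-expression.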
[50] showed that the problem of finding the least fixpoint of semantic equations on some abstract domains can be reduced to such equations over min-max affine expressions, which can then be solved using Policy Iteration instead of the traditional Kleene iteration with widening described above. We first investigated the integration of the concept of Policy Iteration in a generic way into existing numerical abstract domains, and implemented it in the Apron library (see module 5.4). This considerably extends the applicability of Policy Iteration in static analysis. In particular, we considered the verification of programs manipulating Boolean and numerical variables, and we provided an efficient method to integrate the concept of policy in the logico-numerical abstract domain BddApron, which mixes Boolean and numerical properties (see module 5.4). This enabled the application of the policy-iteration solving method to much more complex programs, which are no longer purely numerical. This work was published in [30].

#### Analysis of imperative programs

We also studied the analysis of imperative programs. Even if it is preferable to analyze embedded systems described in higher-level languages such as synchronous languages, it is also useful to be able to analyze C programs. Moreover, it enables a wider diffusion of the analysis techniques developed in the team.

##### Inferring Effective Types for Static Analysis of C Programs

This work is a step in the project of connecting the C language to our analysis tool Interproc/ConcurInterproc (see section 5.5.4). The starting point is the connection made by the industrial partner EADS-IW, in the context of the ANR project ASOPT (§ 8.1.2), from a subset of the C language to Interproc. This translation uses the Newspeak intermediate language promoted by EADS [77].

Figure 3.
Inferring finite types in C programs.

Initial program:

```c
typedef struct {
  int n;
} t;

int main() {
  t x; t* y;
  int *p, *q;
  y = alloc(t); p = &(y->n);
  y = &x;       q = &(y->n);
  *p = 1; *q = 2; *p = *p < 1;
  return *p;
}
```

Transformed program:

```c
typedef enum {
  l0 = 0, l1 = 1, l2 = 2
} e;

typedef struct {
  e n;
} t;

int main() {
  t x; t* y;
  e *p, *q;
  y = alloc(t); p = &(y->n);
  y = &x;       q = &(y->n);
  *p = l1; *q = l2; *p = (*p == l0) ? l1 : l0;
  return *p;
}
```

The problem addressed here is that the C language does not have a specific Boolean type: Boolean values are encoded with integers. This is also true for enumerated types, which may be freely and silently cast to and from integers. On the other hand, our verification tool Interproc, which infers the possible values of variables at each program point, may benefit from the information that some integer variables are used solely as Boolean or enumerated-type variables, or more generally as finite-type variables with a small domain. Indeed, specialized and efficient symbolic representations such as BDDs are used for representing properties of such variables, whereas approximated representations like intervals and octagons are used for larger-domain integer and floating-point variables. Driven by this motivation, we proposed in [25] a static analysis for inferring more precise types for the variables of a C program, corresponding to their effective use. The analysis addresses a subset of the C99 language, including pointers, structures and dynamic allocation. The principle of the method is very different from the type inference techniques used in functional programming languages such as ML, where types are inferred from the context of use. Instead, our analysis can be seen as a simple points-to analysis, followed by a disjunctive version of a constant propagation analysis, and terminated by a program transformation that generates a strongly typed program. Fig. 3 illustrates this process.
On this example, we discover that the program is a finite-state one, to which exact analysis techniques can be applied.

##### Interprocedural analysis with pointers to the stack

This work addressed the problem of interprocedural analysis when side effects are performed on the stack containing local variables. Indeed, in any language with procedure calls and pointers as parameters (C, Ada), an instruction can modify memory locations anywhere in the call stack. The presence of such side effects breaks most generic interprocedural analysis methods, which assume that only the top of the stack may be modified. In [29] we presented a method that addresses this issue, based on the definition of an equivalent local semantics in which writing through pointers has a local effect on the stack. Our second contribution in this context is an adequate representation of summary functions that model the effect of a procedure, not only on the values of its scalar and pointer variables, but also on the values contained in pointed memory locations. Our implementation in the interprocedural analyzer PInterproc (see § 5.5.4) results in a verification tool that infers relational properties on the values of Boolean, numerical, and pointer variables.
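Returning to the effective-type inference described above, its final classification step can be caricatured in a few lines (a hypothetical helper, not the actual Newspeak-based analysis of [25]): once the disjunctive constant propagation has collected the set of values each abstract location may hold, small sets yield finite types.

```python
# Hypothetical sketch of the last step of effective-type inference: map the
# set of integer constants an abstract location may hold to a finite type.

def effective_type(values, enum_limit=8):
    """Classify one location from the set of values it may take."""
    if values <= {0, 1}:
        return "bool"                          # usable by BDD-based domains
    if len(values) <= enum_limit and all(isinstance(v, int) for v in values):
        return "enum{" + ",".join(str(v) for v in sorted(values)) + "}"
    return "int"                               # fall back to intervals etc.

# In Fig. 3, both pointers p and q reach a field n that only ever holds
# 0, 1 or 2, so the field is retyped with the enumeration {l0, l1, l2}.
print(effective_type({0, 1, 2}))   # enum{0,1,2}
print(effective_type({0, 1}))      # bool
```

The `enum_limit` cutoff is an invented parameter standing in for whatever "small domain" criterion the real analysis uses.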
# Functors

AUTHORS:

- David Kohel and William Stein
- David Joyner (2005-12-17): examples
- Simon King (2010-04-30): more examples, several bug fixes, re-implementation of the default call method, making functors applicable to morphisms (not only to objects)
- Simon King (2010-12): pickling of functors without losing domain and codomain

sage.categories.functor.ForgetfulFunctor(domain, codomain)

Construct the forgetful functor from one category to another.

INPUT: C, D - two categories

OUTPUT: A functor that returns the corresponding object of D for any element of C, by forgetting the extra structure.

ASSUMPTION: The category C must be a sub-category of D.

EXAMPLES:

    sage: rings = Rings()
    sage: abgrps = CommutativeAdditiveGroups()
    sage: F = ForgetfulFunctor(rings, abgrps)
    sage: F
    The forgetful functor from Category of rings to Category of commutative additive groups

It would be a mistake to call it in the opposite order:

    sage: F = ForgetfulFunctor(abgrps, rings)
    Traceback (most recent call last):
    ...
    ValueError: Forgetful functor not supported for domain Category of commutative additive groups

If both categories are equal, the forgetful functor is the same as the identity functor:

    sage: ForgetfulFunctor(abgrps, abgrps) == IdentityFunctor(abgrps)
    True

class sage.categories.functor.ForgetfulFunctor_generic

The forgetful functor, i.e., the embedding of a subcategory.

NOTE: Forgetful functors should be created using ForgetfulFunctor(), since the init method of this class does not check whether the domain is a subcategory of the codomain.

EXAMPLES:

    sage: F = ForgetfulFunctor(FiniteFields(), Fields())  # indirect doctest
    sage: F
    The forgetful functor from Category of finite fields to Category of fields
    sage: F(GF(3))
    Finite Field of size 3

class sage.categories.functor.Functor

A class for functors between two categories.

NOTE:

- In the first place, a functor is given by its domain and codomain, which are both categories.
- When defining a sub-class, the user should not implement a call method.
Instead, one should implement three methods, which are composed in the default call method:

- _coerce_into_domain(self, x): Return an object of self's domain corresponding to x, or raise a TypeError.
  - Default: Raise a TypeError if x is not in self's domain.
- _apply_functor(self, x): Apply self to an object x of self's domain.
  - Default: Conversion into self's codomain.
- _apply_functor_to_morphism(self, f): Apply self to a morphism f in self's domain.
  - Default: Return self(f.domain()).hom(f, self(f.codomain())).

EXAMPLES:

    sage: rings = Rings()
    sage: abgrps = CommutativeAdditiveGroups()
    sage: F = ForgetfulFunctor(rings, abgrps)
    sage: F.domain()
    Category of rings
    sage: F.codomain()
    Category of commutative additive groups
    sage: from sage.categories.functor import is_Functor
    sage: is_Functor(F)
    True
    sage: I = IdentityFunctor(abgrps)
    sage: I
    The identity functor on Category of commutative additive groups
    sage: I.domain()
    Category of commutative additive groups
    sage: is_Functor(I)
    True

Note that by default, an instance of the class Functor implements coercion from the domain into the codomain. The above subclasses overload this behaviour. Here we illustrate the default:

    sage: from sage.categories.functor import Functor
    sage: F = Functor(Rings(), Fields())
    sage: F
    Functor from Category of rings to Category of fields
    sage: F(ZZ)
    Rational Field
    sage: F(GF(2))
    Finite Field of size 2

Functors are not only about the objects of a category, but also about their morphisms. We illustrate this, again, with the coercion functor from rings to fields.

    sage: R1.<x> = ZZ[]
    sage: R2.<a,b> = QQ[]
    sage: f = R1.hom([a+b], R2)
    sage: f
    Ring morphism:
      From: Univariate Polynomial Ring in x over Integer Ring
      To:   Multivariate Polynomial Ring in a, b over Rational Field
      Defn: x |--> a + b
    sage: F(f)
    Ring morphism:
      From: Fraction Field of Univariate Polynomial Ring in x over Integer Ring
      To:   Fraction Field of Multivariate Polynomial Ring in a, b over Rational Field
      Defn: x |--> a + b
    sage: F(f)(1/x)
    1/(a + b)

We can also apply a polynomial ring construction functor to our homomorphism.
The result is a homomorphism that is defined on the base ring:

    sage: F = QQ['t'].construction()[0]
    sage: F
    Poly[t]
    sage: F(f)
    Ring morphism:
      From: Univariate Polynomial Ring in t over Univariate Polynomial Ring in x over Integer Ring
      To:   Univariate Polynomial Ring in t over Multivariate Polynomial Ring in a, b over Rational Field
      Defn: Induced from base ring by
            Ring morphism:
              From: Univariate Polynomial Ring in x over Integer Ring
              To:   Multivariate Polynomial Ring in a, b over Rational Field
              Defn: x |--> a + b
    sage: p = R1['t']('(-x^2 + x)*t^2 + (x^2 - x)*t - 4*x^2 - x + 1')
    sage: F(f)(p)
    (-a^2 - 2*a*b - b^2 + a + b)*t^2 + (a^2 + 2*a*b + b^2 - a - b)*t - 4*a^2 - 8*a*b - 4*b^2 - a - b + 1

codomain()

The codomain of self.

EXAMPLE:

    sage: F = ForgetfulFunctor(FiniteFields(), Fields())
    sage: F.codomain()
    Category of fields

domain()

The domain of self.

EXAMPLE:

    sage: F = ForgetfulFunctor(FiniteFields(), Fields())
    sage: F.domain()
    Category of finite fields

sage.categories.functor.IdentityFunctor(C)

Construct the identity functor of the given category.

INPUT: A category, C.

OUTPUT: The identity functor on C.

EXAMPLES:

    sage: rings = Rings()
    sage: F = IdentityFunctor(rings)
    sage: F(ZZ['x','y']) is ZZ['x','y']
    True

class sage.categories.functor.IdentityFunctor_generic(C)

Generic identity functor on any category.

NOTE: This is usually created using IdentityFunctor().

EXAMPLES:

    sage: F = IdentityFunctor(Fields())  # indirect doctest
    sage: F
    The identity functor on Category of fields
    sage: F(RR) is RR
    True
    sage: F(ZZ)
    Traceback (most recent call last):
    ...
    TypeError: x (=Integer Ring) is not in Category of fields

TESTS:

    sage: R = IdentityFunctor(Rings())
    sage: P, _ = QQ['t'].construction()
    sage: R == P
    False
    sage: P == R
    False
    sage: R == QQ
    False

sage.categories.functor.is_Functor(x)

Test whether the argument is a functor.

NOTE: There is a deprecation warning when using it from the top level. Therefore we import it in our doctest.
EXAMPLES:

sage: from sage.categories.functor import is_Functor
sage: F1 = QQ.construction()[0]
sage: F1
FractionField
sage: is_Functor(F1)
True
sage: is_Functor(FractionField)
False
sage: F2 = ForgetfulFunctor(Fields(), Rings())
sage: F2
The forgetful functor from Category of fields to Category of rings
sage: is_Functor(F2)
True
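The three hooks described at the top of this section can be mimicked in plain Python to show how the default call method composes them. This is an illustrative skeleton, not Sage's actual implementation; in particular, representing the domain by a membership predicate and the codomain by a converter is my simplification:

```python
class SimpleFunctor:
    """Sketch of the Functor calling convention: __call__ composes
    _coerce_into_domain and _apply_functor, as the documentation describes."""

    def __init__(self, domain, codomain):
        self.domain = domain      # here: a predicate x -> bool (simplification)
        self.codomain = codomain  # here: a converter x -> object (simplification)

    def _coerce_into_domain(self, x):
        # Default behaviour: raise TypeError if x is not in self's domain.
        if not self.domain(x):
            raise TypeError(f"{x!r} is not in the domain")
        return x

    def _apply_functor(self, x):
        # Default behaviour: conversion into self's codomain.
        return self.codomain(x)

    def __call__(self, x):
        return self._apply_functor(self._coerce_into_domain(x))

# A toy coercion functor: integers into floats
F = SimpleFunctor(lambda x: isinstance(x, int), float)
print(F(2))  # 2.0
```

In Sage itself the domain and codomain are Category objects and the defaults rely on the coercion framework; the point here is only the composition pattern.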
# 11.7 Probability (Page 6/18)

Landing on a vowel. $\frac{1}{2}$
Not landing on blue
Landing on purple or a vowel. $\frac{5}{8}$
Landing on blue or a vowel
Landing on green or blue. $\frac{1}{2}$
Landing on yellow or a consonant
Not landing on yellow or a consonant. $\frac{3}{8}$

For the following exercises, two coins are tossed.

What is the sample space?
Find the probability of tossing two heads. $\frac{1}{4}$
Find the probability of tossing exactly one tail.
Find the probability of tossing at least one tail. $\frac{3}{4}$

For the following exercises, four coins are tossed.

What is the sample space?
Find the probability of tossing exactly two heads. $\frac{3}{8}$
Find the probability of tossing exactly three heads.
Find the probability of tossing four heads or four tails. $\frac{1}{8}$
Find the probability of tossing all tails.
Find the probability of tossing not all tails. $\frac{15}{16}$
Find the probability of tossing exactly two heads or at least two tails. $\frac{5}{8}$

For the following exercises, one card is drawn from a standard deck of $52$ cards. Find the probability of drawing the following:

A club
A two. $\frac{1}{13}$
Six or seven
Red six. $\frac{1}{26}$
An ace or a diamond
A non-ace. $\frac{12}{13}$
A heart or a non-jack

For the following exercises, two dice are rolled, and the results are summed.
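The quoted answers to the coin-toss exercises can be double-checked by enumerating the sample space directly; a small sketch (the helper name `prob` is mine):

```python
from fractions import Fraction
from itertools import product

def prob(n_coins, event):
    """Probability of `event` when tossing n_coins fair coins."""
    outcomes = list(product("HT", repeat=n_coins))
    hits = sum(1 for o in outcomes if event(o))
    return Fraction(hits, len(outcomes))

# Two coins
print(prob(2, lambda o: o.count("H") == 2))  # 1/4, two heads
print(prob(2, lambda o: o.count("T") >= 1))  # 3/4, at least one tail

# Four coins
print(prob(4, lambda o: o.count("H") == 2))  # 3/8, exactly two heads
print(prob(4, lambda o: o.count("T") < 4))   # 15/16, not all tails
```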
Construct a table showing the sample space of outcomes and sums. Each cell shows the outcome and its sum.

|   | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 1 | (1, 1) 2 | (1, 2) 3 | (1, 3) 4 | (1, 4) 5 | (1, 5) 6 | (1, 6) 7 |
| 2 | (2, 1) 3 | (2, 2) 4 | (2, 3) 5 | (2, 4) 6 | (2, 5) 7 | (2, 6) 8 |
| 3 | (3, 1) 4 | (3, 2) 5 | (3, 3) 6 | (3, 4) 7 | (3, 5) 8 | (3, 6) 9 |
| 4 | (4, 1) 5 | (4, 2) 6 | (4, 3) 7 | (4, 4) 8 | (4, 5) 9 | (4, 6) 10 |
| 5 | (5, 1) 6 | (5, 2) 7 | (5, 3) 8 | (5, 4) 9 | (5, 5) 10 | (5, 6) 11 |
| 6 | (6, 1) 7 | (6, 2) 8 | (6, 3) 9 | (6, 4) 10 | (6, 5) 11 | (6, 6) 12 |

Find the probability of rolling a sum of $3$.
Find the probability of rolling at least one four or a sum of $8$. $\frac{5}{12}$
Find the probability of rolling an odd sum less than $9$.
Find the probability of rolling a sum greater than or equal to $15$. $0$
Find the probability of rolling a sum less than $15$.
Find the probability of rolling a sum less than $6$ or greater than $9$. $\frac{4}{9}$
Find the probability of rolling a sum between $6$ and $9$, inclusive.
Find the probability of rolling a sum of $5$ or $6$. $\frac{1}{4}$
Find the probability of rolling any sum other than $5$ or $6$.

For the following exercises, a coin is tossed, and a card is pulled from a standard deck.
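The quoted dice answers follow from the same kind of enumeration over the 36 equally likely rolls in the sample-space table:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))  # all 36 ordered rolls

def prob(event):
    return Fraction(sum(1 for r in rolls if event(r)), len(rolls))

print(prob(lambda r: sum(r) == 3))               # 1/18
print(prob(lambda r: 4 in r or sum(r) == 8))     # 5/12
print(prob(lambda r: sum(r) >= 15))              # 0
print(prob(lambda r: sum(r) < 6 or sum(r) > 9))  # 4/9
print(prob(lambda r: sum(r) in (5, 6)))          # 1/4
```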
Find the probability of the following:

A head on the coin or a club. $\frac{3}{4}$
A tail on the coin or red ace
A head on the coin or a face card. $\frac{21}{26}$
No aces

For the following exercises, use this scenario: a bag of M&Ms contains $12$ blue, $6$ brown, $10$ orange, $8$ yellow, $8$ red, and $4$ green M&Ms. Reaching into the bag, a person grabs 5 M&Ms.

What is the probability of getting all blue M&Ms? $\frac{C(12,5)}{C(48,5)}=\frac{1}{2162}$
What is the probability of getting $4$ blue M&Ms?
What is the probability of getting $3$ blue M&Ms? $\frac{C(12,3)\,C(36,2)}{C(48,5)}=\frac{175}{2162}$
What is the probability of getting no brown M&Ms?

## Extensions

Use the following scenario for the exercises that follow: In the game of Keno, a player starts by selecting $20$ numbers from the numbers $1$ to $80$. After the player makes his selections, $20$ winning numbers are randomly selected from numbers $1$ to $80$. A win occurs if the player has correctly selected $3$, $4$, or $5$ of the $20$ winning numbers.
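The M&M draws and the Keno game are both hypergeometric counts: choosing k special items and n−k others. A sketch reproducing the quoted fractions with math.comb (the helper name `hyper` is mine; note that 12+6+10+8+8+4 = 48 M&Ms in total):

```python
from fractions import Fraction
from math import comb

def hyper(K, N, k, n):
    """P(exactly k successes in n draws, with K successes among N items)."""
    return Fraction(comb(K, k) * comb(N - K, n - k), comb(N, n))

# M&Ms: 12 blue among 48, draw 5
print(hyper(12, 48, 5, 5))  # C(12,5)/C(48,5) = 1/2162
print(hyper(12, 48, 3, 5))  # C(12,3)C(36,2)/C(48,5) = 175/2162

# Keno: 20 winning numbers among 80, player picks 20;
# a "win" is matching exactly 3, 4, or 5 of them
p_win = sum(hyper(20, 80, k, 20) for k in (3, 4, 5))
print(float(p_win))
```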
(Round all answers to the nearest hundredth of a percent.)
Analysis of lumped parameter models for blood flow simulations and their relation with 1D models

ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 38 (2004) no. 4, pp. 613-632

This paper provides new results of consistency and convergence of the lumped parameter (ODE) models toward one-dimensional (hyperbolic or parabolic) models for blood flow. Indeed, lumped parameter models (exploiting the electric circuit analogy for the circulatory system) are shown to discretize continuous 1D models at first order in space. We derive the complete set of equations useful for blood flow networks, new schemes for the electric circuit analogy, the stability criteria that guarantee convergence, and the energy estimates of the limit 1D equations.

DOI: https://doi.org/10.1051/m2an:2004036
Classification: 35L50, 35M20, 47H10, 65L05, 76Z05
Keywords: multiscale modelling, parabolic equations, hyperbolic systems, lumped parameters models, blood flow modelling

@article{M2AN_2004__38_4_613_0,
  author = {Mili\v si\'c, Vuk and Quarteroni, Alfio},
  title = {Analysis of lumped parameter models for blood flow simulations and their relation with 1D models},
  journal = {ESAIM: Mathematical Modelling and Numerical Analysis - Mod\'elisation Math\'ematique et Analyse Num\'erique},
  publisher = {EDP-Sciences},
  volume = {38},
  number = {4},
  year = {2004},
  pages = {613-632},
  doi = {10.1051/m2an:2004036},
  zbl = {1079.76053},
  mrnumber = {2087726},
  language = {en},
  url = {http://www.numdam.org/item/M2AN_2004__38_4_613_0}
}

Milišić, Vuk; Quarteroni, Alfio. Analysis of lumped parameter models for blood flow simulations and their relation with 1D models. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 38 (2004) no. 4, pp. 613-632. doi: 10.1051/m2an:2004036. http://www.numdam.org/item/M2AN_2004__38_4_613_0/
## Algebra: A Combined Approach (4th Edition)

$\sqrt[5]{(x+1)^{3}}$

Step 1: $(x+1)^{\frac{3}{5}}$ is rewritten as $(x+1)^{3\times\frac{1}{5}}$
Step 2: Since the exponent $\frac{1}{5}$ represents the fifth root, the expression simplifies to $\sqrt[5]{(x+1)^{3}}$
Step 3: No further simplification is possible.
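The rule used above, $a^{m/n} = \sqrt[n]{a^m}$, can be spot-checked numerically for a sample value of x:

```python
x = 2.0
direct = (x + 1) ** (3 / 5)              # (x+1)^(3/5)
via_radical = ((x + 1) ** 3) ** (1 / 5)  # fifth root of (x+1)^3
print(abs(direct - via_radical) < 1e-12)
```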
# Homework Help: Torque and angular momentum for a Particle in a Force Field

1. Apr 17, 2008
### so09er
[SOLVED] Torque and angular momentum for a Particle in a Force Field
Find A) the torque and B) the angular momentum about the origin at the time t=3 for a particle in a force field F = (3t^2 - 4t)i + (12t - 6)j + (6t - 12t^2)k, assuming that at t=0 the particle is located at the origin.
I equated r'' = F, then took r x F. Is this the proper way? Then took r x v to find the angular momentum. Any help would be appreciated.

2. Apr 17, 2008
### Dick
Yes, that's the right way. Solve the differential equation r'' = F and find v and r at t=3. You not only need to assume r(0)=(0,0,0), you need some initial condition for r'(0). Is it also (0,0,0)?

3. Apr 18, 2008
### so09er
Thanks for the confirmation
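Following the thread (unit mass, and assuming r(0) = r'(0) = (0,0,0)), each component of F is a polynomial in t, so v and r come from term-wise integration; a sketch of the computation (the coefficient-list representation is my choice):

```python
def integrate(poly):
    """Antiderivative with zero constant term; poly[k] is the coefficient of t^k."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(poly)]

def evaluate(poly, t):
    return sum(c * t ** k for k, c in enumerate(poly))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# F = (3t^2 - 4t)i + (12t - 6)j + (6t - 12t^2)k, coefficients by increasing power of t
F = [[0, -4, 3], [-6, 12], [0, 6, -12]]
V = [integrate(c) for c in F]   # v = integral of F dt (unit mass, v(0) = 0)
R = [integrate(c) for c in V]   # r = integral of v dt (r(0) = 0)

t = 3
r = tuple(evaluate(c, t) for c in R)
v = tuple(evaluate(c, t) for c in V)
f = tuple(evaluate(c, t) for c in F)

torque = cross(r, f)    # tau = r x F
ang_mom = cross(r, v)   # L = r x v
print(torque)   # approximately (-810.0, -607.5, -337.5)
print(ang_mom)  # approximately (-243.0, -303.75, -162.0)
```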
# How do you differentiate g(y) = (2+x)(2-3x) using the product rule?

Mar 26, 2017

$\frac{d}{dx} g(x) = -6x - 4$

(In the answer, h(x) is used where g(x) is traditionally used, to avoid confusion, since the question already defines g(x).)

#### Explanation:

The product rule states that:

$\frac{d}{dx} f(x) \cdot h(x) = f(x) \cdot h'(x) + f'(x) \cdot h(x)$

In this case, $f(x) = 2 + x$ and $h(x) = 2 - 3x$. If we differentiate each of these separately, we get:

$\frac{d}{dx} f(x) = \frac{d}{dx}(2 + x) = 1$

$\frac{d}{dx} h(x) = \frac{d}{dx}(2 - 3x) = -3$

Therefore, using the product rule, we get:

$\frac{d}{dx}(2 + x)(2 - 3x) = (2 + x)(-3) + (1)(2 - 3x)$

Now, all we have left to do is simplify:

$(2 + x)(-3) + (1)(2 - 3x) = (-6 - 3x) + (2 - 3x) = -6x - 4$
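The result can be spot-checked with a central finite difference, which for a quadratic like this one is exact up to rounding:

```python
def g(x):
    return (2 + x) * (2 - 3 * x)

def dg_exact(x):
    return -6 * x - 4

def dg_numeric(x, h=1e-6):
    # central difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

for x in (-2.0, 0.0, 1.5):
    print(x, dg_numeric(x), dg_exact(x))
```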
Title: Stereographic projection of plane algebraic curves onto the sphere
Author: S. Welke
Journal/Anthology: Innovation in Mathematics: Proceedings of the Second International Mathematica Symposium
Year: 1997
Page range: 491-498

Description: Stereographic projection is a conformal map from the xy-plane to the unit sphere $S^2 \subset \mathbb{R}^3$. ... Now consider the problem of representing the image $p(\gamma)$ of an algebraic curve under stereographic projection as a 3D plot. ... This straightforward approach is easily realized with Mathematica.

Subject: Mathematics > Geometry
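A minimal numeric version of the map p, using one common convention (inverse stereographic projection from the north pole (0, 0, 1) onto the unit sphere; the abstract itself does not spell out the convention used):

```python
import math

def to_sphere(x, y):
    """Send a plane point (x, y) to the unit sphere S^2 in R^3,
    projecting from the north pole (0, 0, 1)."""
    d = 1.0 + x * x + y * y
    return (2 * x / d, 2 * y / d, (x * x + y * y - 1.0) / d)

p = to_sphere(1.0, 2.0)
print(math.sqrt(sum(c * c for c in p)))  # ~1.0: the image lies on the sphere
print(to_sphere(0.0, 0.0))               # the origin maps to the south pole
```

Sampling points along an algebraic curve in the plane and mapping each through to_sphere gives exactly the kind of 3D plot the abstract describes.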
Recognitions: Gold Member Staff Emeritus

## Pretty woman ... Yeah, yeah, yeah!

HAMILTON, ONT. - Men's minds may be so rattled by the sight of a pretty woman that they behave irrationally, Canadian psychologists have shown. Scientists know animals prefer short-term gains to longer-term ones, even if the future payoff is larger. Advertisements featuring attractive women take advantage of the effect in people. Researchers at McMaster University in Hamilton, Ont., designed an experiment to investigate sex-related "irrational discounting." Psychologists Margo Wilson and Martin Daly asked 209 students to look at pictures of the opposite sex from the website "Hot or Not." The participants were then asked if they'd prefer receiving an average of $19 immediately or waiting for $25 at some future time. After eyeballing faces of women ranked as pretty on the website, men were more likely to want the immediate payment rather than hang on for a bigger bonus. Women's choices, though, were unaffected by photos of handsome men. The researchers suggest men may want money immediately to impress the ladies. The study appears in this week's online issue of Proceedings of the Royal Society London B. http://www.cbc.ca/stories/2003/12/10/pretty031210

Recognitions: Gold Member Staff Emeritus

The researchers suggest men may want money immediately to impress the ladies.
Are they serious.. money to impress the 2D picture on the screen or what?

Recognitions: Gold Member Staff Emeritus

In men, at least, visual stimulus seems to be hooked directly into deeper behavior sections of the brain. I have posted before my own crank theory why this might be so, and certainly the porn industry would suggest it is true.
So it may be irrational, but it's not unexpected, that men would start courting behavior ("money to impress...") based on nothing more than a picture.

## Pretty woman ... Yeah, yeah, yeah!

Well, according to Professor J. Philippe Rushton (http://www.ssc.uwo.ca/psychology/faculty/rushton.html), Northern men court women through their show of social status and money, while the women show off their sexuality. In other words, Northern men want looks, Northern women want money/social status/intelligence. But Rushton says that equatorial men attract women by showing off their muscles, penis size, and athletic ability, while equatorial women are the same as Northern women, or something like that. The evolutionary explanation for this is that in equatorial regions, high intelligence was not really needed to reproduce successfully, but in the North, larger brains were needed to figure out how to find shelter, make clothing, and find food. Women were better off choosing intelligence over strength in the North. Plus, tribes were highly separated from each other in the North; finding shelter and food was more of a problem than fighting rival tribes. But inter-tribal conflict was much more of a problem in equatorial regions, such as in Africa and the Middle East. Here, strength to fight off rival tribes was more important. Of course, though the above are the natural inclinations of people, it is possible to resist such urges and make wiser reproductive choices. I am part of an equatorial race and I am always thinking about females' naked bodies, but since I am a eugenicist, I will only reproduce with a female of good genetic stock (high intelligence, good behavioral traits, no genetic diseases) and soothe my sexual desires via masturbation.

Carlos Hernandez

Recognitions: Gold Member Staff Emeritus

Forgive me, but the professor's explanation strikes me as self-indulgent fantasizing; just-so stories, in fact. As to your eugenic concerns, have you found anyone who measures up?
Would you accept a woman who met your IQ and health constraints, but was of another race? Remember, you can't affect the population quality of the future if you don't reproduce!

This seems to perpetuate the myth that women don't care about what a guy looks like. Before you even have any chance with a woman, you have to pass her physical attraction test, at least in the beginning.

Recognitions: Gold Member Science Advisor Staff Emeritus

It's usually "common sense" that women are not as concerned about looks as are men -- I don't have any real numbers to back it up, but it sure does seem to be true in my experience. (Hey, don't throw any tomatoes!) In fact, I once saw a website gallery of photographs of beautiful women with horrifically ugly boyfriends. Yes, women do generally care about whether or not a man is attractive -- but much less so than men care about whether or not a woman is attractive.
- Warren

Originally posted by selfAdjoint: Forgive me, but the professor's explanation strikes me as self-indulgent fantasizing; just-so stories, in fact.

His site which I posted has ample physical and psychological data. But I do understand your emotional reaction; most of us are like that when presented with an argument antagonistic to our individual world views. It takes time getting used to new data.

As to your eugenic concerns, have you found anyone who measures up? Would you accept a woman who met your IQ and health constraints, but was of another race? Remember, you can't affect the population quality of the future if you don't reproduce!

IQ and health are just two of the factors; I also want a mate with good personality/behavioral traits, such as high conscientiousness, typical intellectual engagement, healthy ethnocentrism, balanced altruism, and creativity. Once I find such a mate, I will produce 3 offspring. With regards to race, I am only attracted to East Asian, White, and Hispanic women.
Carlos Hernandez Originally posted by The_Professional This seems to perpetuate the myth that women don't care about what a guy looks like. Before you even have any chance with a woman, you have to pass her physical attraction test, at least in the beginning. First, according to the professor, you can't generalize all women as the same, but you must divide them into Northern women (East Asians and Europeans) and then Equitorial women (South and South-East Asians, Arabs, Africans, Mullatos, etc.) What Professor Rushton says is that the Northern women place more emphasis on social status/intelligence/wealth than do equitorial women. But yes, all women do value good looks too, after all, what female wants to have sex with a very hideous male? Carlos Hernandez Originally posted by chroot And if she only wants two, you'll beat her. Right? - Warren Beating my wife is illegal and I obey the laws. Second, such things as number of offspring desired is something we must agree to before getting married. If we have different idealist viewpoints, we would not get married. Carlos Hernandez Recognitions: Gold Member Staff Emeritus Originally posted by Carlos Hernandez If we have different idealist viewpoints, we would not get married. A) Why get married then? It seems you've already assigned your wife a purpose: as a concubine to produce your three offspring. You certainly don't need to marry her. Just use her as a breeding machine, then throw her out the door and raise the kids yourself. After all, she'll probably try to get involved in decisions about their parenting, and that would be intolerable! B) You don't date much, do ya? - Warren Originally posted by chroot A) Why get married then? It seems you've already assigned your wife a purpose: as a concubine to produce your three offspring. You certainly don't need to marry her. Just use her as a breeding machine, then throw her out the door and raise the kids yourself. 
After all, she'll probably try to get involved in decisions about their parenting, and that would be intolerable! B) You don't date much, do ya? - Warren Please learn the rules of rational debate at http://www.infidels.org/news/atheism/logic.html Carlos Hernandez Recognitions: Gold Member Science Advisor Staff Emeritus I don't recall engaging in a debate. I recall making fun of you for your views on the utility of women. - Warren Originally posted by chroot I don't recall engaging in a debate. I recall making fun of you for your views on the utility of women. - Warren Please don't reduce the quality of this forum by engaging in ad hominem. Second, I consider both males and females as utilitarian objects. I am a pragmatist and stoic, you obviously are a sentimentalist. To each his own. Carlos Hernandez Recognitions: Gold Member Staff Emeritus Originally posted by Carlos Hernandez I consider both males and females as utilitarian objects. I am a pragmatist and stoic And, like I said, that attitude probably means you'll never get laid -- much less the requisite three times. - Warren Originally posted by Carlos Hernandez Second, I consider both males and females as utilitarian objects. I am a pragmatist and stoic, you obviously are a sentimentalist. To each his own. Carlos Hernandez Heil Heil ve vait for the 4th Reich... Originally posted by chroot And, like I said, that attitude probably means you'll never get laid -- much less the requisite three times. - Warren
# Benchmark ODE solver: GSL vs Boost Odeint library

For our neural simulator, MOOSE, we use the GNU Scientific Library (GSL) for random number generation, for solving systems of non-linear equations, and for solving ODE systems. Recently I checked the performance of the GSL ODE solver vs the Boost ODE solver, both using Runge-Kutta 4. Boost-odeint outperformed GSL by approximately a factor of 4. The numerical results were the same. Both implementations were compiled with the -O3 switch. Below are the numerical results. In the second subplot, a molecule is changing its concentration (calcium), and on the top, another molecule is produced (CaMKII). This network has more than 30 reactions and approximately 20 molecules. GSL took approximately 39 seconds to simulate the system for 1 year, while Boost-odeint took only 8.6 seconds. (This turns out to be a corner case.)

Update: If I let both solvers choose the step size by themselves for a given accuracy, Boost usually outperforms the GSL solver by a factor of 1.2x to 2.9x. These tests were done during development of a signaling network which has many, many reactions. Under no circumstances I have tested was the Boost ODE solver slower than the GSL solver.

# Simulating Random Walks using Langevin Equation

Random walks (Brownian motions), in addition to their theoretical potency (describing the macro-scale behavior of a gas starting from a micro-scale description), also describe the behavior of many processes in nature. A few of them (genetic networks, protein expression driven by mRNA) have been described or predicted well using stochasticity. Moreover, they are ideal noise sources caused by thermal fluctuation and are often found in natural processes. To get a solid foundation in this subject, see the classic "Stochastic Processes in Physics and Chemistry" by Van Kampen, and also "Handbook of Stochastic Methods" by Gardiner.
Random walks (in fact any stochastic process) can be described by the Fokker-Planck equation: it describes how a probability density function evolves over time. An equivalent is the Master Equation, which is much easier to visualize and solve (using the Gillespie algorithm, a variant of Markov methods). The Master Equation can describe "almost" all of chemistry. In fact, Einstein built his theory of Brownian motion by writing down a variant of the Fokker-Planck equation. After Einstein published his work, a Frenchman, Paul Langevin, derived the same theory using a totally different approach, which is "infinitely simpler" than Einstein's approach. I'd highly recommend reading the paper, which does not require more than intermediate mathematics: http://scitation.aip.org/content/aapt/journal/ajp/65/11/10.1119/1.18725

In this note, I present a Python recipe that solves the Langevin equation to simulate a random walk. The Langevin approach is computationally extremely cheap and works extremely well in practice. Of course there is a cost involved: Fokker-Planck gives you the "exact" mean and variance (if you can solve it), while with the Langevin approach you need to produce many trajectories to see the mean and variance converging to fixed values. But for simulating biological and physical processes, you don't worry too much about the overall mean and variance; one trajectory is good enough for introducing noise sources. The Langevin equation looks something like the following:

$dx = -f(x)\, dt + \alpha\, g(x) \sqrt{dt}$

where $\alpha$ is normally distributed with mean 0 and variance 1. Forget f(x) and g(x); the out-of-place thing about this equation is the square root of dt on the right-hand side. This led to stochastic calculus and stochastic differential equations. For the sake of "web and coding", the problem statement and a Python recipe which simulates this equation can be found here. 5 model trajectories of a random walk in 1D generated by this equation are attached with this note. Many others can be generated using the script solve.py.
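A minimal version of such a recipe, using the Euler-Maruyama discretization of the equation above (the defaults f = 0 and g = 1 give a pure random walk; the step count, dt, and number of walkers are arbitrary choices of mine, not values from solve.py):

```python
import random

def langevin_walk(steps, dt, f=lambda x: 0.0, g=lambda x: 1.0, x0=0.0, rng=random):
    """Integrate dx = -f(x) dt + alpha * g(x) * sqrt(dt), with alpha ~ N(0, 1)."""
    x, path = x0, [x0]
    sqdt = dt ** 0.5
    for _ in range(steps):
        x += -f(x) * dt + rng.gauss(0.0, 1.0) * g(x) * sqdt
        path.append(x)
    return path

random.seed(0)
# 1000 independent walkers, each run to t = 500 * 0.01 = 5
finals = [langevin_walk(500, 0.01)[-1] for _ in range(1000)]
var = sum(w * w for w in finals) / len(finals)
print(var)  # should be close to t = 5, since Var[x(t)] = t for the pure walk
```

For the pure random walk the sample variance of x(t) over many trajectories approaches t, which gives a quick sanity check on the integrator.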
Mean is as usual, and the standard deviation relates to the diffusion coefficient.

PS: File solve.py implements the assignment. To produce these trajectories, run the following command:

$ python solve.py 1

To get more about "Randomness in Biology" visit http://courses.ncbs.res.in/mod/resource/view.php?id=374 (login as guest)

NOTES:
1. Google for Sriram Ramaswamy's Resonance article on Einstein's derivation; a nice, readable article. Do check the original Langevin paper linked above.
2. It is instructive for those who are interested in control theory to read about biological processes: how a cell controls its size, and the other macro-molecules inside it. Biology is an unexplored gold-mine for control theory. It is also worth thinking about how noise is minimized in cellular processes.

# Rotation 1, Week 0: Understanding the problem

There is not much one can do in a 6-week rotation in a lab, but when Madan Rao asked me to read some literature and, if possible, do something, I suggested that I would rather work on a problem and read if the need arises. Reading without working is a pretty stupid thing to do at my age. It might have some use for an undergraduate, at least for his/her exam. In a nutshell, just reading is another way of passing time without feeling guilty about it. Knowledge is overrated when one wants to work with fundamental ideas. Fortunately Madan saw clearly that I don't want to become Mr. Know-It-All or some sort of Pundit or Mahant with an eye on a grand problem, but rather a journeyman who solves the problem at hand with whatever best tools are available at his disposal and builds his expertise by practicing his craft. He politely suggested I read a paper over coffee. Not a month ago, I was talking to Somya Mani about my bird's-eye view of networks in biology and what I lack to deal with them: how to inject probabilistic variation into a well-defined and well-behaved system.
And to these ends, I want to take the course offered by Mukund Thattai this semester at NCBS Bangalore, "Randomness in Biology". After going through the paper which Madan suggested I read before further discussion of what can be done during my rotation, I am happy about the problem I encountered by chance: how cells control a variable when the input is mixed with random noise!

****

Cells are always trying to control certain processes. Some variables, such as cell size, are kept constant in a very variable environment. One can formulate this as a control-system problem and ask what a cell can and can't do to suppress noise or variability in a parameter under control. Bounds on the performance of a control system under noisy conditions were well studied by many early pioneers of system science, but this is a somewhat different problem, since molecules inside cells go through random births and deaths (I am still not quite clear about the difference). This paper [1] establishes bounds on networks with negative feedback. This is interesting because negative feedback is often used to minimize the influence of noise on the variable under control. In this paper, the authors consider a class of systems, generic enough to describe many biological networks, and build a mathematical framework which can tell us what a cell (or a network) cannot do when a given amount of noise is present in its environment. The general structure of the system is captured by the following three reactions:

$x_1 \xrightarrow{u(x_2(-\infty,t))} x_1 + 1$, $x_1 \xrightarrow{x_1/\tau_1} x_1 - 1$, $x_2 \xrightarrow{f(x_1)} x_2 + 1$

The variable $x_1$ is under control; its production rate is set by a control daemon $u$ that knows the past and present of the signal $x_2$. The variable $x_1$, in turn, controls the production of $x_2$, since the rate of its production is a function $f$ of $x_1$. $\tau_1$ is the mean lifetime of $x_1$. The system under investigation is a system with negative feedback and signaling.
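The discrete birth-and-death reactions above can be sampled exactly with the Gillespie algorithm. The sketch below drops the feedback loop and uses a constant birth rate k, so the stationary mean is k*tau; this is only to illustrate the sampling method, not the controlled system of the paper:

```python
import random

def gillespie_birth_death(k, tau, t_end, x0=0, rng=random):
    """Sample x(t) with birth propensity k and death propensity x / tau."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a_birth, a_death = k, x / tau
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)          # waiting time to the next event
        if rng.random() < a_birth / a_total:   # pick which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return times, states

random.seed(1)
times, states = gillespie_birth_death(k=50.0, tau=1.0, t_end=200.0)
burn = len(states) // 2                      # discard the transient
mean_x = sum(states[burn:]) / len(states[burn:])
print(mean_x)  # close to the stationary mean k * tau = 50
```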
The authors describe part of the system using continuous differential equations but keep the controller and signaling discrete. They claim that it is extremely hard for a network to reduce noise when the signal $x_2$ is made less frequently than the controlled component $x_1$: reducing the standard deviation of $x_1$ tenfold can require increasing the birth rate of $x_2$ by a factor of 10,000. In short, the authors have developed a framework under which one can say something about what a cell cannot do under noisy conditions. I am still trying to understand the theory they have used to establish these bounds (see the supplementary material of reference 1). I am also writing a simulator using SystemC/C++ to play with this idea. REFERENCES [1] Lestas, Ioannis, Glenn Vinnicombe, and Johan Paulsson. 2010. “Fundamental Limits on the Suppression of Molecular Fluctuations.” Nature 467 (7312): 174–178. doi:10.1038/nature09333. http://dx.doi.org/10.1038/nature09333.
## Introduction

The possession of language distinguishes humans from other animals. It is true that other animals also engage in vocal communication; for example, vervet monkeys can convey some simple information by using alert calls1. However, the vocal communication of such animals lacks the complex grammar and high expressiveness that characterize human language. Why do only humans have sophisticated language? This is one of the core questions in understanding human identity. In this study, we explored the evolution of language in the context of the biological evolution of the fundamental traits underlying communicative interaction. We focused on two fundamental problems concerning language evolution: how communicative ability can evolve directionally under frequency-dependent selection, and whether the cultural evolution of language and the biological evolution of traits underlying communicative interactions can coevolve. One of the fundamental assumptions of most studies on language evolution from the biological viewpoint is that the fundamental traits underlying communicative interactions evolved under directional selection. These traits can be modified incrementally to increase the benefit from communicative interactions. At the same time, it has been assumed that such traits must be shared between individuals for communication to succeed. Accordingly, at least some of the selection is positively frequency-dependent. This may obstruct evolution based on directional selection. We believe that this captures a fundamental and general problem in the evolution of communicative traits. For example, in the context of language evolution, it has been pointed out that mutations in grammar cannot be beneficial because the peers of an individual with a grammar mutation may not understand the mutant form2. Nature’s solution to this challenging problem can be found in the evolution of phenotypic plasticity3.
Phenotypic plasticity refers to the variability in a phenotype obtained from a given genotype resulting from development in different environments4. In the field of evolutionary biology, ontogenetic adaptation (individual learning) based on phenotypic plasticity has recently been recognized as one of the key factors that bring about the adaptive evolution of novel traits5,6. Wund summarized eight hypotheses on how plasticity may influence evolution (including several pieces of empirical support), focusing mainly on adaptation to new environments7. For example, it has been suggested that phenotypic plasticity promotes persistence in a new environment and that a change in the environment can release cryptic genetic variation via phenotypic plasticity, which in turn impacts the rate of the evolutionary response. Zollman and Smead8 analyzed simple models of language evolution based on Lewis’s signaling game and the prisoner’s dilemma game. They observed that the presence of plastic individuals alters the trajectory of evolution by directing the population away from non-adaptive signalling and toward optimal signalling. They termed this the “Baldwin optimizing effect.” Suzuki and Arita showed that such an adaptive shift can occur repeatedly by using a computational model of the evolution of communicative traits (e.g., signaling and receiving behaviors9,10, channels11) that incorporates behavioral plasticity. These studies have indicated that learning may be an important driving force for adaptive evolution in the context of communicative interactions, although they did not explicitly deal with the cultural evolution of language. The second problem that we considered is the relationship between two different evolutionary processes: biological evolution and cultural evolution12. Evolutionary scholars have converged on the idea that the cultural and innate aspects of language were tightly linked in a process of gene–culture coevolution13.
However, the relationship between genes and language is extremely complex and shrouded in controversy because these mechanisms interact with each other despite the difference in their time scales14. Substantial knowledge regarding gene–culture coevolution in general has been acquired from the viewpoint of genetic analysis15. One common argument is that language changes rapidly and is a “moving target”; therefore, it does not provide a stable environment for biological adaptations16. This argument is almost parallel to the assumption that biological evolution has become “frozen,” as if language evolution works on a “fixed” biological background17. For example, Chater et al.18 used a computational model to show that there are strong restrictions on the conditions under which the Baldwin effect (typically interpreted as a two-step evolution of the genetic acquisition of a learned trait without a Lamarckian mechanism19) can embed arbitrary linguistic constraints, and that the effect emerges only when language provides a stable target for natural selection. However, problems have been posed for the underlying assumptions of this “moving target” argument. With regard to biological evolution, Számadó et al.20 summarized several ways in which natural selection can adapt to moving targets. The simplest way is genetic evolution. In general, the ability of a population to keep pace with change depends on both the size of the population and the variation present. There are indeed many examples showing that adaptation can be very fast if variability is present. Furthermore, it has been reported that the rate of genomic evolution during the last 40,000 years has been more than 100 times higher than the characteristic rate for most of human evolution21. The second way is, again, phenotypic plasticity.
When natural selection acts to preserve adaptive phenotypes, it can lead to genetic change and to the fixation of the preserved adaptive phenotype through several evolutionary processes, including the Baldwin effect22. The third way is by means of systems and organs that have evolved to cope with fast-changing environments, much as the immune system is capable of tracking most pathogens. With regard to cultural evolution, Számadó et al.20 summarized two possible reasons why cultural evolution may not present a moving target that biological evolution cannot track; this is partly because the rate of linguistic change depends on the frequency of use and the population size. First, even contemporary linguistic changes need not be that fast. Second, past rates of linguistic change may have been much slower. There is also a seemingly counterintuitive claim23 that a “moving target” should increase the rate of evolution, because temporally varying goals have been shown to substantially accelerate evolution compared with evolution under a fixed goal. Another factor is the measurement of evolutionary rates. In general, there is an inverse correlation between the rate of evolution and the time interval used to measure it. This can explain why the pace of cultural evolution, which is usually measured over shorter intervals, appears faster24. We discuss this factor in detail later, in the section Evolutionary Rates. Rather than viewing language as a monolithic and independent entity, modern researchers typically break it down into its component mechanisms and analyze these independently25. These studies tend to be one-sided. Indeed, there has been relatively little work investigating the two types of adaptation within a single framework26.
Based on the above considerations and hypotheses, in this study we used a coevolutionary framework that allows us to integrate biological and cultural evolution to develop a comprehensive understanding of language evolution. Figure 1 illustrates a general picture of the coevolution, in which there are two intertwined adaptation processes: language adapts to the brain, and the brain adapts to language. On the one hand, a language is continuously changed by its users, which brings about linguistic variation, much as mutation brings about genetic variation. Language variants that make a greater fitness contribution to their users in terms of, for example, learnability and expressiveness tend to survive and spread in the population of languages. On the other hand, having innate linguistic abilities that equip an individual to handle more sophisticated language variants better than others provides a fitness advantage. In addition to biological evolution and cultural evolution, we considered individual learning based on phenotypic plasticity as a third adaptive system, which was assumed to play a key role in considering the two fundamental problems that were the focus of this study. We present a minimal computational model with a one-dimensional linguistic space in which language can evolve through biological evolution, cultural evolution and individual learning. It is a refinement of our previous models with a two-dimensional linguistic space27,28,29; this simplification made possible the quantitative analyses of the model presented in this paper, based on evolutionary rates and transfer entropy.

## Models

We propose an integrated computational framework for investigating possible scenarios of the genetic and cultural evolution of language. This framework allowed us to study coevolutionary interactions between languages and the agents who use the languages for communication.
### Language and Agent

There are a finite number of agents and languages in a one-dimensional space, and agents can communicate with each other by using their shared languages (Fig. 2). Note that the number of agents is N and does not change through a trial, while the number of languages can vary. Each language is defined as a point in the space. The position l x (≥0) of a language x in the space represents the expressiveness of the language, which is the expected fitness benefit of a successful communication using that language. Each agent i is also represented as a point in the same space, together with an interval surrounding the point. The point represents the agent’s innate language ability, whose position is determined by its genotype a i (≥0). a i also represents the language that can be learned by agent i with the minimum innate cost. In other words, the agent can use the language located at a i in the linguistic space without learning, if it exists in the current population of languages. The interval represents the agent’s linguistic plasticity, determined by its genotype p i . The agent can use any languages that exist in its plasticity interval [a i  − p i , a i  + p i ] through the learning processes of those languages, by paying a certain cost (explained below in detail).

### Linguistic Interactions

In each generation, there is a chance for communication between the two agents in each of all possible pairs of agents. If there are one or more languages that can be used by both agents, the agents can communicate successfully by using one of their shared languages. Specifically, the set of shared languages SLi,j between agents i and j is defined as follows: $$S{L}_{i,j}=\{x|({a}_{i}-{p}_{i}\le {l}_{x}\le {a}_{i}+{p}_{i})\cap ({a}_{j}-{p}_{j}\le {l}_{x}\le {a}_{j}+{p}_{j})\}.$$ (1) The agents can communicate successfully if SLi,j ≠ ∅.
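Eq. (1) is simply an interval intersection and can be transcribed directly (a sketch; the function name is my own):

```python
def shared_languages(a_i, p_i, a_j, p_j, languages):
    """Eq. (1): languages lying in both agents' plasticity intervals
    [a_i - p_i, a_i + p_i] and [a_j - p_j, a_j + p_j]."""
    lo = max(a_i - p_i, a_j - p_j)   # left edge of the intersection
    hi = min(a_i + p_i, a_j + p_j)   # right edge of the intersection
    return [l_x for l_x in languages if lo <= l_x <= hi]
```

Communication between agents i and j succeeds exactly when this list is non-empty.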
The fitness of each agent is determined by summing the expressiveness of the languages used in its successful communications with others and the cost of its linguistic plasticity. The fitness of an agent i is defined as follows: $$fitnes{s}_{i}={W}_{1}\cdot \sum _{j\,\in \,S{C}_{i}}{l}_{i,j}-{W}_{2}\cdot {({p}_{i}+1)}^{{a}_{i}},$$ (2) where W1 and W2 are the weights of the two components of the fitness function. The first component represents the benefit from successful communicative interactions. SC i is the set of agents with which the focal agent i successfully communicates by using a shared language with the position li,j. Note that, if the communicating agents share two or more languages, one of the shared languages is randomly selected and used for calculating the fitness. The second component represents the cost of linguistic plasticity. It is determined by the size of the agent’s plasticity interval (p i ). The cost increases exponentially as the agent’s innate ability increases. This reflects a situation in which a greater innate ability makes it more costly to maintain the learning ability (e.g., the cost of maintaining a larger brain). Overall, this definition of the fitness means that agents who can communicate with more agents by using more expressive languages acquired with less linguistic plasticity will have higher fitness.

### Biological Evolution of Language Ability

After communicative interactions among agents, the biological evolution of agents occurs. The population of the next generation is generated by repeating the following procedure N times: a parent agent for the next generation is selected by roulette-wheel selection proportional to fitness (i.e., the probability that an agent is picked as a parent is proportional to its fitness), and it produces an offspring that has the same genotypes as those of its parent.
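The fitness of Eq. (2) and the roulette-wheel selection just described can be sketched as follows (my own illustrative code; helper names are mine, and I assume non-negative fitness values for the selection, since the paper does not say how negative fitness is handled):

```python
import random

def agent_fitness(i, a, p, languages, W1=3.0, W2=10.0, rng=random):
    """Eq. (2): W1 * (sum of expressiveness of the languages used in
    successful communications) - W2 * (p_i + 1) ** a_i."""
    benefit = 0.0
    for j in range(len(a)):
        if j == i:
            continue
        lo = max(a[i] - p[i], a[j] - p[j])
        hi = min(a[i] + p[i], a[j] + p[j])
        shared = [l for l in languages if lo <= l <= hi]
        if shared:  # a successful communication uses one shared language at random
            benefit += rng.choice(shared)
    return W1 * benefit - W2 * (p[i] + 1.0) ** a[i]

def roulette_select(fitnesses, rng=random):
    """Pick a parent index with probability proportional to fitness."""
    total = sum(fitnesses)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for idx, f in enumerate(fitnesses):
        acc += f
        if r <= acc:
            return idx
    return len(fitnesses) - 1
```

For two agents at a = 1.0 with p = 0.5 sharing a single language at 1.0, the fitness is 3·1 − 10·1.5 = −12, showing how the plasticity cost can dominate sparse communication.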
Each genotype of the offspring is mutated with the probability P m . A mutation process adds a small random value R(0, 2) to the original genetic value of an offspring i (a i and p i ), where R(μ, σ2) is a normal random number with mean μ and variance σ2.

### Cultural Evolution of Language

Subsequently, the population of languages evolves according to the following four cultural processes: cultural change, division, extinction, and fusion.

#### Cultural change

We define a cultural change in languages as a change in the position of a language in the linguistic space due to the use of the language during successful communication among agents. Basically, a successful communication between a pair of agents moves the language used in the communication toward the agents’ innate linguistic abilities, as shown in Fig. 3(a). Specifically, the direction and amount of displacement (d x ) of a language x is calculated as follows:

$${d}_{x}={d}_{x+}+{d}_{x-},$$ (3)

$${d}_{x+}=\sum _{i\in S{A}_{x}}\begin{cases}F/{n}_{i} & {\rm{if}}\,{a}_{i} > {l}_{x},\\ 0 & {\rm{otherwise}},\end{cases}$$ (4)

$${d}_{x-}=\sum _{i\in S{A}_{x}}\begin{cases}-F/{n}_{i} & {\rm{if}}\,{a}_{i} < {l}_{x},\\ 0 & {\rm{otherwise}}.\end{cases}$$ (5)

dx+ and dx− are the total amounts of displacement toward the positive and negative directions, respectively. SA x is the set of agents that used the language x in their successful communications at least once. n i is the number of languages that agent i is able to use, i.e., the languages that are in the plasticity interval of agent i. F is the parameter that determines the amount of displacement. Each language x moves by d x in the linguistic space.

#### Division

Each language is divided into two languages if it is pulled strongly in opposite directions simultaneously. Specifically, the language x is divided when min(dx+, |dx−|) > F d .
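The displacement and division rules can be sketched as follows (my own code; I take d_{x-} to be negative, so that a language pulled both ways divides when min(d_{x+}, |d_{x-}|) exceeds F_d — this sign convention is my reading of the equations):

```python
def displacement(l_x, user_positions, user_n_langs, F=0.001):
    """Eqs. (3)-(5): each agent that used language x pulls it toward its own
    innate position a_i with weight F / n_i; returns (d_plus, d_minus)."""
    d_plus = sum(F / n for a, n in zip(user_positions, user_n_langs) if a > l_x)
    d_minus = -sum(F / n for a, n in zip(user_positions, user_n_langs) if a < l_x)
    return d_plus, d_minus

def divides(d_plus, d_minus, F_d=0.008):
    """Division rule: the language splits if pulled strongly in both directions."""
    return min(d_plus, abs(d_minus)) > F_d
```

For example, a language at 1.0 used by one agent at 1.5 (using 1 language) and one at 0.5 (using 2 languages) gets d_plus = F and d_minus = −F/2, so it drifts upward without dividing at the default thresholds.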
Instead of the focal language x being removed, two new languages are created and placed at l x  + dx+ and l x  + dx− in the linguistic space.

#### Extinction

Any languages that are not used by any agents in the current generation will not appear in the next generation; in other words, they are removed from the linguistic space. This represents language extinction.

#### Fusion

When two languages are sufficiently close, they are united into a single language. This process occurs when the distance between the two languages is smaller than the threshold T f . The united language is placed at the midpoint between the two languages. Through the above processes, the populations of agents and languages coevolve.

### One dimensional expression of language

Language is a communication tool but also a cognitive tool. Indeed, in the brain, utilizing language-related circuits, some form of linguistic knowledge is linked to the external world by producing/perceiving sounds and gestures and, at the same time, is connected to the inner mental world composed of concepts, intentions and reasoning30. We can also assume that, in general, the collective adaptivity of language is related not only to communicative but also to cognitive aspects. If so, the one-dimensional space in the model might include the cognitive aspects of language (e.g., recursive ability or the ability to merge), although we focus on the communicative aspect of language and define the dimension as expressiveness in this paper. In our previous study29, we constructed a two-dimensional model based on a polar coordinate system in which the distance from a language to the origin represents its expressiveness and its angle represents its structural character. The similarity in structural character determines communication success between agents but does not affect the fitness value from that successful communication.
We observed a “linguistic burst” in which many languages with structural differences emerged from successful communication among agents sharing a few languages in the initial population located at the origin. However, after a few hundred generations, the structural properties of languages converged to a certain value, and a cyclic coevolutionary process began to occur, similar to the one discussed in this paper. This paper proposes a simplified model to focus more on the evolutionary rates of the biological and cultural processes and their directional effects, using transfer entropy.

## Results

We conducted evolutionary experiments for 10,000 generations. The following parameters were used: N = 2000, W1 = 3, W2 = 10, P m  = 0.001, F d  = 0.008, T f  = 0.02, and F = 0.001. The initial values of a i , p i , and l x were randomly selected from [0, 1]. We selected these values so that the division and fusion of languages occurred continuously. The effects of changing the parameters W2, F, F d and T f are described later in this section. Figure 4 shows an example run of this experiment. We observed cyclic coevolutionary processes of languages and agents, which are summarized in Fig. 5. As an example, consider the evolutionary process from the 6500th to 7400th generations (i–iii). (i) Around the 6500th generation, we observed agents with smaller plasticity intervals clustered densely together. In this situation, there was only weak selection pressure on the innate language ability because agents could already communicate successfully. (ii) This lack of selection pressure led innate language abilities to be scattered by neutral evolution around the 7000th generation. The number of languages increased during term (ii) because the increased diversity of the agents created more linguistic changes. (iii) Around the 7200th generation, some agents with a more expressive innate language ability and lower phenotypic plasticity appeared and occupied the population quickly.
Instead of communicating with many agents by using less expressive languages, these agents communicated with a limited number of neighbors by using more expressive languages while incurring only a small plasticity cost. This resulted in a net relative fitness gain. At the same time, the number of languages increased because the languages were dragged by two groups: the group of agents with a more expressive language ability and the group of agents with a less expressive language ability. Afterwards, the language population evolved toward the languages used by the former group of adaptive agents via a process of cultural evolution arising from the increased use of the more expressive languages. Languages distant from the agents’ (shifted) innate language abilities became extinct, which led to a gradual decrease in the number of languages. As a result, the average expressiveness of language caught up with the innate language abilities, which means that the populations of agents and languages moved in an outward direction in the linguistic space, and their evolutionary process went back to the initial state of the cycle. We conducted experiments to study the effects of the model’s parameters on the evolutionary process. First, to investigate the effect of the learning cost, we conducted experiments with various weights for the learning cost W2. We found that the duration until the population reached the coevolution phase increased with W2. A higher cost of learning placed the population under stronger selection pressure at low plasticity. Because individuals with low plasticity were less robust against mutations and often failed to leave offspring, the speed of evolution dropped. The rate of increase in the expressiveness of languages was inversely proportional to W2 due to the increased duration until the start of the coevolutionary phase.
For example, in the case of no cost (W2 = 0), the duration was quite short: the coevolutionary phase started after about 100 generations. In the case of a huge learning cost (W2 = 100000), the evolution of the language and population stagnated around the origin because individuals could not increase their plasticity at all. Note that higher values of W2 led to a shorter cycle period. This is thought to be because the rapid decrease in phenotypic plasticity (ii) tended to occur more often as the cost of plasticity increased. We also investigated the effects of F, which determines the amount of displacement of languages. Because this parameter is used in the processes of cultural change and language division, we assumed the condition that the threshold for division F d is proportional to F (F d  = F × 300) in order to focus mainly on the effects of change in F on the cultural change process. Experiments with different settings of F (from 1.0 × 10−7 to 1.0 × 10−4) showed that the chance of all languages dying off during the early generations increased with F. This is because, at large F, when many individuals are changing a language, the amount of displacement of languages tends to be so large that languages are displaced outside the plasticity range of the agent population. In particular, in trials with high F (1.0 × 10−5), successful evolution was only observed when the initial population had high plasticity by chance. At extremely high F (1.0 × 10−4), all trials failed in about 10 generations. Concerning F d and T f , the thresholds for the division and fusion of languages, respectively: the smaller F d is, the less frequently division happens, and the bigger T f is, the more frequently fusion happens. In this case, evolution stagnates because few languages can mediate communication between agents. Conversely, when F d is too large or T f is too small, many languages tend to appear in one simulation step, leading to interesting behavior such as a “linguistic burst”.
We do not focus on this here, in order to simplify the following analyses.

### Analysis

#### Evolutionary Rates

Cultural linguistic change is often assumed to be significantly faster than biological change18. However, the rate of evolution is known to depend on the time interval over which the rates are measured31. Rates of evolution can be measured in darwins (d), a standardized unit of change by factors of e, the base of the natural logarithm, per million years31.

$$d=(\mathrm{ln}({v}_{2})-\mathrm{ln}({v}_{1}))/{\rm{\Delta }}t$$ (6)

where v1, v2, and Δt are the mean trait value calculated at time t1, the mean trait value calculated at time t2, and the time interval between them, respectively. Figure 6 illustrates an example of the measurement-interval dependence of the evolutionary rate of a quantitative trait. If the evolutionary process is directional (a-i), then the rates are stable irrespective of the measurement time interval (a-ii). However, if the evolutionary process is less directional or fluctuating (b-i), the rates are inversely correlated with the measurement time interval (b-ii). This is because the fluctuation strongly affects the measured rate when the interval is short, while the general trend affects the rate when the interval is long. Perreault compared the rates of cultural and biological evolution by analyzing archaeological data. He found that the rates of cultural evolution are also inversely correlated with the measurement interval and concluded that cultural evolution is faster than biological evolution even when such a correlation is taken into account24. It is unclear, however, to what extent his findings can be extended to other domains of human culture, including language; this had never been tested empirically before this study. We focused on this comparison in the context of language evolution, where the main challenge is the lack of empirical data.
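Eq. (6) and the interval dependence it produces can be illustrated with a toy computation (my own sketch; here Δt is measured in generations rather than millions of years):

```python
import math

def darwins(v1, v2, dt):
    """Eq. (6): rate of change in factors of e per unit time."""
    return (math.log(v2) - math.log(v1)) / dt

def rates_at_interval(series, k):
    """Absolute rates measured over every window of k samples, to probe
    the measurement-interval dependence of the rate."""
    return [abs(darwins(series[i], series[i + k], k))
            for i in range(len(series) - k)]
```

For a purely fluctuating trait (case b in Fig. 6), the measured rate shrinks as the window grows: a series alternating between 1 and e gives rate 1 at k = 1 but rate 0 at k = 2.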
Language evolution has at least two features which restrict its evolutionary rate: (1) language evolution is restricted by the capacity of the human brain or of the organs related to language use, which differs from general cultural evolution; (2) language speakers must share the conventions of a language to communicate with each other, which restricts language evolution. Our model reflects these features. Thus, we were able to consider the relationship between the two evolutionary processes in this context by performing an informational analysis on our simulation results. We measured the evolutionary rates of languages and agents to clarify the relationship between the rates of the two evolutionary processes by focusing on their measurement-interval dependence. Here, we calculated biological rates by using the agents’ mean trait values in the population in each generation, with a i of the agents being used as the trait value. The cultural rates were calculated by focusing on each occurrence of a cultural process as follows. (a) Cultural change: l x of the changed language was regarded as v2, and the corresponding value before the change was regarded as v1. (b) Division: in this case, two rates were calculated because one language was divided into two. The two l x values of the divided languages were regarded as v2, and the l x of the language before division was regarded as v1. (c) Fusion: this event also generated two rates because two languages fused into one. The l x value of the fused language was regarded as v2, and the two l x values of the languages before fusion were regarded as v1. (d) Extinction: no rate was calculated. We used various lengths, in generations of our model, as Δt in order to observe the effects of the measurement interval on the evolutionary rates of languages and agents. Figure 7 shows the evolutionary rates of language and biological evolution. We measured the rates in 18 experiments while changing Δt over ten intervals.
The x-axis represents the time interval (Δt), and the y-axis represents the evolutionary rates of language and biological evolution (darwins). The rates of language evolution tended to be higher than those of biological evolution. However, the rates of language evolution decreased as the time interval increased. This implies that the evolutionary rate of language has a stronger measurement-interval dependence than the rate of biological evolution. We propose that this is due to the lack of directionality in cultural evolution. This implies that biological evolution is more directional than cultural language evolution and can therefore keep pace with language evolution.

#### Transfer Entropy

We believe that it is important to investigate the directional effects between the two evolutionary processes in gene–culture coevolution to understand the complex relationship between them. To do this, we used transfer entropy (TE)32, which can quantify the asymmetric impact between multiple sequences. Put simply, the TE from a process X to another process Y is the amount of uncertainty reduced in future values of Y by knowing the past values of X, given the past values of Y. The TE TY→X from the sequence Y t  = {y t }t=1,2,... to another sequence X t  = {x t }t=1,2,... indicates the amount of uncertainty reduced in the state xt+1 by knowing the past l states $${y}_{t}^{l}=\{{y}_{t-l+1},\mathrm{...},{y}_{t}\}$$ given the past m states $${x}_{t}^{m}=\{{x}_{t-m+1},\mathrm{...},{x}_{t}\}$$. TY→X is calculated as follows32:

$${T}_{Y\to X}=\sum _{{x}_{t+1},{x}_{t}^{m},{y}_{t}^{l}}p({x}_{t+1},{x}_{t}^{m},{y}_{t}^{l})\,\mathrm{log}\,\frac{p({x}_{t+1}|{x}_{t}^{m},{y}_{t}^{l})}{p({x}_{t+1}|{x}_{t}^{m})}$$ (7)

This measure reflects the directional effect of the sequence Y t on the sequence X t . In this analysis, we generated discrete sequences from the continuous data of the evolutionary experiments, which consisted of the time series of the average values of l x and a i in each generation.
The sequence was generated as follows: we separated the time series of l x (or a i ) into periods with a specific time interval (Δt) and generated a sequence composed of the average of l x (or a i ) in each period. Then, we replaced each value in the sequence with one of five equally divided levels according to its rate of change from the previous value. In addition, we calculated the effective transfer entropy (ET) to verify the significance of the calculated TY→X33 by comparing it to the TE obtained when Y t was randomized ($${T}_{{Y}_{rand}\to X}$$). Y rand was generated by randomly shuffling the order of values in the discretized sequence of Y t . We created Y rand sequences by using different random seeds and calculated the mean value of $${T}_{{Y}_{rand}\to X}$$, which can be considered the TE when Y t has no relation to X t . We defined the ET (ETY→X) as follows:

$$E{T}_{Y\to X}={T}_{Y\to X}-mean({T}_{{Y}_{rand}\to X})$$ (8)

Figure 8 shows the ET against the time interval. We used m = l = 1. We measured the entropies of 38 experiments while changing Δt over 10 intervals. The entropies are plotted as points for each trial. The x-axis represents the time interval (Δt) used for generating sequences, and the y-axis represents the effective transfer entropy with the corresponding Δt. Each red point represents the effective transfer entropy from language to biological evolution (ETL→B), and each blue point represents the ET from biological evolution to language evolution (ETB→L). The lines show the median values for the corresponding setting of Δt. The mark above each interval indicates the significance level of the difference between the median values of ETL→B and ETB→L (* for p < 0.005, ** for p < 0.001, *** for p < 0.0001). We used the Wilcoxon rank-sum test to calculate the significance34. We observed a statistically significant difference between ETL→B and ETB→L when 10 ≤ Δt ≤ 110.
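For m = l = 1 and discretized levels, Eqs. (7)–(8) reduce to plug-in estimates over the observed triples. A minimal sketch (my own code, not the authors'; log base 2, so the result is in bits):

```python
import math
import random
from collections import Counter

def transfer_entropy(y, x):
    """Eq. (7) with m = l = 1: TE from discrete sequence y to x, in bits."""
    n = len(x) - 1
    c_xxy = Counter((x[t + 1], x[t], y[t]) for t in range(n))
    c_xy = Counter((x[t], y[t]) for t in range(n))
    c_xx = Counter((x[t + 1], x[t]) for t in range(n))
    c_x = Counter(x[t] for t in range(n))
    te = 0.0
    for (x1, x0, y0), c in c_xxy.items():
        p_joint = c / n
        p_cond_y = c / c_xy[(x0, y0)]       # p(x_{t+1} | x_t, y_t)
        p_cond = c_xx[(x1, x0)] / c_x[x0]   # p(x_{t+1} | x_t)
        te += p_joint * math.log2(p_cond_y / p_cond)
    return te

def effective_transfer_entropy(y, x, n_shuffles=10, rng=None):
    """Eq. (8): subtract the mean TE of shuffled copies of y."""
    rng = rng or random.Random(0)
    baseline = 0.0
    for _ in range(n_shuffles):
        y_rand = list(y)
        rng.shuffle(y_rand)
        baseline += transfer_entropy(y_rand, x)
    return transfer_entropy(y, x) - baseline / n_shuffles
```

As a sanity check, when x simply copies y with a one-step lag, the TE from y to x is close to the entropy of y (about 1 bit for fair binary values), while the reverse TE is near zero.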
This is because the directional effect from language to biological evolution was weak relative to that from biological to language evolution when measurements were taken at short time scales. We think that this is due to the high evolutionary speed of language at short time scales. When we measured the evolutionary rate of language at short time scales, the rate tended to be high because of the measurement-interval dependence described in the Introduction. Biological evolution seems to be unable to keep pace with language evolution because of such high-speed changes in language at short time scales, which is the “moving target” problem18. However, at long time scales (Δt > 110), we found that the difference between them tended to be small. This implies that the directional effects on each other are comparable. This means that, at long time scales, biological evolution can track language evolution, and coevolution occurs thanks to both phenotypic plasticity and a constraint imposed on the dynamics of language evolution by the language ability of agents.

## Conclusion

Investigating the evolutionary changes of language directly is difficult; as often stated, “language does not fossilize”. Therefore, researchers have relied on inferring the evolution of linguistic ability on the basis of fossil remains of human ancestors or on analyzing cultural evolution (e.g., the evolution of vocabulary), which can be observed over a short time scale. We believe that further investigating the cultural and biological evolution of language in a comprehensive manner requires “emergent computational thought experiments”35 and “opaque thought experiments” as an alternative methodology, where the consequences follow from the premises in such a non-obvious manner that they can be understood only through systematic enquiry36. From this viewpoint, in this paper we proposed an integrated framework for investigating the genetic and cultural evolution of language.
Based on this framework, we first constructed an agent-based model that captures both the cultural evolution of languages and the biological evolution of linguistic faculties, which are expressed in a one-dimensional linguistic space. Second, we analyzed the evolutionary rates of cultural evolution and biological evolution by using our simulation results. Finally, we analyzed the directional effects between cultural evolution and biological evolution by using the transfer entropy calculated from our simulation results. Our evolutionary experiments showed that, after an initial rapid increase in the number of languages, a cyclic coevolutionary process occurs in which biological evolution and cultural evolution proceed alternately. We observed the genetic assimilation of language into the innate linguistic ability. Eventually, the population reaches languages with high expressiveness. Thompson et al. constructed several coevolutionary models of the biological evolution of innate cognitive biases on language acquisition and the cultural learning of languages based on Bayesian inference37. They showed that culture facilitates rapid biological adaptation yet rules out nativism; in other words, behavioral universals arise that are underpinned by weak biases rather than strong innate constraints. However, it should be noted that such reduced selection pressures brought about genetic diversity in the innate genotypes (Fig. 5 (2)), which bootstrapped further evolutionary processes (Fig. 5 (3)). Repeated or continuous interactions of biological and cultural evolutionary processes have also been pointed out in agent-based models of vowel repertoires38. The analysis of the evolutionary rate showed that the cultural rate of evolution is typically faster than the biological rate; hence, biological evolution cannot keep pace with cultural evolution.
However, biological evolution may become faster as a result of a coevolutionary process, and cultural evolution tends to fluctuate more than biological evolution on short time scales. The analysis of the directional effects showed that biological evolution appears unable to keep pace with language evolution on short time scales, while the mutual directional effects are comparable on long time scales. This indicates that language and biology can coevolve, and we believe that their evolution must be observed over longer time scales. These results partly support Számadó et al.’s claims, especially with regard to the way phenotypic plasticity promotes adaptation. Beyond Számadó et al.’s claims, we obtained the following insights from our simulation: (1) Diversity across language groups increases the fitness variance, which accelerates the rate of biological evolution. (2) The rate of cultural evolution tends to be restricted by the plasticity of individuals because languages cannot survive outside the linguistic plasticity range of individuals. (3) The rate of cultural change can be slow, especially when individuals reduce their learning cost by clustering around existing languages with sufficient expressiveness for communication; in contrast to situations with no linguistic conventions among speakers, this tends to cause language evolution to stagnate. We therefore think that the rate of cultural change may be faster when there are no linguistic conventions among speakers and slower when some shared conventions exist among them.
# Reaction models

## Externally dependent reaction models

Some reaction models have a variant that can use external sources as specified in /input/model/external/ (also see Section Dependence on external function). For the sake of brevity, only the standard variant of those reaction models is specified below. In order to obtain the format for the externally dependent variant, first replace the reaction model name XXX by EXT_XXX. Each parameter $$p$$ (except for stoichiometric and exponent matrices) depends on a (possibly distinct) external source in a polynomial way: \begin{aligned} p(T) &= p_{\texttt{TTT}} T^3 + p_{\texttt{TT}} T^2 + p_{\texttt{T}} T + p. \end{aligned} Thus, a parameter XXX_YYY of the standard reaction model variant is replaced by the four parameters EXT_XXX_YYY, EXT_XXX_YYY_T, EXT_XXX_YYY_TT, and EXT_XXX_YYY_TTT. Since each parameter can depend on a different external source, the dataset EXTFUN (not listed in the standard variants below) should contain a vector of 0-based integer indices of the external source of each parameter. The ordering of the parameters in EXTFUN is given by the ordering in the standard variant. However, if only one index is passed in EXTFUN, that external source is used for all parameters. Note that parameter sensitivities with respect to column radius, column length, particle core radius, and particle radius may be wrong when using externally dependent reaction models. This is caused by not taking into account the derivative of the external profile with respect to column position.

## Multiple particle types

The group that contains the parameters of a reaction model in the unit operation with index XXX reads /input/model/unit_XXX/reaction_particle. This is valid for models with a single particle type. If a model has multiple particle types, it may have a different reaction model in each type.
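The polynomial temperature dependence defined above for externally dependent parameters can be evaluated with a small Horner-form helper. This is a sketch only: the function and keyword names simply mirror the EXT_XXX_YYY* suffixes and are not part of any CADET API.

```python
def external_param(T, p, p_T=0.0, p_TT=0.0, p_TTT=0.0):
    """Evaluate p(T) = p_TTT*T**3 + p_TT*T**2 + p_T*T + p (Horner form).

    The four coefficients correspond to EXT_XXX_YYY, _T, _TT, and _TTT."""
    return ((p_TTT * T + p_TT) * T + p_T) * T + p
```

With all higher-order coefficients left at their default of zero, the parameter is constant in the external quantity T, which recovers the standard (non-external) variant.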
The parameters are then placed in the group /input/model/unit_XXX/reaction_particle_YYY instead, where YYY denotes the index of the particle type. Note that, in any case, /input/model/unit_XXX/reaction_particle_000 contains the parameters of the first (and possibly sole) particle type. This group also takes precedence over a possibly existing /input/model/unit_XXX/reaction_particle group.

## Group /input/model/unit_XXX/reaction - REACTION_MODEL = MASS_ACTION_LAW

- **MAL_KFWD_BULK**: Forward rate constants for bulk volume reactions (available for external functions). Type: double; Range: ≥ 0; Length: NREACT
- **MAL_KBWD_BULK**: Backward rate constants for bulk volume reactions (available for external functions). Type: double; Range: ≥ 0; Length: NREACT
- **MAL_KFWD_LIQUID**: Forward rate constants for particle liquid phase reactions (available for external functions). Type: double; Range: ≥ 0; Length: NREACT
- **MAL_KBWD_LIQUID**: Backward rate constants for particle liquid phase reactions (available for external functions). Type: double; Range: ≥ 0; Length: NREACT
- **MAL_KFWD_SOLID**: Forward rate constants for particle solid phase reactions (available for external functions). Type: double; Range: ≥ 0; Length: NREACT
- **MAL_KBWD_SOLID**: Backward rate constants for particle solid phase reactions (available for external functions). Type: double; Range: ≥ 0; Length: NREACT
- **MAL_STOICHIOMETRY_BULK**: Stoichiometric matrix of bulk volume reactions as NCOMP × NREACT matrix in row-major storage. Type: double; Length: NCOMP · NREACT
- **MAL_EXPONENTS_BULK_FWD**: Forward exponent matrix of bulk volume reactions as NCOMP × NREACT matrix in row-major storage (optional, calculated from MAL_STOICHIOMETRY_BULK by default). Type: double; Length: NCOMP · NREACT
- **MAL_EXPONENTS_BULK_BWD**: Backward exponent matrix of bulk volume reactions as NCOMP × NREACT matrix in row-major storage (optional, calculated from MAL_STOICHIOMETRY_BULK by default). Type: double; Length: NCOMP · NREACT
- **MAL_STOICHIOMETRY_LIQUID**: Stoichiometric matrix of particle liquid phase reactions as NCOMP × NREACT matrix in row-major storage. Type: double; Length: NCOMP · NREACT
- **MAL_EXPONENTS_LIQUID_FWD**: Forward exponent matrix of particle liquid phase reactions as NCOMP × NREACT matrix in row-major storage (optional, calculated from MAL_STOICHIOMETRY_LIQUID by default). Type: double; Length: NCOMP · NREACT
- **MAL_EXPONENTS_LIQUID_BWD**: Backward exponent matrix of particle liquid phase reactions as NCOMP × NREACT matrix in row-major storage (optional, calculated from MAL_STOICHIOMETRY_LIQUID by default). Type: double; Length: NCOMP · NREACT
- **MAL_EXPONENTS_LIQUID_FWD_MODSOLID**: Forward solid phase modifier exponent matrix of particle liquid phase reactions as NTOTALBND × NREACT matrix in row-major storage (optional, defaults to all 0). Type: double; Length: NTOTALBND · NREACT
- **MAL_EXPONENTS_LIQUID_BWD_MODSOLID**: Backward solid phase modifier exponent matrix of particle liquid phase reactions as NTOTALBND × NREACT matrix in row-major storage (optional, defaults to all 0). Type: double; Length: NTOTALBND · NREACT
- **MAL_STOICHIOMETRY_SOLID**: Stoichiometric matrix of particle solid phase reactions as NTOTALBND × NREACT matrix in row-major storage. Type: double; Length: NTOTALBND · NREACT
- **MAL_EXPONENTS_SOLID_FWD**: Forward exponent matrix of particle solid phase reactions as NTOTALBND × NREACT matrix in row-major storage (optional, calculated from MAL_STOICHIOMETRY_SOLID by default). Type: double; Length: NTOTALBND · NREACT
- **MAL_EXPONENTS_SOLID_BWD**: Backward exponent matrix of particle solid phase reactions as NTOTALBND × NREACT matrix in row-major storage (optional, calculated from MAL_STOICHIOMETRY_SOLID by default). Type: double; Length: NTOTALBND · NREACT
- **MAL_EXPONENTS_SOLID_FWD_MODLIQUID**: Forward liquid phase modifier exponent matrix of particle solid phase reactions as NCOMP × NREACT matrix in row-major storage (optional, defaults to all 0). Type: double; Length: NCOMP · NREACT
- **MAL_EXPONENTS_SOLID_BWD_MODLIQUID**: Backward liquid phase modifier exponent matrix of particle solid phase reactions as NCOMP × NREACT matrix in row-major storage (optional, defaults to all 0). Type: double; Length: NCOMP · NREACT
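As a sketch of how the tables above fit together, the mass-action net reaction rates can be written in a few lines of NumPy. This mirrors the documented convention that the default forward/backward exponents are derived from the stoichiometry (educts set the forward exponents, products the backward ones); it is an illustration of the general mass-action form, not CADET code, and omits the solid/liquid modifier exponents.

```python
import numpy as np

def mass_action_rates(c, kfwd, kbwd, stoich, exp_fwd=None, exp_bwd=None):
    """Net fluxes of NREACT mass-action reactions.

    `stoich` is the NCOMP x NREACT stoichiometric matrix. Omitted exponent
    matrices are derived from it: negative entries (educts) set the forward
    exponents, positive entries (products) the backward ones."""
    S = np.asarray(stoich, dtype=float)
    if exp_fwd is None:
        exp_fwd = np.where(S < 0, -S, 0.0)
    if exp_bwd is None:
        exp_bwd = np.where(S > 0, S, 0.0)
    conc = np.asarray(c, dtype=float)[:, None]            # NCOMP x 1 column
    fwd = np.asarray(kfwd, float) * np.prod(conc ** exp_fwd, axis=0)
    bwd = np.asarray(kbwd, float) * np.prod(conc ** exp_bwd, axis=0)
    return fwd - bwd                                      # length NREACT

def reaction_source(c, kfwd, kbwd, stoich):
    """Contribution of the reactions to dc/dt: stoichiometry times rates."""
    return np.asarray(stoich, dtype=float) @ mass_action_rates(c, kfwd, kbwd, stoich)
```

For a single reaction A + B ⇌ C with concentrations (2, 3, 1), kfwd = 1, and kbwd = 0.5, the net rate is 1·2·3 − 0.5·1 = 5.5, and the source term distributes it according to the stoichiometric column (−1, −1, +1).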
# The Derivative of a Function

One of the first topics studied in elementary calculus is the derivative of a real-valued function $f$. We will now formally define the derivative of a function below and begin to look at some of the properties of derivatives.

Definition: Let $f$ be a function defined on the open interval $(a, b)$ and let $c \in (a, b)$. Then $f$ is said to be Differentiable at $c$ if $\displaystyle{f'(c) = \lim_{x \to c} \frac{f(x) - f(c)}{x - c}}$ exists, where $f'(c)$ is called the Derivative of $f$ at $c$. The real-valued function $f'$ on $(a, b)$ for which $f'(c)$ exists is called the Derivative of $f$. The process by which $f'$ is obtained from $f$ is called Differentiation.

There are many notations for the derivative function including:

(1)
\begin{align} \quad f'(x), \quad \frac{dy}{dx}, \quad \frac{df}{dx}, \quad D_x f(x) \end{align}

We will commonly use the notations "$f'(x)$" and "$\frac{dy}{dx}$". We will now look at a nice theorem which gives us an alternative definition for a function $f$ to be differentiable at a point $c \in (a, b)$, which is sometimes more convenient to use.

Theorem 1: Let $f$ be a function defined on the open interval $(a, b)$ and let $c \in (a, b)$. Then $f$ is differentiable at $c$ if and only if $\displaystyle{\lim_{h \to 0} \frac{f(c + h) - f(c)}{h}}$ exists.

• Proof: Suppose $f$ is differentiable at $c$. Then:

(2)
\begin{align} \quad f'(c) = \lim_{x \to c} \frac{f(x) - f(c)}{x - c} \end{align}

• Let $h = x - c$. Then as $x \to c$, $h \to 0$, and notice that:

(3)
\begin{align} \quad f'(c) = \lim_{x \to c} \frac{f(x) - f(c)}{x - c} = \lim_{h \to 0} \frac{f(c + h) - f(c)}{h} \end{align}

• $\Rightarrow$ If $f$ is differentiable at $c$, then $\displaystyle{\lim_{h \to 0} \frac{f(c + h) - f(c)}{h}}$ exists.

• $\Leftarrow$ Conversely, if $\displaystyle{\lim_{h \to 0} \frac{f(c + h) - f(c)}{h}}$ exists, then $f'(c)$ exists.
$\blacksquare$ ## Example 1 Apply Theorem 1 to show that the function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = x^2 - 2x$ is differentiable at any $c \in \mathbb{R}$ and compute $f'(3)$. Using Theorem 1 we have that for any $c \in \mathbb{R}$: (4) \begin{align} \quad f'(c) &= \lim_{h \to 0} \frac{f(c + h) - f(c)}{h} \\ \quad &= \lim_{h \to 0} \frac{[(c + h)^2 - 2(c + h)] - [c^2 - 2c]}{h} \\ \quad &= \lim_{h \to 0} \frac{c^2 + 2ch + h^2 -2c - 2h - c^2 + 2c}{h} \\ \quad &= \lim_{h \to 0} \frac{2ch + h^2 - 2h}{h} \\ \quad &= \lim_{h \to 0} [2c + h - 2] \\ \quad &= 2c - 2 \end{align} Plugging in $c = 3$ gives us that: (5) \begin{align} \quad f'(3) = 2(3) - 2 = 4 \end{align}
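The result of Example 1 can be checked numerically. The sketch below uses a symmetric difference quotient, a standard O(h²) approximation to the Theorem 1 limit (the function name and step size are our choices, not part of the text):

```python
def difference_quotient(f, c, h=1e-6):
    # Approximates lim_{h->0} (f(c+h) - f(c))/h from Theorem 1
    # via the symmetric quotient (f(c+h) - f(c-h)) / (2h).
    return (f(c + h) - f(c - h)) / (2 * h)

f = lambda x: x**2 - 2*x
approx = difference_quotient(f, 3.0)   # close to f'(3) = 4
```

For a quadratic the symmetric quotient is exact up to floating-point rounding, so the approximation agrees with the value $f'(3) = 4$ computed above to many digits.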
# Derivative of a norm

I learned not to use the Norm[] function when computing a vector derivative, so I use the dot product instead:

In: D[x.x, x]
Out: 1.x + x.1

What does the result mean? Is 1 = (1, 1, ..., 1) here? Why can't it show just 2x as the result? And Mathematica won't resolve it when I define x:

In: 1.x + x .1 /. x -> {3, 4}
Out: {0.3 + 1.{3, 4}, 0.4 + 1.{3, 4}}

• One way to approach this is to define x = Array[a, 3]; Then you can take the derivative D[x . x, {x}] and you'll get more what you expect. Otherwise it doesn't know what the dimensions of x are (whether it's a scalar, vector, or matrix). Apr 11 '21 at 20:17
• Thanks, now it makes sense why, since it might be a matrix. Can I tell it the "type"? I tried $Assumptions = (x) \[Element] Vectors[2, Reals]; but it didn't help. Also, if I use Array[a, 3], can I resolve a at some point? For example, take a derivative and then compute it for a certain vector. D[x.x, {x}] /. x -> {1, 2, 3} doesn't seem to work – Soid Apr 11 '21 at 20:41
• There is not really a nice way to generically say "x is a 3 vector" using assumptions. But if you think about it, x = Array[a, 3]; is effectively a declaration that x is an arbitrary 3 vector. Apr 11 '21 at 22:00

As suggested by @bill s, you can write

x = Array[a, 3];
deriv = D[x . x, {x}]
(* {2 a[1], 2 a[2], 2 a[3]} *)

Note that you need to put {x} rather than x (otherwise it will attempt to interpret the 2nd item in the list as the order of the derivative - see the help for D - the syntax is rather overloaded). The easiest way to substitute values is perhaps

deriv /. Thread[x -> {1, 2, 3}]
(* {2, 4, 6} *)
# Chezy Formula

By definition there is no acceleration in uniform flow. By applying the momentum equation to a control volume encompassing two sections 1 and 2, a distance L apart, the Chezy formula is derived: $V = C\sqrt {R{S_o}}$ where:

V : average flow velocity through the cross section

C : Chezy coefficient, which depends on the nature of the surface and the flow

R : hydraulic radius (R = A/P). R is a length parameter accounting for the shape of the channel. It plays a very important role in developing flow equations that are common to all shapes of channels.

So : longitudinal bed slope (dimensionless)

The French engineer Antoine Chézy derived the formula in 1769. The dimensions of the Chezy coefficient C are ${L^{1/2}}{T^{ - 1}}$.
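The formula translates directly into code; the sketch below (function and parameter names are ours) computes V from the channel geometry:

```python
import math

def chezy_velocity(C, A, P, So):
    """Average velocity V = C * sqrt(R * So), where R = A/P is the hydraulic radius."""
    R = A / P                    # hydraulic radius: flow area over wetted perimeter
    return C * math.sqrt(R * So)

# Hypothetical channel: C = 50 m^(1/2)/s, A = 10 m^2, P = 8 m, So = 0.001
v = chezy_velocity(50.0, 10.0, 8.0, 0.001)   # roughly 1.77 m/s
```

Note that C carries the dimensions $L^{1/2}T^{-1}$ stated above, so with SI inputs the result is in m/s.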
# Problem with calculating propagation delay

I need to calculate the propagation delay for this circuit. I am confused: do I have to take every necessary combination of inputs and outputs, or do I have to take the longest route from input to output, i.e., the route with the maximum number of logic gates? Thanks for any advice.

• Looks like the longest route(s) go through 5 gates and the shortest routes go through 2 gates. – JIm Dearden May 18 '15 at 13:47

The simplest model is that each logic gate has a fixed propagation delay, and if you're using discrete logic gates and not operating at very high frequency, this is probably good enough. Then all you do is add the delays of each logic gate along a path from input to output. You'll have multiple paths from input to output, so you need to find the slowest path, as this will determine the maximum speed at which the circuit can run. Now, if you're implementing the logic circuit on, say, an ASIC, the propagation delays of the logic gates are much lower than in discrete logic devices, and the propagation delay depends much more on 1) the number of other logic gate inputs the output of the previous gate is connected to, and 2) the capacitance of the internal routing ('wiring'). For an ASIC implementation, you don't know what the capacitance of the internal routing will be before you have done a layout and routing of the device, so estimated values are used. When the chip is laid out, placed, and routed, the real capacitance values are extracted from the physical layout and fed back into a logic simulator to resimulate the logic design; the difference between the pre-layout and post-layout logic simulations of the circuit can be significant. There are also often two propagation delays for each logic element: one value for a rising input signal, and another value for a falling input signal.
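The "add the delays along each path, keep the worst" recipe in the answer above can be sketched as a recursive arrival-time computation over a netlist. The gate names, topology, and delay values below are made up for illustration; they do not correspond to the circuit in the question.

```python
# Hypothetical netlist: gate -> (propagation delay in ns, nodes driving it).
netlist = {
    "n1": (1.0, ["A", "B"]),
    "n2": (1.5, ["B", "C"]),
    "n3": (1.0, ["n1", "n2"]),
    "Y":  (0.8, ["n3", "n2"]),
}

def arrival(node):
    """Worst-case arrival time at `node`, assuming all inputs switch at t = 0."""
    if node not in netlist:                  # primary input (A, B, C)
        return 0.0
    delay, drivers = netlist[node]
    return delay + max(arrival(d) for d in drivers)

critical_path_delay = arrival("Y")           # 1.5 + 1.0 + 0.8 = 3.3 ns
```

The critical path here runs B → n2 → n3 → Y; its delay, not the gate count, bounds the maximum clock rate, which is why the longest (slowest) path is what matters.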
For Gallium Arsenide chips, the modelling of propagation delay can be even more complex and can take into consideration (for some ASIC manufacturers) the slew rate (the rise time) of the output of the previous logic gate. If you're implementing the design in something like the CMOS 4000 series or 74xx series, a simple fixed propagation delay for each logic gate should be sufficient; if you're implementing in another type of technology (with sub-nanosecond delays), you may need to use a more complex propagation delay calculation. As the signals take many paths from inputs to outputs in a combinatorial circuit, the propagation delay is that of the longest path, i.e., the time after which the output remains static following a change in the inputs. Different gates have different delays. For example, 2 inverters in series may have the same delay as a single 4-input NAND gate with similar drive capability. Assuming you're implementing this in a CMOS IC, a very easy way to understand which path is the longest (in general, not just in your example) is to use the method of Logical Effort. In Logical Effort you basically model the delay as $$d=gh+p$$ where

1. d is the delay (in normalized units);
2. g is the logical effort: the ratio of the input capacitance of the gate to that of an inverter capable of delivering the same output current;
3. h is the electrical effort: the ratio of the load capacitance to the input capacitance of the gate;
4. p is the parasitic delay.

For example (from Wikipedia): The total normalised delay of an inverter driving an equivalent inverter is d = gh + p = (1)(1) + 1 = 2. The normalised delay of a two-input NAND gate driving an identical copy of itself (such that the electrical effort is 1) is d = gh + p = (4/3)(1) + 2 = 10/3.
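The two worked examples reduce to one line of arithmetic; a minimal sketch (the function name is ours):

```python
from fractions import Fraction

def logical_effort_delay(g, h, p):
    """Normalized delay d = g*h + p from the method of Logical Effort."""
    return g * h + p

# Inverter driving an identical inverter: g = 1, h = 1, p = 1
d_inv = logical_effort_delay(1, 1, 1)                # 2
# Two-input NAND driving an identical copy: g = 4/3, h = 1, p = 2
d_nand = logical_effort_delay(Fraction(4, 3), 1, 2)  # 10/3
```

Summing d = gh + p over every stage of each candidate path and taking the maximum identifies the critical path in normalized delay units.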
## Introduction

1-nitroso-2-naphthol is widely used in the chemical and pharmaceutical fields as a chelating agent and chromogenic agent. However, as a nitroso compound, 1-nitroso-2-naphthol has a certain thermal instability and can undergo exothermic decomposition on heating. In particular, when it encounters acid or alkali it can ignite spontaneously, so it poses a certain thermal hazard. At present, in the field of material thermal hazard research, the research objects are mainly energetic materials1, including organic peroxides and nitro compounds, such as cumene hydroperoxide2, benzoyl peroxide3, ammonium nitrate4, and guanidine nitrate5. The risk of thermal decomposition of these substances has been studied, as have the effects of acid6, alkali7, metal ions8, and organic matter9 on their thermal decomposition processes. Research on material thermal safety mainly uses DSC10, TGA11, ARC12, RC113, C8014, C60015, VSPII16 and other thermal analysis instruments to test the thermal decomposition process of materials, and uses the Kissinger method17, the Flynn–Wall–Ozawa method18, the Starink method19, and so on, to calculate the activation energy and thermodynamic parameters of the thermal decomposition reaction, so as to evaluate the thermal hazard of the substance. For example, Xia et al.20 studied the thermal decomposition characteristics and thermal risk of three anthraquinone hazardous wastes by differential scanning calorimetry (DSC), calculated the kinetics of the decomposition process by the Friedman method, and studied the effect of the coupling of phase transition and decomposition on the thermal risk of the materials.
Yabei Xu et al.21 studied the autocatalytic decomposition characteristics and thermal decomposition of benzoyl peroxide by differential scanning calorimetry (DSC) and calculated the kinetic parameters of the decomposition process by the Kissinger method. Suranee et al.22 evaluated the thermal hazard and reactivity of hydrogen peroxide with a mass concentration of 35% by DSC; the calculated activation energy was 70.03 kJ/mol, and the adiabatic temperature rises at heating rates of 2, 4, and 8 °C/min were 236.5, 159.2, and 217.5 K. So far, research on 1-nitroso-2-naphthol has mainly focused on the determination of cobalt, palladium, copper, and iron23, and research on its thermal stability has not been reported. In this paper, the thermal decomposition process of 1-nitroso-2-naphthol was analyzed and evaluated by TGA/DSC-FTIR. The effects of heating rate and impurities on the thermal decomposition of 1-nitroso-2-naphthol were studied by TGA/DSC. The activation energy of the thermal decomposition process was calculated by the Kissinger method. The infrared absorption spectra of the gaseous products of decomposition were measured by TGA/DSC-FTIR, the group characteristics of the gaseous decomposition products were analyzed, and the reaction pathway of the decomposition process was inferred. The results of this paper have certain reference significance for the storage and transportation safety of 1-nitroso-2-naphthol.

## Experimental

### Instruments and reagents

The main reagents and apparatuses used in the experiment are shown in Tables 1 and 2, respectively.

### Differential scanning calorimetry experiments

A TGA/DSC 3+ calorimeter manufactured by the Mettler Corporation in Switzerland was used to obtain the mass change, the endothermic and exothermic characteristics, and the initial thermodynamic parameters of the material in a heating environment.
The TGA/DSC procedure involves setting up the test method, recording a blank curve, weighing a small amount of 1-nitroso-2-naphthol sample (5–10 mg), placing the sample in a crucible, and putting it on the thermal detector together with an empty reference crucible; the test is then started. The crucible material is alumina, the purge gas is N2 at a flow rate of 5 ml/min, the temperature range is 30–300 °C, and the heating rate (β) is 5–25 °C/min.

### TGA/DSC-FTIR experiments

Fourier transform infrared spectroscopy (FTIR) is based on the interaction of molecules with electromagnetic radiation in the near-infrared (12,500–4000 cm−1), mid-infrared (4000–200 cm−1), and far-infrared (200–12.5 cm−1) spectral regions. When infrared radiation passes through a sample, the sample absorbs energy at certain frequencies according to the structural characteristics of its molecules, causing the molecules, or parts of the molecules (functional groups), to vibrate at these frequencies; this yields structural information about the functional groups of the molecule. Connecting the thermal analyzer and the infrared spectrometer in series through a heatable transfer line is known as combined thermal–infrared analysis (TGA/DSC-FTIR). This method uses a purge gas (usually nitrogen or air) to transfer the volatile products generated in the TGA/DSC during heating through a heated (usually 200–350 °C) metal pipe to the gas cell in the optical path of the FTIR, where the component structure of the escaping gas is analyzed by the spectrometer's detector (an MCT detector). During the experiment, as the TGA/DSC temperature changes and the mass and heat flow of the sample are recorded, the infrared spectrometer measures the functional group information of the gaseous products evolved at different temperatures. This method can therefore be used to infer the course of thermal decomposition.
In this paper, the purge gas is nitrogen at a flow rate of 50 ml/min, the transfer line temperature is 260 °C, and the resolution is 4 cm−1.

## Results and discussion

### TGA/DSC thermal decomposition characteristics of 1-nitroso-2-naphthol

The thermal decomposition data of 1-nitroso-2-naphthol obtained by TGA/DSC at a heating rate of 5 °C/min are shown in Fig. 1. As can be seen from Fig. 1a, 1-nitroso-2-naphthol shows three heat flow peaks during heating. The first two are endothermic peaks. The first endothermic peak is relatively small, with a peak temperature (Tp1) of about 43 °C, and may be caused by the evaporation of water in 1-nitroso-2-naphthol. The second endothermic peak is more pronounced, with a peak temperature (Tp2) of about 106 °C, indicating that part of the 1-nitroso-2-naphthol changes its phase state by absorbing heat. The third peak is a clear exothermic peak, with an initial exothermic temperature (Tonset) of about 126 °C and a peak temperature (Tp3) of about 144 °C, indicating that 1-nitroso-2-naphthol has undergone thermal decomposition and released a large amount of heat. These three heat flow peaks in the DSC correspond to the three weight-loss steps of the TGA curve in Fig. 1b and the three weight-loss peaks of the DTG curve in Fig. 1c: the peak temperature (Tp1) of the first weight loss is about 45 °C with a weight loss of about 1.94%, and the peak temperature (Tp2) of the second weight loss is about 98 °C with a weight loss of about 2.52%. Both weight losses may be caused by the volatilization of adsorbed water in the sample. The peak temperature (Tp3) of the third weight loss is about 145 °C with a weight loss of about 14.21%, corresponding to the thermal decomposition reaction of the sample.

### Effect of sodium hydroxide on the thermal stability of 1-nitroso-2-naphthol

It is reported that the presence of impurities has a certain impact on the thermal stability of substances24,25.
It can be inferred from the production process of 1-nitroso-2-naphthol that the product may be mixed with unreacted sodium hydroxide. Therefore, the effect of sodium hydroxide on the thermal decomposition behavior of 1-nitroso-2-naphthol was studied by TGA/DSC-FTIR. The sodium hydroxide content was 5%, and the test method was the same as for TGA/DSC. The test results are shown in Fig. 2. According to Fig. 2a, there are two significant exothermic peaks in the DSC curve of 1-nitroso-2-naphthol with added sodium hydroxide. The initial exothermic temperature (Tonset1) of the first exothermic peak is 104 °C and the peak temperature (Tp1) is 105 °C, both lower than those of pure 1-nitroso-2-naphthol. This shows that 1-nitroso-2-naphthol is more prone to exothermic decomposition after sodium hydroxide is added. Compared with the DSC curve of pure 1-nitroso-2-naphthol, the curve with sodium hydroxide has a second exothermic peak, with an initial exothermic temperature (Tonset2) of 150 °C and a peak temperature (Tp2) of 156 °C, indicating that sodium hydroxide promotes a secondary thermal decomposition of 1-nitroso-2-naphthol in the later stage of the reaction, releasing a certain amount of heat. The TG curve in Fig. 2b and the DTG curve in Fig. 2c also confirm that, after the addition of sodium hydroxide, the thermal weight loss of 1-nitroso-2-naphthol increases and the rate of weight loss accelerates. This shows that sodium hydroxide reduces the thermal stability of 1-nitroso-2-naphthol.

### Effect of heating rate (β) on the thermal decomposition of 1-nitroso-2-naphthol

To study the effect of heating rate on the thermal decomposition of 1-nitroso-2-naphthol, five different heating rates (5, 10, 15, 20, and 25 °C/min) were used. The results are shown in Figs. 3 and 4.
It can be seen from Fig. 3 that with increasing heating rate (β), the peak temperature (Tp) and initial exothermic temperature (Tonset) shift to the right, and the exothermic peak of the DSC heat flow curve becomes sharper. This shows that the heating rate (β) affects the thermal decomposition of 1-nitroso-2-naphthol. With increasing heating rate, the initial decomposition temperature (Tonset) of 1-nitroso-2-naphthol increases: when the heating rate is high, the substance is heated unevenly and part of it has no time to decompose, so the apparent thermal decomposition temperature is delayed. However, the faster the heating rate, the larger the area of the exothermic peak, the greater the heat release, the faster the heat release rate, and the worse the thermal stability. The test data are shown in Table 3. The thermogravimetric (TG) curves at different heating rates are shown in Fig. 4. They also confirm that the initial decomposition temperature (Tonset) of 1-nitroso-2-naphthol increases with increasing heating rate (Fig. 4a and c): with increasing heating rate, the TGA curve of 1-nitroso-2-naphthol shifts to the right. It can also be seen from the TGA–time diagram (Fig. 4b) that with increasing heating rate, the weight-loss time of the thermal decomposition becomes shorter, the weight-loss rate increases, and the thermal safety becomes worse.

### Analysis of the thermal decomposition kinetic parameters of 1-nitroso-2-naphthol

Based on the DTG peak data (Tp,i) at the different heating rates (β) above, the thermal decomposition kinetic parameters of 1-nitroso-2-naphthol were calculated by the Kissinger method16,26.
Kissinger's formula is as follows: $$\ln \left( {\frac{\beta }{{T_{p,i}^{2} }}} \right) = \ln \left( { - \frac{AR}{E}f^{\prime}\left( {\alpha_{p} } \right)} \right) - \frac{E}{{RT_{p,i} }}\quad (i = 1,2,...,{\text{n}})$$ (1) Kissinger's method assumes that $$f^{\prime } \left( {\alpha_{p} } \right)$$ does not change with $$\beta$$ and that the kinetic mechanism term can be approximated as 1. Therefore, exploiting the linear relationship between $$\ln \left( {\frac{\beta }{{T_{p,i}^{2} }}} \right)$$ and $$\frac{1}{{T_{p,i} }}$$, the method performs a linear fit to the peak temperatures ($$T_{p,i}$$) at different heating rates (βi) and uses the slope to obtain the reaction activation energy (E). Based on the decomposition exothermic peaks ($$T_{p,i}$$) in the DTG curves at heating rates (βi) of 5, 10, 15, 20, and 25 °C/min, a correlation diagram is drawn with $$\ln \left( {\frac{\beta }{{T_{p,i}^{2} }}} \right)$$ as the ordinate and $$\frac{1000}{{T_{p,i} }}$$ as the abscissa, and the five peak data points are fitted linearly. The results are shown in Fig. 5. The linear correlation coefficient (R2) is 0.9883, indicating a good fit. From the slope, E/R = 10.022 × 10³ K, so the activation energy E of the thermal decomposition reaction of 1-nitroso-2-naphthol is 83.323 kJ/mol. This activation energy is low, indicating that the thermal decomposition reaction of 1-nitroso-2-naphthol proceeds easily and poses a great thermal risk.

### Mechanism analysis of the thermal decomposition reaction of 1-nitroso-2-naphthol

TGA/DSC-FTIR can dynamically scan, in real time, the infrared spectrum of the gaseous products released by the decomposition of 1-nitroso-2-naphthol during the TGA test.
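The Kissinger fit reduces to a degree-1 least-squares problem and can be sketched in a few lines of NumPy. The peak temperatures below are synthetic values constructed to satisfy Eq. (1) exactly, not the paper's measured data.

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol K)

def kissinger_activation_energy(betas, peak_temps):
    """Fit ln(beta/Tp^2) against 1/Tp; the slope equals -E/R (Kissinger method)."""
    betas = np.asarray(betas, dtype=float)       # heating rates
    Tp = np.asarray(peak_temps, dtype=float)     # peak temperatures in K
    slope, _ = np.polyfit(1.0 / Tp, np.log(betas / Tp**2), 1)
    return -slope * R_GAS                        # activation energy in J/mol
```

Generating heating rates from Eq. (1) with a chosen E and an arbitrary intercept, the fit recovers E to within floating-point error, which is a useful sanity check before applying the routine to experimental DTG peaks.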
The composition of the gas products at each decomposition stage is then identified against standard infrared absorption spectra, and the thermal decomposition process of 1-nitroso-2-naphthol is inferred. The dynamic three-dimensional infrared spectrum and the Gram-Schmidt information of the gas products released during heating are shown in Figs. 6 and 7, respectively. As Fig. 6 shows, absorption near 1100 cm−1 persists throughout the decomposition and strengthens in the later stage of the reaction. By comparison with standard spectra, 1100 cm−1 is the antisymmetric stretching vibration of C–O–C; it is reported that for ether compounds this band is typically the strongest and most characteristic peak in the spectrum. It can therefore be inferred that 1-nitroso-2-naphthol molecules may undergo intermolecular dehydration to form an ether. Figure 7 shows that the strongest and densest absorption occurs at about the 10th and 20th minutes, so the infrared absorption spectra at those times are listed and analyzed separately in Fig. 8. In addition to the strong antisymmetric C–O–C stretching peak of the ether bond near 1100 cm−1, Fig. 8 shows absorption of appreciable intensity near 668 cm−1 and 2356 cm−1, which correspond to the bending vibrations and the asymmetric stretching of CO2, respectively. From the positions of these absorption peaks it can be inferred that, on heating, 1-nitroso-2-naphthol mainly undergoes intermolecular dehydration to form an ether and decomposes to release a small amount of carbon dioxide.
After sodium hydroxide was added, the dynamic three-dimensional infrared spectrum and Gram-Schmidt information of the gas products released by 1-nitroso-2-naphthol during heating are shown in Figs. 9 and 10, respectively. Figure 9 shows that, with sodium hydroxide present, the infrared spectrum of the decomposition gases has strong absorption near 1100 cm−1 and, especially, near 1380 cm−1. By comparison with standard spectra, 1380 cm−1 is the symmetric stretching absorption of aliphatic nitro compounds, and 1100 cm−1 is the antisymmetric stretching vibration of ether C–O–C groups. Compared with Fig. 6, the additional strong band near 1380 cm−1 indicates that the main reaction changes after sodium hydroxide is added. Figure 10 shows the strongest and densest absorption at about the 11th and 22nd minutes, so the infrared absorption spectra at those times are listed and analyzed separately in Fig. 11. As Fig. 11 shows, after sodium hydroxide is added, the ether-bond absorption near 1100 cm−1 weakens, the CO2 absorption peaks near 668 cm−1 and 2356 cm−1 disappear, and a very strong symmetric stretching peak of aliphatic nitro compounds appears at 1380 cm−1. This indicates that sodium hydroxide weakens the intermolecular dehydration of 1-nitroso-2-naphthol: more 1-nitroso-2-naphthol reacts with sodium hydroxide to form sodium nitrophenol compounds, which on further heating decompose into aliphatic nitro compounds. Compared with Fig. 8, the small peaks in the other bands are also clearly weakened after sodium hydroxide is added.
Apart from the strong absorption peaks near 1100 cm−1 and 1380 cm−1, the curves in the other bands are relatively smooth, indicating that the thermal decomposition path changes after the addition of sodium hydroxide.

## Conclusions

The study results were as follows: The thermal weight loss of 1-nitroso-2-naphthol occurs in three stages. The first two stages are endothermic and correspond to the evaporation of water; the third stage is an exothermic decomposition whose maximum exothermic temperature (Tp) lies between 144.57 and 172.33 °C. The heating rate (β) affects the thermal decomposition of 1-nitroso-2-naphthol: the faster the heating rate, the higher the onset temperature (Tonset) and maximum temperature (Tp) of decomposition, the faster the decomposition rate, the greater the heat release, and the worse the thermal stability. After doping with a small amount of sodium hydroxide, the onset temperature (Tonset) and maximum temperature (Tp) of the thermal decomposition of 1-nitroso-2-naphthol decreased and the thermal stability worsened. The dynamic infrared absorption spectra from TGA/DSC-FTIR showed that the main reaction of 1-nitroso-2-naphthol during heating is intermolecular dehydration to form an ether. After sodium hydroxide was added, the thermal decomposition path changed: the intermolecular dehydration reaction weakened, more 1-nitroso-2-naphthol reacted with sodium hydroxide to form sodium nitrophenol compounds, and these finally decomposed into aliphatic nitro compounds.
# Probability in infinitary logic

Let X be a random variable taking the value 0.2 with probability 0.2, 0.4 with probability 0.4, 0.8 with probability 0.2, and 1.0 with probability 0.2. Using infinitary logic I can ask the probability: $$P(X = P(X = P(X = \ldots)))$$ How is this value computed? - I don't see how you can express anything close to anything looking like the thing you wrote with infinitary logic. Maybe you just want to study the set $\{x \in [0;1] / P(X=x) = x\}$ ? –  mercio Nov 22 '11 at 13:00 If you think this expression belongs to the framework of infinitary logic, you should explain why. Otherwise one could get the feeling that, to you, infinitary logic is a shorthand for everything seemingly paradoxical and with $\cdots$ in it. –  Did Nov 27 '11 at 11:21
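Following mercio's comment, the natural first step is to compute the set of fixed points $\{x : P(X=x) = x\}$ for the distribution given in the question; a short sketch:

```python
# pmf from the question: value -> probability
pmf = {0.2: 0.2, 0.4: 0.4, 0.8: 0.2, 1.0: 0.2}
assert abs(sum(pmf.values()) - 1.0) < 1e-12  # sanity: probabilities sum to 1

# Fixed points of x -> P(X = x), i.e. the values with P(X = x) = x
fixed_points = [x for x, p in pmf.items() if abs(p - x) < 1e-12]
print(fixed_points)  # → [0.2, 0.4]
```

So any reading of the nested expression that stabilizes must land on 0.2 or 0.4; which one (if either) applies depends on how the infinite nesting is given meaning, which is exactly what the comments dispute.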
So tomorrow we all will be appearing for the IAPT Olympiads, viz. NSEP, NSEC and NSEA. For those who don't know, these exams are the first tier for participating in the International Olympiads from India. They are conducted for selecting the Indian teams for: INTERNATIONAL PHYSICS OLYMPIAD (IPhO) 2017, to be held at Bali, Indonesia; INTERNATIONAL CHEMISTRY OLYMPIAD (IChO) 2017, to be held at Nakhon Pathom, Thailand; INTERNATIONAL ASTRONOMY OLYMPIAD (IAO) 2017, to be held in either Thailand or Kazakhstan. Note by Prakhar Bindal 1 year, 1 month ago Hey this is my 1st time I gave a NSE olympiad (before this, I was not knowing about these olympiads). And I have to give the 2nd level of all 3 (P, C and A). You both have already experienced all levels of IJSO and have cleared them too (Congo!), so it would be pretty awesome if you both could share the tactics for writing this type of subjective paper, which you must have learnt through your experiences, and also share your experiences. Feel free to write. - 1 year ago I am also giving the 2nd level of NSEP and NSEA. From my IJSO experience: stick to the basics.
I don't know about INPhO and INAO. - 1 year ago Comment deleted Dec 31, 2016 Thanks for sharing :) One Question : What exactly did you mean by highlighting your answers? - 1 year ago In the sense like making boxes everywhere to mark even the silliest arguments that you have made. going on for a detailed elaboration of a 1-2 mark q,and make use of the graph paper(if provided)I remember that in jso we had to do an fbd and I just did it in plain although we had been provided with graph paper.so the deducted my marks there. - 1 year ago @Aniket Sanghi You are focused towards clearing which one?? - 1 year ago AT present for physics and chemistry - 1 year ago You cleared which which ? I heard from Ayushmann that your Physics is legendary. - 1 year ago @Rajdeep Dhingra.congrats to you for such a get achievement(in ijso.)did we meet in ocsc?? - 1 year ago No we didn't. You had chickenpox - 1 year ago Ya that is now for me a horrible dream and puts me into regret whenever I think abt it.I had seen you on the day of start(during registration) - 1 year ago I will be appearing this year for INPHO,INAO,INMO.why about dhyey,Smit,ayushman.??Smit must have made to inbo,ayushman as well and dhyey no doubt incho.??? - 1 year ago How much did you fare in NSEP ? - 1 year ago Ayushmann and Smit cleared NSEP. Dhey is national topper NSEP , he got the highest. - 1 year ago Sorry Ayushmann and Smit had cleared NSEB. Sorry Autocorrect is a meanie. I have typed NSEP quite a lot but NSEB less so it replaced it. - 1 year ago Ya I knew that.both were surely going to do it.and they will even clear inbo.even jeevesh must have qualified?? - 1 year ago Jeevesh couldn't give NSEB due new eligibility rules. He did NSEJS. Also how much did you fare in NSEP and NSEA ? - 1 year ago he got 197 marks and air 1 in nsejs. 
In 6th also he cleared - 4 months, 1 week ago Oh the first stage marks I got 167 in phy and 128 in astronomy.I didn't quite do well in astro as I was not prepared and as of phy I just wanted to clear the mi.that's it.what's yours?? - 1 year ago 173 in Physics and 142 in Astronomy. They matter very less as INPhO and INAO has a different style. - 1 year ago Ya I had mainly polarised my attention throughout the year for inpho.good scores.happy new year and am the best for ino s - 1 year ago But your score in nsep implies that you must have done jee phy course completely? - 1 year ago I have done a big chunk. Little nooks are there still to be filled. - 1 year ago Thanks. Same to you and a happy new year. - 1 year ago Dhyey.but the national topper is from chattisgarh with 219 score. - 1 year ago Idk, Dhey told me. - 1 year ago I am talking about subjective writing , what's the way they prefer while checking ? Any idea? - 1 year ago I don't know much but most imp.try to avoid highlighting the and more.that not nly wastes time but also consumes space.and they have lim space for ans. - 1 year ago @Spandan Senapati ... Lay Jain of Resonance Kota scored 230 in NSEP. ( Sources) - 1 year, 1 month ago He's also in ALLEN - 4 months, 1 week ago First of all He is from Allen Kota. Second , he got 213. - 1 year ago YOU are right - 4 months, 1 week ago What's your score??are you in kota as well??? - 1 year, 1 month ago That's great.....I know him..... - 1 year, 1 month ago No...I am from FIITJEE Bhopal .. NSEP - 140 ... NSEC - 150 ... Pretty sad by NSEP marks. - 1 year, 1 month ago Which class??? - 1 year, 1 month ago XII... - 1 year, 1 month ago How much are you getting in iapt 2017-18 - 1 month, 1 week ago Results are out. Who all are selected?? - 1 year ago I am selected in physics - 1 year ago Comment deleted Dec 28, 2016 No - 1 year ago I am Selected in All 3 ( NSEA, NSEP , NSEC ) :) - 1 year ago I Am selected in NSEA And NSEC (Above MI). 
In physics i am doubtful many many are above 160 from my state . hope i get selected - 1 year ago My scores from resonance keys (with some error as i have indicated due to ambiguities) NSEP- 160-170 NSEC-180-190 NSEA-165-180 - 1 year, 1 month ago hey, i am from bhilai. i am in class 11. i am getting 185 in NSEP according to resonance answer keys. considering that the paper was too easy this time, do i have any chance of qualifying ? - 1 year, 1 month ago Hey Nikhil, How much in NSEP ? - 1 year ago hahaha you really doubtful of your selection???!!! - 1 year, 1 month ago Even I thought of the same - 1 year, 1 month ago Any one you know who has scored more than 200???? - 1 year, 1 month ago 182 is the max i know - 1 year, 1 month ago I am doubtful of my selection as well...I am scoring around 165-170...class 11 - 1 year, 1 month ago i am scoring just 128 - 1 year, 1 month ago You are in class 12/11??if its 12 then don't worry...focus on jee.....some times things don't go well....if its 11 then go ahead.....I had qualified nsejs with just 162 marks whereas 9th class students had about 200 and around 35 students with 200+...but guess what even after that I could make it to ocsc of ijso......so only one level won't make the difference..... Even then the MAS was 109......and avg was 220+ - 1 year, 1 month ago which year - 4 months, 1 week ago Cool performa !! I hope I would have known about nsejs when I could give it !! - 1 year ago i in 12 - 1 year, 1 month ago Comment deleted Nov 30, 2016 well yeah!! thanks for the boost up - 1 year, 1 month ago And you also share some things which you think you missed in class 11(or the things you could have taken care of)I will try not to do those....I also have jee(that's in 2018 not so far:).☺ - 1 year, 1 month ago thumps up - 1 year, 1 month ago And be +ve that's not a bad score..... - 1 year, 1 month ago well those are a bulk of marks - 1 year, 1 month ago A person in my school is scoring 228, he is in class 12. 
- 1 year, 1 month ago what are his scores in other subjects?? - 1 year, 1 month ago - 1 year, 1 month ago It will do..... - 1 year, 1 month ago PLS SHARE UR NSEJS MARKS - 1 month, 2 weeks ago Expected cutoff of nsea? - 1 month, 3 weeks ago Expected cutoff of nsec for WB? - 1 month, 3 weeks ago INO registrations are started (from 3rd) - 1 year ago hey sorry to post here. But what are you guys making in chem/physics investigatory project @Samarth Agarwal @Aniket Sanghi @Prakhar Bindal or anyone in class 12 - 1 year ago Whats this??!! - 1 year ago some random project for boards practical - 1 year ago Oh sorry i cant help i am in isc so here school assigns us some stupid stuff for boards - 1 year ago I am in 11th and cleared NSEA (MI). How to prepare for INAO? Is there a separate cut-off for 11th in INAO too? - 1 year ago Hey do we need to register somewhere for the seond level after clearing 1st? @Prakhar Bindal @Samarth Agarwal - 1 year ago No we don't need now.iapt will later announce a list of those eligible for the into(stage 2).and those whose name will appear in that list only will have to register.and they even haven't said anything.but this is what happened with me when I was selected for injso last year.congrats !! you have a hat trick. - 1 year ago Thanks - 1 year ago I dont know exactly but it is done on hbcse site - 1 year ago Thanks - 1 year ago Iapt results are out!!!!.....share your results here @Prakhar Bindal @Aniket Sanghi @neelesh vij - 1 year ago I got selected in both.nsep and nsea 11th - 1 year ago Congrats! - 1 year ago nice.....congrats bro :+1: - 1 year ago Got MI in nsec and BS in nsep - 1 year ago What's BS? - 1 year ago And what abt nsea? - 1 year ago BS = bull shit , nsea i am nowhere - 1 year ago BS? - 1 year ago The results of the exam were expected to be out today.But they have not uploaded it yet.so we gonna wait. - 1 year ago @Calvin Lin sir i want to post a note but i am not able to do so . 
can you check please - 1 year, 1 month ago What's the difference between "ac ripple in the output increases" and "ripple frequency decreases"? And is the and to this q asked in nsep correct.??? - 1 year, 1 month ago It cant get bad than this . they haven't corrected multi correct . only fringe width one is changed . Feeling Angry on ignorance of people CONDUCTING Nation wide olympiads. similarly they haven't done corrections in NSEA . Just one done. still i feel 3-4 problems have wrong answers! RESO Keys were much more correct than official in all 3 subjects - 1 year, 1 month ago what is the answer for pascal triangle question in NSEA?? - 1 year, 1 month ago The answer given in key is wrong but my due to my silly error i got it correct - 1 year, 1 month ago Any guesses for the MI and Mas scores in nsep,nsec,nsea.@Prakhar Bindal @Samarth Agarwal @Akash singh - 1 year, 1 month ago NSEP Mas Would be around 100 i suppose. rest i dont know - 1 year, 1 month ago I think phy multi ones are correct...astro ya I agree with you.but we can't do anything now..let's hope for the best...and our score in the 3 sub are quite high.you will make in all of them..and how was vijyoshi camp??what did you learn?? - 1 year, 1 month ago It was awesome. you can have a look at my note . ! - 1 year, 1 month ago I feel the same... don't you think in chemistry many answers are wrong?...btw how was the camp? - 1 year, 1 month ago Chem has 4-5 answers wrong .dont know whether dey will correct or not - 1 year, 1 month ago See my note on that (camp) Comment there - 1 year, 1 month ago I feel the answer to that question should be bc ....also there was another question about fringe width....According to me it should increase...Is my answer right? and are there any other corrections? 
- 1 year, 1 month ago Ya fringe width increases as expected.but to that q on ripples the key is correct.I googled on half and full wave rectifiers and found that the ripples produced by the half wave rectifier are substantially more than full wave rectifiers(option c).and the ripple frequency definitely decreases... - 1 year, 1 month ago Hey guys how much are you getting by official answer key. I got 128 in P, 139 in C and 119 in A. @Prakhar Bindal - 1 year, 1 month ago I am getting 167..NSEP....and 140. in NSEA....will I get selected in any??.. - 1 year, 1 month ago P-156 C-173 A-160 - 1 year, 1 month ago @Samarth Agarwal When will you reach kolkata? - 1 year, 1 month ago I am not coming...my train is 15 hrs late :( - 1 year, 1 month ago AITS? - 1 year, 1 month ago - 1 year, 1 month ago So the official keys are out....in nsep what would be the and to whether the fringe width will increase/decrease remain same after evacuation of the chamber filled with air.????I am doubtful bout this ans. Please help @Prakhar Bindal,@Aniket Sanghi@Samarth Agarwal and all.... - 1 year, 1 month ago i think the given answer is wrong . mu decreases>>lambda increases>>width increases . thats what i think - 1 year, 1 month ago I marked the same. - 1 year, 1 month ago And how do we get R>>G or R<<G.....the relation was 1/R+1/G=1/S....so how can we make such conclusion???? - 1 year, 1 month ago I attempted NSEC and NSEA. Getting 128 in C and 141 in A. I think my paper went pretty average. Do I have any chance? @Prakhar Bindal - 1 year, 1 month ago you might have a chance in A - 1 year, 1 month ago i dont know! - 1 year, 1 month ago Okay. Thanks anyway - 1 year, 1 month ago @Prakhar Bindal .. Hey , You are in which coaching ..? - 1 year, 1 month ago I Am at fiitjee Delhi - 1 year, 1 month ago @Prakhar Bindal i am getting 124 in P and 163 in C....are there my chances of qualifying ? - 1 year, 1 month ago C you will surely do . 
in P You can but i cannot say with surity .it is expected that at max (if some miracle happens ) cutoff can reach 130 not more than it at any cost - 1 year, 1 month ago Are you also not using slack? - 1 year, 1 month ago Yup i found that i was wasting my time there hence deactivated it - 1 year, 1 month ago hey my scores NSEP-128 NSEC-160 NSEA-161 will i qualify in any of these??? - 1 year, 1 month ago Nsea answer key ?? Where i can get it @akash singh - 1 year, 1 month ago - 1 year, 1 month ago resonance - 1 year, 1 month ago What are your scores.....is n't the key a bit wrong(3-4)q....... - 1 year, 1 month ago My score isn't that good . the paper was easier than last year - 1 year, 1 month ago Bhaiya can you tell what it is..mine is 170 in nsep.....not checked nsea... - 1 year, 1 month ago Do you go to Fiitjee? - 1 year, 1 month ago Ya fiitjee Bhubaneswar centre......you are from bhilai I guess..... - 1 year, 1 month ago Yeah. In which topics are you guyz in PCM? - 1 year, 1 month ago @Harsh Shrivastava i really liked your status! - 1 year, 1 month ago Lol thanks! - 1 year, 1 month ago P-waves,c-organic.,maths permutations and combinations - 1 year, 1 month ago i am also getting around that much in NSEP!. in nsec also i am expecting about 172 - 1 year, 1 month ago Chem was tough I didn't appear.....will appear next year in class 12.....what abt astro...???? - 1 year, 1 month ago Didn't checked answer key isn't out i suppose - 1 year, 1 month ago Ya howdo you rate astro q....you had done it prev year..... - 1 year, 1 month ago It was tougher than last year surely - 1 year, 1 month ago Bhaiya, what do you expect about the MAS and MI cutoff in astro.. - 1 year, 1 month ago MAS Would be 100 max - 1 year, 1 month ago MI.. - 1 year, 1 month ago dont know - 1 year, 1 month ago was the paper easy or difficult this year compared to the previous year of nsea .answer keys are out... 
www.resonance.ac.in/answer-key-solutions/ISO/2016-17/Stage-1/ISO-Stage-1-2016-17-AnswerKey-Solutions.php - 1 year, 1 month ago From Allen's solutions I think I will get around 180 - 190 in NSEC. - 1 year, 1 month ago How did it go guys? - 1 year, 1 month ago Allen has also given chemistry solutions. - 1 year, 1 month ago ohh lemme see score? @Samarth Agarwal - 1 year, 1 month ago i am tagging the people whom i know will be participating . rest anyone who will appear can participate - 1 year, 1 month ago Official solutions out! Check Here - 1 year, 1 month ago
An alpha particle commonly has a speed of $15{,}000 \text{ km/s}$. What is a reasonable value for the de Broglie wavelength of an alpha particle when considered as a matter wave? A $\lambda =2.65\times { 10 }^{ -14 }\text{ m}$ B $\lambda =6.65\times { 10 }^{ -15 }\text{ m}$ C $\lambda =4.15\times { 10 }^{ -4 }\text{ m}$ D $\lambda =6.65\times { 10 }^{ -12 }\text{ m}$
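The intended computation is $\lambda = h/(mv)$. A quick sketch with standard rounded constants (not given in the question) shows which option is plausible:

```python
h = 6.626e-34        # Planck constant, J*s
m_alpha = 6.645e-27  # alpha particle mass, kg
v = 1.5e7            # 15,000 km/s expressed in m/s

wavelength = h / (m_alpha * v)  # de Broglie relation: lambda = h / (m v)
print(f"{wavelength:.2e} m")    # → 6.65e-15 m, i.e. choice B
```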
# Change of variables

1. Feb 3, 2006

### Benny

Hi, I would like some help with the following question.

Q. Let f be continuous on [0,1] and let R be the triangular region with vertices (0,0), (1,0) and (0,1). Show that:

$$\iint_R f\left( x + y \right)\, dA = \int_0^1 u f\left( u \right)\, du$$

By making the substitutions u = x+y and v = y, I got it down to:

$$\iint_R f\left( x + y \right)\, dA = \int_0^1 \int_0^u f\left( u \right)\, dv\, du$$

The above leads to the given result. However, I was stuck on trying to get bounds for the integrals, so I'm not sure if I've justified those limits of integration properly. From the boundary line y = 1 - x, the substitution u = x+y gives u = 1. From the boundary line x = 0, the substitutions yield u = 0 + y = v, so that v = u. The boundary line y = 0 yields v = 0. The lower limit for u is the one I'm unsure about. The three boundaries that I've just obtained completely describe the region R anyway, so I decided to say that at the origin x = y = 0, which gives u = 0. I'm not sure whether I should've done something else to obtain the lower u limit. Any help would be good.

Last edited: Feb 3, 2006

2. Feb 3, 2006

### benorin

If the original bounds are 0<=y<=1 and 0<=x<=1-y, then for u=x+y and v=y we have 0<=x<=1-y ==> y<=x+y<=1 ==> v<=u<=1 and, of course, 0<=v<=1

3. Feb 3, 2006

### benorin

A better way: (take all inequalities as inclusive) Let L1: x+y=1, 0<y<1 ==> L1': u=1, 0<v<1 Let L2: y=0, 0<x<1 ==> L2': v=0, 0<u-v<1, but v=0, so 0<u-0<1 or just 0<u<1 Let L3: x=0, 0<y<1 ==> L3': u-v=0 i.e. u=v, 0<v<1

4. Feb 3, 2006

### benorin

So the transformed region is /| = a right triangle in the uv-plane formed by cutting the unit square along u=v, lower triangle

5. Feb 3, 2006

### Benny

Thanks for the help Benorin. I just thought of another way to justify setting u = 0 as a lower bound for the u integral.
I've already established 3 boundary lines, which account for the 'shape' of the original region in the xy-plane. From a quick sketch I can see that adding the line y = -x as a boundary line still gives the same region, so y = -x => y + x = 0 = u. But that's kind of a fudge method; your method is the right one.
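The identity under discussion can also be spot-checked numerically. The sketch below compares a midpoint Riemann sum of f(x+y) over the triangle with the one-dimensional midpoint rule for $\int_0^1 u f(u)\,du$, using f(u) = u² as an illustrative test function (both sides should come out near 1/4):

```python
def f(u):
    return u * u  # test integrand; any continuous f on [0, 1] works

n = 1000
h = 1.0 / n

# Left side: midpoint sum of f(x + y) over the triangle with
# vertices (0,0), (1,0), (0,1); cells are kept if their midpoint
# satisfies x + y <= 1, so the boundary is only approximately resolved.
lhs = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if x + y <= 1.0:
            lhs += f(x + y) * h * h

# Right side: midpoint rule for the single integral of u * f(u).
rhs = sum((k + 0.5) * h * f((k + 0.5) * h) * h for k in range(n))

print(lhs, rhs)  # both close to 0.25
```

The agreement (to roughly the boundary-cell error, about 1/n) is consistent with the change of variables u = x+y, v = y having Jacobian 1.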
# How many real roots are there to $2^x=x^2$? [duplicate] How many real roots are there to $2^x=x^2$? - ## marked as duplicate by Chandrasekhar, Asaf Karagila, t.b., J. M., Jonas Teuwen Aug 9 '11 at 11:51 What have you tried? –  lhf Aug 8 '11 at 16:11 Closely related: math.stackexchange.com/questions/9505/… –  user9413 Aug 8 '11 at 16:12 Wrong tag; this is not a functional equation. –  Hans Lundmark Aug 8 '11 at 16:56 Another related question: math.stackexchange.com/questions/44206/… –  Jonas Meyer Aug 9 '11 at 4:21 I don't find the "Possible Duplicate" mentioned to be equivalent. $x^y=y^x$ ($x$ and $y$ are integers) doesn't consider the negative solution ($0<2^x<1$ for $x<0$). How many points in the xy-plane do the graphs of $y=x^{12}$ and $y=2^x$ intersect? at least has $2$ positive and $1$ negative solution, but no obvious ones. I could take it as equivalent, if pressed. –  robjohn Aug 10 '11 at 9:33 An obvious solution is $x=2$. If $2^x = x^2$, then $x\neq 1$ and $x\neq 0$. I'll treat the positive and negative cases separately. If $x\gt 0$, then we get $x\ln(2) = 2\ln(x)$, or $\frac{x}{\ln x} = \frac{2}{\ln 2}$. The derivative of $g(x) = \frac{x}{\ln x}$ is $\frac{\ln x - 1}{(\ln x)^2}$. On $(1,\infty)$, the derivative is positive on $(e,\infty)$ and negative on $(1,e)$, so there is an absolute minimum at $x=e$, where the value is $e$; $\lim\limits_{x\to 1^+} g(x) = \lim\limits_{x\to\infty}g(x) = \infty$; since $\frac{2}{\ln 2}\gt e$, there are two values of $x$ where $g(x) = \frac{2}{\ln 2}$; one is $x=2$, which we had already found, the other is a value greater than $e$ (which as it happens is $4$). On $(0,1)$, $g(x)$ is always negative, so there are no values where $g(x)=\frac{2}{\ln 2}$. So for $x\gt 0$, there are two solutions. For $x\lt 0$, the equation $2^x = x^2$ is equivalent to the equation $\left(\frac{1}{2}\right)^a = a^2$, where $a=-x\gt 0$. This time, the equation is equivalent to $\frac{a}{\ln a} = -\frac{2}{\ln 2}$.
There are no solutions for $a\gt 1$, since $g(x)$ is positive there. On $(0,1)$, $g'(x)\lt 0$, so the function is strictly decreasing; we have $\lim\limits_{a\to 0^+}\frac{a}{\ln a} = 0$ and $\lim\limits_{a\to 1^-}\frac{a}{\ln a} = -\infty$, so there is one and only one value of $a$ for which $\frac{a}{\ln a} = -\frac{2}{\ln 2}$. Thus, there is one value of $x\lt 0$ which solves the equation. In summary, there are three real solutions: one lies in $(-1,0)$, the second is $2$, and the third is $4$. - Explicitly, the third real solution (besides 2 and 4) is $- \frac{2 W(\ln(2)/2)}{\ln(2)}$, where $W$ is the Lambert W function. - Assuming that $x>0$, by taking logs of both sides and rearranging, we get that $$\frac{\log(x)}{x}=\frac{\log(2)}{2}$$ Since $\frac{d}{dx}\frac{\log(x)}{x}=\frac{1-\log(x)}{x^2}$ vanishes only when $x=e$, and $\frac{\log(x)}{x}=\frac{\log(2)}{2}$ when $x=2$ and $x=4$, those are the only two positive solutions (i.e. the Mean Value Theorem says that $\frac{d}{dx}\frac{\log(x)}{x}$ vanishes between any two solutions). For $x<0$, noting that $x^2=(-x)^2$, we have $$\frac{\log(-x)}{x} = \frac{\log(2)}{2}$$ Since $\frac{d}{dx}\frac{\log(-x)}{x}=\frac{1-\log(-x)}{x^2}$ only vanishes at $x=-e$, there can be at most one solution in $(-e,0)$ and one in $(-\infty,-e)$. For $x$ in $(-\infty,-e)$, $\frac{\log(-x)}{x}<0$ so there are no solutions in this range. Since $\frac{\log(-(-1))}{-1}=0$ and $\frac{\log(-(-1/2))}{-1/2}=2\log(2)>\frac{\log(2)}{2}$, there must be a solution in $(-1,-\frac{1}{2})$, which is $x=-.766664695962123093111204422510$. - Taking logs of both sides assumes $x>0$ but there is a negative solution near $x=-1$. –  lhf Aug 8 '11 at 16:20 Why is $x\approx-0.76666469596212309311$ a solution? –  FUZxxl Aug 8 '11 at 16:20 @FUZxxl: Because when you plug it into each of the two sides you get the same value? –  Arturo Magidin Aug 8 '11 at 16:26 @Arturo This was meant because the answerer stated, that $x = 2, x = 4$ are the only solutions. 
–  FUZxxl Aug 8 '11 at 16:32 @FUZxxl: So, should that have been "Why isn't [...] a solution?", then? –  Arturo Magidin Aug 8 '11 at 16:43 Go to Wolfram|Alpha and type $2^x=x^2$ (link) - I wanted to do that for my answer. =P +1 –  Patrick Da Silva Aug 8 '11 at 23:31 you have 3 roots: You can put your equation into a function: $$f(x)=2^x-x^2$$ Now the question is, for what x is $f(x) = 0$; or, what are the roots of f(x)? The Newton-Raphson method starts with some first guess, $x_0$, and finds the next guess, $x_1$, by a formula. Then, using this guess, we apply the same formula to find a new guess, $x_2$. We continue until we're as close as we wish. The formula is $$x_{i+1} = x_{i} - \frac{f(x_{i})}{f'(x_{i})}$$ We need $f'(x)$, the derivative of $f(x)$. It is $$f'(x) = 2^x \ln(2) - 2x$$ Thus the formula for our problem is $$x_{i+1} = x_{i} - \frac{2^{x_i}-x_i^2}{2^{x_i}\ln(2)-2x_i}$$ You can set this up in a spreadsheet. Then try different first guesses $x_0$. You'll find that the algorithm zeroes in on one of the three roots, depending on the starting value. If I start with $x_0 = 0$, I get the root: $$x = -0.766664696$$ after 5 iterations. You can verify: $$2^{-0.766664696} = 0.587774756$$ $$(-0.766664696)^2 = 0.587774756$$ If I start with $x_0 = 1$, I get the root $x=2$. If I start with $x_0 = 3$, I get the root $x = 4$. You have observed that there are three roots.
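The spreadsheet iteration just described is easy to transcribe into Python; this is a direct sketch of the same Newton-Raphson scheme, with the three starting guesses from the answer:

```python
import math

def f(x):
    return 2 ** x - x * x

def fprime(x):
    return 2 ** x * math.log(2) - 2 * x

def newton(x0, tol=1e-12, max_iter=60):
    """Newton-Raphson for f(x) = 2^x - x^2 = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

roots = [newton(x0) for x0 in (0.0, 1.0, 3.0)]
print([round(r, 9) for r in roots])  # → [-0.766664696, 2.0, 4.0]
```

Note that from $x_0 = 3$ the iterates wander before settling on 4, which is typical Newton behavior when the starting guess is near a stationary point of $f$.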
I hope this helps - To flesh out Robert's solution: $$x^2=\exp(x\ln 2)$$ can be rearranged as: $$x^2 \exp(-x\ln 2)=1$$ Take the appropriate square root of both sides: $$x \exp\left(-x\frac{\ln 2}{2}\right)=-1$$ multiply both sides with the appropriate factor: $$-x \frac{\ln 2}{2}\exp\left(-x\frac{\ln 2}{2}\right)=\frac{\ln 2}{2}$$ invoke the Lambert function: $$-x \frac{\ln 2}{2}=W\left(\frac{\ln 2}{2}\right)$$ $$x=-\frac{2}{\ln 2}W\left(\frac{\ln 2}{2}\right)$$ Also, $$-\frac{2}{\ln 2}W\left(-\frac{\ln 2}{2}\right)=2$$ and $$-\frac{2}{\ln 2}W_{-1}\left(-\frac{\ln 2}{2}\right)=4$$ where $W_{-1}(x)$ is the other branch of the Lambert function that is real in the interval $[-1/e,0)$ - if you take the other sign for the square root, will $W(-\frac{\ln 2}{2})$ yield 2 and 4? –  Tobias Kienzler Aug 9 '11 at 11:34 @Tobias: I have updated my answer. –  J. M. Aug 10 '11 at 1:49 For whatever it's worth: I await the day where one would not feel the desire/need to link to a wiki article for the Lambert function... –  J. M. Aug 10 '11 at 4:24 sorry... because of wiki instead of wolfram or because of linking at all? –  Tobias Kienzler Aug 10 '11 at 6:01 so I better not worsen your by mentioning I had to look for it first... but then again I'm always eager to learn new things by randomly browsing questions here :) –  Tobias Kienzler Aug 10 '11 at 8:26 show 1 more comment By drawing the graphs of both functions, we can easily guess that there are three. What the roots actually are, I don't know, but at least I can answer your question. To prove this, one might want to use Rolle's Theorem on the function $f(x) = 2^x - x^2$ to show the existence of the third $0$, which is the non-trivial one (the first two are $x=2$ and $x=4$). Just notice that $f(0) = 1$ and say $f(-100) < 0$, hence there exists a zero between those points. Since the derivative of $f$ is strictly positive in the interval $(-\infty, 0)$, this is the only one in this interval. 
- Hardy's Pure Mathematics has a section on sketching graphs - an underused technique. I recall that someone once solved one of Bela Bollobas's infamous double-starred questions (back in the early 1980s) by getting a computer to draw a graph, which then made it obvious what needed to be done to make a rigorous argument. If stuck, DRAW A GRAPH. –  Mark Bennet Aug 8 '11 at 18:51 I'm not sure I'd call the root at $x=4$ 'trivial' - it's easy, but it's not simply a symbolic substitution into the equation the way $x=2$ is. But this is nitpicking. :-) –  Steven Stadnicki Aug 8 '11 at 19:00 It's trivial in the sense that if I tell you "this guy is a root" you can look at me and say "yes". The third root is non-trivial in the sense that I can't do that. =P This is usually the sense of the word "trivial" in general contexts. If proving that something works doesn't require any argument, it is said to be trivial in general. Trivial does not mean "easy to find", it means "does not require proof in the context"... anyway, those are my perceptions of the word. I believe there are $\infty$ posts about the word "trivial" on every math forum in the world. =P –  Patrick Da Silva Aug 8 '11 at 23:29
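The "draw a graph, then make it rigorous" advice above has a direct numerical analogue: scan a grid for sign changes (where the graph crosses the axis) and tighten each bracket by bisection, which is exactly the Intermediate Value Theorem in executable form. A minimal sketch (the grid range, step, and tolerance are arbitrary choices of mine):

```python
def f(x):
    return 2**x - x**2

def bisect(lo, hi, tol=1e-12):
    """Shrink [lo, hi], on which f changes sign, until it pins the root."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid       # sign change in left half
        else:
            lo = mid       # sign change in right half
    return (lo + hi) / 2

# Scan a grid, just as a sketched graph would reveal the crossings.
# (The grid is offset so no grid point lands exactly on a root.)
xs = [-2.05 + 0.1 * k for k in range(81)]
brackets = [(a, b) for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0]
roots = [bisect(a, b) for a, b in brackets]
print(roots)  # three roots, near -0.7667, 2, and 4
```

The scan finds exactly three sign changes, confirming the graph-based count of three roots.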
# Infinite quotient of Hurwitz Group I am currently working through all the groups with two generators, and I am up to the group with presentation $G := \langle a, b \ | \ a^2, b^3, (ab)^7, [a,b]^9 \rangle$. I have found all the finite quotients of this group, but there are also the infinite quotients of the group that I need to check. Are there any infinite quotients of this group other than the whole group? I know that there is a central element of order 2, but what I need to know is, what is this element? - Why the downvote? The question is perfectly reasonable. –  Thomas Aug 21 '13 at 4:32 This was answered in your earlier question. $G$ has a central element of order 2. –  Derek Holt Aug 21 '13 at 6:59 Ok then, what I need to know is: What is the central element in terms of a and b? Also, is there a way to prove that there aren't any more quotients? –  Thomas Aug 21 '13 at 7:06 Sorry I'm in a hurry! It's the commutator $[x,y]$, where (for example) $x=b * a * b * a * b^-1 * a * b * a * b * a * b^-1 * a * b * a * b^-1 * a * b^-1 * a * b * a * b^-1 * a * b^-1 * a * b * a * b^-1 * a$, $y=b * a * b^-1 * a * b * a * b^-1 * a * b^-1 * a * b * a * b^-1 * a * b^-1 * a * b * a * b^-1 * a * b * a * b * a * b^-1 * a * b * a$. –  Derek Holt Aug 21 '13 at 7:11 "I am currently working through all the groups with two generators". Higman, Neumann and Neumann proved that every countable group embeds in a 2-generator group. So your project may take you a while... –  HJRW Aug 21 '13 at 14:12
# Is it possible to build a homemade storage device?

From what I know, HDDs are built on magnetic recording technology. My question is: is it possible to create a storage device (USB drive, HDD, etc.) from scratch? I'm just eager to try out building some electronic devices...
• You could try making a USB based memory device. For example, get a USB to serial converter chip, a microcontroller and some $I^2C$ memory for it. You could then write a program that will communicate with the micro and write to the memory and read from it. Just don't expect the capacity and speed to be anything close to the one of a commercial USB flash drive. Still, it will give you a starting point. May 28 '11 at 11:05
• What are you actually planning to do? Build the physical storage medium from scratch, or use preexisting physical storage and just do the interfacing? And what kind of stuff do you want to interface it with? PC or microcontroller or what? May 28 '11 at 12:41
• Modern hard disk devices and memory technologies are very advanced, and they are based on tons of patents and trade secrets. You could hack into a hard disk through JTAG and play with it, then start reading the patents and hack into its firmware. That way you could learn about integrated technologies by doing. Feb 19 '15 at 9:21
• You can pick a slate plate from a quarry and scratch on it whatever data you want to store. – Curd Aug 9 '15 at 14:16
• Note that USB is not a type of data storage device. It's a data transfer protocol. Some storage devices speak USB, but many devices that speak USB are not storage devices. Aug 9 '15 at 15:16
Yes, you can do it-- but it's hard and it won't store very much. I think the thing that makes it hard is that you need to know a lot of very specialized areas to make it work. Things like: software, signal processing, electronics, electro-magnetism, metalworking, motors/gears/etc, and materials science (somewhat like chemistry).
While not impossible, it is rare to find someone who is proficient in all of those areas. If you want to make this easier, I would recommend starting with a standard cassette tape player/recorder. Rip out all of the electronics and keep the motors, gears, mechanical stuff, and the read/write/erase heads. Then add back in your own electronics. This still offers a lot of challenges, but the odds of success go way up. Then, if you get that working you can take the knowledge you gained and go on to a hard drive or something. If you go the cassette tape route, let me mention that if you Google that you'll find a lot of pages that do something similar, but without modifying the player much. They do that by modulating the data into something that resembles audio and can be stored as audio. That's not what I'm recommending. If you rip out the guts of the player/recorder then you can have direct control of the motors and heads, which opens up lots of possibilities. Hard drives will be harder, mostly because you'd have to figure out how to make the hard drive platters. Meaning, you have to make the magnetic recording medium and somehow spread it out evenly and smoothly on the glass or aluminum platter "base". Making the heads isn't easy either. I should point out, however, that a clean room is not required. I remember playing with a "removable hard drive" on a DEC PDP-8 computer. Instead of removing the whole drive, you only removed the platters. The platters were about 12 inches across and contained in something like a piece of Tupperware that you'd carry a cake in. About 6 or so platters per carrier. Before putting the platters into the drive you would have to remove them from the Tupperware. It was big, and didn't store a lot, but no clean room either. Don't get me wrong, modern drives do need a clean room. But a DIYer has little or no hope of building a modern drive in his home so it's not really an issue. 
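The audio-modulation route mentioned earlier in this answer (encoding bits as tones so an unmodified cassette deck can record them) can be sketched as a simple two-tone (FSK) encoder. The sample rate, bit rate, and tone frequencies below are illustrative choices of mine, not the Kansas City standard or any other real cassette format:

```python
import math

SAMPLE_RATE = 8000           # samples per second (illustrative)
SAMPLES_PER_BIT = 80         # gives 100 bits per second
F_ZERO, F_ONE = 1000, 2000   # tone frequencies for 0 and 1, in Hz

def fsk_encode(bits):
    """Map each bit to a burst of sine samples at one of two frequencies."""
    samples = []
    for i, bit in enumerate(bits):
        freq = F_ONE if bit else F_ZERO
        for n in range(SAMPLES_PER_BIT):
            t = (i * SAMPLES_PER_BIT + n) / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

audio = fsk_encode([1, 0, 1, 1, 0, 0, 1, 0])
print(len(audio))  # 8 bits * 80 samples/bit = 640 samples
```

With these numbers each bit spans a whole number of cycles of either tone, so the waveform stays phase-continuous at bit boundaries, which matters when the "channel" is a tape head.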
Another form of storage that could be interesting is an optical fiber "drive". Light travels approximately 6 inches per nanosecond in a fiber optic. So if you have a fiber that is 100 feet long and you're transmitting stuff at 1 Gbps then you're really storing 200 bits of data in that fiber. Make the fiber several kilometers long and you could store a barely useful amount of data. Get a fiber transmitter and receiver set up so whatever is received will be retransmitted, and your data will just recirculate endlessly. Some extra stuff will then allow you to read/write the data. Probably the most useful, and least satisfying, thing to build would be something like a USB thumb drive. Basically you buy the flash chip, and the controller chip, connect them together and you're done. To make it a little harder, replace the controller chip with a microcontroller and write lots of software. It's not super interesting, in my opinion. I don't think it offers the same sense of accomplishment that the other approaches offer-- even though the performance and capacity would be the highest this way.
• About the PDP-8 HDD: that was then; data densities nowadays are much higher, so the head has to fly much closer to the platter's surface, typically 1 um. A hard dust particle of 1 um may crash the head. May 28 '11 at 14:38
• Isn't the index of refraction for fiber (both glass and plastic) about 1.5? That would give a speed of light of 8 inches (20 cm) per nanosecond. May 28 '11 at 16:18
• @stevenvh Wow, tough crowd! While data densities of modern drives are high and the head height is super low, we are not talking about a modern drive. We're talking about something that someone built in their home. Using the PDP-8 as a point of reference, I doubt that anyone can build a HDD as good in their home. At least not on their first or second try. So, my point stands. You don't need a clean room when making a HDD from scratch.
As for the propagation speed in a fiber: you're just being pedantic. :) – user3624 May 28 '11 at 19:31 • I'm not sure much is to be gained by replacing the electronics in a tape deck; you are still going to have to modulate and demodulate the data stream, and are still going to be bandwidth limited to probably not much more than audio by the head. You might get a little better with fancier amplifiers and speeding up that tape. Want to store a lot? Use a VCR instead of an audio deck. May 30 '11 at 7:39 • @Chris Stratton By replacing the electronics you can do: Automagic seeking (use one of the channels as an index, store the data in the other channel, then automatically FF and RWD to find where the data is stored). Replace the normal tape bias with your own modulation to increase the storage density. Run the motors at a higher speed for higher data rates. Finer control of the erase head allows more selective writes. Etc. – user3624 May 30 '11 at 13:19 A ferrite core memory is entirely buildable at home without specialized hardware or electronic parts... Some kind of low density magnetic media storage could also be built with no custom parts. • It's an idea, but I would hate to think of weaving something bigger than an 8 byte memory :-) May 28 '11 at 16:01 • @Federico What about 32 bit on an Arduino shield? :-)) corememoryshield.com/report.html May 28 '11 at 18:48 • Building a large, low density hard drive could be a fun project. May 30 '11 at 7:36 • @Axeman: I wonder whether cores could be reliably driven with 2/3 the "switching" current without switching and, if so, whether that's been exploited? It would seem like it should be possible to access 256 bits using 12 wires (4x(4+4)x(4+4)) and "power-of-two" addressing, or 880 bits (four times (12 choose 3)) if one can use arbitrary combinations of drive wires. Jul 18 '12 at 19:41 • @ChrisStratton - something like this? Aug 9 '15 at 15:20 A HDD is not a very good idea for a DIY project.
You need lots of special parts which aren't available for DIY, like the voice coil, the platters and the magnetic head. You'd also need clean room conditions. And of course it's all about high precision mechanics. Also, if you did succeed in constructing one, it would probably cost 10 to 100 times more than what you pay for a commercial product.
• I once arrived at a friend's while he was doing open heart surgery on a HDD, sitting at the kitchen table with an ashtray next to the open drive :-) May 28 '11 at 12:15
• @Federico - Yeah, so much for clean room conditions... :-( May 28 '11 at 13:31
• There are hard drive failures where opening it, fixing the problem, and then immediately imaging it to a good drive is a pragmatically reasonable alternative. Back in the day, I remember running a scrap <20mb drive open; it "seemed" to work for a few hours, but I think not when I tried it again days later. May 30 '11 at 7:35
• @stevenvh High precision mechanics? I have no clue what kind of mechanics those are. Are you referring to those MEMS-like nano-technological actuators? Are MEMS used in hard disk construction? The voice coil system is more than a feedback system, as I thought already. I've only read some patents, and as I read somewhere, a neural net and a Bayesian net method are used to calibrate the head. Looking forward to more info about precision mechanics. Feb 19 '15 at 10:37
• @StandardSandun: You don't know what precision mechanics is?? How about parts which are machined to operate with better than 1/100 mm precision? Mar 5 '15 at 8:07
Crazy idea if you are really, really bored: you could delve into organic storage. Slow but huge capacity. E. coli hard disk
If your goal is to build something interesting, rather than practical, there are a variety of DIY-compatible ways one could store information electronically.
While it's extremely doubtful that one could achieve anything resembling cost-effective performance, it's entirely possible that one might be able to, with modern technology, achieve a level of performance for some techniques which would be significantly above what could have been achieved a few years ago with similar techniques. For example, it might be interesting to play with acoustic delay lines. Generally, their performance has been limited by the fact that signals will spread out a certain amount as they travel down the lines; if one tries to push the bandwidth too high, bits may blur into each other by the time they reach the far end. Back in the days when delay lines were used for storage, that would have been an absolute limiting factor. With today's DSPs, however, it may be possible to reconstruct waves which would have been unreadably blurry a few decades ago. I'm not sure how many bits one could store in something like a spring reverb, but it might be interesting to play with one and find out.
• For a really long delay line, use a geosynchronous comsat... or a repeater on the moon. May 30 '11 at 7:41
There's always drum memory. A soup can wrapped in magnetic wire or sticky tape covered in rust may work. Then add a small A/C motor and gear train to move the drum one word at a time, allowing very precise control. And finally one or more read heads, consisting of a c-shaped ferromagnet wrapped in wire. Ferrite was generally used for these types of heads, but maybe steel or iron will work, too. And if all else fails, there's always the paper drum: paper with holes in it wrapped around the drum. Apply a charge to the drum, and the other charge to the "read heads", and it makes for a simple ROM. You can make magnetic tape from sticky tape and rust. Axeman beat me to it by suggesting magnetic core memory. I'd add that if you're looking for permanent storage (ROM) then you could investigate 'core rope memory'.
This could be useful for 'code shadowing' on a very small project, where the permanently coded ROM containing the code instructions is loaded into RAM during boot-time. Both magnetic core and core rope are conceptually similar, although they function differently. The ferrite rings in the magnetic core memory function by easily switching their polarity (north-south). This switching is done with a current-carrying wire pair through the middle of the ferrite ring. The polarity indicates the binary memory state, and a sense wire then reads the state. Core ropes function more as tiny transformers: a data address wire is powered, and every core tied to that address will energize. Functionally, 8 cores could be wired to each address, and by energizing the individual addresses, the 8-bit binary value 'stored' at that address can be read. These technologies were used in the Apollo project. Although they have low storage space per volume, the point is they answer your original question; they are possible to build entirely from scratch. I've seen groups dedicated to making their own (I've thought about making one myself as a demonstration/teaching aid) and even someone making a module to display numbers on a 7-segment display using 7 cores and just wrapping them in the correct order; each number is then shown by powering each 'address' from 0-9. http://hackaday.com/2013/10/09/making-a-core-rope-read-only-memory/ If you are more interested in simply building a functional memory for a bit of electronics practice, then there are options; the STM32F4 family of microcontrollers can be programmed as a 'USB-on-the-go' device. You could then get some SPI flash memory chips (a few Kb up to several Mb) and use the STM32 both as the USB device, and the driver to store/read from the memory chip. STMicro produce an 'F4 discovery' board, which comes with a suitable USB port wired on for USB-OTG.
Once you start looking into the SPI protocol, you can see that each chip uses the same 3 data transfer wires, and a separate dedicated chip select wire: building your own 16MB USB memory stick using four 2MB SPI chips and switching which 'bank' is used in software would be an excellent learning tool, if a bit on the advanced side. A similar project could be done with an Arduino or Picaxe microcontroller (much much easier to program than the STM32, but not as powerful). A simple Arduino project which takes data from a serial port and stores it to an SPI memory shouldn't take more than a few days to get working.
• Axeman beat you by 6 years. xD Feb 8 '17 at 0:24
I came up with a variant of the delay line independently, using the effect of infrared-quenched phosphorescence in ZnS glow-in-the-dark (GITD) material. My research suggested that a single platter with 16 UV SMD LEDs and 16 photodiodes tuned to the green emission (doable), and a single infrared quencher 300 degrees away in the direction of rotation, with analogue ICs to do the data refresh, could potentially store just shy of 500MB if the goal was to store the data for a minute at a time and continuously refresh from external storage (i.e. scope, etc.). For something like encryption keys it would be ideal, as the original could be on paper, and the paper then destroyed, etc.
You could take an old DVD player and rewrite the software to store data on the NAND flash where the O.S. for the DVD player is stored (the O.S. is typically written in Java).
• The OP wants to create a storage device from scratch, not reuse an existing one. And do you have some kind of citation for your statement that the OS is "typically written in Java"....that surprises me very much. Jan 12 '20 at 21:03
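The "switching which bank is used in software" idea from the SPI answer above boils down to integer division of a flat address into a chip-select index and an offset within that chip. A minimal sketch, assuming the four hypothetical 2 MB chips from that answer:

```python
# Four hypothetical 2 MB SPI flash chips behind one flat 8 MB address space.
CHIP_SIZE = 2 * 1024 * 1024

def select_bank(address):
    """Map a flat address to (chip-select index, offset within that chip)."""
    return divmod(address, CHIP_SIZE)

bank, offset = select_bank(5_000_000)
print(bank, offset)  # chip 2, offset 805696
```

On the microcontroller, the first value would drive which chip-select line is asserted and the second would be sent as the address phase of the SPI read/write command.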
## G = Dic10⋊19D4, order 320 = 2^6·5

### 7th semidirect product of Dic10 and D4 acting via D4/C22=C2

Series: Derived, Chief, Lower central, Upper central
Derived series: C1 — C2×C10 — Dic10⋊19D4
Chief series: C1 — C5 — C10 — C2×C10 — C2×Dic5 — C2×Dic10 — C22×Dic10 — Dic10⋊19D4
Lower central: C5 — C2×C10 — Dic10⋊19D4
Upper central: C1 — C22 — C4⋊D4

Generators and relations for Dic10⋊19D4: G = < a,b,c,d | a^20=c^4=d^2=1, b^2=a^10, bab^-1=a^-1, cac^-1=a^11, ad=da, bc=cb, bd=db, dcd=c^-1 >

Subgroups: 982 in 290 conjugacy classes, 107 normal (43 characteristic): C1, C2, C2, C4, C4, C22, C22, C22, C5, C2×C4, C2×C4, C2×C4, D4, Q8, C23, C23, C23, D5, C10, C10, C42, C22⋊C4, C22⋊C4, C4⋊C4, C4⋊C4, C22×C4, C22×C4, C2×D4, C2×D4, C2×D4, C2×Q8, C4○D4, Dic5, Dic5, C20, C20, D10, C2×C10, C2×C10, C2×C10, C4×D4, C4×Q8, C4⋊D4, C4⋊D4, C22⋊Q8, C4.4D4, C22×Q8, C2×C4○D4, Dic10, Dic10, C4×D5, C2×Dic5, C2×Dic5, C2×Dic5, C5⋊D4, C2×C20, C2×C20, C2×C20, C5×D4, C22×D5, C22×C10, C22×C10, Q8⋊5D4, C4×Dic5, C4×Dic5, C10.D4, C10.D4, C4⋊Dic5, D10⋊C4, D10⋊C4, C23.D5, C23.D5, C5×C22⋊C4, C5×C4⋊C4, C2×Dic10, C2×Dic10, C2×Dic10, C2×C4×D5, D4⋊2D5, C22×Dic5, C2×C5⋊D4, C2×C5⋊D4, C22×C20, D4×C10, D4×C10, Dic5.14D4, Dic5.5D4, Dic5⋊3Q8, D10⋊2Q8, C4×C5⋊D4, D4×Dic5, C20.17D4, Dic5⋊D4, C5×C4⋊D4, C22×Dic10, C2×D4⋊2D5, Dic10⋊19D4

Quotients: C1, C2, C22, D4, C23, D5, C2×D4, C4○D4, C24, D10, C22×D4, C2×C4○D4, 2- 1+4, C22×D5, Q8⋊5D4, D4×D5, D4⋊2D5, C23×D5, C2×D4×D5, C2×D4⋊2D5, D4.10D10, Dic10⋊19D4

Smallest permutation representation of Dic10⋊19D4
On 160 points
Generators in S160
(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20)(21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60)(61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100)(101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120)(121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139
140)(141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160) (1 129 11 139)(2 128 12 138)(3 127 13 137)(4 126 14 136)(5 125 15 135)(6 124 16 134)(7 123 17 133)(8 122 18 132)(9 121 19 131)(10 140 20 130)(21 78 31 68)(22 77 32 67)(23 76 33 66)(24 75 34 65)(25 74 35 64)(26 73 36 63)(27 72 37 62)(28 71 38 61)(29 70 39 80)(30 69 40 79)(41 90 51 100)(42 89 52 99)(43 88 53 98)(44 87 54 97)(45 86 55 96)(46 85 56 95)(47 84 57 94)(48 83 58 93)(49 82 59 92)(50 81 60 91)(101 148 111 158)(102 147 112 157)(103 146 113 156)(104 145 114 155)(105 144 115 154)(106 143 116 153)(107 142 117 152)(108 141 118 151)(109 160 119 150)(110 159 120 149) (1 156 90 28)(2 147 91 39)(3 158 92 30)(4 149 93 21)(5 160 94 32)(6 151 95 23)(7 142 96 34)(8 153 97 25)(9 144 98 36)(10 155 99 27)(11 146 100 38)(12 157 81 29)(13 148 82 40)(14 159 83 31)(15 150 84 22)(16 141 85 33)(17 152 86 24)(18 143 87 35)(19 154 88 26)(20 145 89 37)(41 61 139 113)(42 72 140 104)(43 63 121 115)(44 74 122 106)(45 65 123 117)(46 76 124 108)(47 67 125 119)(48 78 126 110)(49 69 127 101)(50 80 128 112)(51 71 129 103)(52 62 130 114)(53 73 131 105)(54 64 132 116)(55 75 133 107)(56 66 134 118)(57 77 135 109)(58 68 136 120)(59 79 137 111)(60 70 138 102) (1 66)(2 67)(3 68)(4 69)(5 70)(6 71)(7 72)(8 73)(9 74)(10 75)(11 76)(12 77)(13 78)(14 79)(15 80)(16 61)(17 62)(18 63)(19 64)(20 65)(21 127)(22 128)(23 129)(24 130)(25 131)(26 132)(27 133)(28 134)(29 135)(30 136)(31 137)(32 138)(33 139)(34 140)(35 121)(36 122)(37 123)(38 124)(39 125)(40 126)(41 141)(42 142)(43 143)(44 144)(45 145)(46 146)(47 147)(48 148)(49 149)(50 150)(51 151)(52 152)(53 153)(54 154)(55 155)(56 156)(57 157)(58 158)(59 159)(60 160)(81 109)(82 110)(83 111)(84 112)(85 113)(86 114)(87 115)(88 116)(89 117)(90 118)(91 119)(92 120)(93 101)(94 102)(95 103)(96 104)(97 105)(98 106)(99 107)(100 108) G:=sub<Sym(160)| 
(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)(21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100)(101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120)(121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140)(141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160), (1,129,11,139)(2,128,12,138)(3,127,13,137)(4,126,14,136)(5,125,15,135)(6,124,16,134)(7,123,17,133)(8,122,18,132)(9,121,19,131)(10,140,20,130)(21,78,31,68)(22,77,32,67)(23,76,33,66)(24,75,34,65)(25,74,35,64)(26,73,36,63)(27,72,37,62)(28,71,38,61)(29,70,39,80)(30,69,40,79)(41,90,51,100)(42,89,52,99)(43,88,53,98)(44,87,54,97)(45,86,55,96)(46,85,56,95)(47,84,57,94)(48,83,58,93)(49,82,59,92)(50,81,60,91)(101,148,111,158)(102,147,112,157)(103,146,113,156)(104,145,114,155)(105,144,115,154)(106,143,116,153)(107,142,117,152)(108,141,118,151)(109,160,119,150)(110,159,120,149), (1,156,90,28)(2,147,91,39)(3,158,92,30)(4,149,93,21)(5,160,94,32)(6,151,95,23)(7,142,96,34)(8,153,97,25)(9,144,98,36)(10,155,99,27)(11,146,100,38)(12,157,81,29)(13,148,82,40)(14,159,83,31)(15,150,84,22)(16,141,85,33)(17,152,86,24)(18,143,87,35)(19,154,88,26)(20,145,89,37)(41,61,139,113)(42,72,140,104)(43,63,121,115)(44,74,122,106)(45,65,123,117)(46,76,124,108)(47,67,125,119)(48,78,126,110)(49,69,127,101)(50,80,128,112)(51,71,129,103)(52,62,130,114)(53,73,131,105)(54,64,132,116)(55,75,133,107)(56,66,134,118)(57,77,135,109)(58,68,136,120)(59,79,137,111)(60,70,138,102), 
(1,66)(2,67)(3,68)(4,69)(5,70)(6,71)(7,72)(8,73)(9,74)(10,75)(11,76)(12,77)(13,78)(14,79)(15,80)(16,61)(17,62)(18,63)(19,64)(20,65)(21,127)(22,128)(23,129)(24,130)(25,131)(26,132)(27,133)(28,134)(29,135)(30,136)(31,137)(32,138)(33,139)(34,140)(35,121)(36,122)(37,123)(38,124)(39,125)(40,126)(41,141)(42,142)(43,143)(44,144)(45,145)(46,146)(47,147)(48,148)(49,149)(50,150)(51,151)(52,152)(53,153)(54,154)(55,155)(56,156)(57,157)(58,158)(59,159)(60,160)(81,109)(82,110)(83,111)(84,112)(85,113)(86,114)(87,115)(88,116)(89,117)(90,118)(91,119)(92,120)(93,101)(94,102)(95,103)(96,104)(97,105)(98,106)(99,107)(100,108)>; G:=Group( (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)(21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100)(101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120)(121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140)(141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160), (1,129,11,139)(2,128,12,138)(3,127,13,137)(4,126,14,136)(5,125,15,135)(6,124,16,134)(7,123,17,133)(8,122,18,132)(9,121,19,131)(10,140,20,130)(21,78,31,68)(22,77,32,67)(23,76,33,66)(24,75,34,65)(25,74,35,64)(26,73,36,63)(27,72,37,62)(28,71,38,61)(29,70,39,80)(30,69,40,79)(41,90,51,100)(42,89,52,99)(43,88,53,98)(44,87,54,97)(45,86,55,96)(46,85,56,95)(47,84,57,94)(48,83,58,93)(49,82,59,92)(50,81,60,91)(101,148,111,158)(102,147,112,157)(103,146,113,156)(104,145,114,155)(105,144,115,154)(106,143,116,153)(107,142,117,152)(108,141,118,151)(109,160,119,150)(110,159,120,149), 
(1,156,90,28)(2,147,91,39)(3,158,92,30)(4,149,93,21)(5,160,94,32)(6,151,95,23)(7,142,96,34)(8,153,97,25)(9,144,98,36)(10,155,99,27)(11,146,100,38)(12,157,81,29)(13,148,82,40)(14,159,83,31)(15,150,84,22)(16,141,85,33)(17,152,86,24)(18,143,87,35)(19,154,88,26)(20,145,89,37)(41,61,139,113)(42,72,140,104)(43,63,121,115)(44,74,122,106)(45,65,123,117)(46,76,124,108)(47,67,125,119)(48,78,126,110)(49,69,127,101)(50,80,128,112)(51,71,129,103)(52,62,130,114)(53,73,131,105)(54,64,132,116)(55,75,133,107)(56,66,134,118)(57,77,135,109)(58,68,136,120)(59,79,137,111)(60,70,138,102), (1,66)(2,67)(3,68)(4,69)(5,70)(6,71)(7,72)(8,73)(9,74)(10,75)(11,76)(12,77)(13,78)(14,79)(15,80)(16,61)(17,62)(18,63)(19,64)(20,65)(21,127)(22,128)(23,129)(24,130)(25,131)(26,132)(27,133)(28,134)(29,135)(30,136)(31,137)(32,138)(33,139)(34,140)(35,121)(36,122)(37,123)(38,124)(39,125)(40,126)(41,141)(42,142)(43,143)(44,144)(45,145)(46,146)(47,147)(48,148)(49,149)(50,150)(51,151)(52,152)(53,153)(54,154)(55,155)(56,156)(57,157)(58,158)(59,159)(60,160)(81,109)(82,110)(83,111)(84,112)(85,113)(86,114)(87,115)(88,116)(89,117)(90,118)(91,119)(92,120)(93,101)(94,102)(95,103)(96,104)(97,105)(98,106)(99,107)(100,108) ); G=PermutationGroup([[(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20),(21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60),(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100),(101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120),(121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140),(141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160)], 
[(1,129,11,139),(2,128,12,138),(3,127,13,137),(4,126,14,136),(5,125,15,135),(6,124,16,134),(7,123,17,133),(8,122,18,132),(9,121,19,131),(10,140,20,130),(21,78,31,68),(22,77,32,67),(23,76,33,66),(24,75,34,65),(25,74,35,64),(26,73,36,63),(27,72,37,62),(28,71,38,61),(29,70,39,80),(30,69,40,79),(41,90,51,100),(42,89,52,99),(43,88,53,98),(44,87,54,97),(45,86,55,96),(46,85,56,95),(47,84,57,94),(48,83,58,93),(49,82,59,92),(50,81,60,91),(101,148,111,158),(102,147,112,157),(103,146,113,156),(104,145,114,155),(105,144,115,154),(106,143,116,153),(107,142,117,152),(108,141,118,151),(109,160,119,150),(110,159,120,149)], [(1,156,90,28),(2,147,91,39),(3,158,92,30),(4,149,93,21),(5,160,94,32),(6,151,95,23),(7,142,96,34),(8,153,97,25),(9,144,98,36),(10,155,99,27),(11,146,100,38),(12,157,81,29),(13,148,82,40),(14,159,83,31),(15,150,84,22),(16,141,85,33),(17,152,86,24),(18,143,87,35),(19,154,88,26),(20,145,89,37),(41,61,139,113),(42,72,140,104),(43,63,121,115),(44,74,122,106),(45,65,123,117),(46,76,124,108),(47,67,125,119),(48,78,126,110),(49,69,127,101),(50,80,128,112),(51,71,129,103),(52,62,130,114),(53,73,131,105),(54,64,132,116),(55,75,133,107),(56,66,134,118),(57,77,135,109),(58,68,136,120),(59,79,137,111),(60,70,138,102)], [(1,66),(2,67),(3,68),(4,69),(5,70),(6,71),(7,72),(8,73),(9,74),(10,75),(11,76),(12,77),(13,78),(14,79),(15,80),(16,61),(17,62),(18,63),(19,64),(20,65),(21,127),(22,128),(23,129),(24,130),(25,131),(26,132),(27,133),(28,134),(29,135),(30,136),(31,137),(32,138),(33,139),(34,140),(35,121),(36,122),(37,123),(38,124),(39,125),(40,126),(41,141),(42,142),(43,143),(44,144),(45,145),(46,146),(47,147),(48,148),(49,149),(50,150),(51,151),(52,152),(53,153),(54,154),(55,155),(56,156),(57,157),(58,158),(59,159),(60,160),(81,109),(82,110),(83,111),(84,112),(85,113),(86,114),(87,115),(88,116),(89,117),(90,118),(91,119),(92,120),(93,101),(94,102),(95,103),(96,104),(97,105),(98,106),(99,107),(100,108)]]) 53 conjugacy classes class 1 2A 2B 2C 2D 2E 2F 2G 2H 4A 4B 4C 4D 4E 4F 
··· 4M 4N 4O 4P 5A 5B 10A ··· 10F 10G 10H 10I 10J 10K 10L 10M 10N 20A ··· 20H 20I 20J 20K 20L order 1 2 2 2 2 2 2 2 2 4 4 4 4 4 4 ··· 4 4 4 4 5 5 10 ··· 10 10 10 10 10 10 10 10 10 20 ··· 20 20 20 20 20 size 1 1 1 1 2 2 4 4 20 2 2 4 4 4 10 ··· 10 20 20 20 2 2 2 ··· 2 4 4 4 4 8 8 8 8 4 ··· 4 8 8 8 8 53 irreducible representations dim 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 4 4 4 4 type + + + + + + + + + + + + + + + + + + - + - - image C1 C2 C2 C2 C2 C2 C2 C2 C2 C2 C2 C2 D4 D5 C4○D4 D10 D10 D10 D10 2- 1+4 D4×D5 D4⋊2D5 D4.10D10 kernel Dic10⋊19D4 Dic5.14D4 Dic5.5D4 Dic5⋊3Q8 D10⋊2Q8 C4×C5⋊D4 D4×Dic5 C20.17D4 Dic5⋊D4 C5×C4⋊D4 C22×Dic10 C2×D4⋊2D5 Dic10 C4⋊D4 C2×C10 C22⋊C4 C4⋊C4 C22×C4 C2×D4 C10 C4 C22 C2 # reps 1 2 2 1 1 1 2 1 2 1 1 1 4 2 4 4 2 2 6 1 4 4 4 Matrix representation of Dic1019D4 in GL6(𝔽41) 9 0 0 0 0 0 0 32 0 0 0 0 0 0 0 1 0 0 0 0 40 34 0 0 0 0 0 0 1 0 0 0 0 0 0 1 , 0 9 0 0 0 0 9 0 0 0 0 0 0 0 1 0 0 0 0 0 34 40 0 0 0 0 0 0 40 0 0 0 0 0 0 40 , 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 23 31 0 0 0 0 12 18 , 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 18 10 0 0 0 0 21 23 G:=sub<GL(6,GF(41))| [9,0,0,0,0,0,0,32,0,0,0,0,0,0,0,40,0,0,0,0,1,34,0,0,0,0,0,0,1,0,0,0,0,0,0,1],[0,9,0,0,0,0,9,0,0,0,0,0,0,0,1,34,0,0,0,0,0,40,0,0,0,0,0,0,40,0,0,0,0,0,0,40],[0,1,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,23,12,0,0,0,0,31,18],[1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,18,21,0,0,0,0,10,23] >; Dic1019D4 in GAP, Magma, Sage, TeX {\rm Dic}_{10}\rtimes_{19}D_4 % in TeX G:=Group("Dic10:19D4"); // GroupNames label G:=SmallGroup(320,1270); // by ID G=gap.SmallGroup(320,1270); # by ID G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-5,224,477,232,570,185,12550]); // Polycyclic G:=Group<a,b,c,d|a^20=c^4=d^2=1,b^2=a^10,b*a*b^-1=a^-1,c*a*c^-1=a^11,a*d=d*a,b*c=c*b,b*d=d*b,d*c*d=c^-1>; // generators/relations ׿ × 𝔽
# What is a prime, and who decides? Some people view mathematics as a purely platonic realm of ideas independent of the humans who dream about those ideas. If that’s true, why can’t we agree on the definition of something as universal as a prime number? Courtney R. Gibbons Hamilton College ### Introduction Scene: It’s a dark and stormy night at SETI. You’re sitting alone, listening to static on the headphones, when all of the sudden you hear something: two distinct pulses in the static. Now three. Now five. Then seven, eleven, thirteen — it’s the sequence of prime numbers! A sequence unlikely to be generated by any astrophysical phenomenon (at least, so says Carl Sagan in Contact, the novel from which I’ve lifted this scene) — in short, proof of alien intelligence via the most fundamental mathematical objects in the universe… Hi! I’m Courtney, and I’m new to this column. I’ve been enjoying reading my counterparts’ posts, including Joe Malkevitch’s column Decomposition and David Austin’s column Meet Me Up in Space. I’d like to riff on those columns a bit, both to get to some fun algebra (atoms and ideals!) and to poke at the idea that math is independent of our humanity. ### Introduction, Take 2 Scene: It’s a dark and stormy afternoon in Clinton, NY. I’m sitting alone at my desk with two undergraduate abstract algebra books in front of me, both propped open to their definitions of a prime number… • Book A says that an integer $p$ (of absolute value at least 2) is prime provided it has exactly two positive integer factors. Otherwise, Book A says $p$ is composite. • Book B says that an integer $p$ (of absolute value at least 2) is prime provided whenever it divides a product of integers, it divides one of the factors (in any possible factorization). Otherwise, Book B says $p$ is composite. Note: Book A is Bob Redfield’s Abstract Algebra: A Concrete Introduction (Bob is my predecessor at Hamilton College). 
Book B is Abstract Algebra: Rings, Groups, and Fields by Marlow Anderson (Colorado College; Marlow was my undergraduate algebra professor) and Todd Feil (Denison University).

I reached for the nearest algebra textbook to use as a tie-breaker, which happened to be Dummit and Foote’s Abstract Algebra, only to find that the authors hedge their bets by providing Book A’s definition and then noting that, well, actually, Book B’s definition can be used to define prime instead. Yes, it’s a nice exercise to show these definitions are equivalent. I can’t help but wonder, though: which one captures what it really is to be prime, and which is merely a consequence of that definition?

### Who Decides?

Some folks take the view that math is a true and beautiful thing and we humans merely discover it. This seems to me to be a way of saying that math is independent of our humanity. Who we are, what communities we belong to — these don’t have any effect on Mathematics, Platonic Realm of Pure Ideas. To quantify this position as one might for an intro to proofs class: For each mathematical idea $x$, $x$ has a truth independent of humanity.

And yet, two textbooks fundamental to the undergraduate math curriculum are sitting here on my desk with the audacity to disagree about the very definition of arguably the most pure, most platonic, most absolutely mathematical phenomenon you could hope to encounter: prime numbers! This isn’t a perfect counterexample to the universally quantified statement above (maybe one of these books is wrong?). But in my informal survey of undergraduate algebra textbooks (the librarians at Hamilton really love me and the havoc I wreak in the stacks!), there’s not exactly a consensus on the definition of a prime!

As far as I can tell, the only consensus is that we shouldn’t consider $-1$, $0$, or $1$ to be prime numbers. But, uh, why not?! In the case of $0$, it breaks both definitions.
You can’t divide by zero (footnote: well, you shouldn’t divide by zero if you want numbers to be meaningful, which is, of course, a decision that someone made and that we continue to make when we assert “you can’t divide by zero”), and zero has infinitely many positive integer factors. But when $\pm 1$ divides a product, it divides one (all!) of the factors. And what’s so special about exactly two positive divisors anyway? Why not “at most two” positive divisors?

Well, if you’re reading this, you probably have had a course in algebra, and so you know (or can be easily persuaded, I hope!) that the integers have a natural (what’s natural is a matter of opinion, of course) algebraic analog in a ring of polynomials in a single variable with coefficients from a field $F$. The resemblance is so strong, algebraically, that we call $F[x]$ an integral domain (“a place where things are like integers” is my personal translation). The idea of prime, or “un-break-down-able”, comes back in the realm of polynomials, and Book A and Book B provide definitions as follows:

• Book B says that a nonconstant polynomial $p(x)$ is irreducible provided the only way it factors is into a product in which one of the factors must have degree 0 (and the other necessarily has the same degree as $p(x)$). Otherwise, Book B says $p(x)$ is reducible.
• Book A says that a nonconstant polynomial $p(x)$ is irreducible provided whenever $p(x)$ divides a product of polynomials in $F[x]$, it divides one of the factors. Otherwise, Book A says $p(x)$ is reducible.

Both books agree, however, that a polynomial is reducible if and only if it has a factorization that includes more than one irreducible factor (and thus a polynomial cannot be both reducible and irreducible). Notice here that we have a similar restriction: the zero polynomial is excluded from the reducible/irreducible conversation, just as the integer 0 was excluded from the prime/composite conversation. But what about the other constant polynomials?
They satisfy both definitions aside from the seemingly artificial caveat that they’re not allowed to be irreducible! Well, folks, it turns out that in the integers and in $F[x]$, if you’re hoping to have meaningful theorems (like the Fundamental Theorem of Arithmetic or an analog for polynomials, both of which say that factorization into primes/irreducibles is unique up to a mild condition), you don’t want to allow things with multiplicative inverses to be among your un-break-down-ables!

We call elements with multiplicative inverses units, and in the integers, $(-1)\cdot(-1) = 1$ and $1\cdot 1 = 1$, so both $-1$ and $1$ are units (they’re the only units in the integers). In the integers, we want $6$ to factor uniquely into $2\cdot 3$, or, perhaps (if we’re being generous and allowing negative numbers to be prime, too) into $(-2)\cdot(-3)$. This generosity is pretty mild: $2$ and $-2$ are associates, meaning that they are the same up to multiplication by a unit. One statement of the Fundamental Theorem of Arithmetic is that every integer (of absolute value at least two) is prime or factors uniquely into a product of primes up to the order of the factors and up to associates. That means that the list of prime factors (up to associates) that appear in the factorization of $6$ is an invariant of $6$, and the number of prime factors (allowing for repetition) in any factorization of $6$ is another invariant (and it’s well-defined). Let’s call it the length of $6$.

But if we were to let $1$ or $-1$ be prime? Goodbye, fundamental theorem! We could write $6 = 2\cdot 3$, or $6 = 1\cdot 1\cdot 1 \cdots 1 \cdot 2 \cdot 3$, or $6 = (-1)\cdot (-2) \cdot 3$. We have cursed ourselves with the bounty of infinitely many distinct possible factorizations of $6$ into a product of primes (even accounting for the order of the factors or associates), and we can’t even agree on the length of $6$. Or $2$. Or $1$. The skeptical, critical-thinking reader has already been working on workarounds.
Take the minimum number of factors as the length. Write down the list of prime factors without their powers. Keep the associates in the list (or throw them out, but at that point, just agree that $1$ and $-1$ shouldn’t be prime!). But in the polynomial ring $F[x]$, dear reader, every nonzero constant polynomial is a unit: given $p(x) = a$ for some nonzero $a \in F$, the polynomial $d(x) = a^{-1}$ is also in $F[x]$ since $a^{-1}$ is in the field $F$, and $p(x)d(x) = 1$, the multiplicative identity in $F[x]$. So, if you allow units to be irreducible in $F[x]$, now even an innocent (and formerly irreducible) polynomial like $x$ has infinitely many factorizations into things like $(a)(1/a)(b)(1/b)\cdots x$. So much for those workarounds!

So, since we like our Fundamental Theorems to be neat, tidy, and useful, we agree to exclude units from our definitions of prime and composite (or irreducible and reducible, or indecomposable and decomposable, or…).

### More Consequences (or Precursors)

Lately I’ve been working on problems related to semigroups, by which I mean nonempty sets equipped with an associative binary operation — and I also insist that my semigroups be commutative and have a unit element. In the study of factorization in semigroups, the Fundamental Theorem of Arithmetic leads to the idea of counting the distinct factors an element can have in any factorization into atoms (the semigroup equivalent of irreducible/prime elements; these are elements $p$ that factor only into products involving units and associates of $p$). One of my favorite (multiplicative) semigroups is $\mathbb{Z}[\sqrt{-5}] = \{a + b \sqrt{-5} \, : \, a,b \in \mathbb{Z}\}$, favored because the element $6$ factors distinctly into two different products of irreducibles! In this semigroup, $6 = 2\cdot 3$ and $6 = (1+\sqrt{-5})(1-\sqrt{-5})$. It’s a nice exercise to show that $1\pm \sqrt{-5}$ are not associates of $2$ or $3$, yielding two distinct factorizations into atoms!
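The two factorizations of $6$ lend themselves to a quick machine check. Below is a minimal sketch (my own illustration, not from the column) that represents $a + b\sqrt{-5}$ as the tuple $(a, b)$ and uses the multiplicative norm $N(a + b\sqrt{-5}) = a^2 + 5b^2$ to rule out factors of norm 2 or 3:

```python
# A sketch of arithmetic in Z[sqrt(-5)], with a + b*sqrt(-5) as the tuple (a, b).

def mul(x, y):
    # (a + b*sqrt(-5))(c + d*sqrt(-5)) = (ac - 5bd) + (ad + bc)*sqrt(-5)
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    # N(a + b*sqrt(-5)) = a^2 + 5b^2; the norm is multiplicative
    a, b = x
    return a * a + 5 * b * b

two, three = (2, 0), (3, 0)
p, q = (1, 1), (1, -1)            # 1 + sqrt(-5) and 1 - sqrt(-5)

assert mul(two, three) == (6, 0)  # 6 = 2 * 3
assert mul(p, q) == (6, 0)        # 6 = (1 + sqrt(-5))(1 - sqrt(-5))

# 2, 3, and 1 +/- sqrt(-5) have norms 4, 9, 6, 6.  A nontrivial factor of any
# of them would need norm 2 or 3, but a^2 + 5b^2 = 2 or 3 has no solutions
# (checked on a window that must contain any solution):
assert all(norm((a, b)) not in (2, 3)
           for a in range(-2, 3) for b in range(-2, 3))
```

Since the norm is multiplicative and none of these four elements is a unit (units have norm 1), each is an atom, so these really are two genuinely different factorizations.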
While we aren’t lucky enough to have unique factorization, at least we have that the number of irreducible factors in any factorization of $6$ is always two. That is, excluding units from our list of atoms leads to an invariant of $6$ in the semigroup $\mathbb{Z}[\sqrt{-5}]$. Anyway, without the context of more general situations like this semigroup (and I don’t know, is $\mathbb{Z}[\sqrt{-5}]$ one of those platonically true things, or were Gauss et al. just really imaginative weirdos?), would we feel so strongly that $1$ is not a prime integer?

### Still More Consequences (or Precursors)

Reminding ourselves yet again that the integers form a ring under addition and multiplication, we might be interested in the ideals generated by prime numbers. (What’s an ideal? It’s a nonempty subset of the ring closed under addition, additive inverses, and scalar multiplication from the ring.) We might even call those ideals prime ideals, and then generalize to other rings! The thing is, if we do that, we end up with this definition: (Book A and Book B agree here:) An ideal $P$ is prime provided $xy \in P$ implies $x$ or $y$ belongs to $P$. But in the case of the integers — a principal ideal domain! — that means that a product $ab$ belongs to the principal ideal generated by the prime $p$ precisely when $p$ divides one of the factors.

From the perspective of rings, every (nonzero) ring $R$ has two trivial ideals: the ring $R$ itself (and if $R$ has unity, then that ideal is generated by $1$, or any other unit in $R$) and the zero ideal (generated by $0$). If we want the study of prime ideals to be the study of interesting ideals, then we want to exclude units from our list of potential primes. And once we do, we recover nice results like: an ideal $P$ of a commutative ring $R$ with unity is prime if and only if $R/P$ is an integral domain.
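For the integers, that last equivalence is easy to test by brute force. A sketch (my own code, assuming nothing beyond trial division) checking that $\mathbb{Z}/(n)$ has no zero divisors exactly when $n$ is prime, i.e. exactly when the ideal $(n)$ is prime:

```python
# Check: Z/(n) is an integral domain exactly when n is prime.

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def has_zero_divisors(n):
    # nonzero x, y in Z/nZ whose product is 0 mod n
    return any(x * y % n == 0 for x in range(1, n) for y in range(1, n))

for n in range(2, 60):
    assert is_prime(n) == (not has_zero_divisors(n))
```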
### Conclusions

I still have two books propped open on my desk, and after thinking about semigroups and ideals, I’m no closer to answering the question “But what is a prime, really?” than I was at the start of this column! All I have is some pretty good evidence that we, as mathematicians, might find it useful to exclude units from the prime-or-composite dichotomy (I haven’t consulted with the mathematicians on other planets, though).

To me, that evidence is a reminder that we are constantly updating our mathematical framework in reference to what we learn as we do more math. We look back at these ideas that seemed so solid when we started — something fundamentally indivisible in some way — and realize that we’re making it up as we go along. (And ignoring a lot of what other humans consider math, too, as we insist on our axioms and the law of the excluded middle and the rest of the apparatus of “modern mathematics” while we’re making it up…) And the math that gets done, the math that allows us to update our framework… Well, that depends on what is trendy/fundable/publishable, who is trendy/fundable/publishable, and who is making all of those decisions. Perhaps, on planet Blarglesnort, math looks very different.

### References

Anderson, Marlow; Feil, Todd. A First Course in Abstract Algebra: Rings, Groups, and Fields. Third edition. ISBN: 9781482245523.

Dummit, David S.; Foote, Richard M. Abstract Algebra. Third edition. ISBN: 0471433349.

Redfield, Robert. Abstract Algebra: A Concrete Introduction. First edition. ISBN: 9780201437218.

Geroldinger, Alfred; Halter-Koch, Franz. Non-Unique Factorizations: Algebraic, Combinatorial and Analytic Theory. ISBN: 9781584885764.
# Decomposition

Mathematics too has profited from the idea that sometimes things of interest might have a structure which allowed them to be decomposed into simpler parts…

Joe Malkevitch

York College (CUNY)

### Introduction

One way to get insights into something one is trying to understand better is to break the thing down into its component parts, something simpler. Physicists and chemists found this approach very productive—to understand water or salt it was realized that common table salt was sodium chloride, a compound made of two elements, sodium and chlorine, and that water was created from hydrogen and oxygen. Eventually, many elements (not the Elements of Euclid!) were discovered. The patterns noticed in these building-block elements led to the theoretical construct called the periodic table, which showed that various elements seemed to be related to each other. The table suggested that there might be elements which existed but had not been noticed; the “holes” in the table were filled when these elements were discovered, sometimes because missing entries were sought out. The table also suggested “trans-uranium” elements, which did not seem to exist in the physical world but could be created, and were created, in the laboratory. These new elements were in part created because the periodic table suggested approaches as to how to manufacture them.

The work done toward understanding the structure of the periodic table led to the idea that elements were also made up of even smaller pieces. This progression of insight led to the idea of atoms, and the realization that atoms too might have structure led to the idea of subatomic particles. But some of these “fundamental” particles could be decomposed into smaller “parts.” We now have a zoo of quarks and other “pieces” to help us understand the complexities of the matter we see in the world around us.

Crystals of gallium, an element whose existence was predicted using the periodic table.
Photo by Wikipedia user Foobar, CC BY-SA 3.0.

### Prime patterns

Mathematics too has profited from the idea that sometimes things of interest might have a structure which allowed them to be decomposed into simpler parts. A good example is the number 111111111. It is an interesting pattern already, because all of its digits are 1’s when written in base 10. We could compare 111111111 with the number that it represents when it is interpreted in base 2 (binary)—here it represents 511. But it might be interesting to study any relation between numbers with all 1’s as digits and compare them to numbers in other bases, not only base 2! Mathematics grows when someone, perhaps a computer, identifies a pattern which can be shown to hold in general, rather than for the specific example that inspired the investigation. A number of the form 1111…1 is called a repunit. Can we find interesting patterns involving repunits?

One approach to decomposing a number (here strings of digits are to be interpreted as being written in base 10) is to see if a number can be written as the product of two other numbers different from 1 and itself. For example, 17 can only be written as the product of the two numbers 17 and 1. On the other hand, 16 can be written as something simpler, as $2 \times 8$, but there are “simpler” ways to write 16, as $4 \times 4$, and since 4 can be decomposed as $2 \times 2$ we realize that 16 can be written as $2 \times 2 \times 2 \times 2$. Seeing this pattern, mathematicians are trained to ask questions, such as whether numbers that are products of copies of a single number which cannot be broken down further have any special interest. But we are getting ahead of ourselves here. What are the “atoms” of the multiplicative aspect of integers? These are the numbers called the primes, 2, 3, 5, 7, 11, … Notice that 11 is also a repunit. This takes us back to the idea that there might be many numbers of the form 1111…1111 that are prime!
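Small repunits are easy to test for primality by machine. The sketch below (my own illustration, not part of the column) uses a Miller-Rabin test with a fixed base set that is known to be deterministic for numbers below roughly $3.3 \times 10^{24}$, well past the 23-digit repunit:

```python
# Which repunits R_n = (10^n - 1) // 9 are prime, for small n?

def is_prime(n):
    # deterministic Miller-Rabin for n < 3.3 * 10^24 with these fixed bases
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def repunit(n):
    return (10**n - 1) // 9

prime_repunit_lengths = [n for n in range(2, 30) if is_prime(repunit(n))]
print(prime_repunit_lengths)   # [2, 19, 23]
```

Note that the lengths that show up (2, 19, 23) are themselves prime, which is no accident: if $n$ factors, so does $R_n$.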
Are there infinitely many primes all of whose digits in their base 10 representation are one? Answer!! No one knows. But it has been conjectured that there might be infinitely many repunit primes. Note that numbers like 111, 111111, 111111111, … where the number of digits is a multiple of 3, can’t be prime. Note that 11 read in base 2 is 3, which is also prime. Similarly, the 19-digit repunit 1111111111111111111 is prime, and read in base 2 it represents the number 524287, which is also prime. If a repunit is prime, must the number it represents when treated as a base 2 number be prime?

By looking for “parts” that were building blocks for the integers, mathematics has opened a rich array of questions and ideas, many of which have spawned major new mathematical ideas, both theoretical and applied. Having found the notion of prime number as a building block of the positive number system, there are natural and “unnatural” questions to ask:

1. Are there infinitely many different primes?
2. Is there a “simple” function (formula) which generates all of the primes, or if not all primes, only primes?

While the fact that there are infinitely many primes was already known by the time of Euclid, the irregularity of the primes continues to be a source of investigations to this day. Thus, the early discovered pattern that there seemed to be pairs of primes differing by two (e.g. 11 and 13, 41 and 43, 137 and 139), which led to the “guess” that perhaps there are infinitely many numbers of the form $p$ and $p+2$ that are both primes (known as twin primes), is still unsolved today. While more and more powerful computers made possible finding larger and larger twin prime pairs, no one could find a proof of the fact that there might be infinitely many such pairs. There were attempts to approach this issue via a more general question: are there infinitely many pairs of primes that differ by some fixed amount?
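The twin-prime pattern itself is easy to explore by computer. A minimal sketch (my illustration, using simple trial division):

```python
# List the twin prime pairs (p, p + 2) below 160.

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

twins = [(n, n + 2) for n in range(2, 160) if is_prime(n) and is_prime(n + 2)]
print(twins)
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61),
#  (71, 73), (101, 103), (107, 109), (137, 139), (149, 151)]
```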
Little progress was made on this problem until, in 2013, a mathematician whose name was not widely known in the community showed that there is some fixed gap size, below a large finite bound (70 million in the original paper), that occurs between consecutive primes infinitely often. This work by Yitang Zhang set off a concerted search to improve his methods and alter them in a way to get better bounds for the size of this gap. While Zhang’s breakthrough has been improved greatly, the current state of affairs is still far from proving that the twin-prime conjecture is true.

Photo of Yitang Zhang. Courtesy of Wikipedia.

Mathematical ideas are important as a playground in which to discover more mathematical ideas, thus enriching our understanding of mathematics as an academic subject and sometimes making connections between mathematics and other academic subjects. Today there are strong ties between mathematics and computer science, an academic subject that did not even exist when I was a public school student. Mathematics can be applied in ways that not long ago could not even be imagined, let alone carried out. Who would have thought that the primes would help make possible communication that prevents snooping by others as well as protecting the security of digital transactions?

From ancient times, codes and ciphers were used to make it possible to communicate, often in military situations, so that should a communication fall into enemy hands it would not assist them. (Codes involve the replacement of words or strings of words with some replacement symbol(s), while ciphers refer to replacing each letter in an alphabet with some other letter in order to disguise the meaning of the original text.) Human ingenuity has been remarkable in developing clever systems that encrypt text rapidly, allow the intended receiver to decrypt the message in a reasonable amount of time, and would, at the very least, slow down an enemy who came into possession of the message.
But the development of calculators and digital computers made it harder to protect encrypted messages, because many systems could be attacked by a combination of brute force (trying all possible cases) together with ideas about how the design of the code worked. There was also the development of statistical methods, based on the frequency of letters and/or words used in particular languages, that were employed to break codes. You can find more about the interactions between mathematics, ciphers, and internet security in the April 2006 Feature Column!

Earlier we looked at “decomposing” numbers into their prime parts in a multiplicative setting. Remarkably, a problem about decomposing numbers under addition has stymied mathematics for many years, despite the simplicity of stating the problem. The problem is named for Christian Goldbach (1690-1764).

Letter from Goldbach to Euler asking about what is now known as Goldbach’s Conjecture. Image courtesy of Wikipedia.

#### Goldbach’s Conjecture (1742)

Every even integer $n$ greater than 2 can be written as the sum of two primes.

For example, $10 = 3 + 7$ (and also $5 + 5$), $20 = 3 + 17$, and $30 = 11 + 19$. We allow the primes to be either the same or different in the decomposition. While computers have churned out larger and larger even numbers for which the conjecture is true, the problem is still open after hundreds of years.

What importance should one attach to answering a particular mathematical question? This is not an easy issue to address. Some mathematical questions seem to be “roadblocks” to getting insights into what seem to be important questions in one area of mathematics, and in some cases answering a mathematical question seems to open doors on many mathematical issues. Another measure of importance might be in terms of aesthetic properties of a particular mathematical result.
The aesthetic may be from the viewpoint that something seems surprising or unexpected, or the aesthetic may be that a result seems to have “beauty”—a trait that, whether one is talking about beautiful music, fabrics, poems, etc., seems to differ greatly from one person to the next. It is hard to devise an objective yardstick for beauty. Another scale of importance is the “value” of a mathematical result to areas of knowledge outside of mathematics. Some results in mathematics have proved to be insightful in many academic disciplines such as physics, chemistry, and biology, but other mathematics seems only to be relevant to mathematics itself. What seems remarkable is that over and over again mathematics that seemed only to have value within mathematics itself, or to be only of “theoretical” importance, has found use outside of mathematics. Earlier I mentioned some applications of mathematical ideas to constructing ciphers to hide information. There are also codes designed to correct errors in binary strings and to compress binary strings. Cell phones and streaming video use these kinds of ideas: it would not be possible to have the technologies we now have without the mathematical ideas behind error correction and data compression.

### Partitions

The word decompose has some connotations in common with the word partition. Each of these words suggests breaking up something into pieces. Often common parlance guides the use of the technical vocabulary that we use in mathematics, but in mathematics one often tries to be very careful to be precise about what meaning one wants a word to have. Sometimes in popularizing mathematics this attempt to be precise is the enemy of the public’s understanding of the mathematics involved: a concept may be defined in terms that are mathematically precise but that obscure the big picture of what the ideas being defined are getting at.
Here I try to use “mathematical terminology” to show the bigger picture of the ideas involved. Given a positive integer $n$, we can write $n$ as a sum of positive integers in different ways. For example, $3 = 3$, $3 = 2+1$, and $3 = 1 + 1 + 1$. In counting the number of decompositions possible, I will not take the order of the summands into account—thus, $1+2$ and $2+1$ will be considered the same decomposition. Each of these decompositions is considered to be a partition of 3. In listing the partitions of a particular number $n$, it is common to use a variant of set theory notation where the entries in set brackets below can be repeated. Sometimes the word multiset is used to generalize the idea of set, so that we can repeat the same element in a set. Thus we can write the partitions of three as $\{3\}$, $\{2,1\}$, $\{1,1,1\}$.

A very natural question is to count how many different partitions there are of $n$ for a given positive integer. You can verify that there are 5 partitions of the number 4, and 7 partitions of the number 5. Although for very large values of $n$ the number of partitions of $n$ has been computed, there is no known simple closed formula which computes the number of partitions of $n$ for a given positive integer $n$.

Sometimes the definition of partition insists that the parts making up the partition be listed in a particular order. It is usual to require the numbers in the partition not to increase as they are written out. I will use this notational convention here: The partitions of 4 are: $\{4\}$, $\{3,1\}$, $\{2, 2\}$, $\{2, 1, 1\}$, $\{1,1,1,1\}$. Sometimes in denoting partitions with this convention exponents are used to indicate runs of parts: $4$; $3,1$; $2^2$; $2, 1^2$; $1^4$. The notation for representing partitions varies a lot from one place to another. In some places for the partition of 4 consisting of $2 + 1 + 1$ one sees $\{2,1,1\}$, $2+1+1$, $211$ or $2 1^2$ and other variants as well!
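These small counts are easy to verify by enumerating partitions directly as nonincreasing tuples. A sketch (my own code, not from the column):

```python
# Enumerate the partitions of n as nonincreasing tuples.

def partitions(n, largest=None):
    # yield partitions of n whose parts are all <= largest
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

print(list(partitions(4)))
# [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
print(len(list(partitions(5))))   # 7
```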
It may be worth noting before continuing on that we have looked at partitions of $n$ in terms of the sum of smaller positive integers, but there is another variant that leads in a very different direction. This involves the partition of the set $\{1,2,3,\dots,n\}$ rather than the partition of the number $n$. In this framework the partition of a set $S$ consists of a set of non-empty subsets of the set $S$ whose union is $S$. (Remember that the union of two sets $U$ and $V$ lumps together the elements of $U$ and $V$ and throws away the duplicates.)

Example: Partition the set $\{1,2,3\}$:

Solution: $$\{1,2,3\}, \{1,2\} \cup \{3\}, \{1, 3\} \cup \{2\}, \{2,3\} \cup \{1\}, \{1\} \cup \{2\} \cup \{3\}$$

While there are 3 partitions of the number 3, there are 5 partitions of the set $\{1,2,3\}$. The number of partitions of $\{1,2,3,\dots,n\}$ is counted by the Bell numbers, named for Eric Temple Bell (1883-1960). While the “standard” name for these numbers now honors Bell, other scholars prior to Bell also studied what today are known as the Bell numbers, including the Indian mathematician Srinivasa Ramanujan (1887-1920).

A sketch of Eric Temple Bell. Courtesy of Wikipedia.

Partitions have proved to be a particularly intriguing playground for studying patterns related to numbers and have been used to frame new questions related to other parts of mathematics. When considering a partition of a particular number $n$, one can think about different properties of the entries in one of the partitions:

• How many parts are there?
• How many of the parts are odd?
• How many of the parts are even?
• How many distinct parts are there?

For example, the partition $\{3, 2, 1, 1\}$ has 4 parts, the number of odd parts is 3, the number of even parts is 1, and the number of distinct parts is 3. Closely related to partitions is using diagrams to represent partitions.
There are various versions of these diagrams, some with dots for the entries in the partition and others with cells where the cell counts in the rows are the key to the numbers making up the partition. Thus for the partition $3+2+1$ of 6 one could display this partition in a visual way:

X X X
X X
X

There are various conventions about how to draw such diagrams. One might use X’s as above, but traditionally dots are used, or square cells that abut one another. These are known as Ferrers’s diagrams (for Norman Macleod Ferrers, 1829-1903) or sometimes tableaux, or Young’s tableaux. The name honors Alfred Young (1873-1940). Young was a British mathematician and introduced the notion which bears his name in 1900.

Norman Ferrers. Image courtesy of Wikipedia.

The term Young’s tableau is also used for diagrams such as the one below where numbers chosen in various ways are placed inside the cells of the diagram.

A representation of the partition of 10 with parts 5, 4, 1

A representation of the partition of 11 ($\{5,3,2,1\}$) using rows of dots.

While these diagrams show partitions of 10 and 11 by reading across the rows, one also sees that these diagrams display partitions of 10 and 11, namely $3,2,2,2,1$ and $4,3,2,1,1$ respectively, by reading in the vertical direction rather than the horizontal direction. Thus, each Ferrers’s diagram gives rise to two partitions, which are called conjugate partitions. Some diagrams will read the same in both the horizontal and vertical directions; such partitions are called self-conjugate. Experiment to see if you can convince yourself that the number of self-conjugate partitions of $n$ is the same as the number of partitions of $n$ with odd parts that are all different! The next figure collects Ferrers’s diagrams for the partitions of small integers.

Ferrers’s diagrams of partitions of the integers starting with 1, lower right, and increasing to partitions of 7. Courtesy of Wikipedia.
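Conjugation amounts to reading the Ferrers diagram column by column, and the self-conjugate experiment can be run by machine for small $n$. A sketch (my own code), representing a partition as a nonincreasing tuple:

```python
# The conjugate of a partition, read column by column from its Ferrers diagram.

def conjugate(p):
    # column j of the diagram has one cell for every part larger than j
    return tuple(sum(1 for part in p if part > j) for j in range(p[0]))

assert conjugate((5, 4, 1)) == (3, 2, 2, 2, 1)
assert conjugate((5, 3, 2, 1)) == (4, 3, 2, 1, 1)
assert conjugate((3, 2, 1)) == (3, 2, 1)      # self-conjugate

# Check the exercise for small n: self-conjugate partitions of n are
# equinumerous with partitions of n into distinct odd parts.
def partitions(n, largest=None):
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

for n in range(1, 16):
    self_conj = sum(1 for p in partitions(n) if conjugate(p) == p)
    distinct_odd = sum(1 for p in partitions(n)
                       if len(set(p)) == len(p) and all(x % 2 for x in p))
    assert self_conj == distinct_odd
```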
Often to get insights into mathematical phenomena one needs data. Here, for example, complementing the previous figure, is a table of the number of ways to write the number $n$ as the sum of $k$ parts. For example, 8 can be written as a sum of two parts in 4 ways. These are the partitions of 8 which have two parts: $7+1$, $6+2$, $5+3$, and $4+4$.

| $n$ \ $k$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 |   |   |   |   |   |   |   |
| 2 | 1 | 1 |   |   |   |   |   |   |
| 3 | 1 | 1 | 1 |   |   |   |   |   |
| 4 | 1 | 2 | 1 | 1 |   |   |   |   |
| 5 | 1 | 2 | 2 | 1 | 1 |   |   |   |
| 6 | 1 | 3 | 3 | 2 | 1 | 1 |   |   |
| 7 | 1 | 3 | 4 | 3 | 2 | 1 | 1 |   |
| 8 | 1 | 4 | 5 | 5 | 3 | 2 | 1 | 1 |

Fill in the next row!

Table: Partitions of the number $n$ into $k$ parts

While many people have contributed to the development of the theory of partitions, the prolific Leonhard Euler (1707-1783) was one of the first.

Leonhard Euler. Image courtesy of Wikipedia.

Euler was one of the most profound contributors to mathematics over a wide range of domains, including number theory, to which ideas related to partitions in part belong. Euler showed a surprising result related to what today are called figurate numbers. In particular he discovered a result related to pentagonal numbers. Euler was fond of using power series (a generalized polynomial with infinitely many terms), which in combinatorics, the area of mathematics dealing with counting problems, are related to generating functions. If one draws a square array of dots, one sees $1, 4, 9, 16, \dots$ dots in the pattern that one draws. What happens when one draws triangular, pentagonal, or hexagonal arrays of dots? In the next two figures, we see a sample of the many ways one can visualize the pentagonal numbers: $1, 5, 12, 22, \dots$

Two ways of coding the pentagonal numbers. Courtesy of Wikipedia.

The pentagonal numbers for side lengths of the pentagon from 2 to 6. Courtesy of Wikipedia.
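The entries in the table of partitions into $k$ parts satisfy a two-term recurrence: a partition of $n$ into $k$ parts either contains a part equal to 1 (delete it to get a partition of $n-1$ into $k-1$ parts) or has every part at least 2 (subtract 1 from each part to get a partition of $n-k$ into $k$ parts). A short sketch (my own, assuming nothing beyond this recurrence) reproducing the rows:

```python
# p(n, k): the number of partitions of n into exactly k parts,
# via p(n, k) = p(n-1, k-1) + p(n-k, k).

from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    if k <= 0 or n < k:
        return 0
    if n == k:
        return 1          # the partition 1 + 1 + ... + 1
    # either some part equals 1 (remove it), or every part is at least 2
    # (subtract 1 from each of the k parts)
    return p(n - 1, k - 1) + p(n - k, k)

print([p(8, k) for k in range(1, 9)])   # [1, 4, 5, 5, 3, 2, 1, 1]
print([p(9, k) for k in range(1, 10)])  # the row the table asks you to fill in
```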
Godfrey Harold Hardy (1877-1947), Ramanujan, and in more modern times George Andrews and Richard Stanley have been important contributors to a deeper understanding of the patterns implicit in partitions and ways to prove that the patterns that are observed are universally correct.

Photo of G. H. Hardy. Photo courtesy of Wikipedia.

Srinivasa Ramanujan. Image courtesy of Wikipedia.

Photo of George Andrews. Courtesy of Wikipedia.

Photo of Richard Stanley. Courtesy of Wikipedia.

What is worth noting here is that the methodology of mathematical investigations is both local and global. When one hits upon the idea of what can be learned by “decomposing” something one understands, in the hope of getting a deeper understanding, the idea also has implications in other environments where the broader notion (decomposition) applies. So understanding primes as building blocks encourages one to investigate primes locally in the narrow arena of integers, but also makes one think about other kinds of decompositions that might apply to integers. We are interested not only in decompositions of integers from a multiplicative point of view but also in decompositions of integers from an additive point of view. Here in a narrow sense one sees problems like the Goldbach Conjecture, but in a broader sense it relates to the much larger playground of the partitions of integers.

When one develops a new approach to looking at a situation (e.g. decomposing something into parts), mathematicians invariably try to “export” the ideas discovered in a narrow setting to something more global, including areas of mathematics that are far from where the original results were obtained. So if decomposition is useful in number theory, why not try to understand decompositions in geometry as well? Thus, there is a whole field of decompositions dealing with plane polygons, where the decompositions are usually called dissections.
As an example of a pattern which has been discovered relatively recently, and which illustrates that there are intriguing mathematical ideas still to be discovered and explored, consider this table for the partitions of $n = 4$:

| Partition | Distinct elements | Number of 1's |
|---|---|---|
| 4 | 1 | 0 |
| 3+1 | 2 | 1 |
| 2+2 | 1 | 0 |
| 2+1+1 | 2 | 2 |
| 1+1+1+1 | 1 | 4 |
| Total | 7 | 7 |

See anything interesting, some pattern? The entries of the second and third columns show no obvious pattern on their own; they are not even all odd or all even. However, Richard Stanley noted that the sums of the second and third columns are equal! Both add to 7. And this is true for all values of $n$. How might one prove such a result? One approach would be to find a formula (a function of $n$) for the total number of distinct elements in the partitions of $n$, and also find a formula for the number of 1's in the partitions of $n$. If these two formulas are the same for each value of $n$, then it follows that we have a proof of the general situation that is illustrated for the example $n = 4$ in the table above. However, it seems unlikely that there is a way to write down closed form formulas for either of these two quantities.

However, there is a clever approach to dealing with the observation above that is also related to the discovery of Georg Cantor (1845-1918) that there are sets with different sizes of infinity. Consider the two sets $\mathbb{Z}^+$, the set of all positive integers, and the set $\mathbb{E}$ of all of the even positive integers: $\mathbb{Z}^+ = \{1,2,3,4,5, \dots\}$ and $\mathbb{E}=\{2,4,6,8,10,\dots\}$. Both of these are infinite sets. Now consider the table:

1 paired with 2
2 paired with 4
3 paired with 6
4 paired with 8
. . .

Note that each of the entries in $\mathbb{Z}^+$ will have an entry on the left in this "count" and each even number, the numbers in $\mathbb{E}$, will have an entry on the right in this "count."
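Stanley's observation is easy to test by brute force. The sketch below (my own code, not Stanley's argument) generates all partitions of $n$ and compares the two totals for small $n$:

```python
def partitions(n, max_part=None):
    # Generate every partition of n as a non-increasing tuple of parts.
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def distinct_parts_total(n):
    # Total number of distinct parts, summed over all partitions of n.
    return sum(len(set(p)) for p in partitions(n))

def ones_total(n):
    # Total number of parts equal to 1, summed over all partitions of n.
    return sum(p.count(1) for p in partitions(n))

# For n = 4 both totals are 7, matching the table; the equality persists.
for n in range(1, 11):
    print(n, distinct_parts_total(n), ones_total(n))
```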
This shows that there is a one-to-one and onto way to pair these two sets, even though $\mathbb{E}$ is a proper subset of $\mathbb{Z}^+$ in the sense that every element of $\mathbb{E}$ appears in $\mathbb{Z}^+$ and there are elements of $\mathbb{Z}^+$ that don't appear in $\mathbb{E}$. There is thus a sense in which $\mathbb{E}$ and $\mathbb{Z}^+$ have the same "size." This strange property of being able to pair the elements of a set with those of a proper subset of itself can only happen for an infinite collection of things. Cantor showed that in this sense of size, often referred to as the cardinality of a set, some pairs of sets which seemed very different in size had the same cardinality (size). Thus, Cantor showed that the set of positive integers has the same cardinality as the set of positive rational numbers (numbers of the form $a/b$ where $a$ and $b$ are positive integers with no common factor). Remarkably, he was also able to show that the set of positive integers has a different cardinality from the set of real numbers. To this day there are unresolved questions dating back to Cantor's attempt to understand the different sizes that infinite sets can have.

What many researchers are doing, for old and new results about partitions, is to take two collections that are defined differently but turn out to have the same counts, and to show that the counts are equal by constructing a bijection between the two collections. When such a one-to-one and onto correspondence (function) can be exhibited for every value of a positive integer $n$, the two collections of things must have the same size. Such bijective proofs often show the connection between seemingly unrelated things more clearly than showing that the two different concepts can be counted with the same formula, and they often help generate new concepts and conjectures.
Try investigating ways that decompositions might give one new insights into ideas you find intriguing.

### References

Those who can access JSTOR can find some of the papers mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic information and reviews of some of these materials. Some of the items above can be found via the ACM Digital Library, which also provides bibliographic services.

Andrews, G.E. The Theory of Partitions. Cambridge University Press, 1998.

Andrews, G.E., and K. Eriksson. Integer Partitions. Cambridge University Press, 2004.

Atallah, M.J., and M. Blanton, eds. Algorithms and Theory of Computation Handbook, Volume 2: Special Topics and Techniques. CRC Press, 2009.

Bóna, M., ed. Handbook of Enumerative Combinatorics. CRC Press, 2015.

Fulton, W. Young Tableaux: With Applications to Representation Theory and Geometry. Cambridge University Press, 1997.

Graham, R.L., et al., eds. Handbook of Combinatorics. Elsevier, 1995.

Gupta, H. Partitions - a survey. Journal of Research of the National Bureau of Standards B 74 (1970): 1-29.

Lovász, L., J. Pelikán, and K. Vesztergombi. Discrete Mathematics: Elementary and Beyond. Springer, 2003.

Martin, G.E. Counting: The Art of Enumerative Combinatorics. Springer, 2001.

Matousek, J. Lectures on Discrete Geometry. Springer, 2013.

Menezes, A.J., P.C. van Oorschot, and S.A. Vanstone. Handbook of Applied Cryptography. CRC Press, 2018.

Pak, I. Partition bijections, a survey. The Ramanujan Journal 12.1 (2006): 5-75.

Rosen, K.H., ed. Handbook of Discrete and Combinatorial Mathematics. CRC Press, 2017.

Rosen, K.H., and K. Krithivasan. Discrete Mathematics and Its Applications: With Combinatorics and Graph Theory. Tata McGraw-Hill Education, 2012.
Sjöstrand, J. Enumerative Combinatorics Related to Partition Shapes. Doctoral dissertation, KTH, 2007.

Stanley, R.P. Ordered Structures and Partitions. Memoirs of the American Mathematical Society 119, American Mathematical Society, 1972.

Stanley, R.P. What is enumerative combinatorics? In Enumerative Combinatorics, pp. 1-63. Springer, 1986.

Stanley, R.P. Enumerative Combinatorics, Volume 1, 2nd ed. Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2011.

Stanley, R.P. Enumerative Combinatorics, Volume 2. Cambridge Studies in Advanced Mathematics 62, Cambridge University Press, 1999. (With an appendix by S. Fomin.)

Stanley, R.P. Catalan Numbers. Cambridge University Press, 2015.

Tóth, C.D., J. O'Rourke, and J.E. Goodman, eds. Handbook of Discrete and Computational Geometry. CRC Press, 2017.
## How to Show Images on the RAV4 Drivers Club

Posts on how to use the Forum Features, Posting Files, Setting Time Zones, Display Styles etc. Got a tip for fellow members? Post it here. Want a Feature and cannot find it? Ask here!

### How to Show Images on the RAV4 Drivers Club

On the RAV4 Drivers Club we have a finite storage area. At the moment we have loads left and space is not an issue, but as the club grows this space will obviously start to be consumed, and it would be best served being put primarily to technical documents and how-to downloads in PDF and similar document formats. If we take this approach from the start, it will mean we can avoid having to cull files later on if storage becomes an issue.

To this end, we are going to request that, rather than uploading photos onto the site, an Image Hosting Company is used instead to upload images to, with those images then linked to within the topic. This will both keep storage space freed up for other purposes on the Forum, and allow better image management for users anyway.

As far as Image Hosting Services go, there are many to choose from. Popular sites include Photobucket (http://photobucket.com/) and ImageShack (http://imageshack.us) to name a few. These are free to join and all give a significant amount of storage space for free.

With regards to image size, the resolution of digital cameras far exceeds that of computer screens, and if you were to show an image at its native resolution on a screen, it would require a very, very large screen!
For example, the image I am using in this thread was taken by a 5MP Pentax DSLR camera I bought in 2006 and the pixel width is over 3000. A newer camera would have much higher resolution, taking an even bigger picture, with even camera phones now at 10MP or better (the top Nokia is now 38MP usable!)

In contrast, the resolution of a Full HD TV screen is 1920x1080 pixels (and this is the very maximum a typical domestic home computer screen would likely have). Taking into account space for scroll bars, text areas for the mini-profile etc., and the likelihood that the browser will not be maximised, it is unlikely that an image would want to be displayed at anything near its native size on a computer monitor.

Example usage of showing images on this forum:

This is a Thumbnail of that picture that is hosted on Photobucket. (Note the thumbnail is generated by the Image Hosting Service (Photobucket in this case), and you would just copy and paste the code provided by them.)

You see the entire picture, just miniaturised, and to see the image as hosted, you then click on it and a new window will open on that hosting site.

The following image is using the default Photobucket setting of 1024x768 resolution for link sharing.

If you look carefully you may well find the image is truncated at the right of the screen, chopping off some of the picture. The degree of truncation depends on the resolution of the device being used and the size of the web browser window (if you are seeing the full picture and not understanding what I am referring to, make your browser window smaller to demonstrate).

The result of this is that you cannot be sure that the readers of your posts will see the full picture (literally!) as you don't know what device they will be using to view the forum.

*It should be noted that other forum styles may have different results, i.e.
Subsilver will not truncate the image, but the text on the whole page, including all menus, will shrink proportionately, which at larger image resolutions could make the screen text unreadable (though the picture will look fine!)

Consequently, the Forum recommendation for picture size is to display them no larger than 700 pixels wide, as that is the size that seems to deliver the best compromise between a nicely viewable image and compatibility across commonly used browsing devices, such as Desktops with large screens, Laptops, Netbooks and Tablets.

To make this simple to do, I have introduced a new "BBCode" called 'ResiZe', found in the ribbon above the text typing window:

resize.JPG

What this will do is to review the image width and, if it is larger than 700px, shrink the displayed image size to that 700px size. The key thing is that it will not simply truncate the image; it will resize its presentation, so the whole picture is maintained, just in a smaller view.

So again, the Thumbnail:

And the Picture, resized by the forum to a max of 700px.

As you can see, the whole of the photo is now seen.

To use the ResiZe code, it is very simple - just copy the address from the Image Hosting Service you are using - so in the case above, it would be the address in the code window:

Code: Select all
http://i191.photobucket.com/albums/z50/Hoovie_bucket/Test/birdinsnow_zps9e5fb67e.jpg

Note you do not want [URL] or [IMG] type tags to use ResiZe, just the address starting "http" and ending .jpg (or .gif, etc.)

Then highlight the address and click 'ResiZe' and that is it!
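Under the hood, the rule ResiZe applies amounts to capping the display width and scaling the height in proportion, so nothing is chopped off. A rough sketch in Python of that rule (my own illustration, not the forum's actual code):

```python
def display_size(width, height, max_width=700):
    # Cap the displayed width at max_width, scaling the height
    # proportionally so the whole picture is kept (no truncation).
    if width <= max_width:
        return width, height
    scale = max_width / width
    return max_width, round(height * scale)

# A 3000x2000 photo straight off a DSLR gets shrunk to fit;
# a 640x480 image is already small enough and is left alone.
print(display_size(3000, 2000))
print(display_size(640, 480))
```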
You will see in the text window the data as shown in the code window, and when you submit, the picture will appear, correctly sized for the forum:

Code: Select all
[resize]http://i191.photobucket.com/albums/z50/Hoovie_bucket/Test/birdinsnow_zps9e5fb67e.jpg[/resize]

If you prefer to just use the IMG Tags, then you can do, but please note that other users' experience may be compromised, and under no circumstances should any images be posted that display at a greater width than 1024px.

You can also just put the address of the image in the text, e.g. http://i191.photobucket.com/albums/z50/ ... 5fb67e.jpg and just the address is seen; to see it, the reader would just click on the URL address.

I hope this helps with posting of pictures on threads, and if anyone needs any assistance with getting started with Photobucket or the like, please shout.

NOTES: Please check the T&Cs of the Image Hosting Service you decide to use. For example, Flickr has some fairly restrictive (IMO) guidelines, and one key one which could be plain annoying is:

Do link back to Flickr when you post your Flickr content elsewhere. Flickr makes it possible to post content hosted on Flickr to other web sites. However, pages on other web sites that display content hosted on flickr.com must provide a link from each photo or video back to its page on Flickr.

So if you want to show an image hosted by Flickr in a thread, you must include with it text with its Flickr origin (not aware of similar stipulations elsewhere on other services?)

I have set up a new account on PB under the name of 'RAV4DriversClub', which I will use to post any pictures I am sent if anyone is stuck using an Image Host.

You do not have the required permissions to view the files attached to this post.

Welcome to the best RAV4 Drivers Club in Europe ... come and join in the fun!

Sell on Line?
Get a Website Designed, Built and Hosted, complete with Shop facilities, from just £99. Visit SBS Borders for custom Web Design and Hosting services, and more besides.

Hoovie
Posts: 12322
Joined: Mon Dec 09, 2013 3:56 pm
Location: Scottish Borders
Real Name: David
Primary Vehicle: Vauxhall Corsa!
Year: 2017
Trim Level: SE
Fuel Type: Petrol
Other Vehicle: VW Camper Vans
Country: UK

### Re: How to Show Images on the RAV4 Drivers Club

So if I upload a full size image in now, David, and click on resize, it will reduce it to the required size??

Paulus17
Club Master
Posts: 1038
Joined: Wed Dec 11, 2013 11:36 am
Country: England

### Re: How to Show Images on the RAV4 Drivers Club

Also, when you load a piccy from Photobucket using the GET LINKS, remove all the url part from the link and just leave the img part; this will then just show the image, not all your PB account. HTH.

Paulus17
Club Master
Posts: 1038
Joined: Wed Dec 11, 2013 11:36 am
Country: England

### Re: How to Show Images on the RAV4 Drivers Club

Paulus17 wrote: So if I upload a full size image in now, David, and click on resize, it will reduce it to the required size??

No, the ResiZe will act on HTML Code when showing pictures on external sites. The idea is not to actually upload pictures onto the forum, in fact. If an image is uploaded, then the size will not be changed, but its display on screen will automatically be set to a maximum width, or a thumbnail created instead.

Paulus17 wrote: Also, when you load a piccy from Photobucket using the GET LINKS, remove all the url part from the link and just leave the img part; this will then just show the image, not all your PB account. HTH.

On Photobucket, what I do if I just want the address with no code around it (so I can apply the ResiZe code) is go to the image I am after and choose the 'Direct' option from the "Share Links" menu....
no removal of anything is needed that way. I don't really use any other Image Hosting sites, so I can't comment on exactly what to do there, other than to say there are bound to be similar options.

Hoovie
Posts: 12322
Joined: Mon Dec 09, 2013 3:56 pm
Location: Scottish Borders
Real Name: David
Primary Vehicle: Vauxhall Corsa!
Year: 2017
Trim Level: SE
Fuel Type: Petrol
Other Vehicle: VW Camper Vans
Country: UK

### Re: How to Show Images on the RAV4 Drivers Club

Anyone else having problems when they go onto Photobucket? Every time I go on lately it takes an age for the piccys to load, but today I just gave up as I just couldn't get an image to load onto this forum. So I opened an account with ImageShack and the same image loaded straight away, so the issue must be with PB??

Paulus17
Club Master
Posts: 1038
Joined: Wed Dec 11, 2013 11:36 am
Country: England
# Binomial distribution

#### Punch
##### New member

A bag contains 4 red, 5 blue and 6 green balls. The balls are indistinguishable except for their colour. A trial consists of drawing a ball at random from the bag, noting its colour and replacing it in the bag. A game is played by performing 10 trials in all. At the start of the tournament, each player plays the above game once. Players who earn more than k dollars proceed to the next round. Find the least value of k such that, in a random sample of 10 players, the probability that all 10 players proceed to the next round is less than 0.1.

Let X be the number of blue balls drawn, so X~B(10, $\frac{1}{3}$).

$[P(X>n)]^{10} < 0.1$, where $n=\frac{k}{0.50}$

$1-P(X \le n) < 0.1^{1/10} \approx 0.794$

$P(X \le n) > 0.206$

#### CaptainBlack
##### Well-known member

Incomplete question. Please include all the relevant information to the question in the thread with the question.

CB

#### Punch
##### New member

Sorry! The missing part is: For each blue ball obtained, the player earns $0.50.

#### CaptainBlack
##### Well-known member

> Sorry!
> The missing part is: For each blue ball obtained, the player earns $0.50.

OK, so make a table of b(i, 10, 1/3):

Code:
 i    b(i,10,1/3)
----------------
 0    0.0173415
 1    0.0867076
 2    0.195092
 3    0.260123
 4    0.227608
 5    0.136565
 6    0.0569019
 7    0.0162577
 8    0.00304832
 9    0.000338702
10    1.69351e-005

Now you need another column with the cumulative sum ...

(n = 2 is the smallest number of wins such that P(X <= n) > 0.206)

CB
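The table and the cutoff n = 2 are easy to check numerically; a quick sketch (my own code, just verifying the working above):

```python
from math import comb

def binom_pmf(i, n=10, p=1/3):
    # P(X = i) for X ~ B(n, p)
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Reproduce the table, together with the cumulative-sum column.
cum = 0.0
for i in range(11):
    cum += binom_pmf(i)
    print(f"{i:2d}  {binom_pmf(i):.7g}  {cum:.7g}")

# Smallest n with P(X <= n) > 0.206, i.e. with P(X > n) < 0.1**(1/10) ~ 0.794:
n = next(i for i in range(11)
         if sum(binom_pmf(j) for j in range(i + 1)) > 0.206)
print(n)
```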
About a possible generalization of the Green-Tao theorem

Hello,

Let's say that an integer $n$ is $k$-primal if $k$ is its smallest primality radius, i.e. the smallest non-negative integer $r$ such that both $n-r$ and $n+r$ are prime. I think that for every positive integer $m$ and every non-negative integer $k$, there exists an arithmetic progression made of $m$ $k$-primal integers. Has such a generalization of the Green-Tao theorem been considered so far? If not, is there a heuristic that would make it quite likely?

- This would imply the twin prime conjecture (take $k=1$ and $m$ arbitrarily large). Even if it were only true for $k\ge 1000$, it would imply $\liminf_{n\to\infty}(p_n-p_{n-1})<\infty$, which is a known hard problem. – Anthony Quas Dec 18 '12 at 19:21

János Pintz considered such questions recently; see his preprints here and here. In particular, under a weak form of the Elliott-Halberstam conjecture there is an integer $d>0$ such that there are arbitrarily long arithmetic progressions of primes $p$ such that $p+d$ is the next prime. Assuming the full conjecture one can take $d\leq 16$, while under a natural strengthening of it one can take any even number $d>0$.

-

I am sure many people already know this, but I just wanted to mention that the Green-Tao theorem is a special case of a more general conjecture of Erdős, who asserted that if $(a_n)_{n=1}^\infty \subset \mathbb{N}$ is such that $\sum_n \frac {1}{a_n} = \infty$, then the sequence $(a_n)$ contains arbitrarily long arithmetic progressions.

-
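The primality radius in the question is straightforward to compute for small $n$; a sketch (my own helper names, with a naive trial-division primality test):

```python
def is_prime(m):
    # Naive trial division; fine for small m.
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def primality_radius(n):
    # Smallest r >= 0 with n - r and n + r both prime (for n >= 2).
    for r in range(n - 1):
        if is_prime(n - r) and is_prime(n + r):
            return r
    return None

# A prime is 0-primal; 12 is 1-primal (11 and 13 are twin primes, matching
# the comment about the twin prime conjecture); 9 is 2-primal (7 and 11).
print([(n, primality_radius(n)) for n in range(2, 13)])
```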
# NPV Discount Rate Table

## PRESENT VALUE TABLE

## Leases Discount rates - KPMG

The discount rate affects the amount of the lessee's lease liabilities - and a host of key financial ratios. The new standard brings forward definitions of discount rates from the current leases standard. But applying these old definitions in the new world of on-balance sheet lease accounting will be tough, especially for lessees. They now need to determine discount rates for most ...

## NPV calculation - Illinois Institute of Technology

NPV Calculation - basic concept. PV (Present Value): PV is the current worth of a future sum of money or stream of cash flows given a specified rate of return. Future cash flows are discounted at the discount rate, and the higher the discount rate, the lower the present value of the future cash flows.

## Difference Between NPV and IRR (with Comparison Chart)

The aggregate of all present values of the cash flows of an asset, whether positive or negative, is known as Net Present Value. Internal Rate of Return is the discount rate at which NPV = 0. The calculation of NPV is made in absolute terms, as compared to IRR, which is computed in percentage terms.

## NPV Calculator - calculate Net Present Value

If you wonder how to calculate the Net Present Value (NPV) by yourself or using an Excel spreadsheet, all you need is the formula

NPV = -C_0 + sum over t of C_t / (1 + r)^t,

where r is the discount rate and t is the number of cash flow periods, C_0 is the initial investment while C_t is the return during period t.
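The NPV formula in the excerpt above is easy to turn into code. A minimal sketch (my own function name), applied to the $28,000 project quoted in the Business Smarts excerpt below:

```python
def npv(rate, cashflows):
    # cashflows[0] is the time-0 flow (the initial investment, entered as
    # a negative number); flow t is discounted by (1 + rate)**t.
    return sum(cf / (1 + rate)**t for t, cf in enumerate(cashflows))

# Cost $28,000, then $8,000, $12,000 and $17,000 at a 4% discount rate:
value = npv(0.04, [-28000, 8000, 12000, 17000])
print(round(value, 2))
```

Note that, as one of the excerpts below points out for Excel's built-in NPV function, the initial cost must be handled explicitly; here it is simply the time-0 entry of the cash-flow list.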
## Business Smarts - Sample Module for Corporate Financial

2) A project with a 3 year life and a cost of $28,000 generates revenues of $8,000 in year 1, $12,000 in year 2, and $17,000 in year 3. If the discount rate is 4%, what is the NPV of the project?

## Discount Factor Calculator - miniwebtool.com

Discount Rate: %. Number of Compounding Periods: ... About the Discount Factor Calculator: the Discount Factor Calculator is used to calculate the discount factor, which is the factor by which a future cash flow must be multiplied in order to obtain the present value. Discount Factor Calculation Formula: the discount factor is calculated in the following way, where P(T) is the discount factor, r the ...

## Valuing Pharmaceutical Assets: When to Use NPV vs rNPV

The NPV approach requires the use of different discount rates in an attempt to approximate the evolving probability of technical and regulatory success. Each new NPV calculation and discount rate can only provide insight about the net present value and risk at a single point in time. For example, the NPV calculation with a 33.4% discount rate ...

## Calculate the Net Present Value - NPV | PrepLounge.com

By increasing the discount rate, the NPV of future earnings will shrink. Discount rates for quite secure cash-streams vary between 1% and 3%, but for most companies you use a discount rate between 4% - 10%, and for a speculative start-up investment the applied interest rate could reach up to 40%.

## Mid Period Discounting with the NPV Function

... taking the present value of each cash flow, then adding them up.
If you use the NPV including the initial cash flow, you will understate the true NPV. Getting the IRR for the half-year assumptions is a little trickier. The easiest way is to use the NPV formula above with the half-year adjustment, and make the discount rate (.1 in the example) a variable ...

## NPV (net present value) - Valuation - Moneyterms

A net present value (NPV) includes all cash flows, including initial cash flows such as the cost of purchasing an asset, whereas a present value does not. The simple present value is useful where the negative cash flow is an initial one-off, as when buying a security (see DCF valuation for more detail).

## How to Evaluate Two Projects by Evaluating the Net Present

Does the Net Present Value of Future Cash Flows Increase or Decrease as the Discount Rate Increases? The net present value method has become one of the most popular tools for evaluating capital projects because it reduces each project to a single figure: the total estimated value of the project, expressed in today's dollars.

## NPV Profile | Excel with Excel Master

An NPV Profile or Net Present Value Profile is a graph: the horizontal axis shows various values of r, or the cost of capital, and the vertical axis shows the Net Present Values (NPV) at those values of r. The point at which the line or curve crosses the horizontal axis is the estimate of the Internal Rate of Return, or IRR. To prepare an NPV Profile we need to have set up ...
## A Refresher on Net Present Value

"Net present value is the present value of the cash flows at the required rate of return of your project compared to your initial investment," says Knight. In practical terms, it's a method ...

## Calculate NPV with a Series of Future Cash Flows - dummies

Compute the net present value of a series of annual net cash flows. To determine the present value of these cash flows, use time value of money computations with the established interest rate to convert each year's net cash flow from its future value back to its present value. Then add these present values together.

## Excel formula: NPV formula for net present value | Exceljet

How this formula works: Net Present Value (NPV) is the present value of expected future cash flows minus the initial cost of investment. The NPV function in Excel only calculates the present value of uneven cash flows, so the initial cost must be handled explicitly.

## Net Present Value Calculator

Calculator Use: calculate the net present value (NPV) of a series of future cash flows. More specifically, you can calculate the present value of uneven cash flows (or even cash flows). See the Present Value of Cash Flows Calculator for related formulas and calculations. Interest Rate (discount rate per period): this is your expected rate of return on the cash flows for the length of one period.
## Let's Talk About Net Present Value and - Bloomberg.com

... "hyperbolic discounting," the idea being that the discount rate in your personal present-value calculation might start out low but rise sharply after a year or two before settling down again in ...

## Appendix: Present Value Tables - GitHub Pages

## NPV Profile | Definition | Example

The NPV profile of a project or investment is a graph of the project's net present value corresponding to different values of the discount rate. The NPV values are plotted on the Y-axis and the WACC is plotted on the X-axis. The NPV profile shows how NPV changes in response to a changing cost of capital.
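The crossing point the NPV-profile excerpts describe can be located numerically: tabulate the NPV at increasing discount rates and bisect where the sign changes. A sketch (my own code), reusing the cash flows from the earlier exercise (cost $28,000; inflows $8,000, $12,000, $17,000):

```python
def npv(rate, cashflows):
    return sum(cf / (1 + rate)**t for t, cf in enumerate(cashflows))

def irr_bisect(cashflows, lo=0.0, hi=1.0, tol=1e-8):
    # Assumes npv is positive at lo and negative at hi (a conventional
    # project), so the NPV profile crosses zero once in between.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-28000, 8000, 12000, 17000]
rate = irr_bisect(flows)
print(round(rate, 4))  # the IRR: between 13% and 14% for this project
```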
# Find the Ratio in Which the Point (2, y) Divides the Line Segment Joining the Points A(-2, 2) and B(3, 7). Also, Find the Value of y. - CBSE Class 10 - Mathematics

Concept: Concepts of Coordinate Geometry

#### Question

Find the ratio in which the point (2, y) divides the line segment joining the points A(-2, 2) and B(3, 7). Also, find the value of y.

#### Solution

The co-ordinates of the point which divides the segment joining two points $(x_1, y_1)$ and $(x_2, y_2)$ internally in the ratio $m : n$ are given by the formula

$(x, y) = \left(\frac{mx_2 + nx_1}{m + n}, \frac{my_2 + ny_1}{m + n}\right)$

Here we are given that the point P(2, y) divides the line joining the points A(-2, 2) and B(3, 7) in some ratio. Let us substitute these values in the formula:

$(2, y) = \left(\frac{m(3) + n(-2)}{m + n}, \frac{m(7) + n(2)}{m + n}\right)$

Equating the first components we have

$2 = \frac{3m - 2n}{m + n}$

$2m + 2n = 3m - 2n$

$m = 4n$

$\frac{m}{n} = \frac{4}{1}$

So the given point divides the line segment in the ratio 4 : 1. Let us now use this ratio to find the value of y:

$(2, y) = \left(\frac{4(3) + 1(-2)}{4 + 1}, \frac{4(7) + 1(2)}{4 + 1}\right)$

Equating the second components we have

$y = \frac{4(7) + 1(2)}{4 + 1} = \frac{30}{5} = 6$

Thus the value of y is 6.
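The arithmetic can be checked in a few lines of code (a quick sketch using exact fractions; the function name is my own):

```python
from fractions import Fraction

def section_point(A, B, m, n):
    # Point dividing segment AB internally in the ratio m : n,
    # via the section formula ((m*x2 + n*x1)/(m+n), (m*y2 + n*y1)/(m+n)).
    (x1, y1), (x2, y2) = A, B
    return (Fraction(m * x2 + n * x1, m + n),
            Fraction(m * y2 + n * y1, m + n))

# The ratio 4 : 1 applied to A(-2, 2) and B(3, 7) gives x = 2, y = 6.
P = section_point((-2, 2), (3, 7), 4, 1)
print(P)
```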
# On a certain day it took Bill three times as long to drive from home