content
stringlengths
86
994k
meta
stringlengths
288
619
Growth of Baking Yeast Mick mick at blankley.prestel.co.uk Wed Sep 10 02:57:36 EST 1997 Kenneth Sole wrote: > > > I have noticed that if the first starter takes, say, 28 hours to > > > double, all the following stages will take (pretty close to) that > > > same amount of time. > > > > > > This conflicts with my intuition... (snip) Marc R. Roussel replied: > Note that this is just Malthusian growth. Since gas evolution > (which determines the macroscopic doubling time) is proportional to the > population, your dough doubling time should be approximately constant > provided the yeast are always well fed. Well, may I point out that since the rate of rise of the dough is proportional to the number of cells per unit volume of dough, then the rate will only be constant if the number of cells per unit volume stays the same. This condition is met only if the dilution rate of the dough (at the refreshment stages) is equal to the rate of reproduction of the cells (the cellular doubling time). But we know that this is not the case. The purpose of the starter dough is to increase the number of cells per unit volume of dough, from the initially small, natural population, up to a working population. So the rate of rise of the starter dough, at the very least, should be less than the subsequent stages. Note that although the cellular doubling time is constant, the number of cells per unit volume of dough is increasing, at least comparing the first with the subsequent stages. What happens during the subsequent stages depends on the rate of dilution, but since a conservative estimate of yeast doubling time is 2h, and the refreshment stages take place at 28h intervals, I suggest the cellular growth rate should easily outstrip the dilution rate, even ignoring the natural yeast that would also be present in the fresh So the rate of rise of the dough should accelerate. IF it does not, an explanation is required, as Kenneth points out. Michael Pocklington More information about the Yeast mailing list
{"url":"http://www.bio.net/bionet/mm/yeast/1997-September/007331.html","timestamp":"2014-04-16T17:19:31Z","content_type":null,"content_length":"4138","record_id":"<urn:uuid:66adbf62-bb3e-4be6-b421-97f219188bee>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Stable normal bundle of a manifold up vote 11 down vote favorite in bordism-theory and many bordering areas one has the following construction: Given a manifold M (say closed for the purposes of this discussion and k-dimensional), we embed it into some $\mathbb R^ n$ for n large and look at the normal bundle of that embedding. This pulls back to give an n-k-dim vectorbundle over M, and we consider the homotopy class $M \rightarrow BGL(n-k) \rightarrow BGL$, where the first map is the classifying map for that bundle and the second one is induced by the obvious inclusion. One now finds that the homotopy class of this composition does not depend on the particular embedding chosen. Since $BGL$ classifies principal-$GL$-bundles we have thus constructed an isomorphism class of such bundles and from what I gather this is what is called the stable normal bundle. Now my question is: Is there a sufficiently nice construction of an actual $GL$-bundle representing this isomorphism class? There certainly seems to be none for the individual normal bundles (for they of course DO depend on the embedding for small n), but for the infinite one there just might be, right? By 'construction' I mean construction out of intrinsic data of the manifold and not one along the lines of 'embed M into $\mathbb R^\infty$ and look at the frames of the arising normal bundle'. If a construction can be found at all then there are probably many, so there won't be a canonical one, which is why I don't really want to specify what 'nice' is supposed to mean. Thank you for any answers at.algebraic-topology gt.geometric-topology manifolds 1 The tangent bundle is classified by a map $t:M\to BO(k)$, which composed with $BO(k)\to BO$, gives a map $\tau:M\to BO$ which has an homotopy inverse $\nu:M\to BO$. This "is" the stable normal bundle, no? – Mariano Suárez-Alvarez♦ Oct 26 '11 at 14:04 If by 'homotopy inverse' you mean the inverse in $[M,BGL]$ under the H-space structure on $BGL$ then yes, that gives the same homotopy class as the construction I sketched. But that still only corresponds to an isomorphism class, not an actual bundle. What I'm looking for is a construction like those of the tangent bundle, that don't rely on embedding the manifold, of which I know at least 3. But you end up with 3 different actual bundles, that are isomorphic, and not just an isomorphism class. – old account Oct 26 '11 at 14:46 1 I think your question is too vague. It would be more of an actual question if you tell us precisely what kind of bundle construction you're looking for. Or if you can't tell us that, perhaps you can tell us some kind of functorial or categorical setting you need this construction for. – Ryan Budney Oct 26 '11 at 16:34 1 Maybe this point is made elsewhere, but the notion of a normal bundle implicitly requires an embedding. What is a normal vector supposed to be orthogonal to? – Sean Tilson Oct 26 '11 at 19:19 2 @Sean Tilson: The point is that the stable normal bundle is intrinsic to the manifold, despite the fact that all the constructions that have been bounced around depend on an embedding. So we might hope to read it off from the manifold without talking about any ambient anything. – Aaron Mazel-Gee Nov 1 '11 at 18:21 show 2 more comments 1 Answer active oldest votes Perhaps I'm confused about what you are looking for, but haven't you already constructed an "actual" $GL$-bundle in your question? What I mean is the following. The usual definition of $GL$ is a direct limit of $GL(m)$'s. So an element of $GL$ is just an element of $GL(m)$ for some $m$. Similarly, if $M$ is compact then a $GL$ bundle over $M$ is just a $GL(m)$ bundle over $M$, for some $m$. As you note in your question, one can construct such bundles by embedding $M$ in $\mathbb{R}^{k+m}$, taking the frame bundle of the normal bundle, and then interpreting this as a $GL$-bundle rather than a $GL(m)$-bundle. Any two such embeddings of $M$ give isomorphic $GL$-bundles. up vote 3 down In response to pudin's comment below, here's a second construction. Embed $M$ into $\mathbb{R}^\infty$. Define a bundle $F$ over $M$ whose fiber at $x$ is frames of the normal bundle of the vote embedding at $x$ which eventually coincide with the standard framing of $\mathbb{R}^\infty$. (This is possible because the image of the embedding will lie in some $\mathbb{R}^n \subset \ mathbb{R}^\infty$ if $M$ is compact.) $F$ is a principal $GL$ bundle, where in this case we take $GL$ to be invertible linear maps $\mathbb{R}^\infty \to \mathbb{R}^\infty$ which differ from the identity only on a finite subspace. i like to think of GL as invertible linear maps $\mathbb R^\infty \rightarrow \mathbb R^\infty$ that differ from the identity only on a finite subspace, but that does not really matter. 2 But what is not true is that a $GL$-bundle IS a $GL(m)$-bundle for some m. True, over a compact space every $GL$-bundle admits a reduction to a $GL(m)$-bundle along $GL(m) \rightarrow GL$ for some m; but as you state you end up with a lot of choices giving isomorphic $GL$-bundles, which is what we started with. The question is for an actual bundle, not just an isomorphism class, just as for the tangentbundle. – old account Oct 26 '11 at 14:40 I disagree that this is not an "actual" $GL$ bundle. If we describe a $GL$ bundle in terms of charts and transition functions, then these transition functions will take values in some $GL (M)\subset GL$. So every $GL$ bundle is also a $GL(m)$ bundle (for some $m$); the data for a $GL$ bundle is precisely the data for a $GL(m)$ bundle. (I'm assuming $M$ is compact, of course.) Yes, the construction depends on some choices, but each of those choices yields a concrete, "actual" $GL$ bundle. – Kevin Walker Oct 26 '11 at 15:21 The tangent bundle is also an isomorphism class, if you really insist... – Mariano Suárez-Alvarez♦ Oct 26 '11 at 18:13 @Kevin Walker: the second construction you gave now is the one i tried to mention in the lines immediatly after my question (except i forgot to pass to frames, which i have edited now). The bundle ypu end up with will depend on the embedding! Granted two choices of embeddings certainly give isomorphic bundles, but what i'm looking for is a construction that does not depend on such choices at all. For the tangent bundle there are such constructions, eg germs of curves mod some relation on derivatives or derivations on the rings of germs of functions to R – old account Nov 1 '11 at 10:24 Ah. I misinterpreted the original version of your question to be asking for a concrete ("actual") construction of the stable normal bundle, as opposed to a finite-dimensional bundle which determined the stable normal bundle up to isomorphism. Now I understand that you really want a construction that does not involve making arbitrary choices. – Kevin Walker Nov 1 '11 at add comment Not the answer you're looking for? Browse other questions tagged at.algebraic-topology gt.geometric-topology manifolds or ask your own question.
{"url":"http://mathoverflow.net/questions/79157/stable-normal-bundle-of-a-manifold","timestamp":"2014-04-19T00:08:00Z","content_type":null,"content_length":"68121","record_id":"<urn:uuid:06c8501b-df9e-4323-ae03-5b9e28680bc8>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
17 search hits A Stopped delta-matter source in heavy ion collisions at 10-GeV/N? (1994) Markus Hofmann Raffaele Mattiello Heinz Sorge Horst Stöcker Walter Greiner We predict the formation of highly dense baryon-rich resonance matter in Au+Au collisions at AGS energies. The final pion yields show observable signs for resonance matter. The Delta1232 resonance is predicted to be the dominant source for pions of small transverse momenta. Rescattering e ects consecutive excitation and deexcitation of Delta's lead to a long apparent life- time (> 10 fm/c) and rather large volumina (several 100 fm3) of the Delta-matter state. Heavier baryon resonances prove to be crucial for reaction dynamics and particle production at AGS. Chemical freezeout in relativistic A+A collisions: is it close to the QGP? (1997) Mark I. Gorenstein Horst Stöcker Granddon D. Yen Shin Nan Yang Walter Greiner Preliminary experimental data for particle number ratios in the collisions of Au+Au at the BNL AGS (11A GeV/c) and Pb+Pb at the CERN SPS (160A GeV/c) are analyzed in a thermodynamically consistent hadron gas model with excluded volume. Large values of temperature, T = 140 185 MeV, and baryonic chemical potential, µb = 590 270 MeV, close to the boundary of the quark-gluon plasma phase are found from fitting the data. This seems to indicate that the energy density at the chemical freezeout is tremendous which would be indeed the case for the point-like hadrons. However, a self-consistent treatment of the van der Waals excluded volume reveals much smaller energy densities which are very far below a lowest limit estimate of the quark-gluon plasma energy density. PACS number(s): 25.75.-q, 24.10.Pa Collective phenomena in the non-equilibrium quark-gluon plasma (2008) Björn Peter Schenke In this work we study the non-equilibrium dynamics of a quark-gluon plasma, as created in heavy-ion collisions. We investigate how big of a role plasma instabilities can play in the isotropization and equilibration of a quark-gluon plasma. In particular, we determine, among other things, how much collisions between the particles can reduce the growth rate of unstable modes. This is done both in a model calculation using the hard-loop approximation, as well as in a real-time lattice simulation combining both classical Yang-Mills-fields as well as inter-particle collisions. The new extended version of the simulation is also used to investigate jet transport in isotropic media, leading to a cutoff-independent result for the transport coefficient $hat{q}$. The precise determination of such transport coefficients is essential, since they can provide important information about the medium created in heavy-ion collisions. In anisotropic media, the effect of instabilities on jet transport is studied, leading to a possible explanation for the experimental observation that high-energy jets traversing the plasma perpendicular to the beam axis experience much stronger broadening in rapidity than in azimuth. The investigation of collective modes in the hard-loop limit is extended to fermionic modes, which are shown to be all stable. Finally, we study the possibility of using high energy photon production as a tool to experimentally determine the anisotropy of the created system. Knowledge of the degree of local momentum-space anisotropy reached in a heavy-ion collision is essential for the study of instabilities and their role for isotropization and thermalization, because their growth rate depends strongly on the anisotropy. Energy dependence of multiplicity fluctuations in heavy ion collisions at the CERN SPS (2008) Benjamin Lungwitz In this work data of the NA49 experiment at CERN SPS on the energy dependence of multiplicity fluctuations in central Pb+Pb collisions at 20A, 30A, 40A, 80A and 158A GeV, as well as the system size dependence at 158A GeV, is analysed for positively, negatively and all charged hadrons. Furthermore the rapidity and transverse momentum dependence of multiplicity fluctuations are studied. The experimental results are compared to predictions of statistical hadron-gas and string-hadronic models. It is expected that multiplicity fluctuations are sensitive to the phase transition to quark-gluon-plasma (QGP) and to the critical point of strongly interacting matter. It is predicted that both the onset of deconfinement, the lowest energy where QGP is created, and the critical point are located in the SPS energy range. Furthermore, the predictions for the multiplicity fluctuations of statistical and string-hadronic models are different, the experimental data might allow to distinguish between them. The used measure of multiplicity fluctuations is the scaled variance omega, defined as the ratio of the variance and the mean of the multiplicity distribution. In the NA49 experiment the tracks of charged particles are detected in four large volume time projection chambers (TPCs). In order to remove possible detector effects a detailed study of event and track selection criteria is performed. Naively one would expect Poisson fluctuations in central heavy ion collisions. A suppression of fluctuations compared to a Poisson distribution is observed for positively and negatively charged hadrons at forward rapidity in Pb+Pb collisions. At midrapidity and for all charged hadrons the fluctuations are larger than the Poisson ones. The fluctuations seem to increase with decreasing system size. It is suggested that this is due to increased relative fluctuations in the number of participants. Furthermore, it was discovered that omega increases for decreasing rapidity and transverse momentum. A hadron-gas model predicts different values of omega for different statistical ensembles. In the grand-canonical ensemble, where all conservation laws are fulfilled only on the average, not on an event-by-event basis, the predicted fluctuations are the largest ones. In the canonical ensemble the charges, namely the electrical charge, the baryon number and the strangeness, are conserved for each event. The scaled variance in this ensemble is smaller than for the grand-canonical ensemble. In the micro-canonical ensemble not only the charges, but also the energy and the momentum are conserved in each event, the predicted $omega$ is the smallest one. The grand-canonical and canonical formulations of the hadron-gas model over-predict fluctuations in the forward acceptance. In contrast to the experimental data no dependence of omega on rapidity and transverse momentum is expected. For the micro-canonical formulation, which predicts small fluctuations in the total phase space, no quantitative calculation is available yet for the limited experimental acceptance. The increase of fluctuations for low rapidities and transverse momenta can be qualitatively understood in a micro-canonical ensemble as an effect of energy and momentum conservation. The string-hadronic model UrQMD significantly over-predicts the mean multiplicities but approximately reproduces the scaled variance of the multiplicity distributions at all measured collision energies, systems and phase-space intervals. String-hadronic models predict for Pb+Pb collisions a monotonous increase of omega with collision energy, similar to the observations for p+p interactions. This is in contrast to the predictions of the hadron-gas model, where omega shows no energy dependence at higher energies. At SPS energies the predictions of the string-hadronic and hadron-gas models are in the same order of magnitude, but at RHIC and LHC energies the difference in omega in the full phase space is much larger. Experimental data should be able to distinguish between them rather easily. Narrower than Poissonian (omega < 1) multiplicity fluctuations measured in the forward kinematic region (1<y(pi)<y_{beam}) can be related to the reduced fluctuations predicted for relativistic gases with imposed conservation laws. This general feature of relativistic gases may be preserved also for some non-equilibrium systems as modeled by the string-hadronic approaches. A quantitative estimate shows that the predicted maximum in fluctuations due to a first order phase transition from hadron-gas to QGP is smaller than the experimental errors of the present experiment and can therefore neither be confirmed nor disproved. No sign of increased fluctuations as expected for a freeze-out near the critical point of strongly interacting matter is observed. Fluctuations and inhomogenities of energy density and isospin in Pb + Pb at the SPS (1998) Marcus Bleicher Lars Gerland Christian Spieles Adrian Dumitru Steffen A. Bass Mohamed Belkacem Mathias Brandstetter Christoph Ernst Ludwig Neise Sven Soff Henning Weber Horst Stöcker Walter Greiner The main goal of heavy ion physics in the last fifteen years has been the search for the quark-gluon-plasma(QGP). Until now, unambigous experimental evidence for the QGP is missing. Hadron production from a hadronizing quark gluon plasma (1997) Christian Spieles Horst Stöcker Walter Greiner Measured hadron yields from relativistic nuclear collisions can be equally well understood in two physically distinct models, namely a static thermal hadronic source versus a time-dependent, non-equilibrium hadronization off a quark gluon plasma droplet. Due to the time-dependent particle evaporation off the hadronic surface in the latter approach the hadron ratios change (by factors of / 5) in time. The overall particle yields then reflect time averages over the actual thermodynamic properties of the system at a certain stage of evolution. J/psi suppression in heavy ion collisions - interplay of hard and soft QCD processes (1998) Christian Spieles Ramona Vogt Lars Gerland Steffen A. Bass Marcus Bleicher Leonid Frankfurt Mark Strikman Horst Stöcker Walter Greiner We study J/psi suppression in AB collisions assuming that the charmonium states evolve from small, color transparent configurations. Their interaction with nucleons and nonequilibrated, secondary hadrons is simulated us- ing the microscopic model UrQMD. The Drell-Yan lepton pair yield and the J/psi /Drell-Yan ratio are calculated as a function of the neutral transverse en- ergy in Pb+Pb collisions at 160 GeV and found to be in reasonable agreement with existing data. Kaon and pion production in centrality selected minimum bias Pb+Pb collisions at 40 and 158A GeV (2009) Peter Dinkelaker Results on charged kaon and negatively charged pion production and spectra for centrality selected Pb+Pb mininimum bias events at 40 and 158A GeV have been presented in this thesis. All analysis are based on data taken by the NA49 experiment at the accelerator Super Proton Synchrotron (SPS) at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. The kaon results are based on an analysis of the mean energy loss <dE/dx> of the charged particles traversing the detector gas of the time projection chambers (TPCs). The pion results are from an analysis of all negatively charged particles h- corrected for contributions from particle decays and secondary interactions. For the dE/dx analysis of charged kaons, main TPC tracks with a total momentum between 4 and 50 GeV have been analyzed in logarithmic momentum log(p) and transverse momentum pt bins. The resulting dE/dx spectra have been fitted by the sum of 5 Gaussians, one for each main particle type (electrons, pions, kaons, protons, deuterons). The amplitude of the Gaussian used for the kaon part of the spectra has been corrected for efficiency and acceptance and the binning has been transformed to rapidity y and transverse momentum pt bins. The multiplicity dN/dy of the single rapidity bins has been derived by summing the measured range of the transverse momentum spectra and an extrapolation to full coverage with a single exponential function fitted to the measured range. The results have been combined with the mid-rapidity measurements from the time-of-flight detectors and a double Gaussian fit to the dN/dy spectra has been used for extrapolation to rapidity outside of the acceptance of the dE/dx analysis. For the h- analysis of negatively charged pions, all negatively charged tracks have been analyzed. The background from secondary reactions, particle decays, and gamma-conversions has been corrected with the VENUS event generator. The results were also corrected for efficiency and acceptance and the pt spectra were analyzed and extrapolated where necessary to derive the mean yield per rapidity bin dN/dy. The mean multiplicity <pi-> has been derived by summing up the measured dN/dy and extrapolating the rapidity spectrum with a double Gaussian fit to 4pi coverage. The results have been discussed in detail and compared to various model calculations. Microscopical models like URQMD and HSD do not describe the full complexity of Pb+Pb collisions. Especially the production of the positively charged kaons, which carry the major part of strange quarks, cannot be consistently reproduced by the model calculations. Centrality selected minimum bias Pb+Pb collisions can be described as a mixture of a high-density region of multiply colliding nucleons (core) and practically independent nucleon-nucleon collisions (corona). This leads to a smooth evolution from peripheral to central collisions. A more detailed approach derives the ensemble volume from a percolation of elementary clusters. In the percolation model all clusters are formed from coalescing strings that are assumed to decay statistically with the volume dependence of canonical strangeness suppression. The percolation model describes the measured data for top SPS and RHIC energies. At 40A GeV, the system size dependence of the relative strangeness production starts to evolve from the saturation seen at higher energies from peripheral events onwards towards a linear dependence at SIS and AGS. This change of the dependence on system size occurs in the energy region of the observed maximum of the K+ to pi ratio for central Pb+Pb collisions. Future measurements with heavy ion beam energies around this maximum at RHIC and FAIR as well as the upgraded NA49 successor experiment NA61 will further improve our understanding of quark matter and its reflection in modern heavy ion physics and theories. Modelling ultra-relativistic heavy ion collisions with the quark Molecular Dynamics qMD (2005) Stefan Scherer This thesis presents a model for the dynamical description of deconfined quark matter created in ultra-relativistic heavy ion collisions, treating quarks and antiquarks as classical point particles subject to a colour-dependent, Cornell-type potential interaction. The model provides a dynamical handle for hadronization via the recombination of quarks and antiquarks in colour neutral clusters. Gluons are not included explicitly in the model,but are described in an effective manner by the means of the potential interaction. The model includes four different quark flavours (up, down, strange and charm) and uses current masses for the quarks. The dynamical evolution of a system of colour charges subject to the Hamiltonian equations of motion of the model yields the formation of colour neutral clusters of quarks and antiquarks, which are subject only to a small remaining interaction, the strong interquark potential notwithstanding. These clusters can be mapped onto hadrons and hadronic resonances. Thus, the model allows a dynamical description of quarks degrees of freedom in heavy ion collisions, including a recombination scheme for hadronization. The thermal properties of the model turn pout to be very satisfying. The model shows a transition from a confining phase to a deconfined phase with rising temperature, going hand in hand with a softest point in the equation of state and a rise of energy density and pressure to the Stefan-Boltzmann limit of a gas of quarks and antiquarks. Moreover, the potential interaction is screened in the deconfined phase. For the dynamical description of ultra-relativistic heavy ion collision, the qMD model is coupled to UrQMD as a generator for its initial conditions. In this way, a fully dynamical description of the expansion and hadronization of the fireball created in such collisions can be achieved. Non-equilibrium aspects of the expansion dynamics and hadronization by recombination of quarks and antiquarks are discussed in detail, and a comparison with experimental data of collisions at the CERN-SPS is presented. The big advantage of the qMD model is the possibility to study cluster formation, including exotic clusters, and fluctuations in a dynamical manner. As an example, event-by-event fluctuations in electric charge are studied. Such fluctuations have been proposed as a clear criterion to distinguish a deconfined system from a hadrons gas. However, experimental data show hadron gas fluctuation measures even at RHIC, where deconfinement is taken for granted. We will see how the dynamics of quark recombination washes out the quark-gluon plasma signal in the fluctuation criterion. Moreover, we will discuss briefly the problem of entropy at recombination. In a second application, the formation of exotic hadronic clusters, larger than usual mesons and baryons, is studied. Such clusters could provide new measures for the thermalization and homogenization of a deconfined gas of colour charges. Moreover, number estimates for exotic clusters from recombination are considerably lower than corresponding predictions from thermal models, providing a clear difference between statistical hadronization and hadronization via quark recombination. A detailed analysis is provided for pentaquark candidates such as the Theta-Plus. It turns out that the distribution of exotic states over strangeness, isospin, and spin could provide a sensitive measure for thermalization and decorrelation in the deconfined quark phase, if it could be measured. Monte Carlo model for multiparticle production at ultrarelativistic energies (1994) N. S. Amelin Horst Stöcker Walter Greiner N. Armesto M. A. Braun C. Pajares The Monte Carlo parton string model for multiparticle production in hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions at high energies is described. An adequate choice of the parameters in the model gives the possibility of recovering the main results of the dual parton model, with the advantage of treating both hadron and nuclear interactions on the same footing, reducing them to interactions between partons. Also the possibility of considering both soft and hard parton interactions is introduced.
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/collection/id/15989/start/0/rows/10/subjectfq/Quark-Gluon-Plasma+/sortfield/title/sortorder/asc","timestamp":"2014-04-20T13:55:48Z","content_type":null,"content_length":"52433","record_id":"<urn:uuid:d8c02bfb-f7be-41c1-81db-c22e8a5ac157>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US6775737 - Method and apparatus for allocating and using range identifiers as input values to content-addressable memories This invention especially relates to performing range lookup operations using associative memory devices, especially in communications and computer systems that employ content-addressable memories; and more particularly, the invention relates to allocating and using range identifiers as input values to content-addressable memories. The communications industry is rapidly changing to adjust to emerging technologies and ever increasing customer demand. This customer demand for new applications and increased performance of existing applications is driving communications network and system providers to employ networks and systems having greater speed and capacity (e.g., greater bandwidth). In trying to achieve these goals, a common approach taken by many communications providers is to use packet switching technology. Increasingly, public and private communications networks are being built and expanded using various packet technologies, such as Internet Protocol (IP). A network device, such as a switch or router, typically receives, processes, and forwards or discards a packet based on one or more criteria, including the type of protocol used by the packet, addresses of the packet (e.g., source, destination, group), and type or quality of service requested. Additionally, one or more security operations are typically performed on each packet. But before these operations can be performed, a packet classification operation must typically be performed on the packet. Packet classification as required for access control lists (ACLs) and forwarding decisions is a demanding part of switch and router design. This packet classification of a received packet is increasingly becoming more difficult due to ever increasing packet rates and number of packet classifications. For example, ACLs require matching packets on a subset of fields of the packet flow label, with the semantics of a sequential search through the ACL rules. IP forwarding requires a longest prefix match. One known approach uses binary and/or ternary content-addressable memories to perform packet classification. Ternary content-addressable memories allow the use of wildcards in performing their matching, and thus are more flexible than binary content-addressable memories. These content-addressable memories are expensive in terms of power consumption and space, and are limited in the size of an input word on which a lookup operation is performed as well as the number of entries (e.g., 72, 144, etc.) which can be matched. Various applications that use packet classification, such as Security Access Control, Quality of Service etc., may use arbitrary ranges of values (such as port numbers or packet length) as one of the classification criteria. For example, a certain operation may be performed on packets have a port range between 80 and 1024. It would be desirable to have a single or limited number of entries in a content-addressable memory than for an entry of each port (e.g., 80, 81, 82, . . . 1024). One previous known attempt produces a resultant bitmap identifying to which of multiple ranges a certain value resides. Such a device is preprogrammed with a set of ranges and generates an bitmap output with the number of bits being as large as the number of range intervals, which may consume a large number of bits in the content-addressable memories. Needed are new methods and apparatus for performing range operations in relation to content-addressable memories. Systems and methods are disclosed for allocating and using range identifiers as input values to content-addressable memories. In one embodiment, each of multiple non-overlapping intervals are identified with one of multiple unique identifiers. An indication of a mapping between the multiple non-overlapping intervals and the multiple unique identifiers is maintained. A particular unique identifier is determined from said multiple unique identifiers based on a value and said multiple non-overlapping intervals. A lookup operation is performed on an associative memory using the particular unique identifier to generate a result. The appended claims set forth the features of the invention with particularity. The invention, together with its advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings of which: FIGS. 1 and 2A are block diagrams of embodiments for allocating and using range identifiers as input values to content-addressable memories; FIG. 2B is a block diagram of an embodiment performing packet processing according to the invention; FIG. 3 is a block diagram of a data structure used in one embodiment to maintain multiple interval corresponding unique identifiers; FIGS. 4A-C illustrate one mapping of ranges and/or intervals to identifiers, and in particular, into trie derived identifiers; and FIGS. 5A-B are flow diagrams of exemplary processes used in one of numerous embodiments for allocating and using range identifiers as input values to content-addressable memories. Methods and apparatus are disclosed for allocating and using range identifiers as input values to associative memories, especially binary content-addressable memories (CAMs) and ternary content-addressable memories (TCAMs). Embodiments described herein include various elements and limitations, with no one element or limitation contemplated as being a critical element or limitation. Each of the claims individually recite an aspect of the invention in its entirety. Moreover, some embodiments described may include, but are not limited to, inter alia, systems, networks, integrated circuit chips, embedded processors, ASICs, methods, and computer-readable medium containing instructions. The embodiments described hereinafter embody various aspects and configurations within the scope and spirit of the invention, with the figures illustrating exemplary and non-limiting configurations. As used herein, the term “packet” refers to packets of all types, including, but not limited to, fixed length cells and variable length packets, each of which may or may not be divisible into smaller packets or cells. Moreover, these packets may contain one or more types of information, including, but not limited to, voice, data, video, and audio information. Furthermore, the term “system” is used generically herein to describe any number of components, elements, sub-systems, devices, packet switch elements, packet switches, routers, networks, computer and/or communication devices or mechanisms, or combinations of components thereof. The term “computer” is used generically herein to describe any number of computers, including, but not limited to personal computers, embedded processors and systems, control logic, ASICs, chips, workstations, mainframes, etc. The term “device” is used generically herein to describe any type of mechanism, including a computer or system or component thereof. The terms “task” and “process” are used generically herein to describe any type of running program, including, but not limited to a computer process, task, thread, executing application, operating system, user process, device driver, native code, machine or other language, etc., and can be interactive and/or non-interactive, executing locally and/or remotely, executing in foreground and/or background, executing in the user and/or operating system address spaces, a routine of a library and/or standalone application, and is not limited to any particular memory partitioning technique. The steps and processing of signals and information illustrated in the figures are typically be performed in a different serial or parallel ordering and/or by different components in various embodiments in keeping within the scope and spirit of the invention. Moreover, the terms “network” and “communications mechanism” are used generically herein to describe one or more networks, communications mediums or communications systems, including, but not limited to the Internet, private or public telephone, cellular, wireless, satellite, cable, local area, metropolitan area and/or wide area networks, a cable, electrical connection, bus, etc., and internal communications mechanisms such as message passing, interprocess communications, shared memory, etc. The terms “first,” “second,” etc. are typically used herein to denote different units (e.g., a first element, a second element). The use of these terms herein does not necessarily connote an ordering such as one unit or event occurring or coming before the another, but rather provides a mechanism to distinguish between particular units. Moreover, the phrase “based on x” is used to indicate a minimum set of items x from which something is derived, wherein “x” is extensible and does not necessarily describe a complete list of items on which the operation is based. Additionally, the phrase “coupled to” is used to indicate some level of direct or indirect connection between two elements or devices, with the coupling device or devices modify or not modifying the coupled signal or communicated information. In one view, a trie is a directed path through a binary tree with each path through the tree qualified by a unique result. This unique result is typically codified by the path taken with a one or zero representing a left or right path taken to reach the desired node. A prefix is typically a string of characters that appears at the beginning of a longer string of characters. In many cases of practical interest the characters in a prefix are binary digits (i.e., ones and zeroes). A prefix is sometimes terminated by an asterisk, a symbol which represents the remaining arbitrary binary digits in a longer, fixed-length string. Methods and apparatus are disclosed for allocating and using range identifiers as input values to associative memories, especially binary content-addressable memories (CAMs) and ternary content-addressable memories (TCAMs). In one embodiment, each of multiple non-overlapping intervals are identified with one of multiple unique identifiers. An indication of a mapping between the multiple non-overlapping intervals and the multiple unique identifiers is maintained. A particular unique identifier is determined from said multiple unique identifiers based on a value and said multiple non-overlapping intervals. A lookup operation is performed on an associative memory using the particular unique identifier to generate a result. One embodiment uses a trie representation of a range tree of the intervals to derive the unique identifiers. Moreover, one embodiment evaluates and selects among various possible trie representations, especially to determine identifiers such that a TCAM prefix may match multiple intervals corresponding to a desired range. FIG. 1 illustrates one embodiment of a system, which may be part of a router or other communications or computer system, for allocating and using range identifiers as input values to associative memories. In one embodiment, programming engine 100 partitions a relevant information space into multiple ranges of values, such as those important to access control lists and routing decisions in a router. These ranges are partitioned into a set of non-overlapping intervals with each being assigned a unique identifier. Packet engine 120 and associative memory 130 are programmed with these unique identifiers and/or prefixes based on the unique identifiers. In one embodiment, programming engine 100 assigns these unique identifiers based on a trie representation of the intervals. Moreover, in one embodiment, programming engine 100 evaluates different possible trie or other representations of the intervals and selects a particular representation, typically for some optimization purpose. For example, one representation and thus selection of unique identifiers may reduce the number of entries required, especially when associative memory 130 includes a prefix matching devices such as a TCAM. For example, one representation may be selected such that certain ranges may be matched using a prefix value that encompasses two or more relevant non-overlapping intervals. In one embodiment, programming engine 100 includes a processor 102, memory 101, storage devices 104, and programming interface 105, which are electrically coupled via one or more communications mechanisms 109 (shown as a bus for illustrative purposes). Various embodiments of programming engine 100 may include more or less elements. The operation of programming engine 100 is typically controlled by processor 102 using memory 101 and storage devices 104 to perform one or more tasks or processes. Memory 101 is one type of computer-readable medium, and typically comprises random access memory (RAM), read only memory (ROM), flash memory, integrated circuits, and/or other memory components. Memory 101 typically stores computer-executable instructions to be executed by processor 102 and/or data which is manipulated by processor 102 for implementing functionality in accordance with the invention. Storage devices 104 are another type of computer-readable medium, and typically comprise solid state storage media, disk drives, diskettes, networked services, tape drives, and other storage devices. Storage devices 104 typically store computer-executable instructions to be executed by processor 102 and/or data which is manipulated by processor 102 for implementing functionality in accordance with the invention. As used herein and contemplated by the invention, computer-readable medium is not limited to memory and storage devices; rather computer-readable medium is an extensible term including other storage and signaling mechanisms including interfaces and devices such as network interface cards and buffers therein, as well as any communications devices and signals received and transmitted, and other current and evolving technologies that a computerized system can interpret, receive, and/or transmit. FIG. 2A illustrates one embodiment of a system, which may be part of a router or other communications or computer system, for allocating and using range identifiers as input values to associative memories. In one embodiment, programming engine 210 receives an access control list (ACL) and/or other feature configuration information 200. A feature management module 211 receives this information 200 and communicates a range identifier request 212 to range map dynamic-link library (DLL) 215 which analyzes and produces a set of range/interval unique identifiers 213. Identifiers 213 and possibly received configuration information 200 is forwarded by feature manager module 211 to TCAM manager 218 and then to program interface 219, and then on to maintenance processor 221 of packet processor 220. Maintenance processor 221 then updates range logic with interval data structure 222 and programs associative memory 260. FIG. 2B illustrates one aspect of the processing of packets 240 by packet processor 220. Packets 240 are received by packet processing engine 251 which consults range logic with interval data structure 222 to generate a lookup word 255 for use by associative memory 260. Associative memory 260 produces a result 261, which is typically used as input to a memory (e.g., SRAM) 262 to produce a result 265 for use by packet processor 220. In one embodiment, result 261 is returned directly to packet processor 220. The operations of the embodiments illustrated in FIGS. 1, 2A and 2B are further described in relation to FIGS. 3, 4A-C, and 5A-B. FIG. 3 illustrates a data structure 300 which is used in one embodiment to maintain interval information. Data structure 300 corresponds to a data structure maintained in one embodiment of memory 101 (FIG. 1) and/or in range logic with interval data structure 222 (FIGS. 2A-B). Data structure 300 maintains a list of interval values 301 and a corresponding identifier 302 for each interval. In one embodiment, an index or other value associated with data structure 300 is used in place of, or in addition to each identifier 302. FIGS. 4A-C illustrate one of an unlimited number of methods of determining a unique identifier for a corresponding range interval. For ease of understanding, FIGS. 4A-C use an exemplary set of interval edges (i.e., 0, 80, 1024, 8080). Of course, any set of intervals may be used in accordance with the invention. FIG. 4A illustrates one trie representation 400 of these intervals 401. FIG. 4B illustrates one set of unique trie identifiers 402 associated with each of the intervals 401. As shown, these trie unique identifiers are just the corresponding patch through trie representation 400. FIG. 4C illustrates some mappings of ranges (e.g., a contiguous numerical space encompassing one or more intervals). Exemplary range one 410 illustrates the range of 80-1024 identified with the trie value of 80 concatenated with the trie value of 1024, or the string or prefix “0110”. Exemplary range two 420 illustrates the range of 0-80 identified one way by the trie value of 0 concatenated with the trie value of 80, or the string or prefix “0001”. Additionally, this range could be represented by the prefix “0*”. Thus, a TCAM could use the entry “0*” to match exemplary range two 420. This consolidation of identifiable values may allow TCAMs to be programmed more efficiently. FIG. 5A illustrates a process used in one embodiment for allocating and using range identifiers as input values to content-addressable memories. Processing begins with process block 500, and proceeds to process block 502, wherein the set of ranges are identified. Next, in process block 504, these ranges are partitioned into non-overlapping intervals. Optionally, in process block 506, an optimized encoding of the intervals and ranges are determined, such as determining unique identifiers for non-overlapping intervals in a manner such that ranges can be readily identified by prefixes, which are especially useful in embodiments employing a TCAM. In one embodiment, an optimized allocation of range identifiers is performed by a dynamic program on cost of allocation (number of TCAM entries) for each possible value of the tuple (left point, right point, number of bits taken for encoding). Typically, this operation is performed separately on source port ranges and destination port ranges, as well as for any other appropriate matching fields or criteria. One such process first pre-processes the range data by first expanding any ranges that can be expanded into prefixes with small cost are expanded and eliminated from range identifier allocation, and any ranges that can be expanded. Prefixes with not a very large cost, and are not used very frequently are expanded into prefixes. Next, different configurations of a range tree are constructed and evaluated. One such embodiment can be described as follows. First, a sorted array of endpoints is created. Each range [ij]=[ij+1) gives endpoints “i” and “j+1”. The array of endpoints will henceforth be denoted by endpoint[ ]. The array is sorted in increasing order from left to right. Next, unit interval number “i” corresponds to [endpoint[i], endpoint[i+1]). Note that intervals (−infinity,0) and (MAX_PORT_NUMBER, infinity) are not used by any ranges and these are not defined as per convention. Note the convention here—unit interval will refer to any interval defined by successive endpoints, while as interval will generally refer to a range of addresses defined by any two non-successive endpoints. Next, for each interval, initialize cost, weight, left and right values. Cost of interval [endpoint[i],endpoint[j]) denoted by cost[i][j], is defined as number of TCAM entries that will contain a prefix corresponding to that range of addresses. Initial value for cost[i][j][k] is 0 for all unit intervals, and infinity otherwise. Weight of an interval is equal to total weight of all ranges with exactly the same endpoints as the interval. left[i][j] is sum of weights of all ranges that have left endpoint between endpoints of interval [endpoint[i],endpoint[j]). Then an optimal cost matrix is calculated. Optimal cost of range trie using “k” bits between [endpoint[i], endpoint[j]) using “r” as root of trie is expressed as: cost[i][j][k]=infinity for i+1>j cost[i][j][k]=initialized value for i+1=j which is minimized over all possible i<r<j. The value of r that minimizes this cost is selected as the root of the optimal trie. Using the cost[ ] matrix computed above, the root trie node is created that minimizes cost, and children that correspond to left and right sub-intervals created by that endpoint. Note that in the optimal trie, nodes corresponding to endpoint values 0 and the largest value can be ignored as they are leaf nodes, below which there is only one unit interval. Next, in process block 508, each interval is identified with a unique identifier and the interval data structure is updated. In one embodiment, the selected trie is traversed and a bit string representation is created for each node (e.g., interval). Note, these bit strings may be of varying length. In one embodiment, an in-order traversal of the optimal trie is performed. For each non-leaf node, create lookup table entry (node→value, node→result). Let leftmost(x) denote the leftmost leaf node (node with smallest node→value) in the subtrie rooted at x. Then node→result is generated by padding leftmost(node→right_child)→value by any arbitrary bit string (e.g., using 0's or 1's). Next, in process block 510, each of the identified ranges are assigned a prefix or other encoding representation of a range or interval, and the associative memory is programmed using these encodings. In one embodiment, for each range [endpoint[i], endpoint[j]), the optimal trie is traversed starting from root node. At any node: If both endpoints of the range are in the sub-trie rooted at that node, do not output any value. If both endpoints are in left or right subtrie, go down to the corresponding child node. If both endpoints are not in either left or right subtrie, change state to OUTPUT mode. If in OUTPUT mode, if left endpoint of range is in left subtrie, output prefix corresponding to right child and go to left child node. If in OUTPUT mode, if right endpoint of range is in right subtrie, output prefix corresponding to left child and go to right child node. Processing of the flow diagram illustrated in FIG. 5A is then complete as indicated by process block 512. One embodiment further provides mechanisms for adding and removing entries without having to perform the entire programming operations. In one embodiment, a new range may be added in the following 1. For any endpoint of the range that is not already present in the range trie, insert endpoint as follows: a. Traverse the range trie searching for endpoint. b. If endpoint exists in the trie, search will terminate at some non-leaf node. Don't need to insert endpoint. c. Else, search will terminate in a leaf node. d. Path length from root to this leaf node is the number of bits used to represent this unit interval. If maximum available bits are already used, insertion of new endpoint fails. e. Otherwise: The leaf node represents a unit interval, within which is the new endpoint. So inserting endpoint will split this unit interval into two unit intervals. So convert the leaf node into an internal node of trie by setting node→compare_value=endpoint and creating two leaf children (corresponding to the newly split unit intervals). f. With these change, note that no existing range map changes. Any range map refers to the prefix representation of existing trie nodes, which remain at the same location relative to root. g. However, the lookup table changes. One new entry needs to be created for the leaf node that was converted to an internal node. Also, for the node which had used the leaf node as leftmost (node→right), the new leftmost(node→right) node will be leaf_node→left. So only two range lookup table entries need to be updated. 2. If both endpoints are successfully inserted in range trie (or they already exist), the range map is generated from the new range. 3. Otherwise, the range must be expanded into prefixes. 4. NOTE: If only one endpoint is inserted in trie, and second insert fails, the first endpoint may be deleted in order to minimize accumulating garbage. In one embodiment, a range may be deleted in the following manner. Note, the removal of a range does not necessarily imply removal of its endpoints. In one embodiment, a use count is maintained for all endpoints in the range trie. If use count of an endpoint falls to zero, it can potentially be removed. One embodiment of a process for checking and removing an endpoint is now described. 1. Walk down the tree searching for endpoint. Let node denote the corresponding tree node. 2. If node has any child that is an internal (i.e., non-leaf) node of the tree, node cannot be removed without affecting any existing range maps. In order to avert any large scale update of TCAM entries, defer deletion of node. 3. If the only children of node are leaves, then delete endpoint by converting node into a leaf. 4. At this point, check whether use-count of parent(node) is zero. If so, it can be a node whose deletion was deferred, as in step (2). The parent(node) is then deleted. FIG. 5B illustrates a process used in one embodiment for processing information. Note, this processing is described in terms of receiving and processing packets. However, the invention is not so limited, and may be used for processing any type of information. Processing begins with process block 540, and proceeds to process block 542, wherein a packet is received. Next, in process block 544, information (e.g., port fields, source address, destination address, service type, or other packet header or data fields) is extracted on which to perform the match and lookup. Next, in process block 546, a lookup word value is generated with reference to the programmed interval data structure. Next, in process block 548, the programmed associative memory performs the lookup operation. Processing returns to process block 542 to receive and process more In view of the many possible embodiments to which the principles of our invention may be applied, it will be appreciated that the embodiments and aspects thereof described herein with respect to the drawings/figures are only illustrative and should not be taken as limiting the scope of the invention. For example and as would be apparent to one skilled in the art, many of the process block operations can be re-ordered to be performed before, after, or substantially concurrent with other operations. Also, many different forms of data structures could be used in various embodiments. The invention as described herein contemplates all such embodiments as may come within the scope of the following claims and equivalents thereof.
{"url":"http://www.google.de/patents/US6775737","timestamp":"2014-04-21T07:34:52Z","content_type":null,"content_length":"145872","record_id":"<urn:uuid:0e122dac-7001-4484-97ce-1d62c5587dbe>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Square, Square Root, Cube, Cube Root Comments (0) Please log in to add your comment. 8th Grade Math Square, Square Roots, Cubed, and Cubed Roots Squared To square a number means to take that number times itself. For instance, 5 squared is 5 x 5, which equals 25. 8 squared equals? 12 squared equals? Squared Root The opposite of squaring a number is finding the square root of a number. The square root of a number is the number that, when squared, equals the given number. For instance, the square root of 81 is 9 because 9 x 9 equals 81. The square root of 36 is? The square root of 17 is? Cubed To cube a number means to take a number times itself 3 times. For instance, 5 cubed is 5 x 5 x 5, which equals 125. 8 cubed equals? 10 cubed equals? Cubed Root The opposite of cubing a number is to find the cube root of the number. The cube root of a number is the number that, when cubed, equals the given number. For instance, the cube root of 64 is 4 because 4 x 4 x 4 = 64. The cube root of 27 equals? The cube root of 54 equals? Squares and Square Roots Cube and Cube Roots
{"url":"http://prezi.com/jzz6yviz5vqv/square-square-root-cube-cube-root/","timestamp":"2014-04-20T04:39:13Z","content_type":null,"content_length":"53263","record_id":"<urn:uuid:3df2da49-7d1a-430c-9f0d-e23eb868bfce>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
You Hang A Floodlamp From The Endof A Vertical ... | Chegg.com Equilibrium problem on problem 89!!!! You hang a floodlamp from the endof a vertical steel wire. The floodlamp stretches the wire0.12mm and the stress is proportionalto the strain. (a) How much would ithave stretched if the wire were five times as long? (b) How much would it have stretched if the wire had the samelength but twice the diameter? (c) How much would it have stretched for a copper wire of theoriginal length and diameter? *i got a and b but notsee Answers (1) • You hang a floodlamp from the endof a vertical steel wire. The floodlamp stretches the wire0.12mm and the stress is proportionalto the strain. (a) How much would ithave stretched if the wire were five times as long? (b) How much would it have stretched if the wire had the samelength but twice the diameter? (c) How much would it have stretched for a copper wire of theoriginal length and diameter? *i got a and b but notsee Rating:5 stars 5 stars 1 Jason2013 answered 4 hours later
{"url":"http://www.chegg.com/homework-help/questions-and-answers/hang-floodlamp-endof-vertical-steel-wire-floodlamp-stretches-wire012mm-stress-proportional-q63408","timestamp":"2014-04-17T20:32:36Z","content_type":null,"content_length":"25373","record_id":"<urn:uuid:77b0d472-0124-4074-83be-fe0d3a9856aa>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Fibers of fibrations of a 3-manifold over $S^1$ up vote 13 down vote favorite Given a fiber bundle $S\hookrightarrow M \rightarrow S^1$ with $M$ (suppose compact closed connected and oriented) 3-manifold and $S$ a compact connected surface, it follows form the exact homotopy sequence that $\pi_1(S)\hookrightarrow \pi_1(M)$. Does this imply that the "fiber" of a 3-manifold which fibers over $S^1$ is well defined? The answer should be NO, so I am asking: are there simple examples of 3-manifolds which are the total space of two fiber bundles over $S^1$ with fibers two non homeomorphic surfaces? EDIT: the answer is NO (see Richard Kent's answer). I'm just seeking for a "practical" example to visualize how this phoenomenon can happen. I don't know an answer to this, but why do you think that there could only be one such $S$? – Olivier Bégassat Apr 7 '11 at 19:49 add comment 2 Answers active oldest votes There are simple examples with $M = F \times S^1$ for $F$ a closed surface of genus $2$ or more. Choose a nonseparating simple closed curve $C$ in $F$, then take $n$ fibers $F_1,\ cdots,F_n$ of $F\times S^1$, cut these fibers along the torus $T=C\times S^1$, and reglue the resulting cut surfaces so that $F_i$ connects to $F_{i+1}$ when it crosses $T$, with up vote 21 subscripts taken mod $n$. The resulting connected surface is an $n$-sheeted cover of $F$ and is a fiber of a new fibering of $M$ over $S^1$. The monodromy of this fibering is a periodic down vote homeomorphism of the new fiber, of period $n$. add comment As you suspect, the answer is no. There are $3$-manifolds that fiber over the circle in infinitely many ways, with fibers of unbounded genera. Thurston constructed a seminorm on $H_2(M,\partial M; \mathbb{R})$, now called the Thurston norm. The unit ball is a polyhedron, and Thurston showed that fibers of fibrations of $M$ over the circle are those integral classes which lie in the open cone on a collection of certain top dimensional faces (the ``fibered faces") of this ball. The norm is defined by extending the absolute value of the Euler characteristic of integral classes, and so any fibered manifold whose second homology has rank at least two gives you an example. up vote 14 EDIT: down vote For a discussion of some explicit examples, see: Hilden, Lozano, Montesinos-Amilibia, On hyperbolic 3-manifolds with an infinite number of fibrations over $S^1$. Math. Proc. Cambridge Philos. Soc. 140 (2006), no. 1, 79–93. You're right, I've read Thurston's original paper for a seminar but I focused on the foliations part. So I reformulate my question: instead of this argument, is there any simple "practical" example to visualize this phenomenon? – Francesco Lin Apr 7 '11 at 20:34 1 I added a reference for some nice examples. – Richard Kent Apr 7 '11 at 20:59 2 As a follow up, Geometrization implies that any example for the question you asked must be hyperbolic. (Euclidean manifolds like $T^3$ often fiber in infinitely many ways, but the fibers will always be tori.) The examples in the cited paper are indeed pretty nice; they come from taking a branched covering of $T^3$ so that many of the different fibrations of $T^3$ lift to the cover, with different fibers. – Dylan Thurston Apr 7 '11 at 23:46 3 What I wrote above about Geometrization is wrong, see Allen Hatcher's answer. – Dylan Thurston Apr 8 '11 at 3:26 add comment Not the answer you're looking for? Browse other questions tagged gt.geometric-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/60987/fibers-of-fibrations-of-a-3-manifold-over-s1/61014","timestamp":"2014-04-19T22:43:56Z","content_type":null,"content_length":"64235","record_id":"<urn:uuid:cc741c1d-dfce-4bb6-8801-e723af11e38a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
when constant scalar curvature implies Einstein? up vote 2 down vote favorite Assume that $(M^n,g)$ is an $n$ dimensional ($n>=3$)closed Riemannian manifold with constant scalar curvature and $Ric_g$ nonnegative. Then is $g$ Einstein? add comment 2 Answers active oldest votes There is no reason for this, and the answer is indeed no. The simplest example I can think of is the product of two $\mathbb{S}^2$, each endowed with the round metric. This manifold is homogeneous and thus has constant scalar curvature, its up vote 6 down sectional curvature is non-negative so its Ricci tensor also is (and is in fact even positive), but the Ricci curvature in a direction $u$ depends on the angle between $u$ and the vote accepted tangent spaces to the fibers of the projection on each factor (i.e., on whether $u$ is close to be horizontal or vertical or not). Yes, I think your example has scalar curvature equal to 4 and Ricci curvature nonnegative. But some sectional curvature is zero. Thank you very much. – Mathboy Apr 17 '13 at 12:46 1 $S^1\times S^2$ works too. – Ian Agol Apr 17 '13 at 21:04 Just a caveat, for this metric on $\mathbb{S}^2\times\mathbb{S}^2$ not to be Einstein, the two $\mathbb{S}^2$ need to have different radius. And a remark about Agol's comment : what 1 I like in it is that it in fact doesn't admit any Einstein metric, just because in dimension 3 Einstein is equivalent to constant sectional curvature. I wonder wether $\mathbb{S}^1\ times\mathbb{S}^3$ enjoys the same property or not... – Thomas Richard Apr 18 '13 at 9:36 add comment As an example where this does hold, for $\omega$ a Kähler metric of constant scalar curvature with $\pi c_1(M) = \lambda [\omega]$, then $\omega$ is Kähler-Einstein. This is up vote 4 down Proposition 2.12 in Tian's "Canonical metrics in Kähler Geometry". P.S. If anyone would like to remove the e from "Kaehler" and insert an umlaut, they're more than welcome to, as I was unable to. – Ruadhaí Dervan Apr 17 '13 at 14:52 Thank you for telling me this interesting reference. – Mathboy Apr 18 '13 at 9:14 add comment Not the answer you're looking for? Browse other questions tagged dg.differential-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/127829/when-constant-scalar-curvature-implies-einstein?sort=newest","timestamp":"2014-04-17T01:52:19Z","content_type":null,"content_length":"58951","record_id":"<urn:uuid:f7c4c8b3-df5d-4a79-a37b-d7a1f3e86034>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Baratin and Freidel: a spin foam model of ordinary particle physics john baez It's hot up here in Waterloo, Canada! In the Poincare 2-group, the group of objects is the group of Lorentz transformations, and the group of morphisms is the Poincare group. That doesn't completely describe the Poincare 2-group. You need to know some other stuff, like: If you have a morphism in here, which object does it start at, and which object does it end at? There's only one sensible answer to this question, I think, so I'll leave it as a puzzle. You also need to decide how to compose morphisms. I'll leave that as a (harder) puzzle. About the puzzle---I think selfAdjoint has already figured it out but I did not follow all the discussion, so I will take a guess. Someone, I think it was JB, suggested using letters T and S for "twist" and "shift", where T is an element of the Lorentz group and S is thought of as a translation (we are building the Poincaré [EDIT] GROAN. I just looked at the "Higher Yang Mills" paper that I lost a couple of days ago, while cleaning up the living room. the Poincaré 2-group is explained as EXAMPLE 9. So the puzzle was already answered. ...I talked to Kea and she suggested one by Girelli and Pfeiffer, which I never got around to looking at it. Maybe I will now. http://arxiv.org/abs/hep-th/0309173 Higher gauge theory -- differential versus integral formulation Florian Girelli, Hendryk Pfeiffer 26 pages J.Math.Phys. 45 (2004) 3949-3971 "The term higher gauge theory refers to the generalization of gauge theory to a theory of connections at two levels, essentially given by 1- and 2-forms. So far, there have been two approaches to this subject. The differential picture uses non-Abelian 1- and 2-forms in order to generalize the connection 1-form of a conventional gauge theory to the next level. The integral picture makes use of curves and surfaces labeled with elements of non-Abelian groups and generalizes the formulation of gauge theory in terms of parallel transports..." I hope Kea is better now and was wishing she would suddenly materialize amidst this thread. Well, looking at Girelli/Pfeiffer, I see right away references [15] J. C. Baez: Higher Yang–Mills theory (2002). Preprint hep-th/0206130. [19] J. C. Baez and A. Crans: Higher dimensional algebra VI: Lie 2-algebras (2003). Preprint math.QA/0307263. My posts are just not helpful in this thread at this point. I shall delete the next one.
{"url":"http://www.physicsforums.com/showpost.php?p=1018432&postcount=21","timestamp":"2014-04-17T09:47:47Z","content_type":null,"content_length":"10667","record_id":"<urn:uuid:6679e352-1793-4086-a983-440de438d0f4>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Programming Example Main.LinearProgramming History Hide minor edits - Show changes to output Changed line 20 from: <iframe width="560" height="315" src="//www.youtube.com/embed/i8WS6HlE8qM[DEL:?list=UU2GuY-AxnNxIJFAVfEW0QFA:DEL]" frameborder="0" allowfullscreen></iframe> <iframe width="560" height="315" src="//www.youtube.com/embed/i8WS6HlE8qM" frameborder="0" allowfullscreen></iframe> Added lines 7-8: [INS:!!!! Refinery Optimization with Linear Programming:INS] Changed lines 10-11 from: [DEL:* [[http://apmonitor.com/online/view_pass.php?f=irefinery.apm | Solve Refinery Optimization Problem]] with Integer Variables:DEL] Added lines 13-20: !!!! Refinery Optimization with Mixed Integer Linear Programming * [[http://apmonitor.com/online/view_pass.php?f=irefinery.apm | Solve Refinery Optimization Problem]] with Integer Variables <iframe width="560" height="315" src="//www.youtube.com/embed/i8WS6HlE8qM?list=UU2GuY-AxnNxIJFAVfEW0QFA" frameborder="0" allowfullscreen></iframe>:INS] Deleted lines 4-5: [DEL:!!!! Linear Programming Example 1:DEL] Changed lines 14-15 from: !![DEL:!! Linear Programming Example:DEL] 2 :INS]!! [INS:Soft Drink Production Problem (Example :INS]2[INS:):INS] Deleted lines 28-29: Deleted lines 35-36: Changed lines 5-16 from: !!!! Linear Programming [DEL:Example:DEL] !!!! Linear Programming [INS:Example 1 A refinery must produce 100 gallons of gasoline and 160 gallons of diesel to meet customer demands. The refinery would like to minimize the cost of crude and two crude options exist. The less expensive crude costs $80 USD per barrel while a more expensive crude costs $95 USD per barrel. Each barrel of the less expensive crude produces 10 gallons of gasoline and 20 gallons of diesel. Each barrel of the more expensive crude produces 15 gallons of both gasoline and diesel. Find the number of barrels of each crude that will minimize the refinery cost while satisfying the customer * [[http://apmonitor.com/online/view_pass.php?f=refinery.apm | Solve Refinery Optimization Problem]] with Continuous Variables * [[http://apmonitor.com/online/view_pass.php?f=irefinery.apm | Solve Refinery Optimization Problem]] with Integer Variables <iframe width="560" height="315" src="//www.youtube.com/embed/M_mpRrGKKMo?rel=0" frameborder="0" allowfullscreen></iframe> !!!! Linear Programming Example 2:INS] Changed line 46 from: [DEL:* :DEL][[Attach:linear_programming_with_apm_python.zip|Attach:linear_programming_with_apm_python.png]] Changed line 26 from: <iframe src="http://apmonitor.com/me575/uploads/Main/softdrink.htm" width="500" height="[DEL:200:DEL]" frameborder="1" marginheight="0" marginwidth="0">Loading...</iframe> <iframe src="http://apmonitor.com/me575/uploads/Main/softdrink.htm" width="500" height="[INS:230:INS]" frameborder="1" marginheight="0" marginwidth="0">Loading...</iframe> Changed line 36 from: <iframe src="http://apmonitor.com/me575/uploads/Main/softdrink2.htm" width="500" height="[DEL:200:DEL]" frameborder="1" marginheight="0" marginwidth="0">Loading...</iframe> <iframe src="http://apmonitor.com/me575/uploads/Main/softdrink2.htm" width="500" height="[INS:230:INS]" frameborder="1" marginheight="0" marginwidth="0">Loading...</iframe> Changed line 26 from: <iframe src="http://apmonitor.com/me575/uploads/Main/softdrink.htm" width="500" height="[DEL:600:DEL]" frameborder="1" marginheight="0" marginwidth="0">Loading...</iframe> <iframe src="http://apmonitor.com/me575/uploads/Main/softdrink.htm" width="500" height="[INS:200:INS]" frameborder="1" marginheight="0" marginwidth="0">Loading...</iframe> Changed line 36 from: <iframe src="http://apmonitor.com/me575/uploads/Main/softdrink2.htm" width="500" height="[DEL:600:DEL]" frameborder="1" marginheight="0" marginwidth="0">Loading...</iframe> <iframe src="http://apmonitor.com/me575/uploads/Main/softdrink2.htm" width="500" height="[INS:200:INS]" frameborder="1" marginheight="0" marginwidth="0">Loading...</iframe> Added line 10: Changed lines 21-25 from: [DEL:Below are the source files for generating the contour plots in Python:DEL].[DEL: The linear program is solved with the APM model through a web-service while the contour plot is generated with * [[Attach:linear_programming_with_apm_python:DEL].[DEL:zip|Linear Programming Example Files for APM Python (:DEL].[DEL:zip:DEL])[DEL:]] * :DEL][[[DEL:Attach:DEL]:[DEL:linear:DEL]_[DEL:programming_with_apm_python:DEL].[DEL:zip:DEL]|[DEL:Attach:DEL]:[DEL:linear_programming_with_apm_python:DEL].[DEL:png]]:DEL] [INS:!!!! Soft Drink Production Problem [[http://apmonitor:INS].[INS:com/online/view_pass.php?f=softdrink.apm | Solve the Production Problem Online]] <iframe src="http://apmonitor.com/me575/uploads/Main/softdrink.htm" width="500" height="600" frameborder="1" marginheight="0" marginwidth="0">Loading:INS]..[INS::INS].[INS:</iframe> !!!! Modified Production Problem :INS][[[INS:http:INS]:[INS://apmonitor.com/online/view:INS]_[INS:pass:INS].[INS:php?f=softdrink2.apm :INS]|[INS: Solve the Modified Production Problem Online]] <iframe src="http://apmonitor:INS].[INS:com/me575/uploads/Main/softdrink2.htm" width="500" height="600" frameborder="1" marginheight="0" marginwidth="0">Loading...</iframe> !!!! Solution and Contour Plots with Python Below are the source files for generating the contour plots in Python. The linear program is solved with the APM model through a web-service while the contour plot is generated with the Python package Matplotlib.:INS] Changed lines 8-12 from: * 3 units of ''A'' and 6 units of ''B'' to produce [DEL:product:DEL] ''1'' * 8 units of ''A'' and 4 units of ''B'' to produce [DEL:product:DEL] ''2'' There are at most 5 units of [DEL:product:DEL] 1 and 4 units of [DEL:product:DEL] 2. Product 1 can be sold for 100 and Product 2 can be sold for 125. The objective is to maximize the profit for this production problem.[DEL::DEL] * 3 units of ''A'' and 6 units of ''B'' to produce [INS:Product:INS] ''1'' * 8 units of ''A'' and 4 units of ''B'' to produce [INS:Product:INS] ''2'' There are at most 5 units of [INS:Product '':INS]1[INS:'':INS] and 4 units of [INS:Product '':INS]2[INS:'':INS]. Product 1 can be sold for 100 and Product 2 can be sold for 125. The objective is to maximize the profit for this production problem. Added lines 13-20: A contour plot can be used to explore the optimal solution. In this case, the black lines indicate the upper and lower bounds on the production of ''1'' and ''2''. In this case, the production of ''1'' must be greater than 0 but less than 5. The production of ''2'' must be greater than 0 but less than 4. Below are the source files for generating the contour plots in Python. The linear program is solved with the APM model through a web-service while the contour plot is generated with Matplotlib.:INS] Added lines 1-35: [INS:(:title Linear Programming Example:) (:keywords linear programming, mathematical modeling, nonlinear, optimization, engineering optimization, university course:) (:description Tutorial on linear programming solve parallel computing optimization applications.:) !!!! Linear Programming Example A simple production planning problem is given by the use of two ingredients ''A'' and ''B'' that produce products ''1'' and ''2''. In this case, it requires: * 3 units of ''A'' and 6 units of ''B'' to produce product ''1'' * 8 units of ''A'' and 4 units of ''B'' to produce product ''2'' There are at most 5 units of product 1 and 4 units of product 2. Product 1 can be sold for 100 and Product 2 can be sold for 125. The objective is to maximize the profit for this production problem. * [[Attach:linear_programming_with_apm_python.zip|Linear Programming Example Files for APM Python (.zip)]] * [[Attach:linear_programming_with_apm_python.zip|Attach:linear_programming_with_apm_python.png]] <div id="disqus_thread"></div> <script type="text/javascript"> /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */ var disqus_shortname = 'apmonitor'; // required: replace example with your forum shortname /* * * DON'T EDIT BELOW THIS LINE * * */ (function() { var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true; dsq.src = 'http://' + disqus_shortname + '.disqus.com/embed.js'; (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq); <noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript> <a href="http://disqus.com" class="dsq-brlink">comments powered by <span class="logo-disqus">Disqus</span></a>
{"url":"http://apmonitor.com/me575/index.php/Main/LinearProgramming?action=diff","timestamp":"2014-04-17T04:08:52Z","content_type":null,"content_length":"31376","record_id":"<urn:uuid:8e507c28-2bad-4559-ba49-6ed4a74a8aca>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Collection Probability The "collection probability" describes the probability that a carrier generated by light absorption in a certain region of the device will be collected by the p-n junction and therefore contribute to the light-generated current, but probability depends on the distance that a light-generated carrier must travel compared to the diffusion length. Collection probability also depends on the surface properties of the device. The collection probability of carriers generated in the depletion region is unity as the electron-hole pair are quickly swept apart by the electric field and are collected. Away from the junction, the collection probability drops. If the carrier is generated more than a diffusion length away from the junction, then the collection probability of this carrier is quite low. Similarly, if the carrier is generated closer to a region such as a surface with higher recombination than the junction, then the carrier will recombine. The impact of surface passivation and diffusion length on collection probability is illustrated below. The collection probability in conjunction with the generation rate in the solar cell determine the light-generated current from the solar cell. The light-generated current is the integration over the entire device thickness of the generation rate at a particular point in the device, multiplied by the collection probability at that point. The equation for the light-generated current density (J [L]), with an arbitrary generation rate (G(x))and collection probability (CP(x)), is shown below, as is the generation rate in silicon due to the AM1.5 solar spectrum: q is the electronic charge; W is the thickness of the device; α(λ) is the absorption coefficient; H[0] is the number of photons at each wavelength. A non-uniform collection probability will cause a spectral dependence in the light-generated current. For example, at the surfaces, the collection probability is lower than in the bulk. Comparing the generation rates for blue, green and infrared light below, blue light is nearly completely absorbed in the first few tenths of a micron in silicon. Therefore, if the collection probability at the front surface is low, any blue light in the solar spectrum does not contribute to the light-generated current.
{"url":"http://www.pveducation.org/pvcdrom/solar-cell-operation/collection-probability","timestamp":"2014-04-17T04:04:14Z","content_type":null,"content_length":"62024","record_id":"<urn:uuid:0f93d6f1-4ef9-49a6-ba36-ebc52f918d44>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating sparse models from multivariate discrete data via transformed Lasso Teemu Roos and Bin Yu In: Information Theory and Applications Workshop (ITA-09), 8-13 Feb 2009, San Diego, CA. The type of L1 norm regularization used in Lasso and related methods typically yields sparse parameter estimates where most of the estimates are equal to zero. We study a class of estimators obtained by applying a linear transformation on the parameter vector before evaluating the L1 norm. The resulting "transformed Lasso" yields estimates that are "smooth" in a way that depends on the applied transformation. The optimization problem is convex and can be solved efficiently using existing tools. We present two examples: the Haar transform which corresponds to variable length Markov chain (context-tree) models, and the Walsh-Hadamard transform which corresponds to linear combinations of XOR (parity) functions of binary input features. PDF - Requires Adobe Acrobat Reader or other PDF viewer.
{"url":"http://eprints.pascal-network.org/archive/00004381/","timestamp":"2014-04-21T10:02:29Z","content_type":null,"content_length":"7048","record_id":"<urn:uuid:8067f2b5-4a0c-4677-91b0-4511acde4c7d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Thorofare Precalculus Tutors ...After all, math IS fun!In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA. I have been in upper management since 2004 and have had the opportunity to teach classes in international business, strategic management, and operations management at a local university. 13 Subjects: including precalculus, calculus, algebra 1, geometry ...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and non-euclidean geometry. I taught Prealgebra with a national tutoring chain for five years. I have taught Prealgebra as a private tutor since 2001. 12 Subjects: including precalculus, calculus, writing, geometry ...I have taken seven semesters of calculus courses, as well as two courses that required me to study the underlying foundations of calculus. I have been trained to teach Geometry according to the Common Core Standards. I have planned and executed numerous lessons for classes of high school students, as well as tutored many independently. 11 Subjects: including precalculus, calculus, geometry, algebra 1 I am currently a volunteer math tutor at the Center for Literacy in Philadelphia. I have a degree in engineering and math. My approach towards tutoring is simple. 23 Subjects: including precalculus, physics, statistics, geometry ...I am willing to tutor individuals or small groups. I am most helpful to students when the tutoring occurs over a longer period of time. This allows me to identify the topics that are the root causes of the student's problems. 18 Subjects: including precalculus, calculus, statistics, geometry
{"url":"http://www.algebrahelp.com/Thorofare_precalculus_tutors.jsp","timestamp":"2014-04-18T00:17:44Z","content_type":null,"content_length":"25144","record_id":"<urn:uuid:dc033193-c06d-4b18-a990-97a1945378be>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: How do i find the volume of a shpere?? Replies: 1 Last Post: Dec 10, 2004 10:58 AM Re: How do i find the volume of a shpere?? Posted: Dec 10, 2004 10:58 AM Ellie &lt;HeavenlyEllie13@aol.com&gt; wrote in <a href="news://1ofir09hffn93qibrgefldqartnsdl7tvr@4ax.com:">news://1ofir09hffn93qibrgefldqartnsdl7tvr@4ax.com:</a> &gt; How do i find the volume of a shpere when the raduis is 4 inches and a &gt; diameter is 6 inches???????????? a sphere is a perfectly symmetrical object. just like a circle, the radius is equal to one half the diameter. If you are truly talking about a sphere, then the conditions you mention are impossible. but to get the volume of a sphere the formula is 4/3 pi r^3. submissions: post to k12.ed.math or e-mail to k12math@k12groups.org private e-mail to the k12.ed.math moderator: kem-moderator@k12groups.org newsgroup website: <a href="http://www.thinkspot.net/k12math/">http://www.thinkspot.net/k12math/</a> newsgroup charter: <a href="http://www.thinkspot.net/k12math/charter.html">http://www.thinkspot.net/k12math/charter.html</a>
{"url":"http://mathforum.org/kb/message.jspa?messageID=3616331","timestamp":"2014-04-21T10:48:20Z","content_type":null,"content_length":"15069","record_id":"<urn:uuid:b5868112-f55f-4e7a-90d8-537f35132b3c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
A problem with the game of life 03-25-2007 #1 Registered User Join Date Jan 2007 #include <stdio.h> typedef int mat[10][10]; void input(mat matr) int x,y; scanf("%d %d",&x,&y); scanf("%d %d",&x,&y); void print(mat matr) int i,j; void zer(mat matr) int i,j; void killcell(mat matr) int countneib=0; int i,j; void createcell(mat matr) int countneib=0; int i,j; int main () char choice; int i,j; mat matr; return 0; The problem is that, there are weird numbers in the matrix. after a very quick look, i notice in your while loop of your input function: i assume you mean to use '=' instead of comparison operator '=='. im sure this will be a part of the problem, if it doesnt solve it ill take a deeper look. after a very quick look, i notice in your while loop of your input function: i assume you mean to use '=' instead of comparison operator '=='. im sure this will be a part of the problem, if it doesnt solve it ill take a deeper look. I also noticed that, changed it, and it crashes. [Edit] Nevermind about the crash. I think it crashed because, being a genius, I was giving it incorrect input. Incidentally, the following code inside the zer() function seems incorrect. Basically can be translated into: if(matr[i][j] == 0) matr[i][j] = 0; BTW, is it me or does this seem like a wrong way to use typedef? I don't know. Maybe it's just me. Last edited by MacGyver; 03-25-2007 at 03:05 PM. are you typing in input 100 times in the input function? if not then when you print it it will print the int at that index (in the for loop), which hasnt been initialized. which is where the large random numbers come from. Wouldn't this go out of bounds? One solution would be to add unused "edges" to the array. I'm not so sure if you can kill and create cells as you check them. I think you should first scan the array to establish which cells need to be killed and created first, without changing the array immediately and only then can you change the states of all cells at one go. In other words, you need to separate decision-making and actually modifying the array, because the decisions depend on the state of the game as a whole. 03-25-2007 #2 Registered User Join Date Oct 2006 03-25-2007 #3 03-25-2007 #4 Registered User Join Date Oct 2006 03-25-2007 #5 The larch Join Date May 2006
{"url":"http://cboard.cprogramming.com/c-programming/87856-problem-game-life.html","timestamp":"2014-04-20T09:47:31Z","content_type":null,"content_length":"56469","record_id":"<urn:uuid:ec9e9549-ba8b-44d3-959d-a3dc62ed508e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
The FitzHugh-Nagumo (FHN) model A generalisation of the Van der Pol's equation the FHN is one of the simplest model for excitable media [1,2]. The model is able to reproduce many qualitative characteristics of electrical impulses in cardiac tissues, e.g.: 1D travelling waves, tip of 2D spirals meandering, instability of 3D spiral wave with negative tension, oscillating pacemakers. The FHN system of equations for one cell du/dt = F(u, v) = u(1 - u)(u - a) - v, dv/dt = H(u, v) = ε (bu - v), where a is the threshold for excitation. To the right below u(t) excitation (black) and v(t) recovery (blue) variables are plotted. To the left the (u, v) phase plane of the system is shown. F(u,v) = 0 null cline is the red line and H(u,v) = 0 null cline is the green line. Below you can explore the model for different parameter values. The script makes 800 time steps dt. a b ε dt u0 v0 Ymin Ymaxfield The system makes excitation cycle and goes to the stable fixed point (u=0, v=0). Note that for small ε values the excitation variable u is fast with abrupt steps and the recovery v is slow. [1] FitzHugh-Nagumo model in Scholarpedia [2] J.D. Murray Mathematical Biology I. An Introduction Heart rhythms updated 29 Nov 2011
{"url":"http://www.ibiblio.org/e-notes/html5/fhn.html","timestamp":"2014-04-21T13:13:37Z","content_type":null,"content_length":"5724","record_id":"<urn:uuid:8f61a8a5-2182-4bf8-b010-8f301b271035>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help A bicycle is turned upside down while its owner repairs a flat tire. A friend spins the other wheel and observes that drops of water fly off tangentially from point A. She measures the heights reached by drops moving vertically (see figure). A drop that breaks loose from the tire on one turn rises vertically 0.44 m above the tangent point. A drop that breaks loose on the next turn rises 0.58 m above the tangent point. The radius of the wheel is 0.40 m. Neglecting air friction and using only the observed heights and the radius of the wheel, find the wheel's angular acceleration (assuming it to be constant). • Physics - MathMate, Friday, November 27, 2009 at 1:00pm The key is to find the tangential velocities at each of the turns, v1 and v2. The initial velocity v for an object thrown upwards will reach a height of h is related by the formula: where g is acceleration due to gravity. Conversely, given h, we can find v from: So the tangential velocities v1 and v2 can be determined from h1 and h2 (work in metres). Since the tangential velocities v1 and v2 are known, the average tangential velocity is The time it took to make the turn was From angular velocity = v/r radians/sec We can determine angular acceleration, a Solve for a. I get about 1.4 rad./s. Check my calculations. • Physics - Lilly, Saturday, November 28, 2009 at 1:41am I also got about 1.4 rad/s, but is the answer negative? -1.4 rad/s? • Physics - MathMate, Saturday, November 28, 2009 at 8:24am Angular acceleration is defined as (ω2-ω1)/time, since ω2 is bigger, evidenced by the fact that the droplet shoots higher in the second turn than the first, the value calculated should be If you got a negative answer according to the equations, it's time to give a thorough check. • Physics - Lilly, Saturday, November 28, 2009 at 1:13pm No, I thought the answer was positive too, but when I entered it in, a message popped up and said "Your anwer has the wrong sign". That's why I asked. • Physics - MathMate, Saturday, November 28, 2009 at 4:55pm Unless the heights of the drops have been inverted, I would be tempted to say that there is an error in the answer. Double check the order of the heights of the droplets, i.e. 0.44 m followed by 0.58 m on the next turn. If that's the case, you could report your findings to your teachers so you don't get After all, the people who put the answers on the computer are human, so mistakes are possible. • Physic - Ed, Saturday, November 28, 2009 at 7:34pm I have a question similar to this but I don't know what you did in each step. Could you plug in all the numbers step by step. • Physics - MathMate, Saturday, November 28, 2009 at 8:28pm I suggest you post a new question and it will get a personalized answer. Otherwise, you can post what you get and if you don't get the right answer, we'll help you find the problem.
{"url":"http://www.jiskha.com/display.cgi?id=1259306687","timestamp":"2014-04-18T18:31:30Z","content_type":null,"content_length":"11763","record_id":"<urn:uuid:6b898bd3-ab1a-4f9c-b26b-12f50a78d31f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Crypto++ Faq-O-Matic: Why is ElGamal key generation so slow? When you generate an ElGamal key pair, you have the option of specifying a prime modulus. If you do not specify the prime modulus, one will be generated, however because the modulus must be a safe prime (a prime p such that (p-1)/2 is also prime), and those are much rarer than regular primes, it takes a long time. I suggest that you use an existing well known safe prime instead. For example the following 2048-bit one from http://www.ietf.org/internet-drafts/draft-ietf-ipsec-ike-modp-groups-04.txt : The g (generator) value for this prime should be 2. 2002-May-14 6:54pm weidai
{"url":"http://www.cryptopp.com/fom-serve/cache/71.html","timestamp":"2014-04-18T08:03:41Z","content_type":null,"content_length":"4441","record_id":"<urn:uuid:c9e5af6d-457a-4f32-9707-c497cfe67f97>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Follow up to tricky Alg prob (repost from incorrect thread) January 4th 2013, 02:55 PM #1 Dec 2012 Follow up to tricky Alg prob (repost from incorrect thread) I have a couple questions that I know the solutions to, but I can't for the life of me figure out the process involved at arriving at those solutions... 1) If 18*sqrt(18) = r*sqrt(t), where t and r are positive integers and r > t, which of the following could be the value of r*t? (solution is 108) 2) The eggs in a certain basket are either white or brown. If the ration of the number of white egges to the number of brown eggs is (2/3), each of the following could be the number of eggs in the basket EXCEPT: a) 10 b) 12 (this is the answer) c) 15 d) 30 e) 60 I thought about proportionalities with problem 1 and got no where, and with problem 2 I thought I had it nailed until I checked the answer. As always, any advice is greatly appreciated REPLY (from Denevo - thanks again) these are not university-level algebra questions. for (1): since everything in sight is positive we can square both sides without fear. thus from: 18√18 = r√t, we have: (324)(18) = r^2t 18^3 = 5832 = r^2t. clearly t < 18, or else r^2t > t^3 > 5832. so t is some divisor of 18: 1,2,3,6,or 9. if t = 1, r = √(5832), which is not an integer. if t = 2, r = √(2916) = 54 <--this works ( (18)^3/2 = (9)(18)^2, which has square root 3*18 = 54). if t = 3, r = √(1944), not an integer if t = 6, r = √(972), not an integer if t = 9, r = √(648), not an integer (look at the prime factorization of 18 cubed) so the only case where r and t are integers with r > t is t = 2, r = 54, hence rt = 108. (2) 2x + 3x = 5x, the number of eggs in the basket must be a multiple of 5. 12 is not a multiple of 5. MY FOLLOW UP QUESTION This is for question 1. (I feel a bit sheepish for posting question 2 once I realized it From Denevo's reply: "since everything in sight is positive we can square both sides without fear. thus from: √18 = r√t, we have: (324)(18) = r^2t 18^3 = 5832 = r^2t. clearly t < 18, or else r^2t > t^3 > 5832. so t is some divisor of 18: 1,2,3,6,or 9." I think where I'm getting hung up in the explanation is the "clearly t < 18, or else r^2t > t^3 > 5832. so t is some divisor of 18: 1,2,3,6,or 9." I understand that if t > r that the result would be > 5832. The part I'm having trouble with is how to indentify that t is in fact less than 18 specifically. How do I know it's 18 and not some other integer? In other words, by saying this, it seems like saying that means you understand that r would be = 18, thus t < 18. From the problem, all I know is that r > t, and that the product of r*sqrt(18) is equivalent to the product of r*sqrt(t), and that r doesn't equal 18 nor does t equal sqrt(18). It seems like this is also confirmed by the solution that r = 54 and t = 2. I certainly hope the follow question I'm asking here is making sense. Also, is there a link/website or some section of math one would practice in a book/class for a problem of this nature? It's important to me to understand the "why" of it instead of memorizing "how" to do a particular problem type. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/algebra/210770-follow-up-tricky-alg-prob-repost-incorrect-thread.html","timestamp":"2014-04-20T09:18:48Z","content_type":null,"content_length":"32929","record_id":"<urn:uuid:df253539-4f68-4686-a962-b5ee95c5aed5>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
Initial value problem December 8th 2008, 02:50 AM #1 Junior Member Nov 2008 Initial value problem Hello! Can somebody help me find a solution of: $u_{tt}-u_{xx}-a(u_t-u_x)=0$ on $\mathbb{R} \times (0, \infty)$ $u=g, u_t=h$ on $\mathbb{R}\times \{t=0\}$ with $a \in \mathbb{R}$ I do not really have an idea how to solve this. Looks like separation of variables would work. Let $u(x,y)=X(x)T(t)$. When I plug that in I get: You can go from there right? Hi! Thanks for this answer. This works out, but i think it should be $\frac{T''}{T}+a\frac{T'}{T}=\frac{X''}{X}+a\frac{X '}{X}$, so one only has to solve the ODE $\frac{T''}{T}+a\frac{T'}{T}= const$, right? Ok, thanks for that correction. However maybe this is not the way to go since once you solve the ODEs I'm not sure how you would then go on to form some combination of them to satisfy the initial and boundary values like is done with the heat or wave equation not containing the single partials. Maybe Fourier Transforms are the way to go since one of the bounds is infinite. Not sure. Sorry. I'm rusty with this. Oh that's bad. One can just solve the pde but the initial values get into one's way. And i do not see how that can be repaired. Has anyone another idea how to approach this problem? December 8th 2008, 05:50 AM #2 Super Member Aug 2008 December 8th 2008, 09:36 AM #3 Junior Member Nov 2008 December 8th 2008, 11:27 AM #4 Super Member Aug 2008 December 9th 2008, 01:00 PM #5 Junior Member Nov 2008
{"url":"http://mathhelpforum.com/calculus/63905-initial-value-problem.html","timestamp":"2014-04-18T00:56:05Z","content_type":null,"content_length":"40415","record_id":"<urn:uuid:abad48ea-f688-47a4-8a3a-8f5ac8b9634a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: Algorithm-Based Fault Tolerance on a Hypercube Multiprocessor September 1990 (vol. 39 no. 9) pp. 1132-1145 ASCII Text x P. Banerjee, J.T. Rahmeh, C. Stunkel, V.S. Nair, K. Roy, V. Balasubramanian, J.A. Abraham, "Algorithm-Based Fault Tolerance on a Hypercube Multiprocessor," IEEE Transactions on Computers, vol. 39, no. 9, pp. 1132-1145, September, 1990. BibTex x @article{ 10.1109/12.57055, author = {P. Banerjee and J.T. Rahmeh and C. Stunkel and V.S. Nair and K. Roy and V. Balasubramanian and J.A. Abraham}, title = {Algorithm-Based Fault Tolerance on a Hypercube Multiprocessor}, journal ={IEEE Transactions on Computers}, volume = {39}, number = {9}, issn = {0018-9340}, year = {1990}, pages = {1132-1145}, doi = {http://doi.ieeecomputersociety.org/10.1109/12.57055}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Computers TI - Algorithm-Based Fault Tolerance on a Hypercube Multiprocessor IS - 9 SN - 0018-9340 EPD - 1132-1145 A1 - P. Banerjee, A1 - J.T. Rahmeh, A1 - C. Stunkel, A1 - V.S. Nair, A1 - K. Roy, A1 - V. Balasubramanian, A1 - J.A. Abraham, PY - 1990 KW - fault tolerance; hypercube multiprocessor; multiprocessor architecture; faulty processors; error detection; Intel iPSC hypercube; matrix multiplication; Gaussian elimination; fast Fourier transform; fault tolerant computing; multiprocessing systems; parallel architectures. VL - 39 JA - IEEE Transactions on Computers ER - The design of fault-tolerant hypercube multiprocessor architecture is discussed. The authors propose the detection and location of faulty processors concurrently with the actual execution of parallel applications on the hypercube using a novel scheme of algorithm-based error detection. System-level error detection mechanisms have been implemented for three parallel applications on a 16-processor Intel iPSC hypercube multiprocessor: matrix multiplication, Gaussian elimination, and fast Fourier transform. Schemes for other applications are under development. Extensive studies have been done of error coverage of the system-level error detection schemes in the presence of finite-precision arithmetic, which affects the system-level encodings. Two reconfiguration schemes are proposed that allow the authors to isolate and replace faulty processors with spare processors. [1] C. L. Seitz, "The Cosmic Cube,"Commun. ACM, pp. 22-33, Jan. 1985. [2] J. C. Peterson, J. Tuazon, D. Lieberman, and M. Pniel, "The Mark III hypercube-ensemble concurrent computer," inProc. 1985 Parallel Processing Conf., Aug. 1985, pp. 71-73. [3] I. Koren, "A reconfigurable and fault-tolerant VLSI multiprocessor array," inProc. 8th Int. Symp. Comput. Architecture, Minneapolis, MN, May 1981, pp. 425-442. [4] D. K. Pradhan, "Fault-tolerant multiprocessor link and bus network architectures,"IEEE Trans. Comput., pp. 33-45, Jan. 1985. [5] R. Negrini, M. Sami, and Stefanelli, "Fault tolerance techniques for array structures used in supercomputing,"IEEE Comput. Mag., pp. 78-87, Feb. 1986. [6] D. A. Rennels, "On implementing fault tolerance in binary hypercubes," inProc. 16th Int. Symp. Fault-Tolerant Comput., Vienna, Austria, July 1986, pp. 344-349. [7] J. G. Kuhl and S. M. Reddy, "Fault diagnosis in fully distributed systems," inProc. 11th Int. Symp. Fault-Tolerant Comput., June 1981, pp. 100-105. [8] J. R. Armstrong and F. G. Gray, "Fault diagnosis in a Booleann- cube array of microprocessors,"IEEE Trans. Comput., vol. C-30, pp. 587-590, Aug. 1981. [9] E. Dilger and E. Ammann, "System level self-diagnosis inn-cube connected multiprocessor networks," inProc. 14th Int. Symp. Fault Tolerant Comput., Kissimmee, FL, June 1984, pp. 184-189. [10] R. K. Iyer and D. J. Rossetti, "Permanent CPU errors and system activity: Measurement and modeling," inProc. Real-Time Syst. Symp., 1983. [11] D. A. Rennels, "Fault tolerant computing--Concepts and examples,"IEEE Trans. Comput., vol. C-33, pp. 1116-1129, Dec. 1984. [12] K. H. Huang and J. A. Abraham, "Algorithm-based fault tolerance for matrix operations,"IEEE Trans. Comput., vol. C-33, pp. 518-528. June 1984. [13] J. Y. Jou and J. A. Abraham, "Fault-tolerant matrix operations on multiple processor systems using weighted checksums,"SPIE Proc., Aug. 1984. [14] J. Y. Jou and J. A. Abraham, "Fault tolerant FFT networks," inProc. 15th Int. Symp. Fault Tolerant Comput., Ann Arbor, MI, June 1985, pp. 338-343. [15] M. Malek and Y. H. Choi, "A fault-tolerant FFT processor," inProc. 15th Fault-Tolerant Comput. Symp., Ann Arbor, MI, June 1985, pp. 266-271. [16] F. Luk, "Algorithm-based fault tolerance for parallel matrix solvers," inProc. SPIE Real-Time Signal Processing VIII, vol. 564, 1985. [17] P. Banerjee and J. A. Abraham, "Fault-secure algorithms for multiple processor systems," inProc. 11th Int. Symp. Comput. Architecture, June 1984, pp. 279-287. [18] A. L. N. Reddy and P. Banerjee, "Algorithm-based fault detection for signal processing applications,"IEEE Trans. Comput., 1990. [19] C.-Y. Chen and J. A. Abraham, "Fault-tolerant systems for the computation of eigenvalues and singular values,"Proc. SPIE, Advanced Algorithms Architectures Signal Processing, vol. 696, pp. 228-236, Aug. 1986. [20] P. Banerjee and J. A. Abraham, "Bounds on algorithm-based fault tolerance in multiple processor systems,"IEEE Trans. Comput., vol. C-35, pp. 296-306, Apr. 1986. [21] P. Banerjee and J. A. Abraham, "Concurrent fault diagnosis in multiple processor systems," inProc. 16th Fault Tolerant Comput. Symp., Vienna, Austria, July 1986, pp. 298-303. [22] G. C. Fox, M. A. Johnson, G. A. Lyzenga, S. W. Otto, and J. K. Salmon, inSolving Problems on Concurrent Processors. Englewood Cliffs, NJ: Prentice-Hall, 1989. [23] C. Aykanat and F. Ozguner, "A concurrent error detecting conjugate gradient algorithm on a hypercube multiprocessor," inProc. 17th Int. Symp. Fault-Tolerant Comput., Pittsburgh, PA, July 1987, pp. 204-209. [24] J.-C. Laprie, "Dependable computing and fault tolerance: Concepts and terminology," inProc. 15th Annu. Symp. Fault-Tolerant Comput., June 1985, pp. 2-11. [25] M. Schuette and J. P. Shen, "Processor control flow monitoring using signatured instruction streams,"IEEE Trans. Comput., vol. C-36, pp. 264-276, Mar. 1987. [26] A. Mahmood and E. J. McCluskey, "Concurrent error detection using watchdog processors--A survey,"IEEE Trans. Comput., pp. 160-174, Feb. 1988. [27] C. J. Weinstein, "Roundoff noise in floating point fast Fourier transform computation,"IEEE Trans. Audio Electroacoust., vol. AU-17, pp. 209-215, Sept. 1969. [28] G. A. Geist and M. T. Heath, "Matrix factorization on a hypercube multiprocessor," inProc. SIAM 1st Conf. Hypercube Multiprocessors, Knoxville, TN, Aug. 1985. Index Terms: fault tolerance; hypercube multiprocessor; multiprocessor architecture; faulty processors; error detection; Intel iPSC hypercube; matrix multiplication; Gaussian elimination; fast Fourier transform; fault tolerant computing; multiprocessing systems; parallel architectures. P. Banerjee, J.T. Rahmeh, C. Stunkel, V.S. Nair, K. Roy, V. Balasubramanian, J.A. Abraham, "Algorithm-Based Fault Tolerance on a Hypercube Multiprocessor," IEEE Transactions on Computers, vol. 39, no. 9, pp. 1132-1145, Sept. 1990, doi:10.1109/12.57055 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tc/1990/09/t1132-abs.html","timestamp":"2014-04-18T18:49:20Z","content_type":null,"content_length":"58868","record_id":"<urn:uuid:7ad795cd-eaf0-491e-badd-dc72cf712424>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Using schemes to prove things about rings up vote 10 down vote favorite I apologize for asking a big list question, I've tried to avoid doing so for a while. I'll give my justification in a moment. The question is as follows: What are examples of strict applications of the language of schemes/stacks/algebraic geometry to commutative rings? Here a "strict" application means that the statement of the problem can be formulated without using any algebro-geometric language (stick to rings and modules and complexes, etc.) but a solution either requires or is very naturally obtained by using algebro-geometric language. I don't know examples of this phenomenon off the top of my head, but here are two examples from algebraic topology: 1. Work on exotic spheres via homotopy theory (an example where this is the only known method to produce the results.) 2. (one of) Quillen's proof(s) of the Atiyah-Swan conjecture. While there is a purely algebraic, group-cohomological proof, it turns out to be very natural to prove this theorem using spaces with an action, as opposed to specializing to when the space is a point. Motivation Thanks to work of (insert all the usual suspects here), we now have a very strong theory of spectral algebraic geometry, i.e. algebraic geometry done with commutative ring spectra as opposed to commutative rings. While I don't know of any (hence this question), I am positive there exist strict applications of algebraic geometry to ring theory. It would be very neat if we could transplant these into strict applications of spectral algebraic geometry to the theory of ring spectra. Obviously I don't expect this to be straightforward, or literally possible, but I maintain that answers to this question would provide a useful insight in how to think about the relationship between non-affine and affine phenomena. ag.algebraic-geometry ac.commutative-algebra big-list 2 Why do you apologize for asking a big list question? – Qfwfq Feb 15 '13 at 16:25 2 Because many are fluffy, and fluff is often frowned upon – Dylan Wilson Feb 15 '13 at 16:31 Probably local Riemann Roch Theory is one such example (cf. Multiplicities and Chern Classes in Local Algebra by Paul Roberts). – Mahdi Majidi-Zolbanin Feb 15 '13 at 16:41 No doubt you've had a look at the answers to this famous question: mathoverflow.net/questions/59071/… – Mark Grant Feb 15 '13 at 16:43 @Mark: Yes and it's brilliant, but seems to err more on applications of algebraic geometry to classical algebraic geometry or number theory, as opposed to ring theory. – Dylan Wilson Feb 15 '13 at show 4 more comments 2 Answers active oldest votes Some examples: A. A noetherian commutative ring has only finitely many minimal prime ideals. This is just a corollary of the easy observation that a noetherian space has only finitely many irreducible B. The tensor product of two reduced (integral) $k$-algebras, where $k$ is an algebraically closed field, is again reduced (integral). After reducing to the finite type case, the argument of the proof is essentially geometric. C. Diophantine equations, for example Fermat's Last Theorem (classify ring homomorphisms $\mathbb{Z}[x,y,z]/(x^n+y^n-z^n) \to \mathbb{Z}$), are (approximately) solved with the machinery of elliptic curves. The equation $x^2+y^3=z^7$ even needs algebraic stacks (see here)! D. The classification of boolean rings. Or more generally rings whose elements satisfy a polynomial equation. As compared to the modern proof ($\underline{\mathbb{F}_2} \to \mathcal{O}_{\ mathrm{Spec}(R)}$ is an isomorphism at stalks, hence globally), Stone's original one is quite clumsy. But of course, the historical importance of Stone's work cannot be overestimated. He can be seen as one of the innovators of the ideas of scheme theory. E. The classification of integral domains generated by a single element. This comes down to the classification of prime ideals of $\mathbb{Z}[X]$, which is best done by looking at the fibers of $\mathrm{Spec}(\mathbb{Z}[X]) \to \mathrm{Spec}(\mathbb{Z})$. As in the examples above, this can also be done purely algebraically, but then it gets clumsy. F. Define an $R$-module $M$ to be locally free of finite rank if there are elements $\{f_i\}$ of $R$ generating the unit ideal such that $M_{f_i}$ is free of finite rank over $R_{f_i}$. There is a purely algebraic proof that $M$ is locally free of finite rank if and only if $M$ is flat and of finite presentation, if and only if $M$ is finitely generated projective. However, at least for me as a beginner, it was hard to really grasp what is going on in that proof. But when you view $M_{f_i}$ as the restriction of (the quasi-coherent sheaf associated to) $M$ to (the open subscheme defined by) $R_{f_i}$, every step is clear as crystal. More generally, there are many theorems about modules over commutative rings which are best formulated, understood and proven more generally for quasi-coherent sheaves on a scheme. For another example, see David Lehavi's comment to Emerton's slick proof of the structure theorem for finitely generated modules up vote over a PID. 13 down vote G. There are non-isomorphic commutative rings $R,S$ such that $R[x]$ and $S[x]$ are isomorphic. The first example was found by Mel Hochster with geometric ideas (see here). H. Affine algebraic geometry is full of problems, which can be formulated in terms of ring and module theory, but are attacked with algebraic-geometric methods. For a survey, see here. Perhaps I should stop here, because the list will never end ... I. There are various algebraic constructions and invariants for rings which are best understood in geometric terms, such as the Krull dimension. The associated graded ring $\bigoplus_n I^n / I^{n+1}$ of an ideal $I \subseteq R$ roughly contains the infinitesimal information of $\mathrm{Spec}(R)$ at the closed subscheme $V(I)$. Modules of differentials provide another infinitesimal invariant. For function fields over perfect fields we have the geometric genus (of the corresponding proper normal curve). As always, such invariants are useful for example when one wants to prove that two rings are not isomorphic, replacing painful direct computations (see for example math.SE / 128918, 151714, 296737), but also to provide parameters for a possible J. Even projective schemes are useful in general ring theory, in particular in the context of homogeneous polynomials, see for example Will Sawin's answer in MO/110250, François Brunault's answer in MO/98043 and Qiaochu Yuan's answer at MO/14076. K. (recreational) In the game on noetherian rings, a move consists of replacing a ring $R$ by $R/(a)$ for some $0 \neq a \in R$. You win when your oponent gives you the trivial ring. A complete analysis of this game is still out of reach, but the first attempts by Will Sawin and Kevin Buzzard illustrate the usage of algebraic geometry. Actually it is a game on (affine) schemes, where each move replaces $X$ by a closed subscheme $X' \subseteq X$ cut out by a single nontrivial equation. L. Let $k$ be a field and $A \to C \leftarrow B$ homomorphisms of finitely generated $k$-algebras. Is the fiber product $A \times_C B$ again finitely generated? At first sight this seems to be elementary and should be well-known for decades, but it seems to be an open problem(?). Jakob Scholbach has proven it when $A \to C$ and $B \to C$ are regular (i.e. the ideal is generated by codim many elements), using quite a bit of projective algebraic geometry. These are wonderful! Keep 'em comin :) Also, it's probably too later but it would be nice if these were separate answers so people could vote on them independently... – Dylan Wilson Feb 15 '13 at 20:09 Whoops: **too late – Dylan Wilson Feb 15 '13 at 20:10 I cannot see where in lies the difference between the geometric argument and the algebraic argument for e). – Mariano Suárez-Alvarez♦ Feb 15 '13 at 22:08 @Mariano: One uses that the underlying space of the scheme-theoretic fiber is the set-theoretic fiber. When you try to do it algebraically, without knowing anything about fibers, one has to check that one really can recover the prime ideals from the corresponding prime ideals in $\kappa[X]$, where $\kappa$ is a residue field. Again this gets clumsy. See here: math.stackexchange.com/questions/174595/…. – Martin Brandenburg Feb 15 '13 at 22:45 The argument sketched there by Arturo does not seem clumsy at all to me, really. And it is more or less the same as any sensible geometric argument I can think of, modulo language. – Mariano Suárez-Alvarez♦ Feb 16 '13 at 1:50 add comment Primes in a (commutative) Jacobson ring up vote 2 down vote This question was phrased purely algebraically, but I only arrived at the solution by geometric arguments. I think this is a general theme of commutative algebra. – Martin Brandenburg Feb 15 '13 at 21:57 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry ac.commutative-algebra big-list or ask your own question.
{"url":"http://mathoverflow.net/questions/121916/using-schemes-to-prove-things-about-rings?sort=oldest","timestamp":"2014-04-16T04:41:28Z","content_type":null,"content_length":"73467","record_id":"<urn:uuid:ac78be7c-e801-421c-b62f-b9c77fc836cd>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Egg Study Redux: Correcting the Stats | Mother Nature Obeyed - Weston A Price Foundation Egg Study Redux: Correcting the Stats In my last post, I criticized the recent study purporting to show that egg yolks increase atherosclerosis. After corresponding with the lead author, Dr. J. David Spence, I realize I made an error in the way I described the statistical analysis, partly due to my own hastiness and partly due to the lack of clarity in the original report. In this post, I’d like to take a stroll through the authors’ arguments step by step, pointing out the strengths and limitations of each argument, and pointing out where I made my own errors. The authors make three statistical arguments: • Atherosclerosis increases linearly with age, but increases exponentially with egg-yolk years. The exponential increase seen with egg-yolk years resembles that seen with pack-years of smoking. • After adjusting for age, those who consumed more than three eggs per week had more atherosclerosis than those who consumed less than two eggs per week. • After adjusting for sex, total cholesterol, systolic blood pressure, body mass index, and smoking, “egg-yolk years” predicted atherosclerosis in a multiple linear regression model. Eggs and the Exceptionally “Exponential” Curve Let’s take the first argument. Here the authors show that atherosclerosis increases roughly in a straight line with age, in both males and females: Here the authors show that with increasing “egg-yolk years,” atherosclerosis increases not in a straight line, but according to an “exponential” curve: The authors then compare this to a similar “exponential” curve relating atherosclerosis to pack-years of smoking. Here is their conclusion: The exponential nature of the increase in [total plaque area] by quintiles of egg consumption follows a similar pattern to that of cigarette smoking. The effect of the upper quintile of egg consumption was equivalent in terms of atheroma development to 2/3 of the effect of the upper quintile of smoking. In view of the almost unanimous agreement on the damage caused by smoking, we believe our study makes it imperative to reassess the role of egg yolks, and dietary cholesterol in general, as a risk factor for CHD. The subtle argument seems to be that if the increase with age is linear, but the increase with “egg-yolk years” is exponential, then the increase with “egg-yolk years” must not be due to age alone. Thus, it reflects “egg consumption,” and they promptly begin using this phrase as if it is interchangeable with “egg-yolk years” once the discussion commences. There’s just one problem: “egg-yolk years” is a composite measurement that includes age (more precisely, how many years the person had been consuming eggs) and the number of whole eggs eaten per week (which they call “yolks” because they wish to blame cholesterol). Before we let age off the hook, let’s take a look at the relation between age and egg-yolk years. After all, a second grader might predict that age would increase linearly with age, and in fact may correlate perfectly with its own self, but if age is the culprit lurking behind the shadows of “egg-yolk years,” there’s no reason to assume we would find the proof in a perfectly linear pudding. In other words, if age is the only thing that matters, and it increases along a curve with “egg-yolk years,” then the increase in plaque with increasing “egg-yolk years” should follow a similar curve rather than a straight line. In such a case, the curve would hardly suggest that something more than age were operating. I don’t know very much about curve fitting, so I’ll avoid any detailed analysis and I look forward to any criticisms that readers more experienced in this area would like to leave in the comments. I used Microsoft Excel to judge the fit between the dots and various types of lines, and I used Graphpad Prizm to make the graphs. The error bars look smaller in my graphs than in those of the original paper because they represent standard error in mine rather than standard deviation. Here are the dots with no line: As we can see below, the relationship fits only half-decently to a straight line, with about 84 percent agreement between the dots and the line: It doesn’t fit any better to an exponential curve, with only 84 percent agreement: This is unsurprising, since exponential curves fit well when the data values are rising or falling at an increasingly greater rate. On the contrary, these values are going nowhere until the last two quintiles, and the first of these two jumps is greater in size than the second. It fits perfectly to a fourth-order polynomial, which is characterized by three major hills or valleys: And it fits quite nicely to a simpler second-order polynomial, which is characterized by one major hill or valley, with about 95 percent agreement between the dots and the line: Now let’s take a look at the egg component of “egg-yolk years.” Here are the dots ready to be connected: I suppose you can get anything with five dots to fit a fourth-order polynomial perfectly: But a simple straight line fits this graph just as nicely as the second-order polynomial fit the graph for age, with the dots in roughly 95 percent agreement to the line: The straight line fits better than an exponential curve, where the agreement between the dots and the line is only about 90 percent: A second-order polynomial fits it slightly better than a straight line, with 97 percent agreement, but even here there is only a slight curvature in the line: I’ll let the statistics buffs have the last word in the comments, but I think it would be fair to say that neither graph is exponential, and that the egg graph is approximately linear. Let’s look at which curve corresponds more closely to the increase in plaque. Here are the dots: Just as with the curve for age, the dots fit only half-decently to a straight line, with about 85 percent agreement: It fits an exponential curve only slightly better, with 89 percent agreement: We’ll see below that the exponential curve isn’t the best fit. This is not terribly surprising. Just like the graph for age, the values seem almost flat at first and then surge in the last two quintiles, rather than steadily rising at an increasingly greater rate. Just as with both of the previous graphs, the dots converge on a second-order polynomial line quite nicely, in this case with about 98 percent agreement: So let’s juxtapose the plaque curve against the curves for eggs and age, and see which makes a better match. I’ll use the second-order polynomial curves for each of them. Let’s look at eggs first: The match isn’t terrible, but there’s quite a big gap between the curves. Now we’ll look at age: Almost a perfect match! I’m not sure if this exercise is much more productive than playing a game when it comes to uncovering the actual cause-and-effect dynamic hiding behind these relationships. I do think, however, that it undermines the first argument from the original report: the “exponential,” or at least curvilinear, nature of the relationship between plaque accumulation and “egg-yolk years” is hardly evidence that something more is happening than folks are accumulating plaque as they get older. Indeed, age may be the boogeyman lurking behind the shadows of “egg-yolk years.” Of course, the only way to assess the independent contribution of eggs would be to look at eggs, rather than “egg-yolk years.” This brings us to their second argument. Does Atherosclerosis Increase With Eggs Per Week? Previously, I had shown this graph, which depicts the very small and statistically insignificant difference in plaque area between those consuming less than two eggs per week and those consuming more than three: And here is where I made my first blunder. I stated that this analysis was never adjusted for age. I was wrong. I misunderstood the original report as meaning that the multiple regression analysis I’ll discuss in the next section was adjusted for age. After discussing this with Dr. Spence, I realize that this was the only analysis adjusted for age, using a technique called analysis of covariance, where eggs per week was entered as the independent variable and age as the covariate. This made the results statistically significant at P<0.0001. Unfortunately, the report does not describe the methods in much detail. To use this method of adjustment, the data must satisfy two key assumptions: • Eggs per week must not be correlated with age. • The slope of the line relating plaque accumulation to age within the low-egg group must be parallel to the slope of the same line in the high-egg group. I was able to learn from Dr. Spence that although older people tended to consume fewer eggs, the relationship was not statistically significant. Thus, the first assumption was satisfied. I was unfortunately unable to learn whether the second assumption was satisfied. The importance of satisfying the second assumption might become clearer if we take a look at a graphical depiction of the procedure. This is my own graph, adapted from Figure 11.7 in Statistical Methods in Medical Research: The two black dots represent the unadjusted data points. Those who consumed more eggs were slightly younger and had slightly more plaque. Since the slopes of the two lines are parallel in this hypothetical example, we could determine their distance at any point for a given age and this would be the age-adjusted difference between the two groups. The simplest way to do this would be to meet in the middle of the two data points. Thus, the dotted line I drew represents the estimated difference between the two groups if the mean age in both groups were just over 61. This type of analysis breaks down completely if the lines aren’t parallel. If we consider the possibility that consuming eggs can affect plaque accumulation, then we should consider the possibility that consuming eggs can affect the relationship between plaque and age. If it does, the lines would not be parallel. Imagine, for example, that eating more eggs decreases the rate that plaque accumulates with age: Where should I draw the dotted line? The length of the dotted line, representing the “adjusted” difference between the two groups, would be different depending on where I drew it. Over the age of 65, it would even turn the results on their head and show that plaque was slightly lower among people who ate more eggs. This type of adjustment would be meaningless, and this is the reason the lines must be parallel to perform it. Since the authors do not disclose whether the assumption of parallel lines was met, we have no idea if the adjustment for age was accurate. In my opinion, this comparison should be ignored unless the authors offer further details in the future, perhaps in response to letters to the editor in the journal. That Good Ol’ Multiple Regression Analysis I had previously stated that the multiple regression analysis was adjusted for age, but it was not. The analysis was adjusted for sex, total cholesterol, systolic blood pressure, body mass index, and pack-years of smoking. “Egg-yolk years” predicted atherosclerosis independently of these other factors. I learned from Dr. Spence that this model was not adjusted for age because age is incorporated into “egg-yolk years” and pack-years of smoking. Here’s the problem: Why should we attribute the association with “egg-yolk years” to egg yolks rather than to age? As far as I can tell, there is no reason at all. After having corrected the errors I made in my previous post, I am even less convinced that this study shows anything other than that people develop more plaque as they get older. Read more about the author, Chris Masterjohn, PhD, here. 54 Responses to Egg Study Redux: Correcting the Stats 1. I’m not convinced either. Most importantly to me, the study did not control for toast eaten with all those eggs. □ or the jam on that toast. Way too many possibilities. 2. The biggest drawback that hinders the authors’ conclusion is the data itself. Playing statistical games is exactly that. How can anyone believe that a person will recall their consumption of a specific food over the time-frame indicated and expect any accuracy. Truly amazing that researchers get to publish this crap – I guess their “peers” are tainted as well. 3. Chris, I just read the numbers off your first graph and played around with them myself. I believe the second-order polynomial is not significantly better than the first-order polynomial (straight line). In my fit, the x^2 term was not significant. The higher-order polynomial is of course overfitted. I’m not quite sure why you included it. Curiously, the linear fit and the exponential fit seem to perform nearly equally well. I didn’t do a formal hypothesis test of one model against the other, but the R^2 is basically the same, so we have little reason to conclude one model fits better than the other. A quick test to see if data follow an exponential relationship is to log-transform the y axis and check if the relationship becomes more linear. In your age vs. quintile-of-egg-yolk-years plot, the plot looks basically unchanged when log-transforming age, hence the exponential model is not much better than the linear For the analysis of covariance, it would be easy to test for an interaction term if the raw data were available. Do you have the raw data? Finally, since age and egg-yolk years are highly correlated, it is statistically correct to not include both variables in the multiple regression. However, that also means that we cannot statistically disentangle their separate contributions. In general, though, I agree. It looks like age is the underlying causal variable. Claus Wilke University of Texas at Austin □ Hi Claus, Thank you so much for your comments! I didn’t do any formal statistical tests. As I stated, I don’t have any experience curve fitting and I didn’t want to pretend to be capable of a formal analysis with the potential to make mistakes. And like you indicated, I’m not sure how productive it is to play around with quintiles like this anyway, so I’m not sure it justifies the effort of a formal analysis. I was simply evaluating what the authors did, which was to simply state that plaque versus yolk-years fit an exponential curve better than a linear curve, and to henceforth refer to the “exponential nature” of the effect of “egg consumption.” The main purpose of drawing attention to the second-order polynomials was to draw attention to the facts that a) just because something doesn’t visibly increase until the later quintiles doesn’t make it “exponential,” and b) it was a good and simple fit for juxtaposing the various curves against one another to see which pair looked most similar. I agree the fourth-order polynomial is over-fitted, and I mostly included it for fun. Like I said, you can get pretty much anything to perfectly fit something allowing so few data points to follow that many curves. For the analysis of covariance, I agree. I would be surprised if they give you the raw data, but if they do, I’d love to hear what you find, so please write back. In my experience, Dr. Spence was extraordinarily forthcoming initially and very willing to re-analyze the data. However, when I asked him about the interaction term for the ANCOVA, he asked me how to do the analysis, so I forwarded him an SPSS tutorial on it (which is the program they used), and after that he stopped engaging me and never told me what the result was. I agree with your assessment of the multiple regression. I didn’t mean to blame them for not including age, but was simply clarifying my previous misunderstanding. Thanks so much for contributing! 4. Pingback: The Daily Lipid: Egg Study Redux 5. Hi Claus I agree with your conclusion about the multiple regression. Age has been distributed into the yolk-years and pack-years terms, as I averred in the discussion to the last post. Thus, this model *implicitly* includes age but cannot *explicitly* control for it. I think all the bases have been covered about these data. There’s no good reason to buy the argument that the response is exponential- let alone the inferences that the authors draw from that. As Chris showed, the response of yolk-years to age is identical. □ edit: identical in seeming appearance (and therefore just as sound of an inference) 6. It’s frustrating to me, to read a report/conclusion regarding a “study” done on a food such as eggs (this article), when that food was not studied in it’s raw vs cooked/heated state. Cholesterol in egg yolk has a different effect when it’s heated than when raw, as in oxidized from heat. This includes meats, fats as well. 7. I have contacted the authors and requested the raw data. Playing around with quintiles is useless, we cannot really draw any conclusions from that. I tend to be rather skeptical about ordinary multivariate regression and step-wise selection on high-dimensional data sets. All kinds of things can go wrong, in particular when predictors are collinear (as is the case here). It would be interesting to try some more robust methods, e.g. PCA or LASSO. □ I’ll be curious to follow your results if the authors agree to let the data to you! ☆ See my longer post below. I got the data but can’t comment on it. □ Hi Claus, I agree playing around with the quintiles is useless, but this is what the authors did in the paper. They never reported a formal statistical analysis of linearity, and their comments referred to the graph of increase by quintile, not to the relation between the individual data. Other multivariate statistical methods are way out of my realm of expertise, far more than even lowly curve fitting. If you get your hands on the raw data, I’d certainly be interested in what you get with other methods! 8. I wonder if the participants were prompted to include pancakes, cakes, batter, etc. as eggs. If so, much confounding material is included – trans fats, sugar, flour. If not, the tally of egg yolks may be seriously inaccurate. □ Yes, not to mention it may be difficult to recall things when you’ve just had a stroke or transient ischemic attack. The potential for confounding is endless here, but the association doesn’t seem to even be real anyway. 9. Dear Chris, Why do studies seldom compare a raw food vs a cooked/heated food before they conclude how that food effect one’s health? In this case, consuming cooked eggs would effect health differently than consuming the eggs raw. Heating any cholesterol containing food can oxidize it’s cholesterol producing Oxy-cholesterol, which can be problematic to health. Any comments? □ Hi Garry, I agree with you that these things should be taken into account. I didn’t address that here because it seems the association is very unconvincing in itself. I would love to see high-quality studies that look at different ways of cooking eggs, and at raw eggs, with and without yolks. I tend to feel better when I eat raw egg yolks instead of cooked eggs. ☆ Hi Chris, I suppose a study comparing the effects of raw egg yolk vs cooked yolk would be rather extensive. Oxy cholesterol, enzyme denaturing, nutrient reduction and such would need evaluating in order to access the effects on health, eh? Thanks Chris, ☆ Also Chris, are you saying the difference between a heated/cooked egg yolk and raw yolk is not covered because the difference is not measurable … that cooking an egg doesn’t change the integrity of it’s raw benefits? Please clarify. Thanks again, 10. Dr. Spence has graciously provided me with his data set, under the condition of confidentiality (which I respect). I have investigated the data set and have drawn my own conclusions, which I will not resent here. Instead, I will point out a few issues that I have with the statistical analysis as presented in the paper. First, I would like to emphasize two things, though: (a) These are my personal opinions. Other people might disagree. (b) Read through the references I provide and think for yourself. 1. Step-wise variable elimination is problematic, in particular when not subjected to bootstrapping and/or cross-validation. It is extremely easy to arrive at meaningless or misleading models. This issue is very well known, and entire books have been written on it (e.g., Harrell, Regression Modeling Strategies). 2. Multi-collinearity is a big problem when trying to identify relevant predictors. Traditional regression models fail under multi-collinearity. 3. Linear regression models assume that predictors are measured without error. Clearly, in an observational study, most predictor variables have just as much error as the response variable. Hence, linear regression models use an incorrect error structure, and this can result in misleading model predictions. (What I mean by incorrect error structure is explained e.g. here: http://www.r-bloggers.com/principal-component-analysis-pca-vs-ordinary-least-squares-ols-a-visual-explanation/ ) Points 2 and 3 can be addressed at the same time by carrying out a principal component analysis (PCA). PCA treats all predictors equally and handles multi-collinearity without problem. However, traditional PCA does not do well with variable selection. It often produces components that are confusing mixtures of many variables, and whose interpretation is unclear. One way around this problem is sparse PCA (SPCA), using lasso/elastic net regularization (http://users.stat.umn.edu/~zouxx019/software.html). Sparse PCA provides principal components in which most variables have zero loadings. The result is components that tend to have clear Incidentally, lasso regularization is also currently considered to be one of the best available solutions to problem 1, variable selection. An alternative way to analyze the data would be to carry out a lasso-regularized regression. The advantage (over PCA) would be a clear distinction between predictors and response, at the cost of an incorrect error structure. I personally prefer PCA, but I would not object to a properly cross-validated regression model. It would probably be best to do both. If they don’t substantially agree, something is amiss. 4. It is not statistically sound to group data into quantiles and then try to make inferences based on the averages within these quantiles. Considering that the paper argues for the strong relationship between egg years and plaque area, it is notable that a scatter plot of one of these variables against the other is absent, as is a simple correlation coefficient. The dangers of data into quantiles are explained in detail here: In a nutshell, this practice can create seemingly strong relationships out of rather weak ones. I have made Dr. Spence aware of these concerns of mine. □ Hi Claus, Thank you so much for your comments! Very helpful! 11. Sorry, a typo: “which I will not *present* here” 12. Thank you Claus. I do want to point out that it is convention in some circles to include all main effects in the model when any of the higher order effects containing it are also included. Then again, the folks who do this are probably well aware of the pitfalls of regression in general. Thank you for addressing the problem of using regression with the quintiles, technically, a sub-optimal method, or at least one fraught with danger. It is just easy to find things that ain’t there. □ That is correct. When you use a higher-order term, you have to include all relevant lower-order terms as well, otherwise the significance of your higher-order term doesn’t remain invariant under rescaling of the data. Did I say anything anywhere that would have suggested the contrary? ☆ Nope, but the suggestion was somewhere around here. 13. Here is the problem Chris. If they had controlled for age, given the very high correlation between age and egg-yolk years (eyys), you’d have one of two things happen (or even both): (a) age and eyys would be collinear, which could distort the results to the point of making nonsensical; and/or (b) age would capture all of the variance (R-squared) in the variable measuring plaque, making the relationship between eyys and plaque disappear. Age and eyys seem to be too highly correlated (in a linear way) to be treated as separate variables (they are redundant, or collinear), regardless of the nonlinear relationships you discussed. □ Hi Ned, Thanks for your comments! I didn’t mean to suggest that they *should* have incorporated age into the multiple regression. I simply thought they *did* (which struck me as non-sensical at the time, and you pointed out the problem with collinearity in my first post), and then found out I was wrong so corrected myself in the second post. I imagine you might agree that creating the egg-yolk years variable is deeply problematic in the first place? Most people’s answer to “how long have you been consuming eggs?” will be roughly equivalent to their age, perhaps minus a weaning period. It’s impossible to separate the effect of eggs from that of age. It seems it would have been better to include eggs and age separately in the multiple regression. ☆ Using eyys in a MR analysis is problematic for the reason you pointed out in your previous post – eyys and age seem to measure the same thing, namely age. So, other measures should be used in a MR; eggs/w is not a very good one, but it is certainly better than eyys. And, eggs/w seems to be associated with decreased LDL cholesterol (!) and with what seems to be a protective moderating effect: ○ Sorry, in my comment above I meant to say, at the end, that “eggs/w is not associated with LDL cholesterol” instead of “eggs/w seems to be associated with decreased LDL cholesterol”. Plaque, on the other hand, seems to be negatively associated with LDL cholesterol. □ It seems obvious that age and egg years should be highly correlated, but if you think about it a little harder you realize this need not be the case. Let’s make a thought experiment: According to the numbers in Table 2, mean eggs per week over the whole data set must be somewhere around 1.5 or so. We can assume that some people eat no eggs, and other people may eat 10 eggs or more per week. However, nobody eats a negative number of eggs. So let’s model egg consumption per week as an exponentially distributed variable with mean 1.5. Also according to Table 2, mean age is somewhere around 60, with a standard distribution of around 15, give or take. Let’s assume age is normally distributed. Based on these two assumptions, let’s generate random data, calculate eggyears, and see how eggyears correlate with age: > eggspw = rexp(1000,1/1.5) > age = rnorm(1000, mean=60, sd=15) > cor.test(age, eggspw*age) Pearson’s product-moment correlation data: age and eggspw * age t = 7.3928, df = 998, p-value = 3.038e-13 alternative hypothesis: true correlation is not equal to 0 95 percent confidence interval: 0.1682440 0.2858162 sample estimates: These “highly correlated” variables have a correlation coefficient of only 0.23. Only about 5% of the variation in age is explained by the variation in eggyears. Hence, we would be safe to include both eggyears and age in the regression model and could evaluate their contributions independently. Note: This is an entirely hypothetical example, put together entirely on the basis of information in Table 2. I am not claiming that this example shares any resemblance to the actual data ☆ The reason why eggyears and age correlate so weakly in my example is because the variance in eggs per week is so high relative to its mean, while the variance in age is relatively small compared to its mean. As a consequence, large eggyears scores almost exclusively indicate high egg consumption, not high age. ☆ Hi Claus, Good point, and thank you. Naturally, I don’t know whether age and egg yolk-years were correlated in the original data set. I’m still skeptical, however, whether transformation of eggs/ week to egg-yolk years does anything but confound the latter measurement with an age component. I’m skeptical that age wouldn’t correlate almost perfectly with length of time consuming ○ > I’m skeptical that age wouldn’t correlate almost perfectly with length of time consuming eggs. Yes, of course they would. What I’m saying is that we should expect the product of eggs per week and age (i.e., eggyears) to correlate much more strongly with eggs per week than with age. As a consequence, eggyears should be only weakly confounded by age, and basically contain the same information as eggs per week. In my made-up example above, eggyears correlate with eggs per week with r=0.93. On that note, it should strike you as odd that eggs per week showed no effect in the paper, but eggyears did. The only reasonable explanation is that the regression model picked up on the small amount of information about age contained in eggyears, and that the effect would have disappeared if age had been in the model. ■ Interesting, Claus, thanks! ☆ I just read your post on the hypothetical set of data where you break down the average of how many eggs were eaten from 0 up to 10 per week. The results from the study were averaged out to 1.5. Does it not make sense that when this study was done that they would have processed the data in many ways and then published it according to what result they wanted to show such as the 1.5 egg yolks a week average? One has to wonder why they wouldn’t have shown the stats for the 10 egg a week people and also the zero egg a week people. That would be much more revealing. We are all experiments in progress and don’t know the results until much later down the road. I am eating a paleo diet now and am feeling much better. I am currently eating 3 fried eggs a day at breakfast. 14. Another factor that I don’t think has been mentioned is what the hens that laid the eggs were eating, such as corn or soy or who knows what. Very few eggs consumed are from pasture-raised chickens who hopefully are eating as nature intended. Unfortunately, the food most people buy is of terrible quality which might throw off a lot of these food-based tests. □ Lisa, I agree with your comment more than any of the statistical banter. Studies rarely give consideration to the quality of the food source. Eggs from hens that are pasture-raised eating their natural diet and getting exercise versus eggs from hens that are caged and fed “garbage”. Same with beef; 100% grass-fed pasture roaming versus penned, grain-fed “quickly fattened” ☆ There lies the problem with studies like this. Not only are they looking for eggs to be the link between heart disease and high cholesterol and such, but they don’t look at quality. They also don’t look at other foods eat, like someone pointed out. what if they have an egg with there pancake with margarine and syrup, fruit, OJ, Coffee breakfast, 2-3 times a week? Honestly I think all the other items other than the eggs are responsible for heart problems. Add that all together with low quality product. It seems to me that the study is quite worthless and a waste of time. If your gonna do something do right, not half a*sed which is what many studies i read of are like. Bad science. 15. Per Ned’s and Claus’s comments. I think we’re getting a little lost here. It seems to me an easy way of expressing the crux of the question is this: are older people more or less likely to consume more eggs on a regular basis than younger people? If so, are the consumption patterns of these different age-cohorts consistent over long times spans (i.e. the decades needed to accumulate plaque)? If the answer to these questions is yes, than we have reason to suspect that the correlation between age and yolk-years will be weaker than if different age-cohorts have a similar mean egg intake. Whether such a weak correlation would justify the use of the term “yolk-years” in a statistical model is a question for model selection, in my mind. The answer to the second is Chris M’s point about the reliability of asking a question like “how long have you been eating eggs?” (and then extrapolating current frequency over those years!!!!). What kind of evidence would we really need to accurately assess this? In short, I don’t see how any of this discussion salvages the author’s original regression model from the bin of irrelevance…It still lumps the effect of age into the “yolk-year” term (based on how the author’s calculated yolk-years and our inability to answer the questions I posed above). □ I’m not sure. To me, the crux of the question is whether the current data set, even if we take it at face value and disregard all the limitations of study design and so on, implicates egg years as a risk factor at all. That has not been convincingly established, as far as I can see. In fact, I doubt that if the data had been analyzed in a double-blind manner (where the statistician building the model does not know what the predictor variables are) egg-years would have been selected as a meaningful predictor. ☆ Well, I guess you have privileged insight in this case, so I’ll go ahead and take your word for it 16. Chris! Seems to be a problem with the addresses to the pictures. □ What’s wrong with them? 17. I completely agree, Chris! There are way too many variables to make such a sweeping statement and attempted correlation between egg yolk consumption, age, and plaque. Once again a situation where incorrect data has been fed to the public with a resultant fear-response in hopes of creating a cause-and-effect result to support an unsupportable hypothesis! There are so many influences and causes of plaque formation – as well as the egg itself – that there is no single factor that anyone can PROVE causes atherosclerotic plaquing! Just more fodder for the misled and misinformed 18. Great discussion! Makes me realize I need to re-educate myself about statistics. I will continue to eat my two eggs daily with vegetables and avoid processed carbs or grains consumed with them. As we all know, diet studies are often problematic and should be viewed skeptically. 19. Another variable that I think should be taken into account is what the chickens were fed. I’ve had a few email exchanges with Dr. Peat and he says he limits his egg consumption to no more than 1 a day unless he knows what the chickens are being fed. According to him animals like pigs and chickens tend to retain and pass on the negative materials from things like soy in their fat and yolks wheres animals like cows are able to better filter these things out in their rumens. I’m not sure to what extent this would affect a person who is eating egg yolks from soy meal fed chickens compared to chickens on their natural diet but it would be interesting to look into. 20. CHRIS There is a paper written at WAPF that covers 1950/60 autopsy of old ppl around the world that found no matter what diet was consumed ppl ALWAYS had hardening of the arteries if they went past a certain age. It may even been done by the anti-cholesterol propagandist scientist. I think even the Mansi had harding after a certain age. □ Hi Del, Masai have some atherosclerosis, but not complicated lesions nor occluded lumen, according to the research done back then. 21. Pingback: Questions regarding Eggs and cholesterol! | balance2bfit 22. I have cosumed 2+ whole eggs per day since 4-5 yr. old .Now 80+,total eggs 800+ per yr. 23. Looks like I’m pretty late to this discussion, and certainly most of the analysis is beyond my learning and capabilities, but I do have a couple of comments on graph C above. 1. On the horizontal axis, the second column, 50-110 Egg-yolk years, spans 60, while the third column spans only 40 EYY. It seems to me that this improperly moves quantities that should have been in the third column to the second column, making the second column larger than it would otherwise have been and, conversly, the third column smaller than it would otherwise have been. Had the grouping remained consistent throughout, the seemingly upward increasing aspect of the curve in this region would have been muted or perhaps eliminated. 2. The last group, > 200, lumps all larger Egg-yolk years into one column, improperly increasing the plaque area value of this column. If the horizontal axis had been extended (200-250, 250-300, etc) and column widths had remained consistent, what now looks like a possible exponential curve might have instead appeared linear. Maybe I’m missing something, or perhaps the techniques involved somehow account for this, but as it is, to me it looks suspicious. □ Sly, I think the distribution in graph C is based on the quantity of people present in the quintile, not the quantity of egg-yolk years. Point two is a good point — grouping into quintiles obscures the effects of extremes on both ends. ☆ Right you are; the x coordinate is displayed in quintiles. However, if the goal of the graph is to show a correlation between egg-yolk-years and carotid plaque area, I don’t see how using quintiles clarifies the nature of the association instead of obscuring it. If I want to see if the correlation is linear, geometric, logarithmic, exponential, some polynomial function or unrelated then I need to see the data displayed in consistent groups (if that is necessary for clarity) on each axis. I don’t see how arbitrarily dividing the x axis into quintiles serves that purpose. Of course, if the author of the study is making the raw data freely available, then I suppose others can do an analysis and check the results. If that is not so, then he has to offer a proof of the validity of his method, unless it is a standard and accepted published method that is being used appropriatly here. ○ Good point, Sly. To see the true trend we would be best looking at continuous data rather than data categorized by quintile. This entry was posted in WAPF Blog and tagged cholesterol, Eggs, Fat and Cholesterol, heart disease, Statistics. Bookmark the permalink.
{"url":"http://www.westonaprice.org/blogs/cmasterjohn/2012/09/14/egg-study-redux-correcting-the-stats/","timestamp":"2014-04-16T22:39:19Z","content_type":null,"content_length":"150963","record_id":"<urn:uuid:abae7135-4e56-4470-82b3-73b126f5fa7b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help October 13th 2007, 12:24 AM #1 Oct 2007 Hey! - Im Ady, i need your help guys! I need more clarification than help, right... i need to figure out the volume of cylinder so,: Diameter - 10.5m Length - 1695m so we know that volume of cylinder = pi x (radius of base^2) x height or V = pir^2h .........Solution? r = 5.25^2m v = pi x 5.25 x 1695 = pi x 27.56 x1695 = 146,766.0164m^3 This is right? Cheers guys! Hey! - Im Ady, i need your help guys! I need more clarification than help, right... i need to figure out the volume of cylinder so,: Diameter - 10.5m Length - 1695m so we know that volume of cylinder = pi x (radius of base^2) x height or V = pir^2h .........Solution? r = 5.25^2m v = pi x 5.25 x 1695 = pi x 27.56 x1695 = 146,766.0164m^3 This is right? Cheers guys! V = pi r^2 h = pi (5.25)^2 1695 ~= 146770.3 cubic metres thankyou, so what am i doing wrong? i cant get your answer? i can only get mine, where am i going wrong It doesn't look to me like you did anything wrong. I suspect the only difference between your calculation and CaptainBlack's was, perhaps, the number of digits of $\pi$ you used. They are both the same answer to the required number of significant digits. i have figured it...how bizzare! In terms of a Cylinder, the 'length' could this also be termed as 'depth' ? October 13th 2007, 12:50 AM #2 Grand Panjandrum Nov 2005 October 15th 2007, 03:39 AM #3 Oct 2007 October 15th 2007, 04:07 AM #4 October 15th 2007, 04:12 AM #5 Oct 2007 October 15th 2007, 04:40 AM #6
{"url":"http://mathhelpforum.com/geometry/20480-cylinder.html","timestamp":"2014-04-18T06:59:52Z","content_type":null,"content_length":"46048","record_id":"<urn:uuid:ea4806c8-c81d-46cf-9b96-266b9c137175>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Errors in the book "Applied Numerical Linear Algebra" by James Demmel I will continue to post errors and clarifications that I or others find in this location, as well as the source. • Page 22, Lemma 1.7, part 2: This is imprecise on which norms I mean. There are 3 norms in the inequality "||A*B|| <= ||A|| * ||B||", and not every choice of 3 norms makes sense. The easiest case is when A and B are square, and you use the same vector norm in the numerator and denominator of definition 1.9 for all 3 norms. This is all I wanted you to prove for Question 1.16. (Hyounjin The result is more generally true as long as you use the same norm for the vectors in the domain space of A*B and B, the same norm for vectors in the range space of B and the domain space of A, and the same norm for vectors in the range space of A*B and the range of A. In other words, you can choose three different vector norms. • Page 23, Lemma 1.7, part 7: The notation "lambda_max (A* A)" means "the largest eigenvalue of the matrix conjugate-transpose(A) times A". • Page 23, Lemma 1.7, Part 13: "|| A ||_1 <= || A ||_F" should be "(1/sqrt(n)) * || A ||_1 <= || A ||_F". (Hyounjin Kim) • Page 23, Lemma 1.7, proof: "q^T A^T A q = q^T lambda q" should be "q* A* A q = q* lambda q". • Page 23, Lemma 1.7, proof: In a denominator of a the second displayed equation, "|| Q* x ) ||" should be "|| Q* x ||". • Page 24, Question 1.7: y^H should be y*. Both are acceptable notations for the conjugate transpose of y. (Gerardo Lafferriere) • Page 26, Question 1.16: See the comments on pages 22 and 23 above. • Page 27, Question 1.18: In the first numbered fact, "s1 - a" and "(s1 - a) - b" should be "a - s1" and "(a - s1) + b". (Matt Podolsky) • Page 29, Question 1.20, part 2: "perturbed eigenvalues" should be "perturbed roots" in the next to last line. (Gerardo Lafferriere) • Page 29, Question 1.20, part 3: p'(r(i)) means the derivative of the polynomial p, evaluated at r(i). • Page 32, Section 2.2, line 2: "Ax=B" should be "Ax=b". (JD) • Page 37, Equation (2.8): "|| x ||" should be "|| x hat ||" in the denominator. (Gerardo Lafferriere) • Page 52, displayed equation near middle: the not-equal sign should be an equal sign. (Rich Vuduc) • Page 67, Table 2.1: The defintion of matrix multiplication should contain "b_{kj}", not "b_{jk}". (Rich Vuduc) • Page 72, line 3 of Algorithm 2.9: U should be n-by-n, not m-by-m. (Maksim Oks) • Page 73 and 74: Change U_{21} and U_{31} to U_{12} and U_{13}, respectively, in the displayed factorizations of the matrix A. • Page 80, Prop. 2.3: The space needed is "n(bL + bU + 1)", not "bL + bU + 1". (Rich Vuduc) • Page 95, question 2.10. Assume A is n-by-n, not n-by-m. Also assume A is real, or else replace A^T by A^H. Assume M is real symmetric (or complex Hermitian, also replacing L^T by L^H). The question about A is correct as stated, but we will not define condition numbers for rectangular matrices until Chapter 3. (Tsuyoshi Koyama) • Page 95, question 2.11. Assume i is not equal to j. (Tsuyoshi Koyama) • Page 95, question 2.13, parts 2 and 3: The intent is to suppose that you have already done Gaussian elimination on A to get its L and U factors, so that solving Ax=b is fast (costs just O(n^2)), and then to exploit this to solve By=c in O(n^2), rather than O(n^3), which is what Gaussian elimination on B would cost. (Matt Podolsky) • Page 98, question 2.18 part 1. Assume that the first k steps of Gaussian elimination without pivoting succeed, i.e. do not try to divide by 0. (Tsuyoshi Koyama) • Page 114, Table in the middle of the page: "sigma_{k+1}/sigma_k" in the heading of column 2 should be "sigma_{k+1}/sigma_1". • Page 118, line 2: There is an extra closing parenthesis at the end of the line. (Matt Podolsky) • Page 119, last line: tilde_u should equal x + sign(x_1)*norm(x)*e_1, not x + sign(x_1)*e_1. (Matt Podolsky) • Page 122: The displayed matrix R(i,j,theta) differs from the identity matrix only in rows and columns i and j, whose entries are cos(theta) and +-sin(theta). (Matt Podolsky) • Page 127, 4th paragraph: "b = A^{-1} * x" should be "x = A^{-1} * b", and "b = A^{+} * x" should be "x = A^{+} * b". (Guenter Gramlich) • Page 127, Definition 3.2: "A^+ = V^T Sigma^+ U " should be "A^+ = V Sigma^+ U^T". • Page 145, Definition 4.4: "R^n" should be "R^n or C^n". • Page 191, Question 4.15: In the fourth line of matlab, " diag((1.5*ones(1,5)).\verb+^+(0:4)) + " should be " diag((1.5*ones(1,5)).^(0:4)) + " The "\verb+ +" is a latex error. • Page 192, Question 4.16, line 23: Numterm(2,1) should be NumTerms(2,1). (William De Meo) • Page 259, proof of Corollary 5.4: The last displayed equation should be "(d/dt) T(-t) = -(d/dt) T at -t = + pi_0(F(T))*T - T*pi_0(F(T)) at -t"; the first term on the right has the wrong sign. (Emile Sahouria) • Page 280, proof of Lemma 6.5: In the 5th line of the displayed equation, " = max_i | lambda_i | + eps " should be " <= max_i | lambda_i | + eps " • Page 304, 5th line of text: "q_j yields q_j A q_j" should be "q_j^T yields q_j^T A q_j".
{"url":"http://www.cs.berkeley.edu/~demmel/ma221_Fall04/errata.html","timestamp":"2014-04-16T13:04:46Z","content_type":null,"content_length":"5771","record_id":"<urn:uuid:3be4d5d4-a236-4388-ad46-767decb1f4c9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
magnetic field calculation Denian, you need to figure out which component of the earth's magnetic field, say H, adds to the magnetic field of the solenoid (given by [tex]B_{solenoid} = \mu_{0}nI[/tex] where n is the no of turns per unit length and I is the current carried by it). The component would be of the form [tex]H cos\theta[/tex] or [tex]H sin\theta[/tex] depending on how you measure theta. The best way to go about it is to draw a diagram showing the two vectors B and H due to the solenoid and the earth at a point (remember that B is axial and will be strongest inside the solenoid--the above expression is that approximated for a closed pack coil). Then, you should realize that if the needle comes into equilibrium 37 west of north (or whatever you're given in a general case), the net force on the needle is zero (for if it were not zero, there could have been a torque that could cause further deflection..but its static so reversing this logic, I can say that the NET force on it at this point in time is zero). Use the equation for the force /torque. BUT: You need to understand that this whole solenoid+earth system can be replaced (in effect) by a net field [tex]B_{net}[/tex] which is the vector sum of the fields of the solenoid (axial, direction given by Right Hand Rule or Fleming rule whatever you wish to use) and that of the earth (I call the earth's field H). The earth's field has a horizontal component and a vertical component. Resolve it using the angle of dip/declination. If you are having trouble doing this, the sites I have given in my previous post should help. Finally if you just can't get things to work, send your complete solution and I'll try to see what the problem is. There is no big deal here, its just the directions you and I have to take care of and the rest is mincemeat math. EDIT: Give me a few hours denian, while you follow the advice given in this post and I'll figure out the equations myself and give you the answer.
{"url":"http://www.physicsforums.com/showthread.php?t=36762","timestamp":"2014-04-16T07:42:22Z","content_type":null,"content_length":"40288","record_id":"<urn:uuid:6b76b614-d563-4e1c-aa98-3aad586ad50d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Scale Factor Editing (P) The P edit descriptor specifies a scale factor, which moves the location of the decimal point in real values and the two real parts of complex values. It takes the following form: The k is a signed (sign is optional if positive), integer literal constant specifying the number of positions, to the left or right, that the decimal point is to move (the scale factor). The range of k is -128 to 127. At the beginning of a formatted I/O statement, the value of the scale factor is zero. If a scale editing descriptor is specified, the scale factor is set to the new value, which affects all subsequent real edit descriptors until another scale editing descriptor occurs. To reinstate a scale factor of zero, you must explicitly specify 0P. Format reversion does not affect the scale factor. (For more information on format reversion, see Section 11.8.) On input, a positive scale factor moves the decimal point to the left, and a negative scale factor moves the decimal point to the right. (On output, the effect is the reverse.) On input, when an input field using an F, E, D, EN, ES, or G real edit descriptor contains an explicit exponent, the scale factor has no effect. Otherwise, the internal value of the corresponding I/O list item is equal to the external field data multiplied by 10^-k. For example, a 2P scale factor multiplies an input value by .01, moving the decimal point two places to the left. A -2P scale factor multiplies an input value by 100, moving the decimal point two places to the right. The following shows input using the P edit descriptor (the symbol ^ represents a nonprinting blank character): Format Input Value 3PE10.5 ^^^37.614^ .037614 3PE10.5 ^^37.614E2 3761.4 -3PE10.5 ^^^^37.614 37614.0 The scale factor must precede the first real edit descriptor associated with it, but it need not immediately precede the descriptor. For example, the following all have the same effect: (3P, I6, F6.3, E8.1) (I6, 3P, F6.3, E8.1) (I6, 3PF6.3, E8.1) Note that if the scale factor immediately precedes the associated real edit descriptor, the comma separator is optional. On output, a positive scale factor moves the decimal point to the right, and a negative scale factor moves the decimal point to the left. (On input, the effect is the reverse.) On output, the effect of the scale factor depends on which kind of real editing is associated with it, as follows: • For F editing, the external value equals the internal value of the I/O list item multiplied by 10^k. This changes the magnitude of the data. • For E and D editing, the external decimal field of the I/O list item is multiplied by 10^k, and k is subtracted from the exponent. This changes the form of the data. A positive scale factor decreases the exponent; a negative scale factor increases the exponent. For a positive scale factor, k must be less than d + 2 or an output conversion error occurs. • For G editing, the scale factor has no effect if the magnitude of the data to be output is within the effective range of the descriptor (the G descriptor supplies its own scaling). If the magnitude of the data field is outside G descriptor range, E editing is used, and the scale factor has the same effect as E output editing. • For EN and ES editing, the scale factor has no effect. The following shows output using the P edit descriptor (the symbol ^ represents a nonprinting blank character): Format Value Output 1PE12.3 -270.139 ^^-2.701E+02 1P,E12.2 -270.139 ^^^-2.70E+02 -1PE12.2 -270.139 ^^^-0.03E+04 The following shows a FORMAT statement containing a scale factor: DIMENSION A(6) DO 10 I=1,6 10 A(I) = 25. WRITE (6, 100) A 100 FORMAT(' ', F8.2, 2PF8.2, F8.2) The preceding statements produce the following results: 25.00 2500.00 2500.00 2500.00 2500.00 2500.00 Previous Page Next Page Table of Contents
{"url":"http://h21007.www2.hp.com/portal/download/files/unprot/Fortran/docs/lrm/lrm0432.htm","timestamp":"2014-04-20T21:20:22Z","content_type":null,"content_length":"5321","record_id":"<urn:uuid:1c3574a1-7f02-483c-bb51-ebcc0ec0eecd>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Triprotic Acid Titration Examples Triprotic Acid Titration with Strong Base Considered herein is the pH or titration curve that would be obtained when titrating a triprotic acid with a base. Three examples are given; phosphoric acid, and the two amino acids, aspartic acid and tyrosine. It is assumed that a strong base titrant, e.g., NaOH, is used. The systematic approach to solving complex chemical equilibrium problems results in two main results that are useful here. First is the relative fractions, a , for the various forms of the acids as a function of pH. For the triprotic acid, the a are Second, the titration curves are calculated using a working equation for a triprotic acid being titrated with a strong base. The appropriate equation is This equation is derived on a different page. Though the equation does not have simple roots, the roots, e.g., proton concentration, can be determined with a spreadsheet, or other computer program, using iteration methods. Phosphoric Acid Phosphoric acid is a good example of a titration where the first two equivalence points, corresponding to base reaction with the first and second protons, respectively, are clearly visible. By clearly visible, we mean that there is a large change in pH at the equivalence point. The acid dissociation constants for phosphoric acid are quite different from each other with pK[a]'s of 2.15, 7.2, and 12.15 Because the pK[a] are so different, the protons are reacted at different pH's. This is illustrated in the plot of the relative fraction as a function of pH shown below. The pH at points where the relative fraction of two species are equal, e.g., where two relative concentration lines cross, have a simple relationship to the acid equilibration constants. For example, the first crossing occurs for [H[3]PO[4]] = [ H[2]PO[4]^-]. The relationship to pH is most easily found by recognizing that all principle species are given in the first proton ionization equation Taking the log of this equation results in the buffer equation At equal acid and conjugate base concentrations, pH=pK[a]1. There are three such points for phosphoric acid. They are labeled on the plot. These points are important in the prediction of the titration curves. They correspond to points where half of an equivalent of proton has been consumed by addition of strong base. Thus, the point where pH=pK[a]1 is halfway to the first equivalence point. Where pH=pK[a]2 is halfway between the first and second equivalence points, etc. The solution has maximum buffer capacity at these points. In other words, there is maximum resistance to changes in pH. The equivalence points can also be identified in the fraction plot. At the first equivalence point , [H[3]PO[4]] approaches zero. This occurs when [H[2]PO[4]^-] is a maximum. One can see this point in the relative concentration plot. It occurs at a pH that is halfway between the two points with maximum buffer capacity. In fact, we can expect that the first equivalence point will occur at a pH Similarly, the second equivalence point, laying halfway between the points where pH=pK[a]2 and pH=pK[a]3 is To summarize, without even performing the titration, or solving the fifth power polynomial equation that governs the proton concentration, we would predict the following pH at the halfway and equivalence points The titration curve, found from iterative solutions to the governing equation, is illustrated below. The titration was for 0.1 F solutions of both acid and strong base. The solid line is the titration curve. The special points discussed above are given in pink. The equations for these points are also given. In fact, the halfway and equivalence point predictions work. Notice, however, that only two of the three single-proton equivalence points exhibit large changes in pH. In fact, the third equivalence is obscured by competitive ionization of water. This is the same effect that occurs for monoprotic acid with pK[a]>10. Tyrosine is a triprotic, dibasic amino acid with pK[a] of 2.17, 9.19, and 10.47 The first proton is removed from the carboxylic acid, the second from the ammonium group. The third proton, with a pK [a] of 10.47, is the phenolic proton from the amino acid side chain. This case is of interest because the acid is dibasic. Moreover, the basic pK[a] are relatively similar, differing by only about 1 The relative fraction plot is shown here. The plot is labeled with the pH at the points where acid and conjugate base concentrations are equal (where the lines cross). It is different from that of phosphoric acid in that the relative concentration of one base species, the monoprotonated tyrosinate, HTyr, does not reach unity. Of more importance to the prediction of the shape of the titration curve is the fact that there are several species in solution at the pH where the second equivalence point should be reached. The second equivalence point occurs when [HTyr] is a maximum. The main effect of there being three species in solution at this point is to buffer the pH around the second equivalence point. Since the solution is effectively buffered by H[2]Tyr, HTyr, and Tyr at the second equivalence point, we might not expect to observe a sharp change. In fact, this prediction is borne out in titration curve shown below. Only the first equivalence point shows a large change in pH with added titrant. Notice, however, that the major point pH's are those predicted for the halfway and equivalence points. Since there is only one clear change in pH with respect to titrant volume, Titrimetric tyrosine analysis should assume one equivalent (proton) per mole and use an indicator that changes at about pH 7. Aspartic Acid Aspartic acid is another triprotic amino acid. In this case the pKa are; 1.990, 3.900, and 10.002 The first two are carboxylic acid protons; the last is the ammonium proton. In this case we might expect that the first two equivalence point would be obscured by the fact that the two acidic pKa are relatively close. The relative fraction and titration curve plots are shown below. It is left up to the student to justify why the titration curve looks the way it does based on analysis of the relative fraction plot. Some questions to ask yourself are; • What are the species in the relative fraction plot? • What are the pH at the halfway and equivalence points? • How will the fact that the 2nd species (olive colored) never attain a value of 1 affect the titration? One thing to notice is that the first equivalence point is "lost" and a large change in pH only occurs at the second equivalence point. Some important questions to ask are; • Why isn't the 3rd equivalence point observed in the titration curve? • How would you design a Titrimetric analysis for aspartic acid? • How many equivalents (protons) per mole are apparent? • What indicator would you use? This page was created by Professor Stephen Bialkowski, Utah State University Last Updated Tuesday, August 03, 2004
{"url":"http://ion.chem.usu.edu/~sbialkow/Classes/3600/Overheads/H3A/H3A.html","timestamp":"2014-04-21T00:22:48Z","content_type":null,"content_length":"11371","record_id":"<urn:uuid:4676ffb8-f027-46f0-a055-cf4ba8101d7d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Mathematical Surveys and Monographs 2008; 299 pp; hardcover Volume: 151 ISBN-10: 0-8218-4495-4 ISBN-13: 978-0-8218-4495-3 List Price: US$88 Member Price: US$70.40 Order Code: SURV/151 The three-dimensional Heisenberg group, being a quite simple non-commutative Lie group, appears prominently in various applications of mathematics. The goal of this book is to present basic geometric and algebraic properties of the Heisenberg group and its relation to other important mathematical structures (the skew field of quaternions, symplectic structures, and representations) and to describe some of its applications. In particular, the authors address such subjects as signal analysis and processing, geometric optics, and quantization. In each case, the authors present necessary details of the applied topic being considered. With no prerequisites beyond the standard mathematical curriculum, this book manages to encompass a large variety of topics being easily accessible in its fundamentals. It can be useful to students and researchers working in mathematics and in applied mathematics. Graduate students and research mathematicians interested in the use of analysis on Heisenberg groups to various problems in pure and applied mathematics. • The skew field of quaternions • Elements of the geometry of \(S^3\), Hopf bundles and spin representations • Internal variables of singularity free vector fields in a Euclidean space • Isomorphism classes, Chern classes and homotopy classes of singularity free vector fields in three-space • Heisenberg algebras, Heisenberg groups, Minkowski metrics, Jordan algebras and SL\((2,\mathbb{C})\) • The Heisenberg group and natural \(C*\)-algebras of a vector field in 3-space • The Schrödinger representation and the metaplectic representation • The Heisenberg group--A basic geometric background of signal analysis and geometric optics • Quantization of quadratic polynomials • Field theoretic Weyl quantization of a vector field in 3-space • Thermodynamics, geometry and the Heisenberg group by Serge Preston • Bibliography • Index
{"url":"http://ams.org/bookstore?fn=20&arg1=survseries&ikey=SURV-151","timestamp":"2014-04-21T04:04:57Z","content_type":null,"content_length":"16187","record_id":"<urn:uuid:da694cdb-3135-468e-b16a-baa4971aa2e1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability question March 14th 2011, 10:17 PM Probability question A train bridge is constructed across a wide river. Trains arrive according to a poisson process of rate lambda=3 per day. Find the probability that it takes more than 2 days for the 5th train to arrive at the bridge. March 15th 2011, 12:14 AM The number that arrive in a two day period has a Poission distribution with rate 6 per period. So now your question is what is the probability of 4 or fewer trains arrive in a two day period. March 15th 2011, 12:35 AM in our book we have that the time of the kth arrival is a gamma distribution. I did it by evaluating the gamma distribution P(T(5)>2)= this however gives me a difference answer to what i get doing it the way you sugested as a poison process with rate 6. can anyone else check if the two mwthods give the same answer? March 15th 2011, 05:21 AM in our book we have that the time of the kth arrival is a gamma distribution. I did it by evaluating the gamma distribution P(T(5)>2)= this however gives me a difference answer to what i get doing it the way you sugested as a poison process with rate 6. can anyone else check if the two mwthods give the same answer? That will be because you have to compute: doing the way you suggest.
{"url":"http://mathhelpforum.com/advanced-statistics/174630-probability-question-print.html","timestamp":"2014-04-20T12:35:00Z","content_type":null,"content_length":"6263","record_id":"<urn:uuid:18f710d6-9039-49ea-80c7-528e9690220b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Loan Amortization Table 10-11-2013, 11:45 AM #1 Registered User Join Date Oct 2013 Loan Amortization Table Hi All! I am new to Javascript, and coding in general, and I am trying to make a simple loan amortization table to cement my study so far. But, I am unsure as to the best way to input the formula I need into my code. I am receiving error messages associated with that line of code in the console (Uncaught SyntaxError: Unexpected token ; ). I figure that I must be making an error in how I am writing out the formula. I am trying to put this into the function: monthlyPayment = principal * ( monthlyInterest / (1 - (1 + monthlyInterest) ^ -numberOfMonths)) I tried using the Math.pow( , ) function, but may be getting messed up there. In general, I would really appreciate feedback on this code- first time independently trying to make anything in Javascript. Many Thanks! var principal= 100000 var annualInterest = 5 var lengthYears= 15 function monthlyPayment (principal, annualInterest, lengthYears){ var monthlyInterest = annualInterest/(12*100); var numberOfMonths = lengthYears*12; var monthlyPayment = principal * ((monthlyInterest/(1-Math.pow((1+monthlyInterest)), (-1*numberOfMonths))); return monthlyPayment; while (principal> 0){ monthlyInterest = (principal*monthlyInterest); var principalPayment = (monthlyPayment-monthlyInterest); var newBalance = (principal-principalPayment); principal = newBalance; function monthlyPayment (principal, annualInterest, lengthYears){ var monthlyInterest = annualInterest/(12*100); var numberOfMonths = lengthYears*12; var monthlyPayment = principal * ((monthlyInterest/(1-Math.pow((1+monthlyInterest)), (-1*numberOfMonths))); return monthlyPayment; Your fourth line quoted above has the wrong number of parenthesis. I see six open ones and only five close ones. Visit my homepage at http://spruce.flint.umich.edu/~jalarie/. You never call the function correctly with any parameters to decode and use. And you are returning a variable with the same name as the function: monthlyPayment. Last edited by JMRKER; 10-12-2013 at 07:31 PM. I have been tweaking the following for several years now. The amortization table will be created by the P.I. button after the monthly amount has been created. mortgage.htm. The monthly amount is calculated around lines 75-77 and the genpi function creates the table in a new window. Hi All, Thanks for the advice and tips! It is so helpful to get nudges from those who are more experienced! (JMRKER: In that case, I thought that since I had defined the variables globally that I could use them as parameters in the function? But, I realized that I don't actually need the function, and least I don't think I do. I can just define monthlyPayment globally I think?) (Thanks for the heads up on the missing ')' jalarie!) (wbport: Thanks for pointing me to your project. That looks good! I am trying to stay really simple now to cement the basic concepts that I have been studying.) After reading JMRKER's comment, I ended up just defining most of the variables globally, and trying to use them in a for loop. The code is below, and at least it puts something to the console. But, the output soon turns to infinity. I must be doing something wrong. It is possible that I am comprehending the math involved wrong, or it's a coding issue. Would you all mind taking a look? Am I way off base here? Again, many thanks! var principal= prompt("what is the original principal of the loan?") var annualInterest = prompt("What is the annual interest? Use whole numbers, so 5 means 5 percent interest") var lengthYears= prompt("How long is the loan, in years?") var monthlyInterest = annualInterest/(1200) var numberOfMonths = lengthYears*12; var monthlyPayment = principal * ((monthlyInterest/(1-Math.pow((1+monthlyInterest)), (-1*numberOfMonths)))); for( numberOfMonths; numberOfMonths>0; numberOfMonths--){ monthlyInterest = (principal*monthlyInterest); var principalPayment = (monthlyPayment-monthlyInterest); principal= principal-principalPayment; Why do you calculate the monthlyInterest as the annualInterest/(1200) and then overwrite it in the for...loop with (principal * monthlyInterest) ??? Then you calculate the principalPayment as the principal * monthlyInterest; ( a very large number). What's going on here??? You might want to re-evaluate your logic and name your variables a individuals rather than try to re-use them as a different term at a later time. Hey JMRKER, I went ahead and made a new variable within the loop, called paymentInterest. This is designed to determine how much interest the user is paying each month, the interest payments should go down over time. Since the monthlyInterest is a decimal, the number should be relatively small to the principal. The principal payment variable is designed to show how much of the monthlyPayment is actually going to pay down the principal. I then try to redefine principal within the loop to be the value of the old principal minus the amount that has been paid down. But, I must have an issue with the formula that is determining the monthly payment. I logged that to the console, and it is a very small number. It should be about 105 for a principal of 10,000- with 5% interest, over the course of 10 years. I may be getting confused with the math.pow function. I am trying to replicate this formula:http://www.1728.org/loanform.htm var principal= prompt("what is the original principal of the loan?") var annualInterest = prompt("What is the annual interest? Use whole numbers, so 5 means 5 percent interest") var lengthYears= prompt("How long is the loan, in years?") var monthlyInterest = annualInterest/(1200) var numberOfMonths = lengthYears*12; var monthlyPayment = principal * (monthlyInterest/((Math.pow(1+monthlyInterest), numberOfMonths)-1)); for( var i = 0; i<=numberOfMonths; i++){ var paymentInterest = (principal*monthlyInterest); var principalPayment = (monthlyPayment-paymentInterest); Here is a modification (for testing purposes). Put back in you prompts (bad form) or put in <input> tags (better form) to collect user information. Change you variable names in places to represent more what equations represent. Code below outputs to a screen element to see actions rather than sending to console. Personal preference. Check some of the results by hand to see if your formula is correct. <!DOCTYPE HTML> <title> Payoff Schedule </title> <meta charset="utf-8"> <div id="debug"></div> <script type="text/javascript"> /* following removed for testing purposes only var principal= prompt("what is the original principal of the loan?"); var annualInterest = prompt("What is the annual interest? Use whole numbers, so 5 means 5 percent interest"); var lengthYears= prompt("How long is the loan, in years?"); /* following code for testing purposes only */ var principal = 1000; var annualInterest = 5; var lengthYears = 2; /* */ var rate = annualInterest/(1200); var numberOfMonths = lengthYears*12; var monthlyPaymentRate = principal * (rate/((Math.pow(1+rate), numberOfMonths)-1)); // console.log(monthlyPaymentRate); var paymentInterest, principalPayment; document.getElementById('debug').innerHTML = 'monthlyPaymentRate: '+monthlyPaymentRate+'<p>'; var str = 'Mth Principal+Interest<br>'; for( var i = 0; i<=numberOfMonths; i++){ paymentInterest = (principal*rate); principalPayment = (rate-paymentInterest); principal -= principalPayment; // console.log(principal); str += '#'+i+': '+principal.toFixed(2)+'<br>'; document.getElementById('debug').innerHTML += str; What is the reason for your aversion to putting ';' characters at the end of your statements? You seem to be inconsistent in using them. Wow! That looks much better! Thanks for the continuing help on this one. I checked it out, and I still think that there is an issue with the monthlyPaymentRate formula. That number should be fairly large- as this is the entire monthly payment on the loan. The principal should go down every loop iteration until the principal is zero. I also tried putting in the monthlyPaymentRate variable in place of the rate variable in principalPayment = (rate-paymentInterest) . This is because I was hoping to subtract how much the user is paying in interest from the total monthly payment. We should then be able to subtract the principalPayment variable from the principal variable, and thus have the principal go down each month. As far as the ';' thats more ignorance than deliberate practice. I only started studying Javascript a few days before posting. So, again many thanks for all the help and putting up with silly 10-12-2013, 06:45 PM #2 Registered User Join Date Nov 2002 Flint, Michigan, USA 10-12-2013, 07:27 PM #3 Super Moderator Join Date Dec 2005 10-12-2013, 11:04 PM #4 Registered User Join Date Sep 2008 Jackson MS 10-13-2013, 04:05 PM #5 Registered User Join Date Oct 2013 10-13-2013, 07:34 PM #6 Super Moderator Join Date Dec 2005 10-14-2013, 04:14 PM #7 Registered User Join Date Oct 2013 10-15-2013, 08:49 AM #8 Super Moderator Join Date Dec 2005 10-15-2013, 12:31 PM #9 Registered User Join Date Oct 2013
{"url":"http://www.webdeveloper.com/forum/showthread.php?284799-Loan-Amortization-Table&p=1291367","timestamp":"2014-04-19T04:14:13Z","content_type":null,"content_length":"104429","record_id":"<urn:uuid:35d9544a-958f-490d-bf22-854115edce42>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Common Fixed Point Theorems in Fuzzy Metric Spaces Satisfying Abstract and Applied Analysis Volume 2013 (2013), Article ID 735217, 14 pages Research Article Common Fixed Point Theorems in Fuzzy Metric Spaces Satisfying -Contractive Condition with Common Limit Range Property ^1Near Nehru Training Centre, H. No. 274, Nai Basti B-14, Bijnor, Uttar Pradesh 246701, India ^2Department of Natural Resources Engineering and Management, University of Kurdistan, Hawler, Iraq ^3Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Rangsit Center, Pathum Thani 12121, Thailand Received 18 June 2013; Accepted 28 July 2013 Academic Editor: Hassen Aydi Copyright © 2013 Sunny Chauhan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The objective of this paper is to emphasize the role of “common limit range property” to ascertain the existence of common fixed point in fuzzy metric spaces. Some illustrative examples are furnished which demonstrate the validity of the hypotheses and degree of utility of our results. We derive a fixed point theorem for four finite families of self-mappings which can be utilized to derive common fixed point theorems involving any finite number of mappings. As an application to our main result, we prove an integral-type fixed point theorem in fuzzy metric space. Our results improve and extend a host of previously known results including the ones contained in Imdad et al. (2012). 1. Introduction In 1965, Zadeh [1] studied the concept of a fuzzy set in his seminal paper. Thereafter, it was developed extensively by many researchers, which also include interesting applications of this theory in different fields. Fuzzy set theory has applications in applied sciences such as neural network theory, stability theory, mathematical programming, modeling theory, engineering sciences, medical sciences (medical genetics, nervous system), image processing, control theory, and communication. In 1975, Kramosil and Michálek [2] introduced the concept of fuzzy metric space, which opened an avenue for further development of analysis in such spaces. Further, George and Veeramani [3] modified the concept of fuzzy metric space introduced by Kramosil and Michálek [2] and also have succeeded in inducing a Hausdorff topology on such a fuzzy metric space which is often used in current research these days. Most recently, Gregori et al. [4] showed several interesting examples of fuzzy metrics in the sense of George and Veeramani [3] and have also utilized such fuzzy metrics to color image processing. On the other hand, Mishra et al. [5] extended the notion of compatible mappings to fuzzy metric spaces and proved common fixed point theorems in presence of continuity of at least one of the mappings, completeness of the underlying space, and containment of the ranges amongst involved mappings. Further, Singh and Jain [6] weakened the notion of compatibility by using the notion of weakly compatible mappings in fuzzy metric spaces and showed that every pair of compatible mappings is weakly compatible, but reverse is not true. Many mathematicians used different conditions on self-mappings and proved several fixed point theorems for contractions in fuzzy metric spaces (see [6–13]). However, the study of common fixed points of noncompatible maps is also of great interest according to Pant [14]. In 2002, Aamri and El Moutawakil [15] defined a property (E.A.) for self-mappings which contained the class of noncompatible mappings in metric spaces. In a paper of Ali and Imdad [16], it was pointed out that property (E.A.) allows replacing the completeness requirement of the space with a more natural condition of closedness of the range. Afterwards, Liu et al. [17] defined a new property which contains the property (E.A.) and proved some common fixed point theorems under hybrid contractive conditions. It was observed that the notion of common property (E.A.) relatively relaxes the required containment of the range of one mapping into the range of other which is utilized to construct the sequence of joint iterates. Subsequently, there are a number of results proved for contraction mappings satisfying property (E.A.) and common property (E.A.) in fuzzy metric spaces (see [18–25]). In 2011, Sintunavarat and Kumam [26] coined the idea of “common limit range property” (also see [27–33]) which relaxes the condition of closedness of the underlying subspace. Recently, Imdad et al. [34] extended the notion of common limit range property to two pairs of self-mappings which relaxes the requirement on closedness of the subspaces. Several common fixed point theorems have been proved by many researchers in the framework of fuzzy metric spaces via implicit relations (see [6, 22, 35]). In this paper, we prove some common fixed point theorems for weakly compatible mappings with common limit range property in fuzzy metric spaces which include fuzzy metric spaces of two types, namely, Kramosil and Michálek fuzzy metric spaces and George and Veeramani fuzzy metric spaces. Some related results are also derived besides furnishing illustrative examples. We also present some integral-type common fixed point theorems in fuzzy metric spaces. Our results improve, extend, and generalize a host of previously known results existing in the literature. 2. Preliminaries Definition 1 (see [36]). A binary operation is said to be continuous -norm if (1) is commutative and associative; (2) is continuous; (3) for all ; (4) whenever and for all . Examples of continuous -norms are Lukasiewicz -norm, that is, , product -norm, that is, , and minimum -norm, that is, . The fuzzy metric space of Kramosil and Michálek [2] is defined as follows. Definition 2 (see [2]). The 3-tuple is said to be a KM-fuzzy metric space if is an arbitrary set, is a continuous -norm, and is a fuzzy set on satisfying the following conditions: for all and (KM-1):; (KM-2): if and only if ; (KM-3):; (KM-4):; (KM-5): is left continuous. Lemma 3 (see [37]). Let be a fuzzy metric space. Then is nondecreasing on for all . The fuzzy metric space of George and Veeramani [3] is defined as follows. Definition 4 (see [3]). The 3-tuple is said to be a GV-fuzzy metric space if is an arbitrary set, is a continuous -norm, and is a fuzzy set on satisfying the following conditions: for all and (GV-1): ; (GV-2): if and only if ; (GV-3): ; (GV-4): ; (GV-5): is continuous. In view of (GV-1) and (GV-2), it is worth pointing out that (for all ) provided (see [24]). Example 5 (see [3]). Let be a metric space. Define as for all and . Then is a GV-fuzzy metric space, where is the product -norm (or minimum -norm). Indeed, we call this fuzzy metric induced by metric the standard fuzzy metric. Hence every metric space is a fuzzy metric space. Now we give some examples of fuzzy metric spaces according to Gregori et al. [4]. Example 6 (see [4]). Let be a nonempty set, a one-one function, and an increasing continuous function. For fixed , define as for all and . Then, is a fuzzy metric space on wherein is the product Example 7 (see [4]). Let be a metric space and an increasing continuous function. Define as for all and . Then is a fuzzy metric space on wherein is the product -norm. Example 8 (see [4]). Let be a bounded metric space with (for all , where is fixed constant in ) and an increasing continuous function. Define a function as for all and . Then is a fuzzy metric space on wherein is a Lukasiewicz -norm. Definition 9 (see [24]). A sequence in a KM- (or GV-) fuzzy metric space is said to be convergent to some if for all there is some such that for all . Lemma 10 (see [24]). If is a KM-fuzzy metric space and , are sequences in such that , , then for every continuity point of . Definition 11 (see [5]). A pair of self-mappings of a KM- (or GV-) fuzzy metric space is said to be compatible if for all whenever is a sequence in such that for some . Definition 12 (see [5]). A pair of self-mappings of a KM- (or GV-) fuzzy metric space is said to be noncompatible if there exists at least one sequence in such that for some but or nonexistent for at least one . Definition 13 (see [38]). A pair of self-mappings of a nonempty set is said to be weakly compatible (or coincidentally commuting) if they commute at their coincidence points; that is, if for some , then . Remark 14 (see [38]). Two compatible self-mappings are weakly compatible, but the converse is not true. Therefore the concept of weak compatibility is more general than that of compatibility. Definition 15 (see [18]). A pair of self-mappings of a KM- (or GV-) fuzzy metric space is said to satisfy the property (E.A.) if there exists a sequence in such that for all for some . Note that weak compatibility and property (E.A.) are independent of each other (see [39, Examples 2.1-2.2]). Remark 16. In view of Definition 15, a pair of noncompatible mappings of a KM- (or GV-) fuzzy metric space satisfies the property (E.A.), but the converse need not be true (see [39, Remark 4.8 ]). Definition 17 (see [18]). Two pairs and of self-mappings of a KM- (or GV-) fuzzy metric space are said to satisfy the common property (E.A.) if there exist two sequences , in such that for all for some . Definition 18 (see [26]). A pair of self-mappings of a KM- (or GV-) fuzzy metric space is said to satisfy the common limit range property with respect to mapping (briefly, property) if there exists a sequence in such that for all where . Definition 19 (see [27]). Two pairs and of self-mappings of a KM- (or GV-) fuzzy metric space are said to satisfy the common limit range property with respect to mappings and (briefly, property) if there exist two sequences , in such that for all where . Remark 20. If and , then Definition 19 implies property (that is, Definition 18) according to Sintunavarat and Kumam [26]. Now we show that the property implies the common property (E.A.), but converse is not true. In this regard, see the following example. Example 21. Let be a fuzzy metric space, where , with product -norm defined as for all and for all and . Define the self-mappings , , and by Then we have , , , and . Let us consider two sequences and in ; one can verify that but . Hence both pairs and do not satisfy the property while they satisfy the common property (E.A.). Proposition 22. If the pairs and satisfy the common property (E.A.) and and are closed subsets of , then the pairs also share the property. Definition 23 (see [40]). Let and be two families of self-mappings. The pair of families is said to be pairwise commuting if (1) for all ; (2) for all ; (3) for all and . 3. Main Results Our results involve class of all mappings satisfying the following properties: : is continuous and nondecreasing on ; : for all . We note that if , then , and that for all . 3.1. Fixed Point Theorems in KM-Fuzzy Metric Spaces We begin with the following observation before proving our main result. Lemma 24. Let , , , and be four self-mappings of a KM-fuzzy metric space . Suppose that (1)the pair or satisfies the (or ) property; (2) (or ); (3) (or ) is a closed subset of ; (4) converges for every sequence in whenever converges (or converges for every sequence in whenever converges); (5)for all , there exists : , for some , Then the pairs and satisfy the property. Proof. If the pair enjoys the property, then there exists a sequence in such that where . By (2), , and for each sequence , there exists a sequence in such that . Therefore, due to the closedness of , so that and in all . Thus, we have , , and as . By (4), sequence converges and in all we need to show that as . Suppose that as , and then using inequality (14) with , , we have Taking the limit as and using Lemma 10, we get or, equivalently, As , we have for some . Then, in view of condition , we get , which is a contradiction, thereby implying which shows that the pairs and enjoy the property. Remark 25. The converse of Lemma 24 is not true in general. For counterexamples, one can see Examples 27 and 30. Theorem 26. Let , , , and be four self-mappings of a KM-fuzzy metric space satisfying inequality (14). Suppose that the pairs and enjoy the property. Then the pairs and have a coincidence point each. Moreover, , , , and have a unique common fixed point provided both pairs and are weakly compatible. Proof. Since the pairs and satisfy the property, there exist two sequences and in such that where . Since , there exists a point such that . We show that . If not, then using inequality (14) with , , we get which, on making and using Lemma 10, reduces to and so If , then for some . Then in view of condition we get , which is a contradiction. Therefore, so that which shows that is a coincidence point of the pair . Also ; there exists a point such that . Now we assert that . Assume the contrary, and then using inequality (14) with , , we have which reduces to or, equivalently, As implies for some , then in view of condition , we get , which is a contradiction. Therefore, so that which shows that is a coincidence point of the pair . Since the pair is weakly compatible and , hence . Now we show that is a common fixed point of the pair . To prove this, we show that . If not, then using inequality (14) with , , we have and so Then on simplification, we obtain Since , therefore for some . Then in view of condition , we get , which is a contradiction. Hence . Therefore, is a common fixed point of the pair . Also the pair is weakly compatible and ; then . To accomplish this, we assert that . If not, then using inequality (14) with , , we have which reduces to and so If , then for some . Then (in view of condition ) it follows that , which is a contradiction. Therefore, which shows that is a common fixed point of the pair . Uniqueness of common fixed point is an easy consequence of inequality (14) (in view of condition ). Next, we give an example which is not applied by the results of Imdad et al. [21, Theorem 2.1] but can be applied to Theorem 26. Example 27. Let be a fuzzy metric space, where , with product -norm defined as for all and for all and . Define the self-mappings , , , and by We obtain Hence and are not closed subsets of and so Theorem 2.1 of Imdad et al. [21] can not be applied to this example. Next, we choose two sequences , (or , ), and then clearly which shows that both pairs and enjoy the property. By a routine calculation, one can verify inequality (14) (for all and ) wherein is defined by . Furthermore, we obtain that the pairs and are weakly compatible. Therefore, all the conditions of Theorem 26 are satisfied and 3 is a unique common fixed point of , , , and which also remains a coincidence point as well. Now we show that the result contained in Imdad et al. [21, Theorem 2.1] can be easily obtained by Theorem 26. Theorem 28. Let , , , and be four self-mappings of a KM-fuzzy metric space satisfying inequality (14). Suppose that the following hypotheses hold: (1)the pairs and satisfy the common property (E.A.); (2) and are closed subsets of . Then the pairs and have a coincidence point each. Moreover, , , , and have a unique common fixed point provided both pairs and are weakly compatible. Proof. Since the pairs and enjoy the common property (E.A.), there exist two sequences and in such that for some . Since and are closed subsets of , hence . Therefore, there exists a point such that . Similarly, . Therefore, there exists a point such that . The rest of the proof runs on the lines of the proof of Theorem 26. Theorem 29. Let , , , and be four self-mappings of a KM-fuzzy metric space satisfying all the hypotheses of Lemma 24. Then , , , and have a unique common fixed point provided both pairs and are weakly compatible. Proof. In view of Lemma 24, the pairs and enjoy the property; there exist two sequences and in such that where . The rest of the proof can be completed on the lines of the proof of Theorem 26. This completes the proof. The following example demonstrates the utility of Theorem 29. Example 30. In the setting of Example 27, replace the self-mappings , , , and by the following besides retaining the rest: Then we have and , whereas and are closed subsets of . Then, like the earlier example, the pairs satisfy the property and satisfy the property. It easy to calculate that inequality (14) holds wherein is defined by . Moreover, the pairs and are weakly compatible. Thus all the conditions of Theorem 29 are satisfied, and 3 is a unique common fixed point of the involved mappings , , , and . By choosing , , , and suitably, we can derive a multitude of common fixed point theorems for a pair of mappings. As a sample, we deduce the following natural result for a pair of self-mappings. Corollary 31. Let and be two self-mappings of a KM-fuzzy metric space satisfying the following conditions: (1)the pair enjoys the property; (2)for all , , there exists : , for some Then and have a coincidence point. Moreover, if the pair is weakly compatible, then and have a unique common fixed point. As an application of Theorem 26, we have the following result involving four finite families of self-mappings. Theorem 32. Let , , , and be four finite families of self-mappings of a KM-fuzzy metric space such that , , , and which satisfy inequality (14). If the pairs and satisfy the property, then and have a point of coincidence each. Moreover, , , , and have a unique common fixed point provided the pairs of families and are commute pairwise. Proof. The proof of this theorem is similar to that of Theorem 3.1 contained in Imdad et al. [40]; hence the details are omitted. Remark 33. Theorem 32 is a partial generalization of Theorem 26 as commutativity requirements in Theorem 32 are relatively stronger than weak compatibility used in Theorem 26. Now, we indicate that Theorem 32 can be utilized to derive common fixed point theorems for any finite number of mappings. As a sample for five mappings, we can derive the following by setting one family of two members while the remaining families contain single members: Corollary 34. Let , , , , and be five self-mappings of a KM-fuzzy metric space satisfying the following conditions: (1)the pairs and share the property; (2)for all , , there exists : , for some Then the pairs and have a coincidence point each. Moreover, , , , , and have a unique common fixed point provided the pairs and commute pairwise (that is, , , , and ). Similarly, we can derive a common fixed point theorem for six mappings by setting two families of two members while the remaining families contain single members: Corollary 35. Let , , , , , and be six self-mappings of a KM-fuzzy metric space satisfying the following conditions: (1)the pairs and enjoy the property; (2)for all , , there exists : , for some Then the pairs and have a coincidence point each. Moreover, , , , , , and have a unique common fixed point provided the pairs and commute pairwise (that is, , , , , , and ). By setting , , , and in Theorem 32, we deduce the following. Corollary 36. Let , , , and be four self-mappings of a KM-fuzzy metric space such that the pairs and satisfy the property. Suppose that for all , there exists : , for some where , , , and are fixed positive integers. Then the pairs and have a point of coincidence each. Further, , , , and have a unique common fixed point provided both pairs and commute pairwise. Remark 37. The results similar to Theorem 28, Theorem 29, Corollary 31, Corollary 34, and Corollary 35 can be outlined in respect of Theorem 32 and Corollary 36. 3.2. Grabiec-Type Fixed Point Results Inspired by the work of Grabiec [37], we state and prove some fixed point theorems for weakly compatible mappings with common limit range property. Lemma 38 (see [37]). Let be a KM- (or GV-) fuzzy metric space. If there exists a constant such that for all , , then . Theorem 39. Let , , , and be four self-mappings of a KM-fuzzy metric space . Suppose that (1)the pairs and enjoy the property; (2)for all , and for some Then the pairs and have a coincidence point each. Moreover, , , , and have a unique common fixed point provided both pairs and are weakly compatible. Proof. If the pairs and share the property, then there exist two sequences and in such that where . Since , there exists a point such that . Now we have to show that . On using inequality (45), we Letting and using Lemma 10, Appealing to Lemma 38, we obtain and so which shows that is a coincidence point of the pair . Also ; there exists a point such that . Now we have to assert that . On using inequality (45), we get or, equivalently, In view of Lemma 38, we have ; that is, which shows that is a coincidence point of the pair . As the pair is weakly compatible and , therefore . Now we show that is a common fixed point of the pair . To prove this, using inequality (45), we have which reduces to Owing to Lemma 38, we get . Therefore, is a common fixed point of the pair . Since pair is weakly compatible and , hence . On using inequality (45), we get Then on simplification, we have By Lemma 38, we obtain which shows that is a common fixed point of the pair . Uniqueness of common fixed point is an easy consequence of the inequality (45) (in view of Lemma 38). Remark 40. Theorem 39 improves and extends the results of Grabiec [37] and Imdad et al. [21, Theorem 2.5] and extends some relevant results contained in [16] to fuzzy metric spaces. Remark 41. The results similar to Lemma 24, Theorem 28, Theorem 29, Theorem 32, Corollary 31, Corollary 34, Corollary 35, and Corollary 36 can be proved in view of contraction condition (45) which will generalize and extend several results from the literature. The listing of the possible corollaries are not included. 3.3. Fixed Point Theorems in GV-Fuzzy Metric Spaces Lemma 42. Let , , , and be four self-mappings of a GV-fuzzy metric space satisfying conditions (1)–(4) of Lemma 24. Suppose that for all , for some , and for some Then the pairs and satisfy the property. Proof. As the pair enjoys the property, there exists a sequence in such that where . Since , each sequence there exists a sequence in such that . Therefore, due to the closedness of , so that . Thus in all we have , , and as . By (4) of Lemma 24, the sequence converges and in all we need to show that as . Suppose that as , and then using inequality (55) with , , we have in which, on making , we As implies , henceforth , which is a contradiction, thereby implying which shows that the pairs and enjoy the property. Theorem 43. Let , , , and be four self-mappings of a GV-fuzzy metric space satisfying inequality (55). Suppose that the pairs and enjoy the property. Then the pairs and have a coincidence point each. Moreover, , , , and have a unique common fixed point provided both pairs and are weakly compatible. Proof. If the pairs and satisfy the property, then there exist two sequences and in such that where . Since , there exists a point such that . We assert that . Assume the contrary, and then using inequality (55) with , , we get which, on making , reduces to As implies , henceforth , which is a contradiction. Therefore, so that . Hence is a coincidence point of the pair . Also there exists a point such that . Now we show that . If not, then using inequality (55) with , , we have which reduces to As implies , henceforth , which is a contradiction. Therefore, so that which shows that is a coincidence point of the pair . Since the pair is weakly compatible and , hence . Now we show that is a common fixed point of the pair . To prove this, we show that . Assume the contrary, and then using inequality (55) with , , we have Then on simplification, we obtain As implies , henceforth , which is a contradiction. Hence . Therefore, is a common fixed point of the pair . As the pair is weakly compatible and , then . To accomplish this, we assert that . If not, then using inequality (55) with , , we have
{"url":"http://www.hindawi.com/journals/aaa/2013/735217/","timestamp":"2014-04-19T02:44:54Z","content_type":null,"content_length":"1045475","record_id":"<urn:uuid:f4d17e48-c249-4ba5-b689-9269283070da>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
JF Ptak Science Books // Blog Bookstore “Since the mid-1940s, scientists had aimed to create a thinking machine, an apparatus that could compete with or even surpass the human brain in logical operations, pattern recognition, problem solving and even language. Chess was found to be a useful testing ground because of its combination of simple rules and mind-bending complexity… Shannon was fascinated by chess’ potential in the pursuit of what he called ‘mechanized thinking.’ But he became convinced that computer chess and other AI pursuits should not be modeled on human thought… Computers, at least as they were understood then, could calculate very quickly, following programmed instructions. This particular strength—and limitation—of computers suggested a different route for AI, a new sort of quasi-intelligence based on mathematical computation. Chess would be a central proving ground for this new type of intelligence. Theoretically, at least, the game could be fully converted into one long mathematical formula” --Shenk, 201, 211.
{"url":"http://longstreet.typepad.com/books/2011/04/page/2/","timestamp":"2014-04-19T17:04:59Z","content_type":null,"content_length":"75282","record_id":"<urn:uuid:1a229bb3-3eba-401f-82ee-e92e38885820>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Masters Program at Department of Mathematics, Texas A&M University Masters degree: Teaching Track This option is aimed primarily at preparing students to teach mathematics at secondary school and junior college levels. Students acquire a solid mathematical background together with the necessary pedagogical skills and support coursework from other deparments, such as Statistics and Education. The main emphasis is to produce future educators who will provide high-quality content-based Non-Thesis option 1. Requires a minimum of 36 credit hours, at least 24 of which must be in mathematics. 2. There must be 12 credit hours in specialized mathematics courses consisting of the following: □ MATH 645 (Survey of Mathematical Problems I); □ MATH 646 (Survey of Mathematical Problems II); □ MATH 629 (History of Mathematics); □ MATH 696 (Mathematical Communication and Technology). 3. There must be 12 hours of regular mathematics courses, at least six of which must be standard graduate mathematics courses. The other six hours can be used to remove any deficiencies. □ Suggested standard courses include: ☆ MATH 607 (Real Variables I); ☆ MATH 609 (Numerical analysis); ☆ MATH 615 (Introduction to Classical Analysis); ☆ MATH 617 (Complex Variables I); ☆ MATH 622 (Differential Geometry I); ☆ MATH 636 (Topology I); ☆ MATH 641 (Analysis for Applications I); ☆ MATH 653 (Algebra I); □ Suggested courses to remove deficiencies include: ☆ MATH 407 (Complex Variables); ☆ MATH 416 (Modern Algebra II); ☆ MATH 427 (Introduction to Number Theory); ☆ MATH 431 (Structures and Methods of Combinatorics); ☆ MATH 436 (Introduction to Topology); ☆ MATH 439 (Differential Geometry of Curves and Surfaces); ☆ MATH 467 (Modern Geometry); Please note that students can include a maximum of two undergraduate courses on their degree plan. The math service courses MATH 601 and MATH 602 can also serve to remove deficiencies, but these courses count toward the two allowed undergraduate courses. 4. There must be 9 credit hours of supporting coursework selected from the following: □ Two graduate courses in Education (the student has the option of an emphasis at either the secondary or collegiate level). Suggested courses include: ☆ EDAD 601 (College Teaching); ☆ EDAD 609 (Public School Laws); ☆ EDAD 610 (Higher Education Law); ☆ EHRD 616 (Methods of Teaching Adults); ☆ EDTC 608 (Foundations of Distance Learning); ☆ EDCI 642 (Multicultural Education: Theory, Research, and Practice); ☆ EDCI 644 (Curriculum Development); □ One graduate course in statistical methods. Suggested courses include: ☆ STAT 610 (Theory of Statistics - Distribution Theory); 5. This leaves 3 of 36 credit hours to be chosen in consultation with the student's advisor. In most cases, these credits should be taken as a mathematics course, though in exceptions they can be used for courses directly relevant to the student's long term teaching goals. This option is also available through Distance Learning.
{"url":"http://www.math.tamu.edu/graduate/masters/requirements/teachingtrack.html","timestamp":"2014-04-16T07:51:36Z","content_type":null,"content_length":"13557","record_id":"<urn:uuid:293f737e-e902-413e-bbe7-d48620c2b83b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry of the Universe The density of the universe also determines its geometry. If the density of the universe exceeds the critical density, then the geometry of space is closed and positively curved like the surface of a sphere. This implies that initially parallel photon paths converge slowly, eventually cross, and return back to their starting point (if the universe lasts long enough). If the density of the universe is less than the critical density, then the geometry of space is open, negatively curved like the surface of a saddle. If the density of the universe exactly equals the critical density, then the geometry of the universe is flat like a sheet of paper. Thus, there is a direct link between the geometry of the universe and its fate. Site link: Shape of the Universe Credit: NASA / WMAP Science Team Available formats: 160 x 90 JPG (3 KB) 200 x 177 JPG (10 KB) 320 x 297 JPG(25 KB) 512 x 475 JPG(60 KB) 557 x 501 JPG(54 KB) 1104 x 1024 JPG (261 KB) 2160 x 2003 JPG (903 KB) WMAP # 990006
{"url":"http://map.gsfc.nasa.gov/media/990006/index.html","timestamp":"2014-04-19T12:25:55Z","content_type":null,"content_length":"8661","record_id":"<urn:uuid:38a39ba6-c74c-44b7-adb3-c2219268f686>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
The logarithmic potential method for solving linear programming problems , 2000 "... Everyone with some background in Mathematics knows how to solve a system of linear equalities, since it is the basic subject in Linear Algebra. In many practical problems, however, also inequalities play a role. For example, a budget usually may not be larger than some specified amount. In such situ ..." Cited by 18 (6 self) Add to MetaCart Everyone with some background in Mathematics knows how to solve a system of linear equalities, since it is the basic subject in Linear Algebra. In many practical problems, however, also inequalities play a role. For example, a budget usually may not be larger than some specified amount. In such situations one may end up with a system of linear relations that not only contains equalities but also inequalities. Solving such a system requires methods and theory that go beyond the standard Mathematical knowledge. Nevertheless the topic has a rich history and is tightly related to the important topic of Linear Optimization, where the object is to nd the optimal (minimal or maximal) value of a linear function subject to linear constraints on the variables; the constraints may be either equality or inequality constraints. Both from a theoretical and computational point of view both topics are equivalent. In this chapter we describe the ideas underlying a new class of solution , 1998 "... We deal with the primal-dual Newton method for linear optimization (LO). Nowadays, this method is the working horse in all efficient interior point algorithms for LO, and its analysis is the basic element in all polynomiality proofs of such algorithms. At present there is still a gap between the pra ..." Cited by 11 (7 self) Add to MetaCart We deal with the primal-dual Newton method for linear optimization (LO). Nowadays, this method is the working horse in all efficient interior point algorithms for LO, and its analysis is the basic element in all polynomiality proofs of such algorithms. At present there is still a gap between the practical behavior of the algorithms and the theoretical performance results, in favor of the practical behavior. This is especially true for so-called large-update methods. We present some new analysis tools, based on a proximity measure introduced by Jansen et al., in 1994, that may help to close this gap. This proximity measure has not been used in the analysis of large-update method before. Our new analysis not only provides a unified way for the analysis of both large-update and small-update methods, but also improves the known iteration bounds. - Mathematical Programming , 2000 "... In this paper, we first introduce the notion of self-regular functions. Various appealing properties of self-regular functions are explored and we also discuss the relation between selfregular functions and the well-known self-concordant functions. Then we use such functions to define self-regular p ..." Cited by 8 (5 self) Add to MetaCart In this paper, we first introduce the notion of self-regular functions. Various appealing properties of self-regular functions are explored and we also discuss the relation between selfregular functions and the well-known self-concordant functions. Then we use such functions to define self-regular proximity measure for path-following interior point methods for solving linear optimization (LO) problems. Any self-regular proximity measure naturally defines a primal-dual search direction. In this way a new class of primal-dual search directions for solving LO problems is obtained. Using the appealing properties of self-regular functions, we prove that these new large-update path-following methods for LO enjoy a polynomial, O n q+1 2q log n iteration bound, where q ≥ 1 is the so-called barrier degree of the self-regular ε proximity measure underlying the algorithm. When q increases, this � bound approaches the √n n best known complexity bound for interior point methods, namely O log. Our unified �√n ε n analysis provides also the O log best known iteration bound of small-update IPMs. ε At each iteration, we need only to solve one linear system. As a byproduct of our results, we remove some limitations of the algorithms presented in [24] and improve their complexity as well. An extension of these results to semidefinite optimization (SDO) is also discussed. - REVUE RAIRO-OPERATIONS RESEARCH , 1991 "... In the recent interior point methods for linear programming much attention has been given to the logarithmic barrier method. In this paper we will analyse the class of inverse barrier methods for linear programming, in which the barrier is P x \Gammar i , where r ? 0 is the rank of the barrier. ..." Cited by 6 (1 self) Add to MetaCart In the recent interior point methods for linear programming much attention has been given to the logarithmic barrier method. In this paper we will analyse the class of inverse barrier methods for linear programming, in which the barrier is P x \Gammar i , where r ? 0 is the rank of the barrier. There are many similarities with the logarithmic barrier method. The minima of an inverse barrier function for different values of the barrier parameter define a 'central path' dependent on r, called the r--path of the problem. For r # 0 this path coincides with the central path determined by the logarithmic barrier function. We introduce a metric to measure the distance of a feasible point to a point on the path. We prove that in a certain region around a point on the path the Newton process converges quadratically. Moreover, outside this region, taking a step into the Newton direction decreases the barrier function value at least with a constant. We will derive upper bounds for the total ... - Faculty of Mathematics and Informatics, TU Delft, NL--2628 BL , 1992 "... In the recent past a number of papers were written that present low complexity interior-point methods for different classes of convex programs. Goal of this article is to show that the logarithmic barrier function associated with these programs is self-concordant, and that the analyses of interiorpo ..." Cited by 5 (4 self) Add to MetaCart In the recent past a number of papers were written that present low complexity interior-point methods for different classes of convex programs. Goal of this article is to show that the logarithmic barrier function associated with these programs is self-concordant, and that the analyses of interiorpoint methods for these programs can thus be reduced to the analysis of interior-point methods with self-concordant barrier functions. Key words: interior-point method, barrier function, dual geometric programming, (extended) entropy programming, primal and dual l p -programming, relative Lipschitz condition, scaled Lipschitz condition, self-concordance. 1 Introduction The efficiency of a barrier method for solving convex programs strongly depends on the properties of the barrier function used. A key property that is sufficient to prove fast convergence for barrier methods is the property of self-concordance introduced in [17]. This condition not only allows a proof of polynomial convergen...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1236691","timestamp":"2014-04-20T02:32:05Z","content_type":null,"content_length":"25233","record_id":"<urn:uuid:32e9a54a-87d9-4f1c-b901-c7a975d65dfd>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Middletown, DE Math Tutor Find a Middletown, DE Math Tutor ...My major is in Spanish, but I want to be of help in more fields than just one. I can tutor in a few different subjects, but I center most around subjects taught in the elementary school, especially math which is an area for which I have a special knack. When working with kids, I start by asking them what they need help on. 13 Subjects: including prealgebra, algebra 1, SAT math, Spanish ...I have experience with Paul A. Foerster's Algebra and Trigonometry that thoroughly covers intermediate/advanced Algebra and Trigonometry. I have experience with Harold Jacob's Geometry and Teaching Textbooks. 23 Subjects: including geometry, elementary math, ASVAB, literature ...I tutor freshman students with math difficulties. I believe and learned that motivation is the key to inspire anyone to succeed. I also do not do the tutees' homework because it would hurt them in the long run. 11 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...Special Needs covers such a wide variety of students. I have had experience with students with learning disabilities, cognitive disabilities, language disabilities, behavior problems, etc. However, I am of the belief that each child is a unique student with his or her own strengths and needs. 35 Subjects: including algebra 2, phonics, ADD/ADHD, Aspergers ...To teach students of all educational types, one must be open to what the student responds to and what is best for their experience. I will always take into consideration if a student is more visual, tactile, logical, and use these to promote a student's learning. By being open to using multiple... 22 Subjects: including precalculus, special needs, differential equations, ACT Math Related Middletown, DE Tutors Middletown, DE Accounting Tutors Middletown, DE ACT Tutors Middletown, DE Algebra Tutors Middletown, DE Algebra 2 Tutors Middletown, DE Calculus Tutors Middletown, DE Geometry Tutors Middletown, DE Math Tutors Middletown, DE Prealgebra Tutors Middletown, DE Precalculus Tutors Middletown, DE SAT Tutors Middletown, DE SAT Math Tutors Middletown, DE Science Tutors Middletown, DE Statistics Tutors Middletown, DE Trigonometry Tutors
{"url":"http://www.purplemath.com/middletown_de_math_tutors.php","timestamp":"2014-04-16T07:35:21Z","content_type":null,"content_length":"23841","record_id":"<urn:uuid:9dadb6f2-ac83-463a-b18a-0464cec532e3>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof with circles May 3rd 2008, 07:41 AM #1 Feb 2008 Proof with circles prove that if P is a point of the circle C, then there is just one line that is tangent to C at P. Do it by a proof by contradiction. Suppose that there are 2 tangent lines D & D' to C at P. Let O be the center of the circle. Then D is perpendicular to OP, and so is D'. Hence, D & D' are parallel. Is it possible that two different & parallel lines go through a common point (P) ? Depends on how a tangent is defined to you. The equation of the tangent is $y - y_{P} = \left(\frac{dy}{dx}\right)_{P} (x - x_{P})$ It is clear that $(x_P,y_P)$ and $\left(\frac{dy}{dx}\right)_{P}$ is fixed for a point P. Hence the tangent is unique. It's in "Geometry" :/ Oh well... I don't know if he put it in the right section, now, I'm lost >_< Both of the responses helped. Thank you May 3rd 2008, 07:45 AM #2 May 3rd 2008, 07:49 AM #3 May 3rd 2008, 07:58 AM #4 May 3rd 2008, 08:00 AM #5 May 3rd 2008, 09:22 AM #6 Feb 2008
{"url":"http://mathhelpforum.com/geometry/36993-proof-circles.html","timestamp":"2014-04-16T08:46:17Z","content_type":null,"content_length":"48235","record_id":"<urn:uuid:87c1a108-3f06-4af8-bd7c-72615915b2b5>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Literature on Exponential of a Quadratic Form up vote 2 down vote favorite Let $A_i$, $i=1,\dots,L$ be given $N\times N$ positive definite real matrices. I have this sum of exponentials f(\mathbf{x})=\sum_{i=1}^{L}\operatorname{exp}(-{\mathbf{x}^T\mathbf{A}_i\mathbf {x}}),~~~~~~\mathbf{x}\in \mathbb{R}^N,\mathbf{x}^T\mathbf{x}=1 Has this function been studied before. Can someone point me to relevant references?. Or anyone can make some comment on it as if it is convex or concave? real-analysis linear-algebra quadratic-forms As each summand is concave, so f is concave. A relevant problem is when a product of quadratic form is convex. A reference comes to me is the paper "Lin, Sinnamon, A condition for convexity of a product of positive definite quadratic forms, SIAM J. Matrix Anal. Appl. 32 (2011) 457-462." – Betrand May 7 '13 at 12:00 It is neither convex nor concave (ignore for now the extra constraint that $x^Tx=1$, because with that constraint, you are asking for convexity on the surface of a hypersphere, which can at best hold only very locally); to see why, simply generate a few random vectors and test what happens to $f$. – Suvrit May 8 '13 at 22:02 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged real-analysis linear-algebra quadratic-forms or ask your own question.
{"url":"http://mathoverflow.net/questions/129943/literature-on-exponential-of-a-quadratic-form","timestamp":"2014-04-18T18:11:30Z","content_type":null,"content_length":"48210","record_id":"<urn:uuid:11068b94-b229-4ab3-a3e5-6ee48610bad1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: How do I add fractions? • one year ago • one year ago Best Response You've already chosen the best response. Are the denominators the same? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. those might help Best Response You've already chosen the best response. I don't like to watch videos because I never learn from them... :( Best Response You've already chosen the best response. oh sorry then Best Response You've already chosen the best response. \[\frac{a}{b}+\frac{c}{d}=\frac{a\times d+c\times b}{b\times d}\] Best Response You've already chosen the best response. What @Zarkon did , is said to be use of LCM .. Best Response You've already chosen the best response. If the denominator is not the same then you will have to find the most common one. then add the top numbers and keep the denominator same. Been awhile sence i done then. Best Response You've already chosen the best response. Best Response You've already chosen the best response. \[{\heartsuit \over \spadesuit}+{\diamondsuit \over \clubsuit} = {\heartsuit\clubsuit + \diamondsuit \spadesuit \over \spadesuit \clubsuit}\] Best Response You've already chosen the best response. LCM ---> Least common multiple just like I want to give you an example of LCM in addition of fractions : \[\large{\frac{3}{4} + \frac{4}{5} = ?}\] Now, to add these fractions , we need to take LCM of the denominators of both of the fractions... i.e. of 4 and 5 Multiples of 4 = 4,8,12,16,20,32,36,40 .... Multiples of 5 = 5,10,15,20,25,30,35,40... Common multiples = 20, 40 , ... Least common multiple = 20 So from this we get LCM of 4 and 5 is 20 Now divide 20 by 4 , you get 5, multiply 5 by the numerator of the first fraction i.e. 5 * 3 = 15 .Similarly for second fraction, divide 20 by 5, you get 4, multiply 4 by 4 = 16 \[\large{\frac{5*3 + 4*4}{20} = \frac{15+16}{20} = \frac{31}{20} }\] Best Response You've already chosen the best response. Although, you can straightforwardly add the numerators if the denominators are same. Best Response You've already chosen the best response. Why we use LCM is to get to a point where we can straightforwardly add the numerators. Best Response You've already chosen the best response. Get a common denominator. Best Response You've already chosen the best response. Instead of LCM you can also do "common denominators" : \[\large{\frac{4}{3} + \frac{7}{6} = ? }\] Let us take 4/3 first : multiply denominators and numerators by 2 \[\large{\frac{4\times 2}{3\ times 2} = \frac{8}{6}}\] So, we can also write 4/3 as : 8/6 Now let us take L: 7/6 See we have to get common denominators in both the fractions. we have one fraction as 8/6 and another one is 7/ 6 Their denominators are same i.e. 6 so we can just add numerators "now" : \[\large{\frac{8}{6} + \frac{7}{6} = \frac{8+7}{6} = \frac{15}{6}}\] Best Response You've already chosen the best response. Note: Don't add denominators, only numerators are to be added. Best Response You've already chosen the best response. @HorseCrazyGirlForever I hope you got it now, any confusion you have now? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/508aaf82e4b077c2ef2e430e","timestamp":"2014-04-17T04:14:03Z","content_type":null,"content_length":"94532","record_id":"<urn:uuid:622884e7-2bb5-4e36-a5e8-c7697e4a9356>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Which of the following relations is a function? A. { (1, 2), (1, 3), (1, 4), (1, 5) } B. { (1, 3), (2, 3), (4, 3), (9, 3) } C. { (5, 4), (-6, 5), (4, 5), (4, 0) } D. { (6, -1), (1, 4), (2, 3), (6, 1) • one year ago • one year ago Best Response You've already chosen the best response. A function is when no input repeats which means no x coordinate can go to more than one y. So which one has a relation where the x doesn't repeat? Best Response You've already chosen the best response. There's only one choice where this happens! Best Response You've already chosen the best response. yep no clue what or how tto do this Best Response You've already chosen the best response. In an ordered pair the first number is x and the second is y. So, if I have an ordered pair of (2,6). x is 2 and y is 6. Now look at your choices. the only relation that an x doesn't repeat is choice B, because your x's are: 1,2,4,9. Best Response You've already chosen the best response. D, plot is on graph and you will get answer Best Response You've already chosen the best response. so is the answer d or b Best Response You've already chosen the best response. so is the answer d or b Best Response You've already chosen the best response. so is the answer d or b Best Response You've already chosen the best response. D is wrong It is B. I am a math teacher! Best Response You've already chosen the best response. vertical line test also proves what I'm saying. If you plot the points and draw vertical lines through each point. Then each line should only go through one point. More than one point makes it NOT a function and if you graph D a vertical line would go through x at 6 twice which proves D is WRONG! Best Response You've already chosen the best response. i dont know neither do i care yor a teacher or what, plot thes points on paer and get yor answer Best Response You've already chosen the best response. Learn to plot points correctly and know what a function is. Best Response You've already chosen the best response. well same thing for thois one { (2, 6), (3, 9), (4, 2), (3, 6) } B. { (2, 8), (3, 6), (2, 4), (0, 2) } C. { (3, -2), (4, 7), (-2, 5), (-4, 5) } D. { (4, 7), (-2, 5), (1, 3), (-2, 1) } Best Response You've already chosen the best response. lol thanks thats wat ma concern is plot it first Best Response You've already chosen the best response. Meggy if you ask your math teacher he will confirm what I'm saying! Answer is C on the second one. Same answer as above. Hope that helps. Best Response You've already chosen the best response. oook and im online schooling i dont have teachers Best Response You've already chosen the best response. ya can put yor questions here and everyone can help ya to figure out Best Response You've already chosen the best response. Marty has a standard deck containing 52 cards. If Marty takes one card from the deck, what is the probability that he will select a king of hearts? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50bcda52e4b0017ef6257f58","timestamp":"2014-04-18T03:30:18Z","content_type":null,"content_length":"68773","record_id":"<urn:uuid:817f74cd-3f66-4b43-9712-c4286250147a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
version_compare($a, $b) Returns -1 if $a is earlier than $b, 0 if they are equal and 1 if $a is later than $b. If $a or $b are not valid version numbers, it dies with an error. version_compare_relation($a, $rel, $b) Returns the result (0 or 1) of the given comparison operation. This function is implemented on top of version_compare(). Allowed values for $rel are the exported constants REL_GT, REL_GE, REL_EQ, REL_LE, REL_LT. Use version_normalize_relation() if you have an input string containing the operator. my $rel = version_normalize_relation($rel_string) Returns the normalized constant of the relation $rel (a value among REL_GT, REL_GE, REL_EQ, REL_LE and REL_LT). Supported relations names in input are: ``gt'', ``ge'', ``eq'', ``le'', ``lt'', ``>>'', ``>='', ``='', ``<='', ``<<''. ``>'' and ``<'' are also supported but should not be used as they are obsolete aliases of ``>='' and ``<=''. version_compare_string($a, $b) String comparison function used for comparing non-numerical parts of version numbers. Returns -1 if $a is earlier than $b, 0 if they are equal and 1 if $a is later than $b. The ``~'' character always sort lower than anything else. Digits sort lower than non-digits. Among remaining characters alphabetic characters (A-Za-z) sort lower than the other ones. Within each range, the ASCII decimal value of the character is used to sort between characters. version_compare_part($a, $b) Compare two corresponding sub-parts of a version number (either upstream version or debian revision). Each parameter is split by version_split_digits() and resulting items are compared together.in digits and non-digits items that are compared together. As soon as a difference happens, it returns -1 if $a is earlier than $b, 0 if they are equal and 1 if $a is later than $b. my @items = version_split_digits($version) Splits a string in items that are each entirely composed either of digits or of non-digits. For instance for ``1.024~beta1+svn234'' it would return (``1'', ``.'', ``024'', ``~beta'', ``1'', ``+svn'', ``234''). my ($ok, $msg) = version_check($version) my $ok = version_check($version) Checks the validity of $version as a version number. Returns 1 in $ok if the version is valid, 0 otherwise. In the latter case, $msg contains a description of the problem with the $version
{"url":"http://www.makelinux.net/man/3/D/Dpkg::Version","timestamp":"2014-04-17T07:50:46Z","content_type":null,"content_length":"12928","record_id":"<urn:uuid:87445c7e-b82e-4e56-8d6c-40cbf3df8e78>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
trig substitution July 26th 2007, 06:02 PM trig substitution How would you do this integral of tan^3(x)dx I broke it down into integral of tan(x)(sec^2(x) - 1)dx u = tan(x) du = sec^2(x) so i got integral of u(du) - integral of (u) I am not so sure if this is correct so far, and I do not know where to go from here. Thanks for the help. July 26th 2007, 06:09 PM $\int {\tan ^3 x~dx} = \int {\tan x(\sec ^2 x - 1)~dx} = \int {\tan x\sec ^2 x~dx} - \int {\tan x~dx}$ ... everything you want... you got it... July 26th 2007, 06:15 PM I'm confused because the answer in the book is 1/2tan^2(x) + ln(cos(x)) + C I understand where the 1/2tan^2(x) but where does the ln(cos(x)) come from? July 26th 2007, 06:23 PM $(\tan^2x)'=(\sec^2x)'$ :D Well $\int {\tan x~dx} = \int {\frac{{\sin x}}<br /> {{\cos x}}~dx}$ That's easy, don't ya?
{"url":"http://mathhelpforum.com/calculus/17257-trig-substitution-print.html","timestamp":"2014-04-20T20:44:48Z","content_type":null,"content_length":"5963","record_id":"<urn:uuid:561aa632-b030-42d0-b9b9-d9c472e2e53e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: DIMACS: Series in Discrete Mathematics and Theoretical Computer Science 1996; 271 pp; hardcover Volume: 23 ISBN-10: 0-8218-0471-5 ISBN-13: 978-0-8218-0471-1 List Price: US$79 Member Price: US$63.20 Order Code: DIMACS/23 This book contains refereed papers presented at a remarkable interdisciplinary scientific meeting attended by a mix of leading biochemists and computer scientists held at DIMACS in March 1995. It describes the development of a variety of new methods which are being developed for attacking the important problem of molecular structure. • Focuses on global optimization algorithms and heuristics for molecular conformation and protein folding problems • Presents the most efficient recent algorithms • Covers a spectrum of algorithmic issues and applications Co-published with the Center for Discrete Mathematics and Theoretical Computer Science beginning with Volume 8. Volumes 1-7 were co-published with the Association for Computer Machinery (ACM). Graduate students and researchers in mathematics, molecular biology, biochemistry, computer science, engineering, and operations. "Reflects ... much of the state-of-the-art ... highly recommended to graduate students and researchers in mathematical programming, molecular biology, biochemistry, computer science, engineering and operations research." -- Journal of Global Optimization • P. Amara, J. Ma, and J. E. Straub -- Global minimization on rugged energy landscapes • R. E. Bruccoleri -- Energy directed conformational search of protein loops and segments • R. H. Byrd, E. Eskow, A. van der Hoek, R. B. Schnabel, C.-S. Shao, and Z. Zou -- Global optimization methods for protein folding problems • B. W. Church, M. Orešič, and D. Shalloway -- Tracking metastable states to free-energy global minima • J. Gu and B. Du -- A multispace search algorithm for molecular energy minimization • H. A. Hauptman -- A minimal principle in the phase problem of X-ray crystallography • X. Hu, D. Xu, K. Hamer, K. Schulten, J. Koepke, and H. Michel -- Knowledge based structure prediction of the light-harvesting complex II of Rhodospirillum molishianum • J. Kostrowicki and H. A. Scheraga -- Some approaches to the multiple-minima problem in protein folding • P. Androulakis and C. A. Floudas -- A deterministic global optimization approach for the protein folding problem • J. J. Moré and Z. Wu -- \(\varepsilon\)-optimal solutions to distance geometry problems via global continuation • R. Pachter, Z. Wang, J. A. Lupo, S. B. Fairchild, and B. Sennett -- The design of chromophore containing biomolecules • A. T. Phillips, J. B. Rosen, and V. H. Walke -- Molecular structure determination by convex global underestimation of local energy minima • A. Šali, E. Shakhnovich, and M. Karplus -- Thermodynamics and kinetics of protein folding • G. Ramachandran and T. Schlick -- Beyond optimization: Simulating the dynamics of supercoiled DNA by a macroscopic model • M. Vieth, A. Kolinski, C. L. Brooks III, and J. Skolnick -- A hierarchical approach to the prediction of the quaternary structure of GCN4 and its mutants • G. L. Xue, A. J. Zall, and P. M. Pardalos -- Rapid evaluation of potential energy functions in molecular and protein conformations • M. M. Zacharias and D. G. Vlachos -- Simulated annealing calculations for optimization of nanoclusters: The roles of quenching, nucleation, and isomerization in cluster morphology
{"url":"http://www.ams.org/bookstore?fn=20&arg1=app-dm&ikey=DIMACS-23","timestamp":"2014-04-23T14:04:41Z","content_type":null,"content_length":"17834","record_id":"<urn:uuid:f4b9b13e-ea1a-4197-aded-4e68cf1ac4ef>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Sensitivity Plots in MATLAB I wish to use a for loop i already made and replace a number with a variable defined as a range of numbers, then plot the max of another function that used the function with the new variable. How would I do this? x = [0:.5:1] r = 5*x t = 3/r new program: x = same n = [0:.25:.5] r = n*x t = 3/r for each value of n, i want to plot the cooresponding t, and plot n vs t
{"url":"http://www.physicsforums.com/showthread.php?t=87774","timestamp":"2014-04-20T03:17:03Z","content_type":null,"content_length":"24671","record_id":"<urn:uuid:8aef5ff5-24b7-4d21-b5bb-668634d78b39>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 1,022 Given an ellipse; 4x^2+y^2-48x-4y+48=0, find the Center, find the Major Axis, find the Minor Axis & the Distance from C to Foci? given an ellipse; 2x^2+6y^2+32x-48y+212=0, find center, major axis, minor axis & distance from c to foci? Given an ellipse; 8x^2+y^2+80x-6y+193=0, find center, major axis, minor axis, & distance from c to foci? Given an ellipse; 3x^2+5y^2-12x-50y+62=0, find the Center, the Major Axis, the Minor Axis, & the distance from C to Foci? Given an ellipse; 3x^2+5y^2-12x-50y+62=0, find the Center, Major Axis, Minor Axis, & the distance from C to Foci? I have to annotate some notes and I'm not sure what a couple of these terms mean, your help would be much appreciated! Unifying Effect Analytical Writing Thesis Statement in a 2-card hand, what is the probability of holding only face cards? would i divide 0.111M by 2 Calculate the molarity of the nitrate ion in a 0.111 M solution of calcium nitrate? so would I have to find the number of moles of (NO3)2 and then divide it by the 0.111 M to get the molarity of the nitrate ion I am so confused please explain the set up I don't want the an... I know Darwin went the Galapagos islands and he viewed the finches found there how their beaks were different sizes and shapes because of the seeds they ate but they were all the same species but their beaks changed because of natural selection. What is the perimeter of an equilateral triangle with an altitude of 15m? Is there anybody who can actually walk me through working out the problem please? Find an equation of the tangent line to the given curve at the specified point: y=sqrt(x)/x+1, (4,2/5) college general chemistry 3Co(NO 3 ) 2 (aq) + 2Na 3 PO 4 (aq) &---> Co 3 (PO 4 ) 2 (s) + 6NaNO 3 (aq) how do I find the molar mass of precipitate.X hydrate Algebra 2 Because of friction and air resistance, each swing of a pendulum is a little shorter than the previous one. The lengths of the swings form a geometric sequence. Suppose the first swing of the pendulum has an arc length of 100 cm and a return swing of 99 cm. a.)On which swing w... A thundercloud has an electric charge of 43.2 C near the top of the cloud and -38.7 C near the bottom of the cloud. The magnitude of the electric force between these two charges is 3.95x10^6 N. What is the average separation between these charges? (kc=8.99×10^9 N ... How high is a wall that casts a 10ft shadow @ the same time that a 6ft flag pole casts a 6ft shadow college general chemistry calculate the lowest whole number ratio of Moles of Mg/moles of O average of Moles of Mg/Moles of O Trial 1 mass of empty crucible and lid= 36.560 mass of Mg=0.348 mass of crucible, lid, and product= 38.229 Trail 2 mass of empty crucible and lid=32.221 mass of Mg=0.340 mass of ... college general chemistry calculate the lowest whole number ratio of Moles of Mg/moles of O mass of empty crucible and lid= 36.560 mass of Mg=0.348 mass of crucible, lid, and product=38.229 What is the limiting reagent for 6HCl + Fe2O3 = 2FeCl3 + 3H2O ? 6HCl has 18.44 moles and Fe2O3 has 2.91 moles. Also how many moles would FeCl3 produce? What is the limiting reagent for 6HCl + Fe2O3 = 2FeCl3 + 3H2O ? 6HCl has 18.44 moles and Fe2O3 has 2.91 moles. Also how many moles would FeCl3 produce? definite articles(el, la, los, las) must always agree with the noun. If it is masculine singular(el), if it is masculine plural(los), feminine singular(la), feminine plural(las). If you have a mixed group like your pasajeros above, then it would take a masculine plural definit... 1. Vengo de la taqueria 2. Venimos de la .... 3. Viene de la ..... 4. acabar de 5. acabas de estudiar 6. acaba de caminar por el parque 7. muchas vacas 8. muchos edificios grandes In Drosophila, sepia colored eyes are due to a recessive allele s and wild type (red eye color) to its dominant allele s+. Sepia-eyed females are crossed to pure wild type males. What phenotypic and genotypic ratios are expected if the F1 males are then backcrossed to sepia-ey... What are you trying to find the area??? Simplify the third root of 4 over the fifth root of 8 Spanish please translate from spanish to english Prima in this context is referring to cousin(feminine). 1. Which is the older cousin? 2. Which is the younger cousin? 5. c 6. d 7.d. 8. b. 9. a. 10. c 1. B 2. Should be D. 3. C is correct 4. Should be A. It is asking what you(plural) do Alegebra I I need help please!!! On January 1, 2010, Chessville has a population of 50,000 people. Chessville then enters a period of population growth. Its population increases 7% each year. On the same day, Checkersville had a population of 70,000 people. Checkersville starts to experi... college general chemistry it's right thank you so much so i just need to learn my significant figures yeah college general chemistry it's right thank you so much so i just need to learn my significant figures yeah college general chemistry it's right thank you so much so i just need to learn my significant figures yeah college general chemistry I am so lost I imput 6.188 because it's online homework and it said my answer was wrong you don't have to convert because it has the spot where you put the answer then cm^3 college general chemistry Devron thats not right it said incorrect college general chemistry Calculate the volume of a rectangular solid with a thickness of 0.750 cm and an area of 8.25 cm2 the helium in a 1.5L flask at 25degrees celcius exerts a pressure of 56.65kPa. How many moles of helium are in the flask? How do you say " I like to..."? go to Google search doctor operations and see if that helps if not you can also search cool plastic surgery facts. Instructions were evaluate the integral. (x/1-x^4) dx I know that this form is the tanh inverse 1/1-x^2 but not sure how to substitute and start this problem. Please Help. Instructions were evaluate the integral. (x/1-x^4) dx I know that this form is the tanh inverse 1/1-x^2 but not sure how to substitute and start this problem. Please Help. DEFG is a rectangle. H is the intersection point of the two diagonals. DH= 3x 3 and EG = x + 44. Find the value of x and the length of each diagonal. Using the acronym M.A.I.N. ( militarism , alliances , imperialism , nationalism ) Describe how tension rose in Europe during the early 1900s ? the ration if the side lengths of two similar rectangles is 4:3. what is the ratio of the areas of the rectangles? 7) Use Pascal's Triangle to find the coefficients for (a + b)5 . I need help with this? Probability and Statistics For question 1, here's the numbers you use: {20, 18, 17, 19, 22, 29, 16, 27, 24}. 1. Calculate the 50th percentile. For questions 2 and 3 use this number line: {16, 17, 18, 19, 20, 22, 24, 27, 29, 80} 2. Calculate the z-score for the 80 & explain what it means. (Explain wh... What is the of a solution obtained by dissolving two extra-strength aspirin tablets, containing 594 of acetylsalicylic acid each, in 202 of water? The active ingredient in aspirin is acetylsalicylic acid , a monoprotic acid with Ka=3.3*10^-4 at 25 C. Can you provide the method... Thank you so much. Also, would this same procedure be used for when there's an air bubble in the titration? a) A student fails to wash the weighing paper when transferring the KHP sample into the beaker. What effect does this error have on the calculated molarity of the NAOH solution? Mathematically justify your answer. b) A student failed to notice an air bubble trapped in the tip ... explian in writing how waould you use a number line to round 148 to the nerst ten physics, please help! (a) Find the current in each resistor of the circuit shown in the figure byusing the rules for resistors in series and parallel. (b) Write three independent equations for the three currents using Kirchhoff s laws: one with the node rule; a second using the loop rule throu... Why can't you answer your own homework questions? You can't stroll through life being a cheater. Math Essentials How many ways can an IRS auditor select 4 of 9 tax returns for an audit? True because I did learn that the lower the pH the more acidic it is. The hydronium ion concentration in coffee is 1 x 10^-5 M. What is the pH of coffee? The hydronium ion concentration for a solution is determined to be 4.5 x 10^-7 M. What is the concentration of the hydroxide ion? I just wanted to let the people that are being nice enough to help me know that this is for a study guide. It is not a test or quiz. :0) Thanks. I want to say because it has more of the reactant. Am I right? Write a balanced equation for the proton transfer reaction between water and phosphate ion. Determine in which direction the equilibrium is favored. When a 5.00 ml sample of vinegar is titrated, 44.5 ml of 0.100 m NAOH solution is required to reach the end point. What i sthe concentration of vinegar in moles per liter of solution? How much heat would be released by burning one gallon of octane? The density of octane is 0.703g/mL. 1 gallon= 3.79 liters. Do the equations x = 4y + 1 and x = 4y 1 have the same solution? Justify your answer with an explanation or a graph. How might you explain your answer to someone who has not learned algebra? Select the gerund phrase in the sentence below. Then, determine the noun function of the gerund. 1. Water skiing, sailing, and swimming are my favorite summer sports. 2. I enjoy playing the piano. A 50.0-g sample of a conducting material is all that is available. The resistivity of the material is measured to be 11 ~ 10-8 ¶m and the density is 7.86 g/cm3. The material is to be shaped into a solid cylindrical wire that has a total resistance of 1.5 ... Analytic Geometry Please help. I've been trying to solve these for more than an hour now but I can't seem to get it. 1) The segment joining (-4,7), (5,-2) is divided into two segments, one of which is 5 times as long as the other. Find the point of division. 2) The segment joining (4,0)... Mr. Smith is purchasing a $190000 house. The down payment is 20% of the price of the house. He is given the choice of two mortgages: (A) a 25-year mortgage at a rate of 8%. Find: (i) The monthly payment: $ (ii) The total amount of interest paid: $ (B) a 15-year mortgage at a r... a 700- to 1,050-word paper in which you outline the steps required for a multicultural education to be effective. Describe types of activities you would incorporate into your classroom that would support a multicultural education for all groups in this class. How could each gr... So, it's C right? Smiling and making polite remarks to people we do not like is an example of: A) making another feel embarrassment. B) exercising power over another. C) idealizing a personal performance. D) using find the interest rate to the nearest hundredth of a percent that will produce $2500, if $2000 is left at interest compounded semiannually for 4.5 year What is the molar concentration of nitrate ions in a 0.161M magnesium nitrate(aq) solution? Choose a product you wish to market. You will follow this product through all four P's of Marketing. Describe your goal. Is it to maximize quantity, market penetration, market share, gross revenue, profit, or other goal (such as social change)? In your one to two page pape... Identify the product concepts you used in determining the packaging. How does the packaging affect the effectiveness of the packaging? Are there any physical, social, or economic restraints that could limit or enhance your packaging? Market Research Product Concepts college physics A student moves a box of books by attaching a rope to the box and pulling with a force of 80.0 N at angle of 45o. The box of books has a mass of 25.0 kg and the coefficient of kinetic friction between the bottom of the box and the sidewalk is 0.43. Find the acceleration of the... . Which of the following is a measure of motion, with the derived units in m/s? power force resistance speed To approach a runway, a plane must began a 7 degree descent starting from the height of 2 miles above the ground. To the nearest mile, how many miles from the runway is the airplane at the start of this approach? how much heat is released when 42.3 grams of ammonia is produced? Muriatic acid is the common name of industrial-grade hydrochloric acid (HCl). One of its uses is to clean or etch concrete in preparation for sealing or painting. It is typically sold in 5.00 gallon buckets, with a concentration of 11.7 M. a.What volume of concentrated muriati... On her first test in algebra, Mary earned a score of 75, and she made an 85 on her first psychology test. If the average score on the algebra test was a 68, with s=7, and the average on the psychology test was 91, with s=6, is Mary doing better in psychology or algebra. Why? I am a two digit number, twice the sum of my digits is 22. The product of my digits is 24.My tens digit is greater than my ones digitwhat number am I? Which property would allow you to use mental computation to simplify the problem 27 + 15 + 3 + 5? 1) commutative property of addition 2) distributive property 3) additive identity 4) additive inverse what is the prepositional phrase in the following sentence, What is the price of the blue 2006 Honda Civic in the last row of cars on the lot? find a polynomial of least degree(having real coefficients) with zeros: 5, -2, 2i what is geography? Which element has no stalbe isotopes: 82Pb, 27Co, 51Sb or 90Th? What caused the rapid population growth of homo sapiens x^2=8x-9 solve by completing the square to obtain exact solutions Three men are trying to move a wardrobe. One pushes northward with a force of 100N; the second pushes eastward with a force of 173N; the third pushes in a direction 60 degrees west of south with a force of 200N. What is the resultant force on the wardrobe and it's direction? In solution, hypochlorite ion is known to decompose with a second-order rate law. If [ClO-]0 = 0.0570 M and k = 2.06x10-4 M-1·min-1, calculate the half-life for this reaction 2ClO- 2Cl- + O2. Calculate the temperature at which k = 2.0 s-1 if k = 10.0 s-1 at 371 K. The activation energy for the reaction is 30.0 kJ mol-1. if the weight of a fish varies jointly as the length and the square of the girth, and if a fish 11 inches long with a girth of 7 inches weighs 0.7lbs, find the weight of a fish 23 inches long with a girth of 18 inches. if Y varies jointly as X and the square of Z, and if Y=24 when X=2 and Z=6, find Y when X=10 and Z=9 an airplane is flying at an altitude of 3200 feet. an air- traffic controller in the tower keeps constant track of the planes distance from the tower in X feet. express the horizontal distance from the tower to a point directly below the airplane,d, in terms of X. a rancher has 310 feet of fencing with which to enclose two rectangular corrals, both of the same size. the two corrals will share one side, and a barn forms one side of both corrals. suppose the width of each corral is X feet. express the total area of the two corrals as a fu... a farmer has 200 yards of fencing to enclose three sides of a rectangular piece of property that lies next to a river. the river will serve as the fourth side. find the dimensions of property if the area is 3200 yards^2 Angle ABC and ange DBE are vertical angles,the measure of angle ABC =3x+20, and the measure or angle DBE =4x-10. Write and solve an equation to find the measure of angle ABC and the measure of angle Angle ABC and ange DBE are vertical angles,the measure of angle ABC =3x+20, and the measure or angle DBE =4x-10. Write and solve an equation to find the measure of angle ABC and the measure of angle college math prove f(x)= x^15+6 is increasing on the interval: I= (- , ) How fast would a ball have to be thrown upward to reach a maximum height of 49 ft? [Hint: Use the discriminant of the equation 16t2 − v0t + h = 0.] For this quote 'their intercourse had been one continued series of opposition. Their opinions clashed; and indeed, she had never perceived that he had cared for her opinions, as belonging to her, the individual', can someone find a technique in it? It's a good quot... Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Amber&page=2","timestamp":"2014-04-20T07:03:20Z","content_type":null,"content_length":"29379","record_id":"<urn:uuid:ade68bf4-0e8d-45ef-961a-5de5b54056ca>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] The observational standpoint for numbers Giovanni Lagnese lagnese at ngi.it Wed Jan 25 13:50:53 EST 2006 Up to the nineteenth century, euclidean geometry occupied a privileged position among all possible geometries, because it was considered the geometry of the physical space, in virtue of the apparent correspondence between outer and inner experience of space. Such a standpoint can be called observational standpoint. The observational standpoint for space was definitively proved untenable thanks principally to Albert Einstein. Today an observational standpoint for the natural numbers is still In fact, most mathematicians believe in a correspondence between the properties of the natural numbers of our intuition and the properties of the natural numbers as combinatorial/discrete aspects of reality. If a property P of the natural numbers is demonstrated from axioms intuitively "seen" to be true, most people believe that P is true also as a combinatorial property of the natural world. Inner computations are considered to be corresponding to combinatorial/discrete properties of reality, because there is the conviction that there are "truths of computation" which are independent of any hypothesis. I think that the observational standpoint is wrong for natural numbers as it is wrong for geometry. Do someone agree with me? More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2006-January/009609.html","timestamp":"2014-04-20T16:40:05Z","content_type":null,"content_length":"3617","record_id":"<urn:uuid:1d2966c8-58ca-4750-8bc7-6ef54eaa5ad6>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiple event probability January 3rd 2014, 06:37 AM #1 Jan 2014 Hi, for some reason I'm having issues with something I usually find easy. The question is as follows: events A and B are such that P(A) = 2/5, P(B)= 1/2 and P(A|B') = 4/5 a.) P(A n B') b.) P(A n B) c.) P(A u B) d.) P(A|B) next, state with reason, whether or not A and B are a) Mutually exclusive b) independent What I've done: When I try and draw out a Venn diagram (As i find them helpful for displaying data) can get P(A) to equal 4/5 when given B' however when I try and work out P(A n B) for the middle of the Venn diagram it comes out as 0.2, when subtracting that from the probabilities of A and B I end up with 0.2 for A. Which no matter what I do I can't make P(A|B') equal 4/5. So I can make my Venn diagram work and I have no idea whether to use the values without subtracting (A n B) or not. I was wondering if they were independent and there were no cases where they are both chosen so P(A n B) would = 0? Please help it would be much appreciated. I look forward to hearing from you! BigGScotty278 (AS level Maths & Further Maths) Last edited by BigGScotty278; January 3rd 2014 at 06:41 AM. Re: Multiple event probability Hi, for some reason I'm having issues with something I usually find easy. The question is as follows: events A and B are such that P(A) = 2/5, P(B)= 1/2 and P(A|B') = 4/5 a.) P(A n B') b.) P(A n B) c.) P(A u B) d.) P(A|B) next, state with reason, whether or not A and B are a) Mutually exclusive b) independent What I've done: When I try and draw out a Venn diagram (As i find them helpful for displaying data) can get P(A) to equal 4/5 when given B' however when I try and work out P(A n B) for the middle of the Venn diagram it comes out as 0.2, when subtracting that from the probabilities of A and B I end up with 0.2 for A. Which no matter what I do I can't make P(A|B') equal 4/5. So I can make my Venn diagram work and I have no idea whether to use the values without subtracting (A n B) or not. I was wondering if they were independent and there were no cases where they are both chosen so P(A n B) would = 0? Please help it would be much appreciated. I look forward to hearing from you! BigGScotty278 (AS level Maths & Further Maths) $\frac{2}{5}=P(A|B) \frac{1}{2}+\frac{4}{5} \left(1-P(B)\right)=P(A|B)\frac{1}{2}+\frac{4}{5} \cdot \frac{1}{2}$ $\frac{1}{2}P(A|B)=\frac{2}{5}-\frac{2}{5}=0$ so see if that lets you make headway. Re: Multiple event probability Hello, BigGScotty278! $\text{Given: }\:P(A)\,=\,0.4,\;P(B)\,=\,0.5,\;P(A|B')\,=\,0.8$ $\text{Find: }\;(a)\;P(A \cap B') \qquad (b)\;P(A \cap B) \qquad (c)\;P(A \cup B) \qquad (d)\;P(A|B)$ $\begin{array}{ccccc}\text{We have:} & P(A) \:=\:0.4 & P(A') \:=\:0.6 \\ & P(B) \:=\:0.5 & P(B') \:=\:0.5 \end{array}$ $P(A|B') \:=\:0.8 \quad\Rightarrow\quad \frac{P(A\cap B')}{P(B')} \:=\:0.8 \quad\Rightarrow\quad P(A\cap B') \:=\:(0.8)\!\cdot\!P(B')$ . . $P(A \cap B') \:=\:(0.8)(0.5) \quad\Rightarrow\quad P(A\cap B') \:=\:0.4$ With the above information, we have this chart: . . $\begin{array}{c||c|c||c|} & B & B' & \text{Total} \\ \hline\hline A && 0.4 & 0.4 \\ \hline A' & && 0.6 \\ \hline\hline \text{Total} & 0.5 & 0.5 & 1.0 \\ \hline \end{array}$ We can complete the chart: . . $\begin{array}{c||c|c||c|} & B & B' & \text{Total} \\ \hline\hline A &0.0& 0.4 & 0.4 \\ \hline A' & 0.5 & 0.1 & 0.6 \\ \hline\hline \text{Total} & 0.5 & 0.5 & 1.0 \\ \hline \end{array}$ And answer the questions: . . $(a)\;P(A\cap B') \:=\:0.4$ . . $(b)\;P(A \cap B) \:=\:0$ . . $(c)\;P(A\cup B) \:=\:0.9$ . . $(d)\;P(A|B) \:=\:\frac{P(A\cap B)}{P(B)} \:=\:\frac{0}{0.5} \:=\:0$ $\text{Next, state with reason, whether or not }A\text{ and }B\text{ are:}$ . . $\text{(a) Mutually exclusive} \qquad \text{(b) independent}$ $\text{(a) Since }P(A\cap B) = 0,\:A\text{ and }B\:are\text{ mutually exclusive.}$ $(b) \;P(A)\!\cdot\!P(B) \:=\:(0.4)(0.5) \:=\:0.2$ . . $P(A \cap B) \:=\:0$ $\text{Since }P(A)\!\cdot\!P(B) \,e\,P(A\cap B),\,A\text{ and }B\text{ are }not\text{ independent.}$ January 3rd 2014, 07:13 AM #2 MHF Contributor Nov 2013 January 3rd 2014, 05:08 PM #3 Super Member May 2006 Lexington, MA (USA)
{"url":"http://mathhelpforum.com/statistics/225303-multiple-event-probability.html","timestamp":"2014-04-19T23:42:00Z","content_type":null,"content_length":"48043","record_id":"<urn:uuid:c539462c-598b-410a-9b58-1ae1ac141765>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Triangle Problem October 12th 2010, 03:44 PM #1 Oct 2010 Triangle Problem You may use any pre-calculus/trigonometric procedure to solve the problem. In any case please explain everything. the lengths of the sides of a triangle are 25, 29, 36. there is a point on the longest side of the triangle whose distance from the opposite vertex is 20. what is the distance from this point to the midpoint of the shortest side? IF there are multiple answers please describe all of them. IF there are no answers please explain why. Thank you! You may use any pre-calculus/trigonometric procedure to solve the problem. In any case please explain everything. the lengths of the sides of a triangle are 25, 29, 36. there is a point on the longest side of the triangle whose distance from the opposite vertex is 20. what is the distance from this point to the midpoint of the shortest side? IF there are multiple answers please describe all of them. IF there are no answers please explain why. Thank you! I would start with the triangle on a coordinate grid to make things easier. Your first task is to solve for x. Once you find x, you can then use coordinate geometry to find solutions to your problem. Label triangle ABC: AC=25, AB=29, BC=36 ; make D the point on BC distance 20 from A Calculate angle at C Triangle ACD: you have AC=25, AD=20 and angle C; results in ACD = right triangle Smallesrt side AB = 25. Largest side BC = 36. D ia a point on BC such that AD = 20. Now ADB is a right angled triangle. If E is the mid point of AB, then DE = AE = BE. Then what is DE? October 12th 2010, 04:41 PM #2 October 12th 2010, 05:11 PM #3 MHF Contributor Dec 2007 Ottawa, Canada October 13th 2010, 02:15 AM #4 Super Member Jun 2009
{"url":"http://mathhelpforum.com/geometry/159383-triangle-problem.html","timestamp":"2014-04-21T00:34:10Z","content_type":null,"content_length":"38746","record_id":"<urn:uuid:2619cea7-8f5e-4905-b60e-e6737f1024eb>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
What is Quality Control and Quality Control Charts? In all production processes, we need to monitor the extent to which our products meet specifications. In the most general terms, there are two "enemies" of product quality: • deviations from target specifications • excessive variability around target specifications During the earlier stages of developing the production process, designed experiments are often used to optimize these two quality characteristics (see Experimental Design); the methods provided in Quality Control are on-line or in-process quality control procedures to monitor an on-going production process. For detailed descriptions of these charts and extensive annotated examples, see Buffa (1972), Duncan (1974) Grant and Leavenworth (1980), Juran (1962), Juran and Gryna (1970), Montgomery (1985, 1991), Shirland (1993), or Vaughn (1974). Two excellent introductory texts with a "how-to" approach are Hart & Hart (1989) and Pyzdek (1989); two German language texts on this subject are Rinne and Mittag (1995) and Mittag (1993). General Approach The general approach to on-line quality control is straightforward: We simply extract samples of a certain size from the ongoing production process. We then produce line charts of the variability in those samples and consider their closeness to target specifications. If a trend emerges in those lines, or if samples fall outside pre-specified limits, we declare the process to be out of control and take action to find the cause of the problem. These types of charts are sometimes also referred to as Shewhart control charts (named after W. A. Shewhart, who is generally credited as being the first to introduce these methods; see Shewhart, 1931). Interpreting the chart. The most standard display actually contains two charts (and two histograms); one is called an X-bar chart, the other is called an R chart. In both line charts, the horizontal axis represents the different samples; the vertical axis for the X-bar chart represents the means for the characteristic of interest; the vertical axis for the R chart represents the ranges. For example, suppose we want to control the diameter of piston rings that we are producing. The center line in the X-bar chart would represent the desired standard size (e.g., diameter in millimeters) of the rings, while the center line in the R chart would represent the acceptable (within-specification) range of the rings within samples; thus, this latter chart is a chart of the variability of the process (the larger the variability, the larger the range). In addition to the center line, a typical chart includes two additional horizontal lines to represent the upper and lower control limits (UCL, LCL, respectively); we will return to those lines shortly. Typically, the individual points in the chart, representing the samples, are connected by a line. If this line moves outside the upper or lower control limits or exhibits systematic patterns across consecutive samples (see Runs Tests), a quality problem may potentially exist. Elementary Concepts discusses the concept of the sampling distribution and the characteristics of the normal distribution . The method for constructing the upper and lower control limits is a straightforward application of the principles described there. Establishing Control Limits Even though we could arbitrarily determine when to declare a process out of control (that is, outside the UCL-LCL range), it is common practice to apply statistical principles to do so. Example. Suppose we want to control the mean of a variable, such as the size of piston rings. Under the assumption that the mean (and variance) of the process does not change, the successive sample means will be distributed normally around the actual mean. Moreover, without going into details regarding the derivation of this formula, we also know (because of the central limit theorem, and thus approximate normal distribution of the means; see, for example, Hoyer and Ellis, 1996) that the distribution of sample means will have a standard deviation of Sigma (the standard deviation of individual data points or measurements) over the square root of n (the sample size). It follows that approximately 95% of the sample means will fall within the limits 1.96 * Sigma/Square Root(n) (refer to Elementary Concepts for a discussion of the characteristics of the normal distribution and the central limit theorem). In practice, it is common to replace the 1.96 with 3 (so that the interval will include approximately 99% of the sample means) and to define the upper and lower control limits as plus and minus 3 sigma limits, respectively. General case. The general principle for establishing control limits just described applies to all control charts. After deciding on the characteristic we want to control, for example, the standard deviation, we estimate the expected variability of the respective characteristic in samples of the size we are about to take. Those estimates are then used to establish the control limits on the Common Types of Charts The types of charts are often classified according to the type of quality characteristic that they are supposed to monitor: there are quality control charts for variables and control charts for attributes. Specifically, the following charts are commonly constructed for controlling variables: • X-bar chart. In this chart, the sample means are plotted in order to control the mean value of a variable (e.g., size of piston rings, strength of materials, etc.). • R chart. In this chart, the sample ranges are plotted in order to control the variability of a variable. • S chart. In this chart, the sample standard deviations are plotted in order to control the variability of a variable. • S**2 chart. In this chart, the sample variances are plotted in order to control the variability of a variable. For controlling quality characteristics that represent attributes of the product, the following charts are commonly constructed: • C chart. In this chart (see example below), we plot the number of defectives (per batch, per day, per machine, per 100 feet of pipe, etc.). This chart assumes that defects of the quality attribute are rare, and the control limits in this chart are computed based on the Poisson distribution (distribution of rare events). • U chart. In this chart we plot the rate of defectives, that is, the number of defectives divided by the number of units inspected (the n; e.g., feet of pipe, number of batches). Unlike the C chart, this chart does not require a constant number of units, and it can be used, for example, when the batches (samples) are of different sizes. • Np chart. In this chart, we plot the number of defectives (per batch, per day, per machine) as in the C chart. However, the control limits in this chart are not based on the distribution of rare events, but rather on the binomial distribution. Therefore, this chart should be used if the occurrence of defectives is not rare (e.g., they occur in more than 5% of the units inspected). For example, we may use this chart to control the number of units produced with minor flaws. • P chart. In this chart, we plot the percent of defectives (per batch, per day, per machine, etc.) as in the U chart. However, the control limits in this chart are not based on the distribution of rare events but rather on the binomial distribution (of proportions). Therefore, this chart is most applicable to situations where the occurrence of defectives is not rare (e.g., we expect the percent of defectives to be more than 5% of the total number of units produced). All of these charts can be adapted for short production runs (short run charts), and for multiple process streams. Short Run Control Charts The short run control chart, or control chart for short production runs, plots observations of variables or attributes for multiple parts on the same chart. Short run control charts were developed to address the requirement that several dozen measurements of a process must be collected before control limits are calculated. Meeting this requirement is often difficult for operations that produce a limited number of a particular part during a production run. For example, a paper mill may produce only three or four (huge) rolls of a particular kind of paper (i.e., part) and then shift production to another kind of paper. But if variables, such as paper thickness, or attributes, such as blemishes, are monitored for several dozen rolls of paper of, say, a dozen different kinds, control limits for thickness and blemishes could be calculated for the transformed (within the short production run) variable values of interest. Specifically, these transformations will rescale the variable values of interest such that they are of compatible magnitudes across the different short production runs (or parts). The control limits computed for those transformed values could then be applied in monitoring thickness, and blemishes, regardless of the types of paper (parts) being produced. Statistical process control procedures could be used to determine if the production process is in control, to monitor continuing production, and to establish procedures for continuous quality improvement. For additional discussions of short run charts refer to Bhote (1988), Johnson (1987), or Montgomery (1991). Short Run Charts for Variables Nominal chart, target chart. There are several different types of short run charts. The most basic are the nominal short run chart, and the target short run chart. In these charts, the measurements for each part are transformed by subtracting a part-specific constant. These constants can either be the nominal values for the respective parts (nominal short run chart), or they can be target values computed from the (historical) means for each part (Target X-bar and R chart). For example, the diameters of piston bores for different engine blocks produced in a factory can only be meaningfully compared (for determining the consistency of bore sizes) if the mean differences between bore diameters for different sized engines are first removed. The nominal or target short run chart makes such comparisons possible. Note that for the nominal or target chart it is assumed that the variability across parts is identical, so that control limits based on a common estimate of the process sigma are applicable. Standardized short run chart. If the variability of the process for different parts cannot be assumed to be identical, then a further transformation is necessary before the sample means for different parts can be plotted in the same chart. Specifically, in the standardized short run chart the plot points are further transformed by dividing the deviations of sample means from part means (or nominal or target values for parts) by part-specific constants that are proportional to the variability for the respective parts. For example, for the short run X-bar and R chart, the plot points (that are shown in the X-bar chart) are computed by first subtracting from each sample mean a part specific constant (e.g., the respective part mean, or nominal value for the respective part), and then dividing the difference by another constant, for example, by the average range for the respective chart. These transformations will result in comparable scales for the sample means for different Short Run Charts for Attributes For attribute control charts (C, U, Np, or P charts), the estimate of the variability of the process (proportion, rate, etc.) is a function of the process average (average proportion, rate, etc.; for example, the standard deviation of a proportion p is equal to the square root of p*(1- p)/n). Hence, only standardized short run charts are available for attributes. For example, in the short run P chart, the plot points are computed by first subtracting from the respective sample p values the average part p's, and then dividing by the standard deviation of the average p's. Unequal Sample Sizes When the samples plotted in the control chart are not of equal size, then the control limits around the center line (target specification) cannot be represented by a straight line. For example, to return to the formula Sigma/Square Root(n) presented earlier for computing control limits for the X-bar chart, it is obvious that unequal n's will lead to different control limits for different sample sizes. There are three ways of dealing with this situation. Average sample size. If you want to maintain the straight-line control limits (e.g., to make the chart easier to read and easier to use in presentations), then you can compute the average n per sample across all samples, and establish the control limits based on the average sample size. This procedure is not "exact," however, as long as the sample sizes are reasonably similar to each other, this procedure is quite adequate. Variable control limits. Alternatively, you may compute different control limits for each sample, based on the respective sample sizes. This procedure will lead to variable control limits, and result in step-chart like control lines in the plot. This procedure ensures that the correct control limits are computed for each sample. However, you lose the simplicity of straight-line control limits. Stabilized (normalized) chart. The best of two worlds (straight line control limits that are accurate) can be accomplished by standardizing the quantity to be controlled (mean, proportion, etc.) according to units of sigma. The control limits can then be expressed in straight lines, while the location of the sample points in the plot depend not only on the characteristic to be controlled, but also on the respective sample n's. The disadvantage of this procedure is that the values on the vertical (Y) axis in the control chart are in terms of sigma rather than the original units of measurement, and therefore, those numbers cannot be taken at face value (e.g., a sample with a value of 3 is 3 times sigma away from specifications; in order to express the value of this sample in terms of the original units of measurement, we need to perform some computations to convert this number back). Control Charts for Variables vs. Charts for Attributes Sometimes, the quality control engineer has a choice between variable control charts and attribute control charts. Advantages of attribute control charts. Attribute control charts have the advantage of allowing for quick summaries of various aspects of the quality of a product, that is, the engineer may simply classify products as acceptable or unacceptable, based on various quality criteria. Thus, attribute charts sometimes bypass the need for expensive, precise devices and time-consuming measurement procedures. Also, this type of chart tends to be more easily understood by managers unfamiliar with quality control procedures; therefore, it may provide more persuasive (to management) evidence of quality problems. Advantages of variable control charts. Variable control charts are more sensitive than attribute control charts (see Montgomery, 1985, p. 203). Therefore, variable control charts may alert us to quality problems before any actual "unacceptables" (as detected by the attribute chart) will occur. Montgomery (1985) calls the variable control charts leading indicators of trouble that will sound an alarm before the number of rejects (scrap) increases in the production process. Control Chart for Individual Observations Variable control charts can by constructed for individual observations taken from the production line, rather than samples of observations. This is sometimes necessary when testing samples of multiple observations would be too expensive, inconvenient, or impossible. For example, the number of customer complaints or product returns may only be available on a monthly basis; yet, you want to chart those numbers to detect quality problems. Another common application of these charts occurs in cases when automated testing devices inspect every single unit that is produced. In that case, you are often primarily interested in detecting small shifts in the product quality (for example, gradual deterioration of quality due to machine wear). The CUSUM, MA, and EWMA charts of cumulative sums and weighted averages discussed below may be most applicable in those situations. Out-Of-Control Process: Runs Tests As mentioned earlier in the introduction, when a sample point (e.g., mean in an X-bar chart) falls outside the control lines, you have reason to believe that the process may no longer be in control. In addition, you should look for systematic patterns of points (e.g., means) across samples, because such patterns may indicate that the process average has shifted. These tests are also sometimes referred to as AT&T runs rules (see AT&T, 1959) or tests for special causes (e.g., see Nelson, 1984, 1985; Grant and Leavenworth, 1980; Shirland, 1993). The term special or assignable causes as opposed to chance or common causes was used by Shewhart to distinguish between a process that is in control, with variation due to random (chance) causes only, from a process that is out of control, with variation that is due to some non-chance or special (assignable) factors (cf. Montgomery, 1991, p. 102). As the sigma control limits discussed earlier, the runs rules are based on "statistical" reasoning. For example, the probability of any sample mean in an X-bar control chart falling above the center line is equal to 0.5, provided (1) that the process is in control (i.e., that the center line value is equal to the population mean), (2) that consecutive sample means are independent (i.e., not auto-correlated), and (3) that the distribution of means follows the normal distribution. Simply stated, under those conditions there is a 50-50 chance that a mean will fall above or below the center line. Thus, the probability that two consecutive means will fall above the center line is equal to 0.5 times 0.5 = 0.25. Accordingly, the probability that 9 consecutive samples (or a run of 9 samples) will fall on the same side of the center line is equal to 0.5**9 = .00195. Note that this is approximately the probability with which a sample mean can be expected to fall outside the 3- times sigma limits (given the normal distribution, and a process in control). Therefore, you could look for 9 consecutive sample means on the same side of the center line as another indication of an out-of-control condition. Refer to Duncan (1974) for details concerning the "statistical" interpretation of the other (more complex) tests. Zone A, B, C. Customarily, to define the runs tests, the area above and below the chart center line is divided into three "zones." By default, Zone A is defined as the area between 2 and 3 times sigma above and below the center line; Zone B is defined as the area between 1 and 2 times sigma, and Zone C is defined as the area between the center line and 1 times sigma. 9 points in Zone C or beyond (on one side of central line). If this test is positive (i.e., if this pattern is detected), then the process average has probably changed. Note that it is assumed that the distribution of the respective quality characteristic in the plot is symmetrical around the mean. This is, for example, not the case for R charts, S charts, or most attribute charts. However, this is still a useful test to alert the quality control engineer to potential shifts in the process. For example, successive samples with less-than-average variability may be worth investigating, since they may provide hints on how to decrease the variation in the process. 6 points in a row steadily increasing or decreasing. This test signals a drift in the process average. Often, such drift can be the result of tool wear, deteriorating maintenance, improvement in skill, etc. (Nelson, 1985). 14 points in a row alternating up and down. If this test is positive, it indicates that two systematically alternating causes are producing different results. For example, you may be using two alternating suppliers, or monitor the quality for two different (alternating) shifts. 2 out of 3 points in a row in Zone A or beyond. This test provides an "early warning" of a process shift. Note that the probability of a false-positive (test is positive but process is in control) for this test in X-bar charts is approximately 2%. 4 out of 5 points in a row in Zone B or beyond. Like the previous test, this test may be considered to be an "early warning indicator" of a potential process shift. The false- positive error rate for this test is also about 2%. 15 points in a row in Zone C (above and below the center line). This test indicates a smaller variability than is expected (based on the current control limits). 8 points in a row in Zone B, A, or beyond, on either side of the center line (without points in Zone C). This test indicates that different samples are affected by different factors, resulting in a bimodal distribution of means. This may happen, for example, if different samples in an X-bar chart where produced by one of two different machines, where one produces above average parts, and the other below average parts. Operating Characteristic (OC) Curves A common supplementary plot to standard quality control charts is the so-called operating characteristic or OC curve (see example below). One question that comes to mind when using standard variable or attribute charts is how sensitive is the current quality control procedure? Put in more specific terms, how likely is it that you will not find a sample (e.g., mean in an X-bar chart) outside the control limits (i.e., accept the production process as "in control"), when, in fact, it has shifted by a certain amount? This probability is usually referred to as the Operating characteristic curves are extremely useful for exploring the power of our quality control procedure. The actual decision concerning sample sizes should depend not only on the cost of implementing the plan (e.g., cost per item sampled), but also on the costs resulting from not detecting quality problems. The OC curve allows the engineer to estimate the probabilities of not detecting shifts of certain sizes in the production quality. Process Capability Indices For variable control charts, it is often desired to include so-called process capability indices in the summary graph. In short, process capability indices express (as a ratio) the proportion of parts or items produced by the current process that fall within user-specified limits (e.g., engineering tolerances). For example, the so-called Cp index is computed as: C[p] = (USL-LSL)/(6*sigma) where sigma is the estimated process standard deviation, and USL and LSL are the upper and lower specification (engineering) limits, respectively. If the distribution of the respective quality characteristic or variable (e.g., size of piston rings) is normal, and the process is perfectly centered (i.e., the mean is equal to the design center), then this index can be interpreted as the proportion of the range of the standard normal curve (the process width) that falls within the engineering specification limits. If the process is not centered, an adjusted index C[pk] is used instead. For a "capable" process, the C[p] index should be greater than 1, that is, the specification limits would be larger than 6 times the sigma limits, so that over 99% of all items or parts produced could be expected to fall inside the acceptable engineering specifications. For a detailed discussion of this and other indices, refer to Process Analysis Other Specialized Control Charts The types of control charts mentioned so far are the "workhorses" of quality control, and they are probably the most widely used methods. However, with the advent of inexpensive desktop computing, procedures requiring more computational effort have become increasingly popular. X-bar Charts For Non-Normal Data. The control limits for standard X-bar charts are constructed based on the assumption that the sample means are approximately normally distributed. Thus, the underlying individual observations do not have to be normally distributed, since, as the sample size increases, the distribution of the means will become approximately normal (i.e., see discussion of the central limit theorem in the Elementary Concepts; however, note that for R, S¸ and S**2 charts, it is assumed that the individual observations are normally distributed). Shewhart (1931) in his original work experimented with various non-normal distributions for individual observations, and evaluated the resulting distributions of means for samples of size four. He concluded that, indeed, the standard normal distribution-based control limits for the means are appropriate, as long as the underlying distribution of observations are approximately normal. (See also Hoyer and Ellis, 1996, for an introduction and discussion of the distributional assumptions for quality control charting.) . However, as Ryan (1989) points out, when the distribution of observations is highly skewed and the sample sizes are small, then the resulting standard control limits may produce a large number of false alarms (increased alpha error rate), as well as a larger number of false negative ("process-is-in-control") readings (increased beta-error rate). You can compute control limits (as well as process capability indices) for X-bar charts based on so-called Johnson curves(Johnson, 1949), which allow to approximate the skewness and kurtosis for a large range of non-normal distributions (see also Fitting Distributions by Moments, in Process Analysis). These non- normal X-bar charts are useful when the distribution of means across the samples is clearly skewed, or otherwise non-normal. Hotelling T**2 Chart. When there are multiple related quality characteristics (recorded in several variables), we can produce a simultaneous plot (see example below) for all means based on Hotelling multivariate T**2 statistic (first proposed by Hotelling, 1947). Cumulative Sum (CUSUM) Chart. The CUSUM chart was first introduced by Page (1954); the mathematical principles involved in its construction are discussed in Ewan (1963), Johnson (1961), and Johnson and Leone (1962). If you plot the cumulative sum of deviations of successive sample means from a target specification, even minor, permanent shifts in the process mean will eventually lead to a sizable cumulative sum of deviations. Thus, this chart is particularly well-suited for detecting such small permanent shifts that may go undetected when using the X-bar chart. For example, if, due to machine wear, a process slowly "slides" out of control to produce results above target specifications, this plot would show a steadily increasing (or decreasing) cumulative sum of deviations from specification. To establish control limits in such plots, Barnhard (1959) proposed the so-called V- mask, which is plotted after the last sample (on the right). The V-mask can be thought of as the upper and lower control limits for the cumulative sums. However, rather than being parallel to the center line; these lines converge at a particular angle to the right, producing the appearance of a V rotated on its side. If the line representing the cumulative sum crosses either one of the two lines, the process is out of control. Moving Average (MA) Chart. To return to the piston ring example, suppose we are mostly interested in detecting small trends across successive sample means. For example, we may be particularly concerned about machine wear, leading to a slow but constant deterioration of quality (i.e., deviation from specification). The CUSUM chart described above is one way to monitor such trends, and to detect small permanent shifts in the process average. Another way is to use some weighting scheme that summarizes the means of several successive samples; moving such a weighted mean across the samples will produce a moving average chart (as shown in the following graph). Exponentially-weighted Moving Average (EWMA) Chart. The idea of moving averages of successive (adjacent) samples can be generalized. In principle, in order to detect a trend we need to weight successive samples to form a moving average; however, instead of a simple arithmetic moving average, we could compute a geometric moving average (this chart (see graph below) is also called Geometric Moving Average chart, see Montgomery, 1985, 1991). Specifically, we could compute each data point for the plot as: z[t] = [t] + (1-[t-1] In this formula, each point z[t] is computed as x-bar[t], plus one minus Regression Control Charts. Sometimes we want to monitor the relationship between two aspects of our production process. For example, a post office may want to monitor the number of worker-hours that are spent to process a certain amount of mail. These two variables should roughly be linearly correlated with each other, and the relationship can probably be described in terms of the well-known Pearson product-moment correlation coefficient r. This statistic is also described in Basic Statistics. The regression control chart contains a regression line that summarizes the linear relationship between the two variables of interest. The individual data points are also shown in the same graph. Around the regression line we establish a confidence interval within which we would expect a certain proportion (e.g., 95%) of samples to fall. Outliers in this plot may indicate samples where, for some reason, the common relationship between the two variables of interest does not hold. Applications. There are many useful applications for the regression control chart. For example, professional auditors may use this chart to identify retail outlets with a greater than expected number of cash transactions given the overall volume of sales, or grocery stores with a greater than expected number of coupons redeemed, given the total sales. In both instances, outliers in the regression control charts (e.g., too many cash transactions; too many coupons redeemed) may deserve closer scrutiny. Pareto Chart Analysis. Quality problems are rarely spread evenly across the different aspects of the production process or different plants. Rather, a few "bad apples" often account for the majority of problems. This principle has come to be known as the Pareto principle, which basically states that quality losses are mal-distributed in such a way that a small percentage of possible causes are responsible for the majority of the quality problems. For example, a relatively small number of "dirty" cars are probably responsible for the majority of air pollution; the majority of losses in most companies result from the failure of only one or two products. To illustrate the "bad apples", one plots the Pareto chart, which simply amounts to a histogram showing the distribution of the quality loss (e.g., dollar loss) across some meaningful categories; usually, the categories are sorted into descending order of importance (frequency, dollar amounts, etc.). Very often, this chart provides useful guidance as to where to direct quality improvement efforts.
{"url":"http://www.statsoft.com/Textbook/Quality-Control-Charts/button/2","timestamp":"2014-04-20T23:27:57Z","content_type":null,"content_length":"68415","record_id":"<urn:uuid:881b6dca-4905-4865-b243-efa1134c826a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Sensitivity Plots in MATLAB I wish to use a for loop i already made and replace a number with a variable defined as a range of numbers, then plot the max of another function that used the function with the new variable. How would I do this? x = [0:.5:1] r = 5*x t = 3/r new program: x = same n = [0:.25:.5] r = n*x t = 3/r for each value of n, i want to plot the cooresponding t, and plot n vs t
{"url":"http://www.physicsforums.com/showthread.php?t=87774","timestamp":"2014-04-20T03:17:03Z","content_type":null,"content_length":"24671","record_id":"<urn:uuid:8aef5ff5-24b7-4d21-b5bb-668634d78b39>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
Spin TQFT's in dimensions (1+1) MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. I don't seem to be able to find anything written about Spin TQFT's in dimension (1+1). Does anyone know any references? Or is there some reason it is uninteresting? up vote 12 down vote favorite tqft at.algebraic-topology gt.geometric-topology add comment I don't seem to be able to find anything written about Spin TQFT's in dimension (1+1). Does anyone know any references? Or is there some reason it is uninteresting? This is covered in Moore and Segal "D-branes and K-theory in 2D topological field theory". In particular on around page 16 there is a characterization analogous to "1+1 TQFTs up vote 8 down vote = Commutative Frobenius algebras". add comment This is covered in Moore and Segal "D-branes and K-theory in 2D topological field theory". In particular on around page 16 there is a characterization analogous to "1+1 TQFTs = Commutative Frobenius
{"url":"http://mathoverflow.net/questions/61718/spin-tqfts-in-dimensions-11","timestamp":"2014-04-16T07:53:35Z","content_type":null,"content_length":"49301","record_id":"<urn:uuid:48d82033-1e4c-4347-86a3-3c60413e3273>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help Hi can someone please help me the following proofs. Look at the picture. Thanks i really appriate it We know that det[kA]=k^n *det[A] ...........for nxn matrix A (v) is false for 2x2 marix A det[-A]=(-1)^2 det[A] det[-A]= det[A] (vi) is true for 3x3 martix A det[-A]=(-1)^3 *det[A] det[-A]=- det[A] (vii) is false for 3x3 martix A det[αA]=α^3 *det[A]
{"url":"http://mathhelpforum.com/advanced-algebra/61408-help-need.html","timestamp":"2014-04-20T08:47:54Z","content_type":null,"content_length":"31485","record_id":"<urn:uuid:fc307c02-ade0-420b-8345-695885434537>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Haskell Hierarchical Libraries (base package) Contents Index Portability portable Data.Sequence Stability experimental Maintainer ross@soi.city.ac.uk General purpose finite sequences. Apart from being finite and having strict operations, sequences also differ from lists in supporting a wider variety of operations efficiently. An amortized running time is given for each operation, with n referring to the length of the sequence and i being the integral index used by some operations. These bounds hold even in a persistent (shared) setting. The implementation uses 2-3 finger trees annotated with sizes, as described in section 4.2 of Note: Many of these operations have the same names as similar operations on lists in the Prelude. The ambiguity may be resolved using either qualification or the hiding clause. data Seq a General-purpose finite sequences. empty :: Seq a O(1). The empty sequence. singleton :: a -> Seq a O(1). A singleton sequence. (<|) :: a -> Seq a -> Seq a O(1). Add an element to the left end of a sequence. Mnemonic: a triangle with the single element at the pointy end. (|>) :: Seq a -> a -> Seq a O(1). Add an element to the right end of a sequence. Mnemonic: a triangle with the single element at the pointy end. (><) :: Seq a -> Seq a -> Seq a O(log(min(n1,n2))). Concatenate two sequences. fromList :: [a] -> Seq a O(n). Create a sequence from a finite list of elements. null :: Seq a -> Bool O(1). Is this the empty sequence? length :: Seq a -> Int O(1). The number of elements in the sequence. data ViewL a View of the left end of a sequence. EmptyL empty sequence (:<) a (Seq a) leftmost element and the rest of the sequence viewl :: Seq a -> ViewL a O(1). Analyse the left end of a sequence. data ViewR a View of the right end of a sequence. EmptyR empty sequence (:>) (Seq a) a the sequence minus the rightmost element, and the rightmost element viewr :: Seq a -> ViewR a O(1). Analyse the right end of a sequence. index :: Seq a -> Int -> a O(log(min(i,n-i))). The element at the specified position adjust :: (a -> a) -> Int -> Seq a -> Seq a O(log(min(i,n-i))). Update the element at the specified position update :: Int -> a -> Seq a -> Seq a O(log(min(i,n-i))). Replace the element at the specified position take :: Int -> Seq a -> Seq a O(log(min(i,n-i))). The first i elements of a sequence. drop :: Int -> Seq a -> Seq a O(log(min(i,n-i))). Elements of a sequence after the first i. splitAt :: Int -> Seq a -> (Seq a, Seq a) O(log(min(i,n-i))). Split a sequence at a given position. reverse :: Seq a -> Seq a O(n). The reverse of a sequence. Produced by Haddock version 0.7
{"url":"http://cvs.haskell.org/Hugs/pages/libraries/base/Data-Sequence.html","timestamp":"2014-04-20T10:48:48Z","content_type":null,"content_length":"22797","record_id":"<urn:uuid:f780563d-9121-4c49-83c9-9a80c263076e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Darien, CT Science Tutor Find a Darien, CT Science Tutor ...My research background is Biochemistry, Molecular Biology, and some experience in Organic Chemistry I have tutoring experience in both High School level and College level Biology, General Chemistry, Organic Chemistry, and Biochemistry classes. I have experience in tutoring both high school leve... 4 Subjects: including biochemistry, biology, chemistry, organic chemistry ...Give me two hours with you and I will have you feel like you are a graphics artist. Self taught, web creation is very easy once you learn some simple techniques. I have created many websites, including three of my own. 12 Subjects: including electrical engineering, computer science, Microsoft Word, Microsoft Excel ...I have studied several Upper Division courses at SUNY Stony Brook, including Primitive Technology, in which I developed an experiment reproducing two ancient Egyptian drills one utilizing copper and the other a flint stone tool drill bit. For this course I had to write a twenty paged paper expla... 14 Subjects: including archaeology, anthropology, reading, ESL/ESOL ...I am proficient in all of the subject areas contained in each test: 1. Verbal Ability (synonyms and sentence completions) 2. Quantitative Reasoning 3. 42 Subjects: including ACT Science, reading, English, writing ...I have taught Environmental Science (Ecology) as the third science to 11th and 12th grade high school students. If you are interested, please contact me for further conversation. I have a NYS certificate in special education. 5 Subjects: including biology, anatomy, ecology, special needs Related Darien, CT Tutors Darien, CT Accounting Tutors Darien, CT ACT Tutors Darien, CT Algebra Tutors Darien, CT Algebra 2 Tutors Darien, CT Calculus Tutors Darien, CT Geometry Tutors Darien, CT Math Tutors Darien, CT Prealgebra Tutors Darien, CT Precalculus Tutors Darien, CT SAT Tutors Darien, CT SAT Math Tutors Darien, CT Science Tutors Darien, CT Statistics Tutors Darien, CT Trigonometry Tutors Nearby Cities With Science Tutor East Hills, NY Science Tutors East Northport Science Tutors Eastchester Science Tutors Glen Cove, NY Science Tutors Greenwich, CT Science Tutors Hauppauge Science Tutors Kings Park Science Tutors New Canaan Science Tutors Noroton Heights, CT Science Tutors Noroton, CT Science Tutors Norwalk, CT Science Tutors Stamford, CT Science Tutors Tokeneke, CT Science Tutors Westport, CT Science Tutors Wilton, CT Science Tutors
{"url":"http://www.purplemath.com/Darien_CT_Science_tutors.php","timestamp":"2014-04-18T18:59:02Z","content_type":null,"content_length":"23697","record_id":"<urn:uuid:72a344eb-f314-49db-8e09-49c263a28ad5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Astoria, NY ACT Tutor Find an Astoria, NY ACT Tutor ...I have a long background in Finance, mathematics and statistics, including Fellowship status as a Chartered Certified Accountant. I qualified with KPMG, then working as Audit Manager in the Caribbean. During my time as a mathematics and test preparation tutor, I have worked extensively with ADD students of all ages. 55 Subjects: including ACT Math, English, writing, reading ...Hey, what else do you need to know? Well... maybe you'd like to know that I have been freelance tutoring for over ten years, and that I specialize in SAT and ACT math and science sections? That means I can help with both the content and the strategy of those tests. 17 Subjects: including ACT Math, calculus, geometry, biology ...I love words, and I think it helps students that I'm able to define the words we encounter in a fun and relatable way, without the aid of a dictionary. I also teach a number of memory techniques to assist students in building their vocabularies and to aid in the memorization they do for other su... 36 Subjects: including ACT Math, English, chemistry, calculus ...I also help foreign graduate students perfect their grammar and delivery in writing. I believe in building confidence while teaching material. We all excel faster in some areas and slower in 25 Subjects: including ACT Math, English, reading, biology ...Background: I have a BS in Electrical Engineering from MIT and an MBA with Distinction from the University of Michigan. Over the last 15 years, I have worked in Management Consulting and Investment Banking, but tutoring and education are now my primary focus.I have been tutoring various standard... 11 Subjects: including ACT Math, calculus, geometry, algebra 1 Related Astoria, NY Tutors Astoria, NY Accounting Tutors Astoria, NY ACT Tutors Astoria, NY Algebra Tutors Astoria, NY Algebra 2 Tutors Astoria, NY Calculus Tutors Astoria, NY Geometry Tutors Astoria, NY Math Tutors Astoria, NY Prealgebra Tutors Astoria, NY Precalculus Tutors Astoria, NY SAT Tutors Astoria, NY SAT Math Tutors Astoria, NY Science Tutors Astoria, NY Statistics Tutors Astoria, NY Trigonometry Tutors
{"url":"http://www.purplemath.com/Astoria_NY_ACT_tutors.php","timestamp":"2014-04-20T21:41:42Z","content_type":null,"content_length":"23617","record_id":"<urn:uuid:16dd2713-d1d9-45d5-a25e-6f00c618e962>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Darien, CT Science Tutor Find a Darien, CT Science Tutor ...My research background is Biochemistry, Molecular Biology, and some experience in Organic Chemistry I have tutoring experience in both High School level and College level Biology, General Chemistry, Organic Chemistry, and Biochemistry classes. I have experience in tutoring both high school leve... 4 Subjects: including biochemistry, biology, chemistry, organic chemistry ...Give me two hours with you and I will have you feel like you are a graphics artist. Self taught, web creation is very easy once you learn some simple techniques. I have created many websites, including three of my own. 12 Subjects: including electrical engineering, computer science, Microsoft Word, Microsoft Excel ...I have studied several Upper Division courses at SUNY Stony Brook, including Primitive Technology, in which I developed an experiment reproducing two ancient Egyptian drills one utilizing copper and the other a flint stone tool drill bit. For this course I had to write a twenty paged paper expla... 14 Subjects: including archaeology, anthropology, reading, ESL/ESOL ...I am proficient in all of the subject areas contained in each test: 1. Verbal Ability (synonyms and sentence completions) 2. Quantitative Reasoning 3. 42 Subjects: including ACT Science, reading, English, writing ...I have taught Environmental Science (Ecology) as the third science to 11th and 12th grade high school students. If you are interested, please contact me for further conversation. I have a NYS certificate in special education. 5 Subjects: including biology, anatomy, ecology, special needs Related Darien, CT Tutors Darien, CT Accounting Tutors Darien, CT ACT Tutors Darien, CT Algebra Tutors Darien, CT Algebra 2 Tutors Darien, CT Calculus Tutors Darien, CT Geometry Tutors Darien, CT Math Tutors Darien, CT Prealgebra Tutors Darien, CT Precalculus Tutors Darien, CT SAT Tutors Darien, CT SAT Math Tutors Darien, CT Science Tutors Darien, CT Statistics Tutors Darien, CT Trigonometry Tutors Nearby Cities With Science Tutor East Hills, NY Science Tutors East Northport Science Tutors Eastchester Science Tutors Glen Cove, NY Science Tutors Greenwich, CT Science Tutors Hauppauge Science Tutors Kings Park Science Tutors New Canaan Science Tutors Noroton Heights, CT Science Tutors Noroton, CT Science Tutors Norwalk, CT Science Tutors Stamford, CT Science Tutors Tokeneke, CT Science Tutors Westport, CT Science Tutors Wilton, CT Science Tutors
{"url":"http://www.purplemath.com/Darien_CT_Science_tutors.php","timestamp":"2014-04-18T18:59:02Z","content_type":null,"content_length":"23697","record_id":"<urn:uuid:72a344eb-f314-49db-8e09-49c263a28ad5>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Tuning Interacting Controllers Is the Famous Ziegler-Nichols (ZN) Open-Loop Tuning/Closed-Loop Tuning Parameter Calculation for Interacting or Non-Interacting PID? This column is moderated by Béla Lipták, automation and safety consultant, former Chief Instrument Engineer of C&R, former Yale professor of process control and the editor of the Instrument Engineer's Handbook. If you have automation-related questions for this column, write to liptakbela@aol.com. Q: I was reading your answer on Controlglobal.com regarding "Using Feed-Forward PID for External Reset." My question is whether the famous Ziegler-Nichols (ZN) open-loop tuning/closed-loop tuning parameter calculation is for interacting or non-interacting PID? Kumar Chennai A: The design of the old pneumatic controllers made their interacting behavior unavoidable because their three settings (gain spring, integral and derivative restriction) were physically interconnected (Figure 1) and, therefore, if one setting changed, it affected all. One of the advantages of this design was that the actual working derivative (D) could never become more than one quarter of the integral (I). This feature provided safety, because if D > I/4, the controller action reverses, which can cause accidents. The tuning parameters in those early days were named differently than today: Gain (G) was named proportional band (PB = 100/G), integral (I) was usually given in units of repeats per minute, while today the units of both integral (I) and derivative (D) are in minutes. Now, turning to the subject of tuning: any controller (interacting, non-interacting or any other), can be tuned by either open- or closed-loop methods, and the dynamic parameters obtained can be converted into PID settings by any of the algorithms developed over the years (ZN, Shinskey, 3 C, Cohen-Coon, etc.). Open-loop tuning means that we evaluate only the dynamics of the process, while closed-loop means we view the dynamics of the total loop. Open-loop tuning considers the response of the controlled variable only to load change (not setpoint change). The load is "stepped" after bringing the process to a steady state, and then applying a step change (suddenly opening or closing the control valve by some percentage). This step change causes the controlled variable to react. For example, in case of a heat transfer process, increasing the steam valve opening causes the temperature to rise. After the step change, it takes some time for the controlled variable to react, and this we call the "dead time" (T[d]). After Td has passed, the process starts to respond. The maximum rate of rise is called the reaction rate, and the process time constant (T) is the time it takes for the controlled variable to reach 63% of the rise (or decay). The process gain is the ratio of the total percent rise (or fall) divided by the size (in percent) of the step change that caused it. A process is easy to control if its Td is short, and T is large (the process response is slow). In the closed-loop method, the controller is in automatic. The method applied can either be the "ultimate" or the "damped oscillation" method. Seventy years ago, in 1942, Ziegler and Nichols developed the ultimate method. It is applied by determining the ultimate gain (K[u]) and the ultimate period (P[u]). K[u] is the ultimate gain that causes continuous cycling. When the loop is cycling in (un-dampened oscillation), the loop gain 1.0 and the amplitude of the sinusoidal is constant. ZN recommends tuning for quarter-amplitude damping (the amplitude of each cycle is one-fourth of the previous), which occurs when the loop gain is 0.5, meaning that the product of the gains in the loop components (process, sensor, transmitter, controller and valve) is 0.5. For proportional (P) controllers, the period of oscillation is 2 to 5 dead times; for PI it is 3 to 5; and for PID around 3 periods. This in flow loops results in oscillation periods of 1 to 3 seconds; in level loops, 3 to 30 seconds; in pressure loops, 5 to 100 seconds; in temperature loops, 30 to 20 minutes; and in analytical loops, minutes to hours. For non-interacting PID loops with no dead time, one would set integral (I) in minutes/repeat to a value equaling 50% of the period of oscillation and the derivative time (D) to about 18 % of that period. The advantages of open-loop tuning include its speed (you do not need to wait for several cycles), the fact that size of the amplitude of upset is predictable, and the test can be performed before the control loop is installed. The disadvantages include that only the process dynamics are determined (the dynamic contributions of the loop components are not) and that, on noisy processes, the inflection point of the reaction curve is hard to determine. What I normally do is to select the preliminary settings by the open-loop method and refine them later with the closed-loop one. Béla Lipták A: Ziegler and Nichols used an interacting PID controller in their studies, but it was even more interacting than the current interacting model, whose proportional gain is multiplied by the factor (1 + D/I), where D and I are derivative and integral time constants respectively. Its positive and negative feedback loops (providing integral and derivative action respectively) around the amplifier were in parallel. As a result, the controller proportional gain was multiplied by the factor (1 + D/I)/(1 - D/I), This causes the gain to reverse signs as D > I, which was carefully avoided. As a result, they kept the D/I ratio at 1:4, rather than my choice of 1:2.7. For more detail, see Shinskey, Feedback Controllers for the Process Industrues, McGraw-Hill, 1994, p.71-73, regarding the Taylor Fulscope controller. Greg Shinskey A: The PID equation typically used for the digital feedback loop control is designed to approximate traditional pneumatic controllers that were in use when Ziegler and Nichols created their loop-tuning methods. This form of the equation is called the non-interacting or standard form. Here it is described in a clip from the Wikipedia page on PID loop control: http://en.wikipedia.org/wiki The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms is the standard form. In this form the Kp gain is applied to the Ioutand Dout terms, yielding: T[i] is the integral time T[d] is the derivative time. In this standard form, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value, which is compensated for future and past errors. The addition of the proportional and derivative components effectively predicts the error value at T[i] seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral component adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them in seconds (or samples). The resulting compensated single error value is scaled by the single gain K[p]. In the ideal parallel form, shown in the controller theory section: Clearly, adjustment of the Kp term affects the gain of the integral and derivative terms, K[i] = K[p]/T[i] and K[d] = K[p]T[d], but this is still referred to as the non-interactive form by tradition. Wikipedia goes on to show the parallel form of the PID equation typically used outside process control in which each of the three terms is independent. Dick Caro A: The best answer is "neither." The Ziegler-Nichols tuning methods were developed for a Taylor Fulscope pneumatic controller. (Both Ziegler and Nichols were employees of Taylor Instrument Co., Rochester, NY.) Taylor was an early leader in pneumatic instruments. Over the years, the company was acquired by a number of companies, and now is a part of ABB. They developed these rules before control theory, transfer functions, etc. became common knowledge for control engineers. I've done a detailed analysis of the Fulscope, and though I don't recall the exact transfer function I came up with, I do recall that it was similar to, but did not exactly match, either the interacting nor the non-interacting form of PID. Perhaps a more appropriate question to ask would be "which form of PID are these tuning relations best suited for?" Suppose there had not have been a Ziegler and Nichols, and the relations were suddenly discovered, say, under a rock or floating in the sea in a bottle, with no indication of source nor intended use. Then, the only question that could be asked is, "What should they be used In my opinion, based on numerous simulation studies using a variety of process models, they are best suited to the non-interactive form of PID. Then, if one has an interactive PID, there are widely published conversion relations to go from one form to the other. Harold Wade
{"url":"http://www.controlglobal.com/articles/2012/liptak-tuning-interacting-controllers/?show=all","timestamp":"2014-04-17T03:49:46Z","content_type":null,"content_length":"69313","record_id":"<urn:uuid:1dc3d562-b01e-486e-8b62-74b54befb01a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Support Forum: 'Stochastic ODE' topic Author Comment/Response In response to Tom Zeller's Response to my previous problem concerning the Stochastic ODE: The error message is not the only problem. If it were, I wouldn't care, I'd just ignore the error message and use the result. But the result itself is wrong. The reason I know that is the following: My system of ODE's contains a random component, which means that the time derivative is a random number at each instant of time. That means that the value of the time derivative will jump around discontinuously as time progresses (roughly within certain bounds). Therefore the trajectory itself in the (q,p) plane will have two properties: (1) it will look jagged. (2) It will be different each time I solve the ODE. The solutions generated by Mathematica satisfy (2) but not (1). I played around and studied the solutions a little, and discovered the following oddity: Apparently, Mathematica evaluates the quantity by instead evaluating the quantity This is perfectly sound and logical. This means that when it encounters the quantity Random[NormalDistribution[0, Sqrt[1/(1 + q[t]^2)]], it sees it as which is fine. The problem comes about because what it is doing is evaluating the Random[...] only ONCE, and keeping that value for the rest of it's numerics, and that is very wrong. It SHOULD be getting a new value of Random[...] at each iteration. This, in effect, changes my stochastic system of ODE's into a NON-stochastic system. So the question remains: How can I get Mathematica to evaluate many, many times as it iterates through the Runge-Kutta method (or whatever it's using)? To repeat: If the solution, as plotted in the (q,p) plane, does not look "jagged", something is wrong. As it stands, the solutions produced are smooth. Thanks for any further help! John Barber URL: ,
{"url":"http://forums.wolfram.com/student-support/topics/4911","timestamp":"2014-04-19T09:37:49Z","content_type":null,"content_length":"29954","record_id":"<urn:uuid:cd6bcf63-7dd9-4517-bfa2-ca51d62175b5>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayesian Diabetes Projections by CDC October 22, 2010 By Matt Shotwell Bayesian methods are supporting decisions and news at the national level! The Centers for Disease Control and Prevention summarizes a report published in the journal Population Health Metrics. The news also made it to the national media. The report (JP Boyle, TJ Thompson, EW Gregg, LE Barker, and DF Williamson (2010) “Projection of the year 2050 burden of diabetes in the US adult population: dynamic modeling of incidence, mortality, and prediabetes prevalence.” Population Health Metrics. 8:29) projects a two fold increase in the annual incidence of diabetes among American adults. The authors project the prevalence of diabetes to increase from 14% to between 25% and 28% by 2050. However, the authors claim that “these projected increases are largely attributable to the aging of the US population, increasing numbers of members of higher-risk minority groups in the population, and people with diabetes living longer.” The authors model the incidence of diabetes $y_t$ at year $t$ according to the Bayesian nonlinear model: $begin{array}{r c l} y_t & sim & N(mu_t, s_t^2) \ mu_t & = & rho / ( 1 + exp(lambda_0 + lambda_1t)) + epsilon_t \ rho & sim & beta(14, 848), end{array}$ is a logistic function of time with asymptote . The parameters were given diffuse normal priors, and was modeled using a variety of autoregressive strategies. The authors use for posterior summary, also citing Bayesian Data Analysis by Gelman et al. for the author, please follow the link and comment on his blog: BioStatMatt » R daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/bayesian-diabetes-projections-by-cdc/","timestamp":"2014-04-21T07:24:20Z","content_type":null,"content_length":"37866","record_id":"<urn:uuid:29906f86-f501-4087-be80-3da6284bfac7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
chart of 3-ball October 26th 2007, 03:26 AM #1 Nov 2006 chart of 3-ball if I have the unit ball in R^3, that is {(x,y,z) | x^2+y^2+z^2 <= 1 } = B and i have to find a chart, can then just use the chart where f(x,y,z) = (x,y,z) for (x,y,z) in B. That is my first question. My next question is if I can just do this, what about if I identify (x,y,z) with (-x,-y,-z), then i gues my chart have to illustrate this by sending (x,y,z) and (-x,-y,-z) to the same point, ie. f(-x,-y,-z) = f(x,y,z) is that correct that the way to interpreted that to point are the same, that the chart have to do that. Ps. I don't need anyone to give me the charts, (i like to try and find them myself) but I just eanted to make sure I'm going the right way. Thanks in advance. f(-x,-y,-z) is the point on the other octant opposite to where f(x,y,z) is. Or, f(-x,-y,-z) is the mirror image of f(x,y,z) relative to the center of B. I don't understand what you are saying, I have the point (x,y,z) and then I say that this is equivalent to (-x,-y,-z). I know where the points are, but do my charts on the manifold have to f(x,y,z) = f(-x,-y,-z) so I just can't use f(x,y,z)=(x,y,z) because that doesn't fulfill this requirment. I also don't understand what you are saying that f(-x,-y,-z) is equal to f(x,y,z). Or why point (x,y,z) is equivalent to point (-x,-y,-z). Why are two different points equivalent to each other. ok, maybe my question is a bit unclear. I have two manifolds one (M1) consisting of the unit ball i R^3, and one (M2 )consisting of the unit ball in R^3 where opposite points on the surface are equivalent. That is (x,y,z) ~ (-x,-y,-z) for points on the surface, so thats why they are equivalent, because the manifold is constructed in that way. My question is, for the first manifold is a chart just f: M1 -> R^3, where f(x,y,z) = (x,y,z) for (x,y,z) in M1 And is the reason why I can't use this chart for M2 that it doesn't fullfil f(x,y,z) = f(-x,-y,-z) and my chart have to do this, because these points are equivalent. The mapping f is fine, but you need to be careful about its domain. You cannot use the whole of B as the domain, because a chart has to be a homeomorphism, and f is not 1-1 on B. So you need to subdivide B into overlapping regions on which f is 1-1, and use these for the domains of the charts. For the manifold M2 you can use the same charts as for M1, but the way that they fit together (given by the transition functions) will be different. Not quite sure why my f is not 1-1 on the first manifold. But I can however see that this is the problem on M2 because f sends (x,y,z) and (-x,-y,-z) to different points, so f^-1(x,y,z) do not equal f^-1(-x,-y,-z), but that have to be the same point. Oops, let's try again. (I shouldn't have attempted that first comment, early on a Sunday morning!) What I failed to notice is that the manifolds M1 and M2 are surfaces, so they are two-dimensional, not three-dimensional. So the charts should be maps from B to R^2, not to R^3. For example, you could use the map f(x,y,z)=(x,y). Since (x,y,z) is in B, there are two values of z that go to the same point in R^2, namely $z=\pm\sqrt{1-x^2-y^2}$. That is why the map cannot be defined on the whole of B. In fact, you need to have each chart map defined on just one hemisphere of B. you where right the first time, they are not surfaces, it is the unit ball in R^3, hence {(x,y,z) | x^2+y^2+z^2 <= 1} but I still can't se why my f for M1 isn't 1-1? [No doubt about it, I can't think straight on a Sunday morning.] Okay, in that case the identity map will do fine for M1. It is 1-1, and B is just embedded as a submanifold of R^3. I'm inclined to back off at this point, before saying something else stupid. But if you want to get a geometric insight into what M2 looks like, then you should think about the two-dimensional analogue. You can visualise this as follows. Take a disc of paper, cut it along a radius, and fold it over on itself so as to make a paper hat, with two thicknesses, in which each point is glued to the one that was originally opposite to it on the disc. thanks for your help. I have played a little with some charts and i think i have found some that works, but while i played with the charts I came to think about something: If I have two spaces (not manifolds yet) M1 and M2, if I then make M1 to an manifold, with an atlas {(U_i,f_i)}, and I can find a bijection F: M2 -> M1, can I then make M2 to a manifold by making the atlas {(F(U_i),g_i = f_i*F)} * means composition I need to know this, because I have made the unit ball with the identification (x,y,z) ~ (-x,-y,-z) to a manifold, and found a bijection from that to SO(3). My asignment is to show that the unit ball with the identification (x,y,z) ~ (-x,-y,-z) and SO(3) can be given a natural manifold structure, and they are diffeomorfic. So I thought i was done if I can just doo this, but even if this works I don't know if this is a natural way to do it, find this statement 'natural' a bit weak. October 26th 2007, 10:56 AM #2 MHF Contributor Apr 2005 October 26th 2007, 11:01 AM #3 Nov 2006 October 26th 2007, 11:12 AM #4 MHF Contributor Apr 2005 October 27th 2007, 01:19 AM #5 Nov 2006 October 28th 2007, 01:31 AM #6 October 28th 2007, 01:55 AM #7 Nov 2006 October 28th 2007, 04:35 AM #8 October 28th 2007, 04:38 AM #9 Nov 2006 October 28th 2007, 05:15 AM #10 October 30th 2007, 11:47 AM #11 Nov 2006
{"url":"http://mathhelpforum.com/differential-geometry/21362-chart-3-ball.html","timestamp":"2014-04-16T12:08:16Z","content_type":null,"content_length":"62678","record_id":"<urn:uuid:e5a89713-e02a-4d2e-af00-2d3cc47c9244>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Are you Ready? I expect you’re tired of hearing about the centenary of Alan Turing’s birth. His automated machines for cracking the German Enigma and Lorenz ciphers have enjoyed quite a lot of recent press, as has his pioneering work on the theory of computation. Without Turing, we would have bizarre, inefficient machines based on lambda calculus. You’ve probably also heard the tragic story of how the British legal system condemned him to the mental torture of hormonal therapy. It is almost universally acknowledged that his death, involving a cyanide-immersed fruit of the genus Malus, was suicide. That’s right: a bunch of ungrateful ne’er-do-wells in Parliament decided that Turing’s unparalleled contributions to the war effort were no mitigation for his unconventional sexuality. But what happened in his final years? It transpires that much of his research was in the field of mathematical biology, specifically in comprehending the cornucopia of patterns found in nature. For example, the phyllotaxis (arrangement of seeds, or ‘primordia’) in the common sunflower (Helianthus annuus) can be described by a startlingly simple formula. The formula gives a set of complex numbers, which, when plotted on the Argand plane, results in the pattern above. The points lie on a spiral known as the Fermat spiral. Other patterns have more complicated mathematical descriptions. Turing found that Friesian cow patches and so forth can be modelled by a reaction-diffusion system, a partial differential equation designed to emulate chemical reactions. They can be regarded as a continuous (or, indeed, analogue!) analogue of cellular automata. Now, some really cool guys have developed software to explore these systems. Katie Steckles published an entry over at The Aperiodical: http://aperiodical.com/2012/07/ready-reaction-diffusion-simulator/ One of the cuter aspects of Ready (by analogy with Golly, its lightning-fast and more conventional cousin, which is an extension of the acronym ‘GoL’ meaning ‘Game of Life’, but that’s enough etymology for today) is the ability to emulate these systems on arbitrary meshes (or tessellations of 2-dimensional manifolds, if you’re a mathematician). For example, we can actually watch the emergence of zebra stripes on an actual … well, a horse, actually, but never mind! In a similar vein, here are Turing’s leopard spots on a rather ferocious lion: Tim Hutton (lead developer of Ready, and fellow resident of Derbyshire) is even adding support for three-dimensional tessellations of space, such as the face-centred cubic lattice of rhombic dodecahedra, the body-centred cubic lattice of truncated octahedra, and the Voronoi diagram of atoms in a diamond. You can read about all of this stuff in the Symmetries of Things, by John Conway, Heidi Burgiel and Chaim Goodman-Strauss. Anyhow, this nicely leads on to the next topic I wanted to discuss: the Poincare disc model of hyperbolic space. In layman’s terms, hyperbolic space is curved in the opposite way to a sphere. I’ll not go into the mathematics here, as I’ve already done so in my forthcoming book, Mathematical Olympiad Dark Arts. Can’t wait now, can you? Anyway, as much as I enjoy promoting my own work, I seem to be digressing somewhat. The important thing is that Tim Hutton and I have been exploring reaction-diffusion systems on the Poincare disc in Ready. This seemed like a logical extension, since we’ve already experimented in the Euclidean plane and on the surface of a sphere. We can do better than Poincare, though. Embedding a curved surface on a flat plane is bound to create problems, which is why Greenland looks larger than South America on maps using the Mercator projection. A variant of a map which avoids distortion is a really revolutionary (no pun intended) three-dimensional visualisation of the Earth: a globe. Similarly, it is possible to use our third dimension to embed the hyperbolic plane without distortion. There are two approaches I know of: 1. Sellotape lots of identical regular heptagons of paper together. I did have a model of this with twenty-four heptagons, but then I ruined it by trying to fold it into a bizarre three-holed torus called a Klein quartic. Warning: do not try making a Klein quartic at home! Just read about it on the Internet or in my book. 2. Invoke the ancient art of crochet. This was mentioned on Tim Hutton’s blog and in the award-winning book Crocheting Adventures with Hyperbolic Planes. The award that the book won, by the way, was that for the weirdest book title! People have crocheted coral reefs using hyperbolic geometry. There’s a massive one in the Smithsonian Museum, for example: http://www.mnh.si.edu/exhibits/hreef/. The aforementioned Katie Steckles seems to have got the wrong idea, since she ended up knitting a surface with overall positive curvature (http://aperiodical.com/2012/08/knitted-spiky-icosahedron/). Oops! However, I am again deviating from the subject, which is the computer program Ready. It received a huge boost of popularity recently due to me (sorry, more self-promotion!) and my glider in a cellular automaton on a Penrose tiling. It was quite amusing that such a small construction won me $100 (thanks, Andrew!) and mentions in the New Scientist and the Aperiodical. The original rhombus tiling glider was quite boring, travelling in straight (well, slightly wiggly) ribbons. However, Andrew (Trevorrow) and I have investigated the same glider on Penrose’s slightly more familiar tiling of kites and darts. This time, the glider orbits in loops of varying sizes in addition to unbounded fractal paths. The neat thing is that all of the loops have either pentagonal or approximate decagonal symmetry. A few of them are shown below: For further details, see the paper I’ve just submitted to the Journal of Cellular Automata. (Thrice in one post? Wow, I’m really good at self-promotion! Wait — was that a fourth time? Damn you, self-referential statements!) 3 Responses to Are you Ready? 1. Pingback: Miscellany | Complex Projective 4-Space This entry was posted in Uncategorized. Bookmark the permalink.
{"url":"http://cp4space.wordpress.com/2012/08/24/are-you-ready/","timestamp":"2014-04-19T22:43:36Z","content_type":null,"content_length":"72125","record_id":"<urn:uuid:e2e26c47-ebda-4354-9681-72f350de6252>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
More Transformations of Sine and Cosine - Concept In the equation y=Asin(B(x-h)), A modifies the amplitude and B modifies the period; see sine and cosine transformations. The constant h does not change the amplitude or period (the shape) of the graph. It shifts the graph left (if h is negative) or right (if h is positive) and in the amount equal to h. The amount of horizontal shift is called the phase shift, which equals h. I'm graphing transformations of sine and cosine and I've already talked about graphing y equals a sine of bx. So I want to talk about graphing y equals a sine of b times the quantity x minus h so this part is new. And before I get started with examples let's take a look at a demonstration that allows us to see what that x minus h does. Okay here we are looking at geometries sketch pad and I've got my equation g of x equals a times the sine of b times the quantity x minus h. I have sliders over here that allow me to control the values of a, b and h and here's my graphs of first of all the sine graph and second of all the transformed sine graph. Now right now a is 3 and b is 0.5 I'm going to leave those fixed for the moment. Let's adjust the value of h and see what happens, if I adjust it in the positive direction you can see that the shape of the graph doesn't change at all, but it shifts to the right. It shifts to the right exactly as much as the value of h notice that the point, this point started at the origin but now it's a pi h let me show you again. Back at the origin and then up to the pi. So the h value if the h is pi that means this will shift to the right per unit. And what happens if h is negative it just shifts to the left. And so basically the value of h tells you what the horizontal shift is h equals negative 0.5 pi, negative one half pi. If you look at the formula you have g of x equals 3 sine 0.5 times x plus 0.5 pi that's x minus negative 0.5 pi. That means a shift to the left half of pi, so again if h is positive we shift to the right by that amount in this case 0.75 pi. If h is negative, we shift to the left by that amount. Okay let's review what we just learned, the value of h controls the horizontal shift of the graph and if h is half of pi then we shift to the right half of pi. And when we're talking about sine and cosine graphs this is called phase shift. And the phase shift is exactly the horizontal shift of a sine or cosine graph. And it exactly equals h, so you don't have to go through a fancy formula to find the phase shift. Just write the sine or cosine function in this form and identify h that's the phase shift and we'll be using this when we're transforming sine and cosine graphs. transformations sine period amplitude phase shift horizontal shift
{"url":"https://www.brightstorm.com/math/trigonometry/trigonometric-functions/more-transformations-of-sine-and-cosine/","timestamp":"2014-04-18T00:27:33Z","content_type":null,"content_length":"72018","record_id":"<urn:uuid:83d8c2b2-300e-47d4-a001-3a27227f74d5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
High Gain Array of Monopoles-Coupled Antennas for Wireless Applications International Journal of Antennas and Propagation Volume 2012 (2012), Article ID 725745, 8 pages Research Article High Gain Array of Monopoles-Coupled Antennas for Wireless Applications ^1IETR, CNRS, UMR 6164, 20 Avenue des Buttes de Coesmes, 35043 Rennes, France ^2XLIM, CNRS, UMR 6172, 123 Avenue Albert Thomas, 87060 Limoges, France Received 7 June 2012; Revised 27 September 2012; Accepted 11 October 2012 Academic Editor: Huanhuan Gu Copyright © 2012 Ahmad El Sayed Ahmad et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. An array of monopole antennas over a ground plane that radiates a directive lobe in the end-fire direction are described in this paper. The design uses the rigorous method described by Drouet et al. 2008 in order to synthesize the radiation through the strong cumulative coupling between the monopoles. A gain higher than 20dB was achieved in the end-fire direction over a 4.5% bandwidth. However, the antenna has been tilted in order to compensate the beam deviation caused by the edge diffraction. A prototype with 12 elements has been manufactured in order to validate the antenna principle and the whole antenna is successfully measured. The prototype was studied with the software CST-Microwave Studio and the feed network has been designed with Agilent ADS. 1. Introduction This paper deals with the design of the vehicular antenna that must satisfy some particular requirements. Firstly, this antenna has to be integrated on the roof that induces a low-profile antenna working over a ground plane. Secondly, an end-fire antenna which radiates toward the horizon must be used to communicate with the base stations. Finally, the antenna gain must be high in order to reduce the number of base stations. The design of an antenna that satisfies all these specifications is very difficult to perform. Linear monopole arrays are extensively used in many antenna systems due to their simplicity, low cost, polarization purity, reasonable bandwidth, and power-handling capability [1]. However, the strong mutual coupling between neighbored antenna elements also results in radiation patterns and matching degradations. The feed network can also be directly affected. It has been theoretically demonstrated that mutual coupling effects on radiation patterns can be reduced with appropriate loads [2–5]. The aim of this paper is to design a linear array of monopoles by managing the coupling. Moreover, the antenna design must be robust and easy to manufacture in order to be integrated on a vehicle roof, and, thus, to undergo outdoor conditions such as rain and wind. In the first part, the global design method will be briefly explained. Then, the principle, the design, and the performances of a linear array of 12 monopoles will be given. In the second part, an array of 4 × 12 monopoles fed by a feed network will be described. The last section discusses the design of 4 × 12 monopoles that would be compared to a Yagi antenna. 2. Basic Structure 2.1. Principle The basic structure is composed of twelve monopoles and a feed network. The strong interactions between the monopoles need to design the feed network with a great accuracy in order to optimize the The objective consists of the determination of the impedance matching and the incident power to reach both the objective radiation pattern and the best matching for the monopole array. We employ the method described in [6] for the design of the array antenna with strong coupling: by using CST-Microwave studio we compute the matrix and the 12 radiation patterns when the 12 monopoles are successively fed. These radiation patterns are to (1). An objective radiation pattern is proposed. This objective radiation pattern can be the linear combination of the radiation pattern of one monopole on its limited ground plane multiplied by an array factor (2). In this relation, is the distance between each monopole and is the phase of the th monopole. Equation (1) provides the weights that must be applied to the monopoles’ radiation pattern. Equation (3) leads to the antenna impedances to be considered as a reference in order to reach the matching and (4) gives the input waves that the feed network must achieve: where is the th monopole radiation pattern, is the objective radiation pattern, is the weight that must be applied to the th monopole radiation pattern, is the distance between each monopole, ( free space wavelengths), is the phase shift at th monopole, and is the coupling matrix. 2.2. Design and Performances of the Array of 12 Monopoles As explained in Section 1, the application is a communication system that uses the WIMAX protocol between a vehicle and base stations. The objective is to establish a high-gain monopole array that radiates a directional beam in the azimuthal plane within the frequency band 5.47GHz–5.725GHz. In this section, we propose the complete design of the array of monopoles with its feed network. The optimization frequency is 5.6GHz. In order to achieve a radiation with a single lobe in the direction of the array alignment, the space between two nearby monopoles must stay lower than : we have chosen (24.12mm) for our design. Twelve monopoles are set on a ground plane whose dimensions are = 100mm and = 330mm (Figure 1). The monopole lengths are listed into Table 1 (length) and their diameter is 2.53mm. The connections between the monopoles and the feed network’s ports are achieved with 50Ω coaxial transitions which are drilled through the ground plane (Figure 2). The feed network is printed back to the antenna ground plane, onto a 0.508mm thick Duroïd 6002 substrate , . The array of monopoles is positioned on a limited ground plane. In the limited ground plane size case, the well-known scattering effects on the ground plane edges alter the radiation pattern [7–9] (Figure 3). First of all, the interferences induce maxima and minima field on the radiation pattern. Their angular position is obviously related to the ground plane size. Then, we can observe the classic beam deviation in the elevation plane, which is caused by the scattering on the edges of the limited ground plane, since the main beam direction does not coincide with the horizon. To achieve the objective radiation pattern (we defined an angle ), we apply the array factor (2) with It should to be stressed that these results are approximations since the analysis considers the monopoles do not interfere with each other. The radiation pattern illustrated in Figure 4 (monopole -array factor) can be used as the objective radiation pattern . In the next step, we have used CST Microwave studio to achieve the full-wave analysis of the whole antenna structure. As an example, only 3 monopole radiation patterns are plotted in Figure 3. According to (1), the weights are deduced and written in Table 1. Thus, Figure 4 points out the resemblance between the objective radiation pattern and the linear combination of the radiation patterns of monopoles weighted by the coefficients . Figures 5 and 6 show the scattering matrix of the monopole antenna. Regarding Figure 6, these interactions should not be omitted when connecting the array monopoles with the feed network. The coupling between nearby monopoles is greater than −13dB. The optimum weights and the input impedances () which simultaneously perform the objective radiation and the matching of all the feeding ports can be calculated using (3), (4), the scattering matrix , and the vector. These values are given in Table 1 (columns 4 and 5) with the optimized monopole lengths. These have been set to comply with the different impedance values resulting from the synthesis procedure and to minimize the feed distribution network complexity. The design of the microstrip feed network has been made with the Agilent ADS software in order to perform the weights and the impedance matching specified in Table 1. The realized feed network is shown in Figure 7. In order to perform the numerical validation, the monopole simulation and the feed network design are numerically connected together. Using the CST software, this entire structure simulation provides the performances of the whole array antenna (the 12 monopoles and the feed network). The radiation pattern, the gain, and the return loss are computed. Figure 8 plots the radiation pattern in the plane phi = 90°. This is the plane which is parallel to the array alignment. We can observe that the entire structure simulation agrees very well with the objective radiation pattern (linear combination of the radiation patterns of the monopoles). So, the feed network operates properly through the couplings. Figure 9 presents the radiation pattern in 3D at 5.6GHz; the maximum simulated directivity is 15.6dB. The main beam direction does not coincide with the horizon (); it will be necessary to compensate this deviation by an inclination of the whole antenna. Indeed, it is essential for our application that the maximum gain is radiated in the base stations direction. The return loss at the input of the feed network is plotted in Figure 10 (simulation). The level is lower than −15dB over the operating frequency bandwidth. This numerical validation shows that the radiation pattern is successfully synthesized as well as the impedance matching of every antenna port through the couplings. Although the feed network has been optimized to deal with the antenna couplings at 5.6GHz, we have evaluated the performances of the entire structure (12 monopoles connected with the feed network) from 5.47GHz to 5.725GHz. The antenna gain is 14.7dB over the 5.47GHz–5.725GHz operating bandwidth (Figure 11). The directivity and the gain difference are mainly due to the dielectric losses in the strip line circuit. 2.3. Measurements The array of monopoles and the feed network were manufactured (Figures 1 and 7). The feed network is glued back to the ground plane and screws were added to secure the RF contacts. We have checked that interactions between the screws and the circuit are negligible. An SMA connector is at the input port. Measurements were achieved in an anechoic chamber. The return loss of the tested antenna is in Figure 10 (measurement). This measurement is compared with the simulation: both are close to −15dB over the operating frequency bandwidth. A slight discrepancy of 50MHz can be observed compared to the simulation, but it represents only 0.9% of the frequency shift that can be due to the mesh accuracy during simulation or manufacture tolerance. Figure 12 compares the measured radiation pattern with the theoretical one over 360° in the plane of the array alignment (). The measured gain agrees very well with the prediction. We can conclude that the design is reliable. The feed network operates properly through the couplings. The differences between the simulated and the measured gains are lower than 0.5dB. Metallic losses in the feed network and the uncertainty accuracy of our anechoic chamber can be responsible for this discard. 3. 2D Array of 4 × 12 Monopoles (4 Subarrays) The well-behaved experimental results validate the principle of the 12-monopole linear array. The linear array of twelve monopoles (along [oy]) provided a gain of 14.7dB at 5.6GHz. Figure 9 shows that the radiation pattern contains low side lobes in the perpendicular plane [ox] to the array of monopole plane alignment [oy]. In order to increase the gain, a 2D array of 4 × 12 monopoles was designed (Figure 13). Four sub-arrays, where each of them is described in Section 2, have been used to make the 48-monopole array. Therefore, the 4 sub-arrays are spaced out in order to avoid the interferences in these directions. Obviously, these sub-arrays alignment allow the constructive interference and so increase the gain in the end-fire direction. 3dB power dividers have been designed to connect the feed networks. 1.25λ[0] (67mm) is sufficient in order to avoid the interferences between the lines of the feed network. The corresponding layout of the feed network of 4 × 12 monopoles is shown in Figure 14. The return loss at the input of the feed network is plotted in Figure 15. The level is lower than −15dB over the operating frequency bandwidth. The 3D (Figure 16) radiation pattern shows a very directive lobe. A 20.8dB maximum directivity is obtained at the end-fire direction. An increase of 5.2dB has been obtained compared to the case with a single subarray (12 monopoles) (Figure 9). The antenna gain is 20dB over the 5.47GHz–5.725GHz operating bandwidth (Figure 17). The directivity and the gain difference are mainly due to the dielectric losses in the strip line circuit. Indeed, the insertion losses are very low because the antenna reflection coefficient is lower than −15dB over the 5.47GHz–5.725GHz band (Figure 15). The 4 ×12 monopoles are sufficient to have the gain required in the specifications. The antenna was 15° tilted to give back the main beam deviation caused by the scattering at the ground plane edges. 4. Yagi Antenna In order to check the interest to develop the complete method for the conception, we have made another antenna. The proposed antenna is a Yagi-Uda antenna. Yagi antennas of three or more elements are widely used, although a thorough study is lacking today because of the many parameters, each element having three variables, length, spacing, and the diameter of conductor. Almost all multielement Yagis are invariably designed empirically. In [10], Yagi antenna of three elements was presented. It has been shown the gain over a half-wave dipole of a three element Yagi with various director lengths and spacing. This study shows that as the spacing between director and driver decreases, the optimum length of the director increases. It has been documented in [11–13] that the dimension ratio of the reflector to the driven element can be somewhere between 1.1 and 1.3. The dimension ratio of the director to the driven element can be between 0.8 and 0.95. The distance between the centers of the reflector and the driven element should be about 0.25 free-space wavelengths, while the separation between the centers of the director and the driven element and the separation between the directors themselves should be between 0.3 and 0.4 free-space wavelengths. The antenna characteristics such as gain, front-to-back ratio, beamwidth, and center frequency can be altered by changing the length of the driven element, the length of the parasitic elements, spacing between reflector and dipole, and spacing between director and dipole [14]. The proposed antenna consists of a monopole as a driven element, a reflector, and eleven directors as shown in Figure 18. To facilitate the design, this antenna is designed using the same size of the prototype described in Section 2. Since our application requires only one high-gain radiation direction, it is proceeded to prohibit the radiation in the half space behind the antenna. The backfire radiation can be avoided with some non excited elements named “reflectors” or with a vertical metallic plane. Intended for simplicity constraints, the second solution is selected. So, the driver monopole must be spaced out of a /4 (13.4mm) distance from the reflector plane. This separation allows a constructive interference between the reflected fields and the direct waves. In this case and according to the images theory, the antenna gain should be 3dB increased at the end-fire direction. The separation between the centers of the director and the driven element and the separation between the directors themselves is 0.45 free-space wavelength (24.12mm). The director lengths are 6.7mm (λ[0]/8) and their diameters are 2.53mm. These directors are shortcircuited with the ground plane. The length of the driver monopole is 10.32mm; its diameter is 4.53mm. The yagi antenna is matched to −18dB in simulation over a bandwidth 5.47GHz–5.725GHz (Figure 19). The simulated radiation pattern in 3D is presented in Figure 20; the maximum directivity is 14.3dB at the end-fire direction. In order to increase the directivity, a 2D array of 4×Yagi antenna was designed (Figure 21). The antenna was designed using the same size of the prototype described in Section 3 to make a true comparison between the array of monopole antenna and the Yagi antenna. Figure 22 presents the radiation pattern at 5.6GHz. We obtain a maximum directivity of 18.5dB. The comparison of radiation in the Cartesian plane between the array of monopoles and the yagi antenna is shown in Figure 23. The radiation pattern is compared versus at (maximum radiation). We can observe the first side lobe level of yagi radiation pattern is around 12dB; it is −6dB below the main lobe which explains the maximum directivity of yagi antenna is 2.3dB lower than the radiation of the monopole array. The advantages of the monopole antenna compared to the Yagi antenna are(1)the array of monopole antenna designed in Section 3 does not need to a reflector plane to radiate on the end-fire direction, (2)the radiation pattern of monopole antenna does not contain significant side lobes levels,(3)the maximum level of radiation of the monopoles antenna is greater than the yagi, The disadvantages of the monopole antenna compared to the Yagi antenna is the feed network. 5. Conclusion In this paper, a low-profile antenna with a ground plane has been presented. The purpose was to design a high-gain antenna (single end-fire beam) which must be positioned on a vehicle roof in order to communicate with the far base stations. As a first step, an array of 12 monopoles was designed. In such a structure, the monopoles strongly interact with each other. In our study, the feed network has been designed to deal with the couplings by considering as a reference the impedances and the input waves that optimize the efficiency of the antenna. The feed network and the monopole array were manufactured. The whole antenna was successfully tested. The antenna was tilted to give back the main beam deviation caused by the scatterings on the ground plane edges. As a second step, an array of 4 × 12 monopoles has been designed in order to increase the gain. A gain higher than 20dB has been achieved over a 4.5% bandwidth. Finally, in order to check the interest to develop the complete method for the conception, we have made another antenna. The proposed antenna is a Yagi-Uda antenna. The radiation of this antenna presents high side lobe levels. The maximum radiation on the end fire is lower than the radiation of the monopole array. In conclusion, as the method takes into account couplings, a particular beam pointing with reduced or controlled side lobes can be achieved easily. 1. B. Tomasic and A. Hessel, “Linear array of coaxially fed monopole elements in a parallel plate waveguide-I: theory,” IEEE Transactions on Antennas and Propagation, vol. 36, no. 4, pp. 449–462, 1988. View at Publisher · View at Google Scholar · View at Scopus 2. D. M. Pozar, “The active element pattern,” IEEE Transactions on Antennas and Propagation, vol. 42, no. 8, pp. 1176–1178, 1994. View at Publisher · View at Google Scholar · View at Scopus 3. J. P. Daniel and C. Terret, “Mutual coupling between antennas optimization of transistor parameters in active antenna design,” IEEE Transactions on Antennas and Propagation, vol. AP-23, no. 4, pp. 513–516, 1975. View at Scopus 4. A. K. Bhattacharyya, Phased Array Antennas: Floquet Analysis, Synthesis, BFNs, and Active Array Systems, John Wiley & Sons, Hoboken, NJ, USA, 2006. 5. R. J. Mailloux, Electronically Scanned Array. Synthesis Lecture on Antennas, Morgan & Claypool Publishers, Constantine Balanis, Ariz, USA, 2007. 6. J. Drouet, M. Thevenot, R. Chantalat, et al., “Global synthesis method for the optimization of multi feed EBG antennas,” International Journal of Antennas and Propagation, vol. 2008, Article ID 790358, 6 pages, 2008. View at Publisher · View at Google Scholar 7. S. K. Sharma and L. Shafai, “Beam focusing properties of circular monopole array antenna on a finite ground plane,” IEEE Transactions on Antennas and Propagation, vol. 53, no. 10, pp. 3406–3409, 2005. View at Publisher · View at Google Scholar · View at Scopus 8. C. Phongcharoenpanich, S. Suriya, T. Lertwiriyaprapa, P. Ngamjanyaporn, and M. Krairiksh, “Analysis of circular array of monopole on the ground plane radiating linearly polarized conical beam for wireless LAN applications,” in Proceedings of the 5th International Symposium on Antennas, Propagation and EM Theory, pp. 646–649, Beijing, China, August 2000. 9. V. Volski and G. A. E. Vandenbosch, “Modelling of microstrip antennas on a finite ground plane using the 2D physical optics model,” Microwave and Optical Technology Letters, vol. 40, no. 1, pp. 26–29, 2004. View at Publisher · View at Google Scholar · View at Scopus 10. H. Jasik, Antenna Engineering Handbook, McGraw-Hill Book Company, New York, NY, USA, 1961. 11. J. Huang and A. C. Densmore, “Microstrip Yagi array antenna for mobile satellite vehicle application,” IEEE Transactions on Antennas and Propagation, vol. 39, no. 7, pp. 1024–1030, 1991. View at Publisher · View at Google Scholar · View at Scopus 12. H. Yagi , “Beam transmission of the ultra short waves,” IRE Proceedings, vol. 16, no. 6, pp. 715–740, 1928. View at Publisher · View at Google Scholar 13. C. A. Balanis, Antenna Theory. Analysis and Design, John Wiley & Sons, New York, NY, USA, 1997. 14. S. K. Padhi and M. E. Bialkowski, “Parametric study of a microstrip Yagi antenna,” in Proceedings of the Asia-Pacific Microwave Conference, pp. 715–718, Sydney, NSW, Australia, December 2000. View at Scopus
{"url":"http://www.hindawi.com/journals/ijap/2012/725745/","timestamp":"2014-04-16T12:38:51Z","content_type":null,"content_length":"118723","record_id":"<urn:uuid:a0cc46dd-3dd3-4eb8-a8ed-0f1d51e761b0>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
A Light-Dimmer Circuit The Intensity Of A Lightbulb ... | Chegg.com A Light-Dimmer Circuit The intensity of a lightbulb with a resistance of 140 Ohms is controlled by connecting it in series with an inductor whose inductance can be varied from L=0 to L=Lmax. This "light dimmer" circuit is connected to an ac generator with a frequency of 60.0 Hz and an rms voltage of 120 V. A. What is the average power dissipated in the lightbulb when L=0? answer in units of W. B. The inductor is now adjusted so that L=Lmax. In this case, the average power dissipated in the lightbulb is one-fourth the value found in part A? What is the value of Lmax? answer in units H..
{"url":"http://www.chegg.com/homework-help/questions-and-answers/light-dimmer-circuit-intensity-lightbulb-resistance-140-ohms-controlled-connecting-series--q2638795","timestamp":"2014-04-20T19:02:56Z","content_type":null,"content_length":"21511","record_id":"<urn:uuid:0ce9599b-a252-47dc-8b4d-829676a57c44>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Mountlake Terrace Calculus Tutor Find a Mountlake Terrace Calculus Tutor ...For the past two years, I have been employed at Western Washington University's Tutoring Center and for two years before that I worked at the Math Center at Black Hills High School in Olympia. I am certified level 1 by the College Reading and Learning Association, and have tutored subjects rangi... 13 Subjects: including calculus, physics, statistics, geometry ...And then, I give the student sample problems to solve independently and coach them further as needed. My main goal is to make sure the student is self-sufficient, and capable of using the methods on quizzes or tests. With respect to my educational background and work experience, I'm a Physiology major, and I just graduated from the University of Washington. 26 Subjects: including calculus, chemistry, physics, geometry I am currently a student attending Shoreline Community College. My cumulative GPA is currently 3.93, I have gotten a 4.0 in every college-level math class I have taken. I have been tutoring for over a year now and truly enjoy getting others excited about the world of mathematics and science. 14 Subjects: including calculus, chemistry, physics, geometry ...Of course, it takes more than book-smarts to teach writing effectively; I'm also well-versed at toning down the technical language and making the subject simple and easy to understand. Tired of hearing names spouted and dates thrown at you? Would you rather hear the English civil war and Magna Carta explained like a Seahawks/49ers game, complete with position, player, and penalty 16 Subjects: including calculus, reading, chemistry, biology ...Additionally, I have taken undergraduate and graduate level Biostatistics courses with success. I have a Ph.D. in Immunology. Genetics was part of my required coursework for both my undergraduate and graduate degrees. 17 Subjects: including calculus, chemistry, physics, geometry
{"url":"http://www.purplemath.com/mountlake_terrace_wa_calculus_tutors.php","timestamp":"2014-04-17T04:18:25Z","content_type":null,"content_length":"24397","record_id":"<urn:uuid:153def06-4475-4289-8478-cdf8acb118fb>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Saturday College Football Open Thread This week's picks: Penn St (-10) over Ind., Wisconsin (-3) over Michigan, South Florida (+4) over Pitt, Okla St(-24) over Kansas, Louisville (+4) over W. Va., K. State (-2) over Colorado, Utah St. (-2) over Idaho, Iowa (+4) over Ohio State, Illini (+9) over Northwestern, Stanford (-7) over Cal., Miami (+3) over Va Tech, Ole Miss (+17) over LSU, Arkansas (-2) over Miss State (5 units), Maryland (+5) over FSU, Ore. State (+4) over USC, Notre Dame (-8) over Army (3 units.). Open Thread. Saturday College Football Open Thread | 26 comments (26 topical) Readers Must Login or Create an Account to Comment Turkey day jokes (5.00 / 3) (#11) by waldenpond on Sat Nov 20, 2010 at 07:28:00 PM EST A young man named John received a parrot as a gift. The parrot had a bad attitude and an even worse vocabulary. Every word out of the bird's mouth was rude, obnoxious and laced with profanity. John tried and tried to change the bird's attitude by consistently saying only polite words, playing soft music and anything else he could think of to 'clean up' the bird's vocabulary. Finally, John was fed up and he yelled at the parrot. The parrot yelled back. John shook the parrot and the parrot got angrier and even more rude. John, in desperation, threw up his hand, grabbed the bird and put him in the freezer. For a few minutes the parrot squawked and kicked and screamed. Then suddenly there was total quiet. Not a peep was heard for over a minute. Fearing that he'd hurt the parrot, John quickly opened the door to the freezer. The parrot calmly stepped out onto John's outstretched arms and said "I believe I may have offended you with my rude language and actions. I'm sincerely remorseful for my inappropriate transgressions and I fully intend to do everything I can to correct my rude and unforgivable behavior." John was stunned at the change in the bird's attitude. As he was about to ask the parrot what had made such a dramatic change in his behavior, the bird spoke-up, very softly, "May I ask what the turkey did?" Abraham Lincoln just got (5.00 / 1) (#12) by nycstray on Sat Nov 20, 2010 at 08:27:26 PM EST sexy :) If the Hawkeyes can (none / 0) (#1) by oculus on Sat Nov 20, 2010 at 11:10:07 AM EST beat Ohio State I'm ok w/a Michigan loss. When is the last time BTD picked any (none / 0) (#2) by magster on Sat Nov 20, 2010 at 11:58:12 AM EST team from Colorado in any athletic contest? Before Dan Hawkins was fired, going against Colorado made all the sense in the world, but CO will win today. Heh (none / 0) (#5) by Big Tent Democrat on Sat Nov 20, 2010 at 01:09:15 PM EST I picked Air Force a lot this year. Rolling into Iron Bowl week (none / 0) (#3) by Militarytracy on Sat Nov 20, 2010 at 12:31:16 PM EST Look, I don't do football. I could care less. But then Joshua wants to go to Auburn, and then it does look like that what Joshua wants to apply himself to in life would make Auburn the college in this area he would need to go to. Then spend three years shopping for Auburn wear for one son, and then have the little girls getting the hand-me-down hoodies and that whole family throwing a party at their house everytime Auburn is playing this year, and then name a dog Auburn so you are saying it 20 times a day swinging from lovingly to disdainful, and you wake up one morning knowing when Iron Bowl week is and that there is even such a thing. Santa will probably bring me those car flags you roll up in your windows and I'll end up looking like some Alabama Idiot Auburn Ambassador. He'll have to read up (5.00 / 1) (#6) by CoralGables on Sat Nov 20, 2010 at 01:59:01 PM EST on the 1972 Iron Bowl. I spent a night at Auburn once long ago and couldn't sleep a wink all night because the frat house played the radio broadcast of the blocked punts from the '72 game over and over and over and over and over and over and over and over again until the sun came up. And the only bar I visited when there was called the Blocked Punt. My kind of town... (none / 0) (#26) by jeffinalabama on Mon Nov 22, 2010 at 07:47:02 AM EST 72 was a special year. Radio only. it wasn't a terribly cold day in Auburn, but the team had been trounced by the tide the year before, and the expectation was the same. Fans Booed Auburn when they kicked a field goal in the 4th quarter, down 16-0. Then with 5 minutes left, Bill Newton blocks a punt, and David Langner runs it in for a score. 16-10. A few minutes later, the same players do it again! 17-16 Auburn! That was the year that Bear Bryant said he'd "nothing matters but beating that cow college across the state." While I was enrolled, we voted Bessie the Cow for homecoming queen. She wasn't allowed to receive the honors, though. She hadn't enrolled that quarter...;-) Ha. If you can't beat em, . . . (none / 0) (#4) by oculus on Sat Nov 20, 2010 at 12:59:47 PM EST Dartmouth 31, Princeton 0 (none / 0) (#7) by robrecht on Sat Nov 20, 2010 at 04:57:59 PM EST Took my kids to their first game today. Go Big Green! Jim ("The King") Leyritz (none / 0) (#8) by oculus on Sat Nov 20, 2010 at 06:05:40 PM EST is acquitted of DUI manslaughter, convicted of DUI. Fun to watch him at the plate as a Padre in the 1998 playoffs. CNN That would have been (none / 0) (#9) by CoralGables on Sat Nov 20, 2010 at 06:16:41 PM EST a tough jury to be on. Both drivers drunk and meet at an intersection. The dead woman a mother of two (which tugs at the heartstrings but should have no bearing on the case). I come in the house after spending all (none / 0) (#10) by Militarytracy on Sat Nov 20, 2010 at 06:39:12 PM EST afternoon outside with the dogs and my husbands screams "intercepted". I look up, and the Army game is on the tube. The fabric of my life is unraveling. So, turns out 75yo Mom was (none / 0) (#19) by nycstray on Sat Nov 20, 2010 at 09:55:26 PM EST cruising the TSA site. We were talking about something and she was complaining about her internet being suddenly slow. I mentioned Comcast had just sent me an email they were using a new "security" program to "protect" their users. She says, well could it be because I was looking on the TSA site? I ask why. She's checking which airports have the scanners etc in the areas she wants to travel (for family). She no wants scans or enhanced pat! Funny, I thought I was going to rant about it to her, but she would be neutral . . . . she says "not gonna fly." Sad, all the airports she was looking at were from our area (NoCal) to where her few living relatives are. I think her and I may be taking some road/rail trips. I should note, she's a very active 75yo. swims a few times a week etc. My role model for aging . . . and for politics sake, she's a Mod Repub (but I think she's more liberal than she thinks she is considering how she said she voted last election, lol!~) Did she vote for Brown and Boxer..... (none / 0) (#22) by MKS on Sun Nov 21, 2010 at 12:40:32 AM EST Not sure, but I think at least (none / 0) (#24) by nycstray on Sun Nov 21, 2010 at 03:42:59 AM EST Brown. Any prop I asked her about, she went the same way I did :) She was over eMeg early on. Doubt iCarly did much for her (plus she's pro-choice). She seems to be really old school Repub with a heavy dash of woman's rights in her thought process, and agrees with me on many issues . . . back when I first started voting and also women where on the ballot, she would vote R top of ticket and then any women down ticket (mostly Dems in those days). So she's been voting women Sen, don't tell my dad (as she said;) ) She's also against PropHate, etc . . . Being 'closer' to home for the first time in 20 yrs, I'm trying not to rock the boat too much, so I was kinda surprised when we talked props and she was in line with me. But I guess we have to remember my MidWest mom has lived in the decadent Blue state of CA for about 55+ years. She must have been corrupted at some point ;) And can I just say . . . Thank DAWG for Blue States!!!! Saturday College Football Open Thread | 26 comments (26 topical) Readers Must Login or Create an Account to Comment
{"url":"http://www.talkleft.com/story/2010/11/20/104728/80/blogrelated/Saturday-College-Football-Open-Thread","timestamp":"2014-04-20T06:34:42Z","content_type":null,"content_length":"42940","record_id":"<urn:uuid:cf3f9f3a-7d36-4e1f-b161-438a654bc414>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
Roseland, NJ Math Tutor Find a Roseland, NJ Math Tutor ...I will travel to your house or a library. I do recommend a minimum of 1 1/2 - 2 hours per session since I find that to be the most effective in producing concrete results. My qualifications include graduating high school in the top 1% of my class, Cornell University with a BS in Operations Research, Tau Beta Pi (Engineering Honor Society), and an MBA in finance from Cornell 16 Subjects: including precalculus, business, ACT Math, algebra 1 ...I was a History major at Princeton. I took several classes in Anthropology and achieved at least a B in any Anthropology class that I've taken. I recently sat for and passed the New York and New Jersey Bar Exams on the first attempt. 34 Subjects: including algebra 2, algebra 1, prealgebra, SAT math ...I am a recipient of College Board's National AP Scholar award, and I scored an 800 on the SAT II Math Level 2 test. I have tutored over 75 different students in the New Jersey area since September 2013, and I teach multiple classes for homeschooled students. My students have seen dramatic improvement in their scores from my tutoring. 26 Subjects: including algebra 2, psychology, literature, drawing ...I encourage questions since learning is always an interactive process. I’ve taught both high school and college including remedial courses at both levels. I have 30 years' experience in teaching, working with students from a wide variety of ethnic backgrounds as well as adult learners. 9 Subjects: including precalculus, trigonometry, statistics, algebra 1 ...Recently, I have been at home raising two children. From a teaching and tutoring standpoint, I am always patient and encouraging. In addition, I try to foster a learning environment that motivates and builds success. 12 Subjects: including discrete math, differential equations, algebra 1, algebra 2 Related Roseland, NJ Tutors Roseland, NJ Accounting Tutors Roseland, NJ ACT Tutors Roseland, NJ Algebra Tutors Roseland, NJ Algebra 2 Tutors Roseland, NJ Calculus Tutors Roseland, NJ Geometry Tutors Roseland, NJ Math Tutors Roseland, NJ Prealgebra Tutors Roseland, NJ Precalculus Tutors Roseland, NJ SAT Tutors Roseland, NJ SAT Math Tutors Roseland, NJ Science Tutors Roseland, NJ Statistics Tutors Roseland, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Roseland_NJ_Math_tutors.php","timestamp":"2014-04-18T16:09:38Z","content_type":null,"content_length":"23891","record_id":"<urn:uuid:e437ee83-c77a-4e30-b5f4-6ea872eb17a4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: how much work would it take for a 75 kg person to climb the 9000 meters from sea level to the peak of mount everest? how much potential energy wold this person have when he or she reached the summit? show yoour calculations can somebody help me • one year ago • one year ago Best Response You've already chosen the best response. have you read the formula U = mgh ? Best Response You've already chosen the best response. huh ! Best Response You've already chosen the best response. I'm not sure if can assume that g is constant. OK:\[Work=- \Delta U\]U is potential energy. \[U=-\frac{MmG}{r}\] \[\Delta U=-\frac{MmG}{6353000+9000}-(-\frac{MmG}{6353000})\] http:// -%5Cfrac%7B5.97219%C3%9710%5E24+75+%286.67300+%C3%97+10%5E-11%29%7D%7B6353000%2B9000%7D-%28-%5Cfrac%7B5.97219%C3%9710%5E24+*+75+%286.67300+%C3%97+10%5E-11%29%7D%7B6353000%7D%29 So the work the gravitational field does is -6.65558131072543162062365316101183770926987440628 × 10^6, and the work you do is 6.65558131072543162062365316101183770926987440628 × 10^6 Using just U=mgh W=mg(h(2)-h (1))=75*9.8*(9000)= 6615000 The error is 0.6%, so g is fine being constant. Best Response You've already chosen the best response. no it says if not told then assume g is 10 m/s^2 Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50707f9ae4b0c2dc834087a1","timestamp":"2014-04-18T21:23:49Z","content_type":null,"content_length":"35645","record_id":"<urn:uuid:f01a28c0-027e-4987-9b3e-7e1e34695278>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Show that this number is odd and composite March 9th 2010, 10:05 PM #1 Mar 2010 Show that this number is odd and composite Show the (3^77-1)/2 is odd and composite. (Hint: consider 3^77mod4 and the formula a^n-b^n=(a-b)(a^(n-1)+a^(n-2)*b+...+a*b^(n-2)+b^(n-1)) for any positive integers a, b, n). I rewrote (3^77-1) as (3^77-1^77) and then in the form given in the hint, which when simplified turns the number into 3^76+3^75+...+3^1+3^0, but I'm not sure where to go from there, or how 3^ 77mod4 is useful. Show the (3^77-1)/2 is odd and composite. (Hint: consider 3^77mod4 and the formula a^n-b^n=(a-b)(a^(n-1)+a^(n-2)*b+...+a*b^(n-2)+b^(n-1)) for any positive integers a, b, n). I rewrote (3^77-1) as (3^77-1^77) and then in the form given in the hint, which when simplified turns the number into 3^76+3^75+...+3^1+3^0, but I'm not sure where to go from there, or how 3^ 77mod4 is useful. The expression $3^{76}+3^{75}+\ldots+3^1+3^0$ is a sum of 77 odd numbers, so is odd (and if you do the question that way, you don't need the hint about mod 4). To see that the number is composite, notice that the identity $a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+\ldots+ab^{n-2}+b^{n-1})$ tells you that $(a^7)^{11} - (b^7)^{11} = (a^7-b^7)(a^{70} + a^{63}b^7 + \ ldots + a^7b^{63} + b^{70})$. $3^{77} \equiv (-1)^{77} = -1 \mod{4} \implies 3^{77} = 4k+3$. Therefore $\frac{3^{77}-1}{2} = \frac{4k+3-1}{2} = \frac{4k+2}{2} = 2k+1$. Hence $\frac{3^{77}-1}{2}$ is odd. March 10th 2010, 05:39 AM #2 March 10th 2010, 01:01 PM #3 March 11th 2010, 03:12 AM #4 Mar 2010
{"url":"http://mathhelpforum.com/number-theory/133032-show-number-odd-composite.html","timestamp":"2014-04-21T07:30:00Z","content_type":null,"content_length":"40551","record_id":"<urn:uuid:a01cf3ae-0766-42e6-8ebb-8bea22df7b9f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Fractions and Common Denominators Date: 06/04/2001 at 01:54:54 From: AT Subject: Common Denominators Why do you need a common denominator to add fractions, but not to multiply them? I have tried to answer this question by using the distributive property and the definition of division. I have always been taught that you need a common denominator to add fractions. I have never been told axiomatically why this has to be done. Also, why don't you need a common denominator to multiply fractions? Date: 06/04/2001 at 12:47:52 From: Doctor Peterson Subject: Re: Common Denominators Hi, AT. Let's approach fractions algebraically. A fraction is essentially a division problem stated but not worked out; rather than divide 3 by 4 to get 0.75, we just write 3/4. This is much like algebra, where we write a/b and do not evaluate it, because we don't know the values. The reason we don't evaluate a fraction is that it has been found easier to manipulate the fraction as an expression (and then evaluate it at the end if necessary) rather than to evaluate it first, especially if we want an exact answer. To multiply two fractions, we have to manipulate the expression (a/b) * (c/d) until it is in a valid fractional form, one number "over" another. Let's do it: (a/b) * (c/d) = a * 1/b * c * 1/d = a * c * 1/b * 1/d = (ac) * 1/(bd) = (ac) / (bd) That was easy because multiplication and division work well together; division is a form of multiplication (by the reciprocal), and Now let's add two fractions, manipulating (a/b) + (c/d) until it looks like a fraction. We can't apply a commutative property, since addition and multiplication or division don't commute, but must use the distributive property (which relates addition and multiplication) instead. In particular, we have to choose a new denominator for the answer. It turns out that we can use (bd) as a denominator, and convert both fractions to that denominator by multiplying by 1: (a/b) + (c/d) = (a/b)(1) + (c/d)(1) = (a/b)(d/d) + (c/d)(b/b) = (ad)/(bd) + (bc)/(bd) Now we can apply the distributive property to make this a single = (ad) * 1/(bd) + (bc) * 1/(bd) = (ad + bc)* 1/(bd) = (ad+bc)/(bd) (In working with actual fractions, we can save work by finding the LEAST common denominator; using variables, we don't have that issue to worry about.) So why did we need a common denominator in the latter case, but not in the former? In both cases we needed a new denominator; but in multiplication, it arose by the commutative property, and is hardly noticeable as a common denominator. In doing our addition, we had to get the common denominator first so we could factor it out. Here are a couple explanations we have given at a lower level, which may help: Common Denominators When to Add or Multiply Denominators? - Doctor Peterson, The Math Forum Date: 06/04/2001 at 22:00:46 From: Doctor Ian Subject: Re: Common Denominators Hi AT, If you have something like 3/7 + 2/7 = ? you can use the distributive property to simplify the expression on the left: 3/7 + 2/7 = 3*(1/7) + 2*(1/7) = (3 + 2)*(1/7) = 5*(1/7) = 5/7 Without common denominators, you can't do this. That's why you need to have common denominators to add. If you have something like 3/4 * 5/6 it's really just a series of multiplications and divisions: ((3 / 4) * 5) / 6 which is why you don't need to have common denominators. Does this help? Write back if you'd like to talk about this some more, or if you have any other questions. - Doctor Ian, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/58181.html","timestamp":"2014-04-17T01:26:55Z","content_type":null,"content_length":"9027","record_id":"<urn:uuid:b3e4da5f-f3f7-4d9e-983d-08779f357f9f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
C. Vinzant: Hyperbolic polynomials and definite determinantal representations - Algebra/Algebraic Geometry Hyperbolic polynomials are real polynomials whose corresponding hypersurfaces have a special topological property. These polynomials appear in many areas of mathematics, including optimization, combinatorics and differential equations. In an important breakthrough in 2007, Helton and Vinnikov showed that every hyperbolic polynomial in three variables can be written as the determinant of a definite symmetric matrix of linear forms. We'll talk about this theorem and possible methods for constructing such determinantal representations.
{"url":"https://sites.google.com/a/oakland.edu/algebra/home/conferences/michigan-computational-algebraic-geometry/talks/cvinzant","timestamp":"2014-04-25T08:15:29Z","content_type":null,"content_length":"27592","record_id":"<urn:uuid:20548405-7791-4b6d-bcc9-7329aa38ef12>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Graphic (Bar) Scales A graphic scale is a ruler printed on the map and is used to convert distances on the map to actual ground distances. The graphic scale is divided into two parts. To the right of the zero, the scale is marked in full units of measure and is called the primary scale. To the left of the zero, the scale is divided into tenths and is called the extension scale. Most maps have three or more graphic scales, each using a different unit of measure (Figure 5-2). When using the graphic scale, be sure to use the correct scale for the unit of measure desired. Figure 5-2. Using a graphic (bar) scale. a. To determine straight-line distance between two points on a map, lay a straight-edged piece of paper on the map so that the edge of the paper touches both points and extends past them. Make a tick mark on the edge of the paper at each point (Figure 5-3). Figure 5-3. Transferring map distance to paper strip. b. To convert the map distance to ground distance, move the paper down to the graphic bar scale, and align the right tick mark (b) with a printed number in the primary scale so that the left tick mark (a) is in the extension scale (Figure 5-4). Figure 5-4. Measuring straight-line map distance. c. The right tick mark (b) is aligned with the 3,000-meter mark in the primary scale, thus the distance is at least 3,000 meters. To determine the distance between the two points to the nearest 10 meters, look at the extension scale. The extension scale is numbered with zero at the right and increases to the left. When using the extension scale, always read right to left (Figure 5-4). From the zero left to the beginning of the first shaded area is 100 meters. From the beginning of the shaded square to the end of the shaded square is 100 to 200 meters. From the end of the first shaded square to the beginning of the second shaded square is 200 to 300 meters. Remember, the distance in the extension scale increases from right to left. d. To determine the distance from the zero to tick mark (a), divide the distance inside the squares into tenths (Figure 5-4). As you break down the distance between the squares in the extension scale into tenths, you will see that tick mark (a) is aligned with the 950-meter mark. Adding the distance of 3,000 meters determined in the primary scale to the 950 meters you determined by using the extension scale, we find that the total distance between points (a) and (b) is 3,950 meters. e. To measure distance along a road, stream, or other curved line, the straight edge of a piece of paper is used. In order to avoid confusion concerning the point to begin measuring from and the ending point, an eight-digit coordinate should be given for both the starting and ending points. Place a tick mark on the paper and map at the beginning point from which the curved line is to be measured. Align the edge of the paper along a straight portion and make a tick mark on both map and paper when the edge of the paper leaves the straight portion of the line being measured (Figure Figure 5-5. Measuring a curved line. f. Keeping both tick marks together (on paper and map), place the point of the pencil close to the edge of the paper on the tick mark to hold it in place and pivot the paper until another straight portion of the curved line is aligned with the edge of the paper. Continue in this manner until the measurement is completed (Figure 5-5B). g. When you have completed measuring the distance, move the paper to the graphic scale to determine the ground distance. The only tick marks you will be measuring the distance between are tick marks (a) and (b). The tick marks in between are not used (Figure 5-5C). h. There may be times when the distance you measure on the edge of the paper exceeds the graphic scale. In this case, there are different techniques you can use to determine the distance. (1) One technique is to align the right tick mark (b) with a printed number in the primary scale, in this case the 5. You can see that from point (a) to point (b) is more than 6,000 meters when you add the 1,000 meters in the extension scale. To determine the exact distance to the nearest 10 meters, place a tick mark (c) on the edge of the paper at the end of the extension scale (Figure 5-6A). You know that from point (b) to point (c) is 6,000 meters. With the tick mark (c) placed on the edge of the paper at the end of the extension scale, slide the paper to the right. Remember the distance in the extension is always read from right to left. Align tick mark (c) with zero and then measure the distance between tick marks (a) and (c). The distance between tick marks (a) and (c) is 420 meters. The total ground distance between start and finish points is 6,420 meters (Figure 5-6B). Figure 5-6. Determining the exact distance. (2) Another technique that may be used to determine exact distance between two points when the edge of the paper exceeds the bar scale is to slide the edge of the paper to the right until tick mark (a) is aligned with the edge of the extension scale. Make a tick mark on the paper, in line with the 2,000-meter mark (c) (Figure 5-7A). Then slide the edge of the paper to the left until tick mark (b) is aligned with the zero. Estimate the 100-meter increments into 10-meter increments to determine how many meters tick mark (c) is from the zero line (Figure 5-7B). The total distance would be 3,030 meters. Figure 5-7. Reading the extension scale. (3) At times you may want to know the distance from a point on the map to a point off the map. In order to do this, measure the distance from the start point to the edge of the map. The marginal notes give the road distance from the edge of the map to some towns, highways, or junctions off the map. To determine the total distance, add the distance measured on the map to the distance given in the marginal notes. Be sure the unit of measure is the same. (4) When measuring distance in statute or nautical miles, round it off to the nearest one-tenth of a mile and make sure the appropriate bar scale is used. (5) Distance measured on a map does not take into consideration the rise and fall of the land. All distances measured by using the map and graphic scales are flat distances. Therefore, the distance measured on a map will increase when actually measured on the ground. This must be taken into consideration when navigating across country. i. The amount of time required to travel a certain distance on the ground is an important factor in most military operations. This can be determined if a map of the area is available and a graphic time-distance scale is constructed for use with the map as follows: R = Rate of travel (speed) T = Time D = Distance (ground distance) T = For example, if an infantry unit is marching at an average rate (R) of 4 kilometers per hour, it will take about 3 hours (T) to travel 12 kilometers. j. To construct a time-distance scale (Figure 5-8A), knowing your length of march, rate of speed, and map scale, that is, 12 kilometers at 3 kilometers per hour on a 1:50,000-scale map, use the following process: (1) Mark off the total distance on a line by referring to the graphic scale of the map or, if this is impracticable, compute the length of the line as follows: (a) Convert the ground distance to centimeters: 12 kilometers x 100,000 (centimeters per kilometer) = 1,200,000 centimeters. (b) Find the length of the line to represent the distance at map scale— 1 1,200,000 MD = = = 24 centimeters 50,000 50,000 (c) Construct a line 24 centimeters in length (Figure 5-8A). Figure 5-8. Constructing a time-distance scale. (2) Divide the line by the rate of march into three parts (Figure 5-8B), each part representing the distance traveled in one hour, and label. (3) Divide the scale extension (left portion) into the desired number of lesser time divisions— 1-minute divisions — 60 5-minute divisions — 12 10-minute divisions — 6 (4) Figure 5-8C shows a 5-minute interval scale. Make these divisions in the same manner as for a graphic scale. The completed scale makes it possible to determine where the unit will be at any given time. However, it must be remembered that this scale is for one specific rate of march only, 4 kilometers per hour. Back to Scale and Distance
{"url":"http://www.4orienteering.com/scale_distance/16/","timestamp":"2014-04-21T02:26:46Z","content_type":null,"content_length":"19134","record_id":"<urn:uuid:04056c8b-2f58-4eaf-96a7-98629670cf83>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. Why do this problem? When students meet polar coordinates for the first time, they are sometimes reluctant to engage with a new representation when they can already do so much with the powerful cartesian representation to which they are accustomed. This problem defines polar coordinates through the familiar idea of specifying a location using a bearing and a distance, and then offers students the opportunity to reflect on occasions when the polar representation might be better suited than the cartesian. Possible approach Share with students the map of Scotland from the problem (available on a PowerPoint or as a worksheet). "The grid on this map measures from an origin at Edinburgh, and the lines indicate distances from Edinburgh at 20km intervals. Lines are marked radiating from Edinburgh at $15^\circ$ or $\frac{\pi} {12}$ radian intervals. Imagine you were in Edinburgh. How might you describe the position of Dundee?" Students' answers should provoke a need to standardise the way the distance and direction are expressed. "Mathematicians often use a representation called polar coordinates when they want to specify a position based on its distance and direction from the origin. We measure the angle from the horizontal, usually in radians, and the distance, and we write $(r, \theta)$ for the coordinate pair. If Dundee's polar coordinate position is $(60, \frac{5\pi}{12})$, can you write down the positions of some other places from the map?" Give students a few minutes to have a go, and then share answers. "As with $x$ and $y$ in cartesian coordinates, we can write an equation linking $r$ and $\theta$ to plot graphs. In cartesian coordinates, $y=$ constant, $x=$constant and $y=x$ form straight lines horizontally, vertically and at $45^{\circ}$. Talk to your partner and see if you can sketch the polar coordinate graphs $r=$ constant, $\theta =$ constant and $r=\theta$." While students work on this, circulate to check that they have understood the definitions. If they are stuck, you could ask questions like: "Choose an origin. Can you plot some points where $r=1$? "Can you plot some points where $\theta=\frac{\pi}{4}$? "Can you plot some points where $r = \theta$?" Next, introduce the main part of the problem: "Here are ten different curves. With your partner, decide whether you'd rather draw it using cartesian or polar coordinates. You can choose where the origin goes. Once you've decided, see if you can work out a functional form that results in graphs like the ones you've been given." This works particularly well if students have access to a graphing package or graphical calculator that can operate in cartesian and polar mode. Some useful graphing tools are suggested here. Students can then try out their ideas to recreate the polar graphs. Finally, bring students together to discuss what they have found out. This PowerPoint has all ten images if you want to display them one at a time for discussion. Invite students to say where they would choose to place the origin, and what sort of functional form they think the graphs have. This could be checked using graphing software. Possible extension Earth Orbit offers a very challenging extension for students who are interested in applications of polar coordinates to solve real-world problems. Possible support The article Where? Over There... provides a gentle introduction to polar co-ordinates that students could read through and work on before tackling the main part of this problem.
{"url":"http://nrich.maths.org/8055/note?nomenu=1","timestamp":"2014-04-20T21:17:46Z","content_type":null,"content_length":"6886","record_id":"<urn:uuid:458287a4-b83a-4086-b94f-6669d13afae8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
Reference request: representation theory of the hyperoctahedral group up vote 7 down vote favorite I was wondering if someone knows a good reference for the representation theory of the hyper-octahedral group $G$. The hyper-octahedral group $G$ is defined as the wreath product of $C_2$ (cyclic group order $2$) with $S_n$ (symmetric group on $n$ letters). I understand that the representations of $G$ are in bijection with bi-partitions of $n$. I am looking for a reference which explains the details of why the representations of $G$ are in bijection with bi-partitions of $n$, and constructs the irreducible representations of $G$ (I imagine this is vaguely similar to the construction of Specht modules for $S_n$). So far, the only reference I have is an Appendix of MacDonald's "Symmetric functions and Hall polynomials" (2nd version), which deals with the representation theory of the wreath product of $H$ with $S_n$ (for $H$ being an arbitrary group, not $C_2$). "MacDonald" here (and often elsewhere) refers to Ian G. Macdonald, whose work has been highly influential in representation theory, combinatorics... His contemporary Ian D. MacDonald (sometimes 3 I.D. Macdonald) had a quirky professional career and did much less influential work in conventional group theory. Anyway, I.G. Macdonald wrote an interesting note Some irreducible representations of Weyl groups (Bull. LMS, 1972) with reference to W. Specht's 1937 paper Darstellungstheorie der Hyperoktaedergruppe. But more recent references are suggested in the answers here. – Jim Humphreys May 23 '10 at 13:20 add comment 4 Answers active oldest votes As Bruce Westbury suggested at another question, the following book might be mentioned here. A. V. Zelevinsky, Representations of Finite Classical Groups A Hopf algebra approach; Lecture Notes in Mathematics 869 (1981) up vote It is unfortunately probably hard to get anymore, and its typography is not very attractive, but I find the treatment very elegant. The representation theory of $S_n$ gives rise to an 4 down algebraic structure called Positive Self-Adjoint Hopf-algebra, whose simplest nontrivial model is the ring of symmetric functions with a comultiplication. Its combinatorial structure is vote entirely deduced from the axioms, and it provides models for among other things the representations rings for finite linear groups and for wreath products of the symmetric groups. Thus the hyperoctahedral groups are treated nicely as a simple special case. Would you be so kind to comment - what is comultiplication on symmetric functions ? and where does it come from ? – Alexander Chervov Nov 26 '11 at 18:11 I like Zelevinsky's approach very much, but it diverges early from the usual treatments of the representation theory of the symmetric group, so I think it would be better to look at this after understanding one of the more straightforward treatments. – Tom Church Nov 26 '11 at 20:32 Symmetric functions are in infinitely many variables, and order doesn't matter. Now rename the variables $x_0,y_0,x_1,y_1,x_2,\ldots$ and decompose as a sum of products of a symmetric function in the $x$'s and one in the $y$'s. For instance $e_k$ gives $\sum_{i+j=k}e_i\otimes e_j$ since the monomials can be arbitrarily spread across the $x$'s and $y$'s, while $p_k$ gives $p_k\otimes1+1\otimes p_k$ since the monomials involve an $x_i$ or an $y_i$, but the two cannot mix in a power sum. – Marc van Leeuwen Nov 26 '11 at 20:33 @Tom: Yes I agree, this might not be the best introduction to the representations of the symmetric groups (especially if these are among the first groups you study representations of); a more concrete approach would be in place. But once you've seen a bit of that and you wonder if there is any higher perspective that explains why the details fall in place as they do, Zelevinsky's approach is a real revelation. – Marc van Leeuwen Nov 26 '11 at 20:41 add comment I liked the references of Kerber listed in the wikipedia article. The most relevant chapter is available online, along with both volumes which were quite useful. Kerber's presentation focusses on the idea that H is going to be cyclic and specifically handles H of order 2, but like MacDonald handles general H abstractly. GAP handles the up vote 3 hyper-octahedral group this way too, using generic code for wreath products written more or less solely for the hyper-octahedral group. The "bi" in bi-partitions just refers to the two down vote conjugacy classes of C2, and the general theory replaces "bi" by however many conjugacy classes H has. add comment The theory extends to the wreath product of a cyclic group $C$ with the symmetric group $S_n$. Then we are looking at a list of $|C|$ partitions with a total of $n$ boxes. This is in MacDonald so I expect you know this. Now we can deform these group algebras analogously to deforming the group algebras of $S_n$ to Hecke algebras. The group algebra of the cyclic groups is deformed to $K[x]/p(x)$ where the up vote 3 degree of $p$ is $|C|$ where originally we had $x^{|C|}=1$. These are known as Ariki-Koike algebras and there is an extensive literature on these. down vote Even if you are only interested in hyperoctahedral groups you may find papers on Ariki-Koike algebras which give you what you want by specialising. For example, I have seen the semi-normal form for Ariki-Koike algebras but not for hyperoctahedral groups. add comment This is quite late but I've been playing around with type BC Coxeter group representations recently and thought I'd provide a nice reference I've found for anyone else that is interested: Alun Morris provides a construction of all the irreducible representations of the hyperoctahedral group (in characteristic zero) via an extension of the usual 'polytabloid' combinatorial machinery from type A in "Representations of Weyl Groups over and arbitrary field", A.O. Morris; 'Young Tableaux and Schur Functors in algebra and geometry' Asterisque 87-88 (1981) p.267-288 up vote 2 down vote A 'straightening algorithm' is provided by H. Can, here. It should be noted that these articles provide a more computational approach to the representations introduced in I.G. Macdonald's paper referenced by Jim Humphreys in the comment to Vinoth's question. add comment Not the answer you're looking for? Browse other questions tagged rt.representation-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/25625/reference-request-representation-theory-of-the-hyperoctahedral-group?sort=newest","timestamp":"2014-04-17T12:37:47Z","content_type":null,"content_length":"70859","record_id":"<urn:uuid:4d3d325b-6e3d-4966-b7a3-3e644354c762>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
The rungs of mathematical discovery by Angela Herring of news@Northeastern More than a decade ago, math­e­matics pro­fessor Valerio Toledano Laredo was puz­zling over the rela­tion­ship between the sym­me­tries of macro­scopic and micro­scopic sys­tems when he dis­cov­ered a brand new set of dif­fer­en­tial equa­tions. Toledano Laredo named them the Casimir equa­tions in honor of the physi­cist Hen­drick Casimir, who, in the 1930s, dis­cov­ered a key ingre­dient in their construction. Toledano Laredo recently received a grant from the National Sci­ence Foun­da­tion to con­tinue exploring the Casimir equa­tions’ unex­pected prop­er­ties. Since their dis­covery, the equa­tions have been turning up in the far­thest cor­ners of math­e­matics — from rep­re­sen­ta­tion theory (the math­e­mat­ical study of sym­metry), to analysis (the field of cal­culus) and alge­braic geometry. They are also rel­e­vant to both string theory, which seeks to unify Einstein’s theory of gen­eral rel­a­tivity with quantum physics, and sta­tis­tical mechanics, which explores large assem­blies of small par­ti­cles such as mol­e­cules in a gas or pho­tons in a laser beam. The Casimir equa­tions also govern a phe­nom­enon in alge­braic geom­etry known as wall-crossing, wherein slight vari­a­tions in the equa­tions gov­erning the shape of a curve or a sur­face cause it to dra­mat­i­cally morph—like a bubble pop­ping off the sur­face of the bath­water and drifting away. Toledano Laredo said he and his stu­dents have grad­u­ally begun to under­stand these mul­ti­fac­eted equa­tions better. Building on work from a pre­vious NSF col­lab­o­ra­tive grant with researchers at Brown, Columbia and MIT, Toledano Laredo will explore in par­tic­ular a prop­erty that he dis­cov­ered in con­junc­tion with Columbia Uni­ver­sity assis­tant pro­fessor Sachin Gautam, who grad­u­ated from North­eastern in 2011 under Toledano Laredo. “The equa­tions seem to exist in more and more sophis­ti­cated guises that are arranged like the rungs of a ladder,” said Toledano Laredo. One com­po­nent of the new project will involve studying the equa­tions on the higher rungs. A second com­po­nent of the project is a spin-off from the first and has taken on a life of its own. As Toledano Laredo put it, “It is con­cerned with exactly under­standing the rela­tion between the rungs.” Sur­pris­ingly, the less sophis­ti­cated equations—those on the lower rungs—seem to con­tain all the tools nec­es­sary to under­stand the more sophis­ti­cated ones, he explained. “People know very well how to go up the ladder and think that what’s below is actu­ally sim­pler,” he said. But as he and Gautam have shown, this is not actu­ally the case. The new grant will allow Toledano Laredo to probe that fact in greater depth. Tagged with: Casmir Equations, College of Science, macroscopic and microscopic systems, Mathematics, Northeastern University, Science, Valerio Toledano Laredo Posted in Mathematics
{"url":"http://www.northeastern.edu/cos/2012/11/the-rungs-of-mathematical-discovery/","timestamp":"2014-04-17T12:47:10Z","content_type":null,"content_length":"28340","record_id":"<urn:uuid:1cfaa5b7-e4b4-494a-aa7a-8c8ef1035f4f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra Tutors West Bloomfield, MI 48322 Master Certified Coach for Exam Prep, Mathematics, & Physics ...I look forward to speaking with you and to establishing a mutually beneficial arrangement in the near future! Best Regards, Brandon S. 1 covers topics such as linear equations, systems of linear equations, polynomials, factoring, quadratic equations,... Offering 10+ subjects including algebra 1 and algebra 2
{"url":"http://www.wyzant.com/Plymouth_MI_algebra_tutors.aspx","timestamp":"2014-04-23T16:34:54Z","content_type":null,"content_length":"59069","record_id":"<urn:uuid:a335c145-57df-4aa2-9673-090ef2a67cdc>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Sargent, GA Geometry Tutor Find a Sargent, GA Geometry Tutor ...I am also available for teaching Spanish, as well as almost any subject for lower grades. I am fun and outgoing and I like to make learning fun! In high school, I took a course preparing for educational fields. 40 Subjects: including geometry, English, Spanish, reading I am Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery. 13 Subjects: including geometry, statistics, SAT math, GRE [GROUP RATES AVAILABLE!] Math has always seemed interesting to me. It just makes sense. It's logical. 21 Subjects: including geometry, calculus, statistics, algebra 1 ...My style is to first identify the root of the problem(s) and then develop exercises to help strengthen weaknesses and improve performance. I pride myself on my ability to adapt to any learning style. In college, I worked at a tutoring center and tutored my peers in engineering classes. 12 Subjects: including geometry, calculus, GRE, algebra 1 I am currently a full time student getting my degree in radiology technology. I am currently looking for a part time tutoring job to help other students with various subjects. I have several years of experience tutoring. 18 Subjects: including geometry, reading, algebra 1, GED Related Sargent, GA Tutors Sargent, GA Accounting Tutors Sargent, GA ACT Tutors Sargent, GA Algebra Tutors Sargent, GA Algebra 2 Tutors Sargent, GA Calculus Tutors Sargent, GA Geometry Tutors Sargent, GA Math Tutors Sargent, GA Prealgebra Tutors Sargent, GA Precalculus Tutors Sargent, GA SAT Tutors Sargent, GA SAT Math Tutors Sargent, GA Science Tutors Sargent, GA Statistics Tutors Sargent, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/sargent_ga_geometry_tutors.php","timestamp":"2014-04-18T04:19:37Z","content_type":null,"content_length":"23715","record_id":"<urn:uuid:b3eaad96-4f7f-41e3-b221-78d020278f61>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Odd Thought About Identity Harvey Friedman friedman at math.ohio-state.edu Wed May 13 00:43:46 EDT 2009 > On May 12, 2009, at 2:18 PM, Richard Heck wrote: > This came up in my logic final. There was a deduction in which one got > to here: > Rxy . ~Ryx > and needed to get to here: > ~(x = y) > What a lot of students did was this: > (x)(y)(x = y --> Rxy <--> Ryx) > This does not, of course, accord with the usual way we state the > laws of > identity, but it struck me that it is, in fact, every bit as intuitive > as the usual statement. Which, of course, is why they did it that way. > It wouldn't be difficult to formulate a version of the law of identity > that allowed this sort of thing. But I take it that it would not be > "schematic", in the usual sense, or in the strict sense that Vaught > uses. I wonder, therefore, if a logic that had a collection of > axioms of > this sort might not yield an interesting example somewhere. Or if > there > isn't a similar phenomenon somewhere else. > Anyone have any thoughts about this? This is merely a simple comment. In, e.g., Enderton's book, Intro to Math Logic, there is the following formulation of the axioms of identity: x = x x = y implies (A implies A'), where A is atomic and A' is obtained from A by replacing some occurrences of x by y. So this is in this direction. Obviously, we can use x = x x = y implies (A implies A*), where A is atomic and A* is obtained from A by simultaneously replacing all of the occurrences of x,y in A by either x or y. Harvey Friedman More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2009-May/013612.html","timestamp":"2014-04-21T05:54:21Z","content_type":null,"content_length":"4082","record_id":"<urn:uuid:7e64d77c-161f-46df-aed5-c42f75e2248a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimum, Underpopulation and Over Population Summary This section contains 400 words (approx. 2 pages at 300 words per page) Optimum, Underpopulation and Over Population Summary: Using examples from MEDC's and LEDC's explain three concepts; Under population, over population and optimum population. Population can be split up into three groups. These three groups determine the amount of available resources and the number of people in one area. The three categories consist of; 1. Under population, 2. Over population 3. Optimum population Under population occurs when the amount of resources exceeds the amount of people living in that area. An example of this would be in Canada. With Canada's amount of resources, it could afford to more than double its current population and still withhold its current standard of living. Canada can export their food surplus and their excess minerals and maintain a high income thus leading to an increased standard of living. They attract many immigrants because of a promise of a better life and job vacancies. Over population is when the number of people in one area exceeds the amount of resources available. This would then lead to job... This section contains 400 words (approx. 2 pages at 300 words per page)
{"url":"http://www.bookrags.com/essay-2005/6/4/15759/33891/","timestamp":"2014-04-19T16:12:08Z","content_type":null,"content_length":"33472","record_id":"<urn:uuid:fbdabbe3-57b1-423f-a5f3-126cf5a95d5c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
Analytic Combinatorics Posted: Sat Apr 21, 2007 7:42 am by ndaru Analytic Combinatorics Authors : Philippe Flajolet , Algorithms Project, INRIA Rocquencourt and Robert Sedgewick , Department of Computer Science, Princeton University Pages : 745 Publication Date : October 23, 2006; updated February 11, 2007 From the Preface: Analytic Combinatorics aims at predicting precisely the properties of large structured combinatorial configurations, through an approach based extensively on analytic methods. Generating functions are the central objects of the theory. Analytic Combinatorics starts from an exact enumerative description of combinatorial structures by means of generating functions, which make their first appearance as purely formal algebraic objects. Next, generating functions are interpreted as analytic objects, that is, as mappings of the complex plane into itself. Singularities determine a function’s coefficients in asymptotic form and lead to precise estimates for counting sequences. This chain applies to a large number of problems of discrete mathematics relative to words, trees, permutations, graphs, and so on. A suitable adaptation of the methods also opens the way to the quantitative analysis of characteristic parameters of large random structures, via a perturbational approach. This book is meant to be reader-friendly. Each major method is abundantly illustrated by means of concrete examples treated in detail -- there are scores of them, spanning froma fraction of a page to several pages -- offering a complete treatment of a specific problem. These are borrowed not only from combinatorics itself but also from neighbouring areas of science. With a view of addressing not only mathematicians of varied profiles but also scientists of other disciplines, Analytic Combinatorics is self contained, including ample appendices that recapitulate the necessary background in combinatorics and complex function theory. A rich set of short Notes -- there are more than 250 of them -- are inserted in the text and can provide exercises meant for self study or for students' practice, as well as introductions to the vast body of literature that is available. We have also made every effort to focus on core ideas rather than technical details, supposing a certain amount of mathematical maturity but only basic prerequisites on the part of our gentle readers. The book is also meant to be strongly problem-oriented, and indeed it can be regarded as amanual, or even a huge algorithm, guiding the reader to the solution of a very large variety of problems regarding discrete mathematical models of varied origins. In this spirit, many of our developments connect nicely with computer algebra and symbolic manipulation systems. Courses can be (and indeed have been) based on the book in various ways. Chapters I–III Symbolic Methods serve as a systematic yet accessible introduction to the formal side of combinatorial enumeration. As such it organizes transparently some of the rich material found in treatises like those of Bergeron-Labelle-Leroux, Comtet, Goulden-Jackson, and Stanley. Chapters IV–VIII relative to Complex Asymptotics provide a large set of concrete examples illustrating the power of classical complex analysis and of asymptotic analysis outside of their traditional range of applications. This material can thus be used in courses of either pure or applied mathematics, providing a wealth of nonclassical examples. In addition, the quiet but ubiquitous presence of symbolic manipulation systems provides a number of illustrations of the power of these systems while making it possible to test and concretely experiment with a great many combinatorial models. Symbolic systems allow for instance for fast random generation, close examination of non-asymptotic regimes, efficient experimentation with analytic expansions and singularities, and so on. View/Download Analytic Combinatorics
{"url":"http://freetechbooks.com/analytic-combinatorics-t581.html","timestamp":"2014-04-17T21:22:53Z","content_type":null,"content_length":"29345","record_id":"<urn:uuid:f6089dae-0cc4-4bed-8cb8-1b9efbe2ae44>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Anirban’s Angle: Randomization, Bootstrap and Sherlock Holmes Contributing Editor Anirban DasGupta writes: It might well be an exercise in frivolity, but I see a common thread between Sherlock Holmes and the bootstrap. It’s randomized inference. A standard example in a statistics class is that if a coin is tossed 20 times, the 5% UMP unbiased test concludes that the coin is fair if $10 ± 3$ heads are observed, that the coin is unfair if less than 6 or more than 14 heads are observed, and if exactly 6 or exactly 14 heads are observed, then the decision is left to the toss of a customized coin that produces heads 39.2% of the times. When in a quandary, leave it to a machine. Crazy? Notwithstanding the prestige of the dainty and enduring Neyman–Pearson theory, and that Wald himself considered using post-data randomization in his 1950 book, randomized tests and confidence intervals have been met with polite scorn and a cold shrug. Even the staunchest believer in optimal decisions runs away from post-data randomization (discussions of Basu, 1980, JASA). There is one celebrated exception, the bootstrap, and to a lesser extent Pitman’s permutation tests. It is not my intention to knock or malign a wildly popular method. But, because bootstrap Monte Carlo is all but essential in estimating a bootstrap distribution, or its functionals, the final bootstrap inference is machine randomized. In different sittings, the machine would produce different answers for the same question and the same data. Sometimes visibly different. But it hasn’t caused the bootstrap a smidgen of a dent in its popularity (Efron and Tibshirani, 1993, CRC Press; Edgington, 1995, CRC Press). I quote a small part of an example. Take the usual one dimensional iid $F$ scenario, and consider the mean absolute median-deviation $T_n = \frac{1}{n} \sum_{i=1}^n |X_i − M_n|$, $M_n$ being the median of the data. Under two moments, $\sqrt{n}[T_n − E|X−ξ|]$ is asymptotically normal, ξ being any median of $F$. For general absolutely continuous $F$, modern empirical process theory (Donsker classes) can be used to rigorously obtain the asymptotic variance. I take $F$ to be a Laplace distribution with parameters μ, σ, for, in that case, we have the crisp result is asymptotically $N(0, 1)$. So the traditional percentiles for the 95% CLT interval would be $±1.95996 ≈ ±1.96$. With $n$ = 35, and one fixed simulated dataset, I bootstrapped 15 different times, using each of five values of $B$ three times, $B$ = 600, 750, 900, 1000, 1200 [choice of $B$ is discussed in Hall (1986, AoS), Horowitz (1994, JoE), or Shao and Tu (1995, Springer)]. The bootstrap substitutes for 1.96 varied between 1.651 and 2.030, with an average of 1.845 and the lower percentile varied between −2.021 and −1.777 with an average of −1.883. Thirteen of the 15 times, the bootstrap interval was shorter than the CLT answer, and twice, essentially identical. I would like to be corrected, but I am not sure that in practice the bootstrap is repeated (even if with the same $B$), and the different randomizations properly recombined; I do not have any space to discuss sensible recombination here. The Jackknife is secure on that front. Nonetheless, the bootstrap is a singular success story for randomized inference. Let me proceed to the Sherlock Holmes example, one of wide notoriety. This is the story of The Final Problem. Holmes is fleeing London to escape the ruthless revenge of his mortal enemy, the certified evil genius and “Napoleon of crime,” Professor Moriarty. I apologize to the world that the Professor was a mathematician, and one of “phenomenal faculty”; Euler must be bowing his head in shame. Holmes boards the train at London, intending to get off at the terminal station Dover, and then to take a ship to the continent. The train has one intermediate stop at Canterbury. As the train leaves Victoria station, Holmes sees Moriarty on the platform, and must assume that Moriarty knows he is on this train. Moriarty can surely arrange express transportation to beat him to Dover. Anticipating this, Holmes may instead get off at Canterbury. But being the wily master mathematician that he is, Moriarty will anticipate what Holmes anticipated, and may himself proceed instead to Canterbury. Now, Holmes of course is mighty astute, and so surely he anticipates that Moriarty anticipates what Holmes first anticipated, and so on, yes, we have two great stalwarts, adversaries in a decision problem: where to alight? Philip Stark kindly pointed out that the Sicilian scene in The Princess Bride is formally equivalent to the Holmes-Moriarty game. There is excellent literature on this fascinating example. Let me cite only Morgenstern (1935, NYU Press), Clayton (1986, discussion of Diaconis and Freedman, 1986, AoS), Eichberger (1995, GEB), Case (2000, AMM), and Koppl and Rosser (2002, SCE). The Holmes–Moriarty problem may be set up as a decision problem with a loss function. Each of Holmes’s non-randomized actions $a_0$ = detrain at Canterbury, $a_1$ = detrain at Dover, is admissible as well as minimax. Given the infinite chain of reasonings—“I believe that you believe that I believe that…”—each makes, paradoxes of self-reference arise and convergence is not attained. Randomized decisions seem to make sense here, and only those seem to make sense! If Holmes’s loss, should he find himself at the same station with Moriarty, is $L$, is zero should he detrain at Canterbury while Moriarty merrily proceeds to Dover, and is $cL, c<1$, should Moriarty detrain at Canterbury but Holmes continues to Dover, then Holmes’s optimum randomized strategy is $pa_0 + (1−p)a_1$ and Moriarty’s is $(1−p)a_0 + pa_1$, where $p=\frac{1-c}{2-c} $, and in this case, the game is a stalemate in the sense of von Neumann. And a stalemate is reasonable in a battle of two equal giants. The Sherlock Holmes stories are such monuments of first-rate literature, unequalled and transcendent, that I know connoisseurs who do not leave home for long without Holmes in their suitcase. As in a laughing baby, a rose, a Mozart symphony, sunset over the ocean, raindrops on the window, or a beautiful theorem, in Holmes a man can find his solace. Sir Conan Doyle chose his favorite 19 Holmes stories: The Final Problem is on that list; The Dancing Men is categorically statistical. The British TV Sherlock Holmes series, while romancing all that is bizarre, is also marvelous entertainment. 1 Comment Welcome to the new and improved IMS Bulletin website! We are developing the way we communicate news and information more effectively with members. The print is still with us (free with IMS membership ), and still available as a PDF to download , but in addition, we are placing some of the news, columns and articles on this blog site, which will allow you the opportunity to interact more. We are always keen to hear from IMS members, and encourage you to write articles and reports that other IMS members would find interesting. Contact the IMS Bulletin at What is “Open Forum”? With this new blog website, we are introducing a new feature, the Open Forum . Any IMS member can propose a topic for discussion. Email your subject and an opening paragraph (send this to ) and we'll post it to start off the discussion. Other readers can join in the debate by commenting on the post. Search other Open Forum posts by using the Open Forum category link below. Start a discussion today! Recent posts About IMS The Institute of Mathematical Statistics is an international scholarly society devoted to the development and dissemination of the theory and applications of statistics and probability. We have about 4,500 members around the world. Visit IMS at
{"url":"http://bulletin.imstat.org/2013/07/anirban%E2%80%99s-angle-randomization-bootstrap-and-sherlock-holmes/","timestamp":"2014-04-17T15:30:27Z","content_type":null,"content_length":"27821","record_id":"<urn:uuid:c5bbc2eb-e55a-40da-9f52-eb9e4ad47612>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Assignment 2 Next: Young Energetic pulsars Up: A Tutorial on Radio Previous: The population of millisecond 1. Assuming a thin-screen model, calculate the apparent angular diameter of the Crab pulsar at 600 MHz. Assume that the apparent angular diameter = 100mas at 100 MHz. 2. Calculate the apparent angular diameter at 600 MHz if the screen is located at 1% of the distance from the pulsar to the Earth. Hint: write down a formula for the geometry in question 1 and the make a substitution for the distance from the pulsar to the screen. 3. Calculate the parallax's for pulsars with distances of 100, 500, 1000, 5000, 10000 parsecs. 4. Would it be possible to measure these parallax's using MERLIN? Why? If so suggest an observational strategy ie how many epochs of observations and how you would space them. 5. Compare the characteristic age of the Crab pulsar with the time since Chinese astronomers observed the supernova that formed the pulsar. Try using the Princeton pulsar catalog service on the WWW to get the parameters you need. 6. Calculate the birth period of a pulsar which has the following parameters: P = 1.87 ms, n = 3, t = 5 billion years and Jon Bell Thu Dec 19 15:15:11 GMT 1996
{"url":"http://www.jb.man.ac.uk/~pulsar/Education/Tutorial/tut/node46.html","timestamp":"2014-04-20T05:44:56Z","content_type":null,"content_length":"2791","record_id":"<urn:uuid:b1af931f-f15a-4355-b3e2-c3ceda3f612f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimum number of contractions needed to obtain a particular invariant set up vote 7 down vote favorite Consider the Koch curve $G \subseteq \mathbb{R}^2$. Clearly $G$ is the invariant set (IS) of the iterated function system (IFS) $\lbrace \phi_1, \phi_2, \phi_3, \phi_4 \rbrace$. Where (not wanting to jump between $\mathbb{R}^2$ and $\mathbb{C}$ but doing so for ease): $\phi_1(x) = \frac{1}{3} x$, $\phi_2(x) = \frac{1}{3} (x \exp(\frac{i \pi}{3}) + 1)$, $\phi_3(x) = \frac{1}{3} (x \exp(-\frac{i \pi}{3}) + 1 + \exp(\frac{i \pi}{3}))$, $\phi_4(x) = \frac{1}{3} (x + However can we do better? i.e. can we find an IFS consisting of fewer contractions such that its IS is $G$? In this case, yes. The IFS $\lbrace \psi_1, \psi_2 \rbrace$ also has $G$ as its IS where: $\psi_1(x) = \frac{1}{\sqrt{3}} x \exp(-\frac{5 i \pi}{6}) + \frac{1}{3} (1 + \exp(\frac{i \pi}{3}))$, $\psi_2(x) = \frac{1}{\sqrt{3}} x \exp(\frac{5 i \pi}{6}) + 1$ And as we know that an IFS consisting of a single contraction has a single point as its IS, we know that this is the best that we can do. But what about in general? If $G \subseteq \mathbb{R}^n$ is the IS of the IFS $\lbrace \phi_1, \phi_2, \ldots, \phi_m \rbrace$ when can we tell if there exists an IFS with $G$ as its IS and consisting of strictly less than $m$ contractions? As a specific example: how about the Sierpinski gasket / carpet? Can we do better that the obvious 3 / 8 construction IFS? fractals ds.dynamical-systems 2 Can you, please, comment on what makes the $\psi$ IFS work? What is the relation between the semigroups generated by $\phi$ and $\psi$? – Victor Protsak Jul 18 '10 at 9:41 Ah - although it might not be obvious at first, $\psi_1$ and $\psi_2$ were chosen such that $\psi_1 \circ \psi_2 = \phi_1$, $\psi_1 \circ \psi_1 = \phi_2$, $\psi_2 \circ \psi_2 = \phi_3$ and $\ psi_2 \circ \psi_1 = \phi_4$. But is dont think this process will generalise well. – Mark Bell Jul 18 '10 at 18:40 add comment 2 Answers active oldest votes An interesting question. Of course there is some ambiguity in the formulation "when can we tell". Certainly in explicit examples, one may be able to apply ad-hoc methods. For example, things are easier for the Sierpinski gasket and carpet, since these have identifiable features in terms of their complementary regions. For example, if we wish to write the Sierpinski gasket as a union of smaller copies, it should be fairly easy to see that each complementary region must be mapped to another complementary region. But this means that each contraction must correspond to one of the "smaller triangles" that appear in the usual gasket contraction, and we need at least three of these to make up the whole gasket. The same type of argument should work for the Sierpinski carpet. EDIT: Let me provide a few additional arguments to illustrate what I mean in the case of the Sierpinski gasket and carpet. Any equilateral triangle contained in the Sierpinski gasket is the triangle surrounding one of the "children" in the usual construction. up vote 1 down vote An exercise for the reader. (Hint: Note that the only way for a straight line in the gasket to begin in one of the standard triangles but not end there is to pass through the two points accepted by which it is connected to the rest of the gasket.) If $A$ is an affine similarity that maps the Sierpinski gasket into itself, then $A$ can be written as a finite composition of the three maps from the "standard" IFS that generates the A similar argument works for the carpet. Here it is not enough to consider the outer square, but if we add the first generation squares to it, the same works. To state this, let A be the union of the boundaries of nine squares that are joined together to form a larger square; e.g. $$A=\{(x,y)\in[0,3]^2: x\in\{0,1,2,3\} \text{ or } y\in\{0,1,2,3\}\}$$. Let us call any image of A under an affine similarity a "3-by-3 grid". The outer boundary of any nine-by-nine grid contained in the Sierpinski gasquet is the boundary of one of the squares occuring in the usual construction. Again, I will leave the proof as an exercise. The claim that the usual system is optimal then follows immediately once more. But intuitively the Koch curve appears to consists of 4 smaller copies of itself, so I'm not sure that it will "be fairly easy to see that ...". Although we have that if $G$ is the IS of an IFS consisting of $m$ contractions, then $\text{Dim}_H(G)$ < m$. This then gives us a lower bound on the number of contractions needed (for the Serpinski Gasket & Carpet, 3). So in the case of the Gasket we have an optimal solution, but for the carpet we may be able to do better. Can we? Can we prove that we can't? – Mark Bell Aug 3 '10 at 13:39 I am not sure what you mean here. The gasket and carpet are two-dimensional sets, and of course you need at least two contractions to find them. The point about the Koch curve is that you can also write it as two copies of the same shape, as demonstrated in your post. As I state in my answer, the difference with the gasket and carpet are that you have additional topological things that need to be preserved: complementary regions. That's what makes these cases easier, and shows the IFS given are indeed optimal. – Lasse Rempe-Gillen Aug 4 '10 at 11:15 add comment Perhaps you can consult some of the literature on "finite type condition" for IFSs. That's if you are willing to allow overlap in the IFS. MR1488232 (98i:28010) MR1825981 (2002c:28010) up vote 0 down MR2304331 (2008m:28007) When I did my computations on Barnsley's Wreath, MR1117877 (92j:58062) I used an IFS with 6 transformations. Barnsley's text that first describes this, though, says it was done with 5 transformations (but doesn't describe them). I'm sorry, but what are these codes? MR1117877 (92j:58062)? I tried googling them but couldn't find anything. Thanks. – Mark Bell Aug 5 '10 at 9:50 Mathematical Reviews (in your university library) or MathSciNet (on line by subscription). So try links like ams.org/mathscinet-getitem?mr=1117877 and go from there. Maybe that would have been a better way to list these. – Gerald Edgar Aug 5 '10 at 12:12 add comment Not the answer you're looking for? Browse other questions tagged fractals ds.dynamical-systems or ask your own question.
{"url":"http://mathoverflow.net/questions/32322/minimum-number-of-contractions-needed-to-obtain-a-particular-invariant-set","timestamp":"2014-04-16T22:35:05Z","content_type":null,"content_length":"65436","record_id":"<urn:uuid:c064d300-086e-4867-8c3f-0233a1bbd44e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Danvers, MA Algebra Tutor Find a Danvers, MA Algebra Tutor ...Regardless of a student’s capacity or subject matter, my attention is absolute! I experienced "Real Life" Trigonometry, for more than fifteen years, as an Industry Physicist and Electrical Engineer. If taught properly, Trigonometry is easy to understand and apply. 6 Subjects: including algebra 1, physics, trigonometry, precalculus ...I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these subjects, for the last several years, I have been successfully tutoring for standardized tests, including the SAT and ACT.I have taken a and passed a number of Praxis exams. 36 Subjects: including algebra 2, algebra 1, English, chemistry ...My teaching approach to the ISEE is very student-specific. I start by analyzing my student's specific strengths and weaknesses on a practice test so that our time together is spent in the most productive and efficient way possible. As we progress, what we spend time on each step of the way is in response to my student's evolving needs. 33 Subjects: including algebra 2, English, algebra 1, reading ...That include the full progression of Algebra, Algebra II, Trigonometry etc. I also have tutored the SAT and the LSAT many times. I scored 99th percentile on the SAT (perfect score on current scale) and 90th on the LSAT. 29 Subjects: including algebra 1, algebra 2, reading, calculus ...I have been able to achieve success by setting a pace that is appropriate for each individual student. During our sessions and the attentiveness of the student, I also believe in engaging in a certain amount of conversation with the student that can make our sessions feel more like getting help ... 13 Subjects: including algebra 2, algebra 1, calculus, geometry Related Danvers, MA Tutors Danvers, MA Accounting Tutors Danvers, MA ACT Tutors Danvers, MA Algebra Tutors Danvers, MA Algebra 2 Tutors Danvers, MA Calculus Tutors Danvers, MA Geometry Tutors Danvers, MA Math Tutors Danvers, MA Prealgebra Tutors Danvers, MA Precalculus Tutors Danvers, MA SAT Tutors Danvers, MA SAT Math Tutors Danvers, MA Science Tutors Danvers, MA Statistics Tutors Danvers, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/Danvers_MA_Algebra_tutors.php","timestamp":"2014-04-19T19:36:40Z","content_type":null,"content_length":"23884","record_id":"<urn:uuid:49288aeb-136b-48ad-8445-8b38dc641879>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Streamwood Algebra 2 Tutor Find a Streamwood Algebra 2 Tutor ...I currently include math study skill training, technique, and tips to each of my math students under my tutelage tailored to each student's comprehension, abilities, and study habits. My prior work background and experience for 30+ years consists of providing structural engineering services for ... 10 Subjects: including algebra 2, geometry, algebra 1, GED ...I graduated from the University of California, San Diego with a degree in Biochemistry in 2012. Since graduation, I've worked as a vision therapist for children ages 6-18, combining my love of optometry with one-on-one tutoring. I believe that students are able to learn anything with the right instruction and I know my passion for science and math will contribute greatly to this 25 Subjects: including algebra 2, chemistry, calculus, physics ...I look forward to working with you and your children!I was an advanced math student, completing the equivalent of Algebra 1 before high school. I continued applying algebraic skills in high school, where I was a straight A student and completed calculus as a junior. I tutored math through college to stay fresh. 12 Subjects: including algebra 2, statistics, geometry, SAT math ...I played at the college level for 2-3 hours each day. I coached a school team. I play in our backyard and in a nearby open soccer field with six to eight kids. 29 Subjects: including algebra 2, reading, physics, English I have ten years experience tutoring high school and college students in chemistry (organic, inorganic, physical or analytical), physics and mathematics (geometry, calculus and algebra) on both a one on one level as well as in big groups. My undergraduate was a major in chemistry with a minor in ma... 20 Subjects: including algebra 2, chemistry, physics, GRE
{"url":"http://www.purplemath.com/streamwood_algebra_2_tutors.php","timestamp":"2014-04-18T11:26:02Z","content_type":null,"content_length":"24115","record_id":"<urn:uuid:340c89a4-4a35-4963-96b4-ea51fff4334b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Fundamentals of Transportation/Traffic Signals From Wikibooks, open books for an open world Traffic Signals are one of the more familiar types of intersection control. Using either a fixed or adaptive schedule, traffic signals allow certain parts of the intersection to move while forcing other parts to wait, delivering instructions to drivers through a set of colorful lights (generally, of the standard red-yellow (amber)-green format). Some purposes of traffic signals are to (1) improve overall safety, (2) decrease average travel time through an intersection, and (3) equalize the quality of services for all or most traffic streams. Traffic signals provide orderly movement of intersection traffic, have the ability to be flexible for changes in traffic flow, and can assign priority treatment to certain movements or vehicles, such as emergency services. However, they may increase delay during the off-peak period and increase the probability of certain accidents, such as rear-end collisions. Additionally, when improperly configured, driver irritation can become an issue. Traffic signals are generally a well-accepted form of traffic control for busy intersections and continue to be deployed. Other intersection control strategies include signs (stop and yield) and roundabouts. Intersections with high volumes may be grade separated. Traffic signals can be pretimed, semi-actuated, or fully-actuated. Pretimed intersections have a fixed cycle length. This is easy to implement but can cause excessive delay at some intersections. Semi-actuated intersections have vehicle detectors on the minor roadway. When a vehicle approaches on the minor roadway, the detector receives a signal to change the light to green. In a fully-actuated intersection, all approaches have a detector. Each phase has an initial green light interval to provide time for standing vehicles to get through the intersection. This initial time is extended if the detector at the approach detects a car moving through the intersection. If there are no cars moving through the intersection for a given period of time, the light will change. This is called a "gap out". After the maximum amount of time has passed for the light to be green, the light will change even if there are still cars moving through the intersection. This is called a "max Intersection Queuing[edit] At an intersection where certain approaches are denied movement, queuing will inherently occur. Of the various queuing models, one of the more commons and simple ones is the D/D/1 Queuing Model. This model assumes that arrivals and departures are deterministic (D) and one departure channel exists. D/D/1 is quite intuitive and easily solvable. Using this form of queuing with an arrival rate $\ lambda$ and a departure rate $\mu$, certain useful values regarding the consequences of queues can be computed. One important piece of information is the duration of the queue for a given approach. This time value can be calculated through the following formula: $t_c {\rm{ }} = {\rm{ }}\frac{{\rho r}}{{1 - \rho }} \,\!$ • $t_c$ = Time for queue to clear • $\rho$ = Arrival Rate divided by Departure Rate • $r$ = Red Time With this, various proportions dealing with queues can be calculated. The first determines the proportion of cycle with a queue. $P_q {\rm{ }} = {\rm{ }}\frac{{r{\rm{ }} + {\rm{ }}t_c }}{C}{\rm{ }} \,\!$ • $P_q$ = Proportion of cycle with a queue • $C$ = Cycle Length Similarly, the proportion of stopped vehicles can be calculated. $P_s = \frac{{\lambda \left( {r + t_C } \right)}}{{\lambda \left( {r + g} \right)}} = \frac{{r + t_C }}{C} = P_q \,\!$ $P_s = \frac{{\lambda \left( {r + t_C } \right)}}{{\lambda \left( {r + g} \right)}} = \frac{{\mu t_C }}{{\lambda C}} = \frac{{t_C }}{{\rho C}} \,\!$ • $P_s$ = Proportion of Stopped Vehicles • $g$ = Green Time Therefore, the maximum number of vehicles in a queue can be found. $Q_{\max } {\rm{ }} = {\rm{ }}\lambda r \,\!$ Intersection delay[edit] Various models of intersection delay at isolated intersections have been put forward, combining queuing theory with empirical observations of various arrival rates and discharge times (Webster and Cobbe 1966; Hurdle 1985; Hagen and Courage 1992). Intersections on arterials are more complex phenomena, including factors such as signal progression and spillover of queues between adjacent intersections. Delay is broken into two parts: uniform delay, which is the delay that would occur if the arrival pattern were uniform, and overflow delay, caused by stochastic variations in the arrival patterns, which manifests itself when the arrival rate exceeds the service flow of the intersection for a time period. Delay can be computed with knowledge of arrival rates, departure rates, and red times. Graphically, total delay is the product of all queues over the time period in which they are present. $D_t {\rm{ }} = {\rm{ }}\frac{{\lambda r^2 }}{{2\left( {1 - \rho } \right)}}{\rm{ }} \,\!$ Similarly, average vehicle delay per cycle can be computed. $d_{avg} {\rm{ }} = {\rm{ }}\frac{{\lambda r^2 {\rm{ }}}}{{2\left( {1 - \rho } \right)}}\frac{1}{{\lambda C}}{\rm{ }}\,\!$ $d_{avg} {\rm{ }} = \frac{{r^2 }}{{2C\left( {1 - \rho } \right)}} \,\!$ From this, maximum delay for any vehicle can be found. $d_{\max } {\rm{ }} = {\rm{ }}r \,\!$ Level of Service[edit] In order to assess the performance of a signalized intersection, a qualitative assessment called Level of Service (LOS) is assessed, based upon quantitative performance measures. For LOS, the performance measured used is average control delay per vehicle. The general procedure for determining LOS is to calculate lane group capacities, calculate delay, and then make a determination. Lane group capacities can be calculated through the following equation: $c = s\frac{{g}}{{C}}\,\!$ • $c$ = Lane Group Capacity • $s$ = Adjusted Saturation Flow Rate • $g$ = Effective Green Length • $C$ = Cycle Length Average control delay per vehicle, thus, can be calculated by summing the types of delay mentioned earlier. $d = (d_1(PF)) + d_2 + d_3\,\!$ If your intersection is D/D/X: $d = ((d_1(PF)) + d_3$ This is because there are no random arrivals. If your intersection is M/D/X: $d = (d_1(PF)) + d_2 + d_3$ You might think that there would be no deterministic arrivals because the intersection is M/D/X, however, this is incorrect. d_1 can be thought of as the baseline for the intersection. • $d$ = Average Signal Delay per vehicle (sec) • $d_1$ = Average Delay per vehicle due to uniform arrivals (sec) (equivalent to $D_T$ in previous section) • $PF$ = Progression Adjustment Factor • $d_2$ = Average Delay per vehicle due to random arrivals (sec) • $d_3$ = Average delay per vehicle due to initial queue at start of analysis time period (sec) Uniform delay can be calculated through the following formula: $d_1 = \frac{{0.5C\left( {1 - \frac{g}{C}} \right)^2 }}{{1 - \left[ {\min \left( {1,X} \right)\frac{g}{C}} \right]}} \,\!$ • $X$ = Volume/Capacity (v/c) ratio for lane group. Similarly, random delay can be calculated: $d_2 = 900T\left[ {\left( {X - 1} \right) + \sqrt {\left( {X - 1} \right)^2 + \frac{{8kIX}}{{cT}}} } \right] \,\!$ • $T$ = Duration of Analysis Period (in hours) • $k$ = Delay Adjustment Factor that is dependent on signal controller mode • $I$ = Upstream filtering/metering adjustment factor Overflow delay generally only applies to densely urban corridors, where queues can sometimes spill over into previous intersections. Since this is not very common (usually the consequence of a poorly timed intersection sequence, the rare increase of traffic demand, or an emergency vehicle passing through the area), it is generally not taken into account for simple problems. Delay can be calculated for individual vehicles in certain approaches or lane groups. Average delay per vehicle for an approach A can be calculated using the following formula: $d_A = \frac{{\sum\limits_i {d_i v_i } }}{{\sum\limits_i {v_i } }} \,\!$ • $d_A$ = Average Delay per vehicle for approach A (sec) • $d_i$ = Average Delay per vehicle for lane group i on approach A (sec) • $v_i$ = Analysis flow rate for lane group i Average delay per vehicle for the intersection can then be calculated: $d_I = \frac{{\sum\limits_A {d_A v_A } }}{{\sum\limits_A {v_A } }} \,\!$ • $d_I$ = Average Delay per vehicle for the intersection (sec) • $d_A$ = Average Delay per vehicle for approach A (sec) • $v_A$ = Analysis flow rate for approach A Critical Lane Groups[edit] For any combination of lane group movements, one lane group will dictate the necessary green time during a particular phase. This lane group is called the Critical Lane Group. This lane group has the highest traffic intensity (v/s) and the allocation of green time for each phase is based on this ratio. The sum of the flow ratios for the critical lane groups can be used to calculate a suitable cycle length. $Y_c = \sum\limits_{i = 1}^n {\left( {\frac{v}{s}} \right)} _{ci} \,\!$ • $Y_c$ = Sum of Flow Ratios for Critical Lane Groups • $(v/s)_{ci}$ = Flow Ratio for Critical Lane Group i • $n$ = Number of Critical Lane Groups Similarly, the total lost time for the cycle is also an element that can be used in the calculation of cycle length. $L = \sum\limits_{i = 1}^n {\left( {t_L } \right)_{ci} } \,\!$ • $L$ = Total lost Time for Cycle • $(t_L)_{ci}$ = Total Lost Time for Critical Lane Group i Cycle Length Calculation[edit] Cycle lengths are calculated by summing individual phase lengths. Using the previous formulas for assistance, the minimum cycle length necessary for the lane group volumes and phasing plan can be easily calculated. $C_{min } = \frac{{L*X_c }}{{{\rm{X}}_{\rm{c}} {\rm{ - }}\sum\limits_{{\rm{i}} = {\rm{1}}}^{\rm{n}} {{\rm{Yi}}} }}{\rm{ }} \,\!$ • $C_{min}$ = Minimum necessary cycle length • $X_c$ = Critical v/c ratio for the intersection • $(v/s)_{ci}$ = Flow Ratio for Critical Lane Group • $n$ = Number of Critical Lane Groups This equation calculates the minimum cycle length necessary for the intersection to operate at an acceptable level, but it does not necessarily minimize the average vehicle delay. A more optimal cycle length usually exists that would minimize average delay. Webster (1958) proposed an equation for the calculation of cycle length that seeks to minimize vehicle delay. This optimum cycle length formula is listed below. $C_{opt} = \frac{{\left[ {\left( {1.5L} \right) + 5} \right]}}{{\left( {1.0{\rm{ }} - {\rm{ }}\sum\limits_{i = 1}^n {Y_i } } \right)}}{\rm{ }} \,\!$ • $C_{opt}$ = Optimal Cycle Length for Minimizing Delay Green Time Allocation[edit] Once cycle length has been determined, the next step is to determine the allocation of green time to each phase. Several strategies for allocating green time exist. One of the more popular ones is to distribute green time such that v/c ratios are equalized over critical lane groups. Similarly, v/c ratios can be found with predetermined values for green time. $X_i = \frac{{v_i }}{{c_i }} = \frac{{v_i }}{{s_i *g_i /C}} = \frac{{v_i /s_i }}{{g_i /C}} \,\!$ • $X_i$ = v/c ratio for lane group i With knowledge of cycle lengths, lost times, and v/s ratios, the degree of saturation for an intersection can be found. $X_c = \sum {\frac{{v_i }}{{s_i }}\frac{C}{{C - L}}} \,\!$ • $X_c$ = Degree of saturation for an intersection cycle From this, the total effective green for all phases can be computed. $\sum {g_i } = \sum {\frac{{v_i }}{{s_i }}\frac{C}{{X_c }}} = C - L \,\!$ Second Method For Green Time[edit] Another method for calculating the effective red and green time for a given cycle is to minimize the total delay of the intersection. By assuming the intersection is controlled based on D/D/1 queuing the above equations for total delay can be used. Since the light will contain at least two or more directions the total delay must be calculated for each direction, and then added together to determine the total delay of the intersection. For a two way intersection with opposing lights a and b $D_t {\rm{ }} = {\rm{ }}\frac{{\lambda_{a} r_{a}^2 }}{{2\left( {1 - \rho_{a} } \right)}}+{\rm{ }}\frac{{\lambda_{b} r_{b}^2 }}{{2\left( {1 - \rho_{b} } \right)}}{\rm{ }} \,\!$ Also the effective red time is the cycle length minus the effective green time for the other directions. $C {\rm{ }} = r_{a}+g_{a}{\rm{ }} \,\!$ $g_a {\rm{ }} = r_{b}{\rm{ }} \,\!$ By substituting the two above equation for cycle length and effective red time into the total delay equation, it can then be written with only one variable red time. $D_t {\rm{ }} = {\rm{ }}\frac{{\lambda_{a} r_{a}^2 }}{{2\left( {1 - \rho_{a} } \right)}}+{\rm{ }}\frac{{\lambda_{b} (C-r_{a}^2) }}{{2\left( {1 - \rho_{b} } \right)}}{\rm{ }} \,\!$ By taking the derivate and setting it equal to zero, the minimum effective red time can be calculated. The other directions effective red time, and the effective green times for each directions can then be calculated by using the two above equations involving the cycle length. Example 1: Intersection Queuing[edit] An approach at a pretimed signalized intersection has an arrival rate of 0.1 veh/sec and a saturation flow rate of 0.7 veh/sec. 20 seconds of effective green are given in a 60-second cycle. Provide analysis of the intersection assuming D/D/1 queuing. Traffic intensity, $\rho$, is the first value to calculate. $\rho = \frac{{\lambda}}{{\mu}} = \frac{{0.1}}{{0.7}} = 0.14\,\!$ Red time is found to be 40 seconds (C - g = 60 - 20). The remaining values of interest can be easily found. Time to queue clearance after the start of effective green: $t_c {\rm{ }} = {\rm{ }}\frac{{\rho r}}{{1 - \rho }} = \frac{{0.14(40)}}{{1 - 0.14}} = 6.51\ s \,\!$ Proportion of the cycle with a queue: $P_q {\rm{ }} = {\rm{ }}\frac{{r{\rm{ }} + {\rm{ }}t_c }}{C}{\rm{ }} = {\rm{ }}\frac{{40{\rm{ }} + {\rm{ }}6.51 }}{60}{\rm{ }} = 0.775\,\!$ Proportion of vehicles stopped: $P_s = \frac{{\lambda \left( {r + t_C } \right)}}{{\lambda \left( {r + g} \right)}} = \frac{{0.1 \left( {40 + 6.51 } \right)}}{{0.1 \left( {40 + 20} \right)}} = 0.775 \,\!$ Maximum number of vehicles in the queue: $Q_{\max } {\rm{ }} = {\rm{ }}\lambda r = {\rm{ }}0.1(40) = 4 \,\!$ Total vehicle delay per cycle: $D_t {\rm{ }} = {\rm{ }}\frac{{\lambda r^2 }}{{2\left( {1 - \rho } \right)}}{\rm{ }} = {\rm{ }}\frac{{0.1(40^2) }}{{2\left( {1 - 0.14 } \right)}}{\rm{ }} = 93 veh-s \,\!$ Average delay per vehicle: $d_{avg} {\rm{ }} = \frac{{r^2 }}{{2C\left( {1 - \rho } \right)}} = \frac{{(40)^2 }}{{2(60)\left( {1 - 0.14} \right)}} = 15.5\ s\,\!$ Maximum delay of any vehicle: $d_{\max } {\rm{ }} = {\rm{ }}r = {\rm{ }} 40\ s \,\!$ Example 2: Total Delay[edit] Compute the average approach delay given certain conditions for a 60-second cycle length intersection with 20 seconds of green, a v/c ratio of 0.7, a progression neutral state (PF=1.0), and no chance of intersection spillover delay (overflow delay). Assume the traffic flow accounts for the peak 15-minute period and a lane capacity of 840 veh/hr, and that the intersection is isolated. Uniform Delay: $d_1 = \frac{{0.5C\left( {1 - \frac{g}{C}} \right)^2 }}{{1 - \left[ {\min \left( {1,X} \right)\frac{g}{C}} \right]}} = \frac{{0.5(60)\left( {1 - \frac{20}{60}} \right)^2 }}{{1 - \left[ {\min \left( {1,0.7} \right)\frac{20}{60}} \right]}} = 17.39\ s\,\!$ Random Delay: $T = 0.25\,\!$ (from problem statement) $X = 0.7\,\!$ $k = 0.5\,\!$ (for pretimed control) $I = 1.0\,\!$ (isolated intersection) $c = 840\,\!$ $d_2 = 900T\left[ {\left( {X - 1} \right) + \sqrt {\left( {X - 1} \right)^2 + \frac{{8kIX}}{{cT}}} } \right] = 900(0.25)\left[ {\left( {0.7 - 1} \right) + \sqrt {\left( {0.7 - 1} \right)^2 + \frac{{8 (0.5)(1)(0.7)}}{{840(0.25)}}} } \right] = 4.83\ s\,\!$ Overflow Delay: Overflow delay is zero because it is assumed that there is no overflow. $d_3 = 0\,\!$ Total Delay: $d = d_1(PF) + d_2 + d_3 = 17.39(1) + 4.83 + 0 = 22.22\ s\,\!$ The average total delay is 22.22 seconds. Example 3: Cycle Length Calculation[edit] Calculate the minimum and optimal cycle lengths for the intersection of Oak Street and Washington Avenue, given that the critical v/c ratio is 0.9, the two critical approaches have a v/s ratio of 0.3, and the Lost Time equals 15 seconds. Minimum Cycle Length: $C_{min } = \frac{{L*X_c }}{{{\rm{X}}_{\rm{c}} {\rm{ - }}\sum\limits_{{\rm{i}} = {\rm{1}}}^{\rm{n}} {{\rm{Yi}}} }}{\rm{ }} = \frac{{15*0.9}}{{[0.9 - (2(0.3))]}} = 45 \ s\,\!$ Optimal Cycle Length: $C_{opt} = \frac{{\left[ {\left( {1.5L} \right) + 5} \right]}}{{\left( {1.0{\rm{ }} - {\rm{ }}\sum\limits_{i = 1}^n {Y_i } } \right)}}{\rm{ }} = \frac{{1.5(15) + 5}}{{1.0 - 2(0.3)}} = 68.75 \ s \,\!$ The minimum cycle length is 45 seconds and the optimal cycle length is 68.75 seconds. Thought Question[edit] Why don't signalized intersections perform more efficiently than uncontrolled intersections? The inherent lost time that comes from each signal change is wasted time that does not occur when intersections are uncontrolled. It comes at quite a surprise to most of the Western World, where traffic signals are plentiful, but there are intersections that perform quite well without any form of control. There is an infamous video on YouTube that shows an uncontrolled intersection in India where drivers somehow navigate through a busy, chaotic environment smoothly and efficiently ^[1] . The video is humorous to watch, but it shows a valid point that uncontrolled intersections can indeed work and are quite efficient. However, the placement of traffic signals is for safety, as drivers entering an uncontrolled intersection have a higher likelihood of being involved in a dangerous accident, such as a T-bone or head-on collision, particularly at high speed. Sample Problem[edit] Problem (Solution) Additional Questions[edit] • $t_c$ - Time for queue to clear • $\rho$ - Arrival Rate divided by Departure Rate • $P_q$ - Proportion of cycle with a queue • $P_s$ - Proportion of Stopped Vehicles • $c$ - Lane Group Capacity • $s$ - Adjusted Saturation Flow Rate • $g$ - Effective Green Length • $d$ - Average Signal Delay per vehicle (sec) • $d_1$ - Average Delay per vehicle due to uniform arrivals (sec) • $PF$ - Progression Adjustment Factor • $d_2$ - Average Delay per vehicle due to random arrivals (sec) • $d_3$ - Average delay per vehicle due to initial queue at start of analysis time period (sec) • $X$ - Volume/Capacity (v/c) ratio for lane group. • $T$ - Duration of Analysis Period (in hours) • $k$ - Delay Adjustment Factor that is dependent on signal controller mode • $I$ - Upstream filtering/metering adjustment factor • $d_A$ - Average Delay per vehicle for approach A (sec) • $d_i$ - Average Delay per vehicle for lane group i on approach A (sec) • $v_i$ - Analysis flow rate for lane group i • $d_I$ - Average Delay per vehicle for the intersection (sec) • $v_A$ - Analysis flow rate for approach A • $Y_c$ - Sum of Flow Ratios for Critical Lane Groups • $(v/c)_{ci}$ - Flow Ratio for Critical Lane Group i • $n$ - Number of Critical Lane Groups • $C_{min}$ - Minimum necessary cycle length • $X_c$ - Critical v/c ratio for the intersection • $(v/s)_{ci}$ - Flow Ratio for Critical Lane Group • $C_{opt}$ - Optimal Cycle Length for Minimizing Delay • $X_i$ - v/c ratio for lane group i • $X_c$ - Degree of saturation for an intersection cycle Key Terms[edit] • Progression Adjustment Factor External Exercises[edit] Use the GAME software at the STREET website to learn how to coordinate traffic signals to reduce delay. Use the OASIS software at the STREET website to study how signals change when given information about time-dependent vehicle arrivals. • Hagen, Lawrence T. and Courage, Kenneth. (1992). “Comparison of Macroscopic Models for Signalized Intersection Analysis.” Transportation Research Record. 1225: 33-44. • Hurdle, V. F. (1982). “Signalized Intersections: A Primer for the Uninitiated.” Transportation Research Record 971:96-105. • Webster, F.V. (1958). Traffic Signal Settings. Road Research Technical Paper No. 39. London: Great Britain Road Research Laboratory. • Webster, F.V. and Cobbe, B. M. (1966). Traffic Signals. Road Research Technical Paper No. 56. HMSO London UK.
{"url":"http://en.wikibooks.org/wiki/Fundamentals_of_Transportation/Traffic_Signals","timestamp":"2014-04-19T02:13:43Z","content_type":null,"content_length":"74910","record_id":"<urn:uuid:d5ada21e-c6b1-489b-83e8-f2fef39d557e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Code for Knittel and Metaxoglou Challenges, Difficulties and Warnings" Note: You will require the GADS toolbox for Matlab to run the programs. We have integrated the other algorithms into the zip files. Automobile Data: 1. Zip file containing Matlab and Stata code required to estimate the 500 sets of results using the base tolerances To use these files: A. Unzip the file B. Run main.m C. Run optim_results_summary.m D. Run mkt_power_analysis.m E. Run merger_results_summary.m F. Run the stata scripts that lie within the optuimization, merger and market power folders To estimate the model using e-16 as the contraction mapping tolerance simply change the tolerance in the meanval.m file 2. Stata do file to construct figures and tables 3. Stata do file to construct figure for contraction mapping tolerance of E-16 Note: You will need to update the global paths to run all stata scripts Cereal Data: 1. Zip file containing Matlab and Stata code required to estimate the 500 sets of results (follow directions above) 2. Stata do file to construct figures and tables 3. Stata do file to construct figure for contraction mapping tolerance of E-12
{"url":"http://web.mit.edu/knittel/www/KM_website.html","timestamp":"2014-04-17T21:45:33Z","content_type":null,"content_length":"3463","record_id":"<urn:uuid:5da90a17-e67f-4aee-9fb6-fc9342af35bf>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Undergraduate Academics There is a growing need for people who can combine knowledge of mathematics and science to formulate and solve practical problems. The intent of the Applied Mathematics Option is to provide a broad range of computational and analytical skills, practice in mathematical modeling, and some fundamental knowledge of a scientific discipline. Computational and applied mathematicians are employed in a variety of positions in industry, business, and government. Students must complete one of the following minors: □ Biomedical Sciences □ Chemistry □ Computer Science □ Physics □ Statistics Students should select their minor in the area in which they intend to apply their mathematical talents, and then they should select electives that are particularly suited to the problems in that The General Mathematics Major is the most flexible mathematics major offered at Western. By careful selection of electives, students may combine this major with interests in both the natural and social sciences. Students have even used this major to prepare for law school. The General Mathematics Major provides excellent preparation for graduate study in mathematics. In addition to a strong background in mathematics, this program stresses vigorous logical thinking and communication of abstract concepts. The Secondary Teaching of Mathematics Major combines theoretical mathematics with teaching techniques. This program is designed for students planning to teach in junior or senior high schools. With the current national focus on improving mathematics education, this is a strong major for students interested in a career in education. The major has recently been tailored to meet the modern secondary school curriculum. Background in computer usage, statistics and applications of mathematics are included. Graduation with Honors This recognizes special achievement beyond a normal major program. To graduate with honors a student must maintain a 3.70 GPA in Mathematics and a 3.25 GPA overall and must have taken two of the following courses: • an honors seminar, • a theoretical course selected from Math 530, 570, 580, or an approved 600 level course, • an approved Math 599 course (independent study project leading to a paper or presentation). Interested students should see the associate chairperson to plan their "honors program" in their junior year.
{"url":"http://www.wmich.edu/math/academics/majors-minors.html","timestamp":"2014-04-17T23:28:12Z","content_type":null,"content_length":"12536","record_id":"<urn:uuid:5942985c-87b9-49a6-8712-909684c22e46>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Free multiplication math worksheets, math activities and word searches Printable Multiplication Word Searches Our math worksheets are great to use as activity material. Children love to do word searches. Feel free to print our PDF math puzzles and use them at home or in the classroom. Click here for multiplication and division word searches!!!!! We also have word searches! Popular Addition Word Searches Read here why being able to recite the multiplication table is o so important. Want to know about our new material? Follow us on Facebook.
{"url":"http://www.mathinenglish.com/menuWordSearchesII.php","timestamp":"2014-04-18T20:43:17Z","content_type":null,"content_length":"17681","record_id":"<urn:uuid:7142d787-0990-4fe1-8376-20f991fa7d6b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Reverse Word Search balance to calculate the difference between the credits and debits of (an account). [1/8 definitions] cipher to solve (a problem) by using arithmetic; calculate. [1/7 definitions] compound^1 to calculate or pay (interest) on accrued interest as well as on principal. [1/12 definitions] compute to calculate by mathematical operations. [2/5 definitions] count^1 to compute; calculate. [1/10 definitions] differential in mathematics, a function that expresses the incremental increase of another function as a variable increases, and that can be used to calculate slopes, curves, accelerations, and the like. [1/8 definitions] estimate to calculate the approximate amount, size, or value of. [1/6 definitions] extract in mathematics, to calculate (the root of a number). [1/9 definitions] figure to calculate numerically. [2/16 definitions] Greenwich time the solar time that is determined at the prime meridian through Greenwich, England, and that is used to calculate and regulate time throughout most of the world; Greenwich mean time. Jewish calendar a lunisolar calendar that is used by the Jews to calculate holidays, is reckoned from 3761 B.C., and has twelve months in most years and another month added about every three years. market basket a selection of goods and services, esp. food, considered to represent a typical family's needs over a period of time, and used to calculate changes in the cost of living. miscalculate to calculate or judge incorrectly. miscount to count or calculate incorrectly. [1/2 definitions] phonon a quantum of sound or vibrational energy that is used to calculate the thermal and vibrational properties of solids. (Cf. photon.) recalculate to calculate again, esp. to check for errors. reckon to determine by counting or estimating; make a judgment, as of length, time, or the like; calculate. [1/5 definitions] ring up to record or calculate (the price of a sale) on a cash register. triangulation a system used by navigators and surveyors to calculate the distance between two points, or the relative position of points on a plane, in which each known point is made the vertex of a triangle and each triangle is given a base line of known length. [1/2 definitions] value to determine, estimate, or calculate the worth of; appraise; assess. [1/8 definitions]
{"url":"http://www.wordsmyth.net/?mode=rs&as_data=calculate&as_data_cs=any_w","timestamp":"2014-04-18T03:14:35Z","content_type":null,"content_length":"42403","record_id":"<urn:uuid:905cb114-c862-4434-82cd-1e6bcaa1c6e4>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
A biologically motivated and analytically soluble model of collective oscillations in the cortex. II. Applications to binding and pattern segmentation, Biol. Cybern. 71 Results 1 - 10 of 42 - Neural Networks , 1997 "... The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. In particular it is shown that networks of spiking neurons are computationally more powe ..." Cited by 138 (12 self) Add to MetaCart The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. In particular it is shown that networks of spiking neurons are computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology. 1 Definitions and Motivations If one classifies neural network models according to their computational units, one can distinguish three different generations. The first generation i... - NEURAL COMPUTATION , 2000 "... An integral equation describing the time evolution of the population activity in a homogeneous pool of spiking neurons of the integrate-and-fire type is discussed. It is analytically shown that transients from a state of incoherent firing can be immediate. The stability of incoherent firing is analy ..." Cited by 134 (25 self) Add to MetaCart An integral equation describing the time evolution of the population activity in a homogeneous pool of spiking neurons of the integrate-and-fire type is discussed. It is analytically shown that transients from a state of incoherent firing can be immediate. The stability of incoherent firing is analyzed in terms of the noise level and transmission delay and a bifurcation diagram is derived. The response of a population of noisy integrate-and-fire neurons to an input current of small amplitude is calculated and characterized by a linear filter L. The stability of perfectly synchronized `locked' solutions is analyzed. - Neural Computation , 1995 "... We investigate the computational power of a formal model for networks of spiking neurons. It is shown that simple operations on phasedifferences between spike-trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network o ..." Cited by 53 (11 self) Add to MetaCart We investigate the computational power of a formal model for networks of spiking neurons. It is shown that simple operations on phasedifferences between spike-trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We construct networks of spiking neurons that simulate arbitrary threshold circuits, Turing machines, and a certain type of random access machines with real valued inputs. We also show that relatively weak basic assumptions about the response- and threshold-functions of the spiking neurons are sufficient in order to employ them for such computations. 1 Introduction and Basic Definitions There exists substantial evidence that timing phenomena such as temporal differences between spikes and frequencies of oscillating subsystems are integral parts of various information processing mechanisms in biological neural systems (for a survey and references see e.g. Kandel et al., ... "... Present and permanent address: Physik-Department der TU Munchen Exploiting local stability we show what neuronal characteristics are essential to ensure that coherent oscillations are asymptotically stable in a spatially homogeneous network of spiking neurons. Under standard conditions, a necessa ..." Cited by 46 (10 self) Add to MetaCart Present and permanent address: Physik-Department der TU Munchen Exploiting local stability we show what neuronal characteristics are essential to ensure that coherent oscillations are asymptotically stable in a spatially homogeneous network of spiking neurons. Under standard conditions, a necessary and in the limit of a large number of interacting neighbors also sufficient condition is that the postsynaptic potential is increasing in time as the neurons fire. If the postsynaptic potential is decreasing, oscillations are bound to be unstable. This is a kind of locking theorem and boils down to a subtle interplay of axonal delays, postsynaptic potentials, and refractory behavior. The theorem also allows for mixtures of excitatory and inhibitory interactions. On the basis of the locking theorem we present a simple geometric method to verify existence and local stability of a coherent oscillation. 2 1 - Network: Computation in Neural Systems , 1997 "... A theoretical model for analog computation in networks of spiking neurons with temporal coding is introduced and tested through simulations in GENESIS. It turns out that the use of multiple synapses yields very noise robust mechanisms for analog computations via the timing of single spikes in networ ..." Cited by 31 (2 self) Add to MetaCart A theoretical model for analog computation in networks of spiking neurons with temporal coding is introduced and tested through simulations in GENESIS. It turns out that the use of multiple synapses yields very noise robust mechanisms for analog computations via the timing of single spikes in networks of detailed compartmental neuron models. One arrives in this way at a method for emulating arbitrary Hopfield nets with spiking neurons in temporal coding, yielding new models for associative recall of spatio-temporal firing patterns. We also show that it suffices to store these patterns in the efficacies of excitatory synapses. A corresponding layered architecture yields a refinement of the synfire-chain model that can assume a fairly large set of different stable firing patterns for different inputs. - Neural Networks , 2001 "... Scene analysis in the mammalian visual system, conceived as a distributed and parallel process, faces the so-called binding problem. As a possible solution, the temporal correlation hypothesis has been suggested and implemented in phase-coding models. ..." Cited by 23 (9 self) Add to MetaCart Scene analysis in the mammalian visual system, conceived as a distributed and parallel process, faces the so-called binding problem. As a possible solution, the temporal correlation hypothesis has been suggested and implemented in phase-coding models. , 1998 "... How does a neuron vary its mean output firing rate if the input changes from random to oscillatory coherent but noisy activity? What are the critical parameters of the neuronal dynamics and input statistics? To answer these questions, we investigate the coincidence-detection properties of an integra ..." Cited by 19 (6 self) Add to MetaCart How does a neuron vary its mean output firing rate if the input changes from random to oscillatory coherent but noisy activity? What are the critical parameters of the neuronal dynamics and input statistics? To answer these questions, we investigate the coincidence-detection properties of an integrate-and-fire neuron. We derive an expression indicating how coincidence detection depends on neuronal parameters. Specifically, we show how coincidence detection depends on the shape of the postsynaptic response function, the number of synapses, and the input statistics, and we demonstrate that there is an optimal threshold. Our considerations can be used to predict from neuronal parameters whether and to what extent a neuron can act as a coincidence detector and thus can convert a temporal code into a rate code. - Advances in Neural Information Processing Systems , 1995 "... 2 Abstract We investigate the computational power of a formal model for networks of spiking neurons. It is shown that simple operations on phase-differences between spike-trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a sma ..." Cited by 18 (7 self) Add to MetaCart 2 Abstract We investigate the computational power of a formal model for networks of spiking neurons. It is shown that simple operations on phase-differences between spike-trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We construct networks of spiking neurons that simulate arbitrary threshold circuits, Turing machines, and a certain type of random access machines with real valued inputs. We also show that relatively weak basic assumptions about the response- and threshold-functions of the spiking neurons are sufficient in order to employ them for such computations. Furthermore we prove upper bounds for the computational power of networks of spiking neurons with arbitrary piecewise linear responseand threshold-functions, and show that they are with regard to realtime simulations computationally equivalent to a certain type of random access machine, and to recurrent analog neural nets with piecewise linear activation functions. In addition we give corresponding results for networks of spiking neurons with a limited timing precision, and we prove upper and lower bounds for the VC-dimension and pseudo-dimension of networks of spiking neurons. 3 1 - Biological Cybernetics , 2002 "... To investigate scene segmentation in the visual system we present a model of two reciprocally connected visual areas using spiking neurons. Area P corresponds to the orientation selective subsystem of the primary visual cortex, while the central visual area C is modeled as associative memory represe ..." Cited by 18 (3 self) Add to MetaCart To investigate scene segmentation in the visual system we present a model of two reciprocally connected visual areas using spiking neurons. Area P corresponds to the orientation selective subsystem of the primary visual cortex, while the central visual area C is modeled as associative memory representing stimulus objects according to Hebbian learning. Without feedback from area C, a single stimulus results in relatively slow and irregular activity, synchronized only for neighboring patches (slow state), while in the complete model activity is faster with enlarged synchronization range (fast state). Presenting a superposition of several stimulus objects, scene segmentation happens on a time scale of hundreds of milliseconds by alternating epochs of the slow and fast state, where neurons representing the same object are simultaneously in the fast state. Correlation analysis reveals synchronization on different time scales as found in experiments (T,C,H peaks). On the fast time scale (T peaks, gamma frequency range), recordings from two sites coding either different or the same object lead to correlograms that are either at or exhibit oscillatory modulations with a central peak. This is in agreement with experimental findings while standard phase coding models would predict shifted peaks in the case of different objects. , 1997 "... This paper discusses the relation of theory and experiment in neuroscience exemplified by three assumptions often made in models of coherent activation in the cortex: basic feature-coding oscillators, phase-coding and global binding of whole objects. Apparently these assumptions are not very well s ..." Cited by 16 (14 self) Add to MetaCart This paper discusses the relation of theory and experiment in neuroscience exemplified by three assumptions often made in models of coherent activation in the cortex: basic feature-coding oscillators, phase-coding and global binding of whole objects. Apparently these assumptions are not very well supported by the experimental evidence. We propose that it is the single synchronized population-burst that matters: spikes of feature-coding cells are temporally clustered in our opinion by recurrent associative processes. In each burst a single stimulus is processed (if there are several). Synchronization is restricted to cortical sites which physically interact. These principles are illustrated by computer simulations.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=223116","timestamp":"2014-04-16T21:49:23Z","content_type":null,"content_length":"40098","record_id":"<urn:uuid:b88a627a-6fa0-4365-b33b-28ec8e36573b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
VCE Further Maths Nov 02 Good luck today all further maths students!! Just remember to keep calm (deep breath!), make sure you read the questions carefully and answer as many questions as you can (don’t spend too long on any one question if you’re finding that one hard; the next question is worth just as many marks and might be easy. Circle it and come back later!) You can doooo iiiiitt!! First tutorial in a long time! Gearing up to the 2012 exams, I though I’d rustle one up for old times’ sake. I’ve had a number of requests for more tutes on the Geometry and Trigonometry module, so here it is! May I present to you… The Sine Rule. Congrats to all those students who completed Units 3 anf 4 of VCE Further maths in 2011! Exams are done, results are out, and many of you are looking ahead to your next challenge. For those of you undertaking the subject in 2012, welcome on board and I hope you find this site helpful. Most of the tutes created in 2011 were on the Core (Data Analysis) topic since everyone benefits from that. Check out the List of Tutorials so see all the Core tutorials available. Of the other topics covered in the syllabus, there are 6 possible modules to choose from and each student only selects 3 of those, so video tutes on these other modules are a bit sparse at the moment. Hopefully I’ll get some time over the summer to create a few tutes on these other topics – sign up for the newsletter if you’d like updates when these are uploaded. Congrats, Happy Christmas, Happy new year, and happy revision studying over summer newbies!
{"url":"http://www.vcefurthermaths.com/","timestamp":"2014-04-20T06:12:46Z","content_type":null,"content_length":"24482","record_id":"<urn:uuid:0a47f79b-363b-47b2-961a-57d203722008>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
LJK-Statistics Seminar On Thursday May 6 2010 at 14h00 in Room 1 - IRMA Tower Seminary of Eric MATNER-LOBER (Université de Rennes 2) Réduction itérée du biais pour des lisseurs multivariés We present a general procedure for nonparametric multivariate regression smoothers that outperforms existing procedures such as MARS, additive models, projection pursuit or $L_2$ additive boosting on both real and simulated datasets. In multivariate nonparametric analysis, sparseness of the covariates also called curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoother. We still propose to use classical nonparametric linear smoother, such as thin plate splines or kernel smoothers, but instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting (base) smoother has a small variance but a substantial bias. Afterward, we propose to iteratively correct the biased initial estimator by an estimate of the bias obtained by smoothing the residuals. In univariate settings, we relate our procedure to $L_2$-Boosting. Rules for selecting the optimal number of iterations are also proposed and, based on empirical evidence, we propose one stopping rule. In the regression framework, when the unknown regression function m belongs to the Sobolev space H(n) of order n, we show that using a thin plate splines base smoother and the proposed stopping rule leads to an estimate ^m which converge to the unknown function m. Moreover, our procedure is adaptive with respect to the unknown order n and converge at the minimax rate. We apply our method to both simulated and real data and show that our method compares favourably with existing procedures such as MARS, additive models, L2 boosting or projection pursuit, regression trees, with improvement on mean squared error up to 30%. A R package is available at http://
{"url":"http://www-ljk.imag.fr/Seminars/1272895319697_en.html","timestamp":"2014-04-18T05:30:42Z","content_type":null,"content_length":"8941","record_id":"<urn:uuid:789f35fb-118e-4f88-a4d6-092713aec30d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Symmetry Group Proof January 16th 2012, 07:45 AM Symmetry Group Proof Let $\alpha$ be an m-cycle permutation. Prove that $\alpha^j$ is m-cycle if and only if $gcd(m,j) = 1$. I have an idea for both ways but I am getting stuck on concluding either case. If I assume the gcd is 1 then I am able to reduce down to $(\alpha^j)^s = \alpha$ where s satisfies: $sj+tm=1$. But I struggle concluding that $\alpha^j$ is m-cycle. If I assume $\alpha^j$ is m-cycle and that $k|m \and\ k|j$. If I then raise $\alpha^k$ I should be able to expand the product with respect to k and conclude that k must be 1. But I fail to see a systematic way to write out $\alpha^k$.
{"url":"http://mathhelpforum.com/advanced-algebra/195405-symmetry-group-proof-print.html","timestamp":"2014-04-18T22:13:25Z","content_type":null,"content_length":"5279","record_id":"<urn:uuid:7d1413df-4ae9-472c-8f57-7c03921a463d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
How do I optimize over (or take derivative wrt) a square diagonal matrix? up vote 2 down vote favorite Hello. I'd like to solve the following optimization problem. $P_i$ is a 6x6 matrix $X$, $Y$ is a 6xk matrix $w_i$ is a kx1 vector $diag(w_i)$ is a square diagonal matrix with diagonal entries equal to $w_i$ $\min_{w_i} ~ ||P_i - X diag(w_i) Y^T||_F^2$ So the question is how to optimize over $diag(w_i)$. Does anyone know how to take derivative wrt a diagonal matrix? Or would it work if treat $diag(w_i)$ as a square matrix, solve it, and then set off-diagonal entries to zeros? oc.optimization-control na.numerical-analysis convexity You can certainly take derivatives with respect to matrix parameters, just using the usual multivariable calculus approach. However, it's not clear to me that this is the best way to approach your problem. – Yemon Choi Sep 10 '11 at 1:14 Also: what values of $k$ are you (most) interested in? – Yemon Choi Sep 10 '11 at 1:16 Lastly for now: are all vectors, matrices etc. real-valued here? – Yemon Choi Sep 10 '11 at 1:16 add comment 1 Answer active oldest votes Your notation is somewhat confusing, in that you apply the subscript $i$ to $w$, and have a vector $w_{i}$, but don't use $i$ in any meaningful way in your problem. I'm going to take the liberty of rewriting the problem as $\min_{w} \| P-X \mbox{diag}(w) Y^{T} \|_{F} $. You may have a whole bunch of these problems to solve as $i$ varies over some index set, but each can be solved separately. This is a linear least squares problem in disguise. The key to seeing this is to recognize that the Frobenius norm of a matrix $Z$ is the two norm of the vector $\mbox{vec}(Z)$ obtained from the matrix $Z$ by stacking the columns of $Z$ one on top of another. Also note that $X \mbox{diag}(w) Y^{T}=\sum_{j=1}^{k} w_{j} X_{j}Y_{j}^{T}$ where $X_{j}$ is the $j$th column of $X$, and $Y_{j}$ is the $j$th column of $Y$. Now, your problem can be written as up vote 2 down vote $\min_{w} \| P- \sum_{j=1}^{k} w_{j} X_{j}Y_{j}^{T} \|_{F}$. Let $H_{j}=X_{j}Y_{j}^{T}$, for $j=1, 2, \ldots, k$. We now have $\min_{w} \| P - \sum_{j=1}^{k} w_{j} H_{j} \|_{F}. $ Transforming this into vector form, this becomes $\min_{w} \| \mbox{vec}(P) - \sum_{j=1}^{k} w_{j} \mbox{vec}(H_{j}) \|_{2}$. Let $A$ be the matrix whose columns are given by Then the optimization problem can be written as $\min_{w} \| \mbox{vec}(P) - Aw \|_{2} $. which is a conventional linear least squares problem. Wow I admire your intuition. At the moment I asked the question I had no idea how to approached the problem but now it turns out to be one of the easiest problems in Linear Algebra. Thank you very much. – Jackson Sep 10 '11 at 8:29 I'd argue that this wasn't so much a matter of intuition as knowing some tricks that are frequently useful in convex optimization. I was very familiar with the idea of using $\mbox {vec}()$ to convert the Frobenius norm of a matrix into the 2-norm of a vector and with the idea of writing the the matrix triple product with a diagonal matrix in the middle as a sum of outer products. If you'd like to learn more of this, I'd strongly encourage you to read the textbook "Convex Optimization" by Vandenberghe and Boyd. – Brian Borchers Sep 10 '11 at add comment Not the answer you're looking for? Browse other questions tagged oc.optimization-control na.numerical-analysis convexity or ask your own question.
{"url":"http://mathoverflow.net/questions/75051/how-do-i-optimize-over-or-take-derivative-wrt-a-square-diagonal-matrix?answertab=oldest","timestamp":"2014-04-19T02:18:22Z","content_type":null,"content_length":"59193","record_id":"<urn:uuid:e18da993-5ff4-42b1-8a8f-03c97aae91ab>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Optimizing reduction loops (sum(), prod(), et al.) [Numpy-discussion] Optimizing reduction loops (sum(), prod(), et al.) Pauli Virtanen pav@iki... Mon Jul 13 03:00:41 CDT 2009 Wed, 08 Jul 2009 22:16:22 +0000, Pauli Virtanen kirjoitti: > On an older CPU (slower, smaller cache), the situation is slightly > different: > http://www.iki.fi/pav/tmp/athlon.png > http://www.iki.fi/pav/tmp/athlon.txt > On average, it's still an improvement in many cases. However, now there > are more regressions. The significant ones (factor of 1/2) are N-D > arrays where the reduction runs over an axis with a small number of > elements. Part of this seemed (thanks, Valgrind!) to be because of L2 cache misses, which came from forgetting to evaluate also the first reduction iteration in blocks. Fixed -- the regressions are now less severe (most are ~0.8), although for this machine there are still some... Pauli Virtanen More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-July/043943.html","timestamp":"2014-04-17T10:54:44Z","content_type":null,"content_length":"3676","record_id":"<urn:uuid:f262f7f3-55e3-4ef3-a34f-1d815e41ffe1>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
More Dicey Decisions Copyright © University of Cambridge. All rights reserved. 'More Dicey Decisions' printed from http://nrich.maths.org/ In the problem Dicey Decisions , we encouraged you to consider the possible edge totals by adding up the numbers that meet on the different edges of a six-sided die. If you haven't already done this, why not try now? Imagine that instead of a six-sided die we had a dodecahedron numbered 1-12. There are different ways to arrange the numbers from 1-12. A standard six-sided die has opposite faces that sum to 7, so perhaps our dodecahedral die should have opposite faces that sum to 13. Can you create a net for a dodecahedral die whose opposite faces sum to 13? For the six-sided die, the edge totals were distributed like this: │Edge total │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ 9 │ 10│ 11│ │Frequency │1 │1 │2 │2 │0 │2 │2 │1 │1 │ The mean edge total is 7, and the edge totals are distributed symmetrically about the mean. What is the mean edge total for your dodecahedral die? Are the edge totals distributed symmetrically? Ignoring rotations and reflections, there is only one way to number a cube to create a six-sided die with the constraint that opposite faces sum to 7, but there are multiple ways to create a dodecahedral die with opposite faces that sum to 13. Can you make any general statements about which dodecahedral dice will have edge totals with a symmetric distribution? Can you prove your statements? For the six-sided die, the corner totals were also distributed symmetrically. Will the same be true for the corner totals of a dodecahedral die? Now use your insights to make and justify some statements about the edge and corner totals of an icosahedral (20-sided) die with opposite faces that sum to 21.
{"url":"http://nrich.maths.org/7394/index?nomenu=1","timestamp":"2014-04-18T01:12:01Z","content_type":null,"content_length":"5764","record_id":"<urn:uuid:13c5ff23-c99a-4bab-965f-6fcb5060dffd>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/erinweeks/medals","timestamp":"2014-04-19T10:20:04Z","content_type":null,"content_length":"104378","record_id":"<urn:uuid:48200eff-f2e3-41ff-8dde-fff0bc1909be>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
against a constant external pressure .. Number of results: 31,238 Consider a mixture of air and gasoline vapor in a cylinder with a piston. The original volume is 70. cm3. If the combustion of this mixture releases 885 J of energy, to what volume will the gases expand against a constant pressure of 645 torr if all the energy of combustion is... Monday, October 22, 2012 at 8:29pm by Brandon Chemical Engineering The pressure is constant, and so the heat at constant pressure is qp = - dH vaporization. W=-pdV and so since p=pext=1atm and dV = (vf - vi) = vliquid - vvapor, we'll say that since the volume of the vapor is significantly greater than that of the liquid, then vf-vi is ... Thursday, February 5, 2009 at 12:01am by Alex A 24L sample of a gas at fixed mass and constant temperature exerts a pressure of 3.0 atm. What pressure will the gas exert if the volume is changed to 16L? Tuesday, March 16, 2010 at 10:24pm by catherine Some air at 275 kPa absolute pressure occupies 50.0m^3. Find its absolute pressure if its volume is doubled at constant temperature. Wednesday, April 11, 2012 at 10:46am by Madison Some air at 275 kPa absolute pressure occupies 50.0m^3. Find its absolute pressure if its volume is doubled at constant temperature. Wednesday, April 11, 2012 at 12:56pm by Ricky A sample of oxygen occupies 1.00 L. If the temperature remains constant and the pressure on oxygen is decreased to one-third the orginal pressure, what is the new volume? Monday, November 26, 2012 at 3:07pm by Anonymous Assuming that no heat enters or leaves the gas, and that it expands by pushing against an external gas, or a moving wall or piston, then there is no entropy change in any case. If there is a "free expansion" into a larger volume with no heat addition, and no work done against ... Monday, January 23, 2012 at 3:33am by drwls College Chemistry Consider a mixture of air and gasoline vapor in a cylinder with a piston. The original volume is 50. cm3. If the combustion of this mixture releases 955 J of energy, to what volume will the gases expand against a constant pressure of 675 torr if all the energy of combustion is... Monday, October 31, 2011 at 3:16pm by Amanda A spring (spring 1) with a spring constant of 520N/m is attached to a wall and connected to another weaker spring (spring 2) with a spring constant of 270N/m on a horizontal surface. Then an external force of 70N is applied to the end of the weaker spring (#2). how much ... Saturday, June 29, 2013 at 12:36pm by victoria Boyle's law states that if the temperature of a gas remains constant, then PV = c. Where P is the pressure, V is the volume, and c is a constant. Given a quantity of gas at constant temperature, if V is decreasing at a rate of 13 in.^3/s, at what rate is P increasing when P = ... Sunday, May 2, 2010 at 11:38pm by Hussain Is this Boyle's law? P1V1=P2V2 or P2V2=P1V1=constant So figure out the constant (P1V1) then P2=constant/V2=3.6/1.5 Now that is the actual pressure. Your question asks what will be change P. I dont know what that means. Change is normally P2-P1, but for the life of me I dont ... Sunday, January 16, 2011 at 10:17am by bobpursley Pressure is force distributed over an area. It is measured in force per area. In the case of pressure, the force is a push against the area, not a pull. Tuesday, January 8, 2008 at 7:55pm by drwls i keep trying to do these problems and i don't understand them. please help me!! 1. Gas stored in a tank at 273 K has a pressure of 388 kPa. The save limit for the pressure is 825 kPa. At what temperature will the gas reach this pressure? A. 850 K B. 925 K C. 580 K D. 273 K 2... Friday, October 10, 2008 at 9:13am by Please Help! When solid CO2 (dry ice) is allowed to come to equilibrium in a closed constant volume container at room temperature (300K), 1. the pressure rises until it reaches 1 atm 2. the pressure rises until a liquid-gas equilibrium is reached. 3. the pressure does not change. 4. the ... Monday, January 31, 2011 at 12:58am by Douglas a 10.0 cm cylindrical chamber has a 5.0cm diameter piston attached to one end. The piston is connected to an ideal spring with a spring constant of 10.0N/cm. Initially, the spring is not compressed but is latched in place so that it cannot move. When the cylinder is filled ... Thursday, November 22, 2012 at 9:31am by Jenny external: outside the particle. If your briefcase is lifted, and external force is the lifting agent. Tuesday, April 20, 2010 at 3:05am by bobpursley Resonance in a system, such as a string fixed at both ends, occurs when a) it is oscillating in simpler harmonic motion b) its frequency is the same as the frequency of an external source c) its frequency is greater than the frequency of an external source d) its frequency is ... Sunday, December 6, 2009 at 6:14pm by Satej Consider a mixture of air and gasoline vapor in a cylinder with a piston. The original volume is 30. cm3. If the combustion of this mixture releases 980. J of energy, to what volume will the gases expand against a constant pressure of 640. torr if all the energy of combustion ... Thursday, November 18, 2010 at 3:45pm by Mattie (a) V = (Volume flow rate)/(faucet area) =25*10^3 cm^3/(30 s* pi cm^2)= 265 cm/s = 2.7 m/s Correct (b) The pressure is ambient (Po) at the faucet exit, where the water is flowing. Use this form of the Bernolli equation: P + (1/2) rho V^2 + rho*g y = constant Let Po be ambient ... Wednesday, December 19, 2007 at 12:15am by drwls If a gas is cooled from 323.0 K to 273.15 K and the volume is kept constant, what final pressure would result if the original pressure was 750.0 mm Hg? Round to the nearest tenth. Don't forget the Friday, May 10, 2013 at 9:02am by nene chem :( If a gas is cooled from 323.0 K to 273.15 K and the volume is kept constant, what final pressure would result if the original pressure was 750.0 mm Hg? Round to the nearest tenth. Don't forget the Friday, May 10, 2013 at 9:06am by nene A sample of Argon gas at standard pressure occupies 1000 mL. At constant temperature, what volume does the gas occupy if the pressure increases to 800 mm Hg? Sunday, November 13, 2011 at 9:10pm by Jessie This graph represents the relationship of the pressure and volume of a given mass of a gas at constant temperature. When the pressure equals 8 millimeters of mercury (mmHg), what is the volume, in milliliters (mL)? Wednesday, February 8, 2012 at 10:21pm by Anonymous Chem II A sample of gas in a 25.0-L containter exerts a pressure of 3.20 atm. Calculate the pressure exerted by the gas if the volume is changed to 45.0-L at constant temperature. Friday, October 12, 2012 at 4:09pm by Layney Chemistry 3A a sample of nitrogen gas, N2 occupies 3.0L at a pressure of 3.0 atm. what volume will it occupy when the pressure is changed to 0.50 atm and the temperature remains constant? Monday, December 3, 2012 at 12:10am by Gurjot A sample of argon has a pressure of 1.021 atm at a temperature of 89.4 degC. Assuming that the volume is constant, at what temperature will the gas have a pressure of 1016.56 mmHg Sunday, February 9, 2014 at 8:04pm by Anonymous A sample of argon has a pressure of 1.021 atm at a temperature of 89.4 degC. Assuming that the volume is constant, at what temperature will the gas have a pressure of 1016.56 mmHg Sunday, February 9, 2014 at 8:04pm by Anonymous The 3 atm (partial pressure) of H2 will combine with 1.5 atm (partial pressure) of O2 to produce 3 atm (partial pressure) of H2O, with 2.5 atm (partial pressure) of O2 left over. Thus 7 atm of reactants produces 5.5 atm of products + unreacted O2. I am assuming that the inital... Friday, November 16, 2007 at 1:29pm by drwls Physical Chemistry Calculate DS if the temperature of 3 moles of an ideal gas with Cv,m = 1.5R is increased from 200 K to 500 K under conditions of (a) constant pressure and (b) constant volume. Friday, November 30, 2012 at 8:43pm by Lawrence If those are the external measures of the box, it will fit in neither. At 6.5 inches (external) the width (internal) will < 6.5. Monday, January 14, 2013 at 10:11pm by PsyDAG chemistry help plz Le chatelier's principle? co(g)+cl2(g)=cocl2(g)is reversible and ,after a certain amount of time ,will reach equilibrium.Explain,using Le chatelier's principle,what effect: 1)increasing the pressure (at constant temperature) 2)increasing the temperature (at constant pressure) ... Saturday, January 5, 2013 at 4:49pm by centya Chemistry 105 A sample of nitrogen gas in a 1.86-L container exerts a pressure of 1.32 atm at 20 C. What is the pressure if the volume of the container is maintained constant and the temperature is raised to 354 Sunday, October 23, 2011 at 5:30pm by JULIE To calculate the air pressure, the volume occupied by the air is assumed constant. Why is this assumption incorrect? Explain how the vapor pressure calculated and the resulting Clausius-Clapeyron plot are affected? Tuesday, October 16, 2012 at 12:03am by Shardae. To calculate the air pressure, the volume occupied by the air is assumed constant. Why is this assumption incorrect? Explain how the vapor pressure calculated and the resulting Clausius-Clapeyron plot are affected? Tuesday, October 16, 2012 at 12:12am by Shardae. Consider the following reaction: 2Na + Cl2 2NaCl ΔH = -821.8 kJ (a) Is the reaction exothermic or endothermic? (b) Calculate the amount of heat transferred when 5.6 g of Na reacts at constant pressure. (c) How many grams of NaCl are produced during an enthalpy ... Tuesday, August 3, 2010 at 11:17pm by Joseph AP Chem For the following equilibrium system, which of the following changes will form more CaCO3? CO2(g) + Ca(OH)2(s) <--> CaCO3(s) + H2O(l) deltaH(rxn) = -113 kJ My choices are: a) Decrease temperature at a constant pressure (no phase change) B) Increase volume at a constant ... Tuesday, February 26, 2013 at 1:31pm by Lisa what if the question doesnt say that the temperature and pressure stay constant. for example..3.0L of a gas has a pressure of 12.0 atm. What is the new pressure if the gas is expanded to 17.0L?....would you assume that the temperature would stay the same or is there another ... Thursday, February 4, 2010 at 9:15pm by lelia 11.31) solve for the new pressure when each of the following temperature changes occurs, with n and V constant: a. a gas sample has a pressure of 1200 torr at 155 celcius. what is the final pressure of the gas after the temperature has droped to 0 celcius? 1200 x12 celcius/0 ... Sunday, June 27, 2010 at 12:21am by Jetta "Under constant conditions of P(pressure) and T(temperature) (and R [gas constant]), write an equation showing d(density) written as a function of M(molecular mass)." I don't understand this, really! Please help! Thanks in advance! Sunday, January 13, 2008 at 9:13pm by TAG Gen Chem 1 We want to change the volume of a fixed amount of gas from 745mL to 2.30L while holding the temperature constant. To what value must we change the pressure if the initial pressure is 102 kPa? Tuesday, October 26, 2010 at 2:08pm by Kay How would applying an external pressure on the following equilibrium affects the distribution of iodide between the polar (aqueous) and non polar (varsol) Sunday, March 2, 2014 at 12:29pm by Sarah How would applying an external pressure on the following equilibrium affects the distribution of iodide between the polar (aqueous) and non polar (varsol) Monday, March 3, 2014 at 7:56pm by Scarlett Which statement about Kc is wrong? A. Kc is always a constant B. For some reactions, Kc increases with temperature increase C. For some reactions, Kc decreases with temperature decrease D. For some reactions, Kc changes when pressure is changed A is incorrect. Kc stays ... Wednesday, January 31, 2007 at 12:40pm by Joe You fill your tires with air on a cold morning (-5degreesC) to 220-kPa gauge pressure, then drive into a 32degreesC desert. (a) Assuming the volume of air in the tires remains constant, what's the new gauge pressure? (b) what would be the gauge pressure if the volume of air ... Tuesday, January 24, 2012 at 10:18pm by Haylee According to Boyle's law, the volume V of a gas at a constant temperature varies inversely with the pressure P. When the volume of a certain gas is 125 cubic meters, the pressure is 20 psi (pounds per square inch). If the volume of gas is increased to 400 cubic meters, the ... Friday, May 25, 2012 at 3:11am by Pat The Boyle's Law states that PV/T = k where k is a constant so, assuming a constant pressure V/T does not change 2.2/(291.15) = V/(311.15) V = 2.35 L Friday, December 2, 2011 at 11:39am by Steve At 850 C, the equilibrium constant Kp for the reaction: C(s)+CO2(g) >< 2CO(g) has a value of 10.7. If the total pressure in the system at equilibrium is 1.000 atm, what is the partial pressure of carbon monoxide. Wednesday, April 22, 2009 at 2:01pm by Noemi A sample of air occupies 3.8 L when the pressure is 1.2 atm. (a) What volume does it occupy at 7.3 atm? (b) What pressure is required in order to compress it to 0.050 L? (The temperature is kept constant.) I already know the answers to each question, but do not understand how ... Monday, March 14, 2011 at 4:54pm by Alex Hey i have an idea how to do this but am not sure could you please help and show work. 12 moles of a gas sample at an initial temperature of 12 degrees celcius at a pressure of 200 Pa is allowed to expand isothermally and reversibly against a pressure of 50 kPa. Calculate the ... Friday, January 18, 2013 at 4:11am by Dermot PRessure is measured in Pascals (Pa). What combination of units is the same as a Pascal? and When you exert a force on a fluid in a closed container, does the pressure increase, decrease or remain constant? I think it will increase, but I'm not totally sure. Monday, May 30, 2011 at 8:00pm by Erika What is the total force on the bottom of a swimming pool 22.0 m by 8.9 m whose uniform depth is 1.6 m? What is the absolute pressure on the bottom of a swimming pool? What will be the pressure against the side of the pool near the bottom? Tuesday, November 16, 2010 at 9:46pm by Cory chemistry 106 Calculate the work done if a gas is compressed from 8.3 L to 3.3 L at constant temperature by a constant pressure of 3.0 atm, and give your answer in units of joules. (1 L · atm = 101.3 J) Wednesday, March 3, 2010 at 6:33pm by Tanisha What you are calling the "liquid pressure" must be the pressure due to liquid above, which is also called the gauge pressure. It equals (density)*g*(depth) = 1000 kg/m^3*9.8*4.0 m = 3.92*10^4 Pa = 39.2 kPa Add atmospheric pressure to that for the absolute pressure, 140.2 kPa. Friday, March 30, 2012 at 8:39am by drwls a has been answered by Bob Pursley below. Keq is the equilibrium constant. If you make a distinction, Kc is the concentration equilibrium constant while Kp is the partial pressure equilibrium constant. If I see a problem where K is listed I always assume it is Kc but that ... Monday, December 2, 2013 at 3:34pm by DrBob222 If an ideal gas is allowed to expand into a vacuum, this means that the external pressure is 0. This doesn't affect the internal pressure though, correct? For example: I have a problem in which one mole of an ideal gas at 300. K and at a volume of 10.0 L expands isothermally ... Tuesday, March 6, 2007 at 6:16pm by Chris Chemsitry II The partial pressure of CH4(g) is 0.185 atm and that of O2(g) is 0.300 atm in a mixture of the two gases. a) What is the mole fraction of each gas in the mixture? b) If the mixture occupies a volume of 11.5 L at 65 degress C, calculate the total number of moles of gas in the ... Thursday, May 10, 2007 at 7:13pm by Jayd The dry gas pressure is pressure inside the tube - watervapor pressure. The pressure has nothing to do with the volume collected, unless you are supporting a column of water, then you have to adjust barometric pressure for the height of the column, it does not appear you had ... Wednesday, September 26, 2007 at 1:21pm by bobpursley compressor operations We are operating a centrifugal compressor. The gas composition does not change during this excursion. While holding suction pressure and flowrate constant, the inlet temperature rises. What would you expect to happen to the discharge pressure, and why Wednesday, January 29, 2014 at 11:12pm by Tony Benzene has a heat of vaporization of 30.72kJ/mol and a normal boiling point of 80.1 ∘C. At what temperature does benzene boil when the external pressure is 480torr ? Monday, April 7, 2014 at 12:02am by Connor vapor and osmotic pressure A solution of the sugar mannitol ( molar mass 182.2 g/mol ) is prepared by adding 54.66 g of mannitol to 1.000 kg of water. The vapor pressure of pure liquid water is 17.54 torr at 20o C. Mannitol is nonvolatile and does not ionize in aqueous solution. a.) Assuming that ... Thursday, October 12, 2006 at 3:03am by Amy The pressure of a gas is 35 atm at a temperature of 48 (degrees) C with constant volume. What will the new pressure be if the temperature is increased to 36 (degrees) C? Friday, July 23, 2010 at 12:34am by bob The pressure of a gas is 35 atmospheres at a temperature of 48 (degrees) C with constant volume. What will the new pressure be if the temperature is increased to 36 (degrees) C? Wednesday, May 9, 2012 at 5:55pm by bob a gas has a volume of 95 mL at a pressure of 930 torr. What volume will the gas occupy at standard temperature if pressure is held constant? Sunday, January 26, 2014 at 3:48pm by Mikhail The volume of a gas is 250 mL at 340.0 kPa pressure. What will the volume be when the pressure is reduced to 50.0 kPa, assuming the temperature remains constant? Tuesday, July 29, 2008 at 2:11pm by natasha The volume of a gas is 250 mL at 340.0 kPa pressure. What will the volume be when the pressure is reduced to 50.0 kPa, assuming the temperature remains constant? Monday, August 4, 2008 at 11:21am by kayla The volume of a gas is 250 mL at 340.0 kPa pressure. What will the volume be when the pressure is reduced to 50.0 kPa, assuming the temperature remains constant? Tuesday, August 12, 2008 at 4:54pm by natasha Chemistry 105 A sample of gas has an initial volume of 33.0 L at a pressure of 1.1 atm. If the sample is compressed to a volume of 10.0 L, what will its pressure be? (Assume constant temperature.) Sunday, October 23, 2011 at 5:24pm by JULIE At 119 degrees C, the pressure of a sample of nitrogen is 1.94 atm. What will the pressure be at 282 degrees C, assuming constant volume? Answer in units of atm Thursday, April 19, 2012 at 12:01am by luke The volume of a gas is 250 mL at 340.0 kPa pressure. What will the volume be when the pressure is reduced to 50.0 kPa, assuming the temperature remains constant? Sunday, May 6, 2012 at 10:48pm by George the volume of a gas is 250 mL at 350 kPa pressure. what will the volume be when the pressure is reduced to 50 kPa, assuming the temperature remains constant? Wednesday, May 30, 2012 at 12:41pm by jose t 103 ◦ C, the pressure of a sample of nitrogen is 1 . 89 atm. What will the pressure be at 275 ◦ C, assuming constant volume? Answer in units of atm Thursday, April 4, 2013 at 8:44pm by Kyle At 103◦C, the pressure of a sample of nitrogen is 1.89 atm. What will the pressure be at 275◦C, assuming constant volume? Answer in units of atm Thursday, April 4, 2013 at 8:49pm by Kyle chem word problem .If a gas is cooled from 323.0 K to 273.15 K and the volume is kept constant, what final pressure would result if the original pressure was 750.0 mm Hg? Round to the nearest tenth. Don't forget the units. what should i plug in? Thursday, May 23, 2013 at 9:46am by ray The volume of a gas is 250 mL at 340.0 kPa pressure. With the temperature remaining constant, what will the volume be when the pressure is reduced to 50.0 kPa? Don't forget the units. Monday, June 11, 2012 at 5:08pm by dimo an ideal gas in a sealed container has an initial volume of 2.70L. at constant pressure, it is cooled to 25.00 C where its final volume is 1.75L. what was the initial temperature? Wednesday, September 18, 2013 at 6:13pm by Anonymous The gas in a cylinder with a piston has a volume of 2.4 m3 and a pressure of 120 kpa/ keeping the tempture constant the pressure is changed to 240 kpa. what is the new volume? Thursday, October 3, 2013 at 7:33pm by Trisha G. A sample of gas has an initial volume of 32.6L at a pressure of 1.2atm.If the sample is compressed to a volume of 13.4L , what will its pressure be? (Assume constant temperature.) Monday, October 7, 2013 at 4:11pm by kristine A 120. mL sample of a gas is at a pressure of 1.50 atm. If the temperature remains constant, what will be its volume at 3.50 atm of pressure? Tuesday, November 30, 2010 at 7:45pm by Kiesha A 120. mL sample of a gas is at a pressure of 1.50 atm. If the temperature remains constant, what will be its volume at 3.50 atm of pressure? Monday, December 6, 2010 at 5:26pm by Ivy Physics help!! A crate is pulled to the right with a force of 80 N, to the left with a force of 125.8 N, upward with a force of 615.4 N, and downward with a force of 248 N. Net external force in the x direction: -45.8 Net external force in the y direction: 367.4 N how do i find the magnitude... Friday, November 11, 2011 at 1:29am by Elle The relationship between volume and pressure (at constant temperature) is directly proportional. my answer- True The relationship between pressure and temperature (at constant volume) is inversely proportional. my answer- True The relationship between volume and temperature (... Sunday, October 17, 2010 at 9:44pm by chris One mole of oxygen gas is at a pressure of 5.10 atm and a temperature of 32.0°C. (a) If the gas is heated at constant volume until the pressure triples, what is the final temperature? Wednesday, February 24, 2010 at 1:28am by Anonymous At a temperature of 30.°C, a gas inside a 1.50 L metal canister has a pressure of 760 torr. If the temperature is increased to 55°C (at constant volume), what is the new pressure of the gas? Tuesday, February 21, 2012 at 3:46pm by Kyle2 At a temperature of 57oC, a gas inside a 5.30 L metal canister has a pressure of 4000 torr. If the temperature is decreased to 13oC (at constant volume), what is the new pressure of the gas? Wednesday, February 26, 2014 at 3:03pm by Chelsey Use the equation Pfluid=Density x height x gravity (1000)(1.9)(9.8)=18,620 The TOTAL absolute pressure on the bottom and against the sides of the swimming pool is 18,620 + 1.01x10^5 = 119,620 Pa Use the equation Pressure = Force/Area Rearrange the equation to Force = Pressure ... Saturday, November 3, 2012 at 12:03pm by Andy Chemistry 3A the pressure of hydrogen gas in a constant volume cylinder is 4.25atm at 0 degrees celsius. what will the pressure be if the temp. is raised to 80 degress celsius? Also do we have to first convert celsius to kelvin? Tuesday, December 4, 2012 at 11:56am by Gurjot Physics (12th Grade) If it is heated at constant pressure, the volume must increase heat in = change in internal energy + work out dQ = dU + dW P V = n R T constant pressure P dV = n R dT = work out Change in internal energy = n Cv dT = n(5/2)R dT for diatomic gas total heat required = n R dT + (5... Monday, April 9, 2012 at 12:49am by Damon A quantity of gas under a pressure of 302 kPa has a volume of 600. cm3. The pressure is increased to 545 kPa, while the temperature is kept constant. What is the new volume? Thursday, March 31, 2011 at 9:44pm by Aubrey A quantity of gas under a pressure of 302 kPa has a volume of 600. cm3. The pressure is increased to 545 kPa, while the temperature is kept constant. What is the new volume? Thursday, March 31, 2011 at 9:45pm by Aubrey A 300 mL sample of hydrogen gas is at a pressure of 0.500 kPa. If the pressure increases to 0.750 kPa, what will be the final volume of the sample? Assume that the temperature stays constant Monday, February 13, 2012 at 9:08pm by Kristyn A 300 mL sample of hydrogen gas is at a pressure of 0.500 kPa. If the pressure increases to 0.750 kPa, what will be the final volume of the sample? Assume that the temperature stays constant Monday, February 13, 2012 at 9:09pm by Kristyn A sample of methane gas at room temperature has a pressure of 1.50 atm and a volumne of 10.5 L. If the temperature is kept constant, what will be the new volume if the pressure is increased to 2.00 Sunday, June 3, 2012 at 9:36pm by Mary A gas occupying a volume of 664 mL at a pressure of 0.970 atm is allowed to expand at constant temperature until its pressure reaches 0.541 atm. What is its final volume? Tuesday, May 7, 2013 at 8:11pm by Logan A gas occupying a volume of 664 mL at a pressure of 0.970 atm is allowed to expand at constant temperature until its pressure reaches 0.541 atm. What is its final volume? Tuesday, May 7, 2013 at 9:27pm by Logan At 125 degrees Celsius, the pressure of a sample of He gas is 345 mmHg. At what temperature degrees Celsius will the pressure become 690 mm Hg, if the volume remains constant? Sunday, July 28, 2013 at 4:27pm by Hanna A container of gas is at a pressure of 1.3x10^5 Pa and a volume of 6.0m^3. How much work is done by the gas if it expands at constant pressure to twice its initial volume? Please help me I really dont know what to do,,, Wednesday, December 11, 2013 at 12:00pm by cheesecake:) 4) A 10 cm cylinder chamber has a 5cm diameter piston attached to one end. The piston is connected to an ideal spring with a spring constant of 10N/cm. Initially the spring is not compressed but is latched in place so that it cannot move. The cylinder is filled with a gas to a... Thursday, December 20, 2012 at 12:04am by P Vijay Kumar Benzene has a heat of vaporization of 30.72kJ/mol and a normal boiling point of 80.1 degrees C. At what temperature does benzene boil when the external pressure is 470 torr? Sunday, September 7, 2008 at 1:34pm by Erin benzene has a heat of vaporization of 30.72kj/mol and a normal boiling point of 80.1c. at what temperature does benzene boil when the external pressure is 445 torr Wednesday, December 2, 2009 at 12:00pm by Bryan Benzene has a heat of vaporization of 30.72 kj/mol and a normal boiling point of 80.1. At what temperature does benzene boil when the external pressure is 480 torr? Sunday, July 17, 2011 at 10:37pm by Anonymous Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=against+a+constant+external+pressure+..&page=3","timestamp":"2014-04-20T02:40:41Z","content_type":null,"content_length":"40913","record_id":"<urn:uuid:928922b2-8456-4976-a91a-dd6fad0b5cdc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00312-ip-10-147-4-33.ec2.internal.warc.gz"}