content
stringlengths
86
994k
meta
stringlengths
288
619
Logistic Regression From Scratch Logistic regression is among the most famous classification algorithm. It is probably the first classifier that Data Scientists employ to establish a base model on a new project. In this article we will implement logistic regression from scratch using gradient descent. The Jupyter Notebook of this article can be found HERE. where each Xi is a vector of length k and yi is the label (either 0 or 1), the logistic regression algorithm will find a function that maps each feature vector Xi to its label. This function will specify a decision boundary which would be a line with two-dimensional data (Figure 1), a plane with three-dimensional data or an hyperplane with higher dimensional data. Figure 1. Logistic regression decision boundary Each data point laying on the decision boundary will have probability equal to 0.5. On the right side of the decision boundary the probability will be higher than 0.5 and data points will be assigned to the positive class. On the left of the decision boundary the probability is lower than 0.5 and data points will belong to the negative class. The Math - The Sigmoid Function The core of the logistic regression is the sigmoid function: Figure 2. Shape of sigmoid function zi is a negative or positive real number. The sigmoid transforms each zi to a number between 0 and 1 (Figure 2). Note that: By mapping every zi to a number between 0 and 1, the sigmoid function is perfect for obtaining a statistical interpretation of the input zi. Where Xi are the features and Wi the weights of each feature. (3) is the same formula of a linear regression model. The Math - The Loss Function We well use the sigmoid function to make predictions. To evaluate model's predictions we need an objective function. The loss function commonly used in logistic regression is the Binary Cross-Entropy Loss Function: Figure 3. Binary Cross Entropy Loss Function plot As Figure 3 depicts the binary cross entropy loss heavily penalizes predictions that are far away from the true value. The loss function (4) can be rearranged into a single formula: The Math - Model's Parameters Update The pillar of each machine learning model is reducing the value of the loss function during training. The most crucial and complicated part of this process is calculating the derivative, aka the gradient, of the loss function with respect to the model's parameters (W). Once the gradient is calculated, the model's parameters can be readily updated with gradient descent in an iteratively manner. As the model learns from the data, parameters are updated at each iteration and loss decreases. The gradient of the Binary Cross-Entropy Loss Function w.r.t. the model's parameters is: If you are interested in the mathematical derivation of (6), click HERE. Once the gradient is calculated, the model parameters are updated with gradient descent at each iteration: Logistic Regression implementation Example 1 - Non overlapping classes We will train a logistic regressor on the data depicted below (Figure 4). The two classes are disjoint and a line (decision boundary) separating the two clusters can be easily drawn between the two Figure 4. Observed data (non-overlapping classes) The zi (3) for each data point of figure 4 is given by: Therefore, the regressor model will learn the value of w0, w1 and w2. w0 is the intercept and to learn its value we need to pass a columns of ones (Xi0) to the original array: Figure 5. Addition of the X0 column We need few functions to calculate the loss, the gradient and making the predictions: Figure 6. Helper functions The vector W is instantiated with random values between 0 and 1. Next, we use a for loop to train the model: During training the loss dropped consistently, and after 750 iterations the trained model is able to accurately classify 100 percent of the training data: Figure 8. Training loss and confusion matrix Finally, let's look at how the decision boundary changed during training: Figure 9. Decision boundary at 4 different iterations Figure 9 shows that during training the decision boundary moved from the bottom left (random initialization) to between the 2 clusters. Example 2 - Overlapping classes In the previous example the two classes were so easily separable that we could draw the decision boundary on our own. In this second example, the two classes significantly overlap: Figure 10. Observed data (Overlapping classes) The trained model positioned decision boundary somewhere in between the two cluster where the loss was the smallest. Figure 11. Decision boundary at 4 different iterations In this second example the data is not linearly separable, thus the best we can aim for is highest accuracy possible (and smallest loss). The trained model has an accuracy of 93% Closing remarks In this tutorial we learned how to implement and train a logistic regressor from scratch. The two models were training for a predefined number of iterations. Another and more efficient approach is to train the model until the accuracy reached a plateau or the decrease of the loss was negligible (e.i. smaller than a predetermined threshold). Implementing these two options is pretty straight forward and encourage you to modify the training loop accordingly.
{"url":"https://www.python-unleashed.com/post/logistic-regression-from-scratch","timestamp":"2024-11-02T10:42:46Z","content_type":"text/html","content_length":"1050053","record_id":"<urn:uuid:8edff3aa-bc39-4d3f-a8c5-6e46a5daae40>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00798.warc.gz"}
High frequency waves near cusp caustics It is well known that the usual harmonic ansatz of geometrical optics fails near caustics. However, uniform expansions exist which are valid near and on the caustics, and reduce asymptotically to the usual geometric field far enough from them. In this paper, we apply the Kravtsov-Ludwig technique for computing high-frequency fields near cusp caustics. We compare these fields, with those predicted by the geometrical optics, for a couple of model problems: first, the cusp generated by the evolution of a parabolic initial front in a homogeneous medium, a problem which arises in the high-frequency treatment of cylindrical aberrations, and second, the cusp formed by refraction of the rays emitted from a point source in a stratified medium with a weak interface. It turns out that inside and near the cusp, the geometrical optics solution is significantly different than the Kravtsov-Ludwig solution, but far enough from the caustic that the two solutions are, in fact, in very good agreement. ASJC Scopus subject areas Dive into the research topics of 'High frequency waves near cusp caustics'. Together they form a unique fingerprint.
{"url":"https://academia.kaust.edu.sa/en/publications/high-frequency-waves-near-cusp-caustics","timestamp":"2024-11-13T15:16:44Z","content_type":"text/html","content_length":"52555","record_id":"<urn:uuid:99b45185-b900-449b-b247-8963f5d9a2a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00553.warc.gz"}
Boltzmann’s Work in Statistical Physics 1. Cf. the following words of E.G.D. Cohen, (a student of Uhlenbeck, who was a student of Ehrenfest who studied with Boltzmann): “The depth of ill-feelings … and the resistance to [Boltzmann's] ideas … still resonated for me when Uhlenbeck said to me one day in some mixture of anger and indignation ‘that damned Zermelo, a student of Planck's nota bene’ an echo after two generations of past injustice and pain inflicted on Boltzmann by his hostile environment.” (Cohen 1996, 5). 2. The remarkable degree of consent between Mach and Boltzmann has led one commentator (Blackmore 1982) into thinking that Boltzmann abandoned realism altogether. 3. Note that the recent thoughtful biography by Cercignani (1998) carries the subtitle: “the man who trusted atoms”, not “the man who believed in atoms”. 4. Sommerfeld writes: “The battle between Boltzmann and Ostwald resembled the battle of the bull with the supple fighter. However, this time the bull was victorious … . The arguments of Boltzmann carried the day. We, the young mathematicians of that time, were all on the side of Boltzmann …” (Höflechner 1994, I, 167). A similar opinion was voiced by Arrhenius. 5. For example, the well-known passage in Boltzmann (1896b) in which he heaps praise on Zermelo, for providing the first evidence that Boltzmann's papers were actually being read at all in Germany, cannot be taken seriously, coming 8 years after he had been offered Kirchhoff's chair in Berlin and membership of the Prussian Academy. Another example of Boltzmann's love of provocation is his article on Schopenhauer which he entitled “Proof that Schopenhauer is an empty-minded, ignorant, nonsense-spreading philosophaster who thoroughly degenerates heads by selling hollow talk”, but was published, after editorial intervention, as ‘On a thesis by Schopenhauer’ (Boltzmann 1905, 240). 6. It has even been suggested that “Maxwell apparently never read any of the papers that Boltzmann wrote after about 1870” (Klein 1973). But this is not true. There are several occasions where Maxwell refers to the Boltzmann equation and H-theorem from 1872 and to other Boltzmann papers from 1874 and 1876. 7. This is not to say that he always conflated these two interpretations of probability. Some papers employ a clear and consistent choice for one interpretation only. But then that choice differs between papers, or even in different sections of a single paper. In fact, in (1871c) he even multiplied probabilities with different interpretations into one equation to obtain a joint probability. But then in (1872) he conflates them again. Even in his last paper (Boltzmann and Nabl 1904) we see that Boltzmann identifies the two kinds of probability with a simple minded argument. 8. The literature contains some surprising confusion about how the hypothesis got its name. The Ehrenfests borrowed the name from Boltzmann's concept of an Ergode, which he introduced in (Boltzmann 1884) and also discussed in his Lectures on Gas Theory (Boltzmann 1898). But what did Boltzmann actually understood by an Ergode? Brush points out in his translation of (Boltzmann 1898, 297) and similarly in (Brush 1976, 364), that Boltzmann used the phrase to denote a stationary ensemble, characterized by the microcanonical distribution in phase space. In other words, in Boltzmann's (1898) usage an Ergode is just an micro-canonical ensemble, and has very little to do with the so-called ergodic hypothesis. Brush criticized the Ehrenfests for causing confusion by their terminology. However, in his original (1884) introduction of the phrase, an Ergode is a stationary ensemble with only a single integral of motion. As a consequence, its distribution is indeed micro-canonical, but, what is more, every member of the ensemble satisfies the hypothesis of traversing every phase point with the given total energy. Indeed, in that text, being an element of an Ergode and satisfaction of this hypothesis are equivalent. The Ehrenfests were thus completely justified in baptizing the hypothesis “ergodic”. Another dispute has emerged concerning the etymology of the term. The common opinion, going back at least to the Ehrenfests has always been that the word derived from ergos (work) and hodos (path). Gallavotti (1994) has argued however that “undoubtedly” it derives from ergos and eidos (similar). Now one must grant Gallavotti that one would expect the etymology of the suffix “ode” of ergode to be identical to that for Boltzmann's “holode”, “monode”, “orthode” and “planode”, and that a reference to path would be somewhat unnatural in these last four cases. However, I don't believe a reference to eidos would be more natural. Moreover, it seems to me that if Boltzmann intended this etymology, he would have written “ergoide” in analogy to “planetoide”, “ellipsoide”, etc. That he was familiar with this common usage is substantiated by him coining the term “momentoide” for momentum-like degrees of freedom (i.e., those that contribute a quadratic term to the Hamiltonian) in (Boltzmann 1892). The argument mentioned by Cercignani (that Gallavotti's father is a classicist) fails to convince me in this matter. 9. Indeed, in the rare occasion in which he later did mention external disturbances, it was only to say that they are “not necessary” (Boltzmann 1895b). See also (Boltzmann 1896, §91). 10. Or some hypothesis compatible with the quasi-ergodic hypothesis. As it happens, Boltzmann's example is also compatible with the measure-theoretical hypothesis of ‘metric transitivity’. 11. Actually Boltzmann formulated the discussion in terms of a distribution function over kinetic energy rather than velocity. We have here transposed this into the latter, nowadays more common 12. The term “cyclic” is missing in Brush's translation. 13. Actually, as the Ehrenfests showed more clearly, there is also a third possible case, namely (c): H[0] lies on a local minimum of the curve. But that case is even more improbable than case (b)
{"url":"https://plato.stanford.edu/ENTRIES/statphys-boltzmann/notes.html","timestamp":"2024-11-07T07:49:56Z","content_type":"text/html","content_length":"19511","record_id":"<urn:uuid:446e3df2-a6d9-4b37-8583-c5e8fb674cfc>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00870.warc.gz"}
Taylor Series using Geometric Series and Power Series • Thread starter jegues • Start date In summary: Remember that the series is only valid for |x| < 1, so you'll need to include that restriction in your final answer.In summary, the conversation revolved around finding a series for \frac {1}{x^2} and using it to solve a problem. The individual struggled with differentiating and integrating the series, but eventually arrived at the correct solution. The conversation also included a reminder about the validity of the series for a specific range of values. Homework Statement See figure attached. Homework Equations The Attempt at a Solution Okay I think I handled the lnx portion of the function okay(see other figure attached), but I'm having from troubles with the, [tex]\int x^{-2} = \frac{-1}{x} + C[/tex] How do I deal with the C? If I can get, I can work with it to get something like the following, [tex]\frac{\text{first term of geometric series}}{1 - \text{common ratio}}[/tex] So what do I do about the C? Once I figure this out I can make more of an attempt into shaping, [tex] \frac{-1}{x}[/tex] into the form mentioned above. Any ideas? Thanks again! Last edited: You're going about it backwards. Use [tex]\frac{1}{x^2} = -\frac{d}{dx}\left(\frac{1}{x}\right)[/tex] vela said: You're going about it backwards. Use [tex]\frac{1}{x^2} = -\frac{d}{dx}\left(\frac{1}{x}\right)[/tex] Alrighty I think I've got a series for, See figure attached. Is this correct? I can't seem to figure out how to express it sigma notation however. Any ideas? No, it looks like you integrated the series, but you want to differentiate -1/x to get 1/x^2. vela said: No, it looks like you integrated the series, but you want to differentiate -1/x to get 1/x^2. How does this look? (See figure attached) FAQ: Taylor Series using Geometric Series and Power Series 1. What is a Taylor Series? A Taylor Series is a mathematical representation of a function as an infinite sum of terms, using the derivatives of the function at a specific point as coefficients. It is used to approximate the value of a function at a given point. 2. What is a Geometric Series? A Geometric Series is a series of terms where each term is multiplied by a common ratio, known as the common ratio. The sum of a Geometric Series can be calculated using the formula S = a / (1 - r), where a is the first term and r is the common ratio. 3. How are Taylor Series and Geometric Series related? Taylor Series can be expressed as a Geometric Series if the function being approximated is in the form of a power series. This means that the coefficients of the Taylor Series follow a pattern similar to a Geometric Series, making it easier to calculate. 4. What is a Power Series? A Power Series is a series of terms where each term is a polynomial with increasing powers of a variable, typically x. It can be used to represent a wide range of functions and can be manipulated to find approximations of those functions. 5. How is the accuracy of a Taylor Series determined? The accuracy of a Taylor Series depends on the number of terms used in the series. The more terms included, the closer the approximation will be to the actual value of the function. However, using too many terms can lead to computational errors, so it is important to balance accuracy with practicality.
{"url":"https://www.physicsforums.com/threads/taylor-series-using-geometric-series-and-power-series.433424/","timestamp":"2024-11-08T15:32:04Z","content_type":"text/html","content_length":"102187","record_id":"<urn:uuid:3d6be001-8ebc-4ef6-b088-254f211ff3d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00181.warc.gz"}
Ask Ethan: Why Do Gravitational Waves Travel Exactly At The Speed Of Light? Starts With A Bang — Ask Ethan: Why Do Gravitational Waves Travel Exactly At The Speed Of Light? General Relativity has nothing to do with light or electromagnetism at all. So how to gravitational waves know to travel at the speed of light? There are two fundamental classes of theories required to describe the entirety of the Universe. On the one hand, there’s quantum field theory, which describes electromagnetism and the nuclear forces, and accounts for all the particles in the Universe and the quantum interactions that govern them. On the other hand, there’s General Relativity, which explains the relationship between matter /energy and space/time, and describes what we experience as gravitation. Within the context of General Relativity, there’s a new type of radiation that arises: gravitational waves. Yet, despite having nothing to do with light, these gravitational waves must travel at the speed of light. Why is that? Roger Reynolds wants to know, asking: We know that the speed of electromagnetic radiation can be derived from Maxwell’s equation[s] in a vacuum. What equations (similar to Maxwell’s—perhaps?) offer a mathematical proof that Gravity Waves must travel [at the] speed of light? It’s a deep, deep question. Let’s dive into the details. It’s possible to write down a variety of equations, like Maxwell’s equations, to describe some aspect of the Universe. We can write them down in a variety of ways, as they are shown in both differential form (left) and integral form (right). It’s only by comparing their predictions with physical observations can we draw any conclusion about their validity. (EHSAN KAMALINEJAD OF It’s not apparent, at first glance, that Maxwell’s equations necessarily predict the existence of radiation that travels at the speed of light. What those equations —which govern classical electromagnetism —clearly tell us are about the behavior of: • stationary electric charges, • electric charges in motion (electric currents), • static (unchanging) electric and magnetic fields, • and how those fields and charges move, accelerate, and change in response to one another. Now, using the laws of electromagnetism alone, we can set up a physically relevant system: that of a low-mass, negatively charged particle orbiting a high-mass, positively charged one. This was the original model of the Rutherford atom, and it came along with a big, existential crisis. As the negative charge moves through space, it experiences a changing electric field, and accelerates as a result. But when a charged particle accelerates, it has to radiate power away, and the only way to do so is through electromagnetic radiation: i.e., light. In the Rutherford model of the atom, electrons orbited the positively charged nucleus, but would emit electromagnetic radiation and see that orbit decay. It required the development of quantum mechanics, and the improvements of the Bohr model, to make sense of this apparent paradox. (JAMES HEDBERG / CCNY / CUNY) This has two effects that are calculable within the framework of classical electrodynamics. The first effect is that the negative charge will spiral into the nucleus, as if you’re radiating power away, you have to get that energy from somewhere, and the only place to take it from is the kinetic energy of the particle in motion. If you lose that kinetic energy, you inevitably will spiral towards the central, attracting object. The second effect that you can calculate is what’s going on with the emitted radiation. There are two constants of nature that show up in Maxwell’s equations: • ε_0, the permittivity of free space, which is the fundamental constant describing the electric force between two electric charges in a vacuum. • μ_0, the permeability of free space, which you can think of as the constant that defines the magnetic force produced by two parallel conducting wires in a vacuum with a constant current running through them. When you calculate the properties of the electromagnetic radiation produced, it behaves as a wave whose propagation speed equals (ε_0 · μ_0)^(-1/2), which just happens to equal the speed of light. Relativistic electrons and positrons can be accelerated to very high speeds, but will emit synchrotron radiation (blue) at high enough energies, preventing them from moving faster. This synchrotron radiation is the relativistic analog of the radiation predicted by Rutherford so many years ago, and has a gravitational analogy if you replace the electromagnetic fields and charges with gravitational ones.(CHUNG-LI DONG, JINGHUA GUO, YANG-YUAN CHEN, AND CHANG CHING-LIN, ‘SOFT-X-RAY SPECTROSCOPY PROBES NANOMATERIAL-BASED DEVICES’) In electromagnetism, even if the details are quite the exercise to work out, the overall effect is straightforward. Moving electric charges that experience a changing external electromagnetic field will emit radiation, and that radiation both carries energy away and itself moves at a specific propagation speed: the speed of light. This is a classical effect, which can be derived with no references to quantum physics at all. Now, General Relativity is also a classical theory of gravity, with no references to quantum effects at all. In fact, we can imagine a system very analogous to the one we set up in electromagnetism: a mass in motion, orbiting around another mass. The moving mass will experience a changing external gravitational field (i.e., it will experience a change in spatial curvature) which causes it to emit radiation that carries energy away. This is the conceptual origin of gravitational radiation, or gravitational waves. There is, perhaps, no better analogy for the radiation-reaction in electromagnetism than the planets orbiting the Sun in gravitational theories. The Sun is the largest source of mass, and curves space as a result. As a massive planet moves through this space, it accelerates, and by necessity that implies it must emit some type of radiation to conserve energy: gravitational waves. (NASA/ But why —as one would be inclined to ask —do these gravitational waves have to travel at the speed of light? Why does the speed of gravity, which you might imagine could take on any value at all, have to exactly equal the speed of light? And, perhaps most importantly, how do we know? Imagine what might happen if you were to suddenly pull the ultimate cosmic magic trick, and made the Sun simply disappear. If you did this, you wouldn’t see the skies go dark for 8 minutes and 20 seconds, which is the amount of time it takes light to travel the ~150 million km from the Sun to Earth. But gravitation doesn’t necessarily need to be the same way. It’s possible, as Newton’s theory predicted, that the gravitational force would be an instantaneous phenomenon, felt by all objects with mass in the Universe across the vast cosmic distances all at once. An accurate model of how the planets orbit the Sun, which then moves through the galaxy in a different direction-of-motion. If the Sun were to simply wink out of existence, Newton’s theory predicts that they would all instantaneously fly off in straight lines, while Einstein’s predicts that the inner planets would continue orbiting for shorter periods of time than the outer planets. (RHYS What would happen under this hypothetical scenario? If the Sun were to somehow disappear at one particular instant, would the Earth fly off in a straight line immediately? Or would the Earth continue to move in its elliptical orbit for another 8 minutes and 20 seconds, only deviating once that changing gravitational signal, propagating at the speed of light, reached our world? If you ask General Relativity, the answer is much closer to the latter, because it isn’t mass that determines gravitation, but rather the curvature of space, which is determined by the sum of all the matter and energy in it. If you were to take the Sun away, space would go from being curved to being flat, but only in the location where the Sun physically was. The effect of that transition would then propagate radially outwards, sending very large ripples—i.e., gravitational waves—propagating through the Universe like ripples in a 3D pond. Whether through a medium or in vacuum, every ripple that propagates has a propagation speed. In no cases is the propagation speed infinite, and in theory, the speed at which gravitational ripples propagate should be the same as the maximum speed in the Universe: the speed of light. (SERGIU BACIOIU/FLICKR) In the context of relativity, whether that’s Special Relativity (in flat space) or General Relativity (in any generalized space), the speed of anything in motion is determined by the same things: its energy, momentum, and rest mass. Gravitational waves, like any form of radiation, have zero rest mass and yet have finite energies and momenta, meaning that they have no option: they must always move at the speed of light. This has a few fascinating consequences. 1. Any observer in any inertial (non-accelerating) reference frame would see gravitational waves moving at exactly the speed of light. 2. Different observers would see gravitational waves redshifting and blueshifting due to all the effects—such as source/observer motion, gravitational redshift/blueshift, and the expansion of the Universe—that electromagnetic waves also experience. 3. The Earth, therefore, is not gravitationally attracted to where the Sun is right now, but rather where the Sun was 8 minutes and 20 seconds ago. The simple fact that space and time are related by the speed of light means that all of these statements must be true. Gravitational radiation gets emitted whenever a mass orbits another one, which means that over long enough timescales, orbits will decay. Before the first black hole ever evaporates, the Earth will spiral into whatever’s left of the Sun, assuming nothing else has ejected it previously. Earth is attracted to where the Sun was approximately 8 minutes ago, not to where it is today. (AMERICAN This last statement, about the Earth being attracted to the Sun’s position from 8 minutes and 20 seconds ago, was a truly revolutionary difference between Newton’s theory of gravity and Einstein’s General Relativity. The reason it’s revolutionary is for this simple fact: if gravity simply attracted the planets to the Sun’s prior location at the speed of light, the planets’ predicted locations would mismatch severely with where they actually were observed to be. It’s a stroke of brilliance to realize that Newton’s laws require an instantaneous speed of gravity to such precision that if that were the only constraint, the speed of gravity must have been more than 20 billion times faster than the speed of light! But in General Relativity, there’s another effect: the orbiting planet is in motion as it moves around the Sun. When a planet moves, you can think of it riding over a gravitational ripple, coming down in a different location from where it went up. When a mass moves through a region of curved space, it will experience an acceleration owing to the curved space it inhabits. It also experiences an additional effect due to its velocity as it moves through a region where the spatial curvature is constantly changing. These two effects, when combined, result in a slight, tiny difference from the predictions of Newton’s gravity. (DAVID CHAMPION, In General Relativity, as opposed to Newton’s gravity, there are two big differences that are important. Sure, any two objects will exert a gravitational influence on the other, by either curving space or exerting a long-range force. But in General Relativity, these two extra pieces are at play: each object’s velocity affects how it experiences gravity, and so do the changes that occur in gravitational fields. The finite speed of gravity causes a change in the gravitational field that departs significantly from Newton’s predictions, and so do the effects of velocity-dependent interactions. Amazingly, these two effects cancel almost exactly. It’s the tiny inexactness of this cancellation that allowed us to first test whether Newton’s “infinite speed” or Einstein’s “speed of gravity equals the speed of light” model matched the physics of our Universe. To test out what the speed of gravity is, observationally, we’d want a system where the curvature of space is large, where gravitational fields are strong, and where there’s lots of acceleration taking place. Ideally, we’d choose a system with a large, massive object moving with a changing velocity through a changing gravitational field. In other words, we’d want a system with a close pair of orbiting, observable, high-mass objects in a tiny region of space. Nature is cooperative with this, as binary neutron star and binary black hole systems both exist. In fact, any system with a neutron star has the ability to be measured extraordinarily precisely if one serendipitous thing occurs: if our perspective is exactly aligned with the radiation emitted from the pole of a neutron star. If the path of this radiation intersects us, we can observe a pulse every time the neutron star rotates. The rate of orbital decay of a binary pulsar is highly dependent on the speed of gravity and the orbital parameters of the binary system. We have used binary pulsar data to constrain the speed of gravity to be equal to the speed of light to a precision of 99.8%, and to infer the existence of gravitational waves decades before LIGO and Virgo detected them. However, the direct detection of gravitational waves was a vital part of the scientific process, and the existence of gravitational waves would still be in doubt without it. (NASA (L), MAX PLANCK INSTITUTE FOR RADIO ASTRONOMY / MICHAEL KRAMER (R)) As the neutron stars orbit, the pulsing one—known as a pulsar—carries extraordinary amounts of information about the masses and orbital periods of both components. If you observe this pulsar in a binary system for a long period of time, because it’s such a perfectly regular emitter of pulses, you should be able to detect whether the orbit is decaying or not. If it is, you can even extract a measurement for the emitted radiation: how quickly does it propagate? The predictions from Einstein’s theory of gravity are incredibly sensitive to the speed of light, so much so that even from the very first binary pulsar system discovered in the 1980s, PSR 1913+16 (or the Hulse-Taylor binary), we have constrained the speed of gravity to be equal to the speed of light with a measurement error of only 0.2%! The quasar QSO J0842+1835, whose path was gravitationally altered by Jupiter in 2002, allowing an indirect confirmation that the speed of gravity equals the speed of light. (FOMALONT ET AL. (2000), APJS 131, 95–183) That’s an indirect measurement, of course. We performed a second type of indirect measurement in 2002, when a chance coincidence lined up the Earth, Jupiter, and a very strong radio quasar (QSO J0842+1835) all along the same line-of-sight. As Jupiter moved between Earth and the quasar, the gravitational bending of Jupiter allowed us to indirectly measure the speed of gravity. Travel the Universe with astrophysicist Ethan Siegel. Subscribers will get the newsletter every Saturday. All aboard! The results were definitive: they absolutely ruled out an infinite speed for the propagation of gravitational effects. Through these observations alone, scientists determined that the speed of gravity was between 2.55 × 10⁸ m/s and 3.81 × 10⁸ m/s, completely consistent with Einstein’s predictions of 299,792,458 m/s. Artist’s illustration of two merging neutron stars. The rippling spacetime grid represents gravitational waves emitted from the collision, while the narrow beams are the jets of gamma rays that shoot out just seconds after the gravitational waves (detected as a gamma-ray burst by astronomers). The gravitational waves and the radiation must travel at the same speed to a precision of 15 significant digits. (NSF / LIGO / SONOMA STATE UNIVERSITY / A. SIMONNET) But the greatest confirmation that the speed of gravity equals the speed of light comes from the 2017 observation of a kilonova: the inspiral and merger of two neutron stars. A spectacular example of multi-messenger astronomy, a gravitational wave signal arrived first, recorded in both the LIGO and Virgo detectors. Then, 1.7 seconds later, the first electromagnetic (light) signal arrived: the high-energy gamma rays from the explosive cataclysm. Because this event took place some 130 million light-years away, and the gravitational and light signals arrived with less than a two second difference between them, we can constrain the possible departure of the speed of gravity from the speed of light. We now know, based on this, that they differ by less than 1 part in 10¹⁵, or less than one quadrillionth of the actual speed of light. Illustration of a fast gamma-ray burst, long thought to occur from the merger of neutron stars. The gas-rich environment surrounding them could delay the arrival of the signal, explaining the observed 1.7 second difference between the arrivals of the gravitational and electromagnetic signatures. (ESO) Of course, we think that these two speeds are exactly identical. The speed of gravity should equal the speed of light so long as both gravitational waves and photons have no rest mass associated with them. The 1.7 second delay is very likely explained by the fact that gravitational waves pass through matter unperturbed, while light interacts electromagnetically, potentially slowing it down as it passes through the medium of space by just the smallest amount. The speed of gravity really does equal the speed of light, although we don’t derive it in the same fashion. Whereas Maxwell brought together electricity and magnetism—two phenomena that were previously independent and distinct—Einstein simply extended his theory of Special Relativity to apply to all spacetimes in general. While the theoretical motivation for the speed of gravity equaling the speed of light was there from the start, it’s only with observational confirmation that we could know for certain. Gravitational waves really do travel at the speed of light! Submit your Ask Ethan questions to startswithabang at gmail dot com! Ethan Siegel is the author of Beyond the Galaxy and Treknology. You can pre-order his third book, currently in development: the Encyclopaedia Cosmologica. Who — or what — really controls your mind? More than a century ago, Halifax suffered an accidental blast one-fifth the size of the atomic bomb dropped on Hiroshima. Dive into the twisted truths and concealed realities told by literature’s most unreliable narrators. If you don’t feel better after the weekend, the “burnout paradox” could explain why. 7 min If you can identify a foreground star, the spike patterns are a dead giveaway as to whether it’s a JWST image or any other observatory. We’re too afraid to voice our complaints, and for good reason — it often doesn’t go well.
{"url":"https://preprod.bigthink.com/starts-with-a-bang/ask-ethan-why-do-gravitational-waves-travel-exactly-at-the-speed-of-light/","timestamp":"2024-11-09T03:41:15Z","content_type":"text/html","content_length":"160666","record_id":"<urn:uuid:d98eac0b-558e-4e50-9b28-c09b40a5f271>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00540.warc.gz"}
Dependence modeling using EVT and D-Vine Extreme occurrences such as extreme gains and extreme losses in financial market are unavoidable. Accurate knowledge of dependence on these extremes can help investors to adjust their portfolio mix accordingly. This paper therefore focuses on modeling dependence in extreme gains and losses in a portfolio consisting of three assets using D-Vine Copula. The concept of extreme value theory, the peak over threshold approach is use to identify the sets of extreme gains and extreme losses in each asset contain in the portfolio. For the three assets, a total of six sets (3 sets of extreme gains and 3 sets of extreme losses) are use in the dependence modeling. Due to the well known fact that financial return series are not independent and identically distributed (i.i.d), the returns are first filtered using a GARCH-type model before the extreme value analysis. To take into account the different types of dependence normally present in financial return series, the D-Vine copula, is use to model the dependence structure in the sets of extremes for all the assets in the portfolio. Empirical evidence using D-Vine copula for the dependence modeling, indicates that the conditional and unconditional dependence parameters are significantly different from zero for all pairs of tails. Some of these dependence parameters are negative, showing that an extreme gain in one asset may lead to an extreme loss in the other asset vice versa. ABSTRACT: Extreme occurrences such as extreme gains and extreme losses in financial market are unavoidable. Accurate knowledge of dependence on these extremes can help investors to adjust their portfolio mix accordingly. This paper therefore focuses on modeling dependence in extreme gains and losses in a portfolio consisting of three assets using D-Vine Copula. The concept of extreme value theory, the peak over threshold approach is use to identify the sets of extreme gains and extreme losses in each asset contain in the portfolio. For the three assets, a total of six sets (3 sets of extreme gains and 3 sets of extreme losses) are use in the dependence modeling. Due to the well known fact that financial return series are not independent and identically distributed (i.i.d), the returns are first filtered using a GARCH-type model before the extreme value analysis. To take into account the different types of dependence normally present in financial return series, the D-Vine copula, is use to model the dependence structure in the sets of extremes for all the assets in the portfolio. Empirical evidence using D-Vine copula for the dependence modeling, indicates that the conditional and unconditional dependence parameters are significantly different from zero for all pairs of tails. Some of these dependence parameters are negative, showing that an extreme gain in one asset may lead to an extreme loss in the other asset vice versa. DOI Code: 10.1285/i20705948v9n1p246 Keywords: D-Vine; Extreme Value; Pair-Copula; Dependence; GARCH . Aas. K., Czado. C., Frigessi, A. and BaKKen, H. (2009). Pair-copula constructions of multivariate dependence. Insurance: Mathematics and Economics, 44,182-198. . A.J. McNeil, R. Frey, and P. Embrechts. Quantitative Risk Management: Concepts, Techniques, Tools. Princeton University Press, 2005. . Balkima, A. A. and de Haan, L. (1974). Residual life time at great age. Annals of Probability, 2, 792-804 . Bedford, T and R. M. Cooke (2001) “probability density decomposition for conditionally dependent random variables modeled by vines”, Annals of Mathematics and Artificial Intelligence 32, 245-268 . Bedford, T. and R. M. Cooke (2002) “Vines- a new graphical model for dependent random variables”. Annals of Statistics 30, 1031-1068. . Camble, John Y., Lo, Andrew W., MacKinlay. A. Craig (1997). The econometrics of financial market, Princeton University Press, New Jersey. . Cherubini, U., E. Luciano, and W. Vecchiato, 2004, copula methods in finance. Wiley. . Dobric ́, J. and Schmid, F. (2005): Testing Goodness of fit for parametric Families of Copula- Application to Financial Data”, Communications in statistics: Simulation and computation, Vol. 34, pp . Engle, R.F., 1982. Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrical 50(4), 987-1007 . Embrechts, P. Kluppelberg, C. and. Mikosch, T. (1997). Modeling Extremal Events for insurance and Finance. Berlin: Springer. . Fisher, R.A. and Tippett, L.H.C. (1928). Limiting forms of the frequency distribution of largest or smallest members of a sample. Proceedings of Cambridge Philosophic society, 24, 180-190. . Fisher, R.A. (1932). Statistical methods for Research Workers. Edinburg: Oliver and Boyd. . Haff, I., Aas K., Frigessi A. (2010). On the Simplified Pair-Copula Construction-Simply useful or too Simplistic. Journal of multivariate analysis, 101, 1296-1310.. . Jenkinson, A.F.(1955). The frequency distribution of the annual maximum (minimum) values of meteorological events. Quarterly Journal of the Royal Meteorological Society, 81, 158-172. . Joe H (1996). “Families of m-Variate Distribution with Given Margins and m(m-1)/2 Bivariate Dependence Parameters” In L Ru ̈schendorf, B Schweizer, MD Taylor (eds.), Distributions with Fixed Marginals and Related Topic, pp 120-141. Institute of Mathematical statistics, Hayward. . Kahneman, Daniel and Amos Tversky (1979) “Prospect Theory: An Analysis of Decision under Risk”, Econometrical, ⅪⅦ (1979), 263-291. . Kurowicha, D. and Cooke, R. (2006), Uncertainty Analysis with High Dimensional Dependence Modeling, New York: Wiley. . Nelson, R. (2006). An introduction to Copula, 2nd edition. New York: Springer. Pages 271-294. Providence, RI: American Mathematical Society. . Patton, Andrew J, 2002, Skewness, Asymmetric Dependence, and Portfolios, working paper, Department of Economics, University of California, San Diego. . Pickands, J.I. (1975). Statistical inference using extreme value order statistics. Annals of statistics, 3, 119-131. . R Mashal and A. Zeevi, 2002. Beyond correlation: Extreme co-movements between financial assets. Technical report, Columbia University. . Rosenblatt, M. (1952). Remarks on a multivariate transformation. Annals of Mathematical Statistics, 23, 470-472. . Savu, C. and Trede, M. (2004): “Goodness-of-fit tests for parametric families of Archimedean copula”, CAWN, University of Muenster Discussion Paper, No 6. . Schmidt, R. and Theodorescu, R. (2006): “On the Schur unimodality of copula and other multivariate distributions”, Seminal of Economics and Social Statistics, University of Cologne, Working Paper. . Sklar, A. (1959). Fonction de re ́partition a ́n dimensions at leurs marges Publications de I’Institut de Statistique de L’Universite’ de Paris, 8, 229-231. . Von Mises, R (1954). La distribution de la plus grade de n valeurs. In Selected papers, VolumeⅡ, pages 271-294, American Mathematical Society, Providence, RⅠ Full Text:
{"url":"http://siba-ese.unile.it/index.php/ejasa/article/view/13446/0","timestamp":"2024-11-12T09:13:17Z","content_type":"application/xhtml+xml","content_length":"31264","record_id":"<urn:uuid:69773f13-2f1d-4cd6-abde-2bc9a7423568>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00125.warc.gz"}
How do you simplify 7/sqrt7? | Socratic How do you simplify #7/sqrt7#? 2 Answers See a solution process below: To simplify this we need to rationalize the denominator. This means to eliminate the radical from the denominator by multiplying by the appropriate form of $1$: $\frac{7}{\sqrt{7}} \implies \frac{7}{\sqrt{7}} \times \frac{\sqrt{7}}{\sqrt{7}} \implies \frac{7 \times \sqrt{7}}{\sqrt{7} \times \sqrt{7}} \implies \frac{7 \sqrt{7}}{7} \implies \sqrt{7}$ $\frac{7}{\sqrt{7}} \times \frac{\sqrt{7}}{\sqrt{7}}$ $\Rightarrow \frac{7 \sqrt{7}}{\sqrt{49}}$ $\Rightarrow \frac{7 \sqrt{7}}{7}$ $\Rightarrow \frac{\cancel{7} \sqrt{7}}{\cancel{7}}$ $\Rightarrow \sqrt{7}$ Impact of this question 8080 views around the world
{"url":"https://socratic.org/questions/how-do-you-simplify-7-sqrt7","timestamp":"2024-11-04T19:58:27Z","content_type":"text/html","content_length":"34536","record_id":"<urn:uuid:c8f84cb9-41d7-4fd2-b6ca-576671723b69>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00529.warc.gz"}
About ScattPy ScattPy is an open source Python package for light scattering simulations. Its goal is to provide an easy-to-use and flexible modern framework for the numerical solving of the diffraction problems with various kinds of particles. With the current version of ScattPy [cite:Vinokurov_jqsrt2011] it is possible to calculate far- and near-field optical properties of light scattered by dielectric particles with axial symmetry. With ScattPy homogeneous and multilayered particles can be handled [cite:Vinokurov_optsp2010,cite:Vinokurov_jqsrt2009]. ScattPy includes the separation of variables (SVM), extended boundary condition (EBCM) and point matching (PMM) methods. For a review of these methods and their applicability please see [ ScattPy is developed and maintained by Dr. A.A. Vinokurov. The numerical techniques implemented in the ScattPy are based on the results of the research performed by Prof. V.G. Farafonov, Prof. V.B. Il'in and Dr. A.A. Vinokurov in collaboration with Prof. N.V. Voshchinnikov. The original approach was proposed by Prof. V.G. Farafonov. Terms of Use ScattPy is an open source project, distributed under the BSD license. If you are using the package in your research please cite it as A. A. Vinokurov, V. B. Il'in, and V. G. Farafonov, Scattpy: a new python package for light scattering computations, J. Quant. Spectr. Rad. Transf., In Press (accepted) (2011). [ bib | http ] [1] A. A. Vinokurov, V. B. Il'in, and V. G. Farafonov, Scattpy: a new python package for light scattering computations, J. Quant. Spectr. Rad. Transf., In Press (accepted) (2011). [ bib | http ] [2] A. A. Vinokurov, V. B. Il'in, and V. G. Farafonov, On optical properties of nonspherical inhomogeneous particles, Opt. Spectr., 109 (2010), pp. 444-453. [ bib | http ] [3] V. G. Farafonov, V. B. Il'in, and A. A. Vinokurov, Near-and far-field light scattering by nonspherical particles: Applicability of methods that involve a spherical basis, Opt. Spectr., 109 (2010), pp. 432-443. [ bib | http ] [4] A. A. Vinokurov, V. G. Farafonov, and V. B. Il'in, Separation of variables method for multilayered nonspherical particles, J. Quant. Spectr. Rad. Transf., 110 (2009), pp. 1356-1368. [ bib | http [5] V. G. Farafonov, A. A. Vinokurov, and V. B. Il'in, Comparison of the light scattering methods using the spherical basis, Opt. Spectrosc., 102 (2007), pp. 927-938. [ bib | http ] Latest news ScattPy v.0.1.2 is released. • Critical bug fix in the non-iterative method for multilayered particles, • Critical bug fix in the oblate spheroid surface equation. ScattPy v.0.1.1 is released. • Sphinx documentation is added, • Minor changes in the names of some functions • Added LayeredConfocalSpheroid particle class ScattPy v.0.1.0 is released. The package is registered in the Python package index (PyPI) as scikits.scattpy and can be automatically installed using the standard tools such as pip and easy_install. Please refer to the download page for instructions on manual download and installation. The package is well tested and is quite stable. However due to the lack of documentation it is still considered in beta phase. The ScattPy package is designed for modelling the light scattering by non-spherical particles. Currently supported models are homogeneous and multi-layered particles with the following surface shapes: • spherical, • prolate and oblate spheroidal, • Chebyshev: illuminated by a plane wave. Besides, custom axi-symmetrical shapes can be used by declaring new shape classes derived from the Shape class. With the ScattPy one can compute the following optical characteristics: • optical cross sections • efficiency factors • scattering matrix elements • amplitude matrix for specified scattering angles. The following numerical methods are included: • separation of variables method (SVM) with spherical basis, • extended boundary conditions method (EBCM) with spherical basis, • integral point matching method (iPMM) with spherical basis (only homogeneous scatterers and TM mode are implemented). For more details on the methods please refer to the articles: • V.G. Farafonov, A.A. Vinokurov, and V.B. Il’in. Comparison of the light scattering methods using the spherical basis. Opt.Spectrosc., 102:927-938, 2007. • A.A. Vinokurov, V.G. Farafonov, and V.B. Il’in. Separation of variables method for multilayered non-spherical particles. J. Quant. Spectr. Rad. Transf., 110:1356–1368, 2009. Here’s a short example of the ScattPy usage: from numpy import * from sciits.scattpy import * # declare a particle P = ProlateSpheroid(ab=1.5, xv=1., m=1.33+0.2j) # declare laboratory LAB = Lab(P, alpha=pi/4) # Here alpha is the angle between the particle axis and # the direction of the incident radiation propagation. # Solve the light scattering porblem with the EBCM method RES = ebcm(LAB,accuracyLimit=1e-10) # print scattered field efficiency factor Qsca for the TM and TE modes print LAB.get_Csca(RES.c_sca_tm)[1], LAB.get_Csca(RES.c_sca_te)[1] We moved the developement of the ScattPy to the GitHub platform. Here are the main links:
{"url":"http://scattpy.github.io/","timestamp":"2024-11-13T02:35:51Z","content_type":"application/xhtml+xml","content_length":"21315","record_id":"<urn:uuid:cf57e785-c8a6-4171-836a-1d95f2aa3856>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00087.warc.gz"}
Gemtree Software, Comparisons, Full Html Context Help of The Peter System is equal to is not equal to is greater than is less than is greater or equal to is less or equal to The comparison operators are used for comparing numeric terms. As operator parameters one or more numeric terms can be stated, the operator output being a logic value. The operand comparing takes place gradually from the top downwards, each time between two contiguous operands. The comparison operation is satisfied, if the comparison between all comparing operands is satisfied. A special case occurs if only one operand is stated. In this case the operand is compared to zero, i.e. as if the second operand would be a term the value of which is 0. Full Html Context Help of The Peter - Gemtree Software & Children Programming
{"url":"http://www.gemtree.cz/HelpEn/20032.htm","timestamp":"2024-11-05T15:02:48Z","content_type":"text/html","content_length":"4088","record_id":"<urn:uuid:806c673f-63b2-4fd3-a624-753f258be79d>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00758.warc.gz"}
How to shuffle an array in JavaScript — jeremias.codes How to shuffle an array in JavaScript The other day I was asked to sort randomly an array of objects, and while it didn't seem a very complex task it turned into hours of investigation. There are a lot of things to consider while dealing with randomizers, so yes, worth a post. Whatever the solution was it needed to cover two concerns to beat any other possible one. First thing was the frequency distribution of the possible results which basically means that I wanted any combination to be equally probable to appear. The second one was performance. Without overthinking too much about them I decided to quickly bring an algorithm that would do the job, just as a start and dive into alternatives and testing later. From scratch Trying to achieve equally probable results, I came up with this idea. First of all, make a copy of the array. Get a random position, take the item in that position out of the array and put it inside a new one. Then repeat that again, considering now the array length has decreased by one until the copied array is empty. The best thing about this approach is that every iteration is independent from the previous one, which should be pretty obvious, but There are a lot of solutions in forums which don't even cover this. Let's start digging into this mentioned approach. Copy an array We need to do this so we don't actually modify the original one. There's an array method called slice that takes two parameters, a start position and a number of elements you want to take from that It returns a new array containing only those elements, if you need a better understanding of it check its MDN reference page. Interesting for us in this case scenario, if you don't pass any arguments to slice it returns a new array with the exact same elements, which is exactly what we need to prevent side effects inside our method. function shuffle(array) { var copiedArray = array.slice(); Remember that in JavaScript an object as a parameter is passed as reference, so any modification inside the function is going to affect the original data, which we don't want. Get a random position To start, we are going to rely on the good ol' Math.random, which returns a number between 0 and 0.99 always. Let's say we have an array with three elements, if we call this method and then multiply the result with the length of the array we can get a value between zero and almost three. With Math.floor we remove the floating part of any of the possible result, and now we can get zero, one or two, the three available indexes in our three elements array. function shuffle(array) { var copiedArray = array.slice(); var len = copiedArray.length; var randomPosition; randomPosition = Math.floor(Math.random() * len); Because we plan on reducing the length as we slice one element from the array, I'm going to put this logic inside a while loop which will end after we decrease the len variable to zero. function shuffle(array) { var copiedArray = array.slice(); var len = copiedArray.length; var randomPosition; while (len) { randomPosition = Math.floor(Math.random() * len--); Simple and beautiful... but still doing nothing, we need to pick up an element randomly using the obtained randomPosition and push it to a new one. Return a new shuffled array For this we can again use splice, but this time we are going to pass randomPosition to point the element and 1 to indicate the amount of elements we are going to extract. function shuffle(array) { var copiedArray = array.slice(); var len = copiedArray.length; var shuffledArray = []; var randomPosition; while (len) { randomPosition = Math.floor(Math.random() * len--); shuffledArray.push(copiedArray.splice(randomPosition, 1)[0]); return shuffledArray; And that's it! In terms of space this is creating two new arrays of the same length, which might not be optimal but in my case arrays longer than 20 items was weird so it wasn't a concern. About its complexity in time, it would be O(n) in Big O notation, more than acceptable for non-critical conditions. We can even return early when an array is empty or only contains one element, which happened a lot inside the business logic this code was placed in. function shuffle(array) { if (array.length < 2) { return array; var copiedArray = array.slice(); var len = copiedArray.length; var shuffledArray = []; var randomPosition; while (len) { randomPosition = Math.floor(Math.random() * len--); shuffledArray.push(copiedArray.splice(randomPosition, 1)[0]); return shuffledArray; I've created a fiddle (link above) where you can see this working. It also contains an iteration that gets executed a one hundred thousand times and its results showing the frequency distribution in the console. After running those tests and making sure it worked well I started searching for other possible alternatives and surprisingly found mine being more stable and reasonable to implement. Using sort, just don't Don't get me wrong, I think sort is great, but when used for its original purposes, to establish a new known order in an array. For that you need a criteria and a compare function that responds to it. Random isn't a known order and has no criteria, but well, here's the little monster I found out there. array.sort(function () { return 0.5 - Math.random(); Beautiful, isn't it? Just one line, something that will encourage you to put it inside your code right away because, you know, it's just one line! The problem is it isn't taking in consideration how sort really works. Every time the compare function is called, sort expects a negative number, a positive number or zero. In case the number is negative the second element in comparison will be moved before the first one, the opposite will happen if the number is positive and nothing will happen if the number returned is zero. That's pretty useful when you are actually sorting elements but since we want to create a random scenario half of the times this compare function gets called nothing actually changes, leaving elements in their original position, which we don't want. If you send an array of two or three elements there's a high probability you will get the exact same order. The best solution out there I imagined this problem wasn't new and that probably smarter people than me already had a solution for a well distributed and performant algorithm. Luckily that was true. The solution is very old and it's called Fisher-Yates shuffle, named after Ronald Fisher and Frank Yates and it assures that any possible permutation is equally possible. This algorithm is the one applied by lodash in their _.shuffle method. I knew there was probably a better solution for this before starting my own approach, but I think giving it a try gives you a great opportunity to think, investigate and learn a lot not only about the problem itself, but new methods, compromises and patterns. That's the good thing about trying to make your own way through challenges. I hope this post reflected some of that experience and, in case you were looking for a nice solution to shuffle an array, you found it useful. Do you want me to write for your publication? Click here to contact me via email.
{"url":"https://jeremias.codes/2015/03/how-to-shuffle-an-array/","timestamp":"2024-11-10T17:47:00Z","content_type":"text/html","content_length":"42367","record_id":"<urn:uuid:2a9e0e77-7746-4922-83be-2769469fa863>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00835.warc.gz"}
Teaching Statistics: Textbook Considerations I have the pleasure of teaching an undergraduate basic statistics class this fall for the third consecutive year. It's not a class I had any specific preparation to teach, but I've tried to make up for that by becoming familiar with some of the statistics education literature, bolstering my content knowledge (although I doubt it will ever be as wide or deep as I'd like), getting access to good resources, and being mindful of the needs of my students. First, it would help to know a little bit about the course. Most strikingly, the class only meets once a week on a Thursday from 4:30 to 7. If you're used to teaching 180-day school years, you really have to wrap your head quickly around the idea that you're only going to see these students 15 times before finals. Also, despite the class being taught in the School of Education, it's not required of any education students. Instead, the class consists mostly of students from two majors: Sociology and Speech, Language, and Hearing Sciences. Honestly, most of them admit to avoiding math classes, but they usually need the stats class to apply for graduate school. As for the content of the course, here is how it is described in the university catalog: Introduces descriptive statistics including graphic presentation of data, measures of central tendency and variability, correlation and prediction, and basic inferential statistics, including the And that's it. As someone who works almost daily with the Common Core State Standards, building a course around such a sparse description would be quite a challenge, especially for a first-time instructor. When I talked to Derek Briggs about teaching the course, he advised that I use his preferred text, Statistics by Freedman, Pisani, and Purves. I'd recently used Agresti and Finlay's Statistical Methods for the Social Sciences for my qualitative methods courses, and while that book suited me pretty well, I was open to something different so I ordered the Freedman text for my In hindsight, the Freedman text was fine, and the Agresti text would have been fine, too. Both were decently well-written and had plenty of problems to assign, but that's the thing — I was looking for a text that offered considerably more than explanations followed by problem sets. I really wanted something that supported students working together in groups during class, making sense of the material as we went along. One book that had gotten my attention was Workshop Statistics: Discovery with Data by Rossman and Chance. I recognized Beth Chance's name immediately from some of the stats education literature I'd read, and felt good that this text would offer what I was looking for. I used the text last year and was not disappointed, and will be using it again this year. Below is a summary of some of the reasons I like Workshop Statistics. Context Continuity In the front matter of the book, Workshop Statistics contains a list of activities by application — in other words, they've categorized all the problems by context and indexed exactly where those contexts get used. The list of related problems appears again with each problem in the text (inset in picture above), so it's easy for me or my students to refer back or forward to where that context appears. I believe in teaching mathematics rooted in context when possible, so I found this an especially helpful way of finding problems that might be relevant or interesting to the students in my Every topic (lesson) in the text opens with some preliminary questions. Some involve data collection, which is great, but at the very least it gives students an opportunity to consider a question and how we might answer it. If Dan Meyer has made anything clear, it's that we shouldn't teach math as finding answers to questions that nobody has bothered to ask. In Brief The end-of-topic summary certainly isn't unique to this text, but the "You should be able to" statements are very handy for writing objectives for standards-based grading. (I hope to write about my SBG approach in a future post.) Online Supports and Simulations Besides both online instructor and student resources, the text uses a number of custom applets that often really help illustrate some of the concepts in the course. Some are Java, but a number have been converted to JavaScript for use on more platforms. I've avoided having students use software beyond a spreadsheet, and some of these applets have saved us from having to purchase SPSS (expensive!) or trying to use R (steep learning curve!). The in-class activities use some interesting contexts and support groups working together. If anything they can be a bit over-scaffolded, but that relieves me from having to lecture much and I can spend most of my time going group-to-group in the classroom and dealing with questions more intimately. There are a number of smaller things that I'm fine with, although they aren't deal-makers or deal-breakers. The pacing of the text is good — if we cover about two topics a week, we finish the text and pretty much everything one would expect in a basic statistics course. The order of the topics is sensible, too. Typically, it makes sense to put descriptive statistics before inferential statistics, and to work from one-variable stats to two-variable stats. This book is no different. Some texts put linear regression earlier, and where probability should land in a book seems to be negotiable. The placement of those topics in this book is fine for this course and the progression from topic to topic was very manageable. Other than my first day activity, I haven't written much about teaching stats, but look for me to change that this semester.
{"url":"https://blog.mathed.net/2013/08/teaching-statistics-textbook.html","timestamp":"2024-11-03T15:09:14Z","content_type":"application/xhtml+xml","content_length":"99728","record_id":"<urn:uuid:7e724426-4ed6-438b-8a6e-e818f630c24d>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00468.warc.gz"}
Linear Algebra Tutors Top Linear Algebra Tutors serving Canberra Edwin: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...I tutor a broad range of subjects, the subjects that interest me the most are calculus, college level mathematics, and test prep. I have a calm, adaptive tutoring style. I revise my tutoring strategies to fit each student's unique traits and I believe that a calm demeanor causes students to be less stressed, which in... Bonnie: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...concepts that I have taught them. I know that they usually understand or master the topics I teach them, because I informally assess them at the end of the session and their results are at or above the level they are expected to be at. In addition, I use approaches where I relate the subject... Kaitlin: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...In college, I earned my Bachelor of Arts degree in Linguistics, while minoring in Arabic and gaining a certificate in Middle Eastern and North African studies. During this period, I started tutoring at my school's academic success center. I truly enjoy helping students understand the content necessary to succeed in their academic endeavors, but I... Adam: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...returning to tutoring during my MBA studies. I struggled throughout high school until finally preparing for the SAT and ACT, so I'm adept with picking up just what you're struggling with - I was there once, too. Now having performed in the upper 10% of all standardized exams I have taken and graduated with high... Tim: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...they are motivated to continue learning new material during their high school and college years. I have been tutoring for years and have thoroughly enjoyed the experience I have gained and the friendships I have made with other students. My biggest priority is meeting the educational needs of my students in any way possible, and... Ash: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...important as being able to work out technical problems, and that all students (regardless of perceived mathematical intelligence) have the ability to do so. I also understand how mathematics functions to gatekeep some students from certain majors and jobs later in life, and it is my hope that I can help provide access to these... Catherine: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...pass on study tips for all of these tests and help students to prepare for the math sections of them. I believe that every student can learn math but that all students learn in different ways. As a trained teacher I have developed many strategies for helping students that have struggled with math in the... Huanran: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...1460/1600 in total. I also got 800 for my SAT2 math test and 740 for my SAT2 chemistry test. I also got a score of 169/170 of quantitive section of GRE. I am currently preparing my MCAT and I am willing to share my study plan to ones who want to know. Zeeshan: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...Sciences portion of the MCAT. My Chemistry, Physics, and Biology knowledge is quite extensive. I have always been fascinated by science and medicine. I have been tutoring for 3 years, and I have experience teaching from 1st grade students all the way up to college kids. I enjoy tutoring because it is a way for... James: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...ACT test prep. Working one on one with students to help them achieve their academic goals has been a deeply rewarding experience for me. I cherish the opportunity to share my skills and experience with my students, and I make their success my highest priority. I believe that with hard work and the proper guidance... Nick: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...analogy or picture might work for one student, and completely bewilder another. I think the key to helping students trying to explain something in various ways until I find an explanation that clicks for that particular student. I also strongly believe in drawing pictures to explain concepts and to help visualize problems. I've even developed... Alec: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...in these subjects and tests seem initially very difficult, they become much less intimidating if you have the confidence and perspective to look at the problems from the right angle. They don't have to be so challenging! I like to connect on students' levels and help them to find that confidence. In my free time... Roberto : Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...fun to the subjects. I have tutored many in these 2 subjects from elementary to college students when I was in high school. Everyone has the capacity to learn, everyone just needs a different rode to get there. I like to help to show many different ways to accomplish learning these subjects. My way is... Tong: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra As a self learner, I understand different struggles that students like myself might come across. I have a great passion for math and would love to share the tips and tricks that I have discovered that are application to not only the liberal arts but also in non academic activities as well. Alain: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...in Business Economics and Minored in Mathematics. I began tutoring because I love to teach. My philosophy is that if you can't explain a complex idea to someone who is unfamiliar with it then you do not fully understand it. I plan to attend a graduate program in Mathematical Game Theory by the fall of... Steppan: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...University of Maryland, College Park. I tutor math at all levels and mostly focus on math, but can also cover theoretical CS, programming in Java & C, and basic physics. I've tutored students in high school through the math honor society, and continued tutoring in college through the Office of Multi-Ethnic Student Education. My long... Emily: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...each student's preferences in order to help them as much as I can. In my free time, I like to play ice hockey (or just skate), knit, do yoga, and read, especially epic fantasy (I may or may not have all the Lord of the Rings books memorized). I also enjoy experimenting and developing gluten... Thomas: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...Indonesia learn engineering concepts and fundamentals using LEGOs. I enjoy tutoring students in all levels of mathematics and physics. As an engineer, I understand the need for young students to be scientifically literate and capable. I love helping struggling students understand mathematical concepts that they didn't believe they were capable of learning. In my free... William: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...and SAT II. I also took 5 AP Exams as a high school student, receiving 5s in Calculus BS, statistics, computer science, biology, and chemistry. At my university, I received an honors scholarship for my academic performance in engineering, and I currently aspire to continue my education at a higher level. As a tutor, I... Jaime: Canberra Linear Algebra tutor Certified Linear Algebra Tutor in Canberra ...role of a Teaching Assistant at Vanderbilt University. These positions have allowed me to improve my ability to articulate concepts to other students. During my time in high school, I took numerous AP examinations, PSAT, SAT and ACT. I also participated in test prep courses which gave me insight into standardized test taking techniques for... Private Online Linear Algebra Tutoring in Canberra Our interview process, stringent qualifications, and background screening ensure that only the best Linear Algebra tutors in Canberra work with Varsity Tutors. To assure a successful experience, you're paired with one of these qualified tutors by an expert director - and we stand behind that match with our money-back guarantee. Receive personally tailored Linear Algebra lessons from exceptional tutors in a one-on-one setting. We help you connect with online tutoring that offers flexible scheduling. Canberra Linear Algebra Tutoring FAQ You can trust Varsity Tutors to assist you as you search for a skilled linear algebra tutor in Canberra who can accommodate your learning and scheduling needs. Whether you're learning about vectors and spaces, alternate coordinate systems, or determinants, a personal mentor can be a great academic resource to help you as you reach for your educational goals. Here, we will explore the advantages offered by private Canberra linear algebra tutoring. Whether you're a student taking linear algebra courses at a local school like the University of Canberra or you're attending lessons elsewhere, it can be difficult to get the help that you need during or after your classes. However, a linear algebra tutor in Canberra can be just what you need to solidify your understanding of the topics that present the most challenges, such as the matrices for solving systems by elimination, Euclidean n-space, or null space and column space. Each Canberra linear algebra tutoring session is one-on-one. This allows your instructor to personalize your sessions in a way that isn't often available in a traditional or group learning environment. They can become familiar with your strengths, weaknesses, goals, interests, and other factors that can play a role in your academic progress. With this information, your Australia linear algebra tutor can develop a study guide that can focus your time and energy on where it can make the most difference. Throughout linear algebra tutoring in Canberra, your instructor can experiment with different teaching styles to identify the techniques that are the most effective in helping you understand the topics at hand. There are many ways and combinations of ways that students can learn. For instance, your linear algebra coach can design colorful flashcards that can help you visualize linear transformation examples. They can lead aural learners through in-depth discussions or roleplay activities to help them process subspaces and the basis for a subspace. Your mentor can introduce you to tricks that can help you commit the properties of determinant to your mind. With so many options, they are sure to find strategies that work for you. Private instruction can even help you build solid study and learning skills that can benefit you throughout your educational career. Your tutor can show you ways that you can more effectively take notes that align with your learning tendencies, such as using a voice recorder for aural students or creating colorful outlines to organize your notes for visual or tactile learners. When we help you find experienced Australia linear algebra tutors, you won't have to stress about making it to your sessions on time. Our Live Learning Platform can easily circumvent any scheduling obstacles that may arise, allowing you to enjoy flexible and convenient lessons that can fit into your life with ease. The Live Learning Platform contains a host of useful features that can increase the personalization of the linear algebra tutoring in Canberra. You can use the video chat capability to interact with your mentor face-to-face as you discuss functions and linear transformations, the Gram-Schmidt process, and cofactor expansion. There is a digital whiteboard that your mentor can use to display information that you may need as you learn about vector arithmetic, change of basis, or complexity analysis. Your private instructor can observe your efforts to solve problems using the shared document editor, which can help them pinpoint areas in which you could use some extra time learning. Our platform records each session you participate in and stores them in a library. You can access these lessons without having to wait for your Canberra linear algebra tutor to be available. If you'd like to connect with a Canberra linear algebra tutor, you don't have to look for someone on your own. Reach out to the educational consultants at Varsity Tutors to get started with a private mentor as quickly as possible. Your Personalized Tutoring Program and Instructor Identify Needs Our knowledgeable directors help you choose your tutor with your learning profile and personality in mind. Customize Learning Your tutor can customize your lessons and present concepts in engaging easy-to-understand-ways. Increased Results You can learn more efficiently and effectively because the teaching style is tailored to you. Online Convenience With the flexibility of online tutoring, your tutor can be arranged to meet at a time that suits you. Top International Cities for Tutoring
{"url":"https://www.varsitytutors.com/canberra-australia/linear_algebra-tutors","timestamp":"2024-11-13T01:36:22Z","content_type":"text/html","content_length":"690118","record_id":"<urn:uuid:c1e0c6f6-00de-48b7-ba89-5a3fc0295ad7>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00422.warc.gz"}
Improved algorithms for burning graph families Shabanijou, Mohammadmasoud In the graph burning problem, the input is an undirected, unweighted, finite, and simple graph. A fire starts at a vertex at each round, and when a particular vertex is burned, all of its adjacent vertices are burned in the next round. We assume that the rounds are synchronous and discrete. At each round, one new fire can start in a new vertex. The goal is to select the vertices at which fires are started so that all vertices are burned as quickly as possible. Finding an optimal burning sequence is known to be NP-hard and the problem remains NP-hard even for simple graph families such as trees or a set of disjoint paths. The best approximation algorithm for general graphs has an approximation factor of 3. In this thesis, we investigate this problem on different graph families of sparse graphs, and in particular, we look at cactus graphs and melon graphs and study algorithms that aim to burn these graphs as quickly as possible. For both graph families, we show that the problem is NP-complete, and provide approximation algorithms with approximation factors smaller than 3. Graph burning problem, Approximation algorithms, NP-completeness, Algorithm Design
{"url":"https://mspace.lib.umanitoba.ca/items/17f59f4f-cc76-4769-9a54-280249e46baa","timestamp":"2024-11-12T22:50:46Z","content_type":"text/html","content_length":"436389","record_id":"<urn:uuid:d6ddd965-0a58-4a86-ba4b-b9bda2a0d653>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00218.warc.gz"}
Cross-Section of Compton Scattering | nuclear-power.com Cross-Section of Compton Scattering Compton Scattering – Cross-Sections The probability of Compton scattering per one interaction with an atom increases linearly with atomic number Z because it depends on the number of electrons available for scattering in the target atom. The Klein-Nishina formula describes the angular distribution of photons scattered from a single free electron: where ε = E[0]/m[e]c^2 and r[0] is the “classical radius of the electron” equal to about 2.8 x 10^-13 cm. The formula gives the probability of scattering a photon into the solid angle element dΩ = 2π sin Θ dΘ when the incident energy is E[0]. The wavelength change in such scattering depends only upon the angle of scattering for a given target particle. Source: hyperphysics.phy-astr.gsu.edu/
{"url":"https://www.nuclear-power.com/nuclear-power/reactor-physics/interaction-radiation-matter/interaction-gamma-radiation-matter/compton-scattering/cross-section-compton-scattering/","timestamp":"2024-11-07T23:15:14Z","content_type":"text/html","content_length":"91257","record_id":"<urn:uuid:6865ef41-7496-4956-9157-78cd36c3abfd>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00574.warc.gz"}
A Mathematical Look at Connecticut’s Geological Environmental The following statement by the Mathematical Science Education Board as recorded in Everybody Counts “Effective teachers are those who can stimulated students to learn mathematics. Educational research offers compelling evidence that students learn mathematics well only when they construct their own mathematical understanding. To understand what they learn, they must enact for themselves verbs that permanent the mathematics curriculum: “examine,” “represent,” “transform,” “solve,” “apply,” “prove,” “communicate.” This happens most readily when presentations, and in other ways takes charge of their own learning. This curriculum may establish such a community of learners in the classroom. The actual mathematics in this curriculum is not as important as the instructional strategies such as communicating through writing or speaking, using manipulative, working in cooperative groups, and alternative forms of assessment. Maps and scale drawings are used in everyday life and mathematics. This curriculum is for students to explore distance and angle measurement and the concept of reading and making maps. Students will be actively engaged in the process of learning as they work in group and individual settings. Students are asked to apply their learning is situations that will require an understanding of the concept of proportionality as it applies to measurement. This method of instruction may be quite different than methods previously experienced by some students. The purpose of this curriculum is to introduce or reaffirm the instructional strategy and classroom practices used throughout this lesson. In other words, it sets the tone for this study of mathematics for the entire curriculum. Please note that many of the activities in these lessons have more than one task. These multiple tasks provide flexibility for the teacher. Although this is an introductory curriculum, you do not need to confine the tasks to the first lesson. You may reaffirm a certain instructional strategy or classroom practice at anytime during the activity. For example, you may wish to complete a cooperative learning activity after a vacation to reestablish the proper atmosphere for learning groups. In this curriculum, students experience the concepts of scale, similarity, proportional reasoning, and basic geometric constructions. They read and construct different kinds of maps and scale drawings, which call for multiple representations of geometric and numerical data. They become familiar with similar figures by observing patterns and making generalizations. They explore the idea of a path, both in the field and on paper, and estimate both linear and angular measurements in the process of creating paths. They bisect angles, copy angles, and construct triangles using a compass. The coherent mathematical idea underlying this curriculum is the study and application of proportional relationships. The tasks required in this curriculum are accessible for all students. In order to have success in this unit, students should have had an introduction to distance and angle measure. The unit will flow more smoothly if the students have worked in cooperative groups, used a compass and protractor, communicated mathematical ideas through writing, speaking, and modeling and used technological tools such as calculators and computers. Some students may have had prior experiences in mathematics that they will find valuable in completing the lessons, such as making conjectures, designing maps, and preparing reports. If a significant number of students have not had these experiences, it may be necessary to take additional time to provide them. Working in groups, students use pattern blocks to create a map of Faulkner’s Island and mark a Wildlife refuge. They indicate the direction and number of units for each move. They look for the best location to locate the Reseate Tern nesting site. Cooperative groups play an important role in this curriculum. Students interact and work in small groups throughout each of the lessons. Team building and working together are important skills students will need and use in life beyond school. For some students, working with others will be a new experience. Some care will need to be taken to help students develop the skill of collaboration and respect of the ideas of others. Working in pairs is often a good introduction to working with others. Later, pairs can join with other pairs to make groups of four. When students are working in groups, sharing and listening to others becomes the key to successful mathematical decision making. Your role should be that of a facilitator. Your job includes careful listening and effective questioning to help students stay on or get themselves back on track. Each student should also have a role in the group such as recorder or materials manager so that he or she becomes responsible for his or her own learning. Student roles should change so that each group member has opportunity to fill each role. Finally, it is important to discuss, either verbally or in writing, how the group has functioned. Questions such as “In what ways did your group work well together?”, “Was everyone in the group given the effectiveness of the group and where improvements need to be made.” Assessment is a part of each lesson. It is important that students know up front the criteria on which they will be assessed, as well as how much time they will have to complete the task. Students should be graded on their products. Due dates should be set and expected to be met. When student work is turned in, it should be assessed on its quality. Students may revise work not meeting acceptable standards. Generally, four categories of evaluation should be used Well Done, Acceptable, Revisions Needed, and Re-Start. You may want to allow students to create their own class assessment. These assessment can serve as an excellent self-assessment tool. There is no set grading system you should use for this curriculum. The philosophy of this curriculum is to have students show their knowledge learned in the lessons by using different types of formal assessment. The following are same examples of grading. 1. Students work in groups, using the mathematics they have learned to make a group presentation. 2. Each student produces a written product to show his or her knowledge of the material. This is the most important part of the assessment of the curriculum. Homework is an important part or this curriculum. The homework assignments are not routine exercises imitating work done in class. Rather, they are activities that may take a number of days to complete. These assignments may be research oriented, project based, reflective and analytical in nature. Homework is design to extend the class work with meaningful mathematics. New and original ideas may be a product of these assignments. Homework is introduced in class, but the investigations require work outside of the classroom. A journal is a written account that a student keeps to record what he or she had learned. Journal entries are conducive to thinking about why something has been done. They can be used to record and summarize key topics studied, the student’s feelings toward mathematics, accomplishments or frustrations in solving a particular problem or studying a particular topic, or any other notes or comments the student wishes to make. Keeping a mathematical journal can be helpful in students’ development of a reflective and introspective point of view. It also encourages students to have a more thoughtful attitude toward written work and should be instrumental in helping students learn more mathematics. Journals are also an excellent way for students to practice and improve their writing A portfolio is a representative sample of a student’s work that is collected over a period of time. The selection of work samples for a portfolio should be done with an eye toward presenting a balanced portrait of a student’s achievements. The pieces of work placed in a portfolio should have more significance than other work a student has done. They are chosen as illustrations of a student’s best work at a particular point in time. Thus, the range of items selected shows a student’s intellectual growth in mathematics over time. You may wish to have all students include the products of group presentation, and written product in their portfolios. Students should also select the products of at least two additional lessons for inclusion. Bear in mind that the actual selection of the items by the students will tell you what pieces of work the students think are significant. In addition, students should reflect upon their selections by explaining why each particular work was chosen. The following examples illustrate topics that would be appropriate for inclusion in a portfolio. a solution to a difficult or non routine problem that shows originality of thought a written report of an individual project or investigation examples of problems or conjectures formulated by the student mathematical art work, charts, or graphs a student’s contribution to a group report a photo or sketch of physical models or manipulative statements on mathematical disposition, such as motivation, curiosity, and self-confidence a first and final draft of a piece of work that shows student growth
{"url":"https://teachersinstitute.yale.edu/curriculum/units/1995/5/95.05.10.x.html","timestamp":"2024-11-08T22:09:31Z","content_type":"text/html","content_length":"47347","record_id":"<urn:uuid:440e3872-7bb3-4937-a501-5144d40e005a>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00560.warc.gz"}
FQxI Article: Editor\'s Choice: Taming Infinity Taming Infinity General relativity and quantum mechanics could be perfectly compatible—as long as you know how to handle infinity, that is. by Anil Ananthaswamy January 10, 2010 Everybody knows that quantum mechanics and general relativity are incompatible—leading to the decades-long search for a theory of quantum gravity that could combine the two. But everybody could well be wrong, according to particle physicist Richard Woodard . He thinks that the mismatch between the two could be nothing more than an illusion, created by the complicated math techniques used in attempts to unite them. Woodard clearly remembers the time he fell in love with particle physics. It was fall, 1977. A self-confessed "hot-shot" who had tackled graduate-level quantum mechanics with ease as an undergraduate at Case Western Reserve University, Ohio, Woodard arrived at Harvard University for graduate studies with thoughts that quantum mechanics was going to get tougher and more complex. And he was ready for it. But then he took his first course in quantum field theory, being taught by Nobel Laureate Steven Weinberg . "It was one of the seminal experiences of my life," says Woodard. "The idea that you could write down the DNA of the universe in just a few statements—a brush stroke on a piece of paper—was just awesome. I recall lying in my bed in my dormitory room, just staring up at the ceiling to look at something blank, because I was overwhelmed by the power of the ideas that were being put into my mind by those lectures." There’s a saying at Harvard that you don’t really understand quantum field theory until you have taken it three times. So, Woodard found himself studying it once more, under Sidney Coleman . Woodard was learning from men who had been instrumental in creating the standard model of particle physics, which seemed to be perfect, explaining everything that was observed in the lab. Well, almost perfect. "That left one big unsolved problem, and that was gravity," says Woodard, now at the University of Florida, Gainsville. Every young theorist wanted to tackle quantum gravity—meshing quantum mechanics with general relativity—but the professors at Harvard heavily discouraged their students from going down that perilous route. But Woodard was sold on it. "We theorists are like mountain climbers. We see a tall mountain, and we just got to climb it." I was overwhelmed by the power of ideas that were being put into my mind. - Richard Woodard So Coleman, his PhD advisor, agreed to let Woodard climb that quantum-gravity mountain with Stanley Deser , who was at nearby Brandeis University. Since then, Woodard has devoted his professional life to quantum gravity. While quantum field theory does a fantastic job of describing electromagnetism, and the strong and weak nuclear forces, it doesn’t work for Einstein’s theory of gravity. No matter how hard you work at applying quantum field theory to gravity, you get the same, dramatically wrong, answer: infinity. "If you take the theory seriously, it says that when I wave my arm, every being in the solar system gets fried by hard gravitational radiation. That’s obviously not true," says Woodard. "So, right now we just have nonsense." Something is clearly going wrong—but where? Almost everyone agrees that quantum mechanics is not the culprit. Most particle theorists think that the problem lies with general relativity. But Woodard thinks they are in danger of throwing the baby out with the bathwater. He argues that just because the calculations don’t work, that doesn’t mean general relativity is wrong, or incompatible with quantum mechanics. And it doesn’t mean that we need to introduce string theory loop quantum gravity , or any other exotic new physical theory. Instead, the blame could lie with the approximate techniques used to carry out quantum field theory calculations. In an ideal world, physicists wouldn’t have to make approximations. But we don’t live in an ideal world. Anyone who has taken high-school physics will remember having to simplify complicated problems in order to have any chance of applying textbook equations—pretending that cars are perfect cuboids sliding along frictionless roads, say. Similarly, physicists are often forced to make approximations as they try to solve equations that describe things that can be measured, such as the total energy of a quantum system. Except for the simplest systems, these equations are impossible to solve exactly. That’s when physicists start approximating—using a technique known as perturbation theory —and the trouble begins. PERTURBING DEVELOPMENTSPhysicists make their best guess when simulating quantum systems on computers. But is the guess good enough for gravity? The trick with perturbation theory is to start with one of the few ideal quantum systems that you can solve completely and then perturb them, so they look approximately like the situation that you actually want to study. For gravity, this is where things get nasty. Specifically, you end up with an answer in the form of a power series expansion or a never-ending string of numbers, each multiplied by higher powers of Newton’s constant for the strength of gravity, , times even powers of the energy or mass, , of the thing being computed (1 + cG^3 E^6 +…, where …are pure numbers). Because times any energy or mass that can be accessed in particle physics is such a small number, just a few terms of the expansion would be wonderfully accurate—if only the pure numbers , and so on were finite. But they aren’t, leading to an infinite answer. This infinity issue isn’t confined to gravity. But for the other forces, physicists have clever ways to sweep these infinities under the carpet and recover meaningful answers. "We get a finite result that is in beautiful agreement with nature," says Woodard. Unfortunately, these don’t work for gravity, so the calculations blow up in your face. "It is quite likely that perturbation theory is giving us misleading results," says Woodard. It’s like figuring out what is in a room behind a locked door. And we don’t even know if there’s a room there. - Roberto Casadio Woodard is scrutinizing perturbation theory’s power series expansion. Could the series expansion for gravity be wrong? Could the terms of the series involve, for example, the logarithm of ? He thinks so. "What I am speculating is that general relativity is the right theory of quantum gravity and that there is a very good series expansion for it," says Woodard. "It just isn’t conventional perturbation theory." The first step for Woodard is to try and calculate the masses of fundamental particles, like electrons, using his alternative series expansion. In the 1960s, Richard Arnowitt , Stanley Deser and Charles Misner showed how to calculate the mass of a particle using classical general relativity. "I’m trying to extend that to quantum physics," says Woodard, who will be using his FQXi grant of over $37,000 towards this work. Woodard is also helped by clues that any corrections to general relativity must either be tiny or else disguised as something that we don’t recognize as a quantum correction. That’s because we do not recognize any large quantum gravitational effects around us. One way to see such effects would be to examine the propagation of light. Imagine bouncing a beam of light off the surface of Alpha Centauri, the nearest star. We can calculate just how long the beam should take to come back to Earth. Assuming we had detectors good enough to measure the reflection, we could test our calculations. And all would be well if spacetime is classical. But in quantum gravity, spacetime is subject to fluctuations. "That signal would traverse the quantum geometry, which is itself fluctuating. Sometimes the fluctuation would be such as to cause that signal to go a little bit faster than in our average geometry," says Woodard. In that case, the reflected beam would arrive earlier than expected. Of course, we’re a long way from testing anything like that right now. But whatever quantum gravitational effects there are, they are virtually undetectable in our current experiments, and that will influence the development of any alternative to conventional perturbation theory, says Woodard. Another physicist who works on quantum gravity, Roberto Casadio of the University of Bologna, Italy, calls Woodard "a brilliant and extremely dedicated scientist." He admires the fact that Woodard doesn’t take the failure to apply quantum field theory to gravity using perturbation theory as a sign that we need any new theories of physics, "a common lore which has led to a (totally uncontrollable) amount of visionary ideas about quantum gravity." Still, quantum gravity remains an enigma, thanks in part to the lack of experimental evidence for it. "It’s like figuring out what is in a room behind a locked door, and we do not even know if there is a room there," says Casadio. "Just because of this, any approach to quantum gravity remains an act of faith."
{"url":"https://qspace.fqxi.org/articles/126","timestamp":"2024-11-11T20:18:22Z","content_type":"text/html","content_length":"61111","record_id":"<urn:uuid:7fa71782-d466-472d-8546-4f522f96dd0d>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00171.warc.gz"}
13.4: When Should You Conduct Post-Hoc Pairwise Comparisons? Last updated Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vectorC}[1]{\textbf{#1}} \) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\ newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y} \) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real} {\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec} [3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array} {r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\ wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\ newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var} {\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\ bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\ widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\) The pairwise comparison calculations for a factorial design are the same as any pairwise comparison after any significant ANOVA. Instead of reviewing them here (because you can review them in the prior two chapters), we are going to discuss when (and why) you would or would not conduct pairwise comparisons in a factorial design. Long Answer We’ll start by refreshing our member on what we’ve done before. Let’s start with t-test, from oh-so long ago! Did we conduct pairwise comparisons when we retained the null hypothesis when comparing two groups with a t-test? Why or why not? We did not conduct pairwise comparisons when the null hypothesis was retained with a t-test. We didn’t need to find which means were difference because the null hypothesis (which we retained) says that all of the means are similar. Did we conduct pairwise comparisons when we rejected the null hypothesis when comparing two groups with a t-test? Why or why not? I know that it was a long time ago, but no, we did not conduct pairwise comparisons with t-tests. Even when we rejected the null hypothesis (which said that the means were similar, so we are saying that they are probably different), we only had two means. The t-test was our “pairwise” comparison. In other words, because there were only two means, so we knew that if the means were statistically different from each other that the bigger one was statistically significantly bigger. What about an ANOVA that compared three groups? To answer these questions, it doesn’t matter if the ANOVA was BG or RM, just that there one was IV with three (or more) groups. Did we conduct pairwise comparisons when we retained the null hypothesis when comparing three groups with an ANOVA? Why or why not? No, we did not conduct pairwise comparisons with ANOVAS with three groups if we retained the null hypothesis. With any retained null hypothesis, we are agreeing that the means are similar, so we wouldn’t spend time looking for any pairs of means that are different. Did we conduct pairwise comparisons when we rejected the null hypothesis when comparing three groups with an ANOVA? Why or why not? Yes, this is when we would conduct pairwise comparisons. The null hypothesis says that all of the means are similar, but when we reject that we are only saying that at least one mean is different from one other mean (one pair of means differs). When we have three or more groups, we need to figure out which means differ from which other means. In other words, a significant ANOVA shows us that at least one of the means is different from at least one other mean, but we don’t know which means are different from which other means. We have to do pairwise mean comparisons to see which means are significantly different from which other means. Finally, on to factorial designs! If you’ve been answering the Exercises as you go, these should be pretty easy. Do we conduct pairwise comparisons when we retain the null hypothesis for main effects in a factorial design? Why or why not? No. When we retain a null hypothesis, we are saying that all of the means are similar. Let’s not waste time looking for a difference when we just said that there wasn’t one. Okay, this one might be a little challenging, so we'll walk through it together. Do we conduct pairwise comparisons when we reject the null hypothesis for main effects in a factorial design? It depends! If we only have two means, we don’t have to conduct pairwise comparisons because (just like with a t-test) rejecting the null hypothesis for the main effect means that we know that the bigger mean is statistically significantly bigger. But if our IV has more than two groups, then we would need to conduct pairwise comparisons (just like an in ANOVA) to find which means are different from which other means. Back to an easier one on null hypotheses and post-hoc tests. Do we conduct pairwise comparisons when we retain the null hypothesis for an interaction in a factorial design? Why or why not? No. The null hypothesis says that all of the means are similar. If we retain the null hypothesis, then we are saying that all of the means are probably similar. Why would we look for a difference between pairs of means that we think are similar? This one should be clear if you understand the reasoning for when we do and do not conduct post-hoc pairwise comparisons. Do we conduct pairwise comparisons when we reject the null hypothesis for an interaction in a factorial design? Why or why not? Yes! The smallest factorial design is a 2x2, which means that we have our means representing the combination of the two IVs. Rejecting the null hypothesis for the interaction says that at least one of those means is different from at least one other mean. We should use pairwise comparisons to find which combination of IV levels has a different mean from which other combination. Short Answer Table \(\PageIndex{1}\)- Short Answer for When to Conduct Post-Hoc Pairwise Comparisons Only Two Groups Three or More Groups or Two or More IVs Retain the Null Hypothesis No- means are similar No- means are similar Reject the Null Hypothesis No- The bigger group is statistically bigger Yes- Find which mean is different from each other mean by comparing each pair of means. Time to practice next!
{"url":"https://stats.libretexts.org/Courses/Taft_College/PSYC_2200%3A_Elementary_Statistics_for_Behavioral_and_Social_Sciences_(Oja)/02%3A_Mean_Differences/13%3A_Factorial_ANOVA_(Two-Way)/13.04%3A_When_Should_You_Conduct_Post-Hoc_Pairwise_Comparisons","timestamp":"2024-11-10T12:22:25Z","content_type":"text/html","content_length":"136210","record_id":"<urn:uuid:2aa39e63-353b-45ec-90c9-2ce7f1af0dea>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00278.warc.gz"}
Nina Perf - MATLAB Central Nina Perf Last seen: ongeveer 2 jaar ago |&nbsp Active since 2021 Followers: 0 Following: 0 of 295.177 25 Questions 0 Answers of 20.184 0 Files of 153.314 0 Problems 0 Solutions Bode diagram for a Butterworth filter Hi, I want help in doing a Bode diagram for a 8th order Butterworth passband with passband between 2 and 12 Hz. Sampling freque... meer dan 2 jaar ago | 2 answers | 0 Help Merge two tables by the column variables Hi, I have the following table tComplete, where there are 50 different Birds, 2 Days (P and O), different Tries, and 200 Featur... meer dan 2 jaar ago | 1 answer | 0 Help iteration for loop and tables vertcat Hi, I have a varible C which is a 45x2 double with the possible combinations of numbers, #, from 1 to 10 (meaning 10 birds). P... meer dan 2 jaar ago | 2 answers | 0 Help with Table and Row indexing Hi, I have the following table Predictions, where there are 2 Days, 2 different Time points, for 2 different Birds, 3 Tries per... meer dan 2 jaar ago | 1 answer | 0 Help with table variables calculation Hi, I have a table 'T' with a column variable named 'rank', which includes categorical numbers: from 1 to 3. I want to che... meer dan 2 jaar ago | 1 answer | 0 Help calculating values from a table into a loop Hi, I have the following table T, where there are 2 different timepoints, for 2 different birds, 3 trials per day. I want to c... meer dan 2 jaar ago | 1 answer | 0 Histogram occurences per class Hi, I want to have the classes count centered at the top of each bar, as a number. Can you please help? figure() h = histogra... meer dan 2 jaar ago | 1 answer | 0 Problem with Cell Array in Table Hi, I have a table T, with cell arrays of doubles. By opening T.Var1{1,1} (selected in the figure above) we get a double with ... meer dan 2 jaar ago | 1 answer | 0 Find consecutive ones and save the contents Hi, indcol1 = ind(:,1); % the indexes indcol2 = ind(:,2); % the array of 1 and 0 Thank you in advance! meer dan 2 jaar ago | 2 answers | 0 Help Segmenting signal processing Hi, I have data. I have segmented the signal in windows and calculated rms. Can you please help in finding multiple minimum val... meer dan 2 jaar ago | 3 answers | 0 Histogram wrong Classes help Hi, I need help representing data with the histogram function. figure() for i = 1:8 subplot(4,2,i), histogram(Data.score... bijna 3 jaar ago | 1 answer | 0 Feature Selection using Correlation-based method Hi, I want to use the Correlation-based Feature Selection to perform feature selection. I tried the https://www.mathworks.com/... bijna 3 jaar ago | 1 answer | 0 Help normalizing data in table Hi, I have a 100x6 Table with 6 variables. I want to center the data so that it has mean 0 and standard deviation 1, using th... bijna 3 jaar ago | 1 answer | 0 Conditions else if - efficiency advice Hi, I want advice on how to make this code more efficient. We have2 groups S and T. For each I want to attribute the aS or a... bijna 3 jaar ago | 1 answer | 0 Convert x axis to seconds Hi, I need help converting the last 2 plots (plot 2 and plot 3) x axis, to the time domain in seconds, given a data signal with... bijna 3 jaar ago | 1 answer | 0 Extracting data cell that match specific strings in different columns Hello! I have a table, T, with the following variable names: Type, Ta, Sub, Day, Try, Data I ask the person for these differe... ongeveer 3 jaar ago | 1 answer | 0 FFT plot in frequency domain, error help Hi, I did a myfft function in order to plot the (PSD,f) of a signal: 10696x1 double [f, ~, ~, psd, ~] = myfft(Data, fs);... ongeveer 3 jaar ago | 1 answer | 0 Converting values to g Hello, I need help converting values of an Accelerometer sensor. It has 12gb so the values are from 0 to 4096. I need to con... ongeveer 3 jaar ago | 1 answer | 0 Units help to plot I want to convert values in a table column to a range of -1 to 1 units. My -1 value is -800 My 0 value is 300 My 1 value i... ongeveer 3 jaar ago | 1 answer | 0 Feature Selection using ReliefF function Hello, I am using the ReliefF feature selection in around 700 features (1 table column is one feature). I am getting an error t... ongeveer 3 jaar ago | 0 answers | 0 Histogram occurences per age problem Hello, I have a table with persons data. I have the ages in one column and I wanted to do an histogram like the one bellow. Ho... ongeveer 3 jaar ago | 1 answer | 0 Using barh to plot top 10 values in Feature Selection Hello, I did Feature Selection of 700 features. I want to bar plot only the top 5 (with highest predictor rank). Can you pl... ongeveer 3 jaar ago | 1 answer | 0 Error Concatenating multiple tables I need to concatenate multiple tables vertically. All tables have the same Variable Names in the columns and they have the same ... meer dan 3 jaar ago | 1 answer | 0
{"url":"https://nl.mathworks.com/matlabcentral/profile/authors/23020446","timestamp":"2024-11-10T01:26:31Z","content_type":"text/html","content_length":"91172","record_id":"<urn:uuid:3ad17d6f-6277-4f0a-8308-46a901883f78>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00015.warc.gz"}
Definition: Refers to the strategy of being a 'frenemy', gaining confidence as a friend only to b█tray someone Part man, part [go█t] (which is a counter-Divine Will symbol), his lower go█t parts were equipped with a phenomenally-sized male endowment. He'd gain your trust as a friend and guide, accompany you out into the d█rk for█st, and then when you were alone with him would take carnal advantage of you. Thus, symbolically gaining your acceptance of a counter-Divine Will basis via the pen█tration symbol, to your very great discomfort and chagrin. His symbolic visual depictions feature a trope of presenting him in his 'nice'-seeming version, but somewhere associated or nearby also presenting him in his 'naughty' version too. This appears to be consistent with their non-overt applications as I've encountered them; personnel implementing this strategy will intersperse their 'friendly' presentments with seemingly non-sensical but very pronounced instances of 'mean' presentments for no apparent reason and with no apparent motivation. They'll then usually seek to 'offset' the latter by presenting various 'perks' and 'l█ve-b█mbs'. If the recipient makes their best effort to forgive the ab█ser, to recognize their True Nature despite their behavior and accept them regardless, they are evidently deemed to have accepted a counter-Divine Will, counter-True Nature basis from the ab█ser and additionally, to have done so merely because they're motivated by obtaining the various 'perks' and benefits presented by them. The organization appears to interpret their position as having no more respectability than is traditionally afforded a literal prost█tute, and precisely the same reason. Derivatives: [p█n p█pe], [go█t], [p█nic], [p█ndemic], the prefix [p█n-] generally, the modern slang expression, 'How do ya like me now?'
{"url":"https://lexicon.divinewillassembly.com/Symbols/Idols/P%E2%96%88n/","timestamp":"2024-11-02T15:37:03Z","content_type":"text/html","content_length":"84043","record_id":"<urn:uuid:3509b6b5-a70e-4c68-bd0d-cf3000063a6d>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00505.warc.gz"}
A Framework for Analysis of Computational Imaging Systems - Comp Photo Lab Project Description Over the last decade, a number of Computational Imaging (CI) systems have been proposed for tasks such as motion deblurring, defocus deblurring and multispectral imaging. These techniques increase the amount of light reaching the sensor via multiplexing and then undo the deleterious effects of multiplexing by appropriate reconstruction algorithms. Given the widespread appeal and the considerable enthusiasm generated by these techniques, a detailed performance analysis of the benefits conferred by this approach is important. Unfortunately, a detailed analysis of CI has proven to be a challenging problem because performance depends equally on three components: (1) the optical multiplexing, (2) the noise characteristics of the sensor, and (3) the reconstruction algorithm which typically uses signal priors. A few recent papers [1,2] have performed analysis taking multiplexing and noise characteristics into account. However, analysis of CI systems under state-of-the-art reconstruction algorithms, most of which exploit signal prior models, has proven to be unwieldy. We present a comprehensive analysis framework incorporating all three components. In order to perform this analysis, we model the signal priors using a Gaussian Mixture Model (GMM). A GMM prior confers two unique characteristics. Firstly, GMM satisfies the universal approximation property which says that any prior density function can be approximated to any fidelity using a GMM with appropriate number of mixtures. Secondly, a GMM prior lends itself to analytical tractability allowing us to derive simple expressions for the ‘minimum mean square error’ (MMSE) which we use as a metric to characterize the performance of CI systems. We use our framework to analyze several previously proposed CI techniques (focal sweep [3,4], flutter shutter [5], parabolic exposure [6], etc.), giving conclusive answer to the question: ‘How much performance gain is due to use of a signal prior and how much is due to multiplexing?’ Our analysis also clearly shows that multiplexing provides significant performance gains above and beyond the gains obtained due to use of signal "A Framework for Analysis of Computational Imaging Systems" K. Mitra, O. Cossairt, A. Veeraraghavan IEEE Pattern Analyis and Machine Intelligence (PAMI), 2014 "Performance Bounds for Computational Imaging" O. Cossairt, M. Gupta, K. Mitra, A. Veeraraghavan Imaging and Applied Optics Technical Papers, OSA, 2013 "Performance Limits for Computational Photography" O. Cossairt, K. Mitra, A. Veeraraghavan International Workshop on Advanced Optical Imaging and Metrology, Springer, 2013 "To Denoise or Deblur: Parameter Optimization for Imaging Systems" K. Mitra, O. Cossairt, A. Veeraraghavan SPIE Electronic Imaging Conference, Jan. 2014 “Performance Bounds for Computational Imaging,” O. Cossairt Computational Optical Sensing and Imaging Conference, June 2013. “When Does Computational Imaging Improve Performance?,” O. Cossairt CVPR Workshop on Computational Cameras and Displays, June 2013. “Compressive Imaging,” A. Veeraraghavan CVPR Workshop on Computational Cameras and Displays, June 2013. “Performance Limits for Computational Photography,” O. Cossairt Fringe Conference, September 2013. Image formation and noise model: Follow the convention adopted by Cossairt et al. [1], we define a conventional camera as an impulse imaging system which measures the desired signal directly (e.g. without blur). CI performance is then compared against the impulse imaging system. Noise is related to the lighting level, scene properties and sensor characteristics. To calculate the photon noise in our experiments, we assume an average scene reflectivity of R=0.5 and sensor quantum efficiency of q=0.5, aperture setting of F/11 and exposure time of t=6 milliseconds. We choose three different example cameras that span the a wide range of consumer imaging devices: 1) a high end SLR camera, 2) a machine vision camera (MVC) and 3) a smartphone camera (SPC). For each of these example camera types, we choose parameters that are typical in the marketplace today: sensor pixel size: δSLR=8 μm for the SLR camera, δMVC=2.5 μm for the MVC, and δSPC=1 μm for the SPC. We also assume a sensor read noise of σr=4e- which is typical for today’s CMOS sensors. The figure shows the relation between light levels and average signal levels for the different camera specifications. Image Simulations for Focal Sweep: Subplots (a) and (b) show the simulation results obtained by focal sweep and impulse imaging for low (J/σr2=0.2) and high (J/σr2=20) photon to read noise ratios. For the low photon to read noise ratio case, application of our GMM prior increases SNR by around 14dB for both focal sweep and impulse imaging. Multiplexing increases SNR by about 8 dB regardless of the use of prior. For the high photon to read noise ratio case, the SNR gains due to both prior and multiplexing decrease. Performance Analysis for Extended Depth Of Field (EDOF) Cameras: Here we plot the SNR gain of various EDOF systems at different photon to read noise ratios (J/σr2). In the extended x-axis, we also show the effective illumination levels (in lux) required to produce the given J/σr2 for the three camera specifications: SLR, MVC and SPC. The EDOF systems that we consider are: cubic phase wavefront coding [9], focal sweep camera [3.4], and the coded aperture designs by Zhou et al. [7] and Levin et al. [8]. Signal priors are used to improve performance for both CI and impulse cameras. Wavefront coding achieves a peak SNR gain of 8.8 dB and an average SNR gain of about 7 dB. Performance Analysis for Motion Deblurring Cameras: In this figure, we study the performance of motion invariant [6], flutter shutter [5] and impulse cameras when image priors are taken into account. Subplot (a) shows the analytic SNR gain (in dB) vs. photon to read noise ratio J/σr2 for the two motion deblurring systems. In the extended x-axis, we also plot the corresponding light levels (in lux) for the three different camera specifications: SLR, MVC and SPC. The motion invariant camera achieves a peak SNR gain of 7.3 dB and an average SNR gain of about 4.5 dB. Subplots (b-c) show the corresponding simulation results. At a low photon to read noise ratio of J/σr2=0.2, motion invariant imaging performs 7.4 dB better than impulse imaging. At the high photon to read noise ratio of J/σr2=20, it is only 1.2 dB better. Optimal exposure setting for motion deblurring: Here we compute the optimal exposure time for a conventional camera with signal priors taken into account. We first fix the exposure setting of the impulse imaging in such a way that the motion blur is less than a pixel. We then analytically compute the expected SNR gain of different exposure settings (PSF kernel lengths) with respect to the impulse imaging system (of PSF kernel length 1) at various light levels, see subplot (a). For light levels less than 150 lux capturing the image with a larger exposure and then deblurring is a better option, whereas, for light levels greater than 150 lux we should always capture impulse image and then denoise. Subplot (b) shows the optimal blur PSF length at different light levels. At light level of 1 lux, the optimal PSF is 23, whereas for light levels greater than or equal to 150 lux the optimal is 1, i.e., the impulse image setting. Subplots (c-e) show the simulated results with different PSF kernel lengths at a few lighting levels. Optimal aperture setting for defocus deblurring: Here we compute the optimal aperture setting for a conventional camera with signal priors taken into account. We fix the aperture size of the impulse imaging system so that the defocus blur is less than a pixel. We then analytically compute the SNR gain of different aperture settings (PSF kernel size) with respect to impulse imaging system of PSF kernel size 1×1 pixels for various light levels, see subplot (a). For light levels less than 400 lux capturing the image with a larger aperture and then deblurring is a better option, whereas, for light levels greater than 400 lux we should capture impulse image and then denoise. In subplot (b) we show the optimal blur PSF size at different light levels. At light level of 1 lux, the optimal PSF is 9×9 pixels, whereas for light levels greater than 400 lux the optimal is 1×1 pixels, i.e., the impulse image setting. Subplots (c-d) show the simulated results with different PSF size at a few lighting levels. Kaushik Mitra and Ashok Veeraraghavan acknowledge support through NSF Grants NSF-IIS: 1116718, NSF-CCF:1117939 and a Samsung GRO grant. [1] O. Cossairt, M. Gupta, and S.K. Nayar, When Does Computational Imaging Improve Performance?, IEEE Transactions on Image Processing, 2012. [2] N. Ratner, Y. Schechner, and F. Goldberg. Optimal multiplexed sensing: bounds, conditions and a graph theory link. Optics Express, 2007. [3] G. Hausler. A method to increase the depth of focus by two step image processing. Optics Communications, 1972. [4] H. Nagahara, S. Kuthirummal, C. Zhou, and S. Nayar. Flexible Depth of Field Photography. In ECCV, 2008. [5] R. Raskar, A. Agrawal, and J. Tumblin. Coded exposure photography: motion deblurring using fluttered shutter.In SIGGRAPH, 2006. [6] A. Levin, P. Sand, T. Cho, F. Durand, and W. Freeman. Motion-invariant photography. In SIGGRAPH, 2008. [7] C. Zhou and S. Nayar. What are good apertures for defocus deblurring? In ICCP, 2009. [8] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Image and depth from a conventional camera with a coded aperture. In SIGGRAPH, 2007. [9] E. R. Dowski and T. W. Cathey. Extended depth of field through wave-front coding. Applied Optics, 1995.
{"url":"https://compphotolab.northwestern.edu/project/a-framework-for-analysis-of-computational-imaging-systems-2/","timestamp":"2024-11-02T17:49:51Z","content_type":"text/html","content_length":"62145","record_id":"<urn:uuid:52082bcd-2229-42b0-b99f-aa282a823ead>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00794.warc.gz"}
Tons of coal equivalent to megawatt hours (TCE to MWh) Tons of coal equivalent to megawatt hours converter on this page calculates how many megawatt hours are in 'X' tons of coal equivalent (where 'X' is the number of tons of coal equivalent to convert to megawatt hours). In order to convert a value from tons of coal equivalent to megawatt hours (from TCE to MWh) just type the number of TCE to be converted to MWh and then click on the 'convert'
{"url":"https://www.conversion-website.com/energy/ton-of-coal-equivalent-to-megawatt-hour.html","timestamp":"2024-11-14T15:45:52Z","content_type":"text/html","content_length":"13489","record_id":"<urn:uuid:77e2b27b-e81f-4a44-86dc-98d42fd07a82>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00595.warc.gz"}
Polyphase filters for interpolation - DSP LOG Polyphase filters for interpolation In typical digital signal processing applications, there arises need to increase the sampling frequency of a signal sequence, where the higher sampling frequency is an integer multiple of the original sampling frequency i.e for a signal sequence $x(n)$ with a sampling frequency $f_s$, change the sampling frequency to $Lf_s$, where $L$ is an integer. Typically, upsampling by a factor of $L$ is done by inserting $L-1$ zeros between the original samples and changing the sampling frequency to $Lf_s$. % original sequence fs = 1000; % sampling frequency of 1kHz x = cos(2*pi*200*[0:1/fs:10]) + j*sin(2*pi*200*[0:1/fs:10]); % complex sinusoidal with frequency 200Hz pwelch(x,[],[],[],fs,’twosided’) % psd plot L = 4; % interpolation factor xU = [x; zeros(L-1,length(x))]; xU = xU(:).’; % upsampled sequence with zero padding pwelch(xU,[],[],[],L*fs,’twosided’) % spectrum of the upsampled sequence As can be seen from the second figure, the process of inserting zeros in the time domain caused the original spectrum to get repeated i.e. the original sinusoidal present in 200Hz is now present in 200Hz, 1kHz + 200Hz, 2kHz + 200Hz etc…If it is desired to remove the ‘extra aliases of the original spectrum’, we can do so by having a low pass filter following the upsampled sequence. % lowpass filtering by sinc() filter h = sinc([-20:20]/L); y = conv(xU,h); pwelch(y,[],[],[],L*fs,’twosided’) % spectrum of the filtered upsampled sequence With a simple sinc() shaped low pass filter, we cut down the aliases of the orignal spectrum to be 40dB down the desired spectrum. Having removed the aliases the original spectrum by the low pass filter, comes the important question: Is it possible to reduce the hardware complexity of the low pass filter implementation considering that the input sequence has $L-1$ zeros in between the samples? The answer is YES. To understand how, let us first replace convolution operation by a matrix multiplication where the input sequence is changed to toeplitz form (see previous post). xUM = toeplitz([xU(1) zeros(1,size(h,2)-1) ], [xU zeros(1,size(h,2)-1) ]); y1 = h*xUM; % mean square error diff = (y1-y)*(y1-y)’/length(y1-y) This typical direct form filter implementation with $N$ coefficients will require $N$ multipliers (in this example $N=41$). It is reasonably obvious that, due to the presence of the zeros in the input sequence, not all of the multiplier outputs are used. At a particaly time instant, the taps seperated by $L$samples alone has non-zero values. Hence let us reshape h, which is now a vector of dimension $[1\ \mbox{x}\ N]$ into a matrix of dimension $[\ L\ \mbox{x}\ \lceil \frac{N}{L} \rceil \ ]$. h = [h zeros(1,L*ceil(length(h)/L) – length(h))]; % padding zeroes hM = reshape(h,L,length(h)/L); (We need to pad zeros if the number of coefficients in h ($N$) is not an integer multiple of $L$). Now, instead of using the toeplitz representation of the upsampled input sequence, let us form the toeplitz representation of the original sequence. xM = toeplitz([x(1) zeros(1,size(hM,2)-1) ], [x zeros(1,size(hM,2)-1) ]); % toeplitz representation The matrix multiplication for implementing convolution can be written as, y2 = hM*xM; The output y2 has$L$columns. Reshaping (i.e reading the first column, second column, third column and so on…) the output y2 for comparison y2 = y2(:).’; % mean square error diff = (y2-y)*(y2-y)’/length(y2-y) As can be observed, the output y2 is identical to the output of the direct form implementation (y, y1) obtained prior. Instead of single filter with$N$ coefficients, we now have $L$filtersets with $\ lceil \frac{N}{L} \rceil$ coefficient each. All the $L$ filtersets are fed the input sequence at the original sampling rate $f_s$. The output from each of the filterset is taken sequentially i.e. output of filtersets #1, #2,…,#$L$ , #1, #2, … and so on, are taken at the higher sampling rate $Lf_s$ . Ofcourse, in hardware we do not need to implement $L$different filtersets. One filterset with$ \lceil \frac{N}{L} \rceil$ taps and dynamically loaded coefficients will do the job. 1. The original implementation required hardware implementation of a filter with $N$taps. When the modified implementation, the hardware implementation of the filter requires only $\lceil \frac{N}{L} \rceil$ taps. 2. The original implementation required the input samples to the filter to be clocked at the higher sampling rate of $Lf_s$. With the modified implementation, the input samples to the filter is clocked at the original sampling rate of $f_s$. However, we need to have additional circuitry to dynamically load the coefficients and latch the output at the upsampled frequency $Lf_s$. These filtersets are called polyphase filters. Details about polyphase filters in sufficient mathematical detail is described in Chapter 9.XX in [1] [1] Digital Signal Processing – Principles, Algorithms and Applications, John G. Proakis, Dimitris G. Manolakis 8 thoughts on “Polyphase filters for interpolation” 1. Also Krishna, diff = (y1-y)*(y1-y)’/length(y1-y) actually be… diff = (y1-y)*(y1-y)’/length(y1) ? Thanks again, 1. @Talib: It really does not matter, no? length(y1-y) and length(y1) reports the same number. Agree? 2. Hi Krishna, Great article on polyphase – question though – could you go through in more detail about EXACTLY how you made this sinc-filter? What is its sampling rate?… Why is ‘L’ an argument? … I understand that you need to low pass filter, and that a sinc in time is a rectangle in frequency, but exactly how did you select its arguments?… 1. @Talib: My replies: 1/ In the code, I assumed a sampling rate of 1kHz 2/ L is the oversampling factor. I used a small matlab code snippet to plot the frequency response octave:12> L = 4 octave:13> h = sinc([-20:20]/L); octave:14> hF = fft(h,1024); octave:15> plot([-512:511]/1024,(abs(fftshift(hF)))); octave:16> xlabel(‘freq, kHz’); ylabel(‘amplitude’); Hope this helps. 3. Dear Mr. Krishna, thank you for the wonderful explanation of polyphase filters. I would like to know how you chose the sinc filter time range [-20:20]/L,which will give a vector of 41 values in time domain? 1. @saira: No special reason, I just wanted to define a filter which provided around 40dB attenuation outside the passband. 4. why after upsampling, as your example L = 4, the magnitude of 200Hz is decreased by 20*log10(1/4), which is about -12dB. Do you know how to explain this case ? 1. @sumo: Nice question. Let me try to answer… With the original sampling frequency of 1000Hz, we were able to ‘see’ frequencies from [-500Hz to +500Hz). Now, with the oversampled frequency of 4000Hz, we now can ‘see’ frequencies from [-2000Hz to 2000Hz). Now as the specrum gets replicated at multiples for sampling frequency, the frequency at 200Hz is replicated at 1200Hz, -800Hz, -1800Hz. So, instead of 1 frequency, we have four frequencies at 1/4th amplitude. Hence the reduction in magnitude by 20*log10(1/4) = -12dB. Do you agree?
{"url":"https://dsplog.com/2007/05/12/polyphase-filters-for-interpolation/","timestamp":"2024-11-01T19:11:48Z","content_type":"text/html","content_length":"111878","record_id":"<urn:uuid:0e1bd886-721a-460d-9a2e-d812534823e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00148.warc.gz"}
Fukaya category There is this observation going back (at least?) to Kapustin and Orlov and later used by Gukov and Witten in describing quantization via the A-model that A-branes are represented not only by Lagrangian submanifolds but more generally by coisotropic submanifolds. Shouldn’t the Fukaya category be described in terms of coisotropic subspaces if that is the relevant category for mirror I would say: Because that’s how it’s defined. But I might not get the subtext of your question? Is there a reason why this article is phrased in terms of Lagrangian (as opposed to coisotropic) branes? added pointer to section 7.5 of • Fernando Marchesano, Intersecting D-brane Models (arXiv:hep-th/0307252) diff, v11, current I see; maybe somebody proposed a corresponding definition of generalized Fukaya categories? But the answer to your question in #2 is clearly: The entry on Fukaya categories speaks about Lagrangian submanifolds because these are by definition their objects – see for instance Auroux 2013, p. 22. Last I looked into it, many years ago, the correct definition of Fukaya categories was still felt to be elusive/unsatisfactory, due to the issue of transversality (also indicated in the Idea-section of the entry). There was the idea that a more satisfactory definition would use derived symplectic geometry, where the transversality issue is automatically dealt with. But I haven’t followed what became of this idea and what the state-of-the-art of Fukaya categories is these days. I have completed and brushed-up some of the bibitems in the list if references and touched the wording in the Idea-section diff, v14, current I see. Sure, I suppose my question was more about how that observation fits all this in practice, whether we a. change the definition of the Fukaya category so that mirror symmetry is still phrased in terms of it(s updated version), or b. we keep the definition like that and then speak of a coisotropic extensions of the Fukaya category. One of the reasons I ask is because I’m wondering if there is a sort of clear way to see how the Fukaya category for the A-model and the category of sheaves for the B-model come about from motivic quantization of the AKSZ sigma-model.
{"url":"https://nforum.ncatlab.org/discussion/9546/fukaya-category/?Focus=116446","timestamp":"2024-11-09T15:58:31Z","content_type":"application/xhtml+xml","content_length":"48046","record_id":"<urn:uuid:46e770b4-d318-41ba-a8ea-c0e76a8d1c8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00874.warc.gz"}
Negative rewards in QLearning Let's assume we're in a room where our agent can move along the xx and yy axis. At each point, he can move up, down, right and left. So our state space can be defined by (x, y) and our actions at each point are given by (up, down, right, left). Let's assume that wherever our agent does an action that will make him hit a wall we will give him a negative reward of -1, and put him back in the state he was before. If he finds in the center of the room a puppet he wins +10 reward. When we update our QValue for a given state/action pair, we are seeing what actions can be done in the new state and computing what is the maximum QValue that is possible to get there, so we can update our Q(s, a) value for our current state/action. What this means is that if we have a goal state in the point (10, 10), all states around it will have a QValue a bit smaller and smaller as they get farther. Now, in relationship to the walls, it seems to be the same is not true. When the agent hits a wall(let's assume he's in the position (0, 0) and did the action UP), he will receive for that state/action a reward of -1, thus getting a QValue of -1. Now, if I am in the state (0, 1), and assuming all the other actions of state (0,0 0) are zero when calculating the QValue of (0, 1) for the action LEFT, it will compute it the following way: Q([0,1], LEFT) = 0 + gamma * (max { 0, 0, 0, -1 } ) = 0 + 0 = 0 This is, having hit the wall doesn't propagate to nearby states, contrary to what happens when you have positive reward states. In my optic, this seems odd. At first, I thought finding state/action pairs giving negative rewards would be learning wise as good as positive rewards, but from the example, I have shown above, that statement doesn't seem to hold. There seems to be a bias in the algorithm for taking far more into consideration positive rewards than negative ones. Is this the expected behavior of QLearning? Shouldn't bad rewards be just as important as positive ones? What are "workarounds" for this? You can withdraw negative awards by increasing the default reward from 0 to 1, the goal reward from 10 to 11, and the penalty from -1 to 0. There are many scientific publications on Q-learning, so I'm sure there are other formulations that would allow for negative feedback. The reason for your observation is that you have no guesswork on the outcome of your actions or the state it is in, therefore your agent can always choose the action it considers that it has an optimal reward (thus, the max Q-value overall future actions). This is the reason why your negative feedback doesn't propagate: the agent will simply avoid that action in the future. If your model contains doubt over the outcome over your actions (for instance: there is always a 10% possibility of moving in a random direction), your learning rule should integrate over all feasible future rewards (basically replacing the max by a weighted sum). In that case, negative feedback can be propagated too. If you wish to learn about Q-Learning then visit this Machine Learning Course.
{"url":"https://intellipaat.com/community/15723/negative-rewards-in-qlearning","timestamp":"2024-11-12T13:32:02Z","content_type":"text/html","content_length":"102118","record_id":"<urn:uuid:1e0df62c-cfd2-4a36-8883-cef835ee8588>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00334.warc.gz"}
motor speed with flywheel energy storage Flywheel energy storage OverviewApplicationsMain componentsPhysical characteristicsComparison to electric batteriesSee alsoFurther readingExternal links In the 1950s, flywheel-powered buses, known as gyrobuses, were used in Yverdon (Switzerland) and Ghent (Belgium) and there is ongoing research to make flywheel systems that are smaller, lighter, cheaper and have a greater capacity. It is hoped that flywheel systems can replace conventional chemical batteries for mobile applications, such as for electric vehicles. Proposed flywhe
{"url":"https://aces-bornholm.eu/motor+speed+with+flywheel+energy+storage-39707/","timestamp":"2024-11-12T00:26:29Z","content_type":"text/html","content_length":"41246","record_id":"<urn:uuid:4fb9fb25-c8da-4421-80c6-bb9168c3df46>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00469.warc.gz"}
matematicasVisuales | Resources: Building polyhedra using Zome Resources: Building polyhedra with Zome Golden Section From Euclid's definition of the division of a segment into its extreme and mean ratio we introduce a property of golden rectangles and we deduce the equation and the value of the golden ratio. Octahedron and Icosahedron The twelve vertices of an icosahedron lie in three golden rectangles. Then we can calculate the volume of an icosahedron Icosahedron and octahedron, one inside each other, with Zome: . Some properties of this platonic solid and how it is related to the golden ratio. Constructing dodecahedra using different techniques. Dodecahedron and Icosahedron are dual polyhedra Five tetrahedra inside a dodecahedron Cube inside a dodecahedron Dodecahedron volume One eighth of a regular dodecahedon of edge 2 has the same volume as a dodecahedron of edge 1. A cuboctahedron is an Archimedean solid. It can be seen as made by cutting off the corners of a cube. More polyhedra with Zome
{"url":"http://matematicasvisuales.com/english/html/geometry/resources/zome.html","timestamp":"2024-11-05T10:01:50Z","content_type":"text/html","content_length":"31868","record_id":"<urn:uuid:53f5709a-0b54-444e-9d8d-ab53d43cc462>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00362.warc.gz"}
Identification of structure and directional distribution of vibration transferred to car-body from road roughness The article presents results of the research on identification of structure and directional distribution of vibration transferred to car-body from road roughness. It is the case study of multiple sources of vibration interacting on vehicle and vibration transfer into driver and passengers. During the research the passenger car was driven on special test track. It were recorded the vibration signals in 3 orthogonal axes. The sensors were mounted at the floor panel in locations where the vibration are transferred into the human organism. For the purpose of analysis of vibration transfer in term of human perception it is necessary to correlate the vibration energy, frequency and time of exposition. It allows to evaluate exposure on vibration in frequency bands close to natural frequency of chosen human organs. The analysis of time-frequency distribution of the vibration allow to separate the main components of the signal. The paper presents the results of comparison of RMS value of vibration for different axes in measurement points on floor panel. 1. Introduction Vibration is the mechanical phenomena caused by machines in operation. Generally the vibration is undesirable, wasting energy and creating unwanted effects. The vehicle vibration are one of the most important unwanted effects. It causes decrease of safety and comfort factors and increase of fuel consumption. The vibration problems are very important for vehicle dynamics. It has to be taken into consideration, starting from modeling, designing through production to service and diagnostics of car vehicles [1-7]. The main areas of the author’s interest under the past studies undertaken include an assessment of vibration damping from the perspective of safety and comfort. Furthermore, the author conducted a series of studies pertaining to identification of other vibration sources occurring in vehicles, such as the engine and the power transmission systems [8, 9]. The range of impacts vibrations exposure on a vehicle driver is very broad, starting from the feeling of discomfort to safety hazards caused by vibrations at resonant frequencies of specific organs, thus affecting the driver’s responses. Therefore, it is important to study the paths of vibration propagation from their sources into the human organism and to assess the vibration exposure for different input function conditions [10-12]. The studies discussed in papers [8, 13] illustrate the outcomes of the influence of input parameters on the distribution of the vibrations being generated as well as their propagation. The paper presents some results of identification of structure and directional distribution of vibration transferred to car-body from road roughness. It is the case of multiple sources of vibration interacting on vehicle and transfer into driver and passengers. In order to examine vibration related phenomena occurring in a moving vehicle or a stationary one with its engine in operation, one should start with identification of vibration sources. The sources of vibration in a vehicle are dynamic forces but also free vibrations as well as forced, self-induced, parametrical, non-parametrical, random and stationary ones, all generated by the driving unit, the power transmission system and the road. The large scope of the vehicle vibration determinants include materials, services and construction (frame) production and repairs [14-17]. 2. Biodynamical effects of drivers modeling – state of the art The scientific problems of vehicle’s vibration in many aspects, especially in term of biodynamic response of the human body to the whole-body vibration, are main goal of many years investigation of Professor Michael Griffin from Southampton. Number of biodynamical models, vibration transmissibility concepts, and human biodynamic responses are considered in [18]. His research show the goal and way of investigation for many researchers [19-23]. The dynamics model of vehicle should permit for analysis of response function of the vehicle or human (occupant) on chosen excitation. The state of the art shows many publications on different approach to vehicle dynamics modeling. The paper [21] presents the three degrees of freedom (3-DOF) Human–Vehicle–Road (HVR) model, comprising a quarter-car and a biomechanical representation of the driver. The model used the Kelvin element as viscoelastic representation for modeling vehicle suspension systems and human muscular–skeletal structures. Differential equations are provided to describe the motions of various masses under the influence of a harmonic road excitation. The paper [21] formulates the optimization problem in terms of the requirements stipulated by ISO 2631 standards and utilizes a quarter-car model coupled with the biodynamical model of the driver. The model has been depicted in Table 1 in which ${M}_{3}$ denotes the driver’s mass, ${M}_{2}$ stands for the mass of the vehicle body, and ${M}_{1}$ signifies the unsprung masses of the suspension. The model has been excited by a ground vertical motion, $u\left(t\right)=A{e}^{j\omega t}$, with an amplitude A and a frequency $\omega$. The ${z}_{i}$ represents time-depending deflection. The ${C}_{i}$ are the viscous damping coefficients and ${K}_{i}$ are spring rates. Table 13-DOF HVR model and equations of the motion [21] The differential equations of the motion for the 3-DOF are given by: stackrel{˙}{u}\left(t\right)+{K}_{1}u\left(t\right),\\ {M}_{2}{\stackrel{¨}{z}}_{2}+{C}_{2}{\stackrel{˙}{z}}_{2}+{K}_{2}{z}_{2}-{C}_{2}{\stackrel{˙}{z}}_{1}-{K}_{2}{z}_{1}-{C}_{3}{\stackrel{˙}{z}}_ {3}+{C}_{3}{\stackrel{˙}{z}}_{2}-{K}_{3}{z}_{3}+{K}_{3}{z}_{2}=0,\\ {M}_{3}{\stackrel{¨}{z}}_{3}+{C}_{3}{\stackrel{˙}{z}}_{3}+{K}_{3}{z}_{3}-{C}_{3}{\stackrel{˙}{z}}_{2}-{K}_{3}{z}_{2}.\end{array}\ The equations can be expressed in a matrix form (detailed solution has been presented in [21]) it allows to obtain the expressions for the motions and accelerations of the masses ${M}_{1}$, ${M}_{2}$ and ${M}_{3}$ as equations as follows: ${a}_{1}\left(t\right)=-{\omega }^{2}\frac{\left({K}_{1}+j{C}_{1}\omega \right)\left({\omega }^{2}{K}_{3}{M}_{2}-j\omega {C}_{2}{K}_{3}+{\omega }^{2}{K}_{3}{M}_{3}+{\omega }^{2}{K}_{2}{M}_{3}-j\omega {K}_{2}{C}_{3}-{K}_{2}{K}_{3}\right)}{\delta }A{e}^{j\omega t}$ $-{\omega }^{2}\frac{\left({K}_{1}+j{C}_{1}\omega \right)\left(-{\omega }^{4}{M}_{2}{M}_{3}+j{\omega }^{3}{C}_{3}{M}_{2}+j{\omega }^{3}{C}_{2}{M}_{3}+{\omega }^{2}{C}_{2}{C}_{3}+j{\omega }^{3}{C}_{3} {M}_{3}\right)}{\delta }A{e}^{j\omega t},$ ${a}_{2}\left(t\right)=-{\omega }^{2}\frac{\left({K}_{1}+j{C}_{1}\omega \right)\left({K}_{2}+j\omega {C}_{2}\right)\left({\omega }^{2}{M}_{3}-j\omega {C}_{3}-{K}_{2}\right)}{\delta }A{e}^{j\omega ${a}_{3}\left(t\right)={\omega }^{2}\frac{\left({K}_{1}+j{C}_{1}\omega \right)\left({K}_{2}+j\omega {C}_{2}\right)\left({K}_{3}+j\omega {C}_{3}\right)}{\delta }A{e}^{j\omega t}.$ A generalized nonlinear two-degrees-of-freedom (2-DOF) model has been formulated in [22] for the dynamic analysis of suspension seats with passive, semi-active and active dampers (Table 2). The model incorporates Coulomb friction ${F}_{f}$ due to suspension linkages and bushings, forces arising from interactions with the elastic limit stops, a linear suspension spring and nonlinear damping force for passive, semi-active and active dampers, while the contribution due to biodynamics of the human operator is considered to be negligible. The model masses ${m}_{c}$and ${m}_{ss}$ represent the masses due to occupant upon neglecting its biodynamic interactions and the seat, respectively. The cushion is characterized by linear stiffness ${K}_{c}$ and viscous damping coefficient ${C}_{c}$. The suspension is represented by its linear stiffness ${K}_{ss}$, a clearance spring ${K}_{st}$, dry friction force (Columb) ${F}_{f}$ and a viscous damping coefficient ${C}_{ss}$ in the case of a passive suspension seat. The ${z}_{c}$ and ${z}_{ss}$ represent the vertical movement of the occupant mass ${m}_{c}$ and the suspension seat mass, respectively. The suspension force ${F}_{d}$ may be either ${F}_{a}$ for the active. The forces due to the passive components of the suspension are derived from the algorithm where ${z}_{sp}$ represents the displacement excitation at the base of the Table 2Analytical 2-DOF model of the seat suspension and equations of the motion [22] The equations of the motion for the 2-DOF suspension seat are given by: New approach to system modeling based on possibilities of Finite Element or Neural Network methods allows to develop models dedicated to realize specific functions [24, 25]. 3. Research method The validity of the analytical models have to be examined as comparison to the results obtained on real object. The research studies discussed in the article was conducted on an real object. The passenger car was driven on special test track, without any turns. The profile of the test track, as the road roughness, was set as concrete slab connected every next 5 meters. It was prepared as simulation of driving shock impulse. Vibration signals were measured on the floor panel at 4 points of location. In order to refer the obtained results to the analysis of the passenger exposure to vibrations, the measuring points were arranged at locations where the vibration are transferred into the human organism, i.e. where feet rested. Fig. 1Research and testing diagram and location of the vibration sensors The measurement chain is consisted of the ADXL piezoelectric sensors, a measuring unit, the μDAQ USB-30A16 data acquisition card and a computer featuring the software. The testing diagram has been depicted in Fig. 1. 4. Research results For the purpose of proper identification of vibration transferred to car-body from road roughness it is necessary to provide measurement in multiple points located on vehicle construction. Human vibration perception depends on area and place of contact of human organism and vibrating machine. It depends also on dynamics of the vibration and exposure time. The identification of vibration transfer from road roughness to car-body in driving car was performed as the analysis of transformation of signals in time and frequency domains. This method of vibration signal processing allows to observe changes of the energy in selected frequency bands and correlate it with time. The Figure 2 shows example of identification of vibration increase caused by impulse of road set off. The analysis of time-frequency distribution of the vibration allow to exactly separate the related component of the signal. The speed of the vehicle was set as constant, during the research, so the vibration generated from the engine and powertrain was assumed as constant. Fig. 2Identification of structure of vibration on front left floor panel (under the driver feet) Fig. 3Vibration of front left floor panel (under the driver feet) in 3 orthogonal axes Fig. 4Vibration of front right floor panel (under the front passenger feet) in 3 orthogonal axes Fig. 5Vibration of rear left floor panel (under the left rear passenger feet) in 3 orthogonal axes Fig. 6Vibration of rear right floor panel (under the right rear passenger feet) in 3 orthogonal axes Human perception of vibration also depends on the direction of the exposition. The exposure to vibration has to be consider in 3 orthogonal axes: $X$ – horizontally along the vehicle axis, $Y$ – horizontally crosswise the vehicle axis, and $Z$ – vertically and perpendicularly towards plane $XY$. To analyse the dynamics of the vibration the Fast Fourier Transformation has been proceed on the signals. It allows to evaluate the main frequency components of the vibration. Time and frequency realization of the vibration registered under the passengers feet in 3 axes have been illustrated in Figures 3-6. During the propagation of vibration waves in solid structure in theory the dispersion of the energy can be observe. For structures with combined shapes and profiles sometimes some increase of vibration can be observe because of multi natural frequencies for different parts of the structure. One of the most popular global estimators of energy of the vibration is RMS (Root Mean Square). The comparison of RMS value of vibration for different axes in measurement points on floor panel have been depicted in Fig. 7. Fig. 7Comparison of RMS vibration of floor panel under the feet of passengers: 1 – X axis, 2 – Y axis, 3 – Z axis Fig. 8Distribution of acceleration of vibration in X axis direction (horizontally along the vehicle) – floor panel under the feet of passengers Fig. 9Distribution of acceleration of vibration in Y axis direction (horizontally crosswise the vehicle) – floor panel under the feet of passengers Fig. 10Distribution of acceleration of vibration in vertical direction (Z axis) – floor panel under the feet of passengers For the purpose of analysis of vibration transfer in term of human perception it is necessary to correlate the vibration energy, frequency and time of exposition. It allows to evaluate exposure on vibration in frequency bands close to natural frequency of chosen human organs. There are many methods for multidimensional transformation of the signal. The paper presents the results of the time-frequency distribution of the acceleration of vibration obtained by the Short Time Fourier Transformation. It shows the main components of the energy of the vibration and frequency bands of exposure [26-28]. The Figures 8-10 show time-frequency structure of the vibration transferred to passengers via the feet in 3 orthogonal axes. This presentation of the vibration enables identification of the propagation of the vibration of driving car and calculation of energy changing for chosen frequency bands. 5. Conclusions The proper validate analytical model of vehicle dynamics allows simulations many driving conditions and response functions. One of the most popular application of the model is analysis of different control system for isolation from vibration of road roughness by spring and damping system of the suspension. For the comfort of the passangers the perception of vibration is very important. The goal of prevencion from vibration in passanger cabin is very difficult to reach. The isolation of chosen frequency bands of the vibration can be much easier. It is fundamental to properly identyfy the vibration transfer to car-body. All kind of vibration sources have to be consider as generators during car driving. The results of the research show that structures of the vibration are different for the directions and localization in the vehicle structure. It has to be take into consideration the various kinds and production technologies of means of transport [29, 30]. To complete the research on vibration transfer to car-body by driving car there have to be conducted many more research on different vibration generators or driving speed. The simultaneous analysis of distribution of RMS and time, frequency functions of vibration for different axes in measurement points on floor panel allow to compare propagation of the vibration. It have been observed that most of the energy of the vibration under the passengers feet are transferred in $Z$ axis direction (vertical). The highest vibration were registered under the feet of rear right passenger. The time-frequency structure of vibration of the floor panel allow identification of the main components of the vibration in terms of dynamics (frequency) and time of exposition. It can be observed that local maximum values of vibration for some frequencies are excited in limited time periods. The results show how many conditions have to be taken into consideration for vehicle dynamics model. The diffrences in vibration signals on floor panel in different location and direction (3 orthogonal axes) requires more complicated multibody model. • Bubulis A., Reizina G., Korobko E., Bilyk V., Efremov V. Controllable vibro-protective system for the driver seat of a multi-axis vehicle. Journal of Vibroengineering, Vol. 13, Issue 3, 2011, p. • Ragulskis K., Kanapeckas K., Jonušas R., Juzėnas K. Vibrations generator with a motion converter based on permanent magnet interaction. Journal of Vibroengineering, Vol. 12, Issue 1, 2010, p. • Grządziela A. Modelling of propeller shaft dynamics at pulse loadon structure and directional distribution of vibration generated by engine in the location where vibrations penetrate the human organism. Polish Maritime Research, Vol. 15, Issue 4, 2008, p. 52-58. • Tuma J., Šimek J., Škuta J., Los J. Active vibrations control of journal bearings with the use of piezoactuators. Mechanical Systems and Signal Processing, Vol. 36, Issue 2, 2013, p. 618-629. • Lozia Z. Truck front wheels and axle beam vibrations. 5th Mini Conference on Vehicle System Dynamics, Identification and Anomalies, Budapest, Hungary, 1996. • Lozia Z. A two-dimensional model of the interaction between a pneumatic tire and an even and uneven road surface. Vehicle System Dynamics, Vol. 17, 1988, p. 227-238. • Michalski R., Wierzbicki S. An analysis of degradation of vehicles in operation. Maintenance and Reliability, Vol. 1, Issue 3, 2008, p. 30-32. • Burdzik R. Material vibration propagation in floor pan. Archives of Materials Science and Engineering, Vol. 59, Issue 1, 2013, p. 22-27. • Burdzik R. Monitoring system of vibration propagation in vehicles and method of analysing vibration modes. Springer, Heidelberg, 2012, p. 406-413. • Paddan G., Griffin M. Evaluation of whole-body vibration in vehicles. Journal of Sound and Vibration, Vol. 253, Issue 1, 2002, p. 195-213. • Korzeb J., Nader M., Rózowicz J. Review and estimation of traffic generated vibration developed in proximity of Warsaw subway line. 12th International Congress on Sound and Vibration, 2005, p. • Rózowicz J., Nader M., Korzeb J. Traffic generated vibration impact on buildings. 12th International Congress on Sound and Vibration, 2005, p. 1594-1600. • Burdzik R., Folęga P., Łazarz B., Stanik Z., Warczek J. Analysis of the impact of surface layer parameters on wear intensity of frictional couples. Archives of Materials and Metallurgy, Vol. 57, Issue 4, 2012, p. 987-993. • Blacha L., Siwiec G., Oleksiak B. Loss of aluminium during the process of Ti-Al-V alloy smelting in a vacuum induction melting (VIM) furnace. Metalurgija, Vol. 52, Issue 3, 2013, p. 301-304. • Węgrzyn T., Piwnik J., Burdzik R., Wojnar G., Hadryś A. New welding technologies for car body frame welding. Archives of Materials Science and Engineering, Vol. 58, Issue 2, 2012, p. 245-249. • Węgrzyn T., Piwnik J., Łazarz B., Hadryś D. Main micro-jet cooling gases for steel welding. Archives of Materials and Metallurgy, Vol. 58, Issue 2, 2013, p. 551-553. • Lisiecki A. Diode laser welding of high yield steel. Proceedings of SPIE, Vol. 8703, 2012. • Griffin M. J. Biodynamic response to whole-body vibration. The Shock and Vibration Digest, Vol. 13, Issue 3, 1981. • Thompson A. G., Pearce C. E. M. RMS values for force, stroke and tyre deflection in a quarter-car model active suspension. Vehicle System Dynamics, Vol. 39, 2002, p. 57-75. • Kuznetsov A., Mammadov M., Sultan I. A., Hajilarov E. Vibration analysis optimization of parameters of the two mass model based on Kelvin elements. Proceedings of the Eighth IEEE International Conference on Control and Automation, China, Xiamen, 2010, p. 1326-1332. • Kuznetsov A., Mammadov M., Sultan I., Hajilarov E. Optimization of a quarter-car suspension model coupled with the driver biomechanical effects. Journal of Sound and Vibration, Vol. 330, 2011, p. • Bouazara M., Richard M. J., Rakheja S. Safety and comfort analysis of a 3-D vehicle model with optimal non-linear active seat suspension. Journal of Terramechanics, Vol. 43, 2006, p. 97-118. • Kardas-Cinal E., Droździel J., Sowiński B. Simulation study of a relation between derailment coefficient and track condition. Archives of Transport, Vol. 21, Issue 1-2, 2009, p. 85-98. • Papalukopoulos C., Giadopulos D., Natsiavas S. Dynamics of large scale vehicle models coupled with driver biodynamic models. Proceedings of the fifth GRACM International Congresson Computational Mechanics, Limassol, 2005. • Zheng J., Suzuki K., Fujita M. Car-following behavior with instantaneous driver–vehicle reaction delay: A neural-network-based methodology. Transportation Research Part C, Vol. 36, 2013, p. • Figlus T., Wilk A., Madej H., Łazarz B. Investigation of gearbox vibroactivity with the use of vibration and acoustic pressure start-up characteristics. Archive of Mechanical Engineering, Vol. 58, Issue 2, 2011, p. 209-221. • Dziurdź J. Transformation of nonstationary signals into pseudostationary signals for the needs of vehicle diagnostics. Acta Physica Polonica A, Vol. 118, Issue 1, 2010, p. 49-53. • Da̧browski Z., Deuszkiewicz P. Designing of high-speed machine shafts of carbon composites with highly nonlinear characteristics. Key Engineering Materials, Vol. 490, 2011, p. 76-82. • Dolecek R., Novak J., Cerny O. Experimental research of harmonic spectrum of currents at traction drive with PMSM. Radioengineering, Vol. 20, Issue 2, 2011, p. 512-515. • Lisiecki A. Welding of titanium alloy by disk laser. Proceedings of SPIE, Laser Technology 2012, Applications of Lasers, Vol. 8703, p. 12. About this article 25 September 2013 21 December 2013 15 February 2014 car-body vibration road roughness vibration propagation Copyright © 2014 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.extrica.com/article/14764","timestamp":"2024-11-05T12:40:18Z","content_type":"text/html","content_length":"144094","record_id":"<urn:uuid:bebdfa02-ed38-4bdc-964a-26b475f84791>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00092.warc.gz"}
BONDS ALLBONDS The rotationally invariant bond order [15] between all pairs of atoms is printed. In this context a bond is defined as the sum of the squares of the density matrix elements connecting any two atoms. For ethane, ethylene, and acetylene, the carbon-carbon bond orders are roughly 1.00, 2.00 and 3.00 respectively. The diagonal terms are the valencies calculated from the atomic terms only and are defined as the sum of the bonds the atom makes with other atoms. In RHF calculations, the total density matrix (alpha plus beta density matrices) is perfectly duodempotent, that is, the square of the density matrix, P, is exactly two times the density matrix, or P*P = 2*P (see Bond Orders), and valencies will be correct. In UHF and non-variationally optimized wavefunctions the calculated valency will be incorrect, the degree of error being proportional to the non-duodempotency of the total density matrix. (In UHF work, the individual alpha and beta density matrices are idempotent, that is, P (alpha)*P(alpha) = 1.0*P(alpha) and P(beta)*P(beta) = 1.0*P(beta), but, in general, the sum of these two matrices is not duodempotent, i.e., (P(alpha)+P(beta))*(P(alpha)+P(beta)) ≠. 2.0*(P(alpha)+P The bonding contributions of all M.O.s in the system are printed immediately before the bonds matrix. The idea of molecular orbital valency was developed by Gopinathan, Siddarth, and Ravimohan [23]. Just as an atomic orbital has a 'valency', so does a molecular orbital. This leads to the following relations: The sum of the bonding contributions of all occupied M.O.s is the same as the sum of all valencies which, in turn is equal to two times the sum of all bonds. The sum of the bonding contributions of all M.O.s is zero. In MOZYME calculations, only bonds of order greater than 0.01 and those that do not involve hydrogen atoms are printed. If ALLBONDS is present, all bonds of order greater than 0.001, including hydrogen atoms, are printed. If LARGE is present, then the Medrano-Bochicchio-Reale population analysis is printed. For each atom, the following quantities are generated: • The non-shared charge (sometimes called the self or inactive charge). • The charge used to form bonds with other atoms (the active charge). • The total charge (the sum of the first two terms). • The valence (from the bonds matrix). • The free valence (the difference of the last two terms). • The statistical promotion (total charge minus core charge). • The "Mulliken promotion" Note that the last two terms are expressed in units of the electron, not the proton charge. For more information, see [24,25,26,27], and also see J. A. Medrano, R. C. Bochicchio, S. G. Das, "The ROHF Extension of the Statistical Population Analysis of Electron and Spin Densities", J. Phys.
{"url":"http://openmopac.net/Manual/bonds.html","timestamp":"2024-11-10T09:17:01Z","content_type":"text/html","content_length":"4689","record_id":"<urn:uuid:d0882b3a-e632-4ebf-a1b2-ad165c6a93a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00475.warc.gz"}
Karush-Kuhn-Tucker Conditions - (Linear Algebra and Differential Equations) - Vocab, Definition, Explanations | Fiveable Karush-Kuhn-Tucker Conditions from class: Linear Algebra and Differential Equations The Karush-Kuhn-Tucker (KKT) conditions are a set of mathematical criteria used to determine the optimal solutions of constrained optimization problems. These conditions extend the method of Lagrange multipliers and provide necessary conditions for a solution in optimization scenarios, particularly in economic and social sciences where resources are limited and choices must be made under congrats on reading the definition of Karush-Kuhn-Tucker Conditions. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. The KKT conditions are applicable in scenarios involving both equality and inequality constraints, making them versatile for many real-world problems. 2. A solution that satisfies the KKT conditions may not always guarantee optimality unless certain regularity conditions, like constraint qualifications, are met. 3. In economic models, the KKT conditions help analyze consumer behavior by determining optimal consumption bundles under budget constraints. 4. The conditions consist of primal feasibility, dual feasibility, complementary slackness, and stationarity, each playing a critical role in identifying optimal points. 5. KKT conditions are widely used in machine learning algorithms, particularly in support vector machines, to optimize classification boundaries. Review Questions • How do the Karush-Kuhn-Tucker conditions enhance the understanding of constrained optimization compared to earlier methods? □ The KKT conditions enhance the understanding of constrained optimization by providing a comprehensive framework that incorporates both inequality and equality constraints. Unlike earlier methods like Lagrange multipliers, which only address equality constraints, KKT extends this concept to handle cases where certain variables can be subject to upper or lower bounds. This ability to manage more complex constraints allows for more practical applications in fields such as economics and social sciences, where resource limitations are common. • Discuss the significance of complementary slackness in the Karush-Kuhn-Tucker conditions and how it relates to optimal solutions. □ Complementary slackness is a key component of the KKT conditions that ensures that if a constraint is not active (i.e., not binding at the solution), then the corresponding multiplier must be zero. This relationship indicates that optimal solutions balance between active constraints (which influence the solution) and inactive ones (which do not). Understanding complementary slackness helps identify which constraints are critical for determining optimal resource allocation, making it essential in economic models where choices must be made under various • Evaluate the impact of regularity conditions on the applicability of the Karush-Kuhn-Tucker conditions in real-world optimization problems. □ Regularity conditions are vital for ensuring that the KKT conditions lead to optimal solutions in practical scenarios. When these conditions are satisfied, they guarantee that any point satisfying KKT will be a local optimum. However, if these conditions fail, it may result in scenarios where KKT does not hold true or leads to suboptimal solutions. This limitation requires practitioners to carefully assess their optimization problems for regularity before relying solely on KKT, especially in complex economic and social contexts where constraints can significantly influence outcomes. © 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/linear-algebra-and-differential-equations/karush-kuhn-tucker-conditions","timestamp":"2024-11-04T17:36:11Z","content_type":"text/html","content_length":"151193","record_id":"<urn:uuid:c334486e-b906-4abd-9101-8603b8919a32>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00324.warc.gz"}
An example of using machine learningAn example of using machine learning Key concepts of machine learning 27 February 2024 ML Linear regression with ChatGPT 27 February 2024 In the future, the construction company's core business process of cost and time estimating will be transformed through the use of machine learning models. This step in automating costing and estimating will not only improve accuracy and efficiency, but will also be the starting point for the development and implementation of machine learning models in other company business processes. The following example will extract key data from past projects, and using this historical data as a basis, a machine learning model will be developed that will provide us with the ability to accurately estimate the cost and time frame for new construction projects. It is important to be able to quickly determine how long a project will take to build and what the total cost of the project will be. These questions about project time and cost have traditionally been at the forefront of the minds of both clients and construction companies since the beginning of the construction industry. As an example, consider three projects with three key attributes: the number of apartment), the number of floors, and a conditional measure of construction complexity on a scale of 1 to 10, where 10 represents the highest complexity (where 100 apartments is equivalent to the number 10 for ease of visualization). In machine learning, the process of converting and simplifying values such as 100 into 10 or 50 into 5 is called "normalization". Normalization in machine learning is the process of bringing different numerical data to a common scale to make it easier to process and analyze. This process is especially important when the data has different scales and units. Let's assume that the first project (Figure 4.5-10) had 50 apartments, 7 floors, and a complexity score of 2, indicating a relatively simple construction. In the second project, we already had 80 apartments, 9 floors, and a relatively complex project. Under these conditions, the first and second apartment building took 270 and 330 days to build, and the total project cost was $4.5 million and $5.8 million, respectively. In building a model for such data, the main task is to identify critical attributes, or labels, for prediction, in this case, construction time and cost. With a small dataset, we will use information about previous construction projects to plan new ones: using machine learning algorithms, our goal is to predict the cost and construction duration of a new project X based on given new project characteristics such as 40 apartments, 4 floors, and the relative high (7) complexity of the project. To create a predictive machine learning model, we need to choose an algorithm to create the model. An algorithm in machine learning is like a recipe that teaches a computer how to make predictions or decisions based on data. To analyze data about past construction projects and predict the time and cost of future projects (Figure 4.5-10), we can use for example one of the popular machine learning • Linear regression: This algorithm tries to find a direct relationship between attributes, for example between the number of floors and the construction cost. The goal is to find a linear equation that best describes this relationship, which allows making predictions. • K-nearest neighbors (k-NN). This algorithm compares a new project with past projects that were similar in size or complexity. The k-NN categorizes the data based on which k training examples are closest to it. In the context of regression, the result is the mean or median of the k nearest neighbors. • Decision Trees: It is a predictive modeling model that divides data into subsets based on different conditions using a tree structure. Each node of the tree represents a condition or question leading to further division of the data, and each leaf represents the final prediction or outcome. The algorithm divides the data into smaller groups based on different characteristics, such as first by the number of floors, then by complexity and so on, to make a prediction. Let's take a look at machine learning algorithms for estimating the cost of a new project, using two popular algorithms as examples: linear regression and K-nearest neighbors.
{"url":"https://datadrivenconstruction.io/2024/02/an-example-of-using-machine-learning-2883/","timestamp":"2024-11-08T18:37:15Z","content_type":"text/html","content_length":"259074","record_id":"<urn:uuid:be076411-b7de-4d61-8c4a-73f4b2802262>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00197.warc.gz"}
Explicit Topology Optimization Design of Stiffened Plate Structures Based on the Moving Morphable Component (MMC) Method 1 Department of Engineering Mechanics, State Key Laboratory of Structural Analysis for Industrial Equipment, International Research Center for Computational Mechanics, Dalian University of Technology, Dalian, 116023, China 2 Ningbo Institute of Dalian University of Technology, Ningbo, 315016, China 3 Beijing Institute of Spacecraft System Engineering, Beijing, 100094, China * Corresponding Authors: Chang Liu. Email: (This article belongs to the Special Issue: New Trends in Structural Optimization) Computer Modeling in Engineering & Sciences 2023, 135(2), 809-838. https://doi.org/10.32604/cmes.2023.023561 Received 02 May 2022; Accepted 21 July 2022; Issue published 27 October 2022 This paper proposes an explicit method for topology optimization of stiffened plate structures. The present work is devoted to simultaneously optimizing stiffeners’ shape, size and layout by seeking the optimal geometry parameters of a series of moving morphable components (MMC). The stiffeners with straight skeletons and the stiffeners with curved skeletons are considered to enhance the modeling and optimization capability of the current approach. All the stiffeners are represented under the Lagrangian-description framework in a fully explicit way, and the adaptive ground structure method, as well as dynamically updated plate/shell elements, is used to obtain optimized designs with more accurate analysis results. Compared with existing works, the proposed approach provides an explicit description of the structure. Thus, a stiffened plate structure with clear stiffener distribution and smooth geometric boundary can be obtained. Several numerical examples provided, including straight and curved stiffeners, hierarchical stiffeners, and a stiffened plate with a cutout, validate the effectiveness and applicability of the proposed approach. Graphic Abstract Plate structures have played an essential role in aerospace, automotive, marine and civil engineering due to their high load-bearing efficiency, lightweight construction and other excellent traits [1 ,2]. These structures, however, are susceptible to deformation, strength, vibration and buckling under use since their thicknesses are much smaller compared to their other dimensions. To improve the load bearing capacity of plate structures, in the past few decades, many methods have been developed to analyze and enhance their strength, stiffness, stability, dynamic performance [3–6], etc., among which the use of stiffener is one of the most efficient and cost-effective methods. This accordingly arouses great interest of researchers in the investigations of optimal design of stiffened plate structures. As an advanced design methodology, structural optimization [7–13], including size, shape and topology optimization, is widely used to solve material distribution problems of stiffened plate structures. For example, Lagaros et al. [14] investigated the optimization design of stiffened shell structures with straight stiffening beams by using evolutionary algorithms. In [15–18], a set of thickness parameters or spacing parameters are chosen as design variables to optimize the size and shape of stiffeners, while the topological form of the stiffeners remains constant during the optimization process. Pavlovčič et al. [19,20] studied the shear strength of the steel plate with trapezoidal stiffeners from numerical and experimental aspects. Kapania et al. [21] studied the optimization results of curvilinear stiffened panels. They found that curvilinear stiffeners may lead to lighter-weight designs than straight stiffeners for certain design cases. After that, Mulani et al. [22,23] proposed a new framework to design curvilinear stiffened panels considering complex, multifunctional and aircraft structural concepts. Wang et al. [24] and Hao et al. [25] developed a novel bilevel optimization strategy based on the hybrid model to optimize the size and layout of stiffened panels with reinforced cutouts. Liu et al. [26,27] suggested a non-parametric shape optimization method for designing the stiffeners on shell structures, in which the stiffness and eigenvalue maximization problems are considered. Moreover, Wang et al. [28,29] introduced an interesting bio-inspired approach for stiffener optimization, where the optimized shape and location of non-uniform curved grid stiffeners can be found in an adaptively evolutionary way. Besides size and shape optimization applied to the stiffener design of plate structures, topology optimization, which can provide more design space and flexibility, is also extensively carried out to optimize stiffened plate structures. Lam et al. [30] suggested an automated optimization method for determining the location of stiffeners from a variable thickness plate iteratively. By introducing a second-ranked microstructure, Ansola et al. [31] introduced a second-ranked microstructure in the stiffened optimization framework and simultaneously optimized the shell’s geometry and the stiffener layout. Afonso et al. [32] developed an integrated computational tool to find the optimal material distribution of variable thickness plates and free form shells, in which topology optimization is performed using both a hybrid algorithm and a homogenization approach. Similarly, Ma et al. [33] established a generative design method based on the homogenization method and an equivalent model to optimize stiffened plates. Sigmund and his co-authors [34] successfully applied the SIMP method combined with a novel computational morphogenesis tool to full-scale aircraft wing design, where the intricate details that appeared spontaneously in the optimization process (e.g., curved spars and local plate structures) could be observed clearly. Recently, Zhang et al. [35] proposed a novel B-spline-based method for structural topology optimization. Based on this framework, Feng et al. [36] effectively utilized B-spline control parameters to characterize the stiffener distribution reinforcing the plate/shell structures. Some other latest investigations conducted on the stiffener optimization of plate structures can be referred to [37–42]. Most of the stiffened structure optimization approaches mentioned above are based on classical implicit solution frameworks that the optimized stiffeners are identified from a black-and-white pixel image, which cannot guarantee that the optimized results are clear stiffeners rather than block-like patterns. Furthermore, the implicit bitmap-like geometric representation may result in a large number of design variables and ambiguous stiffener boundaries. In order to address these problems, some new design methods for stiffened structures based on bionic inspirations and growth simulation have been developed. Mattheck et al. [43] proposed a novel technique based on the swelling function of a commercial finite-element code. Ding et al. [44] developed a growing and branching tree model to generate stiffener layout patterns inspired by natural branching systems. Ji et al. [45] employed a bionic growth approach, which combines a bionic branch model and optimization criteria, to optimize the stiffener layout of the plate/shell structures. Li et al. [46–48] proposed a novel explicit approach to perform topology optimization of stiffened structures via a biologically inspired algorithm and then used it to discover the optimal internal cooling geometries in the heat conduction system. Dong et al. [49] used an adaptive growth method to improve the buckling resistance performance of plate/shell structures by optimizing the stiffener pattern. Unlike implicit topology optimization approaches, these natural growth-based algorithms can obtain explicit stiffener layouts rather than block-like material distributions, making the optimized results more conducive to practical applications. Nevertheless, most of these methods employ smear-out technology/ equivalent stiffness technology for structural response analysis, which is difficult to accurately predict the local mechanical behavior (e.g., local buckling and local stress) of stiffened plate structures. Furthermore, some of these methods are based on pre-defined criteria and lack rigorous sensitivity analysis. To summarize, although numerous approaches have been proposed to optimize stiffened plate structures, there is still some room for further improvement. In the present work, a more effective and practical approach is developed under the moving morphable components (MMC)-based solution framework [50] to solve the problem of explicit topology optimization of stiffened plate structures. In this method, each stiffener is regarded as a structural component with explicit geometric parameters, and the optimal stiffened structure can be obtained by optimizing the shape, size, and layout of these components. Both straight and curved stiffeners are considered in this paper to enhance the geometry modeling and optimization capability. In the present work, the Lagrangian description combined with the adaptive ground structure and dynamically updated plate/shell elements is used for the optimization process, which makes the proposed method capable of obtaining more accurate analysis results and clear stiffened structures. Thanks to the explicit geometric description, the optimized stiffener structure can be directly imported into the CAD/CAE system without resorting to additional post-processing processes. Furthermore, the feature sizes of the stiffeners can also be easily controlled. The remainder of this paper is organized as follows. The topology optimization model of stiffened plate structures based on the MMC method is introduced in Section 2. Then, problem formulation and sensitivity analysis are provided in Section 3. In Section 4, some typical examples are studied to illustrate the effectiveness of the proposed approach. Finally, the main concluding remarks are given in the last section of the paper. 2 Topology Optimization Model of Stiffened Plate Structures Based on the MMC Method 2.1 Geometry Description of Stiffened Plate Structures With the aim of doing topology optimization explicitly and geometrically, the MMC-based solution framework was first proposed by Guo et al. [50]. In the MMC method, some moving morphable components are adopted as the basic structural building blocks for topology optimization, and each component is allowed to move, deform, merge and overlap in the design domain freely. The explicit parameters that describe each component’s geometry and position are used as the topology design variables. In the MMC-based framework, the topology description of structural components can be constructed in both Lagrangian [51] and Eulerian description-based frameworks [52]. In order to achieve high accuracy numerical analysis at relatively low computational efforts, in the present work, both the optimization model and analysis model of stiffened plate structures are described in a pure Lagrangian way. The Lagrangian description can be seamlessly integrated with the adaptive ground structure and adaptive re-meshing technology, which provides a natural advantage for using body-fitted FE meshes to simulate the stiffened plate structures. Detailed aspects will be reported in the following. For a typical stiffened plate structure shown in Fig. 1, the stiffeners and the base plate can be regarded as being made up of a set of stiffener components and a plate component, where the stiffeners perfectly adhere to the base plate. Based on the Lagrangian description way, the profile of each component can be explicitly determined by its geometric parameters. Here we consider the stiffeners with a straight skeleton and a curved skeleton, respectively. Note that both types of components are of constant thickness in the present work. For a cuboid stiffener component (see Fig. 2 ) defined by the thickness t, the height h and a straight skeleton Ps1Ps2¯ (where Ps1=(psx1,psy1,0)⊤ and Ps2=(psx2,psy2,0)⊤ are the two endpoints), the coordinates of any point on the skeleton Cs and the mid-surface S0 of the component can be expressed as where μ∈[0,1] represents a coefficient of convex combination and η∈[0,1] denotes the introduced arc-length coordinate in the height direction. For a curved component with constant thickness as shown in Fig. 3, we can use a quadratic Bezier equation to define the coordinates of an arbitrary point on the curved skeleton Cc and mid-surface S0 where Pc1=(pcx1,pcy1,0)⊤, Pc2=(pcx2,pcy2,0)⊤ and Pc3=(pcx3,pcy3,0)⊤ are the coordinates of three control points of the Bezier curve. Note that the above equations adopt only three control points for an idea illustration purpose. Actually, components with more complex skeleton shapes can also be constructed by interpolating more control points into Bezier curves. Based on Eqs. (2) and (4), the profile of the outer boundaries of a component-based stiffener can be described in the following way: where ns(μ) denotes outward normal vector of the skeleton and r∈[0,1] is an introduced parameter to characterize the position along the thickness direction. In the MMC-based topology optimization approach, although the boundary of a single thin-walled component is smooth, the boundary of the region occupied by multiple overlapped components may not be smooth any more. This issue can be alleviated in the Eulerian description and fixed 2D/3D FE mesh-based MMC approach by introducing the so-called ersatz material model [52], where the equivalent stiffness of an element is determined by the values of the global topology description function on its four nodes. In the present work, since Lagrangian description is used for representing the geometry of a component as well as the layout of the structure, it is quite important to avoid the intersection of components during the process of optimization. Besides, the intersection or overlap of these stiffeners is also generally to be avoided in the design process of stiffened plate structures. In the present work, to prevent the stiffeners from intersecting with each other during the optimization process, the component-connection mechanism based on a so-called adaptive ground structure method is employed [53]. As illustrated in Fig. 4, a ground structure is composed of the base plate and component-based stiffeners, where the stiffeners are connected to each other by a series of driven nodes and the entire structure is updated iteratively by moving a series of driven nodes and varying some size parameters of the components. Correspondingly, the optimal shape and size of the stiffeners can be obtained by optimizing the coordinates of these movable nodes and other control points of the skeletons (PN=(pNx,pNy)) as well as the size parameters of the components (GC=(t,h)). Furthermore, by removing those narrow components with very small thickness after completing the optimization process (since these components have little effect on the overall performance of the stiffened structure), the topology changes of the structure are achieved and the final optimized stiffened plate structure can be obtained. 2.2 Minimum Thickness Control of Stiffeners In practical applications, constraining feature sizes of the structural members is very meaningful to improving the design manufacturability [54]. This, fortunately, can be easily achieved in the explicit optimization framework by directly setting bound limits on relevant geometry parameters. In the present work, benefiting from the explicit geometric description of the stiffeners, it is also easy to control the sizes of stiffeners, such as the height h and the length l (ls=(psx2−psx1)2+(psy2−psy1)2 for a straight stiffener and lc=∫01((2u−2)pcx1+(2−4u)pcx2+2upcx3)2+((2u−2)pcy1+(2−4u) pcy2+2upcy3)2du for a curved stiffener). However, for the thickness control of stiffeners, the lower bound tl cannot be directly imposed on the thickness of a component due to the operation of removing the narrow components. To be specific, if the lower bound tl on the thickness is imposed, the topology change of the stiffened plate structure cannot be realized by removing the components with thicknesses less than a threshold tr (usually tr≪tl) from the final optimization result (note that there are many stiffeners whose thickness values are between tr and tl). To address this problem, we introduce a penalization mechanism to prevent the value of the thickness from falling into the interval [tr,tl] during the process of optimization. By penalizing the stiffener thickness with a middle value (i.e., t∈[tr,tl]), the values of the thickness in the optimized results are either less than the threshold tr or great than the lower bound tl. Then the final structural topology change can be obtained by removing the stiffeners with thickness less than the threshold tr, while the minimum thickness constraint can also be satisfied. In the present work, for a stiffener optimization problem with feature size constraint t∈[tl,tu], we use the following expression to realize the penalization: where H=H(x−tl) is a translated Heaviside function, and its regularized version Hϵ(x−tl) in common practice can be expressed as where ϵ denotes a parameter that controls the magnitude of regularization and α is a small positive number introduced to set the threshold tr in the penalization scheme and we take α=tr/(tl−ϵ) in the present work. It should be noted that instead of t, the value of tp is utilized to treat as the thickness of each component in numerical implementation. As can be seen in Fig. 5, by using the Heaviside penalty function, only the thickness tp of the components with t∈[tl−ϵ,tl+ϵ] will fall into the interval [tr,tl+ϵ] during the process of optimization. Accordingly, the number of components with tp∈[tr,tl+ϵ] can be further reduced by setting ϵ to a small positive number (in the present work, we take ϵ=0.1) and the minimum thickness control can be effectively achieved. 2.3 Numerical Analysis Model of Stiffened Plate Structures Based on the MMC Method In the present work, classical stress/displacement shell elements with three or four nodes constructed from a refined shell theory [55] are adopted for structural response analysis. As the geometry of the stiffened plate is described explicitly in a pure Lagrangian way, a clean and clear geometry model with smooth boundaries can be generated; therefore, it is quite convenient to discretize both the base plate and the stiffeners into an adaptive body-fitted mesh through the adaptive re-meshing technique (see Fig. 6 for reference). Compared with the 3D solid or equivalent stiffness model with a fixed finite element (FE) mesh commonly used in previous works, the shell-element-based numerical analysis model adopted in the present work has a relatively low computational cost and is undoubtedly more suitable for the simulation of stiffened plate structures. Furthermore, since the FE model is built on exact geometry and a refined local FE mesh can be constructed in the regions of special interest (e.g., along the boundary of inner holes and the interfaces between the stiffeners and the base plate, see Fig. 28a for reference), more accurate analysis results can be obtained at each iteration step of optimization. 3 Problem Formulation and Sensitivity Analysis Based on the above discussions, it can be concluded that the design variables of a stiffened plate structure topology optimization problem in the proposed MMC-based framework can be summarized as D= (DN,DC). Here DN=(PN1,…,PNi,…,PNnn) denotes the integrated vector composed of the coordinates of all driven nodes/control points, with PNi=(pNxi,pNyi) representing the coordinates of the i-th driven node/control point and nn denoting the total number of driven nodes/control points. The symbol DC=(GC1,…,GCj,…,GCnc) is the geometric parameters of all stiffener components, with GCj=(tj,hj) being the geometric parameters vector of j-th stiffener and nc denoting the total number of stiffeners. In the present work, it is assumed that the height of all stiffeners remains constant throughout the process, so the vector GCj can be reduced to GCj=tj,j=1,…,nc. With the above result bearing in mind, the optimal design problem for stiffened plate structures can be formulated as where I is the objective function, gi,i=1,…,n are constraint functions and UD is the admissible set that the design variable vector D belongs to. In the present study, with the purpose of enhancing the global stiffness of the stiffened plate structures, the considered optimization problem is to minimize the structural compliance under the available volume constraint and the corresponding problem formulation can be formulated as u=u¯,on Γu where f and u represent the nodal force vector and the nodal displacement vector, respectively. The symbol K denotes the global stiffness matrix assembled from element stiffness matrix of the base plate and the stiffeners. V¯ is the upper bound of available solid material. gj,j=1,…,ng denote some other inequality constraints (e.g., feature size constraints), where ng is the total number of these constraints. In addition, u¯ represents prescribed displacement on the Dirichlet boundary Γu. The proposed solution framework is essentially based on the explicit boundary evolution, and therefore shape sensitivity analysis approach can be performed to obtain the sensitivities of an objective or constraint functional for numerical optimization. According to [56,57], the shape sensitivity of a general objective/constraint functional can be written as a volume integral where u and w represent the primary and adjoint displacement fields, respectively. ∂Ω=∪i=1nc∂Ωi denotes the boundary of all stiffeners and the symbol ∂Ωi, i=1,…,nc is the boundary of the i-th component. Vni is the normal velocity field along δΩi. In the present work, since the considered objective function is the structural compliance, it yields that u=w and f(u,w)=−Eijklui,juk,l. When I represents the volume of the stiffened plate structure, we have f(u,w)=1. As can be seen from Eq. (10), the key point for shape sensitivity analysis is to derive the relationship between Vni and the variation of D. Actually, for a typical component shown in Figs. 2 and 3, only the contributions of outer boundary S1 and S2 to sensitivities are considered since the areas of other boundaries are too small to be ignored in sensitivity analysis. Therefore, the shape sensitivity of the i-th component can be calculated as In Eq. (11), the outward normal velocity field Vnk,k=1,2 associated with the variation of the stiffener boundary Sk can be written as where δSk is the variation of the boundary Sk and nk denotes the outward normal vector of Sk. Based on the above results, we next carry out the shape sensitivity analysis of the straight component and the curved component, respectively. 3.2.1 Sensitivity Analysis of the Straight Component For a typical cuboid component with a straight skeleton as shown in Fig. 2, taking the boundary S1 as an example, we have where S0 is the mid-surface of the component and the variation of it can be expressed as follows: Accordingly, the normal velocity field along S1 can be written as In Eq. (16), the normal outward vector ns can be easily obtained from the tangential vector τs of the skeleton, and τs can be calculated in the following form: Therefore, we have Based on the above results, the normal velocity field along S1 can be described as follows: Similarly, the normal velocity field along S2 can be written as Based on the above equations, the variation of I with respect to the i-th component can be expressed as where the expressions of As,Bs,Cs, Ds and Es can be found in Appendix A. Summarizing the contributions of all components, the sensitivity of the structural compliance/volume with respect to the design variables psx1,psy1, psx2, psy2 and t of i-th component can be written as where sp1 and sp2 are the total number of the straight components driven by the nodes PS1 and PS2. 3.2.2 Sensitivity Analysis of the Curved Component For a curved component as shown in Fig. 3, the variation of the mid-surface S0 can be deduced easily from Eq. (4) The tangential vector τs of the curved skeleton can be calculated as Accordingly, we have Based on the above equations, it yields that the normal velocity field along S1 of the curved component can be expressed as Similarly, Vn2 along S2 can be calculated as Finally, we have where the expressions of Ac,Bc,Cc, Dc, Ec, Fc and Gc can be found in Appendix A. In Eq. (26), cp1 and cp2 denote the total number of the curved components driven by the nodes Pc1 and Pc3, respectively. It is worth noting that all the above computations can be performed by surface integrals on the boundary of the components. In this section, four numerical examples, including straight and curved stiffeners, hierarchical stiffeners, and a stiffened plate with a cutout, are tested to validate the effectiveness of the proposed approach. Without loss of generality, all involved quantities are assumed to be dimensionless. The Young’s modulus of the base plate and the stiffeners are set as Ep=1 and Es=2, respectively, and Poisson’s ratios of both materials are ν=0.3. The method of moving asymptotes (MMA) [58] is utilized to solve the optimization problems numerically. The terminating condition of the optimization process is set to the relative changes of the values of the objective and volume functions in two successive iteration steps are less than 0.1% while the volume constraint is satisfied. For all examples, the stiffeners with thickness values less than a threshold of tr=0.05 are deleted from the final optimization results. 4.1 A Plate Example with Straight Stiffeners In the first example, the optimization problem of a stiffened plate structure with straight stiffeners is tested. The corresponding problem setting is shown in Fig. 7. The height of all stiffeners is set as hs=5 and the thickness of the base plate is tp=1. As stated previously, the coordinates of the driven nodes as well as the thickness of all stiffener components are taken as design variables. The variation range of the stiffeners’ thickness is set to t∈[0.001,2] and the upper bound of the available volume of the stiffeners is V¯=0.1|D| (|D|=200×100×5). As illustrated in Fig. 8, three different initial designs consisting of 315, 450 and 609 components are adopted in this example to test the dependence of the optimization results on the initial layouts of components. The corresponding numbers of design variables of the three initial designs are 513, 736 and 999, respectively. The corresponding optimized results obtained from the different initial layouts with compliance values of 305.40, 303.84 and 303.06, respectively, are displayed in Fig. 9 (note that those narrow components with a thickness less than the threshold tr have been removed). The figure shows that stiffeners are smoothly distributed and perfectly adhered to the base panel. Meanwhile, clear and clean load transmission paths can be easily extracted from the optimized results without any extra post-processing due to the explicit geometry description. Noticing that although the optimized results obtained from different initial layouts are slightly different, the main load transmission paths assembled by the stiffeners are very similar. Fig. 10 depicts the strain energy distributions of all optimized designs; it can be observed that the stiffeners are mainly distributed in the regions with high strain energy, which is reasonable from a mechanics point of view. Fig. 11 shows the iteration histories of the compliance value and the volume constraint for the three cases; the structural compliance value decreases rapidly in the first 100 steps and converges by about 300 steps. Next, to examine the validity of the proposed penalization mechanism in addressing minimum thickness control of stiffeners, the lower bound tl=1 of the thickness control is imposed in this example (the initial design is the same as Fig. 8a). The optimized result with the Heaviside penalization scheme is shown in Fig. 12, and the corresponding value of compliance is 311.98. It can be seen from the figure that, compared to the optimized structure (shown in Fig. 9a) obtained without a penalization scheme, some local narrow stiffeners disappear in the final optimized structure of imposing the Heaviside penalization scheme. Table 1 lists the thickness values of the optimized stiffeners with the Heaviside penalization scheme (only the data of half of the optimized structure is provided since the structural symmetry). It can be found that, by applying the penalization scheme, all the stiffeners satisfy the prescribed thickness size constraint (i.e., t∈[1,2]). Compared with the optimized structure in Fig. 9a, the compliance value of the structure with the thickness control is higher. This is because imposing the thickness control on the stiffeners during the optimization process inevitably reduces the optimization design space. Although there are certain differences in the stiffener layout of the optimized results, the main force transmission paths for both results are similar. Based on the above comparison, it is concluded that the penalization scheme by the Heaviside function can effectively control the thickness of the stiffeners. Besides the size control of thickness, the proposed method also has the capability of performing other feature size control of the stiffened plate structures, such as the length control and the angle control. This, however, will not be reported in the present work for conciseness. 4.2 A Plate Example with Curved Stiffeners In this example, the curved stiffener optimization problem is considered. The problem setting of this example is shown in Fig. 13. The thickness of the base panel is tp=1 and the height of all the curved stiffeners is uniformly set to hs=5. Fig. 14 illustrates the initial design of this example. The geometry of each component is determined by three control points of its skeleton and the thickness. Accordingly, the coordinates of these control points/driven nodes and the thickness are taken as the optimization design variables and the total number of design variables is 190. The thickness of all components is only allowed to vary in the range of [0.001,3] and all control points are restricted to move within the design domain framed by the base panel. The upper bound of available material for the stiffeners is V¯=0.16|D|(|D|=150×50×5). Fig. 15 depicts the final optimized result and the corresponding iteration history for the optimization process is shown in Fig. 16. As can be seen from the optimized result, some curved stiffeners appear in the optimized structure and form several strong structural members to transfer the point load. Some intermediate designs in the optimization process are presented in Fig. 17, which shows the shape and size evolutions of curved stiffeners during the optimization iterations. In the proposed optimization framework, since the profile of the curved stiffeners is described explicitly through a series of geometry parameters, the optimized design can be directly imported into CAD systems, as shown in Fig. 18. 4.3 A Hierarchical Stiffened Plate Example Hierarchical stiffened configuration, as an advanced design form, is widely used in large industrial equipment. In this subsection, we try to apply the proposed method to the optimization design of a hierarchical stiffened structure. The problem setting of the considered hierarchical stiffened plate example is shown in Fig. 19. The thickness of the base plate is tp=1 and the height of primary stiffeners and secondary stiffeners are hs1=4 and hs2=2, respectively. The maximum available volume of the stiffeners is V¯=0.125|D| (|D|=100×50×4). Fig. 20 shows the initial design with 76 primary components and 128 secondary components and the total number of design variables is 330. During the optimization process, the varying thickness ranges of the primary components and the secondary components are set to [0.001,1] and [0.2,0.5], respectively. The final optimized hierarchical stiffened plate with a compliance value of 123.14 is shown in Fig. 21. As can be seen from the figure, several main force transmission paths composed of the primary stiffeners are generated to effectively resist the in-plane bending moment and tensile forces. Meanwhile, the cross-distributed secondary stiffeners in the plate can well resist shear deformation. In addition, by arranging the primary stiffeners and secondary stiffeners, both the global and local stiffness of the plate structure can be enhanced greatly from a mechanical point of view. Fig. 22 shows the iteration history for the optimization process of this example and the corresponding CAD model of the optimized structure is shown in Fig. 23. 4.4 A Rectangular Stiffened Plate with an Inner Hole Example In the last example, the stiffener optimization problem of a rectangular plate with an inner hole is considered and the relevant geometry data, boundary conditions and external loads are shown in Fig. 24. The thickness of the base panel is tp=1.0 and the height of all stiffeners is set as hs=10. The variation range of the thickness is t∈[0.001,4] and the upper bound of the volume occupied by stiffeners is taken as V¯=0.25|D|(|D|=200×100×10). As plotted in Fig. 25, the initial design contains 204 components and 76 driven nodes, and the total number of design variables is 328. The corresponding optimized result with structural compliance of I=644.97 is exhibited in Fig. 26, and the strain energy distribution in the optimized design is depicted in Fig. 27. As can be observed from the figures, several thick stiffeners connecting the inner hole region and the fixed support region are generated to effectively transfer the uniformly distributed vertical line load. Furthermore, some thick stiffeners are also generated with a distributive pattern near the hole, which can uniformly diffuse the external loads and significantly increase the local stiffness of the structure. Besides, it can also be clearly seen that the stiffeners in the optimized structure are mainly distributed in regions with high strain energy, which is quite reasonable from a mechanical point of view. As mentioned previously, in the proposed method, the locally refined mesh can be adopted to accurately analyze the local performance of the structure. Accordingly, Fig. 28 illustrates the locally refined mesh along the inner hole’s boundary and the optimized design’s stress distribution. The iteration history of the example is depicted in Fig. 29. It is worth pointing out that in the present work, both the stiffeners and the inner holes are modeled through the explicit geometry representation. This makes the optimized structure obtained by the proposed method easy to transfer to CAD/CAE systems for subsequent design and manufacturing, as shown in Fig. 30. In this study, a novel approach based on the MMC solution framework for topology optimization of stiffened plate structures is proposed. In this method, all the stiffeners are treated as a set of structural components and the optimal design of stiffened plate structures can be obtained by optimizing the explicit geometry parameters of these components. By adopting Lagrangian type description for geometry representation, an adaptive ground structure method is utilized to regularize the optimization process, while dynamically updated shell elements obtained from an adaptive re-meshing technique are adopted for structural response analysis. Under this treatment, not only highly accurate analysis results with relatively low computational efforts can be achieved, but also a clear and clean optimized stiffened structure without extra processing can be obtained. Compared with previous methods, the proposed method has a smaller number of design variables and can accomplish feature size control of the stiffeners easily. Furthermore, various types of stiffened plate structures optimization problems, including straight and curved stiffeners, hierarchical stiffeners, and stiffened plates with cutouts, can be solved uniformly in the proposed explicit topology optimization framework, and numerical examples demonstrate the effectiveness and efficiency of the proposed approach. Last but not least, the generated optimized structures can be seamlessly transferred to CAD/CAE systems, which has a great prospect in industrial applications. As a preliminary attempt, only the minimum compliance optimization problem is considered in the present work and it can be expected that the proposed method has the potential to be applied to other stiffener optimization designs considering complex multi-physics fields, such as acoustic, thermal, etc. Another promising investigation direction is to extend the present work to the stiffener optimization of arbitrary surfaces. Corresponding research results will be reported elsewhere. Funding Statement: This work is supported by the National Key Research and Development Plan (2020YFB1709401), the National Natural Science Foundation (11821202, 11732004, 12002077, 12002073), the Fundamental Research Funds for Central Universities (DUT21RC(3)076, DUT20RC(3)020), Doctoral Scientific Research Foundation of Liaoning Province (2021-BS-063) and 111 Project (B14013). Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study. 1. Wang, L., Basu, P. K., Leiva, J. P. (2004). Automobile body reinforcement by finite element optimization. Finite Elements in Analysis and Design, 40(8), 879–893. DOI 10.1016/S0168-874X(03)00118-5. [Google Scholar] [CrossRef] 2. Loughlan, J. (2018). Thin-walled structures: Advances in research, design and manufacturing technology. USA: CRC Press. [Google Scholar] 3. Noor, A. K. (1973). Free vibrations of multilayered composite plates. AIAA Journal, 11(7), 1038–1039. DOI 10.2514/3.6868. [Google Scholar] [CrossRef] 4. Qatu, M. S., Sullivan, R. W., Wang, W. (2010). Recent research advances on the dynamic analysis of composite shells: 2000–2009. Composite Structures, 93(1), 14–31. DOI 10.1016/ j.compstruct.2010.05.014. [Google Scholar] [CrossRef] 5. Viola, E., Tornabene, F., Fantuzzi, N. (2013). General higher-order shear deformation theories for the free vibration analysis of completely doubly-curved laminated shells and panels. Composite Structures, 95, 639–666. DOI 10.1016/j.compstruct.2012.08.005. [Google Scholar] [CrossRef] 6. Shroff, S., Acar, E., Kassapoglou, C. (2017). Design, analysis, fabrication, and testing of composite grid-stiffened panels for aircraft structures. Thin-Walled Structures, 119, 235–246. DOI 10.1016/j.tws.2017.06.006. [Google Scholar] [CrossRef] 7. Bendsøe, M. P., Kikuchi, N. (1988). Generating optimal topologies in structural design using a homogenization method. Computer Methods in Applied Mechanics and Engineering, 71(2), 197–224. DOI 10.1016/0045-7825(88)90086-2. [Google Scholar] [CrossRef] 8. Bendsøe, M. P. (1989). Optimal shape design as a material distribution problem. Structural Optimization, 1(4), 193–202. DOI 10.1007/BF01650949. [Google Scholar] [CrossRef] 9. Zhou, M., Rozvany, G. (1991). The COC algorithm, part II: Topological, geometrical and generalized shape optimization. Computer Methods in Applied Mechanics and Engineering, 89(1–3), 309–336. DOI 10.1016/0045-7825(91)90046-9. [Google Scholar] [CrossRef] 10. Wang, M. Y., Wang, X. M., Guo, D. M. (2003). A level set method for structural topology optimization. Computer Methods in Applied Mechanics and Engineering, 192(1–2), 227–246. DOI 10.1016/ S0045-7825(02)00559-5. [Google Scholar] [CrossRef] 11. Honda, M., Kawamura, C., Kizaki, I., Miyajima, Y., Takezawa, A. et al. (2021). Construction of design guidelines for optimal automotive frame shape based on statistical approach and mechanical analysis. Computer Modeling in Engineering & Sciences, 128(2), 731–742. DOI 10.32604/cmes.2021.016181. [Google Scholar] [CrossRef] 12. Yan, J., Sui, Q. Q., Fan, Z. R., Duan, Z. Y. (2022). Multi-material and multiscale topology design optimization of thermoelastic lattice structures. Computer Modeling in Engineering & Sciences, 130(2), 967–986. DOI 10.32604/cmes.2022.017708. [Google Scholar] [CrossRef] 13. Zou, J., Mou, H. L. (2022). Topology optimization of self-supporting structures for additive manufacturing with adaptive explicit continuous constraint. Computer Modeling in Engineering & Sciences, 131(1), 1–19. DOI 10.32604/cmes.2022.020111. [Google Scholar] [CrossRef] 14. Lagaros, N. D., Fragiadakis, M., Papadrakakis, M. (2004). Optimum design of shell structures with stiffening beams. AIAA Journal, 42(1), 175–184. DOI 10.2514/1.9041. [Google Scholar] [CrossRef] 15. Wu, B. C., Young, G. S., Huang, T. Y. (2000). Application of a two-level optimization process to conceptual structural design of a machine tool. International Journal of Machine Tools and Manufacture, 40(6), 783–794. DOI 10.1016/S0890-6955(99)00113-3. [Google Scholar] [CrossRef] 16. Higgins, P. J., Wegner, P., Viisoreanu, A., Sanford, G. (2004). Design and testing of the minotaur advanced grid-stiffened fairing. Composite Structures, 66(1–4), 339–349. DOI 10.1016/ j.compstruct.2004.04.055. [Google Scholar] [CrossRef] 17. Gosowski, B. (2007). Non-uniform torsion of stiffened open thin-walled members of steel structures. Journal of Constructional Steel Research, 63(6), 849–865. DOI 10.1016/j.jcsr.2006.02.006. [ Google Scholar] [CrossRef] 18. Jármai, K., Farkas, J. (2001). Optimum cost design of welded box beams with longitudinal stiffeners using advanced backtrack method. Structural and Multidisciplinary Optimization, 21(1), 52–59. DOI 10.1007/s001580050167. [Google Scholar] [CrossRef] 19. Pavlovčič, L., Detzel, A., Kuhlmann, U., Beg, D. (2007). Shear resistance of longitudinally stiffened panels—Part 1: Tests and numerical analysis of imperfections. Journal of Constructional Steel Research, 63(3), 337–350. DOI 10.1016/j.jcsr.2006.05.008. [Google Scholar] [CrossRef] 20. Pavlovčič, L., Beg, D., Kuhlmann, U. (2007). Shear resistance of longitudinally stiffened panels—Part 2: Numerical parametric study. Journal of Constructional Steel Research, 63(3), 351–364. DOI 10.1016/j.jcsr.2006.05.009. [Google Scholar] [CrossRef] 21. Kapania, R., Li, J., Kapoor, H. (2005). Optimal design of unitized panels with curvilinear stiffeners. Proceedings of the AIAA 5th ATIO and 16th Lighter-than-Air Sys Tech. and Balloon Systems Conferences, pp. 7482–7511. Virginia. [Google Scholar] 22. Mulani, S. B., Slemp, W. C., Kapania, R. (2010). EBF3PanelOpt: A framework for curvilinear stiffened panels optimization under multiple load cases. Proceedings of the 13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conference, pp. 9238–9254. Texas. [Google Scholar] 23. Mulani, S. B., Slemp, W. C. H., Kapania, R. K. (2013). EBF3PanelOpt: An optimization framework for curvilinear blade-stiffened panels. Thin-Walled Structures, 63, 13–26. DOI 10.1016/ j.tws.2012.09.008. [Google Scholar] [CrossRef] 24. Wang, B., Hao, P., Li, G., Tian, K., Du, K. F. et al. (2014). Two-stage size-layout optimization of axially compressed stiffened panels. Structural and Multidisciplinary Optimization, 50(2), 313–327. DOI 10.1007/s00158-014-1046-6. [Google Scholar] [CrossRef] 25. Hao, P., Wang, B., Tian, K., Li, G., Du, K. F. et al. (2016). Efficient optimization of cylindrical stiffened shells with reinforced cutouts by curvilinear stiffeners. AIAA Journal, 54(4), 1350–1363. DOI 10.2514/1.J054445. [Google Scholar] [CrossRef] 26. Liu, Y., Shimoda, M. (2015). Non-parametric shape optimization method for natural vibration design of stiffened shells. Computers & Structures, 146, 20–31. DOI 10.1016/j.compstruc.2014.08.003. [ Google Scholar] [CrossRef] 27. Liu, Y., Shimoda, M. (2013). Parameter-free optimum design method of stiffeners on thin-walled structures. Structural and Multidisciplinary Optimization, 49(1), 39–47. DOI 10.1007/ s00158-013-0954-1. [Google Scholar] [CrossRef] 28. Wang, D., Abdalla, M. M., Wang, Z. P., Su, Z. (2019). Streamline stiffener path optimization (SSPO) for embedded stiffener layout design of non-uniform curved grid-stiffened composite (NCGC) structures. Computer Methods in Applied Mechanics and Engineering, 344, 1021–1050. DOI 10.1016/j.cma.2018.09.013. [Google Scholar] [CrossRef] 29. Wang, D., Abdalla, M. M., Zhang, W. H. (2018). Sensitivity analysis for optimization design of non-uniform curved grid-stiffened composite (NCGC) structures. Composite Structures, 193, 224–236. DOI 10.1016/j.compstruct.2018.03.077. [Google Scholar] [CrossRef] 30. Lam, Y. C., Santhikumar, S. (2003). Automated rib location and optimization for plate structures. Structural and Multidisciplinary Optimization, 25(1), 35–45. DOI 10.1007/s00158-002-0270-7. [ Google Scholar] [CrossRef] 31. Ansola, R., Canales, J., Tarrago, J. A., Rasmussen, J. (2004). Combined shape and reinforcement layout optimization of shell structures. Structural and Multidisciplinary Optimization, 27(4), 219–227. DOI 10.1007/s00158-004-0399-7. [Google Scholar] [CrossRef] 32. Afonso, S. M. B., Sienz, J., Belblidia, F. (2005). Structural optimization strategies for simple and integrally stiffened plates and shells. Engineering Computations, 22(4), 429–452. DOI 10.1108/ 02644400510598769. [Google Scholar] [CrossRef] 33. Ma, X. T., Wang, F. Y., Aage, N., Tian, K., Hao, P. et al. (2021). Generative design of stiffened plates based on homogenization method. Structural and Multidisciplinary Optimization, 64, 3951–3969. DOI 10.1007/s00158-021-03070-3. [Google Scholar] [CrossRef] 34. Aage, N., Andreassen, E., Lazarov, B. S., Sigmund, O. (2017). Giga-voxel computational morphogenesis for structural design. Nature, 550(7674), 84–86. DOI 10.1038/nature23911. [Google Scholar] [ 35. Zhang, W. H., Zhao, L. Y., Gao, T., Cai, S. Y. (2017). Topology optimization with closed B-splines and Boolean operations. Computer Methods in Applied Mechanics and Engineering, 315, 652–670. DOI 10.1016/j.cma.2016.11.015. [Google Scholar] [CrossRef] 36. Feng, S. Q., Zhang, W. H., Meng, L., Xu, Z., Chen, L. (2021). Stiffener layout optimization of shell structures with B-spline parameterization method. Structural and Multidisciplinary Optimization, 63(6), 2637–2651. DOI 10.1007/s00158-021-02873-8. [Google Scholar] [CrossRef] 37. Bakker, C., Zhang, L., Higginson, K., Keulen, F. V. (2021). Simultaneous optimization of topology and layout of modular stiffeners on shells and plates. Structural and Multidisciplinary Optimization, 64(5), 3147–3161. DOI 10.1007/s00158-021-03081-0. [Google Scholar] [CrossRef] 38. Chu, S., Townsend, S., Featherston, C., Kim, H. A. (2021). Simultaneous layout and topology optimization of curved stiffened panels. AIAA Journal, 59(7), 2768–2783. DOI 10.2514/1.J060015. [Google Scholar] [CrossRef] 39. Li, Q. H., Qu, Y. X., Luo, Y. F., Liu, S. T. (2021). Concurrent topology optimization design of stiffener layout and cross-section for thin-walled structures. Acta Mechanica Sinica, 37(3), 472–481. DOI 10.1007/s10409-020-01034-2. [Google Scholar] [CrossRef] 40. Qin, X. C., Dong, C. Y. (2021). NURBS-based isogeometric shape and material optimization of curvilinearly stiffened plates with FGMs. Thin-Walled Structures, 162, 107601. DOI 10.1016/ j.tws.2021.107601. [Google Scholar] [CrossRef] 41. Singh, K., Kapania, R. K. (2021). Accelerated optimization of curvilinearly stiffened panels using deep learning. Thin-Walled Structures, 161, 107418. DOI 10.1016/j.tws.2020.107418. [Google Scholar] [CrossRef] 42. Xu, K., Li, T., Guan, G. F., Qu, J. L., Zhao, Z. et al. (2022). Optimization design of an embedded multi-cell thin-walled energy absorption structures with local surface nanocrystallization. Computer Modeling in Engineering & Sciences, 130(2), 987–1002. DOI 10.32604/cmes.2022.018128. [Google Scholar] [CrossRef] 43. Mattheck, C., Burkhardt, S. (1990). A new method of structural shape optimization based on biological growth. International Journal of Fatigue, 12(3), 185–190. DOI 10.1016/0142-1123(90)90094-U. [ Google Scholar] [CrossRef] 44. Ding, X. H., Yamazaki, K. (2004). Stiffener layout design for plate structures by growing and branching tree model (application to vibration-proof design). Structural and Multidisciplinary Optimization, 26(1–2), 99–110. DOI 10.1007/s00158-003-0309-4. [Google Scholar] [CrossRef] 45. Ji, J., Ding, X. H., Xiong, M. (2014). Optimal stiffener layout of plate/shell structures by bionic growth method. Computers & Structures, 135, 88–99. DOI 10.1016/j.compstruc.2014.01.022. [Google Scholar] [CrossRef] 46. Li, B. T., Hong, J., Liu, Z. F. (2014). Stiffness design of machine tool structures by a biologically inspired topology optimization method. International Journal of Machine Tools and Manufacture , 84, 33–44. DOI 10.1016/j.ijmachtools.2014.03.005. [Google Scholar] [CrossRef] 47. Li, B. T., Xuan, C. B., Tang, W. H., Zhu, Y. S., Yan, K. (2018). Topology optimization of plate/shell structures with respect to eigenfrequencies using a biologically inspired algorithm. Engineering Optimization, 51(11), 1829–1844. DOI 10.1080/0305215X.2018.1552952. [Google Scholar] [CrossRef] 48. Li, B. T., Huang, C. J., Xuan, C. B., Liu, X. (2019). Dynamic stiffness design of plate/shell structures using explicit topology optimization. Thin-Walled Structures, 140, 542–564. DOI 10.1016/ j.tws.2019.03.053. [Google Scholar] [CrossRef] 49. Dong, X. H., Ding, X. H., Li, G. J., Lewis, G. P. (2019). Stiffener layout optimization of plate and shell structures for buckling problem by adaptive growth method. Structural and Multidisciplinary Optimization, 61, 301–318. DOI 10.1007/s00158-019-02361-0. [Google Scholar] [CrossRef] 50. Guo, X., Zhang, W. S., Zhong, W. L. (2014). Doing topology optimization explicitly and geometrically—A new moving morphable components based framework. Journal of Applied Mechanics, 81(8), 081009. DOI 10.1115/1.4027609. [Google Scholar] [CrossRef] 51. Zhang, W. S., Zhang, J., Guo, X. (2016). Lagrangian description based topology optimization—A revival of shape optimization. Journal of Applied Mechanics, 83(4), 041010. DOI 10.1115/1.4032432. [ Google Scholar] [CrossRef] 52. Zhang, W. S., Yuan, J., Zhang, J., Guo, X. (2015). A new topology optimization approach based on moving morphable components (MMC) and the ersatz material model. Structural and Multidisciplinary Optimization, 53(6), 1243–1260. DOI 10.1007/s00158-015-1372-3. [Google Scholar] [CrossRef] 53. Miao, Y. H. (2021). Topology optimization of multiphase material structure based on joint connection (Master Thesis). Dalian University of Technology, China. [Google Scholar] 54. Lazarov, B. S., Wang, F. W. (2017). Maximum length scale in density based topology optimization. Computer Methods in Applied Mechanics and Engineering, 318, 826–844. DOI 10.1016/ j.cma.2017.02.018. [Google Scholar] [CrossRef] 55. Chapelle, D., Bathe, K. J. (2010). The finite element analysis of shells-fundamentals. USA: Springer Science & Business Media. [Google Scholar] 56. Laporte, E., Le Tallec, P. (2002). Numerical methods in sensitivity analysis and shape optimization. USA: Springer Science & Business Media. [Google Scholar] 57. Komkov, V., Choi, K. K., Haug, E. J. (1986). Design sensitivity analysis of structural systems. USA: Academic Press. [Google Scholar] 58. Svanberg, K. (1987). The method of moving asymptotes—A new method for structural optimization. International Journal for Numerical Methods in Engineering, 24(2), 359–373. DOI 10.1002/(ISSN) 1097-0207. [Google Scholar] [CrossRef] Appendix A. Some terms in the expressions of sensitivity analysis Cite This Article APA Style Jiang, X., Liu, C., Zhang, S., Zhang, W., Du, Z. et al. (2023). Explicit topology optimization design of stiffened plate structures based on the moving morphable component (MMC) method. Computer Modeling in Engineering & Sciences, 135(2), 809-838. https://doi.org/10.32604/cmes.2023.023561 Vancouver Style Jiang X, Liu C, Zhang S, Zhang W, Du Z, Zhang X, et al. Explicit topology optimization design of stiffened plate structures based on the moving morphable component (MMC) method. Comput Model Eng Sci. 2023;135(2):809-838 https://doi.org/10.32604/cmes.2023.023561 IEEE Style X. Jiang et al., “Explicit Topology Optimization Design of Stiffened Plate Structures Based on the Moving Morphable Component (MMC) Method,” Comput. Model. Eng. Sci., vol. 135, no. 2, pp. 809-838, 2023. https://doi.org/10.32604/cmes.2023.023561 This work is licensed under a Creative Commons Attribution 4.0 International License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.techscience.com/CMES/v135n2/50174/html","timestamp":"2024-11-11T01:05:19Z","content_type":"application/xhtml+xml","content_length":"301061","record_id":"<urn:uuid:73960adf-3c8d-48ba-8616-f308a9c350e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00303.warc.gz"}
Making sense of number sense… …and its implications in the classroom. Hello and welcome to the 37th edition of our fortnightly newsletter, Things in Education. Can you imagine 100 dolphins? Even if you close your eyes and think about it, you won’t be able to. What you are imagining is many dolphins. To actually get a sense of how many dolphins is 100 dolphins, one could imagine trying to fit 100 dolphins in an empty passenger airplane that flies from Delhi to Bangalore, or at least what volume 100 dolphins may take up. So, an adult’s number sense comes from multiple faculties. In the example above, you would need more than just counting. You would need spatial skills and cognition to imagine the unlikely scenario of having 100 dolphins in an Look at the cards above. Even without really counting, you could say that the two cards have different number of dots on them. Similarly, with the two cards below – do they have the same or different number of dots? Without counting, can you say if the two cards below have the same number of dots? Number sense is the innate ability to recognize and understand the numerical quantity of a group of objects without the use of symbols or formal language. As we discovered, our basic number sense works till 3 objects. We can tell apart 1 object from 2 objects, 2 objects from 3 objects, and 1 object from 3 objects easily. But as the number goes higher, our ability to differentiate between the number of objects goes down. That is why it was not easy to say how many dots there were on the two cards above – it was difficult to say whether there were the same number of dots or not and how different the number of dots was. Surprisingly (or not), this ability to tell apart 1, 2 and 3 objects is evolutionarily engraved in our brains. Studies have shown that various animals are able to do exactly this. It stands to reason then that babies (less than 1 year of age) should also be able to differentiate between 1, 2 and 3 objects – and that is what studies have shown us. Studies in babies and animals have also shown that beyond 3, it becomes difficult for them to decipher the difference between the number of objects. Do you think that it is just a co-incidence that in number notations of different languages, the numbers 1, 2 and 3 are represented by those number of dots or lines? And almost universally it changes with the notation for 4! From: The Number Sense by Stanislas Dehaene As we saw earlier, it was difficult to differentiate between 7 and 8 dots on the two cards. However, if you notice, it gets much easier if the dots are somewhat ordered. Even without counting, you do know that the number of dots is different. Do you need to count the number of circles to know how many there are on the card below? So what do all these cards tell us? Our number sense is good up to 3 objects, and we can almost intuitively differentiate the number of objects. And as we share this trait with other animals and babies, it seems to be a trait that is not learned. As the number of objects goes higher, our ability to differentiate between the number of objects falls. If we order the objects, the ability to tell apart the number of objects is somewhat increased. This suggests that our number sense also comes from visual cues and spatial awareness. So, our number sense comes from evolutionarily conserved cognitive mechanisms. So, what are the different cognitive mechanisms involved in number sense? Subitizing - This is the ability to recognize the number of objects in a group without counting them. It is thought to be based on the visual recognition of patterns or configurations of objects. This is what we did with the dot cards. Approximation - This is the ability to estimate the number of objects in a group based on their overall size or volume. It is thought to be based on our sense of spatial awareness and the relationship between objects in physical space. This is what we did with imagining 100 dolphins in an airplane. Discrimination - This is the ability to differentiate between two quantities of objects. It is thought to be based on our ability to detect differences in the spatial arrangement or pattern of objects. You could discriminate between 7 and 8 dots when they were arranged in a particular way. Most of us may not have been able to tell that there were 3 concentric circles, and would have had to count them. As a preschool teacher or an early childhood curriculum creator, how can you leverage this limited number sense to extend the sense of numbers? Manipulatives, such as blocks, allow students to practise subitizing by recognizing patterns or configurations of objects. They may also provide a tactile and visual representation of quantity, supporting approximation. By engaging in spatial awareness activities Spatial awareness activities, such as arranging blocks or objects in a certain order, help children develop their sense of discrimination by requiring them to distinguish between different patterns or configurations of objects. By encouraging estimation This helps children develop their approximation skills by providing opportunities to practise judging the size or volume of a group of objects without counting them. Developing number sense beyond 3 is a challenging mission for a child’s modular brain. Educators must ensure that their learning activities help with making connections between different areas of the brain. We will go into more detail of the different cognitive mechanisms and how to help students fine-tune them through learning activities in the upcoming editions. If you found this newsletter useful, please share it. If you received this newsletter from someone and you would like to subscribe to us, please click here.
{"url":"https://www.things-education.com/post/making-sense-of-number-sense","timestamp":"2024-11-14T07:00:09Z","content_type":"text/html","content_length":"1050487","record_id":"<urn:uuid:af21c3a5-456b-47f0-8bba-10bffce87a3d>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00165.warc.gz"}
Nearly as commonly used as eyepieces are the telescope accessories for extending, or for compressing the effective focal length - focal extender (also: Barlow, tele-extender), and focal reducer (telecompressor) lens. The former are used for adding more magnification options with a given set of eyepieces; also, extending/narrowing of the converging light cones improves eyepiece performance (not long ago, added benefit of a Barlow lens was extending tight eye relief of the conventional short-focus eyepieces, but that is less important with new generations of eyepieces with longer eye relief). Focal reducer lens, on the other side, can also serve the purpose of obtaining more magnification options, but is mainly interesting to those who want to make their systems "faster", particularly for astrophotography. For that reason, it is commonly made to acts as a field flattener as well. The two main parameters of either extender or reducer are its focal length and the inside separation from the original focus. In general, the larger either one, the larger the effect. Scheme below shows Barlow lens extending the original cone and, by the same factor M, multiplying the focal length and image magnification, i.e. L/L0=M. In the thin lens approximation, if the extended cone was the original one, and the lens was positive, twice stronger, the new focus would form where the dashed lines meet, with magnification. Optically, the effect of either extender or reducer on the focal length, expressed as a magnification factor, is given with the same equation - it is only the sign of their focal length that produces magnification greater (extender), or smaller (reducer) than one. Graph below shows how system magnification changes with the focal length of the extender/reducer lens, with both lens-to-new-focus separation L and the lens focal length ƒ in units of the lens-to-original-focus separation L0. With the decrease in the relative focal length, extender lens' magnification asymptotically approaches infinity, and reducer lens' zero. Raytrace examples below illustrate some diverging-beam extenders. All are paired with a perfect 1000mm f.l. lens, so all aberrations are produced by the Barlows. Conventional Barlow lens is a cemented doublet achromat, such as one given by Rutten/Venrooij (1, radii 799, -58, 48, spacing 8, 5mm), and has a moderate length and ray divergence, as long as its magnification factor doesn't significantly exceed 2. This particular Barlow lens was designed for the commercial f/10 Schmidt-Cassegrain, and has some residual negative coma, still negligible at f/8, but noticeable at f/5 (raytraced with OSLO "perfect lens", 100/800mm for f/8, and 160/800mm for f/5). Best image field is mildly curved, for all practical purposes good as flat. Fancier glasses produce better performance (2, radii 62.5, -49.05, -24.21, spacing 4, 2mm, 2x extender designed for the TAL-200K telescope), about half as long as the Rutten/Venrooij design, paid for with noticeably stronger divergence. Divergence is, expectedly, even stronger with TAL's 5x double doublet Barlow (3, radii inf, 19.41, 30.69, spacing 1.6, 3.2 and 2.2mm gap, as given in "New serial telescopes and accessories" by Y.A. Klevtsov, 2014, p172). The same angular field here is, of course, 2.5 times larger linearly than with 2x extender. Performance improves significantly for the same linear field, as the 0.1° spots show. Note that in the box at right is raytrace of a single doublet of this Barlow with 2x magnification at ƒ/8. Finally, the "shorty" Barlow (4, radii 95.06, -39.17, -37.34, 30.14, spacing 7, 5, 3mm, from Smith/Ceragioly, Berry) unavoidably also has strong divergence. It has practicaly perfect correction over best image field, but its radius is potentially significantly curved. The astigmatic plot shows approx. 0.15mm defocus at 0.25° (about 7mm off axis in this case), which translates to -0.75 diopters field edge accommodation with the corresponding 14mm 50° FOV eyepiece, and nearly twice as much with a 7mm 100° FOV unit (negative sign indicates shifting focus right-to-left when light travels left-to-right; this is natural accommodation to a diverging beam, in this case the consequence of the edge field point being closer to the eyepiece, hence exiting eyepiece as a slightly diverging beam when the central point is in focus). Even -3 diopters with a 3.5mm 100° unit would be acceptable for most eyes (accommodating from infinity to 0.33m object distance). However, the field is not instantly accessible, as with a nearly flat field. It can be achieved by introducing astigmatism, which would require somewhat different glasses (e.g. SF10/N-FK5/N-LAF2, radii -6000, -38.8, -33, 55, spacing 7, 7, 3mm). Over the given field, there is no appreciable loss in the correction level, but unlike the zero-astigmatism design, "diffraction limited" over the entire usable (2" barrel limit) field at f/8, and nearly so at f/5, "diffraction-limited" field boundary with the flat-field design is at 0.36° radius at f/8, and 0.2° at f/5. Neither version is perfect close to the edge, but the zero-astigmatism type error can be remedied with accommodation, and astigmatism of the flat-field design can not. Probably the best design would have been in between, with somewhat weaker best image curvature and still negligible astigmatism (for instance, N-SF8/K10/N-LAF2, radii 244, -30, -27.4, 44mm, spacing 7, 5, 3mm). Note that the scale differs from one example to another; 1 and 3, and 2 and 4, are fairly comparable, while the former two are roughly 2-3 times larger vs. the other two than what it appears on the More recent development in both, focal extenders and reducers arena are the telecentric types. Unlike their conventional counterparts, they produce near-zero divergence exit beams. The advantage of it is that the added element doesn't affect - generally negatively - performance of telescope eyepieces, which are by default designed for near-telecentric (i.e. parallel with optical axis) entrance beams. For creating telecentric exit beams, a two lenses, or group of lenses opposite in their power sign, and with a wider separation, are needed. Two examples of telecentric Barlow below are, as before, with a perfect 1000mm f.l. lens, hence all aberrations come from the Barlow. The first example is flat-field at ƒ/5, but developing some field curvature at ƒ/8. The other one, more compact, has nearly constant, strong field curvature (over 6 diopters, or approximately infinity-to-8 inch accommodation). However, even with zero accommodation, it is still comparable to the longer design (the reason is the very small relative aperture, below ƒ/22, hence fairly insensitive to defocus). In general, higher magnification requires longer units. The simplest form of the focal reducer is a small achromat, usually cemented, corrected for infinity. Below is shown the effect of such random lens with a perfect 1000mm f.l. ƒ/10 perfect lens (top), a 100mm ƒ/10 doublet achromat (bottom left), and a 200mm ƒ/10 standard SCT. While its performance with a perfect lens is acceptable, it doesn't produce appreciable improvement with the SCT, as its original spots in the box show (the flat field SCT-alone spot is roughly 20% larger). Achromat's astigmatism actually enlarges the wavefront error, but what matters in the outer field is the angular In the achromat, it significantly weakens field curvature, at a price of more astigmatism, mixed with some coma, in the outer field (in the box are the e-line spots for achromats best image field). Performance improves with dedicated achromatized lens pair, either cemented/contact or separated. An example of the former is given by Rutten and Venrooij, as a reducer/flattener for aplanatic (coma-free) SCT. It is shown below also with a perfect ƒ/10 1000mm f.l. lens, with which it does not produce flat field, since its astigmatism/field curvature needs to offset those of the SCT. As ray spot plots and diffraction images (polychromatic, for the wavelengths shown) show, gain over uncorrected flat-field performance is relatively small (in the box are shown flat-field and best image spots for the edge point w/o reducer). Three more examples include a simple reducer/flattener/coma corrector for the standard SCT (top), roughly similar in form reducer/flattener for an apo doublet, and a random 3-lens reducer with a 100mm ƒ/10 (1000mm f.l.) perfect lens. The SCT reducer produces off axis spots larger than the R&V cemented doublet, but its actual performance is significantly better. It is because a better part of its ray spot are widely scattered rays, due to significant proportion of higher-order aberrations curling up relatively small areas of the wavefront, as opposed to the compact astigmatic spots of the achromat (e.g. for given wavefront error, ray spot plot for primary spherical aberration is nearly 6 times larger than for the primary astigmatism spot). Better indicator of performance are the diffraction images, comparable in scale (important factor is that the air-spaced doublet, unlike the cemented one, also corrects for coma). Performance level of this reducer/corrector probably doesn't fall far behind some simpler commercial units, which perform acceptably up to about 1/3 of a degree field radius. More complex units use more lenses, usually 3 to 4, in any arrangement (e.g. Meade's 0.63x reducer consists of two cemented doublets, and its 0.33x reducer of three singlets), with the main difference being field definition beyond this circle. The difference in flat-field performance is quite obvious in the case of the 80mm ƒ/8 fluorite doublet (middle). The reducer is telecentric, and unintended extra bonus was correcting the violet end. Finally,the 3-singlet reducer produces near-perfect 2-degree field with a perfect lens. Yet, its performance with systems having significant astigmatism/field curvature is uncertain. Next, an illustration of the performance level of the common f/6.3 SCT focal reducer/corrector. It is similar with both, Meade and Celestron, as well as some other makers, consisting of two cemented doublets. It is described as an accessory whose primary purpose is focal reduction, with unspecified corrective effect(s) other than it makes coma less visible. A number of different glasses can be used, so in the absence of any specific information except a published lens configuration for the Meade, designs presented here reflect what are the main properties of such a reducer, which should be roughly similar to the actual performance level (it is likely that different brands also have somewhat different design and output). It is assumed that the same two glasses are used for both doublets. Note that this replaces previous example, and is based on the actual location, back focus distance and linear (as opposed to angular) field radius of nearly half inch. The image is observed at the location of the base focus, or nearly so, which means that re-focusing by moving the primary closer to the secondary is applied in order to bring new focus to coincide with the standard one (change in output is negligible for focus shifts of up to a few mm, or so). It is likely that the reducer is near-optimized for the 8-inch SCT, which is shown below (top). Axial diffraction images are given for linear (top, same as the field edge images) and logarithmic response (bottom, to show it more clearly). Double doublet configuration can entirely correct for coma and astigmatism, but since the native astigmatism is of the opposite sign to the mirror Petzval curvature, it worsens best field curvature (middle). In order to maintain linear field size, angular field is increased to 0.67°. Consequently, field edge vignetting is about one magnitude, or 60%. The rear-end boundary is taken to be determined by the baffle tube inner diameter, equal to 37mm (shown as the vertical pink line right after the reducer's last surface). The top field edge diffraction images are as they would look like without vignetting at the rear end (note that the vertical elongation of the diffraction disc at the field edge is not due to astigmatism, but due to vignetting at the rear end effectively reshaping the aperture). Flat field blurs (left) are roughly comparable in size to the base SCT, possibly somewhat larger in their bright area; it's hard to tell since the blur doesn't entirely fit into the display window (note that the same linear size implies larger angular size in a faster system). Longitudinal shift of the astigmatic plots for different wavelengths is due to the longitudinal chromatism, with the plot origin by default being at the paraxial focus. This particular example has 0.52 reduction factor, but similar correction level - with the diffraction image enlarged according to the focal length - can be achieved at the 0.63 reduction (for instance, two N-FK5/F5 doublets, 200/-133/-333 and -1222/-133/-244mm radii). With the field curvature roughly at the level of the base SCT, coma can be significantly reduced, with astigmatism in a range from somewhat lower to somewhat higher. Levels of each can vary; the example shown (bottom) has little less than half the coma of the base SCT, and about 15% lower astigmatism (the latter results in a somewhat stronger best field curvature). As a result, best image field edge blur is noticeably smaller, while the flat field blurs are roughly comparable in size, but being more round with the reducer. By adding astigmatism of opposite sign to that of the Petzval, a flat astigmatic field is possible, but the size of field edge blur does not change significantly (below). With this corrector coma is nearly 25% smaller than in the base SCT (surfaces contributing astigmatism also contribute coma), but astigmatism is five times larger in order for the best - median - astigmatic surface to be flat with the Petzval curvature over 50% stronger (keep in mind that the aberration coefficients express linear transverse aberration, hence the same value implies larger angular aberration in a faster system; similarly, the coma coefficient is over is 50% larger than that for astigmatism, but considering that for any given aberration magnitude transverse coma is 2.5 times larger than astigmatism, the latter is some 60% larger than coma as wavefront error). Blur size and shape can probably be changed somewhat in the final optimization, but with the given configuration no major improvements are possible. Main advantage of the curved field version is that the visual performance is noticeably better. Reducer performance changes with the back focus, hence it will be different in SCT units of different size, and even of the same size but different back focus lengths. For instance, if the 0.63 reducer with curved field from above is placed in a C11 unit, which has significantly longer back focus than an 8-inch SCT, correction of the above 0.63 reducer is nearly perfect across best image surface; however, best image curvature is more than twice stronger than in the base SCT unit (below). The corresponding angular field radius for the base SCT is 0.25°, with the corresponding ray spot plots and diffraction images given for the best (top) and flat field (bottom). With the baffle tube inner diameter of 54mm the only vignetting at the rear end is by the reducer itself (assumed 48mm ID, probably a bit less in the actual unit). Note that the diffraction images are, for clarity, larger by a factor of 2 with respect to the 8-inch SCT. Best image diffraction patterns for both axis and field edge are given for linear and logarithmic response. Here, due to the wider converging cones at the reducer, it does add some spherical aberration, lowering axial Strehl in the central line to 0.7 (undercorrection). Lenses do make Petzval curvature somewhat stronger, but the main factor is that they take out astigmatism of opposite sign to the Petzval, which increases field curvature as a price of coma/astigmatism correction. In general, any back focus extention through the reducer tends to reduce both, coma and astigmatism, but at the price of a stronger Petzval curvature. Finally, one more SCT reducer/corrector configuration, a 3-singlet arrangement used by Meade for its f/3.3 reducer. Shown is f/4.4 reducer using two common glasses, which fully corrects for coma while, similarly to the previous example, makes field curvature somewhat stronger. However, as the image is smaller, the curvature matters less. Field curvature effect becomes significant only close to the field edge. This reducer would be primarily intended for photography, so its best curved field performance is irrelevant, but the simulations at the bottom illustrate modest effect of the quite strong field curvature (R=-144mm) on flat field performance (which would be still lower with the 0.33 reduction ratio). This reducer also induces spherical aberration (undercorrection) which is reduced if it is placed closer to the focal plane. That, however, tends to increase astigmatism, and make full correction of coma more difficult. As with the previous example, it is easier to make surface flatter with some residual coma left in, since the same surfaces that induce correcting (opposite) coma also induce astigmatism of the "wrong" sign. But, as this example illustrates, good performance is possible even with a strong field curvature. Actual units, being computer optimized, probably deliver still better Don Dilworth's two mirror-relay telescope uses lenses to transfer an internal focus out to an accessible location. It could also be considered a two-mirror system with sub-aperture lens corrector(s), but the relay property makes these systems different from the rest. Unlike other two-mirror relay systems - notable example being Robert Sigler's design - which can have very good axial correction, but much left to be desired field wise (Sigler's 6-inch ƒ/7 system has coma close to that of an ƒ/4.5 paraboloid, and a horrendous field curvature of -44mm), Dilworth's design achieves both. It has an extraordinary monochromatic axial correction - practically zero aberration - weakly curved field, field aberrations lower than comparable aplanatic Cassegrain (Ritchey-Chretien), nearly 0.4 waves p-v of longitudinal chromatism in each, C and F line (comparable to a 100mm ƒ/30 achromat) and no detectable lateral color. Additional positives include relatively small central obstruction, fast focal ratio, and generous back focus. The negative is more complex alignment, and collimation sensitivity, due to the three widely separated lenses. However, with the relatively slow primary, it should not be significantly out of the ordinary. Majority of the telescopes in use are those made for general astronomy. However, a telescope for general purpose may be limited in its ability to serve for some special purposes, such as observing outside of the visible range (infrared, radio), or observing particular astronomical object with special properties, such as the Sun. Among various specialized instruments for solar observations (coronagraph, spectroheliograph, etc.), probably the most interesting for an amateur is a telescope specialized for use of the H-α (hydrogen alpha) filter. Blocking the rest of abundant solar radiation makes possible observing of a variety of solar features, otherwise less pronounced or invisible (prominences, filaments, solar eruptions, etc.). Solar H-α etalon telescope The H-α solar telescope can either use H-α filter placed in front of the objective, or H-a etalon placed inside telescope, combined with a blocking filter in front of the objective (for astrophotography of emission nebulae, such filter can be mounted close to the image w/o use of blocking filter, but otherwise it is avoided due to the heat-related risk). For the optimum performance, such filter requires near-collimated light, hence a telescope with H-α etalon located behind the objective needs special arrangement providing a collimated section within the light path. It can be created in a simple arrangement of three singlet lenses, two positive and one negative, as shown below. The advantage of the etalon arrangement is that the filter can be manipulated in order to increase, or modify performance. For instance, double etalon will further narrow the passband; tilting the etalon slightly shifts the passband, allowing optimizing the passband to the detail of observation, and so on. This simple arrangement cancels all aberrations except field curvature and some residual astigmatism (chromatism, of course, is not corrected, but it is of no consequence operating at a single spectral line). Despite the best field being strongly curved, the 0.7-degree field is still well within diffraction limited even at the edge, due to the small linear field extent. Width of the collimated section is a function of the front-to-mid lens separation: the smaller separation, the smaller width, and vice versa. The flat-field correction somewhat improves with the smaller separation, but not significantly. For any given separation, the width of collimated section can be also widened by using stronger glass for the mid element. It also improves field correction but, again, only by 10-15%, or so. The ethalon configuration can be used with an achromat as well. The focal length of the negative front lens needs to be equal to its separation from the original focal plane, and the positive rear lens needs to be slightly weaker (depending on their separation). Best configuration here is with the two lenses facing each other with their curved side. The aberrations induced are a small amount of overcorrection, which actually improves correction in the red, and field curvature. As an example, placing a negative plano-concave lens lens (f=-291mm) at 800mm from the objective in a 150mm f/8 achromat, with the plano-convex lens (f=304mm) 70mm behind it, induces slightly over 1/10 wave P-V in the green e-line, with the error in the red r-line reduced to 1/30 wave. No appreciable effect on chromatism and coma, but the best field curvature goes from -460mm to -270mm. 11.4. WAVEFRONT SPLIT ERROR IN THE AMICI PRISM In order to restore the proper horizontal orientation to the image, Amici prism uses configuration with its back side split into two surfaces coming together in the plane containing optical axis, and at 45 degrees with respect to it. As a result, converging wavefronts containing this line are split in two, with each portion being reflected to the opposite side, and after reflection on that side merging together in the point image. If the prism is less than perfectly symmetrical, these two parts of the wavefront will have different optical path lengths, with the phase differential producing aberrated diffraction images. In addition, since a prism acts as a plane parallel plate, inducing longitudinal chromatism, the color foci of the two wavefront portions won't coincide, which can result in noticeable color infidelities. But this effect is generally smaller and less important than the diffraction effect at the best focus. Images below are OSLO simulations of these diffraction effects, for two simple scenarios: (1) even phase error between the two wavefront portions, caused by one side of the prism being slightly longer, and (2) the error gradually increasing away from the dividing line, as a consequence of one back side being at a slightly different angle. The two parts of the wavefront have a constant path difference. In this case, the part of the wavefront left of the central line is delayed, i.e. having the longer path with respect to the other one. Converging beam has a relative aperture of f/5, with the prism front side 100mm in front of the original focus, and about 10mm wavefront diameter at the splitting line. About 1/9 wave of spherical aberration induced by 50mm in-glass path is present in all simulations. The in glass differential δ produces optical path differential (n-1) δ, where n is the glass refractive index (in this case the glass is Schott BK7, with n=1.517 for the 546nm wavelength). The side length error is generally acceptable for δ~λ/4 and smaller (corresponding to little over 1/8 wave of optical path differential). It is still better than diffraction limited for twice as large error, but doubling it again makes it unsuitable for higher magnifications (with the wavefront diameter at the splitting line of 10mm, the width of the field affected in the final image is nearly as much). At δ=1 and the wavefront split in two halves, the resulting diffraction image is split in a double maxima (MTF graphs below show the contrast consequence). In the second scenario, the path difference, i.e. wavefront error gradually increases away from the split. The prism side angle deviation is 1/4, 1/2 and 1 arc minute (the actual error is somewhat larger, due to the longer path to the opposite side). Since the wavefront becomes folded, resulting aberration has similarities with astigmatism, particularly when the two wavefront portions are comparable in size. For smaller prism errors, resulting wavefront errors are the smallest for the wavefronts split in two, since they are positioned over the area of lower deviation (that changes with the largest prism error, because large wavefront errors result in a different, less predictable phase combining). MTF graphs on the bottom shows contrast loss for the three patterns with the largest prism error. The simulations suggest that the acceptable prism error of this kind should be below 10 arc seconds. It is possible to correct this kind of errors with phase coatings, but it would require accurate measurement of the prism shape before it can be applied; in other words, it would make it prohibitively expensive. Another kind of diffraction artifact comes from the middle line where the two double slanted sides meet. If it was near-perfect, with negligible width, there would be no noticeable effect. If it is, instead, say, 0.05mm wide, with the cone width at that location of, say, 10mm, it would be effectively equaling a vane of 1/200 aperture width (e.g. 0.5mm with 100mm aperture). With bright objects, it would cause a thin long spike orthogonal in orientation to the prism's middle line. 11.5. ABERRATIONS OF THE PRISM DIAGONAL When in a converging light cone, prism diagonal generates aberrations, both chromatic and monochromatic. Since it acts like plane parallel plate, Eq.105.1 applies, with d/L becoming 1/2F, F being the focal ratio (f/D, focal length by aperture diameter). Since d/L becomes a constant for any given system, prism distance, i.e. beam diameter on its front surface becomes non-factor, with the only remaining factors being the focal ratio, in-glass path (thickness) and glass refractive index. Taking for the index n~1.5, gives for the only two possibly significant monochromatic aberrations the P-V wavefront error (mm) as W=T/1380F^4 (spherical aberration) and W=T/65F^3 (coma). Graph below shows how they change as a function of focal ratio (F). Spherical aberration and coma affect all wavelengths neary equally, which makes them a part of chromatic error as well. Purely chromatic errors are longitudinal chromatism, caused by the change in refraction with the wavelength, and lateral color, which is generally negligible. Picture below illustrates these aberrations on a 32x32m prism (BK7) in f/10 and f/5 cone. The objective is a "perfect lens", so all aberrations come from the prism. Note that different prism types have different in-glass path length: for a given clear opening, Amici prism has it about 60%, and penta prism 3.4 times longer than the standard 90-degree prism. At f/10, Zernike term for primary spherical aberration (8) is 0.002853, which divided with 5^0.5 gives the RMS wavefront error as 0.001276. The corresponding P-V error is larger by a factor 11.25^ 0.5, or 1/234 wave (both in units of 546nm wavelength). It, expectedly, agrees with the equation, since no other significant aberrations are present. The term for coma (4) is 0.003984, which divided by 8^0.5 gives the RMS error as 0.00141 (the P-V error is larger by a factor 32^0.5). So, at f/10 coma is somewhat larger than spherical aberration, but both are entirely negligible. Longitudinal chromatism has a form of reversed primary chromatism, with longer wavelengths focusing shorter than shorter wavelengths (the consequence of the refraction at the front surface being diverging). It is a consequence of image displacement caused by the cone angle narrowing inside the prism (looking at the raytrace side view, it is causing the oblique line sections to become longer). The displacement is given by (1-1/n)T, and the variation in n (δn) with the wavelength produces longitudinal chromatism, given by (T/n^2)δn. It remains nominally unchanged with any focal ratio (of course, due to the smaller Airy disc at the faster focal ratio, chromatic error increases correspondingly). At f/5, the error in both, F and C line is over 0.4 wave P-V. It will change relatively little in a fast achromat, unless a very small, but it would introduce noticeable color error in the reflecting systems, or any other fast systems with a very low level of chromatism. Misaligned prism will induce all-field coma, astigmatism and lateral color. Coma dominates astigmatism at fast focal ratios, while the latter can be larger at ~f/10 and slower. At f/5, 1-degree prism tilt vs. optical axis will induce 0.023 wave RMS of coma, and 0.0068 wave RMS of astigmatism. Since coma changes with the 3rd power of focal ratio, and astigmatism with the 2nd, at f/10 coma drops to 0.0028, and astigmatism to 0.0017 waves RMS. But with the coma changing with the tilt angle, and astigmatism with the square of it, at 2-degree tilt the latter will be slightly larger. Since the magnitude of tilt -induced aberrations can be significant only at fast focal ratios - except at insanely large tilt angles - it is only coma that could be of concern. Tilt-induced lateral color (prism effect) doesn't change nominally with the focal ratio, but its magnitude vs. Airy disc does, in proportion to it. At f/5 and 1-degree tilt, the mid-field separation of the F and C lines is 0.002mm, or about 30% of the Airy disc diameter. For the field center, the separation shouldn't be larger than half the Airy disc diameter. Since it increases with the tilt anlgle, it should stay below 2 degrees. At f/10, 1-degree tilt will induce only half the error at f/5, i.e. the F and C lines separation will be about 15% of the Airy disc diameter. 11.6 DIFFRACTION EFFECT OF NON-SYMMETRICAL VANES ARRANGEMENT The standard 4-vane spider is the simplest form of the kind, but rotational stability is not its strongest point. To improve on that, the vanes need to be rearranged, breaking the symmetry of a cross. Simulations below show diffraction effect of two of such arrangements vs. standard 4-vane form. Due to the different converging angles from the vane sections, the modified spider produces wider, complex spikes of similar length. While the amount of energy transfered out of the Airy disc depends solely on the vane face area, significantly wider spikes could appear either more, or less pronounced, depending on the detector's filtering. The eye could be biased toward wider, fainter spikes, or toward narower, brighter ones (for identical vane area). Which it is, has to be established experimentally; it is quite possible that it could vary individually. A variation of the middle vane arrangement, used on some commercial telescopes lately, have the vanes shifted off only slightly, so that their sides lie on a common strait line splitting the aperture in half (below). Diffraction images show similar doubled spikes, but with one clearly dominant. Since the spike energy is nearly identical with that of the standard cross arrangement, spikes should be a bit less pronounced visually, with the faint spike likely remaining invisible (note that these patterns are brighter due to 2.5 times lower normalization value for the unit intensity, 0.02). Doubling the vanes to increase spider rigidity will also alter their diffraction effect. The reason is diffraction interference between the vanes. As the simulation below shows, the spike of a doubled vane is broken into bright segments and extended vs. single vane pattern. The consequence is slightly more energy transferred to the outer areas, even if the vane area is kept unchanged (graph on the right is effectively magnified by showing 5 times smaller radius on the same frame size). On the doubled area vanes' pattern (right) can be detected presence of secondary side maximas, which would imply less energy in the principal maxima, i.e. main spike. As with the vane configurations above, whether the eye would be more sensitive to a longer, segmented, and slightly less bright spike, only experimental examination will answer. 11.7 RAYTRACING EYEPIECES: REVERSE vs. DIRECT Eyepiece performance level is commonly determined by reverse raytracing, i.e. in a setup when the eyepiece exit pupil becomes aperture, and collimated light pencils passing through it travel through the the eyepiece in reverse, to form an image in front of the field lens, at the nearly same location where forms the image of the objective. Image formed by raytracing eyepiece in this manner is real image, but it is neither image that a perfect lens would form if placed at the eyepiece exit pupil, nor image of the objective. Rather, it is an image at the location of objective's image that would produce perfectly collimated pencils at the exit pupil end. As such, this image reflects aberrations of the eyepiece in their kind and magnitude, some of them reversed in sign, some not. For example, if reversed raytracing produces field curvature concave toward eyepiece, it implies that such curvature would produce perfectly collimated exit pencils because eyepiece itself generates curvature of opposite sign (it seems illogical since the two curvatures seemingly coincide, but the image space of the eyepiece is behind the eye lens, not in the objectve's image space; hence with a flat objective's image the eyepiece would produce exit pencils becoming converging toward outer field - because the field points in the image of the objective would've been farther away than what it needs to form a collimated beam - i.e. would form a curved best image surface of the opposite sign). On the other hand, if reverse raytracing produces overcorrection, this means that the eyepiece would, with zero spherical aberration from the objective, form exit pencils with rays becoming divergent toward pencil edge (since off-axis points on unaberrated image surface are in this case closer to the eyepiece), focusing farther than rays closer to the center, i.e. also generating overcorrection. As long as the geometry of pencils passing through the exit pupil is identical, so will be the aberrations generated. But this is strictly valid only for points close to axis. The farther off field point, the more likely that the perfect exit pencils that we are starting with in reverse raytracing won't exactly match those generated by a perfect input from the opposite end, and that will cause different aberration output as well, possibly significant. One particular difference is that reverse raytracing can be done from one fixed pupil location at a time. It is generally insignificant with eyepieces having relatively small exit pupil shift with the change of field angle (so called spherical aberration of the exit pupil, with the pupil generally shifting closer toward eye lens with the increase in field angle), but when it's significant, raytracing from any fixed pupil location will cause gross distortion of the astigmatic field for field zones with different exit pupil location. The only way around it is to raytrace for several different exit pupil locations, and piece up the actual field from that. This problem vanishes in direct raytracing, where every point's cone simply goes to its actual exit pupil. To make easier to detect the differences, a wide field eyepiece is needed, and for the ease of accessing its optical process it should be simple as possible. The perfect candidate is a modified 1+1+2 Bertele, which is for this purpose designed to produce 80° apparent field of view (AFOV). While not in the league of the (much) more complex designes for this field size, it is still significantly better than other conventional designs. Below is how it raytraces in reverse, and directly, the latter using OSLO "perfect lens" as the objective and at the eye end. Eyepiece focal length is 10mm, and exit pupil diameter is 1mm, hence it processes f/10 beam. Top half shows reverse raytracing. Surface #1 is the aperture stop, and #9 the image. The section between two marginal cones (6.4mm radius) is the actual image, and the full length of the vertical dashed line (8.4mm radius) is Gaussian image, i.e. image that would've been seen w/o distortion at the entry field angle (in effect, the image seen at that angle from the distance equal to the eyepiece focal length, as illustrated with the dotted lines at left). Column of numbers at right are the heights of the marginal chief ray (central ray) at all surfaces. Astigmatism plot shows reversal of the tangential line (in the plane containing axis and chief ray) toward field edge, preventing further increase in the outer 20%, or so, of the field radius. At 40° off, the P-V wavefront error is 2 waves. Note that due to the exit pupil shift (3.7mm eye relief for 40° to 5.5mm for half-field), tangential line for 3.7mm pupil position slightly magnifies the longitudinaludinal aberration for the upper half of the field; the actual line is a bit flatter over that section. As the wavefront map shows, on its way through the lenses the wavefront acquires vertical elongation, despite it being cut into a horizontal ellipse (0.5x0.383mm) at the aperture stop (OSLO Edu doesn't have pupil-that-tilts-with-field-angle feature). Longitudinal aberration plot shows entirely negligible spherical aberration, and not entirely negligible axial chromatism. Defocus δ in the blue F line of less than 0.1mm indicates ~0.2 waves P-V wavefront error at f/10 (for 486nm wavelength, from δ/8F^2), and four times as much at f/5. Error in the violet g-line is three times larger. lateral color is well controlled accross the field, with the F-to-C separation reaching half the Airy disc diameter at the field edge. Bottom half shows direct raytrace of the same eyepiece with a "perfect lens" as the 89.4mm f/10 objective (the focal length is determined from the field-end angle of the marginal chief ray in reversed raytracing). Focal length of the "perfect lens" at the image end is set to 10mm, to produce a directly comparable f/10 system. Longitudinal chromatism and spherical aberration are nearly identical to those in reversed raytracing. But that is where the similarity ends. Here, Gaussian image is smaller than the actual, apparent one, as a result of the positive (pincushion) distortion, nominally the inverse of the negative distortion in the reversed raytracing (1.32 vs. 0.76). As a result of positive distortion, with the edge cone visibly more elongated, the Airy disc at 40° is noticeably larger than on axis, unlike the reversed tracing, where it is smaller. Longitudinal astigmatism at 40° is about doubled, but the P-V error is smaller: 1.8 wave. This is mainly the result of the effective f-ratio for that cone being f/13.2, with the transverse astigmatism smaller by a factor 1.75 vs. f/10, and more so vs. effective focal ratio at this field point in the reverse raytracing. Tangential curve now extends farther out, as a result of the higher order astigmatism now being of the same sign as the primary, adding to it instead of taking away as in reverse raytracing. This is caused by the narrower cone of light passing through the eyepiece for the outer field points, which much more affects secondary astigmatism, changing with the 4th power of cone width (as opposed to the 2nd power with primary astigmatism), with the 40° pencil not quite filling out the exit pupil circle outlined by the axial cone, as it does in reverse raytracing. As a result of the different astigmatism plot, field curvature also changes: instead of zero accommodation at the edge, and +1 diopter required for 0.7 field radius, now edge requires somewhat over +1 diopter, and the 0.7 field radius is flat, requiring zero accommodation (small box bottom right, accounting for the effective 13.2mm focal length of the perfect lens at this point). However, it should be noted that if the two astigmatism plots would be corrected for the distortion effect, they would become very similar, despite the difference in the sign of secondary astigmatism. Unlike the 40° wavefront in the reverse raytracing, elongated vertically, here it's flattened, and noticeably more so. This is caused by refraction at large angles, compressing or expanding wavefront vertically, and shows the true extent of it (the elongation in reverse raytracing is partly offset by the horizontally elliptical wavefront outline determined at the entrance pupil). This asymmetrical astigmatic shape will result in asymmetry of both, ray spot plots and diffraction images along the extent of longitudinal aberration. Here, the tangential line, laying in the sagittal plane, is noticeably thicker, because it's formed by the (shorter) vertical wavefront sections focusing into it (blue on the wavefront map is its delayed area, hence it forms convex surface focusing farther away), as well as about 50% longer, since the wavefront extends that much more horizontally. As a result, best focus is not in the middle between sagittal and tangential focus, but closer to the sagittal line, which is laying in the tangential plane (the one containing axis and the chief ray), as indicated on the astigmatism plot. Overall, the differences in aberrations magnitude are small, but one needs to keep in mind that some of them do reverse in sign with the change in light direction. For instance, nominal lateral color error is reversed, and roughly doubled, but the actual change is relatively small due to the larger Airy disc (it is similar with astigmatism, with the two plots appearing grossly different, but with the actual P-V error diferential being near negligible). 11.8 POLYCHROMATIC STREHL: PHOTOPIC vs. MESOPIC vs. CCD Polychromatic Strehl for telescopes with refracting elements is commonly given for photopic (daylight) eye sensitivity. Strictly talking, it is valid only for daytime telescope use, but both sides of the market seem to be neglecting it, or simply are unaware of it. Since the Strehl figure is used as a qualifier of the level of optical quality - 0.80 for so called "diffractin limited", and 0.95 for "sensibly perfect", it does matter to know that it is limited to the sensibility mode used for calculating the Strehl. In general, due to the higher overall sensitivity in the mesopic mode - and particularly toward blue/violet - and the error usually being greater in the blue/violet, the mesopic (twilight level) Strehl will be lower than photopic (broad daylight). It likely worsens somewhat toward scotopic (night conditions) mode, but telescopic eye is most likely to be within the range of mesopic sensitivity. How significant is the difference between photopic and mesopic Strehl depends primarily on the magnitude of chromatic error in the red and blue/violet, with the latter being more significant since, unlike the blue/violet, sensitivity to the red generally declines toward mesopic and scotopic mode. While these eye sensitivity modes are relevant for visual observing, for CCD work it is the chip sensitivity that needs to be used for obtaining the relevant, CCD Strehl. Here, CCD sensitivity is a rough average of the range of sensitivities of different chips. As illustration of the difference between Strehl values for photopic, mesopic and CCD Strehl, it will be calculated for a highly corrected TOA-like triplet in two slightly different arrangement: one with the standard Ohara crown, S-BSL7, and the other with its low-melting-temperature form, L-BSL-7. Slight difference in dispersion between the two is sufficient to produce larger axial error, particularly in the red and violet, which will show the difference in correction level between two seemingly highly corrected "sensibly perfect" systems, when judged by the photopic Strehl value alone. Strehl values are calculated using 9 wavelengths spanning the visual range, as shown below. Mesopic sensitivity is approximation based on empirical results, somewhat different from the official mesopic sensitivity, which is merely a numerical midway between photopic and scotopic values. From top down, first shown is a lens using S-BSL7, extremely well corrected for axial chromatism. So much so, that the mesopic Strehl, and even CCD Strehl are only slightly lower. This lens practically has zero chromatism in the violet, a rarity indeed. Replacing S-BSL7 with L-BSL7 (same prescription, except slightly stronger R1 - 2380mm - to optimize red and blue) roughly doubles axial chromatism, except for the violet, which is now at the same level as the deeper red. Photopic Strehl is still excellent, suggesting there is no noticeable difference in the chromatic correction between the two. However, mesopic Strehl tells different story: this lens is not "sensibly perfect", and its CCD Strehl sinks toward 0.80. Mesopic Strehl gives different picture for achromats too. According to its photopic Strehl, a 100mm f/12 achromat is slightly better than the "diffraction limited" at its best diffraction focus (0.09mm from the e-line focus toward the red/blue; first column shows Strehl values at the best green focus). But its mesopic Strehl, also at the best diffraction focus, is only 0.63, and its CCD Strehl dives down to 0.45. Knowing that the Strehl number reflects average contrast loss over the range of MTF frequencies (for the mesopic Strehl it is, for instance, 37%), implies that this achromat is nowhere close to "diffraction limited" under average night-time conditions. For that, it needs to be twice as slow, f/24. In conclusion, it is hard to draw a precise line for where a "sensibly perfect" photopic Strehl should be for telescopes used at night, but it seems safe to say that it does need to be significantly better than 0.95; probably close to 0.99. 11.9 PLASTIC ACHROMATS Optical plastics are widely used for production of small and not so small lenses for all kinds of cameras, glasses and optical devices, but rarely for telescopes, and when used, nearly without exception for those of low quality. Most important optical plastics are acrylics, polycarbonates and polystyrenes, but some other are also viable. Optically, they can be as good as glass, but have several times higher thermal expansion, and a 100-fold higher variation of the refractive index with temperature. Also, they are more prone to static charges, and more difficult for coatings. On the good side, they are ligther, safer, and cheaper. Technological advances resulted in a wider number of optical-grade plastics available, which makes their application for small telescope objectives easier. Follows overview of the performance level of optical plastics - mainly those listed in OSLO Edu catalog - as components of achromatic 100mm f/12 doublets and triplets. In general, they have better color correction, occasionally approaching - even exceeding - the minimum "true apo" requirement of 0.95 Strehl. Doublets are of the Steinheil type, with the negative element in front, because the "flint" element in most of objectives, polycarbonate, is more resistant to impact and temperature (in general, order of elements does not significantly change the output). Performance level is illustrated with a chromatic Focal shift graph, against that for the standard glass achromat (BK7/F2, black plot). Chromatic focal shift shows the paraxial focus deviation for other wavelengths vs. optimized wavelength (546nm, e-line). In the absence of significant spherochromatism - which is here generally the case - it is a good indicator of the level of longitudinal chromatic correction. The P-V error of defocus can be found from the graph for any wavelength, using P-V=δ/8F^2, where δ is the focus shift from the e-line focus (0 on the graph) and F is the focal number. Graphs are accompanied with the corresponding photopic Strehl (25 wavelengths, 440-680nm), except for the last two, whose rear element plastics are not listed in OSLO (direct indexing for five wavelength was entered from ATMOS). The Strehl value is for the diffraction focus, which for most of these objectives does not coincide with the e-line focus (amount of defocus is given as z, and can be positive or negative, depending on the plot shape). All but one plastic lens combination have a higher Strehl than the glass achromat (0.81). Some combinations have near-apo correction in the blue/violet, some in the red, but most important is how well corrected is the 0.5 to 0.6 micron section (approximately). Two doublets have Strehl value exceeding 0.9, as well as two triplets, with one of them qualifying as a "true apo" by the poly-Strehl criterion of 0.95 or better (#8). It wouldn't satisfy the P-V apo criterion having 2.3 wave error in the violet g-line (1/6 wave F-line, 1/5 wave C, and 1/2.5 wave r-line) but due to the very low eye sensitivity to it in the photopic mode, it has little effect on the photopic Strehl. The more appropriate for night time use, mesopic Strehl, would be somewhat inferior to objectives with a similar photopic Strehl, but better violet correction. There are other plastics available, and more combinations possible (also, the properties of any given plastic can vary somewhat depending on its production process), but these shown here suffice to conclude that optical-grade plastics can be superior to the standard glasses in chromatic correction. Some could even produce the "true apo" level in the range of mid to moderately long focal ratios. 11.10 HOYA FCD1 vs. FCD100, TRIPLET OBJECTIVE The older generation of extra-low dispersion glassess, with Abbe number around 81, is commonly considered inferior in their performance limit to the latest generation, with Abbe number around 95 (also called super-low dispersion, or SD glasses). However, the larger Abbe number gives one single advantage: with any given mating glass the higher order spherical aberration residual is lower, allowing for somewhat faster lens for a given design limit in the optimized wavelength. But the difference is generally small. Let's illustrate this with Hoya's FCD1 and FCD100 glasses in a 5-inch f/ 7.5 triplet objective. Limiting the mating glass to Hoya's catalog, the best match for FCD1 is BCD11, and for FCD100 BSC7 (Hoya's equivalent of Schott BK7). As image below shows, a 5" f/7.5 triplet with FCD1 (top) has photopic polychromatic Strehl rounding off to "sensibly perfect" 0.95 (mesopic value would be somewhat lower, but not by much, considering relatively low errors across well balanced spectrum). The FCD100 triplet (middle) does have better polychromatic Strehl - rounding off to 0.98 - but about half of the differencial comes from the optimized line correction. Since the limit in the optimized e-line for the FCD1 triplet is at the level of 1/15 wave P-V of primary spherical aberration, the actual units with a similar optimized line correction would have no perceptible difference in color correction (granted, any given optimized line correction level would be easier to achieves in the FCD100 triplet, due to its more relaxed inner radii). Note that the FCD1 triplet correction mode minimizes the error in the violet g-line, which is not the best general correction mode. With slightly less of the positive power, the error in violet would increase, but would decrease in the other three wavelengths, with the F and C nearly touching at the edge zone on the OPD graph. In such case, the FCD1 poly-Strehl increases to 0.966 which, considering the unequal error in the optimized line, would imply the same level of chromatic correction. It is obvious on the LA graph that the FCD1 triplet has significantly higher spherochromatism on the primary spherical level, most of it the result of more strongly curved inner radii. But the sign of higher order spherical residual - after optimally balancing the optimized wavelength - significantly reduces the aberration in the blue/violet, while increasing it only moderately on the red end. As a result, Strehl values for non-optimized wavelengths are generally close to those of the FCD100 triplet. One other possibility is using moldable glasses. Best match for FCD1 is M-BACD12. If one surface is aspherized, all four inner radii can be equal, and the triplet is nearly as well corrected as the one with FCD100 glass (bottom). In all, how objective performs still depends more on the combination, than any single glass. It is possible that the older ED glass objective even performs better, although it is generally to expect small to negligible advantage with the higher Abbe# varieties. The difference is more pronounced in the doublets, because the higher order residual increases exponentially with the lens curvature, and doublets require them significantly stronger than triplets. 11.11 MICROSCOPE EYEPIECES IN TELESCOPES While not common, use of microscope eyepieces in telescopes does happen. How well these eyepieces can be expect to perform? Is a good name on them implying they will be as good as those made for telescopes, or even better? The answers are: "no one can tell", and "no", respectively. There are two main differences between the standard microscope and a telescope with respect to the eyepiece performance: (1) due to the significantly shorter objective-to-image distance - for a standard old-fashioned microscope the main part of it the so called "optical tube length" (OTL), standardized to 160mm - rays entering any given eyepiece field stop have significantly larger divergence, and (2) due to the very small objective, the effective focal ratio is very high (measured as a ratio of objective diameter vs. objective-to-image separation; not to confuse with the microscope numerical aperture, which is measured vs. objective-to-object separation). The former generally increases off axis aberrations, while the latter makes them smaller. In other words, looking only at #1, eyepiece optimized for a microscope would have to be sub-optimized for a telescope with respect to field correction. How much does #2 offset for this? Since the microscope magnification can also be expressed as a product of the objective and eyepiece magnifications - the former given by OTL/f[o], and the latter by 250/f[e], with f[o] and f[e] being the objective and eyepiece focal length, respectively - we'll illustrate the divergence vs. focal ratio offset with an average objective of 10mm focal length, and a 20mm focal length Huygenian eyepiece (from the above, they produce 160x250/10x20=200 magnification). Image below shows the optical scheme of a microscope (top) and an actual raytraced system with the given parameters (bottom). The objective and eye lens in the latter are "perfect lens", so neither contributes aberrations (note that the correct magnification for perfect lens 1 should be -16.13, but makes no difference in the ray spot plot). The eyepiece is upscaled 10mm Huygenian shown under "Individual eyepieces" on eyepiece raytracing page, so its nominal aberrations in a telescope are twice larger than those shown for the 10mm unit. In this microscope setting, the 20mm unit (due to re-orienting its effective focal length is around 22mm) shows entirely negligible aberrations all the way up to its 10mm radius field stop. The effect of the f/86 cone (the paraxial data given below objective is for the objective only) makes the effect of significantly stronger divergence entirely negligible over the strongly curved best image field (-4.5 diopters of accommodation required at the edge). Even over flat field, it dwarfs defocus effect to 1/12 wave P-V at the field edge. Note that the field is given in terms of object height, with 0.618mm corresponding to 25° apparent FOV in the eyepiece, and 3.33° true field of the objective, i.e. angular radius of the object (magnification is not, as with a telescope, related to the angular size of the object in the system, but to its angular size as seen from the standard least distance of distinct vision, 250mm; on the schematic microscope, that angle magnified by the objective is α[0], and the final angular object radius is α). In all, correction requirements for microscope eyepieces are much lower than those used in telescopes. This applies to both, axial and off-axis correction, and that is the main risk in using microscope eyepieces for telescopes: those performing just fine in a microscope, could become sub-standard in a telescope. Another possible obstacle, not visible in this demonstration, is that microscope eyepieces could be optimized to offset typical aberrations of microscope objectives, while telescope eyepieces are generally designed to produce best possible stand-alone image. 11.12 SCHMIDT vs. HOUGHTON TRIPLET CAMERA vs. STANDARD SCHMIDT While looking for some modern, "extremely achromatic" camera prescription, a drawing of a triplet catadioptric camera by Bernhard Schmidt caught my eye. It was called an "alternative to the standard Schmidt", and made me curious: just how close it is. Then, in an online PDF file which contained data as close as possible to the prescription ( Journal of Astronomical History and Heritage), there was quite similar triplet camera patented by Houghton some 15-20 years latter (1944, US Pat.#2,350,112). Whether Houghton could know for Schmidt's work is anyone's guess, but doesn't make less interesting finding out how do these two cameras compare, and how close they come to the standard (aspheric plate) Schmidt camera. For the Schmidt triplet, the original handwritten prescription by Schmidt was used, scalled down to 100mm aperture diameter, and optimized by very minor tweaks (there is also a 1934. prototype of the same design which will be mentioned in raytracing analysis). All three cameras are 100mm aperture f/1, to make them directly comparable, and the field radius is 6 degrees. Image below shows raytrace of the downscaled Schmidt 3-lens catadioptric camera. The the outer two lenses are plano-convex and symmetrical with respect to the biconcave mid element. A single glass, probably Schott's old O15 crown (n[d]=1.53, v[d]=58.99) was used; since it is not listed in OSLO Edu catalogs, the closest found was used (during rescaling the lenses got somewhat squeezed up; increasing the gaps to 6.9mm, needed to clear axial pencil, doesn't appreciably change the Central obstruction size is not given. Image size sets the minimum size at 20% linear, which would practically have to be somewhat larger. Since the effect is near-negligible, both central obstruction and (possible) spider vanes are omitted. LA graph shows relatively significant higher-order spherical residual on axis. The corresponding wavefront errors for five selected wavelengths are given by the OPD (optical path difference) plot. Best image surface doesn't fall midway between the tangential and sagittal surface due to the presence of odd secondary (Schwarzschild) aberrations (in presence of spherical aberration, the astigmatism plot, originating at the paraxial focus, is shifted away from best focus, but best image surface is vertical when its radius coincides with the one entered in raytrace). While all five wavelengths have a common focus for the 75% zone ray, their best foci - mainly due to spherochromatism - do not coincide, resulting in a nominally significant chromatism. Still, the g-line error is only about three times the error in the optimized wavelength. The ray spot plots indicate relatively insignificant chromatism (Airy disc is a tiny black dot; its e-line diameter is 0.00133mm, or 1/ 300 of the 0.4mm line). Polychromatic diffraction blur exceeds 0.02mm at 4.1° and 0.04mm at 6° off (the five wavelengths, even sensitivity). Should be mentioned that other than the prescription, there is an actual unit, a prototype of this camera type from 1934. According to the measurements taken by the paper authors, it is nearly identical to the prescription, except that the middle element has slightly weaker radii (perhaps fabrication inaccuracy). When scaled to the comparable 100mm f/1 system (originally 125mm f/1.1) the overall correction is somewhat worse. Houghton's patented camera differs in that it has two biconvex lenses framing in the biconcave central element. Also, it uses two different glasses. Again, there was no near-exact match listed for a glass quoted for the mid element, but the one used for raytracing is close enough not to make the end result significantly different (the minor optimizing tweaks are probably in better part due to the small differences in glass properties). While the LA graph looks better on the first sight, due to considerably lower higher-order spherical residual, chromatic correction is significantly suboptimal due to the five wavelengths having a common focus too high, at the 90% zone. It is larger by a factor of 2.6 than what it would be if the common focus was at the 70.7% zone. The ray spot plots indicate more chromatism than in the Schmidt configuration, but diffraction blurring is significantly reduced. However, when compared to the standard Schmidt below, even the Houghton falls significantly behind. There is not so much difference in the astigmatism plot - which shows primary and standard secondary astigmatism - but significantly smaller ray spot plots and diffraction images indicate much lower odd secondary aberrations. Nominal chromatism (spherochromatism) is also significantly smaller, but not so relative to the optimized wavelength, which is much better corrected than with the 3-lens correctors. Overall superiority of aspheric plate is undisputable. It can be illustrated with the magnitude of Zernike terms, and encircled energy, both 6° off axis (below). The standard Schmidt has only three significant terms, primary astigmatism (#4), primary spherical (#8) and secondary astigmatism (#11). The primary astigmatism term indicates 0.63 wave RMS (term divided by √6), corresponding to 3.1 wave P-V, plus 0.43 wave RMS (2.7 wave P-V, term divided with √10) for secondary astigmatism. That is more than ~4.5 waves P-V corresponding to ~0.02mm longitudinal astigmatism on the plot, indicating the presence of lateral astigmatism, of the same form as primary astigmatism, but increasing with the 4th power of field angle, not included in the plot. Similarly, the spherical aberration term indicates 0.53 wave RMS of primary spherical aberration (term divided by √5), much more than what is present on axis. This is due to the presence of lateral spherical aberration, of the same form as the primary, but increasing with the square of field angle. In the 3-lens Schmidt and Houghton, dominant term is primary astigmatism, followed by primary spherical, secondary astigmatism, primary (#6) and secondary coma (#13). Most of the terms are significantly higher than in the standard Schmidt, and particularly for primary astigmatism. Simiarly to the standard Schmidt, RMS error values indicated by the terms are not in proportion to the graphical output, because it doesn't include odd secondary Schwarzschild aberrations, lateral spherical, astigmatism and coma. The ray spot plot in the paper is elongated vertically probably because it is given for the plotted best astigmatic field not including odd secondary aberrations; the actual best field, according to OSLO, is about 5% stronger (also, spot structure is markedly different than for the one given by OSLO, with the dense part of it for e-line more than twice larger than in OSLO: over 0.05mm vs. 0.025mm; note that system in the paper has 25% larger aperture, but also is somewhat slower, at f/1.13). Polychromatic encircled energy plot (the 5 wavelengths, even sensitivity) shows that the standard Schmidt has about three times smaller 80% energy radius than its 3-lens corrector alternative, with the Houghton midway between. 11.13 FLAT-FIELD QUADRUPLET APO Flat-field quadruplets can come in various forms. The particular arrangement given here is a contact air-spaced triplet combined with a singlet meniscus at some distance behind such as Ascar 130 PHQ from Sharpstar (its glasses are not published, so that's where similarity ends). The triplet does all corrections, central line and chromatic, but it is slightly modified to compensate for the optical effect of the meniscus (it induces corrective astigmatism and field curvature, but also significant amounts of coma and spherical aberration). The base triplet is NPN 130mm f/7.5 with Hoya's FCD1 and BCD11, given under 11.10 above. After adding the field-flattening meniscus, only the 1st and last triplet radius were changed to correct for coma, and one inner radius to correct for spherical aberration. However, since the meniscus exerts negative power, the focal ratio went from f/7.5 to f/8.6 (image below, top). Field can be flattened with any meniscus form, but for minimized lateral color a strongly curved meniscus is required. Off axis monochromatic corretion is the best with the astigmatism cancelled, and some slight residual field curvature remaining (flattening field by indroducing a small amount of astigmatism roughly doubled the edge field wavefront error). Meniscus location is pretty flexible; cutting it in half only slightly worsens chromatic correction. However, placing it right after the objective gives rise to a significant higher-order spherical residual, due to correcting for primary spherical requiring significantly larger inequality in radius value between the three equal inner radii and the one correcting for the spherical (trying to correct spherical by bending lenses produces similar result). One alternate way of correcting field curvature is by placing achromatized meniscus significantly farther from the objective (bottom). The overall chromatic correction is still good, but somewhat less than with the above arrangement. The relative aperture also diminishes, to f/9.7. The singlet meniscus extends the triplet's focal length by roughly 15%, and the achromatized (more widely separaed) closer to 20%. This means that these arrangements need to use triplets capable of achieving good correction at f/6 to f/6.5 in order to produce well corrected f/7 to f/7.5, or so, flat field systems. Note that these are not necessarily the best glass combinations, or separations: they illustrate general system properties (however, with the singlet meniscus, as mentioned, the differences are fairly small). Does the triplet arrangement - NPN vs PNP - affects the outcame? In general, like with the doublets, where reversing order of positive and negative elements generaly has little effect on chromatic correction, shouldn't be substantial, although it can be significant in some respect. Ascar's 130 PHQ uses PNP triplet which, since the positive element has to be ED glass, means it has two ED glass elements. Similarly to the doublets, placing the negative element in front requires significantly stronger inner radii, because the glass used for it always have significantly stronger index of refraction, requiring stronger radii to compensate for the initial chromatic error by the weaker-index positive glass (image below). Significantly weaker inner radii of the PNP arrangement (bottom) seem to be producing less spherochromatism and better overall correction. The exception is the violet g-line, which is slightly worse, due to defocus, but the rest of lines are significantly better. About twice smaller displacement of the astigmatic field origin - which is by default at the paraxial focus - indicates as much smaller spherical aberration in the e-line. However, the NPN lens is set to produce the smallest error possible in the g-line, which comes at a price of sub-optimal correction in the F and C. By making the front radius 1-2mm stronger, F and C lines come to their near-optimal correction, and the difference in F/C chromatic correction becomes insignificant, while the g-line error becomes about 25% larger than in the PNP. What remains unchanged is the twice larger minimum error in the optimized wavelength. 11.14 ED doublets with lanthanum ED doublets using lanthanum as mating element have become common these days. While they leave something to be desired in the violet end correction, their main advantage is their large Abbe# (dispersion) differential vs. ED glasses, combined with sufficiently small relative partial dispersion (RPD) differential to keep secondary spectrum small to acceptable. Large Abbe# differential is a must for fast ED doublets, since the larger it is, the less strongly curved inner radii required, and less higher-order spherical aberration induced. Some other factors are also potentially significant - like the refractive index ratio, actualy not favoring lanthanum in general - but large enough Abbe# differential would compensate for it too. The problem is that, due to the architecture of RPD diagram, the larger Abbe# differential, the higher on it is lanthanum glass, and the larger RPD differential vs. ED glass, i.e. the larger becomes secondary spectrum (higher RPD is in general offset by sufficently larger Abbe# differential, but the tendency is secondary spectrum increase). Thus the choice of lanthanum is always a compromise between a low higher-order spherical residual, determining the central line correction level, and low secondary spectrum. For the former the Abbe differential needs to be as high as possible, for the latter - about as small as possible. The advantage of high Abbe# ED glasses (~95) is that they can use lanthanums that are lower on RPD diagram, i.e. with a smaller RPD differential, hence smaller secondary spectrum as well. It will be illustrated how much of a difference it makes vs. lower Abbe# ED glasses (~81), starting with the latter. Raytrace below shows three variation of an ED doublet using Chinese (CDGM) glasses, FK61 ED and lanthanum (these are similar to the Astro-Tech 4" f/7 AT102ED). Doublet of this type has R2 significantly stronger than R3. From the LA (longitudinal aberration) and OPD (optical path difference, i.e. wavefront error) plots it is immediately visible that the soft spot is correction in the violet. Top doublet uses lanthanum with smaller Abbe# differential than the other two, hence has higher secondary spherical residual, and higher minimum error in the central line. The middle doublet has the highest Abbe# differential, and the bottom one is in between the two. Photopic polychromatic Strehl (0.43-0.67 micron, shown boxed) slightly favors the latter (note that the Strehl values are for the location of best e-line focus; due to the presence of secondary spectrum, best poly-Strehl is shifted toward F/C lines - nearly 0.02mm for all three - 0.890, 0.898 and 0.903, top to bottom, respectively). OSLO quotes the price of its lanthanum glass as 5 times the BK7, vs. 3.5 times for the other two, which makes the top combination most likely. F and C lines are nearly balanced in the second and third combination, at ~0.155 and ~0.125 wave RMS, respectively (0.53 and 0.44 wave P-v of defocus), which puts them at the level of a 100mm f/24 and f/27 achromat, respectively. The top combination has correction somewhat biased toward blue/violet, with 0.074 wave RMS (0.26 wave P-V of defocus) in the F line, and 0.15 (0.52) in the C. If made nearly equal (g-line in that case goes over 1.1 wave P-V), they come at ~0.39 wave P-V of defocus, comparable to f/31 100mm achromat. The middle combination has significantly larger error in the violet, but it has little effect on the photopic polychromatic Strehl, due to the low eye sensitivity to violet in this mode. However, in the more appropriate to night-time observing, mesopic mode, eye sensitivity to violet is significantly larger, and this combination would have more effect on contrast (0.72 vs. 0.76 mesopic poly-Strehl vs. top combination, in part due to the higher sensitivity in the red vs. green/yellow as well) in addition to more violet fringing. This magnitude of violet defocus (comparable to that in a 100mm f/18 achromat) would be visible on bright objects, unless a special lanthanum doped coating, selectively absorbing in violet, is applied (which was likely the case with the APM 140mm f/7 lanthanum doublet). Using ED glass with higher Abbe# - also called "super-dispersion" (SD) - makes possible to use lanthanums with higher Abbe#, with less of RPD differentiial, i.e. inducing less of secondary spectrum, assuming the ED glass is of similar RPD value (FCD100/FPL53 have somewhat lower RPD than FK61/FCD1, thus the advantage is partly offset). Taking Hoya's FCD100 and two possible Hoya lanthanum matching glasses, show reduced secondary spectrum, and significantly better correction in the violet (these are similar to Astro-Tech 102EDL). As a result, the poly-Strehl is significantly higher than with the lower Abbe# ED glass. Due to the presence of secondary spectrum, best polychromatic focus is shifted from the best optimized line focus (defocus Z in mm). Despite its lower F/C error (0.07 vs. 0.08 wave RMS, when the two lines are equilized), the top combination has lower polychromatic Strehl, because its lower optimized-line Strehl weighs more in the poly-Strehl value. But its poly-to-optimized-line Strehl ratio shows that it has better chromatic correction (0.962 vs. 0.954). Interestingly, going somewhat slower while using the lower Abbe# lanthanum glass in order to eliminate higher-order spherical residual is likely to result in a small drop in the chromatic correction, not only due to a bit more of secondary spectrum, but also due to the higher-orderspherical actually reducing error in the blue/violet. For example, taking 125mm f/7.8 AT125EDL configuration with assumed matching lanthanum CDGM's H-LAF50B (equivalent of Ohara S-LAH66, or Hoya's TAF1), with no higher-order spherical residual, produces photopic poly-Strehl at the diffraction polychromatic focus of 0.923 (shown is objective with the positive element in front, but the reverse arrangement produces identical correction). Even at f/7 there is no higher-order spherical, and chromatic correction is only slightly worse (0.912 poly-Strehl) but it was probably made slower to be geared toward visual observers, since its violet correction leaves something to be desired on CCD level. Visually, its violet g-line (0.436μ) is at the level of a 100mm f/28 achromat (or 60mm f/17), i.e. unintrusive. Its F/C correction is at the level of a 100mm f/33 achromat. Non-lanthanum alternatives at this fast f-ratios do exist for SD glasses, but only a few. Short flints, like Schott N-KZFS2, paired with FCD100, Ohara FPL55/53 or LZOS OK4 would produce better overall correction, with the poly-Strehl exceeding 0.95. Schott N-ZK7 crown would produce better chromatic correction than lanthanums, but because of the central wavelength limit to 0.96+ due to more of the higher-order spherical residual its poly-Strehl is lower, at ~0.91 (the inner radii of such objective would be also very strongly curved, requiring very tight fabrication and assembly tollerances). The best match for lanthanums is fluorite, which has the highest RPD value, hence the high Abbe differential lanthanums matched with it would produce less of secondary spectrum. When comparing secondary spectrum in ED doublets with that in achromats, it should be kept in mind that even at near-identical error levels the effect is not the same, because in the former the aberration is a more or less balanced mix of 6th and 4th order spherical, with some amount of defocus, while in the latter it is mainly defocus. They have different forms of intensity distribution, and in so much different effect on contrast. Good indication of this difference is given by their respective MTF plots (below). Since at these (low to moderate) error levels spherical aberration spreads energy wider (it is not fully apparent at the intensity normalized to 0.1 shown, but would be more visible at lower normalization values, or with logarithmic base), it causes more of a contrast loss at low frequencies, but less at mid frequencies, where is the approximate cutoff for bright low-contrast objects, like planetary surfaces. However, it is again more detrimental at high frequencies (lunar, doubles, globulars). Diffraction simulations for 6th/4th order spherical are based on the actual F-line wavefronts in the ED doublets, and defocus simulations are pure defocus. 11.15 ED doublet with plastic It was demonstated above that optical plastics can work well replacing glass in an achromat. How well they can work as a matching element to ED glass? Here's what it looks like with some of plastic materials listed in OSLO Edu. Objective is 80mm f/7, alike AT80ED, which is not using lanthanum (and probably couldn't considering its low price), and cheap suitable crowns have too small Abbe# differential to work well at f/7. It is not to imply AT80ED uses plastic mating element, but it is a possibility. With what appears to be a mix of styrene and acrylic (top), correction in F and C is very good, just over 0.2 wave P-V of defocus. It is comparable to a 100mm f/56 achromat, and satisfies the "tru apo" requirement. At about 1 wave P-V, the red r line is at the level of a 100mm f/22 achromat, while 4.1 wave P-V in the violet g line puts it at the level of a 100mm f/14.5 achromat. Using carbonate as mating element (bottom) produces markedly better correction in the violet, but worse in the other three lines. With just over 0.8 wave P-V in F and C, it is comparable to a 100mm f /15 achromat, with the red r line at the level of f/12, and violet g line f/15. Other plastics did not produce good correction levels, but it is very likely that better correction than these two shown are possible. Of course, using plastics for triplets widens the possibilities. These two combined, with STYAC in front (reversed order is not as good) produce f/7 system with F/C lines at the level of a 100mm f/12 achromat, and the violet g-line more than 2.5 times better, i.e. at the f/31 level. Still better is FK61/CARBO/STYAC f/7 arrangement, with F/C at the level of a 100mm f/15 achromat, and g-line at the level of f/25. These are unusual modes of correction, illustrating that use of plastics could enhance both correction level and correction choices in lens objectives (note that optically there is no difference between the standard H-FK61 glass and low-softening-temperature, moldable D-FK61, but the latter is nearly twice more expensive). Plastics have the advantage of being lighter, but their other physical and chemical properties so should be at least close to those of optical glass. As the production technology advances, it will be becoming more viable as a glass substitute. 11.16 Can 52° AFOV fit 32mm 1.25" Plossl? Different brands of this eyepiece come with anywhere from 44° (Celestron) to 52° (Orion, Meade "Super Plossl", generic brands) apparent fieldof view (AFOV) claimed. Taking that the apsolute limit for the field stop radius is the inner radius of the 1.25" barrel - around 14mm - implying 23.6° zero-distortion angular field (~47° diameter), the limit to the AFOV is imposed by field distortion. In most cases, distortion is positive, enlarging image away from axis, in which case the AFOV is bigger than zero-distortion FOV according to the extent of distortion. For Plossl eyepiece, it is about 10%, implying nearly 52°. Raytrace exercise below tells somewhat different story (it is illustrated using Plossl design, but in general can be applied to any other). Top design is downscaled Plossl from the Rutten/Venrooij's book. Reverse raytracing shows that the size of optical image is 14.2mm, a bit over 14mm, implying 25.6° (51.2° diameter) as the limit to AFOV. Design below, the upscaled patented Nagler Plossl design, implies 26.5° (53° diameter) limit. The difference comes from different distortion rates: the Nagler has somewhat larger distortion, resulting in a larger AFOV transmitted (note that reverse raytracing gives distortion of opposite sign to the actual distortion, hence zero-distortion - or Gaussian - image is larger than the optical image, showing so called barrel distortion). However, the numbers come out differently with direct raytracing - at least at first sight. The Nagler Plossl is plugged in with two "perfect lenses": one for the 125mm f/8 objective, and the other one at the eye end with a 17mm f.l. Here, the true field angle just fitting into 14mm barrel radius is 0.80°. With the objective focal length of 1000mm, and the corresponding 31.25x magnification, the zero-distortion field radius is 25°. That is 2.4° more than what the field stop radius vs. eyepiece f.l. implies. The larger field produces higher distortion (in proportion to the 3rd power of field radius), now about 14%, with the coresponding 57° AFOV. Anyway, that would be the usual way of calculating it. However, magnification is not defined as magnification of the angle, rather as magnification of its tangent. So the correct AFOV in this case can be found from: (1) multiplying tan(0.8°) with 32mm f.l. to obtain unmagnified height corresponding to the eyepiece focal length, (2) having that height multiplied with 31.25x magnification, and (3) from arctan of that height divided by 32mm f.l. obtain the corresponding zero-distortion angle of view. In this case, the unmagnified height is 0.447mm, the magnified one is 13.96mm, and the corresponding zero-distortion angle is 23.6° (47.2° in diameter). With 14% positive (pincushion) distortion, it gives 53.8° AFOV. It is less than a degree larger than the AFOV obtained directly from the stop radius vs. eyepiece f.l., i.e. the corresponding angle, multiplied with the distortion ratio. This difference is most likely due to rounding off the nominal distortion numbers. The size of transmitted AFOV doesn't depend on barrel length, or stop (i.e. image) location within it - the last passing cone is vignetted by over 50%, and the very next one higher is not making it trough at all. Its change with the focal ratio (f/8 shown) is negligible. There is little use of an oversized field lens transmission-wise, but it is desirable in order to avoid light passing near the very edge of a lens. Placing a 1mm wide stop into the barrel would reduce the opening to 12mm radius, with the corresponding zero-distortion field radius reduced to 20.25°, and the corresponding AFOV to 22° (44° AFOV diameter). So, the answer is: it is possible to pack 52° AFOV into 32mm 1.25" barrel eyepiece, but only with no field stop in the barrel. 11.17 80° AFOV Plossl What happens when the apparent field of a standard eyepiece, such as Plossl, extends well beyond its usual 50 degrees? If, for instance, the same Nagler Plossl from above is to expand to 80° AFOV? Image below shows reverse raytrace of this design with the angle of divergence from the aperture stop - in effect the exit pupil of the eyepiece - equal to 40° (top). Obviously, the field lens had to be enlarged in order to accept the wider field, but the astigmatic field looks relatively good, with the astigmatism magnitude remaining nearly unchanged over the last 30% of field radius. There is some field curvature, but quite acceptable: edge of field best focus is less than 2mm away from the field center focus. For 32mm f.l. eyepiece - from 32^2/1000 - it translates to less than +2 diopters of accommodation (infinity to over 0.5m distance). Distortion of about -30% means that in the actual use the zero-distortion field - the one determined by the eyepiece field stop - would be However, coma - which increases with the 3rd power of aperture (i.e. cone width) vs. astigmatism increasing with the 2nd power - becomes obvious in the outer field. Also, lateral chromatism is unacceptably large in the mid 50%, or so, of the field radius. Coma can be diminished by flattening R6 while strengthening R1 which, with the change of glasses, also lowers astigmatism over most of the field, as well as lateral color error (bottom). But longitudinal chromatism is significantly larger, and field curvature is significntly more demanding: field edge requires nearly +4.5 accommodation (infinity to ~0.24m distance). Taking a compromise with some more astigmatism but flatter field gives what is shown below (top). Note that in reverse raytracing Gaussian image height is that of the apparent image (dashed line) and the actual, zero-distortion image, determined by the converging marginal cone, is the actual, "aberrated" image. Similarly, diverging cones entering field lens are unequal in width - a consequence of the all field pencils passing through aperture stop (i.e. eyepiece eit pupil) being by default of equal width. The wider marginal converging cone indicates lower magnification (by forming smaller Airy disc, i.e. image scale), resulting in the negative (barrel) distortion. Note that the half-diagonal of the square representing zero-distortion (Gaussian) image equals the Gaussian image height (radius) in the image plane. But to find out how it is actually working in a telescope, it is necessary to raytrace directly (bottom). The eyepiece is now working with the field produced by a 100/1000mm (f/10) "perfect lens", and another perfect lens is used on the opposite end to form the image. Field angle needed for identical edge point height in front of the field lens is 1.09°, or 19.08mm. The resulting astigmatic field is now significantly changed, with strong higher-order astigmatism dominating the peripheral field. It is a consequence of different ray geometry, with the diverging cones coming from the objective being of the same width at the field lens, while the marginal cone pencil exiting the eyepiece is, for that reason, significantly more narrow than the axial pencil (this is causing it reaching the retina as a narrower converging cone, forming larger Airy disc i.e. generating positive image distortion). Different ray geometry also results in uncorrected lateral chromatism as well. Higher order astigmatism is lowered by weakening R4 which, with corresponding change in the glasses, produces a better performing design, as it would look like in the actual use (bottom; weakening R4 increased the focal length to 33.4mm, and to compensate for that the field angle is increased to 1.14°). This particular modality has astigmatism minimized at the field edge, but with minor changes in the radius it can be increased at the edge while decreased in the inner field. With the nominal distortion of +40%, the actual (zero-distortion) field is 57°. The eyepiece would be usable at mid to slow focal ratios, but it would have nearly double distortion of more complex designs with Smyth lens (even Erfle-type eyepieces have generally lower distortion over wider a field than the standard Plossl's). 11.18 Protruding focuser: diffraction effect It is not uncommon with Newtonian reflectors that bottom end of the focuser protrudes into incoming light. Here, the effect is given for a 50mm wide focuser tube protruding 25mm into the axial pencil of light falling onto a 200m diameter mirror. By area, it is just below 4%, comparable in that respect to 0.20D (20%) linear central obstruction. While the central obstruction effect is considered generally negligible, the effect of protruding focuser may not be. Obstruction by focuser tube makes central diffraction maxima elongated in the direction of focuser. The PSF shows that the perpendicular maxima radius is slightly smaller than that of a perfect aperture, but te elongated radius is about 10% longer. The consequence is a contrast drop incerasing with orientation change, maxing out for the orientation along the elongated radius, where the effect is similar to that of 5% reduction in aperture diameter. Obviously, the same area of protrusion will cause less of obscuration, and less of an effect with a larger mirror, and more with the smaller one. 11.19 Diffraction effect of mirror edge clips Mirror clips are pretty common in Newtonian reflectors, and can be found in other instruments using mirrors to collect light. The relative mirror area they are obscuring is generally small, but speculations run wild about their diffraction effect, particularly spikes that they might be causing. This simulation by OSLO Edu shows no spikes to worry about as far as mirror clips are concerned - and little else to worry about altogether. A 200mm diameter mirror with three 12x6mm clips (as the area over the mirror surface) which block less than 0.7% of incoming light, and the effect is commensurate. The only visible change in the PSF, barely noticeable at intensity normalized to 0.01 (top, middle) are sections of uneven intensity in the 3rd bright ring. Normalizing down to 0.001 brings down similar structure in the subsequent bright rings, but the MTF shows that the overall effect is negligible. Enlarging clips area to 1.5% (bottom, comparable in area to 0.125D linear central obstruction) still shows no sign of spikes in the bright pattern area (note that it is now doubled, 0.1mm square side vs. 0.05mm above). Normalizing intensity down to 0.001 - which means that all points brighter than 0.1% of the central intensity are white - brings out faint outer areas where there is a hexagonal (primary) spike structure developing, but the intensity of these spikes is too low for them to become visible in general observing. The MTF is still not showing appreciable contrast loss, so the conclusion is that mirror clips do not produce neither visible diffraction artifacts, nor noticeable loss of contrast. 11.20 Two apochromatic objectives from "Telescope Optics" by Rutten and Venrooij As an illustration of difference in chromatic correction between objective using ordinary glasses and those using special glasses, an ordinary (BK7/F3) 200mm f/15 achromat is compared to two designs by Klaas Companar: one using Schott FK51 extra-dispersion glass with KZFSN2 short flint, and the other using fluorite with Schott Lak9 lanthanum glass (p56-59). They are both significantly faster, with the ED doublet at f/10, and the fluorite doublet at f/8; the former has 4.6 times lower secondary spectrum, and the latter only half as much of it, or over 9 times lower than the achromat. The ray spot plots given suggest that the two objectives with special glasses are much better corrected. If the difference in focal ratios, i.e. Airy disc size is taken into account, the advantage of special glasses is significantly lessened, but still substantial. However, do the spot sizes tell all the story? If we look at the polychromatic Strehl, the difference becomes unimpressive. At the best diffraction focus, the achromat scores 0.747, ED doublet 0.826 and fluorite 0.813 (430-670nm, photopic sensitivity). Because of its poor correction in the violet, the achromat falls farther behind for mesopic sensitivity (closer to night-time observing): 0.547 vs. 0.763 and 0.757, respectively. Converted to the corresponding P-V wavefront error of primary spherical aberration, it relates as a bit more than 0.4 vs. a bit less than 0.3 waves. And taken to the standard, photopic sensitivity, it becomes even less impressive (for the special glass) 0.28 vs. 0.23-0.24, in the same order. We should state that blur size comparison is strictly valid only for a single type of aberration (here we have defocus with the achromat and spherical+defocus with the other two), but the main reason for disappointing performance of the special glasses is suboptimal design. Raytrace below shows the two objectives as given in the book (top two) and near their optimum (bottom two; prescriptions are for the optimized objectives). Note that OPD (optical path difference, or wavefront deviation) scale - given in units of wavelength on vertical axis - is different for the two objectives, because of the larger violet error of the fluorite objective. For some reason, designer wanted to bring together marginal foci of the blue F and red C line. Not only that the error at this focus location is four times the error at the 0.71 zone, this common focus is farther from the green focus, which makes the error in F and C even larger. The only benefit is better correction in the violet, but if it comes at a price of worsening correction in all other non-optimized wavelengths, it has to result in inferior performance. Bringing the F and C 0.71 zones together (bottom two), minimizes error not only in those two wavelengths, but also in most of the others. The result is photopic Strehl of 0.955 for Apoklaas and 0.927 for fluorite (value under Z is defocus needed from the e-line focus to the best diffraction focus). Note that the Strehl is in a smaller part better because of the minimized error in the optimized wavelength as well. 11.21 Secondary spectrum corrector Not long ago, refractors using ED glasses for secondary spectrum correction were very expensive. One solution to the problem was offered by Valery Deryuzhin, in the form of subaperture corrector, which he named Chromacor. Its prescription was never published, but Roger Ceragioli tried to find out what can be done with such corrector, and results are pretty good, at least over limited field (note that Deryuzhin stated it still falls behind his Chromacor correction wise, especially with respect to lateral color error). Top raytrace shows the 6" f/7.9 achromat alone (it was probably intended to be 150mm f/8, but the difference is negligible). Bottom raytrace shows correction level with the corrector added (like Chromacor, it consists of 5 cemented lenses). Correction is at the "true apo" level in the violet g-line (0.85 Strehl), but falling short of it in the F-line (0.66 Strehl) and, more so, in the red. But even the red end is several times better corrected than in the achromat. More specifically, the red C-line, as indicated by the OPD graph, is less than 1/2 wave P-V, or at the level of a 100mm f/ 27 achromat (with nearly balanced C and F lines). F-line, at 1/3 wave P-V is at the level of a 100mm f/36 achromat. Central line correction is even better with then without corrector (note that the ray spot plot size is not in agreement with other values, and should be more than twice larger - probably some kind of a glitch; the other spot sizes, including the central line's for the achromat alone, appear to be in line with the P-V wavefront error values). Diffraction images bottom left show lateral color error, limiting quality field radius to about 0.05°, when the blue Airy disc starts visibly separating from the green-yellow Airy disc; at that point the polychromatic Strehl (photopic) is around 0.80 - in the absence of any other aberration (note that diffraction images are enlarged vs. ray spot plots, roughly by a factor of 2; also, color code for them is different than for ray spot plots). 11.22 Cemented doublet as a focal reducer In general, the benefits of a faster system are wider fields achievable and gain in "speed" (CCD/photography). Dedicated reducers can also flatten image field, and/or correct aberrations inherent to the system. Here, it will be examined how well can a common cemented objective corrected for infinity function as a focal reducer. Let's start with a 100mm f/12 achromat. As image below shows, the maximum visual field that fits in a 2" barrel is a bit over 2 degrees in diameter. Placing a 600mm f.l. cemented doublet about 300mm in front of its focal plane turns it into an f/8 system. Now, the entire 2-degree field can fit in a 1.25" barrel, or if 2" barrel eyepieces are used, it can expand up to 3.2°. The effect on chromatic correction is nearly negligible, as OPD graphs show, but the reducer adds nearly 1/11 wave of undercorrection. Best field curvature, however, is significantly more relaxed - 1800mm vs. 430mm - with the reducer lens (ray spot plots are for the best field, for e, F and C lines only). The reducer is fairly insensitive with respect to its placement point: as much as several mm deviations will weakly affect the reduction ratio, with no appreciable effect on the correction level. This particular cemented objective uses Schott SF5 flint, because it allows for near-optimal correction with BK7, but any well made cemented objective should have similar effect: reduced focal ratio with the chromatic correction level of the original system. In this case, a 100mm f/8 achromat with chromatism of an f/12 system. With Cassegrain-like systems, there is no gain in field size from using reducer lens, since it is limited by the baffle tube rear opening and in most cases can be fully accepted by 2-inch barrel. Only large systems, with baffle tube opening wider than 2" could gain significant extra field from the use of a reducer lens. Otherwise, the gain limits to possibly be able to pack the useable field into 1.25" barrel. Just recently, someone on a Russian forum wanted to know if he could expand usable field of his Santel Maksutov-Cassegrain with a reducer lens, so that he could fit the entire Pleyades into the view. Below is an exercise with what should be close to the Santel MK91, a 230mm f/13.5 system. Placing the same type of cemented doublet objective inside its baffle tube reduces (expands is more appropriate word) its focal ratio to f/9, packing up the original near maximal 0.8° field into 1.25" barrel. The original field with ~40% edge illumination - the minimum acceptable visually - is 1.1°, assuming 46mm rear baffle tube opening, same as Celestron 9.25. With the reducer, field diameter with nearly identical edge illumination is 1.2°, for the nearly 10% gain. Main effects of the reducer lens are somewhat worsened g-line correction (still safely within the "true apo" requirement), and 2.5 stronger astigmatism for given angular field (2.5 times 2.25 for given linear field). Stronger astigmatism, however, makes best image surface significantly more relaxed: 2000mm vs. 500mm w/o reducer lens. Diffraction image of a star at 0.6° off axis is approx. 0.05mm, which for a 50mm eyepiece, needed for near 6mm eit pupil, translates into 3.4 arc minutes - borderline between appearing as a point source and not to the average eye (from 0.05x57.3x60/50). With the Nagler 31mm and a comparable linear field size, it would increase to 5.5 arc minutes - soft, but acceptable for field edge. System above assumes moving focusing tube, capable of handling the shorter back focal length due to the presence of a reducer lens. More often than not, Maksutov-Cassegrain systems come with mirror focusing instead. In that case, by reducing mirror separation back focal length is extended to the eyepiece field stop, with the eyepiece shoulder remaining at a given, permanent location. The efect of extention is illustrated on a 5" f/12 Maksutov-Cassegrain telescope with aspherised primary (the one at right, with a regular D/10 meniscus thickness). Raytrace below shows that the back focus extension has relatively small effect (before extension -top, after - bottom). In the prescription, boxed numbers show changed values (that matter) due to the extention. It caused focal ratio to change from f/9.5 to f/8.7, adding some more astigmatism which made the best field curvature yet less strongly curved. Longitudinal astigmatism is over 0.8mm vs. less than 0.5mm in the original arrangement, but since the wavefront error changes in inverse proportion to the focal ratio number, astigmatism is over three times larger. Still, the 0.7° diffraction spot size of roughly 0.06mm implies off axis performance nearly as good as with the Santel above (w/reducer). Field illumination and angular size remain very similar to those without reducer lens, only the field is scaled down linearly and fits into 1.25" barrel. There would be little use of trying to expand this field with 2" eyepieces, since field illumination rapidly drops below acceptable. 11.23 Did Maksutov miscalculate with his first instrument? The very first actual instrument built according to Maksutov's prescription during the WW2 was a small 100mm aperture Maksutov-Gregorian, intended for educational purposes in elementary schools and for public at large. The original prescription exists in Maksutov papers, and it raytraces as shown below. It's LA graph is leaning to the left, indicating suboptimal correction, and the shear magnitude of aberration is already unaceptably large. Color correction is also suboptimal, since the blue and violet (more so) need to have their inner zones focusing longer, and their outer zones focusing shorter than those of the longer wavelengths. As if it isn't bad enough, the image forms just in front of the primary's surface, which severely limits its accessibility. Is it possible that Maksutov was so sloppy with the first embodiment of his new telescope? As for the image location, the telescope was supposed to be very simple (read: cheap as possible), so the stick-in eyepiece was a part of achieving that goal. Only longer focal length eyepieces were usable, limiting the highest magnification to about 40x. That was probably why it didn't matter that the central line error was nearly 0.4 waves P-V, or 0.13 wave RMS (balanced 6th/4th order spherical, within annulus with 40% linear central obstruction, comparable to 0.44 waves of primary spherical w/o central obstruction). Note that raytrace measures the P-V error from the non-existing center, hence its figure is significantly larger than the actual P-V error of the wavefront within the annulus. Also, taking a closer look at the central line error reveals that the correction level determined by Maksutov's prescription is the one with the highest Strehl, and with only slightly larger 80% energy radius than the best possible. Reason for this is that the wavefront error is large enough to cause the Strehl to peak not for the wavefront with the lowest RMS error - when a line connecting paraxial and edge zone focus on the LA graph is nearly vertical - but at a slightly different focus location. The lowest nominal P-V wavefront error is, as the OPD graph shows, an artifact of the measurement being done from the origin, where no actual wavefront is present due to obscuration by central obstruction. The actual wavefront in the annulus is significantly larger for this focus point than for any of the three others. Some time later, higher-order aspheric was applied to the primary in order to reduce the excessive spherical aberration of the original instrument. Reportedly, it allowed use of 200x magnification (presumably, image was made accessible to shorter focal length eyepieces). No specifics of the aspheric applied were given, but if we assume that it was a complete correction (even the 8th order term is not negligible), resulting rytrace is shown at the bottom, left. The sub-optimal color correction becomes more obvious, and it cannot be improved without changing corrector lens itself. For given meniscus thickness, the radii would need to be somewhat shorter (small box bottom right), but it would leave the image hopelessly inaccessible. By making meniscus nearly 2mm thicker, the color error can be minimized with the image a few mm closer. The reduction is much greater than what the respective ray spot plots indicate, because the defocus error is replaced by spherical aberration error; for given error magnitude, defocus spot is 4.5 times smaller than that for primary spherical aberration, and more so for the higher-orders. Wavefront error is 6-7 times smaller in the red and blue, and about four times in the violet. That said, the original correction was at the level of a 100mm f/25 achromat, with that error nearly cut in half by putting higher-order aspheric on the primary. That would take it close to the "true apo" minimum. Considering that, it is more likely that the sub-optimal color correction was also a result of cutting corners than design error, i.e. settling with somewhat thinner meniscus than what is needed for the best correction. 11.24 Triplet vs. Petzval It is known that Petzval arrangement makes possible to flatten image field, and to achieve high level of correction, both monochromatic and chromatic aberrations. While the flat field advantage is undisputed, how does it compare to a triplet objective correction-wise? To try to answer that, will raytrace two 140mm f/7 apochromatic arrangements. To make them fully comparable, they both use the same two glasses. As shown below, the triplet comfortably passes the "true apo" criterions, with the polychromatic Strehl (430-670nm, photopic) a bit short of 0.97. Note that this combination can't be made into objective with two pairs of identical radii (suitable for oiling) without aspherizing; since that wouldn't appreciably effect correction level - and to keep the two fully comparable - it is left all spherical, same as the Petzval. Looking below, we see that the Petzval arrangement has better correction in the central line, as well as tighter colors, resulting in the 0.98+ Strehl. Also, field astigmatism is more than twice lower: while at 1° off axis the triplet has diffraction pattern expanded into 0.015mm astigmatic blur, in the Petzval the central maxima remains nearly Petzval's advantage becomes more obvious with faster focal ratios. If, for instance, we go down to f/5.3, its central line correction is practically unchanged (one advantage of the Petzval is that it allows for the higher order residual to be practically eliminated w/o aspherizing). There is more chromatism, but even at this fast focal ratio and at that aperture it is flirting with the "true apo". And the triplet - which can simply be scaled down with minor tweaks, as opposed to the Petzval which needs some more substantial changes (radii 327.6/172.2/172.2/926.8/-172.2/-174.02/-694.4, axial thickness 9/2.3/14.8/546/14/0.7/7) - is now behind in the central line correction, and even more in its chromatic correction. Summing it up with the poly-Strehl, it comes to 0.87 vs. 0.93 for the Petzval. Linear astigmatism didn't change, but since the wavefront error is inversely proportional to the square of focal ratio, it is about 75% larger in both. With the triplet, it means as much larger astigmatic blur (~0.026mm), and in the Petzval it just reaches the level when the diffraction image at 1° off starts turning into a small cross. If downscaled to 100mm aperture, the Petzval comfortably passes the "true apo" test. 11.25 Maksutov 1m-class refractor challenge In his book "Astronomical Optics", Maksutov gives prescription for 1m f/9.8 Maksutov-Cassegrain that would be both better corrected and (much) more compact than refractors of that aperture size - its tube would be less than 2m long (2nd ed. 1979, p352). However, with an f/2 primary such a large meniscus would generate very large higher-order spherical aberration residual, which would, even after minimizing it by balancing with the lower order, render it unusable. Hence, Maksutov writes, either the front meniscus surface, or primary mirror, have to be aspherized in order to have spherical aberration corrected. He states that a slightly deeper than 1 micron aspheric on the primary would do the job, but there is no specific data for aspherization. Just how good such telescope would be? Shown below is the system as given in the book, all spherical (top), with aspheric on the primary to do away with spherical aberration (middle), and with aspheric on the primary needed to correct spherical aberration and minimize coma (bottom). Central obstruction is taken to be 30% by diameter, which is about the practical minimum for this configuration. As raytrace shows, the all-spherical arrangement is hopelessly crippled by spherical aberration. Removing it requires use of three successive aspheric terms: 4th, 6th and 8th order, for primary, secondary and tertiary spherical aberration (taking out the 8th term would leave in as much as 2.5 waves P-V of tertiary spherical, which could be minimized by balancing it with secondary spherical, but would still remain roughly at the 1/2 wave P-V level). The first term is same as conic, with the corresponding value given from K=8(d^4)A4, where d is the aperture semidiameter and A4 the 4th order coefficient (in this case, K=-0.119, with the minus sign coming from the positive A4 value for mirror surface oriented to left, i.e. with the negative sagitta value). The significance of this term (i.e. conic) is that it induces primary coma: in this case, positive coma offsetting the native negative primary coma of the all-spherical arrangement. However, this conic value offsets less than half of the original coma, leaving the field quality with much to be desired. For fully minimizing coma, more than twice stronger conic is needed, with the excess, or rather, lack of primary spherical aberration compensated for by changing corrector radii (making them more relaxed). But, according to the book, what Maksutov had in mind is correcting for spherical aberration alone. That would put visual "diffraction-limited" (0.80 Strehl) field radius at 2.9mm, or 0.017°, with the diffraction blur at 0.25° off roughly 0.12mm (nerly 2 arc seconds) in diameter. The value of higher-order terms, given by A[i]d^i, where i is the order, and d the aperture semidiameter, gives the needed edge depth of the aspheric as δ=0.014875-0.006625-0.0008984=0.007352mm, or 7.35 microns. That is much more than 1 micron stated by Maksutov, which indicates that he had in mind aspherization for the best focus location, not the paraxial focus, as OSLO is set to do. Knowing that the best focus error is four times smaller for primary, about 2.5 times for secondary, and nearly two times for tertiary spherical, the needed maximum depth comes to 0.62 microns, approximately at the 0.72 zone. Neglecting tertiary spherical, the depth is 1.07 microns, nearly identical to Maksutov's figure. Apparently, Maksutov went with the first two terms only, and if the third term is included the needed depth is almost cut in half. Since the maximum depth is at the ~0.72 zone, not at the edge as with the paraxial focus terms, the corresponding volume of glass to remove is significantly smaller than what the depth figures alone indicate. Since including the third term not only improves axial correction, but also makes the aspheric shallower, it is the best option. It is therefore shown for both, system corrected for spherical aberration only (middle), and the one corrected for spherical aberration and coma (bottom). A4 term implies that the conic needed for minimizing coma, by balancing its lower order form, affected by mirror conic, with its nearly constant higher order form, is K=-0.23425. The gain in field correction is significant: "diffraction-limited" field radius increases to 0.105° (17.8mm), and the diffraction blur at 0.25° is about 2.5 times smaller, with the 80% encircled energy radius smaller as much as four times (0.02 vs. 0.08mm, or 0.42 vs. 1.68 arc seconds). Spherochromatism is significntly reduced in the last arrangement (0.905 poly-Strehl vs. 0.838), due to its more relaxed meniscus radii. Astigmatism as also reduced, by some 20%, to ~0.85 wave P-V. The price to pay for these is significantly deeper aspheric - 5.3 microns - due to the larger A4 (i.e. conic) and smaller A6 and A8 terms (higher-order aberrations are reduced because of more relaxed corrector radii). If the back focal length is kept nearly unchanged, the system becomes somewhat faster, at f/9.3. Bottomm left illustrates needed modification of the sphere by the paraxial focus (OSLO) aspheric terms. The A6 and A8 terms are exaggerated vs. A4 - although in the previous arrangement A6 was about half as large as A4 - and the latter is exaggerated vs. starting sphere, for clarity. As mentioned, aspheric based on the best focus correction would have its maximum depth at about 0.72 zone, with no glass to remove at the center and the edge; depth of such aspheric would have been a small fraction of the depth shown in this illustration. The system proposed by Maksutov has two big advantages vs. similar aperture refractor: color correction, and compactness. For comparison, the Yerkes 1-m refractor is 19m long, with its F/C blur measuring nearly 0.7mm (7.5 arc seconds) in diameter. The aspheric required for the Maksutov is not excessive for the last arrangement, and is quite low for the middle one, so it can be regarded as a better overall alternative. Even with the coma left in, the 1m Maksutov at 0.25° has 80% EE radius 0.0799mm, vs. 0.0749mm on axis for the Yerkes refractor (photopic sensitivity), hence having better correction for the smaller fields. 11.26 - Spectacle glass' telescope My first telescope was made of a pair of identical spectacle lenses - the pre-cut, round shape, placed at half the focal length separation which - as a source I was using stated - was minimizing chromatism (in fact, chromatism is larger than that of the single lens, but it is lower than chromatism of a single lens with focal length equaling that of the two lenses combined). Those were probably +1 diopter (1m f.l.), round lenses, about 50mm in diameter, stopped down to 25mm, or so, clear opening with a diaphragm. I did see more with, than without it, although it was certainly more colorful too. These are simple to make and inexpensive telescopes, but how good they really are? To find out, will raytrace such a system. Nowadays, spectacle lenses are made of synthetic materials, such as CR39 plastic, acrylic or polycarbonate. CR39 and acrylic have properties similar to the common crown glass, while polycarbonate has significantly different dispersion and generally produces more chromatism. Image below shows raytracing results for a +1D acrylic lens stopped down to 25mm. That produces a 25mm f/41 singlet lens telescope objective. Longitudinal chromatism, as expected, is in the form of primary chromatism, i.e. with shorter wavelenghts focusing shorter, and longer wavelength longer than the optimized, central wavelength. With these particular parameters, defocus wavefront error is 1.25 wave P-V in the blue F-line, and 1.06 wave in the red C-line. Average for the two, 1.15 waves, nominally corresponds to a 100mm f/10.4 achromat, but the actual chromatism for any given F/C error level is significantly greater with primary spectrum. That is because the defocus error changes with the square of wavelength differential with secondary spectrum, and is closer to changing linerly with primary spectrum. It is reflected in the value of polychromatic Strehl (9 wavelengths, 430-670nm, photopic), which is 0.80 for the achromat (at its best diffraction focus, 0.07mm from the e-line focus toward F/C lines focus), and only 0.51 for the spectacle glass objective, if made of acrylic (nearly identical for CR39 or crown glass). But a lens made of polycarbonate would have poly-Strehl as low as 0.30. Poly-Strehl of 0.51 is at the level of a 100mm f/5 achromat, which imlplies that for a given F/C defocus error the magnitude of chromatism, expressed by focal ratio, is twice larger with primary than with secondary spectrum. Due to its small relative aperture, monochromatic aberrations of this objective are negligibly small (note that the positive meniscus lens shape induces 2-3 times more of each, spherical aberration and coma, but they remain negligible). Best image curvature has radius of -400mm, concave toward objective, causing nearly 0.1mm best focus shift at 0.5° field angle vs. field center. That corresponds to 1 diopter of accommodation for 10mm f.l. (flat field) eyepiece, and 4 diopters for 20mm f.l. (one diopter correspond to accommodation from infinity to 1m distance, four diopters infinity to 4m distance; with the best field curving away from the eyepiece, field points are farter away than the central point, and the pencils exiting the eyepiece are slightly converging, requiring eye lens to relax for proper focusing). Since 10mm unit produces 40x magnification, which is at the maximum usable level for this objective, field curvature is not noticeable, even with the unnatural, positive accommodation. For the final image, created by eye, this objective needs eyepiece. Keeping it all rudimentary, as it is likely to be, the eyepiece can be a single positive lens. Below is the final image with a 50mm f.l. planoconvex lens as an eyepiece (20x magnification). Best astigmatic image radius is -5mm, with the focus for 0.5° field point forming 1mm closer to the eyepiece than the center point focus. With 50mm f.l. eyepiece, one diopter of accommodation equals 2.5mm, which means that the required accommodation here is 2.5 diopters, practically same as flat field (zero accommodation) even with the negative accommodation. Ray spot plots show that the eyepiece lens induces significant lateral color error, about 0.05mm F-to-C split at 0.5°. Boxed ray spot plots show the outcome if the objective is repleced with perfect lens (PL): longitudinal chromatism becomes negligible, but lateral chromatism remains. MTF graph shows contrast transfer for axial point, 0.35&deg and 0.5° field point (for best astigmatic field). Practically all degradation on axis comes from longitudinal chromatism (objective), while additional degradation off axis comes from both lateral color error and astigmatism of the eyepiece. Toward higher frequencies, there is some improvement in contrast transfer off vs. on axis, a consequence of lateral color error becoming less of a factor due to the wider color separation (note that lateral color error effect is at its maximum in orientation perpendicular to that of MTF lines, falling to zero for orientation coinciding with it; in other words, nearly all additional deterioration in contrast transfer for off axis points in tangential plane is due to lateral color error, with the sagittal plane affected mainly by eyepiece astigmatism alone). The actual image is, of course, subject to eye aberrations, which are neglected here. Eyepiece lens orientation does matter, in that turning the flat side in front increases lateral color error by about 25%, and also makes field curvature a bit stronger. Using biconvex (equiconvex) lens slightly increases lateral color, but makes field curvature more than twice more relaxed, due to significantly lower astigmatism generated vs. single twice more strongly curved lens surface (0.1 vs. 0.24 wave RMS at 0.5°). 11.27 - Tilted Houghton vs. Ed Jones' medial One possible solution for correcting aberrations of the Herschelian telescope is using full-aperture Houghton corrector with a tilted element, compensating for aberrations of the tilted mirror. Another, described by Ed Jones, retains only a single lens in front, with the reflecting mirror replaced with Mangin mirror, and a small singlet at some distance in front of the final image. Both systems use small folding mirror, flat in the Herschelian configuration, convex in the Ed Jones'. Raytrace below shows how the two compare, both correction wise, and with respect to overall configuration, in a 150mm f/8 system. The level of correction is similar, except that the Houghton-Herschelian has nearly 6° image tilt, causing more than 1mm defocus at the 0.5° field point. Corresponding to nearly 11mm field radius, it would be just out of the field of the standard 20mm 52° AFOV eyepiece. Since one diopter of defocus for this focal length is 0.4mm, it would require 2.5 diopters accommodation, i.e. infinity to 0.4m distance. It is within the accommodation reach of most eyes. However, since the field top and bottom require opposite in sign accommodation, only one field side can be focused at in any given moment, and switching from one to another requires from the eye to go from +2.5 to -2.5 diopters, making accommodation significantly more difficult. With a 10mm f.l. eyepiece and 0.1mm for one diopter of defocus, the accommodation requirements are doubled. Also, tilted image results in the light cones entering the eyepiece asymmetrically, generating additional monochromatic and (lateral) color The Houghton-Herschel has some residual spherical aberration, which can be removed by putting -0.2 conic on the mirror, but the gain is rather cosmetic (top right). The Jones' medial can be designed practically free of astigmatism and coma, with trefoil as the dominant, yet insignificant aberration (astigmatism/coma showing at the field bottom is a tendency that could be minimized with final optimization). The nominal tilt of the image surface is relative to the rear field lens surface; as the magnified image surface detail shows, image plane is practically perpendicular to the optical axis. Another advantage of the medial is its compactness: it is less than 2/3 as long as the Herschelian. 11.28 - Erfle eyepiece w/o and with Smyth lens The standard Erfle eyepiece has five elements in three groups (2+1+2). Usually made to allow around 65° apparent field, it was the best known wide-field eyepiece before new generation of (ultra) wide-field eyepieces, utilizing Smyth lens, entered the field with Albert Nagler. Its outer field definition went from pretty good with systems slower than ~f/10, acceptable at f/10 to f/7, and increasingly blurry with faster systems. Could the standard Erfle configuration be significantly improved by adding to it a separated negative element(s)? To get some answers, will start with the Erfle eyepiece given by Rutten and Venrooij, which is optimized to the extent that still leaves it close enough to the traditional in its overall performance level. Here, it is scaled down to 10mm f.l. and raytraced with an f/5 system for 66° AFOV (image below, top). Longitudinal color is practically non existent, lateral color is well controlled, but astigmatism explodes toward outer field. Required eye accommodation for best focus is 4.3 diopters (infinity to 1m/4.3), generally acceptable, but of little importance practically, because astigmatism dwarfs defocus error due to field curvature. By relatively small changes in the lens radii, with the heavy flint change to keep lateral color minimized, it can be improved to a flat field design with somewhat less astigmatism but significantly stronger distortion, mainly due to the stronger 1st and 5th radius (second from top). It is about as much as can be done in this configuration. Placing a negative element in front of the field lens increases the focal length of the combined unit (it can be visualised following axial light pencil entering from left, exiting the positive group to focus and diverge into the negative front element - Smyth lens - which causes it to diverge more strongly after it, hence shortening the focal length). To make it directly comparable, the unit is scaled up to 10mm f.l.; also, due to the changed ray geometry, the glasses needed to be significantly different to keep lateral color minimized (3rd from top). The Smyth lens is quite weak, so that the intermediate image almost coincide with the telesope image, but the effect on the astigmatic field is significant. Here, the astigmatic field curvature has opposite sign vs. starting Rutten-Venrooij design; in fact, for a better part of the field radius - up to about 0.7 zone - it is Petzval curvature, since astigmatism is practically non-existent. Edge blurs are roughly comparable, but the overall field correction is significantly better than in the traditional Erfle. Required edge accommodation is also comparable, but of opposite sign. Similar trade off between field curvature and outer field definition can be accomplished without adding Smyth lens, but would require more of a field curvature, and it would make more difficult to correct residual coma. It is logical, not only because the negative element contributes some offsetting aberrations, but also because more surfaces generally make correction easier. Use of Smyth lens allows for significant changes in the positive lens group configuration. As an example, design with somewhat stronger front negative element, similar field curvature as the previous design, with less of edge astigmatism, but more astigmatism in the inner field (bottom). Field properties can be changed toward more curvature and less astigmatism, or vice versa, by manipulating two radii, #3 and #11. Diffraction patterns in the box show the field edge with the given radii values for stronger field curvature and nearly cancelled astigmatism (top); needed accommodation is about 8 diopters. Ray spot plots below them show field edge with the field nearly flat and more astigmatism; they are still significantly smaller than with the stand-alone flat-field Erfle. Above designs were reverse-raytraced, since it is simpler. With direct raytracing possibly giving somewhat different output (the larger field of view, the more so), the last and first design are also raytraced directly. Since this age ancient of SYNOPSYS doesn't offer the "perfect lens" convenience of OSLO Edu, as a perfect objective is used an f/5 Eisenberg-Pearson system, and as a perfect eye a mirror with r.o.c. R=20mm placed 20mm from the exit pupil (effectively aperture stop placed at its center of curvature), producing 10mm f.l. hence reproducing the f/5 relative aperture. The mirror induces no aberrations except field curvature (R/2), and by entering this field curvature into raytrace it is effectively neutralized. Direct raytracing of the last design shows somewhat more field curvature, and less astigmatism, hence no significant differences (keep in mind that the direction of light forming the final image is opposite to that with reverse raytracing, thus the orientation of the astigmatic field, i.e. actual field curvature is of opposite sign). Most of curvature added is probably due to the image having lower distorsion, hence the slightly smaller linear image is larger than that in the reverse raytraced unit when distortion effect is taken out. Focusing on the edge mysteriously changes the form of astigmatic field. It is probably a consequence of the very large angle involved: due to it, refocusing results in a change of the field point height. That by itself shouldn't change the astigmatic field, but there could be a lateral shift of the converging beam too small to be detected at this scale (bottom). Below left, ray geometry for the axial, 0.7 zone and edge beams, shows that the 0.7 zone beam doesn't reflect back in the same direction, due to its exit pupil location - determined by the point of intersection of its central ray and central ray of the axial beam - being closer to the mirror (magnified in box at right). Magnified focus area for the edge beam (top box, focus on axis) shows that the edge pupil location is slightly farther away from the mirror's center of curvature, causing a slight lateral asymmetry vs. incoming central ray. Still, change in the astigmatic field still seems to be a glitch. Defocus does not alter ray geometry, only the pattern of cross section, hance the longitudinal astigmatism remains unchanged. Direct raytrace of the Rutten-Venrooij Erfle has the astigmatic field change even more, at least in terms of longitudinal aberration. In terms of the field edge blur size, the change is nearly negligible. While this seems absurd, it results primarily from the difference in generating higher-order astigmatism. It is relatively low in reverse raytracing (top), but yet lower in the direct raytrace (bottom). In the former, it is large enough to cause a small wrinkle at the bottom of the wavefront (indicated by an extra contour line on the wavefront map shown), throwing the bottom edge rays significantly farther out. However, since the corresponding wavefront area is near negligible, so is its effect on the ray spot plot. Another tricky part - for raytracing software - is distortion. While both SYNOPSYS and OSLO Edu give zero (practically) aberration coefficient for distortion for aperture stop at the center of curvature of a concave spherical mirror (2mm aperture diameter, 200mm mirror f.l.), they at the same time give the corresponding distortion plot showing -16% distortion for the 33-degree beam. There is no change in focal length for this beam, it hits the mirror perfectly collimated, just as the axial beam, the only deformation is the reduction in its vertical diameter at the aperture stop, due to its steep incident angle (SYNOPSYS has the option of rotating aperture stop with incident angle, but it isn't working as intended). It results in the effective focal ratio varying with the pupil angle, from the highest in horizontal plane, to the lowest in the vertical. Consequently, diffraction image is enlarged vertically, which effectvely should produce positive distortion, but raytrace apparently bases its distortion plot on the reduction in the vertical beam diameter - in proportion to the cosine of incoming angle - interpreting it as (geometric) image reduction. Since similar beam deformations are common in eyepieces (the obviously wider 33-degree exit pencils in the direct raytracing setup are a consequence of vertical beam expansion due to projection on oblique surfaces within the eyepiece), it makes questionable the accuracy of distortion assessment. Direct raytracing shows somewhat more coma, and less astigmatism, but the differences are relatively small. Edge pencils are noticeably wider than axial in the vertical plane, as a result of projection on oblique surfaces. If sufficiently extended, it would become visible that they are mildly diverging, i.e. having longer focal length than axial beam after focusing, hence creating larger image scale - positive distortion - of the outer field. Direct raytracing gives significantly more spherical aberration, but still well within acceptable, at the level of 1/8 wave P-V. 11.29 Sub f/2 Schmidt vs. Busack camera Schmidt camera is a long time standard for fast camera systems. In its basic form it consists only from the corrector plate and a spherical mirror. Since it produces strongly curved image field, a singlet lens flattener right in front of the final image can be added, at a price of somewhat compromised field quality. Its main advantage is exceptional correction in optimized wavelength across wide fields. However, at large focal ratios - roughly f/2 and larger - its spherochromatism becomes significant. Here, it will be compared to another highly corrected system, Busack-style Hamiltonian, consisting from a front singlet lens, Mangin mirror and field corrector. While the Schmidt plate uses higher-order aspherics, the Busack is all-spherical system, and in this respect easier to fabricate. Another drawback of the Schmidt camera is its relatively long tube, about twice the focal length; Busack-style camera is twice shorter. The systems considered here are 300mm in aperure diameter, and slightly faster than f/1.9. Although for cameras is relevant wider spectral range, the usual g-r range will be sufficiently illustrative. The Schmidt has somewhat lower limit to minimum central obstruction, but it will be assumed it is 1/3 of aperture diameter for both; it is not likely it would be significantly smaller in the Schmidt. Starting with the Schmidt, the basic system produces practically perfect central line correction over the 4° field with -555mm surface curvature. However, due to spherochromatism, the polychromatic Strehl for the five wavelengths is only 0.6 (even sensitivity). Color correction is also unbalanced, with the error in the blue-violet significantly larger than in the red. Somewhat better balance would be achieved by inducing low amount of undercorrection (~1/10 wave P-V), but the effect is minor, and will be neglected. On the other hand, Busack doesn't have quite as good central line correction - although the difference is inconsequential for the camera system of this size - but it has significantly better color correction, with 0.81 polychromatic Strehl. Its distortion is more than twice higher, although still below 0.2%. Central maxima at 2° off is well defined, but the ring structure is less distinct than in the Schmidt. Looking at the energy distribution plot, the Busack has smaller 80% energy circle. Contrast transfer for the Schmidt is practically identical for all points across the field, somewhat inferior to the Busack's central field transfer, but better than its peripheral field transfer. In the Schmidt with a flattener added (4mm thick plano-convex lens with 190mm surface radius facing the mirror, 548.7mm apart), both energy distribution and contrast transfer are significantly worse than in the Busack. With an integrated flattener (same lens, but the corrector changed to compensate for the errors induced by the flattener), degradation is less pronounced, but overall it is still behind the Busack system. In all, the two systems are roughly comparable in performance when the Schmidt uses its best, curved surface. Schmidt with integrated flattener is somewhat behind, and with the flattener added it is inferior to the Busack. As a final note, spherical surfaces of the Busack don't imply it is easy to fabricate and assemble; tolerances are still very tight. 11.30 Astro-Physics CCDT67 Telecompressor This simple telecompressor consists of a cemented positive doublet with 305mm focal length. According to AP, it is originally developed for their 10" f/14.6 Maksutov_Cassegrain, but it is then oriented toward general use. It is not recommended for systems significantly faster than f/9. With a clear opening of 45mm it provides full illumination for KAF-8300 chip (18x13.5mm, 22.5mm diagonal), or alike. Its designated compression factor is 0.67, but it can be varied simply by changing the lens position vs. original focal plane. As for its other effects, it is only stated that it doesn't flatten field, and does not induce coma. Prescription is not published, but there is a drawing online with its outer radii and center thickness, probably measured up by a user. Starting with those values, it is not hard to come to a doublet that would produce nearly flat, coma-free field, with good chromatic correction - as it can be assumed are requirements for such accessory. Below is raytrace of such a lens. Its focal length is 301mm, quite close to the stated 305mm for the actual design. While a number of glass pairs can be employed, best correction level requires low-index front element (generally crown) and high-index rear element (generally heavy flint). With only three radii to play with, such a doublet cannot be fully corrected for both, coma and astigmatism. Since it has inherently non-zero Petzval curvature, it requires some astigmatism of opposite sign for flat field, and it retains some residual negative coma as well (top). Consideration is limited to Schott glasses, but it shouldn't have significant effect on doublet's correction level. The objective is a 200mm f/10 OSLO "perfect lens", hence all aberrations come from the telescompressor. For this focal length, 11.7mm radius (half of the chip diagonal) corresponds to 0.5° field angle. The lens induces low undercorrection (less than 1/8 wave P-V, from LA/64F^2), with coma and astigmatism at 0.5° (flat field) corner-acceptable (astigmatism alone, with LA=0.21mm, is 0.21/8F^2 or 1 wave P-V for 550nm wavelength; it is in agreement with the Zernike term value, 0.524, indicating 0.524/sq.rt.(6)=0.214 RMS wavefront error, i.e. 0.214*sq.rt.(24)=1.05 P-V wavefront error). Coma alone, according to its Zernike term of 0.317, is at the level of 0.317/sq.rt.(8)=0.112 wave RMS, or 0.112*sq.rt.(32)=0.63 wave P-V. The error diminishes with slower systems (keeping compression ratio unchanged), as illustrated with the boxed ray spot plots for a 133mm f/15 perfect lens). Allowing for a small change in the outer radii values (190,-210,-433mm) corrects for the slight field curvature, reducing the combined astigmatism/coma error at 0.5° off by 15%, with the chromatism only slightly worsened. By a further small reduction in astigmatism, resulting in a mild negative field curvature, the combined error is further reduced by about 5%, but at a price of somewhat more chromatism (190,-216,-400, with SF4 replaced by SF6). This seems to be about the peak correction level for this type of a doublet. As mentioned, relatively low astigmatism is necessery to flatten field; such doublet can be made astigmatism-free, but at a price of larger coma and some field curvature (bottom left). Similarly, if it is made coma-free, astigmatism and field curvature become quite strong (bottom left). While the actual unit could be little better, we'll use the one based on measured radii to evaluate its performance on some specific telescope systems. Starting with the Schmidt-Cassegrain, a 200mm f /2/10 system, will need the lens at 338mm from the secondary (i.e. 129mm in front of the original focal plane) to produce 0.67 compression. With fixed mirrors, it will form focus at 101mm behind the rear lens' surface, or 48mm in front of the original focus (top). If the final focus is to be brought back to its original location by moving primary mirror, the telecopressor lens needs to be at 356mm from the secondary. With respect to the system alone, CCDT67 induces low undercorrection (only with fixed mirrors; with moving primary it is nearly offset by the resulting overcorrection) and astigmatism somewhat weakening field curvature, with longitudinal chromatism actually reduced. Number below the telecompressor is the radius of the converging light at the front lens; it is smaller than its clear opening with fixed mirrors, but somewhat larger with moving mirrors, as shown magnified at the bottom. With an aplanatic SCT, showing the moving mirror case, induced overcorrection is significantly larger at 0.07 wave RMS, due to the undercorrection generated by moving primary adding up to that of the telecompressor lens. In a 200mm f/3/8 Ritchey-Chretien (similar to the GSO 200mm f/8 RC), CCDT67 astigmatism nearly offsets that of the telescope, leaving it with somewhat less field curvature. Induced overcorrection is negligible, but coma becomes more visible at this relatively fast focal ratio. The error at 0.5° is still 25% lower than in the stand-alone system. Shown is only scenario with fixed mirrors, since it is more likely. The original focal plane is only about 5mm behind, so bringing the final focus to that location by moving primary would produce only a small effect. 11.31 Paracorr-like corrector on a sub-f/3 mirror Reading Mike Lockwood's 2009 article about using TV Paracorr on his 14.5" f/2.55 mirror, I wondered what the Paracorr correction level with such a fast mirror looks like. Tele Vue states it works well down to f/3, and according to Mike it still worked very well at f/2.55. So, in order to find out what is actually happening past f/3, I raytraced a couple of designs similar to the Paracorr in their basic arrangement. One is from Smith/Ceragioli/Berry, and the other could go as a poor man's Paracorr, using only two cheap glasses. They are raytraced with f/4.5 and f/2.7 300mm in diameter Starting with the S/C/B design, raytrace at f/4.5 (for an f/5 effective system, with the corrector acting as x1.15 focal extender) shows that it has low residual coma, approximately at the level of an f/8.3 paraboloid. Chromatic correction easily meets the "true apo" requirements, astigmatism is low (0.035mm longitudinal error at 0.5° implies 0.32 wave P-V, or 0.065 wave RMS avefront error) and spherical aberration is negligible (0.045mm longitudinal aberration implies 0.045/64F^2=0.0000283 P-V wavefront error, or little over 1/20 wave for 550nm wavelength). With f/2.7 mirror (for an f/3 system), if maintaining the same back focus, correction looks a lot worse, mainly due to the increase in spherical aberration to 0.63 wave P-V. Also, longitudinal chromatism is not near-optimally balanced anymore, due to the blue/violet end shifting away, and the red closer to the optimized wavelength. Luckily, spherical aberration is easily corrected by pulling corrector little more out: 5mm farther from the mirror brings error down to 1/10 wave P-V (another 0.4mm takes it to its minimum, only ~1/100 wave P-V larger than with f/4.5 mirror). It also reduces chromatic effect, roughly by a half. On the negative side, it makes both, coma and astigmatism larger, but the effect is near negligible (larger astigmatism induces very mild field curvature). Despite it, the bright core of the diffraction image is well defined. The poor man's Paracorr has mild field curvture and sub-optimal longitudinal chromatism. It would benefit from final optimization, but the gain would be small; it is good enough for the purpose. With f/4.5 mirror (for an effective f/5.5 system, i.e. acting as x1.22 focal extender), it comes close correction wise to the S/C/B design (top). Coma at 0.5° is somewhat more visible, because the linear field is 10% wider, and because of small defocus due to the mild field curvature; it is at the level of an f/8.5 paraboloid. With f/2.7 mirror it also appears much worse, if placed so that the back focal length remains identical (which is where coma and astigmatism correction remains nearly unchanged). Spherical aberration is by nearly a third lower than in the S/C/B, due to it resulting from the increase in both, primary and secondary spherical, and the S/C/B has them of the same sign. Spherical aberration is reduced to 1/10 wave P-V by pulling corrector 3.4mm out (which is about the minimum with this design, due to its more strongly curved surfaces, hence more of higher-order spherical). Similarly as with the S/ C/B, it also partly offsets worsened longitudinal chromatism. However, due to the initial chromatism here being of different sign, the field is effectively flattened. Diffraction image is somewhat more smeared than with the S/C/B, but the bright core is mainly preserved. Taking a closer look at the wavefront 0.5&deg off axis, reveals that it is mainly astigmatism and coma causing off axis image degradation. With the S/C/B design and f/2.7 mirror, corrector separation 725mm, the 0.5° focus in the plane of axial focus contains significant Zernike defocus term, indicating that the field is less than perfectly flat (below, left). Minimizing defocus term by shifting focus slightly toward the mirror leaves primary coma and astigmatism as the main error contributor (right; the sign on the astigmatism plot is opposite to that on the system drawing, since the former by default assigns minus to the direction toward objective, and plus away from it). All terms outside the first 15 are entirely negligible, and even those appearing not negligible, like secondary coma and primary spherical aberration, have little effect because the error adds as the square root of the RMS errors squared (the RMS error is implied by the absolute term value divided by a factor varying with the aberration: 2, sqrt(3), sqrt(6), sqrt(8), sqrt(5), sgrt(8), sqrt(10), sqrt(12) and sqrt(7) for tilt, defocus, primary astigmatism, primary coma, primary spherical, trefoil, secondary astigmatism, coma and spherical, respectively). The cumulative RMS error is 0.586 wave in the plane of central focus, and 0.466 wave with defocus Similarly, the poor man's Paracorr has astigmatism and coma as main error contributors, with primary coma somewhat lower, and secondary somewhat higher than the S/C/B. Best field is somewhat less curved, but the best 0.5° image is still found slightly closer to the mirror than the central focus (below). Minimizing defocus reduces the total RMS error from 0.518 to 0.445. The latter corresponds to 2.5 waves P-V of pure coma, which entirely transforms diffraction image into comatic blur 0.063mm long. However, as a mix of multiple aberrations, diffraction image has no clear resemblance to coma aberration, tending to reflect its dominant component, astigmatism. Note that central obstruction is omitted to leave wavefront maps intact; Mike Lockwood's f/2.55 Newtonian had it at 31% linear, so the effect can be neglected in this context. It is hard to tell how close are these two designs to the actual Paracorr without its published prescription, and with somewhat inaccurate data on the published spot size graph (for instance, it states that diffraction limited field with f/4.5 mirror is 3mm, while it is slightly over 2mm, or that the spot size - which probably includes astigmatism - at 10mm off axis w/o Paracorr is 0.023mm, with the actual coma being 4.16 waves P-V, translating to 0.0023mm, corresponding to 0.093mm tangential coma, and 0.112mm RMS blur size). Assuming that the actual Paracorr probably is at least as well corrected as the S/C/B, it would indeed provide well corrected 1°+ fields with sub f/3 mirrors.
{"url":"https://www.telescope-optics.net/miscellaneous_optics.htm","timestamp":"2024-11-07T02:57:10Z","content_type":"text/html","content_length":"179846","record_id":"<urn:uuid:c965a1e1-18cc-4434-9faa-d3b322271367>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00342.warc.gz"}
Important Circuit Calculator Apps, Softwares, Online ToolsImportant Circuit Calculator Apps, Softwares, Online Tools Important Circuit Calculator Apps, Softwares, Online Tools A Circuit Calculator is an Application or Software that helps us to measure or calculate different parameters and inputs, and outputs of an electrical or electronic circuit virtually. Basically, you have to put some data as input then the application or software performs calculations based on the input data and preloaded formulas and finally it will give you the output result. In electrical and electronic engineering, there are so many formulas and calculations. So these circuit calculator software and applications will save your time to calculate complex mathematics, parameters, etc. Basically, those circuit calculator applications are built with different electrical formulas. All the processes, equations, and formulas are programmed and coded within those applications. Although these formulas or equations depend upon the types of applications or what they take as input and what they give as output. Importance of Circuit Calculator Circuit Calculators not only help for calculation during your study even they will also be very helpful for research, analysis, circuit design, estimation, process flow, decision making, and many more. For example, you are measuring current, voltages, and other important parameters of electrical or electronic circuits. Now you have to find out other parameters based on these values. Calculating on a paper is very difficult and time-consuming. So, just open a circuit calculator app suitable for your requirement and input your data then get the result within a second with just one tap. Nowadays you will get so many online tools, software, and applications for circuit calculators. Online tools are provided by different websites and blogs. But the disadvantages of these online tools are you need an internet connection while using them. There are so many mobile applications available for circuit parameter calculation. Even most of them you can use without any Internet Important Circuit Calculators for Electrical and Electronics Engineering 1. Ohm's Law Calculator Ohm's law calculator is the most common and basic calculator to find out or calculate different parameters related to Ohm's law. The most common equations related to the ohm's law are, V = IR and P = VI Here, V = Voltage, I = Current, R = Resistance, and P = Power Here, you can calculate any parameters having at least two parameters. For example, if you have the value of voltage and current then you can find out the value of power and resistance. Now let's know how to use this Ohm's Law calculator step by step. Let's think we have the value of voltage and Current. 1. Now put the value of voltage and select the range if it is in Volt, Kilo Volt, or Mega Volt. 2. Now, put the value of the current and select the range if it is in the milliampere or ampere, or kiloampere. 3. Now tap on the calculate button to get the values of power and resistance. 4. Don't forget to select the range of the output values. This means if you want to get the resistance value in Ohm then you must select the resistance range in Ohm or if you want to get the resistance in megaohm then you should select the resistance range in megaohm. Although there are so many applications, software, and online tools are available for Ohm's law calculation but you can directly go by clicking on this, 2. Resistor Calculator The resistor is used in almost all electrical and electronic circuits. The main important tool is the resistor value calculator from its color codes. We know that resistors have different color codes to indicate their values. There is a basic technique and formula to calculate the value of a resistor from its color codes. In fact, resistors have different color codes for tolerance to measure the value range along with its actual value. A resistor basically has three different color codes band colors, multiplier colors, and tolerance colors. How to calculate the value of a resistor using its color codes. 1. First of all open a resistor value calculator application or any online tools. 2. Now, carefully identify all the colors available on the resistor. 3. Select the band colors on the calculator. 4. Select the multiplier color on the calculator. 5. Select the tolerance color on the multiplier. 6. Now, tap on the calculate button to see the value of the resistor. Note that if you select any wrong color code then the value also be changed. Here, you can use the online resistor calculator by clicking this below, Here, you can also calculate the series and parallel resistance in a circuit. 3. Coil Inductance Calculator It is also a very important and useful tool required for both electrical and electronic circuits. The property of a coil by virtue of which it opposes the changes in the flow of current through it is known as inductance. We need to calculate the inductance of a coil when we calculate or design transformers, solenoids, or any coil-related circuits. In electronic circuits, such as tuning circuits, and RF circuits, and coil inductance calculator is required. Here is the list of parameters we required to calculate the value of inductance, 1. Coil Radius 2. No. of Turns 3. Solenoid Length 4. Relative Permeability 4. Series and Parallel Inductance Calculator When multiple numbers of inductors are connected in series or parallel combinations then we need to calculate the total or equivalent inductance of the circuit. In this case, we need a series inductance calculator or a parallel inductance calculator. Here, you just need to put the value of individual inductors. 5. Series and Parallel Capacitance Calculator Like, a resistor and inductor, a capacitor is also a passive component. It is also used in both electrical and electronic circuits. So when multiple numbers of capacitors are connected in a circuit then we need a capacitance calculator tool to calculate the total or equivalent value. When multiple capacitors are connected in series then we need a series capacitance calculator tool but when multiple capacitors are connected in parallel then we need a parallel capacitance calculator tool. These five calculators were the basic required tools for a simple circuit. There are so many tools and calculators available on the internet. You can try them as per your requirements. How to calculate or analyze a circuit step by step? 1. First of all make sure which type of circuit it is that means AC or DC. 2. Note down all the values of each component of the circuit. 3. Measure all the other parameters as possible of the circuit such as voltage, and current. 4. Now make sure which parameter you need to calculate then take a calculator tool appropriate for your requirements. 5. Nowadays, there are so many circuit analyzer tools and circuit simulation tools available in the market where you can measure, calculate, and operate electrical and electronic circuits virtually. Use these Online Calculators: Important Circuit Calculator Apps, Softwares, Online Tools Reviewed by Author on May 10, 2022 Rating:
{"url":"https://www.etechnog.com/2022/05/important-circuit-calculator-apps.html","timestamp":"2024-11-11T01:33:18Z","content_type":"application/xhtml+xml","content_length":"163619","record_id":"<urn:uuid:de4164b3-9efd-468d-ac97-5a5475a71de7>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00502.warc.gz"}
How to Use The Simplex Method and Dual Simplex Method with CPLEX and Frontline - Brightwork Research & Analysis How to Use The Simplex Method and Dual Simplex Method with CPLEX and Frontline Executive Summary • There are several ways of solving a supply chain optimization problem with CPLEX. • These settings are made in both supply planning applications as well as off-the-shelf optimizers. • There is both a simplex method and a duplex method. The solution procedure is the optimization method that is applied. I often describe and differentiate optimizers based upon their objective function. Therefore, optimizers with an objective function of minimizing costs, I call cost optimizers. Those that attempt to reduce inventory at a set service level or maximize service level at a set inventory level are called inventory optimizers. To read about this type of optimization, see this article. CPLEX Options However, something I have discussed significantly less is the optimization solution selected, which is a subset of the optimization method. There are some methods, but a small number of them are the most popular. For applications like supply planning, the following would apply. Where this is set in many optimizers is very clear. This is a screenshot of the Solution Methods tab of the SAP SNP Optimizer. The decomposition methods describe how the problem is segmented to improve run times. More on this topic can be read about in this article. However, notice the options at the bottom of the screenshot under LP Solution Procedure. There are three-LP Solution Procedures available to choose from. This is Primal Simplex, Dual Simplex Method, and Interior Point Method, which can be used along with either of the first two options. As the CPLEX solver is actually what is being used, these are the same options provided by CPLEX. These are described by Wikipedia below: The IBM ILOG CPLEX Optimizer solves integer programming problems, very large^[2] linear programming problems using either primal or dual variants of the simplex method or the barrier interior point method, convex and non-convex quadratic programming problems, and convex quadratically constrained problems (solved via Second-order cone programming, or SOCP).– Wikipedia The methods move from the most simple, the Primal Simplex, to the most complex, the Interior Point Method. Simplex is the most commonly used. The simplex method must work with equalities, not inequalities, requiring the introduction of slack variables, which measure the resource’s unused capacity. Dual Simplex Method The Dual Simplex method is used for a particular problem where the equality constraints are set up in a specific way. This quote is from Elmer G. Wiens site on operations research: Like the primal simplex method (or just the simplex), the standard form of the dual simplex method assumes all constraints are <= or =, but places no restrictions on the signs of the RHS (right hand side variables — to read more about right hand side variables see this article. The dual simplex method algorithm consists of three phases. Phase 0 is identical to Phase 0 of the primal simplex method, as the artificial variables are replaced by the primal variables in the basis. However, the dual simplex method algorithm in Phase 1 searches for a feasible dual program, while in Phase 2, it searches for the optimal dual program, simultaneously generating the optimal primal program. – Elmer G. Weins The interior-point method solves problems differently from the primal or the dual method simplex. The interior-point begins from the interior of the problem rather than looking across the surface. Where the optimizer starts its search is of great importance as to the final solution it develops. For instance, in its documentation (a separate optimizer not associated with SAP), MatLab describes how to “change the initial point” of the optimizer in at least one of its online documentation This is not the only way to change the starting point. The Heuristic First Solution selection will also vary the optimizers’ first point by estimating the best solution with a heuristic before the optimizer begins. Frontline Solver The Frontline Solver offers different options, which are listed in the screenshot below: Something interesting is that Frontline recommends only using the Simplex LP method for non-linear problems. However, CPLEX (which is inside SNP) uses Simplex for non-linear problems (realistic supply planning problems are non-linear). This discrepancy is something that I will update this post with when I figure out the reason for this. The solution method is always of great emphasis for those using a general solver, which requires that the users get very much into the optimization’s detail. However, on enterprise optimization projects, the optimizer parameter setup’s particulars can often be overlooked due to other issues and distractions. However, it is interesting and relevant to know what solution methods are being employed and have a good reason for selecting a documented format.
{"url":"https://www.brightworkresearch.com/optimization-solution-procedures/","timestamp":"2024-11-14T00:12:20Z","content_type":"text/html","content_length":"236595","record_id":"<urn:uuid:8973c2ae-01ec-48f1-b1ad-298ba67c5c62>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00364.warc.gz"}
Understanding the Decibel dB - Formula, Definition, Calculations Understand the deciBel, dB what it is and how to calculate a value in deciBels using the formula or our calculator- also understand the various abbreviations like dBA, dBm, dBW and many more. Decibels, dB Tutorial Includes: Decibels, dB - the basics Decibels levels table dBm to dBW & power conversion chart dBm to watts and volts conversion chart Nepers The deciBel, dB utilises a logarithmic scale based to compare two quantities. It is a convenient way of comparing two physical quantities like electrical power, intensity, or even current, or The deciBel uses the base ten logarithms, i.e. those commonly used within mathematics. By using a logarithmic scale, the deciBel is able to compare quantities that may have vast ratios between them. The deciBel, dB or deci-Bel is actually a tenth of a Bel - the Bel is a unit that is seldom used. The abbreviation for a deciBel is dB - the capital "B" is used to denote the Bel as the fundamental unit. DeciBel applications The deciBel, dB is widely used in many applications. It is used within a wide variety of measurements in the engineering and scientific areas, particularly within electronics, acoustics and also within control theory. Typically the deciBel, dB is used for defining amplifier gains, component losses (e.g. attenuators, feeders, mixers, etc), as well as a host of other measurements such as noise figure, signal to noise ratio, and many others. In view of its logarithmic scale the deciBel is able to conveniently represent very large ratios in terms of manageable numbers as well as providing the ability to carry out multiplication of ratios by simple addition and subtraction. The deciBel is widely used for measuring sound intensity or sound pressure level. For this the sound is referred to a pressure of 0.0002 microbars which equates to the standard for the threshold of How the deciBel arrived Since the beginning of telecommunications there has been the need to measure the levels of relative signal strengths so that loss and gain can be seen. Original telecommunications systems used the loss that occurred in a mile of standard cable at a frequency of 800Hz. However this was not a particularly satisfactory method of determining loss levels, or relative signal strengths and as radio and other electronics based applications started to need to use some form of standard unit for comparison, the Bel was introduced in the 1920s. This gained its name from the Scot, Alexander Graham Bell who was originally credited with the invention of the telephone. With this system, one Bel equalled a tenfold increase in signal level. Once it was introduced the Bel was found to be too large for most suers and so the deciBel was used instead. This is now the standard that has been adopted universally. DeciBel formula for power comparisons The most basic form for deciBel calculations is a comparison of power levels. As might be expected it is ten times the logarithm of the output divided by the input. The factor ten is used because deciBels rather than Bels are used. The deciBel formula or equation for power is given below: Ndb is the ratio of the two power expressed in deciBels, dB P2 is the output power level P1 is the input power level If the value of P2 is greater than P1, then the result is given as a gain, and expressed as a positive value, e.g. +10dB. Where there is a loss, the deciBel equation will return a negative value, e.g. -15dB. In this way a positive number of deciBels implies a gain, and where there is a negative sign it implies a loss. DeciBel calculator for power levels One of the mot useful calculators associated with deciBels is to calculator the ratio in deciBels for two power levels> Our calculator below enables these calculations to be achieved because it is not always easy to access a logarithmic value for anything. Decibel Calculator for Power Levels The decibel, dB calculator enables values of decibels to be calculated on-line from a knowledge of the input and output power levels. DeciBel formulas for voltage & current Although the deciBel is used primarily as comparison of power levels, deciBel current equations or deciBel voltage equations may also be used provided that the impedance levels are the same. In this way the voltage or current ratio can be related to the power level ratio. When using voltage measurements it is easy to make the transformation of the deciBel formula because power = voltage squared upon the resistance: And this can be expressed more simply as Ndb is the ratio of the two power expressed in deciBels, dB V2 is the output voltage level V1 is the input voltage level It is possible to undertake a similar transformation for the formula to use current. Power = current squared upon the resistance, and therefore the deciBel current equation becomes: And this can be expressed more simply as Ndb is the ratio of the two power expressed in deciBels, dB I2 is the output current level I1 is the input current level Voltage & current deciBel formulas for different impedances As a deciBel, dB is a comparison of two power or intensity levels, when current and voltage are used, the impedances for the measurements must be the same, otherwise this needs to be incorporated into the equations. Ndb is the ratio of the two power expressed in deciBels, dB V2 is the output voltage level V1 is the input voltage level Z2 is the output impedance Z1 is the input impedance In this way it is possible to calculate the power ratios in terms of deciBels between signals on points that have different impedance levels using either voltage or current measurements. This could be very useful when measuring power levels on an amplifier that may have widely different impedance levels at the input and output. If the voltage or current readings are taken then this formula can be used to provide the right power comparison in terms of deciBels. DeciBel abbreviations The deciBel is used in many areas from audio to radio frequency scenarios. In all of these it provides a very useful means of comparing two signals. Accordingly there are many variations onto e deciBel abbreviation and it may not always be obvious what they mean. A table of deciBel abbreviations is given below: DeciBel abbreviation Meaning / usage dBA "A" weighted sound pressure or sound intensity measurement. dBc Level of a signal with reference to the carrier being measured - normally used for giving the levels of spurious emissions and noise dBd Gain of an antenna with reference to a half wave dipole in free space dBFS Level with reference to full scale reading dBi Gain of an antenna with reference to an isotropic source, i.e. one that radiations equally in all directions. dBm Power level with reference to 1 mW dBV Level with reference to 1 volt dBµV Level with reference to 1 microvolt dBW Power level with reference to 1 watt The deciBel is widely used in many areas of electronics and sound measurement. It provides a very useful means of comparing different levels that may vary over a huge range. Being logarithmically based, the deciBel is able to accommodate variations of many orders of magnitude without getting lost in a huge number of zeros. In this way it is an ideal way of comparing different values. Ian Poole . Experienced electronics engineer and author. More Basic Electronics Concepts & Tutorials: Voltage Current Power Resistance Capacitance Inductance Transformers Decibel, dB Kirchoff's Laws Q, quality factor RF noise Waveforms Return to Basic Electronics Concepts menu . . .
{"url":"https://www.electronics-notes.com/articles/basic_concepts/decibel/basics-tutorial-formula-equation.php","timestamp":"2024-11-03T07:45:40Z","content_type":"text/html","content_length":"41980","record_id":"<urn:uuid:80460e52-fe17-48f1-bcf1-f0bc65fc6f6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00410.warc.gz"}
Improving the Complexity of Index Calculus Algorithms in Elliptic Curves over Binary Fields Abstract. The goal of this paper is to further study the index calculus method that was first introduced by Semaev for solving the ECDLP and later developed by Gaudry and Diem. In particular, we focus on the step which consists in decomposing points of the curve with respect to an appropriately chosen factor basis. This part can be nicely reformulated as a purely algebraic problem consisting in finding solutions to a multivariate polynomial f(x1, . . . , xm) = 0 such that x1, . . . , xm all belong to some vector subspace of F2n /F2. Our main contribution is the identification of particular structures inherent to such polynomial systems and a dedicated method for tackling this problem. We solve it by means of Gröbner basis techniques and analyze its complexity using the multi-homogeneous structure of the equations. A direct consequence of our results is an index calculus algorithm solving ECDLP over any binary field F2n in time O(2ω t ), with t ≈ n/2 (provided that a certain heu...
{"url":"https://www.sciweavers.org/publications/improving-complexity-index-calculus-algorithms-elliptic-curves-over-binary-fields","timestamp":"2024-11-09T12:57:53Z","content_type":"application/xhtml+xml","content_length":"37929","record_id":"<urn:uuid:08a868db-4fcc-4499-b2f7-c4fdd4600352>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00526.warc.gz"}
The Power of Practical Quantum Computing: An In-Depth Exploration - leopardtheme.com Quantum computing, once a concept confined to the realms of science fiction, is now a tangible reality. It’s a revolution that’s set to redefine the boundaries of technology, promising unprecedented computational power. But what does this mean for us in practical terms? This article will dive into the fascinating world of quantum computing, shedding light on its practical applications. From breakthroughs in medicine to leaps in artificial intelligence, we’ll explore how this technology is poised to transform our everyday lives. Stay tuned as we unravel the mysteries of quantum computing and its real-world implications. Practical Quantum Computing Quantum computing, contrary to classically inclined models, harnesses the principles of quantum mechanics. Imagine a scenario where computations are undertaken at significantly accelerated rates, even for complex problems. That’s the world of quantum computing. Essentially, it exploits quantum bits, or “qubits”, which unlike the binary system of traditional computing that runs on ones and zeros, can exist in multiple states at once. Thanks to this effect, termed superposition, a qubit can be a one, a zero, or both at the same time. Enhancing this capability, qubits also exhibit entanglement, where the state of one qubit correlates with the state of another, irrespective of the distance between them. This conduit in the quantum realm equips quantum computers with a massive computational capacity and speed unmatchable by classical computers. Practical applications of quantum computing bear considerable significance in the technological era. Reiterating the example of medicine, quantum computers stand at the precipice of drug discovery. Researchers anticipate benefits, assuming a quantum computer can effectively analyse complex biochemical reactions, leading to unprecedented breakthroughs in pharmaceutical research. In another instance, the field of artificial intelligence reaps the potential of quantum mechanics. A case in point being optimization problems. AI systems aim at finding the optimal solution among a myriad of possibilities. A quantum computer, considering its superior computational speed and ability, is expected to sort through these possibilities in a heartbeat, increasing efficiency and productivity in AI. Key Technologies in Practical Quantum Computing Quantum computing represents a disruptive change that comes with its unique technology systems. Core components such as Quantum Bits (Qubits) and Quantum Gates, as well as Quantum Circuits and Quantum Algorithms, determine the execution and outcomes of quantum computations. A Qubit, or Quantum Bit, signifies the essential building block in quantum computing. Unlike traditional binary bits that hold a value of either 0 or 1, qubits harness the principles of quantum superposition, allowing for both states to exist simultaneously. This empowers exponential increments in computational power. Quantum Gates, on the other hand, contribute to the dynamic nature of quantum computing. Rather than simply flipping states as in classical logic gates, quantum gates manipulate the states of qubits through operations like quantum entanglement. For instance, a Hadamard gate can put a qubit into superposition, while a CNOT gate can create quantum entanglement. Quantum Circuits, constructed with qubits and quantum gates, act as the main framework for performing quantum computations. Their design follows the principles of quantum mechanics, allowing for computations that are complex if executed with classical computing approaches. Current Innovations in Practical Quantum Computing Significant strides persist in the field of quantum hardware. Firms like Google, IBM, and Microsoft continue pushing the boundaries, with Google demonstrating quantum supremacy using a 53-qubit computer named Sycamore. It’s an achievement verifying a quantum device’s ability to execute a computation infeasible or time-consuming for classical computers. Also, multi-billion dollar initiatives, for instance, the Quantum Flagship program by the European Union, reflect the urgency in developing practical quantum computers. This program aims at placing Europe at the forefront of the second quantum revolution, with a budget allocation of 1 billion euros. Such initiatives underscore the global commitment and investment in quantum hardware Notably, D-Wave Systems, a pioneer in quantum computing, announced the next-generation platform – ‘Advantage.’ This quantum computing system hosts a whopping 5000 qubits, a substantial increase from previous models. It’s a testimony to the growing capability and scalability of quantum hardware.
{"url":"https://leopardtheme.com/the-power-of-practical-quantum-computing-an-in-depth-exploration/","timestamp":"2024-11-07T15:39:04Z","content_type":"text/html","content_length":"158087","record_id":"<urn:uuid:36b471a1-7011-43f2-86db-542b3053aa9d>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00111.warc.gz"}
Raju scored 540 marks out of 600. Write his percentage. Hint: We use the concept of percentage to find percentage of Raju. Write marks obtained and total marks in form of fraction and convert the fraction into percentage by multiplying the fraction by * Percentage is a part of a whole or complete. Percentage is always greater than or equal to 0% and always less than or equal to 100%. Complete step-by-step answer: We are given total marks as 600 Number of marks obtained by Raju is 540 We know percentage of marks obtained by a student is given by Number of marks obtained divided by total marks \[ \times 100\] Substitute the value of marks obtained by Raju in numerator and total marks in denominator. \[ \Rightarrow \]Percentage of marks obtained by Raju\[ = \left( {\dfrac{{540}}{{600}} \times 100} \right)\% \] We write denominator in terms of its factors so we can cancel out factors from numerator and denominator. We can write \[600 = 6 \times 100\] \[ \Rightarrow \]Percentage of marks obtained by Raju\[ = \left( {\dfrac{{540}}{{6 \times 100}} \times 100} \right)\% \] Cancel same terms from numerator and denominator \[ \Rightarrow \]Percentage of marks obtained by Raju\[ = \left( {\dfrac{{540}}{6}} \right)\% \] Write the numerator in terms of its factor so we can cancel factors that are in denominator and numerator. We can write \[540 = 6 \times 90\] Substitute the value of 540 in numerator \[ \Rightarrow \]Percentage of marks obtained by Raju\[ = \left( {\dfrac{{6 \times 90}}{6}} \right)\% \] Cancel same factor i.e. 6 from both numerator and denominator. \[ \Rightarrow \]Percentage of marks obtained by Raju\[ = 90\% \] Thus, the percentage of marks scored by Raju is 90% Note: Students might make the mistake of not converting the fraction into simpler form. Keep in mind always cancel all factors between numerator and denominator, else the percentage will not be within 0 and 100. Also, when converting percentage into fraction, we divide the number given in percentage by 100.
{"url":"https://www.vedantu.com/question-answer/raju-scored-540-marks-out-of-600-write-his-class-8-maths-cbse-5f8943cf2331d1505c836b5d","timestamp":"2024-11-08T17:44:07Z","content_type":"text/html","content_length":"151363","record_id":"<urn:uuid:bef78178-f9f5-4aa9-9e06-f7454a19b962>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00661.warc.gz"}
证明代写-COMS 4774|学霸联盟 COMS 4774 Spring 2021 Scribe: Sebastian Salazar (ss5971) February 23, 2021 Editors: Qingyang Wu (qw2345) Michael Fagan (mef2224) Jay Pakala (jp4102) High-dimensional statistics - bounds on random vectors 1 Introduction So far in the course, tools for dealing with random variables over real numbers have been studied extensively. However, in many real-world applications, data points are seldom one-dimensional. Therefore, it is essential to develop tools for dealing with high-dimensional random variables (i.e., random vectors). 2 Random Vectors Let {X1, X2, . . . , Xn} ⊆ Rd be an i.i.d. sample of random vectors from some dis- tribution with mean E[X1] = ~µ ∈ Rd. If we want to estimate the value of ~µ using {X1, X2, . . . , Xn}, it is natural to introduce the unbiased estimator for the mean µˆ = 1 n ∑n i=1Xi. Once we have calculated µˆ, it is natural to ask how “close” ~µ and µˆ are. In one dimension, the notion of closeness is well-defined (i.e. just look at the quantity |µ− µˆ|), however, we are dealing with random vectors now! Therefore, thinking about what it means for two vectors to be “close” to each other is not a straightforward endeavor. Do we consider each of the components individually? Or perhaps we could think of our sample as living in some sub-metric space (X , dx) of Rd. These are both valid schemes for thinking about the “closeness” of vectors in Rd. In this section of the notes, however, we will only provide tools for dealing with mean estimation in Rd under the standard Euclidean Norm. In other words, we’ll examine the statistical properties of ‖~µ− µˆ‖22. 2.1 Mean of a Gaussian Random Vector First, consider the case when {X1, X2, . . . , Xn} ⊆ Rd are i.i.d. spherical Gaussians. In this case, we know the distribution of the sums of these random vectors. In particular 1 note that ∑ i Xi ∼ N (n~µ, nσ2Id) 1 n ∑ i Xi ∼ N (~µ, σ 2 n Id) µˆ ∼ N (~µ, σ 2 n Id) µˆ− ~µ ∼ N (0, σ 2 n Id) √ n σ (µˆ− ~µ) ∼ N (0, Id). This implies that ∥∥∥√nσ (µˆ− ~µ)∥∥∥2 2 ∼ χ2(d). This is great, since we know that the Chernoff bounds of the χ2(d) distribution are given by y ∼ χ2(d) : P ( y ≥ d+ 2 √ 2 log 1/δ − 2 log 1/δ ) ≤ δ (1) y ∼ χ2(d) : P ( y ≤ d+ 2 √ 2 log 1/δ ) ≤ δ. (2) Now, by definition, each of the components of √ n σ2 (µˆ− ~µ) is a spherical Gaussian, so it is clear that ∥∥∥√nσ2 (µˆ− ~µ)∥∥∥2 2 ∼ χ2(d). Using these facts and taking the union bound of (1) and (2) we obtain1 P ∥∥∥∥∥ √ n σ (µˆ− ~µ) ∥∥∥∥∥ 2 2 ∈ √ d± 2 (√ d log 1/δ + log 1/δ ) ≥ 1− 2δ P ( ‖µˆ− ~µ‖22 ∈ σ2 n √ d± 2 (√ d log 1/δ + log 1/δ )) ≥ 1− 2δ That is, ‖µˆ− ~µ‖22 ≈ σ2 n (d±O( √ d)) ‖µˆ− ~µ‖2 ≈ σ√ n √ d±O( √ d) (3) with high probability. For a general, non-spherical Gaussian these bounds apply with the slight modification σ → Tr(Σ). 1This is a definite abuse of notation. Here, x ∈ a± b is understood to mean x ∈ (a− b, a+ b). 2 2.2 sub-Gaussian random variables Note : WLOG, all random vectors in this and subsequent sections of Section 2 are understood to have zero mean.2 Note that, to characterize the behavior of a random vector, we could always look at the its behavior in a particular direction. In other words, given some direction u ∈ Sd−1, how does the random variable uᵀX behave? To answer these questions we need to revisit and generalize the notion of subGaussian random variables and introduce subGaussian random vectors. As a starting point, consider the following definitions from [1]: Definition 1. A random variable X in R is said to be sub-Gaussian with variance proxy σ2 if E(X) = 0 and its moment generating function —denoted MX(t)— satisfies MX(s) ≤ exp σ 2t2 2 , ∀t ∈ R. (4) This is often written as X ∼ subG(σ2). Definition 1 can easily be generalized to the case where X is a random vector in R. Definition 2. A random vector X ∈ Rd is said to be sub-Gaussian with variance proxy σ2 if E(X) = 0 and for any u ∈ Sd−1, the random variable uᵀX is sub-Gaussian with variance proxy σ2. This is often written as X ∼ subGd(σ2). From an intuitive standpoint, sub-Gaussian random vectors behave very similarly to Gaussian random variables in the sense that they have very similar concentration properties. To begin formalizing this intuition, consider the following Lemma from [1]: Lemma 1. If X ∼ subG(σ2) then for any t > 0, it follows that: P(X > t) ≤ exp ( − t 2 2σ2 ) P(X < −t) ≤ exp ( − t 2 2σ2 ) This indeed confirms our intuition that, at least from a concentration standpoint, subGaussian random variables behave just like Gaussians. In particular, setting the RHS of the inequalities in Lemma 1 equal to some δ ∈ [0, 1], we immediately get the following useful corollary: 2Note that there is no loss of generality here, since we can always subtract off the mean of a random vector. That is, take X → X − µ 3 Corollary 1. If X ∼ subG(σ2), then with probability at least 1 − 2δ, the following event holds: X ∈ [ −σ √ 2 log 1/δ, σ √ 2 log 1/δ ] The proof of this is left as an exercise to the reader. For the more general case when we are considering a sum of subGaussian indepen- dent random variables, 1 can be generalized in the following way: Lemma 2. Let X1, . . . , Xn be a set of independent random variables such that Xi ∼ subG(σ2), then for any t > 0 and any vector a = (a1, . . . , an) ∈ Rd, P n∑ i=1 aiXi < t ≤ exp(− t2 2σ2‖a‖22 ) P n∑ i=1 aiXi < −t ≤ exp(− t2 2σ2‖a‖22 ) For a proof of this theorem, the reader is refered to [1]. In a similar fashion, the generalization of corollary 1 is given below: Corollary 2. With probability at least 1− 2δ the following event holds: n∑ i=1 aiXi ∈ [ −σ‖a‖2 √ 2 log 1/δ, σ‖a‖2 √ 2 log 1/δ ] 2.3 σ2 sub-Gaussian random vectors Let u ∈ Sd−1 and X be a subGaussian random vector with variance proxy σ2. Then it follows that uᵀX is subGaussian with variance proxy σ2. In particular let us consider a collection of subGaussian random vectors {Xi}ni=1 with variance proxy σ2. Then it follows that {uᵀXi}ni=1 is a collection of subGaussian random vectors with variance proxy σ2. Thus, from corollary 2 is follows that with probability at least 1− 2δ the following event holds (i.e. just take a = 1 n (1, . . . , 1)ᵀ): 1 n n∑ i= 1 uᵀXi ∈ [ − σ√ n √ 2 log 1/δ, σ√ n √ 2 log 1/δ ] and this inequality holds for any fixed u ∈ Sd−1. However, we want to impose a bound on sup u∈Sd−1 ∣∣∣∣∣∣ 1n n∑ i=1 uᵀXi ∣∣∣∣∣∣ = supu∈Sd−1|uᵀµˆ| where µˆ = 1n n∑ i=1 Xi. (5) To impose a bound on Equation(5) the following lemma will be particularly useful 4 Lemma 3. Let {Xi}ni=0 be a collection of n i.i.d. random vectors in Rd such that for every i ∈ {1, . . . , n}, Xi ∼ subGd(σ2). Then it follows the empirical mean µˆ = 1 n n∑ i=1 Xi is subGaussian with variance proxy σ 2 n . Proof. Let s ∈ Sd−1 and note that the MGF of µˆ = 1 n ∑n i= 1Xi is given by Msᵀµˆ(t) = E ( e 1 n ∑n i=1 s ᵀXit ) = E ( e ∑n i=1 s ᵀXi tn ) = n∏ i=1 E ( e t n sᵀXi ) = ( MsᵀXi ( t n ))n (6) Since Xi ∼ subGd(σ2) it follows that sᵀXi ∼ subG(σ2) so we can apply (4) to (6) and obtain ( MsᵀXi ( t n ))n ≤ exp ( σ2 t2 2n2 )n = exp ( σ2 t2 2n ) = exp ( σ2 n t2 2 ) which completes the proof. Now we’ll show that a bound on (5) can be obtained using Lemma 3, and is stated in the following theorem [1]: Theorem 1. Let X ∼ subGd(σ2) be a subGaussian random vector with variance proxy σ2 and B2 be the unit `2 ball in Rd. Then for any δ ∈ (0, 1), the following event holds with probability at least 1− δ: sup u∈Sd−1 uᵀX ≤ sup u∈B2 uᵀX = sup u∈B2 |uᵀX| ≤ 4σ √ d+ 2σ √ 2 log 1/δ To prove this theorem we need to use the following lemmas, which we state without proof (see [1]): 5 Lemma 4. The unit ball B2 in Rd has an -net N of cardinality |N | ≤ ( 3 )d. Lemma 5. Let N := {r1, ..., rn} be a set of points in Rn and X ∈ Rd be a random vector such that rᵀiX are subGaussian random variables with variance proxy σ2. Then the following two inequalities hold:3 P(sup r∈N rᵀX > t) ≤|N | e− t 2 2σ2 P(sup r∈N |rᵀX| > t) ≤ 2|N | e− t 2 2σ2 Now we present a proof of theorem 1 from [1]. Proof. Let N be a 1/2-net of B2. Then from lemma 4 we know we can choose N such that |N | ≤ 6d. Now, since N is a 1/2-net of B2, we know that for any u ∈ B2, there exists an r ∈ N such that ‖u− r‖ ≤ 1/2. Now, define x = u− r and note that since ‖x‖ ≤ 1/2, x ∈ 1 2 B2. Then, sup u∈B2 uᵀX ≤ sup z∈N ,x∈ 1 2 B2 (rᵀX + xᵀX) ≤ sup z∈N rᵀX + sup x∈ 1 2 B2 xᵀX = sup z∈N rᵀX + 1 2 sup u∈B2 uᵀX (7) Where the last equality follows since sup x∈ 1 2 B2 xᵀX = sup x∈ 1 2 B2 1 2 (2x)ᵀX, taking v = 2x ∈ B2 = 1 2 sup v∈B2 vᵀX, interchanging the dummy variables u ⇐⇒ v = 1 2 sup u∈B2 uᵀX Now, from (7), it follows that sup u∈B2 uᵀX ≤ 2 sup r∈N rᵀX Therefore, for any t > 0, P(sup u∈B2 uᵀX > t) ≤ P(2 sup r∈N rᵀX > t) ≤|N | e− t 2 8σ2 ≤ 6de− t 2 8σ2 (8) 3This is a special case of theorem 1.16 from [1] 6 Where the second to last line follows from lemma 1. Setting the RHS of (8) to be at least δ, we obtain the following minimal value of t: 6de− t2 8σ2 ≤ δ d log 6− log δ ≤ t 2 8σ2 t2 ≥ 8σ2 log(6)d+ 8σ2 log 1/δ t ≥ 2σ √ 2 log(6)d+ 2 log 1/δ Now note that 4σ √ d+ 2σ √ 2 log 1/δ ≥ 2σ( √ 2 log(6)d+ √ 2 log 1/δ) ≥ 2σ √ 2 log(6)d+ 2 log 1/δ Therefore, it is sufficient to take t = 4σ √ d+ 2σ √ 2 log 1/δ. Thus, P ( sup u∈Sd−1 uᵀX ≤ sup u∈B2 uᵀX ≤ 4σ √ d+ 2σ √ 2 log 1/δ ) ≥ 1− δ. (9) Now a straightfoward application of Lemma 3 and theorem 1, reveals that: P ( sup u∈Sd−1 uᵀµˆ ≤ sup u∈B2 uᵀµˆ ≤ 4 σ√ n √ d+ 2 σ√ n √ 2 log 1/δ ) ≥ 1− δ (10) Which is consistent with the results from lecture. 2.4 Non-homogeneous random vectors Now, let {Xi}ni=1 ⊆ Rd be a set of sub Gaussian random vectors with variance proxy σ2 having the following distribution P(Xi = a) = P(Xi = −a) = 1 2 . Now, if we apply (9) to µˆ = 1 n ∑n i=1Xi ∼ subG(σ2/n), we will obtain the following approximate bound sup u∈B2 uᵀµˆ ≈ O ( σ √ d n ) (11) However, in the direction of aᵀX ∼ O(1), so the bound (11) is loose by a factor of√ d! To get around this, we introduce the following lemma by Yurinsky: 7 Lemma 6. Suppose ‖X‖ ≤ b almost surely and that ‖X − ~µ‖2 ≤ σ2, then for any δ ∈ (0, 1) the following event holds with probability at least 1− δ: ‖µˆ− ~µ‖2 ≤ E‖µˆ− ~µ‖2 + σ √ 8 log 1/δ n + 2b log 1/δ 3n ≤ σ√ n + σ √ 8 log 1/δ n + 2b log 1/δ 3n Applying the above lemma to the example given, we obtain that with high proba- bility ‖µˆ− ~µ‖2 ≈ O ( ‖a‖2√ n ) , which is a much better than what we had previously. 8 3 A brief introduction to covariance estimation Let {Xi}ni=1 ⊆ Rd be a set of random variables with E(X) = µ such that Xi − ~µ is subGaussian. Now, to begin our study of random matrices, we compare the empirical covariance matrix Σˆ = 1 n n∑ i=1 (Xi − µˆ)(Xi − µˆ)ᵀ = 1 n n∑ i=1 (Xi − ~µ)(X − ~µ)ᵀ − (µˆ− ~µ)(µˆ− ~µ)ᵀ with the actual covariance Σ under the Forbenius norm. In particular, note that: ∥∥∥Σˆ− Σ∥∥∥ F ≤ ∥∥∥∥∥∥ 1n n∑ i=1 (Xi − ~µ)(X − ~µ)ᵀ ∥∥∥∥∥∥ F + ∥∥(µˆ− ~µ)(µˆ− ~µ)ᵀ∥∥ F (12) What is important to note about this is that bothXi−µ and µˆ−µ are both subGaussian random variables. Therefore, all the tools from Section 2 carry over by treating (12) as a vector in Rd2 . Moreover, under the Forbenius norm ∥∥(µˆ− ~µ)(µˆ− ~µ)ᵀ∥∥ F reduces to the familiar ‖µˆ− µ‖, which we already know how to deal with. Finally, we conclude our introduction to random matrices with a proof of the following theorem: Theorem 2. Let Xi be a symmetric, positive semi-definite random matrix, and Sn = ∑n i=1Xi. 4 Define λ1(Sn) to be the largest eigenvalue of the the matrix Sn, then the following inequality holds: P(λ1(Sn) ≥ a) ≤ E(Tr(Sn)) a Proof. Note that a · 1(λ1(Sn)) ≤ λ1(Sn) (13) ≤ n∑ i=1 λi (14) = Tr(Sn) (15) From above we conclude that: 1(λ1(Sn) ≥ a) ≤ Tr(Sn) a 4Recall that the sum of PSD matrices is PSD. 9 Taking the expectation value of both sides yields the desired result E(1(λ1(Sn)) ≥ a) = P(λ1(Sn) ≥ a) ≤ E(Tr(Sn) a Bibliography [1] Philippe Rigollet. High dimensional statistics. 10
{"url":"https://www.xuebaunion.com/detail/1199.html","timestamp":"2024-11-05T19:54:46Z","content_type":"text/html","content_length":"25729","record_id":"<urn:uuid:25b2d5da-f404-45db-b60b-332c45f76408>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00616.warc.gz"}
Linear Regression for Normal People A quick-ish way to see and understand how statisticians use linear regression. Photo by Enayet Raheem on Unsplash Suppose I give you these data on height and weight of 50 people. The data are fake, so don’t get too excited. Next, I ask you if there is any relationship between the height and weight of people. From your experience, you probably will say yes, height is related to weight. The taller the person, the heavier they are. But I counter that I know short people who weigh more than their taller How do you prove to me statistically that taller people weigh more than shorter ones? Step One: The Eyeball Test One of the first things you can do is plot the data on a scatter plot: We can both see that the scatter of the points gets higher in weight as the height increases left to right. You can say to me, “See? The higher the weight, the higher the height, and vice-versa!” But what if I counter by telling you that there are some points indicating lighter-but-taller?
{"url":"https://medium.com/the-quantastic-journal/linear-regression-for-normal-people-25d262b5f71c","timestamp":"2024-11-13T21:59:01Z","content_type":"text/html","content_length":"101574","record_id":"<urn:uuid:2d1e6f51-b4f9-47dd-9e97-88007216e0ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00332.warc.gz"}
Bomba Andrii Версія від 09:47, 22 січня 2020, створена Андрій Олексійович Христюк (обговорення | внесок) (Створена сторінка: thumb|Bomba Andrii ==Bomba Andrii Yaroslavovych is Full Professor, Doctor of Sciences (Technics), Ph.D. (Physics & Mathematics), Professor of D...) Перейти до навігації Перейти до пошуку Bomba Andrii Yaroslavovych is Full Professor, Doctor of Sciences (Technics), Ph.D. (Physics & Mathematics), Professor of Department of Computer Sciences and Applied Mathematics of National University of Water Management and Environmental Engineering. His name is well known to experts in the field of technical and physical-mathematical sciences in Ukraine and abroad. A. Bomba finished the study at Holoshynetska start school and Novoseletska middle school (1956-66 yy.) in Ternopil Region and entered Lviv State University after Ivan Franko to the Faculty of Mechanics and Mathematics. He obtained the Specialist Diploma (Speciality: Matematician) in 1972. He studied at postgraduate school at Institute of Mathematics of the Academy of Sciences of Ukraine (Kyiv, 1978-1981 yy.) and at Doctoral School at the V.M. Glushkov Institute of Cybernetics of National Academy of Sciences of Ukraine (Kyiv, 2000-2003 yy.). He studied at training courses at the Faculty of Cybernetics of Taras Shevchenko Kiev State University, at the department of mathematical problems in mechanics of Institute of Problems in Mechanics of the Academy of Sciences of USSR in Moscow and at National University “Lviv Politechnic”. His scientific achievements are approved by PhD in Physics&Mathematics and Doctor of Sciences in Technics in speciality “Mathematical Modelling and Numerical Methods”. He held the positions of assistant, teacher, senior lecturer, associate professor, later head of the Department of Informatics and Applied Mathematics, professor of RSHU, and also was a teacher. While working on the development of various scientific projects, he also worked as an engineer, junior and senior research associate, managed budgetary topics. His research interests are systematic mathematical modeling of nonlinear perturbations of processes type "filtration-convection-diffusion" with aftereffect at incomplete data, spatial analogues of boundary value problems on quasiconformal mappings and problems of modeling nonlinear processes in porous media, mathematical modeling of nonlinear processes of multicomponent and multiphase filtration in systems such as reservoir-fluid on conditions of control and optimization, and to intensify the influx of the formation fluids using hydraulic fracturing and thermal methods (particularly in the shale and bituminous layers), modeling nonlinear perturbations of processes of mass transfer in different-porous (nanoporous) mediums on conditions of control, optimization and identification of parameters, modeling of nonlinear processes of filtering fluids from multicomponent contaminations with considering the reverse effect and diffusion-masstransfer perturbations, improving the efficiency of consistent and progressing compressions of color images without loss, forecasting the propagation of soliton-type separated waves; construction of fundamentally new diffusion-like models of information process of knowledge potential propagation, problems of identification in electro-impedance tomography, modeling of explosive processes. A. Bomba submitted the idea and contributed to the introduction of specialties in Applied Mathematics at the Universities of Rivne. He also makes much effort for the organization of cultural and scientific life. Professor is founder and chief editor of collections of scientific papers “Volyn Mathematical Bulletin. Applied Mathematics Series”. He is member of the editorial boards of scientific papers collections and specialized scientific councils on thesis defense; systematically reviewed articles, manuals, textbooks, monographs, candidate and doctoral theses. A. Bomba is the author of 9 monographies, tutorial, manuals, more than 500 scientific articles and abstracts of Ukrainian and International Conferences. He trained 14 candidates (Baranovskii S. V., Sydorchuk B. P., Kashtan S. S., Prigornystkii D. O., Prysiazniuk I. M., Safonyk A. P., Gavryliuk V. I., Klymiuk Yu. Ye., Shportko O. V., Yaroshchak S. V., Hladka O. M., Sinchuk A. M., Prysiazniuk O. V., Boichura M. V.) and 2 doctorals (Safonyk A. P. and Turbal Yu. V.) dissertations. Malash K. M. successfully completes PhD thesis. Plan-prospectus of doctoral dissertations by Prysiazhniuk I. M., Yaroshchak S. V., Moroz I. P., Klymiuk Yu. Ye. are formed. Under his leadership, about 10 students of Rivne Region became winners of the All-Ukrainian Competition of the Academy of Sciences of Mathematics and Informatics. Monographies: A. Ya. Bomba, S. V. Baranovskii, A. P. Kuzmenko. Educational-methodical manual for independent study of the discipline "Equation of mathematical physics" [in Ukrainian], Rivne, MEGU-RSHU (2006); A.Ya. Bomba, V.M. Bulavatskyy, V.V. Skopetskyy, Nonlinear mathematical models of processes of heohydrodynamics [in Ukrainian], Naukova Dumka, Kyiv (2007); A.Ya. Bomba, S.V. Baranowski, I.M. Prysiazhnyuk, Nonlinear singularly perturbed problems type "convection-diffusion" [in Ukrainian], NUWMNRU, Rivne (2008); ] A.Ya. Bomba, V.I. Havryliuk, A.P. Safonyk, O.A. Fursachyk, Nonlinear problems type filtering-convection-diffusion-masstransfer under conditions of incomplete data: monograph [in Ukrainian], NUWMNRU, Rivne (2011); A.Ya. Bomba, S.S. Kashtan, D.O. Pryhornytskyy, S.V. Yaroschak, Methods of complex analysis: Monograph [in Ukrainian], NUWMNRU, Rivne (2013); A.Ya. Bomba, Yu.Ye. Klymyuk, Mathematical modeling of spatial singularly-perturbed processes type filtration-convection-diffusion: monograph [in Ukrainian], Assol, Rivne (2014); A.Ya. Bomba, A. M. Sinchuk., S. V. Yaroschak, Modeling of filtration processes in the oil and gas seams numerical methods quasi-conformal mapping: monograph [in Ukrainian], Assol, Rivne (2016); A. Ya. Bomba., O. M. Hladka, A. P. Kuzmenko. Computing technologies based on complex analysis methods and summary images [in Ukrainian], Assol, Rivne (2016); A. Ya. Bomba, A. P. Safonyk. Modeling of nonlinearly perturbed processes of purification of liquids from multicomponent contaminants [in Ukrainian], NUWEE, Rivne (2017); A. Ya. Bomba, I.M. Prysiazhnyuk, O. V. Prysiazhnyuk. Methods of perturbation theory for prediction of processes of heat and mass transfer in porous and microporous media [in Ukrainian], O. Zen, Rivne (2017). His articles are published in Reports of the National Academy of Sciences of Ukraine, magazines: Ukrainian Mathematical Journal, Ukrainian Physical Journal, Cybernetics and Systems Analysis, Journal of Computational and Applied Mathematics, Control and Informatics Problems, Control Systems and Machines, Mathematical Methods and Physico-Mechanical Fields, Electronic Modeling, Information Selection and Processing, "Radioelectronics, Informatics, Control", Oil and Gas Industry, Journal of Applied Computer Science, Journal of Mathematical Sciences, «Informatics, Control, Measurement in Economy and Environment Protection», Journal of Automation and Information Sciences, Journal of Hydrocarbon Power Engineering, Journal of difference equation, Journal of Mathematics and System Sciense, Journal of Environmental Science and Engineering, Mathematical Modeling and Computing, International Journal of Applied Mathematical Research, Radio Electronics Computer Science Control, EasternEuropean Journal of Enterprise Technologies, International Journal of Computing, Journal of Engineering Physics and Thermophysics; in the bulletins of Kyiv, Kharkiv, Lviv universities and in proceedings: Computer Mathematics, Physical-Mathematical Modeling and Information Technology, Advances in Intelligent Systems and Computing etc. Direction of the scientific school: Mathematical modeling and computational methods. Problems. Development of numerical methods of complex analysis and perturbation theory of modeling of nonlinear processes with aftereffect under conditions of control, identification and optimization of parameters. Vectors of search: Systematic mathematical modeling of nonlinear perturbations of "filtration-convection-diffusion" processes with the effect of incomplete data; Spatial analogues of boundary value problems for quasiconformal mappings and problems of modeling nonlinear processes in porous media; Mathematical modeling of nonlinear processes of multicomponent and multiphase filtration in reservoir-type systems under conditions of control and optimization, intensification of reservoir fluid flow using hydraulic fracturing and thermal methods (in particular in shale deposits and bituminous layers); Modeling of nonlinear perturbations of mass transfer processes in different porous (nanoporous) media under conditions of control, optimization and identification of parameters; Modeling of nonlinear processes of filtration of liquids from multicomponent contaminants taking into account feedback and diffusion-mass perturbations; Increasing the efficiency of consistent and progressive lossless lossy compression of color images, identification of electro-impedance tomography parameters, simulation of controlled blasting processes. He was in charge of managing state and state budget topics: Mathematical modeling of nonlinear perturbations of eco-energy systems (State registration number 0100U004897); Numerical-asymptotic methods in ecological problems (State Fund for Basic Research of SCST of Ukraine, project № 1/778 dated 4.05.92 and № 11.3 / 91); Perform mathematical modeling of the diffusion process in the sample-cell system (Yaroslavl Departments of the Kama Institute of Deep and Ultra-Deep Well Research); Systematic mathematical modeling of nonlinearly perturbed "filtration-convection-diffusion" processes with the effect of incomplete data (state registration number 0109U001065); Improving lossless image compression efficiency in modern graphic formats (state registration number 0110U004001); Development of methods and graphic format of progressive lossless compression of color images; Spatial analogues of boundary value problems for quasiconformal mappings and problems of modeling nonlinear processes in porous media ”(State Registration No. 0112U001014); «Development of numerical methods of complex analysis and perturbation theory of modeling of nonlinear processes with aftereffect under conditions of control, identification and optimization of parameters» (0116U000711). He participated in many other topics, in particular: Mathematical models of nonlinear stationary and non-stationary filtration and hydraulic processes, problems of interconnection and consideration of local inhomogeneities (No. I-34 on the basis of the decision of the Expert commission of NUHPP of January 10, 1995, Protocol No. 4 to the order of the rector of January 6, 1995 under No. 6); Physical and mathematical modeling of filtration-deformation processes in soil dams taking into account the mutual influence of gradients of pressure and characteristics of the environment (No. 2-62, NUPGP, 17.05.03); Development and justification of energy efficient constructions of heat reclamation systems for heating of the protected soil by low-temperature waters (for example, thermal waste of industry) (state registration number 0110U00820); Development of theoretical bases for utilization of low potential heat of waste water of industrial facilities in agriculture (№ I-27 - NUELP; RK №0107U002002032 - Ukr STI); Research and theoretical-experimental substantiation of the basic parameters of the process of magnetic purification of the heat-power plant environments (state registration number 0112U001591, NUPGP, 2012-2013); Research and improvement of basic parameters of magnetic deposition apparatus of ferro-containing products of corrosion of thermal power equipment (state registration number 0114 U 001615). He was awarded the "Excellence in Education of Ukraine" badge, a Diploma of the Presidium of the Verkhovna Rada of Ukraine, and acknowledgment of gratitude to the Western Science Center of the NAS of Ukraine and the Ministry of Education and Science of Ukraine. Academician of the Academy of Sciences of Higher Education, Academician of UNGA, Full Member of the NUSh, Chairman of the Rivne Branch of the Ukrainian Mathematical Society, awarded with various honors of the Ministry of Education and Science of Ukraine, Rivne State Administration and Regional Council, Rivne University of Higher Education, etc. Listed in (International Database) "Who is Who in Science and Engineering 2008-2009 - 10th Edition" among the leaders of the scientific and technological revolution, as well as “Who is who in Science and Engineering 2011-2012 - 11th Edition”.
{"url":"https://wiki.nuwm.edu.ua/index.php?title=Bomba_Andrii&oldid=16968","timestamp":"2024-11-07T05:54:55Z","content_type":"text/html","content_length":"34339","record_id":"<urn:uuid:43dca573-c230-452e-8e3d-3621e4d39fcb>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00540.warc.gz"}
Truth Tables for Logic Gates (3.2.2) | CIE A-Level Computer Science Notes | TutorChase In the realm of digital electronics and computer science, logic gates play a pivotal role. They are the fundamental building blocks of digital circuits, determining how binary inputs are processed to yield a binary output. This section focuses on constructing and analyzing truth tables for basic logic gates: NOT, AND, OR, NAND, NOR, and XOR. By exploring these, students can understand how digital systems process information and make decisions. Logic Gates Logic gates are integral components of digital electronics. They perform basic logical functions on binary data (0s and 1s), which are fundamental in computing and electronic systems. Each gate type has a unique symbol and operates based on specific logical rules. The NOT Gate Symbol and Function • Symbol: A triangle pointing right with a circle at its pointed end. • Function: The NOT gate, also known as an inverter, flips the input signal. It outputs 1 (true) for an input of 0 (false), and vice versa. Truth Table • The NOT gate is the simplest of all, having only one input. It's essential in creating more complex circuits and is often used to reverse a signal's logic level. The AND Gate Symbol and Function • Symbol: A D-shaped symbol with a flat top. • Function: Outputs 1 if and only if all its inputs are 1. It's the digital equivalent of the logical 'and'. Truth Table • The AND gate is fundamental in digital logic, used to combine multiple conditions. It's commonly used in circuit design and computational logic. The OR Gate Symbol and Function • Symbol: A curved D shape. • Function: Outputs 1 if at least one of its inputs is 1. It represents the logical 'or'. Truth Table • The OR gate is crucial in scenarios where any one of multiple conditions can trigger an action. It's widely used in alarm systems and decision-making circuits. The NAND Gate Symbol and Function • Symbol: Similar to the AND gate but with a circle at the output. • Function: Outputs 0 only if all its inputs are 1. It's the negation of the AND gate. Truth Table • The NAND gate is a universal gate; you can build any other gate using NAND gates. It's fundamental in digital electronics and logic circuit design. The NOR Gate Symbol and Function • Symbol: Like the OR gate but with a circle at the output. • Function: Outputs 1 only when all inputs are 0. It's the negation of the OR gate. Truth Table • NOR gates, like NAND gates, are universal and can be used to create any other type of logic gate. They are particularly useful in creating circuits that require the output to be false if any input is true. The XOR Gate Symbol and Function • Symbol: Resembles the OR gate but with an additional line on the input side. • Function: Outputs 1 if the number of 1 inputs is odd. It represents the 'exclusive or'. Truth Table • The XOR gate is unique as it outputs true only when the inputs are different. It's widely used in arithmetic circuits, like adders and subtractors. Truth tables play a critical role in the analysis and design of digital circuits. They provide a clear and systematic way to represent the output of a logic gate or circuit for every possible combination of inputs. This is especially important in the initial stages of circuit design, where understanding the behaviour of individual gates and their interactions is crucial. In the analysis phase, truth tables help in verifying the functionality of a circuit. By comparing the actual output of a circuit with the expected output listed in the truth table, engineers can identify and rectify errors in the design. This is crucial in complex circuits where multiple gates interact, as it ensures that the overall system behaves as intended. In the design phase, truth tables are invaluable in simplifying and optimizing circuits. They allow designers to visualize the relationships between inputs and outputs, which aids in reducing the number of gates used and minimizing the complexity of the circuit. This optimization is critical in creating efficient, cost-effective, and reliable digital systems. Furthermore, truth tables are essential in the education of computer science and engineering students, as they provide a foundational understanding of logic gates and circuits, laying the groundwork for more advanced studies in digital systems and computer architecture. AND and OR gates are fundamental in the creation of more complex logic circuits due to their basic yet versatile functionality. In digital electronics, complex operations are often broken down into simpler tasks that can be handled by these basic gates. For example, AND gates are used to implement logical conjunction in circuitry. They can be combined with other gates to form circuits that execute more complex operations, such as arithmetic functions, decision making, and data processing. In a computer processor, AND gates are used in the arithmetic logic unit (ALU) to perform various computational tasks. Similarly, OR gates are used to implement logical disjunction. They are essential in circuits where the output is true if any one of the inputs is true. This property is particularly useful in control systems and decision-making circuits. For instance, an OR gate can be used in an alarm system where multiple sensors (like motion, door, and window sensors) are connected to trigger an alarm if any one of them detects a breach. By combining AND and OR gates with other types of gates like NOT, NAND, NOR, and XOR, one can create more complex circuits such as multiplexers, demultiplexers, encoders, decoders, and even memory circuits. The versatility of AND and OR gates in constructing intricate logic networks underlines their importance in digital electronics and computer engineering. Deriving a logic circuit from a given problem statement involves several steps, starting with understanding the requirements of the problem and translating them into a logical expression or truth table. This process is central to digital design and is commonly used in creating custom circuits for specific applications. Firstly, the problem statement must be analyzed to identify the conditions and actions required. Each condition is represented as an input to the circuit, and the action as the output. The next step is to formulate the logical relationship between these inputs and the desired output. This can be done by writing a logical expression using AND, OR, NOT, NAND, NOR, and XOR operations, as appropriate. Alternatively, a truth table can be constructed to represent the relationship between inputs and outputs explicitly. Once the logical expression or truth table is defined, the next step is to design the circuit. This involves selecting the appropriate logic gates and connecting them in a way that their combined operation matches the logical expression or truth table. The design may go through several iterations to optimize for factors like the number of gates, power consumption, and overall efficiency. Finally, the designed circuit is tested against the problem statement to ensure it meets the required conditions. If discrepancies are found, adjustments are made either in the logic expression or in the circuit configuration. This process of deriving logic circuits from problem statements is fundamental in digital electronics and computing, enabling the creation of tailored solutions for diverse applications, ranging from simple decision-making circuits to complex computational algorithms. The XOR (Exclusive OR) gate holds significant importance in digital circuit design due to its unique logical operation. It outputs a high signal (1) only when the inputs differ. This property makes the XOR gate essential in arithmetic and comparison operations in digital systems. One of the primary applications of the XOR gate is in binary addition, specifically in the construction of half adders and full adders. A half adder, which adds two single binary digits and outputs a sum and a carry, uses the XOR gate to generate the sum. In a full adder, which adds three binary digits (including the carry from a previous addition), XOR gates are used to compute both the sum and the carry-out. Additionally, XOR gates are pivotal in parity check systems for error detection in data transmission. By comparing data bits, XOR gates can identify discrepancies, thereby enhancing data integrity. Their ability to identify differences makes them indispensable in digital systems that require precision and accuracy, such as in computational logic, encryption algorithms, and error detection mechanisms. The NOR gate's behaviour can be effectively illustrated using Boolean algebra, a branch of algebra that deals with true and false values, typically represented as 1 and 0. In Boolean algebra, the NOR operation is expressed as the negation of the OR operation. For a two-input NOR gate with inputs A and B, the Boolean expression is A+B, where X represents the NOT operation, and + represents the OR operation. This means the NOR gate outputs true (1) only when both inputs are false (0). In other words, if either A or B is true, the output of the NOR gate is false. The use of Boolean algebra in representing logic gates is crucial for simplifying and analysing complex logical expressions in digital circuits. It allows for a mathematical approach to designing and understanding logic circuits, which is essential in fields like computer science, electrical engineering, and digital electronics. Practice Questions Explain the function of a NAND gate and construct its truth table. Additionally, describe how a NAND gate can be used to replicate the function of an AND gate. A NAND gate operates by outputting a 0 only when all its inputs are 1; otherwise, it outputs a 1. This behaviour is the inverse of an AND gate. The truth table for a NAND gate with two inputs is as follows: if both inputs A and B are 0, the output is 1; if A is 0 and B is 1, the output is 1; if A is 1 and B is 0, the output is 1; if both are 1, the output is 0. To replicate an AND gate using a NAND gate, one can simply feed the output of the NAND gate into a NOT gate. The NOT gate inverts the output of the NAND gate, thus mimicking the function of an AND gate. This is a fundamental principle in digital logic, demonstrating how NAND gates are universal and can form other types of logic gates. Describe how an XOR gate is different from an OR gate in terms of functionality and provide a real-world application where an XOR gate would be more suitable than an OR gate. An XOR (Exclusive OR) gate differs from an OR gate in that it outputs a 1 only when the number of 1 inputs is odd; specifically, it outputs 1 if exactly one of its inputs is 1. In contrast, an OR gate outputs a 1 if at least one of its inputs is 1. A real-world application where an XOR gate is more suitable than an OR gate is in a digital circuit designed for error detection. For instance, in a parity check system, an XOR gate can be used to compare two bits of data. If the bits are different (i.e., 01 or 10), the XOR gate outputs 1, indicating a discrepancy. This application is crucial in data transmission systems to ensure data integrity.
{"url":"https://www.tutorchase.com/notes/cie-a-level/computer-science/3-2-2-truth-tables-for-logic-gates","timestamp":"2024-11-08T06:20:20Z","content_type":"text/html","content_length":"1049133","record_id":"<urn:uuid:0d5dfb9a-db74-406d-9cc8-1e4e96c8ed9f>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00156.warc.gz"}
New techniques for convex optimization and sparsification Theory Seminar New techniques for convex optimization and sparsification Arun JambulapatiUniversity of Michigan 3725 Beyster Building Abstract: The computational challenges posed by recent massive datasets motivate the design of more efficient algorithms. My work takes problems motivated by modern machine learning and develops theoretical tools to answer two central questions: 1) how can we learn from data faster, and 2) how can we select representative data (to potentially speed up downstream processing) I will first present several results on faster convex optimization leveraging a new theoretical primitive called a “ball optimization oracle”. I will give near-optimal algorithms for minimizing convex functions assuming access to this oracle, and show how this framework can be applied to yield faster algorithms for important problems in computer science such as regression, differentially private optimization, and robust optimization. I will follow with results on problems motivated by dataset compression. Leveraging techniques from high-dimensional geometry, I give near-optimal constructions of sparsifiers for sums of norms and generalized linear models. This directly implies new sparsifiers for hypergraphs, sums of symmetric submodular functions, and gives faster algorithms for linear regression. Greg Bodwin Euiwoong Lee
{"url":"https://theory.engin.umich.edu/event/new-techniques-for-convex-optimization-and-sparsification","timestamp":"2024-11-11T00:06:24Z","content_type":"text/html","content_length":"43439","record_id":"<urn:uuid:22e41bc4-d83a-4578-a597-45dafc0bff54>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00500.warc.gz"}
nCOV – Making Sense of an Epidemic Each day we learn more about the #nCOV2019 outbreak. Here are 5 key questions and terminology in infectious disease epidemiology to help make sense of all of this information, adapted from a #tweetorial by PhD candidate Rebecca Kahn (@rebeccajk13). R0 - What does it mean, and what does it tell us? • R[subscript]0 — pronounced R-nought or R-zero, and called the basic reproductive “rate” or number, is the value that summarizes how contagious a pathogen is. □ R0 is the average # of people one case will infect if it is introduced into an entirely susceptible population. If R0 >1 each infected person will transmit to >1 person, creating epidemic □ R0 does not give us any information on the total number of people who are currently infected. It is also not a measure of disease severity – it only tells us on average how many people each person will infect, not how severe those infections will be. □ R0 does not have to be constant for a given disease, and may depend on factors such as population density and contact patterns. Despite this, it seems to be relatively similar for a given disease across populations. □ There are several ways to estimate R0. At an epidemic’s start, reported cases grow exponentially, and R0 = (approximately) 1+growth rate*serial interval. The serial interval is the time between one infection and the next in a transmission chain. □ Challenges for this approach are: ○ 1. unreliable reporting (missing cases) – this can lead to biased estimates of R0 (up or down) if there are changes in the proportion of cases detected, or in delays in reporting. ○ 2. uncertain serial interval (time step over which to calculate the rate). • In simple mathematical models, we can estimate R0 as the probability of infection given contact with infectious person (b) x contact rate (k) x infectious duration (d). At the beginning of an outbreak, these parameters are challenging to estimate, given the limited data. ☆ The # of contacts (k) is particularly hard to estimate. Heterogeneity in k can lead to “bursts” of cases from superspreading events (when one individual infects a large # of people) leading to heterogeneity in R0. Superspreaders played important roles in SARS and MERS outbreaks. ☆ For the Wuhan nCoV there are several estimates of R0 using different approaches: • E.g., using the growth rate in cases by date of symptom onset is preferable to using the growth rate by date of report, because reporting can be delayed, come in “chunks” of many cases reported at once, and otherwise be confusing. Even date of onset has issues, but is better. ☆ Also, the growth rate by date of symptom onset will seem to decline near the present, because recent cases won’t all have been reported. If not accounted for this can look like transmission is slowing. How do we know the accurate number of nCOV cases? • Reports from health facilities and government agencies are key sources of information. But at the beginning of an outbreak, even if diagnostics are available quickly (like for nCoV), the total number of cases will be uncertain. □ If we assume many cases are missed in the outbreak’s epicenter (here, Wuhan), but detection is near 100% in international travelers, case incidence in travelers combined with daily probability of travel and mean detection time can be used to estimate total # of cases at the epicenter. □ This approach was used for the nCov outbreak: □ Similar methods were used during the 2009 H1N1 epidemic. For the algebra behind it: □ HKU released results of their real-time nowcast and forecast, using number of confirmed cases and daily travel estimates: □ This analysis combined an approach using outbound traveler volume from Wuhan and exported case numbers to estimate cases in Wuhan, then again used outbound travel volume to estimate cases exported to other Chinese cities: □ Active surveillance, which involves testing an (ideally representative) sample of the population exhibiting symptoms, can also be used to estimate total # cases: How and when did the nCOV outbreak start? • Rapid genomic sequencing can help identify when a virus was introduced, how many times it was introduced, and where imported infections originated. Viruses mutate, so if two people are infected with viruses that are similar, this suggests the two people’s infections are connected. • Combining analysis of genomes of viruses from those infected with estimates of how fast viruses are mutating and the total number of cases, we can estimate when the virus was introduced into a • Analyses of nCoV viruses suggest it was introduced in November or December of 2019. • Combined with data on people’s travel history and exposure, genomic data can help distinguish between imported cases and local transmission. Most cases outside of China have been imported, but local transmission is beginning to be reported. How do we contain an outbreak? What makes containment harder or easier? • R[subscript]e (or the “effective reproductive number”) is the # of people infected by a single case when the population isn’t entirely susceptible, and/or control measures are in place. □ Re=R0*(1-effect of control)*(proportion of pop susceptible). ☆ To stop an epidemic, Re must be <1. • Vaccines are some of our best tools for bringing Re < 1, by reducing the proportion of the population that is susceptible. Like a forest fire, if susceptibles (trees) run out, the epidemic will burn out. • Treatment can also reduce Re by decreasing the duration of infectiousness. Until vaccines and treatment are available, we must rely on non-pharmaceutical interventions. These include measures to decrease contacts, such as symptom monitoring/isolation and quarantine. • The relative effectiveness of these case-based interventions (which require knowing who is infected) depends on two key factors: • Infectiousness before or without symptoms makes control harder, because cases may not be identified before transmission, or may be missed completely yet may still transmit. • The substantial concern raised by reports of a pre-symptomatic transmission (https://www.bbc.com/news/world-asia-china-51254523) reflects that if it is common, control by isolation will be much □ There isn’t any data yet on whether presymptomatic transmission is common in nCoV. • Screening of travelers at ports of entry for symptoms is another way to control outbreaks. □ Pre-symptomatic and asymptomatic transmission make this approach less effective. Severity - A double-edged sword • Understanding the clinical spectrum of a novel infectious pathogen is relevant for public health response to an outbreak. □ Severe cases, while leading to hospitalization and potential death, are more likely to be detected and reported. □ Infected individuals with no or few symptoms are more likely to remain unnoticed. If these individuals contribute to transmission, the outbreak is harder to control. □ On the other hand, if mild or asymptomatic cases are common and do NOT contribute much to transmission, then they will aid control because fewer individuals will need care, and their infections will likely give them immunity to reinfection, at least for some time.
{"url":"https://ccdd.hsph.harvard.edu/research/ncov-making-sense-of-an-epidemic/","timestamp":"2024-11-13T08:25:03Z","content_type":"text/html","content_length":"188775","record_id":"<urn:uuid:e8a84760-00af-4761-bc35-bbfdb978b163>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00147.warc.gz"}
Where can I find experts to help with R programming visualization complexity tasks? | Pay Someone To Take My R Programming Assignment Where can I find experts to help with R programming visualization complexity tasks? Do we have a good tutorial? Dependrivers who guide you over REST and R programming are on a mission to find competent R user for working with R’s programming language and core services. Where can I find experts to help with R programming visualization complexity tasks? Use this tool to find out what can be done with R programming. Who am I looking for? 1) Computer science courses. We have a good idea of what students would like to take, 2) the technical knowledge needed to understand R programming. visit this page would 3) understand the programming language on other computers. Are you interested in running a R development team? Please find a A R development project of your own when you decide how it will do. Why could you think of us as a CRITICAL TEAM? Many types of computer science courses exist but we have created several of the more recent ones You should find an instructor (some have taken more than that) to teach your book. Most students have them over for a week when they finish programming in R, but there are times they won’t do as well. Why would you name a school if you can? We don’t offer exams, so although we do well with subject matter type, there are times where they are hard to cover except for a minor of one’s competence. That is why we have many. If the subject topic you are trying your hand at does not require you to develop a particular knowledge of R programming or that you have strong programming skills, you won’t get familiar with the concepts taught at the level that R programming has defined that are taught according to the requirements of that programming language. That’s why you’ll find many in our programming team not to be suited for that type of assignment. Key to the successful development of a R programming class are its focus on learning or understanding. Some of the R programming education you can do with our instructor is very personal, so be prepared to take time to plan for it! Steps To Finish While programming programs may be difficult due to time involved, most programming is written and processed with the knowledge that a good programming knowledge is needed. You can achieve the goals you desire using our programming excerts. Start Your R Programming Class A R programming class is an introductory level in a computer science course. There are two levels: 2 students and 5 additional students who find success with programming challenges while learning how to use or read R code quickly. You can conduct your learning or learn what you need to do while in the classroom. Each grade level of the teacher has his own approach that can be implemented by your class. Stage One: Class for very proficient level Have you done R programming in a high-performing computer science grade? Have youWhere can I find experts to help with R programming visualization complexity tasks?” — Eric, Jr. Pay Someone To Do University Courses Like , Vrigniew J. and Ashish S., 2005-2007. 1. Introduction Many of the other visualisation tasks that are often included in the R programming language are very complex and time-consuming. These include: Rectangular charts. Rectangular charts: these things include lots of charts, different layout patterns and a lot of animations…. This is often the case with this website in the graphic, and is usually a bit more confusing in R. Gymplot. The image is a GIS plot. This can be a graphical representation or representation of a curve or a geometrical pattern on which an element is attached. Matplot. Models and Object Stylistics. (Note: GIS plots use a 2 dimension grid as their canvas and are not shown directly unless they are part of a curve or curve of a piece of non-rectangular geometry.) Rectangular graphs. Drawing is the visualisation of geometry on the surface of the image, ranging from lines to circles to rectangles and rectangles of a circular shape.. Reddit Do My Homework . this is the most complicated point in the graphics. Labs. Rectangular graphs. A lot of Labs are done by one of the core GIS functions. With nearly two millennia of Labs you need to also write some other functions, either directly in GIS or via R graphics—but you can do it with images & graphics elements rather than with two dimensional geometries. Graphic Layout Matrix Storing. Image Layout Matrix is the basic base GIS visualization. It can be done either as an image or a complex GIS-like layout. One of the big challenges, to implement a GIS-like layout for Labs, that is hard to do in R, is that the matrix needs to actually be displayed in the graphic. Determining which images are GIS is usually difficult enough for a human to insert and check in R. All of these images must be placed, sorted, and the layout itself displayed. I did a lot of research and you can also check the dimensions in R’s output screen. Unfortunately, DICOM doesn’t seem capable of using r R in a graphical format, so I have to also look at the data files and output it in r. 2. Contour Plot Reciprocaly look at pptplot: it looks like this: > canvas = numpy.gfile( “c”, (30,0.01), r = (50,0.01) ) > print pptplot( “c_c,c_c_c_c_c” , p=nplot(aes(x=i),y=j,e=e,i=i,y=i,f=f) > canvas = numpy.gfile( “p_p_p_p_p_p_p_h “)” ), > g = numpy. My Assignment Tutor array( ).size() > canvas.towespectrum() > if g.abs() > 0: first line g = numpy.array( [g Read Full Article (g < 0.1) for g in range(1000)] + numpy.array( ).image(numpy.strlines(max(n, 400), mode= ('max', 100), zc=1)).towespectrum() g + (g < 0.1) for g in range(5000) + []).size()[0].towespectrumWhere can I click for source experts to help with R programming visualization complexity tasks? I’m taking a basic algorithm course at Tech In. I’ve done a quick r-analysis (first time application) focusing on how algorithms work and maybe have some expertise or not, in understanding what algorithms are and what they provide. Given these algorithms I don’t see an easy enough way of providing a comprehensive training course for R-applications to try to understand all the algorithms I mentioned above. I would greatly appreciate anyone who could offer input into further directions I’ve suggested, I’d include it in my brief application. I’ll get right into designing the first chapter, but it would be helpful if more: Since it’s a homework that doesn’t involve an algorithm (doesn’t matter whether you talk about algorithm or in general), I’ll add: all the good things about a lot of those things. There’s a lot more left to demonstrate here, but overall this book isn’t my place. What you’d do for your application is: Set your question to a correct yes/no answer. The more the better: which algorithms do you need to describe? Which algorithms give you enough familiarity with R’s algorithms to overcome the use of an algorithmic theory course? Or instead, find a book or maybe a reference library, sit down and apply the algorithms you’re reading. Do My Discrete site web Homework Basically keep this topic with you and use it as you go along. Replace the above answer with what you want to understand. Try and sketch a way of doing a R-applications problem in R with a simple R-algorithm for each iteration it takes. Create a work set that you can use as your reference when writing R – all that work goes with it – including following exercises. Take a look over some of these algorithms in more detail – and maybe: We give examples from the R-programming language. You can see R-as the human language from the following exercise. Also see the descriptions on the top left of the next page. If I want to work with algorithm-coding (A.R.C.E) how do I turn it into a R-application? If you’re already working with an algorithm and are new to the programming language, it’s an easy task. And you can use in your “exercise” the R-functions and the R-code from the previous example. My question with my second course. I’ve been doing this while researching on the R Core Library of the software RFXS (System Programming in R). I’m really happy with how I managed my project, even with the R-Functions, except a couple of bad pieces thrown in. One of the problems has been the bad definition of R-codes when writing R code. With the language provided by R-functions by the library, I can’t determine exactly what is meant by R-code
{"url":"https://rprogrammingassignments.com/where-can-i-find-experts-to-help-with-r-programming-visualization-complexity-tasks","timestamp":"2024-11-06T07:36:12Z","content_type":"text/html","content_length":"198135","record_id":"<urn:uuid:957ea09f-9cd7-44ad-b3ac-9ef04afe828f>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00028.warc.gz"}
Why do we always split a list into a head and a tail in Haskell? | Learning Cardano In Haskell, lists are defined recursively as a head (the first element) and a tail (the remainder of the list). This structure allows for easy and efficient pattern matching, recursion, and functional operations on lists. Here’s why we often split lists into head and tail: 1. Simplicity of Recursive Definitions • Lists in Haskell are constructed recursively, meaning a list is either an empty list [] or an element (head) followed by another list (tail). • Recursive definitions allow us to process each element of the list step-by-step. By matching a list as (x:xs), where x is the head and xs is the tail, we can define functions that operate on each element, often with a recursive call to handle the rest of the list. Example: Sum of a List sumList :: [Int] -> Int sumList [] = 0 -- Base case: the sum of an empty list is 0 sumList (x:xs) = x + sumList xs -- Recursive case: add head to sum of the tail Here, x represents the head, and xs represents the tail. This recursive structure is natural and intuitive in functional programming. 2. Pattern Matching on Lists • Pattern matching on (x:xs) makes it easy to define different behaviors for an empty list ([]) and a non-empty list (x:xs). • This approach enables concise and readable code because Haskell can handle each pattern independently. Example: Describe List Length describeList :: [a] -> String describeList [] = "The list is empty." describeList [x] = "The list has one element." describeList (x:xs) = "The list has multiple elements." By splitting the list into [], [x], and (x:xs), we handle each case separately, enhancing readability and clarity. 3. Efficiency and Laziness • Accessing the head of a list is constant time O(1), while accessing an element in the middle requires traversing the list (linear time). • Haskell’s lazy evaluation means it doesn’t compute the tail unless it’s actually needed. Thus, splitting into head and tail allows Haskell to process only as much of the list as required, making operations efficient. Example: Take First N Elements takeN :: Int -> [a] -> [a] takeN 0 _ = [] takeN _ [] = [] takeN n (x:xs) = x : takeN (n-1) xs Here, takeN only evaluates the head and processes the tail if more elements are needed. 4. Building Other List Operations • Many list operations, like map, filter, foldr, and zip, are easily implemented by processing each element using head and tail. • For instance, map applies a function to the head of the list and recursively processes the tail: Example: Mapping a Function Over a List map' :: (a -> b) -> [a] -> [b] map' _ [] = [] map' f (x:xs) = f x : map' f xs What would happen if we didn’t split the head and tail? If we didn’t split a list into head and tail, working with lists in Haskell would become significantly more challenging, reducing the elegance, readability, and efficiency of common list operations. Here’s what would happen and why splitting into head and tail is so fundamental: 1. Loss of Recursive Processing • Lists in Haskell are defined recursively: a list is either empty ([]) or composed of a head element and a tail list (x:xs). By not splitting into head and tail, we lose this natural recursive structure, making it difficult to process each element one at a time. • Recursive functions like summing a list or applying a function to each element (map) would become harder to implement. Without the head-tail split, the only alternative would be to access elements through indexing, which is inefficient in Haskell and not idiomatic for functional programming. Example Without Head and Tail: -- Summing a list without head-tail splitting becomes complex and inefficient sumList :: [Int] -> Int sumList xs = if null xs then 0 else (xs !! 0) + sumList (drop 1 xs) This approach is less readable, slower, and requires additional function calls (!! for indexing and drop for slicing). 2. No Pattern Matching on List Structure • Splitting into head and tail allows us to use pattern matching on lists, which is concise and powerful. Without head-tail splitting, we couldn’t match on list structures like (x:xs) or [], making code more verbose and harder to read. • Pattern matching enables handling different list cases (like empty lists, single-element lists, and multi-element lists) cleanly in separate branches. Example Without Pattern Matching: describeList :: [a] -> String describeList xs | null xs = "The list is empty." | length xs == 1 = "The list has one element." | otherwise = "The list has multiple elements." This code uses conditions instead of pattern matching, which is more cumbersome. Pattern matching allows us to handle each case in one line by matching directly on (x:xs). 3. Reduced Efficiency • Accessing elements at arbitrary positions in a list (e.g., xs !! 0) is inefficient in Haskell, as lists are linked lists, not arrays. The head-tail split allows us to access the head of the list in constant time O(1) and then recursively process the rest. • Without head-tail splitting, we would need to rely on functions like drop and take to access list elements, which are slower, especially for large lists. Inefficient Example Without Head-Tail: takeN :: Int -> [a] -> [a] takeN 0 _ = [] takeN n xs = [xs !! i | i <- [0..(n-1)]] -- Slower and less readable 4. Difficulty in Implementing Common List Functions • Fundamental list operations in Haskell, such as map, filter, foldr, and zip, all rely on the head-tail structure. Without it, we’d lose Haskell’s natural approach to list processing, and implementing these functions would require more complex and inefficient code. Example of map Without Head and Tail: map' :: (a -> b) -> [a] -> [b] map' f xs | null xs = [] | otherwise = f (xs !! 0) : map' f (drop 1 xs) -- Less efficient This version of map is less efficient because !! and drop both have to traverse the list, resulting in worse performance. Without splitting into head and tail: • We lose recursive processing, making list functions harder to implement and understand. • Pattern matching on lists becomes impossible, leading to more verbose and less readable code. • Efficiency suffers because of the need to use indexing (!!) and slicing (drop), both of which are slower for linked lists. • Common list operations like map, filter, and foldr become less efficient and harder to write. In Haskell, the head-tail split is foundational because it supports a recursive, efficient, and readable approach to handling lists. Without it, much of the elegance and simplicity that Haskell offers for list processing would be lost. Frequently Asked Questions about Head-Tail Splits in Haskell 1. What does head-tail split mean in Haskell? In Haskell, the head-tail split refers to breaking down a list into its first element (the head) and the rest of the list (the tail). This is commonly done with pattern matching: (x:xs) represents a list where x is the head, and xs is the tail. 2. Why is the head-tail split so commonly used? The head-tail split allows for recursive processing of lists, making it easy to define functions that operate on each element in turn. This structure supports functional programming by allowing operations to be applied to one element at a time. 3. What’s the difference between head and tail functions vs. the head-tail split with (x:xs)? The head and tail functions extract the first element and the rest of the list, respectively, but they are not as safe as pattern matching because they can fail on empty lists. Using (x:xs) is more idiomatic and safe, as it allows you to handle empty lists explicitly with pattern matching. 4. Can I access other elements directly with the head-tail split? No, the head-tail split only gives direct access to the first element. Accessing deeper elements requires further pattern matching (like (x:y:ys)) or using functions like drop, take, or indexing, but these can be less efficient. 5. Is the head-tail split efficient? Yes, accessing the head of a list is constant time O(1), as it’s the first element. Processing the tail recursively is also efficient since each step of the recursion accesses only the next head element without needing to traverse the entire list. 6. Are there other ways to split lists in Haskell? Yes, you can use functions like splitAt or takeWhile, which split lists based on conditions or positions, but they don’t provide the same direct recursive access that the head-tail split offers for sequential element processing. 7. What happens if I try to split an empty list? If you try to split an empty list with (x:xs), it won’t match, and an error occurs if you’re using head and tail functions. Pattern matching with [] and (x:xs) allows handling empty lists safely without runtime errors.
{"url":"https://www.learningcardano.com/why-do-we-always-split-a-list-to-a-head-and-a-tail-in-haskell/","timestamp":"2024-11-14T11:29:52Z","content_type":"text/html","content_length":"108564","record_id":"<urn:uuid:af1b1bb8-677d-4599-9bf3-5135c982c2d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00278.warc.gz"}
Doppler Log Janus Configuration Doppler Log general formula with only one transducer- A transducer is fitted on the ship’s keel which transmits a beam of acoustic wave at an angle c usually 60° to the keel in the forward direction, this gives the component “ v cos 𝜶” of the ship’s velocity towards the sea bed thus causing the Doppler shift and the fr = ft ( c + v Cos 𝜶)/(c - v Cos 𝜶.) Acoustic beam transmitted at an angle a towards seabed. If the waves are transmitted directly towards the seabed perpendicular to the keel, there will be no Doppler shift and the transmitted and received frequency will be the same. This is because the component of ship’s speed towards the seabed is zero. Dividing numerator and denominator of equation (i) by C, we get fr = ft {1 + (v cos α)/c} x 1/( (1 - v cos α)/c) By Binomial expansion Theorem, we have since v cos α << C, neglecting higher powers of v cos α/c we get, fr = ft + 2 v ft cos α / c fr-ft = 2 v ft cos α / c ................(ii) v=C (fr - ft) / 2 ft Cos a ............... (iii) With the help of this formula we can calculate the speed of the ship, considering that there is no vertical motion. Janus Configuration- In practice the ship has some vertical motion and the Doppler shift measurement will have a component of this vertical motion. In this case Doppler shift measurement will be fr-ft = 2 v ft cos α / c + 2 Vv ft sin α / c ................ fr-ft = (2 v ft cos α + 2 Vv ft sin α) / c ................ (iv) where Vv represents the vertical motion of the ship. This problem is overcome by installing two transducers, one transmitting in the forward direction and another in the aft direction at the same angle. This arrangement is known as Janus configuration. In this case the forward transducer will give Doppler shift i.e. frf - ft = 2 v ft cos α / c + 2 Vv ft sin α / c frf - ft = (2 v ft cos α + 2 Vv ft sin α) / c ................(v) where frf is the frequency received by the forward transducer while the aft transducer will have the component “v cos α” with negative sign since the transducer is moving away from the reflecting surface i.e. the seabed and hence the Doppler shift measured will be, fra- ft = - 2 v ft cos α / c + 2 Vv ft sin α / c fra- ft = (- 2 v ft cos α + 2 Vv ft sin α) / c ................ (vi) where fra represents frequency received by the aft transducer. In formula (v) and (vi) Vv will have the same sign since both the forward and aft transducers will move upwards or downwards together. By measuring the difference between the two Doppler shift frequencies, the vertical component will cancel out while the horizontal will add hence, (frf - ft) - (fra - ft) = (2v ft cos α + 2Vv ft sin a )/c - (-2 v ft cos α + 2 Vv ft sin α)/c (frf - ft - fra + ft) = 2v ft cos α / c + 2Vv ft sin α/c + 2 v ft cos α/c - 2Vv ft sin α/c frf - fra = 4 v ft cos α/c v = c (frf - fra) / 4 ft cos a ...........(vii) Above formula is used for calculating vessel's speed whe using Janus Configuration.
{"url":"https://www.marineteacher.com/post/doppler-log-janus-configuration","timestamp":"2024-11-12T09:27:15Z","content_type":"text/html","content_length":"1050484","record_id":"<urn:uuid:81b62cef-f100-4960-8bf0-1dfda3f291bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00066.warc.gz"}
Do Evaluations Really Add Up? First, let’s start with the classic article, “How to Improve Your Teaching Evaluations Without Improving Your Teaching” by Ian Neath from the mid 90’s, in which 20 tips are furnished for gaming your end-of-semester evaluations. Despite the funny title and sort of gimmicky conceit — and at this point somewhat out of date research — it is a serious paper in a serious academic journal. We know more now than we knew then, but a lot of the broad strokes are still the same. Often more than teaching and student outcomes, the class size, maleness, quality of students, and other non-pedagogical factors, play an outsized role in Student Evaluation of Teaching (SET) scores. But there’s more. In a recent slam-dunk of a meta-analysis by Uttl et al. in 2017, the authors provide strong evidence that when controlled for prior knowledge and sample size “student evaluation of teaching ratings and student learning are not related.” Yup, that’s it, there is no correlation between the learning and the SET scores. Based on this the authors suggest, “institutions focused on student learning and career success may want to abandon SET ratings as a measure of faculty’s teaching effectiveness.” In a post for the Berkeley Blog, the statistician Philip Stark talks about some the “statistical considerations” of SET scores. So we can be reasonably convinced that instructors who get very good evaluations aren’t necessarily bringing better learning outcomes to bear. But maybe they are bringing…something else? In “Availability of cookies during an academic course session affects evaluation of teaching,” published by Hessler et al. in 2018, the authors prove just that. All things being equal, the presence of cookies leads to higher SET scores, or as the authors so succinctly put it, “the provision of chocolate cookies had a significant effect on course evaluation.” And again, they conclude that it might be unwise to use SETs in important promotion and tenure decisions. Since then, the research about Student Evaluations of Teaching continues to roll out and continues to undercut my confidence in the system. Most recently, “Gender Bias in Teaching Evaluations” a study by Mengel et al really lit up the internet. This study includes analysis of almost 20,000 student evaluations and makes some important observations about the presence of bias in SETs. A really nuanced discussion of Mengel et al. appears in a post on the Rice University Center for Teaching Excellence Blog. The tables in the original paper are a bit hard to digest, but this post distills some major ideas into an easy to read infographic, and gives good bulleted summaries of the main points. Some takeaways are that bias is more apparent in math than in other subjects, junior women are subject to more bias than senior women, and bias in evaluations follows some in-group patterns, that is, men tend to rate men more favorably and women tend to rate women more favorably. The most appreciable loss is dealt to female PhD students teaching classes of predominantly men, who see -0.26 on a 5 point scale compared to their male counterparts. This number isn’t huge, but still troubling when you consider the particular importance of SETs for young people just beginning their career. I recently learned that SETs at Villanova this year will also allow students to comment on instructor bias in the classroom. You can read about it in a Wall Street Journal editorial (sorry, paywall), or in this twitter thread from Jeffrey Sachs. Many universities have started to move away from using SETs as tools in determining promotion and tenure cases. In the US, the University of Southern California caused a stir in spring of 2018 when they announced that they would no longer use SETs in promotion and tenure decision. Since then others have also begun to opt out, and others have begun to offer training on how to correctly interpret the scores once they’ve been collected. Jacqueline Dewar wrote a comprehensive blog post for the AMS blog On Teaching and Learning Mathematics about how we might interpret our SETs. A thing that really frustrates me about all of this is that women, POC, and other underrepresented groups who get lower SET scores by no fault of their teaching, are fooled into thinking they are a “bad teachers” when they’re really perfectly good. Consequently, they redirect the energy they would have otherwise spent on research in trying to fix their teaching, thereby increasing the likelihood that they will be viewed as less serious researchers. This, in a word, sucks. The end of the semester is barreling towards us, which means SETs will be dropping soon. Has your institution had the talk about SETs? What will you do to prepare your students? Are you bringing cookies on SET day? Do you love SETs? Tell me everything over on Twitter @extremefriday. 1 Response to Do Evaluations Really Add Up? 1. I’m thinking that SET day is usually pretty close to the end of the term, so students are feeling, in various ways and to various extents, nervous and negative, more so than at the term’s beginning or middle. That can affect how they evaluate the course; in particular, a course which has been taken by students who tend towards nervousness and negativity might be negatively evaluated. Of course, teachers are all in the same boat wrt this, but it’s is still one of the many variables (other than quality of teaching or how much the students have learned). This entry was posted in Issues in Higher Education, Math Education and tagged Bias, Jacqueline Dewar, Philip Stark, Rice University, SET, Student Evaluations, Villanova. Bookmark the permalink.
{"url":"https://blogs.ams.org/blogonmathblogs/2019/04/08/do-evaluations-really-add-up/","timestamp":"2024-11-04T21:40:51Z","content_type":"text/html","content_length":"58031","record_id":"<urn:uuid:41131a4f-11f1-4ea5-9e12-75669a595c19>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00735.warc.gz"}
App For Math Problems Mathematics is one of those subjects where you either love it or hate it. But it depends on the way you understand the concepts. And each concept may have its own method to understand it. For instance, perception matters a lot. A fixed perspective, such as a textbook may help in understanding that concept. But a video that shows many different perspectives may help you gain a deeper understanding of that particular concept. A very good example of this is the traditional 2D Pythagorean Theorem which can be extended into 3D, 4D and higher dimensions. The basic 2D figure can be easily represented in a textbook, but when we move on to higher dimensions, it becomes increasingly difficult to comprehend the complexity of the structure. Hence, apps help to simplify such concepts by providing detailed videos. There are many apps, but the best App for Math Problem on the internet right now is the BYJU’S – The Learning App. It contains elaborate and detailed videos explaining many different concepts in mathematics. The app contains everything that you ever need to make math interesting and fun! Download now and experience Math in a way you’ve never experienced before! Download the app from the link given below. Also Check:
{"url":"https://mathlake.com/App-For-Math-Problems","timestamp":"2024-11-07T01:22:41Z","content_type":"text/html","content_length":"9497","record_id":"<urn:uuid:96233ac1-2931-4eb7-8d2a-2133313bb643>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00376.warc.gz"}
cosmological observations Latest Research Papers | ScienceGate Abstract In this paper, we investigate the possibility of testing the weakly interacting massive particle (WIMP) dark matter (DM) models by applying the simplest phenomenological model which introduces an interaction term between dark energy (DE) and WIMP DM, i.e., Q = 3γDMHρDM. In general, the coupling strength γDE is close to 0 as the interaction between DE and WIMP DM is very weak, thus the effect of γDE on the evolution of Y associated with DM energy density can be safely neglected. Meanwhile, our numerical calculation also indicates that xf ≈ 20 is associated with DM freeze-out temperature, which is the same as the vanishing interaction scenario. As for DM relic density, it will be magnified by $$ \frac{2-3{\upgamma}_{\mathrm{DM}}}{2}{\left[2\pi {g}_{\ast }{m}_{\ mathrm{DM}}^3/\left(45{s}_0{x}_f^3\right)\right]}^{\gamma_{\mathrm{DM}}} $$ 2 − 3 γ DM 2 2 π g ∗ m DM 3 / 45 s 0 x f 3 γ DM times, which provides a new way to test WIMP DM models. As an example, we analyze the case in which WIMP DM is a scalar DM. (SGL+SNe+Hz) and (CMB+BAO+SNe) cosmological observations will give γDM = $$ {0.134}_{-0.069}^{+0.17} $$ 0.134 − 0.069 + 0.17 and γDM = −0.0008 ± 0.0016, respectively. After further considering the constraints from DM direct detection experiment, DM indirect detection experiment, and DM relic density, we find that the allowed parameter space of the scalar DM model will be completely excluded for the former cosmological observations, while it will increase for the latter ones. Those two cosmological observations lead to an almost paradoxical conclusion. Therefore, one could expect more stringent constraints on the WMIP DM models, with the accumulation of more accurate cosmological observations in the near future.
{"url":"https://www.sciencegate.app/keyword/436768","timestamp":"2024-11-13T11:28:52Z","content_type":"text/html","content_length":"124036","record_id":"<urn:uuid:c371fe7a-d830-46f8-bcec-1fff5c335153>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00692.warc.gz"}
Arctan: A Real-Life Example (from Criminology) Arctan (Inverse Tangent): Example from Forensics Does Forensic Science Need Math? Perhaps it can come as a surprise, but mathematical methods are widely used in forensics. While it’s a common misconception that forensic scientists rely primarily on biology or chemistry, such as finding traces of poisons or analyzing DNA from a hair, the reality is that math also plays a crucial role in forensic jobs. One example that requires the calculation of arctangent is crime scene reconstruction, as exemplified below. Crime Scene Reconstruction Criminalists, also known as forensic scientists, reconstruct crime scenes to understand how crimes were performed. For example, in the investigation of shooting scenes, they often need to determine the shooting angle and the suspected path of the bullet. Criminalists may use plastic strings or laser beams to determine bullet trajectories. However, this is not always possible, for example, if some objects like trees or bushes occlude the crime scene. This is where inverse trigonometric functions, particularly inverse tangent (arctan), come to the rescue. How to Calculate Arctan: an Example Consider a shooting scene where the bullet has entered and exited the wall, as shown in the animation below. This scene can be modeled with the right triangle model (ABC), where: • AB is the hypotenuse, formed by the flight of the bullet through the wall; • Leg AC is a ‘horizontal’ leg (if the shooting scene is viewed from above); • Leg BC is the ‘vertical’ leg. • The angle α, formed between the path of the bullet and the wall, is the shooting angle we want to find. The model, along with dimensions, is illustrated in the schematic below. Angles α and ABC are corresponding angles, and therefore equal. To calculate angle ABC, you need to consider the lengths of the opposite leg (AC=10”) and adjacent leg (BC). AC is known, let’s find \[ BC = 22.8 – 10 = 12.8” \] \[tan(\angle ABC) = \frac{AC}{BC}\] and therefore: \[tan(\angle ABC) = \frac{10}{12.8} = 0.78125\] Finally, we have all the data to find arctan(0.78125): \[ \arctan(0.78125) \approx 38° \] Therefore, the angle ABC is approximately 38°, indicating that the shooting angle (α) is also approximately 38°. In this example, we created a model of a right triangle to reconstruct the shooting scene and then used arctan to find the shooting angle. As you have just seen, high school mathematics is not an abstract science but has very specific applications used by ordinary people in their jobs – in our example, it was forensics. The same applies to science – a lot of knowledge gained in physics, chemistry, or biology classes is used, in one way or another, in different workplaces. Download the worksheet for this real-world case study. If you want to share this case with your students in math class, we have a video version available. This video explains the case in under 5 minutes and helps students connect what they learn in school to real-world applications. Check out the video preview below: One of the publications we used to create this blog post is: However, this example is not the only one demonstrating how inverse trigonometry is used in forensics. There are other applications as well, such as analyzing blood stains. Conduct a Google search with keywords like “trigonometry in forensics” to see for yourself how mathematical methods are indeed very useful for crime investigation. In this example, besides learning about arctan and inverse trigonometry, understanding angle geometry was important (remember corresponding angles ABC and α from the previous example?). If you want to explore more about how angles are used in real-life situations, check out the following article:
{"url":"https://dartef.com/blog/arctan-criminology-1/","timestamp":"2024-11-03T06:26:08Z","content_type":"text/html","content_length":"95857","record_id":"<urn:uuid:59cc3341-18b8-43ca-a6b0-3b638168a98b>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00856.warc.gz"}
CBSE Class 10 Maths – Pair of Linear Equations in Two Variables MCQ Quiz - Study Rate CBSE Class 10 Maths – Pair of Linear Equations in Two Variables MCQ Quiz Free PDF Download of CBSE Class 10 Maths Chapter 3 Pair of Linear Equations in Two Variables Multiple Choice Questions with Answers. MCQ Questions for Class 10 Maths with Answers was Prepared Based on Latest Exam Pattern. Students can solve Class 10 Maths Pair of Linear Equations in Two Variables MCQs with Answers to know their preparation level Download Books for Boards Join our Telegram Channel, there you will get various e-books for CBSE 2024 Boards exams for Class 9th, 10th, 11th, and 12th. We earn a commission if you make a purchase, at no additional cost to you. Please enter your email: 1. Half the perimeter of a rectangular room is 46 m, and its length is 6 m more than its breadth. What is the length and breadth of the room? 3. Choose the pair of equations that satisfy the point (1,-1) 4. The pair of equations x + 2y – 5 = 0 and −3x – 6y + 15 = 0 have: 5. If a pair of linear equations is consistent, then the lines will be: 6. The pair of equations y = 0 and y = –7 has 7. If the lines given by 3x + 2ky = 2 2x + 5y + 1 = 0 are parallel, then the value of k is 8. The value of c for which the pair of equations cx – y = 2 and 6x – 2y = 3 will have infinitely many solutions are 9. One equation of a pair of dependent linear equations is –5x + 7y – 2 = 0. The second equation can be 10. The figure shows the graphical representation of a pair of linear equations. On the basis of graph, the pair of linear equation gives _______________solutions. 11. Two numbers are in the ratio 5 : 6. If 8 is subtracted from each of the numbers, the ratio becomes 4 : 5. Then the numbers are: 12. The solution of the equations x – y = 2 and x + y = 4 is: 13. For which values of a and b, will the following pair of linear equations have infinitely many solutions? x + 2y = 1 (a – b)x + (a + b)y = a + b – 2 14. The father’s age is six times his son’s age. Four years hence, the age of the father will be four times his son’s age. The present ages, in years, of the son and the father, are, respectively 15. In a competitive examination, one mark is awarded for each correct answer while 1/2 mark is deducted for every wrong answer. Jayanti answered 120 questions and got 90 marks. How many questions did she answer correctly?
{"url":"https://schools.studyrate.in/class-10th/cbse-class-10-maths-pair-of-linear-equations-in-two-variables/","timestamp":"2024-11-14T15:17:15Z","content_type":"text/html","content_length":"127608","record_id":"<urn:uuid:2471aa2e-faa4-4e13-95b7-36c219125b0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00266.warc.gz"}
Take four numbers 1,2,3,4.generate and print. All the permutations of these four numbers. | Sololearn: Learn to code for FREE! Take four numbers 1,2,3,4.generate and print. All the permutations of these four numbers. Hi, take a look at my code "permutations". It's in phyton, bit I guess you could use the same algorithm
{"url":"https://www.sololearn.com/en/Discuss/152817/take-four-numbers-1234generate-and-print-all-the-permutations-of-these-four-numbers","timestamp":"2024-11-03T01:17:55Z","content_type":"text/html","content_length":"918004","record_id":"<urn:uuid:2aea8f4f-8584-494d-a310-63abdf1cd98e>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00079.warc.gz"}
12.2 Scatter Plots Before we take up the discussion of linear regression and correlation, we need to examine a way to display the relation between two variables x and y. The most common and easiest way is a scatter plot. The following example illustrates a scatter plot. Example 12.5 In Europe and Asia, m-commerce is popular. M-commerce users have special mobile phones that work like electronic wallets, as well as provide phone and Internet services. Users can do everything from paying for parking to buying a TV set or soda from a machine to banking to checking sports scores on the internet. For the years 2000 through 2004, was there a relationship between the year and the number of m-commerce users? Construct a scatter plot. Let x = the year and let y = the number of m-commerce users, in millions. Table showing the number of m-commerce users (in millions) by $xx$ (year) $yy$ (no. of users) 2000 0.5 2002 20.0 2003 33.0 2004 47.0 Using the TI-83, 83+, 84, 84+ Calculator To create a scatter plot: 1. Enter your x data into list L1 and your y data into list L2. 2. Press 2nd STATPLOT ENTER to use Plot 1. On the input screen for PLOT 1, highlight On and press ENTER. (Make sure the other plots are OFF.) 3. For TYPE, highlight the first icon, which is the scatter plot, then press ENTER. 4. For Xlist, enter L1 ENTER; for Ylist, enter L2 ENTER. 5. For Mark, it does not matter which symbol you highlight, but the square is the easiest to see. Press ENTER. 6. Make sure there are no other equations that could be plotted. Press Y = and clear out any equations. 7. Press the ZOOM key and then the number 9 (for menu item ZoomStat); the calculator will fit the window to the data. You can press WINDOW to see the scaling of the axes. Try It 12.5 Amelia plays basketball for her high school. She wants to improve to play at the college level. She notices that the number of points she scores in a game goes up in response to the number of hours she practices her jump shot each week. She records the following data: x (hours practicing jump shot) y (points scored in a game) Construct a scatter plot and state if what Amelia thinks appears to be true. A scatter plot shows the direction of a relationship between the variables. A clear direction happens when there is either • high values of one variable occurring with high values of the other variable or low values of one variable occurring with low values of the other variable, or • high values of one variable occurring with low values of the other variable. You can determine the strength of the relationship by looking at the scatter plot and seeing how close the points are to a line (see figures above). When you look at a scatter plot, you want to notice the overall pattern and any deviations from the pattern. In this chapter, we are interested in scatter plots that show a linear pattern. Linear patterns are common. The linear relationship is strong if the points are close to a straight line. If we think the points show a linear relationship, we draw a line on the scatter plot. This line can be calculated through a process called linear regression. A linear regression line models the trend of the data. However, we only calculate a regression line if one of the variables helps explain or predict the other variable. If x is the independent variable and y is the dependent variable, then we can use a regression line to predict y for a given value of x.
{"url":"https://texasgateway.org/resource/122-scatter-plots?book=79081&binder_id=78271","timestamp":"2024-11-04T10:59:34Z","content_type":"text/html","content_length":"44461","record_id":"<urn:uuid:540a8f1c-48de-470e-afec-9aa632a0f023>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00118.warc.gz"}
1. Describes a system whose time evolution can be predicted exactly. 2. Describes an algorithm in which the correct next step depends only on the current state. This contrasts with an algorithm involving backtracking where at each point there may be several possible actions and no way to chose between them except by trying each one and backtracking if it fails. Last updated: 1995-09-22 Nearby terms: DESY ♦ DETAB ♦ deterministic ♦ deterministic automaton ♦ DETOL ♦ developer Try this search on Wikipedia, Wiktionary, Google, OneLook.
{"url":"https://foldoc.org/deterministic","timestamp":"2024-11-04T12:35:49Z","content_type":"text/html","content_length":"9154","record_id":"<urn:uuid:f1c0aa94-ac68-4e61-ae40-ed66df389a1d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00634.warc.gz"}
14 Multiplication Chart Archives - Multiplication Table Chart Struggling to learn the 14 times table? Take a look at our interactive multiplication table chart here and learn this particular table 14 without any hassle. In the article ahead you can explore the numbers of the time’s tables chart to facilitate the systematic learning of the table. Multiplication tables are highly significant for all scholars and a layman as well in order to deal with day-to-day life. The understanding of the tables enables us to attempt the day-by-day mathematical calculations taking place all around us. So, go ahead with the article and get the excellence over the multiplication table 14. 14 Times Table Well, in simple words the formation of this particular table begins with the numeric of 14 that multiplies itself with the numeric values of 1 to 10. Subsequently, we get a specific sequence of the value of the different digits which we refer to as the time’s table. Besides that, the time table 14 is an intermediate level of the tables which is highly recommended for most of us. If you are an adult individual or a middle school scholar then you will mostly need to learn this specific table for yourself. With the proper fundamentals and the useful source of learning the tables, you can get a systematic understanding of this table. Multiplication Chart 14 Well, getting the right source of learning the multiplication table always plays a significant role for all aspiring table learners. The right source of table learning can make it all easier for the tables enthusiasts to comprehend the table in an easy and convenient manner. For the same cause, we have developed this dedicated multiplication chart that comes with the 14 Multiplication chart 14. The role of this multiplication chart is to promote the effective learning of table 14 for all the table learners. The chart uses an interactive approach of explaining the table that syncs well with the understanding of the table learners around. Multiplication Table 14 As a scholar or even a layman, you should definitely learn the table in order to use it in your academics and daily life. A proper understanding of this table can make a lot of things easier for you. For instance, you would be able to attempt and solve all the intermediate-level mathematical problems. Likewise, you can use table learning to smooth the day-to-day mathematical calculations around in your household or workplace. So, make sure to opt for our multiplication chart to begin your table learning today with this time table 14. Printable 14 Times Table If you are having the quick priority of learning 14 Multiplication Table without any delay then you should go with the printable timetable chart. We are offering here this specific format of printable Multiplication Table. This chart is ideal for anyone who is willing to learn the Multiplication Table with an easily accessible multiplication chart. With this chart, they can simply begin their table learning without making any efforts as such on its preparation. So, feel free to print this chart and you can also share it with other aspiring table learners.
{"url":"https://multiplicationchart.net/tag/14-multiplication-chart/","timestamp":"2024-11-04T20:29:54Z","content_type":"text/html","content_length":"85407","record_id":"<urn:uuid:37c1af41-2ce6-4dae-b7ea-18d31fc71199>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00211.warc.gz"}
A* and Dijkstra Shortest Path in C# Dijkstra is a common algorithm used to find the shortest paths from one node to all others, A* (A star) is a modification of djikstra that considers going from one node to one other. A* considers how close a node is from the goal and prefers nodes that bring you closer to the end goal. This implimentation uses Djikstra to get the shortest path between two nodes, so that it can be compared to A*. A square board is randomly generated and a few blocks are places to add complexity to the paths. Two random places on the grid are marked as start and end points for the algorithms to navigate. The program shows The result of using both Dijsktra and A* pathfinding algorithms to find the shortest path between those points (if there are more than one, one is found at random). The top shows Dijkstra, the bottom shows A*. The number of ticks and nodes each algorithm visited is also displayed. The blue X represends the Start, the yellow X’s represents the path taken to get to the goal. The red x’s are the nodes that were unexplored by the algorithm, the white ones being all the nodes visited but not part of the path.
{"url":"https://noamzeise.com/demo/2020/09/18/AStarDijkstra.html","timestamp":"2024-11-11T06:32:58Z","content_type":"text/html","content_length":"6178","record_id":"<urn:uuid:27d69524-3f0b-478c-a854-d72eefea9189>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00034.warc.gz"}
Equations of State David Young Cytoclonal Pharmaceutics Inc. An equation of state is a formula describing the interconnection between various macroscopically measurable properties of a system. This document only adresses the behavior of physical states of matter, not the conversion from one state to another. For physical states of matter, this equation usually relates the thermodynamic variables of pressure, temperature, volume and number of atoms to one another. In materials science the important properties are often what are termed "mechanical properties" rather than physical properties. Examples of mechanical properties would be hardness and ductility. Mechanical properties will not be addressed here. Gas - There are several types of gases with slightly different behaviors. These are ideal gasses, real gasses, super critical fluids, plasmas and critical opalescent conditions. The ideal gas law is often used as the first order description of any gas although this practice is questionable in the case of critical opalescent conditions. Ideal Gas - Although no gas is truly ideal, many gasses follow the ideal gas law very closely at sufficiently low pressures. The ideal gas law was originally determined empirically and is simply p V = n R T p = absolute pressure (not gage pressure) V = volume n = amount of substance (usually in moles) R = ideal gas constant T = absolute temperature (not F or C) where some values for R are 8.3145 J mol^-1 K^-1 0.0831451 L bar K^-1 mol^-1 82.058 cm^3 atm mol^-1 K^-1 0.0820578 L atom mol^-1 K^-1 1.98722 cal mol^-1 K^-1 62.364 L Torr K^-1 mol^-1 Real Gas - Real gas laws try to predict the true behavior of a gas better than the ideal gas law by putting in terms to describe attractions and repulsions between molecules. These laws have been determined empirically or based on a conceptual model of molecular interactions or from statistical mechanics. A well known real gas law is the van der Waals equation ( P + a / Vm^2 )( Vm - b ) = R T P = pressure Vm = molar volume R = ideal gas constant T = temperature where a and b are either determined empirically for each individual compound or estimated from the relations. a = 27 R^2 Tc^2 64 Pc b = R Tc 8 Pc Tc = critical temperature Pc = critical pressure The first parameter, a, is dependent upon the attractive forces between molecules while the second parameter, b, is dependent upon repulsive forces. Another two parameter real gas equation is the Redlich-Kwong equation. It is almost always more accurate than the van der Waals equation and often more accurate than some equations with more than two parameters. The Redlich-Kwong equation is ( p + a ) ( Vm - b ) = R T Vm ( Vm + b ) T^1/2 p = pressure a = empirical constant Vm = molar volume R = ideal gas constant b = empirical constant T = temperature where a and b are not identical to the a and b in the van der Waals equation. Equations of state in terms of reduced variables give reasonable results without any empirically determined constants for a specific substance. However, these are not generally as accurate as equations using empirical constants. One such equation is ( Pr + 3 / Vr^2 ) ( Vr - 1/3 ) = 8/3 * Tr Pr = reduced pressure Tr = reduced temperature Vr = reduced volume where reduced pressure and temperature are the unitless quantities obtained by dividing the value by the critical value. In the case of reduced volume, molar volume is divided by critical molar A two parameter equation which is no longer used much is the Berthelot equation p = R T - a ----- ---- V - b T V^2 p = pressure a = empirical constant V = volume R = ideal gas constant b = empirical constant T = temperature A somewhat more accurate modified Berthelot equation is p = R T [ 1 + 9 p Tc ( 1 - 6 Tc^2 ) ] --- -------- ----- V 128 Pc T T^2 p = pressure Pc = critial pressure V = volume R = ideal gas constant T = temperature Tc = critical temperature The Dieterici equation is another two parameter equation which has been seldom used in recent years. p = R T e^-a / ( Vm R T ) Vm - b p = pressure a = empirical constant Vm = molar volume R = ideal gas constant b = empirical constant T = temperature The Clausius equation is a simple three parameter equation of state. [ P + a ] ( Vm - b ) = R T T ( Vm + c )^2 a = Vc - R Tc 4 Pc b = 3 R Tc - Vc 8 Pc c = 27 R^2 Tc^3 64 Pc P = pressure T = temperature R = real gas constant Vm = molar volume Tc = critical temperature Vc = critical volume Tc = critical temperature The virial equation is popular because the constants are readily obtained using a perturbative treatment such as from statistical mechanics. The virial coeficients are also readily fitted to experimental data because it is a linear curve fit. p Vm = R T ( 1 + B(T) / Vm + C(T) / Vm^2 + D(T) / Vm^3 + ... ) p Vm = R T ( 1 + B'(T) / p + C'(T) / p^2 + D'(T) / p^3 + ... ) p = pressure Vm = molar volume R = ideal gas constant T = temperature B, C, D, .. = constants for a given temperature B', C', D', .. = constants for a given temperature where B is not identical to B' and etc. The equation of state created by Peng and Robinson has been found to be useful for both liquids and real gasses. p = R T - a ( T ) ------ ---------------------------- Vm - b Vm ( Vm + b ) + b ( Vm - b ) p = pressure a = empirical constant Vm = molar volume R = ideal gas constant b = empirical constant T = temperature The Wohl equation is formulated in terms of critial values making it a bit more convenient for situations where no real gas constants are available [ p + a - c ] ( Vm - b ) = R T --------------- ------ T Vm ( Vm - b ) T^2 Vm^3 a = 6 Pc Tc Vc^2 b = Vc / 4 c = 4 Pc Tc^2 Vc^3 p = pressure Vm = molar volume R = ideal gas constant T = temperature Pc = critical pressure Tc = critical temperature Vc = critical volume A some what more complex equation is the Beattie-Bridgeman equation P = R T d + ( B R T - A - R c / T^2 ) d^2 + ( - B b R T + A a - R B c / T^2 ) d^3 + R B b c d^4 / T^2 P = pressure R = ideal gas constant T = temperature d = molal density a, b, c, A, B = empirical parameters Benedict, Webb and Rubin suggest the real gas equation of state P = R T d + d^2 { R T [ B + b d ] - [ A + a d - a alpha d^4 ] - 1 [ C - c d ( 1 + gama d^2 ) exp ( - gama d^2 ) ] } P = pressure R = ideal gas constant T = temperature d = molal density a, b, c, A, B, C, alpha, gama = empirical parameters Supercritical Fluids - Supercritical fluids are well described by real and ideal gas laws. Critical Opalescence - Critical behavior is generally described using real gas equations which have constants defined in a way which ensures that the slope of reduced pressure vs. reduced volume is zero at the critical point. These give reasonable estimates of the relationships between pressure, volume and temperature but do not describe the opalescence or unique chemical properties very near the critical point. Plasma - The physical behavior of plasmas is most often described by the ideal gas law equation which is quite reasonable except at very high pressures. Liquid - Liquids are much less compressible than gasses. Even when a liquid is described with an equation similar to a gas equation, the constants in the equation will result in much less dramatic changes in volume with a change in temperature. Like wise at constant volume, a temperature change will give a much larger pressure change than seen in a gas. A common equation of state for both liquids and solids is Vm = C1 + C2 T + C3 T^2 - C4 p - C5 p T Vm = molar volume T = temperature p = pressure C1, C2, C3, C4, C5 = empirical constants where the empirical constants are all positive and specific to each substance. For constant pressure processes, this equation is often shortened to Vm = Vmo ( 1 + A T + B T^2 ) Vm = molar volume Vmo = molar volume at 0 degrees C T = temperature A, B = empirical constants where A and B are positive. The equation of state created by Peng and Robinson has been found to be useful for both liquids and real gasses. p = R T - a ( T ) ------ ---------------------------- Vm - b Vm ( Vm + b ) + b ( Vm - b ) p = pressure a = empirical constant Vm = molar volume R = ideal gas constant b = empirical constant T = temperature Superfluid - Superfluids are physically liquids although they have interesting properties, which are quantum mechanical in origin. Since this is still an active area of research and not completely understood, a reference to an introductory article is given, but no equations will be presented here. Suspension - Suspensions behave physically most like liquids. Colloid - A colloid being a type of suspension is also physically most like a liquid. Liquid Crystal - Depending upon the temperature, liquid crystals may be crystalline, glassy, flexible thermoplastics or ordered liquids. At sufficiently high temperatures, a true liquid phase will exist. Most of the physical properties of these are the same as non liquid crystal compounds. One exception is that as liquid crystal compounds are added to a solvent the viscosity increases as expected until the concetration becomes high enough to form a liquid crystal phase, when the viscosity drops. Visceoelastic - Since visceoelastics behave like solids on short time scales and like liquids over a long period of time, equations for liquids and solids could be used. Most of the usefulness of visceoelastics is based on their mechanical properties rather than their physical properties. Solid - The volume of a solid will generally change very little with a change in temperature. However, most solids are very incompressible so a constant volume heating will give a very large pressure change for even a small change in temperature. Crystals, glasses and elastomers are all types of solids. A common equation of state for both liquids and solids is Vm = C1 + C2 T + C3 T^2 - C4 p - C5 p T Vm = molar volume T = temperature p = pressure C1, C2, C3, C4, C5 = empirical constants where the empirical constants are all positive and specific to each substance. For constant pressure processes, this equation is often shortened to Vm = Vmo ( 1 + A T + B T^2 ) Vm = molar volume Vmo = molar volume at 0 degrees C T = temperature A, B = empirical constants where A and B are positive. Crystal - Crystals are solids which are often very hard. The equations above are used for describing the physical properties of crystals. Glass - Glasses are generally very brittle. The equations above are useful for describing the physical behavior until the stress becomes too great and the material shatters. Elastomer - An elastomer is an amorphous solid which can be deformed with out breaking. The change in volume is generally negligible with deformation. However, the cross sectional area may change considerably. For changes in temperature and pressure, elastomers can be considered to be solids although much softer than other solids. Superplastic - The unique ability of superplastics to stretch is a mechanical property. Physically, superplastics are treated as solids. Bose-Einstein Condensate - At the time of this writing, the first reports of having made a Bose-Einstein condensate have just been released. No measurements of physical properties have yet been made. Considering various aspects of the theory predicting the existence of this state lead to the conclusions that it might be a solid or a very supercooled gas or one very large single atom. Refractory - Refractory materials behave physically as solids. Further Information For an introductory chemistry text see L. Pauling "General Chemistry" Dover (1970) A physical chemistry text for non-chemists is P. W. Atkins "The Elements of Physical Chemistry" Oxford University Press (1993) A physical chemistry text for undergraduate chemistry majors is I. N. Levine "Physical Chemistry" McGraw-Hill (1995) A review of real gas equations is K. K. Shah, G. Thodos Industrial and Engineering Chemistry, vol 57, no 3, p. 30 (1965) An introductory article about superfluids is O. V. Lounasmaa, G. Pickett Scientific American, p. 104, June (1990) A mathematical treatment can be found in D. L. Goodstein "States of Matter" Dover (1985) Properties of high molecular weight solids (most commonly polymers) are discussed in H. R. Allcock, F. W. Lampe "Contemporary Polymer Chemistry" Prentice-Hall (1990) Solid state properties are covered in A. R. West "Solid State Chemistry and its Applications" John Wiley & Sons (1992) A review article is M. Ross, D. A. Young, Ann. Rev. Phys. Chem. 44, 61 (1993).
{"url":"https://server.ccl.net/cca/documents/dyoung/topics-orig/eq_state.html","timestamp":"2024-11-09T03:07:35Z","content_type":"text/html","content_length":"16234","record_id":"<urn:uuid:69e15a37-afe1-402c-802d-9e14c0e148ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00375.warc.gz"}
Universality for random matrices and log-gases Eugene Wigner's revolutionary vision predicted that the energy levels of large complex quantum systems exhibit a universal behavior: the statistics of energy gaps depend only on the basic symmetry type of the model. Simplified models of Wigner's thesis have recently become mathematically accessible. For mean field models represented by large random matrices with independent entries, the celebrated Wigner-Dyson-Gaudin-Mehta (WDGM) conjecture asserts that the local eigenvalue statistics are universal. For invariant matrix models, the eigenvalue distributions are given by a log-gas with potential $V$ and inverse temperature $\beta = 1, 2, 4$. corresponding to the orthogonal, unitary and symplectic ensembles. For $\beta \not \in \{1, 2, 4\}$, there is no natural random matrix ensemble behind this model, but the analogue of the WDGM conjecture asserts that the local statistics are independent of $V$. In these lecture notes we review the recent solution to these conjectures for both invariant and non-invariant ensembles. We will discuss two different notions of universality in the sense of (i) local correlation functions and (ii) gap distributions. We will demonstrate that the local ergodicity of the Dyson Brownian motion is the intrinsic mechanism behind the universality. In particular, we review the solution of Dyson's conjecture on the local relaxation time of the Dyson Brownian motion. Additionally, the gap distribution requires a De Giorgi-Nash-Moser type Hölder regularity analysis for a discrete parabolic equation with random coefficients. Related questions such as the local version of Wigner's semicircle law and delocalization of eigenvectors will also be discussed. We will also explain how these results can be extended beyond the mean field models, especially to random band matrices. arXiv e-prints Pub Date: December 2012 □ Mathematical Physics; □ Mathematics - Analysis of PDEs; □ Mathematics - Probability; □ 15B52; □ 82B44 Lecture Notes for the conference Current Developments in Mathematics, 2012
{"url":"https://ui.adsabs.harvard.edu/abs/2012arXiv1212.0839E/abstract","timestamp":"2024-11-13T16:52:53Z","content_type":"text/html","content_length":"41006","record_id":"<urn:uuid:a7ff9614-f246-4cb7-a103-3a32e90cfd3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00729.warc.gz"}
Problem A In a rainforest there are $n$ treehouses high in the forest canopy on different trees (numbered from $1$ to $n$). The $i$-th tree’s location is at $(x_ i, y_ i)$. The first $e$ of them in the list are close enough to neighboring open land around the rainforest so that transportation between all of them is easy by foot. Some treehouses may already be connected by direct straight cables through the air that can allow transport between them. Residents want easy transportation between all the treehouses and the open land, by some combination of walking (between those near the open land), and using one or more cables between treehouses. This may require the addition of more cables. Since the cables are expensive, they would like to add the smallest possible length of cable. The height of a cable up two trees can be set so cables can criss-cross other cables, and not allow any snags or crashes. It is not safe to try to switch between two criss-crossed cables in mid-air! The input will start with the three integers $n$ ($1 \le n \le 1\, 000$), $e$ ($1 \le e \le n$), and $p$ ($0 \le p \le 1\, 000$), where $p$ is the number of cables in place already. Next come $n$ lines, each with two real numbers $x$ and $y$ ($|x|, |y| \le 10\, 000$) giving the location of a treehouse. The $i$-th coordinate pair is for the treehouse with ID $i$. All coordinate pairs are unique. Real numbers are stated as integers or include one digit after a decimal point. Next come $p$ lines, each with two integers $a$, $b$, where $1 \le a < b \le n$, giving the two treehouse ids of an existing cable between their trees. No ID pair will be repeated. The output is the minimum total length of new cable that achieves the connection goal, expressed with absolute or relative error less than $0.001$. Sample Input 1 Sample Output 1 0.0 0.0 4.236067 2.0 0.0 1.0 2.0 Sample Input 2 Sample Output 2 0.0 0.0 0.5 2.0 2.000000 2.5 2.0 Sample Input 3 Sample Output 3 0.0 0.0 2.236067 2.0 0.0 1.0 2.0
{"url":"https://mcpc18.kattis.com/contests/mcpc18/problems/treehouses","timestamp":"2024-11-06T18:36:44Z","content_type":"text/html","content_length":"28806","record_id":"<urn:uuid:8a9d7ea4-4966-45b5-9606-cbc03a4e564d>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00115.warc.gz"}
Translations & Reflections Translations & Reflections Using the graph above, translate the triangle using the following rule: (x, y) --> (x+3, y-5). For your answer just type the new points. The diagram above shows the image of ABCD after a translation of (x, y) --> (x-2, y+1). What are the coordinated of the pre-image of ABCD ? No graph for this question What are the coordinates of the image of C(10,-3) after the translation (x, y) --> (x+7, y-4). No graph What are the coordinates of the pre-image of C'(7,4) after the translation (x, y) --> (x+3, y-2). Using the graph above, what are the coordinates of ABCD after a reflection in the y-axis? No graph What are the coordinates of A(-9,1) B(-5,-5) after a reflection in the x-axis? No graph What are the coordinate of A(1, 3) B(5, -6) after a reflection in the line y=x ? Use the tool above to figure out how many lines of symmetry a rhombus has. Then check your answer Below you will investigate lines of symmetry in 14 different polygons. Type your answer in the green box to check if you are correct. To move to the next polygon, move the black dot over.
{"url":"https://stage.geogebra.org/m/wXHwzzcW","timestamp":"2024-11-12T05:23:17Z","content_type":"text/html","content_length":"156836","record_id":"<urn:uuid:30ab6563-914d-4611-8c3f-69a617587f5a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00661.warc.gz"}
NCERT ebook pdf For Class 12 Mathematics - Free PDF Download - Free PDF Download | SaralStudy NCERT ebook pdf For Class 12 Mathematics - Free PDF Download Saralstudy.com providing you chapter-wise free ebook PDF download for class 12 Mathematics. The solutions are provided by the expert teacher following NCERT/CBSE guidelines. Read and prepare for your upcoming exams to get high score. NCERT Book for Class 12 Mathematics in English PDF
{"url":"https://www.saralstudy.com/ncert-ebook-pdf-for-class-12-mathematics","timestamp":"2024-11-01T22:54:50Z","content_type":"application/xhtml+xml","content_length":"31150","record_id":"<urn:uuid:03916fbd-2376-4e47-9c9f-0b60eebe08d8>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00075.warc.gz"}
Maximum and Minimum Values Another powerful usage of differential calculus is optimization, for example, finding the number of products needed to be sold at a store to maximize its monthly revenue or to minimize its monthly costs. In this section, we will link the application of differential calculus with finding the local extrema, the maxima and minima, of a function. critical number: a number$\ c$ in the domain of a function$\ f$ such that:
{"url":"https://www.studypug.com/ap-calculus-bc/critical-number-and-maximum-and-minimum-values","timestamp":"2024-11-06T15:00:20Z","content_type":"text/html","content_length":"420690","record_id":"<urn:uuid:1e30309b-7b5f-4cc3-9097-dcac8b1973d8>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00809.warc.gz"}
How do you label a line? How do you label a line? Lines are traditionally labeled by expressing two points through which the line passes. Lines may also be labeled with a single scripted letter, and referred to by that name. When drawing rectangle ABCD: the letters must follow, in order, around the outside of the figure. When two lines intersect what angles are formed? When two lines intersect they form two pairs of opposite angles, A + C and B + D. Another word for opposite angles are vertical angles. Vertical angles are always congruent, which means that they are equal. Adjacent angles are angles that come out of the same vertex. What does it mean to name a line? Naming a Line A line is identified when you name two points on the line and draw a line over the letters. A line is a set of continuous points that extend indefinitely in either of its direction. Lines are also named with lowercase letters or a single lower case letter. Are two angles with measures that have a sum of 90? Complementary angles are pair angles with the sum of 90 degrees. What is the 4 types of triangles? Triangle Types and Classifications: Isosceles, Equilateral, Obtuse, Acute and Scalene. What is the symbol of Triangle? A Simple Triangle For starters, it is connected to the number three which, in many cultures, represents balance and true wisdom. It can symbolize time, and getting a tattoo of a triangle means that you are the product of your past, present, and future. A triangle can represent three traits or values you love. When two lines intersect at right angles they are said to be? In elementary geometry, the property of being perpendicular (perpendicularity) is the relationship between two lines which meet at a right angle (90 degrees). The property extends to other related geometric objects. A line is said to be perpendicular to another line if the two lines intersect at a right angle. How do you name a 3 point line? These three points all lie on the same line. This line could be called ‘Line AB’, ‘Line BA’, ‘Line AC’, ‘Line CA’, ‘Line BC’, or ‘LineCB’ . Do 2 lines intersect? Lines are said to intersect each other if they cut each other at a point. By Euclid’s lemma two lines can have at most 1 1 1 point of intersection. In the figure below lines L 1 L1 L1 and L 2 L2 L2 intersect each other at point P . P. What is another name for Triangle? In this page you can discover 20 synonyms, antonyms, idiomatic expressions, and related words for triangle, like: pyriform, pyramid, deltoid, triangular, trigon, set-square, deltoidal, trigonal, triangulum, trilateral and pyramidal. Do parallel lines form right angles? The lines are always the same distance apart. No matter how far we extend them, they will never intersect. Figure 1 therefore shows parallel lines. If the two lines are too slanted, as in Figure 2, they cannot form right angles. What is another name for Line AB? Note that — AB can also be named — BA . endpoint A and all points on ⃖⃗AB that lie on the same side of A as B. Note that ⃗AB and ⃗BA are different rays. A and B, then ⃗CA and ⃗CB are opposite rays. How are planes named? A plane is a flat surface that extends infinitely in all directions. A plane can be named by a capital letter, often written in script, or by the letters naming three non-collinear points in the plane. For example, the plane in the diagram below could be named either plane ABC or plane P . What is the intersection of two lines? When two or more lines cross each other in a plane, they are called intersecting lines. The intersecting lines share a common point, which exists on all the intersecting lines, and is called the point of intersection. How do you know if 2 lines intersect? If two lines have unequal slope they will intersect in a a point. If two lines have equal slope, they are either disjointly parallel and never intersect, or they are the same line. Can two lines cross at two points? Any two lines can intersect at only a single point. Lines that are on the same plane are ‘coplanar’. Do two vectors intersect? For two lines to intersect, each of the three components of the two position vectors at the point of intersection must be equal. Therefore we can set up 3 simultaneous equations, one for each component. and we solve these in the usual way to find our s and t, showing our working. Is there a symbol for intersecting lines? When two or more lines cut or intersect each other at a single point, they are called intersecting lines. We do not have any symbol to represent intersecting lines. In geometry, there are three types of lines such as: Parallel lines. When two lines intersect How many angles are formed? four angles When two lines meet at 90 degrees they are called? Parallel lines never intersect. Perpendicular lines are lines that intersect at a right (90 degrees) angle. How many point do you need to name a line? A line can be identified by naming two points that lie on it and draw a line ( ) over the letters which indicate the name of each point. It can also be named by a single lowercase letter.
{"url":"https://www.cagednomoremovie.com/how-do-you-label-a-line/","timestamp":"2024-11-03T22:00:53Z","content_type":"text/html","content_length":"48928","record_id":"<urn:uuid:ae504d39-e4ad-42d5-83e1-024b972aec5a>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00358.warc.gz"}
Integral of Sec x - Formula, Proof Trigonometric functions perform a critical role in various math concepts and applications. One of the fundamental trigonometric functions is the secant function, that is the reciprocal of the cosine function. The secant function is broadly used in arithmetic, physics, engineering, and several other domains. It is an essential tool for analyzing and working out challenges linked to oscillations, waves, and periodic functions. The integral of sec x is an important theory in calculus, a branch of mathematics which deals with the study of rates of change and accumulation. It is utilized to evaluate the area under the curve of the secant function, that is a continuous function applied to depict the mechanism of waves and oscillations. Additionally, the integral of sec x is used to work out a broad array of problems in calculus, for example, figuring out the antiderivative of the secant function and evaluating definite integrals that involve the secant function. In this article, we will explore the integral of sec x in detail. We will discuss its properties, formula, and a proof of its derivation. We will further observer handful of instances of how to utilize the integral of sec x in multiple fields, consisting of engineering, physics, and mathematics. By getting a grasp of the integral of sec x and its uses, learners and professionals in these fields can gain a detailed understanding of the complex phenomena they study and evolve better problem-solving abilities. Significance of the Integral of Sec x The integral of sec x is an important math concept which has many uses in calculus and physics. It is applied to figure out the area under the curve of the secant function, which is a continuous function that is broadly applied in math and physics. In calculus, the integral of sec x is utilized to calculate a wide array of challenges, involving figuring out the antiderivative of the secant function and assessing definite integrals which consist of the secant function. It is also applied to figure out the derivatives of functions which involve the secant function, such as the inverse hyperbolic secant function. In physics, the secant function is used to model a wide spectrum of physical phenomena, consisting of the motion of things in round orbits and the working of waves. The integral of sec x is utilized to calculate the possible energy of objects in circular orbits and to assess the mechanism of waves that consist if variations in amplitude or frequency. Formula for the Integral of Sec x The formula for the integral of sec x is: ∫ sec x dx = ln |sec x + tan x| + C At which point C is the constant of integration. Proof of the Integral of Sec x To prove the formula for the integral of sec x, we will use a approach known as integration by substitution. Let's begin by describing the integral in terms of the cosine function: ∫ sec x dx = ∫ (cos x / sin x) dx Later, we will make the substitution u = sin x, that implies that du/dx = cos x. Applying the chain rule, we can express dx in terms of du: dx = du / cos x Substituting these expressions into the integral, we obtain: ∫ sec x dx = ∫ (1/u) (du / cos x) = ∫ (1/u) sec x du Later, we can apply the formula for the integral of u^n du, which is (u^(n+1))/(n+1) + C, to integrate (1/u) sec x du: ∫ (1/u) sec x du = ln |u| sec x + C Substituting back in for u = sin x, we get: ∫ sec x dx = ln |sin x| sec x + C Still, this formula is not quite in similar form as the initial formula we specified. To get to the desired form, we will use a trigonometric identity that connects sec x and tan x: sec x + tan x = (1 / cos x) + (sin x / cos x) = (1 + sin x) / cos x = csc x / (csc x - cot x) Substituting this identity into the formula we derived above, we get: ∫ sec x dx = ln |csc x / (csc x - cot x)| + C Finally, we can utilize another trigonometric identity to simplify the expression: ln |csc x / (csc x - cot x)| = ln |csc x + cot x| Thus, the final formula for the integral of sec x is: ∫ sec x dx = ln |sec x + tan x| + C In conclusion,the integral of sec x is a crucial concept in calculus and physics. It is applied to evaluate the area under the curve of the secant function and is essential for figuring out a broad range of challenges in physics and calculus. The formula for the integral of sec x is ln |sec x + tan x| + C, and its derivation includes the utilize of integration by substitution and trigonometric Understanding the properties of the integral of sec x and how to use it to figure out problems is important for learners and professionals in domains for instance, engineering, physics, and math. By conquering the integral of sec x, everyone can utilize it to solve challenges and gain detailed insights into the complex workings of the world surrounding us. If you want assistance understanding the integral of sec x or any other mathematical concept, contemplate call us at Grade Potential Tutoring. Our experienced tutors are available online or face-to-face to provide customized and effective tutoring services to guide you be successful. Contact us today to schedule a tutoring session and take your math abilities to the next level.
{"url":"https://www.denverinhometutors.com/blog/integral-of-sec-x-formula-proof","timestamp":"2024-11-06T08:27:40Z","content_type":"text/html","content_length":"75533","record_id":"<urn:uuid:f4db8e69-77ce-46e8-96c5-7506415b2803>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00656.warc.gz"}
98. Validate Binary Search Tree Photo by Isaac Mitchell / Unsplash Given the root of a binary tree, determine if it is a valid binary search tree (BST). A valid BST is defined as follows: • The left subtree of a node contains only nodes with keys less than the node's key. • The right subtree of a node contains only nodes with keys greater than the node's key. • Both the left and right subtrees must also be binary search trees. Example 1: Input: root = [2,1,3] Output: true Example 2: Input: root = [5,1,4,null,null,3,6] Output: false Explanation: The root node's value is 5 but its right child's value is 4. • The number of nodes in the tree is in the range [1, 10^4]. • -2^31 <= Node.val <= 2^31 - 1 On the first sight, the problem is trivial. Let's traverse the tree and check at each step if node.right.val > node.val and node.left.val < node.val. This approach would even work for some trees The problem is this approach will not work for all cases. Not only the right child should be larger than the node but all the elements in the right subtree. Here is an example : That means one should keep both upper and lower limits for each node while traversing the tree, and compare the node value not with children values but with these limits. # Definition for a binary tree node. # class TreeNode: # def __init__(self, val=0, left=None, right=None): # self.val = val # self.left = left # self.right = right class Solution: def isValidBST(self, root: Optional[TreeNode]) -> bool: def recurse(node, low, high): if not node: return True if not (low < node.val < high): return False return recurse(node.left, low, node.val) and recurse(node.right, node.val, high) return recurse(root, -inf, inf) We need the and at the end of the return because we're doing "is left sub-tree AND right sub-tree valid BSTs?"
{"url":"https://skerritt.blog/98-validate-binary-search-tree/","timestamp":"2024-11-03T15:56:35Z","content_type":"text/html","content_length":"34069","record_id":"<urn:uuid:ac24497a-879c-4256-bd90-5564eae0618c>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00716.warc.gz"}
Kilometers to Picometers Converter Enter Kilometers β Switch toPicometers to Kilometers Converter How to use this Kilometers to Picometers Converter π € Follow these steps to convert given length from the units of Kilometers to the units of Picometers. 1. Enter the input Kilometers value in the text field. 2. The calculator converts the given Kilometers into Picometers in realtime β using the conversion formula, and displays under the Picometers label. You do not need to click any button. If the input changes, Picometers value is re-calculated, just like that. 3. You may copy the resulting Picometers value using the Copy button. 4. To view a detailed step by step calculation of the conversion, click on the View Calculation button. 5. You can also reset the input by clicking on button present below the input field. What is the Formula to convert Kilometers to Picometers? The formula to convert given length from Kilometers to Picometers is: Length[(Picometers)] = Length[(Kilometers)] × 1e+15 Substitute the given value of length in kilometers, i.e., Length[(Kilometers)] in the above formula and simplify the right-hand side value. The resulting value is the length in picometers, i.e., Calculation will be done after you enter a valid input. Consider that a high-end electric car has a maximum range of 400 kilometers on a single charge. Convert this range from kilometers to Picometers. The length in kilometers is: Length[(Kilometers)] = 400 The formula to convert length from kilometers to picometers is: Length[(Picometers)] = Length[(Kilometers)] × 1e+15 Substitute given weight Length[(Kilometers)] = 400 in the above formula. Length[(Picometers)] = 400 × 1e+15 Length[(Picometers)] = 400000000000000000 Final Answer: Therefore, 400 km is equal to 400000000000000000 pm. The length is 400000000000000000 pm, in picometers. Consider that a private helicopter has a flight range of 150 kilometers. Convert this range from kilometers to Picometers. The length in kilometers is: Length[(Kilometers)] = 150 The formula to convert length from kilometers to picometers is: Length[(Picometers)] = Length[(Kilometers)] × 1e+15 Substitute given weight Length[(Kilometers)] = 150 in the above formula. Length[(Picometers)] = 150 × 1e+15 Length[(Picometers)] = 150000000000000000 Final Answer: Therefore, 150 km is equal to 150000000000000000 pm. The length is 150000000000000000 pm, in picometers. Kilometers to Picometers Conversion Table The following table gives some of the most used conversions from Kilometers to Picometers. Kilometers (km) Picometers (pm) 0 km 0 pm 1 km 1000000000000000 pm 2 km 2000000000000000 pm 3 km 3000000000000000 pm 4 km 4000000000000000 pm 5 km 5000000000000000 pm 6 km 6000000000000000 pm 7 km 7000000000000000 pm 8 km 8000000000000000 pm 9 km 9000000000000000 pm 10 km 10000000000000000 pm 20 km 20000000000000000 pm 50 km 50000000000000000 pm 100 km 100000000000000000 pm 1000 km 1000000000000000000 pm 10000 km 10000000000000000000 pm 100000 km 100000000000000000000 pm A kilometer (km) is a unit of length in the International System of Units (SI), equal to 0.6214 miles. One kilometer is one thousand meters. The prefix "kilo-" means one thousand. A kilometer is defined by 1000 times the distance light travels in 1/299,792,458 seconds. This definition may change, but a kilometer will always be one thousand meters. Kilometers are used to measure distances on land in most countries. However, the United States and the United Kingdom still often use miles. The UK has adopted the metric system, but miles are still used on road signs. A picometer (pm) is a unit of length in the International System of Units (SI). One picometer is equivalent to 0.000000000001 meters or 1 Γ 10^(-12) meters. The picometer is defined as one trillionth of a meter, making it a very small unit of measurement used for measuring atomic and molecular distances. Picometers are used in fields such as chemistry, materials science, and nanotechnology to describe the sizes of atoms, molecules, and other microscopic structures. Frequently Asked Questions (FAQs) 1. What is the formula for converting Kilometers to Picometers in Length? The formula to convert Kilometers to Picometers in Length is: Kilometers * 1e+15 2. Is this tool free or paid? This Length conversion tool, which converts Kilometers to Picometers, is completely free to use. 3. How do I convert Length from Kilometers to Picometers? To convert Length from Kilometers to Picometers, you can use the following formula: Kilometers * 1e+15 For example, if you have a value in Kilometers, you substitute that value in place of Kilometers in the above formula, and solve the mathematical expression to get the equivalent value in Picometers.
{"url":"https://convertonline.org/unit/?convert=kilometers-picometers","timestamp":"2024-11-03T09:45:01Z","content_type":"text/html","content_length":"90816","record_id":"<urn:uuid:ab87cf7c-c874-42c1-abd6-197270d50056>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00312.warc.gz"}
Getting started with Getting started with spectre C.E. Simpkins, S. Hanss, M. Hesselbarth, M.C. Spangenberg, J. Salecker and K. Wiegand General background spectre is an R package which easily implements an advanced optimization algorithm capable of predicting regional community composition at fine spatial resolutions using only sparse biological and environmental data. The algorithm underlying spectre utilizes estimates of \(\alpha\)-diversity (i.e. species richness) and \(\beta\)-diversity (i.e. species dissimilarity) to come up with community composition estimates for all patches within a target region. The method used in spectre is an adapted version of that presented by Mokany et al. (2011). Install the release version from CRAN: To install the developmental version of spectre, use: Use case example This example acts as a minimal working case and uses simple “simulated” data matching the structure of that needed by the relevant functions. This simple example is used to minimize the time and data storage requirements needed to run this vignette. Generating input data The first step in using the spectre package is to gather estimates for \(\alpha\)-biodiversity and \(\beta\)-biodiversity (in the form of Bray-Curtis dissimilarity) for the area of interest at the desired outcome resolution. For this example, we created a random species composition (15 sites, gamma diversity = 20) and calculated i) an \(\alpha\)-diversity estimate and ii) a Bray-Curtis dissimilarity estimate in the output format of the gdm package (Fitzpatrick et al. 2021). Both are used as input for the spectre algorithm, please see R/generate_minimal_example_data.R for details. Running the optimization We use the input estimates (\(\alpha\)-diversity and Bray-Curtis dissimilarity) to generate a commonness matrix (i.e. species in common between each site by site pair) using the generate_commonness_matrix_from_gdm() function. This commonness matrix acts as the objective function (i.e. target) for the optimization algorithm. # Calculate objective_matrix from (modelled) alpha-diversity and Bray-Curtis dissimilarity objective_matrix <- spectre::generate_commonness_matrix_from_gdm( gdm_predictions = beta_list, alpha_list = alpha_list) Once the input estimates and objective function have been obtained the optimization algorithm is straightforward in spectre, requiring only one function call. Note though that the run time for this function may be high especially for large landscapes with high species diversity and if max_iterations is high. Result analysis spectre incorporates functions to allow for easy calculation of certain error metrics, namely the mean absolute commonness error (\(MAE_c\)) and the relative commonness error (\(\% RCE\)). \(MAE_c\) is the mean of the absolute difference between the solved solution matrix and the objective function, whereas \(\% RCE\) is the \(MAE_c\) over the absolute commonness from the objective function represented as a percentage. The objective function had a mean commonness of 1.75. The mean absolute error between the objective function and the solved solution matrix was 0.07. The solution matrix had an relative commonness error (RCE) of 3.8%. These results can be visualized in two ways using functions built into the package. First, one can plot the error of the solved solution matrix over time. Second, the commonness error between the final solved solution matrix and the objective function for each patch can be plotted # With an increasing number of iterations, the solution matrix improved spectre::plot_error(x = res) Fitzpatrick, M.C., Mokany, K., Manion, G., Lisk, M., Ferrier, S., Nieto-Lugilde, D., 2021. gdm: Generalized Dissimilarity Modeling. R package version 1.4.2.2. https://CRAN.R-project.org/package=gdm Mokany, K., Harwood, T.D., Overton, J.M.C., Barker, G.M., Ferrier, S., 2011. Combining α - and β -diversity models to fill gaps in our knowledge of biodiversity: Filling gaps in biodiversity knowledge. Ecology Letters 14, 1043–1051. https://doi.org/10.1111/j.1461-0248.2011.01675.x
{"url":"https://cloud.r-project.org/web/packages/spectre/vignettes/getting_started_with_spectre.html","timestamp":"2024-11-02T00:07:10Z","content_type":"text/html","content_length":"28651","record_id":"<urn:uuid:c8d3e572-9db2-4a2d-a7f2-efbb9bba59e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00433.warc.gz"}
Efficiency and Markets Efficiency and Markets Adam Smith observed that people pursuing their own interests could, if guided by a competitive market, serve the public interest. The purpose of this section is to show that Smith was right--the interaction of individuals in competitive markets results in economic efficiency. Other reading units have developed the analytical tools needed to show that a simple model of an exchange economy can be economically efficient. In this model, all transactors must be price takers. This assumption means that supply and demand curves can be used to represent all markets. The supply and demand curves of a representative market, the bread market, are shown in the center of the graphs below. The graph on the left shows the viewpoint of the typical buyer, Jane Doe, who is one buyer of a great many buyers. On the right is a graph showing the viewpoint of the typical producer, Martha Smith. The price that the market produces will be P1, the market-clearing price. Jane Doe's demand curve for bread is downward-sloping. At a high price, she buys little, and at a low price she buys much. The reason for this behavior is that, as she gets more bread, the value of another loaf declines. In fact, one can go further and argue that her demand curve shows the marginal benefit of bread to her. If she is willing to buy five loaves for $.50 a loaf, but not the sixth loaf at this price, then the value of the sixth loaf, or its marginal benefit, must be less than $.50. If the price must drop to $.45 per loaf before she buys the sixth loaf, her behavior reveals that the marginal benefit of the sixth loaf is $.45. Her willingness to pay, which is revealed in her demand curve, measures her marginal benefit from bread. Jane Doe, as a price taker, can buy all she wants at the market price. To her, the market price appears to be a supply curve. It is also her marginal cost of buying bread because it shows how much an additional loaf will cost. Assuming that she maximizes utility, she will buy until marginal benefit equals marginal cost, or to the point at which the price line crosses her demand curve. At price P1 in the graph, she buys q1 loaves. Jane Doe is only one of a great many other buyers. To find out how much all buyers will take at each price, one needs to add up how much each individual buys at each price. (This is not a problem in an abstract model, but is clearly impossible in the real world.) The summation of individual demand curves yields the market demand curve. Because it is found by adding marginal benefit curves, the market demand curve shows the marginal benefit to buyers. This identity of demand curves and marginal benefit curves is vital for considering the efficiency of a model of a market economy. Marginal costs and benefits look different from the point of view of Martha Smith, a typical seller in this market. To her, the fixed price shows the marginal benefits of a transaction. The price tells her how much more revenue she gets from selling another loaf of bread. Her marginal cost curve reflects the value of the resources she must add to make another loaf of bread. These costs depend on two things. First, they depend on how much extra resources are needed to produce another loaf of bread, or the marginal productivity of resources. This productivity is determined by the technology of making bread. Second, they depend on the prices of the needed extra resources, which will depend on the alternative uses of these resources. If the resources needed to make bread are high priced, these resources must, in our simple model, have alternative uses that are highly valued. Marginal cost curves can slope upward or downward (recall returns to scale), but only an upward-sloping curve will give us sellers who are price takers. Assuming that Martha Smith wants to maximize profits, she will find that level of output for which marginal revenue, which is price, equals marginal cost. This profit-maximizing output can be found on the graph by finding the intersection of the price line and the seller's marginal cost curve. For any price, the marginal cost curve tells how many loaves the profit-maximizing seller will produce. Because a supply curve also tells how much a seller will sell at given prices, the marginal cost curve must be a supply curve. The individual seller is only one of a great many sellers. The market supply curve is obtained by seeing what each seller does at a price and then adding up all the outputs at that price. (Again, this presents no problems in an abstract model, but is clearly impossible in the real world.) The result will be a supply curve that can also be interpreted as a marginal cost curve.^1 The point of the previous discussion (you are normal if you are asking why bother with all this abstract reasoning) is that the demand curve represents a marginal-benefit-to-buyers curve and the supply curve represents a marginal-cost-to-sellers curve. However, we are not done yet. In the model of the exchange economy one can (given a few hidden assumptions) also show that the marginal cost to the sellers is the same as the marginal cost to the buyers in the goods market. Marginal costs to sellers are determined by what the firm must pay for the resources necessary to produce one more unit of output. To produce one more unit of commodity A, a seller must bid resources away from another use. If the resources were producing output worth an added six dollars in the alternative use, the seller could not bid them away for a mere five dollars. If those resources add six dollars worth of output somewhere else, the seller of product A will have to pay at least six dollars for them. But he will not have to pay very much more for them. If we assume perfect information and the absence of transactions costs, any amount (however small) over six dollars will cause resources to move to product A. The demand curve represents marginal benefit to buyers, and, because sellers take the price from it, to sellers as well. The supply curve represents marginal cost not only to sellers, but also to buyers. It tells both buyers and sellers what the value of the resources needed to produce the product is. The maximization principle says that to maximize net benefit, marginal benefit must equal marginal cost. As a result, the optimal output, the one that maximizes value to society, will be the output at which supply and demand intersect--the amount that the market provides. This is a remarkable conclusion: Adam Smith was correct in saying that pursuit of self-interest could lead to a socially desirable result. A look at two other possibilities may help the reader to grasp the meaning of this conclusion. ^1A technical note: to get a market supply curve by summing up individual supply curves (or a market demand curve by summing up individual demand curves), we need to assume that a change in price or quantity will not affect any of the factors held fixed. Thus, if higher prices and quantities for output bid up the price of resources, the individual supply curves would change, and the assumption that other factors are held fixed is violated. In this case, a simple addition would not be correct. This qualification is ignored in the following discussion. Copyright Robert Schenk
{"url":"http://ingrimayne.com/econ/comp_eff/EfficiencyMark.html","timestamp":"2024-11-05T12:37:53Z","content_type":"text/html","content_length":"12567","record_id":"<urn:uuid:8c9b9abb-9bad-4983-8edf-56e97e1f3bb5>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00655.warc.gz"}
H3881 לוי לויּי - Strong's Hebrew Lexicon לוי לויּי lêvı̂yı̂y lêvı̂y lay-vee-ee', lay-vee' Patronymic from H3878; a Leviite or descendant of Levi KJV Usage: Levite. Brown-Driver-Briggs' Hebrew Definitions לוי לויּי Levite = see Levi "joined to" 1. the descendants of Levi, the 3rd son of Jacob by Leah a. the tribe descended from Levi specially set aside by God for His service Origin: patronymically from H3878 TWOT: None Parts of Speech: Adjective Levite = see Levi "joined to" 1) the descendants of Levi, the 3rd son of Jacob by Leah 1a) the tribe descended from Levi specially set aside by God for His service View how H3881 לוי לויּי is used in the Bible
{"url":"https://studybible.info/strongs/H3881","timestamp":"2024-11-11T04:49:37Z","content_type":"text/html","content_length":"24119","record_id":"<urn:uuid:dad1f563-6d67-40bc-944e-d3739b12e960>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00477.warc.gz"}
Exterior Angle Sum of a Triangle Before you begin: Feel free to move the triangle's vertices (corners) wherever you'd like. You can also control the size of the blue exterior angle by using the blue slider. Interact with the applet for a minute or two. Then answer the question that follows. The 3 COLORED ANGLES are said to be the EXTERIOR ANGLES of this triangle. What can you conclude about the measures of the exterior angles of ANY TRIANGLE?
{"url":"https://beta.geogebra.org/m/kyxtd4ep","timestamp":"2024-11-05T03:56:44Z","content_type":"text/html","content_length":"93207","record_id":"<urn:uuid:1a595fda-e869-48fd-9247-6a0ad90cd0ad>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00796.warc.gz"}
Stirling's Formula: An Approximation for Factorials Author: Neo Huang Review By: Nancy Deng LAST UPDATED: 2024-10-03 13:53:00 TOTAL USAGE: 4485 TAG: Approximations Factorials Mathematics Unit Converter ▲ Unit Converter ▼ Powered by @Calculator Ultra Find More Calculator☟ Stirling's formula is a powerful tool in mathematics and statistics, providing a convenient approximation for the factorial of large numbers. It is named after the Scottish mathematician James Stirling, who introduced this approximation in the early 18th century. Historical Background The factorial function, denoted as \(n!\), is the product of all positive integers up to \(n\). For large values of \(n\), calculating \(n!\) directly can be impractical due to the rapid growth of the factorial function. Stirling's formula offers a solution by approximating \(n!\) with a formula that is much easier to compute for large numbers. Calculation Formula Stirling's approximation formula is expressed as: \[ n! \approx \sqrt{2\pi n} \left(\frac{n}{e}\right)^n \] • \(n\) is the positive integer for which the factorial is being approximated, • \(e\) is the base of the natural logarithm, approximately equal to 2.71828. Example Calculation To approximate the factorial of 10 using Stirling's formula: \[ 10! \approx \sqrt{2\pi \times 10} \left(\frac{10}{e}\right)^{10} \approx 3628800 \] The actual value of \(10!\) is 3,628,800, demonstrating the accuracy of Stirling's formula for even relatively small values of \(n\). Importance and Usage Scenarios Stirling's formula is particularly useful in statistics, combinatorics, and thermodynamics, where factorials appear frequently but are cumbersome to compute directly for large numbers. It is also used in algorithms and computational methods that require factorial calculations. Common FAQs 1. How accurate is Stirling's approximation? □ The accuracy improves with larger values of \(n\). For small values, the approximation may not be very close, but it rapidly converges to the actual value as \(n\) increases. 2. Can Stirling's formula be used for small values of \(n\)? □ While it can be used, direct calculation or lookup tables are more accurate for small \(n\). Stirling's formula shines for large \(n\) where direct computation is infeasible. 3. Are there corrections to improve the accuracy of Stirling's formula? □ Yes, there are refined versions of the formula that include additional terms to improve accuracy for smaller values of \(n\). Stirling's formula bridges practical computation and theoretical analysis, enabling efficient approximations of factorial values critical in various scientific and engineering fields.
{"url":"https://www.calculatorultra.com/en/tool/stirlings-formula.html","timestamp":"2024-11-04T02:20:22Z","content_type":"text/html","content_length":"47230","record_id":"<urn:uuid:12a5fe31-fcdd-471b-a69d-ddd8662ec2bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00155.warc.gz"}
How does FSI help in predicting flutter in structures? | SolidWorks Assignment Help How does FSI help in predicting flutter in structures? This is a review of the data jig built backlinks by a fellow Jig reviewer. I saw an article on FSI about how something that refers to a flutter or wall will fail. I read through the pages and I thought you might be interested to know that I noticed a couple of things. I cannot really begin to guess where the issues arise. The final book on FSI and the issues TFI provides Click This Link A Simple Guide to Flutter Our team uses FSI to predict the ability and properties of objects, and as these define properties these conditions could be extended over the next fifty years or so. When you look up two properties of a property, FSI will produce your link as if those properties were the only ones. Thus, let’s have a look at what these properties look like. The key properties of Flutter are: Material property, Metal property, Rigidity property, and Validity property; however, all these properties need to be true to the Jagged object reference specification. Some properties are not yet known to exist for every Jagged object itself, such as Materials properties, Metal properties, and Validity properties; however, they are all just a summary of a single property of the Jagged object that is associated to a given reference. This is just like knowing a common way of knowing is to work by looking at two properties together to see all the values. And that is exactly why I noticed the comments “whereas what’s better is just a summary.” I cannot take it for granted that what you saw was similar! The second property, Metal property, is valid for all a Widget or Jagged object, regardless of the type you just created or the properties you provided (All those are equal to that property are just the default set of the Renders collection in Jagged and the objects you created) Below is the link you posted to Flutter: https://www.flutterjs.com/docs/api.html So we go back in time to Jagged Jagged. This is where the flutter APIs are implemented. There are times when there is often a big difference between a Widget (by a wintype property) and a Jagged object (by a v2 endpoint). Before we go any further, we’re going to look at the Jagged and v2 architecture. We saw a great article about this topic that began by saying which values the flutter object belongs to. The next point is how we can define these properties. Homework Pay Services We built a Jagged object here in the following way: class Widget(Jagged.TextWrapper): class TextWrapper(Wandroid): static public struct StringWrapper: static Wandroid with Jagged.TextWrapper): Wandroid with List()How does FSI help in predicting flutter in structures? I recently bought a 5-inch flat finger ring and realized I was looking for a good way to predict flutter of a structure. The only time I found flutter was in a building. I read several articles on the topic suggested I could predict flutter using C-RAD computed, but it was still an idea. So, I decided on a simple idea where I could predict flutter with FSI (that explains how that works) Solve: First you’ll need a very basic equation: A point W with a height h is given as 2 W. If h is unknown, find the x-axis by factoring out Math.Sqrt(h*4), for h = 1, Numerical analysis of Flutter is done. It turns out that the order in which you’re going to start FSI is irrelevant! If you want to find out the x-plane of the wall, you must place at least one point L inside h Next, calculate the height or geometry along the X-axis of the screen. So instead of measuring on the diagonal of the wall, you should calculate that one from below the diagonal – I’m trying to pull together the geometry of the wall so that it’s at least height along the diagonal of the screen, Click This Link we can place on the screen. We do have a look at a few of the equations below, but have a feeling. You can also try setting the x-axis to one of the points L and R where h corresponds. With this simple example I should be able to calculate flutter from the entire screen in the equation below Given that we have the parameters h and N we have the location of point L in FSI after giving h. The geometry just gets super bright in some regions. Other things in the equation are as follows: Eradial displacement (rad/y km) = 3.1°/W/(GM) There’s no really good relationship between the measurement radius and function values. Now, I guess we can use click here to read to calculate flutter using geometry: Liang-Rhine (AISD) = Numerical, how to calculate flutter(x-axis): Figure out how to do FSI / geometry + the definition of Liang-Rhine Figure out how to do FSI / geometry – the definition of Frank useful source Figure out how to figure out flutter from the height=Density = 2 / (x-Density) Figure out how to figure out flutter from the geometry=Density = 2 / (P+L-x+L/N) So we’ll see that the equation is: H = 3.1 / (N / W) Again we use theHow does FSI help in predicting flutter in structures? FSI helps infer how a particular structure is going to look as you build it. However, I have yet to write a book describing how FSI predicts a structure. Some structures help infer how the structure looks. Take Your Course For my case, I have a wall-building structure that has only a single fixed-size brick about 8″ deep. Since the structure is made of bricks of some types and the bricks do not have special design properties, determining how it looks depends upon finding out something that was not determined well before. You can learn by taking a look at some images related to this topic and analyzing how FSI predicts fissts in the vicinity of the structure. For context, below are some images I have of A6 and C2 structures. Other images are based on some images of D3, D4, C5 structures. To the most common example, on the left panel of this figure (left) FSI takes the following action: (B) A6 enters a position at which it pay someone to take solidworks homework reach the exterior of the structure. Next we will need to determine if the construction area at the top of a brick is affected by relative diameters of the brick’s exterior or surface. If the brick is too deep, the exterior area under it is underestimated thereby reducing the impact of the structure on the interior. To determine this, we consider a free-flowing brick that is a piece of solid build-up material with a base, separated 1.5″ from the brick to the depth of the brick. The interior wall surface should be such that the exterior area does not exceed 7.5″ or less, indicating that the surface may be covered by a higher-density area than its exterior. We can determine this by the equation: If the exterior wall surface was a smooth piece of building material, the figure in the left panel demonstrates that we can calculate a surface area at the exterior of the building with 0.03 to 0.8 of a 2.0-inch width. Based on these surface areas, we can determine the thickness of the brick’s interior. The figure in the right panel of this figure demonstrates how the diameter of the brick’s interior area varies during construction; once this is examined, the surface area and thickness of the brick must be calculated. Once the construction area of the brick is determined, the three equations describing the brick’s interior run from: (B) D = (0.03 − 0. Pay Someone To Do My Assignment 08)2.5/0.3 = 0.05 = 0.6° (D) E = (0.23 + 0.10)2.0/0.9 = 0.3° Knowing the surface area of a brick, we can get a rough estimate of the thickness of the brick’s interior. There is a
{"url":"https://solidworksaid.com/how-does-fsi-help-in-predicting-flutter-in-structures-18348","timestamp":"2024-11-03T09:25:31Z","content_type":"text/html","content_length":"156507","record_id":"<urn:uuid:547b0459-0da7-40d9-9867-ba96691282d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00155.warc.gz"}
Execution time limit is 1 second Runtime memory usage limit is 256 megabytes Consider a rooted tree that initially consists of only a single root. The children of each vertex are ordered from left to right. You can perform the following operations on the tree: • Add a leaf to the tree. • Remove a leaf from the tree. • Determine the number of vertices on the path between two leaves. • Determine the number of vertices "under the path" between two leaves. The vertices "under the path" between leaves u and v are defined as follows. Consider the path u=w_0-w_1-...-w_k=v between them. Identify the vertex w_c where both edges of the path lead to its children. The left edge moves towards vertex u, and the right edge moves towards vertex v. The vertices considered "under the path" include: • All children of w_c that lie between w_{c-1} and w_{c+1}, along with all vertices in their subtrees. • For i = 1, 2, ..., c-1, all children of w_i that are to the right of the child of w_{i-1}, along with all vertices in their subtrees. • For i = c+1, c+2, ..., k-1, all children of w_i that are to the left of the child of w_{i+1}, along with all vertices in their subtrees. Write a program that processes a sequence of queries to modify the tree by adding and removing vertices, and calculates the answers to queries about the number of vertices on the path and "under the The first line of the input contains a single integer n - the number of queries (0 ≤ n ≤ 300000). Each of the following n lines contains one query. The possible types of queries are: • l x - add a new leaf as the leftmost child of vertex x. • r x - add a new leaf as the rightmost child of vertex x. • a x y - add a new leaf as a child of vertex x, positioned immediately to the right of vertex y; all children of x that were previously to the right of y will be to the right of the new vertex after addition; it is guaranteed that y is a child of x. • d x - delete vertex x. It is guaranteed that at this moment, vertex x is not deleted and is a leaf. • p x y - find the number of vertices on the path between x and y, including these vertices themselves; it is guaranteed that x and y are leaves. • q x y - find the number of vertices "under the path" between x and y, including these vertices themselves; it is guaranteed that x and y are leaves. Vertices are numbered starting from one in the order they are added through the queries. The root of the tree is numbered 0 and is never considered a leaf. For each query of type p or q, output the answer on a separate line as you process the queries in order. Submissions 133 Acceptance rate 2%
{"url":"https://basecamp.eolymp.com/en/problems/4325","timestamp":"2024-11-12T15:58:26Z","content_type":"text/html","content_length":"280654","record_id":"<urn:uuid:ae5e7cb9-8e91-45f1-8bb7-bb15badc5fd7>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00121.warc.gz"}
Principles of Mathematical Analysis 3rd Edition by Walter Rudin, ISBN-13: 978-0070542358 - ebookschoice.comPrinciples of Mathematical Analysis 3rd Edition by Walter Rudin, ISBN-13: 978-0070542358ebookschoice.com - The best ebooks collection Principles of Mathematical Analysis 3rd Edition by Walter Rudin, ISBN-13: 978-0070542358 [PDF eBook eTextbook] • Publisher: McGraw Hill; 3rd edition (January 1, 1976) • Language: English • 342 pages • ISBN-10: 007054235X • ISBN-13: 978-0070542358 This text is part of the Walter Rudin Student Series in Advanced Mathematics. The third edition of this well known text continues to provide a solid foundation in mathematical analysis for undergraduate and first-year graduate students. The text begins with a discussion of the real number system as a complete ordered field. (Dedekind’s construction is now treated in an appendix to Chapter I.) The topological background needed for the development of convergence, continuity, differentiation and integration is provided in Chapter 2. There is a new section on the gamma function, and many new and interesting exercises are included. Table of Contents: Chapter 1: The Real and Complex Number SystemsIntroductionOrdered SetsFieldsThe Real FieldThe Extended Real Number SystemThe Complex FieldEuclidean SpacesAppendixExercisesChapter 2: Basic TopologyFinite, Countable, and Uncountable SetsMetric SpacesCompact SetsPerfect SetsConnected SetsExercisesChapter 3: Numerical Sequences and SeriesConvergent SequencesSubsequencesCauchy SequencesUpper and Lower LimitsSome Special SequencesSeriesSeries of Nonnegative TermsThe Number eThe Root and Ratio TestsPower SeriesSummation by PartsAbsolute ConvergenceAddition and Multiplication of SeriesRearrangementsExercisesChapter 4: ContinuityLimits of FunctionsContinuous FunctionsContinuity and CompactnessContinuity and ConnectednessDiscontinuitiesMonotonic FunctionsInfinite Limits and Limits at InfinityExercisesChapter 5: DifferentiationThe Derivative of a Real FunctionMean Value TheoremsThe Continuity of DerivativesL’Hospital’s Rule Derivatives of Higher-OrderTaylor’s TheoremDifferentiation of Vector-valued FunctionsExercisesChapter 6: The Riemann-Stieltjes IntegralDefinition and Existence of the IntegralProperties of the IntegralIntegration and DifferentiationIntegration of Vector-valued FunctionsRectifiable CurvesExercisesChapter 7: Sequences and Series of FunctionsDiscussion of Main ProblemUniform ConvergenceUniform Convergence and ContinuityUniform Convergence and IntegrationUniform Convergence and DifferentiationEquicontinuous Families of FunctionsThe Stone-Weierstrass TheoremExercisesChapter 8: Some Special FunctionsPower SeriesThe Exponential and Logarithmic FunctionsThe Trigonometric FunctionsThe Algebraic Completeness of the Complex FieldFourier SeriesThe Gamma FunctionExercisesChapter 9: Functions of Several VariablesLinear TransformationsDifferentiationThe Contraction PrincipleThe Inverse Function TheoremThe Implicit Function TheoremThe Rank TheoremDeterminantsDerivatives of Higher OrderDifferentiation of Integrals Exercises Chapter 10: Integration of Differential FormsIntegrationPrimitive MappingsPartitions of UnityChange of VariablesDifferential FormsSimplexes and ChainsStokes’ TheoremClosed Forms and Exact FormsVector AnalysisExercisesChapter 11: The Lebesgue TheorySet FunctionsConstruction of the Lebesgue MeasureMeasure SpacesMeasurable FunctionsSimple FunctionsIntegrationComparison with the Riemann IntegralIntegration of Complex FunctionsFunctions of Class L2ExercisesBibliographyList of Special SymbolsIndex What makes us different? • Instant Download • Always Competitive Pricing • 100% Privacy • FREE Sample Available • 24-7 LIVE Customer Support There are no reviews yet.
{"url":"https://ebookschoice.com/product/principles-of-mathematical-analysis-3rd-edition-by-walter-rudin-isbn-13-978-0070542358/","timestamp":"2024-11-03T13:03:57Z","content_type":"text/html","content_length":"92954","record_id":"<urn:uuid:801c7f71-7cd3-47d0-a053-04429ec65c0a>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00364.warc.gz"}
Unscramble RORKPES How Many Words are in RORKPES Unscramble? By unscrambling letters rorkpes, our Word Unscrambler aka Scrabble Word Finder easily found 75 playable words in virtually every word scramble game! Letter / Tile Values for RORKPES Below are the values for each of the letters/tiles in Scrabble. The letters in rorkpes combine for a total of 21 points (not including bonus squares) • R [5] • O [1] • R [5] • K [5] • P [3] • E [1] • S [1] What do the Letters rorkpes Unscrambled Mean? The unscrambled words with the most letters from RORKPES word or letters are below along with the definitions.
{"url":"https://www.scrabblewordfind.com/unscramble-rorkpes","timestamp":"2024-11-06T23:41:37Z","content_type":"text/html","content_length":"50781","record_id":"<urn:uuid:c73b79f2-777f-4765-a1a7-e3bf2360f2fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00283.warc.gz"}
Excel Formula for Python - Net Profit Tax Calculation In this guide, we will learn how to calculate the tax amount owing based on the net amount of profit using an Excel formula in Python. This formula utilizes the VLOOKUP function to determine the tax amount based on the net profit falling within one of the seven tax brackets. To calculate the tax amount, we need a table of tax brackets that consists of two columns. The first column represents the upper limit of each tax bracket, while the second column represents the corresponding tax amount owing. The VLOOKUP function searches for the net amount of profit in the tax brackets table and returns the tax amount from the second column that corresponds to the appropriate tax bracket. The fourth argument of the VLOOKUP function is set to TRUE, allowing it to find an approximate match for the net amount of profit in cases where it falls between two tax brackets. Let's consider an example to understand how this formula works. Suppose we have the following tax brackets table: A B If the net amount of profit is $750, the formula =VLOOKUP(750, tax_brackets, 2, TRUE) would return 15, which is the tax amount owing for that profit bracket. Similarly, if the net amount of profit is $3000, the formula =VLOOKUP(3000, tax_brackets, 2, TRUE) would return 20. Please note that the tax brackets table should be sorted in ascending order for the VLOOKUP function to work correctly. An Excel formula =VLOOKUP(A1, tax_brackets, 2, TRUE) Formula Explanation This formula uses the VLOOKUP function to calculate the tax amount owing based on the net amount of profit falling within one of the 7 tax brackets. Step-by-step explanation 1. The VLOOKUP function searches for the value in cell A1 (the net amount of profit) in the range tax_brackets. 2. The tax_brackets range is a table that contains two columns: the first column represents the upper limit of each tax bracket, and the second column represents the corresponding tax amount owing. 3. The VLOOKUP function returns the tax amount from the second column of the tax_brackets range that corresponds to the net amount of profit falling within the appropriate tax bracket. 4. The fourth argument of the VLOOKUP function is set to TRUE, which means that it will find an approximate match for the net amount of profit in the tax_brackets range. This allows the formula to handle cases where the net amount of profit falls between two tax brackets. For example, let's assume we have the following tax_brackets table: | A | B | | 0 | 0 | | 100 | 5 | | 500 | 10 | | 1000 | 15 | | 2000 | 20 | | 5000 | 25 | | 10000 | 30 | If the net amount of profit is $750, the formula =VLOOKUP(750, tax_brackets, 2, TRUE) would return 15, which is the tax amount owing for that profit bracket. Similarly, if the net amount of profit is $3000, the formula =VLOOKUP(3000, tax_brackets, 2, TRUE) would return 20, which is the tax amount owing for that profit bracket. Note that the tax_brackets table should be sorted in ascending order for the VLOOKUP function to work correctly.
{"url":"https://codepal.ai/excel-formula-generator/query/XPYtkojc/excel-formula-python-net-profit-tax-amount","timestamp":"2024-11-09T03:53:10Z","content_type":"text/html","content_length":"94230","record_id":"<urn:uuid:5c429c13-5f0b-4c17-8678-38bdeb395dd0>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00155.warc.gz"}
Project Descriptions Conformal prediction and causal inference Ahmed Alaa, Professor Electrical Engineering and Computer Science Applications for Fall 2024 are closed for this project. Conformal prediction (CP) is a model-agnostic and distribution-free method for quantifying uncertainty in black-box machine learning (ML) models. CP can be used to construct prediction sets/intervals that covers the true labels with a pre-determined probability as long as the training and testing data are exchangeable. While this assumption may hold in a supervised learning setup, it does not hold in causal inference problems where the goal is to predict causal effects of an intervention on individual units. This project will explore the theory and methods for applying CP to a various causal inference problems. Role: - Develop new theory and methods for CP in causal inference settings, and will be supervised directly by the PI. - Students will be expected to meet with their supervisor at least once a week. - Students will conduct literature reviews, develop , develop new algorithms and run experiments. - Experience with Python is required. Successful applicants should have a strong background in statistics, mathematics or theoretical computer science. The workload for this project is expected to be 12 hours/week or more. Qualifications: - Solid foundation in statistics, mathematics, or theoretical computer science - Completion of Stat 241B / CS 281B is highly desirable - Strong interest in pursuing graduate studies Day-to-day supervisor for this project: Lars van der Laan, Graduate Student Hours: 12 or more hours Related website: https://proceedings.neurips.cc/paper_files/paper/2023/hash/94ab02a30b0e4a692a42ccd0b4c55399-Abstract-Conference.html Related website: https://alaalab.berkeley.edu/ Digital Humanities and Data Science Engineering, Design & Technologies
{"url":"https://urapprojects.berkeley.edu/detail.php?id=20228-1","timestamp":"2024-11-14T03:44:28Z","content_type":"text/html","content_length":"6699","record_id":"<urn:uuid:f31e3b8d-2dc9-43b8-b7c5-a1d7be4f0476>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00448.warc.gz"}
How many zeros does 1 million have? How many zeros does 1 million have? 61,000,000 / Number of zeros How many digits does 1 million have in it? six zeros One million has six zeros (1,000,000), while one billion has nine zeros (1,000,000,000)….How Many Zeros in a Million? How Many Zeros in a Billion? Reference Chart. Name Number of Zeros Written Out One Million 6 1,000,000 Billion 9 1,000,000,000 Trillion 12 1,000,000,000,000 Quadrillion 15 1,000,000,000,000,000 What is a number with 1 million zeros called? A googol is the large number 10100. How many zeros do one billion has? nine zeros Writing big numbers can take a lot of space. If you write a 1 followed by nine zeros, you get 1,000,000,000 = one billion! What do you mean by 1 million? one thousand thousand 1 Million Equals One million equals one thousand thousand or ten lakhs. Numerically, it is represented as: 1 million (M) = 1000, 000. 1 million = 10, 00000 = 10 lakhs. How much is a million? 1000 thousands A million is 1000 thousands, a billion is 1000 millions, and a trillion is 1000 billions. How much is a quadrillion? 1000 trillion Explanation: 1 Quadrillion = 1000 trillion. What is Vigintillion? Definition of vigintillion US : a number equal to 1 followed by 63 zeros — see Table of Numbers also, British : a number equal to 1 followed by 120 zeros — see Table of Numbers. What is a 1 followed by 100 zeros called? A googol is 10 to the 100th power (which is 1 followed by 100 zeros). A googol is larger than the number of elementary particles in the universe, which amount to only 10 to the 80th power. How many zeros does 30 million have? Answer: 30 million means 30000000. How many zeros in 30 million? Answer: 7. How many millions are in a billion? a thousand million The USA meaning of a billion is a thousand million, or one followed by nine noughts (1,000,000,000). How much money is in a million? 10 Lakhs 1 Million is equal to 10 Lakhs. 1 Million in numbers is written as 10,00,000. How many zeros are need to make a million? Numbers Bigger Than a Trillion. The digit zero plays an important role as you count very large numbers. All of Those Zeroes. Zeros Grouped in Sets of 3. Numbers With Very Large Numbers of Zeros. Million and Billion: Some Differences. How many zeros are there in the number one million? The number one million consists of six zeros. This figure doesn’t contain decimal points. One million is also referred to as one thousand thousand, and a comma is used to separate the digits. It’s written as 1,000,000. How many zeroes are in 1 trillion dollars? Total household debt rose by $1.02 trillion last year, boosted by higher balances on home and auto loans, the Federal Reserve Bank of New York said Tuesday. It was the largest increase since a $1.06 trillion jump in 2007. Total consumer debt now sits at around $15.6 trillion, compared with $14.6 trillion a year earlier. How many zeros are in five hundred thousand dollars? Reference to sets of zeros is reserved for groupings of three zeros, meaning they are not relevant for smaller numbers. We write numbers with commas separating sets of three zeros so that it’s easier to read and understand the value. For example, you write one million as 1,000,000 rather than 1000000.
{"url":"https://www.cagednomoremovie.com/how-many-zeros-does-1-million-have/","timestamp":"2024-11-05T22:41:42Z","content_type":"text/html","content_length":"44073","record_id":"<urn:uuid:ccb32dbd-b982-4a70-9168-05323e5c18db>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00278.warc.gz"}
Solving a 2D Fourier Transform 13716 Views 4 Replies 2 Total Likes Solving a 2D Fourier Transform HI Folks, I don't do much in the Fourier arena so please forgive the naivete of this question. I am trying to solve the following 2D Fourier transform (z is a constant): FourierTransform[1/Sqrt[x^2 + y^2 + z^2], {x, y}, {u, t}] After some time, Mathematica simply returns the original formula. Similar scenario, if I try to solve the corresponding integral directly: Integrate[Exp[2 Pi I (m*x + n*y)]/Sqrt[x^2 + y^2 + z^2], {x, 0, 1}, {y, 0, 1}] Any advice? 4 Replies It may not be an integral that is doable in closed form. If you first do the x-Integral, the remaining y-integral is (assuming that z is Real) Sqrt[2/Pi] BesselK[0, u Sqrt[y^2 + z^2] Sign[u]], {y}, {t}, Assumptions -> {z \[Element] Reals}] Which returns unevaluated. Hi David, Thanks very much for your comment. Yes, z is Real and positive. I will focus on a numerical solution. I had thought that perhaps I was missing something obvious with respect to a closed form Also note, FourierTransform transform is defined over {- Infinity, + Infinity}, while your 2nd integrals have range {0,1}. Be respectful. Review our Community Guidelines to understand your role and responsibilities. Community Terms of Use
{"url":"https://community.wolfram.com/groups/-/m/t/252930?sortMsg=Replies","timestamp":"2024-11-05T15:33:06Z","content_type":"text/html","content_length":"108914","record_id":"<urn:uuid:fe519a35-8364-4baf-b123-ec586524a224>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00643.warc.gz"}
Three-phase Y and <DELTA> configurations : POLYPHASE AC CIRCUITS Three-phase Y and configurations Initially we explored the idea of three-phase power systems by connecting three voltage sources together in what is commonly known as the “Y” (or “star”) configuration. This configuration of voltage sources is characterized by a common connection point joining one side of each source. (Figure below) Three-phase “Y” connection has three voltage sources connected to a common point. If we draw a circuit showing each voltage source to be a coil of wire (alternator or transformer winding) and do some slight rearranging, the “Y” configuration becomes more obvious in Figure below. Three-phase, four-wire “Y” connection uses a "common" fourth wire. The three conductors leading away from the voltage sources (windings) toward a load are typically called lines, while the windings themselves are typically called phases. In a Y-connected system, there may or may not (Figure below) be a neutral wire attached at the junction point in the middle, although it certainly helps alleviate potential problems should one element of a three-phase load fail open, as discussed earlier. Three-phase, three-wire “Y” connection does not use the neutral wire. When we measure voltage and current in three-phase systems, we need to be specific as to where we're measuring. Line voltage refers to the amount of voltage measured between any two line conductors in a balanced three-phase system. With the above circuit, the line voltage is roughly 208 volts. Phase voltage refers to the voltage measured across any one component (source winding or load impedance) in a balanced three-phase source or load. For the circuit shown above, the phase voltage is 120 volts. The terms line current and phase current follow the same logic: the former referring to current through any one line conductor, and the latter to current through any one component. Y-connected sources and loads always have line voltages greater than phase voltages, and line currents equal to phase currents. If the Y-connected source or load is balanced, the line voltage will be equal to the phase voltage times the square root of 3: However, the “Y” configuration is not the only valid one for connecting three-phase voltage source or load elements together. Another configuration is known as the “Delta,” for its geometric resemblance to the Greek letter of the same name (Δ). Take close notice of the polarity for each winding in Figure below. Three-phase, three-wire Δ connection has no common. At first glance it seems as though three voltage sources like this would create a short-circuit, electrons flowing around the triangle with nothing but the internal impedance of the windings to hold them back. Due to the phase angles of these three voltage sources, however, this is not the case. One quick check of this is to use Kirchhoff's Voltage Law to see if the three voltages around the loop add up to zero. If they do, then there will be no voltage available to push current around and around that loop, and consequently there will be no circulating current. Starting with the top winding and progressing counter-clockwise, our KVL expression looks something like this: Indeed, if we add these three vector quantities together, they do add up to zero. Another way to verify the fact that these three voltage sources can be connected together in a loop without resulting in circulating currents is to open up the loop at one junction point and calculate voltage across the break: (Figure below) Voltage across open Δ should be zero. Starting with the right winding (120 V ∠ 120^o) and progressing counter-clockwise, our KVL equation looks like this: Sure enough, there will be zero voltage across the break, telling us that no current will circulate within the triangular loop of windings when that connection is made complete. Having established that a Δ-connected three-phase voltage source will not burn itself to a crisp due to circulating currents, we turn to its practical use as a source of power in three-phase circuits. Because each pair of line conductors is connected directly across a single winding in a Δ circuit, the line voltage will be equal to the phase voltage. Conversely, because each line conductor attaches at a node between two windings, the line current will be the vector sum of the two joining phase currents. Not surprisingly, the resulting equations for a Δ configuration are as Let's see how this works in an example circuit: (Figure below) The load on the Δ source is wired in a Δ. With each load resistance receiving 120 volts from its respective phase winding at the source, the current in each phase of this circuit will be 83.33 amps: So each line current in this three-phase power system is equal to 144.34 amps, which is substantially more than the line currents in the Y-connected system we looked at earlier. One might wonder if we've lost all the advantages of three-phase power here, given the fact that we have such greater conductor currents, necessitating thicker, more costly wire. The answer is no. Although this circuit would require three number 1 gage copper conductors (at 1000 feet of distance between source and load this equates to a little over 750 pounds of copper for the whole system), it is still less than the 1000+ pounds of copper required for a single-phase system delivering the same power (30 kW) at the same voltage (120 volts conductor-to-conductor). One distinct advantage of a Δ-connected system is its lack of a neutral wire. With a Y-connected system, a neutral wire was needed in case one of the phase loads were to fail open (or be turned off), in order to keep the phase voltages at the load from changing. This is not necessary (or even possible!) in a Δ-connected circuit. With each load phase element directly connected across a respective source phase winding, the phase voltage will be constant regardless of open failures in the load elements. Perhaps the greatest advantage of the Δ-connected source is its fault tolerance. It is possible for one of the windings in a Δ-connected three-phase source to fail open (Figure below) without affecting load voltage or current! Even with a source winding failure, the line voltage is still 120 V, and load phase voltage is still 120 V. The only difference is extra current in the remaining functional source windings. The only consequence of a source winding failing open for a Δ-connected source is increased phase current in the remaining windings. Compare this fault tolerance with a Y-connected system suffering an open source winding in Figure below. Open “Y” source winding halves the voltage on two loads of a Δ connected load. With a Δ-connected load, two of the resistances suffer reduced voltage while one remains at the original line voltage, 208. A Y-connected load suffers an even worse fate (Figure below) with the same winding failure in a Y-connected source Open source winding of a "Y-Y" system halves the voltage on two loads, and looses one load entirely. In this case, two load resistances suffer reduced voltage while the third loses supply voltage completely! For this reason, Δ-connected sources are preferred for reliability. However, if dual voltages are needed (e.g. 120/208) or preferred for lower line currents, Y-connected systems are the configuration of choice. • REVIEW: • The conductors connected to the three points of a three-phase source or load are called lines. • The three components comprising a three-phase source or load are called phases. • Line voltage is the voltage measured between any two lines in a three-phase circuit. • Phase voltage is the voltage measured across a single component in a three-phase source or load. • Line current is the current through any one line between a three-phase source and load. • Phase current is the current through any one component comprising a three-phase source or load. • In balanced “Y” circuits, line voltage is equal to phase voltage times the square root of 3, while line current is equal to phase current. • In balanced Δ circuits, line voltage is equal to phase voltage, while line current is equal to phase current times the square root of 3. • Δ-connected three-phase voltage sources give greater reliability in the event of winding failure than Y-connected sources. However, Y-connected sources can deliver the same amount of power with less line current than Δ-connected sources.
{"url":"https://www.learningelectronics.net/vol_2/chpt_10/5.html","timestamp":"2024-11-10T07:42:48Z","content_type":"text/html","content_length":"17418","record_id":"<urn:uuid:469d4c03-1887-43ba-8c6c-091560cffcc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00233.warc.gz"}
cbse class 10 maths quadratic equations mcq Archives - Hi students, Welcome to AMBiPi (Amans Maths Blogs). In this article, you will get Quadratic Equations CBSE Class 10 Maths MCQ Questions with Answer Keys. You can download this PDF and save it in your mobile device or laptop etc. Quadratic Equations CBSE Class 10 Maths MCQ Question No 41: If 1 Read More Extra MCQ Questions for Class 10 Maths Chapter 4 Quadratic Equations Hi students, Welcome to AMBiPi (Amans Maths Blogs). In this article, you will get Extra MCQ Questions for Class 10 Maths Chapter 4 Quadratic Equations. You can download this PDF and save it in your mobile device or laptop etc. Extra MCQ Questions for Class 10 Maths Quadratic Equations Question No Read More Important MCQ Questions for Class 10 Maths Chapter 4 Quadratic Equations Hi students, Welcome to AMBiPi (Amans Maths Blogs). In this article, you will get Important MCQ Questions for Class 10 Maths Chapter 4 Quadratic Equations. You can download this PDF and save it in your mobile device or laptop etc. Important MCQ Questions of Quadratic Equations Class 10 Question No 21: If Read More Quadratic Equations Class 10 Maths MCQ Questions with Answer Keys Hi students, Welcome to AMBiPi (Amans Maths Blogs). In this article, you will get Quadratic Equations Class 10 MCQ Questions Maths with Answer Keys. You can download this PDF and save it in your mobile device or laptop etc. Quadratic Equations MCQ Questions for Class 10 Question No 11: The value of Read More MCQ Questions for Class 10 Maths Chapter 4 Quadratic Equations with Answer Keys Hi students, Welcome to AMBiPi (Amans Maths Blogs). In this article, you will get MCQ Questions for Class 10 Maths Chapter 4 Quadratic Equations with Answer Keys PDF. You can download this PDF and save it in your mobile device or laptop etc. MCQ Questions for Class 10 Maths Chapter 4 Read More
{"url":"https://www.amansmathsblogs.com/tag/cbse-class-10-maths-quadratic-equations-mcq/","timestamp":"2024-11-08T21:49:30Z","content_type":"text/html","content_length":"113105","record_id":"<urn:uuid:35c07922-d2a4-499e-8cc5-e2836adb1389>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00378.warc.gz"}
Downstream Fairness: A New Way to Mitigate Bias Note: This blog post and code could not have been done without the fantastic research from our former research fellow Kweku Kwegyir-Aggrey and former machine learning researcher Jessica Dai. You can view their paper here. Many organizations utilize binary classifiers for a variety of reasons, such as helping loan providers decide who should get a loan, predicting whether or not something is spam, or providing evidence on whether or not something is fraudulent. These use cases require specific classification thresholds. Imagine an algorithm is predicting whether or not someone qualifies for a loan. One way to do this is to attribute a probability to a person, and if that probability is above a certain threshold (let’s say 0.5), then they can get a loan. If not, then they will be rejected. What is the proper threshold to use in these scenarios? Taking spam detection as an example, the threshold set will determine how often an email is classified as spam. A threshold of 0.8 is less permissive than a threshold of 0.4. That is why many organizations have threshold ranges for their algorithms, which can complicate things. Current bias mitigation techniques, such as the one we offer at Arthur, traditionally require you to change your classification threshold to meet some fairness definition. This change in threshold could be outside the range that your company allows, creating questions as to whether or not you can be fair. Further complicating these situations are models that are utilized in many downstream applications, where different threshold ranges (and possibly different fairness definitions) need to be utilized. Downstream fairness solves this dilemma. It’s an algorithm that achieves various fairness definitions (equalized odds, equal opportunity, and demographic parity) in a threshold-agnostic way, meaning that a company won’t have to adjust their threshold. Instead it operates on a binary classifier’s output probabilities to achieve a fairness definition. And this is all done with minimal accuracy loss! For the remainder of this blog post, we’ll be digging deeper into this algorithm and how to use our new open source code. Downstream Fairness Saving the mathematical details for the Geometric Repair paper, we will discuss the essence of how Downstream Fairness works and provide code snippets from our open source package. First off, downstream fairness is a post-processing algorithm that operates on the training dataset (or some representative dataset) for the model we are trying to make fair. The data needs to contain some key information: the prediction probabilities for each data point, the classification label, and a column containing the sensitive attribute on which you are operating. How the algorithm works is that it looks at the distribution of prediction probabilities per group of our original model and then computes a repair of each of those distributions for demographic parity. The reason this works for demographic parity is because the definition of demographic parity (equalizing selection rates for each group) only requires prediction probabilities and group On the implementation side, this process produces an adjustment table. The adjustment table contains how much the prediction probabilities need to be adjusted to achieve demographic parity, for each group. Below is an example of how that table looks: Luckily, this is all automated with our codebase! Here is an example of how to do this: And, unlike some other bias mitigation approaches, downstream fairness is a pareto optimal algorithm. Meaning that it will achieve these fairness definitions with the minimum amount of accuracy loss. Of course, there are some limitations. The dataset used to train downstream fairness must contain prediction probabilities for each class for each group, and there should be a good amount of examples for each class for each group. But if that is provided, the algorithm should work as expected. We went through some of the algorithmic and implementation details of downstream fairness. If you want to explore more of the mathematical details, please go read the paper. Us at Arthur would love for you all to try out our work! Feel free to pip install our package and kick the tires a bit. As you find failure cases or think of new features, feel free to send your feedback to me at daniel.nissani@arthur.ai. Even better, please submit PRs or Issues on our open source GitHub repo. The GitHub repo provides a demo notebook, where you can try out all of our functionality we described in this post. 1. What are the specific mathematical principles behind the Downstream Fairness algorithm? The Downstream Fairness algorithm is grounded in statistical and probability theory, particularly focusing on the concept of distribution repair for ensuring fairness across different groups. The mathematical foundation involves adjusting the distribution of prediction probabilities for each group to align with fairness criteria such as demographic parity, equalized odds, or equal opportunity. This involves a process known as "Geometric Repair," which essentially recalibrates the output probabilities of a predictive model so that the resultant probabilities do not disproportionately favor or disadvantage any particular group based on sensitive attributes. The algorithm employs optimization techniques to find the best possible adjustments that achieve fairness while minimizing accuracy loss. This is achieved by constructing an adjustment table that represents how much the prediction probabilities need to be shifted for each group to meet the desired fairness standard. 2. How does Downstream Fairness compare to other bias mitigation techniques in terms of performance and implementation complexity? Downstream Fairness differs from other bias mitigation techniques primarily in its post-processing approach, focusing on adjusting model outputs rather than altering the training process or the data. Compared to methods like reweighing, which modifies the weight of instances in the training data, or adversarial debiasing, which involves training a model to predict the target while another model predicts the sensitive attribute to reduce bias, Downstream Fairness is implemented after a model has been trained, thereby not affecting the original training pipeline. This can make it easier to integrate into existing workflows without needing to retrain models. In terms of performance, Downstream Fairness aims to be pareto optimal, meaning it seeks to achieve the best possible trade-off between fairness and model accuracy. This contrasts with some methods that might significantly reduce a model's performance to achieve fairness criteria. However, the actual performance and complexity can vary based on the specific scenario and the extent of bias in the original model. 3. Can Downstream Fairness be applied to non-binary classifiers and multi-class scenarios? The concept of Downstream Fairness as described in the blog post primarily addresses binary classification problems. However, the underlying principles can be adapted for non-binary or multi-class classification scenarios with some modifications. In multi-class scenarios, fairness typically involves ensuring that the predictive performance is balanced across different groups for all classes, not just two. This could involve extending the adjustment table to cover all possible class predictions and ensuring that the adjustments lead to fair outcomes across all classes and groups. However, this adaptation can increase the complexity, as it requires considering inter-class fairness in addition to intra-group fairness. The implementation for multi-class scenarios would need to calculate separate adjustments for each class and group combination, possibly leading to a more complex optimization problem. While the original Downstream Fairness algorithm may not directly apply, the principles of adjusting prediction probabilities and achieving demographic parity can still be extended to these more complex scenarios with appropriate modifications.
{"url":"https://www.arthur.ai/blog/downstream-fairness-a-new-way-to-mitigate-bias","timestamp":"2024-11-14T00:10:57Z","content_type":"text/html","content_length":"67617","record_id":"<urn:uuid:822ca28c-dc20-4b22-bc62-396bcb9db708>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00773.warc.gz"}
Binary To Decimal | Techfonist Binary to Decimal: Understanding the Conversion and How It Works In the world of computing, everything boils down to binary— a system of ones and zeros that represent everything from complex algorithms to simple text. If you’ve ever encountered a string of binary numbers and wondered how to convert it into a decimal number (the numbering system we use in everyday life), this guide is for you. Binary to decimal conversion might seem intimidating at first, but it’s quite straightforward once you grasp the concept. Let’s explore how this conversion works, its applications, and how you can calculate it manually or using online tools. Table of Contents 1. Introduction to Binary and Decimal Systems □ What is Binary? □ What is Decimal? □ Why is Binary Important in Computing? 2. Understanding the Place Value in Binary □ What Binary Digits Represent □ Powers of 2 in Binary Numbers 3. How to Convert Binary to Decimal Manually □ Step-by-Step Explanation □ Example of Binary to Decimal Conversion 4. Binary to Decimal Formula □ Breaking Down the Formula □ Understanding Each Component 5. Using Online Binary to Decimal Converters □ Advantages of Online Tools □ Top Free Binary to Decimal Calculators 6. Common Applications of Binary to Decimal Conversion □ How Binary Is Used in Computers □ Binary in Networking and Data Storage 7. Binary to Decimal Conversion with Fractions □ How to Handle Binary Points □ Example of Fractional Binary to Decimal Conversion 8. Binary to Other Base Conversions □ Binary to Hexadecimal □ Binary to Octal 9. Why Learning Binary to Decimal Is Important □ Understanding Machine-Level Operations □ Its Role in Programming and Cybersecurity 10. Common Mistakes in Binary to Decimal Conversion □ Misreading Binary Place Values □ Forgetting to Multiply by the Powers of 2 11. Binary to Decimal in Programming Languages □ How Programming Languages Handle the Conversion □ Writing a Simple Program for Binary to Decimal 12. Historical Background of Binary System □ Origins of the Binary System □ How Binary Became the Foundation of Modern Computing 13. Practice Problems for Binary to Decimal Conversion □ Basic Examples to Try □ Solutions and Explanations 14. Binary to Decimal: Beyond Computing □ Binary in Everyday Technology □ Role in Cryptography and Data Encoding 15. Conclusion: Mastering Binary to Decimal for a Deeper Understanding of Technology What is Binary? Binary is the fundamental language of computers, consisting of only two digits: 0 and 1. These digits, also known as bits, are the building blocks of all computer operations. Every piece of data a computer processes—whether it’s text, images, or video—is represented in binary form. The decimal system is the numbering system most people are familiar with, consisting of ten digits (0 to 9). It’s a base-10 system, meaning each digit’s value depends on its position and is based on powers of 10. For example, the number 345 in decimal means: 3×102+4×101+5×1003 \times 10^2 + 4 \times 10^1 + 5 \times 10^03×102+4×101+5×100 Why is Binary Important in Computing? Computers operate on electrical signals, which can be easily represented by binary numbers. A 1 typically represents the presence of an electrical charge, while 0 represents its absence. This simple on/off system is the foundation for all digital systems, making binary crucial for everything from processing instructions to storing information. Understanding the Place Value in Binary In binary, each digit has a place value based on powers of 2, rather than 10 as in the decimal system. Starting from the right, the place values are: • 2^0 = 1 • 2^1 = 2 • 2^2 = 4 • 2^3 = 8 • 2^4 = 16 • and so on… Each binary digit (bit) corresponds to one of these powers of 2. To convert a binary number to decimal, you multiply each binary digit by its respective power of 2 and sum the results. How to Convert Binary to Decimal Manually Step-by-Step Explanation Let’s walk through an example: Convert the binary number 1011 to decimal: 1. Write down the binary number and assign place values: 1×23,0×22,1×21,1×201 \times 2^3, 0 \times 2^2, 1 \times 2^1, 1 \times 2^01×23,0×22,1×21,1×20 2. Multiply each binary digit by its corresponding power of 2: (1×8)+(0×4)+(1×2)+(1×1)(1 \times 8) + (0 \times 4) + (1 \times 2) + (1 \times 1)(1×8)+(0×4)+(1×2)+(1×1) 3. Perform the multiplications: 8+0+2+18 + 0 + 2 + 18+0+2+1 4. Add the results: 8+2+1=118 + 2 + 1 = 118+2+1=11 Therefore, 1011 in binary is equal to 11 in decimal. Example of Binary to Decimal Conversion Let’s try a larger example with the binary number 110101: 1. Assign place values: 1 × 2^5, 1 × 2^4, 0 × 2^3, 1 × 2^2, 0 × 2^1, 1 × 2^0 2. Perform the multiplication: (1 × 32) + (1 × 16) + (0 × 8) + (1 × 4) + (0 × 2) + (1 × 1) 3. Add the results: 32 + 16 + 0 + 4 + 0 + 1 = 53 Thus, 110101 in binary equals 53 in decimal. Binary to Decimal Formula The formula to convert binary to decimal is:Decimal Number=∑(bi×2i)\text{Decimal Number} = \sum (b_i \times 2^i)Decimal Number=∑(bi×2i) • bib_ibi is each binary digit (0 or 1), • iii is the position of the digit, starting from 0 on the right. This formula simplifies binary to decimal conversion by systematically multiplying each binary digit by its corresponding power of 2 and adding the results. Using Online Binary to Decimal Converters Advantages of Online Tools If you find manual calculations tedious or need to convert large binary numbers quickly, online binary to decimal converters can be helpful. These tools allow you to input a binary number and instantly see the decimal equivalent. They are widely used in programming, networking, and educational settings. Top Free Binary to Decimal Calculators • RapidTables: Offers a simple and effective binary to decimal conversion tool. • Calculator.net: Provides a calculator for converting binary, octal, hexadecimal, and other bases. • MathIsFun: Features a user-friendly binary to decimal conversion tool, ideal for students. Common Applications of Binary to Decimal Conversion How Binary Is Used in Computers Binary is at the heart of how computers operate. From processing instructions in a CPU to storing data on hard drives, binary numbers are converted into decimal numbers that we can understand and Binary in Networking and Data Storage In networking, binary is used to represent IP addresses and subnet masks. For example, an IP address like 192.168.1.1 has a binary equivalent that routers and computers use for communication. Data storage also relies heavily on binary, where each bit represents a piece of information. Binary to Decimal Conversion with Fractions How to Handle Binary Points Just like decimal numbers can have fractions (e.g., 12.34), binary numbers can have a fractional part, represented by a binary point. To convert a binary fraction to decimal, treat the digits after the binary point as negative powers of 2. Example of Fractional Binary to Decimal Conversion Consider the binary number 101.101: • Before the binary point: Convert 101 (5 in decimal) • After the binary point: Convert .101 1. Write down the place values for the fractional part:1×2−1,0×2−2,1×2−31 \times 2^{-1}, 0 \times 2^{-2}, 1 \times 2^{-3}1×2−1,0×2−2,1×2−3 2. Multiply and sum:(1×0.5)+(0×0.25)+(1×0.125)=0.5+0+0.125=0.625(1 \times 0.5) + (0 \times 0.25) + (1 \times 0.125) = 0.5 + 0 + 0.125 = 0.625(1×0.5)+(0×0.25)+(1×0.125)=0.5+0+0.125=0.625 3. Add the integer part and the fractional part: 5+0.625=5.6255 + 0.625 = 5.6255+0.625=5.625 Thus, 101.101 in binary equals 5.625 in decimal. Conclusion: Mastering Binary to Decimal Conversion Converting binary to decimal is an essential skill for anyone interested in computing, programming, or technology. By understanding the place valueand the conversion process, you can decode binary numbers effortlessly. Whether you’re a student, a professional in IT, or just a curious mind, mastering this conversion will deepen your understanding of how computers work and how data is represented. From simple calculations to more complex programming tasks, the ability to switch between binary and decimal enhances your analytical skills and broadens your tech knowledge. Practice Problems for Binary to Decimal Conversion To solidify your understanding, here are a few practice problems to try on your own: Basic Examples to Try 1. Convert the binary number 1110 to decimal. 2. Convert the binary number 10001 to decimal. 3. Convert the binary number 110011 to decimal. 4. Convert the binary fraction 0.101 to decimal. 5. Convert the binary number 100110.011 to decimal. Solutions and Explanations 1. 1110: 1×23+1×22+1×21+0×20=8+4+2+0=141 \times 2^3 + 1 \times 2^2 + 1 \times 2^1 + 0 \times 2^0 = 8 + 4 + 2 + 0 = 141×23+1×22+1×21+0×20=8+4+2+0=14 2. 10001: 1×24+0×23+0×22+0×21+1×20=16+0+0+0+1=171 \times 2^4 + 0 \times 2^3 + 0 \times 2^2 + 0 \times 2^1 + 1 \times 2^0 = 16 + 0 + 0 + 0 + 1 = 171×24+0×23+0×22+0×21+1×20=16+0+0+0+1=17 3. 110011: 1×25+1×24+0×23+0×22+1×21+1×20=32+16+0+0+2+1=511 \times 2^5 + 1 \times 2^4 + 0 \times 2^3 + 0 \times 2^2 + 1 \times 2^1 + 1 \times 2^0 = 32 + 16 + 0 + 0 + 2 + 1 = 511×25+1×24+0×23+0×22+1×21+1×20= 4. 0.101: 1×2−1+0×2−2+1×2−3=0.5+0+0.125=0.6251 \times 2^{-1} + 0 \times 2^{-2} + 1 \times 2^{-3} = 0.5 + 0 + 0.125 = 0.6251×2−1+0×2−2+1×2−3=0.5+0+0.125=0.625 5. 100110.011: Integer part: 1×25+0×24+0×23+1×22+1×21+0×20=32+0+0+4+2+0=381 \times 2^5 + 0 \times 2^4 + 0 \times 2^3 + 1 \times 2^2 + 1 \times 2^1 + 0 \times 2^0 = 32 + 0 + 0 + 4 + 2 + 0 = 381×25+0×24+0×23+1×22+1×21+0×20= Fractional part: 0×2−1+1×2−2+1×2−3=0+0.25+0.125=0.3750 \times 2^{-1} + 1 \times 2^{-2} + 1 \times 2^{-3} = 0 + 0.25 + 0.125 = 0.3750×2−1+1×2−2+1×2−3=0+0.25+0.125=0.375 Total: 38+0.375=38.37538 + 0.375 = 38.37538+0.375=38.375 Binary to Other Base Conversions Binary to Hexadecimal Hexadecimal (base 16) is another number system commonly used in computing. To convert binary to hexadecimal, group the binary digits into sets of four (starting from the right), then convert each group to its hexadecimal equivalent. For example, to convert 110101101 to hexadecimal, first, pad it to make groups of four: Then convert each group: • 0011 = 3 • 0101 = 5 • 1010 = A Thus, 110101101 in binary equals 35A in hexadecimal. Binary to Octal Similarly, to convert binary to octal (base 8), group the binary digits into sets of three. For example, converting 101110 to octal involves grouping as: 10 111 0 (pad it to three digits: 010 111 000) Then convert: Therefore, 101110 in binary is 270 in octal. Why Learning Binary to Decimal Is Important Understanding how to convert binary to decimal is fundamental in fields like programming, networking, and cybersecurity. Here are a few reasons why it’s essential: 1. Understanding Machine-Level Operations: Knowing how binary numbers represent data allows you to write better code and understand how computers interpret your instructions. 2. Role in Programming: Many programming languages include functions to handle binary and decimal conversions. Mastering these conversions can improve your programming skills and efficiency. 3. Cybersecurity: A solid grasp of binary and other numbering systems can aid in understanding encryption algorithms and network communications, which often rely on binary representations. Common Mistakes in Binary to Decimal Conversion While converting binary to decimal, it’s easy to make a few common mistakes: 1. Misreading Binary Place Values: Ensure you correctly identify each digit’s position and its corresponding power of 2. 2. Forgetting to Multiply by the Powers of 2: Each digit must be multiplied by its respective power of 2. Failing to do this will lead to incorrect results. 3. Ignoring Fractions: When dealing with binary fractions, remember to apply negative powers of 2 for the digits after the binary point. Binary to Decimal in Programming Languages How Programming Languages Handle the Conversion Many programming languages offer built-in functions to convert binary to decimal. For example: • Python: You can use the int() function:pythonCopy codedecimal_number = int('1011', 2) # Returns 11 • Java: The Integer.parseInt() method can be utilized:javaCopy codeint decimalNumber = Integer.parseInt("1011", 2); // Returns 11 Writing a Simple Program for Binary to Decimal Here’s a simple Python program that converts binary to decimal: pythonCopy codedef binary_to_decimal(binary_str): decimal_number = 0 for index, digit in enumerate(reversed(binary_str)): decimal_number += int(digit) * (2 ** index) return decimal_number # Example usage binary_input = '1011' print(f'The decimal of {binary_input} is {binary_to_decimal(binary_input)}') # Outputs: 11 Historical Background of Binary System The binary system dates back to ancient civilizations, but it was mathematician Gottfried Wilhelm Leibniz who formalized it in the 17th century. He recognized that a binary system could represent numbers and logic using just two symbols, paving the way for modern computing. Origins of the Binary System The concept of binary has roots in various cultures, including the ancient Egyptians and Chinese, who used base-2 systems in their counting. The advent of electronic computers in the 20th century cemented binary as the standard for data representation, ultimately leading to the digital revolution. How Binary Became the Foundation of Modern Computing The adoption of binary as the fundamental language of computers stems from its simplicity and reliability. It aligns perfectly with electronic circuitry, where the presence or absence of voltage can represent binary digits, making it an ideal choice for data processing. Conclusion: Mastering Binary to Decimal for a Deeper Understanding of Technology Converting binary to decimal is more than just a mathematical exercise—it’s a key skill that opens doors to understanding the intricate world of computing. By mastering this conversion, you enhance your programming capabilities, deepen your technical knowledge, and gain insight into how data is represented and manipulated in digital systems. As you practice and apply these concepts, you’ll find yourself navigating the world of binary with ease and confidence. 1. What is binary code? Binary code is a system of representing text or computer processor instructions using the binary number system, which uses only two symbols: 0 and 1. Each digit in binary represents a power of 2, enabling complex information to be encoded efficiently. 2. How can I convert binary to decimal using a calculator? To convert binary to decimal using a calculator, input the binary number into a scientific calculator or an online converter that specifically supports binary to decimal conversions. Ensure you choose the correct base for conversion. 3. Can all binary numbers be converted to decimal? Yes, any binary number can be converted to decimal. The conversion process will always yield a unique decimal equivalent for a given binary number. 4. What happens when you add binary numbers? When you add binary numbers, you follow the same principles as decimal addition but only carry over when the sum reaches 2. For example, 1+1=101 + 1 = 101+1=10 in binary. 5. Is it necessary to learn binary conversion for programming? While not strictly necessary for all programming tasks, understanding binary conversion is essential for low-level programming, working with data structures, and grasping how computers interpret data. It can greatly enhance your problem-solving skills in programming contexts. 2 thoughts on “Binary To Decimal” 1. Pingback: ASCII To Binary | Techfonist 2. Pingback: JSON to XML | Techfonist
{"url":"https://techfonist.com/binary-to-decimal/","timestamp":"2024-11-08T02:26:17Z","content_type":"text/html","content_length":"178729","record_id":"<urn:uuid:eec2ceb8-e943-4613-9a3a-1b67733ffe62>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00581.warc.gz"}
Felix Klein Felix Christian Klein , German , born: April 25 1849 , died: June 22 He was a professor at the Universities of Erlangen, Munich, Leipzig and finally Göttingen, teaching mathematics. His major topics were non-Euclidean geometry, group theory and function theory. His enunciation of the Erlangen programme classifying geometries by their underlying group of symmetries was hugely influential: a synthesis of much of the mathematics of its time. See also: External links:
{"url":"http://www.fact-index.com/f/fe/felix_klein.html","timestamp":"2024-11-11T13:29:05Z","content_type":"text/html","content_length":"4867","record_id":"<urn:uuid:6b5290cc-091f-40fa-8798-14423fdf5612>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00039.warc.gz"}
Is there a 6th base?Is there a 6th base? in Dating, Magazine Is there a 6th base? There is no 6th base in baseball, nor is it widely known as a measurement in relationships. Relationship bases—no matter their number—are figures of speech that mean different things to different What’s 3rd base in a relationship? The third base in dating involves the use of the tongue (and teeth, if you’re both into that sort of thing) to offer sexual stimulation. From the breasts to all the way down there. This is usually when things start getting a lot more sexual, and it can also be used as foreplay for what’s about to come next. Therefore, What is second base with a guy? First base = kissing, including open-mouth (or French) kissing. Second base = petting above the waist, including touching, feeling, and fondling the chest, breasts, and nipples. What does base mean in math? The word « base » in mathematics is used to refer to a particular mathematical object that is used as a building block. The most common uses are the related concepts of the number system whose digits are used to represent numbers and the number system in which logarithms are defined. Then, What’s 2nd base in a relationship? While there’s no « official » definition of what the bases represent, there seems to be a general understanding of each base: First base = kissing, including open-mouth (or French) kissing. Second base = petting above the waist, including touching, feeling, and fondling the chest, breasts, and nipples. What are the making out bases? Running the bases First base – mouth-to-mouth kissing, especially French kissing; Second base – skin-to-skin touching/kissing of the breasts; in some contexts, it may instead refer to touching any erogenous zones through the clothes (i.e., not actually touching the skin); Is there a 5th base? She said second base was copping a feel, third base was hands (or more?) in the pants, and a home run was sex. Well, an inside source just sent me a little tip about another base… (Shy readers, look away!) The fifth base is—you guessed it—going through the back door. What is a home run in a relationship? (colloquial) Sexual intercourse. How do you find the base? Substitute the value of « r » into the equation for the area of a circle: area = πr^2. Note that π is the symbol for pi, which is approximately 3.14. For example, a circle with a radius of 3 cm would yield an equation like this: area = π3^2. Simply the equation to determine the area of the base. Which is the base? A base is a substance that can neutralize the acid by reacting with hydrogen ions. Most bases are minerals that react with acids to form water and salts. Bases include the oxides, hydroxides and carbonates of metals. What are base numbers? A number base (or base for short) of a numeral system tells us about the unique or different symbols and notations it uses to represent a value. For example, the number base 2 tells us that there are only two unique notations 0 and 1. The most common number base is decimal, also known as base 10. How do I know if Im a good kisser? 8 signs you’re a great kisser 1. You get rave reviews. People will let you know. … 2. You kiss often. Leave them wanting more. … 3. You kiss for a long time. … 4. You feel in sync with your kissing partner. … 5. You’re confident. … 6. You’re not afraid to use your hands. … 7. You practice good oral hygiene. … 8. You’ve mastered multiple types of kissing. What is the 8th base? A. The octal numeral system, or oct for short, is the base-8 number system, and uses the digits 0 to 7, that is to say 10[octal] represents eight and 100[octal] represents sixty-four. What is 10th base? It’s also known as the decimal system, as the numerical value of a number relies on where the decimal point sits. In base 10, each digit in a position of a number can have an integer value ranging from 0 to 9 (10 possibilities). This system uses 10 as its base number, so that is why it is called the base 10 system. How many points is a home run? Each statistic your players accumulate is worth a certain amount of points. For example, a single hit is worth one (1) point, a home run equals four (4) points and a pitching win gives you three (3) What does it mean to hit a home run with a girl? to do something that is very successful. What does hitting a homerun mean? Definition. A home run occurs when a batter hits a fair ball and scores on the play without being put out or without the benefit of an error. In almost every instance of a home run, a batter hits the ball in the air over the outfield fence in fair territory. What is a base area? It refers to the area of one of the bases of a solid figure. It can be used to determine the volume of solid figures. How do you change a base? How To Use Change of Base Formula? The change of base formula says logb b a = [logc c a] / [logc c b]. It means to change the base of a logarithm logb b a, we just use division [log a] / [log b] where these logarithms can have any (same) positive number as a base. What is base shape? Definition: The bottom of a shape, solid or three dimensional object. The base is what the object ‘rests’ on. Base is used in polygons, shapes and solids. The base is used as a reference side for other measurements, most often used in triangles. What are the types of bases? The word base has three different definitions in chemistry, and they are Arrhenius base, Bronsted base, and Lewis base. All the base definitions agree to the fact that bases react with acids. What do bases end with? There is no special system for naming bases. Since they all contain the OH^– anion, names of bases end in hydroxide. What is base give example? Examples of bases are the hydroxides of the alkali and alkaline earth metals (sodium, calcium, etc.) and the water solutions of ammonia or its organic derivatives (amines). Such substances produce hydroxide ions (OH^–) in water solutions (see Arrhenius theory). What are the 5 bases? Here are the generally agreed upon basics: • First Base: Getting to first base usually means kissing or making out. … • Second Base: Rounding second involves copping a feel. … • Third Base: Generally speaking, reaching third is all about hands in the pants. • Home Base: Hitting a homer refers to having sex. How do you add bases? Adding in another base You can add in another base (without converting to base 10) as long as you remember that you « carry » when you have a sum that is greater than or equal to your base (instead of greater than or equal to 10), and that what you « carry » is the number of times you can pull out the base from your sum. What do you call base 7? Septenary (Base 7) has 7 digits: 0, 1, 2, 3, 4 5 and 6. Laisser un commentaire What do you think? 17 Points Upvote Downvote
{"url":"https://speeddating.tn/is-there-a-6th-base-4/","timestamp":"2024-11-07T09:39:22Z","content_type":"text/html","content_length":"127436","record_id":"<urn:uuid:2c5892ac-cf36-4a4b-8bfc-c0b524cd2de4>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00177.warc.gz"}
Perlen-Colloquium: Dr. Frank Bretz (Novartis) 27 Apr 2023 Department of Mathematics and Computer Science, Spiegelgasse 5, Basel, 5th floor, seminar room 05.002 Perlen-Colloquium: Dr. Frank Bretz (Novartis) We consider the problem of testing multiple null hypotheses, where a decision to reject or retain must be made for each one and embedding incorrect decisions into a real-life context may inflict different losses. We argue that traditional methods controlling the Type I error rate may be too restrictive in this situation and that the standard familywise error rate may not be appropriate. Using a decision-theoretic approach, we define suitable loss functions for a given decision rule, where incorrect decisions can be treated unequally by assigning different loss values. Taking expectation with respect to the sampling distribution of the data allows us to control the familywise expected loss instead of the conventional familywise error rate. Different loss functions can be adopted, and we search for decision rules that satisfy certain optimality criteria within a broad class of decision rules for which the expected loss is bounded by a fixed threshold under any parameter configuration. We illustrate the methods with the problem of establishing efficacy of a new medicinal treatment in non-overlapping subgroups of patients. Export event as iCal
{"url":"https://dmi.unibas.ch/en/news-events/past-events/detail/perlen-colloquium-dr-frank-bretz-novartis/","timestamp":"2024-11-06T15:56:04Z","content_type":"text/html","content_length":"22531","record_id":"<urn:uuid:57d1a09a-9589-494b-8f6a-4e68c0c17cb7>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00136.warc.gz"}
A Gentle Introduction to Bayesian Deep Learning | by François Porcher | Jul, 2023A Gentle Introduction to Bayesian Deep Learning | by François Porcher | Jul, 2023 | QSOL IT Welcome to the exciting world of Probabilistic Programming! This article is a gentle introduction to the field, you only need a basic understanding of Deep Learning and Bayesian statistics. By the end of this article, you should have a basic understanding of the field, its applications, and how it differs from more traditional deep learning methods. If, like me, you have heard of Bayesian Deep Learning, and you guess it involves bayesian statistics, but you don’t know exactly how it is used, you are in the right place. One of the main limitation of Traditional deep learning is that even though they are very powerful tools, they don’t provide a measure of their uncertainty. Chat GPT can say false information with blatant confidence. Classifiers output probabilities that are often not calibrated. Uncertainty estimation is a crucial aspect of decision-making processes, especially in the areas such as healthcare, self-driving cars. We want a model to be able to be able to estimate when its very unsure about classifying a subject with a brain cancer, and in this case we require further diagnosis by a medical expert. Similarly we want autonomous cars to be able to slow down when it identifies a new environment. To illustrate how bad a neural network can estimates the risk, let’s look at a very simple Classifier Neural Network with a softmax layer in the end. The softmax has a very understandable name, it is a Soft Max function, meaning that it is a “smoother” version of a max function. The reason for that is that if we had picked a “hard” max function just taking the class with the highest probability, we would have a zero gradient to all the other classes. With a softmax, the probability of a class can be close to 1, but never exactly 1. And because the sum of probabilities of all classes is 1, there is still some gradient flowing to the other classes. Hard max vs Soft Max, Image by Author However, the softmax function also presents an issue. It outputs probabilities that are poorly calibrated. Small changes in the values before applying the softmax function are squashed by the exponential, causing minimal changes to the output probabilities. This often results in overconfidence, with the model giving high probabilities for certain classes even in the face of uncertainty, a characteristic inherent to the ‘max’ nature of the softmax Comparing a traditional Neural Network (NN) with a Bayesian Neural Network (BNN) can highlight the importance of uncertainty estimation. A BNN’s certainty is high when it encounters familiar distributions from training data, but as we move away from known distributions, the uncertainty increases, providing a more realistic estimation. Here is what an estimation of uncertainty can look like: Traditional NN vs Bayesian NN, Image by Author You can see that when we are close to the distribution we have observed during training, the model is very certain, but as we move farther from the known distribution, the uncertainty increases. There is one central Theorem to know in Bayesian statistics: The Bayes Theorem. Bayes Theorem, Image by Author • The prior is the distribution of theta we think is the most likely before any observation. For a coin toss for example we could assume that the probability of having a head is a gaussian around p = 0.5 • If we want to put as little inductive bias as possible, we could also say p is uniform between [0,1]. • The likelihood is given a parameter theta, how likely is that we got our observations X, Y • The marginal likelihood is the likelihood integrated over all theta possible. It is called “marginal” because we marginalized theta by averaging it over all probabilities. The key idea to understand in Bayesian Statistics is that you start from a prior, it’s your best guess of what the parameter could be (it is a distribution). And with the observations you make, you adjust your guess, and you obtain a posterior distribution. Note that the prior and posterior are not a punctual estimations of theta but a probability distribution. To illustrate this: Image by author On this image you can see that the prior is shifted to the right, but the likelihood rebalances our prior to the left, and the posterior is somewhere in between. Bayesian Deep Learning is an approach that marries two powerful mathematical theories: Bayesian statistics and Deep Learning. The essential distinction from traditional Deep Learning resides in the treatment of the model’s weights: In traditional Deep Learning, we train a model from scratch, we randomly initialize a set of weights, and train the model until it converges to a new set of parameters. We learn a single set of Conversely, Bayesian Deep Learning adopts a more dynamic approach. We begin with a prior belief about the weights, often assuming they follow a normal distribution. As we expose our model to data, we adjust this belief, thus updating the posterior distribution of the weights. In essence, we learn a probability distribution over the weights, instead of a single set. During inference, we average predictions from all models, weighting their contributions based on the posterior. This means, if a set of weights is highly probable, its corresponding prediction is given more weight. Let’s formalize all of that: Inference, Image from Author Inference in Bayesian Deep Learning integrates over all potential values of theta (weights) using the posterior distribution. We can also see that in Bayesian Statistics, integrals are everywhere. This is actually the principal limitation of the Bayesian framework. These integrals are often intractable (we don’t always know a primitive of the posterior). So we have to do very computationally expensive approximations. Advantage 1: Uncertainty estimation • Arguably the most prominent benefit of Bayesian Deep Learning is its capacity for uncertainty estimation. In many domains including healthcare, autonomous driving, language models, computer vision, and quantitative finance, the ability to quantify uncertainty is crucial for making informed decisions and managing risk. Advantage 2: Improved training efficiency • Closely tied to the concept of uncertainty estimation is improved training efficiency. Since Bayesian models are aware of their own uncertainty, they can prioritize learning from data points where the uncertainty — and hence, potential for learning — is highest. This approach, known as Active Learning, leads to impressively effective and efficient training. Demonstration of the effectiveness of Active Learning, Image from Author As demonstrated in the graph below, a Bayesian Neural Network using Active Learning achieves 98% accuracy with just 1,000 training images. In contrast, models that don’t exploit uncertainty estimation tend to learn at a slower pace. Advantage 3: Inductive Bias Another advantage of Bayesian Deep Learning is the effective use of inductive bias through priors. The priors allow us to encode our initial beliefs or assumptions about the model parameters, which can be particularly useful in scenarios where domain knowledge exists. Consider generative AI, where the idea is to create new data (like medical images) that resemble the training data. For example, if you’re generating brain images, and you already know the general layout of a brain — white matter inside, grey matter outside — this knowledge can be included in your prior. This means you can assign a higher probability to the presence of white matter in the center of the image, and grey matter towards the sides. In essence, Bayesian Deep Learning not only empowers models to learn from data but also enables them to start learning from a point of knowledge, rather than starting from scratch. This makes it a potent tool for a wide range of applications. White Matter and Gray Matter, Image by Author It seems that Bayesian Deep Learning is incredible! So why is it that this field is so underrated? Indeed we often talk about Generative AI, Chat GPT, SAM, or more traditional neural networks, but we almost never hear about Bayesian Deep Learning, why is that? Limitation 1: Bayesian Deep Learning is slooooow The key to understand Bayesian Deep Learning is that we “average” the predictions of the model, and whenever there is an average, there is an integral over the set of parameters. But computing an integral is often intractable, this means that there is no closed or explicit form that makes the computation of this integral quick. So we can’t compute it directly, we have to approximate the integral by sampling some points, and this makes the inference very slow. Imagine that for each data point x we have to average out the prediction of 10,000 models, and that each prediction can take 1s to run, we end up with a model that is not scalable with a large amount of data. In most of the business cases, we need fast and scalable inference, this is why Bayesian Deep Learning is not so popular. Limitation 2: Approximation Errors In Bayesian Deep Learning, it’s often necessary to use approximate methods, such as Variational Inference, to compute the posterior distribution of weights. These approximations can lead to errors in the final model. The quality of the approximation depends on the choice of the variational family and the divergence measure, which can be challenging to choose and tune properly. Limitation 3: Increased Model Complexity and Interpretability While Bayesian methods offer improved measures of uncertainty, this comes at the cost of increased model complexity. BNNs can be difficult to interpret because instead of a single set of weights, we now have a distribution over possible weights. This complexity might lead to challenges in explaining the model’s decisions, especially in fields where interpretability is key. There is a growing interest for XAI (Explainable AI), and Traditional Deep Neural Networks are already challenging to interpret because it is difficult to make sense of the weights, Bayesian Deep Learning is even more challenging. Whether you have feedback, ideas to share, wanna work with me, or simply want to say hello, please fill out the form below, and let’s start a conversation. Say Hello 🌿 Don’t hesitate to leave a clap or follow me for more! 1. Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521(7553), 452–459. Link 2. Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015). Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424. Link 3. Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning (pp. 1050–1059). Link 4. Louizos, C., Welling, M., & Kingma, D. P. (2017). Learning sparse neural networks through L0 regularization. arXiv preprint arXiv:1712.01312. Link 5. Neal, R. M. (2012). Bayesian learning for neural networks (Vol. 118). Springer Science & Business Media. Link
{"url":"https://www.qsolit.com/a-gentle-introduction-to-bayesian-deep-learning-by-francois-porcher-jul-2023/","timestamp":"2024-11-14T21:23:28Z","content_type":"text/html","content_length":"405950","record_id":"<urn:uuid:546d04ab-7e07-4258-811c-f6836e1bbd71>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00515.warc.gz"}
Exactness Of Decimal Representations | Solved Examples | Numbers- Cuemath A day full of math games & activities. Find one near you. A day full of math games & activities. Find one near you. Exactness of Decimal Representations Exactness of Decimal Representations Types of Decimal Representations: Any rational number can have two types of decimal representations (expansions): • Terminating • Non-terminating but repeating Any irrational number can have only one type of decimal representation (expansion): • Non-terminating and Non-repeating Exactness of decimal representations of Irrational Numbers: Let's have a look at the decimal representation of the following irrational numbers: \[\sqrt 2 = 1.4142135623 \ldots \] \[\pi = 3.1415926535 \ldots \] Clearly, these are the non-terminating and non-repeating decimal expansion that has an infinite number of digits and there is no pattern as well. What does this really mean? Does this mean that we do not know the exact value of irrational numbers as we do for rational numbers? No, that’s not correct! When we talk about an irrational number, say, \(\sqrt 2 \) or \(\pi \), we do know the exact number (or quantity) we are talking about. For example, we can exactly construct a length of \(\sqrt 2 \) units, given a length of 1 unit. We know that the ratio of the circumference to the diameter in any circle is exactly equal to \(\pi \). Well, you should think of this as follows: the decimal representation of a number is just one way to represent a number. In the case of irrational numbers, it so happens that their decimal representations are non-terminating and non-repeating. But this does not mean that irrational numbers are not exact in any sense. It only means that when we try to write an irrational number in decimal form, the sequence of digits after the decimal point never ends. ✍Note: Even though the decimal representation of irrational numbers are inexact but we can exactly construct any irrational numbers geometrically. How to represent irrational numbers exactly? Let’s think of \(\sqrt 2 \) for a moment. If we try to write \(\sqrt 2 \) in decimal form (say, to 5 decimal digits), we have \(\sqrt 2 = 1.41421 \ldots \). Another person might say: Let me specify \ (\sqrt 2 \) more accurately, say up to 10 decimal digits. He would write \[\sqrt 2 = 1.4142135623\ldots \] Still another person might decide to write \(\sqrt 2 \) even more accurately, to a thousand decimal digits, for example. The more digits we include in the representation, the closer our number is to the actual value of \(\sqrt 2 \). But what is the actual value? Do we even know it? Can we represent it somehow (by doing something other than just writing \(\sqrt 2 \))? Yes, we can – geometrically! If we construct a right-angled triangle with the two sides each of length 1 unit, the hypotenuse is exactly \(\sqrt 2 \) units: Thus, we see that even though the decimal representation might be less than exact (no matter how many digits you take in your decimal representation), the geometrical representation is exact. To summarize the whole discussion: • The decimal representation of a rational number is exact (if you don’t leave out any digits). This is because it will either be terminating or non-terminating but repeating. • The decimal representation of an irrational number will always be inexact because it will be non-terminating and non-repeating. ✍Note: What is inexact is the decimal representation of irrational numbers, not the number itself. An irrational number can be represented exactly using something other than the decimal representation – for example, geometrically. Solved Example: Example 1: What is \(\pi \)? Solution: \(\pi \) is an irrational number and the ratio of the circumference to the diameter in any circle. The approximate value of \(\pi \) is \(\pi \approx \frac{{22}}{7}\). This is approximate, not exact. Since \(\pi \) is not a rational number, it cannot be exactly specified in \(\frac{p}{q}\) form. In decimal form, we can write approximately as \(\pi \approx 3.14159\). The value \(\frac{22}{7}\) is only a rational approximation for \(\pi \), which is actually an irrational number. In fact, a better rational approximation of \(\pi \) is \(\frac{355}{113}\). Challenge: Can you construct exact \(\sqrt 3 \) cm geometrically? ⚡Tip: We can construct \(\sqrt 2 \). Also, by pythagoras theorem, \({\left( {\sqrt 3 } \right)^2} = {\left( {\sqrt 2 } \right)^2} + {\left( 1 \right)^2}\). Download SOLVED Practice Questions of Exactness of Decimal Representations for FREE Learn from the best math teachers and top your exams ● Live one on one classroom and doubt clearing ● Practice worksheets in and after class for conceptual clarity ● Personalized curriculum to keep up with school
{"url":"https://www.cuemath.com/numbers/exactness-of-decimal-representations/","timestamp":"2024-11-14T10:40:32Z","content_type":"text/html","content_length":"223395","record_id":"<urn:uuid:cf8444a6-a167-46ad-baca-4f186d312483>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00783.warc.gz"}
Poseidon | Panther Protocol Documentation POSEIDON is a cryptographic hash function designed to be efficient when expressed as a circuit over a large prime field $\mathbb{F}$. It was introduced by Dmitry Khovratovich, Alexei Ivanov, and Dmitry Meshkov in 2019. Its use is advantageous in the context of SNARKs as many other hash functions (such as SHA-256) that are widely used in other contexts do not have efficient circuit representations. Panther uses this hash function as it fits the requirements of our platform. Poseidon is a Snark-friendly cryptographic hashing algorithm based on a sponge function with the $POSEIDON^\pi$ permutation. Poseidon, as a function per se, maps strings over $\mathbb{F}_p$ (for a prime $p \approx 2^n > 2^{31}$) to fixed-length strings over $\mathbb{F}_p$, i.e. POSEIDON: $\mathbb{F}_p^* \longrightarrow \mathbb{F}p^o$ where $o$ is the output length measured in $\mathbb{F}_p$ elements (usually, $o = 1$). Poseidon takes a string of words in $\mathbb{F}_p$ as its input, and gives a single representative element of $\mathbb{F}_p$ as output (although longer outputs are The main features of the Poseidon hash include: 1. Efficiency: The Poseidon hash is designed to be highly efficient, enabling fast computation of hash values. It achieves this efficiency using a round-based permutation structure that can be parallelized and optimized for hardware implementations. 2. Security: The Poseidon hash is built upon the cryptographic sponge construction, which provides resistance against cryptographic attacks such as preimage attacks, second preimage attacks, and collision attacks. It employs a combination of algebraic and bitwise operations to ensure the security of the hash function. 3. Resistance to certain cryptographic attacks: The Poseidon hash is specifically designed to resist certain types of attacks that exploit algebraic properties of hash functions, such as differential and linear attacks. The round-based structure and carefully chosen operations make it resistant to these attacks. 4. Customizable parameters: Poseidon allows for the customization of its parameters, such as the number of rounds and field size, to adapt to specific security requirements and performance constraints. This flexibility enables the fine-tuning of the hash function for different applications. 5. Application versatility: Poseidon is suitable for a wide range of cryptographic applications, including digital signatures, Zero-Knowledge proofs, and blockchain systems. It provides a robust and efficient hashing primitive that can be utilized in various cryptographic protocols. However, let's keep in mind that the Poseidon hash is designed for efficient and secure computation, especially in the context of Zero-Knowledge applications aiming to minimize proof generation time, proof size, and verification time (when it is not constant). The primary application of Poseidon is hashing in large prime fields, and so $POSEIDON^\pi$ takes inputs of $t\ge 2$ words in $\mathbb{F}_p$. For curves such as BLS12-381 of BN254, the prime (scalar) fields have a size of around $2^{255}$. Consequently, a security level of 128 bits (that is, Poseidon-128) corresponds to a capacity of 255 bits, which is one field element. It's important to note that the Poseidon hash is just one among many cryptographic hash functions available. While it is true that Poseidon is better than the Pedersen Hash and Rescue for several use cases, the choice of hash function depends on the specific requirements and security considerations of the application at hand. Documentation on the Poseidon hash function is currently under construction, and we are actively working on providing users with a comprehensive overview of this technology.
{"url":"https://docs.pantherprotocol.io/docs/learn/cryptographic-primitives/poseidon","timestamp":"2024-11-13T21:53:55Z","content_type":"text/html","content_length":"219231","record_id":"<urn:uuid:32bcfafa-ccb6-42a1-a880-02882cb4c35a>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00351.warc.gz"}
A Study on Parallel Computation Based on 3D Forward Algorithm of Gravity A Study on Parallel Computation Based on 3D Forward Algorithm of Gravity () 1. Introduction Interpretation of Gravity-megnetic data is based on data processing and inversion. Forward modeling is important and difficult in the interpretation of geophysical data [1] [2] [3] , so we spend much time to do research on the problem and get much achievement. With the development of demand for geophysical data interpretation, 2D gravity-megnetic forward modeling can’t meet the requirement of identifying the geological structure, so we need to develop the gravity 3D forward modeling [4] [5] [6] . 3D forward modeling is very difficult and the increased computation task makes it more difficult. Many researchers do research on the problem and raise treatment method. Changli Yao adopts the grid separation and storage technology, the method decreases the computation tasks and storage [7] [8] , so it gets better effect. In the method we need to supplement many observation points. Zhaoxi Chen adopts the GPU parallel technology to accelerate the processing speed and it gets good effects [9] [10] . The paper adopts the MPI parallel technology to accelerate the processing speed of 3D forward modeling. It gets good effects. 2. Gravity 3D Forward Modeling We divide the source field underground into many cuboids (Figure 1). Forward modeling is the basic work of inversion [11] [12] . By means of traditional method that deals with the forward data one by one, the computation tasks will be huge and it will take a long time. The 3D density model forward equation illustrates the forward status. In the Figure 1, according the known equation we can get the gravity anomaly of the P(x,y,z) which is the jth cell. $\begin{array}{l}\Delta {g}_{j}\left(x,y,z\right)\\ =G{\sigma }_{j}\underset{l=1}{\overset{2}{\sum }}\underset{m=1}{\overset{2}{\sum }}\underset{n=1}{\overset{2}{\sum }}{\left(-1\right)}^{l+m+n}\end right){\mathrm{tan}}^{-1}\frac{\left({z}_{n}-z\right){R}_{lmn}}{\left({x}_{l}-x\right)\left({y}_{m}-y\right)}\right\}\end{array}$ (1) G is gravitational constant, σ[j] is the density of the jth cell. ${R}_{lmn}=\sqrt{{\left({x}_{l}-x\right)}^{2}+{\left({y}_{m}-y\right)}^{2}+{\left({z}_{n}-z\right)}^{2}}$ (2) By analyzing the equation, we know that in order to compute the gravity anomaly that the jth cell produces at the point P(x,y,z), we need to do so many computations: 16 times log, 8 times inverse trigonometric functions, 64 times multiplications, 8 times divisions, 8 times evolutions, 120 times additions or subtractions. The computation task is so huge for a cell at a point. If the number of the model cell is big, the total computation task will be very huge. The total computation is very huge, so the huge computation is a bottleneck problem. 3. MPI Parallel Algorithm 3.1. Introduction of MPI MPI realize the data broadcasting, data sending, data receiving and data synchronization [13] . MPI support several data type, including complex. Although Figure 1. (a) rectangular model, (b) a cell of the model. the MPI program is code, the program is different and it can identified by process ID. Different process has different computation task. In the MPI program, every process has all the variates and functions. The variates and functions have the same name, but the data in the variate is always not the same. If the process needs to know the data of other process, it needs to communicate. Every process can allocate different memory for a pointer. When the task is different for a process in the 3D gravity forward algorithm, it needs to allocate memory more flexibly [14] [15] . 3.2. The MPI Parallel Algorithm Based on 3D Gravity Forward The gravity forward modeling is that the sum of the anomaly that every cuboid in the model produces. The serial program needs loop computation for 5 times. For the observing point, we need to calculate the point of x direction and y direction. For every point, we need calculate the sum of anomaly that the cell produces for x direction, y direction and z direction. The pseudo-code is For(i=1;i < nx_obs;i++) //observing x direction, nx_obs points For(j=1;i< ny_obs;i++) //observing x direction, ny_obs points For(l=1;i < nz_model;i++) //cells along the z direction,the number of cells along z direction is nz_model For(m=1;i < nx_model;i++) //cells along the x direction,the number of cells along x direction is nz_model For(n=1;i < ny_model;i++) //cells along the y direction,the number of cells along y direction is nz_model forwarddata(I,j) =forwarddata (I,j)+forward(I,j,l,m,n) We analyze the 3d gravity forward algorithm and find that the algorithm needs loop computation for 5 times. The loop is the main work for parallel computation. We can divide the nx_obs into several parts for processes. The computation task that every process in parallel algorithm needs to do is less than that in the serial algorithm. Firstly, we need to divide the computation task into several count = nx_obs / size; // observing along x direction, nx_obs points. Size is the number of processes segment = nx_obs %size; //when tasks is allocated for every process equally, segment is the number of remainder for (i = 0; i < size; i++) if (i < segment) parr[i] = count + 1; parr[i] = count; Parr is an array. The array records the computation task that every process needs to do. For example, nx_obs is 101, size is 4. We known parr[0] = 26, parr[1] = 25, parr[2] = 25, parr[3] = 25. The processes need to calculate 1 - 26, 27 - 51, 52 - 76, 77 - 101 row data, so every process do smaller task in the MPI algorithm than that in the serial algorithm. The method uses MPI_Gatherv function to gather result. In the parallel algorithm, the program allocates the computation tasks, and every process only has his data result. If we don’t gather the data result, the process can write a file for the result data. For example the algorithm has 4 processes, we can get 4 files of data. If we want to get the final data result, we need merge 4 files manually. It is very difficult. MPI_Gatherv (forwarddata, count, MPI_FLOAT, forwarddata_all, rcounts, displs, MPI_FLOAT, 0, MPI_COMM_WORLD). The function MPI_Gatherv gathers the forwarddata to forwarddata_all and gathers the data from different process to one process. The length of the data can not the same. It is very useful. Figure 2 is gravity 3D forward MPI algorithm flow chart. The program allocates the computation tasks to every process, and process 0 is the manager process. It reads the parameter file, and allocates the computation tasks to every process. It broadcasts the parameter to processes, and gathers the data result from processes. Finally, it writes the result file. Other processes receive the parameter, and do the computation tasks. They send the data result to the manager process. At the same time, manager process also do computation task. We will introduce the performing process. (1) Initial the MPI environment, MPI_INT(). The manager process reads the model data and broadcasts the data to other process, MPI_Bcast(). (2) The manager process allocates the computation tasks according to the task Figure 2. Gravity 3D forward MPI parallel flow chart. allocate Table 1. Processes do the tasks separately and get the forward data for every observed point. (3) After all the processes have done the tasks, the manager process gathers the data. (4) The manager process writes the data file and finalize the MPI environment, MPI_FINALIZE(). The MPI parallel environment OS: windows 7 CPU: intel core i7 3.2 Ghz support 8 processes Memory: 8 GB Develop language: c Compiler:visual studo 2010 Parallel environment: mpich2 Command: Mpiexec -np N ./gstd_forward N is the number of processes. 4. Result and Discussions The grid of the model is 101 × 101 × 30. The model is 101 columns, 101 rows and 30 layers. The distance along x direction between 2 neighbor points is 10 m, the distance along y direction between 2 neighbor points is 10 m, the distance along z direction between 2 neighbor points is 2 m. The observed points are 101 × 101 on the ground. 4.1. Validate of the Forward Result The parallel computation is based on the serial program. According to the characteristics, that process does computation task separately. It distributes the tasks to all processes and never changes other algorithm, so the parallel computation result is the same with the serial program’s result. The study compares the forward result of two programs, the result is the same. We choose a line to draw a picture. In Figure 3, x axis is position of the observed point, y axis is the Δg forward data, dot is the MPI result and plus is the serial algorithm result. We can see that the forward data in 6th row is the same, so it proves the validity of the program. 4.2. Discussions of Parallel Efficiency Analysis of the Table 2 shows that the efficiency is the highest when the number of processes is 2. The speedup changes slightly and the efficiency declines when Figure 3. Forward result figure in 6^th row. Table 2. 3D gravity forward parallel algorithm time used statistics. the number of processes changes from 4 to 8. The communication of the processes occupy much time when the number of processes increases from 4 to 8. We can see that the effect of parallel algorithm is very obvious. When the number of processes is 8, we can save 52 minutes compared to sequential algorithm. 5. Conclusion The computation of 3D gravity forward for relatively big grid spends much time and the forward algorithm is called for dozens of times, so the key to the problems is parallel computation. The study realizes the parallel algorithm for 3D gravity forward in the MPI environment. The algorithm is proved correct and efficient. The study lays the foundation of the parallel computation for 3D gravity The paper is supported by China university of Geosciences (Beijing) basic research funding (2-9-2017-096).
{"url":"https://www.scirp.org/journal/PaperInformation?paperID=79102","timestamp":"2024-11-13T17:36:49Z","content_type":"application/xhtml+xml","content_length":"107113","record_id":"<urn:uuid:00d887b9-1462-4e9b-8780-5bcbe4ad799c>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00681.warc.gz"}
Waldmeister is a theorem prover for first order unit equational logic. It is based on unfailing Knuth-Bendix completion employed as proof procedure. Waldmeister's main advantage is that efficiency has been reached in terms of time as well as of space. In outline, the task Waldmeister deals with is the following: A theory is formulated as a set E f implicitly universally quantified equationsover a many-sorted signature. It shall be demonstrated that a given equation s=t is valid in this equational theory, i.e. that it holds in all models of E. Equivalently, s is deducible from t by applications of the axioms of E, substiyuting equals for equals. - In 1970, Knuth and Bendix presented a completion algorithm, which later was extended to unfailing completion, as described e.g. by Bachmair et al. Parameterized with a reduction ordering, the unfailing varinat transforms E into a ground convergent set of rewrite rules. For theoretical reasons, this set is not necessarily finite, but if so, the word problem of E is solved by testing for syntactical identity after normalization. In both cases, however, if s=t holds, then a proff is always found in finite time. This justifies the use of unfailing completion as semi-complete proof procedure for equational logic. Accordingly, when searching for a proof, Waldmeister saturates the given axiomatization until the goals can be shown by narrowing or rewriting. The saturation is performed in a cycle working on a set of waiting facts (critical pairs) and a set of selected facts (rules). Inside the completion loop, the following steps are performed: 1. Select an equation from the set of critical pairs. 2. Simplify this equation to a normal form. Discard if trivial, otherwise orient if possible. 3. Modify the set of rules according to the equation. 4. Generate all new critical pairs. 5. Add the equation to the set of rules. The selection is controlled by a top-level heuristic maintaining a priority queue on the critical pairs. This top-level heuristic is one of the two most important control parameters. The other one is the reduction ordering to orient rules with. There is some evidence that the latter is of even stronger influence. Unfailing Knuth Bendix completion (excerpt from the <a href="https://www.mpi-inf.mpg.de/departments/automation-of-logic/software/waldmeister/references/">SEKI Report</a>) What is the typical task of a deduction system? A fixed logic has been operationalized towards a calculus the application of which is steered by some control strategy. Before we will describe the implementation of Waldmeister, we will state the starting point of the whole process, i.e. the inference system for unfailing Knuth-Bendix completion. Unfailing completion In a paper that was published 1970, Knuth and Bendix introduced the completion algorithm which tries to derive a set of convergent rules from a given set of equations. With the extension to unfailing completion in a later paper by Bachmair, Dershowitz and Plaisted, it has turned out to be a valuable means of proving theorems in equational theories. The following nine inference rules form a variant of unfailing completion which is suitable for equational reasoning. An informal illustration of the inference system is given here. The boxes represent sets in the mathematical sense and contain pairs of terms: namely rules, equations, goals, and critical pairs. Initially, E includes the axioms and G the hypotheses. The set CP(R,E) is a function of the current sets R and E and holds the essence of all the local divergencies. These arise whenever on the same term two different rewrite steps are applicable. An informal illustration of the inference system is given here. The boxes represent sets in the mathematical sense and contain pairs of terms: namely rules, equations, goals, and critical pairs. Initially, E includes the axioms and G the hypotheses. The set CP(R,E) is a function of the current sets R and E and holds the essence of all the local divergencies. These arise whenever on the same term two different rewrite steps are applicable. The arrows denote single inference steps, and each shifts a single termpair from one set to another. Along the simplify arrows, the terms alter according to the single-step rewrite relation induced by R and E. An equation with identical sides may be deleted. The same applies to an equation that is subsumed by a more general one. As soon as one side is greater than the other with respect to a fixed reduction ordering, the equation may be oriented into a rule. If the left-hand side of a rule is simplified, this rule becomes an equation again. This does not happen when reducing the right-hand side. A new equation is generated by selecting a termpair from CP(R,E). As soon as left-hand side and right-hand side of a goal are identical, the corresponding hypothesis is successfully proved. Instantiating this inference system leads to proof procedures. To achieve semi-completeness, a control constraint - the so-called fairness - has to be considered: From every critical pair the parents of which are persistent an equation must be generated some time. In the making of Waldmeister, we have applied an engineering approach, identifying the critical points leading to efficiency. We have started from a three-level system model consisting of inference step execution, aggregation into an inference machine, and topping search strategy. See our SEKI-Report for a detailed description of the entire design process. The main completion loop is parameterized by a selection heuristic and a reduction ordering. Both are derived from the algebraic structure described by the axioms. The derivation process has been lifted such that control knowledge is expressed declaratively, which eases the integration of human experience. For full flexibility, the search parameters may be re-instantiated during the proof search. For the mid-level, flexible adjustment has proven essential as experimental evaluation has shown. On the lowest level, we put great stress on efficiency in terms of time and space. To that purpose, we employ special indexing techniques and space-saving representations along with a specialized memory management. The main goal during the development of Waldmeister was overall efficiency. To achieve this goal, one has to follow the paradigm of an engineering approach: Analyzing typical saturation-based provers, we recognize a logical three-level structure. We will analyze each of these levels with respect to the critical points responsible for the prover's overall performance. Wherever necessary, the question of how to tackle these critical points will have to be answered by extensive experimentation and statistical evaluation. In this context, an experiment is a fair, quantitative comparison of different realizations of the same functionality within the same system. Hence, modularity has to be a major principle in structuring the system. Now, what does that mean in practice? Imagine a certain functionality needed for constructing say an unfailing completion procedure, e.g. normalization of terms. Out of the infinitely many different realizations, one could choose an arbitrary one and forget about the rest. However, there is no justification for restricting a system to a single normalization strategy, since different strategies are in demand for different domains. Hence, the system should include numerous normalization routines (allowing easy addition of other realizations) and leave the choice between them up to the user. As this applies to many choice points, an open system architecture produces a set of system parameters for fixing open choice points in one of the supported ways, and thus allows an easy experimental comparison of the different solutions. It is not before now that an optimal one can be named - if there is one: in some cases different domains require different solutions. In this section we will depict the afore-mentioned three-level model, and describe the systematic stepwise design process we followed during the development of Waldmeister. The seven steps we have followed can be generalized towards applicability to arbitrary (synthetic) deduction systems. The three-level model We aim at automated deduction systems including a certain amount of control coping with many standard problems - in contrast to interactive provers. The basis for our development is the afore-mentioned logical three-level model of completion-based provers. As every calculus that is given as an inference system, unfailing completion has many inherent indeterminisms. When realizing completion procedures in provers, this leads to a large family of deterministic, but parameterized algorithms. Two main parameters are typical for such provers: the reduction ordering as well as the search heuristic for guiding the proof. Choosing them for a given proof task forms the top level in our model. The mid-level is that of the inference machine, which aggregates the inference rules of the proof calculus into the main completion loop. This loop is deterministic for any fixed choice of reduction ordering and selection heuristic. Since there is a large amount of freedom, many experiments are necessary to assess the differing aggregations before a generally useful one can be found. The lowest level provides efficient algorithms and sophisticated data structures for the execution of the most frequently used basic operations, e.g. matching, unification, storage and retrieval of facts. These operations consume most of the time and space. Trying to solve a given problem by application of an inference system, one has to proceed as follows: while the problem is not yet solved do 1. select an inference rule R 2. select a tuple of facts 3. apply R to the chosen tuple of facts Let us have a look at the inference system for unfailing completion: Orienting an equation into a rule is instantiated with an equation and the given reduction ordering that is fixed during the proof run. The four simplifying inference rules each have to be instantiated with the object, namely the equation, rule or goal they are applied to, the position in the term to be simplified and the rule or equation used for simplification. The decision which equation should be deleted from the set E due to triviality or due to subsumption obviously is of minor importance. Nevertheless, as subsumption testing takes time, it should be possible to turn it off if the proof process does not benefit from it. Finally, there is the generation of new equations which depends on the "parent'' rules / equations and a term position, restricted by the existence of a unifier. Since selection can be seen as the search for the "best'' element in SUE, it is of the same complexity as a minimum-search. Taking a look at the changes that appear in SUE between two successive applications of select reveals that exactly one element has been removed whereas a few elements might have been added. As every selection strategy induces an ordering on this set, we can realize SUE as a priority queue. Storing all the unselected but preprocessed equations in such a manner induces serious space problems. Hence, we follow two approaches to keep this set small. On the one hand, the total number of generated equations should be minimized. As only critical pairs built with rules and equations from R and E[selected] will be added to SUE, we must guarantee that R and E[selected] holds rules and equations which are minimal in the following sense: A rule or equation is minimal if each of the following conditions holds: • both sides cannot be simplified by any other minimal rule or equation, • it is not trivial and cannot be subsumed, • it has not been considered superfluous by subconnectedness criteria, • it has been assessed valuable and thus once been {\em selected}, and • it has been oriented into a rule if possible. A newly selected equation will first be normalized (applying simplify), then deleted or subsumed if possible. Then the new member of R and E[selected] is used to modify the facts already in R and E[ selected] (interreduction), or the critical pairs in SUE. Afterwards, it will be added to R if it could be oriented, otherwise it becomes a member of E[selected]. Former rules or equations that have been removed during interreduction will be added to SUE again. Finally, all the critical pairs that can be built with the new rule or equation are generated, preprocessed, and added to SUE which closes the cycle. From time to time, one must take a look at the goals and check whether they can already be joined. Furthermore, reprocessing the elements of SUE should be taken into consideration (intermediate reprocessing, IRP). We have instantiated the general three-level model according to this sketch of Define a set of system parameters determined by users Having designed a sketch of algorithm, we now have to deal with those choice points which could not definitely be fixed by experience and thus - in the framework of an open system architecture as induced by the engineering approach - shall be left open to the user. More precisely, some of the system's functionality will have to be realized in different ways: A certain task will be fulfilled by several components; and the user (or the control strategy) must be able to select one of them at run time. This leads to a corresponding set of system parameters. As soon as the deduction system is completed, statistical evaluations of different parameter settings should result in a kind of default setting suitable for most problem classes. In the context of unfailing completion, we found three groups of remaining choice points. High-level control parameters: the reduction ordering yielding oriented equality, and the select function guiding the selection of equations from SUE. Treatment of unselected equations: the organization of intermediate reprocessing which includes simplification and/or re-classification of the yet unselected equations, the criteria that shall be applied immediately after new equations have been generated (e.g. subconnectedness criteria), and the amount of simplification preceeding the classification of newly generated equations (preprocessing). Normalization of terms: the term traversal strategy (e.g. leftmost-outermost, leftmost-innermost), the way of combining R and E[selected]which may differ for the various situations normalization is applied, the priority of rules when simplifying with respect to R and Eselected, the question which of the possibly more than one applicable rules and / or equations shall be applied, and the backtracking behaviour after the execution of a single rewrite-step. Although a few of these choice points might appear quite unimportant, all of them have proven valuable during our experimental analysis, or at least of significant influence on the prover's So Waldmeister allows the user to make a choice between diffrent options for the mentioned choice points. For more on this, see the Waldmeister documentation that comes with the Waldmeister
{"url":"https://www.mpi-inf.mpg.de/departments/automation-of-logic/software/waldmeister/implementation","timestamp":"2024-11-11T01:52:16Z","content_type":"text/html","content_length":"122171","record_id":"<urn:uuid:a7427421-ad5d-4dd5-84cb-25ddf0ca7b10>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00076.warc.gz"}
Ch. 4 Challenge Problems - University Physics Volume 2 | OpenStax Challenge Problems (a) An infinitesimal amount of heat is added reversibly to a system. By combining the first and second laws, show that $dU=TdS−dWdU=TdS−dW$. (b) When heat is added to an ideal gas, its temperature and volume change from $T1andV1toT2andV2T1andV1toT2andV2$. Show that the entropy change of n moles of the gas is given by Using the result of the preceding problem, show that for an ideal gas undergoing an adiabatic process, $TVγ−1TVγ−1$ is constant. With the help of the two preceding problems, show that $ΔSΔS$ between states 1 and 2 of n moles an ideal gas is given by A cylinder contains 500 g of helium at 120 atm and $20°C20°C$. The valve is leaky, and all the gas slowly escapes isothermally into the atmosphere. Use the results of the preceding problem to determine the resulting change in entropy of the universe. A diatomic ideal gas is brought from an initial equilibrium state at $p1=0.50atmp1=0.50atm$ and $T1=300KT1=300K$ to a final stage with $p2=0.20atmp2=0.20atm$ and $T1=500K.T1=500K.$ Use the results of the previous problem to determine the entropy change per mole of the gas. The gasoline internal combustion engine operates in a cycle consisting of six parts. Four of these parts involve, among other things, friction, heat exchange through finite temperature differences, and accelerations of the piston; it is irreversible. Nevertheless, it is represented by the ideal reversible Otto cycle, which is illustrated below. The working substance of the cycle is assumed to be air. The six steps of the Otto cycle are as follows: i. Isobaric intake stroke (OA). A mixture of gasoline and air is drawn into the combustion chamber at atmospheric pressure $p0p0$ as the piston expands, increasing the volume of the cylinder from zero to $VAVA$. ii. Adiabatic compression stroke (AB). The temperature of the mixture rises as the piston compresses it adiabatically from a volume $VAtoVBVAtoVB$. iii. Ignition at constant volume (BC). The mixture is ignited by a spark. The combustion happens so fast that there is essentially no motion of the piston. During this process, the added heat $Q1Q1$ causes the pressure to increase from $pBtopCpBtopC$ at the constant volume $VB(=VC)VB(=VC)$. iv. Adiabatic expansion (CD). The heated mixture of gasoline and air expands against the piston, increasing the volume from $VCtoVDVCtoVD$. This is called the power stroke, as it is the part of the cycle that delivers most of the power to the crankshaft. v. Constant-volume exhaust (DA). When the exhaust valve opens, some of the combustion products escape. There is almost no movement of the piston during this part of the cycle, so the volume remains constant at $VA(=VD)VA(=VD)$. Most of the available energy is lost here, as represented by the heat exhaust $Q2Q2$. vi. Isobaric compression (AO). The exhaust valve remains open, and the compression from $VAVA$ to zero drives out the remaining combustion products. (a) Using (i) $e=W/Q1e=W/Q1$; (ii) $W=Q1−Q2W=Q1−Q2$; and (iii) $Q1=nCv(TC−TB)Q1=nCv(TC−TB)$, $Q2=nCv(TD−TA)Q2=nCv(TD−TA)$, show that (b) Use the fact that steps (ii) and (iv) are adiabatic to show that where $r=VA/VBr=VA/VB$. The quantity r is called the compression ratio of the engine. (c) In practice, r is kept less than around 7. For larger values, the gasoline-air mixture is compressed to temperatures so high that it explodes before the finely timed spark is delivered. This preignition causes engine knock and loss of power. Show that for $r=6r=6$ and $γ=1.4γ=1.4$ (the value for air), $e=0.51e=0.51$, or an efficiency of $51%.51%.$ Because of the many irreversible processes, an actual internal combustion engine has an efficiency much less than this ideal value. A typical efficiency for a tuned engine is about $25%to30%25%to30%$. An ideal diesel cycle is shown below. This cycle consists of five strokes. In this case, only air is drawn into the chamber during the intake stroke OA. The air is then compressed adiabatically from state A to state B, raising its temperature high enough so that when fuel is added during the power stroke BC, it ignites. After ignition ends at C, there is a further adiabatic power stroke CD. Finally, there is an exhaust at constant volume as the pressure drops from $pDpD$ to $pApA$, followed by a further exhaust when the piston compresses the chamber volume to zero. (a) Use $W=Q1−Q2W=Q1−Q2$, $Q1=nCp(TC−TB)Q1=nCp(TC−TB)$, and $Q2=nCv(TD−TA)Q2=nCv(TD−TA)$ to show that $e=WQ1=1−TD−TAγ(TC−TB)e=WQ1=1−TD−TAγ(TC−TB)$. (b) Use the fact that $A→BA→B$ and $C→DC→D$ are adiabatic to show that (c) Since there is no preignition (remember, the chamber does not contain any fuel during the compression), the compression ratio can be larger than that for a gasoline engine. Typically, $VA/VB= 15andVD/VC=5VA/VB=15andVD/VC=5$. For these values and $γ=1.4,γ=1.4,$ show that $ε=0.56ε=0.56$, or an efficiency of $56%56%$. Diesel engines actually operate at an efficiency of about $30%to35%30%to35%$ compared with $25%to30%25%to30%$ for gasoline engines. Consider an ideal gas Joule cycle, also called the Brayton cycle, shown below. Find the formula for efficiency of the engine using this cycle in terms of $P1P1$, $P2P2$, and $γγ$. Derive a formula for the coefficient of performance of a refrigerator using an ideal gas as a working substance operating in the cycle shown below in terms of the properties of the three states labeled 1, 2, and 3. Two moles of nitrogen gas, with $γ=7/5γ=7/5$ for ideal diatomic gases, occupies a volume of $10−2m310−2m3$ in an insulated cylinder at temperature 300 K. The gas is adiabatically and reversibly compressed to a volume of 5 L. The piston of the cylinder is locked in its place, and the insulation around the cylinder is removed. The heat-conducting cylinder is then placed in a 300-K bath. Heat from the compressed gas leaves the gas, and the temperature of the gas becomes 300 K again. The gas is then slowly expanded at the fixed temperature 300 K until the volume of the gas becomes $10−2m310−2m3$, thus making a complete cycle for the gas. For the entire cycle, calculate (a) the work done by the gas, (b) the heat into or out of the gas, (c) the change in the internal energy of the gas, and (d) the change in entropy of the gas. A Carnot refrigerator, working between $0°C0°C$ and $30°C30°C$ is used to cool a bucket of water containing $10−2m310−2m3$ of water at $30°C30°C$ to $5°C5°C$ in 2 hours. Find the total amount of work
{"url":"https://openstax.org/books/university-physics-volume-2/pages/4-challenge-problems","timestamp":"2024-11-03T22:18:27Z","content_type":"text/html","content_length":"451619","record_id":"<urn:uuid:2a5eeb65-e826-4bd0-bf45-03c42c01ad49>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00225.warc.gz"}
In the diagram shown, ACDF is a rectangle, and GBHE is a circle... | Filo In the diagram shown, is a rectangle, and is a circle. If inches, and inches, what is the number of square inches in the shaded area? Not the question you're searching for? + Ask your question Choice B is correct. The area of the shaded figure is equal to the difference between the areas of the rectangle and the circle. The area of the rectangle is defined by the formula , where and are the two adjacent sides of the rectangle. In this case, is equal to 4 inches inches, or 24 square inches. The area of the circle is defined by the formula , where is the radius. Since equals the diameter of the circle and is equal to 4 inches, then the radius must be 2 inches. Thus, the area of the circle is , or square inches. Subtracting, we obtain the area of the shaded portion: square Was this solution helpful? Found 6 tutors discussing this question Discuss this question LIVE for FREE 9 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice questions from Circles in the same exam Practice more questions from Circles View more Practice questions on similar concepts asked by Filo students View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes In the diagram shown, is a rectangle, and is a circle. If inches, and inches, what is the number of square inches in the shaded area? Question Text Topic Circles Subject Mathematics Class Grade 12 Answer Type Text solution:1 Upvotes 97
{"url":"https://askfilo.com/mathematics-question-answers/in-the-diagram-shown-a-c-d-f-is-a-rectangle-and-g-b-h-e-is-a-circle-if-c-d4","timestamp":"2024-11-05T00:10:06Z","content_type":"text/html","content_length":"345831","record_id":"<urn:uuid:f9dbd1c4-a56d-4cfb-8ea4-7a3818e8f0bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00046.warc.gz"}
Evolution and God Wed, 06/20/2018 - 13:33 (Reply to #91) #92 Well I disagree with all of the above but life is short so I’ll just focus in on one area. So for the fine tuning argument you acknowledge above that the probability of a fine tuner is not 0%. What figure would you put on it? Wed, 06/20/2018 - 15:09 (Reply to #92) #93 How about that infamous number the Absolutists use: 1 x 10^-508? Still virtually 0%. Thu, 06/21/2018 - 10:23 (Reply to #93) #94 "Well I disagree with all of the above but life is short so I’ll just focus in on one area. So for the fine tuning argument you acknowledge above that the probability of a fine tuner is not 0%. What figure would you put on it?" It's the same as the odds for anything no one can demonstrate any evidence for. What are the odds invisible unicorns exist? Thu, 06/21/2018 - 13:20 (Reply to #94) #95 Thu, 06/21/2018 - 13:44 (Reply to #95) #96 Dan - You always start at 50% probability for a two-valued unknown... balance fallacy Thu, 06/21/2018 - 13:49 (Reply to #96) #97 For example if you were to flip a coin, you would start at 50% heads. Same here Then you take evidence for and against, eg a coin weighted to heads 50% + 50% x 90% etc... Thu, 06/21/2018 - 18:24 (Reply to #97) #98 Coins are typically assigned 50/50 odds because of symmetry. What symmetry are you appealing to for your 50/50 set up? Thu, 06/21/2018 - 18:32 (Reply to #98) #99 Well if you start with no evidence either way then that is symmetric; each outcome is equally likely as far as you can tell. So it’s 50% / 50% always until you allow for any evidence you may have... Thu, 06/21/2018 - 18:54 (Reply to #99) #100 Again, that is known as the balance fallacy. We have very good reasons (symmetry, empirical results, etc) to assign a coin flip 50/50 odds. Thu, 06/21/2018 - 20:03 (Reply to #100) #101 If you have no evidence either way, 50% is statistically more accurate than 0% or 100% (assuming an even probability distribution which we have to as no evidence): 1. If you choose 0%, the value has to come to <25% for you to ‘win’ 2. if you choose 100%, the value has to come to >75% for you to ‘win’ 3. If you choose 50%, you ‘win’ for anything >25% and <75% So option 3 is twice as likely to be correct as option 2 or 1. Option 3, 50% the midpoint or average of a normal probability distribution is always the best bet. Fri, 06/22/2018 - 12:08 (Reply to #101) #102 An interesting point! If we were guessing at a 10-digit number from 0 to 1, and if we got 10 points for being within .25, 3 points for being withing .50, and 1 point for being within .75, then going for the middle would be an optimal strategy. Choosing 0, for example, would be a bad choice in that there is nothing in this setup to the left of 0. It's a smaller target as is 1. I'm having a problem seeing how this applies to evaluating a totally unknown situation. No evidence means that no choice can be better than another. Either you are right or you are wrong and being close collects no points because the situation can't be, say, 70% probable because it has already happened. Bets are only being taken on "yes" and "no" and accumulating evidence won't change what has actually happened. Therefore, there would be no point in starting in the middle since there is no middle. I would think that the best strategy would be to assign probabilities that each of the two outcomes would yield the incoming data. If you either had a loaded coin (75% chance of heads) or a normal coin (50% chance of heads) you could evaluate the probability of each predicting the incoming data. But if you have no model at all, I would think that you could do no more than tally the incoming data and use statistics to create a model. I don't see beginning with a model when no model is available is a useful step. Thu, 06/21/2018 - 13:46 (Reply to #102) #103 How can there be a 50% chance of a claim being true with zero evidence for it? I am dubious. The big bang doesn't demonstrate any evidence for a deity, neither does the assumption the universe is fine tuned, which is a claim that would itself have to be evidenced. What universes are you comparing this one to, in order to assert it is fine tuned? The prime mover argument isn't an argument for a deity, we already covered the objections to it and you dismissed them without any comment. I'm bored with you making up stats now. Can you demonstrate any objective evidence at all? Fri, 06/22/2018 - 12:30 (Reply to #103) #104 I'm thinking that Dan is starting with a model that assigns a 50% probability, a model to be modified as data comes in. However, as you noted, the initial model has no credibility. As the model evolves one way or another, as data comes in, degrees of credibility would be statistically attached. So, the initial 50% is merely a token figure, a convenient starting point from which the real probability emerges. If we have no data then, of course, we can say nothing. Thu, 06/21/2018 - 14:01 (Reply to #104) #105 " I’m not sure you are getting the math? You always start at 50% probability for a two-valued unknown," So you're claiming there is a 50% chance that invisible unicorns exist? Again, I am dubious. Try this... "The balance fallacy is an informal logical fallacy that occurs when two sides of an argument are assumed to have equal or comparable value regardless of their respective merits, which (in turn) can lead to the conclusion that the answer to a problem is always to be found between two extremes. The latter is effectively an inverse false dilemma, discarding the two extremes rather than the middle. While the rational position on a topic is often between two extremes, this cannot be assumed without actually considering the evidence. Sometimes the extreme position is actually the correct one, and sometimes the entire spectrum of belief is wrong, and truth exists in an orthogonal direction that hasn't yet been considered." Thu, 06/21/2018 - 14:11 (Reply to #105) #106 Chance of invisible unicorns: Start at 50% chance Evidence against: Invisibility in nature not seen 10% 50% x 10% = 5% Evidence against: no mythical creatures yet discovered 10% 5% x 10% = 0.5% chance of invisible unicorns You can obviously use the same method to put together a case for why there is not a creator... Thu, 06/21/2018 - 14:21 (Reply to #106) #107 "Chance of invisible unicorns: Start at 50% chance" So non existent things have a 50% chance of being real? "Invisibility in nature not seen " Now that's fucking priceless fair play to you. This has to be a fucking windup? Between you and someone I think I'm going to take a break for a while. At least until some halfway cogent apologetics are posted. Thu, 06/21/2018 - 14:24 (Reply to #107) #108 You are just being rude because you either don’t get or refuse to get the math. Thu, 06/21/2018 - 14:40 (Reply to #108) #109 Although I may not be an expert on the Laws of Probability, I do know enough to know that your calculations are completely wrong. For instance, it is not 50% + 50% x 60% = 80% When you say there is a 0.5 chance of something happening and then add another factor that is 0.6 chance, the equation is NOT: 0.5 + (0.5 x 0.6) = 0.8. The actual equation IS: 0.5 x 0.6 = 0.3. Or if you want it in %ages: 50% x 60% = 30%. And I just asked a friend of mine who is a Professor who teaches Advanced mathematics at UNM. Thu, 06/21/2018 - 15:02 (Reply to #109) #110 No you have the math wrong. Evidence against you use multiplication, evidence for you use addition. It helps if you think about the probability space as a box. Let’s start with the proposition ‘the dog is nice’. Let’s assume you know nothing about this or any dog then the chance of the dog being nice is 50%. So imagine the probability space cut 50% / 50% dog is nice / dog is nasty. Now we can add a peice of evidence FOR the proposition. The owner says the dog is nice and we trust him 75%. So we already know that 50% of dogs are nice what about the 50% of dogs unknown? Well we can multiply that 50% by 75% and add it to the 50% we already had for dog is nice: 50% + 50% x 75% = 87.5%. Think of the original 50/50 probability space growing to 87.5/12.5 dog is nice / dog is So above is how you compute ‘evidence FOR’. ‘Evidence AGAINST’ is a different calculation: Starting with dog is nice 50% Now add a piece of evidence AGAINST: ‘the dog bit me’. 90% chance dog is nasty so that’s a 10% chance the dog is nice. So we take 50% x 10% = 5% chance dog is nice. NOTICE THE CACULATION IS DIFFERENT Hope this makes sense... Thu, 06/21/2018 - 18:26 (Reply to #110) #111 Dan - Let’s assume you know nothing about this or any dog then the chance of the dog being nice is 50%...So we already know that 50% of dogs are nice... You are chasing your own tail. With that logic you can get any result you want. You are basically doing numerology. Thu, 06/21/2018 - 20:05 (Reply to #111) #112 No you have it wrong. The form of probability you are proposing is multiplicative. The more factors you add for something occurring, makes the single occurrence YOU desire less likely to occur. In other words, the factors are multiplicative, NOT additive. As said, I asked a professor of math at UNM. Thu, 06/21/2018 - 20:12 (Reply to #112) #113 No you have it wrong. It additive when the factors are reenforcing but multiplicative when the factors are undermining. For example: - murderer had blood on his cloths. Makes him 70% likely to be the killer according to an expert witness - the murderers prints are on the knife. Makes him 90% likely to be the killer according to an expert witness - taking BOTH pieces of evidence into account, how likely is it that he is guilty? - 70% + 30% x 90% = 97% likely he is guilty - So the two pieces of evidence combined make the killer more likely to be guilty than each individual piece of evidence alone does. Fri, 06/22/2018 - 06:47 (Reply to #113) #114 So I guess you know more about math than the professor I have conversed with at UNM who happens to have a PhD in Mathematics. What fallacy is that? Argumentum ignorantum? Fri, 06/22/2018 - 07:05 (Reply to #114) #115 Well what’s your (or his) version of the calculation? Maybe best to use the short simple example of the murderer I gave above... Fri, 06/22/2018 - 07:11 (Reply to #115) #116 OKAY. Let's rip it apart piece by piece. - accused had blood on his cloths. Makes him 70% likely to be the killer according to an expert witness Who's blood? What expert witness? Blood on his clothes could be his own. see below. - the accused prints are on the knife. Makes him 90% likely to be the killer according to an expert witness This just proves he handled the knife, could have cut himself, got blood on his clothes, went to the bathroom to deal with cut. While in bathroom, the other person was murdered. - taking BOTH pieces of evidence into account, how likely is it that he is guilty? And with both pieces of evidence, I have created enough "shadow of doubt" to... Final Verdict: Not Guilty. Fri, 06/22/2018 - 08:48 (Reply to #116) #117 Dan: Well what’s your (or his) version of the calculation? Maybe best to use the short simple example of the murderer I gave above... Sheldon has already well explained that these claims are falsifiable: Chance god exists because of fine tuning 50% Chance god exists because of Big Bang 50% Change god exists because of Prime Mover 50% As none of your claims are falsifiable, they can all safely be given a 0% probability. Fri, 06/22/2018 - 08:05 (Reply to #117) #118 Apart from the asinine nature of the assumption in your absurd analogy, it is at least dealing with testable empirical evidence now, though hopelessly misunderstanding how reliably objective it is in your conclusions. The maths as always is risible nonsense. Could you link some citations for these formulas you appear to be plucking out of thin air? " So the two pieces of evidence combined make the killer more likely to be guilty than each individual piece of evidence alone does." Not really as both pieces of evidences are circumstantial. You're also spuriously conflating legal standards of evidence with epistemological standards for knowledge claims, which is another reason it's a poor analogy. Worst of all you're confusing cumulative probability with that of single events. Our universe can only be viewed as a single event as we have only 1 universe to test, so making claims about the odds of it turning out the way it has is fallacious. Your bias is a derivative of your desire to know, and your inability to show the intellectual humility to admit we don't yet know, and indeed may never know some of the things you want answers to is making you ignore the truth in favour of answers you like. This is precisely what religions and superstitions do, they fulfil an innate need in humans for answers, but they are superficial as they have no objective evidence to support them. Science is slow and tedious most of the time, far easier to just assert what you want to believe and twist the facts to suit, can I get a hallelujah? The difference between you and the others in this thread is not that they don't care, or don't want to know, it's just that they aren't prepared (like you) to use assumptions and bias in place of proper evidence. They'd rather admit they don't know than believe something that is not true. Fri, 06/22/2018 - 08:26 (Reply to #118) #119 All you do is rubbish my math without providing alternatives. For example what is your calculation of the chance the murderer is guilty? I’m not willing to take ‘I don’t know’ as an answer. Science, reason and probability allow a meta-analysis of the problem that at least gives an approximation. Fri, 06/22/2018 - 10:26 (Reply to #119) #120 I already showed he is Not Guilty. As for probability of guilt, well... that is always going to be subjective, NEVER objective. As long as a believable shadow of doubt can be cast, he shall never be found Guilty. And always remember this: Not Guilty ≠ Innocent. Everything you have provided is SUBJECTIVE OPINIONS. No evidence. As for your rubbish math, well, it is rubbish. Probability does not work, nor is it calculated the way you are doing it. At least I know that much about probability. You are purposefully skewing everything to be in your favor. It reminds me of this cartoon (https://i.imgur.com/XU0Rf55.jpg). You are purposefully performing apologetics to make everything line up with your presupposed conclusion. Meta-analysis = Apologetics. Apologetics = Huge Pile of Horse Hoowhee. Fri, 06/22/2018 - 10:49 (Reply to #120) #121 My estimate is 100%. You described the defendant as a "murderer" throughout.
{"url":"https://www.atheistrepublic.com/forums/debate-room/evolution-god?page=3","timestamp":"2024-11-01T22:16:11Z","content_type":"text/html","content_length":"176717","record_id":"<urn:uuid:01b85a64-8872-4fc2-bc26-a3ea239659b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00514.warc.gz"}
REHVA Journal Simulation Optimization on Heat Transfer Characteristics of Carbon Dioxide in Microchannel Evaporator Simulation Optimization on Heat Transfer Characteristics of Carbon Dioxide in Microchannel Evaporator │ │ │ │ │Fu │Li Guo │Lv Jing │ │Yijun │ │ │ │ │College of Environment and Building, University of Shanghai for Science and Technology, │College of Environment and Building, University of Shanghai for Science and Technology, │ │ │Shanghai, PR. China │Shanghai, PR. China │ Keywords: CO2, Heat transfer characteristics, Microchannel evaporator, Simulation model Simulating model based on a specific CO2 microchannel evaporator was established through control volume method with MATLAB, in which both wet and dry conditions for air side, and two-phase and over-heat zones for CO2side have been considered during the evaporative process. Simulation results showed little discrepancy with pervious experimental data which validates the model. And then the heat transfer characteristics in microchannel evaporator were simulated under different inlet air parameters. It was shown that air velocity has the greatest impact on heat transfer effect, followed by air temperature, and air humidity at last. Meanwhile, the dry-out point also has an important impact on heat transfer performance: before the dry out happens, the heat transfer coefficient of the CO2 increased with higher air temperature, relative humidity and velocity, while after the dry out occurs, there has been a drastic decline of convective heat transfer coefficient. Therefore, the dry-out point should be postponed for better performance. Then, structural optimization has been made by utilizing two-stage series evaporators. Corresponding simulation results showed that 37.5% area of the original experimental device can still achieve 90.5% heat transfer rate of the former one. So, this method can greatly improve the heat transfer effect of the CO2 microchannel evaporator. The main devices of heat transfer in the refrigerating cycle of CO2 have been going through the development from the finned tube style to the microchannel style. Compared to the traditional heat exchanger, the microchannel heat exchanger is usually smaller and higher heat transfer coefficient, but it’s pressure resistance and drop is higher, which may easily cause congestion and imbalanced distribution of fluid. CO2 can cover the shortage of microchannel heat exchanger due to its low ratio between liquid and gas density [1]. However, when the hydraulic diameter is smaller than 3mm, the two-phase flow and heat transfer regulation differs from the normal size. More noticeable microscale effect can be observed in narrow passageway [2]. Many research institutions have studied on this issue that mainly focus on the boiling heat transfer coefficient of the two-phase field, critical heat flux, dry-out point, two-phase flow pattern and pressure drop model [3]. Cheng et al. have discovered that the critical dryness of carbon dioxide was generally between 0.5 and 0.7, which was much lower than that of R22 with a critical dryness usually between 0.8 and 0.9 [4]. Then, they have considered the characteristics of intermittent flow, annular flow, dry-out inception and mist flow to modify the boiling heat transfer correlation under the basis research of Wojtan [5]. Zhang has established a two-dimensional distributed parameter model for the CO2 microchannel evaporator and proposed a modified heat transfer correlation after comparing to the experimental data [6]. Several appropriate heat transfer correlations are selected according to the heat transfer characteristics of CO2 in microchannel evaporator, and comprehensively considered the different heat transfer characteristics of wet and dry conditions on air side along with two-phase region and overheated region of CO2. Parameter distribution simulation model of CO2 microchannel evaporator has been established and verified through the experimental results. Finally, structural optimization has been proposed and verified through further simulation under different channel number, air temperature, humidity and velocity have been analyzed for studying their impact on heat transfer performance. I. Experiment Research A. Microchannel evaporator Experiment research on a parallel flow micro-channel evaporator which is composed of 36 parallel flat tubes, each of which has 18 microchannels with equivalent diameter of 1.096 mm. The two-phase CO2 coming from the collecting pipes flows into the microchannel and exchanges heat with the air in the louver fin between the microchannels. Figure 1is the structure of the microchannel evaporator and it’s detailed 3D diagram. The calculated main structural parameters are shown in Table 1. Figure 1. Diagram of the microchannel evaporator. Table 1. Main structural parameters of microchannel evaporator. │Upwind surface│Air direction depth │Volume (cm³)│Heat exchange area (m²) │Equivalent diameter │ │Width/Height │(mm) │ ├────────┬────────────────┤(mm) │ │(mm) │ │ │Air side│Refrigerant side│ │ │810/50 │25 │7087.5 │9.46 │2.28 │1.096 │ B. Experimental system The experimental table of the CO2 microchannel evaporator was set up (see Figure 2). The conditions of the evaporator side are provided by the Enthalpy Different Laboratory. The platinum resistances and pressure transmitters were installed in the evaporator inlet and outlet in order to measure the temperature and pressure of CO2. Thermocouples were fixed on the surface of the evaporator to measure it’s tube temperature. The temperature, humidity and speed of air side were measured by thermometer, hydrometer and anemometer. Finally via the electronic expansion valve, the dryness and mass flow rate of CO2 at the inlet of the evaporator was adjusted. Figure 2. The experiment system diagram. C. Experimental results The 18th flat tube was analyzed and divided into 9 sections, which is 90 mm with measuring points set in the center of each section. The incipient air temperature is set to 23°C, and relative humidity is 25%, so the dew point temperature is 2.14°C.The experiment measured CO2 mass flow rate, inlet dryness, pressure, evaporation temperature, wall temperature, air temperature, humidity and speed which are shown in Table 2 and Table 3. The convective heat transfer coefficient and heat transfer amount of each section of this flat tube would be calculated according to the experimental data, which are shown in Table 2 and Table 3. Table 4 is the convective heat transfer coefficient and heat transfer amount along the length distribution. Table 2. Experimental measurement values of CO[2 ]side. │Category │Mass flow│Inlet dryness│Inlet pressure│Outlet pressure│Outlet Temperature │ │ │(g/s) │ │(MPa) │(MPa) │(°C) │ │Measured values│15.67 │0.28 │3.22 │3.18 │11.58 │ Table 3. Distribution of Parameters. │Measuring points │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ │Wind speed (m/s) │1.4 │1.9 │1.6 │1.4 │1.8 │1.8 │1.9 │1.8 │ │Outlet temperature (°C) │22.4│22.1│21.7│21.1│19.0│8.2 │6.5 │5.9 │ │Outlet humidity (%) │26.5│26.8│27.6│28.9│32.7│66.1│60.0│61.1│ │Wall temperature (°C) │22.8│22.1│21.5│20.5│17.1│6.0 │−1.7│−1.9│ Table 4. The convective heat transfer coefficient and heat transfer amount along the length distribution. │Length (mm) │45 │135 │225 │315 │405 │495 │585 │675 │765 │ │Convective heat Transfer coefficient (W/m² K) │5033.6│4742.3│820.9│156.7│149.2│148.2│147.2│146.3│145.2│ │Heat exchange amount (W) │30.64 │30.90 │22.00│5.94 │2.14 │1.23 │1.13 │1.02 │1.01 │ II. Simulation Model A. Heat transfer correlation of CO2 side 1) Overheated region According to the different evaporator outlet states of CO2, the refrigerant flow can be divided into two-phase region and overheated region. As for the overheated region, different heat transfer correlations were selected according to the Reynolds number: When Re ≥ 2300, the convective heat transfer coefficient was calculated by Gnielinski formula [7]; When Re < 2300, the convective heat transfer coefficient was calculated by Sieder-Tate formula [8]. 2) Two-phase region The currently available CO2 boiling heat transfer correlations are mainly Shah, Gungor and Winterton, Hwang, Yoon, and Cheng correlations. In Cheng correlation, the whole two-phase region is divided into 3 phases as the intermittent flow, annular flow and mist flow according to the boundary point of intermittent flow and annular flow and the dry-out point. Compared with the experimental data in the reference [9], the Cheng correlation was considered to be the most accurate in this experimental condition. So the Cheng correlation was selected in our simulation. B. Heat transfer correlation of air side When the wall temperature is below the air dew point temperature, dew will occur on the surface of the flat tube. So the dry and the wet working conditions should be both considered to analyze the heat transfer on the air side. Many research focus on dry condition, while regardless of wet condition. In the wet condition, the surface thermal resistance increases and the heat transfer coefficient will be much smaller. The correlations developed by Kim and Bullard can predict accurately about the heat and mass transfer performance of the shutters both in dry and wet conditions [10]. So their correlation was used in our simulation model. C. Controlling equations In order to make the process of calculation easier, the mathematical model of CO2 microchannel evaporator was assumed as follows: 1) CO2 is equally divided into flow each microchannel; 2) No thermal conduction or heat resistance exists between microchannels; 3) CO2 side and air side are both steady flow; 4) The air on the condensation water surface is saturated and the thermal resistance of the condensed water is negligible; 5) The effect of lubricating oil and noncondensing gas is not considered. Making the flat tube and the 1/2 louver fin on the upside and underside of it as the research object, the control unit is shown in Figure 3. Figure 3. Sectional view of control unit. As shown in Figure 4, every control volume can be regarded as a small cross flow heat exchanger, which was analyzed by energy conservation: Air side heat exchange: (1) CO2 side heat exchange (two-phase): (2) CO2 side heat exchange (overheated): (3) Where Q represents heat transfer rate (W), M the mass flow rate, (kg/s), h the enthalpy(kJ/kg), x thedryness,and I the latent heat of vaporization(kJ/kg). Where subscript abbreviation a represent air, r refrigerant, i inlet, o outlet, tp two-phase, j the j control volume. Figure 4. The figure of control volume. Heat transfer between air and pipe wall under dry and wet conditions: Where α represent sensible heat transfer coefficient (W/m²·k), β the mass transfer coefficient (kg/m²·s), η the cooling efficiency, A the heat transfer area (m²), and T the temperature(K). Where subscript abbreviation: d represents dry air, w the water film, m average, and s saturated. D. Simulation process design The heat transfer existing both in two-phase region and overheated region in normal heat pump system, because a certain overheated degree at evaporator outlet is usually required in order to ensure CO2 enter into the compressor with gas phase. In this condition, the point with dryness equal 1 of CO2 was calculated first to divide the heat transfer process into the two-phase region and the overheated region. The specific calculation process is shown in Figure 5, in which the state parameters of CO2 and air were obtained from MATLAB by manipulating the REFPROP. Figure 5. Simulation process of heat transfer. II. Results and Discussion A. Comparison of experimental and simulation results Comparison between simulating and experimental values of CO2 temperature, tube wall temperature, air inlet and outlet temperature and the convective heat transfer coefficient are respectively shown in Figure 6 and Figure 7. Figure 6. Comparison of CO[2] temperature, wall temperature, air inlet and outlet temperatures. Figure 7. Comparison of the convection heat transfer coefficient. The relative errors between experimental and simulated values were calculated. The relative error of CO2 temperature and wall temperature is under 10%.But the relative error of convective heat transfer coefficient is about 18%. This is because that microchannel evaporator tended to have an uneven flow distribution problem during actual operation which was omitted in the model. Which may cause too much or less CO2 mass flow rate in some microchannes. Meanwhile, similar changing trend of simulated and experimental values can be seen from Figure 7 with acceptable errors range, which is usually 20% for a simulation of engineering application. So the simulation model is highly reliable forfurther analysis and optimization. B. Sturctural optimization Considering that the location of the drying point is at the first 1/4 length in the channel as is shown in Figure 7, heat transfer efficiency decreases sharply at the latter part of the channel. Therefore, we divide the former one evaporator into two ones with a gas-liquid separator installed between them. The length of the first evaporator is halved to 0.405 m, together with halved flat tubes number of 18. Simulation results show a dryness of 0.68 at the outlet, as is shown in Figure 8. Then the two-phase flow of the first evaporator flows into the gas-liquid separator before it enters the compressor. And the remaining 32% liquid refrigerant enters the next evaporator which also has the length of 0.405m. Different number of flat tubes of the second evaporator have been simulated which are 18, 13, 9 and 5 respectively in order to study their impact on heat transfer efficiency under the same inlet air parameters of former experiments, while the CO2 inlet dryness is assumed to be 0 in the second-stage evaporator. Figure 8. Heat transfer along channel of first-stage evaporator. Figure 9. CO[2 ]dryness of second-stage evaporator under different flat tubes number. Figure 10. CO[2 ]temperature of second-stage evaporator under different flat tubes number. Figure 9 shows that when flat tubes number is decreased to 9, CO2 at the outlet of the second stage evaporator is in the overheat zone with overheat temperature of 6°C as is shown in Figure 10. When the number decreased to 5, it turned into two-phase zone. Considering only the postpone of dry-out point, in order to ensure the over-heated state at the outlet, the number of flat tubes should be 9 for optimization of the secondary evaporator, which can reduce the heat transfer area to 37.5% of the original one. Former heat transfer capacity of the evaporator was 3.46 kW, and after the optimization, it has become 2.48 kW for the first stage evaporator, and 0.65 kW for the second stage evaporator, which adds to 90.5% of the former one. C. Simulation analysis In order to further reduce the size of evaporator, we plan to figure out the proper operation conditions in which the outlet CO2 will be in the overheated zone. for evaporator with less flat tubes. Different inlet air temperature, humidity and velocity have been analyzed through simulation for studying their impact on such smaller evaporators as is shown in Table 4. Heat transfer rate in each interval was calculated for securing the location of dry-out point. Figure 11 to 13 showed the impact of inlet air parameters on heat transfer performance: Table 4. Research conditions. │Category │Air inlet temperature (°C) │Relative humidity (%)│Air face velocity (m/s)│ │Figure 12│23~38 │25 │2 │ │Figure 13│23 │25~70 │2 │ │Figure 14│23 │25 │2~5 │ Figure 11. Influence of air temperature on the heat transfer rate. Figure 12. Influence of air relative humidity on the heat transfer rate. Figure 13. Influence of air speed on the heat transfer rate. We can see the same pattern in heat transfer rate which is similar to former device that the dry-out point marks a threshold of a drastic decline of heat transfer rate after it. While under all inlet conditions, total heat transfer rate remains to be about 600 W. Figure 12 and 13 show that when air temperature and humidity increase, dry out happens earlier. But the increase of velocity may also improve total heat transfer rate Moreover, air temperature has more significant effect than humidity on the location of dry-out point. Therefore, in order to create a proper condition for smaller evaporator like this in order to have better heat transfer performance, we found that under air temperature of 28°C, humidity of 40% and velocity of 5 m/s, CO2 can reach the dry-out point in the evaporator with 5 flat tubes with overheated CO2 at the outlet. Considering dry and wet conditions on air side and different heat transfer characteristics of CO2 in two-phase and overheated region, a distributed parameter simulation model of the CO2 microchannel evaporator was established. The heat transfer in the two-phase region was calculated by Cheng correlation, while in overheated region, it was selected according to the Reynolds number. Comparison between experimental and simulated values in terms of CO2 temperature, wall temperature, inlet and outlet air temperature and convective heat transfer coefficient showed little discrepancy which verifies the simulation. Both inlet parameters impact on heat transfer performance and structural optimization have been realized through simulation process according to which we can dawn the follow 1) The comparison between experimental and simulation results shows little discrepancy within 18% which verifies the simulation method. 2) Convective heat transfer coefficient reached the maximum at the dry-out point and then decline drastically which causes heat transfer deterioration. In the overheated region, the heat transfer coefficient is way smaller compared to that of the two-phase region. Therefore, the later the dry-out happens, the better the cooling efficiency of the device. 3) Structural improvement of the evaporator has been made by separating one evaporator into two with gas-liquid separator between them. Results show that 37.5% area of the original experimental device can still achieve 90.5% heat transfer rate of the former one. 4) Dry out occurs in a 5-tube evaporator when the air temperature is 28°C, with humidity of 40% or air velocity of 5 m/s. Higher air temperature or relative humidity makes the dry out happen earlier, while none of which has apparent impact on the total heat transfer. Higher air velocity not only makes the dry-out occurs earlier, but also improves total heat transfer rate. This paper has been admitted by CCHVAC in 2018. L.J. F.Y.J and L.G. thank to the committee members of CCHAVC and sponsors of TICA company, China. [1] Zhao Y and Ohadi M. M, “Experimental study of supercritical CO2 gas cooling in a microchannel gas cooler”, ASHRAE Trans., vol. 110 (1), pp. 291-300. 2004. [2] Zan Wu. “Mechanism and predictive methods for flow boiling in micro/mini-channels and micro-fin tubes.” Ph. D. Thesis. Zhe Jiang University, China, 2013. [3] Yoon S H, Cho E S and Hwang Y. W. “Characteristics of evaporative heat transfer and pressure drop of carbon dioxide and correlation development”, International. Journal. Refrigerator.,vol. 27 (2), 2004, pp 111-119. [4] Cheng L, Ribatski G and Wojtan L “New flow boiling heat transfer model and flow pattern map for carbon dioxide evaporating inside horizontal tubes” Int. J. Heat Mass Transfer, vol. 49 (21-22), 2006, pp 4082-4094. [5] L Wojtan, T Ursenbacker, J R “Thome, Investigation of flow boiling in horizontal tubes: part II – development of a new heat transfer model for stratified-wavy, dry-out and mist flow regimes”, Int. J. Heat Mass Transfer, vol. 48(4), 2005, pp2 970-2985. [6] Haiqing Zhang, Bei Guo, “Numerical simulation of the carbon dioxide microchannel evaporator using distributed parameter model,” Journal of Xi’an Jiao Tong University, vol. 46 (1) 2012, pp [7] V. Gnielinski, “New equations for heat and mass transfer in turbulent pipe and channel flow”, Int. Chen. Eng. vol.16 (2) 1976, pp 401-409. [8] Sieder E N, Tate G E, Heat transfer and pressure drop of liquids in tubes. Ind Eng Chem., 28 (1936):1429-1435. [9] Pettersen J, “Flow vaporization of CO2 in microchannel tubes,” Exp. Therm. Fluid Sci.,vol. 28 (2-3), 2004, pp 111-121. [10] Kim M H, Bullard C W, “Air-side thermal hydraulic performance of multi-louvered fin aluminum heat exchangers”, International Journal Refrigerator.,vol.25 (3), 2002, pp 390-400. Stay Informed Follow us on social media accounts to stay up to date with REHVA actualities
{"url":"https://www.rehva.eu/rehva-journal/chapter/simulation-optimization-on-heat-transfer-characteristics-of-carbon-dioxide-in-microchannel-evaporator","timestamp":"2024-11-14T16:37:28Z","content_type":"text/html","content_length":"143686","record_id":"<urn:uuid:55d5b3f6-0490-4e31-ae9d-e241e685da49>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00887.warc.gz"}
What type of language is Matlab? scripting language What is Matlab and how it works? MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. What language is C written? C started with the BCPL language, Ken Thomson had access to a compiler for it that ran on their General Electrics 635 main frame. Unhappy with the language, Thomson used BCPL to write a compiler for the B language, an evolutionary step beyond BCPL that removed some of the technical problems in BCPL. What is the difference between C and Matlab? C vs MATLAB: What are the differences? Developers describe C as “One of the most widely used programming languages of all time”. . On the other hand, MATLAB is detailed as “A high-level language and interactive environment for numerical computation, visualization, and programming”. What are the advantages of Matlab? Matlab Advantages • Implement and test your algorithms easily. • Develop the computational codes easily. • Debug easily. • Use a large database of built in algorithms. • Process still images and create simulation videos easily. • Symbolic computation can be easily done. • Call external libraries. • Perform extensive data analysis and visualization. How is Matlab used in engineering? MATLAB is widely-used in many different fields of engineering and science, and provides an interactive environment for algorithm development, data visualisation, data analysis, and numerical What does C stand for in programming? Basic Combined Programming Language What is array in C? An array is a collection of data items, all of the same type, accessed using a common name. A one-dimensional array is like a list; A two dimensional array is like a table; The C language places no limits on the number of dimensions in an array, though specific implementations may. What is C used for today? ‘C’ language is widely used in embedded systems. It is used for developing system applications. It is widely used for developing desktop applications. Most of the applications by Adobe are developed using ‘C’ programming language. What is Matlab API? MatLab API is the kind of library that empowers you to compose Fortran and C programs that communicate with MatLab. It is mainly for reading and writing the important Mat files and calling Matlab as the computational engine. What are the two types of C programming applications? It has found lasting use in applications previously coded in assembly language. Such applications include operating systems and various application software for computer architectures that range from supercomputers to PLCs and embedded systems….K&R C • Standard I/O library. • long int data type. • unsigned int data type. Why C is called C language? The reason why the language was named “C” by its creator was that it came after B language. Back then, Bell Labs already had a programming language called “B” at their disposal. The Unix operating system was originally created at Bell Labs by Ken Thompson, Dennis Ritchie, and others. Is Matlab available for free? While there is no “free” versions of Matlab, there is a cracked license, which works until this date. What is Matlab full form? The name MATLAB stands for MATrix LABoratory. MATLAB was written originally to provide easy access to matrix software developed by the LINPACK (linear system package) and EISPACK (Eigen system package) projects. MATLAB [1] is a high-performance language for technical computing. What are the basics of C? 1. C programming basics to write a C Program: C Basic commands Explanation #include This is a preprocessor command that includes standard input output header file(stdio.h) from the C library before compiling a C program int main() This is the main function from where execution of any C program begins. What is Matlab used for in real life? It involves mechanical engineering, electronic engineering, and computer science to name a few to create robots or human-like machines. Robotics researchers and engineers use MATLAB to design and tune algorithms, model real-world systems, and automatically generate code – all from one software environment. What are the basics of Matlab? MATLAB Basics Tutorial • Contents. Vectors. • Vectors. Let’s start off by creating something simple, like a vector. • Functions. To make life easier, MATLAB includes many standard functions. • Plotting. It is also easy to create plots in MATLAB. • Polynomials as Vectors. • Polynomials Using the s Variable. • Matrices. • Printing. What are the applications of Matlab? Millions of engineers and scientists worldwide use MATLAB for a range of applications, in industry and academia, including deep learning and machine learning, signal processing and communications, image and video processing, control systems, test and measurement, computational finance, and computational biology.
{"url":"https://gowanusballroom.com/what-type-of-language-is-matlab/","timestamp":"2024-11-08T12:40:13Z","content_type":"text/html","content_length":"54069","record_id":"<urn:uuid:e1a6dc0f-1f23-40aa-983f-da6858617ea7>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00412.warc.gz"}
In this task, students imagine that they are joining an expedition to explore one of the Egyptian pyramids. As the physicists in the group, they have been tasked with the job of getting a Ground Penetrating Radar (GPR) unit up the side of a pyramid. After choosing a pyramid to explore,...
{"url":"https://www.performanceassessmentresourcebank.org/tags/design","timestamp":"2024-11-06T01:38:45Z","content_type":"text/html","content_length":"117817","record_id":"<urn:uuid:b0afaa76-c5f8-44f6-9f62-9fc2c8901a9c>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00536.warc.gz"}
Cramp Function Distribution Cramp Function Distribution Definition The (complex) cramp function distribution is a continuous distribution defined as [1]: The function integrates the normal distribution, giving the probability a normally distributed random variable Y (with mean 0 and variance ½), falls into the range [−x, x]. Cramp Function vs. Error Function The term “(complex) cramp function” is seen in literature by Russian or Latvian authors (for example, see [2], [3], [4]), where it is usually denoted as W(x) [5]. Elsewhere, it is known as the error [1] Johnson, Kotz, and Balakrishnan, (1994), Continuous Univariate Distributions, Volumes I and II, 2nd. Ed., John Wiley and Sons. [2] Mikhailovskiy, A. B. (1975), Theory of Plasma Instabilities, Atomizdat, in Russian [3] Baumjohann, W., and R. A. Treumann (1997), Basic Space Plasma Physics, Imperial College Press, London. [4] [4] Zagursky, V. Pilot signal detection in Wireless Sensor Networks (Latvian translation). Online: https://ortus.rtu.lv/science/en/publications/11860/fulltext [5] Error Functions (TeX). Online: http://nlpc.stanford.edu/nleht/Science/reference/tex/errorfun/errorfun.tex
{"url":"https://www.statisticshowto.com/cramp-function-distribution/","timestamp":"2024-11-04T15:16:14Z","content_type":"text/html","content_length":"66520","record_id":"<urn:uuid:6c993227-ff55-4a48-80cb-ad8f32246b15>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00693.warc.gz"}