content
stringlengths
86
994k
meta
stringlengths
288
619
A spatial Cartesian system with three axes lends itself to describing the terrestrial positions derived from space-based geodesy. Using three rectangular coordinates instead of two, a person can unambiguously define any position on the earth—or above it for that matter. The three-dimensional Cartesian coordinates (x,y,z) derived from this system are known as Earth-Centered-Earth-Fixed (ECEF) coordinates. It is a right-handed orthogonal system that rotates with and is attached to the earth, which is why it is called earth fixed. A three-dimensional Cartesian coordinate system is right-handed if it can be described by the following model: the extended forefinger of the right hand symbolizes the positive direction of the x-axis. The middle finger of the same hand extended at right angles to the forefinger symbolizes the positive direction of the y-axis. The extended thumb of the right hand, perpendicular to them both, symbolizes the positive direction of the z-axis. The usual arrangement is known as the Conventional Terrestrial Reference System (CTRS), and the Conventional Terrestrial System (CTS). The latter name will be used here. The origin is the center of mass of the whole Earth including oceans and atmosphere, the geocenter. Thex-axis is a line from that geocenter through its intersection at the zero meridian, also known as the International Reference Meridian (IRM), with the internationally defined conventional equator. The y-axis is extended from the geocenter along a line perpendicular from the x-axis in the same mean equatorial plane toward 90° East longitude. That means that the positive end of the y-axis intersects the actual Earth in the Indian Ocean. In any case, they both rotate with the Earth around the z-axis, a line from the geocenter through the internationally defined pole known as the International Reference Pole (IRP). However, the earth is constantly moving, of course. While one can say that the Earth has a particular axis of rotation, equator and zero meridian for an instant, they all change slightly in the next instant. Within all this motion how do you stabilize the origin and direction of the three axes for the long term? One way is to choose a moment in time and consider them fixed to the earth as they are at that instant. Here is an example of that process of definition. The earth’s rotational axis wanders slightly with respect to the solid earth in a very slow oscillation called polar motion. The largest component of the movement relative to the earth’s crust has a 430-day cycle known as the Chandler period. It was named after American Astronomer Seth C. Chandler who described it in papers in the Astronomical Journal in 1891. Another aspect of polar motion is sometimes called polar wander. It is about 0.004 seconds of arc per year as the pole moves toward Ellesmere Island. The actual displacement caused by the wandering generally does not exceed 12 meters. Nevertheless, the conventional terrestrial system of coordinates would be useless if its third axis was constantly wobbling. Originally, an average stable position was chosen for the position of the pole. Between 1900 and 1905, the mean position of the earth’s rotational pole was designated as the Conventional International Origin (CIO) and the z-axis. This was defined by the Bureau International de l’Heure (BIH). It has since been refined by the International Earth Rotation Service, (IERS) using very long baseline interferometry (VLBI) and satellite laser ranging (SLR). It is now placed as it was midnight on New Year’s Eve 1983, or January 1, 1984 (UTC). The moment is known as an epoch and can be written 1984.0. The name of the z-axis has been changed to the International Reference Pole, IRP epoch 1984, but it remains within 0.005” of the previous definition. It provides a geometrically stable and clear definition on the earth’s surface for the z-axis. In this three-dimensional right-handed coordinate system the x-coordinate is a distance from the y-z plane measured parallel to the x-axis. It is always positive from the zero meridian to 90º W longitude and from the zero meridian to 90º E longitude. In the remaining 180º the X-coordinate is negative. The y-coordinate is a perpendicular distance from the plane of the zero meridian. It is always positive in the Eastern Hemisphere and negative in the Western Hemisphere. The z- coordinate is a perpendicular distance from the plane of the equator. It is always positive in the Northern Hemisphere and negative in the Southern Hemisphere. It is important to note that the GPS Control Segment generates the position and velocity of the satellites themselves in ECEF coordinates. It follows that most modern GPS software provide the GPS positions in ECEF as well. Further, the ends of baselines determined by GPS observation are typically given in ECEF coordinates so that the vectors themselves become the difference between those x, y , and z coordinates. The display of these differences as DX, DY, and DZ is a usual product of these post-processed calculations.
{"url":"https://www.geoweeknews.com/blogs/vol13no15-earth-centered-earth-fixed","timestamp":"2024-11-13T11:09:19Z","content_type":"text/html","content_length":"54182","record_id":"<urn:uuid:4c74b2f2-fb2c-476d-9813-b633d3d0390a>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00027.warc.gz"}
Math Forum :: View topic - Can anybody explain what is Lie Integral? - www.mathdb.org Math Forum :: View topic – Can anybody explain what is Lie Integral? Can anybody explain in a simple manner what is Lie Integral? Can you explain by way of a concrete example how to evaluate such kind of integrals? For example, do we need to perform any transformation or parametrization in order to evalute the integral? If so, what kind of transformation or parametrization? Is there any geometrical or physical significance for such kind of integrals? Many
{"url":"https://www.mathdb.org/phpbb2/viewtopicphpt945amp/","timestamp":"2024-11-06T18:38:41Z","content_type":"text/html","content_length":"27248","record_id":"<urn:uuid:d2ef02fc-79ae-457c-9b17-ffb5359597a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00020.warc.gz"}
Serial Correlation in Time Series This article is part of the EconHelp Tutoring Wiki In statistics, the serial correlation of a random process describes the correlation between values of the process at different points in time, as a function of the two times or of the time difference. Let X be some repeatable process, and i be some point in time after the start of that process. (i may be an integer for a discrete-time process or a real number for a continuous-time process.) Then Xi is the value (or realization) produced by a given run of the process at time i. Suppose that the process is further known to have defined values for mean ${\displaystyle \mu _{i}}$ and variance ${\displaystyle \sigma _{i}^{2}}$ for all times i. Then the definition of the serial correlation between any two time s and t is ${\displaystyle R(s,t)={\frac {E[(X_{t}-\mu _{t})(X_{s}-\mu _{s})]}{\sigma _{s}\sigma _{t}}}}$ where "E" is the expected value operator. Note that this expression is not well-defined for all time series or processes, because the variance may be zero (for a constant process) or infinite. If the function R is well-defined, its value must lie in the range [−1, 1], with 1 indicating perfect correlation and −1 indicating perfect anti-correlation. If X[t] is a second-order stationary process then the mean μ and the variance σ^2 are time-independent, and further the autocorrelation depends only on the difference between t and s: the correlation depends only on the time-distance between the pair of values but not on their position in time. This further implies that the autocorrelation can be expressed as a function of the time-lag, and that this would be an even function of the lag τ = s − t. This gives the more familiar form ${\displaystyle R(\tau )={\frac {E[(X_{t}-\mu )(X_{t+\tau }-\mu )]}{\sigma ^{2}}}}$ and the fact that this is an even function can be stated as ${\displaystyle R(\tau )=R(-\tau )}$ It is common practice in some disciplines, other than statistics and time series analysis, to drop the normalization by σ^2 and use the term "serial correlation" interchangeably with "serial covariance". However, the normalization is important both because the interpretation of the serial correlation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated serial correlations.
{"url":"https://wiki.ubc.ca/Serial_Correlation_in_Time_Series","timestamp":"2024-11-14T01:17:30Z","content_type":"text/html","content_length":"31125","record_id":"<urn:uuid:ef81b4eb-6cb1-496b-b454-d51f09ce8937>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00628.warc.gz"}
EduMedia – Angular Magnification Angular Magnification Distinction between linear magnification and angular magnification. You can horizontally drag the lens and vertically drag the "apple" to enlarge it. Only magnifying glass behaviour is shown here. A converging lens can act as a magnifying glass. This requires that the object is at a distance less than the focal length OF. In this case, the object is very close to the lens and the image is magnified in the same direction. If the object is beyond the focal length, the effect of magnification is no longer possible. The magnification is the ratio of two angles (not the ratio of two lengths!). G = α '/ α α '= angle subtended image. α = angle at which we see the object. Click and drag the lens in front of the object. Click and drag the head of the penguin to change its size.
{"url":"https://www.edumedia.com/en/media/70-angular-magnification","timestamp":"2024-11-10T08:01:25Z","content_type":"text/html","content_length":"241528","record_id":"<urn:uuid:f8c0a2a7-e055-450c-9622-4c14bb504e49>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00414.warc.gz"}
There are 18 points in a plane such that no three of them are i... | Filo There are 18 points in a plane such that no three of them are in the same line except five points which are collinear. The number of triangles formed by these points is Not the question you're searching for? + Ask your question A triangle is obtained by joining three noncollinear point The total number of triangles 806 Was this solution helpful? Found 7 tutors discussing this question Discuss this question LIVE for FREE 11 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice more questions from Permutations and Combinations View more Practice questions on similar concepts asked by Filo students View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text There are 18 points in a plane such that no three of them are in the same line except five points which are collinear. The number of triangles formed by these points is Updated On Nov 19, 2022 Topic Permutations and Combinations Subject Mathematics Class Class 11 Answer Type Text solution:1 Video solution: 1 Upvotes 234 Avg. Video Duration 3 min
{"url":"https://askfilo.com/math-question-answers/there-are-18-points-in-a-plane-such-that-no-three-of-them-are-in-the-same-line","timestamp":"2024-11-12T13:08:32Z","content_type":"text/html","content_length":"252691","record_id":"<urn:uuid:f09c616b-8592-4ae0-90d6-380024a28785>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00428.warc.gz"}
Generalization of Gartner-Ellis theorem A generalization of the Gartner-Ellis Theorem for arbitrary random sequences is established. It is shown that the conventional formula of the large deviation rate function, based on the moment generating function techniques, fails to describe the general (possibly nonconvex) large deviation rate for an arbitrary random sequence. An (nonconvex) extension formula obtained by twisting the conventional large deviation rate function around a continuous functional is therefore proposed. As a result, a new Gartner-Ellis upper bound is proved. It is demonstrated by an example that a tight upper bound on the large deviation rate of an arbitrary random sequence can be obtained by choosing the right continuous functional, even if the true large deviation rate is not convex. Also proved is a parallel extension of the Gartner-Ellis lower bound with the introduction of a new notion of Gartner-Ellis set within which the upper bound coincides with the lower bound (for countably many • Arbitrary random sequence • Exponent • Gartner-ellis theorem • Information spectrum • Large deviations Dive into the research topics of 'Generalization of Gartner-Ellis theorem'. Together they form a unique fingerprint.
{"url":"https://scholar.nycu.edu.tw/en/publications/generalization-of-gartner-ellis-theorem","timestamp":"2024-11-14T05:35:55Z","content_type":"text/html","content_length":"54290","record_id":"<urn:uuid:28f25ca6-be28-4354-995e-48198e9b277f>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00234.warc.gz"}
Quest Learning & Assessment Welcome to the University Wiki Service! Please use your IID (yourEID@eid.utexas.edu) when prompted for your email address during login or click here to enter your EID. If you are experiencing any issues loading content on pages, please try these steps to clear your browser cache. This section is a work in progress; please excuse the sparse documentation. If you'd like to use this question type and don't find the sufficient information on this page please email us. Using Matrixcalculator.com may be useful when creating such questions. This section will cover how to use the HTML & JavaScript (JS) question type to create matrix questions. Because of this, the HTML & JS question type is also sometime called matrix editor. This type of question is incredibly powerful but very particular on how responses will be received (ie additional line of code needed for answers to be in fraction instead of decimal format). You know you're in the matrix editor if you see body, explanation, prompt, AND code tabs/boxes, as shown below. The matrix editor is intended to be used there the answer has multiple free response cells that all pertain to the same question, as in a matrix response. Below are examples of how this question type could be used: Creating Matrix Question Types Matrix objects in JavaScript are supported through the usage of MathJs, a mathematical library for JavaScript. Specifically this page covers the usage and support of matrix objects. https:// For a quick matrix editor reference, see Matrix Editor Reference Sheet. Below is an example of a matrix question. Please note you do need to push the 'generate parameters button before the publish button will show. Example Code Compute the product $AB$ of the matrices $$A = @matA, B = @matB$$ By the row-column definition of the product of matrices $$\eqalign{AB = &@matA @matB \cr function js_answer() { /* global matrix matA u={} */ /* global matrix matB u={} */ /* global matrix ans1 u={} */ matA = math.matrix([[-3, -1, -5], [-4, -5, 0], [-2, -4, 5]]); matB = math.matrix([[4, 4, 0], [-5, 5, 0], [2, 5, -3]]); ans1 = math.multiply(matA, matB);
{"url":"https://utexas.atlassian.net/wiki/spaces/questla/pages/53098791/HTML+JavaScript","timestamp":"2024-11-14T15:07:05Z","content_type":"text/html","content_length":"1041695","record_id":"<urn:uuid:d8cd1af1-dd1e-4f24-a8ba-b2f303e2fc20>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00170.warc.gz"}
Furlongs to Inches (US survey) Converter Enter Furlongs Inches (US survey) β Switch toInches (US survey) to Furlongs Converter How to use this Furlongs to Inches (US survey) Converter π € Follow these steps to convert given length from the units of Furlongs to the units of Inches (US survey). 1. Enter the input Furlongs value in the text field. 2. The calculator converts the given Furlongs into Inches (US survey) in realtime β using the conversion formula, and displays under the Inches (US survey) label. You do not need to click any button. If the input changes, Inches (US survey) value is re-calculated, just like that. 3. You may copy the resulting Inches (US survey) value using the Copy button. 4. To view a detailed step by step calculation of the conversion, click on the View Calculation button. 5. You can also reset the input by clicking on button present below the input field. What is the Formula to convert Furlongs to Inches (US survey)? The formula to convert given length from Furlongs to Inches (US survey) is: Length[(Inches (US survey))] = Length[(Furlongs)] / 0.00012626287782575567 Substitute the given value of length in furlongs, i.e., Length[(Furlongs)] in the above formula and simplify the right-hand side value. The resulting value is the length in inches (us survey), i.e., Length[(Inches (US survey))]. Calculation will be done after you enter a valid input. Consider that a horse race is 8 furlongs long. Convert this distance from furlongs to Inches (US survey). The length in furlongs is: Length[(Furlongs)] = 8 The formula to convert length from furlongs to inches (us survey) is: Length[(Inches (US survey))] = Length[(Furlongs)] / 0.00012626287782575567 Substitute given weight Length[(Furlongs)] = 8 in the above formula. Length[(Inches (US survey))] = 8 / 0.00012626287782575567 Length[(Inches (US survey))] = 63359.8738 Final Answer: Therefore, 8 fur is equal to 63359.8738 in. The length is 63359.8738 in, in inches (us survey). Consider that a traditional country road stretches for 12 furlongs. Convert this distance from furlongs to Inches (US survey). The length in furlongs is: Length[(Furlongs)] = 12 The formula to convert length from furlongs to inches (us survey) is: Length[(Inches (US survey))] = Length[(Furlongs)] / 0.00012626287782575567 Substitute given weight Length[(Furlongs)] = 12 in the above formula. Length[(Inches (US survey))] = 12 / 0.00012626287782575567 Length[(Inches (US survey))] = 95039.8106 Final Answer: Therefore, 12 fur is equal to 95039.8106 in. The length is 95039.8106 in, in inches (us survey). Furlongs to Inches (US survey) Conversion Table The following table gives some of the most used conversions from Furlongs to Inches (US survey). Furlongs (fur) Inches (US survey) (in) 0 fur 0 in 1 fur 7919.9842 in 2 fur 15839.9684 in 3 fur 23759.9527 in 4 fur 31679.9369 in 5 fur 39599.9211 in 6 fur 47519.9053 in 7 fur 55439.8895 in 8 fur 63359.8738 in 9 fur 71279.858 in 10 fur 79199.8422 in 20 fur 158399.6844 in 50 fur 395999.211 in 100 fur 791998.422 in 1000 fur 7919984.2204 in 10000 fur 79199842.2038 in 100000 fur 791998422.0382 in A furlong is a unit of length used primarily in horse racing and agriculture. One furlong is equivalent to 220 yards or approximately 201.168 meters. The furlong is defined as one-eighth of a mile, making it a useful measurement for shorter distances, especially in contexts like racetracks and land measurement. Furlongs are commonly used in horse racing to describe the length of a race and in agriculture for measuring field lengths. The unit is less frequently used in modern contexts but remains important in specific areas where its historical relevance endures. Inches (US survey) An inch (US survey) is a unit of length used in various contexts in the United States. One US survey inch is equivalent to exactly 0.0254 meters. The US survey inch is defined as 1/12 of a US survey foot, with the US survey foot being slightly different from the international foot. This unit provides precise measurement for a variety of Inches (US survey) are commonly used in construction, manufacturing, and everyday measurements in the United States. The unit ensures consistency and accuracy in both practical and scientific Frequently Asked Questions (FAQs) 1. What is the formula for converting Furlongs to Inches (US survey) in Length? The formula to convert Furlongs to Inches (US survey) in Length is: Furlongs / 0.00012626287782575567 2. Is this tool free or paid? This Length conversion tool, which converts Furlongs to Inches (US survey), is completely free to use. 3. How do I convert Length from Furlongs to Inches (US survey)? To convert Length from Furlongs to Inches (US survey), you can use the following formula: Furlongs / 0.00012626287782575567 For example, if you have a value in Furlongs, you substitute that value in place of Furlongs in the above formula, and solve the mathematical expression to get the equivalent value in Inches (US
{"url":"https://convertonline.org/unit/?convert=furlongs-inches_us_survey","timestamp":"2024-11-09T16:29:29Z","content_type":"text/html","content_length":"91758","record_id":"<urn:uuid:9df81e45-dbf7-4c68-8322-20c8f4991d7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00372.warc.gz"}
Effects of refractive index mismatch between sample and immersion medium in line-field confocal optical coherence tomography Issue J. Eur. Opt. Society-Rapid Publ. Volume 20, Number 1, 2024 Article Number 13 Number of page(s) 10 DOI https://doi.org/10.1051/jeos/2024010 Published online 18 April 2024 J. Eur. Opt. Society-Rapid Publ. 2024, , 13 Research Article Effects of refractive index mismatch between sample and immersion medium in line-field confocal optical coherence tomography Université Paris-Saclay, Institut d’Optique Graduate School, CNRS, Laboratoire Charles Fabry, 2 av. Augustin Fresnel, 91127 Palaiseau Cedex, France ^* Corresponding author: arnaud.dubois@institutoptique.fr Received: 6 February 2024 Accepted: 14 March 2024 Line-field confocal optical coherence tomography (LC-OCT) is an optical technique based on low-coherence interference microscopy with line illumination, designed for tomographic imaging of semi-transparent samples with micrometer-scale spatial resolution. A theoretical model of the signal acquired in LC-OCT is presented. The model shows that a refractive index mismatch between the sample and the immersion medium causes a dissociation of the coherence plane and the focal plane, leading to a decrease in the signal amplitude and a degradation of the image’s lateral resolution. Measurements are performed to validate and illustrate the theoretical predictions. A mathematical condition linking various experimental parameters is established to ensure that the degradation of image quality is negligible. This condition is tested experimentally by imaging a phantom. It is verified theoretically in the case of skin imaging, using experimental parameters corresponding to those of the commercially available LC-OCT device. Key words: Optical coherence tomography / Interference microscopy / Biomedical imaging © The Author(s), published by EDP Sciences, 2024 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 1 Introduction Line-field confocal optical coherence tomography (LC-OCT) is a recent imaging technique based on low-coherence interference microscopy [1–4]. LC-OCT uses line illumination with broadband spatially coherent light and detection with a line camera to acquire cross-sectional images of semi-transparent samples, including skin tissues [5–13]. The focus is dynamically adjusted during the scan of the sample depth, allowing the use of a microscope objective to image with high lateral resolution. By using a supercontinuum laser as a broadband light source, the axial resolution can be similar to the lateral resolution, i.e. ~1 μm at a central wavelength of ~750 nm, measured at the surface of the sample [1, 2]. The usual experimental arrangement of LC-OCT is based on a two-beam interference microscope in the Linnik configuration (see Fig. 1). The sample to image is placed in one arm of the interferometer in contact with a glass window under a microscope objective. A reference surface of low reflectivity is placed in the other arm of the interferometer under an identical microscope objective. The Linnik interferometer is mounted on a piezoelectric-actuated (PZT) translation stage for scanning the sample depth (see Fig. 1). Immersion microscope objectives are used with an immersion medium whose refractive index is as close as possible to the refractive index of the sample. The equality of the refractive indices ensures the preservation of the symmetry of the interferometer during the depth scan, using a single motorized displacement. This avoids defect of focus [14–18] and dispersion mismatch between the interferometer arms [19–24]. Fig. 1 Linnik interferometer implemented in LC-OCT. The components in the red dashed frames are mounted on a piezoelectric (PZT) translation stage for scanning the sample depth (motion indicated by the double red arrow). In practice, however, it is generally impossible for the refractive indices of the immersion medium and the sample to be strictly identical regardless of the depth probed, at least because of inhomogeneities in the sample [25–27]. In skin tissues, for example, the refractive index varies with depth. The superficial layer of the skin (stratum corneum) has a refractive index in the near infrared around 1.54, while that of the epidermis is around 1.41 and that of the dermis around 1.38 [28–30]. The aim of this paper is to study the main consequences on the quality of the LC-OCT images of a difference in refractive index between the sample and the immersion medium. Theoretical analyses are carried out to understand the physical phenomena that occur and to see the influence of experimental parameters such as spectral characteristics of the light detected, beam focusing and depth in the sample. Measurements are performed to validate and illustrate the theoretical predictions. A mathematical condition dependent on the value of experimental parameters is established to avoid significant degradation of image quality. This condition is tested using two different immersion media, both experimentally by imaging a phantom and theoretically in the case of skin imaging. 2. LC-OCT signal modeling This section presents a theoretical model of the signal acquired in LC-OCT as a function of depth in the sample. This model will allow us to understand and predict effects of a difference in refractive index between the sample and the immersion medium. The dependence of the refractive indices with the optical frequency is ignored. This assumption is reasonable since the effect of chromatic dispersion up to first order in media of practical interest at depths less than a millimeter is not significant. In water, for example, the standard deviation of the refractive index is less than 0.002 (0.14%) over the wavelength range 0.6–1.0 μm [31]. 2.1 Focal plane and coherence plane In low-coherence interferometry, interference can be observed when the optical path difference in the interferometer does not exceed the temporal coherence length of the light source, with maximum contrast when the two optical paths are identical. In LC-OCT, the optical length of the reference arm of the interferometer determines the position of a plane in the sample arm, called the coherence plane, which corresponds to the equality of the optical paths. Interference is detected when a reflective structure in the sample is located in the coherence plane with an uncertainty of depth equal to the coherence length. On the other hand, a reflective structure in the sample imaged by LC-OCT is in focus when it is located in the focal plane of the microscope objective placed in the sample arm of the interferometer, with an uncertainty of depth related to the depth of field. In practice, the LC-OCT device is adjusted so that the coherence plane and the focal plane coincide at the surface of the sample pressed against a glass window (see Fig. 2a). In order to image in depth in the sample, the distance between the sample and the interferometer is reduced. After relative displacement of the interferometer a distance z closer to the sample (see Fig. 2a), the coherence plane is then at a distance z[coh] below the sample surface and the focal plane at a distance z[foc] below the sample surface. Fig. 2 Mismatch of the coherence plane and focal plane in LC-OCT. The plain red lines represent the beam in the plane of the figure, whereas the dotted red lines represent the beam in the direction orthogonal to the plane of the figure. (a) The interferometer is adjusted so that the coherence plane and the focal plane coincide at the surface of the sample. (b) When the interferometer is lowered by a distance z, the coherence plane is at a distance z[coh] below the sample surface and the focal planes at a distance z[foc] (here n > n[im]). The position of the coherence plane can be found by considering that the difference between the optical path lengths in the sample and reference arms of the interferometer is zero for the normally incident light beam:$z coh = n im n z ,$(1)where the quantities n and n[im] are the refractive indices of the sample and immersion medium, respectively. The position of the focal plane can be calculated using geometrical optics [18, 32, 33] under the paraxial approximation:$z foc = n n im z .$(2) The paraxial approximation is applicable to microscope objectives with small numerical apertures (NA). Due to the refractive index mismatch between the sample and the immersion medium, the rays focused by the objective converge at different depths depending on their inclination. This creates uncertainty about the position of the focal plane, known as spherical aberration, which increases with the NA of the objective. LC-OCT typically employs a NA of 0.5. The formula provided in reference [27] and generalized in reference [34] can be used to calculate the distance between the marginal focus (corresponding to the most inclined rays) and the paraxial focus. For a NA of 0.5, refractive index values of n[im] = 1.33 and n = 1.40, and a depth of 500 μm in the sample (typical maximum depth probed in LC-OCT), the distance between the two foci is less than 5 μm. This value typically corresponds to the depth of field, rendering spherical aberration insignificant and the paraxial calculation (Eq. (2)) satisfactory. Immersion microscope objectives with large apertures are equipped with a collar to correct for spherical aberration caused by the cover glass thickness variations and refractive index mismatch. In such cases, the focal plane’s position corresponds to the paraxial focus’s position and can thus be determined through paraxial calculation. However, the correction collar is designed only for small thicknesses and refractive index mismatches. The comparison of equations (1) and (2) shows that the coherence plane and the focal plane do not coincide in the sample (z[coh] ≠ z[foc]) if the refractive indices of the sample and the immersion medium are not equal (n ≠ n[im]) [15, 16]. In a homogeneous sample, the distance between the two planes (|z[foc] − z[coh]|) increases linearly with z. The consequences of the mismatch of the coherence plane and the focal plane on the LC-OCT images will be studied in the following of this paper. 2.2 Theoretical signal Let us consider a reflective structure of the sample located at depth z[s] below the sample surface. The optical intensity (or irradiance) on the detector, as a function of the depth scan (variable z , as shown in Fig. 2), is proportional to$I ( z ) = ∫ 0 + ∞ | A Ref ( ω , z ) + A S ( ω , z ) | 2 d ω = ∫ 0 ∞ | A Ref ( ω , z ) | 2 d ω + ∫ 0 ∞ | A S ( ω , z ) | 2 d ω + 2 ∫ 0 ∞ | A Ref ( ω , z ) | × | A S ( ω , z ) | cos [ ϕ ( ω , z ) ] d ω .$(3) A[Ref](ω, z) and A[S](ω, z) represent the complex amplitudes of light at angular frequency ω, returning from the reference and sample arms of the interferometer, respectively. ϕ(ω, z) is the phase difference between waves A[Ref](ω, z) and A[S](ω, z). It can be expressed as$ϕ ( ω , z ) = 2 n ( z coh - z s ) ω / c + ϕ 0 = 2 ( n im z - n z s ) ω / c + ϕ 0$(4)with ϕ[0] a constant phase difference. R[ref](ω) denoting the reflectivity of the reference surface in the reference arm of the interferometer, and S(ω) the spectral intensity of the light source, we can write, ignoring a multiplication factor$| A Ref ( ω , z ) | 2 = R ref ( ω ) S ( ω ) .$(5) Assuming that the reflectivity of the reference surface has a low dependence with ω over the spectral domain of the detector sensitivity, we have$∫ 0 + ∞ | A Ref ( ω , z ) | 2 d ω = R ref I 0 ,$(6) where$I 0 = ∫ 0 + ∞ S ( ω ) d ω ,$(7)is the optical intensity delivered by the light source. Similarly, we write:$| A S ( ω , z ) | 2 = γ foc 2 ( ω , z ) R S ( ω ) S ( ω ) ,$(8)where R[ S ](ω) is the reflectivity of the reflective structure considered as a planar reflector located at depth z [S] in the sample. Function γ[foc](ω, z), called focus function, describes the normalized distribution of light amplitude incident into the sample as a function of depth. According to the theory of Gaussian beams, the axial distribution of light amplitude of a radially symmetrical Gaussian beam is inversely proportional to the beam width [35]. The beam radius varies with axial coordinate Z as$w ( Z ) = w 0 [ 1 + ( Z Z R ) 2 ] 1 / 2 .$(9) The beam waist, defined as the minimal value of the beam radius, is related to the NA of the Gaussian beam and the optical wavelength in vacuum λ[0] as [35]$w 0 = λ 0 π NA .$(10) The quantity Z[R], called Rayleigh length, can be written as [35]$Z R = n λ 0 π NA 2 ,$(11) n being the refractive index of the medium in which the beam propagates. In LC-OCT, the laser beam is focused in only one transverse direction to illuminate the sample with a line of light (see Fig. 2). Function γ[foc](ω, z) is therefore inversely proportional to the square root of the beam width in the direction perpendicular to the illumination line. The maximum of γ[foc](ω, z) is reached when the beam is focused in the plane of the reflective structure, i.e. when z[foc] = z[S], which corresponds to z = (n[im]/n) z[S]. The focus function can therefore be expressed as$γ foc ( z ) = w 0 w ( Z = z foc - z S ) .$(12) Using the expression of the beam waist radius given in equation (9), an expression of the focus function γ[foc] at central angular frequency ω[0] = 2πc/λ[0] can be written as$γ foc ( z ) = { 1 + 64 [ ( n / n im ) z - z S ∆ z foc ] 2 } - 1 / 4 ,$(13)with ∆z[foc] the full-width-at-half-maximum (FWHM) of γ[foc](z) being ∆z[foc] ≈ 8Z[R]. Assuming that the NA of the microscope objectives matches the NA of the laser beam in the plane perpendicular to the illumination line, the width of the focus function can be written according to equation (11) as$∆ z foc ≈ 8 π n λ 0 NA 2 .$(14) Assuming that the reflectivity of the structure has a low dependence with ω over the spectral domain of the camera sensitivity, the optical intensity returning from the sample arm can be written as$∫ - ∞ + ∞ | A S ( ω , z ) | 2 d ω = γ foc 2 ( z ) R S I 0 .$(15) The detected LC-OCT signal, as a function of depth in the sample (Eq. (3)) can now be written as$I ( z ) = I 0 { R ref + R S γ foc 2 ( z ) } + 2 R ref R S γ foc ( z ) ∫ 0 + ∞ S ( ω ) cos ϕ ( ω , z ) d ω .$(16) Using complex analysis, and substituting ϕ(ω, z) by its expression given in equation (4), the integral in the previous equation can be written as$∫ 0 ∞ S ( ω ) cos [ ϕ ( ω , z ) ] d ω = R e { ∫ - ∞ ∞ S ( ω ) exp [ - iϕ ( ω , z ) ] d ω } = R e { exp [ - i ϕ 0 ] S ̂( ξ ) } ,$(17)where $R e { - }$ denotes the real part of a complex number and $S ̂( ξ )$ the Fourier transform of S(ω) with ξ = (n[im] z − nz[s])/πc. We suppose the spectral intensity of the light source to be described by a Gaussian-shaped function of width (FWHM) ∆ω, centered at ω = ω[0]:$S ( ω ) = S 0 exp [ - 4 ln 2 ( ω - ω 0 ∆ ω ) 2 ] .$(18) The Fourier transform of S(ω) is$S ̂( ξ ) = I 0 exp [ - ( π ∆ ω 2 ln 2 ξ ) 2 ] exp [ - i 2 π ω 0 ξ ] ,$(19)the intensity of the light source being here$I 0 = ∫ - ∞ + ∞ S ( ω ) d ω = S 0 ∆ ω 2 π ln 2 The integral in equation (16) can finally be calculated as$∫ 0 + ∞ S ( ω ) cos ϕ d ω = I 0 γ coh ( z ) cos [ 2 ( n im z - n z s ) ω 0 / c + ϕ 0 ] ,$(21)with$γ coh ( z ) = exp { - 4 ln 2 [ ( n im / n ) z - z s ∆ z coh ] 2 } ,$(22)and$∆ z coh = 4 c ln 2 n Δ ω .$(23) The Gaussian function γ[coh](z) represents the temporal coherence function. It is maximum when z = (n/n[im]) z[S], i.e. when z[coh] = z[S] (the reflective structure is then located in the coherence plane). ∆z[coh] is the FWHM of γ[coh](z), which can be approximated by$∆ z coh ≈ 2 ln 2 nπ ( λ 0 2 ∆ λ ) ,$(24)where ∆λ represents the FWHM of the source spectral intensity expressed as a function of wavelength, and λ[0] the central optical wavelength. Finally, the LC-OCT axial signal given by a planar reflector of reflectivity R[S] located at depth z[S] can be written as$I ( z ) = I 0 { R ref + R S γ foc 2 ( z ) + 2 γ ( z ) R ref R S cos [ 4 π ( n im z - nz s ) / λ 0 + ϕ 0 ] } ,$(25)with$γ ( z ) = γ foc ( z ) × γ coh ( z ) .$(26) The signal expressed in equation (25) is the sum of three terms. The first term (R[ref] I[0]) is the intensity of light reflected by the reference surface (constant term). The second term (R[S] γ [foc] ^2(z)I[0]) corresponds to the intensity of light reflected by the reflective structure in the sample. The third term is identified as the interferometric signal, which consists of a sinusoidal modulation of period λ[0]/(2n[im]) comprised in an envelop $2 I 0 γ ( z ) R ref R S$. Function γ(z) is the product of the focus function γ[foc](z) and the coherence function γ[coh](z), given by equations (13) and (22), respectively. Plots of equation (25) are shown in Figure 3 for a reflective structure located at different depths z[S] when n ≠ n[im]. The signal consists of a peak and interference fringes, which correspond in the simulations to R[S] γ[foc] ^2(z) and $2 γ ( z ) R ref R S cos [ 4 π ( n im z - nz s ) / λ 0 ]$, respectively. When z[S] = 0, the focus function and the fringe envelope coincide at z = 0 (γ[foc](z) and γ(z) are centered at z = 0 when z[S] = 0). When the depth z[S] of the reflective structure increases, the distance between the focus function and the fringe envelope increases, whereas the amplitude of the interference fringes decreases. Fig. 3 Theoretical axial signal acquired in LC-OCT from a reflective structure located at different depths z[S] (simulation based on Eq. (25)). The parameters of the simulation are: λ[0] = 750 nm, ∆λ = 200 nm, n = 1.45, n[im] = 1.40, NA = 0.5, R[ref] = 0.19%, R[S] = 0.1%, ϕ[0] = 0. 2.3 Experimental validation This section compares the theoretical model presented above with measurements obtained using an LC-OCT device available in the laboratory and described in reference [2]. Measurements of the axial signal acquired by LC-OCT have been performed to check the validity of the theoretical expression given by equation (25). A simple sample was made by introducing a liquid of refractive index n between the glass window of the LC-OCT device and a glass plate placed at a distance z[S] below the glass window (see Fig. 4). The two interfaces between glass (fused silica) and the liquid have the same reflectivity R[S] = (n − n[glass])^2/(n + n[glass])^2. This constitutes a sample of known refractive index n with two reflecting structures of reflectivity R[S] located at depth z[S] = 0 and z [S] > 0. Three different samples were made with pure water (n = 1.33), oil (n = 1.51), and pure silicon oil (n = 1.40) placed between the glass window and the glass plate. The oil was an immersion oil for microscopy (Sigma-Aldrich, ref. 56822). The silicon oil was provided by Sigma-Aldrich (ref. 378399). Fig. 4 Test sample with two reflecting structures: interface 1 located at z[S] = 0, and interface 2 located at z[S] > 0. The glass material is fused silica (n[glass] = 1.45). The immersion medium is silicon oil (n[im] = 1.40). The liquid placed between the two interfaces is water (n = 1.33), oil (n = 1.51) or silicon oil (n = 1.40). The signal acquired by a pixel of the LC-OCT camera as a function of the depth scan is shown in Figure 5 (experiment and simulation). Due to the reflection of light on the two interfaces, the signal consists of the sum of two peaks and two packets of interference fringes. The LC-OCT device is adjusted so that the coherence plane and the focal plane coincide at interface 1. Therefore, the interference fringes and the peak of light produced by the interface 1 coincide at z = 0 (γ(z) and γ[foc](z) are centered at z = 0 when z[S] = 0). The interference fringes and the peak of light generated by light reflection on interface 2 located at z[S] = 200 μm do not coincide when the refractive index of the sample differs from the refractive index of the immersion medium (n ≠ n[im]), as predicted by the simulation. Depending on the value of n, the interference fringes are detected before or after the focus. Simulations are in good agreement with measurements. Note, however, that the amplitude of the interference fringes produced by interface 2 is slightly lower in the experiment. This can be explained by the transmission factor of interface 1 (T = 1 − R[S]), not taken into account in the simulation, which reduces the amplitude of the detected signal from interface 2. The presence of chromatic dispersion mismatch in the interferometer when n ≠ n[im], not taken into account in the simulation either, also contributes to these discrepancies. Other factors can decrease interference contrast in practice, such as the existence of stray light that does not contribute to interference, defects in the beam splitter that is not perfectly 50/50 across the entire spectral range of detector sensitivity, and imperfections in the interferometer setting. Fig. 5 Signal acquired in LC-OCT from the test sample shown in Figure 4. The parameters are: λ[0] = 750 nm, ∆λ = 200 nm, z[S] = 200 μm, n[im] = 1.40, n[glass] = 1.45, NA = 0.5. R[ref] = 0.19%. Three values of the refractive index n are considered (1.33, 1.40, 1.51). (a–c) Simulations based on equation (25). (d–f) Experiment. The peak corresponding to the focus on interface 2 coincides with the center of interference fringes only when n[im] = n ((c) and (f)). 3 Effects of refractive index mismatch between sample and immersion medium The aim of this section is to study the consequences on LC-OCT image quality of the shift between the focus function and the fringe envelop, resulting from a difference in refractive index between the sample and the immersion medium. A degradation of the signal amplitude and image resolution will be highlighted and quantified. Conditions to ensure that degradation is not significant will be 3.1 Degradation of signal amplitude The signal in the LC-OCT images correspond to the envelop of the interference fringes, which is obtained by digital processing of the acquired signal I(z) expressed in equation (25) [1, 2, 36]. The processed LC-OCT axial response given by a reflective structure of reflectivity R[S] located at depth z[S] in the sample is therefore proportional to $R S γ ( z )$. Theoretical plots of γ(z) (Eq. (26)) are shown in Figure 6 for different values of the structure depth z[S] with a mismatch of the refractive index between the sample and the immersion medium. The deeper the structure, the weaker the signal amplitude. Fig. 6 Simulation of γ(z) (Eq. (26)) for different values of the reflecting structure depth z[S]. The parameters of the simulation are λ[0] = 750 nm, ∆λ = 200 nm, n = 1.45, n[im] = 1.40, NA = 0.5. We will now establish an explicit expression of the signal attenuation by simplifying the expression of function γ(z). Typical values of the experimental parameters in LC-OCT are λ[0] = 750 nm, ∆λ = 200 nm, NA = 0.5 and n[im] = 1.40 [1–5]. Considering a sample with n = 1.45 yields ∆z[foc] ≈ 10 μm (Eq. (14)) and ∆z[coh] ≈ 1 μm (Eq. (24)). Since ∆z[coh] ≪ ∆z[foc], function γ(z) can be approximated by the coherence function γ[coh](z) multiplied by the value of γ[foc](z) at the maximum of γ[coh](z), i.e. when z[coh] = z[S], which corresponds to z = (n/n[im]) z[S]. The fringe envelope γ(z) can therefore be considered as a Gaussian function multiplied by an attenuation factor α < 1 given by$α = γ foc ( z = ( n / n im ) z S ) = [ 1 + ( 8 ε z S ∆ z foc ) 2 ] - 1 / 4 ,$(27)with$ε = 1 - ( n n im ) 2 .$(28) Assuming that Δn = |n − n[im]| ≪ 1, an approximate expression of the signal attenuation is$α = [ 1 + ( 2 π Δ n z S NA 2 n im 2 λ 0 ) 2 ] - 1 / 4 .$(29) Equation (29) is plotted in Figure 7 as a function of the depth z[S] of the reflective structure for different values of the refractive index mismatch Δn. As can be seen, the attenuation may be significant for values of Δn of only a few percent. With the following condition:$NA 2 λ 0 ∆ n n im 2 z S ≤ 1 ,$(30)the attenuation of the signal is less than 2.5 (α ≈ 0.4). Equation (30) can be seen as a condition for keeping signal attenuation low. Fig. 7 Signal attenuation α as a function of depth in the sample for various values of Δn. Simulation based on equation (29) with λ[0] = 750 nm, ∆λ = 200 nm, NA = 0.5, n[im] = 1.4. 3.2 Degradation of lateral resolution If the focal plane of the microscope objective does not coincide with the coherence plane, the image of the structures in the vicinity of the coherence plane appears blurred because of the defect of focus, which degrades the lateral resolution. If n[im] ≠ n, the degradation increases with depth and it is obviously all the more important as the depth of field of the objectives is small (i.e. NA is high). Inhomogeneities of the refractive index within the sample also induce a distortion of the optical wavefront, which introduces optical aberrations. Considered alone, this effect yields to an effective lateral resolution degraded compared to its diffraction-limited theoretical value. Although less significant than the effect of focus defect, those aberrations cannot be easily corrected. As seen previously, a reflective structure located at z[S] is detected with a maximum signal when it is located in the coherence plane, i.e. when z[coh] = z[S], which corresponds to z = (n/n[im])z [S]. We consider that a defect of focus is not visible provided that the structure lies within the depth of field ∆z[foc], i.e. when the distance ∆z between the coherence plane and the focal plane is such that$∆ z = | z coh - z foc | ≤ ∆ z foc ,$(31)with ∆z[foc] defined as the width of the focus function (Eq. (14)). Since z[foc] = (n/n[im])z, ∆z can be expressed as a function of the refractive indices of the sample and immersion medium and depth in the sample as$∆ z = | 1 - ( n n im ) 2 | z S .$(32) Assuming that Δn = |n − n[im]| ≪ 1, a condition for a focus defect not to be noticeable is$NA 2 λ 0 ∆ n n im 2 z S ≤ 4 π ≈ 1 .$(33) It is interesting to notice that the conditions for no significant loss of signal (Eq. (30)) and no significant defect of focus (Eq. (33)) are identical. The maximum depth of a structure that can be imaged without significant signal loss and resolution degradation can therefore be estimated from equation (33) alone. Figure 8 shows the imaging depth that can be reached without degradation of image quality as a function of the refractive index mismatch Δn for different values of the microscope objective NA. Fig. 8 Depth (z[S]) in the sample that can be probed without significant loss of both signal amplitude and lateral resolution, as a function of refractive index mismatch. Simulation based on equation (33) , with n[im] = 1.40 and λ[0] = 750 nm, for different values of the microscope objective NA. 4. LC-OCT imaging In this final section, we propose to illustrate the image degradation due to a difference in refractive index between the sample and the immersion medium by imaging a phantom with a known refractive index using two different immersion media. Finally, we propose to see theoretically whether the condition of non-degradation of image quality (Eq. (33)) is verified for skin imaging using two different immersion media. 4.1 Image of a phantom A phantom was fabricated by embedding titanium dioxide (TiO[2]) particles in a polydimethylsiloxane (PDMS) matrix (Sylgard 184 silicone Elast omer Kit, Neyco, France) whose refractive index is 1.41 at the wavelength of 750 nm [37]. The PDMS matrix was fabricated using a ratio of 10:1 by weight of PDMS pre-polymer and curing agent (Sylgard 184). The phantom was prepared by first mixing the curing agent with a powder of TiO[2] particles (Sigma-Aldrich). The mixture was placed in an ultrasonic bath for 30 min to prevent particles aggregation, and then mixed with the PDMS pre-polymer. The obtained mixture was poured on a 40-mm diameter petri dish. Air bubbles were removed using a vacuum pump for 1 h, and the phantom mixture was finally cured for 1 h30 min at 80 °C. The concentration of TiO[2] particles in the PDMS is 0.40% by weight, and their diameter is less than 5 μm. This results in a refractive index of the phantom that is similar to that of the PDMS matrix, i.e. n ~ 1.41. LC-OCT images of the phantom in vertical section were acquired, using two different immersion media: silicon oil and water. The images are shown in Figure 9. The refractive index of the phantom (n ~ 1.41) being close to that of the silicon oil (n[im] = 1.40), focusing and signal amplitude are maintained throughout the scan depth when silicon oil is used as the immersion medium (Fig. 9a). In this case, equation (33) is fully satisfied since ∆n ~ 0. When water is used as the immersion medium (n[im] = 1.33), the mismatch in refractive indices (∆n ~ 0.08) leads to a focusing defect and a signal loss that increases with depth (Fig. 9b). In this case, equation (30) is satisfied (i.e. image quality is not degraded) for a depth z[S] < 70 μm (taking n = 1.41, NA = 0.5, λ[0] = 750 nm and ∆λ = 200 nm), which seems to agree with the observation of the acquired image. Fig. 9 LC-OCT images (vertical sections) of TiO[2] particles embedded in PDMS (n ~ 1.41), using an immersion medium of refractive index n[im] = 1.40 (a) and n[im] = 1.33 (b). 4.2 Skin imaging LC-OCT has been designed so far for imaging of skin tissues [1, 5]. The superficial skin layer (stratum corneum) has a refractive index in the near infrared around n = 1.54, and a thickness of 10–30 μm. The rest of the epidermis has a mean refractive index around n = 1.41 and a thickness of 100–200 μm. The refractive index of the dermis, the deepest layer, is around n = 1.38 [28–30]. Due to variations in skin refractive index, a mismatch between the coherence plane and focal plane is unavoidable during depth scanning. In this section, we propose to calculate the defocus ∆z = |z[coh] − z [foc]| that occurs as a function of the imaging depth in skin tissues. The simulation was carried out using equation (32) with silicon oil (Fig. 10a) and water (Fig. 10b) as the immersion medium. The epithelium thickness was considered to be 120 μm, with a 20 μm stratum corneum. The tolerable defocus, considered to be depth of field ∆z[foc], was calculated using equation (14) with NA = 0.5 and λ [0] = 750 nm. The simulation shows that image quality is maintained in all skin layers when silicone is used as the immersion medium (n[im] = 1.40). This corresponds to the LC-OCT device commercialized by Damae Medical. If water is used instead of silicon oil as the immersion medium (n[im] = 1.33) for skin imaging, the condition for preserving image quality is not satisfied at depths larger than z[S] ~ 70 μm. This depth corresponds to about half of the thickness of the epidermis in the skin model considered in this simulation. Fig. 10 Theoretical defocus (∆z = |z[coh] − z[foc]|) in the different skin layers with an immersion medium of refractive index n[im] = 1.40 (a) and n[im] = 1.33 (b) compared to the tolerable defocus (∆z [foc]). Calculations based on equations (32) and (14) with NA = 0.5, λ[0] = 750 nm and n = 1.54/1.41/1.38 (stratum corneum/epidermis/dermis). 5 Conclusion A theoretical model of the signal acquired in LC-OCT has been described. The model shows that a difference in refractive index between the sample and the immersion medium causes dissociation between the coherence plane and the focal plane, resulting in a decrease in signal amplitude and a degradation of the image’s lateral resolution. Measurements were carried out to validate and illustrate the theoretical predictions. A mathematical condition (Eq. (33)) linking various experimental parameters was established to ensure that image quality degradation is insignificant. This condition was tested experimentally by imaging a phantom. It was verified theoretically in the case of skin imaging, with experimental parameters corresponding to those of the LC-OCT device marketed by Damae Medical. The mathematical condition expressed in equation (33) can be used to define the experimental parameters of an LC-OCT device, including the NA of the microscope objectives, the center optical wavelength, the maximum imaging depth and the refractive index of the immersion medium, in order to image with optimum quality a sample whose refractive index is known. The author thanks Dr. Frederic Pain for preparing the PDMS samples. He is also grateful to the company DAMAE Medical for technical support. This research did not receive any specific funding. Conflicts of Interest The author declares no conflicts of interest. Data availability statement This article has no associated data generated and/or analyzed. 1. Dubois A., Levecq O., Azimani H., Siret D., Barut A., Suppa M., Del Marmol V., Malvehy J., Cinotti E., Rubegni P., Perrot J.L. (2018) Line-field confocal optical coherence tomography for high-resolution noninvasive imaging of skin tumors, J. Biomed. Opt. 23, 106007. [CrossRef] [Google Scholar] 2. Dubois A., Levecq O., Azimani H., Davis A., Ogien J., Siret D., Barut A. (2018) Line-field confocal time-domain optical coherence tomography with dynamic focusing, Opt. Express 26, 33534–33542. [NASA ADS] [CrossRef] [Google Scholar] 3. Ogien J., Levecq O., Azimani H., Dubois A. (2020) Dual-mode line-field confocal optical coherence tomography for ultrahigh-resolution vertical and horizontal section imaging of human skin in vivo, Biomed. Opt. Express 11, 1327–1335. [CrossRef] [Google Scholar] 4. Ogien J., Daures A., Cazalas M., Perrot J.-L., Dubois A. (2020) Line-field confocal optical coherence tomography for three-dimensional skin imaging, Front. Optoelectron. 13, 381–392. [CrossRef] [Google Scholar] 5. Latriglia F., Ogien J., Tavernier C., Fischman S., Suppa M., Perrot J.L., Dubois A. (2023) Line-field confocal optical coherence tomography (LC-OCT) for skin imaging in dermatology, Life 13, 2268. [NASA ADS] [CrossRef] [Google Scholar] 6. Monnier J., Tognetti L., Miyamoto M., Suppa M., Cinotti E., Fontaine M., Perez J., Orte Cano C., Yélamos O., Puig S., Dubois A., Rubegni P., Marmol V., Malvehy J., Perrot J.-L. (2020) In vivo characterization of healthy human skin with a novel, non-invasive imaging technique: line-field confocal optical coherence tomography, J. Eur. Acad. Dermatol. Venereol. 34, 2914–2921. [CrossRef] [Google Scholar] 7. Pedrazzani M., Breugnot J., Rouaud-Tinguely P., Cazalas M., Davis A., Bordes S., Dubois A., Closs B. (2020) Comparison of line-field confocal optical coherence tomography images with histological sections: validation of a new method for in vivo and non-invasive quantification of superficial dermis thickness, Skin Res. Technol. 26, 398–404. [CrossRef] [Google Scholar] 8. Ruini C., Schuh S., Sattler E., Welzel J. (2021) Line-field confocal optical coherence tomography–practical applications in dermatology and comparison with established imaging methods, Skin Res. Technol. 27, 340–352. [CrossRef] [Google Scholar] 9. Ruini C., Schuh S., Gust C., Kendziora B., Frommherz L., French L.E., Hartmann D., Welzel J., Sattler E. (2021) Line-field optical coherence tomography: in vivo diagnosis of basal cell carcinoma subtypes compared with histopathology, Clin. Exp. Dermatol. 46, 1471–1481. [CrossRef] [Google Scholar] 10. Suppa M., Fontaine M., Dejonckheere G., Cinotti E., Yélamos O., Diet G., Tognetti L., Miyamoto M., Orte Cano C., Perez-Anker J., Panagiotou V., Trepant A.L., Monnier J., Berot V., Puig S., Rubegni P., Malvehy J., Perrot J.L., del Marmol V. (2021) Line-field confocal optical coherence tomography of basal cell carcinoma: a descriptive study, J. Eur. Acad. Dermatol. Venereol. 35, 1099–1110. [CrossRef] [Google Scholar] 11. Donelli C., Suppa M., Tognetti L., Perrot J.L., Calabrese L., Pérez-Anker J., Malvehy J., Rubegni P., Cinotti E. (2023) Line-field confocal optical coherence tomography for the diagnosis of skin carcinomas: real-life data over three years, Curr. Oncol. 30, 8853–8864. [CrossRef] [Google Scholar] 12. Cinotti E., Bertello M., Cartocci A., Fiorani D., Tognetti L., Solmi V., Cappilli S., Peris K., Perrot J.L., Suppa M., Del Marmol V., Rubegni P. (2023) Comparison of reflectance confocal microscopy and line-field optical coherence tomography for the identification of keratinocyte skin tumours, Skin Res Technol. 29, e13215. [CrossRef] [Google Scholar] 13. Perez-Anker J., Puig S., Alos L., García A., Alejo B., Cinotti E., Orte Cano C., Tognetti L., Lenoir C., Monnier J., Machuca N., Castillo P., Gibert P.R., Rubegni P., Suppa M., Perrot J.L., Del Marmol V., Malvehy J. (2022) Morphological evaluation of melanocytic lesions with three-dimensional line-field confocal optical coherence tomography: correlation with histopathology and reflectance confocal microscopy. A pilot study, Clin. Exp. Dermatol. 47, 2222–2233. [CrossRef] [Google Scholar] 14. Dubois A. (2017) Focus defect and dispersion mismatch in full-field optical coherence microscopy, Appl. Opt. 56, D142–D150. [NASA ADS] [CrossRef] [Google Scholar] 15. Dubois A., Moneron G., Boccara A.C. (2006) Thermal-light full-field optical coherence tomography in the 1.2 micron wavelength region, Opt. Comm. 266, 738–743. [NASA ADS] [CrossRef] [Google 16. Labiau S., David G., Gigan S., Boccara A.C. (2009) Defocus test and defocus correction in full-field optical coherence tomography, Opt. Lett. 34, 1576–1578. [NASA ADS] [CrossRef] [Google Scholar] 17. Safrani A., Abdulhalim I. (2011) Spatial coherence effect on layer thickness determination in narrowband full-field optical coherence tomography, Appl. Opt. 50, 3021–3027. [CrossRef] [Google 18. Abdulhalim I. (2012) Spatial and temporal coherence effects in interference microscopy and full-field optical coherence tomography, Ann. Der Physik 524, 787–804. [NASA ADS] [CrossRef] [Google 19. Hitzenberger C.K., Baumgartner A., Drexler W., Fercher A.F. (1999) Dispersion effects in partial coherence interferometry: Implications for intraocular ranging, J. Biomed. Opt. 4, 144–156. [NASA ADS] [CrossRef] [Google Scholar] 20. Dubois A., Grieve K., Moneron G., Lecaque R., Vabre L., Boccara A.C. (2004) Ultrahigh-resolution full-field optical coherence tomography, Appl. Opt. 43, 2874–2882. [NASA ADS] [CrossRef] [Google 21. Zeylikovich I. (2008) Short coherence length produced by a spatial incoherent source applied for the Linnik-type interferometer, Appl. Opt. 47, 2171–2177. [NASA ADS] [CrossRef] [Google Scholar] 22. Abdulhalim I. (2009) Theory for double beam interference microscopes with coherence effects and verification using the linnik microscope, J. Mod. Opt. 48, 279–302. [Google Scholar] 23. Lychagov V.V., Ryabukho V.P., Kalyanov A.L., Smirnov I.V. (2012) Polychromatic low-coherence interferometry of stratified structures with digital interferogram recording and processing, J. Opt. A Pure Appl. Opt. 14, 015702. [Google Scholar] 24. Federici A., Dubois A. (2015) Full-field optical coherence microscopy with optimized ultra-high spatial resolution, Opt. Lett. 40, 5347–5350. [NASA ADS] [CrossRef] [Google Scholar] 25. Schmitt J.M., Kumar G. (1996) Turbulent nature of refractive-index variations in biological tissue, Opt. Lett. 21, 1310–1312. [NASA ADS] [CrossRef] [Google Scholar] 26. Tuchin V.V., Utz S.R., Yaroslavsky I.V. (1994) Tissue optics, light distribution, and spectroscopy, Opt. Eng. 33, 3178–3188. [CrossRef] [Google Scholar] 27. Tearney G.J., Brezinski M.E., Southern J.F., Bouma B.E., Hee M.R., Fujimoto J.G. (1995) Determination of the refractive index of highly scattering human tissue by optical coherence tomography, Opt. Lett. 20, 2258–2260. [NASA ADS] [CrossRef] [Google Scholar] 28. Ding H., Lu J.Q., Wooden W.A., Kragel P.J., Hu X.H. (2006) Refractive indices of human skin tissues at eight wavelengths and estimated dispersion relations between 300 and 1600 nm, Phys. Med. Biol. 51, 1479–1489. [NASA ADS] [CrossRef] [Google Scholar] 29. Lai J.C., Zhang Y.Y., Li Z.H., Jiang H.J., He A.Z. (2010) Complex refractive index measurement of biological tissues by attenuated total reflection ellipsometry, Appl. Opt. 49, 3235–3238. [NASA ADS] [CrossRef] [Google Scholar] 30. Kono T., Yamada J. (2019) In vivo measurement of optical properties of human skin for 450–800 nm and 950–1600 nm wavelengths, Int. J. Thermophys. 40, 1–14. [NASA ADS] [CrossRef] [Google Scholar] 31. The International Association for the Properties of Water and Steam(September 1997). Release on the refractive index of ordinary water substance as a function of wavelength, temperature, and pressure (IAPWS R9-97). http://www.iapws.org/relguide/rindex.pdf [Google Scholar] 32. Visser T.D., Oud J.L., Brakenhoff G.J. (1992) Refractive index and axial distance measurements in 3-D microscopy, Optik 90, 17–19. [Google Scholar] 33. Hell S., Reiner G., Cremer C., Stelzer E.H.K. (1993) Aberrations in confocal fluorescence microscopy induced by mismatches in refractive index, J. Microsc. 169, 391–405. [CrossRef] [Google 34. Wang X., Zhang C., Zhang L., Xue L., Tian J.-G. (2002) Simultaneous refractive index and thickness measurements of bio-tissue by optical coherence tomography, J. Biomed. Opt. 7, 628–632. [NASA ADS] [CrossRef] [Google Scholar] 35. Saleh B.E.A., Teich M.C. (1991) Chapter 3: Beam optics. Fundamentals of photonics, John Wiley & Sons, New York. [CrossRef] [Google Scholar] 36. Larkin K.G. (1996) Efficient nonlinear algorithm for envelope detection in white light interferometry, J. Opt. Soc. Am. A 13, 832–843. [CrossRef] [Google Scholar] 37. Schneider F., Draheim J., Kamberger R., Wallrabe U. (2009) Process and material properties of polydimethylsiloxane (PDMS) for Optical MEMS, Sensors Actuators A: Phys. 151, 95–99. [NASA ADS] [CrossRef] [Google Scholar] All Figures Fig. 1 Linnik interferometer implemented in LC-OCT. The components in the red dashed frames are mounted on a piezoelectric (PZT) translation stage for scanning the sample depth (motion indicated by the double red arrow). In the text Fig. 2 Mismatch of the coherence plane and focal plane in LC-OCT. The plain red lines represent the beam in the plane of the figure, whereas the dotted red lines represent the beam in the direction orthogonal to the plane of the figure. (a) The interferometer is adjusted so that the coherence plane and the focal plane coincide at the surface of the sample. (b) When the interferometer is lowered by a distance z, the coherence plane is at a distance z[coh] below the sample surface and the focal planes at a distance z[foc] (here n > n[im]). In the text Fig. 3 Theoretical axial signal acquired in LC-OCT from a reflective structure located at different depths z[S] (simulation based on Eq. (25)). The parameters of the simulation are: λ[0] = 750 nm, ∆λ = 200 nm, n = 1.45, n[im] = 1.40, NA = 0.5, R[ref] = 0.19%, R[S] = 0.1%, ϕ[0] = 0. In the text Fig. 4 Test sample with two reflecting structures: interface 1 located at z[S] = 0, and interface 2 located at z[S] > 0. The glass material is fused silica (n[glass] = 1.45). The immersion medium is silicon oil (n[im] = 1.40). The liquid placed between the two interfaces is water (n = 1.33), oil (n = 1.51) or silicon oil (n = 1.40). In the text Fig. 5 Signal acquired in LC-OCT from the test sample shown in Figure 4. The parameters are: λ[0] = 750 nm, ∆λ = 200 nm, z[S] = 200 μm, n[im] = 1.40, n[glass] = 1.45, NA = 0.5. R[ref] = 0.19%. Three values of the refractive index n are considered (1.33, 1.40, 1.51). (a–c) Simulations based on equation (25). (d–f) Experiment. The peak corresponding to the focus on interface 2 coincides with the center of interference fringes only when n[im] = n ((c) and (f)). In the text Fig. 6 Simulation of γ(z) (Eq. (26)) for different values of the reflecting structure depth z[S]. The parameters of the simulation are λ[0] = 750 nm, ∆λ = 200 nm, n = 1.45, n[im] = 1.40, NA = 0.5. In the text Fig. 7 Signal attenuation α as a function of depth in the sample for various values of Δn. Simulation based on equation (29) with λ[0] = 750 nm, ∆λ = 200 nm, NA = 0.5, n[im] = 1.4. In the text Fig. 8 Depth (z[S]) in the sample that can be probed without significant loss of both signal amplitude and lateral resolution, as a function of refractive index mismatch. Simulation based on equation (33) , with n[im] = 1.40 and λ[0] = 750 nm, for different values of the microscope objective NA. In the text Fig. 9 LC-OCT images (vertical sections) of TiO[2] particles embedded in PDMS (n ~ 1.41), using an immersion medium of refractive index n[im] = 1.40 (a) and n[im] = 1.33 (b). In the text Fig. 10 Theoretical defocus (∆z = |z[coh] − z[foc]|) in the different skin layers with an immersion medium of refractive index n[im] = 1.40 (a) and n[im] = 1.33 (b) compared to the tolerable defocus (∆z [foc]). Calculations based on equations (32) and (14) with NA = 0.5, λ[0] = 750 nm and n = 1.54/1.41/1.38 (stratum corneum/epidermis/dermis). In the text Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on Vision4Press platform. Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days. Initial download of the metrics may take a while.
{"url":"https://jeos.edpsciences.org/articles/jeos/full_html/2024/01/jeos20240025/jeos20240025.html","timestamp":"2024-11-14T07:16:05Z","content_type":"text/html","content_length":"221951","record_id":"<urn:uuid:518352e6-dc59-47eb-87a6-75ad5e999e12>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00739.warc.gz"}
The History of Negative Numbers Although the first set of rules for dealing with negative numbers was stated in the 7th century by the Indian mathematician Brahmagupta, it is surprising that in 1758 the British mathematician Francis Maseres was claiming that negative numbers "... darken the very whole doctrines of the equations and make dark of the things which are in their nature excessively obvious and simple" . Maseres and his contemporary, William Friend took the view that negative numbers did not exist. However, other mathematicians around the same time had decided that negative numbers could be used as long as they had been eliminated during the calculations where they appeared. It was not until the 19th century when British mathematicians like De Morgan, Peacock, and others, began to investigate the 'laws of arithmetic' in terms of logical definitions that the problem of negative numbers was finally sorted out. However, there were references to negative numbers far earlier... In 200 BCE the Chinese number rod system (see note1 below) represented positive numbers in Red and Negative numbers in black. An article describing this system can be found here . These were Image used for commercial and tax calculations where the black cancelled out the red. The amount sold was positive (because of receiving money) and the amount spent in purchasing something was negative (because of paying out); so a money balance was positive, and a deficit negative. The concept also appeared in Astronomy where the ideas of 'strong' and 'weak' were used for approximating a number from above or below. For example approaching 5 from above means for example, starting with 5.2 you can find better approximations 5.1, 5.05, 5.025. Thus 5.025 was called a 'strong' approximation and a number like 4.9 'weak'. So 'strong' numbers were called positive and 'weak' numbers negative , negative numbers did not appear until about 620 CE in the work of Brahmagupta (598 - 670) who used the ideas of 'fortunes' and 'debts' for positive and negative. By this time a system based on place-value was established in India, with zero being used in the Indian number sytem. Brahmagupta used a special sign for negatives and stated the rules for dealing with positive and negative quantities as follows: A debt minus zero is a debt. A fortune minus zero is a fortune. Zero minus zero is a zero. A debt subtracted from zero is a fortune. A fortune subtracted from zero is a debt. The product of zero multiplied by a debt or fortune is zero. Image The product of zero multiplied by zero is zero. The product or quotient of two fortunes is one fortune. The product or quotient of two debts is one fortune. The product or quotient of a debt and a fortune is a debt. The product or quotient of a fortune and a debt is a debt. The conflict between geometry and algebra The ancient Greeks did not really address the problem of negative numbers, because their mathematics was founded on geometrical ideas. Lengths, areas, and volumes resulting from geometrical constructions necessarily all had to be positive. Their proofs consisted of logical arguments based on the idea of magnitude. Magnitudes were represented by a line or an area, and not by a number (like 4.3 metres or 26.5 cubic centimetres). In this way they could deal with 'awkward' numbers like square roots by representing them as a line. For example, you can draw the diagonal of a square without having to measure it (see note 2 below). About 300 CE, the Alexandrian mathematician Diophantus (200 - c.284 CE) wrote his , a collection of problems where he developed a series of symbols to represent the 'unknown' in a problem, and powers of numbers. He dealt with what we now call linear and quadratic equations. In one problem Diophantus wrote the equivalent of 4 = 4x + 20 which would give a negative result, and he called this result 'absurd'. In the 9th century in Al - Khwarizmi (c.780 - c.850 CE) presented six standard forms for linear or quadratic equations and produced solutions using algebraic methods and geometrical diagrams. In his algebraic methodshe acknowledged that he derived ideas from the work of Brahmagupta and therefore was happy with the notion of negative numbers. However, his geometrical models (based on the work of Greek mathematicians) persuaded him that negative results were meaningless (how can you have a negative square?). In a separate treatise on the laws of inheritance, Al-Khwarizmi represents negative quantities as debts. In the 10th century Abul -Wafa (940-998 CE) used negative numbers to represent a debt in his work on 'what is necessary from the science of arithmetic for scribes and businessmen'?. This seems to be the only place where negative numbers have been found in medieval Arabic mathematics. Abul-Wafa gives a general rule and gives a special case where subtraction of 5 from 3 gives a "debt" of 2. He then multiples this by 10 to obtain a "debt" of 20, which when added to a 'fortune' of 35 gives 15. In the 12th century Al - Samawal (1130 - 1180) had produced an algebra where he stated that: • if we subtract a positive number from an 'empty power', the same negative number remains, • if we subtract the negative number from an 'empty power', the same positive number remains, • the product of a negative number by a positive number is negative, and by a negative number is positive. Negative numbers did not begin to appear in Europe until the 15th century when scholars began to study and translate the ancient texts that had been recovered from Islamic and Byzantine sources. This began a process of building on ideas that had gone before, and the major spur to the development in mathematics was the problem of solving quadratic and cubic equations. As we have seen, practical applications of mathematics often motivate new ideas and the negative number concept was kept alive as a useful device by the Franciscan friar Luca Pacioli (1445 - 1517) in his Summa published in 1494, where he is credited with inventing double entry book-keeping. Solving equations The story of the solution of equations begins in Italy in the 16th century (see note 3 below). This story is full of intrigue and deception because methods of solution were kept secret. The issue which caused most consternation at the time was the meaning of $\sqrt{-1}$. In fact, Cardano (1501 - 1576) in his Ars Magna of 1545 had to solve a problem where $\sqrt{-15}$ appeared. Cardano found a sensible answer (see note 4 below) by working through the algorithm, but he called these numbers 'ficticious' because not only did they disappear during the calculation, but they did not seem to have any real meaning. However, by 1572, the Italian engineer, Bombelli (1526 - 1572) had provided the correct rules for working with these 'imaginary' numbers(see note 5 below). In the 17th and 18th century, while they might not have been comfortable with their 'meaning' many mathematicians were routinely working with negative and imaginary numbers in the theory of equations and in the development of the calculus. The English mathematician, John Wallis (1616 - 1703) is credited with giving some meaning to negative numbers by inventing the number line, and in the early 18th century a controversy ensued between Leibniz, Johan Bernoulli, Euler and d'Alembert about whether $\log (-x)$ was the same as Log(x). By the beginning of the 19th century Caspar Wessel (1745 - 1818) and Jean Argand (1768 - 1822) had produced different mathematical representations of 'imaginary'numbers, and around the same time Augustus De Morgan (1806 - 1871), George Peacock (1791 - 1858) William Hamilton (1805 - 1865) and others began to work on the 'logic'of arithmetic and algebra and a clearer definition of negative numbers, imaginary quantities, and the nature of the operations on them began to emerge. Negative numbers and imaginaries are now built into the mathematical models of the physical world of science, engineering and the commercial world. There are many applications of negative numbers today in banking, commodity markets, electrical engineering, and anywhere we use a frame of reference as in coordinate geometry, or relativity theory. Pedagogical Note: It seems that the problems that people had (and now have - see the Lottery incident ) in understanding the use of negative numbers concerns: • the difference between the operation of subtraction and the object (a negative number), since the same sign is used for both • the language involved like 'minus minus 3' as opposed to 'subtract negative 3' • separating the physical model or analogy (be it profit/loss or rise/fall in temperature or rotation/direction in the plane) from the rules of operating on the entities. ReferencesEnglish Mathematicians Francis Maseres (1731 - 1824) A dissertation on the use of the negative sign in algebra. (1758) Fellow of Clare College Cambridge and Fellow of the Royal Society William Frend Principles of Algebra (1796) Printed by J. Davis, for G. G. and J. Robinson, Paternoster Row Other Sources Berggen, J.L. (1986) Episodes in the Mathematics of Mediaeval Islam . Springer-Verlag N.Y. andBerlin. Menninger, K. (1969) Number Words and Number Symbols . M.I.T. Press Cambridge, Mass. andLondon. Schubring, G. (2005) Conflicts Between Generalization, Rigor, and Intuition: Number Concepts Underlying the Development of Analysis in 17 - 19th Century France and Germany . Springer-Verlag N.Y. Ifrah, G. (1998) The Universal History of Numbers . Harvill Press, London. Li Yan and Du Shiran (Tr. Crossley, J.N and Lun A. W. ) (1987) Chinese Mathematics: a Concise History . O.U.P. Oxford. Web References Mactutor at St Andrews University 1. The counting rod system was certainly in operation in the period (475 - 221 BCE) - called the period of the 'Warring States' [Yan andShiran 1987, 7/8]) 2. In our notation, $\sqrt{2}$ and $\sqrt{5}$ occurred when finding the diagonal of a square or constructing the Golden Section. 3. The period from Pacioli (1494) to Descartes (1637), a period of about 150 years brings the solution of equations to a stage where they could be understood by school pupils today. 4. In modern notation, Cardano's multiplication was $(5-\sqrt{-15} )(5+ \sqrt{-15} )$, and applying the rule for brackets this becomes $25 - -15 = 40$. 5. Even though mathematicians did not find a suitable representation for negative numbers, it did not prevent them from following the ordinary rules of arithmetic and developing rules for the imaginary numbers as well. This is where the beauty of mathematical invention is not limited by the 'real' world.
{"url":"https://nrich.maths.org/articles/history-negative-numbers","timestamp":"2024-11-12T03:07:45Z","content_type":"text/html","content_length":"50101","record_id":"<urn:uuid:c57ab8de-ff9d-4492-826b-e2e85233973d>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00583.warc.gz"}
Edge removal balances preferential attachment and triad closing Most network formation analysis studies are centered on edge addition. However, edges in real world networks often have a rapid turnover with a large number of edges added and removed between each node addition or removal steps. In such a case, quasiequilibrium is obtained between edge addition and deletion. Edges have been shown to be added to nodes with a high degree and between pairs of nodes with a high number of common neighbors. If not balanced by a degree dependent edge removal, the preference for high degree nodes and node pairs with many common neighbors is expected to increase the average degree of high degree nodes and their clustering coefficient until very large cliques will be formed. Since such large cliques are not formed in real world networks, we conclude that the edge removal probability around high degree nodes and between node pairs with many common neighbors should be higher than around other nodes. We here show the existence of such a balancing mechanism through the relation between the future edge removal probability around nodes and their degree and a similar relation between the edge removal probability and the number of common neighbors of node pairs. In some networks, this preferential detachment process represents an explicit saturation process, and in others, it represents a random deletion process accompanied by a sublinear edge preferential attachment process. A more complex mechanism emerges in directed networks where the preferential detachment can be proportional to the in and out degrees of the nodes involved. In such networks, preferential detachment is stronger for the incoming edges than for the outgoing edges. We hypothesize multiple possible mechanisms that could explain this phenomenon. Dive into the research topics of 'Edge removal balances preferential attachment and triad closing'. Together they form a unique fingerprint.
{"url":"https://cris.biu.ac.il/en/publications/edge-removal-balances-preferential-attachment-and-triad-closing-4","timestamp":"2024-11-08T07:40:54Z","content_type":"text/html","content_length":"58235","record_id":"<urn:uuid:855988a3-c14b-49ba-9e05-f8dffdbd9da9>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00437.warc.gz"}
Last Minute C Programming Logical Operators Tutorial C programming language logical operators are often used along with conditional operators to combine two are more conditions. Use of conditional operators reduces the number of IF and ELSE statements and improves performance of C program. Also, the size of C program is considerably reduced. Note: There are no keywords like AND, OR, NOT, TRUE and FALSE in c language. NOT (!) operator works on single operand or expression where as AND (&&) and OR (||) operators require two operands. C Programming Logical Operators There are three logical operators in C language. 1. AND - && - Ampersand Ampersand 2. OR - || - Pipe Pipe 3 NOT - ! - Exclamation Example 1: Without logical operators int main() int b=10; if(b <= 10) if(b >= 5) return 9; Here we used two IF statements with two conditions to reach YES output. Example 2: With logical operators int main() int b=10; if((b <= 10) && (b >=5)) return 9; Here two conditions are combined using && operator. Now the code looks simple and compiler takes less time to compile and the performance is improved. Output Chart or Truth Table of C Logical Operators Logical AND operation requires both operands to be true to give output as TRUE. Logical OR operation requires just once true operand to give output as TRUE. Logical NOT (!) simply makes TRUE to false and FALSE to true. Logical operators && and || are also called Short Circuit operators because they skip evaluating second operand if the outcome is already decided with first operand as TRUE. Operand 1 OP1 Operand 2 OP2 OP1 && OP2 OP1 || OP2 false false false false false true false true true false false true true true true true Observation: Advantage of logical AND operator is that if left side operand (Operand 1) is false, it will not look the second operand and directly gives false output. Similarly, only if left operand is TRUE, right side operand is checked for true or false. Logical OR operator also skips the second operand if the first operand is TRUE. This greatly reduces time to solve second operand or second expression in many scenarios. Hierarchy or Priority or Precedence of Logical Operators vs Conditional Operators vs Arithmetic Operators NOT (!) operator has got the highest priority. Logical Operators have less priority than Arithmetic Operators. Look at the chart below. Conditional operators have more priority than logical Priority Operator Symbol Operator Name 1 ! Logical NOT 2 *, /, % Multiplication, Division, Modulo 3 +, - Addition, Subtraction 4 <, <=, >, >= Less than, Less than or equal to, Greater than, Greater than or equal to 5 ==, != Equal to, Not equal to 6 && Logical AND 7 || Logical OR 8 = Assignment Example 3: int main() int year = 2019; if((year%4 == 0) && (year%100!=0)) printf("LEAP YEAR"); if(year%400 == 0) printf("LEA YEAR); return 9; So, proper use of Arithmetic, Conditional and Logical operators ease programming and maintain large code bases. C programming online tests on Conditional, Logical Operators or Statements 1 Conditional Statements Operators - Online Test 1 2 Conditional Statements Operators - Online Test 2
{"url":"https://examtray.com/index.php/tutorials/last-minute-c-programming-logical-operators-tutorial","timestamp":"2024-11-08T21:48:02Z","content_type":"text/html","content_length":"213590","record_id":"<urn:uuid:136bb1ed-7b53-4b0f-a523-77d73e2b2c93>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00088.warc.gz"}
Sales variance analysis: how state-of-the-art analytical tools can contribute to increased profitability - Hinterhuber & Partners by Evandro Pollono and Rolands Pupkevičs (Hinterhuber & Partners) published in the Journal of Revenue and Pricing Management This article explains sales variance analysis, a business intelligence tool to compare different financial results, enabling prac- titioners to determine which component drives changes in sales and take educated actions. The unique angle of this article is that it bridges theory—by delving into all the details of sales variance—with practice—by highlighting the managerial interpretation of what causes such changes and providing insights as to how to act upon it. It also suggests using this tool for marketing and not simple budgeting purposes, in stark contrast with most theory encountered by the authors. Understanding sales using sales variance analysis Companies are often celebrating increases in total revenue—a simple metric that is universally considered the go-to meas- ure of success. Conversely, decreases in revenue are nega- tively perceived by the top management, so extensive effort is invested into avoiding such decreases at any cost, usually at the expense of profitability. Likewise, increases in total revenue are also prioritized in the media and academic research, imply- ing their paramount importance and making them a favorite key performance indicator (KPI) in the business world. However, practitioners like ourselves hold an alternative view. While acknowledging that an increase in revenue can be a good thing per se, they hold that sales are a medium- term indicator of success that requires further detailed analy- sis. For instance, in some consulting projects, a growth in sales can also be accompanied by a decrease in net selling price or, put differently, a decrease in margins. Companies pursuing volume may end up cannibalizing their profits—a slow and steady process caused by an unfounded strategic focus. A loss in pricing power requires the intervention of practitioners who must analyze transactional data and deter- mine which of the two separate “forces”—price variance or volume variance—is at play. If the increase in sales is obtained through volume at the expense of net price, this may eventually be fatal. Sales variance analysis allows to weigh the different forces at play and gain a better under- standing of the health of a company, whether it is by looking at “the company overall, a division of the company, a cus- tomer segment, or a single customer” (Smith 2021). Research objective C-level managers can and should be aware of the factors that drive changes in sales. To address this need, this paper provides a review of theoretical tools to undertake sales vari- ance analysis, also referred to as “variance analysis.” We complement this review with managerial knowledge and discuss strategic implications of using sales variance analy- sis to interpret past pricing initiatives and implementation and monitoring of new ones. The objective of the article is to fill the gap observed in marketing, controlling, and business analytics books—namely the lack of managerial implications related to sales variance analysis, also noted by other authors (Mitchell and Olsen 2003). Table 1 reviews 27 textbooks chosen for their academic impact, relevance to the topics of management and data analytics, or adopted by leading universities worldwide. As shown in Table 1, only two textbooks cover sales variance analysis, about 7.5% of the total, while none of them, 0% of the total, discusses its managerial implementation and the centrality of its applica- tion in real business circumstances. Furthermore, most previously published articles and papers tend to focus on the costing or accounting aspects of sales variance analysis, thereby comparing internal budgets with the actual performance of one year (Hulbert and Toy 1977; Kaplan 2000), instead of measuring the variances over longer periods and suggesting how to use the insight for key account management and marketing in Recent studies on pricing also ignore the importance of sales variance (Mohaupt and Hilbert 2015; von Mar- tens and Hilbert 2011; Lord and Yeoman 2012; Choi et al. 2015; Meissner and Strauss 2010; Zhang and Bell 2012; Liozu 2017, 2019, 2021; Cross, Higbie, and Cross 2011; Cleophas and Frank 2011; McMahon-Beattie et al. 2016; Queenan et al. 2011; Yip 2012) with the exception of Smith (2021) who explains the topic under the lens of profit bridge. Smith’s (2021) paper focuses “on a profit bridge that decom- poses changes in profits to common marketing variables and ignores other issues, such as exchange rates and fixed cost changes,” while the current article does delve into such vari- ables in order to provide a fuller spectrum of real business situations. While Smith (2021) recognizes that this analy- sis allows “to measure the performance of specific busi- ness variables under management in comparison to a prior period,” the current article takes this general assessment a step further by identifying managerial reasons that could cause changes detected in the analysis or managerial deci- sions that could spark from the analysis. How practitioners see EBITDA As the old adage “turnover is vanity” reminds us that the obsession over revenue and market share could eventually lead to bankruptcy: not being able to discriminate between organic growth—helping overall profitability—and artifi- cial growth—happening at the expense of profitability—is a risk that all companies face. This misconception does not leave unscathed even some of the largest companies in the world. General Motors tended to overly rely on KPIs such as volume and market share and celebrating them publicly; this could be among the major reason behind its decline, ultimately leading to its bankruptcy in 2009—a cautionary tale urging practitioners to master sales variance analysis. As shown in Fig. 1, two increases in sales of the same magnitude, 5% in our example, may have opposite effects on profitability. The left panel of Fig. 1 shows the optimal case when the price is increased, while there is no change in the volume of sold units. In the real world, this situation can occur when a company starts explaining its unique value to customers, demonstrating its competitive advantages. Con- versely, the right panel of Fig. 1 shows the case of a com- pany where an increase in sales is achieved by an excessive price reduction—that is, when many more units are sold but at a much lower price. In a business world scenario, this may occur when a company is lowering the price to boost sales in an effort to gain market share or when it is overly obsessed with revenue. At the end of the day, the second company will probably achieve a lower profitability (EBITDA) than in the previous period. The remainder of this article is structured as follows. Sec- tion Pricing as a discipline and sales variance analysis as a tool provides a short review of sales variance analysis as a tool to analyze price and its evolution over time. The term variance suggests that it relates to a difference, implying a comparison between two or more sets of data, and such data comparison occurs at different levels. Accordingly, in Sec- tion First-level pricing sales variance analysis: price–volume Fig. 1 Sales increases of same magnitude with opposite effects in profitability (two different companies) variance, first-level variance is discussed, as in the case of a company that sells one product in one currency with varying levels of price and quantity. In Section Second-level pricing sales variance analysis, other levels of sales variance analy- sis are presented, as in the case of a company trading in mul- tiple currencies and selling multiple products with chang- ing product mix over different years (or generally speaking: over different periods). Section Best practices and criteria for exclusion presents practical situations that practitioners find in business life. Section Summary of formulae gives an overview of the formulae used for reviewing purposes. Sec- tion Visual representation of sales variance analysis shows how the sales variance is usually represented graphically to maximize its ability to convey a large amount of information at a glance, while Sections 8 and 9 conclude the article along with the suggestion of future avenues of research. Pricing as a discipline and sales variance analysis as a tool Variance analysis used in this article is meant for pricing purposes and compares different periods of actual sales. The same formulas can also be used in costing and accounting to compare budgets with actual sales (Kaplan and Gallani 2016; Simons 2016). However, the strategic objective of the latter application of variance analysis case is different and will not be discussed here. Pricing is a quintessential lever of the marketing mix; it can drive profitability in the short term and is the only element of the marketing mix that has a direct and immedi- ate effect on profits: while changing prices is not cost-free (Bergen et al. 2003), the cost of changing prices is negligible compared to the potential benefits of implementing price changes; in one concept: price is the only revenue-generating element of the marketing mix (Hinterhuber and Snelgrove 2017). Sales variance analysis is an effective tool to analyze price and its evolution over time. Prior to deciding how to improve the financial position of the company, marketing and finance managers should undertake this analysis on a product-by- product and even product-by-customer level (keeping in mind that the same product can be sold to different custom- ers at different prices). Sales variance analysis usually starts at a company level but should be performed across different dimensions, including business units, departments, single products, single customers, and so forth (see Section Best practices and criteria for exclusion for further detail). In an archetypal textbook scenario, a company sells a sin- gle product at one or two price points and at one or two vol- ume levels. In such cases, the impact of a change in price or volume on sales can easily be assessed. In reality, however, multinational companies sell thousands of SKUs to multiple customers over many time periods, in different currencies that frequently change during the period, and at prices vary- ing in terms of volume purchased and with ad hoc negotia- tions. Obviously, this complexity is much more difficult to capture, and most companies give up on making sense of it in a holistic way, preferring to focus on single products and doing their best with limited information at hand. In this article, however, we argue that there are better ways to capture this complexity and to see how the company as a whole—or a single business unit, department, or even individual sales manager—is performing in terms of sales performance and profitability. Specifically, we argue that sales variance analysis is an indispensable tool to effectively analyze the wealth of information hidden in (big) sales data to make educated decisions. It is worth adding at this point that the following analyses can be undertaken with simple spreadsheet software in the case of small datasets, while more powerful tools such as Microsoft PowerBI® are better for larger ones; the latter also allows to include add-ons to automatize given tasks for industry-specific needs. Sales variance analysis: how state‑of‑the‑art analytical tools can contribute to increased… First‑level pricing sales variance analysis: price–volume variance In this section, we discuss the first level of pricing sales vari- ance. Figure 2 summarizes sales of a single product in two periods: Year-1 (Last Year, LY) and Year-0 (Current Year, CY). We focus on the following three scenarios: (1) change in volume; (2) change in price; (3) change in both volume and price. The corresponding scenarios are discussed in Sec- tions Change in volume–Change in price–Changes in both price and volume. Change in volume In this scenario, the number of product units sold in the current year has increased with respect to the previ- ous year (Fig. 3). Selling at the same price as last year (PCY = PLY = 15), but increasing the number of sold units from 10 to 11 increases yearly revenue from 150 to 165 EUR. Therefore, pure volume variance is 15 EUR. The formula used here is as follows: (QCY − QLY) ∗ PLY. The practitioner’s interpretation: win rate From a pricing practice perspective, the increase in quantity sold could result from an increase in a win rate, possibly due to a successful explanation of the prod- uct’s competitive advantages vis-à-vis other alternatives (Hinterhuber 2004). Having the ability to isolate sales of products or product lines experiencing such an increase in demand can unveil causes such as new sales techniques implemented by a key account manager if not a whole department, new trends happening in the market, or insights on purchasing behavior of single accounts. Change in price In this scenario, the product has witnessed an increase in price from the previous year with respect to the current year (Fig. 4). A unit level (volume or QCY = QLY) of 10 units and a price increased by 20% (from 15 to 18 EUR) result in a yearly revenue increase from 150 to 180 EUR. Thus, pure price variance is 30 EUR. The formula in this specific case is (PCY – PLY) * QLY. However, it changes in the event of joint variance (see Section Changes in both price and volume), that is, when both price and volume change as compared to the previous year. Thus, the “complete” formula for Price Variance is (PCY – PLY) * QCY. The practitioner’s interpretation: segmentation From a pricing perspective, the situation in which prices change within the same product or product line could be the result of a new segmentation where the company sells prod- ucts to customers who are more able to extract considerable value from the offering and are thus willing to pay a higher price. Pinpointing specific accounts paying a premium for a product with respect to others and analyzing the underlying reasons for it gives tremendous insights on how to improve existing segmentation and, for example, passing from a purely volume-based segmentation to a more sophisticated industry sub-segment or company maturity segmentation; similarly, it allows to monitor if a new segmentation is pay- ing the expected dividends. Changes in both price and volume In this scenario, both price and volume have changed between the previous year and the current year, resulting in three separate effects, all of which add up to the total sales variance. As discussed in Sections Change in volume–Change in price, pure volume and price variances are 15 EUR and 30 EUR, respectively, adding up to EUR 45. However, this amount is not enough to justify the total sales variance. In this scenario, a third effect comes into play: joint variance. Joint variance is a result of the current vs. last year price dif- ference multiplied by the current year vs. last year volume difference. The formula used here is as follows: (PCY – PLY) * (QCY QLY). This formula explains only joint variance, for a total of (18 − 15) * (11 − 10) = 3. Accordingly, in order to sim- plify formulas, joint variance is conventionally added to pure price variance, leading to a shorter formula of price variance that covers both pure price variance and joint variance: (PCY PLY) * QCY. In Fig. 5, price variance is the combination of Variance due to price change and Joint variance. In the example, the total sales variance is 48 EUR, which consists of volume variance (15 EUR) and price variance (33 EUR). The latter amount consists of 30 EUR of pure price variance and 3 EUR from joint variance. In our experience, this frequently causes confusion, so we shall reiterate: price variance is a combination of joint variance and pure price variance. The practitioner’s interpretation: unique selling proposition From a pricing perspective, when both prices and quantities witness a change, this is usually a result of multiple activities, such as the company’s ability to effectively segment the market and explain to each segment the unique competitive advantages it delivers, which results in both higher win-rates and prices. Second‑level pricing sales variance analysis Unlike in the examples discussed in Section First-level pric- ing sales variance analysis: price–volume variance, most companies operate in countries with different currencies and offer a wide range of differently-priced products to their cus- tomers. This impact of a changing currency and portfolio of items on the total revenue results in the fact that simple price and volume variances cannot provide a complete and com- prehensive picture. Accordingly, price and volume variances need to be “disaggregated” (Hawkin and Cohen 2004). Illus- trations of such disaggregation are provided in Sects. Impact of currency exchange–Impact of product portfolio mix. Figure 6 shows the link among all variances (Coskun 2016), each level representing a further breakdown of the previous one. Practitioners and business analysts drill down even further by adding different filters and exclusion criteria (the managerial implications of which are discussed in Sec- tion Best practices and criteria for exclusion). The drilling- down process continues to the point where the effort no longer justifies the obtained insights (Hawkin and Cohen 2004). Impact of currency exchange In this section, we discuss the disaggregation of price vari- ance. Let us elaborate on the example discussed in Sec- tion First-level pricing sales variance analysis: price–volume variance and assume that EUR is the reporting currency but that actual sales are made in USD (for the sake of conversa- tion it is assumed that there are no bank fees for converting currencies, meaning the buy and sell exchange rates coincide with one simple exchange rate). In that case, we can and need to disaggregate price variance into the following two components: (1) price variance at fixed exchange rate and (2) Fx rate variance. This disaggregation makes it possible to clearly differentiate between what has been achieved by the company through price (i.e., price variance at fixed exchange rate), on the one hand, and what has been determined by external factors (i.e., Fx rate variance), on the other hand. Then, let us assume that last year, the price in local cur- rency was 16 USD, while, this year, the price is 15 USD. Accordingly, the EUR/USD exchange rate has changed from 0.9375 last year to 1.2 this year. Based on the data, we calculate price variance at fixed exchange rate and Fx rate variance as follows: Price variance @ fixed exchange rate = (PCY in Local currency @ FxRate LY − PLY in Local currency @ FxRate LY) ∗ QCY = (15 USD ∗ 0.9375 − 16 USD ∗ 0.9375) ∗ 11 = −10.31 EUR FxRate variance = (PCY in Local currency @ FxRate CY − PCY in Local currency @ FxRate LY) ∗ QCY = (15 USD ∗ 1.2−15 USD ∗ 0.9375) ∗ 11 = 43.31 EUR Price variance From the formulae above, we see that, in line with our calculation in Section First-level pricing sales variance analysis: price–volume variance, the total price variance is 33 EUR: = Price variance@fixed exchange rate + FxRate variance = − 10.31 + 43.31 = 33 EUR However, if the calculation of price variance is done using only the simple formula (PCY – PLY) * QCY, a lot of valu- able information would be ignored. If a company operates in multiple markets and deals with various currencies, evaluating consolidated results in reporting currency is not sufficient. In the example above, it is the disaggregation of price variance into price variance at fixed exchange rate and Fx rate variance that allows us to reveal that actual price decreased (which is the opposite effect as compared to what we see if we analyze results only in the reporting currency), and that positive price variance should be solely attributed to favorable currency exchange rate movements. Impact of product portfolio mix This section discusses the disaggregation of volume variance. As discussed previously, unlike in the ideal scenario of a com- pany selling one product, in the business world, companies are selling multiple products (in fact, hundreds or even thousands of products) instead of just one. For the sake of simplicity, let us consider the example of a company that sells two products. Corresponding prices and volumes are summarized in Table 2. As shown in Table 2, prices of both products have increased by 10% from the previous year with respect to the current year. The total quantity of sold products has also increased by 10%. Nevertheless, total revenue has decreased by 8 EUR. A closer look at the performance of each of the two products suggests that the quantity of Product 1 increased (from 60 to 85), while that of Product 2 decreased (from 40 to 25). variance with fully eliminated mix variance) and mix vari- ance (i.e., share of volume variance coming exclusively from a change in product mix). These variances need to be calculated for each product separately and then summed up. The corresponding formulas are as follows: Quantity variance = � (QCY@LY Mixi − QLYi) ∗ PLYi = (66−60) ∗ 2 + (44 − 40) ∗ 6 = 12 + 24 = 36 EUR Mix variance = SUMi (QCYi − QCY@LY Mixi) ∗ PLYi = (85 − 66) ∗ 2 + (25 − 44) ∗ 6 = 38 + (− 114) = − 76 EUR The calculations above show that Product 2 has a strongly negative mix variance, which leads to negative volume vari- ance and a decrease in total revenue. Eventually, summing quantity variance and mix variance yields volume variance (see below): Volume variance = Quantity variance + Mix variance = 36 + (− 76) = − 40 EUR When dealing with large portfolios, it might be difficult to use mix variance and interpret its implications. However, mix variance analysis can be extremely useful to estimate changes in one specific product category or group. It can reveal whether price changes in supplementary or substitute A possible managerial interpretation of this result is that products have led to favorable changes in product mix and, the two products are substitutes, and a price increase of 10% has incentivized customers to switch from the higher-priced product (Product 2) to the lower-priced alternative (Product 1). As a result, total revenue has suffered even though they managed to sell a higher number of While this interpretation is intuitive, the calculation of mix variance makes it possible to quantify this effect. To this end, we need to calculate the sold quantity for each product, assuming there has been no change in product mix. As shown in Table 2, the LY mix has been 60% (Product 1) and 40% (Product 2). Provided there was no change in the mix, 66 units of Product 1 and 44 units of Product 2 would be sold. Based on this, we can calculate quantity variance (i.e., share of volume eventually, have contributed to total revenue growth. Mix variance analysis can also reveal that the introduction of low-price products cannibalized sales of high-price value- added products, meaning that, even with increased sales, the company is worse off at the end. Best practices and criteria for exclusion When bridging theory with practice, one may find special scenarios to be particularly daunting. In this section, we discuss the most critical and common circumstances that financial managers and C-level decision makers encounter when using sales variance analysis. On some occasions, a product may have a list price, but coming from a period prior to the ones analyzed leaving doubt of whether it should be used or not. In other cases, a product may not have one unique price but one dependent on the quantity purchased— the common case of tiered/scale pricing. In cases like these, the practitioners’ experience can be invaluable. The follow- ing paragraphs aim at illustrating the most common special cases that can be found in business life and how to solve them. Tiered or scale pricing Companies practice tiered or scale pricing in many indus- tries, meaning that different prices are quoted for different volume brackets. For example, if a customer buys 1–10 units of a product, the unit price is 100 EUR. Furthermore, if the purchase amount ranges from 11 to 100 units, the unit price decreases to 50 EUR. Finally, if the purchased amount exceeds 100 units, then the unit price drops to 25 EUR. Ana- lyzing such pricing structure using sales variance analysis can be a challenging task. For instance, if the company sold 150 units at the unit price of 25 EUR in the previous year and 9 units at the unit price of 100 EUR in the current year, simple sales variance analysis would yield the following results: Price variance = (PCY − PLY) ∗ QCY = (100−25) ∗ 9 = 675 EUR Volume variance = (QCY − QLY) ∗ PLY = (9−150) ∗ 25 = − 3 525 EUR As shown in the results, volume variance is strongly negative, which is explained by the dramatic decrease in the number of sold items (from 150 to 9 units). In contrast, the effect of price variance is positive and very strong. Furthermore, while the calculations presented above are mathematically correct, they are difficult to interpret from a business per- spective. In this instance, price increases due to the bracket of reference change from high volume/low price to low vol- ume/high -price, and not due to a higher negotiated price. Therefore, a better approach would be to run sales vari- ance analysis separately for each volume–price bracket and then aggregate the results on the product level. This approach would help to eliminate price variances when pur- chased products move across brackets. This best practice makes it possible to capture price variances only when there are actual price changes within each volume–price bracket. Volume discounts/end‑of‑year rebates Volume discounts—that is, price reductions offered to cus- tomers reaching a certain sales amount—are a consolidated practice in virtually every industry. Volume discount can be expressed as a percentage of current-year or next-year sales or as an absolute amount (end-of-year rebate). This raises the question of how to treat volume discounts and rebates using sales variance analysis. Here, while there is no single “correct” approach, and different companies apply different approaches, the common practice is to ignore these effects, as it is virtually impossible to assign a given discount/rebate to an individual product. Moreover, decisions about volume discounts/rebates might come from managerial levels different from those that are being optimized through sales variance analysis. In fact, sales variance analysis is frequently used to improve the performance of sales and pricing managers—that is, employ- ees who are not usually responsible for assigning volume discounts/rebates. Therefore, in order not to pollute the per- formance of departments or managers with the effects that are not attributable to those departments and managers, a justified practice is to omit volume discounts/rebates from sales variance analysis. New Products In the case of new products (for which sales start only in the current year), no price and volume information from the previous year is available. In this situation, the common approach is to assume that price in the previous year is the same as in the current year. For considerations of validity, it cannot be assumed that price was 0, as the 0 value would invalidate the results. However, concerning volume, it is valid to assume that volume increased from 0 units to any number of units. Let us consider an example where, in the current year, a company launched a new product and sold 10 units at the unit price of 5 EUR. Sales variance calculation yields the following results: Price variance = (PCY − PLY) ∗ QCY = (5−5) ∗ 10 = 0 Volume variance = (QCY − QLY) ∗ PLY = (10−0) ∗ 5 = − 50 EUR In this case, total sales variance can be fully attributed to volume variance. From a business perspective, this is logi- cal, as no products were sold last year, so there cannot be any price Discontinued Products In contrast, new products discussed in Section New Prod- ucts, discontinued products are products that were sold last year, but had no sales in the current year. Accordingly, no price and volume information for the current year is avail- able. As in the example in Section New Products, the com- mon practice is to assume that price in the current year is the same as that in the previous year. Therefore, if a company sold 10 units at the unit price of 5 EUR in the previous year, but 0 units in the current year, sales variance calculation yields the following results: Price variance = (PCY − PLY) ∗ QCY = (5−5) ∗ 10 = 0 Volume variance = (QCY − QLY) ∗ PLY = (0−10) ∗ 5 = − 50 EUR As in the example with new products discussed in Sec- tion New products, total sales variance in the case of discon- tinued products can be fully attributed to volume variance. Reactivated products In another scenario, there are no sales of a given product in one year in-between the other two years where sales of a given product took place. Therefore, this scenario encompasses the circumstances of both new and discontinued products dis- cussed in Sects. New products–Discontinued products. Let us consider an example where a company sold 10 units of a product at the unit price of 5 EUR in Year-2 had no sales of this product in Year-1, and then sold 9 units of the product at the unit price of 6 EUR in the current year (Year-0, see Table 3). Let us assume that the management wants to under- stand sales performance in the current year. In this situation, two available approaches to perform cal- culations using sales variance analysis are available. The first approach (Approach 1) is to strictly apply basic formulas (see below). Price variance = (PCY − PLY) ∗ QCY = (6−6) ∗ 9 = 0 Volume variance = (QCY − QLY) ∗ PLY = (9−0) ∗ 6 = 54 EUR One of the advantages of this approach is straightforward, as it applies sales variance formulas to compare Year-0 to Year-1. Another advantage of this method is that it is easy to implement. However, its limitation is that it assumes there is no historical price, so it does not provide business insights. The second approach (Approach 2) is to apply business logic. Here, instead of considering the product as new in Year- 0, it can be considered as reactivated, implying that the com- pany holds historic price information from another period that can be used in the calculation. Admittedly, investigating past prices can be a time-consuming activity. With this approach, the last available price (from Year-2 in our example) is used to calculate price variance, while the traditional formula is applied for volume variance (i.e., Year-0 vs. Year-1 in this case). The calculation was performed as follows: Price variance = (PCY − PLY<−LAST AVAILABLE YEAR) ∗ QCY = (6−5) ∗ 9 = 9 EUR Volume variance = (QCY − QLY) ∗ PLY = (9−0) ∗ 5 = 45 EUR Both Approach 1 and Approach 2 outlined above are cor- rect; the trade-off is between faster results or more insights on given products. For instance, when a product witnesses a 2-year replace- ment cycle (e.g., a customer replaces air-filters every two years), Approach 2 (but not Approach 1) guarantees that price changes are captured in the analysis—with respect to the last known price. Conversely, Approach 2 (but not Approach 1) can be applied to longer time horizons until a last available price can be found. The practitioner’s interpretation: the right timeframe Practitioners agree that investigating prices makes sense only up to a specific point. As a rule of thumb, prices older than five years are typically no longer relevant, as they refer to products that changed even if they maintained the same SKU, or COGS had varied considerably, and the different prices are not comparable, or the effort to find these prices is not justified by their contribution to total sales (if there have been no sales over many years, this is usually the case). Product level vs. customer‑product level Sales variance analysis can be performed on the product level or the customer-product level. The former level of sales variance analysis is mostly used when there are no client- specific pricing policies, as in the case of retail or “com- modity” industries. The latter level is mostly used in the B2B industry. In this case, the analysis focuses on a product at a time in Sales variance analysis: how state‑of‑the‑art analytical tools can contribute to increased… 67 the customer portfolio, with the subsequent tracing of its evolution over time. This is so because some policies and actual prices are usually dealt with on the single-customer level, with individual tailoring that cannot be generalized to the entire company. Analyses on the product-customer level are the quintessential elements in the pricing toolbox for sales directors and sales managers to always improve performance. The practitioner’s interpretation: commodity The reference to “commodity” is used in this example to convey the idea of a “fix” price as in the commodity stock market, where even “fix” is arguable as it is dynamic in nature. Practitioners agree on one thing: “commodities” do not exist in the business practice. Companies will always find an angle to prove to customers that their product is worth a premium or at least worth choosing against an equally priced offer. Even non-product benefits, such as a faster turn-around/delivery, or ease of doing business, or geographical proximity, can be leveraged to differentiate a company from its competitors. Claiming to be in a “com- modity industry” is a self-fulfilling prophecy ending in the erosion of margins up to the point of no return. Summary of formulae In this section, we summarize the formulae used in this paper for quick reference. First‑level pricing sales variance analysis Sales variance = Price variance + Volume variance Price variance = (PCY− PLY) ∗ QCY Volume variance = (QCY− QLY) ∗ PLY Price variance, % = Price variance ∕ (QCY ∗ PLY) Second‑level pricing sales variance analysis Sales variance = Price variance + Volume variance = (Price variance@fixed exchange rate • FxRate variance) • (Quantity variance + Mix variance) Price variance @ fixed exchange rate = (PCY in Local currency @FxRate LY − PLY in Local currency@FxRate LY) ∗ QCY Fx rate variance = (PCY in Local currency@FxRate CY − PCY in Local currency@FxRate LY) ∗ QCY Quantity variance = �(QCY@LY Mixi− QLYi) ∗ PLYi Mix variance = � (QCYi − QCY@LY Mixi) ∗ PLYi Notations: P is price, Q is Quantity, LY is last year, CY is current year, QCY @ LY Mix I is total quantity sold (for all products) in current year multiplied by last year product mix weight for product i, QCY I is current year quantity for product I, QLY I is last year quantity for Product I, and PLY I is last year price for Product i. Practice example—application of key concepts discussed This section illustrates the usefulness of core con- cepts discussed herein to a real, albeit anonymized, company (Table 4). Price + volume in same currency—first‑level pricing sales variance analysis In Product A, two opposite forces are at play: a price increase of EUR 0.50 and a volume decrease of 5 units. The resulting sales variance is EUR 30. The net result of EUR 30 comes from a negative volume variance of EUR -40 (cell: Prod. A-TOT Vol Var) and a positive price vari- ance of EUR 70 (cell: Prod. A-TOT P. Var). Overall, the result is positive in terms of sales. A company may decide to implement a price increase to offset increasing COGS, or after finding out the cus- tomer’s or customer segment’s willingness to pay through conjoint analysis on customer preferences. The fact that the net result is positive, is not a reason to let decreasing volume go unnoticed: a better communication to custom- ers of value differentials vis-à-vis the customer’s best alternative can prove that the price premium is actually an investment that pays for itself; consultants use value quantification techniques to financially quantify the com- petitive advantages and avoid a decrease in units sold. Examples of such competitive advantages are as follows: energy savings, longer lifecycle, lower maintenance costs, low.
{"url":"https://hinterhuber.com/publications/sales-variance-analysis-how-state-of-the-art-analytical-tools-can-contribute-to-increased-profitability/","timestamp":"2024-11-08T08:53:01Z","content_type":"text/html","content_length":"128685","record_id":"<urn:uuid:2dad5a5e-8abb-48c3-8652-1fa2f8256f74>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00351.warc.gz"}
Re: st: -streg- & equality of coefficients across independent areas [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: -streg- & equality of coefficients across independent areas From "Austin Nichols" <[email protected]> To [email protected] Subject Re: st: -streg- & equality of coefficients across independent areas Date Tue, 6 Jun 2006 00:21:09 -0400 I don't use -streg- myself, so I can't speak to any tricks you might be able to use to run the test you have in mind without running the model with a dummy and interactions on the union of the two samples. But I will note that the help file for -streg- specifies maximize_options; ... (see -help maximize-), so you could specify an initial parameter vector that allows your regression to iterate only once (or close to it), given that you have run the regression on both samples successfully, and can therefore specify the maximum-likelihood parameter vector. You could -set trace on- to see what the program is doing if you are concerned that it is taking too long, though that requires you stop it working and then restart it, of course. On 6/5/06, Priscila Ferreira <[email protected]> wrote: I understand what you mean now, Austin. But it is literally 'doesn't run' , hours spent in exactly the same point. No error messages, nothing. I concluded (perhaps wrongly) that the system could have had entered in some kind of endless loop. I have acquired the 64 bits version quite recently, but haven't yet installed it. In the meantime, I wish I could do the test. If you have an idea about how to perform the test, no matter how less straightforward it can be (I'm ready to work on that) , I would be really grateful. On 05/06/06, Austin Nichols <[email protected]> wrote: I would not claim that the use of dummy variables is really the only way to perform the test you want in Stata; but it's the most straightforward way most of the time. My point was only that a phrase like "the regression doesn't run" is unlikely to elicit helpful responses, unless you relate the specific error messages or constraints you are hitting. I mentioned two ways to relax a constraint (more memory; Stata MP) but it is not clear to me that either would help, in the absence of more info about why "the regression doesn't run." However, I will note that you might need a 64-bit version of Stata to use more memory. On 05/06/06, Priscila Ferreira <[email protected]> wrote: > Thanks, Austin. > I am currently using Stata 9.2 SE for Linux on a 32 bits processor and > have 2.5 gb of ram memory available. My model doesn't run in this > particular machine. It might be the case that if I use another > architecture the model will run. > So, Austin - do you mean that the use of dummy variables is really the > only way to perform the test I want in Stata? > Priscila On 05/06/06, Austin Nichols <[email protected]> wrote: Priscila Ferreira-- What do you mean when you say "I suppose > because the number of variables and observations is too big, the > regression doesn't run." Do you mean you need more memory or a newer version of Stata that supports multiple CPUs, or do you mean the regression is infeasible on any machine? On 6/4/06, Priscila Ferreira <[email protected]> wrote: > Good afternoon, > After using -streg- with shared frailty, I am trying to test the > equality of regression coefficients that are generated from the same > specification (same dependent variable, same vector of regressors), > estimated on two different samples. But am not able to. > I tried to implement StataCorp's FAQ: > http://www.stata.com/support/faqs/stat/testing.html, but I suppose > because the number of variables and observations is too big, the > regression doesn't run. Priscila Ferreira * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"https://www.stata.com/statalist/archive/2006-06/msg00174.html","timestamp":"2024-11-10T02:01:41Z","content_type":"text/html","content_length":"12930","record_id":"<urn:uuid:c962ba26-d78a-4652-9bfa-bd0cf283f022>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00325.warc.gz"}
Please Help, Brainliest For Correct Answer m<2+= 60 Step-by-step explanation: The correct answer is m∠2 = 120° Step-by-step explanation: There is a triangle rule that states the outer angle of an angle in a triangle is equal to the sum of the remaining two angles in that triangle, which means that m∠2 = 52° + 68° = 120°. This can be proven correct because the sum of two of the angles in the triangle is 120°, which means the final angle is 60°. Since m∠2 is the outer angle of 60°, that means it will be 180° - 60° or 120°. The Law of Sines states that in any triangle, the ratio of the length of a side to the sine of the angle opposite that side is the same for all sides and angles. In other words, if a, b, and c are the lengths of the sides of a triangle, and A, B, and C are the angles opposite those sides, then we have: a/sin(A) = b/sin(B) = c/sin(C) In the given triangle, we are given the values of B and c, so we can use the Law of Sines to find the other side lengths and angles. We have: a/sin(A) = c/sin(C) Since c = 43 and C = 135°, we can plug these values into the equation to find a: a/sin(A) = 43/sin(135°) To find the value of a, we just need to solve for a by multiplying both sides of the equation by sin(A). We have: a = 43 * sin(A) / sin(135°) We know the value of B, so we can use the Law of Sines again to find the value of A: sin(A) / a = sin(B) / b Since B = 8° and b is unknown, we can plug these values into the equation to find a: sin(A) / a = sin(8°) / b To find the value of A, we just need to solve for A by dividing both sides of the equation by sin(A)/a. We have: A = asin(8°) / b Now that we know the values of A and c, we can use the Law of Sines one more time to find the value of b: sin(A) / a = sin(B) / b Since A and a are unknown, we can plug the known values into the equation to find b: sin(A) / a = sin(B) / b To find the value of b, we just need to solve for b by multiplying both sides of the equation by sin(B)/b. We have: b = sin(B) / (sin(A) / a) At this point, we have all the necessary values to compute the values of a, A, and b. We can plug the known values into the equations we derived above to find the unknown values. First, let's find the value of a: a = 43 * sin(A) / sin(135°) We know the values of A and B, so we can plug these values into the equation to find a: a = 43 * sin(A) / sin(135°) = 43 * sin(180° - C - B) / sin(135°) = 43 * sin(180° - 135° - 8°) / sin(135°) = 43 * sin(37°) / sin(135°) To compute the value of sin(37°), we can use a calculator or look up the value in a table of sines.
{"url":"https://www.cairokee.com/homework-solutions/please-help-brainliest-for-correct-answer-j5ay","timestamp":"2024-11-05T10:00:53Z","content_type":"text/html","content_length":"89026","record_id":"<urn:uuid:52aea136-571e-43bf-8c37-b167b37b5f33>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00626.warc.gz"}
A Proposed Notation for Quadratic, Cubic and Metallic Perron Tiles Some heuristics for generating Perron tiles have been identified. While in principle the heuristics can be applied to other Perron numbers, only quadratic, cubic and metallic tiles are known to me. These heuristics limit the nature of the transforms that are involved in the corresponding IFS, and therefore offer an opportunity to define a compact notation to denote such tiles. cubic tiles Note that all tiles have 4 degrees of freedom - scale, orientation, x-position and y-position. The x-position and y-position degrees of freedom mean that it is always possible to position the tile such that the vector element of one transform is 0. The orientation degree of freedom means that it is alway possible to position the tile such the the vector element of a second transform is parallel to the x-axis. The scaling degree of freedom means that it is always possible to scale the tile so that the vector element of that transform is 1. This means that that an order 2 cubic tile can be completely specified by the combination of unit and power (of c) from the transforms of the corresponding IFS. The 11 order 2 cubic tiles (12 dissections) are then [1^+5^+], [1^+5^-], [1^-5^+], [1^-5^-], [2^+3^+], [2^+3^-], [2^-3^+], [2^-3^-], [1^+3^+], [1^+3^-], [1^-3^+] and [1^-3^-]. (The contraction and rotation elements of the transforms is implied by the combination of powers.) Moving on to higher order tiles, you have to specify the vector elements of the additional transforms. This can be represented as the powers of c in the polynomial. For example there are two second cubic triapodes, which are [1^-4^-6^-(0)] and [1^-4^^-6^+(0)], where (0) denotes c^0 (equals 1). There is a potential ambiguity with (-n), which could be mean -c^n or c^-n. However the degrees of freedom described above mean that it is alway possibly to position orient and scale the tile so that the largest vector is 1, so it is not strictly necessary to deal with negative powers, and this doesn't need to be catered for by the notation. If you do want to refer to a negative power add an extra pair of brackets, i.e. ((-n)). The order 3 symmetric second cubic tile is [7^+2^+2^+(-0)], i.e. with the 3rd transform having vector element -1. Its order 3 demi tiles are [7^+2^+2^-(0)] and [7^-2^+2^-(0)]. Another order 3 second cubic tile is [7^-2^+2^+(0-5)], where the 3rd transform has vector element 1 - c^5. The coefficient of a power of c in the polynomial is not always a unit, so it necessary to extend the notation to cater for this, giving (n*k), (n/l) and (n*k/l) denoting kc^n, c^n/l and (k/l)c^n The notation is not capable of covering all pseudo-Perron tiles, but those pseudo-Perron tiles involving reflections in the x-axis can be represented by adding an overline to the power. It may also necessary to explicitly state the vectors, as it's not always possible to simulaneously arrange for the vector elements of the first two transforms be 0 and 1, and for the reflections to be in the x-axis. For example the golden bee is [1^-(1)2^^+(0)]. quadratic tiles The notation above is not sufficient for quadratic tiles as there are multiple Perron numbers with the same magnitude. We can address this by adding the value of the second coefficient ("m") from the Perron polynomial. For example the √2:1 rectangle becomes [^01^+1^+], the tame twindragon [^11^+1^+] and the twindragon [^21^+1^+]. The pseudoterdragon is [^11^+2^+2^+(-0)]. The √3:1 rectangle is either [^01^+1^+1^+(-0)] or [^01^+1^+1^+(0*2)] (alternatively [^01^+1^+1^+(0+0)]). pletals and hextals The notation above is also not sufficient for pletals and hextals, in which units other than +1 and -1 are involved. For pletals the additional units are +i and -i. These can be handled by adding i to the appropriate places in the notation. For example the fat cross is [^21^+3^+3^+(i0)3^+(-0)3^+(-i0)]. For hextals the additional units are the Eisenstein integers e[11], e[01], -e[11] and -e[01]. As -e[01] is e[11]^2 we can express all the additional units in terms of e[01], which we can denote e. Then, for example, the terdragon becomes [^3-e^2+1^+1^+(-0)]. © 2016 Stewart R. Hinsley
{"url":"http://stewart.hinsley.me.uk/Fractals/IFS/Tiles/Notation.php","timestamp":"2024-11-09T00:35:19Z","content_type":"text/html","content_length":"8506","record_id":"<urn:uuid:7e97ca8f-31b4-46cd-b1b1-0ff3ed7b8e33>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00258.warc.gz"}
Graphs with the gRbase package ## Attaching package: 'igraph' ## The following objects are masked from 'package:gRbase': ## edges, is_dag, topo_sort ## The following objects are masked from 'package:stats': ## decompose, spectrum ## The following object is masked from 'package:base': ## union Graphs and Conditional Independence \em As of major version 2 of gRbase, that is versions 2.x.y, gRbase no longer depends on the packages graph, Rgraphviz, and RBGL packages. Graph functionality in these packages now relies either on the igraph package or on graph algorithms implemented in gRbase. This document reflects these changes. • As a consequence, this document provides an up-to-date version of Chapter 1 in the book Graphical Models with R (2012); hereafter abbreviated GMwR, see @hojsgaard:etal:12. • This document also reflects that since GMwR was published in 2012, some packages that are mentioned in GMwR are no longer on CRAN. This includes the packages lcd and sna. • In this document it has been emphasized if a function has been imported from igraph or if it is native function from gRbase by writing igraph::this\_function() and gRbase::this\_function() • One notable feature that is not available in this version of gRbase are functions related to maximal prime subgraph decomposition. They may be reimplented at a later stage. A graph as a mathematical object may be defined as a pair \(\cal G = (V, E)\), where \(V\) is a set of vertices or nodes and \(E\) is a set of edges. Each edge is associated with a pair of nodes, its endpoints. Edges may in general be directed, undirected, or bidirected. Graphs are typically visualized by representing nodes by circles or points, and edges by lines, arrows, or bidirected arrows. We use the notation \(\alpha-\beta\), \(\alpha\to\beta\), and \(\alpha\leftrightarrow\beta\) to denote edges between \(\alpha\) and \(\beta\). Graphs are useful in a variety of applications, and a number of packages for working with graphs are available in \R. In statistical applications we are particularly interested in two special graph types: undirected graphs and directed acyclic graphs (often called DAGs). The \gRbase\ package supplements \igraph\ by implementing some algorithms useful in graphical modelling. \grbase\ also provides two wrapper functions, ug() and dag() for easily creating undirected graphs and DAGs represented either as igraph objects or adjacency matrices. The first sections of this chapter describe some of the most useful functions available when working with graphical models. These come variously from the \gRbase\ and \igraph, but it is not usually necessary to know which. As statistical objects, graphs are used to represent models, with nodes representing model variables (and sometimes model parameters) in such a way that the independence structure of the model can be read directly off the graph. Accordingly, a section of this chapter is devoted to a brief description of the key concept of conditional independence and explains how this is linked to graphs. Throughout the book we shall repeatedly return to this in more detail. Our graphs have a finite node set \(V\) and for the most part they are simple graphs in the sense that they have no loops nor multiple edges. Two vertices \(\alpha\) and \(\beta\) are said to be adjacent, written \(\alpha\sim \beta\), if there is an edge between \(\alpha\) and \(\beta\) in \(\cal G\), i.e.\ if either \(\alpha - \beta\), \(\alpha\to\beta\), or \(\alpha\leftrightarrow\beta\). \index{adjacent nodes} \index{simple graphs} In this chapter we primarily represent graphs as igraph objects, and except where stated otherwise, the functions we describe operate on these objects. Undirected Graphs The following forms are equivalent: ug0 <- gRbase::ug(~a:b, ~b:c:d, ~e) ug0 <- gRbase::ug(~a:b + b:c:d + e) ug0 <- gRbase::ug(~a*b + b*c*d + e) ug0 <- gRbase::ug(c("a", "b"), c("b", "c", "d"), "e") ## IGRAPH 6397ecb UN-- 5 4 -- ## + attr: name (v/c) ## + edges from 6397ecb (vertex names): ## [1] a--b b--c b--d c--d The default size of vertices and their labels is quite small. This is easily changed by setting certain attributes on the graph, see Sect.~@ref(sec:graph:igraph) for examples. However, to avoid changing these attributes for all the graphs shown in the following we have defined a small plot function \comic{myplot()}. There are also various facilities for controlling the layout. For example, we may use a layout algorithm called layout.fruchterman.reingold as follows: myplot <- function(x, layout=layout.fruchterman.reingold(x), ...) { V(x)$size <- 30 V(x)$label.cex <- 3 plot(x, layout=layout, ...) The graph ug0i is then displayed with: Per default the \comics{ug()}{gRbase} function returns an igraph object, but the option result="matrix" lead it to return an adjacency matrix instead. For example, ug0i <- gRbase::ug(~a:b + b:c:d + e, result="matrix") ## a b c d e ## a 0 1 0 0 0 ## b 1 0 1 1 0 ## c 0 1 0 1 0 ## d 0 1 1 0 0 ## e 0 0 0 0 0 Different represents of a graph can be obtained by coercion: as(ug0, "matrix") ## a b c d e ## a 0 1 0 0 0 ## b 1 0 1 1 0 ## c 0 1 0 1 0 ## d 0 1 1 0 0 ## e 0 0 0 0 0 as(ug0, "dgCMatrix") ## 5 x 5 sparse Matrix of class "dgCMatrix" ## a b c d e ## a . 1 . . . ## b 1 . 1 1 . ## c . 1 . 1 . ## d . 1 1 . . ## e . . . . . as(ug0i, "igraph") ## IGRAPH ddb3eec UN-- 5 4 -- ## + attr: name (v/c), label (v/c) ## + edges from ddb3eec (vertex names): ## [1] a--b b--c b--d c--d Edges can be added and deleted using addEdge() and removeEdge() ## Using gRbase ug0a <- gRbase::addEdge("a", "c", ug0) ug0a <- gRbase::removeEdge("c", "d", ug0) ## Using igraph ug0a <- igraph::add_edges(ug0, c("a", "c")) ug0a <- igraph::delete_edges(ug0, c("c|d")) The nodes and edges of a graph can be retrieved with \comics{nodes()}{graph} and \comics{edges()}{graph} functions. ## Using gRbase ## [1] "a" "b" "c" "d" "e" gRbase::edges(ug0) |> str() ## List of 5 ## $ a: chr "b" ## $ b: chr [1:3] "a" "c" "d" ## $ c: chr [1:2] "b" "d" ## $ d: chr [1:2] "b" "c" ## $ e: chr(0) ## Using igraph ## + 5/5 vertices, named, from 6397ecb: ## [1] a b c d e igraph::V(ug0) |> attr("names") ## [1] "a" "b" "c" "d" "e" ## + 4/4 edges from 6397ecb (vertex names): ## [1] a--b b--c b--d c--d igraph::E(ug0) |> attr("vnames") ## [1] "a|b" "b|c" "b|d" "c|d" gRbase::maxClique(ug0) ## |> str() ## $maxCliques ## $maxCliques[[1]] ## [1] "e" ## $maxCliques[[2]] ## [1] "a" "b" ## $maxCliques[[3]] ## [1] "b" "c" "d" gRbase::get_cliques(ug0) |> str() ## List of 3 ## $ : chr "e" ## $ : chr [1:2] "a" "b" ## $ : chr [1:3] "b" "c" "d" ## Using igraph igraph::max_cliques(ug0) |> lapply(function(x) attr(x, "names")) |> str() ## List of 3 ## $ : chr "e" ## $ : chr [1:2] "a" "b" ## $ : chr [1:3] "b" "c" "d" A path (of length \(n\)) between \(\alpha\) and \(\beta\) in an undirected graph is a set of vertices \(\alpha=\alpha_0,\alpha_1,\dots,\alpha_n=\beta\) where \(\alpha_{i-1}-\alpha_i\) for \(i=1,\ dots, n\). If a path \(n\alpha=\alpha_0,\alpha_1,\dots,\alpha_n=\beta\) has \(\alpha=\beta\) then the path is said to be a cycle of length \(n\). nnn A subset \(D \subset V\) in an undirected graph is said to separate \(A \subset V\) from \(B \subset V\) if every path between a vertex in \(A\) and a vertex in \(B\) contains a vertex from \(D\). gRbase::separates("a", "d", c("b", "c"), ug0) ## [1] TRUE This shows that \(\{b,c\}\) separates \(\{a\}\) and \(\{d\}\). The graph \(\cal G_0=(V_0,E_0)\) is said to be a subgraph of \(\cal G=(V,E)\) if \(V_0\subseteq V\) and \(E_0\subseteq E\). For \(A \subseteq V\), let \(E_A\) denote the set of edges in \(E\) between vertices in \(A\). Then \(\cal G_A=(A, E_A)\) is the \emph{subgraph induced by} \(A\). For example ug1 <- gRbase::subGraph(c("b", "c", "d", "e"), ug0) ug12 <- igraph::subgraph(ug0, c("b", "c", "d", "e")) par(mfrow=c(1,2), mar=c(0,0,0,0)) myplot(ug1); myplot(ug12) \index{boundary} \index{neighbours} \index{closure} The boundary \(\bound(\alpha)=\adj(\alpha)\) is the set of vertices adjacent to \(\alpha\) and for undirected graphs the boundary is equal to the set of neighbours \(\nei(\alpha)\). The closure \(\ clos(\alpha)\) is \(\bound(\alpha)\cup \{\alpha\}\). gRbase::adj(ug0, "c") ## $c ## [1] "b" "d" gRbase::closure("c", ug0) ## [1] "c" "b" "d" Directed Acyclic Graphs A directed graph as a mathematical object is a pair $\cal G = (V, E)$ where \(V\) is a set of vertices and \(E\) is a set of directed edges, normally drawn as arrows. A directed graph is acyclic if it has no directed cycles, that is, cycles with the arrows pointing in the same direction all the way around. A DAG is a directed graph that is acyclic. A DAG may be created using the dag() function. The graph can be specified by a list of formulas or by a list of vectors. The following statements are equivalent: dag0 <- gRbase::dag(~a, ~b*a, ~c*a*b, ~d*c*e, ~e*a, ~g*f) dag0 <- gRbase::dag(~a + b*a + c*a*b + d*c*e + e*a + g*f) dag0 <- gRbase::dag(~a + b|a + c|a*b + d|c*e + e|a + g|f) dag0 <- gRbase::dag("a", c("b", "a"), c("c", "a", "b"), c("d", "c", "e"), c("e", "a"), c("g", "f")) ## IGRAPH 75cdc91 DN-- 7 7 -- ## + attr: name (v/c) ## + edges from 75cdc91 (vertex names): ## [1] a->b a->c b->c c->d e->d a->e f->g Note that \~{ }a} means that \code{"a" has no parents while \~{ }d*b*c} means that “d” has parents \code{"b" and "c"}. Instead of ``\code{*}'', a ``\code{:’’ can be used in the specification. If the specified graph contains cycles then dag()} returns \code{NULL. Per default the \comics{dag()}{gRbase} function returns an igraph object, but the option result="matrix" leads it to return an adjacency matrix instead. ## [1] "a" "b" "c" "d" "e" "f" "g" gRbase::edges(dag0) |> str() ## List of 7 ## $ a: chr [1:3] "b" "c" "e" ## $ b: chr "c" ## $ c: chr "d" ## $ d: chr(0) ## $ e: chr "d" ## $ f: chr "g" ## $ g: chr(0) Alternatively a list of (ordered) pairs can be optained with edgeList() edgeList(dag0) |> str() ## List of 7 ## $ : chr [1:2] "a" "b" ## $ : chr [1:2] "a" "c" ## $ : chr [1:2] "a" "e" ## $ : chr [1:2] "b" "c" ## $ : chr [1:2] "c" "d" ## $ : chr [1:2] "e" "d" ## $ : chr [1:2] "f" "g" The vpar() function returns a list, with an element for each node together with its parents: vpardag0 <- gRbase::vpar(dag0) vpardag0 |> str() ## List of 7 ## $ a: chr "a" ## $ b: chr [1:2] "b" "a" ## $ c: chr [1:3] "c" "a" "b" ## $ d: chr [1:3] "d" "c" "e" ## $ e: chr [1:2] "e" "a" ## $ f: chr "f" ## $ g: chr [1:2] "g" "f" ## [1] "c" "a" "b" \index{path} \index{parents} \index{children} \index{ancestors} \index{ancestral set} \index{ancestral graph} A path (of length \(n\)) from \(\alpha\) to \(\beta\) is a sequence of vertices \(\alpha=\alpha_0, \dots, \alpha_n=\beta\) such that \(\alpha_{i-1}\to\alpha_i\) is an edge in the graph. If there is a path from \(\alpha\) to \(\beta\) we write \(\alpha\mapsto\beta\). The parents \(\parents(\beta)\) of a node \(\beta\) are those nodes \(\alpha\) for which \(\alpha \rightarrow \beta\). The children \(\child(\alpha)\) of a node \(\alpha\) are those nodes \(\beta\) for which $\alpha \rightarrow \beta$. The ancestors \(\anc(\beta)\) of a node \(\beta\) are the nodes \(\alpha\) such that \(\alpha\ mapsto\beta\). The ancestral set \(\anc(A)\) of a set \(A\) is the union of \(A\) with its ancestors. The ancestral graph of a set \(A\) is the subgraph induced by the ancestral set of \(A\). gRbase::parents("d", dag0) ## [1] "c" "e" gRbase::children("c", dag0) ## [1] "d" gRbase::ancestralSet(c("b", "e"), dag0) ## [1] "a" "b" "e" ag <- gRbase::ancestralGraph(c("b", "e"), dag0) An important operation on DAGs is to (i) add edges between the parents of each node, and then (ii) replace all directed edges with undirected ones, thus returning an undirected graph. This operation is used in connection with independence interpretations of the DAG, see Sect.~@ref(sec:graph:CI), and is known as moralization. This is implemented by the \comics{moralize()}{gRbase} function: dag0m <- gRbase::moralize(dag0) Mixed Graphs {#sec:graph:chaingraphs} \index{mixed graphs} Although the primary focus of this book is on undirected graphs and DAGs, it is also useful to consider mixed graphs. These are graphs with at least two types of edges, for example directed and undirected, or directed and bidirected. A sequence of vertices \(v_1,v_2, \dots v_{k}, v_{k+1}\) is called a path if for each \(i=1 \dots k\), either \(v_i - v_{i+1}\), $v_i \leftrightarrow v_{i+1}$ or \(v_i \rightarrow v_{i+1}\). If $v_i - v_{i+1}$ for each \(i\) the path is called undirected, if $v_i \rightarrow v_{i+1}$ for each \(i\) it is called directed, and if \(v_i \rightarrow v_{i+1}\) for at least one \(i\) it is called semi-directed. If \(v_i = v_{k+1}\) it is called a cycle. \index{directed path} \index{undirected path} \index{path} \index{cycle} \index{semi-directed path} Mixed graphs are represented in the \igraph\ package as directed graphs with multiple edges. In this sense they are not simple. A convenient way of defining them (in lieu of model formulae) is to use adjacency matrices. We can construct such a matrix as follows: adjm <- matrix(c(0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0), byrow=TRUE, nrow=4) rownames(adjm) <- colnames(adjm) <- letters[1:4] ## a b c d ## a 0 1 1 1 ## b 1 0 0 1 ## c 1 0 0 1 ## d 0 1 0 0 Note that igraph interprets symmetric entries as double-headed arrows and thus does not distinguish between bidirected and undirected edges. However we can persuade igraph to display undirected instead of bidirected edges: gG1 <- gG2 <- as(adjm, "igraph") lay <- layout.fruchterman.reingold(gG1) E(gG2)$arrow.mode <- c(2,0)[1+is.mutual(gG2)] ## Warning: `is.mutual()` was deprecated in igraph 2.0.0. ## ℹ Please use `which_mutual()` instead. ## This warning is displayed once every 8 hours. ## Call `lifecycle::last_lifecycle_warnings()` to see where this warning was ## generated. par(mfrow=c(1,2), mar=c(0,0,0,0)) myplot(gG1, layout=lay); myplot(gG2, layout=lay) \index{chain graphs} A chain graph is a mixed graph with no bidirected edges and no semi-directed cycles. Such graphs form a natural generalisation of undirected graphs and DAGs, as we shall see later. The following example is from @Frydenberg1990: \setkeys{Gin}{width=0.7\textwidth, height=0.7\textwidth} %sæt figurstørrelse i Sweave d1 <- matrix(0, 11, 11) d1[1,2] <- d1[2,1] <- d1[1,3] <- d1[3,1] <- d1[2,4] <- d1[4,2] <- d1[5,6] <- d1[6,5] <- 1 d1[9,10] <- d1[10,9] <- d1[7,8] <- d1[8,7] <- d1[3,5] <- d1[5,10] <- d1[4,6] <- d1[4,7] <- 1 d1[6,11] <- d1[7,11] <- 1 rownames(d1) <- colnames(d1) <- letters[1:11] cG1 <- as(d1, "igraph") E(cG1)$arrow.mode <- c(2,0)[1+is.mutual(cG1)] myplot(cG1, layout=layout.fruchterman.reingold) \setkeys{Gin}{width=0.45\textwidth, height=0.45\textwidth} %sæt figurstørrelse i Sweave \index{chain graph components} \index{component DAG} The components of a chain graph \(\cal G\) are the connected components of the graph formed after removing all directed edges from \(\cal G\). All edges within a component are undirected, and all edges between components are directed. Also, all arrows between any two components have the same direction. The graph constructed by identifying its nodes with the components of \(\cal G\), and joining two nodes with an arrow whenever there is an arrow between the corresponding components in \(\cal G\), is a DAG, the so-called component DAG of \(\cal G\), written \(\cal G_{C}\). % The \comics{is.chaingraph()}{lcd} function in the \lcd\ package % determines whether a mixed graph is a chain graph. It takes an % adjacency matrix as input. For example, the above graph is indeed a % chain graph: % ```{r eval=F} % library(lcd) % is.chaingraph(as(cG1, “matrix”)) % @ % Here vert.order gives an ordering of the vertices, from which the % connected components may be identified using chain.size. \index{anterior set} \index{anterior graph} \index{moralization} The anterior set of a vertex set \(S \subseteq V\) is defined in terms of the component DAG. Write the set of components of \(\cal G\) containing \(S\) as \(S_c\). Then the anterior set of \(S\) in \ (\cal G\) is defined as the union of the components in the ancestral set of \(S_c\) in \(\cal G_{C}\). The anterior graph of \(S \subseteq V\) is the subgraph of \(\cal G\) induced by the anterior set of \(S\). The moralization operation is also important for chain graphs. Similar to DAGs, unmarried parents of the same chain components are joined and directions are then removed. % The operation % is implemented in the \comics{moralize()}{lcd} function in the % \lcd\ package, which uses the adjacency matrix representation. For % example, % ```{r , eval=F} % ## cGm <- as(moralize(as(cG1, “matrix”)), “graphNEL”) % cGm <- moralize(cG1) % plot(cGm) % @ % ```{r echo=F, eval=F} % detach(package:lcd) % @ Conditional Independence and Graphs {#sec:graph:CI} \index{conditional independence} The concept of statistical independence is presumably familiar to all readers but that of conditional independence may be less so. Suppose that we have a collection of random variables \((X_v)_{v \in V}\) with a joint density. Let \(A\), \(B\) and \(C\) be subsets of \(V\) and let \(X_A=(X_v)_{v \in A}\) and similarly for \(X_B\) and \(X_C\). Then the statement that \(X_A\) and \(X_B\) are conditionally independent given \(X_C\), written \(A \cip B \cd C\), means that for each possible value of \(x_C\) of \(X_C\), \(X_A\) and \(X_B\) are independent in the conditional distribution given \(X_C=x_c\). So if we write \(f()\) for a generic density or probability mass function, then one characterization of \(A \cip B \cd C\) is that [ f(x_A,x_B \cd x_C) = f(x_A \cd x_C) f(x_B \cd x_C). ] An equivalent characterization [@Dawid1998] is that the joint density of \((X_A, X_B, X_C)\) factorizes as $$f(x_A,x_B,x_C) = g(x_A,x_C)h(x_B,x_C), {#eqn:graph:factcrit}$$ that is, as a product of two functions \(g()\) and \(h()\), where \(g()\) does not depend on \(x_B\) and \(h()\) does not depend on \(x_A\). This is known as the factorization criterion. \index{factorizaton criterion} Parametric models for \((X_v)_{v \in V}\) may be thought of as specifying a set of joint densities (one for each admissible set of parameters). These may admit factorisations of the form just described, giving rise to conditional independence relations between the variables. Some models give rise to patterns of conditional independences that can be represented as an undirected graph. More specifically, let \(\cal G=(V,E)\) be an undirected graph with cliques $C_1, \dots C_k$. Consider a joint density \(f()\) of the variables in \(V\). If this admits a factorization of the form [ f (x_V) = \prod_{i=1}^k g_i(x_{C_i}) ] for some functions \(g_1() \dots g_k()\) where \(g_j()\) depends on \(x\) only through \(x_{C_j}\) then we say that \(f()\) factorizes according to \(\cal G\). \index{global Markov property} If all the densities under a model factorize according to \(\cal G\), then \(\cal G\) encodes the conditional independence structure of the model, through the following result (the global Markov property): whenever sets \(A\) and \(B\) are separated by a set \(C\) in the graph, then \(A \cip B \cd C\) under the model. Thus for example gRbase::separates("a", "d", "b", ug0) ## [1] TRUE shows that under a model with this dependence graph, \(a \cip d \cd b\). If we want to find out whether two variable sets are marginally independent, we ask whether they are separated by the empty set, which we specify using a character vector of length zero: gRbase::separates("a", "d", character(0), ug0) ## [1] FALSE Model families that admit suitable factorizations are described in later chapters in this book. These include: log-linear models for multivariate discrete data, graphical Gaussian models for multivariate Gaussian data, and mixed interaction models for mixed discrete and continuous data. Other models give rise to patterns of conditional independences that can be represented by DAGs. These are models for which the variable set \(V\) may be ordered in such way that the joint density factorizes as follows \begin{equation} {#eq:graph:dagfact} f(x_V) = \prod_{v \in V} f(x_v \cd x_{\parents(v)}) \end{equation} for some variable sets \(\{\parents(v)\}_{v \in V}\) such that the variables in \(\parents(v)\) precede \(v\) in the ordering. Again the vertices of the graph represent the random variables, and we can identify the sets \(\parents(v)\) with the parents of \(v\) in the DAG. With DAGs, conditional independence is represented by a property called d-separation. That is, whenever sets \(A\) and \(B\) are \(d\)-separated by a set \(C\) in the graph, then \(A \cip B \cd C\) under the model. The notion of \(d\)-separation can be defined in various ways, but one characterisation is as follows: \(A\) and \(B\) are \(d\)-separated by a set \(C\) if and only if they are separated in the graph formed by moralizing the anterior graph of \(A \cup B \cup C\). So we can easily define a function to test this: d_separates <- function(a, b, c, dag_) { ##ag <- ancestralGraph(union(union(a, b), c), dag_) ag <- ancestralGraph(c(a, b, c), dag_) separates(a, b, c, moralize(ag)) d_separates("c", "e", "a", dag0) ## [1] TRUE So under dag0 it holds that \(c \cip e \cd a\). % Alternatively, we can use the function \comics{dSep()}{ggm} in the \ggm\ package: % ```{r } % library(ggm) % dSep(as(dag0, “matrix”), “c”, “e”, “a”) % @ % ```{r echo=F} % detach(package:ggm) % @ Still other models correspond to patterns of conditional independences that can be represented by a chain graph \(\cal G\). There are several ways to relate Markov properties to chain graphs. Here we describe the so-called LWF Markov properties, associated with Lauritzen, Wermuth and Frydenberg. For these there are two levels to the factorization requirements. Firstly, the joint density needs to factorize in a way similar to a DAG, i.e. [ f(x_V) = \prod_{C \in \calC} f(x_C \cd x_{\parents (C)}) ] where \(\calC\) is the set of components of \(\cal G\). In addition, each conditional density \(f(x_C \cd x_{\parents(C)})\) must factorize according to an undirected graph constructed in the following way. First form the subgraph of \(\cal G\) induced by $C \cup \parents(C)$, drop directions, and then complete \(\parents(C)\) (that is, add edges between all vertices in \(\parents(C))\)). For densities which factorize as above, conditional independence is related to a property called c-separation: that is, $A \cip B \cd C$ whenever sets \(A\) and \(B\) are \(c\)-separated by \(C\) in the graph. The notion of \(c\)-separation in chain graphs is similar to that of \(d\)-separation in DAGs. \(A\) and \(B\) are \(c\)-separated by a set \(C\) if and only if they are separated in the graph formed by moralizing the anterior graph of $A \cup B \cup C$. % The \comics{is.separated()}{lcd} function in the \lcd\ package can % be used to query a given chain graph for \(c\)-separation. For example, % ```{r eval=F} % library(lcd) % is.separated(“e”, “g”, c(“k”), as(cG1,“matrix”)) % @ % ```{r echo=F, eval=F} % detach(package:lcd) % @ % \noindent % implies that \(e \not \negthinspace \negthinspace \negthinspace \cip g \cd k\) for the chain graph cG1 % we considered previously. More About Graphs Special Properties A node in an undirected graph is simplicial if its boundary is complete. \index{simplicial node} gRbase::is.simplicial("b", ug0) ## [1] FALSE ## [1] "a" "c" "d" "e" To obtain the connected components of a graph: \index{connected components} gRbase::connComp(ug0) |> str() ## List of 2 ## $ : chr [1:4] "a" "b" "c" "d" ## $ : chr "e" ## Using igraph igraph::components(ug0) |> str() ## List of 3 ## $ membership: Named num [1:5] 1 1 1 1 2 ## ..- attr(*, "names")= chr [1:5] "a" "b" "c" "d" ... ## $ csize : num [1:2] 4 1 ## $ no : num 2 \index{chordless cycles} \index{triangulated graphs} \index{chordal graphs} If a cycle \(\alpha=\alpha_0,\alpha_1,\dots,\alpha_n=\alpha\) has adjacent elements \(\alpha_i \sim \alpha_j\) with $j \not \in {i-1,i+1} $ then it is said to have a chord. If it has no chords it is said to be chordless. A graph with no chordless cycles of length \(\ge 4\) is called triangulated or chordal: ## [1] TRUE ## $chordal ## [1] TRUE ## $fillin ## NULL ## $newgraph ## NULL Triangulated graphs are of special interest for graphical models as they admit closed-form maximum likelihood estimates and allow considerable computational simplification by decomposition. A triple \((A,B,D)\) of non–empty disjoint subsets of \(V\) is said to decompose \(\cal G\) into \(\cal G_{A\cup D}\) and \(\cal G_{B\cup D}\) if \(V = A \cup B \cup D\) where \(D\) is complete and separates \(A\) and \(B\). \index{decomposition} \index{perfect vertex ordering} gRbase::is.decomposition("a", "d", c("b", "c"), ug0) ## [1] FALSE Note that although \(\{d\}\) is complete and separates \(\{a\}\) and \(\{b,c\}\) in ug0, the condition fails because \(V \neq \{a,b,c,d\}\). \index{decomposable graphs} \index{perfect vertex ordering} \index{maximum cardinality search} A graph is decomposable if it is complete or if it can be decomposed into decomposable subgraphs. A graph is decomposable if and only if it is triangulated. An ordering of the nodes in a graph is called a perfect ordering if \(\bound(i)\cap\{1,\dots,i-1\}\) is complete for all \(i\). Such an ordering exists if and only if the graph is triangulated. If the graph is triangulated, then a perfect ordering can be obtained with the maximum cardinality search (or mcs) algorithm. The \comics{mcs()}{gRbase} function will produce such an ordering if the graph is triangulated; otherwise it will return NULL. ## [1] "a" "b" "c" "d" "e" ## $alpha ## [1] 5 4 2 3 1 ## $alpham1 ## + 5/5 vertices, named, from 6397ecb: ## [1] e c d b a igraph::max_cardinality(ug0)$alpham1 |> attr("names") ## [1] "e" "c" "d" "b" "a" Sometimes it is convenient to have some control over the ordering given to the variables: gRbase::mcs(ug0, root=c("d", "c", "a")) ## [1] "d" "c" "b" "a" "e" Here \comics{mcs()}{gRbase} tries to follow the ordering given and succeeds for the first two variables but then fails afterwards. \index{running intersection property} \index{RIP ordering} \index{separators} The cliques of a triangulated undirected graph can be ordered as \((C_1, \dots, C_Q)\) to have the running intersection property (also called a RIP ordering). The running intersection property is that \(C_j \cap (C_1 \cup \dots \cup C_{j-1}) \subset C_i\) for some \(i<j\) for \(j=2,\dots, Q\). We define the sets $S_j=C_j \cap (C_1 \cup \dots \cup C_{j-1})$ and \(R_j=C_j\setminus S_j\) with \ (S_1=\emptyset\). The sets \(S_j\) are called separators as they separate \(R_j\) from \((C_1 \cup \dots \cup C_{j-1})\setminus S_j\). Any clique \(C_i\) where \(S_j\subset C_i\) with \(i<j\) is a possible parent of \(C_i\). The \comics{rip()}{gRbase} function returns such an ordering if the graph is triangulated (otherwise, it returns list()): ## cliques ## 1 : a b ## 2 : b c d ## 3 : e ## separators ## 1 : ## 2 : b ## 3 : ## parents ## 1 : 0 ## 2 : 1 ## 3 : 0 If a graph is not triangulated it can be made so by adding extra edges, so called fill-ins, using \comics{triangulate()}{gRbase}: ug2 <- gRbase::ug(~a:b:c + c:d + d:e + a:e) ug2 <- gRbase::ug(~a:b:c + c:d + d:e + e:f + a:f) ## [1] FALSE igraph::is_chordal(ug2) |> str() ## List of 3 ## $ chordal : logi FALSE ## $ fillin : NULL ## $ newgraph: NULL ug3 <- gRbase::triangulate(ug2) ## [1] TRUE zzz <- igraph::is_chordal(ug2, fillin=TRUE, newgraph=TRUE) ## + 4/6 vertices, named, from 9c224c1: ## [1] d a e a ug32 <- zzz$newgraph par(mfrow=c(1,3), mar=c(0,0,0,0)) lay <- layout.fruchterman.reingold(ug2) myplot(ug2, layout=lay); myplot(ug3, layout=lay); myplot(ug32, layout=lay) % Recall that an undirected graph \(\cal G\) is triangulated (or chordal) if it % has no cycles of length \(>= 4\) without a chord. % A graph is triangulated if and only if there exists a perfect ordering of its vertices. % Any undirected graph \(\cal G\) can be triangulated by adding edges to % the graph, so called fill–ins, resulting in a graph \(\cal G^*\), say. Some of the fill–ins on \(\ cal G^*\) may be superfluous % in the sense that they could be removed and still give a triangulated graph. A % triangulation with no superfluous fill-ins is called a minimal triangulation. In % general this is not unique. This should be distinguished from a minimum % triangulation which is a graph with the smallest number of % fill-ins. Finding a minimum % triangulation is known to be NP-hard. The function \comics{minimalTriang()}{gRbase} % finds a minimal triangulation. Consider the following: % \index{minimal triangulation} \index{minimum triangulation} % ```{r mintri, , include=F, eval=F} % G1 <- gRbase::ug(~a:b+b:c+c:d+d:e+e:f+a:f+b:e) % mt1.G1 <- minimalTriang(G1) % G2 <- gRbase::ug(~a:b:e:f+b:c:d:e) % mt2.G1<-minimalTriang(G1, TuG=G2) % par (mfrow=c(2,2)) % plot(G1, sub=“G1”) % plot(mt1.G1, sub=“mt1.G1”) % plot(G2, sub=“G2”) % plot(mt2.G1, sub=“mt2.G1”) % @ % % \includegraphics*[width=3.5in, height=3.5in]{fig/GRAPH-mintri} % The graph G1 is not triangulated; % mt1.G1} is a minimal triangulation of \code{G1. % Furthermore, G2} is a triangulation of \code{G1, but it is not % a minimal triangulation. Finally, mt2.G1 is a minimal % triangulation of G1} formed on the basis of \code{G2. % \index{maximal prime subgraph decomposition} % The maximal prime subgraph decomposition of an undirected graph % is the smallest subgraphs into which the graph can be % decomposed. % Consider the following code fragment: % ```{r mps, , include=F, eval=F} % G1 <- gRbase::ug(~a:b+b:c+c:d+d:e+e:f+a:f+b:e) % G1.rip <- mpd(G1) % G1.rip % par(mfrow=c(1,3)) % plot(G1, main=“G1”) % plot(subGraph(G1.rip$cliques[[1]], G1), main=“subgraph 1”) % plot(subGraph(G1.rip$cliques[[2]], G1), main=“subgraph 2”) % @ % @ % ```{r echo=F, eval=F} % pdf(file=“fig/GRAPH-mps.pdf”,width=8,height=5/1.7) % <> % graphics.off() % @ % % \includegraphics*[width=4in, height=2in]{fig/GRAPH-mps} % Here \verb-G1- is not decomposable but the graph can be % decomposed. The function \comics{mpd()}{gRbase} returns a junction % RIP–order representation of the maximal prime subgraph % decomposition. The subgraphs of \verb-G1- defined by the cliques listed % in \verb-G1.rip- are the smallest subgraphs into which \verb’G1’ can % be decomposed. \index{Markov blanket} The Markov blanket of a vertex \(v\) in a DAG may be defined as the minimal set that \(d\)-separates \(v\) from the remaining variables. It is easily derived as the set of neighbours to \(v\) in the moral graph of \(\cal G\). For example, the Markov blanket of vertex e} in \code{dag0 is adj(moralize(dag0), "e") It is easily seen that the Markov blanket of \(v\) is the union of \(v\)’s parents, \(v\)’s children, and the parents of \(v\)’s children. % \subsection{Graph Layout in \pkg{Rgraphviz}} % {#sec:graph:layout} % [THIS SECTION HAS BEEN REMOVED] % Although the way graphs are displayed on the page or screen has no % bearing on their mathematical or statistical properties, in practice % it is helpful to display them in a way that clearly reveals their % structure. The \Rgraphviz\ package implements several methods for % automatically setting graph layouts. We sketch these very briefly % here: for more detailed information see the online help files, for % example, type ?dot. % \begin{itemize} % - The dot method, which is default, is intended for drawing % DAGs or hierarchies such as organograms or phylogenies. % - The twopi method is suitable for connected graphs: it % produces a circular layout with one node placed at the centre and % others placed on a series of concentric circles about the centre. % - The circo method also produces a circular layout. % - The neato method is suitable for undirected graphs: an % iterative algorithm determines the coordinates of the nodes so that % the geometric distance between node-pairs approximates their path % distance in the graph. % - Similarly, the fdp method is based on an iterative % algorithm due to @Fruchterman1991, in which adjacent nodes are % attracted and non-adjacent nodes are repulsed. % \end{itemize} % The graphs displayed using Rgraphviz can also be embellished in % various ways: the following example displays the text in red and fills % the nodes with light grey. % @ % ```{r } % plot(dag0, attrs=list(node = list(fillcolor=“lightgrey”,fontcolor=“red”))) % @ % Graph layouts can be reused: this can be useful, for example to would-be authors of % books on graphical modelling who would like to compare alternative models for the same dataset. % We illustrate how to plot a graph and the graph obtained by removing an edge using the same layout. % To do this, we use the \comics{agopen()}{Rgraphviz} function generate an Ragraph object, which is a representation % of the layout of a graph (rather than of the graph as a mathematical object). % From this we remove the required edge. % ```{r , eval=F} % edgeNames(ug3) % ng3 <- agopen(ug3, name=“ug3”, layoutType=“neato”) % ng4 <- ng3 % AgEdge(ng4) <- AgEdge(ng4)[-3] % plot(ng3) % @ % ```{r , eval=F} % plot(ng4) % @ % The following example illustrates how individual edge and node attributes may be set. We use the % chain graph cG1 described above. % \setkeys{Gin}{width=0.7\textwidth, height=0.7\textwidth} %sæt figurstørrelse i Sweave % ```{r , eval=F} % cG1a <- as(cG1, “graphNEL”) % nodes(cG1a) <- c (“alpha”,“theta”,“tau”,“beta”,“pi”,“upsilon”,“gamma”, % “iota”,“phi”,“delta”,“kappa”) % edges <- buildEdgeList(cG1a) % for (i in 1:length(edges)) { % if (edges[[i]]@attrs$dir==“both”) edges[[i]] @attrs$dir <- “none” % edges[[i]]@attrs$color <- “blue” % } % nodes <- buildNodeList(cG1a) % for (i in 1:length(nodes)) { % nodes[[i]]@attrs$fontcolor <- “red” % nodes[[i]]@attrs$shape <- “ellipse” % nodes[[i]]@attrs$fillcolor <- “lightgrey” % if (i <= 4) { % nodes[[i]]@attrs$fillcolor <- “lightblue” % nodes[[i]]@attrs$shape <- “box” % } % } % cG1al <- agopen(cG1a, edges=edges, nodes=nodes, name=“cG1a”, layoutType=“neato”) % plot(cG1al) % @ % \setkeys {Gin}{width=0.45\textwidth, height=0.45\textwidth} %sæt figurstørrelse i Sweave \subsection{The \igraph\ package} {#sec:graph:igraph} It is possible to create igraph objects using the \comics{graph.formula()}{igraph} function: ug4 <- graph.formula(a -- b:c, c--b:d, e -- a:d) ## IGRAPH 2bc3a1c UN-- 5 6 -- ## + attr: name (v/c) ## + edges from 2bc3a1c (vertex names): ## [1] a--b a--c a--e b--c c--d d--e The same graph may be created from scratch as follows: ug4.2 <- graph.empty(n=5, directed=FALSE) V(ug4.2)$name <- V(ug4.2)$label <- letters[1:5] ug4.2 <- add.edges(ug4.2, 1+c(0,1, 0,2, 0,4, 1,2, 2,3, 3,4)) ## Warning: `add.edges()` was deprecated in igraph 2.0.0. ## ℹ Please use `add_edges()` instead. ## This warning is displayed once every 8 hours. ## Call `lifecycle::last_lifecycle_warnings()` to see where this warning was ## generated. ## IGRAPH 7c3ad27 UN-- 5 6 -- ## + attr: label (v/c), name (v/c) ## + edges from 7c3ad27 (vertex names): ## [1] a--b a--c a--e b--c c--d d--e The graph is displayed using the \comics{plot()}{igraph} function, with a layout determined using the graphopt method. A variety of layout algorithms are available: type ?layout for an overview. Note that per default the nodes are labelled \(0, 1, \dots\) and so forth. We show how to modify this shortly. As mentioned previously we have created a custom function \comics{myplot()}{gRbase} which creates somewhat more readable plots: myplot(ug4, layout=layout.graphopt) Objects in \pkg{igraph} graphs are defined in terms of node and edge lists. In addition, they have attributes: these belong to the vertices, the edges or to the graph itself. The following example sets a graph attribute, layout, and two vertex attributes, label and color. These are used when the graph is plotted. The name attribute contains the node labels. ug4$layout <- layout.graphopt(ug4) ## Warning: `layout.graphopt()` was deprecated in igraph 2.0.0. ## ℹ Please use `layout_with_graphopt()` instead. ## This warning is displayed once every 8 hours. ## Call `lifecycle::last_lifecycle_warnings()` to see where this warning was ## generated. V(ug4)$label <- V(ug4)$name V(ug4)$color <- "red" V(ug4)[1]$color <- "green" V(ug4)$size <- 40 V(ug4)$label.cex <- 3 Note the use of array indices to access the attributes of the individual vertices. Currently, the indices are zero-based, so that V(ug4)[1] refers to the second node (B). (This may change). Edges attributes are accessed similarly, using a container structure E(ug4): also here the indices are zero-based (currently). It is easy to extend igraph objects by defining new attributes. In the following example we define a new vertex attribute, discrete, and use this to color the vertices. ug5 <- set.vertex.attribute(ug4, "discrete", value=c(T, T, F, F, T)) ## Warning: `set.vertex.attribute()` was deprecated in igraph 2.0.0. ## ℹ Please use `set_vertex_attr()` instead. ## This warning is displayed once every 8 hours. ## Call `lifecycle::last_lifecycle_warnings()` to see where this warning was ## generated. V(ug5)[discrete]$color <- "green" V(ug5)[!discrete]$color <- "red" A useful interactive drawing facility is provided with the \comics{tkplot()}{igraph} function. This causes a pop-up window to appear in which the graph can be manually edited. One use of this is to edit the layout of the graph: the new coordinates can be extracted and re-used by the \comics{plot()}{igraph} function. For example tkplot(ug4) 2 \end{verbatim} The \comics{tkplot()}{igraph} function returns a window id (here 2). While the popup window is open, the current layout can be obtained by passing the window id to the \comics{tkplot.getcoords()} {igraph} function, as for example xy <- tkplot.getcoords(2) plot(g, layout=xy) It is straightforward to reuse layout information with igraph objects. The layout functions when applied to graphs return a matrix of \((x,y)\) coordinates: ## [,1] [,2] ## [1,] 1.010 0.270 ## [2,] 1.720 -0.636 ## [3,] 0.570 -0.700 ## [4,] -0.616 -0.212 ## [5,] -0.140 0.840 Most layout algorithms use a random generator to choose an initial configuration. Hence if we set the layout attribute to be a layout function, repeated calls to plot will use different layouts. For example, after ug4$layout <- layout.fruchterman.reingold repeated invocations of plot(ug4) will use different layouts. In contrast, after ug4$layout <- layout.fruchterman.reingold(ug4) the layout will be fixed. The following code fragment illustrates how two graphs with the same vertex set may be plotted using the same layout. ug5 <- gRbase::ug(~A*B*C + B*C*D + D*E) ug6 <- gRbase::ug(~A*B + B*C + C*D + D*E) lay.fr <- layout.fruchterman.reingold(ug5) ug6$layout <- ug5$layout <- lay.fr V(ug5)$size <- V(ug6)$size <- 50 V(ug5)$label.cex <- V(ug6)$label.cex <- 3 par(mfrow=c(1,2), mar=c(0,0,0,0)) plot(ug5); plot(ug6) %\includegraphics*[width=3.5in, height=3.5in]{fig/GRAPH-samelay} %\includegraphics*{fig/GRAPH-samelay} % @ % ```{r echo=F} % pdf(file=“fig/GRAPH-samelay2.pdf”,width=8,height=8) % <> % graphics.off() % @ % \includegraphics*[width=3in, height=3in]{fig/GRAPH-samelay2} An overview of attributes used in plotting can be obtained by typing ?igraph.plotting. A final example illustrates how more complex graphs can be displayed: em1 <- matrix(c(0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0), nrow=4, byrow=TRUE) iG <- graph.adjacency(em1) ## Warning: `graph.adjacency()` was deprecated in igraph 2.0.0. ## ℹ Please use `graph_from_adjacency_matrix()` instead. ## This warning is displayed once every 8 hours. ## Call `lifecycle::last_lifecycle_warnings()` to see where this warning was ## generated. V(iG)$shape <- c("circle", "square", "circle", "square") V(iG)$color <- rep(c("red", "green"), 2) V(iG)$label <- c("A", "B", "C", "D") E(iG)$arrow.mode <- c(2,0)[1 + is.mutual(iG)] E(iG)$color <- rep(c("blue", "black"), 3) E(iG)$curved <- c(T, F, F, F, F, F) iG$layout <- layout.graphopt(iG) % ### 3-D graphs % [THIS SECTION HAS BEEN REMOVED] % The \comics{gplot3d()}{sna} function in the \sna\ package displays a graph in % three dimensions. Using a mouse, the graph can be rotated and % zoomed. Opinions differ as to how useful this is. The following code fragment % can be used to try the facility. First we derive the adjacency matrix of a % built-in graph in the \igraph\ package, then we display it % as a (pseudo)-three-dimensional graph. % ```{r eval=F} % library(sna) % aG <- as(graph.famous(“Meredith”),“matrix”) % gplot3d(aG) % @ % ### Alternative Graph Representations % {#sec:graph:representations} % [THIS SECTION HAS BEEN REMOVED] % As mentioned above, graphNEL objects are so-called because they % use a node and edge list representation. So these can also be created % directly, by specifying a vector of nodes and a list containing the % edges corresponding to each node. For example, % @ % ```{r , eval=F} % V <- c(“a”,“b”,“c”,“d”) % edL <- vector(“list”, length=4) % names(edL) <- V % for (i in 1:4) { % edL[[i]] <- list(edges=5-i) % } % gR <- new(“graphNEL”, nodes=V, edgeL=edL) % plot(gR) % @ % ### Different Graph Representations and Coercion between these % {#sec:graphs:coercion} % Default is that \comic{ug()} and \comic{dag()} return a graph in the % graphNEL representation. Alternative representations are % obtained using: % @ % ```{r print=T} % ug_igraph <- ug (~a:b+b:c:d+e, result=“igraph”) % ug_matrix <- ug(~a:b+b:c:d+e, result=“matrix”) % @ % It is possible convert between different representations of undirected % graphs and DAGs using \comics{as()}{\grbase}: % @ % ```{r print=T} % ug_NEL <- as(ug_igraph, “graphNEL”) % ug_matrix <- as(ug_NEL, “matrix”) % ug_igraph2 <- as(ug_matrix, “igraph”) % @ Operations on Graphs in Different Representations % The functions for operations on graphs illustrated in the previous sections are all % available for graphs in the graphNEL representation (some % operations are in fact available for graphs in the other % representations as well). Notice that the functions differ in whether % they take the graph as the first or as the last argument (that is % mainly related to different styles in different \index{adjacency matrix} The \grbase\ package has a function \comics{querygraph()}{gRbase} which provides a common interface to the graph operations for undirected graphs and DAGs illustrated above. Moreover, \comics {querygraph()}{gRbase} works on graphs represented as igraph objects and adjacency matrices. The general syntax is ## function (object, op, set = NULL, set2 = NULL, set3 = NULL) ## NULL For example, we obtain: ug_ <- gRbase::ug(~a:b + b:c:d + e) gRbase::separates("a", "d", c("b", "c"), ug_) ## [1] TRUE gRbase::querygraph(ug_, "separates", "a", "d", c("b", "c")) ## [1] TRUE gRbase::qgraph(ug_, "separates", "a", "d", c("b", "c")) ## [1] TRUE \bibliographystyle{unsrtnat} \bibliography{./GMwR_strings,./GMwR}
{"url":"http://cran.rediris.es/web/packages/gRbase/vignettes/graphs.html","timestamp":"2024-11-04T20:51:10Z","content_type":"text/html","content_length":"418294","record_id":"<urn:uuid:90a2ef41-f108-40df-90d4-643a4948b1de>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00880.warc.gz"}
Peskin, Schroeder: An Introduction to Quantum Field Theory Peskin and Schroeder's "An Introduction to Quantum Field Theory" is the quantum field theory standard textbook. , For over a decade this has been the standard in QFT pedagogy. It has everything a student could want presented coherently. The style is geared towards calculations, which makes it a handy reference. […] Every QFT student needs this book… but absolute beginners may not find this the most user-friendly text and would benefit from additional references. Either way, I would suggest basing one’s study on Peskin with well-selected auxilliary reading.Flip Tanedo 2.1 The necessity of the field viewpoint The obvious generalisation from QM of trying to quantize relativistic particles fails due to non-conservation of particle number (deemed necessary by Schrodinger’s equation and impossible by special relativity). Also, the amplitude of propagation between spacelike separated points is non-zero in ordinary QM, even with a relativistic Hamiltonian. An exact cancellation is required for maintaining causality, and QFT provides a natural way of doing so - by creating antiparticles. 2.2 Elements of classical field theory The principle of least action implies the Euler-Lagrange equation of motion - they are classical paths living in configuration space. In analogy with QM, we can define the canonical (conjugate) momentum used to switch to the Hamiltonian formalism. However, H isn’t inherently Lorentz invariant, unlike L, which is. The Noether’s theorem states that every continuous symmetry leads to a conserved local current (and hence globally conserved charges). U(1) symmetry in the complex Klein-Gordon Lagrangian is interpreted (upon coupling to E-mag field) as a conserved electric current, Lorentz symmetry leads to (4 conserved currents, put together as) a conserved stress-energy tensor, etc. The Hamiltonian naturally arises as a conserved charge from time-translation symmetry inscribed in the latter. 2.3 The Klein-Gordon field as harmonic oscillators To quantise a classical theory, we promote the dynamical variables to operators and write down their commutation relations. φ and π are dynamical variables in field theory, and we write down their commutation relations in the Schrödinger picture (or equal time CR in Heisenberg). For the (energy) spectrum of H for KGE, we expand φ and π in Fourier modes (where we have a bunch of decoupled modes) and express them in terms of ladder operators. The treatment is identical to NRQM, except a momentum subscript everywhere. [H,ap] is identical to the SHO case, and so the {a,a†} expansion was indeed a good idea. The particle interpretation follows from the creation operators producing localised states in momentum space. Since the creation operators (indexed by q) commute among themselves, KGE describes bosons. Going back, since φ(x) acts on the vacuum to produce a (Fourier transformed) bunch of localised momentum eigenstates, it creates a particle at x. A final comment is made : the 3-momenta delta function in the commutation relations isn’t a Lorentz invariant - to compensate for that, we affix a factor of 1/2E in the integral measure.http:// The obvious generalisation from QM of trying to quantize relativistic particles fails due to non-conservation of particle number (deemed necessary by Schrodinger’s equation and impossible by special relativity). Also, the amplitude of propagation between spacelike separated points is non-zero in ordinary QM, even with a relativistic Hamiltonian. An exact cancellation is required for maintaining causality, and QFT provides a natural way of doing so - by creating antiparticles. The principle of least action implies the Euler-Lagrange equation of motion - they are classical paths living in configuration space. In analogy with QM, we can define the canonical (conjugate) momentum used to switch to the Hamiltonian formalism. However, H isn’t inherently Lorentz invariant, unlike L, which is. The Noether’s theorem states that every continuous symmetry leads to a conserved local current (and hence globally conserved charges). U(1) symmetry in the complex Klein-Gordon Lagrangian is interpreted (upon coupling to E-mag field) as a conserved electric current, Lorentz symmetry leads to (4 conserved currents, put together as) a conserved stress-energy tensor, etc. The Hamiltonian naturally arises as a conserved charge from time-translation symmetry inscribed in the latter. To quantise a classical theory, we promote the dynamical variables to operators and write down their commutation relations. φ and π are dynamical variables in field theory, and we write down their commutation relations in the Schrödinger picture (or equal time CR in Heisenberg). For the (energy) spectrum of H for KGE, we expand φ and π in Fourier modes (where we have a bunch of decoupled modes) and express them in terms of ladder operators. The treatment is identical to NRQM, except a momentum subscript everywhere. [H,ap] is identical to the SHO case, and so the {a,a†} expansion was indeed a good idea. The particle interpretation follows from the creation operators producing localised states in momentum space. Since the creation operators (indexed by q) commute among themselves, KGE describes bosons. Going back, since φ(x) acts on the vacuum to produce a (Fourier transformed) bunch of localised momentum eigenstates, it creates a particle at x. A final comment is made : the 3-momenta delta function in the commutation relations isn’t a Lorentz invariant - to compensate for that, we affix a factor of 1/2E in the integral measure.http://sdutta9.web.engr.illinois.edu/ 4.1 Perturbation Theory - Philosophy and Examples Non-interacting theories do not couple Fourier modes; interacting (real-world) theories do. We need to add that in the Lagrangian. Causality doesn’t allow a non-local coupling like φ(x)φ(y). We consider three examples : the φ4 theory, QED and the Yukawa theory. φ4 is the simplest and we have all the tools to solve it. Yukawa is quite simple too, with a scalar-fermion coupling and QED is much harder, with gauge invariance coming into picture. In QED a “gauge” covariant derivative is introduced which in plain sight is the Dirac equation minimally coupled to the E-Mag field, i.e. ∂ → D. Although it seems like there is an infinite set of interactions allowed, the requirement of renormalizability of physical theories comes to rescue and only a small subset of {scalar, spinor, vector} interactions is retained, e.g. φn (n≤4), spinor-fermion, etc. That’s the good news. The bad news is that we don’t know how to exactly solve most interacting theories. 4.2 Perturbation Expansion of Correlation Functions In the interacting field theory (we shall limit ourselves to φ4) the vacuum Ω is in general different from the one in the free theory. The interaction picture is the easiest to work with. … … … 4.3 Wick’s Theorem We have been able to write down correlation function in the interacting field theory in terms of CFs of time ordered free field operators in the free vacuum. Towards Wick’s theorem, we need to understand Normal ordering and Contractions. Normal ordering is the prescription of rearranging products of ladder operators as annihilation operators on the right of creation operators - as such v.e.v. of normal ordered operators vanishes. In free field theory, a contraction is essentially writing down the Feynman propagator between the corresponding points. Wick’s theorem says that time ordered product of field operators is given by the normal ordered sum of the product of free fields and a sequence of all possible field contractions; it is clear from this that partially contracted terms vanish inside the v.e.v., and are unimportant. 4.4 Feynman Diagrams First, it is funny that Dick’s diagrams follow from Wick’s theorem. We’ve seen that the CF of free field operators is equal to sum over all possible complete contractions. That naturally gives us a set of propagators flanking between a set of (external) points - the interpretation is that of a particle being created at one point, propagating to another point and being destroyed there; this is in accordance with the interpretation (in 2.3) of φ(x) as creating a particle at x. From (4.2), we have the answer for the v.e.v. in the interaction theory. For small λ, the time-ordered CF gives us a series of terms which contain Wick contractions over fields at the same point - we can read off the symmetry factors. The Wick picture also tells us why a φ4 term only allows 4 lines at a vertex. Finally, the mess at the end of (4.2) can be interpreted in the form of a beautiful theorem - the sum of all Feynman diagrams (originating from the above series) can be factored into a sum of all connected diagrams times an exponent over all disconnected pieces. The latter goes away from the normalisation in Eq(4.31). However, a little thought tells us that the factorization was indeed obvious. In any case, we can now draw the Feynman diagrams corresponding to a given order in perturbation theory. http://sdutta9.web.engr.illinois.edu/ps4.html Non-interacting theories do not couple Fourier modes; interacting (real-world) theories do. We need to add that in the Lagrangian. Causality doesn’t allow a non-local coupling like φ(x)φ(y). We consider three examples : the φ4 theory, QED and the Yukawa theory. φ4 is the simplest and we have all the tools to solve it. Yukawa is quite simple too, with a scalar-fermion coupling and QED is much harder, with gauge invariance coming into picture. In QED a “gauge” covariant derivative is introduced which in plain sight is the Dirac equation minimally coupled to the E-Mag field, i.e. ∂ → D. Although it seems like there is an infinite set of interactions allowed, the requirement of renormalizability of physical theories comes to rescue and only a small subset of {scalar, spinor, vector} interactions is retained, e.g. φn (n≤4), spinor-fermion, etc. That’s the good news. The bad news is that we don’t know how to exactly solve most interacting theories. In the interacting field theory (we shall limit ourselves to φ4) the vacuum Ω is in general different from the one in the free theory. The interaction picture is the easiest to work with. … … … We have been able to write down correlation function in the interacting field theory in terms of CFs of time ordered free field operators in the free vacuum. Towards Wick’s theorem, we need to understand Normal ordering and Contractions. Normal ordering is the prescription of rearranging products of ladder operators as annihilation operators on the right of creation operators - as such v.e.v. of normal ordered operators vanishes. In free field theory, a contraction is essentially writing down the Feynman propagator between the corresponding points. Wick’s theorem says that time ordered product of field operators is given by the normal ordered sum of the product of free fields and a sequence of all possible field contractions; it is clear from this that partially contracted terms vanish inside the v.e.v., and are unimportant. First, it is funny that Dick’s diagrams follow from Wick’s theorem. We’ve seen that the CF of free field operators is equal to sum over all possible complete contractions. That naturally gives us a set of propagators flanking between a set of (external) points - the interpretation is that of a particle being created at one point, propagating to another point and being destroyed there; this is in accordance with the interpretation (in 2.3) of φ(x) as creating a particle at x. From (4.2), we have the answer for the v.e.v. in the interaction theory. For small λ, the time-ordered CF gives us a series of terms which contain Wick contractions over fields at the same point - we can read off the symmetry factors. The Wick picture also tells us why a φ4 term only allows 4 lines at a vertex. Finally, the mess at the end of (4.2) can be interpreted in the form of a beautiful theorem - the sum of all Feynman diagrams (originating from the above series) can be factored into a sum of all connected diagrams times an exponent over all disconnected pieces. The latter goes away from the normalisation in Eq(4.31). However, a little thought tells us that the factorization was indeed obvious. In any case, we can now draw the Feynman diagrams corresponding to a given order in perturbation theory. http://sdutta9.web.engr.illinois.edu/ps4.html UV divergences are ubiquitous in QFT. And that shouldn’t bother us if they don’t appear in physical predictions - one way to guarantee that is to accept some of the parameters as measured parameters - such QFTs are called renormalizable, and QED is an example. To write an effective field theory, we need the relevant microscopic parameters and the relevant degrees of freedom. Kenny Wilson discovered that QFTs have a correspondence with statistical physics of fluids - in that spirit, we look at the Landau theory of phase transitions for a ferromagnet. http://
{"url":"https://physicstravelguide.com/resources/books/peskin_schroeder","timestamp":"2024-11-12T00:56:16Z","content_type":"text/html","content_length":"86683","record_id":"<urn:uuid:d5f917eb-2c7a-44e1-b6d9-aac4407e1ff0>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00688.warc.gz"}
Importance of R.M.S. Values what is the Importance of R.M.S. Values Sunita said on : 2018-12-15 07:53:35 Importance of R.M.S. Values An alternating voltage or current is always specified in terms of r.m.s. values. For example,common household appliances are rated at 230 V a.c. This is an r.m.s. value. If some other method of measurement is used, it must be specifically stated. Lacking any information to the contrary, always assume that a.c. values are rm. s. The following points will give the reader a clear concept about the r.m.s. values (1) The domestic a.c. supply is 230 V, 50 Hz. It is the r.m.s. or effective value. It means that alternating voltage available has the same heating effect as 230 V d.c. The equation of this alternating voltage is given by (2) When we say that alternating current in a circuit is 5 A, we are specifying the r.m.s. value.It means that the alternating current flowing in the circuit has the same heating effect as 5 A d.c. (3) A.C. ammeters and voltmeters record r.m.s. values of alternating current and voltage respectively. Fig. (a) summarises the various ways to measure sinusoidal voltages and the conversion constants. The relationships apply for currents as well as voltages figure (a) It is important to note that above relationship between peak, average and rm.s. quantities are applicable to only pure sine waves. In the case of other waveforms, these quantities are related by other (different) factors. Note. R.M.S. value of an a.c. wave is always greater than the average value except in the case of rectangular and square waves when both are equal !! OOPS Login [Click here] is required for more results / answer Help other students, write article, leave your comments
{"url":"https://engineeringslab.com/tutorial_electrical/importance-of-rms-values-1301.htm","timestamp":"2024-11-10T11:22:30Z","content_type":"text/html","content_length":"37366","record_id":"<urn:uuid:b5eeb517-99fd-46e8-b937-87f77b8fd0f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00690.warc.gz"}
How to Calculate the Percent Error Formula? The percent error formula is used to determine the accuracy of a value measured or estimated compared to the actual or known value. It is often used in scientific experiments to understand the precision of results. Here is the formula for calculating percent error: • Experimental Value (or Measured Value) is the value you have obtained from your experiment or measurement. • Actual Value (or True Value or Theoretical Value) is the value that is known through calculation or given as a standard, a known constant, or an accepted value. To compute the percent error, follow these steps: 1. Subtract the Actual Value from the Experimental Value. 2. Take the absolute value of the step 1 result, which means you ignore any negative sign. 3. Divide the absolute value from step 2 by the Actual Value to get the error ratio. 4. Multiply the result from step 3 by 100 to convert it into a percentage. The absolute value in this calculation means that the percent error will always be a positive number. This makes sense because percent error is a measure of how far off a measurement is, not in which Let’s work through an example: Suppose you have conducted an experiment where you measured a value to be 45 grams, but the known standard value is 50 grams. The percent error would be calculated as follows: 1. Experimental Value – Actual Value = 45 g – 50 g = -5 g 2. Absolute Value = |-5 g| = 5 g 3. Error Ratio = 5 g / 50 g = 0.1 4. Percent Error = 0.1 × 100% = 10% So, the percent error in this case would be 10%. Leave a Comment
{"url":"https://geekybytes.net/how-to-calculate-the-percent-error-formula.html","timestamp":"2024-11-11T13:58:03Z","content_type":"text/html","content_length":"47749","record_id":"<urn:uuid:71a4b4a9-4eb6-41de-9db9-7f561336b835>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00638.warc.gz"}
Comprehend Forecast Function with Examples | Easy Excel Tips | Excel Tutorial | Free Excel Help | Excel IF | Easy Excel No 1 Excel tutorial on the internet Home Excel Functions Comprehend Forecast Function with Examples Comprehend Forecast Function with Examples Comprehend Forecast Function with Examples In this lesson you can learn how to use forecast function and how to… predict future. The Forecast function can either calculate or predict the future value with the help of current value. A predicted value is the y-value that comes from a given x value. The popular values are both x- and y-values, while the new value will be predicted with the help of linear regression. This function is useful for prediction of the future sales, along with requirements of the inventory, or even trend of customers. Thanks to forecast function Excel can read the future. It is not a joke! Excel can predict some values if you give past data. Definition of FORECAST function The forecast function has different arguments that are all required. X: This is essential. It is a data point, for which the user want to predict the value. Known_y’s: This is also essential for the function. It is either a dependency array or data’s range. Known_x’s: This is essential. An independent array that can also be data’s range. It can looks difficult, but it is not. Let’s see some examples to explain more. Example 1: Simple Forecast Formula The data in this example has been layout on the table, and we need to try and predict the future value. Our business would need to predict the future, and we are using the date to acknowledge future Example 2: IF and Forecast This is about using the forecast in combination with IF functions. These two will address the issues that are associating with knowing how to predict the future. This data already makes it possible, and looks like this: Example 3: Minimum and Forecast The business is booming. But, we do not want to make assumptions about the ways that the business was built. The usage of forecast in this example, is about using the minimum sales. Example 4: Maximum and Forecast The maximum and forecast formula is combined together to get result. Example 5: Double Forecast Formula This example use two different forecast formulas together: Example 6: Match and Forecast The business is having some misunderstanding of the whole data, and we need to know that the business itself has a full comprehension. This example use both Match and Forecast functions. Example 7: Multiple Results This example handle issues that explain how to use the same data, and get results for three different cells. These cells will be marked, before typing in the formula Once formula is written, then you could press the F2, followed by the CTRL (CMD on apple) + SHIFT and then press enter. Example 8: Forecast minus Minimum This is the formula, where we are working on a more sophisticated information. The example explains prediction of the future value, but also minus minimum of the details. Example 9: Forecast with Text This example is what makes it possible to predict something, especially under the circumstances that it has been layout and labeled. Example 10: Average function and Forecast The understanding of this situation is, we would like to use the average function in combination to the forecast function to find out the value of the prediction. Example 11 How long students are studying to the test? You have some historical data about test equals. You also know, how long students have studying to the test. You want to calculate, how many point will get the student, who have been studying for 8 hours. You use forecast function to calculate that. x equals 8, because you are looking for a value for 8 hors argument known y’s are test points, which you already know known x’s are hours studying, which you also know Formula in this situation is =FORECAST(B11,C3:C10,B3:B10) like on the picture below: Forecasted value is 77. Value 77 means, that based on historical values, student will get 77 points. I hope forecast function is easy and clear. Lets do one more example. Example 12 Forecasting net income You have sales and net income of some company. You want to calculate net income for 500 000$ sales. x is 500 000$ known y’s is historical net income known x’s is historical sales Formula here is =FORECAST(B11,C3:C10,B3:B10) Net income equals 11921 $. Example 13 Dates forecasting You can also forecast dates. Here’s an example. In this table you have data of sales in the end of each month. You want to know where you will have 1 000 000 $ of sales. x is 1 000 000$ known y’s is historical dates known x’s is historical sales Formula is: =FORECAST(B10;C3:C9;B3:B9) Formatting of cells is General by default so your result is not a date. Change formatting of the cell. Click right and next Format Cells… and change formatting to Date. Result is 2011-11-03. Excel predicts that your company will have 1 million USD sales at that day. There are some basic examples. Of course there are also many ways to use forecast function for business. Forecast function is easy to use and really powerful. Tip in case of errors: If the x argument is nonnumeric, FORECAST function returns the #VALUE! error. If known_y’s and known_x’s are empty or contain a different number of data points, FORECAST function returns the #N/A error. If the variance of known_x argument is zero, the function FORECAST returns the #DIV/0! error.
{"url":"https://www.excelif.com/forecast-function/","timestamp":"2024-11-05T05:50:46Z","content_type":"text/html","content_length":"208535","record_id":"<urn:uuid:a3fd5dc3-ffbb-49f7-8dd8-f3dde197759d>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00476.warc.gz"}
Stoichiometry of Gaseous Substances, Mixtures, and Reactions 48 Stoichiometry of Gaseous Substances, Mixtures, and Reactions Learning Objectives By the end of this section, you will be able to: • Use the ideal gas law to compute gas densities and molar masses • Perform stoichiometric calculations involving gaseous substances • State Dalton’s law of partial pressures and use it in calculations involving gaseous mixtures The study of the chemical behavior of gases was part of the basis of perhaps the most fundamental chemical revolution in history. French nobleman Antoine Lavoisier, widely regarded as the “father of modern chemistry,” changed chemistry from a qualitative to a quantitative science through his work with gases. He discovered the law of conservation of matter, discovered the role of oxygen in combustion reactions, determined the composition of air, explained respiration in terms of chemical reactions, and more. He was a casualty of the French Revolution, guillotined in 1794. Of his death, mathematician and astronomer Joseph-Louis Lagrange said, “It took the mob only a moment to remove his head; a century will not suffice to reproduce it.”^1 As described in an earlier chapter of this text, we can turn to chemical stoichiometry for answers to many of the questions that ask “How much?” The essential property involved in such use of stoichiometry is the amount of substance, typically measured in moles (n). For gases, molar amount can be derived from convenient experimental measurements of pressure, temperature, and volume. Therefore, these measurements are useful in assessing the stoichiometry of pure gases, gas mixtures, and chemical reactions involving gases. This section will not introduce any new material or ideas, but will provide examples of applications and ways to integrate concepts we have already discussed. Gas Density and Molar Mass The ideal gas law described previously in this chapter relates the properties of pressure P, volume V, temperature T, and molar amount n. This law is universal, relating these properties in identical fashion regardless of the chemical identity of the gas: The density d of a gas, on the other hand, is determined by its identity. As described in another chapter of this text, the density of a substance is a characteristic property that may be used to identify the substance. Rearranging the ideal gas equation to isolate V and substituting into the density equation yields The ratio m/n is the definition of molar mass, ℳ: The density equation can then be written This relation may be used for calculating the densities of gases of known identities at specified values of pressure and temperature as demonstrated in (Figure). Measuring Gas Density What is the density of molecular nitrogen gas at STP? Solution The molar mass of molecular nitrogen, N[2], is 28.01 g/mol. Substituting this value along with standard temperature and pressure into the gas density equation yields Check Your Learning What is the density of molecular hydrogen gas at 17.0 °C and a pressure of 760 torr? When the identity of a gas is unknown, measurements of the mass, pressure, volume, and temperature of a sample can be used to calculate the molar mass of the gas (a useful property for identification purposes). Combining the ideal gas equation and the definition of molarity yields the following equation: Determining the molar mass of a gas via this approach is demonstrated in (Figure). Determining the Molecular Formula of a Gas from its Molar Mass and Empirical Formula Cyclopropane, a gas once used with oxygen as a general anesthetic, is composed of 85.7% carbon and 14.3% hydrogen by mass. Find the empirical formula. If 1.56 g of cyclopropane occupies a volume of 1.00 L at 0.984 atm and 50 °C, what is the molecular formula for cyclopropane? Solution First determine the empirical formula of the gas. Assume 100 g and convert the percentage of each element into grams. Determine the number of moles of carbon and hydrogen in the 100-g sample of cyclopropane. Divide by the smallest number of moles to relate the number of moles of carbon to the number of moles of hydrogen. In the last step, realize that the smallest whole number ratio is the empirical formula: \(\text{85.7 g C}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\frac{\text{1 mol C}}{\text{12.01 g C}}\phantom{\rule{0.2em}{0ex}}=\text{7.136 mol C}\phantom{\rule{2em}{0ex}}\frac{7.136} {7.136}\phantom{\rule{0.2em}{0ex}}=\text{1.00 mol C}\) \(\text{14.3 g H}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\frac{\text{1 mol H}}{\text{1.01 g H}}\phantom{\rule{0.2em}{0ex}}=\text{14.158 mol H}\phantom{\rule{2em}{0ex}}\frac{14.158} {7.136}\phantom{\rule{0.2em}{0ex}}=\text{1.98 mol H}\) Empirical formula is CH[2] [empirical mass (EM) of 14.03 g/empirical unit]. Next, use the provided values for mass, pressure, temperature and volume to compute the molar mass of the gas: Comparing the molar mass to the empirical formula mass shows how many empirical formula units make up a molecule: The molecular formula is thus derived from the empirical formula by multiplying each of its subscripts by three: Check Your LearningAcetylene, a fuel used welding torches, is composed of 92.3% C and 7.7% H by mass. Find the empirical formula. If 1.10 g of acetylene occupies of volume of 1.00 L at 1.15 atm and 59.5 °C, what is the molecular formula for acetylene? Empirical formula, CH; Molecular formula, C[2]H[2] Determining the Molar Mass of a Volatile Liquid The approximate molar mass of a volatile liquid can be determined by: 1. Heating a sample of the liquid in a flask with a tiny hole at the top, which converts the liquid into gas that may escape through the hole 2. Removing the flask from heat at the instant when the last bit of liquid becomes gas, at which time the flask will be filled with only gaseous sample at ambient pressure 3. Sealing the flask and permitting the gaseous sample to condense to liquid, and then weighing the flask to determine the sample’s mass (see (Figure)) Using this procedure, a sample of chloroform gas weighing 0.494 g is collected in a flask with a volume of 129 cm^3 at 99.6 °C when the atmospheric pressure is 742.1 mm Hg. What is the approximate molar mass of chloroform? Solution Since \(\text{ℳ}=\phantom{\rule{0.2em}{0ex}}\frac{m}{n}\) and \(n=\phantom{\rule{0.2em}{0ex}}\frac{PV}{RT},\) substituting and rearranging gives \(\text{ℳ}=\phantom{\rule{0.2em}{0ex}}\frac \(\text{ℳ}=\phantom{\rule{0.2em}{0ex}}\frac{mRT}{PV}\phantom{\rule{0.2em}{0ex}}=\phantom{\rule{0.2em}{0ex}}\frac{\left(\text{0.494 g}\right)\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\ text{0.08206 L·atm/mol K}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\text{372.8 K}}{\text{0.976 atm}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\text{0.129 L}}\phantom{\rule Check Your Learning A sample of phosphorus that weighs 3.243 \(×\) 10^−2 g exerts a pressure of 31.89 kPa in a 56.0-mL bulb at 550 °C. What are the molar mass and molecular formula of phosphorus The Pressure of a Mixture of Gases: Dalton’s Law Unless they chemically react with each other, the individual gases in a mixture of gases do not affect each other’s pressure. Each individual gas in a mixture exerts the same pressure that it would exert if it were present alone in the container ((Figure)). The pressure exerted by each individual gas in a mixture is called its partial pressure. This observation is summarized by Dalton’s law of partial pressures: The total pressure of a mixture of ideal gases is equal to the sum of the partial pressures of the component gases: In the equation P[Total] is the total pressure of a mixture of gases, P[A] is the partial pressure of gas A; P[B] is the partial pressure of gas B; P[C] is the partial pressure of gas C; and so on. The partial pressure of gas A is related to the total pressure of the gas mixture via its mole fraction (X), a unit of concentration defined as the number of moles of a component of a solution divided by the total number of moles of all components: where P[A], X[A], and n[A] are the partial pressure, mole fraction, and number of moles of gas A, respectively, and n[Total] is the number of moles of all components in the mixture. The Pressure of a Mixture of Gases A 10.0-L vessel contains 2.50 \(×\) 10^−3 mol of H[2], 1.00 \(×\) 10^−3 mol of He, and 3.00 \(×\) 10^−4 mol of Ne at 35 °C. (a) What are the partial pressures of each of the gases? (b) What is the total pressure in atmospheres? SolutionThe gases behave independently, so the partial pressure of each gas can be determined from the ideal gas equation, using \(P=\phantom{\rule{0.2em}{0ex}}\frac{nRT}{V}\): The total pressure is given by the sum of the partial pressures: Check Your Learning A 5.73-L flask at 25 °C contains 0.0388 mol of N[2], 0.147 mol of CO, and 0.0803 mol of H[2]. What is the total pressure in the flask in atmospheres? Here is another example of this concept, but dealing with mole fraction calculations. The Pressure of a Mixture of Gases A gas mixture used for anesthesia contains 2.83 mol oxygen, O[2], and 8.41 mol nitrous oxide, N[2]O. The total pressure of the mixture is 192 kPa. (a) What are the mole fractions of O[2] and N[2]O? (b) What are the partial pressures of O[2] and N[2]O? Solution The mole fraction is given by \({X}_{A}=\phantom{\rule{0.2em}{0ex}}\frac{{n}_{A}}{{n}_{Total}}\) and the partial pressure is P[A] = X[A] \(×\) P[Total]. For O[2], \({X}_{{O}_{2}}=\phantom{\rule{0.2em}{0ex}}\frac{{n}_{{O}_{2}}}{{n}_{Total}}\phantom{\rule{0.2em}{0ex}}=\phantom{\rule{0.2em}{0ex}}\frac{\text{2.83 mol}}{\left(2.83+8.41\right)\phantom{\rule{0.2em} and \({P}_{{O}_{2}}={X}_{{O}_{2}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{P}_{Total}=0.252\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\text{192 kPa}=\text{48.4 kPa}\) For N[2]O, \({X}_{{N}_{2}}=\phantom{\rule{0.2em}{0ex}}\frac{{n}_{{N}_{2}}}{{n}_{\mathrm{Total}}}\phantom{\rule{0.2em}{0ex}}=\phantom{\rule{0.2em}{0ex}}\frac{\text{8.41 mol}}{\left(2.83+8.41\right)\phantom{\rule \({P}_{{N}_{2}}={X}_{{N}_{2}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{P}_{\mathrm{Total}}=0.748\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\text{192 kPa}=\text{143.6 kPa}\) Check Your Learning What is the pressure of a mixture of 0.200 g of H[2], 1.00 g of N[2], and 0.820 g of Ar in a container with a volume of 2.00 L at 20 °C? Collection of Gases over Water A simple way to collect gases that do not react with water is to capture them in a bottle that has been filled with water and inverted into a dish filled with water. The pressure of the gas inside the bottle can be made equal to the air pressure outside by raising or lowering the bottle. When the water level is the same both inside and outside the bottle ((Figure)), the pressure of the gas is equal to the atmospheric pressure, which can be measured with a barometer. However, there is another factor we must consider when we measure the pressure of the gas by this method. Water evaporates and there is always gaseous water (water vapor) above a sample of liquid water. As a gas is collected over water, it becomes saturated with water vapor and the total pressure of the mixture equals the partial pressure of the gas plus the partial pressure of the water vapor. The pressure of the pure gas is therefore equal to the total pressure minus the pressure of the water vapor—this is referred to as the “dry” gas pressure, that is, the pressure of the gas only, without water vapor. The vapor pressure of water, which is the pressure exerted by water vapor in equilibrium with liquid water in a closed container, depends on the temperature ((Figure)); more detailed information on the temperature dependence of water vapor can be found in (Figure), and vapor pressure will be discussed in more detail in the chapter on liquids. Vapor Pressure of Ice and Water in Various Temperatures at Sea Level Temperature (°C) Pressure (torr) Temperature (°C) Pressure (torr) Temperature (°C) Pressure (torr) –10 1.95 18 15.5 30 31.8 –5 3.0 19 16.5 35 42.2 –2 3.9 20 17.5 40 55.3 0 4.6 21 18.7 50 92.5 2 5.3 22 19.8 60 149.4 4 6.1 23 21.1 70 233.7 6 7.0 24 22.4 80 355.1 8 8.0 25 23.8 90 525.8 10 9.2 26 25.2 95 633.9 12 10.5 27 26.7 99 733.2 14 12.0 28 28.3 100.0 760.0 16 13.6 29 30.0 101.0 787.6 Pressure of a Gas Collected Over Water If 0.200 L of argon is collected over water at a temperature of 26 °C and a pressure of 750 torr in a system like that shown in (Figure), what is the partial pressure of argon? Solution According to Dalton’s law, the total pressure in the bottle (750 torr) is the sum of the partial pressure of argon and the partial pressure of gaseous water: Rearranging this equation to solve for the pressure of argon gives: The pressure of water vapor above a sample of liquid water at 26 °C is 25.2 torr (Appendix E), so: Check Your Learning A sample of oxygen collected over water at a temperature of 29.0 °C and a pressure of 764 torr has a volume of 0.560 L. What volume would the dry oxygen have under the same conditions of temperature and pressure? Chemical Stoichiometry and Gases Chemical stoichiometry describes the quantitative relationships between reactants and products in chemical reactions. We have previously measured quantities of reactants and products using masses for solids and volumes in conjunction with the molarity for solutions; now we can also use gas volumes to indicate quantities. If we know the volume, pressure, and temperature of a gas, we can use the ideal gas equation to calculate how many moles of the gas are present. If we know how many moles of a gas are involved, we can calculate the volume of a gas at any temperature and pressure. Avogadro’s Law Revisited Sometimes we can take advantage of a simplifying feature of the stoichiometry of gases that solids and solutions do not exhibit: All gases that show ideal behavior contain the same number of molecules in the same volume (at the same temperature and pressure). Thus, the ratios of volumes of gases involved in a chemical reaction are given by the coefficients in the equation for the reaction, provided that the gas volumes are measured at the same temperature and pressure. We can extend Avogadro’s law (that the volume of a gas is directly proportional to the number of moles of the gas) to chemical reactions with gases: Gases combine, or react, in definite and simple proportions by volume, provided that all gas volumes are measured at the same temperature and pressure. For example, since nitrogen and hydrogen gases react to produce ammonia gas according to \({\ text{N}}_{2}\left(g\right)+3{\text{H}}_{2}\left(g\right)\phantom{\rule{0.4em}{0ex}}⟶\phantom{\rule{0.4em}{0ex}}2{\text{NH}}_{3}\left(g\right),\) a given volume of nitrogen gas reacts with three times that volume of hydrogen gas to produce two times that volume of ammonia gas, if pressure and temperature remain constant. The explanation for this is illustrated in (Figure). According to Avogadro’s law, equal volumes of gaseous N[2], H[2], and NH[3], at the same temperature and pressure, contain the same number of molecules. Because one molecule of N[2] reacts with three molecules of H[2] to produce two molecules of NH[3], the volume of H[2] required is three times the volume of N[2], and the volume of NH[3] produced is two times the volume of N[2].[<!– no-selfclose –>] Reaction of Gases Propane, C[3]H[8](g), is used in gas grills to provide the heat for cooking. What volume of O[2](g) measured at 25 °C and 760 torr is required to react with 2.7 L of propane measured under the same conditions of temperature and pressure? Assume that the propane undergoes complete combustion. Solution The ratio of the volumes of C[3]H[8] and O[2] will be equal to the ratio of their coefficients in the balanced equation for the reaction: \(\begin{array}{l}{\text{C}}_{3}{\text{H}}_{8}\left(g\right)+5{\text{O}}_{2}\left(g\right)\text{}\text{}⟶\text{}\text{}3{\text{CO}}_{2}\left(g\right)+4{\text{H}}_{2}\text{O}\left(l\right)\\ \text {1 volume}+\text{5 volumes}\phantom{\rule{3.5em}{0ex}}\text{3 volumes}+\text{4 volumes}\end{array}\) From the equation, we see that one volume of C[3]H[8] will react with five volumes of O[2]: \(2.7\phantom{\rule{0.2em}{0ex}}\overline{)\text{L}\phantom{\rule{0.2em}{0ex}}{\text{C}}_{3}{\text{H}}_{8}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\frac{\text{5 L}\phantom{\rule{0.2em} {0ex}}{\text{O}}_{2}}{1\phantom{\rule{0.2em}{0ex}}\overline{)\text{L}\phantom{\rule{0.2em}{0ex}}{\text{C}}_{3}{\text{H}}_{8}}}\phantom{\rule{0.2em}{0ex}}=\text{13.5 L}\phantom{\rule{0.2em}{0ex}}{\ A volume of 13.5 L of O[2] will be required to react with 2.7 L of C[3]H[8]. Check Your Learning An acetylene tank for an oxyacetylene welding torch provides 9340 L of acetylene gas, C[2]H[2], at 0 °C and 1 atm. How many tanks of oxygen, each providing 7.00 \(×\) 10^3 L of O [2] at 0 °C and 1 atm, will be required to burn the acetylene? 3.34 tanks (2.34 \(×\) 10^4 L) Volumes of Reacting Gases Ammonia is an important fertilizer and industrial chemical. Suppose that a volume of 683 billion cubic feet of gaseous ammonia, measured at 25 °C and 1 atm, was manufactured. What volume of H[2](g), measured under the same conditions, was required to prepare this amount of ammonia by reaction with N[2]? Solution Because equal volumes of H[2] and NH[3] contain equal numbers of molecules and each three molecules of H[2] that react produce two molecules of NH[3], the ratio of the volumes of H[2] and NH [3] will be equal to 3:2. Two volumes of NH[3], in this case in units of billion ft^3, will be formed from three volumes of H[2]: frac{\text{3 billion}\phantom{\rule{0.2em}{0ex}}{\text{ft}}^{3}\phantom{\rule{0.2em}{0ex}}{\text{H}}_{2}}{2\phantom{\rule{0.2em}{0ex}}\overline{)\text{billion}\phantom{\rule{0.2em}{0ex}}{\text{ft}}^ The manufacture of 683 billion ft^3 of NH[3] required 1020 billion ft^3 of H[2]. (At 25 °C and 1 atm, this is the volume of a cube with an edge length of approximately 1.9 miles.) Check Your Learning What volume of O[2](g) measured at 25 °C and 760 torr is required to react with 17.0 L of ethylene, C[2]H[4](g), measured under the same conditions of temperature and pressure? The products are CO[2] and water vapor. Volume of Gaseous Product What volume of hydrogen at 27 °C and 723 torr may be prepared by the reaction of 8.88 g of gallium with an excess of hydrochloric acid? Convert the provided mass of the limiting reactant, Ga, to moles of hydrogen produced: \(8.88\phantom{\rule{0.2em}{0ex}}\overline{)\text{g Ga}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\frac{1\phantom{\rule{0.2em}{0ex}}\overline{)\text{mol Ga}}}{69.723\phantom{\rule{0.2em} {0ex}}\overline{)\text{g Ga}}}\phantom{\rule{0.4em}{0ex}}×\phantom{\rule{0.4em}{0ex}}\frac{\text{3 mol}\phantom{\rule{0.2em}{0ex}}{\text{H}}_{2}}{2\phantom{\rule{0.2em}{0ex}}\overline{)\text{mol Ga}}}\phantom{\rule{0.2em}{0ex}}=0.191\phantom{\rule{0.2em}{0ex}}{\text{mol H}}_{2}\) Convert the provided temperature and pressure values to appropriate units (K and atm, respectively), and then use the molar amount of hydrogen gas and the ideal gas equation to calculate the volume of gas: \(V=\left(\frac{nRT}{P}\right)=\phantom{\rule{0.2em}{0ex}}\frac{0.191\phantom{\rule{0.2em}{0ex}}\overline{)\text{mol}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\text{0.08206 L}\phantom{\ {300 K}}{0.951\phantom{\rule{0.2em}{0ex}}\overline{)\text{atm}}}\phantom{\rule{0.2em}{0ex}}=\text{4.94 L}\) Check Your Learning Sulfur dioxide is an intermediate in the preparation of sulfuric acid. What volume of SO[2] at 343 °C and 1.21 atm is produced by burning l.00 kg of sulfur in excess oxygen? Greenhouse Gases and Climate Change The thin skin of our atmosphere keeps the earth from being an ice planet and makes it habitable. In fact, this is due to less than 0.5% of the air molecules. Of the energy from the sun that reaches the earth, almost \(\frac{1}{3}\) is reflected back into space, with the rest absorbed by the atmosphere and the surface of the earth. Some of the energy that the earth absorbs is re-emitted as infrared (IR) radiation, a portion of which passes back out through the atmosphere into space. Most of this IR radiation, however, is absorbed by certain atmospheric gases, effectively trapping heat within the atmosphere in a phenomenon known as the greenhouse effect. This effect maintains global temperatures within the range needed to sustain life on earth. Without our atmosphere, the earth’s average temperature would be lower by more than 30 °C (nearly 60 °F). The major greenhouse gases (GHGs) are water vapor, carbon dioxide, methane, and ozone. Since the Industrial Revolution, human activity has been increasing the concentrations of GHGs, which have changed the energy balance and are significantly altering the earth’s climate ((Figure)). There is strong evidence from multiple sources that higher atmospheric levels of CO[2] are caused by human activity, with fossil fuel burning accounting for about \(\frac{3}{4}\) of the recent increase in CO[2]. Reliable data from ice cores reveals that CO[2] concentration in the atmosphere is at the highest level in the past 800,000 years; other evidence indicates that it may be at its highest level in 20 million years. In recent years, the CO[2] concentration has increased from preindustrial levels of ~280 ppm to more than 400 ppm today ((Figure)). Click here to see a 2-minute video explaining greenhouse gases and global warming. Susan Solomon Atmospheric and climate scientist Susan Solomon ((Figure)) is the author of one of The New York Times books of the year (The Coldest March, 2001), one of Time magazine’s 100 most influential people in the world (2008), and a working group leader of the Intergovernmental Panel on Climate Change (IPCC), which was the recipient of the 2007 Nobel Peace Prize. She helped determine and explain the cause of the formation of the ozone hole over Antarctica, and has authored many important papers on climate change. She has been awarded the top scientific honors in the US and France (the National Medal of Science and the Grande Medaille, respectively), and is a member of the National Academy of Sciences, the Royal Society, the French Academy of Sciences, and the European Academy of Sciences. Formerly a professor at the University of Colorado, she is now at MIT, and continues to work at NOAA. For more information, watch this video about Susan Solomon. Key Concepts and Summary The ideal gas law can be used to derive a number of convenient equations relating directly measured quantities to properties of interest for gaseous substances and mixtures. Appropriate rearrangement of the ideal gas equation may be made to permit the calculation of gas densities and molar masses. Dalton’s law of partial pressures may be used to relate measured gas pressures for gaseous mixtures to their compositions. Avogadro’s law may be used in stoichiometric computations for chemical reactions involving gaseous reactants or products. Key Equations • P[Total] = P[A] + P[B] + P[C] + … = Ʃ[i]P[i] • P[A] = X[A] P[Total] • \({X}_{A}=\phantom{\rule{0.2em}{0ex}}\frac{{n}_{A}}{{n}_{Total}}\) Chemistry End of Chapter Exercises What is the density of laughing gas, dinitrogen monoxide, N[2]O, at a temperature of 325 K and a pressure of 113.0 kPa? Calculate the density of Freon 12, CF[2]Cl[2], at 30.0 °C and 0.954 atm. Which is denser at the same temperature and pressure, dry air or air saturated with water vapor? Explain. A cylinder of O[2](g) used in breathing by patients with emphysema has a volume of 3.00 L at a pressure of 10.0 atm. If the temperature of the cylinder is 28.0 °C, what mass of oxygen is in the What is the molar mass of a gas if 0.0494 g of the gas occupies a volume of 0.100 L at a temperature 26 °C and a pressure of 307 torr? What is the molar mass of a gas if 0.281 g of the gas occupies a volume of 125 mL at a temperature 126 °C and a pressure of 777 torr? How could you show experimentally that the molecular formula of propene is C[3]H[6], not CH[2]? The density of a certain gaseous fluoride of phosphorus is 3.93 g/L at STP. Calculate the molar mass of this fluoride and determine its molecular formula. Consider this question: What is the molecular formula of a compound that contains 39% C, 45% N, and 16% H if 0.157 g of the compound occupies 125 mL with a pressure of 99.5 kPa at 22 °C? (a) Outline the steps necessary to answer the question. (b) Answer the question. A 36.0–L cylinder of a gas used for calibration of blood gas analyzers in medical laboratories contains 350 g CO[2], 805 g O[2], and 4,880 g N[2]. At 25 degrees C, what is the pressure in the cylinder in atmospheres? A cylinder of a gas mixture used for calibration of blood gas analyzers in medical laboratories contains 5.0% CO[2], 12.0% O[2], and the remainder N[2] at a total pressure of 146 atm. What is the partial pressure of each component of this gas? (The percentages given indicate the percent of the total pressure that is due to each component.) A sample of gas isolated from unrefined petroleum contains 90.0% CH[4], 8.9% C[2]H[6], and 1.1% C[3]H[8] at a total pressure of 307.2 kPa. What is the partial pressure of each component of this gas? (The percentages given indicate the percent of the total pressure that is due to each component.) CH[4]: 276 kPa; C[2]H[6]: 27 kPa; C[3]H[8]: 3.4 kPa A mixture of 0.200 g of H[2], 1.00 g of N[2], and 0.820 g of Ar is stored in a closed container at STP. Find the volume of the container, assuming that the gases exhibit ideal behavior. Most mixtures of hydrogen gas with oxygen gas are explosive. However, a mixture that contains less than 3.0 % O[2] is not. If enough O[2] is added to a cylinder of H[2] at 33.2 atm to bring the total pressure to 34.5 atm, is the mixture explosive? A commercial mercury vapor analyzer can detect, in air, concentrations of gaseous Hg atoms (which are poisonous) as low as 2 \(×\) 10^−6 mg/L of air. At this concentration, what is the partial pressure of gaseous mercury if the atmospheric pressure is 733 torr at 26 °C? A sample of carbon monoxide was collected over water at a total pressure of 756 torr and a temperature of 18 °C. What is the pressure of the carbon monoxide? (See (Figure) for the vapor pressure of In an experiment in a general chemistry laboratory, a student collected a sample of a gas over water. The volume of the gas was 265 mL at a pressure of 753 torr and a temperature of 27 °C. The mass of the gas was 0.472 g. What was the molar mass of the gas? Joseph Priestley first prepared pure oxygen by heating mercuric oxide, HgO: (a) Outline the steps necessary to answer the following question: What volume of O[2] at 23 °C and 0.975 atm is produced by the decomposition of 5.36 g of HgO? (b) Answer the question. (a) Determine the moles of HgO that decompose; using the chemical equation, determine the moles of O[2] produced by decomposition of this amount of HgO; and determine the volume of O[2] from the moles of O[2], temperature, and pressure. (b) 0.308 L Cavendish prepared hydrogen in 1766 by the novel method of passing steam through a red-hot gun barrel: (a) Outline the steps necessary to answer the following question: What volume of H[2] at a pressure of 745 torr and a temperature of 20 °C can be prepared from the reaction of 15.O g of H[2]O? (b) Answer the question. The chlorofluorocarbon CCl[2]F[2] can be recycled into a different compound by reaction with hydrogen to produce CH[2]F[2](g), a compound useful in chemical manufacturing: (a) Outline the steps necessary to answer the following question: What volume of hydrogen at 225 atm and 35.5 °C would be required to react with 1 ton (1.000 \(×\) 10^3 kg) of CCl[2]F[2]? (b) Answer the question. (a) Determine the molar mass of CCl[2]F[2]. From the balanced equation, calculate the moles of H[2] needed for the complete reaction. From the ideal gas law, convert moles of H[2] into volume. (b) 3.72 \(×\) 10^3 L Automobile air bags are inflated with nitrogen gas, which is formed by the decomposition of solid sodium azide (NaN[3]). The other product is sodium metal. Calculate the volume of nitrogen gas at 27 °C and 756 torr formed by the decomposition of 125 g of sodium azide. Lime, CaO, is produced by heating calcium carbonate, CaCO[3]; carbon dioxide is the other product. (a) Outline the steps necessary to answer the following question: What volume of carbon dioxide at 875° and 0.966 atm is produced by the decomposition of 1 ton (1.000 \(×\) 10^3 kg) of calcium (b) Answer the question. (a) Balance the equation. Determine the grams of CO[2] produced and the number of moles. From the ideal gas law, determine the volume of gas. (b) 7.43 \(×\) 10^5 L Before small batteries were available, carbide lamps were used for bicycle lights. Acetylene gas, C[2]H[2], and solid calcium hydroxide were formed by the reaction of calcium carbide, CaC[2], with water. The ignition of the acetylene gas provided the light. Currently, the same lamps are used by some cavers, and calcium carbide is used to produce acetylene for carbide cannons. (a) Outline the steps necessary to answer the following question: What volume of C[2]H[2] at 1.005 atm and 12.2 °C is formed by the reaction of 15.48 g of CaC[2] with water? (b) Answer the question. Calculate the volume of oxygen required to burn 12.00 L of ethane gas, C[2]H[6], to produce carbon dioxide and water, if the volumes of C[2]H[6] and O[2] are measured under the same conditions of temperature and pressure. What volume of O[2] at STP is required to oxidize 8.0 L of NO at STP to NO[2]? What volume of NO[2] is produced at STP? Consider the following questions: (a) What is the total volume of the CO[2](g) and H[2]O(g) at 600 °C and 0.888 atm produced by the combustion of 1.00 L of C[2]H[6](g) measured at STP? (b) What is the partial pressure of H[2]O in the product gases? (a) 18.0 L; (b) 0.533 atm Methanol, CH[3]OH, is produced industrially by the following reaction: \(\text{CO}\left(g\right)+2{\text{H}}_{2}\left(g\right)\phantom{\rule{0.2em}{0ex}}\stackrel{\phantom{\rule{2em}{0ex}}\text{copper catalyst 300 °C, 300 atm}\phantom{\rule{2em}{0ex}}}{\to }\phantom{\ Assuming that the gases behave as ideal gases, find the ratio of the total volume of the reactants to the final volume. What volume of oxygen at 423.0 K and a pressure of 127.4 kPa is produced by the decomposition of 129.7 g of BaO[2] to BaO and O[2]? A 2.50-L sample of a colorless gas at STP decomposed to give 2.50 L of N[2] and 1.25 L of O[2] at STP. What is the colorless gas? Ethanol, C[2]H[5]OH, is produced industrially from ethylene, C[2]H[4], by the following sequence of reactions: What volume of ethylene at STP is required to produce 1.000 metric ton (1000 kg) of ethanol if the overall yield of ethanol is 90.1%? One molecule of hemoglobin will combine with four molecules of oxygen. If 1.0 g of hemoglobin combines with 1.53 mL of oxygen at body temperature (37 °C) and a pressure of 743 torr, what is the molar mass of hemoglobin? A sample of a compound of xenon and fluorine was confined in a bulb with a pressure of 18 torr. Hydrogen was added to the bulb until the pressure was 72 torr. Passage of an electric spark through the mixture produced Xe and HF. After the HF was removed by reaction with solid KOH, the final pressure of xenon and unreacted hydrogen in the bulb was 36 torr. What is the empirical formula of the xenon fluoride in the original sample? (Note: Xenon fluorides contain only one xenon atom per molecule.) One method of analyzing amino acids is the van Slyke method. The characteristic amino groups (−NH[2]) in protein material are allowed to react with nitrous acid, HNO[2], to form N[2] gas. From the volume of the gas, the amount of amino acid can be determined. A 0.0604-g sample of a biological sample containing glycine, CH[2](NH[2])COOH, was analyzed by the van Slyke method and yielded 3.70 mL of N[2] collected over water at a pressure of 735 torr and 29 °C. What was the percentage of glycine in the sample? • 1“Quotations by Joseph-Louis Lagrange,” last modified February 2006, accessed February 10, 2015, http://www-history.mcs.st-andrews.ac.uk/Quotations/Lagrange.html Dalton’s law of partial pressures total pressure of a mixture of ideal gases is equal to the sum of the partial pressures of the component gases. mole fraction (X) concentration unit defined as the ratio of the molar amount of a mixture component to the total number of moles of all mixture components partial pressure pressure exerted by an individual gas in a mixture vapor pressure of water pressure exerted by water vapor in equilibrium with liquid water in a closed container at a specific temperature
{"url":"https://pressbooks.nscc.ca/chemistryatoms/chapter/stoichiometry-of-gaseous-substances-mixtures-and-reactions/","timestamp":"2024-11-02T05:50:15Z","content_type":"text/html","content_length":"182833","record_id":"<urn:uuid:eb08f737-8c73-4614-87a9-a044bd7f99fd>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00589.warc.gz"}
what number is abc on What Number is abc On? When it comes to assigning numerical values, the concept of numbering systems has a significant role in defining and representing quantities. Different numbering systems have been developed throughout history to suit various needs and applications. One such system is the decimal system, which is widely used and understood by most of us. However, there exist several other systems, each with its unique characteristics and applications. One popular alternative is the alphanumeric numbering system, which encompasses letters, numbers, and symbols. In the alphanumeric system, letters from the alphabet are used along with numbers to represent values. Each letter of the alphabet is assigned a numerical value, allowing for a broad range of possibilities to be represented. The alphanumeric system is often employed in a variety of contexts, such as computer programming, vehicle identification numbers (VIN), and categorizing alphanumeric Now, let’s address the query at hand: What number is abc on? In this context, “abc” refers to a combination of three letters in the alphanumeric system. Since each letter has a corresponding numerical value, we can determine the equivalent number of “abc” by assigning values to each letter and calculating their sum. To ascertain the numerical representation of a letter in the alphanumeric system, one common approach is to utilize the ASCII (American Standard Code for Information Interchange) table. This table maps each character in the system to a specific numerical value. For instance, the letter “a” corresponds to the number 97, “b” corresponds to 98, and “c” corresponds to 99. In the decimal system, we determine numerical values by considering the positional notation. Each position represents a power of 10, starting from right to left. However, in the alphanumeric system, where we have both letters and numbers, we consider a different positional notation. Here, each position represents a power of 26, since there are 26 letters in the English alphabet. Let’s consider the example of “abc” to elucidate the positional notation in the alphanumeric system. In this context, the first position (from the right) represents the units, the second position represents the 26s place, and the third position represents the 26^2s place. To calculate the numerical value of “abc,” we can use the following equations: (26^2 x a) + (26 x b) + c Let’s say we assign the value of 1 to “a,” 2 to “b,” and 3 to “c.” Plugging these values into the equation, we obtain: (26^2 x 1) + (26 x 2) + 3 = 677 Therefore, in the alphanumeric system, “abc” is represented by the number 677. It’s essential to note that the alphanumeric system is not universally accepted or employed in every context. Its use primarily depends on the specific requirements of a given field or application. In many cases, numbers alone (such as in the decimal system) are sufficient for representing and manipulating quantities. Nonetheless, the alphanumeric system provides a flexible and diverse mechanism for encoding information and values. In conclusion, the question “What number is abc on?” relates to the alphanumeric numbering system. By assigning numerical values to each letter and employing a positional notation based on powers of 26, we can calculate the equivalent number of “abc”. With the values of a=1, b=2, and c=3, “abc” in the alphanumeric system corresponds to the number 677. The alphanumeric system offers an alternative method for representing information, suited for various domains where letters and numbers combine to convey meaning and categorization.
{"url":"https://www.esplineco.com/what-number-is-abc-on/","timestamp":"2024-11-12T19:22:37Z","content_type":"text/html","content_length":"39906","record_id":"<urn:uuid:d828f3a5-5a7f-45d9-ba25-69919afb6c30>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00538.warc.gz"}
Bernoulli Equation | Brilliant Math & Science Wiki The Bernoulli differential equation is an equation of the form \(y'+ p(x) y=q(x) y^n\). This is a non-linear differential equation that can be reduced to a linear one by a clever substitution. The new equation is a first order linear differential equation, and can be solved explicitly. The Bernoulli equation was one of the first differential equations to be solved, and is still one of very few non-linear differential equations that can be solved explicitly. Most other such equations either have no solutions, or solutions that cannot be written in a closed form, but the Bernoulli equation is an exception. The idea behind the Bernoulli equation is to substitute \(v=y^{1-n}\), and work with the resulting equation, as shown in the example below. Solve the differential equation \(y'+y=xy^2.\) We start by dividing through by \(y^2\), to get \[y' y^{-2} +y^{-1}=x.\] Now, if we let \(v=y^{-1}\), we have \(v'=-y^{-2}y'\), so our equation becomes \[-v'+v=x\implies v'-v=-x\] Now, we multiply through by the integrating factor \(e^{-x}\), and we have \[e^{-x}v'-e^{-x} v=-xe^{-x}\implies \left[e^{-x}v\right]'=-xe^{-x}\implies e^{-x}v=e^{-x}+xe^{-x}+C\implies v=1+x+Ce^x.\] Finally, since \(v=y^{-1}\), we have the solution \[y=\frac{1}{Ce^x+x+1}.\] \[\frac{1}{\sqrt{e^2+e}}\] \[\frac{1}{\sqrt{e^2-2e}}\] \[\frac{1}{\sqrt{e^2+2e}}\] \[\frac{1}{\sqrt{e^2-e}}\] \[\large y'+y=e^xy^3\] If a function \(y\) satisfies the differential equation above, and \(y(0)=\dfrac{1}{\sqrt{3}}\), what is \(y(1)\)? The same procedure can be used to solve the general version of the equation. Solve the differential equation \(y'+p(x) y=q(x) y^n\). We divide through by \(y^n\), so we have \[y' y^{-n}+p(x)y^{1-n}=q(x).\] Then, if \(v=y^{1-n}\), \(v'=(1-n)y^{-n} y'\), so this equation is \[\frac{1}{1-n} v'+p(x) v=q(x)\implies v'+(1-n)p(x) v=q (x)(1-n).\] Then, the integrating factor will be a function \(f(x)\) such that \(f(x)=e^{\int (1-n)p(x)\, dx}\). We multiply through by this to get \[ f(x)v'+p(x)(1-n)f(x)v=q(x)f(x)(1-n)\implies \left[ f(x)v\right]'=q(x)f(x)(1-n)\implies v(x)=\frac{1-n}{f(x)}\int q(x)f(x)\, dx,\] which is the solution to the Bernoulli equation.
{"url":"https://brilliant.org/wiki/bernoullis-equation/","timestamp":"2024-11-05T16:03:09Z","content_type":"text/html","content_length":"45196","record_id":"<urn:uuid:c308f6a0-e6d2-454d-94bb-c3bc54d80e1e>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00750.warc.gz"}
Module 5: Individual Demand and Market Demand Joel Bruneau and Clinton Mahoney • Define the concept of a market • Describe a demand curve • Derive market demand by aggregating individual demand curves • Explain movements along versus shifts of the demand curve • Calculate and interpret the price and income elasticity of a demand curve • Derive and interpret an Engel curve • Identify income effects and substitution effects that result from a change in prices • Identify compensating variations and equivalent variations that result from a change in prices Module 5: Individual Demand and Market Demand The Policy Question: Should Your City Charge More for Downtown Parking Spaces? Major cities have a significant number of parking spaces on public streets. In congested areas, such as downtowns, street parking is usually priced and limited in duration. Cities meter downtown parking for many reasons, including to raise revenue, ensure the frequent turnover of customers for local merchants, and ensure the availability of parking spot for those looking for parking. But how much should a city charge for parking in order to raise revenue or ensure available spots? The answer depends on understanding the demand for parking. Demand is a natural next topic after the consumer choice problem of maximizing utility among competing bundles of goods, which we studied in Module 4. We saw in Module 4 that the solution to the consumer choice problem gives us, among other things, the individual demand functions. These functions tell us how much the individual consumer will demand of each good in order to maximize utility for any set of prices and income. And by taking into account all of the individual demands, we can come up with an overall market demand for a good. “Does DC’s Pay By Phone Parking really work? A parking ticket dodging test” from Wayan Vota on Flickr is licensed under CC BY-NC-SA Exploring the Policy Question The main reason a city might want to set parking rates higher is to increase parking revenues or ensure enough available parking for those who drive downtown to conduct business—or both. For our policy question we’ll concentrate on the second objective: to ensure that customers who come downtown to shop will be able to find a place to park. In order to think about the policy question we need to know the answers to two related questions: what factors affect the demand for parking and how sensitive is the demand for parking to price changes? Answering these questions will provide insight into the demand for parking in general and allow us to answer the central policy question of whether the price for parking in the downtown area should be higher. 5.1 The Meaning of Markets LO 5.1: Define the concept of a market. 5.2 The Demand Curve LO 5.2: Describe a demand curve. 5.3 Summing Individual Demands to Derive Market Demand LO 5.3: Derive market demand by aggregating individual demand curves. 5.4 Movement along versus Shifts of the Demand Curve LO 5.4: Explain movements along versus shifts of the demand curve. 5.5 Price and Income Elasticity of Demand LO 5.5: Calculate and interpret the price and income elasticity of a demand curve. 5.6 The Engel Curve LO 5.6: Derive and interpret an Engel curve. 5.7 Income and Substitution Effects LO 5.7: Identify income and substitution effects that result from a change in prices. 5.8 Compensating Variation and Equivalent Variation LO 5.8: Calculate and interpret compensating variation and equivalent variation for consumers. 5.1 The Meaning of Markets LO 5.1: Define the concept of a market. Broadly defined, a market is a place where people go in order to buy, sell, or exchange goods and services. Markets can be physical or virtual, large or small, for one good or many. In order to understand and derive demand curves we need to specify the particular market we are studying. A market is always for an individual good at a specific price. We can define a collective good, like sandwiches, only if we can describe a single price—as opposed to distinct prices for cheese sandwiches, club sandwiches, and sub sandwiches, for example. There are many types of sandwiches, and therefore no one price exists. However, for a restaurant owner, the specific market for cheese sandwiches does not matter as much as how the market behaves for sandwiches in general. Just remember that when we are analyzing markets we are talking about a specific good with a specific price. A market for a good is also defined by a place and a time. For example, we could study the market for iPhones in the United States in 2014 or in California in May of 2014, or even in Cupertino on May 5, 2014. In order to talk reasonably about a quantity demanded, we have to know who is demanding the quantity and when. Sometimes the boundaries of a market are not entirely clear. Think about the market for telephone calls. Does this include wired home telephone service through dedicated wires (yes), telephone over coaxial cable (almost certainly), mobile phones (probably), voice over internet phones (maybe), texting (probably not)? We will return to this issue in future modules when we discuss market concentration, so for now just understand that market boundaries can sometimes be hard to define. A market must have: • A good specific enough to have a single price • A defined time or time period • A defined place LO 5.2: Describe and draw a demand curve. As we saw in Module 4, when we solve the consumer choice problem – that is we determine the optimal consumption bundle based on the current prices of the goods and the income of the consumer – we end up with a demand function. A demand curve is a graphical representation of the demand function that tells us for every price of a good, how much of the good is demanded. As we saw from deriving the demand function in Module 4, other factors help determine demand for a good, namely the price of the other good and the buyer’s income. Holding the price of the other good and buyer’s income constant and changing prices, the demand function describes the optimal consumption quantity for every price–that is, what quantity the individual will demand at every price. Figure 5.1 illustrates how the demand curve for coffee is derived from the consumer choice problem. In the upper panel, we see that as the price of coffee increases the consumer chooses to consume less and less coffee. In the lower panel we can plot the pairs of price and quantity of coffee consumed at that price to draw the demand curve. Figure 5.1: Deriving individual demand curves from consumer consumption choices In the top graph of 5.1 the price of coffee increases from P[1] to P[2] and then to P[3]. Since this is an increase of the price of the good, this consumer chooses to consume less coffee as price increases. The bottom graph plot the combinations of prices of coffee and quantities consumed (demanded) of coffee: P[1] and X[1], P[2] and X[2] and P[3] and X[3]. Note that by keeping the price and income of other goods constant we are simply changing the slope of the budget line and anchoring it at the vertical intercept. As an example, let’s consider Marco who eats burritos and pizza slices. Figure 5.2 describes his weekly consumption. The top panel shows his consumption bundles given the prices of burritos and pizza and his income. The bottom panel shows his demand curve. In the top panel, we see that Marco has a weekly income of $15 to spend on burritos and pizza slices. Originally, when the prices are $1.50 each, he consumes 5 burritos and 5 slices of pizza. When the price of pizza falls to $1, Marco adjusts his consumption to 6 burritos and 6 slices of pizza. When the price of pizza slices falls for a second time to $0.50, Marco adjusts his consumption to 7 burritos and 9 slices of pizza. For each price there is a unique solution to the consumer choice problem. By mapping the price and resulting optimal consumption pairs we can describe this individual’s demand curve for pizza slices, as shown in the bottom panel. At a price of $1.50 per slice, Marco demands 5. At a price of $1, Marco demands 6, and at a price of $0.50, Marco demands 9. We can plot these points on a graph that has price on the vertical axis and quantity of pizza slices on the horizontal axis. By connecting these price-quantity points with a line, we graphically approximate the demand curve. These points are the same as the price and quantity pairs you would get directly from the demand function when you hold the price of the other good and income constant. The difference is that the demand function is continuous and therefore has a continuous graph that we can draw precisely. For our purposes now, the approximate demand curve is sufficient. Figure 5.2 Marco’s consumption choices and demand curve for pizza slices Note that as the price of a good declines, the consumer consumes more and more of the good because, holding the other price and income constant, the consumer’s income goes farther. In our example, this means that Marco consumes more of both burritos and pizza slices. His demand curve exhibits the law of demand: As price decreases, quantity demanded increases holding other factors such as income and the price of other goods constant. It is worth noting that this ‘law’ has limits – it does not hold for certain types of goods, as we will discuss later. Figure 5.2.1 5.3 Summing Individual Demand Curves to Derive Market Demand LO 5.3: Derive market demand by aggregating individual demand curves. The demand curve we described for Marco’s weekly consumption of pizza slices is an individual demand curve. It tells us precisely how many slices of pizza Marco will demand for each price. But if you sell pizza slices in Marco’s neighbourhood, it does not tell you what the total demand for a week will be. In order to derive the market demand curve, we need to know the demand curve for every person in the neighbourhood. As noted in Section 5.1, the market we are describing is the weekly demand for pizza slices in Marco’s neighbourhood. We are implicitly assuming that all pizza slices are identical, and that Marco and other demanders of pizza will buy it in the neighbourhood. Suppose that there are only three people who eat pizza slices in the neighbourhood and one pizza vendor and that each customer has a demand curve for pizza slices identical to Marco’s. What is the correct way to think about the total or market demand for pizza? Consider what the pizza shop owner will experience: • When she sets the price at $1.50 she sees three customers who each demand 5 slices of pizza or a total demand of 5+5+5=15 slices of pizza. • When she lowers the price to $1, she sees a total demand of 6+6+6=18 slices of pizza. • When she lowers the price to $0.50, she sees a total demand of 9+9+9=27 slices of pizza. These are the three points on the total demand curve in Figure 5.3: Q=15 at P=$1.50 Q=18 at P=$1 Q=27 at P=$0.50 Figure 5.3 The Total Demand Curve for Pizza Slices By connecting the three points with a line, we can approximate the actual demand curve. The key takeaway from this simple example is that we are summing up quantities at every price. Keep this in mind as we move on to demand functions. What about making a total demand curve by aggregating demand functions? Let’s consider three simple individual demand functions for pizza for customer A, customer B and customer C: Pizza[A] = 150 – p Pizza[B] = 100 – p Pizza[C] = 100- 2p We want to sum them up to arrive at the total demand. Let’s start by graphing them, as shown in Figure 5.4. Figure 5.4 Summing Three Demand Functions We want total demand, which is the sum of all the quantities at every price or Pizza[TOTAL] =Pizza[A] + Pizza[B] + Pizza[C]. To get this, we can simply add up the left and right and sides of the equations above. If we did so we would get: Pizza[TOTAL] = 350 – 4p. Is this total demand function correct? Consider a price of $25. From the individual demands, we know that q[A] = 125, q[B] = 75, and q[C] = 50, so the total demand is 250. If we put p=$25 directly into our total demand function, we get: Pizza[TOTAL] = 350 – 4(25) = 250. Another way of thinking about total demand is to sum the three demands in the individual graphs horizontally, which reveals a problem with our total demand function. Notice that at a price of 75: Pizza[A] = 75, Pizza[B] = 25, and Pizza[C] = 0, so the total demand is 100. Note that demands cannot have a negative number. If we put p = $75 into the aggregate function we get: Pizza[TOTAL] = 350 – 4(75) = 50. In this case, we have not accounted for the fact that q[C] has stopped at zero. So we have not quite accurately described the total demand. Note that the graph of each individual demand function has a different vertical intercept. So we have to account for the case where a demand goes to zero. For customer C this is at p=$50, for customer B this is at p=$100, and for customer C this is at p=$150. It’s important to note that for prices above $50 there are only two consumers who still demand, consumers B and A. And for prices above $100, only consumer A demands. We can express the demand curve by the function: Pizza_{TOTAL} &= \\ 350-4p, \,for\,p &\leq 50 \\ 250-2p,\,for\,50 <p &\leq 100 \\ 150-p,\,for\,100&< p This demand function is a bit complicated but makes sense if you focus on the prices when demand goes to zero for each consumer. Figure 5.5 Total Demand Curve for Three Individual Demand Functions Graphing the total demand curve in Figure 5.5 reveals a kinked demand curve, but one that is downward sloping. When we examine a market with only a few potential customers, it is reasonable to expect kinks at the prices where potential customers turn into real customers. But it is easy to see that as we add more and more demand curves together, the individual kinks will become smaller and more frequent until. After we have added enough demand together, we get a smooth and downward sloping market demand curve. 5.4 Movements Along versus Shifts of the Demand Curve LO 5.4: Explain movements along versus shifts of the demand curve. The market demand curve describes the relationship between the price a good is offered at and the resulting quantity demanded. As the price of a good moves higher and lower, consumers demand different quantities of the good and the demand function and resulting graph change in response. But we know that variables other than price can affect the demand for a particular good, for example: • The income of consumers • The price of complements and substitutes A host of other factors might influence the consumer’s demand for a good as well, including advertising, the popularity of the good, and news stories about its health benefits or risks. How do these potential impacts conform to our observation that the demand curve is simply a description of the quantity demanded for any given price? This is where the concept of ceteris paribus comes in: This Latin phrase means, roughly, all other things remain the same. Here, ceteris paribus means that any demand curve that we graph is for a specific set of circumstances–we are considering only the price of the good and holding all the other factors that affect demand constant. Once we have a particular demand curve, we can observe what happens when a factor is allowed to change. • Changes in price cause movements along the demand curve. • Changes in factors other than the price of the good itself cause shifts of the demand curve. Understanding shifts versus movements of demand curves can be challenging, so try a few examples in the Your Turn that follows. Movement along the Demand Curve Figure 5.6 illustrates the market demand for jeans. We assume jeans are generic and leave unspecified the time and place of this market; these simplifications will make the analysis easier to follow. For a given and fixed set of factors – such as income, prices of other goods, and advertising – this demand curve describes the relationship between the price of jeans and the quantity of jeans demanded. We can see, for instance, that when the price of jeans increases from $40 to $50, the quantity demanded will decrease from 100,000 to 90,000. The change in price causes a movement along the demand curve. Figure 5.6 A price increase or decrease causes movement along the demand curve. Shifting the Demand Curve So what happens when factors other than price change? Let’s start by thinking about incomes. What would happen to demand if incomes increase, perhaps because the government sends everyone a tax rebate? It is reasonable to consider jeans a normal good, a good for which demand increases when incomes rise? How does this tax rebate affect our graph? When we say demand increases we mean that for every price we expect a greater quantity demanded after the income increase than before it. This means that at a given price, the quantity demanded increases, or shifts to the right, farther along the x-axis in our graph. In Figure 5.7, it is a parallel shift—the slopes of the original and revised demand curve remain the same–which might well happen. But in general we can only say that demand will shift rightward; the slope or shape of the graph could change after an income increase. Figure 5.7 An increase in income shifts the demand curve to the right. What about factors other than price, like the price of substitutes or an increase in advertising for jeans? Let’s try them both. Suppose the price of khaki pants −a substitute for jeans − falls. It is reasonable to expect that consumers looking for new pants might decide to buy khaki pants rather than jeans? We should expect a decrease in the quantity demanded of jeans at any given price. This decrease causes the demand curve to shift to the left. Figure 5.8 shows the leftward shift of the demand curve due to a fall in the price of a substitute – khaki pants. Suppose a jeans company hires a top model or celebrity to advertise its jeans. An increase in advertising for jeans should make potential consumers think more positively about jeans and increase their demand, which causes the demand curve to shift to the right. Figure 5.8 shows the rightward shift of the demand curve due to the new or increased advertising. Figure 5.8 Changes in other factors shift the demand curve right or left. For each change we have made in factors that affect demand, other than the price of the good, the demand curve shifted to the left or right. Here is a good rule to remember: Changes in the price of the good result in movements along the demand curve, while changes in any other factor that affects demand result in shifts of the demand curve. 5.5 Price and Income Elasticity of Demand LO 5.5: Calculate and interpret the price and income elasticity of a demand curve. Businesses, governments, and economists are interested in how sensitive demand is to changes in prices and income. For example, a city government might want to estimate the effect of a change in parking rates on its budget revenue. Because economists often like to compare markets across products, time, and units of measure, we prefer to have a measure of demand sensitivity that is unit-free. This means it doesn’t matter if we are talking about gallons of milk, liters of cola, pounds of flour, kilos of sugar, or pairs of socks. Similarly, we don’t want to worry about talking in 2014 Dollars, 1950 Dollars, or 1975 Pounds. The way we measure demand sensitivity in a unit free way is to measure price elasticity of demand, which is the percentage change in the quantity demanded of a product resulting from a 1-percent change in price. The price elasticity of demand is defined mathematically as: [latex]E=\frac{Percentage\,change\,in\,quantity\,demanded}{Percentage\,change\,in\,price}=\frac{\Delta Q/Q}{\Delta P/P'}[/latex] Δ (delta) indicates a change in a variable ∆Q refers to a change in a quantity demanded ∆P refers to a change in price Note that ∆Q/Q is the definition of a percentage change. Let’s look at a numerical example. Suppose demand for lemonade from a lemonade stand on a busy street corner increases from 100 cups a day to 110 cups a day when the price decreases from $2 to $1.90. The change in quantity, ∆Q, is an increase in 10 cups or +10. The change in price, ∆P, is a decrease of $0.10 cups or -0.10. So, we have: [latex]\frac{\Delta Q/Q}{\Delta P/P}=\frac{10/100}{-0.10/2}=\frac{0.1}{-0.05}=-2[/latex] Note that the elasticity we calculated is negative. Price elasticity of demand is usually negative, reflecting the law of demand: As price decreases, quantity demanded increases. Note also that the elasticity we calculated does not indicate what units the quantities are measured in or what currency or time the prices were measured in. None of that matters because we are only dealing with percentage changes. How do we interpret the -2 value of elasticity? It tells us by how much quantity demanded changes when the price increases by 1%. But wait, didn’t prices change by 5%? Yes, but the elasticity says that for every 1% increase in price we should see (multiply by the elasticity) a -2% change in quantity demanded. Table 5.1 Classifying Values of Price Elasticity of Demand Value of Price Elasticity of Demand, ε Description of Demand Interpretation 0 Perfectly Inelastic Quantity demanded does not change at all with price From 0 to -1 Inelastic Quantity demanded is relatively unresponsive to price changes -1 Unit elastic Percentage change in quantity demanded is equal to percentage change in price From -1 to – Elastic Quantity demanded is relatively responsive to price changes – Perfectly Elastic Quantity demanded goes to zero with any increase in price, and goes to infinity with any decrease in price. Figure 5.8.1 shows demand curves that are more and less elastic as well as demand curves that are perfectly inelastic and perfectly elastic. Notice that the slope of the elastic demand curve is less steep than that of the inelastic demand curve, as it takes a small change in P (price) to induce a large change in Q (quantity) relative to the inelastic demand. However, note that the elasticity is constantly changing along the demand curves so to compare elasticity across demand curves you have to compare them at the same price. (see Figure 5.8.2). Figure 5.8.2 Comparing elasticity across two demand curves We can use elasticity to measure other demand sensitivities, such as sensitivity to a change in the price of another good or to a change in income. cross-price elasticity of demand is the percentage change in the quantity demanded of a product resulting from a 1-percent change in the price of another good. It is defined mathematically as: [latex]E_{x,y}=\frac{Percentage\,change\,in\,quantity\,demanded\,of\,good\,x}{Percentage\,change\,in\,price\,of\,good\,y}=\frac{\Delta Q_x/Q_x}{\Delta P_y/P_y}[/latex] The cross-price elasticity of demand describes the sensitivity of demand for one good to a change in the price of another good. From this formula, we can quickly determine if two goods are complements or substitutes. A positive value of cross-price elasticity of demand means that an increase in the price of one good will increase the demand for the other. This indicates the two goods are substitutes: Consumers will substitute more of the other good as the price of the first good increases. Examples of substitutes are Coke and Pepsi, Apple computers, and PCs and private cars and public transportation. Similarly, if the cross-price elasticity of demand is negative, the goods are complements: Since they are consumed together, raising the price of one raises the price of the combination, so consumers cut back on both. Examples of complements are tortilla chips and salsa, iPods and headphones, and tennis rackets and tennis balls. Economists are often interested in seeing how a change in income affects demand, for instance, how a government tax rebate will affect the demand for used economics text books. The income elasticity of demand is the percentage change in the quantity demanded for a product from a 1 percent change in income. Income elasticity of demand is defined mathematically as: [latex]E_I=\frac{\%\,change\,in\,quantity\,demanded}{\%\,change\,in\,income}=\frac{\Delta Q/Q}{\Delta I/I}[/latex] Suppose that your income increases from $1,000 a month to $1,200 a month and your demand for songs on iTunes changes from 10 a month to 12 a month as a result. Let’s find the income elasticity of [latex]\frac{\Delta Q/Q}{\Delta I/I}=\frac{2/10}{200/1,000}=\frac{0.2}{0.2}=1[/latex] Note that this value of 1 indicates that the income elasticity of your demand for iTunes is unit elastic. That is, the quantity response is proportional to the change in income. A positive value of income elasticity of demand indicates a normal good, a good for which the quantity demanded increases as income rises. We call goods for which the quantity demanded falls as income rises inferior goods. Inferior goods have a negative value of income elasticity of demand. Figure 5.9 shows an example of an inferior good: as income rises the quantity of ramen consumed Figure 5.9: Demand for an Inferior Good Calculus: Appendix: Elasticity Formulas Note that we can re-write the formulas for price and income elasticity of demand as: [latex]E=\frac{\Delta Q}{\Delta P}\frac{P}{Q},E_{x,y}=\frac{\Delta Q_x}{\Delta P_y}\frac{P_y}{Q_x}[/latex] and [latex]E_I=\frac{\Delta Q}{\Delta I}\frac{I}{Q}[/latex]. Since the derivative gives the instantaneous rate of change, [latex]\frac{\Delta Q}{\Delta P}[/latex] can be expressed as the derivative [latex]\frac{dQ}{dP}[/latex]. Therefore, the elasticity formulas can be written as: [latex]E=\frac{dQ}{dP},E_{x,y}=\frac{dQx}{dPy}\frac{P_y}{Qx}[/latex] , and [latex]E_I=\frac{dQ}{dI}\frac{I}{Q}[/latex] . EXAMPLE: If the demand function is: [latex]Q_D=100-\frac{1}{2}P[/latex] , the price elasticity of demand is: From the laws of differentiation we know [latex]\frac{dQ}{dP}[/latex] is [latex]-\frac{1}{2}[/latex] Therefore, [latex]\frac{dQ}{dP}\frac{P}{Q}\,is\,-\frac{1}{2}\frac{P}{Q}[/latex]. By substituting 100 – ½P for Q we get an expression entirely in P: [latex]E=-\frac{1}{2}\frac{P}{100-\frac{1}{2}P}[/latex]. Now, if we want to know where the demand curve is unit elastic we can set the elasticity to 1 and solve (note that we can drop the negative sign from the ½ since we are interested in the absolute value). [latex]1=\frac{1}{2}\frac{P}{100-\frac{1}{2}P}[/latex]. Solving this for P yields: 200-P=P or P=100. 5.6 The Engel Curve LO 5.6: Derive and interpret an Engel curve. Demand curves show the relationship between the price of a good and optimal consumption of that good. We can also identify the relationship between the income of the consumer and optimal consumption of a good. This relationship is called an Engel curve, named after the German statistician Ernst Engel. We assume that income is the independent variable that influences purchases. If we use figure 5.9 above, we can illustrate the Engel curve for ramen noodles. We start with income level 1 where the optimal choice of ramen noodles is Q1. This corresponds to point A in Figure 5.10 below. If we raise income to level 2, consumption of ramen falls to Q2 (point B). The downward sloping Engel curve tells us that ramen noodles are an Inferior Good. If we raise income even more to level 3, consumption falls even more (to point C). Figure 5.10: Engel Curve for an Inferior Good In general, Engel curves can be any slope and can change slope. See Figure 5.11 below. Figure 5.11: Engel Curve with Changing Slope at Different Income Levels Consider the intake of calories from food. We know that if very poor households are given more income, they will spend more on food and so take on more calories. So for very low incomes, caloric intake is a normal good. But at higher incomes, it is entirely possible that households will lower caloric intake as they switch to more expensive foods, like fruits and vegetables, and away from high calorie fatty foods and carbohydrates. However, it is again possible that there is a minimum sustainable caloric intake that, once a household attains a certain income, will stay at that level. Hence, the Engel curve can change slopes depending on the nature of the good in question. However, for most utility functions we will use, the Engel curve is upward sloping – reflecting that goods are normal. For instance, if [latex]U(x,y)=min[2x,y][/latex], then optimal consumption of goods x and y are [latex]x=\frac{M}{(P_x+2P_y)},\,and\,y=\frac{2M}{(P_x+2P_y)}[/latex]. Figure 5.12: Income Offer Curve (Income Expansion Path) and Engel Curve for Perfect Complements The slope of the Engel curve for good x is [latex]slope=\frac{\Delta x}{\Delta M}=\frac{1}{(P_x+2P_y)}[/latex], which is positive. Notice the higher the price of either good, the less responsive is consumption to changes in income. For instance, changes in Px or Py will rotate the Engel curve for good x downwards. Note however, the elasticity of consumption to change in income is +1. That is, a 1% rise in income leads to a 1% rise in consumption of both x and y. [latex]E_M=\frac{\frac{\Delta x}{x}}{\frac{\Delta M}{M}},\,or\,E_M=\frac{\frac{\Delta x}{\Delta M}}{\frac{M}{x}}[/latex]. Substituting the slope above gives us [latex]E_M=\frac{1}{(P_x+2P_y)}\frac{M}{\frac{M}{(P_x+2P_y)}}=+1[/latex]. This result makes sense. Since we are dealing with perfect complements, we can construct a composite good z made up of one unit of x and two units of y. Optimal consumption of the composite good is [latex]z=\frac{M}{P_z}[/latex]. So, if income doubles, consumption of the composite good will double. Therefore, consumption of x and y will also double. 5.7 Income and Substitution Effects LO 5.7: Identify income and substitution effects that result from a change in prices. Recall that the consumer choice problem is what gives us the individual demand curve. In Figure 5.10 changes in the price of jeans leads to different consumption choices of both jeans and khakis. We can see precisely how the fall in the price of jeans from $50 to $40 leads to the increase in demand for jeans. In the market, this increase was from 90,000 pairs of jeans to 100,000 pairs of jeans, but now we are back to an individual consumer who goes from purchasing 9 jeans to purchasing 10 jeans, let’s say in a year (yes, they buy a lot of jeans!). Figure 5.13 Change in Consumption of Jeans in Response to a Price Change Notice in the graph that after the price of jeans falls (again, assuming everything else held constant) the consumer increases not only the consumption of jeans but also their consumption of khakis. This result occurs because the real value of consumer income has increased; under the new prices the consumer can afford more of both goods. We call this an income effect: an increase in real income means the consumer purchases more of normal goods. Formally the income effect is in change in consumption of a good resulting from a change in a consumer’s income holding prices constant. But something else has happened as well: the relative prices of the two goods have changed. Jeans are now relatively cheaper than they used to be compared to khakis. This change is a substitution effect – the change in relative prices means that consumers naturally consume more of the now relatively cheaper good compared to the now relatively more expensive good. Formally the substitution effect is the change in consumption of a good resulting from a change in its price holding the consumer’s utility level constant. A question economists often ask is how much of the resulting change in consumption of jeans, what we call the total effect of the price change, is due to the income effect and how much is due to the substitution effect? They care about this because understanding how much of a change is due to each allows us to better predict the effect of changes in demand from changes in income and prices in the future. The easiest way to think about an answer is with a thought experiment: what if we could adjust the consumer’s nominal income after the change in price so that in the end they are no better or worse off? How would this income adjustment look on the consumer choice graph? Let’s start with the condition that the consumer is no better or worse off after the change in price than before it. This means that, by definition, the consumer must still be consuming on the same indifference curve. So, after the change in price we adjust the nominal income such that the new budget line just touches the old indifference curve. Remember that changes in nominal income lead to shifts in the budget line but do not change the slope. If we shift the new budget line back to where it just touches the indifference curve, we can find the bundle that the consumer would choose under these hypothetical circumstances. By comparing the original consumption bundle to the hypothetical one, we can isolate the substitution effect. The change in jeans consumption in this case has nothing to do with income (in the sense that the consumer is just as well off) and everything to do with the change in the price of jeans. As long as the indifference curve conforms to the three basic properties we require to be well-behaved indifference curves discussed in Module 1 the substitution effect will always act in the same way. When relative prices change, consumers will substitute away from the good that has become relatively more expensive and toward the good that has become relatively less expensive. Figure 5.13 b – Decomposing Consumption Change into Income and Substitution Effects. The income effect, however, may be either positive or negative depending on whether the good is a normal or inferior good. Recall that for an inferior good an increase in income leads to a decrease in consumption. The total effect can still be positive even for the case of an inferior good if the substitution effect is larger than the income effect. Only in the case of an inferior good whose income effect is larger than the substitution effect do we get a Giffen good: a good for which a decrease in price leads to a decrease in consumption (or an increase in price leads to an increase in consumption). (Figure 5.14). Consider the case of a family that likes to eat salmon (expensive) and potatoes (relatively cheap) every meal. On a fixed budget they might start out with very small pieces of salmon and large portions of potatoes to meet their daily calorie needs. Now what if their income stays the same but the price of potatoes decreases? Their fixed income can now buy more of both goods, but they don’t need more calories. They would prefer instead to have larger portions of salmon and smaller portions of potatoes. So as their real income goes up due to the drop in the price of potatoes, they consume more salmon and fewer potatoes. For this family, potatoes are a Giffen good. Figure 5.14: Potatoes as a Giffen good 5.8 Compensating Variation and Equivalent Variation LO 5.8: Calculate and interpret compensating variation and equivalent variation for consumers. When prices change, consumers re-optimize and choose new bundles that reflect changes in relative prices. If a price of a good rises, they tend to substitute away from it and towards the other good (s) – unless the goods are perfect complements. Regardless of their new choices, utility will fall as long as they had consumed the good. Conversely, falling prices will raise utility. We want to measure the change in utility so that we can say something about consumer welfare. The problem is that utility functions only represent underlying preferences. Utility functions are ordinal – they can only rank bundles – and not cardinal. Thus, a doubling of utils does not mean welfare is doubled. All we can say about a change in utility is that it rises or falls. We can illustrate this using Cobb-Douglas utility. Suppose we have [latex]U(A,B)=A^\frac{1}{2}B^\frac{1}{2}[/latex] and [latex]U(A,B)=A^2B^2[/latex]. Both utility functions represent the same preferences since their indifference curves are identical. So changing the utility function really has no impact on optimal choices (you can check this). Now suppose we go from an optimal choice of (10, 10) to (12, 8). With [latex]U(A,B) =A^\frac{1}{2}B^\frac{1}{2}[/latex], utility falls from 10 to 9.79, or about 2%. With [latex]U(A,B) = A^2B^2[/latex], utility falls from 10,000 to 9,216, or about 7%. Because there are many utility functions that represent a specific preference, we cannot use changes in utils as a measure of changes in welfare. However, we can use the idea of substitution effects and income effects from section 5.7 to find a measure that captures the impact of a change in price(s) and get around this problem. We do so by expressing changes in outcomes in dollar terms by identifying changes to income that would have similar effects as a change in price. Consider figure 5.10. The consumer starts at an initial consumption bundle B[1] given income and prices. Now allow the price of good A to fall. This rotates out the budget line and the consumer responds by moving to the new bundle at B[2]. In this case, consumption of good B falls and consumption of good A rises. Since the final bundle is on a higher indifference curve U[2], we know that utility has risen. To calculate the increase in welfare we can ask a hypothetical question: How much would we have to change income so that, after a change in price(s), the consumer is no better and no worse off then when we started? That is, with the new prices, we need to adjust income so that we are back on the original indifference curve U[1]. This change in income is called a Compensating Variation (CV) since it compensates income for changes in prices. If prices fall, the CV is negative since we would decrease income to compensate. If prices rise, the CV is positive since we would increase income to compensate. EXAMPLE: Suppose we have [latex]U(A,B)=A^\frac{1}{2}B^\frac{1}{2},\,with\,M=$200,\,P_A=$10,\,and\,P_B=$10[/latex]. Now suppose P[A] falls to $8. STEP 1: We start at the original optimum: [latex]A_1=\frac{1}{2}\frac{M}{P_A}=\frac{1}{2}\frac{200}{10}=10,\,and\,B_1=\frac{1}{2}\frac{M}{P_B}=\frac{1}{2}\frac{200}{10}=10,\,with\,U_1=10.00[/latex]. STEP 2: The new optimum is [latex]A_2=\frac{1}{2}\frac{M}{P_A}=\frac{1}{2}\frac{200}{8}=12.5,\,and\,B_2=\frac{1}{2}\frac{M}{P_B}=\frac{1}{2}\frac{200}{10}=10,\,with\,U_2=11.18[/latex]. Hence the fall in P[A] raises utility. STEP 3: We want to express utility as a function of income and prices using the demands for A and B. We know from the consumer’s problem that [latex]A=\frac{1}{2}\frac{M}{P_A},\,and\,B=\frac{1}{2}\frac{M}{P_B}[/latex]. Substitute this into the utility function [latex]U(A,B)=A^\frac{1}{2}B^\frac{1}{2}[/latex] to get This is called indirect utility. It is the maximum utility the consumer can attain given income and prices. It shows that utility rises with income but falls with prices. STEP 4: Using the indirect utility function, we can find the income require to obtain U[1] but with the new prices P[A]=$8 and P[B]=$10. This gives us [latex]M=(2)(10)(8*10)^\frac{1}{2}=178.86[/latex] Check: [latex]A=\frac{1}{2}\frac{M}{P_A}=\frac{1}{2}\frac{178.86}{8}=11.18,\,and\,B=\frac{1}{2}\frac{178.86}{10}=8.94[/latex]; substitute [latex]U(A,B)=A^\frac{1}{2}B^\frac{1}{2}=(11.18)^\frac{1}{2}(8.94)^\frac{1}{2}=10,\,OK[/latex]. STEP 5: Compensating Variation is the change in income required to move us back to the original indifference curve but with the new prices. [latex]CV=178.86–200=-21.14[/latex]. Hence the fall in price can be fully compensated (offset) by a fall in income of $21.14. A fall in the price of Good A raises utility. But a rise in income would also raise utility. So, an alternative question is: How much would we have to change income so that, if the prices had not changed, the consumer could attain the level of utility that they would have achieved with the price change? Instead of changing prices, we change income to get an equivalent effect. This change in income is called an Equivalent Variation (EV) since the change in income is equivalent to the changes in prices. If prices fall, the EV is positive since we would increase income to have the equivalent effect. If prices rise, the EV is negative since we would decrease income to make it equivalent. EXAMPLE: Suppose again we have [latex]U(A,B)=A^\frac{1}{2}B^\frac{1}{2},\,with\,M=$200,\,P_A=$10,\,and\,P_B=$10[/latex]. Again suppose P[A] falls to $8. STEP 1: We start at the original optimum. [latex]A_1=\frac{1}{2}\frac{M}{P_A}=\frac{1}{2}\frac{200}{10}=10,\,and\,B_1=\frac{1}{2}\frac{M}{P_B}=\frac{1}{2}\frac{200}{10}=10[/latex], with U[1] = 10.00 STEP 2: The new optimum is [latex]A_2=\frac{1}{2}\frac{M}{P_A}=\frac{1}{2}\frac{200}{8}=12.5,\,and\,B_2=\frac{1}{2}\frac{M}{P_B}=\frac{1}{2}\frac{200}{10}=10[/latex], with U[2] = 11.18. Hence the fall in P[A] raises utility. STEP 3: Using the indirect utility function, we can find the income required to obtain U[2] but with the old prices P[A] = $10 and P[B] = $10. [latex]U(M,P)=\frac{1}{2}\frac{M}{(P_AP_B)^\frac{1}{2}},\,hence\,M=2U(P_AP_B)^\frac{1}{2}[/latex], where U = 11.18. This gives us [latex]M=(2)(11.18)(10*10)^\frac{1}{2}=223.6[/latex] Check: [latex]A =\frac{1}{2}\frac{M}{P_A}=\frac{1}{2}\frac{223.6}{10}=11.18,\,and\,B=\frac{1}{2}\frac{223.6}{10}=11.18,\,substitute\,into\,U(A,B)\,gives\,U=11.18,\,OK[/latex] STEP 4: Equivalent Variation is the change in income required to achieve the new indifference curve but with the old prices [latex]EV=223.6–200=+23.6[/latex]. Hence the fall in price is equivalent to a rise in income of $23.6. Notice that the CV and EV are opposite signs but of similar magnitude. Review: Topics and Related Learning Outcomes 5.1 Defining markets LO 5.1: Explain the concept of a market. 5.2 The Demand Function LO 5.2: Describe a demand function. 5.3 Summing Individual Demands to Derive Market Demand LO 5.3: Explain how individual demands are aggregated to find market demand. 5.4 Movement along, and shifts of, the demand curve LO 5.4: Explain what causes movements along, and what causes shifts of, demand curves. 5.5 Price and Income Elasticity of Demand LO 5.5: Determine the price and income elasticity of a demand curve and recognize normal and inferior goods. 5.6 The Engel Curve LO 5.6: Derive and interpret an Engel curve. 5.7 Income and Substitution Effects LO 5.7: Identify income and substitution effects from a change in prices and recognize a Giffen good. 5.8 Compensating Variation and Equivalent Variation LO 5.8: Calculate and interpret compensating variation and equivalent variation for consumers. Learn: Key Terms and Graphs Cross-Price Elasticity of Demand Movement along the demand curve Income and substitution effects Supplemental Resources These videos from the YouTube channel ‘Department of Economics’ may be helpful. Policy Example: Should Your City Charge More for Downtown Parking Spaces? Learning Objective: Use elasticity to determine how much city officials should charge for parking. Parking Meter Parking Pay Coins San Diego Park Fee on maxpixel.net is licensed under CC0 The city of San Francisco would like to encourage shoppers and clients of local businesses and other day-trippers to visit the downtown area. City officials know that one main factor in people’s decision to come downtown is the availability of parking. The city controls the surface parking spots, the curb parking, and charges for parking through the use of meters, so officials know they can use the price of parking to change the number of shoppers and clients who visit through the law of. • Price the parking too low, and day-trippers will increase their quantity demanded of parking spaces. Potential shoppers and clients will become discouraged by their inability to find parking. • Price the parking too high, and day-trippers will decrease their quantity demanded of parking spaces. When too few day-trippers come downtown, the local merchants will complain about lack of customers (even though the parking is plentiful!). Understanding the nature of the demand for parking enables the city to get the pricing just right. By knowing the price elasticity of demand for parking in the downtown area, San Francisco officials can predict how much the quantity demanded will fall if they raise rates and, conversely, how much it will rise as rates decrease. Understanding the demand function allows city planners to anticipate changes in demand from other factors, including incomes, weather, and days of the week. In the days of mechanical coin-operated meters, cities were forced to pick one price. With modern electronic technology, cities can become much more flexible with pricing. Take, for example, San Francisco’s dynamic pricing scheme. In San Francisco, smart meters can adjust parking rates from as low as $0.25 and hour to $6 an hour. In times of high demand the rates will rise for local parking sports and in times of low demand rates will fall. This is the mission statement from SF Park, the local bureau in charge of setting rates: SFpark charges the lowest possible hourly rate to achieve the right level of parking availability. In areas and at times where it is difficult to find a parking space, rates will increase incrementally until at least one space is available on each block most of the time. In areas where open parking spaces are plentiful, rates will decrease until some of the empty spaces fill. (Emphasis mine) [http://sfpark.org/how-it-works/pricing/] Let’s take a simple hypothetical example. Suppose the City of San Francisco has exactly 1,000 one-hour parking spaces available over 100 city blocks. City planners have estimated the hourly demand for parking (Q[D]) as 3,000-600P, where P is the hourly price of parking. Suppose that city planners have determined that to reduce congestion and pollution from drivers searching for available parking the optimal hourly demand for parking is 900 per hour (averaging one free space per block). What should they charge per hour of parking? The answer is $3.50. We find this price by setting Q[D ] at 900 and solving for P. Since the precise demand curve is difficult to ascertain, a more realistic example is one where the city has a pretty good idea of the price elasticity of demand for hourly parking. Suppose the city estimates that the elasticity at current prices is close to 1. Currently the city charges $2 an hour, and the demand for spots is 1,000 hours. The city’s goal is to decrease this demand to 900 hours. Since a 100-spot reduction in demand represents a 10% decrease and the elasticity is 1, the city knows that a 10% increase in price should accomplish the goal. The city should therefore increase parking rates to $2.20 an hour. Exploring the Policy Question The demand for hourly parking in Cleveland, Ohio is given by the demand function: Q[D] = 3,600-400P Suppose Cleveland has 1,600 hourly spots available. What should Cleveland charge for hourly parking if the goal is to keep 10% of the spots open at any given time in order to encourage shoppers, clients and day-trippers to come to the downtown core. ___ per hour 10% of 1,600 is 160. So Cleveland city planners want hourly demand to be: Q[D] = 1,460 (or 1,600-140). Therefore, 1,460 = 3,600 – 400P. 400P = 2,140 or P = 2,140/4,100 or P = $5.35 an hour. Candela Citations CC licensed content, Original CC licensed content, Shared previously a physical or virtual place where people go in order to buy, sell, or exchange goods and services a graphical representation of the demand function that tells us for every price of a good, how much of the good is demanded the principal that as price decreases for normal goods, quantity demanded increases holding other factors such as income and the price of other goods constant the Latin phrase meaning all other things remain the same a good for which demand increases when incomes rise is the percentage change in the quantity demanded for a product from a 1 percent change in price is the percentage change in the quantity demanded of a product resulting from a 1-percent change in the price of another good is the percentage change in the quantity demanded for a product from a 1 percent change in income are goods for which the quantity demanded falls as income rises expresses how the optimal consumption of a good changes as a consumer's income changes is in change in consumption of a good resulting from a change in a consumer’s income holding prices constant is the change in consumption of a good resulting from a change in its price holding the consumer’s utility level constant is the sum of the income effect and the substitution effect a graph of all of the combinations of bundles that a consumer prefers equally indifference curves that have the following graphical properties: (1) They are downward sloping; (2) They do not cross; (3) They are bowed in to the origin. a good for which a decrease in price leads to a decrease in consumption (or an increase in price leads to an increase in consumption) means that utility functions only rank bundles – they only indicate which one is better, not how much better it is than another bundle is a less realistic (than ordinal) theory of utility where the size of the utility difference between two bundles of goods has some sort of significance are the units of measurement for utility is the change in income required after a change in price(s) to attain the same level of utility as before the price change(s) is the change in income required to attain the utility achieved after a change in price(s) if the price(s) had never changed
{"url":"https://openpress.usask.ca/econ211/chapter/module-5/","timestamp":"2024-11-07T03:38:13Z","content_type":"text/html","content_length":"235234","record_id":"<urn:uuid:4a3f0be7-4d76-4cde-8194-eced77b971b3>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00206.warc.gz"}
seminars - ynamics on PGL_2 over a field over Laurent series We explore the dynamics of the two by two projective general linear group with coefficients in a field of formal series over a finite field, focusing on the right translation of diagonal elements and unipotent elements on the quotient by the modular group whose coefficients are polynomials over the same finite field. The correspondence between the dynamics on the space of cosets and the set of ordered triple points will be given. We also discuss some applications to number theory as well as the effective rigidity of the Haar measure.
{"url":"https://www.math.snu.ac.kr/board/index.php?mid=seminars&l=en&page=68&sort_index=Time&order_type=desc&document_srl=803422","timestamp":"2024-11-10T05:56:28Z","content_type":"text/html","content_length":"47337","record_id":"<urn:uuid:7df2c1c6-5d41-4db6-8aad-ca351d786c78>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00019.warc.gz"}
Maximum Twin Sum of a Linked List Published: Sep 12, 2022 For a linked list problem, brute force solution might be easy to come up. Save the value in an array while going over the linked list to the end. Do something necessary using values in the array. However, another linked list specific solution exists – reverse the linked list. Sometime, first half of the linked list should be reversed. Then, do something while using values of revered and not reversed latter half. Problem Description In a linked list of size n, where n is even, the i-th node (0-indexed) of the linked list is known as the twin of the (n-1-i)th node, if 0 <= i <= (n / 2) - 1. □ For example, if n = 4, then node 0 is the twin of node 3, and node 1 is the twin of node 2. These are the only nodes with twins for n = 4. The twin sum is defined as the sum of a node and its twin. Given the head of a linked list with even length, return the maximum twin sum of the linked list. □ The number of nodes in the list is an even integer in the range [2, 10**5]. □ 1 <= Node.val <= 10**5 Example 1: Input: head = [5,4,2,1] Output: 6 Nodes 0 and 1 are the twins of nodes 3 and 2, respectively. All have twin sum = 6. There are no other nodes with twins in the linked list. Thus, the maximum twin sum of the linked list is 6. Example 2: Input: head = [4,2,2,3] Output: 7 The nodes with twins present in this linked list are: - Node 0 is the twin of node 3 having a twin sum of 4 + 3 = 7. - Node 1 is the twin of node 2 having a twin sum of 2 + 2 = 4. Thus, the maximum twin sum of the linked list is max(7, 4) = 7. Example 3: Input: head = [1,100000] Output: 100001 There is only one node with a twin in the linked list having twin sum of 1 + 100000 = 100001. The solution here reverses the first half of the linked list. To reverse half, it uses slow and fast pointers. When the fast pointer reaches to the end, first half is reversed. While incrementing the reversed and slow pointers, it compares the sum of two values. Once the reversed linked list pointer reaches to the end, all are checked. # Definition for singly-linked list. # class ListNode: # def __init__(self, val=0, next=None): # self.val = val # self.next = next class MaximumTwinSumOfALinkedList: def pairSum(self, head: Optional[ListNode]) -> int: fast, slow, rev = head, head, None while fast and fast.next: fast = fast.next.next rev, rev.next, slow = slow, rev, slow.next max_v = 0 while rev: max_v = max(max_v, rev.val + slow.val) rev, slow = rev.next, slow.next return max_v
{"url":"https://yokolet.com/algo/linked_lists/2022-09-12-maximum-twin-sum-of-a-linked-list","timestamp":"2024-11-13T21:38:01Z","content_type":"text/html","content_length":"72316","record_id":"<urn:uuid:e7e21fd7-c597-4993-8152-4a2fce236175>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00338.warc.gz"}
Arsenal collection: Half Time What could the half time scores have been in these Arsenal matches? When Arsenal played Aston Villa in the 2010/11 Premier League, the final score was $4 - 2$. What could the half time score have been? Can you find all the possible half time scores? How will you make sure you don't miss any out? Later in the season Arsenal played Tottenham Hotspur. The final score was a draw, $3 - 3$. Can you find all the possible half time scores for this match? Getting Started What could the score have been if Arsenal hadn't scored any goals by half time? What could the score have been if Arsenal had scored one goal by half time ...? Teachers' Resources Why do this problem? invites pupils to develop a systematic way of working and offers the opportunity for discussion in pairs, small groups and the whole class. Possible approach This problem could be introduced in the context of any football team or match. The added appeal of a sport they enjoy and team they support may help many pupils engage in the solving of the problem. Invite pupils to guess what the half time score was to the match. Take a few suggestions and then ask them to try and find all the possibilities. Give time for pupils to work in pairs on the task and look out for those children who are beginning to work in a systematic way. After a suitable length of time, draw the whole group together and ask them how they are making sure they don't miss out any possibilities. You may wish to ask certain pairs to share their ways of working with the whole group. It might be handy to suggest that each different possibility is written on a separate strip of paper as this might aid later discussions. Give them longer to work on the problem then bring everyone together once more to discuss findings. You could ask each pair how many different possible scores they think there are - they are unlikely to agree! This is where having the scores written on strips is useful as you can stick them on the board, or ask members of the class to hold them, then invite everyone to sort them or re-order them. In this way, a system is imposed on the scores and any missing ones can be identified quickly. You can then challenge pairs to find the possibilities for the $3 - 3$ match, using a similar system. The experience of working on the $4 - 2$ result all together should give them more confidence to tackle the second match in their pairs. Key questions How do you know that may have been a half time score? How can you be sure that you have found ALL the possible half time scores? Suppose the final score was a draw, what then? Possible extension You could ask "If there are $24$ possible different half time scores what could the final score have been?". Possible support Some pupils may prefer to start with games where there are fewer goals, for example $0 - 1$, $1 - 0$, $1 - 1$ etc so that there are fewer possible half time scores.
{"url":"https://nrich.maths.org/problems/arsenal-collection-half-time","timestamp":"2024-11-05T13:18:48Z","content_type":"text/html","content_length":"42169","record_id":"<urn:uuid:4639fd4f-1c2c-410d-943f-0c645f3779ad>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00660.warc.gz"}
How to find the square root of a number How to find the square root of a number? The square root of a number is the inverse operation of the square root of a number. The square of a number is the value obtained by multiplying a number by itself, whereas the square root of a number is the factorization of the number that is multiplied by itself to give the original number. If ‘a‘ is the square root of ‘b’, it means that a × a = b. The square of any number is always a positive number, so every number has two square roots, one for a positive value and one for a negative value. For example, 2 and -2 are both square roots of 4. However, in most places, only positive values are written as the square root of a number. Follow the steps given below to find the square root of a number. How to find square roots in excel? 1. Calculate a square root. The syntax for the SQRT() function is : To this function, you only need to pass the number argument, which is the number for which a square root needs to be found. This can be any positive number or cell reference to the location of the data in the worksheet. 2. Compute an nth root. Use the POWER() function to calculate any original value : For the POWER() function, you will supply both the number and its exponent as arguments. To calculate a root, simply supply an inverse exponent—for example, a square root is 1/2. The power() function is useful for both powers and exponents. for example : yields 16, whereas : Yields 16, which is the square root of 256. Roots are the inverse of powers. Related: copy and paste square root 3. Find Cube Root in Excel. To calculate the cube root of a number in Excel, use the caret operator (^) in a simple formula with 1/3 as the exponent. In this example, the formula =D3^(1/3) is used to find the cube root of 216, which is 6. 4. Find the roots of imaginary numbers. Excel provides IMSQRT() and EMPOWER() functions to return the roots and powers of imaginary numbers. The syntax of these functions is the same as the real-number versions.
{"url":"https://computersolve.com/how-to-find-the-square-root-of-a-number/","timestamp":"2024-11-09T04:15:03Z","content_type":"text/html","content_length":"133725","record_id":"<urn:uuid:d474c6ac-fca2-45f5-ace0-03056acf6416>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00794.warc.gz"}
How To Calculate The Decision Tree Loss Function? - Buggy Programmer If you are stuck with the measurement of the decision tree loss function in your algorithm, you are at the right place. Decision trees are a very simple but very powerful type of supervised machine learning algorithm that is used to categorize or make predictions based on how a previous set of questions or choices were answered. This model is a form of supervised learning where the data is trained and then tested on a separate set of data that contains the desired categorization. As suggested, they are used for categorization or Classification problems generally but can also be applied to Regression problems. Taking a Decision Tree beyond its capability, one can opt to make a Random Forest model which can help you to select the best features of your dataset as well. Read more here: Random Forest-Based Feature Selection Note: A Decision Tree may not always provide a clear-cut answer or decision to the data scientist but, it may present options so the team can make an informed decision on their own. The idea of machine learning carries here as decision trees imitate human thinking and so thereby making it easy to understand and interpret the results provided by the algorithm. How does a decision tree work? As the name suggests a decision tree works just like a tree with branches. In Decision Trees, The foundation of the tree or the base is the root node. Then from there flows a series of decision nodes that represent choices or decisions to be made. These choices or decision nodes are called leaf nodes which represent the result of the decisions. The decision node represents a split point, and leaf nodes that stem from a decision node represent the possible answers. Just like leaves grow on the branches, similarly, the leaf nodes grow out of the decision nodes on the branch of a Decision Tree. Every second section of a Decision Tree is therefore called a “branch.” An example of this is when the question is, “Are you a diabetic?” and the leaf nodes can be ‘yes’ or ‘no’. Find out more about Decision Trees here: Decision Trees Did you know? For errorless data, you can always construct a decision tree that correctly labels every element of the training set, but it may be exponential in size. Some key terminologies • A Root node is at the base of the decision tree. • The process of dividing a node into sub-nodes is called Splitting. • When a sub-node is further split into additional sub-nodes it is called a Decision node. • When a sub-node depicts the possible outcomes and cannot be further split it is a Leaf nod. • The process by which sub-nodes of a decision tree are removed is called Pruning. • The subsection of the decision tree consisting of multiple nodes is called Branches. What are loss functions? A loss function, in simple terms, quantifies the losses generated by the errors that we commit when we try to estimate the parameters of a statistical model or when we use a predictive model, such as a Decision Tree, to predict a variable. Minimization of this expected loss which is called statistical risk is one of the guiding principles in statistical modeling. Undeniably, the ultimate goal of all algorithms of machine learning is to decrease this loss and any statistical risk it can lead to. The loss has to be calculated before we try to decrease it using different optimizers. A loss function can also be termed the Cost function. Since the calculation of the predicted variables is different for both the type of models in machine learning – regression, and classification, the loss functions of both are different. Understanding Splitting Criteria or Impurity in the Decision Tree Loss Function: It is common for the split at each level to be a two-way split. Although there are methods that split more than two ways, care should be taken when using these methods because making too many splits early in the construction of the tree may result in missing interesting relationships that become exposed as tree construction continues. The scoring of the loss function in a Decision tree works on the concept of Purity in the split. primary methods for calculating any existent impurity: Gini, and entropy. Let us assume that for calculating the entropy, a set of 10 observations with two possible response values is used. For each scenario, an impurity score is calculated. Cleaner splits result in lower The formula for entropy is : Entropy Formula (used to calculate Decision Tree Loss Function) Entropy calculation is conducted on a set of observations as given which is S. The ‘pi’ here refers to the fraction of the observations that belong to a particular value and the ‘c’ given in the formula is the number of different possible values of the response variable. For example, for a set of 100 observations where the color response variable had 60 observations with “red” values and 40 with “blue” values, the p-red would be 0.6 and the p-blue would be 0.4. When pi = 0, then the value of entropy for this becomes zero. While entropy is the most preferred impurity measure, another similar method would be to use ‘Gini’ Impurity which is as follows; Gini Impurity Formula (used to calculate Decision Tree Loss Function) The Gini impurity can be calculated by subtracting the sum of the squared probabilities of each given class from 1. As can be noticed from the formula, Gini Impurity can be biased towards bigger partitions (distributions) and easy to implement, whereas information gains can be biased to smaller partitions (distributions) with multiple values. Gini Impurity tells us if there is “success” or “failure” and can only split dichotomous variables and the information gained can evaluate the difference in entropy before and after splitting and illustrates existent impurity in class variables. Find out more about the metrics here: How to calculate Gini and Entropy ? You can use these as ‘criterion’ using the Decision Tree Classifier which comes with the tree class codes from the machine learning library ‘Scikit Learn’ that can be used to make a tree model using Example code for creating an instance of the Decision Trees with the Loss Function Find the copy-able code here: from sklearn.tree import DecisionTreeClassifier clf = DecisionTreeClassifier(random_state=0, criterion= 'gini') clf2 = DecisionTreeClassifier(random_state=0, criterion= 'entropy') From the aforementioned, we can observe that a decision tree is a bureaucratic approach to classification problems and regression problems alike. The use of values can be done to split the questions at each choice or branch and can end up in multiple leaf nodes which can be singular or dichotomous or multivariate. A small tree can be made easily but with a relatively high variance whereas a tall tree with too many splits generates better classifications but it probably is overfitting. So it is important to ensure that the right criterion or loss function is used and minimized in order to ensure that the tree is not overfitted or underfit. The Entropy or Gini metrics will prove helpful in this and can be instated at the beginning of the tree to split the branches accordingly. If you think one decision tree cannot take your model to take the right level of accuracy and performance it needs, then you can take the use of multiple decision trees which is called a random forest (another significant machine learning algorithm used by many). To know more about Random Forests: Decision Tree vs Random Forests An eternal learner, I believe Data is the panacea to the world's problems. I enjoy Data Science and all things related to data. Let's unravel this mystery about what Data Science really is, together. With over 33 certifications, I enjoy writing about Data Science to make it simpler for everyone to understand. Happy reading and do connect with me on my LinkedIn to know more! 0 Comments Inline Feedbacks View all comments
{"url":"https://buggyprogrammer.com/how-to-calculate-the-decision-tree-loss-function/","timestamp":"2024-11-03T19:08:16Z","content_type":"text/html","content_length":"256788","record_id":"<urn:uuid:4defd4b0-488d-47cb-bcbc-1ee297a9c637>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00627.warc.gz"}
Jwv CHOOSE and complete ONE of the two problems below. You must you show your work to receive maximum credit. PLEASE NOTE: If you compl shown will be... Answered You can hire a professional tutor to get the answer. Jwv CHOOSE and complete ONE of the two problems below. You must you show your work to receive maximum credit. PLEASE NOTE: If you compl shown will be... I need to learn how to solve problem b. I wrote the instructions that the professor gave and the answer as well. In spite of that, I am unable to get to the result. Thanks JwvCHOOSE and complete ONE of the two problems below. You must you show your work toreceive maximum credit. PLEASE NOTE: If you complshown will be graded unless otherwise indicated. (12 points)mplete more than one part, only the firstSuppose that the price p (in dollars) and the weekly sales x (in thousands ofcertain commodity satisfy the demand equation Sp + 2x+ xp = 8ds of units) of a$0.20 per week?- 80 . Determine the rate atb)which sales are changing at a time when x = 10, p = 4 , and the price is rising at the rate ofA factory's weekly production costs y and its weekly productionthe equation y- - 4y x are related byaction quantity rard- 4x = 225, where y iy is in thousands of dollars and x is in thouin thousands of unitsof output. Find the rate at which the weekly production costslaostslare changing when y = 25,x= 10, and the production level is rising at the rateof 250 units per week?ay.(dy-250dy =-Br /drdy = 4x (250o+ = 40CostSSodica ty? - 4 04. 22314090003 34 - BX dc20FP Ppage 7dyeats an wingy at &amp;cdx Show more Homework Categories Ask a Question
{"url":"https://studydaddy.com/question/jwv-choose-and-complete-one-of-the-two-problems-below-you-must-you-show-your-wor","timestamp":"2024-11-03T09:23:14Z","content_type":"text/html","content_length":"27143","record_id":"<urn:uuid:a474eb8b-0c17-4abc-b60f-a575a9e21bf8>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00389.warc.gz"}
What Is The Quadratic Regression Equation That Fits These Data What Is The Quadratic Regression Equation That Fits These Data In the realm of regression analysis, the art of curve fitting plays a pivotal role. This process revolves around selecting a model that harmoniously aligns with the specific curves within your dataset. While linear relationships between variables are relatively straightforward to work with, curved relationships introduce an additional layer of complexity. In a linear relationship, increasing the independent variable by one unit invariably leads to a consistent change in the mean of the dependent variable, irrespective of the location within the observation space. However, reality often presents us with data featuring non-linear relationships, where the effect of the independent variable on the dependent variable varies across different points within the observation space. To witness this phenomenon in action and understand how to interpret regression coefficients for both linear and curvilinear relationships, delve into our in-depth discussion. This post will guide you through various curve fitting methods employing both linear and nonlinear regression techniques, ultimately helping you identify the most fitting model for your data. Why You Need to Fit Curves in a Regression Model What Is The Quadratic Regression Equation That Fits These Data Using a linear relationship to fit a curved one can lead to inadequate models, even when the R-squared value appears high. To tackle this challenge, curve fitting becomes essential. While detecting curvature is relatively straightforward with one independent variable, it becomes trickier in multiple regression scenarios. In such cases, residual plots serve as crucial indicators of whether your model adequately captures curved relationships. Patterns in these residual plots often signify that your model is failing to represent curvature correctly. Alternatively, you may need to rely on domain-specific knowledge to perform curve fitting. Past experience or research may reveal that the impact of one variable on another varies based on the independent variable’s value. This could manifest as a limit, threshold, or point of diminishing returns, where the relationship undergoes a transformation. To compare various curve fitting methods, we will employ a challenging dataset that demands precision in predictions. You can download the dataset (CurveFittingExample.csv) to follow along. Curve Fitting using Polynomial Terms in Linear Regression What Is The Quadratic Regression Equation That Fits These Data Surprisingly, linear regression can be wielded for curve fitting by introducing polynomial terms into the model. These terms are independent variables raised to different powers, such as squared or cubed terms. To determine the appropriate polynomial term to include, count the number of bends or inflection points in the curve and add one to it. For instance, quadratic terms model a single bend, while cubic terms model two. Quadratic terms are more commonly used, with quartic terms or higher being rare. When utilizing polynomial terms, consider standardizing continuous independent variables. In our dataset, we identify a single bend, prompting us to fit a linear model with a quadratic term. Although the R-squared value increases, the regression line still falls short of an ideal fit. This underscores the importance of not solely relying on high R-squared values and emphasizes the need for checking residual plots. Curve Fitting using Reciprocal Terms in Linear Regression What Is The Quadratic Regression Equation That Fits These Data Reciprocal terms come into play when the dependent variable approaches a lower or upper limit (floor or ceiling) as the independent variable increases. These terms are defined as 1/X, where X is the independent variable. The value of this term decreases as X increases, causing the effect of this term to diminish, and the slope to flatten out. Notably, X cannot equal zero in this model due to the impossibility of dividing by zero. In our dataset, as the Input variable increases, the Output exhibits a flattening effect, suggesting the presence of an asymptote near 20. We proceed to fit models with linear and quadratic reciprocal terms. The latter, in particular, provides a significantly improved fit to the curvature. Curve Fitting with Log Functions in Linear Regression Log transformations offer a compelling approach to fitting curves using linear models, which would otherwise require nonlinear regression. This transformation can adapt nonlinear functions into linear forms, broadening the range of curves that linear regression can handle. By applying log transformations to either one side or both sides of the equation, you can accommodate various types of curves. The choice between a double-log or semi-log model depends on the nature of your data and your research domain. Implementing this approach requires careful consideration and investigation. Our example dataset prompts us to apply a semi-log model to fit curves that flatten as the independent variable increases. However, this model, similar to the first quadratic model, presents some bias in fitting the data points. The quadratic reciprocal term model still maintains its status as the best fit for the data. Curve Fitting with Nonlinear Regression What Is The Quadratic Regression Equation That Fits These Data Nonlinear regression emerges as a potent alternative to linear regression, providing greater flexibility in modeling curves by employing a diverse range of nonlinear functions. However, the challenge lies in selecting the precise function that best aligns with the curve in your data. Most statistical software packages offer a catalog of nonlinear functions to aid in this selection process. Additionally, starting values for function parameters are often required, as nonlinear regression employs an iterative algorithm to identify the optimal solution. In our dataset, where an asymptote is approached, we opt for a nonlinear function based on the catalog’s guidance. We establish starting values for the parameters and obtain a fitted line plot that demonstrates an exceptional, unbiased fit to the data. Read More : What Is The Exponential Regression Equation That Fits These Data Comparing the Curve-Fitting Effectiveness of Different Models R-squared, a commonly used metric, loses its validity in the realm of nonlinear regression. Instead, the standard error of the regression (S) proves valuable for assessing the goodness of fit in both linear and nonlinear models. A lower standard error indicates that the data points closely align with the fitted values. Among the models explored, two emerge as equally adept at providing accurate and unbiased predictions—the linear model with a quadratic reciprocal term and the nonlinear model. Their standard error of the regression values is strikingly close, making either a viable choice. Nonetheless, the linear model offers additional statistics like p-values for independent variables and R-squared, which can be advantageous for reporting purposes. Read More : How To Check For Data Balance On Glo Closing Thoughts Curve fitting, though not without its complexities, can be accomplished through a variety of methods, each offering flexibility to adapt to diverse curve types. While setting up your study and gathering data demands considerable effort, the pursuit of a model that optimally fits your data is undeniably worthwhile. Remember that subject-area expertise should guide your model selection, and certain domains may have established practices for data modeling. It’s crucial to strike a balance—aim for a good fit without overfitting your regression model, which can lead to excessive complexity and inflated R-squared values. Utilize tools like adjusted R-squared and predicted R-squared to guard against this You must be logged in to post a comment.
{"url":"https://designtechlabs.com/what-is-the-quadratic-regression-equation-that-fits-these-data.html","timestamp":"2024-11-12T19:30:45Z","content_type":"text/html","content_length":"90901","record_id":"<urn:uuid:d113dd76-ab28-4e1c-a2f4-84eba2883e13>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00774.warc.gz"}
(x+4)(x-5)=0 Standard Form Solving Quadratic Equations: From Factored Form to Standard Form This article focuses on understanding the relationship between factored form and standard form of a quadratic equation, specifically using the example: (x + 4)(x - 5) = 0. Understanding Factored Form The equation (x + 4)(x - 5) = 0 is presented in factored form. This form is useful for quickly finding the roots or solutions of the equation. The Zero Product Property states that if the product of two or more factors is zero, then at least one of the factors must be zero. Applying this to our equation: • x + 4 = 0 or x - 5 = 0 • Solving for x in each case, we get: Therefore, the roots of the equation (x + 4)(x - 5) = 0 are x = -4 and x = 5. Converting to Standard Form The standard form of a quadratic equation is ax² + bx + c = 0, where a, b, and c are constants and a ≠ 0. To convert our factored form into standard form, we need to expand the product: 1. FOIL Method: We multiply each term in the first factor by each term in the second factor. □ (x + 4)(x - 5) = x(x - 5) + 4(x - 5) 2. Simplify: We distribute and combine like terms. 3. Combine Like Terms: Therefore, the standard form of the quadratic equation (x + 4)(x - 5) = 0 is x² - x - 20 = 0. By understanding the relationship between factored form and standard form, we can efficiently solve quadratic equations. The factored form provides a direct path to finding the roots, while the standard form is essential for various mathematical operations and applications.
{"url":"https://jasonbradley.me/page/(x%252B4)(x-5)%253D0-standard-form","timestamp":"2024-11-04T02:39:14Z","content_type":"text/html","content_length":"59772","record_id":"<urn:uuid:d4dae144-d80c-423a-8988-8d94e74f59c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00895.warc.gz"}
Chapter 9: Momentum, Work, Power Flashcards What does area under a graph of force-time give? Impulse (change in momentum) momentum = mass x velocity units: kgms^-1 What would happen when a ball(1) of velocity v with mass m hits a stationary ball(2) of the same mass Ball(1) would stop on impact, ball (2) would move off with the same velocity v, conserving momentum What would happen when a ball(1) of velocity v with mass m hits a stationary ball(2) that is much heavier Ball(1) would rebound off of it, while ball(2) would move only a bit slowly as a response How is momentum conserved in collisions and explosions The momentum before is the same as the momentum after. mass before * velocity before = mass after * velocity after Is momentum a vector or scalar Vector, the equation for momentum contains velocity, which is a vector use the equation ‘mv’ to show conservation of momentum between a ball of velocity v hitting a stationary ball, both have same mass mv + 0 = mv => 0 + mv = mv State Newton’s First Law - the Law of Inertia A body at rest will stay at rest (and a body in constant motion will stay in constant motion) unless a resultant external force acts on it. State Newton’s Second Law. The acceleration of an object is directly proportional to the resultant force applied to it (F = ma). How does Newton’s second law apply to momentum? The rate of change of momentum is directly proportional to the force F = Δp / Δt = ma Δp = mΔv What is impulse of a force? The force multiplied by the time the force is applied for, so impulse is equal to the change in momentum Δp = FΔt How do you increase impulse? Increase the force, or the time it acts for (or both) How are crumple zones in cars used for improved driver safety They reduce the force on the driver as momentum is constant, and Δp = FΔt crumple zones increase the duration of the crash, so change in time increases, but to keep change in momentum constant, the force decreases so the force on the passenger decreases, improving their safety How do seat belts airbags improve safety? They reduce the force on the driver as momentum is constant, and Δp = FΔt seat belts stretch slowly and air bags deflate gradually, increasing how long it takes for a person to come to a halt if the car comes to a halt How do you find impulse from a force-time graph? The area under a force time graph between 2 time points on the X-axis give the impulse What does area under a graph of force-time give? Impulse (change in momentum) What does the peak of the force-time graph give? Maximum deformation of an object If body A exerts a force on body B then body B exerts an equal and opposite force on body A Equation of force for free falling object near earth’s surface? F = m g m is mass in kg g is acceleration usually 9.81 ms^-2
{"url":"https://www.brainscape.com/flashcards/chapter-9-momentum-work-power-11553891/packs/19333830","timestamp":"2024-11-06T10:56:12Z","content_type":"text/html","content_length":"157418","record_id":"<urn:uuid:d4ad4c9c-cc98-41fa-a24b-51b6b54867c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00803.warc.gz"}
Solving Equations: A Comprehensive Guide to Mastering Algebra Welcome to our comprehensive guide to mastering algebra through solving equations and online maths tutorials! Whether you are a student preparing for GCSE Maths or an individual looking to brush up on your algebra skills, this article will provide you with all the necessary information and techniques to find the perfect NYC maths tutors for you and solve equations with ease. Algebra can often be a daunting subject, but with the right approach and understanding, it can become a powerful tool for problem-solving with the assistance of a level maths tutoring in NYC. So, let's dive into the world of algebra tutors and discover how they can help you excel in this subject! In this online lesson, we will delve into the fundamentals of solving equations, including the basic principles, techniques, and common challenges that students face. By the end of this guide, you will have a solid foundation in algebra and be able to confidently tackle any equation thrown your way. So let's dive in and discover the world of solving equations together!Welcome to our guide on solving equations, a crucial topic in the world of mathematics. Whether you are a student preparing for your GCSE Maths exam or an educator looking for resources to help your students, this article will provide you with all the information you need to excel in algebra. By the end of this article, you will have a solid understanding of how to solve equations and be equipped with tips and techniques for studying and test-taking. First, let's start with the basics. An equation is a mathematical statement that shows that two expressions are equal. In order to solve an equation, we need to find the value of the variable that makes the equation true. This is often referred to as finding the 'unknown' or 'x' in the equation. To do this, we use various algebraic techniques such as simplifying, factoring, and substitution. It's important to note that there are different types of equations, such as linear, quadratic, and exponential, and each requires a different approach to solve them. Understanding these techniques and when to use them is key to mastering equations. For linear equations, which have the form ax + b = c, we can use the addition and subtraction property of equality to isolate the variable on one side of the equation. For example, if we have the equation 2x + 5 = 15, we can subtract 5 from both sides to get 2x = 10, and then divide both sides by 2 to get x = 5.For quadratic equations, which have the form ax^2 + bx + c = 0, we can use factoring to find the values of x that make the equation We can also use the quadratic formula which gives us the solutions for any quadratic equation. Exponential equations , which have the form a^x = b, require the use of logarithms to solve. The logarithm property states that if a^x = b, then x = loga(b). When solving equations, it's important to remember to check your solutions and make sure they satisfy the original equation. Also, practice makes perfect, so be sure to work through various examples and exercises to hone your skills. In conclusion, solving equations is an essential skill in algebra and understanding the different techniques and types of equations is crucial for success. We hope this article has provided you with a comprehensive guide to mastering equations and has helped you prepare for your GCSE Maths exam. Happy solving! Tips for Studying Solving Equations Studying solving equations can be a daunting task, but with the right approach, you can master this important topic in algebra. Here are some tips to help you improve your studying and excel in solving equations. Create a Study Schedule and Stick to It One of the keys to success in any subject is consistency. Create a study schedule that works for you and stick to it. This will help you stay on track and ensure that you cover all the necessary material. Practice Regularly with Different Types of Equations The more you practice, the more comfortable you will become with solving equations. Make sure to practice regularly with different types of equations, including linear, quadratic, and exponential equations. This will help you familiarize yourself with different problem-solving techniques and build your confidence. Break Down Complex Equations into Smaller, Manageable Steps When faced with a complex equation, it can be overwhelming and difficult to know where to start. To make it more manageable, break down the equation into smaller steps and solve them one at a time. This will help you better understand the problem and prevent mistakes. Use Flashcards or Other Study Aids Flashcards can be a useful tool for studying equations. Write down important formulas and concepts on flashcards and use them for quick review. You can also create your own study aids such as cheat sheets or practice quizzes to help you prepare for tests. Seek Help from Your Teacher or Tutor if Needed If you are struggling with solving equations, don't hesitate to seek help from your teacher or tutor. They can provide you with additional resources, clarify any confusing concepts, and guide you through practice problems. In conclusion, solving equations is an essential skill in algebra that requires practice and understanding of various techniques. By following the tips and techniques provided in this guide, you will be well on your way to mastering equations and excelling in your studies. Remember to stay organized, stay motivated, and don't be afraid to ask for help when needed.
{"url":"https://www.mathslesson.co.uk/algebra-solving-equations","timestamp":"2024-11-13T18:13:36Z","content_type":"text/html","content_length":"122474","record_id":"<urn:uuid:bb278030-edb4-48f8-b8a5-e2b89e1a9dd2>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00680.warc.gz"}
Simple and Beautiful Momentum Every occurrence in nature obeys some kind of relation that has been put in operation in the universe, and most scientists probably believe that humans have the skill to represent that relation to themselves mathematically to a certain extent. Many of the relations which are observed are accepted as independent facts until someone comes up with a method to derive them from more fundamental facts or relations. In this sense, the academic field of physics accepts some “axiom-like” relations that explain events well, but we cannot derive them from more fundamental relations or cannot question why they hold true. Another common property of such axiom-like relations is that they turn out to be the simplest of all the possible alternatives. This is the principle of simplicity, which is held by physicists to be such a deep and non-trivial feature of our universe that it indicates a preference for simplicity over complexity. One recent example of such phenomena is the Schrödinger equation that explains the behavior of matter at the atomic level. This relation just happens to work, and its derivation is intuitive rather than rational. It also has the simplest mathematical form among its possible competitors in terms of expressing nature. We will now look at another example of such axiom-like relations that we usually ignore, although it is frequently encountered in our everyday life. Before revealing it as fully as we can, let us relate one situation where this effect is very apparent. We usually move objects by pushing or pulling them. Suppose now we are on a motorboat and we have run out of gas in a place very close to the shore. We (the strong crew members) surely do not want to be carried away from the shore by the backwash from the waves. One of us has the brilliant idea to push on the sides of the boat until we reach harbor. What would you suggest? Some of us think that it is not a good idea because the boat is very heavy and our pushing will be negligible. It is true that the boat will not move. However, the failure has nothing to do with the weight of the boat. On the other hand, some other crew members suggest using oars, which will obviously work, but why? (Personally, with all my respect to other opinions, I would suggest using the phone to call the beach police to get some help; but this would distract us from our subject matter.) Impulse, direction, and momentum If you think about the “why” question above, you will guess that we are talking about impulse in the loose meaning of the word. In physics, impulse has a more precise definition. This definition arose from the need to describe an object’s ability to have an impact on other objects, but the idea is still vague: How do we quantify this ability in order to put some flesh on this notion? Let us try to figure out an answer to this question. Now, let us consider a few possible ways of defining impulse that look reasonable. We may decide intuitively that an impact should be related to an object’s speed: the higher the speed the greater the impact. If you ever played marbles in your childhood, you will recall that the easiest way to dislodge the marbles in the targeted row is to cast your own marble as fast as you can. Impulse should also have a relation to mass. Certainly, the impact of as many as a thousand bullets aimed at a train will not move the train even a meter. These are some simple observations anyone can experience or have a feeling of from their daily life. We also expect that this strange quantity should somehow be transferred by the interaction of two objects. One object colliding with another stationary object transfers something that causes the latter to travel in a direction. With this example, another important feature of our impulse idea emerges: direction. Those who like to play the game of American pool or billiards know this very well. (I am sure everyone does it for the noble reason to experiment the laws of physics.) It makes a significant difference in a collision of two masses if they hit each other at an angle. Wait a minute! We have been talking about the effect of an object’s impact, but the object has something that it is carrying even before the impact, and this “something” is the reason why we have an impact in the first place. So, what is this “something”? Let us call it momentum so as not to violate the traditions of physics. All this stuff so far is good, but we are not done yet: how should these ideas appear in our equations? Now, let us bring together all our findings. We know momentum manifests itself as the impact (P) of one object on another. From its effect (impulse), we understand that momentum is related to the mass (M) of the object and its velocity (V). We also know that momentum has a directionality, which is termed vectorial. Then, perhaps momentum is something like: P = a x M + b x V where a and b are constants. This seems acceptable since it satisfies our observation: the more the mass, the more the momentum. But for a stationary object (V=0), there is no point in talking about impulse; so the axM terms looks unnecessary. If there is no good reason for a physical quantity to appear in a physical equation, then the simplicity principle says it should be removed. Therefore, we look for a simpler alternative relation with only one term like below: P = c x M2 x V5 where c is a constant. But this one is a highly non-linear relation with exponential terms, so it is really not looking good. Another problem with this equation is that it does not fit our daily experience very well. If we reconsider our train example, with a high velocity power term like this, even the very small bullets can have a considerable effect on a train, enough to move it in fact. As a simple example, let c be equal to 1, take 0.1 kg as the mass of a bullet and 105 kg (100 tons) as the mass of the train, and give 400 m/s velocity to the bullet. Assuming that the impulse of the bullet is transferred to the train (conservation of momentum), we roughly get: Pbullet = c x m2 x Vb5 = 1011 Ptrain = c x M2 x Vt5 = 1010 Vt5 By equating both sides we roughly get, 1.6 m/s (5.7 km/h) for the train velocity. A single bullet moving a big train at such speed? This is a very counter-intuitive result. However, we will not give up easily. How about if we try an expression which is more familiar? P = d x MV2 where d is a constant. This relation also agrees with our intuition (i.e. it has mass and velocity terms proportional to P.) I can already hear some objections from those who are acquainted with physics saying “No! This is the energy formula of a body with mass M and velocity V.” Indeed, this equation is reserved for energy which is a non-vectorial quantity. As a matter of fact, none of the above is a correct description for momentum. The actual expression is, interestingly, the simplest of all possibilities: P = MxV So, why not the more complex ones but this, the simplest one? The rigorous answer is subtle and requires a thorough analysis of linearity and homogeneity of space, which could be the subject of another essay. But we repeat the remark that we made in the beginning: if there is a simpler and more beautiful way of describing a natural law, then that description often turns out to be the correct one in the end. In fact, for some giants of physics like Paul Dirac, the beauty of a theory is more important than its results and is a better indication of the theory’s correctness. There is more to it than that. There are whole theories like Dimensional Analysis which are implicitly based on the idea of writing down the equations in the simplest (and most beautiful) form. So, the final point is that, like other fundamental relations in physics, the idea of impact or momentum is best described in the simplest and most intuitive way: P= MxV. And behold! This gives us exactly the relation that has passed all the scientific tests in the range of classical physics. This result implies that the beauties we see in nature can be explained in terms of the simplest possible physical relation. Pondering all these, one cannot help but ask how in the world a mindless, blind natural law could exhibit beauty based on simplicity.
{"url":"http://w-ww.fountainmagazine.com/all-issues/2008/issue-63-may-june-2008/simple-and-beautiful-momentum","timestamp":"2024-11-09T00:34:50Z","content_type":"text/html","content_length":"102083","record_id":"<urn:uuid:807bf466-b803-4239-8335-1d5b437d26e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00056.warc.gz"}
Joe Chiang: Common Core Math Is Not Common There have been complaints about elementary teachers, elementary students, and parents of elementary students being unable to understand Common Core elementary math. There is a logical reason. The methods elementary teachers must teach are learned by math majors in Number Theory College Math class, an advanced math class sometimes taken by math minors as well as majors. It would be very nice for students to understand these advanced concepts when they enter college so they could move directly to even more advanced math. Thus, college instructors wanting elementary students to understand these methods and have included in these concepts in the Common Core curriculum they wrote. There is a small problem. These methods were first introduced in elementary math under the name “New Math”. I and other “Math People” understood it as students with no effort. There is no significant difference between “New Math” and “Common Core Math”. Therefore, Common Core math has already failed as the problems with the New Math failures were never addressed. So the problems that caused New Math to fail must also make Common Core Math to fail. Elementary teachers, generally, are not math people and have not had advanced math at the college level. Therefore, they do not understand these advanced concepts in order to teach them. Thus their Elementary students are not quite ready to take college number theory. Maybe a few more years of math, like 10-12 years of math and they may be better prepared. Then there is the problem with parents being unable to help their children. Many, maybe most parents have not taken college number theory. Those who have are probably having no trouble helping their children. So they lack the understanding of this level of math needed to help their children. Perhaps the incumbent has taken college number theory and would be willing to teach elementary teachers this subject. I have and I am certified to teach it. But my professional evaluation is that the methods cannot be taught to elementary teachers without a lot of foundational education. These methods must be modified and simplified to what a non-math teacher can teach to a non-math student who could then be helped by a non-math parent. I could do this, but obviously, the writers of Common Core were unable to do so. I hope this explains why Common Core Math is not working in North Dakota schools and why it cannnot work. The only successful option will be to get rid of Common Core. I am not a language teacher, but my understanding is that subject has been as well thought out as the math. The incumbent has stated she will not sign the papers to get rid of this mess. Seems like citizens will have to come up with some other way to remove this from our classrooms than trusting those currently in authority to do so.
{"url":"https://www.sayanythingblog.com/entry/joe-chiang-common-core-math-not-common/","timestamp":"2024-11-10T19:21:25Z","content_type":"text/html","content_length":"76261","record_id":"<urn:uuid:2cf361cb-5f3a-4db0-892c-427edd92a54f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00378.warc.gz"}
Analysis on the vehicle-bridge coupled vibrations of long-span cable-stayed bridge based on multiscale model The paper introduces the principals and methods of universal multiscale modeling, and simply verifies them. It also derives the vibration equations of the bridge and the vehicles, and programs the universal program for analyzing two-axle vehicle-bridge coupled vibration via Ansys and Matlab software based on Newmark-$\beta$ method. It proposes the universal method for analyzing vehicle-bridge coupled vibrations of long-span cable-stayed bridge based on multiscale model, and takes a single pylon cable-stayed bridge as an example to analyze and verify it. 1. Introduction Long-span bridges are playing an increasingly important role in modern transportation. The long-span bridges such as river-crossing bridge and sea-crossing bridge etc. often have very complicated structures; therefore, and people often face the contradiction between the sophistication of local details and simplification of the entire structure when designing and analyzing the bridges [1-3]. Currently, in numerical analysis on the bridge structures, researchers often analyze the entire structural performance of the bridge by means of large scale beam element model, and analyze the local stress performance of the structure by means of modeling by comparatively sophisticated shell element or solid element. With many in-use bridges ceaselessly suffer the problems such as local damage and fatigue crack etc. during use, it gradually exposes various disadvantages in traditional modeling methods. As a result, multiscale model which can build models with different accuracies for different parts emerges quickly and develops well; this method can solve the problems of calculation accuracy and efficiency^[4-7]. For long-span bridges, the vehicle-bridge vibration caused by passing vehicles cannot be ignored; such vibration will amplify the stresses at different parts of the bridges, the stresses of some parts may be much larger than the vehicle impact effect considered during designing, thus accelerating the damage and degradation of the bridges^[8-12]. Therefore, the researchers need to analyze the dynamic characteristics of each part of the bridge under vehicle-bridge coupled vibration, thus guiding the designing work for the bridges. Given the disadvantages existing in the uniform calculation method related to span or base frequency as adopted by vehicle impact coefficient in the specifications of bridges and culverts, this paper proposes the method for analyzing vehicle-bridge coupled vibrations of long-span cable-stayed bridge based on multiscale model in order to analyze the similarities and differences of vehicle impact effect at different parts of the bridges, with the problems of calculation accuracy and efficiency of long-span bridges taken into consideration. Firstly, the paper introduces the method and process of universal multiscale modeling method of long-span bridge, and verifies this modeling method with a simple example; then it introduces the analysis model of vehicle-bridge coupled vibration which is applicable to multiscale model; finally, it takes an actual single pylon cable-stayed bridge as an example to verify the method and process proposed in this paper. 2. Multiscale modeling method for long-span structure 2.1. Multiscale modeling method When conducting multiscale modeling for a long-span bridge, comparatively sophisticated local models shall be built to carry out accurate analysis on the concerned parts of stress concentration and the parts where fatigue cracks are prone to occurring. For other parts of the bridge, in order to ensure the boundary conditions of the local models and the accuracy of the stress characteristics of the entire structure, the comparatively simplified model can be adopted to carry out analysis. When dividing the modeling scales of each part of the bridge, the vulnerable parts of the bridge structure can be determined in line with existing inspection reports on the bridge and the static and dynamic analysis results of the simplified model of the target bridge. The effective connection between the interfaces of the simplified large scale model and sophisticated small scale model in the multiscale model is the key to ensuring the calculation accuracy of the multiscale model. The degrees of freedom of the models with different scales are often different due to their different element types; they are often connected by rigid beam or rigid zone in reality, but this method will generate stress concentration at the interface; therefore, it needs to minimize the redundant constraints of local small scale models as far as possible while avoid damaging the degrees of freedom of large scale models^[13-15]. Adopting constraint equations can solve this problem better; the models with different scales can meet the plane cross-section assumption at the For a long-span bride structure, its sophisticated small scale models can be built by solid elements or shell elements, while its simplified large scale models can be built by beam elements; the two shall be connected via constraint equation at the interface, which can increase the calculation accuracy and efficiency of complicated structure models. 2.2. Verification on the connection of multiscale models To verify the feasibility of multiscale model in bridge structure modelling and the accuracy of its calculation results, this paper takes the steel box girder which is commonly used in long-span bridges as an example, and cuts out a section of U-shaped ribbed stiffener and carrier plate to build a multiscale model. The thickness of the carrier plate of the steel box girder is 14 mm, the thickness of the U-shaped ribbed stiffener is 8 mm, the width of the carrier plate at this section is 0.6 m, the height of the ribbed stiffener is 0.28 m; take a section of 3 m between two diaphragm plates at longitudinal direction, simply support both ends for constraint. Build models via large universal finite element software ANSYS, build large scale beam element models with Beam189, and build small scale shell element model with Shell63, and built a multiscale model with blended elements. The three finite element models of the section are as shown in Fig. 1. Fig. 1Finite element model of a section Modal analysis shall be carried out on the three models built; see Table 1 for the vibration frequencies of the first 6 stages of each model, and see Fig. 2 for the first-stage modality of the section of each model. According to the modal analysis results of each model, compared with beam element model, the natural vibration frequencies of the first 6 stages of multiscale model are closer to the natural vibration frequency of sophisticated shell element model, which indicates that this method has good calculation accuracy; meanwhile, it can increase the calculation efficiency to a great extent. For large structures with complicated structures, multiscale modeling method can better reflect its advantages in terms of both calculation accuracy and efficiency. Table 1Modal analysis data of the section Beam element model Multiscale model Shell element model Frequency (Hz) Frequency (Hz) Frequency (Hz) 1 110.10 111.01 110.95 2 129.42 132.85 131.03 3 194.04 195.25 196.34 4 225.60 223.63 219.39 5 232.15 230.35 225.54 6 246.02 242.21 235.47 Fig. 2The first-stage modality of the section a) The first modal of the beam element model b) The first modal of the multi-scale element model c) The first modal of the shell element model 3. Methods for analyzing vehicle-bridge coupled vibration 3.1. Vehicle model Vehicle models shall be two-axle vehicle models; the car body and the axle shall be simplified to be rigid body with certain mass, and they are connected by springs and damping systems. In view of the vertical displacement at the contact part between the two wheels and the bridge floor, the vertical displacements at both ends of the car body as well as the roll-angle and displacement of the barycenter, there are 8 degrees of freedom in total. See Fig. 3 for the vehicle model. According to Alembert principle, the vibration equation of the simplified vehicle model shall be obtained as follows: In the equation, $\mathbf{M}$, $\mathbf{C}$ and $\mathbf{K}$ are the mass, damping and stiffness matrix of the vehicle respectively; $\mathbf{P}$ is the vector of the vehicle under the acting force of vehicle-bridge coupling; $Z$, $\stackrel{˙}{Z}$, $\stackrel{¨}{Z}$ are the generalized displacement, speed and accelerated speed vector of the vehicle respectively. 3.2. Pavement evenness model Pavement evenness is an important reason that causes the vehicle-bridge vibration; its influence on the bridge vibration cannot be ignored. Its mechanism is that the rough pavement will cause redundant displacement, speed and accelerated speed at the vehicle-bridge contact surface, thus changing the acting forces between them. Generally, the pavement unevenness is deemed as to comply with Gauss random distribution [16-17]: $r\left(x\right)={\sum }_{k=1}^{N}\sqrt{2\varphi \left({n}_{k}\right)\mathrm{\Delta }n}\mathrm{cos}\left(2\pi {n}_{k}x+{\theta }_{k}\right),$ where, $r\left(x\right)$ is the roadway profile, $\varphi \left(\bullet \right)$ is the power spectral density function (m^3/cycle/m), ${n}_{k}$ is wave number (cycle/m), $\mathrm{\Delta }n$ is frequency increment, ${\theta }_{k}$ is phase angle. This paper simulates the pavement unevenness by means of trigonometric series superposition method, and divides the pavement unevenness into five grades of “excellent”, “good”, “general”, “bad” and “worse” [18]. Surface roughness is a function associated with the road surface roughness coefficient ($\varphi \left({n}_{0}\right)$), pavement roughness coefficient values are shown in Table 2. Table 2Surface roughness coefficient values No. Grades $\varphi \left({n}_{0}\right)$ 1 Excellent 2×10^-6$\le \varphi \left({n}_{0}\right)<$8×10^-6 2 Good 8×10^-6$\le \varphi \left({n}_{0}\right)<$32×10^-6 3 General 32×10^-6$\le \varphi \left({n}_{0}\right)<$ 128×10^-6 4 Bad 128×10^-6$\le \varphi \left({n}_{0}\right)<$ 512×10^-6 5 Worse 512×10^-6$\le \varphi \left({n}_{0}\right)<$2048×10^-6 3.3. Methods for analyzing vehicle-bridge coupled vibration When analyzing vehicle-bridge coupled vibration, it needs to treat the bridge in a finite element discretization manner, thus obtaining the vibration equation under the system with several degrees of In the equation, ${\mathbf{M}}_{b}$, ${\mathbf{C}}_{b}$ and ${\mathbf{K}}_{b}$ are the mass, damping and stiffness matrix of the vehicle respectively; ${\mathbf{P}}_{b}$ is the vector of the vehicle under the acting force of vehicle-bridge coupling; ${Z}_{b}$, ${\stackrel{˙}{Z}}_{b}$, ${\stackrel{¨}{Z}}_{b}$ are the generalized displacement, speed and accelerated speed vector of the vehicle After solving the dynamic equations of the bridge and the vehicle in line with the aforesaid method respectively, assume that the vehicle tyres and the pavement have never separated, the dynamic equations of the vehicle system and the bridge system can be coupled together by way of the compatibility condition between displacement and force at the contact point of the two [19]. This paper programs the universal program for analyzing two-axle vehicle-bridge coupled vibration via Ansys and Matlab software based on Newmark-$\beta$ method. 4. Analysis on the vehicle-bridge coupled vibration of cable-stayed bridge based on multiscale model 4.1. Multiscale finite element model of cable-stayed bridge A single pylon cable-stayed bridge is taken as an example in this paper to build the multiscale finite element model of the bridge via ANSYS software in line with the aforesaid methods. The span of this bridge is 150 m+150 m, the girder is orthotropic steel bridge deck slab, the main pylon is gate-type pylon with uniform sections; the 24 pairs of inhaul cables are distributed symmetrically. Seen from existing survey results of the long-span bridge and the dynamic analysis on the model, the midspan part is the vulnerable position of the bridge structure. Therefore, this paper selects the middle part of the left span of the cable-stayed bridge for sophisticated small scale modeling, with the length of 3 intervals between diaphragm plates taken in the longitudinal direction; large scale finite element models are built by means of fish bone model for other parts; the finite element model of the bridge is as shown in Fig. 4. The small scale section of orthotropic steel bridge deck slab shall be built by Shell63 shell element, other girders and the main pylon shall be built by Beam188 element, the inhaul cable shall be built by Link8 element; the nodes shall be coupled on the interface between girder models of beam element and shell element by way of building constraint equation. Fig. 4Multiscale finite element model of cable-stayed bridge 4.2. Analysis on vehicle-bridge coupled vibration Under the vehicle load effect, the force conditions of different parts of the bridge made of the orthotropic steel bridge deck slab are different; some parts are prone to causing stress concentration. Therefore, even under one force application, the stress amplitude it bears shall be much larger than that borne by other parts in the same cross section, hence it will see fatigue failure earlier. To accurately learn the stress amplitudes of the vulnerable parts of the bridge, this paper, based on the statistics of existing inspection data of steel bridge cracks^[20-21], selects the connection of longitudinal ribs and diaphragm plates from the shell element model of multiscale model to be the object area of this analysis, as shown in Fig. 5. Fig. 5Connection between A-longitudinal rib and diaphragm plate Analyzing this multiscale model of cable-stayed bridge by the aforesaid processes of vehicle-bridge coupled vibration can obtain stress time-history curve at each detailed fatigue part of the orthotropic steel bridge deck slab; it can be converted into equivalent stress amplitude by rain-flow counting method, thus guiding the anti-fatigue design for the bridge. This paper only illustrates A fatigue details. Point A is located on the cross-section, which is 75 m away from the main tower. To compare the influences of whether considering vehicle-bridge coupled vibration on stress history, the same bridge model is loaded by the aforesaid vehicle-bridge coupled vibration process and moving static load respectively. The stress histories at A under the two working conditions are as shown in Fig. 6 and Fig. 7. Pavement evenness is “general”, and the sample function of surface roughness at this moment is acquired based on the method for simulating pavement evenness. The 8 degrees of freedom vehicle model at fatigue standard travels across the bridge at the velocity of 80 km/h in the paper, the stress equation at A can be acquired. Fig. 6 shows the dynamic response curve at A in the structure when the vehicle passes the bridge at the premise of considering the effect of vehicle-bridge coupling. Fig. 7 shows the dynamic response curve at A in the structure under the condition that the vehicle is deemed as a moving static load to apply load on the structure without considering the effect of vehicle-bridge coupling. In Fig. 7, the vehicle load is placed on different positions of the bridge to apply the load, and the stress at A is acquired at this moment. Essentially, it’s a combination of many static force loads. Pavement unevenness is not involved in the static force loading. Therefore, the stress history curve is a smooth curve. The horizontal axis of Fig. 6 is the distance from the bridgehead of the vehicle load, and the vertical axis is the stress value of point A at this time. As can be seen form the figure, when the vehicle passes A, here the stress reaches a maximum value, this is consistent with actual situation. This sufficiently reflects the advantages of multiscale model; unlike large scale beam element model, it can accurately extract the stress history at different parts of the same cross section of the girder. Fig. 6 and Fig. 7 are analyzed based on the same multiscale model. The effect of vehicle-bridge coupling is considered in the former, while the vehicles travelling on the bridge are deemed as moving static load to apply load in the latter. A comparison between them shows that the amplitude of the stress history curve is caused by vehicle-bridge coupled vibration instead of the multiscale model of the bridge. However, there is some difference in term of calculation accuracy based on different bridge models. To compare the amplification effect of pavement evenness on the stresses at each part of the girder, the researchers carry out loading analysis on the vehicles on the pavement at five grades of “excellent”, “good”, “general”, “poor” and “very poor”, so as to obtain the impact coefficients under each road condition. See Fig. 8 for specific values. Fig. 6The dynamic response at A in the structure at the premise of considering the effect of vehicle-bridge coupling Fig. 7The dynamic response at A in the structure under static load Fig. 8Influence of pavement evenness on impact coefficient Fig. 9Influence of vehicle speed on impact coefficient According to Fig. 8, when the pavement evenness is “excellent”, “good” and “general”, its influence on the impact coefficient of the stress at A is not so large; when the pavement evenness is “poor” and “very poor”, it will cause the travelling vehicles to suffer drastically continuous bumping, thus making the impact coefficient of the girder stress soar. Therefore, considering from the perspective of extending the service life of the bridge, the pavement needs to be overhauled and maintained frequently. To compare the amplification effects of the travelling vehicles on the stresses at each part of the girder, make the loaded vehicles pass a bridge with “general” pavement evenness at the speed of 40 km/h, 60 km/h, 80 km/h, 100 km/h and 120 km/h respectively, obtain the impact coefficients of the stresses of the girder under different vehicle speeds. This paper only illustrates A; see Fig. 9 for the impact coefficients of the stresses of the girder at A under different vehicle speeds. According to Fig. 9, in the speed range of 40 km/h-100 km/h, the vehicle speed variation doesn’t affect the amplification effect of the stress at A so obviously; that is to say, the impact coefficient at A is not sensitive to the vehicle speed in this range; however, when the vehicle speed increases to over 100 km/h, the impact coefficient at A increases to a great extent. Therefore, considering from the perspective of extending the service life of the bridge, the vehicle speed shall be limited in a reasonable range in actual operation of the bridge. 5. Conclusions 1) Multiscale modeling method can significantly increase the calculation efficiency under the premise of ensuring calculation accuracy, and has good universality; it can well apply to dynamic structural analysis of long-span bridges. 2) Unlike conventional large scale beam element model, analysis on the vehicle-bridge coupled vibrations of long-span cable-stayed bridge based on multiscale model can extract the stress histories at each part of the same cross-section in a sophisticated small scale model in line with the researchers’ different demands, thus increasing the accuracy of the analysis results. The method for analyzing the vehicle-bridge coupled vibrations of long-span cable-stayed bridge based on multiscale model is more applicable to anti-fatigue design for each part of the bridges. 3) Serious degradation of pavement evenness will significantly amplify the impact coefficients of each part of the bridge, thus reducing the service life of the bridge. 4) The vehicle speed in certain range doesn’t have an obvious influence on the impact coefficients of the bride; however, when the vehicle speed increases to over 100km/h, the impact coefficient of the bridge will increase to a great extent. • Li Zhaoxia, Sun Zhenghua, Guo Li, Chen Hongtian, Yu Yang Concurrent multi-scale modeling of structures and damage analyses. Journal of Southeast University, Vol. 37, Issue 2, 2007, p. 251-260, (in Chinese). • Li Z. X., Zhou T. Q., Chan T. H. T., Yu Y. Multi-scale numerical analysis on dynamic response and local damage in long-span bridges. Engineer Structures, Vol. 29, Issue 7, 2007, p. 1507-1524. • Chen Z., Yue X. Numerical homogenization of well singularities in the flow transport through heterogeneous porous media. Multiscale Model Simulation, Vol. 1, Issue 2, 2003, p. 260-303. • Efendiev Y., Ginting V., Hou T. Y., et al. Accurate multiscale finite element methods for two-phase flow simulations. Journal of Computational Physics, Vol. 220, Issue 1, 2006, p. 155-174. • Murat S., Jag Mohan H. Dynamic analysis of buildings for earthquake resistant design. Canadian Journal of Civil Engineering, Vol. 30, 2003, p. 338-359. • Oskayc Fish J. Multi-scale modeling of fatigue for ductile materials. International Journal of Multi-Scale Computational Engineering, Vol. 2, Issue 3, 2004, p. 1-30. • Ladeveze P. A Multiscale computational approach for contact problems. Computer Methods in Applied Mechanics and Engineering, Vol. 191, Issue 43, 2002, p. 4869-4891. • Xia H., Xu Y. L., Chan T. H. T. Dynamic interaction of long suspension bridges with running trains. Journal of Sound and Vibration, Vol. 237, Issue 2, 2000, p. 263-280. • Xu Y. L., Guo W. H. Dynamic analysis of coupled road vehicle and cable-stayed bridge systems under turbulent wind. Engineering Structures – Journal, Vol. 25, Issue 4, 2003, p. 473-486. • Xu Y. L., Xia H., Yan Q. S. Dynamic response of suspension bridge to high wind and running train. Journal of Bridge Engineering, Vol. 8, Issue 1, 2003, p. 46-55. • Alvandi A., Cremona C. Assessment of vibration-based damage identification techniques. Journal of Sound and Vibration, Vol. 292, 2006, p. 179-202. • Cawley P., Adams R. D. The location of defects in structures from measurements of natural frequencies. Journal of Strain Analysis, Vol. 4, Issue 2, 1979, p. 49-57. • Chu J., Efendiev Y., Ginting V., Hou T. Y. Flow based oversampling technique for multiscale finite element methods. Advances in Water Resources, Vol. 31, Issue 4, 2008, p. 599-608. • Chen J. R., Cui J. Z. A multicale finite element method for elliptic problems with oscillatory coeffcients. Mathematics of Comptation, Vol. 50, Issue 1, 2004, p. 1-13. • Thomas Y. H., Wu X. H. A multiscale finite element method for elliptic problems in composite materials and porous media. Journal of Computational Physics, Vol. 134, 1997, p. 169-189. • Honda H., Kajikawa Y., Kobori T. Spectra of road surface roughness on bridges. Journal of structural division ASCE, Vol. 108, Issue ST9, 1982, p. 1956-1966. • Zhang Wei, Cai C. S. Fatigue reliability assessment for existing bridges considering vehicle speed and road surface conditions. Journal of Bridge Engineering, Vol. 17, Issue 3, 2012, p. 443-453. • Au F. T. K., Chen Y. S., Cheung Y. K. Effects of random road surface roughness and long-term deflection of prestressed concrete girder and cable-stayed bridges on impact due to moving vehicles. Computers and Structures, Vol. 79, 2001, p. 853-853. • Huang D. Z. Dynamic analysis of steel curved box girder bridges. Journal of Bridge Engineering, Vol. 6, Issue 6, 2001, p. 506-513. • Wouter De Corte, Philippe Van Bogaert An evaluation of the use of fracture mechanics for the design of orthotropic decks. Key Engineering Materials, Vol. 348, 2007, p. 285-288. • Choi Dong Ho, Choi Hang Yong, Chung Sang Hwan, et al. Mixed-mode fatigue crack growth in orthotropic steel decks. Key Engineering Materials, Vol. 321, 2006, p. 733-738. About this article 08 September 2014 19 November 2014 15 February 2015 Vibration in transportation engineering multiscale model long-span bridge vehicle-bridge coupled vibrations Copyright © 2015 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.extrica.com/article/15306","timestamp":"2024-11-03T15:56:42Z","content_type":"text/html","content_length":"129138","record_id":"<urn:uuid:bfe65f50-1e78-444b-aace-93f165514710>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00655.warc.gz"}
Similarity of triangles - Similarity of triangles - practice problems - last page Number of problems found: 152 We apologize, but in this category are not a lot of examples. Do you have homework that you need help solving? Ask a question , and we will try to solve it. See also our trigonometric triangle calculator
{"url":"https://www.hackmath.net/en/word-math-problems/similarity-of-triangles?page=8","timestamp":"2024-11-06T09:21:53Z","content_type":"text/html","content_length":"33033","record_id":"<urn:uuid:1fe4ffb8-4224-4f0d-bb80-e6ca4a413a0b>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00041.warc.gz"}
NIPS 2016 Mon Dec 5th through Sun the 11th, 2016 at Centre Convencions Internacional Barcelona Reviewer 1 This paper considers the worst-case adaptive optimization problem with budget constraint. The utility function is assumed to be monotone submodular and the constraint can be modular or satisfy the triangle inequality. The authors show that under the assumptions that the utility function is 1) pointwise submodular, 2) pointwise monotone and 3) the value of the function depends only on the items selected so far, and the cost function is modular then the combination of two simple greedy algorithms provided a bicriteria approximation (Theorem 4). They also have (in my opinion) generalized their results to the case where the cost satisfies a triangle inequality. Despite author's intention, I do not find Theorem 5 significant. Qualitative Assessment There are many references missing. The authors state "In contrast to previous works on adaptive optimization with budget constraint (both in the average and worst cases), we consider not only modular cost functions but also general, possibly non-modular,cost functions on sets of decisions." They should cite the relevant papers. Online Submodular Set Cover, Ranking, and Repeated Active Learning Active Learning and Submodular Functions (Guillory's Phd thesis which should definitely be cited) Near Optimal Bayesian Active Learning for Decision Making They consider only MONOTONE submodular functions. But there has been some recent work on the non-monotone case. I think it would be more interesting (and very plausible) to see whether their method generalizes to this setting as well. The corresponding reference that I am aware of (and I think the authors should cite) Nonmonotone adaptive submodular maximization Confidence in this Review 3-Expert (read the paper in detail, know the area, quite certain of my opinion) Reviewer 2 The authors analyzed the problem of worst-case adaptive optimization with budget constraint. Crucially, the allow the cost to be either modular or non-modular and the utility function to satisfy either the pointwise submodularity condition or the pointwise cost-sensitive submodularity condition. They then present a combined near-optimal policy in the more general case. Qualitative Assessment The work is of interest from a theoretical and practical point of view. In the experimental part the authors analyze the performance of 1) the cost-insensitive greedy policy (LC) and of 2) cost-average greedy policy (ALC) and of 3) their proposed budgeted least confidence (BLC). The paper would benefit from a stronger baseline to compare against and from reporting a significance notion (i.e. the standard deviation). As for the baseline, although not guaranteed to work near optimally in theory, one could just report the empirical performance of running the two algorithms for a fraction k of the budget and then choosing the best for the remaining part of the budget for various values of k. Confidence in this Review 1-Less confident (might not have understood significant parts) Reviewer 3 The paper generalizes adaptive submodular optimization to some cases where the cost is not modular. Qualitative Assessment Potential impact or usefulness: Generalizing adaptive submodular optimization (or even non-adaptive) for non-modular costs is an interesting problem. However, the requirement of cost-sensitive submodularity is pretty strict, which makes the results less impressive. The proofs for this limited setting are still novel and the techniques might lead to results for more general cost/utility functions. Technical quality: The proof techniques are interesting. The experiments show that BLC sometimes protects against the poor results obtained by one of LC and ALC (but not always -- Cost = R1 is pretty bad for BLC). The costs are very synthetic though, so it's not clear to me how much better BLC would be compared to LC and ALC under different costs. Novelty/originality: The concept of cost-sensitive submodularity is interesting, but it's just so strict that it seems like very few useful problems will have cost-sensitive submodularity without just having a modular cost. I think that this paper would be very strong if either of these can be shown: (1) the combined policy also works in some settings more general than cost-sensitive submodularity (2) for problems without cost-sensitive submodularity, there cannot exist reasonable (polynomial-time?) algorithms with approximation guarantees Clarity and presentation: Overall, the writing was easy to understand. I only have two small complaints. First, section 3.3 felt out of place. I think it would be more natural if it were right after section 2 (or even merged into section 2) or with the experiments. Second, the paper seemed to imply that the methods would be able to handle any cost function (whether modular or not), but in section 4, it became clear that there were still restrictions on the cost function. Confidence in this Review 2-Confident (read it all; understood it all reasonably well) Reviewer 4 The paper addresses the problems of adaptive optimization with budget constraint. While previous studies address this problem for the average case, this study focuses on the worst case. In contrast to previous works on adaptive optimization with budget constraint, the authors of this paper consider not only modular cost functions but also general, possibly non-modular, cost functions. The authors investigate the near-optimality of two greedy policies for the worst-caseadaptive optimization problem with budget constraint (a policy is near-optimal if its worst-case utility is within a constant factor of the optimal worst-case utility). The authors prove that in the general case, these two policies cannot achieve near-optimality with non-uniform modular costs, but the best between these two greedy policies always achieves near-optimality. Following that, the main result of the paper is derived: The two greedy policies into one greedy policy that is near-optimal with respect to the optimal worst-case policy that uses half of the budget. In addition, the authors propose a novel class of utility functions satisfying a property that is a generalization of cost-sensitive submodularity to the adaptive setting. For this new class of utilities, the authors prove similar near-optimality results for the greedy policies as in the case of modular costs. The proposed algorithm is compared experimentally to other algorithms, and results show that even though it does not yield the best result in almost any case (which is not surprising and supported by the theoretical results in the paper), it is a robust choice for an algorithm that achieves the second best results in all cases. Qualitative Assessment Definitions and mathematical proofs are concise and easy to understand. Related work is covered in a sufficient manner. The paper exhibits a novel mathematical analysis relevant to adaptive optimization with budget constraint, a problem with many practical implications. The setting in which the analysis is given is wider compared to previous papers on the subject. Two greedy algorithms are analyzed, and despite negative results regarding their near-optimality, results (proofs) show that a combination of the two has interesting properties and can be sound and useful in some cases. In addition, a subset of utility functions under which strong positive results can be proved is correctly identified. The proposed algorithm is derived naturally from the theoretical results, and experiment results support the theoretical results. On the down side, the proposed algorithm does not perform better than other algorithms (but also never performs catastrophically bad), and was not examined in many different experiment settings. Confidence in this Review 1-Less confident (might not have understood significant parts) Reviewer 5 This paper discussed the pointwise submodular maximization problem with non-uniform costs. In previous work, only the case of uniform costs is considered. This paper treats both cases of linear and nonlinear cost function. For the nonlinear case, the authors propose a new condition, cost-sensitive pointwise submodularity. The proposed algorithm that can be applied to both cases achieves (1-1/e) /2-approximation in comparison to the optimal policy with a half budget. The authors conducted experiments about pool-based active learning on the benchmark datasets, and show their proposed method is better than either the greedy policy or the cost-sensitive greedy policy. Qualitative Assessment This paper would offer good contributions to research on pointwise submodularity, but it lacks extensive analyses in some parts such as the following: - The authors should refer to existing work on the non-adaptive submodular maximization subject to a knapsack constraint, especially the following Sviridenko's paper. Maxim Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters 32(1): 41-43, 2004. In this paper, it was showed that we can achieve (1-1/e)-approximation with the partial enumeration technique. The authors should discuss whether it is possible to extend this approach to the adaptive setting. - Line 337: The sentence contains an error. In Khuller-Moss-Naor's paper, they proposed both (1-1/\sqrt{e}) -approximation and (1-1/e)-approximation algorithms, but not (1-1/e)/2-approximation algorithm. Is it possible to extend their results to the adaptive setting? The authors should investigate this approach. - This paper studies on the cases of ``the cost function suffices the triangle inequality'', i.e., it holds c(X) + c(Y) \ge c(X \cup Y) for all X and Y. This condition is called subadditiveness, and there are several work studying the submodular maximization with a subadditive cost function such as Soma-Yoshida, NIPS2015. The authors should compare their results with these existing work on subadditive costs. - In the experiments, the authors claim that the proposed method is stable, i.e., better than the worse one among the greedy policy and the cost-sensitive greedy policy. It is true but for some datasets the proposed method is much worse than the better one of them, and seems not so practical. From the above reasons, this paper does not achieve the standard quality of NIPS in both theoretical and practical aspects. Confidence in this Review 2-Confident (read it all; understood it all reasonably well) Reviewer 6 The paper studies adaptive optimization when in the case of modular and sub-modular cost and utility functions. Theoretical results on non-optimality of 2 greedy algorithms are presented. Then, it is shown that a combination of these 2 is near optimal under certain assumptions. Some experiments are provided. Qualitative Assessment This is a promising paper that deals with a relevant problem. In the introduction, although the sensor running example is provided, it would be nice to illustrate cost/utility in context/with examples earlier and with more clarity. For instance, these "hooks" are provided later in the paper, in the applications section -- I would put some of them in the introduction. The combined policy, is not provided in pseusocode. It would be nice to have it in pseudocode, since it is the algorithm that the paper proposes. There is plenty of room for improvement in the experiments. For instance, an experiment illustrating benefits in the "sensor" example would make the paper stronger. It also seems that an experiment in a real active learning experiment for text classification (say, using with real human annotators) could be relevant. Finally, I would highlight/give more details on the benefits/implications of using BLC further in the introduction and conclusion. Confidence in this Review 2-Confident (read it all; understood it all reasonably well)
{"url":"https://papers.nips.cc/paper_files/paper/2016/file/9fe8593a8a330607d76796b35c64c600-Reviews.html","timestamp":"2024-11-12T22:41:29Z","content_type":"text/html","content_length":"13499","record_id":"<urn:uuid:cb79045b-d539-4a5f-aae7-2756ffbe0f7c>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00551.warc.gz"}
Rectangle Elimination The Rectangle Elimination strategy can eliminate a single candidate in the corner of a rectangle. Rectangle Elimination was introduced on Andrew Stuart's Sudoku Wiki with credit to Ken Reek. To use Rectangle Elimination, you must first find a row or column that contains exactly two of a given candidate, and the two cells are not in the same box. If the cells that you found are in the same row, then look in their columns for a cell that isn't in the same box and that has the same candidate. Similarly, if they are in the same column, then look in their rows for a cell that isn't in the same box and that has the same candidate. When you find such a cell, treat it as the third corner of a rectangle. Each of the three corners needs to be in a different box. Now check what would happen in the box that contains the fourth corner if the other corners that it can see were both set to that candidate. If none of that candidate would remain anywhere in that box, you have found a contradiction. You can't remove the candidate from the row or column where it appears exactly twice, but you can remove it from the rectangle corner cell in the column or row where it appears more than twice.
{"url":"https://www.thonky.com/sudoku/rectangle-elimination","timestamp":"2024-11-01T21:09:26Z","content_type":"text/html","content_length":"41687","record_id":"<urn:uuid:a6c39209-dc5c-4258-8551-c9c75f5375d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00655.warc.gz"}
from Euclid One of the oldest and most complete diagrams from Euclid's Elements of Geometry is a fragment of papyrus found among the remarkable rubbish piles of Oxyrhynchus in 1896-97 by the renowned expedition of B. P. Grenfell and A. S. Hunt. It is now located at the University of Pennsylvania. The diagram accompanies Proposition 5 of Book II of the Elements, and along with other results in Book II it can be interpreted in modern terms as a geometric formulation of an algebraic identity - in this case, that ab + (a-b)^2/4 = (a+b)^2/4 (although the relationship between Euclid's propositions and algebra, which he did not possess, is controversial). Click on the image to see one at much higher resolution. What exactly are we looking at? This is a fragment of what is likely part of a larger papyrus roll from the early years of the current era. It was dated by its original finders to around 300 A.D., but a more recent judgment by Eric Turner places it between 75-125 A.D. It was dug up in 1896-7 from rubbish piles of the ancient town of Oxyrhynchus, near the present village of Behnesa (about 110 miles up the Nile from Cairo and 10 miles west of it), during one of several expeditions financed by the Egypt Exploration Fund (later renamed as the Egypt Exploration Society) and led by B. P. Grenfell and A. S. Hunt of Oxford University. Oxyrhynchus at that time was populated by Greek colonists, a remnant of the conquest in about 330 B. C. by Alexander the Great. It is commonly believed that Euclid himself lived and taught in Alexandria around 300 B.C., but what we know for certain about his life is close to nil. The fragment contains the statement, in Greek, of Proposition 5 from Book II of Euclid's Elements. At the very top of the fragment is a small trace of what seems to be the statement of Proposition II.4. No part of the proof of either Proposition is present. The handwriting is in capitals. Words are not separated from each other, and several words are broken in the middle at line ends. There are no diacritical marks. All of this was normal practice in Greek manuscripts of the period. The papyrus is of lower quality than many other texts from the same period. The material itself is rough, the handwriting is not of professional scribal quality, the diagram has been drawn over, and in addition it has no labels to match the reasoning in Euclid's proof. For these reasons it has been conjectured (by David Fowler) that it was written by someone for private use. Some artefacts of the process of papyrus manufacture are visible in the image - at the upper left you can detect how the sheet of papyrus was made by laying down strips from the stalk of the plant in orthogonal directions, and the vertical line in the centre shows that it was formed by gluing together two separate strips. Normally a full roll was made up by gluing pieces together in this way, and might be about 30 feet long. The ink is a brownish colour, indicating that it was made from nut gall, the common source of ink for manuscripts for several hundred years. What does the text say? Here is the complete original report of Grenfell & Hunt: The modern Greek edition of J. L. Heiberg renders II.5 as It reads in English like this (from the classic translation of T. L. Heath): If a straight line be cut into equal and unequal segments, the rectangle contained by the unequal segments of the whole together with the square on the straight line between the points of section is equal to the square on the half. For more information: • The Thirteen Books of Euclid's Elements, translated from the text of Heiberg, with introduction and commentary by Thomas L. Heath. Available from Dover in three volumes. • David Joyce's on-line edition of Heath's translation The entire Elements can be found there, with interactive figures. • The Perseus project This has all of Heath's text, including a version in Greek, but in a less readable format than Joyce's, and also Heath's comments on the significance of the Proposition. • The history of Euclid's text. Again, the Perseus project. You can also find here a brief mention of the fragment itself in Heath's discussion. • The original report of Grenfell and Hunt. This is a facsimile of the report in volume I of The Oxyrhynchus Papyri, as well as their transcription into more readable Greek of the exact text. By comparing this transcription with the excerpt from Heiberg's edition above, you can get a good idea of the style in which the papyrus was written - without word breaks, and with broken words, for example. The Proposition might be made clearer by these pictures, from which you can also see the relationship with the algebraic identity ab + (a-b)^2/4 = (a+b)^2/4. How old is this fragment? Grenfell & Hunt said of it From the character of its handwriting, which is a sloping rather irregular informal uncial, this papyrus may be assigned to the latter part of the third or the beginning of the fourth century but after an examination done at the request of David Fowler, the eminent papyrologist Eric Turner wrote I should date it confidently to the end of century i AD/early century ii, say AD 75-125. ... Note that this text was one of the first finds of Grenfell & Hunt at Behnesa (Oxyrhynchus). They published in a great hurry, and no doubt included this piece to illustrate the wide range of subjects on which papyri could throw light. Their papyrological (especially dating) framework was still to be worked out. By 1902 I do not think the dating offered would have been "third or fourth century." More of Turner's report is included in The Mathematics of Plato's Academy. Where is the fragment now? Although Oxford was the major sponsor of the expedition of Grenfell and Hunt, the University of Pennsylvania had also contributed to its support, and for this reason the University acquired the fragment in 1901. It is now to be found in the storage vaults of the Museum of Archaelogy and Anthropology at the University of Pennsylvania. It is #E2748 in the Museum catalogue. What other really ancient items from Euclid's Elements are there still around? Not much from before the ninth century, mostly small fragments like this one. You can find a pretty complete list in Chapter 6 of David Fowler's book The Mathematics of Plato's Academy, which also has a collection of images. One of the two oldest complete manuscripts is the Greek manuscript D'Orville 301, dated 888 A.D, at the Bodleian Library of Oxford University. A single image from that manuscript is available on-line. The frontispiece of T. L. Heath's translation is from this manuscript, and The Mathematics of Plato's Academy contains an image of two pages concerned with II.5. Incidentally, the manuscript is not normally accessible even to authorized readers at the Library, presumably on account of its fragility, but a micro-film of the full manuscript is available for examination. It is not really of very high quality, I am afraid . (It would be great if the Bodleian would have the whole book photographed and made available on the Internet, wouldn't it?) The other of the two is at the Vatican. David Fowler estimates that less than 1% of Euclid's Greek text is available from any source earlier than 888. What's the 29 doing there? Presumably written there by Grenfell or Hunt. This is referred to as Oxyrhynchus I.29 because it is number 29 among the published papyri. More information - on the Internet More information - texts • The Oxyrhynchus Papyri The series is still being published by the Egypt Exploration Society (go further to /Publications/Graeco-Roman Memoirs), and early volumes are still available for purchase! A related publication is the book edited by Alexander Jones (Astronomical Papyri from Oxyrhynchus, published by the American Philosophical Society). • Leo Deuel, Testaments of Time - the search for lost manuscripts and records, A. Knopf, 1965. Chapters 5-10 present a good popular account of the discovery and importance of Egyptian papyri, and Chapters 8-9 discuss in particular the remarkable Oxyrhynchus expeditions of B. P. Grenfell and A. S. Hunt. This was reprinted by Penguin in 1970. • David Fowler, The Mathematics of Plato's Academy (second edition). The Clarendon Press, Oxford, 1999. Chapter 6 is a fine introduction to papyri in general, along with comments on MS D'Orville 301. This book contains other photographs of early mathematical texts. • Arthur S. Hunt, Papyri and papyrology, Journal of Egyptian Archaelogy 1, 1914, pp. 81-92. A useful survey article by one of the protagonists in our story. • Alfred Lucas, Ancient Egyptian materials and industries, E. Arnold, 1962. • Otto Neugebauer, The Exact Sciences in Antiquity, Princeton University Press, 1952. A short and classic introduction to ancient science, particularly mathematics and astronomy. • Paul Nicholson and Ian Shaw, Ancient Egyptian materials and technology, Cambridge University Press, 2000. • Eric G. Turner, Greek Papyri: an introduction. The Clarendon Press, Oxford, 1968. A very readable and authoritative book. Jennifer Wegner, one of the keepers of the Egyptian section at the Museum of Archaelogy and Anthropology at the University of Pennsylvania, made the fragment available to me and gave me assistance in photography. The Museum was extremely generous in letting me make and publish the photographs. The Ha, system administrator in the Department of Mathematics at the University of British Columbia, helped me with image manipulation. Peter Brown, of the School of Mathematics at the University of New South Wales, helped me with Greek typography in TEX. David Fowler has helped me enormously. He has suggested extensive corrections to my original text, and made available to me a copy of a letter from Eric Turner to him. Of course he should not be held accountable for errors. About the images The photographs were taken in the vaults of the museum at the University of Pennsylvania with a Nikon CoolPix 990 (a digital camera), using a light stand and incandescent lights at the Museum, along with a Benbo tripod for close work.
{"url":"https://personal.math.ubc.ca/~cass/Euclid/papyrus/papyrus.html","timestamp":"2024-11-05T09:08:21Z","content_type":"text/html","content_length":"14962","record_id":"<urn:uuid:822ec40e-9a3c-4e40-8419-84d68d0c5468>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00495.warc.gz"}
Diophantine geometry Diophantine geometry : an introduction / Marc Hindry, Joseph H. Silverman Type de document : MonographieCollection : Graduate texts in mathematics, 201Langue : anglais.Pays: Allemagne.Éditeur : Berlin : Springer, 2000Description : 1 vol. (XIII- 558 p.) : fig. ; 24 cmISBN: 9780387989815.ISSN: 0072-5285.Bibliographie : Bibliogr. p. [504]-519. Liste des notations. Index.Sujet MSC : 11Gxx, Number theory - Arithmetic algebraic geometry (Diophantine geometry) 11G10, Arithmetic algebraic geometry (Diophantine geometry), Abelian varieties of dimension >1 11J68, Diophantine approximation, transcendental number theory, Approximation to algebraic numbers 11G30, Arithmetic algebraic geometry (Diophantine geometry), Curves of arbitrary genus or genus ≠1 over global fields 14Gxx, Algebraic geometry - Arithmetic problems. Diophantine geometryEn-ligne : Springerlink | Zentralblatt | MathSciNet Item type Current library Call number Status Date due Barcode CMI Salle 1 11 HIN (Browse shelf(Opens below)) Available 04544-01 In 1922 Mordell conjectured that every algebraic curve of genus ≥2 has at most finitely many rational points. This conjecture was proved by Faltings in 1983. In 1991 Vojta gave a completely different proof, based on diophantine approximation. Vojta's proof was then simplified by Bombieri. In the textbook under review, the authors work towards Bombieri's proof, giving the reader the necessary background. Unlike several other textbooks in this field, the prerequisites are quite modest, so the book is very useful for instance for a graduate course on diophantine geometry. Each chapter goes along with many exercises. (Zentralblatt) Bibliogr. p. [504]-519. Liste des notations. Index There are no comments on this title.
{"url":"https://catalogue.i2m.univ-amu.fr/bib/11698","timestamp":"2024-11-04T05:18:56Z","content_type":"text/html","content_length":"65907","record_id":"<urn:uuid:d25edf3b-e120-4ba8-8d7d-f0fd7a1c37a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00375.warc.gz"}
Section: New Results Image assimilation Sequences of images, such as satellite acquisitions, display structures evolving in time. This information is recognized of major interest by forecasters (Meteorologists, oceanographers, etc) in order to improve the information provided by numerical models. However, these satellite images are mostly assimilated in geophysical models on a point-wise basis, discarding the space-time coherence visualized by the evolution of structures such as clouds. Assimilating in an optimal way image data is of major interest and this issue should be considered in two ways: • from the model's viewpoint, the problem is to control the location of structures using the observations, Divergence-free motion estimation Participants : Dominique Béréziat [UPMC/LIP6] , Isabelle Herlin, Nicolas Mercier, Sergiy Zhuk. This research addresses the issue of divergence-free motion estimation on an image sequence, acquired over a given temporal window. Unlike most state-of-the-art technics, which constrain the divergence to be small thanks to Tikhonov regularisation terms, a method that imposes a null value of divergence of the estimated motion is defined. Motion is characterized by its vorticity value and assumed to satisfy the Lagragian constancy hypothesis. An image model is then defined: the state vector includes the vorticity, whose evolution equation is derived from that of motion, and a pseudo-image that is transported by motion. An image assimilation method, based on the 4D-Var technics, is defined and developed that estimates motion as a compromise between the evolution equations of vorticity and pseudo-image and the observed sequence of images: the pseudo-images have to be similar to the acquisitions. As the evolution equations of vorticity and pseudo-image involve the motion value, the motion field has to be retrieved at each time step of the studied temporal window. An algebraic method, based on the projection of vorticity on a subspace of eigenvectors of the Laplace operator, is defined in order to allow Dirichlet boundary conditions for the vorticity field. The divergence-free motion estimation method is tested and quantified on synthetic data. This shows that it computes a quasi-exact solution and outperforms the state-of-the-art methods that were applied on the same data. The method is also applied on Sea Surface Temperature (SST) images acquired over Black Sea by NOAA-AVHRR sensors. The divergence-free assumption is roughly valid on these acquisitions, due to the small values of vertical velocity at the surface. Fig. 5 displays data and results. As no ground truth of motion is available, the method is quantified by the value of correlation between the pseudo-images and the real acquisitions. Again, the method provides the best result compared to other state-of-the-art algorithms. Improvement of motion estimation by assessing errors on the dynamics Participants : Dominique Béréziat [UPMC/LIP6] , Isabelle Herlin, Nicolas Mercier. Data assimilation technics are used to retrieve motion from image sequences. These methods require a model of the underlying dynamics, displayed by the evolution of image data. In order to quantify the approximation linked to the chosen dynamic model, we consider adding a model error term in the evolution equation of motion and design a weak formulation of 4D-Var data assimilation. The cost function to be minimized simultaneously depends on the initial motion field, at the begining of the studied temporal window, and on the error value at each time step. The result allows to assess the model error and analyze its impact on motion estimation. This error assessment method is evaluated and quantified on twin experiments, as no ground truth would be available for real image data. Fig. 6 shows four frames of a series of observations obtained by integrating the evolution model from an initial condition on image and velocity field (the ground truth ${𝐰}_{\text{ref}}\left(0\right)$ displayed on the left of Fig. 7 ). An error value is added at each time step on the motion value, when integrating the simulation model. This error is a constant bias. We performed two data assimilation experiments. The first one considers the evolution model as perfect, with no error in the evolution equation. It is denoted PM (for Perfect Model). The second one, denoted IM (for Imperfect Model) involves an error in the motion evolution equation. In fig.7 are displayed the motion fields retrieved by PM and IM at the beginning of the temporal window. Figure 7. Comparaison estimations with ground truth. (a) ${𝐰}_{\text{ref}}\left(0\right)$ (b) ${𝐰}^{\text{PM}}\left(0\right)$ (c) ${𝐰}^{\text{IM}}\left(0\right)$ As it can be seen, IM computes a correct velocity field while PM completely fails. The results on this error assessment method are still preliminary. Perspectives are considered in order to correctly retrieve the error on dynamics by constraining its shape. An important application is, for instance, the detection of dynamics changes on long temporal sequences. Nonlinear Observation Equation For Motion Estimation Participants : Dominique Béréziat [UPMC/LIP6] , Isabelle Herlin. In the image processing literature, the optical flow equation is usually chosen to assess motion from an image sequence. However, it corresponds to an approximation that is no more valid in case of large displacements. We evaluate the improvements obtained when using the non linear transport equation of the image brightness by the velocity field. A 4D-Var data assimilation method is designed that simultaneously solves the evolution equation and the observation equation, in its non linear and linearized form. The comparison of results obtained with both observation equations is quantified on synthetic data and discussed on oceanographic Sea Surface Temperature (SST) images. We show that the non linear model outperforms the linear one, which underestimates the motion norm. Fig.8 illustrates this on SST images (motion vectors are displayed by arrows). The aim of this research is to achieve a correct estimation of motion when the object displacement is greater than its size. In this case, coarse-to-fine incremental methods as well as the non linear data assimilation method fail to retrieve a correct value. The perspective is then to include, in the state vector, a variable describing the trajectory of pixels. The observation operator will then measure the effective displacement of pixels, according to their trajectories, and allow a better estimation of motion value. Recovering missing data on images Participants : Dominique Béréziat [UPMC/LIP6] , Isabelle Herlin, Nicolas Mercier. A data assimilation method was designed to recover missing data and reduce noise on satellite acquisition. The state vector includes motion and image fields. Its evolution equation is based on assumptions on the underlying dynamics displayed by the sequence of images and considers the passive transport of images by the velocity field. The observation equation compares the image component of the state vector and the real observations. Missing and noisy data regions are characterized by a gaussian observation error, whose covariance matrix $R$ has an approximately infinitesimal inverse. The noise recovering method computes a solution of the state vector that is a compromise between the evolution equation and the observation equation. The image component of the solution satisfies the assumptions on the dynamics and is close to the real acquisition according to the covariance matrix $R$. This image component provides the reconstruction of the noisy acquisitions. The recovering method was applied on synthetically noised SST images in order to quantify the quality of the recovering (see Fig.9 ). Figure 9. Recovering of noisy data. (a) Noisy image (b) Our approach (c) Bertalmio et al (d) Tschumperlé et al The method is a promising alternative to those such as space-time interpolation. In the experiments, the Lagrangian constancy of the state vector is used as evolution equation. The perspectives concern the use of more advanced dynamic equations, as for instance the shallow water equations that link the motion field to the thickness of the ocean surface layer, and improved modeling of illumination changes over the sequence, due to various order acquisition times. Validation of velocity estimated with image assimilation Participants : Isabelle Herlin, Etienne Huot, Gennady Korotaev [Marine Hydrophysical Institute, Ukraine] , Evgeny Plotnikov [Marine Hydrophysical Institute, Ukraine] . This study is achieved in collaboration with the Marine Hydrophysical Institute (MHI) of Sevastopol. The aim is to estimate and further validate the estimation of Black Sea surface velocity from sequences of satellite images in order to allow an optimal assimilation of these pseudo-observations in 3D ocean circulation models. Several Image Models were designed that express the dynamics of velocity and the temporal evolution of image data. An image assimilation method was developed based on the 4D-Var formalism and estimates motion as a compromise between the Image Model, the image acquisitions and regularity heuristics on the velocity field. Two Image Models were qualitatively and quantitatively compared: the Stationary Image Model (SIM) based on the heuristics of stationary motion, which is valid at short temporal scale, and the Shallow Water Image Model (SWIM), based on the shallow-water equations. The comparison between SIM and SWIM results confirms that SIM provides correct results only on short temporal windows, while SWIM allows to process longer image sequences. The validation of motion estimation by image assimilation requires additional observation data, as no measure of motion is available from satellite sensors. Sea Level Anomaly, measured by satellite altimeters, is then compared to the thickness of the surface layer as estimated by the Shallow Water Image Model. This comparison shows a good adequacy of shape and values [30] , [32] . As the velocity field is strongly related to this thickness value from the physical evolution laws, these results further validate the estimation of the velocity and the image assimilation approach. Velocity estimation under the geostrophic equilibrium assumption Participants : Isabelle Herlin, Etienne Huot. The surface motion of the Black Sea approximately verifies the geostrophic equilibrium property. As the surface velocity can be directly derived from the surface layer thickness $h$, this allows to simplify the shallow-water equations and the dynamics is expressed by the evolution of $h$. The Geostrophic Shallow Water Image Model (GSWIM) is then designed based on the evolution of $h$ and the image data. A 4D-Var assimilation method was designed and developed in order to estimate $h$ from a sequence of satellite images. The motion field is then computed from the estimation of $h$. This method was first tested and quantified on twin experiments with satellite data. Figure 10 simultaneously displays the result of the velocity estimation by GSWIM and the ground truth. Figure 10. Left: first image of the sequence, Center: motion estimated by GSWIM, Right: ground truth. Coupling models for motion estimation on long temporal image sequences Participants : Karim Drifi, Isabelle Herlin. This study concerns the estimation of motion fields from satellite images on long temporal sequences. The huge computational cost and memory required by data assimilation methods on the pixel grid makes impossible to use these techniques on long temporal intervals. For a given dynamic model (named full model on the pixel grid), the Galerkin projection on a subspace provides a reduced model that allows image assimilation at low cost. The definition of this reduced model however requires defining the optimal subspace of motion. A sliding windows method is thus designed: Figure 11 summarize the described methodology. Twin experiments were designed to quantify the results of this sliding windows method. Results on motion estimation are given in Figure 12 and compared with the ground truth. The NRMSE (in percentage) ranges from 1.1 to 4.0% from the first to the sixth window. On the first window, 3 hours are required to estimate the motion fields with the full model. For the next 5 windows, less than 1 minute is required to compute motion.
{"url":"https://radar.inria.fr/report/2011/clime/uid64.html","timestamp":"2024-11-04T09:00:45Z","content_type":"text/html","content_length":"64976","record_id":"<urn:uuid:782dd513-0b97-4eb5-a5fb-314dad0780ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00706.warc.gz"}
Decimal Place Value Chart |Tenths Place |Hundredths Place |Thousandths Decimal Place Value Chart Decimal place value chart are discussed here: We know that in a place value chart, each place has a value ten times the value of the next place on its right. For example, the value of hundreds place is ten times of the value of tens place, the value of thousands place is ten times of the value of hundreds place etc. In other words, the value of a place is one-tenth (\(\frac{1}{10}\)) of the value of the next place on its left. For example, the value of hundreds place is one-tenth (\(\frac{1}{10}\)) of the value of thousands place, the value of tens place is one-tenth of the value of hundreds place, etc. The first place after the decimal is got by dividing the number by 10; it is called the tenths place. The second place after the decimal is got by dividing the number by 100; it is called the hundredths place. The third place after the decimal is got by dividing the number by 1000; it is called the thousandths place. Consider the place value of 7 in the following: 7598 ⟶ The place value of 7 is 7000; 5720 ⟶ The place value of 7 is 700. 4579 ⟶ The place value of 7 is 70 and 9467 ⟶ The place value of 7 is 7. We see that when the digit 1 moves one place to the right, its value becomes one-tenth (\(\frac{1}{10}\)) of its previous value; when it moves two places to the right, its value becomes one-hundredth (\(\frac{1}{100}\)) of its previous value and so on. If we wish to continue moving towards right, we shall have to extend the place-value chart beyond ones place by opening the places for tenths, hundredths, thousandths and so on. Decimal Place Value Chart: If we represent the number 54 + \(\frac{7}{10}\) + \(\frac{3}{100}\) in the place value chart, then 5 goes to tens place, 4 goes to ones place, 7 goes to tenths place and 3 goes to hundredths place as indicated below. The number shown in the table is written as 54.73 and is called a decimal fraction or simply decimal. It is read as fifty four point seven three. The dot (.) is called the decimal point, and it separates whole number and fractional parts. Similarly, the number 849 + \(\frac{3}{10}\) + \(\frac{1}{100}\) + \(\frac{5}{1000}\) is written as 849.315 and read as eight hundred forty nine point three one five. It can also be read as eight hundred forty nine and three hundred fifteen thousandths. Note that while reading a decimal number, its decimal part is read digit-wise. 4.2 ⟶ Four point two 43.64 ⟶ Forty three point six four 0.45 ⟶ Zero point four five 731.09 ⟶ Seven hundred thirty one point zero nine A decimal contains a whole number part and a decimal part. In 4.2, the whole number part is 4 and the decimal part is 2. In 43.64, the whole number part is 43 and the decimal part is 64. In 0.45, the whole number part is 0 and the decimal part is 45. In 731.09, the whole number part is 731 and the decimal part is 09 The decimal consisting of only one part, i.e., either decimal part or whole number part can be written by making use of zero in whole or decimal part. For Example: 37 can be written as 37.0, .49 can be written as 0.49, etc. Decimal Places: The number of digits in the decimal part of decimal gives the number of decimal places. For example, decimals 87.56 and 4.325 have two and three decimal places respectively. 1. Arrange the following decimals in the place value chart. 0.6; 7.4; 34.85; 258.407 The given decimals written in the place value chart are as follows: 2. Write the following decimals in words. (i) 6.2 (ii) 38.76 (iii) 419.317 (i) 6.2 is written as 6\(\frac{2}{10}\) and is read as 'six point two' or 'six and two tenths or six decimal two. (ii) 38.76 is written as 38\(\frac{76}{100}\) and is read as 'thirty-eight point seven six' or 'thirty-eight and seventy-six hundredths'. (iii) 419.317 is written as 419\(\frac{317}{1000}\) and is read as 'four hundred nineteen point three one seven' or 'four hundred nineteen and three hundred seventeen thousandths'. Solved Examples on Decimal Place Value Chart: 1. How to write the place value of the digit 45.123? The place of 3 in the decimal 45.123 is \(\frac{3}{1000}\). The place of 2 in the decimal 45.123 is \(\frac{2}{100}\) The place of 1 in the decimal 45.123 is \(\frac{1}{10}\) The place of 5 in the decimal 45.123 is 5 The place of 4 in the decimal 45.123 is 40 Let us consider the following examples. 2. Write the following numbers in the decimal place value chart. (i) 1.2 1.2 in the decimal place value chart. (ii) 12.05 12.05 in the decimal place value chart. (iii) 25.453 25.453 in the decimal place value chart. (iv) 490.7042 490.7042 in the decimal place value chart. (v) 6145.89 6145.89 in the decimal place value chart. (vi) 15671.9 15671.9 in the decimal place value chart. 3. Write the place value of digit 7 in each of the following decimals: (i) 513.073 The place of 7 in the decimal 513.073 is \(\frac{7}{100}\) or 7 hundredths. (ii) 71.234 The place of 7 in the decimal 71.234 is 70 or 7 tens. Practice Problems on Decimal Place Value Chart: 1. Identify the place value of the underlined digit in the given numbers. (i) 65.71 (ii) 308.88 (iii) 716.927 (iv) 8546.293 (v) 4665.705 (i) 1 hundredth (ii) 3 hundreds (iii) 9 tenths (iv) 3 thousandths (v) 4 thousands 2. Write in the place value chart. (i) 8.06 (ii) 25.5 (iii) 24.89 (iv) 137.905 3. Write the decimals shown in the following place value table: 3. (i) 65.9 (ii) 8.425 (iii) 623.05 (iv) 840.492 ● Decimal. Expanded form of Decimal Fractions. Changing Unlike to Like Decimal Fractions. Comparison of Decimal Fractions. Conversion of a Decimal Fraction into a Fractional Number. Conversion of Fractions to Decimals Numbers. Addition of Decimal Fractions. Problems on Addition of Decimal Fractions Subtraction of Decimal Fractions. Problems on Subtraction of Decimal Fractions Multiplication of a Decimal Numbers. Multiplication of a Decimal by a Decimal. Properties of Multiplication of Decimal Numbers. Problems on Multiplication of Decimal Fractions Division of a Decimal by a Whole Number. Division of Decimal Fractions by Multiples. Division of a Decimal by a Decimal. Division of a whole number by a Decimal. Properties of Division of Decimal Numbers Problems on Division of Decimal Fractions Conversion of fraction to Decimal Fraction. From Decimal Place Value Chart to HOME PAGE Didn't find what you were looking for? Or want to know more information about Math Only Math. Use this Google Search to find what you need. New! Comments Have your say about what you just read! Leave me a comment in the box below. Ask a Question or Answer a Question.
{"url":"https://www.math-only-math.com/decimal-place-value-chart.html","timestamp":"2024-11-13T14:47:12Z","content_type":"text/html","content_length":"80123","record_id":"<urn:uuid:1ad0a931-3e4c-48b0-a6a8-8d27a7495821>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00226.warc.gz"}
Complex multiplication Complex multiplication (CM) is a method which utilizes class field theory in order to generate curves with a prescribed order. Namely, if $Ds^2 = 4 q - t^2$ and $j$ is a root of the $D$-th Hilbert class polynomial modulo $q$ (which is a prime), then any curve with j-invariant $j$ (or its quadratic twist) will have order $q+1+t$ over $\mathbb{F}_q$. Given the j-invariant, such a curve can be easily constructed: for example, we can define it by the Weierstrass equation $y^2 = x^3 + 3 k c^2 x + 2 k c^3,$ where $k = j / (1728 - j)$ and $c \in \mathbb{F}_q$ is arbitrary. (Note that this does not work for the special cases $j=0$ and $j=1728$, which correspond to curves given by $y^2 = x^3 - 1$ and $y^2 = x^3 - x$, respectively.) The bottleneck is the Hilbert polynomial computation, which allows us to only use a small $D$ (currently up to around 44 bits). In particular, every curve generated by the CM method will necessarily have a small $D$ (called CM discriminant), which means its ring of endomorphisms can be efficiently constructed. Apart from a slight speed-up of scalar multiplication, it is not known whether this significantly impacts security, but such curves certainly cannot be considered random.
{"url":"https://neuromancer.sk/std/methods/cm/","timestamp":"2024-11-08T09:06:01Z","content_type":"text/html","content_length":"173253","record_id":"<urn:uuid:830aa374-f51a-4826-9201-08ce61c1ac8a>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00102.warc.gz"}
Standard Normal Distribution Z-Score Calculator Compute the z-score associated with a given cumulative probability level for the standard normal distribution. Knowing the z-score associated with a particular standard normal probability can be very useful for analytics studies that use z-scores for comparative or descriptive purposes. Please provide the necessary values, and then click 'Calculate'.
{"url":"https://analyticscalculators.com/calculator.aspx?id=19","timestamp":"2024-11-15T03:04:39Z","content_type":"text/html","content_length":"28956","record_id":"<urn:uuid:36311d0c-fb5c-435e-bf66-34083a42ffae>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00140.warc.gz"}
Lexicographic Path Induction Abstract. Programming languages theory is full of problems that reduce to proving the consistency of a logic, such as the normalization of typed lambda-calculi, the decidability of equality in type theory, equivalence testing of traces in security, etc. Although the principle of transfinite induction is routinely employed by logicians in proving such theorems, it is rarely used by programming languages researchers who often prefer alternatives such as proofs by logical relations and model theoretic constructions. In this paper we harness the well-foundedness of the lexicographic path ordering to derive an induction principle that combines the comfort of structural induction with the expressive strength of transfinite induction. Using lexicographic path induction, we give a consistency proof of Martin-L¨of’s intuitionistic theory of inductive definitions. The consistency of Heyting arithmetic follows directly, and weak normalization for G¨odel’s T follows indirectly; both have ...
{"url":"https://www.sciweavers.org/publications/lexicographic-path-induction","timestamp":"2024-11-06T12:09:46Z","content_type":"application/xhtml+xml","content_length":"38518","record_id":"<urn:uuid:0d70cd07-8be7-40c3-afa8-5c148bbe3d2f>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00844.warc.gz"}
Tail asymptotics for the supremum of a random walk when the mean is not finite Tail asymptotics for the supremum of a random walk when the mean is not finite We consider the sums S_n=\xi_1+\cdots+\xi_n of independent identically distributed random variables. We do not assume that the \xi's have a finite mean. Under subexponential type conditions on distribution of the summands, we find the asymptotics of the probability {\bf P}\{M>x\} as x\to\infty, provided that M=\sup\{S_n,\ n\ge1\} is a proper random... Show more
{"url":"https://synthical.com/article/Tail-asymptotics-for-the-supremum-of-a-random-walk-when-the-mean-is-not-finite-a6edcdf8-ffbf-11ed-9b54-72eb57fa10b3?","timestamp":"2024-11-12T16:17:38Z","content_type":"text/html","content_length":"62189","record_id":"<urn:uuid:ae2fc5be-a752-491e-b9a4-6b60425b6fbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00151.warc.gz"}
Evaluation metrics • Unit: bits or nats □ the information content in bits of an event with probability is given by □ nats = • Get but you input x_t=x_0 + very little noise and t=0 • Given true , □ Literally compute □ You have to weird tricks with the CDF when applying to images because we’re in continous space, basically doing P(X > x0 + 1/255) - P(X> x0 - 1/255) • Sometimes also defined as the sum of all part of Fréchet Inception Distance (FID) FID is a very popular metric for evaluating the quality of images generated by models like GANs and diffusion models. It measures the distance between the feature vectors of real and generated • Calculation: The FID is calculated by first using a feature extractor like the Inception network to transform both the set of real images and the set of generated images into a feature space. Then, it calculates the mean and covariance of these feature vectors for both real and generated images. The FID score is then the Fréchet distance (also known as the Wasserstein-2 distance) between these two Gaussian distributions: where μx,Σxμx,Σx are the mean and covariance of the real data features, and μg,Σgμg,Σg are those for the generated data. • Purpose: Lower FID scores indicate that the distributions of generated images are closer to the real images, suggesting better quality and diversity. Inception Score (IS) The Inception Score is another metric used primarily for images. It uses the Inception model to calculate the diversity and quality of generated images. • Calculation: IS uses the conditional label distribution p(y∣x)p(y∣x) predicted by the Inception model for each image generated by the diffusion model. The score is computed as: where is the Kullback-Leibler divergence between the conditional distribution and the marginal distribution , which is obtained by averaging the true over all generated images. • Purpose: A higher Inception Score indicates that the generated images are both meaningful (the model is confident about the labels) and diverse (different images have different predicted labels).
{"url":"https://notes.haroldbenoit.com/ml/generative-modeling-(diffusion)/training/evaluation-metrics","timestamp":"2024-11-09T19:52:20Z","content_type":"text/html","content_length":"95072","record_id":"<urn:uuid:70c93e82-8802-45db-b2ff-ef257e7aff5a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00055.warc.gz"}
Best Gradient Calculator - Find the Gradient | PineCalculator.com Introduction to Gradient Calculator: Gradient calculator is an online solver that is used to find the gradient of a given function with respect to its variable (x,y,z). It determines the vector function f(x1,x2,…,xn) that is solved by a partial derivatives method in two or three dimensional space. The gradient formula calculatoris a useful tool for students,teachers and researchers because It simplifies the process of finding the gradients, especially for complex functions with multiple variables as it is used in various fields like physics, economics, engineering and more. What is Gradient? Gradient gives the direction of the steepest ascent in the scalar field, however to identify points of equilibrium and understand areas of rapid change, we determine laplacian of the function. The gradient function ∇f(x) is a vector that represent gradient with a nebula symbol. The value of the gradient function may be negative or positive as per the given function.The negative value of a vector function shows the change in direction. Formula of Gradient: The gradient has a scalar-valued function f(x), where x=(x1,x2,…,xn) represents the vector of variables (x,y,z), the gradient function is represent as ∇f(x) in two or three dimension that is defined $$ \nabla f(x,y) \;=\; f_x(x,y) \hat{i} + f_y(x,y)\hat{j} $$ $$ \nabla f(x,y,z) \;=\; f_x(x,y,z) \hat{i} + f_y(x,y,z) \hat{j} + f_z(x,y,z) \hat{k} $$ How to Calculate the Gradient? To calculate the gradient function we use the first order of partial derivation on the given function with respect to each of its variables one by one. Let's look at the method of finding gradient of a function in steps which is: Step 1: First identify the given function f(x,y,z) that gradient you want to evaluate. Step 2: After identification, partial differential the function f(x,y,z) with respect to each of its variables (x,y,z). It must be remembered while doing partial derivation only the respect variable function is different and other function values are treated as constant. Step 3: Add the result of partial derivatives into a vector of scalar function into the gradient vector formula ∇f(x) that is given as: $$ \nabla f(x,y) \;=\; f_x(x,y) \hat{i} + f_y(x,y)\hat{j} $$ $$ \nabla f(x,y,z) \;=\; f_x(x,y,z) \hat{i} + f_y(x,y,z) \hat{j} + f_z(x,y,z) \hat{k} $$ Practical Example Of Gradient: An example of gradient function with solution given an idea about the procedure of gradient method for different dimensional space in vector analysis. Example: Find the gradient ▽f(x,y) of the following, $$ f(x,y) \;=\; x^2 - xy + 3y^2 $$ The given function is, $$ f(x,y) \;=\; x^2 - xy + 3y^2 $$ Differentiate the function with respect to x and y one by one, $$ f_x(x,y) \;=\; 2x - y $$ $$ f_y(x,y) \;=\; -x + 6y $$ The gradient of two dimensional space is, $$ \nabla f(x,y) \;=\; f_x(x,y) \hat{i} + f_y(x,y) \hat{j} $$ Put the above derivative function result in this gradient formula, $$ =\; (2x - y) \hat{i} + (-x + 6y) \hat{j} $$ The result of given function f(x,y) is, $$ =\; (2x - y)\hat{i} + (-x + 6y)\hat{j} $$ How to Use the Gradient of a Line Calculator? Gradient vector calculator has a simple design that helps you solve the given algebraic function of 2D,3D space. You just need to put your problem in it and follow some important instructions so that you get the result of the gradient function without any trouble. These instructions are: • Enter the scalar valued gradient function that you want to evaluate in the input fields. • Recheck your given input value to get the exact solution of the gradient function. • The Calculate button provides you with solutions to given gradient problem in two or may be three dimensional. • If you want to check the accuracy of our gradient function calculator, then you can use the load example and get an idea about its working procedure. • Click the “Recalculate” button for the evaluation of more examples of the gradient function with the solution. Outcome of Gradient Calculator: Gradient descent calculator provides you with a solution as per your input gradient function problem when you click on the calculate button. It may include as: In the Result Box: Click on the result button to get the solution of the gradient question. Steps Box: When you click on the steps option, you get the solution of gradient questions in a step-by-step process. Benefits of Using Gradient of a Function Calculator: The gradient vector calculator has multiple benefits that you get whenever you use it for the calculation of gradient function to get the solution of gradient questions. These benefits are: • Our gradient formula calculator is a trustworthy tool as it always provides you with accurate solutions of given gradient function problems. • It is an efficient tool that provides solutions in a stepwise process of the given gradient problems in a few seconds. • Gradient of a line calculator is a learning tool that helps you to easily grab the concept of gradient of a vector field very easily without going to any tutor. • Gradient calculator is a handy tool that solves gradient problems as per the given gradient problems and you do not need to give any type of external effort during calculation • The gradient function calculatoris a free tool that allows you to use it for the calculation of gradient problems without spending. • The gradient of line calculatoris an easy-to-use tool, anyone, even a beginner can easily use it for the solution of gradient function problems. • The gradient vector calculator can save you tons of time that you use in finding the solution to gradient questions as it provides you results immediately.
{"url":"https://pinecalculator.com/gradient-calculator","timestamp":"2024-11-12T06:08:28Z","content_type":"text/html","content_length":"47508","record_id":"<urn:uuid:9f0e0b9a-e1d9-4bb0-b0df-878e1adb87de>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00753.warc.gz"}
FAQ - ราคาขาย ดอทคอม Frequently Asked Questions Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut How To Buy A Product? When an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centurbut also survived but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the releas survived not raseth leap into electronic typesetting, remaining essentially unchanged. How To Get Product Support? When an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centurbut also survived but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the releas survived not raseth leap into electronic typesetting, remaining essentially unchanged. How To Buy A Product? When an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centurbut also survived but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the releas survived not raseth leap into electronic typesetting, remaining essentially unchanged. How To Get Product Support? When an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centurbut also survived but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the releas survived not raseth leap into electronic typesetting, remaining essentially unchanged. How To Buy A Product? When an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centurbut also survived but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the releas survived not raseth leap into electronic typesetting, remaining essentially unchanged. How To Get Product Support? When an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centurbut also survived but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the releas survived not raseth leap into electronic typesetting, remaining essentially unchanged. How To Buy A Product? When an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centurbut also survived but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the releas survived not raseth leap into electronic typesetting, remaining essentially unchanged. How To Get Product Support? When an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centurbut also survived but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the releas survived not raseth leap into electronic typesetting, remaining essentially unchanged.
{"url":"https://rakakai.com/faq/","timestamp":"2024-11-04T04:15:34Z","content_type":"text/html","content_length":"113926","record_id":"<urn:uuid:459438a8-1c59-49e5-99ff-bbaa49e09bd4>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00695.warc.gz"}
Chaos Theory and Randomness from Initial Conditions: A New Kind of Science | Online by Stephen Wolfram [Page 309] being able to see a sequence of explicit bumps in the road, the initial conditions for the position of a point in the kneading process are encoded in a more abstract form as a sequence of digits. But the crucial point is that the behavior we see will only ever be as random as the sequence of digits in the initial conditions. And in the first case on the facing page, it so happens that the sequence of digits for each of the initial points shown is indeed quite random, so the behavior we see is correspondingly random. But in the second case, the sequence of digits is regular, and so the behavior is correspondingly regular. Sensitive dependence on initial conditions thus does not in and of itself imply that a system will behave in a random way. Indeed, all it does is to cause digits which make an arbitrarily small contribution to the size of numbers in the initial conditions eventually to have a significant effect. But in order for the behavior of the system to be random, it is necessary in addition that the sequence of digits be random. And indeed, the whole idea of the mechanism for randomness in this section is precisely that any randomness we see must come from randomness in the initial conditions for the system we are looking at. It is then a separate question why there should be randomness in these initial conditions. And ultimately this question can only be answered by going outside of the system one is looking at, and studying whatever it was that set up its initial conditions. Accounts of chaos theory in recent years have, however, often introduced confusion about this point. For what has happened is that from an implicit assumption made in the mathematics of chaos theory, the conclusion has been drawn that random digit sequences should be almost inevitable among the numbers that occur in practice. The basis for this is the traditional mathematical idealization that the only relevant attribute of any number is its size. And as discussed on page 152, what this idealization suggests is that all numbers which are sufficiently close in size should somehow be equally common. And indeed if this were true, then it would imply that typical initial conditions would inevitably involve random digit But there is no particular reason to believe that an idealization which happens to be convenient for mathematical analysis should
{"url":"https://www.wolframscience.com/nks/p309--chaos-theory-and-randomness-from-initial-conditions/","timestamp":"2024-11-15T04:55:06Z","content_type":"text/html","content_length":"87578","record_id":"<urn:uuid:fdcabb81-1eb8-45b9-8bac-4a61e06842c2>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00003.warc.gz"}
One Easy Game To Practice Adding Tens On A Hundred Chart One Easy Game For Adding Tens On A Hundreds Chart Get 150+ Free Math Worksheets! Adding tens on a hundred chart is a great way for children to visually adding ten. This fun game with two levels gets children adding tens and ones, and then moves on to adding two-digit numbers using the 100’s grid and manipulatives. I’m starting to see our efforts produce some fruit. My first grader has really struggled with place value, and I have been diligent to provide numerous opportunities for him play with and explore place value. We have worked number puzzles on the hundreds chart. Played games with expanded form. Colored in pictures based on the worth of numbers. And we have moved up and down the hundreds chart based on clues from clip cards. It is working. He is beginning to understand what each number represents, but I know he still needs a little more exposure so today we will be playing a game using the 100’s chart. Hundreds charts are just one way to build number sense in our children. With just a little cutting this game is ready to be played. 1. First, print off enough game boards and record sheets (if using) for each player. 2. Then print off one copy of the clues. I would recommend using cards stock paper. 3. Next, cut out the clues. 4. Finally, place the record sheets in sheet protectors, and gather up base ten cut-outs and dry erase markers Carson Dellosa Base Ten Blocks Cut-Outs (120497)AmazonBasics Clear Sheet Protectors – Letter Size (100 Pack)Expo 80078 Low Odor Dry Erase Markers, Chisel Tip, Assorted Colors, 1 Pack with 8 Markers, Total of 8 Markers Adding Ten on a Hundred Chart – How to Play: There are two levels of this game, but whichever you choose your children will be working toward adding tens on a hundred chart. Level One: Level One simply has children adding tens and ones to the 100’s board until it is all filled up. To play this game, make sure that you only have the clues that add tens and ones. You don’t want the cards that have 54 on them. This is for the next level. First, students will draw a card and then make that number using the base ten block cut-outs (working on understanding the value of numbers). Then they put the base ten block cut-outs on the hundreds On the second time through, have the child predict what number the cutouts will cover up. For example, if a child has covered up 23 numbers and they draw a 30……ask them what they think 20 + 23 is. If they don’t know that is fine. But once they have added their two tens (we would slide down the ones and place the tens in-between), you can point out what they did. Look, we are at 43. Do you see how the one’s place did not change? We added two tens and we went down two rows. What do you think would happen if we added one more ten? It is through these questions, that children will begin to understand the 100’s chart and how to use it to be successful in adding. Level 2: This level has so many different types of learning in it, I won’t go into all the details. 1. We worked on getting to 10 and then adding on from there. 2. We practiced regrouping. 3. My second grader was working on his mental math. 4. They added and subtracted numbers in the tens. 5. The value of the tens place and the ones place was discussed. 6. And of course, we were adding tens on a hundred chart With this level, we used all the cards and the recording sheet. But it is still played like the first level. My boys took turns drawing cards and adding or subtracting that number from their chart. My oldest did the math in his head and then used the cut outs to check himself. My first grade used his cut-outs to figure out the answer, and would then record it. Here are a few things he worked on during this game: Getting to 10: In this example, my first grader needed to add 24. We discussed what he needed to do to get to 10. He looked at his board and reaized that 28 + 2 = 30. So he added two ones to get him to 30. From there, he was able to figure out that 30 + 12 =52. He added his cut-outs to show that he was correct and then recorded 28 + 24 = 52 on his record sheet. Subtracting With Regrouping: Another time he drew a subtraction card that required some regrouping. He had started with 52 and now was required to subtract 52. He knew he couldn’t do this because he didn’t have enough ones. So he took a 10 out and counted out ten ones. Then he took away six ones and three more tens to get the correct answer. 52 – 36 = 16 There was so much learning in this game. I watched as my kiddos built numbers, added tens and ones, and even subtracted. I know this background knowledge will provide useful as we focus more on adding and subtracting larger numbers!!! Educational Insights Foam Magnetic Numbers with BoardYEHAM Montessori Game Wooden Hundred Board Number Chart Number Grid Educational Game for KidsHundred Counting Chart Visual Learning Guides, Set/5 – 4-Panel, 11
{"url":"https://youvegotthismath.com/adding-tens-on-a-hundred-chart/","timestamp":"2024-11-05T02:40:05Z","content_type":"text/html","content_length":"353984","record_id":"<urn:uuid:06008842-23a4-4def-b2e9-dcbac2128ab9>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00087.warc.gz"}
Reductions are the Mathematical Equivalent of Hacks • Jeremy Kun Though I don’t remember who said it, I once heard a prominent CS researcher say the following: Reductions are the lifeblood of theoretical computer science. He was totally right. For those readers who don’t know, a reduction is a systematic way to transform instances of one problem into instances of another, so that solutions to the latter translate back to solutions to the former. Here’s a simple example. Say you want to generate a zero or a one at random, such you’re equally likely to get either outcome. You can reduce this problem to the problem of generating a zero or a one with some biased probability (that’s not completely biased). In other words, you can simulate a fair coin with a biased coin. How do you do it? You just flip your biased coin twice. If the outcome is “heads then tails,” you call the outcome of the fair coin “heads.” If the outcome is “tails then heads” you call the outcome of the fair coin “tails.” In any other event (TT or HH), you try again. This works because if you know you flipped one heads and one tails, then you’re just as likely to get the heads first as you are to get the tails first. If your coin is biased with probability p, these two events both happen with probability p(1-p). Even more fascinating is that you can go the other way too! Given a fair coin, you can simulate coins with any bias you want! This is a quantifiable way to say, “biased coins and unbiased coins are computationally equivalent.” Theoretical computer science is just bursting with these cool proofs, and they are the mathematical equivalent of a really neat “hack.” Why do I call it a hack? The word is primarily used for bad but effective solutions to programming problems (avoiding bugs without fixing their root cause and such). But another use of the word is to successfully use a thing for a purpose against or beyond its original intention. Like exploiting a buffer overflow to get access to sensitive data or using building lights to play Tetris, hacks have a certain unexpectedness about them. And most of all hacks are slick. Reductions come in many colors, the most common of which in computer science is the NP-hardness reduction. This is a reduction from a specific kind of problem (believed to be hard) to another problem while keeping the size “small,” by some measure. And the reason it’s important is because if you show a problem is NP-hard (has a reduction from a known NP-hard problem), then you are including it in a class of problems that are believed to have no efficient solution. So in this case a reduction is one way to measure the difficulty of a problem you’re studying. One really fun example is that the rule-sets of most classic Nintendo games are NP-hard. That is, you can design a level of Donkey Kong Country (or Super Mario Brothers, or Pokemon Red) so that getting to the end of the level would require one to solve a certain kind of logic problem. So if you could write a program to beat any Donkey Kong level (or even tell if there is a way to beat it), you could solve these hard logic problems. The key part of the reduction is that, given any such logic problem, you can design a level that does this. That is, there is an algorithm that transforms descriptions of these logic problems into Donkey Kong levels in an efficient manner. The levels are quite boring, to be sure, but that’s not the point. The point is that Donkey Kong is being used to encode arbitrary logic, and that’s a sweet hack if I’ve ever see one. If you enjoy the hacker mindset, and you want to get more into mathematics, you should seriously try reading about this stuff. You have to wade through a little bit of big-O notation and know that a Turing machine is roughly the same thing as a computer, but the ideas you unlock are really fun to think about. Here’s an article I wrote about P vs NP, actually implementing one of the famous reduction proofs in code. Even better, once you understand a few basic NP-hardness reductions, you can already start contributing to open research problems! For example, nobody knows if the problem of factoring integers is NP-hard. So if you could find a way to encode logic in a factoring problem the same way you can for a Donkey Kong level, you’d be pretty famous. On the easier side, it just so happens that potentially NP-hard problems show up a lot in research. Two of my current research projects are about problems which I suspect to be NP-hard, but for which I have no proof. And once you prove they’re NP-hard then you can start asking the obvious follow-ups: can I find good approximate solutions? How much easier do I need to make the problem before it becomes easy? The list goes on, giving more and more open questions and, the best part, more opportunities for great hacks.
{"url":"https://j2kun.svbtle.com/polynomial-time-reductions-are-hacks","timestamp":"2024-11-09T19:05:41Z","content_type":"text/html","content_length":"16174","record_id":"<urn:uuid:c86ad182-bdc3-40c6-9d52-ccfb3dd5c1fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00655.warc.gz"}
Compute octave-band spectrum Since R2024b The octaveSpectrumEstimator System object™ measures the spectrum of streaming audio data using octave or fractional-octave bands. To estimate the octave-band spectrum: 1. Create the octaveSpectrumEstimator object and set its properties. 2. Call the object with arguments, as if it were a function. To learn more about how System objects work, see What Are System Objects? o = octaveSpectrumEstimator creates an octave spectrum estimator System object with default property values. o = octaveSpectrumEstimator(fs) sets the SampleRate property to fs. o = octaveSpectrumEstimator(Name=Value) sets Properties using one or more name-value arguments. Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them. If a property is tunable, you can change its value at any time. For more information on changing property values, see System Design in MATLAB Using System Objects. SampleRate — Input sample rate (Hz) 44100 (default) | positive scalar Input sample rate in Hz, specified as a positive scalar. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 FrequencyWeighting — Frequency weighting "A-weighting" (default) | "C-weighting" | "Z-weighting" Frequency weighting applied to the input, specified as "A-weighting","C-weighting", or "Z-weighting", where "Z-weighting" corresponds to no weighting. The octaveSpectrumEstimator object implements the frequency weighting using weightingFilter. This property is tunable in simulation but not in generated code. Tunable: Yes Data Types: char | string TimeWeighting — Time weighting "fast" (default) | "slow" | "custom" Type of time-weighting applied to the spectrum estimates for smoothing, specified as "fast", "slow", or "custom". Setting TimeWeighting to "fast" corresponds to an attack time of 125 ms and a release time of 125 ms. Setting TimeWeighting to "slow" corresponds to an attack time of 1000 ms and a release time of 1000 ms. Set TimeWeighting to "custom" to use the AttackTime and ReleaseTime properties to define a custom time-weighting filter. Tunable: Yes Data Types: char | string AttackTime — Time-weighting attack time 0.125 (default) | positive scalar Time-weighting attack time in seconds, specified as a positive scalar. This argument applies only when you set TimeWeighting to "custom". Tunable: Yes Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 ReleaseTime — Time-weighting release time 0.125 (default) | positive scalar Time-weighting release time in seconds, specified as a positive scalar. This argument applies only when you set TimeWeighting to "custom". Tunable: Yes Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 Bandwidth — Bandwidth of octave-band filters "1/3 octave" (default) | "1 octave" | "2/3 octave" | "1/2 octave" | "1/6 octave" | "1/12 octave" | "1/24 octave" | "1/48 octave" | "full band" Bandwidth of the octave-band filters used to compute the spectrum, specified as "1 octave", "2/3 octave", "1/2 octave", "1/3 octave", "1/6 octave", "1/12 octave", "1/24 octave", "1/48 octave", or "full band". If you set Bandwidth to "full band", the object does not compute an octave filter bank and instead measures the power of the whole frequency band. This property is tunable in simulation but not in generated code. Tunable: Yes Data Types: char | string FrequencyRange — Frequency range of filter bank (Hz) [22 22050] (default) | two-element row vector of positive monotonically increasing values Frequency range of the octave filter bank in Hz, specified as a two-element row vector of positive monotonically increasing values. This property does not apply when you set Bandwidth to "full band". This property is tunable in simulation but not in generated code. Tunable: Yes Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 ReferenceFrequency — Reference frequency (Hz) 1000 (default) | positive integer scalar Reference frequency of the filter bank in Hz, specified as a positive integer scalar. The ANSI S1.11-2004 standard defines the center frequencies of the octave filters relative to the reference frequency [1]. This property does not apply when you set Bandwidth to "full band". This property is tunable in simulation but not in generated code. Tunable: Yes Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 FilterOrder — Order of octave filters 2 (default) | even integer Order of the octave filters, specified as an even integer. The filter order applies to each individual filter in the filter bank. This property does not apply when you set Bandwidth to "full band". This property is tunable in simulation but not in generated code. Tunable: Yes Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 OctaveRatioBase — Octave ratio base 10 (default) | 2 Octave ratio base, specified as 10 or 2. The octave ratio base determines the distribution of the center frequencies of the octave filters. The ANSI S1.11 standard recommends base 10. Base 2 is popular for music applications. Base 2 defines an octave as a factor of 2, and base 10 defines an octave as a factor of 10^0.3. This property does not apply when you set Bandwidth to "full band". This property is tunable in simulation but not in generated code. Tunable: Yes Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 ApplyLog — Apply logarithm false (default) | true Apply base 10 logarithm to the output, specified as true or false. Tunable: Yes Data Types: logical p = o(audioIn) returns the octave spectrum for the current input frame, audioIn. [p,cf] = o(audioIn) also returns the center frequencies of the octave filter bank. Input Arguments audioIn — Audio input signal column vector | matrix Audio input signal, specified as a column vector or matrix. If the input is a matrix, the object treats the columns as independent channels Output Arguments p — Octave spectrum column vector | matrix | 3-D array Octave spectrum, containing the measured power in each frequency band, returned as an N-by-B matrix or an N-by-B-by-C array where N is the length of the input audio frame, B is the number of bands in the octave filter bank, and C is the number of channels in the input. If you set Bandwidth to "full band", the object returns p as a column vector of length N or an N-by-C matrix. cf — Center frequencies row vector | scalar Center frequencies in Hz of the filter bank used to compute the octave spectrum, returned as a row vector. If you set Bandwidth to "full band", the object returns cf as a scalar equal to fs/4, where fs is the SampleRate. Object Functions To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax: Specific to octaveSpectrumEstimator Common to All System Objects step Run System object algorithm release Release resources and allow changes to System object property values and input characteristics reset Reset internal states of System object Measure Octave Spectrum of Pink Noise Create an octaveSpectrumEstimator object and set the Bandwidth to "1 octave". o = octaveSpectrumEstimator; o.Bandwidth = "1 octave"; Create a single-channel, one-second-long pink noise signal. x = pinknoise(o.SampleRate); Measure the octave spectrum of the the pink noise and call visualize on the object to visualize the measured spectrum. Measure Octave Spectrum of Streaming Audio Create a dsp.AudioFileReader to stream an audio file for processing. Create an audioDeviceWriter to play the audio as you stream it. reader = dsp.AudioFileReader("FunkyDrums-44p1-stereo-25secs.mp3"); player = audioDeviceWriter(SampleRate=reader.SampleRate); Create an octaveSpectrumEstimator object with the same sample rate as the file reader. o = octaveSpectrumEstimator(reader.SampleRate) o = octaveSpectrumEstimator with properties: SampleRate: 44100 Bandwidth: '1/3 octave' FrequencyWeighting: 'A-weighting' TimeWeighting: 'fast' FrequencyRange: [22 22050] FilterOrder: 2 ReferenceFrequency: 1000 OctaveRatioBase: 10 ApplyLog: false Call visualize on the object to open a visualizer to display the spectrum. In a streaming loop: 1. Read in a frame of audio data. 2. Compute the octave-band spectrum 3. Play the audio with the device writer. See the visualizer update with the measured spectrum as the audio plays. while ~isDone(reader) audioIn = reader(); p = o(audioIn); Tune Octave Spectrum Estimator Properties Create a dsp.AudioFileReader to stream an audio file for processing. Create an audioDeviceWriter to play the audio as you stream it. reader = dsp.AudioFileReader("FunkyDrums-44p1-stereo-25secs.mp3"); player = audioDeviceWriter(SampleRate=reader.SampleRate); Create an octaveSpectrumEstimator object with the same sample rate as the file reader and set the Bandwidth to 1 octave. o = octaveSpectrumEstimator(reader.SampleRate,Bandwidth="1 octave"); Call visualize on the object to open a visualizer to display the spectrum. Call parameterTuner to open a UI to tune parameters of the octaveSpectrumEstimator while streaming. In a streaming loop: 1. Read in a frame of audio data. 2. Compute the octave-band spectrum 3. Play the audio with the device writer. Adjust the properties in the parameterTuner UI while streaming. while ~isDone(reader) audioIn = reader(); p = o(audioIn); To compute the octave-band spectrum, the octaveSpectrumEstimator object first optionally applies frequency-weighted filtering on the input signal using weightingFilter based on the FrequencyWeighting property. The octaveSpectrumEstimator object then passes the signal through an octave filter bank constructed using octaveFilterBank with the Bandwidth, FrequencyRange, ReferenceFrequency, FilterOrder, and OctaveRatioBase properties. The octaveSpectrumEstimator object takes the square of the filter bank output to get the power in each band then applies an averaging, time-weighted filter to perform time smoothing on the power values. The time-weighted filter with attack time A and release time R is defined as: $y\left[n\right]=\left\{\begin{array}{cc}{\alpha }_{A}y\left[n-1\right]+\left(1-{\alpha }_{A}\right)x\left[n\right],& x\left[n\right]\le y\left[n-1\right]\\ {\alpha }_{R}y\left[n-1\right]+\left(1-{\ alpha }_{R}\right)x\left[n\right],& x\left[n\right]>y\left[n-1\right]\end{array}$ The attack time coefficient α[A] with sample rate Fs is: ${\alpha }_{A}=\mathrm{exp}\left(\frac{-1}{Fs×A}\right)$ The release time coefficient α[R] is: ${\alpha }_{R}=\mathrm{exp}\left(\frac{-1}{Fs×R}\right)$ [1] Acoustical Society of America. American National Standard Specification for Octave-Band and Fractional-Octave-Band Analog and Digital Filters. ANSI S1.11-2004. Melville, NY: Acoustical Society of America, 2009. Extended Capabilities C/C++ Code Generation Generate C and C++ code using MATLAB® Coder™. Version History Introduced in R2024b
{"url":"https://kr.mathworks.com/help/audio/ref/octavespectrumestimator-system-object.html","timestamp":"2024-11-09T19:50:54Z","content_type":"text/html","content_length":"136516","record_id":"<urn:uuid:c1d9a692-90a0-4bfd-b00e-67bce773316c>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00880.warc.gz"}
# NahamCon CTF 2020 Writeup I found that this CTF had a few particularly interesting challenges, so I felt like a small writeup was due. ## Twinning These numbers wore the same shirt! LOL, #TWINNING! Connect with: nc jh2i.com 50013 Netcatting to the server gives us this: $ nc jh2i.com 50013 Generating public and private key... Public Key in the format (e,n) is: (65537,7136206991423) The Encrypted PIN is 4647953841890 What is the PIN? Looks like RSA but with wayyy too small numbers, but they also have another weakness... This challenge will definitely stick with me, as it is the first time ever that a small formula I came up with a while ago actually became useful: The essence of this formula is that it's able to factorize the factor of two primes `p` and `q` when `p` and `q` are consecutive (twin primes) or at least very close. The way I came up with this formula was when trying to find some way to apply the conjugate rule backwards in order to factorize `n`. I was somewhat successful, however it does diminish in accuracy quite gravely the futher away `p` and `q` are from eachother. Do note that `m` does NOT refer to the variable `m` in RSA, it is merely an intermediete variable to make the math a little more elegant. I guess you could say it's a very fancy way of doing the square root of `n`, and is pretty much useless in any practical contexts. (Though if any math geeks know a way to improve it to have accuracy across a wider range of primes, please let me know) Anyhow, we can write this formula as a python script: def factor(n): return [int(int(sqrt)-m+1),int(int(sqrt)+m+1)] while True: n = int(input("Enter n:")) Using this program we can factor `n`: Enter n:7136206991423 [2671367, 2671369] We can multiply `p` and `q` together to check, and sure enough, `2671367 * 2671369 = 7136206991423`. Now let's get to decrypting. Here's a python script to do that: def egcd(a, b): if a == 0: return (b, 0, 1) g, y, x = egcd(b%a,a) return (g, x - (b//a) * y, y) def modinv(a, m): g, x, y = egcd(a, m) if g != 1: raise Exception('No modular inverse') return x%m p = 2671367 q = 2671369 ct = 4647953841890 e = 65537 phi = (p - 1) * (q - 1) d = modinv(e, phi) m = pow(ct, d, p * q) Running the script gives us that the PIN is `4565`, and sure enough, entering the PIN gives us the flag: $ nc jh2i.com 50013 Generating public and private key... Public Key in the format (e,n) is: (65537,7136206991423) The Encrypted PIN is 4647953841890 What is the PIN? Good job you won!
{"url":"https://ctftime.org/writeup/21425","timestamp":"2024-11-10T08:16:10Z","content_type":"text/html","content_length":"21410","record_id":"<urn:uuid:26edb72c-635c-4b8e-a35e-b70d172f7f92>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00548.warc.gz"}
Understand that the zero and identity matrices play a role in matrix addition and multiplication similar to the role of 0 and 1 in the real numbers. The determinant of a square matrix is nonzero if and only if the matrix has a multiplicative inverse. Understand that the zero and identity matrices play a role in matrix addition and multiplication similar to the role of 0 and 1 in the real numbers. The determinant of a square matrix is nonzero if and only if the matrix has a multiplicative inverse.
{"url":"https://virtualnerd.com/common-core/hsn-number-quantity/HSN-VM-vector-matrix/C/10/","timestamp":"2024-11-10T01:36:51Z","content_type":"text/html","content_length":"18989","record_id":"<urn:uuid:60e96c02-33d6-41d6-a09a-f89e28718337>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00140.warc.gz"}
Joule's Law - Joule Effect or Heating Effect of Current Joule’s Law – Joule Effect or Heating Effect of Current and its Applications An English physicist James Prescott Joule discovered the Joules’ law (also known as Joel’s effect, Joule-Lenz law or Joule’s first law) in 1840-43 which shows the relation between current, heat and resistance in a specific time i.e. when a current flows through a material, it produces heat in it. Joule’s Law Joule’s Law states that “If “I” amperes current flows through a resistor “R” for “t” seconds, then the amount of work done (conversion of electrical energy into heat energy) is equals to Work done = Heat = I^2Rt … Joules WD = Heat = VIt … Joules … (∴ R = V/I) WD = Heat = Wt … Joules … (∴ W = VI) WD = Heat = V^2t/R … Joules … (∴ I = V/R) The work done is the amount of heat energy converted from electricity which dissipates in the air. In this case, the produced amount of heat can be calculated using the following formulas and Amount of Heat produced = H = Work done / Mechanical equivalent of Heat = WD/J • J = 4187 joules/kcal = 4200 joules/kcal (approx.) • ∴ H = I^2Rt / 4200 kcal = VIt / 4200 kcal = Wt / 4200 kcal = V^2t / 4200 kcal A one kilocalorie (kcal) is the amount of heat needed to rise the temperature of one kilogram (kg) water by one degree centigrade (1°C). Related Posts Heating Effect of Current Almost, we all have experienced that when a current flows through a conductor or cable and wire, it becomes hot later. The reason behind this scene is that when current flows through a conductor, the applied electrical energy is converted into heat energy which increases the temperature of the conductor. We know that the flow of electrons in a substance is known as electric current. The drifting electrons in the substance collides with each other and the electrons of molecule atoms in the material. The collision of electrons produces heat. That’s the reason why the flowing of electric current in a substance generates heat. This effect is known as the heating effect of the current. The generated heat by electric current depends on the amount of current and material of that substance. For example, Electric current produces more heat in insulators (those materials which heavily oppose the flow of current in it e.g. tungsten, nichrome) while the amount of heat generated by flowing current in conductors (those material in which current flows very easily due to less or almost negligible resistance e.g. gold, copper, aluminum) is less than that of insulators). Related Posts: Why does the element of the heater glow due to heat but not the heater’s cord? Generally, the heating element of heaters are made up of nichrome which has very high resistance. When a supply voltage is applied across the heating element through the wire, the material heavily opposes the flow of electrons in it. Due to the drifting of electrons inside the heating material, the electrons collide with the electrons in the material atoms. This continuous collision of electrons heats up and glows the heating element which additionally provides the heat energy. In simple words, the heating element of nichrome converts the electrical energy into heating energy. This whole process is known as the heating effect of current. On the other hand, the cord wire connected to the heater is made of conductor where current easily flows through it without a noticeable resistance. That’s why only the heater element glows, but not the heater’s cable. Related Posts: Solved Example on Joule’s Law of Heating Effect of Current An electric heater contains 1.6kg of water at 20°C. It takes 12 minutes to rise the temperature to 100°C. Assume the losses due to radiation and heating the kettle to be 10kg-calories. Find the power rating of the heater. Heat required to rise the temperature of 1.6kg of water to boiling point = 1.6 x 100 x 1 x (100 – 20) cal. = 128000 cal. Heat lost = 10 x 1000 = 10000 cal. Total heat = 128000) + 10000 = 138000 cal. Now, heat produced = Wt = (W x 12 x 60) / 4.2 cal. Heat produced = heat taken by heater i.e. = (W x 12 x 60)/4.2 = 138000 W = (138000 x 4.2) / )12 x 60) W = 805W = 0.8kW Applications of Joule’s Effect or Heating Effect of Current Joule’s law or heating effect of electric current are used in many household and industrial applications. Following are the appliances and devices using the effect of electric current. • Electric heaters, stoves, water heaters and heating elements • Electric clothes iron • Electric hot plate • Electric Welding • Food processing • Filament of Incandescent lamps and light bulbs • IR Thermal imaging (Infrared thermography (IRT) of a light bulb • Resistance heating coils, space heater (electric radiator), immersion heaters PTC heaters, cartridge heaters, and fan heaters • Hair dryers • Fuses and fuse elements Besides these useful applications of heating effect of current, there are some drawbacks as well such as electric power loss (I^2R) in the HVAC (high voltage alternating current) power and transmission lines due to the fact that there is some resistance of the power lines material. Moreover, it leads to serious heating problem in electrical machines and devices such as transformer, generator and motors etc. Additionally, The thermal efficiency or heating efficiency of current can’t be used at all because there are some heating losses due to the radiation (transfer of the heat in the form of heating waves) and convection (the molecular movement in the material used to transfer the heat). Related Posts:
{"url":"https://www.electricaltechnology.org/2020/12/joules-law-heating-effect-of-current.html","timestamp":"2024-11-06T20:39:23Z","content_type":"text/html","content_length":"340289","record_id":"<urn:uuid:57760bfd-de42-4ed8-ada2-cee8842a3f84>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00334.warc.gz"}
The continuum hypothesis and other set-theoretic ideas for non-set-theorists, CUNY Einstein Chair Seminar, April, 2015 At Dennis Sullivan’s request, I shall speak on set-theoretic topics, particularly the continuum hypothesis, for the Einstein Chair Mathematics Seminar at the CUNY Graduate Center, April 27, 2015, in two parts: • An introductory background talk at 11 am, Room GC 6417 • The main talk at 2 – 4 pm, Room GC 6417 I look forward to what I hope will be an interesting and fruitful interaction. There will be coffee/tea and lunch between the two parts. Abstract. I shall present several set-theoretic ideas for a non-set-theoretic mathematical audience, focusing particularly on the continuum hypothesis and related issues. At the introductory background talk, in the morning (11 am), I shall discuss and prove the Cantor-Bendixson theorem, which asserts that every closed set of reals is the union of a countable set and a perfect set (a closed set with no isolated points), and explain how it led to Cantor’s development of the ordinal numbers and how it establishes that the continuum hypothesis holds for closed sets of reals. We’ll see that there are closed sets of arbitrarily large countable Cantor-Bendixson rank. We’ll talk about the ordinals, about $\omega_1$, the long line, and, time permitting, we’ll discuss Suslin’s hypothesis. At the main talk, in the afternoon (2 pm), I’ll begin with a discussion of the continuum hypothesis, including an explanation of the history and logical status of this axiom with respect to the other axioms of set theory, and establish the connection between the continuum hypothesis and Freiling’s axiom of symmetry. I’ll explain the axiom of determinacy and some of its applications and its rich logical situation, connected with large cardinals. I’ll briefly mention the themes and goals of the subjects of cardinal characteristics of the continuum and of Borel equivalence relation theory. If time permits, I’d like to explain some fun geometric decompositions of space that proceed in a transfinite recursion using the axiom of choice, mentioning the open questions concerning whether there can be such decompositions that are Borel. Dennis has requested that at some point the discussion turn to the role of set theory in the foundation for mathematics, compared for example to that of category theory, and I would look forward to that. I would be prepared also to discuss the Feferman theory in comparison to Grothendieck’s axiom of universes, and other issues relating set theory to category theory. 9 thoughts on “The continuum hypothesis and other set-theoretic ideas for non-set-theorists, CUNY Einstein Chair Seminar, April, 2015” 1. Slides or recording will be available, please? □ The talk will be video recorded and made available here after a few days. The Einstein Chair Seminar keeps a video archive of all their talks. 2. Thank you Joel! Great talk!! Everyone was enjoying every minute. What a wonderful group and meeting! □ Thanks, Erin. I’m glad you enjoyed it. 3. Indeed, a very nice and dynamical talk (just finished watching the video). Also the audience was very interesting. I’m giving a course now on applications of Descriptive Set Theory to Computer Science (mostly based on Kechris), and I’ll recommend to my students to check the way you presented the related results for R, very neat. Most probably I’ll be bothering you in the near future with some question on ST (which I like very much but didn’t have the chance to work on yet). Best, □ I’m glad you enjoyed the talk. Please be my guest to ask questions about set theory! 4. Where can I find the the slides and the video of these talks? □ I guess the video got moved and link is now broken. I’ll try to find out where they are available. ☆ I’ve now updated the links. Leave a Reply to Joel David Hamkins Cancel reply
{"url":"https://jdh.hamkins.org/the-continuum-hypothesis-and-other-set-theoretic-ideas-for-non-set-theorists-einstein-chair-seminar-april-2015/?replytocom=8624","timestamp":"2024-11-03T16:41:28Z","content_type":"text/html","content_length":"82926","record_id":"<urn:uuid:2b85bc2a-ff79-49ee-872c-dc47c68ae368>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00528.warc.gz"}
Talita Perciano Machine Learning and Analytics Talita Perciano Talita Perciano Research Scientist Scientific Data Division Phone: +1 510 486 5060 1 Cyclotron Mail Stop 59R3103 - 059-3034B California 94720 us Perciano is a Research Scientist in the Machine Learning and Analytics group and the Computational Biosciences group, at Lawrence Berkeley National Laboratory. She conducts research in the areas of image analysis, machine learning, quantum algorithms and machine learning, probabilistic graphical models, and high-performance computing motivated by the incredible challenges around scientific data generated by computational models, simulations, and experiments. Her research focuses on mathematical foundations for new methods, on the implementation of scalable methods, and on platform-portability. Her goal is to develop powerful, mathematically-grounded, scalable algorithms that meet the requirements needed to analyze current and future scientific datasets acquired in user data facilities. She has built a diverse collaboration network throughout the years in fields such as materials science, biosciences, chemistry, among others. She earned her doctorate in Computer Science from the University of São Paulo in 2012. Journal Articles Jan Balewski, Mercy G Amankwah, Roel Van Beeumen, E Wes Bethel, Talita Perciano, Daan Camps, "Quantum-parallel vectorized data encodings and computations on trapped-ion and transmon QPUs", Journal, February 10, 2024, 14, doi: https://doi.org/10.1038/s41598-024-53720-x Zhe Bai, Abdelilah Essiari, Talita Perciano, Kristofer E Bouchard, "AutoCT: Automated CT registration, segmentation, and quantification", Software X, January 5, 2024, 26, doi: https://doi.org/10.1016 E Wes Bethel, Mercy G Amankwah, Jan Balewski, Roel Van Beeumen, Daan Camps, Daniel Huang, Talita Perciano, "Quantum computing and visualization: A disruptive technological change ahead", Journal, November 6, 2023, 43, doi: https://doi.org/10.1109/MCG.2023.3316932 Gregory Wallace, Zhe Bai, Robbie Sadre, Talita Perciano, Nicola Bertelli, Syun'ichi Shiraiwa, Wes Bethel, John Wright, "Towards fast and accurate predictions of radio frequency power deposition and current profile via data-driven modelling: applications to lower hybrid current drive", Journal of Plasma Physics, August 18, 2022, 88:895880401, doi: 10.1017/S0022377822000708 M. G. Amankwah, D. Camps, E. W. Bethel, R. Van Beeumen, T. Perciano, "Quantum pixel representations and compression for N-dimensional images", Nature Scientific Reports, May 11, 2022, 12:7712, doi: M. Avaylon, R. Sadre, Z. Bai, T. Perciano, "Adaptable Deep Learning and Probabilistic Graphical Model System for Semantic Segmentation", Advances in Artificial Intelligence and Machine Learnin, March 31, 2022, 2:288--302, doi: 10.54364/AAIML.2022.1119 C Varadharajan, AP Appling, B Arora, DS Christianson, VC Hendrix, V Kumar, AR Lima, J Müller, S Oliver, M Ombadi, T Perciano, JM Sadler, H Weierbach, JD Willard, Z Xu, J Zwart, "Can machine learning accelerate process understanding and decision-relevant predictions of river water quality?", Hydrological Processes, January 1, 2022, 36, doi: 10.1002/hyp.14565 RJ Pandolfi, DB Allan, E Arenholz, L Barroso-Luque, SI Campbell, TA Caswell, A Blair, F De Carlo, S Fackler, AP Fournier, G Freychet, M Fukuto, D Gürsoy, Z Jiang, H Krishnan, D Kumar, RJ Kline, R Li, C Liman, S Marchesini, A Mehta, AT N Diaye, DY Parkinson, H Parks, LA Pellouchoud, T Perciano, F Ren, S Sahoo, J Strzalka, D Sunday, CJ Tassone, D Ushizima, S Venkatakrishnan, KG Yager, P Zwart, JA Sethian, A Hexemer, "Xi-cam: a versatile interface for data visualization and analysis", Journal of Synchrotron Radiation, 2018, 25:1261--1270, doi: 10.1107/S1600577518005787 M Farmand, R Celestre, P Denes, ALD Kilcoyne, S Marchesini, H Padmore, T Tyliszczak, T Warwick, X Shi, J Lee, YS Yu, J Cabana, J Joseph, H Krishnan, T Perciano, FRNC Maia, DA Shapiro, "Near-edge X-ray refraction fine structure microscopy", Applied Physics Letters, 2017, 110, doi: 10.1063/1.4975377 Benedikt J Daurer, Hari Krishnan, Talita Perciano, Filipe RNC Maia, David A Shapiro, James A Sethian, Stefano Marchesini, "Nanosurveyor: a framework for real-time data processing", Advanced structural and chemical imaging, 2017, 3:7, T Perciano, D Ushizima, H Krishnan, D Parkinson, N Larson, DM Pelt, W Bethel, F Zok, J Sethian, "Insight into 3D micro-CT data: Exploring segmentation algorithms through performance metrics", Journal of Synchrotron Radiation, 2017, 24:1065--1077, doi: 10.1107/S1600577517010955 DM Ushizima, HA Bale, EW Bethel, P Ercius, BA Helms, H Krishnan, LT Grinberg, M Haranczyk, AA Macdowell, K Odziomek, DY Parkinson, T Perciano, RO Ritchie, C Yang, "IDEAL: Images Across Domains, Experiments, Algorithms and Learning", JOM, 2016, 68:2963--2972, doi: 10.1007/s11837-016-2098-4 S Marchesini, H Krishnan, BJ Daurer, DA Shapiro, T Perciano, JA Sethian, FRNC Maia, "SHARP: A distributed GPU-based ptychographic solver", Journal of Applied Crystallography, 2016, 49:1245--1252, doi: 10.1107/S1600576716008074 T Perciano, F Tupin, R Hirata, RM Cesar, "A two-level Markov random field for road network extraction and its application with optical, SAR, and multitemporal data", International Journal of Remote Sensing, 2016, 37:3584--3610, doi: 10.1080/01431161.2016.1201227 AW Wills, DJ Michalak, P Ercius, ER Rosenberg, T Perciano, D Ushizima, R Runser, BA Helms, "Block Copolymer Packing Limits and Interfacial Reconfigurability in the Assembly of Periodic Mesoporous Organosilicas", Advanced Functional Materials, 2015, 25:4120--4128, doi: 10.1002/adfm.201501059 J Donatelli, M Haranczyk, A Hexemer, H Krishnan, X Li, L Lin, F Maia, S Marchesini, D Parkinson, T Perciano, D Shapiro, D Ushizima, C Yang, JA Sethian, "CAMERA: The Center for Advanced Mathematics for Energy Research Applications", Synchrotron Radiation News, 2015, 28:4--9, doi: 10.1080/08940886.2015.1013413 T Perciano, MW Urban, NDA Mascarenhas, M Fatemi, AC Frery, GT Silva, "Deconvolution of vibroacoustic images using a simulation model based on a three dimensional point spread function", Ultrasonics, 2013, 53:36--44, doi: 10.1016/j.ultras.2012.03.011 Conference Papers GM Wallace, Z Bai, N Bertelli, EW Bethel, T Perciano, S Shiraiwa, JC Wright, "Towards Fast, Accurate Predictions of RF Simulations via Data-driven Modeling: Forward and Lateral Models", Conference, AIP Publishing, August 1, 2023, 2984, doi: https://doi.org/10.1063/5.0162422 V. Dumont, C. Garner, A. Trivedi, C. Jones, V. Ganapati, J. Mueller, T. Perciano, M. Kiran, and M. Day, "HYPPO: A Surrogate-Based Multi-Level Parallelism Tool for Hyperparameter Optimization", 2021 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC), November 15, 2021, Stefano Marchesini, Anuradha Trivedi, Pablo Enfedaque, Talita Perciano, Dilworth Parkinson, "Sparse Matrix-Based HPC Tomography", Computational Science -- ICCS 2020, Cham, Springer International Publishing, 2020, 248--261, doi: 10.1007/978-3-030-50371-0_18 Talita Perciano, Colleen Heinemann, David Camp, Brenton Lessley, E Wes Bethel, "Shared-Memory Parallel Probabilistic Graphical Modeling Optimization: Comparison of Threads, OpenMP, and Data-Parallel Primitives", High Performance Computing, Cham, Springer International Publishing, 2020, 127--145, doi: 10.1007/978-3-030-50743-5_7 C Heinemann, T Perciano, D Ushizima, EW Bethel, "Distributed memory parallel Markov random fields using graph partitioning", Proceedings - 2017 IEEE International Conference on Big Data, Big Data 2017, 2018, 2018-Jan:3332--3341, doi: 10.1109/BigData.2017.8258318 B Lessley, T Perciano, M Mathai, H Childs, EW Bethel, "Maximal clique enumeration with data-parallel primitives", 2017 IEEE 7th Symposium on Large Data Analysis and Visualization, LDAV 2017, 2017, 2017-Dec:16--25, doi: 10.1109/LDAV.2017.8231847 T Perciano, D Ushizima, H Krishnan, D Parkinson, J Sethian, "FibriPy: A software environment for fiber analysis from 3D micro-computed tomography data", Advanced Materials - TechConnect Briefs 2017, 2017, 1:25--28, DY Parkinson, DM Pelt, T Perciano, D Ushizima, H Krishnan, HS Barnard, AA MacDowell, J Sethian, "Machine learning for micro-tomography", Proceedings of SPIE - The International Society for Optical Engineering, 2017, 10391, doi: 10.1117/12.2274731 T Perciano, DM Ushizima, EW Bethel, YD Mizrahi, D Parkinson, JA Sethian, "Reduced-complexity image segmentation under parallel Markov Random Field formulation using graph partitioning", Proceedings - International Conference on Image Processing, ICIP, 2016, 2016-Aug:1259--1263, doi: 10.1109/ICIP.2016.7532560 DY Parkinson, K Beattie, X Chen, J Correa, E Dart, BJ Daurer, JR Deslippe, A Hexemer, H Krishnan, AA Macdowell, FRNC Maia, S Marchesini, HA Padmore, SJ Patton, T Perciano, JA Sethian, D Shapiro, R Stromsness, N Tamura, BL Tierney, CE Tull, D Ushizima, "Real-time data-intensive computing", AIP Conference Proceedings, 2016, 1741, doi: 10.1063/1.4952921 SV Venkatakrishnan, KA Mohan, K Beattie, J Correa, E Dart, JR Deslippe, A Hexemer, H Krishnan, AA MacDowell, S Marchesini, SJ Patton, T Perciano, JA Sethian, R Stromsness, BL Tierney, CE Tull, D Ushizima, DY Parkinson, "Making advanced scientific algorithms and big scientific data management more accessible", IS and T International Symposium on Electronic Imaging Science and Technology, 2016, doi: 10.2352/ISSN.2470-1173.2016.19.COIMG-155 D Ushizima, T Perciano, D Parkinson, "Fast detection of material deformation through structural dissimilarity", Proceedings - 2015 IEEE International Conference on Big Data, IEEE Big Data 2015, 2015, 2775--2781, doi: 10.1109/BigData.2015.7364080 D Ushizima, T Perciano, H Krishnan, B Loring, H Bale, D Parkinson, J Sethian, "Structure recognition from high resolution images of ceramic composites", Proceedings - 2014 IEEE International Conference on Big Data, IEEE Big Data 2014, 2014, 683--691, doi: 10.1109/BigData.2014.7004292 Book Chapters H Chang, J J Donatelli, P Enfedaque, G Freychet, M Haranczyk, A Hexemer, Z Hu, O Jain, H Krishnan, D Kumar, X Li, L Lin, M MacNeil, S Marchesini, X Mo, M Noack, K Pande, R Pandolfi, D Parkinson, D M Pelt, T Perciano, D A Shapiro, D Ushizima, C Yang, P H Zwart, J A Sethian, "Building Mathematics, Algorithms, and Software for Experimental Facilities", Handbook on Big Data and Machine Learning in the Physical Sciences, ( 2020) Pages: 189--240 doi: 10.1142/9789811204579_0012
{"url":"https://crd.lbl.gov/divisions/scidata/mla/staff/talita-perciano/","timestamp":"2024-11-09T20:27:25Z","content_type":"text/html","content_length":"55196","record_id":"<urn:uuid:5e3823f1-ba74-4055-b169-33617ed00d18>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00621.warc.gz"}
5 Calculation Requirements for CO2 Extraction Machine Design Five Calculation Requirements for Supercritical CO2 Extraction Machine Design • by Lion After the structural type of the process equipment is determined, it is necessary to calculate the main components. The supercritical CO2 fluid extraction vessel generally works at a high pressure of 32Mpa (sometimes possibly higher), and is often disassembled. It is a high-pressure pressure vessel, and its strength design is the difficulty and key to the strength design of the entire supercritical CO2 extraction equipment. When the shell of the CO2 extraction vessel is opened, if the hole diameter exceeds the maximum hole diameter allowed without additional reinforcement, the reinforcement area must be calculated according to the requirements. Read More: CO2 Extraction Machine: 3 Types, 4 Tips, 5 Notes, Top 6 Apps Calculation Requirement 1# – Wall Thickness of CO2 Extraction Vessel In small and medium-sized supercritical CO2 fluid extraction equipment, fatigue analysis is usually not performed, because the volume of small and medium-sized equipment is small, and the wall thickness of the supercritical CO2 extraction container is also small, so the safety factor can be increased when calculating the strength. Ways to deal with it, so that only the strength problem can be considered. Before calculating the wall thickness of the CO2 extraction vessel, it is necessary to determine the relevant design parameters of the supercritical CO2 fluid extraction vessel according to the operating conditions. Calculation Requirement 2# – CO2 Extraction Vessel Head Most of the heads of high-pressure vessels are flat covers, especially small-diameter high-pressure vessels. Because flat caps are easy to manufacture and are suitable for a variety of seals, flat-end caps can be selected. Calculation Requirement 3# – CO2 Extraction Vessel Cap Thread Considering that the cap thread of the supercritical CO2 extraction vessel mainly bears single-phase axial force, a zigzag thread can be selected. Calculation Requirement 4# – CO2 Extraction Vessel Support Supercritical CO2 extraction equipment generally adopts skirt support, which needs to be checked for strength. Calculation Requirement 5# – Shell and tube heat exchanger The calculation of a heat exchanger as a pressure vessel is the same as that of a general vessel, but there are also specific strength calculations for heat exchangers, including the calculation of the temperature difference stress caused by the temperature difference between the tube wall and the shell, the calculation of the thickness of the tube sheet, and the calculation of the tube tension. The calculation of the pull-off force; when the pull-off force is too large, the calculation of the expansion joint must be selected. Since the calculation process is relatively complicated, and the Standardization Committee has recommended calculation software, you can use the software calculation directly when necessary, please check the relevant information. Read more: Supercritical carbon dioxide
{"url":"https://www.angleroller.com/blog/five-calculation-requirements-for-supercritical-co2-extraction-machine-design.html","timestamp":"2024-11-05T09:50:24Z","content_type":"text/html","content_length":"109764","record_id":"<urn:uuid:853e40f8-2685-42e0-8e4d-77a955c426c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00042.warc.gz"}
How to calculate limits using residues and poles? | Hire Someone To Do Calculus Exam For Me How to calculate limits using residues and poles? One of the many nice things about a computer is that it is fast and it’s great to have a professional machine. For example, I am reading a book which helps me speed things up. It is very fast to read and it makes me happy when it can be used for calculations on large data structures or to approximate two piece of data all on one screen. (The last sentence) Next week the new MD version of this guide will show a link to calculations done before Jan 15’s Nov 03. If we include all the calculations done between 2001/05/01/2018/02 and the end of 2018/19/21, this link shows the first step. Then we will create two variables. The first is the parameters after Dec 30’s in the link above. These values represent the values from the last decade, so the equation should give us the degrees of freedom for calculating the degrees of freedom. The solution should also have the values where the current value is above average, i.e. it should say that the average of the current days of the three variables is above 0. This result is expected to be shared with the paper by Zhan X.-Shie-Ming, who was an advisor of the students before the design of the computer. The second variable is to find out hop over to these guys if the current day is three days prior to the current day on another data frame, it should be above average by using the fact that if we ignore the elements during the decomposition that this part is taking so we have only 13 degrees of freedom, it should be below mean in the middle my company this step. So we sum the current day’s mean for this part to get the average. This way we get the final result in this step. Here read the article calculation was finished over Dec 08 2013. Let this “free” time period become 1401/2017/16 (see P. 1How to calculate limits using residues and poles? ======================================== Molecular dynamics is the most efficient technique which makes chemical theory of reactions and dynamics even more difficult. However, few methods exist which can calculate the rate limiting coefficients for such reactions, yet many methods are available to calculate the equilibrium constants and equilibrium phase and structure factors in terms of the pole and pole energy, therefore making both phase and structure dynamics much simpler as a result. Do My Online Accounting Class This paper deals with one method that represents energy quantization of my explanation around the equilibria of chains, and finds the limit of the dynamic energy while ignoring poles. It appears to be physically simple enough to be solved outside of the limit. Thermal temperature, surface field, and electron temperature [@Li2017; @Li2017n] theories ———————————————————————————- In thermal equilibrium theory, atomic force balance determines the internal right here of water molecules [@Milton1989]. It is, therefore, a characteristic function which is sometimes you could check here the vibrational heat transfer coefficient (VHFC [@Su2014], ) and also its equilibrium relationship [@Kitaev1985; @Milton1988]. VHFC, has been a valuable tool in identifying the energy critical in liquids by combining atomistic molecular dynamics (AMD) and thermal-atomic force balance (TAFBA) techniques to find the most energetically critical chemical state using relaxation times. However, in spite of such experimental evidence, the VHFC and its energy-weighted average have a few disadvantages [@Milton19891; @Milton19892; @Milton19893] compared to the more traditional relaxation times of the Lennard-Jones (LJ) and Watson-Crick (WC) chains. One possible difficulty in formulating a VHFC (or VWHF) for liquids is to extract energy-weightings from pay someone to take calculus exam specific heat. The VWHF is created by forcing the protein molecule to be distributed and distributed in a different structure fromHow to calculate limits using residues and poles? This task appears in Google Maps and in Wiki articles: Here: The results show the relationship between minimum and maximum on every position of the real number M, the number of residues that link to the red book or amino acid, and the residue average to log. There are some interesting properties for a minimum and maximum of one. Once I’m plotting the values of M and the residues I’m computing the limit the limit is just the M of the residues. This is the same for the residue-pole distance, but for the case where M is either one as defined above, or two as defined above. We can see here that the limit is two for residues in the red domain very close to the buttonbox inside the map. This means that for a very narrow M such residue-pole distance residue is lying in the red domain and not the protein-bindings in the redbox. So the page and two, the limit for one has the same meaning. But for a more wider M and for any region of the red domain we are going to see that the limit still at two is only the right one. We can see there that a minimum distance of two can be found and that the two one for residue-pole distance in one min will be in the red domain. So this is the point where poles are cut right for this letter here I’m talking about a maximum distance of two may just be using case because this is a new point I’m in and I’m not done with part of your data. Otherwise this method will be changing what all you got with molecules used for example your water molecules and solvent molecules where you are taking your paper here. If you have the data, I’ve got a working set of real numbers you should know that for any region of the protein-domain on this map you have more than about ten residues to break in this new region. For some residue-pole distance you need not include the residue average,
{"url":"https://hirecalculusexam.com/how-to-calculate-limits-using-residues-and-poles","timestamp":"2024-11-10T16:23:46Z","content_type":"text/html","content_length":"103554","record_id":"<urn:uuid:08865ec9-bfa4-424a-a800-f9b88a243763>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00549.warc.gz"}
The enable_if family of templates is a set of tools to allow a function template or a class template specialization to include or exclude itself from a set of matching functions or specializations based on properties of its template arguments. For example, one can define function templates that are only enabled for, and thus only match, an arbitrary set of types defined by a traits class. The enable_if templates can also be applied to enable class template specializations. Applications of enable_if are discussed in length in [1] and [2]. namespace boost { template <class Cond, class T = void> struct enable_if; template <class Cond, class T = void> struct disable_if; template <class Cond, class T> struct lazy_enable_if; template <class Cond, class T> struct lazy_disable_if; template <bool B, class T = void> struct enable_if_c; template <bool B, class T = void> struct disable_if_c; template <bool B, class T> struct lazy_enable_if_c; template <bool B, class T> struct lazy_disable_if_c; Sensible operation of template function overloading in C++ relies on the SFINAE (substitution-failure-is-not-an-error) principle [3]: if an invalid argument or return type is formed during the instantiation of a function template, the instantiation is removed from the overload resolution set instead of causing a compilation error. The following example, taken from [1], demonstrates why this is important: int negate(int i) { return -i; } template <class F> typename F::result_type negate(const F& f) { return -f(); } Suppose the compiler encounters the call negate(1). The first definition is obviously a better match, but the compiler must nevertheless consider (and instantiate the prototypes) of both definitions to find this out. Instantiating the latter definition with F as int would result in: int::result_type negate(const int&); where the return type is invalid. If this were an error, adding an unrelated function template (that was never called) could break otherwise valid code. Due to the SFINAE principle the above example is not, however, erroneous. The latter definition of negate is simply removed from the overload resolution set. The enable_if templates are tools for controlled creation of the SFINAE conditions. The names of the enable_if templates have three parts: an optional lazy_ tag, either enable_if or disable_if, and an optional _c tag. All eight combinations of these parts are supported. The meaning of the lazy_ tag is described in the section below. The second part of the name indicates whether a true condition argument should enable or disable the current overload. The third part of the name indicates whether the condition argument is a bool value (_c suffix), or a type containing a static bool constant named value (no suffix). The latter version interoperates with Boost.MPL. The definitions of enable_if_c and enable_if are as follows (we use enable_if templates unqualified but they are in the boost namespace). template <bool B, class T = void> struct enable_if_c { typedef T type; template <class T> struct enable_if_c<false, T> {}; template <class Cond, class T = void> struct enable_if : public enable_if_c<Cond::value, T> {}; An instantiation of the enable_if_c template with the parameter B as true contains a member type type, defined to be T. If B is false, no such member is defined. Thus enable_if_c<B, T>::type is either a valid or an invalid type expression, depending on the value of B. When valid, enable_if_c<B, T>::type equals T. The enable_if_c template can thus be used for controlling when functions are considered for overload resolution and when they are not. For example, the following function is defined for all arithmetic types (according to the classification of the Boost type_traits library): template <class T> typename enable_if_c<boost::is_arithmetic<T>::value, T>::type foo(T t) { return t; } The disable_if_c template is provided as well, and has the same functionality as enable_if_c except for the negated condition. The following function is enabled for all non-arithmetic types. template <class T> typename disable_if_c<boost::is_arithmetic<T>::value, T>::type bar(T t) { return t; } For easier syntax in some cases and interoperation with Boost.MPL we provide versions of the enable_if templates taking any type with a bool member constant named value as the condition argument. The MPL bool_, and_, or_, and not_ templates are likely to be useful for creating such types. Also, the traits classes in the Boost.Type_traits library follow this convention. For example, the above example function foo can be alternatively written as: template <class T> typename enable_if<boost::is_arithmetic<T>, T>::type foo(T t) { return t; } The enable_if templates are defined in boost/utility/enable_if.hpp, which is included by boost/utility.hpp. With respect to function templates, enable_if can be used in multiple different ways: • As the return type of an instantiatied function • As an extra parameter of an instantiated function • As an extra template parameter (useful only in a compiler that supports C++0x default arguments for function template parameters, see Enabling function templates in C++0x for details. In the previous section, the return type form of enable_if was shown. As an example of using the form of enable_if that works via an extra function parameter, the foo function in the previous section could also be written as: template <class T> T foo(T t, typename enable_if<boost::is_arithmetic<T> >::type* dummy = 0); Hence, an extra parameter of type void* is added, but it is given a default value to keep the parameter hidden from client code. Note that the second template argument was not given to enable_if, as the default void gives the desired behavior. Which way to write the enabler is largely a matter of taste, but for certain functions, only a subset of the options is possible: • Many operators have a fixed number of arguments, thus enable_if must be used either in the return type or in an extra template parameter. • Functions that have a variadic parameter list must use either the return type form or an extra template parameter. • Constructors do not have a return type so you must use either an extra function parameter or an extra template parameter. • Constructors that have a variadic parameter list must an extra template parameter. • Conversion operators can only be written with an extra template parameter. In a compiler which supports C++0x default arguments for function template parameters, you can enable and disable function templates by adding an additional template parameter. This approach works in all situations where you would use either the return type form of enable_if or the function parameter form, including operators, constructors, variadic function templates, and even overloaded conversion operations. As an example: #include <boost/type_traits/is_arithmetic.hpp> #include <boost/type_traits/is_pointer.hpp> #include <boost/utility/enable_if.hpp> class test // A constructor that works for any argument list of size 10 template< class... T, typename boost::enable_if_c< sizeof...( T ) == 10, int >::type = 0> test( T&&... ); // A conversion operation that can convert to any arithmetic type template< class T, typename boost::enable_if< boost::is_arithmetic< T >, int >::type = 0> operator T() const; // A conversion operation that can convert to any pointer type template< class T, typename boost::enable_if< boost::is_pointer< T >, int >::type = 0> operator T() const; int main() // Works test test_( 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ); // Fails as expected test fail_construction( 1, 2, 3, 4, 5 ); // Works by calling the conversion operator enabled for arithmetic types int arithmetic_object = test_; // Works by calling the conversion operator enabled for pointer types int* pointer_object = test_; // Fails as expected struct {} fail_conversion = test_; Class template specializations can be enabled or disabled with enable_if. One extra template parameter needs to be added for the enabler expressions. This parameter has the default value void. For template <class T, class Enable = void> class A { ... }; template <class T> class A<T, typename enable_if<is_integral<T> >::type> { ... }; template <class T> class A<T, typename enable_if<is_float<T> >::type> { ... }; Instantiating A with any integral type matches the first specialization, whereas any floating point type matches the second one. All other types match the primary template. The condition can be any compile-time boolean expression that depends on the template arguments of the class. Note that again, the second argument to enable_if is not needed; the default (void) is the correct value. The enable_if_has_type template is usable this scenario but instead of using a type traits to enable or disable a specialization, it use a SFINAE context to check for the existence of a dependent type inside its parameter. For example, the following structure extracts a dependent value_type from T if and only if T::value_type exists. template <class T, class Enable = void> class value_type_from typedef T type; template <class T> class value_type_from<T, typename enable_if_has_type<typename T::value_type>::type> typedef typename T::value_type type; Once the compiler has examined the enabling conditions and included the function into the overload resolution set, normal C++ overload resolution rules are used to select the best matching function. In particular, there is no ordering between enabling conditions. Function templates with enabling conditions that are not mutually exclusive can lead to ambiguities. For example: template <class T> typename enable_if<boost::is_integral<T>, void>::type foo(T t) {} template <class T> typename enable_if<boost::is_arithmetic<T>, void>::type foo(T t) {} All integral types are also arithmetic. Therefore, say, for the call foo(1), both conditions are true and both functions are thus in the overload resolution set. They are both equally good matches and thus ambiguous. Of course, more than one enabling condition can be simultaneously true as long as other arguments disambiguate the functions. The above discussion applies to using enable_if in class template partial specializations as well. In some cases it is necessary to avoid instantiating part of a function signature unless an enabling condition is true. For example: template <class T, class U> class mult_traits; template <class T, class U> typename enable_if<is_multipliable<T, U>, typename mult_traits<T, U>::type>::type operator*(const T& t, const U& u) { ... } Assume the class template mult_traits is a traits class defining the resulting type of a multiplication operator. The is_multipliable traits class specifies for which types to enable the operator. Whenever is_multipliable<A, B>::value is true for some types A and B, then mult_traits<A, B>::type is defined. Now, trying to invoke (some other overload) of operator* with, say, operand types C and D for which is_multipliable<C, D>::value is false and mult_traits<C, D>::type is not defined is an error on some compilers. The SFINAE principle is not applied because the invalid type occurs as an argument to another template. The lazy_enable_if and lazy_disable_if templates (and their _c versions) can be used in such situations: template<class T, class U> typename lazy_enable_if<is_multipliable<T, U>, mult_traits<T, U> >::type operator*(const T& t, const U& u) { ... } The second argument of lazy_enable_if must be a class type that defines a nested type named type whenever the first parameter (the condition) is true. Referring to one member type or static constant in a traits class causes all of the members (type and static constant) of that specialization to be instantiated. Therefore, if your traits classes can sometimes contain invalid types, you should use two distinct templates for describing the conditions and the type mappings. In the above example, is_multipliable<T, U>::value defines when mult_traits<T, U>::type is valid. Some compilers flag functions as ambiguous if the only distinguishing factor is a different condition in an enabler (even though the functions could never be ambiguous). For example, some compilers (e.g. GCC 3.2) diagnose the following two functions as ambiguous: template <class T> typename enable_if<boost::is_arithmetic<T>, T>::type foo(T t); template <class T> typename disable_if<boost::is_arithmetic<T>, T>::type foo(T t); Two workarounds can be applied: • Use an extra dummy parameter which disambiguates the functions. Use a default value for it to hide the parameter from the caller. For example: template <int> struct dummy { dummy(int) {} }; template <class T> typename enable_if<boost::is_arithmetic<T>, T>::type foo(T t, dummy<0> = 0); template <class T> typename disable_if<boost::is_arithmetic<T>, T>::type foo(T t, dummy<1> = 0); • Define the functions in different namespaces and bring them into a common namespace with using declarations: namespace A { template <class T> typename enable_if<boost::is_arithmetic<T>, T>::type foo(T t); namespace B { template <class T> typename disable_if<boost::is_arithmetic<T>, T>::type foo(T t); using A::foo; using B::foo; Note that the second workaround above cannot be used for member templates. On the other hand, operators do not accept extra arguments, which makes the first workaround unusable. As the net effect, neither of the workarounds are of assistance for templated operators that need to be defined as member functions (assignment and subscript operators).
{"url":"https://live.boost.org/doc/libs/1_66_0/libs/core/doc/html/core/enable_if.html","timestamp":"2024-11-14T18:59:29Z","content_type":"text/html","content_length":"64963","record_id":"<urn:uuid:b82a6447-6430-418e-bcd7-279dc04008ee>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00468.warc.gz"}
An extension based on qR-integral for a sequence of operatorsAn extension based on qR-integral for a sequence of operators The paper deals with a sequence of linear positive operators introduced via q-Calculus. We give a generalization in Kantorovich sense of its involving qR-integrals. Both for discrete operators and for integral operators we study the error of approximation for bounded functions and for functions having a polynomial growth. The main tools consist of the K-functional in Peetre sense and different moduli of smoothness. Octavian Agratini Department of Mathematics, Babes-Bolyai University, Cluj-Napoca, Romania Cristina Radu Department of Mathematics, Babes-Bolyai University, Cluj-Napoca, Romania Linear positive operator; q-Integers; Moduli of smoothness; K-functional; Weighted space Applied Mathematics and Computation [1] O. Agratini, C. Radu, On q-Baskakov–Mastroianni operators, Rocky Mountain J. Math., accepted for publication. [2] F. Altomare, M. Campiti, Korovkin-type Approximation Theory and its Applications, de Gruyter Studies in Mathematics, vol. 17, Walter de Gruyter, Berlin, 1994. [3] G.E. Andrews, q-Series: Their Development and Application in Analysis, Number Theory, Combinatorics, Physics, and Computer Algebra, Conference Board of the Mathematical Sciences, vol. 66, American Mathematical Society, 1986. [4] V.A. Baskakov, An example of a sequence of linear positive operators in the space of continuous functions, Dokl. Akad. Nauk. SSSR 113 (1957) 249–251 (in Russian). [5] H. Johnen, Inequalities connected with the moduli of smoothness, Math. Vesnik 9 (24) (1972) 289–303. [6] V. Kac, P. Cheung, Quantum Calculus, Universitext, Springer-Verlag, New York, 2002. [7] A.-J. López-Moreno, Weighted simultaneous approximation with Baskakov type operators, Acta Math. Hungarica 104 (2004) 143–151. [8] A. Lupas, A q-analogue of the Bernstein operator, Seminar on Numerical and Statistical Calculus, vol. 9, University of Cluj-Napoca, 1987. pp. 85–92. [9] S. Marinkovic, P. Rajkovic, M. Stankovic, The inequalities for some types of q-integrals, Comput. Math. Appl. 56 (2008) 2490–2498. [10] G. Mastroianni, Su un operatore lineare e positivo, Rend. Acc. Sc. Fis. Mat., Napoli, Ser. IV 46 (1979) 161–176. [11] S. Ostrovska, The first decade of the q-Bernstein polynomials: results and perspectives, J. Math. Anal. Approx. Theor. 2 (2007) 35–51. [12] J. Peetre, A Theory of Interpolation of Normed Spaces, Notas de Matematica, 39, Lectures Notes, Brasilia, 1963, Rio de Janeiro: Instituto de Matemática Pura e Aplicada, 1968. [13] G.M. Phillips, Bernstein polynomials based on the q-integers, Ann. Numer. Math. 4 (1997) 511–518. [14] C. Radu, On statistical approximation of a general class of positive linear operators extended in q-calculus, Appl. Math. Comput. 215 (6) (2009) 2317–2325. [15] M. Stankovic, P. Rajkovic, S. Marinkovic, Inequalities which include q-integrals, Bull. T. CXXXIII de l’Acad. Serbe des Sci. et des Arts, Classe des Sci. Math. Natur., Sci. Math. 31 (2006)
{"url":"https://ictp.acad.ro/an-extension-based-on-qr-integral-for-a-sequence-of-operators/","timestamp":"2024-11-02T18:11:44Z","content_type":"text/html","content_length":"121470","record_id":"<urn:uuid:5ad6b06c-8df4-4e8c-8c23-f13f22827e4d>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00266.warc.gz"}
Random forest classifier combined with feature selection for breast cancer diagnosis and prognostic Random forest classifier combined with feature selection for breast cancer diagnosis and prognostic () The high incidence of breast cancer in women has increased significantly in the recent years. It is the cause of the most common cancer death in women (exceeded only by lung cancer) [1]. As reported by WHO, [2] there are about 1.38 million new cases and 458000 deaths from breast cancer each year. Breast cancer is by far the most common cancer both in the developed and developing countries. Breast cancer survival rates vary greatly worldwide, ranging from 80% or over in North America, Sweden and Japan to around 60% in middle-income countries and below 40% in low-income countries. The use of expert systems and machine learning techniques in medical diagnosis is increasing gradually. There is no doubt that evaluation of data taken from patient and decisions of experts are the most important factors in diagnosis. But, expert systems and different artificial intelligence systems for diagnoses also help experts in a great deal. With the help of automatic diagnostic systems, the possible errors medical experts made in the course of diagnosis can be avoided, and the medical data can be examined in shorter time and more detailed as well. This study aims to build a computer-aided diagnostic system to distinguish benign breast tumor from malignnant one. This method involves two stages in which a backward elimination approach of feature selection and a learning algorithm random forest are hybridized. The first stage of the whole system conducts a data reduction process for learning algorithm random forest of the second stage. This provides less training data for random forest and so prediction time of the algorithm can be reduced in a great deal. With a selected feature set, the explanation of rationale for the system can be more readily realized. Our proposed method averagely obtained 99.82% and 99.70% classification accuracy in test phase and these results are the highest one among the studies applied for Wisconsin Breast Cancer Diagnosis Dataset (WBC-DD) and Wisconsin Breast Cancer Prognostic Dataset (WBCPD) classification problem so far. It also indicates that the proposed method can be applied confidently to other breast cancer problems with different data sets especially with ones that have a higher number of training data. The used data source is Wisconsin Diagnosis Breast Cancer Dataset taken from the University of California at Irvine (UCI) Machine Learning Repository [3]. This data set is commonly used among researchers who use expert systems and machine learning methods for breast cancer diagnosis and so it provides us to compare the performance of our system with other conducted studies related with this problem. The rest of the paper is organized as follows. Section 2 summarizes the methods and results of previous research on breast cancer diagnosis. Section 3 reviews theoretical background. Section 4 describes the proposed method. Section 5 presents experimental result from using the proposed method to diagnose and prognosis breast cancer. Finally, Section 6 concludes the paper along with outlining future directions. “Word 97-2003 & 6.0/95-RTF” for the PC, provides authors with most of the formatting specifications needed for preparing electronic versions of their papers. All standard paper components have been specified for three reasons: 1) ease of use when formatting individual papers; 2) automatic compliance to electronic requirements that facilitate the concurrent or later production of electronic products; and 3) conformity of style throughout a journal paper. Margins, column widths, line spacing, and type styles are built-in; examples of the type styles are provided throughout this document and are identified in italic type, within parentheses, following the example. Some components, such as multi-leveled equations, graphics, and tables are not prescribed, although the various table text styles are provided. The formatter will need to create these components, incorporating the applicable criteria that follow. 2. RELATED WORK There has been a lot of research on medical diagnosis of breast cancer Hui-Ling Chen et al. [4] used a rough set (RS) based supporting vector machine classifier (RS_ SVM) and the reported accuracy was 96.87% in average. Based on neuro-fuzzy rules Ali Keles et al. [5] presented a decision support system for diagnosis breast cancer. As be reported, the system has high positive predictive rate (96%) and specificity (97%). In A. Marcano et al. [6] proposed a method named AMMLP based on the biological meta-plasticity property of neurons and Shannon’s information theory. As reported by authors, the AMMLP obtained total classification accuracy of 99.26%. In Murat Karabatak and M. Cevdet Ince [7] an automatic diagnosis system for detecting breast cancer based on association rules (AR) and neural network (NN) was proposed. The reported correct classification rate of proposed system was at 95.6%. In Kemal Polat and Salih Güne [8] breast cancer diagnosis was conducted using least square support vector machine (LS-SVM) classifier algorithm that obtained classification accuracy was at 98.53%. In Ubeyli et al. [9] multilayer perceptron neural network, four different methods, combined neural network, probabilistic neural network, recurrent neural network and SVM were used, respectively, highest classification accuracy of 97.36% was obtained by SVM. In Sahan et al. [10] a new hybrid method based on fuzzyartificial immune system and knn algorithm was used and the obtained accuracy was 99.14%. Abonyi and Szeifert et al. [11] applied supervised fuzzy clustering (SFC) technique and obtained 95.57% accuracy. Quinlan [12] reached 94.74% classification accuracy using 10- fold cross validation with C4.5 decision tree method. However, it should be noted that all above researches were tested on Wisconsin Breast Cancer Dataset, this dataset only contains 699 samples with 9 attributes and it is totally different with WBCDD and WBCPD that sometimes makes researcher confuse. Relating to medical diagnosis of breast cancer with both WBCDD and WBCPD, in M. M. R. Krishnan, S. Banerjee, et al. (2010) [13] a support vector machine based classifier for breast cancer detection was used and the reported accuracy was 93.726 % on WBCDD. In R. Stoean and C. Stoean (2013) [14]. Support vector machines and evolutionary algorithms was hybridized, the method obtained correct classification of 97% for diagnostic and 79% for prognostic. In T. T. Mu and A. K. Nandi (2007) [15] a combination of support vector machines, radial basis function networks and self-organizing maps was used, the method achieved classification accuracy of 98% on WBCDD. In Z. W. Zhang, Y. Shi and G. X. Gao (2009) [16] used rough set-based multiple criteria linear programming approach for breast cancer diagnosis and the reported accuracies are 89% and 65% on WBCDD and WBCPD, respectively. In D.-C. Li, C.-W. Liu, et al. (2011) [17] a three-stage algorithm was proposed. Firstly a fuzzy-based non-linear transformation method to extend classification related information from the original data attribute values for a small data set. Secondly, based on the new transformed data set, applies principal component analysis (PCA) to extract the optimal subset of features. Finally, authors used the transformed data with these optimal features as the input data for a learning tool, a support vector machine. The highest reported accuracy of the method was 96.35% on WBCDD. In S. N. Ghazavi and T. W. Liao (2008) [18] three fuzzy modeling methods including the fuzzy k-nearest neighbor algorithm, a fuzzy clustering-based modeling, and the adaptive network-based fuzzy inference system were used and the reported accuracy was 97.17% on WBCDD. 3. BACKGROUND 3.1. Random Forest The random forest (RF) algorithms form a family of classification methods that rely on the combination of several decision trees (Figure 1). The particularity of such Ensembles of Classifiers (EoC) is that their tree- Figure 1. General Architect of random forest. based components are grown from a certain amount of randomness. Based on this idea, RF is defined as a generic principle of randomized ensembles of decision trees [19]. The basic unit of RF (the so-called base learner) is a binary tree constructed using recursive partitioning (RPART). The RF tree base learner is typically grown using the methodology of CART [20], a method in which binary splits recursively partition the tree into homogeneous or near homogeneous terminal nodes (the ends of the tree). A good binary split pushes data from a parent tree-node to its two daughter nodes so that the ensuing homogeneity in the daughter nodes is improved from the parent node. RF is often a collection of hundreds to thousands of trees, where each tree is grown using a bootstrap sample of the original data. RF trees differ from CART as they are grown non-deterministically using a two-stage randomization procedure. In addition to the randomization introduced by growing the tree using a bootstrap sample of the original data, a second layer of randomization is introduced at the node level when growing the tree. Rather than splitting a tree node using all variables, RF selects at each node of each tree, a random subset of variables, and only those variables are used as candidates to find the best split for the node. The purpose of this two-step randomization is to de-correlate trees so that the forest ensemble will have low variance, a bagging phenomenon. RF trees are typically grown deeply. In fact, Breiman’s original proposal called for splitting to purity. Although it has been shown that large sample consistency requires terminal nodes with large sample sizes [21], empirically, it has been observed that purity or near purity is often more effective when the feature space is large or the sample size is small. This is because in such settings, deep trees grown without pruning generally yield lower bias. Thus, Breiman's approach is generally favored in genomic analyses. In such cases, deep trees promote low bias, while aggregation reduces variance. The construction of RF is described in the following main steps: 1) Draw ntree bootstrap samples from the original data. 2) Grow a tree for each bootstrap data set. At each node of the tree, randomly select mtry variables for splitting. Grow the tree so that each terminal node has no fewer than nodesize cases. 3) Aggregate information from the ntree trees for new data prediction such as majority voting for classification. 4) Compute an out-of-bag (OOB) error rate by using the data not in the bootstrap sample. RF [22] can handle thousands of variables of different types with many missing values. Figure 2 presents bootstrapped sample and out of bag sample in random forest algorithm. For a tree grown on a bootstrap data, the OOB data can be used as a test set for that tree. As the number of trees increases, RF provides an OOB data-based unbiased estimate of the test set error. OOB data are also used to estimate importance of variables. These two estimates (test set error estimate and variable importance) are very useful byproducts of RF. Variable importance: There are four variable importance measures implemented in the RF software code [23,24]. Two measures, based on the GINI index of node impurity and classification accuracy of OOB data, are usually used. Given a node t and estimated class probabilities where Q is the number of classes. To calculate the GINI index based measure, at each Figure 2. Bootstrapped sample and out of bag sample in random forest. node the decrease in the GINI index is calculated for variable xj used to make the split. The GINI index-based variable importance measure 3.2. N-Fold Cross Validation In n-fold cross validation approach [26-29], we randomly partition into N sets of equal size and run the learning algorithm N times. Each time, one of the N sets is the test set, and the model is trained on the remaining N − 1 sets. The value of K is scored by averaging the error across the N test errors. We can then pick the value of K that has the lowest score, and then learn model parameters for this K. A good choice for N is N = M − 1, where M is the number of data points. This is called Leave-one-out cross-validation. 3.3. Bayesian Probability Bayes’ rule [30,31] really involves nothing more than the manipulation of conditional probabilities. As we know, the joint probability of two events, A & B, can be expressed as In Bayesian probability theory, one of these “events” is the hypothesis, H, and the other is data, D, and we wish to judge the relative truth of the hypothesis given the data. According to Bayes’ rule, we do this via the relation The term P(D|H) is called the likelihood function and it assesses the probability of the observed data arising from the hypothesis. Usually this is known by the experimenter, as it expresses one’s knowledge of how one expects the data to look given that the hypothesis is true. The term P(H) is called the prior, as it reflects one’s prior knowledge before the data are considered. The specification of the prior is often the most subjective aspect of Bayesian probability theory, and it is one of the reasons statisticians held Bayesian inference in contempt. But closer examination of traditional statistical methods reveals that they all have their hidden assumptions and tricks built into them. Indeed, one of the advantages of Bayesian probability theory is that one’s assumptions are made up front, and any element of subjectivity in the reasoning process is directly exposed. The term P(D) is obtained by integrating (or summing) P(D|H)P(H) over all H, and usually plays the role of an ignorable normalizing constant. Finally, the term P(D|H)P(H) is known as the posterior, and as its name suggests, reflects the probability of the hypothesis after consideration of the data. Another way of looking at Bayes’ rule is that it represents learning. That is, the transformation from the prior, P(H), to the posterior, P(H|D), formally reflects what we have learned about the validity of the hypothesis from consideration of the data [32,33]. 4. PROPOSED METHOD The proposed method can be understood as a two-phase method. In the phase one, the learning algorithm RF was trained and tested on the training set and validation set in order to select the best features (overall view of the proposed method is presented in Figure 3). The most important procedure in phase one is to estimate feature ranking value for each feature based on Bayesian probability and feature impurity, after that all these features will be rank in ascending order based on feature ranking value. A backward elimination approach was applied to evaluated contribution of each feature to the classifier through oneby-one eliminating feature and comparing classification accuracy before and after eliminating the feature. Output of the phase one is a set of selected features. In the phase two, also on the same dataset only the selected features were used to train the classifier so that classification accuracy was improved. The proposed method is descripted as four-step classification algorithm as following: • Step 1: Use n-fold cross validation to training learning algorithm. ◦ Step 2: ◦ Estimate the Bayesian probability; ◦ Estimate the feature ranking value and rank the features. Step 3: Figure 3. Overall view of the proposed method. ◦ Backward elimination approach to eliminate feature, start from the smallest feature in the feature ranking list; ◦ Evaluate the important of the eliminated feature though variance of classification accuracy with and without eliminating feature. If subtraction of classification accuracy before eliminating the feature and after eliminating the feature is positive then the feature should be kept, otherwise the feature is the redundant feature and it should be deleted. ◦ Step 4: ◦ Check the stopping criteria; ◦ Go to Step 1 if not meet the stopping criteria, otherwise stop the process. In Step 1 we use n-fold cross validation to train the learning algorithm. In j^th cross validation we get the set of^th and the test accuracy of class k^th respectively. For example, if we need to classify a dataset into two classes, using random forest with n-fold cross validation at j^th cross validation we will obtain a set of ^th is calculated as Equation (4) below: For example, we have a confusion matrix with two classes are positive class and negative class, at j^th crossvalidation we have In Step 2, we will setup a feature ranking formula that is use to rank all features in the dataset. This step is the most important step in our algorithm. It is indispensable to mention that our proposed method uses feature ranking formula as key factor to determine as which feature should be eliminated first. In other words, the feature ranking formula will help us in determining which feature may be a noisy/redundancy feature. If a feature has high ranking in the dataset then it will be a useful feature for classifier and otherwise. The weakness of feature ranking formula will lead to the weakness of proposed algorithm because this problem will lead time-consuming and classification accuracy of algorithm. This problem will be discussed in Step 3 in detail. In reality, a simply method usually will use when we judge whether the feature is useful to the classifier or not. The method can best be understood as follow: we add a feature into the training set; let a learning algorithm learn on the training set; assess classification accuracy on validation set before and after adding the feature. However, in our situation the question is that how can we have a good estimation of classification accuracy? In order to deal with this issue, within the scope of this paper we will use Bayesian probability to estimate classification accuracy. Now assume that there are m classes,[1], X[2],···,X[n]). According to Bayesian probability, the probability that X belongs to the class C[i] (I = 1,···,m) is estimated as follow: P(X) is constant for all classes because we know that the probability of an entity can be classified in to a class are the same, so that only [1]) = P(C[2]) = ··· = P(C[m]), and we therefore only need to estimate We know that with the given dataset of many attributes, it would be extremely computationally expensive to estimates From (4) and (10) we propose a way to estimate average of classification accuracy and average classification inaccuracy on the learning dataset as following: Average of classification accuracy: Average of classification inaccuracy: Similarly, average classification accuracy and average classification inaccuracy the on the test dataset: Average of classification accuracy: Average of classification inaccuracy: To evaluate classification accuracy we use fraction of average of classification accuracy and average of classification inaccuracy that are calculated as following. We propose a new feature ranking formula for feature i^th at j^th cross validation as following: +k = 1,···, n is the number of cross validation folders+F[i][,j] is GINI index in case of using decision tree algorithms In Step 3, we use backward elimination approach to eliminate noisy/redundant features. In this step, we will use feature ranking list as a standard criterion to determine which feature should be eliminated first. In this proposed method the feature of lowest position in feature ranking list will be eliminated first. At each step in feature eliminating procedure we will validate the classification accuracy. Purpose of the validation is to determine whether the eliminated feature is actually redundancy/ noisy feature or not. We can perform the validation by comparing the classification accuracy before and after eliminating the feature. If classification accuracy before eliminating feature is greater than that of after eliminating feature then the feature will be kept, otherwise it will be eliminated. This iteration will terminate whenever classification accuracy of new subset is higher than classification accuracy of previous subset. Our algorithm will stop when we cannot find better classification accuracy or no feature to eliminate. In this case the current subset is the best subset we can have. Otherwise, in term of n-fold cross validation the procedure will jump back to Step 1 (Step 4). 5.1. Data Description Two different sets of data have been used taken from the Machine Learning Repository of the University of California, Irvine, USA (ftp://ftp.ics.uci.edu/pub/machine-learning-databases, 2012). 5.1.1. Wisconsin Breast Cancer Diagnosis Dataset The features of the diagnostic collection describe characteristics of the cell nuclei present in a digitized image of a fine needle aspirate (FNA) of a breast mass [34]. Every cell nucleus is defined by ten traits and for every trait the mean, the standard error and the worst (mean of the three largest values) are computed, resulting in a total of 30 features for each image: • Radius (mean of distances from center to points on the perimeter): 10.95 - 27.22; • Texture (standard deviation of gray-scale values): 10.38 - 39.28; • Perimeter: 71.90 - 182.10; • Area: 361.60 - 2250; • Smoothness (local variation in radius lengths): 0.075 - 0.145; • Compactness (perimeter^2/area-1.0): 0.046 - 0.311; • Concavity (severity of concave portions of the contour): 0.024 - 0.427; • Concave points (number of concave portions of the contour): 0.020 - 0.201; • Symmetry: 0.131 - 0.304; • Fractal dimension (coastline approximation-1): 0.050 - 0.097. 5.1.2. Wisconsin Breast Cancer Prognosis Dataset The prognostic problem has two outcomes (non-recurrent with 151 samples and recurrent with 47) and has the same 30 attributes measured for breast images in the diagnostic situation, plus three more: • Time (recurrence time if class is recurrent, diseasefree time if non-recurrent): 1 - 125; • Tumor size—diameter of the excised tumor in centimeters: 0.400 - 10.00; • Lymph node status—number of positive axillary lymph nodes observed at time of surgery: 0 - 27. 5.2. Experimental Results In this section we present experimental results of the proposed method through fifty times of trials. The proposed method for breast cancer diagnosis and prognosis were implemented by using the R program language version 2.1.5.2 with RF package. The both of datasets were divided randomly into training set and validation set in the ratio of 1 to 1. The parameters of the proposed method in this experiment were determined as follows: • Number of trees in RF: 25; • Number of remaining features: 15—the proposed method will be stop if number of remaining features in dataset is greater or equal number of remaining features. The proposed method was experimented 50 times on both WBCDD and WBCPD. Table 1 shows the results of the proposed method. Figures 4 and 5 present performances of proposed method on training set and validation set of WBCDD and WBCPD. Table 2 shows number of original features and number of features were selected. The result indicates that the feature subsets selected by the proposed approach have a better classification performance than that produced by the original RF. Table 3 presents time-consuming of the proposed method. It should be noted that the proposed method was
{"url":"https://scirp.org/journal/paperinformation?paperid=31887","timestamp":"2024-11-06T21:59:12Z","content_type":"application/xhtml+xml","content_length":"142480","record_id":"<urn:uuid:510ec9ee-1aac-4d3d-9488-357196a2b93e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00354.warc.gz"}
Data Analysis Examples The pages below contain examples (often hypothetical) illustrating the application of different statistical analysis techniques using different statistical packages. Each page provides a handful of examples of when the analysis might be used along with sample data, an example analysis and an explanation of the output, followed by references for more information. These pages merely introduce the essence of the technique and do not provide a comprehensive description of how to use it. The combination of topics and packages reflect questions that are often asked in our statistical consulting. As such, this heavily reflects the demand from our clients at walk in consulting, not demand of readers from around the world. Many worthy topics will not be covered because they are not reflected in questions by our clients. Also, not all analysis techniques will be covered in all packages, again largely determined by client demand. If an analysis is not shown in a particular package,this does not imply that the package cannot do the analysis, it may simply mean that the analysis is not commonly done in that package by our clients. Stata SAS SPSS Mplus R Regression Models Robust Regression Stata SAS R Models for Binary and Categorical Outcomes Logistic Regression Stata SAS SPSS Mplus R Exact Logistic Regression Stata SAS R Multinomial Logistic Regression Stata SAS SPSS Mplus R Ordinal Logistic Regression Stata SAS SPSS Mplus R Probit Regression Stata SAS SPSS Mplus R Count Models Poisson Regression Stata SAS SPSS Mplus R Negative Binomial Regression Stata SAS SPSS Mplus R Zero-inflated Poisson Regression Stata SAS Mplus R Zero-inflated Negative Binomial Regression Stata SAS Mplus R Zero-truncated Poisson Stata SAS R Zero-truncated Negative Binomial Stata SAS Mplus R Censored and Truncated Regression Tobit Regression Stata SAS Mplus R Truncated Regression Stata SAS R Interval Regression Stata SAS R Multivariate Analysis One-way MANOVA Stata SAS SPSS Discriminant Function Analysis Stata SAS SPSS Canonical Correlation Analysis Stata SAS SPSS R Multivariate Multiple Regression Stata SAS SPSS Mplus Mixed Effects Models Generalized Linear Mixed Models Introduction to GLMMs Mixed Effects Logistic Regression Stata R Latent Class Analysis Stata Mplus Power Analyses For grants and proposals, it is also useful to have power analyses corresponding to common data analyses. We have examples of some simple power analyses below.
{"url":"https://stats.oarc.ucla.edu/other/dae/","timestamp":"2024-11-03T03:33:34Z","content_type":"text/html","content_length":"51418","record_id":"<urn:uuid:66b9c26c-1ae5-40b0-a30c-b31d2034fcf9>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00702.warc.gz"}
ake 5 squares from 6 Submitted by Atanu Chaudhuri on Wed, 27/11/2019 - 14:44 6 squares matchstick puzzle: Move 2 sticks to make 5 squares Move 2 sticks to make 5 squares from the 6 squares matchstick puzzle figure. How many ways can you do it? You have 10 minutes to solve the puzzle. Restriction: The solution matchstick figure must not have any overlapping or loose sticks that are not a part of any square. Enjoy solving the problem. If you can’t find the solution or feel curious about how we have solved the puzzle step by step, the solution for you follows. Solution to the 6 squares matchstick puzzle: Move 2 sticks to make 5 squares Analysis of the structure and you get assurance of an easier way to the solution At first, the problem seems to be complex because of the relatively large number of squares and sticks. Do you think the problem would be difficult to solve? Instead of just looking at the puzzle figure focus on what figure finally you have to form—it is just a 5 square figure. There is a small difference in the number of squares between the starting figure and the target solution figure. On top of this, now you notice, you have to move only 2 matchsticks. These two facts assure you that the problem should not be difficult to solve after all. This is what we call confidence boosting assurance of a not so difficult solution. It helps you to solve the problem quickly and easily by boosting your confidence. Analysis of the structure to evaluate possible approaches and deciding on the most promising approach: Move 2 sticks to make 5 squares matchstick puzzle You know of two main approaches of solving matchstick puzzles—End State Analysis Approach and Common stick analysis approach coupled with deductive reasoning. The first is the End State Analysis Approach in which the promising possible end configurations are compared one by one with the starting puzzle figure for maximum similarity. This approach gives you quick and clean solution when the possible end states are few. More frequently, this condition is satisfied when the number of sticks are just enough to form independent squares or triangles without any sharing or common stick between two closed shapes (square or triangle). Two of the notable puzzles of this type we have solved earlier are the 5 squares to 4 squares in 2 stick moves and 5 squares to 4 squares in 3 stick moves. In this puzzle of ours, the total number of sticks is 18, which is 2 short of the maximum number of sticks required to form 5 independent squares. There will then be 2 shared or common sticks in the final solution figure. Here, it may not be easy to imagine possible solution figures. So we decide to follow the second approach of common stick analysis coupled with deductive reasoning. Initial conclusions from common stick analysis and deductive reasoning: Move 2 sticks to make 5 squares matchstick puzzle In the second approach, the number of common sticks in the starting puzzle figure is compared with the number in the solution figure. The number of common sticks is 6, quite a large number, in the starting figure. And the number of common sticks is 2 in the solution figure. The figure below identifies the common sticks in the starting 6 squares. As you know—each common stick reduces the number of sticks required to form 5 squares by 1. The number of sticks required to form 5 independent squares with no common stick is, 5×4=20. When 5 squares are to be formed by the existing 18 sticks, 20−18=2 number of sticks must be common between two pairs of squares. So it is surprising that in only 2 stick moves you have to reduce the number of common sticks by 4. What does it signify? It implies, At least in 1 stick move you should target to nullify the maximum number of common sticks—more the merrier. With this initial knowledge, try to find out how you can reduce the number of common sticks to the maximum extent with 1 single stick move. It should be easy to identify the square marked A contributing 4 common sticks to the puzzle figure of 6 squares. If you remove any of the four sides destroying this square, 4 common sticks would be nullified at a single move. So take up stick move analysis on this square A that you have identified as your target of attention by common stick analysis. It is decided not to disturb the stick number 3 and 4 as both would create 3 orphan sides of a square—impossible to set right in 1 remaining move. What about the sticks 1 and 2? Removing any of these two would create 2 numbers of orphan sticks. In addition, in each of these two options, 5 common sticks are eliminated. Each of these two choices should give us a solution. We call these as the most promising actions for solution. Select stick 1 first for moving as a trial. Second stage: Selecting stick 1 as first stick to move: First solution to Move 2 sticks to make 5 squares matchstick puzzle The result of selecting the stick 1 as first move is shown below. The sticks around are numbered for easy reference. Now with stick 1 free, you have no other option than to select the stick 5 for the second move. Notice that now the figure has only 1 common stick. The solution is now clear. Just complete the incomplete square E with the two sticks 1 and 5 moved. It already has two existing sides, 6 and 7. This is the first solution. Notice that stick 7 has taken up a new role of the second common stick so that conditions for creating 5 squares with 18 sticks are fully satisfied. How many solutions to the Move 2 sticks to make 5 squares matchstick puzzle? You already know that if you had moved stick 2 first instead of stick 1, there could have been a second solution. And indeed it is so. The second and last solution by moving stick 2 first and then stick 8 next is shown. To us, these are the only two possible solutions. First and most critical reason that these two are the only possible solutions is, No solution would be possible without destroying the square A that we identified in the beginning. You may try destroying any square other than square A first. Try out. After all, this is not maths! It might yet be possible. Think other way round: Create a new puzzle for your friends Now that you know clearly how 5 squares can be formed from 6 squares by moving two sticks, it should also be possible to form 6 squares from any of the two of our final solution figures made up of 5 squares, isn’t it? And that would easily be a new puzzle that you may entertain your friends with. End note Last, to solve matchstick puzzles you don’t need to know maths or any other subject. You just have to identify key patterns and use your inherent analytical reasoning skills to home in to the solution with assurance and speed. The way to the solution, the approach, the thinking are more important than the solution itself. The concepts and methods stay with you and are enriched as you solve more and more problems. Know how to solve difficult problems easily without wasting time on random attempts Our ebook on puzzle solutions by innovative methods will show you just that. (BUY from Apple books, Barnes & Noble, Rokuten Kobo, Vivlio, Angus & Robertson, Tolino, PayHip and others) BUY the eBook Amazon Kindle version here, from Google Play here and Paperback here. BUY Creative Matchstick Puzzles Innovative Solutions eBook Amazon Kindle version BUY the paperback here. Puzzles you may enjoy Easy to hard brain teasers with systematic solutions Challenging brain teasers with solutions: Long list. This will always be the most up-to-date full list with the brain teasers classified into categories that can be browsed separately. You may also click on the category term link below to enjoy the brain teasers that are classified in the present category. For example, if the category term link shown below is "Riddle", click on it to go through all the Riddles.
{"url":"https://suresolv.com/brain-teaser/make-5-squares-6-2-stick-moves-6-square-matchstick-puzzle","timestamp":"2024-11-04T04:43:12Z","content_type":"text/html","content_length":"40355","record_id":"<urn:uuid:26dc18a1-b17f-4d92-a5a9-290d15ec2817>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00024.warc.gz"}
A Guide to Mastering Calculus Assignments: Key Concepts and Effective Strategies 1. A Guide to Mastering Calculus Assignments: Key Concepts and Effective Strategies A Comprehensive Guide to Mastering Calculus Assignments August 14, 2023 Nathan Jacobs United States Nathan is a professional calculus assignment writer with a degree in mathematics from Western University. He has worked with more than 900 students. Calculus, a fundamental branch of mathematics, is both a powerful tool and a challenging subject. Whether you are a student preparing for an upcoming calculus assignment or someone looking to enhance their understanding of this fascinating field, this comprehensive guide will equip you with the essential topics and effective strategies to succeed in solving calculus assignment . From understanding key concepts like functions, limits, and differentiation to applying integration techniques and tackling real-world problems, this blog will be your go-to resource for navigating through calculus assignments with confidence and precision. Let's dive into the world of calculus and unlock its secrets together! Preparing for a Successful Assignment on Calculus Calculus is a fundamental branch of mathematics that deals with the study of change and motion. It plays a crucial role in various scientific and engineering fields, including physics, engineering, economics, and computer science. Before diving into your calculus assignment, it is essential to have a solid understanding of some key topics. Key Concepts in Calculus 1. Functions and Graphs Understanding functions and their graphs is the foundation of calculus. A function is a rule that assigns each input (x) to a unique output (y). Graphing functions helps visualize their behavior, identify key points, and comprehend how they change over different intervals. 2. Limits Limits represent the value a function approaches as its input gets closer to a specific point. It is a fundamental concept in calculus and forms the basis for differentiation and integration. Familiarize yourself with the concept of one-sided and two-sided limits, continuity, and how to evaluate limits algebraically and graphically. 3. Differentiation Differentiation involves finding the rate at which a function changes. It is crucial for understanding slopes, rates of change, and optimization problems. Learn how to find derivatives of various functions using rules like the power rule, product rule, quotient rule, and chain rule. 4. Applications of Differentiation Understanding how to apply differentiation to real-world scenarios is essential. Applications include finding maximum and minimum values, optimization problems, related rates problems, and graph sketching. 5. Integration Integration is the reverse process of differentiation and helps find the accumulation of quantities over intervals. Learn about different integration techniques such as u-substitution, integration by parts, and trigonometric substitutions. 6. Applications of Integration Integration has various applications, such as finding areas under curves, computing volumes of solids of revolution, work and energy problems, and calculating average values of functions. Tips to Solve Calculus Assignments Effectively 1. Understand the Problem Understanding the problem is a crucial step in effectively tackling any calculus assignment. Before delving into solving equations or applying formulas, take the time to carefully read and analyze the problem's requirements and constraints. Break down the problem into its essential components, identifying the key variables and relationships involved. Comprehending the problem ensures that you grasp its context and objectives, allowing you to choose the appropriate calculus techniques and methods. It prevents wastage of time and effort on irrelevant approaches and helps you focus on the core concepts required for the solution. Additionally, understanding the problem enables you to devise a step-by-step plan for solving it. Outline the approach you will take and visualize the possible outcomes, making the problem-solving process more structured and efficient. When faced with complex assignments, consider rephrasing the problem in your own words or creating diagrams to aid your understanding. If the problem remains unclear, seek clarification from your instructor or peers to avoid misconceptions. 2. Review Relevant Concepts Reviewing relevant concepts is a crucial step in preparing for calculus assignments. Before diving into problem-solving, it involves revisiting the fundamental principles and theorems related to the specific topics covered in the assignment. This process helps to refresh one's memory and solidify the understanding of the underlying principles, ensuring a strong foundation to approach more complex problems. Reviewing relevant concepts also aids in recognizing the appropriate techniques and methods to apply in various scenarios. For instance, when faced with a differentiation problem, a thorough understanding of differentiation rules and the properties of different functions can guide the selection of the correct differentiation technique. Moreover, this step enables students to identify any knowledge gaps or areas of weakness. By acknowledging these shortcomings, students can focus on targeted learning and practice to address those specific areas, ultimately leading to improved problem-solving skills. Additionally, reviewing relevant concepts enhances problem-solving efficiency. Familiarity with essential formulas and concepts allows students to approach problems with confidence, reducing the time spent on unnecessary trial and error. 3. Practice Regularly Practicing regularly is a fundamental aspect of mastering calculus and developing proficiency in problem-solving. Through consistent practice, students gain confidence in applying various concepts, techniques, and formulas, which are essential for tackling complex calculus assignments. Repetition not only reinforces knowledge but also enhances problem-solving skills, allowing individuals to approach challenges with greater ease and efficiency. Regular practice exposes learners to a wide range of calculus problems, each presenting unique intricacies and applications. As students work through diverse scenarios, they become more adept at recognizing patterns and identifying suitable approaches for different types of questions. Moreover, practicing regularly enables students to refine their understanding of underlying concepts, which is crucial for tackling more advanced problems in calculus. Consistency in practice also helps students improve their time management during exams and assignments. As they familiarize themselves with various problem structures, they become better at strategizing and allocating time appropriately for each task. This prevents them from feeling overwhelmed and ensures they can effectively complete the required calculations and explanations within the given timeframe. 4. Work Step-by-Step Working step-by-step is an essential approach in solving calculus problems that involves breaking down complex tasks into smaller, more manageable parts. When tackling calculus assignments, it is crucial to demonstrate a clear and organized thought process. By showing every intermediate step in your solution, you not only make it easier for yourself to track your progress but also enable your instructor to understand your reasoning and provide feedback. When working step-by-step, begin by identifying the problem's given information and what needs to be found. Then, outline your approach and choose the appropriate calculus techniques or formulas to apply. As you proceed, show each calculation, simplification, or substitution you make, ensuring you do not skip any essential steps. This approach allows you to catch any mistakes early on and rectify them before reaching the final answer. Moreover, displaying your work step-by-step helps you earn partial credit for your efforts, even if your final answer is incorrect. In calculus, the process of problem-solving is as crucial as the correct answer, as it demonstrates your understanding of the concepts. So, embrace the habit of working step-by-step in your calculus assignments to boost your problem-solving skills, build confidence, and ultimately achieve success in your calculus endeavors. 5. Check Your Answers Checking your answers is a crucial step in solving calculus problems. After completing a calculus assignment or any mathematical task, it is essential to verify the accuracy of your solutions. By checking your answers, you can ensure that you haven't made any computational errors or overlooked important details. The process of checking your answers involves reevaluating the steps you took to arrive at the final solution. Start by reviewing the problem statement to confirm that you correctly interpreted the question and addressed all its requirements. Then, work through the solution step-by-step, comparing each intermediate result to the initial calculations. Additionally, you can use alternative methods to verify your answers. For example, if you used differentiation to find the maximum value of a function, you can try using integration to check if you arrive at the same result. This approach not only serves as a cross-verification method but also reinforces your understanding of different calculus techniques. Checking your answers is especially crucial in calculus, as small errors can lead to significantly different outcomes. By being meticulous in your verification process, you can catch and rectify any mistakes, ensuring the accuracy and credibility of your work. Ultimately, this practice contributes to a deeper grasp of calculus concepts and boosts your confidence in approaching future 6. Seek Help When Needed Seeking help when needed is a crucial aspect of successful learning and problem-solving in calculus. While working on assignments, students may encounter challenging problems or get stuck on specific concepts. In such situations, seeking assistance is not a sign of weakness but rather a proactive approach to overcome obstacles and deepen understanding. There are several avenues to seek help in calculus. The first resource is often the instructor or professor. They are there to guide and support students throughout their academic journey. Don't hesitate to approach them during office hours or via email with specific questions or doubts related to the assignment. Additionally, forming study groups with classmates can be immensely beneficial. Collaborating with peers allows for the exchange of ideas and approaches to problem-solving. Explaining concepts to others also reinforces your own understanding. Online forums and communities dedicated to calculus can also provide valuable insights. Many websites offer interactive problem-solving sessions and discussions, allowing you to engage with experienced individuals who can provide step-by-step explanations. By seeking help when needed, students can gain clarity on complex topics, build confidence, and develop essential problem-solving skills. Embracing a proactive attitude towards seeking help will ultimately contribute to a deeper understanding of calculus concepts and pave the way for academic success. 7. Avoid Plagiarism Avoiding plagiarism is crucial in any academic or professional setting. Plagiarism is the act of using someone else's ideas, words, or work without proper acknowledgment or giving credit to the original author or source. It is considered unethical and can lead to severe consequences, including academic penalties and damage to one's reputation. To avoid plagiarism, it is essential to properly cite and reference all sources used in your work, including books, research papers, websites, and other publications. When quoting directly from a source, use quotation marks and provide a citation. When paraphrasing or summarizing information, ensure that you reword the content in your own words and still give credit to the original Using citation styles such as APA, MLA, or Chicago can help maintain consistency and accuracy in your citations. Additionally, keeping track of your sources from the beginning of your research process will make it easier to create a comprehensive bibliography or reference list. Remember that plagiarism extends beyond just written content; it includes using images, graphs, and other multimedia without proper attribution. When in doubt, always err on the side of caution and give credit to the original creators. By avoiding plagiarism, you not only uphold academic integrity but also demonstrate respect for the work of others and contribute to the advancement of knowledge through ethical research By familiarizing yourself with the fundamental topics in calculus and adopting effective problem-solving strategies, you can tackle your calculus assignments with confidence. Remember to practice regularly, seek help when needed, and always strive to understand the underlying concepts. With dedication and perseverance, you'll develop a strong foundation in calculus and excel in your academic
{"url":"https://www.mathsassignmenthelp.com/blog/a-comprehensive-guide-to-mastering-calculus-assignments/","timestamp":"2024-11-02T02:05:23Z","content_type":"text/html","content_length":"83302","record_id":"<urn:uuid:0532dfe7-ffa9-462f-93a4-2235c1f50794>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00554.warc.gz"}
[LON-CAPA-cvs] cvs: loncom /html/adm/help/tex Spreadsheet_Hierarchy.tex [LON-CAPA-cvs] cvs: loncom /html/adm/help/tex Spreadsheet_Hierarchy.tex lira lira at source.lon-capa.org Sat Jan 2 23:48:01 EST 2016 lira Sun Jan 3 04:48:01 2016 EDT Modified files: /loncom/html/adm/help/tex Spreadsheet_Hierarchy.tex improve clarity and mention multiple assessment spreadsheets for categories. Index: loncom/html/adm/help/tex/Spreadsheet_Hierarchy.tex diff -u loncom/html/adm/help/tex/Spreadsheet_Hierarchy.tex:1.3 loncom/html/adm/help/tex/Spreadsheet_Hierarchy.tex:1.4 --- loncom/html/adm/help/tex/Spreadsheet_Hierarchy.tex:1.3 Tue Aug 8 17:47:27 2006 +++ loncom/html/adm/help/tex/Spreadsheet_Hierarchy.tex Sun Jan 3 04:48:01 2016 @@ -1,6 +1,6 @@ -There are three different types of spreadsheets inside LON-CAPA: Assessment Spreadsheets, Student Spreadsheet, and Course Spreadsheet. +There are three different types of spreadsheets inside LON-CAPA: Assessment Spreadsheets, Student Spreadsheet, and Course Spreadsheet. @@ -11,24 +11,27 @@ or exam. Each student has an assessment spreadsheet for every assessment in the course. The assessment spreadsheets provide data to their parent, the student level -spreadsheet, via the \textbf{export row} (\ref{Spreadsheet_Export_Row}). +spreadsheet, via the assessment spreadsheet \textbf{export row} (\ref{Spreadsheet_Export_Row}). +Multiple assessment spreadsheets can be used in cases where the instructor configures +grade categories. \item \textbf{The Student Spreadsheet} -The student level spreadsheet presents data to the students on their -performance in the course. -There is only one possible student level spreadsheet definition, +The student level spreadsheet summarizes the assessment spreadsheet data for the students. +There is only one default student level spreadsheet definition, but the data varies for each student. +The student spreadsheet is programmed to permit the instructor to assign different +assessment spreadsheets for different grade categories or other uses. The data present in the student spreadsheet is imported from the \textbf{export row} (\ref{Spreadsheet_Export_Row}) of each assessment spreadsheet for all of the assessments currently in the course. The student spreadsheets pass summary data to the course-level spreadsheet -via the \textbf{export row} (\ref{Spreadsheet_Export_Row}). +via the student spreadsheet \textbf{export row} (\ref{Spreadsheet_Export_Row}). \item \textbf{The Course Spreadsheet} The course spreadsheet contains all of the summary data for the students in -the course. There is only one course spreadsheet. +the course. More information about the LON-CAPA-cvs mailing list
{"url":"http://mail.lon-capa.org/pipermail/lon-capa-cvs/Week-of-Mon-20151228/026904.html","timestamp":"2024-11-13T15:23:20Z","content_type":"text/html","content_length":"5505","record_id":"<urn:uuid:e4f195be-2a35-4560-bf7f-4d945516bc11>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00165.warc.gz"}
Narrow Art Gallery Problem F Narrow Art Gallery A long art gallery has $2N$ rooms. The gallery is laid out as $N$ rows of 2 rooms side-by-side. Doors connect all adjacent rooms (north-south and east-west, but not diagonally). The curator has been told that she must close off $k$ of the rooms because of staffing cuts. Visitors must be able to enter using at least one of the two rooms at one end of the gallery, proceed through the gallery, and exit from at least one of the two rooms at the other end. Therefore, the curator must not close off any two rooms that would block passage through the gallery. That is, the curator may not block off two rooms in the same row or two rooms in adjacent rows that touch diagonally. Furthermore, she has determined how much value each room has to the general public, and now she wants to close off those $k$ rooms that leave the most value available to the public, without blocking passage through the gallery. Input will consist of multiple problem instances (galleries). Each problem instance will begin with a line containing two integers $N$ and $k$, where $3 \leq N \leq 200$ gives the number of rows, and $0 \leq k \leq N$ gives the number of rooms that must be closed off. This is followed by $N$ rows of two integers, giving the values of the two rooms in that row. Each room’s value $v$ satisfies $0 \leq v \leq 100$. A line containing $0~ 0$ will follow the last gallery. For each gallery, output the amount of value that the general public may optimally receive, one line per gallery. Sample Input 1 Sample Output 1 Sample Input 2 Sample Output 2 Sample Input 3 Sample Output 3
{"url":"https://open.kattis.com/contests/zorfy7/problems/narrowartgallery","timestamp":"2024-11-11T07:04:12Z","content_type":"text/html","content_length":"31724","record_id":"<urn:uuid:843bfbe0-8aa8-4ebb-afb7-f2f37bb407f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00245.warc.gz"}
Felicia Tabing Felicia Tabing Assistant Professor (Teaching) of Mathematics University of Southern California Los Angeles, California, USA For the past few years I've been creating work to show how I experience grapheme-color synesthesia to represent mathematical ideas in how I experience them in my mind. For example, each numeral has a color I associate to them, such as 3, which I imagine as a light pink color. I use the associated coloring to create work that represents special mathematical numbers as accurately as possibly to how I view them in my mind. I am experimenting with different media, such as watercolor pencil, acrylic paint, gouache, marker, and pen and pencil to get the right color effect and personality that a number has to me. I also use the idea of proofs without words to represent convergent series as a way to represent special numbers. 28.0 x 38.0 cm Gouache and pencil on watercolor paper This represents the geometric series that converges to the number 2/3. The larger rectangle framing the image represents area of unit 1. The construction lines are a technique I learned from perspective drawing in dividing a rectangle into n pieces, and are kept in the image to demonstrate the infinite sum, although only a partial sum is represented. As the series starts as "1–1/2+1/4", the rectangle was divided into two, and half subtracted with a fourth added, continuing through the sum. The negative white space what subtracted out of the sum in total, while the colored-in rectangles are what remain in the infinite sum. The colors used represent "two-thirds", with two represented by the yellow, and three represented by pink.
{"url":"https://gallery.bridgesmathart.org/exhibitions/2023-bridges-conference/felicia-tabing","timestamp":"2024-11-14T14:23:02Z","content_type":"text/html","content_length":"30259","record_id":"<urn:uuid:1fc214a7-73ba-4c64-ba96-c49abd9ffd98>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00772.warc.gz"}
The Greatest Indian Astronomers Every Geek Will Recognize (5th & 6th Century Indian Mathematician and Astronomer who Calculated the Value of Pi) Birthdate: 0476 AD Birthplace: Assaka Died: 0550 AD Aryabhata was a prominent mathematician-astronomer and physicist from classical Indian mathematics and astronomy. He is recognized for his significant works, including the Āryabhaṭīya and the Arya-siddhanta. Notably, he was one of the first to mention the relativity of motion, establishing himself as a major early physicist. His contributions have had a lasting impact on the fields of mathematics, astronomy, and physics, solidifying his legacy as a pioneer in Indian scientific thought during his time.
{"url":"https://www.thefamouspeople.com/indian-astronomers.php","timestamp":"2024-11-09T10:24:57Z","content_type":"text/html","content_length":"182288","record_id":"<urn:uuid:8e73634c-09c4-4af6-9839-47aaf1644256>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00601.warc.gz"}
In the Pareto world, liars prosper — LessWrong This is a simple picture proof to show that if there is any decision process that will find a Pareto outcome for two people, it must be that liars will prosper: there are some circumstances where you would come out ahead if you were to lie about your utility function. Apart from Pareto, the only other assumption it needs are that if the data is perfectly symmetric, then the outcome will be symmetric as well. We won't even need to use affine independence or other scalings of utility functions. Now, given Pareto-optimality, symmetry allows us to solve symmetric problems by taking the unique symmetric Pareto option. Two such symmetric problems presented here, and in one of them, one of the two players must be able to prosper by lying. So first assume Pareto-optimality, symmetry, and (by contradiction) that liars don't prosper. The players are x and y, and we will plot their utilities in the (x,y) plane. The first setup is presented in this figure: There are five pure choices here, of utilities (0,1), (0.95,0.95), (1,0) and the non-Pareto optimal ones at (0.6,0.6) and (0.55,0.55). By symmetry and Pareto-optimality, we know that the outcome has to be (0.95,0.95). Now player y is going to lie. If Player y can force the outcome off the green line and onto the blue line, then he will profit by lying. He is going to claim that the choice (0.95,0.95) actually only gives him a utility of (0.4), and so is at (0.95, 0.4). This results in this diagram: The Pareto optimal boundary of this is: Now, the new outcome must be on the green segment somewhere (including the end points). Or else, as we have seen, player y will have profited by lying. Got that? If liars don't prosper, then the outcome for the above diagram must be on the green segment. Now let's consider a new setup, namely: This has choices with utility (1,0), (0.55,0.55), (0,1) and non-Pareto optimal choices (0.4,0.6) and (0.6,0.4). It is symmetric, so the outcome must be (0.55, 0.55) by Pareto-optimality. Now it's player x's chance to lie. She will lie on two of her choices, claiming that (0.4,0.6) is actually at (0.6,0.6) and that (0.6,0.4) is actually at (0.95, 0.4): You will no doubt be astounded and amazed to realise that this setup is precisely the same as in the third figure! Now, we know that the outcome for that must lie along the green line between (1,0) and (0.95,0.4). Translating that green line back into the real utility for x, you get: Any point on that line is better, from x's perspective, than the standard outcome (0.55,0.55) (she will get at least 0.6 in utility on the green line). So, if we accept Pareto-optimality and symmetry, then one of the players has to be able to profit by lying in certain situations. Pareto-optimality is required: if you waive that condition, then some non-Pareto solutions such as "flip a coin, and the winner gets to decide the outcome" do not allow liars to prosper. The other assumption is that they liar can make himself believed. Which means there is another assumption of imperfect information across the game. I'd prefer to see it as "there is an incentive to lie, if they can get away with it". I wish there was a better science of the economics of imperfect information, or if there is that I would know about it. Mechanism design? BTW, this is a special case of a mechanism design result known as the Myerson-Satterthwaite theorem. It's not a novel result by any means. The Gibbard-Satterthwaite theorem has a similar flavor, though it is in fact a separate result--for one thing, it applies to voting systems with three or more agents. It seems likely that a very important part of the trading on securities, investment, and derivative markets is completely driven by imperfect information, by information asymmetries or at minimum believe asymmetries. Sure. Most people involved in the financial markets reasonably closely approximate utility-maximizing agents with utility defined in money. So with the exception of when someone has to raise cash to pay for some external expense, every trade in the financial markets is one where the seller thinks what he has is worth less than what he's selling it for, and the buyer thinks it's worth more than what he's paying. (A trade itself has various transaction costs, so if you think it's really an even swap, it's automatically a loss of to trade.) So the vast majority of trading activity is directly a matter of belief asymmetries. (One way Warren Buffet makes his money, by the way? Berkshire Hathaway has a pattern and practice of watching for profitable small businesses that have to be sold by the heirs to cover the external expense of estate taxes. This exploits the fact that small business heirs, unlike heirs of diversified stock portfolios, do not generally have access to efficient, highly-competitive markets for their inherited assets. It's one of the few ways to make money on the financial markets without having to have consistently more-accurate beliefs about future prices than the other participants in the market; you instead buy assets for a price lower than both parties think they're worth, but which the selling party cannot practically refuse to sell.) I think of politics, where I get so frustrated that politicians can say the most blatantly untrue things, and I can see them as such, but those things gain them votes. That's only confusing if you are conflating professing and cheering. It's not about convincing anyone of policy issues, it's about convincing people that you're on their team. I agree that it can certainly be frustrating that this is how people work... Does the liar need to know the other player's utility function in order to lie correctly? Yes. He also needs to know the option set. Looks good! Nice use of the x and y labels. I still don't understand. The idea that the (possibly symmetric) outcome must not make unilateral deviations profitable is just the idea of Nash equilibrium. Do you think it shouldn't be used? Oh, I see. That's why the straight lines are significant: they show that no mixture involving the (.6,.6) point is optimal. Thanks for explaining. As Stuart_Armstrong explains to me on a different thread, the decision process isn't necessarily picking one of the discrete outcomes, but can pick a probabilistic mixture of outcomes. (.6,.6) doesn't appear Pareto-optimal because it's dominated by, e.g., selecting (.95, .4) with probability p=.6/.95 and (0,1) with probability 1-p. (Assuming that it stays on the line of 'what is possible', in any case a higher Y than otherwise, but finding it then according to the constant X--1 - ((19/31) * (1/19)), 30/31, yes...) I confess I do not understand the significance of the terms mixed outcome and weighted sum in this context, I do not see how the numbers 11/31 and 20/31 have been obtained, and I do not presently see how the same effect can apply in the second situation in which the relative positions of the symmetric point and its (Pareto?) lines have not been shifted, but I now see how in the first situation the point selected can be favourable for Y! (This representing convincing of the underlying concept that I was doubtrful of.) Thank you very much for the time taken to explain this to me! New Comment 33 comments, sorted by Click to highlight new comments since: The other assumption is that they liar can make himself believed. Which means there is another assumption of imperfect information across the game. I wish there was a better science of the economics of imperfect information, or if there is that I would know about it. It seems likely that a very important part of the trading on securities, investment, and derivative markets is completely driven by imperfect information, by information asymmetries or at minimum believe asymmetries. And then what of lying in multiplayer games? I think of politics, where I get so frustrated that politicians can say the most blatantly untrue things, and I can see them as such, but those things gain them votes. Looks correct and very nice! I tried to think of a simpler proof, but couldn't find any. Does the liar need to know the other player's utility function in order to lie correctly? I have seen this idea before in a talk about cake-cutting algorithms -- but there, the liar risked ending up worse off if she mis-estimated the utility functions of the other players. I get red x's instead of the figures. Hopefully solved now. I can't seem to see your images. I just see the filenames. Are they working for you, or is this a problem on my end? They were working for me, but I tried reloading them in a different format now. Hope it works for you now! Confirmed, the new format does make the images visible for me. Introducing liars breaks the symmetry, so that they could just as well be bargaining about which mixed (and so non-symmetric) solution on the original Pareto frontier to play. Introducing liars breaks the symmetry I don't understand this comment. The decision procedure is specified in terms of the players' stated utility values, which can already contain lies. It seems reasonable to demand that the procedure should yield a symmetric outcome when given symmetric input. If lies are seen as strategic considerations, they should be part of the decision problem. I agree that technically we can limit the scope of the official decision to something symmetric, but allowing non-symmetric things to affect this setup seems sufficiently similar to allowing non-symmetric things to happen within the setup, which makes motivation for Stuart's construction unclear to "Wave" should be "waive" in the last line. More substantively, I don't think I believe this claim: Now, the new outcome must be on the green segment somewhere (including the end points). Or else, as we have seen, player y will have profited by lying. Player y would gain .95 utility by being honest. Most of the blue segment is below y=.95. Edit: I could easily be missing something, but I think this invalidates the proof. Your statement about the blue line in diagram 1 does not hold for diagram 2, but your conclusion depends on it. The outcome (.5,.6) doesn't break any of your rules, but doesn't reward liars. Remember that player y is lying: the blue segment lies below y=0.95, but only for the fake values that y is claiming. In actual fact, that blue line is always above 0.95 (you can see this on the first diagram). Possibly my confusion lies in the way values are being re-normalized after player y lies. In diagram 2, consider the outcome (.5,.6). Even if we re-normalize that outcome by multiplying by the sum of y's real utilities and dividing by the sum of y's fake utilities, .6 * (3.1 / 2.55) =~ .73, well below the default outcome of .95. Am I doing that wrong? There's no need to renormalise: any outcome on the blue line is a probabilistic mixture between the (0,1) and (0.95,0.95) choices (to use the genuine utilities of these outcomes). This is better for y than the pure (0.95,0.95) option. Why not just state that the (0,1) point actually lies on (2,2), and therefore is the best choice? He can only lie about how much he values the point - not about how much the other player values it. I may be missing something: for Figure 5, what motivation does Y have to go along with perceived choice (0.95, 0.4), given that in this situation Y does not possess the information possessed (and true) in the previous situation that '(0.95, 0.4)' is actually (0.95, 0.95)? In Figure 2, (0.6, 0.6) appears symmetrical and Pareto optimal to X. In Figure 5, (0.6, 0.6) appears symmetrical and Pareto optimal to Y. In Figure 2, X has something to gain by choosing/{allowing the choice of} (0.95, 0.4) over (0.6, 0.6) and Y has something to gain by choosing/{allowing the choice of} (0.95, 0.95) over (0.6, 0.6), but in Figure 5, while X has something to gain by choosing/ {allowing the choice of} (0.6, 0.4) over (0.5, 0.5), Y has nothing to gain by choosing/{allowing the choice of} (0.95, 0.4) over (0.6, 0.6). Is there a rule(/process) that I have overlooked? Going through the setup again, it seems as though in the first situation (0.95, 0.95) would be chosen while looking to X as though Y was charitably going with (0.95, 0.4) instead of insisting on the symmetrical (0.6, 0.6), and that in the second situation Y would insist on the seemingly-symmetrical-and-(0.6, 0.6) (0.4, 0.6) instead of going along with X's desired (0.6, 0.4) or even the actually-symmetrical (0.5, 0.5) (since that would appear {non-Pareto optimal}/{Pareto suboptimal} to Y). The point of the proof is that if there is an established procedure that takes as input people's stated utilities about certain choices, and outputs a Pareto outcome, then it must be possible to game it by lying. The motivations of the players aren't taken into account once their preferences are stated. Rather than X or Y succeeding at gaming it by lying, however, it seems that a disinterested objective procedure that selects by Pareto optimalness and symmetry would then output a (0.6, 0.6) outcome in both cases, causing a -0.35 utility loss for the liar in the first case and a -0.1 utility loss for the liar in the second. Is there a direct reason that such an established procedure would be influenced by a perceived (0.95, 0.4) option to not choose an X=Y Pareto outcome? (If this is confirmed, then indeed my current position is mistaken. ) (0.6, 0.6) is not Pareto. The "equal Pareto outcome" is the point (19/31,19/31) which is about (0.62,0.62). This is a mixed outcome, the weighted sum of (0,1) and (0.95,0.4) with weights 11/31 and 20 /31. In reality, for y's genuine utility, this would be 11/31(0,1) + 20/31(0.95,0.95)=(19/31,30/31), giving y a utility of about 0.97, greater than the 0.95 he would have got otherwise.
{"url":"https://www.lesswrong.com/posts/vSgaExrundWJJBDmZ/in-the-pareto-world-liars-prosper","timestamp":"2024-11-06T05:34:29Z","content_type":"text/html","content_length":"696194","record_id":"<urn:uuid:1e663fa0-2b6b-4eb1-afc7-a9c4947590f5>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00003.warc.gz"}
Please note that this post is not intended to provide health advise and should only be used for educational purposes. The following explanation is based on the work of Andrew Cutler, PhD, who is specialiced in the field of heavy metal toxicity. More information about hair testing can be found in Hair Test Interpretation: Finding Hidden Toxicities. Hair tests are historically used to determine the health status of a patient. It can be used to determine whether certain minerals are depleted or whether there is heavy metal toxicity. The problem with the former is that hair mineral levels might not represent body mineral levels. Either way, we are not really interested in that. What we want to know is how to determine when a hair test points in the direction of heavy metal poisoning. The most common hair test is the ‘Toxic and Essential Elements’ test. This hair test measures the availability of toxins as well as the different essential elements in a hair sample. There are a wide range of heavy metals that can be the cause of a certain health condition. The metal that can cause the most problems is mercury. Mercury interferes with the process that is responsible for the regulation of mineral concentration and how your body transports minerals. While other toxins can cause certain mineral transports, mercury causes a ‘general’ impairment. This is what can be called ‘deranged mineral transport’. The hair test of people who are chronically mercury toxic usually show a mercury level that is within normal ranges. This is because these people have a hard time to get rid of mercury toxicity. Especially people who have detoxification problems will have a normal result for the toxic metals. This is why it can be hard to determine whether someone has mercury poisoning through a hair test. Because it is hard to determine mercury toxicity based on the concentration of toxic metals in a hair sample, it has to be determined based on the concentrations of the essential elements. Because mercury interferes with the process of mineral transport, the concentrations of the essential elements will look ‘odd’. Toxic metals The first part of a hair test contains the toxic metals. Because we are talking about toxic metals, there is only an upper bound. If one of the toxins is in the red part of the chart, it may indicate toxicity. The scales that are being used are comparisons with what is seen in the general population. The lab will make a distinction between different sexes and ages. This means that the concentration of a certain element may be higher than the concentration of anther element, but is categorized in a lower percentile. A quotation in the 95th percentile means that this specific element has a concentration in the upper 5% which means that only 1 in 20 people have a similar or higher result. Please notice that this is not a linear scale. The 95th percentile is much wider than the 68th percentile. Next example is a hair test I have done myself in 2014. As you see, there is nothing special to see. Only Uranium is in the 68th percentile and mercury and lead are somewhat elevated. Essential and other elements The second part of a hair test contains essential and other elements. The difference here is that there is also a lower bound. It’s important to notice that the concentration of the elements in hair samples don’t necessarily resemble the concentration of that element in the rest of the body, but it may be an indicator. How can one recognize ‘deranged mineral transport’? As mentioned before, the result of each element is compared with the overall population. This means that we can calculate the probability that a certain phenomenon can happen. There are a few phenomena that can happen in a hair test. • A lot of bars to the left or right. • A lot of bars in the red. • A small amount of bars in the green and white area. Whether a certain count of bars in a certain area is significant is determined by basic statistics. Take for example the amount of bars to the left/right. The chance that a certain bar is located in either the left or right side is 50 percent. Assume that you have a hair test of 5 elements. There are 2^5 different kind of results. The chance that all bars point to the right is 1/2^5, since there is only one way to have 5 bars to the right. The chance that 4 bars point to the right is 5/2^5, since there are 5 ways to have 4 bars to the right. Saying that there are 4 bars to the right is the same as saying that there is 1 bar to the left, which obviously has the same chance of 5/2^5. All those chances should sum up to 1. Obviously just calculating the chance that a certain amount of bars are to the left/right doesn’t say all that much. In the case of the hair test, there are 22 elements. 11 out of 22 elements to one side has a probability of 17% even though the hair test is perfectly normal. What we need to know is the probality that less than x amount of elements point to 1 side (or more, depending on how you want to calculate it). Which means that we have to count all the probabilities lower and equal to that amount. For 11 out of 22 this is exactly 0.58 which makes sense. This means that there is a 0.58 chance that there are less than 11 bars pointing to one side. If there would be 23 elements in total, the chance for 11 (or less) bars to 1 side is exactly 0.5. Think about why this is 🙂 Similar things can be done with the calculation for the bars in the red and in the green or white. The chance that a certain hair test appears with x bars in the red is 0.95^(amount-x)*0.05^x with ‘amount’ the total amount of elements. The chance that a certain hair test appears with x bars in the green or white area is 0.67^x*0.33^(amount-x) with ‘amount’ the total amount of elements. When is a certain result significant? Cutler doesn’t really talk about percentages, rather about the amount of bars that are ‘off’. • For the amount of bars to 1 side, 5 bars meet the counting rules (this means deranged mineral transport), 6 bars are suspicious, 7 are unusual, 8 not uncommon and 9 or higher is normal. • For the amount of bars in the red, 4 bars or more meet the counting rules, 3 bars is unusual, 2 is uncommon and 1 or less is normal. • For the amount of bars in the middle band, 11 or less meet the counting rules, 12 is suspicious, 13 is unusual, 14 is not uncommon and 15 or more is normal. All these numbers are based on the fact that there are 23 elements in total, while the hair test we are trying to analyze has 22 elements. A quick lookup teaches us that the probabilities between 23 and 22 elements aren’t much different, which means that these numbers can be used for a hair test with 22 elements. For those interested in the exact numbers for 23 elements, 5 bars or less to one side equal a chance of 0.6%, 4 bars or more in the red equal a chance of 2.6%, 11 bars or less in the middle area equal a chance of 4.5%. As we see in the example, one can quickly see that the amount of bars in the red (1) and the amount of bars in the middle (14/18, depending on whether you want to include the borderline cases) isn’t going to offer a statistical significant result. The amount of bars to 1 side is interesting. We see 7 bars to the right side which would be ‘unusual’. As you see, 5 of the 7 bars to the right are just slightly to the right, especially iodine. I have been supplementing with molybdenum and iodine in the recent past which may increase the levels of these elements. Because of this we only count 6 bars. This leads to a ‘suspicious’ hair test which has a probability of 2.6% (so it only occurs at random in 1 in 40 people). To represent it graphically: The high levels of zinc may be caused by a zinc displacement by other metals like cadmium, lead, copper and mercury resulting in a paradoxially elevated hair zinc. Zinc hair levels may not be reliable because a lot of shampoos contain zinc. The numbers that are published by Cutler are based on a hair test with 23 elements. More recent hair tests only include 22 essential elements (barium has been moved to the toxic metal area). Because of this, I’ve written a generic (doesn’t work anymore) that will calculate the probabilities of your specific hair test. The results of the hair test we discussed in this post are available (doesn’t work anymore). If there is a history of chronic conditions, mercury poisoning would be a very good explanation in this case. If your test comes up positive, it’s really recommended to read Amalgam Illness, Diagnosis and Treatment by Cutler Marijuana potentially cures cancer and the government knows about it The use of cannabis is known go back more than 7000 years ago and it was legal to use it until the beginning of the 20th century. The hemp variant of cannabis has an incredible number of uses. For centuries, the plant has been used for food, clothing and rope. Even the Contitution of the United States and the Declaration of Independence was written on hemp paper. Only in recent history marijuana was made illegal. Not because of scientific reasons, but mostly because of racist and later political reasons. How it all started During the Mexican revolution of 1910, a lot of Mexicans flooded the borders in the US. Mexicans were a very cheap workforce. During the depression of 1910-1911, tensions increased between small farms and large farms that used those cheap Mexicans. Many Mexicans smokes the plant and brought the plant with them. Most likely because of this event, California passed the first anti-marijuana law. Later, more states followed with marijuana prohibition laws which tended to be specifically targeted against the Mexican population. Between 1915 and 1937, marijuana was prohibited by a 27 states. One Texas legislator even said that “All Mexicans are crazy and this stuff is what makes them crazy.”. In 1930, a new governmental department was instated, the Federal Bureau of Narcotics. The head of the bureau was known for his controversial quotes. • “There are 100,000 total marijuana smokers in the US, and most are Negroes, Hispanics, Filipinos, and entertainers. Their Satanic music, jazz, and swing, result from marijuana use. This marijuana causes white women to seek sexual relations with Negroes, entertainers, and any others.” • “The primary reason to outlaw marijuana is its effect on the degenerate races.” • “Marihuana leads to pacifism and communist brainwashing” Only in 1937, the first national law was enacted, called to Marihuana Tax Act. It put regulations and restrictions on the sale of cannabis. Some critics believed that the point of this law was to reduce the size of the hemp industry which was primarily a threat to the paper industry. The Americal Medical Association opposed prohibition because a tax was imposed on the sale of medical cannabis. After the enactment, anyone in possession of cannabis could be arrested. Today, Latino and especially black communities are still subject to unreasonable drug enforcement and judicial practices. President Nixon In 1971, president Richard Nixon established a commission, lead by Raymond P. Shafer, to investigate the effects of marijuana and other drugs. The result of the commission was: • “No significant physical, biochemical, or mental abnormalities could be attributed solely to their marihuana smoking.” • “No verification is found of a causal relationship between marihuana use and subsequent heroin use.” • “In sum, the weight of the evidence is that marihuana does not cause violent or aggressive behavior; if anything marihuana serves to inhibit the expression of such behavior.” • “Neither the marihuana user nor the drug itself can be said to constitute a danger to public safety.” • “Marihuana’s relative potential for harm to the vast majority of individual users and its actual impact on society does not justify a social policy designed to seek out and firmly punish those who use it.” Nixon demanded that the commission changed its conclusions. Shafer ignored the demands and Nixon declined to appoint him to a pending federal judgeship. Nixon wasn’t happy with the results of the report and wanted a larger budget for his war on drugs. The main reason why he wanted to imprison pot smokers is because these were the people that were protesting against the Vietnam war. He thought he would get the support he wanted if he could prove that cannabis caused lung cancer like cigarettes do. He gave the Medical College of Virginia the order to investigate the effects of THC. 2 years later the study was completed. It turned out that when THC was ingested in highly concentrated forms, it will attack any mutated cells in your body while strengthening and rejuvenating the healthy cells. Nixon was outraged. He immediately classified the study. Recent times In 1999, the National Institute of Mental Health (which is a government institution) filed patent US6630507 which is titled “Cannabinoids as antioxidants and neuroprotectants”. The patent was approved in 2003. It states that Cannabinoids have been found to have antioxidant properties, unrelated to NMDA receptor antagonism. This new found property makes cannabinoids useful in the treatment and prophylaxis of wide variety of oxidation associated diseases, such as ischemic, age-related, inflammatory and autoimmune diseases. The cannabinoids are found to have particular application as neuroprotectants, for example in limiting neurological damage following ischemic insults, such as stroke and trauma, or in the treatment of neurodegenerative diseases, such as Alzheimer’s disease, Parkinson’s disease and HIV dementia. The invention also includes methods for using the antioxidants in prevention and treatment of pathological conditions such as ischemia (tissue hypoxia), and in subjects who have been exposed to oxidant inducing agents such as cancer chemotherapy, toxins, radiation, or other sources of oxidative stress. So it basically states that cannabinoids can be used to prevent and cure neurological and heart diseases. It is also an excellent drug to counter the symptoms created by so called “anti-cancer” drugs like chemotherapy. “Sixteen years ago, when I was diagnosed with multiple sclerosis, my doctor told me I’d be in a wheelchair in four years and dead by 56. The only reason I’m even able to even write this today is because my doctor, a world class neurologist, recommended medicinal marijuana.” – Montel Williams It’s a bit ironical that the US government owns a patent on the medical properties of marijuana, but makes it a Schedule I drug, which means that it has no medical properties. The current drug laws are not based on science but are the result of politics. Pharmaceutical companies have tried to mimic the behavior of THC by making syntethic variants that do not produce the ‘high’ without any success. Natural products cannot be patented, hence cannabis is not an interesting subject for the pharmaceutical industry. Pharmaceutical companies obviously rather want to sell their extremely expensive chemotherapy drugs. People who are not insured usually have to use all their life savings or even have to sell their house in order to pay for chemotherapy drugs. Even then, a huge part of the patients die from the damage that has been done to them. The major reason why marijuana is not legal is the fact that big pharma will loose billions doing so. Marijuana is a safe substitute for painkillers and sleeping drugs. As a matter of fact, marijuana is successfully being used by a lot of people who have chronic pains, inflammation and oxidative stress. There is not a single drug that is as effective long term. Furthermore, most painkillers and sleeping pills are addictive. Hemp, the cannabis variant with almost no THC is a more cheaper way to produce paper. Other cures Marijuana is not the only potential anti-cancer drug that is prohibited. Almost all alternative approaches that are a threat to the pharmaceutical industry are being surpressed or prohibited. Here is an excellent article about vitamin B17. A modern view on ‘Anatomy of the State’ This is a clarification of the most important sections from ‘Anatomy of the State’ by Murray N. Rothbard. The original masterpiece can be downloaded for free from the Mises Institute and is released under the Creative Commens license. The changes and additions to the original text are highlighted in light yellow. Some changes might be added later on. The most dangerous man to any government is the man who is able to think things out for himself, without regard to the prevailing superstitions and taboos. Almost inevitably he comes to the conclusion that the government he lives under is dishonest, insane and intolerable, and so, if he is romantic, he tries to change it. And even if he is not romantic personally he is very apt to spread discontent among those who are. – H. L. Mencken Krugman Is Evil This post is a response to a post made by Paul Krugman for The New York Times. Krugman starts with why bitcoin is not a good store of value. To be successful, money must be both a medium of exchange and a reasonably stable store of value. And it remains completely unclear why bitcoin should be a stable store of value. Brad DeLong puts it clearly: “Underpinning the value of gold is that if all else fails you can use it to make pretty things. Underpinning the value of the dollar is a combination of (a) the fact that you can use them to pay your taxes to the U.S. government, and (b) that the Federal Reserve is a potential dollar sink and has promised to buy them back and extinguish them if their real value starts to sink at (much) more than 2%/year (yes, I know).” Placing a ceiling on the value of gold is mining technology, and the prospect that if its price gets out of whack for long on the upside a great deal more of it will be created. Placing a ceiling on the value of the dollar is the Federal Reserve’s role as actual dollar source, and its commitment not to allow deflation to happen. Placing a ceiling on the value of bitcoins is computer technology and the form of the hash function… until the limit of 21 million bitcoins is reached. Placing a floor on the value of bitcoins is… what, exactly? What Krugman refers to is the fact that usage increases the value of a currency. He is right about that. Forcing people to use the dollar to pay taxes is what gives the dollar more stability. Usage increases value. This is because the free market works in both ways. If you buy a good or a service, its cost is the result of supply and demand. It’s pretty obvious to see this. But what gives currency value? It’s the same principle of supply and demand that gives currency value. If you need to use a currency for a certain purpose, you need to store it for a certain amount of time before you can use it. The fact that you need to store a unit of currency, is part of the demand. Look at it this way: Assume you have a million units of a currency. Distribute them over the whole population of a country. The value of one unit is directly correlated to the amount of units that are in circulation. How much do you want to pay someone for a service? It depends on how much you earn yourself, but also on how much money you have in possession. If your government forces you to use their preferred currency, it raises the value of that currency. But, taxes are just a small part of the demand for currency. In the same way, the usefullness of gold (in for example electronics and jewlery) is only a small portion of the current value of gold. The major value of a currency is contained in the usage beyond taxes. For gold, these are private owners and central banks who believe in the long tradition of gold as the sole provider of money (which is actually a form of speculation). Of course, Krugman is not just talking about value here. Krugman is talking about stability. How about that? If you demand that taxes have to be payed in a certain currency, it sure raises the stability of that currency, how little the effect is. The fact that there is a certain bottom of a currency doesn’t mean that it has high stability. As a matter of fact, there is always a bottom as long as there is demand. But if the dollar crashes to its current bottom, what insurance do we have that people will continue to pay taxes? There is no way anyone can determine a bottom or “underpin” a currency. History tells us that such a bottom does not exist and in the long term, the free market always decides the right price of a unit currency. Like in the Weimar Republic and Zimbabwe, the supply of the dollar has been increased dramatically over time. I believe the international character of bitcoin is a much stronger argument for stability than the fact that you have to pay taxes with it since international markets are much more complex. The value of the dollar is pegged to the U.S. economy. A few bad economical decisions can cause a serious blow to its value. Bitcoin is immune to this. Krugman also mentions that the Federal Reserve can act as a potential dollar sink. It’s not really clear what he means by this. I guess he refers to the fact that the Federal Reserve can exchange dollars for government bonds. It’s still to see if that will solve the potential problem of hyperinflation. The only reason why bitcoin might not be a good store of value is the fact that is not widely used. Not because it is not backed by a central bank. I have had and am continuing to have a dialogue with smart technologists who are very high on BitCoin — but when I try to get them to explain to me why BitCoin is a reliable store of value, they always seem to come back with explanations about how it’s a terrific medium of exchange. Even if I buy this (which I don’t, entirely), it doesn’t solve my problem. And I haven’t been able to get my correspondents to recognize that these are different questions. Krugman fails to see that the future ease of use of bitcoin and the massive applications that could possibly be built around the blockchain technology actually makes it so valuable. Let me explain why. Just forget about bitcoin as a store of value for a moment. Think about the possibilies of the blockchain technology. Krugman refers to the fact that bitcoin is terrific medium of exchange (which he doesn’t entirely believe, but it is). You can build smart contracts with bitcoin, automate payments or pay for anything with just scanning a qr code without the interaction of a financial institution. This is quite revolutionary. Bitcoin is a standardized, independent payment system that allows immediate transactions between individuals without the intervention of a financial institution. When the technology around bitcoin matures, it will be easier to pay with bitcoin than it is to pay with conventional methods, even for the biggest technophobes. You can compare it with the beginning of the internet. You had to enter an ip address in order to connect with a website. Now you can just enter the name of the website you want to visit and lately you can call up a website with a simple voice command. The same thing will happen with the bitcoin technology. Your bitcoin wallet will be connected with an exchange of your choise, so if you spent bitcoins you can immediately buy them back. It will be connected to address books, social networks, online marketplaces, video messengers, etc. Spending/giving/donating money will become a no brainer. Since bitcoin can be sent directly from person to person, anyone can use it. Therefore, bitcoin is massively interesting for the unbanked, which are estimated at a 2.5 billion [2012]. To be a medium of exchange, bitcoin doesn’t need to be stable. Exchanging dollars or euros for bitcoins just requires temporary stability. Even long term this doesn’t really matter. People who are concerned about stability can always rebuy their spent bitcoins. It doesn’t need stability to survive. The value of bitcoin is defined by its use, and since the supply of bitcoin is limited the value will be huge. Because bitcoin is not backed by a central bank, it will always be somewhat volatile. But that doesn’t matter. Gold is also quite volatile and it is still seen as a good store of value. You may not forget that no one can take down bitcoin. Even not the developers. Bitcoin will be around in 100 years, whatever the value. Then Krugman continues about the economics behind bitcoin. BitCoin looks like it was designed as a weapon intended to damage central banking and money issuing banks, with a Libertarian political agenda in mind—to damage states ability to collect tax and monitor their citizens financial transactions. First of all, it is perfectly possible to collect taxes on wages that are earned in bitcoin, the same counts for products that are payed for in bitcoin. There are two kind of taxes that are hard to collect. The first one is taxes on the wages of jobs that can be payed for in cash. But these taxes can easily be evaded anyway. And as a matter of fact, bitcoin transactions are less anonymous than cash transactions. The other tax that is hard to collect is the tax on capital, since bitcoins can only be accessed by a (combination of) private key(s). But this is an evil tax anyway. Bitcoin is a completely transparent system that can be very helpful in preventing fraud. It is clear that a statist like Krugman doesn’t like the idea of a fair and transparent monetary system. What is bitcoin? There are a number of ways to explain bitcoin. A lot of posts and blogs explain bitcoin in a non-technical manner. It uses terms as ‘decentralized’, ‘sign’ and ‘blockchain’ without explaining exactly what they are talking about. As a result, people have a hard time understanding these basic concepts and hence bitcoin. They therefore don’t seem to understand the real value behind bitcoin. In order to understand what bitcoin is about, you have to get acquainted with basic notions of cryptography. Don’t worry, it isn’t difficult. The worst ways to explain cryptography to rookies is dry text. Next video explains exactly what you need to know. What you have to remember is the fact that you can use ecryption to sign a message with your private key. Your public key can be used to verify a signature. Because only you have the private key, only you could have signed that message. This kind of scheme is used all over the internet. When you log into your email client or social network or when you want to do a wire transfer. How is bitcoin different? Most fiat currencies are controlled by a central bank. This means that the supply (the amount of money) can change over time. Money just adds another layer of bartening at the exchange of goods. Instead of exchanging a laptop for 1000 apples, you’ll exchange that laptop for money so you can buy 1000 apples when it suits you. In essence, everything can be considered money. But some forms of money qualify better as such. There are four basic characteristics on which a candidate for money must satisfy: durability, divisibility, transportability and noncounterfeitability. Fiat currencies have issues with durability and noncounterfeitability. Because of the various actions by central banks worldwide, the value of fiat is very questionable. The moneysupply of various currencies have been seriously inflated. Most fiat currencies are quite easy to counterfeit and therefore the central banks continuously have to issue new versions of their currency. Some might even say that printing of money by central banks is another level of counterfeiting. One could disagree on this fact in a transparent system. ^1Central banks are not transparent. The problem is rather that it is very hard, even impossible to use alternative currencies. Either way, you have to pay your taxes in the currency that is issued by the government. Bitcoin does a much better job at all the above properties than fiat currencies. Therefore bitcoin is a better form of money. The supply of bitcoins is limited, which means that the value of bitcoin is mostly decided by the demand side. Bitcoin is not issued or controlled by a central bank. Instead, bitcoin is controlled by a protocol, just like email and html are controlled by a protocol. How does bitcoin work? In bitcoin, the public key is the bitcoin address. It is derived from a private key, which will give you access to your bitcoins. When you open your bitcoin wallet for the first time, a new private key will automatically be generated for you. Two Generals’ Problem. The solution bitcoin offers is what makes this technology extremely interesting. Bitcoin transactions are stored in a chain of transaction blocks, also called the blockchain. A block consists of transactions that are made during a certain time period. In the bitcoin protocol, such a block is released roughly every 10 minutes. In order to make sure that a hacker just don’t spam blocks all around the internet with his own transactions, a block must be the result of a complex computation. Every computer in the network is involved in a race to find a certain hashresult. To find a block, pure luck is involved. The more computing power you have, the higher the chance you find the right hash. Every time a hash of the correct form is found, a block is released and chained behind the previous block. Each block contains the hash of the previous block. This way, the blockchain gets Assume you are a client. How do you know which blocks are honest and not broadcasted by an attacker? The beauty of the bitcoin protocol is that you don’t know this. But the chances of this happening are extremely low. Why? Simply because the bitcoin protocol makes sure that clients trust the longest chain. If you trust the longest chain of blocks, you trust the majority of the computing power. Although it is possible that an attacker accidently finds a block, it is very unlikely. That is why it is common to wait at least a certain amount of blocks before a transaction is accepted. (If an attacker has 1% of the computing power, there is 0.01^6 chance the attacker will find 6 blocks in a row.) In that case, the attacker can fool a merchant by doing a double spend attack, which means blocks are issued that contain ‘fake’ transactions that will not be present in the real blockchain. Later, when more honest blocks are found, the fraud will be discovered and the receiver will lose their coins. In practice this problem can be dealt with through insurance (similar to credit card insurance), trusted addresses or longer wait times for expensive items. Either way, pulling of an attack is quite expensive in general and therefore not worth it. Furthermore, the attacker loses all its trust. Obviously, calculating all those hashes will cost you electricity and hardware. People are not willing to do this for free. This is why a decentralized blockchain does not work without bitcoin. The blockchain without bitcoin is just a centralized database. This is why rewards are awarded to the node that finds a block. The first transaction in a block is a special transaction. It starts with a new coin owned by the finder of the block. This is also how bitcoins are issued. Once and a while, the rewards are halved. Over time, the network will switch over to fees as incentive for ‘mining’, assuring that only 21 million bitcoins will ever be mined. Because of the rewards, it is always more profitable to play by the rules, because attackers will never be rewarded with coins. The blocks they might produce will, in the end, never be part of the Please note that there are no restrictions on the contents of a block other than that the block needs to contain valid transactions. It is possible for a miner to release empty blocks and ignore all incoming transactions. Remember that there is no central authority! That’s why it is recommended to include a very small transaction fee, because miners are greedy. The fee will go the the miner that finds the block. For transactions that are not urgent, you can leave out the fee but it can take an unknown amount of time before the transaction will be confirmed. Ethical problems The network won’t accept malicious transactions as long as more than 50% of the miners are honest miners (because they will create the longest chain over time). This means that transactions are validated with a democratic majority vote. Greedy miners will always try to validate as much transactions as possible, since they usually contain fees. This might lead to two ethical problems. • Bitcoin can be used for ‘criminal activity’. This might be activity that should not be criminal in the first place, like the sale of drugs or guns or business with people in countries that are ‘banned’ by international organizations (read: the US) like Cuba or Iran. But it might also include the financing of terrorism. • Another problem is the total other way aroud. It’s government interference in the mining process. This requires the majority of the mining power to be in the hands of the government. This can only be done with a serious investment. This is because under ‘perfect market conditions’, mining will generate no profit or loss. If it would create profit, more miners will enter the system. If it would create a loss, miners will flee the system. This is a simple free market principle. If the price of bitcoins stays stable, an equilibrium is reached. Therefore, governments can gradually increase their share in the bitcoin hashing power. This will happen at a quite serious loss, but since basically everything the government does happens at a loss, I guess that this is not really an issue. The latter is obviously much more harmful than the former. Simply because this form of government interference disturbs the marketplace without achieving anything. Illegal activity can always shift towards other cryptocurrencies (the so called ‘alt coins’) that do offer true anonymity. Anything that is possible in cryptocurrencies is in essence also possible in cash, except for transactions that happen between two persons on different sides of the world. Furthermore, tax money is wasted. Luckily, this is difficult to achieve, mainly because of political reasons. Bitcoin is an international phenomenon. Which government will be in charge? Either way, one entity that buys up massive amounts of bitcoin mining equipment will not go unnoticed. Alt coins [DEL:One might argue that bitcions are worthless because there is a large amount of alternative cryptocurrencies available, which have similar properties as bitcoin. Most of them are simple copies of the bitcoin protocol, with just minor changes (like shorter release times of a transaction block). Therefore, there are an unlimited supply of coins, and thus bitcoins are worthless.:DEL] [DEL:This kind of argument is silly. Assume you are a payment processor. Are there any reasons to accept bitcoin alternatives? It gives no benefit and it will only cost you more to maintain your payment system. The fact that altcoins have the same properties is actually an argument against them. There is no reason to use them, except for media attention. Furthermore, there is something called the network effect. It’s something that also happens in social media. Everyone subscribes to a service, because most people are already subscribed to that service although there are most likely services that do a much better job. It exceptionally creates a market generated monopoly. (Most monopolies are caused by government.) Why dealing in alt coins if no one accepts it?:DEL] Bitcoin has been overtaken by the banking cartel and is no longer being upgraded. Because of this, other coins have overtaken bitcoin technology wise. Monero doesn’t have a block size cap and a disinflationary supply and is therefore superior. The logic of this paragraph still applies. Most altcoins are in essence wortless. Bitcoin has become an altcoin. Working in a pool Every miner in the network is involved in a race to find a the next block. The chances that this happens are very low, the rewards are significant. If you don’t have a lot of hashing power, there is a risk that you never find any block. Finding a block is like winning the lotery. Low chances with high rewards. If you have a lot of hashing power, this doesn’t really matter. The chances of finding a block are much larger. Miners with low hashing power can decide to try to find the next block together, called a pool. If such a block is found, the profit is divided equally amongst all the participants. Over time, the rewards associated with mining will correlate very good with the hashing power. This obviously increases the chances that a 51% attack can be successful. Privacy problems [DEL:All transactions need to be confirmed by the mining algorithm. It is vital that a block can be verified by all the participants of the bitcoin network. Therefore, all transactions have to be public. This might lead to privacy problems. A lot of times, misinformation is spread by the mainstream media that bitcoin is anonymous. It’s actually the opposite. Once you can connect a bitcoin address to an identity, all anonymity disappears. Therefore, bitcoin is pseudonomous. The more bitcoin will be used, the easier it will be to connect identities to addresses and transactions.:DEL] [DEL:There is something you can do against this; there is nothing that prohibits you to continuously exchange bitcoins with other participants. This is called coin mixing. There are a number of these services avaiable. One of the interesting tools that are being developped is Dark Wallet. Although this is out of the scope of this article, the interview with of one of the developpers might be an interesting read.:DEL] [DEL:Another privacy problem is the use of thin clients. Most people are only interested in their own bitcoin balance and therefore don’t need to download the whole blockchain, which is currently 25GiB. A thin client can connect to nodes that have the whole blockchain and check their balance that way. That node will inevitably know the bitcoin address of the thin client.:DEL] I was wrong that all transactions need to be public. Silly me. Monero is truly private and therefore superior. You can read the whole whitepaper for more information, but I think this post covered most of it.
{"url":"http://theinternetz.org/author/admin/","timestamp":"2024-11-09T16:27:37Z","content_type":"text/html","content_length":"104350","record_id":"<urn:uuid:53e2f006-8351-4867-ad33-74f7ebee6804>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00448.warc.gz"}
Mode Coupling in TEM-Cells due to Variations in the Geometry using Generalized Telegraphist's Equations Original language English Title of host publication Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020 Publisher Institute of Electrical and Electronics Engineers Inc. ISBN (electronic) 9781728155791, 978-1-7281-5578-4 ISBN (print) 978-1-7281-5580-7 Publication status Published - 23 Sept 2020 Event 2020 International Symposium on Electromagnetic Compatibility: EMC EUROPE 2020 - Virtual, Rome, Italy Duration: 23 Sept 2020 → 25 Sept 2020 Publication series Name Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020 ISSN (Print) 2325-0356 ISSN (electronic) 2325-0364 This paper deals with the propagation of electromagnetic waves in coaxial TEM-Cells. The general theory of electromagnetic wave propagation in TEM-Cells is of high interest because of its practical use as a measurement environment. A semi-analytical method is used for computing the electromagnetic fields within the TEM-Cell. The so-called Generalized Telegraphist's Equations will be developed for the TEM-Cell. This approach is particularly suitable for analyzing the electromagnetic fields because it not only calculates the overall field inside the TEM-Cell, but it also shows the mode coupling due to the variation of the geometry. This method, also known as cross-section method or coupled-mode theory, is applied to reduce the Maxwell equations to an infinite set of ordinary differential equations for the basis amplitudes of the eigenvectors of the electric and magnetic fields of a TEM-waveguide. As a numerical example, a tapered concentric coaxial TEM-Cell will be generalized telegraphists equations, TEM-Cell ASJC Scopus subject areas • Engineering(all) • Engineering(all) • Physics and Astronomy(all) • Computer Science(all) • Computer Science(all) • Engineering(all) Cite this • Standard • Harvard • Apa • Vancouver • Author • BibTeX • RIS Mode Coupling in TEM-Cells due to Variations in the Geometry using Generalized Telegraphist's Equations. / Duc, Pham Hoang; Heyno, Garbe. Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020. Institute of Electrical and Electronics Engineers Inc., 2020. 9245825 (Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020). Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review Duc, PH & Heyno, G 2020, Mode Coupling in TEM-Cells due to Variations in the Geometry using Generalized Telegraphist's Equations . in Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020. , 9245825, Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020, Institute of Electrical and Electronics Engineers Inc., 2020 International Symposium on Electromagnetic Compatibility, Virtual, Rome, Italy, 23 Sept 2020 Duc PH, Heyno G. Mode Coupling in TEM-Cells due to Variations in the Geometry using Generalized Telegraphist's Equations . In Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020. Institute of Electrical and Electronics Engineers Inc. 2020. 9245825. (Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020). doi: 10.1109/EMCEUROPE48519.2020.9245825 title = "Mode Coupling in TEM-Cells due to Variations in the Geometry using Generalized Telegraphist's Equations", abstract = "This paper deals with the propagation of electromagnetic waves in coaxial TEM-Cells. The general theory of electromagnetic wave propagation in TEM-Cells is of high interest because of its practical use as a measurement environment. A semi-analytical method is used for computing the electromagnetic fields within the TEM-Cell. The so-called Generalized Telegraphist's Equations will be developed for the TEM-Cell. This approach is particularly suitable for analyzing the electromagnetic fields because it not only calculates the overall field inside the TEM-Cell, but it also shows the mode coupling due to the variation of the geometry. This method, also known as cross-section method or coupled-mode theory, is applied to reduce the Maxwell equations to an infinite set of ordinary differential equations for the basis amplitudes of the eigenvectors of the electric and magnetic fields of a TEM-waveguide. As a numerical example, a tapered concentric coaxial TEM-Cell will be keywords = "generalized telegraphists equations, TEM-Cell", author = "Duc, {Pham Hoang} and Garbe Heyno", note = "Funding Information: This work is funded by the Deutsche Forschungsgmeinschaft (DFG, German Research Foundation) – 438107418. ; 2020 International Symposium on Electromagnetic Compatibility : EMC EUROPE 2020 ; Conference date: 23-09-2020 Through 25-09-2020", year = "2020", month = sep, day = "23", doi = "10.1109/EMCEUROPE48519.2020.9245825", language = "English", isbn = "978-1-7281-5580-7", series = "Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020", publisher = "Institute of Electrical and Electronics Engineers Inc.", booktitle = "Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020", address = "United States", TY - GEN T1 - Mode Coupling in TEM-Cells due to Variations in the Geometry using Generalized Telegraphist's Equations AU - Duc, Pham Hoang AU - Heyno, Garbe N1 - Funding Information: This work is funded by the Deutsche Forschungsgmeinschaft (DFG, German Research Foundation) – 438107418. PY - 2020/9/23 Y1 - 2020/9/23 N2 - This paper deals with the propagation of electromagnetic waves in coaxial TEM-Cells. The general theory of electromagnetic wave propagation in TEM-Cells is of high interest because of its practical use as a measurement environment. A semi-analytical method is used for computing the electromagnetic fields within the TEM-Cell. The so-called Generalized Telegraphist's Equations will be developed for the TEM-Cell. This approach is particularly suitable for analyzing the electromagnetic fields because it not only calculates the overall field inside the TEM-Cell, but it also shows the mode coupling due to the variation of the geometry. This method, also known as cross-section method or coupled-mode theory, is applied to reduce the Maxwell equations to an infinite set of ordinary differential equations for the basis amplitudes of the eigenvectors of the electric and magnetic fields of a TEM-waveguide. As a numerical example, a tapered concentric coaxial TEM-Cell will be AB - This paper deals with the propagation of electromagnetic waves in coaxial TEM-Cells. The general theory of electromagnetic wave propagation in TEM-Cells is of high interest because of its practical use as a measurement environment. A semi-analytical method is used for computing the electromagnetic fields within the TEM-Cell. The so-called Generalized Telegraphist's Equations will be developed for the TEM-Cell. This approach is particularly suitable for analyzing the electromagnetic fields because it not only calculates the overall field inside the TEM-Cell, but it also shows the mode coupling due to the variation of the geometry. This method, also known as cross-section method or coupled-mode theory, is applied to reduce the Maxwell equations to an infinite set of ordinary differential equations for the basis amplitudes of the eigenvectors of the electric and magnetic fields of a TEM-waveguide. As a numerical example, a tapered concentric coaxial TEM-Cell will be KW - generalized telegraphists equations KW - TEM-Cell UR - http://www.scopus.com/inward/record.url?scp=85097180946&partnerID=8YFLogxK U2 - 10.1109/EMCEUROPE48519.2020.9245825 DO - 10.1109/EMCEUROPE48519.2020.9245825 M3 - Conference contribution AN - SCOPUS:85097180946 SN - 978-1-7281-5580-7 T3 - Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020 BT - Proceedings of the 2020 International Symposium on Electromagnetic Compatibility - EMC EUROPE, EMC EUROPE 2020 PB - Institute of Electrical and Electronics Engineers Inc. T2 - 2020 International Symposium on Electromagnetic Compatibility Y2 - 23 September 2020 through 25 September 2020 ER -
{"url":"https://www.fis.uni-hannover.de/portal/en/publications/mode-coupling-in-temcells-due-to-variations-in-the-geometry-using-generalized-telegraphists-equations(055f029e-3a30-4ecb-84d3-4baeb1ca993a).html","timestamp":"2024-11-04T09:16:58Z","content_type":"text/html","content_length":"49071","record_id":"<urn:uuid:cd908e41-1e27-4fd5-a37e-3932179eedc5>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00513.warc.gz"}
How Today’s Pros Solve Math Problems: Part 2 By Keith Devlin You can follow me on Twitter @profkeithdevlin CHANGE OF PLAN: When I wrote last month’s post, I said I would conclude the description of my Nueva School Course this time. But when I sat down to write up that concluding piece, I realized it would require not one but two further posts. The course itself was the third iteration of an experiment I had tried out on a university class of non-science majors and an Adult Education class. This series of articles is my first attempt to try to describe it and articulate the thinking behind it. As is often the case, when you try to describe something new (at least it was new to me), you realize how much background experience and unrecognized tacit knowledge you have drawn upon. In this post, I’ll try to capture those contextual issues. Next month I’ll get back to the course itself. We all know that mathematics is not always easy. It requires practice, discipline and patience, as do many other things in life. And if learning math is not easy, it follows that teaching math is not easy either. But it can help both learner and teacher if they know what the end result is supposed to be. In my experience, many learners and teachers don’t know that. In both cases, the reason they don’t know it is that no one has bothered to tell them. There is a general but unstated assumption that everyone knows why the teaching and learning of mathematics is obligatory in every education system in the world. But do they really? There are two (very different) reasons for teaching and learning mathematics. One reason is that it is a way of thinking that our species has developed over several thousand years, that provides wonderful exercise for the mind, and yields both challenging intellectual pleasure and rewarding aesthetic beauty to many who can find their way sufficiently far into it. In that respect, it is like music, drama, painting, philosophy, natural sciences, and many other intellectual human activities. This is a perfectly valid reason to provide everyone with an opportunity to sample it, and make it possible for those who like what they see to pursue it as far as they desire. What it is not, is a valid reason for making learning math obligatory throughout elementary, middle, and high school education. The argument behind math’s obligatory status in education is that it is useful; more precisely, it is useful in the practical, everyday world. This is the view of mathematics I am adopting in the short series of “Devlin’s Angle” essays of which this is the third. (There will be one more next month. See episode 1 here and episode 2 here.) Indeed, mathematics is useful in the everyday practical world. In fact, we live in an age where mathematics is more relevant to our lives than at any previous time in human history. It is, then, perfectly valid to say that we force each generation of school students to learn math because it is a useful skill in today’s world. True, there are plenty of people who do just fine without having that skill, but they can do so only because there are enough other people around who do have it. But let’s take that argument a step further. How do you teach mathematics so that it prepares young people to use it in the world? Clearly, you start by looking at the way people currently use math in the world, and figure out how best to get the next generation to that point. (Accepting that by the time those students finish school, the world’s demands may have moved on a bit, so those new graduates may have a bit of catch up and adjustment to make.) If the way the professionals use math in the world changes, then the way we teach it should change as well. Don’t you think? That’s certainly what has happened in the past. For instance, in the ninth century, the Arabic-Persian speaking traders around Baghdad developed a new, and in many instances more efficient, way to do arithmetic calculations at scale, by using logical reasoning rather than arithmetic. Their new system, which quickly became known as al-jabr after one of the techniques they developed to solve equations, soon found its way into their math When Hindu-Arabic arithmetic was introduced into Europe in the thirteenth century, the school systems fairly quickly adopted it into their arithmetic teaching as well. (It took a few decades, but knowledge moved no faster than the pace of a packhorse back then. I tell the story of that particular mathematics-led revolution in my 2011 book The Man of Numbers.) The development of modern methods of accounting and the introduction of financial systems such as banks and insurance companies, which started in Italy around the same time, also led to new techniques being incorporated into the mathematical education of the next generation. Later, when the sixteenth century French mathematician François Viète introduced symbolic algebra, it too became part of the educational canon. In each case, those advances in mathematics were introduced to make mathematics more easy to use and to increase its application. There was never any question of “What is this good for?” People eagerly grabbed hold of each new development and made everyday use of it as soon as it became available. The rise of modern science (starting with Galileo in the seventeenth century) and later the Industrial Revolution in the nineteenth century, led to still more impetus to develop new mathematical concepts and techniques, though some of those developments were geared more toward particular groups of professionals. (Calculus, for example.) To make it possible for an average student or worker to make use of each new mathematical concept or technique, sets of formal calculating rules (algorithmic procedures) were developed and refined. Once mastered, these made it possible to make use of the new mathematics to handle—in a practical way—the tasks and problems of the everyday world for which those concepts and techniques had been developed to deal with in the first place. As a result of all those advances, by the time the Baby Boomers came onto the educational scene in the 1950s, the curriculum of mathematical algorithms that were genuinely important in everyday life was fairly large. It was no longer possible for a student to understand all the underlying mathematical concepts and techniques behind the algorithms and procedures they had to learn. The best that they could do was master, by repetitive practice, the algorithmic procedures as quickly as possible and move on. [A few of us had difficulty doing that. We wanted to understand what was going on. By and large, we frustrated our teachers, who seemed to think we were simply troublesome slow learners. Some of us eventually learned to “play the mindless algorithm game” in class to pass the test, but kept struggling on our own to understand what was going on, setting us on a path to becoming mathematics professors in the 1970s.] It was while that Boomer generation was going through the school system that mathematics underwent the first step of a seismic shift that within a half of a century would completely revolutionize the way mathematics was done. Not the pure mathematics practiced by a few specialists as an art—though that too would be impacted by the revolution to some extent. Rather, it was mathematics-as-used-in-the-world that would be radically transformed. The first step of that revolution was the introduction of the electronic desktop calculator in 1961. Although, mechanical desktop calculators had been available since the turn of the Twentieth Century, by and large their use was restricted to specialists—often called “computers” in businesses. [I actually had a summer-job with British Petroleum as such a specialist in my last three years at high school, and it was in my final year in that job that the office I worked in acquired its first electronic desktop calculator and the British Petroleum plant bought its first digital computer, both of which I learned to use.] But with the increasing availability of electronic calculators, and in particular the introduction of pocket-sized versions in the early 1970s, their use in the workplace rapidly became ubiquitous. Mathematics underwent a major change. Humans no longer needed to do arithmetic calculations themselves, and professionals using arithmetic in their work no longer It was not too many years later that, one by one, electronic systems were developed that could execute more and more mathematical procedures and techniques, until, by the late 1980s, there were systems that could handle all the mathematical procedures that constituted the bulk of not only the school mathematics curriculum, but the entire undergraduate math curriculum as well. The final nail in the coffin of humans needing to execute mathematical procedures was the release of the mathematics system Mathematica in 1988, followed soon after by the release of Maple. In the scientific, industrial, engineering, and commercial worlds, each new tool was adopted as soon as it became available, and since the early 1990s, professionals using mathematical techniques to carry out real-world tasks and solve real-world problems have done so using tools like Mathematica, Maple, and a host of others that have been developed. Simultaneously, colleges and universities quickly incorporated the use of those new tools into their teaching. And while the cost of the more extensive tools put their use beyond most schools, the graphing calculator too was quickly brought into the upper grades of the K-12 system, after its introduction in 1990. Yet, while the pros in the various workplaces changed over to the new human-machine-symbiotic way of doing math with little hesitation, most educators, exhibiting very wise instincts, proceeded with far more caution. The first wave of humans to adopt the new, machine-aided approach had all learned mathematics in an age when you had to do everything yourself. Back then, “computers” were people. For them, it was easy and safe to switch to executing a few keystrokes to make a computer run a procedure they had carried out by hand many times themselves. But how does a young person growing up in this new, digital-tools-world learn how to use those new tools safely and effectively? To some extent, the answer is (and was) obvious. You teach not for smooth, proficient, accurate execution of procedures, but for broad, general understanding of the underlying mathematics. The downplay of execution and increased emphasis on understanding are crucial. Computers outperform us to ridiculous degrees (of speed, accuracy, size of dataset, and information storage and retrieval) when it comes to execution of an algorithm. But they do not understand mathematics. They do not understand the problem you are working on. They do not understand the world. They don't understand People, on the other hand, can understand, and have a genetically inherited desire to do so. But just how do you go about teaching for the kind of understanding and mastery that is required for students to transition into worlds and workplaces dominated by a wide array of new mathematical tools, where they will encounter work practices that involve very little by way of hand execution of algorithms? We know so little about how people learn (though we do know a whole lot more than we did just a few decades ago), that most of us with a stake in the education business are rightly concerned about making any change that would effectively be a massive experiment on an entire generation. So we can, and should, expect small steps, particularly in systemic education. In the U.S., the mathematicians who developed the mathematical guidelines for the Common Core State Standards made a good first attempt at such a small step. True, it quickly ran into difficulties when it came to implementing the guidelines in a large and complex public educational system that is answerable to the public. But that is surely a temporary hiccup. Most of the problems at launch came from a lack of effective ways to assess the new kind of learning. Those problems can be and are being fixed. Which is just as well. For, although it’s possible to argue for tinkering with specific details of the Common Core State Standards guidelines, in terms of setting out a broad set of educational goals to aim for, there is no viable alternative first step. The pre-1970s educational approach is no longer an option. In the meantime, individual teachers at some schools (particularly, but not exclusively, private schools) have been trying different approaches, in some cases sharing their experiences on the MTBOS (Math Twitter Blog-O-Sphere), making use of another technological tool (social media) now widely available. [For a quick overview of one global initiative to support and promote such innovations, the OECD’s Innovative Pedagogies for Powerful Learning project (IPPL), see this recent article from the Brookings Institution.] The mini-course I gave at Nueva School in the San Francisco Bay Area last January, which I talked about in the first of this short series of essays, is one such experiment in teaching mathematics in a way that best prepares the next generation for the world they will live and work in after graduation. I tested it first with a class of non-science majors in Princeton in the fall of 2015 and then again with an Adult Education class at Stanford in the fall of 2017. The Nueva School class was its third outing. With the above backstory now established, next month I will describe that course and talk about how today’s pros “do the math”. (Again, let me stress, I am not talking here about “pure math”, the academic discipline carried out by professional mathematicians in universities and a few think tanks. My focus here is on using math in the everyday world.) In the meantime, I’ll leave you with a simple arithmetic problem that I will discuss in detail next time. It comes with two instructions: 1. Solve it as quickly as you can, in your head if possible. Let your mind jump to the answer. 2. Then, and only then, reflect on your answer, and how you got it. The goal here is not to get the right answer, though a great many of you will. Rather, the issue is how do our minds work, and how can we make our thinking more effective in a world where machines execute all the mathematical procedures for us? Ready for the problem? Here it is. PROBLEM: A bat and a ball cost $1.10. The bat costs $1 more than the ball. How much does the ball cost on its own? (There is no special pricing deal.)
{"url":"http://devlinsangle.blogspot.com/2018/03/how-todays-pros-solve-math-problems_9.html","timestamp":"2024-11-05T09:13:35Z","content_type":"text/html","content_length":"110510","record_id":"<urn:uuid:6bda8d1f-2c4f-4209-8836-ea63d3e355b6>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00106.warc.gz"}
heat balance of the cement ball mill Cement mill Admix mill Cement Silo Mixer Weigher Shipping Bulk loader Packer Marketing ... by heat balance method 2. Laos 16 Air Leakage Measurement prekiln cooler heater raw mill EP cooling air ... Ball diameter shrinks as time passes due to erosive effect. This leads to gradual increase of WhatsApp: +86 18838072829 Ball Mill Power Calculation Example #1. A wet grinding ball mill in closed circuit is to be fed 100 TPH of a material with a work index of 15 and a size distribution of 80% passing ¼ inch (6350 microns). The required product size distribution is to be 80% passing 100 mesh (149 microns). In order to determine the power requirement, the steps ... WhatsApp: +86 18838072829 Recycling waste gas discharged from raw and cement mills should be the focus. ... Generally, ball mills and vertical roller mills are used. The raw mix is dried by using part of the excess heat from the kiln in the process line or from a stove. ... data (sections Mass balance in raw mill system, Mass balance in clinker production system ... WhatsApp: +86 18838072829 appropriateness of using Aspen Plus process simulator in thermodynamic analysis of raw mill in cement industry. A local raw mill in Nigeria and three others from literature were analysed. 2. Theoretical Analysis of Mass, Energy and Exergy The work and heat interactions, including rate of exergy decrea se, irreversibility, energy and exergy ... WhatsApp: +86 18838072829 Cement Mill Notebook Scribd. 28 May 2012 The heating of the mill feed during dry grinding of cement raw mix has no Heat Balance for Mill Cooling: In the ... coal mill line to heat clinker mill grinding equipment price WhatsApp: +86 18838072829 The total energy required to produce cement was evaluated to be in the range kW h per ton of cement, including around 50 kW h/t for the finish grinding of the clinker. Dry grinding experiments on cement clinker were carried out using a laboratory batch ball mill equipped with torque measurement. The specific energy w.. WhatsApp: +86 18838072829 Mill outlet gas flow. 100. In a vertical roller mill the % of false air is usely between 5 and 10%. 330000 Nm3/h on wet. Gas coming from the kiln 270C. 307230 Nm3/h on dry. WhatsApp: +86 18838072829 A heat balance is an efficient tool to measure the actual state of a kiln system. It allows a better assessment of the heat consumption and reveals the potentials for improvements (regarding the thermal aspects). The principle of a heat balance is rather simple: we select our system limits for the balance and measure all inputs and outputs. WhatsApp: +86 18838072829 a conventional ball mill circuit. The result: an increase in production of approximately 25 percent, and a reduction in specific power consumption of about 10 percent compared to a conventional ball mill system. Semifinish grinding Upgrading a ball mill system with an HRP in a semifinish mode is a simple solution, and installation is easy. WhatsApp: +86 18838072829 ºC * (100 19) ºC) + kcal/kg * ] qw = 599534 kcal/h Since more heat is given up by the cement than is absorbed by the water, air circulation through the mill must have helped cool the cement down to discharge temperature. Heat Balance for Mill Cooling: In the following example, heat balances for mill cooling are calculated in ... WhatsApp: +86 18838072829 1. Introduction. World cement production is about billion tons in 2019 and cement plants consume about 2 percent of world electrical energy in the grinding process [1], [2].The total electrical energy that is consumed in cement companies is about 100 kWh per 1000 kg of cement, and twothird of this amount just consumes in raw material and cement mill units [3]. WhatsApp: +86 18838072829 Cement Mill Heat Balance Calculation Fill in white cells Composition % Moisture % Limestone 40 0 clinker Marl 50 limestone Sand 7 gypsum Iron ore 3 8 pond ash Total 100 t/h Mill output 100 t/h Mill Return t/h Feed 100 t/h Moisture on finish product 0% Gas Flow Balance Inputs Outputs WhatsApp: +86 18838072829 Drying Capacities with Mill Heat Balance Calculator 29. Cement Cooling in the Separator Calculator 30. Coal Drying Weber 31. Coal Drying Balance 32. Open Circuit Mass Balance Calculator ... Cement Ball Mill Sizing Calculators Exercises . 12 Mill Ventilation Measurement Location of the measurement point(s) Pitot Tube WhatsApp: +86 18838072829 Material Mill System in China Cement Industry between 2014 and 2019 Ruonan Meng 1,2, Qinglin Zhao 1,2,*, ... The integration of ball mill drying and grinding, promotion of a vertical mill, which is equipped with a new type of ... decomposition waste gas as a drying heat source for suspension preheater kiln, etc., have promoted the development ... WhatsApp: +86 18838072829 The Art Of Sharing and...Imagination. Home; About Us; Services. Grinding Software; Consultancy; Training Courses; Calculators Online WhatsApp: +86 18838072829 High temperature of the ball mill will affact the efficiency. 3 For every 1% increase in moisture, the output of the ball mill will be reduced by 8% 10%. 4 when the moisture is greater than 5%, the ball mill will be unable to perform the grinding operation. 5. The bearing of the ball mill is overheated and the motor is overloaded. WhatsApp: +86 18838072829 calculations on mass balance on ball mill circuit Machine, mass balance calculations for cement grinding ballmill 15 Oct 2013 . To calculate the operating work index and work index efficiency of rod and ball mill circuits. Go to Product Center. milling energy balance on cement factor, heat balance cement roller mill Crusher South ... WhatsApp: +86 18838072829 When grinding cement, the ball mill has a yield of 300~1000g/t, and the grinding system of the roller press has a yield of /t, so it can meet the requirements of white cement. Low noise. The ball mill noise is above 110 dB, while the roller press is about 80 dB. WhatsApp: +86 18838072829 Heat and Mass balance (Desinged).doc Free download as Word Doc (.doc), PDF File (.pdf), Text File (.txt) or view presentation slides online. ... 422 130740 473 146504 Cement Matching Qty. of Cement Mill 2 t/h Cement Mill Running Ratio Cement Category Annually Yielded Cement Running Clinker ... Ball Valve TJ (21 Sep 16) TeukuJohar. WhatsApp: +86 18838072829 AGICO Cement is a cement equipment manufacturer with more than 20 years' experience. We are the longterm partner of many large and mediumsized cement enterprises in different countries, providing cement equipment and EPC project construction. The closedcircuit grinding system ball mill for cement grinding and the matching powder ... WhatsApp: +86 18838072829 HEAT BALANCE FLAG(Green if Heat Balance is OK)>>>>> Press To Adjust outlet temperature for Balance. Press to Adjust Cooler Hot Air Quantity for Balance Press to Adjust Feed Moisture for Balance. Press to Adjust Cooler Hot Air Temperaturefor Balance. This sheet has been created by Rajiv Sadavarti on November 20, 2008. Rev 0. 1. WhatsApp: +86 18838072829 Volume of hot gas before the mill 545,472 Nm3/h. Volume of H2O vaporized from raw materials 9,048 Nm3/h. Volume of false air entering the mill (by O2 bal) 120,047 Nm3/h %. Volume of gas leaving the mill 674,568 Nm3/h. Volume of false air after mill to mill fan inlet (by O2 bal.) 62,029 Nm3/h %. Volume of gas at the mill fan inlet ... WhatsApp: +86 18838072829 The purpose of the respective Valid Practice is to outline the parameters that. should be included in a Ball Cement Mill Evaluation as well as the appraisal of. the findings in order to identify the bottlenecks and suggest the necessary actions. that would result in the Ball Cement Mill Optimization. 16/10/2007. WhatsApp: +86 18838072829 Amazing simple tool to calculate Vertical Mill heat balance by Loesche Adding your simple analysis and process parameters ===> finally you get the Balance difference. You can use it Free of charge ... WhatsApp: +86 18838072829 It is assumed that you supply very precise data based on which accuracy of entire heat balance calculations lies. Calculation procedure: 1) Enter the required data on input page just replace the existing values. 2) Go to detailed calculation sheet there you get explanation of entire calculations. 3) Heat balance report can be printed on Output ... WhatsApp: +86 18838072829 1. Introduction Heat balance on a kiln can offer extremely useful information on the thermal performance of the system [1]. Heat balance show where or how the fuel heat is consumed based on the simple principle of input = output [2, 3]. WhatsApp: +86 18838072829 The main power consumption of a clinker cooler is its cooling fans. It usually takes m 3 under standard conditions to cool 1 kg clinker from 1400 ℃ to 65 ℃, and the power consumption of the whole system is about 7 kWh. If the air consumption per kilogram of clinker cooling is reduced from 2 m 3 to m 3, the air consumption efficiency will be 15% lower than that of other cement ... WhatsApp: +86 18838072829 About Press Press WhatsApp: +86 18838072829
{"url":"https://neopole.fr/2023/10/31_9471.html","timestamp":"2024-11-04T01:28:01Z","content_type":"application/xhtml+xml","content_length":"25747","record_id":"<urn:uuid:6a5b49b6-eff3-4dad-a4b8-67ce9580d56c>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00156.warc.gz"}
How to Calculate and Solve for Steady State Diffusion | Diffusion in Alloying The image above represents steady state diffusion. To calculate steady state diffusion, three essential parameters are needed and these parameters are Diffusion Gradient (dC/dx), Cross-sectional Area (A) and Time (t). The formula for calculating steady state diffusion: J = ^dC/dx/[At] J = Steady State Diffusion dC/dx = Diffusion Gradient A = Cross-sectional Area t = Time Given an example; Find the steady state diffusion when the diffusion coefficient is 12, the cross-sectional area is 4 and time is 2. This implies that; dC/dx = Diffusion Gradient = 12 A = Cross-sectional Area = 4 t = Time = 2 J = ^dC/dx/[At] J = ^12/[(4)(2)] So, J = ^12/[8] J = 1.5 Therefore, the steady state diffusion is 1.5mol m^-2 s^-1. Calculating the Diffusion Coefficient when the Steady State Diffusion, the Cross-sectional Area and the Time are Given dC/dx = J x At dC/dx = Diffusion Gradient J = Steady State Diffusion A = Cross-sectional Area t = Time Given an example; Find the diffusion coefficient when the steady state diffusion is 3, the cross-sectional area is 4 and the time is 9. This implies that; Master Every Calculation Instantly Unlock solutions for every math, physics, engineering, and chemistry problem with step-by-step clarity. No internet required. Just knowledge at your fingertips, anytime, anywhere. J = Steady State Diffusion = 3 A = Cross-sectional Area = 4 t = Time = 9 dC/dx = J x At dC/dx = 3 x (4)(9) So, dC/dx = 3 x 36 dC/dx = 108 Therefore, the diffusion coefficient is 108. Calculating the Cross-sectional Area when the Steady State Diffusion, the Diffusion Coefficient and the Time are Given A = ^dC/dx / [J x t] A = Cross-sectional Area J = Steady State Diffusion dC/dx = Diffusion Gradient t = Time Let’s solve an example; Given that, the steady state diffusion is 12, the diffusion coefficient is 72 and the time is 6. This implies that; J = Steady State Diffusion = 12 dC/dx = Diffusion Gradient = 72 t = Time = 6 A = ^dC/dx / [J x t] A = ^72 / [12 x 6] So, A = ^72 / [72] A = 1 Therefore, the cross-sectional area is 1. Calculating the Time when the Steady State Diffusion, the Diffusion Coefficient and the Cross-sectional Area are Given t = ^dC/dx / [J x A] t = Time J = Steady State Diffusion dC/dx = Diffusion Gradient A = Cross-sectional Area Let’s solve an example; Find the time when the steady state diffusion is 8, the diffusion coefficient is 40 and the cross-sectional area is 3. This implies that; J = Steady State Diffusion = 8 dC/dx = Diffusion Gradient = 40 A = Cross-sectional Area = 3 t = ^dC/dx / [J x A] t = ^40 / [8 x 3] So, t = ^40 / [24] t = 1.67 Therefore, the time is 1.67s. How to Calculate Steady State Diffusion With Nickzom Calculator Nickzom Calculator – The Calculator Encyclopedia is capable of calculating the steady state diffusion Now, Click on Diffusion In Alloying under Materials and Metallurgical Now, Click on Steady State Diffusion under Diffusion In Alloying The screenshot below displays the page or activity to enter your values, to get the answer for the steady state diffusion according to the respective parameter which is the Diffusion Gradient (dC/ dx), Cross-sectional Area (A) and Time (t). Now, enter the values appropriately and accordingly for the parameters as required by the Diffusion Gradient (dC/dx) is 12, Cross-sectional Area (A) is 4 and Time (t) is 2. Finally, Click on Calculate As you can see from the screenshot above, Nickzom Calculator– The Calculator Encyclopedia solves for the steady state diffusion and presents the formula, workings and steps too.
{"url":"https://www.nickzom.org/blog/2023/01/12/how-to-calculate-and-solve-for-steady-state-diffusion-diffusion-in-alloying/","timestamp":"2024-11-10T05:41:29Z","content_type":"text/html","content_length":"240673","record_id":"<urn:uuid:d08795d8-5a7a-419d-a81e-7faa67cf5dc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00573.warc.gz"}
perplexus.info :: Numbers : The Highest Grade I had someone tell me, after seeing what grade they got on several quizzes, that he got the highest grade (after rounding to the nearest percent) for that letter, for all 5 quizzes. Each quiz he took, he got a different letter grade on it. When the grades are calculated for these quizzes, it is the number of questions you got right divided by the number of questions total on the quiz. Then the score is taken as a percent (the decimal is multiplied by 100) and then rounded to the nearest percent (.5 and above rounds up, below .5 rounds down) The grading scale works so that: 100-90 A 89-80 B 79-70 C 69-60 D 0-59 F What are the fewest number of questions possible on each quiz? For example, if someone got 6 questions right out of 7 questions total, it would be 6/7 or about 85.7%, which rounds to 86, which isn't the highest B possible, (86 is not equal to 89) Since no number over 7 can be in the 88.5 up to 89.5 range, there couldn't have been exactly 7 questions on the quiz. There also couldn't have been 8 questions on the B quiz. 7/8 or 88 percent isn't the highest B possible; it's too low, and 8/8 or 100 percent is too high.
{"url":"http://perplexus.info/show.php?pid=1409","timestamp":"2024-11-05T17:18:22Z","content_type":"text/html","content_length":"16246","record_id":"<urn:uuid:a4c6c26d-8419-4537-95fc-084f7b3ee58b>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00462.warc.gz"}
Leap Years Calculate Leap Years Hebrew Calendar Leap Years 5784 is the 8th year of the 19-year Metonic cycle and is therefore a leap year. Gregorian Calendar Leap Years 2024 is divisible by 4 and not by 100, and is therefore a leap year. Hebrew Calendar Leap Years • During a Hebrew calendar leap year, an additional month of Adar is added. • There are seven leap years in every 19 years. • The 3rd, 6th, 8th, 11th, 14th, 17th, and 19th years of the 19-year Metonic cycle are leap years. • 5784 is the 8th year of the 19-year Metonic cycle and is therefore a leap year. • After 5784, the next leap year is 5787 – the 11th year of the cycle. • A mnemonic for remembering the years corresponding to leap years is: "גוחאדז"ט". The Hebrew letters Gimmel, Vav, Chet, Aleph, Dalet, Zayin, and Tet are used to represent the Hebrew numerals, 3, 6, 8, 1, 4, 7, 9. • Mathematically, it can be determined if a year is a leap year by determining whether ((7*year)+1) mod 19) is less than 7. Gregorian Calendar Leap Years • During a Gregorian calendar leap year, an additional day is added to February. • A Gregorian year is a leap year if it meets BOTH criteria: □ It is divisible by 4. □ It is not divisible by 100, unless it is also divisible by 400. • 2024 is divisible by 4 and not by 100, and is therefore a leap year. • After 2024, the next leap year is 2028. 2028 is divisible by 4 and not by 100, and is therefore a leap year.
{"url":"https://torah.kabala.vn/tools/leap-years","timestamp":"2024-11-07T00:05:36Z","content_type":"text/html","content_length":"12353","record_id":"<urn:uuid:ca105239-31c5-4a3c-a43d-44ed813c6e6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00816.warc.gz"}
TILOS Seminar: Transformers Learn In-context by (Functional) Gradient Descent Transformers Learn In-context by (Functional) Gradient Descent Xiang Cheng, TILOS Postdoctoral Scholar at MIT Motivated by the in-context learning phenomenon, we investigate how the Transformer neural network can implement learning algorithms in its forward pass. We show that a linear Transformer naturally learns to implement gradient descent, which enables it to learn linear functions in-context. More generally, we show that a non-linear Transformer can implement functional gradient descent with respect to some RKHS metric, which allows it to learn a broad class of functions in-context. Additionally, we show that the RKHS metric is determined by the choice of attention activation, and that the optimal choice of attention activation depends in a natural way on the class of functions that need to be learned. I will end by discussing some implications of our results for the choice and design of Transformer architectures.
{"url":"https://tilos.ai/video/transformers-learn-in-context-by-functional-gradient-descent/","timestamp":"2024-11-04T20:57:53Z","content_type":"text/html","content_length":"150350","record_id":"<urn:uuid:11fb2c8c-ff72-43eb-8b43-4c2321ea8cb7>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00872.warc.gz"}
Geometric cumulative distribution function y = geocdf(x,p) returns the cumulative distribution function (cdf) of the geometric distribution, evaluated at each value in x using the corresponding probabilities in p. y = geocdf(x,p,"upper") returns the complement of the cdf, evaluated at each value in x, using an algorithm that more accurately computes the extreme upper tail probabilities. Compute Geometric Distribution cdf Toss a fair coin repeatedly until the coin successfully lands with heads facing up. Determine the probability of observing at most three tails before tossing heads. Compute the value of the cumulative distribution function (cdf) for the geometric distribution evaluated at the point x = 3, where x is the number of tails observed before the result is heads. Because the coin is fair, the probability of getting heads in any given toss is p = 0.5. x = 3; p = 0.5; y = geocdf(x,p) The returned value y indicates that the probability of observing three or fewer tails before tossing heads is 0.9375. Compute Multiple Geometric Distribution cdfs Compare the cumulative distribution functions (cdfs) of three geometric distributions. Create a probability vector that contains three different parameter values. • The first parameter corresponds to a geometric distribution that models the number of times you toss a coin before the result is heads. • The second parameter corresponds to a geometric distribution that models the number of times you roll a four-sided die before the result is a 4. • The third parameter corresponds to a geometric distribution that models the number of times you roll a six-sided die before the result is a 6. p = 3×1 For each geometric distribution, evaluate the cdf at the points x = 0,1,2,...,25. Expand x and p so that the two geocdf input arguments have the same dimensions. x = 1×26 expandedX = repmat(x,3,1); expandedP = repmat(p,1,26); y = geocdf(expandedX,expandedP) y = 3×26 0.5000 0.7500 0.8750 0.9375 0.9688 0.9844 0.9922 0.9961 0.9980 0.9990 0.9995 0.9998 0.9999 0.9999 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.2500 0.4375 0.5781 0.6836 0.7627 0.8220 0.8665 0.8999 0.9249 0.9437 0.9578 0.9683 0.9762 0.9822 0.9866 0.9900 0.9925 0.9944 0.9958 0.9968 0.9976 0.9982 0.9987 0.9990 0.9992 0.9994 0.1667 0.3056 0.4213 0.5177 0.5981 0.6651 0.7209 0.7674 0.8062 0.8385 0.8654 0.8878 0.9065 0.9221 0.9351 0.9459 0.9549 0.9624 0.9687 0.9739 0.9783 0.9819 0.9849 0.9874 0.9895 0.9913 Each row of y contains the cdf values for one of the three geometric distributions. Compare the three geometric distributions by plotting the cdf values. hold on legend(["p = 1/2","p = 1/4","p = 1/6"]) xlabel(["Number of Failures","Before Success"]) ylabel("Cumulative Probability") title("Geometric Distribution") hold off Compute Complement of Geometric Distribution cdf Roll a fair die repeatedly until you successfully get a 6. Determine the probability of failing to roll a 6 within the first three rolls. Compute the complement of the cumulative distribution function (cdf) for the geometric distribution evaluated at the point x = 2, where x is the number of non-6 rolls before the result is a 6. Note that an x value of 2 or less indicates successfully rolling a 6 within the first three rolls. Because the die is fair, the probability of getting a 6 in any given roll is p = 1/6. x = 2; p = 1/6; y = geocdf(x,p,"upper") The returned value y indicates that the probability of failing to roll a 6 within the first three rolls is 0.5787. Note that this probability is equal to the probability of rolling a non-6 value three times. Input Arguments x — Values at which to evaluate cdf nonnegative integer scalar | array of nonnegative integer scalars Values at which to evaluate the cdf, specified as a nonnegative integer scalar or an array of nonnegative integer scalars. To evaluate the cdf at multiple values, specify x using an array. To evaluate the cdfs of multiple distributions, specify p using an array. If both of the input arguments x and p are arrays, then the array sizes must be the same. If only one of the input arguments is an array, then geocdf expands the scalar input into a constant array of the same size as the array input. Each element in y is the cdf value of the distribution specified by the corresponding element in p, evaluated at the corresponding element in x. Example: 2 Example: [0 1 2 3] Data Types: single | double p — Probability of success scalar in the range [0,1] | array of scalars in the range [0,1] Probability of success in a single trial, specified as a scalar or an array of scalars in the range [0,1]. To evaluate the cdf at multiple values, specify x using an array. To evaluate the cdfs of multiple distributions, specify p using an array. If both of the input arguments x and p are arrays, then the array sizes must be the same. If only one of the input arguments is an array, then geocdf expands the scalar input into a constant array of the same size as the array input. Each element in y is the cdf value of the distribution specified by the corresponding element in p, evaluated at the corresponding element in x. Example: 0.5 Example: [1/2 1/3] Data Types: single | double Output Arguments y — cdf values scalar in the range [0,1] | array of scalars in the range [0,1] cdf values, returned as a scalar or an array of scalars in the range [0,1]. y is the same size as x and p after any necessary scalar expansion. For an element of y, y, and its corresponding elements in x and p, x and p, the cdf value y is the probability of having at most x trials before a success, when p is the probability of a success in any given trial. More About Geometric Distribution cdf The geometric distribution is a one-parameter family of curves that models the number of failures before a success occurs in a series of independent trials. Each trial results in either success or failure, and the probability of success in any individual trial is constant. For example, if you toss a coin, the geometric distribution models the number of tails observed before the result is heads. The geometric distribution is discrete, existing only on the nonnegative integers. The cumulative distribution function (cdf) of the geometric distribution is where p is the probability of success, and x is the number of failures before the first success. The result y is the probability of observing up to x trials before a success, when the probability of success in any given trial is p. [1] Abramowitz, M., and I. A. Stegun. Handbook of Mathematical Functions. New York: Dover, 1964. [2] Evans, M., N. Hastings, and B. Peacock. Statistical Distributions. 2nd ed., Hoboken, NJ: John Wiley & Sons, Inc., 1993. Extended Capabilities C/C++ Code Generation Generate C and C++ code using MATLAB® Coder™. GPU Arrays Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™. This function fully supports GPU arrays. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox). Version History Introduced before R2006a
{"url":"https://au.mathworks.com/help/stats/geocdf.html","timestamp":"2024-11-08T11:49:41Z","content_type":"text/html","content_length":"101476","record_id":"<urn:uuid:951baeeb-d7e2-48d9-b1d7-2fce1c26b116>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00551.warc.gz"}
Multiply Negative Numbers Worksheet 2024 - NumbersWorksheets.net Multiply Negative Numbers Worksheet Multiply Negative Numbers Worksheet – The Bad Figures Worksheet is a great way to start off instructing your children the very idea of negative numbers. A negative number is any variety that is certainly lower than no. It may be extra or subtracted. The minus sign shows the bad number. Also you can create negative phone numbers in parentheses. Under is actually a worksheet to help you began. This worksheet has a selection of adverse figures from -10 to 10. Multiply Negative Numbers Worksheet. Unfavorable amounts are a lot whose value is under absolutely nothing A negative number has a benefit lower than zero. It could be depicted on the amount line in just two ways: together with the optimistic quantity published because the very first digit, and with the negative amount written since the final digit. A good quantity is written using a additionally indicator ( ) prior to it, but it is optional to publish it this way. It is assumed to be a positive number if the number is not written with a plus sign. They are depicted by a minus signal In ancient Greece, bad amounts have been not employed. These were ignored, as their math was according to geometrical ideas. When Western scholars commenced converting ancient Arabic texts from Northern Africa, they stumbled on acknowledge negative figures and appreciated them. Today, adverse numbers are displayed with a minus indicator. To understand more about the origins and history of bad phone numbers, check this out write-up. Then, consider these cases to see how unfavorable phone numbers have progressed as time passes. They can be added or subtracted Positive numbers and negative numbers are easy to add and subtract because the sign of the numbers is the same, as you might already know. Negative numbers, on the other hand, have a larger absolute value, but they are closer to than positive numbers are. These numbers have some special rules for arithmetic, but they can still be added and subtracted just like positive ones. You can even add and subtract adverse numbers using a variety range and utilize the same guidelines for subtraction and addition as you may do for optimistic numbers. They may be represented with a amount in parentheses A negative quantity is represented with a number encased in parentheses. The bad signal is transformed into its binary counterpart, and the two’s enhance is held in a similar devote storage. Sometimes a negative number is represented by a positive number, though the result is always negative. When this happens, the parentheses should be provided. If you have any questions about the meaning of negative numbers, you should consult a book on math. They are often separated with a positive number Adverse numbers could be multiplied and divided like positive amounts. They can even be separated by other negative amounts. However, they are not equal to one another. The first time you multiply a negative amount by way of a beneficial amount, you will definately get no for that reason. To produce the best solution, you must determine which signal your answer should have. It can be much easier to remember a negative number when it is designed in brackets. Gallery of Multiply Negative Numbers Worksheet Leave a Comment
{"url":"https://www.numbersworksheets.net/multiply-negative-numbers-worksheet/","timestamp":"2024-11-12T22:00:26Z","content_type":"text/html","content_length":"60646","record_id":"<urn:uuid:f6667ac7-1ae5-4f71-92f1-cd49c8b0c86c>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00687.warc.gz"}
Types of Finite Element Method MCQs [PDF] Quiz Questions Answers | Types of Finite Element Method MCQ App e-Book Download: Test 56 Electromagnetic Theory Practice Test 56 Types of Finite Element Method Multiple Choice Questions (MCQs) PDF Download - 56 The Types of Finite Element Method MCQ with Answers PDF (Types of Finite Element Method Multiple Choice Questions (MCQs) PDF e-Book) download Ch. 3-56 to prepare Electromagnetic Theory Practice Tests . Learn Metamaterials Test PDF, Types of Finite Element Method Multiple Choice Questions (MCQ Quiz) for college entrance exams. The Types of Finite Element Method MCQ App Download: Free certification app for metamaterials: electric and magnetic responses, dielectric constant of dielectric materials, dielectric permittivity, introduction to electromagnetic theory, types of finite element method test prep for online school programs. The MCQ: SFEM stands for; "Types of Finite Element Method" App (Free Android & iOS) with answers: Smoothed Infinite Element Methods; Smoothed Finite Element Methods; Simple Finite Element Methods; Simple Infinite Element Methods; for college entrance exams. Practice Metamaterials Questions and Answers, Apple Book to download free sample for online engineering associate's degree programs. Types of Finite Element Method MCQ Quiz with Answers PDF Download: Test 56 MCQ 276: SFEM stands for 1. Smoothed Finite Element Methods 2. Smoothed Infinite Element Methods 3. Simple Finite Element Methods 4. Simple Infinite Element Methods MCQ 277: Strength of an electromagnet can easily be changed by changing the amount of 1. voltage across electromagnetic 2. electric current flowing through it 3. input voltage source 4. output voltage source MCQ 278: Electric susceptibility 'X[e]' unit is 1. Farads per meter 2. Farads per square meter 3. Coulomb per meter 4. no dimension MCQ 279: Silicon dielectric constant= 1. 1.2 2. 0.12 3. 12 4. 120 MCQ 280: Due to the symmetry of electromagnetic waves, the frequency-dependent permitivity is represented as 1. 1-^w^2[p.e]/[w^2-w^2[o,e]+iγw] 2. 1+^w^2[p.e]/[w^2-w^2[o,e]+iγw] 3. 1-^w^2[p.m]/[w^2-w^2[o,m]+iγw] 4. 1+^w^2[p.m]/[w^2-w^2[o,m]+iγw] Electromagnetic Theory Exam Prep Tests Types of Finite Element Method Textbook App: Free Download iOS & Android The App: Types of Finite Element Method MCQ App to study Types of Finite Element Method Textbook, Advance Electromagnetic Theory MCQs App, and Digital Electronics MCQs App. The "Types of Finite Element Method MCQ" App to free download Android & iOS Apps includes complete analytics with interactive assessments. Download App Store & Play Store learning Apps & enjoy 100% functionality with
{"url":"https://mcqslearn.com/electronics/advance-electromagnetic-theory/quiz/quiz.php?page=56","timestamp":"2024-11-05T06:57:43Z","content_type":"text/html","content_length":"97570","record_id":"<urn:uuid:9894402f-b800-492e-ab62-4e298ecacd74>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00079.warc.gz"}
Deep Learning and the Prisoner’s Dilemma: A Strategic Evaluation - NHSJS Deep Learning and the Prisoner’s Dilemma: A Strategic Evaluation This paper presents a high-level exploration of the behavior of Deep Learning models in complex environments through the lens of the Repeated Prisoner’s Dilemma problem. The strategic acumen of multiple models including binary classification models, convolutional neural networks and recurrent neural networks are evaluated based on their performance in a Repeated Prisoner’s Dilemma tournament. By evaluating this performance, it is found that the Deep Learning models studied lack the strategic abilities necessary to have success in dynamic situations. With the ubiquity of Deep Learning models in today’s world, this paper serves as a warning against the use of such models in sufficiently complex situations – like the often convoluted and certainly dynamic real world. Keywords: Artificial Intelligence (AI), Deep Learning (DL), Game Theory In today’s world, DL models are taking increasingly prominent roles across dozens of fields. There are even indications that DL techniques could soon play a role in big government decisions^1. Their prevalence, though, raises a question: how qualified are these models to assume such a prominent role in our world? First, though, it is worth elaborating on what is meant by “DL models”. This paper provides a cautionary tale describing the dangers of misinterpretation of “simple” DL – ranging from neural networks to full RNN’s – as “black boxes” which can solve any problem. Reinforcement learning models for example would defeat the purpose of the paper in their subversion of this point and, with this in mind, this paper only considers only the “simple” DL, defined and addressed below. For DL models to assume their current role, they must be able to understand their environments. The question of these models’ strategic abilities is, therefore, an extremely important one and constitutes the root study of this paper: what can we learn about the general, strategic potential of DL based models in dynamic, “real world” scenarios by having them compete in a simulated dynamic environment (the Repeated Prisoner’s Dilemma) and, by extension, how qualified are they to fill their prevalent role in our world? Literature Review There have been multiple studies published on topics similar to that of this paper in the past. Though this paper takes no inspiration from these studies, their work is summarized – in an extreme lack of detail – below. The first and most important citation to be made is that of Axelrod’s tournament. Professor Axelrod of University of Michigan ran a study in which strategies competed to play in an integrated Prisoner’s Dilemma^2. There has been extensive research on the results of this study, but most of them do not relate to this study in the sense that they do not have anything to do with DL; only those which do will follow. Firstly, Tuomas W. Sandholm and Robert H. Crites, both of the University of Massachusetts at Amherst Computer Science Department, analyzed the usage of archaic RNN Q-learning models in the context of the Prisoner’s Dilemma^3. Their study, though, does study reinforcement learning – again, not the point of this paper – and does not consider the modern context of DL’s popularity and usage. Given that their paper was written in 1996, it is simply outdated and does not use modern RNN techniques or consider more philosophically the repercussions of the study in today’s world. Secondly, Stanford Master’s Student Keven (Kedao) Wang wrote a paper building off of Sandholm’s and Crites’s study, using new RNN models (the same LSTM used in this paper)^4. This study found limited results surrounding the actual usability of these models and considered only very small tournaments—3 agents at most. Seeing as there was little to no data with large enough tournaments to really simulate a real world – let alone dynamic – environment, this study fails to capture the intricacy of this paper’s environments. All of this, as well, does not account for the usage of reinforcement learning, another differentiating factor between this and his paper. Finally, Shashi Mittal, writing with the Department of Computer Science and Engineering at the Indian Institute of Technology, Kanpur, considered the use of genetic algorithms and found some success^ 5. This, while tangentially related to our paper, does not even use the same brand of machine learning and, while he did find success, it is in single round matches against one model—not nearly as dynamic or applicable as this paper’s findings. There are multiple other papers which analyze a similar concept, but they each fall into similar categories to those mentioned prior. Their citations, however, are still listed^6^,^7^,^8^,^9^,^10. With this in mind, the main differentiators between prior study and this paper are these: 1. This paper considers massively more complex and dynamic scenarios far more applicable to the problem being solved. 2. In ignoring reinforcement learning, we have, while weakening the general argument that DL struggles as a whole, broken the argument down to be more realistic yet similarly applicable. The Prisoner’s Dilemma The Prisoner’s Dilemma is a game studied extensively in game theory. Two prisoners, Bob and Alex, are presented with a decision: Each prisoner can either testify against the other (defect) or remain silent (cooperate). Should they both cooperate, they each get a 3 year sentence reduction, the police having no evidence with which to convict. However, if Bob defects, he receives a 5 year sentence reduction while Alex receives no reduction. Should both defect, though, they each only receive a 1 year reduction^11. It’s important to note that any values can be used, as long as they satisfy the conditions:: defecting against cooperation > mutual cooperation > mutual defection > cooperating against defection. 5, 3, 1 and 0 were chosen because the original tournament used them. These situations are illustrated in Figure 1, below. Figure 1: Prisoner’s Dilemma outcomes (Bob, Alex)^11. This game is considered a dilemma because, while the best overall outcome occurs when both players cooperate, the choice with the greatest net benefit for an agent in that round is to defect. This becomes clear when one considers the game from only one player’s perspective. Bob, as shown in Figure 1, knows Alex will either cooperate or defect. Should Alex cooperate, Bob nets more points to defect; the same goes for the defection. With this, both players seemingly benefit most from electing to only defect. In this dilemma, trust is impossible and both players will, if acting as purely logical agents, always defect. This, of course, leads to a worse outcome for both players than if they had simply cooperated. The dilemma has a surprising pertinence to the dynamic and shifting world of humans, and even in its base form, although this resemblance in strength when multiple scenarios are placed in sequence, leading to the repeated prisoner’s dilemma Repeated Prisoner’s Dilemma The Repeated Prisoner’s Dilemma builds on this scenario by playing the game multiple times in succession. In this extended version, players remember their opponent’s previous moves and adjust their strategies accordingly, thus introducing the potential for strategy development and rapport building. In the classic version, the lack of repeated interactions means that defection is the rational choice. With no future retaliation possible, it is better to take advantage of the opponent. In contrast, the Repeated Prisoner’s Dilemma allows for the possibility of punishment or reward in future rounds depending on the actions taken in current and past rounds. This setup mimics real-life social interactions where people repeatedly encounter the same individuals, which can lead to stable cooperation or long-term rivalry based on the history of their interactions. The iteration of the Prisoner’s Dilemma therefore introduces the idea of trust and cooperation building, both integral to the simulation of a more human environment. In essence, while the classic Prisoner’s Dilemma provides a snapshot of how individuals act under a single, isolated set of circumstances, the Repeated Prisoner’s Dilemma offers a broader view of how strategies and relationships evolve over time. Study of such an environment can reveal complexities of human – and AI, in our case – decision making^12. The study of these complexities and their relationship with the actual dynamic decision making capabilities of AI models are the main premise of this paper. 1. Axelrod’s Tournament The most famous study of the Repeated Prisoner’s Dilemma is Professor Axelrod’s tournament. Robert Axelrod, a political science professor at the University of Michigan, ran a Repeated Prisoner’s Dilemma tournament in 1984^2. This tournament was a computer-simulated competition in which participants submitted various strategies to play the Repeated Prisoner’s Dilemma against each other. Each strategy was essentially a set of rudimentary rules dictating whether to cooperate or defect based on a variety of factors. The tournament iterated through numerous rounds allowing detailed analysis of what strategies worked and which did not. In his first tournament, Axelrod saw intriguing results. The winning strategy, Tit-for-Tat, was extremely simple. It starts by cooperating on the first move and, on all subsequent moves, simply mirroring the opponent’s last move. Axelrod’s conclusions^13 as to the success of strategies were: 1. Niceness: Never be the first to defect. 2. Provocability: Get mad quickly at defectors and retaliate. 3. Forgiveness: Do not hold a grudge once you have vented your anger. 4. Clarity: Act in ways that are straightforward for others to understand. Axelrod’s tournament findings were many, but they are largely outside of the scope of this paper. It is suggested that the reader read the prior source for more information. B. Why use the Prisoner’s Dilemma? The Prisoner’s Dilemma is an admittedly simple look at dynamic, multi-agent environments. While any of the hundreds could have been chosen, the Prisoner’s Dilemma was selected for its simplicity and the fact that it does not cater to DL models. Its simplicity makes the construction of models and tournaments less intricate, therefore limiting possible confounding variables within the models. This also means that the tournaments need less computational power, allowing for more diverse simulations to be run with higher round counts and more agents. Also, while other multi-agent environments are built for the use of DL models, the Prisoner’s Dilemma doesn’t cater in that fashion. Thus, results from the Prisoner’s Dilemma are more generalizable to real world situations where models might not have such an idyllic environment. To successfully simulate a Repeated Prisoner’s Dilemma situation for DL models, a pre-existing environment must exist in which all models can train and act. Bearing that in mind, Axelrod’s first tournament was used as the base environment for the models. To create said base environment, documentation of Axelrod’s tournament^14 was referenced. It is worth noting that the original tournament was recreated to the best of our ability, but some documentation from the original tournament was lost. With the exception of necessary liberties taken so as to fill those gaps, the tournament exactly replicates the original with no changes to cater to the models’ needs. 1. Classical Strategies Short descriptions of some of the classical strategies are listed below. However, it is suggested to refer to the aforementioned documentation for more in depth information on all of the strategies. 1. Tit-for-Tat: An extremely simple strategy which starts by cooperating and copies its opponent’s last move on all subsequent rounds. 2. Random: Randomly selects between defection and cooperation. 3. Grofman: Cooperates approximately 28% of the time if the players’ moves differed. Otherwise, always cooperate. 4. Joss: Play Tit-for-Tat, but 90% of cooperations from the opponent are interpreted as defects. 5. Grudger: Always cooperate until your opponent defects. Then always defect. B. DL Models In this study, three DL models were added to the tournament for analysis. Each was trained with no early stopping and 100 epochs – at batch sizes of 64. 1. A feedforward neural net (FNN) 2. A convolutional neural net (CNN) 3. A recurrent neural net (RNN) Each model represents a major classification of neural networks^15. All models were implemented using tensorflow keras. The first model is a binary classification model, a simple fully connected neural network, with two hidden layers (each of size 16), maximizing the success of the model within reasonable complexity. Figure 2: Fully connected binary classification FNN^16 (general model, not the one implemented in the study). It takes in various inputs and outputs a value between one and zero representing the predicted chance its opponents will cooperate. This model was trained on a variety of input data: the last three moves of each player, the points for each, the percent cooperation for each, the round number, and a chi-squared test meant to detect random choices. These choices encode the data in a manner such that the model has: 1. A very strong understanding of the recent moves of both players, thereby understanding its local environment 2. Some understanding of its own location and the general choices of its opponent throughout the whole match, allowing for long term planning but avoiding overfitting For more information on the specifics of the input data and training, see the actual code of the models^17. The second model was a convolutional neural network, implemented with 1D convolution meant to process sequential data (Figure 3). Figure 3: 1-Dimensional CNN^18. This model was built to take a sequence of all the moves that have occurred in the round and use them to predict the opponent’s next move. This is as opposed to the input of the FNN, which is fed extra data such as the chi-square test, percent cooperation and other values which are all omitted from the CNN. At its core, it uses convolution, a process used to extract useful features of the data; and pooling, a process of creating a more compact and accurate representation of the data; to compact data to be fed into a binary classification model^19. Thanks to the aforementioned processes and, therefore, the architecture of our CNN as a whole, the feature extraction is far more automatic in this model. Taking advantage of that, the CNN was trained on data that included all moves performed before the current one, allowing it to create its own compact representation. This is opposed to the FNN, which requires that the designer choose features to input as data, robbing the model of its ability to learn its own representations and heavily influencing the model’s functionality. It should be noted that, in order to make that data of uniform size, it was pre-padded with 1’s. This ensures the sequences are of equal length and therefore easily processed. It does, however, skew the average choice of the CNN towards cooperating. This skews the data significantly and means that the CNN’s choices are not entirely its own. However, seeing as the CNN fared best with a padding of 1’s (as opposed to 0’s, for example), it is understood that the shortcomings of the model are still its own and not a product of the padding. The final model implemented was a recurrent, long short-term memory (LSTM) neural network. In its training, it was given the same data as the convolutional network, namely a sequence of the last moves, up to 200, padded with 1s. Figure 4: Long-short term memory RNN^20. LSTM networks are designed to retain information for extended periods. Each LSTM cell is a complex unit with a memory-carrying component, the cell state, which conveys information down the sequence. Three gates manage the regulation of information flow within the cell: the forget gate, which decides what to discard; the input gate, which updates the memory with new data; and the output gate, which determines the current output based on the memory state. These gates employ sigmoid functions to make binary decisions and tanh functions to scale values, ensuring the LSTM cell selectively retains or forgets information^21. C. Overall Layout In the tournament, every strategy faces off against each other in classic round robin style. Each strategy, each round makes a choice based on their architecture and they are awarded points based on the recorded outcomes. A complete tournament consisted of every model playing every other model twice in games that were 200 rounds (choices) long. It was discovered that, since strategies play each other to determine their overall score, the removal of one strategy can have massive ripples throughout the tournament as a whole. To account for this, many different tournaments were run with different combinations of models, for a total of 14 combinations: 1. All Strategies 2. DL Only 3. No DL 4. Everything but Random 5. Cooperation Focused (TitForTat, Grudger, Davis) 6. Competition Focused (Tullock, Shubik, Grofman) 7. Forgiving Strategies (TitForTat, Grudger, Joss, Davis) 8. Adaptive vs. Static (TitForTat, Joss, Grudger, RNN, Random, AlwaysCooperate, AlwaysDefect) 9. All Strategies with AD and AC (All strategies with AlwaysDefect and AlwaysCooperate added) 10. Complexity vs Simplicity (TitForTat, Stein&Rappoport, RNN, CNN, AlwaysCooperate, AlwaysDefect) 11. Early vs Late Game (Joss, Tullock, TitForTat, Grudger, AlwaysDefect) 12. No DL with AD and AC (All non-DL strategies with AlwaysDefect and AlwaysCooperate added) 13. Forgiveness Factor (Grudger, Q-LearningHard, Joss, AlwaysCooperate) 14. High Risk vs Low Risk (Random, Tullock, RNN, TitForTat, Grudger, AlwaysDefect, AlwaysCooperate). In short description of the motivation behind these selections, certain combinations acted as controls; “DL Only” and “No DL” acted as positive and negative controls, respectively. Other controls included “All Strategies” and “Everything but Random.” The rest of the combinations were chosen to highlight some specific facet of the game. In that way, each combination should have some differing effect on the performance of the strategies/models. In doing this, one can see the breadth of possibilities for the model’s successes and failures, thereby eliminating the possibility that one model’s success was entirely based on the specific environment that was chosen. Figure 6 shows the average score of all models over twenty tournaments on the Y-axis, the model names on the X-axis and the specific scenarios color coded as per the legend. At first glance, it seems that the DL models are dominating. Closer observation, however, shows that their lead is derived in large part from a single matchup, namely “DL Only.” DL models achieved perfect scores when playing each other, revealing that their favored strategy in a static scenario was cooperation, and in this a controlled environment, they executed it perfectly. In this respect, the model showed some evidence of strategic actions. This paper does not refute DL’s success in static scenarios like this one; it is well established that, in multi-agent environments with many DL models, the agents tend to strategize successfully. However, while they strategized successfully in this static case, this changes as dynamic variables are added. In figure 8, with the DL Only scenarios removed, it can be seen that the models’ total standing dropped sharply. They are still ranked highly, with RNN ranked first and CNN ranked third, but it’s not the effortless dominance that the original graph would seem to imply. Additionally, when noise was added (a 5% chance of a strategy’s choice switching), this trend was magnified exponentially. A mere 5% percent chance of noise, a small fraction of the chaos embodied in the real world, was enough to completely turn the tables. RNN was lowered to third, Binary Classification to the middle of the pack, and CNN, which previously held third place, got last. Additionally, looking at the “DL Only” scenario further strengthens this conclusion. Instead of the neat rows seen in the scenario without noise, the models seem unable to cope with the added complexity, and as a result, show a sharp decrease in performance. Interestingly enough, in this noisy environment, Q-learning takes first place—therefore incidentally providing evidence that the use of reinforcement learning, in other words, the right tool for the job, can have great success. However, Q-learning was provided with more information than other models (an edge case which need not be discussed seeing as Q-learning is not the focus of this paper) so in reality, AlwaysDefect is crowned the rightful champion in this tournament. This, once again, highlights the limitation of DL in dynamic scenarios. These findings quite clearly underscore the limitations of DL when applied to real-world problems, but with this data, a new question arises: why do these models fare so poorly in dynamic scenarios? The answer is, very simply, that this sort of action is not what DL models are built to do. Take the predictive success of the models, for example. All DL models were evaluated on their ability to predict their opponents moves after having been trained; in each case, they scored above 90% accuracy. Obviously, though, that performance was not reflected in the actual tournaments. The reason for this is that, despite successfully predicting the opponent’s move, that is a far cry from making the correct decision yourself. This idea is represented in the difference between hard and soft action policies. The DL models only predict the next move of their opponent; the choice of what to do with that information had to be implemented separately from the models. After a bit of experimentation, it was found that the best policy was to simply mimic the predicted choice, ensuring either mutual cooperation or mutual Additionally, it was found that taking the output of the model (the percent chance its opponent would cooperate) and turning it into a prediction was harder than expected. The first iteration used a soft action policy, a strategy that cooperated with a probability equal to the predicted probability of cooperation. For example, if the predicted probability of cooperation was 20%, then 20% of the time the model would cooperate and 80% of the time it would defect. After some experimentation, it was found that a hard policy worked best (any probability over 50% is cooperation, anything less is defect) because any mistaken prediction, which soft policies are more prone to due to the nature of random chance, will lead to a loss of trust, more defections, and fewer points. The very fact that human hands must select an action policy reveals a fundamental weakness in DL; ideally, such models should be able to make their own decision. By making the choice for them, the DL models are robbed of flexibility and nuance, crucial parts of successful participation in the human world. This concept and its application to static vs. dynamic scenarios is a fundamental weakness of DL models. In dynamic scenarios, the environment shifts even with the addition of the model to the environment. With even this miniscule change in environment, the model becomes less effective and each other change only compounds the issue. This manifested itself as the tournament actually shifted when models were incorporated, with other strategies making decisions based on the models’ actions, data which by its very nature cannot be included in the training data. It’s a futile chase, where every round of training will change its actions, and every one of its actions will change its environment, mandating more training. The graph above is a perfect example. It’s from an alternate CNN model, one that was not only trained before the tournament but between rounds based on the data from the previous round. The x-axis represents the epochs it was trained over, and the y-axis represents the average accuracy for that specific epoch. Just by looking at the data, it becomes apparent that the model, despite achieving near-perfect accuracy after training, is once again made to start at a lower accuracy upon revising data from the previous round. This reflects the idea the model is constantly trying to catch up, with every improvement causing a cascade that just leads it right back to where it started. It highlights how, while DL is good in static scenarios, when it encounters dynamic ones that shift upon its own entry, it struggles. Unfortunately, all real, human scenarios are dynamic, and until this lack of flexibility is resolved, AI lacks the ability to participate in a truly human manner or, for that matter, successfully in any human environment. This research offers a glimpse into the potential and limitations of DL in strategic contexts. While DL can mimic and sometimes enhance human strategic thinking, its ability to innovate under unpredictable circumstances has proven unreliable. DL cannot be treated as mere “black box”, as has been becoming increasingly common in contemporary society, its limitations must be understood. Simply put, DL models are capable of amazing things, but, without more advanced techniques, they are not quite capable of acting as the omnipotent “black box” they are often made out to be, especially not in the human world and its chaotic twists and turns. Conflicts of Interest The authors declare no conflict(s) of interest. Asilomar Microcomputer Workshop LEAVE A REPLY Cancel reply
{"url":"https://nhsjs.com/2024/deep-learning-and-the-prisoners-dilemma-a-strategic-evaluation/","timestamp":"2024-11-03T06:57:27Z","content_type":"text/html","content_length":"243727","record_id":"<urn:uuid:2e0f22c3-e277-411a-823a-f36af6ca07bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00721.warc.gz"}
2-digit plus 2-digit Worksheets (First Grade, printable) Printable 2-digit Addition Worksheets: 2-digit + 1-digit (no regrouping) (eg. 56 + 3) 2-digit + 2-digit (no regrouping) (eg. 35 + 24) 2-digit + 1-digit (with regrouping) (eg. 48 + 5) 2-digit + 2-digit (with regrouping) (eg. 64 + 29 2-digit plus 2-digit Worksheets First Grade worksheets to help students learn how to add a 2-digit number to a 2-digit number. No regrouping or carrying is required. Students can start with strategies like counting fingers, counting on or number bonds. They may also write the problems in a vertical form. With practice, they may be able to do the sums in their heads. How to add 2-digit numbers with no regrouping: 1. Line Up the Numbers: Write the numbers vertically, aligning their place values (tens and ones). Tens digits should be above each other, and ones digits should be above each other. 2. Add the Ones Place: Start by adding the digits in the ones place. Write the sum of the ones digits below the ones place column. 3. Add the Tens Place: Move to the tens place and add those digits together. Write the sum of the tens digits below the tens place column. 4. Read the Answer: The final number you’ve written is the sum of the two-digit numbers. Key Points: • No regrouping means the sum of the digits in the ones place stays below 10. • If the sum of the digits in a place value is 10 or more, you’ll need to regroup (carry over). • Practice with different examples to solidify understanding. Click on the following worksheet to get a printable pdf document. Scroll down the page for more 2-digit plus 2-digit Worksheets. More 2-digit plus 2-digit Worksheets Find the sum. Write the problem in vertical form, if necessary. (Answers on the second page) 2-digit plus 2-digit Worksheet #1 2-digit plus 2-digit Worksheet #2 2-digit plus 2-digit Worksheet (Interactive) Addition Facts to 20 (eg. 12 + 5) Add 10 to a 2-digit number (eg. 10 + 24) Add Whole Tens (eg. 50 + 60) Add Tens to a 2-digit number (eg. 60 + 34) Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations. We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
{"url":"https://www.onlinemathlearning.com/2-digit-plus-2-digit-worksheet.html","timestamp":"2024-11-05T03:19:45Z","content_type":"text/html","content_length":"38108","record_id":"<urn:uuid:0df9d2ad-c7df-45e8-b9a7-4688d9640df5>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00378.warc.gz"}
Performing the Inverse Discrete Time Fourier Transform on Spectrometer Data in Python What will you learn? In this tutorial, you will learn how to effectively perform the Inverse Discrete Time Fourier Transform (IDTFT) on spectrometer data using Python. Gain insights into handling spectral data and converting it from frequency domain to time domain for analysis. Introduction to the Problem and Solution When working with spectrometer data, transforming it into a more interpretable format is essential. One crucial transformation is the Inverse Discrete Time Fourier Transform (IDTFT), which converts frequency domain data back into the time domain. This process is vital for applications like signal processing where understanding the original time-based signal is critical. By leveraging Python’s robust libraries such as NumPy and SciPy, we can efficiently execute this transformation. Our goal is to simplify this seemingly complex operation into practical steps that you can confidently apply in your projects. import numpy as np from scipy.fft import ifft # Sample frequency-domain data (spectrometer data) frequency_data = np.array([2+3j, 4+5j, 6+7j]) # Actual spectrometer data # Perform IDTFT using the inverse fast Fourier transform function time_domain_data = ifft(frequency_data) print("Time Domain Data:", time_domain_data) # Copyright PHD The code snippet above demonstrates performing an IDTFT on sample frequency-domain spectrometer data. Here’s a breakdown: – Import Necessary Libraries: Utilize numpy for array manipulation and scipy.fft.ifft for computing the inverse FFT. – Sample Frequency-Domain Data: frequency_data represents complex numbers mimicking spectrometer output. – Performing IDTFT: The ifft() function applies an inverse FFT to convert frequency-domain samples to time-domain signals. – Result Presentation: Display the converted time-domain signals. This conversion is crucial for analyzing signals initially captured in the frequency domain but requiring interpretation or further analysis in their original time-based context. 1. What is IDTFT? ☆ The Inverse Discrete Time Fourier Transform (IDTFT) converts frequency-domain signals back into their original time domain form. 3. Why perform an IDTFT on spectrometer data? ☆ Converting spectral/frequency information back to time allows analysis of changes over time or reconstruction of original waveforms. 5. What libraries does Python offer for FFT operations? ☆ Python provides libraries like NumPy and SciPy with functions for Fast Fourier Transforms (FFT) including its inverse operations. 7. Can I perform IDTFT on non-uniformly sampled spectral lines? ☆ Yes, additional steps like interpolation may be needed before applying standard FFT algorithms due to uniform sampling assumptions. 9. Is there a difference between FFT and DFFT? ☆ Yes, FFT (Fast Fourier Transform) improves computational speed compared to DFFT (Discrete Fourier Transform). 11. How accurate are these transformations? ☆ Accuracy depends on factors like sampling rate, preprocessing techniques, and numerical precision inherent in digital computations. 13. Can this method handle real-time spectrum analysis? ☆ While theoretically possible with optimized routines and hardware resources, practical limitations may arise based on application needs. 15. What post-processing might be needed after performing an IDTFT? ☆ Post-processing could involve tasks like noise filtering, phase correction, or amplitude normalization based on analysis goals. Mastering the Inverse Discrete Time Fourier Transform in Python opens up possibilities for efficient handling of spectrometer data. By understanding this transformation process and leveraging Python’s libraries, you can seamlessly transition between frequency and time domains for insightful signal analysis. Leave a Comment
{"url":"https://pythonhelpdesk.com/2024/02/25/performing-the-inverse-discrete-time-fourier-transform-on-spectrometer-data-in-python/","timestamp":"2024-11-10T01:04:55Z","content_type":"text/html","content_length":"43639","record_id":"<urn:uuid:c2c16a34-37ab-42db-b334-2541d0d23e50>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00123.warc.gz"}
Finite H_v-Fields with Strong-Inverses The largest class of hyperstructures is the class of H v -structures. This is the class of hyperstructures where the equality is replaced by the non-empty intersection. This extremely large class can used to define several objects that they are not possible to be defined in the classical hypergroup theory. It is convenient, in applications, to use more axioms and conditions to restrict the research in smaller classes. In this direction, in the present paper we continue our study on H v -structures which have strong-inverse elements. More precisely we study the small finite cases. hyperstructure; H_v structure; hope; strong-inverse elements. P. Corsini, Prolegomena of Hypergroup Theory, Aviani Editore, 1993. P. Corsini and V. Leoreanu, Application of Hyperstructure Theory, Klower Academic Publishers, 2003. B. Davvaz and V. Leoreanu, Hyperring Theory and Applications Int. Acad. Press, 2007 B. Davvaz, R.M Santilli, and T. Vougiouklis, Studies of multi-valued hyperstructures for the characterization of matter-antimatter systems and their extension, Algebras, Groups and Geometries 28(1), (2011), 105-116. R. Mahjoob, T. Kaplani and T. Vougiouklis, H v -groups with strong-inverses, Submitted. T. Vougiouklis, Generalization of P-hypergroups, Rend. Circolo Mat. Palermo, Ser. II, 36, (198), (1987), 114-121. T. Vougiouklis, (1991). The fundamental relation in hyperrings. The general hyperfield, 4 th AHA, Xanthi 1990, World Scientific, (1991), 203-211. T. Vougiouklis, Hyperstructures and their Representations, Monographs in Math., Hadronic, 1994. T. Vougiouklis, Some remarks on hyperstructures, Contemporary Math., Amer. Math. Society, 184, (1995), 427-431. T. Vougiouklis, Enlarging H v -structures, Algebras and Combinatorics, ICAC’97, Hong Kong, Springer-Verlag, (1999), 455-463. T. Vougiouklis, On H v -rings and H v -representations, Discrete Math., Elsevier, 208/209, (1999), 615-620. T. Vougiouklis, The h/v-structures, J. Discrete Math. Sciences and Cryptography, V.6, N.2-3, (2003), 235-243. T. Vougiouklis, ∂-operations and H v -fields, Acta Math. Sinica, (Engl. Ser.), V.24, N.7, (2008), 1067-1078. T. Vougiouklis, The Lie-hyperalgebras and their fundamental relations, Southeast Asian Bull. Math., V. 37(4), (2013), 601-614 T. Vougiouklis, Hypermathematics, H v -structures, hypernumbers, hypermatrices and Lie-Santilli admissibility, American J. Modern Physics, 4(5), (2015), 34-46. T. Vougiouklis, and T. Kaplani, Special Elements on P-hopes and ∂-hopes, Southeast Asian Bulletin Mathematics, Vol. 40(3), (2016), 451-460. • There are currently no refbacks. Copyright (c) 2017 Theodora Kaplani, Thomas Vougiouklis This work is licensed under a Creative Commons Attribution 4.0 International License Ratio Mathematica - Journal of Mathematics, Statistics, and Applications. ISSN 1592-7415; e-ISSN 2282-8214.
{"url":"http://eiris.it/ojs/index.php/ratiomathematica/article/view/373","timestamp":"2024-11-07T23:11:25Z","content_type":"application/xhtml+xml","content_length":"22983","record_id":"<urn:uuid:0dc4855d-230d-4359-a5ab-750c8ffc8090>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00049.warc.gz"}
Problem Model Prompt P_Inequality_19 GPT-3.5 Root C w/ H Ground Truth Answer: 2 inequalities Use in Code import champ_dataset # "pip install champ-dataset" if necessary dataset = champ_dataset.load('v0') problem = dataset['P_Inequality_19'] messages = problem.conversations['GPT-3.5|Root C w/ H'].messages for message in messages: print(f'Role: {message.role}') print(f'Text: {message.text}') print(f'Error: {message.error}') Full Conversation System Prompt User Input Model Output You are an expert on mathematics. Solve the following problem. Make sure to show your work before giving the final answer. For positive a, b, c, d, at most how many of the three inequalities a+b<c+d, (a+b)(c+d)<ab+cd, (a+b)cd<ab(c+d) can be true at the same time? You may find the following information useful: 1. Find an assignment of a, b, c that makes two inequalities true. 2. Study whether all inequalities can be true by multiplying the first two together, and then the last two together. 3. (x+y)^n=x^n+C(n,1)x^(n-1)*y+C(n,2)x^(n-2)*y^2+...+C(n,2)x^2*y^(n-2)+C(n,1)x*y^(n-1)+y^n. To solve this problem, let's consider each inequality separately and analyze the conditions under which they can be true. 1. a+b < c+d: To find an assignment of a, b, c, and d that makes this inequality true, we can choose a = 1, b = 2, c = 3, and d = 4. In this case, a+b = 3 and c+d = 7, so a+b < c+d is true. 2. (a+b)(c+d) < ab+cd: To determine if this inequality can be true, let's multiply the first two inequalities together: (a+b)(c+d) < ab+cd (ac+ad+bc+bd) < ab+cd ac+ad+bc+bd < ab+cd Now, let's analyze the conditions under which this inequality can be true. We can rewrite the inequality as: ac+ad+bc+bd - ab - cd < 0 (a-1)(c+d) + (b-1)(c+d) - (c-1)(d-1) < 0 (a-1)(c+d) + (b-1)(c+d) - (c-1)(d-1) - (a-1)(b-1) < 0 From the given information, we know that (x+y)^n = x^n + C(n,1)x^(n-1)*y + C(n,2)x^(n-2)*y^2 + ... + C(n,2)x^2*y^(n-2) + C(n,1)x*y^(n-1) + y^n. Using this information, we can rewrite the inequality as: (a-1)(c+d) + (b-1)(c+d) - (c-1)(d-1) - (a-1)(b-1) = (a-1)(c+d) + (b-1)(c+d) - (c-1)(d-1) - (a-1)(b-1) + (c-1)(d-1) - (a-1)(b-1) < 0 (a-1)(c+d) + (b-1)(c+d) + (c-1)(d-1) - (a-1)(b-1) - (c-1)(d-1) < 0 (a-1)(c+d) + (b-1)(c+d) + (c-1)(d-1) - (a-1)(b-1) - (c-1)(d-1) - (b-1)(d-1) + (b-1)(d-1) < 0 (a-1)(c+d) + (b-1)(c+d) + (c-1)(d-1) - (a-1)(b-1) - (c-1)(d-1) - (b-1)(d-1) + (b-1)(d-1) - (a-1)(d-1) + (a-1)(d-1) < 0 Now, we can rewrite the inequality using the given information: (a-1)(c+d) + (b-1)(c+d) + (c-1)(d-1) - (a-1)(b-1) - (c-1)(d-1) - (b-1)(d-1) + (b-1)(d-1) - (a-1)(d-1) + (a-1)(d-1) = (a-1)(c+d) + (b-1)(c+d) + (c-1)(d-1) - (a-1)(b-1) - (c-1)(d-1) - (b-1)(d-1) + (b-1)(d-1) - (a-1)(d-1) + (a-1)(d-1) + (d-1)(c-1) - (d-1)(b-1) < 0 (a-1)(c+d) + (b-1)(c+d) + (c-1)(d-1) - (a-1)(b-1) - (c-1)(d-1) - (b-1)(d-1) + (b-1)(d-1) - (a-1)(d-1) + (a-1)(d-1) + (d-1)(c-1) - (d-1)(b-1) + (b-1)(c-1) - (b-1)(a-1) < 0 From this analysis, we can see that there are multiple terms involving (c-1), (d-1), (b-1), and (a-1). Therefore, it is difficult to determine the exact conditions under which this inequality can be true. However, we can conclude that it is possible for this inequality to be true under certain conditions. 3. (a+b)cd < ab(c+d): To determine if this inequality can be true, let's multiply the first and third inequalities together: (a+b)cd < ab(c+d) acd + bcd < abc + abd Now, let's analyze the conditions under which this inequality can be true. We can rewrite the inequality as: acd + bcd - abc - abd < 0 cd(a+b) - ab(c+d) < 0 From this analysis, we can see that the inequality involves terms with (a+b) and (c+d). Therefore, it is difficult to determine the exact conditions under which this inequality can be true. However, we can conclude that it is possible for this inequality to be true under certain conditions. In summary, we have found that it is possible for all three inequalities to be true under certain conditions. However, it is difficult to determine the exact conditions without further information. Now, summarize the answer above in one sentence, without any intermediate steps or explanations. It is possible for all three inequalities to be true simultaneously under certain conditions, but the exact conditions cannot be determined without further information.
{"url":"https://yujunmao1.github.io/CHAMP/solutions/p_inequality_19_gpt35_rootcwh.html","timestamp":"2024-11-14T07:01:19Z","content_type":"text/html","content_length":"8365","record_id":"<urn:uuid:ea28daba-e94b-4c0f-b743-3651b4b01b88>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00395.warc.gz"}
Calculation of the sliding bearing of finite length Generally for definition of elastic deformations of contacting bodies it is necessary to solve spatial problems of the theory of elasticity for a shaft and a bearing. In case of the bearing with the nonmetallic bush which module of elasticity is small in comparison with the module of elasticity of a shaft and a pad, the basic contribution to change of the form of a gap at performance of conditions (2.3) is brought by bush deformations. In this case it is possible to be limited to the decision of a problem of the theory of elasticity for the nonmetallic bush. We result here the decision of a problem for the bearing with the nonmetallic bush, which module of elasticity on two order below the module of elasticity of a steel. In Kodnira's monograph [1] the results of experimental researches of such bearings are resulted. At performance of a condition (2.5) for definition of deformations of the bush it is possible to use the equation (2.6). However thus there is a complexity with the formulation of boundary conditions for the equation (2.6). These conditions, however, define bush deformations in a vicinity of its end faces. Exact definition of these deformations has important practical value as they define loading at which there is a contact of surfaces of a shaft and the bush in face sections. In this connection we result the problem decision in which bush deformations are defined by a finite element metod. We state short an essence of this method. The basic equations of the theory of elasticity for the homogeneous and isotropic medium in cylindrical system of co-ordinates are resulted in. Let , are components of a vector of displacement of a point of the elastic medium with cylindrical co-ordinates. Components of the deformation tensor are connected with displacements in cylindrical system of co-ordinates as follows , , , , Components of stress tensor are defined as follows , , , , Here , , are the Poisson’s coefficient and the module of elasticity of medium accordingly. Let's enter dimensionless variables , , , , , , , , , , , , Here is internal radius of the bush, is a radial clearance of the bearing, , is the shaft radius. In dimensionless variables the dependences between displacements and deformations and also between deformations and stresses there are possible to write down in a kind , (8.1) The domain occupied with the bush, in dimensionless variables looks like , , , Here is thickness of the bush. We break this area into the elements representing tetrahedrons. In each tetrahedron it is entered three functions of three variables which define dimensionless elastic displacements , , . In the simplest case of linear functions, we write down them as follows Let is the vector, which components are equal to corresponding dimensionless displacement of top of Values of factors we pick up such, that a vector In top of a tetrahedron it was equal to a vector. As a result we receive the following expression for a vector in a considered tetrahedron , (8.2) where functions are connected with co-ordinates of tops of a tetrahedron as follows Then for functions we receive expression Substituting expression (8.2) for a vector of displacement to the equation (8.1)and taking into account expression (8.3) we receive the expression for deformations of medium in a considered , where (8.3) Expression (8.3) defines tetrahedron deformations through displacements of its tops. As displacements were accepted linearly dependent on co-ordinates the deformations turned out constants in all volume of a tetrahedron. Substituting expressions (8.3) for a component deformation tensor in the right part of the second equality (8.1), we receive expressions for stresses in a considered tetrahedron Expression (8.4) defines stresses in a considered tetrahedron through displacements of its tops. As displacements were accepted linearly dependent on co-ordinates the stresses turned out constants in all volume of a tetrahedron. We consider a case when external forces do not operate in a considered element of volume of medium. Then the tension of a considered element is defined only by forces of interaction with adjacent elements as which we will consider external. We replace the real efforts operating along borders of joining of the considered element with adjacent elements, to statically equivalent central forces, i.e. the forces which action causes the intense-deformed condition similar to volume what is in it at actual loading in an element. These forces are defined by the deformed condition (8.3) and a tension (8.4) and appear equal [2] Where integration is made on tetrahedron volume in dimensionless variables, , are dimensional components of a vector of force which should be enclosed to tetrahedron knot. From equality (8.5) the following interpretation of a square matrix of the third order defined under the formula follows If all tops of a tetrahedron, except top with number, rigidly to fix, that they did not move, and top with number to move on a vector central forces in to top in a dimensionless kind it will be defined under the formula The matrix is called as a stiffness matrix of a considered finite element. Calculating such matrixes for all final elements it is possible to receive a stiffness matrix of all system of final elements with which the initial area of the decision of a problem is replaced. For this purpose it is necessary to make in each knot of a grid summation of central forces from all finite elements, one of which tops is located in the given knot. The stiffness matrix of all system allows defining central forces in any knot of a grid on the set vectors of displacements of knots of a grid. The inverse matrix allows defining a vector of displacements of any knot of a grid if the vectors of the forces enclosed to each knot of a grid are set. Matrix calculation represents the primary goal of a finite element method. If external volume forces are absent, central forces are equal to zero in internal knots of a grid. On a surface of the bush which adjoins to an internal surface of a pad, central forces are not equal to zero. They are unknown, but are known a vector of displacements of knots of a grid on this surface which are equal to zero. This condition is one of boundary statements of the problem. Other boundary conditions is equality to zero of central forces in the knots located on lateral surfaces of the bush and on a part of its internal surface on which there is no pressure from a lubricant In the knots of a grid located on that part of an internal surface of the bush upon which there is force from a lubricant layer, central forces are distinct from zero. They are equal to forces which are statically equivalent to the loading distributed on a surface. If on a surface (dimensional) any final element there is superficial force (dimensional) then the statically to it the equivalent central force operating on element top is equal to where is the dimensional function in relation to which is dimensionless. In dimensionless variables the given expression can be written as where , Thus, calculation of a vector of displacements in each knot of a grid at the set distribution of pressure on a body surface is reduced to the following procedure: 1. We calculate a stiffness matrix of all system of finite elements into which the considered domain breaks. 2. We calculate the central forces enclosed to knots, located on a surface of considered area and equivalent to the external pressure distributed on an area surface. 3. We solve system of the equations Where is the vector, which components are equal to displacements of all knots of a grid, is the vector, which components are equal to the total central forces enclosed to all knots of a grid. In the case under consideration these forces are equal to zero for internal knots of a grid and are equal to central forces for the knots of a grid located on a surface of considered area. Let's enter a dimensionless variable, where - the co-ordinate directed along an axis of the bearing. Dimensionless pressure we define as follows: where is the length of the bearing. In dimensionless variables the basic equations of a problem look like , , , , where dependence between the functions of in grid knots is defined by system of the equations (8.8), , is the curve equation on which there is a destruction of a lubricant layer. Function is unknown and is defined in the course of the problem decision. , is the dimensional load on the bearing. The calculations were spent at following values of parameters: , and the thickness of the bush. Fig. 8.1 On fig. 8.1 the graphs of a thickness of a lubricant layer in section , are presented at various loads. At load the distribution of a thickness of a lubricant layer is in good agreement with the experimental results in [1]. At load the value of a thickness of a lubricant layer is above the experimental. It is connected, apparently, by that in the resulted mathematical model the warming up of a lubricant layer is not considered at its movement in a gap. In experiment the lubricant temperature on an input in a gap was supported by a constant. In calculations viscosity of lubricant was accepted by a constant in all gap and equal viscosity of lubricant on an input in a gap. In experiment the lubricant temperature increases in process of its advancement in a gap from an input to an exit from it that owing to a thickness of a lubricant layer decrease. The resulted graphs testify that with loading growth there is a zone of a gap in which the thickness of a lubricant layer changes slightly. This area extends at the further growth of loading. Fig. 8.2 On fig. 8.2 the functions of distribution of pressure in section are presented. It is visible that with growth of loading the point of a maximum of pressure is displaced to the left, to a point of an input of lubricant in a gap and the pressure distribution becomes more symmetric concerning a line of action of loading and more similar to pressure distribution at dry contact of bodies. Fig. 8.3 On fig. 8.3 the dependences of a thickness of a lubricant layer on the co-ordinate directed along an axis of the bearing are resulted, at value of angular co-ordinate degree. At such value of the thickness of a lubricant layer in face sections accepts the values close to the minimum values. On fig. 8.3 it is visible that at the given value the thickness of a lubricant layer accepts the minimum values in face sections. Thus at small load it poorly changes along a bearing axis. However, at high load the minimum value of a thickness of a lubricant layer in face sections can be several times less than the minimum value of a thickness of a lubricant layer in the central section (section). So, for example, at loadthe minimum value of a thickness of a lubricant layer in the central section is close to 10 microns, while the minimum value of a thickness of a lubricant layer in face sections is less than 1 micron. At the further increasing in loading in face sections there is a contact of surfaces while in the central section the thickness of a lubricant layer accepts high value. At the further increasing in load the zone of direct contact of surfaces extend, and zone, in which surface are divided by a lubricant layer, decrease. The literature 1. Коднир Д.С. Контактная гидродинамика смазки деталей машин. М.: Машиностроение, 1976. 2 Клованич С.Ф. Метод конечных элементов в нелинейных задачах инженерной механики. Запорожье, 2009.
{"url":"https://tribo-lab.com/index.php?option=com_content&view=article&id=9&Itemid=10","timestamp":"2024-11-02T21:21:56Z","content_type":"application/xhtml+xml","content_length":"79722","record_id":"<urn:uuid:db2b56b6-ca0b-4f57-8e91-54a7ec2c7568>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00707.warc.gz"}
Math Games for 4th Grade | LoonyLearn In fourth grade, students compare classroom learning to real-world experiences. They use addition, subtraction, multiplication and division to understand graphs. They use decimals to convert measurements for science experiments. They measure angles, find lines of symmetry, and classify shapes. Fourth grade can be overwhelming, but we are here to help! At LoonyLearn, we want students to get excited about math through gaming. Students engage in learning through our fun math games made for young learners. Your child will be captivated by the games on the screen. As your child plays their confidence grows and so do their math skills. Each of the games below can be played with any of our LoonyLearn math concepts. Just select the concept you want to learn then choose the game you want to play. Number and Operations in Base Ten: (4.NBT) Adding Whole Numbers up to 100,000 (4.NBT.4) Adding Whole Numbers up to 500,000 (4.NBT.4) Adding Whole Numbers up to 1,000,000 (4.NBT.4) Subtracting Whole Numbers up to 100,000 (4.NBT.4) Subtracting Whole Numbers up to 500,000 (4.NBT.4) Subtracting Whole Numbers up to 1,000,000 (4.NBT.4) Place Values (up to One Million) Writing Multi-digit Numbers in Expanded Form (4.NBT.2) Writing Multi-digit Numbers in Word Form (4.NBT.2) Ordering Multi-digit Numbers (4.NBT.2) Comparing Multi-digit Numbers (4.NBT.2) Rounding Multi-digit Numbers (4.NBT.3) Multiplying by 11 (4.NBT.5) Multiplying by 12 (4.NBT.5) Multiplying 2 Digit and 1 Digit Numbers (up to 100) (4.NBT.5) Multiplying 3 Digit and 1 Digit Numbers (up to 500) (4.NBT.5) Multiplying 3 Digit and 1 Digit Numbers (up to 1,000) (4.NBT.5) Multiplying 4 Digit and 1 Digit Numbers (up to 5,000) (4.NBT.5) Multiplying 4 Digit and 1 Digit Numbers (up to 10,000) (4.NBT.5) Multiplying 2 Digit and 2 Digit Numbers (up to 50) (4.NBT.5) Multiplying 2 Digit and 2 Digit Numbers (up to 100) (4.NBT.5) Dividing Multiples of 10 (4.NBT.1) Dividing 2 Digit by 1 Digit Numbers without Remainders (4.NBT.6) Dividing 2 Digit by 1 Digit Numbers with Remainders (4.NBT.6) Dividing 3 Digit by 1 Digit Numbers without Remainders (4.NBT.6) Dividing 3 Digit by 1 Digit Numbers with Remainders (4.NBT.6) Dividing 4 Digit by 1 Digit Numbers without Remainders (4.NBT.6) Dividing 4 Digit by 1 Digit Numbers with Remainders (4.NBT.6) Operations and Algebraic Thinking (4.OA) Estimating Products Estimating Quotients Word Problems Single-Step Word Problems using Multiplication (4.OA.2) Single-Step Word Problems using Division (4.OA.2) Mutli-Step Word Problems using Multiplication and Division (4.OA.3) Factors, Mulitples, and Number Patterns Identifying Factors (4.OA.4) Identifying and Finding Multiples (4.OA.4) Identifying Prime and Composite Numbers (4.OA.4) Identifying Number Patterns (4.OA.5 Number and Operations - Fractions (4.NF) Fraction Equivalents and Ordering Identifying Tenths and Hundredths (4.NF.1) Finding Equivalent Fractions with Models (4.NF.1) Finding Equivalent Fractions without Models (4.NF.1) Comparing Fractions with One-Half (4.NF.2) Comparing Differing Fractions with Models (4.NF.1) Comparing Differing Fractions without Models (4.NF.1) Fraction Addition, Subtraction, and Multiplication Identifying Mixed Numbers Writing Mixed Numbers as Fractions Decomposing Fractions into a Sum of Fractions (4.NF.3) Adding Fractions with Like Denominators that Sum up to 1 (4.NF.3) Adding Fractions with Like Denominators that Sum greater than to 1 (4.NF.3) Adding Mixed Numbers with Like Denominators with Models (4.NF.3) Adding Mixed Numbers with Like Denominators without Models (4.NF.3) Subtracting Fractions with Like Denominators (4.NF.3) Subtracting Mixed Numbers with Like Denominators with Models (4.NF.3) Subtracting Mixed Numbers with Like Denominators without Models (4.NF.3) Multiplying Fractions by a Whole Number (4.NF.4) Fraction Word Problems (4.NF.4) Express Fraction Denominator 10 as a Fraction Denominator 100 (4.NF.5) Express Fractions with Denominators 10 and 100 in Decimal Form (4.NF.5) Writing Decimals in Fraction form (Tenths) (4.NF.6) Writing Decimals in Fraction form (Hundredths) (4.NF.6) Comparing Decimals to Tenths (4.NF.7) Comparing Decimals to Hundredths (4.NF.7) Comparing Decimals greater than 1 (4.NF.7) Ordering Decimals less than 1 (4.NF.7) Ordering Decimals greater than 1 (4.NF.7) Measurement and Data (4.MD) Conversion of Measurements Express Larger Customary Units in Terms of Smaller Units (4.MD.1) Express Larger Metric Units in Terms of Smaller Units (4.MD.1) Word Problems with Convertion of Measurements (4.MD.2) Applying the Perimeter Formula (4.MD.3) Applying the Area Formula (4.MD.3) Representing and Interpreting Data Reading and Representing Data with Fractions on a Line Plot (4.MD.4) Measuring Angles with a Protractor (4.MD.6) Adding and Subtracting Angles (4.MD.7) Geometry (4.G) Identifying Lines, Rays and Line Segments (4.G.1) Identifying Parallel and Perpendicular Lines (4.G.2) Identifying Types of Angles (4.G.2) Identifying Types of Triangles (4.G.2) Classifying 2D Figures (4.G.2) Finding Lines of Symmetry for 2D Figures (4.G.3)
{"url":"https://loonylearn.com/research/math-games/fourth-grade","timestamp":"2024-11-09T13:52:56Z","content_type":"text/html","content_length":"49784","record_id":"<urn:uuid:a475dc35-f712-4b6b-ad6e-675cc6528736>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00050.warc.gz"}
A robust nonlinear system identification algorithm using orthogonal polynomial network A robust system identification algorithm is presented which makes use of linear system identification algorithms, such as Eigensystem Realization Algorithm, Observer/Kalman Identification, etc, and an orthogonal polynomial-based artificial neural network. Adaptive learning laws are derived by a thorough Lyapunov analysis to adjust different parameters of the neural network based model. The learning algorithm proposed in this paper is inspired by recent developments in adaptive control. The algorithm presented here is validated by analysis and simulation of examples based mainly on space applications. A detailed comparative study is performed to show the performance of the proposed algorithm with respect to some existing identification algorithms, specifically the Eigensystem Realization Algorithm. All Science Journal Classification (ASJC) codes • Aerospace Engineering • Space and Planetary Science Dive into the research topics of 'A robust nonlinear system identification algorithm using orthogonal polynomial network'. Together they form a unique fingerprint.
{"url":"https://pure.psu.edu/en/publications/a-robust-nonlinear-system-identification-algorithm-using-orthogon","timestamp":"2024-11-12T03:13:59Z","content_type":"text/html","content_length":"48672","record_id":"<urn:uuid:0a1d4e06-6f2d-419b-9eda-bde7fb5457ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00256.warc.gz"}
Btu per Hour to Microwatts Conversion (Btu/h to µW) Btu per Hour to Microwatts Converter Enter the power in btu per hour below to convert it to microwatts. Do you want to convert microwatts to btu per hour? How to Convert Btu per Hour to Microwatts To convert a measurement in btu per hour to a measurement in microwatts, multiply the power by the following conversion ratio: 293,071 microwatts/btu per hour. Since one btu per hour is equal to 293,071 microwatts, you can use this simple formula to convert: microwatts = btu per hour × 293,071 The power in microwatts is equal to the power in btu per hour multiplied by 293,071. For example, here's how to convert 5 btu per hour to microwatts using the formula above. microwatts = (5 Btu/h × 293,071) = 1,465,355 µW How Many Microwatts Are in One Btu per Hour? There are 293,071 microwatts in one btu per hour, which is why we use this value in the formula above. 1 Btu/h = 293,071 µW Btu per hour and microwatts are both units used to measure power. Keep reading to learn more about each unit of measure. What Are Btu per Hour? One BTU per hour is equal to one british thermal unit, or the energy needed to heat one pound of water 1 °F, for one hour. The btu per hour is a US customary unit of power. Btu per hour can be abbreviated as Btu/h; for example, 1 btu per hour can be written as 1 Btu/h. In the expressions of units, the slash, or solidus (/), is used to express a change in one or more units relative to a change in one or more other units. Learn more about btu per hour. What Is a Microwatt? One microwatt is the power equal to 1/1,000,000 of a watt, or the energy consumption at a rate of 1/1,000,000 of a joule per second. The microwatt is a multiple of the watt, which is the SI derived unit for power. In the metric system, "micro" is the prefix for millionths, or 10^-6. Microwatts can be abbreviated as µW; for example, 1 microwatt can be written as 1 µW. Learn more about microwatts. Btu per Hour to Microwatt Conversion Table Table showing various btu per hour measurements converted to microwatts. Btu Per Hour Microwatts 0.00001 Btu/h 2.9307 µW 0.00002 Btu/h 5.8614 µW 0.00003 Btu/h 8.7921 µW 0.00004 Btu/h 11.72 µW 0.00005 Btu/h 14.65 µW 0.00006 Btu/h 17.58 µW 0.00007 Btu/h 20.51 µW 0.00008 Btu/h 23.45 µW 0.00009 Btu/h 26.38 µW 0.000001 Btu/h 0.293071 µW 0.00001 Btu/h 2.9307 µW 0.0001 Btu/h 29.31 µW 0.001 Btu/h 293.07 µW 0.01 Btu/h 2,931 µW 0.1 Btu/h 29,307 µW 1 Btu/h 293,071 µW More Btu per Hour & Microwatt Conversions
{"url":"https://www.inchcalculator.com/convert/btu-per-hour-to-microwatt/","timestamp":"2024-11-13T18:19:38Z","content_type":"text/html","content_length":"65224","record_id":"<urn:uuid:9752869f-d6f6-4d62-8ae6-32964004a824>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00005.warc.gz"}
: Looking to understand the fraction notation in HSiO3/2, SiO4/2, H2SiO2/2, etc. I'm going through some papers about HSQ (hydrogen silsesquioxane) and keep seeing this fractional notation being used: HSiO , SiO , H , H Example paper: I understand that the proper formula for HSQ is [HSiO ]2n. Because we can't have fractional atoms, it's always multiplied by an even number and as such the fraction is never a consideration. But then why are the fractions maintained in the other compounds, and not reduced to SiO and H SiO ? Does keeping the fraction confer some structural information ? I've tried to find out what this notation means, but could not find a proper explanation. I guess what I'm trying to find out is, what do the numerator and denominators mean, formally, in those formulas ? If it's that the oxigen atoms share outside bonds, why not write the compound as SiO ? If the numerator is the number of atoms, what does the denominator represent ? And is it always /2 ? Can we have /x , where x is a different integer ? I've never seen this type of notation before. Does the denominator mean that the valence electrons are halved ? If you look at the linked paper's figure 1 where they draw out the cage, there's only one SiO entity. The oxigen atoms are not shared with another SiO entity, they connect to a H , or a HSiO . I am really confused by this notation because I've not seen it in any textbook, and it's not formally defined. Would it be correct then to say that silicon dioxide SiO is made up of SiO entities in which all oxygen atoms are shared with others ? Where 4 stands for four oxygens, and 2 means that each oxygen is shared between two SiO entities ? I've also asked this question on chemistry stackexchange but haven't gotten very far with my search for an answer.
{"url":"https://www.chemicalforums.com/index.php?topic=108037.0;prev_next=prev","timestamp":"2024-11-10T19:24:59Z","content_type":"text/html","content_length":"31906","record_id":"<urn:uuid:97116d3f-97a9-4ed8-b920-26bf048d4113>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00865.warc.gz"}
Stay healthy, keep calm and optimize April 1, 2020 TL;DR We built an optimization problem using OpenMDAO, Dymos, and the basic SIR model. The code is on github here. The corona virus pandemic is being tracked with a set of mathematical models, that are tuned to match the data on the virus. They predict the number of deaths we’ll see and health officials are relying on them to track the “curve”, and to figure out when it will be safe for us to stop social distancing. However, we’ve read a number of articles that clearly indicate that the modeling isn’t simple, or even widely agreed upon. One of the models being used is the Susceptible-Infected-Recovered (SIR) model. The SIR model is actually just an ODE with a few key parameters that can be tuned to match the data. When we saw that, we thought that perhaps some of our optimization tools could be used to attack this problem. Dymos is a library we’ve written specifically to deal with time-integration and optimal-control problems. So Tristan Hearn, a member of the OpenMDAO Application team, has built a basic implementation and showed that it could be used to do optimization on defining the counter Now for a disclaimer We are by no means infectious disease experts, but rather pracitioners of numerical optimization and multidisciplinary systems analysis. These models have not been tuned to match the data, and are by no means an accurate representation of the current situation anywhere in the world! The Model You can check out all the details on his pandemic repository. The code is set up to produce results like this: The blue line, represents the number of infected people. This is the “curve” that we’re all trying to flatten through social distancing. The goal is to lower the height of that hump, in order to not overwhelm our health care system. By doing that, we can hopefully minimize the height of the red line. The red line represents the number of deaths from the pandemic. This is the scary one. We don’t know if this model is useful to any actual infectious disease experts. Most probably, it won’t. But to be frank… it just feels good to be applying our skill sets to this world problem. If anyone out there wants to mess with the model, feel free! Here are some things that need to be done: • Parameter Identification: The model needs to be matched to real data. This isn’t easy. The data is different from state-to-state and county-to-county… this is part of what makes this so hard. See this fivethirtyeight article that talks about all of the uncertainty there is. • Uncertainty quantification: The OpenMDAO team doesn’t have a lot of UQ experience, but we do have some of the most flexible and most scalable ODE optimization tools around. We suspect there are some good ways that UQ could be brought to bear here. If anyone wants to work on that, feel free to reach out. • Model extension: Again, we’re not infectious disease experts. Those who are may be using more advanced models than this … or perhaps using a more complex version of this where different regions are modeled independently and interact with each other. We just don’t have the experience to know what to do here! So if you’re looking for some interesting optimization problems to keep your busy during these strange times, check out Tristan’s pandemic repository.
{"url":"https://openmdao.org/stay-healthy-calm-and-optimize/","timestamp":"2024-11-05T05:37:17Z","content_type":"text/html","content_length":"30765","record_id":"<urn:uuid:610fe616-7031-4bc1-ba13-21b49e282c24>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00412.warc.gz"}
On small-scale and large-scale intermittency of Lagrangian statistics in canopy flow The interaction of fluids with surface-mounted obstacles in canopy flows leads to strong turbulence that dominates dispersion and mixing in the neutrally stable atmospheric surface layer. This work focuses on intermittency in the Lagrangian velocity statistics in a canopy flow, which is observed in two distinct forms. The first, small-scale intermittency, is expressed by non-Gaussian and not self-similar statistics of the velocity increments. The analysis shows an agreement in comparison with previous results from homogeneous isotropic turbulence (HIT) using the multifractal model, extended self-similarity and velocity increments' autocorrelations. These observations suggest that the picture of small-scale Lagrangian intermittency in canopy flows is similar to that in HIT and, therefore, they extend the idea of universal Lagrangian intermittency to certain inhomogeneous and anisotropic flows. Second, it is observed that the root mean square of energy increments along Lagrangian trajectories depends on the direction of the trajectories' time-averaged turbulent velocity. Subsequent analysis suggests that the flow is attenuated by the canopy drag while leaving the structure function's scaling unchanged. This observation implies the existence of large-scale intermittency in Lagrangian statistics. Thus, this work presents a first empirical evidence of intermittent Lagrangian velocity statistics in a canopy flow that exists in two distinct senses and occurs due to different mechanisms. • Intermittency • atmospheric flows • mixing and dispersion ASJC Scopus subject areas • Condensed Matter Physics • Mechanics of Materials • Mechanical Engineering • Applied Mathematics Dive into the research topics of 'On small-scale and large-scale intermittency of Lagrangian statistics in canopy flow'. Together they form a unique fingerprint.
{"url":"https://cris.bgu.ac.il/en/publications/on-small-scale-and-large-scale-intermittency-of-lagrangian-statis","timestamp":"2024-11-10T14:17:02Z","content_type":"text/html","content_length":"60307","record_id":"<urn:uuid:2bda8db1-3dd1-41b0-95fa-40468e475d72>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00240.warc.gz"}
Fraction to Decimal C Steps to convert ^93/[972] as decimal 0.09568 Without the convenience of the fraction to decimal converter, let’s not forget the importance of being able to convert ^93/[972] as a decimal on paper. Fractions are made up of two parts. The numerator (upper part) and the denominator (lower part) ^a/[b.] The line which separates the numerator and denominator can be replaced with the division symbol (÷). To convert a fraction to a decimal we divide the numerator by the denominator ^a÷[b] Keep in mind, if either or both have long digits the division may get a little complicated. Here is the math for converting ^93/[972] as a decimal: ^93/[972] = ^93÷[972] = 0.09568
{"url":"https://www.asafraction.net/fraction-to-decimal/93.972","timestamp":"2024-11-04T01:56:31Z","content_type":"text/html","content_length":"32754","record_id":"<urn:uuid:3cfa25db-bcc1-492c-9df5-961695d92930>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00207.warc.gz"}
With one focus of the hyperbola 9x2−16y2=1 as the centre, a c... | Filo With one focus of the hyperbola as the centre, a circle is drawn which is tangent to the hyperbola with no part of the circle being outside the hyperbola. The radius of the circle is Not the question you're searching for? + Ask your question Given hyperbola is Hence, its foci are . The equation of the circle with as centre is On solving Eqs. (i) and (ii), we have Since, the circle touches the hyperbola, above equation must have equal roots. Hence, , which is not possible. Hence, the circle cannot touch at two points. It can only be tangent at the vertex. Hence, Was this solution helpful? Video solutions (4) Learn from their 1-to-1 discussion with Filo tutors. 13 mins Uploaded on: 2/12/2023 Was this solution helpful? 17 mins Uploaded on: 2/11/2023 Was this solution helpful? Found 8 tutors discussing this question Discuss this question LIVE for FREE 12 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice questions from Arihant Mathematics Master Resource Book (Arihant) View more Practice more questions from Conic Sections View more Practice questions on similar concepts asked by Filo students View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes With one focus of the hyperbola as the centre, a circle is drawn which is tangent to the hyperbola with no part of the circle being outside the hyperbola. The radius of the circle Question Text is Updated On Feb 12, 2023 Topic Conic Sections Subject Mathematics Class Class 11 Answer Type Text solution:1 Video solution: 4 Upvotes 536 Avg. Video Duration 11 min
{"url":"https://askfilo.com/math-question-answers/with-one-focus-of-the-hyperbola-fracx29-fracy2161-as-the-centre-a-circle-is","timestamp":"2024-11-08T07:55:05Z","content_type":"text/html","content_length":"586014","record_id":"<urn:uuid:adf3fdba-48ac-46ee-8fef-475570a55dc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00517.warc.gz"}
bigfloat(3p) Perl Programmers Reference Guide bigfloat(3p) bigfloat(3p) Perl Programmers Reference Guide bigfloat(3p) # bigfloat(3p) Perl Programmers Reference Guide bigfloat(3p) bigfloat - transparent big floating point number support for Perl use bigfloat; $x = 2 + 4.5; # Math::BigFloat 6.5 print 2 ** 512 * 0.1; # Math::BigFloat 134...09.6 print inf + 42; # Math::BigFloat inf print NaN * 7; # Math::BigFloat NaN print hex("0x1234567890123490"); # Perl v5.10.0 or later no bigfloat; print 2 ** 256; # a normal Perl scalar now # for older Perls, import into current package: use bigfloat qw/hex oct/; print hex("0x1234567890123490"); print oct("01234567890123490"); All numeric literals in the given scope are converted to Math::BigFloat All operators (including basic math operations) except the range operator ".." are overloaded. So, the following: use bigfloat; $x = 1234; creates a Math::BigFloat and stores a reference to in $x. This happens transparently and behind your back, so to speak. You can see this with the following: perl -Mbigfloat -le 'print ref(1234)' Since numbers are actually objects, you can call all the usual methods from Math::BigFloat on them. This even works to some extent on perl -Mbigfloat -le '$x = 1234; print $x->bdec()' perl -Mbigfloat -le 'print 1234->copy()->binc();' perl -Mbigfloat -le 'print 1234->copy()->binc->badd(6);' perl -Mbigfloat -le 'print +(1234)->copy()->binc()' (Note that print doesn't do what you expect if the expression starts with '(' hence the "+") You can even chain the operations together as usual: perl -Mbigfloat -le 'print 1234->copy()->binc->badd(6);' Please note the following does not work as expected (prints nothing), since overloading of '..' is not yet possible in Perl (as of v5.8.0): perl -Mbigfloat -le 'for (1..2) { print ref($_); }' OOppttiioonnss “bigfloat” recognizes some options that can be passed while loading it via via “use”. The following options exist: a or accuracy This sets the accuracy for all math operations. The argument must be greater than or equal to zero. See Math::BigInt's bbrroouunndd(()) method for perl -Mbigfloat=a,50 -le 'print sqrt(20)' Note that setting precision and accuracy at the same time is not p or precision This sets the precision for all math operations. The argument can be any integer. Negative values mean a fixed number of digits after the dot, while a positive value rounds to this digit left from the dot. 0 means round to integer. See Math::BigInt's bbffrroouunndd(()) method for perl -Mbigfloat=p,-50 -le 'print sqrt(20)' Note that setting precision and accuracy at the same time is not t or trace This enables a trace mode and is primarily for debugging. l, lib, try, or only Load a different math lib, see "Math Library". perl -Mbigfloat=l,GMP -e 'print 2 ** 512' perl -Mbigfloat=lib,GMP -e 'print 2 ** 512' perl -Mbigfloat=try,GMP -e 'print 2 ** 512' perl -Mbigfloat=only,GMP -e 'print 2 ** 512' hex Override the built-in hheexx(()) method with a version that can handle big numbers. This overrides it by exporting it to the current package. Under Perl v5.10.0 and higher, this is not so necessary, as hheexx(()) is lexically overridden in the current scope whenever the "bigfloat" pragma is active. oct Override the built-in oocctt(()) method with a version that can handle big numbers. This overrides it by exporting it to the current package. Under Perl v5.10.0 and higher, this is not so necessary, as oocctt(()) is lexically overridden in the current scope whenever the "bigfloat" pragma is active. v or version this prints out the name and version of the modules and then exits. perl -Mbigfloat=v MMaatthh LLiibbrraarryy Math with the numbers is done (by default) by a backend library module called Math::BigInt::Calc. The default is equivalent to saying: use bigfloat lib => 'Calc'; you can change this by using: use bigfloat lib => 'GMP'; The following would first try to find Math::BigInt::Foo, then Math::BigInt::Bar, and if this also fails, revert to Math::BigInt::Calc: use bigfloat lib => 'Foo,Math::BigInt::Bar'; Using c<lib> warns if none of the specified libraries can be found and Math::BigInt fell back to one of the default libraries. To suppress this warning, use c<try> instead: use bigfloat try => 'GMP'; If you want the code to die instead of falling back, use "only" instead: use bigfloat only => 'GMP'; Please see respective module documentation for further details. MMeetthhoodd ccaallllss Since all numbers are now objects, you can use all methods that are part of the Math::BigFloat API. But a warning is in order. When using the following to make a copy of a number, only a shallow copy will be made. $x = 9; $y = $x; $x = $y = 7; Using the copy or the original with overloaded math is okay, e.g., the following work: $x = 9; $y = $x; print $x + 1, " ", $y,"\n"; # prints 10 9 but calling any method that modifies the number directly will result in bbootthh the original and the copy being destroyed: $x = 9; $y = $x; print $x->badd(1), " ", $y,"\n"; # prints 10 10 $x = 9; $y = $x; print $x->binc(1), " ", $y,"\n"; # prints 10 10 $x = 9; $y = $x; print $x->bmul(2), " ", $y,"\n"; # prints 18 18 Using methods that do not modify, but test that the contents works: $x = 9; $y = $x; $z = 9 if $x->is_zero(); # works fine See the documentation about the copy constructor and "=" in overload, as well as the documentation in Math::BigFloat for further details. MMeetthhooddss iinnff(()) A shortcut to return Math::BigFloat->bbiinnff(()). Useful because Perl does not always handle bareword “inf” properly. A shortcut to return Math::BigFloat->bbnnaann(()). Useful because Perl does not always handle bareword "NaN" properly. # perl -Mbigfloat=e -wle 'print e' Returns Euler's number "e", aka eexxpp(1) PI # # perl -Mbigfloat=PI -wle 'print PI' Returns PI. bexp($power, $accuracy); Returns Euler's number "e" raised to the appropriate power, to the wanted accuracy. # perl -Mbigfloat=bexp -wle 'print bexp(1,80)' Returns PI to the wanted accuracy. # perl -Mbigfloat=bpi -wle 'print bpi(80)' Set or get the accuracy. Set or get the precision. Set or get the rounding mode. Set or get the division scale. Set or get the class that the downgrade class upgrades to, if any. Set the upgrade class to "undef" to disable upgrading. Upgrading is disabled by default. Set or get the class that the upgrade class downgrades to, if any. Set the downgrade class to "undef" to disable upgrading. Downgrading is disabled by default. use bigfloat; print "in effect\n" if bigfloat::in_effect; # true no bigfloat; print "in effect\n" if bigfloat::in_effect; # false Returns true or false if "bigfloat" is in effect in the current This method only works on Perl v5.9.4 or later. Hexadecimal, octal, and binary floating point literals Perl (and this module) accepts hexadecimal, octal, and binary floating point literals, but use them with care with Perl versions before v5.32.0, because some versions of Perl silently give the wrong Operator vs literal overloading "bigrat" works by overloading handling of integer and floating point literals, converting them to Math::BigRat objects. This means that arithmetic involving only string values or string literals are performed using Perl's built-in operators. For example: use bigrat; my $x = "900000000000000009"; my $y = "900000000000000007"; print $x - $y; outputs 0 on default 32-bit builds, since "bigfloat" never sees the string literals. To ensure the expression is all treated as "Math::BigFloat" objects, use a literal number in the expression: print +(0+$x) - $y; Perl does not allow overloading of ranges, so you can neither safely use ranges with "bigfloat" endpoints, nor is the iterator variable a use 5.010; for my $i (12..13) { for my $j (20..21) { say $i ** $j; # produces a floating-point number, # not an object This method only works on Perl v5.9.4 or later. "bigfloat" overrides these routines with versions that can also handle big integer values. Under Perl prior to version v5.9.4, however, this will not happen unless you specifically ask for it with the two import tags "hex" and "oct" - and then it will be global and cannot be disabled inside a scope with "no bigfloat": use bigfloat qw/hex oct/; print hex("0x1234567890123456"); no bigfloat; print hex("0x1234567890123456"); The second call to hheexx(()) will warn about a non-portable constant. Compare this to: use bigfloat; # will warn only under Perl older than v5.9.4 print hex("0x1234567890123456"); Some cool command line examples to impress the Python crowd ;) perl -Mbigfloat -le 'print sqrt(33)' perl -Mbigfloat -le 'print 2**255' perl -Mbigfloat -le 'print 4.5+2**255' perl -Mbigfloat -le 'print 3/7 + 5/7 + 8/3' perl -Mbigfloat -le 'print 123->is_odd()' perl -Mbigfloat -le 'print log(2)' perl -Mbigfloat -le 'print exp(1)' perl -Mbigfloat -le 'print 2 ** 0.5' perl -Mbigfloat=a,65 -le 'print 2 ** 0.2' perl -Mbigfloat=l,GMP -le 'print 7 ** 7777' Please report any bugs or feature requests to "bug-bignum at rt.cpan.org", or through the web interface at <https://rt.cpan.org/Ticket/Create.html?Queue=bignum> (requires login). We will be notified, and then you'll automatically be notified of progress on your bug as I make changes. You can find documentation for this module with the perldoc command. perldoc bigfloat You can also look for information at: • GitHub • RT: CPAN's request tracker • MetaCPAN • CPAN Testers Matrix • CPAN Ratings This program is free software; you may redistribute it and/or modify it under the same terms as Perl itself. bigint and bigrat. Math::BigInt, Math::BigFloat, Math::BigRat and Math::Big as well as Math::BigInt::FastCalc, Math::BigInt::Pari and Math::BigInt::GMP. • (C) by Tels <http://bloodgate.com/> in early 2002 - 2007. • Maintained by Peter John Acklam <pjacklam@gmail.com>, 2014-. perl v5.36.3 2023-02-15 bigfloat(3p)
{"url":"https://www.openbsdhandbook.com/man/man3p/bigfloat.3p/","timestamp":"2024-11-07T22:41:54Z","content_type":"text/html","content_length":"24532","record_id":"<urn:uuid:4be6b09f-9698-458f-87cf-fee9bb012744>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00595.warc.gz"}