Programming with Envelopes in OCaml and Haskell
In this post, I continue going through the famous paper “Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire” by Meijer et al. The paper shows that we can factor recursion out into separate higher-order functions known as recursion schemes. Each of the four items in the paper's title corresponds to a type of recursion scheme. Bananas refer to catamorphisms, or folds, which I covered in a few earlier posts. I also discussed lenses before, which refer to anamorphisms, or unfolds. This time I’m covering envelopes, which refer to hylomorphisms.
What is Hylomorphism?
A hylomorphism is the composition of an anamorphism (unfold) and a catamorphism (fold). A hylomorphism (for lists) feeds the result of an unfold (a list) into a fold, which takes a list as input and outputs a single result.
The factorial function is the classic example. Using unfold, one can generate a list of integers starting from 1, up to n. The generated list is then input to a fold such that the integers of the
list are multiplied to give the factorial of n.
Factorial In Haskell with Hylomorphisms
In Haskell, we can write the factorial function with hylomorphism using foldr and unfoldr:
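The original code listing did not survive extraction here. A sketch of what fact.hs likely contained (the exact listing is an assumption; only the use of foldr and unfoldr is stated in the post):

```haskell
import Data.List (unfoldr)

-- Hylomorphism: unfold the seed 1 into the list [1..n],
-- then fold the list down with multiplication.
fact :: Integer -> Integer
fact n = foldr (*) 1 (unfoldr (\x -> if x > n then Nothing else Just (x, x + 1)) 1)

main :: IO ()
main = print (fact 5)
```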
Running runhaskell fact.hs in a terminal (in the directory your fact.hs file is in) should return 120, the factorial of 5.
The above nicely illustrates hylomorphism in action. Writing it with pattern matching is shorter and more standard though:
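That listing was also dropped in extraction; the standard pattern-matching version is presumably:

```haskell
fact :: Integer -> Integer
fact 0 = 1
fact n = n * fact (n - 1)
```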
In OCaml
In OCaml, as mentioned in this post, unfold is not in the standard library. I use the unfold function that I wrote earlier to write the factorial function in OCaml:
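The OCaml listing is likewise missing; a reconstruction consistent with the unfold signature shown in the utop session (`('a -> bool) -> ('a -> 'b * 'a) -> 'a -> 'b list`) would be:

```ocaml
(* unfold p g seed: generate elements from seed until the predicate p holds *)
let rec unfold p g seed =
  if p seed then []
  else
    let x, next = g seed in
    x :: unfold p g next

(* Hylomorphism: unfold [1; 2; ...; n], then fold it with multiplication *)
let fact n =
  List.fold_left ( * ) 1 (unfold (fun x -> x > n) (fun x -> (x, x + 1)) 1)
```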
In the terminal:
utop # #use "fact.ml";;
val unfold : ('a -> bool) -> ('a -> 'b * 'a) -> 'a -> 'b list = <fun>
val fact : int -> int = <fun>
utop # fact 5;;
- : int = 120
One can also use pattern matching to write this in OCaml, using point-free style and function:
utop # let rec fact = function 0 -> 1 | n -> n * fact (n - 1);;
val fact : int -> int = <fun>
utop # fact 5;;
- : int = 120
Note that the above only works for positive integers.
Dual-frequency PPP GNSS and MEMS-IMU Integration For Continuous Navigation in Obstructed Environments
This article investigates and analyzes position solution availability and continuity from integrating low-cost, dual-frequency GNSS receivers using PPP processing with the latest low-cost, MEMS IMUs.
The integration offers a complete, low-cost navigation solution that maintains positioning during GNSS signal outages of up to half a minute, a 30% improvement over the low-cost, single-frequency GNSS-PPP with MEMS-IMU integrations previously demonstrated. This can apply to UAVs, pedestrian navigation, autonomous vehicles, and more.
By Sudha Vana, Nacer Naciri and Sunil Bisnath, York University, Toronto
Integration of inertial measurement units (IMUs) with GNSS helps to achieve continuous navigation in sky-obstructed environments with increased solution availability. Advancements in micro-electro-mechanical sensors (MEMS) technology have led to the development of low-cost IMUs with good performance. MEMS-based IMUs are inexpensive but have limited stability due to higher noise levels, biases, and drifts.
In the past, most researchers using MEMS-based IMUs used differential GPS (DGPS) and real-time kinematic (RTK) techniques to attain higher accuracy, or high-performance GNSS receivers with the
precise point positioning (PPP) technique. PPP technology, as a wide-area augmentation approach, does not require any local ground infrastructure such as continuous operating reference stations.
Accuracy attained is at the decimeter to centimeter level in static mode, which approaches RTK performance.
This approach is widely accepted as reliable for precise positioning applications such as crustal deformation monitoring, precision agriculture, airborne mapping, marine mapping and construction
applications, and high-accuracy kinematic positioning.
Although the GNSS PPP-only position solution has the drawback of slow initial convergence time, its accuracy makes it attractive for use in applications such as autonomous vehicles, drones, augmented
reality, pedestrian navigation, UAVs and other emerging scientific, engineering and consumer applications. These applications demand continuous position accuracy in the order of the meter to
centimeter range to satisfy safety, integrity and security requirements.
For instance, in autonomous vehicle navigation systems, lane-level navigation requires very stringent accuracy in the order of centimeters. Here we develop precise, continuous and accurate position
solutions using low-cost, dual-frequency GNSS PPP and low-cost MEMS IMU for contemporary applications. We analyze this technique’s performance in realistic obstructed environments.
Using a dual-frequency GNSS receiver—as opposed to the single-frequency GNSS investigated in previous research—mitigates the ionospheric refraction error. Accuracy improves to the decimeter level by
using GNSS observable linear combinations and the PPP processing technique. A low-cost, dual-frequency PPP solution integrated with a MEMS IMU forms an ideal combination for a total low-cost,
high-accuracy solution that will potentially open a new window for modern day applications.
In this tightly coupled integration, raw measurements from the sensors are integrated directly, producing a more complex algorithm than a loosely coupled integration.
Inertial Mechanization
IMU mechanization is the technique of converting the raw IMU measurements into position, velocity and attitude information. Four steps convert the specific force and turn rates from accelerometers
and gyroscopes to position, velocity and attitude. The mechanization also needs initial position, velocity and attitude as references. The four steps in brief are:
• Attitude update;
• Converting specific force from body frame to respective frame in which the mechanization is being performed;
• Velocity update which includes converting specific force into acceleration using a gravity model; and
• Position update.
These four steps of the IMU mechanization are depicted in Figure 1.
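As a rough illustration only, the four steps can be sketched as one update function. This Python sketch is not the paper's algorithm: it ignores Earth rotation and Coriolis terms, uses a small-angle attitude update, and takes gravity as a fixed vector — all simplifying assumptions of ours:

```python
def mechanization_step(r, v, R, f_b, w_b, g, dt):
    """One heavily simplified IMU mechanization step.

    r, v: position and velocity (3-vectors); R: 3x3 body-to-nav rotation matrix;
    f_b: specific force in the body frame; w_b: body turn rates; g: gravity vector.
    """
    # 1. Attitude update (small-angle approximation: R <- R * (I + [w_b x] dt))
    wx, wy, wz = (w * dt for w in w_b)
    omega = [[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]]
    R = [[R[i][j] + sum(R[i][k] * omega[k][j] for k in range(3))
          for j in range(3)] for i in range(3)]
    # 2. Rotate the specific force from the body frame into the navigation frame
    f_n = [sum(R[i][k] * f_b[k] for k in range(3)) for i in range(3)]
    # 3. Velocity update: add gravity to get total acceleration
    v = [v[i] + (f_n[i] + g[i]) * dt for i in range(3)]
    # 4. Position update
    r = [r[i] + v[i] * dt for i in range(3)]
    return r, v, R
```

A real ECEF mechanization would also renormalize the rotation matrix and evaluate the gravity model at the current position.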
This research uses an Earth-centered, Earth-fixed (ECEF) coordinate frame. This is simpler in terms of not having to use conversion while performing the integration, since GNSS measurements are
received in terms of distance. The conversion to the necessary coordinate frames can be performed after the integration solution is attained. Inputs to these equations are fb(specific force) and ωibb
(turn rates). Below are the ECEF equations for the mechanization process. re is the position, ve is the velocity and Rbe is the attitude of an IMU.
Where ωibb = Rbeωibe and the local gravity vector is ge = Ge(r). Initial position (re(0)), velocity (ve(0)) and attitude (Rbe(0)) need to be supplied.
GNSS-PPP/INS Integrated Filter
A tightly coupled architecture here uses an extended Kalman filter to integrate YorkU GNSS-PPP with a MEMS IMU. YorkU GNSS-PPP is a triple-frequency and multi-constellation capable positioning
software. In a tightly-coupled architecture, raw measurements from sensors are taken as input to generate corrected states to one of the sensors.
Figure 2 shows the tightly coupled architecture. The inputs to the Extended Kalman Filter are
• the pseudorange and carrier-phase measurements from the low-cost GNSS receiver corrected with the precise orbit and clocks to correct the satellite and orbit errors, and
• predicted pseudorange and carrier-phase measurements formed using the MEMS-IMU position and satellite positions. A part of the state vector consists of the error in position, velocity and attitude
that is fed back to the inertial mechanization block to close the loop.
fb, wb are the specific force and turn rate measurements from the inertial measurement unit which constitute inputs to the mechanization block. With initial position and velocity from GNSS as input
to the mechanization block, PIMU,VIMU and AIMU are position, velocity and attitude from the IMU. Using satellite positions which are corrected using the precise orbit and clock corrections, predicted
pseudorange and carrier-phase measurements (ρIMU,φIMU) are formed. ρGNSS, φGNSS are GNSS pseudorange and carrier-phase measurements corrected for orbit and clock errors using precise orbit and clock
products. The dual-frequency (L1 and L2) GNSS measurements are used in the ionosphere-free combination to remove the ionosphere refraction error. The difference or residuals between the GNSS
measurements and the predicted measurements are processed in the EKF to produce error in position, velocity and attitude states of the sensor which is represented as δP, δv, δε, bg and ba. The IMU
error states are used in the mechanization block to continuously correct the errors, yielding the final integrated position, velocity and attitude.
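The formation of predicted measurements and residuals described above can be sketched in a few lines (our illustration, not the YorkU software; carrier phase, atmospheric terms and ambiguities are omitted):

```python
import math

def predicted_pseudorange(sat_pos, imu_pos, clock_bias):
    """Geometric range from the IMU-derived position to the satellite,
    plus the receiver clock bias (in meters)."""
    rho = math.dist(sat_pos, imu_pos)
    return rho + clock_bias

def residual(measured_rho, sat_pos, imu_pos, clock_bias):
    """Innovation fed to the EKF: measured minus predicted pseudorange."""
    return measured_rho - predicted_pseudorange(sat_pos, imu_pos, clock_bias)
```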
System Model
The system model in continuous form can be represented as:
δx is the state vector with a combination of navigation, inertial and GNSS states:
• Navigation states: three position error states, three velocity error states and three attitude error states.
• Inertial states: accelerometer and gyroscope biases, accelerometer and gyroscope scale factors.
• GNSS states: troposphere wet delay, GNSS receiver clock, as well as drift and the ambiguity terms.
The states are detailed in mathematical terms as:
δP–3D position error vector in the ECEF frame
δv–3D velocity error vector in the ECEF frame
δε–attitude errors for roll, pitch and yaw
δtc–GNSS receiver clock error
–GNSS receiver clock drift error
dtropo–troposphere wet delay
ba–accelerometer bias
bg–gyroscope bias
Ni–Ambiguity for satellite i
In continuous time, the transition matrix is given by:
The process noise terms are given by:
Measurement Model
The measurement model is based on the typical relationship as described in equation (1.6). z is the measurement vector containing the difference between corrected GNSS measurements and predicted IMU
measurements. The representation of z is given in equation (1.7). Measurements used in this work are ionosphere-free combinations of the raw uncombined measurements.
H is the design matrix consisting of the partial derivatives of the measurements with respect to the GNSS-related state terms. The state terms related to the IMU have zero entries.
The first three columns are the partial derivatives with respect to the 3D position coordinates, followed by the 3×3 velocity and attitude entries.
Field Tests and Results
To evaluate the tightly coupled GNSS PPP-IMU algorithm, kinematic data were collected around the York University campus, Toronto, Canada. Partial outage of GNSS satellites was simulated to test the
performance of the algorithm during the signal outage.
A multi-constellation, multi-frequency low-cost receiver was used to collect the data. It also offers RTK cm-level solutions with fast convergence. Multi-constellation raw pseudoranges and
carrier-phase measurements were logged and processed through York-PPP software. A low-cost MEMS unit with a package of single-frequency GNSS, MEMS-IMU, magnetometers and barometer complemented the
GNSS receiver. Typical classification of the grades of IMUs based on accelerometer bias error and gyro angle random walk are given in Table 1. White-noise effect on the integrated orientation is
indicated by the angle random walk (ARW) parameter. Navigation-grade IMUs are affected most by this parameter, whereas for tactical, industrial and automotive grades, uncompensated accelerometer bias has the largest effect.
The performance specifications of the inertial sensor are shown in Table 2. ARW is 0.15°/√hr and accelerometer bias is <5 mg. Therefore, the IMU can be considered as close to a MEMS tactical-grade
IMU. An IMU that delivers navigation- or tactical-grade performance while made using MEMS technology is considered a good-quality IMU for low-cost navigation requirements.
Figure 3 shows the setup of the equipment during the kinematic tests conducted. The geodetic antenna was placed on the rooftop of the car and the GNSS receiver and IMU were placed side by side in the
trunk of the vehicle. The GNSS data were logged at a 5 Hz rate and the IMU data were logged at a 100 Hz rate. Performance accuracy of dataset collected at a parking lot will be discussed with and
without any outages.
Parking Lot Test With No Outage
The dataset evaluating the tightly coupled algorithm was collected in an open-sky parking lot on June 17, 2019. It spans 10 minutes and contains numerous vehicle turns. Measurements from the GPS,
GLONASS, Galileo and BeiDou constellations were processed in the integrated algorithm. Precise orbit and clock products were downloaded from the GFZ portal. Figure 4 shows tracks of the data. The red
track is the trace of the NRCan CSRS-PPP solution and the blue track represents the integrated GNSS-PPP and MEMS-IMU solution. Here it can be seen that the integrated GNSS-PPP and MEMS-IMU solution
takes some time to converge to the red NRCan CSRS-PPP solution, due to the GNSS PPP convergence time.
Figure 5 shows the difference between the NRCan GNSS-PPP solution and TC GNSS PPP+IMU solution. The rms difference is 28 cm in the horizontal component and 16 cm in the vertical component when
compared to the NRCan GNSS-PPP solution. The rms statistics are computed by excluding the initial PPP convergence period. This shows that the low-cost, dual-frequency GNSS PPP integrated with
low-cost MEMS IMU performs at the decimeter level, which is apt for modern applications that demand low-cost navigation sensors performing at this level of accuracy.
Figure 6 depicts code and phase residuals for the GNSS-PPP+MEMS IMU integrated solution of the parking lot data. Both code and phase residuals follow white-noise distributions. The code residuals
mostly lie within 5 m and the phase residuals are at the centimeter level. These orders of magnitude for the residuals are comparable to the ones with a PPP-only solution without any IMU
integration. In the phase residuals, the portion where the vehicle is moving has slightly higher magnitude than the stationary part. This result can be improved by implementing zero velocity update
(ZUPT) in the TC algorithm, which will be taken up in future work. The rms of the code residuals is 1.6 m and the rms for phase residuals is 4 cm.
Parking Lot Test With Outage
The real importance and value added by the IMU to the navigation system is realized only when there are not enough GNSS signals or during a GNSS signal outage. In this section, simulated outages for
30 seconds are performed and tested for the accuracy performance of the algorithm. During the outage, performance of the algorithm with just three-satellite and two-satellite availability is
assessed. In Figure 7, a partially zoomed-in version of the track shown in Figure 4 is depicted to have a clear view of the simulated outage area. The portion of track highlighted in black is the
area where simulated outages are performed.
To indicate the performance accuracy of the algorithm, horizontal error compared against the NRCan CSRS-PPP is plotted in Figure 8. Here, the blue horizontal error is when there is no outage
simulated. The green coloured horizontal error is when there are only three satellites available during the simulated outage. During the outage, the horizontal rms is 0.83 m and the maximum bias is
1.11 m. The black horizontal error in Figure 8 corresponds to the performance when there are only two satellites available during the simulated outage. During this outage, the horizontal rms is 0.97 m
and the maximum bias is 1.36 m.
Figure 9 is a zoomed-in version of the plot of horizontal error discussed in Figure 8. After this outage, it takes a few seconds for the integrated solution to go back to the track with no outage
simulated. This performance indicates that the lower the number of satellites during an outage, the lower the accuracy is. The accuracy drops steeply with the loss of each satellite.
Table 3 compares the statistics during the GNSS simulated outages in terms of horizontal and vertical rms. It is evident that when there are 2 or 3 satellites available to coast on, the solution
still performs at the decimeter level of accuracy compared to the NRCan CSRS-PPP solution. Therefore, the low-cost, dual-frequency GNSS PPP with MEMS IMU performs at least 30% better than a single-frequency GNSS PPP with MEMS IMU: as demonstrated in a 2017 paper by Gao Z, Zhang H, Ge M, et al, a single-frequency GNSS PPP with MEMS IMU performed at only meter-level accuracy, with an outage period of only 3 seconds.
Conclusions and Future Research
The navigation system of low-cost, dual-frequency GNSS-PPP solution tightly integrated with a MEMS IMU performed at the decimeter level of accuracy when there were no outages. A 30-second outage test
showed that the algorithm performs with a rms error of 0.83 m in horizontal and 2.6 m in vertical directions, while there are three satellites available for the tightly-coupled solution to use.
When there are only two satellites during an outage, by contrast, the horizontal rms is 0.97 m and the vertical rms is 3.3 m. Table 4 gives an overview of the system’s performance accuracy with and without
GNSS signal outage.
The tightly-coupled low-cost, dual-frequency GNSS-PPP and MEMS IMU achieves at least 30% better 2D rms than the low-cost, single-frequency GNSS-PPP and MEMS-IMU integration examined in the 2017 research, using 3-second simulated GNSS measurement outages. The dual-frequency GNSS-PPP with MEMS IMU forms a complete low-cost solution with decimeter-level accuracy with respect to a GNSS-PPP
reference solution. Such a navigation system can be used for many applications such as autonomous navigation, pedestrian dead reckoning, drones and other modern-day applications to improve accuracy
and continuity of the navigation solution.
As part of future work, the algorithm will be tested in different challenging environments and different constraints. GNSS signal outage will be tested in environments such as tunnels, highway
underpasses and downtown areas. To better estimate yaw angle while the vehicle is stationary, ZUPT will be implemented. Also, different scenarios such as performance of satellites from different
constellations during an outage will be tested and assessed. Finally, to gain better vertical component results, vertical direction will be constrained and assessed for improvements.
The authors would like to thank the Natural Sciences and Engineering Research Council (NSERC) and York University for providing funding for this research, and the German Research Centre for Geosciences (GFZ),
International GNSS Services (IGS) and National Centre for Space Studies (CNES) for data.
This article is based on a paper that was originally presented at the Institute of Navigation GNSS+ 2019 conference in Miami, Florida.
SwiftNav Piksi-Multi and Inertial Sense μINS sensors with a geodetic grade antenna were used to collect data.
The RANKCUMULATE function ranks values and then cumulates values in order of the ranking. It can perform ranking separately across different groups.
For example, you can use the RANKCUMULATE function to cumulatively sum employees sales revenue in order of their length of service. There is an example of this in the Examples section.
RANKCUMULATE(Cumulation values, Ranking values [, Direction] [, Include value] [, Ranking groups])
| Argument | Data type | Description |
|---|---|---|
| Cumulation values | Number; can be a line item, property, or expression | The number to cumulate, based on ranking criteria. |
| Ranking values (required) | Number, date, or time period; can be a line item, property, or expression | The ranking criteria to perform cumulation based on. |
| Direction | Keyword | Determines the direction to rank in. The keywords are DESCENDING and ASCENDING. There's more information in the Direction argument keywords section below. |
| Include value | Boolean | Determines if a value is ranked. The default value, TRUE, includes a value in the ranking. A value of FALSE omits a value from the ranking and returns a result of 0. |
| Ranking groups | Number, Boolean, date, time period, or list (text is supported in Classic only) | If provided, values are ranked independently for each value of the Ranking groups argument. |
The RANKCUMULATE function returns a number.
Direction argument keywords
| Keyword | Description |
|---|---|
| DESCENDING | When used, the RANKCUMULATE function assigns the highest source value rank 1, the second highest source value rank 2, and so on. |
| ASCENDING | The default keyword if you omit the Direction argument. When used, the RANKCUMULATE function assigns the lowest source value rank 1, the second lowest source value rank 2, and so on. |
Calculation engine functionality differences
In Polaris, RANKCUMULATE cannot be used when the target is dimensioned by a line item subset, or the function makes a reference to a line item subset.
In Polaris you do not have a cell limit. In Classic, it is 50 million cells.
Polaris does not support infinities; Classic does. In Classic, if a cumulation source contains an infinity, then the result from that point until the end of the cumulation is that infinity.
However, if an opposite infinity follows it, the result becomes NaN (Not a Number). Polaris returns NaN instead of infinity.
In Polaris, blank is unordered, so it is unrankable. For RANKCUMULATE, if the ranking value is blank, the function returns zero.
In Polaris, the ranking values can be the BLANK literal; that is, RANKCUMULATE(1, BLANK) is valid (although the function always returns zero in this case).
RANKCUMULATE(Revenue, Transaction Date, DESCENDING, Eligible transaction?, Region)
Ranking behavior for different data types
The Ranking values argument for the RANKCUMULATE function can be a number, date, or time period type line item, property, or expression. However, the function always returns a number.
When the RANKCUMULATE function ranks values with the default ASCENDING keyword for the Direction argument, the function ranks the lowest value as 1, the second lowest as 2, and so on. If you use
RANKCUMULATE with:
• Numbers, the function ranks the largest number the highest.
• Dates, the function ranks the date furthest in the future the highest.
• Time periods, the function ranks the time period furthest in the future the highest.
Equal ranking value behavior
If two values of the Cumulation values argument share the same ranking for the Ranking values argument, ranking follows the order of any associated list items within General Lists.
Use RANKCUMULATE with the Users list
You can reference the Users list with the RANKCUMULATE function. However, you cannot reference specific users within the Users list, as this is production data, which can change and make your formula invalid.
In the Classic calculation engine, Anaplan imposes a cell limit of 50 million cells to prevent the ranking of large data sets from slowing down the server. If more than 50 million cells are used with
the RANKCUMULATE function, the model is rolled back and a notification displays.
The 50 million cell limit does not account for summarized values or the Time and Versions lists. This means you can use the RANKCUMULATE function with a line item with a Cell Count of greater than 50
million cells if there are fewer than 50 million nonsummarized cells.
As the number of cells you use with the RANKCUMULATE function increases, so does the duration of the calculation.
Positive Infinity, Negative Infinity, and NaN
If you use positive infinity, negative infinity, or NaN (Not a Number) for the Ranking values argument, the RANKCUMULATE function returns 0.
Cumulation Source Constraints
If your cumulation source is a large data set, the addition of numbers with a large number of decimal places can result in floating point error for the least significant digits.
In this example, a module that contains the Salespersons list is on columns, and a number of line items is on rows.
The example uses RANKCUMULATE to cumulatively sum sales in the order of each salesperson's length of service. Further iterative formulas use the Include value and Ranking groups arguments to:
• Determine which salesperson's sales to include in the cumulative ranking.
• Further break down the cumulative ranking by region.
Two line items use the RANK function to help you identify the order that RANKCUMULATE cumulates values in.
| | Ben | Graham | Rashid | Laura | Rita | David | Masaki | Kieran | Alisa | Karen | Martina | Oswald |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Region | North | North | North | South | South | South | East | East | East | West | West | West |
| Sales | 258,796 | 235,884 | 190,750 | 228,315 | 171,494 | 234,276 | 230,213 | 222,777 | 201,855 | 271,162 | 267,401 | 209,368 |
| Years of service | 6 | 9 | 8 | 12 | 11 | 9 | 13 | 14 | 5 | 15 | 11 | 14 |
| Cumulated sales by longest tenure | 2,520,436 | 1,836,614 | 2,261,640 | 1,161,835 | 1,333,329 | 2,070,890 | 933,520 | 493,939 | 2,722,291 | 271,162 | 1,600,730 | 703,307 |
| Cumulated sales by tenure for selected employees | 685,430 | 235,884 | 426,634 | 228,315 | 399,809 | 0 | 452,990 | 222,777 | 654,845 | 271,162 | 0 | 480,530 |
| Cumulative sales by tenure for each region | 685,430 | 235,884 | 426,634 | 228,315 | 399,809 | 634,085 | 452,990 | 222,777 | 654,845 | 271,162 | 747,931 | 480,530 |

The line items use these formulas (Include in cumulation? is a Boolean input line item whose values are not shown in the source):

Rank by years of service
RANK(Years of Service, DESCENDING)

Cumulated sales by longest tenure
RANKCUMULATE(Sales, Years of Service, DESCENDING)

Cumulated sales by tenure for selected employees
RANKCUMULATE(Sales, Years of Service, DESCENDING, Include in cumulation?)

Cumulative sales by tenure for each region
RANKCUMULATE(Sales, Years of Service, DESCENDING, TRUE, Region)
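The documented behavior can be reproduced outside Anaplan to check the numbers. This Python sketch (our illustration, not Anaplan's implementation) cumulates values in rank order, breaks ties by list position, returns 0 for excluded items, and restarts the cumulation for each ranking group:

```python
def rankcumulate(values, ranking, direction="DESCENDING", include=None, groups=None):
    """Cumulate `values` in rank order of `ranking`, optionally per group.

    Ties are broken by original list order, matching the documented behavior.
    Excluded items (include[i] == False) get a result of 0.
    """
    n = len(values)
    include = include or [True] * n
    groups = groups or [None] * n
    result = [0] * n
    reverse = direction == "DESCENDING"
    for g in set(groups):
        # indices in this group, ordered by ranking value, then by list position
        idx = [i for i in range(n) if groups[i] == g and include[i]]
        idx.sort(key=lambda i: (-ranking[i] if reverse else ranking[i], i))
        total = 0
        for i in idx:
            total += values[i]
            result[i] = total
    return result
```

Running it on the Sales and Years of service data above reproduces the RANKCUMULATE result rows exactly.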
Calculating the Magnitude of the Gravitational Force between Objects in Deep Space
Two objects, A and B, are in deep space. The distance between the centers of mass of the two objects is 20 m. Object A has a mass of 30,000 kg and object B has a mass of 55,000 kg. What is the
magnitude of the gravitational force between them? Use a value of 6.67 × 10⁻¹¹ m³/kg.s² for the universal gravitational constant. Give your answer to three significant figures.
Video Transcript
Two objects, A and B, are in deep space. The distance between the centers of mass of the two objects is 20 meters. Object A has a mass of 30,000 kilograms and object B has a mass of 55,000 kilograms.
What is the magnitude of the gravitational force between them? Use a value of 6.67 times 10 to the negative 11th cubic meters per kilogram second squared for the universal gravitational constant.
Give your answer to three significant figures.
Okay, so, in this scenario, we have these two objects, called A and B. So, let’s say here we have our objects A and B. And we’re told that the distance between the centers of mass of these two
objects is 20 meters. If the center of mass of object A is here and the center of mass of object B is here, that tells us that this distance here is 20 meters. Along with this, we’re told the mass of
object A and the mass of object B as well as the fact that these two objects are in deep space.
Being in deep space means that A and B are the only objects nearby. When we compute the magnitude of the gravitational force between them, we can ignore or neglect any other masses or objects.
Continuing on then, we can represent the mass of object A as 𝑚 sub A and that of object B as 𝑚 sub B. Now that we know our object masses as well as the distance that separates the centers of mass of
these two objects, let’s recall Newton’s law of gravitation.
This law says that the gravitational force between two objects, we’ll call it 𝐹, is equal to the universal gravitational constant, big 𝐺, times the mass of each one of these objects. We’ll call them
𝑚 one and 𝑚 two. Divided by the distance between the objects’ centers of mass, we’ll call that 𝑟, squared. Looking at this equation, we can see that for our scenario with objects A and B, we know
their masses. And we also know the distance separating their centers of mass. And along with that, we’re told in the problem statement a particular value to use for the universal gravitational constant.
At this point then, we can begin to calculate the magnitude of the gravitational force between objects A and B. It’s equal to the value we’re given to use as capital 𝐺 times the mass of object A,
30,000 kilograms, times the mass of object B, 55,000 kilograms. All divided by 20 meters quantity squared. Note that all the units in this expression are already in SI base unit form. We have meters
and kilograms and seconds. To three significant figures, this force is 2.75 times 10 to the negative fourth newtons. That’s the magnitude of the gravitational force between objects A and B.
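The transcript's arithmetic is easy to verify with a few lines of Python (the variable names are ours):

```python
G = 6.67e-11   # universal gravitational constant, m^3/(kg*s^2)
m_a = 30_000   # mass of object A, kg
m_b = 55_000   # mass of object B, kg
r = 20         # distance between the centers of mass, m

# Newton's law of gravitation: F = G * m_a * m_b / r^2
force = G * m_a * m_b / r**2
print(f"{force:.3g} N")  # prints 0.000275 N, i.e. 2.75 x 10^-4 N
```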
This chapter describes functions for generating random variates and computing their probability distributions. Samples from the distributions described in this chapter can be obtained using any of
the random number generators in the library as an underlying source of randomness. In the simplest cases a non-uniform distribution can be obtained analytically from the uniform distribution of a
random number generator by applying an appropriate transformation. This method uses one call to the random number generator.
More complicated distributions are created by the acceptance-rejection method, which compares the desired distribution against a distribution which is similar and known analytically. This usually
requires several samples from the generator.
The functions described in this section are declared in `gsl_randist.h'.
Random: double gsl_ran_gaussian (const gsl_rng * r, double sigma)
This function returns a Gaussian random variate, with mean zero and standard deviation sigma. The probability distribution for Gaussian random variates is,
p(x) dx = {1 \over \sqrt{2 \pi \sigma^2}} \exp (-x^2 / 2\sigma^2) dx
for x in the range -\infty to +\infty. Use the transformation z = \mu + x on the numbers returned by gsl_ran_gaussian to obtain a Gaussian distribution with mean \mu. This function uses the
Box-Muller algorithm, which requires two calls to the random number generator r.
Function: double gsl_ran_gaussian_pdf (double x, double sigma)
Function: double gsl_ran_gaussian_ratio_method (const gsl_rng * r, const double sigma)
Random: double gsl_ran_ugaussian (const gsl_rng * r)
Function: double gsl_ran_ugaussian_pdf (double x)
Random: double gsl_ran_ugaussian_ratio_method (const gsl_rng * r)
Random: double gsl_ran_gaussian_tail (const gsl_rng * r, double a, double sigma)
This function provides random variates from the upper tail of a Gaussian distribution with standard deviation sigma. The values returned are larger than the lower limit a, which must be positive.
The method is based on Marsaglia's famous rectangle-wedge-tail algorithm (Ann Math Stat 32, 894-899 (1961)), with this aspect explained in Knuth, v2, 3rd ed, p139,586 (exercise 11).
The probability distribution for Gaussian tail random variates is,
p(x) dx = {1 \over N(a;\sigma)} \exp (- x^2/(2 \sigma^2)) dx
for x > a where N(a;\sigma) is the normalization constant,
N(a;\sigma) = (1/2) erfc(a / sqrt(2 sigma^2)).
Function: double gsl_ran_gaussian_tail_pdf (double x, double a, double sigma)
Random: double gsl_ran_ugaussian_tail (const gsl_rng * r, double a)
Function: double gsl_ran_ugaussian_tail_pdf (double x, double a)
Random: void gsl_ran_bivariate_gaussian (const gsl_rng * r, double sigma_x, double sigma_y, double rho, double * x, double * y)
This function generates a pair of correlated gaussian variates, with mean zero, correlation coefficient rho and standard deviations sigma_x and sigma_y in the x and y directions. The probability
distribution for bivariate gaussian random variates is,
p(x,y) dx dy = {1 \over 2 \pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp (-(x^2/\sigma_x^2 + y^2/\sigma_y^2 - 2 \rho x y/(\sigma_x\sigma_y))/2(1-\rho^2)) dx dy
for x,y in the range -\infty to +\infty. The correlation coefficient rho should lie between -1 and 1.
Function: double gsl_ran_bivariate_gaussian_pdf (double x, double y, double sigma_x, double sigma_y, double rho)
Random: double gsl_ran_exponential (const gsl_rng * r, double mu)
Function: double gsl_ran_exponential_pdf (double x, double mu)
Random: double gsl_ran_laplace (const gsl_rng * r, double a)
Function: double gsl_ran_laplace_pdf (double x, double a)
Random: double gsl_ran_exppow (const gsl_rng * r, double a, double b)
This function returns a random variate from the exponential power distribution with scale parameter a and exponent b. The distribution is,
p(x) dx = {1 \over 2 a \Gamma(1+1/b)} \exp(-|x/a|^b) dx
for x in the range -\infty to +\infty. For b = 1 this reduces to the Laplace distribution. For b = 2 it has the same form as a Gaussian distribution, but with a = \sqrt{2} \sigma.
Function: double gsl_ran_exppow_pdf (double x, double a, double b)
Random: double gsl_ran_cauchy (const gsl_rng * r, double a)
This function returns a random variate from the Cauchy distribution with scale parameter a. The probability distribution for Cauchy random variates is,
p(x) dx = {1 \over a\pi (1 + (x/a)^2) } dx
for x in the range -\infty to +\infty. The Cauchy distribution is also known as the Lorentz distribution.
Function: double gsl_ran_cauchy_pdf (double x, double a)
Random: double gsl_ran_rayleigh (const gsl_rng * r, double sigma)
Function: double gsl_ran_rayleigh_pdf (double x, double sigma)
Random: double gsl_ran_rayleigh_tail (const gsl_rng * r, double a, double sigma)
Function: double gsl_ran_rayleigh_tail_pdf (double x, double a, double sigma)
Random: double gsl_ran_landau (const gsl_rng * r)
This function returns a random variate from the Landau distribution. The probability distribution for Landau random variates is defined analytically by the complex integral,
p(x) = (1/(2 \pi i)) \int_{c-i\infty}^{c+i\infty} ds exp(s log(s) + x s)
For numerical purposes it is more convenient to use the following equivalent form of the integral,
p(x) = (1/\pi) \int_0^\infty dt \exp(-t \log(t) - x t) \sin(\pi t).
Function: double gsl_ran_landau_pdf (double x)
Random: double gsl_ran_levy (const gsl_rng * r, double c, double alpha)
This function returns a random variate from the Levy symmetric stable distribution with scale c and exponent alpha. The symmetric stable probability distribution is defined by a Fourier transform,
p(x) = {1 \over 2 \pi} \int_{-\infty}^{+\infty} dt \exp(-it x - |c t|^alpha)
There is no explicit solution for the form of p(x) and the library does not define a corresponding pdf function. For \alpha = 1 the distribution reduces to the Cauchy distribution. For \alpha = 2
it is a Gaussian distribution with \sigma = \sqrt{2} c. For \alpha < 1 the tails of the distribution become extremely wide.
The algorithm only works for 0 < alpha <= 2.
Random: double gsl_ran_levy_skew (const gsl_rng * r, double c, double alpha, double beta)
This function returns a random variate from the Levy skew stable distribution with scale c, exponent alpha and skewness parameter beta. The skewness parameter must lie in the range [-1,1]. The
Levy skew stable probability distribution is defined by a fourier transform,
p(x) = {1 \over 2 \pi} \int_{-\infty}^{+\infty} dt \exp(-it x - |c t|^alpha (1-i beta sign(t) tan(pi alpha/2)))
When \alpha = 1 the term \tan(\pi \alpha/2) is replaced by -(2/\pi)\log|t|. There is no explicit solution for the form of p(x) and the library does not define a corresponding pdf function. For \
alpha = 2 the distribution reduces to a Gaussian distribution with \sigma = \sqrt{2} c and the skewness parameter has no effect. For \alpha < 1 the tails of the distribution become extremely
wide. The symmetric distribution corresponds to \beta = 0.
The algorithm only works for 0 < alpha <= 2.
The Levy alpha-stable distributions have the property that if N alpha-stable variates are drawn from the distribution p(c, \alpha, \beta) then the sum Y = X_1 + X_2 + \dots + X_N will also be
distributed as an alpha-stable variate, p(N^(1/\alpha) c, \alpha, \beta).
Random: double gsl_ran_gamma (const gsl_rng * r, double a, double b)
Function: double gsl_ran_gamma_pdf (double x, double a, double b)
Random: double gsl_ran_flat (const gsl_rng * r, double a, double b)
Function: double gsl_ran_flat_pdf (double x, double a, double b)
Random: double gsl_ran_lognormal (const gsl_rng * r, double zeta, double sigma)
Function: double gsl_ran_lognormal_pdf (double x, double zeta, double sigma)
The chi-squared distribution arises in statistics. If Y_i are n independent Gaussian random variates with unit variance then the sum-of-squares,
X = \sum_i Y_i^2
has a chi-squared distribution with n degrees of freedom.
Random: double gsl_ran_chisq (const gsl_rng * r, double nu)
Function: double gsl_ran_chisq_pdf (double x, double nu)
The F-distribution arises in statistics. If Y_1 and Y_2 are chi-squared deviates with \nu_1 and \nu_2 degrees of freedom then the ratio,
X = { (Y_1 / \nu_1) \over (Y_2 / \nu_2) }
has an F-distribution F(x;\nu_1,\nu_2).
Random: double gsl_ran_fdist (const gsl_rng * r, double nu1, double nu2)
This function returns a random variate from the F-distribution with degrees of freedom nu1 and nu2. The distribution function is,
p(x) dx =
{ \Gamma((\nu_1 + \nu_2)/2)
\over \Gamma(\nu_1/2) \Gamma(\nu_2/2) }
\nu_1^{\nu_1/2} \nu_2^{\nu_2/2}
x^{\nu_1/2 - 1} (\nu_2 + \nu_1 x)^{-\nu_1/2 -\nu_2/2}
for x >= 0.
Function: double gsl_ran_fdist_pdf (double x, double nu1, double nu2)
The t-distribution arises in statistics. If Y_1 has a normal distribution and Y_2 has a chi-squared distribution with \nu degrees of freedom then the ratio,
X = { Y_1 \over \sqrt{Y_2 / \nu} }
has a t-distribution t(x;\nu) with \nu degrees of freedom.
Random: double gsl_ran_tdist (const gsl_rng * r, double nu)
Function: double gsl_ran_tdist_pdf (double x, double nu)
Random: double gsl_ran_beta (const gsl_rng * r, double a, double b)
Function: double gsl_ran_beta_pdf (double x, double a, double b)
Random: double gsl_ran_logistic (const gsl_rng * r, double a)
Function: double gsl_ran_logistic_pdf (double x, double a)
Random: double gsl_ran_pareto (const gsl_rng * r, double a, double b)
Function: double gsl_ran_pareto_pdf (double x, double a, double b)
The spherical distributions generate random vectors, located on a spherical surface. They can be used as random directions, for example in the steps of a random walk.
Random: void gsl_ran_dir_2d (const gsl_rng * r, double *x, double *y)
Random: void gsl_ran_dir_2d_trig_method (const gsl_rng * r, double *x, double *y)
This function returns a random direction vector v = (x,y) in two dimensions. The vector is normalized such that |v|^2 = x^2 + y^2 = 1. The obvious way to do this is to take a uniform random
number between 0 and 2\pi and let x and y be the sine and cosine respectively. Two trig functions would have been expensive in the old days, but with modern hardware implementations, this is
sometimes the fastest way to go. This is the case for my home Pentium (but not the case for my Sun Sparcstation 20 at work). One can avoid the trig evaluations by choosing x and y in the
interior of a unit circle (choose them at random from the interior of the enclosing square, and then reject those that are outside the unit circle), and then dividing by \sqrt{x^2 + y^2}. A much
cleverer approach, attributed to von Neumann (see Knuth, v2, 3rd ed, p140, exercise 23), requires neither trig nor a square root. In this approach, u and v are chosen at random from the interior
of a unit circle, and then x=(u^2-v^2)/(u^2+v^2) and y=2uv/(u^2+v^2).
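The von Neumann trick can be sketched in plain C (hypothetical helper name, for illustration; GSL's implementation may differ). Note the factor of 2 in the y component, which the algebra of normalizing (u+iv)^2 requires:

```c
#include <math.h>

/* von Neumann's trick: given a point (u, v) drawn uniformly from the
   interior of the unit circle, (x, y) below is a unit vector with a
   uniformly distributed direction -- no trig, no square root.
   Illustrative sketch only, not the GSL implementation. */
void dir_2d_von_neumann(double u, double v, double *x, double *y)
{
  double s = u * u + v * v;   /* squared radius, assumed 0 < s < 1 */
  *x = (u * u - v * v) / s;
  *y = 2.0 * u * v / s;       /* (u + iv)^2 normalized by |u + iv|^2 */
}
```

Rejection sampling supplies the (u, v) pair: draw both uniformly from [-1,1] and retry until u^2 + v^2 falls inside the unit circle.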
Random: void gsl_ran_dir_3d (const gsl_rng * r, double *x, double *y, double * z)
This function returns a random direction vector v = (x,y,z) in three dimensions. The vector is normalized such that |v|^2 = x^2 + y^2 + z^2 = 1. The method employed is due to Robert E. Knop (CACM
13, 326 (1970)), and explained in Knuth, v2, 3rd ed, p136. It uses the surprising fact that the distribution projected along any axis is actually uniform (this is only true for 3d).
Random: void gsl_ran_dir_nd (const gsl_rng * r, int n, double *x)
This function returns a random direction vector v = (x_1,x_2,...,x_n) in n dimensions. The vector is normalized such that |v|^2 = x_1^2 + x_2^2 + ... + x_n^2 = 1. The method uses the fact that a
multivariate gaussian distribution is spherically symmetric. Each component is generated to have a gaussian distribution, and then the components are normalized. The method is described by Knuth,
v2, 3rd ed, p135-136, and attributed to G. W. Brown, Modern Mathematics for the Engineer (1956).
Random: double gsl_ran_weibull (const gsl_rng * r, double a, double b)
Function: double gsl_ran_weibull_pdf (double x, double a, double b)
Random: double gsl_ran_gumbel1 (const gsl_rng * r, double a, double b)
Function: double gsl_ran_gumbel1_pdf (double x, double a, double b)
Random: double gsl_ran_gumbel2 (const gsl_rng * r, double a, double b)
Function: double gsl_ran_gumbel2_pdf (double x, double a, double b)
Given K discrete events with different probabilities P[k], produce a random value k consistent with its probability.
The obvious way to do this is to preprocess the probability list by generating a cumulative probability array with K+1 elements:
C[0] = 0
C[k+1] = C[k]+P[k].
Note that this construction produces C[K]=1. Now choose a uniform deviate u between 0 and 1, and find the value of k such that C[k] <= u < C[k+1]. Although this in principle requires of order \log K
steps per random number generation, they are fast steps, and if you use something like \lfloor uK \rfloor as a starting point, you can often do pretty well.
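The cumulative-table method just described can be sketched in plain C (the helper names are hypothetical; gsl_ran_discrete below packages a faster technique):

```c
#include <stddef.h>

/* Build the cumulative array C[0..K] from probabilities P[0..K-1],
   so that C[0] = 0 and C[K] = 1, as described in the text. */
void build_cumulative(size_t K, const double *P, double *C)
{
  size_t k;
  C[0] = 0.0;
  for (k = 0; k < K; k++)
    C[k + 1] = C[k] + P[k];
}

/* Given a uniform deviate u in [0,1), binary-search for the k with
   C[k] <= u < C[k+1].  This is the O(log K) lookup the text describes. */
size_t discrete_lookup(size_t K, const double *C, double u)
{
  size_t lo = 0, hi = K;            /* invariant: C[lo] <= u < C[hi] */
  while (hi - lo > 1)
    {
      size_t mid = lo + (hi - lo) / 2;
      if (u < C[mid])
        hi = mid;
      else
        lo = mid;
    }
  return lo;
}
```

Seeding the search at \lfloor uK \rfloor instead of the midpoint, as the text suggests, helps when the probabilities are roughly uniform.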
But faster methods have been devised. Again, the idea is to preprocess the probability list, and save the result in some form of lookup table; then the individual calls for a random discrete event
can go rapidly. An approach invented by G. Marsaglia (Generating discrete random numbers in a computer, Comm ACM 6, 37-38 (1963)) is very clever, and readers interested in examples of good algorithm
design are directed to this short and well-written paper. Unfortunately, for large K, Marsaglia's lookup table can be quite large.
A much better approach is due to Alastair J. Walker (An efficient method for generating discrete random variables with general distributions, ACM Trans on Mathematical Software 3, 253-256 (1977); see
also Knuth, v2, 3rd ed, p120-121,139). This requires two lookup tables, one floating point and one integer, but both only of size K. After preprocessing, the random numbers are generated in O(1)
time, even for large K. The preprocessing suggested by Walker requires O(K^2) effort, but that is not actually necessary, and the implementation provided here only takes O(K) effort. In general, more
preprocessing leads to faster generation of the individual random numbers, but a diminishing return is reached pretty early. Knuth points out that the optimal preprocessing is combinatorially
difficult for large K.
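Walker's alias method can be sketched as follows. This is a common O(K) formulation of the preprocessing (often attributed to Vose); the helper names are hypothetical and GSL's actual gsl_ran_discrete_preproc is more elaborate:

```c
#include <stdlib.h>
#include <stddef.h>

/* One-time O(K) preprocessing for Walker's alias method.  P[0..K-1]
   is assumed normalized.  Afterwards each bin k keeps an acceptance
   threshold prob[k] and a fallback outcome alias[k]. */
void alias_preproc(size_t K, const double *P, double *prob, size_t *alias)
{
  size_t *small = malloc(K * sizeof *small);
  size_t *large = malloc(K * sizeof *large);
  size_t ns = 0, nl = 0, k;

  for (k = 0; k < K; k++)
    {
      alias[k] = k;                 /* harmless default */
      prob[k] = P[k] * K;           /* scale so the average bin is 1 */
      if (prob[k] < 1.0) small[ns++] = k; else large[nl++] = k;
    }
  while (ns > 0 && nl > 0)
    {
      size_t s = small[--ns], l = large[--nl];
      alias[s] = l;                 /* overflow of bin s goes to l */
      prob[l] -= 1.0 - prob[s];     /* l donates its excess to s */
      if (prob[l] < 1.0) small[ns++] = l; else large[nl++] = l;
    }
  while (nl > 0) prob[large[--nl]] = 1.0;   /* numerical leftovers */
  while (ns > 0) prob[small[--ns]] = 1.0;
  free(small); free(large);
}

/* O(1) sampling: pick a bin uniformly with u1, then accept the bin
   or take its alias according to u2. */
size_t alias_draw(size_t K, const double *prob, const size_t *alias,
                  double u1, double u2)
{
  size_t k = (size_t)(u1 * K);      /* uniform bin index, 0..K-1 */
  return (u2 < prob[k]) ? k : alias[k];
}
```

The key invariant is that outcome j's total mass, (1/K)[prob[j] + sum over k with alias[k] = j of (1 - prob[k])], equals P[j], which is why a uniform bin choice followed by one comparison reproduces the distribution in constant time.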
This method can be used to speed up some of the discrete random number generators below, such as the binomial distribution. To use it for something like the Poisson distribution, a modification would
have to be made, since it only takes a finite set of K outcomes.
Function: gsl_ran_discrete_t * gsl_ran_discrete_preproc (size_t K, const double * P)
This function returns a pointer to a structure that contains the lookup table for the discrete random number generator. The array P[] contains the probabilities of the discrete events; these
array elements must all be positive, but they needn't add up to one (so you can think of them more generally as "weights") -- the preprocessor will normalize appropriately. This return value is
used as an argument for the gsl_ran_discrete function below.
Random: size_t gsl_ran_discrete (const gsl_rng * r, const gsl_ran_discrete_t * g)
Function: double gsl_ran_discrete_pdf (size_t k, const gsl_ran_discrete_t * g)
Returns the probability P[k] of observing the variable k. Since P[k] is not stored as part of the lookup table, it must be recomputed; this computation takes O(K), so if K is large and you care
about the original array P[k] used to create the lookup table, then you should just keep this original array P[k] around.
Function: void gsl_ran_discrete_free (gsl_ran_discrete_t * g)
Random: unsigned int gsl_ran_poisson (const gsl_rng * r, double mu)
Function: double gsl_ran_poisson_pdf (unsigned int k, double mu)
Random: unsigned int gsl_ran_bernoulli (const gsl_rng * r, double p)
Function: double gsl_ran_bernoulli_pdf (unsigned int k, double p)
Random: unsigned int gsl_ran_binomial (const gsl_rng * r, double p, unsigned int n)
Function: double gsl_ran_binomial_pdf (unsigned int k, double p, unsigned int n)
Random: unsigned int gsl_ran_negative_binomial (const gsl_rng * r, double p, double n)
This function returns a random integer from the negative binomial distribution, the number of failures occurring before n successes in independent trials with probability p of success. The
probability distribution for negative binomial variates is,
p(k) = {\Gamma(n + k) \over \Gamma(k+1) \Gamma(n) } p^n (1-p)^k
Note that n is not required to be an integer.
Function: double gsl_ran_negative_binomial_pdf (unsigned int k, double p, double n)
Random: unsigned int gsl_ran_pascal (const gsl_rng * r, double p, unsigned int k)
Function: double gsl_ran_pascal_pdf (unsigned int k, double p, unsigned int n)
Random: unsigned int gsl_ran_geometric (const gsl_rng * r, double p)
Function: double gsl_ran_geometric_pdf (unsigned int k, double p)
Random: unsigned int gsl_ran_hypergeometric (const gsl_rng * r, unsigned int n1, unsigned int n2, unsigned int t)
Function: double gsl_ran_hypergeometric_pdf (unsigned int k, unsigned int n1, unsigned int n2, unsigned int t)
Random: unsigned int gsl_ran_logarithmic (const gsl_rng * r, double p)
Function: double gsl_ran_logarithmic_pdf (unsigned int k, double p)
The following functions allow the shuffling and sampling of a set of objects. The algorithms rely on a random number generator as source of randomness and a poor quality generator can lead to
correlations in the output. In particular it is important to avoid generators with a short period. For more information see Knuth, v2, 3rd ed, Section 3.4.2, "Random Sampling and Shuffling".
Random: void gsl_ran_shuffle (const gsl_rng * r, void * base, size_t n, size_t size)
This function randomly shuffles the order of n objects, each of size size, stored in the array base[0..n-1]. The output of the random number generator r is used to produce the permutation. The
algorithm generates all possible n! permutations with equal probability, assuming a perfect source of random numbers.
The following code shows how to shuffle the numbers from 0 to 51,
int i, a[52];
for (i = 0; i < 52; i++)
a[i] = i;
gsl_ran_shuffle (r, a, 52, sizeof (int));
Random: int gsl_ran_choose (const gsl_rng * r, void * dest, size_t k, void * src, size_t n, size_t size)
This function fills the array dest[k] with k objects taken randomly from the n elements of the array src[0..n-1]. The objects are each of size size. The output of the random number generator r is
used to make the selection. The algorithm ensures all possible samples are equally likely, assuming a perfect source of randomness.
The objects are sampled without replacement, thus each object can only appear once in dest[k]. It is required that k be less than or equal to n. The objects in dest will be in the same relative
order as those in src. You will need to call gsl_ran_shuffle(r, dest, k, size) if you want to randomize the order.
The following code shows how to select a random sample of three unique numbers from the set 0 to 99,
int i;
double a[3], b[100];
for (i = 0; i < 100; i++)
b[i] = (double) i;
gsl_ran_choose (r, a, 3, b, 100, sizeof (double));
Random: void gsl_ran_sample (const gsl_rng * r, void * dest, size_t k, void * src, size_t n, size_t size)
The following program demonstrates the use of a random number generator to produce variates from a distribution. It prints 10 samples from the Poisson distribution with a mean of 3.
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

int
main (void)
{
  const gsl_rng_type * T;
  gsl_rng * r;

  int i, n = 10;
  double mu = 3.0;

  /* create a generator chosen by the
     environment variable GSL_RNG_TYPE */

  gsl_rng_env_setup ();

  T = gsl_rng_default;
  r = gsl_rng_alloc (T);

  /* print n random variates chosen from
     the poisson distribution with mean
     parameter mu */

  for (i = 0; i < n; i++)
    {
      unsigned int k = gsl_ran_poisson (r, mu);
      printf (" %u", k);
    }

  printf ("\n");
  gsl_rng_free (r);
  return 0;
}
If the library and header files are installed under `/usr/local' (the default location) then the program can be compiled with these options,
gcc demo.c -lgsl -lgslcblas -lm
Here is the output of the program,
$ ./a.out
The variates depend on the seed used by the generator. The seed for the default generator type gsl_rng_default can be changed with the GSL_RNG_SEED environment variable to produce a different stream
of variates,
$ GSL_RNG_SEED=123 ./a.out
The following program generates a random walk in two dimensions.
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

int
main (void)
{
  int i;
  double x = 0, y = 0, dx, dy;

  const gsl_rng_type * T;
  gsl_rng * r;

  gsl_rng_env_setup ();

  T = gsl_rng_default;
  r = gsl_rng_alloc (T);

  printf ("%g %g\n", x, y);

  for (i = 0; i < 10; i++)
    {
      gsl_ran_dir_2d (r, &dx, &dy);
      x += dx; y += dy;
      printf ("%g %g\n", x, y);
    }

  gsl_rng_free (r);
  return 0;
}
Example output from the program, three 10-step random walks from the origin.
For an encyclopaedic coverage of the subject readers are advised to consult the book Non-Uniform Random Variate Generation by Luc Devroye. It covers every imaginable distribution and provides
hundreds of algorithms.
• Luc Devroye, Non-Uniform Random Variate Generation, Springer-Verlag, ISBN 0-387-96305-7.
The subject of random variate generation is also reviewed by Knuth, who describes algorithms for all the major distributions.
• Donald E. Knuth, The Art of Computer Programming: Seminumerical Algorithms (Vol 2, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896842.
The Particle Data Group provides a short review of techniques for generating distributions of random numbers in the "Monte Carlo" section of its Annual Review of Particle Physics.
• Review of Particle Properties R.M. Barnett et al., Physical Review D54, 1 (1996) http://pdg.lbl.gov/.
The Review of Particle Physics is available online in postscript and pdf format.
Mastering Python Floats and Infinite Values – A Comprehensive Guide
In Python programming, floats are a fundamental data type that allow for the representation of real numbers with fractional parts. Understanding how floats work in Python is crucial to avoid
potential errors and inaccuracies in numeric computations.
Floats in Python: A Fundamental Concept
Floats, or floating-point numbers, differ from integers in that they can represent fractional values. Python’s built-in float data type enables the manipulation and calculation of real numbers.
However, it is important to note that float numbers have limited precision and may introduce rounding errors due to the binary representation of numbers.
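A quick interactive check makes that rounding behavior concrete:

```python
# Classic demonstration of binary floating-point rounding error:
# 0.1 and 0.2 have no exact binary representation, so their sum
# is not exactly 0.3.
a = 0.1 + 0.2
print(a)          # 0.30000000000000004
print(a == 0.3)   # False

# Printing with more digits reveals the stored approximation.
print(f"{0.1:.20f}")  # 0.10000000000000000555
```

The printed value 0.1 you normally see is just the shortest decimal string that rounds back to the same underlying binary number.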
Working with Floats in Python
Creating and assigning float values in Python is straightforward. You can simply assign a decimal value, or use mathematical expressions involving integers. Python provides various mathematical
operations for floats, including addition, subtraction, multiplication, division, modulus, and exponentiation. It is important to keep in mind that these operations may introduce precision errors
when working with floats.
Casting floats to integers or vice versa is a common operation in Python. By using the int() function, you can obtain the integer part of a float, effectively truncating the decimal part. On the
other hand, you can cast integers to floats using the float() function to perform operations that require fractional values.
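For example:

```python
# int() truncates toward zero -- it does not round.
print(int(3.7))    # 3
print(int(-3.7))   # -3

# float() promotes an integer so fractional arithmetic behaves as expected.
print(float(2))    # 2.0
print(7 / 2)       # 3.5  (true division always returns a float)
print(7 // 2)      # 3    (floor division keeps the integer part)
```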
Rounding floats to a specific decimal place is essential for achieving desired precision. Python provides built-in functions like round() and the Decimal class from the decimal module, which offer
different rounding methods depending on the specific requirements of your application.
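A few examples show why the choice of rounding method matters:

```python
from decimal import Decimal, ROUND_HALF_UP

# round() uses round-half-to-even ("banker's rounding") on exact ties.
print(round(0.5))   # 0
print(round(1.5))   # 2

# 2.675 is stored as slightly less than 2.675, so it rounds down.
print(round(2.675, 2))  # 2.67

# Decimal.quantize gives explicit control over the rounding rule.
print(Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68
```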
Understanding Infinite Values in Python
Infinity and negative infinity are special values that can arise in numeric computations involving floats. These infinite values indicate that a number is either too large to be represented or too
small to be approximated by a float. Being aware of how infinite values can occur is crucial to avoid unexpected behavior in your calculations.
Infinite values can arise when a calculation overflows the range representable by floats, producing a result too large in magnitude to store. (Note that in Python, dividing a non-zero float by zero
raises a ZeroDivisionError rather than returning infinity.) Python provides explicit representations for these infinite values through float('inf') and float('-inf'), respectively.
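For instance:

```python
import math

pos_inf = float("inf")
neg_inf = float("-inf")

# Overflow beyond the float range produces inf rather than an error.
print(1e308 * 10)              # inf
print(math.isinf(1e308 * 10))  # True

# Infinities compare sensibly against finite floats...
print(pos_inf > 1e308)   # True
print(neg_inf < -1e308)  # True

# ...but some arithmetic on them yields NaN (not a number).
print(pos_inf - pos_inf)  # nan
```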
Common Issues and Challenges with Python Floats
When working with floats in Python, there are several common issues and challenges that developers may encounter. Floating-point arithmetic limitations can lead to inaccuracies and discrepancies
compared to the expected results. Rounding errors and precision discrepancies can also occur due to the finite precision of float numbers.
Another challenge comes with comparing floating-point numbers, as exact comparisons may not always yield the expected results due to minor differences in the representation of numbers. This can lead
to potentially frustrating bugs if not handled carefully.
Best Practices for Working with Floats
To overcome the limitations and challenges associated with floats in Python, it is recommended to follow some best practices. One approach is to utilize Python’s decimal module for precise decimal
arithmetic. Unlike floats, the decimal module provides arbitrary precision decimal numbers, allowing for more accurate computations.
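Because Decimal works in base 10, the classic rounding surprise disappears:

```python
from decimal import Decimal, getcontext

# Exact in base 10, unlike the float version.
print(Decimal("0.1") + Decimal("0.2"))  # 0.3

# Precision is configurable (28 significant digits by default).
getcontext().prec = 50
print(Decimal(1) / Decimal(7))  # 0.142857... carried to 50 digits

# Caution: construct from strings.  Decimal(0.1) faithfully copies
# the binary approximation, error and all.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```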
Employing appropriate rounding methods is essential to ensure the desired precision in your calculations. Python offers different rounding methods like round half up, round half down, or round half
even. Choosing the right method depends on the specific requirements of your application.
When comparing floats, one effective technique is to use tolerances or epsilon values. By defining an acceptable tolerance range, you can compare two floats within that range of difference rather
than expecting an exact match. This approach accounts for the inherent imprecisions of float representation.
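The standard library already provides this pattern via math.isclose, and a hand-rolled epsilon check works the same way:

```python
import math

a = 0.1 + 0.2
b = 0.3

# Direct equality fails because of representation error.
print(a == b)  # False

# math.isclose compares within a relative and/or absolute tolerance.
print(math.isclose(a, b))                            # True (rel_tol=1e-09)
print(math.isclose(a, b, rel_tol=0, abs_tol=1e-12))  # True

# The hand-rolled version of the same idea:
EPSILON = 1e-9
print(abs(a - b) < EPSILON)  # True
```

When the values being compared can be near zero, prefer an absolute tolerance (abs_tol); a purely relative tolerance can never match anything against exactly 0.0.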
Real-World Examples and Use Cases
Float-related challenges often arise in financial calculations due to the necessity of precise decimal arithmetic. Consider a situation where rounding errors could accumulate over time, potentially
impacting financial predictions or calculations involving interest rates. By using the decimal module, you can perform accurate calculations without introducing cumulative rounding errors.
In scientific computations, handling infinite values is critical. Imagine scenarios where calculating physical quantities, such as velocities or distances, could result in infinite values due to the
mathematical nature of the formulas. By recognizing and appropriately representing infinite values using Python’s float('inf') and float('-inf'), you can ensure the integrity of your calculations.
Real-world scenarios often require implementing best practices for working with floats. For example, in numerical simulations or data analysis, accurate and precise computations are essential. By
applying appropriate rounding methods, using the decimal module where necessary, and comparing floats with tolerances, you can minimize errors and ensure reliable results in your applications.
In this guide, we explored the fundamental concepts of floats in Python and the importance of understanding their representation. We discussed how to create, assign, and manipulate float values, as
well as common issues and challenges associated with floating-point arithmetic. By following best practices, such as utilizing the decimal module and employing appropriate rounding methods, you can
improve the accuracy and reliability of your float calculations.
Understanding the nuances of float representation in Python empowers you to tackle real-world challenges, whether they involve financial calculations, scientific computations, or other applications.
By mastering the intricacies of floats, you can write more efficient and robust code, ensuring the accuracy of your calculations and minimizing unexpected errors.
Keep exploring the world of Python floats, and continually hone your skills in handling floats and infinite values. With a solid understanding of float representation and the practices outlined in
this guide, you can confidently navigate the complexities of numeric computation in Python.
6 Reasons to Teach Calculator Skills
When North Carolina first began to allow calculators on state tests, many elementary teachers (including me) were shocked! What? Kids need to develop basic computation skills before they are allowed
to use calculators!
Then we got a look at the new state math test. Holy moly! The test was divided into two parts, a calculator-inactive section, and much longer calculator-active part that was made up entirely of word
problems! I realized that if my 5th graders had to work out every answer by hand, they would never finish the test! I also realized that it wouldn’t be fair to hand out calculators for the first time
on test day. In short, I needed a new game plan…. one that involved calculators.
Calculators Are Not Magic – You Still Have to Think!
Most students are intimidated by word problems, so when I decided to introduce calculators, I felt that our problem-solving lessons would be a good place to begin. When I first handed out the
calculators, my students were so excited! They seemed to think those calculators were going to magically solve the problems for them! It didn’t take long before my students realized that calculators
are not magic at all! Why?
1. You still have to read the problem, choose a strategy, decide which operation to use, record the answer, and check the solution using a different strategy. In other words, you still have to think!
2. Calculators aren’t helpful with some types of math problems, so you need to know when to use one and when it might be a waste of time.
3. You have to know how to use the calculator in order to get the correct answer. The data from the problem must be entered in a specific way, and if you enter it incorrectly, you’ll get the wrong
answer every time.
4. You have to know how to interpret the number that appears in the display window when you finish entering the data, especially in problems involved time, measurement, and money.
Why and How to Assess Calculator Skills
After they got over the disappointing realization that calculators are not magic, my students began to enjoy using them and looked forward to problem solving lessons.
However, I soon noticed that some students knew how to solve the math problems, but they were getting the answers wrong because they didn’t know how to use their calculators properly. Time for some
calculator lessons! I knew that some kids didn’t need the extra help, so I created a simple 10-item Calculator Quiz to find out who did. You can download this free assessment from my Daily Math
Puzzlers page.
When I handed out the test, I told my students that they were not allowed to work out any problems on paper. They were required to use their calculators and they could only use their pencils to
record their answers. Needless to say, they were shocked! “You mean we can’t work out the problems on paper even if we want to?” “Nope. Sorry. Only the calculator.”
After I scored the tests, I taught several guided math group lessons to the students who were having difficulties. The other students used the time to work on math center activities. Then I retested
the kids who I had worked with to be sure they had mastered the basic calculator skills. These lessons were so successful that I included calculator instruction in each Daily Math Puzzler book.
6 Reasons to Teach Calculator Skills to Upper Elementary Students
So why should we teach upper elementary students how to use a calculator? Based on my own experiences and feedback from other teachers, I am convinced that calculators boost mathematical thinking and
are motivating to students. Here are 6 reasons to teach calculator skills and to encourage your students to use them to solve math problems.
1. Calculators help kids overcome computational limitations. Kids often have the conceptual understanding to solve problems that are much more difficult than their computational ability would allow.
For example, a student might know they need to divide a 2-digit number by another 2-digit number, but if he or she hasn’t mastered this skill, the answer will be out of reach without a
calculator. Overcoming computational limitations is especially helpful for special needs students and actually removes barriers to more advanced levels of math instruction.
2. Calculators encourage the use of multiple strategies. Being able to use a calculator frees students to consider and test out a wide variety of problem solving strategies in a short time. They can
solve a problem using one strategy (without or without a calculator), and check their answers using a different strategy.
3. Calculators help kids solve more problems in less time. Calculators allow students to work more quickly, which means they can solve more problems in a given time. So you can increase the number
and complexity of the problems you introduce in each lesson without increasing the time devoted to problem solving lessons.
4. Calculators promote persistence in problem solving. As students begin to think more creatively and try different methods, they will experience success with some methods and failure with others.
But it’s how they feel about those “failures” that’s important. I noticed my students were less discouraged when they couldn’t solve problems quickly; they tried to figure out why their methods
didn’t work. Then they adjusted their thinking and tried a different strategy.
5. Calculators foster a growth mindset. Educators are starting to realize that praising students for correct answers is not nearly as important as recognizing their struggles along the way. When
students are able to persist and try different strategies to solve challenging problems, they feel a sense of accomplishment and pride in themselves for not giving up, which leads to the next benefit.
6. Calculators promote a positive attitude towards problem solving. As the saying goes, “Success breeds success,” and that’s definitely true in math. Using a calculator drastically increases the
chance that a student will get the correct answer, and the subsequent feeling of accomplishment promotes a more positive attitude toward the next problem.
What If Your Students Aren’t Allowed to Use Calculators on Tests?
Is there any benefit to using calculators during the school year if your students are not allowed to use them on standardized tests? Yes! Your students will still benefit from using them early in
the year to boost their mathematical thinking, as long as you devote sufficient time to teaching and reviewing computation. As the year progresses and your students’ computation skills improve, you
can gradually wean them off calculators completely.
When Should Calculators Be Introduced?
I introduced calculator skills early in the year because my students were 5th graders who had developed a fairly solid understanding of numbers and what they mean. I don’t advocate giving them to
young children who have not had time to develop number sense, because entering numbers into a calculator won’t be meaningful to them. You are the best judge of when your students are ready to begin
using calculators. However, if you observe your students randomly punching numbers in as if they are hoping to stumble on the answer, it might be time to put the calculators away for a while and
focus on math problem solving strategies.
Learning to use a calculator had a tremendous positive impact on my students’ mathematical thinking and their willingness to tackle tough problems. Furthermore, because I still taught computation
skills, I didn’t see any detrimental impact on their ability to solve computation problems without a calculator. If you think about it, calculators are just another math tool. No, they won’t
magically solve word problems. However, the way calculators help students become better problem solvers is almost magical!
The Stacks project
Lemma 44.2.1. Let $X \to S$ be a morphism of schemes. The functor $\mathrm{Hilb}^d_{X/S}$ satisfies the sheaf property for the fpqc topology (Topologies, Definition 34.9.13).
Proof. Let $\{T_i \to T\}_{i \in I}$ be an fpqc covering of schemes over $S$. Set $X_i = X_{T_i} = X \times_S T_i$. Note that $\{X_i \to X_T\}_{i \in I}$ is an fpqc covering of $X_T$
(Topologies, Lemma 34.9.8) and that $X_{T_i \times_T T_{i'}} = X_i \times_{X_T} X_{i'}$. Suppose that $Z_i \in \mathrm{Hilb}^d_{X/S}(T_i)$ is a collection of elements such that $Z_i$ and
$Z_{i'}$ map to the same element of $\mathrm{Hilb}^d_{X/S}(T_i \times_T T_{i'})$. By effective descent for closed immersions (Descent, Lemma 35.37.2) there is a closed immersion $Z \to X_T$
whose base change by $X_i \to X_T$ is equal to $Z_i \to X_i$. The morphism $Z \to T$ then has the property that its base change to $T_i$ is the morphism $Z_i \to T_i$. Hence $Z \to T$ is
finite locally free of degree $d$ by Descent, Lemma 35.23.30. $\square$
Double division - math word problem (48891)
Double division
0.25 divided by 1/2 divided by 14
Correct answer:
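As a quick check (an illustrative computation in Python, not part of the original page), exact fractions give the value, reading the divisions left to right:

```python
from fractions import Fraction

# 0.25 divided by 1/2, then divided by 14
result = Fraction(1, 4) / Fraction(1, 2) / 14
print(result)  # 1/28
```

So 0.25 ÷ 1/2 ÷ 14 = 1/28 ≈ 0.0357.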
Optimal Control for ODEs and PDEs: The Turnpike Phenomenon
The turnpike phenomenon for dynamic optimal control problems provides insight into the relation between the dynamic optimal control and the solution of the corresponding static optimal control
problem. In this talk we give an overview of different turnpike structures for optimal control problems with ordinary differential equations (ODEs) and partial differential equations (PDEs).
For optimal control problems with ODEs an exponential turnpike inequality can be shown by basic control theory. These results can be extended to an integral turnpike inequality for optimal control
problems with linear hyperbolic systems. For an exactly controllable optimal control problem with a non-differentiable tracking term in the objective function, we can show under certain
assumptions that the optimal system state is steered exactly to the desired state after finite time.
Further we consider an optimal control problem for a hyperbolic system with random boundary data and we show the existence of optimal controls.
A turnpike property for hyperbolic systems with random boundary data can be shown numerically.
How do you add two cell arrays in Matlab?
Combine Cell Arrays
1. C1 = {1, 2, 3}; C2 = {‘A’, ‘B’, ‘C’}; C3 = {10, 20, 30}; Concatenate cell arrays with the array concatenation operator, [] .
2. C4 = [C1; C2; C3] C4 is a 3-by-3 cell array:
3. C4 =
   [ 1]    [ 2]    [ 3]
   'A'     'B'     'C'
   [10]    [20]    [30]
4. C5 = {C1; C2; C3}
5. C5 = {1×3 cell} {1×3 cell} {1×3 cell}
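For readers more familiar with Python, a rough analogue of the two styles is shown below (plain lists stand in for cell arrays here; this is an illustration, not MATLAB syntax):

```python
C1 = [1, 2, 3]
C2 = ['A', 'B', 'C']

flat = C1 + C2      # like [C1, C2] in MATLAB: one flat 1-by-6 "cell array"
nested = [C1, C2]   # like {C1, C2}: each cell holds a whole array

print(flat)    # [1, 2, 3, 'A', 'B', 'C']
print(nested)  # [[1, 2, 3], ['A', 'B', 'C']]
```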
How do you append in Matlab?
str = append( str1,…,strN ) combines the text from str1,…,strN . Each input argument can be a string array, a character vector, or a cell array of character vectors. If any input is a string array,
then the output is a string array.
How do you join an array in Matlab?
You can use the square bracket operator [] to concatenate. For example, [A,B] or [A B] concatenates arrays A and B horizontally, and [A; B] concatenates them vertically.
Can you append to an array in MATLAB?
If you run both commands over and over again, nameArray will keep only the last name entered. If you run the first command once to initialize the variable then run only the second line repeatedly to
append to the cell array, it will do what the original poster asked.
How do you add cells in MATLAB?
When you have data to put into a cell array, create the array using the cell array construction operator, {} . Like all MATLAB® arrays, cell arrays are rectangular, with the same number of cells in
each row. myCell is a 2-by-3 cell array. You also can use the {} operator to create an empty 0-by-0 cell array.
How do I merge two columns in MATLAB?
Merge two columns into one
1. x = [1;2;3]; % (3×1 size)
2. y = [5;6;7]; % (3×1 size)
3. XY = [x y]; % (3×2 size)
The result XY is the 3×2 matrix:
[ 1 5
  2 6
  3 7 ]
How do you add elements to an array in MATLAB?
S = sum( A ) returns the sum of the elements of A along the first array dimension whose size does not equal 1.
1. If A is a vector, then sum(A) returns the sum of the elements.
2. If A is a matrix, then sum(A) returns a row vector containing the sum of each column.
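This column-wise convention can be mimicked in plain Python (an illustrative analogue; MATLAB's sum itself operates on its native arrays):

```python
A = [[1, 2, 3],
     [4, 5, 6]]

# like MATLAB's sum(A) on a matrix: one sum per column
col_sums = [sum(col) for col in zip(*A)]
print(col_sums)  # [5, 7, 9]

# like sum(A) on a vector: a single total
print(sum([1, 2, 3]))  # 6
```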
How do you add rows to a matrix?
Adding Row To A Matrix: We use the function rbind() to add a row to an existing matrix. To learn about the rbind() function in R, simply type ?rbind or help(rbind) in RStudio.
How do you append to a cell?
You can use formula to append text from one cell to another as follows. 1. Select a blank cell for locating the appended result, enter formula =CONCATENATE(A1,” “,B1,” “,C1) into the formula bar, and
then press the Enter key.
How do you make multiple cells in an array?
To enter a multi-cell array formula, follow these steps:
1. Select multiple cells (cells that will contain the formula)
2. Enter an array formula in the formula bar.
3. Confirm formula with Control + Shift + Enter.
How do you create an array in MATLAB?
Create a cell array. A cell array is a rectangular set of data similar to a matrix but it can hold any type of data such as text, numbers, and/or vector. Code a cell array by listing a series of
numbers, vectors, or characters in the same format as a matrix while characters are in quotation marks and vectors are in brackets.
How to extract numbers from cell array in MATLAB?
upperLeft = C(1:2,1:2)
upperLeft = 2×2 cell array
    {'one'}    {'two'}
    {[ 1]}     {[ 2]}
Update sets of cells by replacing them with the same number of cells. For example, replace cells in the first row of C with an equivalent-sized (1-by-3) cell array.
C(1,1:3) = {'first', 'second', 'third'}
How to append an element to an array in MATLAB?
r = [r1;r2] However, to do this, both the vectors should have same number of elements. Similarly, you can append two column vectors c1 and c2 with n and m number of elements. To create a column
vector c of n plus m elements, by appending these vectors, you write −. c = [c1; c2]
How can I add matrices inside a cell array?
This example shows how to add cells to a cell array. Create a 1-by-3 cell array. C = {1, 2, 3} C= 1×3 cell array { [1]} { [2]} { [3]} Assign data to a cell outside the current dimensions. MATLAB®
expands the cell array to a rectangle that includes the specified subscripts. Any intervening cells contain empty arrays.
Swim Pace Calculator - Swim Pace - calculatepace.com
Swimming pace calculator
With this pace you swim:
Popular distances
100 m:
400 m:
500 m:
1000 m:
3800 m:
Short haul:
50 m:
100 m:
200 m:
400 m:
500 m:
Medium-haul route:
800 m:
1000 m:
1500 m:
1900 m:
Long haul
2,000 m:
3,000 m:
3,800 m:
5,000 m:
10 km:
15 km:
20 km:
25 km:
30 km:
50 km:
Hour swimming
1 hour:
2 hours:
3 hours:
4 hours:
5 hours:
6 hours:
12 hours:
24 hours:
48 hours:
72 hours:
What is the swimming pace?
Swim pace is the time required to cover a certain distance while swimming, measured in minutes per 100 meters.
How do you calculate swimming pace?
To calculate your swimming pace, you need to measure the time it takes you to cover a certain distance and then convert the time into minutes. You then divide the time by the distance covered in
meters and multiply the result by 100 to get the swimming pace in minutes per 100 meters.
The formula for calculating swimming pace is:
Swimming pace = time / (distance in meters / 100)
Assuming you have covered 200 meters in 3 minutes and 20 seconds, you can calculate your swimming pace as follows:
Time = 3 minutes + 20 seconds / 60 seconds per minute = 3.33 minutes
Distance = 200 meters
Swim pace = 3.33 minutes / (200 meters / 100) = 1.67 minutes per 100 meters
Therefore, your swimming pace is 1.67 minutes per 100 meters.
To calculate your swimming speed for 50 meters, you need to measure the time it takes you to cover this distance and then convert it into minutes per 100 meters.
Assuming you have covered 50 meters in 35 seconds, you can calculate your swim pace as follows:
Time = 35 seconds / 60 seconds per minute = 0.58 minutes
Distance = 50 meters
Swim pace = 0.58 minutes / (50 meters / 100) = 1.17 minutes per 100 meters
Therefore, your swimming pace for 50 meters is about 1.17 minutes per 100 meters (roughly 1 minute 10 seconds per 100 meters).
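The formula above is easy to wrap in a small helper; this Python sketch (the function name is ours) reproduces both worked examples:

```python
def swim_pace(time_seconds, distance_m):
    """Swim pace in minutes per 100 meters."""
    return (time_seconds / 60) / (distance_m / 100)

print(round(swim_pace(3 * 60 + 20, 200), 2))  # 1.67 min per 100 m
print(round(swim_pace(35, 50), 2))            # 1.17 min per 100 m
```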
How do you measure swimming speed?
Swimming pace can be measured by timing. A watch or other timing device is needed to measure the time it takes to swim a given distance. Of course, a swimming watch or GPS watch can also be used.
How can you improve your swimming speed?
To improve swimming speed, technique training, strength training, endurance training and specific interval training can be carried out. Improving swimming technique and increasing endurance can help
to improve swimming speed.
How does swimming technique influence swimming pace?
A good swimming technique can help the swimmer to swim with less effort and greater efficiency. This can improve swimming speed. A poor swimming technique, on the other hand, can cause the swimmer to
need more effort and swim less efficiently.
How does the swimming pace differ for different swimming styles?
Swimming pace can vary depending on the swimming style, as each swimming style has different techniques and requirements. As a rule, the swimming pace for the breaststroke is slower than for the
freestyle or backstroke.
How can you calculate your swimming pace for longer distances?
To calculate your swimming pace for longer distances, you can time yourself swimming a set distance and then convert it into minutes per 1000 meters. Alternatively, you can calculate your swimming
pace for a shorter distance and then extrapolate it.
How can you improve your swimming speed for competitions?
To improve your swimming speed for competitions, you can do specific interval training that aims to improve endurance and speed. You can also do technique training and strength training to improve
efficiency in the water.
How can you use your swimming pace for interval training?
Interval training can be utilized by designing intervals based on swim pace. The goal is to design intervals that challenge your individual performance level and help improve endurance and speed.
Using swim pace as a benchmark helps ensure you're not swimming too fast or too slow.
How can you use the swim pace for open water swimming?
For open water swimming, swimming pace can be used to control the pace and save energy. By knowing your swimming pace, you can maintain a steady pace and not be carried away by other swimmers. In
addition, swimming pace can help you to better estimate the distance and improve your own performance.
Estimating Probability of Failure of a Complex System Based on Partial Information about Subsystems and Components, with Potential Applications to Aircraft Maintenance
In many real-life applications (e.g., in aircraft maintenance), we need to estimate the probability of failure of a complex system (such as an aircraft as a whole or one of its subsystems). Complex
systems are usually built with redundancy allowing them to withstand the failure of a small number of components. In this paper, we assume that we know the structure of the system, and, as a result,
for each possible set of failed components, we can tell whether this set will lead to a system failure. In some cases, for each component A, we know the probability P(A) of its failure; in other
cases, however, we only know the lower and upper bounds for this probability. Sometimes, we only have expert estimates for these probabilities, estimates that can be described as fuzzy numbers.
Usually, it is assumed that failures of different components are independent events, but sometimes, we know that they are dependent -- in which case we usually do not have any specific information
about their correlation. Our objective is to use all this information to estimate the probability of failure of the entire complex system. In this paper, we describe methods for such estimation.
Curvilinear Asymptotes in GeoGebra
GeoGebra has a very useful function called Asymptote: if you have something like $f(x) = \frac{3x^2+4x+3}{x-1}$, typing Asymptote(f) in the input bar gives a list of the linear asymptotes: $\{ y=
3x+7; x=1 \}$. Very nice, very useful.
But something like $f(x) = \frac{2x^4 + 3x^3 + 2x + 4}{x^2 +3x + 2}$ is more tricky: GeoGebra only returns the two linear asymptotes, $x=-1$ and $x=-2$. However, there’s also a curvilinear asymptote
that GeoGebra doesn’t return. Can we get GeoGebra to find it?
Of course we can. It’s a tiny bit tricky, but it’s not as bad as I first thought.
The first step is to split the function into a numerator and a denominator, a top and a bottom:
• $N(x) = 2x^4 + 3x^3 + 2x + 4$
• $D(x) = x^2 + 3x + 2$
The key is then to use the Division command, which returns a list containing the quotient and the remainder:
• $L = Division(N, D)$
Here, we need the quotient:
• $q(x) = Element(L, 1)$ (note that GeoGebra lists start counting from 1).
And that’s it! $q(x)$ is the curvilinear asymptote to $f(x)$.
Air of density $1.3kg{m^{ - 3}}$ blows horizontally with a speed of $108km{h^{ - 1}}$. A house has a plane roof of area $40{m^2}$. Find the magnitude of the aerodynamic lift on the roof.
The reason behind the lifting of the roof is Bernoulli’s principle. According to the principle, the pressure on one side of the surface is equal to the other side of the surface. Here, at the top of
the roof there is low pressure as the wind is blowing while below the roof there is high pressure as there is no wind flowing inside the house. So, the high pressure in the inside and low pressure on
the outside of the roof causes the roof to lift up.
Complete step-by-step solution
Apply Bernoulli's equation between a point just above the roof (where the wind blows with speed $v$) and a point just below it (where the air is at rest):
${p_1} + \rho gh + \dfrac{1}{2}\rho v_1^2 = {p_2} + \rho gh + \dfrac{1}{2}\rho v_2^2$;
Cancel out the common term $\rho gh$, and put ${v_1} = v$ and ${v_2} = 0$:
${p_2} = {p_1} + \dfrac{1}{2}\rho {v^2}$;
\[{p_2} - {p_1} = \dfrac{1}{2}\rho {v^2}\];
Here $\Delta p = {p_2} - {p_1}$, so:
$\Delta p = \dfrac{1}{2}\rho {v^2}$;
Apply the relation between pressure difference and force:
$\Delta p = \dfrac{F}{A}$;
$F = ({p_2} - {p_1}) \times A$;
Put ${p_2} - {p_1} = \dfrac{1}{2}\rho {v^2}$ in the above equation:
$F = \dfrac{1}{2}\rho {v^2} \times A$;
Here $v = 108km{h^{ - 1}}$ is equal to $v = \dfrac{{108 \times 1000}}{{60 \times 60}} = 30m/s$;
\[F = \dfrac{1}{2} \times 1.3 \times 900 \times 40\];
Do the necessary calculation:
\[F = \dfrac{{46800}}{2}\];
The lift force is given as:
\[F = 23,400N\];
\[F = 2.34 \times {10^4}N\];
Final Answer: Option”3” is correct. The magnitude of aerodynamic lift on the roof is\[2.34 \times {10^4}N\].
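The arithmetic can be verified in a couple of lines of Python (an illustrative check of $F = \dfrac{1}{2}\rho {v^2}A$):

```python
rho = 1.3              # air density, kg/m^3
v = 108 * 1000 / 3600  # 108 km/h converted to m/s -> 30.0
A = 40.0               # roof area, m^2

F = 0.5 * rho * v**2 * A
print(round(F))  # 23400 N, i.e. 2.34e4 N
```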
Note: Here we first find the pressure difference by applying Bernoulli's theorem and then apply the formula $F = ({p_2} - {p_1}) \times A$. After applying Bernoulli's equation, the common terms
$\rho gh$ cancel out and the final velocity ${v_2}$ is zero. In the end we get the force, also known as the lift force, which equals the magnitude of the aerodynamic lift on the roof.
It specifies which plots are not going to be plotted. Currently, you can choose from 7 plots: "digits", "second order", "summation", "mantissa", "chi square", "abs diff", "ex summation". If you want
to plot all of them, just put except = "none". The default is not to plot the "mantissa" and "abs diff".
Documents from: 1989
Authored Book
AB344 1989 Authored Book Sherman,B; Sieghart,P; Bundy,A; Boden,M; Sharples,M; Cooley,M; Dawson,D; Hopson,D; Neuberger,J
the Benefits and Risks of Knowledge Based Systems
AB367 1989 Authored Book Fisher,RB
from Surfaces to Objects: Computer Vision and Three Dimensional Scene Analysis
AB378 1989 Authored Book Gazdar,G; Mellish,CS
Natural Language Processing in LISP: An Introduction to Computational Linguistics
AB379 1989 Authored Book Gazdar,G; Mellish,CS
Natural Language Processing in PROLOG: An Introduction to Computational Linguistics
AB380 1989 Authored Book Gazdar,G; Mellish,CS
Natural Language Processing in Pop-II: An Introduction to Computational Linguistics
AB387 1989 Authored Book Jackson,P; Reichgelt,H; van Harmelen,F
Logic-Based Knowledge Representation
AB541 1989 Authored Book Ross,PM
Advanced Prolog: Techniques and Applications
AB591 1989 Authored Book Thompson,HS
a Strategy for Speech and Language Technology
AB624 1989 Authored Book Fisher,RB
from Surfaces to Objects: Computer Vision and Three Dimensional Scene Analysis
Chapter in Edited Book
CB348 1989 Chapter in Edited Book Cai,L
Diffusion Smoothing: An Approach to Sculptured Surfaces
CB352 1989 Chapter in Edited Book Corlett,R; Reichgelt,H; Davies,N; van Harmelen,F; Khan,R
the Architecture of Socrates
CB361 1989 Chapter in Edited Book Dale,R
Computer Based Editorial Aids
CB363 1989 Chapter in Edited Book Elfrink,B; Reichgelt,H
Assertion Time Inference in Link-Based Systems
CB368 1989 Chapter in Edited Book Fisher,RB; Orr,M
Geometric Constraints from 2.5D Sketch Data and Object Models
CB369 1989 Chapter in Edited Book Fleming,A
Geometric Relationships between Toleranced Features
CB375 1989 Chapter in Edited Book Giunchiglia,F; Smaill,AD
Reflection in Constructive and Non Constructive Automated Reasoning
CB385 1989 Chapter in Edited Book Hallam,JC
Artificial Intelligence and Signal Understanding
CB388 1989 Chapter in Edited Book Jackson,P; Reichgelt,A
a General Proof Method for Modal Predicate Logic
CB389 1989 Chapter in Edited Book Jackson,P; Reichgelt,H
a Modal Proof Method for Doxastic Reasoning
CB543 1989 Chapter in Edited Book Robertson,DS; Bundy,A; Uschold,M; Muetzelfeldt,M
Helping Inexperienced Users to Construct Simulation Programs: An Overview of the ECO Project
CB545 1989 Chapter in Edited Book Reichgelt,H
a Comparison of First Order and Modal Logics of Time
CB582 1989 Chapter in Edited Book Trehan,R
Concurrent Logic Languages for the Design and Implementation of Parallel AI Systems
CB586 1989 Chapter in Edited Book Thornton,C
Analogical Inference as Generalized Inductive Inference
CB598 1989 Chapter in Edited Book van Harmelen,F
the Limitations of Partial Evaluation
CB599 1989 Chapter in Edited Book van Harmelen,F
a Classification of Meta-Level Architectures
Journal Paper
JP339 1989 Journal Paper Brna,P
Programmed Rockets: An Analysis of Students' Strategies
JP345 1989 Journal Paper Bundy,A; Sterling,L; O'Keefe,R; Silver,B
Solving Symbolic Equations with Press
JP349 1989 Journal Paper Cawsey,A
Explanatory Dialogues
JP386 1989 Journal Paper Kwa,J
BS*: An Admissible Bidirectional Staged Heuristic Search Algorithm
JP391 1989 Journal Paper Logan,B
Conceptualising Design Knowledge
JP412 1989 Journal Paper Muetzelfeldt,R; Robertson,DS; Uschold,M; Bundy,A
the Use of Prolog for Improving the Rigour and Accessibility of Ecological Modelling
JP413 1989 Journal Paper Mellish,CS; Evans,R
Natural Language Generation from Plans
JP542 1989 Journal Paper Robertson,DS; Bundy,A; Uschold,M; Muetzelfeldt,B
the ECO Program Construction System: Ways of Increasing its Representation Power and Their Effects on the User Interface
JP559 1989 Journal Paper Smithers,T; Malcolm,CA
Programming Robotic Assembly in Terms of Task Achieving Behavioural Modules
JP565 1989 Journal Paper Scott,R; Trehan,R
Translating from Prolog to Occam 2: a Methodology
JP588 1989 Journal Paper Thornton,C
a Cross Section of European Research
JP589 1989 Journal Paper Thornton,C
Learning Mechanism which Construct Neighbourhood Representations
JP595 1989 Journal Paper Thompson,HS
Linguistics Corpora for the Language Industry: a European Community Public Utility
Paper in Conference Proceedings
PP316 1989 Paper in Conference Proceedings Ritchie,GD
On the Generative Power of Two-Level Morphological Rules
PP337 1989 Paper in Conference Proceedings Blokland,R; Thompson,HS
a Parser for Feature-Based Speech Recognition
PP338 1989 Paper in Conference Proceedings Bowles,A; Wilk,P
Tracing Requirements for Multi-Layered Meta-Programming
PP346 1989 Paper in Conference Proceedings Bundy,A; van Harmelen,F; Hesketh,J; Smaill,AD; Stevens,A
a Rational Reconstruction and Extension of Recursion Analysis
PP347 1989 Paper in Conference Proceedings Cai,L
Spline Smoothing - a Special Case of Diffusion Smoothing
PP350 1989 Paper in Conference Proceedings Cawsey,A
the Structure of Tutorial Discourse
PP362 1989 Paper in Conference Proceedings Dale,R
Cooking up Referring Expressions
PP364 1989 Paper in Conference Proceedings Engdahl,E; Cooper,K
Null Subjects in Zurich German
PP365 1989 Paper in Conference Proceedings Fisher,RB; Orr,J
Experiments with a Network Based Geometric Reasoning Engine
PP366 1989 Paper in Conference Proceedings Fisher,RB
Geometric Constraints from Planar Surface Patch Matchings
PP376 1989 Paper in Conference Proceedings Giunchiglia,F; Walsh,T
Abstract Theorem Proving
PP377 1989 Paper in Conference Proceedings Giunchiglia,F; Walsh,T
Theorem Proving with Definitions
PP384 1989 Paper in Conference Proceedings Hallam,JC; Forster,P; Howe,J
Map Free Localisation in a Partially Moving 3-D World: the Edinburgh Feature Based Navigator
PP390 1989 Paper in Conference Proceedings Li,S
3D Object Recognition from Range Images: Computational Framework and Neural Networks
PP392 1989 Paper in Conference Proceedings Logan,B; Smithers,T
the Role of Prototypes in Creative Design
PP393 1989 Paper in Conference Proceedings Logan,B; Newton,S
the Intractability of Design: Is Design too Complex for Expert Systems?
PP409 1989 Paper in Conference Proceedings McIlvenny,P
Communicative Action and Computers: Re Embodying Conversation Analysis
PP411 1989 Paper in Conference Proceedings Madden,P
the Specialization and Transformation of Constructive Existence Proofs
PP414 1989 Paper in Conference Proceedings Mellish,CS
Some Chart Based Techniques for Parsing Ill-Formed Input
PP422 1989 Paper in Conference Proceedings Malcolm,CA; Smithers,T; Hallam,JC
An Emerging Paradigm in Robot Architecture
PP529 1989 Paper in Conference Proceedings Nehmzow,U; Hallam,JC; Smithers,T
Really Useful Robots
PP544 1989 Paper in Conference Proceedings Ritchie,GD
On the Generative Power of Two Level Morphological Rules
PP546 1989 Paper in Conference Proceedings Reape,MK
a Logical Treatment of Semi-Free Word Order and Bounded Discontinuous Constituency
PP556 1989 Paper in Conference Proceedings Smithers,T; Conkie,A; Doheny,A; Logan,B; Millington,K
Design as Intelligent Behaviour: An AI in Design Research Programme
PP558 1989 Paper in Conference Proceedings Smithers,T
Intelligent Control in AI Based Design Support Systems
PP578 1989 Paper in Conference Proceedings Trucco,E
Towards Volumetric Description of Range Images
PP579 1989 Paper in Conference Proceedings Trucco,E; Groppello,P; Burbello,F
Experiments with Segment Based Stereo using Dynamic Programming
PP587 1989 Paper in Conference Proceedings Thornton,C
the Factorial Productivity of Mark-Raising Generalisation
PP590 1989 Paper in Conference Proceedings Thompson,HS
a Chart Parsing Realisation of Dynamic Programming: Best First Enumeration of Paths in a Lattice
PP592 1989 Paper in Conference Proceedings Thompson,HS
Hill Climbing to Improve the Performance of Rule Based Segmentation and Labelling
PP594 1989 Paper in Conference Proceedings Thompson,HS
Evaluation of Phoneme Lattices: Four Methods Compared
PP596 1989 Paper in Conference Proceedings Thompson,HS; McKelvie,D; McInnes,F
Robust Lexical Access for Continuous Speech Using Dynamic Time Warping and Finite State Transducers
PP600 1989 Paper in Conference Proceedings Valley,K
Realising the Potential of Expert System Shells in Education
PP604 1989 Paper in Conference Proceedings Williams,B; Thompson,HS
Modelling Phonological Processes in Continuous Speech Recognition
PP609 1989 Paper in Conference Proceedings Wiggins,G; Harris,M; Smaill,AD
Representing Music for Analysis and Composition
PhD Thesis
PT8903 1989 PhD Thesis Cawsey,A
Generating Explanatory Discourse: a Plan-Based Interactive Approach.
PT8907 1989 PhD Thesis van Harmelen,F
On the Efficiency of Metalevel Inference.
PT8911 1989 PhD Thesis Stevens,A
An Improved Method for the Mechanisation of Inductive Proof
PT8915 1989 PhD Thesis Trehan,R
An Investigation of Design and Execution Alternatives for the Committed Choice Non-Deterministic Logic Languages
TH8989 1989 PhD Thesis Nakamaru,T
A Hierarchical Production System (Mphil)
Research Paper
RP445 1989 Research Paper Bundy,A
A Science of Reasoning
RP446 1989 Research Paper Bundy,A; Uschold,M
The Use of Typed Lambada Calculus for Requirements Capture in the Domain of Ecological Modelling
RP447 1989 Research Paper Malcolm,CA; Smithers,T; Hallam,JC
An Emerging Paradigm in Robot Architecture
RP448 1989 Research Paper Bundy,A; Smaill,AD; Hesketh,J
Turning Eureka Steps into Calculations in Automatic Program Synthesis
RP449 1989 Research Paper Trucco,E; Groppello,P; Burbello,F
Experiments with Segment-Based Stereo Using Dynamic Programming
RP450 1989 Research Paper Cai,L
An Estimate of the Relationship Between Zero Thresholds of Gaussian Curvature and Mean Curvature
RP451 1989 Research Paper Fisher,RB; Orr,M
Geometric Reasoning in a Parallel Network
RP452 1989 Research Paper Cai,L
Approximating a Surface Up to Curvature Signs Using the Depth Data Alone
RP453 1989 Research Paper Logan,B; Smithers,T
The Role of Prototypes in Creative Design
RP456 1989 Research Paper Thornton,C
The Emergence of Higher Levels of Description
RP462 1989 Research Paper Thompson,HS
Speech Recognition, Artificial Intelligence and Translation: How Rosy a Future
RP504 1989 Research Paper Wiggins,G; Harris,M; Smaill,AD
Representing Music for Analysis and Composition
Teaching Paper
TE11 1989 Teaching Paper Robertson,DS
An Introduction to Knowledge Representation and Expert Systems (Ai-2)
Technical Paper
TP4 1989 Technical Paper van Harmelen,F
The Clam Proof Planner
Working Paper
WP218 1989 Working Paper Nowell,P
A Trained Dtw Algorithm for Word Spotting in Phoneme Strings
WP219 1989 Working Paper Petropoulakis,L; Malcolm,CA; Howe,J
Rapt: an Assessment of an Off-Line System for Programming Robotic Assemblies
WP220 1989 Working Paper Petropoulakis,L
Robot Orientation Angles Evaluation and Limitations - the Adept 1
Station Analysis Tools
current tools:
This package is distributed via the normal Linux method. Simply untar:
tar -xvf station_analysis_tools.tar.gz
cd into the new directory, then use the attached configure script to create a makefile for your system:
./configure
Compile with make and, if you wish, do a full install:
make
sudo make install
To remove the local executables, use make clean:
make clean
compute_psd: A Welch's-average method in which we break up a seismogram into several overlapping windows and compute the spectral power (real * real + imag * imag), normalized by the number of points and
the sample rate (see the paper by McNamara and Buland, 2004). This implementation returns the mean and standard deviation over those overlapping estimates. The function contains subroutines to remove a
pole-zero response (in standard SAC format) and convert the recorded signal to ground acceleration. Waveform data and header data can only be read in standard SAC format.
compute_coherence: Similar in implementation to compute_psd. This code, however, returns the magnitude-squared coherence, a measure of the similarity between two waveforms in the frequency domain.
compute_incoherence: Similar implementation to compute_psd. Returns (1 - coherence) * psd of a pair of waveforms. It becomes unstable when coherence = 1, so a large negative value (-999) is written
to indicate a coherence of 1.
compute_pole_zero_response: This is essentially an educational tool. Given a sac formatted pole zero response file, it computes the spectral characteristics similar to evalresp. Unlike the evalresp
library/function distributed by IRIS, this does not work on RESP.. files.
general_pdf: This reads a list of ascii files with x values in column 1 and y values in column 2. It stacks in a probabilistic sense giving the probability of a measurement landing in a grid cell
which is defined by dx and dy given in the parameter file. The output returns the input x and y in columns 1 and 2 with the probability in column 3.
get_mean_general_pdf: This parses a file output from general_pdf and simply returns the y values with the highest probability. This is not robust to bi-modal distributions; a more stable
median value can be obtained from get_nth_percentile_from_pdf with a percentile value of 50.
smooth_general_pdf: Applies a smoothing function to a pdf. This helps stabilize the values returned by get_mean_general_pdf and makes the graphical representation of a pdf cleaner.
get_nth_percentile_from_pdf: Finds the y value at each x in a pdf where the values to the left have an N percent chance of occurring. To obtain the median, use an N of 50. If N is 10, then there is a
10% chance of a measurement being less than the line extracted.
interpolate_psd: Returns a regularly spaced, linearly interpolated spectral measurement (psd, coherence, or incoherence). If the step given is small, large files result, but if it is too big, the
result will be undersampled. Helpful when using the gmt 'surface' command to grid.
spline_interpolate_psd: Not very useful. Idea was to use a spline interpolate instead of a linear function. However, it creates a weird shape near the long period end and it does not really return a
significant improvement. Currently still in a debugging mode. Included to give the splining functions which may be useful for future implementations.
log10_linear_interpolate: Similar to interpolate_psd in that it implements a linear interpolant function. However, the output array is hardwired to go from the minimum to the maximum (explicitly
avoiding extrapolation) using a base 10 log scale.
compute_tri_incoherence: In the event of having three co-located and co-aligned sensors, this tool follows Sleeman et al., 2006 to compute a relative transfer function between two signals to find the
noise of the third. It repeats this for all three channels.
Programs are driven via a parameter file passed on the command line.
If run without arguments or with the -h option, the expected parameter file format is displayed.
Parameter file format for compute_psd:
sac_file_name
pole_zero_file_name
n_sec n_skip unit_flag
output_file_name
sac_file_name is the name of a file formatted for SAC (binary 158 array header, then the data)
pole_zero_file_name is the name of a pole zero response file relative to displacement (as output from rdseed)
n_sec is the number of seconds in each window. This determines maximum period to compute
n_skip is the offset between windows. Welch's average calls for several overlapping segments to make a reasonably robust estimate
unit_flag is 0 for displacement, 1 for velocity, or 2 for acceleration. Other values will quit the program.
output_file_name is the name of a file which will be produced. It has three columns as:
period mean standard_deviation
period mean standard_deviation
. . .
. . .
. . .
in the example/ directory there is a data file from the Berkeley station YBH channel LHZ during a teleseism. (BK.YBH.LHZ.SAC)
Also included is the pole zero response (BK.YBH.LHZ.pz)
and a parameter file (param.psd) which can be used as an example.
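Putting the pieces together, a hypothetical param.psd for these example files might look like the following (the values are illustrative only and the field order is assumed from the descriptions above; run compute_psd with -h to check the exact format):
BK.YBH.LHZ.SAC
BK.YBH.LHZ.pz
3600 900 2
BK.YBH.LHZ.psd
Here 3600-second windows offset by 900 seconds are used, with unit_flag 2 to convert the signal to acceleration.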
No standard deviation is computed in compute_tri_coherence yet, and it is unclear whether the standard deviations in compute_coherence and compute_incoherence are correct.
//FIXED on 8.29.2012 {
The compute_pole_zero_response.c program assumed all zeros at the origin when it removed a zero from the pole-zero structure. This has been updated to properly search for zeros located at the origin
which typically determine the number of differentiations.
// FIXED on 3.29.2011 {
Welch's averaging must occur prior to the amplitude being calculated. Skipping this step leads to a coherence defined as 1. Further, it was found that to replicate other results I needed to change
the cosine taper to a hann taper and correct with a standard value of sqrt(8/3) for the hann window.
Coherence computation only considers the real part of the cross spectra. The averaging of the cross spectra has to be done over multiple parts before the amplitude is taken. This requires a change
from the recursive averaging method currently implemented. Working on comparing output with GMT spectrum1d and Matlab mscohere functions.
//FIXED on 2.19.2011 {
Tests on Sun SPARC i86pc systems with gcc 3.4.3 cause segmentation faults during the ifft. A simple test of the fftw3 library on this system returns a nonsensical imaginary component, which may be
leading to this error. If you encounter this, upgrade your gcc and recompile fftw3 and this code.
Update on Sun SPARC tests (2.18.2011):
fixed the segmentation fault error, but imaginary component is still wonky on the SPARC system.
Also, if an empty file is returned and the code segmentation faults, remove the empty file and retry. The distributed files were built on a 64-bit system and might not be overwritable on a
32-bit system.
BE AWARE:
Currently configured to have a maximum of 25 poles and 25 zeros in the response. This can be adjusted in station_analysis_tools.h on lines 18 and 19
Currently configured to read a maximum of 17,200,000 points from a sac file. Should you need more, adjust NPTS_SAC_FILE_MAX in the station_analysis_tools.h file and recompile
Spectral measurements (psd, coherence, incoherence) use a running mean smoothing algorithm for cleaner presentation. The calls are found in library functions "calculate_psd", "calculate_coherence",
and "calculate_incoherence". A simple commenting out (// at the beginning of the line) will remove this.
Does not currently support miniseed, evalresp, or variable smoothing options. Libraries exist for miniseed and evalresp calls, but to reduce the dependence on external libraries, I chose to not code
those into this package.
No plotting package is included. I suggest using gmt to create plots, but you can also use gnuplot, octave, matlab, or any other plotting package which can read ascii data.
The src/ and include/ directories contain copies of the source and headers. The actual compiled source is in the main directory.
The SAC file format may have differences due to 32- vs 64-bit and endianness differences. Those are not accounted for in the read_sac routine included in the lib_station_analysis_tools.c code.
2 Use R | Working with R
The literature devoted to learning R is flourishing. The following books are an arbitrary but useful selection:
Some advanced aspects of coding are covered here. Details on the different languages of R are useful for creating packages. Environments are presented next, for a proper understanding of how the
objects called by code are searched for. Finally, the optimization of code performance is discussed in depth (loops, C++ code and parallelization) and illustrated by a case study.
2.1 The languages of R
R includes several programming languages. The most common is S3, but it is not the only one^18.
2.1.1 Base
The core of R is made of primitive functions and basic data structures, like the sum function and matrix data:
library("pryr")
otype(sum)
## [1] "base"
typeof(sum)
## [1] "builtin"
otype(matrix(1))
## [1] "base"
typeof(matrix(1))
## [1] "double"
The pryr package displays the language in which objects are defined, while the typeof() function displays their internal storage type:
• the sum() function belongs to the basic language of R and is a builtin function.
• the elements of the numerical matrix containing a single 1 are double precision reals, and the matrix itself is defined in the basic language.
Primitive functions are coded in C and are very fast. They are always available, whatever the packages loaded. Their use is therefore to be preferred.
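As a quick illustrative check (a small sketch, not from the original text), the is.primitive() function tells whether a function is one of these C-coded primitives:

```r
# sum() is a primitive, coded in C
is.primitive(sum)
## [1] TRUE
# mean() is a regular R function (an S3 generic)
is.primitive(mean)
## [1] FALSE
```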
2.1.2 S3
S3 is the most used language, often the only one known by R users.
It is an object-oriented language in which classes, i.e. the type of objects, are declarative.
MyFirstName <- "Eric"
class(MyFirstName) <- "FirstName"
The variable MyFirstName is here classed as FirstName by a simple declaration.
Unlike the way a classical object-oriented language works^19, S3 methods are related to functions, not objects.
# Default display
print(MyFirstName)
## [1] "Eric"
## attr(,"class")
## [1] "FirstName"
# Redefine the print method for the class
print.FirstName <- function(x) cat("The first name is", x)
# Modified display
print(MyFirstName)
## The first name is Eric
In this example, the print() method applied to the "FirstName" class is modified. In a classical object-oriented language, the method would be defined in the class FirstName. In R, methods are
defined from generic methods.
print is a generic method (“a generic”) declared in base.
otype(print)
## [1] "base"
Its code is just a UseMethod("print") declaration:
## function (x, ...)
## UseMethod("print")
## <bytecode: 0x11f1722a8>
## <environment: namespace:base>
There are many S3 methods for print:
## [1] "print.acf" "print.activeConcordance"
## [3] "print.AES" "print.all_vars"
## [5] "print.anova" "print.any_vars"
Each applies to a class. print.default is used as a last resort and relies on the type of the object, not its S3 class.
typeof(MyFirstName)
## [1] "character"
otype(MyFirstName)
## [1] "S3"
An object can belong to several classes, which allows a form of inheritance of methods. In a classical object oriented language, inheritance allows to define more precise classes (“FrenchFirstName”)
which inherit from more general classes (“FirstName”) and thus benefit from their methods without having to redefine them. In R, inheritance is simply declaring a vector of increasingly broad classes
for an object:
# Definition of classes by a vector
class(MyFirstName) <- c("FrenchFirstName", "FirstName")
# Alternative code, with inherits()
inherits(MyFirstName, what = "FrenchFirstName")
## [1] TRUE
inherits(MyFirstName, what = "FirstName")
## [1] TRUE
The generic looks for a method for each class, in the order of their declaration.
print.FrenchFirstName <- function(x) cat("French first name:",
## French first name: Eric
In summary, S3 is the common language of R. Almost all packages are written in S3. Generics are everywhere but go unnoticed, for example in packages:
## [1] autoplot plot
## see '?methods' for accessing help and source code
The .S3methods() function displays all available methods for a class, as opposed to methods() which displays all classes for which the method passed as an argument is defined.
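For example (a sketch; the exact output depends on the packages loaded):

```r
# All S3 methods available for objects of class "factor"
.S3methods(class = "factor")
```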
Many primitive functions in R are generic methods. To find out about them, use the help(InternalMethods) helper.
2.1.3 S4
S4 is an evolution of S3 that structures classes to get closer to a classical object oriented language:
• Classes must be explicitly defined, not simply declared.
• Attributes (i.e. variables describing objects), called slots, are explicitly declared.
• The constructor, i.e. the method that creates a new instance of a class (i.e. a variable containing an object of the class), is explicit.
Using the previous example, the S4 syntax is as follows:
# Definition of the class Person, with its slots
setClass("Person",
    slots = list(LastName = "character", FirstName = "character"))
# Construction of an instance
Me <- new("Person", LastName = "Marcon", FirstName = "Eric")
# Language
otype(Me)
## [1] "S4"
Methods always belong to functions. They are declared by the setMethod() function:
setMethod("print", signature = "Person", function(x, ...) {
    cat("The person is:", x@FirstName, x@LastName)
})
print(Me)
## The person is: Eric Marcon
The attributes are called by the syntax variable@slot.
In summary, S4 is more rigorous than S3. Some packages on CRAN (Matrix, sp, odbc…) and many on Bioconductor are written in S4, but the language is now clearly losing ground to S3, notably because
of the success of the tidyverse.
2.1.4 RC
RC was introduced in R 2.12 (2010) with the methods package.
Methods belong to classes, as in C++: they are declared in the class and called from the objects.
# Declaration of the class
PersonRC <- setRefClass("PersonRC",
fields = list(LastName = "character", FirstName = "character"),
methods = list(print = function() cat(FirstName, LastName)))
# Constructor
MeRC <- new("PersonRC", LastName = "Marcon", FirstName = "Eric")
# Language
otype(MeRC)
## [1] "RC"
# Call the print method
MeRC$print()
## Eric Marcon
RC is a confidential language, although it is the first “true” object-oriented language of R.
2.1.5 R6
R6^20 enhances RC but is not included in R: its package must be installed.
Attributes and methods can be public or private. An initialize() method is used as a constructor.
PersonR6 <- R6Class("PersonR6", public = list(LastName = "character",
FirstName = "character", initialize = function(LastName = NA,
FirstName = NA) {
self$LastName <- LastName
self$FirstName <- FirstName
}, print = function() cat(self$FirstName, self$LastName)))
MeR6 <- PersonR6$new(LastName = "Marcon", FirstName = "Eric")
MeR6$print()
## Eric Marcon
R6 allows rigorous object-oriented programming but is very little used. R6 performs much better than RC, but worse than S3^21.
The non-inclusion of R6 in R shows in pryr, which sees only an S3 object:
otype(MeR6)
## [1] "S3"
2.1.6 Tidyverse
The tidyverse is a set of coherent packages that have evolved the way R is programmed. The set of essential packages can be loaded by the tidyverse package, which has no other use:
library("tidyverse")
This is not a new language per se but rather an extension of S3, with deep technical modifications, notably the unconventional evaluation of expressions^22, which it is not essential to master in
practice.
Its principles are written in a manifesto^23. Its most visible contribution for the user is the sequencing of commands in a flow (code pipeline).
In standard programming, the sequence of functions is written by successive nesting, which makes it difficult to read, especially when arguments are needed:
# Base-2 logarithm of the mean of 100 random numbers in a
# uniform distribution
log(mean(runif(100)), base = 2)
## [1] -1.127903
In the tidyverse, the functions are chained together, which often better matches the programmer’s thinking about data processing:
# 100 random numbers in a uniform distribution
runif(100) %>%
# Mean
mean %>%
# Base-2 logarithm
log(base = 2)
## [1] -0.9772102
The pipe %>% is an operator that calls the next function by passing it as first argument the result of the previous function. Additional arguments are passed normally: for the readability of the
code, it is essential to name them. Most of the R functions can be used without difficulty in the tidyverse, even though they were not designed for this purpose: it is sufficient that their first
argument is the data to be processed.
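As a minimal sketch (assuming the magrittr pipe, which the tidyverse attaches, is available), a named argument passed down the pipeline:

```r
library(magrittr)  # provides %>%; also attached with the tidyverse
# Third quartile of 100 uniform random numbers:
# the vector arrives as the first argument, probs is passed by name
runif(100) %>%
    quantile(probs = 0.75)
```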
The pipeline allows only one value to be passed to the next function, which prohibits multidimensional functions, such as f(x,y). The preferred data structure is the tibble, which is an improved
dataframe: its print() method is more readable, and it corrects some unintuitive dataframe behavior, such as the automatic conversion of single-column dataframes to vectors. The columns of the
dataframe or tibble allow to pass as much data as needed.
Finally, data visualization is supported by ggplot2 which relies on a theoretically sound graph grammar (Wickham 2010). Schematically, a graph is constructed according to the following model:
ggplot(data = <DATA>) +
  <GEOM_FUNCTION>(
    mapping = aes(<MAPPINGS>),
    stat = <STAT>,
    position = <POSITION>
  ) +
  <COORDINATE_FUNCTION> +
  <FACET_FUNCTION>
• The data is necessarily a dataframe.
• The geometry is the type of graph chosen (points, lines, histograms or other).
• The aesthetics (function aes()) designates what is represented: it is the correspondence between the columns of the dataframe and the elements necessary for the geometry.
• Statistics is the treatment applied to the data before passing it to the geometry (often “identity”, i.e. no transformation but “boxplot” for a boxplot). The data can be transformed by a scale
function, such as scale_y_log10().
• The position is the location of the objects on the graph (often “identity”; “stack” for a stacked histogram, “jitter” to move the overlapping points slightly in a geom_point).
• The coordinates define the display of the graph (coord_fixed() to avoid distorting a map for example).
• Finally, facets offer the possibility to display several aspects of the same data by producing one graph per modality of a variable.
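As a sketch of the last point (assuming ggplot2 and its built-in diamonds data), faceting produces one panel per value of a variable:

```r
library(ggplot2)
# One price vs. carat panel per value of the cut variable
p <- ggplot(diamonds, aes(x = carat, y = price)) +
    geom_point(alpha = 0.1) +
    facet_wrap(~cut)
# print(p) draws the five panels
```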
The set formed by the pipeline and ggplot2 allows complex processing in a readable code. Figure 2.1 shows the result of the following code:
# Diamonds data provided by ggplot2
diamonds %>%
# Keep only diamonds larger than half a carat
filter(carat > 0.5) %>%
# Graph: price vs. weight
ggplot(aes(x = carat, y = price)) +
# Scatter plot
geom_point() +
# Logarithmic scale
scale_x_log10() +
scale_y_log10() +
# Linear regression
geom_smooth(method = "lm")
In this figure, two geometries (scatterplot and linear regression) share the same aesthetics (price vs. weight in carats) which is therefore declared upstream, in the ggplot() function.
The tidyverse is documented in detail in Wickham and Grolemund (2016) and ggplot2 in Wickham (2017).
2.2 Environments
R’s objects, data and functions, are named. Since R is modular, with the ability to add any number of packages to it, it is very likely that name conflicts will arise. To deal with them, R has a
rigorous system of name precedence: code runs in a defined environment, inheriting from parent environments.
2.2.1 Organization
R starts in an empty environment. Each loaded package creates a child environment to form a stack of environments, of which each new element is called a "child" of the previous one, which is its parent.
The console is in the global environment, the child of the last loaded package.
## [1] ".GlobalEnv" "package:R6"
## [3] "package:entropart" "package:lubridate"
## [5] "package:forcats" "package:stringr"
## [7] "package:dplyr" "package:purrr"
## [9] "package:readr" "package:tidyr"
## [11] "package:tibble" "package:ggplot2"
## [13] "package:tidyverse" "package:stats"
## [15] "package:graphics" "package:grDevices"
## [17] "package:utils" "package:datasets"
## [19] "package:methods" "Autoloads"
## [21] "package:base"
The code of a function called from the console runs in a child environment of the global environment:
environment()
## <environment: R_GlobalEnv>
# The function f displays its environment
f <- function() environment()
# Display the environment of the function
f()
## <environment: 0x138f77970>
# Parent environment of the function's environment
parent.env(f())
## <environment: R_GlobalEnv>
2.2.2 Search
The search for an object starts in the local environment. If it is not found there, it continues in the parent environment, then in the parent of the parent, until the environments are exhausted, which
generates an error indicating that the object was not found.
# Variable q defined in the global environment
q <- "GlobalEnv"
# Function defining q in its environment
qLocalFunction <- function() {
    q <- "Function"
    # The local variable is returned
    q
}
qLocalFunction()
## [1] "Function"
# Poorly written function using a variable it does not
# define
qGlobalEnv <- function() {
    # The global environment variable is returned
    q
}
qGlobalEnv()
## [1] "GlobalEnv"
# Delete this variable
rm(q)
# The function base::q is now found
qGlobalEnv()
## function (save = "default", status = 0, runLast = TRUE)
## .Internal(quit(save, status, runLast))
## <bytecode: 0x1396acc88>
## <environment: namespace:base>
The variable q is defined in the global environment. The function qLocalFunction defines its own variable q. Calling the function returns its local value because it is found first, in the function's own environment.
The qGlobalEnv function returns the q variable that it does not define locally. So it looks for it in its parent environment and finds the variable defined in the global environment. By removing the
variable from the global environment with rm(q), the qGlobalEnv() function scans the stack of environments until it finds an object named q in the base package, which is the function to exit R. It
could have found another object if a package containing a q object had been loaded.
To avoid this erratic behavior, a function should never use an object that it does not define in its own environment or receive as an argument.
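The robust pattern is to pass everything a function needs explicitly (an illustrative sketch, with a hypothetical qArgument function):

```r
# Good practice: the needed value is an explicit argument,
# not captured from an enclosing environment
qArgument <- function(q) {
    q
}
qArgument("GlobalEnv")
## [1] "GlobalEnv"
```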
2.2.3 Package namespaces
It is time to define precisely what packages make visible. Packages contain objects (functions and data) which they may or may not export. They are usually called by the library() function, which does two things:
• It loads the package into memory, allowing access to all its objects with the syntax package::object for exported objects and package:::object for non-exported ones.
• It then attaches the package, which places its environment on top of the stack.
It is possible to detach a package with the unloadNamespace() function to remove it from the environment stack. Example:
library("entropart")
# Is the namespace loaded?
isNamespaceLoaded("entropart")
## [1] TRUE
# stack of environments
## [1] ".GlobalEnv" "package:R6"
## [3] "package:entropart" "package:lubridate"
## [5] "package:forcats" "package:stringr"
## [7] "package:dplyr" "package:purrr"
## [9] "package:readr" "package:tidyr"
## [11] "package:tibble" "package:ggplot2"
## [13] "package:tidyverse" "package:stats"
## [15] "package:graphics" "package:grDevices"
## [17] "package:utils" "package:datasets"
## [19] "package:methods" "Autoloads"
## [21] "package:base"
# Diversity(), a function exported by entropart is found
Diversity(1, CheckArguments = FALSE)
## None
## 1
unloadNamespace("entropart")
isNamespaceLoaded("entropart")
## [1] FALSE
# Stack of environments, without entropart
## [1] ".GlobalEnv" "package:R6"
## [3] "package:lubridate" "package:forcats"
## [5] "package:stringr" "package:dplyr"
## [7] "package:purrr" "package:readr"
## [9] "package:tidyr" "package:tibble"
## [11] "package:ggplot2" "package:tidyverse"
## [13] "package:stats" "package:graphics"
## [15] "package:grDevices" "package:utils"
## [17] "package:datasets" "package:methods"
## [19] "Autoloads" "package:base"
## <simpleError in Diversity(1): could not find function "Diversity">
# but can be called with its full name
entropart::Diversity(1, CheckArguments = FALSE)
## None
## 1
Calling entropart::Diversity() loads the package (i.e., implicitly executes loadNamespace("entropart")) but does not attach it.
In practice, one should limit the number of attached packages to limit the risk of calling an unwanted function, homonymous to the desired function. In critical cases, the full name of the function
should be used: package::function().
A common issue occurs with the filter() function of dplyr, which is the namesake of the stats function. The stats package is usually loaded before dplyr, a package in the tidyverse. Thus,
stats::filter() must be called explicitly.
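For example, a centered moving average computed with the stats function, called by its full name so that no homonym can intercept it:

```r
# Centered 3-point moving average of 1..5; the ends cannot be computed
stats::filter(1:5, rep(1/3, 3))
## Time Series:
## Start = 1
## End = 5
## Frequency = 1
## [1] NA  2  3  4 NA
```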
However, the dplyr or tidyverse package (which attaches all the tidyverse packages) can be loaded systematically by creating a .Rprofile file at the root of the project containing the command:
library("tidyverse")
In this case, dplyr is loaded before stats so its function is inaccessible.
2.3 Measuring execution time
The execution time of long code can be measured very simply by the system.time command. For very short execution times, it is necessary to repeat the measurement: this is the purpose of the
microbenchmark package.
2.3.1 system.time
The function returns the execution time of the code.
# Mean absolute deviation of 1000 values in a uniform
# distribution, repeated 100 times
system.time(for (i in 1:100) mad(runif(1000)))
## user system elapsed
## 0.009 0.000 0.009
2.3.2 microbenchmark
The microbenchmark package is the most advanced.
The goal is to compare the speed of computing the square of a vector (or a number) by multiplying it by itself (\(x \times x\)) or by raising it to the power of 2 (\(x^2\)).
# Functions to test
f1 <- function(x) x * x
f2 <- function(x) x^2
f3 <- function(x) x^2.1
f4 <- function(x) x^3
# Initialization
X <- rnorm(10000)
# Test
(mb <- microbenchmark(f1(X), f2(X), f3(X), f4(X)))
## Unit: microseconds
## expr min lq mean median uq
## f1(X) 1.886 6.9495 20.07483 8.7125 26.9985
## f2(X) 4.182 8.9380 20.35650 9.9015 28.0030
## f3(X) 100.573 107.7070 139.65338 114.1645 127.2025
## f4(X) 133.578 139.9330 176.30656 147.5795 163.3030
## max neval
## 392.616 100
## 398.725 100
## 1455.992 100
## 873.792 100
The returned table contains the minimum, median, mean, max and first and third quartile times, as well as the number of repetitions. The median value is to be compared. The number of repetitions is
by default 100, to be modulated (argument times) according to the complexity of the calculation.
The test result, a microbenchmark object, is a raw table of execution times. The statistical analysis is done by the print and summary methods. To choose the columns to display, use the following syntax:
summary(mb)[, c("expr", "median")]
## expr median
## 1 f1(X) 8.7125
## 2 f2(X) 9.9015
## 3 f3(X) 114.1645
## 4 f4(X) 147.5795
To make calculations on these results, we must store them in a variable. To prevent the results from being displayed in the console, the simplest solution is to use the capture.output function by
assigning its result to a variable.
The previous test is displayed again.
## expr median
## 1 f1(X) 8.7125
## 2 f2(X) 9.9015
## 3 f3(X) 114.1645
## 4 f4(X) 147.5795
The computation time is about the same between \(x \times x\) and \(x^2\). The power calculation is much longer, especially if the power is not integer, because it requires a logarithm calculation.
The computation of the power 2 is therefore optimized by R to avoid the use of log.
Two graphical representations are available: the violins represent the probability density of the execution time; the boxplots are classical.
2.3.3 Profiling
profvis is RStudio’s profiling tool.
It tracks the execution time of each line of code and the memory used. The goal is to detect slow code portions that need to be improved.
The result is an HTML file containing the profiling report^24. It can be observed that the time to draw the random numbers is similar to that of the cosine calculation.
Read the complete documentation^25 on the RStudio website.
2.4 Loops
The most frequent case of long code to execute is loops: the same code is repeated a large number of times.
2.4.1 Vector functions
Most of R’s functions are vector functions: loops are processed internally, extremely fast. Therefore, you should think in terms of vectors rather than scalars.
# Draw two vectors of three random numbers between 0 and 1
x1 <- runif(3)
x2 <- runif(3)
# Square root of the three numbers in x1
sqrt(x1)
## [1] 0.9427738 0.8665204 0.4586981
# Respective sums of the three numbers of x1 and x2
x1 + x2
## [1] 1.6262539 1.6881583 0.9063973
We also have to write vector functions on their first argument. The function lnq of the package entropart returns the deformed logarithm of order \(q\) of a number \(x\).
# Code of the function
entropart::lnq
## function (x, q)
## {
## if (q == 1) {
## return(log(x))
## }
## else {
## Log <- (x^(1 - q) - 1)/(1 - q)
## Log[x < 0] <- NA
## return(Log)
## }
## }
## <bytecode: 0x13a23f778>
## <environment: namespace:entropart>
For a function to be vectorized, each line of its code must treat its first argument as a vector. Here, log(x) and x^ are vectorized (a function and an operator), and the condition [x < 0] also
returns a vector.
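A minimal sketch of the same pattern, with a hypothetical clip_negative() function in which every line accepts a vector first argument:

```r
# Replace negative values by zero, elementwise
clip_negative <- function(x) {
    y <- x            # x is treated as a vector throughout
    y[x < 0] <- 0     # vector indexing, as in lnq()
    y
}
clip_negative(c(-1, 0.5, 2))
## [1] 0.0 0.5 2.0
```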
2.4.2 lapply
Code that cannot be written as a vector function requires loops.
lapply() applies a function to each element of a list. There are several versions of this function:
• lapply() returns a list (and saves the time of rearranging them in an array).
• sapply() returns a dataframe by collapsing the lists (this is done by the simplify2array() function).
• vapply() is almost identical but requires that the data type of the result be provided.
# Draw 1000 values in a uniform distribution
x1 <- runif(1000)
# The square root can be calculated for the vector or each
# value
identical(sqrt(x1), sapply(x1, FUN = sqrt))
## [1] TRUE
## expr median
## 1 sqrt(x1) 1.5580
## 2 lapply(x1, FUN = sqrt) 138.7440
## 3 sapply(x1, FUN = sqrt) 167.0340
## 4 vapply(x1, FUN = sqrt, FUN.VALUE = 0) 136.4275
lapply() is much slower than a vector function. sapply() requires more time for simplify2array(), which must detect how to gather the results. Finally, vapply() saves the time of determining the data
type of the result and allows for faster computation with little effort.
2.4.3 For loops
Loops are handled by the for function. They have the reputation of being slow in R because the code inside the loop must be interpreted at each execution. This is no longer the case since version 3.5
of R: loops are compiled systematically before execution. The behavior of the just-in-time compiler is defined by the enableJIT function. The default level is 3: all functions are compiled, and loops
in the code are compiled too.
To evaluate the performance gain, the following code disables all automatic compilation, then compares the same loop compiled or not.
library("compiler")
# Disable the just-in-time compiler; the previous level is returned
enableJIT(level = 0)
## [1] 3
# Loop to calculate the square root of a vector
Loop <- function(x) {
# Initialization of the result vector, essential
Root <- vector("numeric", length = length(x))
# Loop
for (i in 1:length(x)) Root[i] <- sqrt(x[i])
return(Root)
}
# Compiled version
Loop2 <- cmpfun(Loop)
# Comparison
mb <- microbenchmark(Loop(x1), Loop2(x1))
(mbs <- summary(mb)[, c("expr", "median")])
## expr median
## 1 Loop(x1) 358.422
## 2 Loop2(x1) 42.517
# Automatic compilation by default since version 3.5
enableJIT(level = 3)
## [1] 0
The gain is considerable: a factor of about 8.
For loops are now much faster than vapply.
mb <- microbenchmark(vapply(x1, FUN = sqrt, 0), Loop(x1))
summary(mb)[, c("expr", "median")]
## expr median
## 1 vapply(x1, FUN = sqrt, 0) 136.0790
## 2 Loop(x1) 42.2505
Be careful, the performance test can be misleading:
# Preparing the result vector
Root <- vector("numeric", length = length(x1))
# Test
mb <- microbenchmark(vapply(x1, FUN = sqrt, 0),
for(i in 1:length(x1))
Root[i] <- sqrt(x1[i]))
summary(mb)[, c("expr", "median")]
## expr median
## 1 vapply(x1, FUN = sqrt, 0) 141.6755
## 2 for (i in 1:length(x1)) Root[i] <- sqrt(x1[i]) 1134.5725
In this code, the for loop is not compiled so it is much slower than in its normal use (in a function or at the top level of the code).
Long loops allow their progress to be tracked with a text bar, which is another advantage. The following function executes pauses of one tenth of a second for the time passed in parameter (in seconds).
MonitoredLoop <- function(duration = 1) {
    # Progress bar
    pgb <- txtProgressBar(min = 0, max = duration * 10)
    # Loop
    for (i in 1:(duration * 10)) {
        # Pause for a tenth of a second
        Sys.sleep(0.1)
        # Track progress
        setTxtProgressBar(pgb, i)
    }
}
MonitoredLoop()
## ============================================================
2.4.4 replicate
replicate() repeats an expression.
replicate(3, runif(1))
## [1] 0.9453453 0.5262818 0.7233425
This code is equivalent to runif(3), with performance similar to vapply: 50 to 100 times slower than a vector function.
mb <- microbenchmark(replicate(1000, runif(1)), runif(1000))
summary(mb)[, c("expr", "median")]
## expr median
## 1 replicate(1000, runif(1)) 675.147
## 2 runif(1000) 6.027
2.4.5 Vectorize
Vectorize() makes a function that is not vectorized appear vectorized, by wrapping it in loops (it relies on mapply()). It brings no performance gain: write the loops explicitly instead.
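As an illustration (a hypothetical sketch, not taken from the original text), Vectorize() can wrap a scalar-only function, but it is only a convenience:

```r
# A scalar-only function: `if` inspects a single value
SignLabel <- function(x) {
    if (x >= 0) "positive" else "negative"
}
# Vectorize() wraps it in an mapply() loop so it accepts vectors
SignLabelVec <- Vectorize(SignLabel)
SignLabelVec(c(-1, 2, 3))
## [1] "negative" "positive" "positive"
```

The wrapped version is no faster than writing the loop by hand; a truly vectorized version would use ifelse(x >= 0, "positive", "negative").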
2.4.6 Marginal statistics
apply() applies a function to the rows or columns of a two-dimensional object.
colSums() and the similar functions rowSums(), colMeans() and rowMeans() are optimized.
# Sum of the numeric columns of the diamonds dataset of ggplot2
# Loop identical to the action of apply(, 2, )
SumLoop <- function(Table) {
    Sum <- vector("numeric", length = ncol(Table))
    for (i in 1:ncol(Table)) Sum[i] <- sum(Table[, i])
    return(Sum)
}
mb <- microbenchmark(SumLoop(diamonds[-(2:4)]),
    apply(diamonds[-(2:4)], 2, sum),
    colSums(diamonds[-(2:4)]))
summary(mb)[, c("expr", "median")]
## expr median
## 1 SumLoop(diamonds[-(2:4)]) 1393.283
## 2 apply(diamonds[-(2:4)], 2, sum) 3319.954
## 3 colSums(diamonds[-(2:4)]) 1083.650
apply clarifies the code but is slower than the loop, which is only slightly slower than colSums.
2.5 C++ code
Integrating C++ code into R is greatly simplified by the Rcpp package but is still difficult to debug and therefore should be reserved for very simple code (to avoid errors) repeated a large number
of times (to be worth the effort). The preparation and verification of the data must be done in R, as well as the processing and presentation of the results.
The common practice is to include C++ code in a package, but running it outside a package is possible:
• C++ code can be included in a C++ document (file with extension .cpp): it is compiled by the sourceCpp() command, which creates the R functions to call the C++ code.
• In an RMarkdown document, Rcpp code snippets can be created to insert the C++ code: they are compiled and interfaced to R at the time of knitting.
The following example shows how to create a C++ function to calculate the double of a numerical vector.
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericVector timesTwo(NumericVector x) {
  return x * 2;
}
An R function with the same name as the C++ function is now available.
timesTwo(1:5)
## [1] 2 4 6 8 10
The performance is two orders of magnitude faster than the R code (see the case study, section 2.7).
2.6 Parallelizing R
When long computations can be split into independent tasks, the simultaneous (parallel) execution of these tasks reduces the total computation time to that of the longest task, to which is added the
cost of setting up the parallelization (creation of the tasks, recovery of the results…).
Read Josh Errickson’s excellent introduction^26 which details the issues and constraints of parallelization.
Two mechanisms are available for parallel code execution:
• fork: the running process is duplicated on multiple cores of the computing computer’s processor. This is the simplest method, but it does not work on Windows (a limitation of the operating system).
• Socket: a cluster is constituted, either physically (a set of computers running R is necessary) or logically (an instance of R on each core of the computer used). The members of the cluster
communicate through the network (the internal network of the computer is used in a logical cluster).
Several R packages implement these mechanisms.
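A minimal sketch of the socket mechanism with the parallel package (an illustration, not from the original text):

```r
library("parallel")
# Logical cluster: one R instance per core, communicating over the
# local network; works on all operating systems, including Windows
cl <- makeCluster(2)
parSapply(cl, 1:4, sqrt)
## [1] 1.000000 1.414214 1.732051 2.000000
stopCluster(cl)
```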
2.6.1 mclapply (fork)
The mclapply function of the parallel package has the same syntax as lapply but parallelizes the execution of loops. On Windows, it has no effect since the system does not allow fork: it simply calls
lapply. However, a workaround exists to emulate mclapply on Windows by calling parLapply, which uses a cluster.
## mclapply.hack.R
## Nathan VanHoudnos
## nathanvan AT northwestern FULL STOP edu
## July 14, 2014
## A script to implement a hackish version of
## parallel:mclapply() on Windows machines.
## On Linux or Mac, the script has no effect
## beyond loading the parallel library.
## Loading required package: parallel
## Define the hack
# mc.cores argument added: Eric Marcon
mclapply.hack <- function(..., mc.cores=detectCores()) {
## Create a cluster
size.of.list <- length(list(...)[[1]])
cl <- makeCluster( min(size.of.list, mc.cores) )
## Find out the names of the loaded packages
loaded.package.names <- c(
## Base packages
sessionInfo()$basePkgs,
## Additional packages
names( sessionInfo()$otherPkgs ))
tryCatch( {
## Copy over all of the objects within scope to
## all clusters.
this.env <- environment()
while( identical( this.env, globalenv() ) == FALSE ) {
clusterExport(cl,
ls(all.names=TRUE, env=this.env),
envir=this.env)
this.env <- parent.env(environment())
}
clusterExport(cl,
ls(all.names=TRUE, env=globalenv()),
envir=globalenv())
## Load the libraries on all the clusters
## N.B. length(cl) returns the number of clusters
parLapply( cl, 1:length(cl), function(xx){
lapply(loaded.package.names, function(yy) {
require(yy , character.only=TRUE)})
})
## Run the lapply in parallel
return( parLapply( cl, ...) )
}, finally = {
## Stop the cluster
stopCluster(cl)
})
}

## Warn the user if they are using Windows
if( Sys.info()[['sysname']] == 'Windows' ){
message(paste(
" *** Microsoft Windows detected ***\n",
" \n",
" For technical reasons, the MS Windows version of mclapply()\n",
" is implemented as a serial function instead of a parallel\n",
" function.",
" \n\n",
" As a quick hack, we replace this serial version of mclapply()\n",
" with a wrapper to parLapply() for this R session. Please see\n\n",
" http://www.stat.cmu.edu/~nmv/2014/07/14/
implementing-mclapply-on-windows \n\n",
" for details.\n\n"))
}
## If the OS is Windows, set mclapply to the
## the hackish version. Otherwise, leave the
## definition alone.
mclapply <- switch( Sys.info()[['sysname']],
Windows = {mclapply.hack},
Linux = {mclapply},
Darwin = {mclapply})
## end mclapply.hack.R
The following code tests the parallelization of a function that returns its argument unchanged after a quarter-second pause. This document is knitted with 3 cores; all but one are used, so as
not to saturate the system.
f <- function(x, time = 0.25) {
    Sys.sleep(time)
    return(x)
}
# Leave one core out for the system
nbCores <- detectCores() - 1
# Serial : theoretical time = nbCores/4 seconds
(tserie <- system.time(lapply(1:nbCores, f)))
## user system elapsed
## 0.003 0.000 0.773
# Parallel : theoretical time = 1/4 second
(tparallele <- system.time(mclapply(1:nbCores, f, mc.cores = nbCores)))
## user system elapsed
## 0.002 0.022 0.394
Setting up parallelization has a cost of about 0.14 seconds here. The execution time is much longer in parallel on Windows because setting up the cluster takes much more time than parallelization
saves. Parallelization becomes worthwhile for longer tasks, such as a one-second pause.
## user system elapsed
## 0.000 0.000 2.114
## user system elapsed
## 0.001 0.013 1.147
The relative overhead of parallel execution is now smaller: the fixed costs become less than the savings as the duration of each task increases.
If the number of parallel tasks exceeds the number of cores used, performance collapses because the additional task must be executed after the first ones.
## user system elapsed
## 0.002 0.015 1.166
## user system elapsed
## 0.001 0.015 2.100
The time then remains stable until the number of cores is doubled. Figure 2.2 shows the evolution of the computation time according to the number of tasks.
Tasks <- 1:(2 * nbCores+1)
Time <- sapply(Tasks, function(nbTasks) {
system.time(mclapply(1:nbTasks, f, time=1, mc.cores=nbCores))
})
tibble(Tasks, Time=Time["elapsed", ]) %>%
ggplot +
geom_line(aes(x = Tasks, y = Time)) +
geom_vline(xintercept = nbCores, col = "red", lty = 2) +
geom_vline(xintercept = 2 * nbCores, col = "red", lty = 2)
The theoretical shape of this curve is as follows:
• For a task, the time is equal to one second plus the parallelization setup time.
• The time should remain stable until the number of cores used.
• When all the cores are used (red dotted line), the time should increase by one second and then remain stable until the next limit.
In practice, the computation time is determined by other factors that are difficult to predict. The best practice is to adapt the number of tasks to the number of cores, otherwise performance will be degraded.
2.6.2 parLapply (socket)
parLapply requires creating a cluster, exporting the useful variables to each node, loading the necessary packages on each node, executing the code and finally stopping the cluster. The code for each step can
be found in the mclapply.hack function above.
For everyday use, mclapply is faster, except on Windows, and simpler (including on Windows thanks to the above workaround).
2.6.3 foreach
2.6.3.1 How it works
The foreach package allows advanced use of parallelization. Read its vignettes.
# Manual
vignette("foreach", "foreach")
# Nested loops
vignette("nested", "foreach")
Regardless of parallelization, foreach redefines for loops.
## Attaching package: 'foreach'
## The following objects are masked from 'package:purrr':
## accumulate, when
foreach(i = 1:3) %do% i
## [[1]]
## [1] 1
## [[2]]
## [1] 2
## [[3]]
## [1] 3
The foreach function returns a list containing the results of each loop. The elements of the list can be combined by any function, such as c.
foreach(i = 1:3, .combine = "c") %do% i
## [1] 1 2 3
The foreach function is capable of using iterators, that is, functions that pass to the loop only the data it needs without loading the rest into memory. Here, the icount iterator passes the values
1, 2 and 3 individually, without loading the 1:3 vector into memory.
foreach(i = icount(3), .combine = "c") %do% i
## [1] 1 2 3
It is therefore very useful when each object of the loop uses a large amount of memory.
2.6.3.2 Parallelization
Replacing the %do% operator with %dopar% parallelizes loops, provided that an adapter, i.e. an intermediate package between foreach and a package implementing parallelization, is loaded. doParallel
is an adapter for using the parallel package that comes with R.
library("doParallel")
registerDoParallel(cores = nbCores)
# Serial execution
system.time(foreach(i = 1:nbCores) %do% f(i))
## user system elapsed
## 0.003 0.001 0.709
# Parallel execution
system.time(foreach(i = 1:nbCores) %dopar% f(i))
## user system elapsed
## 0.003 0.021 0.393
The fixed cost of parallelization is low.
2.6.4 future
The future package is used to abstract the code of the parallelization implementation. It is at the centre of an ecosystem of packages that facilitate its use^27.
The parallelization strategy used is declared by the plan() function. The default strategy is sequential, i.e. single-task. The multicore and multisession strategies are based respectively on the
fork and socket techniques seen above. Other strategies are available for using physical clusters (several computers prepared to run R together): the future documentation details how to do this.
Here we will use the multisession strategy, which works on the local computer, whatever its operating system.
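The declaration itself is a single call; for instance (a sketch, not reproduced from the original code):

```r
library("future")
# One R session per worker; portable across operating systems
plan(multisession, workers = availableCores() - 1)
```

plan(sequential) restores single-task execution.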
The future.apply package allows all *apply() and replicate() loops to be effortlessly parallelized by prefixing their names with future_.
library("future.apply")
system.time(future_sapply(1:nbCores, f))
## user system elapsed
## 0.024 0.001 0.386
foreach loops can be parallelized with the doFuture package by simply replacing %dopar% with %dofuture%.
## user system elapsed
## 0.038 0.003 0.493
The strategy is reset to sequential at the end.
2.7 Case study
This case study tests the different techniques seen above to solve a concrete problem. The objective is to compute the average distance between two points of a random set of 1000 points in a square
window of side 1.
Its expectation is computable^28. It is equal to \(\frac{2+\sqrt{2}+5\ln{(1+\sqrt{2})}}{15} \approx 0.5214\).
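As a quick sanity check (not in the original text), the constant can be evaluated directly:

```r
(2 + sqrt(2) + 5 * log(1 + sqrt(2))) / 15
## [1] 0.5214054
```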
2.7.1 Creation of the data
The point set is created with the spatstat package.
library("spatstat")
NbPoints <- 1000
X <- runifpoint(NbPoints)
2.7.2 Spatstat
The pairdist() function of spatstat returns the matrix of distances between points. The average distance is calculated by dividing the sum by the number of pairs of distinct points.
(d <- sum(pairdist(X))/NbPoints/(NbPoints - 1))
## [1] 0.5154879
The function is fast because it is coded in C language in the spatstat package for the core of its calculations.
2.7.3 apply
The distance can be calculated by two nested sapply().
fsapply1 <- function() {
distances <- sapply(1:NbPoints, function(i) sapply(1:NbPoints,
function(j) sqrt((X$x[i] - X$x[j])^2 + (X$y[i] - X$y[j])^2)))
return(sum(distances)/NbPoints/(NbPoints - 1))
}
system.time(d <- fsapply1())
## user system elapsed
## 2.400 0.011 2.412
## [1] 0.5154879
Some time can be saved by replacing sapply with vapply: the format of the results does not have to be determined by the function. The gain is negligible on a long computation like this one but
important for short computations.
fsapply2 <- function() {
distances <- vapply(1:NbPoints, function(i) vapply(1:NbPoints,
function(j) sqrt((X$x[i] - X$x[j])^2 + (X$y[i] - X$y[j])^2),
0), 1:1000 + 0)
return(sum(distances)/NbPoints/(NbPoints - 1))
}
system.time(d <- fsapply2())
## user system elapsed
## 2.216 0.005 2.222
## [1] 0.5154879
The output format is not always obvious to write:
• it must respect the size of the data: a vector of size 1000 for the outer loop, a scalar for the inner loop.
• it must respect the type: 0L for an integer, 0 for a real number. In the outer loop, adding 0 to the vector of integers 1:1000 turns it into a vector of real numbers.
A more significant improvement is to compute the square roots only at the end of the loop, to take advantage of the vectorization of the function.
fsapply3 <- function() {
distances <- vapply(1:NbPoints, function(i) vapply(1:NbPoints,
function(j) (X$x[i] - X$x[j])^2 + (X$y[i] - X$y[j])^2,
0), 1:1000 + 0)
return(sum(sqrt(distances))/NbPoints/(NbPoints - 1))
}
system.time(d <- fsapply3())
## user system elapsed
## 2.204 0.004 2.210
## [1] 0.5154879
The computations are performed twice (distance between points \(i\) and \(j\), but also between points \(j\) and \(i\)): a test on the indices allows to divide the time almost by 2 (not quite because
the loops without computation, which return \(0\), take time).
fsapply4 <- function() {
distances <- vapply(1:NbPoints, function(i) {
vapply(1:NbPoints, function(j) {
if (j > i) {
(X$x[i] - X$x[j])^2 + (X$y[i] - X$y[j])^2
} else {
0
}
}, 0)
}, 1:1000 + 0)
return(sum(sqrt(distances))/NbPoints/(NbPoints - 1) * 2)
}
system.time(d <- fsapply4())
## user system elapsed
## 1.280 0.003 1.287
## [1] 0.5154879
In parallel, the computation time is not improved on Windows because the individual tasks are too short. On MacOS or Linux, the computation is accelerated.
fsapply5 <- function() {
distances <- mclapply(1:NbPoints, function(i) {
vapply(1:NbPoints, function(j) {
if (j > i) {
(X$x[i] - X$x[j])^2 + (X$y[i] - X$y[j])^2
} else {
0
}
}, 0)
}, mc.cores = nbCores)
return(sum(sqrt(simplify2array(distances)))/NbPoints/(NbPoints -
1) * 2)
}
system.time(d <- fsapply5())
## user system elapsed
## 1.350 0.159 0.783
## [1] 0.5154879
2.7.4 future.apply
The fsapply4() function optimised above can be parallelized directly by prefixing the vapply function with future_. Only the main loop is parallelized: nesting future_vapply() calls would collapse performance.
# Socket strategy on all available cores except 1
plan(multisession, workers = availableCores() - 1)
future_fsapply4_ <- function() {
distances <- future_vapply(1:NbPoints, function(i) {
vapply(1:NbPoints, function(j) {
if (j > i) {
(X$x[i] - X$x[j])^2 + (X$y[i] - X$y[j])^2
} else {
0
}
}, 0)
}, 1:1000 + 0)
return(sum(sqrt(distances))/NbPoints/(NbPoints - 1) * 2)
}
system.time(d <- future_fsapply4_())
## user system elapsed
## 0.050 0.007 0.968
## [1] 0.5154879
2.7.5 for loop
A for loop is faster and consumes less memory because it does not store the distance matrix.
distance <- 0
ffor <- function() {
for (i in 1:(NbPoints - 1)) {
for (j in (i + 1):NbPoints) {
distance <- distance + sqrt((X$x[i] - X$x[j])^2 +
(X$y[i] - X$y[j])^2)
}
}
return(distance/NbPoints/(NbPoints - 1) * 2)
}
# Calculation time, stored
(for_time <- system.time(d <- ffor()))
## user system elapsed
## 0.795 0.003 0.799
## [1] 0.5154879
This is the simplest and most efficient way to write this code with core R and no parallelization.
2.7.6 foreach loop
Parallelization is achieved by running plain for loops inside a foreach loop, which is quite efficient. However, distances are calculated twice.
registerDoParallel(cores = detectCores())
fforeach3 <- function(Y) {
distances <- foreach(
i = icount(Y$n),
.combine = '+') %dopar% {
distance <- 0
for (j in 1:Y$n) {
distance <- distance +
sqrt((Y$x[i] - Y$x[j])^2 + (Y$y[i] - Y$y[j])^2)
}
distance
}
return(distances / Y$n / (Y$n - 1))
}
system.time(d <- fforeach3(X))
## user system elapsed
## 1.910 0.226 0.775
## [1] 0.5154879
It is possible to nest two foreach loops, but they are extremely slow compared with a simple loop. The test is run here with 10 times fewer points, so 100 times fewer distances to calculate.
NbPointsReduit <- 100
Y <- runifpoint(NbPointsReduit)
fforeach1 <- function(Y) {
distances <- foreach(i = 1:NbPointsReduit, .combine = "cbind") %:%
foreach(j = 1:NbPointsReduit, .combine = "c") %do% {
if (j > i) {
(Y$x[i] - Y$x[j])^2 + (Y$y[i] - Y$y[j])^2
} else {
0
}
}
return(sum(sqrt(distances))/NbPointsReduit/(NbPointsReduit -
1) * 2)
}
system.time(d <- fforeach1(Y))
## user system elapsed
## 0.769 0.005 0.774
## [1] 0.5304197
Nested foreach loops should be reserved for very long tasks (several seconds at least) to compensate for the fixed costs of setting them up.
2.7.7 Rcpp
The C++ function to calculate distances is the following.
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
double MeanDistance(NumericVector x, NumericVector y) {
double distance=0;
double dx, dy;
for (int i=0; i < (x.length()-1); i++) {
for (int j=i+1; j < x.length(); j++) {
// Calculate distance
dx = x[i]-x[j];
dy = y[i]-y[j];
distance += sqrt(dx*dx + dy*dy);
}
}
return distance/(double)(x.length()/2*(x.length()-1));
}
It is called in R very simply. The computation time is very short.
MeanDistance(X$x, X$y)
## [1] 0.5154879
2.7.8 RcppParallel
RcppParallel makes it possible to interface parallelized C++ code, at the cost of a more complex syntax than Rcpp. Documentation is available^29.
The C++ function exported to R does not perform the computations but only organizes the parallel execution of another, non-exported, function of type Worker.
Two (C++) parallelization functions are available for two types of tasks:
• parallelReduce to accumulate a value, used here to sum distances.
• parallelFor to fill a result matrix.
The syntax of the Worker is a bit tricky but simple enough to adapt: the constructors initialize the C variables from the values passed by R and declare the parallelization.
// [[Rcpp::depends(RcppParallel)]]
#include <Rcpp.h>
#include <RcppParallel.h>
using namespace Rcpp;
using namespace RcppParallel;
// Working function, not exported
struct TotalDistanceWrkr : public Worker {
// source vectors
const RVector<double> Rx;
const RVector<double> Ry;
// accumulated value
double distance;
// constructors
TotalDistanceWrkr(const NumericVector x, const NumericVector y) :
Rx(x), Ry(y), distance(0) {}
TotalDistanceWrkr(const TotalDistanceWrkr& totalDistanceWrkr, Split) :
Rx(totalDistanceWrkr.Rx), Ry(totalDistanceWrkr.Ry), distance(0) {}
// count neighbors
void operator()(std::size_t begin, std::size_t end) {
double dx, dy;
unsigned int Npoints = Rx.length();
for (unsigned int i = begin; i < end; i++) {
for (unsigned int j=i+1; j < Npoints; j++) {
// Calculate squared distance
dx = Rx[i]-Rx[j];
dy = Ry[i]-Ry[j];
distance += sqrt(dx*dx + dy*dy);
}
}
}
// join my value with that of another Sum
void join(const TotalDistanceWrkr& rhs) {
distance += rhs.distance;
}
};
// Exported function
// [[Rcpp::export]]
double TotalDistance(NumericVector x, NumericVector y) {
// Declare TotalDistanceWrkr instance
TotalDistanceWrkr totalDistanceWrkr(x, y);
// call parallel_reduce to start the work
parallelReduce(0, x.length(), totalDistanceWrkr);
// return the result
return totalDistanceWrkr.distance;
}
The usage in R is identical to that of C++ functions interfaced by Rcpp.
(mb <- microbenchmark(d <- TotalDistance(X$x, X$y)/NbPoints/(NbPoints -
1) * 2))
## Unit: microseconds
## expr
## d <- TotalDistance(X$x, X$y)/NbPoints/(NbPoints - 1) * 2
## min lq mean median uq max neval
## 219.596 231.445 267.7956 235.381 244.032 2477.425 100
## [1] 0.5154879
The setup time for parallel tasks is much longer than the serial computation time.
If the number of points is multiplied by 50, the serial computation time is multiplied by about 2500.
## user system elapsed
## 4.393 0.004 4.400
In parallel, the time increases little: parallelization becomes really efficient. This time is to be compared to that of the reference for loop, multiplied by 2500, that is 1997 seconds.
system.time(d <- TotalDistance(X$x, X$y)/NbPoints/(NbPoints -
1) * 2)
## user system elapsed
## 1.279 0.003 0.429
2.7.9 Conclusions on code speed optimization
From this case study, several lessons can be learned:
• A for loop is a good basis for repetitive calculations: faster than vapply(), simple to read and write.
• foreach loops are extremely effective for parallelizing for loops.
• Optimized functions may exist in R packages for common tasks (here, the pairdist() function of spatstat is two orders of magnitude faster than the for loop).
• The future.apply package makes it very easy to parallelize code that has already been written with *apply() functions, regardless of the hardware used.
• The use of C++ code speeds up the calculations significantly, by three orders of magnitude here.
• Parallelization of the C++ code further divides the computation time, by about half the number of cores for long computations.
Beyond this example, optimizing computation time in R can be complicated when it involves parallelization and writing C++ code. The effort must therefore be concentrated on the really long computations,
while readability must remain the priority for everyday code. C++ code is quite easy to integrate with Rcpp, and parallelizing it is not very expensive with RcppParallel.
The use of for loops is no longer penalized since version 3.5 of R. Writing vector code with sapply() is still justified by its readability.
The choice of parallelizing the code must be evaluated according to the execution time of each parallelizable task. If it exceeds a few seconds, parallelization is justified.
2.8 Workflow
The targets package allows you to manage a workflow, i.e. to break down the code into elementary tasks called targets that follow each other, the result of which is stored in a variable, itself saved
on disk. In case of a change in the code or in the data used, only the targets concerned are reevaluated.
The operation of the flow is similar to that of a cache, but does not depend on the computer on which it runs. It is also possible to integrate the flow into a document project (see section 4.9), and
even to use a computing cluster to process the tasks in parallel.
2.8.1 How it works
The documentation^30 of targets is detailed and provides a worked example to learn how to use the package^31. It is not repeated here, but the principles of how the flow works are explained.
The workflow is unique for a given project. It is coded in the _targets.R file at the root of the project. It contains:
• Global commands, such as loading packages.
• A list of targets, which describe the code to be executed and the variable that stores their result.
The workflow is run by the tar_make() function, which updates the targets that need it. Its content is placed in the _targets folder. Stored variables are read by tar_read().
If the project requires long computations, targets can be used to run only those that are necessary. If the project is shared or placed under source control (chapter 3), the result of the
computations is also integrated. Finally, if the project is a document (chapter 4), its formatting is completely independent of the calculation of its content, for possibly considerable time saving.
2.8.2 Minimal example
The following example is even simpler than the one in the targets manual, which will allow you to go further. It takes up the previous case study: a set of points is generated and the average
distance between the points is calculated. A map of the points is also drawn. Each of these three operations is a target in the vocabulary of targets.
The workflow file is therefore organized as follows.
The global commands consist in loading the targets package itself and then listing the packages needed for the code. The execution of the workflow takes place in a new instance of R.
The targets are then listed. Each one is declared by the tar_target() function whose first argument is the name of the target, which will be the name of the variable that will receive the result. The
second argument is the code that produces the result. Targets are very simple here and can be written in a single command. When this is not the case, each target can be written as a function, stored
in a separate code file loaded by the source() function at the beginning of the workflow file.
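A minimal _targets.R consistent with this description might look as follows (the target names match the case study; the exact bodies are assumptions, not the original file):

```r
library("targets")
# Packages needed by the targets
tar_option_set(packages = c("spatstat"))
list(
    # Number of points
    tar_target(NbPoints, 1000),
    # The point set
    tar_target(X, runifpoint(NbPoints)),
    # Average distance between points
    tar_target(d, sum(pairdist(X)) / NbPoints / (NbPoints - 1)),
    # Map of the points
    tar_target(map, plot(X))
)
```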
The tar_visnetwork() command displays the network of targets and flags those that are outdated.
The order of declaration of the targets in the list is not important: they are ordered automatically.
The workflow is run by tar_make().
## ▶ dispatched target NbPoints
## ● completed target NbPoints [0.644 seconds, 53 bytes]
## ▶ dispatched target X
## ● completed target X [0.002 seconds, 11.058 kilobytes]
## ▶ dispatched target d
## ● completed target d [0.007 seconds, 55 bytes]
## ▶ dispatched target map
## ● completed target map [0.013 seconds, 187.39 kilobytes]
## ▶ ended pipeline [0.781 seconds]
The workflow is now up to date and tar_make() does not recompute anything.
## ✔ skipped target NbPoints
## ✔ skipped target X
## ✔ skipped target d
## ✔ skipped target map
## ✔ skipped pipeline [0.043 seconds]
The results are read by tar_read().
## [1] 0.5165293
2.8.3 Practical interest
In this example, targets complicates writing the code and tar_make() is much slower than simply executing the code it processes because it has to check if the targets are up to date. In a real
project that requires long computations, processing the status of the targets is negligible and the time saved by just evaluating the necessary targets is considerable. The definition of targets
remains a constraint, but forces the user to structure their project rigorously.
Human Nature has Always Been at Odds with the Scientific Method
Consensus science dominates much of science today. This has its benefits, and it also has its drawbacks.
It is Wikipedia’s policy to cover only consensus science. This is an understandable policy for this type of forum, but it does limit Wikipedia’s impact on the advancement of science. Consensus
science already dominates science today, and this is inherently at odds with the scientific method.
The greatest leaps forward in science are typically associated with paradigm shifts in thinking, and paradigm shifts in science are often very slow to take shape. It is not unusual for this process
to take one or more generations. This has been attributed by numerous great thinkers of the past to human nature, not to the scientific method, as discussed at length on this page 7. These include
David Hume, George Bernard Shaw, Albert Einstein, Max Planck, Thomas Kuhn, Ernest Rutherford, and many others (cf. page 7.1).
Max Planck (1858-1947) captured this fact of life with the following poignant quote:
“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar
with it.”
Included below are editorial additions that should someday appear in Wikipedia articles on the following topics. But today, since these represent a paradigm shift underway, they do not qualify for
inclusion in Wikipedia. The paradigm shift referred to is that of Non-Population Probability and Statistics, introduced in [B2] two generations ago.
Proposed future addition to https://en.wikipedia.org/wiki/Empirical_probability —
Non-Population Empirical Probability for Time Series Analysis
Probabilistic modeling is frequently used in engineering and science for the purpose of time series analysis and statistical inference from time series data, even when there are no populations of
interest. This is accomplished by taking the relative frequency of occurrence of an event to mean the fraction of time this event occurs over some time-series record of data, not over some set of
repetitions of an experiment producing random samples of time series from a sample space representing possible outcomes from selecting members of a population.
For example, the event may be that the time series of numerical-valued data x(t) defined on some interval of time, t, evaluated at two times within this time interval, separated by the amount t[2] –
t[1] (assuming t[2] > t[1]), takes on values less than or equal to two specified values y[2] and y[1], respectively. This event can be associated with the event indicator that is a function of time
which equals 1 when the event occurs and equals zero otherwise. The empirical probability of occurrence of this event is defined to be the time average of this indicator function.
It follows that this time average equals the fraction of time that this event occurs throughout the time interval over which the data exists. This quantity can be shown to be a valid bivariate
cumulative probability distribution function (CDF) evaluated at y[2] and y[1].
The collection of the multivariate CDFs for all sets of N distinct time points within the data’s time interval for all natural numbers N is a complete empirical probabilistic model for this
time-series data.
If a mathematical model of the time function is relatively measurable on the entire real line, as defined by Kac and Steinhaus [5], then the limit of these CDFs as the averaging interval approaches
the entire real line is an idealized empirical probabilistic model, and the finite-time probabilities can be considered to be estimates of these idealized probabilities. With this setup, this article
applies not only to classical population-based empirical probabilities but also to non-population fraction-of-time probabilities. The idealized fraction-of-time probabilistic model is strongly
analogous to the stationary stochastic process model, and this has been generalized to cyclostationary stochastic processes [6]-[9].
5. J. Leskow and A. Napolitano, "Foundations of the functional approach for signal analysis," Signal Processing, vol. 86, no. 12, pp. 3796-3825, December 2006. Received the 2006 European Association for Signal Processing (EURASIP) Best Paper Award.
6. W. A. Gardner, Statistical Spectral Analysis: A Nonprobabilistic Theory (book), Englewood Cliffs, NJ: Prentice-Hall, 565 pages, 1987. Author's comments: SIAM Review, vol. 36, no. 3, pp. 528-530.
7. A. Napolitano and W. A. Gardner, "Fraction-of-time probability: Advancing beyond the need for stationarity and ergodicity assumptions," IEEE Access, vol. 10, pp. 34591-34612, 2022. doi: 10.1109/
8. W. A. Gardner, "Transitioning away from stochastic processes," Journal of Sound and Vibration, vol. 565, 117871, 24 October 2023.
9. https://cyclostationarity.com
Proposed future addition to https://en.wikipedia.org/wiki/Stochastic_process —
The Non-Population Alternative to Stochastic Processes
A rigorous measure theory basis for Non-Population Statistics of time series data was introduced by Kac and Steinhaus, in the early 1930s, and the central mathematical entity was termed the Relative
Measure. Contemporaneously, Kolmogorov introduced his Probability Measure theory of stochastic processes. The former applies to individual functions of time, whereas the latter applies to populations
of functions of time. Both theories give rise to methodologies for statistical inference based on observed time series of data and associated probabilistic analysis, and these methodologies and
associated theories are strongly analogous and in many ways equivalent to each other. In fact, for ergodic stationary processes and cyclo-ergodic cyclostationary processes, these theories are
essentially operationally identical. The precise differences are described below.
The theory based on relative measurability leads to what has been termed Fraction-of-Time Probability for individual time series. This theory has roots in empirical statistical analysis of
measurements developed within physics, whereas the alternative probability measure theory has its roots in developments of the mathematics of probability associated with populations which, for
stochastic processes, are populations of time functions (or functions of other variables, most notably, spatial coordinates). The Relative Measure results in probabilities being derived essentially
empirically from the observed/measured data (mathematically idealized to extend over the entire real line), whereas the Probability Measure is defined axiomatically in terms of typically infinite
populations and axioms that do not correspond to any form of empiricism.
In terms of promotion and resultant popularity, the Kolmogorov theory is dominant, but the Kac-Steinhaus theory is similarly viable in a restricted domain of application and is less abstract; that
is, it relates more directly to the practice of time series analysis of data. This is so for stochastic processes representing data that is appropriately modeled as ergodic or cyclo-ergodic
stationary or cyclostationary processes. The Kac-Steinhaus theory does not provide an analog for non-ergodic and generally non-stationary processes.
The tight link between these two theories in this restricted domain is a result of the fact that the empirical probabilities of interest in both these theories are derived from time averages on
individual time functions. Because of this, the abstractions of ergodic and cyclo-ergodic stochastic processes are superfluous. There is no need to consider populations and models of ensembles of
time functions, or mathematical measures defined over abstract sample spaces of functions.
The dominance of stochastic processes, in the case of ergodic and cyclo-ergodic stationary and cyclostationary processes is a quirk of history in the field of mathematics originating in statistical
mechanics. The resultant dominance within engineering and fields of science is not a consequence of superior applicability or superior amenability to practically relevant conceptualization. Rather it
is a result of the early (1950s -1970s) mathematical developments of the measure theory of stochastic processes and the lack back then of comparable developments of the measure theory of individual
functions [314]. Norbert Wiener, independently of Kac and Steinhaus, did get started with comparable development in the 1930s, with his important contribution of Generalized Harmonic Analysis but,
apparently unaware of the work of Kac and Steinhaus, his work evolved toward Kolmogorov’s more abstract theory. The basis for preferring the latecomer (cf. [315]) Fraction-of-Time Probability theory
in applications to engineering and science has been addressed in considerable detail in recent publications by Gardner and Napolitano [316], [317], and a comprehensive tutorial treatment that is not
only written for students of the subject but also is an accurate scholarly historical account which contains some autobiographical material, is available at the website [318] (available in eBook
form) in which essentially all sources are accessible for free.
The focus since the middle of the last century on the theory of stochastic processes for populations of functions, largely to the exclusion of the Fraction-of-Time Probability theory for individual
functions, is understandable in light of the large part of the theory that lies outside of the aforementioned restricted domain of applications, that is, the part of the theory that addresses
non-ergodic and non-cyclo-ergodic stationary and cyclostationary processes and, more generally, non-ergodic asymptotically-mean stationary processes and even more general nonstationary processes.
Yet, the substantially more parsimonious alternative for the restricted domain merits more attention; students of science and engineering who are likely to engage in time series analysis would
benefit substantially from exposure to this more empirically motivated alternative.
The relative measure μ[R] and the infinite-time average in the alternative theory are the non-population counterparts of the probability measure P and the ensemble average, respectively, in the
stochastic process theory [319], [314].
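As an illustrative numerical check of this correspondence (a sketch under the assumption of an ergodic Gaussian AR(1) model; the model, parameters, and names are not from the source), the infinite-time average over a single long record and the ensemble average over many independent realizations estimate the same second moment:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, rng, phi=0.9):
    # One realization of x[t] = phi * x[t-1] + e[t], e white Gaussian.
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

# Non-population counterpart: an (approximate) infinite-time average
# computed from ONE long record.
time_avg = np.mean(ar1(200_000, rng) ** 2)

# Population counterpart: ensemble average over many short realizations,
# read off after the start-up transient has died out.
ens_avg = np.mean([ar1(200, rng)[-1] ** 2 for _ in range(5_000)])

# Both estimate the stationary second moment 1 / (1 - 0.9**2), about 5.26.
print(time_avg, ens_avg)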
Due to the differences between the relative measure μ[R] on the relatively measurable sets (which are a subset of the σ-field of Borel subsets of the real line) and the probability measure P (on the
σ-field of Borel subsets of a probability sample space), mathematical properties holding for stochastic processes do not necessarily have counterparts that hold for functions of time representing
sample paths of these stochastic processes, and vice versa.
The key differences include:
• The class of the P-measurable sets is closed under union and intersection; the class of the relatively measurable sets is not.
• P is a σ-additive (additivity of countably infinite numbers of terms) measure; μ[R] is not.
• Expectation is σ-linear (linearity of an operator applied to a linear combination of a countably infinite number of terms); infinite-time average is not.
• Joint P-measurability of sample spaces is typically assumed but cannot be verified; joint relative measurability is a property of functions that can be verified.
• The assumed σ-additivity property of the probability measure is typically unverifiable and restricts the admissible sample spaces of time functions in ways that the relative measure does not.
• The relative measure is applied to the single time function at hand, and functions of this time function, with no restrictions other than its assumed existence. The fact that the relative measure
cannot be guaranteed to be σ-additive is a reflection of the time function at hand, not a deficiency.
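The failure of closure can be seen in a toy discretized example: estimate the relative measure of a set as the fraction of [1, T) it occupies, and watch whether the estimate converges as T grows (illustrative Python on a unit grid; the "digit" set is a standard textbook example of a set with no natural density):

```python
import numpy as np

def relative_measure_estimate(indicator, T):
    # Fraction of the interval [1, T) occupied by the set, on a unit grid.
    t = np.arange(1.0, float(T))
    return indicator(t).mean()

evens = lambda t: np.floor(t) % 2 == 0            # union of [2k, 2k+1)
digit = lambda t: np.floor(np.log10(t)) % 2 == 0  # [1,10) U [100,1000) U ...

e5 = relative_measure_estimate(evens, 10**5)
e6 = relative_measure_estimate(evens, 10**6)
d5 = relative_measure_estimate(digit, 10**5)
d6 = relative_measure_estimate(digit, 10**6)

# The 'evens' estimates converge (a relatively measurable set) ...
print(e5, e6)   # both near 0.5
# ... while the 'digit' estimates oscillate forever as T grows, so that
# set has no relative measure even though each interval in the union does.
print(d5, d6)   # roughly 0.909 vs 0.091
```

Each interval [10^(2k), 10^(2k+1)) is relatively measurable on its own, yet their countable union is not, which is one concrete face of the closure and σ-additivity failures listed above.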
These differences clearly show that the mathematical properties of the relative measure render it less amenable to mathematical study than do those of the probability measure P. This, however, does
not constitute an obstacle to using the non-population approach for time series analysis but, rather, as explained in [316], provides motivation for using this approach instead of the classical
stochastic-process approach based on P. In fact, the σ-additivity of the probability measure and σ-linearity of the expectation provide mathematically desirable tractability. But, as explained in
[316], they give rise to a dichotomy between the stochastic process properties and the properties of concrete individual sample paths of the stochastic process–the entities of primary interest to
practitioners in many applications. In contrast, such dichotomies do not arise in the non-population approach. In addition, the adoption of this alternative approach overcomes all problems arising
from the need to check sufficient conditions for validating assumptions for ergodicity–problems which occur frequently in time-series analysis applications.
The proposal to adopt the non-population Fraction-of-Time Probability alternative to the Kolmogorov formulation of a stochastic process is by no means as outrageous as some may think. In fact, there
is a long history of discontent with Kolmogorov’s model, as discussed at length in [320].
[314] J. Leskow and A. Napolitano, "Foundations of the functional approach for signal analysis," Signal Processing, vol. 86, no. 12, pp. 3796-3825, December 2006. Received the 2006 European Association for Signal Processing (EURASIP) Best Paper Award.
[315] W. A. Gardner, Statistical Spectral Analysis: A Nonprobabilistic Theory (book), Englewood Cliffs, NJ: Prentice-Hall, 565 pages, 1987. Author's comments: SIAM Review, vol. 36, no. 3, pp. 528-530.
[316] A. Napolitano and W. A. Gardner, "Fraction-of-time probability: Advancing beyond the need for stationarity and ergodicity assumptions," IEEE Access, vol. 10, pp. 34591-34612, 2022. doi: 10.1109/
[317] W. A. Gardner, "Transitioning away from stochastic processes," Journal of Sound and Vibration, vol. 565, 117871, 24 October 2023.
[318] https://cyclostationarity.com/
[319] M. Kac and H. Steinhaus, "Sur les fonctions indépendantes (IV) (Intervalle infini)," Studia Mathematica, vol. 7, no. 1, pp. 1-15, 1938.
[320] A. Lyon, "Kolmogorov's axiomatisation and its discontents," in The Oxford Handbook of Probability and Philosophy, Oxford, U.K.: Oxford Univ. Press, pp. 66-155, 2016.
Proposed future addition to https://en.wikipedia.org/wiki/Statistical_inference —
Inference based on non-population probabilities
Inference based on time series data where populations of time series are not of interest falls outside of the various inference paradigms described above. When populations are not of interest,
Fraction-of-Time (FOT) Probabilities provide a mathematically sound basis for time-series inference. These probabilities are defined in terms of the relative measure on the real line. FOT Probability
was introduced by Kac and Steinhaus (not by this name) contemporaneously with Kolmogorov’s Probability and stochastic processes based on the theory of measures on abstract sample spaces (mathematical
models for populations). Unlike Kolmogorov’s theory, that of Kac and Steinhaus has not enjoyed anything like the level of acceptance that Kolmogorov’s theory has benefitted from. Nevertheless, this
non-population alternative to population probability has not been proven to be unworthy of more consideration. Recent progress on identifying the relative advantages and disadvantages of these two
alternative measure-theoretic definitions of probability has appeared in the literature. See empirical probability for more detail and references.
Proposed future addition to https://en.wikipedia.org/wiki/Information_theory —
Alternative Information Theory: Non-Population Probability
Almost all concepts, definitions, and theorems of Information Theory, which are based on population probability, are equally valid when based on non-population probability when the underlying data of
interest is a time series. If the model used for the time series is stationary (or cyclostationary) and ergodic (or cyclo-ergodic), then it can be replaced with a single time series and the process’
population probabilities can be replaced with Fraction-of-Time (FOT) Probabilities. The theory of non-population FOT probability for time functions or time sequences was introduced by Kac and
Steinhaus contemporaneously with Kolmogorov’s introduction of the theory of axiomatic probability and stochastic processes. The former is based on the Kac-Steinhaus theory of relative measure on the
real line, whereas the latter is based on the more abstract axiomatic probability based on the theory of probability measure defined on generally abstract sample spaces (modeling the possible
outcomes from populations).
The relative advantages and disadvantages of these alternative probability theories are delineated in [1],[2]. These are primarily conceptual and relate to empirical quantities, and they tend to
favor the non-population theory when populations of time series are not of interest. This would be the case, for example, if the bit-error-rate of interest for a digital communications system is the
fraction of bit errors over time, not the relative frequency of bit errors over a population of systems. The reason the term “almost all” is used in the opening sentence above is that the channel
coding theorem is formulated in terms of channel-output stochastic processes that are non-ergodic. They are non-ergodic by virtue of the assumption in this theorem of a random channel—a population of
channels. The only random channel that does not destroy ergodicity of an ergodic channel input is a random-delay channel. The practical utility of this alternative form of information theory is
apparently not addressed in the literature. But this is not proof that this is not a worthy alternative for some applications.
1. A. Napolitano and W. A. Gardner, "Fraction-of-time probability: Advancing beyond the need for stationarity and ergodicity assumptions," IEEE Access, vol. 10, pp. 34591-34612, 2022. doi: 10.1109/
2. W. A. Gardner, "Transitioning away from stochastic processes," Journal of Sound and Vibration, vol. 565, 117871, 24 October 2023.
Proposed future addition to https://en.wikipedia.org/wiki/Multitaper —
Overview of and Historical Perspective on Multi-Taper Methods of Spectrum Estimation
(Note for Wikipedia reviewers: The following discussion is no less consensus science than is the MTM discussed in this existing post, which has appeared in very few publications, and the classical
periodogram-based method in this discussion has appeared in immense numbers of publications and is definitely more consensus science than is the newer MTM. Therefore, Wikipedia’s restriction to
consensus science cannot logically be used as a basis for rejection of this supplement to the existing post on the MTM, since this submitted discussion simply exposes the relative performances of
these two methods. The performance summaries given below are easily demonstrated with controlled experiments using synthetic data as shown, for example, on Page 7.10 at the educational website.)
The original flaw in the history of PSD estimation, made in the earliest days of this subject as couched within the framework of stochastic processes, was to use the fact that the PSD can be defined
as the limit, as the data segment length approaches infinity, of the expected value of the periodogram of the data as justification for estimating the PSD by simply deleting the expectation operation
and not taking the limit. That is, by simply using the raw periodogram of the data.
It was then realized that the high variance of the periodogram does not decrease as the data segment length increases, even if the data is modeled as an ergodic stationary process. The most
transparent explanation of how to overcome this is to use only a subsegment of the available data to compute the periodogram and then let the subsegment time interval slide along over the full
time-interval of the available data to obtain a sliding periodogram and perform a time average over the slide index. If the subsegment length is a fraction A of the full data segment length, then the
variance of the time-averaged periodogram is approximately A times the variance of any one of the fixed periodograms obtained from a fixed (non-sliding) subsegment. This is known as the Welch method
[9], [10]. This technique and those described in the remainder of this section are most easily understood, with the least abstraction, in terms of the non-population statistical theory of individual
functions instead of the more commonly known theory of stochastic processes [11].
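A rough numerical sketch of the variance-reduction argument (hypothetical parameters, and non-overlapping rectangular subsegments rather than any particular published variant):

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 4096, 256                 # full record and subsegment lengths; A = L/N
x = rng.standard_normal(N)       # white noise: the true PSD is flat

def periodogram(seg):
    # Magnitude-squared DFT normalized by segment length.
    return np.abs(np.fft.rfft(seg)) ** 2 / len(seg)

single = periodogram(x[:L])      # one fixed-subsegment periodogram
avg = np.mean([periodogram(x[i:i + L]) for i in range(0, N, L)], axis=0)

# The averaged estimate scatters far less about the flat true PSD.
print(single[1:-1].std(), avg[1:-1].std())
```

With N/L = 16 non-overlapping subsegments, the variance of the time-averaged periodogram is roughly A = 1/16 that of a single fixed-subsegment periodogram, so its standard deviation drops by about a factor of four.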
It can be shown with nothing more than mathematical manipulation that the time-averaged periodogram is approximately equal to the periodogram for the full segment of available data, frequency
smoothed (convolved in frequency) with the squared magnitude of the Fourier transform of the rectangle window that selects a subsegment from the full segment in the time-averaged periodogram [12].
Upon recognizing this, it becomes clear that the details of this approximate spectral smoothing window can be designed by designing a window to replace the time-selective rectangle window. This
replacement is what is called a data tapering window because the way to reduce the undesirable sidelobes of the effective spectral smoothing window is to use a time window that tapers off smoothly
from its data center point to its left and right data end points. This is a consequence of the basic method for reducing the severity of what is called the Gibbs phenomenon.
Historically, it was eventually realized that especially effective spectral leakage suppression could be achieved by multiplying the rectangle window by a sum of harmonically related sinusoids with
carefully chosen amplitudes and fundamental frequency equal to the reciprocal of the width of the time window because this frequency shifts and adds the Fourier transform of the taper window and,
with appropriate amplitude assignments, this can achieve a significant degree of sidelobe cancellation (cf. windows numbered 13, Hamming; 14, von Hann; 23, Bartlett-Hann; 24-25, Blackman; 26,
Blackman-Harris family; 27, Nuttal family; and 34, Taylor in [13]), at the expense of some mainlobe widening, which reduces spectral resolution.
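The sidelobe-cancellation construction can be checked numerically: the von Hann window is exactly a rectangle multiplied by a two-term sum of harmonically related sinusoids, and its peak sidelobe is far below the rectangle's (a sketch; the padding factor and helper function are illustrative):

```python
import numpy as np

N = 256
n = np.arange(N)
rect = np.ones(N)
# Rectangle times a sum of harmonically related sinusoids with fundamental
# frequency 1/N; the coefficients (0.5, -0.5) give the von Hann window, and
# the frequency-shifted copies of the rectangle's transform cancel sidelobes.
hann = rect * (0.5 - 0.5 * np.cos(2 * np.pi * n / N))

def peak_sidelobe_db(w):
    # Peak sidelobe level of the window's transform, in dB below the mainlobe.
    W = np.abs(np.fft.fft(w, 16 * len(w)))
    W = W[: len(W) // 2] / W.max()
    i = 1
    while i + 1 < len(W) and W[i + 1] <= W[i]:
        i += 1                    # walk down the mainlobe to its first null
    return 20 * np.log10(W[i:].max())

print(peak_sidelobe_db(rect))     # near -13 dB for the rectangle
print(peak_sidelobe_db(hann))     # near -31 dB, at the cost of a wider mainlobe
```

The roughly 18 dB of extra sidelobe suppression comes with a mainlobe twice as wide, which is the resolution penalty described above.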
It has been argued that the periodogram of available data is a flawed tool for PSD estimation, regardless of data tapering and/or spectral smoothing, cf. [1]. The first MTM spectrum estimate
introduced in the originating contribution [1] proposes in its place a periodogram of a subspace projection of the complex spectrum of the available data segment, and this projection can be
mathematically proven to be a smoothed (but not simply convolved) version of the complex spectrum. The proof is based on an analysis of the Prolate Spheroidal Wave Sequences used by the MTM for
projection and the fact that these sequences appear as data tapering windows. The equivalence of this smoothing operation is not obvious from the specification of the projection but has been
recognized [14]. Instead of spectrally smoothing the magnitude-squared complex spectrum (the periodogram) as in classical methods, the MTM smooths the complex spectrum itself using approximate
discrete frequency samples in linear combinations (which smoothing is not equivalent to a single convolution operation) and then takes the squared magnitude. But the MTM limits the amount of
smoothing to a few resolution widths (this width equals the reciprocal of the data segment length—see section on Slepian Sequences in [1]) of the complex spectrum. Any reduction of temporal or
population variance from this modification of the classical periodogram is bounded by the impact of magnitude-squaring on the variance reduction factor for the complex spectrum and this factor is
loosely bounded by the reciprocal of the number of discrete frequencies used for smoothing, which number is typically modest. The bound may be loose because the frequency samples are generally
In addition, the second MTM spectrum estimate proposed in [1], which is called a stabilized MTM estimate, approximates a local discrete-frequency smoothed periodogram of traditionally tapered data
because it can be mathematically proven that the Prolate Spheroidal Sequences {h[t,k]} are approximately equal to frequency shifted versions of tapers similar to standard tapers.
In conclusion, although the MTM spectrum estimates provide an optimized tradeoff between spectral resolution and spectral leakage, they are not universal improvements over classical periodogram methods as originally proposed in [1]: for data records that are not especially short, their variance-reduction capability is limited and distinctly inferior to that of classical methods. So, the MTM is not uniformly competitive with classical periodogram-based methods, which provide a well-known and thoroughly demonstrated viable means for trading resolution performance for substantial variance reduction. However, because temporal/spatial time series are typically relatively short in the spatial dimension
even if not so in the temporal dimension, variance reduction can be achieved with time averaging short spatial segments and the MTM might therefore offer some advantage for various spatiotemporal
spectrum analysis applications. Similarly, for multiple statistical samples of time series from a population, variance reduction can be achieved with ensemble averaging short time segments.
Classical methods based on the periodogram can provide spectral resolution and leakage performance that is comparable to that of the MTM provided that appropriate data tapering windows are used. In
fact, the MTM uses a similar (but not identical) technique to one used by the classical Single-Taper Method (STM) in which the tapering window is constructed from a sum of frequency-translated
windows (the classical sidelobe cancellation technique) [13]. The added computational complexity of the MTM can be avoided by using this STM, instead of the MTM, to produce a classical time-averaged
periodogram of tapered data.
[9] https://en.wikipedia.org/wiki/Welch%27s_method
[10] W. A. Gardner, Statistical Spectral Analysis, Chap. 5, Englewood Cliffs, NJ: Prentice-Hall, 1987.
[11] https://ieeexplore.ieee.org/document/9743388
[12] W. A. Gardner, "The history and equivalence of two methods of spectral analysis," IEEE Signal Processing Magazine, no. 4, pp. 20-23, July 1996.
[13] A. W. Doerry, Catalog of Window Taper Functions for Sidelobe Control, Sandia Report SAND2017-4042, Sandia National Laboratories, April 2017, https://www.osti.gov/servlets/purl/1365510.
[14] A. T. Walden, "A unified view of multitaper multivariate spectral estimation," Biometrika, vol. 87, no. 4, pp. 767-788, December 2000.
[15] S. Karnik, J. Romberg, and M. A. Davenport, "Thomson's multitaper method revisited," arXiv:2103.11586 [eess.SP], https://doi.org/10.48550/arXiv.2103.11586
Proposed future addition to https://en.wikipedia.org/wiki/Method_of_moments_(statistics) —
Alternative Method of Moments
The equations to be solved in the Method of Moments (MoM) are in general nonlinear and there are no generally applicable guarantees that tractable solutions exist. But there is an alternative
approach to using sample moments to estimate data model parameters in terms of known dependence of model moments on these parameters, and this alternative requires the solution of only linear
equations (more generally, tensor equations). This alternative is referred to as the Bayesian-Like MoM (BL-MoM), and it differs from the classical MoM in that it uses optimally weighted sample
moments. Considering that the MoM is typically motivated by a lack of sufficient knowledge about the data model to determine likelihood functions and associated a posteriori probabilities of unknown
or random parameters, it is odd that there exists a type of MoM that is Bayesian-Like. But the particular meaning of Bayesian-Like leads to a problem formulation in which required knowledge of a
posteriori probabilities is replaced with required knowledge of only the dependence of model moments on unknown model-parameters, which is exactly the knowledge required by the traditional MoM. The
BL-MoM also uses knowledge of a priori probabilities of the parameters to be estimated, when available, but otherwise uses uniform priors.
The BL-MoM has been reported on in only the applied statistics literature in connection with parameter estimation and hypothesis testing using observations of stochastic processes for problems in
Information and Communications Theory and, in particular, communications receiver design in the absence of knowledge of likelihood functions or associated a posteriori probabilities [5]-[8]. In
addition, the restatement of this receiver design approach for stochastic process models as an alternative to the classical MoM for any type of multivariate data is available in tutorial form at the
university website [9]. These applications demonstrate some important characteristics of this alternative to the classical MoM, and a detailed list of relative advantages and disadvantages is given
in [9], but the literature is missing direct comparisons in a variety of specific applications of the classical MoM with the BL-MoM.
[5] W. A. Gardner, "The structure of least-mean-square linear estimators for synchronous M-ary signals," IEEE Transactions on Information Theory, vol. 19, no. 2, pp. 240-243, 1973.
[6] W. A. Gardner, "An equivalent linear model for marked and filtered doubly stochastic Poisson processes with application to MMSE linear estimation for synchronous M-ary optical data signals," IEEE Transactions on Communications, vol. 24, no. 8, pp. 917-921, 1976.
[7] W. A. Gardner, "Structurally constrained receivers for signal detection and estimation," IEEE Transactions on Communications, vol. 24, no. 6, pp. 578-592, 1976 (see errata in reference list in [5]).
[8] W. A. Gardner, "Design of nearest prototype signal classifiers," IEEE Transactions on Information Theory, vol. 27, no. 3, pp. 368-372, 1981.
[9] https://cyclostationarity.com
Proposed future addition to https://en.wikipedia.org/wiki/Generalized_method_of_moments —
An Alternative to the GMM
At https://en.wikipedia.org/wiki/Method_of_moments_(statistics), an alternative to the original (non-generalized) Method of Moments (MoM) is described, and references to some applications and a list
of theoretical advantages and disadvantages relative to the traditional method are provided. No comparison has yet been made with the GMM, but the list of advantages given is motivational.
Computing Power
Tech / By SSSolutionsAbroad
What distinguishes a supercomputer as such? Can it defend the rights of the innocent or leap large buildings in a single bound? The reality is a little less dramatic. Complex calculations can be
completed very quickly by supercomputers.
Apparently, that is the key to computer power. Everything depends on how quickly a machine can do an operation. A computer’s operations can be reduced to math. Any command you provide is translated
by your computer’s processor into a set of mathematical equations. Faster processors are better at performing extremely difficult calculations and can perform more calculations per second than slower ones.
The CPU of your computer contains an electronic clock. A series of electrical pulses must be produced by the clock at regular intervals. This makes it possible for the computer to synchronize all of
its parts and controls how quickly it can retrieve data from memory and carry out calculations.
Clock speed is what is meant when you mention how many gigahertz your processor has. The figure represents the number of electrical pulses that your CPU emits each second. A CPU operating at 3.2
gigahertz generates about 3.2 billion pulses per second. While certain CPUs can be overclocked to operate at rates above their stated limits, ultimately a clock will reach its maximum speed and
cannot be increased.
Flops, commonly known as floating-point operations per second, are another way to quantify computer performance. A desktop computer’s processor today can perform gigaflops or billions of
floating-point operations per second. The ability of each processor core to do a specific amount of calculations per second gives computers with many processors an edge over those with a single
processor. The computing power of multiple-core computers increases while using less electricity. (Reference: Intel)
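A crude way to see the flops measure in practice is to time a known workload (an illustrative Python sketch; the 2·n³ operation count for a dense n×n matrix multiply is the standard convention, and the measured figure depends on the machine and the linear-algebra library):

```python
import time
import numpy as np

# An n x n dense matrix multiply performs about 2 * n**3 floating-point
# operations, so timing one gives a rough achieved-gigaflops figure.
n = 512
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - t0

gigaflops = 2 * n**3 / elapsed / 1e9
print(f"~{gigaflops:.1f} gigaflops achieved")
```

Note that this measures achieved throughput for one favorable workload, not the processor's peak rating; real programs rarely sustain peak flops.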
Even powerful computers might need years to do a task. For the majority of computers, it is challenging to find the two prime factors of an extremely large number. The computer must first identify an
enormous number’s factors. The next step is for the computer to decide whether the factors are prime numbers. This is a tedious task for very large numbers. Computers can take years to perform these calculations.
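The cost of the factoring task can be felt even at toy scale: trial division must test candidate divisors up to the square root of n, so the work explodes as n grows (a hypothetical helper; the semiprime below is just a small example):

```python
def factor_semiprime(n):
    # Trial division: tests candidate divisors up to sqrt(n).  Fine for
    # small n, hopeless for the hundreds-of-digit semiprimes used in
    # cryptography, where the number of candidates is astronomical.
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n itself is prime

print(factor_semiprime(6_061_063))  # (1009, 6007): both factors are prime
```

For a 7-digit number this finishes instantly; for the 600-digit moduli used in cryptography, the same loop would not finish in the lifetime of the universe, which is why factoring is the benchmark problem for quantum computing.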
Such a task might be fairly straightforward for future computers. The most likely solution might be given in a matter of seconds by a working quantum computer with enough processing power to do
parallel calculations on various factors. Quantum computers, however, present unique difficulties and wouldn’t be appropriate for all computing workloads, but they might change the way we perceive
computing capacity.
The Rebalancing Premium in Risk Parity Portfolios
Advisor Perspectives welcomes guest contributions. The views presented here do not necessarily represent those of Advisor Perspectives.
This short article investigates the rebalancing premium that investors may expect from risk parity portfolios^1. It is offered as an appendix to the paper, “Risk Parity: Methods and Measures of
I define rebalancing premium as the difference between the compound return on a portfolio, and the weighted average compound returns produced by the underlying investments in a portfolio.
I examine the distribution of rebalancing premiums for a simple risk parity implementation (a version of the Permanent Portfolio) consisting of US stocks, gold and bonds from 1982 through May 2020. I
then proceed to analyze historical and expected future rebalancing premia for a variety of global risk parity strategies composed of 64 futures markets from June 1985 through May 2020.
When applied to the three markets in the Permanent Portfolio I surface a rebalancing premium of approximately 1.2% per year. The most diversified risk parity portfolios produced a rebalancing premium
of more than 3% per year.
In the current low return environment, a diversified risk parity portfolio with a 2-3% rebalancing premium may represent an attractive alternative to global 60/40, with the added benefit of owning a
diverse set of markets that benefit from a wider range of economic outcomes.
When scaled to the same volatility as S&P 500 futures^2, the compound excess returns on S&P 500, gold, and 10-year Treasury futures from June 1982 through September 2020 were 7.2%, 0.6%, and 12.4%, respectively.
One might expect that a portfolio that preserved a one-third allocation by regularly rebalancing back to target weights would have produced a return equal to the average of the constituent compound
returns: 6.74%. In fact, a strategy that rebalanced at the end of each month to preserve equal risk weighting produced a compound return of 7.93%, a full 1.19% per year higher than the average of the
constituents' compound returns.
This effect is referred to variously as the “rebalancing premium,” “volatility harvesting,” “volatility pumping,” or “gamma scalping.” It emerges mechanically from the process of continuously selling
uncorrelated assets with gains to purchase assets with losses (and vice versa) from period to period.
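As a concrete sketch of the rebalancing premium as defined earlier (illustrative code, not the article's own), the premium can be computed from a matrix of monthly returns and a fixed weight vector; the two-asset toy series below is hypothetical:

```python
import numpy as np

def compound_annual_return(monthly_returns):
    """Annualized geometric (compound) return from a series of monthly returns."""
    total_growth = np.prod(1.0 + np.asarray(monthly_returns))
    years = len(monthly_returns) / 12.0
    return total_growth ** (1.0 / years) - 1.0

def rebalancing_premium(returns, weights):
    """Compound return of a portfolio rebalanced to `weights` every month,
    minus the weighted average of the constituents' compound returns."""
    returns = np.asarray(returns)            # shape: (months, assets)
    weights = np.asarray(weights)
    portfolio = returns @ weights            # fixed weights restored each month
    asset_cagrs = np.array([compound_annual_return(returns[:, i])
                            for i in range(returns.shape[1])])
    return compound_annual_return(portfolio) - weights @ asset_cagrs

# Two markets that swing +10%/-10% exactly out of phase: each loses money on
# its own (volatility drag), but the rebalanced 50/50 mix is flat every month,
# so the entire portfolio return comes from rebalancing.
r = np.array([[0.10, -0.10], [-0.10, 0.10]] * 60)        # 120 months
premium = rebalancing_premium(r, np.array([0.5, 0.5]))   # roughly 5.9% per year
```

The out-of-phase series is an extreme case (perfect negative correlation); with merely uncorrelated assets the premium is smaller but, as shown below, still material.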
Returns from uncorrelated volatility
Importantly, the rebalancing premium is not a property of markets per se. Rather, it emerges directly from entropy – the unleashed potential of randomness itself. I can observe this directly from the
properties of portfolios consisting of synthetic markets with random returns.
To illustrate, I simulated random monthly returns (i.e. synthetic market returns) by sampling from a random normal distribution. Specifically, I generated synthetic returns for a ‘market’ by drawing
461 random returns^3 from a normal distribution consistent with the monthly standard deviation of S&P 500 futures. I analyzed portfolios with two ‘markets’ through 13 ‘markets’ to observe how the
rebalancing premium scales with the addition of new independent (i.e. uncorrelated) investments. All portfolios were scaled to the same 10% annualized volatility. In addition, I observed the
rebalancing premium when I set average returns to 0, 3, or 6% compounded annually^4.
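A minimal version of this experiment can be sketched as follows. It uses zero arithmetic drift rather than the article's zero/3%/6% compound drift, which is a simplification, and the function name and parameters are illustrative rather than the article's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def median_premium(n_assets, monthly_vol, n_months=461, n_sims=500):
    """Median rebalancing premium of an equal-weight, monthly-rebalanced
    portfolio of independent zero-drift normal return streams."""
    years = n_months / 12.0
    premia = []
    for _ in range(n_sims):
        r = rng.normal(0.0, monthly_vol, size=(n_months, n_assets))
        port = r.mean(axis=1)                  # equal weight, rebalanced monthly
        port_cagr = np.prod(1 + port) ** (1 / years) - 1
        asset_cagrs = np.prod(1 + r, axis=0) ** (1 / years) - 1
        premia.append(port_cagr - asset_cagrs.mean())
    return float(np.median(premia))

# Per-asset monthly vol of 5% with 3 uncorrelated assets gives ~10% annualized
# portfolio volatility (0.05 * sqrt(12) / sqrt(3) = 0.10), matching the
# article's scaling; the premium comes out on the order of 1% per year.
p3 = median_premium(3, monthly_vol=0.05, n_months=240, n_sims=200)
```

Raising `n_assets` to 13, with per-asset volatility scaled up to hold portfolio volatility at 10%, increases the premium several-fold, mirroring the pattern in Figure 1.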
Figure 1: Expected rebalancing premium conditioned on number of uncorrelated markets and average compound drift
Analysis by ReSolve Asset Management. Data from CSI. Past performance does not guarantee future results. SIMULATED RESULTS
Some readers may be surprised to learn that a portfolio that rebalances among three uncorrelated return streams, each with a zero compound return, can produce a 0.8% annualized return purely from
rebalancing. More dramatically, an investor who can find 13 uncorrelated investments that average 3% compound returns may generate 2.5 percentage points extra yield per year in rebalancing premium.
It may seem far-fetched to find 13 uncorrelated return streams. I will show that thoughtful investors can manufacture over a dozen uncorrelated investments from a diversified risk parity universe
consisting of global stock and bond indices, and individual commodities.
Minimizing volatility drag
While many dynamics interact to produce a rebalancing premium, an important contributor is the reduction in portfolio volatility that manifests as a function of portfolio diversification. Remember
that while financial papers often cite arithmetic average returns to determine the statistical significance of various effects, this is not the rate that investors actually grow wealth in practice.
Rather, investors experience a series of returns, which compound over time.
A portfolio with an expected arithmetic mean return of μ percent per year and a standard deviation of returns of σ percent per year will actually be expected to compound at a rate of approximately g = μ − σ²/2 percent per year^5.
Thus, if we compare two investments with the same mean expected return, say 3% per year, but portfolio A has a volatility of 5% and portfolio B has a volatility of 20%, the lower volatility
investment will produce a higher expected wealth.
An investment of $100,000 in portfolio A would be expected to grow to $176,278 while an investment in portfolio B would be expected to grow to just $122,019.
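These dollar figures are reproduced almost exactly by the approximation g = μ − σ²/2 compounded annually over a 20-year horizon (the horizon is not stated in the article; 20 years is inferred from the figures themselves):

```python
def expected_wealth(initial, mu, sigma, years):
    """Expected terminal wealth under the volatility-drag approximation."""
    g = mu - sigma ** 2 / 2.0     # approximate geometric growth rate
    return initial * (1.0 + g) ** years

a = expected_wealth(100_000, mu=0.03, sigma=0.05, years=20)   # about 176,278
b = expected_wealth(100_000, mu=0.03, sigma=0.20, years=20)   # about 122,019
```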
It follows that if you can combine volatile investments in such a way that diversification reduces overall portfolio volatility, the portfolio will produce a rate of growth that is greater than the
sum of its parts.
In the following pages you will see that sufficiently diverse portfolios with optimal diversification may produce well over 2% per year in rebalancing premium. In the current environment, with many
stock markets trading at relatively low earnings yields despite record profit margins, and global bond markets trading near the lowest yields in history, the ability to generate 2% or more in excess
compound returns, with no expected increase in risk, should not be overlooked.
Rebalancing the Permanent Portfolio
Let’s observe the rebalancing premium produced by a real portfolio of uncorrelated markets. For our case study I examine a simple version of Harry Browne’s Permanent Portfolio. My implementation holds
equal volatility weighting in S&P 500, gold, and U.S. Treasury futures, for which we have daily total return data from June 1982 through September 2020. Figure 2 shows the daily pairwise correlations
over the full sample period, which are effectively zero.
Figure 2: Daily Pearson correlations, June 1982 – September 2020
Analysis by ReSolve Asset Management. Data from CSI. Past performance does not guarantee future results. SIMULATED RESULTS.
To be consistent with the experimental design above, stocks, gold, and Treasury returns are levered to the same long-term average daily volatility. There are negligible excess costs on leverage for
futures contracts.
I plot the performance of the constituent markets and the equally risk-weighted portfolios rebalanced at daily, weekly and monthly frequencies in Figure 3.
You’ll note that the weighted average of the constituents’ compound returns for the portfolios (average of daily weights × annualized compound return for each market) was about 6.75% per year.
Rebalancing between the assets added about 1.2% per year so that the portfolio compounded at just shy of 8% per year.
Figure 3: Cumulative growth and performance statistics for Permanent Portfolio and constituent assets, 1982 – May 2020. SIMULATED RESULTS
Analysis by ReSolve Asset Management. Data from CSI. Past performance does not guarantee future results. SIMULATED RESULTS.
Rebalancing risk parity
I investigate the properties of the rebalancing premia for a variety of global risk parity implementations as described in the reference paper.
Recall that our investment universe for analysis consists of 64 futures markets. I eliminate currency markets because there is no coherent way to define long or short exposure. Allowing for a
one-year estimation window for portfolio parameters I start tracking all simulations in 1985. Markets are added once they have one year of returns.
Continuous contract returns are computed by rolling front-month contracts when two consecutive days of both volume and open interest have migrated to the next contract.
Setting expectations
Let’s take a moment to estimate the rebalancing premium that I might expect given the distribution of historical compound returns and the effective number of uncorrelated assets.
I derive the effective number of uncorrelated assets by taking the square of the diversification ratio of the most diversified portfolio.
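A minimal sketch of that calculation (illustrative helper, not ReSolve's code):

```python
import numpy as np

def effective_bets(weights, cov):
    """Squared diversification ratio: ((w . vol) / sqrt(w' Cov w))^2."""
    w = np.asarray(weights, dtype=float)
    vols = np.sqrt(np.diag(cov))
    diversification_ratio = (w @ vols) / np.sqrt(w @ cov @ w)
    return diversification_ratio ** 2

# Three uncorrelated, equal-volatility markets -> 3 independent bets;
# three perfectly correlated markets -> effectively 1 bet.
independent = effective_bets([1/3, 1/3, 1/3], np.diag([0.04, 0.04, 0.04]))  # 3.0
correlated = effective_bets([1/3, 1/3, 1/3], np.full((3, 3), 0.04))         # 1.0
```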
Of course, in practice we must estimate the most diversified portfolio from historical returns^6. These estimates are imperfect so we will not get the full benefit of all potential diversification.
When I solve for the most diversified portfolio at the end of each month and apply the estimates to returns in the next month I observe the time-series of effective bets described in Figure 4.
Figure 4: Number of out-of-sample independent bets through time, rolling 252-day average. SIMULATED RESULTS
Analysis by ReSolve Asset Management. Data from CSI. Past performance does not guarantee future results. SIMULATED RESULTS.
On average we are able to extract approximately 13 independent bets from the 64 markets in our risk parity universe. The weighted average annual compound return of the markets is just over 3%. If we
refer back to Figure 1, and perform the same computation for 13 uncorrelated markets with compound returns of 3%, and scale the portfolio to 10% volatility, we would expect a diversified risk parity
strategy to produce an annual rebalancing premium of about 2.5% per year.
Empirical results
Now let’s examine how the rebalancing premium contributes to realized performance for each of our risk parity implementations.
To recap the simulation methodology, at each daily time step I solve for the portfolio that meets the target objective based on volatility and covariance estimates derived from historical returns.
Portfolio weights are scaled to target 10% annualized volatility as a function of the weights and the estimated covariance matrix. I then rebalance to the target portfolio at the close of the
following day’s trading.
For weekly rebalanced portfolios, each day I rebalance to the optimal daily weights averaged over the previous five days. This approximates the effect of rebalancing weekly, on each day of the week.
For monthly rebalanced portfolios I rebalance to the optimal daily weights averaged over the previous 21 trading days. This approximates the effect of rebalancing monthly, on each day of the month.
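The weight-averaging step behind both staggered schedules can be sketched as follows (illustrative; the article's production code is not shown):

```python
import numpy as np

def staggered_weights(daily_optimal, window):
    """Trailing average of the optimal daily weight vectors: window=5
    approximates weekly rebalancing staggered across days of the week,
    window=21 approximates monthly rebalancing staggered across the month."""
    w = np.asarray(daily_optimal, dtype=float)    # shape: (days, assets)
    out = np.full_like(w, np.nan)                 # undefined until the window fills
    for t in range(window - 1, len(w)):
        out[t] = w[t - window + 1 : t + 1].mean(axis=0)
    return out

# Optimal weights that flip daily between two assets average out to 50/50.
flip = staggered_weights([[1, 0], [0, 1], [1, 0], [0, 1]], window=2)
```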
In Figure 5 I decompose the total compound annual returns of each risk parity implementation into those attributable to the sum of weighted geometric returns for each market, and the rebalancing
premium. Results may be slightly different than those reported in the reference paper due to rounding and differences in rebalancing.
Figure 5: Decomposing total compound returns into weighted average market returns and the diversification premium. SIMULATED RESULTS
Analysis by ReSolve Asset Management. Data from CSI. Past performance does not guarantee future results. SIMULATED RESULTS. Please see details of methodology and disclaimers in the reference paper.
All things equal, rebalancing premia are a function of the dispersion in risk-adjusted returns across portfolio constituents, and how well the constituents diversify one another to lower overall
portfolio volatility^7.
Where implementations resulted in more concentrated allocations toward markets with very high in-sample Sharpe ratios (such as Inverse Variance [INV-VAR] or Hierarchical Risk Parity [HRP-VAR]
methods, which typically concentrate risk in bonds), I observe that rebalancing was actually detrimental to portfolio returns.
As noted in the reference paper, historical results for these methods probably substantially overstate what investors should expect from bond-heavy portfolios going forward because historical returns
benefited from high average yields in the sample period. In contrast, at time of writing the Vanguard Total World Bond ETF currently boasts a Yield to Maturity of 0.75% with an average duration of
7.6 years, implying less than 1% expected returns over the next 15 years^8. In addition, a high concentration in bonds makes these methods especially vulnerable to inflation.
On the other hand, maximum diversification methods produced the highest rebalancing premium. The average premium across MAX-DIV and HRSS MAX-DIV methods was 2.3%. They achieved this high rebalancing
premium by creating the effect of maximizing weighted average portfolio volatility while minimizing the volatility drag on the portfolio.
This high premium is the icing on the cake for maximally diversified portfolios. These methods were above-average performers in the historical sample in terms of returns and drawdown risk, without
relying on high exposure to bonds. Moreover, by virtue of their extreme levels of diversity and balance, they produced highly persistent positive returns across economic regimes.
Note also that monthly rebalancing produced the highest rebalancing premium. This may be due to several interacting effects. First, rebalancing is more effective if the relative returns to
time-series are trending between rebalances and mean-reverting after. There is evidence that futures exhibit momentum^9 and trending^10 behavior at monthly horizons.
In addition, recall that I construct the monthly rebalanced strategies by averaging the optimal daily weights over 21 days. This averaging acts as a shrinkage method or regularization step on the
weight vector, which may produce a less biased estimator of out-of-sample covariances, and allow portfolios to better capture available diversification opportunities.
Do the results generalize?
It is reasonable to wonder whether the magnitude of the observed rebalancing premia and the ordered ranking of premia across the different risk parity methods is an artifact of luck. Should we
expect, for example, maximum diversification type methods to dominate Hierarchical Risk Parity in general?
To address this query I conducted two related analyses: a bootstrap analysis of re-ordered historical returns; and a Monte Carlo analysis of random returns drawn from a multivariate normal
distribution with the same means and covariances as the historical sample.
My analyses are complicated by the fact that different markets have different starting dates. I decided to use January 2000 as my cutoff date, as it represents a convenient trade-off between number
of markets and length of sample. I have continuous daily data for 53 markets starting January 2000.
My bootstrap analysis followed this procedure:
1. Subset the returns matrix to contain all 53 markets with continuous history back to at least January 2000
2. Create a sample returns matrix by drawing rows at random from the returns matrix, with replacement
3. Solve for the optimal ex post portfolios based on five objectives
• Equal Weight
• Inverse Volatility
• Equal Risk Contribution
• Hierarchical Risk Parity
• Maximum Diversification
4. Scale the weights to target a 10% ex post annualized volatility
5. Measure the realized rebalancing premium for each optimization
6. Repeat 10,000 times
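Steps 2 and 5 of this procedure can be sketched for the equal-weight objective alone (the other objectives require a portfolio optimizer); the helper and synthetic data below are illustrative, not the article's code:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_premia(returns, weights, n_boot=1000):
    """Row-wise bootstrap of the rebalancing premium for fixed weights."""
    returns = np.asarray(returns)
    months = returns.shape[0]
    years = months / 12.0
    premia = np.empty(n_boot)
    for b in range(n_boot):
        sample = returns[rng.integers(0, months, size=months)]  # rows, with replacement
        port_cagr = np.prod(1 + sample @ weights) ** (1 / years) - 1
        asset_cagrs = np.prod(1 + sample, axis=0) ** (1 / years) - 1
        premia[b] = port_cagr - weights @ asset_cagrs
    return premia

# Five synthetic uncorrelated markets, 20 years of monthly data.
r = rng.normal(0.0, 0.05, size=(240, 5))
premia = bootstrap_premia(r, np.full(5, 1 / 5), n_boot=200)
```

The distribution of `premia` plays the role of Figure 6's per-method histograms.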
Figure 6 describes the distribution of rebalancing premia observed for all 10,000 samples sorted by optimization method.
Figure 6: Row-wise bootstrap distributions of rebalancing premia for different risk parity methods, 10,000 samples. SIMULATED RESULTS
Analysis by ReSolve Asset Management. Data from CSI. Past performance does not guarantee future results. SIMULATED RESULTS. Please see details of methodology and disclaimers in the reference paper.
Most of the risk parity methods have a very high probability of producing a large and positive rebalancing premium when targeting a 10% annualized volatility. The equal weight method produced a
rebalancing premium greater than 1.2% per year, and the Maximum Diversification method produced a premium greater than 1.68% per year, in over 95% of samples, respectively. In contrast, the highly
concentrated Hierarchical Risk Parity [HRP-VAR] portfolio produced a negative rebalancing premium about 49% of the time.
The expected value for the rebalancing premium of the Maximum Diversification risk parity portfolio is a very material 2.69%. The weighted average compound return of risk parity portfolio
constituents is about 8% in the historical sample. As such, this magnitude of rebalancing premium represents a 33% boost to total compound portfolio returns, in excess of the returns generated from
exposure to the underlying assets themselves.
My second test involves a Monte Carlo analysis of random returns drawn from a multivariate normal distribution with the same means and covariances as the historical sample. I followed this procedure:
1. Subset the returns matrix to contain all 53 markets with continuous history back to at least January 2000
2. Find the monthly mean returns and covariances for all 53 markets over the full sample period
3. Draw a random sample from the multivariate normal distribution with the same dimension as the original sample (i.e. 53 markets and 245 months)
4. Solve for the optimal ex post portfolios for the sample returns, based on five objectives
• Equal Weight
• Inverse Volatility
• Equal Risk Contribution
• Hierarchical Risk Parity
• Maximum Diversification
5. Scale the weights to target a 10% ex post annualized volatility
6. Measure the realized rebalancing premium for each optimization
7. Repeat 10,000 times
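Steps 2 and 3 amount to fitting a multivariate normal to the historical sample and redrawing it; the stand-in returns matrix below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def mvn_resample(returns):
    """One Monte Carlo sample with the historical monthly means and
    covariances, and the same dimensions as the original returns matrix."""
    r = np.asarray(returns)
    mu = r.mean(axis=0)
    cov = np.cov(r, rowvar=False)
    return rng.multivariate_normal(mu, cov, size=r.shape[0])

hist = rng.normal(0.005, 0.04, size=(245, 6))  # stand-in for 53 markets x 245 months
sample = mvn_resample(hist)                    # same shape as hist
```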
Figure 7: Multivariate Monte Carlo distributions of rebalancing premia for different risk parity methods, 10,000 samples. SIMULATED RESULTS
Analysis by ReSolve Asset Management. Data from CSI. Past performance does not guarantee future results. SIMULATED RESULTS. Please see details of methodology and disclaimers in the reference paper.
The results from the multivariate Monte Carlo model mirror the results from the bootstrap test above. Equally-weighted portfolios produced a rebalancing premium greater than 1.2% per year, and the
maximum diversification method produced a premium greater than 1.68% per year, in over 95% of samples, respectively. Hierarchical Risk Parity [HRP-VAR] portfolios produced a negative rebalancing
premium about 64% of the time.
Consistent with bootstrap results, the analysis indicates that the expected value for rebalancing premium of the maximum diversification risk parity portfolio is a very material 2.57%. The weighted
average compound return of risk parity portfolio constituents is about 8% in the historical sample. As such, this magnitude of rebalancing premium represents a 32% boost to total compound portfolio
returns, in excess of the returns generated from exposure to the underlying assets themselves.
Accounting for dynamic portfolio construction and leverage
The bootstrap and Monte Carlo analyses above provide useful estimates for the distribution of rebalancing premia available from appropriately diversified portfolios of our core asset universe
assuming constant weights. However, they do not account for the interaction effects of dynamic portfolio rebalancing.
In practice many risk parity strategies constantly update the optimal portfolio weights, and target portfolio leverage, in response to changes in estimated covariances. It is not clear how this
dynamic portfolio formation and scaling impacts the expected rebalancing premium.
To better understand these interactions, I performed the following test using the market returns and portfolio weights for the staggered monthly rebalanced HRSS MAX DIV risk parity method from 1985
to May 2020:
1. Draw a random sample (with replacement) of blocks of rows of the daily returns/weights matrices. The sample length is equal to the number of rows in the returns/weights matrices
2. Create new returns and weights matrices based on the sample row index
3. Calculate the rebalancing premium for this sample
4. Repeat 10,000 times
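The block-resampling in steps 1 and 2 can be sketched as follows; the 21-day block length and the stand-in returns/weights are assumptions for illustration (the article does not state a block length):

```python
import numpy as np

rng = np.random.default_rng(3)

def block_bootstrap_index(n_rows, block_len=21):
    """Row indices built from contiguous blocks drawn with replacement,
    trimmed to the original number of rows (step 1 of the procedure)."""
    idx = []
    while len(idx) < n_rows:
        start = rng.integers(0, n_rows - block_len + 1)
        idx.extend(range(start, start + block_len))
    return np.array(idx[:n_rows])

# Resample returns and daily weights with the SAME index so each day's
# weights stay paired with that day's returns (step 2).
returns = rng.normal(0.0, 0.01, size=(1000, 4))
weights = rng.dirichlet(np.ones(4), size=1000)
idx = block_bootstrap_index(1000)
daily_portfolio_returns = (returns[idx] * weights[idx]).sum(axis=1)  # input to step 3
```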
Figure 8: Distribution of empirical rebalancing premium from row-wise resampled weights and returns for monthly rebalanced HRSS MAX DIV portfolio. SIMULATED RESULTS
Analysis by ReSolve Asset Management. Data from CSI. Past performance does not guarantee future results. SIMULATED RESULTS. Please see details of methodology and disclaimers in the reference paper.
With reference to Figure 5, the realized annual rebalancing premium for the monthly rebalanced HRSS MAX DIV risk parity implementation was 3.3%. The average annual rebalancing premium realized across
our 10,000 samples was 3.17%, which is very close to the realized value. In our bootstrap, the probability of a negative realized rebalancing premium was less than 5%.
My analysis reveals that the dynamic rebalancing and scaling activity in the dynamic risk parity portfolio may improve on the rebalancing premiums generated by fixed-weight methods, with an expected
premium of 3.17% for the dynamic method versus 2.57 – 2.69% for the fixed-weight version.
Concluding thoughts
With balanced allocations to a wide variety of commodities as well as stock and bond indices from around the world, risk parity strategies are designed to thrive in any future economic regime.
I examined the distribution of rebalancing premiums for a simple risk parity implementation (a version of the Permanent Portfolio) consisting of US stocks, gold and bonds from 1982 through May 2020.
I then proceeded to analyze historical and expected future rebalancing premia for a variety of global risk parity strategies composed of 64 futures markets from June 1985 through May 2020.
The rebalancing premium when applied to the three markets in the Permanent Portfolio was over 1% per year. The most diversified risk parity portfolios produced a rebalancing premium of more than 3%
per year.
I analyzed the distribution of expected rebalancing premia for our futures universe using three different methods. Every model yielded a very high probability of realizing a positive rebalancing
premium. The expected value of the premium was at least 2.5% per year, and over 3% per year for the dynamic approach.
As with most sources of excess return, what appears to be a “free lunch” often requires a different kind of sacrifice. The rebalancing premium is no different. The diversified risk parity portfolio
explored in this article has many attractive properties, but it is expected to behave quite differently from traditional portfolios. This may lead to multi-year periods of underperformance relative
to a domestic 60/40 portfolio. Investors must have the discipline and internal fortitude required to suffer the tracking error to common benchmark portfolios.
In a low return world, the ability to generate 2-3% per year in excess return with no expectation of higher volatility is very attractive. As of mid-2020 Vanguard forecasts that a global 60/40
portfolio may produce an expected annualized return between 3.5% and 5.5% (before costs) over the next ten years. If an optimal risk parity portfolio may produce 2.5% per year or more in rebalancing
premium in excess of returns produced by the underlying investments, at approximately the same long-term volatility, this may represent an attractive alternative to global 60/40, with the added
benefit of owning a diverse set of markets that benefit from a wider range of economic outcomes.
Adam Butler, CFA, CAIA, is co-founder and chief investment officer of ReSolve Asset Management. ReSolve manages funds and accounts in Canada, the United States, and internationally.
Confidential and proprietary information. The contents hereof may not be reproduced or disseminated without the express written permission of ReSolve
Asset Management Inc. (“ReSolve”). These materials do not purport to be exhaustive and although the particulars contained herein were obtained from sources ReSolve believes are reliable, ReSolve does
not guarantee their accuracy or completeness.
The material has been provided to you solely for information purposes and does not constitute an offer or solicitation of an offer or any advice or recommendation to purchase any securities or other
financial instruments and may not be construed as such.
This information is not intended to, and does not relate specifically to any investment strategy or product that ReSolve offers. It is being provided merely to provide a framework to assist in the
implementation of an investor’s own analysis and an investor’s own view on the topic discussed herein. The investment strategy and themes discussed herein may be unsuitable for investors depending on
their specific investment objectives and financial situation.
ReSolve is registered as an Investment Fund Manager in Ontario, Quebec and Newfoundland and Labrador, and as a Portfolio Manager and Exempt Market Dealer in Ontario, Alberta, British Columbia and
Newfoundland and Labrador. ReSolve is also registered as a Commodity Trading Manager in Ontario and Derivatives Portfolio Manager in Quebec. Additionally, ReSolve is a Registered Investment Adviser
with the SEC. ReSolve is also registered with the Commodity Futures Trading Commission as a Commodity Trading Advisor and a Commodity Pool Operator. This registration is administered through the
National Futures Association (“NFA”). Certain of ReSolve’s employees are registered with the NFA as Principals and/or Associated Persons of ReSolve if necessary or appropriate to perform their
responsibilities. ReSolve has claimed an exemption under CFTC Rule 4.7 which exempts ReSolve from certain part 4 requirements with respect to offerings to qualified eligible persons in the U.S.
General information regarding hypothetical performance and simulated results.
It is expected that the simulated performance will periodically change as a function of both refinements to our simulation methodology and the underlying market data. Equity line results show the
growth of $1 or $100 assuming the purchase and sales of securities were executed at their daily closing price. Profits are reinvested and the simulation does not reflect commissions or transaction
costs of buying and selling securities, and no management fees were deducted. Any strategy carries with it a level of risk that is unavoidable. No investment process can guarantee or achieve
consistent profitability all the time and will necessarily encounter periods of extended losses and drawdowns. These results are not specific to any investment strategy ReSolve actually trades and
are simply proof of concept for illustrative purposes only.
The universe of markets used in risk parity simulations consists of the following futures. For computation of returns, futures were rolled monthly once both Open Interest and traded volume were
larger on the back month than the front month.
^1 This paper is not intended to be a comprehensive analytical treatment on stochastic portfolio theory and the mathematics of rebalancing. Readers who are interested in more comprehensive treatments
and proofs are invited to read “Stochastic Portfolio Theory” by Fernholz, “The Ex-Ante Rebalancing Premium” by Hillion and “Demystifying Rebalancing Premium: A Methodological Path to Risk Premia
Engineering” by Dubikovsky and Susinno.
^2 Importantly, all of the returns cited in this article are excess returns (i.e. net of borrowing costs) since the cost of leverage is built into futures returns.
^3 There are 461 monthly returns from inception of the S&P 500 e-mini futures contract.
^4 Specifically, we took the median result from 10,000 simulations for each combination of number of assets and average compound return.
^5 This is an approximation, but it is quite accurate when applied to typical investment portfolios with modest means and variances
^6 Or derive estimates from options or swaps markets.
^7 The rebalancing premium is also impacted by whether markets were dominated by cross-market trending or mean-reverting behavior, and interactions between these behaviors and volatility scaling
^8 See Lozada, Gabriel A. (2015) “For Constant-Duration or Constant-Maturity Bond Portfolios, Initial Yield Forecasts Return Best near Twice Duration”. See also Leibowitz, Martin L., Anthony Bova,
and Stanley Kogelman (2014), “Long-Term Bond Returns under Duration Targeting,” Financial Analysts Journal 70/1: 31–51
^9 See Asness, Clifford S., Moskowitz, Tobias J., & Pedersen, Lasse H. (2013) “Value and Momentum Everywhere”
^10 See Moskowitz, Tobias J., Ooi, Yao Hui, & Pedersen, Lasse H. (2012) “Time Series Momentum”
Structure providing information about a queue family
The VkQueueFamilyProperties structure is defined as:
typedef struct VkQueueFamilyProperties {
VkQueueFlags queueFlags;
uint32_t queueCount;
uint32_t timestampValidBits;
VkExtent3D minImageTransferGranularity;
} VkQueueFamilyProperties;
• queueFlags is a bitmask of VkQueueFlagBits indicating capabilities of the queues in this queue family.
• queueCount is the unsigned integer count of queues in this queue family. Each queue family must support at least one queue.
• timestampValidBits is the unsigned integer count of meaningful bits in the timestamps written via vkCmdWriteTimestamp2 or vkCmdWriteTimestamp. The valid range for the count is 36 to 64 bits, or a
value of 0, indicating no support for timestamps. Bits outside the valid range are guaranteed to be zeros.
• minImageTransferGranularity is the minimum granularity supported for image transfer operations on the queues in this queue family.
The value returned in minImageTransferGranularity has a unit of compressed texel blocks for images having a block-compressed format, and a unit of texels otherwise.
Possible values of minImageTransferGranularity are:
• (0,0,0) specifies that only whole mip levels must be transferred using the image transfer operations on the corresponding queues. In this case, the following restrictions apply to all offset and
extent parameters of image transfer operations:
□ The x, y, and z members of a VkOffset3D parameter must always be zero.
□ The width, height, and depth members of a VkExtent3D parameter must always match the width, height, and depth of the image subresource corresponding to the parameter, respectively.
• (A[x], A[y], A[z]) where A[x], A[y], and A[z] are all integer powers of two. In this case the following restrictions apply to all image transfer operations:
□ x, y, and z of a VkOffset3D parameter must be integer multiples of A[x], A[y], and A[z], respectively.
□ width of a VkExtent3D parameter must be an integer multiple of A[x], or else x + width must equal the width of the image subresource corresponding to the parameter.
□ height of a VkExtent3D parameter must be an integer multiple of A[y], or else y + height must equal the height of the image subresource corresponding to the parameter.
□ depth of a VkExtent3D parameter must be an integer multiple of A[z], or else z + depth must equal the depth of the image subresource corresponding to the parameter.
□ If the format of the image corresponding to the parameters is one of the block-compressed formats then for the purposes of the above calculations the granularity must be scaled up by the
compressed texel block dimensions.
Queues supporting graphics and/or compute operations must report (1,1,1) in minImageTransferGranularity, meaning that there are no additional restrictions on the granularity of image transfer
operations for these queues. Other queues supporting image transfer operations are only required to support whole mip level transfers, thus minImageTransferGranularity for queues belonging to such
queue families may be (0,0,0).
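The restrictions above can be summarized as a small validity check. The sketch below is in Python for illustration (a real application would perform the equivalent check in its own language before recording transfer commands); the function name and example values are hypothetical, and the scaling of granularity for block-compressed formats is omitted:

```python
def transfer_region_valid(offset, extent, subresource_extent, granularity):
    """Check one transfer region against minImageTransferGranularity.
    (0,0,0) means only whole-mip-level transfers are allowed; otherwise
    offsets must be multiples of the granularity, and extents must be
    multiples unless the region runs to the edge of the subresource."""
    if granularity == (0, 0, 0):
        return offset == (0, 0, 0) and extent == subresource_extent
    for o, e, s, g in zip(offset, extent, subresource_extent, granularity):
        if o % g != 0:
            return False
        if e % g != 0 and o + e != s:
            return False
    return True

# With granularity (4,4,1): aligned regions pass, and a 3-wide extent is
# allowed only because it reaches the subresource edge (12 + 3 == 15).
aligned = transfer_region_valid((4, 8, 0), (4, 4, 1), (16, 16, 1), (4, 4, 1))
edge = transfer_region_valid((12, 12, 0), (3, 4, 1), (15, 16, 1), (4, 4, 1))
```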
The Device Memory section describes memory properties queried from the physical device.
For physical device feature queries see the Features chapter.
Multiplication Of Polynomials Worksheet Pdf
Mathematics, and multiplication in particular, forms the cornerstone of numerous academic disciplines and real-world applications. Yet for many learners, mastering multiplication can pose an
obstacle. To address this hurdle, teachers and parents have embraced a powerful tool: Multiplication Of Polynomials Worksheet Pdf.
Introduction to Multiplication Of Polynomials Worksheet Pdf
Free printable worksheets with answer keys on Polynomials adding subtracting multiplying etc Each sheet includes visual aides model problems and many practice problems
Worksheet by Kuta Software LLC, Algebra 1, Multiplying Polynomials. Find each product: 1) 8x^3(2x^2 + 8xy + 6y^2) = 16x^5 + 64x^4y + 48x^3y^2; 2) 2y
Value of Multiplication Practice: Understanding multiplication is essential, laying a strong foundation for advanced mathematical concepts. Multiplication Of Polynomials Worksheet Pdf offer structured
and targeted practice, promoting a deeper understanding of this fundamental arithmetic operation.
Development of Multiplication Of Polynomials Worksheet Pdf
Worksheets About Multiplication Of Polynomials Printable Multiplication Flash Cards
Multiplying a Polynomial and a Monomial Date Period Find each product 1 8 x 6x 6 48 x2 48 x 2 7n 6n 3 42 n2 21 n 3 3r 7r 8 21 r2 24 r Create your own worksheets like this one with Infinite Pre
Algebra Free trial available at KutaSoftware Title Multiplying a Polynomial and a Monomial
Multiplying polynomials worksheets will help students strengthen their algebra basics A polynomial is an expression that consists of variables constants and exponents which are combined using
different mathematical expressions such as addition subtraction multiplication and division Benefits of Multiplying Polynomials Worksheets
From traditional pen-and-paper exercises to interactive digital formats, Multiplication Of Polynomials Worksheet Pdf have evolved, accommodating diverse learning styles and preferences.
Kinds Of Multiplication Of Polynomials Worksheet Pdf
Fundamental Multiplication Sheets: Easy exercises focusing on multiplication tables, helping students build a solid arithmetic base.
Word Problem Worksheets
Real-life situations integrated into problems, enhancing crucial reasoning and application skills.
Timed Multiplication Drills: Exercises designed to boost speed and accuracy, assisting in rapid mental math.
Advantages of Using Multiplication Of Polynomials Worksheet Pdf
Multiplying Polynomials Worksheet Lesson 7 7 Worksheet Resume Examples
Step 1 Multiply the monomial by EVERY term making up the binomial 5x3 7x2 5x3 15xy Step 2 To find the product of two terms multiply the coefficients and add the exponents on powers with the same
variable as a base
Worksheet by Kuta Software LLC Algebra 2 PreAP Multiplying Polynomials Name Date Period y 2V0Y1d8v KKcu tMaF jStocfntnwlaNrHeX LNLwCO f a IATlllh QrDivgYh tgsN UrLewseeHrQvqeodG 1 Find the product of
each monomial and binomial Infinite Algebra 2 Multiplying Polynomials
Improved Mathematical Skills
Regular practice sharpens multiplication proficiency, boosting overall math abilities.
Enhanced Problem-Solving Abilities
Word problems in worksheets develop logical thinking and strategy application.
Self-Paced Learning Advantages
Worksheets accommodate individual learning speeds, promoting a comfortable and flexible learning environment.
How to Develop Engaging Multiplication Of Polynomials Worksheet Pdf
Including Visuals and Colors
Vivid visuals and colors catch attention, making worksheets visually appealing and engaging.
Incorporating Real-Life Scenarios
Relating multiplication to everyday situations adds relevance and practicality to exercises.
Tailoring Worksheets to Various Skill Levels
Customizing worksheets to differing proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games: Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Applications: Online platforms provide diverse and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Various Learning Styles
Visual Learners: Visual aids and diagrams support understanding for learners inclined toward visual learning.
Auditory Learners: Spoken multiplication problems or mnemonics accommodate students who grasp concepts through auditory methods.
Kinesthetic Learners: Hands-on tasks and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Implementation in Learning
Consistency in Practice: Regular practice reinforces multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety: A mix of repeated exercises and varied problem styles maintains interest and understanding.
Offering Constructive Feedback: Feedback helps identify areas for improvement, encouraging continued growth.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Difficulties: Monotonous drills can lead to disinterest; innovative approaches can reignite motivation.
Overcoming Fear of Mathematics: Negative perceptions around math can impede progress; creating a positive learning environment is essential.
Impact of Multiplication Of Polynomials Worksheet Pdf on Academic Performance
Research Studies and Findings: Research indicates a positive relationship between consistent worksheet use and improved mathematics performance.
Multiplication Of Polynomials Worksheet Pdf emerge as versatile tools, cultivating mathematical proficiency in students while accommodating diverse learning styles. From basic drills to interactive
online resources, these worksheets not only strengthen multiplication skills but also promote critical thinking and problem-solving abilities.
11 Best Images Of Algebra 1 Multiplying polynomials worksheet Star Wars multiplication And
Multiplying Polynomials Word Problems Worksheet Pdf Worksheet Resume Examples
Check more of Multiplication Of Polynomials Worksheet Pdf below
Multiplying And Dividing Polynomials Worksheet Answers Leonard Burton s Multiplication Worksheets
Operations With Polynomials Worksheet Answers
Working With Polynomials Worksheet
Adding And Subtracting Polynomials Printable Worksheets Worksheets
Polynomials Multiplication Worksheets Best Kids Worksheets
FOIL for (2x + 7)(3x + 5): F (First) 2x · 3x = 6x², O (Outer) 2x · 5 = 10x, I (Inner) 7 · 3x = 21x, L (Last) 7 · 5 = 35. Combine like terms: 6x² + 10x + 21x + 35 = 6x² + 31x + 35.
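The FOIL expansion can be checked mechanically by multiplying coefficient lists, where index i holds the coefficient of x^i. This is an illustrative aside, not part of the Kuta worksheet; the like terms combine as 10x + 21x = 31x:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (index = power of x)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b  # x^i * x^j contributes to the x^(i+j) term
    return out

# (2x + 7)(3x + 5): constant term first, so [7, 2] and [5, 3]
print(poly_mul([7, 2], [5, 3]))  # [35, 31, 6], i.e. 6x^2 + 31x + 35
```

The same helper covers the monomial-times-binomial products elsewhere on the page.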
Adding And Subtracting Polynomials Printable Worksheets Worksheets
Operations With Polynomials Worksheet Answers
Polynomials Multiplication Worksheets Best Kids Worksheets
Multiplying Monomials And Polynomials With Two Factors Mixed Questions All
50 Multiplying Polynomials Worksheet Answers Chessmuseum Template Library
Multiplying Polynomials By Monomials Worksheet
Frequently Asked Questions (FAQs)
Are Multiplication Of Polynomials Worksheet Pdf suitable for all age groups?
Yes, worksheets can be tailored to different ages and ability levels, making them adaptable for various learners.
How often should students practice using Multiplication Of Polynomials Worksheet Pdf?
Consistent practice is essential. Regular sessions, ideally a few times a week, can yield considerable improvement.
Can worksheets alone improve math skills?
Worksheets are a valuable tool but should be supplemented with varied learning approaches for comprehensive skill development.
Are there online platforms offering free Multiplication Of Polynomials Worksheet Pdf?
Yes, many educational websites provide free access to a wide range of Multiplication Of Polynomials Worksheet Pdf.
How can parents support their children's multiplication practice at home?
Encouraging consistent practice, providing assistance, and fostering a positive learning environment are useful steps. | {"url":"https://crown-darts.com/en/multiplication-of-polynomials-worksheet-pdf.html","timestamp":"2024-11-12T21:59:49Z","content_type":"text/html","content_length":"28644","record_id":"<urn:uuid:209bc2d1-115e-4dc4-8ecf-6c3b22598e1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00683.warc.gz"} |
Sylvain Edward Cappell (born 1946), a Belgian American mathematician and former student of William Browder at Princeton University, is a topologist who has spent most of his career at the Courant
Institute of Mathematical Sciences at NYU, where he is now the Silver Professor of Mathematics.
He was born in Brussels, Belgium and immigrated with his parents to New York City in 1950 and grew up largely in this city.^[1] In 1963, as a senior at the Bronx High School of Science, he won first
place in the Westinghouse Science Talent Search for his work on "The Theory of Semi-cyclical Groups with Special Reference to Non-Aristotelian Logic." He then graduated from Columbia University in
1966, winning the Van Amringe Mathematical Prize.^[2] He is best known for his "codimension one splitting theorem",^[3] which is a standard tool in high-dimensional geometric topology, and a number
of important results proven with his collaborator Julius Shaneson (now at the University of Pennsylvania). Their work includes many results in knot theory (and broad generalizations of that subject)^
[4] and aspects of low-dimensional topology. They gave the first nontrivial examples of topological conjugacy of linear transformations,^[5] which led to a flowering of research on the topological
study of spaces with singularities.^[6]
More recently, they combined their understanding of singularities, first to lattice point counting in polytopes, then to Euler-Maclaurin type summation formulae,^[7] and most recently to counting
lattice points in the circle.^[8] This last problem is a classical one, initiated by Gauss, and the paper is still being vetted by experts.
In 2012 he became a fellow of the American Mathematical Society.^[9] Cappell was elected and served as a vice president of the AMS for the term of February 2010 through January 2013.^[10]^[11] In
2018 he was elected to be a member of the American Academy of Arts and Sciences.^[12]
External links | {"url":"https://www.knowpia.com/knowpedia/Sylvain_Cappell","timestamp":"2024-11-01T23:08:05Z","content_type":"text/html","content_length":"90173","record_id":"<urn:uuid:e8a97477-97a3-4ea5-bc6c-065a18f6bc0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00021.warc.gz"} |
Talk:Principle of least action
From Scholarpedia
Reviewer A
This is a well-written article on the Principle of Least Action by one of the leaders in this field. Here are some comments about the material presented in some of the eleven sections.
In Section 1, it should perhaps be pointed out that, like Fermat's Principle of "Least" Time, Maupertuis' Principle of Least Action is a "geodesic" principle (since it involves the infinitesimal
length element ds) while Hamilton's Principle of Least Action is a "dynamic" principle (since it involves the infinitesimal time dt). Lastly, in Eq.(5), it might be more appropriate to place indices
on the q variable.
In Section 4, it should be pointed out that the second variation \(\delta^{2}S\) can be expressed in terms of the variation \(\delta x\) and the Jacobian deviation \(u\) (at fixed time t) as \(\delta^{2}S = \int_{0}^{T} \frac{\partial^{2}L}{\partial \dot{x}^{2}} \left( \delta\dot{x} \;-\; \delta x\;\frac{\dot{u}}{u} \right)^{2} dt \;\geq\; 0,\) which vanishes only when \(\delta\dot{x}\;u = \delta x\;\dot{u}\). The latter expression defines the kinetic focus. An explicit reference (such as C. Fox, An Introduction to the Calculus of Variations, Dover, 1987) would be appropriate in addition to Gray and Taylor's.
In Section 7, the exact solution of the quartic-potential problem is given in terms of the Jacobi elliptic function \({\rm cn}(z|m)\) as \(x(t) \;=\; (4E/C)^{1/4}\;{\rm cn}\left( \frac{4K}{T}\;t \;\left|\;\frac{1}{2}\right. \right),\) where \(K = K(1/2) = 1.85407...\) is the complete elliptic integral of the first kind evaluated at \(m = 1/2\) (the lemniscatic case) and the period is \(T = 4K\,(m^{2}/4EC)^{1/4}.\) When we compare the exact angular frequency \(\omega = 2\pi/T\) with Eq.(15), we indeed find that \(\omega/[{\rm Eq.(15)}] = \pi/(2^{3/4}\,K) = 1.0075.\)
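The numerical claim at the end of this comment can be verified with nothing beyond the standard library, using the arithmetic-geometric mean identity K(m) = π / (2 agm(1, √(1−m))). This check is an illustrative aside, not part of the original review:

```python
import math

def agm(a, b, tol=1e-15):
    # Arithmetic-geometric mean of a and b.
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ellipk(m):
    # Complete elliptic integral of the first kind, K(m).
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - m)))

K = ellipk(0.5)
ratio = math.pi / (2 ** 0.75 * K)
print(round(K, 5), round(ratio, 4))  # 1.85407 1.0075
```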
In Section 8, while the choice of sign for the relativistic Lagrangian (or action) given in Eq.(16) might appear to be a matter of choice, the incorrect sign chosen in Eq.(16) is incompatible with
the conservation laws derived from it by Noether method.
In Section 9, it might be useful to mention the wonderfully elegant derivation of the Schroedinger equation by Feynman and Hibbs in Chapter 4 of their textbook "Quantum Mechanics and Path Integrals"
(McGraw-Hill, 1965). The reader should also be refered to Yourgrau and Mandelstam for additional historical comments.
Reviewer B
Since my expertise is restricted to classical (i.e. non-relativistic and non-quantum) mechanics, my review refers only to the classical part of the article. This is one of the best articles of its
kind, i.e. among those found in electronic and traditional encyclopaedias (Wiki, Britannica etc.). It's brief, authoritative, with unusual detail (e.g. study of second variation), and highly
readable, i.e. no unnecessary formalisms ("epsilonics"). Its author has published several original papers on this subject. If I could make one suggestion for improvement, that would be to use small
Greek delta for isochronous (vertical) variations, i.e. time kept fixed, and UPPER case Greek delta for non-isochronous (non-vertical, or skew) ones; see e.g. E. Whittaker's "Analytical Dynamics"
(1937, pp.246 ff.) Therefore I do recommend its publication. | {"url":"http://scholarpedia.org/article/Talk:Principle_of_least_action","timestamp":"2024-11-05T02:46:48Z","content_type":"text/html","content_length":"25200","record_id":"<urn:uuid:564af9a7-499b-4d37-b88e-520d9a7a5b1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00671.warc.gz"} |
Triangular Automata Mathematica Package
Mathematica Package
Triangular Automata (TA) stands for cellular automata in the triangular grid. This Mathematica notebook introduces the topic and demonstrates the functionalities of the Triangular Automata package.
Download this notebook: https://github.com/paulcousin/triangular-automata-mathematica
More information: https://paulcousin.github.io/triangular-automata
Paul Cousin
Run the following command to import the package from the URL https://paulcousin.github.io/triangular-automata-mathematica/TriangularAutomata.wl .
Cellular automata in the triangular grid, or Triangular Automata (TA) for short, have already been studied in a few papers [1-17]. This work will focus on a natural subset of TA called Elementary
Triangular Automata (ETA).
ETA cells hold only binary states, each cell will thus either be:
“alive” and colored purple, with a state s = 1
“dead” and colored white, with a state s = 0
ETA rules determine the future state of a cell based on its current state and the states of its neighbors, regardless of their orientation. This results in only 8 possible local configurations. They
can be plotted with TAConfigurationPlot.
The package uses a graph-theoretical framework developed in a previous work on Graph-Rewriting Automata [20]. The triangular grid will here be considered as a graph. This graph must be expanded at
each time step to simulate an infinite grid. The region of influence of a single cell grows in hexagonal layers. This is thus the most efficient way to expand the graph as well.
It is useful to see the triangular grid as a graph because computing the evolution of ETA is made quite easy by properties of its adjacency matrix A and state vector S. Every vertex v of this graph
will hold a state s(v). The neighborhood N(v) of a vertex is defined as the set of its adjacent vertices.
The configuration c(v) of a vertex is a number determined by its own state s(v) and the states of its neighbors; with the indexing used earlier, each of the 8 possibilities gets a value from 0 to 7.
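As an illustrative aside (in Python rather than the Wolfram Language), one natural encoding — assumed here, and not necessarily the package's exact indexing — is c(v) = s(v) + 2·(number of alive neighbors), which yields the 8 values 0 through 7. The neighbor counts are exactly the matrix-vector product A·S:

```python
# Toy 4-cell adjacency (a triangle of mutually adjacent cells plus one pendant).
# Illustrative topology only -- not the package's hexagonally expanded grid.
A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
S = [1, 0, 1, 0]  # states s(v)

# Number of alive neighbors of each vertex: the matrix-vector product A.S
alive = [sum(a * s for a, s in zip(row, S)) for row in A]

# One possible configuration number c(v) in 0..7 (assumed indexing):
configs = [s + 2 * n for s, n in zip(S, alive)]
print(configs)  # [3, 4, 3, 2]
```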
The space of possible ETA rules is finite. For each one of the 8 configurations, a rule must specify whether the vertex will be dead or alive at t+1. Consequently, there are only 2^8 = 256 possible rules. For
this reason, ETA can be seen as the two-dimensional counterpart of Wolfram's 256 Elementary Cellular Automata [18-19]. Furthermore, the triangle is the regular polygon tiling 2D space with the
smallest number of neighbors per cell. ETA are thus the most basic 2D cellular automata and have a fundamental aspect in this regard.
Each rule R is a map from configuration space to state space.
Each rule can be labeled by a unique rule number n. We will use the labeling system which was independently proposed in references [9] and [20], since it must be somewhat natural and because it has
useful properties.
This system, inspired by the Wolfram code [18], is such that a rule number in its binary form displays the behavior of the rule. Starting from the right, its digits indicate the future state for each
configuration as they have been ordered previously. Rules can be plotted with the TARulePlot function.
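Decoding a rule number follows the Wolfram-code convention directly: binary digit c of n is the future state for configuration c. A small Python sketch, independent of the package:

```python
def rule_map(n):
    """Map each configuration 0..7 to its next state under rule number n."""
    return {c: (n >> c) & 1 for c in range(8)}

# Rule 181 = 0b10110101: configurations 0, 2, 4, 5 and 7 become alive.
print(rule_map(181))
```

Since there are 8 binary digits, valid rule numbers run from 0 to 255.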
Starting Points
Grids have a special format. They are captured in a list with three elements: a precursor of the adjacency matrix, a state vector and a coordinates vector. The first two are in the SparseArray format.
To simplify things, this package provides a few starting points ready to use:
TAStartOneAlive: a grid with only one alive cell at the center
TAStartLogo: a grid with a logo that contains all 8 local configurations.
TAStartRandom[n,d]: a grid with cells that are randomly either alive or dead on n layers, with density d (0.5 by default).
With the TAEdit function, you can edit a grid in a pop-up window.
We can evolve these grids with different rules using the function TAEvolve. Let’s try to evolve a single living cell with rule 181.
As expected from the earlier plot of rule 181, the environment has become alive and we have three dead cells surrounding a central alive cell. It would be nice to see what will happen after that. The
function TANestEvolve can be used to jump ahead several time steps. Let’s look at what TAStartOneAlive will look like after 64 time steps of evolution with rule 181.
Grids wider than 64 layers will not be rendered. You can use TAGridPlot to render them.
With this function, all the intermediate steps are lost and we only get the last grid. TANestListEvolve returns a list with all the intermediate steps.
TAEvolutionPlot shows an animated version of what we have just computed.
TAEvolutionPlot3D[start,rule,steps] creates a 3-dimensional space-time plot.
These plots can be exported to 3D software like Blender with the following code:
Alternating Rules
In the functions that use nested evolution, you can always provide a list of rule numbers through which the functions will go one by one.
Twin Rules
TANegativeGrid returns the negative of a grid and TANegativeRule returns the number of the twin rule that has the same effect but in the negative world.
Center Columns
The center column produced over the iterations of a rule can be efficiently computed with the function TACenterColumn.
You can specify the starting grid for this computation.
The center columns can be displayed in a square plot to make some properties of their structure apparent.
The first two layers of a grid starting with one living cell will remain uniform. The function TACenterColumn returns their states as well.
These states can be plotted with RGB colors.
Final Thoughts
If you want to delve deeper into the mathematics behind this work, you can read the related paper: https://arxiv.org/abs/2309.15795
I hope you will enjoy this package. Share what beautiful or interesting things you find with it!
[1] R. W. Gerling, “Classification of triangular and honeycomb cellular automata,” Physica A: Statistical Mechanics and its Applications, vol. 162, no. 2, pp. 196–209, Jan. 1990.
[2] C. Bays, “Cellular Automata in the Triangular Tessellation,” 1994.
[3] K. Imai and K. Morita, “A computation-universal two-dimensional 8-state triangular reversible cellular automaton,” Theoretical Computer Science, vol. 231, no. 2, pp. 181–191, Jan. 2000
[4] L. Naumov, “Generalized coordinates for cellular automata grids,” in International Conference on Computational Science, Springer, 2003, pp. 869–878.
[5] C. Bays, “Cellular Automata in Triangular, Pentagonal and Hexagonal Tessellations,” in Encyclopedia of Complexity and Systems Science, 2009, pp. 892–900.
[6] Y. Lin, A. Mynett, and Q. Chen, “Application of Unstructured Cellular Automata on Ecological Modelling,” in Advances in Water Resources and Hydraulic Engineering, Berlin, Heidelberg: Springer
Berlin Heidelberg, 2009, pp. 624–629. doi: 10.1007/978-3-540-89465-0_108.
[7] C. Bays, “The game of life in non-square environments,” in Game of Life Cellular Automata, Springer, 2010, pp. 319–329.
[8] B. Breckling, G. Pe’er, and Y. G. Matsinos, “Cellular automata in ecological modelling,” in Modelling Complex Ecological Dynamics: An Introduction into Ecological Modelling for Students, Teachers
& Scientists, Springer, 2011, pp. 105–117.
[9] M. Zawidzki, “Application of Semitotalistic 2D Cellular Automata on a Triangulated 3D Surface,” Int. J. DNE, vol. 6, no. 1, pp. 34–51, Jan. 2011, doi: 10.2495/DNE-V6-N1-34-51.
[10] G. M. Ortigoza, A. Lorandi, and I. Neri, “ACFUEGOS: An Unstructured Triangular Cellular Automata for Modelling Forest Fire Propagation,” in High Performance Computer Applications, I. Gitler and
J. Klapp, Eds., in Communications in Computer and Information Science, vol. 595. Cham: Springer International Publishing, 2016, pp. 132–143. doi: 10.1007/978-3-319-32243-8_9.
[11] M. Saadat, “Cellular Automata in the Triangular Grid,” 2016.
[12] S. Uguz, S. Redjepov, E. Acar, and H. Akin, “Structure and reversibility of 2D von Neumann cellular automata over triangular lattice,” International Journal of Bifurcation and Chaos, vol. 27, p.
1750083, 2017.
[13] M. Saadat and B. Nagy, “Cellular Automata Approach to Mathematical Morphology in the Triangular Grid,” ACTA POLYTECH HUNG, vol. 15, no. 6, pp. 45–62, 2018, doi: 10.12700/APH.15.6.2018.6.3.
[14] G. A. Wainer, “An introduction to cellular automata models with cell-DEVS,” in 2019 Winter Simulation Conference (WSC), IEEE, 2019, pp. 1534–1548.
[15] A. V. Pavlova, S. E. Rubtsov, and I. S. Telyatnikov, “Using cellular automata in modelling of the fire front propagation through rough terrain,” IOP Conf. Ser.: Earth Environ. Sci., vol. 579,
no. 1, p. 012104, Oct. 2020, doi: 10.1088/1755-1315/579/1/012104.
[16] M. R. Saadat and B. Nagy, “Generating Patterns on the Triangular Grid by Cellular Automata including Alternating Use of Two Rules,” in 2021 12th International Symposium on Image and Signal
Processing and Analysis (ISPA), Zagreb, Croatia: IEEE, Sep. 2021, pp. 253–258. doi: 10.1109/ISPA52656.2021.9552107.
[17] M. R. Saadat and B. Nagy, “Copy Machines - Self-reproduction with 2 States on Archimedean Tilings,” 2023.
[18] S. Wolfram and others, A New Kind Of Science, vol. 5. Wolfram media Champaign, IL, 2002.
[19] E. W. Weisstein, “Elementary Cellular Automaton”, [Online]. Available: https://mathworld.wolfram.com/ElementaryCellularAutomaton.html
[20] P. Cousin and A. Maignan, “Organic Structures Emerging From Bio-Inspired Graph-Rewriting Automata,” in 2022 24th International Symposium on Symbolic and Numeric Algorithms for Scientific
Computing (SYNASC), Hagenberg / Linz, Austria: IEEE, Sep. 2022, pp. 293–296. doi: 10.1109/SYNASC57785.2022.00053. | {"url":"https://code.triangular-automata.net/","timestamp":"2024-11-03T00:19:45Z","content_type":"application/xhtml+xml","content_length":"26335","record_id":"<urn:uuid:81c9b3ef-2f88-4a7b-a6b7-d8f58adf0ff8>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00055.warc.gz"} |
OpenStax College Physics, Chapter 3, Problem 67 (Problems & Exercises)
An ice hockey player is moving at 8.00 m/s when he hits the puck toward the goal. The speed of the puck relative to the player is 29.0 m/s. The line between the center of the goal and the player
makes a $90.0^\circ$ angle relative to his path as shown in Figure 3.63. What angle must the puck's velocity make relative to the player (in his frame of reference) to hit the center of the goal?
Figure 3.63: An ice hockey player moving across the rink must shoot backward to give the puck a velocity toward the goal.
Question licensed under CC BY 4.0.
Final Answer
$74.0^\circ \textrm{ with respect to the player's velocity}$
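As a quick numerical check of the answer (an aside, not part of the original solution):

```python
import math

v_player = 8.00  # m/s, player's speed parallel to the goal line
v_puck = 29.0    # m/s, puck speed relative to the player

theta = math.degrees(math.asin(v_player / v_puck))  # angle from the goal direction
angle = 90.0 - theta                                # angle from the player's velocity
print(round(theta, 2), round(angle, 1))             # 16.01 74.0
```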
Solution video
Video Transcript
This is College Physics Answers with Shaun Dychko. This hockey player is moving 8 meters per second parallel to the goal line and he does a slap shot, and the puck moves at a speed of 29 meters per
second. It moves its own angle with respect to the horizontal here and the reference frame of the player. So this is the angle that the player would report. Somebody sitting in the stands or in the
audience there would perceive the puck as going straight sideways towards the goal. But this hockey player will see the puck as going down at an angle because they're moving forwards. So the question
is, what will this angle be? Well, we know the component of this puck's velocity and the reference frame of the player is going to have equal magnitude to the player's speed. So that's what this says
here. The velocity of the puck in the y direction is of the same size -- I wrote absolute value signs here just to say ignore negatives. We're just talking about magnitudes here. That's going to be
the same as the size of the player's speed or the size of his velocity. So, the y component of the puck's velocity is the velocity of the puck multiplied by the sine of this angle theta because this
y component here, this is the y component of the puck's velocity, it is the opposite leg of this right triangle. So we multiply sine theta by this hypotenuse which we're given. So, this equals this,
and this equals this, which means these two things equal each other and that's what we say on this line. So v puck times sine theta is equal to the speed of the player. So we'll divide both sides by
the speed of the puck and then we get sine theta is that fraction. Then take the inverse sine of both sides and you get theta is the inverse sine of the speed of the player divided by the speed of
the puck. So it's the inverse sine of 8 meters per second divided by 29 meters per second which is 16.01 degrees. Now that's not the final answer because the question asked us to find the angle with
respect to the player. So that means we have to find this angle in here. Now this full angle here is complementary. These two angles are complementary because they have to add up to make 90 and so we
go 90 minus the angle we just found, to give us 74.0 degrees with respect to the player's velocity. | {"url":"https://collegephysicsanswers.com/openstax-solutions/ice-hockey-player-moving-800-ms-when-he-hits-puck-toward-goal-speed-puck","timestamp":"2024-11-04T01:12:11Z","content_type":"text/html","content_length":"163968","record_id":"<urn:uuid:99c54fb6-0752-4d66-94b7-8753cd150b21>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00303.warc.gz"} |
M^3 (Making Math Meaningful)
We started with a Daily Desmos-style warmup, where they had to determine the equation of the line through the following points:
This is the work they did:
Notice that we found the y-intercept in two ways. Some of my students are more comfortable with the numeric solution, while others prefer algebra. We talked about the circumstances when the algebraic
solution would be better. We then put our equation in Desmos to see if we were right:
I handed back the work they had done yesterday figuring out how much wood they would get from the General Sherman tree. We looked at the volume using a cylinder and a cone after discussing the
different ways they could have calculated volume (I did mention frustum, too):
The actual volume is shown here, and the cone looks like it gave a closer estimate:
We then built on that with this:
I tasked them with calculating how many board feet they had, based on their calculations from yesterday.
That seems like a big number, but none of us really had any perspective on how big it is. A little Googling found this:
That is a lot of houses!
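The handout's actual figures are not reproduced above, but the arithmetic can be sketched. The dimensions below are the commonly published ones for the General Sherman (roughly 275 ft tall with a 102.6 ft base circumference) and are assumptions here, as is modeling the trunk as a cone; 1 board foot is 1/12 of a cubic foot:

```python
import math

height = 275.0         # ft -- assumed published figure
circumference = 102.6  # ft at the base -- assumed published figure

radius = circumference / (2 * math.pi)
cone_volume = math.pi * radius ** 2 * height / 3  # ft^3, treating the trunk as a cone
board_feet = cone_volume * 12                     # 1 board foot = 1/12 ft^3

print(round(cone_volume), round(board_feet))      # roughly 7.7e4 ft^3, 9.2e5 board feet
```

With these inputs the cone model still overestimates the commonly cited trunk volume, which is part of what makes comparing models against the real figure a good classroom discussion.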
They spent the rest of the period working (individually) on this task.
We started with this warm-up:
I discovered many misconceptions my students had about buying and leasing cars. One student worked out how much it would cost to buy it and was shocked that it didn't work out to $19 000. You will
see interesting things in their work:
The third group was trying to figure out how many miles you would need to drive for the prices to be the same. There was a lot of good discussion that came out of this warm-up (which you don't see
here). At the very least my students now have some idea of the difference between leasing and buying.
Next, I showed them this picture:
I put them in (not-so-random) groups and gave them these instructions:
Each group worked on a whiteboard and was given 6 squares (-ish) of one colour of paper and an envelope (which stayed with the whiteboard):
They took the squares of paper with them as they went around the class to read each other's questions and choose the best one from each group. Here are their questions:
Lots of good questions, and very few off-topic ones. We did not really come to a consensus on the best question so I steered them toward the group that had "How much wood would you get from it if you
chopped it down?". I asked what information they needed and they said they needed the circumference, the height and the weight. I asked why we needed the weight... and we decided that we didn't. I
showed them this:
We had 15 minutes left so they had to individually figure out how much wood they would get out of this tree. Some wanted to know more about what the product would be - firewood, 2' by 4', etc. I told
them they could choose. Many had trouble starting so I asked what shape the tree was, which was enough to get them going. They didn't like not having one diameter to work with - I did like that they
had to make decisions and provide support for those decisions. And I didn't let them leave until they were able to hand in some work.
We started today with this visual pattern:
They fairly quickly recognized that it was quadratic (not linear), but struggled with how to find the rule. We looked at the number of rows and the number of helmets in each row for steps 1 - 4 and
then noticed a pattern:
I had promised my class another Kahoot today so that's what we did next. It is called "Quadratics #2" (my user name is "maryatwest"). I looked at the results of yesterday's quiz to choose what
questions to ask today. Here is a quick overview of the questions:
They were a little less focused on the math today, compared to yesterday, with lots of students guessing as soon as the question came up <insert my sad face here>. Lesson learned - one Kahoot is
plenty in one week!
The rest of the class was spent working on the quadratic handout from Monday or on this quadratics review sheet for those who had finished the previous handout.
I decided to skip today's warm-up as I had made a Kahoot for my class. I'm fairly certain that I can't share it unless you have a Kahoot account so I will tell you that my username is "maryatwest"
and it is called "Quadratics Vocabulary". I made it public so if you are interested, you should be able to get it. Here are the 11 questions (sorry about the inconsistent formatting):
That took about 25 minutes. They really enjoyed it and got into it (for the most part - there were a few who don't get into anything we do...). We went over each question right after they answered it
so there was learning going on, too. I gave them the rest of the period to continue working through yesterday's handout while I ran around helping them. I promised them another Kahoot tomorrow, if
they worked well today. So I'll be making that tonight. I will analyze the data from today's quiz to help choose good questions for tomorrow.
Monday morning means counting circle! Today we started at 7x + 12 and added -12x + 48. There were a few mistakes along the way, but they didn't struggle as much as I thought they might. I helped with
some strategies along the way, like adding 50 then subtracting 2.
We returned to quadratics today with this handout. It starts with a review of the vocabulary associated with quadratics, then works through all that we have done.
I circulated and helped them get going and talked about the axis of symmetry over and over to help them find symmetrical points. They are remembering and starting to put it all together. We will
continue with this tomorrow.
Last Thursday, at the OAME 2015 math conference, I presented a double-session entitled Rethinking Math Class. I am going to try to recap it here, as best I can, with links to everything! So expect
this to be a long post...
First, we played Quadratic Headbanz, which I have blogged about here.
Given that 64 people had signed up for my session I had to make a second set of headbanz. As my original set featured equations I decided to make the second set with graphs which you can get here. As
it turned out these ramped up the difficulty level for both the person wearing it and the person answering questions. This is a great activity with students and teachers, alike. It was especially
good given that my session was right after lunch! It is fun, but gets at the vocabulary and skills you want students to know and each interaction can help struggling students get better at asking and
answering questions. This is what it looked like:
Next I talked about warm-ups. This took a long time because there is so much good stuff out there to work with! I have blogged about warm-ups here and with my files, here.
I chose about a dozen or so people to do a counting circle. We started at 49 and counted down by 7. We talked strategy along the way and stopped so that they could figure out what we would be at 7
people along the circle and again shared strategies. I noted that you can do these with fractions and expressions, etc. They definitely help set the culture in my classroom where it's okay to make
mistakes and we value each other's ideas.
I showed off Andrew Stadel's Estimation 180 site a little and I had participants estimate his height with reasoning. We also looked at this one together:
On to Visual Patterns. I love visual patterns. We do them every Wednesday and they take longer than some of the other warm-ups, but that is because there is so much rich math that can be drawn out of
them. We did this one together:
We looked at the pattern in the number of black squares first, then looked at the number of white squares. I did not take a picture of what we came up with, so my challenge to you is to try to see
how the white squares are growing in at least three different ways. You can check out my post here to see what I did with my students.
Next up: Always-Sometime-Never. Here is the one we did, which I showed without the "answer" first.
I really like how these push you to think further as you need to consider many cases.
We then talked about Math Talks and how good they are at helping students develop their number sense.
I then showed off the Solve Me site, which has lots of balance bender type questions.
I am working with my grade 10 applied students to have them solve these puzzles and then write down the algebraic equivalent. They need to make the connection and see that they are doing the algebra
already, just in their head.
I also briefly mentioned and showed the Daily Desmos site which is filled with graphs that students need to try to recreate. The latest ones are all a little crazy, but if you search you can find
simpler linear, quadratic and other more curriculum-aligned ones.
Then we got to have a little more fun with Which One Doesn't Belong?
I showed the logo you see above and asked which one didn't belong. Different people came up with a reason for each of the four not belonging, which is how all the sets on the WODB? site are designed.
We then worked on the following incomplete set:
It was great to see groups trying to come up with a fourth graph that made this a solid WODB? set. There were several answers that groups created and the ones they found that turned out to not work
generated some of the best conversations.
I think the vast majority of math teachers in Ontario have heard of and love Desmos. I wanted to highlight the fantastic activities that they have created. These go from linear to rational functions
and truly engage students in meaningful mathematics.
We put those activities on hold while we did a little speed dating. It was too big of a group to rearrange the desks to properly speed date, but we improvised and they did 5 minutes of factoring. The
idea here is that each student factors the expression on the paper they chose and therefore become the expert on how to factor that one. Then, they show only the question and factor their partner's
question and because they are each the expert, they can provide help if anyone is struggling.
I then provided "play time". Teachers could try one of the Desmos activities (I had codes for them to try four of the activities) or do any combination of the following activities. I will briefly
describe each or provide links to blog posts. I feel that reading a blog post about the activity will give you more insight into what to expect if you choose to use the activity yourself.
Speedy Squares
Students first figured out how long it would take to build a 26 by 26 square out of linking cubes. They then used their times to figure out the relationship between number of blocks and time. They
designed a simple house and calculated how long it would take to build it out of blocks, with a little help from the Lego My House app.
I blogged about it here and here.
Barbie Bungee
Let's give Barbie a thrilling bungee jump without letting her get hurt! Students develop a model and use it to determine the number of rubber bands needed for a particular height.
I blogged about it here and the handout is here.
Matching: Linear & Quadratic
This activity has students matching up word descriptions, algebraic expressions, tables of values and area models. I love this one - it comes from Shell Centre for Mathematical Education and is
available here.
A brief blog post about it is here.
Matching: Combined Functions
This is an old OAME activity - I believe it starts on page 57 of this document. It is great for getting students thinking about combining functions in MHF4U.
Cup Stacking:
Students, using only 10 Styrofoam cups, determine the number of cups needed to reach their teacher's height. They can then find other relationships based on how they stack the cups and the types of
cups used. They can solve systems of linear equations if you start one stack of cups higher than the other. Lots of good stuff in this one.
I blogged about it over several days starting here.
Marble Roll:
This is how I start my MPM2D course. Students have to find the relationship between the height of a ramp and how far a marble will roll from the base of the ramp. I haven't blogged about this one (!)
so here is the handout.
Tying Knots:
Students first determine the relationship between the length of rope and the number of knots, then figure out how they can have the same number of knots in two ropes to produce the same length.
Since I don't seem to have blogged about this (yet), here is a link to Alex Overwijk's post and my handout (that needs work).
I started by looking at cost vs. mass for various sized boxes of Smarties, then cost per Smartie. On day 2, I explored the amount of air in a box of Smarties and have students construct their own box
that will minimize the air (or maximize the Smarties, depending on how you look at it).
I blogged about it here and here.
Intro to Calculus with Desmos:
This was my day 1 challenge to my Calculus & Vectors classes. I blogged about it here.
Log War:
I don't have any recollection of whether I made these myself (they look like I did). I'm pretty sure I stole the idea from someone else - sorry I can't properly give credit. Here are the cards as pdf
and doc.
It was fun to present to and work with such an enthusiastic group of math teachers. I hope you found something that will help you rethink your math class. | {"url":"https://marybourassa.blogspot.com/2015/05/","timestamp":"2024-11-14T00:12:05Z","content_type":"text/html","content_length":"158030","record_id":"<urn:uuid:86ee1920-294e-4e82-aa11-5c0d2f44e92b>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00508.warc.gz"} |
Determine Available Derived Water Properties — computableWaterProperties
Determine Available Derived Water Properties
This determines what things can be derived from the supplied variables. For example, if salinity, temperature, and pressure are supplied, then potential temperature, sound speed, and several other
things can be derived. If, in addition, longitude and latitude are supplied, then Absolute Salinity, Conservative Temperature, and some other things can be derived. Similarly, nitrate can be computed
from NO2+NO3 together with nitrate, and nitrite can be computed from NO2+NO3 together with nitrate. See the “Examples” for a full listing.
x: a specification of the names of known variables. This may be (a) an oce object, in which case the names are determined by calling names() on the data slot of x, or (b) a vector of character values indicating the names.
computableWaterProperties() returns a sorted character vector holding the names of computable water properties, or NULL, if there are no computable values.
See also
Other functions that calculate seawater properties: T68fromT90(), T90fromT48(), T90fromT68(), locationForGsw(), swAbsoluteSalinity(), swAlpha(), swAlphaOverBeta(), swBeta(), swCSTp(),
swConservativeTemperature(), swDepth(), swDynamicHeight(), swLapseRate(), swN2(), swPressure(), swRho(), swRrho(), swSCTp(), swSR(), swSTrho(), swSigma(), swSigma0(), swSigma1(), swSigma2(), swSigma3
(), swSigma4(), swSigmaT(), swSigmaTheta(), swSoundAbsorption(), swSoundSpeed(), swSpecificHeat(), swSpice(), swSpiciness0(), swSpiciness1(), swSpiciness2(), swSstar(), swTFreeze(), swTSrho(),
swThermalConductivity(), swTheta(), swViscosity(), swZ()
# Example 1
data(ctd)
computableWaterProperties(ctd)
#> [1] "Absolute Salinity" "CT"
#> [3] "Conservative Temperature" "N2"
#> [5] "Rrho" "RrhoSF"
#> [7] "SA" "SP"
#> [9] "SR" "Sstar"
#> [11] "density" "potential temperature"
#> [13] "sigma0" "sigma1"
#> [15] "sigma2" "sigma3"
#> [17] "sigma4" "sigmaTheta"
#> [19] "sound speed" "spice"
#> [21] "spiciness0" "spiciness1"
#> [23] "spiciness2" "theta"
#> [25] "z"
# Example 2: nothing can be computed from just salinity
computableWaterProperties("salinity")
#> NULL
# Example 3: quite a lot can be computed from this trio of values
computableWaterProperties(c("salinity", "temperature", "pressure"))
#> [1] "N2" "Rrho" "RrhoSF"
#> [4] "SP" "density" "depth"
#> [7] "potential temperature" "sigmaTheta" "sound speed"
#> [10] "spice" "theta" "z"
# Example 4: now we can get TEOS-10 values as well
computableWaterProperties(c(
  "salinity", "temperature", "pressure",
  "longitude", "latitude"
))
#> [1] "Absolute Salinity" "CT"
#> [3] "Conservative Temperature" "N2"
#> [5] "Rrho" "RrhoSF"
#> [7] "SA" "SP"
#> [9] "SR" "Sstar"
#> [11] "density" "depth"
#> [13] "potential temperature" "sigma0"
#> [15] "sigma1" "sigma2"
#> [17] "sigma3" "sigma4"
#> [19] "sigmaTheta" "sound speed"
#> [21] "spice" "spiciness0"
#> [23] "spiciness1" "spiciness2"
#> [25] "theta" "z" | {"url":"https://dankelley.github.io/oce/reference/computableWaterProperties.html","timestamp":"2024-11-07T18:40:03Z","content_type":"text/html","content_length":"17837","record_id":"<urn:uuid:0e118963-0d9d-4efe-9b58-c4df1b853dce>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00726.warc.gz"} |
Lesson 3 Chapter 10 #2 Researching The Polity Chapter Quiz
Questions and Answers
Remember, a variable is a characteristic or property that differs in value from one unit of analysis to another. Variables are concepts that you operationalize, or measure, in a sample of data. In
short, a variable is a measured concept (Concept → Measurement → Variable).
• 1.
The General Social Survey asked respondents to assess their own health as excellent, good, fair or poor. The level of measurement of this variable is
Correct Answer
A. Ordinal
Page 184. Ordinal variables, as their name suggests, can be ordered or ranked. Questions such as a survey rating scale (poor, fair, good, excellent) or level of agreement with a statement (strongly agree, somewhat agree, agree, disagree) used in attitude surveys are examples of ordinal variables. The categories have a natural order, but the distances between choices are not equivalent or standardized by set intervals.
• 2.
A student conducted a survey that asked respondents to identify their race as white, African American, or Other. The level if measurement of this variable is
Correct Answer
A. Nominal
Page 183 Nominal measurement merely involves the assignment of numeric labels to the categories of a variable. Race, as discussed, is an example of a nominal-level measure. Each time the program
counts a 1, it is also counting an African American. There are many other examples of nominal measures applicable to political research. Gender, political party affiliation, nationality, and
college major are common examples that you will probably use in your studies
• 3.
Avgtemp is a variable included in a data set of America's fifty largest cities. The variable represents the average annual temperature (in Fahrenheit) of each city. The level of measurement of
this variable is
Correct Answer
B. Interval
Page 185. Interval measurement does not have an absolute zero point. A good example of an interval variable is the Fahrenheit thermometer.
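The missing-zero point can be made concrete with a short Python sketch (the helper name and the temperature values are illustrative, not from the text): on the Fahrenheit (interval) scale, ratios are meaningless because 0°F is not "no temperature at all," while on the Kelvin (ratio) scale, which has a true zero, the same two readings show that 100°F is nowhere near "twice as hot" as 50°F.

```python
def f_to_kelvin(f):
    """Convert Fahrenheit (interval scale) to Kelvin (ratio scale)."""
    return (f - 32.0) * 5.0 / 9.0 + 273.15

# The naive Fahrenheit ratio suggests "twice as hot"...
f_ratio = 100.0 / 50.0
# ...but the ratio of the underlying absolute temperatures is only about 1.1.
k_ratio = f_to_kelvin(100.0) / f_to_kelvin(50.0)
print(round(f_ratio, 2), round(k_ratio, 2))
```

Ratios only carry meaning on scales with an absolute zero, which is exactly what separates ratio measurement from interval measurement.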
• 4.
Suppose that a researcher conducting a survey based on a sample of government workers asks respondents their annual incomes using these values: $20,000 or less; $20,000 through $60,000; $60,000 or more. A problem with this set of values is that
□ B. They are measured at the nominal level
□ D. They are not collectively exhaustive
□ E. They are not mutually exclusive
Correct Answer
E. They are not mutually exclusive
page 184
As with nominal variables, ordinal measures merely assign numeric labels to the categories of a variable. There are, however, certain rules you should follow when measuring, or coding, ordinal variables. Frankfort-Nachmias and Nachmias offer the following guidelines for assigning numbers to variables that can be ranked (Frankfort-Nachmias and Nachmias 2000, 304-309). First, assigned numbers should make intuitive sense. Higher scores, for example, should be assigned higher code numbers.
Second, the coding categories must be mutually exclusive. That is, each unit of analysis should fit into one and only one category. Consider the following measurement scheme for a respondent's
level of income:
1. $0–$20,000
2. $20,000–$40,000
3. $40,000–$60,000
4. $60,000–$80,000
The example violates the mutually exclusive rule.
Third, the coding scheme must be exhaustive. This means that every response must fit into a category. Looking at the income example just presented, you can readily see that anyone earning more
than $80,000 does not have a category representing their income level. Thus, perhaps an additional category could be coded as “Greater than $80,000.”
No amount should be used in two categories.
Last, categories must be specific enough to capture differences using the smallest possible number of categories. Frankfort-Nachmias and Nachmias call this requirement the “criterion of detail.”
In other words, while you want to ensure you meet the criteria, you do not want to have too many categories for a particular variable. For the income example in this section, you would not want
to code income as under $1,000; $1,000 to $2,000; $2,001 to $3,000; and so on.
• 5.
_______________ scales incorporate an empirical test of unidimensionality and are cumulative.
Correct Answer
B. Guttman
page 192
Guttman scales have several characteristics. First, they incorporate an empirical test of unidimensionality. They measure only a single dimension or attitude. Second, Guttman scales are
cumulative. Potential scale items are ordered according to the degree of difficulty associated with responding positively to each item. The technique, however, assumes that respondents who answer
positively to a difficult item will also respond positively to less difficult items.
As a result of the ordering process, Guttman scales generally yield scale scores resulting from a single set of responses. That is, to get a 20 on the ideological perception scale, a particular
pattern of responses is essential.
Table 10-3. Illustration of Unidimensionality
(items ordered from more difficult, at left, to less difficult, at right)

Respondent  Stranger  Prayer Group  Shopping  Services  Pray  Score
1           yes       yes           yes       yes       yes   5
2           no        yes           yes       yes       yes   4
3           no        no            yes       yes       yes   3
4           no        no            no        yes       yes   2
5           no        no            no        no        yes   1
6           no        no            no        no        no    0
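The cumulative structure in Table 10-3 can be expressed in code. In this Python sketch (the item names and the `guttman_score` helper follow the table but are my own construction), a respondent's scale score is simply the count of "yes" answers, and any pattern that is not cumulative, a "yes" on a harder item paired with a "no" on an easier one, is flagged as a scale error:

```python
# Items ordered from least to most difficult, per Table 10-3.
ITEMS = ["pray", "services", "shopping", "prayer_group", "stranger"]

def guttman_score(responses):
    """Return the Guttman scale score, or None for a non-cumulative pattern.

    responses: dict mapping item name -> True (yes) / False (no).
    """
    answers = [responses[item] for item in ITEMS]
    score = sum(answers)
    # Cumulative check: all the yes answers must come before all the nos.
    if answers != [True] * score + [False] * (len(ITEMS) - score):
        return None  # pattern violates unidimensionality
    return score

# Respondent 3 from the table: yes to the three easiest items only.
r3 = {"pray": True, "services": True, "shopping": True,
      "prayer_group": False, "stranger": False}
print(guttman_score(r3))  # 3

# A non-cumulative pattern (yes to shopping but no to services).
bad = dict(r3, services=False)
print(guttman_score(bad))  # None
```

This is the sense in which a particular score results from a single set of responses: under the cumulative assumption, a score of 3 can only mean "yes" to exactly the three easiest items.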
• 6.
___________________ refers to the extent to which a measurement procedure consistently measures whatever it measures.
Correct Answer
Reliability
page 186
• 7.
There are two types of _________________ measurement: interval measurement and ratio measurement.
Correct Answer
A. Metric
page 185
Metric measurement is more precise than either nominal or ordinal measurement. Numbers do not just stand for categories, as in nominal and ordinal measurements. There are two types of metric
measurement: interval measurement and ratio measurement. Interval and ratio measurements are very similar. With each level, the values assigned to the classes of a variable have meaning.
• 8.
Measurement ___________ is concerned with the effectiveness of the measuring instrument and the extent to which the instrument reflects the actual activity or behavior one wants to study.
Correct Answer
C. Validity
page 186 (10-4)
• 9.
____________ is an attempt to enhance the reliability and validity of measurement by using multiple and overlapping measurement strategies.
□ D. Unidimensionality scaling
Correct Answer
C. Triangulation
page 189
• 10.
A(N) _____________ variable takes on only certain values within its range
Correct Answer
A. Discrete
page 186
Continuous and discrete variables
• 11.
________________ writes that "measurement is the quantifying of any phenomenon, substantial or insubstantial, and involves a comparison with a standard"
Correct Answer
B. Paul D. Leedy
• 12.
____________are the weakest or least precise level of measurement.
Correct Answer
C. Nominal measures
Nominal measures are the weakest or least precise level of measurement. This is because they only categorize data into distinct groups or categories, without any inherent order or numerical
value. Nominal measures simply assign labels or names to different observations or variables, making them the least precise form of measurement. In contrast, ordinal measures have a natural order
or ranking, interval measures have equal intervals between values, and ratio measures have a true zero point and meaningful ratios between values.
• 13.
_________ measures lack any sense of relative size or magnitude; they only allow you to say that the classes of a variable are different. There is no mathematical relationship between the classes.
Correct Answer
B. Nominal
This question is asking about the type of measurement that lacks any sense of relative size or magnitude and only allows you to say that the classes of a variable are different, without any
mathematical relationship between the classes. The correct answer is "nominal".
• 14.
When coding, the categories of each unit of analysis fit into one and only one category. Example: 1.) 0-10, 2.) 11-20, 3.) 21-30
Correct Answer
C. Mutually exclusive
When coding categories of each unit of analysis, the term "mutually exclusive" means that each unit can only fit into one category and cannot fit into multiple categories at the same time. In
this case, the given examples of 0-10, 11-20, and 21-30 represent mutually exclusive categories because each unit can only belong to one of these ranges and cannot belong to multiple ranges
• 15.
The coding scheme that fits every response, leaving no response uncovered. Example: 1.) $0-$25,000, 2.) $25,001-$50,000, 3.) $50,000 or above
Correct Answer
C. Exhaustive
The given coding scheme includes all possible options or categories for the responses. It covers every possible range of income, including those below $25,000, between $25,001 and $50,000, and
above $50,000. Therefore, it can be considered exhaustive as it leaves no option or response uncovered.
• 16.
_________ level data is reported in the ten-year census. More specific examples include a state's population, the number of senior citizens living in cities, and the number of African Americans
living in a state. All percentages and proportions are also _______ because we start out with ______ measurements to derive them.
Correct Answer
D. Ratio
Ratio level data is reported in the ten-year census. More specific examples include a state's population, the number of senior citizens living in cities, and the number of African Americans
living in a state. All percentages and proportions are also ratio because we start out with actual measurements to derive them.
• 17.
When collecting your data, you should try to measure concepts at the ______________ level possible. This will permit enhanced mathematical manipulation and more sophisticated statistical analysis.
Correct Answer
B. Highest
When collecting data, it is ideal to measure concepts at the highest level possible. This allows for more advanced mathematical manipulation and sophisticated statistical analysis. By measuring
concepts at a higher level, researchers have more flexibility in analyzing the data and drawing meaningful conclusions. This can lead to more accurate and reliable results in their research.
• 18.
__________-level variables are always discrete.
Correct Answer
C. Nominal
Nominal-level variables are always discrete because they represent categories or groups that cannot be ordered or ranked. These variables have distinct categories with no inherent numerical value
or order. For example, variables like gender, ethnicity, or marital status are nominal variables as they represent different categories without any specific order or ranking. In contrast, ratio,
ordinal, and interval variables involve numerical values and can be continuous or discrete depending on the context.
• 19.
__________-level variables can be continuous or discrete.
Correct Answer
C. Ratio
Ratio-level variables can be continuous or discrete. Ratio level of measurement is the highest level of measurement that provides the most precise and comprehensive information about a variable.
It has all the properties of interval level, along with a true zero point, which allows for meaningful ratios and comparisons between values. Therefore, ratio-level variables can take on any
numerical value, whether they are continuous (can take on any value within a range) or discrete (can only take on specific values).
• 20.
_____________ involves the use of multiple observers for the same research activity. It reduces potential bias that might come from a single observer. Examples include the use of several
interviewers, analysts, and decision makers.
A. Methodological triangulation
C. Investigator triangulation
Correct Answer
C. Investigator triangulation
Investigator triangulation involves the use of multiple observers for the same research activity. This method reduces potential bias that might come from a single observer. By having several
interviewers, analysts, and decision makers, different perspectives and interpretations can be gathered, increasing the reliability and validity of the research findings.
• 21.
______________ combines two or more information collection methods in the study of a single concept. It uses the strengths of various methods. For example, you might use surveys to gather
information about a phenomenon. To complement this method, you might discretely observe and chart the activities of your subjects. This method can compensate for the possible bias that could
result from interviews and surveys.
A. Investigator triangulation
B. Methodological triangulation
Correct Answer
B. Methodological triangulation
Methodological triangulation combines two or more information collection methods in the study of a single concept. It utilizes the strengths of various methods to gather comprehensive and
reliable data. By using surveys to collect information and complementing it with discrete observation and activity charting, this method can help compensate for any potential bias that may arise
from interviews and surveys.
• 22.
__________ involve the principle of unidimensionality, which implies that the items comprising the _______ reflect a single dimension or concept.
Correct Answer
C. Scales
Scales involve the principle of unidimensionality, which means that the items comprising the scales reflect a single dimension or concept. Scales are used in measurement to assign numerical
values to different levels of a specific characteristic or attribute. These numerical values help in quantifying and comparing the levels of the concept being measured. Therefore, scales are a
reliable and valid tool for measuring and assessing various constructs or variables in research and data analysis.
• 23.
The following is an example of a ________? The United States Supreme Court has ruled that no state or local government may require the reading of the Lord’s Prayer or Bible verses in public
schools. The Court’s decision was correct. 1) Strongly disagree 2) Disagree 3) Undecided 4) Agree 5) Strongly agree
Correct Answer
C. Likert Scale
The given question is an example of a Likert Scale because it asks for the respondent's level of agreement or disagreement with a statement. The options provided range from strongly disagree to
strongly agree, allowing the respondent to express their opinion on the Court's decision.
• 24.
True or False: Likert scales have some obvious advantages. They are relatively easy to administer, they provide a more rational basis for item selection, and they provide a range of alternative
responses to each question.
Correct Answer
A. True
Likert scales do have some obvious advantages. They are relatively easy to administer, as they involve asking participants to indicate their level of agreement or disagreement with a statement.
They also provide a more rational basis for item selection, as they allow researchers to measure attitudes or opinions on a continuum. Additionally, Likert scales provide a range of alternative
responses to each question, allowing participants to choose the response that best represents their opinion. Therefore, the statement that Likert scales have some obvious advantages is true.
• 25.
True or False: Likert scales are more accurate than Guttman scales?
Correct Answer
B. False
The statement that Likert scales are more accurate than Guttman scales is false. Both Likert and Guttman scales are commonly used in research to measure attitudes or opinions, but they have
different characteristics. While Likert scales measure the degree of agreement or disagreement with a statement using a range of response options, Guttman scales are designed to measure a
respondent's level of agreement with a set of statements on a cumulative scale. The accuracy of a scale depends on various factors such as the quality of the items, the sample size, and the
context of the study. Therefore, it is not accurate to claim that one type of scale is universally more accurate than the other.
Formula of Percentage in Excel | Implementing Formula of Percentage
Excel Formula of Percentage (Table of Contents)
The formula of Percentage in Excel
The percentage of any number is its value out of 100. To calculate a percentage in Excel, we divide the smaller numerical value by the bigger numerical value and then click Percentage Style,
which is available on the Home ribbon under the Number section. There are a few more calculations of this kind: Increase by Percentage, where we subtract 1 from the ratio of the new value to the
original to get the actual percentage increase, and Percentage Change, where we subtract one value from the other and then divide by the original value to get the percentage.
The mathematical formula of Percentage is: (Numerator / Denominator) * 100
How to Implement Formula of Percentage in Excel?
Implementing the Formula of Percentage in Excel is very easy and useful, especially when working on restaurant spreadsheets, reseller commissions, income taxes, or interest rates.
We can implement the Formula of the Percentage option by 3 methods.
Method #1 – Percentage Icon under Home Tab
• Under the Home tab >> under the Number group >> click the % icon to convert the numerical value of the cell into a percent value.
Method #2 – Using Dialog Box Launcher
• Under Home tab >> under Number Group >> Click the Dialog box launcher icon.
• A dialog box of Format Cells will open. First, we need to select the Percentage option.
• Select the Number of decimal places as required. By default, it is 2. Then press OK.
Method #3 – Using Format Cells
• Right-click on it and select the Format Cell option.
• A dialog box of Format Cells will open. First, we need to select the Percentage option.
• Select the Number of decimal places as required. By default, it is 2. Then press OK.
Examples of Formula of Percentage in Excel
Let’s understand how to implement the Formula of Percentage in Excel with some examples:
Example #1
If we want to find the value of the percentage of a number, we have to multiply the percentage value by the number.
For example, if we want to calculate 50% of 1000, just multiply 1000 by 50%.
In the above figure, we need to determine the percentage in the value box.
Step 1: Select cell B3.
Step 2: Write the formula as =PRODUCT(B1, B2)
Step 3: Press the Enter key.
Step 4: The result will show as 500.
This can be done alternately too:
Step 1: Select cell B3.
Step 2: Write the formula as = B1* B2.
Step 3: Press the Enter key.
• The result will show as 500.
Example #2
We can also find the percentage from a value and a number. We just use the mathematical formula: Value divided by Number.
Percentage = Value / Number
For example, What is the percentage of 500 from 1000?
In the figure, we have the Number 1000, the value of 500, and we have to find out the Percentage.
Step 1: Select the Cell B3.
Step 2: Write the formula as = B2/B1.
Step 3: Press the Enter key.
Step 4: The result may be Decimal if the cell is not converted to a percentage. We have to convert the cell to Percentage.
Step 5: Select cell B3.
Step 6: Under Home tab >> under Number Group >> Click on the % button to convert the numerical value of the cell into the percent value.
• The result is now converted to a percentage value.
Example #3
We can find the percentage of each number out of the total value of the data set. We just use the mathematical formula: each number of the data set divided by the sum of the total data set.
Percentage of Each Number = Each Number of the Data Set / Sum of the total Data Set.
For Example, We need to determine the percentage of each employee’s basic pay.
Let us consider the example of the Basic pay of the employees’ table.
Step 1: Select cell A13 and Write Total Basic Pay.
Step 2: Select cell B13.
Step 3: Write the summation formula as =SUM(B3:B12) or =SUM(B3,B4,B5,B6,B7,B8,B9,B10,B11,B12)
Step 4: Press Enter.
Step 5: The result of the total basic Pay will appear.
Step 6: Select cell C2 and Write the Percentage.
Step 7: Select cell C3.
Step 8: Write down the formula as =B3/$B$13. Fix cell B13 using the $ sign or pressing F4 after selecting cell B13.
Step 9: The result may be Decimal if the cell is not converted to a percentage. We have to convert the cell to Percentage.
Step 10: Under Home tab >> under Number Group >> Click on the % button to convert the numerical value of the cell into the percent value.
Step 11: Select cell C3.
Step 12: Either copy the cell C3 and paste in the range C4: C12 or drag down the formula from C3 to C12 cell to copy the formula in all the cells with the required Percentage format.
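Outside Excel, the same share-of-total calculation is a one-liner per row. Here is a hypothetical Python sketch (the pay figures are made up for illustration):

```python
# Hypothetical basic-pay figures; each share is pay / total, shown as a percent.
pays = [25000, 32000, 18000, 25000]
total = sum(pays)
shares = [pay / total for pay in pays]

for pay, share in zip(pays, shares):
    print(f"{pay}: {share:.2%}")

# The shares always sum to 100% of the total.
assert abs(sum(shares) - 1.0) < 1e-9
```

Dividing each value by the fixed total mirrors the `$B$13` absolute reference in Step 8.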
Example #4
We can also find the increase or decrease in percentage between two values. We just use the mathematical formula: the difference between the numbers divided by the first number.
Change in percentage = (2nd Number – 1st Number) / 1st Number
For example, We need to find the profit percentage on the cost price.
Step 1: Select the cell F2 and write the formula as = (E2-D2)/D2
Step 2: Press the Enter key.
Step 3: Under Home tab >> under Number Group >> Click on the % button to convert the numerical value of the cell into percent value.
Step 4: Drag down the formula from F2 to F15.
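The same change-in-percentage formula is easy to express in code. A minimal Python sketch (the function name is my own):

```python
def pct_change(old, new):
    """Fractional change from old to new: (new - old) / old."""
    if old == 0:
        raise ZeroDivisionError("percentage change is undefined when the old value is 0")
    return (new - old) / old

# A cost price of 50 sold at 60 is a 20% profit:
print(f"{pct_change(50, 60):.0%}")  # 20%
```

Note the guard for a zero starting value, the same situation that produces #DIV/0! in Excel.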
Things to Remember
• If the denominator is zero, an error named #DIV/0! will appear.
• While comparing values to find the increase or decrease in percentage change, we will get a negative percentage value whenever the difference in the numerator is negative (that is, when the value has decreased).
• By using the IFERROR function, we can avoid getting any errors in the percentage calculation.
• We need to change the formatting of the cells or range to convert the result from decimal to a percentage value.
Recommended Articles
This has been a guide to the Formula of Percentage in Excel. Here we discussed methods to Implement the Formula of Percentage in Excel, practical examples, and a downloadable Excel template. You can
also go through our other suggested articles to learn more.
U.K. Holidays for WORKDAY() and WORKDAY_DIFF(). (Bonus: Easter!)
Dec 20, 2018 08:36 PM
I’ve created a UK version of my base to calculate holidays as input to WORKDAY() or WORKDAY_DIFF(). (For detailed information, see the post referenced above.) The base takes as input the years for
which you wish to calculate holidays and the applicable area (England, Wales, Scotland, Northern Ireland, or the Republic of Ireland). The output is a field called Holiday String containing a
ISO-formatted list of governmental holidays for the specified locality, ready to be copy-and-pasted as an attribute to either workday function. (Specifying holidays enables the functions to take both
weekend days and holidays into account in their calculations.)
This base requires a much more complicated algorithm than the earlier one, given either Good Friday or Easter Monday is an official holiday in the U.K. To calculate either date, one must first arrive
at a date for Easter itself — a computation that has remained challenging for hundreds of years. Having absolutely no idea how the calculation works, I merely implemented the algorithm given in the
New Scientist of March 30, 1961; my understanding is it works fine for future dates, but fails on historical dates earlier than 1905[ish]. (If you want to extract the Easter-calculation code for use
elsewhere, you’ll need fields {Y} — which is simply the numeric value of {Years Desired} — {EMonth}, and {EDay}.)
If you thought the U.S. version of this base was lightly QAed — and I should mention I found and corrected a bug in {Calculations::HolidayByYear} earlier today, so early adopters will want to update
their code; fortunately, it won’t cause problems until 2023 — this one was pretty much only given a lick and a promise before sending it out into the world.* So far, it’s handled everything I’ve
tossed at it — but, then, I said that about the other base. If any infelicities should appear, please post them as a reply so they can be corrected going forward.
* In my defense, though, it’s not as if it’s all that easy a base to quality check. For example, here’s the formula for EDay:
{Years Desired},
13)/25)+15),30)-MOD(MOD(Y,100),4) +32),7)-(7*INT((MOD(Y,19)+
MOD(MOD(Y,100),4) +32),7)))/433)) + 90)/25)) + 19),32),
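For comparison, here is a compact Python version of a standard Gregorian Easter computation (Butcher’s algorithm, not a transcription of the New Scientist formula above), which agrees with published Easter dates for modern years:

```python
def easter(year):
    """Gregorian Easter Sunday as (month, day), via Butcher's algorithm."""
    a = year % 19                      # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    g = (b - (b + 8) // 25 + 1) // 3   # century-based lunar corrections
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    shift = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * shift) // 451
    month, day = divmod(h + shift - 7 * m + 114, 31)
    return month, day + 1

print(easter(2018))  # (4, 1): Easter Sunday fell on 1 April 2018
```

Like the formula above, it only uses integer division and remainders, which is why the spreadsheet version devolves into nested MOD and INT calls.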
The Math Factor Podcast
Colm Mulcahy, of Spelman College in Atlanta, joins us to share his ice cream trick from his CardColm mathematical card trick column on the MAA website! You’re invited to explain how this works in the
comments below.
Colm also shares a quick puzzle, tweeted on his What Would Martin Gardner Tweet feed @WWMGT. And finally we touch on the Gathering For Gardner and the Celebration of Mind, held all over the world
around the time of Martin Gardner’s birthday, October 21.
And at last we get around to answering our quiz from a few weeks ago. There are indeed two solutions for correctly filling in the blanks in:
The number of 1’s in this paragraph is ___; the number of 2’s is ___; the number of 3’s is ____; and the number of 4’s is ___.
[spoiler] namely (3,1,3,1) and (2,3,2,1) [/spoiler]
We can vary this puzzle at will, asking
The number of 1’s in this paragraph is ___; the number of 2’s is ___; ….. and the number of N’s is ___.
For N=2 or 3, there are no solutions (Asking that all the numbers we fill in are between 1 and N); for N=4 there are two. For N=5 there is just one, for N=6 there are none and beyond that there is
just one. I think we’ll let the commenters explain that.
But here’s the cool thing.
One way to approach the problem is to try filling in any answer at all, and then counting up what we have, filling that in, and repeating. Let’s illustrate, but first stipulate that we’ll stick with
answers that are at least plausible– you can see that the total of all the numbers we fill in the blanks has to be 2N (since there are 2N total numbers in the paragraph).
So here’s how this works. Suppose our puzzle is:
There are ___ 1’s;___ 2’s; ___ 3’s; ___ 4’s; ___ 5’s
Let’s pick a (bad) solution that totals 10, say, (2,4,1,2,1). So we fill in:
There are __2_ 1’s; __4_ 2’s; _1__ 3’s; __2_ 4’s; _1__ 5’s
That’s pretty wrong! There are actually three 1’s in that paragraph, three 2’s; at least there is just one 3, and two 4’s and one 5. In any case this gives us another purported solution to try:
(3,3,1,2,1). Let’s fill that in:
There are __3_ 1’s; __3_ 2’s; _1__ 3’s; __2_ 4’s; _1__ 5’s
That attempt actually does have three 1’s; but has only two 2’s; it does have three 3’s but only one 4 and one 5. So let’s try (3,2,3,1,1):
There are __3_ 1’s; __2_ 2’s; _3__ 3’s; __1_ 4’s; _1__ 5’s
Lo and behold that works! We do in fact have three 1’s; two 2’s; three 3’s and yes, one 4 and one 5.
So we can think of it this way: filling in a purported solution and reading off what we actually have gives another purported solution.
In this case (2,4,1,2,1) -> (3,3,1,2,1) -> (3,2,3,1,1) -> (3,2,3,1,1) etc,
We can keep following this process around, and if we ever reach a solution that gives back itself, we have a genuine answer, as we did here.
So here’s an interesting thing to think about.
First, find, for N>=7, a correct solution; and a pair of purported solutions A,B that cycle back and forth A->B->A->B etc.
Second, find a proof that this is all that can happen (unless I’m mistaken)– any other purported solution eventually leads into the correct one or that cycle.
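The “fill in, recount, repeat” process is easy to automate. A quick Python sketch that reproduces the N=5 run above:

```python
def recount(sol):
    """Given a purported solution, count what the filled-in paragraph really says.

    The paragraph contains the labels 1..N once each, plus the N filled-in answers.
    """
    n = len(sol)
    numbers = list(range(1, n + 1)) + list(sol)
    return tuple(numbers.count(d) for d in range(1, n + 1))

guess = (2, 4, 1, 2, 1)
for _ in range(5):
    print(guess)
    guess = recount(guess)
# Converges to (3, 2, 3, 1, 1), a genuine fixed point: recount(guess) == guess.
```

Any genuine answer is exactly a fixed point of `recount`, so iterating from arbitrary starting guesses is one way to hunt for solutions and cycles.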
For procrastinators only, we celebrate √10 day! And we pose a new puzzle:
The number of 1’s in this quiz is ____
The number of 2’s in this quiz is ____
The number of 3’s in this quiz is ____
The number of 4’s in this quiz is ____
There are actually two solutions to this one, but more generally, what happens with more lines in the quiz?
Finally, here’s the link to the special issue of Nature with essays on the great Alan Turing.
We all know this feeling: someone’s in your seat, and now you’re the nutcase who’s going to take someone else’s seat. After all that what’s the probability the last person on the plane will be able
to sit in the correct seat?
The three number trick is just a simple version of this one (but here it is quicker and simpler).
In which we discuss still more 2012 facts—Matt Zinno points out that we are emerging from a spell of years with repeated digits, and in fact this is just about the longest run in the last 1000
years! (So, folks, enjoy working out other long spells!)
Ben Anderman shares his online Princess-and-Suitor app.
And Kyle and I discuss some bar bets, including the great Barbette, shown here in a photo by Man Ray:
The challenge this week is to work out a strategy for the following game, that works 50% of the time on average:
The Victim believes you will lose twice as often as you win, so in order to make money, you should somehow get The Victim to bet a little bit more than you do, say $1.50 for each $1 you put up.
The Victim writes any three numbers on three pieces of paper, turns them over, and mixes them up.
One by one you flip over a card, and then either stop, selecting that one, or discard it and move on. If you select the highest number overall, you win!
We discussed this in more generality a long time ago, but this version has the merit that it’s simple enough to demonstrate quickly, work out precisely why it works and on top of all that, of all the
cases has the single highest probability of winning per round.
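For readers who want to check a strategy (spoiler!): one classic approach is to look at the first card, then take the next card that beats it, settling for the last card if none does. With three distinct numbers it can be checked exhaustively, and it wins exactly half the time:

```python
from itertools import permutations

def pick(cards):
    """Skip the first card; take the next card higher than it, else the last."""
    first = cards[0]
    return cards[1] if cards[1] > first else cards[2]

# Only the relative order of the three numbers matters, so check all 6 orderings.
wins = sum(pick(p) == max(p) for p in permutations((1, 2, 3)))
print(wins, "wins out of 6")  # 3 wins out of 6: exactly 50%
```

At 50% wins against 3:2 odds in your favor, the bet is profitable in the long run.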
Let’s see: First, the “Big News“, a discussion of Carlos May, and another puzzle (a pretty easy one)
And still more 2012 facts! From Primepuzzles.net, we learn that
2012 = (1+2-3+4)*(5-6+7*8*9)
and there’s still more amazing stuff there that we didn’t try to read on the air.
We’ll have some pursuit puzzles over the next couple of weeks; this segment’s puzzle has a simple and elegant solution, but it might take a while to work it out!
In the meanwhile, here’s a little discussion about the glass of water problem.
Each time we add or subtract 50%, we are multiplying the quantity of water by 3/2 or 1/2. If we began with 1 glass’ worth, at each stage we’ll have a quantity of the form 3^m/2^n with m, n > 0.
Of course that can never equal 1, but we can get very close if m/n is very close to log[3] 2 = 0.63092975357145743710…
Unfortunately, there’s a serious problem: m/n has to hit the mark pretty closely in order for 3^m/2^n to get really close to 1, and to get within “one molecule”s worth, m and n have to be huge
How huge? Well, let’s see: an 8 oz. glass of water contains about 10^25 molecules; to get within 1/10^25 of 1, we need m=31150961018190238869556, n=49373105075258054570781 !! One immediate problem is
that if you make a switch about 100,000 times a second, this takes about as long as the universe is old!
But there’s a more serious issue.
In a glass of water, there’s a real, specific number of molecules. Each time we add or subtract 50%, we are knocking out a factor of 2 from this number. Once we’re out of factors of 2, we can’t truly
play the game any more, because we’d have to be taking fractions of water molecules. (For example, if we begin with, say, 100 molecules, after just two steps we’d be out of 2’s, since
100 = 2*2*25.)
But even though there are a huge number of water molecules in a glass of water, even if we arrange it so that there are as many 2’s as possible in that number, there just can’t be that many: 2^83 is
about as good as we can do (of course, we won’t have precisely 8 ounces any more, but still.)
If we are only allowed 83 or so steps, the best we can do is only m = 53, n = 84 (let’s just make the glass twice as big to accommodate that), and, as Byon noted, 3^53/2^84 is about 1.0021, not that
close, really!
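You can watch how slowly 3^m/2^n closes in on 1 with a short exact-arithmetic search; a sketch (the search bound is arbitrary):

```python
from fractions import Fraction
from math import log2

# For each m, the best n is the nearest integer to m*log2(3),
# since 3^m / 2^n is near 1 exactly when n/m is near log2(3).
best = None
for m in range(1, 60):
    n = round(m * log2(3))
    err = abs(Fraction(3**m, 2**n) - 1)
    if best is None or err < best[0]:
        best = (err, m, n)
        print(f"m={m:2d}, n={n:2d}: 3^m/2^n = {float(Fraction(3**m, 2**n)):.6f}")
# The last record is m=53, n=84, with ratio about 1.0021, matching the figure above.
```

The record-setting pairs (m, n) are scarce, which is the continued-fraction phenomenon hiding behind the problem.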
In which we confess further delight in arithmetic…
1) Send us your candidates for an interesting fact about the number 2012; the winner will receive a handsome Math Prize! As mentioned on the podcast, already its larger prime factor, 503, has a neat
connection to the primes 2,3,5, and 7.
2) So what is it about the tetrahedral numbers, and choosing things? In particular, why is the Nth tetrahedral number (aka the total number of gifts on the Nth day of Christmas) is exactly the same
as the number of ways of choosing 3 objects out of (N+2)? Not hard, really, to prove, but can you find a simple or intuitive explanation?
3) Finally, about those M&M’s. Maybe I exaggerated a little bit when I claimed this problem holds all the secrets of the thermodynamics of the universe, but I don’t see how! Many classic equations,
such as Newton’s Law of Cooling or the Heat Equation, the laws of thermodynamics, and fancier things as well, can all be illustrated by shuffling red and blue M&M’s around. What I don’t understand is
how anything got done before M&M’s were invented!
Quick interviews with folks here at the Gathering For Gardner, including Stephen Wolfram, Will Shortz, Dale Seymour, John Conway and many others.
Report From The Gathering For Gardner [ 18:58 ]
A bit lazy, but we’re pretty far behind. Herewith, are
GP: Switcheroo!
GQ: Durned Ants
GR: VIth Anniversary Special
GS: I Met a Man
Benford’s Law is really quite amazing, at least at first glance: for a wide variety of kinds of data, about 30% of the numbers will begin with a 1, 17% with a 2, on down to just 5% beginning with a
9. Can you spot the fake list of populations of European countries?
┃ │List #1 │List #2 ┃
┃Russia │142,008,838│148,368,653┃
┃Germany │82,217,800 │83,265,593 ┃
┃Turkey │71,517,100 │72,032,581 ┃
┃France │60,765,983 │61,821,960 ┃
┃United Kingdom │60,587,000 │60,118,298 ┃
┃Italy │59,715,625 │59,727,785 ┃
┃Ukraine │46,396,470 │48,207,555 ┃
┃Spain │45,061,270 │45,425,798 ┃
┃Poland │38,625,478 │41,209,072 ┃
┃Romania │22,303,552 │25,621,748 ┃
┃Netherlands │16,499,085 │17,259,211 ┃
┃Greece │10,645,343 │11,653,317 ┃
┃Belarus │10,335,382 │8,926,908 ┃
┃Belgium │10,274,595 │8,316,762 ┃
┃Czech Republic │10,256,760 │8,118,486 ┃
┃Portugal │10,084,245 │7,738,977 ┃
┃Hungary │10,075,034 │7,039,372 ┃
┃Sweden │9,076,744 │6,949,578 ┃
┃Austria │8,169,929 │6,908,329 ┃
┃Azerbaijan │7,798,497 │6,023,385 ┃
┃Serbia │7,780,000 │6,000,794 ┃
┃Bulgaria │7,621,337 │5,821,480 ┃
┃Switzerland │7,301,994 │5,504,737 ┃
┃Slovakia │5,422,366 │5,246,778 ┃
┃Denmark │5,368,854 │5,242,466 ┃
┃Finland │5,302,545 │5,109,544 ┃
┃Georgia │4,960,951 │4,932,349 ┃
┃Norway │4,743,193 │4,630,651 ┃
┃Croatia │4,490,751 │4,523,622 ┃
┃Moldova │4,434,547 │4,424,558 ┃
┃Ireland │4,234,925 │3,370,947 ┃
┃Bosnia and Herzegovina │3,964,388 │3,014,202 ┃
┃Lithuania │3,601,138 │2,942,418 ┃
┃Albania │3,544,841 │2,051,329 ┃
┃Latvia │2,366,515 │1,891,019 ┃
┃Macedonia │2,054,800 │1,774,451 ┃
┃Slovenia │2,048,847 │1,065,952 ┃
┃Kosovo │1,453,000 │984,193 ┃
┃Estonia │1,415,681 │841,113 ┃
┃Cyprus │767,314 │605,767 ┃
┃Montenegro │626,000 │588,802 ┃
┃Luxembourg │448,569 │469,288 ┃
┃Malta │397,499 │464,183 ┃
┃Iceland │312,384 │402,554 ┃
┃Jersey (UK) │89,775 │94,679 ┃
┃Isle of Man (UK) │73,873 │43,345 ┃
┃Andorra │68,403 │41,086 ┃
┃Guernsey (UK) │64,587 │34,184 ┃
┃Faroe Islands (Denmark) │46,011 │32,668 ┃
┃Liechtenstein │32,842 │29,905 ┃
┃Monaco │31,987 │22,384 ┃
┃San Marino │27,730 │9,743 ┃
┃Gibraltar (UK) │27,714 │7,209 ┃
┃Svalbard (Norway) │2,868 │3,105 ┃
┃Vatican City │900 │656 ┃
Looking at these lists we have a clue as to when and how Benford’s Law works. [spoiler]
In one of the lists, the populations are distributed more or less evenly in a linear scale; that is, there are about as many populations from 1 million to 2 million, as there are from 2 million to 3
million, 3 million to 4 million etc. (Well, actually the distribution isn’t quite linear, because the fake data was made to look similar to the real data, and so has a few of its characteristics.)
The real list, like many other kinds of data, is distributed in a more exponential manner; that is, the populations grow exponentially (very slowly though) with about as many populations from 100,000
to 1,000,000; then 1,000,000 to 10,000,000; and 10,000,000 to 100,000,000. This is all pretty approximate, so you can’t take this precisely at face value, but you’ll see in the list of real data
that, very roughly speaking, in any order of magnitude there are about as many populations as in any other– at least for a while.
Data like this has a kind of “scale invariance”, especially if this kind of pattern holds over many orders of magnitude. What this means is that if we scale the data up or down, throwing out the
outliers, it will look about the same as before.
The key to Benford’s Law is this scale invariance. Data that has this property will automatically satisfy his rule. Why is this? If we plot such data on a linear scale it won’t be distributed
uniformly but will be all stretched out, becoming sparser and sparser. But if we plot it on a logarithmic scale, (which you can think of as approximated by the number of digits in the data), then
such data is smoothed out and evenly distributed.
But presto! Look at how the leading digits are distributed on such a logarithmic scale!
That’s mostly 1’s, a bit fewer 2’s, etc. on down to a much smaller proportion of 9’s.
GN. Benford’s Law
6th Grade - Mixed Fractions
• "Fraction" represents a relation of a part(s) to the whole, where the whole is divided into equal parts.
• Fraction = $\frac{\mathrm{numerator}}{\mathrm{denominator}}=\frac{\mathrm{parts\ taken}}{\mathrm{parts\ in\ a\ whole}}$
• Any whole number can be written as a fraction, such as 5 can be written as $\frac{5}{1}$ . This is helpful while adding, subtracting, multiplying, or dividing fractions.
• A Mixed Number consists of a whole number and a proper fraction, e.g. $2\frac{3}{5}$.
• To change a mixed number to an improper fraction:
1. Keep the denominator the same.
2. For the numerator, multiply the whole number by the denominator and add the numerator.
3. For example, change $2\frac{3}{5}$ to an improper fraction: $2\frac{3}{5}=\frac{2×5+3}{5}=\frac{13}{5}$.
• To change an improper fraction to a mixed number:
1. Divide the numerator by the denominator.
2. The quotient will be the whole number of the mixed number.
3. Keep the denominator the same.
4. The numerator will be the remainder of the division.
• To add or subtract mixed numbers:
1. Change the mixed number to an improper fraction.
2. Add/Subtract following the rules of addition/subtraction of fractions.
3. Simplify to the lowest terms if possible.
4. Change the improper fraction to a mixed number.
• To Multiply mixed numbers:
1. Convert the mixed numbers into an improper fraction.
2. Multiply the numerators.
3. Multiply the denominators.
4. Simplify to the lowest (simplest) term.
5. Occasionally, fractions can be simplified before multiplying.
• To divide mixed numbers:
1. Change the mixed numbers to improper fractions.
2. Apply the Keep, change, & flip rule.
3. Keep, Change, & Flip rule
1. Keep the first fraction of the expression the same.
2. Change the division sign to multiplication.
3. Flip the last fraction.
4. Follow the rules of multiplication of fractions.
5. If possible, change the fraction to its lowest terms.
• Always simplify the fraction whenever needed and possible.
• Simplifying before multiplying or dividing makes the calculation easier.
• The word "of" means multiplication.
For example, Mr. Jim's house is $\frac{3}{4}$ miles from the school. He walks $\frac{2}{3}$ of the distance and then jogs the rest. How many miles does he walk?
Solution: In this scenario, it's $\frac{2}{3}$ of $\frac{3}{4}$ miles. Therefore, $\frac{2}{3}×\frac{3}{4}=\frac{6}{12}=\frac{1}{2}$ miles.
• While multiplying two fractions, the numerator should be multiplied with the numerator and the denominator with the denominator.
• Always reduce the answer to its lowest term.
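These rules can be checked with Python's fractions module, which reduces results to lowest terms automatically (a sketch for checking answers, not part of the lesson):

```python
from fractions import Fraction

# Mixed number 2 3/5 as an improper fraction: (2 * 5 + 3) / 5 = 13/5
improper = Fraction(2 * 5 + 3, 5)
print(improper)  # 13/5

# Keep, Change, Flip: (13/5) ÷ (3/4) becomes (13/5) × (4/3)
quotient = improper * Fraction(4, 3)
print(quotient)  # 52/15

# "of" means multiplication: 2/3 of 3/4 miles
walked = Fraction(2, 3) * Fraction(3, 4)
print(walked)  # 1/2
```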
Hi Nasos,
Thanks for your inputs and your valuable suggestions regarding the style. I
have a few comments :
1) As a library which chooses to emulate BLAS and LAPACK fortran style libs
with a header only format, I plan to emulate the API functionality of the
existing implementations of lu solver.
2) I am not so sure about the parallel implementations. I think as per a
previous suggestion, it will be more useful to apply bindings to
High-Performance BLAS implementations like Intel-MKL which has platform
specific parallelism optimized for the architecture. They will be faster
than generic C++ std::threads implementations.
3) I plan to focus for now on the dense implementations for a mature
eigensolver class. Sparse eigensolvers are almost completely monopolized by
ARPACK and it can be added as an optional time-available module. (Though I
agree that it will be extremely useful, especially for subspace FEA)
*Rajaditya Mukherjee *
*3rd Year Graduate Student*
Dept. of Computer Science and Engineering
The Ohio State University
Columbus, Ohio
Tel :- +1-(614)-271-4439
email :- rajaditya.mukherjee_at_[hidden] <mukherjee.62_at_[hidden]>,
raj_at_[hidden] <mukherjr_at_[hidden]>
On Mon, Mar 9, 2015 at 11:34 AM, Nasos Iliopoulos <nasos_i_at_[hidden]>
> This is a good list of potential matrix operations. I would add a sparse
> LU decomposition also. For performance optimization on sparse containers I
> would suggest to focus on the a couple types in uBlas and not all of them
> (at least compressed_matrix). FYI most applications (like FEA for example)
> are using sparse matrices with dense vectors and combinations like sparse
> matrix/sparse vector are less important.
> Two more items you may want to consider:
> 1. It would really add value to your proposal to include examples or
> architectural considerations of how those would integrate with uBlas. Since
> uBlas promotes a functional style, the same would be appropriate for the
> solvers interface. The simpler the interface the better.
> 2. Consider implementing parallel implementations using std::thread (or
> whatever from C++11 is appropriate).
> 3. Callback control for larger systems so that the solver can signal its
> caller about its progress, or for other control reasons (i.e. stopping the
> solver). This should not interfere with performance and should probably
> be implemented using some sort of static dispatch so that the user can choose
> which version to use ( with or without callback functions). Please note
> that those callbacks should probably be implemented using std::function.
> Please take a look at the last years proposals on how to draft yours
> because laying down your ideas is as crucial as your intention and
> capabilities to implement them.
> Regards,
> Nasos
> On 03/06/2015 10:30 AM, Rajaditya Mukherjee wrote:
> Hi,
> My name is Raj and I am a PhD student in Computer Graphics. I am
> interested in tackling the problem of uBLAS Matrix Solver and in order to
> write my proposal, I am looking for inputs for which of the following
> algorithms will be most useful for prospective users in boost-numeric
> library. Here is a categorical list of all the prospective ones which will
> bring uBLAS updated to other commercial libraries like Eigen/Armadillo.
> Please let me know your preferences....
> *David Bellot* : As a potential mentor, do you have any specific
> additions or deletions for this list? This could also be useful for other
> candidates pursuing this project.
> *DENSE SOLVERS AND DECOMPOSITION* :
> 1) *QR Decomposition* - *(Must have)* For orthogonalization of column
> spaces and solutions to linear systems. (Bonus : Also rank revealing..)
> 2) *Cholesky Decomposition* - *(Must have)* For symmetric Positive
> Definite systems often encountered in PDE for FEM Systems...
> 3) *Householder Method* - Conversion to tridiagonal form for eigen
> solvers.
> *SPARSE SOLVERS AND PRECONDITIONERS* :
> 1) *Conjugate Gradient* - *(Must have)* For symmetric Positive Definite
> systems, this is the Krylov space method of choice. Both general and
> preconditioned variants need to be implemented for convergence issues.
> 2) *BiCGSTAB* *(Needs introspection)* - For non-symmetric systems..
> 3) *Incomplete Cholesky Decomposition* *(Good to have)* - For symmetric
> Positive definite sparse matrices, to be used as preconditioner as
> extension to (1) for preconditioned CG Methods ...
> 4) *Jacobi Preconditioner* *(Must have)* - As prerequisite for step(1).
> *EIGEN DECOMPOSITION MODULES (ONLY FOR DENSE MODULES)**:*
> 1) *Symmetric Eigen Values* - *(Must have)* Like SSYEV Module in Lapack -
> That is first reduction to a tridiagonal form using Householder then using
> QR Algorithm for Eigen Value computation.
> 2) *NonSymmetric Eigen Values* - *(Good to have)* Like SGEEV module in
> Lapack - using Schur decompositions as an intermediate step in the above
> algorithm.
> 3) *Generalized Eigen Values* - *(needs introspection)* I use this in my
> research a lot and its a good thing to have..
> ** Computing Eigen Decomposition of sparse modules needs special robust
> numerical treatment using implicitly restarted Arnoldi iterations and may
> be treated as optional extensions.
> _______________________________________________
> ublas mailing list
> ublas_at_[hidden]
> http://lists.boost.org/mailman/listinfo.cgi/ublas
> Sent to: nasos_i_at_[hidden]
> _______________________________________________
> ublas mailing list
> ublas_at_[hidden]
> http://lists.boost.org/mailman/listinfo.cgi/ublas
> Sent to: rajaditya.mukherjee_at_[hidden]
Multiplication Chart 1-44
Multiplication Chart 1-44 – If you are looking for a fun way to teach your child the multiplication facts, you can use a blank multiplication chart. A blank chart lets your child fill in the products independently. Blank multiplication charts are available for a variety of product ranges, including 1-9, 10-12, and 15 products. If you want to make your chart more exciting, you can turn it into a game. Here are some tips to get your child started.
Multiplication Charts
You can use multiplication charts as part of your child's student binder to help them memorize math facts. While many children can memorize their math facts naturally, it takes many others time to do so. Multiplication charts are a good way to reinforce their learning and boost their confidence. In addition to being educational, these charts can be laminated for extra durability. Listed below are some helpful ways to use multiplication charts. You can also check out these websites for helpful multiplication fact resources.
This lesson covers the fundamentals of the multiplication table. In addition to learning the rules for multiplying, students will understand the concept of factors and patterning. By understanding how the factors work, students will be able to recall basic facts like five times four. They will also be able to use the properties of zero and one to solve more advanced products. By the end of the lesson, students will be able to recognize patterns in the multiplication chart.
Along with the normal multiplication chart, students can create a chart with more factors or fewer factors. To create a multiplication chart with additional factors, students must create 12 tables, each with a dozen rows and three columns. All 12 tables need to fit on one sheet of paper. Lines should be drawn with a ruler. Graph paper is best for this project. If graph paper is not an option, students can use spreadsheet programs to make their own tables.
Game ideas
Whether you are teaching a beginner multiplication lesson or working toward mastery of the multiplication table, you can develop fun and engaging game ideas for Multiplication Chart 1. A few fun ideas follow. One game requires the students to sit in pairs and work on a single problem. Then, they all hold up their cards and discuss the solution for a moment. If they get it right, they win!
When you're teaching kids about multiplication, one of the best tools you can give them is a printable multiplication chart. These printable sheets come in many different designs and can be printed on one page or several. Kids can learn their multiplication facts by copying them from the chart and memorizing them. A multiplication chart is helpful for many purposes, from helping kids learn their arithmetic facts to teaching them how to use a calculator.
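If you'd rather generate a chart than download one, a few lines of Python will do it (our sketch; the function name is ours, not the site's printables):

```python
def multiplication_chart(n, blank=False):
    """Return an n-by-n multiplication chart as a list of rows.

    With blank=True the products are left empty, so a child can
    fill them in by hand.
    """
    return [["" if blank else i * j for j in range(1, n + 1)]
            for i in range(1, n + 1)]

# Print a small filled-in chart.
for row in multiplication_chart(5):
    print(" ".join(f"{v:>3}" for v in row))
```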
Gallery of Multiplication Chart 1-44
Pin By Brittany Snodgrass On 3rd Grade Math Manipulatives Math
144 Square Chart Free Download Little Graphics
144 Times Table Challenge Times Tables Worksheets
Merged block randomisation | Stéphanie van der Pas
Merged block randomisation
A novel restricted randomisation method designed for small clinical trials (at most 100 subjects) or trials with small strata, for example in multicenter trials. It can be used for more than two
groups and unequal randomisation ratios.
Randomisation in small clinical trials is a delicate matter, due to the tension between the conflicting aims of balanced groups and unpredictable allocations. The commonly used method of permuted
block randomisation has been heavily criticised for its high predictability. Merged block randomisation is a novel and conceptually simple restricted randomisation design for small clinical trials
(less than 100 patients per stratum). For 1:1 allocation to treatments A and B, first two sequences consisting of building blocks AB and BA are generated, for example:
Sequence 1 : A B A B B A A B A B
Sequence 2 : A B B A B A B A B A
Then a series of coin flips is performed, which indicate how the two sequences will be merged, resulting in the final allocation. For each ‘head’, the sequentially first assignment within sequence 1
that has not been selected yet is placed in the final allocation, while for each ‘tail’, the sequentially first unused treatment assignment within sequence 2 is selected. For example, suppose we
perform 10 coin flips and the first is a head. Then we take the first 'A' from sequence 1.
Sequence 1 (heads) : A B A B B A A B A B
Sequence 2 (tails): A B B A B A B A B A
Coin flips so far: H
Final allocation: A
If the second coin flip is a tails, we take the A from sequence 2.
Sequence 1 (heads) : A B A B B A A B A B
Sequence 2 (tails): A B B A B A B A B A
Coin flips so far: H T
Final allocation: A A
If the third coin flip is a head, we take the B from sequence 1.
Sequence 1 (heads) : A B A B B A A B A B
Sequence 2 (tails): A B B A B A B A B A
Coin flips so far: H T H
Final allocation: A A B
And so on. Merged block randomisation is not restricted to 1:1 randomisation, but is readily applied to unequal target allocations and to more than two treatment groups. In the paper, it is shown
that merged block randomisation is a good choice for studies where imbalance is a concern, improving on permuted block randomisation, while retaining the conceptual and practical simplicity of
permuted block randomisation.
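The procedure described above fits in a few lines of Python. The sketch below is our illustration of the idea, not the published implementation referenced under Software:

```python
import random

def merged_block_randomisation(n, rng=random):
    """1:1 allocation of n subjects to 'A' and 'B' by merging two
    sequences of random AB/BA building blocks with fair coin flips."""
    def block_sequence():
        seq = []
        while len(seq) < n:
            seq.extend(rng.choice(["AB", "BA"]))  # one random building block
        return seq

    seq1, seq2 = block_sequence(), block_sequence()
    i1 = i2 = 0
    allocation = []
    for _ in range(n):
        if rng.random() < 0.5:   # heads: next unused entry of sequence 1
            allocation.append(seq1[i1]); i1 += 1
        else:                    # tails: next unused entry of sequence 2
            allocation.append(seq2[i2]); i2 += 1
    return allocation

print(merged_block_randomisation(10, random.Random(1)))
```

Because each source sequence is balanced within every building block, any prefix of it is off by at most one, so the merged allocation can never be imbalanced by more than two subjects.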
Software, a tutorial and an online app are available, see Software for more details.
Related papers
Van der Pas, S. L. (2019). Merged block randomisation: A novel randomisation procedure for small clinical trials. Clinical Trials, 16(3), 246-252. [link]
bottom of page | {"url":"https://www.stephanievanderpas.nl/merged-blocks","timestamp":"2024-11-10T07:55:43Z","content_type":"text/html","content_length":"361707","record_id":"<urn:uuid:e45ff587-64e5-4883-be3a-50785792b384>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00122.warc.gz"} |
39.93 millimeters per square second to meters per square second
39.93 Millimeters per square second = 0.0399 Meters per square second
This conversion of 39.93 millimeters per square second to meters per square second has been calculated by multiplying 39.93 millimeters per square second by 0.001 and the result is 0.0399 meters per
square second.
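The conversion is a single multiplication by 0.001; as a function (the function name is ours):

```python
def mm_s2_to_m_s2(value):
    """Convert an acceleration from millimeters per square second
    to meters per square second (1 mm = 0.001 m)."""
    return value * 0.001

print(round(mm_s2_to_m_s2(39.93), 4))  # 0.0399
```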
Python Bitwise Operators - TestingDocs.com
Python Bitwise Operators
Python bitwise operators work on bits and perform bit-by-bit operations. Bitwise operators are used in low-level programming for developing device drivers and embedded systems. They are also used in
performance-critical code due to their speed. They require an understanding of the binary number system.
Python provides several bitwise operators that are used to perform operations on binary numbers (bits).
Operator Name Description
& Binary AND Compares each bit of the first operand with the corresponding bit of the second operand. If both bits are 1, the result bit is set to 1; otherwise it is set to 0.
| Binary OR If either of the corresponding bits is 1, the result bit is set to 1; otherwise it is set to 0.
^ Binary XOR Compares two corresponding bits. If the bits differ, the result bit is 1; if they are the same, it is 0.
~ Binary NOT (ones' complement) A unary operator that inverts all the bits of the operand: every 0 becomes 1, and every 1 becomes 0.
<< Binary Left Shift The bits of the left operand are shifted left by the number of positions given by the right operand. New bits on the right are filled with 0.
>> Binary Right Shift The bits of the left operand are shifted right by the number of positions given by the right operand. For positive numbers, new bits on the left are filled with 0; for negative numbers, Python performs an arithmetic shift that preserves the sign.
Assume x = 45 and y = 12.
In binary, their values are
0010 1101 and 0000 1100, respectively.
# Python bitwise operators
x = 45  # 0010 1101
y = 12  # 0000 1100

# Bitwise AND
z = x & y
print(z)  # 12, which is 0000 1100 in binary

# Bitwise OR
z = x | y
print(z)  # 45, which is 0010 1101 in binary
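The remaining operators from the table (XOR, NOT and the shifts) behave as follows, using the same x and y (an illustrative continuation of the example above):

```python
x = 45  # 0010 1101
y = 12  # 0000 1100

print(x ^ y)       # 33 -> 0010 0001: bits set where x and y differ
print(~x)          # -46: for Python ints, ~n equals -n - 1
print(x << 1)      # 90: shifting left by one doubles the value
print(x >> 2)      # 11: shifting right by two floor-divides by four
print(bin(x ^ y))  # 0b100001
```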
Python Tutorials
Python Tutorial on this website can be found at:
More information on Python is available at the official website:
Estimation of high-order aberrations and anisotropic magnification from cryo-EM data sets in RELION-3.1
^aMedical Research Council Laboratory of Molecular Biology, Cambridge CB2 0QH, England, and ^bBiozentrum, University of Basel, Switzerland
^*Correspondence e-mail: scheres@mrc-lmb.cam.ac.uk
(Received 30 October 2019; accepted 6 January 2020; online 11 February 2020)
Methods are presented that detect three types of aberrations in single-particle cryo-EM data sets: symmetrical and antisymmetrical optical aberrations and magnification anisotropy. Because these
methods only depend on the availability of a preliminary 3D reconstruction from the data, they can be used to correct for these aberrations for any given cryo-EM data set, a posteriori. Using five
publicly available data sets, it is shown that considering these aberrations improves the resolution of the 3D reconstruction when these effects are present. The methods are implemented in version
3.1 of the open-source software package RELION.
1. Introduction
Structure determination of biological macromolecules using electron cryo-microscopy (cryo-EM) is primarily limited by the radiation dose to which samples can be exposed before they are destroyed. As
a consequence of the low electron dose, cryo-EM has to rely on very noisy images. In recent years, advances in electron-detector technology and processing algorithms have enabled the reconstruction
of molecular structures at resolutions sufficient for de novo atomic modelling (Fernandez-Leiro & Scheres, 2016 ). With increasing resolutions, limitations imposed by the optical system of the
microscope are becoming more important. In this paper, we propose methods to estimate three optical effects – symmetrical and antisymmetrical aberrations, and magnification anisotropy – which, when
considered during reconstruction, increase the attainable resolution.
In order to increase contrast, cryo-EM images are typically collected out of focus, which introduces a phase shift between the scattered and unscattered components of the electron beam. This phase
shift varies with spatial frequency and gives rise to the contrast-transfer function (CTF). Since the electron-scattering potential of the sample corresponds to a real-valued function, its
Fourier-space representation exhibits Friedel symmetry: the amplitude of the complex structure factor at spatial frequency k is the complex conjugate of the structure factor at frequency −k.
Traditionally, the phase shift of these two frequencies has been assumed to be identical, which corresponds to a real-valued CTF. Imperfections of the optical system can, however, produce
asymmetrical phase shifts that break the Friedel symmetry of the scattered wave. The effect of this is that the CTF has to be expressed as a complex-valued function, which affects not only the
amplitudes of the structure factors but also their phases.
The phase shifts of a pair of corresponding spatial frequencies can be separated into a symmetrical component (i.e. their average shift) and an antisymmetrical component (i.e. their deviation from
that average). In this paper, we will refer to the antisymmetrical component as antisymmetrical aberrations. The symmetrical component of the phase shift sometimes also deviates from that predicted
by the aberration-free CTF model (Hawkes & Kasper, 1996 ). The effect of this is that the CTF is not always adequately represented by a set of elliptical rings of alternating sign, but these
so-called Thon rings can take on slightly different shapes. We will refer to this deviation from the traditional CTF model as symmetrical aberrations.
In addition to the antisymmetrical and symmetrical aberrations, the recorded image itself can be distorted by a different magnification in two perpendicular directions. This is called anisotropic
magnification. Anisotropic magnification can be detected by measuring the ellipticity of the power spectra of multi-crystalline test samples (Grant & Grigorieff, 2015 ). This has the advantage of
providing a calibration of the absolute magnification, but does require additional experiments, and microscope alignments may drift in between such experiments. For several icosahedral virus data
sets, it has been shown that anisotropic magnification may be detected and corrected by an exhaustive search over the amount and the direction of the anisotropy while comparing projections of an
undistorted three-dimensional reference map with individual particle images (Yu et al., 2016 ).
Because the antisymmetrical and symmetrical aberrations and the anisotropic magnification produce different effects, we propose three different and independent methods to estimate them. We recently
proposed a method to estimate a specific type of antisymmetrical aberration that arises from a tilted electron beam (Zivanov et al., 2018 ). In this paper, we propose an extension of this method that
allows us to estimate arbitrary antisymmetrical aberrations expressed as linear combinations of Zernike polynomials (Zernike, 1934 ). The methods to estimate symmetrical aberrations and anisotropic
magnification are novel. Similar to the method for antisymmetrical aberration correction, the method for symmetrical aberration correction also uses Zernike polynomials to model the estimated
aberrations. The choice of Zernike polynomials as a basis is to some degree arbitrary, and the methods described here could be trivially altered to use any other function as a basis. In particular,
we make no use of the orthogonality of Zernike polynomials, since they are only defined to be orthogonal on the entire, evenly weighted unit disc. In our case, the evidence is distributed
non-uniformly across Fourier space, and accounting for this fact breaks the orthogonality of the polynomials.
Optical aberrations in the electron microscope have been studied extensively in the materials science community (Batson et al., 2002 ; Krivanek et al., 2008 ; Saxton, 1995 , 2000 ; Meyer et al., 2002
). However, until now, their estimation has required specific test samples of known structure and of greater radiation resistance than biological samples. The methods presented in this paper work
directly on cryo-EM single-particle data sets of biological samples, making it possible to estimate the effects after the data have been collected, and without performing additional experiments on
specific test samples. Using data sets that are publicly available from the EMPIAR database (Iudin et al., 2016 ), we illustrate that when these optical effects are present their correction leads to
reconstruction with increased resolution.
2. Materials and methods
2.1. Observation model
We are working on a single-particle cryo-EM data set consisting of a large number of particle images. We assume that we already have a preliminary 3D reference map of the particle up to a certain
resolution, and that we know the approximate viewing parameters of all observed particles. This allows us to predict each particle image, which in turn allows us to estimate the parameters of the
optical effects by comparing these predicted images with the observed images.
Let X[p,k] ∈ ℂ denote the complex amplitude observed at 2D frequency k ∈ ℤ² in the image of particle p, and let V[p,k] ∈ ℂ be the corresponding predicted amplitude. Given the 3D Fourier transform W of the reference map, the prediction is given by

V[p,k] = W(A[p]k),    (1)
where A[p] is a 3 × 2 projection matrix arising from the viewing angles. Since the back-projected positions of the 2D pixels k mostly fall between the 3D voxels of the reference map, we determine the
values of W(A[p]k) using linear interpolation.
Further, we assume that we have an estimate of the defocus and astigmatism of each particle, as well as the spherical aberration of the microscope, allowing us to also predict the CTFs. We can
therefore write

X[p,k] = exp(iφ[k]) CTF[p,k] V[p,k] + n[p,k],    (2)
where φ[k] is the phase-shift angle induced by the antisymmetrical aberration, CTF[p,k] is the real part of the CTF and n[p,k] represents the noise.
The three methods presented in the following all aim to estimate the optical effects by minimizing the squared difference between X[p,k] and exp(iφ[k])CTF[p,k]V[p,k]. This is equivalent to a
maximum-likelihood estimate under the assumption that all n[p,k] are drawn from the same normal distribution.
2.2. Antisymmetrical aberrations
Antisymmetrical aberrations shift the phases in the observed images and they are expressed by the angle φ[k] in (2) . We assume that φ[k] is constant for a sufficiently large number of particles.
This assumption is necessary since, in the presence of typically strong noise, we require the information from a large number of particle images to obtain a reliable estimate.
We model φ[k] using antisymmetrical Zernike polynomials as a basis,

φ[k] = Σ_b c[b] Z[b](k),    (3)

where c[b] ∈ ℝ are the coefficients and Z[b](k) are a subset of the antisymmetrical Zernike polynomials. The usual two-index ordering of these polynomials is omitted for the sake of clarity. This set of polynomials always
includes the first-order terms Z[1]^−1(k) and Z[1]^1(k) that correspond to rigid motion in 2D. It is essential to consider these terms during estimation, since they capture any systematic errors in
particle positions that arise when the positions are estimated under antisymmetrical aberrations, in particular under axial coma arising from beam tilt. In that situation, the particles are
erroneously shifted in order to neutralise the coma in the mid-frequency range, which overcompensates for the phase shift in the low-frequency range. The measured phase shift is therefore a
superposition of an axial coma and a translation and has to be modelled as such.
The coefficients c[b] are determined by minimizing the following sum of squared differences over all particles,

E_anti(c) = Σ_p Σ_k f[k] |X[p,k] − exp(iφ[k]) CTF[p,k] V[p,k]|²,    (4)

where f[k] is a heuristic weighting term given by the FSC of the reconstruction; its purpose is to suppress the contributions of frequencies |k| for which the reference is less reliable.
Since typical data sets contain between 10^4 and 10^6 particles, and each particle image typically consists of more than 10^4 Fourier pixels, optimizing the nonlinear expression in (4) directly would
be highly impractical, especially since the images would likely have to be reloaded from disc in each iteration. Instead, we apply a two-step approach. Firstly, we reduce the above sum over sums of
quadratic functions to a single sum over quadratic functions, one for each Fourier-space pixel k,
where K is a constant that does not influence the optimum of c[b]. The per-pixel optimal phase shifts q[k] ∈ w[k] ∈
This is the same transformation that we have applied for the beam-tilt estimation in RELION-3.0 (Zivanov et al., 2018 ); beam tilt is in fact only one of the possible sources of antisymmetrical
aberrations. The computation of q[k] and w[k] requires only one iteration over all the images in the data set, and for the data sets presented here it took of the order of one hour of time on a
24-core 2.9GHz Intel Xeon workstation.
Once the q[k] and w[k] are known, the optimal c[b] are determined by minimizing the following sum of squared differences using the Nelder–Mead downhill simplex (Nelder & Mead, 1965) method,

E_anti(c) = Σ_k w[k] |exp(iφ[k](c)) − q[k]|².    (8)
This step requires only seconds of computation time. In addition to making the problem tractable, this separation into two steps also allows us to inspect the phase angles of the per-pixel optima q[k
] visually and to determine the type of antisymmetrical aberration present in the data set.
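The two-step scheme can be illustrated on synthetic data with NumPy. The sketch below is our toy illustration of the idea, not RELION's implementation: it uses a 1D stand-in for the frequency grid, two ad hoc antisymmetrical basis functions in place of Zernike polynomials, and a weighted least-squares fit of the per-pixel phase angles instead of the Nelder–Mead search (valid here because the phases are small).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: P particle images, K Fourier pixels (1D stand-in for k).
P, K = 200, 64
k = np.linspace(-1.0, 1.0, K)
basis = np.stack([k, k**3])          # two antisymmetrical basis functions
c_true = np.array([0.3, -0.2])       # true aberration coefficients
phi_true = c_true @ basis            # per-pixel phase shift

ctf = rng.uniform(0.2, 1.0, (P, K))
V = rng.normal(size=(P, K)) + 1j * rng.normal(size=(P, K))   # predictions
noise = 0.1 * (rng.normal(size=(P, K)) + 1j * rng.normal(size=(P, K)))
X = np.exp(1j * phi_true) * ctf * V + noise                  # observations

# Step 1: one pass over the particles accumulates the per-pixel
# optimal phase factors q[k] and weights w[k].
w = np.sum(ctf**2 * np.abs(V) ** 2, axis=0)
q = np.sum(ctf * np.conj(V) * X, axis=0) / w

# Step 2: fit the basis coefficients to the per-pixel phase angles
# by weighted least squares.
A = (basis * np.sqrt(w)).T           # (K, 2) design matrix
b = np.angle(q) * np.sqrt(w)
c_fit = np.linalg.lstsq(A, b, rcond=None)[0]
print(c_fit)                         # close to c_true
```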
After the optimal antisymmetrical aberration coefficients c have been determined, they are used to invert the phase shift of all observed images X when a 3D map is being reconstructed from them.
2.3. Symmetrical aberrations
Unlike the antisymmetrical aberrations, the symmetrical aberrations act on the absolute value of the CTF. In the presence of such aberrations, the CTF no longer consists of strictly elliptical rings
of alternating sign, but can take on a more unusual form. In our experiments, we have specifically observed the ellipses deforming into slightly square-like shapes. In order to estimate the
symmetrical aberration, we need to determine the most likely deformations of the CTFs hidden underneath the measured noisy pixels. Since the micrographs in a cryo-EM data set are usually collected at
different defoci, it is not sufficient to measure the collective power spectrum of the entire data set; instead, we need to determine a single deformation applied to many different CTFs.
In RELION-3.1, the CTF is defined as
where D[p] is the real symmetrical 2 × 2 astigmatic-defocus matrix for particle p, C[s] is the spherical aberration of the microscope, λ is the electron wavelength and χ[p] is a constant offset given
by the amplitude contrast and the phase shift owing to a phase plate (if one is used). We chose this formulation of astigmatism because it is both more concise and also more practical when dealing
with anisotropic magnification, as shown in Section 2.4 . In Appendix A , we define D[p] and we show that this is equivalent to the more common formulation (Mindell & Grigorieff, 2003 ).
We model the deformation of the CTF under symmetrical aberrations by offsetting γ,
where ψ[k](d) is modelled using symmetrical Zernike polynomials combined with a set of coefficients d.
The optimal values of d[b] are determined by minimizing another sum of squared differences,
where the predicted complex amplitude
This is again a nonlinear equation with a large number of terms. In order to make its minimization tractable, we perform the following substitution,
with the known column vector r[p,k] ∈ ℝ²
and the unknown t[k](d) ∈ ℝ².
This allows us to transform the one-dimensional nonlinear term for each pixel k into a two-dimensional linear term,
In this form, we can decompose E[symm] into a sum of quadratic functions over all pixels k. This is equivalent to the transformation in (5) , only in two real dimensions instead of one complex dimension,
where the real symmetrical 2 × 2 matrix R[k] is given by
and the corresponding per-pixel optima
Again, computing R[k] and τ[k] requires only one iteration over all the images; five numbers need to be updated for each particle p: the three distinct elements of R[k] (the matrix is symmetrical) and the two elements of τ[k]. Once R[k] and τ[k] are known, the optimal coefficients d are determined by minimizing E[symm] in (20) using the Nelder–Mead downhill simplex algorithm. Analogously to the case of the antisymmetrical aberrations, a visual inspection of the optimal ψ[k](d) for each pixel allows us to examine the type of aberration without projecting it into the Zernike basis. The CTF phase-shift estimate for pixel k is given by t[k].
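The per-pixel two-dimensional least-squares problem has the closed-form solution t[k] = R[k]⁻¹τ[k]. The sketch below (synthetic data and illustrative names, not the RELION code) shows the one-pass accumulation of the five numbers per pixel and the subsequent solve:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-pixel example: each particle p contributes a known
# 2-vector r_p and a scalar observation y_p ~ r_p . t_true.
t_true = np.array([0.3, -1.2])
P = 2000
r = rng.normal(size=(P, 2))
y = r @ t_true + 0.05 * rng.normal(size=P)

# One pass over the particles; only five running numbers per pixel:
# the three distinct entries of the symmetric 2x2 matrix R and the
# two entries of tau.
R = np.zeros((2, 2))
tau = np.zeros(2)
for p in range(P):
    R += np.outer(r[p], r[p])
    tau += y[p] * r[p]

# Per-pixel optimum t = R^{-1} tau (normal equations).
t = np.linalg.solve(R, tau)
```

Because only R and τ are retained, the memory cost per pixel is constant regardless of the number of particles.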
Once the coefficients d of the symmetrical aberration are known, they are used to correct any CTF that is computed in RELION-3.1.
2.4. Anisotropic magnification
To determine the anisotropy of the magnification, we again compare predicted images with the observed images. We assume that the 3D reference map W has been obtained by averaging views of the
particle at in-plane rotation angles drawn from a uniform distribution. This is a realistic assumption, since unlike the angle between the particle and the ice surface, where the particle often shows
a preferred orientation, the particle is oblivious to the orientation of the camera pixel grid. Thus, for a data set of a sufficient size, the anisotropy in the individual images averages out and the
resulting reference map depicts an isotropically scaled 3D image of the particle (although the high-frequency information on the periphery of the particle is blurred out by the averaging). We can
therefore estimate the anisotropy by determining the optimal deformation that has to be applied to the predicted images in order to best fit the observed images.
We are only looking for linear distortions of the image. Such a distortion can be equivalently represented in real space or in Fourier space: if the real-space image is distorted by a 2 × 2 matrix M,
then the corresponding Fourier-space image is distorted by the inverse transpose M^−T. We choose to operate in Fourier space since this allows us to determine the deformation of the predicted image without also
distorting the CTF. We assume that the CTF parameters known at this point already fit the Thon rings observed in the image, so we only deform the particle itself.
Formally, we define the complex amplitude V[p,k](M) of the predicted image deformed by a 2 × 2 matrix M by
and we aim to determine such a matrix M that minimizes
We are not assuming that M is necessarily symmetrical, which allows it to express a skew component in addition to the anisotropic magnification. Such skewing effects are considered by the
models commonly used in computer vision applications (Hartley, 1994 ; Hartley & Zisserman, 2003 ), but not in cryo-EM. We have decided to model the skew component as well, in case it should manifest
in a data set.
The expression given in (25) is yet another sum over a large number of nonlinear terms. In order to obtain a sum over squares of linear terms, we first express the deformation by M as a set of
per-pixel displacements δ[k] ∈ ℝ².
Next, we perform a first-order Taylor expansion of W around A[p]k. We know that this linear approximation of W is reasonable for all frequencies k at which the reference map contains any information,
because the displacements δ[k] are likely to be smaller than one voxel there. If they were significantly larger then they would prevent a successful computation of the complex amplitudes of the
reference map at these frequencies, except if a very large number of particles were to be considered. The linear approximation is given as
where g[p,k] is the gradient of W (which is given by the linear interpolation),
It is essential to compute g[p,k] in this way, since computing it numerically from the already projected image V[p,k] would lead to a systematic underestimation of the gradient (owing to the
interpolation) and thus to a systematic overestimation of the displacement. Note also that the change in φ(k) as a result of the displacement is being neglected. This is owing to the fact that the
phase shift, like the CTF, has also been computed from the distorted images, so that we can assume it to be given correctly in the distorted coordinates.
Using the terms transformed in this way, the sum of squared errors can be approximated by
This corresponds to two linear systems of equations to be solved in a least-squares sense, either for the per-pixel displacements δ[k] (29) or for the global deformation matrix M (30) . Analogously
to the aberrations methods, we solve for both. Knowing the per-pixel solutions again allows us to confirm visually whether the observed deformations are consistent with a linear distortion; if they
are, then the per-pixel displacements δ[k] will follow a linear function of k.
The optimal displacements
with the real symmetrical 2 × 2 matrix S[k] given by
Note that this is equivalent to treating the real and imaginary components of (29) as separate equations, since Re(z*w) = Re(z)Re(w) + Im(z)Im(w) for all z, w ∈ ℂ. Both S[k] and e[k] are computed in one iteration by accumulating five numbers for each pixel k over the entire data set.
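The identity above is what allows a complex linear equation in a real unknown vector to be solved as an ordinary real least-squares problem. The following synthetic sketch (illustrative names only) verifies the identity and recovers a per-pixel displacement from complex gradients:

```python
import numpy as np

rng = np.random.default_rng(2)

# Re(conj(z) w) = Re(z)Re(w) + Im(z)Im(w) for all complex z, w:
z, w = rng.normal(size=2) + 1j * rng.normal(size=2)
assert np.isclose((np.conj(z) * w).real, z.real * w.real + z.imag * w.imag)

# A complex linear equation g . delta ~ z in a real 2-vector delta
# therefore splits into two real equations per observation.
delta_true = np.array([0.4, -0.1])
P = 1000
g = rng.normal(size=(P, 2)) + 1j * rng.normal(size=(P, 2))
zs = g @ delta_true + 0.05 * (rng.normal(size=P) + 1j * rng.normal(size=P))

# Accumulate the real symmetric 2x2 matrix S and 2-vector e in one pass
# (five running numbers per pixel, as in the text).
S = g.real.T @ g.real + g.imag.T @ g.imag
e = g.real.T @ zs.real + g.imag.T @ zs.imag
delta = np.linalg.solve(S, e)
```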
The optimal 2 × 2 deformation matrix M is determined by first reshaping it into a column vector m ∈ ℝ⁴.
The expression in (30) can then be written as
with the column vector a[p,k].
We can now compute the optimal m,
where the real symmetrical 4 × 4 matrix T and the column vector l ∈ ℝ⁴ are given by
There is no need to compute T and l explicitly by iterating over all particles p again, since all the necessary sums are already available as part of S[k] and e[k]. Instead, we only need to sum up
the corresponding values over all pixels k. This is shown in Appendix B .
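The global solve can be sketched numerically. The linear term gᵀMk is linear in the vectorized matrix m = vec(M): with column-major vectorization, gᵀMk = (k ⊗ g)ᵀm. The example below (synthetic data, real-valued gradients for simplicity, illustrative names) recovers a near-identity deformation matrix from noisy per-pixel observations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth deformation matrix, close to identity.
M_true = np.array([[1.01, 0.002], [0.001, 0.99]])

# Synthetic (particle, pixel) samples: coordinates k, gradients g,
# and scalar observations y ~ g^T M k plus noise.
N = 5000
k = rng.normal(size=(N, 2))
g = rng.normal(size=(N, 2))
y = np.einsum('ni,ij,nj->n', g, M_true, k) + 0.01 * rng.normal(size=N)

# g^T M k = (kron(k, g))^T vec(M), with column-major vec convention.
A = np.stack([np.kron(k[n], g[n]) for n in range(N)])

# Normal equations T m = l for the 4-vector m.
T = A.T @ A
l = A.T @ y
m = np.linalg.solve(T, l)
M = m.reshape(2, 2, order='F')   # undo column-major vectorization
```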
In order to correct for the anisotropy after M has been estimated, we never resample the observed images. When we compute a 3D map from a set of observed images, we do so by inserting 2D slices into
the 3D Fourier-space volume. Since this process requires the insertion of 2D pixels at fractional 3D coordinates (and thus interpolation), we can avoid any additional resampling of the observed
images by instead inserting pixel k into the 3D map at position A[p]Mk instead of at A[p]k. Analogously, if the methods described in Sections 2.2 and 2.3 are applied after the distortion matrix M is
known, then the predicted images are generated by reading the complex amplitude from W at 3D position A[p]Mk. This has been omitted from the description of these methods to aid readability.
Furthermore, when dealing with anisotropic magnification in RELION, we have chosen to always define the CTF in the undistorted 2D coordinates. The primary motivation behind this is the assumption
that the spherical aberration (the second summand in equation 10 ) should only be radially symmetrical if the image is not distorted. For this reason, once the distortion matrix M is known, we need
to transform the astigmatic-defocus matrix D into the new undistorted coordinate system. This is performed by conjugating D under M^−1,
When a CTF value is computed after this transformation has been performed, it is always computed as CTF(Mk) instead of as CTF(k).
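One consistent reading of "conjugating D under M^−1" (assuming the astigmatic-defocus term is the quadratic form kᵀDk, as defined earlier) is D′ = M^−T D M^−1: this choice leaves the quadratic form unchanged when every subsequent CTF evaluation uses Mk instead of k. A numerical check, with illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(4)

D = np.array([[1.3, 0.2], [0.2, 0.8]])       # symmetric defocus matrix
M = np.array([[1.02, 0.01], [0.00, 0.98]])   # distortion matrix
Minv = np.linalg.inv(M)

# Conjugate D under M^{-1}: D' = M^{-T} D M^{-1}.
D_prime = Minv.T @ D @ Minv

# The quadratic form is invariant: (Mk)^T D' (Mk) == k^T D k.
k = rng.normal(size=2)
lhs = (M @ k) @ D_prime @ (M @ k)
rhs = k @ D @ k
```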
The Zernike polynomials that are used as a basis for the symmetrical and antisymmetrical aberrations are also defined in the undistorted coordinates, i.e. the Zernike polynomials are also evaluated
at Z[b](Mk). Note that correction of these coefficients after estimating M cannot be performed analytically, but would require a numerical solution. Instead, we propose that the aberrations be
estimated only after M is known. In severe cases, a better estimate of M can be obtained by repeating the magnification refinement after determining optimal defocus and astigmatism estimates using
the initial estimate of M. We illustrate this scenario on a synthetic example in Section 3.4 .
2.5. Implementation details
The three methods described above need to be applied to a large number of particles in order to obtain a reliable estimate. Nevertheless, we allow the three effects to vary within a data set in
RELION-3.1. To facilitate this, we have introduced the concept of optics groups: partitions of the particle set that share the same optical properties, such as the voltage or pixel size (or the
aberrations and the magnification matrix). As of RELION-3.1, those optical properties are allowed to vary between optics groups, while particles from different groups can still be refined together.
This makes it possible to merge data sets collected on different microscopes with different magnifications and aberrations without the need to resample the images. The anisotropic magnification
refinement can then be used to measure the relative magnification between the optics groups by refining their magnification against a common reference map.
Since most of the optical properties of a particle are now defined through the optics group to which it belongs, each particle STAR file written out by RELION-3.1 now contains two tables: one listing
the optics groups and one listing the particles. The particles table is equivalent to the old table, except that certain optical properties are no longer listed. Those are typically the voltage, the
pixel and image sizes, the spherical aberration and the amplitude contrast, and they are instead specified in the optics groups list. This reduces the overall file size, and makes manual editing of
these properties easier.
A number of other optical properties are still stored in the particles list, allowing different values for different particles in the same group. These properties make up the per-particle part of the
symmetrical aberration, i.e. the coefficient γ[p,k] in (10) . The specific parameters that can vary per particle are the following: the phase shift, defocus, astigmatism, the spherical aberration and
the B-factor envelope.
The B-factor envelope is a two-dimensional parameter consisting of a scale factor S and the B factor itself. It corresponds to a Gaussian envelope over the CTF [given by S exp(−B|k|²/4)] and it
provides a means of weighting different particles against each other. Specifically, a greater B factor means that the particle will contribute less to the higher frequencies of the reconstruction.
Although B factors on the CTF have been available in earlier releases of RELION, the method to estimate them is new in version 3.1.
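The qualitative behaviour of the B-factor envelope is easy to see numerically. Assuming the common crystallographic convention S·exp(−B|k|²/4) (the exact normalization is an assumption here), a larger B factor suppresses the high-frequency contribution of a particle while leaving the low frequencies essentially untouched:

```python
import numpy as np

# Gaussian B-factor envelope over the CTF, assuming S * exp(-B |k|^2 / 4)
# with |k| a spatial frequency; names and values are illustrative.
def envelope(k, S, B):
    return S * np.exp(-B * k ** 2 / 4.0)

k = np.linspace(0.0, 0.5, 6)          # example spatial frequencies (1/A)
weak = envelope(k, 1.0, 20.0)         # particle with a small B factor
strong = envelope(k, 1.0, 80.0)       # particle with a large B factor
```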
We have developed a new CTF refinement program that considers all particles in a given micrograph and locally optimises all of the above five parameters, while each parameter can be modelled either
per particle, per micrograph or remain fixed. The program then uses the L-BFGS algorithm (Liu & Nocedal, 1989 ) to find the least-squares optimal parameter configuration given all the particles in
the micrograph. This allows the user to find, for example, the most likely phase shift of a micrograph while simultaneously finding the most likely defocus value of each particle in it. The program
has been engineered to offer a wide range of combinations, even though some of them may not appear useful at first, for example estimating the spherical aberration or the phase shift per particle. In this manner the program allows for exceptional cases, such as very large particles, but we recommend that most users model only the defocus per particle and everything else per micrograph or not at all.
Note that the terms defocus and astigmatism above refer specifically to δz (defocus) and a[1] and a[2] (astigmatism), where the astigmatic defocus matrix D[p] of particle p in (10) is composed as
As an example, this would allow the defocus to be expressed per particle by allocating a separate δz for each particle, while the astigmatism could be estimated per micrograph by requiring a[1] and a[2] to be identical for all particles.
3. Results
To validate our methods and to illustrate their usefulness, we describe four experiments using publicly available data sets. Firstly, we assess aberration correction on two data sets that were
collected on a 200keV Thermo Fisher Talos Arctica microscope. Secondly, we illustrate a limitation of our method for modelling aberrations using a data set that was collected on a 300keV Thermo
Fisher Titan Krios microscope with a Volta phase plate with defocus (Danev et al., 2017 ). Thirdly, we apply our methods to one of the highest resolution cryo-EM structures published so far,
collected on a Titan Krios without a phase plate. Finally, we determine the precision to which the magnification matrix M can be recovered in a controlled experiment, using artificially distorted
images, again from a Titan Krios microscope.
3.1. Aberration experiment at 200keV
We reprocessed two publicly available data sets: one on rabbit muscle aldolase (EMPIAR-10181) and the other on the Thermoplasma acidophilum 20S proteasome (EMPIAR-10185). Both data sets were
collected on the same 200keV Talos Arctica microscope, which was equipped with a Gatan K2 Summit direct electron camera. At the time of the original publication (Herzik et al., 2017 ), the aldolase
could be reconstructed to 2.6Å resolution and the proteasome to 3.1Å resolution using RELION-2.0.
We picked 159352 particles for the aldolase data set and 74722 for the proteasome. For both data sets, we performed five steps and measured the resolution at each step. Firstly, we refined the
particles without considering the aberrations. The resulting 3D maps were then used to perform an initial CTF refinement in which the per-particle defoci and the aberrations were estimated. The
particles were then subjected to Bayesian polishing (Zivanov et al., 2019 ), followed by another iteration of CTF refinement. In order to disentangle the effects of improved Bayesian polishing from
the aberration correction, we also performed a refinement with the same polished particles, but assuming all aberrations to be zero. We measured the Fourier shell correlation (FSC) between the two
independent half sets and against maps calculated from the known atomic models (PDB entries 1zah and 6bdf, respectively; St-Jean et al., 2005 ; Campbell et al., 2015 ). The plots are shown in Fig. 1
and the resolutions measured by the half-set method, using a threshold of 0.143, in Table 1 . Plots of the aberration estimates are shown in Fig. 2 .
Fig. 2 indicates that both data sets exhibit antisymmetrical as well as symmetrical aberrations. For both data sets, the shapes of both types of aberrations are clearly visible in the per-pixel plots,
and the parametric Zernike fits capture these shapes well. The antisymmetrical aberrations correspond to a trefoil (or threefold astigmatism) combined with a slight axial coma and they are more
pronounced than the symmetrical aberrations. The trefoil is visible as three alternating areas of positive and negative phase difference, with approximate threefold symmetry, in the images for the
antisymmetrical aberration estimation (on the left in Fig. 2 ). The axial coma breaks the threefold symmetry by making one side of the image more positive and the opposite side more negative. The
apparent fourfold symmetry in the images for the symmetrical aberrations (on the right in Fig. 2 ) corresponds to fourfold astigmatism and is strongest for the proteasome data set. The proteasome
also shows the stronger antisymmetrical aberrations, which even exceed 180° at the higher frequencies. Note that because the per-pixel plots show a phase angle, they wrap around once they reach 180°. This has no effect on the estimation of the parameters, however.
The FSC plots (Fig. 1 ) indicate that aberration correction leads to higher resolution, as measured by both the FSC between independently refined half-maps and the FSC against maps calculated from
the atomic models. Comparing the result of the second CTF refinement and its equivalent run without aberration correction (the lower two lines in Table 1 ; Fig. 3 ), the resolution increased from 2.5
to 2.1Å for the aldolase data set and from 3.1 to 2.3Å for the proteasome. In addition, aberration correction also allows more effective Bayesian polishing and defocus estimation, which is the
reason for performing the CTF refinement twice.
3.2. Phase-plate experiment
We also analysed a second data set on a T. acidophilum 20S proteasome (EMPIAR-10078). This data set was collected using a Volta phase plate (VPP; Danev et al., 2017 ) under defocus. We picked 138080
particles and processed them analogously to the previous experiment, except that the CTF refinement now included the estimation of anisotropic magnification. The estimated aberrations are shown in
Fig. 4 and the FSCs in Fig. 6.
The purpose of a VPP is to shift the phase of the unscattered beam in order to increase the contrast against the scattered beam. This is accomplished by placing a heated film of amorphous carbon (the
VPP) at the back-focal plane of the microscope and letting the electron beam pass through it after it has been scattered by the specimen. The central, unscattered beam, which exhibits much greater intensity than the scattered components, then spontaneously creates a spot of negative electric potential on the VPP (Danev et al., 2014 ). It is this spot which then causes the phase shift in the
unscattered beam. After being used for a certain amount of time, the spot charges up even more and develops imperfections. At this point, the user will typically switch to a different position on the
carbon film. The charge at the previous position will decay, although some charge may remain for an extended period. If the VPP is shifted by an insufficient distance, the old spot will reside in a
position traversed by scattered rays corresponding to some higher image frequency. We hypothesize that we can observe these spots in our symmetrical aberration plots.
The symmetrical plots show a positive phase shift at the center of frequency space (Fig. 4 ). We hypothesize that this spot is caused by the charge built up at the currently used position on the phase plate (Danev & Baumeister, 2016 ). Moreover, this plot shows four additional spots at higher spatial frequencies. We hypothesize that these may arise from residual charges on previously
used phase-plate positions. These charges would then interfere with the diffracted rays at higher spatial frequency from the current position, resulting in the observed spots in the aberration image.
The absence of the vertical neighbor spots from the antisymmetrical plot suggests that the spots were scanned in a vertically alternating but horizontally unidirectional sense. This is illustrated in
Fig. 5 .
Because these types of aberrations do not satisfy our smoothness assumptions, they cannot be modelled well using a small number of Zernike basis polynomials. Although increasing the number of Zernike
polynomials would in principle allow the expression of any arbitrary aberration function, it also decreases the ability of the system to extrapolate the aberration into the unseen high-frequency
regions. As a consequence, our aberration model cannot be used to neutralise the effects of the phase-plate positions, which is confirmed by the FSC plots in Fig. 6 . In practice, this problem can be avoided experimentally by spacing the phase-plate positions further apart, thereby pushing the affected frequencies arbitrarily high.
The estimated magnification anisotropy for this data set is relatively weak. The final magnification matrix M we recovered was
which corresponds to 1.35% anisotropy along two perpendicular axes rotated by 66°.
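The anisotropy percentage and axis orientation can be read off a magnification matrix via its singular values. The sketch below constructs a matrix with a known 1.35% anisotropy along axes rotated by 66° (the values quoted above, used here purely as illustration) and recovers the anisotropy:

```python
import numpy as np

# Build a magnification matrix with scales 1.0135 and 1.0 along two
# perpendicular axes rotated by 66 degrees (illustrative values).
theta = np.deg2rad(66.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = R @ np.diag([1.0135, 1.0]) @ R.T

# The singular values give the scales along the two principal axes;
# their relative difference is the anisotropy.
s = np.linalg.svd(M, compute_uv=False)
anisotropy = (s[0] - s[1]) / s[1]
```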
3.3. High-resolution experiment
We applied our methods to a mouse heavy-chain apoferritin data set (EMPIAR-10216) collected on a 300keV Titan Krios fitted with a Falcon 3 camera. At the time of its publication, the particle could
be reconstructed to a resolution of 1.62Å using RELION-3.0 (Danev et al., 2019 ). This data set thus offers us a means to examine the effects of higher-order aberrations and anisotropic
magnification at higher resolutions.
We compared the following three reconstructions. Firstly, the original, publicly available map. Since it had been estimated using RELION-3.0, the effects of beam tilt could be corrected for, but none
of the other high-order aberrations or anisotropic magnification. Secondly, the aberrations alone: for this, we proceeded from the previous refinement and first estimated the higher order aberrations
and then, simultaneously, per-particle defoci and per-micrograph astigmatism. Thirdly, we performed the same procedure but only after first estimating the anisotropic magnification. For the third
case, the entire procedure was repeated after a round of refinement. For all three cases, we calculated the FSC between the independently refined half-maps and the FSC against an atomic model, PDB
entry 6s61, that was built into an independently reconstructed cryo-EM map of mouse apoferritin at a resolution of 1.8Å. In the absence of a higher-resolution atomic model, comparison with PDB entry
6s61 relies on the assumption that the geometrical restraints applied during atomic modelling resulted in predictive power at resolutions beyond 1.84Å. We used the same mask as in the original
publication for correction of the solvent-flattening effects on the FSC between the independent half-maps, and we used the same set of 147637 particles throughout.
The aberration plots in Fig. 7 show that this data set exhibits a trefoil aberration and faint fourfold astigmatism. In the magnification plot in Fig. 8 , we can see a clear linear relationship
between the displacement of each pixel k and its coordinates. This indicates that the measured displacements stem from a linearly distorted image and that the implied distortion is a horizontal
dilation and a vertical compression. This is consistent with anisotropic magnification, since the average magnification has to be close to 1 because the reference map itself has been obtained from
the same images under random in-plane angles. The smoothness of the per-pixel plot suggests that the large number of particles allows us to measure the small amount of anisotropy reliably. The
magnification matrix we estimated was
which corresponds to 0.54% anisotropy. As can be seen in the FSC curves in Fig. 9 , considering either of these effects is beneficial, while considering both yields a resolution of 1.57Å, an
improvement of three shells over the reconstruction obtained using RELION-3.0.
3.4. Simulated anisotropic magnification experiment
To measure the performance of our anisotropic magnification estimation procedure in the presence of a larger amount of anisotropy, we also performed an experiment on synthetic data. For this
experiment, we used a small subset (9487 particles from 29 movies) taken from a human apoferritin data set (EMPIAR-10200), which we had processed before (Zivanov et al., 2018 ). We distorted the
micrographs by applying a known anisotropic magnification using MotionCor2 (Zheng et al., 2017 ). The relative scales applied to the images were 0.95 and 1.05, respectively, along two perpendicular
axes rotated at a 20° angle. In this process, about 4% of the particles were mapped outside the images, so the number of distorted particles is slightly smaller at 9093.
We then performed four rounds of refinement on particle images extracted from the distorted micrographs in order to recover the anisotropic magnification. Each round consisted of a CTF refinement
followed by an autorefinement. The CTF refinement itself was performed twice each time: once to estimate the anisotropy and then again to determine the per-particle defoci and per-micrograph
astigmatism. The FSC curves for the different rounds can be seen in Fig. 10 . We observe that the FSC already approaches that of the undistorted particles after the second round. In the first round,
the initial 3D reference map is not precise enough to allow a reliable recovery of anisotropy.
The magnification matrix M recovered in the final round is
It corresponds to the relative scales of 0.951 and 1.049, respectively, along two perpendicular axes rotated by 19.939°, although it also contains an additional uniform scaling by a factor of 1.022.
The uniform scaling factor has no influence on the refinement, but it does change the pixel size of the resulting map. We therefore note that caution must be taken to either enforce the product of
the two relative scales to be 1, or to otherwise calibrate the pixel size of the map against an external reference.
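Enforcing a product of 1 for the two relative scales amounts to dividing M by the square root of its determinant, which removes the uniform scaling component while preserving the anisotropy (matrix values below are illustrative):

```python
import numpy as np

# An estimated magnification matrix containing a spurious uniform scale.
M = np.array([[1.045, 0.003], [0.002, 0.999]])

# Dividing by sqrt(det M) forces the product of the two relative
# scales (the determinant) to 1, leaving only the anisotropic part.
M_norm = M / np.sqrt(np.linalg.det(M))
```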
This experiment shows that the anisotropy of the magnification can be estimated to three significant digits, even from a relatively small number of particles. Since the estimate arises from adding up
contributions from all particles, the precision increases with their number.
4. Discussion
Although we previously described a method to estimate and correct for beam-tilt-induced axial coma (Zivanov et al., 2019 ), no methods to detect and correct for higher-order optical aberrations have
been available until now. It is therefore not yet clear how often these aberrations are a limiting factor in cryo-EM structure determination of biological macromolecules. The observation that we have
already encountered several examples of strong threefold and fourfold astigmatism on two different types of microscopes suggests that these aberrations may be relatively common.
Our results with the aldolase and 20S proteasome data sets illustrate that when antisymmetrical and/or symmetrical aberrations are present in the data, our methods lead to an important increase in
the achievable resolution. Both aldolase and the 20S proteasome could be considered as `easy' targets for cryo-EM structure determination: they have both been used to test the performance of cryo-EM
hardware and software (see, for example, Li et al., 2013 ; Danev & Baumeister, 2016 ; Herzik et al., 2017 ; Kim et al., 2018 ). However, our methods are not limited to standard test samples, and have
already been used to obtain biological insights from much more challenging data. Images of brain-derived tau filaments from an ex-professional American football player with chronic traumatic
encephalopathy that we recorded on a 300keV Titan Krios microscope showed severe threefold and fourfold astigmatism. Correction for these aberrations led to an increase in resolution from 2.7 to
2.3Å, which allowed the visualisation of alternative side-chain conformations and of ordered water molecules inside the amyloid filaments (Falcon et al., 2019 ).
Titan Krios microscopes come equipped with lenses that can be tuned to correct for threefold astigmatism, although this operation is typically only performed by engineers. The Titan Krios microscope
that was used to image the tau filaments from the American football player is part of the UK national cryo-EM facility at Diamond (Clare et al., 2017 ). After measuring the severity of the
aberrations, its lenses were re-adjusted, and no higher-order aberrations have been detected on it since (Peijun Zhang, personal communication). Talos Arctica microscopes do not have lenses to
correct for trefoil, and the microscope that was used to collect the aldolase and the 20S proteasome data sets at the Scripps Research Institute continues to yield data sets with fluctuating amounts
of aberrations (Gabriel Lander, personal communication). Until the source of these aberrations is determined or better understood, the corrections proposed here will be important for the processing of
data acquired on these microscopes.
The extent to which higher-order aberrations are limiting will depend on the amount of threefold and fourfold astigmatism, as well as on the target resolution of the reconstruction. We have only
observed noticeable increases in resolution for data sets that yielded reconstructions with resolutions beyond 3.0–3.5Å before the aberration correction. However, the effects of the aberrations are
more pronounced for lower-energy electrons. Therefore, our methods may become particularly relevant for data from 100keV microscopes, the development of which is envisioned to yield better images
for thin specimens and to bring down the elevated costs of modern cryo-EM structure determination (Peet et al., 2019 ; Naydenova et al., 2019 ).
The effects of anisotropic magnification on cryo-EM structure determination of biological samples have been described previously, and methods to correct for it have been proposed (Grant & Grigorieff,
2015 ; Yu et al., 2016 ). Our method bears some resemblance to the exhaustive search algorithm implemented in JSPR (Guo & Jiang, 2014 ; Yu et al., 2016 ), in that it compares reference projections
with high signal-to-noise ratios and the particle images of an entire data set. However, our method avoids the computationally expensive two-dimensional grid search over the direction and magnitude
of the anisotropy in JSPR. In addition, our method is, in principle, capable of detecting and modeling skew components in the magnification.
In addition to modeling anisotropic magnification, our method can also be used for the combination of different data sets with unknown relative magnifications. In cryo-EM imaging, the magnification
is often not exactly known. Again, it is possible to accurately measure the magnification using crystalline test specimens with known diffraction geometry, but in practice errors of up to a few
percent in the nominal pixel size are often observed. When processing data from a single data set, such errors can be absorbed, to some extent, in the defoci values. This produces a CTF of very
similar appearance but at a slightly different scale. Therefore, a small error in pixel size only becomes a problem at the atomic modeling stage, where it leads to overall contracted or expanded
models with bad stereochemistry. (Please note that this is no longer true at high spatial frequencies owing to the absolute value of the C[s]; e.g. beyond 2.5Å for non-C[s]-corrected 300kV
microscopes.) When data sets from different sessions are combined, however, errors in their relative magnification will affect the 3D reconstruction at much lower resolutions. Our method can directly
be used to correct for such errors. In addition, to provide further convenience, our new implementation allows the combination of particle images with different pixel and box sizes into a single
refinement. The performance of our methods under these conditions remains to be illustrated. Often, when two or more different data sets are combined, a single data set outperforms the other data
sets at the resolution limit of the reconstruction and combination of the data sets does not improve the map.
Our results illustrate that antisymmetrical and symmetrical aberrations, as well as anisotropic magnification, can be accurately estimated and modelled a posteriori from a set of noisy projection
images of biological macromolecules. No additional test samples or experiments at the microscope are necessary; all that is needed is a 3D reconstruction of sufficient resolution that the optical
effects become noticeable. Our methods could therefore in principle be used in a `shoot first, ask questions later' type of approach, in which the speed of image acquisition is prioritized over
exhaustively optimizing the microscope settings. In this context, we caution that while the boundaries of applicability of our methods remain to be explored, it may be better to reserve their use for
unexpected effects in data from otherwise carefully conducted experiments.
In the following, we show that our formulation of the astigmatic-defocus term as a quadratic form is equivalent to the traditional form as defined in RELION, which in turn was based on the model in
CTFFIND (Mindell & Grigorieff, 2003 ). Let the two defoci be given by Z[1] and Z[2], the azimuthal angle of astigmatism by φ[A] and the wavelength of the electron by λ. We then wish to show that
for the astigmatic-defocus matrix D defined as
The multiplication by Q rotates k into the coordinate system of the astigmatism,
Multiplying out the quadratic form and applying the definitions of Z[μ] and Z[d] yields
By substituting cos(2δφ[k]) for cos^2(δφ[k]) − sin^2(δφ[k]) we see that this is equivalent to the original formulation.
In order to convert a given D into the traditional formulation, we perform an eigenvalue decomposition of −D/(πλ). The two eigenvalues are then equal to Z[1] and Z[2], respectively, while the
azimuthal angle of the eigenvector corresponding to Z[1] is equal to φ[A].
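The conversion just described can be sketched in a few lines. This is a rough numerical illustration, not the paper's code; the defining equation for D is not reproduced in this excerpt, so the sign and scale convention used below (D = -πλ QᵀΛQ, chosen so that -D/(πλ) has eigenvalues Z1 and Z2) is an assumption.

```python
import math

def defocus_matrix(z1, z2, phi_a, wavelength):
    # Hypothetical convention: D = -pi*lambda * Q^T diag(Z1, Z2) Q, where Q
    # rotates into the astigmatism coordinate system with azimuth phi_a.
    c, s = math.cos(phi_a), math.sin(phi_a)
    scale = -math.pi * wavelength
    # Q^T diag(z1, z2) Q written out for the symmetric 2x2 case
    return [[scale * (z1 * c * c + z2 * s * s), scale * (z1 - z2) * c * s],
            [scale * (z1 - z2) * c * s, scale * (z1 * s * s + z2 * c * c)]]

def matrix_to_traditional(D, wavelength):
    # Closed-form eigen-decomposition of the symmetric 2x2 matrix -D/(pi*lambda):
    # the eigenvalues are the two defoci, and the azimuth of the eigenvector
    # belonging to Z1 (taken here as the larger defocus) is the astigmatism angle.
    scale = -math.pi * wavelength
    a, b, d = D[0][0] / scale, D[0][1] / scale, D[1][1] / scale
    mean, r = 0.5 * (a + d), math.hypot(0.5 * (a - d), b)
    phi_a = (0.5 * math.atan2(2.0 * b, a - d)) % math.pi
    return mean + r, mean - r, phi_a

# Round trip: build D from (Z1, Z2, phi_A), then recover the three parameters.
z1, z2, phi = matrix_to_traditional(defocus_matrix(15000.0, 12000.0, 0.3, 0.02), 0.02)
print(round(z1), round(z2), round(phi, 3))
```

Because eigenvector signs are arbitrary, the azimuth is reduced modulo π; any other convention for D only changes the constant in front of the eigen-decomposition.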
Computing T and l explicitly through (38) would require iterating over all particles p in the data set. Since we have already accumulated the terms in S[k] and e[k] over all p, we can avoid this by
instead performing the following summation over all pixels k,
where ⊗ indicates element-wise multiplication, and the real symmetrical 4 × 4 matrix S[k], k and e[k],
We thank Rado Danev for providing polished particles for the data set in EMPIAR-10216, and Jake Grimmett and Toby Darling for assistance with high-performance computing.
Funding information
This work was funded by the UK Medical Research Council (MC_UP_A025_1013 to SHWS), the Japan Society for the Promotion of Science (Overseas Research Fellowship to TN) and the Swiss National Science
Foundation (SNF; P2BSP2_168735 to JZ).
Batson, P. E., Dellby, N. & Krivanek, O. L. (2002). Nature, 418, 617–620.
Campbell, M. G., Veesler, D., Cheng, A., Potter, C. S. & Carragher, B. (2015). eLife, 4, e06380.
Clare, D. K., Siebert, C. A., Hecksel, C., Hagen, C., Mordhorst, V., Grange, M., Ashton, A. W., Walsh, M. A., Grünewald, K., Saibil, H. R., Stuart, D. I. & Zhang, P. (2017). Acta Cryst. D73, 488–495.
Danev, R. & Baumeister, W. (2016). eLife, 5, e13046.
Danev, R., Buijsse, B., Khoshouei, M., Plitzko, J. M. & Baumeister, W. (2014). Proc. Natl Acad. Sci. USA, 111, 15635–15640.
Danev, R., Tegunov, D. & Baumeister, W. (2017). eLife, 6, e23006.
Danev, R., Yanagisawa, H. & Kikkawa, M. (2019). Trends Biochem. Sci. 44, 837–848.
Falcon, B., Zivanov, J., Zhang, W., Murzin, A. G., Garringer, H. J., Vidal, R., Crowther, R. A., Newell, K. L., Ghetti, B., Goedert, M. & Scheres, S. H. W. (2019). Nature, 568, 420–423.
Fernandez-Leiro, R. & Scheres, S. H. W. (2016). Nature, 537, 339–346.
Ferraro, G., Ciambellotti, S., Messori, L. & Merlino, A. (2017). Inorg. Chem. 56, 9064–9070.
Grant, T. & Grigorieff, N. (2015). J. Struct. Biol. 192, 204–208.
Guo, F. & Jiang, W. (2014). Methods Mol. Biol. 1117, 401–443.
Hartley, R. I. (1994). ECCV '94: Proceedings of the Third European Conference on Computer Vision, edited by J.-O. Eklundh, pp. 471–478. Berlin: Springer-Verlag.
Hartley, R. I. & Zisserman, A. (2003). Multiple View Geometry in Computer Vision, 2nd ed. Cambridge University Press.
Hawkes, P. W. & Kasper, E. (1996). Principles of Electron Optics, Vol. 3. New York: Academic Press.
Herzik, M. A. Jr, Wu, M. & Lander, G. C. (2017). Nat. Methods, 14, 1075–1078.
Iudin, A., Korir, P. K., Salavert-Torres, J., Kleywegt, G. J. & Patwardhan, A. (2016). Nat. Methods, 13, 387–388.
Kim, L. Y., Rice, W. J., Eng, E. T., Kopylov, M., Cheng, A., Raczkowski, A. M., Jordan, K. D., Bobe, D., Potter, C. S. & Carragher, B. (2018). Front. Mol. Biosci. 5, 50.
Krivanek, O., Corbin, G., Dellby, N., Elston, B., Keyse, R., Murfitt, M., Own, C., Szilagyi, Z. & Woodruff, J. (2008). Ultramicroscopy, 108, 179–195.
Li, X., Mooney, P., Zheng, S., Booth, C. R., Braunfeld, M. B., Gubbens, S., Agard, D. A. & Cheng, Y. (2013). Nat. Methods, 10, 584–590.
Liu, D. C. & Nocedal, J. (1989). Math. Program. 45, 503–528.
Meyer, R., Kirkland, A. & Saxton, W. (2002). Ultramicroscopy, 92, 89–109.
Mindell, J. A. & Grigorieff, N. (2003). J. Struct. Biol. 142, 334–347.
Naydenova, K., McMullan, G., Peet, M. J., Lee, Y., Edwards, P. C., Chen, S., Leahy, E., Scotcher, S., Henderson, R. & Russo, C. J. (2019). IUCrJ, 6, 1086–1098.
Nelder, J. A. & Mead, R. (1965). Comput. J. 7, 308–313.
Peet, M. J., Henderson, R. & Russo, C. J. (2019). Ultramicroscopy, 203, 125–131.
Saxton, W. (1995). J. Microsc. 179, 201–213.
Saxton, W. (2000). Ultramicroscopy, 81, 41–45.
St-Jean, M., Lafrance-Vanasse, J., Liotard, B. & Sygusch, J. (2005). J. Biol. Chem. 280, 27262–27270.
Yu, G., Li, K., Liu, Y., Chen, Z., Wang, Z., Yan, R., Klose, T., Tang, L. & Jiang, W. (2016). J. Struct. Biol. 195, 207–215.
Zernike, F. (1934). Physica, 1, 689–704.
Zheng, S. Q., Palovcak, E., Armache, J.-P., Verba, K. A., Cheng, Y. & Agard, D. A. (2017). Nat. Methods, 14, 331–332.
Zivanov, J., Nakane, T., Forsberg, B. O., Kimanius, D., Hagen, W. J., Lindahl, E. & Scheres, S. H. W. (2018). eLife, 7, e42166.
Zivanov, J., Nakane, T. & Scheres, S. H. W. (2019). IUCrJ, 6, 5–17.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided
the original authors and source are cited.
Systems Of Linear Equations Practice Worksheet - Equations Worksheets
Systems Of Linear Equations Practice Worksheet – The goal of Expressions and Equations Worksheets is to help your child learn more effectively and efficiently. These worksheets contain interactive
exercises and problems that are determined by the order in which operations are performed. These worksheets make it easy for children to grasp complex concepts and simple concepts quickly. These PDF
resources are free to download and may be used by your child in order to practise maths equations. These resources are beneficial for students from 5th-8th grade.
Get Free Systems Of Linear Equations Practice Worksheet
These worksheets can be utilized by students in the 5th through 8th grades. The two-step word problems are made with fractions and decimals. Each worksheet contains ten problems. You can find them at
any online or print resource. These worksheets are an excellent opportunity to practice rearranging equations. In addition to practicing rearranging equations, they also aid students in understanding
the basic properties of equality and reverse operations.
These worksheets can be used by fifth- to eighth-graders, and they are well suited to students who have trouble calculating percentages. There are three types of problems to pick from: single-step questions with whole numbers, single-step questions with decimal numbers, or word-based problems involving fractions and decimals. Each page contains 10 equations. These equations worksheets can be used by students from the 5th to 8th grades.
These worksheets are a great tool for practicing fraction calculations as well as other concepts that are related to algebra. Many of these worksheets allow students to choose between three different
types of problems. You can pick a numerical or word-based challenge. It is important to choose the correct type of problem since every challenge will be unique. There are ten challenges on each page,
so they’re fantastic resources for students in the 5th through 8th grades.
These worksheets teach students about the relationship between variables and numbers. These worksheets allow students to practice with solving polynomial equations as well as solving equations and
discovering how to utilize them in daily life. These worksheets are an excellent opportunity to gain knowledge about equations and formulas. These worksheets will educate you about various kinds of
mathematical problems and the various symbols that are used to express them.
This worksheet is extremely useful to students in the beginning grade. These worksheets will teach students how to graph equations and solve them. These worksheets are ideal for practice with
polynomial variables. They can also help you learn how to factor and simplify the equations. It is possible to find a wonderful collection of equations and expressions worksheets for children at any
grade. Making the work yourself is the most efficient way to master equations.
There are plenty of worksheets that teach quadratic equations, and each level comes with its own worksheet. The worksheets are designed to let you practice solving problems of the fourth degree. Once you have completed a set of problems, you can continue on to other types of equations and then return to the same problems again; for instance, you might find the same problem presented in an extended form.
Arithmetic Workshop
For Students in Grades 4-6th
Get ready for our classes! Before starting our math courses, we highly recommend working through this Arithmetic Workshop. This series of four sessions focuses on reviewing basic arithmetic concepts:
addition, subtraction, multiplication and division of whole numbers.
This is a great way to have your students ease into the new academic year (or new math program) with confidence and smiles! This is a series of 4 sessions, 60 minutes each, open to all students.
There's an optional Math Project (Candy Shop) to complete between sessions if you'd like your student to have additional practice with place value.
def P1(x):
    return 3*x*x*x + 13.6*x*x + 13.2*x + 37.8

def P2(x):
    return x*x*x - x*x + 4*x - 30

def P3(x):
    return x*x*x + 0.5*x*x + x - 6
For this test, you are provided with the file poly.txt containing coefficients of 25 polynomials. The first few lines of the file look as follows:
3 x^3 - 1.8 x^2 - 7.6 x - 20.8
2 x^3 + 5 x^2 + 14 x + 24
2.7 x^3 + 7.59 x^2 + 9.49 x + 3.85
x^3 - 8 x - 32
The first line is a header; you will have to skip it when reading the file. Each of the following lines starts with the four coefficients A, B, C, D, uniquely determining a polynomial Ax³ + Bx² + Cx + D. The coefficients are followed by the suggested Lo and Hi limits. Each provided polynomial is guaranteed to have exactly one root in the interval Lo ≤ x ≤ Hi. We will use goalSeek to find this root for each of the provided polynomials.
Roots of polynomials can be found using the function goalSeek if you supply the tested polynomial function as the function parameter, set the target parameter equal to zero, and start with a good LowLimit and HighLimit interval enclosing the root. Note that goalSeek requires that the tested function f is such that f(LowLimit) ≤ target ≤ f(HighLimit). All polynomials provided for this task will satisfy this requirement.
The last column is a conventional representation of the polynomial, which can be pasted into WolframAlpha (https://www.wolframalpha.com) to confirm that your program correctly finds the roots. For example, here is the response (https://www.wolframalpha.com/input/?i-3+x96E3+-1.8+xN5E2-76-x-+20.8) for the first polynomial in the file; in particular, it says that the real root is equal to 2.6.
In this task, we are going to write a program test7.py that finds the roots of the cubic polynomials listed in the file poly.txt using the goalSeek function.
Fig: 1
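The core of such a test7.py can be sketched as follows. This is not the provided goalSeek (whose internals are not shown in the task); it assumes a plain bisection implementation, and the bracket [0, 5] for the first polynomial is a guess, since the Lo/Hi columns are lost in the scan.

```python
def goalSeek(f, target, LowLimit, HighLimit, tol=1e-9):
    # Bisection sketch: relies on the stated guarantee
    # f(LowLimit) <= target <= f(HighLimit).
    while HighLimit - LowLimit > tol:
        mid = 0.5 * (LowLimit + HighLimit)
        if f(mid) < target:
            LowLimit = mid
        else:
            HighLimit = mid
    return 0.5 * (LowLimit + HighLimit)

def make_poly(a, b, c, d):
    # Cubic A*x^3 + B*x^2 + C*x + D as a function of x
    return lambda x: a * x**3 + b * x**2 + c * x + d

# First polynomial from poly.txt: 3 x^3 - 1.8 x^2 - 7.6 x - 20.8
root = goalSeek(make_poly(3.0, -1.8, -7.6, -20.8), 0.0, 0.0, 5.0)
print(round(root, 4))  # real root; WolframAlpha gives 2.6
```

A full test7.py would additionally open poly.txt, skip the header line, parse A, B, C, D, Lo, Hi from each remaining line, and call goalSeek once per polynomial.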
Deductive proofs: transferability and the 'built-in opponent'
(Cross-posted at NewAPPS)
This week I read an extremely interesting paper by Kenny Easwaran, ‘Probabilistic proofs and transferability’, which appeared in Philosophia Mathematica in 2009. Kenny had heard me speak at the Formal Epistemology Workshop in Munich a few weeks ago, and thought (correctly!) that there were interesting connections between the concept of transferability that he develops in the paper and my ‘built-in opponent’ conception of logic and deductive proofs; so he kindly drew my attention to his paper. Because I believe Kenny is really on to something deep about mathematics in his paper, I thought it would be a good idea to elaborate a bit on these connections in a blog post, hoping that it will be of interest to a number of people besides the two of us!
Drawing on previous work by Don Fallis, Kenny’s paper addresses the issue of why probabilistic proofs are for the most part not regarded as ‘real proofs’ by mathematicians, even though some of them
can be said to have a higher degree of certainty than very long traditional proofs (given that in very long proofs, the probability of a mistake being made somewhere is non-negligible). He discusses
in particular the Miller-Rabin primality test. (I recall having heard Michael Rabin speaking on this twice some years ago, and recall being both extremely impressed and extremely puzzled by what he
was saying!) But for the full story, you’ll just have to read Kenny’s paper, as here I will focus on his concept of transferability so as to compare it to the built-in opponent conception of proofs
that I am currently developing.
Kenny’s main claim is that, even if they offer a very high degree of epistemic certainty (in some sense), these probabilistic proofs lack the feature of transferability, and this is why they are not accepted as ‘proper proofs’ by most mathematicians. Thus, he proposes transferability as the ultimate conceptual core of the conception of proofs that mathematicians de facto entertain. Here is how he presents the concept of transferability:
… the basic idea is that a proof must be such that a relevant expert will become convinced of the truth of the conclusion of the proof just by consideration of each of the steps in the proof.
With non-transferable proofs, something extra beyond just the steps in the proof is needed—in the case of probabilistic proofs, this extra component is a knowledge of the process by which the
proof was generated, and in particular that the supposedly random steps really were random.
As he soon adds, “transferability is a social epistemic virtue, rather than an individual one.” A transferable proof is one which can be checked by any expert mathematician (possibly within a given
mathematical subfield). Importantly, when checking a proof, a mathematician adopts what can be described as an adversarial attitude towards the author of the proof: she will scrutinize every step
looking for loops in the argumentation, in particular counterexamples to specific inferential steps. Once she runs through the proof and finds no fault in it, she is persuaded of the truth of the
conclusion if she has granted the premises. Thus, on this conception, a proof is a public discourse aimed at persuasion; this also explains why mathematicians prefer proofs that are not only correct, but which are also explanatory: their persuasive effect is greater.
As Kenny correctly noted, his notion of transferability is very closely related to my ‘built-in opponent’ (BIO) hypothesis. I recall having mentioned BIO in blog posts before (and see here for a draft paper), but here is a recap: I rely on the historical development of the deductive method (as documented in e.g. Netz’s The Shaping of Deduction) to argue that a deductive argument is originally a discourse aimed at compelling the audience to accept (the truth of) the conclusion, if they accept (the truth of) the premises. It is only in the modern period, in particular with Descartes and Kant, that logic became predominantly associated with inner thinking processes rather than with public situations of dialogical interaction.
Crucially, deductive proofs would correspond to a specific kind of dialogues, namely adversarial dialogues of a very special kind, as the participants have opposite goals: proponent seeks to establish the conclusion; opponent seeks to block the establishment of the conclusion. But deductive proofs are no longer dialogues properly speaking, as they do not correspond to actual dialogical interactions between two or more active participants. In effect, the two main transformations leading from the actual dialogues of the early Academy (which provided the historical background for the emergence of the notion of a deductive argument) to deductive proofs seem to be the move from oral to written contexts, and the fact that the deductive method has internalized the opponent in the sense that it is now built into the framework: every inferential step must be immune to counterexamples, i.e. it must be indefeasible. I refer to this conception as the built-in opponent conception of proofs because the original role of opponent (checking whether the dialogical moves made by proponent are indefeasible) is now played by the method itself, so to speak: the method has become the idealized opponent. Another way of formulating the same point is to say that what started out as a strategic desideratum (the formulation of indefeasible arguments) then became a constitutive feature of the method as such.
It should be clear by now how closely related Kenny’s notion of transferability and my BIO conception are. For starters, we both emphasize the social nature of a deductive proof as a discourse aimed
at persuasion, which must thus be ‘transferable’. Indeed, Kenny discusses at length the fact that, in mathematics, testimony is not a legitimate source of information/knowledge (contrasting with how
widely testimony is relied upon for practical purposes and even in other scientific domains*): the mathematician “will want to convince herself of everything and avoid trusting testimony.” I believe
that the key point to understand the absence of testimony in mathematics is the adversarial nature of the dialogues having given rise to the deductive method: your opponent in such a dialogical
interaction is by definition not trustworthy. However, in a probabilistic proof, she who surveys the proof must trust that the author of the proof did not cherry-pick the witnesses, which is at odds
with the idea of mathematical proofs as corresponding to adversarial dialogues.
Moreover, the dialogical model explains why, in a mathematical proof, one is allowed to use only information that is explicitly accepted. In Kenny’s terms:
Papers will rely only on premises that the competent reader can be assumed to antecedently believe, and only make inferences that the competent reader would be expected to accept on her own
consideration. If every proof is published in a transferable form, then the arguments for any conclusion are always publicly available for the community to check.
This is because the premises in a mathematical proof are the propositions that all participants in the dialogue in question (proponent, opponent and audience) have explicitly granted: no recourse to
external, contentious information is allowed.
But the upshot is that, while ultimately based on an adversarial dialogical model, it is precisely the constant self-monitoring made possible by the transferability of proofs that makes mathematics a
social, collective enterprise as well as an astonishingly fruitful field of inquiry: it allows for a sort of cooperative division of labor (distributed cognition!). Recall when Edward Nelson
announced he had a proof of the inconsistency of PA last year: the mathematical community immediately joined forces to scrutinize his (purported) proof, thus adopting the adversarial role of opponent
(in my terminology). Before long, Terry Tao found a loop in the proof (see comments in this post, where Tao describes his train of thought); and once he was convinced of the cogency of Tao's argument, Nelson gracefully withdrew his claim.
I also want to suggest that the social nature of mathematics makes Bayesianism, originally an individualistic framework, ultimately unsuitable to deal with the epistemology of mathematics. But as
this post is already much too long, I will not further develop this idea for now. Indeed, these are just some preliminary reflections on the connections between the concepts of transferability and of
a ‘built-in opponent’ in a deductive proof; I hope to give all this much more thought in the coming months, but for now I’d be curious to hear what others may have to say on all this.
* I suspect that transferability in mathematics does have a counterpart in the empirical sciences, namely replication of experimental results, but this will remain a topic for another post.
1. Thanks for the discussion of the paper! I just want to point out that I accept that all mathematicians do in fact come to know lots of mathematics merely on the basis of testimony. The role of
transferability is in making sure that this is not essential for any specific result that is accepted into the body of mathematics.
I think I make some minor points near the end of the paper trying to apply this idea to philosophy, to explain what appears to be going wrong in experimental philosophy - they are treating
intuitions as evidence for a claim, rather than just as giving premises that the reader happens to accept. I suspect that in philosophy you get more preservation of the adversarial idea, but the
same goal of transferability applies.
1. Hi Kenny, thanks! Yes, there were lots of interesting points in your paper that I had to leave out of the post, which was already too long. I did like a lot your brief discussion on
philosophical methodology at the end, for example, but I just couldn't fit it in. Actually, one thing I'd really like to hear your thoughts on is the brief suggestion at the end of my post,
whether Bayesianism is a suitable framework for the epistemology of mathematics. It always struck me that Bayesianism is a very individualistic framework, but maybe I'm oversimplifying things
2. There are proofs based on testimony. For example, a mathematician may not have attempted to oppose the proposed argument for the classification of finite simple groups. Based on testimony, he may
still use this classification in his own argument for another result. If conscientious, then he may write "if the classification of finite simple group holds, then my result holds."
This use of testimony happens quite prevalently at extents not as extreme as the above. For example, someone who studies Riemann surfaces may use the topological classification by genus without
prior attempt to oppose the topological classification. This happens as a mathematician who studied Riemann surfaces is versed in "analysis" or "algebra" but may not have the ability to intensely
find counter examples to the classification by genus, which is in another branch of mathematics, namely "topology".
1st PUC Physics Previous Year Question Paper March 2019
Students can Download 1st PUC Physics Previous Year Question Paper March 2019, Karnataka 1st PUC Physics Model Question Papers with Answers helps you to revise the complete Karnataka State Board
Syllabus and score more marks in your examinations.
Karnataka 1st PUC Physics Previous Year Question Paper March 2019
Mangalore District
First PUC Annual Examination, March 2019
Time : 3.15 Hours
Maximum Marks: 70
General Instructions:
1. All parts are compulsory.
2. Answers without relevant diagram / figure / circuit wherever necessary will not carry any marks.
3. Numerical problems should be solved with relevant formulae.
Part – A
I Answer the following questions. ( 10 × 1 = 10 )
Question 1.
Name the system of units accepted internationally.
Question 2.
Define null vector.
Question 3.
Give an example of a conservative force.
Question 4.
Write an expression for moment of inertia of a solid sphere of radius R and mass M about its diameter.
Question 5.
Define angular acceleration
Question 6.
What is meant by elasticity of a body?
Question 7.
How does the fractional change in length of a rod vary with its change in temperature?
Question 8.
State zeroth law of thermodynamics
Question 9.
Define mean free path of a gas molecule.
Question 10.
What is a node in stationary waves?
Part – B
II. Answer any FIVE of the following Questions. ( 5 × 2 = 10 )
Question 11.
Name the strongest and weakest fundamental forces in nature.
Question 12.
Write any two applications of dimensional analysis.
Question 13.
Write any two differences between distance and displacement.
Question 14.
Mention any two factors on which the centripetal acceleration depends.
Question 15.
How does the linear momentum of a body vary with its mass and velocity?
Question 16.
State Pascal’s law. Name any one device which works on the principle of Pascal’s law.
Question 17.
State and explain Boyle’s law.
Question 18.
What is periodic motion? Give an example.
Part – C
III. Answer any FIVE of the following questions. ( 5 × 3 = 15 )
Question 19.
Explain triangle method of addition of two vectors.
Question 20.
Write any three advantages of friction
Question 21.
Define work done by a force. Under what conditions the work done by a force is maximum and minimum?
Question 22.
When does the rigid body is said to be in mechanical equilibrium? Write the conditions for the equilibrium of a rigid body.
Question 23.
Derive an expression for acceleration due to gravity on the surface of the earth in terms of mass of the earth and gravitational constant.
Question 24.
State and explain Hook’s law and hence define modulus of elasticity.
Question 25.
What is meant by viscosity? How does the viscosity change with rise of temperature in case of (i) liquids and
(ii) gases?
Question 26.
What is radiation? Mention any two properties of thermal radiations.
Part – D
IV. Answer any TWO of the following Questions. ( 2 × 5 = 10 )
Question 27.
What is meant by a velocity-time graph? Derive the equation x = v[0]t + (1/2)at^2 from the velocity-time graph.
Question 28.
Define elastic potential energy. Derive an expression for potential energy of a spring.
Question 29.
Derive an expression for kinetic energy of a rolling body.
V. Answer any TWO of the following questions. ( 2 × 5 = 10 )
Question 30.
Explain the working of Carnot engine with the help of PV diagram.
Question 31.
Derive an expression for time period of oscillation of a simple pendulum
Question 32.
Show that only odd harmonics are present in vibrations of air column in closed pipe.
VI. Answer any THREE of the following questions. ( 3 × 5 = 15 )
Question 33.
A cricket ball is thrown at a speed of 28 ms^-1 in the direction 30° above the horizontal. Calculate (a) the maximum height reached by the projectile and (b) the horizontal range of the projectile. (Given: Acceleration due to gravity = 9.8 ms^-2)
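A quick numerical check of the expected answers, sketched in Python with the standard projectile relations h = u^2 sin^2(theta) / (2g) and R = u^2 sin(2*theta) / g (variable names are mine):

```python
import math

u, theta, g = 28.0, math.radians(30.0), 9.8
h_max = (u * math.sin(theta)) ** 2 / (2.0 * g)   # maximum height
rng = u * u * math.sin(2.0 * theta) / g          # horizontal range
print(round(h_max, 2), round(rng, 2))  # about 10.0 m and 69.28 m
```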
Question 34.
A constant force acting on a body of mass 3.0 kg changes its speed from 2.0 ms^-1 to 3.5 ms^-1 in 25 seconds. If the direction of motion of the body remains unchanged, find the magnitude and direction of the force.
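The arithmetic here is short enough to check directly (a sketch; the names are mine):

```python
m, v0, v1, t = 3.0, 2.0, 3.5, 25.0
a = (v1 - v0) / t    # uniform acceleration
F = m * a            # Newton's second law
print(round(F, 2))   # 0.18 N, directed along the motion
```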
Question 35.
An artificial satellite revolves around the earth with an orbital velocity of 5.92 kms^-1. Calculate the height of the satellite from the earth's surface. (Given: Radius of the earth = 6380 km; acceleration due to gravity on the earth = 9.8 ms^-2)
Question 36.
A copper block of mass 2.5 kg is heated in a furnace to a temperature of 500°C and then placed on a large ice block at a temperature of 0°C. Calculate the maximum amount of ice that can melt. (Given: Specific heat of copper = 390 Jkg^-1K^-1, Latent heat of fusion of ice = 3.35 × 10^5 Jkg^-1)
Question 37.
A transverse harmonic wave travelling along positive X-axis on a stretched string is described by Y(x, t) = 0.03 sin(36t – 1.8x + \(\frac{\pi}{4}\)) where X and Y are in metre and t is in second.
Calculate (a) amplitude (b) initial phase (c) frequency and (d) speed of the wave.
Bangalore North District
First PUC Annual Examination, March 2019
Time : 3.15 Hours
Maximum Marks: 70
General Instructions:
1. All parts are compulsory.
2. Answers without relevant diagram / figure / circuit wherever necessary will not carry any marks.
3. Numerical problems should be solved with relevant formulae.
Part – A
I. Answer the following questions. ( 10 × 1 = 10 )
Question 1.
How many kilograms in one unified atomic mass unit?
Question 2.
State the law of triangle of addition of two vectors.
Question 3.
State Aristotle’s fallacy.
Question 4.
Name the type of energy stored in a stretched or compressed spring.
Question 5.
Give an example for Torque or Moment of couple.
Question 6.
Write the aim of Cavendish experiment in Gravitation.
Question 7.
How does strain depend on stress?
Question 8.
Mention the value of steam point of water in Fahrenheit scale.
Question 9.
Which quantity is kept constant in an adiabatic process?
Question 10.
For the function y = log(ωt), the displacement y increases monotonically with time t. Is it a periodic or a non-periodic function?
Part – B
II. Answer any FIVE of the following Questions. ( 5 × 2 = 10 )
Question 11.
Name any two fundamental forces in nature.
Question 12.
Define accuracy in the measurement. How does accuracy depend on precision in the measurement?
Question 13.
Distinguish between speed and velocity.
Question 14.
State and explain Newton’s first law of motion.
Question 15.
Mention the relation between linear momentum and angular momentum with usual meanings.
Question 16.
Write any two practical applications of Pascal’s law.
Question 17.
Give any two Assumptions of Kinetic theory of gases.
Question 18.
How does the period of oscillation of a pendulum depend on the mass of the bob and the length of the pendulum?
Part – C
III. Answer any FIVE of the following questions. ( 5 × 3 = 15 )
Question 19.
Deduce the expression for the horizontal range of a projectile. For what angle of projection does the horizontal range become maximum?
Question 20.
Write any three laws of friction.
Question 21.
Deduce the work-energy theorem for a constant force.
Question 22.
Discuss the motion of centre of mass.
Question 23.
Name the three types of moduli of elasticity.
Question 24.
What is capillary rise? Write the expression for height of capillary rise in the capillary tube with usual meanings.
Question 25.
Obtain the expression for Thermal stress.
Question 26.
Show that the specific heat capacity of a solid is equal to three times the gas constant.
[C = 3R]
Part – D
IV. Answer any TWO of the following Questions. ( 2 × 5 = 10 )
Question 27.
What is velocity – time graph? Deduce x = v[0]t + \(\frac{1}{2}\) at^2 by using velocity-time graph.
Question 28.
State and illustrate the law of conservation of linear momentum for any two colliding particles in a closed system.
Question 29.
State and explain parallel axes and perpendicular axes theorems of moment of inertia.
V. Answer any TWO of the following questions. ( 2 × 5 = 10 )
Question 30.
State and explain the laws of thermal conductivity and hence mention the SI Unit of coefficient of thermal conductivity.
Question 31.
Deduce the expression for energy stored in a body executing simple harmonic motion.
Question 32.
Discuss the mode of vibration of air columns in a closed pipe and hence define the fundamental frequency of vibration.
VI. Answer any THREE of the following questions. ( 3 × 5 = 15 )
Question 33.
A stone of mass 0.25 kg tied to the end of a string is whirled round in a circle of radius 1.5 m with a speed of 40 revolutions per minute in a horizontal plane. What is the tension in the string?
Question 34.
A pump on the ground floor of a building can pump up water to fill a tank of volume 30 m^3 in 15 minutes. If the tank is 40 m above the ground and the efficiency of the pump is 30%, how much electrical power is consumed by the pump?
Question 35.
If two spheres of equal masses with their centres 0.2 m apart attract each other with a force of 1 × 10^-6 kg wt, what should be the value of their masses?
Given : g = 9.8 ms^-2 and G = 6.67 × 10^-11 Nm^2 kg^-2.
Question 36.
The sink in Carnot’s heat engine is at 300 K and has efficiency 0.4. If the efficiency of the engine is to be increased to 0.5, find by how many kelvin the temperature of the source should be increased.
Question 37.
A train is moving at a speed of 72 kmph towards a station sounding a whistle of frequency 640 Hz. What is the apparent frequency of the whistle as heard by a man standing on the platform when the train approaches him?
Given : Speed of sound = 340 ms^-1.
Bangalore South District
First PUC Annual Examination, March 2019
Time : 3.15 Hours
Maximum Marks: 70
General Instructions:
1. All parts are compulsory.
2. Answers without relevant diagram / figure / circuit wherever necessary will not carry any marks.
3. Numerical problems should be solved with relevant formulae.
Part – A
I. Answer the following questions. ( 10 × 1 = 10 )
Question 1.
Write the dimensional formula for force.
Question 2.
Write SI unit of power.
Question 3.
What is Projectile motion?
Question 4.
Write relation between angular velocity and linear velocity.
Question 5.
Write expression for acceleration due to gravity at a height h from the surface of earth.
Question 6.
Define stress.
Question 7.
Define angle of contact.
Question 8.
Write ideal gas equation for one mole of gas.
Question 9.
State zeroth law of thermodynamics.
Question 10.
State law of equipartition of energy.
Part – B
II. Answer any FIVE of the following Questions. ( 5 × 2 = 10 )
Question 11.
Name any two fundamental forces in nature.
Question 12.
Write any two applications of dimensional analysis.
Question 13.
A body gets displacement of 5m in 2s. What is the average velocity?
Question 14.
Define scalar product of two vectors.
Question 15.
What is kinetic friction? Write the expression.
Question 16.
Define specific heat capacity of a substance.
Question 17.
Write any two difference between isothermal process and adiabatic process.
Question 18.
Define frequency and period of oscillation.
Part – C
III. Answer any FIVE of the following questions. ( 5 × 3 = 15 )
Question 19.
For a body moving in uniform circular motion, derive the centripetal acceleration a[c] = \(\frac{v^{2}}{r}\).
Question 20.
State Newton’s second law of motion. Hence derive F = ma.
Question 21.
State work energy theorem with proof.
Question 22.
Show that kinetic energy of rotating body is \(\frac{1}{2}\) Iω^2.
Question 23.
State Kepler’s laws of planetary motion.
Question 24.
Calculate \(\frac{C_{p}}{C_{v}}\) for monatomic gas.
Question 25.
Draw the stress-strain curve for a metal. What are the proportional limit and yield point?
Question 26.
State Bernoulli’s theorem and write the equation.
Part – D
IV. Answer any TWO of the following Questions. ( 2 × 5 = 10 )
Question 27.
Derive the equation x = v[0]t + \(\frac{1}{2}\) at^2 using v-t graph.
Question 28.
State and explain law of conservation of momentum with proof.
Question 29.
State and explain parallel axis and perpendicular axis theorem.
V. Answer any TWO of the following questions. ( 2 × 5 = 10 )
Question 30.
Explain working of Carnot’s heat engine.
Question 31.
Derive an expression for energy of a body which is in S.H.M.
Question 32.
What is Doppler effect of sound? Derive expression for apparent frequency of sound when source is moving away from stationary listener.
VI. Answer any THREE of the following questions. ( 3 × 5 = 15 )
Question 33.
A body is projected by making an angle 30° with horizontal with a velocity of 39.2 ms^-1. Find
A. Time of flight (T)
B. Range (R)
C. Maximum height (H)
Question 34.
A body of mass 5 kg moving with a velocity of 6 ms^-1 collides with another body of mass 2 kg which is at rest. Afterwards they move in the same direction as before. If the velocity of the body of mass 2 kg is 10 ms^-1, find the velocity and kinetic energy of the body of mass 5 kg.
Question 35.
Find the potential energy of a system of four particles each of mass 5 kg placed at the vertices of a square of side 2m.
Question 36.
Two identical bars each of length L = 0.1 m and area A = 0.02 m^2. One is iron of thermal conductivity K[1] = 79 W m^-1 K^-1 and another of brass of thermal conductivity K[2] = 109 W m^-1 K^-1 are
soldered end to end. Free end of iron bar is at 373K and of brass bar is at 273K. Find the temperature at junction of two bars.
Question 37.
A wave travelling in a string is described by the equation y(x, t) = 0.005 sin (80x – 3t), where all quantities are in SI units. Find
a) amplitude (A)
b) Wave length (λ)
c) Period (T)
Dharwad District
First PUC Annual Examination, February 2019
Time : 3.15 Hours
Maximum Marks: 70
General Instructions:
1. All parts are compulsory.
2. Answers without relevant diagram / figure / circuit wherever necessary will not carry any marks.
3. Numerical problems should be solved with relevant formulae.
Part – A
I. Answer the following questions. ( 10 × 1 =10 )
Question 1.
Define acceleration.
Question 2.
What does slope of velocity – time graph represent?
Question 3.
For what angle of projection is the range of a projectile maximum?
Question 4.
State Work – Energy theorem.
Question 5.
Mention an expression for moment of inertia of a thin, circular ring of radius ‘R’ and mass ‘M’ about an axis passing through its diameter.
Question 6.
Mention the value of universal gravitational constant.
Question 7.
Of rubber and steel, which is more elastic?
Question 8.
What happens to the viscosity of a liquid when it is heated?
Question 9.
State first law of thermodynamics.
Question 10.
Mention the number of degrees of freedom of a mono atomic gas.
Part – B
II. Answer any FIVE of the following Questions. ( 5 × 2 = 10 )
Question 11.
Mention any two basic forces in nature.
Question 12.
A pebble of mass 0.05 kg is thrown vertically upwards. Give the direction of the net force on the pebble
(a) During its upward motion (b) During its downward motion
Question 13.
Mention the conditions for equilibrium of a rigid body.
Question 14.
State and explain Newton’s law of gravitation.
Question 15.
Distinguish between heat capacity and specific heat capacity.
Question 16.
Convert 100°C into Fahrenheit temperature.
Question 17.
Define period and frequency of oscillation.
Question 18.
Distinguish between longitudinal waves and transverse waves.
Part – C
III. Answer any FIVE of the following questions. ( 5 × 3 = 15 )
Question 19.
Check the correctness of an equation V – V[0] = at using dimensional analysis (symbols have usual meaning).
Question 20.
Mention any three advantages of friction.
Question 21.
Derive an expression for potential energy of a stretched spring.
Question 22.
Explain law of conservation of angular momentum in case of a person sitting in a rotating chair.
Question 23.
Define Young’s modulus. Mention an expression for Young’s modulus of a wire and explain the symbols used.
Question 24.
Derive an expression for liquid pressure at a point inside the liquid.
Question 25.
State and explain Dalton’s law of partial pressures.
Question 26.
Mention Newton’s formula for speed of sound in gases and explain Laplace’s correction to Newton’s formula.
Part – D
IV. Answer any TWO of the following Questions. ( 2 × 5 = 10 )
Question 27.
Derive x = v[0]t + \(\frac{1}{2}\) at^2 using velocity – time graph.
Question 28.
Derive an expression for centripetal acceleration.
Question 29.
State and verify law of conservation of linear momentum in case of a system of two bodies.
V. Answer any TWO of the following questions. ( 2 × 5 = 10 )
Question 30.
What is meant by acceleration due to gravity? Derive an expression for variation of acceleration due to gravity with height from earth’s surface.
Question 31.
What is heat engine? With schematic diagram, explain the parts of heat engine.
Question 32.
Derive an expression for total energy in simple harmonic motion.
VI. Answer any THREE of the following questions. ( 3 × 5 = 15 )
Question 33.
A cricket ball is thrown at a speed of 50 ms^-1 in a direction making an angle 30° with the horizontal, calculate time of flight and maximum height. (Given g = 9.8 ms^-2)
Question 34.
A pump on the ground floor of a building can pump up water to fill a tank of volume 30 m^3 in 15 minutes. If the tank is 40 m above the ground and the efficiency of the pump is 30%, how much electric power is consumed by the pump? (Density of water is 10^3 kg m^-3)
Question 35.
The angular speed of a motor wheel is increased from 1200 rpm to 3120 rpm in 16 seconds.
(i) What is its angular acceleration?
(ii) How many revolutions does the wheel make during this time?
Question 36.
A body cools from 80°C to 50°C in 5 minutes. Calculate the time it takes to cool from 60°C to 30°C. The temperature of the surrounding is 20°C.
Question 37.
A train, standing at the outer signal of a railway station blows a whistle of frequency 400 Hz in still air. What is the frequency of the whistle for a platform observer when the train
(a) Approaches the platform with a speed of 10 ms^-1.
(b) Recedes from the platform with a speed of 10 ms^-1
(c) What is the speed of sound in each case? Speed of sound in still air can be taken as 340 ms^-1.
Mysore District
First PUC Annual Examination, February 2019
Time : 3.15 Hours
Maximum Marks: 70
General Instructions:
1. All parts are compulsory.
2. Answers without relevant diagram / figure / circuit wherever necessary will not carry any marks.
3. Numerical problems should be solved with relevant formulae.
Part – A
I. Answer the following questions. ( 10 × 1 = 10 )
Question 1.
Write the dimensional formula for Pressure.
Question 2.
When does the circular motion become uniform?
Question 3.
What is the amount of work done by the gravitational force on the body moving on the horizontal plane?
Question 4.
What is elastic collision?
Question 5.
Give one example for elastomers.
Question 6.
What is thermal stress?
Question 7.
Define mean free path of the molecule in the gas.
Question 8.
Where does Kinetic energy of the oscillating particle become maximum?
Question 9.
What is resonance?
Question 10.
Give the value of phase difference between the particles of adjacent loop of stationary wave.
Part – B
II. Answer any FIVE of the following Questions. ( 5 × 2 = 10 )
Question 11.
Name any two fundamental forces of nature.
Question 12.
Mention any two sources of systematic error.
Question 13.
Draw position time graph for the particle having zero acceleration.
Question 14.
Write the equation for maximum horizontal range in projectile motion and explain the terms.
Question 15.
Mention any two methods of reducing friction.
Question 16.
Mention the conditions for the body in mechanical Equilibrium.
Question 17.
State and explain Hooke’s law.
Question 18.
What are extensive thermodynamic variables? Give one example.
Part – C
III. Answer any FIVE of the following questions. ( 5 × 3 = 15 )
Question 19.
Obtain the equation for maximum height in projectile motion.
Question 20.
Derive \(\overrightarrow{\boldsymbol{F}}=m \overrightarrow{\boldsymbol{a}}\) with usual notations.
Question 21.
Prove the work-energy theorem in the case of constant force.
Question 22.
Obtain the equation for moment of couple.
Question 23.
Derive equation for acceleration due to gravity on the surface of the earth.
Question 24.
Show that volume co-efficient of expansion of ideal gas is inversely proportional to absolute temperature.
Question 25.
Show that C[v] = \(\frac{3}{2}\) R for mono atomic gas.
Question 26.
Give any three differences between progressive wave and stationary wave.
Part – D
IV. Answer any TWO of the following Questions. ( 2 × 5 = 10 )
Question 27.
Derive equation for centripetal acceleration.
Question 28.
Prove “law of conservation of mechanical energy” for freely falling body.
Question 29.
Obtain equation for Kinetic energy of rolling motion.
V. Answer any TWO of the following questions. ( 2 × 5 = 10 )
Question 30.
State and prove Bernoulli’s theorem.
Question 31.
What is isothermal process? Obtain equation for work done in isothermal process.
Question 32.
What is closed pipe? Show that modes of vibration in closed pipe are odd harmonics.
VI. Answer any THREE of the following questions. ( 3 × 5 = 15 )
Question 33.
A Car moving along a straight road with speed of 144 Kmh^-1 is brought to a stop within a distance of 200 m. What is the retardation of the car and how long does it take to come to rest?
Question 34.
A circular racetrack of radius 200 m is banked at the angle of 10°. If the coefficient of friction between the wheels of race car and the road is 0.15, what is the
a) optimum speed of the race car to avoid wear and tear on its tyres?
b) maximum permissible speed to avoid slipping? [Given : Acceleration due to gravity on the earth = 9.8 ms^-2]
Question 35.
If the weight of a 4 kg mass on the surface of the earth is 39.2 N, calculate the acceleration due to gravity of earth at
a) 32 km height from the surface of the earth.
b) 16 km depth from the surface of the earth. [Given : Radius of earth = 6400 km]
Question 36.
A copper block of mass 2.5 kg is heated in a furnace to a temperature of 500°C and then placed on a large ice block. What is the maximum amount of ice that can melt?
(Given: Specific heat of copper = 0.39 × 10^3 Jkg^-1K^-1, Latent heat of fusion of water = 335 × 10^3 Jkg^-1)
Question 37.
A body oscillates with SHM according to the equation (in SI units)
x = 5 cos (3πt + \(\frac{\pi}{4}\))
(a) frequency of oscillation
(b) amplitude of oscillation
(c) Displacement of oscillation at t= 1s
Belgaum District
First PUC Annual Examination, March 2019
Time : 3.15 Hours
Maximum Marks: 70
General Instructions:
1. All parts are compulsory.
2. Answers without relevant diagram / figure / circuit wherever necessary will not carry any marks.
3. Numerical problems should be solved with relevant formulae.
Part – A
I. Answer the following questions. ( 10 × 1 = 10 )
Question 1.
Write the number of significant figures in 287.5 m.
Question 2.
Give example for a vector multiplied by scalar.
Question 3.
When is work done said to be maximum?
Question 4.
Which law of motion is used to explain rocket propulsion?
Question 5.
Write S. I. unit of moment of inertia.
Question 6.
Which is more elastic, rubber or steel?
Question 7.
Define coefficient of volume expansion in solids.
Question 8.
State Zeroth law of thermodynamics.
Question 9.
How many degrees of freedom are there in diatomic gases?
Question 10.
What is Doppler’s effect?
Part – B
II. Answer any FIVE of the following Questions. ( 5 × 2 = 10 )
Question 11.
Name any two basic forces in nature.
Question 12.
Mention any two uses of dimension analysis.
Question 13.
Draw velocity time graph of
a) a body starting from rest and moving with uniform acceleration.
b) a body moving with uniform negative acceleration.
Question 14.
Mention the expression for magnitude of two vectors and explain terms.
Question 15.
What is the importance of centre of mass of system of particles?
Question 16.
Define Poisson’s ratio and write the formula for it.
Question 17.
What is meant by streamline flow and turbulent flow?
Question 18.
What are damped oscillations? Give one example.
Part – C
III. Answer any FIVE of the following questions. ( 5 × 3 = 15 )
Question 19.
Obtain an expression for range of a projectile.
Question 20.
Mention types of friction.
Question 21.
Distinguish between elastic and inelastic collision.
Question 22.
State and explain the theorem of perpendicular axes.
Question 23.
Derive the relation between acceleration due to gravity and gravitational constant.
Question 24.
Arrive at an expression for pressure inside a liquid.
Question 25.
What is latent heat? On what factors does it depend? Give its S.I. unit.
Question 26.
State the assumptions of kinetic theory of gases.
Part – D
IV. Answer any TWO of the following Questions. ( 2 × 5 = 10 )
Question 27.
Define centripetal acceleration and obtain an expression for it.
Question 28.
State Newton’s second law of motion and hence derive F = ma where symbols have usual meanings.
Question 29.
Define torque and show that torque is equal to the rate of change of angular momentum of a particle.
V. Answer any TWO of the following questions. ( 2 × 5 = 10 )
Question 30.
State and explain Newton’s law of cooling.
Question 31.
Derive expression for frequency and time period of oscillatory bob of simple pendulum.
Question 32.
Explain modes of vibration of air column in closed Pipe.
VI. Answer any THREE of the following questions. ( 3 × 5 = 15 )
Question 33.
A player throws a ball upwards with an initial speed of 29.4 ms^-1.
a) What is the velocity at the highest point of its path?
b) To what height does the ball rise and after how long does the ball return to the player’s hands?
Take g = 9.8 ms^-2 Neglect air resistance.
Question 34.
A pump on the ground floor of a building can pump up water to fill a tank of volume 30 m^3 in 15 minutes. If the tank is 40 m above the ground and the efficiency of the pump is 30%, how much electric power is consumed by the pump? Take g = 9.8 ms^-2, ρ = 1000 kgm^-3.
Question 35.
Calculate the acceleration due to gravity
a) at height 16 km above the earth’s surface and
b) at depth 12.8 km below the surface of earth,
Given: radius of earth is 6400 km, acceleration due to gravity on the surface of earth = 9.8 ms^-2.
Question 36.
A Carnot engine whose efficiency is 30% takes heat from a source maintained at a temperature of 600 K. If it is desired to have an engine of efficiency 50%, what should be the intake temperature for the same exhaust (sink)?
Question 37.
A progressive wave is described by Y(x, t) = 0.005 sin(80πt – 3πx), where x and y are in metre and t is in second. Find the (a) amplitude (b) wavelength (c) period and frequency of the wave.
Fenske, Nora (2012): Structured additive quantile regression with applications to modelling undernutrition and obesity of children. Dissertation, LMU München: Faculty of Mathematics, Computer Science
and Statistics
Quantile regression allows to model the complete conditional distribution of a response variable - expressed by its quantiles - depending on covariates, and thereby extends classical regression
models which mainly address the conditional mean of a response variable. The present thesis introduces the generic model class of structured additive quantile regression. This model class combines
quantile regression with a structured additive predictor and thereby enables a variety of covariate effects to be flexibly modelled. Among other components, the structured additive predictor
comprises smooth non-linear effects of continuous covariates and individual-specific effects which are particularly important in longitudinal data settings. Furthermore, this thesis gives an
extensive overview of existing approaches for parameter estimation in structured additive quantile regression models. These approaches are structured into distribution-free and distribution-based
approaches as well as related model classes. Each approach is systematically discussed with regard to the four previously defined criteria, (i) which different components of the generic predictor can
be estimated, (ii) which properties can be attributed to the estimators, (iii) if variable selection is possible, and, finally, (iv) if software is available for practical applications. The main
methodological development of this thesis is a boosting algorithm which is presented as an alternative estimation approach for structured additive quantile regression. The discussion of this
innovative approach with respect to the four criteria points out that quantile boosting involves great advantages regarding almost all criteria - in particular regarding variable selection. In
addition, the results of several simulation studies provide a practical comparison of boosting with alternative estimation approaches. From the beginning of this thesis, the development of structured
additive quantile regression is motivated by two relevant applications from the field of epidemiology: the investigation of risk factors for child undernutrition in India (by a cross-sectional study)
and for child overweight and obesity in Germany (by a birth cohort study). In both applications, extreme quantiles of the response variables are modelled by structured additive quantile regression
and estimated by quantile boosting. The results are described and discussed in detail.
Item Type: Theses (Dissertation, LMU Munich)
Subjects: 500 Natural sciences and mathematics
500 Natural sciences and mathematics > 510 Mathematics
Faculties: Faculty of Mathematics, Computer Science and Statistics
Language: English
Date of oral examination: 30. October 2012
1. Referee: Fahrmeir, Ludwig
MD5 Checksum of the PDF-file: a1760b787c3ca9694a709c38c11c5dcf
Signature of the printed copy: 0001/UMC 20858
ID Code: 15161
Deposited On: 07. Jan 2013 12:44
Last Modified: 24. Oct 2020 01:37
Computing the Error of a Floating Point Addition or Multiplication
Let \(a,b\) be two floating point numbers. It can be shown that \((a+b)-\RN(a+b)\) is itself a floating point number.
This may not be true for other rounding methods. It’s also assuming there is no overflow.
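As a quick sanity check (not part of the original note), one can verify this claim numerically with Python's exact rational arithmetic:

```python
from fractions import Fraction

# Two doubles whose exact sum is not representable as a double.
a, b = 1.0, 2**-60
s = a + b  # IEEE-754 addition computes s = RN(a+b)

# Exact error of the rounded sum, computed in rational arithmetic.
err = Fraction(a) + Fraction(b) - Fraction(s)

# The error is itself exactly representable as a double:
assert Fraction(float(err)) == err
```

Here `s` rounds to 1.0 and the whole addend `2**-60` survives as the error, which is of course a floating point number.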
Let the base \(\beta\le3\), and assume we have correct rounding with rounding to the nearest. Let \(a,b\) be floating point numbers. Assume \(|a|\ge|b|\). Then consider this algorithm:
s = RN(a+b)
z = RN(s-a)
t = RN(b-z)
This has the property that \(s+t=a+b\) exactly.
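A sketch of this algorithm in Python (not from the original note): IEEE-754 doubles with the default round-to-nearest implement each line directly, so the three operations below are exactly the three \(\RN\) steps.

```python
from fractions import Fraction

def fast2sum(a, b):
    """Fast2Sum: assumes |a| >= |b| (and base <= 3, which holds for binary doubles).

    Returns (s, t) with s = RN(a + b) and s + t == a + b exactly.
    """
    s = a + b   # s = RN(a+b)
    z = s - a   # z = RN(s-a)
    t = b - z   # t = RN(b-z)
    return s, t

# The part of b discarded when rounding the sum comes back as t:
s, t = fast2sum(1e16, 3.0)
exact = Fraction(s) + Fraction(t)  # equals 10**16 + 3 exactly
```

For example, `fast2sum(1.0, 2**-60)` returns `(1.0, 2**-60)`: the sum rounds to 1.0 and `t` recovers the entire lost addend.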
Now in reality, we don’t always know if \(|a|\ge|b|\), and doing a check is expensive. The following algorithm will give the correct result all the time, regardless of the base, and as long as there
is no overflow.
s = RN(a+b)
a' = RN(s-b)
b' = RN(s-a')
Da = RN(a-a')
Db = RN(b-b')
t = RN(Da + Db)
Think of Da as \(\delta_{a}\). It has also been shown that this algorithm is optimal in the number of operations.
So what is the point of all this? Well, assume \(a+b\) cannot be represented as a floating point number. Then \(t\) will give us the error in our sum.
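The branch-free version can be sketched the same way in Python (again, double arithmetic stands in for \(\RN\)); the example deliberately passes the smaller number first, which would violate Fast2Sum's precondition:

```python
from fractions import Fraction

def two_sum(a, b):
    """2Sum: no assumption on |a| versus |b|.

    Returns (s, t) with s = RN(a + b) and s + t == a + b exactly.
    """
    s = a + b
    a2 = s - b     # a' = RN(s - b)
    b2 = s - a2    # b' = RN(s - a')
    da = a - a2    # delta_a = RN(a - a')
    db = b - b2    # delta_b = RN(b - b')
    t = da + db    # t = RN(delta_a + delta_b)
    return s, t

s, t = two_sum(2**-60, 1.0)   # |a| < |b| is fine here
```

`s` is 1.0 and `t` is `2**-60`: exactly the rounding error of the computed sum.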
We can do something similar for the product of two floating point numbers \(a,b\), provided that \(e_{a}+e_{b}\ge e_{\min}+p-1\). Let \(\pi=\RN(ab)\) and \(\rho=\RN(ab-\pi)\). Then \(\pi+\rho=ab\).
Note that this utilizes FMA. It works with all the rounding functions. It is called the Fast2MultFMA Algorithm. This works if there is no overflow.
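A sketch of this in Python (not from the original note). A real implementation would compute \(\rho\) with a single FMA, e.g. `fma(a, b, -pi)` (`math.fma` only exists from Python 3.13), so here the exact residual is emulated with rational arithmetic for portability; this leans on the fact stated above that \(ab-\pi\) is exactly representable under the exponent condition:

```python
from fractions import Fraction

def two_mult(a, b):
    """2MultFMA sketch: returns (pi, rho) with pi = RN(a*b) and pi + rho == a*b.

    rho = RN(a*b - pi); since a*b - pi is exactly representable (given the
    exponent condition in the text), a single FMA would compute it directly.
    Here the exact residual is emulated with rational arithmetic.
    """
    pi = a * b  # pi = RN(ab)
    rho = float(Fraction(a) * Fraction(b) - Fraction(pi))
    return pi, rho

pi, rho = two_mult(1.0 + 2**-30, 1.0 + 2**-30)
```

The exact square here is \(1 + 2^{-29} + 2^{-60}\); `pi` rounds to \(1 + 2^{-29}\) and `rho` recovers the \(2^{-60}\) term.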
Asphalt and Concrete Resurfacing Mix Kalmatron® Krete100® | 415-385-3290
Slump of Krete100® is 2 1/2" (6 cm), but by its workability it is always visualized as a concrete with a slump of at least 5" (13 cm).
The simplicity of Krete100® texturing without any surface curing saves on work hours while giving high quality and durability.
Picture of textured Krete100® taken after 5 years showing well-preserved, sharp edges in a trafficked area requiring deicing treatments every winter.
(At left) Krete100® was used to repair the Vilnius airport loading area. The project was completed in 4 hours and exceeded the customer's expectations.
Krete100®'s surface hardness is recognized by ACI 300 for Category D for distribution centers and truck stops.
__________ 1. Measure the area to be repaired, “A” in square feet.
__________ 2. Standard Krete100® consumption with layer thickness at 1” is 6.35 LB per 1 SF.
__________ 3. Consumption of Krete100® is: “A” x 6.35 = “W”, LB.
__________ 4. If a thinner overlay is required, for instance ½”, multiply “W” x ½ to get the required weight of Krete100®.
__________ 5. If a thicker overlay is required, for instance 2 ½”, then multiply “W” x 2 ½ to get the required weight of Krete100®.
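The worksheet above boils down to a single formula, weight = area × 6.35 lb × thickness in inches. A small Python helper (the function name is ours, not the manufacturer's) makes the scaling in steps 4 and 5 explicit:

```python
def krete100_weight_lb(area_sqft, thickness_in):
    """Krete100 needed, in pounds.

    Uses the stated consumption of 6.35 lb per square foot at a 1" layer,
    scaled linearly for thinner or thicker overlays.
    """
    return area_sqft * 6.35 * thickness_in

w_standard = krete100_weight_lb(100, 1.0)  # 100 sq ft at 1": about 635 lb
w_half = krete100_weight_lb(100, 0.5)      # same area at 1/2": about 317.5 lb
```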
Determining the most economical method of Krete100® application will depend on how large that area is. The right choice will save money, construction
time and labor cost. Therefore, unload Krete100®:
- by bobcat onto an area that requires no more than 3 to 4 workers. This video shows a batch of Krete100® with a W/C of 0.465 and slump of 3 3/4".
- by hose onto an area that requires over 4 workers. This video shows the batch flow with W/C = 0.493 and slump 4.5". The length of the hose is 190'.
Krete100® examples of texture, stamping, casting as construction joints, and draining traps after 5 years of deicing by salt every winter:
(Left) A sample of Krete100® placed without finishing. At right is the same batch of Krete100® placed with tamping.
Pigment can be added to Krete100®, with the same high resistance to dusting wear, delamination and cornering wear.
Many projects are required to be completed overnight. Krete100® offers fast setting times (4 hours) and a high density, durable and thin overlay with a variety of finishes. Fields of application
include driveways, roads and pavements, basements, loading docks, septic tanks and courtyards.
What is forward dynamics in multi-body systems? | SolidWorks Assignment Help
There is an accepted interpretation behind several types of forward dynamics: forward dynamics caused by exchange of information between the two bodies. In this article, we will develop a framework for studying this approach. We will first outline some necessary definitions. Then, assuming that we can generate two bodies, we discuss forward dynamics. We will then show that forward dynamics is not necessary for a two body process, because it only corresponds to one of the two bodies’ properties: where the two bodies “move” from one body’s position, we can easily calculate the time derivatives of their velocity and position. Then, we will build several forward dynamics that we will discuss below.

3. Forward dynamics in a two body process {#sec3-1}
========================================

Now, we shall define the forward dynamics in a two body process, which is the process through which the two bodies are moved at the same time. Next, we construct forward dynamics that can tell us the same direction without having to move the two bodies from the same position that the process was started from. Given an input force and the unit vector $[0,\ \frac{1}{2}]$, the following will form a current. The change in velocity of a new body $f(x,y)$ at point $x$ with the added force between it and another body $g(x,y)$ will be given by

$$V(f(x,y))=V(h(x,y))=V(f(x,y)+{\mathbf 10}_{f^x}h(x,y))\label{eq1-1}$$

$$\begin{aligned}
&{\mathbf C}\begin{bmatrix}f(x,y)+{\mathbf10}_{f^x}\\ f(x,y)+{\mathbf r}_h\end{bmatrix}\\
&={\mathbf C}\begin{bmatrix}f(x)&{\mathbf10}_{f^x}&\mathbf0_{{\mathbf r}_h}\\ f(x,y)&f(x,y)\end{bmatrix}={\mathbf C}\begin{bmatrix}f(x)&{\mathbf10}_{f^x}&0\\ f(x,y)&0&f(x,y)\end{bmatrix}
\end{aligned}$$

The vectors $f^x$ and ${\mathbf r}_h$ are symmetric and orthogonal. In this paper, we will analyze $V$ in a linear form, and apply that to the velocity field (\[eq1-2\]). If $\psi\neq0$, it is likely that $\psi=c$, where $c$ is constant on the $y$-axis, and so in an even number of steps it can be obtained by multiplying $\psi$ by the constant eigenvector of the Laplacian. From this point on the two body process will be described as

$$V=\int\int\psi'\mathbf10_{\psi\frac{{\mathbf r}_x}x}'d\mathbf x\,\psi\,\psi~~~~\text{on }\quad\partial\Omega_1\alpha_1.$$

From the Laplace-Beltrami equation for $\psi$ we obtain

$$\delta\psi=i\mathbf10_{f^x}f(x),~~~~~\text{on}\quad\partial \Omega_1\alpha_1.$$

Thus, we obtain the backward dynamics (\[eq1-1\]), which represents moving two bodies in the cross-section of the same cross-section, and also, for the motion along the $y$-axis at $f^x=0$ we have the forward dynamics (\[eq1-2\]). We note that $\partial\Omega_1\alpha_1/\partial\Omega_2\ge 4c$ as long as $c$ is small, or it is zero.

Dynamics with time derivatives require that the derivatives of the two-body velocity are known to be positive. To obtain $\delta\psi$ from (\[eq1-1\]), we are going to integrate around $\psi\to \mathbf0_{{\mathbf r}_h}=(\mathbf0_{x_1},{{\bf n}})$, i.e.
, along the $x$-axis which we wrote ${{\What is forward dynamics in multi-body systems? One of the most celebrated observations of the recent past is an observation that backstopped dynamics slows
down the system’s evolution. Consider a single wellbore in a polyclonic gas at pressure $P_0$, which has zero fluid-air filling rate. If two liquid crystals meet one they exchange their repulsion:
there is less room for them to move; every three months the gas fills, and the liquid is increased until the bottom of the cavity is filled, whereas the gas is unchanged by this change, like a moving
solid. Suddenly, the liquid takes on a new, almost inert, form. All that remains is to play the plastic game: the system becomes progressively impermeable to downward-moving forces, both fluid and
liquid. This analogy serves as our setting, where the central advantage for a gas to play is with the cooling rate ($c^4$). Forms first become impermeable by spring, cooling about a factor of two. At
first, this is called “snapshot” — that is, nearly as the size of a square of dew point varies with pressure. This is a model that can be treated independently of the force response. However, in
spring-induced collapse of the gas, the gas spontaneously transitions the form from impermeable at the bottom to thin-wall mode (e.g., in the “gravitized” case, but again, less than
pure pressure, and where the gas is fully isolated from the external environment). This behavior is caused by freezing. An analogy that this does not include the effects that in an otherwise stable
and expanding gas a “free-phase” gas becomes infertile — a form that we do not study here. But it does provide a form in which the microscopic dynamics of the system is driven by a
non-steady component. The form seems to be a combination of this and earlier work that attempts to evaluate the form of the liquid, but it will be more accurate for more quantitatively description of
the system. Also, it reveals only those gas-permeable physical systems that have truly no thermoformed matter there, like the gas with a boundary layer. The simplest, in this sense, gives a form that
is a combination of the compressional and thermal transitions that take place in a fraction of the system, and is capable of more than 2 orders of magnitude in effective
thermoelasticity. In practice all these effects are destroyed in the simplest, quasi-harmonic, of the gas: i.e.
, the liquid increases in temperature and pressure faster than in the “expanded” gas itself, and then rapidly transitions back to fully inexpressed liquid at constant pressure. But the derivation is
still subject to the task of analyzing the thermodynamic response to the temperature change. And it should be noted that a) the picture of a gas—which may beWhat is forward dynamics in multi-body
systems? This thesis is what is being said at the Council on Humanities and Society, National Institute of Neurological Disorders and Leukemic Disorders (NINDS), Institute for Neuroscience Research
International (INRICT), and is still fresh in academic discourse. Many themes arise: This one has to some extent been a bit of a blur. It started something many years back; and in
particular has inspired those who debated in Theory of Mathematical Models and Computation who remain faithful to the project (to whom many of these models can be axiomatic) during its second and
third projects in this book. Although NINDS has just filed its “2nd Theorising Session: How the Mathematical Model of Neuroscience and Mental Disorders”, that session is one of many I attended… and
has been mostly an unofficial one that dates back to earlier papers reviewed in this book. It is notable that the authors have a much smaller number of papers than before. Throughout the text…
because of an increase in the number of readings in the text for both papers in this group, one would have expected that one would get five to eleven papers in this number. But the sheer volume in
papers in the first group and the breadth… have made me ask myself if there are any obvious candidates for some of these topics. Are the methodological problems like complexity of the equations or
computational difficulties linked together with the sheer volume? I suspect not, but I have learned that the easy use of the non-commutative geometry of calculus seems to give a lot of problems a
better appearance than it has been accepted by the mathematical community. And it is interesting, a little concerning, that I find this method of thinking, and perhaps very similar idea in the field,
to explain why some disciplines have some problems similar to those of others at least for their problems at different levels of theoretical theory.
It is of course theoretically relevant in many other ways, but on the other side is the amount that it can be done. The reason it resembles the “clique method” is because those of us who have worked
the “clique method” figure out the equation and in turn if we pass through those problems which were the basis for the calculation and which led to the final formulas we got from them in the example
given. As the paper proceeds, I have to tell an interesting story. I am, admittedly, confused about the motivation behind this book… because the material has fallen into a blurring of academic work
from a central place (I do not know what was the “blue cell theme”!) The reader will have to pick a few alternative threads to try to find the story of one of those bl connected with the path leading
to this book. There are many people (including many others) who are left out in favor of this book just because it does not follow the guidelines (I think?). As usual, | {"url":"https://solidworksaid.com/what-is-forward-dynamics-in-multi-body-systems-29551","timestamp":"2024-11-02T06:18:15Z","content_type":"text/html","content_length":"157702","record_id":"<urn:uuid:feca068a-8c16-40dd-8790-339c3f6dc91b>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00030.warc.gz"} |
Facts about integers
Integers can be generated from the set of counting numbers and the subtraction operation. For example, when you subtract a larger natural number from a smaller one, you have a negative number. When a
natural number is also subtracted from itself, you have zero.
The result of adding, subtracting, or multiplying integers is always an integer. This cannot be true with dividing integers. Dividing 5 by 2 will give you 2.5, which isn't an integer.
Positive integers are known as natural numbers. An important characteristic of natural numbers can be seen in the equation a + x = b. Over the natural numbers, this only has a solution if b > a, as a and x can only be positive and their sum will be larger than a. In the realm of integers, the equation a + x = b will always have an answer.
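This closure property can be checked with a small Python sketch (the function name is our own illustration): over the integers, a + x = b always has the solution x = b − a, even when b is smaller than a.

```python
def solve_for_x(a, b):
    """Solve a + x = b over the integers: x = b - a always works."""
    return b - a

# Over the natural numbers, 7 + x = 3 has no solution;
# over the integers the solution is simply negative.
x = solve_for_x(7, 3)
print(x)           # -4
print(7 + x == 3)  # True
```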
A good way to represent integers on a number line is shown in the figure below.
A set of integers is denoted by Z, which is written as Z = {…, -4, -3, -2, -1, 0, 1, 2, 3, 4, …}. The ellipses indicate that Z has an infinite number of elements in both directions.
Real-life example of integers
Integers help to capture values in every field.
• In weather forecasting, they can be used to show the temperature in different regions. Where temperatures can go below zero in Fahrenheit and Celsius scales, integers will be negative
• Integers are used to represent values in every transaction we make, from banks to everyday cash machines.
Consecutive integers
Consecutive integers are integer numbers that follow each other in a sequence without gaps. They represent an unbroken sequence of numbers where one follows the other by the addition of one. If we
had x as an integer, then x + 1 and x + 2 will be the two consecutive integers. These numbers are in ascending order, and some examples are:
• -5, -4, -3, -2, -1, 0, 1, 2
• 200, 201, 202, 203, 204, 205
• -1, 0, 1, 2, 3, 4, 5
• -13, -12, -11, -10, -9, -8, -7
Assuming you had to solve a mathematical equation and you know the sum of two consecutive integers is 97. What are the two integers?
Let's assume that the first integer is x. We know from the description of a consecutive integer that the second must be x + 1. We can write an equation for this.
\(x + (x + 1) = 97 \rightarrow 2x + 1 = 97 \rightarrow 2x = 96 \rightarrow x = 48\)
This means the first integer is 48. And the second will be 48 + 1, which is 49.
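The same reasoning can be packaged as a short Python helper (a sketch; the function name is ours). Note that the sum of two consecutive integers is 2x + 1, which is always odd, so an even target has no solution.

```python
def consecutive_pair_with_sum(s):
    """Two consecutive integers x and x + 1 whose sum is s.
    Such a pair exists only when s is odd, since x + (x + 1) = 2x + 1."""
    if s % 2 == 0:
        raise ValueError("the sum of two consecutive integers is always odd")
    x = (s - 1) // 2
    return x, x + 1

print(consecutive_pair_with_sum(97))  # (48, 49)
```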
Odd consecutive integers
These are odd integers that follow each other yet differ by two. When x is an odd integer, then consecutive odd integers are x + 2, x + 4, x + 6. Examples are:
• {5, 7, 9, 11, 13...}
• {-7, -5, -3, -1, 1..}
Even consecutive integers
These are also even integers that follow each other yet differ by two. When x is an even integer, then consecutive even integers are x + 2, x + 4, x + 6. Examples are:
• {2, 4, 6, 8, 10, 12, ...}
• {-10, -8, -6, -4, ...}
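All three kinds of runs can be produced with one small Python helper (a sketch; step 1 gives plain consecutive integers, and step 2 gives consecutive odd or even integers depending on whether the start is odd or even):

```python
def consecutive(start, count, step=1):
    """A run of `count` integers beginning at `start`, spaced `step` apart."""
    return [start + i * step for i in range(count)]

print(consecutive(-5, 8))    # [-5, -4, -3, -2, -1, 0, 1, 2]
print(consecutive(5, 5, 2))  # odd:  [5, 7, 9, 11, 13]
print(consecutive(2, 6, 2))  # even: [2, 4, 6, 8, 10, 12]
```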
Integer rules for mathematical operations
It's useful to learn the rules for integers in mathematical operations.
• Adding two positive integers will always give you a positive integer.
• Adding two negative integers will always give you a negative integer.
• Adding one positive and one negative integer will give you:
□ A positive number if the positive integer is greater
□ A negative number if the negative integer is greater
• The product of a positive integer and a negative integer will always give you a negative integer.
• The product of two positive integers will always be a positive-valued integer.
• The product of two negative integers will always be a positive integer.
• Dividing two positive integers will always give you a positive value.
• Dividing two negative integers will give a positive value.
• Dividing a negative integer by a positive integer will give you a negative value, and the opposite applies too.
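The sign rules above can be verified directly in Python (the example values are our own; the divisions use operands that divide evenly so the quotients are whole):

```python
# Addition
assert 3 + 5 == 8      # positive + positive is positive
assert -3 + -5 == -8   # negative + negative is negative
assert -7 + 5 == -2    # the larger-magnitude negative wins
assert 7 + -5 == 2     # the larger-magnitude positive wins

# Multiplication
assert 4 * -6 == -24   # positive * negative is negative
assert 4 * 6 == 24     # positive * positive is positive
assert -4 * -6 == 24   # negative * negative is positive

# Division
assert 12 / 4 == 3     # positive / positive is positive
assert -12 / -4 == 3   # negative / negative is positive
assert -12 / 4 == -3   # mixed signs give a negative value
print("all sign rules hold")
```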
Adding and subtracting integers
Let's take a few examples to get familiar with these operations.
Sam owes his friend Frank $5. He then borrows an additional $3; how much will he owe in all?
This is quite simple. We add both and know he owes $8.
However, this can be expressed mathematically as - $5 + (- $3) = - $8. This can in turn be written as: - $5 - $3 = - $8
This can be confusing – using a number line makes it much easier.
Number line expressing integer additions
Using your first figure as a reference point, move three steps back on the integer number line. Whilst positive values move right (forward), negative ones move left (backward). And with our example,
we have -8 as our answer again.
Let's say Sam eventually pays back $4 out of the $8 he owes. How much is left to pay?
This is another simple calculation. Intuitively, we know that the answer is $4.
However, we can write this mathematically as - $ 8 + $ 4 = - $ 4, as well as draw a number line again.
Using your first figure as a reference, move four steps forward on the integer number line. This shows that -4 is our answer.
You might be presented with an equation like \(-3 - (-6) = x\).
When two negative signs meet as they do in this equation, they both become positive.
So we can have \(-3 + 6 = x \rightarrow x = 3\)
Multiplying and dividing integers
Let's look at examples that prove the rule of multiplication.
What is the product of -3 and 7?
\(-3 \cdot 7 = -21\)
Remember – the product of a positive and a negative integer will be a negative one.
What is the product of 5 and 4?
\(5 \cdot 4 = 20\)
As we mentioned the product of two positive integers, will be a positive one, in this case 20.
What is the product of -6 and -8?
\(-6 \cdot -8 = 48\)
Divide \(\frac{16}{8} = 2\)
Remember, dividing two positive integers will give you a positive integer.
Divide \(\frac{-28}{-4} = 7\)
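Division is where the integers fail to be closed, and a short Python check makes the behaviour concrete. Note that when the fraction is discarded there are two common conventions, flooring and truncation, which differ for negative operands:

```python
# True division of two integers can leave the integers entirely:
print(5 / 2)        # 2.5 -- not an integer
# "Integer division" discards the fractional part. Python's // floors,
# while int() truncates toward zero; they differ for negative operands.
print(5 // 2)       # 2
print(-5 // 2)      # -3  (floored)
print(int(-5 / 2))  # -2  (truncated)
```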
Integers - Key takeaways
• Integers are whole numbers that are either positive, zero, or negative.
• The result of adding, subtracting, or multiplying integers is always an integer.
• Consecutive integers are integer numbers that follow each other in a sequence or in order without gaps.
• A set of integers is denoted by Z.
• You cannot always have an integer when two integers are divided.
Frequently Asked Questions about Integers
What are integers?
Integers are whole numbers that can be positive, negative, or zero.
What is integer division?
In integer division, the fraction is discarded.
What are consecutive integers?
Consecutive integers are integer numbers that follow each other in a sequence or in order without gaps.
What is the difference between whole numbers vs integers?
Whole numbers are a set of natural numbers that start with zero while integers are a set of positive and negative natural numbers including zero.
What are five examples of integers?
| {"url":"https://www.studysmarter.co.uk/explanations/math/pure-maths/integers/","timestamp":"2024-11-10T10:50:01Z","content_type":"text/html","content_length":"642956","record_id":"<urn:uuid:7715f24d-21bd-49fe-9ce8-17902aa47d1b>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00608.warc.gz"} |
Southwest Center for Arithmetic Geometry
The Southwest Center for Arithmetic Geometry
Arizona Winter School 2001
Course Descriptions and Notes
□ Lecture notes:
□ Project on computing the U operator on overconvergent 2-adic modular forms of level 1 and even weight: dvi, ps, pdf
□ Buzzard's students' write-up of their lecture: dvi, ps, pdf
□ Project description: html
□ Preliminary lecture notes: dvi, ps, pdf
□ More detailed notes by Mazur's students: dvi, ps, pdf
□ Project description: dvi, ps, pdf
□ Notes on Kato-Siegel functions, used in Mazur's students' lectures: dvi, ps, pdf
□ A paper by Tom Weston on The modular curves X[0](11) and X[1](11) : ps, pdf
□ Another paper by Tom Weston, on The Euler system of Heegner points : dvi, ps, pdf
□ A repository at Harvard of documents related to Mazur's lectures and project
□ Course outline and project description: dvi, ps, pdf
□ Course description and projects: html
□ References: dvi, ps, pdf | {"url":"https://swc-math.github.io/aws/2001/notes.html","timestamp":"2024-11-13T03:11:08Z","content_type":"text/html","content_length":"8703","record_id":"<urn:uuid:d6eee42e-3cf1-4211-853d-fbe49d40dd83>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00836.warc.gz"} |
OpenEnergyMonitor current transformer (CT) value calculators
These calculators are designed to help you determine the values of the components you'll need for the current transformer(s) used in the Open Energy Monitor project. Go check it out!
Please note these calculators do not take into account core permeability and core AL value. Use at your own risk.
Update November 2017: Fixed formulas displayed below each calculator to halve the system voltage as was originally intended.
For energy monitors based on 5V arduinos (mostly older ones), enter 5 above
Burden Resistor
Calculates the value of the burden resistor to use across a current transformer
Formula: burden_resistor = (system_voltage / 2.0) / ((I_RMS * 1.414) / ct_turns)
Max current
Calculates the maximum current you can reasonably expect to measure using a current transformer
Formula: I_RMS = (((system_voltage / 2.0) / burden_resistor) * Turns) * 0.707
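All three calculators on this page can be reproduced with a few lines of Python. Like the calculators themselves, this sketch ignores core permeability and AL value; the example numbers (a 3.3 V system, 100 A maximum, 2000-turn CT) are our own illustration:

```python
def burden_resistor(system_voltage, i_rms, ct_turns):
    """Burden resistor (ohms) to place across the CT secondary."""
    return (system_voltage / 2.0) / ((i_rms * 1.414) / ct_turns)

def max_current(system_voltage, burden, ct_turns):
    """Maximum measurable RMS primary current (amps)."""
    return ((system_voltage / 2.0) / burden) * ct_turns * 0.707

def ct_turns(system_voltage, i_rms, burden):
    """Estimated number of secondary turns needed."""
    return (i_rms * 1.414) / ((system_voltage / 2.0) / burden)

r = burden_resistor(3.3, 100, 2000)
print(round(r, 1))                       # 23.3 ohms
print(round(max_current(3.3, r, 2000)))  # 100 A (round trip, to rounding)
```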
Current Sensor Turns
Calculates an estimate of the number of turns a current transformer would need
Formula: turns = (I_RMS * 1.414) / ((system_voltage / 2.0) / burden_resistor) | {"url":"https://tyler.anairo.com/projects/open-energy-monitor-calculator","timestamp":"2024-11-06T11:31:16Z","content_type":"text/html","content_length":"5214","record_id":"<urn:uuid:b2627a5e-779f-4d91-a91a-81747548f603>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00008.warc.gz"} |
A Full Guide On Z Test: Formula & Example - FinanceNInsurance
A Full Guide On Z Test: Formula & Example
What Is a Z Test?
The Z test is a statistical method used to analyse data, with applications in science, business, and numerous other disciplines.
The definition of the Z test tells us that it is used to test a claim about a population mean, or to test whether the means of two populations differ, when the variance is known and the
sample size is large. While analysing the data, we assume that it is normally distributed and that the standard deviation is given or known.
How Z-Tests Work
Z tests make use of the normal distribution to resolve problems involving large samples. The Z test and the t test are closely related, but the t test is performed with a smaller
sample size and when the standard deviation is not known. In the Z test, the sample size is large and the standard deviation is known.
Requirements of Z Test:
While analysing and calculating the Z test, you require the following:
• Running a Z test on your data requires five steps:
• You need to demonstrate up the null hypothesis as well as the alternate hypothesis.
• The alpha level should be selected.
• You need to look up for the critical value of z in the table of z.
• You will need to Calculate up the z test statistic
• You will then Compare up the test statistic with the critical z value and then take up the decision to accept or to reject the given null hypothesis.
Essential condition to Z Test:
You need the following conditions to do or run a Z test, which are as follows:
• The data should be independent from each other.
• The sample size of the data should be greater than 30. If not, t test is recommended for smaller sample size.
• The sample should be equal.
• There should be normal distributed data for the sample.
• The data should be selected arbitrarily.
Z Test Formula
According to the Z test formula, we subtract the population mean from the observed value (or sample mean) and divide the result by the population standard deviation (or, for a sample
mean, by the population standard deviation divided by the square root of the number of observations). That is,
Formula 1:
Z = (x – μ) / ơ
where in the above formula,
• x = any value from the population
• μ = mean of the population
□ ơ = populations standard deviation
Formula 2:
Z = (x – x_mean) / s
Where in the above Z test formula,
• x = any value from the sample
• x mean = mean of the sample
• s = standard deviation of the sample
Z test Examples:
Example 1: Assume that the mean score on a company's upgradation test is 75 and the standard deviation is 15. Now, we need to calculate the z-test score of
ABC, who scored 90 on the upgradation test.
So, we have the following data with us as:
• The mean of the population i.e. μ= 75
• Standard deviation of the Population i.e. ơ = 15
Therefore, by using the Z test formula, we calculate Z test as:
Z = (x – μ) / ơ
Z = (90 – 75) / 15
Z = 1
Here, we will consult the Z table, look up the value, and see that 84.13% of test takers scored less than ABC.
Calculation of Z test in excel:
Now, the above example is calculated in Excel as follows (column A holds the labels and column B the values, with the header in row 1 and the data starting in row 2):

| A | B |
| mean of the population | 75 |
| standard deviation of the population | 15 |
| any value from the population | 90 |
| Z test result | =(B4-B2)/B3 |

Applying or using the formula =(B4-B2)/B3 for calculating the Z test, we get the following result:

| A | B |
| mean of the population | 75 |
| standard deviation of the population | 15 |
| any value from the population | 90 |
| Z test result | 1 |
Example 2: Taking up or assuming the values as follows:
Population Mean (μ): 4
Population Variance (σ²): 3
Sample Mean (M): 2
Sample Size (N): 40
Calculate the value of z and p for the single Z sample.
Z Score or Z test Calculation using the Z test formula
Z = (M – μ) / √(σ² / n)
Z = (2 – 4) / √(3 / 40)
Z = -2 / 0.27386
Z = -7.30297
Result: The value obtained is z = -7.30297, and the value obtained of p is < .00001. Therefore, the result is significant at p < .05.
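Example 2 can be reproduced in Python using the numbers from the worked calculation (M = 2, μ = 4, σ² = 3, n = 40). The two-tailed p-value here is a sketch computed from the standard normal via math.erf:

```python
import math

def z_statistic(sample_mean, pop_mean, pop_var, n):
    return (sample_mean - pop_mean) / math.sqrt(pop_var / n)

def two_tailed_p(z):
    """P(|Z| >= |z|) for a standard normal variable."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

z = z_statistic(2, 4, 3, 40)
print(round(z, 5))                # -7.30297
print(two_tailed_p(z) < 0.00001)  # True: significant at p < .05
```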
One Proportion Z-Test
A one-proportion Z test (or one-sample z test) takes a single sample at a time, tests the hypothesis, and interprets the result. It is used to analyse whether a population proportion
differs from the hypothesised value.
Here, you should do the following:
• Analyse the Z test hypotheses.
• Consider the following table to analyse the hypothesis.
• You can choose the value lying between 0 and 1.
• Calculate test statistic by using the Z test formula as: Z = (x – μ) / ơ
• Calculate the Z test and the P-value.
• Assess as well as analyse the null hypothesis.
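The steps above can be sketched in Python for the one-proportion case. The function name and the coin-toss numbers are our own illustration, not from the text:

```python
import math

def one_proportion_z(successes, n, p0):
    """z statistic for H0: population proportion = p0.
    The normal approximation is reasonable when n*p0 and n*(1 - p0)
    are both fairly large."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

# 61 heads in 100 tosses of a supposedly fair coin (p0 = 0.5):
z = one_proportion_z(61, 100, 0.5)
print(round(z, 2))  # 2.2 -- compare against the critical z for your alpha
```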
Two Proportion Z-Test
A two-proportion Z test (or two-sample z test) takes two samples at a time, tests the hypothesis, and interprets the result. Here, you compare the two proportions to see if they are
alike or not.
Z test calculator:
As we know, the Z test can be calculated for one sample as well as for two samples, and a calculator can be used for either case: the Z test for one population or the Z test for two populations.
When to use z test?
The Z test should be used in the following conditions:
• When there is need to compare two data or sample.
• The size of sample is larger than 30.
• The normal distribution of data is there or exists.
• The standard deviation is known to us.
It should be noted that the Z test should only be used when the sample size is large (greater than 30); otherwise you should opt for a t test. It should also be used only with normally
distributed data and when the standard deviation is known.
ANSI Common Lisp 12 Numbers 12.2 Dictionary of Numbers
Class Precedence List:
real, number, t
The type real includes all numbers that represent mathematical real numbers, though there are mathematical real numbers (e.g., irrational numbers) that do not have an exact representation in
Common Lisp. Only reals can be ordered using the <, >, <=, and >= functions.
The types rational and float are disjoint subtypes of type real.
Compound Type Specifier Kind:
Compound Type Specifier Syntax:
(real [lower-limit [upper-limit]])
Compound Type Specifier Arguments:
lower-limit, upper-limit - interval designators for type real. The defaults for each of lower-limit and upper-limit is the symbol *.
Compound Type Specifier Description:
This denotes the reals on the interval described by lower-limit and upper-limit.
Allegro CL Implementation Details: | {"url":"https://franz.com/support/documentation/ansicl/dictentr/real.htm","timestamp":"2024-11-14T21:35:33Z","content_type":"text/html","content_length":"7691","record_id":"<urn:uuid:fbaa25e1-ce3f-46b5-ba75-3664b0e431ad>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00440.warc.gz"} |
A line passes through (4 ,7 ) and (6 ,4 ). A second line passes through (3 ,5 ). What is one other point that the second line may pass through if it is parallel to the first line? | HIX Tutor
A line passes through #(4 ,7 )# and #(6 ,4 )#. A second line passes through #(3 ,5 )#. What is one other point that the second line may pass through if it is parallel to the first line?
Answer 1
#color(crimson)("One other point on the second line is " (5, 2))#
$\text{Slope of line 1 } = m = \frac{{y}_{2} - {y}_{1}}{{x}_{2} - {x}_{1}} = \frac{4 - 7}{6 - 4} = - \frac{3}{2}$
A parallel line has the same slope, so starting from $(3, 5)$ and stepping $+2$ in $x$ and $-3$ in $y$ gives the point $(5, 2)$.
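Using the slope computed above, any point reached from (3, 5) by a step of dx in x and slope·dx in y lies on the parallel line. A small Python sketch (the function name is ours; it assumes the first line is not vertical):

```python
def parallel_point(p1, p2, q, dx=2):
    """A point on the line through q parallel to the line through p1 and p2."""
    slope = (p2[1] - p1[1]) / (p2[0] - p1[0])  # same slope for parallel lines
    return (q[0] + dx, q[1] + slope * dx)

print(parallel_point((4, 7), (6, 4), (3, 5)))  # (5, 2.0)
```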
| {"url":"https://tutor.hix.ai/question/a-line-passes-through-4-7-and-6-4-a-second-line-passes-through-3-5-what-is-one-o-8f9afa4348","timestamp":"2024-11-08T05:01:34Z","content_type":"text/html","content_length":"575746","record_id":"<urn:uuid:5d0b6f12-b2e1-420e-9057-63c164daea36>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00540.warc.gz"} |
How Do You Graph a Quadratic Equation in Intercept Form?
Graphing a quadratic equation in intercept form is a breeze! All the information you need is in the equation. You just need to pick it out and use it. Follow along with this tutorial to see how to
take an equation in intercept form and use it to find the x-intercepts, vertex, and axis of symmetry. | {"url":"https://virtualnerd.com/common-core/hsf-functions/HSF-IF-interpreting-functions/C/7/7a/graph-intercept-form","timestamp":"2024-11-10T03:19:44Z","content_type":"text/html","content_length":"34404","record_id":"<urn:uuid:364cb013-e4e2-4820-b8f1-e18a8bd0c26b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00053.warc.gz"} |
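As a sketch of "picking the information out" of intercept form y = a(x − p)(x − q): the x-intercepts are p and q, the axis of symmetry is x = (p + q)/2, and the vertex sits on that axis. The function name below is our own illustration:

```python
def graph_features(a, p, q):
    """Key graphing features of y = a(x - p)(x - q)."""
    axis = (p + q) / 2                            # midway between the x-intercepts
    vertex = (axis, a * (axis - p) * (axis - q))  # plug the axis back in for y
    return {"x_intercepts": (p, q), "axis_of_symmetry": axis, "vertex": vertex}

features = graph_features(1, 2, 6)   # y = (x - 2)(x - 6)
print(features["x_intercepts"])      # (2, 6)
print(features["axis_of_symmetry"])  # 4.0
print(features["vertex"])            # (4.0, -4.0)
```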
4.4 Newton's Third Law of Motion: Symmetry in Forces
Learning Objectives
Learning Objectives
By the end of this section, you will be able to do the following:
• Understand Newton's third law of motion
• Apply Newton's third law to define systems and solve problems of motion
The information presented in this section supports the following AP® learning objectives and science practices:
• 3.A.2.1 The student is able to represent forces in diagrams or mathematically using appropriately labeled vectors with magnitude, direction, and units during the analysis of a situation. (S.P.
• 3.A.3.1 The student is able to analyze a scenario and make claims—develop arguments, justify assertions—about the forces exerted on an object by other objects for different types of forces or
components of forces. (S.P. 6.4, 7.2)
• 3.A.3.3 The student is able to describe a force as an interaction between two objects and identify both objects for any force. (S.P. 1.4)
• 3.A.4.1 The student is able to construct explanations of physical situations involving the interaction of bodies using Newton's third law and the representation of action-reaction pairs of
forces. (S.P. 1.4, 6.2)
• 3.A.4.2 The student is able to use Newton's third law to make claims and predictions about the action-reaction pairs of forces when two objects interact. (S.P. 6.4, 7.2)
• 3.A.4.3 The student is able to analyze situations involving interactions among several objects by using free-body diagrams that include the application of Newton's third law to identify forces.
(S.P. 1.4)
• 3.B.2.1 The student is able to create and use free-body diagrams to analyze physical situations to solve problems with motion qualitatively and quantitatively. (S.P. 1.1, 1.4, 2.2)
• 4.A.2.1 The student is able to make predictions about the motion of a system based on the fact that acceleration is equal to the change in velocity per unit time, and velocity is equal to the
change in position per unit time. (S.P. 6.4)
• 4.A.2.2 The student is able to evaluate using given data whether all the forces on a system or whether all the parts of a system have been identified. (S.P. 5.3)
• 4.A.3.1 The student is able to apply Newton's second law to systems to calculate the change in the center-of-mass velocity when an external force is exerted on the system. (S.P. 2.2)
‘Whether the stone hits the pitcher or the pitcher hits the stone, it’s going to be bad for the pitcher.’ This is exactly what happens whenever one body exerts a force on another—the first also
experiences a force equal in magnitude and opposite in direction. Numerous common experiences, such as stubbing a toe or throwing a ball, confirm this. It is precisely stated in Newton’s third law of
Newton’s Third Law of Motion
Whenever one body exerts a force on a second body, the first body experiences a force that is equal in magnitude and opposite in direction to the force that it exerts.
This law represents a certain symmetry in nature: Forces always occur in pairs, and one body cannot exert a force on another without experiencing a force itself. We sometimes refer to this law
loosely as action-reaction, where the force exerted is the action and the force experienced as a consequence is the reaction. Newton’s third law has practical uses in analyzing the origin of forces
and understanding which forces are external to a system.
We can readily see Newton’s third law at work by taking a look at how people move about. Consider a swimmer pushing off from the side of a pool, as illustrated in Figure 4.9. She pushes against the
pool wall with her feet and accelerates in the direction opposite to that of her push. The wall has exerted an equal and opposite force back on the swimmer. You might think that two equal and
opposite forces would cancel, but they do not because they act on different systems. In this case, there are two systems that we could investigate: the swimmer or the wall. If we select the swimmer
to be the system of interest, as in the figure, then $F_{\text{wall on feet}}$ is an external force on this system and affects its motion. The swimmer moves in the direction of $F_{\text{wall on feet}}$. In contrast, the force $F_{\text{feet on wall}}$ acts on the wall and not on our system of interest. Thus $F_{\text{feet on wall}}$ does not directly affect the motion of the system and does not cancel $F_{\text{wall on feet}}$. Note that the swimmer pushes in the direction opposite to that in which she wishes to move. The reaction to her push is thus in the desired direction.
Similarly, when a person stands on Earth, Earth exerts a force on the person, pulling the person toward Earth. As stated by Newton’s third law of motion, the person also exerts a force that is equal
in magnitude, but opposite in direction, pulling Earth up toward the person. Since the mass of Earth is so great, however, and $F = ma$, the acceleration of Earth toward the person is not noticeable.
Other examples of Newton’s third law are easy to find. As a professor paces in front of a whiteboard, she exerts a force backward on the floor. The floor exerts a reaction force forward on the
professor that causes her to accelerate forward. Similarly, a car accelerates because the ground pushes forward on the drive wheels in reaction to the drive wheels pushing backward on the ground. You
can see evidence of the wheels pushing backward when tires spin on a gravel road and throw rocks backward. In another example, rockets move forward by expelling gas backward at high velocity. This
means the rocket exerts a large backward force on the gas in the rocket combustion chamber, and the gas therefore exerts a large reaction force forward on the rocket. This reaction force is called
thrust. It is a common misconception that rockets propel themselves by pushing on the ground or on the air behind them. They actually work better in a vacuum, where they can more readily expel the
exhaust gases. Helicopters similarly create lift by pushing air down, thereby experiencing an upward reaction force. Birds and airplanes also fly by exerting force on air in a direction opposite to
that of whatever force they need. For example, the wings of a bird force air downward and backward in order to get lift and move forward. An octopus propels itself in the water by ejecting water
through a funnel from its body, similar to a jet ski.
Example 4.3 Getting Up To Speed: Choosing the Correct System
A physics professor pushes a cart of demonstration equipment to a lecture hall, as seen in Figure 4.10. Her mass is 65.0 kg, the cart’s is 12.0 kg, and the equipment’s is 7.0 kg. Calculate the
acceleration produced when the professor exerts a backward force of 150 N on the floor. All forces opposing the motion, such as friction on the cart’s wheels and air resistance, total 24.0 N.
Since they accelerate as a unit, we define the system to be the professor, cart, and equipment. This is System 1 in Figure 4.10. The professor pushes backward with a force $F_{\text{foot}}$ of 150 N. According to Newton's third law, the floor exerts a forward reaction force $F_{\text{floor}}$ of 150 N on System 1. Because all motion is horizontal, we can assume there is no net force in the vertical direction. The problem is therefore one-dimensional along the horizontal direction. As noted, $f$ opposes the motion and is thus in the opposite direction of $F_{\text{floor}}$. Note that we do not include the forces $F_{\text{prof}}$ or $F_{\text{cart}}$ because these are internal forces, and we do not include $F_{\text{foot}}$ because it acts on the floor, not on the system. There are no other significant forces acting on System 1. If the net external force can be found from all this information, we can use Newton's second law to find the acceleration as requested. See the free-body diagram in the figure.
Newton’s second law is given by
4.18 $a = \frac{F_{\text{net}}}{m}$.
The net external force on System 1 is deduced from Figure 4.10 and the discussion above to be
4.19 $F_{\text{net}} = F_{\text{floor}} - f = 150\ \text{N} - 24.0\ \text{N} = 126\ \text{N}$.
The mass of System 1 is
4.20 $m = (65.0 + 12.0 + 7.0)\ \text{kg} = 84\ \text{kg}$.
These values of $F_{\text{net}}$ and $m$ produce an acceleration of
4.21 $a = \frac{F_{\text{net}}}{m} = \frac{126\ \text{N}}{84\ \text{kg}} = 1.5\ \text{m/s}^2$.
None of the forces between components of System 1, such as between the professor’s hands and the cart, contribute to the net external force because they are internal to System 1. Another way to look
at this is to note that forces between components of a system cancel because they are equal in magnitude and opposite in direction. For example, the force exerted by the professor on the cart results
in an equal and opposite force back on her. In this case both forces act on the same system and, therefore, cancel. Thus internal forces between components of a system cancel. Choosing System 1 was
crucial to solving this problem.
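As a quick check on the arithmetic, the System 1 calculation can be reproduced in a few lines of Python (a sketch; the values are taken directly from the example):

```python
# Example 4.3: acceleration of System 1 (professor + cart + equipment).
F_floor = 150.0              # N, forward reaction force from the floor
f = 24.0                     # N, total friction and air resistance opposing motion
masses = [65.0, 12.0, 7.0]   # kg: professor, cart, equipment

F_net = F_floor - f          # net external force on System 1
m = sum(masses)              # total mass of System 1
a = F_net / m                # Newton's second law: a = F_net / m

print(f"F_net = {F_net} N, m = {m} kg, a = {a} m/s^2")
# → F_net = 126.0 N, m = 84.0 kg, a = 1.5 m/s^2
```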
Example 4.4 Force on the Cart—Choosing a New System
Calculate the force the professor exerts on the cart in Figure 4.10 using data from the previous example if needed.
If we now define the system of interest to be the cart plus equipment (System 2 in Figure 4.10), then the net external force on System 2 is the force the professor exerts on the cart minus friction.
The force she exerts on the cart, $F_{\text{prof}}$, is an external force acting on System 2. $F_{\text{prof}}$ was internal to System 1, but it is external to System 2 and will enter Newton's second law for System 2.
Newton's second law can be used to find $F_{\text{prof}}$. Starting with
4.22 $a = \frac{F_{\text{net}}}{m}$
and noting that the magnitude of the net external force on System 2 is
4.23 $F_{\text{net}} = F_{\text{prof}} - f$,
we solve for $F_{\text{prof}}$, the desired quantity:
4.24 $F_{\text{prof}} = F_{\text{net}} + f$.
The value of $f$ is given, so we must calculate the net force $F_{\text{net}}$. That can be done since both the acceleration and mass of System 2 are known. Using Newton's second law we see that
4.25 $F_{\text{net}} = ma$,
where the mass of System 2 is 19.0 kg ($m$ = 12.0 kg + 7.0 kg) and its acceleration was found to be $a = 1.5\ \text{m/s}^2$ in the previous example. Thus,
4.26 $F_{\text{net}} = ma$,
4.27 $F_{\text{net}} = (19.0\ \text{kg})(1.5\ \text{m/s}^2) = 29\ \text{N}$.
Now we can find the desired force:
4.28 $F_{\text{prof}} = F_{\text{net}} + f$,
4.29 $F_{\text{prof}} = 29\ \text{N} + 24.0\ \text{N} = 53\ \text{N}$.
It is interesting that this force is significantly less than the 150-N force the professor exerted backward on the floor. Not all of that 150-N force is transmitted to the cart; some of it
accelerates the professor.
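The System 2 numbers can likewise be sketched in Python (values from the example; note the exact product is 52.5 N, which the text rounds to 53 N):

```python
# Example 4.4: force the professor exerts on the cart (System 2).
m_cart, m_equipment = 12.0, 7.0   # kg
a = 1.5                           # m/s^2, found in Example 4.3
f = 24.0                          # N, friction opposing the cart's motion

m2 = m_cart + m_equipment         # mass of System 2: 19.0 kg
F_net = m2 * a                    # Newton's second law: 28.5 N
F_prof = F_net + f                # solve F_net = F_prof - f for F_prof

print(f"F_prof = {F_prof} N")     # exact value 52.5 N; the text rounds to 53 N
```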
The choice of a system is an important analytical step both in solving problems and in thoroughly understanding the physics of the situation, which is not necessarily the same thing.
dmpFinder for other objects
I have been looking at dmpFinder function, and I was wondering if the function could work with beta values (as a matrix) and phenotypes (as a vector, for example race) without having the methylset or
methylation object. My only worry is how the function would be able to match the correct phenotypes to the correct sample ID beta values. Since the sample IDs are columns in the beta matrix, I am
not sure what to do. Any suggestions would be appreciated. Thank you!
Entering edit mode
Hi James, thank you a lot for responding. Hmm... so do you think, if I have the matrix average_beta below:
and the phenotype dataset below pheno
where both IDs are matching in column and row, would dmpFinder give me a correct answer? Thank you! Neyousha
Entering edit mode
Yes, but you should probably use M-values instead of betas (convert using logit2).
However, dmpFinder is just a way to allow people to use lmFit from limma to fit a model without having to figure out how to use lmFit. If you care to fit more than just one coefficient you will have
to use lmFit directly.
Entering edit mode
Thank you very much James, would you mind explaining why m-values instead of beta values? I have only beta_values available at this point.
Entering edit mode
You are better off using M-values instead of beta values because you are fitting a conventional linear regression, in which case it's better if the underlying distribution of your data is at least
'hump shaped'. This will be true for M-values, but not necessarily for beta values which are strictly between 0-1 and tend to be clustered at either extreme. If you have large enough N you can assume
the central limit theorem will be in effect, in which case the underlying distribution doesn't matter (but the less hump-shaped, the larger N you need for the CLT to kick in).
It's just easier to defend using M-values because they are 'normal-ish' whereas beta values are not. If you really want to use betas, you might consider using DSS which models the data assuming a
beta distribution.
But anyway, the logit2 function will convert your betas to M-values, so you can have M-values if you so desire.
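For readers who want to see the conversion concretely: minfi's logit2 is an R function, but the underlying transform is just a base-2 logit. A hedged Python sketch of the same formula (numpy assumed; the function name and the eps clipping are my choices, not minfi's API):

```python
import numpy as np

def logit2(beta, eps=1e-6):
    """Convert methylation beta values (in (0, 1)) to M-values.

    M = log2(beta / (1 - beta)); eps clips betas away from 0 and 1
    so extreme values don't produce infinite M-values.
    """
    beta = np.clip(np.asarray(beta, dtype=float), eps, 1 - eps)
    return np.log2(beta / (1 - beta))

# A beta of 0.5 (50% methylated) maps to M = 0; betas above 0.5
# give positive M-values, betas below 0.5 give negative ones.
print(logit2([0.2, 0.5, 0.8]))
```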
Error estimates for SUPG-stabilised Dynamical Low Rank Approximations
Fabio Zoccolan, Gianluigi Rozza
In this paper we will consider distributed Linear-Quadratic Optimal Control Problems dealing with Advection-Diffusion PDEs for high values of the Peclet number. In this situation, computational
instabilities occur, both for steady and unsteady cases. A Str ...
Walter De Gruyter GmbH
Limit of sin(x)/x as x approaches 0 | AI Math Solver
Limit of sin(x)/x as x approaches 0
Published on October 19, 2024
The limit of sin(x)/x as x approaches 0 is 1, determined using L'Hôpital's rule.
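Concretely, since both $\sin x$ and $x$ vanish at $x = 0$, one application of L'Hôpital's rule (differentiating numerator and denominator) gives

$$\lim_{x \to 0} \frac{\sin x}{x} = \lim_{x \to 0} \frac{\cos x}{1} = \cos 0 = 1.$$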
Calculate the limit of g(x) = sin(x) / x as x approaches 0
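As a quick numerical sanity check (not part of the original page), evaluating sin(x)/x for shrinking x shows the ratio approaching 1:

```python
import math

# The ratio sin(x)/x gets closer to 1 as x shrinks toward 0.
for x in (0.1, 0.01, 0.001):
    print(x, math.sin(x) / x)
```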
RD Sharma Class 6 Maths Solutions Chapter 19 - Geometrical Constructions
RD Sharma Solutions for Class 6 Maths Chapter 19 - Geometrical Constructions - Free PDF Download
RD Sharma Solutions of Class 6 Maths Chapter 19 can be downloaded here. RD Sharma Solutions Class 6 Geometrical Constructions are available free of charge to help students develop their skills and
topic knowledge. Get a free PDF for RD Sharma Class 6 Maths Chapter 19 from the links given. Students learn about the basic and simple constructions in RD Sharma Solutions of Class 6 Maths Chapter 19
using a ruler and a pair of compasses. The validity of measurements can be verified by the use of a scale and a protractor.
FAQs on RD Sharma Class 6 Maths Solutions Chapter 19 - Geometrical Constructions
1. What is the intention of the geometric construction?
Geometric construction allows you to create shapes, angles, and polygons using the simplest materials. You'll need paper, a sharpened pencil, a straightedge to guide the lines, and a drawing compass to swing arcs and scribe circles.
2. Who found geometric construction?
In 13 books (chapters), Euclid (c. 325-265 BC) of Alexandria, presumably a pupil at the Academy founded by Plato, wrote a treatise entitled The Elements of Geometry, in which he introduced geometry in an ideal axiomatic form, known as Euclidean geometry.
3. How can algebra make use of construction?
Mathematics is a central aspect of the field of engineering and is widely used in science as well. In architecture, tradespeople use mathematical principles for constructing roofs or buildings, such
as calculation, arithmetic and trigonometry, plasterers use ratios for combining compounds, plumbers use hydraulics for heating systems, etc.
4. What is Geometric construction?
Geometric construction allows you to create shapes, angles and polygons using the simplest materials. You will need paper, a sharpened pencil, a straightedge to guide the lines and a drawing compass to swing arcs and scribe circles. Vedantu provides all the solutions of RD Sharma Class 6 Chapter 19, made by the expert team while keeping in view the prescribed syllabus and relevance. All the solution PDFs are available free of cost at Vedantu for the students of Class 6.
5. Who found Geometric construction?
In 13 books (chapters), Euclid (c. 325-265 BC) of Alexandria, a student at the Academy founded by Plato, wrote a treatise entitled The Elements of Geometry, in which he introduced geometry in an ideal axiomatic form, known as Euclidean geometry. The Elements began with definitions of terms, fundamental geometric principles (also called axioms or postulates) and general quantitative principles (also called common notions), from which all the rest of geometry could be logically deduced.
6. How can algebra make use of construction?
Mathematics is a central aspect of the field of engineering and is widely used in science as well. In architecture, tradespeople use mathematical principles for constructing roofs or buildings, such as calculation, arithmetic and trigonometry. Plasterers use ratios for combining compounds, whereas plumbers use hydraulics for heating systems, etc. Geometry, algebra and trigonometry are commonly used by architects in making architectural designs. Architects also foresee the probability of issues that they might face in the future. Thus, algebra and geometry are interrelated to each other.
7. Is it sufficient to study RD Sharma for Geometrical Construction?
RD Sharma covers a vast number of topics in a detailed and organized manner. A student just needs to practice and practice. Solving questions in RD Sharma will be very beneficial for the students as
practice will not only free their hands but will also increase their interest. Geometrical construction is a very significant section of Mathematics and it needs to be done with utmost sincerity. Go
to Vedantu in case of any doubt. Students can solve all questions from RD Sharma for geometrical construction. All the solutions are available on Vedantu's official website and mobile app for free.
8. Will Geometric construction be useful in future?
The answer to this is Yes. Geometrical construction is used in everyday life. Geometry is used in various day to day activities like measuring spoons and scales, cooking and baking etc. From
sketching to calculating distances, Geometry is a must. It is absolutely true that Geometry affects us even in the most basic form of life. Geometry helps us in understanding the specific phenomena
and in uplifting and upgrading the quality of life. Students can practice questions based on the chapter from RD Sharma for Class 6.
Lombard Rate.
The Lombard rate is the rate of interest charged by the Bank of England on collateralised loans to banks and other financial institutions. The rate is named after the Italian region of Lombardy, where medieval Italian bankers charged a similar rate of interest on loans.
The Lombard rate is used as a policy tool by the Bank of England to influence the availability of credit in the economy. A higher Lombard rate makes it more expensive for banks to borrow money, which
in turn reduces the amount of credit available in the economy. A lower Lombard rate has the opposite effect.
The current Lombard rate is 0.75%.
What are 3 different methods of calculating interest?
The three primary methods of calculating interest are simple interest, compound interest, and amortized interest. Simple interest is calculated on the principal amount only, and it is not compounded. Compound interest is calculated on the principal plus any interest that has accumulated, and it is compounded at regular intervals. Amortized interest is calculated on the principal plus any interest that has accumulated, but it is paid out in equal installments over the life of the loan.
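The three methods can be contrasted with a short Python sketch (the dollar amounts are illustrative only; the formulas are the standard ones, and the amortized-payment helper assumes a fixed per-period rate):

```python
def simple_interest(principal, rate, periods):
    """Interest charged on the principal only, never compounded."""
    return principal * rate * periods

def compound_interest(principal, rate, periods):
    """Interest on principal plus previously accrued interest."""
    return principal * (1 + rate) ** periods - principal

def amortized_payment(principal, rate, periods):
    """Equal per-period installment that repays principal plus interest."""
    return principal * rate / (1 - (1 + rate) ** -periods)

# $1,000 at 5% per period for 2 periods:
print(round(simple_interest(1000, 0.05, 2), 2))    # 100.0
print(round(compound_interest(1000, 0.05, 2), 2))  # 102.5 (extra 2.5 is interest on interest)
# $1,000 repaid in 12 equal installments at 1% per period:
print(round(amortized_payment(1000, 0.01, 12), 2))  # 88.85
```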
What are the 3 types of compound interest?
There are 3 types of compound interest: simple, daily, and monthly.
1. Simple compound interest is when interest is paid only on the original principal amount.
2. Daily compound interest is when interest is paid on the principal amount and also on any interest that has accrued from previous days.
3. Monthly compound interest is when interest is paid on the principal amount and also on any interest that has accrued from previous months.
What is a rate and term option?
A rate and term option is a type of loan that allows the borrower to choose the interest rate and the term of the loan. This type of loan is typically used to refinance an existing loan.
Why is bank rate also called discount rate?
The bank rate is the rate at which commercial banks can borrow money from the central bank. This is also known as the discount rate. It is called the discount rate because the central bank lends to commercial banks at a rate that is lower than the rate at which those banks lend money to the general public.
What is bank rate vs repo rate?
The key difference between bank rate and repo rate is that
the bank rate is the interest rate at which commercial banks can borrow money from the central bank, whereas the repo rate is the interest rate at which the central bank provides short-term loans to
commercial banks.
Both the bank rate and repo rate are used by the central bank to control the money supply in the economy. A higher bank rate or repo rate makes borrowing more expensive for commercial banks, which
results in less money being available for lending. This, in turn, reduces the money supply in the economy and can help to control inflation.
Math Colloquia - Nonlocal generators of jump type Markov processes
Empirical observations have shown that for an adequate description of many random phenomena non-Gaussian processes are needed. The paths of these Markov processes necessarily have jumps. Their
generators are nonlocal operators which admit a representation as pseudo-differential operators with so-called negative definite symbols.
The talk gives an introduction to the relationship between jump processes and this non classical type of pseudo-differential operators. A particular focus will lie on different possibilities to
construct the process starting from a given symbol.
Multiplication Division Chart
Multiplication Division Chart - Free printable division charts, worksheets, and tables in PDF format. A multiplication chart is also called a times table: a chart with the answers to all of the multiplication problems that use the numbers 1 through 12. Reading down the 2s column, each entry just adds 2 to the last: 2 times 4 is 8, 2 times 5 is 10, and 2 times 6 is 12. Note that 2 times 4 is the same thing as 4 times 2. In this topic, we will multiply and divide whole numbers. Use these colorful division worksheets to help your child build confidence. For more ideas see math drills.
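A times table like the one described here is easy to generate programmatically; a small Python sketch (not part of the original worksheet page):

```python
# Build a 12x12 times table: table[i][j] holds (i+1) * (j+1).
table = [[row * col for col in range(1, 13)] for row in range(1, 13)]

# Print the chart with aligned columns.
for row in table:
    print(" ".join(f"{value:4d}" for value in row))

# The same chart answers division questions read in reverse:
# 12 divided by 2 is 6, because the 2s row shows 12 in the 6s column.
assert table[2 - 1][6 - 1] == 12
```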
Math-LIGHT problem posing by three experts with different fields of expertise: Why? What? and How?
The Math-LIGHT program is directed at promoting literacy-rich mathematical instruction in middle school. A team of designers with different types of expertise pose Math-Light problems. We perform
comparative analysis of problem-posing activities by experts with different types of expertise. We demonstrate that Activity Theory (Leontiev, 1978) is a powerful theoretical framework for the
analysis of the structure of problem posing activity. Framed by activity theory we ask “Why?” questions to understand the main goals of posing problems; “What?” questions are directed at the
characteristics of the PP process and PP products; and “How?” questions are aimed at identifying the tools used by the designers to fit the conditions in which the problems are implemented. We find
that the three designers’ problem-posing activities are complimentary and suggest that the cooperative problem posing process is essential for posing problems that integrate different perspectives
and thus allow more goals to be attained.
Bibliographical note
Publisher Copyright:
© 2024 Elsevier Inc.
• Activity theory
• Experts
• Problem posing
ASJC Scopus subject areas
• Mathematics (miscellaneous)
• Education
• Applied Mathematics
What is the relationship between impulse and work?
What is the relationship between impulse and work?
Impulse is a vector, so a negative impulse means the net force is in the negative direction. Likewise, a positive impulse means the net force is in the positive direction.
Common mistakes and misconceptions: impulse and work are easy to confuse. Impulse changes an object's momentum and is a vector quantity, while work changes an object's energy and is a scalar quantity.
What is the relationship between impulse and kinetic energy?
Both impulse and kinetic energy depend on the velocity of the system they act on. Momentum is connected to impulse because an impulse acts whenever momentum changes. Velocity is therefore the quantity that binds impulse and kinetic energy together.
How do you find impulse acting on an object?
The impulse experienced by the object equals the change in momentum of the object. In equation form, F • t = m • Δ v. In a collision, objects experience an impulse; the impulse causes and is equal to
the change in momentum.
What is impulse measured in?
Impulse determines the velocity of an object after a force acts on it. This can be readily seen as a consequence of impulse being a change in momentum. The SI unit of impulse is the newton-second (N·s).
What is the relationship between force and time in impulse?
$\Delta p = F_{\text{net}} \Delta t$. The quantity $F_{\text{net}} \Delta t$ is known as impulse, and this equation is known as the impulse-momentum theorem. From the equation, we see that the impulse equals the average net external force multiplied by the time this force acts. It is equal to the change in momentum.
How do you find impulse without time?
How to calculate impulse
1. You can type the initial and final momentum values into our calculator to find the impulse directly from the impulse formula J = Δp .
2. You can also enter the values of mass and velocity change of an object to calculate the impulse from the equation J = mΔv .
How do you calculate the impulse needed to stop an object?
Impulse: Quick Guide
1. momentum: a measure of an object's motion and of how difficult it is to stop the object. Momentum (p) = Mass (m) * Velocity (v)
2. impulse: the measure of how much the force changes the momentum of an object. Impulse = Force * time = F * Δt, where Δt = t_final − t_initial.
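As a quick numeric illustration (the numbers below are invented for the example), both impulse formulas can be checked in a few lines of Python:

```python
# Hypothetical example: a 2.0 kg object brought from 5.0 m/s
# to rest by a constant 4.0 N braking force.
mass = 2.0    # kg
dv = 5.0      # m/s, magnitude of the velocity change
force = 4.0   # N, magnitude of the stopping force

impulse = mass * dv     # J = m * Δv  ->  10.0 N·s
dt = impulse / force    # J = F * Δt  ->  Δt = 2.5 s

print(impulse, dt)  # → 10.0 2.5
```

The same impulse (10.0 N·s) can therefore be delivered by a smaller force acting over a longer time, which is exactly the force-time trade-off discussed above.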
How does impulse affect energy?
An impulse applied to an object gives it momentum. In fact, an impulse results in a change in momentum. What momentum doesn't help determine is how much energy is contained in the movement of an object.
Does impulse equal momentum?
From the equation, we see that the impulse equals the average net external force multiplied by the time this force acts. It is equal to the change in momentum. The effect of a force on an object
depends on how long it acts, as well as the strength of the force.
Is impulse same as momentum?
Impulse and momentum have the same units; when an impulse is applied to an object, the momentum of the object changes and the change of momentum is equal to the impulse. | {"url":"https://www.rwmansiononpeachtree.com/what-is-the-relationship-between-impulse-and-work/","timestamp":"2024-11-02T09:01:39Z","content_type":"text/html","content_length":"76408","record_id":"<urn:uuid:3e483232-e3f6-493d-adf1-d30314e4f544>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00803.warc.gz"} |
Post a picture of yourself
Let me know if you require further assistance, I'm a certified life coach. You would be the first client I didn't lead into a life of drugs and crime.
Actually, I have one more bit of free advice. Have you met my friend bOnEs? He's single, and he possesses a beard. I'll make reservations for the two of you at this Italian place around the corner
from my house. It's on you two to get here, though; I'm a life coach, not a travel agent.
Thanks massy, i will bear that in mind when me and my boyfriend eventually call it a day lol.
I've been trying to grow a beard since last summer, So far I only have a weird german porno moustache.
Well, you're a creepy perv, so it's probably a good look for you.
I have been getting more offers to star in snuff films, plus the lady(boys) love it.
Nice. I've considered shaving my beard just to see what kind of shit I can get into with a porno 'stache, and possibly a silk shirt.
Maybe throw in some wild chest hair.
fucking work, i missed out on this convo?!?
@massacre - women do dig the beard... i am getting more looks and convo's because of it... women love a man that's rugged, it gives us that "stoic" appearance... we look like a man that knows what it
means to be a man... it's strange though, working at the livery for over a year now, i am getting looks from women that never gave me the look... the beard has a power all it's own... as a matter of
fact, it could be the central story of the next elder scrolls... TESVI:beardym...
@mercedes - this could be you;
you know your checking out that manly face sweater...
EDIT: ahh, if and when i ever shave the sweater, i am gonna see what i look like with a handlebar, or fou-man-chu... or the rollie fingers;
I can see his chin through his beard. He failed.
@massacre - women do dig the beard... i am getting more looks and convo's because of it... women love a man that's rugged, it gives us that "stoic" appearance... we look like a man that knows
what it means to be a man... it's strange though, working at the livery for over a year now, i am getting looks from women that never gave me the look... the beard has a power all it's own... as
a matter of fact, it could be the central story of the next elder scrolls... TESVI:beardym...
If I go two weeks without trimming my beard, then go to a place with hunting/camping supplies while wearing a flannel shirt, I will get bitches. I fucking love my beard.
Maybe its an american thing....
It is. British chicks are cunts.
Thats the sweetest thing anyone has ever said to me mass...cheers!
I will never know
when i think of massacre, i imagine this is what he looks like...
Lol bones thats the get up he wears just to go down to walmart. I was looking for what i thought massy looked like IRL but couldnt find a pic good enough.
I found this instead tho:
DiO 4730
my old beard was as long as those ones put together
Is that...lotion on the amp lol
DuPz0r 5361
Lol bones thats the get up he wears just to go down to walmart. I was looking for what i thought massy looked like IRL but couldnt find a pic good enough.
I found this instead tho:
LMAO that's bOnEs (left) and QD (right) in 10-15 years!
Guy on the left is somewhat bOnEs-like.
when i think of massacre, i imagine this is what he looks like...
I'd kill a bear with that guy.
he looks like jeff bridges
Guest vacant
I can't grow a beard
*tries not to laugh*
*does anyways*
HAHAHA you're not a man!!
I can't grow a beard
Awww, well at least u dont have to shave as much.....*stifles laughter*
DuPz0r 5361
I've never tried to grow a full sized beard. I've gone a month or so without shaving a few times, and got about an inch or two, but some hairs grow quicker than others, and you have to proper
maintain it, so i cba with that. I'd much prefer to look unshaven rather than full on bearded. | {"url":"https://www.igta5.com/forums/topic/42-post-a-picture-of-yourself/page/15/?tab=comments#comment-30337","timestamp":"2024-11-10T14:48:46Z","content_type":"text/html","content_length":"341623","record_id":"<urn:uuid:89ac88e1-7226-4c4e-8df2-dadf50d4ae7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00254.warc.gz"} |
Johannes Kepler Biography, Life, Interesting Facts
Birth Place :
Weil der Stadt, Baden-Württemberg, Germany
Johannes Kepler was a famous German mathematician and astronomer who discovered that the orbits of the planets around the Sun are elliptical. He first stated the fundamental laws of planetary motion. He was also known for his work in philosophy, geometry, and optics. Among his greatest achievements are the observation of Kepler's Star and the formulation of the Kepler conjecture. He is one of the most prominent astronomers, and the Kepler crater on Mars was named after him. Apart from astronomy, he had an impact on mathematics, especially on the evolution of geometry: he is known to have come up with the Kepler triangle as well as the Kepler problem.
Johannes Kepler studied various aspects of geometric progressions and published his work in three parts. He also pursued astrology as a part-time occupation and published works on it such as De Fundamentis Astrologiae and Dissertatio cum Nuncio Sidereo. During that period he worked for the Danish nobleman Tycho Brahe, before serving as an advisor to Emperor Rudolph II.
CHILDHOOD & EARLY LIFE
Johannes Kepler was born on December 27, 1571, to Heinrich Kepler and Katharina Guldenmann, in the Stuttgart region of Germany. His family had once been wealthy, but by the time he was born its fortune had declined drastically. His father made a living as a mercenary and left the family when little Kepler was just five years old. His mother was a herbalist and healer who was later accused of witchcraft. Kepler was said to have been mentally and physically weak as a child, but his mathematical aptitude was highly impressive, and he developed an early interest in mathematics. He was introduced to astronomy when he was very young, and it remained a passion throughout his life.
He finished grammar school in 1589 and enrolled at the University of Tübingen, where he proved to be very intelligent and earned a reputation as a skilled astrologer. Johannes Kepler studied philosophy as well as theology under two renowned teachers, and learned both the Copernican and the Ptolemaic systems of planetary motion. He gave up his wish to become a minister when he was offered a professorship at the age of 23.
Johannes Kepler developed a plan of the cosmic structure of the universe during his teaching career. He defended and demonstrated the periodic conjunction of the planets Saturn and Jupiter in the zodiac in support of Copernican views. On the geometry of the universe, he said he was sure that there are geometric ratios between the orbits of the planets. His theories were based on the Copernican system and stemmed from his scientific and theological views. One of the works which built his reputation as a very skilled astronomer was the Mysterium Cosmographicum of 1596. He later modified this work, and it served as the basis for some of his future works. After this he planned to work on the influence of the heavens on the earth, on planetary motion, and on the stationary aspects of the universe. He sent out proposals to different people and discussed them with Tycho Brahe, whom he later befriended.
Johannes Kepler left his position when growing tensions threatened his employment. He joined Tycho, with whom he produced some of the most brilliant works in astronomy of all time, including the Rudolphine tables and the Prutenic tables, which were presented to Emperor Rudolph II. After Tycho's untimely death, Kepler was appointed imperial mathematician and finished Tycho's works. He aided the emperor in a time of trouble and served as his prime astrological advisor. He later designed his own telescope, called a Keplerian telescope. He also observed a bright supernova, of the kind said to occur only once in some 800 years.
PERSONAL LIFE
Johannes Kepler married Barbara Muller, who had been widowed twice, in 1597; they had five children, of whom two died in infancy. She passed away in 1612 after her health had deteriorated. In 1613 he remarried, to the 24-year-old Susanna; the first three children from this marriage died in infancy. It was during this time that his mother was accused of practicing witchcraft and imprisoned; Kepler stood by her throughout the period. He died before observing the transits of Mercury and Venus for which he had been waiting. He died on the 15th of November, 1630, after a brief sickness in Germany.
| {"url":"https://www.sunsigns.org/famousbirthdays/profile/johannes-kepler/","timestamp":"2024-11-04T05:45:11Z","content_type":"text/html","content_length":"193617","record_id":"<urn:uuid:5b45baf2-5b67-4255-8688-4e8e525eed2d>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00745.warc.gz"} |
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ITC.2020.15
URN: urn:nbn:de:0030-drops-121208
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2020/12120/
Ghazi, Badih ; Golowich, Noah ; Kumar, Ravi ; Manurangsi, Pasin ; Pagh, Rasmus ; Velingker, Ameya
Pure Differentially Private Summation from Anonymous Messages
The shuffled (aka anonymous) model has recently generated significant interest as a candidate distributed privacy framework with trust assumptions better than the central model but with achievable
error rates smaller than the local model. In this paper, we study pure differentially private protocols in the shuffled model for summation, a very basic and widely used primitive. Specifically:
- For the binary summation problem where each of n users holds a bit as an input, we give a pure ε-differentially private protocol for estimating the number of ones held by the users up to an
absolute error of O_{ε}(1), and where each user sends O_{ε}(log n) one-bit messages. This is the first pure protocol in the shuffled model with error o(√n) for constant values of ε.
Using our binary summation protocol as a building block, we give a pure ε-differentially private protocol that performs summation of real numbers in [0, 1] up to an absolute error of O_{ε}(1), and
where each user sends O_{ε}(log³ n) messages each consisting of O(log log n) bits.
- In contrast, we show that for any pure ε-differentially private protocol for binary summation in the shuffled model having absolute error n^{0.5-Ω(1)}, the per user communication has to be at least
Ω_{ε}(√{log n}) bits. This implies (i) the first separation between the (bounded-communication) multi-message shuffled model and the central model, and (ii) the first separation between pure and
approximate differentially private protocols in the shuffled model. Interestingly, over the course of proving our lower bound, we have to consider (a generalization of) the following question that
might be of independent interest: given γ ∈ (0, 1), what is the smallest positive integer m for which there exist two random variables X^0 and X^1 supported on {0, … , m} such that (i) the total
variation distance between X^0 and X^1 is at least 1 - γ, and (ii) the moment generating functions of X^0 and X^1 are within a constant factor of each other everywhere? We show that the answer to this
question is m = Θ(√{log(1/γ)}).
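To make the setting concrete, here is a toy sketch — emphatically not the paper's protocol — of binary summation in the shuffled model via classic randomized response. All names and parameters below are invented for illustration: each user flips their bit with probability q, the shuffler reveals only the (order-destroyed) multiset of messages, and the analyzer debiases the observed sum.

```python
import random

def randomize(bit, q, rng):
    # Each user flips their bit with probability q before sending it.
    return bit if rng.random() >= q else 1 - bit

def estimate(messages, q):
    # The analyzer sees only the multiset of messages. Debias using
    # E[observed sum] = true*(1-q) + (n-true)*q, solved for `true`.
    n = len(messages)
    return (sum(messages) - n * q) / (1 - 2 * q)

rng = random.Random(0)                  # fixed seed for reproducibility
bits = [1] * 300 + [0] * 700            # 300 of 1000 users hold a 1
shuffled = sorted(randomize(b, 0.25, rng) for b in bits)  # order destroyed
print(round(estimate(shuffled, 0.25)))  # close to the true count, 300
```

Such a single-message randomized-response scheme has error on the order of √n; the point of the paper's multi-message protocols is precisely to get far below that while keeping pure differential privacy.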
BibTeX - Entry
author = {Badih Ghazi and Noah Golowich and Ravi Kumar and Pasin Manurangsi and Rasmus Pagh and Ameya Velingker},
title = {{Pure Differentially Private Summation from Anonymous Messages}},
booktitle = {1st Conference on Information-Theoretic Cryptography (ITC 2020)},
pages = {15:1--15:23},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-151-1},
ISSN = {1868-8969},
year = {2020},
volume = {163},
editor = {Yael Tauman Kalai and Adam D. Smith and Daniel Wichs},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2020/12120},
URN = {urn:nbn:de:0030-drops-121208},
doi = {10.4230/LIPIcs.ITC.2020.15},
annote = {Keywords: Pure differential privacy, Shuffled model, Anonymous messages, Summation, Communication bounds}
Keywords: Pure differential privacy, Shuffled model, Anonymous messages, Summation, Communication bounds
Collection: 1st Conference on Information-Theoretic Cryptography (ITC 2020)
Issue Date: 2020
Date of publication: 04.06.2020
| {"url":"http://dagstuhl.sunsite.rwth-aachen.de/opus/frontdoor.php?source_opus=12120","timestamp":"2024-11-09T13:06:34Z","content_type":"text/html","content_length":"8663","record_id":"<urn:uuid:e200c228-4b9d-45a3-a1b4-3ae89114a96e>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/WARC/CC-MAIN-20241109120425-20241109150425-00870.warc.gz"} |
[LeetCode in Swift] Valid Parentheses
Hello this is Derek!
This is another blog from the series called LeetCode in Swift, where I solve LeetCode challenges in Swift, as I am in the middle of the job hunting process and need to prepare for possible upcoming coding interviews.
This is an easy difficulty problem in LeetCode, which involves the usage of Stack technique.
What is stack to begin with?
A stack is a Last-In-First-Out (LIFO) data structure. It is used to push things onto the stack, and pop them off in the reversed order it was pushed.
Created using Whimsical Diagram
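In code, a plain dynamic array already behaves like a stack. Here is a quick Python illustration of the push/pop order (Python is used only for this small demo — the actual solution below is in Swift):

```python
stack = []
for item in ["a", "b", "c"]:
    stack.append(item)                    # push, one at a time

popped = [stack.pop() for _ in range(3)]  # pop everything back off
print(popped)  # → ['c', 'b', 'a']  — last in, first out
```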
With this in mind, let’s get started with today’s LeetCode problem.
Today’s challenge: Valid Parentheses
Given a string s containing just the characters '(', ')', '{', '}', '[' and ']', determine if the input string is valid.
An input string is valid if:
1. Open brackets must be closed by the same type of brackets.
2. Open brackets must be closed in the correct order.
3. Every close bracket has a corresponding open bracket of the same type.
Example 1:
Input: s = "()"
Output: true
Example 2:
Input: s = "()[]{}"
Output: true
Example 3:
Input: s = "(]"
Output: false
• 1 <= s.length <= 10^4
• s consists of parentheses only '()[]{}'.
Approach and Thought Process
While Swift doesn’t offer stack properties or data structures by default, this can easily be created by using an array to hold the characters to behave like stack.
Our goal is to find whether the given string of parentheses is valid or not. Not only we have to consider if the count of opening and closing parentheses match, we also need to consider the order
they are placed in.
We will first check if the string count is even or not. If it’s not even, we don’t even have to continue with our code as it’s not considered a valid parentheses. (Valid parentheses will always be
even, no matter what).
Then we can initialize an array named stack, and iterate through every string to store the opening parentheses (i.e. “(“, “{“, “[“), and pop last matching parentheses if closing parentheses (i.e. “)
”, “}”, “]”) has been found.
Step 1. Edge Case Check
As mentioned earlier, we first check the string count to make sure that it is even.
if s.count % 2 != 0 {
    return false
}
Step 2. Initialize Stack Array
var stack = [Character]()
Step 3. Iterate Through `s` String
We use for loops to iterate through every input string in s to perform our stack implementation. For every opening parentheses encountered, we will append it to our stack.
for char in s {
    switch char {
    case "(", "[", "{":
        stack.append(char) // append opening parentheses to our stack
Every time we encounter a closing parenthesis, we pop the last element from the stack and make sure it is the matching opening parenthesis:
    case ")":
        guard stack.popLast() == "(" else { return false }
    case "]":
        guard stack.popLast() == "[" else { return false }
    case "}":
        guard stack.popLast() == "{" else { return false }
    default:
        return false
    }
}
Step 4. Return If Stack is Empty or Not
return stack.isEmpty
If the input string had valid parentheses, we end up with an empty stack, since every opening parenthesis has been popped by its matching closing one. Therefore, if the stack is empty this returns true, and if the stack still holds unmatched opening parentheses it returns false. Mismatched closing parentheses never reach this point, because the guard statements already return false for them.
So our complete code implementation would look like this:
class Solution {
    func isValid(_ s: String) -> Bool {
        if s.count % 2 != 0 {
            return false
        }
        var stack = [Character]()
        for char in s {
            switch char {
            case "(", "[", "{":
                stack.append(char)
            case ")":
                guard stack.popLast() == "(" else { return false }
            case "]":
                guard stack.popLast() == "[" else { return false }
            case "}":
                guard stack.popLast() == "{" else { return false }
            default:
                return false
            }
        }
        return stack.isEmpty
    }
}
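For readers who want to experiment with the same approach outside of Swift, here is a line-for-line Python port. The function name `is_valid` and the `pairs` dictionary are my own choices, not LeetCode's Python template:

```python
def is_valid(s: str) -> bool:
    if len(s) % 2 != 0:          # odd length can never be balanced
        return False
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)     # push opening parentheses
        elif ch in pairs:
            # pop must succeed AND match the corresponding opener
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            return False
    return not stack             # valid iff nothing is left unmatched

print(is_valid("()[]{}"), is_valid("(]"))  # → True False
```

The `not stack` check at the end also catches an extra case the Swift guards cannot: a stray opener like "((" that is never closed.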
Walkthrough Example
Let’s use a random example, s = “(){[]}”, as our test case.
First, we know that s.count equals 6, which is even, so we can continue with our code. We then initialize our stack that will hold character elements, and begin our iteration.
Beginning with our first character, since it’s opening parentheses "(", we append to our stack. Stack is now:
stack = [“(“]
Then we encounter the closing parentheses ")”. We need to make sure whatever we are popping last is “(“. It is matching, so we can safely call popLast from the stack. Stack is now:
stack = []
We then encounter opening parentheses “{“ so we append to our stack. We then encounter another opening parentheses “[“ so we append this to our stack as well. Stack is now:
stack = [“{“, “[“]
Next we encounter the closing parentheses “]”. We need to make sure, again, that whatever character we are popping last is matching parentheses of “[“. Our stack’s last character is indeed “[“, so we
can pop last character from our stack. Stack is now:
stack = [“{“]
Next we encounter the closing parentheses “}”. Again, we check if last character we are popping is a matching pair of “{“, which it is, so we can pop last. Stack is now:
stack = []
As we have completed our iteration, we will return stack.isEmpty to check if it’s empty. It is empty, therefore it will return true, which indicates that this is a valid parentheses.
And we have successfully solved another problem, with great efficiency!
Time and Space Complexity Analysis
The time complexity of our solution is O(n) as we are iterating through each character in the input string exactly once.
Space complexity is also O(n) as we are pushing all of the opening parentheses to our stack in worst case scenario.
I hope this explanation was helpful, as I personally had to go through some trial-and-error of trying to count individual parentheses and removing them from the array.
Feel free to comment on this approach, share your ideas or if you have alternative ways of solving this and consider following me for more blogs!
Stay tuned for more LeetCode solutions in Swift.
Until then, happy coding everyone! 🔥 | {"url":"https://derekhskim.medium.com/leetcode-in-swift-valid-parentheses-fc45cd544677?source=user_profile_page---------2-------------5573399ce9ee---------------","timestamp":"2024-11-09T04:23:25Z","content_type":"text/html","content_length":"136626","record_id":"<urn:uuid:e97d13d7-d171-4be9-b65c-215383b4e145>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00587.warc.gz"} |
Valentine's Day Math Mystery - Case of the Valentine Villain – 6th Grade Math Worksheets
Valentine's Day Math Mystery: Case of the Valentine Villain - 6th Grade Edition
SKU: ValentineVillain6thgrade
Valentine's Day Math Mystery: Case Of The Valentine Villain - 6th Grade Edition. Your students will use various math and critical thinking skills in these EASY PREP worksheets to help solve a
Valentine crime! Easy prep! Just Print & Solve! Or go paperless with the new Google Slides option provided within your download.
⭐Optional video hook included for introducing the case! (Check it out in the preview section)
⭐BONUS: Ending video added to wrap up the case!
- Print the student pages, staple or sort them in a folder, and your students are set to go.
- Have the video hook ready for viewing at the start of the lesson on an IWB, iPad, or computer monitor screen.
- Digital option: Make digital copies of the Google Slides before assigning to Google Classroom. Students will need to be in 'edit' mode to enter answers.
Math Skills Required to Unlock Clues
Students must use their math skills to unlock clues that will help them narrow down a list of suspects to find the Valentine Villain!
• Clue 1: Divide Fractions & Mixed Numbers
• Clue 2: Solve One-Step Equations
• Clue 3: Square Numbers (up to 30)
• Clue 4: Convert decimals from expanded form to standard form (with fractions)
• Clue 5: Adding and Subtracting Decimals
>> Pack includes a mystery declaration sheet, answer sheets, an elimination guide, and printable awards.
This mystery is included in the math mystery bundles below:
Bundle and Save with Math Mysteries
HOLIDAY Bundle Pack GRADE 6
Complete Math Mystery Bundle for GRADE 6
How long will this activity take?
The time to complete will vary anywhere between 30mins - 2 hours! It mainly depends on how familiar your students are with the math mystery format and how difficult they find math skills covered in
the particular mystery. Please check the math skills outlined in the clues above to help determine suitability for your class. I recommend pacing this activity by giving students one clue at a time.
Once the whole class has completed a clue, move on to the next clue within the same lesson or the next math session. New math content presented? Make a lesson out of it by modeling the math before
diving into the clue. I like to say, "We must learn something new before attempting the next clue." There are lots of ways to implement this activity in your classroom; learn more in the blog post
link below:
Read Blog Post: Five Easy Ways to use Math Mysteries in your Classroom.
Great for review, math centers, extra practice, enrichment, homework, the sub tub, or early finishers.
Differentiation Tip:
Clue swap across grade levels to adjust the mystery with skills to suit student needs. As long as the clue number being swapped is the same, the mystery will work the same.
There's an optional video hook that you can use to set the stage to engage!
Multiple Uses
- Suitable for independent, pairs, or group work.
- Use as part of your math centers, add to your sub tubs, make it part of your early finisher tasks, give for homework, or make it part of your classroom practice/review sessions.
Answer key included!
_________________________________________________________________________________________
Not sure what a math mystery is?
CLICK HERE to watch a video about Math Mysteries
or read this blog post:
What is a Math Mystery? Read Blog Post: FIVE Easy Ways to use Math Mysteries in your Classroom.
Check out this blog post to save paper:
How to Create Reusable Math Mystery Files to Save Paper
Please note that the Grade 1 Math Mysteries cannot be 'clue swapped' with other Grade Math Mystery versions for customization/differentiation
as you usually can with Grades 2-6.
Mrs. J.
© 2018 | {"url":"https://www.jjresourcecreations.com/store/p225/Valentinesday_mathmystery_worksheets_6thgrademath.html","timestamp":"2024-11-11T07:40:42Z","content_type":"text/html","content_length":"207434","record_id":"<urn:uuid:ad072e14-e550-4959-99d6-d949e32e5f75>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00541.warc.gz"} |
Recursion, primitive or otherwise
The SICP textbook shows in Exercise 1.10 a famous function known as Ackermann’s function. Actually the code there shows one version of that function; there are minor variations of it, all doing
basically the same, and all known as Ackermann function. The exercise is not on the list of exercises officially discussed in the group sessions, but perhaps you have stumbled upon it or the group
teacher discusses it.
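For concreteness, here is one common variant, the two-argument Ackermann–Péter function, sketched in Python. Note that SICP's Exercise 1.10 prints a slightly different variant, so the values below need not match what the book's code computes:

```python
def ack(m, n):
    # Ackermann–Péter function: terminates for all m, n >= 0,
    # but grows too fast to be primitive recursive.
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3), ack(3, 3))  # → 9 61
```

The "grows extremely fast" part shows up immediately: ack(4, 2) already has 19,729 decimal digits, and a naive recursive evaluation of anything beyond small arguments blows the stack.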
As said, the function is very well known, and it's always discussed in connection with a concept called primitive recursion (which is also not part of the course syllabus). So if one reads about primitive recursion, invariably the Ackermann function is mentioned, and if the Ackermann function is discussed, then it's discussed in connection with primitive recursion, namely pointing out that Ackermann's function is not primitive recursive. Actually, Exercise 1.10 in SICP is an exception to that rule: it gives the definition of the function and asks to observe how it behaves, but does not mention primitive recursion.
Primitive recursion
Ackermann’s function is the first and most prominent example of a terminating function which is not primitive recursive. That is largely the reason why it is famous. It's also known as an example of a function that grows extremely fast (which can be observed by playing around with it in Exercise 1.10). Both facts hang together: abstractly speaking, it grows too fast for any possible primitive-recursive function, while still terminating. The function is not famous for being practically useful; for that, too, it grows too fast.
So if one has ever heard of the Ackermann function at all, it’s probably exactly for that: it’s “the” example of a function that is not primitive recursive. Also googling around in the internet,
starting maybe at Wikipedia and at various different other pages that offer wisdom and answers on various questions, will confirm that. You can look for questions like “What is an example of a total
function that is not primitive recursive?” (answer “Ackermann”) or “what’s the Ackermann function” (answer: an example for a non-primitive-recursive function), and on and on.
Alright, got it, Ackermann is not primitive recursive.
But that's actually not true!
Or maybe I should be more modest: it's true only under assumptions that are taken for granted and often left unmentioned. It maybe never even crosses the mind of people who just "know" that Ackermann is not primitive-recursive, or of people who write webpages explaining Ackermann, that in fact there is a restriction.
Participants of a course on functional programming, however, may not suffer from — or at least should not suffer from — that blind spot. What unspoken restriction are we talking about? I come to that in a moment, but before that, we need to at least sketch what is actually meant by primitive recursion.
Primitive recursive functions
General recursion is ubiquitous in the lecture, most functions halfway interesting use recursion. Primitive recursive functions are a restricted class of recursive functions. We don’t bother to give
a precise definition of the concept; it’s easy to find it explained on the internet.
You will remember that SICP distinguishes between recursive procedures and recursive (resp. iterative) processes, where processes refers to what happens at run-time. Let’s focus on what the book
calls “procedures”, the code representation, not the processes.
A recursive procedure is one that calls itself in its body. There is also indirect or mutual recursion, which is a situation where two or more procedures call each other; in software engineering
circles that’s also sometimes called a “call-back” situation. Never mind. There are no restrictions on how procedures can recursively call themselves (or each other). In other words, Scheme and Lisp
(and most other modern languages) support recursion in its general form, unrestricted.
One can use recursion to easily come up with functions that don't terminate. The simplest example is the one from Exercise 1.5, which does in fact nothing else than recursively calling itself (and thus will never terminate):
(define (p) (p))
One can then study restrictions on the use of recursion. One example is known as tail recursion. The book and the lecture use that term in connection with the interpreter, stating that the Scheme interpreter is an example of a tail recursive interpreter. More conventionally, one calls functions or procedures tail recursive, and that characterizes functions, procedures, methods etc. which call themselves only "at the end" of their body. In the terminology of SICP, that leads to an iterative process, not a recursive process (at least in an interpreter that knows how to deal with it adequately and efficiently).
So, a tail-recursive procedure is a restricted form of a recursive procedure.
But now to the restriction on recursion called primitive. The exact definition of primitive recursive functions involves fixing allowed elementary constructs, and projections and other details. The
core of the concept, however, is the way recursion itself is allowed. It can be roughly stated as
A function can call itself recursively, but only on smaller arguments.
Instead of giving the formal definition of the primitive recursion operator, we give a feeling what’s allowed and what’s not allowed by small examples. Primitive recursive functions are classically
defined as functions on natural numbers as arguments and as return value. For that, being smaller is pretty obvious. Let’s look at the following function f
(define (f x y)
(if (= x y) y
(+ (f (+ x 1) y) 1)))
The function is supposed to take two natural numbers as arguments. Additionally, let's assume the argument for x is smaller than or equal to y. Otherwise the corresponding process would not terminate. That's a minor point; we can of course easily add a couple of lines, checking first whether the assumption holds, and if not, doing an analogous calculation to cover that situation as well, making it terminate for arbitrary natural numbers. But that's not central to the discussion here.
Now, f is recursive, calling itself (and it's not tail-recursive). But is the function primitive recursive? The definition of (f x y) calls itself with (f (+ x 1) y), i.e., on a larger argument, and that is forbidden in
primitive recursive schemas. So does that mean the function is not primitive recursive?
Not so fast. The way it's defined is certainly not the primitive-recursive way. But in the same way that one may transform non-tail-recursive procedure definitions into tail-recursive ones (the
lecture had examples for that), one may sometimes reformulate non-primitive-recursive definitions so that they fit the schema. What function is it anyway, given above? It's easy enough: it calculates
2y - x (for x <= y).
It turns out that this function is indeed primitive recursive, in that one can easily define it using primitive recursion schemas. Indeed, it's straightforward, since one can define addition,
multiplication, and subtraction easily via primitive recursion. Defining the calculation 2y - x this way seems more natural than the slightly weird recursive definition where f calls itself on (+ x 1), but that
definition was given to illustrate what is not allowed.
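For illustration, here is a sketch in Scheme of a definition of 2y - x that does follow the primitive-recursive pattern, recursing only on a strictly smaller first argument (the name g is ours, and we still assume x <= y):

```scheme
;; 2y - x, defined by primitive recursion on x (assuming x <= y):
;;   g(0, y)   = 2y
;;   g(x+1, y) = g(x, y) - 1     ; only the previous value g(x, y) is used
(define (g x y)
  (if (= x 0)
      (* 2 y)
      (- (g (- x 1) y) 1)))
```

For example, (g 1 3) evaluates to 5, which is 2*3 - 1.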
To illustrate what is allowed, let's sketch how addition of two natural numbers can be defined with primitive recursion. Actually, it corresponds to the most straightforward definition of addition
(assuming that the successor function is given as a more basic operation, here written as +1. So x +1 is not meant as binary addition with x and 1 as arguments, but as calculating the successor of
x. We also use infix notation and equations, not Lisp-like prefix notation and code, though one easily could).
0 + y = y
(x +1) + y = (x + y) +1
The primitive recursion schema generally specifies a base case (the first line in the above example) and an induction case (the second line). In the case of addition, to define (x +1) + y, the
recursion scheme may use on the right-hand side of the equation a function h that takes three arguments; allowed as arguments are x, y, and (x + y). Besides that, it may rely on earlier defined
functions and some primitive operations. In our very simple example, x and y are not needed in the construction: x + y is the only relevant part, and the successor function +1 is built in. (NB, to
avoid confusion: the values of x and y are not needed individually and directly as arguments to the function h, but of course they are needed indirectly in that their sum x + y is used.)
So addition is defined recursively in that the definition calls itself, under the restriction that for defining the outcome for x +1 in the induction case, only x + y may be used, not an arbitrary
recursive call to plus.
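The two equations above translate almost literally into Scheme (a sketch; adding 1 here stands in for the basic successor operation):

```scheme
;; primitive-recursive addition, recursing on the first argument:
;;   0 + y      = y
;;   (x +1) + y = (x + y) +1
(define (add x y)
  (if (= x 0)
      y
      (+ (add (- x 1) y) 1)))   ; (+ ... 1) plays the role of the successor
```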
The question then is:
Are all recursive functions also representable by primitive recursion, or is being primitive recursive a proper restriction?
The answer is yes, it’s a restriction for sure. All primitive recursive functions terminate, which is a consequence of the fact that the recursion calls the function on a smaller argument. On the
other hand, general recursion easily allows non-terminating procedures. Earlier in this post, there was a minimal example for that.
Why is Ackermann not primitive recursive (in the standard set-up)?
So far so good. We got a feeling that being primitive recursive is a restriction on general recursion. That the Ackermann function is not primitive recursive, however, is not obvious. Note that it's not good
enough to observe that its definition does not follow the required primitive-recursive schema: one has to argue that it cannot somehow be written up in a different way that fits the schema.
Generally speaking, the Ackermann function is not primitive recursive as it “grows too fast”. We don't provide the argument formally, but the idea is quite simple. Looking at the primitive recursive
schema sketched above, it has the feel of an iterative loop with a fixed bound, like for i = 0 to n do .... Programming with for-loops with a fixed bound results in terminating programs, analogous to
the fact that all primitive-recursive programs are terming. That’s in contrast to programs using general “while” loop, resp. programs using general recursion.
A primitive recursive definition builds a new function using the primitive recursion schema corresponding to a for-loop iteration and using earlier defined primitive recursive functions as building
block, which themselves are iterative schemes. That corresponds to a stack of nested iteration loops.
For illustration: as we have seen, addition can be defined using the successor function iteratively. One could continue to define multiplication as iterative addition. And exponentiation as iterated
multiplication. SICP shows how that's done in Scheme, though without mentioning that the recursions and iterations could be classified as “primitive” (see Sections 1.1.3 and 1.1.4).
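As a sketch, the first layers of that hierarchy in Scheme, each new definition using only the previous one plus one more recursion (the names mul and power are ours):

```scheme
;; multiplication as iterated addition (one more recursion layer)
(define (mul x y)
  (if (= x 0)
      0
      (+ y (mul (- x 1) y))))

;; exponentiation as iterated multiplication (yet another layer)
(define (power x y)
  (if (= y 0)
      1
      (mul x (power x (- y 1)))))
```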
At any rate, taking the successor function as basic, multiplication can be represented by one loop (or using one primitive recursion scheme), exponentiation using 2 nested loops, and one could
continue with iterated exponentiation, and then iterate that, piling up layer after layer of looping in a nested fashion; each layer really adds expressiveness (and the potential of faster-growing functions).
So, using only such bounded loops for programming then leads to a hierarchy of functions: those programmable with a nesting depth of at most one (like multiplication), those with a nesting depth of two
(for example, exponentiation), etc., all programs terminating. It can be shown that this hierarchy is infinite. In other words, it's not that there is some maximal looping depth after which one does
not need further nesting.
But where does Ackermann fit in?
Well, that’s the whole point: Ackermann does NOT fit into this looping hierarchy!
Ackermann's function comes in different flavors; the one from Exercise 1.10 has two arguments, and it's not even the one most commonly found. There are also versions with three arguments, and for the line of
argument here, let's assume for now that we have a 3-argument formulation.
In all formulations, the Ackermann function has one argument that corresponds roughly to the nesting level of iterative loops resp. the amount of primitive-recursive schemes. So Ack(x,y,1)
corresponds to one looping level, and in a properly formulated 3-argument version, Ack(x,y,1) is x+y (Wikipedia starts counting at 0 instead of 1, but never mind). Continuing like that, Ack(x,y,2) is
multiplication, Ack(x,y,3) is exponentiation exp(x, y), etc. This is the core of Ackermann's idea: define a function where one argument controls the nesting depth of loops or the level of primitive-recursive schemes.
And that immediately shows that Ackermann cannot be primitive recursive. If it were, it could be written using a fixed number of for-loops, or a given number of primitive-recursive schemes. But that's
impossible since, as we said, the hierarchy of looping constructs is a real hierarchy: each new level of nesting really adds a new class of functions. Thus, Ack cannot fit into any specific layer, say
level m, since Ack(x,y,m+1) would have to live in level m+1. This is what was meant when stating at the start of the post that Ack grows too fast to be primitive recursive. Each layer limits the order of
growth of the functions inside it, but one argument of the Ackermann function, the one we called m, controls the growth rate, and since it's an input of the function, we can make
Ackermann's function grow arbitrarily fast: too fast to fit into any specific layer.
Wait a second, wasn’t that a convincing argument that Ackermann is not primitive recursive?
Indeed, that was the outline of the standard proof showing that Ackermann is not primitive recursive, and hopefully it was convincing. But then, why the claim that Ackermann can be captured
primitive-recursively? It surely can't be both ways.
The classic definition and the argument outlined here can be made more formal, exactly specifying which functions to use as primitive building blocks (basically the successor and projection functions)
and the exact format of the primitive recursion schema (which we only sketched here, using addition as a very simple example). In its standard form, primitive recursion is used to define functions
over natural numbers, i.e., functions that take natural numbers as input and return a natural number. For instance, the Ackermann function Ack(x,m) is a function of that type. (BTW: let's switch back
to a two-argument version of Ackermann; it is not crucial for what's being said.) So this two-argument Ackermann function is of type Nat * Nat -> Nat, and the functions definable by primitive
recursion are of type Nat * Nat * ... * Nat -> Nat (though, as argued, Ack is not definable by primitive recursion, it would at least be of a fitting type).
In this whole construction and set-up lies a restriction, though one to which attention is seldom drawn: not only do we focus on functions over natural numbers, but we are
dealing with first-order functions over natural numbers!
Ah, well, yaah, now that you mention it…
Is this important? Are higher-order functions something to consider? Some may consider them curious anomalies, but in a course about functional programming one is surely comfortable with
higher-order functions; they are the bread and butter of functional programming. Embracing higher-order functions, instead of struggling to encode the first-order Ackermann function of type Nat *
Nat -> Nat primitive-recursively (and failing), we can look at Ackermann as a function of type Nat -> Nat -> Nat. That's the type of a higher-order function: a function that takes a natural number
as argument and returns a function (of type Nat -> Nat).
With this type, it's not really the same function: one cannot use one version as a drop-in replacement for the other. But it can still be seen as conceptually the same function. It's indeed easy to
transform any function of type A * B -> C into a function of type A -> B -> C and back. Actually, it's not just easy to do the transformation manually; one can also easily write two functions
that implement those two transformations. The transformations are known as currying and uncurrying in honor of Haskell Brooks Curry (that's the name of a person, but of course there are also
functional programming languages named after him: Haskell and the lesser-known Brooks and Curry. In particular, Brooks is rather marginal. Note that Wikipedia in the article about H. Curry confuses
the languages Brooks and Brook).
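In Scheme, for instance, the two transformations are a one-liner each (a sketch; the names curry and uncurry are just conventional):

```scheme
;; (curry f)   : turns f of type A * B -> C into A -> (B -> C)
(define (curry f)
  (lambda (a) (lambda (b) (f a b))))

;; (uncurry g) : the inverse transformation
(define (uncurry g)
  (lambda (a b) ((g a) b)))

;; example: (((curry +) 2) 3) evaluates to 5
```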
Now, with this switch of perspective and freeing one’s mind from the unspoken assumption that functions need to be first-order, one can observe:
With higher-order functions (and currying), Ackermann’s function can be defined by primitive recursion!
That’s known, but I think it’s fair to say, it’s much lesser known than the common knowledge that “Ackermann is not primitive recursive.”
Let’s wrap it up and simply show the primitive-recursive definition for (a version of) Ackermann. It corresponds to a two argument version of Ackermann, i.e., the uncurried, first-order version would
have two arguments. The higher-order version has one argument, but gives back a function. Here it is:
Ack(0) = succ
Ack(m+1) = Iter(Ack(m))
Iter(f)(0) = f(1)
Iter(f)(m+1) = f(Iter(f)(m))
How to do that in Scheme?
Ackermann can be defined in Scheme using general recursion; Exercise 1.10 in SICP shows a straightforward piece of code for that. Can one encode it primitive-recursively in Scheme as well? Well,
Scheme surely supports higher-order functions, and it supports currying (defining functions using lambda-abstractions). Thus one can quite easily translate the above primitive-recursive definition into
Scheme, and that is left as an exercise…
Applied Psychological Measurement, Volume 06, 1982
Search within Applied Psychological Measurement, Volume 06, 1982
Recent Submissions
• The development and application of a computerized information-processing test battery
(1982) Barrett, Gerald V.; Alexander, Ralph A.; Doverspike, Dennis; Cellar, Douglas; Thomas, Jay C.
To bridge the gap between computerized testing and information-processing-based measurement, a battery of computerized information-processing based ability and preference measures was developed.
The information-processing and preference measures and a battery of paper-and-pencil tests were administered to 64 college students. Although the internal-consistency reliabilities of the
computerized information-processing measures were adequate, test-retest reliabilities were lower than desirable for ability measures. The computerized information-processing measures possessed
moderate convergent validity but had low correlations with traditional paper-and-pencil measures. Of the computerized preference measures, the most promising results were obtained with the
Stimulus Pace measure. A major problem with the use of the computerized information-processing measures in applied settings would be administration time, as the battery took approximately 4
hours. In addition, problems with the stability of results over time and substantial practice effects suggest that even longer testing sessions would be required to obtain reliable measures.
Although information-processing measures of short-term memory have, at best, low correlations with traditional intelligence tests, their ability to predict real-world tasks has yet to be
sufficiently researched.
• Improving measurement quality and efficiency with adaptive testing
Approaches to adaptive (tailored) testing based on item response theory are described and research results summarized. Through appropriate combinations of item pool design and use of different
test termination criteria, adaptive tests can be designed (1) to improve both measurement quality and measurement efficiency, resulting in measurements of equal precision at all trait levels; (2)
to improve measurement efficiency for test batteries using item pools designed for conventional test administration; and (3) to improve the accuracy and efficiency of testing for classification
(e.g., mastery testing). Research results show that tests based on item response theory (IRT) can achieve measurements of equal precision at all trait levels, given an adequately designed item
pool; these results contrast with those of conventional tests which require a tradeoff of bandwidth for fidelity/precision of measurements. Data also show reductions in bias, inaccuracy, and root
mean square error of ability estimates. Improvements in test fidelity observed in simulation studies are supported by live-testing data, which showed adaptive tests requiring half the number of
items as that of conventional tests to achieve equal levels of reliability, and almost one-third the number to achieve equal levels of validity. When used with item pools from conventional tests,
both simulation and live-testing results show reductions in test battery length from conventional tests, with no reductions in the quality of measurements. Adaptive tests designed for dichotomous
classification also represent improvements over conventional tests designed for the same purpose. Simulation studies show reductions in test length and improvements in classification accuracy for
adaptive vs. conventional tests; live-testing studies in which adaptive tests were compared with "optimal" conventional tests support these findings. Thus, the research data show that IRT-based
adaptive testing takes advantage of the capabilities of IRT to improve the quality and/or efficiency of measurement for each examinee.
• Standard error of an equating by item response theory
A formula is derived for the asymptotic standard error of a true-score equating by item response theory. The equating method is applicable when the two tests to be equated are administered to
different groups along with an anchor test. Numerical standard errors are shown for an actual equating (1) comparing the standard errors of IRT, linear, and equipercentile methods and (2)
illustrating the effect of the length of the anchor test on the standard error of the equating.
• Latent trait models and ability parameter estimation
(1982) Andersen, Erling B.
In recent years several authors have viewed latent trait models for binary data as special models for contingency tables. This connection to contingency table analysis is used as the basis for a
survey of various latent trait models. This article discusses estimation of item parameters by conditional, direct, and marginal maximum likelihood methods, and estimation of individual latent
parameters as opposed to an estimation of the parameters of a latent population density. Various methods for testing the goodness of fit of the model are also described. Several of the estimators
and tests are applied to a data set concerning consumer complaint behavior.
• Adaptive EAP estimation of ability in a microcomputer environment
(1982) Bock, R. Darrell; Mislevy, Robert J.
Expected a posteriori (EAP) estimation of ability, based on numerical evaluation of the mean and variance of the posterior distribution, is shown to have unusually good properties for
computerized adaptive testing. The calculations are not complex, precede noniteratively by simple summation of log likelihoods as items are added, and require only values of the response function
obtainable from precalculated tables at a limited number of quadrature points. Simulation studies are reported showing the near equivalence of the posterior standard deviation and the standard
error of measurement. When adaptive testing terminates at a fixed posterior standard deviation criterion of .90 or better, the regression of the EAP estimator on true ability is virtually
linear with slope equal to the reliability, and the measurement error homogeneous, in the range +- 2.5 standard deviations.
• A nonparametric approach to the analysis of dichotomous item responses
(1982) Mokken, Robert J.; Lewis, Charles
An item response theory is discussed which is based on purely ordinal assumptions about the probabilities that people respond positively to items. It is considered as a natural generalization of
both Guttman scaling and classical test theory. A distinction is drawn between construction and evaluation of a test (or scale) on the one hand and the use of a test to measure and make decisions
about persons’ abilities on the other. Techniques to deal with each of these aspects are described and illustrated with examples.
• Some applications of logistic latent trait models with linear constraints on the parameters
(1982) Fischer, Gerhard H.; Formann, Anton K.
The linear logistic test model (LLTM), a Rasch model with linear constraints on the item parameters, is described. Three methods of parameter estimation are dealt with, giving special
consideration to the conditional maximum likelihood approach, which provides a basis for the testing of structural hypotheses regarding item difficulty. Standard areas of application of the LLTM
are surveyed, including many references to empirical studies in item analysis, item bias, and test construction; and a novel type of application to response-contingent dynamic processes is
presented. Finally, the linear logistic model with relaxed assumptions (LLRA) for measuring change is introduced as a special case of an LLTM; it allows the characterization of individuals in a
multidimensional latent space and the testing of hypotheses regarding effects of treatments.
• Linear versus nonlinear models in item response theory
(1982) McDonald, Roderick P.
A broad framework for examining the class of unidimensional and multidimensional models for item responses is provided by nonlinear factor analysis, with a classification of models as strictly
linear, linear in their coefficients, or strictly nonlinear. These groups of models are compared and contrasted with respect to the associated problems of estimation, testing fit, and scoring an
examinee. The invariance of item parameters is related to the congruence of common factors in linear theory.
• Advances in item response theory and applications: An introduction
(1982) Hambleton, Ronald K.; Van der Linden, Wim J.
Test theories can be divided roughly into two categories. The first is classical test theory, which dates back to Spearman’s conception of the observed test score as a composite of true and error
components, and which was introduced to psychologists at the beginning of this century. Important milestones in its long and venerable tradition are Gulliksen’s Theory of Mental Tests (1950) and
Lord and Novick’s Statistical Theories of Mental Test Scores (1968). The second is item response theory, or latent trait theory, as it has been called until recently. At the present time, item
response theory (IRT) is having a major impact on the field of testing. Models derived from IRT are being used to develop tests, to equate scores from nonparallel tests, to investigate item bias,
and to report scores, as well as to address many other pressing measurement problems (see, e.g., Hambleton, 1983; Lord, 1980). IRT differs from classical test theory in that it assumes a
different relation of the test score to the variable measured by the test. Although there are parallels between models from IRT and psychophysical models formulated around the turn of the
century, only in the last 10 years has IRT had any impact on psychometricians and test users. Work by Rasch (1980/1960), Fischer (1974), Birnbaum (1968), Wright and Panchapakesan (1969), Bock
(1972), and Lord (1974) has been especially influential in this turnabout; and Lazarsfeld’s pioneering work on latent structure analysis in sociology (Lazarsfeld, 1950; Lazarsfeld & Henry, 1968)
has also provided impetus. One objective of this introduction is to review the conceptual differences between classical test theory and IRT. A second objective is to introduce the goals of this
special issue on item response theory and the seven papers. Some basic problems with classical test theory are reviewed in the next section. Then, IRT approaches to educational and psychological
measurement are presented and compared to classical test theory. The final two sections present the goals for this special issue and an outline of the seven invited papers.
• A comparison of the accuracy of four methods for clustering jobs
(1982) Zimmerman, Ray; Jacobs, Rick; Farr, James L.
Four methods of cluster analysis were examined for their accuracy in clustering simulated job analytic data. The methods included hierarchical mode analysis, Ward’s method, k-means method from a
random start, and k-means based on the results of Ward’s method. Thirty data sets, which differed according to number of jobs, number of population clusters, number of job dimensions, degree of
cluster separation, and size of population clusters, were generated using a monte carlo technique. The results from each of the four methods were then compared to actual classifications. The
performance of hierarchical mode analysis was significantly poorer than that of the other three methods. Correlations were computed to determine the effects of the five data set variables on the
accuracy of each method. From an applied perspective, these relationships indicate which method is most appropriate for a given data set. These results are discussed in the context of certain
limitations of this investigation. Suggestions are also made regarding future directions for cluster analysis research.
• Sequential testing for selection
In sequential testing for selection, an applicant for school or work responds via a computer terminal to one item at a time until an acceptance or rejection decision can be made with a preset
probability of error. The test statistic, as a function of item difficulties for standardization subgroups scoring within successive quantiles of the criterion, is an approximation of a Waldian
probability ratio that should improve as the number of quantiles increases. Monte carlo simulation of 1,000 first-year college students under 96 different testing conditions indicated that a
quantile number as low as four could yield observed error rates that are close to their nominal values with mean test lengths between 5 and 47. Application to real data, for which interpolative
estimation of the quantile item difficulties was necessary, produced, with quantile numbers of four and five, even more accurate observed error rates than the monte carlo studies did. Truncation
at 70 items narrowed the range of mean test lengths for the real data to between 5 and 19. Important for use in selection, the critical values of the test statistics are functions not only of the
nominal error rates but also, alternatively, of the selection ratio, the base-rate success probability, and the success probability among selectees, which a test user is free to choose.
• Bounds on the k out of n reliability of a test, and an exact test for hierarchically related items
Consider an n-item multiple-choice test where it is decided that an examinee knows the answer if and only if he/she gives the correct response. The k out of n reliability of the test, Qk, is
defined to be the probability that, for a randomly sampled examinee, at least k correct decisions are made about whether the examinee knows the answer to an item. The paper describes and
illustrates how an extension of a recently proposed latent structure model can be used in conjunction with results in Sathe, Pradhan, and Shah (1980) to estimate upper and lower bounds on Qk. A
method of empirically checking the model is discussed.
• A study of pre-equating based on item response theory
(1982) Bejar, Isaac I.; Wingersky, Marilyn S.
The study reports a feasibility study using item response theory (IRT) as a means of equating the Test of Standard Written English (TSWE). The study focused on the possibility of pre-equating,
that is, deriving the equating transformation prior to the final administration of the test. The three-parameter logistic model was postulated as the response model and its fit was assessed at
the item, subscore, and total score level. Minor problems were found at each of these levels; but, on the whole, the three-parameter model was found to portray the data well. The adequacy of the
equating provided by IRT procedures was investigated in two TSWE forms. It was concluded that pre-equating does not appear to present problems beyond those inherent to IRT-equating.
• Choice of test model for appropriateness measurement
Several theoretical and empirical issues that must be addressed before appropriateness measurement can be used by practitioners are investigated in this paper. These issues include selection of a
latent trait model for multiple-choice tests, selection of a particular appropriateness index, and the sample size required for parameter estimation. The three-parameter logistic model is found to
provide better detection of simulated spuriously low examinees than the Rasch model for the Graduate Record Examination, Verbal Section. All three appropriateness indices proposed by Levine and
Rubin (1979) provide good detection of simulated spuriously low examinees but poor detection of simulated spuriously high examinees. A reason for this discrepancy is provided.
• Academic achievement and individual differences in the learning processes of basic skills students in the university
This study analyzed the relationship between the academic achievement and information-processing habits of basic skills students in the university. Academic achievement was measured by
grade-point average (GPA) and American College Testing Program Assessment (ACT) scores. Information-processing habits were determined by the Inventory of Learning Processes (ILP). There was no
significant difference in the ILP profiles of high- and low-achieving basic skills students, whether they were grouped by ACT or GPA. Study Methods was the only scale that showed a significant
correlation with academic achievement-namely, a negative correlation with ACT. A path analysis indicated that the effect of Study Methods on GPA is indirect, as mediated by ACT. Since ACT
assesses prior achievement (i.e., high-school performance), it appears that learning style has an effect prior to college entrance. Basic skills students with low ACT scores tend to substitute
conventional study methods for deep elaborative processing, but these students are low achievers in college, as indicated by their GPA. A multivariate analysis of variance showed no significant
sex or ethnic differences in information-processing habits. Evidently, a low achiever is a low achiever regardless of sex or ethnicity.
• Comparison of factor analytic results with two-choice and seven-choice personality item formats
(1982) Comrey, Andrew L.; Montag, I.
A translated version of the Comrey Personality Scales (CPS) using a two-choice item format was administered to 159 male applicants for a motor vehicle operator’s license in Israel. Total scores
were computed for the 40 homogeneous item subgroups that define the eight personality factors in the taxonomy underlying the CPS. Factor analysis of the intercorrelations among these 40
subvariables resulted in substantial replication of factors found in a previous study employing a seven-choice item format. On the average, higher intercorrelations among subvariables measuring
the same factor and higher factor loadings were obtained for the seven-choice item format results. These findings suggest a superiority for the seven-choice over the two-choice item format for
personality inventories.
• An application of singular value decomposition to the factor analysis of MMPI items
(1982) Reddon, John R.; Marceau, Roger; Jackson, Douglas N.
Several measurement problems were identified in the literature concerning the fidelity with which the Minnesota Multiphasic Personality Inventory (MMPI) assesses psychopathology. A
straightforward solution to some of these problems is to develop an orthogonal basis in the MMPI; however, there are 550 items, and this is a cumbersome task even for modern computers. The method
of alternating least squares was employed to yield a singular value decomposition of these measures on 682 prison inmates. Unsystematic or sample-specific error variance was minimized through a
two-stage least squares split thirds replication design. The relative explanatory power of models of psychopathology based on external, internal, naive, and construct-oriented measurement
strategies is discussed.
• Identifying test items that perform differentially in population subgroups: A partial correlation index
(1982) Stricker, Lawrence J.
Verbal items on the GRE Aptitude Test were analyzed for race (white vs. black) and sex differences in their functioning, using a new procedure-item partial correlations with subgroup standing
(race or sex), controlling for total score-as well as two standard methods-comparisons of subgroups’ item characteristic curves and item difficulties. The partial correlation index agreed with
the item characteristic curve index in the proportions of items identified as performing differentially for each race and sex. These two indexes also agreed in the particular items that they
identified as functioning differentially for the sexes, but not in the items that they identified as performing differently for the races. The partial correlation index consistently disagreed
with the item difficulty index in the proportions of items identified as functioning differentially and in the particular items involved. The items identified by the partial correlation index as
performing differentially, like the items identified by the other indexes, generally did not differ in type or content from items not so identified, with one major exception: this index
identified items with female content as functioning differently for the sexes.
• Recovery of two- and three-parameter logistic item characteristic curves: A monte carlo study
(1982) Hulin, Charles L.; Lissak, Robin I.; Drasgow, Fritz
This monte carlo study assessed the accuracy of simultaneous estimation of item and person parameters in item response theory. Item responses were simulated using the two- and three-parameter
logistic models. Samples of 200, 500, 1,000, and 2,000 simulated examinees and tests of 15, 30, and 60 items were generated. Item and person parameters were then estimated using the appropriate
model. The root mean squared error between recovered and actual item characteristic curves served as the principal measure of estimation accuracy for items. The accuracy of estimates of ability
was assessed by both correlation and root mean squared error. The results indicate that minimum sample sizes and test lengths depend upon the response model and the purposes of an investigation.
With item responses generated by the two-parameter model, tests of 30 items and samples of 500 appear adequate for some purposes. Estimates of ability and item parameters were less accurate in
small sample sizes when item responses were generated by the three-parameter logistic model. Here samples of 1,000 examinees with tests of 60 items seem to be required for highly accurate
estimation. Tradeoffs between sample size and test length are apparent, however.
• Communication apprehension: An assessment of Australian and United States data
(1982) Hansford, B. C.; Hattie, John
This study assessed the claims of unidimensionality for a measure of oral communication apprehension (Personal Report of Communication Apprehension). Eighteen independent samples, drawn from
Australian and United States sources, were used, and comparisons were made between the samples. Although similarities were found among the data sets with respect to internal consistency, frequency
distributions, and item-total correlations, the claim of unidimensionality in the measure was rejected. It was also found that there were no overall differences between Australian and United
States samples, no sex differences, and no age differences.
12th Class Chemistry MCQs With Answers Chapter 7 | 2nd Year Chemistry MCQs with Answers PDF
Chapter No 7
Fundamental Principles Of Organic Chemistry
1. The state of hybridization of the carbon atom in methane is:
1. sp^3
2. sp^2
3. sp
4. dsp^2
2. The chemist who synthesized urea from ammonium cyanate was:
1. Berzelius
2. Kolbe
3. Wöhler
4. Lavoisier
3. Which set of hybrid orbitals has a planar triangular shape:
1. sp^3
2. sp
3. sp^2
4. dsp^2
4. The reaction C8H18 → C3H6 + fragments is an example of:
1. Catalytic oxidation
2. Isomerization
3. Synthesis
4. Cracking
5. The component of petroleum with an octane number of 100 is:
1. Neo-octane
2. n-Hexane
3. Neo-pentane
4. Iso-octane
6. The concept of octane number was introduced by:
1. Kekule
2. Edgar
3. Wöhler
4. Dalton
7. Which of the following forms of coal has the maximum percentage of carbon:
1. Peat
2. Bituminous
3. Sub-bituminous
4. Anthracite
8. Which of the following is a use of light naphtha:
1. Non-polar solvent
2. Lubricant
3. Roofing
4. Polar solute
9. The carbon atom of HCHO (methanal, formaldehyde) is:
1. sp hybridized
2. sp^2 hybridized
3. sp^3 hybridized
4. Not hybridized
10. A double bond consists of:
1. Two sigma bonds
2. One sigma and one pi bond
3. One sigma and two pi bonds
4. Two pi bonds
11. Alkanoic acid is another name for:
1. Aldehyde
2. Ketones
3. Carboxylic acid
4. Alcohols
12. An isomer of ethanol is:
1. Dimethyl ether
2. Diethyl ether
3. Ethylene glycol
4. Methanol
13. Ethers show the phenomenon of:
1. Position isomerism
2. Functional group isomerism
3. Metamerism
4. Cis-trans isomerism
14. The isomerism exhibited by ethanol and dimethyl ether is:
1. Position isomerism
2. Metamerism
3. Functional group isomerism
4. Chain isomerism
15. Which one of the following does not show geometric isomerism:
1. ClHC = CHCl
2. H3C−HC = CH−CH3
3. H2C = CHCl
4. BrClC = CClBr
16. In t-butyl alcohol, the tertiary carbon is bonded to:
1. Two H-atoms
2. Three H-atoms
3. One H-atom
4. No H-atom
17. Which of the following has zero dipole moment:
1. 2-methyl-1-propene
2. 1-butene
3. Trans-2-butene
4. Cis-2-butene
18. Fractional distillation of petroleum yields only about ⎯⎯⎯⎯ of gasoline.
1. 40%
2. 20%
3. 70%
4. 10%
19. Which of the following compounds will exhibit cis-trans isomerism:
1. Butanal
2. 2-butyne
3. 2-butanol
4. 2-butene
SUPERSTRINGS! M-theory
M-theory is described at low energies by an effective theory called 11-dimensional supergravity. This theory has membrane and 5-branes as solitons, but no strings. How can we get the strings that
we've come to know and love from this theory? We can compactify the 11-dimensional M-theory on a small circle to get a 10-dimensional theory. If we take a membrane with the topology of a torus and
wrap one of its dimensions on this circle this will become a closed string! In the limit where the circle becomes very small we recover the Type IIA superstring.
How do we know that M-theory on a circle gives the IIA superstring, and not the IIB or Heterotic superstrings? The answer to this question comes from a careful analysis of the massless fields that we
get upon compactification of 11-dimensional supergravity on a circle. Another easy check is that we can find an M-theory origin for the D-brane states unique to the IIA theory. Recall that the IIA
theory contains D0,D2,D4,D6,D8-branes as well as the NS fivebrane. The following table summarizes the situation:
│M-theory on circle │IIA in 10 dimensions │
│Wrap membrane on circle │IIA superstring │
│Shrink membrane to zero size │D0-brane │
│Unwrapped membrane │D2-brane │
│Wrap fivebrane on circle │D4-brane │
│Unwrapped fivebrane │NS fivebrane │
The two that have been left out are the D6 and D8-branes. The D6-brane can be interpreted as a "Kaluza Klein Monopole" which is a special kind of solution to 11-dimensional supergravity when it's
compactified on a circle. The D8-brane doesn't really have clear interpretation in terms of M-theory at this point in time; that's a topic for current research!
We can also get a consistent 10-dimensional theory if we compactify M-theory on a small line segment. That is, take one dimension (the 11-th dimension) to have a finite length. The endpoints of the
line segment define boundaries with 9 spatial dimensions. An open membrane can end on these boundaries. Since the intersection of the membrane and a boundary is a string, we see that the 9+1
dimensional worldvolume of each boundary can contain strings which come from the ends of membranes. As it turns out, in order for anomalies to cancel in the supergravity theory, we also need each
boundary to carry an E8 gauge group. Therefore as we take the space between the boundaries to be very small we're left with a 10-dimensional theory with strings and an E8 x E8 gauge group. This is
the E8 x E8 heterotic string!
So given this new 11-dimensional phase of string theory, and the various dualities between string theories, we're led to the very exciting prospect that there is only a single fundamental
underlying theory -- M-theory. The five superstring theories and 11-D Supergravity can be thought of as classical limits. Previously, we've tried to deduce their quantum theories by expanding around
these classical limits using perturbation theory. Perturbation theory has its limits, so by studying non-perturbative aspects of these theories using dualities, supersymmetry, etc., we've come to the
conclusion that there only seems to be one unique quantum theory behind it all. This uniqueness is very appealing, and much of the work in this field will be directed toward formulating the full
quantum M-theory.
WeBWorK Standalone Renderer
Translate the following into a PDE with boundary conditions.
A horizontal string of length L is secured at both ends. The string is initially undisplaced but it has initial vertical velocity $\sin\!\left(\frac{\pi x}{L}\right)$.
The PDE is
The boundary values / initial conditions are:
Left end is secured:
Right end is secured:
Initially undisplaced:
Initial velocity is $\sin\!\left(\frac{\pi x}{L}\right)$:
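Read literally, and assuming the standard wave-equation model for a vibrating string with an (unspecified) wave speed c, the setup above can be written as:

```latex
\begin{aligned}
&\text{PDE:} && u_{tt} = c^2\, u_{xx}, \qquad 0 < x < L,\; t > 0,\\
&\text{Left end secured:} && u(0, t) = 0,\\
&\text{Right end secured:} && u(L, t) = 0,\\
&\text{Initially undisplaced:} && u(x, 0) = 0,\\
&\text{Initial velocity:} && u_t(x, 0) = \sin\!\left(\tfrac{\pi x}{L}\right).
\end{aligned}
```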
Quadratic Surfaces – Hyperboloid of Two Sheets
The last quadratic surface I printed was a hyperboloid of two sheets.
For the hyperboloid of two sheets I created the entire object from scratch in Cinema 4D using the same process I used to create the cone and other similar objects. For this surface I used the same
formula spline as the hyperboloid of one sheet \(x(t)=cosh(t), y(t)=sinh(t), z(t)=0\), and then rotated it to the correct orientation. I then used the lathe tool and rotated this spline 180 degrees
since this was all it needed. Because of this I needed to use only 30 rotation segments for a total of 60 all around the object.
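The lathe construction described above can also be sketched numerically. The snippet below is my own sketch (not the author's Cinema 4D file): it rotates the profile curve (r, z) = (sinh t, cosh t) about the z-axis, which traces one sheet of z² − x² − y² = 1, using 60 rotation segments to loosely mirror the segment count mentioned above. The axis names and the exact form of the equation are my assumptions.

```python
import math

def hyperboloid_sheet(n_t=5, n_theta=60, t_max=2.0, sheet=+1):
    """Sample one sheet of z^2 - x^2 - y^2 = 1 by rotating the
    profile curve (r, z) = (sinh t, cosh t) about the z-axis."""
    points = []
    for j in range(n_t + 1):
        t = t_max * j / n_t
        r, z = math.sinh(t), sheet * math.cosh(t)
        for k in range(n_theta):
            theta = 2 * math.pi * k / n_theta
            points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

# Both sheets together form the full hyperboloid of two sheets.
pts = hyperboloid_sheet() + hyperboloid_sheet(sheet=-1)
# Every sampled point lies on the surface, since cosh^2 t - sinh^2 t = 1.
assert all(abs(z * z - x * x - y * y - 1) < 1e-9 for x, y, z in pts)
```

Feeding the sampled rings to any mesh-building tool reproduces the lathe result.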
Here is a picture of the final object! It can be found on Thingiverse here.
How to remove leading zeroes and trailing spaces in cobol?
To remove leading zeroes and trailing spaces in COBOL, you can use the following techniques:
1. To remove leading zeroes, you can use the INSPECT statement with the TALLYING phrase. Here's an example:
01 WS-NUMBER     PIC X(5) VALUE '00045'.
01 WS-NEW-NUMBER PIC X(5).
01 WS-ZEROES     PIC 99   VALUE 0.

*> Count the leading zeroes, then take everything after them
INSPECT WS-NUMBER TALLYING WS-ZEROES FOR LEADING '0'
IF WS-ZEROES = LENGTH OF WS-NUMBER
    MOVE '0' TO WS-NEW-NUMBER    *> all zeroes: keep a single zero
ELSE
    MOVE WS-NUMBER (WS-ZEROES + 1:) TO WS-NEW-NUMBER
END-IF.
In this example, the INSPECT statement counts the leading zero characters in WS-NUMBER and stores the count in WS-ZEROES. The reference modification (WS-ZEROES + 1:) then takes the rest of the field, starting at the first non-zero character, and MOVE copies it to WS-NEW-NUMBER. The IF guards the all-zero case, where the reference modification would otherwise start past the end of the field.
2. To remove trailing spaces, you can use the intrinsic TRIM function (available in newer COBOL standards and in common compilers such as GnuCOBOL). Here's an example:

01 WS-TEXT     PIC X(20) VALUE 'COBOL PROGRAM'.
01 WS-NEW-TEXT PIC X(20).

MOVE FUNCTION TRIM(WS-TEXT) TO WS-NEW-TEXT.

In this example, FUNCTION TRIM removes the surrounding spaces from WS-TEXT (here, only the trailing padding of the fixed-length field). Note that because WS-NEW-TEXT is itself fixed-length, the MOVE pads the result with spaces on the right again; trimming matters when the result is passed to STRING, compared, or measured with FUNCTION LENGTH. On compilers without FUNCTION TRIM, you can INSPECT the field produced by FUNCTION REVERSE, tally its leading spaces, and use reference modification instead.
15 Extraordinary Facts About Ohm’S Law
Source: Theengineeringmindset.com
Ohm’s Law is a fundamental concept in the field of physics that helps us understand the relationship between voltage, current, and resistance in an electrical circuit. It was formulated by the German
physicist Georg Simon Ohm in the early 19th century and has since become a cornerstone of electrical engineering and electronics. Understanding Ohm’s Law is crucial for anyone working with
electricity, from engineers and technicians to hobbyists and DIY enthusiasts.
In this article, we will explore 15 extraordinary facts about Ohm’s Law that will not only deepen your understanding of this fundamental concept but also unveil some fascinating insights. Whether
you’re a student delving into the world of physics or simply curious about how electricity works, these facts will shed light on the significance and applications of Ohm’s Law. So, let’s dive in and
discover the intriguing aspects of this essential principle in electrical engineering.
Key Takeaways:
• Ohm’s Law is like a magical equation that helps us understand and calculate electricity. It shows how voltage, current, and resistance are connected, and it’s super important for designing safe
and efficient circuits.
• Ohm’s Law is not just for scientists and engineers. It’s like a superpower that empowers everyone to understand and stay safe around electricity. It’s like having a secret code to unlock
electrical mysteries!
Ohm’s Law defines the relationship between voltage, current, and resistance.
Ohm’s Law, named after the German physicist Georg Simon Ohm, states that the current flowing through a conductor between two points is directly proportional to the voltage across the two points, and
inversely proportional to the resistance. This fundamental law provides the basis for understanding and calculating electric circuits.
The formula for Ohm’s Law is V = IR.
Ohm’s Law can be mathematically expressed as V = IR, where V represents voltage in volts, I represents current in amperes, and R represents resistance in ohms. This simple equation allows us to
calculate any one of the three variables if the other two are known.
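As a quick illustration (my own sketch, not part of the original article), the three rearrangements of V = IR can be wrapped in one small helper:

```python
def ohms_law(v=None, i=None, r=None):
    """Solve V = I * R for whichever of voltage (volts), current
    (amperes), or resistance (ohms) is left as None."""
    if v is None:
        return i * r          # V = I * R
    if i is None:
        return v / r          # I = V / R
    if r is None:
        return v / i          # R = V / I
    raise ValueError("leave exactly one of v, i, r as None")

print(ohms_law(i=2, r=6))    # voltage across the resistor: 12
print(ohms_law(v=12, r=6))   # current through it: 2.0
print(ohms_law(v=12, i=2))   # resistance: 6.0
```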
Ohm’s Law is applicable to both DC and AC circuits.
Whether it’s a direct current (DC) or alternating current (AC) circuit, Ohm’s Law can be applied to determine the relationship between voltage, current, and resistance. This makes it a versatile
principle that is used in various electrical and electronic applications.
The resistance of a material depends on its physical properties.
Ohm’s Law helps us understand that the resistance of a material is determined by its physical properties, such as length, cross-sectional area, and temperature. Materials with higher resistivity,
like insulators, hinder the flow of electric current more than materials with lower resistivity, like conductors.
Ohm’s Law is vital in designing and analyzing electrical circuits.
Engineers and technicians heavily rely on Ohm’s Law when designing and analyzing electrical circuits. By applying the principles of Ohm’s Law, they can calculate the values of voltage, current, and
resistance, ensuring the safety and efficiency of the circuit.
Ohm’s Law can be visualized using a simple water pipe analogy.
Understanding Ohm’s Law can be made simpler by visualizing it using a water pipe analogy. In this analogy, voltage is equivalent to water pressure, current is equivalent to water flow rate, and
resistance is equivalent to the pipe’s diameter. Just as a narrower pipe restricts the water flow, higher resistance hampers the flow of electric current.
Ohm’s Law enables the calculation of power in a circuit.
By combining Ohm’s Law with the power formula P = IV, where P represents power in watts, we can calculate the amount of power consumed or produced in an electric circuit. This is crucial for
determining the energy efficiency and capacity requirements of electrical devices.
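Combining P = IV with Ohm's Law gives the equivalent forms P = I²R and P = V²/R. The sketch below (the example values are mine, not the article's) checks that all three forms agree:

```python
def power(v=None, i=None, r=None):
    """Electrical power in watts from any two of voltage, current,
    and resistance, using P = I*V and Ohm's Law substitutions."""
    if v is not None and i is not None:
        return v * i          # P = V * I
    if i is not None and r is not None:
        return i * i * r      # P = I^2 * R
    if v is not None and r is not None:
        return v * v / r      # P = V^2 / R
    raise ValueError("supply two of v, i, r")

# A 12 V supply driving 2 A through a 6 ohm load dissipates 24 W,
# whichever pair of quantities we start from:
print(power(v=12, i=2), power(i=2, r=6), power(v=12, r=6))
```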
Ohm’s Law can be used to solve complex circuit problems.
When faced with complex electrical circuit problems, Ohm’s Law serves as a foundational tool for solving them. By applying the appropriate formulas and principles, engineers and technicians can
analyze circuit behavior, optimize performance, and troubleshoot any issues that may arise.
Ohm’s Law is not limited to linear circuits.
Although Ohm’s Law is most commonly applied to linear circuits where the current is directly proportional to the voltage, it can also be used in non-linear circuits. In these cases, the
voltage-current relationship may not be strictly linear, but Ohm’s Law can still provide valuable insights into the behavior of the circuit.
Ohm’s Law is crucial in understanding circuit safety.
Understanding Ohm’s Law is vital for ensuring circuit safety. By calculating the current flowing through a circuit and comparing it to the maximum ratings of components, engineers can prevent
overloading and avoid potential hazards such as overheating or equipment failure.
Ohm’s Law is universally applicable to electronic components.
From resistors and capacitors to diodes and transistors, Ohm’s Law applies universally to all electronic components. It allows engineers to determine the voltage drops, current flows, and power
dissipation across these components, facilitating proper circuit design and optimization.
Ohm’s Law is fundamental in the study of electrical engineering.
For students studying electrical engineering or related fields, Ohm’s Law is one of the foundational concepts they learn. It serves as a building block for more advanced topics and theories, making
it an essential part of their academic journey.
Ohm’s Law provides a systematic approach to troubleshooting circuits.
When faced with a malfunctioning circuit, technicians can rely on Ohm’s Law to systematically troubleshoot and diagnose the problem. By measuring voltages and currents at different points in the
circuit, they can identify faulty components or connections and restore proper functionality.
Ohm’s Law is widely used in the field of electronics.
From designing microchips to programming electronic devices, Ohm’s Law is widely used in the field of electronics. The ability to accurately calculate and manipulate voltage, current, and resistance
is instrumental in developing innovative technologies that shape our modern world.
Understanding Ohm’s Law empowers individuals to be knowledgeable about electrical safety.
By understanding Ohm’s Law, individuals can make informed decisions when it comes to electrical safety. Being able to calculate or estimate the current flowing through a circuit allows them to
identify potential risks and take appropriate precautions to prevent electrical accidents.
In conclusion, Ohm’s Law is a fundamental concept in physics that provides a deep understanding of the relationship between voltage, current, and resistance. The law, formulated by the German
physicist Georg Simon Ohm in the 19th century, continues to be of great importance in the field of electrical engineering. By applying Ohm’s Law, engineers and scientists can calculate and predict the behavior of electrical circuits, enabling the design and optimization of various electronic devices and systems. This knowledge is crucial in fields such as power generation, telecommunications, and electronics. Understanding Ohm’s Law allows us to appreciate the remarkable simplicity and elegance of its formula, V = IR. It provides a solid foundation for the study and application of electrical principles, and it is essential for both beginners and experts alike. So, whether you are an aspiring physicist, an electronics enthusiast, or simply curious about the world around us, delving into
Ohm’s Law will unlock a whole new realm of knowledge and possibilities.
1. What is Ohm’s Law?
Ohm’s Law is a basic principle in physics that describes the relationship between voltage, current, and resistance in an electrical circuit. It states that the current flowing through a conductor is
directly proportional to the voltage applied across it and inversely proportional to the resistance of the conductor.
2. How is Ohm’s Law expressed mathematically?
Ohm’s Law is mathematically expressed with the formula V = IR, where V represents the voltage, I represents the current, and R represents the resistance.
3. What are the units of measurement for voltage, current, and resistance?
Voltage is measured in volts (V), current is measured in amperes (A), and resistance is measured in ohms (Ω).
4. Can Ohm’s Law be applied to all types of electrical circuits?
Yes, Ohm’s Law can be applied to both direct current (DC) and alternating current (AC) circuits. However, for AC circuits, additional factors such as reactance and impedance need to be considered.
5. What are some practical applications of Ohm’s Law?
Ohm’s Law is widely used in electrical engineering and electronics. It is crucial for the design and analysis of circuits, the calculation of power consumption, and the selection of appropriate
components for various devices and systems.
How do you convert 1/6 into a percent and decimal? | HIX Tutor
Answer 1
Let's convert to decimal first, and I'll explain why later.
To convert 1/6 to decimal, just divide 1 by 6! (That's what the fraction is showing; a fraction = numerator ÷ denominator)
So 1 ÷ 6 = 0.1666… (0.16 recurring) is 1/6 in decimal form!
Note that per in percent means each and cent means 100.
Therefore, percentage is just the quantity out of 100! Simple!
So, multiply the decimal form of 1/6 by 100 (that is why I did decimal first)
Answer 2
To convert 1/6 into a percent, you multiply by 100.
1/6 * 100 = 16.666... %
To convert 1/6 into a decimal, you simply divide the numerator by the denominator.
1 ÷ 6 = 0.166666...
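Both conversions can be checked mechanically. The snippet below is a sketch of mine using Python's fractions module (not part of either answer above):

```python
from fractions import Fraction

f = Fraction(1, 6)

decimal = f.numerator / f.denominator   # divide numerator by denominator
percent = decimal * 100                 # "per cent" = out of 100

print(decimal)              # 0.16666666666666666
print(round(percent, 3))    # 16.667
```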
Basic Differential Calculus | Hire Someone To Do Calculus Exam For Me
Basic Differential Calculus and Monotime Preconditionals** **George G. Lifson** Department of Mathematics, University of California, Berkeley, CA 94720, USA [**Abstract**]{} Efficient definitions of
abstract mathematical functions, such as monotonic functions and monotonic operators, are well known. In this paper, we are given a non-trivial collection of definitions for Monotonely Operators
(with and without monotonic functions). This collection reads as follows: $$f_i(x)=\sum_{j=1}^n\lambda_{ij}x^j,\quad\forall i\in \ker i \quad i\neq n;$$ $$\square_{j-\lambda}f_i(x)=\sum_{j=1}^\lambda
\alpha_j x^j, \quad\forall i\in\ker n \quad i\neq n;$$ $$d_{j-\lambda}f_i=\sum_{j=1}^j\lambda_{ij}\alpha_j, \quad\forall i\in M.$$ In this paper, we will generalize those definitions for Monotonely
Operator and a function defined on a non-empty subdomain of the unit square. In particular, we will recall the definition of a uniform “partial monotone symbol” in terms of a partial monotone symbol,
namely the partial definition of a partial monotone symbol. Then, we will recall the following result in the sense of calculus. [**Definition 1**]{}[email protected] [[**Bijectivity of left
eigenspace for monotonic functions**]{} $$M=\bigoplus_{i
It is then described as given by the formula function in P. To do this, a problem in C is defined by computing a maximum size function and a global minimum function within a variable. Basic Calculus
The Calculus problem of a function can be expressed as P(f) = (C1)p(f_1(f_2(f_3))(f_2(f_3))'(f_3) where: (C1) or =1 P(f) = is a maximum-sample procedure with respect to f. With a vector of size: size
= {2} if f is an element of the underlying P(f) variable then: (P6) will define another set of equations, then C (a third component) states that P(f) is a function. (P2) Iff and =1 then the extension
gives rise to some function B that can be used. (P6) will take b + 1 elements, p(f)(*) = C where: p(f)(*) = max.size = {1} max.size = {2} max.size = {3} p(f)(*) = C A program is repeated without the
occurrence of any inference steps. When the input data is only limited by the standard definition for a function such as the formula, the problem is solved. Since a maximum-sample procedure for a
function is its maximum number of elements, and for a value of size l / height = 0, the number of elements needed can be large. Therefore the problem can be solved up to the next maximum-sample
procedure without any reduction of the molecule size. The corresponding extension can be computed from the sum of the minimum elements to the maximum number of element for the formula function. Basic
Calculus Below are a few more examples of the formulas and extensions. The new definition for function A (given in Algorithm 64 of The Analysis Workshop, 1999) is also used as a proposed extension to
the Calculus Problem, using the formula | C < -> p | With a internet n of size: size = {1} for each element: (0,0) [n] p( – > a)/( =2) However, the new definition is more complex than the former,
requires the circulation and transformation of the data, and requires the transformation of n. The new definition for a third component is also used as a proposed extension to the Calculus Problem,
using the formula | p( ) | With a matrix n of size: size = {2} For each element, the solution must be given by: (n,0) [n] p( =<) [n/2] But from the viewpoint of a new definition for an element of the
P(f) variable, the list of non-zero elements of the solution has become much more complicated than the earlier one-view P( f ), which was only one-view P( the variable will appear in V(), and in fact
it is included in the results of the Calculus Problem from having two elements by using the formulas | p = (Basic Differential Calculus (derivation) {#sec:diff-diff}
------------------------------------------- The proof relies on standard functional calculus. This provides the first result of this sort. \[thm:diffc\] Let $(W,\mathcal{H},\langle\cdot,\cdot\rangle)
^2$ be a tensor structure on a Hilbert space $({\mathbb{C}}^2,Y)$, a locally compact space isometric to a Hilbert space ${\mathbb{C}}^*_{}{(Y)}$ and $$\mathcal{U}={\mathbb{C}}\langle W,B\rangle^2.$$
There are various possible choices of bilinear and positive measures on ${\mathbb{C}}^*_{{(Y)}}$. Note that Theorem \[thm:diffc\] takes advantage of the ${\mathbb{C}}_{{(Y)}}$-stability for the
Schwartz space $W$ of type $(L): \mathcal{W}$ and $D_{\Sigma}\Sigma: Y({\mathbb{C}}_{{(Y)}})\to Y({\mathbb{C}}_{{(Y)}})$ and the Schwartz space $$\mathcal{S}_*({\mathbb{C}}_{{(Y)}}):={\mathbb{R}}^{\
Sigma}\langle W, \mathcal{U} \rangle^2+{\mathbb{R}}^{D_{\Sigma}\Sigma}\langle \mathcal{U}, W\rangle^2,$$ and Conjecture \[conj:positive-compute\] is a proof of its lower bound.
Then it can be shown that $$\label{eqn:diff-bound} \rho_{{(Y)}}({\mathbb{C}}^*)=\max\{|f(w):w\in\mathcal{H},|f'(u)|\leq C\frac{\delta_1}{w},0\leq e\leq 1\}\rho_+=\inf_{w\in\mathcal{H}}\rho_w^2.$$
Using the fact that $\|\xi\|=\|{\mathsf{diam}}W({\mathbb{R}}^2)/|{\mathsf{diam}}W({\mathbb{R}}^2)|\leq \|W\|_0\|y\|_\infty$ and $\|e\|\leq\|W\|_0$, the proof will follow. [$\dagger$]{}[We omit $\
mathcal{P}_{(Y)^*,\Sigma^*}$ because it simplifies the proofs of Corollary \[coro:psis\], Proposition \[prop:diff-qubit\] and Lemma \[lemma:psis-rho\] (see below (We omitted $\mathcal{P}_{(Y)^*,\
Sigma^*}$ for the convenience of the reader )]{}. The proof of Theorem \[thm:diff-diff-diff\] is covered in Example \[example:diff\_diff\_diff\], below. For $\{D_{\Sigma}\Sigma,(W^x), {\widetilde{W}}
(\cdot),{\widetilde{W}}(\cdot)\} \subset W$, suppose that $\rho_{{(Y)}}({\mathbb{C}}^*)=2$, the previous lower bound is lower bound on a normalization constant when the real valued functional is $A$.
[$\dagger$]{}[The case of $\rho_{{(Y)}}({\mathbb{C}}^*)={\widetilde{W}}({\mathbb{C}}^*)=W({\mathbb{C}}^*)$ does not arise because $A|x, | {"url":"https://hirecalculusexam.com/basic-differential-calculus","timestamp":"2024-11-15T02:56:29Z","content_type":"text/html","content_length":"103242","record_id":"<urn:uuid:6e5f9c8c-20d8-43ba-9465-f226d15d180c>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00638.warc.gz"} |
RRB JE 22nd May 2019 Shift-3
Who among the following has become the first ever captain to win 100 matches in IPL history?
The main function of vitamin K is in?
The LCM of two numbers which are in the ratio 2 : 3 is 48. What is their sum?
If Rs.64 amounts to Rs.83.20 in 2 years, what will Rs.86 amount to in 4 years at the same rate per cent Simple Interest per annum?
P alone can do a job in 18 days. P and Q working together can do it in 15 days. They started to work together.After 10 days Q left, and P alone has to complete the job. In how many more days can P
complete the job?
Find the group of wrong alphabets in the following series.
M, C, P, F, T, G, Y
Esters have _________ smell.
A person borrows Rs.5000 for 2 years at 4% per annum Simple Interest. He immediately lends it to another person at 6.25% per annum for 2 years. Find his gain in the transaction per year.
Which of the following Ministries won the platinum of Web Ratna Award in Digital India Awards 2018?
Which of the following is NOT a multi-seeded fruit? | {"url":"https://cracku.in/rrb-je-22nd-may-2019-shift-3-question-paper-solved?page=10","timestamp":"2024-11-08T12:23:34Z","content_type":"text/html","content_length":"155699","record_id":"<urn:uuid:b3414cac-dac1-4a4f-85e3-481095b070c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00538.warc.gz"} |
13.3 Math and Medicine - Contemporary Mathematics | OpenStax
Learning Objectives
After completing this section, you should be able to:
1. Compute the mathematical factors utilized in concentrations/dosages of drugs.
2. Describe the history of validating effectiveness of a new drug.
3. Describe how mathematical modeling is used to track the spread of a virus.
The pandemic that rocked the world starting in 2020 turned attention to finding a cure for the Covid-19 strain into a world race and dominated conversations from major news channels to households
around the globe. News reports decreeing the number of new cases and deaths locally as well as around the world were part of the daily news for over a year and progress on vaccines soon followed. How
was a vaccine able to be found so quickly? Is the vaccine safe? Is the vaccine effective? These and other questions have been raised through communities near and far and some remain debatable.
However, we can educate ourselves on the foundations of these discussions and be more equipped to analyze new information related to these questions as it becomes available.
Concentrations and Dosages of Drugs
For any drug, the recommended dosage varies based on several factors such as age, weight, and degree of illness of a person. Hospitals and medical dispensaries do not stock every possible
needed concentration of medicines. Drugs that are delivered in liquid form for intravenous (IV) methods in particular can be easily adjusted to meet the needs of a patient. Whether administering
anesthesia prior to an operation or administering a vaccine, calculation of the concentration of a drug is needed to ensure the desired amount of medicine is delivered.
The formula to determine the volume needed of a drug in liquid form is a relatively simple formula. The volume needed is calculated based on the required dosage of the drug with respect to the
concentration of the drug. For drugs in liquid form, the concentration is given as the amount of the drug per volume of the solution it is suspended in, commonly measured in g/mL or mg/mL.
Suppose a doctor writes a prescription for 6 mg of a drug, which a nurse calculates when retrieving the needed prescription from their secure pharmaceutical storage space. On the shelves, the drug is
available in liquid form as 2 mg per mL. This means that 1 mg of the drug is found in 0.5 mL of the solution. Multiplying 6 mg by 0.5 mL yields 3 mL, which is the volume of the prescription per
single dose.
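The dosage arithmetic above can be wrapped in a small helper; this is a sketch for illustration (the function name is ours, not from the text):

```python
# Volume of a liquid drug needed for one dose, from the prescribed amount and
# the stock concentration (mg per mL). Names here are illustrative only.
def dose_volume_ml(dose_mg, concentration_mg_per_ml):
    return dose_mg / concentration_mg_per_ml

# Worked example from the text: 6 mg prescribed, stock at 2 mg/mL
print(dose_volume_ml(6, 2))  # 3.0 mL per dose
```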
$\text{Volume needed} = (\text{medicine dosage required})(\text{weight of drug by volume})$.
A common calculation for the weight of a liquid drug is measured in grams of a drug per 100 mL of solution and is also called the percentage weight by volume measurement and labeled as % w/v or
simply w/v.
Note that the units for a desired dose of a drug and the units for a solution containing the drug or pill form of the drug must be the same. If they are not the same, the units must first be
converted to be measured in the same units.
Suppose you visit your doctor with symptoms of an upset stomach and unrelenting heartburn. One possible recourse is sodium bicarbonate, which aids in reducing stomach acid.
Calculating the Quantity in a Mixture
How much sodium bicarbonate is there in a 250 mL solution of 1.58% w/v sodium bicarbonate?
$1.58\%\ \text{w/v} = 1.58\ \text{g}$ sodium bicarbonate in 100 mL. If there is 250 mL of the solution, we have 2.5 times as much sodium bicarbonate as in 100 mL. Thus, we multiply 1.58 by 2.5 to yield 3.95 g sodium bicarbonate in 250 mL solution.
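The % w/v arithmetic follows the same pattern; a small sketch (hypothetical helper name):

```python
# Grams of solute in a solution, given a percentage weight-by-volume (% w/v).
# % w/v means grams per 100 mL, so scale by (volume / 100). Illustrative only.
def grams_from_wv(percent_wv, volume_ml):
    return percent_wv * volume_ml / 100

# Worked example from the text: 1.58% w/v sodium bicarbonate in 250 mL
print(round(grams_from_wv(1.58, 250), 2))  # 3.95 g
```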
Calculating the Quantity of Pills Needed
A doctor prescribes 25.5 mg of a drug to take orally per day and pills are available in 8.5 mg. How many pills will be needed each day?
The prescription and the pills are in the same units which means no conversions are needed. We can divide the units of the drug prescribed by the units in each pill: $25.5/8.5=3$. So, 3 pills will be needed each day.
Calculating the Drug Dose in Milligrams, Based on Patient Weight
A patient is prescribed 2 mg/kg of a drug to be delivered intramuscularly, divided into 3 doses per day. If the patient weighs 45 kg, how many milligrams of the drug should be given per dose?
Step 1: Calculate the total daily dose of the drug based on the patient’s weight (measured in kilograms):
$(2\ \text{mg/kg})(45\ \text{kg}) = 90\ \text{mg}$
Step 2: Divide the total daily dose by the number of doses per day:
$90\ \text{mg}/3 = 30\ \text{mg}$
The patient should receive 30 mg of the drug in each dose.
Note that the units for a patient’s weight must be compatible with the units used in the medicine measurement. If they are not the same, the units must first be converted to be measured in the same units.
Calculating the Drug Dose in Milliliters, Based on Patient Weight
A patient is prescribed 2 mg/kg of a drug to be delivered intramuscularly, divided into 3 doses per day. If the drug is available in 20 mg/mL and the patient weighs 60 kg, how many milliliters of the
drug should be given per dose?
Step 1: Calculate the total daily dose of the drug (measured in milligrams) based on the patient’s weight (measured in kilograms):
$(2\ \text{mg/kg})(60\ \text{kg}) = 120\ \text{mg}$
Step 2: Calculate the volume in each dose:
$(120\ \text{mg daily total})/(3\ \text{doses a day}) = 40\ \text{mg per dose}$
Step 3: Calculate the volume based on the strength of the stock:
$(\text{prescribed dose needed})/(\text{stock dose}) = \text{volume}; \quad (40\ \text{mg per dose})/(20\ \text{mg/mL}) = 2\ \text{mL}$
The patient should receive 2 mL of the stock drug in each dose.
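Putting the three steps together (hypothetical helper name, for illustration):

```python
# Per-dose volume for a weight-based prescription: total daily mg from the
# patient's weight, split across doses, then converted via stock strength.
def per_dose_volume_ml(mg_per_kg, weight_kg, doses_per_day, stock_mg_per_ml):
    daily_mg = mg_per_kg * weight_kg          # Step 1: total daily dose (mg)
    dose_mg = daily_mg / doses_per_day        # Step 2: mg per dose
    return dose_mg / stock_mg_per_ml          # Step 3: mL per dose

# Worked example from the text: 2 mg/kg, 60 kg patient, 3 doses/day, 20 mg/mL
print(per_dose_volume_ml(2, 60, 3, 20))  # 2.0 mL per dose
```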
Math Statistics from the CDC
The Centers for Disease Control and Prevention (CDC) states that about half the U.S. population in 2019 used at least one prescription drug each month, and about 25% of people used three or more
prescription drugs in a month. The resulting overall collective impact of the pharmaceutical industry in the United States exceeded $1.3 trillion a year prior to the 2020 pandemic.
Validating Effectiveness of a New Vaccine
The process to develop a new vaccine and be able to offer it to the public typically takes 10 to 15 years. In the United States, the system typically involves both public and private participation in
a process. During the 1900s, several vaccines were successfully developed, including the following: polio vaccine in the 1950s and chickenpox vaccine in the 1990s. Both of these vaccines took years
to be developed, tested, and available to the public. Knowing the typical timeline for a vaccine to move from development to administration, it is not surprising that some people wondered how a
vaccine for Covid-19 was released in less than a year’s time.
Lesser known is that research on coronavirus vaccines has been in process for approximately 10 years. Back in 2012, concern over the Middle Eastern respiratory syndrome (MERS) broke out and
scientists from all over the world began working on researching coronaviruses and how to combat them. It was discovered that the foundation for the virus is a spike protein, which, when delivered as
part of a vaccine, causes the human body to generate antibodies and is the platform for coronavirus vaccines.
When the Covid-19 pandemic broke out, Operation Warp Speed, fueled by the U.S. federal government and private sector, poured unprecedented human resources into applying the previous 10 years of
research and development into targeting a specific vaccine for the Covid-19 strain.
Shibo Jiang
Dr. Shibo Jiang, MD, PhD, is co-director of the Center for Vaccine Development at the Texas Children’s Hospital and head of a virology group at the New York Blood Center. Together with his colleagues,
Jiang has been working on vaccines and treatments for a range of viruses and infections including influenzas, HIV, Sars, HPV and more recently Covid-19. His work has been recognized around the world
and is marked with receiving grants amounting to over $20 million from U.S. sources as well as the same from foundations in China, producing patents in the United States and China for his antiviral
products to combat world concerns.
Jiang has been a voice for caution in the search for a vaccine for Covid-19, emphasizing the need for caution to ensure safety in the development and deployment of a vaccine. His work and that of his
colleagues for over 10 years on other coronaviruses paved the way for the vaccines that have been shared to combat the Covid-19 pandemic.
Mathematical Modeling to Track the Spread of a Virus
With a large number of people receiving a Covid-19 vaccine, the concern at this time is how to create an affordable vaccine to reach people all over the world. If a world solution is not found, those
without access to a vaccine will serve as incubators to variants that might be resistant to the existing vaccines.
As we work to vaccinate the world, attention continues with tracking the spread of the Covid-19 and its multiple variants. Mathematical modeling is the process of creating a representation of the
behavior of a system using mathematical language. Digital mathematical modeling plays a key role in analyzing the vast amounts of data reported from a variety of sources such as hospitals and apps on
cell phones.
When attempting to represent an observed quantitative data set, mathematical models can aid in finding patterns and concentrations as well as aid in predicting growth or decline of the system.
Mathematical models can also be useful to determine strengths and vulnerabilities of a system, which can be helpful in arresting the spread of a virus.
The chapter on Graph Theory explores one such method of mathematical modeling using paths and circuits. Cell phones have been helpful in tracking the spread of the Covid-19 virus using apps regulated
by regional government public health authorities to collect data on the network of people exposed to an individual who tests positive for the Covid-19 virus.
Gladys West
Dr. Gladys West is a mathematician and hidden figure with a rich résumé of accomplishments spanning Air Force applications and work at NASA. Born in 1930, West rose and excelled both academically and
in her professional life at a time when Black women were not embraced in STEM positions. One of her many accomplishments is the Global Positioning System (GPS) used on cell phones for driving directions.
West began work as a human computer, someone who performs mathematical calculations by hand. Considering the time and complexity of some calculations, she became involved in programming computers to
crunch computations. Eventually, West created a mathematical model of Earth with detail and precision that made GPS possible, which is utilized in an array of devices from satellites to cell phones.
The next time you tag a photo or obtain driving directions, you are tapping into the mathematical modeling of Earth that West developed.
Consider the following graph (Figure 13.11):
At the center of the graph, we find Alyssa, whom we will consider positive for a virus. Utilizing the technology of phone apps voluntarily installed on each phone of the individuals in the graph,
tracking of the spread of the virus among the 6 individuals that Alyssa had direct contact with can be implemented, namely Suad, Rocio, Braeden, Soren, and Sandra.
Let’s look at José’s exposure risk as it relates to Alyssa. There are multiple paths connecting José with Alyssa. One path includes the following individuals: José to Mikaela to Nate to Sandra to
Alyssa. This path contains a length of 4 units, or people, in the contact tracing line. There are 2 more paths connecting José to Alyssa. A second path of the same length consists of José to Lucia to
Rocio to Braeden to Alyssa. Path 3 is the shortest and consists of José to Lucia to Rocio to Alyssa. Tracking the spread of positive cases in the line between Alyssa and José aids in monitoring the
spread of the infection.
Now consider the complexity of tracking a pandemic across the nation. Graphs such as the one above are not practical to be drawn on paper but can be managed by computer programs capable of computing
large volumes of data. In fact, a computer-generated mathematical model of contact tracing would look more like a sphere with paths on the exterior as well as on the interior. Mathematical modeling
of contact tracing is complex and feasible through the use of technology.
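As a sketch of how software finds these paths, here is a breadth-first search over the contact network, using only the edges named in the paths above (the real Figure 13.11 may contain more connections):

```python
from collections import deque

# Contact network assembled from the paths described in the text
# (only edges mentioned there; the actual figure may contain more).
contacts = {
    "Alyssa": ["Sandra", "Braeden", "Rocio"],
    "Sandra": ["Alyssa", "Nate"],
    "Nate": ["Sandra", "Mikaela"],
    "Mikaela": ["Nate", "José"],
    "Braeden": ["Alyssa", "Rocio"],
    "Rocio": ["Alyssa", "Braeden", "Lucia"],
    "Lucia": ["Rocio", "José"],
    "José": ["Mikaela", "Lucia"],
}

def shortest_path(graph, start, goal):
    # Breadth-first search: the first time we reach `goal`, the path is shortest.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path(contacts, "José", "Alyssa"))  # ['José', 'Lucia', 'Rocio', 'Alyssa']
```

The returned path matches Path 3 in the text, the shortest of the three.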
Using Mathematical Modeling
For the following exercises, use the sample contact tracing graph to identify paths (Figure 13.12).
1. How many people have a path of length 2 from Jeffrey?
2. Find 2 paths between Kayla and Rohan.
3. Find the shortest path between Yara and Kalani. State the length and people in the path.
1. 5 (Lura, Naomi, Kalani, Vega, Yara)
2. Answers will vary. Two possible answers are as follows:
a. Kayla, Jeffrey, Rohan
b. Kayla, Lura, Yara, Lev, Vega, Uma, Kalani, Rohan
3. Length is 4. People in path = Yara, Lev, Vega, Uma, Kalani
Section 13.3 Exercises | {"url":"https://openstax.org/books/contemporary-mathematics/pages/13-3-math-and-medicine","timestamp":"2024-11-14T11:40:51Z","content_type":"text/html","content_length":"446555","record_id":"<urn:uuid:cc8e09ed-c2dc-45ea-817a-98c85529eb8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00054.warc.gz"} |
Uniform Manifold Approximation and Projection- UMAP — Quaternion Identity
Uniform Manifold Approximation and Projection- UMAP
or how a Muggle can perform Math Magic....
I recently had to perform a large amount of dimensionality reduction - & as such needed to consider how to do it - in the end I went with UMAP. It is a relatively new technique so I figured that
putting down some thoughts may be of interest.
Dimensionality reduction
The first thing that I needed to understand is what problem is dimensionality reduction addressing? I like this definition:
Find a low-dimensional representation of the data that captures as much of the information as possible - Air Force Institute of Technology
So quite a lofty goal, despite sounding attractively simple. I should note that there are other uses for dimensionality reduction, including addressing the 'curse of dimensionality'. (Which is perhaps the most Harry Potter-esque term that I've encountered in deep learning yet - us poor Muggles, so many problems visualizing N-D space ...) However, using dimensionality reduction in order to
enable visualization is the main use that I want to focus on, primarily because of my final project and challenges therein. In Deep learning and machine learning, with the complexity of the neural
networks that are being constructed, it's hard to visualize what is happening in the different layers (as an aside, reminded me of the book Flatland, published in 1884). This is not only due to the
dimensionality of layers used, but the complexity of the networks - there's a reason that another name for GoogLeNet is Inception. So for me, the problem that dimensionality reduction is addressing
is "making it easier to visualize the data when reduced to very low dimensions such as 2D or 3D", as per Wikipedia.
So what is UMAP?
In terms of dimensionality reduction, there are a lot of options, among them PCA, spectral embedding, locally linear embedding, t-SNE, UMAP, etc. All of them have pros & cons - but what I wanted to optimize
for in choosing a reduction technique was both robustness of embedding and speed - I think that this graph should show why I considered UMAP the leading contender (PCA, being linear, doesn't extend
itself too well to non-linear structures such as images):
Moreover, I should note that UMAP was invented by Leland McInnes, a fellow Antipodean, so Southern Hemisphere represent... All joking aside, in the words of Leland himself, UMAP:
“is an unsupervised learning technique for dimension reduction/manifold learning. It provides a way to find a simpler representation of your data by reducing the dimensionality. This is similar
to the way clustering is an unsupervised learning technique which tries to find a simpler representation of your data by breaking it up into clusters...Finally it can be used for post-processing
analysis. Suppose you have trained a convolutional neural net for image classification. You can apply dimension reduction to the next to last layer outputs to provide some understanding of the
image space that the network has constructed.”
— Leland McInnes,
Thus, it seemed perfect for the task I had at hand - visualizing multiple dimensions in the 2 that my homo sapiens mind can contemplate. Now, we can get into the specifics of UMAP, but this is
supposed to be a short blog post. This is Lelands simple version of the mechanics (yes, I said simple):
Assume the data is uniformly distributed on the manifold (the U in the name). That assumption can be used to approximate the volume form and (given suitable approximations from local
distributions) thence the metric tensor local to each data point. If we assume that the metric tensor is locally constant we can create (different, incompatible) metric spaces locally for each
data point. We can then push this through David Spivak's Metric-Space/Fuzzy-Simplicial-Set adjunction, or, if you prefer, build a fuzzy open cover (fuzzy as we degrade confidence in our local
metric space) and construct the fuzzy Cech-complex associated to it. This is the M and A part (manifold approximation via a fuzzy simplicial complex).
And so on and so forth. However, for those of us who aren't constantly improving our Erdős number, I suggest the talk that Leland gave at PyData, and his reddit post on the subject, both of which are
targeted towards a larger audience than mathematicians. (He is very responsive on github issues too btw). To give the briefest of biased highlights:
• It is compatible with t-SNE as a drop-in replacement, so you can use code with minimal disruption.
• It preserves more global structure than other metrics (people far more qualified than I have agreed with this point ..)
In addition, UMAP implementation has the potential to be even faster, as it uses optimization performed via a custom stochastic gradient descent. The RAPIDS team from NVIDIA has published a GPU
implementation, and stated a 15x speedup:
I ran a cursory MNIST benchmark based on UMAPs benchmarking suggestions, and times massively improved under RAPIDS - further investigation is definitely warranted.
This is all pretty impressive stuff, but bear in mind it might be a bit bleeding edge, depending on your viewpoint & development context. The UMAP GitHub repo hasn't reached a 1.0 version yet, but
it's very promising, and definitely fulfills a need. So try it out, I'm interested in what others think! | {"url":"https://quaternionidentity.com/blog/uniform-manifold-approximation-and-projection-umap","timestamp":"2024-11-10T22:13:04Z","content_type":"text/html","content_length":"67227","record_id":"<urn:uuid:ee45db26-9148-4653-80ac-21583e1db402>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00418.warc.gz"} |
Computational Geodynamics: Convection Models
\[ \require{color} \newcommand{\dGamma}{\mathbf{d}\boldsymbol{\Gamma}} \newcommand{\erfc}{\mbox{\rm erfc}} \newcommand{\Red}[1]{\textcolor[rgb]{0.7,0.0,0.0}{#1}} \newcommand{\Green}[1]{\textcolor
[rgb]{0.0,0.7,0.0}{ #1}} \newcommand{\Blue}[1]{\textcolor[rgb]{0.0,0.0,0.7}{ #1}} \newcommand{\Emerald}[1]{\textcolor[rgb]{0.0,0.7,0.3}{ #1}} \]
Thermal Convection
Thermal convection describes a process in which a fluid organizes itself into a structured flow pattern on a macroscopic scale to transport energy. Convection may be mechanically driven by
stirring, but more commonly we refer to natural convection in which buoyancy due to a source of heat (and/or compositional variation) induces flow which transports and dissipates this anomalous
The Earth’s interior, on a geological timescale, is a highly viscous fluid which is heated from below by heat escaping from the core, and internally by the decay of radioactive elements. In this regime the mantle transports much of its heat by natural convection.
Critical Rayleigh Number for a layer
Does convection always occur in a layer heated from below ? In principle this would always provide a way to transport additional heat, but how much work would convection have to do in order to
transport this extra heat ? One way to determine the answer is to consider small disturbances to a layer with otherwise uniform temperature and see under what conditions the perturbations grow
(presumably into fully developed convection). This approach allows us to make {\em linear} approximations to the otherwise non-linear equations by dropping the small, high order non-linear terms.
We solve the incompressible flow equations (stream function form, \ref{eq:biharm}) and energy conservation equation in stream function form: $$\nonumber \frac{\partial T}{\partial t} + \left[ -\frac{\partial \psi}{\partial x_2}\frac{\partial T}{\partial x_1} +\frac{\partial \psi}{\partial x_1}\frac{\partial T}{\partial x_2} \right] = \nabla^2 T$$
By substituting throughout for a temperature which is a conductive profile with a small amplitude disturbance, \(\theta \ll 1\), $$\nonumber T = 1 - x_2 + \theta$$ Remember that the equations are non-dimensional so that the layer depth is one, and the temperature drop is one.
The advection term $$\nonumber -\frac{\partial \psi}{\partial x_2}\frac{\partial T}{\partial x_1} +\frac{\partial \psi}{\partial x_1}\frac{\partial T}{\partial x_2} \rightarrow -\frac{\partial \psi}{\partial x_2}\frac{\partial \theta}{\partial x_1} -\frac{\partial \psi}{\partial x_1} +\frac{\partial \theta}{\partial x_2}\frac{\partial \psi}{\partial x_1}$$
is dominated by the \( \partial \psi / \partial x_1 \) term since all others are the product of small terms. (Since we also know that \(\psi \sim \theta\) from equation (\ref{eq:biharm}).) Therefore the energy conservation equation becomes $$\nonumber \frac{\partial \theta}{\partial t} - \frac{\partial \psi}{\partial x_1} = \nabla^2 \theta$$ which is linear.
Boundary conditions for this problem are zero normal velocity on \(x_2 = 0,1\) which implies \( \psi=0 \) at these boundaries. The form of the perturbation is such that \(\theta =0 \) on \( x_2 = 0,1
\), and we allow free slip along these boundaries such that
$$\nonumber \sigma_{12} = \frac{\partial v_1}{\partial x_2} + \frac{\partial v_2}{\partial x_1} = 0$$
when \(x_2 = 0,1\) which implies \(\nabla^2 \psi =0\) there.
Now introduce small harmonic perturbations to the driving terms and assume a similar (i.e. harmonic) response in the flow. This takes the form \begin{equation} \nonumber \begin{split} \theta &= \Theta(x_2) \exp(\sigma t) \sin kx_1 \\ \psi &= \Psi(x_2) \exp(\sigma t) \cos kx_1 \end{split} \end{equation}
So that we can now separate variables. \(\sigma\) is unknown, however, if \( \sigma < 0 \) then the perturbations will decay, whereas if \( \sigma > 0 \) they will grow.
Substituting for the perturbations into the biharmonic equation and the linearized energy conservation equation gives $$\left(\frac{d^2}{d{x_2}^2} -k^2 \right)^2 \Psi = -{\rm Ra}\, k \Theta \label{eq:psitheta1}$$
and $$\sigma \Theta + k \Psi = \left(\frac{d^2}{d{x_2}^2} -k^2 \right) \Theta \label{eq:psitheta2}$$
Here we have shown and used the fact that
$$\nabla^2 \equiv \left(\frac{\partial^2}{\partial {x_2}^2} -k^2 \right)$$
when a function is expanded in the form \(\phi(x,z) = \Phi(z)\sin kx \); more generally, this is the Fourier transform of the Laplacian operator.
Eliminating \( \Psi \) between (\ref{eq:psitheta1}) and (\ref{eq:psitheta2}) gives $$\nonumber \sigma \left(\frac{d^2}{d {x_2}^2} - k^2 \right)^2 \Theta -{\rm Ra}\, k^2 \Theta = \left(\frac{d^2}{d {x_2}^2} -k^2 \right)^3 \Theta$$
This has a solution $$\nonumber \Theta = \Theta_0 \sin \pi x_2$$
which satisfies all the stated boundary conditions and implies $$\nonumber \sigma = \frac{k^2 {\rm Ra}}{(\pi^2 + k^2)^2} -(\pi^2 + k^2)$$ a real function of \(k \) and \(\rm Ra\).
For a given wavenumber, what is the lowest value of $\rm Ra$ for which perturbations at that wavenumber will grow ?
Setting \(\sigma = 0\) in the growth rate above gives $$\nonumber {\rm Ra_0} = \frac{(\pi^2 + k^2)^3}{k^2}$$
The absolute minimum value of ${\rm Ra}$ which produces growing perturbations is found by differentiating \({\rm Ra_0} \) with respect to \(k \) and setting equal to zero to find the extremum.
$$\nonumber {\rm Ra_c} = \frac{27}{4} \pi^4 = 657.51$$
for a wavenumber of $$\nonumber k = \frac{\pi}{2^{1/2}} = 2.22$$
corresponding to a wavelength of 2.828 times the depth of the layer.
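We can check these numbers directly; a quick numerical sketch of minimizing ${\rm Ra_0}(k) = (\pi^2+k^2)^3/k^2$:

```python
import math

# Marginal-stability curve for free-slip boundaries: Ra_0(k) = (pi^2 + k^2)^3 / k^2
def ra0(k):
    return (math.pi**2 + k**2) ** 3 / k**2

# A coarse scan over wavenumbers is enough to locate the minimum for a sketch
k_min = min((0.001 * i for i in range(1, 10000)), key=ra0)

print(round(k_min, 3))       # ~ pi / sqrt(2) = 2.221
print(round(ra0(k_min), 2))  # ~ 27 * pi^4 / 4 = 657.51
```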
Different boundary conditions produce different values of the critical Rayleigh number. If no-slip conditions are used, for example, then the \( \Theta \) solution applied above does not satisfy the
boundary conditions. In general, the critical Rayleigh number lies between about 100 and 3000.
Boundary layer theory, Boundary Rayleigh Number
Having determined the conditions under which convection will develop, we next consider what can be calculated about fully developed convection - i.e. when perturbations grow well beyond the
linearization used to study the onset of instability.
Let’s consider fully developed convection with high Rayleigh number. From observations of real fluids in laboratory situations, it is well known how this looks. High Rayleigh number convection is
dominated by the advection of heat. Diffusion is too slow to carry heat far into the fluid before the buoyancy anomaly becomes unstable. This leads to thin, horizontal boundary layers where diffusive
heat transfer into and out of the fluid occurs. These are separated by approximately isothermal regions in the fluid interior. The horizontal boundary layers are connected by vertical boundary layers
which take the form of sheets or cylindrical plumes depending on a number of things including the Rayleigh number. For the time being we consider only the sheet like downwellings since that allows us
to continue working in 2D.
Boundary layer analysis is a highly sophisticated field, and is used in a broad range of situations where differences in scales between different physical effects produce narrow accommodation zones
where the weaker term dominates (e.g viscosity in an otherwise invicid flow around an obstacle).
Here we first make a wild stab at an approximate theory describing the heat flow from a layer with a given Rayleigh number. The convective flow is shown in the Figure together with a rough sketch of
what actually happens.
Assuming the simplified flow pattern of the sketch, steady state, and replacing all derivatives by crude differences we obtain (using a vorticity form)
$$\nonumber \kappa \nabla^2 T = (\mathbf{v} \cdot \nabla) T \;\;\; \longrightarrow \;\;\; \frac{v \Delta T}{d} \sim \frac{\Delta T \kappa}{\delta^2}$$ and $$\nonumber \nabla^2 \omega = \frac{g \rho \alpha}{\eta} \frac{\partial T}{\partial x} \;\;\; \longrightarrow \;\;\; \frac{\omega}{\delta^2} \sim \frac{g \rho \alpha \Delta T}{\eta \delta}$$ where \(\omega \sim v / d \) from the local rotation interpretation of vorticity and the approximate rigid-body rotation of the core of the convection cell, and \(v/d \sim \kappa / \delta^2\).
This gives \begin{align} \frac{\delta}{d} & \sim {\rm Ra}^{-1/3} \\ v & \sim \frac{\kappa}{d} {\rm Ra}^{2/3} \end{align}
This theory balances diffusion of vorticity and temperature across and out of the boundary layer with advection of each quantity along the boundary layer to maintain a steady state.The Nusselt number
is the ratio of advected heat transport to that purely conducted in the absence of fluid motion, or, using the above approximations,
\begin{equation} \nonumber \begin{split} {\rm Nu} & \sim \frac{\rho C_p v \Delta T \delta}{(k \Delta T/d)d} \\ & \sim {\rm Ra}^{1/3} \end{split} \end{equation}
This latter result is in reasonably good agreement with experimental observation. If we define a boundary Rayleigh number $$\nonumber {\rm Ra_b} = \frac{g \rho \alpha \Delta T \delta^3}{\kappa \eta}$$ then the expression for \(\delta\) gives $$\nonumber {\rm Ra_b} \sim 1$$
so the boundary layer does not become more or less stable with increasing Rayleigh number (this is not universal – for internal heating the boundary layer becomes less stable at higher Rayleigh number).
Another wrinkle can be added to the boundary layer theory by trying to account for the variation in the boundary layer thickness as it moves along the horizontal boundary. This refinement in the
theory can account for the form of this thickness, the potential energy change in rising or sinking plumes, and the aspect ratio of the convection (width to height of convection roll) by maximizing
the Nusselt number as a function of aspect ratio.
Consider the boundary layer to be very thin above the upwelling plume (left side). As it moves to the right, it cools and the depth to any particular isotherm increases (this is clearly seen in the
simulation). This can be treated exactly like a one dimensional problem if we work in the Lagrangian frame of reference attached to the boundary layer. That is, take the 1D half-space cooling model
and replace the time with \(x_1/v\) (cf. the advection equation in which time and velocity / lengths are mixed).
The standard solution is as follows. Assume a half-space at an initial temperature everywhere of \(T_0 \) to which a boundary condition, \(T=T_s\), is applied at \(t=0, x_2=0\).
We solve for \(T(x_2,t)\) by first making a substitution,
$$\nonumber \theta = \frac{T-T_0}{T_s-T_0}$$
which is a dimensionless temperature, into the standard diffusion equation to obtain $$\frac{\partial \theta(x_2,t)}{\partial t} = \kappa \frac{\partial ^2 \theta(x_2,t)}{\partial {x_2}^2} \label{eq:difftheta}$$
The boundary conditions on \(\theta\) are simple: \begin{align} & \theta(x_2,0) = 0 \\
& \theta(0,t) = 1 \\
& \theta(\infty,t) = 0 \end{align}
In place of \(t,x_2\), we use the similarity transformation,
$$\nonumber \eta = \frac{x_2}{2\sqrt{\kappa t}}$$ which is found (more or less) intuitively.
Now we need to substitute \begin{align} \frac{\partial \theta}{\partial t} & = -\frac{d \theta}{d\eta}(\eta/2t) \\
\frac{\partial^2 \theta}{\partial {x_2}^2} & = \frac{1}{4\kappa t}\frac{d^2 \theta}{d \eta^2} \end{align} to transform \( (\ref{eq:difftheta}) \) into $$-\eta \frac{d \theta}{d\eta} = \frac{1}{2} \frac{d^2 \theta}{d \eta^2} \label{eq:diffode}$$
Boundary conditions transform to give \begin{align} & \theta(\eta=\infty) = 0 \\
& \theta(\eta=0) = 1 \end{align}
Write \(\phi = d\theta / d\eta\) (for convenience only) to rewrite \( (\ref{eq:diffode}) \) as
\begin{align} -\eta \phi &= \frac{1}{2} \frac{d \phi}{d \eta} \\
\text{or}\quad -\eta \, d\eta &= \frac{1}{2} \frac{d\phi}{\phi} \end{align}
This is a standard integral with solution \begin{align} & -\eta^2 = \log_e \phi -\log_e c_1 \\
\text{such that}\quad & \phi = c_1 \exp(-\eta^2) = \frac{d\theta}{d\eta} \end{align}
This latter form is then integrated to give the solution: $$\nonumber \theta = c_1 \int_0^\eta \exp(-{\eta'}^2) d\eta' +1$$
Boundary conditions give $$\nonumber \theta = 1- \frac{2}{\sqrt{\pi}} \int_0^\eta \exp(-{\eta'}^2) d\eta'$$
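The prefactor follows from fixing \(c_1\) with the far-field condition and the standard Gaussian integral \(\int_0^\infty \exp(-{\eta'}^2) d\eta' = \sqrt{\pi}/2\):
$$\nonumber \theta(\eta \to \infty) = c_1 \frac{\sqrt{\pi}}{2} + 1 = 0 \;\Rightarrow\; c_1 = -\frac{2}{\sqrt{\pi}}$$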
which is the definition of the complementary error function, \(\mathrm{erfc}(\eta)\). Undoing the remaining substitutions gives $$\nonumber \frac{T-T_0}{T_s-T_0} = \mathrm{erfc} \left( \frac{x_2}{2\sqrt{\kappa t}} \right)$$
In our original context of the cooling boundary layer, then, \(T_s\) is the surface temperature, \(T_0\) is the interior temperature of the convection cell ( \(\Delta T /2 \) ) and \(t \leftarrow x_1/v \). The thickness of the boundary layer is found by assuming it is defined by a characteristic isotherm (it does not much matter which). The progression of this isotherm is
\begin{equation} \nonumber \delta \propto \sqrt{\kappa t} \end{equation}
or, in the Eulerian frame,
\begin{equation} \nonumber \delta \propto \sqrt{\kappa x_1 / v} \end{equation}
Internal Heating
The definition of the Rayleigh number when the layer is significantly internally heated is
$$\nonumber {\rm Ra} = \frac{g \rho^2 \alpha H d^5}{\eta \kappa k}$$
where \(H \) is the rate of internal heat generation per unit mass.
The definition of the Nusselt number is the heat flow through the upper surface divided by the average basal temperature. This allows a Nusselt number to be calculated for internally heated convection where the basal temperature is not known a priori. Internally heated convection is difficult to simulate directly in the lab, but the same effect is achieved by reducing the temperature of the upper surface as a function of time.
When Viscosity is not Constant
When viscosity is not constant (and for rocks, the strong dependence of viscosity on temperature makes this generally the case), the equations are quite a lot more complicated. It is no longer
possible to form the biharmonic equation since \(\eta(x,z)\) cannot be taken outside the differential operators. Nor can stream-function / vorticity formulations be used directly for the same
reasons. Spectral methods — the decomposition of the problem into a sum of independent problems in the wavenumber domain — are no longer simple since the individual problems are coupled, not independent.
The Rayleigh number is no longer uniquely defined for the system since the viscosity to which it refers must take some suitable average over the layer — the nature of this average depends on the
circumstances. The form of convection changes since boundary layers at the top and bottom of the system (cold v hot) are no longer symmetric with each other.
The convecting system gains another control parameter which is a measure of the viscosity contrast as a function of temperature.
Applications to the Earth
The application of realistic convection models to the Earth and other planets — particularly Venus — is considered next.
The simplest computational and boundary layer solutions to the Stokes’ convection equations made the simplifying assumption that the viscosity was constant. Despite the experimental evidence which
suggests viscosity variations should dominate in the Earth, agreement with some important observations was remarkably good.
Such simulations were not able to produce plate-like motions at the surface (instead producing smoothly distributed deformation) but the average velocity, the heat flow and the observed pattern of
subsidence of the ocean floor were well matched.
Mantle Rheology
Experimental determination of the rheology of mantle materials gives
$$\nonumber \dot{\epsilon} \propto \sigma^n d^{-m} \exp\left( -\frac{E+PV}{RT} \right)$$
where \(\sigma\) is a stress, \(d\) is grain size, \(E\) is an activation energy, \(V\) is an activation volume, and \(T\) is absolute temperature ( \(R\) is the universal gas constant). This
translates to a viscosity
$$\nonumber \eta \propto \sigma^{1-n} d^m \exp\left( \frac{E+PV}{RT} \right)$$
In the mantle two forms of creep are dominant: dislocation creep with \(n \approx 3.0\), \(m \approx 0\), \(E \approx 430\)–\(540\) kJ/mol, \(V \approx 10\)–\(20\) cm\(^3\)/mol; and diffusion creep with \(n \approx 1.0 \), \(m \approx 2.5\), \(E \approx 240\)–\(300\) kJ/mol, \(V \approx 5\)–\(6\) cm\(^3\)/mol. This is for olivine — other minerals will produce different results, of course.
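To get a feel for how strongly the \(\exp\left((E+PV)/RT\right)\) factor controls viscosity, here is a rough sketch (the function name and the choice of temperatures are ours, and the pressure term is switched off for simplicity):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def temperature_factor(E, T, V=0.0, P=0.0):
    """The exp((E + P V)/(R T)) part of the viscosity law above.

    E in J/mol, T in K; the P*V term defaults to zero for this
    near-surface illustration."""
    return math.exp((E + P * V) / (R * T))

# A dislocation-creep-like activation energy (mid-range of 430-540 kJ/mol):
E = 500e3
ratio = temperature_factor(E, 1000.0) / temperature_factor(E, 1600.0)
print(f"viscosity contrast between 1000 K and 1600 K: {ratio:.3g}")
```

Even this crude estimate gives many orders of magnitude of viscosity variation across mantle-like temperature differences, which is why the constant-viscosity assumption discussed below is such a drastic simplification.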
Convection with Temperature Dependent Viscosity
More sophisticated models included the effect of temperature dependent viscosity as a step towards more realistic simulations. In fact, the opposite was observed: convection with temperature
dependent viscosity is a much worse description of the oceanic lithosphere than constant viscosity convection. It may, however, describe Venus rather well.
Theoretical studies of the asymptotic limit of convection in which the viscosity variation becomes very large (comparable to values determined for mantle rocks in laboratory experiments) find that
the upper surface becomes entirely stagnant with little or no observable motion. Vigorous convection continues underneath the stagnant layer with very little surface manifestation.
This theoretical work demonstrates that the numerical simulations are producing correct results, and suggests that we should look for physics beyond pure viscous flow in explaining plate motions.
Non-linear Viscosity and Brittle Effects
Realistic rheological laws show the viscosity may depend upon stress. This makes the problem non-linear since the stress clearly depends upon viscosity. In order to obtain a solution it is necessary
to iterate velocity and viscosity until they no longer change.
The obvious association of plate boundaries with earthquake activity suggests that relevant effects are to be found in the brittle nature of the cold plates. Brittle materials have a finite strength
and if they are stressed beyond that point they break. This is a familiar enough property of everyday materials, but rocks in the lithosphere are non-uniform, subject to great confining pressures and
high temperatures, and they deform over extremely long periods of time. This makes it difficult to know how to apply laboratory results for rock breakage experiments to simulations of the plates.
An ideal, very general rheological model for the brittle lithosphere would incorporate the effects due to small-scale cracks, faults, ductile shear localization due to dynamic recrystalization,
anisotropy (… kitchen sink). Needless to say, most attempts to date to account for the brittle nature of the plates have greatly simplified the picture. Some models have imposed weak zones which
represent plate boundaries, others have included sharp discontinuities which represent the plate-bounding faults, still others have used continuum methods in which the yield properties of the
lithosphere are known but not the geometry of any breaks. Of these approaches, the continuum approach is best able to demonstrate the spectrum of behaviours as convection in the mantle interacts with
brittle lithospheric plates. For studying the evolution of individual plate boundaries methods which explicitly include discontinuities work best.
The simplest possible continuum formulation includes a yield stress expressed as a non-linear effective viscosity:
$$\nonumber \eta _ {\rm eff} = \frac{\tau _ {\rm yield}}{\dot{\varepsilon}}$$
This formulation can be incorporated very easily into the mantle dynamics modeling approach that we have outlined above as it involves making modifications only to the viscosity law. There may be
some numerical difficulties, however, as the strongly non-linear rheology can lead to dramatic variations in the viscosity across relatively narrow zones.
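A minimal sketch of how this enters a viscosity routine (the min-capping convention and the variable names are ours; real codes differ, for example in how the strain-rate invariant is formed):

```python
def effective_viscosity(eta, strain_rate, tau_yield):
    """Cap the stress at tau_yield: below yield the material keeps its
    viscous value eta; beyond it the effective viscosity drops to
    tau_yield / strain_rate, as in the formula above."""
    return min(eta, tau_yield / strain_rate)

# Far from a plate boundary (low strain rate) the cap is inactive;
# in a fast-deforming shear zone it kicks in and weakens the material.
slow = effective_viscosity(1e21, 1e-17, 1e8)   # stays at the viscous value
fast = effective_viscosity(1e21, 1e-12, 1e8)   # yields and weakens
print(slow, fast)
```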
Thermochemical Convection
The Rayleigh number is defined in terms of thermal buoyancy but other sources of buoyancy are possible in fluids. For example, in the oceans, dissolved salt makes water heavy. When hot salty water
(e.g. the outflows of shallow seas such as the Mediterranean) mixes with cold less salty water, there is a complicated interaction.
This is double diffusive convection and produces remarkable layerings etc since the diffusion coefficients of salt and heat are different by a factor of 100. In the mantle, bulk chemical differences
due to subduction of crustal material can be treated in a similar manner. From the point of view of the diffusion equations, the diffusivity of bulk chemistry in the mantle is tiny (pure advection).
Fluid flows with chemical vs. thermal buoyancy are termed thermochemical convection problems.
sample abstract
Tight bounds on the complexity of the Boyer-Moore string matching algorithm
R. Cole
SIAM Journal on Computing, 5(1994), 1075-1091.
The problem of finding all occurrences of a pattern of length m in a text of length n is considered. It is shown that the Boyer-Moore string matching algorithm performs roughly 3n comparisons and
that this bound is tight up to O(n/m); more precisely, an upper bound of 3n-3(n-m+1)/(m+2) comparisons is shown, as is a lower bound of 3n(1-o(1)) comparisons, as n/m and m tend to infinity.
While the upper bound is somewhat involved, its main elements provide a quite simple proof of a 4n upper bound for the same algorithm.
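To see the comparison scheme the bounds refer to, here is a sketch of the simplified Boyer-Moore-Horspool variant (bad-character rule only; a relative of, not the full algorithm analyzed in, the paper):

```python
def horspool_search(pattern, text):
    """Boyer-Moore-Horspool search: compare the pattern right to left,
    and on a mismatch shift by the bad-character rule. Returns the
    starting indices of all occurrences."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Shift table: distance from each character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    hits, i = [], 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:
            j -= 1
        if j < 0:
            hits.append(i)
            i += 1
        else:
            i += shift.get(text[i + m - 1], m)
    return hits

print(horspool_search("aba", "abacababa"))  # [0, 4, 6]
```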
American Mathematical Society
Finite dimensional models for random microstructures
Author: M. Grigoriu
Journal: Theor. Probability and Math. Statist. 106 (2022), 121-142
MSC (2020): Primary 54C40, 14E20; Secondary 46E25, 20C20
DOI: https://doi.org/10.1090/tpms/1168
Published electronically: May 16, 2022
MathSciNet review: 4438447
Abstract: Finite dimensional (FD) models, i.e., deterministic functions of space depending on finite sets of random variables, are used extensively in applications to generate samples of random
fields $Z(x)$ and construct approximations of solutions $U(x)$ of ordinary or partial differential equations whose random coefficients depend on $Z(x)$. FD models of $Z(x)$ and $U(x)$ constitute
surrogates of these random fields which target various properties, e.g., mean/correlation functions or sample properties. We establish conditions under which samples of FD models can be used as
substitutes for samples of $Z(x)$ and $U(x)$ for two types of random fields $Z(x)$ and a simple stochastic equation. Some of these conditions are illustrated by numerical examples.
Additional Information
M. Grigoriu
Affiliation: Department of Civil Engineering and Applied Mathematics, Cornell University, Ithaca, New York 14853
Email: mdg12@cornell.edu
Keywords: Material microstructures, random fields, space of continuous functions, stochastic equations, weak/almost sure convergence
Received by editor(s): July 19, 2021
Accepted for publication: September 30, 2021
Published electronically: May 16, 2022
Additional Notes: The work reported in this paper has been partially supported by the National Science Foundation under the grant CMMI-2013697. This support is gratefully acknowledged.
Article copyright: © Copyright 2022 Taras Shevchenko National University of Kyiv
Types in programming languages are inspired by mathematical set theory.
Types/Sets           Maths                               Haskell        SML      Pascal                  C+/-
Z, Integer           {..., -2, -1, 0, 1, 2, 3, ...}      Integer, Int   Int      int                     int
Rational             {..., 1/2, ..., 22/7, ...}          Rational       -        -
R, Real              {..., 2.71, ..., 3.14, ...}         Float, Double  real     real                    float, double
Tuple                set product,                        (S, T)         S*T      record s:S; t:T end     struct{ S s; T t}
                     S×T = {<s, t> | s in S, t in T}
+, disjoint union    S+T = {s[0]|s in S}u{t[1]|t in T}   Either S T     -        record case "variant"   union
                     (tags 0, 1)
T^* (list of T)      T^0+T^1+T^2+...                     [T]            T list   array ... of T          T[]
function             S->T                                S->T           S->T     function f(s:S):T       T f(S s)
Basic Type Systems
Dynamic Types
In a language, such as J, with dynamic types and run-time type checking each value has a type and just as the value assigned to a variable can change so the type is allowed to change as the value
changes, e.g.
x := 7 --Number
x := "seven" --String, etc.
Depending on the language, the action taken by an operator can vary with the type(s) of the operand(s), e.g.
function f(x, y) = x + y;
f(1, 2) --returns 3:Number
f("one", "two") --returns "onetwo"
-- assuming `+' represents both numerical addition and string concatenation.
Static Types
Most compiled programming languages have static types and compile-time type checking, e.g.
Int x;
String s;
x := 7;
s := "seven";
x := "seven" --type error
Static type checking guarantees that a wide class of programming errors in which an operator is applied to the wrong type of operand cannot occur when the program runs.
Static type checking also allows operators to be overloaded with different meanings, different code sequences, e.g.
1 + 2 --Int addition
1.2 + 3.4 --Real addition
1.1 i 1.1 + 2.2 i 2.2 --Complex addition
"one" + "two" --concatenation
{Monday, Friday} + {dayoff} --Set union
and so on, as the language designer chooses. This is achieved with no run-time penalty.
A programming language will provide some atomic types, generally including Int, Real, Boolean, and Char, and some structured types possibly including array ([ ]), tuple (record, structure), set of,
pointer to (ref, ^, *). Many programming languages also allow a programmer to define new types, built from atomic types and structured types.
... ...
— 2004, LA.
Tanya Khovanova's Math Blog
For the last year, I’ve been obsessed with Penney’s game. In this game, Alice picks a string of coin tosses, say HHH for three heads. After that, Bob picks his string of tosses of the same length,
say HTH. Then they toss a fair coin. The person whose string shows up first wins. For example, if the tosses are THTTHHH, then Alice wins after the seventh toss. For these particular choices, Bob
wins with probability 3/5.
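The 3/5 figure can be checked with Conway’s leading-numbers formula for Penney’s game (a standard result, not derived in this post); a quick sketch:

```python
from fractions import Fraction

def leading(a, b):
    """Conway's correlation L(a, b): sum of 2^(i-1) over each i for which
    the length-i suffix of a equals the length-i prefix of b."""
    return sum(2 ** (i - 1) for i in range(1, len(a) + 1) if a[-i:] == b[:i])

def p_bob_wins(alice, bob):
    """Probability that bob's string appears before alice's in fair coin
    tosses; Conway's odds are (L_AA - L_AB) : (L_BB - L_BA)."""
    bob_odds = leading(alice, alice) - leading(alice, bob)
    alice_odds = leading(bob, bob) - leading(bob, alice)
    return Fraction(bob_odds, bob_odds + alice_odds)

print(p_bob_wins("HHH", "HTH"))  # 3/5
print(p_bob_wins("HHH", "THH"))  # 7/8 -- Bob's optimal reply to HHH
```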
With my PRIMES student, Sean Li, we looked at this game and asked a different question. Suppose Alice picks a pattern of three tosses in a row that are the same. Suppose after that, Bob chooses a
pattern of three alternating tosses. Then they toss a fair coin. Alice is hoping for HHH or TTT, while Bob is hoping for HTH or THT. The person whose pattern shows up first wins. For example, if the
tosses are THTTHHH, then Bob wins after the third toss. For these particular choices, Bob wins with probability 1/2.
Here is what actually happens in this example. We assume that the group of two elements acts on the alphabet of two letters. The group’s non-identity element swaps letters H and T. We assume that two
strings are equivalent if they belong to the same equivalency class under the group action. We call such an equivalency class a pattern.
In the new game we invented, we have an alphabet of any size and any group acting on the alphabet. Then Alice and Bob pick their patterns. After that, they play the Penney’s game on these patterns.
The answers to all the relevant questions are in our paper, The Penney’s Game with Group Action, posted at the math.CO arxiv 2009.06080.
Aggregate Point Cloud Geometric Features for Processing
1 Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China
2 Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, 110169, China
3 University of Chinese Academy of Sciences, Beijing, 100049, China
* Corresponding Authors: Renbo Xia. Email:
(This article belongs to the Special Issue: Recent Advances in Virtual Reality)
Computer Modeling in Engineering & Sciences 2023, 136(1), 555-571. https://doi.org/10.32604/cmes.2023.024470
Received 31 May 2022; Accepted 20 September 2022; Issue published 05 January 2023
As 3D acquisition technology develops and 3D sensors become increasingly affordable, large quantities of 3D point cloud data are emerging. How to effectively learn and extract the geometric features
from these point clouds has become an urgent problem to be solved. The point cloud geometric information is hidden in disordered, unstructured points, making point cloud analysis a very challenging
problem. To address this problem, we propose a novel network framework, called Tree Graph Network (TGNet), which can sample, group, and aggregate local geometric features. Specifically, we construct
a Tree Graph by explicit rules, which consists of curves extending in all directions in point cloud feature space, and then aggregate the features of the graph through a cross-attention mechanism. In
this way, we incorporate more point cloud geometric structure information into the representation of local geometric features, which makes our network perform better. Our model performs well on
several basic point clouds processing tasks such as classification, segmentation, and normal estimation, demonstrating the effectiveness and superiority of our network. Furthermore, we provide
ablation experiments and visualizations to better understand our network.
With the rapid development of 3D vision sensors such as RGB-D cameras, 3D point cloud data has proliferated, which can provide rich 3D geometric information. The analysis of 3D point clouds is
receiving more and more attention as they can be used in many aspects such as autonomous driving, robotics, and remote sensing [1]. Intelligent (automatic, efficient, and reliable) feature learning
and representation of these massive point cloud data is a key problem for 3D understanding (including 3D object recognition, semantic segmentation and 3D object generation, etc.).
Thanks to its powerful ability to learn features, deep learning has attracted extensive attention. It has also achieved fruitful results in the field of image understanding over the past
few years [2–10]. As traditional 3D point cloud features rely on artificial design, they cannot describe semantic information at a high level, making adaptations to complex real-life situations
difficult. However, deep learning methods with autonomous feature learning capacity have great advantages in these aspects. Since point clouds are disordered and unstructured, traditional deep
learning methods that work well on 2D images cannot be directly used to process point clouds. Inferring shape information from these irregular points is complicated.
In order to process point clouds using raw data, Qi et al. proposed PointNet [8], which uses multilayer perceptrons (MLPs) with shared parameters to map each point to a high-dimensional feature
space, and then passes in a Max Pooling layer to extract global features. Since PointNet mainly focuses on the overall features and ignores the neighborhood structure information, it is difficult for
PointNet to capture local geometric structure information. Qi et al. proposed PointNet++ [9], which introduces a multilayer network structure in PointNet to better capture geometric structure
information from the neighborhood of each point. The network structure of PointNet++ is similar to that of an image convolutional neural network. PointNet++ extracts local neighborhood features using PointNet
as basic components and abstracts the extracted features layer by layer using a hierarchical network structure. Due to their simplicity and powerful presentation, many networks have been developed
based on PointNet and PointNet++ [6,11–18].
Local feature aggregation is an important basic operation that has been extensively studied in recent years [6,14,19], which is mainly used to discover the correlations between points in local
regions. For each key point, its neighbors are first grouped according to predefined rules (e.g., KNN). Next, the features between query points and neighboring points are passed into various
point-based transformations and aggregation modules for local geometric feature extraction.
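A generic sketch of this sampling-and-grouping step (pure Python for clarity; the function and variable names are ours, and this is not the authors’ TGSG block):

```python
import math

def knn_group(points, feats, center_idx, k):
    """For each query (key) point, gather its k nearest neighbors by
    Euclidean distance and return their positions relative to the query
    together with their features, ready for a point-based
    transformation/aggregation module."""
    out = []
    for ci in center_idx:
        cx, cy, cz = points[ci]
        order = sorted(range(len(points)),
                       key=lambda j: math.dist(points[j], points[ci]))
        nbrs = order[:k]
        rel = [(points[j][0] - cx, points[j][1] - cy, points[j][2] - cz)
               for j in nbrs]
        out.append((rel, [feats[j] for j in nbrs]))
    return out

pts = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (5, 5, 5)]
fts = ["a", "b", "c", "d"]
rel, nbr_feats = knn_group(pts, fts, [0], k=2)[0]
print(rel, nbr_feats)  # [(0, 0, 0), (1, 0, 0)] ['a', 'b']
```

A real implementation would do the same thing with batched tensor operations on the GPU; the relative positions computed here are exactly the low-level neighborhood relationships that methods like RS-CNN lift to feature-space weights.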
Local feature aggregation can incorporate some prior knowledge into local features by predefined rules. For example, KNN-based approaches explicitly assume that local features are related to
neighboring points and independent of non-adjacent features in the same layers. They incorporate this information into local features by KNN. However, the above operation lacks long-range relations, so
Li et al. proposed a non-local module to capture them [6]. It considers not only neighboring points but also sampled points from the whole point cloud. It incorporates this prior information into the
local features by L-NL Module.
We argue that these approaches are insufficient to extract long-range relations. For this reason, we propose an end-to-end point cloud processing network named TGNet, which can efficiently, robustly,
and adequately depict the geometry of point clouds. Fig. 1 intuitively compares our aggregation method with local and non-local aggregation methods. Compared with local aggregation approaches, our
method can better capture long-range dependencies. Compared with non-local aggregation approaches, our method avoids the global point-to-point mapping and can extract geometric features more
Our main contributions can be summarized as follows:
1. We propose a novel robust end-to-end point cloud processing network, named TGNet, which can effectively enhance point clouds processing.
2. We design a local feature grouping block TGSG (Tree Graph Sampling and Grouping) that enables our network to better trade off the balance of local and long-range dependencies.
3. We further design a transformer-based point cloud aggregation block TGA, which can efficiently aggregate Tree Graph features.
Our approach achieves state-of-the-art performance in extensive experiments on point cloud classification, segmentation, and normal estimation, which validates the effectiveness of our work.
We note that an earlier version of this paper appeared in [20]. This manuscript has been expanded, revised, and refined based on conference papers. Our description of the method provides a more
complete explanation. In the experiments section, supplementary experiments and visualizations have been added to further understand our model.
2.1 Deep Learning on Point Cloud
The biggest challenge of point cloud processing is its unstructured representations. According to the form of the data input to neural network, existing learning methods can be classified as
volume-based [2,4,5,7,10], projection-based [3,21–24], and point-based methods [8,9,11,14,16–18,25–31]. Projection-based methods project an unstructured point cloud into a set of 2D images, while
volume-based methods transform the point cloud into regular 3D grids. Then, the task is completed using 2D or 3D convolutional neural networks. These methods do not use raw point cloud directly and
suffer from explicit information loss and extensive computation. For volume-based methods, low-resolution voxelization will result in the loss of detailed structural information of objects, while
high-resolution voxelization will result in huge memory and computational requirements. For projection-based methods, they are more sensitive to viewpoints selection and object occlusion.
Furthermore, such methods cannot adequately extract geometric and structural information from 3D point clouds due to information loss during 3D-to-2D projection.
PointNet is a pioneer of point-based methods, which directly uses raw point clouds as input to neural networks to extract point cloud features through shared MLP and global Max Pooling. To capture
delicate geometric structures from local regions, Qi et al. proposed a hierarchical network PointNet++ [9]. Local features are learned from local geometric structures and abstracted layer by layer.
The point-based approach does not require any voxelization or projection and thus does not introduce explicit information loss and is gaining popularity. Following them, recent work has focused on
designing advanced convolution operations, considering a wider range of neighborhoods and adaptive aggregation of query points. In this paper, a point-based approach is also used to construct our network.
2.2 Advanced Convolution Operations
Although unstructured point clouds make it difficult to design convolution kernels, advanced convolution kernels in recent literature have overcome these drawbacks and achieved promising results on
basic point cloud analysis tasks. Current 3D convolution methods can be divided into continuous [11,13,16,17], discrete [28,32] and graph-based convolution methods [12]. Continuous convolution
methods define the convolution operation depending on the spatial distribution of local regions. The convolution output is a weighted combination of adjacent point features, and the convolution
weights of adjacent points are determined based on their spatial distribution to the centroids. For example, RS-CNN [17] maps predefined low-level neighborhood relationships (e.g., relative position
and distance) to high-level feature relationships via MLPs and uses them to determine the weights of neighborhood points. In PointConv [16], the convolution kernel is considered as a nonlinear
function of local neighborhood point coordinates, consisting of weight and density functions. The weight functions are learned by MLPs, and the kernelized density estimates are used to learn the
density functions.
Discrete convolution methods define a convolution operation on regular grids, where the offset from the centroid determines the weights of the neighboring points. In GeoConv [32], edge features are
decomposed into six bases, which encourages the network to learn edge features independently along each base. Then, the features are aggregated according to the geometric relationships between the
edge features and the bases. Learning in this way can preserve the geometric structure information of point clouds.
Graph-based convolution methods use a graph to organize the raw unordered 3D point cloud, where the vertices of the graph are defined by the points in the point cloud, and the directed edges of the graph are generated by combining the centroids and neighboring points. Feature learning and aggregation are performed in the spatial or spectral domain. In DGCNN [12], the graph is built in feature space and
changes as features are extracted. EdgeConv is used to generate edge features and search for neighbors in their feature space.
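A minimal numpy sketch of the EdgeConv idea described above; for simplicity, a single linear layer plays the role of the edge MLP, and all sizes are illustrative:

```python
import numpy as np

def knn(x, k):
    """Indices of the k nearest neighbors of each row of x (excluding self)."""
    d = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def edge_conv(x, k, W):
    """Simplified EdgeConv: edge feature [x_i, x_j - x_i] -> linear + ReLU,
    then max aggregation over the k neighbors of each point."""
    idx = knn(x, k)                               # (N, k) neighbor indices
    xi = np.repeat(x[:, None, :], k, axis=1)      # center point, (N, k, D)
    xj = x[idx]                                   # neighbors,    (N, k, D)
    edges = np.concatenate([xi, xj - xi], axis=-1)  # (N, k, 2D) edge features
    h = np.maximum(edges @ W, 0.0)                # shared linear layer + ReLU
    return h.max(axis=1)                          # aggregate over neighbors

rng = np.random.default_rng(1)
x = rng.normal(size=(256, 3))
W = rng.normal(size=(6, 32))
y = edge_conv(x, k=16, W=W)                       # (256, 32) new per-point features
```

In DGCNN proper, the kNN search on deeper layers runs in the learned feature space rather than in coordinate space, so the graph is rebuilt after every layer.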
2.3 Wider Range of Neighborhoods
Due to the geometric structure of the point cloud itself, it is difficult to determine precisely which global points are associated with local point cloud features. During the information extraction
and abstraction process, local features are roughly assumed to be associated only with neighboring points. Recent state-of-the-art methods in the literature attempt to address the above difficulties and
achieve promising results on basic point cloud analysis tasks. SOCNN [33] and PointASNL [6] sample global and local points and then fuse them with features. With these computed features, point cloud
processing can be executed with greater accuracy and robustness.
Unlike all existing sampling methods, we follow explicit rules for sampling and grouping points on the surface of the point cloud. In this way, our local features will contain rich information
describing the shape and geometry of the object.
There are currently two main types of feature aggregation operators: local and non-local. Local feature aggregation operators fuse existing features of neighboring points to obtain new features.
After that, the new features are abstracted layer by layer through a hierarchical network structure to get global features. Different from the local feature aggregation operator, non-local
aggregation operators introduce global information when computing local features. Non-local aggregation operators start with nonlocal neural networks [34], which essentially use self-attention to
compute a new feature by fusing the features of neighboring points with global information. Due to the success of the transformer in vision tasks [35–37] and the fact that the transformer [38] itself
is inherently permutation invariant and well suited for point cloud processing, the transformer has received extensive attention for extracting non-local features in point cloud processing [14,19]. As a representative, Guo et al. propose PCT [14], where global features are used to learn multi-to-one feature mappings after transformation and aggregation.
Unlike the two feature aggregation operators mentioned above, we argue that point cloud processing can be better achieved by taking special consideration of local geometry. By aggregating additional
geometric information, local features will carry more information and thus achieve better results.
In this paper, we design a novel framework TGNet (Tree Graph Network), which improves the ability to extract local features and brings the global information into point representation. TGNet consists
of several TGSG (Tree Graph Sampling and Grouping) blocks for sampling and grouping and TGA (Tree Graph Aggregation) blocks for aggregating features. For each block, the TGSG block first receives the
output from the previous block. It then follows explicit rules for sampling, grouping, and simple processing, which can assemble additional information about the geometric structure of local regions.
TGA block uses a self-attention mechanism to aggregate Tree Graph to obtain new features for the next module.
We first introduce the TGSG block in Section 3.1 and the TGA block in Section 3.2, respectively. Then, the TGSG and TGA blocks are combined in a hierarchical manner to form our TGNet proposed in
Section 3.3.
3.1 Tree Graph Sampling and Grouping (TGSG) Block
A point cloud is a set of three-dimensional coordinate points in space, denoted as P={p1,p2,…,pn}∈RN×3. Correspondingly, its features can be expressed as F={f1,f2,…,fn}∈RN×D, which can represent a
variety of information, including color, surface normal, geometric structure, and high-level semantic information. In a hierarchical network framework, the output of the previous layer is the input
of the subsequent layer, and the subsequent layer abstracts the features of the previous layer. In different feature layers, the feature F of the point cloud carries different information.
We first construct a graph G=(V,E) containing nodes V and edges E based on the spatial relationships of the 3D point cloud. Each vertex corresponds to a point in the point cloud, and each point is
connected to its spatially adjacent K nearest points by edges. In this way, we transform the point cloud into a graph feature space.
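Building such a kNN graph is straightforward; below is a brute-force numpy sketch (fine for small N — real implementations use KD-trees or GPU kernels):

```python
import numpy as np

def build_knn_graph(points, k):
    """Return each point's k nearest neighbors and the directed edge list
    (i, j) connecting each point i to its k spatially adjacent points j."""
    d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-edges
    nbrs = np.argsort(d, axis=1)[:, :k]          # (N, k) neighbor indices
    edges = [(i, int(j)) for i in range(len(points)) for j in nbrs[i]]
    return nbrs, edges

rng = np.random.default_rng(2)
P = rng.normal(size=(100, 3))
nbrs, E = build_knn_graph(P, k=8)
assert len(E) == 100 * 8   # each vertex has exactly k outgoing edges
```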
Using the definition of a curve in CurveNet [39], a curve of length l can be generated from a series of points in the feature space F such that ci={ci,1,ci,2,…,ci,l}∈RD×l. Unlike them, we adopt a deterministic strategy in which our curves follow a specific explicit rule π when extending in the feature space. Deterministic strategies reduce the number of learnable parameters and achieve results similar to non-deterministic strategies. In Fig. 2a, m curves of length l extending in different directions form a Tree Graph, such that TG={c1,c2,…,cm}∈RD×l×m. Local point clouds with different geometries form different Tree Graphs, and these Tree Graphs carry their geometric information. Fig. 2a shows a Tree Graph with 5 curves.
Next, we will describe the construction process of Tree Graph in detail as shown in Fig. 2b. We first randomly sample the starting points in feature space to get Ps={ps1,ps2,…,psv|ps∗∈P} and the
corresponding feature Fs={fs1,fs2,…,fsv|fs∗∈F}. Due to the high computational efficiency of KNN, we obtain the neighborhood N(f) of each point feature f by this method. Then, we iteratively obtain
the nodes ci,j of the curve ci in the point cloud using predefined strategy π:
where ci,0 is numerically equal to fs, that is ci,0=fs.
In our neural network model TGNet, we use a simpler approach as strategy π, which can ensure that the curves extend as far as possible in all directions. The node ci,j+1 on ci can be obtained by
executing the predefined policy π.
where DVi is the learnable direction vector of the ith curve ci, fk∈N(fs) is a neighboring feature of the starting point, and ci,j,k∈N(ci,j) is a neighboring feature of ci,j. ti,j represents the direction vector between points ci,j−1 and ci,j:
In this way, we can obtain a Tree Graph that contains both local and non-global long-range information.
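Since the exact equations defining π are not reproduced here, the following is only a plausible sketch of the greedy curve-growing loop: at each step the next node is chosen among the current node's neighbors by alignment with a hypothetical learnable direction vector. The scoring rule and all sizes are assumptions, not the paper's exact formulation:

```python
import numpy as np

def grow_curve(feats, nbrs, start, direction, length):
    """Grow one curve of `length` nodes from `start` in feature space,
    greedily moving to the neighbor whose step vector best aligns with
    the curve's direction vector (a plausible reading of strategy pi)."""
    curve = [start]
    cur = start
    for _ in range(length - 1):
        cands = nbrs[cur]                    # candidate next nodes
        steps = feats[cands] - feats[cur]    # step vectors in feature space
        scores = steps @ direction           # alignment with direction vector
        cur = int(cands[np.argmax(scores)])
        curve.append(cur)
    return curve

rng = np.random.default_rng(3)
F = rng.normal(size=(200, 16))               # toy per-point features
d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
nbrs = np.argsort(d2, axis=1)[:, :8]         # kNN in feature space
DV = rng.normal(size=16)                     # one (hypothetical) direction vector
curve = grow_curve(F, nbrs, start=0, direction=DV, length=5)
```

Repeating this for m direction vectors from the same starting point yields one Tree Graph of shape D×l×m.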
When m increases, the query points and query ranges cluster more tightly around the center point because there are more curves. When l increases, the query points and query ranges extend farther from the center point because each curve is longer. So, we can adjust m and l to let the network balance local information and long-range dependencies. When the product of m and l is constant, increasing l enables the network to obtain more information over long distances; conversely, decreasing l allows the network to focus more on local information.
In Fig. 2c, we convert the graph TG into an image I∈Rm×l×D of size m×l with dimension D using the following manner:
Note that ci,j is an element of F.
In the above way, we obtain a tensor similar to an image feature map. In Fig. 2d, we use a simple method to process the image I to get local features. We obtain the local features I′∈RNs×D′×l×m of
the starting points of each Tree Graph image Is={I1,I2,…,INs}∈RNs×D×l×m by a convolution kernel of size 3 × 3. Then use GAP (global average pooling) on the image to get z∈RNs×D′. Finally, we convert
image Is to a vector z to represent the entire Tree Graph, which is used to represent the local geometric structure information.
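The image-style processing of a Tree Graph — a 3×3 convolution followed by global average pooling — can be sketched as follows (a single Tree Graph, a random kernel as a placeholder, and no batching):

```python
import numpy as np

def conv3x3_gap(image, kernel):
    """3x3 'same' convolution over an (m, l, D) Tree Graph image, ReLU,
    then global average pooling -> one D'-dimensional vector z."""
    m, l, D = image.shape
    Dp = kernel.shape[-1]
    padded = np.pad(image, ((1, 1), (1, 1), (0, 0)))   # zero-pad spatial dims
    out = np.zeros((m, l, Dp))
    for i in range(m):
        for j in range(l):
            patch = padded[i:i+3, j:j+3, :]            # (3, 3, D) window
            out[i, j] = np.einsum('abc,abcd->d', patch, kernel)
    out = np.maximum(out, 0.0)                         # ReLU
    return out.mean(axis=(0, 1))                       # GAP -> (D',)

rng = np.random.default_rng(4)
I = rng.normal(size=(5, 5, 16))      # m=5 curves, l=5 nodes, D=16 features
K = rng.normal(size=(3, 3, 16, 32))  # 3x3 kernel mapping D=16 -> D'=32
z = conv3x3_gap(I, K)                # vector summarizing the whole Tree Graph
```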
3.2 Tree Graph Aggregation (TGA) Block
With the TGSG block, we obtain a Tree Graph containing local information and non-global long-range dependency information. In this subsection, we will use TGA block to fuse Tree Graph and global
information into local features. To simplify the notation, we define the local features of the point cloud as x∈RN×D. We take advantage of the cross-attention to fuse feature z into local feature x.
The multi-head cross attention from local to global is defined as follows:
With multi-head cross attention (MCA) and feed forward layer (FFN), H can be computed as:
x and z are split as x=[xh] and z=[zh] (1≤h≤hm) for multi-head attention with hm heads. WhQ, WhK and WhV are the projection matrices in the hth head. Who is used to merge the multiple heads. LN is the layer normalization function. Attn is the standard attention function:
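The standard attention referred to above is the usual scaled dot-product attention, applied here as cross attention from local features to Tree Graph vectors. A single-head numpy sketch with random projection matrices as placeholders:

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(x, z, Wq, Wk, Wv):
    """Single-head cross attention: local features x query the Tree Graph
    summaries z. Multi-head is the same computation on split channels,
    plus an output projection W_o to merge the heads."""
    Q, K, V = x @ Wq, z @ Wk, z @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot product
    return softmax(scores) @ V                # weighted sum of values

rng = np.random.default_rng(5)
x = rng.normal(size=(128, 64))                # local features
z = rng.normal(size=(32, 64))                 # Tree Graph vectors
Wq, Wk, Wv = (rng.normal(size=(64, 64)) * 0.1 for _ in range(3))
h = cross_attention(x, z, Wq, Wk, Wv)         # (128, 64) fused features
```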
The absolute and relative positions of the point cloud are very important. As shown in Fig. 3, we incorporate them into H. We concatenate P (the absolute spatial positions), PK (the spatial positions of the neighboring points), and P−PK. The concatenated features are mapped to higher dimensions through a single-layer MLP. HK−H, added to PK, is passed into another MLP layer and a Max Pooling layer to obtain the result xout.
3.3 Tree Graph Network (TGNet)
Fig. 4 shows the architecture of a TGNet (Tree Graph network), which stacks several TGSG blocks and TGA blocks. It starts with a local feature extraction block (LFE), which takes key points’ absolute
position, neighboring points’ relative position, and neighboring points’ absolute position as input. LFE contains an MLP layer and a Max Pooling layer to initially extract point cloud features. In
all TGSGs, the length and the number of curves are set to 5. In all TGAs, the number of attention heads is 3, and the ratio in FFN is 2 instead of 4 to reduce computations. In this paper, TGNet is
used for point cloud classification, segmentation, and surface normal estimation, which can all be trained in an end-to-end manner.
For classification, the point cloud is passed into a local feature extraction block (LFE) to initially extract local features. The extracted local features are abstracted layer by layer through 8 TGSG and TGA modules, and the global features are obtained by Max Pooling. Finally, we get class scores by using two layers of MLPs. The category with the highest score is TGNet's prediction.
The point cloud segmentation task is similar to the normal estimation task, and we use almost the same architecture for both: attention U-Net style networks that learn multi-level representations. For segmentation, the network outputs a per-point prediction score for the semantic labels. For normal estimation, it outputs a per-point normal prediction.
We evaluate our network on multiple point cloud processing tasks, including point cloud classification, segmentation, and normal estimation. To further understand TGNet, we also performed ablation
experiments and visualizations to help further understand our network.
We evaluate TGNet on ModelNet40 [2] for classification, which contains 12311 CAD models of 3D objects belonging to 40 categories. The dataset consists of two parts: the training dataset contains 9843
objects, and the test dataset contains 2468 objects. We uniformly sample 1024 points from the surface of each CAD model. For processing purposes, all 3D point clouds are normalized to a unit sphere.
During training, we augment the data by scaling in the range [0.67, 1.5] and panning in the range [−0.2, 0.2]. We trained our network for 200 epochs, using SGD with a learning rate of 0.001, and
reduced the learning rate to 0.0001 using cosine annealing. The batch sizes for training and testing are set to 48 and 24, respectively.
Table 1 reports the results of our TGNet and current most advanced methods. In contrast to other methods, ours uses only 1024 sampling points and does not require additional surface normals. In
addition, when we do not use the voting strategy [17], our method achieves a state-of-the-art score of 93.8%, which is already better than many methods. With the voting strategy, our method reaches 94.0% accuracy. These improvements demonstrate the robustness of TGNet to various geometric shapes.
We evaluate the ability of our network for fine-grained shape analysis on the ShapeNetPart [44] benchmark. ShapeNetPart dataset contains 16881 shape models in 16 categories, labeled as 50
segmentation parts. We use 12137 models for training and the rest for validation and testing. We uniformly select 2048 points from each model as input to our network. We train our network for 200
epochs with a learning rate of 0.05 and a batch size of 32. Table 2 summarizes the comparison of current advanced methods, where TGNet achieves the best performance of 86.5% overall mIoU.
Segmentation is a more difficult task than shape classification. Even without fine-tuning parameters, our method still achieves high scores. The effectiveness of our Tree Graph features strategy is
confirmed. Fig. 5 shows our segmentation results. The segmentation predictions made by TGNet are very close to the ground truth.
Normal estimation is essential to many 3D point cloud processing tasks, such as 3D surface reconstruction and rendering. It is a very challenging task that requires a comprehensive understanding of
object geometry. We evaluate normal estimation on the ModelNet40 dataset as a supervised regression task. We train for 200 epochs using a structure similar to point cloud segmentation, where the
input is 1024 uniformly sampled points. Table 3 shows the average cosine error results for TGNet and current state-of-the-art methods. Our network shows excellent performance with an average error of
only 0.12. Our method gives excellent results demonstrating that TGNet can understand 3D model shapes very well. Fig. 6 summarizes the normal estimation results of our method. The surface normals
predicted by TGNet are very close to ground truth. Even complex 3D models, such as airplanes, can be estimated accurately.
We performed numerous experiments on the dataset ModelNet40 to evaluate the network entirely. Table 4 shows the ablation result. First, we introduce our baseline method for making comparisons. To
replace TGSG, we use KNN for sampling and grouping and use shared MLPs to ensure that the features of their outputs have the same dimensions. TGA module is replaced by PNL (point nonlocal cell) of
PointASNL. The accuracy of the baseline is only 92.8%. The impact of each component is investigated by replacing components of the baseline architecture with TGNet's.
For model B, using TGSG yields a 0.4% improvement over the baseline. In contrast with the baseline, the Tree Graph is used to sample and group geometric information. This illustrates
the effectiveness of our Tree Graph in capturing geometric information. For model C, our method shows a 0.6% improvement over the baseline when using TGA. This illustrates the effectiveness of our
TGA in aggregating local and non-local information. Our full model TGNet achieves an accuracy of 93.8% when using both TGSG blocks and TGA blocks. The ablation experiments show that introducing more geometric
information into the local features by explicit methods can effectively improve the point cloud processing.
As mentioned before, adjusting the values of m and l enables TGNet to trade off local information against long-range dependency information. In this subsection, we experiment with different numbers of curves and nodes on the ModelNet40 dataset. As shown in Fig. 7, we perform five experiments; the product of l and m is 24 in each except the third, where it is 25. When m equals 5 and l equals 5, we obtain the best accuracy of 93.8%.
The experiments show that simply increasing the number of curves or the number of nodes does not lead to better results when the number of learnable parameters is close. The best results can only be
achieved with a reasonable trade-off between locally and remotely relevant information.
4.6 Visualization for Tree Graph
In this subsection, we visualize shallow Tree Graphs to further understand it. Since the deep Tree Graph has more high-level semantic information, a local point feature may even represent the entire
point cloud geometric information. We cannot map the deep Tree Graph to the geometric space, so we do not discuss the deep Tree Graph in this subsection.
Our Tree Graph consists of several curves extending in different directions in feature space; unlike a single curve, it does not extend in only one direction. In Fig. 8, we can
clearly see that the nodes of the curve (also known as query points) are mainly concentrated in the corners and edges of the point cloud. These points can provide robust geometric information for the
feature calculation of the center points (also known as key points). Our Tree Graphs aggregate these robust regions with distinct geometric structures as input to the next layer of the network. This
is where our method differs from others and why our method is more effective.
In this paper, we propose a novel method TGNet, which obtains Tree Graphs with local and non-global long-range dependencies by explicit sampling and grouping rules. The aggregation of features is
then performed in a cross-attention mechanism. In this way, the geometric spatial distribution of the point cloud can be explicitly reasoned about, and the geometric shape information can be
incorporated into the local features. Due to these advantages mentioned above, our approach can achieve state-of-the-art results on several point cloud object analysis tasks.
Acknowledgement: Portions of this work were presented at the 8th International Conference on Virtual Reality in 2022, TGNet: Aggregating Geometric Features for 3D Point Cloud Processing.
Funding Statement: This research was supported by the National Natural Science Foundation of China (Grant Nos. 91948203, 52075532).
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
1. Chen, X., Ma, H., Wan, J., Li, B., Xia, T. (2017). Multi-view 3D object detection network for autonomous driving. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.
1907–1915. Honolulu, HI. [Google Scholar]
2. Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L. et al. (2015). 3D shapenets: A deep representation for volumetric shapes. Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pp. 1912–1920. Boston, MA. [Google Scholar]
3. Su, H., Maji, S., Kalogerakis, E., Learned-Miller, E. (2015). Multi-view convolutional neural networks for 3D shape recognition. Proceedings of the IEEE International Conference on Computer Vision
, pp. 945–953. Santiago, CHILE. [Google Scholar]
4. Wang, P. S., Liu, Y., Guo, Y. X., Sun, C. Y., Tong, X. (2017). O-CNN: Octree-based convolutional neural networks for 3D shape analysis. ACM Transactions on Graphics, 36(4), 1–11. [Google Scholar]
5. Riegler, G., Osman Ulusoy, A., Geiger, A. (2017). OctNet: Learning deep 3D representations at high resolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.
3577–3586. Honolulu, HI, USA. [Google Scholar]
6. Yan, X., Zheng, C., Li, Z., Wang, S., Cui, S. (2020). PointASNL: Robust point clouds processing using nonlocal neural networks with adaptive sampling. Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, pp. 5589–5598. Seattle, WA, USA. [Google Scholar]
7. Le, T., Duan, Y. (2018). PointGrid: A deep network for 3D shape understanding. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9204–9214. Salt Lake City, UT,
USA. [Google Scholar]
8. Qi, C. R., Su, H., Mo, K., Guibas, L. J. (2017). PointNet: Deep learning on point sets for 3D classification and segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pp. 652–660. Honolulu, HI. [Google Scholar]
9. Qi, C. R., Yi, L., Su, H., Guibas, L. J. (2017). PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30, 5105–5114. [
Google Scholar]
10. Maturana, D., Scherer, S. (2015). VoxNet: A 3D convolutional neural network for real-time object recognition. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp.
922–928. Hamburg, Germany, IEEE. [Google Scholar]
11. Liu, Y., Fan, B., Meng, G., Lu, J., Xiang, S. et al. (2019). DensePoint: Learning densely contextual representation for efficient point cloud processing. Proceedings of the IEEE/CVF International
Conference on Computer Vision, pp. 5239–5248. Seoul, Korea (South). [Google Scholar]
12. Wang, Y., Sun, Y., Liu, Z., Sarma, S. E., Bronstein, M. M. et al. (2019). Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics, 38(5), 1–12. [Google Scholar]
13. Thomas, H., Qi, C. R., Deschaud, J. E., Marcotegui, B., Goulette, F. et al. (2019). KPConv: Flexible and deformable convolution for point clouds. Proceedings of the IEEE/CVF International
Conference on Computer Vision, pp. 6411–6420. Seoul, Korea (South). [Google Scholar]
14. Guo, M. H., Cai, J. X., Liu, Z. N., Mu, T. J., Martin, R. R. et al. (2021). PCT: Point cloud transformer. Computational Visual Media, 7(2), 187–199. [Google Scholar]
15. Atzmon, M., Maron, H., Lipman, Y. (2018). Point convolutional neural networks by extension operators. arXiv preprint arXiv:1803.10091. [Google Scholar]
16. Wu, W., Qi, Z., Fuxin, L. (2019). PointConv: Deep convolutional networks on 3D point clouds. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9621–9630. Long
Beach, CA. [Google Scholar]
17. Liu, Y., Fan, B., Xiang, S., Pan, C. (2019). Relation-shape convolutional neural network for point cloud analysis. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition, pp. 8895–8904. Long Beach, CA, USA. [Google Scholar]
18. Li, J., Chen, B. M., Lee, G. H. (2018). So-Net: Self-organizing network for point cloud analysis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9397–9406.
Salt Lake City, UT, USA. [Google Scholar]
19. Zhao, H., Jiang, L., Jia, J., Torr, P. H., Koltun, V. (2021). Point transformer. Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16259–16268. [Google Scholar]
20. Li, Y., Xia, R., Zhao, J., Chen, Y., Zou, H. (2022). TGNet: Aggregating geometric features for 3D point cloud processing. 2022 8th International Conference on Virtual Reality (ICVR), pp. 55–61.
Nanjing, China, IEEE. [Google Scholar]
21. Ma, C., Guo, Y., Yang, J., An, W. (2018). Learning multi-view representation with LSTM for 3-D shape recognition and retrieval. IEEE Transactions on Multimedia, 21(5), 1169–1182. [Google Scholar]
22. Feng, Y., Zhang, Z., Zhao, X., Ji, R., Gao, Y. (2018). GvCNN: Group-view convolutional neural networks for 3D shape recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pp. 264–272. Salt Lake City, UT, USA. [Google Scholar]
23. Qi, C. R., Su, H., Nießner, M., Dai, A., Yan, M. et al. (2016). Volumetric and multi-view CNNs for object classification on 3D data. Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pp. 5648–5656. Las Vegas, NV, USA. [Google Scholar]
24. Yu, T., Meng, J., Yuan, J. (2018). Multi-view harmonized bilinear network for 3D object recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 186–194.
Salt Lake City, UT. [Google Scholar]
25. Biasotti, S., Lavoué, G., Falcidieno, B., Pratikakis, I. (2019). Generalizing discrete convolutions for unstructured point clouds. DOI 10.2312/3dor.20191064. [Google Scholar] [CrossRef]
26. Esteves, C., Allen-Blanchette, C., Makadia, A., Daniilidis, K. (2018). Learning so(3) equivariant representations with spherical CNNs. Proceedings of the European Conference on Computer Vision
(ECCV), pp. 52–68. Munich, GERMANY. [Google Scholar]
27. Lei, H., Akhtar, N., Mian, A. (2019). Octree guided cnn with spherical kernels for 3D point clouds. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.
9631–9640. Long Beach, CA. [Google Scholar]
28. Li, Y., Bu, R., Sun, M., Wu, W., Di, X. et al. (2018). PointCNN: Convolution on X-transformed points. 32nd Conference on Neural Information Processing Systems (NIPS), pp. 820–830. Montreal,
CANADA. [Google Scholar]
29. Hua, B. S., Tran, M. K., Yeung, S. K. (2018). Pointwise convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 984–993. Salt Lake City,
UT, USA. [Google Scholar]
30. Zhang, Z., Hua, B. S., Rosen, D. W., Yeung, S. K. (2019). Rotation invariant convolutions for 3D point clouds deep learning. 2019 International Conference on 3D Vision (3DV), pp. 204–213. Quebec
City, QC, Canada, IEEE. [Google Scholar]
31. Duan, Y., Zheng, Y., Lu, J., Zhou, J., Tian, Q. (2019). Structural relational reasoning of point clouds. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.
949–958. Long Beach, CA, USA. [Google Scholar]
32. Li, Y., Bu, R., Sun, M., Wu, W., Di, X. et al. (2018). PointCNN: Convolution on X-transformed points. 32nd Conference on Neural Information Processing Systems (NIPS), Montreal, CANADA. [Google Scholar]
33. Lan, S., Yu, R., Yu, G., Davis, L. S. (2019). Modeling local geometric structure of 3D point clouds using geo-CNN. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition, pp. 998–1008. Long Beach, CA, USA. [Google Scholar]
34. Zhang, C., Song, Y., Yao, L., Cai, W. (2020). Shape-oriented convolution neural network for point cloud analysis. Proceedings of the AAAI Conference on Artificial Intelligence, pp. 12773–12780.
New York, NY. [Google Scholar]
35. Wang, X., Girshick, R., Gupta, A., He, K. (2018). Non-local neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803. Salt Lake City, UT. [
Google Scholar]
36. Chen, Y., Dai, X., Chen, D., Liu, M., Dong, X. et al. (2021). Mobile-former: Bridging mobilenet and transformer. arXiv preprint arXiv:2108.05895. [Google Scholar]
37. Devlin, J., Chang, M. W., Lee, K., Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. [Google Scholar]
38. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y. et al. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE/CVF International Conference on Computer
Vision, pp. 10012–10022. [Google Scholar]
39. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L. et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998–6008. [Google Scholar]
40. Xiang, T., Zhang, C., Song, Y., Yu, J., Cai, W. (2021). Walk in the cloud: Learning curves for point clouds shape analysis. Proceedings of the IEEE/CVF International Conference on Computer Vision
, pp. 915–924. [Google Scholar]
41. Joseph-Rivlin, M., Zvirin, A., Kimmel, R. (2019). Momen(e)t: Flavor the moments in learning to classify shapes. Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops,
Seoul, South Korea. [Google Scholar]
42. Yang, J., Zhang, Q., Ni, B., Li, L., Liu, J. et al. (2019). Modeling point clouds with self-attention and gumbel subset sampling. Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition, pp. 3323–3332. Long Beach, CA. [Google Scholar]
43. Zhao, H., Jiang, L., Fu, C. W., Jia, J. (2019). Pointweb: Enhancing local neighborhood features for point cloud processing. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition, pp. 5565–5573. Long Beach, CA, USA. [Google Scholar]
44. Xu, Y., Fan, T., Xu, M., Zeng, L., Qiao, Y. (2018). SpiderCNN: Deep learning on point sets with parameterized convolutional filters. Proceedings of the European Conference on Computer Vision
(ECCV), pp. 87–102. Munich, Germany. [Google Scholar]
45. Chang, A. X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q. et al. (2015). ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012. [Google Scholar]
Cite This Article
APA Style
Li, Y., Xia, R., Zhao, J., Chen, Y., Tao, L. et al. (2023). Aggregate point cloud geometric features for processing. Computer Modeling in Engineering & Sciences, 136(1), 555-571. https://doi.org/10.32604/cmes.2023.024470
Vancouver Style
Li Y, Xia R, Zhao J, Chen Y, Tao L, Zou H, et al. Aggregate point cloud geometric features for processing. Comput Model Eng Sci. 2023;136(1):555-571 https://doi.org/10.32604/cmes.2023.024470
IEEE Style
Y. Li et al., “Aggregate Point Cloud Geometric Features for Processing,” Comput. Model. Eng. Sci., vol. 136, no. 1, pp. 555-571, 2023. https://doi.org/10.32604/cmes.2023.024470
This work is licensed under a Creative
Commons Attribution 4.0 International License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Defining velocities for accurate kinetic statistics in the Grønbech-Jensen Farago thermostat
We expand on two previous developments in the modeling of discrete-time Langevin systems. One is the well-documented Grønbech-Jensen Farago (GJF) thermostat, which has been demonstrated to give
robust and accurate configurational sampling of the phase space. Another is the recent discovery that also kinetics can be accurately sampled for the GJF method. Through a complete investigation of
all possible finite-difference approximations to the velocity, we arrive at two main conclusions: (1) It is not possible to define a so-called on-site velocity such that kinetic temperature will be
correct and independent of the time step, and (2) there exists a set of infinitely many possibilities for defining a two-point (leap-frog) velocity that measures kinetic energy correctly for linear
systems in addition to the correct configurational statistics obtained from the GJF algorithm. We give explicit expressions for the possible definitions, and we incorporate these into convenient and
practical algorithmic forms of the normal Verlet-type algorithms along with a set of suggested criteria for selecting a useful definition of velocity.
ASJC Scopus subject areas
• Statistical and Nonlinear Physics
• Statistics and Probability
• Condensed Matter Physics
Dive into the research topics of 'Defining velocities for accurate kinetic statistics in the Grønbech-Jensen Farago thermostat'. Together they form a unique fingerprint. | {"url":"https://cris.bgu.ac.il/en/publications/defining-velocities-for-accurate-kinetic-statistics-in-the-gr%C3%B8nbe","timestamp":"2024-11-05T07:54:19Z","content_type":"text/html","content_length":"58321","record_id":"<urn:uuid:b83f2e1c-e7fe-4c96-8da2-17b03d4afa73>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00207.warc.gz"} |
Inverse of a Non-Singular Square Matrix - Applications of Matrices and Determinants
We recall that a square matrix is called a non-singular matrix if its determinant is not equal to zero and a square matrix is called singular if its determinant is zero.
Inverse of a Non-Singular Square Matrix
We recall that a square matrix is called a non-singular matrix if its determinant is not equal to zero and a square matrix is called singular if its determinant is zero. We have already learnt about
multiplication of a matrix by a scalar, addition of two matrices, and multiplication of two matrices. But a rule could not be formulated to perform division of a matrix by another matrix since a
matrix is just an arrangement of numbers and has no numerical value. When we say that, a matrix A is of order n, we mean that A is a square matrix having n rows and n columns.
In the case of a real number x ≠0, there exists a real number y (=1/x) called the inverse (or reciprocal) of x such that xy = yx = 1. In the same line of thinking, when a matrix A is given, we
search for a matrix B such that the products AB and BA can be found and AB = BA = I , where I is a unit matrix.
In this section, we define the inverse of a non-singular square matrix and prove that a non-singular square matrix has a unique inverse. We will also study some of the properties of inverse matrix.
For all these activities, we need a matrix called the adjoint of a square matrix.
Tags : Applications of Matrices and Determinants , 12th Mathematics : UNIT 1 : Applications of Matrices and Determinants
Study Material, Lecturing Notes, Assignment, Reference, Wiki description explanation, brief detail
12th Mathematics : UNIT 1 : Applications of Matrices and Determinants : Inverse of a Non-Singular Square Matrix | Applications of Matrices and Determinants | {"url":"https://www.brainkart.com/article/Inverse-of-a-Non-Singular-Square-Matrix_39055/","timestamp":"2024-11-10T16:28:44Z","content_type":"text/html","content_length":"36841","record_id":"<urn:uuid:9eaadf97-1cc2-44bf-85c1-be19d8ba727a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00618.warc.gz"} |
The semi-continous quadratic mixture design problem
The semi-continuous quadratic mixture design problem (SCQMDP) is described as a problem with linear, quadratic and semi-continuity constraints. Moreover, a linear cost objective and an integer valued objective are introduced. The research question is to deal with the SCQMD problem from a Branch-and-Bound perspective generating robust solutions. Therefore, an algorithm is outlined which is rigorous in the sense it identifies instances where decision makers tighten requirements such that no ε-robust solution exists. The algorithm is tested on several cases derived from industry.
Original language English
Place of Publication Wageningen
Publisher Wageningen University
Number of pages 17
Publication status Published - 2006
Publication series
Name Working paper / Mansholt Graduate School : Discussion paper
Publisher Mansholt Graduate School of Social Sciences
• operations research
• algorithms
• decision making
• mathematics
Dive into the research topics of 'The semi-continous quadratic mixture design problem'. Together they form a unique fingerprint. | {"url":"https://research.wur.nl/en/publications/the-semi-continous-quadratic-mixture-design-problem","timestamp":"2024-11-09T03:50:47Z","content_type":"text/html","content_length":"45552","record_id":"<urn:uuid:a6873a35-4710-4c2b-8895-b3366d118db5>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00014.warc.gz"} |
Printable Ordinal Numbers Worksheet For Kindergarten - OrdinalNumbers.com
Kindergarten Ordinal Numbers Worksheet – It is possible to enumerate infinite sets by using ordinal numbers. It is also possible to use them to generalize ordinal numbers. 1st The ordinal numbers are
among the fundamental concepts in math. It is a numerical number that shows where an object is within a list. The ordinal number … Read more
Ordinal Numbers Worksheet For Preschool
Ordinal Numbers Worksheet For Preschool – With ordinal numbers, it is possible to count unlimited sets. They also can be used to generalize ordinal quantities. 1st One of the fundamental ideas of
mathematics is the ordinal numbers. It is a number that identifies where an object is in a list of objects. Ordinal numbers are … Read more
Ordinal Numbers Worksheet For Kindergarten
Ordinal Numbers Worksheet For Kindergarten – By using ordinal numbers, it is possible to count infinite sets. They can also be utilized as a way to generalize ordinal numbers. 1st The foundational
idea of mathematics is the ordinal. It is a numerical value that represents the location of an object in a list. Typically, ordinal … Read more | {"url":"https://www.ordinalnumbers.com/tag/printable-ordinal-numbers-worksheet-for-kindergarten/","timestamp":"2024-11-02T12:40:24Z","content_type":"text/html","content_length":"60264","record_id":"<urn:uuid:13d21ce6-9a37-45ef-b52a-8a356d1f016f>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00264.warc.gz"} |
Network Working Group D. Eastlake, 3rd
Request for Comments: 1750 DEC
Category: Informational S. Crocker
J. Schiller
December 1994
Randomness Recommendations for Security
Status of this Memo
This memo provides information for the Internet community. This memo
does not specify an Internet standard of any kind. Distribution of
this memo is unlimited.
Security systems today are built on increasingly strong cryptographic
algorithms that foil pattern analysis attempts. However, the security
of these systems is dependent on generating secret quantities for
passwords, cryptographic keys, and similar quantities. The use of
pseudo-random processes to generate secret quantities can result in
pseudo-security. The sophisticated attacker of these security
systems may find it easier to reproduce the environment that produced
the secret quantities, searching the resulting small set of
possibilities, than to locate the quantities in the whole of the
number space.
Choosing random quantities to foil a resourceful and motivated
adversary is surprisingly difficult. This paper points out many
pitfalls in using traditional pseudo-random number generation
techniques for choosing such quantities. It recommends the use of
truly random hardware techniques and shows that the existing hardware
on many systems can be used for this purpose. It provides
suggestions to ameliorate the problem when a hardware solution is not
available. And it gives examples of how large such quantities need
to be for some particular applications.
Eastlake, Crocker & Schiller [Page 1]
RFC 1750 Randomness Recommendations for Security December 1994
Comments on this document that have been incorporated were received
from (in alphabetic order) the following:
David M. Balenson (TIS)
Don Coppersmith (IBM)
Don T. Davis (consultant)
Carl Ellison (Stratus)
Marc Horowitz (MIT)
Christian Huitema (INRIA)
Charlie Kaufman (IRIS)
Steve Kent (BBN)
Hal Murray (DEC)
Neil Haller (Bellcore)
Richard Pitkin (DEC)
Tim Redmond (TIS)
Doug Tygar (CMU)
Table of Contents
1. Introduction........................................... 3
2. Requirements........................................... 4
3. Traditional Pseudo-Random Sequences.................... 5
4. Unpredictability....................................... 7
4.1 Problems with Clocks and Serial Numbers............... 7
4.2 Timing and Content of External Events................ 8
4.3 The Fallacy of Complex Manipulation.................. 8
4.4 The Fallacy of Selection from a Large Database....... 9
5. Hardware for Randomness............................... 10
5.1 Volume Required...................................... 10
5.2 Sensitivity to Skew.................................. 10
5.2.1 Using Stream Parity to De-Skew..................... 11
5.2.2 Using Transition Mappings to De-Skew............... 12
5.2.3 Using FFT to De-Skew............................... 13
5.2.4 Using Compression to De-Skew....................... 13
5.3 Existing Hardware Can Be Used For Randomness......... 14
5.3.1 Using Existing Sound/Video Input................... 14
5.3.2 Using Existing Disk Drives......................... 14
6. Recommended Non-Hardware Strategy..................... 14
6.1 Mixing Functions..................................... 15
6.1.1 A Trivial Mixing Function.......................... 15
6.1.2 Stronger Mixing Functions.......................... 16
6.1.3 Diffie-Hellman as a Mixing Function................ 17
6.1.4 Using a Mixing Function to Stretch Random Bits..... 17
6.1.5 Other Factors in Choosing a Mixing Function........ 18
6.2 Non-Hardware Sources of Randomness................... 19
6.3 Cryptographically Strong Sequences................... 19
6.3.1 Traditional Strong Sequences....................... 20
6.3.2 The Blum Blum Shub Sequence Generator.............. 21
7. Key Generation Standards.............................. 22
7.1 US DoD Recommendations for Password Generation....... 23
7.2 X9.17 Key Generation................................. 23
8. Examples of Randomness Required....................... 24
8.1 Password Generation................................. 24
8.2 A Very High Security Cryptographic Key............... 25
8.2.1 Effort per Key Trial............................... 25
8.2.2 Meet in the Middle Attacks......................... 26
8.2.3 Other Considerations............................... 26
9. Conclusion............................................ 27
10. Security Considerations.............................. 27
References............................................... 28
Authors' Addresses....................................... 30
1. Introduction
Software cryptography is coming into wider use. Systems like
Kerberos, PEM, PGP, etc. are maturing and becoming a part of the
network landscape [PEM]. These systems provide substantial
protection against snooping and spoofing. However, there is a
potential flaw. At the heart of all cryptographic systems is the
generation of secret, unguessable (i.e., random) numbers.
For the present, the lack of generally available facilities for
generating such unpredictable numbers is an open wound in the design
of cryptographic software. For the software developer who wants to
build a key or password generation procedure that runs on a wide
range of hardware, the only safe strategy so far has been to force
the local installation to supply a suitable routine to generate
random numbers. To say the least, this is an awkward, error-prone
and unpalatable solution.
It is important to keep in mind that the requirement is for data that
an adversary has a very low probability of guessing or determining.
This will fail if pseudo-random data is used which only meets
traditional statistical tests for randomness or which is based on
limited range sources, such as clocks. Frequently such random
quantities are determinable by an adversary searching through an
embarrassingly small space of possibilities.
This informational document suggests techniques for producing random
quantities that will be resistant to such attack. It recommends that
future systems include hardware random number generation or provide
access to existing hardware that can be used for this purpose. It
suggests methods for use if such hardware is not available. And it
gives some estimates of the number of random bits required for sample
applications.
2. Requirements
Probably the most commonly encountered randomness requirement today
is the user password. This is usually a simple character string.
Obviously, if a password can be guessed, it does not provide
security. (For re-usable passwords, it is desirable that users be
able to remember the password. This may make it advisable to use
pronounceable character strings or phrases composed of ordinary
words. But this only affects the format of the password information,
not the requirement that the password be very hard to guess.)
Many other requirements come from the cryptographic arena.
Cryptographic techniques can be used to provide a variety of services
including confidentiality and authentication. Such services are
based on quantities, traditionally called "keys", that are unknown to
and unguessable by an adversary.
In some cases, such as the use of symmetric encryption with the one
time pads [CRYPTO*] or the US Data Encryption Standard [DES], the
parties who wish to communicate confidentially and/or with
authentication must all know the same secret key. In other cases,
using what are called asymmetric or "public key" cryptographic
techniques, keys come in pairs. One key of the pair is private and
must be kept secret by one party, the other is public and can be
published to the world. It is computationally infeasible to
determine the private key from the public key [ASYMMETRIC, CRYPTO*].
The frequency and volume of the requirement for random quantities
differs greatly for different cryptographic systems. Using pure RSA
[CRYPTO*], random quantities are required when the key pair is
generated, but thereafter any number of messages can be signed
without any further need for randomness. The public key Digital
Signature Algorithm that has been proposed by the US National
Institute of Standards and Technology (NIST) requires good random
numbers for each signature. And encrypting with a one time pad, in
principle the strongest possible encryption technique, requires a
volume of randomness equal to all the messages to be processed.
In most of these cases, an adversary can try to determine the
"secret" key by trial and error. (This is possible as long as the
key is enough smaller than the message that the correct key can be
uniquely identified.) The probability of an adversary succeeding at
this must be made acceptably low, depending on the particular
application. The size of the space the adversary must search is
related to the amount of key "information" present in the information
theoretic sense [SHANNON]. This depends on the number of different
secret values possible and the probability of each value as follows:
Bits-of-info = - SUM_i ( p_i * log2( p_i ) )
where i varies from 1 to the number of possible secret values and p
sub i is the probability of the value numbered i. (Since p sub i is
less than one, the log will be negative so each term in the sum will
be non-negative.)
If there are 2^n different values of equal probability, then n bits
of information are present and an adversary would, on the average,
have to try half of the values, or 2^(n-1) , before guessing the
secret quantity. If the probability of different values is unequal,
then there is less information present and fewer guesses will, on
average, be required by an adversary. In particular, any values that
the adversary can know are impossible, or are of low probability, can
be initially ignored by an adversary, who will search through the
more probable values first.
For example, consider a cryptographic system that uses 56 bit keys.
If these 56 bit keys are derived by using a fixed pseudo-random
number generator that is seeded with an 8 bit seed, then an adversary
needs to search through only 256 keys (by running the pseudo-random
number generator with every possible seed), not the 2^56 keys that
may at first appear to be the case. Only 8 bits of "information" are
in these 56 bit keys.
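The information measure above, and the 8-bit-seed example, can be
checked with a minimal sketch (the helper name `bits_of_info` is ours,
not the RFC's):

```python
import math

def bits_of_info(probs):
    """Bits-of-info = - SUM_i ( p_i * log2( p_i ) ).

    Zero-probability values contribute nothing and are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 256 equally probable seeds carry only 8 bits of information, even
# though the keys derived from them are 56 bits long.
print(bits_of_info([1 / 256] * 256))  # 8.0
```

As the text notes, an adversary therefore searches 256 seeds, not 2^56
keys.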
3. Traditional Pseudo-Random Sequences
Most traditional sources of random numbers use deterministic sources
of "pseudo-random" numbers. These typically start with a "seed"
quantity and use numeric or logical operations to produce a sequence
of values.
[KNUTH] has a classic exposition on pseudo-random numbers.
Applications he mentions are simulation of natural phenomena,
sampling, numerical analysis, testing computer programs, decision
making, and games. None of these have the same characteristics as
the sort of security uses we are talking about. Only in the last two
could there be an adversary trying to find the random quantity.
However, in these cases, the adversary normally has only a single
chance to use a guessed value. In guessing passwords or attempting
to break an encryption scheme, the adversary normally has many,
perhaps unlimited, chances at guessing the correct value and should
be assumed to be aided by a computer.
For testing the "randomness" of numbers, Knuth suggests a variety of
measures including statistical and spectral. These tests check
things like autocorrelation between different parts of a "random"
sequence or distribution of its values. They could be met by a
constant stored random sequence, such as the "random" sequence
printed in the CRC Standard Mathematical Tables [CRC].
A typical pseudo-random number generation technique, known as a
linear congruence pseudo-random number generator, is modular
arithmetic where the N+1th value is calculated from the Nth value by
V(N+1) = ( V(N) * a + b ) (Mod c)
The above technique has a strong relationship to linear shift
register pseudo-random number generators, which are well understood
cryptographically [SHIFT*]. In such generators bits are introduced
at one end of a shift register as the Exclusive Or (binary sum
without carry) of bits from selected fixed taps into the register.
For example:
+----+ +----+ +----+ +----+
| B | <-- | B | <-- | B | <-- . . . . . . <-- | B | <-+
| 0 | | 1 | | 2 | | n | |
+----+ +----+ +----+ +----+ |
| | | |
| | V +-----+
| V +----------------> | |
V +-----------------------------> | XOR |
+---------------------------------------------------> | |
V(N+1) = ( ( V(N) * 2 ) + B(0) .xor. B(2) ... ) (Mod 2^n)
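A minimal software model of such a shift register can be sketched as
follows; the 4-bit width and tap positions {3, 2} are illustrative
choices (they correspond to a maximal-length register), not values from
the text:

```python
def lfsr_states(state, taps, nbits):
    """Linear feedback shift register, shifting left one bit per step.

    The bit fed in at the low end is the XOR of the tapped bit
    positions of the old state, as in the register diagram above."""
    mask = (1 << nbits) - 1
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
        yield state

# A 4-bit register with taps {3, 2} cycles through all 15 nonzero
# states before repeating.
g = lfsr_states(1, [3, 2], 4)
print(len({next(g) for _ in range(15)}))  # 15
```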
The goodness of traditional pseudo-random number generator algorithms
is measured by statistical tests on such sequences. Carefully chosen
values of the initial V and a, b, and c or the placement of shift
register taps in the above simple processes can produce excellent
statistics.
These sequences may be adequate in simulations (Monte Carlo
experiments) as long as the sequence is orthogonal to the structure
of the space being explored. Even there, subtle patterns may cause
problems. However, such sequences are clearly bad for use in
security applications. They are fully predictable if the initial
state is known. Depending on the form of the pseudo-random number
generator, the sequence may be determinable from observation of a
short portion of the sequence [CRYPTO*, STERN]. For example, with
the generators above, one can determine V(n+1) given knowledge of
V(n). In fact, it has been shown that with these techniques, even if
only one bit of the pseudo-random values is released, the seed can be
determined from short sequences.
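This predictability is easy to demonstrate. In the sketch below the
constants are illustrative (borrowed from a common C-library rand()
implementation), and the point is only that knowing one value and the
parameters yields every later value:

```python
def lcg_next(v, a=1103515245, b=12345, c=2**31):
    """One step of the linear congruence V(N+1) = (V(N)*a + b) mod c."""
    return (v * a + b) % c

# A victim's "random" sequence:
seq = [12345]
for _ in range(5):
    seq.append(lcg_next(seq[-1]))

# An adversary who observes the single value V(2) and knows the
# parameters predicts V(4) exactly.
observed = seq[2]
predicted = lcg_next(lcg_next(observed))
print(predicted == seq[4])  # True
```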
Not only have linear congruential generators been broken, but
techniques are now known for breaking all polynomial congruential
generators.
4. Unpredictability
Randomness in the traditional sense described in section 3 is NOT the
same as the unpredictability required for security use.
For example, use of a widely available constant sequence, such as
that from the CRC tables, is very weak against an adversary. Once
they learn of or guess it, they can easily break all security, future
and past, based on the sequence [CRC]. Yet the statistical
properties of these tables are good.
The following sections describe the limitations of some randomness
generation techniques and sources.
4.1 Problems with Clocks and Serial Numbers
Computer clocks, or similar operating system or hardware values,
provide significantly fewer real bits of unpredictability than might
appear from their specifications.
Tests have been done on clocks on numerous systems and it was found
that their behavior can vary widely and in unexpected ways. One
version of an operating system running on one set of hardware may
actually provide, say, microsecond resolution in a clock while a
different configuration of the "same" system may always provide the
same lower bits and only count in the upper bits at much lower
resolution. This means that successive reads on the clock may
produce identical values even if enough time has passed that the
value "should" change based on the nominal clock resolution. There
are also cases where frequently reading a clock can produce
artificial sequential values because of extra code that checks for
the clock being unchanged between two reads and increases it by one!
Designing portable application code to generate unpredictable numbers
based on such system clocks is particularly challenging because the
system designer does not always know the properties of the system
clocks that the code will execute on.
Use of a hardware serial number such as an Ethernet address may also
provide fewer bits of uniqueness than one would guess. Such
quantities are usually heavily structured and subfields may have only
a limited range of possible values or values easily guessable based
on approximate date of manufacture or other data. For example, it is
likely that most of the Ethernet cards installed on Digital Equipment
Corporation (DEC) hardware within DEC were manufactured by DEC
itself, which significantly limits the range of built in addresses.
Problems such as those described above related to clocks and serial
numbers make code to produce unpredictable quantities difficult if
the code is to be ported across a variety of computer platforms and
systems.
4.2 Timing and Content of External Events
It is possible to measure the timing and content of mouse movement,
key strokes, and similar user events. This is a reasonable source of
unguessable data with some qualifications. On some machines, inputs
such as key strokes are buffered. Even though the user's inter-
keystroke timing may have sufficient variation and unpredictability,
there might not be an easy way to access that variation. Another
problem is that no standard method exists to sample timing details.
This makes it hard to build standard software intended for
distribution to a large range of machines based on this technique.
The amount of mouse movement or the keys actually hit are usually
easier to access than timings but may yield less unpredictability as
the user may provide highly repetitive input.
Other external events, such as network packet arrival times, can also
be used with care. In particular, the possibility of manipulation of
such times by an adversary must be considered.
4.3 The Fallacy of Complex Manipulation
One strategy which may give a misleading appearance of
unpredictability is to take a very complex algorithm (or an excellent
traditional pseudo-random number generator with good statistical
properties) and calculate a cryptographic key by starting with the
current value of a computer system clock as the seed. An adversary
who knew roughly when the generator was started would have a
relatively small number of seed values to test as they would know
likely values of the system clock. Large numbers of pseudo-random
bits could be generated but the search space an adversary would need
to check could be quite small.
Thus very strong and/or complex manipulation of data will not help if
the adversary can learn what the manipulation is and there is not
enough unpredictability in the starting seed value. Even if they can
not learn what the manipulation is, they may be able to use the
limited number of results stemming from a limited number of seed
values to defeat security.
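A sketch of such an attack follows; the key-derivation routine, the
clock value, and the one-hour uncertainty window are all hypothetical:

```python
import random

def derive_key(seed_time):
    """Hypothetical (flawed) key derivation: a 56-bit key produced by a
    deterministic PRNG seeded with a clock reading."""
    rng = random.Random(seed_time)
    return rng.getrandbits(56)

secret_seed = 1_700_000_123           # when the generator was started
key = derive_key(secret_seed)

# An adversary who knows the start time to within an hour searches at
# most 3601 seeds -- not 2^56 keys -- to recover the "random" key.
recovered = next(s for s in range(secret_seed - 3600, secret_seed + 1)
                 if derive_key(s) == key)
print(recovered == secret_seed)  # True
```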
Another serious strategy error is to assume that a very complex
pseudo-random number generation algorithm will produce strong random
numbers when there has been no theory behind or analysis of the
algorithm. There is an excellent example of this fallacy right near
the beginning of chapter 3 in [KNUTH] where the author describes a
complex algorithm. It was intended that the machine language program
corresponding to the algorithm would be so complicated that a person
trying to read the code without comments wouldn't know what the
program was doing. Unfortunately, actual use of this algorithm
showed that it almost immediately converged to a single repeated
value in one case and a small cycle of values in another case.
Not only does complex manipulation not help you if you have a limited
range of seeds but blindly chosen complex manipulation can destroy
the randomness in a good seed!
4.4 The Fallacy of Selection from a Large Database
Another strategy that can give a misleading appearance of
unpredictability is to select a quantity randomly from a database and
assume that its strength is related to the total number of bits
in the database. For example, typical USENET servers as of this date
process over 35 megabytes of information per day. Assume a random
quantity was selected by fetching 32 bytes of data from a random
starting point in this data. This does not yield 32*8 = 256 bits
worth of unguessability. Even after allowing that much of the data
is human language and probably has more like 2 or 3 bits of
information per byte, it doesn't yield 32*2.5 = 80 bits of
unguessability. For an adversary with access to the same 35
megabytes the unguessability rests only on the starting point of the
selection. That is, at best, about 25 bits of unguessability in this
case.
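The arithmetic can be confirmed directly: when the adversary holds the
same 35 megabytes, only the starting offset of the read is unguessable:

```python
import math

# 35 MB of public data, read at a random 32-byte offset: the
# unguessability is the entropy of the offset, not of the bytes read.
db_bytes = 35 * 2**20
offset_bits = math.log2(db_bytes)
print(round(offset_bits))  # 25
```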
The same argument applies to selecting sequences from the data on a
CD ROM or Audio CD recording or any other large public database. If
the adversary has access to the same database, this "selection from a
large volume of data" step buys very little. However, if a selection
can be made from data to which the adversary has no access, such as
system buffers on an active multi-user system, it may be of some
value.
5. Hardware for Randomness
Is there any hope for strong portable randomness in the future?
There might be. All that's needed is a physical source of
unpredictable numbers.
A thermal noise or radioactive decay source and a fast, free-running
oscillator would do the trick directly [GIFFORD]. This is a trivial
amount of hardware, and could easily be included as a standard part
of a computer system's architecture. Furthermore, any system with a
spinning disk or the like has an adequate source of randomness
[DAVIS]. All that's needed is the common perception among computer
vendors that this small additional hardware and the software to
access it is necessary and useful.
5.1 Volume Required
How much unpredictability is needed? Is it possible to quantify the
requirement in, say, number of random bits per second?
The answer is not very much is needed. For DES, the key is 56 bits
and, as we show in an example in Section 8, even the highest security
system is unlikely to require a keying material of over 200 bits. If
a series of keys are needed, it can be generated from a strong random
seed using a cryptographically strong sequence as explained in
Section 6.3. A few hundred random bits generated once a day would be
enough using such techniques. Even if the random bits are generated
as slowly as one per second and it is not possible to overlap the
generation process, it should be tolerable in high security
applications to wait 200 seconds occasionally.
These numbers are trivial to achieve. It could be done by a person
repeatedly tossing a coin. Almost any hardware process is likely to
be much faster.
5.2 Sensitivity to Skew
Is there any specific requirement on the shape of the distribution of
the random numbers? The good news is the distribution need not be
uniform. All that is needed is a conservative estimate of how non-
uniform it is to bound performance. Two simple techniques to de-skew
the bit stream are given below and stronger techniques are mentioned
in Section 6.1.2 below.
5.2.1 Using Stream Parity to De-Skew
Consider taking a sufficiently long string of bits and mapping the string
to "zero" or "one". The mapping will not yield a perfectly uniform
distribution, but it can be as close as desired. One mapping that
serves the purpose is to take the parity of the string. This has the
advantages that it is robust across all degrees of skew up to the
estimated maximum skew and is absolutely trivial to implement in
hardware.
The following analysis gives the number of bits that must be sampled:
Suppose the ratio of ones to zeros is 0.5 + e : 0.5 - e, where e is
between 0 and 0.5 and is a measure of the "eccentricity" of the
distribution. Consider the distribution of the parity function of N
bit samples. The probabilities that the parity will be one or zero
will be the sum of the odd or even terms in the binomial expansion of
(p + q)^N, where p = 0.5 + e, the probability of a one, and q = 0.5 -
e, the probability of a zero.
These sums can be computed easily as
1/2 * ( (p + q)^N + (p - q)^N )

and

1/2 * ( (p + q)^N - (p - q)^N ).
(Which one corresponds to the probability the parity will be 1
depends on whether N is odd or even.)
Since p + q = 1 and p - q = 2e, these expressions reduce to
1/2 * [1 + (2e)^N]

and

1/2 * [1 - (2e)^N].
Neither of these will ever be exactly 0.5 unless e is zero, but we
can bring them arbitrarily close to 0.5. If we want the
probabilities to be within some delta d of 0.5, then
( 0.5 + ( 0.5 * (2e)^N ) ) < 0.5 + d.
Solving for N yields N > log(2d)/log(2e). (Note that 2e is less than
1, so its log is negative. Division by a negative number reverses
the sense of an inequality.)
The following table gives the length of the string which must be
sampled for various degrees of skew in order to come within 0.001 of
a 50/50 distribution.
| Prob(1) | e | N |
| 0.5 | 0.00 | 1 |
| 0.6 | 0.10 | 4 |
| 0.7 | 0.20 | 7 |
| 0.8 | 0.30 | 13 |
| 0.9 | 0.40 | 28 |
| 0.95 | 0.45 | 59 |
| 0.99 | 0.49 | 308 |
The last entry shows that even if the distribution is skewed 99% in
favor of ones, the parity of a string of 308 samples will be within
0.001 of a 50/50 distribution.
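Both the parity mapping and the sample-length bound reduce to a few
lines; the helper names are ours:

```python
import math

def parity_deskew(bits, n):
    """Collapse each non-overlapping n-bit block to its parity (the
    XOR of the block's bits), discarding any trailing partial block."""
    out = []
    for i in range(0, len(bits) - n + 1, n):
        p = 0
        for b in bits[i:i + n]:
            p ^= b
        out.append(p)
    return out

def blocks_needed(e, d):
    """Smallest integer N with N > log(2d)/log(2e), per the derivation
    above.  Requires 0 < e < 0.5 and 0 < d < 0.5."""
    return math.ceil(math.log(2 * d) / math.log(2 * e))

# Reproduce the last row of the table: 99% skew, within 0.001 of 50/50.
print(blocks_needed(0.49, 0.001))  # 308
```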
5.2.2 Using Transition Mappings to De-Skew
Another technique, originally due to von Neumann [VON NEUMANN], is to
examine a bit stream as a sequence of non-overlapping pairs. You
could then discard any 00 or 11 pairs found, interpret 01 as a 0 and
10 as a 1. Assume the probability of a 1 is 0.5+e and the
probability of a 0 is 0.5-e where e is the eccentricity of the source
and described in the previous section. Then the probability of each
pair is as follows:
        pair        probability
        00          (0.5 - e)^2          =  0.25 - e + e^2
        01          (0.5 - e)*(0.5 + e)  =  0.25 - e^2
        10          (0.5 + e)*(0.5 - e)  =  0.25 - e^2
        11          (0.5 + e)^2          =  0.25 + e + e^2
This technique will completely eliminate any bias but at the expense
of taking an indeterminate number of input bits for any particular
desired number of output bits. The probability of any particular
pair being discarded is 0.5 + 2e^2 so the expected number of input
bits to produce X output bits is X/(0.25 - e^2).
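A minimal sketch of the transition-mapping technique, with an illustrative simulation (the Prob(1) = 0.8 stream and the RNG seed are arbitrary choices, not from the RFC):

```python
import random

def von_neumann(bits):
    """De-skew by examining non-overlapping pairs: discard 00 and 11,
    interpret 01 as a 0 and 10 as a 1."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(0 if (a, b) == (0, 1) else 1)
    return out

rng = random.Random(1)
skewed = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]  # e = 0.3
out = von_neumann(skewed)
ones_fraction = sum(out) / len(out)
# Expected kept fraction per pair is 2 * (0.25 - e^2) = 0.32 of 50,000 pairs,
# and the kept bits are unbiased regardless of e.
```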
This technique assumes that the bits are from a stream where each bit
has the same probability of being a 0 or 1 as any other bit in the
stream and that bits are not correlated, i.e., that the bits are
independent and identically distributed. If alternate bits were from two
correlated sources, for example, the above analysis breaks down.
The above technique also provides another illustration of how a
simple statistical analysis can mislead if one is not always on the
lookout for patterns that could be exploited by an adversary. If the
algorithm were mis-read slightly so that overlapping successive bit
pairs were used instead of non-overlapping pairs, the statistical
analysis given is the same; however, instead of providing an
unbiased uncorrelated series of random 1's and 0's, it produces a
totally predictable sequence of exactly alternating 1's and 0's.
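The mis-read variant can be demonstrated in a few lines: every kept overlapping pair is a 0-to-1 or 1-to-0 transition, and transitions must alternate direction, so the output strictly alternates (illustrative code, not from the RFC):

```python
def von_neumann_overlapping(bits):
    """The mis-read variant using overlapping successive pairs.
    Pairwise statistics look the same, but the output is predictable."""
    out = []
    for i in range(len(bits) - 1):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(0 if (a, b) == (0, 1) else 1)
    return out

bits = [0, 0, 1, 1, 1, 0, 1, 0, 0, 1]
out = von_neumann_overlapping(bits)
# Each kept pair marks a transition; successive transitions alternate
# direction, so `out` is an exactly alternating sequence.
```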
5.2.3 Using FFT to De-Skew
When real world data consists of strongly biased or correlated bits,
it may still contain useful amounts of randomness. This randomness
can be extracted through use of the discrete Fourier transform or its
optimized variant, the FFT.
Using the Fourier transform of the data, strong correlations can be
discarded. If adequate data is processed and remaining correlations
decay, spectral lines approaching statistical independence and
normally distributed randomness can be produced [BRILLINGER].
5.2.4 Using Compression to De-Skew
Reversible compression techniques also provide a crude method of de-
skewing a skewed bit stream. This follows directly from the
definition of reversible compression and the formula in Section 2
above for the amount of information in a sequence. Since the
compression is reversible, the same amount of information must be
present in the shorter output than was present in the longer input.
By the Shannon information equation, this is only possible if, on
average, the probabilities of the different shorter sequences are
more uniformly distributed than were the probabilities of the longer
sequences. Thus the shorter sequences are de-skewed relative to the
input.
However, many compression techniques add a somewhat predictable
preface to their output stream and may insert such a sequence again
periodically in their output or otherwise introduce subtle patterns
of their own. They should be considered only a rough technique
compared with those described above or in Section 6.1.2. At a
minimum, the beginning of the compressed sequence should be skipped
and only later bits used for applications requiring random bits.
5.3 Existing Hardware Can Be Used For Randomness
As described below, many computers come with hardware that can, with
care, be used to generate truly random quantities.
5.3.1 Using Existing Sound/Video Input
Increasingly computers are being built with inputs that digitize some
real world analog source, such as sound from a microphone or video
input from a camera. Under appropriate circumstances, such input can
provide reasonably high quality random bits. The "input" from a
sound digitizer with no source plugged in or a camera with the lens
cap on, if the system has enough gain to detect anything, is
essentially thermal noise.
For example, on a SPARCstation, one can read from the /dev/audio
device with nothing plugged into the microphone jack. Such data is
essentially random noise although it should not be trusted without
some checking in case of hardware failure. It will, in any case,
need to be de-skewed as described elsewhere.
Combining this with compression to de-skew, one can, in UNIXese,
generate a huge amount of medium quality random data by doing
cat /dev/audio | compress - >random-bits-file
5.3.2 Using Existing Disk Drives
Disk drives have small random fluctuations in their rotational speed
due to chaotic air turbulence [DAVIS]. By adding low level disk seek
time instrumentation to a system, a series of measurements can be
obtained that include this randomness. Such data is usually highly
correlated so that significant processing is needed, including FFT
(see section 5.2.3). Nevertheless experimentation has shown that,
with such processing, disk drives easily produce 100 bits a minute or
more of excellent random data.
Partly offsetting this need for processing is the fact that disk
drive failure will normally be rapidly noticed. Thus, problems with
this method of random number generation due to hardware failure are
very unlikely.
6. Recommended Non-Hardware Strategy
What is the best overall strategy for meeting the requirement for
unguessable random numbers in the absence of a reliable hardware
source? It is to obtain random input from a large number of
uncorrelated sources and to mix them with a strong mixing function.
Such a function will preserve the randomness present in any of the
sources even if other quantities being combined are fixed or easily
guessable. This may be advisable even with a good hardware source, as
hardware can also fail, though this should be weighed against any
increase in the chance of overall failure due to added software
complexity.
6.1 Mixing Functions
A strong mixing function is one which combines two or more inputs and
produces an output where each output bit is a different complex non-
linear function of all the input bits. On average, changing any
input bit will change about half the output bits. But because the
relationship is complex and non-linear, no particular output bit is
guaranteed to change when any particular input bit is changed.
Consider the problem of converting a stream of bits that is skewed
towards 0 or 1 to a shorter stream which is more random, as discussed
in Section 5.2 above. This is simply another case where a strong
mixing function is desired, mixing the input bits to produce a
smaller number of output bits. The technique given in Section 5.2.1
of using the parity of a number of bits is simply the result of
successively Exclusive Or'ing them which is examined as a trivial
mixing function immediately below. Use of stronger mixing functions
to extract more of the randomness in a stream of skewed bits is
examined in Section 6.1.2.
6.1.1 A Trivial Mixing Function
A trivial example for single bit inputs is the Exclusive Or function,
which is equivalent to addition without carry, as shown in the table
below. This is a degenerate case in which the one output bit always
changes for a change in either input bit. But, despite its
simplicity, it will still provide a useful illustration.
        input 1    input 2    output
           0          0          0
           0          1          1
           1          0          1
           1          1          0
If inputs 1 and 2 are uncorrelated and combined in this fashion then
the output will be an even better (less skewed) random bit than the
inputs. If we assume an "eccentricity" e as defined in Section 5.2
above, then the output eccentricity relates to the input eccentricity
as follows:
        e_output = 2 * e_input1 * e_input2
Since e is never greater than 1/2, the eccentricity is always
improved except in the case where at least one input is a totally
skewed constant. This is illustrated in the following table where
the top and left side values are the two input eccentricities and the
entries are the output eccentricity:
        e      0.00   0.10   0.20   0.30   0.40   0.50
        0.00   0.00   0.00   0.00   0.00   0.00   0.00
        0.10   0.00   0.02   0.04   0.06   0.08   0.10
        0.20   0.00   0.04   0.08   0.12   0.16   0.20
        0.30   0.00   0.06   0.12   0.18   0.24   0.30
        0.40   0.00   0.08   0.16   0.24   0.32   0.40
        0.50   0.00   0.10   0.20   0.30   0.40   0.50
However, keep in mind that the above calculations assume that the
inputs are not correlated. If the inputs were, say, the parity of
the number of minutes from midnight on two clocks accurate to a few
seconds, then each might appear random if sampled at random intervals
much longer than a minute. Yet if they were both sampled and
combined with xor, the result would be zero most of the time.
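The relation between input and output eccentricity follows from the truth table and can be verified numerically. A sketch (note: with the Prob(1) = 0.5 + e convention the algebra gives an output eccentricity of -2*e1*e2, so it is the magnitudes that match the table):

```python
def xor_eccentricity(e1, e2):
    """Eccentricity of the XOR of two independent bits whose probabilities
    of being 1 are 0.5 + e1 and 0.5 + e2."""
    p1, p2 = 0.5 + e1, 0.5 + e2
    prob_one = p1 * (1 - p2) + (1 - p1) * p2  # probability the XOR is 1
    return prob_one - 0.5

# |e_out| = 2 * e1 * e2 for every entry shown in the table.
for e1 in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    for e2 in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
        assert abs(abs(xor_eccentricity(e1, e2)) - 2 * e1 * e2) < 1e-12
```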
6.1.2 Stronger Mixing Functions
The US Government Data Encryption Standard [DES] is an example of a
strong mixing function for multiple bit quantities. It takes up to
120 bits of input (64 bits of "data" and 56 bits of "key") and
produces 64 bits of output each of which is dependent on a complex
non-linear function of all input bits. Other strong encryption
functions with this characteristic can also be used by considering
them to mix all of their key and data input bits.
Another good family of mixing functions are the "message digest" or
hashing functions such as The US Government Secure Hash Standard
[SHS] and the MD2, MD4, MD5 [MD2, MD4, MD5] series. These functions
all take an arbitrary amount of input and produce an output mixing
all the input bits. The MD* series produce 128 bits of output and SHS
produces 160 bits.
Although the message digest functions are designed for variable
amounts of input, DES and other encryption functions can also be used
to combine any number of inputs. If 64 bits of output is adequate,
the inputs can be packed into a 64 bit data quantity and successive
56 bit keys, padding with zeros if needed, which are then used to
successively encrypt using DES in Electronic Codebook Mode [DES
MODES]. If more than 64 bits of output are needed, use more complex
mixing. For example, if inputs are packed into three quantities, A,
B, and C, use DES to encrypt A with B as a key and then with C as a
key to produce the 1st part of the output, then encrypt B with C and
then A for more output and, if necessary, encrypt C with A and then B
for yet more output. Still more output can be produced by reversing
the order of the keys given above to stretch things. The same can be
done with the hash functions by hashing various subsets of the input
data to produce multiple outputs. But keep in mind that it is
impossible to get more bits of "randomness" out than are put in.
An example of using a strong mixing function would be to reconsider
the case of a string of 308 bits each of which is biased 99% towards
zero. The parity technique given in Section 5.2.1 above reduced this
to one bit with only a 1/1000 deviance from being equally likely a
zero or one. But, applying the equation for information given in
Section 2, this 308 bit sequence has 5 bits of information in it.
Thus hashing it with SHS or MD5 and taking the bottom 5 bits of the
result would yield 5 unbiased random bits as opposed to the single
bit given by calculating the parity of the string.
6.1.3 Diffie-Hellman as a Mixing Function
Diffie-Hellman exponential key exchange is a technique that yields a
shared secret between two parties that can be made computationally
infeasible for a third party to determine even if they can observe
all the messages between the two communicating parties. This shared
secret is a mixture of initial quantities generated by each of them
[D-H]. If these initial quantities are random, then the shared
secret contains the combined randomness of them both, assuming they
are uncorrelated.
6.1.4 Using a Mixing Function to Stretch Random Bits
While it is not necessary for a mixing function to produce the same
or fewer bits than its inputs, mixing bits cannot "stretch" the
amount of random unpredictability present in the inputs. Thus four
inputs of 32 bits each where there is 12 bits worth of
unpredictability (such as 4,096 equally probable values) in each
input cannot produce more than 48 bits worth of unpredictable output.
The output can be expanded to hundreds or thousands of bits by, for
example, mixing with successive integers, but the clever adversary's
search space is still 2^48 possibilities. Furthermore, mixing to
fewer bits than are input will tend to strengthen the randomness of
the output the way using Exclusive Or to produce one bit from two did
above.
The last table in Section 6.1.1 shows that mixing a random bit with a
constant bit with Exclusive Or will produce a random bit. While this
is true, it does not provide a way to "stretch" one random bit into
more than one. If, for example, a random bit is mixed with a 0 and
then with a 1, this produces a two bit sequence but it will always be
either 01 or 10. Since there are only two possible values, there is
still only the one bit of original randomness.
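This two-value limit is easy to exhibit directly:

```python
# XOR one random bit r with the constant bits 0 and 1.
outputs = {(r ^ 0, r ^ 1) for r in (0, 1)}
# Only (0, 1) and (1, 0) can occur out of the four possible two-bit
# sequences, so the pair carries just the single bit of randomness in r.
```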
6.1.5 Other Factors in Choosing a Mixing Function
For local use, DES has the advantages that it has been widely tested
for flaws, is widely documented, and is widely implemented with
hardware and software implementations available all over the world
including source code available by anonymous FTP. The SHS and MD*
family are younger algorithms which have been less tested but there
is no particular reason to believe they are flawed. Both MD5 and SHS
were derived from the earlier MD4 algorithm. They all have source
code available by anonymous FTP [SHS, MD2, MD4, MD5].
DES and SHS have been vouched for by the US National Security Agency
(NSA) on the basis of criteria that primarily remain secret. While
this is the cause of much speculation and doubt, investigation of DES
over the years has indicated that NSA involvement in modifications to
its design, which originated with IBM, was primarily to strengthen
it. No concealed or special weakness has been found in DES. It is
almost certain that the NSA modification to MD4 to produce the SHS
similarly strengthened the algorithm, possibly against threats not
yet known in the public cryptographic community.
DES, SHS, MD4, and MD5 are royalty free for all purposes. MD2 has
been freely licensed only for non-profit use in connection with
Privacy Enhanced Mail [PEM]. Between the MD* algorithms, some people
believe that, as with "Goldilocks and the Three Bears", MD2 is strong
but too slow, MD4 is fast but too weak, and MD5 is just right.
Another advantage of the MD* or similar hashing algorithms over
encryption algorithms is that they are not subject to the same
regulations imposed by the US Government prohibiting the unlicensed
export or import of encryption/decryption software and hardware. The
same should be true of DES rigged to produce an irreversible hash
code but most DES packages are oriented to reversible encryption.
6.2 Non-Hardware Sources of Randomness
The best source of input for mixing would be a hardware randomness
source such as disk drive timing affected by air turbulence, audio input
with thermal noise, or radioactive decay. However, if that is not
available there are other possibilities. These include system
clocks, system or input/output buffers, user/system/hardware/network
serial numbers and/or addresses and timing, and user input.
Unfortunately, any of these sources can produce limited or
predictable values under some circumstances.
Some of the sources listed above would be quite strong on multi-user
systems where, in essence, each user of the system is a source of
randomness. However, on a small single user system, such as a
typical IBM PC or Apple Macintosh, it might be possible for an
adversary to assemble a similar configuration. This could give the
adversary inputs to the mixing process that were sufficiently
correlated to those used originally as to make exhaustive search
practical.
The use of multiple random inputs with a strong mixing function is
recommended and can overcome weakness in any particular input. For
example, the timing and content of requested "random" user keystrokes
can yield hundreds of random bits but conservative assumptions need
to be made. For example, one might assume a few bits of randomness
when an inter-keystroke interval is unique in the sequence up to that
point, make a similar assumption when the key hit is unique, but
assume that no bits of randomness are present in the initial key
value or when a timing or key value duplicates a previous one. The
results of mixing
these timings and characters typed could be further combined with
clock values and other inputs.
This strategy may make practical portable code to produce good random
numbers for security even if some of the inputs are very weak on some
of the target systems. However, it may still fail against a high
grade attack on small single user systems, especially if the
adversary has ever been able to observe the generation process in the
past. A hardware based random source is still preferable.
6.3 Cryptographically Strong Sequences
In cases where a series of random quantities must be generated, an
adversary may learn some values in the sequence. In general, they
should not be able to predict other values from the ones that they
know.
The correct technique is to start with a strong random seed, take
cryptographically strong steps from that seed [CRYPTO2, CRYPTO3], and
do not reveal the complete state of the generator in the sequence
elements. If each value in the sequence can be calculated in a fixed
way from the previous value, then when any value is compromised, all
future values can be determined. This would be the case, for
example, if each value were a constant function of the previously
used values, even if the function were a very strong, non-invertible
message digest function.
It should be noted that if your technique for generating a sequence
of key values is fast enough, it can trivially be used as the basis
for a confidentiality system. If two parties use the same sequence
generating technique and start with the same seed material, they will
generate identical sequences. These could, for example, be xor'ed at
one end with data being sent, encrypting it, and xor'ed with this
data as received, decrypting it due to the reversible properties of
the xor operation.
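A toy sketch of such a confidentiality system. SHA-256 over seed-plus-counter is an arbitrary stand-in for the sequence-generating technique (the RFC predates SHA-256), and the seed and message are placeholders:

```python
import hashlib

def keystream(seed: bytes, n: int) -> bytes:
    """Toy sequence generator: hash the seed with successive integers."""
    out = b""
    i = 0
    while len(out) < n:
        out += hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
        i += 1
    return out[:n]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

seed = b"shared secret seed material"
msg = b"attack at dawn"
ct = xor_bytes(msg, keystream(seed, len(msg)))  # sender encrypts
pt = xor_bytes(ct, keystream(seed, len(msg)))   # same seed decrypts
```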
6.3.1 Traditional Strong Sequences
A traditional way to achieve a strong sequence has been to have the
values be produced by hashing the quantities produced by
concatenating the seed with successive integers or the like and then
mask the values obtained so as to limit the amount of generator state
available to the adversary.
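A minimal sketch of this hash-and-mask construction, with SHA-256 as an illustrative hash and a 32-bit mask as an arbitrary state-hiding choice (neither detail comes from the RFC):

```python
import hashlib

def masked_sequence(seed: bytes, count: int, bits: int = 32):
    """Hash seed concatenated with successive integers, then mask each
    value to its low `bits` bits so most generator state stays hidden."""
    for i in range(count):
        digest = hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
        yield int.from_bytes(digest, "big") & ((1 << bits) - 1)

values = list(masked_sequence(b"strong random seed", 4))
# An observer of `values` sees only 32 of each 256 digest bits, limiting
# the generator state available for predicting further values.
```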
It may also be possible to use an "encryption" algorithm with a
random key and seed value to encrypt and feedback some or all of the
output encrypted value into the value to be encrypted for the next
iteration. Appropriate feedback techniques will usually be
recommended with the encryption algorithm. An example is shown below
where shifting and masking are used to combine the cypher output
feedback. This type of feedback is recommended by the US Government
in connection with DES [DES MODES].
        [Figure: the current value V_n is encrypted under a fixed Key;
        shifted and masked cipher output is fed back to form V_n+1.]
Note that if a shift of one is used, this is the same as the shift
register technique described in Section 3 above but with the all
important difference that the feedback is determined by a complex
non-linear function of all bits rather than a simple linear or
polynomial combination of output from a few bit position taps.
It has been shown by Donald W. Davies that this sort of shifted
partial output feedback significantly weakens an algorithm compared
with feeding all of the output bits back as input. In particular,
for DES, repeatedly encrypting a full 64 bit quantity will give an
expected repeat in about 2^63 iterations. Feeding back anything less
than 64 (and more than 0) bits will give an expected repeat in
between 2^31 and 2^32 iterations!
To predict values of a sequence from others when the sequence was
generated by these techniques is equivalent to breaking the
cryptosystem or inverting the "non-invertible" hashing involved with
only partial information available. The less information revealed
each iteration, the harder it will be for an adversary to predict the
sequence. Thus it is best to use only one bit from each value. It
has been shown that in some cases this makes it impossible to break a
system even when the cryptographic system is invertible and can be
broken if all of each generated value was revealed.
6.3.2 The Blum Blum Shub Sequence Generator
Currently the generator which has the strongest public proof of
strength is called the Blum Blum Shub generator after its inventors
[BBS]. It is also very simple and is based on quadratic residues.
Its only disadvantage is that it is computationally intensive
compared with the traditional techniques given in Section 6.3.1 above.
This is not a serious drawback if it is used for moderately infrequent
purposes, such as generating session keys.
Simply choose two large prime numbers, say p and q, which both have
the property that you get a remainder of 3 if you divide them by 4.
Let n = p * q. Then you choose a random number x relatively prime to
n. The initial seed for the generator and the method for calculating
subsequent values are then
        s_0     = x^2 (Mod n)

        s_(i+1) = ( s_i )^2 (Mod n)
You must be careful to use only a few bits from the bottom of each s.
It is always safe to use only the lowest order bit. If you use no
more than the
        log2( log2( s_i ) )
low order bits, then predicting any additional bits from a sequence
generated in this manner is provably as hard as factoring n. As long
as the initial x is secret, you can even make n public if you want.
An interesting characteristic of this generator is that you can
directly calculate any of the s values. In particular
        s_i = ( s_0 ^ ( 2^i (Mod (( p - 1 ) * ( q - 1 ))) ) ) (Mod n)
This means that in applications where many keys are generated in this
fashion, it is not necessary to save them all. Each key can be
effectively indexed and recovered from that small index and the
initial s_0 and n.
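A toy instance with deliberately tiny primes (insecure, illustration only) shows both the squaring recurrence and the direct indexing:

```python
from math import gcd

# Toy Blum Blum Shub; real use needs large secret primes.
p, q = 11, 19            # both leave remainder 3 when divided by 4
n = p * q                # 209
x = 3
assert gcd(x, n) == 1    # x must be relatively prime to n

s = (x * x) % n          # s_0 = x^2 mod n
low_bits = []
for _ in range(5):
    s = (s * s) % n      # s_(i+1) = s_i^2 mod n
    low_bits.append(s & 1)  # always safe: use only the lowest order bit

# Direct indexing without iterating:
s0 = (x * x) % n
s3 = pow(s0, pow(2, 3, (p - 1) * (q - 1)), n)  # s_3 in one step
```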
7. Key Generation Standards
Several public standards are now in place for the generation of keys.
Two of these are described below. Both use DES but any equally
strong or stronger mixing function could be substituted.
7.1 US DoD Recommendations for Password Generation
The United States Department of Defense has specific recommendations
for password generation [DoD]. They suggest using the US Data
Encryption Standard [DES] in Output Feedback Mode [DES MODES] as
follows:
        use an initialization vector determined from
           the system clock,
           system ID,
           user ID, and
           date and time;

        use a key determined from
           system interrupt registers,
           system status registers, and
           system counters; and,

        as plain text, use an external randomly generated 64 bit
        quantity such as 8 characters typed in by a system
        administrator.
The password can then be calculated from the 64 bit "cipher text"
generated in 64-bit Output Feedback Mode. As many bits as are needed
can be taken from these 64 bits and expanded into a pronounceable
word, phrase, or other format if a human being needs to remember the
password.
7.2 X9.17 Key Generation
The American National Standards Institute has specified a method for
generating a sequence of keys as follows:
s is the initial 64 bit seed
g is the sequence of generated 64 bit key quantities
k is a random key reserved for generating this key sequence
t is the time at which a key is generated to as fine a resolution
as is available (up to 64 bits).
DES ( K, Q ) is the DES encryption of quantity Q with key K
        g_n     = DES ( k, DES ( k, t ) .xor. s_n )

        s_(n+1) = DES ( k, DES ( k, t ) .xor. g_n )
If g sub n is to be used as a DES key, then every eighth bit should
be adjusted for parity for that use but the entire 64 bit unmodified
g should be used in calculating the next s.
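A sketch of one step of this recurrence. Since no DES implementation ships with the Python standard library, HMAC-SHA256 truncated to 64 bits stands in for DES(k, Q), and the key, seed, and timestamp below are placeholders:

```python
import hashlib
import hmac

def prf(key: bytes, block: bytes) -> bytes:
    """Stand-in for DES(k, Q): HMAC-SHA256 truncated to 64 bits."""
    return hmac.new(key, block, hashlib.sha256).digest()[:8]

def xor64(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def x917_step(k: bytes, s: bytes, t: bytes):
    """One step of the X9.17-style recurrence: returns (g_n, s_next)."""
    et = prf(k, t)                 # DES(k, t)
    g = prf(k, xor64(et, s))       # g_n   = DES(k, DES(k, t) xor s_n)
    s_next = prf(k, xor64(et, g))  # s_n+1 = DES(k, DES(k, t) xor g_n)
    return g, s_next

k = b"key reserved for this sequence"
s = b"\x00" * 8                      # initial 64 bit seed (illustrative)
t = (19941201).to_bytes(8, "big")    # timestamp placeholder
g, s = x917_step(k, s, t)
```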
8. Examples of Randomness Required
Below are two examples showing rough calculations of needed
randomness for security. The first is for moderate security
passwords while the second assumes a need for a very high security
cryptographic key.
8.1 Password Generation
Assume that user passwords change once a year and it is desired that
the probability that an adversary could guess the password for a
particular account be less than one in a thousand. Further assume
that sending a password to the system is the only way to try a
password. Then the crucial question is how often an adversary can
try possibilities. Assume that delays have been introduced into a
system so that, at most, an adversary can make one password try every
six seconds. That's 600 per hour or about 15,000 per day or about
5,000,000 tries in a year. Assuming any sort of monitoring, it is
unlikely someone could actually try continuously for a year. In
fact, even if log files are only checked monthly, 500,000 tries is
more plausible before the attack is noticed and steps taken to change
passwords and make it harder to try more passwords.
To have a one in a thousand chance of guessing the password in
500,000 tries implies a universe of at least 500,000,000 passwords or
about 2^29. Thus 29 bits of randomness are needed. This can probably
be achieved using the US DoD recommended inputs for password
generation as it has 8 inputs which probably average over 5 bits of
randomness each (see section 7.1). Using a list of 1000 words, the
password could be expressed as a three word phrase (1,000,000,000
possibilities) or, using case insensitive letters and digits, six
characters would suffice ((26+10)^6 = 2,176,782,336 possibilities).
For a higher security password, the number of bits required goes up.
To decrease the probability by 1,000 requires increasing the universe
of passwords by the same factor which adds about 10 bits. Thus to
have only a one in a million chance of a password being guessed under
the above scenario would require 39 bits of randomness and a password
that was a four word phrase from a 1000 word list or eight
letters/digits. To go to a one in 10^9 chance, 49 bits of randomness
are needed implying a five word phrase or ten letter/digit password.
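The arithmetic for all three scenarios can be reproduced in a few lines (a sketch, assuming the 500,000 tries per year figure from above):

```python
import math

tries_per_year = 500_000          # plausible before an attack is noticed
for chance in (1e-3, 1e-6, 1e-9):
    universe = tries_per_year / chance
    bits = math.ceil(math.log2(universe))
    words = math.ceil(bits / math.log2(1000))     # words from a 1000 word list
    chars = math.ceil(bits / math.log2(26 + 10))  # case insensitive chars
    print(f"1 in {1 / chance:.0e}: {bits} bits, {words} words, {chars} chars")
```

This reproduces the 29/39/49 bit figures and the 3/4/5 word and 6/8/10 character password lengths.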
In a real system, of course, there are also other factors. For
example, the larger and harder to remember passwords are, the more
likely users are to write them down resulting in an additional risk
of compromise.
8.2 A Very High Security Cryptographic Key
Assume that a very high security key is needed for symmetric
encryption / decryption between two parties. Assume an adversary can
observe communications and knows the algorithm being used. Within
the field of random possibilities, the adversary can try key values
in hopes of finding the one in use. Assume further that brute force
trial of keys is the best the adversary can do.
8.2.1 Effort per Key Trial
How much effort will it take to try each key? For very high security
applications it is best to assume a low value of effort. Even if it
would clearly take tens of thousands of computer cycles or more to
try a single key, there may be some pattern that enables huge blocks
of key values to be tested with much less effort per key. Thus it is
probably best to assume no more than a couple hundred cycles per key.
(There is no clear lower bound on this as computers operate in
parallel on a number of bits and a poor encryption algorithm could
allow many keys or even groups of keys to be tested in parallel.
However, we need to assume some value and can hope that a reasonably
strong algorithm has been chosen for our hypothetical high security
task.)
If the adversary can command a highly parallel processor or a large
network of work stations, 2*10^10 cycles per second is probably a
minimum assumption for availability today. Looking forward just a
couple years, there should be at least an order of magnitude
improvement. Thus assuming 10^9 keys could be checked per second or
3.6*10^12 per hour or 6*10^14 per week or 2.4*10^15 per month is
reasonable. This implies a need for a minimum of 51 bits of
randomness in keys to be sure they cannot be found in a month. Even
then it is possible that, a few years from now, a highly determined
and resourceful adversary could break the key in 2 weeks (on average
they need try only half the keys).
8.2.2 Meet in the Middle Attacks
If chosen or known plain text and the resulting encrypted text are
available, a "meet in the middle" attack is possible if the structure
of the encryption algorithm allows it. (In a known plain text
attack, the adversary knows all or part of the messages being
encrypted, possibly some standard header or trailer fields. In a
chosen plain text attack, the adversary can force some chosen plain
text to be encrypted, possibly by "leaking" an exciting text that
would then be sent by the adversary over an encrypted channel.)
An oversimplified explanation of the meet in the middle attack is as
follows: the adversary can half-encrypt the known or chosen plain
text with all possible first half-keys, sort the output, then half-
decrypt the encoded text with all the second half-keys. If a match
is found, the full key can be assembled from the halves and used to
decrypt other parts of the message or other messages. At its best,
this type of attack can halve the exponent of the work required by
the adversary while adding a large but roughly constant factor of
effort. To be assured of safety against this, a doubling of the
amount of randomness in the key to a minimum of 102 bits is required.
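The attack can be demonstrated on a toy 16-bit cipher built from two 8-bit half-keys (an invented cipher for illustration only, not a real algorithm): two tables of 2^8 half-operations replace 2^16 full-key trials.

```python
def enc_half(k: int, x: int) -> int:
    """Toy invertible 16-bit half-encryption keyed by an 8-bit half-key."""
    x = (x ^ (k * 0x0101)) & 0xFFFF
    x = (x * 0x9E37) & 0xFFFF      # odd multiplier, invertible mod 2^16
    return (x + k) & 0xFFFF

MUL_INV = pow(0x9E37, -1, 1 << 16)

def dec_half(k: int, y: int) -> int:
    y = (y - k) & 0xFFFF
    y = (y * MUL_INV) & 0xFFFF
    return y ^ ((k * 0x0101) & 0xFFFF)

k1, k2 = 0x3A, 0xC5                  # the secret full key, as two halves
pt = 0x1234
ct = enc_half(k2, enc_half(k1, pt))  # one known plaintext/ciphertext pair

# Meet in the middle: half-encrypt under every first half-key, then
# half-decrypt under every second half-key and look for matches.
forward = {}
for a in range(256):
    forward.setdefault(enc_half(a, pt), []).append(a)
candidates = [(a, b) for b in range(256)
              for a in forward.get(dec_half(b, ct), [])]
assert (k1, k2) in candidates
```

In practice the candidate list would be narrowed with a second known pair; the point is that the work is roughly 2 * 2^8 rather than 2^16.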
The meet in the middle attack assumes that the cryptographic
algorithm can be decomposed in this way but we can not rule that out
without a deep knowledge of the algorithm. Even if a basic algorithm
is not subject to a meet in the middle attack, an attempt to produce
a stronger algorithm by applying the basic algorithm twice (or two
different algorithms sequentially) with different keys may gain less
added security than would be expected. Such a composite algorithm
would be subject to a meet in the middle attack.
Enormous resources may be required to mount a meet in the middle
attack but they are probably within the range of the national
security services of a major nation. Essentially all nations spy on
other nations' government traffic and several nations are believed to
spy on commercial traffic for economic advantage.
8.2.3 Other Considerations
Since we have not even considered the possibilities of special
purpose code breaking hardware or just how much of a safety margin we
want beyond our assumptions above, probably a good minimum for a very
high security cryptographic key is 128 bits of randomness which
implies a minimum key length of 128 bits. If the two parties agree
on a key by Diffie-Hellman exchange [D-H], then in principle only
half of this randomness would have to be supplied by each party.
However, there is probably some correlation between their random
inputs so it is probably best to assume that each party needs to
Eastlake, Crocker & Schiller [Page 26]
RFC 1750 Randomness Recommendations for Security December 1994
provide at least 96 bits worth of randomness for very high security
if Diffie-Hellman is used.
This amount of randomness is beyond the limit of that in the inputs
recommended by the US DoD for password generation and could require
user typing timing, hardware random number generation, or other
sources.
It should be noted that key length calculations such as those above
are controversial and depend on various assumptions about the
cryptographic algorithms in use. In some cases, a professional with
a deep knowledge of code breaking techniques and of the strength of
the algorithm in use could be satisfied with less than half of the
key size derived above.
9. Conclusion
Generation of unguessable "random" secret quantities for security use
is an essential but difficult task.
We have shown that hardware techniques to produce such randomness
would be relatively simple. In particular, the volume and quality
would not need to be high and existing computer hardware, such as
disk drives, can be used. Computational techniques are available to
process low quality random quantities from multiple sources or a
larger quantity of such low quality input from one source and produce
a smaller quantity of higher quality, less predictable key material.
In the absence of hardware sources of randomness, a variety of user
and software sources can frequently be used instead with care;
however, most modern systems already have hardware, such as disk
drives or audio input, that could be used to produce high quality
random numbers.
Once a sufficient quantity of high quality seed key material (a few
hundred bits) is available, strong computational techniques are
available to produce cryptographically strong sequences of
unpredictable quantities from this seed material.
10. Security Considerations
The entirety of this document concerns techniques and recommendations
for generating unguessable "random" quantities for use as passwords,
cryptographic keys, and similar security uses.
References
[ASYMMETRIC] - Secure Communications and Asymmetric Cryptosystems,
edited by Gustavus J. Simmons, AAAS Selected Symposium 69, Westview
Press, Inc.
[BBS] - A Simple Unpredictable Pseudo-Random Number Generator, SIAM
Journal on Computing, v. 15, n. 2, 1986, L. Blum, M. Blum, & M. Shub.
[BRILLINGER] - Time Series: Data Analysis and Theory, Holden-Day,
1981, David Brillinger.
[CRC] - C.R.C. Standard Mathematical Tables, Chemical Rubber
Publishing Company.
[CRYPTO1] - Cryptography: A Primer, A Wiley-Interscience Publication,
John Wiley & Sons, 1981, Alan G. Konheim.
[CRYPTO2] - Cryptography: A New Dimension in Computer Data Security,
A Wiley-Interscience Publication, John Wiley & Sons, 1982, Carl H.
Meyer & Stephen M. Matyas.
[CRYPTO3] - Applied Cryptography: Protocols, Algorithms, and Source
Code in C, John Wiley & Sons, 1994, Bruce Schneier.
[DAVIS] - Cryptographic Randomness from Air Turbulence in Disk
Drives, Advances in Cryptology - Crypto '94, Springer-Verlag Lecture
Notes in Computer Science #839, 1994, Don Davis, Ross Ihaka, and
Philip Fenstermacher.
[DES] - Data Encryption Standard, United States of America,
Department of Commerce, National Institute of Standards and
Technology, Federal Information Processing Standard (FIPS) 46-1.
- Data Encryption Algorithm, American National Standards Institute,
ANSI X3.92-1981.
(See also FIPS 112, Password Usage, which includes FORTRAN code for
performing DES.)
[DES MODES] - DES Modes of Operation, United States of America,
Department of Commerce, National Institute of Standards and
Technology, Federal Information Processing Standard (FIPS) 81.
- Data Encryption Algorithm - Modes of Operation, American National
Standards Institute, ANSI X3.106-1983.
[D-H] - New Directions in Cryptography, IEEE Transactions on
Information Theory, November 1976, Whitfield Diffie and Martin
E. Hellman.
[DoD] - Password Management Guideline, United States of America,
Department of Defense, Computer Security Center, CSC-STD-002-85.
(See also FIPS 112, Password Usage, which incorporates CSC-STD-002-85
as one of its appendices.)
[GIFFORD] - Natural Random Numbers, MIT/LCS/TM-371, September 1988,
David K. Gifford.
[KNUTH] - The Art of Computer Programming, Volume 2: Seminumerical
Algorithms, Chapter 3: Random Numbers. Addison Wesley Publishing
Company, Second Edition 1982, Donald E. Knuth.
[KRAWCZYK] - How to Predict Congruential Generators, Journal of
Algorithms, V. 13, N. 4, December 1992, H. Krawczyk
[MD2] - The MD2 Message-Digest Algorithm, RFC1319, April 1992, B.
Kaliski.
[MD4] - The MD4 Message-Digest Algorithm, RFC1320, April 1992, R.
Rivest.
[MD5] - The MD5 Message-Digest Algorithm, RFC1321, April 1992, R.
Rivest.
[PEM] - RFCs 1421 through 1424:
- RFC 1424, Privacy Enhancement for Internet Electronic Mail: Part
IV: Key Certification and Related Services, 02/10/1993, B. Kaliski
- RFC 1423, Privacy Enhancement for Internet Electronic Mail: Part
III: Algorithms, Modes, and Identifiers, 02/10/1993, D. Balenson
- RFC 1422, Privacy Enhancement for Internet Electronic Mail: Part
II: Certificate-Based Key Management, 02/10/1993, S. Kent
- RFC 1421, Privacy Enhancement for Internet Electronic Mail: Part I:
Message Encryption and Authentication Procedures, 02/10/1993, J. Linn
[SHANNON] - The Mathematical Theory of Communication, University of
Illinois Press, 1963, Claude E. Shannon. (originally from: Bell
System Technical Journal, July and October 1948)
[SHIFT1] - Shift Register Sequences, Aegean Park Press, Revised
Edition 1982, Solomon W. Golomb.
[SHIFT2] - Cryptanalysis of Shift-Register Generated Stream Cypher
Systems, Aegean Park Press, 1984, Wayne G. Barker.
[SHS] - Secure Hash Standard, United States of America, National
Institute of Standards and Technology, Federal Information Processing
Standard (FIPS) 180, April 1993.
[STERN] - Secret Linear Congruential Generators are not
Cryptographically Secure, Proceedings of IEEE STOC, 1987, J. Stern.
[VON NEUMANN] - Various techniques used in connection with random
digits, von Neumann's Collected Works, Vol. 5, Pergamon Press, 1963,
J. von Neumann.
Authors' Addresses
Donald E. Eastlake 3rd
Digital Equipment Corporation
550 King Street, LKG2-1/BB3
Littleton, MA 01460
Phone: +1 508 486 6577(w) +1 508 287 4877(h)
EMail: dee@lkg.dec.com
Stephen D. Crocker
CyberCash Inc.
2086 Hunters Crest Way
Vienna, VA 22181
Phone: +1 703-620-1222(w) +1 703-391-2651 (fax)
EMail: crocker@cybercash.com
Jeffrey I. Schiller
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139
Phone: +1 617 253 0161(w)
EMail: jis@mit.edu
Physics Topic: Nuclear Physics
Infer from the results of the α-particle scattering experiment the existence and small size of the nucleus.
Distinguish between nucleon number (mass number) and proton number (atomic number).
Show an understanding that an element can exist in various isotopic forms each with a different number of neutrons.
Use the usual notation for the representation of nuclides and represent simple nuclear reactions by nuclear equations.
Show an understanding of the concept of mass defect.
Recall and apply the equivalence relationship between energy and mass as represented by E = mc^2 in problem solving.
Show an understanding of the concept of binding energy and its relation to mass defect.
Sketch the variation of binding energy per nucleon with nucleon number.
Explain the relevance of binding energy per nucleon to nuclear fusion and nuclear fission.
State and apply to problem solving the concept that nucleon number, proton number, energy and mass are all conserved in nuclear processes.
Show an understanding of the spontaneous and random nature of nuclear decay.
Infer the random nature of radioactive decay from the fluctuations in count rate.
Show an understanding of the origin and significance of background radiation.
Show an understanding of the nature of α, β and γ radiations.
Define the terms activity and decay constant, and recall and solve problems using A = λN (activity = decay constant × number of undecayed nuclei present).
Infer and sketch the exponential nature of radioactive decay and solve problems using the exponential relationship involving activity, number of undecayed particles and received count rate.
Define half-life.
Solve problems using the relation decay constant λ = (ln 2) / t_(1/2), where t_(1/2) is the half-life.
Discuss qualitatively the effects, both direct and indirect, of ionising radiation on living tissues.
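The activity and half-life relations in the objectives above can be checked numerically. A minimal Python sketch — the initial activity and half-life values below are made-up illustrations, not syllabus data:

```python
import math

# A(t) = A0 * exp(-lambda * t), with decay constant lambda = ln 2 / half-life.
def activity(a0, t_half, t):
    decay_const = math.log(2) / t_half   # ln 2 / t_(1/2)
    return a0 * math.exp(-decay_const * t)

a0 = 800.0        # initial activity in Bq (assumed value)
t_half = 5730.0   # half-life in years (illustrative)
print(activity(a0, t_half, t_half))      # ≈ 400.0: half after one half-life
print(activity(a0, t_half, 2 * t_half))  # ≈ 200.0: a quarter after two
```

The exponential form also makes the random, spontaneous nature of decay concrete: λ is the fixed per-nucleus decay probability per unit time, while individual decays remain unpredictable.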
Numerical Length Calculator
In today’s fast-paced world, accurate numerical calculations play a crucial role in various fields such as science, engineering, finance, and more. Whether you’re a student, professional, or
hobbyist, having access to a reliable numerical length calculator can streamline your calculations and improve efficiency. In this article, we’ll explore how to use such a calculator, delve into the
underlying formula, provide examples, address common questions, and conclude with insights on the importance of accurate calculations.
How to Use
Using the numerical length calculator is straightforward. Simply input the values into the designated fields, and click on the “Calculate” button to obtain the result. Whether you’re measuring
distances, dimensions, or any other numerical lengths, this calculator can handle various calculations with precision.
The formula utilized in this calculator ensures accuracy and reliability in determining numerical lengths. It involves the basic principle of arithmetic operations such as addition, subtraction,
multiplication, and division. Additionally, it accounts for any specific units or conversions required for the given calculations, ensuring consistency and correctness in the results.
Example Solve
Let’s consider a practical example to illustrate the functionality of the numerical length calculator. Suppose we need to find the total length of three segments: 5.4 meters, 3.2 meters, and 7.8
meters. By inputting these values into the calculator and clicking “Calculate,” we obtain the result: a total length of 16.4 meters.
The calculator efficiently performs the addition operation, providing the total length accurately.
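The example above reduces to summing a list of segment lengths expressed in the same unit. A minimal sketch of that logic in Python — the function name and the use of `Decimal` for exact decimal arithmetic are choices made for this sketch, not part of the article:

```python
from decimal import Decimal

# Sum segment lengths exactly; Decimal avoids binary floating-point
# round-off (e.g. 5.4 + 3.2 + 7.8) in the displayed total.
def total_length(segments):
    return sum(Decimal(str(s)) for s in segments)

print(total_length([5.4, 3.2, 7.8]))  # 16.4
```

Converting each value through `str` before building the `Decimal` keeps the input at its displayed precision, which is usually what a measurement calculator should report.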
Q: Can this calculator handle decimal values?
A: Yes, the calculator can handle decimal values with precision, ensuring accurate results.
Q: Is there a limit to the number of values that can be input?
A: No, there’s no inherent limit to the number of values you can input. The calculator can handle multiple values efficiently.
Q: Can this calculator perform unit conversions?
A: While this calculator focuses on basic arithmetic operations for numerical lengths, additional functionalities such as unit conversions can be incorporated based on specific requirements.
In conclusion, a numerical length calculator is a valuable tool for individuals and professionals alike, offering accuracy, efficiency, and convenience in various calculations. Whether you’re working
on simple measurements or complex dimensional analyses, having access to a reliable calculator streamlines the process and minimizes errors. By understanding how to use the calculator, the underlying
formula, and its capabilities, users can leverage it effectively to enhance productivity and accuracy in their work.
Thickness Control for a Steel Beam
This example shows how to design a MIMO LQG regulator to control the horizontal and vertical thickness of a steel beam in a hot steel rolling mill.
Rolling Stand Model
Figures 1 and 2 depict the process of shaping a beam of hot steel by compressing it with rolling cylinders.
Figure 1: Beam Shaping by Rolling Cylinders.
Figure 2: Rolling Mill Stand.
The desired H shape is impressed by two pairs of rolling cylinders (one per axis) positioned by hydraulic actuators. The gap between the two cylinders is called the roll gap. The goal is to maintain
the x and y thickness within specified tolerances. Thickness variations arise primarily from variations in thickness and hardness of the incoming beam (input disturbance) and eccentricities of the
rolling cylinders.
An open-loop model for the x or y axes is shown in Figure 3. The eccentricity disturbance is modeled as white noise w_e driving a band-pass filter Fe. The input thickness disturbance is modeled as
white noise w_i driving a low-pass filter Fi. Feedback control is necessary to counter such disturbances. Because the roll gap delta cannot be measured close to the stand, the rolling force f is used
for feedback.
Figure 3: Open-Loop Model.
Building the Open-Loop Model
Empirical models for the filters Fe and Fi for the x axis are
and the actuator and gap-to-force gain are modeled as
To construct the open-loop model in Figure 3, start by specifying each block:
Hx = tf(2.4e8 , [1 72 90^2] , 'inputname' , 'u_x');
Fex = tf([3e4 0] , [1 0.125 6^2] , 'inputname' , 'w_{ex}');
Fix = tf(1e4 , [1 0.05] , 'inputname' , 'w_{ix}');
gx = 1e-6;
Next construct the transfer function from u,we,wi to f1,f2 using concatenation and append as follows. To improve numerical accuracy, switch to the state-space representation before you connect the models:
T = append([ss(Hx) Fex],Fix);
Finally, apply the transformation mapping f1,f2 to delta,f:
Px = [-gx gx;1 1] * T;
Px.OutputName = {'x-gap' , 'x-force'};
Plot the frequency response magnitude from the normalized disturbances w_e and w_i to the outputs:
bodemag(Px(: , [2 3]),{1e-2,1e2})
grid on
Note the peak at 6 rad/sec corresponding to the (periodic) eccentricity disturbance.
LQG Regulator Design for the X Axis
First design an LQG regulator to attenuate the thickness variations due to the eccentricity and input thickness disturbances w_e and w_i. LQG regulators generate actuator commands u = -K x_e where
x_e is an estimate of the plant states. This estimate is derived from available measurements of the rolling force f using an observer called "Kalman filter."
Figure 4: LQG Control Structure.
Use lqry to calculate a suitable state-feedback gain K. The gain K is chosen to minimize a cost function of the form
where the parameter beta is used to trade off performance and control effort. For beta = 1e-4, you can compute the optimal gain by typing
Pxdes = Px('x-gap','u_x'); % transfer u_x -> x-gap
Kx = lqry(Pxdes,1,1e-4)
Kx =
0.0621 0.1315 0.0222 -0.0008 -0.0074
Next, use kalman to design a Kalman estimator for the plant states. Set the measurement noise covariance to 1e4 to limit the gain at high frequencies:
Ex = kalman(Px('x-force',:),eye(2),1e4);
Finally, use lqgreg to assemble the LQG regulator Regx from Kx and Ex:
Regx = lqgreg(Ex,Kx);
zpk(Regx)
ans =
From input "x-force" to output "u_x":
-0.012546 (s+10.97) (s-2.395) (s^2 + 72s + 8100)
(s+207.7) (s^2 + 0.738s + 32.33) (s^2 + 310.7s + 2.536e04)
Input groups:
Name Channels
Measurement 1
Output groups:
Name Channels
Controls 1
Continuous-time zero/pole/gain model.
grid on
title('LQG Regulator')
LQG Regulator Evaluation
Close the regulation loop shown in Figure 4:
clx = feedback(Px,Regx,1,2,+1);
Note that in this command, the +1 accounts for the fact that lqgreg computes a positive feedback compensator.
You can now compare the open- and closed-loop responses to eccentricity and input thickness disturbances:
grid on
legend('Open Loop','Closed Loop');
The Bode plot indicates a 20 dB attenuation of disturbance effects. You can confirm this by simulating disturbance-induced thickness variations with and without the LQG regulator as follows:
dt = 0.01; % simulation time step
t = 0:dt:30;
wx = sqrt(1/dt) * randn(2,length(t)); % sampled driving noise
lp = lsimplot(Px(1,2:3),'b',clx(1,2:3),'r',wx,t);
lp.InputVisible = 'off';
legend('Open Loop','Closed Loop');
Two-Axis Design
You can design a similar LQG regulator for the y axis. Use the following actuator, gain, and disturbance models:
Hy = tf(7.8e8,[1 71 88^2],'inputname','u_y');
Fiy = tf(2e4,[1 0.05],'inputname','w_{iy}');
Fey = tf([1e5 0],[1 0.19 9.4^2],'inputn','w_{ey}');
gy = 0.5e-6;
You can construct the open-loop model by typing
Py = append([ss(Hy) Fey],Fiy);
Py = [-gy gy;1 1] * Py;
Py.OutputName = {'y-gap' 'y-force'};
You can then compute the corresponding LQG regulator by typing
ky = lqry(Py(1,1),1,1e-4);
Ey = kalman(Py(2,:),eye(2),1e4);
Regy = lqgreg(Ey,ky);
Assuming the x- and y-axis are decoupled, you can use these two regulators independently to control the two-axis rolling mill.
Cross-Coupling Effects
Treating each axis separately is valid as long as they are fairly decoupled. Unfortunately, rolling mills have some amount of cross-coupling between axes because an increase in force along x
compresses the material and causes a relative decrease in force along the y axis.
Cross-coupling effects are modeled as shown in Figure 5 with gxy=0.1 and gyx=0.4.
Figure 5: Cross-Coupling Model.
To study the effect of cross-coupling on decoupled SISO loops, construct the two-axis model in Figure 5 and close the x- and y-axis loops using the previously designed LQG regulators:
gxy = 0.1;
gyx = 0.4;
P = append(Px,Py); % Append x- and y-axis models
P = P([1 3 2 4],[1 4 2 3 5 6]); % Reorder inputs and outputs
CC = [1 0 0 gyx*gx ;... % Cross-coupling matrix
0 1 gxy*gy 0 ;...
0 0 1 -gyx ;...
0 0 -gxy 1 ];
Pxy = CC * P; % Cross-coupling model
Pxy.outputn = P.outputn;
clxy0 = feedback(Pxy,append(Regx,Regy),1:2,3:4,+1);
Now, simulate the x and y thickness gaps for the two-axis model:
wy = sqrt(1/dt) * randn(2,length(t)); % y-axis disturbances
wxy = [wx ; wy];
lp = lsimplot(Pxy(1:2,3:6),'b',clxy0(1:2,3:6),'r',wxy,t);
lp.InputVisible = 'off';
legend('Open Loop','Closed Loop');
Note the high thickness variations along the x axis. Treating each axis separately is inadequate and you need to use a joint-axis, MIMO design to correctly handle cross-coupling effects.
MIMO Design
The MIMO design consists of a single regulator that uses both force measurements fx and fy to compute the actuator commands, u_x and u_y. This control architecture is depicted in Figure 6.
Figure 6: MIMO Control Structure.
You can design a MIMO LQG regulator for the two-axis model using the exact same steps as for earlier SISO designs. First, compute the state feedback gain, then compute the state estimator, and
finally assemble these two components using lqgreg. Use the following commands to perform these steps:
Kxy = lqry(Pxy(1:2,1:2),eye(2),1e-4*eye(2));
Exy = kalman(Pxy(3:4,:),eye(4),1e4*eye(2));
Regxy = lqgreg(Exy,Kxy);
To compare the performance of the MIMO and multi-loop SISO designs, close the MIMO loop in Figure 6:
clxy = feedback(Pxy,Regxy,1:2,3:4,+1);
Then, simulate the x and y thickness gaps for the two-axis model:
lp = lsimplot(Pxy(1:2,3:6),'b',clxy(1:2,3:6),'r',wxy,t);
lp.InputVisible = 'off';
legend('Open Loop','Closed Loop');
The MIMO design shows no performance loss in the x axis and the disturbance attenuation levels now match those obtained for each individual axis. The improvement is also evident when comparing the
principal gains of the closed-loop responses from input disturbances to thickness gaps x-gap, y-gap:
grid on
legend('Two SISO Loops','MIMO Loop');
Note how the MIMO regulator does a better job at keeping the gain equally low in all directions.
Simulink Model
If you are a Simulink® user, click on the link below to open a companion Simulink model that implements both multi-loop SISO and MIMO control architectures. You can use this model to compare both
designs by switching between designs during simulation.
Open Simulink model of two-axis rolling mill.