PLL with Feed Forward
Compute position and angular frequency from orthogonal sinusoidal signals
Since R2023b
Motor Control Blockset / Signal Management
The PLL with Feed Forward block computes angular position (θ) or its sine and cosine equivalents (sin θ, cos θ) and the angular frequency (ω) from two orthogonal sinusoidal signals.
In addition to the angular frequency, the block uses either a sine-cosine lookup table to calculate the angular position or an oscillator-based algorithm to compute the sine and cosine equivalents of the position. You can use the Position output parameter to output either the angular position or its sine and cosine equivalents.
The two orthogonal sinusoidal input signals must have identical peak magnitudes. If the inputs are not normalized in the range of [-1,1], select the Enable input normalization parameter.
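The documentation does not spell out the normalization method, but one plausible interpretation is scaling each sample pair by the instantaneous magnitude of the α–β vector. This sketch (an assumption for illustration, not the block's documented internals) shows how signals with an arbitrary common peak magnitude map into [-1, 1]:

```python
import math

def normalize_alpha_beta(alpha, beta, eps=1e-12):
    """Scale an (alpha, beta) sample pair by the instantaneous vector
    magnitude so both components land in [-1, 1]."""
    mag = math.hypot(alpha, beta)
    if mag < eps:            # avoid dividing by zero at start-up
        return 0.0, 0.0
    return alpha / mag, beta / mag

# orthogonal signals with peak magnitude 3 normalize to the unit circle
theta = 0.7
a, b = normalize_alpha_beta(3 * math.cos(theta), 3 * math.sin(theta))
```

Because the inputs are orthogonal with equal amplitude, the normalized pair always lies on the unit circle, which is what the downstream phase detector assumes.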
The following image shows the relationship between block inputs and outputs:
For more information on the block algorithm, see Algorithm.
The block does not support 16-bit fixed-point inputs.
α — α-axis signal
Input signal along the α-axis.
Data Types: single | double | fixed point
β — β-axis signal
Input signal along the β-axis.
Data Types: single | double | fixed point
Rst — External reset pulse
External pulse that resets the block.
Data Types: single | double | fixed point
θ — Angular position
Angular position (in either degrees, radians, or per-unit) that the block computes from the orthogonal input signals.
To enable this port, set the Position output parameter to Angular position.
Data Types: single | double | fixed point
Sin θ — Sine of angular position
Sine equivalent of the computed angular position.
To enable this port, set the Position output parameter to Sine and Cosine Position.
Data Types: single | double | fixed point
Cos θ — Cosine of angular position
Cosine equivalent of the computed angular position.
To enable this port, set the Position output parameter to Sine and Cosine Position.
Data Types: single | double | fixed point
ω — Angular frequency
Angular frequency (in either degrees/sec, radians/sec, or hertz) of the orthogonal input signals.
Data Types: single | double | fixed point
Enable input normalization — Enable block to normalize inputs
on (default) | off
The block normalizes the α and β orthogonal input signals only if you select this parameter.
Position output — Type of position output
Angular position (default) | Sine and Cosine Position
Type of position output that the block should generate:
• Angular position — Select this option to use a sine-cosine lookup table to compute the angular position (θ).
• Sine and Cosine Position — Select this option to use an oscillator-based algorithm to compute the sine and cosine equivalents (sin θ and cos θ) of the angular position.
Discrete step size (s) — Sample time after which block executes again
50e-6 (default) | scalar
The fixed time interval (in seconds) between consecutive instances of block execution.
PLL Parameters
Frequency ratio — Ratio of frequencies of output and input signals
1 (default) | scalar
The ratio of the frequency of the output signal to the frequency of the input signal of the block.
Maximum application frequency (Hz) — Maximum applicable input frequency
7500 (default) | scalar
Maximum possible frequency of the input signals (in hertz).
Cutoff frequency for output filter (Hz) — Cutoff frequency of lowpass filter
15 (default) | scalar
Cutoff frequency of the lowpass filter that the block uses to filter the estimated angular frequency (in hertz).
Number of data points for lookup table — Size of lookup table array
1024 (default) | scalar
Size of the lookup table array that the block provides to the SinCos Embedded Optimized block, which is used internally. This parameter accepts a value between 125 and 4095.
To enable this parameter, set Position output to Angular position.
Proportional (P) — Proportional controller gain
942.4778 (default) | scalar
Proportional gain (K[p]) of the PID controller used by the block to compute the angular frequency.
Integral (I) — Integral controller gain
222066.099 (default) | scalar
Integral gain (K[i]) of the PID controller used by the block to compute the angular frequency.
Click Compute default parameters to calculate an approximate proportional gain (K[p]) and integral gain (K[i]) and update these fields.
Position unit — Unit of position output
Degrees (default) | Radians | Per-unit
Unit of the position output.
To enable this parameter, set Position output to Angular position.
Position data type — Data type of position output
single (default) | double | fixed point
Data type of the position output.
Frequency unit — Unit of angular frequency output
Degrees/sec (default) | Radians/sec | Hz
Unit of the angular frequency output.
Frequency data type — Data type of angular frequency output
single (default) | double | fixed point
Data type of the angular frequency output.
Table data type — Data type of sine-cosine lookup table
single (default) | double | fixed point
Data type of the sine-cosine lookup table used by the block.
To enable this parameter, set Position output to Angular position.
The following image provides an overview of how the block uses an algorithm based on a sine-cosine lookup table to compute the angular position (θ). Set the Position output to Angular position to use this algorithm.
The following image provides an overview of how the block uses an oscillator-based algorithm to compute the sine and cosine equivalents of angular position (θ). Set the Position output to Sine and
Cosine Position to use this algorithm.
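Both variants share the same underlying tracking loop. The following is a minimal quadrature-PLL sketch in Python, illustrative only — it omits the block's feed-forward path, output filter, normalization, and lookup-table machinery — that reuses the default gains and step size listed in the parameters below; the 50 Hz test signal is an arbitrary choice:

```python
import math

Ts = 50e-6                      # Discrete step size (s), the block default
Kp, Ki = 942.4778, 222066.099   # default Proportional (P) and Integral (I) gains
w_true = 2 * math.pi * 50       # assumed input angular frequency (rad/s)

th_hat, integ = 0.0, 0.0        # estimated angle and PI integrator state
for k in range(10000):          # simulate 0.5 s of samples
    theta = w_true * k * Ts
    alpha, beta = math.cos(theta), math.sin(theta)
    # phase detector: beta*cos - alpha*sin = sin(theta - th_hat)
    err = beta * math.cos(th_hat) - alpha * math.sin(th_hat)
    integ += Ki * err * Ts      # PI controller output is angular frequency
    w_hat = Kp * err + integ
    th_hat = (th_hat + w_hat * Ts) % (2 * math.pi)
```

After the loop locks, the integrator holds the input frequency and the phase error collapses toward zero, which is the behavior the block's PID-based frequency computation relies on.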
The following equation describes the state equation of the oscillator-based algorithm.
$\dot{x}=\begin{bmatrix}0 & \omega \\ -\omega & 0\end{bmatrix}x$
where ω is the frequency of oscillation.
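For the state x = [sin θ, cos θ]ᵀ, a discrete-time realization of this state equation is simply a rotation of the state vector by ωΔt each step. Using the exact rotation (rather than a forward-Euler update of the state equation) keeps the amplitude from drifting; the step size and frequency below are arbitrary illustrative values:

```python
import math

def oscillator_step(s, c, w, dt):
    """Advance x = [sin(theta), cos(theta)] one step of
    x_dot = [[0, w], [-w, 0]] x via the exact rotation by w*dt,
    which preserves s^2 + c^2 = 1 (forward Euler would not)."""
    phi = w * dt
    cos_phi, sin_phi = math.cos(phi), math.sin(phi)
    return s * cos_phi + c * sin_phi, c * cos_phi - s * sin_phi

w, dt = 2 * math.pi * 50, 50e-6   # assumed 50 Hz oscillation, 50 us step
s, c = 0.0, 1.0                   # start at theta = 0
for _ in range(400):              # 400 steps = 20 ms = one full period
    s, c = oscillator_step(s, c, w, dt)
```

After one full period the state returns to (0, 1) with the amplitude still exactly on the unit circle, which is why oscillator-based position generation avoids a lookup table.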
Tuning the feed-forward angular-frequency phase-locked loop (PLL)
The following image shows a linearized model of the angular frequency feed-forward PLL that you can use for tuning purposes.
From this model you can obtain the following transfer function of position estimation error (ΔE(s)) with respect to actual position (θ(s)) to tune the feed-forward PLL.
$\frac{\Delta E(s)}{\theta(s)}=\frac{s^{3}}{(s+\omega_{c})(s^{2}+K_{p}s+K_{i})}$
where:
• K[p] and K[i] are the proportional and integral gains, respectively, of the PID controller used by the block for angular frequency computation.
• ω[c] is the cutoff frequency for the output filter.
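This error transfer function is a third-order high-pass: slow position trajectories are tracked with vanishing error, while disturbances far above both the filter cutoff and the PI loop bandwidth pass through unattenuated. A quick numerical check using the block's default values (the two evaluation frequencies are arbitrary):

```python
import math

Kp, Ki = 942.4778, 222066.099   # default PI gains from the block parameters
wc = 2 * math.pi * 15           # default 15 Hz output-filter cutoff (rad/s)

def err_gain(w):
    """|dE(s)/theta(s)| evaluated on the imaginary axis at s = j*w."""
    s = 1j * w
    return abs(s**3 / ((s + wc) * (s**2 + Kp * s + Ki)))

low = err_gain(1.0)       # 1 rad/s: deep inside the tracking band
high = err_gain(1.0e6)    # far above the loop bandwidth
```

At low frequency the gain is essentially zero (the loop tracks position with negligible error), and at high frequency it approaches unity, confirming the high-pass shape used for tuning.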
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
Fixed-Point Conversion
Design and simulate fixed-point systems using Fixed-Point Designer™.
The block does not support 16-bit fixed-point inputs.
Version History
Introduced in R2023b
Self-Organized Criticality: An Explanation of 1/f Noise
Power-law distributions are all around us: in cities, the internet, genes, earthquakes, and even the brain. Physicists call this 1/f noise. This paper suggests a simple cellular automaton model that can generate the power law. Once the model evolves into a minimally stable state, it is said to be in self-organized criticality. From this state, small disturbances do nothing most of the time but sometimes create ‘black-swan’ avalanches that destroy the whole state.
We show that dynamical systems with spatial degrees of freedom naturally evolve into a self-organized critical point. Flicker noise, or $1/f$ noise, can be identified with the dynamics of the
critical state. This picture also yields insight into the origin of fractal objects.
One of the classic problems in physics is the existence of the ubiquitous “$1/f$” noise which has been detected for transport in systems as diverse as resistors, the hour glass, the flow of the river
Nile, and the luminosity of stars [1]. The low-frequency power spectra of such systems display a power-law behavior $f^{-\beta}$ over vastly different time scales. Despite much effort, there is no
general theory that explains the widespread occurrence of $1/f$ noise.
Another puzzle seeking a physical explanation is the empirical observation that spatially extended objects, including cosmic strings, mountain landscapes, and coastal lines, appear to be self-similar
fractal structures [2]. Turbulence is a phenomenon where self-similarity is believed to occur both in time and space. The common feature for all these systems is that the power-law temporal or
spatial correlations extend over several decades where naively one might suspect that the physics would vary dramatically.
In this paper, we argue and demonstrate numerically that dynamical systems with extended spatial degrees of freedom naturally evolve into self-organized critical structures of states which are barely
stable. We suggest that this self-organized criticality is the common underlying mechanism behind the phenomena described above. The combination of dynamical minimal stability and spatial scaling
leads to a power law for temporal fluctuations. The noise propagates through the scaling clusters by means of a “domino” effect upsetting the minimally stable states. Long-wavelength perturbations
cause a cascade of energy dissipation on all length scales, which is the main characteristic of turbulence.
The criticality in our theory is fundamentally different from the critical point at phase transitions in equilibrium statistical mechanics which can be reached only by tuning of a parameter, for
instance the temperature. The critical point in the dynamical systems studied here is an attractor reached by starting far from equilibrium: The scaling properties of the attractor are insensitive to
the parameters of the model. This robustness is essential in our explaining that no fine tuning is necessary to generate $1/f$ noise (and fractal structures) in nature.
Consider first a one-dimensional array of damped pendula, with coordinates $u_n$, connected by torsion springs that are weak compared with the gravitational force. There is an infinity of metastable
or stationary states where the pendula are pointing (almost) down, $u_n \approx 2\pi N$, $N$ an integer, but where the winding numbers $N$ of the springs differ. The initial conditions are such that
the forces $C_n = u_{n+1} - 2u_n + u_{n-1}$, are large, so that all the pendula are unstable. The pendula will rotate until they reach a state where the spring forces on all the pendula assume a
large value $\pm K$ which is just barely able to balance the gravitational force to keep the configuration stable. If all forces are initially positive, then the final forces will all be $K$. Of
course, the array is also stable in any configuration where the springs are still further relaxed; however, the dynamics stops upon reaching this first, maximally sensitive state. We call such a
state locally minimally stable.[3]
What is the effect of small perturbations on the minimally stable structure? Suppose that we “kick” one pendulum in the forward direction, relaxing the force slightly. This will cause the force on a
nearest-neighbor pendulum to exceed the critical value and the perturbation will propagate by a domino effect until it hits the end of the array. At the end of this process the forces are back to
their original values, and all pendula have rotated one period. Thus, the system is stable with respect to small perturbations in one dimension and the dynamics is trivial.
The situation is dramatically different in more dimensions. Naively, one might expect that the relaxation dynamics will take the system to a configuration where all the pendula are in minimally
stable states. A moment’s reflection will convince us that it cannot be so. Suppose that we relax one pendulum slightly; this will render the surrounding pendula unstable, and the noise will spread
to the neighbors in a chain reaction, ever amplifying since the pendula generally are connected with more than two minimally stable pendula, and the perturbation eventually propagates throughout the
entire lattice. This configuration is thus unstable with respect to small fluctuations and cannot represent an attracting fixed point for the dynamics. As the system further evolves, more and more
more-than-minimally stable states will be generated, and these states will impede the motion of the noise. The system will become stable precisely at the point when the network of minimally stable
states has been broken down to the level where the noise signal cannot be communicated through infinite distances. At this point there will be no length scale in the problem so that one might expect
the formation of a scale-invariant structure of minimally stable states. Hence, the system might approach, through a self-organized process, a critical point with power-law correlation functions for
noise and other physically observable quantities. The “clusters” of minimally stable states must be defined dynamically as the spatial regions over which a small local perturbation will propagate. In
a sense, the dynamically selected configuration is similar to the critical point at a percolation transition where the structure stops carrying current over infinite distances, or at a second-order
phase transition where the magnetization clusters stop communicating. The arguments are quite general and do not depend on the details of the physical system at hand, including the details of the
local dynamics and the presence of impurities, so that one might expect self-similar fractal structures to be widespread in nature: The “physics of fractals”[4] could be that they are the minimally
stable states originating from dynamical processes which stop precisely at the critical point.
The scaling picture quite naturally gives rise to a power-law frequency dependence of the noise spectrum. At the critical point there is a distribution of clusters of all sizes; local perturbations
will therefore propagate over all length scales, leading to fluctuation lifetimes over all time scales. A perturbation can lead to anything from a shift of a single pendulum to an avalanche,
depending on where the perturbation is applied. The lack of a characteristic length leads directly to a lack of a characteristic time for the resulting fluctuations. As is well known, a distribution
of lifetimes $D(t) \sim t^{-\alpha}$ leads to a frequency spectrum
\[S(\omega) = \int dt\, \frac{t D(t)}{1 + (\omega t)^2} \approx \omega^{-2 + \alpha} \tag{1}\]
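This integral approximation is easy to check numerically. With an illustrative lifetime exponent of 0.5 (an arbitrary choice, as are the grid and frequency range), the measured log-log slope of the spectrum should come out near 0.5 − 2 = −1.5:

```python
import math

def spectrum(omega, a=0.5, t_lo=1e-6, t_hi=1e6, n=4000):
    """Trapezoidal integral of t*D(t)/(1 + (omega*t)^2) with D(t) = t^-a,
    on a log-spaced grid (dt = t dlog t, so the integrand gains a factor t)."""
    h = (math.log(t_hi) - math.log(t_lo)) / n
    total = 0.0
    for i in range(n + 1):
        t = math.exp(math.log(t_lo) + i * h)
        f = t ** (2 - a) / (1 + (omega * t) ** 2)
        total += f if 0 < i < n else f / 2   # trapezoid endpoint weights
    return total * h

# slope of log S versus log omega; should be close to a - 2 = -1.5
slope = (math.log(spectrum(10.0)) - math.log(spectrum(1.0))) / math.log(10.0)
```

The small deviation from −1.5 comes only from truncating the integration range, illustrating how a scale-free lifetime distribution translates directly into a power-law spectrum.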
In order to visualize a physical system expected to exhibit self-organized criticality, consider a pile of sand. If the slope is too large, the pile is far from equilibrium, and the pile will
collapse until the average slope reaches a critical value where the system is barely stable with respect to small perturbations. The “$1/f$” noise is the dynamical response of the sandpile to small
random perturbations.
To add concreteness to these considerations we have performed numerical simulations in one, two, and three dimensions on several models, to be described here and in forthcoming papers. One model is a
cellular automaton, describing the interactions of an integer variable $z$ with its nearest neighbors. In two dimensions $z$ is updated synchronously as follows:
\[z(x, y) \longrightarrow z(x, y) - 4,\] \[z(x \pm 1, y) \longrightarrow z(x \pm 1, y) +1,\] \[z(x, y \pm 1) \longrightarrow z(x, y \pm 1) +1,\]
if $z$ exceeds a critical value $K$. There are no parameters since a shift in $K$ simply shifts $z$. Fixed boundary conditions are used, i.e., $z=0$ on boundaries. The cellular variable may be
thought of as the force on an individual pendulum, or the local slope of the sand pile (the “hour glass”) in some direction. If the force is too large, the pendulum rotates (or the sand slides),
relieving the force but increasing the force on the neighbors. The system is set up with random initial conditions $z \gg K$, and then simply evolves until it stops, i.e., all $z$’s are less than
$K$. The dynamics is then probed by measurement of the response of the resulting state to small local random perturbations. Indeed, we found response on all length scales limited only by the size of
the system.
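The automaton above can be sketched in a few lines of Python. The grid size, threshold K, and initial condition below are arbitrary; the paper's fixed z = 0 boundary is modeled as open edges where toppled grains are simply lost. The update here sweeps sequentially rather than synchronously, which is harmless for the final state because the sandpile rule is abelian (the stable configuration does not depend on update order); avalanche size is counted in topplings, whereas the paper counts affected sites:

```python
import random

def relax(z, K=3):
    """Apply z -> z-4 (with +1 to each in-lattice neighbor) wherever z > K,
    until every site is stable.  Returns the number of topplings."""
    n = len(z)
    topplings = 0
    changed = True
    while changed:
        changed = False
        for x in range(n):
            for y in range(n):
                while z[x][y] > K:
                    z[x][y] -= 4
                    topplings += 1
                    changed = True
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= x + dx < n and 0 <= y + dy < n:  # open boundary
                            z[x + dx][y + dy] += 1
    return topplings

random.seed(0)
n, K = 20, 3
z = [[random.randint(10, 20) for _ in range(n)] for _ in range(n)]  # z >> K
relax(z, K)                        # evolve to the self-organized state
z[n // 2][n // 2] = K + 1          # trip a single site...
avalanche = relax(z, K)            # ...and measure the response
```

Repeating the single-site perturbation at random sites over many relaxed arrays yields the cluster-size histogram plotted in Fig. 2.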
Figure 1: Self-organized critical state of minimally stable clusters, for a $100 \times 100$ array.
Figure 1 shows a structure obtained for a two-dimensional array of size $100 \times 100$. The dark areas indicate clusters that can be reached through the domino process originated by the tripping of
only a single site. The clusters are thus defined operationally – in a real physical system one should perturb the system locally in order to measure the size of a cluster. Figure 2(a) shows a
log-log plot of the distribution $D(s)$ of cluster sizes for a two-dimensional system determined simply by our counting the number of affected sites generated from a seed at one site and averaging
over many arrays. The curve is consistent with a straight line, indicating a power law $D(s) \sim s^{-\tau}, \tau \approx 0.98$. The fact that the curve is linear over two decades indicates that the
system is at a critical point with a scaling distribution of clusters.
Figure 2: Distribution of cluster sizes at criticality in two and three dimensions, computed dynamically as described in the text. (a) $50 \times 50$ array, averaged over 200 samples; (b) $20 \times
20 \times 20$ array, averaged over 200 samples. The data have been coarse grained.
Figure 2(b) shows a similar plot for a three-dimensional array, with an exponent of $\tau \approx 1.35$. At small sizes the curve deviates from the straight line because discreteness effects of the
lattice come into play. The falloff at the largest cluster sizes is due to finite-size effects, as we checked by comparing simulations for different array sizes.[5]
A distribution of cluster sizes leads to a distribution of fluctuation lifetimes. If the perturbation grows with an exponent $\gamma$ within the clusters, the lifetime $t$ of a cluster is related to
its size $s$ by $t^{1 + \gamma} \approx s$. The distribution of lifetimes, weighted by the average response $s/t$, can be calculated from the distribution of cluster sizes:
\[D(t) = \frac{s}{t} D(s(t)) \frac{ds}{dt} \approx t^{-(\gamma + 1)\tau + 2 \gamma} \equiv t ^ {-\alpha} \tag{2}\]
Figure 3: Distribution of lifetimes corresponding to Fig. 2. (a) For the $50 \times 50$ array, the slope $\alpha \approx 0.42$, yielding a “$1/f$” noise spectrum $f^{-1.58}$; (b) $20 \times 20 \times
20$ array, $\alpha \approx 0.90$, yielding an $f^{-1.1}$ spectrum.
Figure 3 shows the distribution of lifetimes corresponding to Fig. 2 (namely how long the noise propagates after perturbation at a single site, weighted by the temporal average of the response). This
leads to another line indicating a distribution of lifetimes of the form (2) with $\alpha \approx 0.42$ in two dimensions ($50 \times 50$), and $\alpha \approx 0.90$ in three dimensions. These curves
are less impressive than the corresponding cluster-size curves, in particular in three dimensions, because the lifetime of a cluster is much smaller than its size, reducing the range over which we
have reliable data. The resulting power-law spectrum is $S(\omega) \approx \omega^{-2 + \alpha} \approx \omega^{-1.58}$ in 2D and $\omega^{-1.1}$ in 3D.
To summarize, we find a power-law distribution of cluster sizes and time scales just as expected from general arguments about dynamical systems with spatial degrees of freedom. More numerical work is
clearly needed to improve accuracy, and to determine the extent to which the systems are “universal,” e.g., how the exponents depend on the physical details. Our picture of $1/f$ spectra is that it
reflects the dynamics of a self-organized critical state of minimally stable clusters of all length scales, which in turn generates fluctuations on all time scales. Voss and Clarke [6] have performed
measurements indicating that the noise at a given frequency $f$ is spatially correlated over a distance $L(f)$ which increases as $f$ decreases. We urge that more experiments of this type be
performed to investigate the scaling proposed here.
We believe that the new concept of self-organized criticality can be taken much further and might be the underlying concept for temporal and spatial scaling in a wide class of dissipative systems
with extended degrees of freedom.
We thank George Reiter for suggesting that these ideas might apply to the problem of turbulence. This work was supported by the Division of Materials Sciences, U. S. Department of Energy, under
Contract No. DE-AC02-76CH00016.
1. For a review of 1/f noise in astronomy and elsewhere, see W.H. Press, Commun. Mod. Phys. C 7, 103 (1978).
2. B. Mandelbrot, The Fractal Geometry of Nature (Freeman, San Francisco, 1982).
3. C. Tang, K. Wiesenfeld, P. Bak, S. Coppersmith, and P. Littlewood, Phys. Rev. Lett. 58, 1161 (1987).
4. L. P. Kadanoff, Phys. Today 39, No. 2, 6 (1986).
5. P. Bak, C. Tang, and K. Wiesenfeld, to be published.
6. R. F. Voss and J. Clarke, Phys. Rev. B 13, 556 (1976).
How to Develop an Autoregression Forecast Model for Household Electricity Consumption
Author: Jason Brownlee
Given the rise of smart electricity meters and the wide adoption of electricity generation technology like solar panels, there is a wealth of electricity usage data available.
This data represents a multivariate time series of power-related variables that in turn could be used to model and even forecast future electricity consumption.
Autoregression models are very simple and can provide a fast and effective way to make skillful one-step and multi-step forecasts for electricity consumption.
In this tutorial, you will discover how to develop and evaluate an autoregression model for multi-step forecasting household power consumption.
After completing this tutorial, you will know:
• How to create and analyze autocorrelation and partial autocorrelation plots for univariate time series data.
• How to use the findings from autocorrelation plots to configure an autoregression model.
• How to develop and evaluate an autoregression model used to make one-week forecasts.
Let’s get started.
Tutorial Overview
This tutorial is divided into five parts; they are:
1. Problem Description
2. Load and Prepare Dataset
3. Model Evaluation
4. Autocorrelation Analysis
5. Develop an Autoregression Model
Problem Description
The ‘Household Power Consumption‘ dataset is a multivariate time series dataset that describes the electricity consumption for a single household over four years.
The data was collected between December 2006 and November 2010 and observations of power consumption within the household were collected every minute.
It is a multivariate series comprised of seven variables (besides the date and time); they are:
• global_active_power: The total active power consumed by the household (kilowatts).
• global_reactive_power: The total reactive power consumed by the household (kilowatts).
• voltage: Average voltage (volts).
• global_intensity: Average current intensity (amps).
• sub_metering_1: Active energy for kitchen (watt-hours of active energy).
• sub_metering_2: Active energy for laundry (watt-hours of active energy).
• sub_metering_3: Active energy for climate control systems (watt-hours of active energy).
Active and reactive energy refer to the technical details of alternating current.
A fourth sub-metering variable can be created by subtracting the sum of three defined sub-metering variables from the total active energy as follows:
sub_metering_remainder = (global_active_power * 1000 / 60) - (sub_metering_1 + sub_metering_2 + sub_metering_3)
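Note the unit conversion in this formula: global_active_power is in kilowatts averaged over a one-minute window, so multiplying by 1000/60 converts it to watt-hours for that minute, matching the units of the sub-metering variables. A quick check with illustrative values (made up for this example, not taken from the dataset):

```python
# one illustrative one-minute observation
global_active_power = 4.2   # kW averaged over the minute
sub_metering_1 = 0.0        # Wh (kitchen)
sub_metering_2 = 1.0        # Wh (laundry)
sub_metering_3 = 17.0       # Wh (climate control)

# kW * 1000 -> W; a one-minute window is 1/60 h, so W/60 -> Wh
total_wh = global_active_power * 1000 / 60
sub_metering_remainder = total_wh - (sub_metering_1 + sub_metering_2 + sub_metering_3)
```

Here 4.2 kW for one minute is 70 Wh of active energy, of which 18 Wh is sub-metered, leaving a remainder of 52 Wh for the rest of the household.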
Load and Prepare Dataset
The dataset can be downloaded from the UCI Machine Learning repository as a single 20 megabyte .zip file:
Download the dataset and unzip it into your current working directory. You will now have the file “household_power_consumption.txt” that is about 127 megabytes in size and contains all of the observations.
We can use the read_csv() function to load the data and combine the first two columns into a single date-time column that we can use as an index.
# load all data
dataset = read_csv('household_power_consumption.txt', sep=';', header=0, low_memory=False, infer_datetime_format=True, parse_dates={'datetime':[0,1]}, index_col=['datetime'])
Next, we can mark all missing values indicated with a ‘?‘ character with a NaN value, which is a float.
This will allow us to work with the data as one array of floating point values rather than mixed types, which are less efficient.
# mark all missing values
dataset.replace('?', nan, inplace=True)
# make dataset numeric
dataset = dataset.astype('float32')
We also need to fill in the missing values now that they have been marked.
A very simple approach would be to copy the observation from the same time the day before. We can implement this in a function named fill_missing() that will take the NumPy array of the data and copy
values from exactly 24 hours ago.
# fill missing values with a value at the same time one day ago
def fill_missing(values):
    one_day = 60 * 24
    for row in range(values.shape[0]):
        for col in range(values.shape[1]):
            if isnan(values[row, col]):
                values[row, col] = values[row - one_day, col]
We can apply this function directly to the data within the DataFrame.
# fill missing
fill_missing(dataset.values)
Now we can create a new column that contains the remainder of the sub-metering, using the calculation from the previous section.
# add a column for the remainder of sub metering
values = dataset.values
dataset['sub_metering_4'] = (values[:,0] * 1000 / 60) - (values[:,4] + values[:,5] + values[:,6])
We can now save the cleaned-up version of the dataset to a new file; in this case we will just change the file extension to .csv and save the dataset as ‘household_power_consumption.csv‘.
# save updated dataset
dataset.to_csv('household_power_consumption.csv')
Tying all of this together, the complete example of loading, cleaning-up, and saving the dataset is listed below.
# load and clean-up data
from numpy import nan
from numpy import isnan
from pandas import read_csv
from pandas import to_numeric
# fill missing values with a value at the same time one day ago
def fill_missing(values):
    one_day = 60 * 24
    for row in range(values.shape[0]):
        for col in range(values.shape[1]):
            if isnan(values[row, col]):
                values[row, col] = values[row - one_day, col]
# load all data
dataset = read_csv('household_power_consumption.txt', sep=';', header=0, low_memory=False, infer_datetime_format=True, parse_dates={'datetime':[0,1]}, index_col=['datetime'])
# mark all missing values
dataset.replace('?', nan, inplace=True)
# make dataset numeric
dataset = dataset.astype('float32')
# fill missing
fill_missing(dataset.values)
# add a column for the remainder of sub metering
values = dataset.values
dataset['sub_metering_4'] = (values[:,0] * 1000 / 60) - (values[:,4] + values[:,5] + values[:,6])
# save updated dataset
dataset.to_csv('household_power_consumption.csv')
Running the example creates the new file ‘household_power_consumption.csv‘ that we can use as the starting point for our modeling project.
Model Evaluation
In this section, we will consider how we can develop and evaluate predictive models for the household power dataset.
This section is divided into four parts; they are:
1. Problem Framing
2. Evaluation Metric
3. Train and Test Sets
4. Walk-Forward Validation
Problem Framing
There are many ways to harness and explore the household power consumption dataset.
In this tutorial, we will use the data to explore a very specific question; that is:
Given recent power consumption, what is the expected power consumption for the week ahead?
This requires that a predictive model forecast the total active power for each day over the next seven days.
Technically, this framing of the problem is referred to as a multi-step time series forecasting problem, given the multiple forecast steps. A model that makes use of multiple input variables may be
referred to as a multivariate multi-step time series forecasting model.
A model of this type could be helpful within the household in planning expenditures. It could also be helpful on the supply side for planning electricity demand for a specific household.
This framing of the dataset also suggests that it would be useful to downsample the per-minute observations of power consumption to daily totals. This is not required, but makes sense, given that we
are interested in total power per day.
We can achieve this easily using the resample() function on the pandas DataFrame. Calling this function with the argument ‘D‘ allows the loaded data indexed by date-time to be grouped by day (see all
offset aliases). We can then calculate the sum of all observations for each day and create a new dataset of daily power consumption data for each of the eight variables.
The complete example is listed below.
# resample minute data to total for each day
from pandas import read_csv
# load the new file
dataset = read_csv('household_power_consumption.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
# resample data to daily
daily_groups = dataset.resample('D')
daily_data = daily_groups.sum()
# summarize
print(daily_data.shape)
# save
daily_data.to_csv('household_power_consumption_days.csv')
Running the example creates a new daily total power consumption dataset and saves the result into a separate file named ‘household_power_consumption_days.csv‘.
We can use this as the dataset for fitting and evaluating predictive models for the chosen framing of the problem.
Evaluation Metric
A forecast will be comprised of seven values, one for each day of the week ahead.
It is common with multi-step forecasting problems to evaluate each forecasted time step separately. This is helpful for a few reasons:
• To comment on the skill at a specific lead time (e.g. +1 day vs +3 days).
• To contrast models based on their skills at different lead times (e.g. models good at +1 day vs models good at days +5).
The units of the total power are kilowatts and it would be useful to have an error metric that was also in the same units. Both Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) fit this
bill, although RMSE is more commonly used and will be adopted in this tutorial. Unlike MAE, RMSE is more punishing of forecast errors.
The performance metric for this problem will be the RMSE for each lead time from day 1 to day 7.
As a short-cut, it may be useful to summarize the performance of a model using a single score in order to aid in model selection.
One possible score that could be used would be the RMSE across all forecast days.
The function evaluate_forecasts() below will implement this behavior and return the performance of a model based on multiple seven-day forecasts.
from math import sqrt
from sklearn.metrics import mean_squared_error

# evaluate one or more weekly forecasts against expected values
def evaluate_forecasts(actual, predicted):
    scores = list()
    # calculate an RMSE score for each day
    for i in range(actual.shape[1]):
        # calculate mse
        mse = mean_squared_error(actual[:, i], predicted[:, i])
        # calculate rmse
        rmse = sqrt(mse)
        # store
        scores.append(rmse)
    # calculate overall RMSE
    s = 0
    for row in range(actual.shape[0]):
        for col in range(actual.shape[1]):
            s += (actual[row, col] - predicted[row, col])**2
    score = sqrt(s / (actual.shape[0] * actual.shape[1]))
    return score, scores
Running the function will first return the overall RMSE regardless of day, then an array of RMSE scores for each day.
Train and Test Sets
We will use the first three years of data for training predictive models and the final year for evaluating models.
The data in a given dataset will be divided into standard weeks. These are weeks that begin on a Sunday and end on a Saturday.
This is a realistic and useful way of using the chosen framing of the model, where the power consumption for the week ahead can be predicted. It is also helpful with modeling, where models can be used to predict a specific day (e.g. Wednesday) or the entire sequence.
We will split the data into standard weeks, working backwards from the test dataset.
The final year of the data is in 2010 and the first Sunday for 2010 was January 3rd. The data ends in mid November 2010 and the closest final Saturday in the data is November 20th. This gives 46
weeks of test data.
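We can sanity-check this arithmetic with the standard library:

```python
from datetime import date

start = date(2010, 1, 3)   # first Sunday of 2010 (weekday() == 6)
end = date(2010, 11, 20)   # last full Saturday in the data (weekday() == 5)
days = (end - start).days + 1  # inclusive day count
print(days, days // 7)  # 322 46 -> exactly 46 standard weeks of test data
```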
The first and last rows of daily data for the test dataset are provided below for confirmation.
The daily data starts in late 2006.
The first Sunday in the dataset is December 17th, which is the second row of data.
Organizing the data into standard weeks gives 159 full standard weeks for training a predictive model.
The function split_dataset() below splits the daily data into train and test sets and organizes each into standard weeks.
Specific row offsets are used to split the data using knowledge of the dataset. The split datasets are then organized into weekly data using the NumPy split() function.
# split a univariate dataset into train/test sets
def split_dataset(data):
    # split into standard weeks
    train, test = data[1:-328], data[-328:-6]
    # restructure into windows of weekly data
    train = array(split(train, len(train)//7))  # integer division for Python 3
    test = array(split(test, len(test)//7))
    return train, test
We can test this function out by loading the daily dataset and printing the first and last rows of data from both the train and test sets to confirm they match the expectations above.
The complete code example is listed below.
# split into standard weeks
from numpy import split
from numpy import array
from pandas import read_csv
# split a univariate dataset into train/test sets
def split_dataset(data):
    # split into standard weeks
    train, test = data[1:-328], data[-328:-6]
    # restructure into windows of weekly data
    train = array(split(train, len(train)//7))  # integer division for Python 3
    test = array(split(test, len(test)//7))
    return train, test
# load the new file
dataset = read_csv('household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
train, test = split_dataset(dataset.values)
# validate train data
print(train[0, 0, 0], train[-1, -1, 0])
# validate test
print(test[0, 0, 0], test[-1, -1, 0])
Running the example shows that indeed the train dataset has 159 weeks of data, whereas the test dataset has 46 weeks.
We can see that the total active power for the train and test dataset for the first and last rows match the data for the specific dates that we defined as the bounds on the standard weeks for each dataset.
(159, 7, 8)
3390.46 1309.2679999999998
(46, 7, 8)
2083.4539999999984 2197.006000000004
Walk-Forward Validation
Models will be evaluated using a scheme called walk-forward validation.
This is where a model is required to make a one week prediction, then the actual data for that week is made available to the model so that it can be used as the basis for making a prediction on the
subsequent week. This is both realistic for how the model may be used in practice and beneficial to the models allowing them to make use of the best available data.
We can demonstrate this below with separation of input data and output/predicted data.
Input, Predict
[Week1] Week2
[Week1 + Week2] Week3
[Week1 + Week2 + Week3] Week4
The walk-forward validation approach to evaluating predictive models on this dataset is implemented below in a function named evaluate_model().
The name of a function is provided for the model as the argument “model_func“. This function is responsible for defining the model, fitting the model on the training data, and making a one-week forecast.
The forecasts made by the model are then evaluated against the test dataset using the previously defined evaluate_forecasts() function.
# evaluate a single model
def evaluate_model(model_func, train, test):
    # history is a list of weekly data
    history = [x for x in train]
    # walk-forward validation over each week
    predictions = list()
    for i in range(len(test)):
        # predict the week
        yhat_sequence = model_func(history)
        # store the predictions
        predictions.append(yhat_sequence)
        # get real observation and add to history for predicting the next week
        history.append(test[i, :])
    predictions = array(predictions)
    # evaluate predictions days for each week
    score, scores = evaluate_forecasts(test[:, :, 0], predictions)
    return score, scores
Once we have the evaluation for a model, we can summarize the performance.
The function below named summarize_scores() will display the performance of a model as a single line for easy comparison with other models.
# summarize scores
def summarize_scores(name, score, scores):
    s_scores = ', '.join(['%.1f' % s for s in scores])
    print('%s: [%.3f] %s' % (name, score, s_scores))
We now have all of the elements to begin evaluating predictive models on the dataset.
Autocorrelation Analysis
Statistical correlation summarizes the strength of the relationship between two variables.
We can assume the distribution of each variable fits a Gaussian (bell curve) distribution. If this is the case, we can use the Pearson’s correlation coefficient to summarize the correlation between
the variables.
The Pearson’s correlation coefficient is a number between -1 and 1 that describes a negative or positive correlation respectively. A value of zero indicates no correlation.
We can calculate the correlation of time series observations with observations at previous time steps, called lags. Because the correlation of the time series observations is calculated with values of the same series at previous times, this is called a serial correlation, or an autocorrelation.
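As a minimal sketch of the idea, the autocorrelation at a given lag is just Pearson's correlation between the series and a lag-shifted copy of itself (the example series below is made up):

```python
from numpy import array, corrcoef

def autocorrelation(x, lag):
    # Pearson's r between the series and a copy shifted by `lag` steps
    x = array(x, dtype=float)
    return corrcoef(x[:-lag], x[lag:])[0, 1]

# a series with an obvious repeating pattern every 2 steps
series = [1.0, 5.0, 1.0, 5.0, 1.0, 5.0, 1.0, 5.0]
print(autocorrelation(series, 1))  # close to -1: neighbors move oppositely
print(autocorrelation(series, 2))  # close to +1: the pattern repeats
```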
A plot of the autocorrelation of a time series by lag is called the AutoCorrelation Function, or the acronym ACF. This plot is sometimes called a correlogram, or an autocorrelation plot.
A partial autocorrelation function or PACF is a summary of the relationship between an observation in a time series with observations at prior time steps with the relationships of intervening
observations removed.
The autocorrelation for an observation and an observation at a prior time step is comprised of both the direct correlation and indirect correlations. These indirect correlations are a linear function
of the correlation of the observation, with observations at intervening time steps.
It is these indirect correlations that the partial autocorrelation function seeks to remove. Without going into the math, this is the intuition for the partial autocorrelation.
We can calculate autocorrelation and partial autocorrelation plots using the plot_acf() and plot_pacf() statsmodels functions respectively.
In order to calculate and plot the autocorrelation, we must convert the data into a univariate time series. Specifically, the observed daily total power consumed.
The to_series() function below will take the multivariate data divided into weekly windows and will return a single univariate time series.
# convert windows of weekly multivariate data into a series of total power
def to_series(data):
    # extract just the total power from each week
    series = [week[:, 0] for week in data]
    # flatten into a single series
    series = array(series).flatten()
    return series
We can call this function for the prepared training dataset.
First, the daily power consumption dataset must be loaded.
# load the new file
dataset = read_csv('household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
The dataset must then be split into train and test sets with the standard week window structure.
# split into train and test
train, test = split_dataset(dataset.values)
A univariate time series of daily power consumption can then be extracted from the training dataset.
# convert training data into a series
series = to_series(train)
We can then create a single figure that contains both an ACF and a PACF plot. The number of lag time steps can be specified. We will fix this to be one year of daily observations, or 365 days.
# plots
lags = 365
# acf
axis = pyplot.subplot(2, 1, 1)
plot_acf(series, ax=axis, lags=lags)
# pacf
axis = pyplot.subplot(2, 1, 2)
plot_pacf(series, ax=axis, lags=lags)
# show plot
pyplot.show()
The complete example is listed below.
We would expect that the power consumed tomorrow and in the coming week will be dependent upon the power consumed in the prior days. As such, we would expect to see a strong autocorrelation signal in
the ACF and PACF plots.
# acf and pacf plots of total power
from numpy import split
from numpy import array
from pandas import read_csv
from matplotlib import pyplot
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
# split a univariate dataset into train/test sets
def split_dataset(data):
    # split into standard weeks
    train, test = data[1:-328], data[-328:-6]
    # restructure into windows of weekly data
    train = array(split(train, len(train)//7))  # integer division for Python 3
    test = array(split(test, len(test)//7))
    return train, test
# convert windows of weekly multivariate data into a series of total power
def to_series(data):
    # extract just the total power from each week
    series = [week[:, 0] for week in data]
    # flatten into a single series
    series = array(series).flatten()
    return series
# load the new file
dataset = read_csv('household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
# split into train and test
train, test = split_dataset(dataset.values)
# convert training data into a series
series = to_series(train)
# plots
lags = 365
# acf
axis = pyplot.subplot(2, 1, 1)
plot_acf(series, ax=axis, lags=lags)
# pacf
axis = pyplot.subplot(2, 1, 2)
plot_pacf(series, ax=axis, lags=lags)
# show plot
pyplot.show()
Running the example creates a single figure with both ACF and PACF plots.
The plots are very dense, and hard to read. Nevertheless, we might be able to see a familiar autoregression pattern.
We might also see some significant lag observations at one year out. Further investigation may suggest a seasonal autocorrelation component, which would not be a surprising finding.
We can zoom in on the plot by changing the number of lag observations from 365 to 50.
lags = 50
Re-running the code example with this change results in a zoomed-in version of the plots with much less clutter.
We can clearly see a familiar autoregression pattern across the two plots. This pattern is comprised of two elements:
• ACF: A large number of significant lag observations that slowly degrade as the lag increases.
• PACF: A few significant lag observations that abruptly drop as the lag increases.
The ACF plot indicates that there is a strong autocorrelation component, whereas the PACF plot indicates that this component is distinct for the first approximately seven lag observations.
This suggests that a good starting model would be an AR(7); that is, an autoregression model with seven lag observations used as input.
Develop an Autoregression Model
We can develop an autoregression model for univariate series of daily power consumption.
The Statsmodels library provides multiple ways of developing an AR model, such as using the AR, ARMA, ARIMA, and SARIMAX classes.
We will use the ARIMA implementation, as it allows for easy extension to differencing and moving-average terms.
First, the history data comprised of weeks of prior observations must be converted into a univariate time series of daily power consumption. We can use the to_series() function developed in the
previous section.
# convert history into a univariate series
series = to_series(history)
Next, an ARIMA model can be defined by passing arguments to the constructor of the ARIMA class.
We will specify an AR(7) model, which in ARIMA notation is ARIMA(7,0,0).
# define the model
model = ARIMA(series, order=(7,0,0))
Next, the model can be fit on the training data. We will use the defaults and disable all debugging information during the fit by setting disp=False.
# fit the model
model_fit = model.fit(disp=False)
Now that the model has been fit, we can make a prediction.
A prediction can be made by calling the predict() function and passing it either an interval of dates or indices relative to the training data. We will use indices starting with the first time step
beyond the training data and extending it six more days, giving a total of a seven day forecast period beyond the training dataset.
# make forecast
yhat = model_fit.predict(len(series), len(series)+6)
We can wrap all of this up into a function below named arima_forecast() that takes the history and returns a one week forecast.
# arima forecast
def arima_forecast(history):
    # convert history into a univariate series
    series = to_series(history)
    # define the model
    model = ARIMA(series, order=(7,0,0))
    # fit the model
    model_fit = model.fit(disp=False)
    # make forecast
    yhat = model_fit.predict(len(series), len(series)+6)
    return yhat
This function can be used directly in the test harness described previously.
The complete example is listed below.
# arima forecast
from math import sqrt
from numpy import split
from numpy import array
from pandas import read_csv
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot
from statsmodels.tsa.arima_model import ARIMA
# split a univariate dataset into train/test sets
def split_dataset(data):
    # split into standard weeks
    train, test = data[1:-328], data[-328:-6]
    # restructure into windows of weekly data
    train = array(split(train, len(train)//7))  # integer division for Python 3
    test = array(split(test, len(test)//7))
    return train, test
# evaluate one or more weekly forecasts against expected values
def evaluate_forecasts(actual, predicted):
    scores = list()
    # calculate an RMSE score for each day
    for i in range(actual.shape[1]):
        # calculate mse
        mse = mean_squared_error(actual[:, i], predicted[:, i])
        # calculate rmse
        rmse = sqrt(mse)
        # store
        scores.append(rmse)
    # calculate overall RMSE
    s = 0
    for row in range(actual.shape[0]):
        for col in range(actual.shape[1]):
            s += (actual[row, col] - predicted[row, col])**2
    score = sqrt(s / (actual.shape[0] * actual.shape[1]))
    return score, scores
# summarize scores
def summarize_scores(name, score, scores):
    s_scores = ', '.join(['%.1f' % s for s in scores])
    print('%s: [%.3f] %s' % (name, score, s_scores))
# evaluate a single model
def evaluate_model(model_func, train, test):
    # history is a list of weekly data
    history = [x for x in train]
    # walk-forward validation over each week
    predictions = list()
    for i in range(len(test)):
        # predict the week
        yhat_sequence = model_func(history)
        # store the predictions
        predictions.append(yhat_sequence)
        # get real observation and add to history for predicting the next week
        history.append(test[i, :])
    predictions = array(predictions)
    # evaluate predictions days for each week
    score, scores = evaluate_forecasts(test[:, :, 0], predictions)
    return score, scores
# convert windows of weekly multivariate data into a series of total power
def to_series(data):
    # extract just the total power from each week
    series = [week[:, 0] for week in data]
    # flatten into a single series
    series = array(series).flatten()
    return series
# arima forecast
def arima_forecast(history):
    # convert history into a univariate series
    series = to_series(history)
    # define the model
    model = ARIMA(series, order=(7,0,0))
    # fit the model
    model_fit = model.fit(disp=False)
    # make forecast
    yhat = model_fit.predict(len(series), len(series)+6)
    return yhat
# load the new file
dataset = read_csv('household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
# split into train and test
train, test = split_dataset(dataset.values)
# define the names and functions for the models we wish to evaluate
models = dict()
models['arima'] = arima_forecast
# evaluate each model
days = ['sun', 'mon', 'tue', 'wed', 'thr', 'fri', 'sat']
for name, func in models.items():
    # evaluate and get scores
    score, scores = evaluate_model(func, train, test)
    # summarize scores
    summarize_scores(name, score, scores)
    # plot scores
    pyplot.plot(days, scores, marker='o', label=name)
# show plot
pyplot.show()
Running the example first prints the performance of the AR(7) model on the test dataset.
We can see that the model achieves an overall RMSE of about 381 kilowatts.
This model has skill when compared to naive forecast models, such as a model that forecasts the week ahead using observations from the same time one year ago that achieved an overall RMSE of about
465 kilowatts.
arima: [381.636] 393.8, 398.9, 357.0, 377.2, 393.9, 306.1, 432.2
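The naive baseline itself is not implemented in this tutorial. One plausible version, compatible with the evaluate_model() harness above (the function name and the 52-week offset are my assumptions), simply repeats the standard week observed one year earlier:

```python
# naive forecast: repeat the same standard week from one year earlier
def week_one_year_ago_forecast(history):
    # history is a list of (7, 8) weekly arrays; column 0 is total power
    last_year_week = history[-52]  # 52 standard weeks back
    return last_year_week[:, 0]    # seven daily totals as the forecast
```

Because the training set holds 159 standard weeks, the 52-week look-back is always available.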
A line plot of the forecast is also created, showing the RMSE in kilowatts for each of the seven lead times of the forecast.
We can see an interesting pattern.
We might expect that earlier lead times are easier to forecast than later lead times, as the error at each successive lead time compounds.
Instead, we see that Friday (lead time +6) is the easiest to forecast and Saturday (lead time +7) is the most challenging to forecast. We can also see that the remaining lead times all have a similar
error in the mid- to high-300 kilowatt range.
Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.
• Tune ARIMA. The parameters of the ARIMA model were not tuned. Explore or search a suite of ARIMA parameters (p, d, q) to see if performance can be further improved.
• Explore Seasonal AR. Explore whether the performance of the AR model can be improved by including seasonal autoregression elements. This may require the use of a SARIMA model.
• Explore Data Preparation. The model was fit on the raw data directly. Explore whether standardization or normalization or even power transforms can further improve the skill of the AR model.
If you explore any of these extensions, I’d love to know.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Summary
In this tutorial, you discovered how to develop and evaluate an autoregression model for multi-step forecasting of household power consumption.
Specifically, you learned:
• How to create and analyze autocorrelation and partial autocorrelation plots for univariate time series data.
• How to use the findings from autocorrelation plots to configure an autoregression model.
• How to develop and evaluate an autoregression model used to make one-week forecasts.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
The post How to Develop an Autoregression Forecast Model for Household Electricity Consumption appeared first on Machine Learning Mastery.
Baseboard Trim Calculator - Online Calculators
Enter the length & width of the room, door and windows to calculate the trim needed with our basic and advanced baseboard trim calculator
Baseboard Trim Calculator
Enter any 2 values to calculate the missing variable
The Baseboard Trim Calculator is an efficient online tool that helps you plan your room design according to its size, height and trimming requirements. This calculator gives you all of this with a single click.
The formula is:
$\text{BT} = 2 \times (L + W)$
Variable Meaning
BT Baseboard Trim (the total length of trim needed)
L Length of the room
W Width of the room
How to Calculate?
First, measure the length (L) and width (W) of the room where the baseboard trim will be installed. Second, add the length and width together. Finally, multiply this sum by 2 to calculate the total length of baseboard trim (BT) needed to cover all four walls of the room.
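The calculation can also be sketched in code; the optional door deduction and the 3-foot default door width below are illustrative assumptions on my part, not part of the calculator's formula:

```python
def baseboard_trim(length, width, doors=0, door_width=3.0):
    # perimeter of the room, minus any doorway openings that get no trim
    return 2 * (length + width) - doors * door_width

print(baseboard_trim(12, 10))          # 44.0 -> matches Example 1 below
print(baseboard_trim(15, 8))           # 46.0 -> matches Example 2 below
print(baseboard_trim(15, 8, doors=1))  # 43.0 -> one 3-ft doorway subtracted
```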
Solved Examples:
Example 1:
• Length of the room (L) = 12 feet
• Width of the room (W) = 10 feet
Calculation Instructions
Step 1: BT = $2 \times (L + W)$ Start with the formula.
Step 2: BT = $2 \times (12 + 10)$ Replace L with 12 feet and W with 10 feet.
Step 3: BT = $2 \times 22$ Add the length and width: $12 + 10 = 22$
Step 4: BT = 44 feet Multiply 22 by 2 to get the total baseboard trim needed.
The total length of baseboard trim needed is 44 feet.
Example 2:
• Length of the room (L) = 15 feet
• Width of the room (W) = 8 feet
Calculation Instructions
Step 1: BT = $2 \times (L + W)$ Start with the formula.
Step 2: BT = $2 \times (15 + 8)$ Replace L with 15 feet and W with 8 feet.
Step 3: BT = $2 \times 23$ Add the length and width: $15 + 8 = 23$
Step 4: BT = 46 feet Multiply 23 by 2 to get the total baseboard trim needed.
The total length of baseboard trim needed is 46 feet.
Baseboard Trimming: Things You Must Know
Baseboard trim is a decorative element installed along the bottom edges of interior walls. It is also known as baseboard moulding.
Apart from its aesthetic role, it also protects the walls from damage caused by foot traffic, vacuum cleaners and furniture. It covers up the gap where the wall meets the floor, giving both the walls and the floor a neat and clean look.
To calculate how much baseboard trim you need, measure the perimeter of your room along the walls. For installing trim in a 10 by 10 room, it is estimated that you need 40 feet of baseboard trim. For larger spaces, like a 2,000 sq ft house, calculate the amount with the formula after measuring the perimeter of each room.
Knowledge about baseboard trimming is essential. It would help you in selecting trim size, cutting angles, and in calculation of trimming amount of baseboard.
The choice of baseboard trim for your room, in terms of design, height and size, is up to you. However, common baseboard heights range between 3 and 6 inches, and rooms with higher ceilings call for taller trims.
Baseboard cutting is done with tools like miter saws for accurate cuts, especially at corners. Caulking the bottom edges of the walls after installing the baseboard trim helps to fill any gaps.
The Baseboard Trim Calculator simplifies the process of estimating the amount of trim required for your room renovation projects. By providing an easy-to-use tool for material estimation, this
calculator enhances efficiency and accuracy in planning.
Test Quiz 2
Question 1 of 6
A company has outsourced the manufacturing of a gasket for one of their valves to a company in China. The gaskets are received in very large lots (with many thousands of gaskets). The company controls a batch by sampling 200 gaskets at random from the lot; these are classified as defective or intact. A lot is accepted if there are at most 2 defective items among the controlled ones.
What is the approximate probability of accepting a lot if the percentage of defectives is $0.4\%$?
Question 2 of 6
In a redesigned valve the so-called “elasticity modulus” of the material is important for the functionality. To compare the elasticity modulus of 3 different brass alloys, samples from each alloy were purchased from 5 different manufacturers. The measurements in the table below indicate the measured elasticity modulus in GPa:
Brass alloy Row sum
M1 M2 M3
Manufacturer A 82.5 90.9 75.6 249.0
Manufacturer B 83.7 99.2 78.1 261.0
Manufacturer C 80.9 101.4 87.3 269.6
Manufacturer D 95.2 104.2 92.2 291.6
Manufacturer E 80.8 104.1 83.8 268.7
Column sum 423.1 499.8 417.0
Consider only the data for brass alloy M1. The median and the upper quartile for these become (using the eBook Chapter 1 definition):
Question 3 of 6
The arrival of guests wishing to check into a hotel in the period between 14:00 (2 pm) and 18:00 (6 pm) is assumed to be described by a Poisson process (arrivals are assumed evenly distributed over time and independent of each other). From extensive previous measurements it has been found that the probability that no guests arrive in a period of 15 minutes is 0.30 ($ P (X_{15min} = 0) = 0.30 $, where $ X_{15min} $ describes the number of arrivals per 15 min).
The expected number of arrivals per 15 min, and the probability that in a period of 1 hour 8 guests or more arrive are:
Question 4 of 6
On a shelf 9 apparently identical ring binders are positioned. It is known that 2 of the ring binders contain statistics exercises, 3 of the ring binders contain math problems and 4 of the ring binders contain reports. Three ring binders are sampled without replacement.
The random variable X describes the number of ring binders with statistics exercises among the 3 chosen ones. The mean and variance of the random variable X are:
Question 5 of 6
If you did the previous exercise, the following is a repetition: On a shelf 9 apparently identical ring binders are positioned. It is known that 2 of the ring binders contain statistics exercises, 3 of the ring binders contain math problems and 4 of the ring binders contain reports. Three ring binders are sampled without replacement.
The probability ($ {P_1} $) that all the three chosen ring binders contain reports and the probability ($ {P_2} $) to chose exactly one of each kind of ring binder are:
Question 6 of 6
A PC user noted that the probability of no spam emails during a given day is 5% ($ P (X = 0) = 0.05 $, where $ X $ denotes the number of spam emails per day). The number of spam emails per day is assumed to follow a Poisson distribution.
The expected number of spam emails per day and the probability of getting more than 5 spam mails on any given day are:
All Subsets Regression - Linear Models • Genstat v21
Select menu: Stats | Regression Analysis | All Subsets Regression | Linear Models
Use this to search through linear regression models. There are various methods for choosing a regression model when there are many candidate model terms. The Change model button on the Linear
Regression dialog provides forward selection, backward elimination and stepwise regression. However these methods result in only one model and alternative models, with an equivalent or even better
fit, are easily overlooked.
1. After you have imported your data, from the menu select
Stats | Regression Analysis | All Subsets Regression | Linear Models.
2. Fill in the fields as required then click Run.
You can set additional Options then after running, you can save the results by clicking Save.
Especially in observational studies with many non-orthogonal terms there are frequently a number of alternative models, and then selection of just one well-fitting model is unsatisfactory and
possibly misleading. A preferable method is to fit all possible regression models, and to evaluate these according to some criterion. In this way a number of best regression models can be selected.
However the fitting of all possible regression models is very computer intensive. It should also be used with caution, because models can be selected which appear to have a lot of explanatory power,
but contain noise variables only. This may occur particularly when the number of parameters is large in comparison with the number of units. Terms should therefore not be selected on the basis of a
statistical analysis alone. Use this to perform these model selection methods.
Available data
This lists data structures appropriate to the current input field. The contents will change as you move from one field to the next. You can double-click a name to copy it to the current input field
or type it in.
Response variate
Specify a response variate containing the data.
Model formula of list of explanatory data
Specifies the model candidate terms in the fitted model, which may be set using a model formula or using a list of terms separated by commas.
Terms always included in the model
It is sometimes desirable to include specific terms in every model. Such terms may be specified by in this box, which may be set using a model formula or using a list of terms separated by commas.
This provides a quick way of entering operators in the Model terms formula. Double-click on the required symbol to copy it to the current input field. You can also type in operators directly. See
model formula for a description of each.
Factorial limit on model terms
You can control the factorial limit on the model terms to be generated when you use model-formula operators like ‘*’.
Specifies the model selection methods to be used.
Accumulated An accumulated analysis of deviance in which all model terms are added one by one to the model in the given order
Pooled An accumulated analysis of deviance in which terms with the same number of identifiers, e.g. main effects or two-factor interactions, are pooled
Forward selection an accumulated analysis of deviance resulting from forward selection
Backwards elimination An accumulated analysis of deviance resulting from backward elimination
Forward stepwise An accumulated analysis of deviance resulting from stepwise regression starting with no candidate terms in the model
Backward stepwise An accumulated analysis of deviance resulting from stepwise regression starting with all candidate terms in the model
All possible Summary statistics for a number of best models among all possible models
Help setting up an absolute value for expressions
I have a variable that I need to set to the absolute value of an expression.
I followed the advice found here, but when I run my model I find that the variables in my code below are both negative. The code sample and values can be seen in the screenshots below.
freq, price_down and price_up are all read from a pandas dataframe, while tot_e_del_quart is a list containing different expressions.
I've been quite stumped by this issue as I don't see what I'm doing wrong compared to other examples. As usual, any help would be greatly appreciated!
• Hi Bill,
The solution values of z and z1 are numerically 0, i.e., with regards to floating-point tolerances. Please consult our guide on numerics for more details on this.
• So what you're saying is that the optimizer has made the values so small that they effectively become 0, and the absolute value of -0 is 0?
On a more hypothetical note. Would the way the code is currently structured work in your opinion or is there some flaw in it?
If we were to say that z = -2, would z1 then be equal to 2?
• Any value close enough to 0 is interpreted as 0. When taking the absolute value of a variable, more things happen behind the scenes of the solver than just taking the absolute value of the value
in the solution.
You should maybe relax the z variable to allow negative values in the first place. By default, all variables are non-negative, and you need to specify a negative lower bound to change this:
z = m.addVar(lb=-GRB.INFINITY)
See also the documentation about variables.
• Thank you so so so much Matthias.
You've helped clear this up immensely.
Have a wonderful day!
Calculate interest rate earned on investment
Calculate your earnings and more. Consistent investing over a long period of time can be an effective strategy to accumulate wealth. Even small deposits to a If this calculation is for a lump sum
deposit with no recurring transactions enter " Never" in Multiply your interest earned against income tax rate (as a decimal) and that will be the “Long term bonds are a terrible investment at
current rates. Assuming that the interest is compounded annually, calculate the annual interest rate earned on this investment. The following timeline plots the variables that
RD Calculator - Calculate the interest earned and the amount of Recurring Deposit you the maturity value of the investment if it grows at a certain interest rate. When investing in a Fixed Deposit,
the amount you deposit earns interest as per the prevailing FD interest rate. This interest keeps compounding over time, and They usually calculate according to their own will. Each time you earn
interest on your principal, it is added to the original amount, which then *While the annualized rate of return is 8% during the investment time period of 15 years, the Instantly calculate the
compound interest earnings on money market deposit Calculate money market account interest earnings given the interest rate, investing versus keeping funds in a regular savings account is the higher
rate of interest. Because it simply doesn't make sense to earn 1% on your money when you
Simple interest calculator with formulas and calculations to solve for principal, interest rate, number of periods or final investment value. A = P(1 + rt)
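The simple-interest formula quoted here is easy to turn into code. This is a minimal Python sketch (the function name is ours, not taken from any particular calculator):

```python
def simple_interest_final_value(principal, rate, periods):
    """Final investment value under simple interest: A = P(1 + rt)."""
    return principal * (1 + rate * periods)

# e.g. $1,000 at 5% per year for 3 years
print(round(simple_interest_final_value(1000, 0.05, 3), 2))  # 1150.0
```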
Estimate interest on your ANZ term
deposits. Investment amount ($) Term Deposit and an ANZ Term Deposit (apart from the interest rate that applies) is that Compounding is when you earn interest on your investment over a period of
time, due to which you witness a You expect the Annual Rate of Returns to be. To calculate the future value of a monthly investment, enter the beginning the monthly dollar amount you plan to deposit,
the interest rate you expect to earn,
Annual Interest Estimate the rate you'll earn on your investment by checking Bankrate's rate tables. You can find the best rates on CDs, checking, savings and Calculate the interest rate you are
paying on your loan, or receiving on your The effective annual rate is the interest rate earned on a loan or investment over a
The FV function can calculate compound interest and return the future value of an investment. To configure the function, we need to provide a rate, the number of
Input term length and interest rate to see total interest earned. NerdWallet’s CD calculator shows what you can earn with a CD, a low-risk investment that you can leave untouched for months or Use
this simple interest calculator to find A, the Final Investment Value, using the simple interest formula: A = P(1 + rt) where P is the Principal amount of money to be invested at an Interest Rate R%
per period for t Number of Time Periods. Interest Rate. The published interest rate for this CD. Make sure to enter the actual interest rate, not the annual percentage yield (APY). Compounding.
Interest earned on your CD's accumulated interest. This calculator allows you to choose the frequency that your CD's interest income is added to your account. For example, in the United States, the
middle class has a marginal tax rate of 25% and the average inflation rate is 3%. To maintain the value of the money, a stable interest rate or investment return rate of 4% or above needs to be
earned, and this is not easy to achieve. CD Calculator Calculate your earnings and more. Use this CD calculator to find out how much interest is earned on a certificate of deposit (CD). Just enter a
few pieces of information and this CD So how do you know what rate of return you'll earn? Well, the SmartAsset investment calculator default is 4%. This may seem low to you if you've read that the
stock market averages much higher returns over the course of decades.
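The compound-interest growth these calculators report can be sketched in a few lines. This is the generic future-value formula, not any particular site's implementation:

```python
def future_value(principal, annual_rate, compounds_per_year, years):
    """Future value with compound interest: FV = P * (1 + r/n)**(n*t)."""
    return principal * (1 + annual_rate / compounds_per_year) ** (compounds_per_year * years)

# e.g. $1,000 at 5% compounded annually for 10 years
print(round(future_value(1000, 0.05, 1, 10), 2))  # 1628.89
```

Increasing the compounding frequency (monthly, daily) raises the result slightly, which is why the APY a bank quotes is a bit higher than the nominal rate.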
That's why, all it takes to earn South Africa's best investment rate is R500. Because we understand Calculate the returns on your investment at SA's Best rates. Calculate your repayments & total
interest under different fixed & variable rate scenarios. Term Deposit. Your Savings Details. Investment Term. 1 Month. Our Term Deposit Interest Rate calculator will enable you to estimate how much
you can potentially earn with different interest rates and term deposit terms. Our term deposit calculator helps you determine your accumulated savings based on the amount invested, the term of
investment and the interest rate offered. Interest, in finance and economics, is payment from a borrower or deposit-taking financial (Interest may be part or the whole of the profit on an investment,
but the two The rate of interest is equal to the interest amount paid or received over a Compound interest means that interest is earned on prior interest in addition Simply put, you calculate the
interest rate divided by the number of times in a year to invest in a fixed deposit with compound interest, this is how you will earn 20 Aug 2018 Our compound interest calculator will help you
determine how much When you invest in the stock market, you don't earn a set interest rate.
See how to calculate interest in your accounts, including tips for compound interest. Compound interest; Ongoing investments (monthly deposits, for example) The calculation above works when your
interest rate is quoted as an annual
detailed lesson plan in math grade 1 shapes
LEARNING OBJECTIVES At the end of the lesson, the pupils are expected to achieve atleast 75% proficiency level in the following behavior: Cognitive: Name and tell the months of the year in the right
order. III. Able to join sets to addition of whole numbers. Cognitive: Show the relationship of joining sets to addition
of whole numbers. A Detailed Lesson Plan in Math- Grade 1 I. Have two separate lines for boys and girls with 5 members each line. If you have Wixie, you can assign this presentation file to the students in your class so they can identify the shapes in the pictures. These projects will
help children learn all about fruit, while developing critical thinking skills, reinforcing math. CCSS.MATH.CONTENT.2.G.A.1 Asking students what are the sets of things in a given song. Display the presentation to your students and work together to identify and draw the shapes on it.
Draw the four basic shapes. Put them together! Valuing: Why do you think group ___ won? Prerequisite Concepts and Skills: You may ask, Using a yellow art paper, draw a circle on, board. Working well
with others. Write the letter of the correct answer on the blank. (M1NS - lIld-75) II. asking students what are the sets of things in a given song. What did they buy? Davao Oriental State College of
Science and Technology. You can evaluate prior knowledge during your initial conversation about shapes. Subject Matter Skill Focus: Recognizing of a whole Materials: cut outs, apple, Value
. This lesson plan contains activities, a quiz, and discussion points that will help your students learn more about various geometric shapes and their properties. Enjoy playing mathematics puzzles.
What word did we use before the answer? Did all the members of their group work well? Students: d. build
knowledge by actively exploring real-world issues and problems, developing ideas and theories and pursuing answers and solutions. Take a walk around your classroom or school looking for additional shapes in the environment. Checking of assignment. Mathematics. Instruct students to label the corners and sides of a couple objects in the room. CCSS.CONTENT.K.G.A.1 Grade the artwork based on a pre-determined rubric, if needed. Show your students different examples of squares, circles,
rectangles, and triangles and work with them to count the sides and identify them correctly. _____ % 1st girl writes the
Kids will be tasked
with a shape hunt to find circular, triangular, and rectangular shapes in either their home or classroom. Explain that each, Explain to the class that today
they will learn how to describe the. a) 4 bags 2 boys b) 6 stars 2 stars Ask students to find a corner in the room. The Greedy Triangle. Have students use paint or drawing tools to identify the shape
on the page. A DETAILED LESSON PLAN IN MATHEMATICS GRADE 1 (FIRST QUARTER) I. Students will gain a better understanding of how to describe a shape by the number of edges and vertices it has, rather
than by its name. Detailed Lesson Plan in Mathematics I (Using a Calendar- Months of the Year) I. Grade 1 Lesson Plan. Detailed lesson plan in elementary mathematics grade level: strand: numbers and number. Your students will love identifying how many sides shapes have by drawing and counting them! The group with the most number of correct answers wins.
Semi-Detailed Lesson Plan. Great discussion questions include: Point to each corner on the triangle. Ask students to look around your classroom to
find objects in your classroom that are a particular shape like a circle or square and to name the shapes that they see. GRADE 1 SAMPLE COT LESSON PLAN is . CCSS.MATH.CONTENT.2.G.A.1. If students
have access to iPads or tablets, it is easy for them to log in to Wixie, start a new project, and simply add the image from the camera roll. Objectives: At the end of the 60-minute discussion, the pupils should be able to: a. compare two sets using the expressions "less than," "more than," and "as many as" and order sets from least to greatest and vice versa. b. Value: Working well with others. For the second group, the 1st girl shows again another card with objects, the 1st boy
Draw a shape such as a triangle on the board, and change the non-defining attibutes by coloring it different colors, drawing it bigger or smaller, and drawing it upside down or sideways. Present a
song using pictures. The children will put the shape to, The children will spin the wheel. Students will be able to name shapes according to their attributes. For a writing-focused shape project
explore: Informational text projects that build thinking and creativity. mathematics this shape is called a (rhombus). An example of data being processed may be a unique identifier stored in a
cookie. To buy pencils and paper Subject Matter: Topic: Recreational Mathematics. LESSON PLAN Teacher Hannah Jane D. Rosagaran Subject Math. lesson plan in math grade 1.docx. LESSON PLANS. _____, How
many pencils did they buy? Pray for knowledge and wisdom for you to demo it properly. picture cards, cutouts Explain to the class that a, Count the sides of the triangle together. Subject Matter: 1.
Write the number story. Display a triangle to the class, either by drawing it on the
whiteboard or using an interactive whiteboard. _____ Lesson Plan. CONTENT Paglalarawan at Pagguhit ng Buong Region o Pangkat Batay sa / na Bahaging Natira. This interesting learning aid features a
guided missile cruiser made up of squares, rectangles, and triangles. Introduce different 2-dimensional shapes to your students. If students created
individual pages using Wixie, combine them together using the Import Pages feature. d) 3 umbrellas 2 hats For example, the orange ball could be painted red, and then it would be a red ball.
How easily can students spot shapes in your room? Reference: Math in the Modern World, by Dr. Ricardo Talde, Lorimar Publishing . of mango, guava, etc In this lesson, young learners will be
introduced to defining and non-defining attributes (e.g., a circle is round and a closed shape) of common shapes such as circles, triangles, and rectangles. K-12 Curriculum Guide in Mathematics I p. 1. Psychomotor: Write the correct number stories.
Compare and classify 2-dimensional (flat/plane) figures according to common attributes. Show students real objects such as different types and sizes of balls, or photographs of real objects. 1 0 obj
Explain that it means a characteristic that specifically describes something. . Reinforce students' learning of geometric shapes and colors by making tissue paper sun catchers. Students learn how to
solve music and math problems by finding patterns. If you have access to a printer, ask students to print their page or project, so you can hang the images around the room as examples of different
shapes students can find in the world around them. improve their knowledge about numbers and enhance their skills of joining sets to addition of Sort 2-D and 3-D objects, using two attributes, and .
Relate the basic shape square to the things around them, tune of London Bridge. Provide students with a set of flashcards that have pictures of shapes in the lesson on
the front and the shape's names in English and their home language (L1) on the back. Have students think-pair-share what a corner and side are with an elbow partner. Extension Activities: 1.
For example, a ball is round, so. Capture images of the shapes you find with a digital camera or an iPad or
tablet. Motivation: Write the correct number sentence. Put the items together. A fascinating mind-boggler, it is also a great way to teach
kids important concepts in geometry! Performance Standards The learner is able to create models of plane figures and solve accurately authentic problems involving sides and angles of a polygon.
References: K-12 Curriculum Guide in Mathematics I p. 11, Lesson Guide in Elem. This lesson provides young mathematicians with a great introduction to geometry! Identify triangles, quadrilaterals,
pentagons, hexagons, and cubes. Materials: pictures of the song, objects like pencil, eraser, ruler, etc. Identify triangles, quadrilaterals, pentagons, hexagons, and cubes. What will be the new set?
The Magic Of Math Unit 4 for FIRST GRADE focuses on: Week 1 2D Shapes: Naming Shapes, Attributes, and Sorting Shapes Week 2: Composing 2D Shapes Week 3: 3D Shapes: Naming Shapes, Attributes, Looking
at what 2D Shapes make up the 3D Shapes Week 4: Fractions: Halves, Fourths, Examples, NonExamples, Equal Shares and Parts . Read through the flashcards with the students orally prior to the lesson.
3-D Shapes Bundle - A week supply of hands on 3-D shapes mathematics activities for ages 9-10. This assessment continues as students move about the room and school finding and capturing additional
shapes with a camera or tablet. (and) Recognize and draw shapes having specified attributes, such as a given number of angles or a given number of equal faces. writes the number on the board.
________ Here are some suggestions for you to try.
Regression with SAS
Chapter 3 – Regression with Categorical Predictors
Chapter Outline
3.0 Regression with categorical predictors
3.1 Regression with a 0/1 variable
3.2 Regression with a 1/2 variable
3.3 Regression with a 1/2/3 variable
3.4 Regression with multiple categorical predictors
3.5 Categorical predictor with interactions
3.6 Continuous and categorical variables
3.7 Interactions of continuous by 0/1 categorical variables
3.8 Continuous and categorical variables, interaction with 1/2/3 variable
3.9 Summary
3.10 For more information
3.0 Introduction
In the previous two chapters, we have focused on regression analyses using continuous variables. However, it is possible to include categorical predictors in a regression analysis, but it requires
some extra work in performing the analysis and extra work in properly interpreting the results. This chapter will illustrate how you can use SAS for including categorical predictors in your analysis
and describe how to interpret the results of such analyses.
This chapter will use the elemapi2 data that you have seen in the prior chapters. We assume that you have put the data files in the "c:\sasreg" directory. We will focus on four variables: api00, some_col, yr_rnd and mealcat, which breaks the variable meals into three categories. Let’s have a quick look at these variables.
proc datasets nolist;
contents data="c:\sasreg\elemapi2" out=elemdesc noprint;
run;
proc print data=elemdesc noobs;
var name label nobs;
where name in ('api00', 'some_col', 'yr_rnd', 'mealcat');
run;
NAME LABEL NOBS
api00 api 2000 400
mealcat Percentage free meals in 3 categories 400
some_col parent some college 400
yr_rnd year round school 400
So we have seen the variable label and number of valid observations for each variable. Now let’s take a look at the basic statistics of each variable. We will use proc univariate and make use of the
Output Delivery System (ODS) introduced in SAS 8 to get a shorter output. ODS gives us better control over the output of a SAS procedure.
proc univariate data="c:\sasreg\elemapi2";
ods output BasicMeasures=varinfo;
run;
proc sort data=varinfo;
by varName;
run;
proc print data=varinfo noobs;
by varName;
where varName in ('api00', 'some_col', 'yr_rnd', 'mealcat');
run;
varName=api00
Measure LocValue VarMeasure VarValue
Mean 647.623 Std Deviation 142.24896
Median 643.000 Variance 20235
Mode 657.000 Range 571.00000
_ Interquartile Range 239.00000

varName=mealcat
Measure LocValue VarMeasure VarValue
Mean 2.015 Std Deviation 0.81942
Median 2.000 Variance 0.67145
Mode 3.000 Range 2.00000
_ Interquartile Range 2.00000

varName=some_col
Measure LocValue VarMeasure VarValue
Mean 19.713 Std Deviation 11.33694
Median 19.000 Variance 128.52616
Mode 0.000 Range 67.00000
_ Interquartile Range 16.00000

varName=yr_rnd
Measure LocValue VarMeasure VarValue
Mean 0.230 Std Deviation 0.42136
Median 0.000 Variance 0.17754
Mode 0.000 Range 1.00000
_ Interquartile Range 0
We can use proc means, shown below, to obtain more or less the same statistics as above. But we have to know the names for the statistics, and we have less control over the layout of the output.
options nolabel;
proc means data="c:\sasreg\elemapi2" mean median range std var qrange;
var api00 some_col yr_rnd mealcat;
run;
Variable Mean Median Range Std Dev Variance Quartile Range
api00 647.6225000 643.0000 571.0000 142.2489610 20234.77 239
some_col 19.7125000 19.0000 67.0000 11.3369378 128.5261591 16
yr_rnd 0.2300000 0 1.0000 0.4213595 0.1775439 0
mealcat 2.0150000 2.0000 2.0000 0.8194227 0.6714536 2
The variable api00 is a measure of the performance of the students. The variable some_col is a continuous variable that measures the percentage of the parents in the school who have attended college.
The variable yr_rnd is a categorical variable that is coded 0 if the school is not year round, and 1 if year round. The variable meals is the percentage of students who are receiving state sponsored
free meals and can be used as an indicator of poverty. This was broken into 3 categories (to make equally sized groups) creating the variable mealcat. The following macro function created for this
dataset gives us codebook-type information on a variable that we specify. It reports the number of unique values a variable takes, which we couldn’t get from either proc univariate or proc means. This macro makes use of proc sql and has very concise output.
%macro codebook(var);
proc sql;
title "Codebook for &var";
select count(&var) label="Total of Obs",
count(distinct &var) label="Unique Values",
max(&var) label="Max",
min(&var) label="Min",
nmiss(&var) label="Coded Missing",
mean(&var) label="Mean",
std(&var) label ="Std. Dev."
from "c:\sasreg\elemapi2";
quit;
title " ";
%mend codebook;
%codebook(api00);
%codebook(yr_rnd);
%codebook(some_col);
%codebook(mealcat);
Codebook for api00
Total Unique Coded Std.
of Obs Values Max Min Missing Mean Dev.
400 271 940 369 0 647.6225 142.249
Codebook for yr_rnd
Total Unique Coded Std.
of Obs Values Max Min Missing Mean Dev.
400 2 1 0 0 0.23 0.42136
Codebook for some_col
Total Unique Coded Std.
of Obs Values Max Min Missing Mean Dev.
400 49 67 0 0 19.7125 11.33694
Codebook for mealcat
Total Unique Coded Std.
of Obs Values Max Min Missing Mean Dev.
400 3 3 1 0 2.015 0.819423
3.1 Regression with a 0/1 variable
The simplest example of a categorical predictor in a regression analysis is a 0/1 variable, also called a dummy variable or sometimes an indicator variable. Let’s use the variable yr_rnd as an
example of a dummy variable. We can include a dummy variable as a predictor in a regression analysis as shown below.
proc reg data="c:\sasreg\elemapi2";
model api00 = yr_rnd;
run;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 1 1825001 1825001 116.24 <.0001
Error 398 6248671 15700
Corrected Total 399 8073672
Root MSE 125.30036 R-Square 0.2260
Dependent Mean 647.62250 Adj R-Sq 0.2241
Coeff Var 19.34775
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 684.53896 7.13965 95.88 <.0001
yr_rnd year round school 1 -160.50635 14.88720 -10.78 <.0001
This may seem odd at first, but this is a legitimate analysis. But what does this mean? Let’s go back to basics and write out the regression equation that this model implies.
api00 = Intercept + Byr_rnd * yr_rnd
where Intercept is the intercept (or constant) and we use Byr_rnd to represent the coefficient for variable yr_rnd. Filling in the values from the regression equation, we get
api00 = 684.539 + -160.5064 * yr_rnd
If a school is not a year-round school (i.e., yr_rnd is 0) the regression equation would simplify to
api00 = Intercept + 0 * Byr_rnd
api00 = 684.539 + 0 * -160.5064
api00 = 684.539
If a school is a year-round school, the regression equation would simplify to
api00 = Intercept + 1 * Byr_rnd
api00 = 684.539 + 1 * -160.5064
api00 = 524.0326
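To make the arithmetic concrete, here is a minimal check of the two predicted values (in Python rather than SAS, using the coefficients reported by proc reg above):

```python
# Fitted coefficients reported by PROC REG above
intercept = 684.539
b_yr_rnd = -160.5064

def predict_api00(yr_rnd):
    """Predicted api00 from the dummy-variable regression."""
    return intercept + b_yr_rnd * yr_rnd

print(predict_api00(0))  # 684.539  (non year-round schools)
print(predict_api00(1))  # 524.0326 (year-round schools)
```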
We can graph the observed values and the predicted values using the plot statement as shown below. Although yr_rnd only has two values, we can still draw a regression line showing the relationship
between yr_rnd and api00. Based on the results above, we see that the predicted value for non-year round schools is 684.539 and the predicted value for the year round schools is 524.032, and the
slope of the line is negative, which makes sense since the coefficient for yr_rnd was negative (-160.5064).
proc reg data="c:\sasreg\elemapi2";
model api00 = yr_rnd;
plot api00*yr_rnd;
run;
Let’s compare these predicted values to the mean api00 scores for the year-round and non-year-round schools. Let’s create a format for the variables yr_rnd and mealcat so we can label these categorical variables. Notice that we use the format statement in proc means below to show value labels for variable yr_rnd.
options label;
proc format library = library;
value yr_rnd /* year round school */
0='No'
1='Yes';
value mealcat /* Percentage free meals in 3 categories */
1='0-46% free meals'
2='47-80% free meals'
3='81-100% free meals';
run;
proc means data="c:\sasreg\elemapi2" N mean std;
class yr_rnd;
format yr_rnd yr_rnd.;
var api00;
run;
The MEANS Procedure
Analysis Variable : api00 api 2000
year round school   N Obs      N       Mean        Std Dev
No 308 308 684.5389610 132.1125339
Yes 92 92 524.0326087 98.9160429
As you see, the regression equation predicts that the api00 value for a school will be the mean api00 of the group (year-round or not) to which the school belongs.
Let’s relate these predicted values back to the regression equation. For the non-year-round schools, their mean is the same as the intercept (684.539). The coefficient for yr_rnd is the amount we need to add to the intercept to get the mean for the year-round schools, i.e., we add -160.5064 to 684.539 to get 524.0326, the mean for the year-round schools. In other words, Byr_rnd is the mean api00 score for the year-round schools minus the mean api00 score for the non year-round schools, i.e., mean(year-round) - mean(non year-round).
It may be surprising to note that this regression analysis with a single dummy variable is the same as doing a t-test comparing the mean api00 for the year-round schools with the non year-round schools (see below). You can see that the t value below is the same as the t value for yr_rnd in the regression above. This is because Byr_rnd compares the year-round and non year-round schools (since the coefficient is mean(year-round) - mean(non year-round)).
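Using the group means from the proc means output above, a one-line check (in Python, outside the SAS session) confirms that the slope is exactly the difference of the group means:

```python
# Group means of api00 from the PROC MEANS output above
mean_non_year_round = 684.5389610
mean_year_round = 524.0326087

# Slope of the dummy-variable regression = mean(year-round) - mean(non year-round)
slope = mean_year_round - mean_non_year_round
print(round(slope, 5))  # -160.50635, matching the yr_rnd coefficient
```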
proc ttest data="c:\sasreg\elemapi2" ci=none;
class yr_rnd;
var api00;
run;
Lower CL Upper CL
Variable yr_rnd N Mean Mean Mean Std Dev Std Err
api00 0 308 669.73 684.54 699.35 132.11 7.5278
api00 1 92 503.55 524.03 544.52 98.916 10.313
api00 Diff (1-2) 131.24 160.51 189.77 125.3 14.887
Variable Method Variances DF t Value Pr > |t|
api00 Pooled Equal 398 10.78 <.0001
api00 Satterthwaite Unequal 197 12.57 <.0001
Equality of Variances
Variable Method Num DF Den DF F Value Pr > F
api00 Folded F 307 91 1.78 0.0013
Since a t-test is the same as doing an anova, we can get the same results using the proc glm for anova as well.
proc glm data="c:sasregelemapi2";
class yr_rnd;
model api00=yr_rnd ;
run;
quit;
The GLM Procedure
Dependent Variable: api00 api 2000
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 1 1825000.563 1825000.563 116.24 <.0001
Error 398 6248671.435 15700.179
Corrected Total 399 8073671.998
R-Square Coeff Var Root MSE api00 Mean
0.226043 19.34775 125.3004 647.6225
Source DF Type III SS Mean Square F Value Pr > F
yr_rnd 1 1825000.563 1825000.563 116.24 <.0001
If we square the t-value from the t-test, we get the same value as the F-value from the proc glm: 10.78^2=116.21 (with a little rounding error.)
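This relationship (t² = F when the numerator has one degree of freedom) is easy to confirm; here is a quick Python check, done outside of SAS, using the rounded values printed above:

```python
# With 1 numerator df, the F statistic is the square of the t statistic.
t = 10.78    # pooled t value from proc ttest
f = 116.24   # F value from proc glm

print(round(t**2, 2))        # 116.21 -- equals F up to rounding of t
assert abs(t**2 - f) < 0.1
```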
3.2 Regression with a 1/2 variable
A categorical predictor variable does not have to be coded 0/1 to be used in a regression model. It is easier to understand and interpret the results from a model with dummy variables, but the
results from a variable coded 1/2 yield essentially the same results.
Lets make a copy of the variable yr_rnd called yr_rnd2 that is coded 1/2, 1=non year-round and 2=year-round.
data elem_dummy;
set "c:sasregelemapi2";
yr_rnd2 = yr_rnd + 1;
run;
Let’s perform a regression predicting api00 from yr_rnd2.
proc reg data=elem_dummy;
model api00 = yr_rnd2;
run;
quit;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 1 1825001 1825001 116.24 <.0001
Error 398 6248671 15700
Corrected Total 399 8073672
Root MSE 125.30036 R-Square 0.2260
Dependent Mean 647.62250 Adj R-Sq 0.2241
Coeff Var 19.34775
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 845.04531 19.35336 43.66 <.0001
yr_rnd2 1 -160.50635 14.88720 -10.78 <.0001
Note that the coefficient for yr_rnd2 is the same as the coefficient for yr_rnd. So, whether you code yr_rnd as 0/1 or as 1/2, the regression coefficient works out to be the same. However, the
intercept (Intercept) is a bit less intuitive. When we used yr_rnd, the intercept was the mean for the non year-round schools. When using yr_rnd2, the intercept is the mean for the non year-round
schools minus Byr_rnd2, i.e., 684.539 – (-160.506) = 845.045.
Note that with either 0/1 or 1/2 coding the coefficient comes out the same, but the interpretation of the intercept in the regression equation is different. It is often easier to
interpret the estimates for 0/1 coding.
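The equivalence of the two codings is just an intercept shift; a small Python sketch (outside of SAS, using the estimates above) makes the bookkeeping explicit:

```python
# With 0/1 coding the intercept is the non year-round mean; recoding to
# 1/2 shifts every x by 1, so the intercept shifts by -B while the
# slope B and the predictions are unchanged.
b = -160.50635
int01 = 684.53896          # intercept under 0/1 coding
int12 = int01 - b          # intercept under 1/2 coding
print(round(int12, 5))     # 845.04531, as in the proc reg output

for x01, x12 in [(0, 1), (1, 2)]:
    assert abs((int01 + b*x01) - (int12 + b*x12)) < 1e-6
```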
In summary, these results indicate that the api00 scores are significantly different for the schools depending on the type of school, year round school versus non-year round school. Non year-round
schools have significantly higher API scores than year-round schools. Based on the regression results, non year-round schools have scores that are 160.5 points higher than year-round schools.
3.3 Regression with a 1/2/3 variable
3.3.1 Manually creating dummy variables
Say, that we would like to examine the relationship between the amount of poverty and api scores. We don’t have a measure of poverty, but we can use mealcat as a proxy for a measure of poverty. From
the previous section, we have seen that variable mealcat has three unique values. These are the levels of percent of students on free meals. We can associate a value label to variable mealcat to make
it more meaningful for us when we run SAS procedures with mealcat, for example, proc freq.
proc freq data="c:sasregelemapi2";
tables mealcat;
format mealcat mealcat.;
run;
Percentage free meals in 3 categories
Cumulative Cumulative
mealcat Frequency Percent Frequency Percent
0-46% free meals 131 32.75 131 32.75
47-80% free meals 132 33.00 263 65.75
81-100% free meals 137 34.25 400 100.00
You might be tempted to try including mealcat in a regression like this.
proc reg data="c:sasregelemapi2";
model api00 = mealcat;
run;
quit;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 1 6072528 6072528 1207.74 <.0001
Error 398 2001144 5028.00120
Corrected Total 399 8073672
Root MSE 70.90840 R-Square 0.7521
Dependent Mean 647.62250 Adj R-Sq 0.7515
Coeff Var 10.94903
Parameter Estimates

                                              Parameter    Standard
Variable   Label                        DF     Estimate       Error    t Value    Pr > |t|
Intercept  Intercept                     1    950.98740     9.42180     100.93     <.0001
mealcat    Percentage free meals in 3    1   -150.55330     4.33215     -34.75     <.0001
This is looking at the linear effect of mealcat on api00, but mealcat is not an interval variable. Instead, you will want to code the variable so that all of the information concerning the three
levels is accounted for. In general, we need to go through a data step to create dummy variables. For example, in order to create dummy variables for mealcat, we can use the following data step.
data temp_elemapi;
set "c:sasregelemapi2";
if mealcat~=. then mealcat1=0;
if mealcat~=. then mealcat2=0;
if mealcat~=. then mealcat3=0;
if mealcat = 1 then mealcat1=1;
if mealcat = 2 then mealcat2=1;
if mealcat = 3 then mealcat3=1;
run;
Let’s run proc freq to check that our dummy coding is done correctly.
proc freq data=temp_elemapi;
tables mealcat*mealcat1*mealcat2*mealcat3 /list;
run;
                                                           Cumulative    Cumulative
mealcat   mealcat1   mealcat2   mealcat3   Frequency   Percent   Frequency    Percent
   1          1          0          0         131       32.75       131        32.75
   2          0          1          0         132       33.00       263        65.75
   3          0          0          1         137       34.25       400       100.00
We now have created mealcat1 that is 1 if mealcat is 1, and 0 otherwise. Likewise, mealcat2 is 1 if mealcat is 2, and 0 otherwise and likewise mealcat3 was created. We can now use two of these dummy
variables (mealcat2 and mealcat3) in the regression analysis.
proc reg data=temp_elemapi;
model api00 = mealcat2 mealcat3;
run;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 2 6094198 3047099 611.12 <.0001
Error 397 1979474 4986.08143
Corrected Total 399 8073672
Root MSE 70.61219 R-Square 0.7548
Dependent Mean 647.62250 Adj R-Sq 0.7536
Coeff Var 10.90329
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 805.71756 6.16942 130.60 <.0001
mealcat2 1 -166.32362 8.70833 -19.10 <.0001
mealcat3 1 -301.33800 8.62881 -34.92 <.0001
We can test the overall differences among the three groups by using the test statement following proc reg. Notice that proc reg is an interactive procedure, so we have to issue the quit statement to
finish it. The test result shows that the overall differences among the three groups are significant.
test mealcat2=mealcat3=0;
run;
quit;
Test 1 Results for Dependent Variable api00
Source DF Mean Square F Value Pr > F
Numerator 2 3047099 611.12 <.0001
Denominator 397 4986.08143
The interpretation of the coefficients is much like that for the binary variables. Group 1 is the omitted group, so Intercept is the mean for group 1. The coefficient for mealcat2 is the mean for
group 2 minus the mean of the omitted group (group 1). And the coefficient for mealcat3 is the mean of group 3 minus the mean of group 1. You can verify this by comparing the coefficients with the
means of the groups.
proc means data=temp_elemapi mean std;
class mealcat;
var api00;
run;
Analysis Variable : api00 api 2000

Percentage
free meals
in 3
categories    N Obs           Mean        Std Dev
1               131    805.7175573     65.6686642
2               132    639.3939394     82.1351295
3               137    504.3795620     62.7270149
Based on these results, we can say that the three groups differ in their api00 scores, and that in particular group 2 is significantly different from group1 (because mealcat2 was significant) and
group 3 is significantly different from group 1 (because mealcat3 was significant).
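The claim that the coefficients reproduce the group means can be verified directly; here is a quick Python check (outside of SAS) using the estimates and the proc means output above:

```python
# Intercept is the mean of the omitted group (group 1); adding each
# dummy coefficient recovers the corresponding group mean.
means = {1: 805.7175573, 2: 639.3939394, 3: 504.3795620}
intercept  = 805.71756
b_mealcat2 = -166.32362
b_mealcat3 = -301.33800

assert abs(intercept - means[1]) < 1e-4
assert abs(intercept + b_mealcat2 - means[2]) < 1e-4
assert abs(intercept + b_mealcat3 - means[3]) < 1e-4
```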
3.3.2 More on dummy coding
In the last section, we showed how to create dummy variables for mealcat by manually creating three dummy variables mealcat1, mealcat2 and mealcat3, since mealcat only has three levels. Clearly, the
way we created these variables is not very efficient for a categorical variable with many levels. Let’s make use of the array structure to make our coding more efficient.
data array_elemapi;
set "c:sasregelemapi2";
array mealdum(3) mealdum1-mealdum3;
do i = 1 to 3;
mealdum(i) = (mealcat = i);
end;
drop i;
run;
We declare an array mealdum of size 3, with elements named mealdum1 to mealdum3, since mealcat has three levels. Then we use a do loop to repeat the same assignment three times. (mealcat=i) is
a logical expression that evaluates to either true (1) or false (0). We can run proc freq to check that our coding is done correctly, as we did in the last section.
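The same trick carries over to other languages; as an illustration only (this is Python, not part of the SAS program), a comparison expression plays the same role as (mealcat=i):

```python
# Build three dummy variables from a categorical variable: the
# comparison (m == i) evaluates to True/False, which int() maps to 1/0,
# just as SAS evaluates (mealcat=i) to 1/0.
mealcat = [1, 2, 3, 2, 1, 3]          # hypothetical values
mealdum = {i: [int(m == i) for m in mealcat] for i in (1, 2, 3)}
print(mealdum[2])                      # [0, 1, 0, 1, 0, 0]
```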
proc freq data=array_elemapi;
tables mealcat*mealdum1*mealdum2*mealdum3 /list;
run;
                                                           Cumulative    Cumulative
mealcat   mealdum1   mealdum2   mealdum3   Frequency   Percent   Frequency    Percent
   1          1          0          0         131       32.75       131        32.75
   2          0          1          0         132       33.00       263        65.75
   3          0          0          1         137       34.25       400       100.00
3.3.3 Using the proc glm
We can also do this analysis via ANOVA. The benefit of doing anova for our analysis is that it gives us the test of the overall effect of mealcat without our needing to follow up with a test
statement as we did with proc reg. In SAS we can use proc glm for anova. proc glm will generate dummy variables for a categorical variable on the fly, so we don’t have to code our categorical
variable mealcat manually through a data step as we did in the last section.
In our program below, we use the class statement to specify that the variable mealcat is a categorical variable. We use the option order=freq for proc glm to order the levels of our class variable
by descending frequency count, so that levels with the most observations come first in the order. Thus dummy variables for mealcat = 2 and mealcat = 3 will be used in the model, as they have higher
frequency counts. The solution option on the model statement gives us the parameter estimates, and the ss3 option specifies that Type III sums of squares are used for the hypothesis tests. We can see
that the anova test of the effect of mealcat is the same as the test statement following proc reg.
proc glm data="c:sasregelemapi2" order=freq ;
class mealcat;
model api00=mealcat /solution ss3;
run;
quit;
Dependent Variable: api00 api 2000
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 2 6094197.670 3047098.835 611.12 <.0001
Error 397 1979474.328 4986.081
Corrected Total 399 8073671.998
R-Square Coeff Var Root MSE api00 Mean
0.754824 10.90329 70.61219 647.6225
Source DF Type III SS Mean Square F Value Pr > F
mealcat 2 6094197.670 3047098.835 611.12 <.0001
Parameter Estimate Standard Error t Value Pr > |t|
Intercept 805.7175573 B 6.16941572 130.60 <.0001
mealcat 3 -301.3379952 B 8.62881482 -34.92 <.0001
mealcat 2 -166.3236179 B 8.70833132 -19.10 <.0001
mealcat 1 0.0000000 B . . .
NOTE: The X'X matrix has been found to be singular, and a generalized inverse
was used to solve the normal equations. Terms whose estimates are
followed by the letter 'B' are not uniquely estimable.
3.3.4 Other coding schemes
It is generally very convenient to use dummy coding but it is not the only kind of coding that can be used. As you have seen, when you use dummy coding one of the groups becomes the reference group
and all of the other groups are compared to that group. This may not be the most interesting set of comparisons.
Say you want to compare group 1 with 2, and group 2 with group 3. You need to generate a coding scheme that forms these 2 comparisons. In SAS, we can first generate the corresponding coding scheme in
a data step shown below and use them in the proc reg step.
We create two dummy variables, one for group 1 and the other for group 3.
data effect_elemapi;
set "c:sasregelemapi2";
if mealcat=1 then do; mealcat1= 2/3; mealcat3= 1/3; end;
if mealcat=2 then do; mealcat1=-1/3; mealcat3= 1/3; end;
if mealcat=3 then do; mealcat1=-1/3; mealcat3=-2/3; end;
run;
Let’s check our coding with proc freq.
proc freq data=effect_elemapi;
tables mealcat*mealcat1*mealcat3 / nocum nopercent list;
run;
mealcat mealcat1 mealcat3 Frequency
1 0.6666666667 0.3333333333 131
2 -0.333333333 0.3333333333 132
3 -0.333333333 -0.666666667 137
We can now do the regression analysis again using our new coding scheme.
proc reg data=effect_elemapi ;
model api00=mealcat1 mealcat3;
run;
quit;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 2 6094198 3047099 611.12 <.0001
Error 397 1979474 4986.08143
Corrected Total 399 8073672
Root MSE 70.61219 R-Square 0.7548
Dependent Mean 647.62250 Adj R-Sq 0.7536
Coeff Var 10.90329
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 649.83035 3.53129 184.02 <.0001
mealcat1 1 166.32362 8.70833 19.10 <.0001
mealcat3 1 135.01438 8.61209 15.68 <.0001
If you compare the parameter estimates with the group means of mealcat, you can verify that Bmealcat1 is the mean of group 1 minus the mean of group 2, and Bmealcat3 is the
mean of group 2 minus the mean of group 3. Both of these comparisons are significant, indicating that group 1 significantly differs from group 2, and group 2 significantly differs from group 3.
proc means data=effect_elemapi mean std;
class mealcat;
var api00;
run;
Analysis Variable : api00 api 2000

Percentage
free meals
in 3
categories    N Obs           Mean        Std Dev
1               131    805.7175573     65.6686642
2               132    639.3939394     82.1351295
3               137    504.3795620     62.7270149
And the value of the intercept term Intercept is the unweighted average of the means of the three groups, (805.71756 +639.39394 +504.37956)/3 = 649.83035.
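Both identities can be checked with quick arithmetic; the following Python sketch (outside of SAS) uses the group means and estimates above:

```python
m1, m2, m3 = 805.7175573, 639.3939394, 504.3795620

# Intercept: unweighted average of the three group means
assert abs((m1 + m2 + m3)/3 - 649.83035) < 1e-4
# Bmealcat1: mean(group 1) - mean(group 2)
assert abs((m1 - m2) - 166.32362) < 1e-4
# Bmealcat3: mean(group 2) - mean(group 3)
assert abs((m2 - m3) - 135.01438) < 1e-4
```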
3.4 Regression with two categorical predictors
3.4.1 Manually creating dummy variables
Previously we looked at using yr_rnd to predict api00, and we have also looked at using mealcat to predict api00. The parameter estimates for each model are shown below.
proc reg data=array_elemapi ;
model api00= yr_rnd;
run;
quit;
proc reg data=array_elemapi ;
model api00= mealcat1 mealcat2;
run;
quit;
Parameter Estimates
(for model with yr_rnd)
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 684.53896 7.13965 95.88 <.0001
yr_rnd year round school 1 -160.50635 14.88720 -10.78 <.0001
Parameter Estimates
(for model with mealcat1 and mealcat2)
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 504.37956 6.03281 83.61 <.0001
mealcat1 1 301.33800 8.62881 34.92 <.0001
mealcat2 1 135.01438 8.61209 15.68 <.0001
In the first model, with yr_rnd as the only predictor, the intercept is the mean api score for the non-year-round schools, and the coefficient for yr_rnd is the difference between the year round
and non-year round groups. In the second model, the coefficient for mealcat1 is the difference between mealcat=1 and mealcat=3, and the coefficient for mealcat2 is the difference between mealcat=2 and
mealcat=3. The intercept is the mean for the mealcat=3 group.
Of course, we can include both yr_rnd and mealcat together in the same model. Now the question is how to interpret the coefficients.
data array_elemapi;
set "c:sasregelemapi2";
array mealdum(3) mealcat1-mealcat3;
do i = 1 to 3;
mealdum(i) = (mealcat = i);
end;
drop i;
run;
proc reg data=array_elemapi ;
model api00= yr_rnd mealcat1 mealcat2;
run;
quit;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 3 6194144 2064715 435.02 <.0001
Error 396 1879528 4746.28206
Corrected Total 399 8073672
Root MSE 68.89327 R-Square 0.7672
Dependent Mean 647.62250 Adj R-Sq 0.7654
Coeff Var 10.63787
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 526.32996 7.58453 69.40 <.0001
yr_rnd year round school 1 -42.96006 9.36176 -4.59 <.0001
mealcat1 1 281.68318 9.44568 29.82 <.0001
mealcat2 1 117.94581 9.18891 12.84 <.0001
We can test the overall effect of mealcat with the test statement, and it is significant.
proc reg data=array_elemapi ;
model api00= yr_rnd mealcat1 mealcat2;
test mealcat1=mealcat2=0;
run;
quit;
Test 1 Results for Dependent Variable api00
Source DF Mean Square F Value Pr > F
Numerator 2 2184572 460.27 <.0001
Denominator 396 4746.28206
Let’s dig below the surface and see how the coefficients relate to the predicted values. Let’s view the cells formed by crossing yr_rnd and mealcat and number the cells from cell1 to cell6.
mealcat=1 mealcat=2 mealcat=3
yr_rnd=0 cell1 cell2 cell3
yr_rnd=1 cell4 cell5 cell6
With respect to mealcat, the group mealcat=3 is the reference category, and with respect to yr_rnd the group yr_rnd=0 is the reference category. As a result, cell3 is the reference cell. The
intercept is the predicted value for this cell.
The coefficient for yr_rnd is the difference between cell3 and cell6. Since this model has only main effects, it is also the difference between cell2 and cell5, or between cell1 and cell4. In other
words, Byr_rnd is the amount you add to the predicted value when you go from non-year round to year round schools.
The coefficient for mealcat1 is the predicted difference between cell1 and cell3. Since this model only has main effects, it is also the predicted difference between cell4 and cell6. Likewise,
Bmealcat2 is the predicted difference between cell2 and cell3, and also the predicted difference between cell5 and cell6.
So, the predicted values, in terms of the coefficients, would be
mealcat=1 mealcat=2 mealcat=3
yr_rnd=0 Intercept Intercept Intercept
+Bmealcat1 +Bmealcat2
yr_rnd=1 Intercept Intercept Intercept
+Byr_rnd +Byr_rnd +Byr_rnd
+Bmealcat1 +Bmealcat2
We should note that if you computed the predicted values for each cell, they would not exactly match the means in the six cells. The predicted means would be close to the observed means in the cells,
but not exactly the same. This is because our model only has main effects and assumes that the difference between cell1 and cell4 is exactly the same as the difference between cells 2 and 5, which in
turn is the same as the difference between cells 3 and 6. Since the observed values don’t follow this pattern, there is some discrepancy between the predicted means and observed means.
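The additivity assumption is easy to see numerically; here is a Python sketch (outside of SAS) that builds the six predicted cell means from the coefficients above:

```python
# Predicted cell means from the main-effects model: intercept,
# plus Byr_rnd for year-round rows, plus the mealcat dummy effect.
b0, b_yr, b_m1, b_m2 = 526.32996, -42.96006, 281.68318, 117.94581
m_eff = {1: b_m1, 2: b_m2, 3: 0.0}
cells = {(yr, m): b0 + b_yr*yr + m_eff[m]
         for yr in (0, 1) for m in (1, 2, 3)}

# With only main effects, the yr_rnd shift is identical in every column.
for m in (1, 2, 3):
    assert abs((cells[(1, m)] - cells[(0, m)]) - b_yr) < 1e-9
print(round(cells[(0, 3)], 5))   # 526.32996 -- the reference cell (cell3)
```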
3.4.2 Using the proc glm
We can run the same analysis using the proc glm without manually coding the dummy variables.
proc glm data="c:sasregelemapi2";
class mealcat;
model api00=yr_rnd mealcat /ss3;
run;
quit;
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 3 6194144.303 2064714.768 435.02 <.0001
Error 396 1879527.694 4746.282
Corrected Total 399 8073671.998
R-Square Coeff Var Root MSE api00 Mean
0.767203 10.63787 68.89327 647.6225
Source DF Type III SS Mean Square F Value Pr > F
yr_rnd 1 99946.633 99946.633 21.06 <.0001
mealcat 2 4369143.740 2184571.870 460.27 <.0001
Note that we get the same information as from manually coding the dummy variables and using proc reg followed by the test statement, as shown in the previous section. The proc glm anova
automatically provides the information given by the test statement. If we like, we can also request the parameter estimates by adding the option solution to the model statement.
proc glm data="c:sasregelemapi2";
class mealcat;
model api00=yr_rnd mealcat /solution ss3;
run;
quit;
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 3 6194144.303 2064714.768 435.02 <.0001
Error 396 1879527.694 4746.282
Corrected Total 399 8073671.998
R-Square Coeff Var Root MSE api00 Mean
0.767203 10.63787 68.89327 647.6225
Source DF Type III SS Mean Square F Value Pr > F
yr_rnd 1 99946.633 99946.633 21.06 <.0001
mealcat 2 4369143.740 2184571.870 460.27 <.0001
Parameter Estimate Standard Error t Value Pr > |t|
Intercept 526.3299568 B 7.58453252 69.40 <.0001
yr_rnd -42.9600584 9.36176101 -4.59 <.0001
mealcat 1 281.6831760 B 9.44567619 29.82 <.0001
mealcat 2 117.9458074 B 9.18891138 12.84 <.0001
mealcat 3 0.0000000 B . . .
NOTE: The X'X matrix has been found to be singular, and a generalized inverse
was used to solve the normal equations. Terms whose estimates are
followed by the letter 'B' are not uniquely estimable.
Recall that we used the option order=freq before in proc glm to force proc glm to order the levels of a class variable by descending frequency count. This time we simply used the default order of
proc glm. The default order for an unformatted numeric variable is simply the sorted order of its values, so in our case the natural order is 1, 2 and 3. proc glm will then drop the highest level.
In summary, these results indicate the differences between year round and non-year round schools is significant, and the differences among the three mealcat groups are significant.
3.5 Categorical predictor with interactions
3.5.1 Manually creating dummy variables
Let’s perform the same analysis that we performed above, this time including the interaction of mealcat by yr_rnd. In this section we show how to do it by manually creating all the dummy
variables, using the array structure again. This time we have to declare two sets of arrays, one for the dummy variables of mealcat and one for the interaction of yr_rnd and mealcat.
data mealxynd_elemapi;
set "c:sasregelemapi2";
array mealdum(3) mealcat1-mealcat3;
array mealxynd(3) mealxynd1-mealxynd3;
do i = 1 to 3;
mealdum(i) = (mealcat = i);
mealxynd(i) = mealdum(i)*yr_rnd;
end;
drop i;
run;
We can check to see if our dummy variables have been created correctly. Notice that the options nopercent and nocum suppress the output of percents and cumulative percents. The option list displays
two-way to n-way tables in a list format rather than as crosstabulation tables. It seems that our coding has been done correctly.
proc freq data=mealxynd_elemapi;
tables yr_rnd*mealcat*mealxynd1*mealxynd2*mealxynd3
/nopercent nocum list;
run;
yr_rnd mealcat mealxynd1 mealxynd2 mealxynd3 Frequency
Now let’s add these dummy variables for the interaction between yr_rnd and mealcat to our model. We can also add a test statement to test the overall interaction. The output shows that the interaction
effect is not significant.
proc reg data=mealxynd_elemapi;
model api00=yr_rnd mealcat1 mealcat2 mealxynd1 mealxynd2;
test mealxynd1=mealxynd2=0;
run;
quit;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 5 6204728 1240946 261.61 <.0001
Error 394 1868944 4743.51314
Corrected Total 399 8073672
Root MSE 68.87317 R-Square 0.7685
Dependent Mean 647.62250 Adj R-Sq 0.7656
Coeff Var 10.63477
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 521.49254 8.41420 61.98 <.0001
yr_rnd year round school 1 -33.49254 11.77129 -2.85 0.0047
mealcat1 1 288.19295 10.44284 27.60 <.0001
mealcat2 1 123.78097 10.55185 11.73 <.0001
mealxynd1 1 -40.76438 29.23118 -1.39 0.1639
mealxynd2 1 -18.24763 22.25624 -0.82 0.4128
The REG Procedure
Model: MODEL1
Test 1 Results for Dependent Variable api00
Source DF Mean Square F Value Pr > F
Numerator 2 5291.75936 1.12 0.3288
Denominator 394 4743.51314
It is important to note how the meaning of the coefficients changes in the presence of these interaction terms. For example, in the prior model, with only main effects, we could interpret Byr_rnd as
the difference between the year round and non year round schools. However, now that we have added the interaction term, the term Byr_rnd represents the difference between cell3 and cell6, or the
difference between the year round and non-year round schools when mealcat=3 (because mealcat=3 was the omitted group). The presence of an interaction would imply that the difference between year
round and non-year round schools depends on the level of mealcat. The interaction terms Bmealxynd1 and Bmealxynd2 represent the extent to which the difference between the year round/non year round
schools changes when mealcat=1 and when mealcat=2 (as compared to the reference group, mealcat=3). For example the term Bmealxynd1 represents the difference between year round and non-year round for
mealcat=1 versus the difference for mealcat=3. In other words, Bmealxynd1 in this design is (cell1-cell4) – (cell3-cell6), or how much the effect of yr_rnd differs between mealcat=1 and mealcat=3.
Below we have shown the predicted values for the six cells in terms of the coefficients in the model. If you compare this to the main effects model, you will see that the predicted values are the
same except for the addition of mealxynd1 (in cell 4) and mealxynd2 (in cell 5).
mealcat=1 mealcat=2 mealcat=3
yr_rnd=0 Intercept Intercept Intercept
+Bmealcat1 +Bmealcat2
yr_rnd=1 Intercept Intercept Intercept
+Byr_rnd +Byr_rnd +Byr_rnd
+Bmealcat1 +Bmealcat2
+Bmealxynd1 +Bmealxynd2
It can be very tricky to interpret these interaction terms if you wish to form specific comparisons. For example, if you wanted to perform a test of the simple main effect of yr_rnd when mealcat=1,
i.e., comparing cell1 with cell4, you would want to compare Intercept + Bmealcat1 versus Intercept + Bmealcat1 + Byr_rnd + Bmealxynd1, and since Intercept and Bmealcat1 drop out, we would test
whether Byr_rnd + Bmealxynd1 = 0.
proc reg data=mealxynd_elemapi;
model api00=yr_rnd mealcat1 mealcat2 mealxynd1 mealxynd2;
test yr_rnd + mealxynd1=0;
run;
quit;
Test 1 Results for Dependent Variable api00
Source DF Mean Square F Value Pr > F
Numerator 1 36536 7.70 0.0058
Denominator 394 4743.51314
This test is significant, indicating that the effect of yr_rnd is significant for the mealcat = 1 group.
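The quantity being tested can be reconstructed from the coefficients; here is a quick Python sketch (outside of SAS) of the simple effects of yr_rnd at each level of mealcat:

```python
# Simple effect of yr_rnd at each mealcat level in the interaction
# model: Byr_rnd plus the matching interaction term (mealcat=3 is the
# reference, so its simple effect is Byr_rnd itself).
b_yr, b_x1, b_x2 = -33.49254, -40.76438, -18.24763
effects = {1: b_yr + b_x1, 2: b_yr + b_x2, 3: b_yr}

print(round(effects[1], 5))   # -74.25692, the quantity tested by
                              # test yr_rnd + mealxynd1=0;
```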
As we will see, such tests can be more easily done via anova using proc glm.
3.5.2 Using anova
Constructing these interactions can be easier when using the proc glm. We can also avoid manually coding our dummy variables. As you see below, the proc glm gives us the test of the overall main
effects and interactions without the need to perform subsequent test commands.
proc glm data="c:sasregelemapi2";
class mealcat;
model api00=yr_rnd mealcat yr_rnd*mealcat /ss3;
run;
quit;
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 5 6204727.822 1240945.564 261.61 <.0001
Error 394 1868944.176 4743.513
Corrected Total 399 8073671.998
R-Square Coeff Var Root MSE api00 Mean
0.768514 10.63477 68.87317 647.6225
Source DF Type III SS Mean Square F Value Pr > F
yr_rnd 1 99617.371 99617.371 21.00 <.0001
mealcat 2 3903569.804 1951784.902 411.46 <.0001
yr_rnd*mealcat 2 10583.519 5291.759 1.12 0.3288
We can also obtain the parameter estimates by using the model option solution, which we skip here as we have seen it before. It is easy to perform tests of simple main effects using the lsmeans
statement shown below.
proc glm data="c:sasregelemapi2";
class yr_rnd mealcat;
model api00=yr_rnd mealcat yr_rnd*mealcat /ss3;
lsmeans yr_rnd*mealcat / slice=mealcat;
run;
quit;
The GLM Procedure
Least Squares Means
yr_rnd*mealcat Effect Sliced by mealcat for api00
Sum of
mealcat DF Squares Mean Square F Value Pr > F
1 1 36536 36536 7.70 0.0058
2 1 35593 35593 7.50 0.0064
3 1 38402 38402 8.10 0.0047
The results from above show us the effect of yr_rnd at each of the three levels of mealcat. We can see that the comparison for mealcat = 1 matches the one we computed above using the test statement;
however, it was much easier and less error prone using the lsmeans statement.
Although this section has focused on how to handle analyses involving interactions, these particular results show no indication of interaction. We could decide to omit interaction terms from future
analyses, having found the interactions to be non-significant. This would simplify future analyses; however, including the interaction term can be useful to assure readers that the interaction term
is not significant.
3.6 Continuous and categorical variables
3.6.1 Using proc reg
Say that we wish to analyze both continuous and categorical variables in one analysis. For example, let’s include yr_rnd and some_col in the same analysis. We can also plot the predicted values
against some_col using proc gplot.
proc reg data="c:sasregelemapi2";
model api00 = yr_rnd some_col;
run;
quit;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 2 2072202 1036101 68.54 <.0001
Error 397 6001470 15117
Corrected Total 399 8073672
Root MSE 122.95143 R-Square 0.2567
Dependent Mean 647.62250 Adj R-Sq 0.2529
Coeff Var 18.98505
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 637.85807 13.50332 47.24 <.0001
yr_rnd year round school 1 -149.15906 14.87519 -10.03 <.0001
some_col parent some college 1 2.23569 0.55287 4.04 <.0001
proc reg data="c:sasregelemapi2";
model api00 = yr_rnd some_col;
output out=pred pred=p;
run;
quit;
symbol1 c=blue v=circle h=.8;
symbol2 c=red v=circle h=.8;
axis1 label=(r=0 a=90) minor=none;
axis2 minor=none;
proc gplot data=pred;
plot p*some_col=yr_rnd /vaxis=axis1 haxis=axis2;
run;
quit;
The coefficient for some_col indicates that for every unit increase in some_col the api00 score is predicted to increase by 2.24 units. This is the slope of the lines shown in the above graph. The
graph has two lines, one for the year round schools and one for the non-year round schools. The coefficient for yr_rnd is -149.16, indicating that as yr_rnd increases by 1 unit, the api00 score is
expected to decrease by about 149 units. As you can see in the graph, the top line is about 149 units higher than the lower line. You can see that the intercept is about 638, and that is where the
upper line crosses the Y axis when X is 0. The lower line crosses the Y axis about 149 units lower, at about 489.
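The two parallel lines can be reproduced directly from the coefficients; here is a small Python sketch (outside of SAS):

```python
# Predictions from the main-effects model: two lines with the same
# slope for some_col, separated by a constant 149.16 points.
b0, b_yr, b_col = 637.85807, -149.15906, 2.23569

def predict(yr_rnd, some_col):
    return b0 + b_yr*yr_rnd + b_col*some_col

for sc in (0, 20, 40):                 # any value of some_col
    assert abs(predict(0, sc) - predict(1, sc) - 149.15906) < 1e-6

print(round(predict(1, 0), 5))         # 488.69901 -- lower line's Y intercept
```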
3.6.2 Using proc glm
We can run this analysis using the proc glm for anova. The proc glm assumes that the independent variables are continuous. Thus, we need to use the class statement to specify which variables should
be considered as categorical variables.
proc glm data="c:sasregelemapi2";
class yr_rnd;
model api00=yr_rnd some_col /solution ss3;
run;
quit;
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 2 2072201.839 1036100.919 68.54 <.0001
Error 397 6001470.159 15117.053
Corrected Total 399 8073671.998
R-Square Coeff Var Root MSE api00 Mean
0.256662 18.98505 122.9514 647.6225
Source DF Type III SS Mean Square F Value Pr > F
yr_rnd 1 1519992.669 1519992.669 100.55 <.0001
some_col 1 247201.276 247201.276 16.35 <.0001
Parameter Estimate Standard Error t Value Pr > |t|
Intercept 488.6990076 B 15.51331180 31.50 <.0001
yr_rnd 0 149.1590647 B 14.87518847 10.03 <.0001
yr_rnd 1 0.0000000 B . . .
some_col 2.2356887 0.55286556 4.04 <.0001
NOTE: The X'X matrix has been found to be singular, and a generalized inverse
was used to solve the normal equations. Terms whose estimates are
followed by the letter 'B' are not uniquely estimable.
If we square the t-values from proc reg (above), we find that they match the F-values from proc glm. One thing you may notice is that the parameter estimates above do not look quite the same as
those from proc reg. This is due to how proc glm parameterizes a categorical (class) variable. We can get the same result if we code our class variable differently. This is shown below.
data temp;
set "c:sasregelemapi2";
yrn = 1 - yr_rnd; /* reverse the 0/1 coding of yr_rnd */
run;
proc glm data=temp;
class yrn;
model api00=yrn some_col /solution ss3;
run;
quit;
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 2 2072201.839 1036100.919 68.54 <.0001
Error 397 6001470.159 15117.053
Corrected Total 399 8073671.998
R-Square Coeff Var Root MSE api00 Mean
0.256662 18.98505 122.9514 647.6225
Source DF Type III SS Mean Square F Value Pr > F
yrn 1 1519992.669 1519992.669 100.55 <.0001
some_col 1 247201.276 247201.276 16.35 <.0001
Parameter Estimate Standard Error t Value Pr > |t|
Intercept 637.8580723 B 13.50332419 47.24 <.0001
yrn 0 -149.1590647 B 14.87518847 -10.03 <.0001
yrn 1 0.0000000 B . . .
some_col 2.2356887 0.55286556 4.04 <.0001
NOTE: The X'X matrix has been found to be singular, and a generalized inverse
was used to solve the normal equations. Terms whose estimates are
followed by the letter 'B' are not uniquely estimable.
3.7 Interactions of Continuous by 0/1 Categorical variables
Above we showed an analysis that looked at the relationship between some_col and api00 and also included yr_rnd. We saw that this produced a graph where we saw the relationship between some_col and
api00 but there were two regression lines, one higher than the other but with equal slope. Such a model assumed that the slope was the same for the two groups. Perhaps the slope might be different
for these groups. Let’s run the regressions separately for these two groups beginning with the non-year round schools.
proc reg data="c:sasregelemapi2";
model api00 = some_col;
where yr_rnd=0;
run;
quit;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 1 84701 84701 4.91 0.0274
Error 306 5273592 17234
Corrected Total 307 5358293
Root MSE 131.27818 R-Square 0.0158
Dependent Mean 684.53896 Adj R-Sq 0.0126
Coeff Var 19.17760
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 655.11030 15.23704 42.99 <.0001
some_col parent some college 1 1.40943 0.63576 2.22 0.0274
symbol1 i=none c=black v=circle h=0.5;
symbol2 i=join c=red v=dot h=0.5;
proc reg data="c:\sasreg\elemapi2";
model api00 = some_col;
where yr_rnd=0;
plot (api00 predicted.)*some_col /overlay;
Likewise, let’s look at the year round schools and we will use the same symbol statements as above.
symbol1 i=none c=black v=circle h=0.5;
symbol2 i=join c=red v=dot h=0.5;
proc reg data="c:\sasreg\elemapi2";
model api00 = some_col;
where yr_rnd=1;
plot (api00 predicted.)*some_col /overlay;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 1 373644 373644 65.08 <.0001
Error 90 516735 5741.49820
Corrected Total 91 890379
Root MSE 75.77267 R-Square 0.4196
Dependent Mean 524.03261 Adj R-Sq 0.4132
Coeff Var 14.45953
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 407.03907 16.51462 24.65 <.0001
some_col parent some college 1 7.40262 0.91763 8.07 <.0001
Note that the slope of the regression line looks much steeper for the year round schools than for the non-year round schools. This is confirmed by the regression equations, which show the slope for the
year round schools (7.4) to be higher than for the non-year round schools (1.4). We can test whether these slopes are significantly different from each other by including the interaction of some_col by
yr_rnd, an interaction of a continuous variable by a categorical variable.
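Before fitting it, it helps to write the interaction model out in symbols: api00 = b0 + b1*some_col + b2*yr_rnd + b3*(yr_rnd*some_col), so the slope of some_col is b1 for non-year round schools and b1 + b3 for year round schools. A tiny Python sketch with made-up integer coefficients, purely to illustrate the slope algebra (not the fitted SAS estimates):

```python
# Interaction model: api00_hat = b0 + b1*some_col + b2*yr_rnd + b3*(yr_rnd*some_col)
def predict(some_col, yr_rnd, b0, b1, b2, b3):
    return b0 + b1 * some_col + b2 * yr_rnd + b3 * yr_rnd * some_col

# Made-up coefficients, chosen only to show the algebra:
b = dict(b0=650, b1=1, b2=-250, b3=6)
slope_non_yr = predict(1, 0, **b) - predict(0, 0, **b)  # slope when yr_rnd = 0
slope_yr = predict(1, 1, **b) - predict(0, 1, **b)      # slope when yr_rnd = 1
assert slope_non_yr == b["b1"]                           # b1
assert slope_yr == b["b1"] + b["b3"]                     # b1 + b3
```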
3.7.1 Computing interactions manually
We will start by manually computing the interaction of some_col by yr_rnd. Let's start fresh and use the elemapi2 data file, which should be sitting in your "c:\sasreg" directory.
Next, let’s make a variable that is the interaction of some college (some_col) and year round schools (yr_rnd) called yrxsome.
data yrxsome_elemapi;
set "c:\sasreg\elemapi2";
yrxsome = yr_rnd*some_col;
We can now run the regression that tests whether the coefficient for some_col is significantly different for year round schools and non-year round schools. Indeed, the yrxsome interaction effect is significant.
proc reg data=yrxsome_elemapi;
model api00 = some_col yr_rnd yrxsome;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 3 2283345 761115 52.05 <.0001
Error 396 5790327 14622
Corrected Total 399 8073672
Root MSE 120.92161 R-Square 0.2828
Dependent Mean 647.62250 Adj R-Sq 0.2774
Coeff Var 18.67162
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 655.11030 14.03499 46.68 <.0001
some_col parent some college 1 1.40943 0.58560 2.41 0.0165
yr_rnd year round school 1 -248.07124 29.85895 -8.31 <.0001
yrxsome 1 5.99319 1.57715 3.80 0.0002
We can then save the predicted values to a data set and graph the predicted values for the two types of schools by some_col. You can see how the two lines have quite different slopes, consistent with
the fact that the yrxsome interaction was significant.
proc reg data=yrxsome_elemapi;
model api00 = some_col yr_rnd yrxsome;
output out=temp pred=p;
axis1 label=(r=0 a=90) minor=none;
axis2 minor = none;
proc gplot data=temp;
plot p*some_col=yr_rnd / haxis=axis2 vaxis=axis1;
We can also create a plot including the data points. There are two ways of doing this and we’ll show both ways and their graphs here. One is to use the plot statement in proc reg.
symbol1 c=black v=star h=0.8;
symbol2 c=red v=circle i=join h=0.8;
proc reg data=yrxsome_elemapi;
model api00 = some_col yr_rnd yrxsome;
plot (api00 predicted.)*some_col/overlay;
The other is to use proc gplot, where we have more control over the look of the graph. In order to use proc gplot, we have to create a data set including the predicted values. This is done using the
output statement in proc reg. In order to distinguish between year-round and non-year-round schools, we then run another data step that creates a separate predicted-value variable for each group.
proc reg data=yrxsome_elemapi;
model api00 = some_col yr_rnd yrxsome;
plot (api00 predicted.)*some_col/overlay;
data temp1;
set temp;
if yr_rnd=1 then p1=p;
if yr_rnd=0 then p0=p;
axis1 label=(r=0 a=90) minor=none;
axis2 minor = none;
symbol1 c=black v=star h=0.8;
symbol2 c=red v=circle i=join h=0.8;
symbol3 c=blue v=diamond i=join h=0.8;
proc gplot data=temp1;
plot (api00 p1 p0)*some_col / overlay haxis=axis2 vaxis=axis1;
We can further enhance it so the data points are marked with different symbols. The graph above used the same kind of symbols for the data points for both types of schools. Let’s make separate
variables for the api00 scores for the two types of schools called api0 for the non-year round schools and api1 for the year round schools.
data temp1;
set temp;
if yr_rnd=1 then do api1=api00; p1=p; end;
if yr_rnd=0 then do api0=api00; p0=p; end;
We can then make the same graph as above except show the points differently for the two types of schools. Below we use stars for the non-year round schools, and diamonds for the year round schools.
goptions reset=all;
axis1 label=(r=0 a=90) minor=none;
axis2 minor = none;
symbol1 c=black v=star h=0.8;
symbol2 c=red v=diamond h=0.8;
symbol3 c=black v=star i=join h=0.8;
symbol4 c=red v=diamond i=join h=0.8;
proc gplot data=temp1;
plot api0*some_col=1 api1*some_col=2 p0*some_col=3 p1*some_col= 4
/ overlay haxis=axis2 vaxis=axis1;
Let’s quickly run the regressions again where we performed separate regressions for the two groups. We can first sort the data set by yr_rnd and make use of the by statement in the proc reg to
perform separate regressions for the two groups. We also use the ODS (output delivery system) of SAS 8 to output the parameter estimate to a data set and print it out to compare the result.
proc sort data=yrxsome_elemapi;
by yr_rnd;
ods listing close; /*stop output to appear in the output window*/
ods output ParameterEstimates=reg_some_col
(keep = yr_rnd Variable estimate );
proc reg data=yrxsome_elemapi;
by yr_rnd;
model api00=some_col;
ods output close;
ods listing; /*put output back to the output window*/
proc print data=reg_some_col noobs;
yr_rnd Variable Estimate
0 Intercept 655.11030
0 some_col 1.40943
1 Intercept 407.03907
1 some_col 7.40262
Now, let’s show the regression for both types of schools with the interaction term.
proc reg data=yrxsome_elemapi;
model api00 = some_col yr_rnd yrxsome;
output out=temp pred=p;
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 655.11030 14.03499 46.68 <.0001
some_col parent some college 1 1.40943 0.58560 2.41 0.0165
yr_rnd year round school 1 -248.07124 29.85895 -8.31 <.0001
yrxsome 1 5.99319 1.57715 3.80 0.0002
Note that the coefficient for some_col in the combined analysis is the same as the coefficient for some_col for the non-year round schools. This is because non-year round schools are the reference
group. The coefficient for the yrxsome interaction in the combined analysis is Bsome_col for the year round schools (7.40) minus Bsome_col for the non-year round schools (1.41), yielding 5.99. This
interaction is the difference in the slopes of some_col for the two types of schools, which is why it is useful for testing whether the regression lines for the two types of schools have equal
slopes. If the two types of schools had the same regression coefficient for some_col, then the coefficient for the yrxsome interaction would be 0. In this case, the difference is significant,
indicating that the slopes of the regression lines are significantly different.
So, if we look at the graph of the two regression lines we can see the difference in the slopes of the regression lines (see graph below). Indeed, we can see that the non-year round schools (the
solid line) have a smaller slope (1.4) than the slope for the year round schools (7.4). The difference between these slopes is 5.99, which is the coefficient for yrxsome.
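We can verify this arithmetic directly; here is a quick Python check using the estimates as printed in the output above:

```python
# Slopes as printed in the outputs above:
slope_non_yr = 1.40943   # some_col slope in the yr_rnd = 0 regression
slope_yr = 7.40262       # some_col slope in the yr_rnd = 1 regression
b_yrxsome = 5.99319      # interaction coefficient in the combined model

# The interaction coefficient is exactly the difference in slopes:
assert abs((slope_yr - slope_non_yr) - b_yrxsome) < 1e-5
```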
3.7.2 Computing interactions with proc glm
We can also run a model just like the model we showed above using proc glm. We can include the terms yr_rnd, some_col, and the interaction yr_rnd*some_col, and thus avoid a data step.
proc glm data="c:\sasreg\elemapi2";
model api00 = yr_rnd some_col yr_rnd*some_col /ss3;
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 3 2283345.485 761115.162 52.05 <.0001
Error 396 5790326.513 14622.037
Corrected Total 399 8073671.998
R-Square Coeff Var Root MSE api00 Mean
0.282814 18.67162 120.9216 647.6225
Source DF Type III SS Mean Square F Value Pr > F
yr_rnd 1 1009279.986 1009279.986 69.02 <.0001
some_col 1 84700.858 84700.858 5.79 0.0165
yr_rnd*some_col 1 211143.646 211143.646 14.44 0.0002
Parameter Estimate Error t Value Pr > |t|
Intercept 655.1103031 14.03499037 46.68 <.0001
yr_rnd -248.0712373 29.85894895 -8.31 <.0001
some_col 1.4094272 0.58560219 2.41 0.0165
yr_rnd*some_col 5.9931903 1.57714998 3.80 0.0002
In this section we found that the relationship between some_col and api00 depended on whether the school was from year round schools or from non-year round schools. For the schools from year round
schools, the relationship between some_col and api00 was significantly stronger than for those from non-year round schools. In general, this type of analysis allows you to test whether the strength
of the relationship between two continuous variables varies based on the categorical variable.
3.8 Continuous and categorical variables, interaction with 1/2/3 variable
The prior examples showed how to do regressions with a continuous variable and a categorical variable that has two levels. These examples will extend this further by using a categorical variable with
three levels, mealcat.
3.8.1 Manually creating dummy variables
We can use a data step to create all the dummy variables needed for the interaction of mealcat and some_col just as we did before for mealcat. With the dummy variables, we can use proc reg for the
regression analysis. We’ll use mealcat1 as the reference group.
data mxcol_elemapi;
set "c:\sasreg\elemapi2";
array mealdum(3) mealcat1-mealcat3;
array mxcol(3) mxcol1-mxcol3;
do i = 1 to 3;
mealdum(i) = (mealcat = i); /* dummy for level i of mealcat */
mxcol(i) = mealdum(i)*some_col; /* interaction with some_col */
end;
drop i;
run;
proc reg data=mxcol_elemapi;
model api00 = some_col mealcat2 mealcat3 mxcol2 mxcol3;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 5 6212307 1242461 263.00 <.0001
Error 394 1861365 4724.27696
Corrected Total 399 8073672
Root MSE 68.73338 R-Square 0.7695
Dependent Mean 647.62250 Adj R-Sq 0.7665
Coeff Var 10.61319
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 825.89370 11.99182 68.87 <.0001
some_col parent some college 1 -0.94734 0.48737 -1.94 0.0526
mealcat2 1 -239.02998 18.66502 -12.81 <.0001
mealcat3 1 -344.94758 17.05743 -20.22 <.0001
mxcol2 1 3.14094 0.72929 4.31 <.0001
mxcol3 1 2.60731 0.89604 2.91 0.0038
The interaction now has two terms (mxcol2 and mxcol3). To get an overall test of this interaction, we can use the test statement.
proc reg data=mxcol_elemapi;
model api00 = some_col mealcat2 mealcat3 mxcol2 mxcol3;
test mxcol2=mxcol3=0;
Test 1 Results for Dependent Variable api00
Source DF Mean Square F Value Pr > F
Numerator 2 48734 10.32 <.0001
Denominator 394 4724.27696
These results indicate that the overall interaction is indeed significant. This means that the regression lines from the three groups differ significantly. As we have done before, let’s compute the
predicted values and make a graph of the predicted values so we can see how the regression lines differ.
proc reg data=mxcol_elemapi;
model api00 = some_col mealcat2 mealcat3 mxcol2 mxcol3;
output out=pred predicted=p;
goptions reset=all;
axis1 label=(r=0 a=90);
proc gplot data=pred;
plot p*some_col=mealcat /vaxis=axis1;
Since we had three groups, we get three regression lines, one for each category of mealcat. The solid line is for group 1, the dashed line for group 2, and the dotted line is for group 3.
Group 1 was the omitted group, therefore the slope of the line for group 1 is the coefficient for some_col, which is -.94. Indeed, this line has a downward slope. If we add the coefficient for
some_col to the coefficient for mxcol2 we get the slope for group 2, i.e., 3.14 + (-.94) yields 2.2, and indeed group 2 shows an upward slope. Likewise, if we add the coefficient for some_col to the
coefficient for mxcol3 we get the slope for group 3, i.e., 2.6 + (-.94) yields 1.66. So, the slopes for the 3 groups are
group 1: -0.94
group 2: 2.2
group 3: 1.66
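A quick Python check that these slopes follow from the parameter estimates printed above:

```python
# Parameter estimates as printed above (reference group = mealcat 1):
b_some_col = -0.94734
b_mxcol2 = 3.14094
b_mxcol3 = 2.60731

slopes = {1: b_some_col,
          2: b_some_col + b_mxcol2,
          3: b_some_col + b_mxcol3}
assert abs(slopes[2] - 2.2) < 0.01    # group 2 slope quoted in the text
assert abs(slopes[3] - 1.66) < 0.01   # group 3 slope quoted in the text
```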
The test of the coefficient in the parameter estimates for mxcol2 tested whether the coefficient for group 2 differed from group 1, and indeed this was significant. Likewise, the test of the
coefficient for mxcol3 tested whether the coefficient for group 3 differed from group 1, and indeed this was significant. What did the test of the coefficient some_col test? This coefficient
represents the coefficient for group 1, so this tested whether the coefficient for group 1 (-0.94) was significantly different from 0. This is probably a non-interesting test.
The comparisons in the above analyses don’t seem to be as interesting as comparing group 1 versus 2 and then comparing group 2 versus 3. These successive comparisons seem much more interesting. We
can do this by making group 2 the omitted group, and then each group would be compared to group 2.
proc reg data=mxcol_elemapi;
model api00 = some_col mealcat1 mealcat3 mxcol1 mxcol3;
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F
Model 5 6212307 1242461 263.00 <.0001
Error 394 1861365 4724.27696
Corrected Total 399 8073672
Root MSE 68.73338 R-Square 0.7695
Dependent Mean 647.62250 Adj R-Sq 0.7665
Coeff Var 10.61319
Parameter Estimates
Parameter Standard
Variable Label DF Estimate Error t Value Pr > |t|
Intercept Intercept 1 586.86372 14.30311 41.03 <.0001
some_col parent some college 1 2.19361 0.54253 4.04 <.0001
mealcat1 1 239.02998 18.66502 12.81 <.0001
mealcat3 1 -105.91760 18.75450 -5.65 <.0001
mxcol1 1 -3.14094 0.72929 -4.31 <.0001
mxcol3 1 -0.53364 0.92720 -0.58 0.5653
Now, the test of mxcol1 tests whether the coefficient for group 1 differs from group 2, and it does. Then, the test of mxcol3 tests whether the coefficient for group 3 significantly differs from
group 2, and it does not. This makes sense given the graph and given the estimates of the coefficients that we have, that -.94 is significantly different from 2.2 but 2.2 is not significantly
different from 1.66.
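Changing the omitted group is just a reparameterization of the same model, which we can confirm by relating the two sets of estimates. A Python check using the values as printed above (small tolerances because the printed estimates are rounded):

```python
# Estimates with mealcat 1 omitted (first model) and with mealcat 2
# omitted (second model), copied from the outputs above:
ref1 = dict(intercept=825.89370, some_col=-0.94734,
            mealcat2=-239.02998, mxcol2=3.14094, mxcol3=2.60731)
ref2 = dict(intercept=586.86372, some_col=2.19361, mxcol3=-0.53364)

# Shifting the reference from group 1 to group 2 shifts the coefficients:
assert abs(ref1["intercept"] + ref1["mealcat2"] - ref2["intercept"]) < 1e-4
assert abs(ref1["some_col"] + ref1["mxcol2"] - ref2["some_col"]) < 1e-4
assert abs(ref1["mxcol3"] - ref1["mxcol2"] - ref2["mxcol3"]) < 1e-4
```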
3.8.2 Using proc glm
We can perform the same analysis using the proc glm command, as shown below. Proc glm allows us to avoid dummy coding both for the categorical variable mealcat and for the interaction term of
mealcat and some_col. The tricky part is controlling the reference group.
proc glm data="c:\sasreg\elemapi2";
class mealcat;
model api00=some_col mealcat some_col*mealcat /solution ss3;
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 5 6212306.876 1242461.375 263.00 <.0001
Error 394 1861365.121 4724.277
Corrected Total 399 8073671.998
R-Square Coeff Var Root MSE api00 Mean
0.769452 10.61319 68.73338 647.6225
Source DF Type III SS Mean Square F Value Pr > F
some_col 1 36366.366 36366.366 7.70 0.0058
mealcat 2 2012065.492 1006032.746 212.95 <.0001
some_col*mealcat 2 97468.169 48734.084 10.32 <.0001
Parameter Estimate Error t Value Pr > |t|
Intercept 480.9461176 B 12.13062708 39.65 <.0001
some_col 1.6599700 B 0.75190859 2.21 0.0278
mealcat 1 344.9475807 B 17.05743173 20.22 <.0001
mealcat 2 105.9176024 B 18.75449819 5.65 <.0001
mealcat 3 0.0000000 B . . .
some_col*mealcat 1 -2.6073085 B 0.89604354 -2.91 0.0038
some_col*mealcat 2 0.5336362 B 0.92720142 0.58 0.5653
some_col*mealcat 3 0.0000000 B . . .
NOTE: The X'X matrix has been found to be singular, and a generalized inverse
was used to solve the normal equations. Terms whose estimates are
followed by the letter 'B' are not uniquely estimable.
Because the default order for the levels of a class variable is their sorted values, proc glm omits the last category, group 3. The analysis we showed in the previous section, on the other hand,
omitted group 2, so the parameter estimates will not be the same. You can compare the two sets of results and see that the parameter estimates differ. Because group 3 is dropped, it is the reference
category, and all comparisons are made with group 3. Besides the default order, proc glm also allows ordering by frequency count, which in our case is the same as the default order since group 3 has
the highest count.
These analyses showed that the relationship between some_col and api00 varied, depending on the level of mealcat. In comparing group 1 with group 2, the coefficient for some_col was significantly
different, but there was no difference in the coefficient for some_col in comparing groups 2 and 3.
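The glm estimates (reference = group 3) can likewise be recovered from the earlier dummy-coded estimates (reference = group 1). A quick Python check using the values as printed above:

```python
# proc reg estimates (reference = group 1) vs proc glm estimates
# (reference = group 3), copied from the outputs above:
ref1 = dict(intercept=825.89370, some_col=-0.94734,
            mealcat3=-344.94758, mxcol3=2.60731)
glm = dict(intercept=480.9461176, some_col=1.6599700,
           mealcat1=344.9475807, some_col_mealcat1=-2.6073085)

assert abs(ref1["intercept"] + ref1["mealcat3"] - glm["intercept"]) < 1e-4
assert abs(ref1["some_col"] + ref1["mxcol3"] - glm["some_col"]) < 1e-4
assert abs(-ref1["mealcat3"] - glm["mealcat1"]) < 1e-4
assert abs(-ref1["mxcol3"] - glm["some_col_mealcat1"]) < 1e-4
```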
3.9 Summary
This chapter covered some techniques for analyzing data with categorical variables, especially, manually constructing indicator variables and using the proc glm. Each method has its advantages and
disadvantages, as described below.
Manually constructing indicator variables can be very tedious and even error prone. For very simple models, it is not very difficult to create your own indicator variables, but if you have
categorical variables with many levels and/or interactions of categorical variables, it can be laborious to manually create indicator variables. However, the advantage is that you can have quite a
bit of control over how the variables are created and the terms that are entered into the model.
The proc glm approach eliminates the need to create indicator variables making it easy to include variables that have lots of categories, and making it easy to create interactions by allowing you to
include terms like some_col*mealcat. It can be easier to perform tests of simple main effects with the proc glm. However, the proc glm is not very flexible in letting you choose which category is the
omitted category.
As you will see in the next chapter, proc reg includes additional options that allow you to perform analyses when you don't exactly meet the assumptions of ordinary least squares regression. In such
cases, proc reg offers features not available in proc glm and may be more advantageous to use.
3.10 For more information
• SAS/Stat Manual
• Web Links
□ Creating Dummy Variables
□ Models with interactions of continuous and categorical variables | {"url":"https://stats.oarc.ucla.edu/sas/webbooks/reg/chapter3/regression-with-saschapter-3-regression-with-categorical-predictors/","timestamp":"2024-11-03T03:05:39Z","content_type":"text/html","content_length":"121636","record_id":"<urn:uuid:40424221-9bf0-489f-b181-f6e214e76ba9>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00855.warc.gz"} |
Preparation for localized Chern classes
Lemma 42.49.1. Let $(S, \delta )$ be as in Situation 42.7.1. Let $X$ be locally of finite type over $S$. Let $Z \subset X$ be a closed subscheme. Let
\[ b : W \longrightarrow \mathbf{P}^1_ X \]
be a proper morphism of schemes. Let $Q \in D(\mathcal{O}_ W)$ be a perfect object. Denote $W_\infty \subset W$ the inverse image of the divisor $D_\infty \subset \mathbf{P}^1_ X$ with complement $\mathbf{A}^1_ X$. We assume
1. Chern classes of $Q$ are defined (Section 42.46),
2. $b$ is an isomorphism over $\mathbf{A}^1_ X$,
3. there exists a closed subscheme $T \subset W_\infty $ containing all points of $W_\infty $ lying over $X \setminus Z$ such that $Q|_ T$ is zero, resp. isomorphic to a finite locally free $\mathcal{O}_ T$-module of rank $< p$ sitting in cohomological degree $0$.
| {"url":"https://stacks.math.columbia.edu/tag/0FAT","timestamp":"2024-11-14T21:34:56Z","content_type":"text/html","content_length":"33762","record_id":"<urn:uuid:7a1089c5-3e0f-425b-9b98-2e99b6b9a426>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00092.warc.gz"}
An aerobic saprotrophic nitrogen-fixing free-living bacteria is... | Filo
Question asked by Filo student
An aerobic saprotrophic nitrogen-fixing free-living bacteria is a. Azospirillum b. Azotobacter c. Chromatium d. Clostridium
Question Text An aerobic saprotrophic nitrogen-fixing free-living bacteria is a. Azospirillum b. Azotobacter c. Chromatium d. Clostridium
Updated On Feb 11, 2024
Topic Transport in plants
Subject Biology
Class Class 11
Answer Type Text solution:1 | {"url":"https://askfilo.com/user-question-answers-biology/an-aerobic-saprotrophic-nitrogen-fixing-free-living-bacteria-36383637313430","timestamp":"2024-11-06T18:05:21Z","content_type":"text/html","content_length":"139253","record_id":"<urn:uuid:73c0114f-4596-4384-a6b2-5a302195a995>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00168.warc.gz"} |
Effects of heat and mass transfer on stagnation point flow of micropolar Maxwell fluid over Riga plate
1. Lukaszewicz, G., Micropolar Fluids: Theory and Applications, Springer Science & Business Media (1999).
2. Eringen, A.C., Microcontinuum Field Theories: II. Fluent Media, (2) Springer Science & Business Media (2001).
3. Wang, X.L. and Zhu, K.Q. "A study of the lubricating effectiveness of micropolar fluids in a dynamically loaded journal bearing (T1516)", Tribology International, 37(6), pp. 481-490 (2004).
4. Nadeem, S., Akbar, N.S., and Malik, M.Y. "Exact and numerical solutions of a micropolar fluid in a vertical annulus", Numerical Methods for Partial Differential Equations, 26(6), pp. 1660-1674
5. Hussain, S.T., Nadeem, S., and Haq, R.U. "Modelbased analysis of micropolar nano fluid flow over a stretching surface", The European Physical Journal Plus, 129(8), p. 161 (2014).
6. Ellahi, R., Rahman, S.U., Nadeem, S., and Akbar, N.S. "Influence of heat and mass transfer on micropolar fluid of blood flow through a tapered stenosed arteries with permeable walls", Journal of
Computational and Theoretical Nanoscience, 11(4), pp. 1156-1163 (2014).
7. Rawi, N.A., Ilias, M.R., Isa, Z.M., and Shafie, S. "G-Jitter induced mixed convection flow and heat transfer of micropolar nanofluids flow over an inclined stretching sheet", In AIP Conference Proceedings, 1775(1), p. 030020, AIP Publishing (2016).
8. Abbas, N., Saleem, S., Nadeem, S., Alderremy, A.A., and Khan, A.U. "On stagnation point flow of a micro polar nanofluid past a circular cylinder with velocity and thermal slip", Results in
Physics, 9, pp. 1224-1232 (2018).
9. Nadeem, S., Malik, M.Y., and Abbas, N. "Heat transfer of three dimensional micropolar fluids on Riga plate", Canadian Journal of Physics, 98(1), pp. 32-38 (2020).
10. Mollamahdi, M., Abbaszadeh, M., and Sheikhzadeh,G.A. "Analytical study of Al2O3-Cu/water micropolar hybrid nanofluid in a porous channel with expanding/ contracting walls in the presence of
magnetic field", Scientia Iranica, 25(1), pp. 208-220 (2018).
11. Nayak, M.K., Zeeshan, A., Pervaiz, Z., and Makinde, O.D., "Modelling, measurement and control B", 88(1), pp. 33-41 (2019).
12. Abro, K.A. and Yildirim, A. "An analytic and mathematical synchronization of micropolar nanofluid by Caputo-Fabrizio approach", Scientia Iranica, 26(6), pp. 3917-3927 (2019).
13. Atif, S.M., Hussain, S., and Sagheer, M. "Effect of thermal radiation on MHD micropolar Carreau nanofluid with viscous dissipation, Joule heating, and internal heating", Scientia Iranica,
Transactions F, Nanotechnology, 26(6), pp. 3875-3888 (2019).
14. Gailitis, A.K. and Lielausis, O.A. "On the possibility of drag reduction of a at plate in an electrolyte", Appl. Magnetohydrodyn. Trudy Inst. Fisiky AN Latvia SSR, 12, p. 143 (1961).
15. Grinberg, E. "On determination of properties of some potential fields", Applied Magnetohydrodynamics Reports of the Physics Institute, 12, pp. 147-154 (1961).
16. Pantokratoras, A. and Magyari, E. "EMHD freeconvection boundary-layer flow from a Riga-plate", Journal of Engineering Mathematics, 64(3), pp. 303- 315 (2009).
17. Magyari, E. and Pantokratoras, A. "Aiding and opposing mixed convection flows over the Riga-plate", Communications in Nonlinear Science and Numerical Simulation, 16(8), pp. 3158-3167 (2011).
18. Ayub, M., Abbas, T., and Bhatti, M.M. "Inspiration of slip effects on electromagnetohydrodynamics (EMHD) nanofluid flow through a horizontal Riga plate", The European Physical Journal Plus, 131(6), p. 193 (2016).
19. Ramzan, M., Bilal, M., and Chung, J.D. "Radiative Williamson nanofluid flow over a convectively heated Riga plate with chemical reaction - A numerical approach", Chinese Journal of Physics, 55
(4), pp. 1663-1673 (2017).
20. Zaib, A., Haq, R.U., Chamkha, A.J., and Rashidi, M.M. "Impact of partial slip on mixed convective flow towards a Riga plate comprising micropolar TiO2-kerosene/water nanoparticles", International
Journal of Numerical Methods for Heat and Fluid Flow, 29(5), pp. 1647-1662 (2018).
21. Rasool, G. and Zhang, T. "Characteristics of chemical reaction and convective boundary conditions in Powell-Eyring nanofluid flow along a radiative Riga plate", Heliyon, 5(4), e01479 (2019).
22. Bhatti, M.M., Zeeshan, A., Ellahi, R., and Shit, G.C. "Mathematical modeling of heat and mass transfer effects on MHD peristaltic propulsion of two-phase flow through a Darcy-Brinkman-Forchheimer
porous medium", Advanced Powder Technology, 29(5), pp. 1189-1197 (2018).
23. Nayak, M.K., Shaw, S., Makinde, O.D., and Chamkha, A.J. "Effects of homogenous-heterogeneous reactions on radiative NaCl-CNP nanofluid flow past a convectively heated vertical Riga plate",
Journal of Nanofluids, 7(4), pp. 657-667 (2018).
24. Mehmood, R., Nayak, M.K., Akbar, N.S., and Makinde, O.D. "Effects of thermal-diffusion and diffusion-thermo on oblique stagnation point flow of couple stress Casson fluid over a stretched
horizontal Riga plate with higher order chemical reaction", Journal of Nanofluids, 8(1), pp. 94-102 (2019).
25. Nayak, M.K., Shaw, S., Makinde, O.D., and Chamkha, A.J. "Investigation of partial slip and viscous dissipation effects on the radiative tangent hyperbolic nanofluid flow past a vertical permeable
Riga plate with internal heating: Bungiorno model", Journal of Nanofluids, 8(1), pp. 51-62 (2019).
26. Riaz, A., Ellahi, R., Bhatti, M.M., and Marin, M. "Study of heat and mass transfer on eyring-powell fluid model propagating peristaltically through a rectangular complaint channel", Heat Transfer
Research, 50(16), pp. 1539-1560 (2019). DOI: 10.1615/HeatTransRes.2019025622.
27. Khan, A.A., Bukhari, S.R., Marin, M., and Ellahi, R. "Effects of chemical reaction on third-grade MHD fluid flow under the influence of heat and mass transfer with variable reactive index", Heat Transfer Research, 50(11), pp. 1061-1080 (2019).
28. Hiemenz, K. "Die Grenzschicht an einem in den gleichformigen Flussigkeitsstrom eingetauchten geraden Kreiszylinder", Dinglers Polytech. J., 326, pp. 321- 324 (1911).
29. Howarth, L. "The boundary layer in three dimensional flow-Part II The flow near a stagnation point", The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 42(335), pp.
1433-1440, (1951).
30. Ishak, A., Jafar, K., Nazar, R., and Pop, I. "MHD stagnation point flow towards a stretching sheet", Physica A: Statistical Mechanics and Its Applications, 388(17), pp. 3377-3383 (2009).
31. Van Gorder, R.A. and Vajravelu, K. "Hydromagnetic stagnation point flow of a second grade fluid over a stretching sheet", Mechanics Research Communications, 37(1), pp. 113-118 (2011).
32. Fang, T., Chia-fon, F.L., and Zhang, J. "The boundary layers of an unsteady incompressible stagnation-point flow with mass transfer", International Journal of Non-Linear Mechanics, 46(7), pp.
942-948 (2011).
33. Nadeem, S., Abbas, N., and Khan, A.U. "Characteristics of three dimensional stagnation point flow of hybrid nanofluid past a circular cylinder", Results in Physics, 8, pp. 829-835 (2018).
34. Nadeem, S. and Abbas, N. "On both MHD and slip effect in Micropolar hybrid nanofluid past a circular cylinder under stagnation point region", Canadian Journal of Physics, 97(4), pp. 392-399
35. Iacopini, S. and Piazza, R. "Thermophoresis in protein solutions", EPL (Europhysics Letters), 63(2), p. 247 (2003).
36. Putnam, S.A., Cahill, D.G., andWong, G.C. "Temperature dependence of thermodiffusion in aqueous suspensions of charged nanoparticles", Langmuir, 23(18), pp. 9221-9228 (2007).
37. Braibanti, M., Vigolo, D., and Piazza, R. "Does thermophoretic mobility depend on particle size", Physical Review Letters, 100(10), p. 108303 (2008).
38. Khan, A.A., Usman, H., Vafai, K., and Ellahi, R. "Study of peristaltic flow of magnetohydrodynamics Walter's B fluid with slip and heat transfer", Scientia Iranica, 23(6), pp. 2650-2662 (2016).
39. Niranjan, H., Sivasankaran, S., and Bhuvaneswari, M. "Chemical reaction, Soret and Dufour effects on MHD mixed convection stagnation point flow with radiation and slip condition", Scientia
Iranica, Transactions B, Mechanical Engineering, 24(2), p. 698 (2017).
40. Ijaz, N., Zeeshan, A., Bhatti, M.M., and Ellahi, R."Analytical study on liquid-solid particles interaction in the presence of heat and mass transfer through a wavy channel", Journal of Molecular
Liquids, 250, pp. 80-87 (2018).
41. Nayak, M.K., Hakeem, A.K., and Makinde, O.D. "Influence of Catteneo-Christov heat flux model on mixed convection flow of third grade nanofluid over an inclined stretched Riga plate", In Defect
and Diffusion Forum, 387, pp. 121-134, Trans Tech Publications Ltd (2018). | {"url":"https://scientiairanica.sharif.edu/article_22439.html","timestamp":"2024-11-07T10:57:35Z","content_type":"text/html","content_length":"59857","record_id":"<urn:uuid:e6ab5d79-90f7-47fe-b254-71ad90f2e7e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00157.warc.gz"} |
Source code: tianshou/policy/modelfree/pg.py
class PGPolicy(*, actor: Module, optim: Optimizer, dist_fn: Callable[[...], Distribution], action_space: Space, discount_factor: float = 0.99, reward_normalization: bool = False, deterministic_eval:
bool = False, observation_space: Space | None = None, action_scaling: bool = True, action_bound_method: Literal['clip', 'tanh'] | None = 'clip', lr_scheduler: LRScheduler | MultipleLRSchedulers |
None = None)[source]#
Implementation of REINFORCE algorithm.
- actor – mapping (s -> model_output); should follow the rules in BasePolicy.
- optim – optimizer for the actor network.
- dist_fn – distribution class for computing the action. Maps model_output -> distribution. Typically a Gaussian distribution taking model_output = mean, std as input for continuous action spaces, or a categorical distribution taking model_output = logits for discrete action spaces. Note that as the user, you are responsible for ensuring that the distribution is compatible with the action space.
- action_space – env's action space.
- discount_factor – in [0, 1].
- reward_normalization – if True, will normalize the returns by subtracting the running mean and dividing by the running standard deviation. Can be detrimental to performance! See TODO in
- deterministic_eval – if True, will use the deterministic action (the dist's mode) instead of a stochastic one during evaluation. Does not affect training.
- observation_space – env's observation space.
- action_scaling – if True, scale the action from [-1, 1] to the range of action_space. Only used if the action_space is continuous.
- action_bound_method – method to bound the action to the range [-1, 1]. Only used if the action_space is continuous.
- lr_scheduler – if not None, will be called in policy.update().
See also
Please refer to BasePolicy for more detailed explanation.
forward(batch: ObsBatchProtocol, state: dict | BatchProtocol | ndarray | None = None, **kwargs: Any) → DistBatchProtocol[source]#
Compute action over the given batch data by applying the actor.
Will sample from the dist_fn, if appropriate. Returns a new object representing the processed batch data (contrary to other methods that modify the input batch inplace).
See also
Please refer to forward() for more detailed explanation.
learn(batch: BatchWithReturnsProtocol, batch_size: int | None, repeat: int, *args: Any, **kwargs: Any) → TPGTrainingStats[source]#
Update policy with a given batch of data.
Returns: a dataclass object, including the data needed to be logged (e.g., loss).
In order to distinguish the collecting state, updating state and testing state, you can check the policy state by self.training and self.updating. Please refer to States for policy for more
detailed explanation.
If you use torch.distributions.Normal and torch.distributions.Categorical to calculate the log_prob, please be careful about the shape: Categorical distribution gives “[batch_size]” shape
while Normal distribution gives “[batch_size, 1]” shape. The auto-broadcasting of numerical operation with torch tensors will amplify this error.
process_fn(batch: RolloutBatchProtocol, buffer: ReplayBuffer, indices: ndarray) → BatchWithReturnsProtocol[source]#
Compute the discounted returns (Monte Carlo estimates) for each transition.
They are added to the batch under the field returns. Note: this function will modify the input batch!
\[G_t = \sum_{i=t}^T \gamma^{i-t}r_i\]
where \(T\) is the terminal time step, \(\gamma\) is the discount factor, \(\gamma \in [0, 1]\).
- batch – a data batch which contains several episodes of data in sequential order. Mind that the end of each finished episode of batch should be marked by a done flag; unfinished (or collecting) episodes will be recognized by buffer.unfinished_index().
- buffer – the corresponding replay buffer.
- indices (numpy.ndarray) – tells the batch's location in buffer; batch is equal to buffer[indices].
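The return computation above can be sketched in plain Python. This is an illustrative re-implementation of the formula for a single finished episode, not Tianshou's actual vectorized code (which also handles episode boundaries in the buffer):

```python
def discounted_returns(rewards, gamma):
    """Compute Monte Carlo returns G_t = sum_{i=t}^{T} gamma**(i - t) * r_i
    for one finished episode, using the backward recursion
    G_t = r_t + gamma * G_{t+1}."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

print(discounted_returns([1.0, 1.0, 1.0], 0.5))  # [1.75, 1.5, 1.0]
```

With gamma = 0.5, the last step's return is 1.0, the middle one 1.0 + 0.5 · 1.0 = 1.5, and the first 1.0 + 0.5 · 1.5 = 1.75, matching the formula term by term.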
class PGTrainingStats(*, train_time: float = 0.0, smoothed_loss: dict = <factory>, loss: tianshou.data.stats.SequenceSummaryStats)[source]#
loss: SequenceSummaryStats#
How to draw 4 squares in Python | Code Underscored
turtle is a Python module that provides a drawing board and lets you command a turtle to draw on it! The turtle needs to be moved around for a drawing to appear on the screen. This can be achieved by using functions like forward(), backward(), left() and right().
You can steer the turtle around with functions like turtle.forward(…) and turtle.left(…).
You must import the turtle module before you can use it. We suggest practicing with it in the interactive interpreter first, as working with files requires a little more effort. Simply type the following command into your terminal:
import turtle
Do you get the error message "No module named _tkinter" while working on Ubuntu? Installing the missing package fixes it:
sudo apt-get install python3-tk
The turtle.forward(…) function instructs the turtle to advance by the specified distance, while turtle.left(…) takes the number of degrees you want to rotate to the left. Use turtle.backward(…) if you want to move backwards by the distance value provided as a parameter, and turtle.right(…) if you wish to rotate the turtle by the given number of degrees towards the right.
Do you want to start over?
To clear the drawing your turtle has made so far, use turtle.reset(). We’ll go over turtle in greater depth in the subsequent sections with examples so that you can master the concepts better.
The default turtle is nothing more than a triangle. That's no fun! Let's make it look like an actual turtle using the command turtle.shape("turtle").
You may have noticed that the turtle window vanishes after the turtle completes its movement if you saved the commands to a file. This is due to Python exiting when the turtle has completed its
movement. Python owns the turtle window, so it goes away as well. Place turtle.exitonclick() at the bottom of your file to avoid this. The window will now remain open until you click on it:
import turtle
Drawing a Square
You’ll probably need a right angle, which is 90 degrees, to make a square.
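The forward/left pattern can be checked without opening a turtle window: each left(90) rotates the heading by 90 degrees, and four equal moves bring you back to the start. A small sketch of that arithmetic (plain Python; walk_square is a hypothetical helper, not part of the turtle module):

```python
import math

def walk_square(side, start=(0.0, 0.0)):
    """Trace the corners visited by repeating forward(side); left(90)."""
    x, y = start
    heading = 0.0  # degrees, 0 = east, like turtle's default mode
    corners = []
    for _ in range(4):
        x += side * math.cos(math.radians(heading))
        y += side * math.sin(math.radians(heading))
        corners.append((round(x, 6), round(y, 6)))
        heading += 90  # a left turn
    return corners

print(walk_square(100))  # ends back at the start point (up to rounding)
```

The last corner equals the starting point, which is exactly the "begins and finishes in the same position" convention the tutorial recommends.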
Drawing a square Anti-Clockwise
import turtle

my_turtle = turtle.Turtle()
drawing_area = turtle.Screen()
drawing_area.title("tuts@codeunderscored: Drawing a Square Anti-Clockwise")
# my_turtle.penup()

for _ in range(4):
    my_turtle.forward(100)
    my_turtle.left(90)
Drawing a Square Anti-Clockwise
Drawing a square Clockwise
import turtle

my_turtle = turtle.Turtle()
drawing_area = turtle.Screen()
drawing_area.title("tuts@codeunderscored: Drawing a Square Clockwise")

for _ in range(4):
    my_turtle.forward(100)
    my_turtle.right(90)
Drawing a Square Clockwise
Optimized code to draw a square
import turtle

my_turtle = turtle.Turtle()  # create an instance of the turtle
my_turtle.shape('turtle')  # set the cursor shape to a turtle
drawing_area = turtle.Screen()  # create the screen
drawing_area.title("tuts@codeunderscored: Optimizing drawing a square ")

my_turtle.pendown()  # put the pen down so the turtle leaves a trace as it moves
my_turtle.left(90)  # face upwards before drawing

for i in range(4):
    my_turtle.forward(100)
    my_turtle.left(90)
Keep in mind
Before and after drawing the square, note how the turtle begins and finishes in the same position, facing the same way. This is a helpful convention to observe since it makes drawing different shapes
later on much simpler.
Added value
If you want to be more imaginative, use the turtle.width(…) and turtle.color(…) functions to change the drawing. How do you make use of these features? Before you can use a function, you must first learn its signature: what to put between the parentheses and what those arguments mean.
In the Python shell, type help(turtle.color) to find out more. Python may display it in a pager, which lets you scroll up and down when there is a lot of text. To exit the pager, press the q key.
Is this an error?
NameError: name 'turtle' is not defined
When trying to get help, you may see NameError: name 'turtle' is not defined. In Python, you must import names before you can refer to them, so you'll need to import turtle before using help(turtle.color) in a new Python interactive shell, as shown below.
help on turtle color
If you make a mistake, you can tell the turtle to clear its drawing board with the reset() directive, or undo the most recent move with the undo() command.
You can change the color with turtle.color, as you might have read in the help. The color string passed as a parameter can be "red", "green", "violet", and so on, for instance turtle.color(colorstring). A comprehensive list can be found in the color manual available through the help feature.
Be sure to run turtle.colormode(255) first if you want to set an RGB value. You might, for example, use turtle.color(75,0,130) to set the color as indigo.
Demonstration of how to draw a square with four different colors
import turtle

window = turtle.Screen()
window.title("tuts@codeunderscored:~ Demonstrate how to draw a square with 4 colors")
my_turtle = turtle.Turtle()

for current_color in ["red", "green", "purple", "blue"]:
    my_turtle.color(current_color)
    my_turtle.forward(100)
    my_turtle.left(90)
Example 1
import turtle
def drawSquares(my_turtle, length_of_side, count_squares, distance_apart):
    """
    :param my_turtle: instance of the turtle
    :param length_of_side: initial side length of the square, e.g. 200, before drawing the inner squares
    :param count_squares: determines how many squares exist, e.g. 5 means 5 squares are drawn
    :param distance_apart: distance from one square to the next
    """
    for n in range(count_squares):
        for _ in range(4):
            my_turtle.forward(length_of_side)
            my_turtle.left(90)
        x, y = my_turtle.position()
        my_turtle.penup()  # reposition without drawing a stray line
        my_turtle.goto(x + distance_apart / 2, y + distance_apart / 2)
        my_turtle.pendown()
        length_of_side -= distance_apart
if __name__=='__main__':
window = turtle.Screen() # Setting up the attributes and thee window
window.title("tuts@codeunderscored: ~ Example 1: How to draw 4 Squares in Python")
new_turtle = turtle.Turtle()
new_turtle.goto(60, 60)
drawSquares(new_turtle, 200, 4, 10)
Example 1:How to draw 4 Squares in Python
Example 2
def drawSquares(my_turtle, side_length, no_of_squares, distance_apart):
    """
    :param my_turtle: instance of the turtle
    :param side_length: initial side length of the square, e.g. 200, before drawing the inner squares
    :param no_of_squares: determines how many squares exist, e.g. 5 means 5 squares are drawn
    :param distance_apart: distance from one square to the next
    """
    new_turtle = my_turtle.clone()  # clone the turtle to avoid restoring changes
    new_turtle.shape("square")  # change the turtle shape for stamping
    new_turtle.fillcolor(turtle.bgcolor())  # fill with the background colour so only outlines show
    for _ in range(no_of_squares):
        new_turtle.turtlesize(side_length / 20)  # the magic number 20 is the stamp's default size
        new_turtle.stamp()  # leave a square imprint at the current position
        side_length -= distance_apart
if __name__=='__main__':
window = turtle.Screen() # Setting up the window and its attributes
window.title("tuts@codeunderscored: ~ Example 2: How to draw 4 Squares in Python")
new_turtle = turtle.Turtle()
new_turtle.goto(10, 10)
drawSquares(new_turtle, 200, 4, 10)
Example 2: How to draw 4 Squares in Python
Example 3
def drawSquares(my_turtle, side_length, no_of_squares, distance_apart):
    """
    :param my_turtle: instance of the turtle
    :param side_length: initial side length of the square, e.g. 200, before drawing the inner squares
    :param no_of_squares: determines how many squares exist, e.g. 5 means 5 squares are drawn
    :param distance_apart: distance from one square to the next
    """
    x, y = my_turtle.position()
    my_turtle.goto(x - side_length / 2, y - side_length / 2)  # the current x, y becomes the centre
    my_turtle.setheading(-45)  # by default the square sits on a corner instead of on a side
    for _ in range(no_of_squares):
        radius = side_length * 2**0.5 / 2  # circumradius of a square with this side length
        my_turtle.circle(radius, steps=4)  # steps sets the shape: 4 = square, 5 = pentagon
        side_length -= distance_apart
        x, y = my_turtle.position()
        my_turtle.goto(x + distance_apart / 2, y + distance_apart / 2)
if __name__=='__main__':
window = turtle.Screen() # window and attributes setup
window.title("tuts@codeunderscored:~ Example 3: How to draw 4 Squares in Python")
new_turtle = turtle.Turtle()
new_turtle.goto(10, 10)
drawSquares(new_turtle, 200, 4, 50)
Example 3: How to draw 4 Squares in Python
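The radius used in Example 3 is the circumradius of the square: the diagonal of a square with side s is s√2, so the circle through its four corners has radius s√2/2. A quick check of that relation (square_circumradius is a hypothetical helper name, not a turtle function):

```python
import math

def square_circumradius(side_length):
    """Radius of the circle passing through all four corners of a square."""
    return side_length * math.sqrt(2) / 2

# For the initial 200-unit square drawn in Example 3:
r = square_circumradius(200)
print(round(r, 2))  # 141.42
```

This is why circle(radius, steps=4) with that radius produces a square of the intended side length.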
Example 4
def drawSquares(my_turtle, side_length, no_of_squares, distance_apart):
    """
    :param my_turtle: instance of the turtle
    :param side_length: initial side length of the square, e.g. 200, before drawing the inner squares
    :param no_of_squares: determines how many squares exist, e.g. 5 means 5 squares are drawn
    :param distance_apart: distance from one square to the next
    """
    if no_of_squares < 1:
        return
    for _ in range(4):
        my_turtle.forward(side_length)
        my_turtle.left(90)
    x, y = my_turtle.position()
    my_turtle.penup()  # reposition without drawing a stray line
    my_turtle.goto(x + distance_apart / 2, y + distance_apart / 2)
    my_turtle.pendown()
    drawSquares(my_turtle, side_length - distance_apart, no_of_squares - 1, distance_apart)
if __name__=='__main__':
window = turtle.Screen() # window and attributes setup
window.title("tuts@codeunderscored:~ Example 4: How to draw 4 Squares in Python")
new_turtle = turtle.Turtle()
new_turtle.goto(10, 10)
drawSquares(new_turtle, 200, 5, 40)
Example 4: How to draw 4 Squares in Python
Matlab projects-VLSI Projects-Biomedical projects-Mechanical Projects
Email: [email protected]
• What is Data Analytics Project?
Data analysis is a process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. Analysts may use robust statistical measurements to solve certain analytical problems. Hypothesis testing is used when the analyst makes a particular hypothesis about the true state of affairs and gathers data to determine whether that state of affairs is true or false.
Data Analytics Projects in Bangalore
• Data Analytics Course for Working Professionals
Business analytics depends on sufficient volumes of high-quality data. Data initially obtained must be processed or organized for analysis. For instance, this may involve placing data into rows and columns in a table format (i.e., structured data) for further analysis within a spreadsheet or statistical software.
2024 IEEE Projects Based on Data Analytics
IEEE Data Analytics Projects 2024
How to do the Data Analytics Projects?
Data Analytics Concepts and Techniques
The consultants at McKinsey and Company named a technique for breaking a quantitative problem down into its component parts called the MECE principle. Each layer can be broken down into its
components; each of the sub-components must be mutually exclusive of each other and collectively add up to the layer above them. The relationship is referred to as "Mutually Exclusive and
Collectively Exhaustive" or MECE. For example, profit by definition can be broken down into total revenue and total cost. In turn, total revenue can be analyzed by its components, such as revenue of
divisions A, B, and C (which are mutually exclusive of each other) and should add to the total revenue (collectively exhaustive).
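The "collectively exhaustive" half of a MECE breakdown can be checked mechanically: the component values must add up to the parent figure. A minimal sketch with illustrative numbers (not real project data; the function name is a hypothetical helper):

```python
def is_collectively_exhaustive(parent_value, components, tol=1e-9):
    """Check the 'CE' half of MECE: the component values sum to the parent."""
    return abs(parent_value - sum(components.values())) < tol

# Total revenue broken down by mutually exclusive divisions A, B, and C.
total_revenue = 120.0
revenue_by_division = {"A": 50.0, "B": 40.0, "C": 30.0}
print(is_collectively_exhaustive(total_revenue, revenue_by_division))  # True
```

Mutual exclusivity itself is a modelling judgment (no transaction may be counted in two divisions), so it cannot be verified from the totals alone.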
2023-2024 IEEE Projects on Data Mining Contact: 9591912372
CFD-Simulations of inhomogeneous H2-air explosions
Faculty of Technology, Natural sciences and Maritime Sciences
FMH606 Master's Thesis 2021 Process Technology
CFD-Simulations of inhomogeneous H2-air explosions
Omid Aghaabbasi
The University of South-Eastern Norway takes no responsibility for the results and conclusions in this student report.
Course: FMH606 Master's Thesis, 2021
Title: CFD-Simulations of inhomogeneous H2-air explosions
Number of pages: 81
Keywords: H2-air explosions, homogeneous H2-air mixture, inhomogeneous H2-air mixture, flame acceleration, deflagration-to-detonation transition, detonation, CFD simulation, OpenFOAM, XiFoam.
Student: Omid Aghaabbasi
Supervisor: Prof. Dag Bjerketvedt, Prof. Knut Vaagsaether, Stip. Mathias Henriksen
External partner: MoZEES
Summary:
Since global demand for energy has increased in recent years with growing consumption, the search for alternatives to traditional fossil fuels has intensified. These fuels are continuously diminishing and cause serious pollution, with destructive environmental effects on animal and human life. Among renewable and green alternatives, hydrogen is a promising, accessible, and clean zero-emission energy carrier, often called the fuel of the future. Because the ignition energy required for hydrogen combustion is very low, accidental explosions are a real hazard, so safety studies of H2-air explosions are particularly important for reducing unexpected incidents. Depending on the initial conditions and on the configuration and dimensions of the geometry under study, different explosion regimes can be observed. Inhomogeneous conditions are also more realistic for H2-air mixtures, because the low density of H2 causes it to stratify in air; the explosion of inhomogeneous H2-air mixtures is therefore of great interest in safety research.
In this project, the OpenFOAM CFD toolbox was employed as a reliable numerical method to simulate flame acceleration of explosions in a channel.
Six simulation cases were executed with the XiFoam solver in a 2D channel of 1700 × 100 mm² without obstacles; a high enough length-to-height aspect ratio was used to ensure a strong explosion. The effects of homogeneity and inhomogeneity in H2-air explosions are investigated, with further consideration of stoichiometric, fuel-lean, and fuel-rich conditions through different equivalence ratios. First, a homogeneous H2-air mixture filling the whole domain was considered. Then an inhomogeneous H2-air mixture was investigated by creating two homogeneous layers in the channel: a flammable hydrogen-air cloud in the top half and air, as inert gas, in the bottom half.
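As a side note, the equivalence ratios in these cases map directly to hydrogen mole fractions in the flammable layer. From the stoichiometry H2 + ½O2 → H2O, with air taken as 21% O2, one mol of H2 needs about 2.38 mol of air, so the H2 mole fraction at equivalence ratio φ is φ·0.42/(1 + φ·0.42). A small sketch of that relation (a back-of-envelope check, not code from the thesis):

```python
def h2_mole_fraction(phi, o2_in_air=0.21):
    """H2 mole fraction in an H2-air mixture at equivalence ratio phi.
    Stoichiometry H2 + 0.5 O2 -> H2O gives 0.5 / o2_in_air mol air per mol H2."""
    fuel_air_stoich = 1.0 / (0.5 / o2_in_air)  # ~0.42 mol H2 per mol air
    fa = phi * fuel_air_stoich
    return fa / (1.0 + fa)

for phi in (0.8, 1.0, 1.2):
    print(phi, round(100 * h2_mole_fraction(phi), 1))  # ~25.1, 29.6, 33.5 vol% H2
```

The stoichiometric value of about 29.6 vol% H2 is a well-known figure for hydrogen-air mixtures.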
In these explosion scenarios, flame acceleration in the channel is investigated through key parameters such as flame front position, flame speed, and pressure.
The results show that with higher fuel concentration, i.e. increasing equivalence ratio from fuel-lean to fuel-rich conditions, the flame front travels faster, with higher velocity and pressure. This behaviour is similar for both homogeneous and inhomogeneous H2-air mixtures. Furthermore, since the homogeneous case contains more flammable H2-air mixture than the inhomogeneous one, and the flame can stretch freely, the flame elongates; this enlarges the flame surface area and raises the reaction rate. The flame therefore exhibits higher acceleration, velocity, and pressure in the homogeneous condition than in the inhomogeneous one.
This master's thesis was performed in the Department of Process, Energy and Environmental Technology, Faculty of Technology, Natural Sciences and Maritime Sciences (TNM) at the University of South-Eastern Norway (USN), in cooperation with MoZEES, a Norwegian research centre, as external partner.
I would like to thank my supervisors, Prof. Dag Bjerketvedt and Prof. Knut Vaagsaether, who provided guidance each time I encountered problems and difficulties and pushed me in the right direction; their help was invaluable during this work.
Furthermore, I would like to thank Mr. Mathias Henriksen, whose experience with OpenFOAM and Python was significantly helpful during the simulation process.
Porsgrunn, 19.05.2021
Omid Aghaabbasi
Nomenclature
1 Introduction
1.1 Objective of project
1.2 Method
1.3 Report structure
2 Flame propagation of fuel-air clouds
2.1 Inhomogeneity effect in fuel-air cloud
2.2 Flame acceleration
2.2.1 Laminar deflagration
2.2.2 Cellular flame propagation
2.2.3 Slow turbulent deflagration
2.2.4 Fast turbulent deflagration
2.2.5 Flame acceleration in inhomogeneous condition of H2-air mixture
2.3 Onset of detonation
2.3.1 Onset of detonation in inhomogeneous condition of H2-air mixture
2.4 Detonation
2.4.1 Detonation in inhomogeneous condition of H2-air mixture
2.5 Conclusion
3 Finite Volume method and Case Study Simulation
3.1 Governing equations in combustion modelling
3.1.1 Transport equation
3.1.2 Equations of state
3.2 Turbulence model
3.2.1 Reynolds averaging
3.2.2 Favre averaging
3.2.3 κ-ε turbulence model
3.3 Combustion model
3.3.1 Flame wrinkling combustion model
3.4 Numerical method and pre-processing case setup
3.4.1 Case geometry
3.4.2 Case setup and initial field in channel
3.4.3 Time step and duration of simulation
3.4.4 Pressure probes
3.4.5 Turbulence model
3.4.6 Thermophysical model
3.4.7 Combustion properties
3.4.8 Initial and boundary conditions
4 Post processing results
4.1 Case 1: Homogeneous H2-air mixture with fuel-lean condition (φ = 0.8)
4.2 Case 2: Homogeneous H2-air mixture with stoichiometric condition (φ = 1)
4.3 Case 3: Homogeneous H2-air mixture with fuel-rich condition (φ = 1.2)
4.4 Case 4: Inhomogeneous H2-air mixture with fuel-lean condition (φ = 0.8)
4.5 Case 5: Inhomogeneous H2-air mixture with stoichiometric condition (φ = 1)
4.6 Case 6: Inhomogeneous H2-air mixture with fuel-rich condition (φ = 1.2)
5 Discussion
5.1 Homogeneous H2-air mixture
5.2 Inhomogeneous H2-air mixture
5.3 Homogeneity and inhomogeneity of H2-air mixture
6 Conclusion
6.1 Suggestions for further work
References
Appendices
CFD Computational Fluid Dynamics
CJ Chapman-Jouguet
DDT Deflagration-to-Detonation Transition
DNS Direct numerical simulation
FA Flame Acceleration
LES Large eddy simulation
MIE Minimum Ignition Energy
OpenFOAM Open-source Field Operation And Manipulation
RANS Reynolds-averaged Navier-Stokes
RM Richtmyer-Meshkov instability
SGS Sub-grid scales
VN Post-shock or von Neumann state
ZND Zel'dovich-von Neumann-Döring model
Symbol  Description  Unit
A_c  Channel cross-section area  [m^2]
A_f, A_f,T, A_f,L  Flame surface area, turbulent flame surface area, laminar flame surface area  [m^2]
A_s  Sutherland coefficient  [Pa·s/K^(1/2)]
a  Thermal diffusivity  [m^2/s]
b  Regress variable  [-]
C_b, C_u  Concentration of burned and unburned mixtures  [kmol/m^3]
C_k  Molar concentration of species k  [kmol/m^3]
c_p  Heat capacity at constant pressure  [J/K]
C_v  Heat capacity at constant volume  [J/K]
Co  Courant number  [-]
c  Progress variable  [-]
c_i  Molar fraction of species i  [-]
D, D_k  Mass diffusivity, diffusion coefficient of species k  [m^2/s]
D_CJ  Chapman-Jouguet detonation velocity  [m/s]
E_a  Activation energy  [kJ/kmol]
F_i  Body force  [m/s^2]
ft  Fuel mass fraction  [-]
H  Height of channel  [m]
h, h_k, h_s  Enthalpy, specific enthalpy of species k, sensible enthalpy  [kJ/kg]
h_f^i  Heat of formation of species i  [kJ/kg]
K  Thermal conductivity  [W/(m·K)]
Ka  Karlovitz number based on laminar flame thickness  [-]
Ka_δ  Karlovitz number based on reaction-zone thickness  [-]
k  Karlovitz stretch factor  [-]
L  Characteristic length scale of mean flow  [m]
Le  Lewis number  [-]
L_M  Markstein length  [m]
Ma  Markstein number  [-]
MW_k  Molecular weight of species k  [kg/kmol]
ṁ  Mass flow rate  [kg/s]
P, P_u  Pressure, pressure of unburned mixture  [atm, kPa]
q̇_k  Total reaction rate of species k  [kmol/(m^3·s)]
R  Universal molar gas constant  [kJ/(kmol·K)]
Re, Re_η, Re_ℓ  Reynolds number, Reynolds number of smallest eddies, Reynolds number of integral length scale  [-]
Sc_k  Schmidt number  [-]
S_L, S_L,S, S_L0, S_T  Unstretched laminar burning velocity, stretched laminar burning velocity, laminar flame speed at room conditions, turbulent burning velocity  [m/s]
T, T_b, T_u  Temperature, temperature of burned and unburned mixture  [K]
T_s  Sutherland temperature  [K]
t_d  Diffusion time  [s]
U  Characteristic velocity scale of mean flow  [m/s]
U_s  Surface-filtered flame velocity  [m/s]
u_i, ū, u'  Velocity in i direction, time-averaged flow velocity, velocity fluctuation  [m/s]
W  Fuel constant in Gülder laminar flame speed correlation  [-]
Xi  Flame wrinkling  [-]
Y_k  Mass fraction of species k  [-]
Z  Constant in Gülder laminar flame speed correlation  [-]
α, β  Mixture-strength-dependent constants in power law  [-]
Γ, Γ_k  Diffusion coefficient  [m^2/s]
δ_ij  Kronecker delta  [-]
δ_L, δ_R  Laminar flame thickness, thickness of reaction zone  [m]
ε  Dissipation rate of turbulent kinetic energy  [m^2/s^3]
ξ, η  Fuel constants in Gülder laminar flame speed correlation  [-]
ℓ_T, ℓ_η  Characteristic length scales of large (integral) eddies and smallest eddies  [m]
κ  Turbulent kinetic energy  [m^2/s^2]
λ  Detonation cell width  [m]
μ, μ_t  Dynamic viscosity, turbulent dynamic viscosity  [N·s/m^2]
ν  Kinematic viscosity of flow  [m^2/s]
Ξ, Ξ*_eq  Sub-grid flame wrinkling, equilibrium wrinkling at Kolmogorov turbulence length scale  [-]
ρ, ρ_re, ρ_pr  Density, reactant density, product density  [kg/m^3]
σ  Expansion ratio  [-]
σ_h, σ_κ, σ_ε  Prandtl number, turbulent Prandtl numbers of turbulent kinetic energy and dissipation rate  [-]
σ_s, σ_t  Surface-filtered resolved strain rates  [s^-1]
τ_ij  Viscous stress  [N/m^2]
τ_L, τ_T, τ_η  Laminar flame time scale, large (integral) eddy time scale, smallest eddy (Kolmogorov) time scale  [s]
ϑ, υ  Characteristic velocity scales of large eddies and smallest eddies  [m/s]
ϕ  Equivalence ratio  [-]
φ, φ̄, φ', φ̃, φ''  Scalar fluid property, its time average and fluctuation, and its Favre-averaged mean and fluctuation  [-]
ω̇_k  Reaction rate source (or sink)  [kg/(m^3·s)]
1 Introduction
Combustion is an exothermic process in which a considerable amount of heat and energy is released during the reaction of a fuel with an oxidizer (mostly air), producing combustion products.
The process requires a flammable cloud and an ignition source to set it off.
Gaseous combustion is a chemical reaction between fuel and oxidant that are both in the gas phase.
The two main categories of gaseous combustion are premixed and non-premixed flames. Premixed flames occur when fuel and oxidant are mixed at the molecular level prior to combustion, and an ignition source then causes the mixture to explode; examples include internal combustion engines and gas turbines. In contrast, in non-premixed flames the reactants are initially separated and the reaction takes place at the interface of fuel and oxidizer, for instance a fuel jet that enters air and burns there. These flames are also known as diffusion flames, since fuel and oxygen come separately to the combustion zone and mix by diffusion before reacting [1, 2].
A gas explosion is a combustion process in a premixed gas cloud that results in high pressure; this pressure depends on how fast the flame propagates and how it can expand from the gas cloud [3].
Explosion of a flammable cloud can be triggered by even a small ignition source, such as a spark, electrical shock, or friction, and may have catastrophic consequences: loss of human lives, damage to buildings and property, and so on. There are many examples of gas explosions with huge catastrophic consequences, such as coal mine explosions caused by natural gas, accidents in oil and gas production like the Piper Alpha accident in the North Sea (UK, 1988), and nuclear accidents like Three Mile Island (USA, 1979) and the Fukushima Daiichi disaster (Japan, 2011).
The explosion regime experienced, the flame speed, the strength, and other impacts of an explosion depend on diverse conditions of the flammable mixture, such as fuel type, inhomogeneity, and congestion, in addition to the presence of confinement, the aspect ratio (length to diameter or height) of the tube or channel, and the existence of turbulence in the flow [4].
Hydrogen as a fuel can form a flammable cloud when it mixes with air and, because of its low density, stratifies in air, producing spatial concentration gradients [4]. Such inhomogeneity of H2-air mixtures governs most real-world situations, rather than the homogeneous condition. Consideration of inhomogeneous H2-air clouds is therefore highly important for research purposes such as safety studies and transition analysis, whether through lab-scale experiments or numerical simulation. Explosions of homogeneous fuel-air clouds have been widely studied, both experimentally and numerically, whereas inhomogeneous flammable clouds have received much less experimental attention.
1.1 Objective of project
In this project, a literature study of flame propagation in inhomogeneous fuel-air clouds is reviewed, with focus on the inhomogeneous H2-air cloud, by evaluating the different regimes of flame acceleration from ignition, through transition to detonation, to detonation propagation in this mixture. CFD simulations are then performed to compare flame acceleration between homogeneous and inhomogeneous H2-air explosions, with further consideration of stoichiometric and non-stoichiometric conditions to observe their effects on flame propagation and other related variables.
1.2 Method
Laboratory-scale experiments are the most reliable way to assess the parameters of gas explosions, since they closely imitate real conditions, and they are used in many projects to predict, observe, and assess explosion effects. But laboratory equipment also has limitations: it is not always accessible, setting up experimental conditions can be costly for every combination of initial parameters, and in many cases obtaining results takes much time and energy.
Alternatively, numerical simulation with computer tools is a proper substitute for prediction and assessment of data: it is much faster to prepare, easier to set up, and cheaper to run than laboratory-scale experiments. Many software toolboxes show varying agreement with experimental data. Since the present work is an example of fluid interaction, a Computational Fluid Dynamics (CFD) toolbox has been chosen to evaluate the behaviour of the explosions.
For this purpose, OpenFOAM (Open-source Field Operation And Manipulation) has been selected as the CFD toolbox. The XiFoam solver has been applied, since it is suitable for premixed and partially premixed combustion with turbulent flow. In this solver, combustion is modelled with a flame wrinkling combustion model using a reaction progress variable, together with a chosen turbulence model. In the present simulations, the Gülder correlation is selected as a suitable model for the laminar flame speed. The study geometry is a 2D channel of 1700 × 100 mm², fully closed at the left, top, and bottom walls and fully open at the right end, without obstacles. The ignition source is located near the top of the channel at the left end wall.
1.3 Report structure
This report is organized as follows. Chapter 2 reviews the literature on flame propagation in inhomogeneous fuel-air clouds with specific focus on the H[2]-air mixture; it discusses the effect of inhomogeneity in the fuel-air cloud and the different flame propagation regimes for an inhomogeneous hydrogen-air mixture. Chapter 3 describes the finite volume method and the governing equations of the process, along with the turbulence and combustion models and the case study simulation setup. Chapter 4 introduces the simulation cases and illustrates their results by showing flame propagation in the channel, with further observation of front position, velocity, and pressure for each case. Chapter 5 discusses the simulation cases and compares them based on the simulation results, and finally chapter 6 concludes this work and suggests further work.
2 Flame propagation of fuel-air clouds
In industrial gas explosion accidents, flammable clouds form due to fuel release, mix with air as oxidizer and finally explode from an ignition source. The two modes of explosion are deflagration and detonation. Deflagration is the self-sustaining propagation of a localized combustion zone into the unburned gas at subsonic velocity, while a detonation wave is an explosion-driven shock wave that propagates into the unburned gas at supersonic velocity [1]. Explosion of H[2]-air at ambient initial temperature and pressure is easily reached, since the ignition energy required to explode hydrogen is extremely low compared with other flammable fuels [5].
There are two ignition mechanisms, depending on the ignition energy. The first is mild or weak ignition, where the flame starts from the ignition point and propagates through the fresh mixture in deflagration mode; here, diffusion of heat and species dominates flame propagation. The minimum ignition energy (MIE) for a hydrogen-air mixture at standard conditions depends on the hydrogen concentration and at stoichiometry is near 0.017 mJ [5], while the MIE for other combustible gases is around 0.2-0.3 mJ [6]. Therefore, ignition of hydrogen is easily caused by even a small spark, mechanical friction, etc., and is highly likely to occur. The second is the strong ignition mechanism, which needs high ignition energy and happens if a reflected shock is strong enough to lead to rapid auto-ignition. This causes direct explosion at the reflecting wall: a blast wave is produced and the explosion takes the direct detonation mode.
In the P-T diagram for H[2]-O[2] systems shown in figure 2.1, the extended second explosion limit crosses the region of temperature and pressure in which detonation is possible, between 12 and 70 vol% hydrogen. It separates weak ignition on the left side from strong ignition on the right side [7]. The crossed region shows the critical conditions for strong ignition and consequently for the onset of detonation in an H[2]-air mixture. As detonation experiments show, the ignition energy required for strong ignition and direct detonation is significantly high. This energy per surface area is 0.7 MJ/m^2 for a stoichiometric H[2]-air mixture, 3.1 MJ/m^2 for propane, and 10 MJ/m^2 for methane [8]. Hence, direct detonation in real industrial explosions is practically impossible, and a Deflagration to Detonation Transition (DDT) process is required for detonation. The explosion of a flammable H[2]-air cloud is classified as a premixed gas explosion, since hydrogen and air are mixed prior to combustion. Therefore, the behavior of premixed gas explosions is studied for H[2]-air explosions.
Previously, experiments and studies were widely performed on homogeneous and stoichiometric fuel-air clouds. For instance, the effects of the shape and distribution of obstructions on the generated explosion pressure were investigated for natural gases: methane-air by Moen et al. [9], and propane-air by Eckhoff et al. [10] and Hjertager et al. [11]. Different fuel concentrations of homogeneous clouds were also discussed by Hjertager et al. [12], who observed that the maximum pressure and flame speed occurred at slightly fuel-rich concentrations in methane-air and propane-air. But these experiments represent idealized scenarios, since in reality inhomogeneous conditions govern and fuel concentrations are non-uniform in air.
Figure 2.1. P-T diagram of H[2]-O[2] systems with the extended second explosion limit observed in experiments [7].
2.1 Inhomogeneity effect in fuel-air cloud
In contrast to the homogeneous fuel-air cloud condition, some experimental studies have considered inhomogeneity of the fuel-air mixture. A common simplification is to reduce the full three-dimensional concentration field to one-dimensional concentration gradients.
In many works these are classified as parallel or transverse (vertical) concentration gradients, according to their direction relative to the flame propagation direction. The first category is parallel concentration gradients, which have the same direction as flame propagation. This condition is highly relevant for nuclear reactors, where steam is in vertical tubes, the gradients lie in the same direction, and they interact with gravitational effects. The second is transverse or vertical concentration gradients, perpendicular to the flame propagation direction. These have been studied in many works, and a strong effect of the gradients on the Flame Acceleration (FA) process and on the possibility of DDT was observed, specifically in unobstructed configurations. In this section, observations of inhomogeneous fuel-air mixtures in previous studies are discussed.
Hjertager et al. [13] investigated methane-air clouds in a large-scale obstructed tube by simulating two types of pipe leakage arrangement, axial and radial, to generate inhomogeneous conditions. They showed that the explosion pressure depends strongly on the leakage arrangement, the mass of injected fuel, and the ignition delay time. The maximum explosion pressure occurred for the axial leak arrangement with stoichiometric conditions and a time delay of less than 50 seconds, while for the radial leak arrangement it occurred at sub-stoichiometric masses of methane. Under specific conditions, inhomogeneous methane-air clouds produce pressures as high as homogeneous cases, and for small methane masses inhomogeneous mixtures may even produce higher explosion pressures than homogeneous conditions. This is an important observation: inhomogeneity of the mixture may have worse effects in the real world than homogeneous ideal conditions. C. Wang et al. [14] investigated the effect of transverse concentration gradients in methane-air on flame propagation in a horizontal duct and observed that the time between methane leakage into the duct and ignition, defined as the ignition delay, strongly affects the flame shape and speed in a stratified methane-air mixture. They showed that stratified methane-air cannot be ignited at ignition delays of less than 3 minutes, but as the delay time increases from 4 to 15 minutes, the flame speed and overpressure increase monotonically, after which they remain constant. Furthermore, the overpressure at delays of 15-25 minutes was nearly the same as for homogeneous methane-air mixtures.
Since inhomogeneous conditions are the most probable for H[2]-air mixtures, in reality they appear with spatial, three-dimensional concentration gradients. These conditions were investigated experimentally in works such as Vollmer et al. [15, 16], Kuznetsov et al. [17, 18] and Boeck et al. [19, 20], and the results were compared with homogeneous mixtures. Vollmer et al. [16] investigated vertical concentration gradients in hydrogen-air mixtures and showed that they have a major influence on flame acceleration (FA) through changes in maximum velocity and pressure.
The peak overpressure at the end of the tube can increase by up to two orders of magnitude compared to homogeneous mixtures, so mixtures with vertical concentration gradients are more dangerous than homogeneous ones of the same hydrogen concentration. They also showed [15] that, depending on the geometrical configuration, DDT can happen at considerably lower or higher fuel concentrations. Furthermore, they concluded that one-dimensional parameters like blockage ratio and characteristic length scales are not sufficient to describe DDT in hydrogen-air mixtures with concentration gradients. Kuznetsov et al. [17, 18] investigated flame propagation regimes and maximum pressure loads considering the effects of hydrogen concentration gradients, layer thickness, presence of obstructions, and average and maximum hydrogen concentration. They observed three different regimes of horizontal flame propagation: slow (subsonic) flame, sonic deflagration, and detonation. Higher flame propagation velocity leads to higher pressure loads, and the maximum mixture reactivity and the ratio of obstacle spacing to layer thickness are the governing parameters of the propagation regimes.
Sommersel [21] studied hydrogen dispersion and the effect of inhomogeneous hydrogen explosions in long channels. Hydrogen leakage in partially confined spaces was investigated based on the ammonia plant explosion incident in Porsgrunn in 1985. In that work, the explosion overpressure was discussed as a function of mass flow rate, jet direction, time of ignition, and level of obstruction. It was observed that the hydrogen-air cloud behaves as a gravity current, and the dispersion of hydrogen is highly sensitive to the geometry considered; Froude scaling is a useful tool to analyze the effect of hydrogen explosions in such geometries. Furthermore, obstructed geometries significantly increase the overpressure in the system, while unobstructed geometries give lower pressures. Finally, it was concluded that two key parameters, the dispersion effect and the degree of obstruction, influence the strength of a hydrogen explosion. In addition, the shock wave propagates faster in a horizontal channel than in a vertical one.
Ettner et al. [22] also performed numerical simulations of inhomogeneous mixtures by means of density-based codes, and their validation for inhomogeneous H[2]-air was demonstrated in the OpenFOAM CFD toolbox [23, 24].
In the following sections, the different flame propagation regimes, flame acceleration, onset of detonation, and detonation, are described in detail, together with the effect of inhomogeneity in H[2]-air mixtures on them.
2.2 Flame acceleration
The flame acceleration process proceeds through a sequence of four main phases. In this section, these phases are presented in order, starting from ignition of the flammable mixture.
2.2.1 Laminar deflagration
After ignition, the flame front propagates laminarly into the mixture and the flame surface area enlarges. Enlargement of the flame surface area increases the reaction rate, which is the integral of the local burning velocity over the flame surface area; so, as the flame surface area enlarges, the flame accelerates. The governing propagation mechanism here is diffusion of heat and species, known as deflagration. Figure 2.2 shows the distribution of temperature, reaction rate and mixture concentration through a one-dimensional, stationary, laminar premixed deflagration wave.
Figure 2.2. 1D illustration of internal structure of laminar, premixed, stationary flame [25].
In figure 2.2 the unburned gas velocity 𝑈[𝑢] equals the laminar burning velocity 𝑆[𝐿], which is the characteristic velocity scale of laminar premixed combustion. The temperature 𝑇[𝑢] and concentration 𝐶[𝑢] of the fresh (unburned) mixture change across the flame thickness 𝛿[𝐿] to 𝑇[𝑏] and 𝐶[𝑏], respectively, in the burned region. In premixed combustion it is useful to define a progress variable 𝑐, where 𝑐 = 0 in the reactants and 𝑐 = 1 in the products. The heat of the chemical reaction is released mainly in a small region within the flame called the reaction zone, with characteristic thickness 𝛿[𝑅]. The laminar burning velocity 𝑆[𝐿] and laminar flame thickness 𝛿[𝐿] are thermochemical quantities, independent of geometry or local flow conditions.
A simple 2D illustration of laminar deflagration in closed channel can be seen in figure 2.3.
Figure 2.3. Simple 2D illustration of laminar flame propagation (left) and laminar flame front detail (right) [4].
In figure 2.3, the unstretched laminar burning velocity 𝑆[𝐿] is the flame propagation velocity relative to the mixture ahead of the flame, i.e. the unburned mixture. On the other hand, the product velocity behind the flame is 𝑆[𝐿]𝜎, where 𝜎 is the expansion ratio, the ratio of reactant density to product density:
𝜎 = 𝜌[𝑟𝑒]/𝜌[𝑝𝑟] (2.1)
For an external observer, the relationship between the flame speed 𝑆[𝐿]𝜎 and the laminar burning velocity is:
𝑆[𝐿]𝜎 = 𝑢 + 𝑆[𝐿] (2.2)
where 𝑢 is the velocity of the flow ahead of the flame. This equation can be written in terms of the flame surface area 𝐴[𝑓] and channel cross section 𝐴[𝑐] as [4]:
𝑆[𝐿]𝜎𝐴[𝑓] = 𝑢𝐴[𝑐] + 𝑆[𝐿]𝐴[𝑓] (2.3)
This shows that enlargement of the flame surface area increases the visible flame speed.
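The mass balance in equation (2.3) can be rearranged into a small numerical sketch: the flow velocity ahead of the flame is 𝑢 = 𝑆[𝐿](𝜎 − 1)𝐴[𝑓]/𝐴[𝑐], so the visible flame speed 𝑢 + 𝑆[𝐿] grows with the surface-area ratio. The function and numbers below are illustrative only, not values from this work.

```python
def visible_flame_speed(S_L, sigma, A_f, A_c):
    """Visible flame speed for an external observer, from eqs. (2.2)-(2.3)."""
    u = S_L * (sigma - 1.0) * A_f / A_c  # flow pushed ahead of the flame front
    return u + S_L                       # eq. (2.2): speed seen by the observer

# A planar flame (A_f == A_c) recovers the classical value S_L * sigma:
planar = visible_flame_speed(2.0, 7.0, 1.0, 1.0)    # 14.0 m/s
# Enlarging the flame surface area raises the visible speed:
wrinkled = visible_flame_speed(2.0, 7.0, 3.0, 1.0)  # 38.0 m/s
```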
Over the years, many analytical correlations have been proposed for the laminar flame speed as a function of equivalence ratio, pressure, and temperature [26, 27]. Among them, the most widely used and simplest form is the fully empirical power law, applied in many investigations [28-31] and expressed in equation (2.4):
𝑆[𝐿](𝜙, 𝑇[𝑢], 𝑃[𝑢]) = 𝑆[𝐿0] (𝑇[𝑢]/𝑇[0])^𝛼 (𝑃[𝑢]/𝑃[0])^𝛽 (2.4)
where 𝑆[𝐿0] is the velocity at equivalence ratio 𝜙 calculated at room conditions, i.e. 𝑇[𝑢] = 𝑇[0] and 𝑃[𝑢] = 𝑃[0], and 𝛼 and 𝛽 are mixture strength-dependent constants.
Gülder [27] suggested an empirical correlation for the laminar flame speed 𝑆[𝐿0]:
𝑆[𝐿0] = 𝑍𝑊𝜙^𝜂 exp[−𝜉(𝜙 − 1.075)^2] (2.5)
where 𝑊, 𝜂 and 𝜉 are constants for a given fuel and 𝑍 = 1 for single-constituent fuels.
So, substituting the Gülder correlation for 𝑆[𝐿0] into equation (2.4) gives the following correlation, which has been employed in this work as a suitable model of the laminar flame speed:
𝑆[𝐿] = 𝑊𝜙^𝜂 exp[−𝜉(𝜙 − 1.075)^2] (𝑇[𝑢]/𝑇[0])^𝛼 (𝑃[𝑢]/𝑃[0])^𝛽 (2.6)
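Equation (2.6) is straightforward to evaluate numerically. A minimal sketch is shown below; the constants 𝑊, 𝜂, 𝜉, 𝛼, 𝛽 are fuel-specific, and the values passed in the example are placeholders, not the hydrogen constants used in the simulations.

```python
from math import exp

def gulder_laminar_flame_speed(phi, T_u, P_u, W, eta, xi, alpha, beta,
                               T0=298.0, P0=101325.0):
    """Eq. (2.6): Gulder correlation for S_L0 combined with the
    power-law temperature/pressure scaling of eq. (2.4)."""
    S_L0 = W * phi**eta * exp(-xi * (phi - 1.075)**2)
    return S_L0 * (T_u / T0)**alpha * (P_u / P0)**beta

# At reference conditions the scaling factor is 1, so S_L == S_L0;
# at phi = 1.075 the exponential term equals 1 as well (placeholder constants).
S_ref = gulder_laminar_flame_speed(1.075, 298.0, 101325.0,
                                   W=1.0, eta=0.0, xi=1.0, alpha=2.0, beta=-0.5)
```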
The characteristic length scale (thickness) 𝛿[𝐿] and chemical time scale 𝜏[𝐿] of the laminar flame can be obtained from the laminar flame speed 𝑆[𝐿] [25]:
𝛿[𝐿] = 𝜈/𝑆[𝐿] ; 𝜏[𝐿] = 𝜈/𝑆[𝐿]^2 (2.7)
where 𝜈 is the kinematic viscosity of the flow. Here it is assumed that viscosity and diffusivity are approximately equal or vary similarly.
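Equation (2.7) then gives the laminar flame thickness and chemical time scale directly from 𝑆[𝐿] and 𝜈. A sketch with illustrative order-of-magnitude numbers, not values from this work:

```python
def laminar_flame_scales(S_L, nu):
    """Eq. (2.7): flame thickness delta_L and chemical time scale tau_L."""
    delta_L = nu / S_L        # m
    tau_L = nu / S_L**2       # s
    return delta_L, tau_L

# e.g. nu = 2e-5 m^2/s and S_L = 2 m/s give delta_L = 1e-5 m, tau_L = 5e-6 s
delta_L, tau_L = laminar_flame_scales(2.0, 2e-5)
```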
For hydrogen, Konnov [32] also introduced a correlation for the unstretched laminar burning velocity 𝑆[𝐿] as a 6th-order polynomial in the molar concentration of H[2].
So, the explosion of a fuel-air cloud starts with the laminar deflagration regime, in which the flame front is undistorted and smooth. But after a short time the flame front becomes distorted and instabilities grow strongly. The laminar deflagration regime is therefore short and can be neglected compared to the total duration of the explosion process.
2.2.2 Cellular flame propagation
As mentioned in the previous section, shortly after ignition fuel-air flames tend to become unstable and distorted. As instabilities grow and the flame front distorts, the flame surface area increases; this behavior is known as cellular flames. The flow still remains laminar, so cellular flames are classified as laminar flames. This instability occurs in H[2]-air explosions especially under lean conditions. The regime has been observed in experiments [4] and explained as a dynamic process in many works [33-35]. The instabilities strengthen the overall reaction rate and accelerate the flame. The cellular length scale, known as the wavelength, initially decreases after ignition [36, 37]. In lean mixtures, separated flames with local quenching can be seen, while in stoichiometric and rich mixtures the flames are symmetric and there is no local quenching [4]. So, with increasing fuel concentration, the wavelength of cellularity grows and the stability of the flame front increases.
Instability and distortion are due to two main mechanisms. The hydrodynamic (Landau-Darrieus) instability [38, 39], which describes flame wrinkling, is based on local acceleration and deceleration of the flame in two different sections. These sections are produced by convergence or divergence of the streamlines and expansion across the flame, which shape the convex and concave regions, respectively. The diffusive-thermal instability [6] is another mechanism, acting along with the hydrodynamic instability and strengthening or weakening the flame wrinkling. Here, diffusion of heat interacts with diffusion of species. If diffusion of species increases the concentration of reactants in a convex section, it results in higher reaction rates there. The local temperature rise, in turn, depends on thermal diffusion. At low thermal diffusivity, the enhanced species concentration combines with weak heat flux to produce a high-temperature region; the burning velocity is thus increased in convex and decreased in concave sections, which strengthens the flame wrinkling. At high thermal diffusivity, on the other hand, the strong heat flux balances the burning velocities between convex and concave sections and consequently reduces the flame wrinkling. With the Lewis number 𝐿[𝑒] as in equation (2.8), the effect of the diffusive instability on flame wrinkling can be described.
𝐿[𝑒] = 𝑎/𝐷 (2.8)
The Lewis number is the ratio of the thermal diffusivity 𝑎 to the mass diffusivity 𝐷 of the limiting species (the fuel) in the mixture. Figure 2.4 shows the stabilizing and destabilizing effect of the Lewis number on regions of flame curvature. A Lewis number below unity strengthens the flame wrinkling and results in flame instability, while a Lewis number above unity weakens the wrinkling and makes the flame stable. In H[2]-air mixtures, due to the high diffusivity of H[2], there is a strong tendency toward cellular flame propagation.
Also, in H[2]-air mixtures the Lewis number can be determined experimentally as a function of equivalence ratio [40] and exhibits the transition from stability to instability close to stoichiometric conditions.
Figure 2.4. Effect of Lewis number on flame stability [25].
Markstein investigated cellular flame propagation [41] by introducing the Markstein length 𝐿[𝑀], which defines the effect of the flame stretch rate on the local burning velocity:
𝑆[𝐿] − 𝑆[𝐿,𝑆] = 𝐿[𝑀]𝑘 (2.9)
Here 𝑆[𝐿,𝑆] is the stretched flame burning velocity, and the Karlovitz stretch factor 𝑘 describes the normalized rate of change of the flame surface area [42]:
𝑘 = (1/𝐴[𝑓]) 𝑑𝐴[𝑓]/𝑑𝑡 (2.10)
The Markstein length 𝐿[𝑀] in H[2]-air mixtures can be determined experimentally as a function of equivalence ratio [43, 44], as shown in figure 2.5. It can be observed that below stoichiometry of the H[2]-air mixture the Markstein length 𝐿[𝑀] is negative, and above it 𝐿[𝑀] is positive.
Flame instability is strengthened for negative 𝐿[𝑀], since a positive (negative) stretch rate then increases (decreases) the local flame velocity, as described in equation (2.9). Conversely, flame instability is weakened for positive 𝐿[𝑀].
The dimensionless Markstein number 𝑀[𝑎], expressed in equation (2.11), also describes the effect of flame curvature and strain [41]:
𝑀[𝑎] = 𝐿[𝑀]/𝛿[𝐿] (2.11)
Finally, it can be concluded that for 𝐿[𝑒] < 1 and 𝑀[𝑎] < 0 the instability increases and flame wrinkling is enhanced, while for 𝐿[𝑒] > 1 and 𝑀[𝑎] > 0 there is less instability and flame wrinkling is damped.
The experimental sequences [4] in figure 2.6 show that for homogeneous H[2]-air mixtures, as the H[2] concentration increases from 15 to 40 vol%, the Markstein length 𝐿[𝑀] develops from negative to positive values. For lean mixtures up to 20 vol%, separated flame islands with quenching can be seen, but at higher concentrations there is no local quenching and the flame front gains stability.
Figure 2.5. Experimental values of the Markstein length 𝐿[𝑀] in H[2]-air as a function of equivalence ratio. Sources: black dots [43] and white squares [44].
Figure 2.6. Sequences of cellular flames of homogeneous H[2]-air mixtures with increasing H[2] concentration and the corresponding Markstein length 𝐿[𝑀] [4].
So, through flame front distortion and cellular flame propagation, the overall reaction rate increases, supporting the FA process.
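The combined 𝐿[𝑒]/𝑀[𝑎] stability criterion above amounts to a simple classification. The sketch below is a qualitative restatement of that criterion for illustration, not a quantitative model:

```python
def flame_front_stability(Le, Ma):
    """Qualitative diffusive-thermal stability from Lewis number Le and
    Markstein number Ma, per the criterion stated in the text."""
    if Le < 1 and Ma < 0:
        return "unstable: wrinkling enhanced"
    if Le > 1 and Ma > 0:
        return "stable: wrinkling damped"
    return "intermediate"

# Lean H2-air (highly diffusive fuel): Le < 1 and a negative Markstein length
lean = flame_front_stability(0.4, -0.5)
rich = flame_front_stability(1.5, 0.8)
```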
2.2.3 Slow turbulent deflagration
In closed channels with end-wall ignition, the flame acts like a piston that pushes the fresh mixture ahead of it in the propagation direction. This leads to high flow velocity and hence high Reynolds number ahead of the flame, forming and strengthening the chaotic flow condition known as turbulence. This happens mostly in wall boundary layers and near any obstructions in the geometry. In the slow regime, the propagating flame encounters flow behavior different from laminar but is still in deflagration mode, controlled almost entirely by subsonic fluid-mechanical processes.
The dimensionless Reynolds number measures the relative magnitude of inertial and viscous forces and is defined from the characteristic velocity scale 𝑈 and characteristic length scale 𝐿 of the mean flow and the kinematic viscosity 𝜈 of the flow:
ℛ[𝑒] = 𝑈𝐿/𝜈 (2.12)
In turbulence there are spatial velocity fluctuations, and the Reynolds decomposition describes the local flow velocity 𝑢 by a time-averaged flow velocity 𝑢̅ and a velocity fluctuation 𝑢′ [45]:
𝑢 = 𝑢̅ + 𝑢′ (2.13)
Turbulent flows exhibit rotational flow structures called turbulent eddies, with a wide range of length scales [2]. The mean size of the large eddies is known as the integral scale. In these flows, the characteristic velocity scale 𝜗 and characteristic length scale ℓ[𝑇] of the large eddies are of the same order as the velocity scale 𝑈 and length scale 𝐿 of the mean flow. The large-eddy (integral) length scale ℓ[𝑇] and time scale 𝜏[𝑇] can be defined as in equation (2.14):
ℓ[𝑇] = 𝑢′^3/𝜀 ; 𝜏[𝑇] = ℓ[𝑇]/𝑢′ (2.14)
where 𝜀 is the dissipation rate of turbulent kinetic energy and 𝑢′ is the characteristic velocity fluctuation at the integral length scale.
The Reynolds number based on large-eddy scales is therefore close in value to the Reynolds number based on mean-flow values, showing that the large eddies are dominated by inertial effects while viscous effects are of less importance [2].
The largest eddies extract energy from the mean flow through a stretching process in which mean-flow velocity gradients distort the turbulent eddies. Smaller eddies are in turn stretched strongly by the larger eddies and more weakly by the mean flow, so turbulent kinetic energy is transferred from large eddies to smaller and smaller ones until it is dissipated by viscosity. This overall transfer of turbulent kinetic energy from the mean flow to large eddies and on to smaller eddies is known as the kinetic energy cascade. The smallest eddies in turbulent flows are dominated by viscous effects and have length scales of 0.1 to 0.01 mm. The Reynolds number of the smallest eddies, based on their characteristic velocity 𝜐 and length scale ℓ[𝜂], equals 1 [2]:
ℛ[𝑒][𝜂] = 𝜐ℓ[𝜂]/𝜈 = 1 (2.15)
This means that for the smallest eddies, viscous and inertial effects are of equal strength. These smallest scales are named the Kolmogorov microscales [46, 47]. At the microscales, viscous stresses dominate and the associated energy is dissipated into thermal internal energy; this dissipation is the main energy loss in turbulent flows. The Kolmogorov microscales of length ℓ[𝜂], velocity 𝜐 and time 𝜏[𝜂] are expressed in terms of the energy dissipation rate 𝜀 of the turbulent flow and the fluid kinematic viscosity 𝜈:
ℓ[𝜂] ≈ (𝜈^3/𝜀)^(1/4) ; 𝜐 ≈ (𝜈𝜀)^(1/4) ; 𝜏[𝜂] ≈ (𝜈/𝜀)^(1/2) (2.16)
At high flow velocity the dissipation rate 𝜀 increases and consequently the micro length scale ℓ[𝜂] decreases; a high-velocity flow therefore has smaller smallest eddies than a low-velocity flow.
Finally, the ratios of small-scale to large-scale characteristics can be expressed in terms of the Reynolds number ℛ[𝑒][ℓ] of the integral length scale as follows [48]:
Length scale ratio: ℓ[𝜂]/ℓ[𝑇] = ℛ[𝑒][ℓ]^(−3/4) (2.17a)
Time scale ratio: 𝜏[𝜂]/𝜏[𝑇] = ℛ[𝑒][ℓ]^(−1/2) (2.17b)
Velocity scale ratio: 𝜐/𝜗 = ℛ[𝑒][ℓ]^(−1/4) (2.17c)
Typical values of ℛ[𝑒][ℓ] are 10^3−10^6 [2].
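Equations (2.17a-c) quantify the scale separation directly. A short sketch, using a Reynolds number in the typical range quoted above:

```python
def kolmogorov_scale_ratios(Re_l):
    """Eqs. (2.17a-c): small-to-large scale ratios vs. integral-scale Re."""
    return {
        "length":   Re_l ** -0.75,  # l_eta / l_T
        "time":     Re_l ** -0.5,   # tau_eta / tau_T
        "velocity": Re_l ** -0.25,  # upsilon / large-eddy velocity
    }

ratios = kolmogorov_scale_ratios(1e4)
# At Re_l = 1e4 the Kolmogorov eddies are 1000x smaller than the integral scale
```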
In a turbulent premixed flame, the turbulent propagation speed, or turbulent burning velocity 𝑆[𝑇], is related not only to the characteristics of the flow but also to the properties of the fuel-oxidizer mixture. Many correlations relating the turbulent burning velocity to the laminar flame speed have been proposed for the different regimes of turbulent premixed flames. Damköhler [49] first introduced the turbulent burning velocity theoretically for two different regimes, according to the magnitude of the turbulence scale compared with the laminar flame thickness. He assumed that for turbulence scales larger than the laminar flame thickness, the interaction of the wrinkled flame front with the turbulent field is independent of length scales and purely kinematic. This corresponds to the corrugated flamelet regime shown in figure 2.7. He also expressed the mass rate 𝑚̇ in terms of the laminar and turbulent velocities:
𝑚̇ = 𝜌[𝑢]𝑆[𝐿]𝐴[𝑓,𝑇] = 𝜌[𝑢]𝑆[𝑇]𝐴[𝑓,𝐿] (2.18)
where 𝜌[𝑢] is the unburned mixture density and 𝐴[𝑓,𝑇] and 𝐴[𝑓,𝐿] are the turbulent and laminar flame surface areas, respectively.
So, the ratio of turbulent to laminar burning velocity is:
𝑆[𝑇]/𝑆[𝐿] = 𝐴[𝑓,𝑇]/𝐴[𝑓,𝐿] (2.19)
For large-scale, weak-intensity turbulence, Damköhler expressed this ratio by a geometrical approximation with a Bunsen flame:
𝐴[𝑓,𝑇]/𝐴[𝑓,𝐿] = (𝑆[𝐿] + 𝑢′)/𝑆[𝐿] (2.20)
𝑆[𝑇]/𝑆[𝐿] = 1 + 𝑢′/𝑆[𝐿] (2.21)
where 𝑢′ is the characteristic fluctuation velocity in the unburned mixture.
For strong turbulence, 𝑢′/𝑆[𝐿] >> 1, so equation (2.21) reduces to:
𝑆[𝑇] ≈ 𝑢′ (2.22)
Many works revised and updated the Damköhler analysis, such as Clavin and Williams [50] and Pope and Anand [51]. Among them, Gülder [52] expressed the turbulent-to-laminar flame velocity ratio based on the smallest eddies, and this form is employed in the present work:
𝑆[𝑇]/𝑆[𝐿] = 1 + 0.62 √(𝑢′/𝑆[𝐿]) ℛ[𝑒][𝜂] (2.23)
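Equation (2.23) can be evaluated as below. The 0.62 coefficient matches the default flame-wrinkling coefficient of the Gülder-type model in OpenFOAM's XiFoam, but the exact grouping of terms under the square root as reconstructed here should be checked against reference [52]; treat this as an illustrative sketch.

```python
from math import sqrt

def gulder_velocity_ratio(u_prime, S_L, Re_eta, coef=0.62):
    """Eq. (2.23) as written above: S_T/S_L = 1 + coef*sqrt(u'/S_L)*Re_eta."""
    return 1.0 + coef * sqrt(u_prime / S_L) * Re_eta

# Vanishing turbulence (u' -> 0) recovers S_T = S_L, i.e. a ratio of 1
ratio = gulder_velocity_ratio(4.0, 1.0, 1.0)  # 1 + 0.62*2*1 = 2.24
```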
To summarize the turbulence-flame interaction, a combustion regime diagram with length and velocity scales was introduced by Borghi [53] and revised by Peters [54, 55], as shown in figure 2.7. Another definition of the turbulent combustion regimes, using Reynolds and Damköhler numbers, was proposed by Williams [56].
The turbulent Reynolds number based on the integral length scale is expressed as:
ℛ[𝑒][ℓ] = 𝑢′ℓ[𝑇]/𝜈 = (𝑢′/𝑆[𝐿])(ℓ[𝑇]/𝛿[𝐿]) (2.24)
Also, to investigate the interaction between turbulence and flame, Karlovitz [57] introduced two dimensionless numbers for turbulent deflagration:
𝐾[𝑎] = (𝛿[𝐿]/ℓ[𝜂])^2 ; 𝐾[𝑎𝛿] = (𝛿[𝑅]/ℓ[𝜂])^2 (2.25)
𝐾[𝑎] and 𝐾[𝑎𝛿] relate the laminar flame thickness 𝛿[𝐿] and the heat-release zone thickness 𝛿[𝑅], respectively, to the Kolmogorov length scale ℓ[𝜂].
The turbulent combustion regime diagram in figure 2.7 uses these three dimensionless numbers. The lines ℛ[𝑒] = 1, 𝐾[𝑎] = 1 and 𝐾[𝑎𝛿] = 1 are the transition boundaries between the different turbulent combustion regimes.
The boundary line ℛ[𝑒] = 1 separates the turbulent flame regimes from the laminar flame regime. The laminar flame regime is characterized by ℛ[𝑒] < 1, weak turbulence intensity and small turbulence scale, and the flame front propagates at speed 𝑆[𝐿].
In this diagram, the wrinkled and corrugated flamelet regimes occur when the large eddies are larger than the laminar flame thickness (ℓ[𝑇] > 𝛿[𝐿]) and interact with the flame front, resulting in macroscopic enlargement of the flame surface area. The structure of the flame front remains that of a laminar flame and the local burning velocity still equals the laminar burning velocity; the local transport of heat and species is not changed by the large eddies. The boundary line 𝑢′ = 𝑆[𝐿] separates these two flamelet regimes from each other.
In the wrinkled flamelet regime, the flame thickness is much smaller than the Kolmogorov length scale, the flame keeps its laminar structure, and turbulence only wrinkles the flamelet surface slightly. This regime is characterized by ℛ[𝑒] > 1, 𝐾[𝑎] < 1 and 𝑢′/𝑆[𝐿] < 1.
Above the dashed line 𝑢′ = 𝑆[𝐿] lies the corrugated regime: since 𝐾[𝑎] < 1 the flame still maintains its laminar structure, but the larger fluctuations (𝑢′/𝑆[𝐿] > 1) lead to the formation of islands of unburned and burned mixture.
Above the boundary line 𝐾[𝑎] = 1 is the reaction sheet regime, where transport of heat and species in the flame front is enhanced, so the local burning velocity is higher than the laminar burning velocity. This regime is characterized by ℛ[𝑒] > 1, 𝐾[𝑎] > 1 and 𝐾[𝑎𝛿] < 1. At the lower boundary 𝐾[𝑎] = 1, where 𝛿[𝐿] ≈ ℓ[𝜂], even the smallest eddies behave as flamelet eddies, while for 𝐾[𝑎] > 1, where 𝛿[𝐿] > ℓ[𝜂], the smallest eddies can penetrate the flame front structure and increase the rate of heat and mass transfer, which is otherwise due to diffusion alone.
Above the boundary line 𝐾[𝑎𝛿] = 1 is the well-stirred reactor regime, with ℛ[𝑒] > 1 and 𝐾[𝑎𝛿] > 1. In this regime the heat-release zone is thicker than the smallest eddies, so turbulence has a strong effect and Kolmogorov eddies enter the structure of the reaction zone. The resulting high diffusivity transfers heat from the heat-release zone to the preheat zone and can cause local flame extinction. The ability of turbulent eddies to penetrate the heat-release zone of the laminar flame is quantified by 𝐾[𝑎𝛿]; for 𝐾[𝑎𝛿] > 1 the chemical reaction cannot be completed within one eddy turnover. Local flame quenching can also occur when reacting gas mixes with cold reactants. This defines an upper boundary for the turbulent burning velocity, which in safety analyses is taken as 10 times the laminar burning velocity [58].
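The boundaries ℛ[𝑒] = 1, 𝑢′/𝑆[𝐿] = 1, 𝐾[𝑎] = 1 and 𝐾[𝑎𝛿] = 1 described above amount to a simple decision tree. The sketch below classifies a point of the Borghi-Peters diagram; it is a simplification for illustration, not a substitute for the diagram itself:

```python
def combustion_regime(Re, Ka, Ka_delta, u_ratio):
    """Classify a premixed turbulent combustion regime from the boundary
    lines of the Borghi-Peters diagram (u_ratio = u'/S_L)."""
    if Re < 1:
        return "laminar flame"
    if Ka_delta > 1:
        return "well-stirred reactor"
    if Ka > 1:
        return "reaction sheet"
    if u_ratio > 1:
        return "corrugated flamelet"
    return "wrinkled flamelet"

regimes = [
    combustion_regime(0.5, 0.0, 0.0, 0.1),    # laminar flame
    combustion_regime(100, 0.5, 0.1, 0.5),    # wrinkled flamelet
    combustion_regime(100, 0.5, 0.1, 5.0),    # corrugated flamelet
    combustion_regime(1e3, 5.0, 0.5, 5.0),    # reaction sheet
    combustion_regime(1e4, 50.0, 5.0, 10.0),  # well-stirred reactor
]
```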
Figure 2.7. Different turbulent combustion regime diagram for premixed mixture [55].
Experimental sequences of slow deflagration of homogeneous H[2]-air mixtures were observed by Boeck [4]. In the unobstructed configuration, turbulence in the fresh mixture ahead of the flame front was confined by the wall boundaries; turbulent eddies wrinkled the flame front and made it corrugated adjacent to the walls, while it remained compact in the center of the channel.
The turbulence-flame interaction increases the burning velocity up to 10 times the laminar burning velocity, so the reaction rate increases significantly and generates flow in the fresh mixture ahead of the flame.
Finally, it can be concluded that in premixed combustion, turbulence wrinkles and stretches the laminar flame structure, increasing the flame surface area and hence the effective flame speed. Large turbulent eddies wrinkle and corrugate the flame, and this deformation raises the speed. Small turbulent eddies, on the other hand, may penetrate and change the laminar flame structure if they are smaller than the laminar flame thickness [27].
2.2.4 Fast turbulent deflagration
When the flame speed reaches the sound speed of the reactants, the fast turbulent regime occurs. In this regime flow compressibility gains importance, and constant acceleration can be seen.
Here, flame propagation is characterized mostly by gas-dynamic discontinuities such as shocks, precompression of the fresh mixture, and their interaction with the flame. Experiments show that at high H[2] concentration, shocks start to form at flame speeds of 300-400 m/s, and the transition from slow to fast deflagration with shock formation is observed due to reflection of pressure waves [4]. The overall pressure wave then propagates faster than the sound speed of the mixture. In the presence of obstacles, shock formation occurs earlier, due to pressure wave reflection, than in the unobstructed configuration. Increasing the fuel concentration strengthens the reflected shocks that interact with the flame, an observation that is important for the FA process.
Flame-shock interaction has been discussed in many works [59, 60]. This interaction causes distortion and wrinkling of the flame through the Richtmyer-Meshkov (RM) instability [61], which appears when a shock interacts with the interface between reactants and products [62]. In experiments by Thomas et al. [63], strong shock-flame interaction was observed to accelerate the flame, and Khokhlov et al. [64] showed that it is essential for reaching the onset of detonation. Large-scale RM instability is the main mechanism for increasing the heat release rate through macroscopic enlargement of the flame surface area during a single shock-flame interaction, while small-scale instability has less effect since it decays rapidly [64]. In the FA process, shock-flame interactions occur continuously.
Thus, pressure waves propagating into the fresh mixture during acceleration coalesce into shocks that compress and heat the fresh mixture, and shock-flame interaction increases the reaction rate. As the
reaction rate rises through this flow generation, together with shock and turbulence formation, the flame accelerates into the fast deflagration regime, also known as strong acceleration. In an H[2]-air explosion,
fast deflagration produces velocities on the order of 1000 m/s with overpressures up to about 10 bar, while weak acceleration yields velocities on the order of 100 m/s [4]. Throughout this acceleration, the
flame-front velocity relative to the gas ahead of the front remains subsonic, and the maximum deflagration velocity is often close to the sound speed of the reaction products.
2.2.5 Flame acceleration in inhomogeneous conditions of the H[2]-air mixture
Recent experiments [4] investigated the explosion of inhomogeneous H[2]-air mixtures with vertical concentration gradients within the channel and compared them with homogeneous
conditions. The inhomogeneous H[2]-air mixture was obtained by introducing a diffusion time 𝑡[𝑑], defined as the time between H[2] injection and mixture ignition. Here, 𝑡[𝑑] = 60 seconds represents the homogeneous condition,
while shorter diffusion times (𝑡[𝑑] = 3, 5, 7.5, 10 seconds) produce inhomogeneous conditions. The effect of hydrogen inhomogeneity on the FA process was investigated through two phenomena, in both
unobstructed and obstructed channels:
2.2.5.1 Flame shape and structure
a) Unobstructed channel: After ignition, the flame front of an inhomogeneous H[2]-air mixture is inclined, while for a homogeneous mixture it behaves almost symmetrically (not entirely, due to
buoyancy effects) with respect to the channel centerline. The flame cannot propagate into the mixture at the bottom of the channel where the local
H[2] concentration falls below a certain value. Moreover, in the inhomogeneous mixture the wavelength of flame-front cellularity changes from large cells at the top to smaller ones at the bottom, in accordance with the local
Markstein length along the concentration-gradient profile. The lower flammability limit in inhomogeneous mixtures can be around 6-8 vol% H[2] for concentrations up to 20 vol%, which lies outside the limits
[65] for horizontal and downward flame propagation, so combustion is incomplete. The lower flame boundary in the channel is therefore straight and does not propagate further toward the bottom. In this
configuration, the inhomogeneous mixture shows significantly stronger flame acceleration than the homogeneous mixture because of greater elongation and the associated increase in flame surface area, which raises
the overall reaction rate. For homogeneous mixtures, a comparable surface-area enlargement can only be obtained with obstacles.
The flame shape depends strongly on the inhomogeneity: at constant H[2] concentration, steeper gradients (shorter diffusion times) produce more flame elongation, showing that mixture inhomogeneity increases
elongation. Under similar inhomogeneous conditions, at low H[2] concentrations (lean mixtures) the flame front accelerates irregularly and even oscillates, which can prevent elongation. As the H[2]
concentration increases, the maximum local flame speed occurs at the top of the channel, where the flame propagates fastest.
Furthermore, the flame speed 𝑆[𝐿]𝜎 affects flame elongation. In regions of high local H[2] concentration the reactant density 𝜌[𝑟𝑒] is low, so the mixture ahead of the flame accelerates faster and flame
elongation is enhanced. In addition, as the H[2] concentration and hence the local reactant sound speed increase, the slow deflagration regime appears at the top of the channel while the fast
deflagration regime appears at the bottom. Curved shocks and their reflections are therefore observed at the bottom, but disappear at the top of the channel because of the lower local shock Mach number.
Thus, for inhomogeneous H[2]-air mixtures in the unobstructed configuration, the flame elongates significantly and FA is influenced by the mixture properties and the macroscopic flame shape.
b) Obstructed channel: In a homogeneous H[2]-air mixture, the flame is symmetric upstream in the channel with a slot in the flame tip [66], whereas in an inhomogeneous mixture the flame is inclined upstream of the
obstacle and reaches it at the top first. Downstream of the obstacle, the flame spreads toward the bottom of the channel and becomes nearly symmetric. With multiple obstacles, by the time the flame reaches the upstream side of the last obstacle,
the flame fronts of homogeneous and inhomogeneous mixtures become almost identical. Moreover, increasing the blockage ratio considerably suppresses flame elongation for both homogeneous and inhomogeneous
mixtures, and their flame shapes become similar.
The obstructed configuration therefore quenches flame elongation significantly compared with the unobstructed one. Consequently, the enlargement of the flame surface area and the large increase in reaction rate that drive
a strong FA process take place only in the unobstructed configuration.
2.2.5.2 Flame velocity
a) Unobstructed channels: Inhomogeneous mixtures undergo stronger FA than homogeneous ones in all phases of the process, so they reach the critical condition for the onset of detonation faster, while
homogeneous mixtures show slow flame propagation without significant FA progress toward DDT. The maximum local flame speed is therefore much higher for inhomogeneous mixtures than for homogeneous ones at the same
H[2] concentration.
Fine tuning MOEA/D configurations using MOEADr and irace
Claus Aranha, Felipe Campelo
For this example, we adapt the tuning protocol proposed by Bezerra et al. (2016), employing the Iterated Racing procedure by Lopez-Ibanez et al. (2016). Using the irace package, we automatically
assemble and fine-tune a MOEA/D configuration based on the components available in the MOEADr package.
Fine tuning setup
Ten unconstrained test problems from the CEC2009 competition are used, with dimensions ranging from 20 to 60. Dimensions 30, 40 and 50 were reserved for testing, while all others were used for the
training effort. To quantify the quality of the set of solutions returned by a candidate configuration we use the Inverted Generational Distance (IGD). The number of subproblems was fixed as \(100\)
for \(m=2\) and \(150\) for \(m=3\).
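The Simplex-Lattice Design (SLD) used later can only produce lattice-sized sets of weight vectors, \(N = \binom{H+m-1}{m-1}\), which is why the runner code below uses \(H = 99\) (giving exactly 100) and \(H = 16\) (giving 153 rather than 150). A quick sanity check of that formula (in Python, for illustration only — the vignette itself is written in R):

```python
from math import comb

# Simplex-Lattice Design: H divisions over m objectives yields
# N = C(H + m - 1, m - 1) weight vectors.
def sld_count(H, m):
    return comb(H + m - 1, m - 1)

print(sld_count(99, 2))  # → 100 subproblems for m = 2
print(sld_count(16, 3))  # → 153 subproblems for m = 3
```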
We define a tuning budget of 20,000 runs. The possible configurations are composed from the following choices:
• Decomposition Strategy: SLD or Uniform;
• Scalar Aggregation function: WT, PBI or AWT;
• Type of neighborhood: by weights or by incumbent solutions;
• Type of Update: Standard, Restricted, or Best Subproblem;
• Variation Stack: See below;
For every combination, the parameters of each component (e.g. \(\theta^{pbi}\) for the PBI aggregation function) were also included as part of the tuning experiment. Objective scaling was employed in
all cases. No constraint handling was required, and the stop criterion was set as 100,000 evaluations.
The variation stack was composed of three to five operators, using the following rules: the first and second operators could be any of the "traditional" operators currently available in the MOEADr
package: SBX, Polynomial Mutation, Differential Mutation, and Binomial Recombination. The third operator could be any of these, or "none" (i.e., not present). The fourth operator could be either a
local search operator or "none". Finally, the variation stack always finished with the truncation repair operator (mainly to avoid errors with the implementation of the test functions). No
restrictions were placed on repeats of the variation operators, and the specific conditional parameters for each operator were allowed to be tuned independently for each position in the variation stack.
irace configuration
Here we describe the necessary setup to run the experiment above. First, loading the necessary packages and basic configuration of irace. (parameters.txt, forbidden.txt and other files are only
included in the source version of the MOEADr package)
scenario <- irace::defaultScenario()
scenario$seed <- 123456 # Seed for the experiment
scenario$targetRunner <- "target.runner" # Runner function (def. below)
scenario$forbiddenFile <- "../inst/extdata/forbidden.txt" # forbidden configs
scenario$debugLevel <- 1
scenario$maxExperiments <- 20000 # Tuning budget
scenario$testNbElites <- 7 # test all final elite configurations
# Number of cores to be used by irace (set with caution!)
nc <- parallel::detectCores() - 1
scenario$parallel <- nc
# Read tunable parameter list from file
parameters <- readParameters("../inst/extdata/parameters.txt")
Second, it is necessary to generate the training instances based on the benchmark function implementations in package smoof:
### Build training instances
fname <- paste0("UF_", 1:10)
dims <- c(20:29, 31:39, 41:49, 51:60) # training: 20-60 except 30, 40, 50
allfuns <- expand.grid(fname, dims, stringsAsFactors = FALSE)
scenario$instances <- paste0(allfuns[,1], "_", allfuns[,2])
for (i in 1:nrow(allfuns)){
assign(x = scenario$instances[i],
value = make_vectorized_smoof(prob.name = "UF",
dimensions = allfuns[i, 2],
id = as.numeric(strsplit(allfuns[i, 1], "_")
[[1]][2]))) }
### Build test instances
dims <- c(30, 40, 50)
allfuns <- expand.grid(fname, dims, stringsAsFactors = FALSE)
scenario$testInstances <- paste0(allfuns[,1], "_", allfuns[,2])
for (i in 1:nrow(allfuns)){
assign(x = scenario$testInstances[i],
value = make_vectorized_smoof(prob.name = "UF",
dimensions = allfuns[i, 2],
id = as.numeric(strsplit(allfuns[i, 1], "_")
[[1]][2]))) }
Third, we need to specify the code that will generate a MOEA/D configuration based on the parameter string created by the irace routine:
target.runner <- function(experiment, scenario){
conf <- experiment$configuration
inst <- experiment$instance
# Assemble moead input lists
## 1. Problem
fdef <- unlist(strsplit(inst, split = "_"))
uffun <- smoof::makeUFFunction(dimensions = as.numeric(fdef[3]),
id = as.numeric(fdef[2]))
fattr <- attr(uffun, "par.set")
problem <- list(name = inst,
xmin = fattr$pars$x$lower,
xmax = fattr$pars$x$upper,
m = attr(uffun, "n.objectives"))
## 2. Decomp
decomp <- list(name = conf$decomp.name)
if (problem$m == 2){ # <-- 2 objectives
if(decomp$name == "SLD") decomp$H <- 99 # <-- yields N = 100
if(decomp$name == "Uniform") decomp$N <- 100
} else { # <-- 3 objectives
if(decomp$name == "SLD") decomp$H <- 16 # <-- yields N = 153
if(decomp$name == "Uniform") decomp$N <- 150
}
## 3. Neighbors
neighbors <- list(name = conf$neighbor.name,
T = conf$T,
delta.p = conf$delta.p)
## 4. Aggfun
aggfun <- list(name = conf$aggfun.name)
if (aggfun$name == "PBI") aggfun$theta <- conf$aggfun.theta
## 5. Update
update <- list(name = conf$update.name,
UseArchive = conf$UseArchive)
if (update$name != "standard") update$nr <- conf$nr
if (update$name == "best") update$Tr <- conf$Tr
## 6. Scaling
scaling <- list(name = "simple")
## 7. Constraint
constraint<- list(name = "none")
## 8. Stop criterion
stopcrit <- list(list(name = "maxeval",
maxeval = 100000))
## 9. Echoing
showpars <- list(show.iters = "none")
## 10. Variation stack
variation <- list(list(name = conf$varop1),
list(name = conf$varop2),
list(name = conf$varop3),
list(name = conf$varop4),
list(name = "truncate"))
for (i in seq_along(variation)){
if (variation[[i]]$name == "binrec") {
variation[[i]]$rho <- get(paste0("binrec.rho", i), conf)
}
if (variation[[i]]$name == "diffmut") {
variation[[i]]$basis <- get(paste0("diffmut.basis", i), conf)
variation[[i]]$Phi <- NULL
}
if (variation[[i]]$name == "polymut") {
variation[[i]]$etam <- get(paste0("polymut.eta", i), conf)
variation[[i]]$pm <- get(paste0("polymut.pm", i), conf)
}
if (variation[[i]]$name == "sbx") {
variation[[i]]$etax <- get(paste0("sbx.eta", i), conf)
variation[[i]]$pc <- get(paste0("sbx.pc", i), conf)
}
if (variation[[i]]$name == "localsearch") {
variation[[i]]$type <- conf$ls.type
variation[[i]]$gamma.ls <- conf$gamma.ls
}
}
## 11. Seed
seed <- conf$seed
# Run MOEA/D
out <- moead(preset = NULL,
problem, decomp, aggfun, neighbors, variation, update,
constraint, scaling, stopcrit, showpars, seed)
# return IGD based on reference data
Yref <- as.matrix(read.table(paste0("../inst/extdata/pf_data/",
fdef[1], fdef[2], ".dat")))
return(list(cost = calcIGD(Y = out$Y, Yref = Yref)))
}
Finally, we run the experiment and save the outputs. Note that this experiment takes a long time to run (about 24 hours on a 24-core cluster machine), so take that into account when reproducing these
results. For more details on the code above, check the documentation of the irace package.
## Running the experiment
irace.output <- irace::irace(scenario, parameters)
saveRDS(irace.output, "../inst/extdata/RESULTS.rds")
file.copy(from = "irace.Rdata", to = "../inst/extdata/irace-tuning.Rdata")
## Test returned configurations on test instances
testing.main(logFile = "../inst/extdata/irace-tuning.Rdata")
file.copy(from = "irace.Rdata", to = "../inst/extdata/irace-testing.Rdata")
First let’s plot the IGD value achieved by the final configurations over the test problems:
The final MOEA/D configuration obtained by this experiment is described in the table below. The best configuration is presented in the first two columns. The third column, together with the figure
following, provides the consensus value of each component, measured as the rate of occurrence of each component in the seven final configurations returned by the Iterated Racing procedure. These
results suggest that the automated assembling and tuning method reached a reasonably solid consensus, in terms of the components used as well as the values returned for the numeric parameters.
Component             Value                                         Consensus
--------------------  --------------------------------------------  ---------
Decomposition         SLD                                           1.00
Aggregation Function  AWT                                           1.00
Objective Scaling     simple                                        Fixed
Neighborhood          by \(x\); \(T = 11\); \(\delta_p = 0.909\)    1.00
Variation Stack (1)   Differential Mutation; \(basis = "rand"\)     1.00
                      \(\phi \sim U(0,1)\)                          Fixed
Variation Stack (2)   Binomial Recombination; \(\rho_1 = 0.495\)    1.00
Variation Stack (3)   Binomial Recombination; \(\rho_2 = 0.899\)    1.00
Variation Stack (4)   Truncate                                      Fixed
Update                Restricted; \(n_r = 1\)                       1.00
A feature that may seem surprising at first glance is the two sequential applications of Binomial Recombination in the Variation Stack. This means that the results of a Differential Mutation operator
are recombined with the incumbent solutions at the (reasonably low) recombination rate \(\rho_1 = 0.495\); and then the resulting vectors are again recombined with the incumbent solutions, at a (much
higher) rate \(\rho_2 = 0.899\). However, a quick review of the definition of Binomial Recombination and some elementary probability shows that these two sequential applications of binomial
recombinations can be expressed as a single application with \(\rho = \rho_1\rho_2 = 0.445\). The fact that Iterated Racing converged to two operators instead of a single one can be explained by the
fact that these situations are equivalent, coupled with the absence of any pressure towards more parsimonious expressions of the Variation Stack in the setup.
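This equivalence is also easy to check empirically. The sketch below (Python, with a toy per-component model of binomial recombination in which a component is taken from the variant with probability \(\rho\) and from the incumbent otherwise) estimates the net replacement rate of two chained recombinations against the same incumbent:

```python
import random

rng = random.Random(1)
rho1, rho2 = 0.495, 0.899
trials = 200_000

kept_from_variant = 0
for _ in range(trials):
    # first binomial recombination: component comes from the variant w.p. rho1
    u_is_variant = rng.random() < rho1
    # second recombination against the same incumbent: kept w.p. rho2
    w_is_variant = u_is_variant and rng.random() < rho2
    kept_from_variant += w_is_variant

frac = kept_from_variant / trials
print(frac)  # empirically close to rho1 * rho2 ≈ 0.445
```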
Another seemingly counter intuitive aspect of the final configurations reached is the absence of local search. A possible explanation lies in the interaction between the variation operators chosen
(differential mutation + binomial recombination) with the type of neighborhood (by \(x\), with a very strong bias towards using points from the neighborhood for the variant operators - \(\delta_p =
0.909\)). The use of points that are relatively similar in the space of decision variables may result in local exploration in this case, since the magnitude of the differential vectors tends to
become relatively small. As the iterations progress, this Variation Stack would tend to perform local search movements, with larger perturbations potentially occurring whenever points are sampled from
outside the neighborhoods (i.e., about 10% of the cases). It is therefore possible that this local exploration characteristic may have resulted in a MOEA/D configuration that does not benefit from an
explicit local search operator.
Besides these considerations, the configurations returned by the Iterated Racing procedure present a few other interesting points. First, the use of a smaller neighborhood than is usually practiced
in the literature (T = 11), with the neighborhood relations being defined at each iteration by the similarities between the incumbent solutions of each subproblem. Second, the use of a very strict
Restricted Neighborhood Update, with \(n_r = 1\), which suggests an advantage in trying to maintain diversity instead of accelerating convergence. The use of the variation operators of MOEA/D-DE
without Polynomial Mutation (which is usually included in MOEA/D-DE) is also curious, as it may indicate a more parsimonious variation stack than is usually practiced in the literature.
As can be seen, exploring the space of possible component configurations and parameter values can render improved algorithmic configurations and new insights into the roles of specific components and
parameter values.
Thus, we highly recommend that a similar approach is used when developing new components, in order to observe not only the individual performance of the novel component, but also its interaction with
components which already exist in the MOEA/D environment.
Math Colloquia - Topological surgery through singularity in mean curvature flow
The mean curvature flow is an evolution of hypersurfaces satisfying a geometric heat equation. The flow naturally develops singularities and changes the topology of the hypersurfaces at
singularities; therefore, one can study topological problems via singularity analysis for parabolic partial differential equations. Indeed, linearly stable singularities are just round cylinders, and
thus we can predict how the topology of generic solutions to the flow changes.
In this talk, we first introduce the mean curvature flow, the blow-up analysis, and the stability of singularities. If time permits, we also discuss the well-posedness problem through linearly stable singularities.
How do you differentiate #sin^2x-sin^2y=x-y-5#?
Answer 1
$\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{1 - \sin 2 x}{1 - \sin 2 y}$
#sin^2x -x = sin^2y-y-5#
Differentiate both sides with respect to #x#:
#2sinxcosx -1 = (2sinycosy -1)dy/dx#
#dy/dx = (1-sin2x)/(1-sin2y)#
Answer 2
Answer: $y ' = \frac{\sin \left(2 x\right) - 1}{\sin \left(2 y\right) - 1}$
Differentiate #sin^2x-sin^2y=x-y-5#
First, we take the derivative of both sides, leaving #dy/(dx)# as #y'#: #d/(dx)sin^2x-sin^2y=d/(dx)x-y-5#
Solve for #y'# by adding #2y'sin(y)cos(y)# to both sides and subtracting #1# from both sides: #2sin(x)cos(x)-1=2y'sin(y)cos(y)-y'#
Note that #sin(2theta)=2sin(theta)cos(theta)# So, we can write #y'# as: #y'=(sin(2x)-1)/(sin(2y)-1)#
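A numerical sanity check of this result (pure Python; the bracket \([-6, -4]\) for \(y\) at \(x \approx 0\) was found by inspection and is an assumption of the sketch): solve the implicit equation for \(y\) at nearby values of \(x\) by bisection, and compare the difference quotient with the closed form.

```python
import math

def F(x, y):
    # the implicit relation sin^2 x - sin^2 y = x - y - 5, written as F(x, y) = 0
    return math.sin(x) ** 2 - math.sin(y) ** 2 - (x - y - 5)

def y_of_x(x, lo=-6.0, hi=-4.0):
    # F is nondecreasing in y (dF/dy = 1 - sin 2y >= 0), and the bracket
    # straddles a root for x near 0, so bisection converges
    for _ in range(80):
        mid = (lo + hi) / 2
        if F(x, mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x0 = 0.0
y0 = y_of_x(x0)
h = 1e-5
numeric = (y_of_x(x0 + h) - y_of_x(x0 - h)) / (2 * h)
formula = (math.sin(2 * x0) - 1) / (math.sin(2 * y0) - 1)
print(abs(numeric - formula) < 1e-5)  # → True
```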
Answer 3
To differentiate ( \sin^2x - \sin^2y = x - y - 5 ), use implicit differentiation, treating ( y ) as a function of ( x ). The derivatives of the squared-sine terms are:
[ \frac{d}{dx}(\sin^2x) = 2\sin x \cos x = \sin 2x ] [ \frac{d}{dx}(\sin^2y) = 2\sin y \cos y \cdot \frac{dy}{dx} = \sin 2y \cdot \frac{dy}{dx} ]
Differentiating both sides of the equation gives:
[ \sin 2x - \sin 2y \cdot \frac{dy}{dx} = 1 - \frac{dy}{dx} ]
Collecting the ( \frac{dy}{dx} ) terms and solving:
[ \frac{dy}{dx}\left(1 - \sin 2y\right) = 1 - \sin 2x \quad\Rightarrow\quad \frac{dy}{dx} = \frac{\sin 2x - 1}{\sin 2y - 1} ]
Linear Programming
In my last post on game theory, I said that you could find an optimal
probabilistic grand strategy for any two-player, simultaneous move, zero-sum game.
It's done through something called linear programming. But linear programming
is useful for a whole lot more than just game theory.
Linear programming is a general technique for solving a huge family of
optimization problems. It's incredibly useful for scheduling, resource
allocation, economic planning, financial portfolio management,
and a ton of other, similar things.
The basic idea of it is that you have a linear function,
called an objective function, which describes what you'd like
to maximize. Then you have a collection of inequalities, describing
the constraints of the values in the objective function. The solution to
the problem is a set of assignments to the variables in the objective function
that provide a maximum.
For example, imagine that you run a widget factory, which can
produce a maximum of W widgets per day. You can produce two
different kinds of widgets - either widget 1, or widget 2. Widget one
takes s[1] grams of iron to produce; widget 2 needs s[2]
grams of iron. You can sell a widget 1 for a profit p[1] dollars,
and a widget 2 for a profit of p[2] dollars. You've got G grams of iron
available for producing widgets. In order to maximize your profit, how
many widgets of each type should you produce with the iron you have
available to you?
You can reduce this to the following:
1. You want to maximize the objective function
p[1]x[1] + p[2]x[2], where x[1] is the number of type 1 widgets you'll produce, and x[2] is the number
of type 2 widgets.
2. x[1] + x[2] ≤ W. (You can't produce more than W widgets.)
3. s[1]x[1] + s[2]x[2] ≤ G. (This is
the constraint imposed by the amount of iron you have available to produce
4. x[1] ≥ 0, x[2] ≥ 0. (You can't produce a
negative number of either type of widgets.)
The fourth one is easy to overlook, but it's really important. One of the tricky things about linear programming is that you need to be sure that you
really include all of the constraints. You can very easily get non-sensical
results if you miss a constraint. For example, if we left that constraint out
of this problem, and the profit on a type 2 widget was significantly higher
than the profit on a type 1 widget, then you might wind up producing a negative number of type 1 widgets, in order to allow you to produce
more than W widgets per day.
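To make this concrete, here's a small sketch in Python (the specific numbers — profits p1=3, p2=5, iron use s1=2, s2=4 grams, W=100 widgets/day, G=300 grams — are made up for illustration). It leans on a fact discussed below: an LP optimum lies at a vertex of the feasible region, so for two variables we can simply check every intersection of two constraint boundaries:

```python
from itertools import combinations

# Constraints in the form a1*x1 + a2*x2 <= b:
cons = [(1, 1, 100),   # x1 + x2 <= W
        (2, 4, 300),   # s1*x1 + s2*x2 <= G
        (-1, 0, 0),    # x1 >= 0
        (0, -1, 0)]    # x2 >= 0

def feasible(x1, x2, eps=1e-9):
    return all(a * x1 + b * x2 <= c + eps for a, b, c in cons)

best = None
# the optimum lies at a vertex: an intersection of two constraint boundaries
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                      # parallel boundaries: no vertex
    x1 = (c1 * b2 - c2 * b1) / det    # Cramer's rule for the intersection
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        profit = 3 * x1 + 5 * x2      # the objective p1*x1 + p2*x2
        if best is None or profit > best[0]:
            best = (profit, x1, x2)

print(best)  # → (400.0, 50.0, 50.0)
```

With these numbers, making 50 of each widget beats the "all widget 2" corner (0, 75), which only earns 375.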
Once you've got all of the constraints laid out, you can convert the
problem to a matrix form, which is what we'll use for the
solution. Basically, for each constraint equation where you've got
both x[1] and x[2], you add a row to a coefficient
matrix containing the coefficients, and a row to the result matrix containing the right-hand side of the inequality. So, for example, you'd convert
the inequality "s[1]x[1] + s[2]x[2] ≤ G" to a row [s[1],s[2]] in the coefficient matrix,
and "G" as a row in the result matrix. This rendering of the inequalities
is show below.
Once you've got the constraints worked out, you need to do something
called adding slack. The idea is that you want to convert the
inequalities to equalities. The way to do that is by adding variables. For
example, given the inequality x[1] + x[2]≤W, you
can convert that to an equality by adding a variable representing
W-(x[1]+x[2]): x[1]+x[2]+x[slack]=W, where x[slack]≥0.
So we take the constraint equations, and add slack variables to all of them,
which gives us the following:
1. Maximize: p[1]x[1] + p[2]x[2]
2. x[1] + x[2] + x[3] = W.
3. s[1]x[1] + s[2]x[2] + x[4] = G.
4. x[1] ≥ 0, x[2] ≥ 0, x[3] ≥ 0, x[4] ≥ 0.
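At any feasible production plan, the slack variables are just the unused capacity and the unused iron. A tiny check (Python, with the same invented numbers W=100, G=300, s1=2, s2=4 used only for illustration):

```python
# Slack variables turn "a·x <= b" into "a·x + slack = b" with slack >= 0.
def slacks(x1, x2, W=100, G=300, s1=2, s2=4):
    x3 = W - (x1 + x2)             # x3: unused production capacity
    x4 = G - (s1 * x1 + s2 * x2)   # x4: unused iron
    return x3, x4

x3, x4 = slacks(30, 40)
print(x3, x4)  # → 30 80  (both nonnegative, so the plan (30, 40) is feasible)
```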
We can re-render this into matrix form - but the next matrix needs to include rows and columns for the slack variables:

    x1   x2   x3   x4  |  RHS
    1    1    1    0   |  W
    s1   s2   0    1   |  G
Now, we need to work the real solution into the matrix. The way that we
do that is by taking the solution - the maximum of the objective function - and naming it "Z", and adding the new objective function equation into the
matrix. In our example, since we're trying to maximize the
objective "p[1]x[1] + p[2]x[2]",
we represent that with an equation "Z-p[1]x[1]-p[2]x[2]=0". So the
final matrix, called the augmented form matrix of our linear programming problem, is:

    Z    x1    x2    x3   x4  |  RHS
    1   -p1   -p2    0    0   |  0
    0    1     1     1    0   |  W
    0    s1    s2    0    1   |  G
Once you've got the augmented form, there are a variety of techniques that
you can use to get a solution. The intuition behind it is fairly simple: the
set of inequalities, interpreted geometrically, form a convex polyhedron. The
maximum will be at one of the vertices of the polyhedron.
The simplest solution strategy is called the simplex algorithm. In the
simplex algorithm, you basically start by finding an arbitrary vertex
of the polyhedron. You then look to see if either of the edges incident to that point slope upwards. If they do, you trace the edge upward to the next
vertex. And then you look to see if the other edge from that vertex slopes
upwards - and so on, until you reach a vertex where you can't follow an edge to a higher point.
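Here's a minimal sketch of that idea in code (Python; it assumes the easy standard form — maximize with all constraints ≤ and nonnegative right-hand sides — and real solvers handle far more cases, degeneracy included). The demo reuses the hypothetical widget numbers p=(3,5), s=(2,4), W=100, G=300:

```python
def simplex(c, A, b):
    """Tiny tableau simplex: maximize c·x subject to A x <= b, x >= 0, b >= 0."""
    m, n = len(A), len(c)
    # constraint rows, each augmented with an identity column for its slack
    T = [list(map(float, A[i]))
         + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    # objective row: Z - c·x = 0, stored with negated coefficients
    T.append([-float(ci) for ci in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))        # the slacks start in the basis
    while True:
        # entering variable: most negative objective-row coefficient
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-9:
            break                        # no negative reduced cost: optimal
        # leaving variable: minimum-ratio test
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):           # eliminate the pivot column elsewhere
            if i != piv_row:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]

x, z = simplex([3, 5], [[1, 1], [2, 4]], [100, 300])
print(x, z)  # → [50.0, 50.0] 400.0
```

Each pass of the loop is one step along an edge of the polyhedron to a neighboring vertex with a better objective value, which is exactly the walk described above.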
In general, solving a linear programming problem via simplex is pretty
fast - but it's not necessarily so. The worst case time of it is
exponential. But in real linear programming problems, the
exponential case basically doesn't come up.
(It's like a wide range of problems. There are a lot of problems that are,
in theory, incredibly difficult - but because the difficult cases are
very rare and rather contrived, they're actually very easy to solve. Two
examples of this that I find particularly interesting are both NP complete.
Type checking in Haskell is one of them: in fact, the general type
inference in Haskell is worse that NP complete: the type validation is
NP-complete; type inference is NP-hard. But on real code, it's effectively
approximately linear. The other one is a logic problem called 3-SAT. I
once attended a great talk by a guy named Daniel Jackson, talking about
a formal specification language he'd designed called Alloy.
Alloy reduces
its specification checking to 3-SAT. Dan explained this saying: "The bad news
is, analyzing Alloy specifications is 3-SAT, so it's exponential and NP-complete. But the good news is that analyzing Alloy specifications is 3-SAT,
so we can solve it really quickly.")
This is getting long, so I think I'll stop here for today. Next post, I'll
show an implementation of the simplex algorithm, and maybe talk a little bit
about the basic idea behind the non-exponential algorithms. After that, we'll
get back to game theory, and I'll show how you can construct a linear
programming problem out of a game.
You know, I think that in just 5 or so paragraphs you elucidated that more than the week and a half we spent during my undergrad algorithms class.
My hat tips to you, sir.
Indeed, this post is a gem of clarity.
Nice introduction! A small typo: on the first row of the augmented matrix, you have -s_1 & -s_2 instead of -p_1 & -p_2.
Nice post!
May I suggest in your next post the inclusion of a paragraph about Mixed Integer Linear Programming? The concept follows easily from LP, and MILP problems are, in my opinion, even more interesting.
I believe your link to Alloy wasn't intended to be recursive, and this http://alloy.mit.edu/ is the correct link.
Daniel Jackson the archaeologist?
Nicely done. There is something in your way of writing that makes things immediately come through.
You say, in support of constraint 4, that producing negative type 1 widgets is nonsensical. Actually, it might make sense. If you have an inventory of type 1 widgets, it may be worth it to melt them
down so you can produce more of type 2 widgets.
Great post. Duality might be a good "next topic" that I'd really be interested in reading.
Hi Mark,
Thanks for the nice introduction of Linear programming.
May I add one more subtle point here. Regarding your example of widget factory, I think x1 and x2 (number of type1/type2 widgets) should be defined as integer variables. Truncation of integer
variables in 'optimal solution' is not a good idea either since it might result in an infeasible or sub-optimal solutions.
Hi Mark
I've been thinking for a while about how your posts are usually much clearer than most textbooks. Have you considered writing a book, for example about Haskell? I think a lot of people would buy it.
Anyway I read your blog with much pleasure, including this post, keep up the good work :-)
why is it always widgets?????
Hi Mark,
Good post on linear programming.
Doesn't there exist a condition such that the Dynamic Programming (http://en.wikipedia.org/wiki/Dynamic_programming) of Richard Bellman becomes more efficient?
"why is it always widgets?"
Especially the iron ones! They're like, so five minutes ago.
One word: Plastics!
Nice article. I was lightly exposed to the simplex method when I was looking into the downhill simplex method or Nelder-Mead method. It was easy to spot the difference, but it did make finding
what I needed a bit harder.
I can't wait to see the rest. keep it up
I second Alok Bakshi's request (@ #10) for more info on how integer-valued data affect this.
I think I understood this better than in my linear algebra class ... eight years ago, I think.
It probably came up in the numerical analysis course too, but I was singularly unmotivated to learn anything to do with programming back then. I put off taking those two mandatory computer science
classes for as long as possible.
Obviously I feel quite stupid now. I've even had a few problems that might have been easily solvable with a little bit of programming knowledge.
I've lost my small programmes from back then due to a hard disk failure and a theft, so I couldn't just copy the bits I'd written (/copied from the blackboard) to handle exceptions when reading in a
file - which was what I needed. And somehow I've never found the impetus to try to relearn anything.
Can anyone recommend some good resources à la The Utter Idiot's Guide to Java and Python? Something online with interactive tutorials that does *not* assume that the reader/learner knows
wk: Well, integer valued variables wreak havoc on the structure of the feasible region. With integrality constraints, it is no longer a convex polyhedron but rather a disjoint collection of points.
This makes it impossible to use the simplex method and other linear programming methods as designed. Generally, a linear problem becomes *much* harder when integrality constraints are added.
Integer linear programs can be solved, for example, by a tree-search method that uses LP relaxations (the same problem with the integrality constraints removed) to establish bounds on the optimal
solution. This is called branch and bound. There are other methods for solving general ILPs and MILPs, but many successful methods are also problem dependent.
Mark -
I knew that if I hung on through your set theory postings, you'd finally get around to topics that I know something about! Thanks for taking the time to write about these subjects. As a physics
major, I had way too much college and graduate math, most of which didn't stick very well, but reading your posts helps to shake some of the dust off.
One consideration of linear programming that gets shorted in most treatments is that real world problems are often over-determined. As an example, in the distant past I wrote a program that
determined the composition of piles of scrap metal based upon the analysis of the metal that resulted when various quantities of each were mixed together and melted. Since good scrap costs more than
cheap scrap, the idea was to produce the least-cost load that would produce the desired final chemistry. The problem is that the chemistry of the scrap piles themselves is unknown.
Many of the standard LP algorithms fall down badly in these conditions.
You're jumping ahead of me :-)
The problem, as formulated in this post, is actually incorrect, as you've pointed out. But I wanted to work up to that, by first showing simplex, and then pointing out how, when you instantiate
the problem, you'd get answers telling you to make 3.5 of widget 1, and 2.7 of widget 2.
If you constrain the solution to integers, then you get a slightly different problem called integer linear programming. That little change has a huge impact. Integer linear programming is hard.
Really hard. The decision-function formulation of it is NP-complete; computing the actual solution is NP-hard. In general, we either find cases of it where there's a relatively small number of
variables (in which case the exponential factor doesn't make it non-computable), or we use heuristics to find nearly optimal solutions. As far as anyone knows, there's no way to compute a true
optimal solution to an integer linear program in polynomial time.
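To make the fractional-answer point concrete, here's a toy instance (my own made-up numbers, not the widget problem from the post): the LP relaxation's optimum lands at a fractional vertex, while the best integer point is strictly worse.

```python
# Maximize x + y subject to 2x + 3y <= 12, 3x + 2y <= 12, x >= 0, y >= 0.
def feasible(x, y):
    return 2 * x + 3 * y <= 12 and 3 * x + 2 * y <= 12 and x >= 0 and y >= 0

# The LP relaxation's optimum sits at the vertex where both constraints bind:
# 2x + 3y = 12 and 3x + 2y = 12 give x = y = 2.4, objective 4.8 -- fractional.
lp_optimum = 2.4 + 2.4

# The integer optimum, found here by brute force over a bounding box, is only 4.
ilp_optimum = max(x + y for x in range(7) for y in range(7) if feasible(x, y))
```

Here rounding (2.4, 2.4) down to (2, 2) happens to be optimal, but in general naive rounding of the LP solution can be infeasible or far from the true integer optimum.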
Great -- looking forward to it.
Don't forget to finish this...
The first matrix with slack variables should be s1, s2, 0, 1 & 1, 1, 1, 0
Just a small comment about NP-completeness: the fact that a problem reduces to 3SAT doesn't prove that it is NP-complete (2SAT for instance :p), the reduction is the other way round (see also your
post of July 15, 2007).
And I don't know if you're familiar with FPT (fixed-parameter tractable) algorithms, but it's a pretty nice example on how to deal with some NP-complete problems: find an algorithm whose complexity
is a polynomial in n times some function exponential in k, where k is a small parameter in practice.
can you include about duality next time? | {"url":"https://scienceblogs.com/goodmath/2008/05/07/linear-programming","timestamp":"2024-11-14T18:51:35Z","content_type":"text/html","content_length":"82255","record_id":"<urn:uuid:5d68ddc2-dbfa-4dba-b875-b0d76d583f1a>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00445.warc.gz"} |
Convergence of parallel overlapping domain decomposition methods for the Helmholtz equation
We analyse parallel overlapping Schwarz domain decomposition methods for the Helmholtz equation, where the exchange of information between subdomains is achieved using first-order absorbing
(impedance) transmission conditions, together with a partition of unity. We provide a novel analysis of this method at the PDE level (without discretization). First, we formulate the method as a
fixed point iteration, and show (in dimensions 1, 2, 3) that it is well-defined in a tensor product of appropriate local function spaces, each with L^2 impedance boundary data. We then obtain a bound
on the norm of the fixed point operator in terms of the local norms of certain impedance-to-impedance maps arising from local interactions between subdomains. These bounds provide conditions under
which (some power of) the fixed point operator is a contraction. In 2-d, for rectangular domains and strip-wise domain decompositions (with each subdomain only overlapping its immediate neighbours),
we present two techniques for verifying the assumptions on the impedance-to-impedance maps that ensure power contractivity of the fixed point operator. The first is through semiclassical analysis,
which gives rigorous estimates valid as the frequency tends to infinity. At least for a model case with two subdomains, these results verify the required assumptions for sufficiently large overlap.
For more realistic domain decompositions, we directly compute the norms of the impedance-to-impedance maps by solving certain canonical (local) eigenvalue problems. We give numerical experiments that
illustrate the theory. These also show that the iterative method remains convergent and/or provides a good preconditioner in cases not covered by the theory, including for general domain
decompositions, such as those obtained via automatic graph-partitioning software.
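In symbols (my notation; sign conventions vary and the paper's exact formulation may differ), the first-order absorbing/impedance exchange between subdomains and the partition-of-unity reassembly can be sketched as:

```latex
% First-order impedance transmission condition on subdomain \Omega_\ell,
% with outward normal n_\ell and wavenumber k:
\left(\partial_{n_\ell} - \mathrm{i}k\right) u_\ell^{\,n+1}
  = \left(\partial_{n_\ell} - \mathrm{i}k\right) u^{\,n}
  \quad \text{on } \partial\Omega_\ell \setminus \partial\Omega,
% after which the global iterate is reassembled with a partition of unity \{\chi_\ell\}:
u^{\,n+1} = \sum_{\ell} \chi_\ell\, u_\ell^{\,n+1}.
```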
Bibliographical note
Funding Information:
We gratefully acknowledge support from the UK Engineering and Physical Sciences Research Council Grants EP/R005591/1 (DL and EAS) and EP/S003975/1 (SG, IGG, and EAS). This research made use of the
Balena High Performance Computing (HPC) Service at the University of Bath.
ASJC Scopus subject areas
• Computational Mathematics
• Applied Mathematics
| {"url":"https://researchportal.bath.ac.uk/en/publications/convergence-of-parallel-overlapping-domain-decomposition-methods--2","timestamp":"2024-11-13T12:17:12Z","content_type":"text/html","content_length":"77628","record_id":"<urn:uuid:e5220100-c8b4-43b5-8568-6438b052502c>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00230.warc.gz"}
THEORY OF COMPUTATION: TOC- MADE EASY
This course will introduce learners to three foundational areas of computer science: the basic mathematical models of computation, the problems that can be solved by computers, and the problems that
are computationally hard. It also introduces basic computation models, their properties and the necessary mathematical techniques to prove more advanced attributes of these models. Learners will
be able to express computer science problems as mathematical statements and formulate proofs.
COURSE DURATION: 8 Weeks
Upon successful completion of this course, learners will be able to
Interpret the mathematical foundations of computation including automata theory; the theory of formal languages and grammars; the notions of algorithm, decidability, complexity, and computability
Construct the abstract machines including finite automata, pushdown automata, and Turing machines from their associated languages and grammar
Make use of the pumping lemma to show that a language is not regular / not context-free
Construct the grammar for any given finite automata, pushdown automata or Turing machines
Outline the characteristics of P, NP and NP Complete problems
Solve computational problems regarding their computability and complexity and prove the basic results of the theory of computation
Week 1: Formal Proofs-Finite Automata – deterministic and nondeterministic, regular operations
Week 2: Regular Expression, Equivalence of DFA, NFA and REs, closure properties
Week 3: Non regular languages and pumping lemma, DFA Minimization,
Week 4: CFGs, Chomsky Normal Form, Non CFLs and pumping lemma for CFLs
Week 5: PDAs, Equivalence of PDA and CFG, Properties of CFLs, DCFLs
Week 6: Turing Machines and their variants - programming techniques for TMs; Undecidability
Week 7: Closure properties of decidable languages, Undecidability, Reductions, Post Correspondence Problem
Week 8: Rice's Theorem, introduction to complexity theory, The Class P and NP
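As a taste of the Week 1 material, a DFA can be written down directly as a transition table. This toy machine (my own example, not part of the syllabus) accepts binary strings containing an even number of 1s:

```python
# DFA over {0, 1} accepting strings with an even number of 1s.
# States: "even" (the start state, also accepting) and "odd".
delta = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(word, start="even", accepting=frozenset({"even"})):
    state = start
    for symbol in word:
        state = delta[(state, symbol)]  # one deterministic step per symbol
    return state in accepting
```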
• Dr.R.Leena Sri
• Dr.M.K.Kavitha Devi
• Dr.K.Sundharakantham | {"url":"https://www.tce.edu/tce-mooc/theory-computation-toc-made-easy","timestamp":"2024-11-10T03:02:14Z","content_type":"text/html","content_length":"35321","record_id":"<urn:uuid:67dda96a-c404-4391-884e-ac8c6e269758>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00101.warc.gz"} |
PQStat - Knowledge Base
The Kruskal-Wallis one-way analysis of variance by ranks (Kruskal, 1952; Kruskal and Wallis, 1952) is an extension of the U-Mann-Whitney test to more than two populations. This test is used to
verify the hypothesis that there is no shift in the compared distributions, i.e., most often, that there are no significant differences between the medians of the analysed variable in the compared populations. (The equality of variability can additionally be checked with Conover's rank test, described below.)
The Chi-square distribution with the number of degrees of freedom calculated using the formula df = k − 1, where k is the number of compared groups.
The p-value, determined on the basis of the test statistic, is compared with the significance level α: if p ≤ α, we reject the null hypothesis; otherwise, there is no reason to reject it.
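A minimal sketch of the H statistic itself (my own illustration, without the tie correction that PQStat applies):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (no tie correction); df = k - 1."""
    pooled = [v for g in groups for v in g]
    n = len(pooled)
    ordered = sorted(pooled)
    # assign each distinct value its average (mid) rank, which handles ties
    rank_of = {}
    i = 0
    while i < n:
        j = i
        while j + 1 < n and ordered[j + 1] == ordered[i]:
            j += 1
        rank_of[ordered[i]] = (i + j + 2) / 2  # ranks are 1-based
        i = j + 1
    return 12.0 / (n * (n + 1)) * sum(
        sum(rank_of[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
```

For three well-separated groups such as [1,2,3], [4,5,6], [7,8,9] this gives H = 7.2 with df = 2.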
An introduction to contrasts and POST-HOC tests is given in the unit on the one-way analysis of variance.
Used for simple comparisons, of equal-size groups as well as unequal-size groups.
The Dunn test (Dunn, 1964) includes a correction for tied ranks (Zar, 2010) and is a test corrected for multiple testing. The Bonferroni or Sidak correction is most commonly used here, although
other, newer corrections are also available, described in more detail in Multiple comparisons.
Example - simple comparisons (comparing 2 selected median / mean ranks with each other):
[i] The value of critical difference is calculated by using the following formula:
The test statistic asymptotically (for large sample sizes) has the normal distribution, and the p-value is corrected for the number of possible simple comparisons
The non-parametric equivalent of Fisher LSD, used for simple comparisons of both equal-size and unequal-size groups.
The settings window with the Kruskal-Wallis ANOVA can be opened in Statistics menu→NonParametric tests →Kruskal-Wallis ANOVA or in ''Wizard''.
A group of 120 people was interviewed, for whom the occupation is their first job obtained after receiving appropriate education. The respondents rated their job satisfaction on a five-point scale,
4- job that gives a fairly high level of satisfaction,
We will test whether the level of reported job satisfaction does not change for each category of education.
The obtained value of p=0.001 indicates a significant difference in the level of satisfaction between the compared categories of education. Dunn's POST-HOC analysis with Bonferroni's correction shows
that significant differences are between those with primary and secondary education and those with primary and tertiary education. Slightly more differences can be confirmed by selecting the stronger
Conover-Iman POST-HOC test.
In the graph showing medians and quartiles we can see homogeneous groups determined by the POST-HOC test. If we choose to present Dunn's results with Bonferroni correction we can see two homogeneous
groups that are not completely distinct, i.e. group (a) - people who rate job satisfaction lower and group (b)- people who rate job satisfaction higher. Vocational education belongs to both of these
groups, which means that people with this education evaluate job satisfaction quite differently. The same description of homogeneous groups can be found in the results of the POST-HOC tests.
We can provide a detailed description of the data by selecting descriptive statistics in the analysis window
We can also show the distribution of responses in a column plot.
The Jonckheere-Terpstra test for ordered alternatives, described independently by Jonckheere (1954) and Terpstra (1952), can be calculated in the same situation as the Kruskal-Wallis ANOVA, as it is based on the same
assumptions. The Jonckheere-Terpstra test, however, captures the alternative hypothesis differently - indicating in it the existence of a trend for successive populations.
The term: „with at least one strict inequality” written in the alternative hypothesis of this test means that at least the median of one population should be greater than the median of another
population in the order specified.
To be able to perform a trend analysis, the expected order of the populations must be indicated by assigning consecutive natural numbers.
With the expected direction of the trend known, the alternative hypothesis is one-sided and the one-sided p-value is interpreted. The interpretation of the two-sided p-value means that the researcher
does not know (does not assume) the direction of the possible trend.
The settings window with the Jonckheere-Terpstra test for trend can be opened in Statistics menu→NonParametric tests→Kruskal-Wallis ANOVA or in ''Wizard''.
It is suspected that better educated people have high job demands, which may reduce the satisfaction level of the first job, which often does not meet such demands. Therefore, it is worthwhile to
conduct a trend analysis.
To do this, we resume the analysis with the Jonckheere-Terpstra trend test option, and assign successive natural numbers to the education categories.
The obtained one-sided p-value (p < 0.0001) is less than the set significance level α, which indicates the existence of the suspected trend.
We can also confirm the existence of this trend by showing the percentage distribution of responses obtained.
The Conover squared ranks test is used, similarly to the Fisher-Snedecor test, to compare the variability of the analysed groups. However, this test examines variation, and therefore distances from the mean, so its basic condition of use is data measured on an interval scale.
The settings window with the Conover ranks test of variance can be opened in Statistics menu→NonParametric tests→Kruskal-Wallis ANOVA, option Conover ranks test of variance, or Statistics
menu→NonParametric tests→Mann-Whitney, option Conover ranks test of variance.
Patients have been prepared for spinal surgery. The patients will be operated on by one of three methods. Preliminary allocation of each patient to each type of surgery has been made. At a later
stage we intend to compare the condition of the patients after the surgeries, therefore we want the groups of patients to be comparable. They should be similar in terms of the height of the interbody
space (WPMT) before surgery. The similarity should concern not only the average values but also the differentiation of the groups.
It is found that for two of the methods, the pre-operative WPMT exhibits deviations from normality, largely caused by skewness of the data. Further comparative analysis will be conducted using the
Kruskal-Wallis test to compare whether the level of WPMT differs between the methods, and the Conover test to indicate whether the spread of WPMT scores is similar in each method.
First, the value of Conover's test of variance is interpreted, which indicates statistically significant differences in the ranges of the groups compared (p=0.0022). From the graph, we can conclude
that the differences are mainly in group 3. Since differences in WPMT were detected, the interpretation of the result of the Kruskal-Wallis test comparing the level of WPMT for these methods should
be cautious, since this test is sensitive to heterogeneity of variance. Although the Kruskal-Wallis test showed no significant differences (p=0.2057), it is recommended that patients with low WPMT
(who were mainly assigned to surgery with method B) be more evenly distributed, i.e. to see if they could be offered surgery with method A or C. After reassignment of patients, the analysis should be repeated.
The Friedman repeated measures analysis of variance by ranks - the Friedman ANOVA - was described by Friedman (1937). This test is used when the measurements of an analysed variable are made
several times.
Iman and Davenport (1980) showed that in many cases the Friedman statistic is overly conservative, and made some modification to it. This modification is the non-parametric equivalent of the
ANOVA for dependent groups, which makes it now recommended for use in place of the traditional Friedman statistic.
Hypotheses relate to the equality of the sum of ranks for successive measurements.
Two test statistics are determined: the Friedman statistic and the Iman-Davenport modification of this statistic.
The Iman-Davenport modification of the Friedman statistic has the form F = ((n − 1)·T) / (n(k − 1) − T), where T is the Friedman chi-square statistic, n is the number of subjects, and k is the number of measurements.
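A bare-bones sketch of both statistics for a complete table data[i][j] (subject i, measurement j); ties within a row receive average ranks, and no further tie correction is applied:

```python
def friedman_stats(data):
    """Return (Friedman chi-square T, Iman-Davenport F) for a complete table."""
    n, k = len(data), len(data[0])
    col_rank_sums = [0.0] * k
    for row in data:
        ordered = sorted(row)
        for j, v in enumerate(row):
            first = ordered.index(v) + 1            # lowest rank held by v
            last = first + ordered.count(v) - 1     # highest rank held by v
            col_rank_sums[j] += (first + last) / 2  # average (mid) rank
    t = 12.0 / (n * k * (k + 1)) * sum(r * r for r in col_rank_sums) - 3 * n * (k + 1)
    f = (n - 1) * t / (n * (k - 1) - t)             # Iman-Davenport modification
    return t, f
```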
For simple comparisons (frequency in particular measurements is always the same).
The Dunn test (Dunn, 1964) is a test corrected for multiple testing. The Bonferroni or Sidak correction is most commonly used here, although other, newer corrections are also available and are
described in more detail in the Multiple Comparisons section.
The test statistic asymptotically (for large sample size) has normal distribution, and the p-value is corrected for the number of possible simple comparisons
Non-parametric equivalent of Fisher LSD, used for simple comparisons (counts across measurements are always the same).
[i] The value of critical difference is calculated by using the following formula:
The settings window with the Friedman ANOVA can be opened in Statistics menu→NonParametric tests →Friedman ANOVA, trend test or in ''Wizard''
Quarterly sale of some chocolate bar was measured in 14 randomly chosen supermarkets. The study was started in January and finished in December. During the second quarter, the billboard campaign was
in full swing. Let's check if the campaign had an influence on the advertised chocolate bar sale.
Comparing the p-value of the Friedman test (as well as the p-value of the Iman-Davenport correction of the Friedman test) with the significance level α indicates a significant difference in sales between the analysed quarters.
In the graph, we presented homogeneous groups determined by the Conover-Iman test.
We can provide a detailed description of the data by selecting Descriptive statistics in the analysis window
If the data were described by an ordinal scale with few categories, it would be useful to present it also in numbers and percentages. In our example, this would not be a good method of description.
The Page test for ordered alternatives, described in 1963 by E. B. Page, can be computed in the same situation as Friedman's ANOVA, since it is based on the same assumptions. However, Page's test
captures the alternative hypothesis differently - indicating that there is a trend in subsequent measurements.
Hypotheses involve equality of the sum of ranks for successive measurements or are simplified to medians:
The term: „with at least one strict inequality” written in the alternative hypothesis of this test means that at least one median should be greater than the median of another group of measurements in
the order specified.
In order to perform a trend analysis, the expected ordering of measurements must be indicated by assigning consecutive natural numbers to successive measurement groups. These numbers are treated as
weights in the analysis
With the expected direction of the trend known, the alternative hypothesis is one-sided and the one-sided p-value is interpreted. Interpreting a two-sided p-value means that the researcher does not
know (does not assume) the direction of the possible trend.
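Page's L statistic is just the column rank sums weighted by the hypothesised order (weights 1..k). A sketch, without the normal approximation used for the p-value:

```python
def page_l(data):
    """Page's L for data[i][j]: sum over columns of (order weight) * (rank sum)."""
    k = len(data[0])
    col_rank_sums = [0.0] * k
    for row in data:
        ordered = sorted(row)
        for j, v in enumerate(row):
            first = ordered.index(v) + 1
            last = first + ordered.count(v) - 1
            col_rank_sums[j] += (first + last) / 2  # average rank within the row
    return sum((j + 1) * r for j, r in enumerate(col_rank_sums))
```

With two subjects whose values already follow the expected order, [[1,2,3],[1,2,3]], the column rank sums are [2,4,6] and L = 1·2 + 2·4 + 3·6 = 28.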
The settings window with the Page test for trend can be opened in Statistics menu→NonParametric tests →Friedman ANOVA, trend test or in ''Wizard''
The expected result of the intensive advertising campaign conducted by the company is a steady increase in sales of the offered bar.
Durbin's analysis of variance of repeated measurements for ranks was proposed by Durbin (1951). This test is used when measurements of the variable under study are made several times - a similar
situation to the one in which Friedman's ANOVA is used. The original Durbin test and the Friedman test give the same result when we have a complete data set. However, Durbin's test has an advantage - it can also
be calculated for an incomplete data set. At the same time, data deficiencies cannot be located arbitrarily; the data must form a so-called balanced incomplete block design.
Hypotheses involve equality of the sum of ranks for successive measurements.
Used for simple comparisons (the counts in each measurement are always the same).
Example - simple comparisons (comparing 2 selected medians / rank sums between each other):
[ii] The value of critical difference is calculated by using the following formula:
The settings window with the Durbin's ANOVA can be opened in Statistics menu→NonParametric tests →Friedman ANOVA, trend test or in ''Wizard''
For records with missing data to be taken into account, you must check the Accept missing data option. Empty cells and cells with non-numeric values are treated as missing data. Only records with
more than one numeric value will be analyzed.
An experiment was conducted among 20 patients in a psychiatric hospital (Ogilvie, 1965). This experiment involved drawing straight lines according to a presented pattern. The pattern represented 5
lines drawn at different angles.
We want to see if the time taken to draw each line is completely random, or if there are lines that took more or less time to draw.
The graph shows homogeneous groups indicated by the post-hoc test.
The analysis of variance of repeated measures for Skillings-Mack ranks was proposed by Skillings and Mack in 1981. It is a test that can be used when there are missing data, but the missing data
need not occur in any particular arrangement. However, each site must have at least two observations. If there are no tied ranks and no gaps are present, it is the same as the Friedman ANOVA, and if
data gaps are present in a balanced arrangement it corresponds to the results of Durbin's ANOVA.
When each pair of measurements occurs simultaneously for at least one observation, this statistic asymptotically (for large sample sizes) follows the Chi-square distribution with k − 1 degrees of freedom, where k is the number of measurements.
The settings window with the Skillings-Mack ANOVA can be opened in Statistics menu→NonParametric tests →Friedman ANOVA, trend test or in ''Wizard''
For records with missing data to be taken into account, you must check the Accept missing data option. Empty cells and cells with non-numeric values are treated as missing data. Only records
containing more than one numeric value will be analyzed.
A certain university teacher, wanting to improve the way he conducted his classes, decided to verify his teaching skills. In several randomly selected student groups, during the last class, he asked
them to fill in a short anonymous questionnaire. The survey consisted of six questions about how the six specified parts of the material were illustrated. The students could rate it on a 5-point
scale, where 1 - the way of presenting the material was completely incomprehensible, 5 - a very clear and interesting way of illustrating the material. The data obtained in this way turned out to be
incomplete due to the fact that students did not answer questions about the part of the material they were absent on. In the 30-person group completing the survey, only 15 students provided complete
responses. Performing an analysis that does not account for data gaps (in this case, a Friedman analysis) will have limited power by cutting the group size so drastically and will not lead to the
detection of significant differences. Data gaps were not planned for and are not present in the balanced block, so this task cannot be performed using Durbin's analysis along with his POST-HOC test.
The results of the ANOVA Skillings-Mack analysis are presented in the following report:
The test statistic is based on the differences between the observed frequencies in a contingency table and the corresponding expected frequencies.
This statistic asymptotically (for large expected frequencies) has the Chi-square distribution with a number of degrees of freedom calculated using the formula:
The settings window with the Chi-square (multidimensional) test can be opened in Statistics menu → NonParametric tests (unordered categories)→Chi-square (multidimensional) or in ''Wizard''.
This test can be calculated only on the basis of raw data.
The Q-Cochran analysis of variance, based on the Q-Cochran test, is described by Cochran (1950). This test is an extended McNemar test for several dependent groups. Its statistic is based on the
"incompatible" observed frequencies - the observed frequencies calculated when the value of the analysed feature is different in several measurements.
This statistic asymptotically (for large sample size) has the Chi-square distribution with k − 1 degrees of freedom, where k is the number of measurements.
Example - simple comparisons (for the difference in proportion in one chosen pair of measurements):
The test statistic asymptotically (for large sample size) has the normal distribution, and the p-value is corrected for the number of possible simple comparisons
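For binary (0/1) data the Q statistic reduces to column and row totals. A sketch, assuming a complete table with k conditions per subject:

```python
def cochran_q(data):
    """Cochran's Q for binary data[i][j] (subject i, condition j); df = k - 1."""
    k = len(data[0])
    col = [sum(row[j] for row in data) for j in range(k)]   # successes per condition
    row_tot = [sum(row) for row in data]                    # successes per subject
    num = (k - 1) * (k * sum(c * c for c in col) - sum(col) ** 2)
    den = k * sum(row_tot) - sum(r * r for r in row_tot)
    return num / den
```

For example, five subjects answering three questions, data = [[1,1,0],[1,1,0],[1,1,1],[1,0,0],[0,1,0]], give Q = 4.5 with df = 2.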
The settings window with the Cochran Q ANOVA can be opened in Statistics menu→ NonParametric tests→Cochran Q ANOVA or in ''Wizard''.
We want to compare the difficulty of 3 test questions. To do this, we select a sample of 20 people from the analysed population. Every person from the sample answers 3 test questions. Next, we check
the correctness of answers (an answer can be correct or wrong). In the table, there are following scores:
Comparing the p-value (p = 0.0077) with the significance level α = 0.05, we conclude that the questions differ in their difficulty; the differences are then located with the POST-HOC Dunn test.
The carried-out POST-HOC analysis indicates that there are differences between the 2nd and 1st questions and between the 2nd and 3rd questions. The difference arises because the second question is easier
than the first and the third ones (the number of correct answers to the second question is higher).
Kruskal W.H., Wallis W.A. (1952), Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47, 583-621
Dunn O. J. (1964), Multiple comparisons using rank sums. Technometrics, 6: 241–252
Zar J. H., (2010), Biostatistical Analysis (Fifth Editon). Pearson Educational
Conover W. J. (1999), Practical nonparametric statistics (3rd ed). John Wiley and Sons, New York
Jonckheere A. R. (1954), A distribution-free k-sample test against ordered alternatives. Biometrika, 41: 133–145
Terpstra T. J. (1952), The asymptotic normality and consistency of Kendall's test against trend, when ties are present in one ranking. Indagationes Mathematicae, 14: 327–333
Friedman M. (1937), The use of ranks to avoid the assumption of normality implicit in the analysis of variance. Journal of the American Statistical Association, 32,675-701
Iman R. L., Davenport J. M. (1980), Approximations of the critical region of the friedman statistic, Communications in Statistics 9, 571–595
Page E. B. (1963), Ordered hypotheses for multiple treatments: A significance test for linear ranks. Journal of the American Statistical Association 58 (301): 216–30
Durbin J. (1951), Incomplete blocks in ranking experiments. British Journal of Statistical Psychology, 4: 85–90
Ogilvie J. C. (1965), Paired comparison models with tests for interaction. Biometrics 21(3): 651-64
Skillings J.H., Mack G.A. (1981) On the use of a Friedman-type statistic in balanced and unbalanced block designs. Technometrics, 23:171–177
Cochran W.G. (1952), The chi-square goodness-of-fit test. Annals of Mathematical Statistics, 23, 315-345
Cochran W.G. (1950), The comparison of percentages in matched samples. Biometrika, 37, 256-266 | {"url":"http://manuals.pqstat.pl/en:statpqpl:porown3grpl:nparpl","timestamp":"2024-11-07T01:07:36Z","content_type":"text/html","content_length":"169576","record_id":"<urn:uuid:7ebcbd8c-588a-4cf7-ab97-c8e6a8257376>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00470.warc.gz"}
Price to Earnings Ratio
Price to Earnings Ratio (P/E Ratio)
P/E ratio Definition
The price to earnings ratio (P/E ratio) is the ratio of market price per share to earnings per share. The P/E ratio is a valuation ratio of a company's current price per share compared to its earnings
per share. It is also sometimes known as the "earnings multiple" or "price multiple". Though the price to earnings ratio has several imperfections, it is still the most widely accepted method to evaluate
prospective investments. It is calculated by dividing "Market Value per Share (P)" by "Earnings per Share (EPS)". The market value of a share can be taken from the stock market or online, and the earnings per share
figure can be calculated by dividing net annual earnings by the total number of shares (Net Annual Earnings / Total Number of Shares).
P/E ratio is a widely used ratio which helps the investors to decide whether to buy shares of a particular company. It is calculated to estimate the appreciation in the market value of equity shares.
Calculation (Formula)
It is calculated by dividing the current market price of a stock by its EPS. The formula for the P/E ratio is as follows:
P/E Ratio = Market Price per Share / Earnings per Share (EPS)
Here's an example to help illustrate this calculation:
Suppose a company has a current market price per share of $50 and earnings per share of $5.
P/E ratio = $50 / $5 = 10
Therefore, the P/E ratio for this company is 10. This means that investors are willing to pay $10 for every dollar of earnings that the company generates. The P/E ratio can be used to compare the
valuation of different companies in the same industry or sector, and it is often used as a measure of the market's perception of a company's growth potential and future earnings prospects.
The price to earnings ratio can also be calculated with the help of following formula:
Price to Earnings Ratio = Market Capitalization / Earnings after Taxes and Preference Dividends
The P/E ratio tells how much the market is willing to pay for a company’s earnings. A higher P/E ratio means that the market is more willing to pay for the earnings of the company. Higher price to
earnings ratio indicates that the market has high hopes for the future of the share and therefore it has bid up the price. On the other hand, a lower price to earnings ratio indicates the market does
not have much confidence in the future of the share.
The average P/E ratio is normally between 12 and 15, though it depends on market and economic conditions, and it may also vary among different industries and companies. The P/E ratio indicates what amount an investor is paying for every dollar of earnings; a higher P/E ratio means an investor is paying more for each unit of net income. A P/E ratio between 12 and 15 is therefore generally considered acceptable.
For example, if company A's shares are trading at $50 per share and its most recent EPS is $2 per share, the P/E ratio will be $50 / $2 = 25. This indicates that investors are paying $25 for every $1 of the company's earnings. Companies with no profit or negative earnings have no P/E ratio, which is usually written as "N/A".
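The basic calculation, including the "N/A" case for zero or negative earnings, can be sketched in Python (an illustrative snippet, not part of the original article):

```python
def pe_ratio(market_price_per_share, earnings_per_share):
    """Return the price-to-earnings ratio, or "N/A" when earnings
    per share are zero or negative (no meaningful P/E exists)."""
    if earnings_per_share <= 0:
        return "N/A"
    return market_price_per_share / earnings_per_share

print(pe_ratio(50, 5))   # 10.0 -> $10 paid per $1 of earnings
print(pe_ratio(50, 2))   # 25.0
print(pe_ratio(50, -3))  # N/A
```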
Norms and Benchmarks
A higher P/E ratio may not always be a positive indicator because a higher P/E ratio may also result from overpricing of the shares. Similarly, a lower P/E ratio may not always be a negative
indicator because it may mean that the share is a sleeper that has been overlooked by the market. Therefore, P/E ratio should be used cautiously. Investment decisions should not be based solely on
the P/E ratio. It is better to use it in conjunction with other ratios and measures.
The most obvious and widely discussed problem with the P/E ratio is that its denominator includes non-cash items. The earnings figure can easily be manipulated by playing with non-cash items such as depreciation or amortization; even when it is not manipulated deliberately, the earnings figure is still affected by non-cash items. That is why a large number of investors now use the "Price/Cash Flow Ratio", which removes non-cash items and considers cash items only.
It is normally assumed that a low P/E ratio indicates a company is undervalued. This is not always true: a low ratio may simply reflect the stock market's belief that the company faces serious problems, or the company itself may have warned of lower-than-expected earnings. Such things can lead to a low P/E ratio.
How to compare two companies with different Price to Earnings (P/E) ratio?
When comparing two companies with different Price to Earnings (P/E) ratios, it's important to consider several factors that can affect their ratios, such as industry, growth prospects, and risk
profile. Here are a few ways to compare companies with different P/E ratios:
1. Use P/E ratios relative to their peers: Compare the P/E ratios of each company to other companies in the same industry or sector. If Company A has a higher P/E ratio than Company B, but both are
in the same industry, it may indicate that Company A is expected to grow at a faster rate or has a better risk profile than Company B.
2. Look at historical trends: Compare the current P/E ratios of each company to their historical averages. If Company A has a higher P/E ratio than its historical average while Company B has a lower
P/E ratio than its historical average, it may indicate that Company A is currently overvalued and Company B is currently undervalued.
3. Use forward P/E ratios: Consider the forward P/E ratio, which is calculated using the expected earnings per share for the upcoming year. This can provide a better indication of a company's future
growth prospects. If Company A has a higher forward P/E ratio than Company B, it may indicate that Company A is expected to grow at a faster rate in the future.
4. Consider other valuation metrics: Look at other valuation metrics such as Price to Sales (P/S), Price to Book (P/B), or Enterprise Value to Earnings Before Interest, Taxes, Depreciation, and
Amortization (EV/EBITDA) to get a more complete picture of the relative value of each company.
| {"url":"https://www.readyratios.com/reference/market/price_to_earnings_ratio.html","timestamp":"2024-11-04T18:49:57Z","content_type":"text/html","content_length":"51581","record_id":"<urn:uuid:c52715df-217b-421a-a59c-5b0fed24fda9>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00101.warc.gz"}
A soccer ball with a radius of 25cm is kicked with class 11 physics JEE_Main
Hint: The given problem can be solved using one of the kinematic equations. Here we will use the rotational kinematics formula to determine how much the ball rotates before it comes to rest.
Formula used:
The kinematics of the rotational motion of the ball is given by;
$\omega _f^2 - \omega _i^2 = 2\alpha \theta $
Where, ${\omega _f}$ denotes the final angular velocity of the ball, ${\omega _i}$ denotes the initial angular velocity of the ball, and $\alpha $ denotes the angular acceleration of the ball.
Complete step by step solution:
The data given in the problem is;
Final velocity, $v = 0\,\,m{s^{ - 1}}$,
Initial velocity, $u = 15\,\,m{s^{ - 1}}$,
Acceleration, $a = - 25.0\,\,m{s^{ - 2}}$,
Radius of the ball, $r = 25cm = 0.25m$
Applying the rotational kinematics equation to the ball:
$\Rightarrow \omega _f^2 - \omega _i^2 = 2\alpha \theta $
Where, $v = r{\omega _f}$; $u = r{\omega _i}$; $a = r\alpha $;
$\Rightarrow \dfrac{{{v^2}}}{{{r^2}}} - \dfrac{{{u^2}}}{{{r^2}}} = 2\left[ {\dfrac{a}{r}} \right]\theta $
Now substitute the values of $v$,$u$,$a$and $r$ in the above equation;
$\Rightarrow \dfrac{0}{{{{\left( {0.25} \right)}^2}}} - \dfrac{{{{\left( {15} \right)}^2}}}{{{{\left( {0.25} \right)}^2}}} = 2\left[ {\dfrac{{ - 25.0}}{{0.25}}} \right]\theta $
$\Rightarrow 0 - \dfrac{{225}}{{0.0625}} = 2\left[ { - 100} \right]\theta $
$\Rightarrow - 3600 = - 200 \times \theta $
$\Rightarrow \theta = 18$ radian
The number of revolutions that the ball made is;
$\Rightarrow N = \dfrac{\theta }{{2\pi }}$
Where, $N$ is the number of revolutions made by the ball.
Substitute the value of $\theta = 18$ radian;
$\Rightarrow N = \dfrac{{18}}{{2\pi }}$
$\Rightarrow N = 2.9$ revolutions.
Therefore, the number of revolutions made by the ball is $N = 2.9$ revolutions before coming to rest.
Hence, the option $N = 2.9$ revolutions is the correct answer.
Thus, the option A is correct.
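The arithmetic above can be checked numerically. A quick sketch using the given values (variable names are ours):

```python
import math

r = 0.25    # radius of the ball (m)
u = 15.0    # initial linear velocity (m/s)
v = 0.0     # final linear velocity (m/s)
a = -25.0   # linear acceleration (m/s^2)

# Angular equivalents: omega = v / r, alpha = a / r
omega_i, omega_f, alpha = u / r, v / r, a / r

# omega_f^2 - omega_i^2 = 2 * alpha * theta  =>  solve for theta
theta = (omega_f**2 - omega_i**2) / (2 * alpha)
N = theta / (2 * math.pi)   # revolutions before coming to rest

print(theta)        # 18.0 (radians)
print(round(N, 1))  # 2.9 (revolutions)
```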
Note: The rotational (angular) kinetic energy of an object is the kinetic energy it has due to its rotation. The number of revolutions is directly proportional to the angle turned and inversely proportional to $2\pi $. | {"url":"https://www.vedantu.com/jee-main/a-soccer-ball-with-a-radius-of-25cm-is-kicked-physics-question-answer","timestamp":"2024-11-08T18:04:37Z","content_type":"text/html","content_length":"149767","record_id":"<urn:uuid:b77644df-d303-4eb6-a2fd-bcb51c610445>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00263.warc.gz"}
Case Study Chapter 8 Trigonometry Mathematics
Refer to Case Study Chapter 8 Trigonometry Mathematics. These class 10 maths case-study-based questions have been designed as per the latest examination guidelines issued for the current academic year by CBSE, NCERT, and KVS. Students should go through these solved case studies so that they are able to understand the pattern of questions expected in exams and get good marks.
Chapter 8 Trigonometry Mathematics Case Study Based Questions
In ΔABC, right angled at B
AB + AC = 9 cm and BC = 3 cm.
Question. The value of cot C is
(a) 3/ 4
(b) 1/ 4
(c) 5/ 4
(d) None of these
Question. The value of sec C is
(a) 4/ 3
(b) 5/ 3
(c) 1/ 3
(d) None of these
Question. sin²C + cos²C =
(a) 0
(b) 1
(c) –1
(d) None of these
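These answers can be verified numerically. Since AC² = AB² + BC² (right angle at B) and AB + AC = 9 with BC = 3, the sides work out to AB = 4 and AC = 5 (a quick sketch, not part of the original question paper):

```python
import math

BC = 3.0
# AB + AC = 9 and AC^2 = AB^2 + BC^2 (right angle at B)
# => (AC - AB)(AC + AB) = 9, so AC - AB = 1, giving AB = 4, AC = 5.
AB, AC = 4.0, 5.0
assert math.isclose(AB + AC, 9.0) and math.isclose(AC**2, AB**2 + BC**2)

cot_C = BC / AB   # adjacent / opposite at angle C
sec_C = AC / BC   # hypotenuse / adjacent at angle C
sin_C, cos_C = AB / AC, BC / AC

print(cot_C)                           # 0.75 -> option (a) 3/4
print(sec_C)                           # 1.666... -> option (b) 5/3
print(round(sin_C**2 + cos_C**2, 10))  # 1.0 -> option (b)
```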
I. Application of Trigonometry—Height of Tree/Tower: Mr. Suresh is an electrician. He receives a call regarding a fault on a pole from three different colonies A, B and C. He visits each colony one by one to repair the fault. He needs to reach a point 1.3 m below the top of each pole to undertake the repair work. Observe the following diagrams.
Question. What is the distance of the point where the ladder is placed on the ground if the height of pole is 4 m?
(a) 2.5 m
(b) 3.8 m
(c) 1.56 m
(d) 5.3 m
Question. What should be the length of ladder DQ that enable him to reach the required position if the height of the pole is 4 m?
(a) 5√3/7 m
(b) 9√3/5 m
(c) 7√2/5 m
(d) 4√3/5 m
Question. The distance of the point where the ladder lies on the ground is
(a) 3 √5 m
(b) 4 √2 m
(c) 4 m
(d) 4 √7 m
Question. Given that the length of ladder is 4 √2 m . What is height of pole?
(a) 4,1/2 m
(b) 4 √5 m
(c) 5 √5 m
(d) 5.3 m
Question. The angle of elevation of reaching point of ladder at pole, i.e., H, if the height of the pole is 8.3 m and the distance GF is 7√3 m, is
(a) 30°
(b) 60°
(c) 45°
(d) None of these.
II. A group of students of class X visited India Gate on an educational trip. The teacher and students had an interest in history as well. The teacher narrated that India Gate (official name Delhi Memorial, originally called All-India War Memorial) is a monumental sandstone arch in New Delhi, dedicated to the troops of British India who died in wars fought between 1914 and 1919. The teacher also said that India Gate, which is located at the eastern end of the Rajpath (formerly called the Kingsway), is about 138 feet (42 metres) in height.
Question. They want to see the tower at an angle of 60°. The distance where they should stand will be
(a) 25.24 m
(b) 20.12 m
(c) 42 m
(d) 24.25 m
Question. The ratio of the length of a rod and its shadow is 1:1 . The angle of elevation of the Sun is
(a) 30°
(b) 45°
(c) 60°
(d) 90°
Question. What is the angle of elevation if they are standing at a distance of 42 m away from the monument?
(a) 30°
(b) 45°
(c) 60°
(d) 0°
Question. The angle formed by the line of sight with the horizontal when the object viewed is below the horizontal level is
(a) corresponding angle
(b) angle of elevation
(c) angle of depression
(d) complete angle
Question. If the altitude of the Sun is at 60°, then the height of the vertical tower that will cast a shadow of length 20 m is
(a) 20√3 m
(b) 20/√3m
(c) 15/√3m
(d) 15√3 m
III. A satellite flying at a height h is watching the tops of the two tallest mountains in Uttarakhand and Karnataka, these being Nanda Devi (height 7,816 m) and Mullayanagiri (height 1,930 m). The angles of depression from the satellite to the tops of Nanda Devi and Mullayanagiri are 30° and 60° respectively. The distance between the peaks of the two mountains is 1937 km, and the satellite is vertically above the mid-point of the distance between the two mountains.
Question. The distance of the satellite from the ground is
(a) 1139.4 km
(b) 566.96 km
(c) 1937 km
(d) 1025.36 km
Question. The distance of the satellite from the top of Mullayanagiri is
(a) 1139.4 km
(b) 577.52 km
(c) 1937 km
(d) 1025.36 km
Question. What is the angle of elevation if a man is standing at a distance of 7816 m away from Nanda Devi?
(a) 30°
(b) 45°
(c) 60°
(d) 0°
Question. The distance of the satellite from the top of Nanda Devi is
(a) 1118.29 km
(b) 577.52 km
(c) 1937 km
(d) 1025.36 km
Question. A milestone, far away from the mountain, makes an angle of 45° to the top of Mullayanagiri mountain. Find the distance of this milestone from the mountain.
(a) 1118.327 km
(b) 566.976 km
(c) 1937 km
(d) 1025.36 km | {"url":"https://worksheetsbag.com/case-study-chapter-8-trigonometry-mathematics/","timestamp":"2024-11-11T16:01:03Z","content_type":"text/html","content_length":"120099","record_id":"<urn:uuid:ca77c3c9-2534-4ce3-944c-96bef0c297aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00686.warc.gz"} |
Fractal analysis of geomagnetic data to decipher pre-earthquake process in Andaman-Nicobar region, India
Status: a revised version of this preprint was accepted for the journal NPG and is expected to appear here in due course.
Abstract. The emission of seismo-electromagnetic (EM) signatures prior to an earthquake, recorded in geomagnetic data, has the potential to reveal pre-earthquake processes in focal zones. This study analyses the vertical component of the geomagnetic field from March 2019 to April 2020 using fractal and multifractal approaches to identify EM signatures in Campbell Bay, a seismically active region of the Andaman and Nicobar subduction zone. Significant enhancements in the monofractal dimension and in the spectrum-width component of the multifractal analysis highlight the high-frequency and more complex nature of the EM signatures preceding earthquakes, indicating that the pre-earthquake processes on the West Andaman Fault (WAF) and the Andaman Trench (AT) are due to micro-fracturing. Moreover, significant enhancements in the Hölder exponent, another multifractal component, highlight the less correlated, smooth, low-frequency characteristics of EM signatures preceding earthquakes, indicating that the pre-earthquake processes on the Seulimeum Strand (SS) fault are due to electrokinetic processes. Thus, the monofractal dimension, the spectrum width, and the Hölder exponent respond differently to earthquakes with different characteristics, with EM signatures observed on average 10, 12, and 20 days prior to the earthquakes respectively, which lies within the range of short-term earthquake prediction.
Received: 22 Feb 2024 – Discussion started: 29 Apr 2024
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation
in this preprint. The responsibility to include appropriate place names lies with the authors.
Total article views: 694 (including HTML, PDF, and XML), calculated since 29 Apr 2024.
Latest update: 01 Nov 2024 | {"url":"https://npg.copernicus.org/preprints/npg-2024-8/","timestamp":"2024-11-02T06:05:19Z","content_type":"text/html","content_length":"341238","record_id":"<urn:uuid:262c65b0-ce7a-46be-97f6-43791bc492de>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00787.warc.gz"} |
Convert Millimeters Per Hour to Knots (mm/h to kn) | JustinTOOLs.com
Category: speed. Conversion: Millimeters Per Hour to Knots
The base unit for speed is meters per second (Non-SI Unit)
[Millimeters Per Hour] symbol/abbrevation: (mm/h)
[Knots] symbol/abbrevation: (kn)
How to convert Millimeters Per Hour to Knots (mm/h to kn)?
1 mm/h = 5.399577019073E-7 kn.
Always check the results; rounding errors may occur.
In relation to the base unit of [speed] => (meters per second), 1 Millimeters Per Hour (mm/h) is equal to 2.77778E-7 meters-per-second, while 1 Knots (kn) = 0.514444 meters-per-second.
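The two-step conversion through the base unit generalizes to any pair of speed units. A sketch using the factors listed on this page:

```python
# Conversion factors to the base unit (meters per second).
TO_MPS = {
    "mm/h": 1e-3 / 3600,   # ~2.77778e-7, as listed above
    "kn": 0.514444,
    "km/h": 1000 / 3600,
    "mi/h": 0.44704,
}

def convert(value, from_unit, to_unit):
    """Convert a speed by passing through the base unit (m/s)."""
    return value * TO_MPS[from_unit] / TO_MPS[to_unit]

print(convert(1, "mm/h", "kn"))   # ~5.3996e-07 kn
```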
1 Millimeters Per Hour to common speed units
1 mm/h = 6.2137168933429E-7 miles per hour (mi/h)
1 mm/h = 1.0356194822238E-8 miles per minute (mi/m)
1 mm/h = 1.726032470373E-10 miles per second (mi/s)
1 mm/h = 1.0E-6 kilometers per hour (km/h)
1 mm/h = 1.6666646666707E-8 kilometers per minute (km/m)
1 mm/h = 2.77778E-10 kilometers per second (km/s)
1 mm/h = 2.77778E-7 meters per second (m/s)
1 mm/h = 0.001 meters per hour (m/h)
1 mm/h = 5.399577019073E-7 knots (kn)
1 mm/h = 9.2656767235952E-16 speed of light (c) | {"url":"https://www.justintools.com/unit-conversion/speed.php?k1=millimeters-per-hour&k2=knots","timestamp":"2024-11-01T19:30:42Z","content_type":"text/html","content_length":"70974","record_id":"<urn:uuid:05b98f94-8e87-4143-bbd2-62d56775821a>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00237.warc.gz"} |
How to pick the least wrong colors
I’m a self-taught designer.^1 This has its upsides: I learn at my own pace (in my case, slowly over decades and counting), I design my own curriculum, and I don’t have to do any homework or pass any
tests. But it also has a serious downside: because I don’t have to learn anything, I skip the things that intimidate me. That’s why, for years and years, I’ve avoided learning much of anything about color.
But recently I picked up a pet project that forced me to fill in the gaps in my own knowledge. In the course of trying to solve a seemingly simple problem, I got a crash course in the fundamentals of
color. It was (sorry, I have to get this out of my system) eye-opening.
I thought it might be worthwhile to recap my journey so far, not just to share an interesting result to a common challenge in data visualization, but also to help any other learners who have been shy
about color.
The problem
Stripe’s dashboards use graphs to visualize data. While the color palettes we use are certainly passable, the team is always trying to improve them. A colleague was working on creating updated color
palettes, and we discussed the challenges he was working through. The problem boiled down to this: how do I pick nice-looking colors that cover a broad set of use cases for categorical data while
meeting accessibility goals?
An example of Stripe’s dashboard graphs showing categorical data
The domain of this problem is categorical data — data that can be compared by grouping it into two or more sets. In Stripe’s case, we’re often comparing payments from different sources, comparing the
volume of payments between multiple time periods, or breaking down revenue by product.
The criteria for success are threefold:
1. The colors should look nice. In my case, they need to be similar to Stripe’s brand colors.
2. The colors should cover a broad set of use cases. In short, I need lots of colors in case I have lots of categories.
3. The colors should meet accessibility goals. WCAG 2.2 dictates that non-text elements like chart bars or lines should have a color contrast ratio of at least 3:1 with adjacent colors.
Put this way, the problem doesn’t seem so daunting. Surely, I thought, a smart person has already solved these problems.
Prior art
There are, in fact, many ways to pick good colors for data visualizations. For example:
Why none of these solutions work for me
While most of the above palettes would serve as a good starting point, each has tradeoffs as an out-of-the-box solution. Using the ColorBrewer schemes sacrifices accessibility and applicability;
using any of them fails to get to the harmony I want with Stripe’s brand colors.
There are many, many other approaches and palettes to try. As I looked through the research and material, I had a sinking feeling that I’d need to settle for something that wasn’t optimal.
That conclusion turned out to be a breakthrough.
One night, I was watching a YouTube video called “Algorithmic Redistricting: Elections made-to-order.” My YouTube recommendation algorithm is very weird and finds things like this for me; I am also
very weird and like watching other people program algorithms.
In the video, Brian Haidet describes a process he used to draw different election maps. The algorithm could draw unfair maps that looked normal because Haidet used something called simulated annealing.
The name stuck in my head because it sounded cool. Googling it later, I realized simulated annealing might help me find a color palette because it works well on problems with the following qualities:
• Lots of possible solutions. I mean LOTS. One of the reasons why picking a set of colors is hard is because there are so many possible colors: a typical computer monitor in 2022 can display
16,777,216 distinct colors. That means a color palette of just three colors has 4.7 sextillion possibilities.^2
• Multiple competing success criteria. Just as Haidet’s maps had to be both unfair and normal-looking, my ideal color palettes had to be Stripe-y, accessible, and broadly applicable.
• Room for error. I’m not designing a space suit; if my palette isn’t perfect, nobody is going to die.
Simulated annealing is a very complex algorithm that I will now try to explain to the best of my (admittedly limited) abilities.
How simulated annealing works
Simulated annealing is named after a process called annealing. Annealing involves getting a piece of metal white-hot, then carefully cooling it down. Metalsmiths use annealing to strengthen metals.
Here’s how it works:
1. Annealing starts with a weak piece of metal. In such a piece, the metal’s atoms are spread unevenly: Some atoms are close enough to share magnetic bonds, but others are too far apart to bond. The
gaps that are left lead to microscopic deformities and cracks, weakening the metal.
2. When the weak metal is heated, the energy of heat breaks the bonds between atoms, and the atoms start to move around at random. Without the magnetic bonds between atoms, the metal is even weaker
than before, making it easy to bend and reshape if needed; when the metal is glowing hot, its atoms are moving freely, spreading out evenly over time.
3. Next, the metal is placed in a special container to slowly cool. As the metal cools, the energy from heat decreases, and gradually, the atoms slow down enough to form magnetic bonds again.
Because they’re more evenly spaced now, the atoms are more likely to share magnetic bonds with all their neighbors.
4. When fully cooled, the evenly-spaced atoms have many more bonds than they did before. This means there are far fewer imperfections; the metal is much stronger as a result.^3
Simulated annealing is an optimization algorithm that operates on a set of data instead of a piece of metal. It follows a process similar to metallurgical annealing:
1. Simulating annealing starts with data that is in an unoptimized state.
2. The algorithm begins by making a new version of the data, changing elements at random. Changing the data is like adding heat, resulting in random variations of the data. The algorithm scores both
the original and the changed data according to desired criteria, then makes a choice: keep the current state or go back to the previous one? Strangely, the algorithm doesn’t always pick the
version with the better score. At first, the algorithm is “hot,” meaning it picks between the two options (the current state and the randomly-changed one) with a coin toss. This means the data
can become even less optimal in the first stage of the process, just as the glowing-hot metal becomes weaker and more pliable than it was to start.
3. With each new iteration, the algorithm “cools down” a little. Here, temperature is a metaphor for how likely the algorithm is to choose a better-scoring iteration of the data. With each cycle,
the algorithm is more and more likely to pick mutations that are more optimized.
4. The algorithm finishes when the data settles into a highly optimized state; random changes will almost always result in a worse-scoring iteration, so the process comes to a halt.
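The four steps above can be sketched as a generic optimizer in Python (a minimal illustration; `loss` and `mutate` are hypothetical problem-specific callbacks, not code from the original post):

```python
import math
import random

def anneal(state, loss, mutate, t_start=1.0, t_end=1e-3, cooling=0.995):
    """Generic simulated annealing. `loss` scores a state (lower is
    better); `mutate` returns a randomly changed copy of a state."""
    t = t_start
    current, current_loss = state, loss(state)
    while t > t_end:
        candidate = mutate(current)
        candidate_loss = loss(candidate)
        delta = candidate_loss - current_loss
        # Always accept improvements; accept worse states with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_loss = candidate, candidate_loss
        t *= cooling
    return current

# Toy usage: minimize (x - 3)^2 starting from x = 0.
best = anneal(0.0, lambda x: (x - 3) ** 2,
              lambda x: x + random.uniform(-0.5, 0.5))
print(abs(best - 3) < 1)  # True: the optimizer settles near the minimum
```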
If we have a formula for comparing how optimized two versions of our data are, why don’t we always choose the better-scoring one? This is the key to simulated annealing. Hill-climbing algorithms,
ones that always pick iterations with better scores, can quickly get stuck in what are called “local maxima” or “local minima” — states which are surrounded by less optimal neighbors but aren’t as
optimal as farther-away options.
So, by sometimes picking less-optimal iterations, simulated annealing can find good solutions to problems that have complicated sets of criteria, like my problem of picking color palettes.
Choosing the right evaluation score
In applying simulated annealing to any set of data, the most important ingredient is a way of grading how optimal that data is. In my case, I needed to find a way to score a set of colors according
to my desired criteria. Again, those were:
• Nice-looking (i.e., Stripe-y)
• Broadly applicable
• Accessible
For each criteria, I needed to come up with an algorithm-friendly way of numerically scoring a color palette.
The first criteria was one of the most challenging to translate into an algorithmic score. What does it mean for a color to be “nice-looking?” I knew that I wanted the colors to look similar to
Stripe’s brand colors because I thought Stripe’s brand colors look nice; but what makes Stripe’s brand colors look nice? I quickly realized that, in the face of hundreds of years of color theory and
art history, I was way out of my depth.
And so I hope you’ll forgive me for one small cop-out: instead of figuring out a way to objectively measure how nice-looking a set of colors are, I decided it’d be much easier to start with a set of
colors about which a human had already decided (for whatever reason) “these look nice.” The score for how nice-looking a set of colors is would be: “how similar they are to a given
subjectively-determined-by-a-human-to-look-nice color palette?”
Only it turns out that even the question of “how similar are these two colors” is itself a very, very deep rabbit hole. There are decades of research published in hundreds of papers on the dozens of
ways to measure the distance between two colors. Fortunately for me, this particular rabbit hole was mostly filled with math.
All systems of color distance rely on the fact that colors can be expressed by numbers. The simplest way to find the distance between two colors is to measure the distance between the numbers that
represent the colors. For example, you might be familiar with colors expressed in RGB notation, in which the values of the red, green, and blue components are represented by a number from 0 to 255.
When the components are blended together by a set of red, green, and blue LEDs, we see the resulting light as the given color.
In programming languages like CSS, rgb(255, 255, 255) is a way to express the color white — the red, green, and blue components each have the value of 255, the maximum for this notation. Pure red is
rgb(255, 0, 0). One way to measure the distance between white and red is to add each component’s colors together — for white, 255 + 255 + 255 = 765, and for red, 255 + 0 + 0 = 255 — then subtract the
two resulting numbers — 765 - 255 = 510.
For reasons beyond the scope of this essay, this simple measurement of distance doesn’t match the way people perceive colors^4 — for example, darker colors are easier to tell apart than lighter
colors, even if they have the same simple numeric difference in their RGB values. So scientists and artists have developed other ways to measure the difference between two colors, each with its own
strengths and weaknesses. Some are better at matching our perception of paint on canvas, others are better at representing the ways colors look on computer monitors.
I spent a lot of time investigating these different color measurement systems. In the end, I settled on the International Commission on Illumination’s (CIE’s) ΔE* measurement.^5 ΔE* is essentially a
mathematical formula that measures the perceptual difference between two colors. Perceptual is the key word: The formula adjusts for some quirks in the way that we see and perceive color. It’s still
somewhat subjective, but at least it’s subjective in a way that stands up to experimental scrutiny.
$\Delta E_{00}^{*} = \sqrt{\left(\frac{\Delta L'}{k_L S_L}\right)^2 + \left(\frac{\Delta C'}{k_C S_C}\right)^2 + \left(\frac{\Delta H'}{k_H S_H}\right)^2 + R_T\,\frac{\Delta C'}{k_C S_C}\,\frac{\Delta H'}{k_H S_H}}$
The ΔE* measurement between two colors ranges from zero to 100. Zero means “these colors are identical,” and 100 means “these colors are as different as possible.” For my algorithm, I wanted to
minimize the ΔE* between my algorithmically selected colors and the nice-looking colors I hand-picked.
Broadly applicable
The “broadly applicable” criteria comes up most often when providing a list of colors for designers to pick from.
Colors that are very different from one another work in a wider variety of situations than colors that are from a similar family. Combining different hues and shades gives us a larger number of
colors to use if we have many categories to visualize.
Colors that are equally different from one another reduce the chance that someone viewing the chart will see a relationship that doesn’t exist in the data.
To come up with a way of numerically calculating how applicable a color palette is, my friend ΔE* came in handy. The “very different” criteria can be measured by taking the average distance between
all the colors in the set — this should be as high as possible. The “equally different” criteria can be measured by taking the range of the distances between all of the colors — this should be as
small as possible.
WCAG 2.2 dictates that non-text elements like chart bars or lines should have a color contrast ratio of at least 3:1 with adjacent colors.
There are lots of colors that have a 3:1 contrast ratio with 1 other color (like a white or black background). But for something like a stacked bar chart, pie chart, or a multi-line chart, it’s
likely that there will be multiple elements adjacent to each other, all with their own color. Finding just three colors with a 3:1 contrast ratio to each other is extremely challenging. Anything
beyond three colors is essentially impossible.
It’s important to note that WCAG is only concerned with color contrast, just one of the ways in which colors can be perceived as different. Color contrast measures the relative “brightness” of two
colors; the relative brightness of colors depends highly on whether or not you have colorblindness. The way people perceive the amount of contrast between two colors varies greatly depending on the
biology of their eyes and brains.
Fortunately, addressing the WCAG contrast requirement can be done without too much difficulty. Instead of finding colors that have a 3:1 contrast ratio, use a border around the colored chart
elements. As long as the chart colors are of a 3:1 contrast ratio with the border color, you’re set.
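For reference, the 3:1 threshold comes from WCAG's contrast-ratio formula, which compares the relative luminance of two colors. A sketch following the WCAG 2.x definition:

```python
def srgb_to_linear(c):
    """Linearize one sRGB channel given in the 0-255 range."""
    c = c / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio, from 1:1 (identical) to 21:1 (black/white)."""
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))   # 21.0
print(contrast_ratio((255, 255, 255), (118, 118, 118)) >= 3)  # True
```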
For my color palettes, I wanted to go beyond the WCAG contrast requirement and find colors that work well for people with and without color blindness. Fortunately, there is (again) a deep body of
published research on mathematical ways to simulate color blindness. Like our ΔE* formula earlier, there is a formula — created by Hans Brettel, Françoise Viénot, and John D. Mollon^6 — for
translating the color a person without color blindness sees into the (approximate) color a person with colorblindness would see.
$\begin{pmatrix} L_Q \\ M_Q \\ S_Q \end{pmatrix} = \begin{bmatrix} L_R & L_G & L_B \\ M_R & M_G & M_B \\ S_R & S_G & S_B \end{bmatrix} \begin{pmatrix} R_Q \\ G_Q \\ B_Q \end{pmatrix}$
One bonus to this formula is that it comes with variations for the different types of colorblindness. When evaluating, we can adjust the score for each type of colorblindness, depending on how common
or uncommon it is (more on this in a bit).
To determine whether a color palette works well for people with colorblindness, I first translate the colors into their color blindness equivalent using the matrix math above. Then, I measure the
average of the distances between the colors using the ΔE* like we did above.
Putting it all together
At this point, I had an algorithm that could optimize a set of data in a huge solution space, and quantitative measurements of each of the criteria I wanted to evaluate for. The last step was to
crunch all the quantitative measurements down into a single number representing how “good” a color palette is.
The calculation that produces this single number is called the “loss function” or “cost function.” The resulting number is the “loss” or “cost” of a set of colors. It’s essentially just a measurement
of how optimized the data is; the lower the loss, the better; a loss of zero means that we have the ideal solution.^7
In practice, optimizers stop long before they get to zero loss. In most optimization problems, there’s no such thing as the one right answer. There are only lots of wrong answers. Our goal is to find
the least wrong one.
To that end, at the very end of my loss function, all the individual criteria scores are added together. Each is given a multiplier, which allows me to dial up or down how important a value is:
Multiplying the “nice-looking” score by a higher value means that the optimizing algorithm will tend to pick nice-looking palettes, even if they aren’t as accessible or versatile.
$loss = a \cdot nice + b \cdot applicable + c \cdot protanopia + d \cdot deuteranopia + e \cdot tritanopia$
I’m calculating scores for three different types of colorblindness, each with its own multiplier. This means that the overall tradeoffs in the algorithm are in relation to how common each type of
colorblindness is.
I plug the loss function into the simulated annealing algorithm and run it. On my 2016 MacBook, the algorithm ran in about three seconds. It’s far from optimized, but not too shabby considering it
generates and evaluates around 16,000 color palettes.
Simulating Quantum Dynamics Systems with NVIDIA GPUs | NVIDIA Technical Blog
Quantum dynamics describe how objects obeying the laws of quantum mechanics interact with their surroundings, ultimately enabling predictions about how matter behaves. Accurate quantum dynamics
simulations inform the development of new materials, solar cells, batteries, sensors, and many other cutting-edge technologies. They’re also a critical tool in designing and building useful quantum
computers, including the design of novel types of qubits, improving gate fidelities, and performing device calibration.
In practice, simulating quantum systems is extremely challenging. The standard steps of a dynamics simulation include preparing a quantum state, evolving it in time, and then measuring some property
of the system, such as its average energy or the transition probabilities between its energy levels. Concretely, this means solving differential equations governed by the Schrödinger equation
or the Lindblad master equation. Many-body quantum systems are represented by exponentially large Hilbert spaces, which make exact solutions intractable for conventional simulation methods.
To overcome this, clever approximations and numerical methods are used instead. The challenge is finding approximate methods that are computationally efficient while retaining a high degree of
accuracy. Techniques like tensor networks can efficiently compute the dynamics of large-scale quantum systems, but struggle with highly entangled systems. New tools are needed to extend the reach of
simulation techniques and explore more interesting and relevant systems.
Researchers Jens Eisert and Steven Thomson from the Free University of Berlin used NVIDIA GPUs to develop and test a powerful new method for simulating quantum dynamics. Their article Unravelling
Quantum Dynamics Using Flow Equations, recently featured in the journal Nature Physics, provides a powerful new GPU-accelerated method to simulate these systems.
Streamlined dynamics simulations
Jens and Steven tackled the challenge of simulating quantum systems using the method of flow equations. Instead of taking a single quantum state and evolving it in time, the flow equations method
diagonalizes the Hamiltonian matrix $H$ describing the quantum system. This is accomplished by applying a large number of infinitesimally small unitary transformations ($U^{\dagger}HU$, where $U$ is
a unitary matrix) to the initial $H$.
The full unitary transform is a time-ordered integral over the dummy flow time variable $l$. A time-ordered integral ensures that each step corresponds to the evolution of the Hamiltonian
chronologically as $l$ goes from 0 to infinity. It turns out that this numerical task can be efficiently parallelized using GPUs, offering a tractable approach to simulating a system’s dynamics.
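The diagonalization idea can be sketched with the classic Wegner generator, η(l) = [diag(H), H], one textbook instance of the flow-equation method; the authors' scrambling-transform technique adds machinery on top of this basic scheme, and the forward-Euler integrator, step size, and test matrix below are illustrative choices only:

```python
import numpy as np

def commutator(a, b):
    return a @ b - b @ a

def wegner_flow(h, dl=1e-3, steps=20000):
    """Continuously diagonalize a Hermitian matrix by integrating
    dH/dl = [eta, H] with the Wegner generator eta = [diag(H), H].
    Off-diagonal elements decay as the flow time l grows.
    (Forward-Euler for brevity; production code would use an adaptive
    ODE integrator, and on a GPU these matmuls map to batched BLAS.)"""
    h = h.astype(float).copy()
    for _ in range(steps):
        eta = commutator(np.diag(np.diag(h)), h)
        h += dl * commutator(eta, h)
    return h

h0 = np.array([[1.0, 0.4, 0.0],
               [0.4, 2.0, 0.3],
               [0.0, 0.3, 4.0]])
hf = wegner_flow(h0)
# The flow suppresses off-diagonal terms while preserving the spectrum.
```

For many-body problems the matrices are replaced by operator expansions, but the structure — many small tensor contractions per flow step — is exactly what parallelizes well on GPUs.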
The primary advantage of flow equations is that the simulation is not limited by the degree of entanglement, but by the desired accuracy of the numerical procedure. This means that the error is a
mathematical truncation that tends to be far less restrictive than the so-called ‘entanglement barrier’, and can be systematically improved when higher accuracy is required.
The second advantage is that a two or three dimensional system can easily be “unfolded” into a one-dimensional representation and solved with flow equations (Figure 1). The ability to simulate
multidimensional systems is crucial for real-world quantum applications, which generally require consideration of more than one dimension.
Unfortunately, flow equations are not a panacea for simulating quantum dynamics. They struggle to converge when the initial Hamiltonian has multiple states with nearly identical energies, a common
occurrence for some of the most interesting cases. This led Jens and Steven to propose the innovative idea of using so-called scrambling transforms. Using these to ‘scramble’ the initial Hamiltonian
with an additional transformation helps remove degeneracies which would otherwise impede the diagonalization procedure (Figure 2).
Figure 2. Scrambling the initial Hamiltonian can improve the convergence towards the final solution. Image adapted from Unravelling Quantum Dynamics Using Flow Equations
Large-scale GPU-enabled dynamics simulations
Studies using the flow equation technique have been largely analytical, leveraging pen and paper to find clever ways of avoiding unwieldy calculations. In 2023, Steven and his colleague Marco Schirò
published foundational work for turning this promising technique into a powerful and more reliable numerical method, which can leverage the strengths of NVIDIA GPUs. For details, see Local integrals
of Motion in Quasiperiodic Many-Body Localized Systems.
The method is well suited for parallelisation, as the many underlying matrix and tensor multiplications can be efficiently split into many smaller operations. A single NVIDIA GPU (such as the NVIDIA
RTX A5000 used by Steven) runs operations on tens of thousands of cores, providing a huge speedup compared to even the best multicore CPUs.
The gap between CPU and GPU calculations grows quickly, even when only considering relatively small systems and modest GPU resources (Figure 3). Performing 24 particle simulations, which required
over 2 hours to run on CPU, could be completed in under 15 minutes on a single NVIDIA GTX 1660Ti GPU. Even higher speedups are expected using more powerful data-center grade GPUs like the NVIDIA H100
Tensor Core, which alleviates the memory bottleneck.
Figure 3. GPUs provide a significant speedup (more than 8x for L = 24 particles) over CPUs for flow equation simulations. Image credit: Steven J. Thomson and Marco Schiro
The speedup provided by GPUs enables the flow-equation technique to be employed for larger scale 2D systems, unlocking a new frontier for numerical simulations of quantum matter.
According to Steven Thomson, “GPUs were absolutely essential to the success of this work, and our numerical technique was developed specifically to make use of their strengths. Without them, our
simulations would have taken tens or hundreds of times longer to run. This would have not only taken unreasonably long, but would also have come with a huge environmental cost due to the energy
required to run our simulations for such a long time.”
A new dimension for quantum dynamics
Future work will explore flow equation simulations of larger 2D and 3D systems, leveraging multi-node GPU systems to further push the boundaries of quantum dynamics simulations. By building on the
foundation laid by Jens and Steven, researchers will be able to simulate a wider variety of quantum systems than ever before, complementing the strengths and weaknesses of existing methods such as
tensor networks.
Get started accelerating your research
This groundbreaking work was possible in part thanks to the NVIDIA Academic Grant Program, which grants researchers free access to NVIDIA compute resources to further their work. Researchers focused
on generative AI and large language models (LLMs), simulation and modeling (including quantum computing), data science, graphics and vision, and edge AI are encouraged to apply.
To learn more about NVIDIA initiatives related to quantum computing and simulation, including tools like CUDA-Q for developing large-scale quantum applications, visit NVIDIA Quantum.
(PDF) Cycloidal rotor aerodynamic and aeroelastic analysis
All content in this area was uploaded by Louis Gagnon on Mar 02, 2016
Dipartimento di Scienze e Tecnologie Aerospaziali, Politecnico di Milano, via La Masa 34
Milano 20156, Italy
Cycloidal rotors consist of an arrangement of blades that rotate and pitch about a central drum. The blades are aligned
with the drum axis. By pitching harmonically with respect to the azimuth, they create a net propulsive force which
can propel an aircraft. The aerodynamic response of a cycloidal rotor in various environments has been studied using
a simplified analytical model resolved algebraically and by a multibody model resolved numerically. A computational
fluid dynamics model has also been developed and is used to understand the cycloidal rotor behavior in unsteady flow.
The latter model is also used to investigate the influence of the rods used to hold the blades on the aerodynamics
of the rotor. The models are also used to calculate the efficiency and the stability of the cycloidal rotor in various
configurations. The three developed models are validated using a set of data obtained from experiments conducted by
three different parties. In all the experiments, the spans and radii were on the order of a meter. The use of cycloidal
rotors as a replacement for the tail rotor of a helicopter has also been studied. Preliminary analysis suggests that they
can potentially cut down the propulsive energy demand of a helicopter by 50% at high velocities. An aeroelastic model
is used to assess the influence of flexibility on the system. Different cycloidal rotor configurations have been considered.
Keywords cycloidal rotor; tail rotor; CFD; multibody dynamics; aeroelasticity
1. Introduction
A cycloidal rotor is a device which interacts with the surrounding fluid to produce forces whose direction
can be rapidly varied. A sketch illustrates the dynamics of such a rotor in Fig. 1(a); a graphical representation
is shown in Fig. 1(b). The forces produced can take any direction in a plane normal to the axis of rotation.
They have a very small latency in response time and to some extent become increasingly efficient with increasing airflow speed.
In a cycloidal rotor, a drum carries a set of wings, or blades, whose axes are aligned with the rotation
axis of the drum. Each blade can pitch about a feathering axis which is also aligned with the rotation
(a) Working principle. (b) Viewed from the side.
Figure 1. Schematic representations of cycloidal rotors.
axis of the drum. By pitching the blades with a period equal to that of the drum rotation, a net aerody-
namic force normal to the axis of rotation is generated. The direction of the force is controlled by changing
the phase of the periodic pitch. Thus, the thrust can be vectored in a plane without moving large me-
chanical parts. Cycloidal rotors have been studied for their application in vertical axis wind and water
turbines (Hwang et al., 2009; Maître et al., 2013; El-Samanoudy et al., 2010). Recent studies also investigated
their application in unmanned micro aerial vehicles (Jarugumilli et al., 2014; Benedict et al., 2011;
Yun et al., 2007; Iosilevskii and Levy, 2006). Nowadays, their commercial use is limited to wind turbines
and marine propellers. This paper focuses on a manned aircraft application of such rotors. A hover-capable
vehicle is proposed. It makes use of a conventional helicopter rotor for lift both in hover and forward flight,
and uses one or more cycloidal rotors for anti-torque, additional propulsion, and some control. In the explored
design configuration, power demanding tasks like lift in hover and in forward flight are delegated to a very
efficient device, the main rotor.
Several researchers have experimentally explored the concept, e.g. Yun et al. (2007), IAT21 (Xisto et al.,
2014), and Bosch Aerospace as reported by McNabb (2001). They assessed the potential of cycloidal rotors
to be used in large size unmanned or small size manned vehicle propellers. Previous work by the current
authors led to the development of various models for the study of the cycloidal rotor under different configurations (Gagnon et al., 2014a,b,c).
The scope of this paper is to study the implementation of a cycloidal rotor as the replacement of the tail
rotor on otherwise conventional helicopters. It has been previously shown that the configuration is aerody-
namically efficient. This paper further studies the implementation feasibility by studying the efficiency of the
cycloidal rotor airfoils under various operating conditions. The influence of the main helicopter rotor on the
rotor aerodynamics is also discussed. A computational fluid dynamics model is used to evaluate the importance
of the three-dimensionality of the rotor flow. The proposed concept is shown in Fig. 2. It was chosen after
inspecting the aerodynamics of three configurations which were reported in previous works (Gagnon et al., 2014a,b,c).
This research project is part of a European effort to find an optimal use of cycloidal rotors. The objective
is to study their use in passenger carrying missions. One of the main ideas is to develop a long distance
rescue vehicle which can hover and travel at higher velocities than a traditional helicopter. Four universities
and two companies constitute the Cycloidal Rotor Optimized for Propulsion (CROP) consortium; additional
information can be found on the CROP website.
Figure 2. Proposed helicopter concept.
2. Analytical Aerodynamic Model
2.1. Definitions
The algebraic mathematical model is used to obtain the pitching schedule required to produce the wanted
thrust magnitude and direction. It can also be used to compute the resulting thrust magnitude and direction
for a given pitching schedule. The model is also used to estimate the required power. It assumes a constant
lift-curve slope $a = C_{L/\alpha}$ and a constant drag coefficient $C_{D0}$, so its use is limited to small angles of attack.
It neglects blade-wake interaction. It is convenient to specialize its solution for different flight regimes. The
schematic of the model is shown in Fig. 3, which represents the side view of the cycloidal rotor, and is in the
plane of the supporting arms previously shown in Fig. 1(b). Three different reference systems are used and
are explained in Table 1.
Figure 3. Definition of reference systems.
Table 1. Description of the coordinate systems.
System name | Subscript | Description
Basic reference | o | $Y_o$ points in the direction opposite to gravity. $X_o$ is positive towards the …
Rotating reference | r | $y_r$ is directed radially outward from the circular rotor path. $x_r$ is tangent to the circular path and points backwards.
Body reference | b | the rotating reference rotated by the pitch angle $\theta$, which is positive in the clockwise direction.
The thrust $T$ produced by the cycloidal rotor is shown in Fig. 3. Its direction is defined as having an angle $\beta$ with respect to the $Y_o$ axis,

$\mathbf{T}_o = \begin{pmatrix} -\sin\beta \\ \cos\beta \end{pmatrix} T \qquad (1)$

According to momentum theory, thrust produces an inflow velocity $v_i$, which moves opposite to the thrust,

$\mathbf{v}_{i,o} = \begin{pmatrix} \sin\beta \\ -\cos\beta \end{pmatrix} v_i \qquad (2)$

The free airstream velocity $U$ has an angle $\gamma$ with respect to the horizontal $X_o$ axis. Thus,

$\mathbf{U}_o = \begin{pmatrix} \cos\gamma \\ \sin\gamma \end{pmatrix} U \qquad (3)$
Now that the variables $U$, $v_i$, $\beta$, $T$, and $\gamma$ are defined, the definition of noteworthy flight regimes is given in
Table 2.
Table 2. Cycloidal rotor flight regimes.
Regime | Subscript | Description
Hover | H | $U = 0$ (thus $\gamma$ is irrelevant); $T > 0$, $v_i > 0$, with $\beta = 0$.
Forward Flight | F | $U > 0$, with $\gamma \approx 0$; $T > 0$, $v_i > 0$, with $0 < \beta < \pi/2$ because both lift and propulsive force need to be produced.
Forward Lift | L | $U > 0$, with $\gamma \approx 0$; $T > 0$, $v_i > 0$, with $\beta = 0$ because only lift needs to be produced.
Propulsion | P | $U > 0$, with $\gamma \approx 0$; $T > 0$, $v_i > 0$, with $\beta \approx \pi/2$ because thrust is now essentially aligned with the forward flight speed.
Reverse Propulsion | RP | $U > 0$, with $\gamma \approx 0$; $T > 0$, $v_i > 0$, with $\beta \approx -\pi/2$ because thrust is now essentially aligned against the forward flight speed.
The other typical variables are defined as follows. First, the thrust coefficient is

$C_T = \dfrac{T}{\rho (\Omega R)^2 A} \qquad (4)$

where $A$ is the area of the hollow cylinder described by the path of the cyclorotor blades,

$A = 2\pi R b, \qquad (5)$

with $b$ being the span of the rotor.

Solidity $\sigma$ is the ratio of the total area covered by the blades to the ring area $A$,

$\sigma = \dfrac{Ncb}{2\pi R b} = \dfrac{cN}{2\pi R}, \qquad (6)$

with blade chord $c$ and number of blades $N$.
Using the simplest momentum theory and referring to Fig. 3, we have a mass flow $\dot{m}$ through the cycloidal rotor given by

$\dot{m} = \rho A_{\mathrm{eff}} \sqrt{(v_i \sin\beta + U\cos\gamma)^2 + (U\sin\gamma - v_i\cos\beta)^2} = \rho A_{\mathrm{eff}} \sqrt{U^2 + v_i^2 + 2 U v_i \sin(\beta - \gamma)} \qquad (7)$

where $A_{\mathrm{eff}} = 2Rb$ is the area of an ideal stream tube that runs across the drum, perpendicular to the drum axis.

Since the thrust $T$ is equal to the rate of change of the momentum, we obtain that

$T = \dot{m} w = 2 \dot{m} v_i \qquad (8)$

where $w$ is the velocity at the end of the streamtube defined by the momentum theory applied to the cycloidal rotor in a fashion inspired by Johnson (1994).
The induced velocity can be expressed as

$v_i \sqrt{U^2 + v_i^2 + 2 U v_i \sin(\beta - \gamma)} = v_i \sqrt{(U\cos(\beta-\gamma))^2 + (U\sin(\beta-\gamma) + v_i)^2} = \dfrac{T}{2\rho A_{\mathrm{eff}}} \qquad (9)$
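Equation (9) is implicit in $v_i$. A minimal numerical sketch (all values illustrative; bisection is chosen because the left-hand side grows monotonically with $v_i$ for $\sin(\beta-\gamma) \geq 0$, as in the Propulsion regime):

```python
import math

def induced_velocity(T, U, beta, gamma, rho, A_eff, tol=1e-10):
    """Solve the implicit momentum-theory relation of Eq. (9),
        v_i * sqrt(U^2 + v_i^2 + 2*U*v_i*sin(beta - gamma)) = T / (2*rho*A_eff),
    for the induced velocity v_i by bisection. Assumes U >= 0 and
    sin(beta - gamma) >= 0 so the left-hand side is monotonic in v_i."""
    rhs = T / (2.0 * rho * A_eff)
    f = lambda vi: vi * math.sqrt(U**2 + vi**2 + 2*U*vi*math.sin(beta - gamma)) - rhs
    lo, hi = 0.0, 1.0
    while f(hi) < 0.0:          # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

In hover ($U = 0$) this reduces to the familiar $v_i = \sqrt{T/(2\rho A_{\mathrm{eff}})}$, which gives a quick sanity check.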
We define the inflow coefficients,

$\lambda = \dfrac{v_i}{\Omega R}, \qquad (10)$

$\mu = \dfrac{U}{\Omega R} \qquad (11)$
We also define a torque coefficient starting from the torque $Q$,

$C_Q = \dfrac{Q}{\rho (\Omega R)^2 R A}, \qquad (12)$

and a power coefficient for $P = \Omega Q$,

$C_P = \dfrac{P}{\rho (\Omega R)^3 A} = C_Q \qquad (13)$
Still referring to Fig. 3, we define the rotation matrices that allow converting the vectors from one reference frame to another.

To go from the rotating frame to the body frame, the rotation operator is

$R_{br} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \qquad (14)$

In general we may expect $\theta$ to be small^b, so as a first approximation it is possible to use a linearized rotation matrix of the form

$R_{br} \approx \begin{bmatrix} 1 & -\theta \\ \theta & 1 \end{bmatrix} \qquad (15)$

The rotation matrix to transform a vector from the basic to the rotating frame is

$R_{ro} = \begin{bmatrix} \sin\psi & -\cos\psi \\ \cos\psi & \sin\psi \end{bmatrix} \qquad (16)$

with $\psi = \Omega t$.
^b This assumption is needed to simplify the formulas enough to make their analytical solution feasible; however, it might not hold in realistic operational cases, in which $|\theta|$ may grow as large as $\pi/4$.
2.2. Derivation
To start, it is necessary to compute the component of the air velocity with respect to the airfoil in the body reference frame. The flow velocity $\mathbf{V}$, as seen from the body reference frame, is thus

$\mathbf{V} = R_{br}\left[ \begin{pmatrix} \Omega R \\ 0 \end{pmatrix} + R_{ro}\left( U \begin{pmatrix} \cos\gamma \\ \sin\gamma \end{pmatrix} + v_i \begin{pmatrix} \sin\beta \\ -\cos\beta \end{pmatrix} \right) \right] \qquad (17)$

As a consequence,

$V_x = \Omega R - \cos\psi\,(U\sin\gamma - v_i\cos\beta) + \sin\psi\,(U\cos\gamma + v_i\sin\beta) - \theta\left[\cos\psi\,(U\cos\gamma + v_i\sin\beta) + \sin\psi\,(U\sin\gamma - v_i\cos\beta)\right] \qquad (18)$

$V_y = \theta\left[\Omega R - \cos\psi\,(U\sin\gamma - v_i\cos\beta) + \sin\psi\,(U\cos\gamma + v_i\sin\beta)\right] + \sin\psi\,(U\sin\gamma - v_i\cos\beta) + \cos\psi\,(U\cos\gamma + v_i\sin\beta) \qquad (19)$

Assuming that $\Omega R \gg U$, $\Omega R \gg v_i$ and $|\theta| \ll 1$,

$V_x \approx \Omega R \qquad (20)$

$V_y \approx \theta\,\Omega R - v_i\sin(\psi-\beta) + U\cos(\psi-\gamma) \qquad (21)$
The angle of attack of the airfoil is

$\alpha = \tan^{-1}\dfrac{V_y}{V_x} \approx \theta - \lambda\sin(\psi-\beta) + \mu\cos(\psi-\gamma) \qquad (22)$

Using the angle of attack, the lift and drag forces are computed using a simple steady-state, linear model,

$D = \tfrac{1}{2}\rho V^2 c b\, C_{D0} \qquad (23)$

$L = \tfrac{1}{2}\rho V^2 c b\, C_{L/\alpha}\, \alpha \qquad (24)$

These two force components are in the wind reference frame (i.e. $D$ is parallel to the wind and $L$ is perpendicular), so they must be transformed into force components in the body reference frame,

$F_{bx} = -L\sin\alpha + D\cos\alpha \approx -L\alpha + D \qquad (25)$

$F_{by} = L\cos\alpha + D\sin\alpha \approx L + D\alpha \qquad (26)$

assuming that the angle of attack is small. Then, these two force components must be transformed into the rotating reference frame,

$F_{rx} \approx F_{bx} + \theta F_{by} = -L\alpha + D + \theta L + \theta D\alpha \approx L(\theta-\alpha) + D \qquad (27)$

$F_{ry} \approx -\theta F_{bx} + F_{by} = \theta L\alpha - \theta D + L + D\alpha \approx L - D(\theta-\alpha)$
So, considering that $V^2 \approx (\Omega R)^2$,

$\dfrac{F_{rx}}{\rho(\Omega R)^2 cb/2} = C_{L/\alpha}\,(\theta - \lambda\sin(\psi-\beta) + \mu\cos(\psi-\gamma))\,(\lambda\sin(\psi-\beta) - \mu\cos(\psi-\gamma)) + C_{D0} = C_{L/\alpha}\left(\theta\,(\lambda\sin(\psi-\beta) - \mu\cos(\psi-\gamma)) - (\lambda\sin(\psi-\beta) - \mu\cos(\psi-\gamma))^2\right) + C_{D0} \qquad (28)$

$\dfrac{F_{ry}}{\rho(\Omega R)^2 cb/2} = C_{L/\alpha}\,(\theta - \lambda\sin(\psi-\beta) + \mu\cos(\psi-\gamma)) - C_{D0}\,(\lambda\sin(\psi-\beta) - \mu\cos(\psi-\gamma))$
The imposed pitch angle is expressed as a harmonic series truncated at the second harmonic^c,

$\theta = \theta_0 + \displaystyle\sum_{n=1}^{2}\left(\theta_{cn}\cos n\psi + \theta_{sn}\sin n\psi\right) \qquad (29)$

The two force components can be expressed as truncated Fourier series as well,

$F_{rx} = F_{rx0} + \displaystyle\sum_{n=1}^{2}\left(F_{rxcn}\cos n\psi + F_{rxsn}\sin n\psi\right) \qquad (30)$

$F_{ry} = F_{ry0} + \displaystyle\sum_{n=1}^{2}\left(F_{rycn}\cos n\psi + F_{rysn}\sin n\psi\right) \qquad (31)$
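Equation (29) is straightforward to evaluate; a small helper (names and default values are illustrative):

```python
import math

def pitch_angle(psi, theta0=0.0, thetac=(0.0, 0.0), thetas=(0.0, 0.0)):
    """Blade pitch of Eq. (29): a Fourier series in the azimuth psi,
    truncated at the second harmonic. `thetac` and `thetas` hold the
    cosine and sine coefficients (theta_c1, theta_c2), (theta_s1, theta_s2)."""
    return theta0 + sum(
        thetac[n - 1] * math.cos(n * psi) + thetas[n - 1] * math.sin(n * psi)
        for n in (1, 2)
    )
```

For instance, the single-harmonic Propulsion-regime pitch discussed later corresponds to `thetac=(theta_c1, 0.0)` with a negative `theta_c1` and all other coefficients zero.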
Now, the constant part of the force tangential to the cylindrical path, $F_{rx0}$, is computed by integration over a period. It is required to estimate the torque and thus the power required by the engine,

$\dfrac{F_{rx0}}{\rho(\Omega R)^2 cb/2} = C_{D0} - \tfrac{1}{2}C_{L/\alpha}\left(\mu^2 + 2\mu\lambda\sin(\beta-\gamma) + \lambda^2\right) - \tfrac{1}{2}C_{L/\alpha}\,(\mu\cos\gamma + \lambda\sin\beta)\,\theta_{c1} - \tfrac{1}{2}C_{L/\alpha}\,(\mu\sin\gamma - \lambda\cos\beta)\,\theta_{s1} \qquad (32)$

and the torque coefficient is

$\dfrac{C_Q}{\sigma} = \dfrac{F_{rx0}}{\rho(\Omega R)^2 cb} = \dfrac{C_{D0}}{2} - \tfrac{1}{4}C_{L/\alpha}\left(\mu^2 + 2\mu\lambda\sin(\beta-\gamma) + \lambda^2\right) - \tfrac{1}{4}C_{L/\alpha}\,(\mu\cos\gamma + \lambda\sin\beta)\,\theta_{c1} - \tfrac{1}{4}C_{L/\alpha}\,(\mu\sin\gamma - \lambda\cos\beta)\,\theta_{s1} \qquad (34)$
The forces must be transformed into the basic reference frame using the definition

$\mathbf{T} = R_{ro}^{T}\,\mathbf{F}_r \qquad (35)$

which gives

$T_x = \sin\psi\,F_{rx} + \cos\psi\,F_{ry} \qquad (36)$

$T_y = -\cos\psi\,F_{rx} + \sin\psi\,F_{ry} \qquad (37)$

The total average force is given by the constant part of $\mathbf{T}$ times the number of blades $N$. Recalling the definition of the thrust coefficient, $C_T$, and of the solidity, $\sigma$, the following relations are obtained,

$\dfrac{C_{Tx}}{\sigma} = \dfrac{T_{x0}}{\rho(\Omega R)^2 cb} = \dfrac{C_{L/\alpha}}{8}\Big[2\theta_{c1} + \lambda\big(\cos\beta\,(2\theta_0 - \theta_{c2}) + \sin\beta\,(2 - \theta_{s2})\big) + \mu\big(\cos\gamma\,(2 - \theta_{s2}) + \sin\gamma\,(-2\theta_0 + \theta_{c2})\big)\Big] + \dfrac{C_{D0}}{4}\,(\lambda\sin\beta + \mu\cos\gamma) \qquad (38)$

$\dfrac{C_{Ty}}{\sigma} = \dfrac{T_{y0}}{\rho(\Omega R)^2 cb} = \dfrac{C_{L/\alpha}}{8}\Big[2\theta_{s1} + \lambda\big(\cos\beta\,(-2 - \theta_{s2}) + \sin\beta\,(2\theta_0 + \theta_{c2})\big) + \mu\big(\cos\gamma\,(2\theta_0 + \theta_{c2}) + \sin\gamma\,(2 + \theta_{s2})\big)\Big] + \dfrac{C_{D0}}{4}\,(-\lambda\cos\beta + \mu\sin\gamma) \qquad (39)$
^c Strictly speaking, only the first harmonic is needed to be able to vector the thrust. The second harmonic is considered because it is the only higher harmonic that affects the thrust, and thus could be useful for higher order actuation.
The force coefficient and direction are given by

$C_T = \sqrt{C_{Tx}^2 + C_{Ty}^2} \qquad (40)$

$\beta = -\arctan\dfrac{C_{Tx}}{C_{Ty}} \qquad (41)$

The force coefficient components, in wind axes, are

$\dfrac{C_{T\parallel}}{\sigma} = \dfrac{C_{Tx}}{\sigma}\cos\gamma + \dfrac{C_{Ty}}{\sigma}\sin\gamma \qquad (42)$

$\dfrac{C_{T\perp}}{\sigma} = -\dfrac{C_{Tx}}{\sigma}\sin\gamma + \dfrac{C_{Ty}}{\sigma}\cos\gamma \qquad (43)$

which yield

$\dfrac{C_{T\parallel}}{\sigma} = \dfrac{C_{L/\alpha}}{8}\Big[2\theta_{c1}\cos\gamma + 2\theta_{s1}\sin\gamma + \mu\big(2 + \sin(2\gamma)\,\theta_{c2} - \cos(2\gamma)\,\theta_{s2}\big) + \lambda\big(2\sin(\beta-\gamma) + 2\cos(\beta-\gamma)\,\theta_0 - \cos(\beta+\gamma)\,\theta_{c2} - \sin(\beta+\gamma)\,\theta_{s2}\big)\Big] + \dfrac{C_{D0}}{4}\big(\lambda\sin(\beta-\gamma) + \mu\big) \qquad (44)$

$\dfrac{C_{T\perp}}{\sigma} = \dfrac{C_{L/\alpha}}{8}\Big[-2\theta_{c1}\sin\gamma + 2\theta_{s1}\cos\gamma + \mu\big(2\theta_0 + \cos(2\gamma)\,\theta_{c2} + \sin(2\gamma)\,\theta_{s2}\big) + \lambda\big(-2\cos(\beta-\gamma) + 2\sin(\beta-\gamma)\,\theta_0 + \sin(\beta+\gamma)\,\theta_{c2} - \cos(\beta+\gamma)\,\theta_{s2}\big)\Big] - \dfrac{C_{D0}}{4}\,\lambda\cos(\beta-\gamma) \qquad (45)$
This concludes the definition of the analytical aerodynamic cycloidal rotor model. The following section will
present the implicit algebraic solution of these equations.
3. Algebraic Solution
For the stated purpose of using the cycloidal rotor as a replacement part for the anti-torque rotor of the
helicopter, the two most important flight regimes that were considered are the Propulsion (P) and Reverse
Propulsion (RP) scenarios. These are the regimes in which the lateral cycloidal rotors work to provide torque
about the yaw axis to counteract the torque generated by the main rotor. Furthermore, the cycloidal rotors
used in this fashion can provide a net thrust. For both scenarios, the main interest is to estimate the amount
of power required to obtain the wanted torque and thrust.
It is expected that a vertical flow component originating from the main rotor will impinge the cycloidal
rotors located below it. This component is not considered by the current model; owing to the characteristics
of cycloidal rotors it is assumed that such inflow will require a pitch angle adjustment but will not impair
the efficiency of the main rotor. Another flight condition which has not yet been considered is the use of the
cycloidal rotor to contribute to the lift provided by the main rotor when flying at low advance ratios.
The thrust required of each cycloidal rotor is estimated by the following procedure, where the original
anti-torque and forward thrust provided by the BO105 tail and main rotor are Mtand Tt, respectively.
First, the thrust required of the right and left rotors, respectively $T_R$ and $T_L$, are computed from $M_t$ and $T_t$.
The power is estimated using the previously derived Eq. (32),
Equation (32) is solved using the values of $\lambda$ and $C_T$ obtained from the procedure presented in the following
two sections. The regime is Propulsion if the requested thrust is positive and Reverse Propulsion if the
requested thrust is negative. Both regimes start from the definition of the Forward Flight mode of a cyclogyro.
They differ from that regime because the perpendicular thrust component, CT⊥, is set to null. This definition
holds for cases where the main helicopter rotor takes all the anti-gravity forces.
3.1. Solving the Equations in Propulsion
In this configuration, the objective is to produce $C_{T\perp} = 0$ and $C_{T\parallel} = -C_{T_P} < 0$, i.e. $\beta = \pi/2$, with $\mu > 0$, $\gamma = 0$. Substituting these values into Eqs. (44) and (45) yields the simplified relations used below.

The collective pitch $\theta_0$ is currently maintained at zero. Thus, a negative cosine one-per-revolution pitch, $\theta_{c1}$, is used to produce the wanted negative horizontal force. This force pushes the rotor towards the left, opposite to the airstream direction. For now, a simple cycloidal rotor which uses only a single-harmonic pitching motion is considered. Consequently, we set $\theta_{c2} = \theta_{s2} = 0$. Then, for a null perpendicular thrust and the conditions just stated,

$\theta_{s1} = -(\lambda + \mu)\,\theta_0 \qquad (50)$

which indicates that $\theta_{s1}$ is null. The inflow coefficient in this case is

$\lambda = -\dfrac{\mu}{2} + \dfrac{1}{2}\sqrt{\mu^2 + 2\pi C_{T_P}} \qquad (51)$

The solution of (49) with (51) is thus

$\left(\dfrac{C_{T_P}}{\sigma a} - \mu + \dfrac{\kappa\mu}{\sigma a}\right)\left(\mu - \dfrac{\kappa\mu}{\sigma a} + 1\right)\kappa\,\cdots = 0 \qquad (52)$
where $a$ is $C_{L/\alpha}$ and $\kappa$ is a term which integrates flow non-uniformity and tip losses into the equations. This term is inserted as a multiplier of the inflow terms when solving Eq. (49). This method is inspired by the empirical factor of the Johnson (1994) momentum theory in hover, which uses $\lambda = \kappa\sqrt{C_T/2}$. Parameter $\kappa$ in the present work is adapted to the cycloidal rotor by an approach similar to Yun et al. (2007), which uses an empirical multiplier on one term when solving their implicit inflow equation.
Now, the solution of Eq. (49) yields, when choosing the appropriate root,

$\theta_{c1} = \dfrac{1}{2a\sigma}\left[(a\kappa - 2a)\mu\sigma + (C_d\kappa - 2C_d)\mu - (a\kappa\sigma + C_d\kappa)\sqrt{2\pi C_{T_P} + \mu^2} - 8C_{T_P}\right] \qquad (53)$

if $C_{T_P}$ is the known variable. Otherwise, if $\theta_c$ is the known variable we use

$C_{T_P} = \dfrac{\pi a^2\kappa^2\sigma^2}{64} + \dfrac{\pi C_d^2\kappa^2}{64} + \dfrac{a\sigma\theta_{c_P}}{8} + \dfrac{\pi C_d a\kappa^2 + 4(a\kappa - 2a)\mu\sigma}{32} - \dfrac{a\kappa\sigma + C_d\kappa}{64}\sqrt{\pi^2 a^2\kappa^2\sigma^2 + \pi^2 C_d^2\kappa^2 + \cdots + 64\mu^2 + 2\pi^2 C_d a\kappa^2 + 8(\pi a\kappa - 2\pi a)\mu\sigma} \qquad (54)$
For further interest, the torque coefficient in this case is defined using

$\dfrac{C_Q}{\sigma} = \dfrac{C_{D0}}{2} - \tfrac{1}{4}C_{L/\alpha}\left(\mu^2 + 2\mu\lambda + \lambda^2\right) - \tfrac{1}{4}C_{L/\alpha}\,(\mu + \lambda)\,\theta_{c1} = \dfrac{1}{2}C_{D0} + (\mu + \lambda)\,\dfrac{C_{T_P}}{\sigma} \qquad (55)$

which yields

$\dfrac{C_Q}{\sigma} = \dfrac{1}{2}C_{D0} + \left(\dfrac{\mu}{2} + \dfrac{1}{2}\sqrt{\mu^2 + 2\pi C_{T_P}}\right)\dfrac{C_{T_P}}{\sigma} \qquad (56)$

Looking back at Eq. (32) one can see that, for this regime, $\theta_{s1}$ and $\theta_0$ have no influence on the power consumed. This could be inspected in further detail by resolving Eqs. (44) and (45) with the intent of increasing thrust by changing the two aforementioned angles.
The solution for the Propulsion case works for all cases where the thrust pushes the rotor against the incoming wind and produces an inflow velocity $v_i$ which has the same direction as the incoming velocity $U$. It will also work for cases where the inflow velocity is opposed to the incoming velocity, up to the condition where the resulting velocity becomes null, such that $|U| = |v_i|$. There, it is expected that a condition analogous to the helicopter vortex ring state occurs. The Reverse Propulsion regime solution is used as soon as the required thrust changes direction. That solution is presented in the following section.
3.2. Solving the Equations in Reverse Propulsion
In the Reverse Propulsion case, everything is kept equal to the Propulsion case, with the exception that $\beta = -\pi/2$. What thus happens is that the direction of the resulting mass flow rate is changed; to keep a positive value of that mass flow rate, Eqs. (7), (8), and (9) become, when setting $\gamma = 0$,

$\dot{m} = (v_i - U)\,\rho A_{\mathrm{eff}} \qquad (57)$

which implies that

$T = 2(v_i - U)\,\rho A_{\mathrm{eff}}\, v_i \qquad (58)$

which, when solved algebraically, yields

$v_i = \dfrac{U}{2} + \dfrac{1}{2}\sqrt{U^2 + \dfrac{2T}{\rho A_{\mathrm{eff}}}} \qquad (59)$

This latest equation has a real solution for any positive value of a thrust pointing to the right, which was not the case when the equation was solved for the Propulsion regime.

The positive root of Eq. (59) is kept because $T$ and $v_i$ need to have the same sign. That logic comes from the definitions of Fig. 3 and the fact that the rate of change of the momentum is equal to the thrust. Thus, the inflow parameter of the Reverse Propulsion case is

$\lambda_{RP} = \dfrac{\mu}{2} + \dfrac{1}{2}\sqrt{\mu^2 + 2\pi C_{T_{RP}}} \qquad (60)$
When setting for a null thrust perpendicular to the wind we obtain, with $\theta_{c2} = \theta_{s2} = 0$,

$\theta_{s1} = (\lambda_{RP} - \mu)\,\theta_0 \qquad (62)$

The equation to solve is thus

$\left(\dfrac{C_{T_{RP}}}{\sigma a} - \mu - \dfrac{\kappa\mu}{\sigma a}\right)\left(\mu + \dfrac{\kappa\mu}{\sigma a} + 1\right)\kappa\,\cdots = 0 \qquad (63)$

where $\kappa$ is the same empirical term defined in Section 3.1. Finally, solving Eq. (63) yields
$C_{T_{RP}} = \dfrac{\pi a^2\kappa^2\sigma^2}{64} + \dfrac{\pi C_d^2\kappa^2}{64} + \dfrac{a\sigma\theta_{c_{RP}}}{4} + \dfrac{(C_d\kappa + 2C_d)\mu}{8} + \dfrac{\pi C_d a\kappa^2 + 4(a\kappa + 2a)\mu\sigma}{32} - \dfrac{a\kappa\sigma + C_d\kappa}{64}\sqrt{\pi^2 a^2\kappa^2\sigma^2 + \pi^2 C_d^2\kappa^2 + 32\pi a\sigma\theta_c + 16(\pi C_d\kappa + 2\pi C_d)\mu + 64\mu^2 + 2\pi^2 C_d a\kappa^2 + 8(\pi a\kappa + 2\pi a)\mu\sigma} \qquad (64)$

or, solving for $\theta_c$ when the pitch function is wanted and the required thrust is known,

$\theta_{c_{RP}} = -\dfrac{1}{2a\sigma}\left[(a\kappa + 2a)\mu\sigma + (C_d\kappa + 2C_d)\mu - (a\kappa\sigma + C_d\kappa)\sqrt{2\pi C_{T_{RP}} + \mu^2} - 8C_{T_{RP}}\right] \qquad (65)$

where $C_{T_{RP}} = -C_{T_P} = C_{T\parallel}$.
4. Calibration and Validation of the Analytical Model
The Propulsion and Reverse Propulsion flight models developed are validated using hover experimental
data. The lack of experimental data in forward flight in the desired scale prevented further experimental
validation. Three experimental datasets are available. They are used to calibrate and validate the analytical
propulsion models by imposing a null advance ratio. One dataset comes from Yun et al. (2007) and another
from IAT21 (Xisto et al., 2014), which is a member of the CROP consortium. The third dataset comes from
Bosch Aerospace, which ran an experimental campaign reported by McNabb (2001). For this last experiment,
the airfoils had a high drag coefficient. McNabb reported it to be $C_{D0} \approx 0.07$. The same $C_{D0}$ was used as
such in the current algebraic model for the Bosch dataset. The three experimental setups also differed in their
three-dimensional configuration. The Bosch rotor transmitted movement to the blades by rods, located at
midspan of the blades, which are covered by a cylindrical shell. The IAT21 experiments had a similar setup,
but located at both external edges of the blades. The Yun et al. setup was positioned similarly to that of
IAT21, but the rods were uncovered. Another difference between the experimental setups lies in the method
used to measure power. The power measured by Yun et al. was the supplementary power required by the
electrical drive when blades are added to the device. They obtained it by subtracting the power required by
the electric motor to rotate the rods without the blades. The power measured by McNabb is measured by
a load cell. The power measured by IAT21 is the total motor power, for which they estimated a 5% total
loss. The experimental data was thus taken as is from Yun et al. and McNabb and a 5% reduction was
applied to the power measured by IAT21. Another difference is that IAT21 used NACA0016 airfoils whereas
McNabb and Yun et al. used NACA0012. Even though the blades differed, a lift-curve slope $a = C_{L/\alpha} = 6.04$ is used for all three configurations. The other experimental parameters are briefly described
in Table 3. Calibration was done by curve fitting the power and thrust values obtained algebraically to the
measured ones. The results are shown in Figs. 4 to 6.
Table 3. Description of the experimental data.
Author | Radius R (m) | Span b (m) | Chord c (m) | No. of blades N | 3D control
Yun et al. (2007) | 0.4 | 0.8 | 0.15 | 6 | Arms at edges
Xisto et al. (2014) | 0.6 | 1.2 | 0.3 | 6 | Cylinders at edges
McNabb (2001) | 0.610 | 1.22 | 0.301 | 6 | Cylinder at midpoint
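As a quick check of Eq. (6), the solidities of the three rotors in Table 3 can be computed directly (values taken from the table; the output formatting is illustrative):

```python
import math

# Solidity (Eq. 6), sigma = c*N / (2*pi*R), for the three experimental
# rotors listed in Table 3.
def solidity(R, c, N):
    return c * N / (2.0 * math.pi * R)

rotors = {
    "Yun et al. (2007)":   dict(R=0.4,   c=0.15,  N=6),
    "Xisto et al. (2014)": dict(R=0.6,   c=0.3,   N=6),
    "McNabb (2001)":       dict(R=0.610, c=0.301, N=6),
}

for name, params in rotors.items():
    print(f"{name}: sigma = {solidity(**params):.3f}")
```

The IAT21 and McNabb rotors come out noticeably more solid than the Yun et al. rotor, which is one of the geometric differences behind the distinct calibration factors.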
Calibration of the model was also tried by disregarding either the power or the thrust curves. When
disregarding the power curves, the resulting $\kappa$ hover correction factor remains almost the same. However, if the optimization is performed considering only power, the $\kappa$ value changes significantly, yielding an excellent match for the power at the cost of mispredicting the thrust.
[Figure 4 panels: (a) Bosch T vs. Ω, Thrust (N) vs. Angular Velocity (RPM); (b) Bosch P vs. Ω, Power (kW) vs. Angular Velocity (RPM).]
Figure 4. Comparison with the Bosch (McNabb,2001) experimental data of their 6 blade model at 25◦
magnitude pitch function. Using κ = 1.0785 and CD0 = 0.07.
[Figure 5 panels: (a) IAT21 T vs. Ω, Thrust (N) vs. Angular Velocity (RPM); (b) IAT21 P vs. Ω, Power (kW) vs. Angular Velocity (RPM).]
Figure 5. Comparison with the IAT21 experimental data of the D-DALLUS L3 model at 37.5◦ magnitude pitch
function. Using κ = 1.2640 and CD0 = 0.008.
Although various weightings between power and thrust were tested, it was decided to give them equal
importance. Optimizing with a strong weight on power gave an excellent match for power, but yielded a very
high κ, which is the only variable that is calibrated by the optimization. The smaller correctors κ required
for the IAT21 and McNabb experimental data may be due to the fact that their experimental model had
full cylinders which are known to reduce the three-dimensionality of the flow. To verify this hypothesis, a
previously developed three-dimensional fluid dynamics model (Gagnon et al.,2014c) is used to confirm the
influence of the use of endplates on the rotor. A short description of the model along with the computed
effect of the endplates is presented in the following paragraph.
The OpenFOAM CFD toolkit was used to perform the fluid dynamic calculations. An inviscid laminar
solver was chosen since the main contributors to the thrust are the pressure-induced forces.
[Figure 6 panels: (a) Yun et al. T vs. Ω, Thrust (kgF) vs. Angular Velocity (RPM); (b) Yun et al. P vs. Ω, Power (HP) vs. Angular Velocity (RPM); legends compare model and experimental curves at 15, 20, 25, and 30 deg pitch.]
Figure 6. Comparison with the Yun et al. (2007) experimental data of their baseline model at various pitching
function magnitudes. Using κ = 1.4804 and CD0 = 0.008.
A double mesh interface which allows one to model a fixed angular velocity rotor zone and six embedded
periodically oscillating zones has been developed. A moving no-slip boundary condition was also developed to
constrain the normal component of the fluid velocity at the foil to match the airfoil velocity while leaving the
parallel velocity unaffected. The timestep used for the simulations is variable and set to follow a Courant
number of about 10. The mesh used without endplates has 366k cells while the one with endplates has 926k
cells. The reason for such a big discrepancy is the difficulty the snappyHexMesh meshing software has in
meshing surfaces close to interfaces, and the resulting need for a highly refined mesh in these zones. The
spacing between endplates and foils is one tenth of the chord length, which is reported by Calderon et al.
(2013) to have the same effect as if it were attached to the foil.
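The moving boundary condition described above can be sketched in a few lines: the fluid velocity at a wall face keeps its tangential component and takes the wall's normal component. This is a geometric illustration only, not OpenFOAM code; all names are made up.

```python
def apply_moving_slip_bc(u_fluid, u_wall, n):
    """Return the fluid velocity with its normal component replaced by the
    wall's normal component; the tangential component is left untouched.
    2-D vectors as (x, y) tuples; n must be a unit normal."""
    un_fluid = u_fluid[0] * n[0] + u_fluid[1] * n[1]   # fluid normal speed
    un_wall = u_wall[0] * n[0] + u_wall[1] * n[1]      # wall normal speed
    dun = un_wall - un_fluid
    return (u_fluid[0] + dun * n[0], u_fluid[1] + dun * n[1])

# Wall moving at (0, 2) with unit normal (0, 1): the fluid's normal (y)
# component is forced to 2 while its tangential (x) component stays 3.
print(apply_moving_slip_bc((3.0, 5.0), (0.0, 2.0), (0.0, 1.0)))  # (3.0, 2.0)
```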
The CFD results of the IAT21 L3 model at various angular velocities are shown with and without endplates
in Fig. 7. They confirm the idea that the experimental apparatus used to transmit power will have an influence
[Figure 7 panels: (a) IAT21 T vs. Ω, Thrust (N) vs. Angular Velocity (RPM); (b) IAT21 P vs. Ω, Power (kW) vs. Angular Velocity (RPM); curves with and without endplates.]
Figure 7. Effect of the presence of endplates on the cycloidal rotor as confirmed by the 3D CFD simulations.
on the overall performance. In the experiment, IAT21 used endplates. The instantaneous velocity fields and
streamlines of the IAT21 L3 case, where the pitch function angle magnitude is 37.5◦ at the angular velocity
of 250 RPM, are shown in Figs. 8(a) to 8(d). These figures further confirm the influence of the endplates
geometry. Also, Fig. 8(d) confirms the fact reported by IAT21 that the flow enters the cycloidal rotor over
an angle of 180◦ or more and exits over an arc of roughly 90◦. Figure 8(c) agrees well with the findings of
Yun et al. (2007) that the flow is deflected by the rotor.
(a) Front view w/o endplates. (b) Front view with endplates. (c) Side view w/o endplates. (d) Side view with endplates.
Figure 8. Cycloidal rotor velocity streamlines from the CFD model at 250 RPM.
5. Modified Analytical and Multibody Models
Two other quickly solvable models were developed for the purpose of studying the behavior of a potential
manned vehicle which uses cycloidal rotors. One consists of a slight modification of the previous analytical
model which can still be solved algebraically. The second model is a rigid multibody dynamics system
consisting of a main rotating drum and a set of blades having a common pitching schedule comprised of a
collective pitch and a first harmonic pitch. These two models are validated against a set of three different
experimental results.
5.1. Modified Analytical Model
In the previously presented analytical model, the equations were solved using only one parameter, κ, which
allowed calibrating the results to the experimental data quite well. The mathematics used made it so that
the power and thrust were each influenced differently by the adjustment factor. However, in this revised
solution, an additional factor is used and deemed necessary in order to compare the multibody and analytical
solutions using the same mathematical basis. It consists of the drag coefficient of the airfoil. This approach
follows the findings of McNabb (2001), who reported that the actual viscous drag of the airfoil used for
the experimental validation of his cycloidal rotor model was more than 10 times greater than the theoretical
one. This new approach is also confirmed by the multibody model. Thus, the modified analytical model uses
an implementation of the inflow correction factor closer to the one of Johnson (1994). We thus have λm = κλ
and Cd,m = ζ Cd, where ζ is a multiplication factor for the drag coefficient.
5.2. Multibody Model
The multibody model consists of a parametric implementation of an N-blade cycloidal rotor. The analysis
is performed using the MBDyn open-source multibody tool^d. Inflow is calculated using momentum theory
and the blade lift and drag forces are calculated using tables for drag and lift coefficients. Here, the ζ
factor is applied to coefficients coming from data tables and is optimized to fit the multibody model to
the experimental data. The model is run until a periodic solution is obtained; the resulting performance is
obtained by averaging over the last complete rotation of the drum.
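The stopping criterion described above can be sketched as follows: average the monitored quantity (for example thrust) over each revolution and declare the solution periodic once consecutive revolution averages agree to a tolerance, then report that revolution's average. The tolerance and the synthetic thrust history below are illustrative choices, not values from the paper.

```python
import math

def revolution_averages(samples, steps_per_rev):
    n_rev = len(samples) // steps_per_rev
    return [sum(samples[i * steps_per_rev:(i + 1) * steps_per_rev]) / steps_per_rev
            for i in range(n_rev)]

def periodic_average(samples, steps_per_rev, rel_tol=1e-3):
    avgs = revolution_averages(samples, steps_per_rev)
    for i in range(1, len(avgs)):
        # Periodic once consecutive revolution averages agree to rel_tol.
        if abs(avgs[i] - avgs[i - 1]) <= rel_tol * abs(avgs[i]):
            return avgs[i]
    raise RuntimeError("no periodic solution reached")

# Synthetic thrust history: decaying start-up transient plus a periodic term.
steps = 360
signal = [100.0 + 20.0 * math.exp(-t / 720.0)
          + 5.0 * math.sin(2 * math.pi * t / steps)
          for t in range(12 * steps)]
print(round(periodic_average(signal, steps), 1))  # 100.1
```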
5.3. Comparison With Experimental Data
These two models are calibrated for optimal κ and Cd; the results are plotted in Figs. 9 to 11. The difference
noted in the power plots is explained by the fact that at higher angles of attack the analytical model is less
influenced by the drag coefficient due to its fixed value.
d. http://www.mbdyn.org/, last accessed September 2014.
[Figure 9 panels: (b) Yun et al. T vs. Ω, Thrust (kgF) vs. Angular Velocity (RPM); (c) Yun et al. P vs. Ω, Power (HP) vs. Angular Velocity (RPM); (d) Yun et al. Uin vs. Ω, Induced velocity (m/s) vs. Angular Velocity (RPM); legends compare algebraic, multibody, and experimental curves at 15, 20, 25, and 30 deg pitch.]
Figure 9. Comparison with the Yun et al. (2007) experimental data of their baseline model at various pitching
function magnitudes. Using κ = 1.4804 and CD0 = 3.63 × CD.
The experimental data available from IAT21 and Bosch are more limited in quantity and have thus been
handled with care. The difference in the induced velocity calculated for Bosch remains below one tenth.
In the absence of experimental data it is not possible to know which model gives the best approximation.
For the IAT21 simulation, the error in inflow velocity increases. This discrepancy may be explained by the
higher angle of attack reached by the blades of the model. The blades actually reach stall; thus, it is difficult
to reproduce the same behavior between the analytical and the multibody models. The latter accounts for
static stall as tabulated in the aerodynamics look-up tables.
The angle of attack in the Bosch tailored simulations at 25◦ pitch function varies between [-5◦, 10◦] with
an average magnitude of 6◦. It is thus clear that the analytical constant drag coefficient cannot take into
account with precision the power contributions of the viscous forces. Similarly, at 37.5◦ pitch function, the
IAT21 angle of attack will oscillate between [-10◦, 20◦] with an average magnitude of 8.5◦. These values are
calculated using the multibody model with a ζ-multiplied drag coefficient. Thus, once again, this explains
the different drag coefficient requirements for both models.
A slightly better correlation can be obtained for this revised analytical model if the drag coefficient and
the κ inflow corrector are optimized separately for the multibody model, but the best results obtained for the
analytical model come from the version previously presented in Section 2. Its use is thus limited to multibody
[Figure 10 panels: (a) Bosch T vs. Ω, Thrust (N) vs. Angular Velocity (RPM); (b) Bosch P vs. Ω, Power (kW) vs. Angular Velocity (RPM).]
Figure 10. Comparison with the Bosch (McNabb,2001) experimental data of their 6 blade model at 25◦
magnitude pitch function. Using κ = 1.0785 and CD0 = 0.07.
[Figure 11 panels: (a) IAT21 T vs. Ω, Thrust (N) vs. Angular Velocity (RPM); (b) IAT21 P vs. Ω, Power (kW) vs. Angular Velocity (RPM).]
Figure 11. Comparison with the IAT21 experimental data of the D-DALLUS L3 model at 37.5◦magnitude
pitch function. Using κ = 1.2640 and CD0 = 0.008.
validation purposes and the previous model is used for efficiency evaluation.
6. Efficiency Evaluation
The efficiency of the proposed helicopter design that was shown in Fig. 2 is compared with that from a
model that resembles the Airbus Helicopters BO105 helicopter. The main advantage of this configuration is
that less thrust is required from the main rotor. This is the result of using the cycloidal rotors as forward
thrust generators.
6.1. BO105 Performance Characteristics
The performance characteristics of the original helicopter are first obtained from a comprehensive helicopter
model at several constant-velocity advance ratios. The BO105 model is fully
aeroelastic and considers the dynamics of the main helicopter rotor, the tail rotor, and the airframe. Vec-
torial thrust, torque, and power figures can be computed for both rotors. Muscarello et al. (2013) gives the
detailed implementation of the model. Figure 12(a) displays the power requirement of the BO105 helicopter
undergoing a normal trim. The power contribution is broken down between the main rotor and the tail
rotor. The main rotor consumes almost all the power. This total power is broken down into propulsion and lifting
powers in Fig. 12(b). To do so, the BO105 model is used in a virtual wind tunnel. The lifting-only power
is obtained by running the main rotor of the BO105 simulation in the virtual wind tunnel with its shaft
held perpendicular to the flow. In this configuration, it is trimmed to provide a counter-gravity force
equivalent to the aircraft weight. The propulsive power is then obtained by subtracting the power required
with the main rotor shaft perpendicular to the flow from the power in trim. Finally, the thrust produced by the
tail rotor, as shown in Fig. 12(c), determines the moment required from the cycloidal rotors.
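The bookkeeping implied above can be sketched as follows. All numbers and the sign convention are illustrative, not BO105 data: propulsive power is trim power minus lift-only power, and the anti-torque moment the cycloidal rotors must replace equals the tail-rotor thrust times its (assumed) moment arm, supplied by a thrust difference between the left and right rotors.

```python
def propulsive_power(p_trim, p_lift_only):
    # Trim power minus lift-only power from the virtual wind tunnel.
    return p_trim - p_lift_only

def cyclo_thrust_split(tail_thrust, tail_arm, d_left, d_right, t_total):
    """Split a required total thrust between the two rotors so that the
    pair also supplies the anti-torque moment: T_r*d_r - T_l*d_l = M."""
    m_req = tail_thrust * tail_arm          # moment the tail rotor provided
    t_left = (t_total * d_right - m_req) / (d_left + d_right)
    return t_left, t_total - t_left

print(propulsive_power(420.0, 300.0))  # 120.0  (kW, illustrative)
tl, tr = cyclo_thrust_split(2.0, 6.0, 4.0, 4.0, 5.0)
print(tl, tr)  # 1.0 4.0  (and 4.0*4 - 1.0*4 = 12 kN m, as required)
```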
[Figure 12 panels, all vs. advance ratio (0 to 0.35): (a) helicopter model in trim, Power (kW) of main rotor and tail rotor; (b) main rotor at null pitch angle, Power (kW) in regular trim, lift only, and propulsion only; (c) tail rotor lateral thrust (kN).]
Figure 12. Power and thrust figures of the BO105 rotors.
6.2. Modified Aircraft
In order to evaluate the efficiency of the modified aircraft design, the figures obtained from the BO105 model
are used. The total power consumed by the modified aircraft consists of the sum from the left and right
cycloidal rotors and the main rotor used to generate only lift. The non-modified analytical model was used
for the simulations presented in this section. An optimization procedure using a Nelder-Mead algorithm
is used to find an optimal operating point. The points found are considered to be local optima.
A range of parameters were allowed to change during the optimization process. They are shown in Table 4.
The maximum angle of attack encountered by the blade was limited to 13.06◦. This is essential when using
Table 4. Cycloidal rotors optimization properties.
             Radius R (m)   Dist. dL+dR (m)   Chord c (m)   N. of blades N   Ang. vel. Ω (RPM)
Lower lim.   0.1            0.5               0.05          3                20
Upper lim.   1.275          9.5               0.5           12               2000
an analytical model which relies on a constant slope lift coefficient. According to reliable airfoil data, lift
generation remains linear until 14.14◦ and then drops rapidly. The maximum angle of attack is calculated
from Eq. (22) with γ = 0 and β = π/2. Combining it with Eq. (29) truncated at the first harmonic term and
Eq. (50), Eq. (22) becomes,

α ≃ θo + θc cos(Ψ) + θs sin(Ψ) − λ sin(Ψ − π/2) + µ cos(Ψ) ≃ θo (1 − (µ + λ) sin Ψ) + (θc + λ + µ) cos Ψ    (66)
where the factor of θo indicates that the collective pitch should be zero. The maximum and minimum values
of the factor of cos(Ψ) correspond to Ψ = 0, π. Thus, a non-null θo at these values of Ψ can only increase the
maximum angle of attack.
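The argument above can be checked numerically with the first-harmonic form of Eq. (66); the inflow and advance-ratio values below are illustrative, not taken from the paper.

```python
import math

def max_alpha(theta_o, theta_c, mu, lam, n=3600):
    # Scan one revolution of the first-harmonic angle-of-attack law, Eq. (66).
    return max(abs(theta_o * (1 - (mu + lam) * math.sin(p))
                   + (theta_c + lam + mu) * math.cos(p))
               for p in (2 * math.pi * k / n for k in range(n)))

mu, lam = 0.2, 0.1                       # illustrative advance ratio and inflow
a_zero = max_alpha(0.0, math.radians(13.0), mu, lam)
a_coll = max_alpha(math.radians(3.0), math.radians(13.0), mu, lam)
print(a_coll > a_zero)  # True: a non-null collective raises the peak alpha
```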
The maximum allowed distance between the centers of the two cycloidal rotors was 8.25 m and was
chosen as such in order to prevent the cycloidal apparatus from extending too far beyond the main rotor.
The maximum distance is derived from

2(dL + dR) − LW ≤ DMR    (67)
where DMR = 9.84 m is the main rotor diameter, LW is the width of the skids, and dL and dR are the
lateral distances between the center of the helicopter and the midpoint of the left and right cycloidal rotors,
respectively. Each rotor uses NACA0012 airfoils; the rotor radii were limited to avoid contact with the ground
or the main rotor. The maximum tip velocity at maximum forward velocity was limited to Mach 0.85.
The first optimization led to the data presented for a span of 7 m in Table 5 and gives the results shown
in Figs. 13(a) to 13(c). These take into account the power saved by the main rotor: since it no longer has
to provide forward thrust, its shaft does not need to be tilted forward. In this design, the main rotor only
pitches in order to balance the center-of-mass offset. Figures 13(b) and 13(c) show the provided thrust and
anti-torque of cycloidal rotors and of the main or tail rotor, respectively. As required, the resulting torques
and thrusts for both the original helicopter and the proposed design are equal.
[Figure 13 panels, all vs. advance ratio (0 to 0.35): (a) total power (kW), original BO105 vs. proposed design; (b) propulsive thrust (kN), original BO105 and left, right, and total cycloidal rotor; (c) anti-torque (kN m), original BO105 and left, right, and total cycloidal rotor.]
Figure 13. Modified aircraft design performance characteristics.
The angular velocity of the cycloidal rotors was not allowed to vary when changing advance ratio or
flight regime. The pitch was restrained to single harmonic variation. The two cycloidal rotors had identical
geometry. The optimization process also led to the realization that, in order to keep a larger chord, the blade
number has to be minimized. Thus, it was set to N= 3 for the further optimization steps. The resulting
vehicle is larger than the original helicopter and it was thus chosen to constrain the maximum center-to-center
lateral width so as to obtain smaller rotor spans. The resulting geometries are shown in Table 5 and their
corresponding power consumptions in Fig. 14. The maximum forward flight velocity is limited to 72 m/s
because above that velocity it becomes nearly impossible to trim the helicopter tail rotor to provide a lateral
force large enough to offset the main rotor torque. This velocity also brings the tip of the advancing blade
into the transonic regime (Mach 0.85 is reached).
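The transonic limit can be checked with a back-of-the-envelope calculation. The 9.84 m main-rotor diameter is given in the text, but the rotor speed (about 424 RPM) and the speed of sound (343 m/s) are assumed values, not taken from the paper.

```python
import math

def advancing_tip_mach(tip_speed, v_forward, a_sound=343.0):
    # The advancing-blade tip sees rotational tip speed plus flight speed.
    return (tip_speed + v_forward) / a_sound

omega = 424 * 2 * math.pi / 60      # rad/s; ~424 RPM is an assumed value
tip_speed = omega * 9.84 / 2        # m/s; 9.84 m diameter is given in the text
print(round(advancing_tip_mach(tip_speed, 72.0), 2))  # 0.85
```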
Table 5. Optimization results.
Span b (m)                 7.00   6.47   3.44   1.93
Chord c (cm)               35     79     14     9.8
Radius R (m)               1.25   1.27   1.21   1.24
Angular velocity Ω (RPM)   400    375    820    1310
Number of blades N         3      3      3      3
6.3. Variable Angular Velocity
A quick study was conducted to understand whether a variable angular velocity could make the cycloidal
rotors more efficient. This would make sense if they were powered, for example, by electric motors. It is
reasonable to assume that the angular velocity of the rotors could be controlled independently and vary with
[Figure 14 panels, total power (kW) vs. advance ratio (0 to 0.35), original BO105 vs. proposed design: (a) 6.47 m span; (b) 3.44 m span; (c) 1.93 m span.]
Figure 14. Total power consumed by the short rotor aircraft.
the advance ratio. Constraining the algorithm to use the same geometry found before and allowing each rotor
to have a different angular velocity at each advance ratio considered yields a negligible increase in efficiency.
The resulting power required for this configuration is shown in Fig. 15(a) and the optimal angular velocities
of the rotors are shown in Fig. 15(b). Although an irregular angular velocity pattern is shown, it was noticed
that many angular velocity configurations yield very similar results, the limiting factor being the maximum
allowed blade pitch angle. The idea of using variable angular velocity was thus not further investigated.
[Figure 15 panels, vs. advance ratio (0 to 0.35): (a) total power (kW) used by the variable design, original BO105 vs. proposed design; (b) angular velocity (RPM) distribution for the variable design, left and right cycloidal rotors.]
Figure 15. Variable angular velocity design test results.
7. Aeroelastic Model
This section presents some considerations on the aeroelasticity of the cycloidal rotor arrangement. In
view of the opportunity to consider low-solidity rotor designs, which are characterized by very slender, thin
blades, the blades' low out-of-plane bending and torsional stiffness and the possible presence of offsets between
the pitch axis and the shear, mass, and aerodynamic centers of the blade section may become a concern.
Figure 16 shows a sketch of the blade section, referred to the pitch axis; x1 is the distance of the aerody-
namic center from the pitch axis; x2 is the distance of the elastic axis from the pitch axis; x3 is the distance
of the center of mass from the pitch axis.
Figure 16. Sketch of blade section.
The blades are loaded mainly by nearly radial, out-of-plane centrifugal and aerodynamic loads; the force
per unit span is

fradial = fcentrifugal + faerodynamic ≈ mΩ²r + (1/2)ρ(Ωr)²c cl

where m is the blade mass per unit span and r(x, t) = R + w(x, t) is the deformed radius. Both loads are
clearly proportional to Ω² and to either r or r².
Figure 17. Schematic of rotating flexible blade assembly subjected to bending.
It is assumed that the blades are simply supported at both ends, as sketched in Fig. 17. Similar consid-
erations apply to different arrangements.
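For a blade simply supported at both ends, the classical uniform-load result δ = 5wb⁴/(384EI) gives an order-of-magnitude feel for the bending induced by the nearly uniform load w ≈ mΩ²R per unit span. All numerical values below are illustrative, not properties of any rotor in the paper.

```python
import math

def midspan_deflection(m_span, omega, radius, span, EI):
    w = m_span * omega ** 2 * radius       # centrifugal load per unit span, N/m
    return 5 * w * span ** 4 / (384 * EI)  # simply supported, uniform load

# Illustrative blade: 1.5 kg/m, 400 RPM, R = 1.25 m, 7 m span, EI = 2e6 N m^2.
omega = 400 * 2 * math.pi / 60
d = midspan_deflection(1.5, omega, 1.25, 7.0, 2.0e6)
print(round(d * 1000, 1), "mm")  # 51.4 mm
```

The quartic dependence on span is what makes the long-span designs of Table 5 most sensitive to blade flexibility.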
The multibody aeroelastic solver has also been applied to the solidity study. The model uses finite-volume
nonlinear beam elements (Ghiringhelli et al., 2000), with aerodynamics based on blade element and look-up
table static airfoil properties. Simple inflow models based on momentum theory have been specifically
developed for the analysis of cycloidal rotors (Benedict et al., 2011). The results presented in Fig. 18 show
that the reduction in thrust and power related to the observed aeroelastic deformation of the blades seems
negligible at low angular velocity and low flexibility. This is no longer true when the structural properties
are reduced by acting on the reference thickness.
8. Conclusions
A set of models have been developed for the study of cycloidal rotors used to propel an optimized manned
aircraft. The idea considered in the article was to replace the tail rotor of an otherwise conventional helicopter
design with cycloidal rotors located at the sides of the helicopter. In this way, the individual airfoils rotate
in a manner that makes them behave as airplane wings. The cycloidal rotors are expected to be more
efficient than traditional rotors when subjected to the inflow from the main rotor. The sensitivity of the rotors
[Figure 18 panels, vs. RPM: (a) Thrust (N); (b) Power (kW); curves for 3 rigid blades and flexible blades at nominal, ×0.1, and ×0.01 stiffness.]
Figure 18. Comparison of thrust and power vs. RPM for rigid and flexible blades at 20 deg cyclic pitch.
to the geometry of the carrying rods has been confirmed by the 3D CFD simulations and the magnitudes
of the correction factors of the two-dimensional analytical and multibody models. The multibody model
and the analytical model correlate satisfactorily with the three sets of experimental data considered. Open
source software was used and developed as part of the research presented. Preliminary results show that
blade flexibility may reduce efficiency. Finally, it is expected that up to 50% savings in the power required for
propulsion can be obtained when the helicopter travels at high advance ratios, with comparable efficiency
at lower advance ratios.
The research leading to these results has received funding from the European Community’s Seventh
Framework Programme (FP7/2007–2013) under grant agreement N. 323047.
Benedict, M., Mattaboni, M., Chopra, I., and Masarati, P. (2011). Aeroelastic Analysis of a Micro-Air-Vehicle-Scale
Cycloidal Rotor. AIAA Journal, 49(11):2430–2443. doi:10.2514/1.J050756.
Calderon, D. E., Cleaver, D., Wang, Z., and Gursul, I. (2013). Wake Structure of Plunging Finite Wings. In 43rd
AIAA Fluid Dynamics Conference.
El-Samanoudy, M., Ghorab, A. A. E., and Youssef, S. Z. (2010). Effect of some design parameters on the performance
of a Giromill vertical axis wind turbine. Ain Shams Engineering Journal, 1(1):85–95.
Gagnon, L., Morandini, M., Quaranta, G., Muscarello, V., Bindolino, G., and Masarati, P. (2014a). Cyclogyro Thrust
Vectoring for Anti-Torque and Control of Helicopters. In AHS 70th Annual Forum, Montréal, Canada.
Gagnon, L., Morandini, M., Quaranta, G., Muscarello, V., Masarati, P., Bindolino, G., Xisto, C. M., and Páscoa, J. C.
(2014b). Feasibility assessment: a cycloidal rotor to replace conventional helicopter technology. In 40th European
Rotorcraft Forum, Southampton, UK.
Gagnon, L., Quaranta, G., Morandini, M., Masarati, P., Lanz, M., Xisto, C. M., and Páscoa, J. C. (2014c). Aero-
dynamic and Aeroelastic Analysis of a Cycloidal Rotor. In AIAA Modeling and Simulation Conference, Atlanta, GA.
Ghiringhelli, G. L., Masarati, P., and Mantegazza, P. (2000). A Multi-Body Implementation of Finite Volume Beams.
AIAA Journal, 38(1):131–138. doi:10.2514/2.933.
Hwang, I. S., Lee, H. Y., and Kim, S. J. (2009). Optimization of cycloidal water turbine and the performance
improvement by individual blade control. Applied Energy, 86(9):1532–1540.
Iosilevskii, G. and Levy, Y. (2006). Experimental and Numerical Study of Cyclogiro Aerodynamics. AIAA Journal, 44(12).
Jarugumilli, T., Benedict, M., and Chopra, I. (2014). Wind Tunnel Studies on a Micro Air Vehicle-Scale Cycloidal
Rotor. Journal of the American Helicopter Society, 59(2):1–10.
Johnson, W. (1994). Helicopter Theory. Dover Publications, New York.
Maître, T., Amet, E., and Pellone, C. (2013). Modeling of the flow in a Darrieus water turbine: Wall grid refinement
analysis and comparison with experiments. Renewable Energy, 51:497–512.
McNabb, M. L. (2001). Development of a Cycloidal Propulsion Computer Model and Comparison with Experiment.
Master’s thesis.
Muscarello, V., Masarati, P., Quaranta, G., Lu, L., Jump, M., and Jones, M. (2013). Investigation of Adverse
Aeroelastic Rotorcraft-Pilot Coupling Using Real-Time Simulation. In AHS 69th Annual Forum, Phoenix, Arizona.
Paper No. 193.
Xisto, C. M., P´ascoa, J. C., Leger, J. A., Masarati, P., Quaranta, G., Morandini, M., Gagnon, L., Wills, D., and
Schwaiger, M. (2014). Numerical modelling of geometrical effects in the performance of a cycloidal rotor. In 6th
European Conference on Computational Fluid Dynamics, Barcelona, Spain.
Yun, C. Y., Park, I. K., Lee, H. Y., Jung, J. S., and Hwang, I. S. (2007). Design of a New Unmanned Aerial Vehicle
Cyclocopter. Journal of the American Helicopter Society, 52(1).
Two-Sided Fixed-Sample Tests in Clinical Trials
A two-sided test is a test of a hypothesis with a two-sided alternative. Two-sided tests include simple symmetric tests and more complicated asymmetric tests that might have distinct lower and upper
alternative references.
Symmetric Two-Sided Tests for Equality
For a symmetric two-sided test with the null hypothesis
A common two-sided test is the test for the response difference between a treatment group and a control group. The null and alternative hypotheses are
• The test rejects the hypothesis
• The test rejects the hypothesis
• The test indicates no significant difference between the two responses if
which is
The hypothesis
With an alternative reference
which is
The resulting power
Thus, under the upper alternative hypothesis, the power in the SEQDESIGN procedure is computed as the probability of rejecting the null hypothesis for the upper alternative,
That is,
Then with
The drift parameter can be derived for specified
If the maximum information is available, then the required sample size can be derived. For example, in a one-sample test for mean, if the standard deviation
On the other hand, if the alternative reference
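The sample-size derivation sketched above reduces, for a one-sample test for a mean with known standard deviation sigma, two-sided Type I error alpha, and power 1 − beta at an alternative reference delta, to n = ceil(((z_{1−alpha/2} + z_{1−beta}) · sigma / delta)²). The numbers below are illustrative; this is a generic computation, not SAS code.

```python
import math
from statistics import NormalDist

def two_sided_sample_size(alpha, power, sigma, delta):
    # n = ceil(((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

# 5% two-sided Type I error, 80% power, sigma = 1, alternative delta = 0.5:
print(two_sided_sample_size(0.05, 0.80, sigma=1.0, delta=0.5))  # 32
```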
Generalized Two-Sided Tests for Equality
For a generalized two-sided test with the null hypothesis
With the lower alternative reference
This implies
and the power is the probability of correctly rejecting the null hypothesis for the lower alternative,
The lower drift parameter is derived as
Then, with specified
Similarly, the upper drift parameter is derived as
For a given
Thus, the maximum information required for the design is given by
Note that with the maximum information level
If maximum information is available, the required sample size can be derived. For example, in a one-sample test for mean, if the standard deviation
On the other hand, if the alternative references, Type I error probabilities
Let’s Consider Skew - The Algorithmic Advantage
As diversified systematic trend followers, we just love the positive skew of our trade results….but to explain why, we need to dig quite deeply into what skewness means exactly.
It is defined as a measure of asymmetry of a trade distribution about its mean and is expressed by the following formula:

skew = E[(x − μ)³] / σ³
Now it is important to note that the formula above assumes that a particular distribution can be categorized as having a single mean and a single standard deviation around this mean, which is an
inherently Gaussian assumption. We ‘as realists’ however understand that over large data samples, ‘real markets’ and ‘real trade results’ are more complex than that, and may frequently comprise
multiple means and varying standard deviations that reflect different market conditions. However, despite this failing of the skewness formula when being applied to real distributions of complex
adaptive systems such as these financial markets, we can simplify skewness to refer to the asymmetry between the magnitude of average wins and the average losses of the distribution of trade results.
Just don’t get too prescriptive in your use of these damned single measure statistics of theoretical distributions.
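The standard definition of skewness, the third central moment divided by the cube of the standard deviation, can be computed directly. The toy trade list below, several small losses and one large win in R multiples, is an illustrative example, not real trade data.

```python
def skewness(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # population variance
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    return m3 / m2 ** 1.5

trades = [-1, -1, -1, -1, 5]   # cut losses short, let one profit run
print(round(skewness(trades), 3))  # 1.5 -> positive skew
```

Flipping the sign of every trade gives −1.5: the same asymmetry, but with the tail on the loss side.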
For example where our average winners are far greater than our average losers, our trade distribution histogram plots as a distribution with clear positive skew. If our average winners were far
smaller than our average losers, then our trade distribution would plot with clear negative skew. The direction of the tails of these asymmetric distributions signify their ability to address the
risk arising when real markets start to display exotic fat tailed behaviour.
What we mean is that the extremal value of a skewed distribution such as the maximum loss or the maximum win signifies where bias in the trade lies when exposed to the Power Laws found in ‘fat
tailed’ environments. When conditions extend beyond the Gaussian envelope into the exotic tails of a distribution, events considered to be far less frequent actually become far more frequent than
what a Normal distribution would imply. So for negative skewed systems, large losses are far more frequent than anticipated and for positive skewed systems, large wins are far more frequent than anticipated.
Chart 1 below reflects a trade distribution with positive skew. There is a long right tail to the distribution of trade results. While the vast proportion of trades are small losses, you can see that
the exceptional winners result in the average win being far higher than the average loss. This feature creates a positive skew to the distribution of trade returns in this example.
Chart 1: Histogram of Trade Results of a Trend Following System with Positive Skew
The distribution of trade results in Chart 1 above applies to a trend following system comprising 237 trades undertaken between 1st January 2000 to today with an equity curve that is displayed in
Chart 2 below.
Chart 2: Equity Curve of a Trend Following System comprising 237 trades
Now the reason that we like positive skew of our trade results is that under non-linear market conditions, the beneficial outlier can exponentially outweigh all the small losses associated with our
trend following technique that cuts losses short at all times but lets profits run. This can be attributed to the Power Laws that reside in 'fat tailed' market environments. Trade events in the tails
of the distribution can be many orders of magnitude greater than the trade events that lie within more 'normal' bounds of the trade distribution.
Whether a system has positive or negative skew is important when considering the unforeseen risks associated with ‘fat tailed’ environments. A system with negative skew provides a ‘risk signature of
weakness', where the occasional large loss, under 'fat tailed' non-Gaussian market conditions, can turn into exponentially larger losses than expected, particularly when large losses become consecutive
in nature.
Positively skewed systems on the other hand demonstrate through their risk signature that they are ‘robust’ and do not leave themselves open to the possibility of large losses. By always cutting
losses short there is no exponential increase in losses when conditions become unfavorable as all adverse tail events are excluded from the trade history. There may be more small losses, but these
events are not exponential in nature. Of course, by letting profits run, the trend follower leaves themselves open to the possibility of ‘exponential profits’ associated with favorable tail events.
Now that we understand the significance of skewness to non-linear risk events, let's turn to the question of how we accurately measure skewness.
How to Measure Skewness
The first point we need to understand when assessing the skewness arising from trend following systems is that we need to eliminate effects in the distribution that may arise from money management
methods deployed by the system.
For example, the trade results of the trend following system described in Charts 1 and 2 display the effects of compounding. In this system we applied a trade risk of 1% of equity for each trade, which means
the trade results include the impacts of compounding in their signature. So we cannot use $ profit or $ loss per trade as a basis to calculate skew, as these raw results will include the effects of
compounding. Rather, we need to apply a method that normalises the trade results to exclude the impact of the money management method and ‘truly see’ the real skew in the system results.
So in the following examples we will be using a % of equity as a method to normalise the trade results. We could also apply an ATR or R multiple for each trade as well to achieve the same outcome.
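As a concrete illustration of this normalisation step, the sketch below computes a simple moment-based skewness of per-trade returns expressed as a % of equity. The return figures are hypothetical, chosen only to mimic the shape of a trend following system (many small cut losses, a couple of outlier wins); they are not taken from the system in Charts 1 and 2.

```python
def skewness(xs):
    """Moment-based sample skewness: m3 / m2**1.5."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Hypothetical trades, normalised to % of equity: losses cut short
# at -1%, with two large outlier wins letting profits run.
returns_pct = [-1.0] * 8 + [10.0, 12.0]

print(skewness(returns_pct))  # positive, reflecting the long right tail
```

The same function applied to daily or monthly consolidations of these trades would give a different (typically larger) number, which is exactly the measurement issue discussed below.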
The following table displays the skew of various methods applied to the trade results.
Now the question we need to ask when referring to the skew is: which result is ‘correct’?
Well we need to understand that all these different interpretations of skew arise from the same system result. The only difference is in how these results are consolidated.
The ‘correct’ result is actually represented by the ‘Trade Results’ column of 1.32. This skew calculation reflects the actual asymmetry of all trade results.
The Daily results and the Monthly results columns compound the skew by virtue of the fact that these methods consolidate trade results into a daily or monthly record. With
classic trend following models, outside periods of ‘Outliers’ most of our trades are losses. As we consolidate these losses, the relative disparity between our Outliers and the linear sequence of losses
becomes more extreme, leading to higher overall skew in the distribution. This effect compounds the asymmetry in the series.
We therefore need to be particularly careful about how we use skew to compare between alternatives. Ensure that you choose a particular method and stick with it. Unfortunately, when assessing the skew
of different Fund Managers based on available data, you sometimes only have monthly return data to work with and cannot determine the ‘real skew’ of the method….but it helps to be aware of these
issues arising from the way we measure skew.
Anyway….enough of the rambling. Let’s hope you just don’t skew things up the next time you use it.
Trade well and prosper.
Mastering the tapply() Function in R » Data Science Tutorials
The tapply() function in R is a powerful tool for applying a function to a vector, grouped by another vector.
In this article, we’ll delve into the basics of tapply() and explore its applications through practical examples.
Syntax
The basic syntax of the tapply() function is:
tapply(X, INDEX, FUN, ...)
• X: A vector to apply a function to
• INDEX: A vector to group by
• FUN: The function to apply
• ...: Additional arguments to pass to the function
Example 1: Applying a Function to One Variable, Grouped by One Variable
Let’s start with an example that demonstrates how to use tapply() to calculate the mean value of points, grouped by team.
# Create data frame
df <- data.frame(team = c('A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'),
position = c('G', 'G', 'F', 'F', 'G', 'G', 'F', 'F'),
points = c(104, 159, 12, 58, 15, 85, 12, 89),
assists = c(42, 35, 34, 5, 59, 14, 85, 12))
# Calculate mean of points, grouped by team
tapply(df$points, df$team, mean)
The output will be a vector containing the mean value of points for each team.
A B
83.25 50.25
Example 2: Applying a Function to One Variable, Grouped by Multiple Variables
In this example, we’ll use tapply() to calculate the mean value of points, grouped by team and position.
# Calculate mean of points, grouped by team and position
tapply(df$points, list(df$team, df$position), mean)
The output will be a matrix containing the mean value of points for each combination of team and position.
F G
A 35.0 131.5
B 50.5 50.0
Additional Tips and Variations
• You can use additional arguments after the function to modify the calculation. For example, you can use na.rm=TRUE to ignore NA values.
• You can group by multiple variables by passing a list of vectors as the second argument.
• You can use tapply() with other functions besides mean, such as sum, median, or sd.
• You can use tapply() with different types of vectors and data structures, such as matrices or lists.
In conclusion, the tapply() function is a powerful tool in R that allows you to apply a function to a vector, grouped by another vector.
By mastering this function, you can simplify complex calculations and gain insights into your data. With its flexibility and versatility, tapply() is an essential tool for any R programmer.
How Many Ounces in 750 Milliliters?
Cooking often requires different unit conversions depending on the situation. Ounce to milliliter is one of the most common conversions.
The answer is 25.3605 fluid ounces. The ounce is normally used for measuring the weight of both liquid and dry ingredients. When measuring dry ingredients, ounces may differ from fluid ounces.
The reason is that the weights of different dry substances differ according to their density.
Say, for example, you are measuring the weight of a cup of sugar and a cup of chocolate. They will weigh differently than one another.
The milliliter is always used for measuring liquid ingredients, so there will be no confusion about that.
The conversion chart between ounces and milliliters is given below.
Ounces             Milliliters
10 fluid ounces    295.735 mL
11 fluid ounces    325.309 mL
12 fluid ounces    354.882 mL
13 fluid ounces    384.456 mL
14 fluid ounces    414.029 mL
15 fluid ounces    443.603 mL
16 fluid ounces    473.176 mL
17 fluid ounces    502.75 mL
18 fluid ounces    532.324 mL
19 fluid ounces    561.897 mL
20 fluid ounces    591.471 mL
21 fluid ounces    621.044 mL
22 fluid ounces    650.618 mL
23 fluid ounces    680.191 mL
24 fluid ounces    709.765 mL
25 fluid ounces    739.338 mL
26 fluid ounces    768.912 mL
27 fluid ounces    798.485 mL
28 fluid ounces    828.059 mL
29 fluid ounces    857.632 mL
30 fluid ounces    887.206 mL
1 fluid ounce = 29.5735 milliliters
1 milliliter = 0.033814 ounces
Using these two basic equations, any amount expressed in ounces can be converted into milliliters and any amount expressed in milliliters can be converted into ounces.
If you’re asked how many fluid ounces are there in 900 milliliters, the answer would be as follows-
900 ml = 900 * 0.033814 ounces = 30.4326 ounces
Similarly, if asked how many milliliters are there in 40 fluid ounces, the answer would be-
40 ounces = 40 * 29.5735 milliliters = 1182.94 milliliters
Note that the calculations are approximate results.
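The two basic equations above translate directly into a pair of helper functions. This is just the arithmetic from the worked examples, using 29.5735 mL per US fluid ounce.

```python
ML_PER_FL_OZ = 29.5735

def oz_to_ml(ounces):
    """Convert US fluid ounces to milliliters."""
    return ounces * ML_PER_FL_OZ

def ml_to_oz(milliliters):
    """Convert milliliters to US fluid ounces."""
    return milliliters / ML_PER_FL_OZ

print(round(ml_to_oz(750), 4))    # about 25.3605 fl oz
print(round(oz_to_ml(40), 2))     # about 1182.94 mL
```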
Solution - Bridges (APIO 2019)
APIO 2019 - Bridges
Author: Andi Qu
The main idea in this problem is to use square-root decomposition on queries. For convenience, call type 1 queries updates and type 2 queries calculations.
First, split the queries into blocks of about $\sqrt N$ queries. In each block, there are $\mathcal{O}(\sqrt N)$ updates or calculations. For each block:
• Split the bridges into two groups: changed and unchanged.
• If we sort the calculations and unchanged bridges in decreasing order of weight, we can simply use DSU to find which nodes are connected from those bridges alone.
□ These connected nodes are constant for all calculations in the current block
• To handle the updates:
□ Iterate over the queries in the current block (without sorting)
□ If the query is an update, simply update the bridge's weight
□ If the query is a calculation, iterate through each changed bridge and connect the nodes if the weight limit is above the query's weight limit
☆ This works because this means the answer for the current query is dependent only on previous updates
☆ The key thing here is that we need a way to roll back DSU unions, since the set of "good" bridges may differ from query to query
☆ To achieve this, we simply use DSU with path balancing only and keep a stack of previous DSU operations
Time Complexity: $\mathcal{O}((Q + M) \log N \sqrt Q )$
However, it is possible to remove the log factor as mentioned in this comment.
#include <bits/stdc++.h>
#define FOR(i, x, y) for (int i = x; i < y; i++)
typedef long long ll;
using namespace std;
const int B = 1000;
int n, m, q;
stack<int> stck;
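The rollback mechanism described above can be sketched independently of the contest code. This toy DSU (written in Python for brevity, not the C++ solution itself) uses union by size without path compression, so each union touches exactly one parent pointer and can be undone by popping a stack.

```python
class RollbackDSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.history = []  # stack of (child_root, parent_root) unions

    def find(self, x):
        # No path compression: keeps each union cheap to undo.
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            self.history.append(None)      # record a no-op
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra                # attach smaller tree under larger
        self.history.append((rb, ra))
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

    def rollback(self):
        op = self.history.pop()
        if op is not None:
            rb, ra = op
            self.parent[rb] = rb
            self.size[ra] -= self.size[rb]
```

For each calculation query, the solver unions the "good" changed bridges, answers the connectivity question, then rolls back the same number of unions, restoring the DSU state shared by the whole block.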
The mode for the following frequency distribution
A. 5
B. 5.28
C. 6
D. 6.29
Mode formula:

Mode = l + ((f1 − f0) / (2 × f1 − f0 − f2)) × h

• l is the lower limit of the modal class
• f1 is the frequency of the modal class
• f0 is the frequency of the class preceding the modal class
• f2 is the frequency of the class succeeding the modal class
• h is the size of the class interval

The correct answer is: 6.29
Given :
Steps for calculating Mode :
Step 1: Find the class interval with the maximum frequency. This is also called modal class.
From the table above Class interval (4 - 8) has maximum frequency of 8.
Thus, modal class is 4-8
Step 2: Find the size of the class. This is calculated by subtracting the upper limit from the lower limit.
Size of class (h)= Upper limit - lower limit = 8-4 = 4
Step 3: Calculate the mode using the mode formula:

Mode = l + ((f1 − f0) / (2 × f1 − f0 − f2)) × h

Here the modal class is 4-8, so l = 4 and h = 4. Substituting these values gives Mode = 6.29.
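The grouped-mode formula can be checked numerically. The preceding (f0) and succeeding (f2) class frequencies below are illustrative stand-ins, since the original frequency table did not survive extraction; they are chosen only so the arithmetic reproduces the stated answer of 6.29.

```python
def grouped_mode(l, f0, f1, f2, h):
    """Mode = l + (f1 - f0) / (2*f1 - f0 - f2) * h for a grouped distribution."""
    return l + (f1 - f0) / (2 * f1 - f0 - f2) * h

# Modal class 4-8 (l = 4, h = 4) with modal frequency f1 = 8.
# f0 = 4 and f2 = 5 are assumed values, not from the original table.
print(round(grouped_mode(l=4, f0=4, f1=8, f2=5, h=4), 2))  # 6.29
```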
Understanding Basic Electricity Terms: A Beginner's Guide
Electricity is a fundamental part of our daily lives, powering everything from household appliances to complex industrial machines. Understanding basic electrical terms is essential for anyone
interested in electronics, electrical engineering, or simply looking to grasp how electrical devices work. This guide will explain some of the most common terms: watt, volt, ampere, kilowatt-hour,
and milliampere-hour.
1. Watt (W)
• Definition: The watt is the unit of power in the International System of Units (SI). It measures the rate at which energy is used or generated.
• Usage: A lightbulb with a power rating of 60 watts, for instance, consumes energy at a rate of 60 joules per second.
• Real-World Example: A 1000-watt microwave uses 1000 joules of energy per second during its operation.
2. Volt (V)
• Definition: The volt is the SI unit for electric potential, electric potential difference (voltage), and electromotive force.
• Usage: Voltage can be thought of as the pressure that pushes electric charges through a conductor.
• Real-World Example: A standard household electrical outlet in the United States typically delivers 120 volts.
3. Kilowatt-Hour (kWh)
• Definition: A kilowatt-hour is a unit of energy. It is commonly used as a billing unit for energy delivered to consumers by electric utilities.
• Usage: This term represents the amount of energy used over time. One kilowatt-hour is the energy delivered by a power of one kilowatt running for one hour.
• Real-World Example: If you run a 500-watt air conditioner for 2 hours, it consumes 1 kilowatt-hour of energy (0.5 kW × 2 hours).
4. Ampere (A)
• Definition: The ampere, often shortened to amp, is the SI unit of electric current.
• Usage: It measures the amount of electric charge passing a point in an electric circuit per unit of time. One ampere is equal to one coulomb of charge passing through a point in one second.
• Real-World Example: A charging cable for a smartphone might have a current rating of 2 amperes.
5. Milliampere-Hour (mAh)
• Definition: A milliampere-hour is a unit of electric charge commonly used to describe the capacity of smaller batteries.
• Usage: It indicates how long a battery can deliver a certain current. The higher the mAh, the longer the battery will last on a single charge.
• Real-World Example: A typical smartphone battery might be rated at 3000 mAh, indicating it can deliver a current of 3000 milliamperes for one hour.
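The relationships between these units reduce to a few one-line conversions. The sketch below reproduces the worked examples from this guide.

```python
def energy_kwh(power_watts, hours):
    """Energy in kilowatt-hours used by a load of power_watts over hours."""
    return power_watts * hours / 1000

def battery_runtime_hours(capacity_mah, draw_ma):
    """Hours a battery of capacity_mah can sustain a constant draw of draw_ma."""
    return capacity_mah / draw_ma

print(energy_kwh(500, 2))                 # 1.0 kWh (the air-conditioner example)
print(battery_runtime_hours(3000, 3000))  # 1.0 hour (the smartphone example)
```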
Understanding these basic electrical terms is crucial for anyone looking to deepen their knowledge of electricity and its applications. It provides a foundation for exploring more complex electrical
concepts and engaging in practical electrical work or discussions. Whether you're a student, hobbyist, or just curious, grasping these fundamentals is the first step towards a broader understanding
of the electrified world around us.
Umbral Calculus associated with Bernoulli Polynomials
Umbral Calculus associated with Bernoulli Polynomials by Kim, Dae San and Kim, Taekyun @ Department of Mathematics Kwangwoon University...
Galois uniformity in quadratic dynamics over k(t)
On Certain Families of Drinfeld Quasi-Modular Forms
On Certain Families of Drinfeld Quasi-Modular Forms by Vincent Bosser from Universit_e de Caen....
On the Products $(1^\ell+1)(2^\ell+1)\cdots (n^\ell +1)$, II
On the Products $(1^\ell+1)(2^\ell+1)\cdots (n^\ell +1)$, II by Chen, Yong-Gao* and Gong, Ming-Liang at the School of Mathematical Sciences and Institute of Mathematics, Nanjing Normal University...
Approximating the constants of Glaisher-Kinkelin type
Approximating the constants of Glaisher-Kinkelin type by Mortici, Cristinel Valahia from the University of Târgoviste...
An improved upper bound for the argument of the Riemann zeta-function on the critical line II
An improved upper bound for the argument of the Riemann zeta-function on the critical line II by Trudgian, Timothy from the Mathematical Sciences Institute at the Australian National University...
Euler Number Congruences and Dirichlet L-functions
Euler Number Congruences and Dirichlet L-Functions by Nianliang Wang, Junzhuang Li, and Duansen Liu from Institute of Mathematics, Shangluo University...
Introduction to Video Abstracting by David Goss
Editor-in-Chief David Goss explains his thoughts and inspiration for video abstracting and its benefits for the mathematics community....
Congruences for rs(n)
Congruences for rs(n) by Shi-Chao Chen at the School of Mathematics and Information Sciences at Henan University...
Transcendence and CM on Borcea-Voisin towers of Calabi-Yau manifolds
New normality constructions for continued fraction expansions
Effective equidistribution and the Sato-Tate law for families of elliptic curves
Steven J Miller and M. Ram Murty Williams College Department of Mathematics and Statistics...
On the addition of squares of units and nonunits modulo $n$
On the addition of squares of units and nonunits modulo $n$. By Yang, Quan-Hui* and Tang, Min *School of Mathematics and Statistics Nanjing University of Information Science and Technology Nanjing
On the r-th root partition function, II
Sequences of irreducible polynomials over odd prime fields via elliptic curve endomorphisms
The Difference Basis and Bi-basis of $Z_m^*$
The Difference Basis and Bi-basis of Zm ∗ by Yong-Gao Chen & Tang Sun of the Department of Mathematics at Nanjing Normal University...
On Small Fractional Parts of Polynomials
On Small Fractional Parts of Polynomials by Professor Nikolay Moshchevitin of the Dept. of Number Theory in the Faculty of Mathematics and Mechanics at Moscow State University...
Integers with a given number of divisors
"Integers with a given number of divisors" by Chen, Yong-Gao* and Mei, Shu-Yuan *School of Mathematical Sciences and Institute of Mathematics, Nanjing Normal University,...
On the Interpolation of Integer-Valued Polynomials
On the Interpolation of Integer-Valued Polynomials bt Volkov, V. V. and Petrov, F. V. at the Saint-Petersburg Department of Steklov Institute of Mathematics of Russian Academy of Sciences ...
π and the hypergeometric functions of complex argument
Giovanni Mingari Scarpello and Daniele Ritelli Dipartimento di Matematica per le scienze economiche e sociali viale Filopanti, 5 40126 Bologna, Italy...
Yet Another Math Programming Consultant
In [1] a user asks:
I need to build a MILP (Mixed integer linear programming) constraint form this if-else statement: with beta is a constant.
if (a > b) then c = beta else c = 0
How can I build the statement to MILP constraint. Are there any techniques for solving this problem. Thank you.
In my opinion this is a difficult question to answer. To put it bluntly: this is a poor question because we lack lots of information.
1. We are not sure which identifiers are decision variables and which ones are parameters. It is stated that beta is a constant, so probably we can deduce that all other identifiers refer to
variables. In general, it is important to state these things explicitly.
2. For the variables: we don't know the types (binary, integer, positive, free).
3. We don't see the rest of the model. In its most general form, this if-construct is a somewhat difficult thing to handle. In practice however, we can almost always exploit knowledge from the model
to simplify things considerably. I almost never encounter models where we have to handle this construct in its most general form.
4. Expanding on the previous point: one thing to look at is how the objective behaves. It may push variables in a certain direction, which we may exploit. Often this leads to substantial
simplifications. In many cases we can drop an implication (e.g. drop the else part).
5. We don't know what solver or modeling system is being used. Some tools have good support for implications (a.k.a. indicator constraints), which can make things much easier. In other cases, we may
need to use big-M constraints.
6. If \(a\), \(b\) and \(c\) are (free) continuous variables, the condition \(\gt\) is a bit ambiguous. Do we really mean \(\gt\) or can we use \(\ge\)? In my models, I tend to use \(\ge\) and make
\(a=b\) ambiguous: we can pick the best decision for this case (I don't want to skip a more profitable solution because of some epsilon thing).
7. Using assumptions in the previous point, we can write \[\begin{aligned} &\delta = 1 \Rightarrow a \ge b\\ & \delta=0 \Rightarrow a \le b\\ & c = \delta \cdot \beta\\ & \delta \in \{0,1\}\end{aligned}\] Note that the \(c\) constraint is linear: \(\beta\) is a parameter.
8. This can be translated into \[\begin{aligned} & a \ge b - M(1-\delta)\\ & a \le b + M\delta \\ & c = \delta \cdot \beta\\ & \delta \in \{0,1\}\end{aligned}\] To determine good values for big-\(M\) constants, again, we need to know more about the model.
9. If you insist on \(a\gt b\), we can introduce some tolerance \(\varepsilon>0\) and write: \[\begin{aligned} &\delta = 1 \Rightarrow a \ge b + \varepsilon \\ & \delta=0 \Rightarrow a \le b\\ & c = \delta \cdot \beta\\ & \delta \in \{0,1\}\end{aligned}\] Here \(\varepsilon\) should be larger than the feasibility tolerance of the solver (scaling may make this not completely obvious). Note that we effectively create a "forbidden region": the variable \(a\) cannot assume any value in the interval \((b,b+\varepsilon)\) (again, subject to feasibility tolerances).
10. Of course when we have integer variables \(a \gt b\) is much more well-defined and we can interpret that as \(a \ge b+1\).
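For intuition, the big-M pair in point 8 can be sanity-checked by brute force outside any solver. The check below (plain Python, with an illustrative bound M = 10 and small integer samples) confirms that \(a > b\) forces \(\delta = 1\) (hence \(c = \beta\)), \(a < b\) forces \(\delta = 0\) (hence \(c = 0\)), and \(a = b\) leaves the solver free to pick either.

```python
def feasible(delta, a, b, M=10.0):
    # The big-M pair: a >= b - M*(1 - delta) and a <= b + M*delta.
    return a >= b - M * (1 - delta) and a <= b + M * delta

for a in range(-3, 4):
    for b in range(-3, 4):
        deltas = [d for d in (0, 1) if feasible(d, a, b)]
        if a > b:
            assert deltas == [1]       # then c = 1 * beta
        elif a < b:
            assert deltas == [0]       # then c = 0
        else:
            assert deltas == [0, 1]    # tie: either branch is allowed
print("big-M encoding behaves as intended")
```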
So the best answer is: I would need to look at the whole model to give a good answer.
These types of questions are quite common, and answering them is just very difficult. You cannot expect a reply that enumerates all possible answers for all possible cases.
1. Build MILP constraint from if-else statements, https://stackoverflow.com/questions/55899166/build-milp-constraint-from-if-else-statements
Example C++ code
The above fragment is from [1]. I never write loops like this. I use \(n\) for limits or counts, but never for a loop index.
Looking at this, I realized I have many of these "rules". Such as:
1. \(x\), \(y\), \(z\), and \(v\), \(w\) are always double precision variables. (I used to subtract points if a student would write
for (int x=0; ... ).
2. \(i\), \(j\), \(k\) and \(m\), \(n\) are always integer variables.
3. Never use \(l\) (i.e. \(\ell\)) as variable name, it is too close to the digit 1 (one).
4. Don't use short integers (unless for a specific reason) or single precision variables.
5. Use \(i\),\(j\),\(k\) as loop indices in a predictable way (e.g. for a (sparse) matrix: \(i\) for rows, \(j\) for columns, \(k\) for things like nonzero elements).
6. The previous rule also applies to AMPL which uses local index names. E.g. after declaring
param f_max {j in FOOD} >= f_min[j];
I always use j for FOOD.
7. Use short names for items (variables) that are often used and for locally declared indices. Use long names for items that are sparsely used. I call this Huffman-code [2] naming.
I am so used to this, that code that disobeys this in a flagrant way, just hurts my eyes. I find that, if I follow these simple rules, reading code is easier. It minimizes the surprise factor. Of
course, writing code is for consumption by a compiler (or other tool), but more importantly: for consumption by a human reader.
So, that loop should look like:
const int n = 10;
for (int i = 0; i < n; ++i) {...}
In [1] a user asks about how to force a certain ordering of variables for an LP/MIP model. This is an intriguing question for me because I really never worry about this.
• The user asks only about variable ordering. Of course there is a symmetry here: the same question can be asked for equations. Well, equations correspond to slack variables, so this is essentially
the same.
• A different ordering may (will?) cause a different solution path, so you can expect different solution times and iteration counts. Of course, this is in a very unpredictable way, so not something
we can exploit easily. Especially in MIPs we have this concept of performance variability [2]: minor changes in the model can cause significant different solution times.
• Why would you really worry about the ordering? There are three cases I can think of: (1) access variables by index number [I tend to use higher-level tools where this is not needed], (2) when
implementing some decomposition scheme [same: in higher-level tools we can index by names], or (3) when plotting the non-zero elements in the A matrix [I never do this; I find this is not adding
much insight into the model]. In practice, I never worry about the ordering. We have already many things to worry about when building LP/MIP models; this should not be one of them.
I don't understand why one would want to spend time on this.
1. https://stackoverflow.com/questions/56122889/how-to-control-ordering-of-matlab-optimproblem-variables
2. Andrea Lodi, Andrea Tramontani. Performance Variability in Mixed-Integer Programming. In INFORMS TutORials in Operations Research. Published online: 14 Oct 2014; 1-12, https://
Version 1 of cutting application.
The algorithms are a small part of the whole application. Some of the issues:
1. Users are not always able to "specify" in detail what they want in advance. Building prototypes can help: giving feedback about a demo is easier than writing detailed specs in advance.
2. Most code is not dedicated to the algorithm itself, but rather to surrounding things. I estimate that the algorithms cover about 20% of the code. For instance, reporting was a substantial effort.
In the application, I draw on a canvas, but we can export to Excel:
Excel version of output
or print to a PDF file:
PDF version of output
The installation of anaconda 64 bit on my windows laptop worked fine. But then all conda and pip commands came back with the fatal error:
Can't connect to HTTPS URL because the SSL module is not available.
I found lots of messages about this. This bug seems to bug users for a long time. Even though some of these reports are closed as "Resolved" by the developers, I encountered this problem just today.
Here was my solution (from [1]):
I.e. two DLLs were in the wrong directory.
Immediately after this I noticed a second problem:
A third problem is that conda install sometimes takes forever.
All these problems are not unique to me. Using google, I see lots of other users having the same or similar problems.
I had hoped that this would be a bit less painful. This was a standard install on a standard windows machine. This really should not give all these problems. | {"url":"https://yetanothermathprogrammingconsultant.blogspot.com/2019/05/","timestamp":"2024-11-05T00:41:22Z","content_type":"text/html","content_length":"144541","record_id":"<urn:uuid:95821f96-3ca8-4ea3-b98d-7a94653b9347>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00676.warc.gz"} |
CS course at ELTE
This page is to collect all the bits and pieces of source code and other material that I created during my studies at Eötvös Lóránd University. Expect it to be mostly in Hungarian.
MSc thesis
Compositional Type Checking for Hindley-Milner Type Systems with Ad-hoc Polymorphism: My 2011 MSc thesis. For the software behind it, and other details, please check this page.
Theoretical subjects
Relational Programming
Read this introductory paper to get an idea of what this is all about. I've put some problems and solutions online. If you understand Hungarian, you're in luck because there's an online textbook in
two parts.
Formal languages & automata
Functional programming languages
Parallel programming & Grid computing
Parsers & compilers
Algebra and Number Theory
Linear and non-linear optimization
Practical subjects
Programming Environments
Programming assignments for the fall semester of 2005-2006:
Programming Languages/C++
The assignment was to write a solver for the Sudoku puzzle. Input format: 81 digits (separated by whitespace) describing the board row by row, 0 denoting unspecified cells. Output format: the 81
digits of the solution, or 81 zeros if an error occured (such as wrong input format, or unsolvable puzzle).
To aid testing, I also wrote a small Python script that checks solved Sudoku boards of the format described above.
Programming Languages/IA-32 Assembly
23 October 2010 (programming, haskell, language, ELTE) (1 comment)
This is based on a chapter of the M.Sc. thesis I am writing at ELTE, supervised by Péter Diviánszky.
For my M.Sc. thesis, I've been working on writing a compositional type checker for Haskell 98. The basic idea is to extend Olaf Chitil's compositional type system with ad-hoc polymorphism, Haskell
98's major extension to the Hindley-Milner type system. In this post, I'm showing the motivation behind wanting to go compositional.
A property shared by both commonly-used algorithms for doing Hindley-Milner type inference, W and M, is that both W and M infer the type of composite expressions by inferring one subexpression (in
some sense, the “first” one) and using its results in inferring the type of the “next” one. They are linear in the sense that partial results are threaded throughout the type inference.
The effect of linearity on type inference is that certain sub-expressions (those that are processed earlier) can have greater influence on the typing of other subexpressions. This is bad because it
imposes a hierarchy on the subexpressions that is determined solely by the actual type checking algorithm, not by the type system; thus, it can lead to misleading error messages for the programmer.
For example, let's take the following definition of a Haskell function:
foo x = (toUpper x, not x)
There are two ways to typecheck this definition using W: either we first typecheck toUpper x, using the context {x :: α}, resulting in the type equation α ~ Char, then checking not x with {x :: Char}
, or do it the other way around, by first looking at not x, then as a result recursing into toUpper x with the context {x :: Bool}.
GHC, it seems, does the former, resulting in the following error message:
Couldn't match expected type `Bool' against inferred type `Char'
In the first argument of `not', namely `x'
In the expression: not x
In the expression: (toUpper x, not x)
Whereas Hugs 98 does the latter:
ERROR "test.hs":1 - Type error in application
*** Expression : toUpper x
*** Term : x
*** Type : Bool
*** Does not match : Char
The problem is that they are both misleading, because there is nothing wrong with either not x or toUpper x by itself. The problem only comes from trying to unify their respective views on the type
of x.
A compositional type checker, in contrast, descends into toUpper x and not x using the same context, {x :: α}. The first one results in the typing {x :: Char} ⊢ Char (a typing is defined to be not just the type of an expression, but also a mapping of monomorphic variables to their types), and the second one in {x :: Bool} ⊢ Bool. Only afterwards does it try to unify these two typings.
This is better because it becomes meaningful to talk about the typing of a subexpression. For the example above, my work-in-progress compositional type checker can report errors with an (IMO) much
more helpful message:
(toUpper x, not x)
Cannot unify `Char' with `Bool' when unifying `x':
     toUpper x     not x
     Char          Bool
x :: Char          Bool
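As a toy sketch (in Python rather than the thesis's Haskell, and purely illustrative), the final unification step can be viewed as merging the variable-to-type maps of the two typings:

```python
# Toy model of compositional type checking: each subexpression yields a
# typing, i.e. a map from monomorphic variables to the types it forces on
# them. Unification happens only when two typings are merged.
def unify_typings(t1, t2):
    merged = dict(t1)
    for var, ty in t2.items():
        if var in merged and merged[var] != ty:
            raise TypeError(
                f"Cannot unify `{merged[var]}' with `{ty}' when unifying `{var}'")
        merged[var] = ty
    return merged

# `toUpper x` yields {x: Char}; `not x` yields {x: Bool}; the error is
# reported at the merge, not inside either subexpression.
try:
    unify_typings({"x": "Char"}, {"x": "Bool"})
except TypeError as err:
    print(err)   # Cannot unify `Char' with `Bool' when unifying `x'
```

Note that neither typing is privileged: the hierarchy imposed by W simply never arises.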
Of course, the devil's in the details — but that's what my thesis will be about.
DNA Expressions – A Formal Notation for DNA
Rudy van Vliet
promotor: prof. dr. J.N. Kok (UL)
copromotor: dr. H.J. Hoogeboom (UL)
Universiteit Leiden
Date: 10 December, 2015, 12:30
Thesis: PDF
We describe a formal notation for DNA molecules that may contain nicks and gaps. The resulting DNA expressions denote formal DNA molecules. Different DNA expressions may denote the same molecule.
Such DNA expressions are called equivalent. We examine which DNA expressions are minimal, which means that they have the shortest length among all equivalent DNA expressions. Among others, we
describe how to construct a minimal DNA expression for a given molecule. We also present an efficient, recursive algorithm to rewrite a given DNA expression into an equivalent, minimal DNA expression.
For many formal DNA molecules, there exists more than one minimal DNA expression. We define a minimal normal form, i.e., a set of properties such that for each formal DNA molecule, there is exactly
one (minimal) DNA expression with these properties. We finally describe an efficient, two-step algorithm to rewrite an arbitrary DNA expression into this normal form.
Programming Framework
OpenDP is based on a conceptual model that defines the characteristics of privacy-preserving operations and provides a way for components to be assembled into programs with desired behavior. This
model, known as the OpenDP Programming Framework, is described in the paper A Programming Framework for OpenDP. The framework is designed with a clear and verifiable means of capturing the sensitive
aspects of an algorithm, while remaining highly flexible and extensible. OpenDP (the software library) is intended to be a faithful implementation of that approach. Because OpenDP is based on a
well-defined model, users can create applications with rigorous privacy properties.
The OpenDP Programming Framework consists of a set of high-level conceptual elements. We’ll cover the highlights here, which should be enough for you to get acquainted with OpenDP programming. If
you’re interested in more of the details and motivations behind the framework, you’re encouraged to read the paper.
In this section, we’ve used lower case when writing the names of OpenDP concepts. Later, when we talk about programming elements, we’ll use the capitalized form to refer to the concrete data types
that implement these concepts. (The concept names link to their corresponding type descriptions.)
• Measurements are randomized mappings from a dataset to an arbitrary output value. They are a controlled means of introducing privacy (e.g. noise) to a computation. An example of a measurement is
one which applies Laplace noise to a value.
• Transformations are deterministic mappings from a dataset to another dataset. They are used to summarize or transform values in some way. An example of a transformation is one which calculates
the mean of a set of values.
• Domains are sets which identify the possible values that some object can take. They are used to constrain the input or output of measurements and transformations. Examples of domains are the
integers between 1 and 10, or vectors of length 5 containing floating point numbers.
• Measures and metrics are things that specify distances between two mathematical objects.
□ Measures characterize the distance between two probability distributions. An example measure is the “max-divergence” of pure differential privacy.
□ Metrics capture the distance between two neighboring datasets. An example metric is “symmetric distance” (counting the number of elements changed).
• Privacy relations and stability relations are boolean functions which characterize the notion of “closeness” between operation inputs and outputs. They are the glue that binds everything together.
□ A privacy relation is a statement about a measurement. It’s a boolean function of two values, an input distance (in a specific metric) and an output distance (in a specific measure). A privacy relation lets you make assertions about a measurement when the measurement is evaluated on any pairs of neighboring datasets. If the privacy relation is true, it’s guaranteed that any pair of measurement inputs within the input distance will always produce a pair of measurement outputs within the output distance.
□ A stability relation is a statement about a transformation. It’s also a boolean function of two values, an input distance (in a specific metric) and an output distance (in a specific metric,
possibly different from the input metric). A stability relation lets you make assertions about the behavior of a transformation when that transformation is evaluated on any pairs of
neighboring datasets. If the stability relation is true, it is a guarantee that any pair of transformation inputs within the input distance will always produce transformation outputs within
the output distance.
Relations capture the notion of closeness in a very general way, allowing the extension of OpenDP to different definitions of privacy.
As you can see, these elements are interdependent and support each other. The interaction of these elements is what gives the OpenDP Programming Framework its flexibility and expressiveness.
Key Points
You don’t need to know all the details of the Programming Framework to write OpenDP applications, but it helps to understand some of the key points:
• OpenDP calculations are built by assembling a measurement from a number of constituent transformations and measurements, typically through chaining or composition.
• Measurements don’t have a static privacy loss specified when constructing the measurement. Instead, measurements are typically constructed by specifying the scale of noise, and the loss is
bounded by the resulting privacy relation. This requires some extra work compared to specifying the loss directly, but OpenDP provides some utilities to make this easier on the programmer, and
the benefit is greatly increased flexibility of the framework as a whole.
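As a rough sketch of that last point (hypothetical Python, not the real OpenDP API), a Laplace measurement can be built from a noise scale alone, with the privacy loss checked afterwards through its privacy relation, using the standard Laplace-mechanism bound epsilon >= sensitivity / scale:

```python
import math
import random

class LaplaceMeasurement:
    """Illustrative sketch only; not the real OpenDP API."""

    def __init__(self, scale):
        self.scale = scale  # noise scale chosen at construction time

    def invoke(self, value):
        # Add Laplace(0, scale) noise, sampled by inverse transform.
        u = random.random() - 0.5
        noise = -self.scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
        return value + noise

    def privacy_relation(self, d_in, d_out):
        # True iff any inputs d_in apart (in the metric) always give outputs
        # within d_out = epsilon (in the measure): the Laplace bound.
        return d_out >= d_in / self.scale

meas = LaplaceMeasurement(scale=2.0)
print(meas.privacy_relation(d_in=1.0, d_out=0.5))   # True: epsilon 0.5 suffices
print(meas.privacy_relation(d_in=1.0, d_out=0.4))   # False: too tight for this scale
```

The point is that no epsilon is stored in the measurement itself; the relation answers whether a proposed (input distance, output distance) pair is covered.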
Implementation Differences
As a work in progress, it’s important to note that OpenDP doesn’t yet implement all the details of the Programming Framework.
Interactive Measurements
An important aspect of the Programming Framework is the flexible way that it models interactive measurements. These are measurements where the operation isn’t a static function, but instead captures
a series of queries and responses, where the sequence is possibly determined dynamically. This is a very flexible model of computation, and can be used to capture notions such as adaptive composition.
Unfortunately, OpenDP doesn’t yet implement interactive measurements, and is limited to plain (non-interactive) measurements. We know this is important functionality, and are in the process of
prototyping an implementation, but unfortunately it’ll take some time before it’s ready for use.
Row Transforms
Row transforms are a way of applying a user-defined function to each of the elements of a dataset. This concept can be used to construct transformations for operations that aren’t provided “out of
the box” by OpenDP. Unfortunately, supporting row transforms has some privacy limitations around pure functions and also requires some tricky technical work, so these aren’t yet implemented in OpenDP.
Applying the Concepts
This is just a glance at the abstract concepts in the OpenDP Programming Framework. The following sections of this guide describe the actual software components in OpenDP implementing these concepts,
and how they can be used in your programs.
From the vault: The real gleam in the imaginary 'i'
Credit: Jeffrey Phillips
The old aphorism “my enemy’s enemy is my friend” has a mathematical equivalent: multiplying two negatives makes a positive. In monetary terms, where a negative number is definitely the enemy, it is
the same as saying that reducing a debt is equivalent to making a gain.
The simplest case of two negatives making a positive is:
-1 x -1 = 1
Multiplying a number by itself is known as squaring the number, so the square of -1 is 1, the square of -2 is 4, and the square of -3 is 9, etc. However, the square of 1 is also 1, the square of 2 is
4 and so on. The square is the same whether the number is negative or positive.
Going backwards, a procedure called “taking the square root” reverses the result. The square root of 9 is 3, but it is also -3; there are two solutions to any square root calculation.
All this is well and good, and drummed into most of us in school, but what happens if you try to take the square root of a negative number, such as -9? No ordinary number, when multiplied by itself,
yields -9, so how can you do the reverse?
This question stymied mathematicians for years. What they finally figured out was the need for a new type of number entirely. By the 18th century, they extended the number system to include the
square root of negative numbers.
This is what they did:
i x i = -1
which, rearranged, reads:
i = √-1
The new species of number here is symbolised by i because in the early days it was considered an “imaginary” number rather than a “real” number.
Many mathematicians were suspicious and even derisory about it. The name has stuck, even though today we accept imaginary numbers are just as real as real numbers. You can get more imaginary numbers
by multiplying i by real numbers – 2i, 3i, 4i and so on – and there is no problem combining real and imaginary numbers. For example, 5 + 3i is a perfectly good number. Such combinations are called
‘complex’ numbers, though the rules for manipulating them are very simple.
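The rules really are simple; Python, for instance, has complex numbers built in (written with j, the engineers' symbol, instead of i):

```python
i = 1j                  # Python writes the imaginary unit as 1j
print(i * i)            # (-1+0j): i squared is -1

z = 5 + 3j              # the "perfectly good number" 5 + 3i
w = 2 - 1j
print(z + w)            # (7+2j)
print(z * w)            # (13+1j): the usual rules, plus i*i = -1
```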
What are imaginary numbers good for? It turns out that by embracing i, the scope and power of mathematical manipulations are enormously broadened, opening the way to a plethora of new shortcuts and
Exponential growth and decay, when raised to the power of i, superimposes a wave-like oscillation on to the pattern of growth or decay. The world is replete with quantities that oscillate and either
grow or decay at the same time. For an example of how this works, think of a swinging pendulum that gradually slows – it oscillates as it “decays”. Using i also greatly simplifies the mathematical
description of systems that use complicated oscillating waveforms, such as acoustic and electronic signals.
But imaginary numbers are not just a computational convenience. Mother Nature got there long before mathematicians. We have known since Einstein’s theory of relativity that space and time are not
independent but fundamentally tied together by the speed of light into a unified “spacetime”.
Though related, space and time are not the same: it is i that allows us to combine them. To measure the spacetime interval between two cosmic events, for instance, you have to express the time
interval in the same units as spatial distance – achieved by multiplying by i. One can, therefore, say that space is “imaginary time” (in the technical sense of imaginary numbers), a term popularised
by Stephen Hawking in A Brief History of Time. Hawking discusses a theory of the Big Bang in which the universe started out with four space dimensions, so time was imaginary (in the √-1 sense) at
the outset.
Nature also uses complex numbers in quantum mechanics. If you were the Great Cosmic Designer and tried to come up with laws for atomic processes using only real numbers, the resulting properties of
atoms would be very different from what we observe.
There’s another reason to appreciate the number i: its elegance. It won a public beauty contest earlier this year when the BBC asked people to vote for the most elegant mathematical relationship of
all time. The winner was declared to be
e^(iπ) + 1 = 0
where e stands for exponential. This formula was discovered in 1748 by the brilliant Leonhard Euler, known as “the Mozart of mathematics”. By invoking i, Euler was able to combine e with three of the
most basic elements of the entire number system: 0, 1 and π. Euler’s formula is a profound relationship that seems to be speaking to us from some sort of mathematical nirvana.
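The relationship is easy to check numerically; e^(iπ) + 1 comes out as zero up to floating-point rounding:

```python
import cmath
import math

value = cmath.exp(1j * math.pi) + 1
print(abs(value))   # ~1.2e-16, i.e. zero up to rounding error
```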
All of which raises the question of how much mathematical beauty still remains hidden from us because of limitations on our existing number system. Is there a future Euler out there who (to borrow
another aphorism) will help us to behold it?
Random Variables
A random variable is a function that can take on values corresponding to a sample point in a sample space. Since each sample point is associated with a probability value, a random variable assumes its values with a certain probability that depends on the sample point on which the value is based. A random variable defined over a discrete sample space has a finite or countable number of possible values and is called a discrete random variable. A random variable defined over a continuous sample space has an infinite set of possible values and is called a continuous random variable.
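A quick illustration of the two kinds of random variable, using Python's standard library:

```python
import random

# Discrete random variable: a fair die roll has six possible values,
# each occurring with probability 1/6.
die = random.randint(1, 6)

# Continuous random variable: a draw from the uniform distribution on
# [0, 1) can take any of infinitely many values in that range.
u = random.random()

print(die, u)
```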
Use stokes' theorem to evaluate where and s is the part of the paraboloid that lies inside the cylinder , oriented upward
use stokes’ theorem to evaluate where and s is the part of the paraboloid that lies inside the cylinder , oriented upward.
Use of Stokes’ Theorem to Evaluate a Paraboloid Inside a Cylinder
To apply Stokes’ Theorem to evaluate the given expression, let’s break down the problem.
1. Definition:
Stokes’ Theorem relates a surface integral of the curl of a vector field over a surface ( S ) to a line integral of the vector field around the boundary of ( S ). Mathematically, it is expressed as:
$$\int_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S} = \oint_C \mathbf{F} \cdot d\mathbf{r}$$
2. Given Problem:
In this case, we are asked to compute the surface integral using Stokes’ Theorem for a paraboloid inside a cylinder, oriented upwards.
3. Approach:
• Identify the vector field and expression for curl ( \nabla \times \textbf{F} ).
• Determine the surface ( S ) which represents the part of the paraboloid inside the cylinder.
• Find the curve ( C ) which represents the boundary of ( S ).
• Evaluate the line integral on ( C ) to get the desired result.
4. Calculation:
Detailed calculation steps involve setting up the vector field, computing the curl, defining the surface and boundary, and integrating over them according to Stokes’ Theorem.
5. Conclusion:
By following the defined steps and performing the required calculations, the final evaluation based on Stokes’ Theorem will provide the solution for the given problem.
In conclusion, using Stokes’ Theorem to evaluate the part of a paraboloid inside a cylinder, oriented upward, involves a systematic approach to compute the surface integral. This method allows for
the transformation of a complex surface integral problem into a more manageable line integral calculation.
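Since the original field and surfaces were lost from the question, here is a worked check with an assumed field F = (z, x, y), the paraboloid z = x² + y² inside the cylinder x² + y² = 1, oriented upward. Then curl F = (1, 1, 1), the upward flux through the paraboloid works out to π, and Stokes' Theorem says the circulation around the boundary circle (x² + y² = 1 at z = 1, counterclockwise seen from above) must also be π:

```python
import math

def line_integral(n=10_000):
    """Midpoint-rule approximation of the circulation of F = (z, x, y)
    around the boundary curve r(t) = (cos t, sin t, 1), t in [0, 2*pi]."""
    total = 0.0
    dt = 2 * math.pi / n
    for k in range(n):
        t = (k + 0.5) * dt
        x, y, z = math.cos(t), math.sin(t), 1.0
        fx, fy, fz = z, x, y                          # assumed field F
        dx, dy, dz = -math.sin(t), math.cos(t), 0.0   # r'(t)
        total += (fx * dx + fy * dy + fz * dz) * dt
    return total

print(line_integral())   # ~3.14159..., matching the flux of curl F
```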
FIN 534 Homework 3 Set 1 | BluPapers
FIN 534 Homework 3 Set 1
Directions: Answer the following questions on this document. Explain how you reached the answer
or show your work if a mathematical calculation is needed, or both. Submit your assignment using
the assignment link in the course shell. This homework assignment is worth 100 points.
YOU MUST ENTER CORRECT INFORMATION IN THE YELLOW-CODED CELLS
DO NOT TOUCH THE NON-YELLOW-CODED CELLS
ANSWERS ARE IN THE RED-BORDERED CELLS
Use the following information for questions 1 through 8:
The Goodman Industries’ and Landry Incorporated’s stock prices and dividends, along with the Market
Index, are shown below. Stock prices are reported for December 31 of each year, and dividends reflect
those paid during the year. The market data are adjusted to include dividends.
              Goodman Industries        Landry Incorporated       Market Index
Year          Stock Price   Dividend    Stock Price   Dividend    (Includes Dividends)
2013          $25.88        $1.73       $73.13        $4.50       17,495.97
2012          22.13         1.59        78.45         4.35        13,178.55
2011          24.75         1.50        73.13         4.13        13,019.97
2010          16.13         1.43        85.88         3.75        9,651.05
2009          17.06         1.35        90.00         3.38        8,403.42
2008          11.44         1.28        83.63         3.00        7,058.96
1. Use the data given to calculate the annual returns for Goodman, Landry, and the Market Index, and
then calculate average annual returns for the two stocks and the index. (Hint: Remember, returns
are calculated by subtracting the beginning price from the ending price to get the capital gain or
loss, adding the dividend to the capital gain or loss, and then dividing the result by the beginning
price. Assume that dividends are already included in the index, Also, you cannot calculate the
rate of return for 2008 because you do not have 2007 data.)
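For Question 1, the hint's formula can be sketched in Python using Goodman's figures from the table (the Landry and index columns work the same way):

```python
# return = (ending price - beginning price + dividend) / beginning price
prices = {2008: 11.44, 2009: 17.06, 2010: 16.13,
          2011: 24.75, 2012: 22.13, 2013: 25.88}
dividends = {2009: 1.35, 2010: 1.43, 2011: 1.50, 2012: 1.59, 2013: 1.73}

returns = {yr: (prices[yr] - prices[yr - 1] + dividends[yr]) / prices[yr - 1]
           for yr in range(2009, 2014)}
average = sum(returns.values()) / len(returns)

for yr in sorted(returns):
    print(yr, f"{returns[yr]:.1%}")
print("average", f"{average:.1%}")
```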
2. Calculate the standard deviations of the returns for Goodman, Landry, and the Market Index.
(Hint: Use the sample standard deviation formula given in the chapter, which corresponds to the
STDEV function in Excel.)
3. Estimate Goodman’s and Landry’s betas as the slopes of regression lines with stock return on the
vertical axis (y-axis) and market return on the horizontal axis (x-axis). (Hint: Use Excel’s SLOPE
function.) Are these betas consistent with your graph?
4. The risk-free rate on long-term Treasury bonds is 6.04%. Assume that the market risk premium is
5%. What is the required return on the market using the SML equation?
5. If you formed a portfolio that consisted of 50% Goodman stock and 50% Landry stock, what would be its beta and its required return?
6. What dividends do you expect for Goodman Industries stock over the next 3 years if you expect the
dividend to grow at the rate of 5% per year for the next 3 years? In other words, calculate
D1, D2, and D3. Note that D0 = $1.50
7. Assume that Goodman Industries’ stock, currently trading at $27.05, has a required return of 13%.
You will use this required return rate to discount dividends. Find the present value of the
dividend stream, that is, calculate the PV of D1, D2, and D3, and then sum these PVs.
8. If you plan to buy the stock, hold it for 3 years, and then sell it for $27.05, what is the
most you should pay for it? (Problem 7-19)
Use the following information for Question 9:
Suppose now that Goodman Industries (1) trades at a current stock price of $30 with a (2) strike price of $35. Given the following information: (3) time to expiration is 4 months, (4) annualized risk-free rate is 5%, and (5) variance of stock return is .25.
9. What is the price for a call option using the
Black-Scholes model?
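For Question 9, the Black-Scholes call formula can be sketched as follows (note the variance of .25 means sigma = sqrt(.25) = .5, and T = 4/12 years):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def black_scholes_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

price = black_scholes_call(S=30, K=35, T=4 / 12, r=0.05, sigma=0.5)
print(round(price, 2))   # about $1.89
```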
The Google Developers Machine Learning Glossary is also good.
algorithm
A series of repeatable steps for carrying out a certain type of task with data. As with data structures, people studying computer science learn about different algorithms and their suitability
for various tasks. Specific data structures often play a role in how certain algorithms get implemented. See also data structure
AngularJS
An open-source JavaScript library maintained by Google and the AngularJS community that lets developers create what are known as Single [web] Page Applications. AngularJS is popular with data
scientists as a way to show the results of their analysis. See also JavaScript, D3
artificial intelligence
Also, AI. The ability to have machines act with apparent intelligence, although varying definitions of “intelligence” lead to a range of meanings for the artificial variety. In AI’s early days in
the 1960s, researchers sought general principles of intelligence to implement, often using symbolic logic to automate reasoning. As the cost of computing resources dropped, the focus moved more
toward statistical analysis of large amounts of data to drive decision making that gives the appearance of intelligence. See also machine learning, data mining
backpropagation
Also, backprop. An algorithm for iteratively adjusting the weights used in a neural network system. Backpropagation is often used to implement gradient descent. See also neural network, gradient descent
Bayes' Theorem
Also, Bayes' Rule. An equation for calculating the probability that something is true if something potentially related to it is true. If P(A) means “the probability that A is true” and P(A|B)
means “the probability that A is true if B is true,” then Bayes' Theorem tells us that P(A|B) = (P(B|A)P(A)) / P(B). This is useful for working with false positives—for example, if x% of people
have a disease, the test for it is correct y% of the time, and you test positive, Bayes' Theorem helps calculate the odds that you actually have the disease. The theorem also makes it easier to
update a probability based on new data, which makes it valuable in the many applications where data continues to accumulate. Named for eighteenth-century English statistician and Presbyterian
minister Thomas Bayes. See also Bayesian network, prior distribution
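The false-positive example in the entry is easy to work through. Assume (hypothetical numbers) a disease with 1% prevalence and a test that is correct 99% of the time for both sick and healthy people:

```python
p_disease = 0.01            # P(A): prior probability of having the disease
p_pos_given_disease = 0.99  # P(B|A): test sensitivity
p_pos_given_healthy = 0.01  # false-positive rate on healthy people

# P(B): total probability of testing positive
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(p_disease_given_pos)   # 0.5: a positive test is only a coin flip here
```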
Bayesian network
Also, Bayes net. “Bayesian networks are graphs that compactly represent the relationship between random variables for a given problem. These graphs aid in performing reasoning or decision making
in the face of uncertainty. Such reasoning relies heavily on Bayes’ rule.”^[bourg] These networks are usually represented as graphs in which the link between any two nodes is assigned a value
representing the probabilistic relationship between those nodes. See also Bayes' Theorem, Markov Chain
bias
In machine learning, “bias is a learner’s tendency to consistently learn the same wrong thing. Variance is the tendency to learn random things irrespective of the real signal.... It’s easy to
avoid overfitting (variance) by falling into the opposite error of underfitting (bias). Simultaneously avoiding both requires learning a perfect classifier, and short of knowing it in advance
there is no single technique that will always do best (no free lunch).”^[domingos] See also variance, overfitting, classification
Big Data
As this has become a popular marketing buzz phrase, definitions have proliferated, but in general, it refers to the ability to work with collections of data that had been impractical before
because of their volume, velocity, and variety (“the three Vs”). A key driver of this new ability has been easier distribution of storage and processing across networks of inexpensive commodity
hardware using technology such as Hadoop instead of requiring larger, more powerful individual computers. The work done with these large amounts of data often draws on data science skills.
binomial distribution
A distribution of outcomes of independent events with two mutually exclusive possible outcomes, a fixed number of trials, and a constant probability of success. This is a discrete probability
distribution, as opposed to continuous—for example, instead of graphing it with a line, you would use a histogram, because the potential outcomes are a discrete set of values. As the number of
trials represented by a binomial distribution goes up, if the probability of success remains constant, the histogram bars will get thinner, and it will look more and more like a graph of normal
distribution. See also probability distribution, discrete variable, histogram, normal distribution
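A small sketch of a binomial distribution's probability mass function, for 10 fair coin flips:

```python
import math

def binom_pmf(k, n, p):
    """P(exactly k successes in n independent trials, success prob p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

pmf = [binom_pmf(k, 10, 0.5) for k in range(11)]   # histogram heights, k = 0..10
print(pmf[5])      # 0.24609375: 5 heads is the most likely outcome
print(sum(pmf))    # 1.0: the probabilities cover all outcomes
```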
chi-square test
Chi (pronounced like “pie” but beginning with a “k”) is a Greek letter, and chi-square is “a statistical method used to test whether the classification of data can be ascribed to chance or to
some underlying law.”^[websters] The chi-square test “is an analysis technique used to estimate whether two variables in a cross tabulation are correlated.”^[shin] A chi-square distribution
varies from normal distribution based on the “degrees of freedom” used to calculate it. See also normal distribution and Wikipedia on the chi-squared test and on chi-squared distribution.
classification
The identification of which of two or more categories an item falls under; a classic machine learning task. Deciding whether an email message is spam or not classifies it among two categories,
and analysis of data about movies might lead to classification of them among several genres. See also supervised learning, clustering
clustering
Any unsupervised algorithm for dividing up data instances into groups—not a predetermined set of groups, which would make this classification, but groups identified by the execution of the
algorithm because of similarities that it found among the instances. The center of each cluster is known by the excellent name “centroid.” See also classification, supervised learning,
unsupervised learning, k-means clustering
coefficient
“A number or algebraic symbol prefixed as a multiplier to a variable or unknown quantity (Ex.: x in x(y + z), 6 in 6ab”^[websters] When graphing an equation such as y = 3x + 4, the coefficient of
x determines the line's slope. Discussions of statistics often mention specific coefficients for specific tasks such as the correlation coefficient, Cramer’s coefficient, and the Gini
coefficient. See also correlation
computational linguistics
Also, natural language processing, NLP. A branch of computer science for parsing text of spoken languages (for example, English or Mandarin) to convert it to structured data that you can use to
drive program logic. Early efforts focused on translating one language to another or accepting complete sentences as queries to databases; modern efforts often analyze documents and other data
(for example, tweets) to extract potentially valuable information. See also GATE, UIMA
confidence interval
continuous variable
A variable whose value can be any of an infinite number of values, typically within a particular range. For example, if you can express age or size with a decimal number, then they are continuous
variables. In a graph, the value of a continuous variable is usually expressed as a line plotted by a function. Compare discrete variable
correlation
“The degree of relative correspondence, as between two sets of data.”^[websters] If sales go up when the advertising budget goes up, they correlate. The correlation coefficient is a measure of
how closely the two data sets correlate. A correlation coefficient of 1 is a perfect correlation, .9 is a strong correlation, and .2 is a weak correlation. This value can also be negative, as
when the incidence of a disease goes down when vaccinations go up. A correlation coefficient of -1 is a perfect negative correlation. Always remember, though, that correlation does not imply
causation. See also coefficient
covariance
“A measure of the relationship between two variables whose values are observed at the same time; specifically, the average value of the two variables diminished by the product of their average
values.”^[websters] “Whereas variance measures how a single variable deviates from its mean, covariance measures how two variables vary in tandem from their means.”^[grus] See also variance, mean
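Both quantities are a few lines of code. A sketch from the definitions (sample covariance, dividing by n - 1; the data below is made up):

```python
def mean(xs):
    return sum(xs) / len(xs)

def covariance(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def correlation(xs, ys):
    # Covariance rescaled by the standard deviations, giving a value in -1..1.
    return covariance(xs, ys) / (covariance(xs, xs) ** 0.5
                                 * covariance(ys, ys) ** 0.5)

ad_budget = [10, 20, 30, 40]
sales = [12, 24, 33, 46]
print(covariance(ad_budget, sales))      # positive: they vary in tandem
print(correlation(ad_budget, sales))     # close to 1: a strong correlation
```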
cross-validation
When using data with an algorithm, “the name given to a set of techniques that divide up data into training sets and test sets. The training set is given to the algorithm, along with the correct
answers... and becomes the set used to make predictions. The algorithm is then asked to make predictions for each item in the test set. The answers it gives are compared to the correct answers,
and an overall score for how well the algorithm did is calculated.”^[segaran] See also machine learning
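A minimal holdout split in this spirit, with made-up data and a deliberately crude model (always predict the training mean):

```python
import random

random.seed(0)  # reproducible toy data
data = [(x, 2 * x + random.uniform(-1, 1)) for x in range(100)]
random.shuffle(data)

train, test = data[:80], data[80:]   # training set / test set

# "Train" the crude model: memorize the mean response.
prediction = sum(y for _, y in train) / len(train)

# Score it on the held-out test set against the correct answers.
mse = sum((y - prediction) ** 2 for _, y in test) / len(test)
print(round(mse, 1))   # large, as expected for so crude a model
```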
D3
“Data-Driven Documents.” A JavaScript library that eases the creation of interactive visualizations embedded in web pages. D3 is popular with data scientists as a way to present the results of
their analysis. See also AngularJS, JavaScript
data engineer
A specialist in data wrangling. “Data engineers are the ones that take the messy data... and build the infrastructure for real, tangible analysis. They run ETL software, marry data sets, enrich
and clean all that data that companies have been storing for years.”^[biewald] See also data wrangling. (A Wikipedia search for “data engineering” redirects to “information engineering,” an older
term that describes a more enterprise-oriented job with greater system architecture responsibility and less hands-on work with the data.)
data mining
Generally, the use of computers to analyze large data sets to look for patterns that let people make business decisions. While this sounds like much of what data science is about, popular use of
the term is much older, dating back at least to the 1990s. See also data science
data science
“The ability to extract knowledge and insights from large and complex data sets.”^[patil] Data science work often requires knowledge of both statistics and software engineering. See also data
engineer, machine learning
data structure
A particular arrangement of units of data such as an array or a tree. People studying computer science learn about different data structures and their suitability for various tasks. See also
data wrangling
Also, data munging. The conversion of data, often through the use of scripting languages, to make it easier to work with. If you have 900,000 birthYear values of the format yyyy-mm-dd and 100,000
of the format mm/dd/yyyy and you write a Perl script to convert the latter to look like the former so that you can use them all together, you're doing data wrangling. Discussions of data science
often bemoan the high percentage of time that practitioners must spend doing data wrangling; the discussions then recommend the hiring of data engineers to address this. See also Perl, Python,
shell, data engineer
decision trees
“A decision tree uses a tree structure to represent a number of possible decision paths and an outcome for each path. If you have ever played the game Twenty Questions, then it turns out you are
familiar with decision trees.”^[grus] See also random forest
deep learning
Typically, a multi-level algorithm that gradually identifies things at higher levels of abstraction. For example, the first level may identify certain lines, then the next level identifies
combinations of lines as shapes, and then the next level identifies combinations of shapes as specific objects. As you might guess from this example, deep learning is popular for image
classification. See also neural network
dependent variable
The outcome that a model tries to predict or explain, so called because its value is assumed to depend on the values of one or more independent variables. See also independent variable
dimension reduction
Also, dimensionality reduction. “We can use a technique called principal component analysis to extract one or more dimensions that capture as much of the variation in the data as possible...
Dimensionality reduction is mostly useful when your data set has a large number of dimensions and you want to find a small subset that captures most of the variation.”^[grus] Linear algebra can
be involved; “broadly speaking, linear algebra is about translating something residing in an m-dimensional space into a corresponding shape in an n-dimensional space.”^[shin] See also linear algebra
discrete variable
A variable whose potential values must be one of a specific number of values. If someone rates a movie with between one and five stars, with no partial stars allowed, the rating is a discrete
variable. In a graph, the distribution of values for a discrete variable is usually expressed as a histogram. See also continuous variable, histogram
econometrics
“The use of mathematical and statistical methods in the field of economics to verify and develop economic theories”^[websters]
feature
The machine learning expression for a piece of measurable information about something. If you store the age, annual income, and weight of a set of people, you're storing three features about
them. In other areas of the IT world, people may use the terms property, attribute, or field instead of “feature.” See also feature engineering
feature engineering
“To obtain a good model, however, often requires more effort and iteration and a process called feature engineering. Features are the model’s inputs. They can involve basic raw data that you have
collected, such as order amount, simple derived variables, such as ‘Is order date on a weekend? Yes/No,’ as well as more complex abstract features, such as the ‘similarity score’ between two
movies. Thinking up features is as much an art as a science and can rely on domain knowledge.”^[anderson] See also feature
GATE
“General Architecture for Text Engineering,” an open source, Java-based framework for natural language processing tasks. The framework lets you pipeline other tools designed to be plugged into
it. The project is based at the UK’s University of Sheffield. See also computational linguistics, UIMA
gradient boosting
“Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically
decision trees. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.”^
gradient descent
An optimization algorithm for finding the input to a function “that produces the largest (or smallest) possible value... one approach to maximizing a function is to pick a random starting point,
compute the gradient, take a small step in the direction of the gradient (i.e., the direction that causes the function to increase the most), and repeat with the new starting point. Similarly,
you can try to minimize a function by taking small steps in the opposite direction.”^[grus] See also backpropagation
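The quoted recipe amounts to only a few lines of code. A minimal sketch in Python: the quadratic function, step size, and step count below are illustrative choices, not part of the quoted definition.

```python
def gradient_descent(start, learning_rate=0.1, steps=100):
    """Minimize f(x) = (x - 3)**2 by repeatedly stepping opposite the gradient."""
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)          # gradient (slope) of f at the current x
        x -= learning_rate * grad   # small step in the downhill direction
    return x

minimum = gradient_descent(start=0.0)  # converges toward 3.0, the true minimum
```

Flipping the sign of the update step would instead climb toward a maximum, as the quoted passage describes.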
histogram
A graphical representation of the distribution of a set of numeric data, usually a vertical bar graph. See also probability distribution, binomial distribution, discrete variable
independent variable
A variable whose value is chosen or measured as a possible influence on an outcome, such as the x values used to predict y in a regression model. See also dependent variable
JavaScript
A scripting language (no relation to Java) originally designed in the mid-1990s for embedding logic in web pages, but which later evolved into a more general-purpose development language.
JavaScript continues to be very popular for embedding logic in web pages, with many libraries available to enhance the operation and visual presentation of these pages. See also AngularJS, D3
k-means clustering
“A data mining algorithm to cluster, classify, or group your N objects based on their attributes or features into K number of groups (so-called clusters).”^[parsian] See also clustering
k-nearest neighbors
Also, kNN. A machine learning algorithm that classifies things based on their similarity to nearby neighbors. You tune the algorithm’s execution by picking how many neighbors to examine (k) as
well as some notion of “distance” to indicate how near the neighbors are. For example, in a social network, a friend of your friend could be considered twice the distance away from you as your
friend. “Similarity” would be comparison of feature values in the neighbors being compared. See also classification, feature
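A minimal sketch of the voting scheme, using Euclidean distance on two-dimensional points; the data set and the choice of distance are invented for illustration.

```python
import math
from collections import Counter

def knn_classify(k, labeled_points, new_point):
    """Majority vote among the k labeled points nearest to new_point.

    labeled_points is a list of ((x, y), label) pairs.
    """
    nearest = sorted(labeled_points, key=lambda pl: math.dist(pl[0], new_point))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

points = [((0, 0), "red"), ((1, 0), "red"), ((5, 5), "blue"), ((6, 5), "blue")]
label = knn_classify(3, points, (0.5, 0.5))  # "red": two of the three nearest are red
```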
latent variable
“In statistics, latent variables (from Latin: present participle of lateo ('lie hidden'), as opposed to observable variables), are variables that are not directly observed but are rather inferred
(through a mathematical model) from other variables that are observed (directly measured). Mathematical models that aim to explain observed variables in terms of latent variables are called
latent variable models.”^[wikipedialv]
lift
“Lift compares the frequency of an observed pattern with how often you’d expect to see that pattern just by chance... If the lift is near 1, then there’s a good chance that the pattern you
observed is occurring just by chance. The larger the lift, the more likely that the pattern is ‘real.’”^[zumel]
linear algebra
A branch of mathematics dealing with vector spaces and operations on them such as addition and multiplication. “Linear algebra is designed to represent systems of linear equations. Linear
equations are designed to represent linear relationships, where one entity is written to be a sum of multiples of other entities. In the shorthand of linear algebra, a linear relationship is
represented as a linear operator—a matrix.”^[zheng] See also vector, vector space, matrix, coefficient
linear regression
A technique to look for a linear relationship (that is, one where the relationship between two varying amounts, such as price and sales, can be expressed with an equation that you can represent
as a straight line on a graph) by starting with a set of data points that don't necessarily line up nicely. This is done by computing the “least squares” line: the one that has, on an x-y graph,
the smallest possible sum of squared distances to the actual data point y values. Statistical software packages and even typical spreadsheet packages offer automated ways to calculate this.
People who get excited about machine learning often apply it to problems that would have been much simpler by using linear regression in an Excel spreadsheet. See also regression, logistic
regression, machine learning
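The “least squares” line described above has a closed-form solution that is short enough to write directly; the sample points below are invented for illustration.

```python
def least_squares(xs, ys):
    """Fit y = a + b*x, minimizing the sum of squared vertical distances."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))  # slope
    a = mean_y - b * mean_x                     # intercept
    return a, b

a, b = least_squares([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])  # roughly y = 0.15 + 1.94x
```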
log
If y = 10^x, then log(y) = x. Working with the log of one or more of a model's variables, instead of their original values, can make it easier to model relationships with linear functions instead
of non-linear ones. Linear functions are typically easier to use in data analysis. (The log(y) = x example shown is for log base 10. Natural logarithms, or log base e—where e is a specific
irrational number a little over 2.7—are a bit more complicated but also very useful for related tasks.) See also dependent variable, linear regression
logistic regression
A model similar to linear regression but where the potential results are a specific set of categories instead of being continuous. See continuous variable, regression, linear regression
machine learning
The use of data-driven algorithms that perform better as they have more data to work with, “learning” (that is, refining their models) from this additional data. This often involves
cross-validation with training and test data sets. “The fundamental goal of machine learning is to generalize beyond the examples in the training set.”^[domingos] Studying the practical
application of machine learning usually means researching which machine learning algorithms are best for which situations. See also algorithm, cross-validation, artificial intelligence
machine learning model
"The process of training an ML model involves providing an ML algorithm (that is, the learning algorithm) with training data to learn from. The term ML model refers to the model artifact that is
created by the training process." ^[amazonml] See also algorithm, machine learning, model
Markov Chain
An algorithm for working with a series of events (for example, a system being in particular states) to predict the possibility of a certain event based on which other events have happened. The
identification of probabilistic relationships between the different events means that Markov Chains and Bayesian networks often come up in the same discussions. See also Bayesian network, Monte
Carlo method
matrix
(Plural: matrices) An older Webster’s dictionary with a heavier emphasis on typographical representation gives the mathematical definition as “a set of numbers or terms arranged in rows and
columns between parentheses or double lines”^[websters]. For purposes of manipulating a matrix with software, think of it as a two-dimensional array. As with its one-dimensional equivalent, a
vector, this mathematical representation of the two-dimensional array makes it easier to take advantage of software libraries that apply advanced mathematical operations to the data—including
libraries that can distribute the processing across multiple processors for scalability. See also vector, linear algebra
mean
The average value, although technically that is known as the “arithmetic mean.” (Other means include the geometric and harmonic means.) See also median, mode
Mean Absolute Error
Also, MAE. The average error of all predicted values when compared with observed values. See also Mean Squared Error, Root Mean Squared Error
Mean Squared Error
Also, MSE. The average of the squares of all the errors found when comparing predicted values with observed values. Squaring them makes the bigger errors count for more, making Mean Squared Error
more popular than Mean Absolute Error when quantifying the success of a set of predictions. See also Mean Absolute Error, Root Mean Squared Error
median
When values are sorted, the value in the middle, or the average of the two in the middle if there are an even number of values. See also mean, mode
mode
“The value that occurs most often in a sample of data. Like the median, the mode cannot be directly calculated”^[stanton] although it’s easy enough to find with a little scripting. For people who
work with statistics, “mode” can also mean “data type”—for example, whether a value is an integer, a real number, or a date. See also mean, median, scripting
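As the entry notes, a little scripting finds these values easily; Python's standard library computes the mean, median, and mode directly (the data set is invented).

```python
import statistics

data = [1, 2, 2, 3, 4, 7, 9]
statistics.mean(data)    # arithmetic mean: 4.0 (sum of 28 over 7 values)
statistics.median(data)  # middle of the sorted values: 3
statistics.mode(data)    # most frequent value: 2
```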
model
“A specification of a mathematical (or probabilistic) relationship that exists between different variables.”^[grus] Because “modeling” can mean so many things, the term “statistical modeling” is
often used to more accurately describe the kind of modeling that data scientists do.
Monte Carlo method
In general, the use of randomly generated numbers as part of an algorithm. Its use with Markov Chains is so popular that people usually refer to the combination with the acronym MCMC. See also
Markov Chain
moving average
“The mean (or average) of time series data (observations equally spaced in time, such as per hour or per day) from several consecutive periods is called the moving average. It is called moving
because the average is continually recomputed as new time series data becomes available, and it progresses by dropping the earliest value and adding the most recent.”^[parsian] See also mean,
time series data
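The "drop the earliest value, add the most recent" behavior described above maps naturally onto a bounded queue; a minimal sketch with invented values:

```python
from collections import deque

def moving_averages(values, window):
    """Yield the mean of each full window as it slides along the series."""
    buf = deque(maxlen=window)
    for v in values:
        buf.append(v)  # a deque with maxlen drops the oldest value automatically
        if len(buf) == window:
            yield sum(buf) / window

averages = list(moving_averages([10, 20, 30, 40, 50], window=3))  # [20.0, 30.0, 40.0]
```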
n-gram
The analysis of sequences of n items (typically, words in natural language) to look for patterns. For example, trigram analysis examines three-word phrases in the input to look for patterns such
as which pairs of words appear most often in the groups of three. The value of n can be something other than three, depending on your needs. This helps to construct statistical models of
documents (for example, when automatically classifying them) and to find positive or negative terms associated with a product name. See also computational linguistics, classification
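Extracting the n-grams themselves is a one-liner; the sentence below is an invented example.

```python
def ngrams(words, n):
    """Return every run of n consecutive items from a list."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

trigrams = ngrams("the quick brown fox jumps".split(), 3)
# [('the', 'quick', 'brown'), ('quick', 'brown', 'fox'), ('brown', 'fox', 'jumps')]
```

Counting how often each tuple appears across a corpus (for example with collections.Counter) is then the starting point for the statistical models the entry describes.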
naive Bayes classifier
“A collection of classification algorithms based on Bayes Theorem. It is not a single algorithm but a family of algorithms that all share a common principle, that every feature being classified
is independent of the value of any other feature. So for example, a fruit may be considered to be an apple if it is red, round, and about 3” in diameter. A Naive Bayes classifier considers each
of these ‘features’ (red, round, 3” in diameter) to contribute independently to the probability that the fruit is an apple, regardless of any correlations between features. Features, however,
aren’t always independent which is often seen as a shortcoming of the Naive Bayes algorithm and this is why it’s labeled ‘naive’.”^[aylien] This naiveté makes it much easier to develop
implementations of these algorithms that scale way up. See also Bayes' Theorem, classification
neural network
Also, neural net or artificial neural network to distinguish it from the brain, upon which this algorithm is modeled. “A robust function that takes an arbitrary set of inputs and fits it to an
arbitrary set of outputs that are binary... In practice, Neural Networks are used in deep learning research to match images to features and much more. What makes Neural Networks special is their
use of a hidden layer of weighted functions called neurons, with which you can effectively build a network that maps a lot of other functions. Without a hidden layer of functions, Neural Networks
would be just a set of simple weighted functions.”^[kirk] See also deep learning, backpropagation, perceptron
normal distribution
Also, Gaussian distribution. (Carl Friedrich Gauss was an early nineteenth-century German mathematician.) A probability distribution which, when graphed, is a symmetrical bell curve with the mean
value at the center. The standard deviation value affects the height and width of the graph. See also mean, probability distribution, standard deviation, binomial distribution, standard normal distribution
NoSQL
A database management system that uses any of several alternatives to the relational, table-oriented model used by SQL databases. While this term originally meant “not SQL,” it has come to mean
something closer to “not only SQL” because the specialized nature of NoSQL database management systems often has them playing specific roles in a larger system that may also include SQL and
additional NoSQL systems. See also SQL
null hypothesis
If your proposed model for a data set says that the value of x is affecting the value of y, then the null hypothesis—the model you're comparing your proposed model with to check whether x really
is affecting y—says that the observations are all based on chance and that there is no effect. “The smaller the P-value computed from the sample data, the stronger the evidence is against the
null hypothesis.”^[shin] See also P value
objective function
“When you want to get as much (or as little) of something as possible, and the way you’ll get it is by changing the values of other quantities, you have an optimization problem...To solve an
optimization problem, you need to combine your decision variables, constraints, and the thing you want to maximize together into an objective function. The objective is the thing you want to
maximize or minimize, and you use the objective function to find the optimum result.”^[milton] See also gradient descent
outlier
“Extreme values that might be errors in measurement and recording, or might be accurate reports of rare events.”^[downey] See also overfitting
overfitting
A model of training data that, by taking too many of the data's quirks and outliers into account, is overly complicated and will not be as useful as it could be to find patterns in test data. See
also outlier, cross-validation
P value
Also, p-value. “The probability, under the assumption of no effect or no difference (the null hypothesis), of obtaining a result equal to or more extreme than what was actually observed.”^
[goodman] “It’s a measure of how surprised you should be if there is no actual difference between the groups, but you got data suggesting there is. A bigger difference, or one backed up by more
data, suggests more surprise and a smaller p value...The p value is a measure of surprise, not a measure of the size of the effect.”^[reinhart] A lower p value means that your results are more
statistically significant. See also null hypothesis
PageRank
An algorithm that determines the importance of something, typically to rank it in a list of search results. “PageRank works by counting the number and quality of links to a page to determine a
rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.”^[googlearchive] PageRank is not
named for the pages that it ranks but for its inventor, Google co-founder and CEO Larry Page.
Pandas
A Python library for data manipulation popular with data scientists. See also Python
perceptron
“Pretty much the simplest neural network is the perceptron, which approximates a single neuron with n binary inputs. It computes a weighted sum of its inputs and ‘fires’ if that weighted sum is
zero or greater.”^[grus] See also neural network
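The quoted description maps to very little code. In the sketch below, appending a constant 1 to the inputs lets one weight act as a bias; the AND-gate weights are a standard illustrative choice, not from the text.

```python
def perceptron(weights, inputs):
    """Fire (return 1) if the weighted sum of the inputs is zero or greater."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= 0 else 0

# Weights [2, 2, -3], with a constant 1 appended to the inputs, implement an AND gate.
and_weights = [2, 2, -3]
perceptron(and_weights, [1, 1, 1])  # 1: both inputs on, weighted sum is 1
perceptron(and_weights, [1, 0, 1])  # 0: weighted sum is -1
```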
Perl
An older scripting language with roots in pre-Linux UNIX systems. Perl has always been popular for text processing, especially data cleanup and enhancement tasks. See also scripting, data wrangling
pivot table
“Pivot tables quickly summarize long lists of data, without requiring you to write a single formula or copy a single cell. But the most notable feature of pivot tables is that you can arrange
them dynamically. Say you create a pivot table summary using raw census data. With the drag of a mouse, you can easily rearrange the pivot table so that it summarizes the data based on gender or
age groupings or geographic location. The process of rearranging your table is known as pivoting your data: you're turning the same information around to examine it from different angles.”^
Poisson distribution
A distribution of independent events, usually over a period of time or space, used to help predict the probability of an event. Like the binomial distribution, this is a discrete distribution.
Named for early 19th century French mathematician Siméon Denis Poisson. See also spatiotemporal data, discrete variable, binomial distribution
posterior distribution
In Bayesian inference, the revised probability distribution for an unknown quantity after a prior distribution has been updated with observed data. See also prior distribution, Bayes' Theorem
predictive analytics
The analysis of data to predict future events, typically to aid in business planning. This incorporates predictive modeling and other techniques. Machine learning might be considered a set of
algorithms to help implement predictive analytics. The more business-oriented spin of “predictive analytics” makes it a popular buzz phrase in marketing literature. See also predictive modeling,
machine learning, SPSS
predictive modeling
The development of statistical models to predict future events. See also predictive analytics, model
principal component analysis
“This algorithm simply looks at the direction with the most variance and then determines that as the first principal component. This is very similar to how regression works in that it determines
the best direction to map data to.”^[kirk] See also regression
prior distribution
“In Bayesian inference, we assume that the unknown quantity to be estimated has many plausible values modeled by what's called a prior distribution. Bayesian inference is then using data (that is
considered as unchanging) to build a tighter posterior distribution for the unknown quantity.”^[zumel] See also Bayes' Theorem
probability distribution
“A probability distribution for a discrete random variable is a listing of all possible distinct outcomes and their probabilities of occurring. Because all possible outcomes are listed, the sum
of the probabilities must add to 1.0.”^[levine] See also discrete variable
Python
A programming language available since 1994 that is popular with people doing data science. Python is noted for ease of use among beginners and great power when used by advanced users, especially
when taking advantage of specialized libraries such as those designed for machine learning and graph generation. See also scripting, Pandas
quantile, quartile
When you divide a set of sorted values into groups that each have the same number of values (for example, if you divide the values into two groups at the median), each group is known as a
quantile. If there are four groups, we call them quartiles, which is a common way to divide values for discussion and analysis purposes; if there are five, we call them quintiles, and so forth.
See also median
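Python's statistics.quantiles (available since Python 3.8) returns the n-1 cut points between the groups; the data set below is invented, and the default "exclusive" method interpolates between sorted values.

```python
import statistics

data = [15, 20, 35, 40, 50, 55, 60, 70]
cuts = statistics.quantiles(data, n=4)  # three cut points dividing the data into quartiles
# [23.75, 45.0, 58.75]
```

The middle cut point is the median, matching the two-group case described in the entry.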
random forest
An algorithm used for regression or classification that uses a collection of tree data structures. “To classify a new object from an input vector, put the input vector down each of the trees in
the forest. Each tree gives a classification, and we say the tree ‘votes’ for that class. The forest chooses the classification having the most votes (over all the trees in the forest).”^
[breiman] The term “random forest” is actually trademarked by its authors. See also classification, vector, decision trees
regression
“...the more general problem of fitting any kind of model to any kind of data. This use of the term 'regression' is a historical accident; it is only indirectly related to the original meaning of
the word.”^[downey] See also linear regression, logistic regression, principal component analysis
reinforcement learning
A class of machine learning algorithms in which the process is not given specific goals to meet but, as it makes decisions, is instead given indications of whether it’s doing well or not. For
example, an algorithm for learning to play a video game knows that if its score just went up, it must have done something right. See also supervised learning, unsupervised learning
Root Mean Squared Error
Also, RMSE. The square root of the Mean Squared Error. This is more popular than Mean Squared Error because taking the square root of a figure built from the squares of the observation value
errors gives a number that’s easier to understand in the units used to measure the original observations. See also Mean Absolute Error, Mean Squared Error
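The three error measures (MAE, MSE, RMSE) differ only in how each individual error contributes. A quick sketch with invented values:

```python
import math

actual    = [3, 5, 2, 7]
predicted = [2, 5, 4, 8]   # errors: 1, 0, 2, 1

mae  = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)    # 1.0
mse  = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)  # 1.5
rmse = math.sqrt(mse)  # about 1.22, back in the original units
```

Squaring makes the single error of 2 count four times as much as each error of 1, which is why MSE and RMSE penalize large errors more heavily than MAE does.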
Ruby
A scripting language that first appeared in 1996. Ruby is popular in the data science community, but not as popular as Python, which has more specialized libraries available for data science
tasks. See also scripting, Python
S curve
Imagine a graph showing, for each month since smartphones originally became available, how many people in the US bought their first one. The line would rise slowly at first, when only the early
adopters got them, then quickly as these phones became more popular, and then level off again once nearly everyone had one. This graph's line would form a stretched-out “S” shape. The “S curve”
applies to many other phenomena and is often mentioned when someone predicts that a rising value will eventually level off.
scalar
“Designating or of a quantity that has magnitude but no direction in space, as volume or temperature — n. a scalar quantity: distinguished from vector”^[websters] See also vector
scripting
Generally, the use of a computer language where your program, or script, can be run directly with no need to first compile it to binary code as with languages such as Java and C. Scripting
languages often have simpler syntax than compiled languages, so the process of writing, running, and tweaking scripts can go faster. See also Python, Perl, Ruby, shell
serial correlation
“As prices vary from day to day, you might expect to see patterns. If the price is high on Monday, you might expect it to be high for a few more days; and if it’s low, you might expect it to stay
low. A pattern like this is called serial correlation, because each value is correlated with the next one in the series. To compute serial correlation, we can shift the time series by an interval
called a lag, and then compute the correlation of the shifted series with the original... 'Autocorrelation' is another name for serial correlation, used more often when the lag is not 1.”^
[downey] See also correlation
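The quoted recipe (shift the series by a lag, then correlate with the original) is easy to follow directly; the Pearson correlation helper and the price series below are invented for illustration.

```python
import math

def correlation(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def serial_correlation(series, lag=1):
    """Correlate the series shifted by `lag` with the original series."""
    return correlation(series[lag:], series[:-lag])

prices = [10, 11, 12, 11, 10, 9, 10, 11]
r = serial_correlation(prices, lag=1)  # positive: high days tend to follow high days
```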
shell
When you use a computer’s operating system from the command line, you're using its shell. Along with scripting languages such as Perl and Python, Linux-based shell tools (which are either
included with or easily available for Mac and Windows machines) such as grep, diff, split, comm, head, and tail are popular for data wrangling. A series of shell commands stored in a file that
lets you execute the series by entering the file's name is known as a shell script. See also data wrangling, scripting, Perl, Python
spatiotemporal data
Time series data that also includes geographic identifiers such as latitude-longitude pairs. See also time series data
SPSS
A commercial statistical software package, or according to the product home page, “predictive analytics software.”^[spss] The product has always been popular in the social sciences. The company,
founded in 1968, was acquired by IBM in 2009. See also predictive analytics
standard deviation
The square root of the variance, and a common way to indicate just how different a particular measurement is from the mean. “An observation more than three standard deviations away from the mean
can be considered quite rare, in most applications.”^[zumel] Statistical software packages offer automated ways to calculate the standard deviation. See also variance
standard normal distribution
A normal distribution with a mean of 0 and a standard deviation of 1. When graphed, it’s a bell-shaped curve centered around the y axis, where x=0. See also normal distribution, mean, standard deviation
standardized score
Also, standard score, normal score, z-score. “Transforms a raw score into units of standard deviation above or below the mean. This translates the scores so they can be evaluated in reference to
the standard normal distribution.”^[boslaugh] Translating two different test sets to use standardized scores makes them easier to compare. See also standard deviation, mean, standard normal distribution
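A minimal sketch of the transformation; the raw scores are invented, and the use of the sample standard deviation (rather than the population form) is an illustrative choice.

```python
import statistics

def standardized_scores(raw):
    """Rescale each score to standard deviations above or below the mean."""
    mean, stdev = statistics.mean(raw), statistics.stdev(raw)  # sample stdev
    return [(x - mean) / stdev for x in raw]

z = standardized_scores([2, 4, 4, 4, 5, 5, 7, 9])
# z[4] is 0.0: a raw score of 5 sits exactly at the mean of 5
```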
Stata
A commercial statistical software package, not to be confused with strata. See also strata, stratified sampling
strata, stratified sampling
“Divide the population units into homogeneous groups (strata) and draw a simple random sample from each group.”^[gonick] Strata also refers to an O'Reilly conference on big data, data science,
and related technologies. See also Stata
supervised learning
A type of machine learning algorithm in which a system is taught to classify input into specific, known classes. The classic example is sorting email into spam versus ham. See also unsupervised
learning, reinforcement learning, machine learning
support vector machine
Also, SVM. Imagine that you want to write a function that draws a line on a two-dimensional x-y graph that separates two different kinds of points—that is, it classifies them into two
categories—but you can't, because on that graph they're too mixed together. Now imagine that the points are in three dimensions, and you can classify them by writing a function that describes a
plane that can be positioned at any angle and position in those three dimensions, giving you more opportunities to find a working mathematical classifier. This plane that is one dimension less
than the space around it, such as a two-dimensional plane in a three-dimensional space or a one-dimensional line on a two-dimensional space, is known as a hyperplane. A support vector machine is
a supervised learning classification tool that seeks a dividing hyperplane for any number of dimensions. (Keep in mind that “dimensions” don't have to be x, y, and z position coordinates, but any
features you choose to drive the categorization.) SVMs have also been used for regression tasks as well as categorization tasks. See also supervised learning, feature
t-distribution
Also, Student’s t distribution. A variation on the normal distribution that accounts for the fact that you’re only using a sampling of all the possible values instead of all of them. Invented by
Guinness Brewery statistician William Gosset (publishing under the pseudonym “Student”) in the early 20th century for his quality assurance work there. See also normal distribution
time series data
“Strictly speaking, a time series is a sequence of measurements of some quantity taken at different times, often but not necessarily at equally spaced intervals.”^[boslaugh] So, time series data
will have measurements of observations (for example, air pressure or stock prices) accompanied by date-time stamps. See also spatiotemporal data, moving average
UIMA
The “Unstructured Information Management Architecture” was developed at IBM as a framework to analyze unstructured information, especially natural language. OASIS UIMA is a specification that
standardizes this framework and Apache UIMA is an open-source implementation of it. The framework lets you pipeline other tools designed to be plugged into it. See also computational linguistics,
unsupervised learning
A class of machine learning algorithms designed to identify groupings of data without knowing in advance what the groups will be. See also supervised learning, reinforcement learning, clustering
variance
“How much a list of numbers varies from the mean (average) value. It is frequently used in statistics to measure how large the differences are in a set of numbers. It is calculated by averaging
the squared difference of every number from the mean.”^[segaran] Any statistical package will offer an automated way to calculate this. See also mean, bias, standard deviation
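The quoted calculation maps directly to code; the data set is invented, and dividing by the full count (the population form, as in the quote) rather than by one less than the count is a deliberate choice.

```python
import math

def variance(values):
    """Average of each value's squared difference from the mean (population form)."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

data = [2, 4, 4, 4, 5, 5, 7, 9]
variance(data)             # 4.0 (mean is 5; squared differences sum to 32 over 8 values)
math.sqrt(variance(data))  # standard deviation: 2.0
```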
vector
Webster’s first mathematical definition is “a mathematical expression denoting a combination of magnitude and direction,” which you may remember from geometry class, but their third definition is
closer to how data scientists use the term: “an ordered set of real numbers, each denoting a distance on a coordinate axis”^[websters]. These numbers may represent a series of details about a
single person, movie, product, or whatever entity is being modeled. This mathematical representation of the set of values makes it easier to take advantage of software libraries that apply
advanced mathematical operations to the data. See also matrix, linear algebra
vector space
A collection of vectors—for example, a matrix. See also vector, matrix, linear algebra
Amazon Machine Learning Developer Guide, Training ML Models, accessed 2018-04-15.
Carl Anderson, Creating a Data-Driven Organization (Sebastopol: O'Reilly Media, 2015).
Naive Bayes for Dummies; A Simple Explanation, accessed 2015-08-21.
Sarah Boslaugh, Statistics in a Nutshell, 2nd Edition (Sebastopol: O'Reilly Media, 2012).
David M. Bourg and Glenn Seeman, AI for Game Developers (Sebastopol: O'Reilly Media, 2004).
Pedro Domingos, A Few Useful Things to Know about Machine Learning, accessed 2015-09-05.
Leo Breiman and Adele Cutler, Random Forests, accessed 2015-08-22.
Allen B. Downey Think Stats, 2nd Edition (Sebastopol: O'Reilly Media, 2014).
Larry Gonick and Woolcott Smith, The Cartoon Guide to Statistics
S. N. Goodman, Toward evidence-based medical statistics. 1: The P value fallacy. Annals of Internal Medicine, 130:995–1004, 1999. (quoted in Reinhart)
Matthew Kirk, Thoughtful Machine Learning (Sebastopol: O'Reilly Media, 2014).
Matthew MacDonald, Excel 2013: The Missing Manual (Sebastopol: O'Reilly Media, 2013).
Michael Milton, Head First Data Analysis (Sebastopol: O'Reilly Media, 2009).
DJ Patil, A Memo to the American People from U.S. Chief Data Scientist Dr. DJ Patil, 2015-02-15
Alex Reinhart, "An introduction to data analysis" in Statistics Done Wrong: The Woefully Complete Guide (San Francisco: No Starch Press, 2015)
Facts about Google and Competition, archive.org version accessed 2015-09-05.
Joel Grus, Data Science from Scratch: First Principles with Python (Sebastopol: O'Reilly Media, 2015).
David M. Levine, Statistics for Six Sigma Green Belts with Minitab and JMP (Upper Saddle River: Pearson, 2006).
Mahmoud Parsian, Data Algorithms (Sebastopol: O'Reilly Media, 2015).
Toby Segaran, Programming Collective Intelligence (Sebastopol: O'Reilly Media, 2015).
Shin Takahashi, The Manga Guide to Statistics (Sebastopol: O'Reilly Media, 2008).
SPSS Software, accessed 2015-08-22.
J.M. Stanton, Introduction to Data Science, Third Edition (iTunes Open Source eBook, 2012), available at https://itunes.apple.com/us/book/introduction-to-data-science/id529088127?mt=11.
Victoria Neufeldt, Editor in Chief, Webster's New World College Dictionary, Third Edition (New York: Macmillan, 1997).
Wikipedia: Gradient boosting, accessed 2016-02-28.
Wikipedia: Latent variable, accessed 2016-02-28.
Alice Zheng, Striking parallels between mathematics and software engineering, accessed 2015-09-11.
Nina Zumel and John Mount, Practical Data Science with R (Shelter Island: Manning Publications, 2014). | {"url":"https://datascienceglossary.org/","timestamp":"2024-11-07T22:32:19Z","content_type":"text/html","content_length":"110062","record_id":"<urn:uuid:f8270596-7a54-4e92-8b07-e78de4029000>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00797.warc.gz"} |
A circle passes through the points (−1,1),(0,6) and (5,5). The ... | Filo
A circle passes through the points (−1, 1), (0, 6), and (5, 5). The point(s) on this circle, the tangent(s) at which is/are parallel to the straight line joining the origin to its centre is/are
Equation of tangent to circle
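The tutor's worked solution is not reproduced on this page, but the stated answer can be checked numerically; here is a hedged sketch (not the tutor's method) that takes the three points from the page title, finds the circle's centre by solving the perpendicular-bisector equations, and then steps one radius in the direction perpendicular to the line joining the origin to the centre:

```python
from fractions import Fraction as F

def circumcenter(p, q, r):
    # |C-P|^2 = |C-Q|^2 = |C-R|^2 reduces to a 2x2 linear system in (x, y).
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    a1, b1, c1 = 2*(x2 - x1), 2*(y2 - y1), x2**2 + y2**2 - x1**2 - y1**2
    a2, b2, c2 = 2*(x3 - x1), 2*(y3 - y1), x3**2 + y3**2 - x1**2 - y1**2
    d = a1*b2 - a2*b1
    return F(c1*b2 - c2*b1, d), F(a1*c2 - a2*c1, d)

cx, cy = circumcenter((-1, 1), (0, 6), (5, 5))   # centre (2, 3)
r2 = (F(-1) - cx)**2 + (F(1) - cy)**2            # radius^2 = 13

# A tangent is parallel to the line O->C exactly when the radius at the
# point of tangency is perpendicular to OC. A vector perpendicular to
# OC = (2, 3) is (3, -2), which here already has length sqrt(13) = r,
# so the two points are C +/- (3, -2):
p1 = (cx + cy, cy - cx)   # (5, 1)
p2 = (cx - cy, cy + cx)   # (-1, 5)
print(p1, p2)
```

The printed points (5, 1) and (−1, 5) lie on the circle, and the tangent at each has slope 3/2, the slope of the line from the origin to the centre.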
Question Text A circle passes through the points (−1, 1), (0, 6), and (5, 5). The point(s) on this circle, the tangent(s) at which is/are parallel to the straight line joining the origin to its centre is/are
Topic Conic Sections
Subject Mathematics
Class Class 11
Answer Type Text solution:1
Upvotes 52 | {"url":"https://askfilo.com/math-question-answers/a-circle-passes-through-the-points-1106-and-55-the-points-on-this-circle-the","timestamp":"2024-11-05T00:30:54Z","content_type":"text/html","content_length":"539385","record_id":"<urn:uuid:7cc79f15-6386-4b0a-a07f-3bfc668887ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00751.warc.gz"} |
CGal/CHANGELOG and CGal Releases | LibHunt
Avg Release Cycle
96 days
Latest Release
1451 days ago
• November 18, 2020
The CGAL Open Source Project is pleased to announce the release 5.2 Beta 1 of CGAL, the Computational Geometry Algorithms Library.
CGAL version 5.2 Beta 1 is a public testing release. It should provide a solid ground to report bugs that need to be tackled before the release of the final version of CGAL 5.2 in December.
Fixes, improvements, and various small features have been added since CGAL 5.1. See https://www.cgal.org/2020/11/18/cgal52-beta1/ for a complete list of changes.
• September 08, 2020
The CGAL Open Source Project is pleased to announce the release 5.1 of CGAL, the Computational Geometry Algorithms Library.
Besides fixes and general enhancements to existing packages, the following has changed since CGAL 5.0:
This package implements a tetrahedral isotropic remeshing algorithm,
that improves the quality of tetrahedra in terms of dihedral angles,
while targeting a given edge length.
See also the associated blog entry.
This package enables the computation of some topological invariants of surfaces, such as:
- test if two (closed) curves on a combinatorial surface are homotopic. Users can choose
between free homotopy and homotopy with fixed endpoints;
- test if a curve is contractible;
- compute shortest non-contractible cycles on a surface, with or without weights on edges.
See also the associated blog entry.
This package implements an optimization algorithm that aims to construct a close approximation
of the optimal bounding box of a mesh or a point set, which is defined as the smallest
(in terms of volume) bounding box that contains a given mesh or point set.
See also the associated blog entry.
- The CGAL_Core library no longer requires Boost.Thread, even if the g++ compiler is used.
- The minimal supported version of Boost is now 1.66.0.
Two new, detailed tutorials have been added:
- Surface Reconstruction from Point Clouds,
which goes over a typical full processing pipeline in a CGAL environment.
- Geographic Information Systems (GIS),
which demonstrates usage of CGAL data structures and algorithms in the context of a typical GIS application.
Both tutorials provide complete code.
- Added wrapper functions for registration, using the Super4PCS and ICP algorithms implemented in the third-party libraries OpenGR and libpointmatcher.
- Added the function CGAL::alpha_expansion_graphcut(), which regularizes a multi-label partition over a user-defined graph.
- Added the function CGAL::regularize_face_selection_borders(), which uses this alpha expansion graphcut to regularize the borders of selected faces on a triangle mesh.
See https://www.cgal.org/2020/09/08/cgal51/ for a complete list of changes.
• July 28, 2020
The CGAL Open Source Project is pleased to announce the release 5.1 Beta 2 of CGAL, the Computational Geometry Algorithms Library.
CGAL version 5.1 Beta 2 is a public testing release. It should provide a solid ground to report bugs that need to be tackled before the release of the final version of CGAL 5.1 in September.
Besides fixes and general enhancements to existing packages, the following has changed since CGAL 5.0:
- This package implements a tetrahedral isotropic remeshing algorithm,
that improves the quality of tetrahedra in terms of dihedral angles,
while targeting a given edge length.
This package enables the computation of some topological invariants of surfaces, such as:
- test if two (closed) curves on a combinatorial surface are homotopic. Users can choose
between free homotopy and homotopy with fixed endpoints;
- test if a curve is contractible;
- compute shortest non-contractible cycles on a surface, with or without weights on edges.
See also the associated blog entry.
This package implements an optimization algorithm that aims to construct a close approximation
of the optimal bounding box of a mesh or a point set, which is defined as the smallest
(in terms of volume) bounding box that contains a given mesh or point set.
See also the associated blog entry.
- The CGAL_Core library no longer requires Boost.Thread, even if the g++ compiler is used.
Two new, detailed tutorials have been added:
- Surface Reconstruction from Point Clouds,
which goes over a typical full processing pipeline in a CGAL environment.
- Geographic Information Systems (GIS),
which demonstrates usage of CGAL data structures and algorithms in the context of a typical GIS application.
Both tutorials provide complete code.
- Added wrapper functions for registration, using the Super4PCS and ICP algorithms implemented in the third-party libraries OpenGR and libpointmatcher.
- Added the function CGAL::alpha_expansion_graphcut(), which regularizes a multi-label partition over a user-defined graph.
- Added the function CGAL::regularize_face_selection_borders(), which uses this alpha expansion graphcut to regularize the borders of selected faces on a triangle mesh.
See https://www.cgal.org/2020/07/28/cgal51-beta2/ for a complete list of changes.
• June 09, 2020
The CGAL Open Source Project is pleased to announce the release 5.1 Beta 1 of CGAL, the Computational Geometry Algorithms Library.
CGAL version 5.1 Beta 1 is a public testing release. It should provide a solid ground to report bugs that need to be tackled before the release of the final version of CGAL 5.1 in July.
Besides fixes and general enhancements to existing packages, the following has changed since CGAL 5.0:
- This package implements a tetrahedral isotropic remeshing algorithm,
that improves the quality of tetrahedra in terms of dihedral angles,
while targeting a given edge length.
This package enables the computation of some topological invariants of surfaces, such as:
- test if two (closed) curves on a combinatorial surface are homotopic. Users can choose
between free homotopy and homotopy with fixed endpoints;
- test if a curve is contractible;
- compute shortest non-contractible cycles on a surface, with or without weights on edges.
See also the associated blog entry.
This package implements an optimization algorithm that aims to construct a close approximation
of the optimal bounding box of a mesh or a point set, which is defined as the smallest
(in terms of volume) bounding box that contains a given mesh or point set.
See also the associated blog entry.
Two new, detailed tutorials have been added:
- Surface Reconstruction from Point Clouds,
which goes over a typical full processing pipeline in a CGAL environment.
- Geographic Information Systems (GIS),
which demonstrates usage of CGAL data structures and algorithms in the context of a typical GIS application.
Both tutorials provide complete code.
- Added wrapper functions for registration, using the Super4PCS and ICP algorithms implemented in the third-party libraries OpenGR and libpointmatcher.
- Added the function CGAL::alpha_expansion_graphcut(), which regularizes a multi-label partition over a user-defined graph.
- Added the function CGAL::regularize_face_selection_borders(), which uses this alpha expansion graphcut to regularize the borders of selected faces on a triangle mesh.
See https://www.cgal.org/2020/06/09/cgal51-beta1 for a complete list of changes.
• January 24, 2020
CGAL-5.0.1 is a bug-fix release. In particular, it fixes a performance regression in the 3D Triangulations, when the Parallel_tag is used.
See on GitHub the list of bugs that were solved since CGAL-5.0.1.
• November 08, 2019
The CGAL Open Source Project is pleased to announce the release 5.0
of CGAL, the Computational Geometry Algorithms Library.
Besides fixes and general enhancements to existing packages, the
following has changed since CGAL 4.14.2:
General changes
- CGAL 5.0 is the first release of CGAL that requires a C++ compiler
with the support of C++14 or later. The new list of supported
compilers is:
  - Visual C++ 14.0 (from Visual Studio 2015 Update 3) or later,
  - Gnu g++ 6.3 or later (on Linux or MacOS),
  - LLVM Clang version 8.0 or later (on Linux or MacOS), and
  - Apple Clang compiler versions 7.0.2 and 10.0.1 (on MacOS).
- Since CGAL 4.9, CGAL can be used as a header-only library, with
zero dependencies. Since CGAL 5.0, that is now the default, unless
specified differently in the (optional) CMake configuration.
- The section "Getting Started with CGAL" of the documentation has
been updated and reorganized.
- The minimal version of Boost is now 1.57.0.
- This package provides a method for piecewise planar object reconstruction from point clouds.
The method takes as input an unordered point set sampled from a piecewise planar object
and outputs a compact and watertight surface mesh interpolating the input point set.
The method assumes that all necessary major planes are provided (or can be extracted from
the input point set using the shape detection method described in Point Set Shape Detection,
or any other alternative methods). The method can handle arbitrary piecewise planar objects,
is capable of recovering sharp features, and is robust to noise and outliers. See also
the associated blog entry.
- Breaking change: The concept ShapeDetectionTraits has been renamed to EfficientRANSACTraits.
- Breaking change: The Shape_detection_3 namespace has been renamed to Shape_detection.
- Added a new, generic implementation of region growing. This enables for example applying region growing to inputs such as 2D and 3D point sets,
or models of the FaceGraph concept. Learn more about this new algorithm with this blog entry.
- A new exact kernel, Epeck_d, is now available.
2D and 3D Triangulations
Breaking change: Several deprecated functions and classes have been
removed. See the full list of breaking changes in the release
notes.
Breaking change: The constructor and the insert() function of
CGAL::Triangulation_2 or CGAL::Triangulation_3 which take a range
of points as argument are now guaranteed to insert the points
following the order of InputIterator. Note that this change only
affects the base class CGAL::Triangulation_[23] and not any
derived class, such as CGAL::Delaunay_triangulation_[23].
- Introduced a wide range of new functions
related to location of queries on a triangle mesh,
such as CGAL::Polygon_mesh_processing::locate(Point, Mesh).
The location of a point on a triangle mesh is expressed as the pair of a face and the barycentric
coordinates of the point in this face, enabling robust manipulation of locations
(for example, intersections of two 3D segments living within the same face).
- Added the mesh smoothing function smooth_mesh(),
which can be used to improve the quality of triangle elements based on various geometric characteristics.
- Added the shape smoothing function smooth_shape(),
which can be used to smooth the surface of a triangle mesh, using the mean curvature flow to perform noise removal.
(See also the new entry in the User Manual.)
- Breaking change: the API using iterators and overloads for optional parameters (deprecated since
CGAL 4.12) has been removed. The current (and now only) API uses ranges and Named Parameters.
See https://www.cgal.org/2019/11/08/cgal50/ for a complete list of changes. | {"url":"https://cpp.libhunt.com/cgal-changelog","timestamp":"2024-11-08T11:28:38Z","content_type":"text/html","content_length":"52325","record_id":"<urn:uuid:a8a4911c-a411-43c1-9251-8d8b89100582>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00889.warc.gz"}
Universe Expansion Calculator | Create your own universe and see how it evolves in time
Universe Expansion Calculator
Created by Álvaro Díez
Reviewed by Steven Wooding and Jack Bowater
Last updated: Jun 05, 2023
There's nothing that unites humanity more than the awe we feel when we look up at the stars and admire the beauty of our amazing Universe. With NASA's budget rising to over $22 billion in 2020, and
the space industry nearing the trillion-dollar mark, it is surprising that most of us don't know much about how the universe was formed, what it's made of, and how we even got here.
Enter the Universe Expansion calculator. A simple tool that helps you find all these answers in a fun and interactive way. Create your own universe, see how each component affects its fate... all
while learning the scientific reason behind why our universe is like it is.
Did you know that without dark energy, our universe would be much smaller? It might even collapse back into a single point and sort of "reincarnate" itself as a second Big Bang creating a new
universe... Is this enough to pique your interest? Come in and answer your questions about the universe!
What is the universe? / How to describe the universe with maths
The universe is everything. It is made of many different components, such as stars, light, x-rays, planets, gas clouds, and black holes... Which makes it difficult to be modeled mathematically.
Astronomers have found ways to divide the constituents of the universe into 4 major groups. These four major components are:
• Matter;
• Radiation;
• Dark energy; and
• Spatial curvature.
Additionally, there is another parameter that measures the expansion rate of the universe: the Hubble constant ($H_0$).
Each component of the universe affects its evolution in a different manner, described by Friedmann's equations – a set of formulas that explain most of the phenomena we see in the universe. They
allow us to observe the effects of each component separately and get a better picture of what their role is in the evolution of the universe.
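One way to see Friedmann's equations in action is a short numerical sketch (this is an illustration, not the calculator's actual code): integrate the first Friedmann equation, H(a)² = H0² (Ω_r a⁻⁴ + Ω_m a⁻³ + Ω_k a⁻² + Ω_Λ), to get the age of the universe from the ΛCDM parameters quoted later on this page.

```python
# Sketch: age of a universe from the first Friedmann equation,
#   H(a)^2 = H0^2 * (Om_r*a**-4 + Om_m*a**-3 + Om_k*a**-2 + Om_L),
# via t = integral from 0 to 1 of da / (a * H(a)).
SEC_PER_GYR = 3.156e16
KM_PER_MPC = 3.0857e19

H0 = 67.7 / KM_PER_MPC                  # Hubble constant in 1/s
Om_r, Om_m, Om_L = 8.24e-5, 0.3089, 0.691
Om_k = 1.0 - Om_r - Om_m - Om_L         # closure relation, essentially zero

def H(a):
    return H0 * (Om_r*a**-4 + Om_m*a**-3 + Om_k*a**-2 + Om_L) ** 0.5

# Midpoint rule; the integrand tends to 0 as a -> 0, so the sum converges.
N = 200_000
age = sum((1.0 / N) / (a * H(a)) for a in ((i + 0.5) / N for i in range(N)))
print(round(age / SEC_PER_GYR, 1))  # ≈ 13.8 (billion years)
```

Changing the density parameters and rerunning the integral is essentially what the calculator does when you build your own universe.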
Hubble constant ($H_0$)
When astronomers first saw all the galaxies around us, they discovered that all of them seemed to be moving away from us. In fact, the speed at which they "run away" from us is proportional to the
distance between us and them (our acceleration calculator can help clarify the meaning of this sentence). The only possible explanation for this is that the whole universe is expanding, like a piece
of fabric getting stretched. The first one to notice this? Edwin Hubble (and also George Lemaitre). The key to this discovery was the evidence of Doppler effect in the data from far galaxies: use our
Doppler effect calculator and our redshift calculator for more information!
This expansion of the universe was thought to be constant (now we know that its expansion is accelerated), and the value of this constant speed was, understandably, named after Dr. Hubble. Measuring
it by looking at distant galaxies yields a value of $73.4\ \mathrm{km/s}$ per $\mathrm{Mpc}$. If you look at the remnants of the Big Bang, Hubble's constant is $67.7\ \mathrm{km/s}$ per $\mathrm{Mpc}$.
We know the discrepancy is not due to experimental error, so what gives? There must be a hole in our knowledge of the universe that we must fix! There isn't anything more frustrating and yet exciting
for physicists and space lovers alike.
In this calculator, we use $67.7\ \mathrm{km/s}$ per $\mathrm{Mpc}$ but can set it to $73.4\ \mathrm{km/s}$ per $\mathrm{Mpc}$ to see what this difference means for the evolution of the universe.
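Concretely, Hubble's law says the recession speed is v = H0 · d; a toy sketch with both values (the 100 Mpc distance is just an example, not a value from this page):

```python
H0_cmb, H0_local = 67.7, 73.4   # km/s per Mpc (early- vs late-universe measurements)
d = 100.0                        # distance in Mpc, chosen for illustration

v_cmb = H0_cmb * d               # ~6770 km/s
v_local = H0_local * d           # ~7340 km/s
print(v_cmb, v_local)
```

The roughly 8% gap between the two answers is exactly the discrepancy described above.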
Effects of Dark Energy ($Ω_Λ$)
Dark Energy is more complicated to understand than the other components. Not even cosmologists know what it really is, and it only has an effect when considering very large scales.
Dark energy, identified in cosmology by the Greek letter Lambda (Λ), is responsible for the accelerated expansion of the universe we experience in our reality. Dark energy makes space expand
exponentially, so its effects will only become noticeable after billions of years.
On the other hand, nothing prevents Dark Energy from making space compress exponentially. One can easily imagine such a scenario: $Ω_Λ$ would be negative and the universe would contract and collapse
into a single point.
Effects of Matter ($Ω_\mathrm{m}$)
Matter is a more understandable component of the universe. Sort of. It is composed of two major types: (cold) dark matter and regular (or baryonic) matter. We still don't really know what dark matter
is, but we know that it represents about 90% of all matter in the universe. Fortunately, all matter behaves mostly the same.
Its effect on the evolution of the universe is almost the opposite to that of dark energy. A universe composed of mostly matter would expand very rapidly at the beginning and will slow down as years
pass. For a visual representation, check the green line of the image above.
Effects of Radiation ($Ω_\mathrm{r}$)
Similar to matter, radiation is another concept that we use in our everyday life. Think UV radiation, gamma rays, infrared... In short: light. On top of light (photons), neutrinos are also considered
to be radiation for our purposes since they have the same effects.
The way photons and neutrinos interact with the expansion of the universe is qualitatively similar to how matter does it, shown as the blue line in the image above. Radiation makes the universe
expand faster than matter does, at least at the beginning.
You can learn more about the effects of radiation in space with our radiation pressure calculator.
Curvature density ($Ω_\mathrm{k}$) and the shape of the universe
There is one last parameter we must talk about: the curvature density ($Ω_\mathrm{k}$). If $Ω_\mathrm{k} > 0$, we have a closed universe that tends to collapse again onto itself. For $Ω_\mathrm{k} <
0$, we have an open universe that tends to expand forever. And for $Ω_\mathrm{k} = 0$, we have a flat universe with no spatial tendency to expand or contract.
We have to remember that this is only a simplistic explanation, and the actual outcome of any given universe will depend on the values of each of the four parameters. However, all this knowledge
should be enough for you to be able to play around with the calculator and get a grasp on the impact that each parameter has on the evolution of the universe.
Our universe
Currently, the most accurate description of the universe we have is what it's known as the ΛCDM model. According to this model, the main components of our universe are dark energy (lambda, Λ) and
cold dark matter (CDM), with regular matter in a distant third position. The widely accepted values for each component are:
• Dark energy $Ω_Λ$: $69.1\%$ or $0.691$;
• Matter $Ω_\mathrm{m}$: $30.89\%$ or $0.3089$;
• Radiation/light $Ω_\mathrm{r}$: $0.00824\%$ or $8.24 \times 10^{-5}$;
• Curvature density $Ω_\mathrm{k}$: $0$ (flat universe); and
• Hubble constant $H_0$: $67.7\ \mathrm{km/s}$ per $\mathrm{Mpc}$.
As you can see, most of the parameters of the universe do not have units. They represent a fraction or percentage of a reference quantity, called critical density.
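These density parameters are tied together by the closure relation Ω_Λ + Ω_m + Ω_r + Ω_k = 1, which is how a calculator can fill in Ω_k automatically; a quick check with the numbers above:

```python
om_l, om_m, om_r = 0.691, 0.3089, 8.24e-5
om_k = 1.0 - (om_l + om_m + om_r)
print(om_k)  # a tiny residual, i.e. an essentially flat universe
```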
These parameters are constantly being measured with increasing precision by astronomers, so you might find different values depending on your source of information. A very renowned source is .
How to use this calculator
This calculator is very straightforward to use. Simply choose one of the predefined universes, or make one yourself, and you will immediately see a chart showing you the evolution of the selected
universe in terms of its size (y-axis) versus time since the Big Bang (in billions of years). For easy comparison, we show you the size of your selected universe as a percentage of the size of our
universe today.
You can also see the age of the universe, which is the time it would take your universe to reach the current size of our universe. Younger universes expanded much faster (radiation- or matter-
dominated), while older universes experienced a slower growth (e.g., dark-energy dominated).
Lastly, you can compare your creation to any of the predefined universes, so you can see how similar (or not) they are, and which have a faster/slower expansion. You can also take a guess at which of
the possible deaths your universe will experience, just by taking a look at the graph. If you are brave enough, share it with everyone on Twitter, and we will let you know how close you got to the
real answer!
From the birth of the universe (Big Bang) until now
We know that at some point in time, the universe was condensed into a point of infinite energy density. This is what we call the Big Bang, which is when time itself started. There was no actual
explosion, but rather an incredibly fast expansion that led to what we now know as the universe.
As the universe expanded and cooled, the forces began to distinguish from each other, and complex particles began to form. First protons and neutrons, then nuclei, atoms... This was a fast process
(on an astronomical scale) – it took the universe over 300 000 years to become cool enough for radiation to be able to travel any meaningful distance. We actually have a picture of this moment. This
information is still around today as the cosmic microwave background (CMB), which is a faint microwave signal that we detect coming from everywhere in the universe. It is the oldest piece of
information we have from the universe.
The universe underwent a relatively short phase of radiation-dominated expansion at the beginning of its life. This was quickly succeeded by a matter-driven expansion phase. Currently, however, the
expansion of the universe is dominated by dark energy.
You can see these different phases on the calculator by selecting Our Universe. The very first part of the chart is a quick expansion driven by radiation. The second phase (matter-dominated) blends
in, and it's hard to distinguish from radiation, but the expansion is noticeably slower. Finally, for ages greater than 10 billion years, one can see a subtle change from an almost linear expansion
to an accelerated (exponential) expansion due to dark energy.
The possible deaths of our universe
Now that we know how we got here, you are probably wondering what the future holds. We won't live to see the universe end, but it's fun to explore. The abundant presence of dark energy almost
entirely rules out the possibility of the universe collapsing back into a Big Crunch and potentially creating a new Big Bang in a sort of reincarnation. For the same reason, the universe's
expansion cannot slow down to a halt, leaving just two possible scenarios.
The so-called cold death of the universe is a scenario in which the universe keeps expanding faster and faster. So fast, in fact, that it overcomes the gravitational attraction between galaxies,
stars, and planets... Eventually, every object in the universe ends up alone, infinitely far from everything else: cold, silent, until it all dies out in, quite literally, empty space.
There is a less depressing option – a Big Rip could happen. In this scenario, the universe expands so fast that, at some point, it tears the fabric of space–time. What would it look like? We don't
really know, and we have no way to test it. Fortunately (or unfortunately, depending on your curiosity), none of us would be there to see it – and neither will anything currently living on planet Earth.
How to build your own universe
If you feel overwhelmed having to create your own universe, here are some practical tips and tricks:
• A large amount of dark energy will result in exponential growth at the end of the graph (after at least 10 billion years).
• A large amount of radiation will create a very rapid expansion at the beginning and might overshadow any matter or dark energy contribution.
• Matter behaves somewhat similarly to radiation, but with milder effects.
• The curvature density is calculated for you, but if you want to see it or play with it, you can do so by hitting the advanced mode button.
We have left you total freedom to choose any values for the parameters. Some combinations might result in integration errors and the calculator simply not being able to give you a result. If this
happens... Don't worry! Simply readjust your parameters until you find a working solution.
Oh, our beloved universe! An open universe that will expand faster and faster until we reach a Cold Death or a Big Rip.
It is 13.8 Billion years old. Still in the prime of its life!
Using the Millenium Falcon it would take "only" 1339 years to reach the galactic group M81 from Earth. | {"url":"https://www.omnicalculator.com/physics/universe-expansion","timestamp":"2024-11-12T17:02:16Z","content_type":"text/html","content_length":"396068","record_id":"<urn:uuid:c2e2f27f-ab35-4e14-bcf7-8b1f37a10120>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00397.warc.gz"} |
Master Equations Versus Keldysh Green's Functions for Correlated Quantum Systems Out of Equilibrium
Abstract The goal of these lecture notes is to illustrate connections between two
widely used, but often separately adopted approaches to deal with quantum systems
out of equilibrium, namely quantum master equations and nonequilibrium Green's
functions. For the paradigmatic case of the Anderson impurity model out of equilib-
rium we elaborate on these connections and map its description from one approach
to the other. At the end of this chapter, we will show how the "best of the two worlds"
can be combined to obtain a highly accurate solution of this model, which resolves
the nonequilibrium Kondo physics down to temperatures well below the Kondo scale.
As a training course, these lectures devote a large portion to an introduction to the
Lindblad quantum master equation based on standard treatments, as well as methods
to solve this equation. For nonequilibrium Green's functions, which are discussed
in the first part of the course, we only provide a summary of the most important
aspects necessary to address the topics of the present chapter. The relevant aspects
of these two topics are presented in a self-contained manner so that a background in
equilibrium many-body physics is sufficient to follow these notes.
Original language: English
Title: Out-of-Equilibrium Physics of Correlated Electron Systems
Editors: Roberta Citro, Ferdinando Mancini
Place of publication: Cham, Switzerland
Publisher: Springer International Publishing AG
Chapter: 4
Pages: 121-188
Number of pages: 68
Volume: 191
ISBN (electronic): 978-3-319-94956-7
ISBN (print): 978-3-319-94955-0
Publication status: Published - 2018
Series: Springer Series in Solid-State Sciences
Publisher: Springer International Publishing
Fields of Expertise
• Advanced Materials Science
Dive into the research topics of "Master Equations Versus Keldysh Green's Functions for Correlated Quantum Systems Out of Equilibrium". Together they form a unique fingerprint. | {"url":"https://graz.elsevierpure.com/de/publications/master-equations-versus-keldysh-greens-functions-for-correlated-q","timestamp":"2024-11-11T18:17:49Z","content_type":"text/html","content_length":"44918","record_id":"<urn:uuid:da138c3e-c4bf-4196-9837-379105a1956b>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00034.warc.gz"}
Wave pattern produced by a heat source moving with constant velocity on the top of an infinite plate
Within the framework of linear, isotropic elasticity theory the wave pattern produced by a heat source moving with constant velocity on the top of an infinite plate is computed. Both the transient
effects associated with the initial conditions and the damping of the waves are neglected. If the travel speed of the heat source is smaller than the velocity of the surface waves, dispersive
flexural waves will be excited. The frequency of these waves is proportional to the square of the wave number if the wavelength is much larger than the thickness of the sheet. In this limiting case
it is found that the crest of the waves makes an angle of 90 degrees with the travel direction, and this result is independent of the travel speed as long as the parabolic approximation remains valid
for the dispersion relation of flexural waves.
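For orientation, the parabolic regime mentioned above corresponds to the standard Kirchhoff thin-plate dispersion relation; this is textbook background, not reproduced from the paper itself:

```latex
\omega = \sqrt{\frac{D}{\rho h}}\, k^{2}
\quad\Longrightarrow\quad
c_p = \frac{\omega}{k} = \sqrt{\frac{D}{\rho h}}\, k ,
\qquad
c_g = \frac{d\omega}{dk} = 2\, c_p ,
```

where D is the bending stiffness, ρ the density, and h the plate thickness; the quadratic dependence of ω on k is what makes the flexural waves dispersive, and it holds only for wavelengths much larger than h.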
Fingerprint
Dive into the research topics of 'Wave pattern produced by a heat source moving with constant velocity on the top of an infinite plate'. Together they form a unique fingerprint. | {"url":"https://research.itu.edu.tr/tr/publications/wave-pattern-produced-by-a-heat-source-moving-with-constant-veloc","timestamp":"2024-11-12T09:52:55Z","content_type":"text/html","content_length":"56711","record_id":"<urn:uuid:9edca512-7f0d-4578-a5ba-5cff9f049c66>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00062.warc.gz"}
Arbitrary methodological decisions skew inter-brain synchronization estimates in hyperscanning-EEG studies
Over the past decade, hyperscanning has emerged as an important methodology to study neural processes underlying human interaction using fMRI, EEG, fNIRS, and MEG. However, many methodological
decisions regarding preprocessing and analysis of hyperscanning data have not been standardized in the hyperscanning community, yet may affect inter-brain estimates. Here, we systematically
investigate the effects common methodological choices can have on estimates of phase-based inter-brain synchronization (IBS) measures, using real and simulated hyperscanning (dual) EEG data. Notably,
we introduce a new method to compute circular correlation coefficients in IBS studies, which performs more reliably in comparison to the standard approach, showing that the conventional circular
correlation implementation leads to large fluctuations in IBS estimates due to fluctuations in circular mean directions. Furthermore, we demonstrate how short epoch durations (of 1 s or less) can
lead to inflated IBS estimates in scenarios with no strong underlying interaction. Finally, we show how signal-to-noise ratios and temporal factors may confound IBS estimates, particularly when
comparing, for example, resting states with conditions involving motor actions. For each of these investigated effects, we provide recommendations for future research employing hyperscanning-EEG
techniques, aimed at increasing validity and replicability of inter-brain synchronization studies.
1 General Introduction
The past decade has seen a major rise in hyperscanning studies, with a particular focus on measuring synchronized brain activity across interacting participants, so-called inter-brain synchronization
(IBS) (F. Babiloni & Astolfi, 2014; Czeszumski et al., 2020; Dumas, 2011; Konvalinka & Roepstorff, 2012). Hyperscanning is a methodology for measuring brain activity simultaneously in multiple
participants either using electroencephalography (EEG) (C. Babiloni et al., 2006; Bevilacqua et al., 2019; Dumas et al., 2010; Endevelt-Shapira et al., 2021; Goldstein et al., 2018; Kayhan et al.,
2022; Konvalinka et al., 2014; Koul et al., 2023; Leong et al., 2017; Lindenberger et al., 2009; Zamm et al., 2018), magnetoencephalography (MEG) (Ahn et al., 2018; Baess et al., 2012; Lin et al.,
2023; Zhou et al., 2016), functional near-infrared spectroscopy (fNIRS) (Cui et al., 2012; De Felice et al., 2023; Jiang et al., 2012; Nguyen et al., 2021; Pan et al., 2017; Pinti et al., 2020), or
functional magnetic resonance imaging (fMRI) (Bilek et al., 2022; Goelman et al., 2019; Misaki et al., 2021; Montague et al., 2002). The methodology emerged together with a call in social cognition
research to move away from studies of individuals in isolation and toward studies of true interactions (Hari & Kujala, 2009; Schilbach et al., 2013), with the goal of better understanding the
interpersonal and interactive mechanisms, also on a neural level (Konvalinka & Roepstorff, 2012). With increasing numbers of hyperscanning studies in recent years, numerous computational methods have
been used and applied to two- (or more) brain data, and IBS in particular; however, these methods have not always been tested thoroughly, which has resulted in a multitude of analysis approaches and
choices in hyperscanning pipelines, leading to a wide range of results that are often difficult to compare. Moreover, some of the commonly used methods to quantify IBS have not been thoroughly
investigated, and may lead to variable results. A recently developed hyperscanning-EEG toolbox (Ayrolles et al., 2021) and practical guide (Zamm et al., 2023) provide promise in working toward a
standardization of future hyperscanning-EEG pipelines; however, many of the methodological decisions which have to be made by researchers have still not been standardized, and are often arbitrarily
chosen, which may lead to inconsistent results.
Within the EEG-hyperscanning literature, non-directional analyses commonly focus on alignment or synchronization of the phase of signals, using estimation of phase locking values (PLV) (Dumas et al.,
2010; Pérez et al., 2017; Yun et al., 2012), phase lag index (PLI) (Ahn et al., 2018; Lindenberger et al., 2009; Sänger et al., 2013), or circular correlations (Burgess, 2013; Goldstein et al., 2018)
between time series. However, even within conceptually similar tasks, a wide range of observed effects have been reported, in terms of inter-brain networks, as well as the frequency bands within
which synchronization occurs. For example, some EEG-hyperscanning studies have reported IBS across theta and delta frequencies during coordinated action, for example, guitar playing (Lindenberger et
al., 2009; Sänger et al., 2012); while other studies have reported effects at higher frequencies, at alpha (Dumas et al., 2010; Goldstein et al., 2018; Lin et al., 2023), beta (Yun et al., 2012), and
gamma (Astolfi et al., 2010; Dumas et al., 2010) frequency ranges.
Here, we address the issue that a wide range of experimental and analytical decisions are implied by the computation of inter-brain estimates, and that these choices are likely to contribute to the
variability in findings, raising concerns regarding their validity. Therefore, we systematically investigate consequences of some of the methodological decisions made when estimating IBS in EEG
studies, using both simulated and real (dual-EEG) data (from Zimmermann et al., 2022). Specifically, we investigate: the calculation of circular correlation for continuous signals, as a common
measure to quantify IBS; how the choice of epoch length affects estimates of IBS, using circular correlations and phase locking values in particular; and how differences in frequency power and
signal-to-noise ratio affect IBS estimates. While the focus in this paper is on dual-EEG studies, the concerns raised in this paper also apply to studies involving simultaneous recordings in more
than two people, and possibly other neuroimaging techniques.
1.1 Circular correlation
One common procedure to estimate IBS is to calculate the circular correlation between signals from two sensors from two respective participants. This method was introduced by Burgess (2013) as an
improvement to previous methods that are more prone to spurious coupling, in particular, PLV (Burgess, 2013). Burgess defines circular correlation, $p_c$, as:

$p_c = \frac{\sum \sin(α−μ)\sin(β−ν)}{\sqrt{\sum \sin^2(α−μ)\sum \sin^2(β−ν)}}$ (1.1)

where $α$ and $β$ represent the instantaneous phases at electrodes 1 and 2, and $μ$ and $ν$ represent the circular mean directions for electrodes 1 and 2, respectively. Hence, $sin(α−μ)$ and $sin(β−ν)$ represent the deviations of the two phases from their mean directions. The equation is derived from Jammalamadaka and SenGupta (2001) and implemented in the "CircStat" Toolbox (Berens, 2009).
Notably, the CircStat toolbox is intended for the calculation of (circular) correlations between discrete events, such as wind and flight directions. This is in line with the descriptions of
Jammalamadaka and SenGupta, where a correlation is calculated between “random sample[s] of observations which are directions of two attributes” (Jammalamadaka & SenGupta, 2001; section 8.2). However,
it should be noted that according to Jammalamadaka, section 8.2, equation 8.2.2 (corresponding to equation 1.1 above) is valid for cases with well-defined circular mean values. In case of arbitrary
or not well-defined mean directions, such as in case of uniform distributions for the signals, the mean directions should be chosen such that they “yield the largest possible association in both
positive and negative directions” (Jammalamadaka, 8.2(ii)), and as such maximize the positive or negative correlation. Hence, a different equation (equation 8.2.4) is required (see comment 8.2.2
(ii)), resulting in an adjusted definition of circular correlation for signals with arbitrary means:
$p_{c,adj} = \frac{R_{α−β} − R_{α+β}}{2\sqrt{\sum \sin^2(α−μ)\sum \sin^2(β−ν)}}$ (1.2)

In this case, the numerator of the adjusted circular correlation, $p_{c,adj}$, becomes the difference in the lengths of the mean vectors of $α−β$ and $α+β$.
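Both estimators can be sketched in a few lines of numpy. This is an illustrative translation of equations 1.1 and 1.2, not the authors' released CCorrIBS script or the HyPyP/Pingouin implementations; in particular, we take $R_{α−β}$ and $R_{α+β}$ as un-normalized resultant vector lengths, which keeps the estimate bounded so that phase-locked signals approach 1:

```python
import numpy as np

def circ_corr(alpha, beta):
    """Circular correlation using deviations from circular means (eq. 1.1)."""
    mu = np.angle(np.mean(np.exp(1j * alpha)))   # circular mean direction of alpha
    nu = np.angle(np.mean(np.exp(1j * beta)))    # circular mean direction of beta
    num = np.sum(np.sin(alpha - mu) * np.sin(beta - nu))
    den = np.sqrt(np.sum(np.sin(alpha - mu) ** 2) * np.sum(np.sin(beta - nu) ** 2))
    return num / den

def circ_corr_adj(alpha, beta):
    """Adjusted circular correlation for arbitrary mean directions (eq. 1.2).

    R_minus and R_plus are taken as the (un-normalized) resultant vector
    lengths of alpha - beta and alpha + beta; the mean directions survive
    only in the denominator.
    """
    R_minus = np.abs(np.sum(np.exp(1j * (alpha - beta))))
    R_plus = np.abs(np.sum(np.exp(1j * (alpha + beta))))
    mu = np.angle(np.mean(np.exp(1j * alpha)))
    nu = np.angle(np.mean(np.exp(1j * beta)))
    den = 2 * np.sqrt(np.sum(np.sin(alpha - mu) ** 2) * np.sum(np.sin(beta - nu) ** 2))
    return (R_minus - R_plus) / den
```

For two identical (or constantly lagged) uniform phase series the adjusted estimate is close to 1, and for independent uniform phases it is close to 0, irrespective of where the (arbitrary) circular means happen to fall.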
We argue that continuous data, such as EEG data, should be considered as data having arbitrary mean directions, as the mean direction of (arbitrarily chosen) signal segments is not well defined. This
can be demonstrated using movement trajectories from two conditions of the mirror game, where pairs of participants performed either synchronized movements, or individual, non-synchronized movements
(see Zimmermann et al., 2022, for details). As illustrated in Figure 1, the circular mean direction of a trajectory segment fluctuates wildly with small changes of the analysis window, and as a
consequence, estimates based on the unadjusted circular correlation (equation 1.1) fluctuate. In contrast, using the adjusted circular correlation (equation 1.2), the estimates are stable and correspond
to the (subjective) impression of the level of synchronization between movement trajectories, which is particularly evident in the synchronized movement condition (Fig. 1F).
Thus, with respect to EEG, this would mean that the mean direction of a specific signal segment depends on the signal epoch length and position, and may change drastically with small changes (in the
range of samples) in duration or on/offset. In experiment 1, we investigate the consequences of applying equation 1.1 and equation 1.2, using simulated and real EEG data (obtained from Zimmermann et
al., 2022). We predict more stable estimates over a range of small, arbitrary changes in data processing by using the adjusted variant to calculate circular correlations.
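The instability of the circular mean direction under small window changes can be illustrated with a deliberately simplified stand-in for band-passed EEG, a pure oscillation slightly detuned from an integer number of cycles per window (the 10.3 Hz frequency and 256 Hz sampling rate are illustrative choices, not taken from the paper):

```python
import numpy as np

fs, f, n = 256, 10.3, 256  # sampling rate (Hz), oscillation frequency (Hz), 1-s epoch
# instantaneous phase of a pure oscillation, long enough for 100 onset shifts
phase = 2 * np.pi * f * np.arange(n + 100) / fs

# circular mean direction of the 1-s epoch, for onsets shifted one sample at a time
dirs = np.array([np.angle(np.mean(np.exp(1j * phase[k:k + n]))) for k in range(100)])

# wrapped change of the mean direction per one-sample onset shift
step = np.angle(np.exp(1j * np.diff(dirs)))
```

Each one-sample shift rotates the mean direction by 2πf/fs (about 0.25 rad here), so over 100 shifts the "mean direction" sweeps through several full turns even though the signal itself is unchanged, consistent with the fluctuations shown in Figure 1.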
1.2 Epoch length
Epoch lengths that are used to estimate inter-brain measures are often arbitrarily chosen, or standardized to 1 s (Ayrolles et al., 2021; Bevilacqua et al., 2019; Goldstein et al., 2018). In
experiment 2, we investigate how IBS estimates depend on epoch length, both on simulated data with varying degrees of coupling, as well as real EEG data from an interactive and individual mirror game
performance. Related work on intra-brain synchronization showed that PLI-based functional connectivity (intra-brain) estimates in resting-state EEG recordings decrease with increasing (low) epoch
lengths, and stabilize only at epoch lengths of 6–12 s (Fraschini et al., 2016), and it has been recommended that epochs of lengths shorter than 4 s should hence be avoided where possible (Miljevic
et al., 2022). Moreover, phase coupling estimates have previously been shown to be dependent on the number of cycles of oscillation present for each epoch (Basti et al., 2022), such that epoch or
window lengths that are shorter result in higher and less reliable phase estimates, even for uncoupled signals. Correspondingly, we expect inflated IBS estimates at short epoch lengths, and further
expect that they tend to stabilize at longer epoch lengths than those that are often used.
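The inflation of phase-based estimates at short epochs can be reproduced with two independent, slowly drifting phase random walks (a minimal simulation; the random-walk step size and epoch lengths are illustrative assumptions, not the paper's generative model):

```python
import numpy as np

rng = np.random.default_rng(1)

def temporal_plv(p1, p2):
    """Temporal phase-locking value over one epoch (in the style of Dumas et al., 2010)."""
    return np.abs(np.mean(np.exp(1j * (p1 - p2))))

# Two independent phase random walks: no true coupling, but the phase
# difference barely moves within a short window, inflating PLV.
mean_plv = {}
for n in (64, 1024):  # roughly 0.25 s vs 4 s at 256 Hz
    vals = [temporal_plv(np.cumsum(rng.normal(0, 0.05, n)),
                         np.cumsum(rng.normal(0, 0.05, n)))
            for _ in range(200)]
    mean_plv[n] = float(np.mean(vals))
```

Despite zero coupling, the short epochs yield a markedly higher average PLV than the long ones, which is the pattern motivating this experiment.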
1.3 Power and signal-to-noise ratio
Phase is generally considered to be independent of signal amplitude; however, it has been suggested that phase estimates can be affected by signal amplitude, or power (van Diepen & Mazaheri, 2018).
Whereas phase and amplitude are technically unrelated, phase estimates in weaker signals (or signals with lower amplitude/power) may be relatively more prone to noise, resulting in less stable phase
estimates. This is particularly relevant for inter-brain comparisons across conditions that have different signal-to-noise ratios (SNR). For example, many hyperscanning-EEG studies have looked at
inter-brain mechanisms during reciprocal movement coordination (Dumas et al., 2010; Konvalinka et al., 2014; Ménoret et al., 2014; Tognoli et al., 2007; Zimmermann et al., 2022), and contrasted such
conditions of coupled interaction with uncoupled movement production (e.g., with a metronome, independent movements, etc.). Individual brain analyses have shown that coupled interactions yield the
highest mu-suppression in contrast to uncoupled movements or rest (Dumas et al., 2010; Konvalinka et al., 2014; Lachat et al., 2012), corresponding to amplitude suppression of oscillations at 10 and
20 Hz over sensorimotor areas (Gastaut & Bert, 1954; Salenius et al., 1997). As amplitude suppression leads to lower SNR, and, hence, potentially a poorer ability to estimate phase from the signal,
this may have an effect on the phase-based inter-brain estimates, as well as the comparisons between conditions of stronger and weaker mu suppression. Therefore, in experiment 3, we will investigate
whether and how signal amplitude can affect estimates of inter-brain synchronization which depend on phase estimates, such as PLV and circular correlation. Additionally, we will investigate the
potential consequences of such effects on comparisons of inter-brain synchronization between conditions with differences in signal amplitude, which can occur in cases of mu- or alpha-suppression.
Across these experiments, we show using both simulations and experimental dual-EEG data that inter-brain synchronization estimates are drastically affected by arbitrary decisions regarding the mean
direction in circular correlations, the epoch length and epoch onset/offset in continuous signals, and short epoch lengths used to estimate phase-based IBS. Furthermore, we show how signal amplitude
(i.e., power) can affect estimation of phase-based connectivity measures, which can lead to false positives or false negatives when comparing conditions with varying degrees of (social) interaction.
The overall aim of this paper is, therefore, a call for an effort to set common standards for methodological decisions in hyperscanning-EEG experiments, and to provide recommendations regarding
decisions that we show can have substantial effects on IBS results, with the goal of increasing the validity and reproducibility of inter-brain findings.
2 Methods
2.1 Data
2.1.1 Generating artificial data
In order to generate artificial data, we used FieldTrip’s (version 20230503, Oostenveld et al., 2011) ft_connectivitysimulation function in Matlab (R2022b; The MathWorks, Natick, USA) with a known
connectivity structure. Specifically, we used a linear mixing model with two observed signals and one unobserved signal, and additional independent white noise for each observed signal. Conceptually,
the observed signals represent independent electrodes of two participants (with independent signals generated by independent white noise) that can be affected by a common “inter-brain” process, the
unobserved signal. Therefore, the amplitude of the unobserved signal was varied systematically to generate inter-brain synchronization of varying strength, that is, imitating data from varying levels
of neural interaction. Example data for the different coupling levels are shown in Figure 2A-C.
Data were generated for 100 trials at a time, with a sampling frequency of 256 Hz (matching the real data after preprocessing, see below), with trial durations of 3 s, unless noted otherwise.
Additional data processing steps are specified in the corresponding sections.
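The linear mixing model can be sketched in numpy as follows. This is a conceptual stand-in for FieldTrip's ft_connectivitysimulation, not the toolbox code; the parameter names simply mirror cfg.mix and cfg.absnoise:

```python
import numpy as np

rng = np.random.default_rng(42)
fs, dur = 256, 3  # sampling rate (Hz) and trial duration (s), as in the simulations

def simulate_pair(mix, absnoise=0.2):
    """Two observed channels = independent white noise + a shared latent signal.

    Conceptually, the two channels are electrodes of two participants, and
    the unobserved 'common' signal is the inter-brain process whose
    amplitude (mix) sets the coupling level.
    """
    n = fs * dur
    common = rng.normal(0, 1, n)                   # unobserved "inter-brain" process
    s1 = rng.normal(0, absnoise, n) + mix * common
    s2 = rng.normal(0, absnoise, n) + mix * common
    return s1, s2
```

With mix = 0 the channels are independent; increasing mix raises their correlation, imitating stronger neural interaction.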
2.1.2 Real dual-EEG data
Real dual-EEG data were taken from a previous EEG study (Zimmermann et al., 2022). The study was conducted according to the Declaration of Helsinki and was approved by DTU Compute’s Institutional
Review Board (COMP-IRB-2020-02). All participants in this study provided written informed consent for being included in the study. In this study, dyads participated in a mirror game task (Noy et al.,
2011) while EEG was recorded from both participants simultaneously, using two daisy-chained 64-channel BioSemi (Amsterdam, the Netherlands) ActiveTwo systems. Participants were asked to generate,
among other conditions, synchronized movements while observing each other’s hands, or generate movements individually without seeing the other person. Each condition trial lasted 25 s, out of which
21 s were analyzed (removing 2 s at the beginning and end of each segment to allow for movement synchronization to stabilize). Each condition was repeated 16 times. Data were preprocessed using
band-pass filtering (1–40 Hz; two-pass fourth-order Butterworth filter), resampling to 256 Hz, ICA to remove eye and muscle artifacts, and re-referencing to the global average before analyses. Data from
18 dyads were segmented into segments of 3 s, which form the “trials” in this report. Per condition, the first 100 trials/segments were used in this report. For details regarding task, data
recording, and preprocessing, see Zimmermann et al. (2022). Example behavioral data are shown in Figure 1, and corresponding neural data are shown in Figure 2D-E.
For the purpose of the current analyses, we assumed that data corresponding to synchronized movements show a higher coupling level (i.e., higher inter-brain synchronization) than data corresponding
to individually performed movements. This assumption is based on a number of studies suggesting increased inter-brain synchronization in interacting dyads (e.g., Dumas et al., 2010). However, we note
that the focus of this report is not on synchronization itself, but the effects of arbitrary decisions on estimations of inter-brain synchronization.
2.1.3 Uniform distribution of EEG data (artificial and real data)
We tested the assumption that EEG data (generated and real data) are uniformly distributed using Hodges-Ajne omnibus tests for nonuniformity (CircStat toolbox; function: circ_otest; Berens, 2009). We
generated and band-pass filtered [8–12 Hz] data for 10,000 epochs using the same parameters as for EEG data generation, and tested each epoch for nonuniformity. The null hypothesis that the signal comes from a uniform distribution was rejected in only 6 out of 10,000 cases (0.06%; p < .05, uncorrected). Thus, in >99% of generated data, the distribution is assumed to be uniform. The same approach was applied to real data segments, using 1 s band-pass filtered [8–12 Hz] epochs for all dyads and trials, for a total of 20,628 evaluated epochs. The null hypothesis was rejected in 10 out of 20,628 cases (0.05%; p < .05, uncorrected); in >99% of real EEG data epochs, the distribution is assumed to be uniform. Without additional (i.e., 8–12 Hz) band-pass filtering, the null
hypothesis was rejected in 19.28% of real EEG data epochs. These outcomes support the assumption that EEG data are uniformly distributed.
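A Hodges-Ajne omnibus test can be sketched in Python as below. The exact small-sample p-value formula used here, p = (n − 2m)·C(n, m)/2^(n−1), is an assumption meant to mirror CircStat's circ_otest (which switches to a large-sample approximation for large n):

```python
import math
import numpy as np

def hodges_ajne(theta, n_grid=360):
    """Hodges-Ajne omnibus test for circular uniformity.

    The test statistic m is the minimum number of angles falling inside any
    half circle; strongly clustered data give small m and a small p-value.
    """
    theta = np.mod(np.asarray(theta), 2 * np.pi)
    n = len(theta)
    grid = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    # number of angles inside the half circle [phi, phi + pi) for each candidate phi
    counts = np.sum(np.mod(theta[None, :] - grid[:, None], 2 * np.pi) < np.pi, axis=1)
    m = int(counts.min())
    p = min(1.0, (n - 2 * m) * math.comb(n, m) / 2 ** (n - 1))
    return m, p
```

Angles drawn uniformly on the circle yield a large p-value (uniformity not rejected), whereas angles clustered in one quadrant leave some half circle empty (m = 0) and reject uniformity.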
2.2 Experiment 1: circular correlation
2.2.1 Data processing
For artificial data, simulated trials with durations of 3 s were generated following the general procedure described in section 2.1.1. Specifically, sets of 100 trials were generated with low
individual signal amplitude (standard deviation: 0.2 [cfg.absnoise]), and no (std: 0), medium (std: 0.4), and strong (std: 0.8) “common” signal [cfg.mix], representing “coupling levels”. No delay
[cfg.delay] was specified. Next, data for each trial were band-pass filtered corresponding to the alpha frequency band (8–12 Hz; two-pass fourth-order Butterworth filter). Example data are shown in
Figure 2. Instantaneous phase angles were estimated using Hilbert transforms in Matlab, and circular correlations were estimated using equations 1.1 and 1.2.
For real data, EEG signals corresponding to synchronized and individual movements of dyads were used for these analyses (see Zimmermann et al., 2022). EEG data of 3 s segments were extracted for the left-lateralized, central electrode (C3) and band-pass filtered corresponding to the alpha frequency band (8–12 Hz; two-pass fourth-order Butterworth filter), where inter-brain synchronization has
been reported in previous studies involving interpersonal motor coordination (Dumas et al., 2010; Goldstein et al., 2018), and C3 has been shown to be relevant to movement coupling in the mirror game
(Zimmermann et al., 2022). To estimate circular correlations between signal segments based on equation 1.1 and equation 1.2 respectively, instantaneous phase angles of preprocessed EEG data were
estimated using Hilbert transforms in Matlab, following the same procedures as were used for simulated data.
A Matlab script calculating adjusted circular correlations for univariate data is provided via Github (https://github.com/marizi/CCorrIBS) and can be used as an extension to the Circular Statistics
toolbox (Berens, 2009), as well as an update to the circular correlation implementation in the EEG hyperscanning toolbox, HyPyP (Ayrolles et al., 2021). An implementation of the adjusted circular
correlation for python is also available in the Pingouin package (Vallat, 2018; https://pingouin-stats.org/build/html/generated/pingouin.circ_corrcc.html).
2.2.2 Comparison of circular correlation estimates
First, we compared the two approaches to estimate circular correlations on average trial estimates. For this, 1 s (256 samples) long epochs were used to estimate circular correlations using equation
1.1. and 1.2 respectively, for no, medium, and strong coupling levels. Segment length was based on common choices used in the literature (Ayrolles et al., 2021; Bevilacqua et al., 2019; Dumas et al.,
2010; Goldstein et al., 2018). Estimates were compared using 2-way ANOVAs with factors of approach (discrete, uniform) and coupling (artificial data: no, medium, strong; real data: individual,
synchronized). Moreover, we compared circular correlation estimates (adjusted and unadjusted) for individual trials to PLV estimates for the same epochs using Pearson correlations.
Second, we investigated the effect of onset shifting at the sample level. For this, a 1 s (256 samples) epoch was taken from each trial, and then shifted by one sample (corresponding to less than
0.004 s) at a time. For each (shifted) epoch, circular correlation was estimated using the respective equations. A total of 512 epochs/shifts were generated using this procedure. Then, for each
trial, the average absolute change in circular correlation estimates was calculated over all onset shifts, providing a single value of the average change per trial. These average changes are used as
a measure of variability in circular correlation estimates with small shifts in epoch onset. It should be noted that a simple standard deviation would not be able to distinguish between gradually
changing circular correlation estimates and estimates that fluctuate randomly (e.g., a permuted sequence of gradually changing estimates). Our shift-wise change measure corresponds to a root mean
square of sample-by-sample changes. The averaged sample by sample changes were compared using 2-way ANOVAs with factors of approach and coupling.
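The rationale for the shift-wise change measure over a simple standard deviation can be demonstrated directly (a toy example with linearly drifting values; the specific numbers are illustrative):

```python
import numpy as np

def mean_abs_change(x):
    """Average absolute sample-by-sample change of a sequence of estimates."""
    return np.mean(np.abs(np.diff(x)))

rng = np.random.default_rng(7)
gradual = np.linspace(0.2, 0.8, 100)   # smoothly drifting estimates
shuffled = rng.permutation(gradual)    # same values, randomly reordered
```

The two sequences have identical standard deviations, but the shift-wise change measure is far larger for the randomly fluctuating sequence, which is exactly the distinction the measure is meant to capture.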
Third, we investigated the effect of epoch duration at the sample level. Similar to the effect of onset shifting, 1-s (256 samples) epochs were taken from each trial, and then increasingly extended
by one sample. For each (extended) epoch, circular correlation was estimated according to both equations. Epochs were extended up to 3 s, providing 512 epochs/extensions that were generated using
this procedure. For each trial, the changes in correlation estimate with each extension were averaged over all extensions, providing a single value of the average change per trial. These average
changes are used as a measure of variability in circular correlation estimates with small changes of epoch duration, and compared using 2-way ANOVAs with factors of approach and coupling.
All analyses were performed for simulated data using three levels of coupling (random, medium, and strong), and separately for real EEG data, using data from individual movement production and
interactive, synchronized movement production. For real EEG data, analyses were performed for each dyad (N = 18) individually. Dyad averages for each measure were stored for group level comparisons
using 2-way (coupling level (high, low) x approach (discrete, uniform)) within-subject ANOVAs. The alpha level for all statistical comparisons was set to .05.
2.3 Experiment 2: epoch length
2.3.1 Data processing
For artificial data, simulated trials with durations of 20 s were generated following the general procedure described in section 2.1.1. Specifically, sets of 100 trials were generated with low
individual signal amplitude (standard deviation: 0.2 [cfg.absnoise]) and no (std: 0), medium (std: 0.4), or strong (std: 0.8) “common” signal [cfg.mix], representing “coupling levels”. No delay
[cfg.delay] was specified. Next, data for each trial were band-pass filtered corresponding to the alpha frequency band (8–12 Hz). Example trials are shown in the general methods section. The same
data were used for each range of epoch length, spanning between 0.1 and 20 s in steps of 100 ms. For each segment, instantaneous phase angles were estimated using Hilbert transforms in Matlab.
Temporal PLV (Dumas et al., 2010) and adjusted circular correlation were calculated.
Real EEG data corresponding to synchronized and individual movements of dyads were used for these analyses (see Zimmermann et al., 2022). EEG data of 20 s segments were extracted for electrode C3 and
band-pass filtered corresponding to the alpha frequency band (8-12 Hz), where inter-brain synchronization has been reported in previous studies (Dumas et al., 2010; Goldstein et al., 2018). Next,
segments corresponding to different epoch length ranging from 0.1 to 20 s in steps of 100 ms were selected, starting from the onset of each trial segment, corresponding to the data segments generated
for artificial data. Instantaneous phase angles of preprocessed EEG data were estimated using Hilbert transforms, and temporal PLV and adjusted circular correlation were estimated, following the same
procedures as were used for simulated data (see above).
2.3.2 Analysis of the effect of epoch length
Following visual inspection of the data, we fitted exponential functions (b1 * exp(-b2*X) + b3) to the averages over all simulations to estimate and compare the strength of the “decay” in estimated
circular correlation values for each coupling level for simulated and real data.
2.4 Experiment 3: power/signal to noise ratio
2.4.1 Phase estimation error and SNR
Data were generated for 100 trials of 3 s at each amplitude level using ft_freqsimulation. Simulated data were generated by superimposing a 10 Hz oscillation with varying amplitude (from 0.25 to 5)
in steps of 0.05, and a random noise with fixed amplitude of 1. Generated data were band-pass filtered at 8–12 Hz. Phase estimation error was calculated as the root mean square (RMS) difference
between estimated instantaneous phases (using Hilbert transform, see above) between 0.5 and 2.5 s of each trial (excluding possible edge effects) based on the combined (oscillation + noise) and the
clean (oscillation only) signals.
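The dependence of phase estimation error on oscillation amplitude can be sketched as follows. This is a simplified version of the procedure: the analytic signal is built with an FFT-based Hilbert transform, the broadband noise is not band-pass filtered first (unlike in the paper), and phase error is measured against the known phase of the clean oscillation:

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via an FFT-based analytic signal (Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0   # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0       # Nyquist bin for even-length signals
    return np.angle(np.fft.ifft(X * h))

fs, dur, f = 256, 3, 10
t = np.arange(fs * dur) / fs
true_phase = 2 * np.pi * f * t - np.pi / 2      # phase of sin(2*pi*f*t)
rng = np.random.default_rng(11)
noise = rng.normal(0, 1, len(t))                # fixed-amplitude noise

def phase_rmse(amplitude):
    """RMS wrapped phase error of (oscillation + noise) vs. the clean oscillation."""
    est = analytic_phase(amplitude * np.sin(2 * np.pi * f * t) + noise)
    err = np.angle(np.exp(1j * (est - true_phase)))   # wrap to [-pi, pi]
    sel = slice(fs // 2, len(t) - fs // 2)            # trim possible edge effects
    return np.sqrt(np.mean(err[sel] ** 2))
```

Lowering the oscillation amplitude relative to the fixed noise sharply increases the phase estimation error, which is the mechanism by which amplitude differences can leak into phase-based IBS estimates.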
2.4.2 IBS and relative noise levels
Data were generated for 100 trials of 3 s at each coupling and noise level using ft_connectivitysimulation, following the general procedures described above. The coupling level (cfg.mix) was varied
from no coupling (cfg.mix = 0) to medium (cfg.mix = 0.4) and high (cfg.mix = 0.8); level of noise (cfg.absnoise) was varied from low (cfg.absnoise = 0.1) up to high (cfg.absnoise = 1.0) in steps of
0.1. PLV was calculated based on band-pass filtered (8–12 Hz) data. Statistical comparisons were performed using 2-way ANOVA with three levels of coupling (no, medium, strong) and three levels of
relative noise (0.1, 0.4, 1.0).
2.4.3 Power/SNR and inter-brain synchronization measures in real EEG data
EEG signals corresponding to individual movements and two 2-min rest conditions were used for these analyses (see Zimmermann et al., 2022). EEG data of 3 s segments were extracted for the left-lateralized, central electrode (C3) and band-pass filtered corresponding to the alpha frequency band (8–12 Hz). Inter-brain synchronization was estimated using adjusted circular correlation, and
amplitude envelopes of preprocessed EEG data were estimated using Hilbert transforms in Matlab. To reduce artifacts during the rest condition, segments that had adjusted circular correlation
estimates or amplitude in either dyad more than two interquartile ranges from the median were excluded from the analysis. We compared adjusted circular correlation estimates from a rest condition
with estimates in the uncoupled (individual) condition of the mirror game, from the same dyads and recording session, using paired t-tests. Furthermore, we correlated average signal amplitude (based
on the Hilbert envelope amplitude) with estimates of adjusted circular correlations.
3 Results
3.1 Experiment 1: circular correlation
3.1.1 Variability of circular mean values of continuous EEG signals
Circular mean values of continuous, simulated EEG signals varied considerably with small changes in the epoch onset (Fig. 3A) and epoch duration (Fig. 3B), especially for short epoch durations. The
mean direction varied as much as 360 degrees (2π) for selected data segments. Variability of circular means reduced with increasing epoch lengths, whereas it remained largely constant with changing
epoch onsets at fixed epoch lengths. This further suggests that mean directions of EEG signal segments are not well defined.
3.1.2 Circular correlation estimates—trial average
First, we compared estimated circular correlations on individual trials for signals with no, medium, or strong coupling levels on artificial data (Fig. 4A-D). We obtained higher estimates for
increased coupling levels (F(2,594) = 576.34, p < .001, η_p^2 = 0.66), but also higher estimates for adjusted circular correlation (equation 1.2; adjusted for uniform data) compared to unadjusted circular correlations (equation 1.1; not adjusted) (F(1,594) = 176.49, p < .001, η_p^2 = 0.23). There was an interaction between approach and coupling level (F(2,594) = 21.43, p < .001, η_p^2 = 0.07); however, the effect of approach was observed at each coupling level (all p < .001), and estimates increased for each coupling level (all p < .001). Estimated circular correlations were 49.4%
higher for data generated without coupling, 36.6% higher for data generated with medium coupling, and 41.7% higher for data generated with strong coupling.
For real EEG data (Fig. 4E-G), we observed a significant effect of approach (F(1,17) = 3790.70, p < .001, η_p^2 = 0.99), with higher values for adjusted circular correlations, but no effect of coupling level (F(1,17) = 0.12, p = .734, η_p^2 = 0.02), and no interaction effect (F(1,17) = 0.57, p = .459, η_p^2 = 0.03). An analysis in pseudo pairs provided consistent results (see Supplementary Material S5). Adjusted circular correlation estimates for individual trials correlated highly (r > 0.99) with PLV estimates at all coupling levels (see Supplementary Material S3 for details).
3.1.3 Circular correlation estimates—effect of onset shifting
As shown above (Fig. 3), estimated mean direction varied with small shifts in epoch onset. Consequently, we investigated how the two approaches to estimate circular correlations, adjusted and
non-adjusted for uniform distributions with not well-defined means, vary in terms of estimated circular correlation with small onset shifts.
We observed a significant interaction between approach and coupling level (F(2,594) = 166.74, p < .001, η_p^2 = 0.36), as well as significant main effects of approach (F(1,594) = 5493.39, p < .001, η_p^2 = 0.90) and coupling level (F(2,594) = 130.53, p < .001, η_p^2 = 0.31; Fig. 5A). Specifically, average change was higher for equation 1.1 compared to equation 1.2 for each coupling level (all
p < .001), with a factor 9.41 for random, 23.38 for medium, and 29.53 for strong coupling levels. For unadjusted circular correlations, there was a significant increase in average change between
random and medium coupling levels (t(198) = 16.39, p < .001; factor 1.69), as well as random and high coupling levels (t(198) = 14.70, p < .001; factor 1.66), but not between medium and strong
coupling levels (t(198) = -0.77, p = 1; factor 0.98). In contrast, for adjusted circular correlations, average change decreased for higher coupling levels. Specifically, there was a lower average
change for medium compared to random coupling levels (t(198) = 15.20, p < .001; factor 0.68) and a lower average change for strong compared to medium coupling levels (t(198) = 6.70, p < .001; factor
0.77), as well as between strong and random coupling levels (t(198) = 23.14, p < .001; factor 0.52).
For real EEG data (Fig. 5B), we observed significantly higher average changes for the unadjusted approach compared to the adjusted approach (F(1,17) = 30174.05, p < .001, η[p]^2 > 0.99), whereas
there was no significant difference between the coupling levels (F(1,17) = 3.71, p = .071, η[p]^2 = 0.17) and no interaction between approach and coupling level (F(1,17) = 2.08, p = .168, η[p]^2 =
3.1.4 Circular correlation estimates—effect of epoch length
Similar to changes with shifts of epoch onset, estimated mean direction varied with small changes in epoch length. Consequently, we investigated how the two approaches to estimate circular
correlations, adjusted and non-adjusted for uniform distributions with not well-defined means, vary in terms of estimated circular correlation with small changes to epoch length.
We observed a significant interaction between approach and coupling level (F(2,594) = 75.91, p < .001, η[p]^2 = 0.20), as well as significant main effects of approach (F(1,594) = 927.38, p < .001, η[p]^2 = 0.61) and coupling level (F(2,594) = 69.75, p < .001, η[p]^2 = 0.19; Fig. 5C). Specifically, average change was higher for non-adjusted circular correlations compared to adjusted circular
correlations for each coupling level (all p < .001), with a factor 9.70 for random, 34.94 for medium, and 50.62 for strong coupling levels. For non-adjusted circular correlations, there was a
significant increase in average change between random and medium coupling levels (t(198) = 13.06, p < .001; factor 2.72), as well as random and strong coupling levels (t(198) = 10.70, p < .001; factor 2.80), but not between medium and strong coupling levels (t(198) = 0.39, p = .70; factor 1.03). In contrast, for adjusted circular correlations, average change decreased for higher coupling levels. Specifically, there was a lower average change for medium compared to random coupling levels (t(198) = 8.26, p < .001; factor 0.76), a lower average change for strong compared to random coupling levels (t(198) = 16.77, p < .001; factor 0.54), and between strong and medium coupling levels (t(198) = 6.84, p < .001; factor 0.71).
For the real EEG data (Fig. 5D), average changes over epoch length extensions were significantly higher for unadjusted circular correlations compared to the adjusted circular correlations (F(1,17) =
14329.39, p < .001, η[p]^2 > 0.99). We observed no significant difference between coupling levels (F(1,17) = 0.27, p = .608, η[p]^2 = 0.02) and no interaction between approach and coupling level (F(1,17) = 0.47, p = .503, η[p]^2 = 0.03).
3.2 Experiment 2: epoch length
3.2.1 Effect of epoch length on IBS estimates
Visual inspection of the results (Fig. 6) suggests that estimates for inter-brain synchronization, both for PLV and adjusted circular correlations, decreased with epoch length, both for simulated (Fig. 6A) and real EEG data (Fig. 6B). Fitting an exponential function (b1 * exp(-b2*X) + b3) to the averages over all simulations confirmed that the “decay” of the high coupling condition was faster
(b1 = 0.15, b2 = 0.41, b3 = 0.88) than the decay of the medium coupling condition (b1 = 0.34, b2 = 0.36, b3 = 0.70) and the no-coupling condition (b1 = 0.66, b2 = 0.06, b3 = 0.12). Based on visual
inspection, the estimates reach a plateau at around 0.5 s for the high coupling data, at around 1 s for the medium coupling data, and at around 5 s for the no-coupling data. It should be noted that
estimates for shorter epochs are systematically higher.
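The plateau description above can be reproduced by fitting the stated exponential form to IBS estimates as a function of epoch length. The sketch below is a minimal illustration on hypothetical data generated from the no-coupling parameters reported above (the data, noise level, and use of scipy are our assumptions, not the authors' exact pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(x, b1, b2, b3):
    # same functional form as in the text: b1 * exp(-b2 * x) + b3
    return b1 * np.exp(-b2 * x) + b3

# hypothetical IBS estimates over epoch lengths (s), built from the
# no-coupling parameters reported above plus a little noise
rng = np.random.default_rng(1)
epoch_len = np.linspace(0.25, 25.0, 100)
ibs = decay(epoch_len, 0.66, 0.06, 0.12) + 0.005 * rng.standard_normal(100)

params, _ = curve_fit(decay, epoch_len, ibs, p0=(0.5, 0.1, 0.1))
b1, b2, b3 = params  # should recover roughly (0.66, 0.06, 0.12)
```

A larger b2 corresponds to a faster decay, i.e., earlier stabilization of the estimates with increasing epoch length.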
Within real EEG data, similar patterns were observed (Fig. 6B). Averaged over all pairs, estimates stabilized at epoch lengths of approximately 5–10 s, both for interactive and non-interactive
trials. Average parameter estimates for the exponential fit for the individual condition were b1 = 0.70 ± 0.06, b2 = 0.07 ± 0.02, b3 = 0.12 ± 0.02, and for the interactive condition b1 = 0.70 ± 0.08,
b2 = 0.07 ± 0.02, and b3 = 0.12 ± 0.02. No significant differences in parameters were observed between the interactive and individual conditions (all p > .10).
We repeated the same simulations with a higher frequency band (16–24 Hz, approximating beta frequency band), with mixed results. Estimates stabilized faster for the higher frequency band, especially
for uncoupled and medium coupled signals (see Supplementary Material S4). We also conducted complementary simulations with bursts instead of sustained inter-brain coupling (see Supplementary
Materials S2). In the case of such intermittent bursts of IBS, the ideal epoch length appears to be equal to the average duration of these bursts (Supplementary Fig. 2).
3.3 Experiment 3: power/signal-to-noise ratio
3.3.1 Phase estimation error and SNR
First, we investigated whether phase estimation error depends on signal-to-noise ratio. We observed a significant effect of SNR on phase estimation error (F(95,9504) = 50.32, p < .001, η[p]^2 =
0.33). Visual inspection of the results suggested decreasing estimation errors at higher SNR with stable estimates starting from SNRs of approximately 0.5 (Fig. 7A).
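As a rough illustration of why phase estimates degrade at low SNR, the following sketch estimates the phase of a noisy sinusoid with the Hilbert transform and measures the mean absolute phase error. It is a simplified stand-in for the simulation described here; the sampling rate, frequency, and additive white-noise model are our assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def mean_phase_error(snr, fs=250, f=10.0, seconds=10.0, seed=0):
    """Mean absolute error (rad) of Hilbert-based phase estimates of a noisy
    sinusoid; `snr` is the signal-to-noise power ratio."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, seconds, 1.0 / fs)
    true_phase = 2 * np.pi * f * t
    clean = np.sin(true_phase)
    noise = rng.standard_normal(t.size) * clean.std() / np.sqrt(snr)
    est_phase = np.angle(hilbert(clean + noise))
    # the analytic signal of sin(x) has phase x - pi/2;
    # compare on the unit circle to avoid wrap-around artifacts
    err = np.angle(np.exp(1j * (est_phase - (true_phase - np.pi / 2))))
    return np.abs(err).mean()
```

Sweeping `snr` in such a sketch shows the same qualitative pattern as Fig. 7A: large, noisy phase errors at low SNR that shrink and stabilize as SNR increases.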
3.3.2 IBS and relative noise levels
Simulation results with generated EEG data show that inter-brain synchronization estimates are reduced with increasing levels of relative noise, for coupling levels above zero. With no underlying
coupling, IBS estimates remain approximately constant (Fig. 8). Specifically, we observed a significant interaction between coupling levels (no, medium, strong) and noise levels (low, medium, high)
for adjusted circular correlations (F(4,891) = 248.85, p < .001, η[p]^2 = 0.53), with main effects for both coupling level (F(2,891) = 867.72, p < .001, η[p]^2 = 0.66) and noise level (F(2,891) =
935.20, p < .001, η[p]^2 = 0.68). Different circular correlation estimates were observed for different noise levels for strong coupling levels (F(2,297) = 1033.30, p < .001, η[p]^2 = 0.87) as well as
medium coupling levels (F(2,297) = 1205.00, p < .001, η[p]^2 = 0.89), but not no coupling (F(2,297) = 0.16, p = .851, η[p]^2 < 0.01). Post-hoc pairwise two-sample t-tests revealed significant
differences between strong and medium coupling levels for all relative noise levels, as well as for relative noise levels up to 0.7 [range 0.1–1.0] between medium and no coupling (all p < .05,
Bonferroni corrected for 20 comparisons; 100 observations per cell, df = 198 for all comparisons). Visual inspection of the results (Fig. 8A) indicated decreasing estimates at higher levels of
relative noise for signals with simulated coupling. In other words, as the SNR decreases, the synchronization between signals with stronger coupling decreases, and begins to resemble synchronization
levels between uncoupled signals.
3.3.3 Power and inter-brain measures in real EEG data
We compared adjusted circular correlation estimates in real EEG data (electrode C3) from a rest condition with estimates in a non-interactive condition of the mirror game, from the same dyads and
recording session. Furthermore, we correlated average signal amplitude (based on the Hilbert envelope amplitude) with estimates of circular correlation. As assumed, alpha band power was higher in the
rest condition compared to the movement condition (t(17) = 4.91, p < .001), confirming the occurrence of sensorimotor alpha-power suppression during the movement condition (factor 0.72; it should be
noted that electrode C3 was chosen based on such an effect in a previous analysis of the same data, see Zimmermann et al. (2022)). No differences were observed in terms of adjusted circular
correlation estimates between the (non-interactive) movement condition and the (non-interactive) rest condition (movement: mean ± sd 0.226 ± 0.012; rest: 0.232 ± 0.019; t(17) = 1.17, p = .259).
Instantaneous power and adjusted circular correlation estimates were not correlated for the non-interactive movement task condition (z-transformed r = 0.003 ± 0.078 (mean ± SD); t(17) = 0.14, p =
.891); however, correlations were significantly higher than zero for the rest condition (z-transformed r = 0.082 ± 0.088; t(17) = 3.96, p = .001), and significantly higher than correlations in the
movement condition (t(17) = 2.52, p = .022; Supplementary Fig. S6).
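Average signal amplitude here was indexed by the Hilbert envelope. A minimal sketch of that quantity (a standard computation, not necessarily the authors' exact pipeline) is:

```python
import numpy as np
from scipy.signal import hilbert

def mean_envelope_amplitude(x):
    # average instantaneous amplitude of a (band-pass filtered) signal,
    # taken as the magnitude of its analytic signal
    return np.abs(hilbert(x)).mean()

# sanity check on a pure 10 Hz tone of amplitude 2 (250 Hz sampling)
t = np.linspace(0.0, 2.0, 500, endpoint=False)
amp = mean_envelope_amplitude(2.0 * np.sin(2 * np.pi * 10 * t))
```

For a narrow-band signal, this average tracks the oscillation amplitude, which is why it can serve as a per-trial power index to correlate with IBS estimates.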
4 General Discussion
Our analyses of real and simulated EEG data have shown how phase-based inter-brain synchronization estimates may be greatly affected by arbitrary methodological decisions during different steps of
data analyses. Specifically, we investigated the effect of arbitrary mean directions in computing circular correlation estimates of two signals with varying degrees of coupling, showing that circular
correlation estimates are highly variable even during strong coupling due to large fluctuations in the circular mean direction. We propose a different implementation of the circular correlation
coefficient using adjusted circular correlation, which adjusts for circular mean fluctuations.
Next, we showed how the use of short (1 s or less) epochs for the estimation of IBS, both using PLV and circular correlation, can result in highly inflated estimates, particularly in cases where
there is no, or very weak, coupling between the signals. Longer (3–6 s or more) epochs are thus required to prevent such inflated estimates, as previously shown on functional connectivity estimates
for single-brain EEG analyses (Fraschini et al., 2016; Miljevic et al., 2022). Finally, we have shown that IBS estimates become less reliable with lower signal amplitude, such as in conditions with
stronger suppression of signal amplitude, for example, alpha or mu-suppression. Our data partially indicate a relationship between signal amplitude and IBS estimates, at least in the resting state
condition. We unpack the results below, and provide recommendations for future research employing hyperscanning-EEG methods.
4.1 Circular correlation as measure for inter-brain synchronization
We systematically investigated the effect of approach regarding circular correlations for (dual) EEG data, based on real data as well as simulated EEG data with a known connectivity structure. We
have shown that estimates of non-adjusted circular correlation in EEG data based on equation 1.1, that is, not adjusted for not well-defined/arbitrary mean directions, are highly variable and
regularly underestimate circular correlation between coupled signals. Further, we have shown that the likely cause for these fluctuations, as well as underestimation, is the variability of mean
direction with small changes in epoch length and epoch onsets, in the range of samples. Using adjusted circular correlations, following equation 1.2, calculating circular correlations adjusted for
arbitrary mean directions, in contrast, produces systematically higher values for circular correlation estimates, which do not fluctuate strongly with small changes in epoch length or onset. Values
produced by adjusted circular correlations regularly form an upper bound for the estimates based on non-adjusted circular correlation (see examples in Figure 1 and 4; Supplementary Material S1).
Epoch length at the sample level, as well as on- and offsets, are arbitrary decisions that have to be regularly taken when conducting EEG analyses. Importantly, mutual adaptation or synchronization
of neural activity between interacting partners should not be affected by the decision to analyze a time window shifted by a few milliseconds in an ongoing interactive process. Inter-brain estimates
should be consistent along these parameters, both from a neurobiological point of view (as the behavior suggests an ongoing process) and from a statistical point of view (as there should not be
substantial differences between data of, e.g., 254 or 256 samples). As noted by Jammalamadaka and SenGupta (2001), mean directions for data with uniform distributions are not well
defined. This is in line with our investigation showing large changes of mean direction with small changes in onset or duration of analyzed data segments (Figs. 1 and 3). Therefore, we recommend that
for the purpose of EEG/MEG data, circular correlations as a measure of inter-brain synchronization should be estimated with adjustment for not well-defined mean directions of the data (adjusted
circular correlation; equation 1.2), largely eliminating the influence of mean direction, and, therefore, of arbitrary decisions at the sample level with regard to epoch length and onset.
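For concreteness, the two estimators can be sketched in Python (numpy). The mapping to the equations is an assumption on our part: equation 1.1 is taken to be the classic Jammalamadaka–SenGupta coefficient computed around estimated mean directions, and equation 1.2 the mean-free form based on pairwise phase differences; the paper's exact implementations may differ.

```python
import numpy as np

def circ_corr_unadjusted(alpha, beta):
    """Circular correlation around the estimated mean directions
    (the classic coefficient; assumed to correspond to equation 1.1)."""
    a0 = np.angle(np.mean(np.exp(1j * alpha)))  # circular mean direction
    b0 = np.angle(np.mean(np.exp(1j * beta)))
    num = np.sum(np.sin(alpha - a0) * np.sin(beta - b0))
    den = np.sqrt(np.sum(np.sin(alpha - a0) ** 2) * np.sum(np.sin(beta - b0) ** 2))
    return num / den

def circ_corr_adjusted(alpha, beta):
    """Mean-free variant based on pairwise angle differences, requiring no
    well-defined mean direction (assumed to correspond to equation 1.2)."""
    da = np.sin(alpha[:, None] - alpha[None, :])  # sin of all pairwise differences
    db = np.sin(beta[:, None] - beta[None, :])
    iu = np.triu_indices(len(alpha), k=1)         # each pair counted once
    num = np.sum(da[iu] * db[iu])
    den = np.sqrt(np.sum(da[iu] ** 2) * np.sum(db[iu] ** 2))
    return num / den
```

Both estimators return 1 for identical phase series, but only the pairwise form is unaffected by where the estimated mean direction happens to fall for near-uniform phase data, which is the property exploited above.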
4.1.2 Interpretational issues
We observed counterintuitively high levels of circular correlations for simulated EEG data with “no” coupling between the signals, with correlations around 0.3 (Fig. 4). One possible explanation for
these high readings might be (at least in part) in the data preparation. Specifically, the data have been band-pass filtered according to narrow frequency bands (i.e., alpha, 8–12 Hz). In addition,
circular correlations were estimated over data segments of 1 s. This raises the possibility that using short segments of data band-pass filtered to a narrow frequency range results in only a limited
number of cycles, which is prone to produce more spurious coupling due to genuine similarities in EEG rhythms (given few cycles) between people (Burgess, 2013). While circular correlation
coefficients have been proposed as a method that is actually less susceptible to this issue (Burgess, 2013), we show here that their original implementation may have other issues (i.e., with arbitrary mean directions) and that, even when these are corrected for, the measure may face the same overestimation of inter-brain synchronization as other connectivity methods, such as PLVs. A solution to this problem may be to
use longer epoch lengths. This question is investigated in section 3.2 and discussed in section 4.2.
One explanation for the disparity between our findings and those of Burgess (2013) could be the different approach we took to generate simulated data. Given that EEG data are uniformly distributed,
and hence do not have well-defined circular mean values, we simulated data with uniform distributions rather than using distributions with defined circular mean directions as in Burgess (2013). As we
also see the same noisy estimates when calculating unadjusted circular correlation for real EEG data, we believe that the noisy estimates are due to the unadjusted approach rather than the method
used to simulate data.
Another observation of analyses applied to the simulated data was that the difference between approaches decreases with increasing strength of coupling between the signals. Given the distribution of
the data, this likely reflects a ceiling effect.
A third observation of the simulated data concerns the sample to sample estimate changes (Fig. 5), specifically, the interaction between approach (adjusted/unadjusted for arbitrary means) and the
coupling level (high vs. low). For the adjusted approach, higher coupling levels resulted in lower sample-to-sample estimate changes, whereas for the unadjusted approach, higher coupling levels
resulted in increased estimate changes. We think that, in the case of the unadjusted approach, synchronization estimates take widely varying values between zero and the “true” coupling (see also Fig. 4), depending on the estimated mean directions of the correlated signals. For the adjusted approach, in contrast, estimates of circular correlation for higher coupling levels are less affected by
noise in terms of random/spurious “interactions” between signals, such as those we observed for the no-interaction simulations.
Finally, adjusted circular correlation estimates are remarkably correlated with PLV estimates, which raises the question of whether only one of these measures should be recommended for
hyperscanning-EEG studies, for consistency purposes. Given that we have not systematically explored the differences between adjusted circular correlation and PLV, we encourage researchers to further
explore this.
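The PLV referred to throughout is commonly defined as the magnitude of the time-averaged unit phasor of the phase difference. A minimal sketch of this standard definition (not necessarily the exact implementation used here):

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-locking value between two phase time series (radians):
    1 for a constant phase difference, near 0 for unrelated phases."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
```

Like the adjusted circular correlation, this quantity depends only on phase differences and never on an estimated mean direction, which may be one reason the two measures track each other so closely.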
4.2 Effect of epoch length on IBS estimates (PLV and circular correlation)
Our analyses of real and generated EEG data showed that estimates for inter-brain synchronization, such as PLV and adjusted circular correlation, strongly depend on epoch length, in line with our
expectations. In our data and simulations, where EEG data were band-pass filtered between 8 and 12 Hz (corresponding to the alpha frequency band), estimated inter-brain synchronization was higher for
short epoch lengths of 1 s or less, and dropped sharply with extended epoch lengths, both for (generated) data with high and low/no implied coupling. Stabilization occurred after 1–5 s for generated
data depending on the underlying coupling level, and approximately 5–10 s for real data (with unknown underlying coupling). These effects are comparable with data used to measure intra-brain
connectivity, suggesting stabilization after 3–6 s (Fraschini et al., 2016). One reason for inflated IBS estimates for short epoch lengths may be that they contain only a small number of oscillation cycles (Basti et al., 2022), which means they may be more prone to spurious coupling, making it more difficult to disentangle weak synchronization from randomly coupled signals.
Accordingly, optimal/minimal epoch length should depend also on the frequency band of interest. In fact, applying the same approach to generated EEG data band-pass filtered with 16–24 Hz
(corresponding to the beta frequency band) resulted in stabilization at shorter epoch length (see Supplementary Material S4), and lower estimates for generated signals without underlying coupling.
These observations suggest that epoch length should be adjusted to the frequency band of interest, and systematic investigations are necessary to obtain optimal settings for all frequencies.
Based on these observations, we suggest that epoch lengths for estimation of inter-brain synchronization should ideally follow periods of behavioral coupling, but should be at a minimum 3 s long for
data band-pass filtered to the alpha frequency band, and potentially longer following recommendations on intra-brain analyses from Fraschini et al. (2016) and Miljevic et al. (2022). Shorter epochs,
as shown, can result in inflated and unreliable estimates, not only for signals that are coupled, but also for unrelated signals. This observation is particularly concerning as it can be assumed (
Dumas et al., 2010; Lindenberger et al., 2009) that at least some inter-brain processes occur and manifest at shorter timescales than those recommended by previous research on intra-brain functional
connectivity (Lachaux et al., 1999), and by us. One drawback, thus, of using longer epochs is that bursts of coupled activity or less stationary dynamical phenomena may be missed. This would be
particularly true if IBS estimates were computed over entire non-epoched interactions (here, the entire 25 s segments), in which case the periods of phase coupling may be entirely smeared or missed.
Our complementary simulations of bursting IBS instead of sustained coupling support this hypothesis. Overall, this calls for deeper investigation and characterization of the temporal evolution of
IBS, and shows the limits of grand-averaging across tasks. A better sensitivity may require either more precise behavioral analyses of the social interaction or a more advanced way to detect those
bursts of IBS.
4.3 Effect of power and signal-to-noise ratio on IBS estimates
Our analyses show that phase estimates become less reliable for experimental conditions where the EEG data have lower amplitude at the frequencies of interest (e.g., more mu or alpha suppression),
despite a theoretical independence of amplitude and phase of a signal. This observation is in line with suggestions in the literature on intra-brain connectivity (van Diepen & Mazaheri, 2018), which
suggests that phase estimates of ongoing oscillations are affected by the power modulations or concurrent evoked responses driven by task changes. Furthermore, our analyses show that IBS estimates
computed using circular correlation coefficients and PLVs decrease with increasing levels of relative noise in the data. Figure 8 shows that IBS estimates between strongly coupled signals with a high
amount of noise, or low signal-to-noise ratios, resemble IBS estimates between non-interacting signals. An interpretation of this is that the less reliable phase estimates cause more noisy IBS
estimates, which—on average—result in underestimation of the actual coupling between signals.
With respect to real EEG data, we show that resting-state trials with higher signal amplitudes yield higher IBS estimates than those with lower signal amplitudes, but this relationship is not present
for movement data with no coupling. This is unexpected, given that there is no coupling between the participants, nor can they see each other or have any opportunity to exchange signals. One
explanation for this is that higher signal amplitude results in higher estimates of inter-brain synchronization; however, we only find a direct correlation between the signal amplitude and the IBS
estimates in resting-state data. One reason for this may be due to larger variability in signal amplitude in the rest data compared to the movement data (see Supplementary Material S6 for details).
This opens the possibility that reduced power (e.g., due to suppression of alpha oscillations over occipital or sensorimotor areas) at constant coupling levels may at least, in part, result in lower
estimates, given higher (relative) noise levels.
Currently, there are no methods known to correct for the influence of noise, or spectral power differences at specified frequencies on phase estimates. Therefore, we suggest that a comparison of IBS
between conditions or groups should always be accompanied by a close inspection of the corresponding frequency power. Furthermore, if a comparison suggests differences in IBS estimates between
conditions with different levels of frequency power (e.g., due to increased alpha/mu suppression in one of the conditions), these IBS differences should be interpreted with utmost care, as the
observed differences may be a consequence of reduced (or increased) relative noise at the same actual coupling level.
4.4 No IBS differences observed between individual and interactive conditions in single electrodes
We note here that for the real EEG data, we report no significant differences in IBS estimates between individual and interactive conditions. This should be interpreted with caution as we do not perform a
systematic statistical comparison between conditions, but merely compare synchronization between people’s single (and symmetric) electrodes (i.e., C3, chosen based on previous literature) in a single
pre-defined frequency band, for the purpose of demonstrating how methodological decisions may influence IBS estimates. Given that IBS estimates in real data are generally low, and that differences in
IBS estimates between conditions are generally of low effect size, it is thus not unusual that both interactive and non-interactive conditions alike yield similar and low IBS values. For better
sensitivity, it is advisable to use nonparametric cluster-based statistical testing (Ayrolles et al., 2021); this not only provides a straightforward way to address the multiple comparisons
problem but also allows the integration of biophysically motivated priors in the test statistic (Maris & Oostenveld, 2007).
4.5 Conclusion
In this paper, we show how non-standardized methodological decisions that have to be made by researchers when analyzing two- or multi-person EEG data can greatly affect or distort phase-based
estimates of inter-brain synchronization. We focus our investigation on methodological decisions regarding: arbitrary mean directions as well as epoch length and epoch onset/offset when estimating
circular correlation coefficients; non-standardized epoch lengths; and the comparison of conditions with different levels of signal-to-noise ratios or signal amplitudes. It should be noted that the
decisions investigated in this paper, and the potential issues that may occur during the analysis of hyperscanning-EEG datasets, are not exhaustive. There are likely other important methodological
decisions that may also influence IBS estimates, which are not investigated in this paper. For example, the decision of which EEG reference to choose is not standardized, and previous research on
intra-brain analyses shows that the choice of reference (e.g., common average reference, mastoids, REST, surface Laplacian) may have large effects on EEG results (Kayser & Tenke, 2010; Yao et al.,
2005, 2019), and, in particular, may distort phase, and hence connectivity estimates (Chella et al., 2016; Guevara et al., 2005; Shirhatti et al., 2016). We thus encourage further investigation of
referencing, and other potential non-standardized methodological decisions, with respect to hyperscanning-EEG data. We hope that the results of this work contribute to the development of standardized
hyperscanning-EEG methods, in an effort to increase validity and replication of inter-brain synchronization findings.
Data and Code Availability
Author Contributions
Designed research: M.Z., I.K.; performed research: M.Z.; analyzed data: M.Z., K.S.N., G.D., and I.K.; wrote the paper: M.Z., I.K.; reviewed the paper: M.Z., K.S.N., G.D., and I.K.
Declaration of Competing Interest
This work was supported by the Villum Experiment (project no. 00023213) and Villum Young Investigator (project no. 37525) grants awarded to I.K. G.D. was supported by the Fonds de recherche du Québec
(FRQ; 285289), Natural Sciences and Engineering Research Council of Canada (NSERC; DGECR-2023-00089), and the Azrieli Global Scholars Fellowship from the Canadian Institute for Advanced Research
(CIFAR) in the Brain, Mind, & Consciousness program. We would like to thank Aliaksandr Dabranau for providing an improved way to calculate circular means, and to the anonymous reviewers who provided
helpful feedback and suggestions.
Supplementary Materials
References
Interbrain phase synchronization during turn-taking verbal interaction—A hyperscanning study using simultaneous EEG/MEG. Human Brain Mapping.
Neuroelectrical hyperscanning measures simultaneous brain activity in humans. Brain Topography.
HyPyP: A hyperscanning Python pipeline for inter-brain connectivity analysis. Social Cognitive and Affective Neuroscience.
Sources of cortical rhythms in adults during physiological aging: A multicentric EEG study. Human Brain Mapping.
MEG dual scanning: A procedure to study real-time auditory interaction between two persons. Frontiers in Human Neuroscience.
Looking through the windows: A study about the dependency of phase-coupling estimates on the data length. Journal of Neural Engineering.
Brain-to-brain synchrony and learning outcomes vary by student–teacher dynamics: Evidence from a real-world classroom electroencephalography study. Journal of Cognitive Neuroscience.
Directed coupling in multi-brain networks underlies generalized synchrony during social exchange.
Impact of the reference choice on scalp EEG connectivity estimation. Journal of Neural Engineering.
NIRS-based hyperscanning reveals increased interpersonal coherence in superior frontal cortex during cooperation.
Hyperscanning: A valid method to study neural inter-brain underpinnings of social interaction. Frontiers in Human Neuroscience.
Social interaction increases brain synchrony during co-watching of novel movies.
Inter-brain synchronization during social interaction. PLoS One.
Maternal chemosignals enhance infant-adult brain-to-brain synchrony. Science Advances.
The effect of epoch length on estimated EEG functional connectivity and brain network organisation. Journal of Neural Engineering.
EEG changes during cinematographic presentation (Moving picture activation of the EEG). Electroencephalography and Clinical Neurophysiology.
Bidirectional signal exchanges and their mechanisms during joint attention interaction—A hyperscanning fMRI study.
Brain-to-brain coupling during handholding is associated with pain reduction. Proceedings of the National Academy of Sciences of the United States of America.
Phase synchronization measurements using electroencephalographic recordings: What can we really say about neuronal synchrony?
Neural synchronization during face-to-face communication. Journal of Neuroscience.
DEEP: A dual EEG pipeline for developmental hyperscanning studies. Developmental Cognitive Neuroscience.
In search of the Rosetta stone for scalp EEG: Converging on reference-free techniques. Clinical Neurophysiology.
Frontal alpha oscillations distinguish leaders from followers: Multivariate decoding of mutually interacting brains.
The two-brain approach: How can mutually interacting brains teach us something about social interaction? Frontiers in Human Neuroscience.
Oscillatory brain correlates of live joint attention: A dual-EEG study. Frontiers in Human Neuroscience.
Speaker gaze increases information coupling between infant and adult brains. Proceedings of the National Academy of Sciences of the United States of America.
Dual-MEG interbrain synchronization during turn-taking verbal interactions between mothers and children. Cerebral Cortex.
Brains swinging in concert: Cortical phase synchronization while playing guitar. BMC Neuroscience.
Neural correlates of non-verbal social interactions: A dual-EEG study.
Electroencephalographic connectivity: A fundamental guide and checklist for optimal study design and evaluation. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging.
Beyond synchrony: The capacity of fMRI hyperscanning for the study of human social interaction. Social Cognitive and Affective Neuroscience.
Hyperscanning: Simultaneous fMRI during linked social interactions.
Neural synchrony in mother–child conversation: Exploring the role of conversation patterns. Social Cognitive and Affective Neuroscience.
The mirror game as a paradigm for studying the dynamics of two people improvising motion together. Proceedings of the National Academy of Sciences of the United States of America.
FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience.
Cooperation in lovers: An fNIRS-based hyperscanning study. Human Brain Mapping.
Brain-to-brain entrainment: EEG interbrain synchronization while speaking and listening. Scientific Reports.
The present and future use of functional near-infrared spectroscopy (fNIRS) for cognitive neuroscience. Annals of the New York Academy of Sciences.
Modulation of human cortical rolandic rhythms during natural sensorimotor tasks.
Intra- and interbrain synchronization and network properties when playing guitar in duets. Frontiers in Human Neuroscience.
Directionality in hyperbrain networks discriminates between leaders and followers in guitar duets. Frontiers in Human Neuroscience.
Toward a second-person neuroscience. Behavioral and Brain Sciences.
Effect of reference scheme on power and phase of the local field potential. Neural Computation.
The phi complex as a neuromarker of human social coordination. Proceedings of the National Academy of Sciences.
Which reference should we use for EEG and ERP practice? Brain Topography.
A comparative study of different references for EEG spectral mapping: The issue of the neutral reference and the use of the infinity reference. Physiological Measurement.
Interpersonal body and neural synchronization as a marker of implicit social interaction. Scientific Reports.
Amplitude envelope correlations measure synchronous cortical oscillations in performing musicians. Annals of the New York Academy of Sciences.
A practical guide to EEG hyperscanning in joint action research: From motivation to implementation.
Neural signatures of hand kinematics in leaders vs. followers: A dual-MEG study.
Intra-individual behavioural and neural signatures of audience effects and interactions in a mirror-game paradigm. Royal Society Open Science.
© 2024 The Authors. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
The Authors.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited. For a full description of the license, please visit | {"url":"https://direct.mit.edu/imag/article/doi/10.1162/imag_a_00350/124910/Arbitrary-methodological-decisions-skew-inter","timestamp":"2024-11-08T14:41:57Z","content_type":"text/html","content_length":"446032","record_id":"<urn:uuid:05517201-9068-426c-b469-69243a57f1ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00061.warc.gz"} |
doc/UsingIntelMKL.dox - eigen - Git at Google
Copyright (c) 2011, Intel Corporation. All rights reserved.
Copyright (C) 2011 Gael Guennebaud <gael.guennebaud@inria.fr>
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of Intel Corporation nor the names of its contributors may
be used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
* Content : Documentation on the use of Intel MKL through Eigen
namespace Eigen {
/** \page TopicUsingIntelMKL Using Intel® MKL from %Eigen
<!-- \section TopicUsingIntelMKL_Intro Eigen and Intel® Math Kernel Library (Intel® MKL) -->
Since version 3.1, %Eigen users can benefit from built-in Intel® Math Kernel Library (MKL) optimizations with an installed copy of Intel MKL 10.3 (or later).
<a href="http://eigen.tuxfamily.org/Counter/redirect_to_mkl.php"> Intel MKL </a> provides highly optimized multi-threaded mathematical routines for x86-compatible architectures.
Intel MKL is available on Linux, Mac and Windows for both Intel64 and IA32 architectures.
Intel® MKL is proprietary software, and it is the responsibility of users to buy or register for community (free) Intel MKL licenses for their products. Moreover, the license of the user product
has to allow linking to proprietary software that excludes any unmodified versions of the GPL.
Using Intel MKL through %Eigen is easy:
-# define the \c EIGEN_USE_MKL_ALL macro before including any %Eigen's header
-# link your program to MKL libraries (see the <a href="http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/">MKL linking advisor</a>)
-# on a 64-bit system, you must use the LP64 interface (not the ILP64 one)
When doing so, a number of %Eigen's algorithms are silently substituted with calls to Intel MKL routines.
These substitutions apply only for \b Dynamic \b or \b large enough objects with one of the following four standard scalar types: \c float, \c double, \c complex<float>, and \c complex<double>.
Operations on other scalar types or mixing reals and complexes will continue to use the built-in algorithms.
In addition you can choose which parts will be substituted by defining one or multiple of the following macros:
<table class="manual">
<tr><td>\c EIGEN_USE_BLAS </td><td>Enables the use of external BLAS level 2 and 3 routines</td></tr>
<tr class="alt"><td>\c EIGEN_USE_LAPACKE </td><td>Enables the use of external Lapack routines via the <a href="http://www.netlib.org/lapack/lapacke.html">Lapacke</a> C interface to Lapack</td></tr>
<tr><td>\c EIGEN_USE_LAPACKE_STRICT </td><td>Same as \c EIGEN_USE_LAPACKE but algorithms of lower robustness are disabled. \n This currently concerns only JacobiSVD, which otherwise would be replaced by \c gesvd, which is less robust than Jacobi rotations.</td></tr>
<tr class="alt"><td>\c EIGEN_USE_MKL_VML </td><td>Enables the use of Intel VML (vector operations)</td></tr>
<tr><td>\c EIGEN_USE_MKL_ALL </td><td>Defines \c EIGEN_USE_BLAS, \c EIGEN_USE_LAPACKE, and \c EIGEN_USE_MKL_VML </td></tr>
</table>
The \c EIGEN_USE_BLAS and \c EIGEN_USE_LAPACKE* macros can be combined with \c EIGEN_USE_MKL to explicitly tell Eigen that the underlying BLAS/Lapack implementation is Intel MKL.
The main effect is to enable MKL direct call feature (\c MKL_DIRECT_CALL).
This may help to increase performance of some MKL BLAS (?GEMM, ?GEMV, ?TRSM, ?AXPY and ?DOT) and LAPACK (LU, Cholesky and QR) routines for very small matrices.
MKL direct call can be disabled by defining \c EIGEN_MKL_NO_DIRECT_CALL.
Note that the BLAS and LAPACKE backends can be enabled for any F77 compatible BLAS and LAPACK libraries. See this \link TopicUsingBlasLapack page \endlink for the details.
Finally, the PARDISO sparse solver shipped with Intel MKL can be used through the \ref PardisoLU, \ref PardisoLLT and \ref PardisoLDLT classes of the \ref PardisoSupport_Module.
The following table summarizes the list of functions covered by \c EIGEN_USE_MKL_VML:
<table class="manual">
<tr><th>Code example</th><th>MKL routines</th></tr>
</table>
In the examples, v1 and v2 are dense vectors.
\section TopicUsingIntelMKL_Links Links
- Intel MKL can be purchased and downloaded <a href="http://eigen.tuxfamily.org/Counter/redirect_to_mkl.php">here</a>.
- Intel MKL is also bundled with <a href="http://software.intel.com/en-us/articles/intel-composer-xe/">Intel Composer XE</a>.
The basic model of Modelling the Canada lynx and snowshoe hare population cycle: The role of specialist predators (Tyson, et al.) demonstrates logistic growth in the prey, and in the predator (whose carrying capacity depends on prey density). Interestingly, one possibility is limit cycles, which mimic the cycling of the populations in nature.
The differential equation for the population of hare (x) is
x'(t) = r x (1 - x/K) - gamma x^2/(x^2 + eta^2) - alpha y x/(x + mu)
where K is the logistic carrying capacity of the prey (hare) in the absence of predation; the second term is a "generalist predation" term; and the third term is the "specialist predation" (in the limit as the prey population gets big, this becomes simply proportional to y, the lynx population).
The differential equation for the population of lynx (y) is
y'(t) = sy(1- qy/x) = sy - sqy^2/x
for the predator (lynx), which is essentially logistic growth. Its growth term suggests exponential growth, but there is a loss term of the form sqy^2/x -- loss is proportional to population
(crowding), and inversely proportional to prey density. As the hare population goes to zero, so shall the lynx....
As one can see, the predator density won't change if y = x/q. If the prey density were not changing at the same time, the system would be at equilibrium.
In this InsightMaker model, I scaled the second equation by multiplying by q, then replace y by w=qy throughout both equations. This requires a slight change in the prey equation -- alpha replaced by
the ratio of alpha/q. (I used my favorite mathematical trick, of multiplying by the appropriate form of 1!)
So what we're really looking at here is the system
x'(t) = r x (1 - x/K) - gamma x^2/(x^2 + eta^2) - (alpha/q) w x/(x + mu)
w'(t) = s w (1 - w/x)
where w(t) = q y(t).
Tyson, et al. took q to be about 212 for hare and lynx -- so that it requires about 212 hare to allow for one lynx to survive at "equilibrium".
However, when alpha -- the hares/lynx/year -- gets sufficiently large (e.g. 1867 -- and that does seem like a lot of hares per lynx per year...:), limit cycles develop (rather than a stable
equilibrium). This means that the populations oscillate about the equilibrium values, rather than stabilize at those values.
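To see these dynamics concretely, here is a minimal forward-Euler sketch of the scaled system in Python. All parameter values below are illustrative placeholders, not the fitted values from Tyson et al.:

```python
import numpy as np

def simulate(r=0.5, K=1000.0, gamma=10.0, eta=100.0, alpha=600.0,
             q=212.0, s=0.1, mu=50.0, x0=400.0, w0=50.0,
             dt=0.01, years=200.0):
    """Forward-Euler integration of the scaled hare-lynx system:
        x' = r x (1 - x/K) - gamma x^2/(x^2 + eta^2) - (alpha/q) w x/(x + mu)
        w' = s w (1 - w/x)
    Parameter values are illustrative placeholders only."""
    n = int(years / dt)
    xs = np.empty(n)
    ws = np.empty(n)
    x, w = x0, w0
    for i in range(n):
        dx = r*x*(1 - x/K) - gamma*x*x/(x*x + eta*eta) - (alpha/q)*w*x/(x + mu)
        dw = s*w*(1 - w/x)
        x = max(x + dt*dx, 1e-9)   # crude floor keeps populations positive
        w = max(w + dt*dw, 1e-9)
        xs[i] = x
        ws[i] = w
    return xs, ws
```

Plotting xs against ws reveals whether the trajectory spirals into the coexistence point or winds onto a closed orbit (a limit cycle).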
Author: Andy Long, Northern Kentucky University (2020)
Reference: Tyson, Rebecca, Sheena Haines, and Karen Hodges. "Modelling the Canada lynx and snowshoe hare population cycle: The role of specialist predators." Theoretical Ecology, 97–111 (2010).
which allows one to experiment a little more easily than one can with this InsightMaker model.
Going Fast on Your Own Power – Accelerating
I previously looked at the amount of instantaneous power it would take to go 100 mph in the worlds best faired-recumbent bicycle (velomobile). It takes approximately 1,000 watts, an achievable power
output for a fit cyclist. So why hasn’t anyone gone 100 mph yet? The answer appears to be acceleration.
Power Again
Previously I noted it takes power to go fast here on Earth.
On a bicycle like Eta you have to accelerate a mass (you and the bike) to 100 mph, while friction of the tires on the road, and air on the vehicle both hold you back. I created an equation for power
that looks like this:
$$P_{total} = \frac{P_{R}+P_{D}}{\eta}$$
In English: The total power to maintain a speed is given by the power to overcome rolling resistance and the power to overcome aerodynamic drag, all divided by the efficiency of power input to push you forward.
This ignores the power to accelerate \(P_{A}\). Power to accelerate is equal to current velocity, multiplied by mass, multiplied by the rate of acceleration:
Accounting for acceleration, total power becomes:
$$P_{total} = \frac{P_{R}+P_{D}+P_{A}}{\eta}$$
Plugging in the following equations:
$$P_{R}=vmgC_{rr}$$
$$P_{D}=\frac{1}{2}\rho v^{3}AC_{D}$$
$$P_{A} = vma$$
The final power equation becomes:
$$P_{total} = \frac{v(mg*C_{rr}+\frac{1}{2}\rho v^{2}AC_{D}+ma)}{\eta}$$
I’ll fill in all of the static variables now for a vehicle similar to Aerovelo Eta (leaving out Coefficient of Rolling Resistance):
Mass \(m\) = 100 \(kg\)
Gravity of Earth \(g\) = 9.81 \(\frac{m}{s^2}\)
Air Density \(\rho\) = 1.07 \(\frac{kg}{m^3}\)
Vehicle Frontal Area \(A\) = 0.4 \(m^2\)
Drag Coefficient \(C_{D}\) = 0.04
Drivetrain Efficiency \(\eta\) = 0.97
With these plugged in:
$$P_{total} = \frac{v(100*9.81*C_{rr}+\frac{1}{2}*1.07 v^{2}*.4*.04+100*a)}{.97}$$
$$P_{total} = \frac{981C_{rr}v+.00856v^{3}+100av}{.97}$$
$$P_{total} = 1011.34C_{rr}v+.00882v^{3}+103.1av$$
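As a sanity check, the reduced formula drops straight into code. This sketch treats \(C_{rr}\) as a constant you pass in, whereas the post fits it as a function of speed (that fit comes next):

```python
def total_power(v, a=0.0, crr=0.004):
    """Total rider input power (watts) for the Eta-like vehicle above.

    v   : speed in m/s
    a   : acceleration in m/s^2
    crr : rolling-resistance coefficient (a placeholder constant here)
    """
    rolling = 1011.34 * crr * v   # 981 * Crr * v / 0.97
    drag = 0.00882 * v ** 3       # aerodynamic term
    accel = 103.1 * a * v         # 100 * a * v / 0.97
    return rolling + drag + accel
```

At 40 m/s (just under 89.5 mph) with no acceleration, the drag term alone is already above 560 W.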
The only remaining variables are acceleration \(a\), velocity \(v\), and coefficient of rolling resistance \(C_{rr}\). I kept power as a sum of the three components \(P_{R},P_{D},P_{A}\) in order.
\(C_{rr}\) is also a function of velocity. So we’ll have a brief interlude here to solve that and then get back to power. Power will then be a function solely of acceleration and velocity!
Coefficient of Rolling Resistance (an aside)
On my last post about going fast on a bicycle I shared this:
Which shows how \(C_{rr}\) increases with velocity. I want to account for this changing coefficient with velocity in my power equation, so I need to make \(C_{rr}\) a function of velocity. The
trouble is, these curves do not have defined functions associated with them. I’ll need to fit a function or spline to them and extrapolate out to 100 mph.
I took some points along the Schwalbe one curve and made this linear interpolation between the points:
I then extrapolated with a decreasing derivative out to 50 m/s:
Finally I fit a smooth PCHIP interpolation between all of the points. The final result:
Everything to the right of the dashed black line is extrapolated. The final Crr is now defined for velocities up to 50 m/s (112 mph). Let's get back to power.
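As an aside, the same kind of fit is easy to reproduce with SciPy's PchipInterpolator, which is shape-preserving (it won't oscillate between monotone data points). The anchor values below are hypothetical placeholders; the post's actual points were read off the tire chart, which isn't reproduced here:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical (speed m/s, Crr) anchor points, monotone increasing
v_pts = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
crr_pts = np.array([0.0030, 0.0040, 0.0052, 0.0062, 0.0070, 0.0076])

# Shape-preserving cubic interpolant: Crr is now defined for any
# speed in [0, 50] m/s
crr_of_v = PchipInterpolator(v_pts, crr_pts)
```

Because PCHIP preserves monotonicity, the interpolated Crr never dips below or spikes above the neighboring anchor values.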
Final Power Formula (with acceleration)
I previously wrote:
$$P_{total} = 1011.34C_{rr}v+.00882v^{3}+103.1av$$
which encompasses rolling resistance, aerodynamic drag, and acceleration. Now that \(C_{rr}\) is defined as a function of only velocity, total power has become a function of only velocity and acceleration.
Solving for Power
If you are not accelerating \(a=0\) power becomes solely a function of velocity. This is a combination of aerodynamic drag and rolling resisitance:
$$P_{total} = 1011.34C_{rr}v+.00882v^{3}$$
Here is what that looks like:
The blue line intersects the world record (89.5 mph) at 845 watts.
Accelerating to 89.5 mph
An 80th percentile male cyclist weighing 75kgs can put out 845 watts for about 28 seconds. A 95th percentile male cyclist of the same weight can do it for 38 seconds. Here are two empirically driven
curves showing how much power a fresh cyclist can output over duration (on an upright bicycle).
I don't want to count on a top 5% capable cyclist, so I'll use a top 20% (or 80th percentile). Every time I refer to a cyclist from here on out, I'll be referring to an 80th percentile 75kg male cyclist.
The power over duration plot above assumes the cyclist has not accumulated any fatigue; unfortunately that is not the case for a rider attempting to reach 89.5 mph. So I'll come up with a way to account for it.
Accounting for Fatigue
The human factors portion (fatigue) is something I am uneducated about. I am going to take a shot at making a simple model here armed with some experimental data. Here are my assumptions:
1. Fatigue will be measured from 0 to 100%. Where 100% is fully fatigued and cannot power the bike anymore.
2. Fatigue is cumulative and does not diminish over the ride.
3. Fatigue is a function of power and duration.
I'll define the fatigue \(F\) as a function of power \(P\) and duration \(T\):
$$F = \frac{T}{T_{max}(P)}$$
Duration \(T_{max}\) comes from the below curve, which is a function of power.
An example of solving for fatigue looks like this:
If a cyclist outputs \(P=1,148\) watts for \(T=1\) second how much Fatigue \(F\) have they developed?
From the above curve, \(T_{max}(1147.5) = 5\) and \(\frac{T}{T_{max}}=\frac{1}{5} = 0.2\) The cyclist is 20% fatigued.
This simplistic fatigue model will prevent me from setting unrealistic energy or power-duration outputs from the cyclist as they accelerate towards the world record.
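In code, this model is just an accumulator of \(T/T_{max}\) increments. The \(T_{max}\) curve below is a stand-in: an assumed cubic fall-off anchored only at the post's single example point, \(T_{max}(1147.5\,W) = 5\,s\), not the empirical power-duration curve:

```python
def t_max(power):
    """Assumed power-duration curve: cubic fall-off anchored at
    T_max(1147.5 W) = 5 s. A placeholder for the empirical curve."""
    return 5.0 * (1147.5 / power) ** 3

def fatigue(power_profile, dt=1.0):
    """Accumulated fatigue (1.0 = fully fatigued) for a sequence of
    power outputs sampled every dt seconds."""
    f = 0.0
    for p in power_profile:
        if p > 0:
            f += dt / t_max(p)
    return f
```

Holding 1147.5 W for 5 seconds yields a fatigue of exactly 1.0, consistent with the worked example above.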
Back to Accelerating
Armed with this fatigue model, I can build up a record attempt acceleration “run” without exceeding the capabilities of a cyclist.
Here is an example in which the cyclist maintains a power output (orange) curve over a 600 second acceleration run. Note that fatigue (blue) does not exceed 100%.
So would this attempt accelerate the cyclist to 100 mph? No. This effort only accelerates the cyclist to 66.4 mph before the rider is fully fatigued. See below, where I have added velocity to the plot.
Does the buck stop here? Is our cyclist doomed to only ever reach 66.4 mph?
No, but it does showcase the importance of acceleration and managing fatigue. Up until now, I had not shown quantitatively just how fatiguing the ride up to high speed would be.
Another interesting plot to look at over the run is what the cyclist power is doing over time. For instance, the cyclist may primarily be overcoming drag or rolling resistance for a large duration of
the run. If they’re not accelerating they are accumulating fatigue for no benefit.
Given the power and velocity curves from the run above, here is what that power would be doing at each moment in time:
Notice that most of the power at the beginning of the run is accelerating the cyclist. As their velocity increases, so do aerodynamic drag and rolling resistance, and the accelerating power drops.
There must be an optimal way, given all vehicle and cyclist dependent parameters (weight, drag coefficient, fatigue curve, etc.) to accelerate to a maximum speed before fully fatigued.
For instance, this power and fatigue curve:
Results in a greater final speed:
And is therefore a more efficient way to output power over the duration of an attempt:
Without casting this as an optimization problem and programmatically searching for the optimal run, a maximum speed of 67.6 mph is achievable. This is despite the fact that this cyclist/vehicle combination could easily maintain 100 mph (for a few seconds) as found in the last post.
Accounting for fatigue during acceleration is a major factor in finding maximum speed achievable.
In the next post I hope to wrap up some of the theory on all of this and state a comprehensive list of variables to address that would allow a cyclist to reach high speed.
What is screenPos.w ?
I'm a noobie at shader programming and am trying to learn to create an intersection highlight effect, but I can't understand some of the concepts and unfortunately, no one has answered these questions before (or I couldn't find an answer).
• so does anybody know what exactly stored in screenPos.w at following code part?
• and if an object's distance to the viewer position is 20, will LinearEyeDepth return 20?
Unity’s shaders are available online as a separate download. Get them and then you can find the ComputeScreenPos function and see what it’s storing in w.
Having them is very valuable if you are going to be making a lot of your own shaders.
Get the ones for your version of unity.
How can I see what ComputeScreenPos is storing in w?
Unity uses the 4th component of a float4 for various things. Sometimes it's to detect what kind of light it is, other times it's depth etc, depends. In this case it looks like it's the w component of the clip-space position.
Not sure if it’s more or less confusing but I think those are in a way kind of all the same things. The matrices that do graphics transforms are done in such a way that the w coordinate distinguishes
vectors and positions (which I think partially explains the lighting use) by their w value and post transform leaves a value in w that can be used for perspective divide.
but what exactly stored in at this particular code, what ComputeScreenPos store in w?
ComputeScreenPos doesn't actually do anything to the .w, or even the .z components. Those are just the clip space z and w values that are passed in and passed out straight.
So the question is what are those values? Well, z is the clip space depth, which is a range of 0.0 to w for Direct3D (and most other graphics APIs) and -w to w in OpenGL. So what’s w? It’s the world
scaled view depth.
That's it. It's a quirk of projection matrices and the resulting clip space that the w is just the view space depth. If you have a quad as a child object of an unscaled camera game object, the transform's z position shown in the inspector will match the w of the in shader clip position.
Why? That gets into the whole perspective divide stuff and 4 dimensional spaces, so we'll ignore that for now.
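The "clip w equals view depth" fact is easy to sanity-check numerically. Here's a minimal NumPy sketch using an OpenGL-style projection matrix (an assumption for illustration; the matrices Unity builds per graphics API differ in details, but the w behavior is the same):

```python
import numpy as np

def perspective(fov_y_deg=60.0, aspect=16 / 9, near=0.3, far=1000.0):
    """OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0  # this row is what copies view depth into clip.w
    return m

# A point 20 units in front of the camera (OpenGL view space looks down -z)
p_view = np.array([1.0, 2.0, -20.0, 1.0])
clip = perspective() @ p_view
# clip[3] (the w component) equals 20, the view depth; after the
# perspective divide (clip / clip[3]) the w component becomes 1.0
```
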
thank you so much for your useful and detailed reply. So can I say w is the distance from the camera? Or is it the depth between the camera and the vertex?
and my other question is what LinearEyeDepth return? depth again between viewer and objects with world unit but not very precise?
It’s depth, not distance, from camera origin to vertex, assuming a perspective projection matrix that’s focused on the camera’s origin, which most perspective projection matrices are calculated to
be. And certainly the Unity camera’s perspective projection matrix is unless manually overridden.
LinearEyeDepth takes the depth from the non-linear depth buffer and converts it to linear eye depth, which is roughly identical to the clip space w in precision. It’s also the same value you get from
the COMPUTE_EYEDEPTH macro when in the vertex function, though that’s calculated by applying the world and view matrices to the object space vertex position.
So why all of the different versions of this?
When using anything but the usual projection matrix, w component of the clip space position is not the view/eye depth. In fact when it’s an orthographic camera it’s a constant value. So if you need
the linear depth for something and want to make sure your shader supports all projection matrix types, you can’t rely on the w being the depth. Plus, you often want to know the depth of a vertex in
the fragment shader prior to calculating the clip space position. So that’s where COMPUTE_EYEDEPTH comes in.
What about LinearEyeDepth? That’s taking the value from the non-linear depth buffer and converting it into linear depth as I mentioned above. This is often the depth from the camera depth texture
rather than some value interpolated from the current shader's vertex stage. But you could get the depth buffer value for a mesh from the input.pos.z value in the above shader, as that's what that value holds in the fragment shader. But that's extra math, including a divide that can be costly on some hardware. So it's often better passed separately. Plus the conversion from non-linear depth
buffer to linear depth buffer is actually broken in Unity if you’re using something other than a default perspective projection matrix since that function assumes some stuff (mainly that you are
using a default projection matrix) so it can simplify the math.
But wait, I said you can access the input.pos, which is the same as the output.pos from the vertex function, which means the input.pos.w holds the linear depth! Right? Nope. Even assuming a
perspective matrix this isn’t the case. The value you output from the vertex function is in clip space, but that same variable input into the fragment shader has been transformed into window space by
the GPU. The x and y are now the screen pixel position, z is the non-linear depth buffer value (which has a 0.0 to 1.0 range for all graphics APIs; remember I said clip space is 0.0 to w on some, and -w to w on others), and the w is 1.0. Yes, just 1.0, always. That's because it's the w component after the GPU applied the perspective divide, which is dividing the whole float4 value by the w, and any
non zero number divided by itself is 1.0.
Now let’s look at why specifically the ComputeScreenPos passes the w value. This is so it can apply the perspective divide to the interpolated position. When using a non-orthographic projection, a
screen space interpolated float3 position won’t interpolate the way you’d expect in 3D space. You’ll get odd warping and stretching of the value between the vertices as they’re being interpolated
linearly in 2D screen space, not in 3D view space. Remember all that weird texture warping from old PS1 games? It's that. Using a perspective divide lets you interpolate a float4 value and correct the "2D" linear interpolation to take the perspective into account, making the values work as you would expect them to. So really that's all it's there for.
So, the shader example above is making use of the w component as an optimization, by assuming the use of a perspective matrix, they know that value used to correct for perspective interpolation is
also the view depth.
@bgolus , i think i am going to collect all your posts and put them in a book
Really appreciate the time you take to answer all these questions.
I hope you edit them for grammatical errors.
@bgolus you are awesome, thanks for all this information.
The Ultimate Step by Step Guide to Preparing for the PARCC Math Test
Original price was: $24.99. Current price is: $14.99.
PARCC Grade 6 Math for Beginners The Ultimate Step by Step Guide to Preparing for the PARCC Math Test
The Most Comprehensive Study Guide for the PARCC Grade 6 Math Test!
PARCC Grade 6 Math for Beginners is the perfect study guide for students who aspire to excel on the test and enhance their math skills, whether they prefer independent study or classroom instruction.
With this guide, you can kickstart your student’s preparation and set them on the path to success on the PARCC Grade 6 Math Test.
PARCC Grade 6 Math for Beginners is an all-in-one study guide that equips your student with the tools they need to excel on the test. Here are some key features of this resource:
• Comprehensive coverage of all PARCC Math topics on the 2023 test: With detailed study guides, clear explanations, helpful examples, and practice tests for every single topic on the grade 6 test,
this guide provides complete coverage of all PARCC Math topics.
• Multiple practice questions in different formats: To help your student gain a complete understanding of each topic and develop their problem-solving skills, this guide includes multiple practice
questions in different formats, such as fill-in-the-blank, free response, and multiple-choice questions.
• Realistic and full-length practice tests: This guide includes two realistic and full-length practice tests that closely simulate the actual PARCC Math test. These tests come complete with
detailed answers, giving your student the opportunity to practice under realistic test conditions and build confidence.
• Clear and in-depth explanations for each math subject: With step-by-step explanations, helpful tips, and practical examples, this guide provides clear and in-depth explanations for each math
subject, ensuring your student has a solid grasp of every topic.
• Additional online math practice: Students can supplement their studies with additional online math practice at www.EffortlessMath.com, a free resource that offers additional practice problems and resources.
PARCC Grade 6 Math for Beginners is an ideal study guide for students who want to excel on the test and improve their math skills, whether they prefer self-study or classroom usage. Get started today
and help your 6th-grade student achieve success on the PARCC Grade 6 Math Test!
Quantum walks on graphs
We set the ground for a theory of quantum walks on graphs, the generalization of random walks on finite graphs to the quantum world. Such quantum walks do not converge to any stationary distribution, as they are unitary and reversible. However, by suitably relaxing the definition, we can obtain a measure of how fast the quantum walk spreads or how confined it stays in a small neighborhood. We give definitions of mixing time, filling time, and dispersion time. We show that in all these measures, the quantum walk on the cycle is almost quadratically faster than its classical counterpart. On the other hand, we give a lower bound on the possible speed-up by quantum walks for general graphs, showing that quantum walks can be at most polynomially faster than their classical counterparts.
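For intuition, the discrete-time coined walk on a cycle takes only a few lines of NumPy. This is a minimal illustration of the model, not the paper's exact construction:

```python
import numpy as np

def quantum_walk_cycle(n=8, steps=20):
    """Hadamard-coined discrete-time quantum walk on an n-cycle,
    starting at node 0 with a balanced coin. Returns the position
    probability distribution after `steps` steps."""
    state = np.zeros((2, n), dtype=complex)   # state[c, x]: coin c, node x
    state[:, 0] = np.array([1.0, 1.0j]) / np.sqrt(2.0)
    hadamard = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    for _ in range(steps):
        state = hadamard @ state              # flip the coin
        state[0] = np.roll(state[0], 1)       # coin 0 steps clockwise
        state[1] = np.roll(state[1], -1)      # coin 1 steps counterclockwise
    return (np.abs(state) ** 2).sum(axis=0)
```

Because each step is unitary, the resulting distribution keeps oscillating rather than converging, which is exactly why relaxed definitions of mixing are needed.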
Degrees Fahrenheit to Kelvin (EN) | Convertipro
Fahrenheit to Kelvin Converter: Usage, Formulas, and Origin
The Fahrenheit to Kelvin converter is a handy tool for performing temperature conversions between these two commonly used units of measurement. In this article, we will explore how to use the
converter, the mathematical formulas used for conversions, and delve into the origin of the Fahrenheit unit of measurement.
How the Fahrenheit to Kelvin Converter Works:
The Fahrenheit to Kelvin converter employs simple mathematical formulas to perform conversions between these temperature scales. Here are the conversion formulas used:
Conversion from Fahrenheit to Kelvin: Kelvin = (Fahrenheit + 459.67) * (5 / 9)
Conversion from Kelvin to Fahrenheit: Fahrenheit = Kelvin * (9 / 5) - 459.67
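The two formulas translate directly into code. A minimal Python version (the function names are my own, not part of any converter library):

```python
def fahrenheit_to_kelvin(f):
    """Kelvin = (Fahrenheit + 459.67) * 5/9"""
    return (f + 459.67) * 5.0 / 9.0

def kelvin_to_fahrenheit(k):
    """Fahrenheit = Kelvin * 9/5 - 459.67"""
    return k * 9.0 / 5.0 - 459.67
```

For instance, fahrenheit_to_kelvin(68) gives 293.15 K and kelvin_to_fahrenheit(300) gives 80.33 °F, and the two functions are exact inverses of each other.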
Using the Fahrenheit to Kelvin Converter:
The Fahrenheit to Kelvin converter is easy to use. Follow these steps to perform a conversion:
Step 1: Enter the number of Fahrenheit degrees you want to convert in the provided area.
Step 2: The converter will automatically perform the calculation and display the result in Kelvin just below the Fahrenheit input field.
Step 3: If you want to perform a conversion from Kelvin to Fahrenheit, enter the Kelvin value in the dedicated second input field.
Step 4: The result in Fahrenheit will be automatically displayed below the Kelvin input field.
Example: Convert 68 degrees Fahrenheit to Kelvin and 300 Kelvin to degrees Fahrenheit.
(68 + 459.67) * (5 / 9) ≈ 293.15 Kelvin (rounded to two decimal places)
300 * (9 / 5) - 459.67 ≈ 80.33 degrees Fahrenheit (rounded to two decimal places)
68 degrees Fahrenheit is approximately equivalent to 293.15 Kelvin (rounded to two decimal places).
300 Kelvin is approximately equivalent to 80.33 degrees Fahrenheit (rounded to two decimal places).
Note on the Origin of the Fahrenheit Unit of Measurement:
The Fahrenheit temperature scale was invented by Polish physicist and engineer Daniel Gabriel Fahrenheit in 1724. Fahrenheit selected two reference points to define his scale: 0 degrees Fahrenheit
corresponded to the temperature of the eutectic mixture of ice and ammonium chloride, and 96 degrees Fahrenheit corresponded to the temperature of the human body (normal blood temperature).
The Fahrenheit scale is primarily used in the United States and a few other English-speaking countries, while most other countries use the Celsius (or Kelvin) temperature scale for temperature
The Fahrenheit to Kelvin converter is a convenient tool for temperature conversions between these two units of measurement. Using the simple conversion formulas presented in this article, you can
easily perform accurate temperature conversions for various applications. The Fahrenheit scale, invented by Daniel Gabriel Fahrenheit, remains in use in some countries, although the Celsius (and
Kelvin) scales are more widely used internationally. | {"url":"https://www.convertipro.com/en/degrees-fahrenheit-to-kelvin-en","timestamp":"2024-11-05T00:15:43Z","content_type":"text/html","content_length":"168298","record_id":"<urn:uuid:8bee5144-4fed-40c7-9599-13935f97f469>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00490.warc.gz"} |
A tower AB leans towards the west, making an angle α with the vertical. The angular elevation of B, the topmost point of the tower, is β as observed from a point C due east of A at a distance d from A.
If the angular elevation of B from a point D due east of C at a distance 2d from C is γ, then 2 tan α can be given as
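For this configuration the standard result is 2 tan α = 3 cot β - cot γ, which can be checked numerically with a short script (the coordinate setup and sample values are my own choices):

```python
import math

def leaning_tower(alpha, d, L):
    """Tower AB of length L leans west at angle alpha from the vertical.
    A is at the origin and east is the +x direction, so the top is
    B = (-L*sin(alpha), L*cos(alpha)).  C = (d, 0) and D = (3d, 0) are
    the observation points (D is 2d due east of C, hence 3d from A)."""
    bx, by = -L * math.sin(alpha), L * math.cos(alpha)
    beta = math.atan2(by, d - bx)        # elevation of B seen from C
    gamma = math.atan2(by, 3 * d - bx)   # elevation of B seen from D
    return beta, gamma

alpha, d, L = 0.2, 5.0, 10.0
beta, gamma = leaning_tower(alpha, d, L)
lhs = 2 * math.tan(alpha)
rhs = 3 / math.tan(beta) - 1 / math.tan(gamma)
print(abs(lhs - rhs) < 1e-9)  # True
```

The identity follows because cot β = (d + L sin α)/(L cos α) and cot γ = (3d + L sin α)/(L cos α), so 3 cot β - cot γ = 2 L sin α / (L cos α) = 2 tan α.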
Updated on:21/07/2023
Knowledge Check
• A tower AB leans towards the west, making an angle α with the vertical. The angular elevation of B, the topmost point of the tower, is β as observed from a point C due east of A at a distance d from
A. If the angular elevation of B from a point D at a distance 2d due east of C is γ, then prove that 2 tan α = 3 cot β - cot γ
• A tower leans towards the west, making an angle α with the vertical. The angular elevation of B, the topmost point of the tower, is 75∘ as observed from a point C due east of A at a distance of 20
units. If the angular elevation of B from a point due east of C at a distance of 20 units from C is 45∘, then tan α is equal to
• A tower AB leans towards the west, making an angle α with the vertical. The angular elevation of B, the topmost point of the tower, is 60∘ as observed from a point C due east of A at a distance of 10 ft
from A. If the angular elevation of B from a point D due east of C at a distance of 20 ft from C is 45∘, then the value of 2 tan α is equal to | {"url":"https://www.doubtnut.com/qna/649488433","timestamp":"2024-11-07T06:31:05Z","content_type":"text/html","content_length":"285081","record_id":"<urn:uuid:6cd05c3b-c5d0-45ea-8ffd-c9759cfee03b>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00600.warc.gz"} |
LaTeX vertical dots matrix
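A generic m x n matrix written with all three kinds of dots can be typeset as follows (a standard construction using amsmath's bmatrix environment, not taken from any one snippet below):

```latex
\[
A =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\]
```

Here \cdots fills along a row, \vdots fills down a column, and \ddots fills along the diagonal.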
To define dots in LaTeX, use:
– \ldots for horizontal dots on the line
– \cdots for horizontal dots above the line (the dots are raised to the center of the line)
– \vdots for vertical dots
– \ddots for diagonal dots

Use vertical dots for a column matrix, where the elements are located along a single column:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
$$\begin{bmatrix} a_{1} \\ a_{2} \\ \vdots \\ a_{n} \end{bmatrix}$$
\end{document}

When writing down arbitrary sized matrices, it is common to use horizontal, vertical, and diagonal triplets of dots (known as ellipses) to fill in certain columns and rows. For example, a covariance matrix:

$$ \Sigma=\left[ \begin{array}{ccc} \sigma_{11} & \cdots & \sigma_{1n} \\ \vdots & \ddots & \vdots \\ \sigma_{n1} & \cdots & \sigma_{nn} \end{array} \right] $$

To partition a matrix into a 2x2 block form, use the {array} environment instead of the matrix primitives, because it allows vertical and horizontal separators:

\left[ \begin{array}{c|c} A & B \\ \hline O & C \end{array} \right]

Just use array to make the matrix and insert a vertical bar between the columns where you want one; \hline gives a horizontal separator. Note that this works exactly like a tabular environment. Matrices in LaTeX can also be written inside the equation environment with the matrix environment: the cells of a row are separated by the ampersand (&) and new rows are created with the double backslash (\\).

Some terminology: the horizontal arrays of a matrix are known as rows and the vertical arrays as columns, and a matrix with m rows and n columns is an m x n matrix. A matrix with non-zero entries only on the diagonal is called "diagonal", and a diagonal matrix whose non-zero entries are all 1's is called an "identity" matrix, for reasons which become clear when you learn how to multiply matrices; there is one identity matrix of each size. Ordinary multiplication of numbers is commutative (3 * 4 = 12 and 4 * 3 = 12, so a * b = b * a), but matrices are not guaranteed to be the same if the order of multiplication is switched, so matrix multiplication is non-commutative.

The AMS dot symbols are named according to their intended usage: \dotsb between pairs of binary operators/relations, \dotsc between pairs of commas, \dotsi between pairs of integrals, \dotsm between pairs of multiplication signs, and \dotso between other symbol pairs.

Other useful commands: \frac{num}{den} produces the fraction num divided by den; \sqrt{expression} produces a square root; \cdot makes a multiplication dot and \times makes the multiplication cross; \sin, \cos, \tan, \arcsin, \ln, \log, \exp and the hyperbolic functions typeset the standard function names; \Longrightarrow gives the "implies" arrow; \leq (synonym \le) gives "less than or equal to". For a partial derivative, write \frac{\partial v}{\partial t} (you can omit \frac if you don't want a vertical fraction). For the norm of a vector, use \lVert and \rVert from amsmath: \lVert\mathbf{a}\rVert. To get a math-mode-only symbol such as \leadsto (a squiggly right arrow) outside of math mode, put \newcommand*{\Leadsto}{\ensuremath{\leadsto}} in the preamble and then use \Leadsto instead.

On displayed equations: it is often recommended to use the pair \[ and \] to enclose equations rather than the $$ markers, because the newer syntax checks for mistyped equations and better adjusts vertical spacing. A LaTeX document is typeset one paragraph at a time, and paragraphs are broken into lines of equal width; if a line is too wide to be broken, the message "overfull \hbox" is shown and the offending line is marked with a slug (a black box), a vertical bar of width \overfullrule.

For tables, four simple rules: 1. never use vertical lines; 2. avoid double lines; 3. place the units in the heading of the table (instead of the body); 4. do not use quotation marks to repeat the content of cells.

Text with two columns can be created by passing the parameter \twocolumn to the document class statement; for a document with more than two columns, use the multicol package, which has a set of commands for the purpose.

Outside LaTeX proper, matrices can also be entered in equation editors. To create a 3x3 matrix equation in the Microsoft math-zone LaTeX format, type A=\{\matrix{a&b&c\\d&e&f\\g&h&j}\} into a math zone. In Word's matrix editor, enter the numbers from left to right and top to bottom, following each one with Tab to move to the next cell (for example 2 Tab 1 Tab 5 Tab -1 Tab 3 Tab 7), then click OK once the matrix is configured the way you want. Matrices can also be pasted in as a tab-delimited glob of text. | {"url":"http://swapptechnology.in/jb2y/latex-vertical-dots-matrix.php","timestamp":"2024-11-05T19:50:30Z","content_type":"text/html","content_length":"18783","record_id":"<urn:uuid:bdd9a140-6695-4bcc-9fc1-9509b995ed84>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00634.warc.gz"} |
Best Data Science Online Courses, Training with Certification-2022 Updated - Top 10 Online Courses
Data science skills are in huge demand in the market nowadays. If you are looking for a good career in data science and want to become an expert, this is the best place to
select the right course. These courses cover topics such as the basics of data science; regression techniques like linear and multiple regression; and software libraries such as NumPy,
TensorFlow, pandas, and Matplotlib. Our expert panel has handpicked some of the best data science online courses for you, listed
below. We hope you will go through them.
The Data Scientist Nanodegree online course is offered by Udacity with a team of instructors including Josh Bernhard, Juno Lee, Luis Serrano, and Andrew Paster. Students learn how to build supervised
and unsupervised machine learning models and gain knowledge of PyTorch, deep learning, and more.
Term I: In this term the instructors cover machine learning for data science through three projects that help students improve their skills.
• Find Donors for CharityML with Kaggle is the first project; here you learn topics such as perceptron algorithms, regression, and decision trees.
• Create an Image Classifier is the second project, in which the instructors explain training neural networks, Keras, the implementation of gradient descent, and so on.
• Creating Customer Segments is the third project, covering principal component analysis, Gaussian mixture models, clustering (hierarchical and density-based), random projections, and
independent component analysis.
Term II: In this term students gain knowledge of applied data science through four projects that cover its core concepts.
• Write a Data Science Blog Post is the first project of Term II; here you learn about the data science process, communication with stakeholders, and more.
• Build Pipelines to Classify Messages with Figure Eight is the second project, covering concepts such as web development and OOP.
• From data engineering for data science: machine learning pipelines, NLP, and more.
• Design a Recommendation Engine with IBM is the sixth project overall, and the Data Science Capstone is the seventh.
Rating: 4.2 Out of 5
You can Signup here <=> ClickHere
Roger D. Peng is the creator of this course, the Data Science Specialization. He is a professor in the field of Biostatistics who trains students with simple logic and in an easy
way, covering all the concepts and skills you need in this course. After completing the course you will receive a completion certificate with your name, and you can ask questions
without hesitation. You will understand how to use R to clean, visualize, and analyze data, and also learn how to use GitHub to manage data science projects.
Key points:
• You learn about the tools required in data science, including Git, GitHub, RStudio, R, and R Markdown, and get an idea of how to install the toolbox.
• They teach you what R programming is, why it is used, and other conceptual issues in data science.
• You learn how to get data from APIs, the web, and databases, and how to clean it.
• The course covers data science topics such as processing raw data into processed data and creating codebooks.
• You learn exploratory data analysis for examining data, and how reproducible research is used to report modern data analysis.
• At the end of the course you learn further concepts involved in data science, such as regression models, practical machine learning algorithms, and the data science capstone.
Rating: 4.5 out of 5
You can Signup here <=> ClickHere
The IBM Data Science Professional Certificate is offered by IBM and is very useful for anyone who wants to move their career towards data science and machine learning. The professional
certificate contains 9 courses: the 1st covers what data science is, the 2nd open source tools for data science, the 3rd data science
methodology, the 4th Python for data science, the 5th databases and SQL, the 6th data analysis with Python, the 7th
data visualization with Python, the 8th machine learning with Python, and the last course the Applied Data Science Capstone. You can start enrolling, as
enrollment has already begun.
Key points:
• Of the 9 courses in this professional certificate, the 1st introduces data science, and the 2nd covers open source tools, how to use them, and their features.
• The 3rd course covers the methodology used in data science, and the 4th teaches Python basics, Python data structures, and Python fundamentals.
• In the 5th course you learn about SQL and databases, which are a must for becoming a data scientist, and in the 6th course how to analyze data with Python.
• The 7th course helps you learn data visualization using Python libraries such as Folium and Matplotlib, and the 8th covers machine learning topics such as model evaluation, machine
learning algorithms, and supervised vs. unsupervised learning.
• In the last course you gain knowledge of the Applied Data Science Capstone, including topics such as working with different location data providers.
Rating: 4.6 Out of 5
You can Signup here <=> ClickHere
The Edureka Data Science Masters Program helps you become an expert in data science and build the skills you will need in the future. The course introduces data
science, statistics, Scala, regression techniques, AI, Python, Tableau, and TensorFlow, and covers all the topics of data science in a very understandable
way. After completing the course you receive a certificate with your name. Many top industries, in healthcare, banking, and other sectors, use data analytics to analyze their data.
Key points:
• You learn how to present data using different techniques and software libraries such as Matplotlib, NumPy, and pandas.
• They train you in concepts such as sequences, machine learning, Python, big data analytics, and scripting.
• The course helps you optimize basic and convolutional neural networks.
• You gain knowledge of TensorFlow, autoencoder neural networks, Keras, and RBMs.
• You learn how to use Tableau and create interactive dashboards, which helps with the certificate-level exam.
• The course helps you become an expert in linear regression, multiple regression, clustering, and Bayesian analysis using Python.
• You also learn about data visualization, visual analytics, dashboards, mapping, calculations, charts, integrating Tableau with Hadoop and R, clustering, deep learning, neural networks,
convolutional networks, and more.
Rating: 4.6 out of 5
You can Signup here <=> ClickHere
==> Check Out Latest Offers/Deals/Coupons :: Clickhere
Kevyn Collins-Thompson and Daniel Romero are the creators of this course, the Applied Data Science with Python Specialization. They are assistant professors who explain the course with
simple logic, providing the skills you need for your future. They cover topics including machine learning, visualization, software libraries, neural networks, and various
toolkits, and provide a certificate at the end of the course. You can enhance data analysis with machine learning and also
conduct inferential statistical analysis.
Key points:
• The first course is Introduction to Data Science in Python; from it you learn the basics and features of data science and Python functionality.
• You learn techniques required in Python programming, such as working with CSV files and software libraries like NumPy and TensorFlow.
• The second course is Charting, Plotting and Data Representation in Python, where you learn to use the Matplotlib library for charting and data visualization in Python.
• The third course is Applied Machine Learning in Python, which introduces machine learning and machine learning algorithms for solving real-world problems.
• The fourth course is Text Mining in Python, where you learn how to handle structured text for machine learning using Python.
• The fifth course is Social Network Analysis in Python; from it you learn about network analysis using the NetworkX library and link prediction problems.
Rating : 4 out of 5
You can Signup here <=> ClickHere
The Edureka Data Science course helps you become an expert in data science and a good data scientist. They teach the course in an easy and very innovative way with simple
logic, so you can ask any queries regarding the course. It covers all the topics involved in data science, such as statistics, machine learning, deep learning, and time series,
and you receive a certificate after completing the course. More than 31k students have been satisfied with this course. You will learn the data science life cycle, the era of data science,
different data science tools, and, from statistical inference, probability, normal and binomial distributions, statistics terminology, and more.
Key points:
• They train you from the basic fundamentals of data science, so that you understand how to manage and analyze data describing actual phenomena.
• They teach topics such as statistics, mathematics, and information science in theory, and you learn how to apply them in practice.
• You gain expertise in concepts such as visualization, clustering, regression, data mining, machine learning subdomains, databases, and more.
• You gain knowledge of machine learning algorithms such as random forests and k-means clustering.
• The instructor teaches data types, data wrangling, data visualization, raw data, and data extraction, along with the uses of clustering, decision trees, and the confusion matrix.
Rating: 4.5 out of 5
You can Signup here <=> ClickHere
Frank Kane is the creator of this Data Science and Machine Learning with Python course. He is one of the instructors on Udemy and a trainer in machine learning and big
data with 9 years of experience, having trained nearly 100k+ students. Through his venture, Sundog Education, his team helps you improve your skills in machine learning, data science, and
big data. In this course he explains topics including artificial intelligence, machine learning, and different neural network techniques, and you can ask any queries regarding the course.
More than 67k+ students are enrolled, and the course offers 12 hours of video and 3 articles with full lifetime access.
Key points:
• They explain the fundamental concepts of deep learning, machine learning, and machine learning with Python.
• You learn about standard deviation, and about visualizing probability density functions, probability mass functions, and data distributions with Matplotlib.
• They explain models such as Bayes' theorem, k-fold cross validation, naive Bayes, k-means, and support vector machines.
• You learn about linear, multivariate, and polynomial regression for making predictions.
• You learn how to build machine learning models using correlation and the covariance matrix, and about reinforcement learning through a Pac-Man bot.
• You learn how to build artificial neural networks with Keras and TensorFlow, and how to design and evaluate A/B tests using t-tests and p-values.
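As a flavor of the correlation and covariance material such a course covers, here is a minimal NumPy sketch (the data values are invented for illustration):

```python
import numpy as np

# Two roughly proportional variables, 4 observations each (rows = variables).
data = np.array([[1.0, 2.0, 3.0, 4.0],
                 [2.0, 4.1, 6.0, 7.9]])

cov = np.cov(data)        # 2x2 sample covariance matrix
corr = np.corrcoef(data)  # 2x2 correlation matrix, entries in [-1, 1]
print(cov.shape, round(corr[0, 1], 3))
```

Because the second row is nearly twice the first, the off-diagonal correlation comes out very close to 1.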
Rating : 4.5 out of 5
You can Signup here <=> ClickHere
Kirill Eremenko is the creator of this Data Science course on Udemy. He is a data scientist and consultant with five years of experience and a strong background in physics and mathematics;
more than 100k students learn from him. He teaches the course in an easy way with simple logic, so you can master data science by learning from him. He teaches the
introduction to data science with real-time examples and is available for any queries regarding the course. More than 84k students are enrolled, and the course
offers 21 hours of video and 3 articles with full lifetime access.
Key points:
• You learn how to perform complex data science projects and work in SQL, along with the basics of Tableau visualization.
• You learn how to create linear regressions by applying least squares methods, how to upload data using SQL Server Integration Services (SSIS), and how to create conditional splits
in SSIS.
• You learn how to create statistical models using the forward selection, backward elimination, and bidirectional elimination methods.
• You learn the concepts of multiple linear regression, logistic regression, and the confusion matrix for solving real-world problems, and how to install and navigate SQL
Server.
• He explains transforming independent variables and creating new variables for modeling purposes.
Rating: 4.5 out of 5
You can Signup here <=> ClickHere
Udacity provides the Programming for Data Science Nanodegree online course, taught by Josh Bernhard, Derek Steer, Juno Lee, Richard Kalehoff, and Karl Krueger.
The instructors of this Nanodegree program are well-qualified experts, so students improve their skills and can get the best job opportunities. Here you learn how to code using SQL,
R, and Python, and how to use the command line, Git, and more. After this course, students are comfortable writing the programs used in data analysis and data science.
Key points:
• In the R track, students learn all the programming fundamentals required in data science and become able to use R, the terminal, and Git.
• Investigate a Relational Database is the first project. Here you learn how to work with a relational database and SQL concepts such as aggregations, advanced SQL, and the basics.
• Explore US Bikeshare Data is the second project, covering an introduction to R, data types, syntax, functions, control flow, and data visualization.
• Post Your Work on GitHub is the third project, where the instructors explain important tools, the uses of Git and GitHub, version management, and more.
• In the Python track you learn the process of using SQL and Python to solve data problems. The first project is the same as in the R track, covering SQL.
• The second project is Explore US Bikeshare Data, where you learn about Python programming, control flow, data types, operators, and so on.
• Post Your Work with Git is the third project, covering the shell workshop, terminology, adding commits, merging, and more.
Rating: 4.5 Out of 5
You can Signup here <=> ClickHere
Jose Portilla is the creator of this Data Science and Machine Learning Bootcamp course. He is a great data scientist who received his bachelor's and master's degrees from Santa Clara University and
has good experience in data science and programming, with more than 70k students. If you have any queries, he will clear your doubts about the course. He teaches how to do data
science, machine learning, and data visualization using the R programming language. More than 31k students are enrolled, and the course offers 17.5 hours of video and 8
articles with full lifetime access.
Key points:
• The instructor teaches all the R programming fundamentals, and you learn how to use R for data science, machine learning, visualization, and more.
• You learn how to install R on macOS, Windows, and Linux, along with all the basic concepts of R, such as variables, vectors and vector operations, arithmetic, and slicing and indexing.
• You learn how to manipulate data, use R to solve complex programs, handle Excel files, and more.
• You learn topics such as linear regression, multiple regression, k-means clustering, random forests, and support vector machines.
• You learn about machine learning frameworks such as TensorFlow and Keras using the R programming language.
• You learn how to use R for handling Excel files, SQL files, and web scraping.
Rating: 4.6 out of 5
You can Signup here <=> ClickHere
The 365 Careers team is the creator of this data science bootcamp course. You will gain knowledge about data science, and they teach it in a simple and easy way. More than 190k students are learning from this team on Udemy. In this course the team trains you in complete data science, covering concepts such as Python, probability, mathematics, statistics, and advanced statistics in Python. More than 7k students are enrolled to learn this course. The team offers you 16.5 hours of on-demand video and 36 articles with full lifetime access.
Key points:
• This course teaches you about the tools that are required for data science and helps you become an expert in the field.
• You will get to know software libraries and topics such as pandas, TensorFlow, Python programming with NumPy, matplotlib, Tableau, and statistical and advanced statistical analysis.
• Using NumPy and statsmodels you will understand how to create ML algorithms and preprocess data. You will learn how to perform logistic and linear regressions in Python.
• You can learn how to write code in Python and how Python is used for statistical analysis. TensorFlow is used to solve real-world problems in big data.
• You will learn how to improve model performance and tune hyperparameters using training, testing, and validation, and by handling overfitting and underfitting.
Rating: 4.5 out of 5
You can Signup here <=> ClickHere
Minerva Singh is the creator of this course, called Data Science with Python for Data Analysis. The instructor is among the best on Udemy and explains things with simple logic and in an easy way. You will become an expert in data science, and you can ask any queries regarding the course. In this course the instructor covers an introduction to data science along with concepts of machine learning, statistics, and visualisation. More than 3k students are enrolled in this course. It has 1 downloadable resource, 13 hours of on-demand video, 6 articles, and full lifetime access.
Key points:
• You can learn the installation steps for Anaconda and how to work on data analysis with a powerful framework.
• You will understand the difference between statistical data analysis and machine learning. From this course you can build deep learning algorithms and neural networks.
• The most common statistical techniques for data analysis in Python, like the t-test and linear regression, are also taught by Minerva Singh.
• You can also learn how to accomplish visualization, exploratory data analysis, and preprocessing tasks like pivoting and tabulating, and you will learn to implement deep neural networks using H2O.
• You will understand how to create NumPy arrays, use operators, concatenate, drop rows/columns, reshape, and pivot, and you will also learn how to merge data frames, etc.
• From data visualization you will learn concepts like pie charts, line plots, histograms, box plots, scatter plots, etc., along with the principles of data visualization.
Rating: 4.1 out of 5
You can Signup here <=> ClickHere
The Master of Applied Data Science is offered by the University of Michigan, which according to the National Science Foundation is considered the first public research university. Students will also be invited to formal activities such as the UMSI Convocation and Commencement, and MADS students have the opportunity to join UMSI student clubs and activities. Students with no technical background can also apply for this degree, but you need to have basic knowledge of statistics and Python. The curriculum includes many themes and applications, such as computational methods for dealing with big data, social media analytics, etc.
Key points:
• The curriculum covers the computational methods needed to deal with big data and analytic techniques like network analysis, natural language processing, machine learning, and causal inference.
• You will also learn about data science applications in contexts such as search and recommender systems, learning analytics, and social media analytics, and you will learn how to explore and communicate data.
• By taking this degree you will gain knowledge of how to visualise data using multiple methods, with the help of 3 portfolio-building major projects.
• This degree teaches you how to process data for analysis, how to analyse data in a wide variety of ways, and how to collect data efficiently.
• By the end of this degree you will be able to visualise results in a sophisticated way and report them to others after the analysis.
Rating: 4.4 out of 5
You can Signup here <=> ClickHere
The Master of Science in Data Science is offered by the University of Colorado Boulder. This degree is for everyone who is interested in moving their career into the field of data science, and the coursework helps you pass the final exam at the University of Colorado. Degree students can participate in practical, hands-on projects, and the coursework includes access to real-world big data sets to prepare you for your future career. You will learn both Python and R programming, which are the most commonly used languages of data science, and the training also covers theories and methods as well as tools such as the Hadoop file system, SQL, and Apache Spark.
Key points:
• In this degree you will learn about the theories and methods of data science along with programming fundamentals, data structures, and statistics, and you will also learn about the tools of the modern workplace: Amazon Web Services, the Hadoop file system, Apache Spark, and SQL.
• By taking this course you will become proficient in risk analysis, data visualization, predictive modeling, artificial intelligence, and machine learning.
• To become a data scientist you must understand how to translate research and business problems into useful technical solutions, and this degree helps you with exactly that.
• You will also develop skills such as communication, teamwork, and leadership.
• By the end of this course you will be able to work in a team and communicate technical solutions to non-technical professionals, and there are also specialised courses in the areas of processing, geospatial analytics, natural language processing, etc.
Rating: 4.5 out of 5
You can Signup here <=> ClickHere
This degree is offered by the University of Illinois, which has earned a reputation as a global leader in the fields of research, public engagement, and teaching, and is considered a pioneer of online and innovative distance education. This course is very helpful for CS students. The degree includes topics like data mining, cloud computing, statistics and information science, data visualization, and machine learning. By completing this degree you will be able to analyse the data needed to inform critical decisions, and you will be able to use cloud computing to visualize, learn, and mine new insights from big data. The degree is taught online by top instructors in a classroom-like environment, with a strong alumni network.
Key points:
• By taking this degree you will gain deep knowledge of computer science applied to data science, from a program at the forefront of excellence in education and computing.
• This degree is useful for professionals who want to understand the role of big data in the world; you will be able to discover new insights and learn how to optimize decisions.
• The degree helps you analyse the rising tide of data, which has become important to a wide range of fields including medicine, the humanities, engineering, and business.
• By taking this degree you will gain a strong foundation that will enable you to bring data science into any area. The program is designed so that you can complete it at your own pace.
• This is an affordable and flexible data science degree with expertise in 4 areas of computer science: machine learning, data mining, cloud computing, and data visualization.
Rating: 4.6 Out of 5
You can Signup here <=> ClickHere
Data science is easy to learn, and it is used in different areas such as business growth, healthcare, and industry. Above we've listed some of the best data science online courses. If you are interested, you can choose whichever course is suitable for you. After completing a course, you will get a certificate with your name on it, and you can add this certification to your resume to help your career. If you like this article, please share it with your friends and on social media like WhatsApp, Twitter, and Facebook. If you have any queries about this article, you can ask in the comment section.
We advise you to learn via online courses rather than books, and suggest you use books only for reference purposes.
Best Data Science Books
#1 Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking 1st Edition by Foster Provost
#2 Data Science from Scratch: First Principles with Python 1st Edition by Joel Grus
#3 Numsense! Data Science for the Layman: No Math Added by Annalyn Ng
#4 Data Science For Dummies, 2nd Edition (For Dummies (Computers)) 2nd Edition by Lillian Pierson
#5 Doing Data Science: Straight Talk from the Frontline 1st Edition by Cathy O’Neil
#6 Python Data Science Handbook: Essential Tools for Working with Data 1st Edition by Jake VanderPlas
Relevant Data Science Books:
#1 R for Data Science: Import, Tidy, Transform, Visualize, and Model Data 1st Edition by Hadley Wickham & Garrett Grolemund
#2 Practical Statistics for Data Scientists: 50 Essential Concepts 1st Edition by Peter Bruce & Andrew Bruce
#3 Practical Data Science with R 1st Edition by Nina Zumel & John Mount
#4 The Data Science Handbook 1st Edition by Field Cady
#5 Introduction to Data Science: Essential Concepts 1st Edition by Peters Morgan
9.4 Total Score
Best Data Science Online Courses
| {"url":"https://top10onlinecourses.com/best-data-science-online-courses/","timestamp":"2024-11-04T00:57:54Z","content_type":"text/html","content_length":"164493","record_id":"<urn:uuid:dd186b98-c69f-4e30-becd-b21258407ae7>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00467.warc.gz"}
Revision history
Interactive question in notebooks
When trying to find some solution to the double integral problem and poking around with Sage (4.7.2), I stumbled upon this behavior:
integrate(x+y^k, y)
output (resembles maxima interaction):
Traceback (click to the left of this block for traceback)
Is k+1 zero or nonzero?
How can I answer this (with nonzero)?
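For the record, the standard workaround is to declare the assumption before integrating instead of answering Maxima interactively. In a Sage session that is done with `assume` (a sketch; `assume(k > 0)` is just one sufficient choice, any assumption that rules out k = -1 works):

```
var('y k')
assume(k > 0)          # rules out k + 1 == 0, so Maxima has nothing to ask
integrate(x + y^k, y)  # gives x*y + y^(k+1)/(k+1)
```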
I accept @god.one's solution below, as there seems to be no way, for now, to interactively answer those Maxima questions. | {"url":"https://ask.sagemath.org/questions/8650/revisions/","timestamp":"2024-11-05T23:28:33Z","content_type":"application/xhtml+xml","content_length":"18649","record_id":"<urn:uuid:6ff293ef-acf8-466b-a4e3-002e9cfbada8>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00679.warc.gz"}
Mathematical Induction
Mathematical induction is a common method for proving theorems about the positive integers, or just about any situation where one case depends on previous cases. Here’s the basic idea, phrased in
terms of integers:
1. You have a conjecture that you think is true for every integer greater than 1. Show (by calculation or some other method) that your conjecture definitely holds for 1.
2. Show that whenever your conjecture holds for some number, it must hold for the next number as well.
Why does this work? Well, you know your conjecture holds for 1. You’ve shown that whenever it holds for some number, it must hold for the next, so your conjecture also holds for 2. But if it holds
for 2, it must hold for the next number as well, so it holds for 3. And so on, through all the integers greater than 1.
A few things to notice:
• The starting point (or base case) of 1 is common, but it’s not the only choice. You might have a conjecture that holds for every integer greater than 5, or for every integer greater than 1,001.
The only thing that changes is the base case.
• In practice, the way you do Step 2 is that you assume that for some number you call (n-1), your conjecture holds. (This is called the induction hypothesis.) Use that assumption to show that the
conjecture must hold for the number n as well. Some people complain that they are “assuming what they want to prove.” That’s not the case, of course. You are simply proving that if your claim is
true for a particular case, then it is true for the subsequent one. If no base case can be established, then the link between cases is meaningless. The assumption is no different than the one we
make when we say “if a shape is a parallelogram, then its adjacent angles are supplementary.” We have not proven that any shapes are parallelograms. We are just claiming that if a shape exists
that matches our definition for parallelogram, that it must also possess the stated property.
• A slight variation on the induction hypothesis can be useful: assume that for all integers k < n your conjecture holds. Then use that to prove it holds for the number n as well. If your proof
uses more than one previous step--for example, it uses the fact that it holds at (n-1) and (n-2)--you’ll need to check more than one base case.
• Mathematical induction and its variations are useful in proving identities that are true for any integer value, but they do not help you see how someone figured out the identity in the first place.
• Mathematical induction is not only useful for proving algebraic identities. It can be used any time you have a recursive relationship--one where the current case depends on one or more of the previous cases. See the second example below for a geometric application of induction.
Here’s a more general way to think about induction:
1. You have a conjecture that you think is true for every case. Show that your conjecture definitely holds for a base case.
2. Show that whenever your conjecture holds for some case, it must hold for the next case as well.
Here’s an example using integers. Someone discovered a formula that seems to work for the sum of the integers:
1 + 2 + 3 + ... + n = n(n+1)/2
You could check lots and lots of cases, but no matter how long you worked, you could never check that the formula holds for every integer greater than 1. With mathematical induction, you can prove it:
1. Show that the conjecture holds for a base case. Well, the sum on the left will just be 1. The formula on the right gives 1(1+1)/2 = 2/2 = 1, so the two sides agree.
2. Show that whenever your conjecture holds for some number, it must hold for the next number as well. So, we’ll start with the original formula and show that when it is true for some n - 1, then the formula must also work for n. Assume
1 + 2 + ... + (n-1) = (n-1)n/2
Now, the equality will still hold if we add n to both sides of the equation:
1 + 2 + ... + (n-1) + n = (n-1)n/2 + n
And we do a little simplification on the right side of the equation, trying to get it in the form of our original conjecture:
(n-1)n/2 + n = (n^2 - n + 2n)/2 = (n^2 + n)/2 = n(n+1)/2
which is exactly our original formula!
So the formula holds for 1. If it holds for 1, it must hold for 2 (the next number). If it holds for 2, it must hold for 3 (the next number). And so on, and so on - by mathematical induction, it
holds for every integer greater than 1!
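Induction is what proves the formula for every n, but a quick computational spot-check is still a nice sanity test (a minimal Python sketch, not part of the original article):

```python
def sum_first(n):
    """Sum 1 + 2 + ... + n the slow way."""
    return sum(range(1, n + 1))

def closed_form(n):
    """The formula n(n+1)/2 proved above by induction."""
    return n * (n + 1) // 2

# Checking finitely many cases can't replace the proof,
# but it would catch an algebra slip immediately.
assert all(sum_first(n) == closed_form(n) for n in range(1, 1001))
print("formula matches for n = 1..1000")
```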
Here’s a geometric example: Someone noticed that every polygon with n sides could be divided into n - 2 triangles. Again, you could never check every polygon, but you don’t have to. Induction lets
you prove it. Here’s a sketch of the proof. Can you fill in the details?
1. In this case, the base case will be quadrilaterals, or polygons with four sides. Draw one diagonal, and you cut it into 2 triangles. (Make sure you believe you can do this for every
quadrilateral--sometimes a diagonal will fall outside of the figure. What then?)
2. Now, assume that every polygon with n - 1 sides can be cut into (n-1)-2 = n-3 triangles. Now think about a polygon with n sides.
□ Find a diagonal that creates a triangle and an (n - 1)-sided polygon. (Can you always do this?)
□ You can divide the (n-1)-sided polygon into n-3 triangles. (Why?)
□ So the total number of triangles is (n - 3) + 1 = n - 2.
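The induction translates directly into a recursion: each step cuts off one triangle and leaves a polygon with one fewer side. A small Python sketch of that counting argument (an illustration of the proof, not a triangulation algorithm):

```python
def triangle_count(n):
    """Number of triangles an n-sided polygon splits into,
    mirroring the inductive argument above."""
    if n == 3:
        return 1  # a triangle is already one triangle
    # one diagonal cuts off a triangle, leaving an (n-1)-gon
    return 1 + triangle_count(n - 1)

assert triangle_count(4) == 2  # the base case from the proof
assert all(triangle_count(n) == n - 2 for n in range(3, 60))
print("n-gon splits into n - 2 triangles for n = 3..59")
```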
Further Resources
You can try your hand at some mathematical induction problems--some numeric and some not--at the Problems with a Point page on The Principle of Mathematical Induction. You can find some harder
problems for which induction is useful at http://www.geocities.com/jespinos57/induction.htm.
More information and problems on Mathematical Induction can be found at http://www.math.csusb.edu/notes/proofs/pfnot/node10.html and http://www.cut-the-knot.com/induction.html and in the articles
"Teaching Mathematical Induction: An Alternative Approach" and "When Memory Fails" in the September 2001 issue of Mathematics Teacher. | {"url":"https://www2.edc.org/makingmath/mathtools/induction/induction.asp","timestamp":"2024-11-01T19:44:49Z","content_type":"text/html","content_length":"18458","record_id":"<urn:uuid:bc97585c-ef21-4a8e-b2b8-1a7d45dfe40e>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00859.warc.gz"} |
2664 -- Prerequisites?
Time Limit: 2000MS Memory Limit: 65536K
Total Submissions: 4383 Accepted: 2649
Freddie the frosh has chosen to take k courses. To meet the degree requirements, he must take courses from each of several categories. Can you assure Freddie that he will graduate, based on his
course selection?
Input consists of several test cases. For each case, the first line of input contains 1 <= k <= 100, the number of courses Freddie has chosen, and 0 <= m <= 100, the number of categories. One or more lines follow containing k 4-digit integers; each is the number of a course selected by Freddie. Each category is represented by a line containing 1 <= c <= 100, the number of courses in the category, 0 <= r <= c, the minimum number of courses from the category that must be taken, and the c course numbers in the category. Each course number is a 4-digit integer. The same course may fulfil several category requirements. Freddie's selections, and the course numbers in any particular category, are distinct. A line containing 0 follows the last test case.
For each test case, output a line containing "yes" if Freddie's course selection meets the degree requirements; otherwise output "no".
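The requirement check itself is a set intersection per category: Freddie graduates iff every category contains at least r of his chosen courses. A Python sketch of that core test (the judge's exact I/O loop and parsing are omitted; the sample data below is made up, not from the hidden tests):

```python
def meets_requirements(chosen_courses, categories):
    """chosen_courses: iterable of Freddie's course numbers.
    categories: list of (r, course_list) pairs, one per category.
    Returns True iff each category has at least r chosen courses."""
    chosen = set(chosen_courses)
    return all(len(chosen & set(courses)) >= r for r, courses in categories)

# Hypothetical data for illustration only:
assert meets_requirements([1234, 5678], [(1, [1234, 9999])])      # "yes"
assert not meets_requirements([1234], [(2, [1234, 5678, 9999])])  # "no"
print("checks pass")
```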
Sample Input
Sample Output | {"url":"http://poj.org/problem?id=2664","timestamp":"2024-11-11T19:48:45Z","content_type":"text/html","content_length":"6348","record_id":"<urn:uuid:c1dfd71b-9a01-46f0-a0f0-2e990bedabda>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00341.warc.gz"} |
Lesson 17
Completing the Square and Complex Solutions
Problem 1
Find the solution or solutions to each equation.
1. \(x^2+0.5x-14=0\)
2. \(x^2+12x+36=0\)
3. \(x^2-3x+8=0\)
4. \(x^2+4=0\)
Problem 2
Which describes the solutions to the equation \(x^2+7=0\)?
Two non-real solutions
Problem 3
Explain how you know \(\sqrt{3x+2}=\text-16\) has no solutions.
Problem 4
Determine the number of real solutions and non-real solutions to each equation. Use the graphs; don't do any calculations to find the solutions.
1. \(x^2-6x+7=0\)
2. \(3x^2+2x+1=0\)
3. \(\text-x^2-3x+2=0\)
4. \(x^2-6x+7=\text-2\)
5. \(\text-x^2-3x+2=6\)
6. \(3x^2+2x+1=2\)
Problem 5
1. Write \((5-5i)^2\) in the form \(a+bi\), where \(a\) and \(b\) are real numbers.
2. Write \((5-5i)^4\) in the form \(a+bi\), where \(a\) and \(b\) are real numbers.
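Hand computations like these are easy to check numerically with Python's built-in complex type, where i is written j (a sketch for self-checking your work; not part of the lesson):

```python
z = 5 - 5j
w2 = z ** 2            # compare with your expansion of (5 - 5i)^2
w4 = z ** 4            # should agree with (z**2)**2
assert w4 == w2 ** 2   # consistency between parts a and b
assert w4.imag == 0    # part b turns out to be a real number
print(w2, w4)
```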
Problem 6
What number \(n\) makes this equation true?
\(x^2+11x+\frac{121}{4} = (x+n)^2\) | {"url":"https://im.kendallhunt.com/HS/teachers/3/3/17/practice.html","timestamp":"2024-11-12T08:54:00Z","content_type":"text/html","content_length":"83993","record_id":"<urn:uuid:f7e482d4-6669-4b1b-97aa-51bb20aa9bf0>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00413.warc.gz"} |
Round To The Nearest Penny Calculator
Hello, money math enthusiasts! You’ve tripped over a penny and rolled into the fascinating world of rounding to the nearest penny. Trust us, it’s as exhilarating as a rollercoaster ride… well, a very
mathematical one. But hold onto your calculators, we’ll keep the number crunching serious here.
Calculation Formula
rounded_penny = round(original_number, 2)
Simple as that! You take your number and round it to two decimal places. It’s like trimming your number’s hair to keep it neat and tidy!
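The `round(x, 2)` one-liner works in spirit, but two caveats are worth flagging: Python's built-in `round` uses round-half-to-even, and binary floats can't store values like 10.995 exactly. For money, a `Decimal`-based version is the safer sketch (an illustration, not the calculator's actual implementation):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to_penny(amount):
    """Round a monetary amount to the nearest cent, halves going up."""
    return Decimal(str(amount)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(round_to_penny("10.994"))  # 10.99 (rounded down)
print(round_to_penny("10.995"))  # 11.00 (rounded up)
```

Passing the amount in as a string keeps it exact; `Decimal(10.995)` would inherit the float's tiny binary error.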
Categories of Rounding
Category Range Interpretation
Exact Penny 0.00 to 0.99 The number is already prim and proper, no rounding needed
Needs Rounding Up 0.995 to 1 Give this number a boost, round up to the next penny
Needs Rounding Down 0.994 and below Bring this number back to earth, round down to the nearest penny
Examples of Rounding
Individual Original Amount Rounded Amount Calculation
Penny Pincher Paul $10.994 $10.99 Rounded down, Paul’s penny pinching nature pays off
Exact Change Elaine $20.00 $20.00 Elaine’s exactness pays off, no rounding needed
Up and Up Ursula $30.995 $31.00 Rounded up, Ursula is on a financial upward climb
Ways to Calculate
Method Advantages Disadvantages Accuracy
Manual Rounding No tools needed, just your brain power Can be time consuming High
Using a Calculator Fast, like a rabbit Requires a calculator, not so fast if you don’t have one High
Estimation Quick, like a guess in a trivia game Not precise, so don’t use for your taxes Low
Evolution of Rounding Concept
Time Period Changes in Rounding Concept
Ancient Times Rounding at all was a revolutionary concept, like the wheel
Pre-Decimalization Rounding to nearest shilling or pound, because who needs pennies?
Post-Decimalization Rounding to nearest penny, pennies need love too
Limitations of Rounding Accuracy
1. Human Error: The accuracy of manual calculations can be affected by human error. We’re only human after all!
2. Large Numbers: Rounding large numbers can lead to significant differences. Big numbers, big problems.
Alternative Methods
Method Pros Cons
Banker’s Rounding More accurate for large numbers, like counting a dragon’s treasure More complex, like solving a dragon’s riddle
Rounding Half to Even Reduces bias, like a fair and square board game Not commonly used, like a board game’s instruction manual
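Banker's rounding and "rounding half to even" in the table above are in fact the same rule, and it is what Python's `decimal` module exposes as `ROUND_HALF_EVEN` (a short sketch; the amounts are made up):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def bankers_round(amount):
    """Exact half-cents go to the nearest even cent, cancelling upward bias."""
    return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

print(bankers_round("2.125"))  # 2.12 -- down to the even cent
print(bankers_round("2.135"))  # 2.14 -- up to the even cent
```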
1. What is rounding to the nearest penny? It’s a method of approximating a number to the closest penny. Like guessing the number of candies in a jar, but with more precision.
2. When would I need to round to the nearest penny? When dealing with financial transactions, it’s often necessary to round to the nearest penny. Like when you are dividing a restaurant bill among
3. How accurate is rounding to the nearest penny? It’s usually highly accurate for everyday transactions, but less so for very large numbers. It’s not perfect, but neither is life.
4. Why is rounding important? Rounding makes numbers easier to work with and understand. It’s like simplifying a complex recipe.
5. Can rounding affect the outcome of my calculations? Yes, especially when dealing with large numbers or precise calculations. So, use it wisely.
6. What’s the difference between rounding up and down? Rounding up increases the number to the nearest higher value, while rounding down reduces it to the nearest lower value. It’s like climbing up
or sliding down a hill.
7. Why do we round to two decimal places for pennies? Because pennies are the smallest unit in US currency and they go up to two decimal places. It’s like counting the smallest beads in a necklace.
8. What is Banker’s Rounding and how is it different? Banker’s rounding is a method where numbers that are exactly halfway between two others round to the nearest even number. It’s a different
strategy, like choosing a different path in a maze.
9. Can I always use rounding in financial transactions? Not always. Some transactions require exact numbers, like when you’re balancing your checkbook or filing taxes.
10. Does the rounding method change for different currencies? Yes, the rounding method can change depending on the smallest unit of the currency. So, always check before you round!
1. US Treasury: Dive into the details of US currency and rounding practices here. It’s like a treasure map of information.
2. Federal Reserve: This site provides educational resources on currency. It’s like a library for your money queries. | {"url":"https://calculator.dev/math/round-to-the-nearest-penny-calculator/","timestamp":"2024-11-09T09:38:45Z","content_type":"text/html","content_length":"112751","record_id":"<urn:uuid:88f2e07f-eec6-4968-9679-633ca6a96250>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00795.warc.gz"} |
A linear programming approach to approximating the infinite time reachable set of strictly stable linear control systems
Title data
Ernst, Andreas ; Grüne, Lars ; Rieger, Janosch:
A linear programming approach to approximating the infinite time reachable set of strictly stable linear control systems.
In: Journal of Global Optimization. Vol. 86 (2023) . - pp. 521-543.
ISSN 1573-2916
DOI: https://doi.org/10.1007/s10898-022-01261-w
This is the latest version of this item.
Abstract in another language
The infinite time reachable set of a strictly stable linear control system is the Hausdorff limit of the finite time reachable set of the origin as time tends to infinity. By definition, it encodes
useful information on the long-term behavior of the control system. Its characterization as a limit set gives rise to numerical methods for its computation that are based on forward iteration of
approximate finite time reachable sets. These methods tend to be computationally expensive, because they essentially perform a Minkowski sum in every single forward step. We develop a new approach to
computing the infinite time reachable set that is based on the invariance properties of the control system and the desired set. These allow us to characterize a polyhedral outer approximation as the
unique solution to a linear program with constraints that incorporate the system dynamics. In particular, this approach does not rely on forward iteration of finite time reachable sets.
| {"url":"https://eref.uni-bayreuth.de/id/eprint/72950/","timestamp":"2024-11-08T14:28:52Z","content_type":"application/xhtml+xml","content_length":"24968","record_id":"<urn:uuid:5c9aa521-a34e-4d25-9842-d6ec9869f511>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00110.warc.gz"}
Network Electro-thermal Simulation of Non-isothermal Magnetohydrodynamic Heat Transfer from a Transpiring Cone with Buoyancy and Pressure Work
The steady, axisymmetric laminar natural convection boundary layer flow from a non-isothermal vertical circular porous cone under a transverse magnetic field, with the cone vertex located at the
base, is considered. The pressure work effect is included in the analysis. The governing boundary layer equations are formulated in an (x, y) coordinate system (parallel and normal to the cone slant
surface), and the magnetic field effects are simulated with a hydromagnetic body force term in the momentum equation. A dimensionless transformation is performed rendering the momentum and also heat
conservation equations. The thermal convection flow is shown to be controlled by six thermophysical parameters—local Hartmann number, local Grashof number, pressure work parameter, temperature power
law exponent, Prandtl number and the transpiration parameter. The transformed parabolic partial differential equations are solved numerically using the network simulation method based on the
electrical-thermodynamic analogy. Excellent correlation of the zero Hartmann number case is achieved with earlier electrically non-conducting solutions. Local shear stress function (skin friction) is
found to be strongly decreased with an increase in Prandtl number (Pr), with negative values (corresponding to flow reversal) identified for highest Pr with further distance along the streamwise
direction. A rise in local Hartmann number, is observed to depress skin friction. Increasing temperature power law index, corresponding to steeper temperature gradient at the wall, strongly reduces
skin friction at the cone surface. A positive rise in pressure work parameter decreases skin friction whereas a negative increase elevates the skin friction for some distance along the cone surface
from the apex. Local heat transfer gradient is markedly boosted with a rise in Prandtl number but decreased principally at the cone surface with increasing local Hartmann number. Increasing
temperature power law index conversely increases the local heat transfer gradient, at the cone surface. A positive rise in pressure work parameter increases local heat transfer gradient while
negative causes it to decrease. A rise in local Grashof number boosts local skin friction and velocity into the boundary layer; local heat transfer gradient is also increased with a rise in local
Grashof number whereas the temperature in the boundary layer is noticeably reduced. Applications of the work arise in spacecraft magnetogas dynamics, chemical cooling systems and industrial magnetic
materials processing.
• Hartmann number
• Heat transfer
• Lateral mass flux
• Local Nusselt number
• Magnetofluid dynamics
• Non-isothermal
• Nonlinear convection
• Numerical solutions
• Prandtl number
• Pressure work
| {"url":"https://research.edgehill.ac.uk/en/publications/network-electro-thermal-simulation-of-non-isothermal-magnetohydro","timestamp":"2024-11-08T11:19:28Z","content_type":"text/html","content_length":"67611","record_id":"<urn:uuid:b6e2d4ef-dbba-41d5-a05c-41b1e2e80272>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00368.warc.gz"}
Calculate Percentiles from Mean & Standard Deviation in Excel
Excel is one of the best tools devised for various calculations. Formulas in Excel are grouped into multiple subcategories: mathematical, financial, date & time, and many others.
In the example below, we will show how to use two of the formulas from the statistical group, NORM.INV and PERCENTILE. These and some other formulas will help us calculate the mean, standard deviation, and percentile.
Standard Deviation, Mean, and Percentile
To understand the formulas that we will show, we need to understand all the terms that they relate to.
In simple terms, the mean represents the average of a set of particular values. We use the Greek letter μ to represent it.
A standard deviation (or σ) shows us how dispersed our data is in relation to the mean. When the standard deviation is low, the data are clustered around the mean; when it is high, the data are spread out.
Percentiles are a good indicator of what percentage of the data set falls below or above a given value. In what is called a normal distribution, 50 percent of all the data falls below the average and 50 percent above it.
Calculate Percentile Value When The Values are Known
Suppose that we have a car manufacturer that can guarantee a standard car part for 20,000 miles, with a standard deviation of 2,000 miles. Now we want to know for how many miles should the company
warranty its parts if it does not want to replace more than four percent of the parts.
We will use the NORM.INV formula for this purpose, since all three values it needs are known to us. This is our result:
NORM.INV has three parameters: probability, mean, and standard deviation. Since we have all of them, we simply insert these values into the formula: =NORM.INV(0.04, 20000, 2000).
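Outside Excel, the same inverse-normal lookup can be reproduced with Python's standard library. The sketch below uses `statistics.NormalDist` with the mean (20,000 miles) and standard deviation (2,000 miles) from the example above:

```python
from statistics import NormalDist

# Part lifetime model: mean 20,000 miles, standard deviation 2,000 miles
lifetime = NormalDist(mu=20000, sigma=2000)

# Equivalent of =NORM.INV(0.04, 20000, 2000): the mileage below which
# only 4% of parts are expected to fail
warranty_miles = lifetime.inv_cdf(0.04)
print(round(warranty_miles))
```

The company would therefore set the warranty a little below 16,500 miles.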
Calculate Mean, Standard Deviation, and Percentile
To calculate the percentile from our data set, we do not need to know the mean and standard deviation. In Excel, there is one simple formula that can give us the results of percentile in our range.
Suppose that we have the following data set:
To calculate the Mean for this range, we will use the AVERAGE formula:
For Standard deviation calculation, we will use the STDEV.P formula with our data set:
As we said, the percentile gives us the percentage of results that fall under a certain value. Using the percentile, we can find the relative position of a certain value in our range. For our
example, we will use the 75th percentile of our data. We will use the following formula:
=PERCENTILE.INC(E2:E11,0.75)
As seen, PERCENTILE.INC has two parameters: an array (our range) and k (a number between 0 and 1 that represents the percentile we want to get).
When we insert our formula, this is the number that we will get:
This number means that, if we use the 75th percentile as a threshold, every student whose height is over 183.5 cm passes the threshold. Since we have 10 students in our table, and since PERCENTILE.INC interpolates between the sorted values, this means that we will have three students above this value (roughly 25 percent of our group).
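As a cross-check outside Excel, Python's standard library computes the same inclusive percentile; `statistics.quantiles` with `method='inclusive'` follows the same interpolation rule as PERCENTILE.INC. The heights below are hypothetical stand-ins for the E2:E11 range, since the original data set appears only in a screenshot:

```python
from statistics import quantiles

# Hypothetical student heights in cm (stand-in for the E2:E11 range)
heights = [168, 171, 173, 175, 178, 180, 182, 184, 186, 190]

# quantiles(..., n=4) returns [Q1, median, Q3]; method='inclusive'
# interpolates between order statistics like Excel's PERCENTILE.INC
q1, median, q3 = quantiles(heights, n=4, method='inclusive')
print(q3)  # → 183.5, the 75th percentile

# Count the students strictly above the threshold
above = sum(1 for h in heights if h > q3)
print(above)  # → 3
```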
If we add the following formula in column F to check this:
And expand it to the end of our range, these results will show: | {"url":"https://officetuts.net/excel/formulas/calculate-percentiles-from-mean-amp-standard-deviation/","timestamp":"2024-11-07T00:05:46Z","content_type":"text/html","content_length":"148523","record_id":"<urn:uuid:e338f68e-7ccc-43e1-9ec4-60213145c7d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00880.warc.gz"} |
"Arithmetic operations"
Mental speed test "Arithmetic operations" - online
Test Description
This test, one of the easiest to perform, will allow you to assess your mental abilities, namely attention and speed of thinking, or more precisely the speed of performing arithmetic operations, which can also be used to judge the level of a person's intellectual development. The essence of the test is that the subject, as quickly as possible, correctly calculates the sum of each of several rows of numbers that are not delimited from each other by any visual boundaries. One of the difficulties of the test is that you can mix up the numbers that you add, or even the rows in which you perform the addition operations. Thus, the test requires a high concentration of attention during execution.
Passing the test
By clicking on the following link, you will be taken to a page with a table consisting of several rows of numbers (depending on the type of test, this can be 10 or 5 rows and from 10 to 20 numbers in
each row). Your task is to add the numbers of each row correctly and enter the results of the additions into the input fields at the end of each row. The test time is limited. The maximum time for
passing the test for different age groups of subjects is different. For adults it is 1.5 seconds for each digit added, for late teens it is 2 seconds, for early teens it is 2.5 seconds, and for
children 6 to 9 years old it is 3 seconds. In case you manage before, click on the "Finish" button to go to the page with the test results.
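To make the mechanics concrete, here is a hedged Python sketch (not from the test page) that generates rows of the kind described and computes both the expected row sums and the adult time limit, interpreting "1.5 seconds for each digit added" as 1.5 seconds per number summed. The row and column counts and the single-digit range are assumptions:

```python
import random

random.seed(7)  # deterministic example table

ROWS, NUMBERS_PER_ROW = 5, 10     # one of the test variants mentioned
SECONDS_PER_NUMBER_ADULT = 1.5    # adult limit per number added

# Generate a table of single-digit numbers (an assumption; the real
# test may use a different range)
table = [[random.randint(1, 9) for _ in range(NUMBERS_PER_ROW)]
         for _ in range(ROWS)]

# The expected answers are simply the row sums
answers = [sum(row) for row in table]

# Maximum time allowed for the whole test under this interpretation
max_time = ROWS * NUMBERS_PER_ROW * SECONDS_PER_NUMBER_ADULT
print(answers)
print(max_time)  # 75.0 seconds for the 5x10 adult variant
```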
Calculation of results
The results are calculated from the number of correct answers and the time spent on passing the test. On average, it takes from one to one and a half seconds to perform one arithmetic operation,
depending on the age of the subject and the conductivity of the neurons involved in the task.
Test options for performing arithmetic operations (horizontal rows):
© Oleg Akvan
Comment block
No one has left comments here yet, be the first!
Leave a comment:
You may be interested in: | {"url":"https://metodorf.com/tests/arifmetic_operations.php","timestamp":"2024-11-05T16:36:17Z","content_type":"text/html","content_length":"39339","record_id":"<urn:uuid:037378fc-1266-40ed-8684-6a9e8250c959>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00169.warc.gz"} |
Others and Extra Calculators | List of Others and Extra Calculators
List of Others and Extra Calculators
Others and Extra calculators give you a list of online Others and Extra calculators. A tool to perform calculations on the concepts and applications for Others and Extra calculations.
These calculators will be useful for everyone and save time with the complex procedure involved to obtain the calculation results. You can also download, share as well as print the list of Others and
Extra calculators with all the formulas. | {"url":"https://www.calculatoratoz.com/en/others-and-extra-Calculators/CalcList-11536","timestamp":"2024-11-04T04:32:51Z","content_type":"application/xhtml+xml","content_length":"89821","record_id":"<urn:uuid:2a68f98c-4e4a-4bd3-9eff-638a5b56aee2>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00524.warc.gz"} |
Count of Distinct Integers Belonging to First ‘N’ Terms of at least one of Given Geometric Progression Series(GPs)
One day, ninja's teacher gave him the assignment to calculate the count of distinct integers belonging to first ‘N’ terms of at least one of the two given Geometric Progression Series. Help ninja to
pass the assignment.
The above problem is a common number-series coding problem. Number-series coding problems are widely asked in coding contests and in various coding interviews.
In this blog, we will solve the above problem.
The Problem Statement
You are given integers ‘A1’, ‘A2’, ‘R1’, ‘R2’, and ‘N’, where ‘A1’ and ‘A2’ represent the first terms of the two GPs and ‘R1’ and ‘R2’ represent their common ratios, respectively. Your task is to calculate the count of all distinct integers which belong to the first ‘N’ terms of at least one GP.
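A direct way to solve the stated task is to generate the first N terms of each GP, collect them in a set, and take the size of the union. This sketch assumes integer inputs as in the sample that follows (Python handles the large integers produced by big N natively):

```python
def count_distinct_gp_terms(a1, r1, a2, r2, n):
    """Count distinct integers among the first n terms of either GP."""
    terms = set()
    for first, ratio in ((a1, r1), (a2, r2)):
        term = first
        for _ in range(n):
            terms.add(term)
            term *= ratio
    return len(terms)

# Sample: A1=2, R1=3, A2=3, R2=2, N=2 gives {2, 6} and {3, 6} -> 3
print(count_distinct_gp_terms(2, 3, 3, 2, 2))  # → 3
```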
Geometric Progression
A Geometric Progression (GP) is a sequence of numbers in which each term after the first is obtained by multiplying the previous term by a fixed number, called the common ratio.
Suppose, A1 = 2, A2 = 3, R1 = 3, R2 = 2, N = 2.
2,6 belongs to the first GP, whereas 3,6 belongs to the second GP. Hence the distinct integers are 2,3,6. Therefore the total count of distinct integers that belongs to the first two terms of at
least one GP is 3. | {"url":"https://www.naukri.com/code360/library/count-of-distinct-integers-belonging-to-first-n-terms-of-at-least-one-of-given-geometric-progression-series-gps","timestamp":"2024-11-08T14:23:22Z","content_type":"text/html","content_length":"368745","record_id":"<urn:uuid:b024dd11-ed72-43ff-a1a2-42f8cc4fcd71>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00030.warc.gz"} |
The Case of David Friedman
[Part of a series on neoclassical economics]
In a recent article, this author demonstrated that two of Steven Landsburg’s ‘surprising’ results were due to his false assumptions, and that the ‘naïve’ layman was thus exonerated from Landsburg’s
criticism. In this article, we will attempt to do the same with an argument presented by the eminent David Friedman in his fascinating book, Hidden Order (HarperCollins, 1996).
As with Landsburg, it will be first necessary to quote extensively from Friedman. The following analysis is presented in his section, “Heads I Win, Tails I Win”:
You have just bought a house. A month later, the price of houses goes up. Are you better off (your house is worth more) or worse off (prices are higher) as a result of the price change? Most
people will reply that you are better off; you own a house and houses are now more valuable. You have just bought a house. A month later, the price of houses goes down. Are you worse off (your
house is worth less) or better off (prices are lower)? Most people reply that you are worse off. The answers seem consistent. It seems obvious that if a rise in the price of housing makes you
better off, then a fall must make you worse off. It is obvious, but wrong. The correct answer is that either a rise or a fall in the price of housing makes you better off! We can see why using
[simple geometrical indifference curve analysis]. [Friedman then refers to his diagram which has “Amount of housing” on the vertical axis and “Dollars spent on everything else” on the horizontal
axis. He draws an initial budget line and finds the optimal point A (where the line is tangent to an indifference curve). He then shows that, whether we make the budget line steeper or more
shallow, since it still must pass through A (since the owner can always choose to retain his original consumption bundle after the price change) the resulting new point of tangency—in both
cases—by simple geometry must be on a higher indifference curve.] By looking at the figure, you should be able to convince yourself that the result is a general one; whether housing prices go up
or down after you buy your house, you are better off than if they had stayed the same. The argument can be put in words as follows: What matters to you is what you consume—how much housing and
how much of everything else. Before the price change, the bundle you had chosen—your house plus whatever you were buying with the rest of your income—was the best of those available to you; if
prices had not changed, you would have continued to consume that bundle. After prices change, you can still choose to consume the same bundle, since the house already belongs to you, so you
cannot be worse off as a result of the price change. But since the optimal combination of housing and other goods depends on the price of housing, it is unlikely that the old bundle is still
optimal. If it is not, that means there is now some more attractive alternative, so you are now better off; a new alternative exists that you prefer to the best alternative (the old bundle) that
you had before. The advantage of the geometrical approach to the problem is that the drawing tells us the answer. All we have to do is look at [the figure]. The initial budget line was tangent to
its indifference curve at point A, so any budget line that goes through A with a different slope must cut the indifference curve. On one side or the other of the intersection, the new budget line
is above the old indifference curve—which means that you now have opportunities you prefer to bundle A. What the drawing does not tell us is why. When we solve the problem verbally, we may get
the wrong answer (as at the beginning of this section, where I concluded that a fall in the price should make you worse off). But once we find the right answer, possibly with some help from the
figure, we not only know what is true, we also know why. (34-36)
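To make the quoted geometric argument concrete, here is an illustrative numerical sketch (not from the book) using an assumed Cobb-Douglas utility function. It verifies Friedman's static claim: bundle A is optimal at the initial price, yet after either a rise or a fall in the housing price, the best bundle on the new budget line through A yields strictly higher utility. The objection developed in the rest of this article is not to this computation but to its application across time:

```python
def utility(h, m, a=0.5):
    """Assumed Cobb-Douglas utility over housing h and other spending m."""
    return h ** a * m ** (1 - a)

def best_on_budget_through(h0, m0, price, a=0.5):
    """Best attainable utility on the budget line passing through
    (h0, m0) at the given housing price (closed form for Cobb-Douglas)."""
    wealth = price * h0 + m0
    h_star = a * wealth / price
    m_star = (1 - a) * wealth
    return utility(h_star, m_star, a)

# Bundle A: optimal at the initial price p = 1 with wealth 100
h_a, m_a = 50.0, 50.0
u_a = utility(h_a, m_a)

# Whether the price halves or doubles, re-optimizing beats staying at A
for new_price in (0.5, 2.0):
    assert best_on_budget_through(h_a, m_a, new_price) > u_a
print("both price changes raise utility above", u_a)
```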
Friedman’s analysis is obvious, but wrong. Its most fundamental error is an illegitimate application of a static optimization problem to the real world of markets which change over time. In other
words, Friedman assumes he can handle the phenomenon of a price change by finding the optimal bundle A at one price, then drawing a different line through that point, and finding the new optimum
bundle B. If B is on a higher indifference curve, Friedman interprets this to mean that the price change has benefited the agent.
This procedure is completely unjustified. The determination of the optimum bundle A only makes sense if the price is (and always will be) the original price. One cannot compare the utilities of two
static equilibrium points in order to say anything about a model that (more realistically) allows the possibility of changing prices.
Friedman feels his geometric analysis can adequately ‘capture’ the real world phenomenon of holding assets amidst price changes. But this step in his argument is not so self-evident. What Friedman’s
diagram really shows is that the agent would prefer to be endowed with bundle A and face the second (or third) price ratios. Friedman assumes that this is the same thing as the proposition that the
agent, initially buying bundle A, would prefer a price change. In many settings, this equivalence is perhaps justified. But it is certainly not in Friedman’s example, and his ‘refutation’ of the
verbal reasoning in the beginning of his section is consequently wrong.
Housing is peculiar in that it is a durable asset that also provides a flow of services. We can test the rigor of Friedman’s analysis by shifting to the two extremes of this spectrum. First, let us
suppose the good in question is not durable, like housing, but rather extremely perishable. Thus, let the vertical axis represent “Amount of food,” while the horizontal represents “Dollars spent on
everything else.” We have an original price of food relative to everything else, and our agent buys his optimum quantity. Now, a worldwide catastrophe causes all vegetation to die. (No one knows why,
not even those with a Ph.D. in physics.) Consequently, the price of a “unit” of food rises, say, to $1 billion. Silly writers for the Wall Street Journal and even lesser newspapers conclude that
humanity is doomed, and that everyone is much worse off as a result of the price increase. But these critics fail to realize that no one will go hungry, at least not as a result of the price
increase. If anyone had thought buying more food would be desirable, he or she would already have done so. In fact, everyone is much better off. A person can sell just a fraction of a unit of food,
and with the proceeds buy all manner of luxury goods that were previously outside of their budget sets.
Now suppose that the vertical axis represents “Number of gold coins.” An eighty-year old man, close to death, sells virtually all of his possessions and purchases their equivalent in gold coins at a
certain price, intending to bequeath them to his heirs. The day after his purchase, an advance in alchemy allows the easy transformation of copper into gold, such that the price of the latter falls
until it equals the price of the former. At first the man is terribly upset, for his heirs will no longer be able to afford the same bundles of goods that they would have under the previous price
structure. But his friend points out the error of this view: Before, the old man held on to a few hundred dollars in cash, feeling that the marginal gold coin was not worth its purchase price. But
now the man can afford to give his heirs one hundred additional gold coins, with only sacrificing one single dollar. Truly the price fall is a boon, not a curse.
The staunch defender of indifference curve analysis will no doubt be unconvinced by the above examples. If we want to model the more complicated process of buying (and selling) houses over time, then
our vertical axis should be interpreted to denote, not simply the number of houses purchased today, but rather the (contingency) plan specifying how many houses will be purchased, and at what dates,
for the rest of eternity, as a function of their spot market prices. Once we adjust the model to capture the real world phenomena we are trying to describe, the absurdities described above disappear.
This is certainly true, but then, as it was argued much earlier, we can no longer allow for a ‘price change,’ since this possibility has already been built into the original price (vector). One
cannot have it both ways; either the model incorporates time or it does not. If it does not, then we cannot use it to draw any conclusions regarding the effects of changing conditions. Friedman’s
result is so completely unexpected that he should have tested its ability to generate even more sweeping conclusions. For example, his figure would also ‘prove’ the really counterintuitive
proposition that a governmental decree prohibiting future housing sales would have no effect on anyone, even young couples who were planning on buying a house tomorrow. | {"url":"https://mises.org/mises-daily/case-david-friedman","timestamp":"2024-11-10T17:55:17Z","content_type":"text/html","content_length":"203979","record_id":"<urn:uuid:be79cddd-06fa-4025-ad00-806ad115dc10>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00710.warc.gz"} |
Higher Arithmetic: An Algorithmic Introduction to Number Theory
Higher Arithmetic: An Algorithmic Introduction to Number Theory
Softcover ISBN: 978-0-8218-4439-7
Product Code: STML/45
List Price: $59.00
Individual Price: $47.20
eBook ISBN: 978-1-4704-2153-3
Product Code: STML/45.E
List Price: $49.00
Individual Price: $39.20
Softcover ISBN: 978-0-8218-4439-7
eBook: ISBN: 978-1-4704-2153-3
Product Code: STML/45.B
List Price: $108.00 $83.50
• Student Mathematical Library
Volume: 45; 2008; 210 pp
MSC: Primary 11
Although number theorists have sometimes shunned and even disparaged computation in the past, today's applications of number theory to cryptography and computer security demand vast arithmetical
computations. These demands have shifted the focus of studies in number theory and have changed attitudes toward computation itself.
The important new applications have attracted a great many students to number theory, but the best reason for studying the subject remains what it was when Gauss published his classic
Disquisitiones Arithmeticae in 1801: Number theory is the equal of Euclidean geometry—some would say it is superior to Euclidean geometry—as a model of pure, logical, deductive thinking. An
arithmetical computation, after all, is the purest form of deductive argument.
Higher Arithmetic explains number theory in a way that gives deductive reasoning, including algorithms and computations, the central role. Hands-on experience with the application of algorithms
to computational examples enables students to master the fundamental ideas of basic number theory. This is a worthwhile goal for any student of mathematics and an essential one for students
interested in the modern applications of number theory.
Harold M. Edwards is Emeritus Professor of Mathematics at New York University. His previous books are Advanced Calculus (1969, 1980, 1993), Riemann's Zeta Function (1974, 2001), Fermat's Last
Theorem (1977), Galois Theory (1984), Divisor Theory (1990), Linear Algebra (1995), and Essays in Constructive Mathematics (2005). For his masterly mathematical exposition he was awarded a Steele
Prize as well as a Whiteman Prize by the American Mathematical Society.
Undergraduates, graduate students, and research mathematicians interested in number theory.
• Chapters
• Chapter 1. Numbers
• Chapter 2. The problem $A\square + B = \square $
• Chapter 3. Congruences
• Chapter 4. Double congruences and the Euclidean algorithm
• Chapter 5. The augmented Euclidean algorithm
• Chapter 6. Simultaneous congruences
• Chapter 7. The fundamental theorem of arithmetic
• Chapter 8. Exponentiation and orders
• Chapter 9. Euler’s $\phi $-function
• Chapter 10. Finding the order of $a\bmod c$
• Chapter 11. Primality testing
• Chapter 12. The RSA cipher system
• Chapter 13. Primitive roots $\bmod \, p$
• Chapter 14. Polynomials
• Chapter 15. Tables of indices $\bmod \, p$
• Chapter 16. Brahmagupta’s formula and hypernumbers
• Chapter 17. Modules of hypernumbers
• Chapter 18. A canonical form for modules of hypernumbers
• Chapter 19. Solution of $A\square + B = \square $
• Chapter 20. Proof of the theorem of Chapter 19
• Chapter 21. Euler’s remarkable discovery
• Chapter 22. Stable modules
• Chapter 23. Equivalence of modules
• Chapter 24. Signatures of equivalence classes
• Chapter 25. The main theorem
• Chapter 26. Modules that become principal when squared
• Chapter 27. The possible signatures for certain values of $A$
• Chapter 28. The law of quadratic reciprocity
• Chapter 29. Proof of the Main Theorem
• Chapter 30. The theory of binary quadratic forms
• Chapter 31. Composition of binary quadratic forms
• Appendix. Cycles of stable modules
• Answers to exercises
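To give a flavor of the algorithmic emphasis the table of contents describes, here is a small illustrative sketch (not taken from the book) of the augmented (extended) Euclidean algorithm of Chapter 5 and the modular exponentiation underlying the primality testing and RSA material of Chapters 11-12:

```python
def extended_euclid(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_euclid(b, a % b)
    return g, y, x - (a // b) * y

# gcd(240, 46) = 2, together with Bezout coefficients
g, x, y = extended_euclid(240, 46)
print(g, x, y)

# Fast modular exponentiation, the workhorse of primality testing
# and the RSA cipher system; 561 is a Carmichael number, so any base
# coprime to it satisfies a^560 = 1 (mod 561)
print(pow(7, 560, 561))  # → 1
```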
• Clean and elegant in the way it communicates with the reader, the mathematical spirit of this book remains very close to that of C.F. Gauss in his 1801 Disquisitiones Arithmeticae, almost as though Gauss had revised that classic for 21st-century readers.
CHOICE Magazine
• ...takes the reader on a colorful journey...
Mathematical Reviews
| {"url":"https://bookstore.ams.org/STML/45","timestamp":"2024-11-06T08:18:23Z","content_type":"text/html","content_length":"130900","record_id":"<urn:uuid:47f49c19-329b-4023-a669-96d8e7739bf5>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00708.warc.gz"}
How To Python Sort List Of Tuples
Hello geeks, and welcome! In today's article, we will cover how to sort a Python list of tuples, with various examples. Along with the examples, we will also look at the different methods by which this can be achieved. We will primarily focus on the built-in functions and on a sorting algorithm for sorting lists of tuples. The ordered nature and immutability of a tuple make it different from an ordinary list.
Different Method For Python Sort List Of Tuple
A list of tuples can be sorted like any ordinary list. But since a tuple can contain multiple elements, we are free to sort the list based on the first element or on the i-th element of each tuple. The primary methods that we will look at in detail are:
• Inbuilt list.sort()
• Bubble sort()
• Sorted()
1. Inbuilt list.sort() To Python Sort List Of Tuple
The syntax for tuple sort using the inbuilt list.sort() is:
list_name.sort(key=lambda x: x[i])
Here, i refers to the index of the tuple element used as the sort key.
# sort using the inbuilt list.sort()
# each tuple has the structure (name, age)
list_siblings = [("Rohit", 19), ("Rishabh", 14), ("Ram", 23), ("Navya", 17), ("Aditi", 22)]
list_siblings.sort(key=lambda x: x[1])
print(list_siblings)
So in the above example, we used the inbuilt list.sort() method to sort the list of tuples using the syntax stated above, and we get a perfectly sorted list arranged in ascending order based on the ages of the siblings. We can also get the list sorted in descending order by making a minute change in the above code.
list_siblings.sort(key=lambda x: x[1], reverse=True)
This change would yield a descending-order series.
2. Sort List Using Bubble Sort
Now let us look at siblings’ problems and try to solve them using the bubble sort method. Here our motive is to sort the elements based on the 0th term of the tuple.
list_ = [("Rohit", 19), ("Rishabh", 14), ("Ram", 23), ("Navya", 17), ("Aditi", 22)]
list_length = len(list_)
for i in range(0, list_length):
    for j in range(0, list_length - i - 1):
        if list_[j][0] > list_[j + 1][0]:
            # swap adjacent tuples that are out of order
            list_[j], list_[j + 1] = list_[j + 1], list_[j]
print(list_)
[('Aditi', 22), ('Navya', 17), ('Ram', 23), ('Rishabh', 14), ('Rohit', 19)]
In the above example, our motive was to sort the tuples based on the 0th element, that is, the sibling name, using the bubble sort method. We can observe that we have to write much more code than with the inbuilt method, and bubble sort is also not considered the best method in terms of time and memory. On the positive side, it is the simplest of all and easy to construct and understand.
3. sorted() Method To Python Sort List Of Tuple
The sorted() function sorts the given input in a specific order and returns the sorted iterable as a list. Now let us look at an example related to this method, which will make things much clearer for us.
late = ('19', '17', '14', '10', '11')
print(sorted(late))

l = [("Rohit", 19), ("Rishabh", 14), ("Ram", 23), ("Navya", 17), ("Aditi", 22)]
print(sorted(l, key=lambda x: x[1]))
['10', '11', '14', '17', '19']
[('Rishabh', 14), ('Navya', 17), ('Rohit', 19), ('Aditi', 22), ('Ram', 23)]
Here we have two different examples where we have used the sorted() method. In the first example, we have taken a variable holding a tuple of numbers; sorting it returns a list arranged in ascending order. For our second example, we have reused the data from the earlier methods; here, too, we get a list arranged in ascending order, this time by the ages of the different siblings.
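As a small extension not covered above, `operator.itemgetter` from the standard library is a common (and slightly faster) alternative to a lambda for this kind of key. This sketch reuses the siblings data from the earlier examples:

```python
from operator import itemgetter

siblings = [("Rohit", 19), ("Rishabh", 14), ("Ram", 23),
            ("Navya", 17), ("Aditi", 22)]

# Sort by age (index 1) without writing a lambda
by_age = sorted(siblings, key=itemgetter(1))
print(by_age)

# Sort by descending age, breaking ties by name, by passing two indices
by_age_then_name = sorted(siblings, key=itemgetter(1, 0), reverse=True)
print(by_age_then_name)
```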
In this article, we covered sorting a Python list of tuples. We looked at various methods, their syntax, and examples. We also saw that a list of tuples can be sorted similarly to an ordinary list. I hope this article was able to clarify your doubts. If any doubt remains, feel free to comment below. Done reading this? Why not read about NumPy any() next.
| {"url":"https://www.pythonpool.com/python-sort-list-of-tuples/","timestamp":"2024-11-05T09:19:46Z","content_type":"text/html","content_length":"141701","record_id":"<urn:uuid:aee2bec6-2b19-4c86-bc0a-6152e5d72bc0>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00466.warc.gz"}
Make a checkerboard matrix
Problem 4. Make a checkerboard matrix
Given an integer n, make an n-by-n matrix made up of alternating ones and zeros as shown below. The a(1,1) should be 1.
Input n = 5
Output a is [1 0 1 0 1
             0 1 0 1 0
             1 0 1 0 1
             0 1 0 1 0
             1 0 1 0 1]
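For readers who want a reference implementation outside MATLAB, the same pattern can be built with a short Python sketch: element (i, j) is 1 exactly when i + j is even (zero-based indices), which puts a 1 in the top-left corner as required.

```python
def checkerboard(n):
    # a[i][j] = 1 when i + j is even, so the top-left entry is 1
    return [[1 if (i + j) % 2 == 0 else 0 for j in range(n)]
            for i in range(n)]

for row in checkerboard(5):
    print(row)
```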
Solution Stats
33.41% Correct | 66.59% Incorrect
Problem Comments
This problem is far less trivial than it seems. Fun!
All solutions with score 10 or 11 use the regexp cheat.
I like this one! came to know about the invhilb function just because of this!
This solution works on matlab but not here.
function a = checkerboard(n)
a = (sin(theta)'*sin(theta))+cos(theta)'*cos(theta);
Puzzled me a bit but I'm proud of the final result even if i did 47 lines of code for this haha!
Nice - I can think of a few ways to do this. I dislike for loops in general so i try to minimise use, but ended up with a single reduced for loop and an eye commands. Enjoy!
function a = checkerboard(n)
a = ones(n);
end
This code gives me the right answer in MATLAB, but it fails here.
Good problem. Helps to understand the uses of logical operators.
The even part n=4 is tricky, Hint: google it
%% checkerboard
function [board] = Chekerboard(n)
%% set board matrix with zeros
board = zeros(n,n);
for j = 1:n
    if mod(j,2) == 1   % odd rows get ones in the odd columns, so board(1,1) == 1
        for i = 1:2:n
            board(j,i) = 1;
        end
    else               % even rows get ones in the even columns
        for t = 2:2:n
            board(j,t) = 1;
        end
    end
end
end
this works and easy to understand~
function a = checkerboard(n)
if mod(n,2) == 1
    a = ones(n,n);
    a(2:2:n^2) = 0;        % linear indexing alternates correctly when n is odd
else
    b = ones(n+1);
    b(2:2:(n+1)^2) = 0;    % build an odd-sized board, then trim it
    a = b(1:n, 1:n);
end
end
I just made
a = ones(n);
a(2:2:n, 1:2:n) = 0;
a(1:2:n, 2:2:n) = 0;
rather easy actually
What is the "size" of the solved problem? How is it calculated?
Fun problem with different solutions!
Nice problem, but tests 6 and 8 seem to make no sense. They use same input to the function, but expects different output, thus making it impossible to solve it.
Can someone please confirm?
@Adam, that test case was left incomplete, as there was some problem on my end as I was updating the problem.
I have now edited the test suite and have re-scored your solution as well.
| {"url":"https://se.mathworks.com/matlabcentral/cody/problems/4?s_tid=prof_contriblnk","timestamp":"2024-11-06T08:17:37Z","content_type":"text/html","content_length":"144980","record_id":"<urn:uuid:1f6ce981-fea7-4a99-8f0f-abe4fd8d8997>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00511.warc.gz"}
ZK-SNARKS & The Last Challenge Attack: Mind Your Fiat-Shamir!
OpenZeppelin Security | December 14, 2023
By Oana Ciobotaru, Maxim Peter and Vesselin Velichkov
OpenZeppelin recently identified a critical vulnerability during an audit of Linea‘s PLONK verifier. At its core, the issue is similar in nature to some previously disclosed vulnerabilities (e.g.,
Frozen Heart, 00). The ‘Last Challenge’ vulnerability arises from the ability of a malicious prover to exploit the degrees of freedom introduced by an incorrect application of the Fiat-Shamir
transform when computing the final PLONK challenge. A malicious prover exploiting this vulnerability could steal all the assets in the rollup by submitting a proof for an invalid state transition.
While the issue was promptly communicated and fixed, we believe the specifics are worth sharing more broadly so that others may learn to recognize similar patterns and protect themselves accordingly.
Prior to describing the issue in detail below, we give a brief high-level introduction to Zero-Knowledge Succinct Non-interactive ARguments of Knowledge (zkSNARKs) and the Fiat-Shamir transform.
A high-level walkthrough of zkSNARKs
A zkSNARK system involves two parties: a Prover and a Verifier. The Prover aims to convince the Verifier that the result of some computation F is correct, providing a succinct proof that is more
efficient to verify than performing the entire computation again. This is especially valuable in the context of a blockchain: instead of having to re-execute all transactions to verify the validity
of a state transition, it is cheaper for nodes to verify a zkSNARK proof attesting to the same property. Such a proof can theoretically provide a Layer 2 (L2) with a similar level of security as
direct execution on Layer 1 (L1). This comes at the cost of latency but with the benefit of improved scalability.
Figure 1: A ZK rollup’s state being updated in its L1 contract (adapted from vitalik.ca)
The computation F mentioned above can be represented as an arithmetic circuit. An arithmetic circuit is a sequence of simple arithmetic operations, such as additions and multiplications, also called
gates, connected by wires. The inputs to the circuit are of two types: public values (public input), known by both the Prover and the Verifier, and private values (witness), known only to the Prover
and representing the internal values of the computation.
With a zkSNARK, the Prover proves knowledge of the witness without revealing it to the Verifier (i.e., the zero-knowledge property, protecting the honest Prover against a potentially malicious
Verifier), and testifies that the computation F was executed correctly (i.e., the soundness property, protecting the honest Verifier against a potentially malicious Prover).
Below, an example circuit corresponding to the function g(x[1], x[2], w[1]) = (x[1] + x[2]) * (x[2] + w[1]) is shown. For this circuit, a Prover can use a zkSNARK to prove that, for some assignment
of the public inputs (x[1], x[2], y), it knows an assignment of the witness w[1] such that g(x[1], x[2], w[1]) == y without revealing any additional information about w[1].
Figure 2: Simple example of an arithmetic circuit with two addition and one multiplication gates
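The statement being proven for this circuit can be checked directly in plain code. A toy sketch (the specific numbers are invented for illustration):

```python
def g(x1, x2, w1):
    # The example circuit: two addition gates feeding one multiplication gate.
    return (x1 + x2) * (x2 + w1)

# Public values (x1, x2, y) known to both parties; w1 is the prover's secret.
x1, x2, w1 = 3, 4, 6
y = g(x1, x2, w1)

# What a zkSNARK would establish: "I know some w1 with g(x1, x2, w1) == y",
# without this direct check, which reveals w1.
assert g(x1, x2, w1) == y
```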
Given such an arithmetic circuit, one would choose an appropriate arithmetisation (e.g., R1CS, AIR, Plonkish) and, implicitly, an NP language for which one would build a zkSNARK scheme. While these
details are valuable to any practitioner out there, they are outside the scope of this blog. We recommend the following external resources covering such topics:
Going further, a zkSNARK system has two main building blocks: a Polynomial Interactive Oracle Proof (PIOP) and a Polynomial Commitment Scheme. Both of them, in their original form, are interactive,
meaning that they proceed as a series of communication rounds between the Prover and the Verifier. Intuitively, a PIOP (with a first version introduced in PLONK and called Polynomial Protocol, and
later generalized to PIOP in BFS19) abstracts away, in the form of a set of polynomial identities, the atomic constraints described by the arithmetic circuit. A PIOP is compiled into a zkSNARK system
via one of various existing zkSNARK compilers (as described in either PLONK, Marlin, BFS19, etc.) by additionally involving a polynomial commitment scheme.
The interactive nature of the scheme means that the Prover and the Verifier have to wait for each other before moving on to the next step. In practice, interactivity introduces significant latency
between the steps and requires a Prover to produce a new proof for each new Verifier.
The Fiat-Shamir Transform
In order to avoid the interactivity bottleneck, there exists the Fiat-Shamir transform. This transform can be applied to any sound and constant round public-coin interactive protocol to yield a
functionally equivalent, sound and non-interactive protocol. Note that in a public-coin interactive two-party protocol, all that the interactive Verifier does in terms of communication is to compute
random challenges and send them to the interactive Prover.
A simplified example of the Fiat-Shamir transform is given in the figure below.
Figure 3: Fiat-Shamir transform of a simplified interactive protocol
It has been shown that the Fiat-Shamir transform is secure in the random oracle model. In practice, however, the random oracle model is instantiated by a hash function.
Going back to our example introduced in Figure 3, both the Prover and the Verifier use a hash function to compute the challenge c as the hash of the commitment com[f] and the public input x for that instance. This way, any modification of com[f] by a malicious Prover would also change c, preventing it from choosing both com[f] and c to its advantage.
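As a concrete sketch of that hashing step (SHA-256 and the toy modulus are my own choices here, not what any particular proof system specifies):

```python
import hashlib

def fs_challenge(transcript: bytes, modulus: int) -> int:
    # Derive a Fiat-Shamir challenge by hashing the transcript so far and
    # reducing into the field. Real systems pin down the transcript encoding
    # and the reduction much more carefully than this.
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, "big") % modulus

p = 2**61 - 1                  # toy prime "field" modulus
com_f = b"commitment-to-f"     # placeholder commitment bytes
x = b"public-input"

c = fs_challenge(com_f + x, p)
assert 0 <= c < p
# Changing the commitment changes the challenge, so the prover cannot fix c
# first and then search for a matching com_f.
assert c != fs_challenge(b"another-commitment" + x, p)
```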
In the following, we will focus on a concrete zkSNARK, namely PLONK. More specifically, we will look at a bug found in the Linea implementation of PLONK’s Fiat-Shamir transform.
PLONK is a ZK succinct argument system that, given an arithmetic circuit C, proves that a party (i.e., the Prover) knows a witness w corresponding to a public input x such that (x,w) satisfies the
arithmetic circuit C. In most common variants of PLONK, the polynomial commitment is instantiated with a KZG polynomial commitment and the resulting proof is a set of commitments (in fact, the
commitments are points on a pairing-friendly elliptic curve) and field elements. The interactive version of PLONK includes several rounds of communication between the Prover and the Verifier.
The protocol can be made non-interactive by using the Fiat-Shamir transform: for each interactive round, the non-interactive PLONK Prover self-generates the corresponding set of challenges by hashing
the transcript of the communication up to that round. The transcript up to a certain round is defined as the concatenation of the public parameters that define the circuit, the public inputs that
define the instance, and the proof elements computed by the PLONK Prover up to that certain round. The version of PLONK examined in this blog post is the latest one available at the time of writing
(from the 17th of August 2022), in which the transcript and the corresponding challenges computed at each round are shown below:
where pp contains the public parameters and the public inputs, while H denotes a hash function.
Notice that u (the last challenge) is not used in any of the PLONK Prover rounds. In fact, u is only used by the PLONK Verifier to batch-validate multiple evaluations at two different evaluation
points. Hence, rightfully, the following question arises: given that the PLONK Verifier only needs u to be random and u is not used by the PLONK Prover, does an “efficiency-oriented” Verifier really
need to follow the specified protocol and compute u as the hash of the full transcript?
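Before answering, it helps to see concretely what skipping transcript elements does to a hash-derived challenge. A toy sketch (placeholder byte strings and a toy modulus, not Linea's actual code): a challenge derived from the full transcript reacts to every proof element, while one derived from a partial transcript leaves the omitted elements free.

```python
import hashlib

def challenge(*parts: bytes) -> int:
    # Toy Fiat-Shamir challenge over a concatenated transcript.
    digest = hashlib.sha256(b"".join(parts)).digest()
    return int.from_bytes(digest, "big") % (2**61 - 1)

prior = b"transcript-through-round-4"   # placeholder for pp and earlier proof elements
W_z, W_zw = b"W_z-commitment", b"W_zw-commitment"

u_full = challenge(prior, W_z, W_zw)    # u from the FULL transcript
u_lazy = challenge(prior)               # u ignoring the W commitments

# With the full derivation, swapping in a forged W commitment changes u:
assert challenge(prior, b"forged-W", W_zw) != u_full
# With the lazy derivation, u is the same no matter what W the prover sends,
# which is exactly the degree of freedom the Last Challenge Attack exploits.
assert u_lazy == challenge(prior)
```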
The Last Challenge Attack
In the case of PLONK, not computing the challenges using the full transcript exposes the verifier to potential vulnerabilities. As an example, the Last Challenge Attack is applicable to
implementations of the PLONK Verifier in which the last challenge u is not derived using the proof elements [W[𝔷]][1] and [W[𝔷][ω]][1] (which are part of the full transcript).
Prior to describing the steps of the attack, we briefly recall the relevant part of the PLONK Verifier. The PLONK verifier works in 12 steps, the last of which represents the verification of the
following bi-linear pairing equation:
Note that in the equation above, [F][1] and [E][1] are computed by the Verifier using the commitments and evaluations sent by the Prover.
The Last Challenge Attack proceeds as follows:
Figure 4: Visual walkthrough of the attack
In the case of a ZK rollup on Ethereum, the circuit simulates the Ethereum Virtual Machine (EVM). An honest prover can thus prove that blocks of transactions were executed correctly, and a certain
state transition is valid. However, a verifier implementation vulnerable to the Last Challenge Attack makes it possible for a malicious prover to forge a proof for an invalid state transition. By
doing so, a malicious prover can set itself as the owner of all the assets sitting in the rollup.
Figure 5: Possible ZK rollup’s state after accepting a malicious proof
The Fiat-Shamir transform is a common source of security vulnerabilities in zkSNARK systems. Non-interactive verifiers should follow the standard specifications which, in the case of PLONK, derive
the Fiat-Shamir challenges from the entire transcript. This derivation ensures that any change to transcript elements by a malicious prover also modifies the challenges. As seen above, deviations
from the standard can lead to the construction of proofs of false statements by malicious provers, with potentially disastrous consequences. Mind your Fiat-Shamir's! | {"url":"https://blog.openzeppelin.com/the-last-challenge-attack?utm_campaign=Zero%20Knowledge%20Proof%202023&utm_source=Twitter","timestamp":"2024-11-08T15:53:34Z","content_type":"text/html","content_length":"102473","record_id":"<urn:uuid:0c6d65a5-0001-46dc-afa9-07aa2f8066d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00025.warc.gz"} |
The CHEK2 challenge
Variants in the ATM & CHEK2 genes are associated with breast cancer. For this experiment, predictors are asked to estimate the probability of an individual with a given mutation being in the case
(cancer) or control (healthy) cohort. The data available include the targeted resequencing of two genes (ATM and CHEK2) from approximately 1250 breast cancer cases and 1250 controls. The ATM
sequencing results have already been published, and will thus serve as an example set (Tavtigian et al., 2009, American Journal of Human Genetics; doi: 10.1016/j.ajhg.2009.08.018).
Predictors will be provided with 41 rare missense, nonsense, splicing, and indel variants in CHEK2.
Prediction challenge
Predictors are asked to classify variants as occurring in cases or controls. Predictors will provide their estimate of the probability of individuals with a given variant being in the case set.
Control probability is implicitly 1 – P(case). Correctness of each prediction will be weighted according to
(a) how accurately P(case) was predicted
(b) the confidence measure provided
(c) the number of study participants with the variant.
While prediction for a single individual may not be meaningful in all cases, the sum across all predictions should give an informative measure of prediction accuracy. In addition, we ask predictors
to submit the raw output data of the prediction algorithm.
Predictions are restricted to single residue mutations and are based on a statistical analysis of the correlation between mutation type and disease computed from the annotation data derived from the
July 2010 release of UniProtKB.
The probability for a mutation X-->Y to be found in cases is computed as the ratio between the number of mutations X-->Y related to disease and the total number of X-->Y mutations in the data set, as
derived from UniProtKB (release July 2010)
Standard deviations are evaluated with the binomial approximation.
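In code, that estimate and its binomial-approximation standard deviation look roughly like this (the counts are invented for illustration; the real numbers come from the UniProtKB-derived annotation data):

```python
import math

def case_probability(n_disease: int, n_total: int):
    # P(case) for a mutation type X-->Y: disease-associated occurrences of
    # X-->Y over all occurrences of X-->Y in the annotation data, plus the
    # binomial approximation to its standard deviation.
    p = n_disease / n_total
    sd = math.sqrt(p * (1 - p) / n_total)
    return p, sd

p, sd = case_probability(18, 30)   # hypothetical counts for one mutation type
assert abs(p - 0.6) < 1e-12
assert abs(sd - math.sqrt(0.6 * 0.4 / 30)) < 1e-12
```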
Data provided by | {"url":"http://genomeinterpretation.org/cagi1-chek2.html","timestamp":"2024-11-08T06:01:17Z","content_type":"text/html","content_length":"32033","record_id":"<urn:uuid:a0454691-10aa-4e3d-bef0-d5bb6e46029c>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00303.warc.gz"} |
Probability Rome Conference 2024
Large-width asymptotics and training dynamics of $\alpha$-Stable ReLU neural networks
There is a recent and growing literature on large-width properties of Gaussian neural networks (NNs), namely NNs whose weights are Gaussian distributed. In such a context, two popular results are: i)
the characterization of the large-width asymptotic behavior of NNs in terms of Gaussian stochastic processes; ii) the characterization of the large-width training dynamics of NNs in terms of the
so-called neural tangent kernel (NTK), showing that, for a sufficiently large width, the gradient descent achieves zero training error at a linear rate. We present large-width asymptotics and
training dynamics of $\alpha$-Stable NNs, namely NNs whose weights are distributed according to $\alpha$-Stable distributions, with $\alpha\in(0,2]$. First, for $\alpha$-Stable NNs with a ReLU
activation function, we show that if the NN's width goes to infinity then a rescaled NN converges weakly to an $\alpha$-Stable stochastic process, generalizing Gaussian processes. As a difference
with respect to the Gaussian setting, our result shows that the choice of the activation function affects the scaling of the NN, that is: to achieve the infinitely wide $\alpha$-Stable process, the
ReLU activation requires an additional logarithmic term in the scaling with respect to sub-linear activations. Then, we characterize the large-width training dynamics of $\alpha$-Stable ReLU-NNs in
terms of a random kernel, referred to as the $\alpha$-Stable NTK, showing that, for a sufficiently large width, the gradient descent achieves zero training error at a linear rate. The randomness of
the $\alpha$-Stable NTK is a further difference with respect to the Gaussian setting, that is: within the $\alpha$-Stable setting, the randomness of the NN at initialization does not vanish in the
large-width regime of the training. An extension of our results to deep $\alpha$-Stable NNs is discussed.
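A minimal numerical sketch of the setting (my own simplification: the standard Chambers-Mallows-Stuck sampler for symmetric alpha-stable variables, and a one-hidden-layer ReLU network rescaled by width**(1/alpha); the abstract's point is that ReLU actually requires an extra logarithmic factor in that scaling, which is omitted here):

```python
import math
import random

def stable_sample(alpha: float, rng: random.Random) -> float:
    # Chambers-Mallows-Stuck sampler for a symmetric alpha-stable variable.
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(u)  # alpha = 1 is the Cauchy case
    return ((math.sin(alpha * u) / math.cos(u) ** (1 / alpha))
            * (math.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

def shallow_relu_nn(x: float, width: int, alpha: float, rng: random.Random) -> float:
    # One hidden ReLU layer with alpha-stable weights, output rescaled by
    # width**(1/alpha) (the Gaussian CLT scaling corresponds to alpha = 2).
    hidden = [max(0.0, stable_sample(alpha, rng) * x) for _ in range(width)]
    return sum(stable_sample(alpha, rng) * h for h in hidden) / width ** (1 / alpha)

rng = random.Random(0)
out = shallow_relu_nn(1.0, width=1000, alpha=1.5, rng=rng)
assert math.isfinite(out)
```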
Area: CS24 - Neural Networks at initialization (Michele Salvi)
Keywords: $\alpha$-Stable stochastic process; gradient descent; infinitely wide limit; large-width training dynamics; neural network; neural tangent kernel; ReLU activation function
| {"url":"https://probabilityrome2024.it/pr2024/papers/314/","timestamp":"2024-11-13T05:09:44Z","content_type":"application/xhtml+xml","content_length":"27334","record_id":"<urn:uuid:749dbc82-27c8-4c47-a7ff-00fd743a0070>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00468.warc.gz"}
tree of life superstring theory part 144
Each of the 64 hexagrams used in the I Ching system of divination is a 6-fold combination of lines (denoting Yang) or broken lines (denoting Yin). It consists of a pair of trigrams (triplets of lines/broken lines). The basic set of 8 trigrams comprises two subsets of 4. The 64 hexagrams are the conjunctions or intersections of 8 rows and 8 columns of the 8 trigrams, i.e., 8 pairs of a complete set of trigrams. Therefore, they comprise 8 pairs of two subsets of 4 trigrams, each subset comprising 12 lines/broken lines.
Compare this with the 8 triacontagons that are the Petrie polygons of the 421 polytope. The vertex angle of a sector of a triacontagon is 12° and the interior angle subtended by two sides of a pair
of adjacent sectors is 168°, whilst the sum of their vertex angles is 24°. As the 64 hexagrams consist of 8 pairs of two subsets of 12 lines/broken lines and the 8 triacontagons comprise 8 types
(differing in size) of pairs of adjacent sectors and their diametric opposites, the following correspondence:
angle of one degree → line/broken line
is implied. It means, considering only vertex angles, that:
• a sector (S) with a 12° vertex angle → to a subset of 4 trigrams with 12 lines/broken lines;
• two adjacent S (S') with a combined 24° vertex angle → to a complete set of 8 trigrams with 24 lines/broken lines;
• a pair (P) of S' and its diametric opposite with a combined 48° vertex angle → to a set of 8 hexagrams with 48 lines/broken lines;
• the 8 types of P in the 8 triacontagons with a combined 384° vertex angle → to the 8 rows of hexagrams with 384 lines/broken lines.
We see that a single P in a triacontagon with 8 half-sectors corresponds to (or is equivalent to) one row of hexagrams, whilst a set of 8 P's in the 8 triacontagons with (8×8=64) half-sectors
corresponds to the 8 rows of 8 hexagrams, a total of 64 hexagrams. A half-sector with a vertex angle of 6° corresponds to a hexagram (pair of trigrams).
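Treating the hexagram counts purely as combinatorics, the arithmetic in this section can be checked mechanically. A small verification (my own encoding: a trigram as a 3-tuple of bits, 1 for a solid line, 0 for a broken line):

```python
from itertools import product

# The 8 trigrams: every 3-tuple of solid (1) / broken (0) lines.
trigrams = list(product((0, 1), repeat=3))

# The 64 hexagrams: ordered pairs of trigrams, flattened to 6 lines each.
hexagrams = [upper + lower for upper, lower in product(trigrams, repeat=2)]
assert len(hexagrams) == 64

# 8 "diagonal" hexagrams (a trigram paired with itself): 8 x 6 = 48 lines.
diagonal = [t + t for t in trigrams]
assert sum(len(h) for h in diagonal) == 48

# The 56 off-diagonal hexagrams carry 336 line positions,
# split 168 solid / 168 broken (84 + 84 in each triangular half).
off_diag = [u + l for u, l in product(trigrams, repeat=2) if u != l]
solid = sum(sum(h) for h in off_diag)
assert (len(off_diag), solid, 6 * len(off_diag) - solid) == (56, 168, 168)

# Triacontagon angles: 360/30 = 12 degree sectors, 168 degree interior angles.
assert 360 / 30 == 12 and (30 - 2) * 180 / 30 == 168
```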
If we now consider the interior angles as well as the vertex angles, then the 4 generic sectors in a triacontagon have a combined vertex angle of 48° and a combined interior angle of (84°+84°+84°+84°
=336°). This corresponds to the 48 lines/broken lines in the 8 diagonal hexagrams and to the 84 lines and 84 broken lines in the 28 off-diagonal hexagrams in each diagonal half of the 8×8 array. One
base angle of 84° corresponds to the 84 lines; the base angle in the adjacent sector corresponds to the 84 broken lines. Their reflected counterparts correspond to the 84 lines and 84 broken lines in
the opposite half of the 8×8 array. P displays the 24:168, 84:84 & 12:12 divisions that are characteristic of holistic systems (see The holistic pattern). The sum of the vertex angles for one
triacontagon is (24 + 24 = 48) and the sum for the other 7 is (7×48 = 336 = 168 + 168), which is the sum of the two diametrically opposite interior angles. This compares with the 8 diagonal hexagrams having
48 lines/broken lines and the 56 off-diagonal hexagrams having 336 lines/broken lines (168 in each half). It is as though one triacontagon (we need not specify here which one) corresponds to the 8
hexagrams forming the diagonal and the 7 other triacontagons correspond to the 7 copies of these two similar sets of 8 trigrams that make up the 56 hexagrams in the remainder of the 8×8 array. | {"url":"https://www.64tge8st.com/post/2018/04/29/tree-of-life-superstring-theory-part-144","timestamp":"2024-11-01T20:31:18Z","content_type":"text/html","content_length":"1050486","record_id":"<urn:uuid:e005a95a-7f08-468c-811d-20afd9038ff8>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00168.warc.gz"} |
Coding Curves 11: Roses
Chapter 11 of Coding Curves
Now we come to another one of my favorite types of curves – roses or rose curves. To me, these look a lot like circular Lissajous curves, or very regular harmonographs. In fact, they are a special
instance of hypotrochoids, but special enough to look at on their own. Just to give you some instant visuals, here’s a rose curve:
Like many other curves we’ve looked at, we can get a parameterized formula that will take a t value that goes from 0 to 2 * PI and give us back a value that will let us plot the curve. Let’s look
back at the formula for a circle first…
function circle(x, y, r) {
for (t = 0; t < 2 * PI; t += 0.01) {
x1 = x + cos(t) * r
y1 = y + sin(t) * r
lineTo(x1, y1)
You could simplify that into a single line within the for loop, but I wanted to spread it out for clarity.
A rose curve uses the same strategy, but instead of a fixed radius, that radius is constantly changing, also based on the t value, as well as another parameter. Here’s the formula for the radius:
So we have two new variables here. a is the overall radius of the rose, and n controls the number of petals in the rose (the petal count gets a bit complicated; we'll come back to that shortly). So we can make a rose function like this:
function rose(x, y, a, n) {
for (t = 0; t < 2 * PI; t += 0.01) {
r = a * cos(n * t)
x1 = x + cos(t) * r
y1 = y + sin(t) * r
lineTo(x1, y1)
And now, if you want to, you can clean this up a bit:
function rose(x, y, a, n) {
for (t = 0; t < 2 * PI; t += 0.01) {
r = a * cos(n * t)
lineTo(x + cos(t) * r, y + sin(t) * r)
For now, let’s just say that n should be a positive whole number. But we’ll explore ranges beyond that of course.
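The same sketch can be ported to plain Python, returning sample points instead of drawing (a hedged translation: `lineTo` and the drawing surface are left out):

```python
import math

def rose_points(x, y, a, n, step=0.01):
    # Sample the rose r = a*cos(n*t) for t in [0, 2*pi), as (x1, y1) pairs.
    pts = []
    t = 0.0
    while t < 2 * math.pi:
        r = a * math.cos(n * t)
        pts.append((x + math.cos(t) * r, y + math.sin(t) * r))
        t += step
    return pts

pts = rose_points(0, 0, 100, 5)
assert pts[0] == (100.0, 0.0)   # at t = 0 the radius is the full a
assert len(pts) == 629          # roughly 2*pi / 0.01 samples
```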
Now we can draw our first rose like so:
width = 800
height = 800
canvas(800, 800)
rose(width / 2, height / 2, width * 0.45, 5)
I’ll be using the width * 0.45 a lot here. It just makes the radius a bit less than half the size of the canvas, so the curve will go almost to the edge of the canvas, but never hit it.
And this gives us a 5-petal rose:
The first example at the top of this page used an n of 7. And here is a rose with an n of 11:
So far we’re seeing a good correlation between n and the number of petals. At least for odd values of n. But what if we use an n of 4?
Interesting. This gives us eight petals. This holds true for any value of n. Odd values create n petals. Even values create 2 * n petals. Just to go way out in one direction, here’s one with n = 40,
which gives 80 petals. I had to up the resolution – incrementing t in the for loop by 0.001 to keep it from getting jagged.
In the opposite direction, going down to n = 1, gives you a single node:
A bit strange, but it works out mathematically. You’ll find that for negative values, the rose looks the same as for positive values of n. Here’s 5 on the left and -5 on the right:
Unsurprisingly, n = 0 gives us nothing. And so that covers all the whole number roses. If that’s all there was to roses, it would be nice, but there’s a lot more to go.
An Alternate Rose
Actually, before I move beyond whole numbers of n, I want to just mention an alternate rose formula. Instead of using cosine in the radius formula, you can use sine instead:
This gives you the same roses as the original, but rotated. Here’s a 5-petal rose using the original cosine on the left and sine on the right:
And the same for a 8-petal rose (n = 4):
The actual amount of rotation is PI / (2 * n) radians, or 90 / n degrees. For odd values of n, this always has the visual effect of rotating the rose by 90 degrees (the actual rotation may be
different, but due to rotational symmetry, it appears to rotate 90 degrees). For even values of n, it rotates the rose so the petals will now be where the spaces between the petals were in the
original version.
Fractional values of n
Things start to get more interesting when we start using fractional values for n. We can try it generating a rose with:
rose(width/2, height/2, width * 0.45, 5.0 / 4.0)
But this gives us, rather disappointingly, the following:
The problem is that it’s going to have to go beyond 2 * PI to finish its cycle. How far beyond? Well, to figure that out programmatically, we’ll need to first ensure that the n value is rational. If
it’s an irrational number, the rose will continue forever without reaching its exact starting point. We’ll also need to know both the numerator and denominator of that fraction. We can adjust the
rose function to take an extra value, so we have n and d for numerator and denominator.
function rose(x, y, a, n, d) {
for (t = 0; t < 2 * PI; t += 0.01) {
r = a * cos(n / d * t)
lineTo(x + cos(t) * r, y + sin(t) * r)
This doesn’t solve the problem yet, but gets us the first step. If you want you can enforce n and d to be integers to make sure you’re getting a rational fraction, but make sure you convert them so
the division in line 3 returns a floating point value.
Now we need to change the for loop limit from 2 * PI to the actual value we need. That limit value is:
But what is this new m value there? Well, m should be equal to 1 if d * n is odd. And m should be 2 if d * n is even. Woo! A bit complex. But we can simplify it.
We usually test for evenness by taking a number modulo 2. If the result is 0, that means the number is even. If the result is 1, the original number is odd. So we want:
m = 1 when d * n % 2 == 1
m = 2 when d * n % 2 == 0
So we can say:
m = 2 - d * n % 2
This gives us:
function rose(x, y, a, n, d) {
m = 2 - d * n % 2
limit = PI * d * m
for (t = 0; t < limit; t += 0.01) {
r = a * cos(n / d * t)
lineTo(x + cos(t) * r, y + sin(t) * r)
Remember, if you are enforcing integers for n and d, you might need to do some casting or conversion to make everything work correctly. I’ll leave that to you. Now we can redo the fractional one like
rose(width/2, height/2, width * 0.45, 5, 4)
And now we get something much nicer:
This time, the rose continued all the way around and completed itself.
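The limit formula can be sanity-checked numerically: for any rational n/d, the point at t = limit should land back on the starting point (a, 0) relative to the center. A quick check (test values are my own):

```python
import math

def rose_limit(n, d):
    # Upper bound for t: PI * d * m, with m = 1 if d*n is odd, else 2.
    m = 2 - (d * n) % 2
    return math.pi * d * m

def rose_point(t, a, n, d):
    r = a * math.cos(n / d * t)
    return (math.cos(t) * r, math.sin(t) * r)

# The curve closes: the sample at t = limit coincides with the one at t = 0.
for n, d in [(5, 4), (22, 21), (1, 3), (17, 52), (81, 80)]:
    x0, y0 = rose_point(0, 1, n, d)
    x1, y1 = rose_point(rose_limit(n, d), 1, n, d)
    assert abs(x0 - x1) < 1e-9 and abs(y0 - y1) < 1e-9
```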
Now you can go to town trying different fractions. I find that things get really interesting when you use higher numbers that are very close to each other. For example, n = 22, d = 21:
Or even 81 and 80:
Roses with fractions less than 1
Things become a whole different type of interesting when you get fractions that are less than 1.0. For example, here are roses with n and d of 1,2 on the left, 1,3 in the middle, and 1,4 on the right.
A trick to find interesting patterns is to take a pair of numbers that would usually reduce down, like 17 / 51 will reduce to 1 / 3, giving us the middle figure above. But then shift one of the
values a bit. Here’s 17 and 52:
A big difference for just a shift of 1.
Named Roses
Some of these rose curves have special names. I’ll share some of them.
Limaçon Trisectrix
This has a ratio of 1 / 3. We already saw this one above.
Dürer Folium
With a ratio of 1 / 2. Also seen previously.
Ratio is 2 / 1
Ratio of 3 / 1
Maurer Roses
If you thought we were almost done, wrong! There’s a whole other type of rose curve to explore – Maurer roses!
Maurer roses start with the basic rose function, but instead of just drawing the curve all the way around, it draws a series of line segments to points along the rose curve. Although it doesn’t have
to be so, this is often done with 360 segments and the angles used are specified in degrees. We construct a rose, here using a ratio of 4 / 1, and then pick a degree value to step by. In this case, I
chose 49. Then we loop t from 0 to 360 and multiply t by that degree value. So the degrees goes from 0, to 49, 98, 147, 196 and so on. We use that value in our rose (converting to radians of course)
and use that at the next point. Here’s what it looks like in action for the first 30 iterations:
To put it a different way, in a normal rose curve, we are incrementing in very tiny increments, so we get a very smooth curve. Here, we are incrementing in gigantic jumps, so we get what looks like it’s going to be a chaotic mess. But, if we let it finish its full path through to 360 iterations, we get…
Aha! Not a chaotic mess after all! In fact, quite nice. Actually, above I’ve drawn the regular rose on top of the Maurer rose. Here is the Maurer all by itself:
I think the two combined look really nice.
So how do we do this?
Well, again, we start out with the basic rose function. But in this case, we’ll just stick to a single integer value. So just n rather than n and d. But we also want to specify how many degrees to
jump on each iteration. To avoid confusion with the earlier d parameter, I’ll call this deg. So the signature is:
function maurer(x, y, a, n, deg)
Again, we want to loop from 0 to 360 for our initial t value. And then we want to get that value that is t multiplied by deg. This is the degree value shown in the animation above. We’ll call it k
but at this point we’re done with degrees, so we’ll convert it to radians by multiplying by PI and dividing by 180
function maurer(x, y, a, n, deg) {
for (t = 0; t < 360; t++) {
k = t * deg * PI / 180
r = a * cos(n * k)
lineTo(x + cos(k) * r, y + sin(k) * r)
We’ll then just execute the rose algorithm, but using k instead of t.
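Ported to Python in the same point-list style as before (again leaving out the actual drawing):

```python
import math

def maurer_points(x, y, a, n, deg):
    # 360 large jumps around the rose r = a*cos(n*k), where k = t*deg degrees
    # converted to radians.
    pts = []
    for t in range(360):
        k = t * deg * math.pi / 180
        r = a * math.cos(n * k)
        pts.append((x + math.cos(k) * r, y + math.sin(k) * r))
    return pts

pts = maurer_points(0, 0, 100, 5, 37)
assert len(pts) == 360
assert pts[0] == (100.0, 0.0)   # t = 0: angle 0, radius a
```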
Now we can set something up like the following.
width = 800
height = 800
canvas(800, 800)
maurer(width / 2, height / 2, width * 0.45, 5, 37)
// drawing the regular rose is optional
rose(width / 2, height / 2, width * 0.45, 5, 1)
And get this:
Play around with different values for n and deg. You’ll find that n works the same way it did for regular roses. But minor variations in deg can create radically different images. For example here is
n = 7 and deg = 23:
But moving deg up to 24 gives you this:
Not nearly as nice.
Generally, you’ll find that even numbers for deg will have a lower chance of being interesting than odd numbers (with exceptions).
And anything that divides evenly into 360 is not going to be great. For example, here’s 4, 120:
I drew the full rose too, but the Maurer is just the triangle on the right hand side. Increase that to 121 though, and you get this beauty:
Also, lower prime numbers usually always work pretty well. I’ve noticed that the lower values of n let you get away with higher prime numbers for deg. But it’s something I haven’t tested very
thoroughly. Something to play around with.
One more thing you might want to try is fractional Maurer roses. You don’t even have to alter the code at this point. You can just enter the fraction. Because we are always looping from 0 to 360, we
don’t need to adjust for a different number of loops. Here’s one to start with. Make sure you put both fraction values into the rose function separately, if you are using that.
maurer(0, 0, width * 0.45, 5.0 / 4.0, 229)
rose(0, 0, width * 0.45, 5, 4)
See what you can find among all the possible variations.
| {"url":"https://www.bit-101.com/2017/2023/01/coding-curves-11-roses/","timestamp":"2024-11-13T17:37:09Z","content_type":"text/html","content_length":"83055","record_id":"<urn:uuid:901c94c0-fd52-48f5-b1f7-3b2c6ae7cb65>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00858.warc.gz"}
From the point A(0,3) on the circle x^2 + 4x + (y-3)^2 =0 a chord AB
From the point A(0,3) on the circle x^2 + 4x + (y-3)^2 = 0, a chord AB is drawn & extended to a point M such that AM = 2AB. The equation of the locus of M will be?
1 Answer
Nishant Vora
Last Activity: 8 Years ago
This is the equation of the given circle: x^2 + 4x + (y-3)^2 = 0, i.e. (x+2)^2 + (y-3)^2 = 4.
So, the centre is (-2, 3) and the radius is 2.
Now let M(h, k).
Since AM = 2AB, B is the midpoint of A(0, 3) and M, so B = (h/2, (k+3)/2).
B lies on the circle, so (h/2)^2 + 4(h/2) + ((k+3)/2 - 3)^2 = 0, which simplifies to h^2 + 8h + (k-3)^2 = 0.
Hence the locus of M is x^2 + 8x + (y-3)^2 = 0.
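As a sanity check, the resulting locus x^2 + 8x + (y-3)^2 = 0 (the circle with centre (-4, 3) and radius 4) can be verified numerically: for any M on it, the midpoint B of A(0, 3) and M must lie back on the original circle.

```python
import math

A = (0.0, 3.0)
for i in range(12):
    t = 2 * math.pi * i / 12
    # M sampled on the claimed locus: centre (-4, 3), radius 4.
    M = (-4 + 4 * math.cos(t), 3 + 4 * math.sin(t))
    # B is the midpoint of A and M (since AM = 2AB).
    B = ((A[0] + M[0]) / 2, (A[1] + M[1]) / 2)
    # B must satisfy the original circle x^2 + 4x + (y - 3)^2 = 0.
    assert abs(B[0] ** 2 + 4 * B[0] + (B[1] - 3) ** 2) < 1e-9
```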
| {"url":"https://www.askiitians.com/forums/Analytical-Geometry/from-the-point-a-0-3-on-the-circle-x-2-4x-y_154603.htm","timestamp":"2024-11-13T05:13:46Z","content_type":"text/html","content_length":"184463","record_id":"<urn:uuid:79ff5e58-091e-445a-8df4-6debbb53bdd9>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00181.warc.gz"}
Casadi Solver
class pybamm.CasadiSolver(mode='safe', rtol=1e-06, atol=1e-06, root_method='casadi', root_tol=1e-06, max_step_decrease_count=5, dt_max=None, extrap_tol=None, extra_options_setup=None,
extra_options_call=None, return_solution_if_failed_early=False, perturb_algebraic_initial_conditions=None, integrators_maxcount=100)[source]#
Solve a discretised model, using CasADi.
☆ mode (str) –
How to solve the model (default is “safe”):
■ ”fast”: perform direct integration, without accounting for events. Recommended when simulating a drive cycle or other simulation where no events should be triggered.
■ ”fast with events”: perform direct integration of the whole timespan, then go back and check where events were crossed. Experimental only.
■ ”safe”: perform step-and-check integration in global steps of size dt_max, checking whether events have been triggered. Recommended for simulations of a full charge or discharge.
■ ”safe without grid”: perform step-and-check integration step-by-step. Takes more steps than “safe” mode, but doesn’t require creating the grid each time, so may be faster.
Experimental only.
☆ rtol (float, optional) – The relative tolerance for the solver (default is 1e-6).
☆ atol (float, optional) – The absolute tolerance for the solver (default is 1e-6).
☆ root_method (str or pybamm algebraic solver class, optional) – The method to use to find initial conditions (for DAE solvers). If a solver class, must be an algebraic solver class. If
“casadi”, the solver uses casadi’s Newton rootfinding algorithm to find initial conditions. Otherwise, the solver uses ‘scipy.optimize.root’ with method specified by ‘root_method’ (e.g.
“lm”, “hybr”, …)
☆ root_tol (float, optional) – The tolerance for root-finding. Default is 1e-6.
☆ max_step_decrease_count (float, optional) – The maximum number of times step size can be decreased before an error is raised. Default is 5.
☆ dt_max (float, optional) – The maximum global step size (in seconds) used in “safe” mode. If None the default value is 600 seconds.
☆ extrap_tol (float, optional) – The tolerance to assert whether extrapolation occurs or not. Default is 0.
☆ extra_options_setup (dict, optional) –
Any options to pass to the CasADi integrator when creating the integrator. Please consult CasADi documentation for details. Some useful options:
○ ”max_num_steps”: Maximum number of integrator steps
○ ”print_stats”: Print out statistics after integration
☆ extra_options_call (dict, optional) –
Any options to pass to the CasADi integrator when calling the integrator. Please consult CasADi documentation for details.
☆ return_solution_if_failed_early (bool, optional) – Whether to return a Solution object if the solver fails to reach the end of the simulation, but managed to take some successful steps.
Default is False.
☆ perturb_algebraic_initial_conditions (bool, optional) – Whether to perturb algebraic initial conditions to avoid a singularity. This can sometimes slow down the solver, but is kept True
as default for “safe” mode as it seems to be more robust (False by default for other modes).
☆ integrators_maxcount (int, optional) – The maximum number of integrators that the solver will retain before ejecting past integrators using an LRU methodology. A value of 0 or None leaves
the number of integrators unbound. Default is 100.
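The LRU ejection described for integrators_maxcount can be sketched with a plain OrderedDict — this illustrates the policy only and is not PyBaMM's actual implementation:

```python
from collections import OrderedDict

class IntegratorCache:
    """Keep at most maxcount integrators, ejecting the least recently used.

    A value of 0 or None leaves the cache unbounded, mirroring the
    behaviour described for integrators_maxcount above.
    """
    def __init__(self, maxcount=100):
        self.maxcount = maxcount
        self._cache = OrderedDict()

    def get(self, key, build):
        if key in self._cache:
            self._cache.move_to_end(key)     # mark as most recently used
            return self._cache[key]
        value = build()                      # e.g. create a casadi integrator
        self._cache[key] = value
        if self.maxcount and len(self._cache) > self.maxcount:
            self._cache.popitem(last=False)  # eject least recently used
        return value

cache = IntegratorCache(maxcount=2)
cache.get("a", lambda: "int_a")
cache.get("b", lambda: "int_b")
cache.get("a", lambda: "int_a")      # touch "a", so "b" becomes the LRU entry
cache.get("c", lambda: "int_c")      # exceeds maxcount, evicting "b"
```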
Extends: pybamm.solvers.base_solver.BaseSolver
create_integrator(model, inputs, t_eval=None, use_event_switch=False)[source]#
Method to create a casadi integrator object. If t_eval is provided, the integrator uses t_eval to make the grid. Otherwise, the integrator has grid [0,1].
I have a doubt regarding translating LTL to an omega automaton, specifically [a SU b] to an F-automaton.
Since we don't have a direct formula for [a SU b] to an F-automaton, I took the negation of [a SU b] to get a G-automaton, and from there I reduced it to an F-automaton. But something went wrong in between and I got stuck. Could you please help me solve this?
The expected answer is: [a U b] = Aexist ({q}, ¬q, ¬q ^ a ^ ¬b ^ ¬q0 | (q | b) ^ q0 , Fq)
To translate [a SU b] to an equivalent F-automaton, you may note that [a SU b] holds if we have a&!b for some finite time until b holds at some time. This leads to the automaton shown on page 69 of VRS-07-TemporalLogic, which is an F-automaton. It is not deterministic, since some transitions are missing, but if these are added (leading to an additional sink state), you get your automaton.
Note, however, that this automaton is only equivalent to [a SU b] at the initial point of time and also until the first point of time where b holds, but not afterwards. If you want an automaton that is always equivalent to [a SU b], then you need a GF-automaton.
analysis of variance with a within-groups Mathematics Assignment Help
In an analysis of variance with a within-groups variance estimate of 8.5 and a between-groups variance estimate of 5.3, the F ratio is:
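Recall that the F ratio is the between-groups variance estimate divided by the within-groups estimate, so for these numbers:

```python
# F ratio in ANOVA: between-groups variance estimate / within-groups estimate
between, within = 5.3, 8.5
f_ratio = between / within
print(round(f_ratio, 2))  # 0.62
```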
Analysis of variance Mathematics Assignment Help
In an analysis of variance, if the null hypothesis is true, then:
Have 3 algebra questions i need answers to Mathematics Assignment Help
#1: The function rule C=10n + 26 relates the number of people n who attend a small concert to the cost in dollars C of the concert make a table of the input/output pairs to show the cost if 27,39,43
people attend
#2: explain how to write a function rule from the table below. Then write a function rule
x: 2 4 6
y: 1 0 -1
#3: A prototype rocket burns 159 gallons of liquid hydrogen fuel each second during a regular launch. Make a table that shows how many gallons of fuel are burned after 2,3,5 and 10 s
Statistics Sampling Question – Please Advise How To Solve This Mathematics Assignment Help
Shopping on Black Friday
In an article, authors survey Black Friday shoppers. One question was “How many hours do you usually spend shopping on Black Friday?”
1. How many Black Friday shoppers should be included in a sample designed to estimate the average number of hours spent shopping on Black Friday if you want the estimate to deviate no more than 0.5
hour from the true mean?
2. Devise a sampling plan for collecting the data that will likely result in a representative sample.
variance of a distribution of Z-scores Mathematics Assignment Help
The variance of a distribution of Z-scores is always:
5 + 2 • [(2 + 2) • 5 + 5] Mathematics Assignment Help
5 + 2 • [(2 + 2) • 5 + 5]
The waiter places a bowl of soup in front of Igor Mathematics Assignment Help
The waiter places a bowl of soup in front of Igor. In a counterclockwise direction, he passes the soup to Elan who then passes it to Abbey. Which two rotations about the center of the table describe passing the soup?
A.first 144° and then 144°
B.first 120° and then 120°
C.first 80° and then 80°
D.first 160° and then 160°
mean value when converted to a Z-distribution Mathematics Assignment Help
If the mean score on a stress scale is 5, the standard deviation is 2, and the distribution is normal, what would be the mean value when converted to a Z-distribution?
variance with a between-groups Mathematics Assignment Help
n an analysis of variance with a between-groups population variance estimate of 30 and a within-groups estimate of 25, the F ratio is:
Zero-curvature solutions of the one-dimensional Schrödinger equation
We discuss special k = √(2m(E − V(x))/ℏ²) = 0 (i.e. zero-curvature) solutions of the one-dimensional Schrödinger equation in several model systems which have been used as idealized versions of various quantum well structures. We consider infinite well plus Dirac delta function cases (where E = V(x) = 0) and piecewise-constant potentials, such as asymmetric infinite wells (where E = V(x) = V₀ > 0). We also construct supersymmetric partner potentials for several of the zero-energy solutions in these cases. One application of zero-curvature solutions in the infinite well plus δ-function case is the construction of 'designer' wavefunctions, namely zero-energy wavefunctions of essentially arbitrary shape, obtained through the proper placement and choice of strength of the δ-functions.
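Since k = 0 forces ψ''(x) = 0, zero-curvature solutions are straight lines ψ(x) = a + bx between the potential features; a quick finite-difference sanity check (illustrative only, not from the paper):

```python
# At E = V(x) = 0 the time-independent Schrodinger equation reduces to
# psi''(x) = 0, so any linear psi(x) = a + b*x is a zero-curvature solution.
a, b, h = 1.0, 2.0, 1e-3
psi = lambda x: a + b * x
for x in (-1.0, 0.0, 2.5):
    # central finite-difference approximation to the second derivative
    second = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
    assert abs(second) < 1e-6
```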
All Science Journal Classification (ASJC) codes
• Atomic and Molecular Physics, and Optics
• Mathematical Physics
• Condensed Matter Physics
A spring with a constant of 4 (kg)/s^2 is lying on the ground with one end attached to a wall. An object with a mass of 6 kg and speed of 5 m/s collides with and compresses the spring until it stops moving. How much will the spring compress? | Socratic
1 Answer
The spring will be compressed by
$6.12 m$, rounded to second decimal place.
${E}_{e l} \implies$Elastic Energy of compressed or elongated spring.
${E}_{k} \implies$ Kinetic Energy of object
$m \implies$Mass of object
$v \implies$Velocity of object
$k \implies$Constant of the spring
$x \implies$Deformation of the spring
The energy from the object that collides with the spring is kinetic energy. In the process, the kinetic energy will converted to elastic energy of the spring.
In the given problem the spring is being compressed by the object.
After the object has stopped and the spring has been compressed to the maximum, all the object's kinetic energy has got converted into elastic potential energy of the spring. Equating both energies
we obtain
${E}_{k} = {E}_{e l}$ ......(1)
We know that kinetic energy of the object is ${E}_{k} = \frac{1}{2} m {v}^{2}$ and elastic potential energy of the compressed spring is ${E}_{e l} = \frac{1}{2} k {x}^{2}$. Inserting in (1)
$\frac{1}{2} m \cdot {v}^{2} = \frac{1}{2} k \cdot {x}^{2}$
Multiplying both sides with 2, we get
$m \cdot {v}^{2} = k \cdot {x}^{2}$
Put our known values in the above equation and solving for $x$
$6 \cdot {5}^{2} = 4 \cdot {x}^{2}$
${x}^{2} = \frac{150}{4}$
$x = \sqrt{\frac{75}{2}} = 6.12 m$, rounded to second decimal place.
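The same computation as a short script, using the values from the problem:

```python
import math

# Equate kinetic and elastic energy: (1/2) m v^2 = (1/2) k x^2  =>  x = v * sqrt(m / k)
m, v, k = 6.0, 5.0, 4.0
x = v * math.sqrt(m / k)
print(round(x, 2))  # 6.12
```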
[Solved] Presented below is the partial bond discount amortization schedule | SolutionInn
Presented below is the partial bond discount amortization schedule for Morales Corp. Morales uses the effective-interest method of amortization.
Instructions(a) Prepare the journal entry to record the payment of interest and the discount amortization at the end of period 1.(b) Explain why interest expense is greater than interest paid.(c)
Explain why interest expense will increase each period.
Transcribed Image Text:

Semiannual Interest Periods | Interest to Be Paid | Interest Expense to Be Recorded | Discount Amortization | Unamortized Discount | Bond Carrying Value
Issue date | | | | $62,311 | $937,689
1 | $45,000 | $46,884 | $1,884 | 60,427 | 939,573
2 | 45,000 | 46,979 | 1,979 | 58,448 | 941,552
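The schedule can be reproduced programmatically. The rates below are inferred from the numbers (face value $1,000,000, $45,000 cash interest per semiannual period, 5% effective semiannual rate) rather than stated in the problem:

```python
# Effective-interest amortization: each period's interest expense is the
# effective rate times the carrying value; the excess over cash interest
# paid is the discount amortization, which accretes the carrying value.
carrying = 937_689            # issue-date carrying value from the schedule
cash = 45_000                 # interest actually paid each period
rows = []
for period in (1, 2):
    expense = round(carrying * 0.05)   # effective-interest expense
    amortization = expense - cash      # portion of the discount amortized
    carrying += amortization           # carrying value grows toward face
    rows.append((period, expense, amortization, carrying))

print(rows)  # [(1, 46884, 1884, 939573), (2, 46979, 1979, 941552)]
```

This also answers part (b) numerically: expense exceeds cash paid by the amortization, and (c): as the carrying value grows, so does expense.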
Step by Step Answer:
a Interest Expense 46884 Discount on Bonds Payable 1...
Distributed Fusion Filtering For Sensor Networked Systems With Network Constraints And Packet Dropouts
Posted on: 2016-12-03 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: J Ding | Full Text: PDF
GTID: 1108330461490628 | Subject: Microelectronics and Solid State Electronics
Sensor networks have many advantages such as low power, low cost, distributed and self-organizing, easy resource sharing, and so on. They are widely used in target tracking, environmental monitoring,
traffic control, health care, and other areas. When sensor data are exchanged over the network, network congestion, packet loss, and other problems often arise due to limited communication bandwidths, energy-efficiency constraints, and so on. How to allocate resources and compensate for packet losses efficiently when network resources are limited has recently become a key issue. For the network-constraint and packet-dropout problems in sensor networks, this paper studies information fusion estimation algorithms for sensor networked systems with network constraints and packet dropouts, using optimization theory and information fusion estimation theory. The main contents are as follows:

The distributed fusion filtering problem is studied for a dynamic
stochastic variable with bandwidth or energy-efficient constraints. In sensor networks, each sensor gives its local filter based on its own measurement data. Due to the limited bandwidth, the
quantized local filter is transmitted to the fusion center. In the fusion center, the distributed fusion optimal linear unbiased filter is designed based on the quantized local filter. The bandwidth
scheduling is given to minimize the total transport energy consumption under the limited bandwidth constraint. Further, the approximate solution for the optimization problem is given under a limited
bandwidth constraint.

The weighted measurement fusion and distributed fusion estimation problems are studied for complex sensor networked systems with missing measurements, state and measurement
multiplicative noises and transmission noises. A group of Bernoulli distributed random variables are used to describe missing measurements. Different sensors have different missing measurement rates.
Based on full-rank decomposition of a matrix and weighted least-squares theory, the weighted measurement fusion estimators are developed by transferring multiplicative noises to additive noises. The
weighted measurement fusion estimators have the same accuracy as the centralized fusion estimators, i.e., they have the global optimality. Also, for each sensor subsystem, the local estimators and
the estimation error cross-covariance matrices between any two sensor subsystems are derived. Then, the distributed fusion estimators weighted by matrices in the linear minimum variance sense are
given.

The optimal fusion estimation problem is studied for sensor networked systems with random packet dropout compensations. A group of Bernoulli distributed random variables is employed to describe the phenomenon of random packet loss in data transmission from sensors to estimators, and the one-step prediction of the state is used as the packet loss compensation value. By applying completing
method, the local optimal linear estimators including filter, predictor and smoother are given in the linear unbiased minimum variance sense. Further, the distributed optimal fusion estimators are
given by applying the fusion algorithm weighted by matrices in the linear minimum variance sense. The cross-covariance matrices are derived between any local estimation errors. At last, the
centralized fusion estimators are given. The accuracy comparison among them is simulated.

The weighted measurement fusion quantized estimation problem is studied for sensor networked systems with
limited bandwidth constraints and packet dropouts. There exist the phenomena of packet losses due to bandwidth constraints during the transmission. A Bernoulli distributed random variable is
introduced to describe the phenomenon of random packet loss. Based on the quantized measurement data received by the fusion center, two weighted measurement fusion quantized filters are presented.
One is dependent on the value of the Bernoulli random variables. The other is dependent on the probability of Bernoulli random variables. They have the reduced computational cost and same accuracy as
the corresponding centralized fusion filter. Also, the approximate optimal solution for the optimal bandwidth-scheduling problem is given under a limited bandwidth constraint.
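The packet-dropout compensation idea (use the one-step prediction when the Bernoulli arrival variable is 0) can be sketched as follows; the scalar model and fixed gain are illustrative assumptions, not the thesis's actual estimator design:

```python
import random

# When a packet is dropped (gamma = 0), the estimator falls back to the
# one-step prediction instead of the lost measurement.
a, c = 0.9, 1.0          # state transition and measurement coefficients
K = 0.5                  # fixed filter gain (a real design would compute it)

def fuse_step(x_hat, z, received):
    x_pred = a * x_hat               # one-step prediction
    if received:                     # gamma = 1: packet arrived
        return x_pred + K * (z - c * x_pred)
    return x_pred                    # gamma = 0: compensate with prediction

random.seed(0)
x_hat = 0.0
for _ in range(50):
    z = random.gauss(0.0, 1.0)
    received = random.random() < 0.8   # 80% packet arrival rate
    x_hat = fuse_step(x_hat, z, received)
```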
Keywords/Search Tags: sensor network, quantification, network constraints, packet dropout compensation, weighted measurement fusion, distributed fusion estimation
Papers with Code - Valeria de Paiva
no code implementations • 29 Aug 2023 • Valeria de Paiva, Qiyue Gao, Pavel Kovalev, Lawrence S. Moss
Where our study diverges from previous work is in (1) providing a more thorough analysis of what makes mathematical term extraction a difficult problem to begin with; (2) paying close attention to
inter-annotator disagreements; (3) providing a set of guidelines which both human and machine annotators could use to standardize the extraction process; (4) introducing a new annotation tool to help
humans with ATE, applicable to any mathematical field and even beyond mathematics; (5) using prompts to ChatGPT as part of the extraction process, and proposing best practices for such prompts; and
(6) raising the question of whether ChatGPT could be used as an annotator on the same level as human experts.
Term Extraction
How To Size An Inverter That Can Run Your Air Conditioner? | RenewableWise
The main rating of an inverter is its Continuous Power (in Watts), but with appliances such as air conditioners, refrigerators, pumps, or any device with a motor, the continuous power of an inverter
is not the only rating that matters.
In this article, I’ll explain in detail the main specifications to look at when shopping for an inverter that can run your air conditioner.
I get commissions for purchases made through links in this post.
What specifications to look for in the inverter?
There are 5 specifications to look for in an inverter that can run your air conditioner:
• Continuous Power rating
• Surge Power rating
• Waveform
• Input voltage
• Output voltage
Let’s see what each of these specifications represents and how they should be matched to the specs of your air conditioner.
Continuous Power rating
This is the main rating of the inverter and is usually provided in Watts or kiloWatts.
The Continuous Power rating of an inverter represents the maximum amount of power that the inverter is capable of supplying (Outputting). For example, a 3000 Watt inverter will not be able to run a
4000 Watt load.
Sometimes, the Continuous Power rating of an inverter is provided in VA (Volt-Amperes) instead of Watts, and these 2 ratings are not the same. While Volt-Amperes represent Apparent Power, Watts represent Real (True) Power, which is the actual amount of power that a device uses.
If you’re looking at an inverter that has a VA rating on it, make sure to look at its specification sheet to find the Continuous Power rating in Watts. If the latter is nowhere to be found, you can
use this simplified formula to calculate it:
Continuous Power (W) = Continuous Power (VA) x 0.8
For example, the equivalent of 3000VA is 2400 Watts.
In any case, the Continuous Power rating of the inverter you choose should be higher than the power usage of your air conditioner. Later in this article, I’ll show you how to determine the power
usage of your AC unit.
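That rule of thumb is easy to script — the 0.8 factor is the simplified power factor from the formula above, not an exact value for every inverter:

```python
# Simplified conversion from above: Continuous Power (W) ~= Continuous Power (VA) * 0.8
def va_to_watts(va, power_factor=0.8):
    return va * power_factor

print(va_to_watts(3000))  # ~2400 W, matching the 3000 VA example
```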
Surge (peak) power rating
The power rating of an appliance indicates the amount of power (in watts) that the device requires to run. However, some appliances (such as pumps, refrigerators, and air conditioners) require 3 to 7
times their running power when they’re first turned on, this amount of power is referred to as Surge Power.
For example, consider a 5000 BTU air conditioner that has a power rating of 400 Watts. Although this air conditioner will only draw 400 Watts when it’s running, it might draw up to 3000 Watts for a
brief moment when you first turn it on.
This is why inverters have a Surge Power rating which indicates how much power they should be able to supply briefly. The Surge Power rating of an inverter is 2 or 3 times its continuous power
While high-frequency inverters can supply 200% of their Cont. power for a couple of seconds, low-frequency inverters can supply 300% of their Cont. power for up to 20 seconds.
For example, this high-frequency 3000W inverter from Renogy has a surge power rating of 6000 watts. On the other hand, this low-frequency 3000W inverter from AIMS can supply 9000 Watts of power for
up to 20 seconds.
In the second section of this article, I’ll show you how to estimate the surge power of your air conditioner.
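Putting the two ratings together, a simple sizing check might look like this; the surge multiples and ratings in the example calls are illustrative, not measured values:

```python
# An inverter must cover both the running watts (continuous rating) and
# the startup surge watts (surge rating) of the air conditioner.
def inverter_can_run(ac_running_w, ac_surge_w, inv_cont_w, inv_surge_w):
    return inv_cont_w >= ac_running_w and inv_surge_w >= ac_surge_w

# Hypothetical 8000 BTU window unit: ~700 W running, assumed ~5x on startup
print(inverter_can_run(700, 3500, 3000, 6000))   # True
print(inverter_can_run(700, 7000, 3000, 6000))   # False: surge demand too high
```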
As you may already know, an inverter’s job is to turn DC (direct current) power into the AC (alternating current) power that your air conditioner requires. However, the waveform of the alternating current that the inverter outputs will depend on the type of inverter itself.
In terms of the waveform, there are 2 types of inverters on the market:
• Pure Sine Wave inverters (PSW)
• Modified Sine Wave inverters (MSW)
Now, you might be tempted to buy a modified sine inverter as they are the cheaper option, however, any appliance that has a motor in it will require a pure sine wave inverter.
To function properly, appliances that contain a motor (such as air conditioners) need the smoothest alternating voltage that they can get. Otherwise, their life expectancy will decrease and there
will be additional power losses.
Contrary to MSW inverters, pure sine wave inverters guarantee the smooth output signal that your air conditioner requires.
Before even considering a particular inverter to run your air conditioner, make sure that it is a Pure Sine Wave inverter.
Input voltage
As mentioned above, an inverter converts the power out of a DC source (which will have a relatively low voltage and a high current) into AC power (which will have a relatively high voltage and low
current). In this case, the DC source is the battery bank, which is usually rated at a nominal voltage of 12, 24, or 48 Volts.
Manufacturers specify the value of the voltage that the inverter is designed for, and this value is referred to as “VDC”, “DC Input Voltage”, “Nominal Input Voltage”, or simply “Input Voltage”.
If you have 2 – 12 Volt batteries wired in series, your battery bank is rated at 24 Volts nominal and you’ll need an inverter with an Input Voltage of 24 Volts. If the voltages are mismatched, the
inverter will not work.
For example, let’s say I have a battery bank with a nominal voltage of 24 Volts, the actual voltage of this battery bank will depend on its state of charge and can be anywhere from 20 to 28.8 Volts.
If I attempt to run this 3000W Renogy Inverter – that has a specified DC input voltage of 12 Volts – on my 24V battery bank, it just won’t work. This is because this particular inverter is designed
for voltages between 9.5 and 17 Volts, anything more than 17 Volts, and the inverter won’t turn on.
In any case, make sure that the inverter you choose has an Input Voltage that matches the voltage of your battery bank.
Output voltage
Most appliances in the U.S. run on 120 Volts, which is why the electrical outlets in homes supply 120 Volts. It is for the same reason that most inverters that are available in the U.S. will have an
Output voltage rating of 120V (120VAC).
To recap., most inverters take the voltage out of the DC source (12, 24, or 48 VDC) and turn it into 120 VAC (also referred to as 110 VAC).
However, some appliances, such as dryers and central air conditioners, require a 240V supply. If this is the case for you, you’ll either need a single-phase 240V inverter or a 120/240V split-phase
If you have a small air conditioner, chances are it runs on 120V, but just in case, you’ll still need to check to voltage requirements of your AC unit.
Choose the inverter that can run your air conditioner
Now that we know the main specifications that should be considered, we need to look at the specifications of the air conditioner and find a matching inverter.
In the section above, we’ve already established that you’ll need a Pure Sine Wave inverter, but to find the right PSW inverter, you’ll need to determine these specifications:
• The voltage of the air conditioner
• Running Power of the air conditioner
• Surge Power of the air conditioner
• The voltage of the battery bank
What is the Voltage of your air conditioner?
As mentioned above, most small air conditioners (less than 18000 BTUs) run on 120V. Central air conditioners on the other hand, usually require a dedicated 220V circuit. To be sure, you can check the
specification sticker on your unit.
For example, the following image is of a specification label from a 36000 BTU central air conditioner:
You can see that under the Power Supply, Compressor, and Fan Motor sections the manufacturer specifies that this AC unit uses 208 or 230 Volts. This means that the unit runs on a nominal voltage of
If this AC unit ran on 120V, it could run on a single-phase inverter with an output voltage of 120V. If your air conditioner runs on 120V, feel free to move on to the next step.
If your air conditioner runs on 240V like the one from the image above, you’ll essentially have 2 options:
• A 120/240V split phase inverter that has 2 hot wires, which could supply both 120 and 240V. An example of this would be the 4000W Inverter/Charger from SunGoldPower.
• Or a single-phase 240V inverter that has a single hot wire. A good example of this is the Growatt inverter.
Please note that if you go with the 2nd option, and want to be able to power your other 120V appliances on the same inverter, you’ll need a split-phase transformer such as this transformer from
In any case, make sure to keep reading to learn more about the other specifications of your air conditioner.
More often than not, manufacturers specify the power rating (watts) of the air conditioner on its technical specification label. For example, the following image is of a specification sticker from a
5100 BTU AC unit:
You can see that the manufacturer specifies 455 Watts as the power usage for this air conditioner. However, the power usage is not always directly specified. In case it isn’t, there are still a
couple of easy ways to estimate that power usage:
• Use the BTU rating and the Energy Efficiency Ratio (EER) of the unit
• Use the Voltage and Amperage of the unit
Related: How many watts does an air conditioner use?
The BTU rating and EER of an air conditioner are usually provided in the EnergyGuide (yellow) label that came with the unit. The voltage and amperage of the unit are provided in the technical
specification label.
1- Using the BTU rating and the EER:
The relationship between the capacity (in BTUs) and the efficiency (EER rating) of an air conditioner is represented by the following equation:
Power Rating (Watts) = BTU rating (BTUs) ÷ EER
Following our example, this particular unit is rated at 5100 BTUs and has an EER of 11.2. Using these specs, we can estimate its power rating as such:
Power Rating (Watts) = BTU rating (BTUs) ÷ EER
Power Rating (Watts) = 5100 BTUs ÷ 11.2
Power Rating (Watts) ≈ 455.36 Watts
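The same estimate as a one-liner (numbers from the 5100 BTU example):

```python
# Estimate running watts from capacity and efficiency: W = BTU / EER
def running_watts(btu, eer):
    return btu / eer

print(round(running_watts(5100, 11.2), 2))  # 455.36
```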
2- Using the Voltage and Amperage:
In general, the electrical power usage of an appliance can be calculated by multiplying its voltage and amperage:
Power Rating (Watts) = Voltage (Volts) x Amperage (Amps)
The unit from this example uses 115 Volts and 4 Amps, so its power usage can be estimated as such:
Power Rating (Watts) = Voltage (Volts) x Amperage (Amps)
Power Rating (Watts) = 115 Volts x 4 Amps
Power Rating (Watts) = 460 Watts
It is important to note that multiplying the voltage and amperage of an air conditioner will result in the apparent power (VA) of the air conditioner instead of its real power. This means that the
real power rating of the AC unit is lower than this product.
This is due to something called the “Power Factor“, which you can read up more on here. The actual power rating of an air conditioner is represented as such:
Power Rating (Watts) = Voltage (Volts) x Amperage (Amps) x Power Factor
The power factor will usually be between 0.8 and 0.99 and will depend on the AC unit itself. For simplicity, we’ll just use the initial equation.
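The two estimation methods above can be sketched in a few lines of Python. The figures (5100 BTU, EER 11.2, 115 V, 4 A) come from the example label; the 0.9 power factor in the last line is an assumed mid-range value, not a figure from the label:

```python
def watts_from_btu_eer(btu, eer):
    """Running wattage from cooling capacity (BTUs) and efficiency (EER)."""
    return btu / eer

def watts_from_volts_amps(volts, amps, power_factor=1.0):
    """Real power from voltage and amperage; V x A alone gives apparent power (VA)."""
    return volts * amps * power_factor

print(round(watts_from_btu_eer(5100, 11.2), 2))         # ~455.36 W
print(watts_from_volts_amps(115, 4))                    # 460 VA (apparent power)
print(watts_from_volts_amps(115, 4, power_factor=0.9))  # ~414 W, with an assumed PF of 0.9
```

The small spread between the three numbers illustrates why the V × A product should be treated as an upper bound on real power.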
According to the power usage specified by the manufacturer (and determined by our methods), this particular air conditioner uses around 455 watts of power when it is running. This means that the
inverter that could run this unit needs to have a Continuous Power rating of more than 455 watts.
So, a 500W inverter should do the trick, right?
The answer is probably not.
A 500W inverter can run this unit, but it probably won’t be able to start it. This brings us to the next item on the list: The Surge Power rating.
1500 Watt Pure Sine Wave Inverters
2000 Watt Pure Sine Wave Inverters
4000 Watt Pure Sine Wave Inverters
How much power does your air conditioner need to start?
Similar to refrigerators, the compressor of an air conditioner requires a relatively high amount of power to start. The starting wattage of an air conditioner can be as high as 6 or 7 times its
running wattage.
For example, an 8000 BTU window AC unit might require 700 watts to run, but in some instances, it could draw up to 5000 watts (for an instant) when turned on. So how do you determine the potential
starting wattage of your air conditioner?
The most precise way is to look for the Locked Rotor Amperage (LRA) of its compressor. This specification is either included on the specification label on the unit or specified on the compressor itself.
Once you find the LRA of the compressor, simply multiply its value by the voltage of the AC unit to determine the surge power:
Potential Starting Wattage (Watts) = Voltage (Volts) x LRA (Amps)
For example, here’s a specification label for a 36000 BTU AC unit:
The potential starting wattage of this air conditioner can be calculated as such:
Potential Starting Wattage (Watts) = Voltage (Volts) x LRA (Amps)
Potential Starting Wattage (Watts) = 240 Volts x 77 Amps
Potential Starting Wattage (Watts) = 18480 Watts
According to these calculations, the inverter(s) that can run this air conditioner should be able to handle a surge wattage of 18480 Watts (18.48 kW). However, please note that this is a maximum
value; the actual surge wattage of the AC unit will likely be closer to 7-10 kW.
If the LRA is nowhere to be found, a good rule of thumb is to multiply the running wattage of your air conditioner by 6:
Potential Starting Wattage (Watts) = Running Power (Watts) x 6
For example, an air conditioner that uses 455 watts when running, might require up to 2730 watts to start.
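Both surge estimates above can be sketched as follows (the 240 V / 77 A label values and the 455 W running figure come from the article's examples):

```python
def surge_from_lra(volts, lra_amps):
    """Worst-case starting wattage from voltage and locked-rotor amperage (LRA)."""
    return volts * lra_amps

def surge_rule_of_thumb(running_watts, factor=6):
    """Fallback estimate when the LRA is not listed: running watts x 6."""
    return running_watts * factor

print(surge_from_lra(240, 77))   # 18480 W for the 36000 BTU example
print(surge_rule_of_thumb(455))  # 2730 W for the 5100 BTU example
```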
What is the voltage of your battery bank?
The last part of this process is to make sure that the inverter you choose is compatible with your battery setup.
For example, if your battery bank consists of four 12V batteries in a 2S2P configuration, the inverter must have an input voltage rating of 24 Volts. If all of these batteries are in series, the inverter
should have an input voltage rating of 48V.
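As a quick sketch, the bank voltage follows from the series count alone; parallel strings add capacity, not voltage ("2S2P" means 2 batteries in series, 2 such strings in parallel):

```python
def bank_voltage(battery_volts, in_series):
    """Bank voltage is set by the number of batteries in series."""
    return battery_volts * in_series

print(bank_voltage(12, 2))  # 24 V for a 2S2P bank of 12V batteries
print(bank_voltage(12, 4))  # 48 V when all four are in series
```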
14 Comments
1. Thanks a lot, your explanations are clear, easy to understand, and to the point. I have learned a lot from you.
2. You gave a lot of information that I haven’t seen or read anywhere. I’d like to come back to this page and learn more. I wrote things down; I don’t want to miss anything.
3. Thanks for the info I learned a lot from the inverter and the surge power plus calculations of the AC unit. Thank you Sir.
4. Thank you for a great article. Your explanation made it easy to understand, especially the increase in wattage at startup. I’m thinking of going off the grid because here in South Africa our government
is more concerned with looting the country than fixing it. We have something called “loadshedding”, which is actually blackouts that can last for up to 10 hours a day.
5. If the AC unit in your example has an FLA of 77, why does it not trip the 30 amp breaker it is connected to? Also, it says that the maximum circuit voltage is 19.2 amps, so how does the unit get
77 amps FLA?
□ Hey James,
I believe the inrush current in these air conditioners is instantaneous and does not last long enough to trip the overcurrent protection device.
I hope this answers your question.
6. This is very educative, thank you.
I would love to have your direct contact for mentorship.
7. Thank you for this lesson in power. I have an older Motorhome, (2008). I want to do a 12v lifepo4 battery bank.
I currently have a 2000W inverter made by Xantrex. I am installing a residential fridge and I need to power a washer, dryer, roof A/C and a basement A/C. Should I switch to a newer, higher-wattage inverter?
Also, what size battery would you recommend? My thoughts are (4) 220 AH batteries.
I am considering OGRPHY 12V 200Ah Lithium Battery, Grade A Cells LiFePO4 Battery
I’m not sure of the voltage of the basement a/c or the dryer. I will get that info.
Thank you
□ Hey there Dennis,
I think you will need a larger inverter, because washers, dryers, and air conditioners use a relatively high amount of power.
Also, the battery bank upgrade that you’re considering sounds great, but based on what you’ve described, you might need a larger battery bank.
I recommend you check out my Energy Consumption Calculator (https://www.renewablewise.com/appliance-energy-consumption-calculator/), and read up more on battery sizing here: https://
Hope this helps.
8. I’m trying to go solar on my own slowly.
9. Hello Sir.
Please, how do I design this solar system? Two deep freezers of 130W each, used 4 hours a day, and two ACs of 1010W each, used 10 hours a day. My question is: which do I use in my design, the AC or the freezer,
or do I use both?
Thank you sir
□ Hello Augustine,
The main concern is that the inverter should, in case it is necessary, be able to supply enough power to start both the freezer and the AC.
This means that the inverter should have a surge power rating that is greater than the surge power rating of your AC + the surge power rating of the freezer.
This means that if, for example, your freezer needs 600 Watts to start, and your AC needs 3000 Watts to start, a 2000 W inverter with a 4000-watt surge capacity will do.
Hope this helps.
10. Can a 3000W inverter start up an AC on its own?
□ Well, it depends on the size of the AC (BTU rating), its type, and whether or not it has a soft starter. But for ACs under 15000 BTUs, a 3000W inverter, assuming it has a Peak Power rating of
6000W, will probably be enough.
How do you solve -4x-8>16? | HIX Tutor
How do you solve -4x - 8 > 16?
Answer 1
-4x - 8 > 16

Adding 8 to both sides: -4x > 24

Either [1] divide both sides by -4, remembering that multiplication or division by a negative number reverses the inequality: x < -6; or [2] add (4x - 24) to both sides: -24 > 4x, then divide by 4: -6 > x, which is the same solution, x < -6.
Answer 2
To solve the inequality -4x - 8 > 16, first, we isolate the variable term by adding 8 to both sides:
-4x - 8 + 8 > 16 + 8
-4x > 24
Then, we divide both sides by -4, remembering to reverse the inequality sign when dividing by a negative number:
-4x / -4 < 24 / -4
x < -6
So, the solution to the inequality is (x < -6).
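A quick numeric spot-check of the solution in Python (illustrative only):

```python
# Verify x < -6 solves -4x - 8 > 16 on sample points.
def satisfies(x):
    return -4 * x - 8 > 16

assert satisfies(-7)      # inside the solution set
assert not satisfies(-6)  # boundary excluded: the inequality is strict
assert not satisfies(0)   # outside the solution set
print("x < -6 confirmed on sample points")
```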
Structure from Motion from Two Views
Structure from motion (SfM) is the process of estimating the 3-D structure of a scene from a set of 2-D images. This example shows you how to estimate the poses of a calibrated camera from two
images, reconstruct the 3-D structure of the scene up to an unknown scale factor, and then recover the actual scale factor by detecting an object of a known size.
This example shows how to reconstruct a 3-D scene from a pair of 2-D images taken with a camera calibrated using the Camera Calibrator app. The algorithm consists of the following steps:
1. Match a sparse set of points between the two images. There are multiple ways of finding point correspondences between two images. This example detects corners in the first image using the
detectMinEigenFeatures function, and tracks them into the second image using vision.PointTracker. Alternatively you can use extractFeatures followed by matchFeatures.
2. Estimate the essential matrix using estimateEssentialMatrix.
3. Compute the motion of the camera using the estrelpose function.
4. Match a dense set of points between the two images. Re-detect points using detectMinEigenFeatures with a reduced 'MinQuality' to get more points. Then track the dense points into the second
image using vision.PointTracker.
5. Determine the 3-D locations of the matched points using triangulate.
6. Detect an object of a known size. In this scene there is a globe, whose radius is known to be 10cm. Use pcfitsphere to find the globe in the point cloud.
7. Recover the actual scale, resulting in a metric reconstruction.
Read a Pair of Images
Load a pair of images into the workspace.
imageDir = fullfile(toolboxdir("vision"),"visiondata","upToScaleReconstructionImages");
images = imageDatastore(imageDir);
I1 = readimage(images, 1);
I2 = readimage(images, 2);
imshowpair(I1, I2, 'montage');
title('Original Images');
Load Camera Parameters
This example uses the camera parameters calculated by the Camera Calibrator app. The parameters are stored in the cameraIntrinsics object, and include the camera intrinsics and lens distortion coefficients.
% Load precomputed camera intrinsics
data = load("sfmCameraIntrinsics.mat");
intrinsics = data.intrinsics;
Remove Lens Distortion
Lens distortion can affect the accuracy of the final reconstruction. You can remove the distortion from each of the images using the undistortImage function. This process straightens the lines that
are bent by the radial distortion of the lens.
I1 = undistortImage(I1, intrinsics);
I2 = undistortImage(I2, intrinsics);
imshowpair(I1, I2, "montage");
title("Undistorted Images");
Find Point Correspondences Between the Images
Detect good features to track. Increase 'MinQuality' to detect fewer points, which would be more uniformly distributed throughout the image. If the motion of the camera is not very large, then
tracking using the KLT algorithm is a good way to establish point correspondences.
% Detect feature points
imagePoints1 = detectMinEigenFeatures(im2gray(I1), MinQuality=0.1);
% Visualize detected points
imshow(I1, InitialMagnification = 50);
title("150 Strongest Corners from the First Image");
hold on
plot(selectStrongest(imagePoints1, 150));
% Create the point tracker
tracker = vision.PointTracker(MaxBidirectionalError=1, NumPyramidLevels=5);
% Initialize the point tracker
imagePoints1 = imagePoints1.Location;
initialize(tracker, imagePoints1, I1);
% Track the points
[imagePoints2, validIdx] = step(tracker, I2);
matchedPoints1 = imagePoints1(validIdx, :);
matchedPoints2 = imagePoints2(validIdx, :);
% Visualize correspondences
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
title("Tracked Features");
Estimate the Essential Matrix
Use the estimateEssentialMatrix function to compute the essential matrix and find the inlier points that meet the epipolar constraint.
% Estimate the essential matrix
[E, epipolarInliers] = estimateEssentialMatrix(...
matchedPoints1, matchedPoints2, intrinsics, Confidence = 99.99);
% Find epipolar inliers
inlierPoints1 = matchedPoints1(epipolarInliers, :);
inlierPoints2 = matchedPoints2(epipolarInliers, :);
% Display inlier matches
showMatchedFeatures(I1, I2, inlierPoints1, inlierPoints2);
title("Epipolar Inliers");
Compute the Camera Pose
Compute the location and orientation of the second camera relative to the first one. Note that loc is a translation unit vector, because translation can only be computed up to scale.
relPose = estrelpose(E, intrinsics, inlierPoints1, inlierPoints2);
Reconstruct the 3-D Locations of Matched Points
Re-detect points in the first image using lower 'MinQuality' to get more points. Track the new points into the second image. Estimate the 3-D locations corresponding to the matched points using the
triangulate function, which implements the Direct Linear Transformation (DLT) algorithm [1]. Place the origin at the optical center of the camera corresponding to the first image.
% Detect dense feature points. Use an ROI to exclude points close to the
% image edges.
border = 30;
roi = [border, border, size(I1, 2)- 2*border, size(I1, 1)- 2*border];
imagePoints1 = detectMinEigenFeatures(im2gray(I1), ROI = roi, ...
MinQuality = 0.001);
% Create the point tracker
tracker = vision.PointTracker(MaxBidirectionalError=1, NumPyramidLevels=5);
% Initialize the point tracker
imagePoints1 = imagePoints1.Location;
initialize(tracker, imagePoints1, I1);
% Track the points
[imagePoints2, validIdx] = step(tracker, I2);
matchedPoints1 = imagePoints1(validIdx, :);
matchedPoints2 = imagePoints2(validIdx, :);
% Compute the camera matrices for each position of the camera
% The first camera is at the origin looking along the Z-axis. Thus, its
% transformation is identity.
camMatrix1 = cameraProjection(intrinsics, rigidtform3d);
camMatrix2 = cameraProjection(intrinsics, pose2extr(relPose));
% Compute the 3-D points
points3D = triangulate(matchedPoints1, matchedPoints2, camMatrix1, camMatrix2);
% Get the color of each reconstructed point
numPixels = size(I1, 1) * size(I1, 2);
allColors = reshape(I1, [numPixels, 3]);
colorIdx = sub2ind([size(I1, 1), size(I1, 2)], round(matchedPoints1(:,2)), ...
round(matchedPoints1(:, 1)));
color = allColors(colorIdx, :);
% Create the point cloud
ptCloud = pointCloud(points3D, Color=color);
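The DLT algorithm behind the triangulate call above can be sketched in a few lines of NumPy (Python is used here purely for illustration; the names below are ours, not part of the toolbox):

```python
import numpy as np

def dlt_triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Synthetic check: project a known point with two toy cameras, then recover it.
K = np.diag([800.0, 800.0, 1.0])                                   # toy intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # baseline along x

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, -0.1, 4.0])
X_hat = dlt_triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true))  # True
```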
Display the 3-D Point Cloud
Use the plotCamera function to visualize the locations and orientations of the camera, and the pcshow function to visualize the point cloud.
% Visualize the camera locations and orientations
cameraSize = 0.3;
plotCamera(Size=cameraSize, Color="r", Label="1", Opacity=0);
hold on
grid on
plotCamera(AbsolutePose=relPose, Size=cameraSize, ...
Color="b", Label="2", Opacity=0);
% Visualize the point cloud
pcshow(ptCloud, VerticalAxis="y", VerticalAxisDir="down", MarkerSize=45);
% Rotate and zoom the plot
camorbit(0, -30);
% Label the axes
title("Up to Scale Reconstruction of the Scene");
Fit a Sphere to the Point Cloud to Find the Globe
Find the globe in the point cloud by fitting a sphere to the 3-D points using the pcfitsphere function.
% Detect the globe
globe = pcfitsphere(ptCloud, 0.1);
% Display the surface of the globe
title("Estimated Location and Size of the Globe");
hold off
Metric Reconstruction of the Scene
The actual radius of the globe is 10cm. You can now determine the coordinates of the 3-D points in centimeters.
% Determine the scale factor
scaleFactor = 10 / globe.Radius;
% Scale the point cloud
ptCloud = pointCloud(points3D * scaleFactor, Color=color);
relPose.Translation = relPose.Translation * scaleFactor;
% Visualize the point cloud in centimeters
cameraSize = 2;
plotCamera(Size=cameraSize, Color="r", Label="1", Opacity=0);
hold on
grid on
plotCamera(AbsolutePose=relPose, Size=cameraSize, ...
Color="b", Label="2", Opacity=0);
% Visualize the point cloud
pcshow(ptCloud, VerticalAxis="y", VerticalAxisDir="down", MarkerSize=45);
camorbit(0, -30);
% Label the axes
xlabel("x-axis (cm)");
ylabel("y-axis (cm)");
zlabel("z-axis (cm)")
title("Metric Reconstruction of the Scene");
This example showed you how to recover camera motion and reconstruct the 3-D structure of a scene from two images taken with a calibrated camera.
[1] Hartley, Richard, and Andrew Zisserman. Multiple View Geometry in Computer Vision. Second Edition. Cambridge, 2000.
Allometric equations to estimate above-ground biomass of small-diameter mixed tree species in secondary tropical forests
iForest - Biogeosciences and Forestry, Volume 13, Issue 3, Pages 165-174 (2020)
doi: https://doi.org/10.3832/ifor3167-013
Published: May 02, 2020 - Copyright © 2020 SISEF
Research Articles
Accounting for small-size tree biomass is critical to improve total stand biomass estimates of secondary tropical forests, and is essential to quantify their vital role in mitigating climate change.
However, owing to the scarcity of equations available for small-size trees, their contribution to total biomass is unknown. The objective of this study was to generate allometric equations to
estimate total biomass of 22 tree species ≤ 10 cm in diameter at breast height (DBH), in the Yucatan peninsula, Mexico, by using two methods. First, the additive approach involved the development of
biomass equations by tree component (stem, branch and foliage) with simultaneous fit. In the tree-level approach, total tree biomass equations were fit for multi-species and wood density groups.
Further, we compared the performance of total tree biomass equations that we generated with multi-species equations of previous studies. Data of total and by tree component biomass were fitted from
eight non-linear models as a function of DBH, total height (H) and wood density (ρ). Results showed that two models, identified as model I and II, best fitted our data. Model I has the form AGB = β[0
](ρ·DBH^2·H)β[1 ]+ ε and model II: AGB = exp(-β[0])(DBH^2·H)β[1 ]+ ε, where AGB is biomass (kg). Both models explained between 53% and 95% of the total observed variance in biomass, by
tree-structural component and total tree biomass. The variance of total tree biomass explained by fit models related to wood density group was 96%-97%. Compared foreign equations showed between 30%
and 45% mean error in total biomass estimation compared to 0.05%-0.36% error showed by equations developed in this study. At the local level, the biomass contribution of small trees based on foreign
models was between 24.38 and 29.51 Mg ha^-1, whereas model I gave 35.97 Mg ha^-1. Thus, from 6.5 up to 11.59 Mg ha^-1 could be excluded when using foreign equations, which accounts for about 21.8% of the total stand biomass. Local equations provided more accurate biomass estimates with the inclusion of ρ and H as predictor variables and proved to be better than foreign equations. Therefore, our equations are suitable to improve the accuracy of carbon stock estimates in the secondary forests of the Yucatan peninsula.
Species Diversity, Biomass-carbon Stocks, Additive Equations, Simultaneous Fit, Wood Density Groups
The importance of tropical secondary forests for biodiversity conservation, provision of ecosystem services and climate change mitigation is globally recognized ([52], [18], [46]). These forests have
expanded partly due to abandonment of agricultural land, increasing of extensive grazing, and deforestation and degradation of old-growth forests ([43], [22], [46]). Secondary forests hold a relative
high abundance of small-diameter trees ([18], [36]). In this study, small trees are defined as trees smaller or equal than 10 cm diameter at breast height (DBH, 1.30 m above the ground).
Small-diameter trees are an important component of woody plants diversity, show positive growth rates, and are key to study changes in demographic features of tropical species ([54], [36]). The high
density and diversity of small trees give forests high levels of relative resilience against anthropogenic and natural disturbances ([2], [36]).
Small-diameter trees are an important component of total tree density and biomass in tropical forests in the Yucatan peninsula, Mexico. This component represents from 2.4% to 60% of the total tree
density in forests of the state of Campeche, Mexico ([55], [25]). In other young secondary neotropical forests, small trees might represent from 65% to 93.6% of the total tree density ([5], [36]).
Small trees might also contribute from 3.4% to 41.3% of the total biomass, depending on the forest successional stage ([34], [36]). For example, in secondary tropical forests of Mexico, Brazil,
Nicaragua and Central Africa, abandoned since 0, 17, 25, 37 and more than 48 years, the contribution of small trees to total biomass was ~ 43.35 Mg ha^-1 (41.3%), 51.27 (27.5%) Mg ha^-1, and 15 to
29.75 (5.7 to 14.45%) Mg ha^-1, respectively ([35], [34], [36]).
Tree biomass is commonly estimated with allometric models that use easily measured tree variables as predictors, namely DBH, total height and wood density ([10], [17]). Most allometric biomass
equations are developed with trees > 10 cm in DBH and at large scales ([6], [10]). Notwithstanding the structural and ecological importance that trees ≤ 10 cm in DBH represent in secondary tropical
forests, they are not typically incorporated into biomass models ([8], [36]). Hence, the actual biomass locally stored in secondary tropical forests is likely underestimated.
Biomass equations generated at the local scale might become a reliable tool to reduce uncertainty of carbon stock estimates. For instance, biomass errors ranging from 10% to 40%, and in extreme cases
up to 70% have been reported for neotropical regions ([52], [34], [24]). The inclusion of tree height and wood density into the fitting process could also help to reduce uncertainty and improve the
accuracy of biomass estimations. For example, Baker et al. ([3]) and Lima et al. ([34]) reported an inter-regional variation in tree wood density close to 16% in four regions of the Amazonia. As a
result, there was a high variability among regions in biomass stocks estimated with the same generic allometric equations, likely due to a variation of DBH to total height ratio, which is also
influenced by species wood density ([3]). In the Republic of Congo forests, Bastin et al. ([4]) found large bias in biomass estimations which were based on models that used wood density obtained at a
global scale. In this case, the overestimation in biomass ranged from 20% to 40% in species with wood densities from 0.52 to 0.68 g cm^-3. Thus, wood density, which can vary among tree species of the
same region and even more among those of different geographical regions, is a tree characteristic with considerable influence in determining total biomass variability ([11]).
In Mexican tropical forests, the study and analysis of the biomass of small-size trees and their contribution to the ecosystem is limited, though many non-linear and exponential type biomass
equations have been developed. Indeed, most equations have been generated for temperate forests, especially for valuable timber species of the Pinaceae and Fagaceae families ([48]). Furthermore,
equations for tropical forests and particularly for small-size trees are fairly uncommon. Hughes et al. ([26]) developed a general equation for tropical trees ≤ 10 cm in DBH in central-east Mexico,
which was further re-parametrized by Chave et al. ([9]) to estimate biomass of small-size trees in Panama’s forests. In the southern Yucatan peninsula, Cairns et al. ([7]) developed nine
species-specific equations for trees ≤ 10 cm in DBH and a general equation for large-diameter trees. However, the applicability of these species-specific equations is limited for secondary forests
because they were developed using old-growth stands data sets. Many of the biomass estimations for small-size trees in tropical secondary forests in Mexico are based on the general equations by
Hughes et al. ([26]) and Chave et al. ([9]). However, so far there has been no evaluation of these equations when comparing with local equations generated for small-size trees in the Yucatan
peninsula; such assessment would be useful to detect the variability of biomass across forest stands with high-density of small-size trees obtained from forest inventories. Also, local biomass
equations could become valuable tools to evaluate the contribution of secondary forests to the global carbon cycle through improved estimations of carbon stocks.
In this study, we developed allometric equations to estimate the biomass of small diameter trees (DBH ≤ 10 cm) for 22 tree species that are structurally important in secondary tropical forests of the
southern Yucatan peninsula, Mexico. The main objectives were to: (i) generate biomass equations by tree structural components (i.e., stem, branches and foliage); (ii) develop multi-species equations
and by wood density groups (i.e., high, intermediate and low density) to estimate total tree biomass; (iii) compare the estimation error of the multi-species and wood density group equations against
generalized biomass equations developed for other tropical regions; and (iv) examine the ability to accurately estimate biomass at the stand level of the Hughes et al. ([26]) and Chave et al. ([9])
equations compared with the best equation generated in this study using data from young secondary forest. The hypotheses were: (I) equations that include total height and wood density as independent
variables, besides DBH, fit the data better than simple equations (i.e., based on one or two predictors), since they include the wood properties that determine the species growth form; and (II) the
equations developed in this study are more accurate for total tree biomass estimation at local level than those generated in other tropical regions, because the former incorporate the effects of
growth and allometric characteristics of the species in the model.
Materials and methods
Study site
This study was performed in secondary tropical forests ranging from nine to 35 years old and in old-growth stands. The stand age corresponds to the time (years) elapsed after the last application
of slash and burn agriculture system (maize, beans and squash as main products). Stands were located in the southeast region of the Yucatan peninsula, Mexico, between the Sian Ka’an Biosphere Reserve
in Quintana Roo (19° 05′ and 20° 06′ N, 87° 30′ and 87° 58′ W) and Calakmul Reserve in Campeche (19° 15′ and 17° 45′ N, 90° 10′ and 89° 15′ W - Fig. 1). The Calakmul Biosphere Reserve is the largest
continuous conservation area of tropical rainforest in Mexico (7,231.85 km^2). The Sian Ka’an and Calakmul Biosphere Reserves are part of the Mesoamerican Biological Corridor, which serves to
conserve the habitat of different species of flora and fauna, and also to promote a sustainable social and economic development of the region ([37]).
The dominant vegetation type is mid-stature, semi evergreen tropical forest ([42]). The region is characterized by large areas of secondary vegetation growing in different successional stages, due to
shifting cultivation and other types of land use. The climate is tropical sub-humid with mean annual rainfall ranging from 948 to 1500 mm, most of which falls in summer, while the driest months
(March and April) have less than 600 mm of rainfall ([33]). Mean annual temperature is about 26 °C ([23]). Topography is mostly flat with slight undulations. Soil types correspond to gleysols, vertic
cambisols and vertic luvisols, which are thin and shallow with a slow water drainage, and surface flooding occurs in the rainy season or during storms or hurricanes ([19]).
Biomass data collection
We performed a pre-assessment on each stand to collect information to estimate the Structural Importance Index (IVI) of each tree species ([14]). The IVI was calculated as the sum of the relative
abundance, frequency and dominance of each species within a given stand or forest. The tree species were ranked according to the index IVI. A total of 22 species ranging from 1 to 10 cm DBH (denoted
as small-size trees) were selected.
We selected and harvested between 12 and 18 trees per species (311 trees in total - Tab. S1 in Supplementary material) for biomass calculations. The fresh weight of each structural component of the
selected trees (i.e., stem, branches, and foliage) was recorded with an electronic scale TORREY CRS-HD^® of 500 kg capacity and ± 100 g accuracy. Three random samples of 100 ± 10 g were obtained from
branches and foliage to determine the dry weight/fresh weight ratio of both structural components. Further, three sections (disks) 5 to 7 cm thick per tree were obtained from the base, middle and
upper tree stem and weighed with an electronic scale OHAUS Pioneer^® of 5 kg (accuracy ± 0.1 g). In the case of trees ≤ 2.5 cm in DBH, the total structural components were sent to the laboratory. All
samples were oven-dried at 70 °C until they reached constant dry mass. The dry weight/fresh weight ratio was used to obtain the tree dry weight of stem, branches, and foliage. The total above-ground
tree biomass of sampled trees was calculated by adding up the total dry weight of each of the three structural components ([39]).
Wood density
Wood samples (cubes) were taken from each tree at 1.30 m from the base of the stem to determine wood basic density (g cm^-3). Each sample included the pith, heartwood, sapwood and cambium because the
distribution of these elements influences the wood density along the stem ([11]). We used the water displacement method to obtain the green volume of each cube ([11]). Subsequently, the cubes were
oven-dried at 105 °C to constant dry mass. Basic density of each wood sample was estimated using the dry mass/green volume ratio. The species were classified in three wood density groups (Tab. S1 in
Supplementary material): low (≤ 0.40 g cm^-3), intermediate (0.41-0.60 g cm^-3) and high (≥ 0.61 g cm^-3). Basic density is considered an economic indicator for the industry, and a good wood
descriptor for the study of trees and their ecological behavior ([11]).
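The density grouping can be written as a small function (a sketch of the thresholds stated above, in g cm^-3):

```python
def density_group(basic_density):
    """Classify basic wood density (g/cm^3) into the three groups used in this study:
    low (<= 0.40), intermediate (0.41-0.60), high (>= 0.61)."""
    if basic_density <= 0.40:
        return "low"
    elif basic_density <= 0.60:
        return "intermediate"
    return "high"

print(density_group(0.35))  # low
print(density_group(0.52))  # intermediate
print(density_group(0.68))  # high
```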
Model fitting and statistical analysis
Scatter plots of total biomass against DBH by species were used to explore data trends, and decide whether a linear or a non-linear model would be more suitable to fit the data. Based on scatter
plots, we tested eight allometric regression model types that were previously reported in other studies to estimate total tree biomass (Tab. S2 in Supplementary material).
We performed independent fitting for each model to estimate its parameters by structural component and for total tree biomass. Model fitting was performed by applying Newton's iterative method with Ordinary Least Squares (OLS) using PROC MODEL in SAS ([50]). Weighting functions were applied to the regression models to improve the homogeneity of variances and the goodness-of-fit statistics (
[1]). The models’ goodness of fit was assessed with the following statistics: (i) the root mean square error of the estimate (RMSE); (ii) the proportion of variance explained by the model, corrected for the number of estimated parameters (adjusted R^2); and (iii) Akaike’s Information Criterion (AIC - [28]). The AIC measures the goodness of fit of a regression model for a set of species ([10]). The best model minimizes RMSE and AIC and maximizes the adjusted R^2.
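The three selection statistics can be computed from observed and predicted biomass as sketched below. Note that RMSE and AIC each come in several variants; this sketch assumes RMSE = sqrt(SSE/(n − p)) and the least-squares form AIC = n·ln(SSE/n) + 2p, which may differ by constants from the exact formulations used in the paper:

```python
import math

def fit_stats(obs, pred, n_params):
    """RMSE, adjusted R^2 and AIC for one candidate model (variant assumptions
    noted above; lower RMSE/AIC and higher adj-R^2 indicate a better model)."""
    n = len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    mean_obs = sum(obs) / n
    sst = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1.0 - sse / sst
    return {
        "rmse": math.sqrt(sse / (n - n_params)),
        "adj_r2": 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1),
        "aic": n * math.log(sse / n) + 2 * n_params,
    }

# Toy observed/predicted biomass values (ours) for a 2-parameter model
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.1, 1.9, 3.2, 3.8, 5.0]
stats = fit_stats(obs, pred, n_params=2)
print(round(stats["adj_r2"], 3))  # → 0.98
```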
After selecting the models from the independent fitting, we refitted them by tree structural component (i.e., stem, branch, and foliage) using a simultaneous equation system. Each system combined the biomass equations for stem (eqn. 1), branches (eqn. 2) and foliage (eqn. 3) so that total tree biomass equals the sum of the three structural components, a property known as “model additivity” ([49]). Total tree biomass (eqn. 4) is a function of the independent variables in the component equations, with constraints on the model parameters, and is expressed as:
$$AGB_{stem}=f_{stem} \left ( \rho ,DBH,H, \beta \right )+ \varepsilon_ {stem}$$
$$AGB_{branch}=f_{branch} \left ( \rho ,DBH,H, \beta \right )+ \varepsilon_ {branch}$$
$$AGB_{foliage}=f_{foliage} \left ( \rho ,DBH,H, \beta \right )+ \varepsilon_ {foliage}$$
$$AGB_{total-tree}=f_{total-tree} \left ( \rho ,DBH,H, \beta \right ) + \varepsilon_ {total-tree}$$
where AGB is above-ground biomass (kg), β is the vector of regression parameters to be estimated, DBH is diameter at breast height (cm), H is total tree height (m), ρ is wood density (g cm^-3) by
species. We assumed that the error terms are independent and identically distributed as ε ~ N(0, σ^2_e).
The simultaneous fit, without analytical relations among equations, was solved with the NSUR technique (nonlinear seemingly unrelated regressions), iteratively applying the ITSUR option of PROC MODEL in SAS with the Newton algorithm ([49] - Tab. S3). It is very common to detect heteroscedasticity once the models are fitted and the residuals obtained; to correct this problem, we fitted the models using weighted regression (weighting functions) to improve the homogeneity of variances and guarantee model additivity, as recommended by Alvarez-González et al. ([1]) and Sanquetta et al. ([49]).
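True NSUR estimation requires a full econometrics toolchain such as SAS PROC MODEL. As a rough, pure-Python stand-in, the sketch below linearizes each component model AGB = β0·x^β1 (with x = ρ·DBH²·H) and fits it by ordinary least squares, then recovers total tree biomass by additivity. This is not seemingly unrelated regression (cross-equation error correlation is ignored) and the data are hypothetical; it only illustrates the additivity idea:

```python
import math

def ols_loglinear(x, y):
    """Fit ln(y) = a + b*ln(x) by ordinary least squares (closed form)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = my - b * mx
    return math.exp(a), b  # back-transformed beta0 (uncorrected) and beta1

# Hypothetical component data: x = rho * DBH^2 * H, y = component biomass (kg)
x = [5.0, 20.0, 80.0, 160.0]
components = {
    "stem":    [0.40, 1.50, 5.50, 10.50],
    "branch":  [0.15, 0.55, 2.10, 4.00],
    "foliage": [0.10, 0.30, 0.90, 1.50],
}
fits = {name: ols_loglinear(x, y) for name, y in components.items()}

def predict_total(xi):
    """Additivity: total-tree biomass is the sum of the component predictions."""
    return sum(b0 * xi ** b1 for b0, b1 in fits.values())

print(round(predict_total(40.0), 2))
```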
Multi-species (i.e., mixed-species) equations were fitted using the models previously selected after independent fitting (Tab. S4 in Supplementary material). Biomass equations by species and for the high and intermediate wood density groups were developed with a nonlinear model (eqn. 5), whereas for species with low wood density, whose biomass data showed a more linear trend than the remaining species, a linearized model was used (eqn. 6), assuming positive, multiplicative errors; a correction factor (eqn. 7) was also applied to correct the bias introduced by the logarithmic transformation:
$$AGB= \beta_0 ( \rho \cdot DBH \cdot H)^{ \beta_1 }+ \varepsilon$$
$$\ln (AGB)= \beta_0 + \beta_1 \ln ( \rho \cdot DBH \cdot H)+ \varepsilon$$
$$CF=exp \left ( \frac{\sigma ^2}{2} \right) \cdot \beta ^{\prime}_0$$
where AGB is the above-ground biomass (kg), β[0] and β[1] are the regression coefficients to be estimated, ρ is the wood density (g cm^-3) by species, DBH is the diameter at breast height (cm), H is the total tree height (m), ln is the natural logarithm function, CF is the correction factor, σ is the residual standard error, and β′[0] is the intercept estimated in the fitted model. We assumed that the error terms are independent and identically distributed as ε ~ N(0, σ^2_e).
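The linearized fit (eqn. 6) with a back-transformation correction can be sketched as follows. Here we use Baskerville's standard factor CF = exp(σ²/2) applied to the back-transformed intercept, which is one common reading of eqn. 7; the toy data and function names are ours:

```python
import math

def fit_log_model(x, y):
    """OLS fit of ln(AGB) = b0 + b1*ln(x), with x = rho*DBH*H (eqn. 6), plus
    Baskerville's correction factor CF = exp(sigma^2/2) for the bias
    introduced when back-transforming from the log scale."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b1 = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
         sum((u - mx) ** 2 for u in lx)
    b0 = my - b1 * mx
    resid = [v - (b0 + b1 * u) for u, v in zip(lx, ly)]
    sigma2 = sum(r * r for r in resid) / (n - 2)  # residual variance
    return b0, b1, math.exp(sigma2 / 2.0)

def predict_agb(b0, b1, cf, rho, dbh, h):
    """Bias-corrected back-transformation: AGB = CF * exp(b0) * (rho*DBH*H)^b1."""
    return cf * math.exp(b0) * (rho * dbh * h) ** b1

x = [1.0, 2.0, 4.0, 8.0]
y = [2.0 * v ** 1.5 for v in x]    # noiseless toy data: AGB = 2 * x^1.5
b0, b1, cf = fit_log_model(x, y)
print(round(b1, 3), round(cf, 3))  # → 1.5 1.0
```

With noiseless data the residual variance is zero, so CF = 1; with real, scattered biomass data CF exceeds 1 and scales the back-transformed predictions upward.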
We used the independent model approach with weighted regression to fit the multi-species and wood density group models, to improve homogeneity of variance and goodness of fit. The models’ predictive ability was assessed with the “k-fold cross-validation” method ([51], [53]). The original dataset was split into k = 10 disjoint subsets of similar sample size, with the elements of each subset chosen at random. Each subset was used in turn as a validation dataset, while the remaining k-1 subsets formed the training set. The “10-fold cross-validation” provides a good balance between bias and
variance, and is an indicator of models’ performance with independent datasets since it uses all the observations available for model fitting ([53]). To evaluate the accuracy and compare the
performance of the equations to estimate the biomass by structural component and total tree biomass, we calculated the relative mean error (RME% - eqn. 8) and mean absolute percentage error or bias
(MAPE% -eqn. 9) for the selected models ([15], [13], [51]) as follows:
$$RME \text{%}= \frac{1}{n} \sum_{i=1}^{n} \left (\frac{AGB_{Pred} - AGB_{Obs}}{AGB_{Obs}} \right) \cdot 100$$
$$MAPE \text{%}= {\frac{1}{n}}\sum_{i=1}^{n} \left|\frac{AGB_{Pred} - AGB_{Obs}} {AGB_{Obs}} \right| \cdot 100$$
where RME% and MAPE% are the relative mean error and the absolute bias, respectively, AGB[Pred] and AGB[Obs] are the predicted and observed biomass, respectively, and n is the sample size (trees).
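Eqns. 8-9 and the random 10-fold split translate directly into code. As an interpretation note, we divide the RME sum by n so that it is a mean, and take the absolute value inside MAPE, per the usual definitions; names and sample values are ours:

```python
import random

def rme_pct(obs, pred):
    """Relative mean error (%), eqn. 8: mean signed relative deviation.
    Negative values indicate underestimation, positive values overestimation."""
    n = len(obs)
    return 100.0 / n * sum((p - o) / o for o, p in zip(obs, pred))

def mape_pct(obs, pred):
    """Mean absolute percentage error (%), eqn. 9."""
    n = len(obs)
    return 100.0 / n * sum(abs(p - o) / o for o, p in zip(obs, pred))

def kfold_indices(n, k=10, seed=42):
    """Shuffle n observation indices and deal them into k disjoint folds;
    each fold serves once as validation, the remaining k-1 as training."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

obs, pred = [10.0, 20.0, 30.0], [9.0, 22.0, 30.0]
print(rme_pct(obs, pred), round(mape_pct(obs, pred), 2))  # → 0.0 6.67
```

Note how the -10% and +10% deviations cancel in RME but not in MAPE, which is why both are reported.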
We compared the RME% and MAPE% of biomass estimated with equations used in this study against equations generated for other tropical regions to assess uncertainty and select a final model. For
example, multi-species equations were compared with Hughes et al. ([26]) and Chave et al. ([9]) equations (eqn. 10, eqn. 11):
$$AGB_{pred}=exp[4.937+1.0583 \ln (DBH^2)] \cdot (1.14/10^6)$$
$$AGB_{pred}= \rho / \rho_ {av} \cdot exp[-1.839+2.116 \ln (DBH)]$$
where AGB[pred] is the predicted above-ground biomass (kg), DBH is the diameter at breast height (cm), ρ is the wood density (g cm^-3) by species, and ρ[av] is the mean wood density (0.54 g cm^-3) of
the plot and ln is the natural logarithm function.
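Eqn. 10 and eqn. 11 can be implemented directly from the coefficients given above. One caveat: the 1.14/10^6 factor in eqn. 10 suggests a gram-to-Mg conversion combined with a bias correction, so the output unit of the Hughes et al. equation as written here may be Mg rather than kg; we implement the formulas verbatim:

```python
import math

RHO_AV = 0.54  # mean plot wood density (g cm^-3) used in eqn. 11

def agb_hughes(dbh_cm):
    """Eqn. 10 (Hughes et al.): DBH-only model, coefficients as given above."""
    return math.exp(4.937 + 1.0583 * math.log(dbh_cm ** 2)) * (1.14 / 10 ** 6)

def agb_chave(dbh_cm, rho):
    """Eqn. 11 (Chave et al.): re-parametrization adding species wood density
    relative to the plot mean."""
    return rho / RHO_AV * math.exp(-1.839 + 2.116 * math.log(dbh_cm))

# Hypothetical 5 cm tree with rho = 0.64 g cm^-3
print(agb_hughes(5.0), agb_chave(5.0, 0.64))
```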
We selected eqn. 10 and eqn. 11 because they were developed for a mixture of tree species with DBH ≤ 10 cm, which is the same size range of the trees used in this study. Hughes et al. ([26])
generated their equation for a tropical forest in central-east Mexico, and its application is based only on DBH as the predictor of biomass. Chave et al. ([9]) equation was generated for tropical
forests located in Panama, and it is a re-parametrization of Hughes et al. ([26]) model; it incorporates wood density as a second independent variable, in addition to DBH. We hypothesized that the
inclusion of wood density would be beneficial for models’ performance and accuracy.
The RME% and MAPE% of equations fitted by wood density groups was compared to Djomo et al. ([15]) and Van Breugel et al. ([52]) equations (eqn. 12, eqn. 13):
$$\ln (AGB_{pred}) = -2.4733+0.2893 \ln (DBH)^2 -0.0378 \ln (DBH)^2 +0.0372 \ln (DBH^2+H) +0.2843 \ln ( \rho )$$
$$\ln (AGB_{pred})= -1.130+2.267 \ln (DBH)+1.186 \ln ( \rho )$$
where AGB[pred] is the predicted above-ground biomass (kg), DBH is the diameter at breast height (cm), H is the total tree height, ln is the natural logarithm function, and ρ is the wood density (g
cm^-3) by species.
Our main interest was to assess the performance of eqn. 12 and eqn. 13 since they were also developed for mixed tropical forests with trees ranging from 1 to 138 cm DBH, and included wood density and
DBH as predictors. Djomo et al. ([15]) developed their equation using data collected in mature forests from different countries and continents, and included total height, wood density and DBH as
predictors. The Van Breugel et al. ([52]) equation was generated using data from secondary forests from one to 25 years old and from mature stands > 40 years old, using wood density and DBH as predictors.
The RME% and MAPE% of the equations generated in other tropical regions were calculated from “k-fold cross-validation” tests ([44], [29]). Negative and positive values of RME% indicate underestimation and
overestimation, respectively, of the total biomass of a set of trees. Statistical differences in RME% and MAPE% values among equations were analyzed with the Kruskal-Wallis tests at 95% confidence
intervals using the “kruskal.test” function in R version 3.5.1 ([47]). We performed a Duncan’s multiple range test, using the “dunn.test” function implemented in the PMCMR package in R, to determine for which equations the mean RME and MAPE were statistically different ([45]). We analyzed the accuracy of the models’ estimates using a linear regression without intercept between the predicted and observed biomass values, fitted with the “lm” function in R. If a model fits the data correctly, the estimated slope should be close to one, whereas a slope far from one indicates an inadequate fit ([51]).
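The no-intercept calibration check has a closed form (b = Σxy / Σx²). A minimal sketch, assuming observed biomass is regressed on predicted biomass (the regression direction is not fully specified above):

```python
def slope_through_origin(obs, pred):
    """OLS slope of a no-intercept regression of observed on predicted biomass:
    b = sum(pred*obs) / sum(pred^2). A slope near 1 indicates good calibration."""
    sxy = sum(p * o for o, p in zip(obs, pred))
    sxx = sum(p * p for p in pred)
    return sxy / sxx

# Perfect predictions give a slope of exactly 1
print(slope_through_origin([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 1.0
```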
Lastly, we evaluated the accuracy of estimates of the total biomass stored in small-size trees (≤ 10 cm in DBH) obtained with the equations of Hughes et al. ([26]) and Chave et al. ([9]) against the best equation of this study. The biomass of large trees (DBH > 10 cm) was estimated with the equation of Cairns et al. ([7]). The data were collected in 18 plots of 500 m^2 (10 × 50 m) distributed across tropical
secondary forests ranging from nine to 80 years-old of abandonment after traditional slash-and-burn agriculture (maize, beans and squash).
Results
We tested eight regression models to predict above-ground biomass (Tab. S2 in Supplementary material). The adjusted R^2 values for these models ranged from 80% to 94%. Model I (Model 7 in Tab. S2) and model II (Model 8 in Tab. S2) showed the highest adjusted R^2 and the smallest RMSE and AIC (Tab. S5). We selected these two models based on their goodness-of-fit statistics.
Equation by structural component
Equations fitted with models I and II accounted for 53% to 95% of the observed biomass variance by structural component, and for 92% to 95% of the total tree biomass variance (Tab. 1). The smallest explained variances (71% and 53%) were observed for foliage biomass estimation. In terms of RMSE, model I showed greater accuracy for stem, branches and total tree biomass. The weighting function, i.e., 1/DBH^2H, was adequate to improve the homogeneity of variances and the goodness-of-fit statistics by tree structural component and for total tree biomass.
Tab. 1 - Allometric equations for biomass estimation by tree structural component and for total tree biomass, derived with simultaneous fitting, and multi-species equations. (Model I): AGB = β[0](ρ·DBH^2·H)^β[1]; (Model II): AGB = exp(-β[0])(DBH^2·H)^β[1]. (AGB): stem, branches, foliage or total tree estimated above-ground biomass (kg); (ρ): wood density (g cm^-3); (DBH): diameter at breast height (cm); (H): total tree height (m); (β[0], β[1]): regression coefficients to be estimated; (RMSE): root mean square error of the estimate; (adj-R^2): proportion of variance explained by the model. We assumed that the error terms are independent and identically distributed: ε ~ N(0, σ^2_e).
Model No. Allometric equation RMSE adj-R^2
Model I 1 AGB[stem]=0.057541(ρ·DBH^2·H)^0.916963 1.6538 0.95
Model I 2 AGB[branches]=0.019758(ρ·DBH^2·H)^0.980837 1.6293 0.73
Model I 3 AGB[foliage]=0.022462(ρ·DBH^2·H)^0.724191 0.4491 0.71
Model I 4 AGB[total-tree]=AGB[stem]+AGB[branches]+AGB[foliage] 2.6009 0.95
Model I 5 AGB[total-tree multi-species]=0.078479(ρ·DBH^2·H)^0.945339 0.1389 0.96
Model II 6 AGB[stem]=exp(-3.471635)(DBH^2·H)^0.956893 2.1974 0.93
Model II 7 AGB[branches]=exp(-4.047339)(DBH^2·H)^0.954151 2.4432 0.76
Model II 8 AGB[foliage]=exp(-3.838296)(DBH^2·H)^0.712222 0.8329 0.53
Model II 9 AGB[total-tree]=AGB[stem]+AGB[branches]+AGB[foliage] 3.7892 0.92
Model II 10 AGB[total-tree multi-species]=exp(-2.97501)(DBH^2·H)^0.957051 0.1552 0.95
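Using the published Model I coefficients from Tab. 1, the additivity property (total tree biomass as the sum of the component predictions) can be verified numerically; the example tree below is hypothetical:

```python
# Model I component coefficients as published in Tab. 1 (x = rho * DBH^2 * H)
MODEL_I = {
    "stem":     (0.057541, 0.916963),
    "branches": (0.019758, 0.980837),
    "foliage":  (0.022462, 0.724191),
}

def component_agb(rho, dbh, h, component):
    """Component biomass (kg) from Model I: AGB = b0 * (rho*DBH^2*H)^b1."""
    b0, b1 = MODEL_I[component]
    return b0 * (rho * dbh ** 2 * h) ** b1

def total_agb(rho, dbh, h):
    """Additivity (equation 4 of Tab. 1): total = stem + branches + foliage."""
    return sum(component_agb(rho, dbh, h, c) for c in MODEL_I)

# Hypothetical small tree: rho = 0.64 g cm^-3, DBH = 5 cm, H = 6 m
print(round(total_agb(0.64, 5.0, 6.0), 2))  # ≈ 6 kg for this tree
```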
The biomass of the structural components and of the total tree for multi-species, calculated with models I and II of Tab. 1, showed good fit when compared to the observed biomass, with slope (y) and correlation coefficient (r) values close to one (Fig. 2).
Fig. 2 - Relationship between observed and predicted biomass. Model I: total biomass (a), stem biomass (b), branch biomass (c), and foliage biomass (d). Model II: total biomass (e), stem biomass (f),
branch biomass (g), and foliage biomass (h). The gray solid line represents the 1:1 ratio between the biomass values. The black dotted line represents the linear regression between observed and
predicted biomass.
Both the RME% and the bias (MAPE%) for estimating biomass by structural component (i.e., stem, branches and foliage) were comparable between models I and II (Tab. 2). However, there were statistical differences in the branch (p ≤ 0.01) and foliage biomass (p ≤ 0.01) obtained with the two models (Tab. 2).
Tab. 2 - Comparison of the relative mean error (RME, %) and bias (MAPE, %) of biomass estimations by structural component and total tree among multi-species equations (derived from models I and model
II), Hughes et al. ([26]) and Chave et al. ([9]) equations. Different letters indicate significant statistical differences (p < 0.05) between structural components after Duncan’s multiple range test
(± standard error).
Parms Structural component Model I Model II Hughes et al. Chave et al.
RME (%) Stem -0.20 ± 1.57 ^a -0.02 ± 1.65 ^a - -
RME (%) Branch -0.41 ± 2.56 ^a 0.14 ± 2.78 ^b - -
RME (%) Foliage 0.93 ± 4.86 ^a 2.41 ± 6.18 ^b - -
RME (%) Total-tree -0.36 ± 2.08 ^a 0.05 ± 1.92 ^a -44.51 ± 0.92 ^b -30.36 ± 1.21 ^c
MAPE (%) Stem 0.08 ± 0.06 ^a 0.008 ± 0.05 ^a - -
MAPE (%) Branch 0.15 ± 0.10 ^a 0.05 ± 0.09 ^a - -
MAPE (%) Foliage 0.32 ± 0.18 ^a 0.84 ± 0.17 ^a - -
MAPE (%) Total-tree 0.13 ± 0.07 ^a 0.01 ± 0.07 ^a 15.57 ± 0.04 ^b 10.63 ± 0.05 ^c
Total tree biomass estimated with the multi-species equations of Tab. 1 was highly correlated with the observed biomass, as indicated by the slope of the linear regression (Fig. 3a, Fig. 3b). When comparing the performance of these equations with those developed by Hughes et al. ([26]) in central-east Mexico and by Chave et al. ([9]), biomass appears to be consistently underestimated by the latter equations (Fig. 3c, Fig. 3d). Furthermore, these equations could not accurately estimate the biomass of trees > 5 cm DBH or heavier than 20-30 kg of dry weight (Fig. 3c, Fig. 3d).
Fig. 3 - Comparison between observed total tree and predicted total tree biomass. (a): Model I; (b): model II; (c): Chave et al. ([9]); (d): Hughes et al. ([26]). The gray solid line represents the
1:1 ratio between biomass values. The black dotted line represents the linear regression between observed and predicted biomass.
Model I and II showed less error and bias in the estimation of total tree biomass compared to the Hughes et al. ([26]) and Chave et al. ([9]) equations (Tab. 2). Particularly, the equations developed
by Hughes et al. ([26]) and Chave et al. ([9]) underestimated total tree biomass by 44.51% and 30.36%, respectively, on average (Tab. 2). Also, the values of the estimation error of multi-species
equations in this study with respect to Hughes et al. ([26]) and Chave et al. ([9]) equations showed significant statistical differences (p ≤ 0.0001 - Tab. 2).
Equation by wood density groups
The distribution of residuals and the models’ goodness-of-fit statistics for species with high and intermediate wood density improved with the weighting factor 1/DBH^2H. The equations fitted by wood density group (Tab. 3) accounted for 96% to 97% of the total variance in total tree biomass. The effect of the estimated coefficient β[1], representing the influence of the three variables ρ, DBH and H on total biomass, was highly significant (p < 0.0001). As a result, the exponential and linear relationships with the combined variable ρ·DBH^2·H proved to be good predictors of total tree biomass for species with high, intermediate and low wood density.
Tab. 3 - Allometric equations for total tree biomass estimation for tree species with high, intermediate and low wood density. (n): sample size (number of trees); (AGB): total tree estimated
above-ground biomass (kg); (ρ): wood density (g cm^-3); (DBH): diameter at breast height (cm); (H): total tree height (m); (RMSE): root mean square error of the estimate; (adj-R^2): proportion of
variance explained by the model. A correction factor (CF) of 1.05 was used to reduce the bias of log-transformation for the biomass equation for low wood density species.
No Wood density n Equation RMSE adj-R^2
1 High 234 AGB[total-tree]=0.077022(ρ·DBH^2·H)^0.947669 0.1421 0.96
2 Intermediate 21 AGB[total-tree]=0.079603(ρ·DBH^2·H)^0.962061 1.0873 0.97
3 Low 20 AGB[total-tree]=0.0814549(ρ·DBH^2·H)^0.971735 0.3083 0.97
The linear regression analysis also showed a stronger association between observed and estimated biomass for species with high and intermediate wood density than for species with low wood density (Fig. 4).
Fig. 4 - Comparison between observed versus estimated total tree biomass by wood density groups. High (a) intermediate (b) and low (c). The gray solid line represents the 1:1 ratio between the
biomass values. The black dotted line represents the linear regression between observed and predicted biomass.
Furthermore, the equations fitted by wood density group estimated total tree biomass with smaller error and bias (Tab. 4) than the equations obtained by Van Breugel et al. ([52]) for tropical forests of Panama and by Djomo et al. ([15]). Significant statistical differences in error and bias were obtained between model I and Djomo et al. ([15]) when estimating biomass by wood density group (high: p ≤ 0.0001; intermediate: p ≤ 0.01; low: p ≤ 0.003).
Tab. 4 - Comparison of the relative mean error (RME, %) and bias (MAPE, %) of total tree biomass estimated with the equations fitted by wood density group (this study) and with foreign equations. Different letters indicate significant statistical differences (p < 0.05) among equations after Duncan’s multiple range test (± standard error).
Parms Model High Intermediate Low
RME (%) This study -0.34 ± 0.99 ^a -0.65 ± 4.74 ^a 2.66 ± 15.97 ^a
RME (%) Djomo et al. ([15]) -98.70 ± 0.05 ^c -98.31 ± 0.14 ^c -97.45 ± 0.05 ^b
RME (%) Van Breugel et al. ([52]) -1.23 ± 1.34 ^a -10.15 ± 3.19 ^ad -14.38 ± 5.23 ^ac
MAPE (%) This study 0.03 ± 0.02 ^a 9.50 ± 0.01 ^b 0.11 ± 0.03 ^a
MAPE (%) Djomo et al. ([15]) 0.31 ± 0.42 ^a 34.14 ± 0.34 ^b 3.53 ± 0.28 ^a
MAPE (%) Van Breugel et al. ([52]) 0.14 ± 1.98 ^a 35.58 ± 1.03 ^b 5.57 ± 0.83 ^a
Comparing predicting ability of equations
Based on the inventory data, the average biomass of trees with DBH > 10 cm was 128.97 ± 14.46 Mg ha^-1. The biomass of small trees (2.5-10 cm in DBH) estimated with Model I was greater (p < 0.05) than the biomass estimated with the Hughes et al. ([26]) and Chave et al. ([9]) equations (Fig. 5). In particular, with Model I the small trees averaged 35.97 ± 3.47 Mg ha^-1 of biomass, accounting for 21.8% of total stand biomass. The Hughes et al. ([26]) equation estimated 24.38 ± 2.07 Mg ha^-1 of biomass, representing 15.9% of the total, while the Chave et al. ([9]) equation indicated that the biomass accumulated in small trees was 29.51 Mg ha^-1, accounting for 18.62% of the total. The biomass estimates obtained with the Hughes et al. ([26]) and Chave et al. ([9]) equations were similar.
Fig. 5 - Above-ground biomass at the stand level of small trees estimated with Chave et al. ([9]), Hughes et al. ([26]) and Model I, using inventory data.
Discussion
We developed allometric equations to estimate the total above-ground biomass of small trees in tropical forests of the Yucatan peninsula, by structural component (i.e., stem, branches, foliage) and for groups of species differing in wood density (i.e., high, intermediate and low). Models that only considered DBH had the largest estimation errors, while equations that included total height and wood density, in addition to DBH, significantly improved goodness of fit and reduced the estimation error (Tab. S5 in Supplementary material), which supports our hypothesis I. This is consistent with other studies of tropical forests in Asia, Africa and at the global scale, which documented a better fit for models including such variables compared with models that did not ([15], [39]).
Total height and wood density influence the variability of tree biomass because of their close correlation with structural tree characteristics and physiological and mechanical properties of woody
species ([12]). According to Kenzo et al. ([30]), a careful selection of biomass predictors is required for tropical forests to improve model accuracy and goodness of fit. However, many statistical models use DBH as the only predictor, whereas others use only DBH and wood density ([39]) and exclude total height because it is difficult to measure in the field ([15], [27]). Nevertheless, it is highly recommended to include both total height and wood density in allometric equations because they may help lower errors in model fitting ([10], [21]), as observed in this study.
Estimation of biomass by structural components
The use of NSUR to fit models by tree structural component (i.e., stem, branches, and foliage - Tab. 1) improved model accuracy for small trees, as the method guarantees that total tree biomass is the sum of the biomass of each component. In addition, the simultaneous fitting produced a better fit when considering the total variability of the biomass of the three structural components, which concurs with similar studies using the additivity method to fit biomass equations simultaneously ([41], [49]). Several authors have reported similar results for tropical and temperate forests, since the method helps minimize the sum of residuals and generates more consistent coefficients for component and total biomass, guaranteeing additivity ([49], [56]).
Model I showed a lower relative mean error for stem (-0.03%), branch (-2.5%) and foliage (-0.17%) biomass than model II (Tab. 2). Previous studies in forests of Africa, Asia and Mexico reported
bigger errors in biomass estimation for tree structural components (stem: ~ 4.06%; branches: 1.5% up to 58.2%; foliage: 8.6% to 15.8%) when wood density was not included as predictor ([15], [32],
[16]). Since tree crown architecture differs among tropical species ([31], [20]), we assumed that the sample size (i.e., number of trees harvested) was large enough to obtain a good model fit, so that the models could better explain the variability of branch and foliage biomass. In addition, the inclusion of total height and wood density in the models improved biomass estimations, because these variables better reflect the allometry (i.e., growth) and crown architecture of tree species ([3], [40]). Thus, there is a strong relationship of total height and wood density with branch and foliage biomass. Many studies indicate that species, forest structure and site quality can also affect the variation in biomass of tree components ([38], [53]). We did not include these factors in our models, thus their effect could not be tested in this study. However, we emphasize that the models obtained in this study are efficient and statistically reliable for estimating the biomass of small trees.
Sources of error in total biomass estimation with multi-species equations
The inclusion of wood density, in addition to DBH and total height, in the multi-species models I and II improved total tree biomass predictions, as shown by their lower biomass estimation errors (Tab. 2). In contrast, eqn. 10, which uses DBH as the only predictor of biomass, had on average a 45% mean error in total biomass estimation. Eqn. 11 improved the estimation of total tree biomass by including wood density, though it still showed a 30% relative mean error (Tab. 2).
The large error in the biomass estimated with eqn. 10 might be associated with the use of DBH as the only predictor, and with the sample size used (66 trees). Other studies, similar to the one presented
here, showed that allometric models using DBH as the only predictor can underestimate total biomass by 4.6% or overestimate it by 4.0%, 5.9%, or up to 52% error on average ([15], [8], [13]). This can
be explained mainly because DBH is insufficient to describe biomass relationships that are also determined by total height, wood density, crown diameter, or architectural type ([40], [17]).
Regarding the sample size, Van Breugel et al. ([52]) fitted two generic local models with 244 trees of 26 species in secondary forests, using only DBH in one model and DBH and wood density in the
second model as predictors. When these authors used 80% (195 trees) and 20% (49 trees) of the total sampled trees, the relative mean error increased from 4% to 21%. In our study, we did not split the
sample to evaluate models’ performance, but about 0.5% error was obtained when using 311 trees. Therefore, the development of models for multi-species using only DBH as the main predictor - such as
those by Hughes et al. ([26]), and Chave et al. ([9]) equations - requires a larger sample size than a model including both DBH and wood density, since model parameters are systematically sensitive
to small sample sizes ([52], [17]).
The eqn. 11 ([9]) was developed to analyze moist tropical forests in Panama, using DBH and wood density as predictors of biomass. However, the wood density used in the above model was the average
value calculated over 123 species, corresponding to 0.54 g cm^-3. In contrast, we used wood density values for each species obtained from samples measured in the field. We considered that
environmental conditions, sample size and the predictors used in eqn. 10 and eqn. 11 might be the main factors leading to the larger error observed when the above models were tested on our datasets.
In general, generic equations developed for a different region and applied at the local level give results similar to those obtained in this study. For example, Ketterings et al. ([31]) generated
equations for specific sites with trees 5 to 50 cm in DBH (overall 29 trees) in secondary forests of Sumatra. Further, they contrasted the performance of their equations (which included wood density
and DBH as predictors) with those developed at global scale using data collected in a wide range of tropical conditions and neotropical vegetation types, resulting in a reduction by 35%-51% of the
error for total biomass estimates. Likewise, when the model fitted by Brown ([6]) was used for Sumatra’s tree data, biomass estimations were higher than the total biomass observed. In Brazilian
forests, 10.6% and 14.8% mean estimation errors were obtained with the pan-tropical equations by Brown ([6]) and Chave et al. ([10]), whereas the local models showed a 5.63% mean estimation error (
[34]). In southeast Asian forests, a mean error of 19.8% was obtained locally, but the average error increased to 29.2% and up to 38.4% when regional and global scale equations, respectively, were used ([39]). These findings support the hypothesis that local equations can estimate biomass more precisely than foreign equations, as the latter hardly reflect the allometric relationships of local species.
Ketterings et al. ([31]) and Mugasha et al. ([38]) pointed out that large errors in biomass estimation are mainly due to: (i) the relatively small sample size used to fit the equations, which implies that the equation parameters are not adequate for other sites with high tree densities and a variety of species, or with tree diameters outside the range used in the fitting process; (ii) predictors that are not adequate or sufficient to explain the relationship with total biomass (in this case, only DBH); and (iii) wood density, which can affect the equation coefficients since it is influenced by site characteristics (i.e., soil, precipitation, species mixture, among other factors), meaning that parameter values may not be appropriate for sites where no local estimates are available.
Equations by wood density groups
Biomass models for wood density groups are not common for tropical regions and particularly for the Yucatan peninsula. In this study, we developed three biomass equations for species with high
(0.61-0.80 g cm^-3), intermediate (0.42-0.52 g cm^-3) and low (0.25-0.29 g cm^-3) wood density (Tab. 3). The exponential and linear models provided the best fit and performance in estimating total
tree biomass of species grouped by wood density. RMSE values obtained with the above models ranged from 0.1421 to 1.0873 (Tab. 3), similar to the RMSE values (0.287 to 0.548) reported for Vietnam forests and at the global scale using exponential and linear models on species with wood density ranging from 0.50 to 0.83 g cm^-3 ([13], [39]). In this study, the highest model accuracy was obtained for
species showing high wood density, followed by species with intermediate wood density, whereas larger errors were observed for low wood density species. In this last group, we believe that the small
sample size (only two species with a total of 25 trees) was not sufficient to obtain reliable equations. Nam et al. ([39]) suggested that it is important to sample tree species covering the whole
range of wood densities where different species coexist, in order to develop reliable allometric equations. In this study, greater accuracy in biomass predictions was observed for species with high
wood density, due perhaps to the large sample size (18 of the 22 species analyzed were included in this group). Indeed, it is likely that the 18 species fully reflected the large variability of
species with high wood density in the studied secondary forest.
The pan-tropical model by Djomo et al. ([15] - eqn. 12) underestimated the total biomass of species with high, intermediate and low wood density by about 98% (Tab. 4), and no significant differences in biomass estimation among wood density groups were observed. This model was developed with trees ranging from 1 to 138 cm in DBH and 0.25-0.57 g cm^-3 in wood density, collected from different tropical regions of the world. Therefore, it was not appropriate to accurately estimate the biomass of the small trees (≤ 10 cm DBH) of this study.
The model by Van Breugel et al. ([52] - eqn. 13) was developed for 26 species and 244 trees from 3 to 29 cm in DBH; the trees were harvested in secondary forests 1 to 25 years old, and in stands younger than 40 years old, within forests located between agricultural fields and grasslands; the wood density of the species included in that study averaged 0.49 g cm^-3. The model was mostly generated for low and intermediate wood density tree species (almost 80% of the species harvested). In contrast, we harvested more trees from species with high wood density (mean wood density 0.64 g cm^-3), mainly located in secondary forests with a relatively good conservation status and a history of moderate land use (between one and two years of agricultural use). Differences in environmental patterns, forest type, species biometric characteristics and wood density values between the species evaluated by Van Breugel et al. ([52]) and those of this study likely had a large influence on the models’ performance. The large bias in the biomass estimates obtained here with models generated for other regions confirms that applying such models outside the areas for which they were intended is a significant source of uncertainty in estimating local biomass and carbon stocks.
Comparing the predictive ability of equations
We observed that model I had the best accuracy in estimating the contribution of small trees to total biomass (Fig. 5). In contrast, eqn. 10 (which is widely used to estimate small-tree biomass in
the Yucatan peninsula) gave lower biomass estimates than eqn. 11; between 6.5 and 11.59 Mg ha^-1 of biomass could be excluded when eqn. 10 and eqn. 11, respectively, are used. The
lower performance of the former equation can be attributed to the fact that total height and wood density are not included as predictors in the model. We also noticed that eqn. 10
underestimated (by around 45%) the total biomass of trees heavier than 20 kg (trees > 5 cm in DBH); therefore, we assume that the number of trees of 5 to 10 cm DBH harvested for this study was
not representative, which could also have contributed to the low model performance. Our results demonstrate that eqn. 10 introduces an important bias into biomass estimations for the Yucatan
peninsula forests, which supports our second hypothesis. Furthermore, this model may underestimate the variability of carbon stocks at the landscape level, in particular for young secondary forests where
small trees dominate.
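The underestimation percentages discussed above correspond to a relative-bias calculation between predicted and observed biomass. A minimal sketch of that calculation, using made-up numbers rather than the study's data, would be:

```python
def relative_bias_pct(predicted, observed):
    """Mean relative bias of predictions vs. observations, in percent.

    Negative values indicate systematic underestimation.
    """
    pairs = list(zip(predicted, observed))
    return 100.0 * sum((p - o) / o for p, o in pairs) / len(pairs)

# Hypothetical example: a model that predicts ~45% below the observed biomass (kg)
obs = [22.0, 30.0, 41.0]
pred = [12.1, 16.5, 22.55]
print(round(relative_bias_pct(pred, obs), 1))  # -45.0
```

A consistently negative bias of this size is what flags an equation as unsuitable for a given tree-size class.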
We developed allometric equations to estimate biomass by tree component (stem, branch, and foliage) and total tree biomass for 22 small-size (≤ 10 cm DBH) tree species in secondary forests of the
Yucatan peninsula, Mexico. We confirmed the hypothesis that including total height and wood density in allometric models improves equation fit and biomass estimation compared with models that
include a single predictor variable such as DBH.
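Multi-predictor allometric equations of this kind are commonly fitted in log-linear form, ln(B) = b0 + b1·ln(DBH) + b2·ln(H) + b3·ln(ρ). The sketch below illustrates how such an equation is applied once fitted; the coefficient values are placeholders for illustration only, not the values estimated in this study.

```python
import math

# Placeholder coefficients for a hypothetical log-linear allometric model
# (NOT the fitted values from this study).
B0, B1, B2, B3 = -2.5, 2.1, 0.6, 0.9

def biomass_kg(dbh_cm, height_m, wood_density):
    """Predict total above-ground biomass (kg) from DBH (cm), total height (m),
    and wood density (g cm^-3) using ln(B) = b0 + b1*ln(DBH) + b2*ln(H) + b3*ln(rho)."""
    ln_b = (B0
            + B1 * math.log(dbh_cm)
            + B2 * math.log(height_m)
            + B3 * math.log(wood_density))
    return math.exp(ln_b)

# A small tree: 8 cm DBH, 7 m tall, high wood density (0.64 g cm^-3)
print(round(biomass_kg(8.0, 7.0, 0.64), 1))
```

With all three predictors in the model, trees of equal DBH but different height or wood density receive different biomass estimates, which is precisely what a DBH-only equation cannot capture.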
The equations used in this study yielded more accurate biomass estimations than those developed for other tropical regions. These results support the hypothesis that local equations better
explain the biomass variability of a region when both total height and wood density are included in the fitting process, since these parameters are highly correlated with the growth type and wood
physical properties of trees.
The equations developed in this study can be used to reduce the uncertainty of biomass and carbon stock estimations in secondary forests of the Yucatan peninsula, where a large proportion
of the community is composed of small-size trees and sites are constantly affected by natural and anthropogenic disturbances. Managing tropical secondary forests for climate change mitigation
requires estimating biomass and carbon stocks with a low level of uncertainty; these equations can therefore be a useful tool in the context of climate change for projects implemented under
REDD+ in Mexico and in similar regions of developing countries.
This study was financially supported by the Sustainable Landscapes Program of the Agency for International Development of the United States of America, through the USDA Forest Service International
Programs Office and the Northern Research Station (Agreement No. 12-IJ-11242306-033).
Alvarez-González JG, Soalleiro RR, Alboreca AR (2007). Resolución de problemas del ajuste simultáneo de sistemas de ecuaciones: heterocedasticidad y variables dependientes con distinto número de
observaciones [Solving problems in the simultaneous fitting of equation systems: heteroscedasticity and dependent variables with different numbers of observations]. Cuaderno de la Sociedad
Española de Ciencias Forestales 23 (23): 35-42. [in Spanish]
Anderson-Teixeira KJ, Davies SJ, Bennett AC, Gonzalez-Akre EB, Muller-Landau HC, Joseph Wright S, Abu Salim K, Almeyda Zambrano AM, Alonso A, Baltzer JL (2015). CTFS-ForestGEO: a worldwide network
monitoring forests in an era of global change. Global Change Biology 21 (2): 528-549.
Baker TR, Philips OL, Malhi Y, Almeidas S, Arroyo L, Di Fiore A, Erwin T, Killeen TJ, Laurance SG, Laurance WF, Lewis SL, Lloyd J, Monteagudos A, Neill DA, Patiño S, Pitman NCA, Silva JNM, Martínez
RV (2004). Variation in wood density determines spatial patterns in Amazonian forest biomass. Global Change Biology 10: 545-562.
Bastin J-F, Fayolle A, Tarelkin Y, Van Den Bulcke J, De Haulleville T, Mortier F, Beeckman H, Van Acker J, Serckx A, Bogaert J (2015). Wood specific gravity variations and biomass of Central African
tree species: the simple choice of the outer wood. PloS One 10 (11): 1-16.
Brandeis TJ, Delaney M, Parresol BR, Royer L (2006). Development of equations for predicting Puerto Rican subtropical dry forest biomass and volume. Forest Ecology and Management 233 (1): 133-142.
Brown S (1997). Estimating biomass and biomass change of tropical forests: a primer. FAO Forestry Paper 134, Food and Agriculture Organization of the United Nations, Rome, Italy, pp. 55.
Cairns MA, Olmsted I, Granados J, Argaez J (2003). Composition and aboveground tree biomass of a dry semi-evergreen forest on Mexico’s Yucatán peninsula. Forest Ecology and Management 186 (1-3):
Chaturvedi R, Raghubanshi A, Singh J (2012). Biomass estimation of dry tropical woody species at juvenile stage. The Scientific World Journal 2012: 1-5.
Chave J, Condit R, Lao S, Caspersen JP, Foster RB, Hubbell SP (2003). Spatial and temporal variation of biomass in a tropical forest: results from a large census plot in Panama. Journal of Ecology 91
(2): 240-252.
Chave J, Andalo C, Brown S, Cairns MA, Chambers JQ, Eamus D, Fölster H, Fromard F, Higuchi N, Kira T, Lescure J-P, Nelson BW, Ogawa H, Puig H, Riéra B, Yamakura T (2005). Tree allometry and improved
estimation of carbon stocks and balance in tropical forests. Oecologia 145 (1): 87-99.
Chave J, Muller-Landau HC, Baker TR, Easdale TA, Ter Steege H, Webb CO (2006). Regional and phylogenetic variation of wood density across 2456 neotropical tree species. Ecological Applications 16 (6):
Chave J, Coomes D, Jansen S, Lewis SL, Swenson NG, Zanne AE (2009). Towards a worldwide wood economics spectrum. Ecology Letters 12 (4): 351-366.
Chave J, Réjou-Méchain M, Búrquez A, Chidumayo E, Colgan MS, Delitti WB, Duque A, Eid T, Fearnside PM, Goodman RC (2014). Improved allometric models to estimate the aboveground biomass of tropical
trees. Global Change Biology 20 (10): 3177-3190.
Curtis JT, McIntosh RP (1951). An upland forest continuum in the prairie-forest border region of Wisconsin. Ecology 32 (3): 476-496.
Djomo AN, Ibrahima A, Saborowski J, Gravenhorst G (2010). Allometric equations for biomass estimations in Cameroon and pan moist tropical equations including biomass data from Africa. Forest Ecology
and Management 260 (10): 1873-1885.
Douterlungne D, Herrera-Gorocica AM, Ferguson BG, Siddique I, Soto-Pinto L (2013). Allometric equations used to estimate biomass and carbon in four Neotropical tree species with restoration
potential. Agrociencia 47 (4): 385-397.
Duncanson L, Rourke O, Dubayah R (2015). Small sample sizes yield biased allometric equations in temperate forests. Scientific Reports 5: 1-13.
Dupuy JM, Hernández-Stefanoni JL, Hernández-Juárez RA, Tetetla-Rangel E, López-Martínez JO, Leyequién-Abarca E, Tun-Dzul FJ, May-Pat F (2012). Patterns and correlates of tropical dry forest structure
and composition in a highly replicated chronosequence in Yucatán, México. Biotropica 44 (2): 151-162.
Ellis EA, Porter-Bolland L (2008). Is community-based forest management more effective than protected areas? A comparison of land use/land cover change in two neighboring study areas of the Central
Yucatán peninsula, México. Forest Ecology and Management 256 (11): 1971-1983.
Fayolle A, Doucet J-L, Gillet J-F, Bourland N, Lejeune P (2013). Tree allometry in Central Africa: testing the validity of pantropical multi-species allometric equations for estimating biomass and
carbon stocks. Forest Ecology and Management 305: 29-37.
Feldpausch TR, Lloyd J, Lewis SL, Brienen RJ, Gloor M, Monteagudo Mendoza A, Lopez-Gonzalez G, Banin L, Abu Salim K, Affum-Baffoe K (2012). Tree height integrated into pantropical forest biomass
estimates. Biogeosciences: 3381-3403.
Flynn DF, Uriarte M, Crk T, Pascarella JB, Zimmerman JK, Aide TM, Caraballo OMA (2010). Hurricane disturbance alters secondary forest recovery in Puerto Rico. Biotropica 42 (2): 149-157.
García ADME (2003). Distribución de la precipitación en la República Mexicana [Rainfall distribution in the Mexican Republic]. Investigaciones Geograficas 1 (50): 67-76. [in Spanish]
Goussanou CA, Guendehou S, Assogbadjo AE, Kaire M, Sinsin B, Cuni-Sanchez A (2016). Specific and generic stem biomass and volume models of tree species in a West African tropical semi-deciduous
forest. Silva Fennica 50 (2): 1-22.
Gutiérrez-Báez C, Zamora-Crescencio P, Puc-Garrido EC (2013). Estructura y composición florística de la selva mediana subperenifolia de Hampolol, Campeche, México [Structure and floristic composition
of the mid-stature semi-evergreen forest of Hampolol, Campeche, Mexico]. Foresta Veracruzana 15 (1): 1-8. [in Spanish]
Hughes RF, Kauffman JB, Jaramillo VJ (1999). Biomass, carbon, and nutrient dynamics of secondary forests in a humid tropical region of México. Ecology 80 (6): 1892-1907.
Hunter M, Keller M, Victoria D, Morton D (2013). Tree height and tropical forest biomass estimation. Biogeosciences 10 (12): 8385-8399.
Johnson JB, Omland KS (2004). Model selection in ecology and evolution. Trends in Ecology and Evolution 19 (2): 101-108.
Jung Y, Hu J (2015). A K-fold averaging cross-validation procedure. Journal of Nonparametric Statistics 27 (2): 167-179.
Kenzo T, Furutani R, Hattori D, Kendawang JJ, Tanaka S, Sakurai K, Ninomiya I (2009). Allometric equations for accurate estimation of above-ground biomass in logged-over tropical rainforests in
Sarawak, Malaysia. Journal of Forest Research 14: 365-372.
Ketterings QM, Coe R, Van Noordwijk M, Palm CA (2001). Reducing uncertainty in the use of allometric biomass equations for predicting above-ground tree biomass in mixed secondary forests. Forest
Ecology and Management 146 (1-3): 199-209.
Kuyah S, Dietz J, Muthuri C, Jamnadass R, Mwangi P, Coe R, Neufeldt H (2012). Allometric equations for estimating biomass in agricultural landscapes: II. Belowground biomass. Agriculture, Ecosystems
and Environment 158 (2): 225-234.
Lawrence D (2005). Regional-scale variation in litter production and seasonality in tropical dry forests of Southern Mexico. Biotropica 37: 561-570.
Lima AJN, Suwa R, De Mello Ribeiro GHP, Kajimoto T, Dos Santos J, Da Silva RP, De Souza CAS, De Barros PC, Noguchi H, Ishizuka M, Higuchi N (2012). Allometric models for estimating above- and
below-ground biomass in Amazonian forests at São Gabriel da Cachoeira in the upper Rio Negro, Brazil. Forest Ecology and Management 277: 163-172.
Mascaro J, Perfecto I, Barros O, Boucher DH, De La Cerda IG, Ruiz J, Vandermeer J (2005). Aboveground biomass accumulation in a tropical wet forest in Nicaragua following a catastrophic hurricane
disturbance. Biotropica: The Journal of Biology and Conservation 37 (4): 600-608.
Memiaghe HR, Lutz JA, Korte L, Alonso A, Kenfack D (2016). Ecological importance of small-diameter trees to the structure, diversity and biomass of a tropical evergreen forest at Rabi, Gabon. PloS
One 11 (5): 1-15.
Miller K, Chang E, Johnson N (2001). En busca de un enfoque común para el Corredor Biológico Mesoamericano [In search of a common approach for the Mesoamerican Biological Corridor]. World Resources
Institute, Washington, DC, USA, pp. 49. [in Spanish]
Mugasha WA, Mwakalukwa EE, Luoga E, Malimbwi RE, Zahabu E, Silayo DS, Sola G, Crete P, Henry M, Kashindye A (2016). Allometric models for estimating tree volume and aboveground biomass in lowland
forests of Tanzania. International Journal of Forestry Research 2016: 1-13.
Nam VT, Van Kuijk M, Anten NP (2016). Allometric equations for aboveground and belowground biomass estimations in an evergreen forest in Vietnam. PloS One 11 (6): 1-19.
Ngomanda A, Obiang NLE, Lebamba J, Mavouroulou QM, Gomat H, Mankou GS, Loumeto J, Iponga DM, Ditsouga FK, Koumba RZ (2014). Site-specific versus pantropical allometric equations: which option to
estimate the biomass of a moist central African forest? Forest Ecology and Management 312: 1-9.
Parresol BR (2001). Additivity of nonlinear biomass equations. Canadian Journal of Forest Research 31 (5): 865-878.
Pennington T, Sarukhán J (2005). Arboles tropicales de México. Manual para la identificación de las principales especies [Tropical trees of Mexico. Identification manual of the main species] (3rd
edn). Universidad Nacional Autónoma de México y Fondo de Cultura Económica, México DF, México, pp. 523. [in Spanish]
Peña-Claros M (2003). Changes in forest structure and species composition during secondary forest succession in the Bolivian Amazon. Biotropica 35 (4): 450-461.
Picard N, Saint-André L, Henry M (2012). Manual de construcción de ecuaciones alométricas para estimar el volumen y la biomasa de los árboles: del trabajo de campo a la predicción [Manual for
building tree volume and biomass allometric equations: from field measurement to prediction]. Food and Agriculture Organization of the United Nations (FAO) and Centre de Coopération Internationale en
Recherche Agronomique pour le Développement (CIRAD), Rome, Italy, pp. 176-177. [in Spanish]
Pohlert T, Pohlert MT (2018). Package “PMCMR”. Web site.
Poorter L, Bongers F, Aide TM, Zambrano AMA, Balvanera P, Becknell JM, Boukili V, Brancalion PH, Broadbent EN, Chazdon RL (2016). Biomass resilience of Neotropical secondary forests. Nature 530
(7589): 211-214.
R Core Team (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.
Rojas-Garcia F, De Jong B, Martinez-Zurimendi P, Paz-Pellat F (2015). Database of 478 allometric equations to estimate biomass for Mexican trees and forests. Annals of Forest Science 72 (6): 835-864.
Sanquetta CR, Behling A, Corte APD, Péllico Netto S, Schikowski AB, Do Amaral MK (2015). Simultaneous estimation as alternative to independent modeling of tree biomass. Annals of Forest Science 72
(8): 1099-1112.
SAS Institute Inc. (2011). Statistical Analysis System, SAS/ETS ver. 9.3. User’s Guide, Cary, NC, USA.
Sileshi GW (2014). A critical review of forest biomass estimation models, common mistakes and corrective measures. Forest Ecology and Management 329: 237-254.
Van Breugel M, Ransijn J, Craven D, Bongers F, Hall JS (2011). Estimating carbon stock in secondary forests: decisions and uncertainties associated with allometric biomass models. Forest Ecology and
Management 262 (8): 1648-1657.
Vargas-Larreta B, López-Sánchez CA, Corral-Rivas JJ, López-Martínez JO, Aguirre-Calderón CG, Alvarez-González JG (2017). Allometric equations for estimating biomass and carbon stocks in the temperate
forests of North-Western México. Forests 8 (8): 1-20.
Vincent JB, Henning B, Saulei S, Sosanika G, Weiblen GD (2015). Forest carbon in lowland Papua New Guinea: local variation and the importance of small trees. Austral Ecology 40 (2): 151-159.
Zamora CP, García Gil G, Flores Guido JS, Ortiz JJ (2008). Estructura y composición florística de la selva mediana subcaducifolia en el sur del estado de Yucatán, México [Structure and floristic
composition of the mid-stature semi-deciduous forest in the southern state of Yucatan, Mexico]. Polibotanica 26: 39-66. [in Spanish]
Zhang X, Cao QV, Xiang C, Duan A, Zhang J (2017). Predicting total and component biomass of Chinese fir using a forecast combination method. iForest - Biogeosciences and Forestry 10: 687-691.
Authors’ Info
Authors’ Affiliation
Xavier García-Cuevas 0000-0002-2481-6704
Instituto Nacional de Investigaciones Forestales, Agrícolas y Pecuarias, Campo Experimental Chetumal, Km. 25, Carretera Chetumal-Bacalar, C.P. 77930, Xul-ha, Quintana Roo (México)
Paper Info
Puc-Kauil R, Ángeles-Pérez G, Valdéz-Lazalde JR, Reyes-Hernández VJ, Dupuy-Rada JM, Schneider L, Pérez-Rodríguez P, García-Cuevas X (2020). Allometric equations to estimate above-ground biomass of
small-diameter mixed tree species in secondary tropical forests. iForest 13: 165-174. - doi: 10.3832/ifor3167-013
Academic Editor
Rodolfo Picchio
Paper history
Received: Jun 12, 2019
Accepted: Feb 13, 2020
First online: May 02, 2020
Publication Date: Jun 30, 2020
Publication Time: 2.63 months
Copyright Information
© SISEF - The Italian Society of Silviculture and Forest Ecology 2020
Open Access
This article is distributed under the terms of the Creative Commons Attribution-Non Commercial 4.0 International (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use,
distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes
were made.
| {"url":"https://iforest.sisef.org/contents/?id=ifor3167-013","timestamp":"2024-11-10T11:59:14Z","content_type":"application/xhtml+xml","content_length":"258489","record_id":"<urn:uuid:e1500208-3c90-433d-ad21-e563112e0144>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00589.warc.gz"}
ECONOMICS (CBSE/UGC NET)
Question
A firm increases its price from $8 to $12 and sees demand for the product fall by 20%. What would the price elasticity of demand be for this product?
Detailed explanation-1: -The price elasticity of demand is calculated as the percentage change in quantity demanded divided by the percentage change in price.
Detailed explanation-2: -Here the price rises from $8 to $12, a 50% increase ((12 - 8) / 8 = 0.50), while the quantity demanded falls by 20%. The price elasticity of demand is therefore
-20% / 50% = -0.4: for every one percent increase in price, the quantity demanded declines by 0.4%.
Detailed explanation-3: -The formula looks like this: Price Elasticity of Demand = % change in quantity demanded / % change in price.
Detailed explanation-4: -A price elasticity of -0.4 (absolute value less than 1) implies that the demand is inelastic.
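The arithmetic in the explanations above can be checked with a few lines of code (the function name here is ours, for illustration):

```python
def price_elasticity(p0, p1, pct_change_qty):
    """Point price elasticity of demand: %change in quantity / %change in price,
    with the percentage price change measured from the initial price p0."""
    pct_change_price = (p1 - p0) / p0 * 100.0
    return pct_change_qty / pct_change_price

# Price rises from $8 to $12 (a 50% increase) while quantity demanded falls 20%.
ped = price_elasticity(8, 12, -20)
print(ped)  # -0.4: |PED| < 1, so demand is inelastic
```

Note that using the midpoint (arc) method instead, with the average price as the base, would give a slightly different value; the initial-price base is what the formula in the explanations uses.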
There is 1 question to complete. | {"url":"https://education-academia.github.io/econ/economics/elasticity-of-demand/a-firm-increases-its-price-from-8-to-12-and-sees-demand-for-the-product-fall-by-20-what-would-the-price-elasticity-of-demand-be-for-this-product.html","timestamp":"2024-11-05T18:51:21Z","content_type":"text/html","content_length":"25235","record_id":"<urn:uuid:43dbc499-2b57-4789-901e-37695de673d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00889.warc.gz"} |
Domains must begin with a letter or a number and be between and characters in length.
:domain is not available
:tld domains are currently unavailable.
:domain is available.
We detected the domain you entered is an international domain name. In order to continue, please select your desired domain language.
Please select the language of the domain you wish to register. | {"url":"https://secure.choose-hosting.com/clients/cart.php?a=add&domain=register&language=romanian","timestamp":"2024-11-10T06:29:10Z","content_type":"text/html","content_length":"942469","record_id":"<urn:uuid:57a59de9-ad46-4082-acd5-8ddaf558f5ea>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00874.warc.gz"} |