Cube of a Number | Lexique de mathématique
Cube of a Number
The product obtained when a number is used as a factor three times; in other words, the third power of the number.
A whole number that is the cube of another whole number is sometimes called a perfect cube.
The cubes of the first 12 whole numbers are: 0, 1, 8, 27, 64, 125, 216, 343, 512, 729, 1000, 1331.
All of these cubes, except for 0, are cubic numbers.
The cube of a number n is written as n³.
The cube of 5 is 125, or: 5³ = 5 × 5 × 5 = 125.
So, 125 is a perfect cube.
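To see the same facts numerically, here is a small Python sketch (an illustration added alongside this entry, not part of the original definition) that lists the cubes of the first 12 whole numbers and checks whether a number is a perfect cube:

```python
def cube(n):
    """Return the cube of n, i.e. n raised to the third power (n * n * n)."""
    return n ** 3

def is_perfect_cube(m):
    """Check whether the whole number m is the cube of some whole number."""
    r = round(m ** (1 / 3))
    # Test neighbours of the rounded cube root to guard against
    # floating-point rounding error.
    return any(k ** 3 == m for k in (r - 1, r, r + 1))

# The cubes of the first 12 whole numbers: 0, 1, 8, ..., 1331
print([cube(n) for n in range(12)])
print(is_perfect_cube(125))   # True, because 5 x 5 x 5 = 125
print(is_perfect_cube(100))   # False, 100 is not a perfect cube
```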
Historical Note
The term cube originally referred to a die with six faces, such as the type used in games and that can be rolled because its edges are rounded.
The notation c³ was used by Greek geometricians to refer to the volume of a cube with sides c. The mathematicians who later translated and adapted the works of the Greek geometricians kept this
notation to refer to the value of a quantity raised to the third power.
This explains why the terms “square” and “cube” are included in the terminology used in arithmetic to describe certain numbers. | {"url":"https://lexique.netmath.ca/en/cube-of-a-number/","timestamp":"2024-11-07T19:24:55Z","content_type":"text/html","content_length":"64161","record_id":"<urn:uuid:1de40496-8cdf-47e1-a8e3-7f57e1a0a928>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00487.warc.gz"} |
Pareto Chart In Tableau
For Corporate training and Online Training, contact info@instrovate.com. During my corporate Tableau training sessions in Gurgaon, I often get questions about Pareto charts.
The Pareto principle is the 80-20 rule. It states that roughly 80% of results come from 20% of the causes. For example, 80% of profits come from 20% of the products.
In Tableau, you can apply a table calculation to the Profit data to create a chart that shows the percentage of total profit that comes from the top sub-categories. You can also see which sub-categories account for 80% of total profit.
We will follow the steps to create a Pareto chart in Tableau on the Sample Superstore data set. Sample Superstore is mock-up data for a fictional store, so it may or may not follow the Pareto principle.
Let's do it with the Sample Superstore data set that ships with Tableau 10, using Tableau Desktop 10.0 to validate whether the data set follows the Pareto principle or not. So the question we are seeking to answer here is: "Is 80% of the profit contributed by 20% of the product sub-categories?"
Step 1: Drag Sub-Category onto Columns and Profit onto Rows
Step 2: Sort in Descending Order on the basis of profit
Step 3: Add a Table Calculation (Running Total) of Sum of Profit
What we have so far is the "Running Total of Sum of Profit": the running total climbs until it reaches 100% of all the profit. To see where 80% of the profit is reached, we add a secondary table calculation on SUM(Profit) and change it to a percentage.
Step 4: Change the Running Total of SUM(Profit) from an actual value to a percentage. For this we add another table calculation to it, as below.
Step 5: Analyze the chart that you created. The Pareto chart is ready, but the analysis remains.
As we can see in the screen above, 80% of the profit (on the Y-axis) corresponds to six product sub-categories: Copiers, Phones, Accessories, Paper, Binders and Chairs.
So out of 17 product sub-categories, 6 sub-categories account for 80% of the profit, and 6/17 is roughly 35%. So we can say that our data does not follow the Pareto principle, as 80% of the profit is contributed by about 35% of the product sub-categories, not 20%.
From an understanding perspective the Pareto chart is complete; however, if we want the X-axis to show a percentage instead of the product sub-category names, we can do so, which lets us draw reference lines at 80% on the Y-axis and 20% on the X-axis to clearly see the 80-20 rule.
To calculate the percentage on the X-axis, we do the same thing we did manually (6 / 17), i.e. index() / size(). We create a calculated field called ParetoKPI, and since we want it evaluated per sub-category, we need Sub-Category on the Marks shelf and compute ParetoKPI using "Sub-Category".
Let's do it in a few steps, as below.
Step 6: To get the percentage on the X-axis, create a calculated field called ParetoKPI.
Step 7: Then drag Sub-Category from Columns to the Marks shelf, place ParetoKPI on Columns, and set it to Compute Using "Sub-Category".
Step 8: In the Marks shelf, change the mark type from Automatic to Line, and you get the chart below.
Now we have both the Y-axis and the X-axis in percent; we can format the X-axis to display percentages.
Now we can drop 80% and 20% reference lines on the Y-axis and X-axis.
As we can see, 20% of the product sub-categories lead to nearly 55% of the profit. Since Sample Superstore is mock-up data, the Pareto principle is not satisfied here. Even with real data, whether the Pareto principle is satisfied or not depends on how the data is distributed.
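For readers who want to check the arithmetic outside Tableau, the sketch below reproduces the same running-total logic in Python. The profit figures and sub-category names are made-up placeholders, not actual Sample Superstore values; the position counter divided by the number of sub-categories plays the role of index() / size().

```python
# Hypothetical profit per sub-category (not actual Sample Superstore figures).
profit = {
    "Copiers": 55000, "Phones": 45000, "Accessories": 42000,
    "Paper": 34000, "Binders": 30000, "Chairs": 26000,
    "Storage": 21000, "Appliances": 18000, "Furnishings": 13000,
    "Envelopes": 7000, "Art": 6500, "Labels": 5500,
    "Machines": 3500, "Fasteners": 950, "Supplies": -1200,
    "Bookcases": -3500, "Tables": -17000,
}

# Sort sub-categories by profit, descending (Step 2 above).
items = sorted(profit.items(), key=lambda kv: kv[1], reverse=True)
total = sum(p for _, p in items)

running = 0.0
for i, (name, p) in enumerate(items, start=1):
    running += p
    pct_of_subcats = i / len(items)      # like index() / size() on the X-axis
    pct_of_profit = running / total      # running total as % of total profit
    print(f"{name:12s} {pct_of_subcats:6.0%} of sub-categories "
          f"-> {pct_of_profit:6.0%} of profit")
```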
For Corporate training and Online Training contact at info@instrovate.com | {"url":"https://technicaljockey.com/tableau-training-consultant-pareto-chart-in-tableau-10/","timestamp":"2024-11-07T03:22:44Z","content_type":"text/html","content_length":"206500","record_id":"<urn:uuid:e728fbe7-7d45-485d-9b23-6d89ffd7ae80>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00382.warc.gz"}
Francesco Ferrari, Sen Niu, Juraj Hasik, Yasir Iqbal, Didier Poilblanc, Federico Becca
SciPost Phys. 14, 139 (2023) · published 1 June 2023
Motivated by recent experiments on Cs$_2$Cu$_3$SnF$_{12}$ and YCu$_{3}$(OH)$_{6}$Cl$_{3}$, we consider the ${S=1/2}$ Heisenberg model on the kagome lattice with nearest-neighbor super-exchange $J$
and (out-of-plane) Dzyaloshinskii-Moriya interaction $J_D$, which favors (in-plane) ${{\bf Q}=(0,0)}$ magnetic order. By using both variational Monte Carlo and tensor-network approaches, we show that
the ground state develops a finite magnetization for $J_D/J \gtrsim 0.03 \mathrm{-} 0.04$; instead, for smaller values of the Dzyaloshinskii-Moriya interaction, the ground state has no magnetic order
and, according to the fermionic wave function, develops a gap in the spinon spectrum, which vanishes for $J_D \to 0$. The small value of $J_D/J$ for the onset of magnetic order is particularly
relevant for the interpretation of low-temperature behaviors of kagome antiferromagnets, including ZnCu$_{3}$(OH)$_{6}$Cl$_{2}$. For this reason, we assess the spin dynamical structure factor and the
corresponding low-energy spectrum, by using the variational Monte Carlo technique. The existence of a continuum of excitations above the magnon modes is observed within the magnetically ordered
phase, with a broad peak above the lowest-energy magnons, similarly to what has been detected by inelastic neutron scattering on Cs$_{2}$Cu$_{3}$SnF$_{12}$. | {"url":"https://www.scipost.org/contributor/1349","timestamp":"2024-11-12T06:47:16Z","content_type":"text/html","content_length":"43388","record_id":"<urn:uuid:ded193cd-38c9-4b89-ab1c-5a4c2c585b24>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00434.warc.gz"} |
Interpreting Fractions as Division (Examples) - BYJUS
Example 1: Use a model to represent \( 1\div~4\).
\( 1\div~4\) is the same as \( \frac{1}{4}\). To represent this using a tape diagram, we need to divide 1 whole into 4 equal parts.
Each division of the whole represents the fraction \( \frac{1}{4}\). So, \( 1\div~4 = \frac{1}{4}\)
Example 2: Divide \( 4\div~5\) and use a model to represent the result.
As we have learned already, \( 4\div~5\) is the same as \( \frac{4}{5}\). We can represent \( \frac{4}{5}\) on a tape diagram by taking "4 wholes" and dividing each of those wholes into 5 divisions.
When we divide each whole into 5 equal parts, each part is \( \frac{1}{5}\) of a whole. And if we take 4 such parts, one from each whole, we obtain \(4\div~5 = \frac{4}{5}\).
Example 3: A pumpkin pie and an apple pie are shared among 6 people. If both pies are of the same size, use an area model to find the fraction of pie shared with each person.
Since the two pies are of the same size, and are shared equally among six people, each person will get an equal share of both pies.
Here “2 wholes” are divided into 6 divisions. Each person gets \(\frac{1}{6}\) of each whole. That means each person gets \(\frac{1}{6}\) of the share of each pie.
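If you want to verify these examples with a quick computation, Python's built-in fractions module (used here purely as an illustration, not part of the lesson) treats "a divided by b" and the fraction a/b as the same number:

```python
from fractions import Fraction

# Dividing a by b gives the same number as the fraction a/b.
print(Fraction(1, 4), 1 / 4)      # 1/4 and 0.25  (Example 1)
print(Fraction(4, 5), 4 / 5)      # 4/5 and 0.8   (Example 2)

# Example 3: each person's share of one pie is 1/6, so across the
# two same-sized pies each person receives 2 x 1/6 = 1/3 of a pie.
print(2 * Fraction(1, 6))         # 1/3
```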
Example 4: Suppose you are sharing 3 different chocolate bars with 2 of your friends. Use an area model to find the fraction of chocolate bars that the three of you receive.
You are sharing 3 chocolate bars with 2 of your friends. So, “3 wholes” are divided into 3 divisions. The three divisions stand for the two friends and yourself. The tape diagram looks like this:
Here, 3 wholes are divided into 3 divisions. Each part gets \(\frac{1}{3}\) of the whole. So, each person gets \(\frac{1}{3}\) of each chocolate bar. | {"url":"https://byjus.com/us/math/interpret-fractions-as-division/","timestamp":"2024-11-03T13:44:44Z","content_type":"text/html","content_length":"163078","record_id":"<urn:uuid:d7ef57e8-65c0-4ca0-9484-77b8a289f14e>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00016.warc.gz"} |
The science of nuclear energy
1.3.5 Energy from fission
Nuclear power is based on the energy that is released each time a uranium nucleus undergoes fission.
Energy and mass
In nuclear reactions, the energy released can be understood from Einstein’s most famous equation:
E = mc^2
Here E is energy; m is mass and c the speed of light.
This shows that there is a clear equivalence between mass (m) and energy (E). The speed of light is shown as c and it has a large value. Light travels at 300 million ms^−1. In fact,
the speed of light is squared in the equation giving an even bigger number, 90 million billion! (The units of this would be (ms^−1)^2 but we don’t need to worry about this.) Suffice to say, to work
out the amount of energy bound up in a mass we multiply the mass by a huge number – a small mass converts into a very large energy.
This equation holds true if you can convert mass into pure energy – although this is easier said than done! One place that this conversion takes place is within the nucleus, so if we can master
nuclear reactions we can tap into this energy.
Binding energy
Imagine you had a marble and knew its mass was 5g. If you then received a bag of 20 identical marbles and were asked the total mass of these marbles you may reason that it is 20 × 5g = 100g. And you
would be correct!
Mass of 20 marbles = 20 × mass of one marble
Now imagine you had a nucleon and knew it had a certain mass m (protons and neutrons have pretty much the same mass) and you were then asked the mass of a nucleus containing 20 nucleons. You may
reason, as before, that its mass would be 20m. But in fact the mass of the nucleus would be less than this – the mass of the nucleus is less than the mass of the constituent parts! This is
represented in Figure 16.
Mass of a nucleus containing 20 nucleons < 20 × mass of one nucleon
The difference in mass is called the mass defect.
Mass defect = Mass of separate nucleons − Mass of nucleus
Where has the missing mass gone? It has been converted into energy – this is called the binding energy of the nucleus. The binding energy is associated with the forces that bind the nucleus together.
From E = mc^2
Binding energy = mass defect × c^2
Different nuclei have differing amounts of binding energy. In fission, a large nucleus is split into smaller parts. The total mass of these parts may be lower than that of the initial large nucleus. This difference in mass is due to the difference in binding energy between the nuclei and is released as energy.
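As a rough illustration of how a small mass corresponds to a large energy, the short Python sketch below simply evaluates E = mc^2 using the rounded value of c quoted above; the one-gram mass defect is an arbitrary example, not a value taken from any real nucleus.

```python
c = 3.0e8   # speed of light, metres per second (the rounded value used above)

def energy_from_mass(mass_kg):
    """E = m * c**2: the energy (in joules) equivalent to a mass in kilograms."""
    return mass_kg * c ** 2

# A mass defect of just one gram (0.001 kg):
print(energy_from_mass(0.001))   # 9e13 joules - a very large energy from a tiny mass
```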
Energy from fission
The mass of the fission products and the three neutrons is less than that of the U-236 nucleus (formed when U-235 absorbs a neutron), although the number of nucleons is the same. This missing mass is released as energy – generally carried away by the fission products, the neutrons and the gamma radiation.
The energy released from one fission is small but with many nuclei, the energy adds up to far exceed the amount from the equivalent chemical reactions. The fission of 1kg uranium provides a million
times more energy than the burning of 1kg of coal. Fission can provide a great deal of heat energy.
It was this knowledge, before the ability to harness this energy had been developed, that led nuclear physicist Leo Szilard, in 1934, to speculate about planned experiments that, if successful, would
lead to:
Power production … on such a large scale and probably with so little cost that a sort of industrial revolution could be expected; it appears doubtful for instance whether coal mining or oil
production could survive after a couple of years.
(Quoted in Weart and Szilard, 1978, p. 39)
Next week, you will be learning more about the use of fission as an energy source to generate electricity.
In the next section is a video outlining the development of working nuclear reactors. | {"url":"https://www.open.edu/openlearn/mod/oucontent/view.php?id=26801§ion=3.5","timestamp":"2024-11-03T02:44:03Z","content_type":"text/html","content_length":"141129","record_id":"<urn:uuid:fb57b2bd-b2ef-4e89-b642-a81c9097b24f>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00383.warc.gz"} |
Darcy's Law Calculator
Make Selections:
Pressure Loss, ΔP:
Bed Length, L:
Upstream Area, A[up]:
Mass Flow Rate, M:
Volumetric Flow, Q:
Upstream Velocity, V[up]:
Bed Velocity, V[bed]: m/s
Fluid Density, ρ:
Fluid Viscosity, μ:
Grain Diameter, d:
Bed Porosity, n:
Packing Angle, θ:
Bed Permeability, k:
Minor Loss Coeff, K[m]:
Idelchik Reynolds Number:
Darcy Reynolds Number:
Darcy's Law Calculator Units: barrel=42 U.S. gallons, cm=centimeter, cP=centipoise, cSt=centistoke, darcy=permeability unit (1.01325x10^12 darcy=1 m^2), inch H[2]O=inch water at 60F, inch Hg=inch
mercury at 60F, ft=foot, ft fluid=ft of whatever fluid is flowing through the bed, g=gram, gal=U.S. gallon, hr=hour, kg=kilogram, kg/m^2-s=kg/s of flow through 1 m^2 of upstream area, lb=pound, lb/ft
^2-s=lb/s of flow through 1 ft^2 of upstream area, lb(f)=pound (force), m=meter, m fluid=meters of whatever fluid is flowing through the bed, mbar=millibar, mm=millimeter, mm H[2]O=mm water at 4C,
min=minute, N=Newton, psi=lb(f)/in^2, s=second
Chemical engineers know this as a packed or porous bed. Groundwater hydrologists call this a permeameter or flow through porous media. We provide two methods - Idelchik and Darcy's law - since
hydrogeologists and civil engineers are usually more familiar with Darcy's law while chemical and mechanical engineers may be more familiar with the Idelchik approach. The Idelchik method is valid
for laminar or turbulent flow through the bed while Darcy's law is valid only for laminar flow (Darcy Reynolds Number<10). Another difference is that only Darcy's law requires entering the
permeability. Permeability is used rather than hydraulic conductivity since permeability is a soil property independent of fluid while hydraulic conductivity is defined for water only. If the
Idelchik method is selected, permeability will be back-calculated (based on Darcy's law) in case you want to compare a bed material with a soil type. We also compute a minor loss coefficient (K[m])
for flow through the bed in case the bed is part of a longer pipeline that you are modeling. Any liquid or gas can be used with either method so long as the fluid's density and viscosity are known.
Several fluids have properties built into the program.
Darcy's Law and Idelchik Calculator Equations
Idelchik Method (Fried and Idelchik, 1989):
Valid for any porosity and Reynolds number. However, if porosity is computed from packing angle, the packing angle equation is only valid for packing angles between 60^o and 90^o (resulting in
computed porosity between 0.2595 and 0.4764).
Darcy's Law Method (e.g. Freeze and Cherry, 1979; Bear, 1979):
Valid for Darcy Reynolds Number < 10 and any porosity. Porosity can be computed from packing angle from Idelchik's equation above for packing angles between 60^o and 90^o.
For Both Methods:
For our calculation, all equations were solved analytically for the desired variables, except when computing velocity and flowrate using the Idelchik method. This required a numerical solution based
on a cubic solver such as presented in Rao (1985).
Variables (Units: F=Force, L=Length, M=Mass, T=Time)
A[up]=Upstream area (column cross-sectional area) [L^2].
A[bed]=Bed area (column cross-sectional area minus area of grains) [L^2].
d=Grain diameter [L].
dH/dx=Hydraulic gradient [unit-less]. Change in head with respect to distance.
k=Permeability of particles [L^2].
K=Hydraulic conductivity [L/T]. Not used in calculation but shown in equations above. Instead of K, k is used in calculations since it is independent of fluid type.
K[m]=Minor loss coefficient of bed [unit-less].
L=Bed length [L]. Also known as permeameter length.
M=Mass flow rate [M/T].
n=Porosity of bed. Enter as decimal. For example, enter 35% as 0.35.
Q=Volumetric flow rate [L^3/T].
Re(Idelchik)=Reynolds number defined by Idelchik method [unit-less]. Based on upstream velocity.
Re(Darcy)=Reynolds number for Darcy method [unit-less]. Based on bed velocity.
V[up]=Upstream velocity [L/T]. Also known as Darcy velocity. Upstream of porous bed.
V[bed]=Bed velocity [L/T]. Velocity of fluid flowing through the bed. Also known as pore velocity.
ρ=Fluid density [M/L^3].
μ=Fluid dynamic viscosity [F-T/L^2] or [M/L-T].
ΔP=Pressure loss [F/L^2]. Upstream Pressure - Downstream Pressure. This computation does not include the pressure change of the water due to elevation difference from inlet to outlet. It is only the
pressure loss due to the porous media. It is as if the flow is horizontal.
θ=Packing angle [radians or degrees]. Angle between successive rows of particles.
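The governing equations are referred to above but are not reproduced in this text version of the page. The standard laminar-flow (Darcy's-law) form can nevertheless be sketched in a few lines. In the Python example below the variable names follow the list above, and the numerical inputs are arbitrary illustrations (a water-like fluid and a "sandy" permeability from the soil table below), not recommended design values.

```python
# A minimal sketch of the laminar (Darcy's-law) part of the calculation.
# Variable names follow the list above; the inputs are arbitrary examples.

rho   = 998.0      # fluid density, kg/m^3 (roughly water at 20 C)
mu    = 1.0e-3     # fluid dynamic viscosity, Pa*s
k     = 1.0e-12    # bed permeability, m^2 (within the "sandy" range below)
n     = 0.35       # bed porosity, dimensionless
d     = 0.5e-3     # grain diameter, m
L     = 0.30       # bed length, m
A_up  = 0.01       # upstream (column) cross-sectional area, m^2
dP    = 5.0e4      # pressure loss across the bed, Pa

Q     = k * A_up * dP / (mu * L)   # Darcy's law: volumetric flow rate, m^3/s
V_up  = Q / A_up                   # upstream (Darcy) velocity, m/s
V_bed = V_up / n                   # bed (pore) velocity, m/s
M     = rho * Q                    # mass flow rate, kg/s

# Reynolds number as defined on this page (based on the bed velocity);
# Darcy's law is only trusted when this value is below about 10.
Re_darcy = rho * V_bed * d / mu

print(Q, V_up, V_bed, M, Re_darcy)
```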
Soil Properties
The following table of soil properties was compiled from Freeze and Cherry (1979), Bear (1979), Hillel (1982), and Sanders (1998).
Table of Soil Properties
│Soil Type│Permeability, k (m^2) │Porosity, n│Grain Size, d (mm)│
│Clayey │ 10^-18 - 10^-15 │ 0.4 - 0.7 │ < 0.002 │
│Silty │ 10^-16 - 10^-12 │0.35 - 0.5 │ 0.002 - 0.05 │
│Sandy │ 10^-14 - 10^-10 │0.25 - 0.5 │ 0.05 - 2 │
│Gravelly │ 10^-10 - 10^-7 │0.25 - 0.4 │ > 2 │
Error Messages given by calculation
Input checks. If one of these messages appear, the calculation will not proceed.
Lower limits on variables:
"A must be > 0", "Density must be > 0", "Viscosity must be > 0", "Grain dia must be > 0", "L must be > 0", "k must be > 0", "Pressure loss must be > 0", "M must be > 0", "Q must be > 0", "V[bed] must
be > 0".
Upper limits on variables:
"A must be < 1e50 m^2", "Density must be < 1e6 kg/m^3", "Viscosity must be < 1e9 m^2/s", "d must be < 1000 m", "L must be < 1e10 m", "k must be < 100 m^2", "P loss must be < 1e50 Pa", "M must be <
1e50 kg/s", "Q must be < 1e50 m^3/s", "V[bed] must be < 1e50 m/s".
Other limits on variables:
"Porosity out of range". Porosity must be between 0.00001 and 0.99999.
"Angle out of range". If packing angle is entered, it must be between 60^o and 90^o.
"L must be ≥ d". Bed length cannot be less than grain size.
Run-time errors. The following messages may be generated during execution:
"Infeasible input." May occur when using Idelchik method and solving for flowrate using input values that approach machine precision (e.g. Pressure loss of 10^-200 Pa) and are physically unrealistic.
"L will be < d." May appear if bed length is being computed, and the length will be less than the grain diameter.
"Darcy Re >10." Will appear only if the Darcy method was selected, and the Darcy Reynolds number is > 10. Darcy's law is only valid for laminar flow which is defined as Re(Darcy)<10. This is only a
warning message; results will be computed and shown.
Bear, J. 1979. Hydraulics of Groundwater. McGraw-Hill Pub Co.
Freeze, R. A. and J. A. Cherry. 1979. Groundwater. Prentice Hall, Inc.
Fried, E. and I. E. Idelchik. 1989. Flow Resistance: A Design Guide for Engineers. Hemisphere Pub. Corp.
Hillel, D. 1982. Introduction to Soil Physics. Academic Press, Inc.
Rao, S. S. 1985. Optimization Theory and Applications. Wiley Eastern Limited. 2ed.
Sanders, L. L. 1998. A Manual of Field Hydrogeology. Prentice Hall, Inc.
© 2001-2024 LMNO Engineering, Research, and Software, Ltd. All rights reserved.
Please contact us for consulting or questions about permeameters, porous media, and packed beds.
LMNO Engineering, Research, and Software, Ltd.
7860 Angel Ridge Rd. Athens, Ohio 45701 USA Phone: (740) 707-2614
LMNO@LMNOeng.com https://www.LMNOeng.com
LMNO Engineering home page (more calculations)
Related Calculations:
Compute gradient from well head measurements
1-D Step injection with advection, dispersion, retardation
3-D Pulse (slug) injection with advection, dispersion
Drawdown for unsteady groundwater flow to pumping well in confined aquifer (Theis calculator)
Transmissivity T=Kb (and K table)
Unit Conversions | {"url":"https://www.lmnoeng.com/Groundwater/darcys-law-calculator.php","timestamp":"2024-11-11T17:23:53Z","content_type":"text/html","content_length":"27027","record_id":"<urn:uuid:4ee793c7-c9d2-4831-9c21-77c16a977ab4>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00434.warc.gz"} |
Math Contest Repository
Challenge for the Weak 1 - P3
Calculate $L = \lim_{n \to \infty} \sum_{k=0}^{n} \frac{k+1}{10^k}$.
A. $L = \infty$
B. $L = \frac{10}{9}$
C. $L = \frac{10}{81}$
D. $L = \frac{1000}{9}$
E. $L = \frac{100}{81}$
F. $L = \frac{9}{10}$
Your answer should be a single capitalized letter, e.g. "A". | {"url":"https://mathcontestrepository.pythonanywhere.com/problem/cw1p3/","timestamp":"2024-11-04T04:04:45Z","content_type":"text/html","content_length":"9496","record_id":"<urn:uuid:b9099022-e67b-4823-9b0d-13b0baef592d>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00016.warc.gz"}
Scientific Publications - University of Bacau [Revues] (scientific reviews, abstracts and articles)
"Vasile Alecsandri" University of Bacău, Faculty of Engineering, Department of Power Engineering, Mechatronics and Computer Science, 157 Calea Mărășești, Bacău, 600115, Romania
Issue: SSRSMI, Number 1, Volume XXIII
Section: Volume 23, Number 1
Abstract: We introduce a new type of first order Poincaré inequality for functions defined on a metric measure space, which is a useful tool in the study of Newtonian spaces based on Banach function spaces. This Poincaré inequality extends the Orlicz-Poincaré inequality introduced by Aïssaoui (2004) and the Poincaré inequality based on Lorentz spaces, introduced by Costea and Miranda (2011), which in turn generalize the well-known weak (1,p)-Poincaré inequality. Using very recent results of Durand-Cartagena, Jaramillo and Shanmugalingam (2012, 2013), it turns out that every complete metric space X, endowed with a doubling measure and supporting a weak Poincaré inequality based on a Banach function space, is (thick) quasiconvex. We prove that the validity of the Poincaré inequality based on a Banach function space, on a doubling metric measure space, implies a pointwise estimate involving an appropriate maximal operator.
Keywords: metric measure space, Banach function space, weak upper gradient, Newtonian space, Poincaré inequality, maximal operator.
Code SSRSMI201323V23S01A0008 [0003825]
Full Download pdf | {"url":"https://pubs.ub.ro/?pg=revues&rev=ssrsmi&num=201323&vol=23&aid=3825","timestamp":"2024-11-05T23:43:09Z","content_type":"application/xhtml+xml","content_length":"12400","record_id":"<urn:uuid:505c871c-1409-4332-b78e-959e3f1d1b30>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00191.warc.gz"} |
5.2: Partitioning the Sums of Squares
Time to partition the sums of squares again. Remember the act of partitioning, or splitting up, the variance is the core idea of ANOVA. To continue using the house analogy, our total sums of squares
(SS Total) is our big empty house. We want to split it up into little rooms. Before in the between-subjects ANOVA, we partitioned SS Total using this formula:
\[SS_\text{TOTAL} = SS_\text{Effect} + SS_\text{Error} \nonumber \]
The \(SS_\text{Effect}\) was the variance we could attribute to the means of the different groups, and \(SS_\text{Error}\) was the leftover variance that we couldn’t explain. \(SS_\text{Effect}\) and
\(SS_\text{Error}\) are the partitions of \(SS_\text{TOTAL}\), they are the little rooms.
In the between-subjects ANOVA above, we got to split \(SS_\text{TOTAL}\) into two parts. What is most interesting about the repeated-measures design, is that we get to split \(SS_\text{TOTAL}\) into
three parts, there’s one more partition. Can you guess what the new partition is? Hint: whenever we have a new way to calculate means in our design, we can always create a partition for those new
means. What are the new means in the repeated measures design?
Here is the formula for partitioning \(SS_\text{TOTAL}\) in a repeated-measures ANOVA:
\[SS_\text{TOTAL} = SS_\text{Effect} + SS_\text{Subjects} +SS_\text{Error} \nonumber \]
We’ve added \(SS_\text{Subjects}\) as the new idea in the formula. What’s the idea here? Well, because each subject or participant was measured in each condition, we have a new set of means. These
are the means for each subject or participant, collapsed across the conditions. For example, subject 1 has a mean (mean of their scores in conditions A, B, and C); subject 2 has a mean (mean of their
scores in conditions A, B, and C); and subject 3 has a mean (mean of their scores in conditions A, B, and C). There are three subject means, one for each subject, collapsed across the conditions.
And, we can now estimate the portion of the total variance that is explained by these subject means.
Before we go into the calculations, it's important to pause and compare the differences of how the sum of squares are partitioned in between-subjects ANOVA vs. within-subjects ANOVA.
Recall, in between-subjects ANOVA, we use different words to describe parts of the ANOVA (which can be really confusing). For example, we described the SS formula for a between-subjects ANOVA like this:
\[SS_\text{TOTAL} = SS_\text{Effect} + SS_\text{Error} \nonumber \]
The very same formula is often written differently, using the words between and within in place of effect and error, it looks like this:
\[SS_\text{TOTAL} = SS_\text{Between} + SS_\text{Within} \nonumber \]
Here, \(SS_\text{Between}\) (which we have been calling \(SS_\text{Effect}\)) refers to variation between the group means, that’s why it is called \(SS_\text{Between}\). Second, and most important, \
(SS_\text{Within}\) (which we have been calling \(SS_\text{Error}\)), refers to the leftover variation within each group mean. Specifically, it is the variation between each group mean and each score
within that group. Remember, for each group mean, every score is probably off a little bit from the mean. So, the scores within each group have some variation. This is the within group variation, and
it is why the leftover error that we can’t explain is often called \(SS_\text{Within}\).
Perhaps a picture will help to clear things up.
Figure \(\PageIndex{1}\): Illustration showing how the total sums of squares are partitioned differently for a between versus repeated-measures design.
The figure lines up the partitioning of the Sums of Squares for both between-subjects and repeated-measures designs. In both designs, \(SS_\text{Total}\) is first split up into two pieces \(SS_\text
{Effect (between-groups)}\) and \(SS_\text{Error (within-groups)}\). At this point, both ANOVAs are the same. In the repeated measures case we split the \(SS_\text{Error (within-groups)}\) into two
more littler parts, which we call \(SS_\text{Subjects (error variation about the subject mean)}\) and \(SS_\text{Error (left-over variation we can't explain)}\).
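To make the three-way partition concrete, here is a small numeric sketch in Python (the scores are invented for illustration and it assumes NumPy is available). It computes the grand mean, the condition means and the subject means, and then splits SS Total into the three little rooms described above:

```python
import numpy as np

# Made-up scores: 3 subjects (rows) measured in 3 conditions A, B, C (columns).
scores = np.array([
    [4.0, 6.0, 8.0],   # subject 1
    [3.0, 5.0, 9.0],   # subject 2
    [5.0, 7.0, 7.0],   # subject 3
])
n_subjects, n_conditions = scores.shape

grand_mean     = scores.mean()
condition_mean = scores.mean(axis=0)   # one mean per condition (column)
subject_mean   = scores.mean(axis=1)   # one mean per subject (row)

SS_total    = ((scores - grand_mean) ** 2).sum()
SS_effect   = n_subjects   * ((condition_mean - grand_mean) ** 2).sum()
SS_subjects = n_conditions * ((subject_mean   - grand_mean) ** 2).sum()
SS_error    = SS_total - SS_effect - SS_subjects   # the left-over partition

print(SS_total, SS_effect, SS_subjects, SS_error)
```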
The critical feature of the repeated-measures ANOVA, is that the \(SS_\text{Error}\) that we will later use to compute the MS (Mean Squared) in the denominator for the \(F\)-value, is smaller in a
repeated-measures design, compared to a between subjects design. This is because the \(SS_\text{Error (within-groups)}\) is split into two parts, \(SS_\text{Subjects (error variation about the
subject mean)}\) and \(SS_\text{Error (left-over variation we can't explain)}\).
To make this more clear, here is another figure:
Figure \(\PageIndex{2}\): Close-up showing that the Error term is split into two parts in the repeated measures design.
As we point out, the \(SS_\text{Error (left-over)}\) in the green circle will be a smaller number than the \(SS_\text{Error (within-group)}\). That’s because we are able to subtract out the \(SS_\
text{Subjects}\) part of the \(SS_\text{Error (within-group)}\). This can have the effect of producing larger F-values when using a repeated-measures design compared to a between-subjects design,
which is more likely to yield smaller obtained p-values and allow us to reject the null hypothesis. | {"url":"https://stats.libretexts.org/Courses/Kansas_State_University/EDCEP_917%3A_Experimental_Design_(Yang)/05%3A_Within-Subjects_Design/5.02%3A_Partioning_the_Sums_of_Squares","timestamp":"2024-11-10T12:35:59Z","content_type":"text/html","content_length":"128223","record_id":"<urn:uuid:bc0cf011-6150-4dec-b406-f35b4f8e5394>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00424.warc.gz"}
Cube Volume Calculator
How to Find the Surface Area & Volume of a Cube?
A polyhedron is a three-dimensional geometric object bounded by polygons. These polygons are called its faces. A common side of two adjacent faces of the polyhedron is called an edge or side of the polyhedron. Three or more faces meet at a common point, which is called a vertex of the polyhedron. A polyhedron is a regular polyhedron if all of its faces are regular congruent polygons and all of the edges are congruent. There are five convex regular polyhedra, known as the Platonic solids: the tetrahedron, cube, octahedron, dodecahedron and icosahedron.
Tetrahedron, cube, octahedron, dodecahedron, icosahedron
A prism is a polyhedron with two parallel congruent faces called bases. The other faces are parallelograms. Prisms are usually named by the shape of their bases. A regular prism is a prism with bases
of regular polygons. A cube is an example of a regular prism. A cube has $6$ faces, $8$ vertices and $12$ edges and all faces of a cube are squares.
If we cut a cardboard box along its edges and lay it flat in a plane, we get a pattern or net for the cube. In general, a net can be made for every solid. Nets are very useful for finding the surface area of solids. A net for a cube is shown below. The surface area of a cube is the sum of the areas of its faces:
$$A=A_1+A_2+A_3+A_4+A_5+A_6$$
Because all faces have equal areas, the surface area is
$$6\times a\times a=6\times a^2$$
The volume of a cube is the measure of the amount of space that the cube occupies. If we multiply the length, the width and the height of a cube, we get the volume of the cube. Since the length, width and height of a cube are all equal to each other, say $a$, the volume of a cube is
$$V=a\times a\times a$$
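As a quick illustration (not part of the original calculator), the two formulas can be evaluated directly; the calls below use the side length of 5 that appears in the worked example further down the page:

```python
def cube_surface_area(a):
    """Surface area of a cube with edge length a: six square faces of area a * a."""
    return 6 * a ** 2

def cube_volume(a):
    """Volume of a cube with edge length a: a * a * a."""
    return a ** 3

print(cube_surface_area(5))   # 150 square units (e.g. in^2 for a 5 in side)
print(cube_volume(5))         # 125 cubic units (e.g. in^3 for a 5 in side)
```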
Doubling the cube, also known as the Delian problem, is one of the three most famous geometric problems of antiquity (H. Dörrie, 100 Great Problems of Elementary Mathematics, Dover Publications, NY, 1965). Given a cube with side $a$, find the side of another cube whose volume is twice the volume of the given one. This problem requires the construction of a line segment of length $x$, where $x^3 = 2$, i.e. $x=\sqrt[3]{2}$, the cube root of two. It can be solved by computation but it cannot be solved by a straightedge and compass construction.
The surface area of the cube is measured in units such as square centimeters $(cm^2)$, square meters $(m^2)$, square kilometers $(km^2)$, etc. The volume of a cube is measured in units such as cubic centimeters $(cm^3)$, cubic meters $(m^3)$, cubic kilometers $(km^3)$, etc.
The cube volume and surface area work with steps shows the complete step-by-step calculation for finding the surface area and volume of the cube with the side length of $5\;in$ using the surface area
and volume formulas. For any other value for the length of the side of a cube, just supply a positive real number and click on the GENERATE WORK button. The grade school students may use this cube
volume and surface area calculator to generate the work, verify the results of the surface area and volume of the three-dimensional bodies or do their homework problems efficiently. | {"url":"https://ncalculators.com/geometry/cube-volume-calculator.htm","timestamp":"2024-11-13T15:24:57Z","content_type":"text/html","content_length":"61707","record_id":"<urn:uuid:70242679-dee4-419b-bd48-57f734419c24>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00896.warc.gz"} |
Volume complexity of dS bubbles
In the framework of the static patch approach to de Sitter holography introduced in [L. Susskind, J. Hologr. Appl. Phys. 1, 1 (2021)], the growth of holographic complexity has a hyperfast behavior,
which leads to a divergence in a finite time. This is very different from the anti–de Sitter (AdS) spacetime, where instead the complexity rate asymptotically reaches a constant value. We study
holographic volume complexity in a class of asymptotically AdS geometries which include de Sitter bubbles in their interior. With the exception of the static bubble case, the complexity obtained from
the volume of the smooth extremal surfaces which are anchored just to the AdS boundary has a similar behavior to the AdS case, because it asymptotically grows linearly with time. The static bubble
configuration has a zero complexity rate and corresponds to a discontinuous behavior, which resembles a first order phase transition. If instead we consider extremal surfaces which are anchored at
both the AdS boundary and the de Sitter stretched horizon, we find that complexity growth is hyperfast, as in the de Sitter case. | {"url":"https://publires.unicatt.it/it/publications/volume-complexity-of-ds-bubbles","timestamp":"2024-11-14T22:15:26Z","content_type":"text/html","content_length":"56432","record_id":"<urn:uuid:6e4ac07f-3763-4306-b13c-fa807efdd9d7>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00111.warc.gz"}
How to Create P Value in Excel? A Step-by-Step Guide
Ready to create a P value in Excel? Want to learn how to quickly and efficiently do it? Read on.
To calculate p-values in Excel, you generally use the T-test statistical function for hypothesis testing. You can also use it to interpret the results of hypothesis tests and make informed decisions
about your data.
In this article, you’ll learn what a p-value is, its importance, and how to calculate it using various techniques in Excel, such as the T.test function and ANOVA.
By the end, you’ll be equipped with the necessary knowledge and practical skills to expertly interpret p-values in Excel, improving your ability to draw meaningful conclusions from your data.
Let’s get Started!
What is a P-Value
A p-value provides a measure of the evidence against a null hypothesis. To understand this concept, let’s break down the different components.
• Hypothesis testing: In statistics, researchers test whether their results are due to true treatment effects or mere chance. This process involves formulating a null hypothesis ($H_0$) and an
alternative hypothesis ($H_1$).
• Null hypothesis ($H_0$): This statement asserts no significant population parameter or distribution difference. If you are trying to determine if there’s an effect from a treatment, the null
hypothesis might state that the treatment has no effect.
• Alternative hypothesis ($H_1$): This is the opposite of the null hypothesis. It states that there is a significant difference or effect.
• P-value: This is the probability of observing your data or something more extreme if the null hypothesis is true.
In hypothesis testing, you typically set a significance level, usually denoted as alpha ($\alpha$), a threshold for the p-value. If the p-value is less than or equal to the significance level, you
reject the null hypothesis in favor of the alternative hypothesis.
On the other hand, if the p-value is more significant than the significance level, you fail to reject the null hypothesis. In other words, a low p-value indicates strong evidence against the null
hypothesis, while a high p-value fails to provide strong evidence against it.
Now, let’s delve into how to effectively calculate p-values using Excel’s various functions and methods.
How to Calculate P-Values in Excel
There are various functions and methods in Excel that will help you calculate p-values. These include T.DIST, T.DIST.2T, T.INV.2T, T.TEST, CHISQ.TEST, and the ANOVA tool.
What are T-Tests in Excel?
T-tests are a family of statistical tests used to infer the population mean of a certain characteristic from a sample. There are three main types of t-tests in Excel:
• One-sample t-test: Compares the mean of a sample to a known value or hypothesized mean of the population.
• Independent (or unpaired) t-test: Compares the means of two unrelated (independent) groups.
• Paired t-test: Compares the means of two related (paired) groups.
The T.TEST function is a built-in Excel statistical function that calculates the p-value for a given sample.
It can be used for one-tailed or two-tailed hypothesis testing and returns the probability associated with a t-value from a t-distribution.
This function has the following syntax:
• T.TEST(array1, array2, tails, type): Returns the probability (p-value) associated with a Student's t-test comparing two ranges of data.
• array1: The first array or range of sample data.
• array2: The second array or range of sample data. (Excel's T.TEST always compares two ranges; for a one-sample test against a hypothesized mean, a common workaround is to fill a second range with that hypothesized value and run a paired test.)
• tails: A numerical value indicating the number of tails for the distribution (1 for a one-tailed test, 2 for a two-tailed test).
• type: The type of t-test to be performed (1 for paired, 2 for two-sample with equal variance, and 3 for two-sample with unequal variance).
Next, we explore the process of conducting a t-test in Excel, a crucial step in hypothesis testing.
How to perform a t-test in Excel
1. Arrange your sample data in the Excel worksheet.
2. Use the T.TEST function to calculate the p-value for the respective t-test (one-sample, independent, or paired).
You can follow the sample syntax specific to each type of t-test as per your requirements.
• One-sample t-test syntax (workaround): =T.TEST(known\_data, hypothesized\_range, 1, 1) for a one-tailed test or =T.TEST(known\_data, hypothesized\_range, 2, 1) for a two-tailed test, where hypothesized\_range is a range filled with the hypothesized mean.
• Independent t-test syntax: =T.TEST(array\_1, array\_2, 2, 3) (use type 2 if the two groups have equal variances, 3 if not).
• Paired t-test syntax: =T.TEST(array\_1, array\_2, 2, 1).
Let’s break down how to perform a one-sample t-test in Excel, a fundamental technique in statistical analysis.
How to perform a One-sample t-test
For example, if your sample data are in the range A1:A10 and you want to run a two-tailed one-sample t-test against a hypothesized mean of 60, fill B1:B10 with the value 60 and use =T.TEST(A1:A10, B1:B10, 2, 1).
The result of this function will be the p-value, which indicates the likelihood of observing the sample mean given that the true population mean is equal to the specified value, using the specified
tails and type of t-test.
By effectively understanding and using the T.TEST function, you can confidently perform t-tests and interpret their results in Excel.
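If you want to sanity-check what Excel returns, the same three t-tests can be run outside the spreadsheet. The sketch below assumes NumPy and SciPy are installed and uses placeholder numbers rather than any data from this article:

```python
import numpy as np
from scipy import stats

sample_a = np.array([58, 62, 61, 59, 64, 60, 63, 57, 61, 62])   # placeholder data
sample_b = np.array([55, 59, 58, 54, 60, 57, 56, 58, 59, 55])

# One-sample t-test against a hypothesized mean of 60
t1, p1 = stats.ttest_1samp(sample_a, popmean=60)

# Independent (unpaired) two-sample t-test with unequal variances (Welch)
t2, p2 = stats.ttest_ind(sample_a, sample_b, equal_var=False)

# Paired t-test (the two samples must have the same length)
t3, p3 = stats.ttest_rel(sample_a, sample_b)

print(p1, p2, p3)   # two-tailed p-values, comparable to Excel's T.TEST output
```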
Moving on, we examine the steps to conduct an ANOVA test in Excel, an essential tool for comparing multiple groups.
How to Perform an ANOVA Test in Excel
ANOVA (Analysis of Variance) is a statistical method used to evaluate whether there are statistically significant differences between the means of three or more independent groups.
In Excel, you can run a single-factor ANOVA with the ANOVA: Single Factor tool in the Data Analysis ToolPak.
Let's look at this approach in more detail.
1. ANOVA: Single Factor Data Analysis Toolpak
To perform a single factor ANOVA in Excel using the built-in tool, you first need to enable the Data Analysis ToolPak.
Here’s how to do it:
Enable the Data Analysis ToolPak
1. Click on the File tab, and then select Options.
2. In the Excel Options dialog box, click on Add-Ins.
3. In the Add-Ins dialog box, select Excel Add-Ins in the Manage box, and then click Go.
4. In the Add-Ins dialog box, check the Analysis ToolPak box, and then click OK.
Now that you have enabled the Data Analysis ToolPak, you can use it to perform a single-factor ANOVA as follows:
Perform a single-factor ANOVA
1. Click on the Data tab, and then click on Data Analysis.
2. In the Data Analysis dialog box, select ANOVA: Single Factor, and click OK.
3. In the Input Range box, enter the range of the data you want to analyze.
4. Under Grouped By, indicate whether your groups are arranged in columns or rows, and tick Labels in First Row (or First Column) if your input range includes the group labels.
5. Select the appropriate Output Options, and click OK.
The result of this function is an ANOVA table, which includes the p-value associated with the F-statistic.
If the p-value is less than the significance level (usually set at 0.05), you can conclude that there is a significant difference between the means of at least two groups.
The ANOVA table also includes other information, such as the sum of squares, degrees of freedom, F-statistic, and within-group and between-group variances.
You can use this information to gain insights into the relationships between the data points and make informed decisions based on the statistical significance of the differences among the group
With Excel’s built-in ANOVA tool, you can confidently analyze your data and draw meaningful conclusions from your results.
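As a cross-check outside Excel, a one-way (single factor) ANOVA can also be run with SciPy. The groups below are placeholder numbers used purely for illustration:

```python
from scipy import stats

# Placeholder measurements for three independent groups.
group_1 = [23, 25, 28, 22, 26]
group_2 = [30, 33, 29, 31, 34]
group_3 = [24, 27, 26, 25, 28]

f_stat, p_value = stats.f_oneway(group_1, group_2, group_3)
print(f_stat, p_value)   # compare p_value against your chosen significance level (e.g. 0.05)
```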
Diving deeper, we’ll learn about the Chi-Square test in Excel and how it’s used to assess relationships between categorical variables.
Learning Chi-Square Test in Excel
The chi-square test is a statistical test that is used to determine whether there is a significant association between two categorical variables.
It is commonly used in fields such as science, medicine, and social sciences to analyze data and make inferences about the population.
The chi-square test is often used to test whether two categorical variables are independent of each other.
If the variables are independent, there is no relationship between them, while if they are dependent, there is a relationship between them.
In Microsoft Excel, you can easily perform a chi-square test on your data using the CHITEST function (available as CHISQ.TEST in newer versions). This function carries out Pearson's chi-square test and returns the p-value associated with the chi-square statistic.
Here’s an Example of a Chitest
=CHITEST(actual\_data, expected\_data)
The CHITEST function in Excel has the following syntax:
• actual\_data: This is the range of cells containing the observed frequencies or counts. These are the actual data you have collected.
• expected\_data: This is the range of cells containing the expected frequencies or counts. These are the values you expect to see if the null hypothesis is true.
The function will return the p-value associated with the chi-square test. You can then use this p-value to assess the significance of the relationship between the two categorical variables.
Now, let’s focus on how to apply the CHITEST function in Excel for a practical chi-square test.
How to use the CHITEST function
1. Organize your data. Create a table in Excel that displays the observed and expected frequencies for each category of the two categorical variables you are investigating.
2. Use the CHITEST function in a cell in your worksheet.
3. Press Enter. The result should be a p-value.
The chi-square test is useful for examining the relationship between two categorical variables. You can easily perform this test in Excel using the CHITEST (or CHISQ.TEST) function.
The p-value from the test can help you make inferences regarding the significance of the relationship between the variables in your data.
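For a quick check of CHITEST's output outside Excel, SciPy's chisquare function computes the same Pearson goodness-of-fit statistic; the observed and expected counts below are placeholders (note that the two sets of counts must have the same total):

```python
from scipy import stats

observed = [18, 22, 30, 30]    # placeholder observed counts
expected = [25, 25, 25, 25]    # placeholder expected counts (same total as observed)

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(chi2, p_value)   # analogous to =CHITEST(actual_range, expected_range)
```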
The following sections will guide you on using the Excel statistical functions for t-tests, ANOVA, and chi-square tests to calculate p-values.
Next, we explore the various Excel statistical functions that are instrumental in calculating p-values.
Using Excel Statistical Functions
Excel provides a range of statistical functions to help you calculate p-values for your data. Some of the most commonly used Excel statistical functions are explained below.
1. T.SUBTRACT (T.INV.2T)
There is no built-in Excel function actually named T.SUBTRACT; the worksheet function that does this job is T.INV.2T. It calculates the critical value from a two-tailed t-distribution for a given significance level and degrees of freedom.
The formula is:
• =T.INV.2T(alpha, df)
• alpha: The significance level at which you conduct the hypothesis test (e.g., 0.05 for a 5% significance level).
• df: The degrees of freedom of the t-distribution (usually equal to the sample size minus 1 for a one-sample t-test).
Comparing your observed t-statistic against this critical value is equivalent to comparing the p-value against alpha, so it gives you another way to judge the statistical significance of your findings.
2. T.DIST.2T
The T.DIST.2T function returns the two-tailed Student's t-distribution probability, i.e. the two-tailed p-value for a given t-statistic.
The syntax for the T.DIST.2T function is:
• =T.DIST.2T(t, df)
• t: The t-statistic for which you want to calculate the p-value.
• df: The degrees of freedom of the t-distribution.
The result of the T.DIST.2T function is a p-value, which represents the probability of observing a t-statistic as extreme as the one in the sample data, assuming the null hypothesis is true.
3. T.DIST
The T.DIST function returns the left-tailed Student's t-distribution for a given t-statistic and a set of degrees of freedom, either as a probability density or as a cumulative probability.
The t-distribution looks similar to the normal distribution, but its exact shape depends on the degrees of freedom, and therefore on the sample size used for the t-test.
The syntax for the T.DIST function is:
• =T.DIST(x, df, cumulative)
• x: The t-statistic at which to evaluate the distribution.
• df: The degrees of freedom of the t-distribution.
• cumulative: TRUE for the cumulative distribution function, FALSE for the probability density function.
4. T.INV.2T
The T.INV.2T function (also known as the “T.INV” function) calculates the inverse of the two-tailed t-distribution.
This function can be helpful when you have the desired alpha level and must calculate the corresponding critical t-value.
The syntax for the T.INV.2T function is:
• =T.INV.2T(alpha, df)
• alpha: The desired significance level (e.g., 0.05 for a 5% significance level).
• df: The degrees of freedom of the t-distribution.
5. T.TEST
The T.TEST function in Excel is used to perform a two-sample t-test.
The t-test is a statistical test used to determine whether a significant difference exists between the means of two independent groups.
The syntax for the T.TEST function is:
=T.TEST(array1, array2, tails, type)
• array1: The first set of data.
• array2: The second set of data.
• tails: The type of test to perform: 1 for a one-tailed test, 2 for a two-tailed test.
• type: The type of t-test to perform: 1 for a paired test, 2 for a two-sample equal variance test, 3 for a two-sample unequal variance test.
The value T.TEST returns is already a p-value for the number of tails you specified.
If you ran a two-tailed test but want a one-tailed p-value, divide the two-tailed result by 2 when the observed difference lies in the hypothesized direction.
If the observed difference lies in the opposite direction to the one-tailed hypothesis, the one-tailed p-value is 1 minus half of the two-tailed result.
The T.TEST function can also be used to interpret the results of a t-test.
The Excel statistical functions for calculating p-values are essential for hypothesis testing and inferential statistics. By utilizing these functions, you can confidently make decisions based on the
probability of observing specific values in your data.
Mastering the calculation of p-values in Excel is vital for making informed decisions based on statistical analysis.
Final Thoughts
Calculating p-values in Excel is a crucial part of statistical hypothesis testing, enabling you to make informed decisions and draw meaningful conclusions.
The statistical functions available in Excel and the statistical analysis tools allow you to carry out a variety of hypothesis tests and calculate p-values with ease.
Understanding how to confidently perform T-tests, ANOVA, and Chi-Square tests to assess the statistical significance of your data sets is a valuable skill.
Mastering p-value calculations in Excel will help you make informed decisions based on your statistical analysis, ensuring the accuracy and reliability of your results.
So go on, give it a test, and see how easy it is to create a p value in Excel.
Do you want to learn how to supercharge your Power BI development with ChatGPT? Check out the EnterpriseDNA YouTube channel.
Frequently Asked Questions
What is a P-value in Excel?
A p-value in Excel is a statistical measure that helps you determine the significance of your findings in hypothesis testing. d
It represents the probability of obtaining a result at least as extreme as the one observed under the assumption that the null hypothesis is true.
Why is the P-value important in Statistical Analysis?
P-values are crucial in determining whether to reject or accept the null hypothesis. A low p-value (< 0.05, typically) suggests strong evidence against the null hypothesis, indicating that your
findings are statistically significant.
How Can I Calculate a P-Value in Excel?
You can calculate p-values in Excel using various functions and tests, including T.TEST for t-tests, ANOVA for analysis of variance, and CHITEST for the chi-square test.
These functions compute the p-value based on your data and the statistical test you perform.
What are T-Tests, and How are They Used in Excel?
T-tests in Excel are used to compare sample means against a known population mean (one-sample t-test) or between two groups (independent or paired t-tests). Excel’s T.TEST function helps calculate
the p-value for these tests.
What is ANOVA, and How is it Performed in Excel?
ANOVA (Analysis of Variance) is a method used to compare means between three or more groups. In Excel, you can perform ANOVA using the Data Analysis Toolpak or the ANOVA function.
It generates a table with the p-value, helping you assess the statistical significance of the differences among group means.
How Do I Use the Chi-Square Test in Excel?
The chi-square test in Excel, performed using the CHITEST function, assesses the association between two categorical variables.
It calculates the p-value, determining whether the observed association is statistically significant.
Are There Specific Excel Functions for Calculating P-Values?
Yes, Excel offers specific functions like T.SUBTRACT, T.DIST.2T, T.DIST, and T.INV.2T, each serving different purposes in the calculation of p-values, depending on the hypothesis test being
How Do I Interpret P-Values in Excel?
P-value interpretation depends on your set significance level (α). If the p-value is less than α (usually 0.05), it suggests strong evidence against the null hypothesis. If it's greater, it indicates insufficient evidence to reject the null hypothesis.
Can Excel Handle Different Types of T-Tests?
Yes, Excel can handle different types of t-tests, including one-sample, independent, and paired t-tests. The function syntax varies slightly depending on the test type.
Is It Possible to Perform Hypothesis Testing for Large Data Sets in Excel?
Yes, Excel is capable of handling large data sets for hypothesis testing. However, the process might be slow for extremely large data sets, and care must be taken to ensure accurate data entry and
formula application.
AMS-02 recently published its lepton spectra measurement. The results show that the positron fraction no longer increases above $\sim$200 GeV. The aim of this work is to investigate the possibility
that the excess of positron fraction is due to pulsars. Nearby known pulsars from ATNF catalogue are considered as a possible primary positron source of the high energy positrons. We find that the
pulsars with age $T\simeq (0.45\sim4.5)\times10^{5}$ yr and distance $d<0.5$ kpc can explain the behavior of positron fraction of AMS-02 in the range of high energy. We show that each of the four
pulsars --- Geminga, J1741-2054, Monogem and J0942-5552 --- is able to be a single source satisfying all considered physical requirements. We also discuss the possibility that these high energy $e^{\pm}$ are from multiple pulsars. The multiple pulsars contribution predicts a positron fraction with some structures at higher energies. Comment: 27 pages, 5 figures, 3 tables, accepted for publication
in EPJ
Topologically ordered states are quantum states of matter with topological ground state degeneracy and quasi-particles carrying fractional quantum numbers and fractional statistics. The topological
spin $\theta_a=2\pi h_a$ is an important property of a topological quasi-particle, which is the Berry phase obtained in the adiabatic self-rotation of the quasi-particle by $2\pi$. For chiral
topological states with robust chiral edge states, another fundamental topological property is the edge state chiral central charge $c$. In this paper we propose a new approach to compute the
topological spin and chiral central charge in lattice models by defining a new quantity named as the momentum polarization. Momentum polarization is defined on the cylinder geometry as a universal
subleading term in the average value of a "partial translation operator". We show that the momentum polarization is a quantum entanglement property which can be computed from the reduced density
matrix, and our analytic derivation based on edge conformal field theory shows that the momentum polarization measures the combination $h_a-\frac{c}{24}$ of topological spin and central charge.
Numerical results are obtained for two example systems, the non-Abelian phase of the honeycomb lattice Kitaev model, and the $\nu=1/2$ Laughlin state of a fractional Chern insulator described by a variational Monte Carlo wavefunction. The numerical results verify the analytic formula with high accuracy, and further suggest that this result remains robust even when the edge states cannot be
described by a conformal field theory. Our result provides a new efficient approach to characterize and identify topological states of matter from finite size numerics.Comment: 13 pages, 8 figure
The Higgs boson production can be affected by several anomalous couplings, e.g. $c_t$ and $c_g$ anomalous couplings. Precise measurement of $gg\to h$ production yields two degenerate parameter spaces
of $c_t$ and $c_g$; one parameter space exhibits the SM limit while the other does not. Such a degeneracy could be resolved by Higgs boson pair production. In this work we adapt the strategy
suggested by the ATLAS collaboration to explore the potential of distinguishing the degeneracy at the 14 TeV LHC. If the $c_t$ anomalous coupling is induced only by the operator $H^\dag H \bar Q_L \tilde{H} t_R$, then the non-SM-like band could be excluded with an integrated luminosity of $\sim 235~{\rm fb}^{-1}$. Making use of the fact that the Higgs boson pair is mainly produced through an
$s$-wave scattering, we propose an analytical function to describe the fraction of signal events surviving a series of experimental cuts for a given invariant mass of Higgs boson pair. The function
is model independent and can be applied to estimate the discovery potential of various NP models | {"url":"https://core.ac.uk/search/?q=author%3A(Zhang%2C%20Hong-Hao)","timestamp":"2024-11-11T14:06:26Z","content_type":"text/html","content_length":"105524","record_id":"<urn:uuid:c28d20da-5847-49d7-978f-77fe4123d918>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00091.warc.gz"} |
Scalene triangle
Has no line of symmetry. Types of Scalene Triangle. There are mainly four types of scalene triangle: Acute Scalene Triangle – a triangle where all three angles are acute angles (less than 90°) …
Scalene triangle calculator diagram. You can lock a side or angle and animate the other sides and angles to get the triangle dimensions you need. Hold down Left or Right ALT or CTRL keys to lock a
side or angle. A red circle will appear to mark the locked side or angle. Now Want to learn about the scalene muscles of the neck?
Area of a scalene triangle Since in a scalene triangle you probably know the lengths of all three sides, the best way to calculate the area is using Heron's Formula. However you can use any method
that works with what you are given. See Area of a Triangle.
A scalene triangle is a triangle in which all three sides have different lengths and all angles are different too.
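As a rough illustration of this classification, a few lines of Python can label a triangle from its three side lengths:

def classify_triangle(a, b, c):
    # Assumes the three lengths are positive; checks that they actually form a triangle
    if not (a + b > c and b + c > a and a + c > b):
        raise ValueError("side lengths do not form a triangle")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

print(classify_triangle(3, 4, 5))   # scalene
print(classify_triangle(5, 5, 8))   # isosceles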
scalene triangle (noun). A triangle with all sides (and thus all angles) equal is equilateral, one with two sides equal is isosceles and one with no two sides equal is scalene.
Another name for this formation is the scalene fissure or scalene hiatus. The scalene triangle is important because the …
* an equilateral scalene triangle: impossible, because these two types of triangles exclude each other - if a triangle is equilateral, it has three equal sides; if it's scalene, it has no equal sides
* an acute isosceles triangle: possible, for example a triangle with angles 40°, 70°, 70°
* a right equilateral triangle: impossible, because every angle of an equilateral triangle is 60°, never 90°
Let's look more closely at the lesson. A scalene triangle is a triangle where none of the sides or angles are equal. An N-tiling of triangle ABC by triangle T is a way of writing ABC as a union of N triangles congruent to T, overlapping only at their boundaries. The median of a scalene triangle is a line segment from a vertex (corner point) to the midpoint of the opposite side. The circumcircle is a circle that passes through all three vertices; an obtuse triangle's circumcenter falls outside the triangle.
Each cube-corner element comprises a base triangle connected by a groove from each of the three intersecting sets of grooves, the base triangle being a scalene triangle.
Two sides of every 45-45-90 triangle are equal. A triangle with no equal sides and angles is called a scalene triangle. There are three ways that you can figure out the area of this kind of triangle.
Attributive form of scalene triangle, noun (adjective): a scalene-triangle form of the Pythagorean theorem. In anatomy, the scalene triangle is bound by the anterior and middle scalene muscles, with the first rib at the base; the brachial plexus nerves and the subclavian artery pass through it. On the other hand, triangles can be divided into equilateral, isosceles and scalene, since no other kind of triangle can exist (Encyclopaedia Britannica, 11th edition). A right triangle is scalene if all three sides are a different length, such as one with sides of 3, 4 and 5. You can have an isosceles right triangle with two sides the same length. If at least two sides of a triangle are the same length, then the triangle is called an isosceles triangle.
True or False: Math.pow(2, 53) == Math.pow(2, 53) + 1 ?
Hello code newbies:
I have a fun JavaScript question for you.
Is the following expression True or False?
Math.pow(2, 53) == Math.pow(2, 53) + 1
Before answering, please test your answer in your favorite JavaScript environment.
Top comments (6)
Max • Edited
At first, I was confused about why this would not return false, but of course you wouldn't have posted this if it did. 😃
While looking into why these statements are evaluated to be equal, I stumbled upon an even more interesting problem: when you try to enter exceedingly big numbers into a JavaScript console, they will be returned similarly but not with the exact same value. For example, 9007199254740992111 will be transformed into 9007199254740992000. This sounded awfully like floating point issues. After having a look, I found that JavaScript engines store numbers in 64 bits, which means there's a limit to correctly representing numbers above a certain threshold. And that threshold appears to be 2^53, which explains why you took those parameters for Math.pow!
Previously, I was only aware of examples where floating point issues arise with the storage of numbers smaller than one, leading me to think it would be a "comma problem" (for example 0.1 + 0.2 != 0.3), but this example made clear for me that this can of course also happen for big numbers.
This thread helped me understand the details, it's worth checking out.
Pacharapol Withayasakpunt •
Actually, Math.pow(2, 53) === Math.pow(2, 53) + 1 (triple equals), unless you wrap it in BigInt.
Every programming language has a concept like Number.MAX_SAFE_INTEGER.
sahas023 •
The MAX_SAFE_INTEGER in JavaScript is 2^53 - 1, meaning that in this range integers are safe to be represented and compared. Any integer above this value will not be safe to be compared anymore, thus
2^53 == 2^53 + 1 evaluates to true, which is mathematically incorrect.
Arber Braja •
Mike Lezhnin •
Welp, I actually expected JS to behave the same way as Python and produce false. I've never been so wrong.
Pacharapol Withayasakpunt • Edited
Yeah, Python doesn't tell you about short, long, float and double and it automates this; but for JavaScript, it is always double.
I was actually surprised that in standard Python it is False, instead of throwing an error. It will not be the case in Kotlin.
>>> 2 ** 64 == 2 ** 64 + 1
False
>>> type(2 ** 64 + 1)
<class 'int'>
Now, I can ask on StackOverflow, but SO is such a scary place. There is this in Quora, but not many want to answer.
It seems that it is bignum, actually. You don't even have to care about Decimal, unless you care about 0.1 + 0.2
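A quick way to see the int-versus-float distinction in Python itself:

# Python ints are arbitrary precision, so the comparison is exact
print(2 ** 53 == 2 ** 53 + 1)                 # False

# Forced into 64-bit floats (how JavaScript stores numbers), precision is lost
print(float(2 ** 53) == float(2 ** 53) + 1)   # True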
Shuttering Area Calculation | Formwork Area Calculation
How to calculate shuttering area?
In this civil engineering article, you will learn how to measure the shuttering area in a formwork.
Measuring Shuttering area:-
The shuttering is measured with respect to Sq.M in the Rate Analysis of Shuttering. Before working out the area of shuttering, it is necessary to learn the calculation of peripheral length
(Perimeter) concerning any shape.
Peripheral length (Perimeter):
Perimeter refers to the distance around a two dimensional shape.
For instance, a square has four sides and the length of one side is considered as 's'.
So, the peripheral length = s+s+s+s = 4s
Vital Formulae for working out the Shuttering Area:
Perimeter of Square : 4S (S = Length of Side)
Perimeter of Rectangle : 2[L+B] (L=Length & B = Breadth)
Perimeter of Circle : 2πr (r = Radius of circle)
Area of Rectangle = Length x Breadth
Area of Square = Side x Side
Keep in mind that each member in a structure whether it is a Slab or Beam or Column contains six sides (faces). Shuttering area is measured with the use of two methods. One is applied with the
following formula :-
Shuttering area = Peripheral length (Perimeter) x Depth
The other method is applied by measuring the separate areas of faces.
How to work out the shuttering area of a column :-
In a column, shuttering is provided on four sides; the other two sides are omitted (the top of the column is left open for filling concrete and the bottom rests on ground level). Omit top and bottom in the calculation.
The side of a column is a rectangle with side length 'l' and breadth 'b'.
Peripheral length of Rectangle is = l+b+l+b = 2l+2b
Shuttering area = Peripheral length (Perimeter) x Depth
Taking length = 0.8 m, breadth = 0.6 m and depth = 4 m:
Peripheral length = 2×0.8 + 2×0.6 = 1.6 + 1.2 = 2.8 m
Total Area of Shuttering of a column = 2.8 × 4 = 11.2 Sq.m
For circular column, the following formula is applied to work out the shuttering area :-
Shuttering area of Circular Column = 2πr x Depth
How to work out the shuttering area of a beam
For a beam, shuttering is provided on 5 sides; the remaining side (the top) is left open for filling concrete.
Shuttering area is also computed by determining the individual area of each face, as follows:
Face 1 : Area of rectangle = L x B = 0.8 x 4 = 3.2
Face 2 : Area of rectangle = L x B = 0.6 x 4 = 2.4
Face 3 : Area of rectangle = L x B = 0.8 x 4 = 3.2
Face 4 : Area of rectangle = L x B = 0.8 x 0.6 = 0.48
Face 5 : Area of rectangle = L x B = 0.8 x 0.6 = 0.48
Total Area of Shuttering = 3.2 + 2.4 + 3.2+ 0.48 +0.48
= 9.76.Sqm
How to work out the shuttering area of a slab: Since the slab is supported on beams, it is not necessary to arrange shuttering on the four sides of the slab. As with the beam and column, the top of the slab is left open for placing concrete and for curing. So, shuttering is only arranged at the bottom of the slab.
To determine the shuttering area of a slab, the following formula is used :-
Shuttering area of Slab = Bottom area of slab = L x B | {"url":"https://quantity-takeoff.com/how-to-calculate-shuttering-area.htm","timestamp":"2024-11-05T10:32:41Z","content_type":"text/html","content_length":"25951","record_id":"<urn:uuid:b3345d01-a917-4952-9ba2-7ec176d70ea7>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00446.warc.gz"} |
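The worked examples above can be cross-checked with a short Python sketch (0.8 m, 0.6 m and 4 m are simply the dimensions assumed in the examples):

def column_shuttering(length, breadth, depth):
    # Four vertical faces: perimeter of the cross-section times the depth
    return 2 * (length + breadth) * depth

def beam_shuttering(width, depth, span):
    # Bottom + two sides + two end faces (the top is left open for concreting)
    return width * span + 2 * depth * span + 2 * width * depth

def slab_shuttering(length, breadth):
    # Bottom face only
    return length * breadth

print(column_shuttering(0.8, 0.6, 4))   # about 11.2 Sq.m
print(beam_shuttering(0.6, 0.8, 4))     # about 9.76 Sq.m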
Automation, Robotics and Computer Vision Laboratory (ARVC)
On the Stability of the Quadruple solutions of the forward kinematic problem in analytic parallel robots
A. Peidró, A. Gil, J.MŞ Marín, L. Payá, O. Reinoso
Journal of Intelligent & Robotic Systems (2017)
Ed. Springer ISSN:1573-0409 DOI:http://dx.doi.org/10.1007/s10846-016-0453-x - 86(3), pp. 381-396
Many parallel robots can change between different assembly modes (solutions of the forward kinematic problem) without crossing singularities, either by enclosing cusps or alpha-curves of the planar
sections of their singularity loci. Both the cusps and the alpha-curves are stable singularities, which do not disappear under small perturbations of the geometry of the robot. Recently, it has been
shown that some analytic parallel robots can also perform these nonsingular changes of assembly mode by encircling isolated points of their singularity loci at which the forward kinematic problem
admits solutions with multiplicity four. In this paper, we study the stability of these quadruple solutions when the design of the robot deviates from the analytic geometry, and we show that such
quadruple solutions are not stable since the isolated singular points at which they occur degenerate into closed deltoid curves. However, we also demonstrate that, although the quadruple solutions
are unstable, the behavior of the robot when moving around them is practically unaffected by the perturbations from the analytic geometry. This means that the robot preserves its ability to perform
nonsingular transitions by enclosing the quadruple solutions, even when its geometry is not exactly analytic due to small manufacturing tolerances. | {"url":"https://arvc.umh.es/respaper.php?id=423&lang=en","timestamp":"2024-11-02T02:46:02Z","content_type":"text/html","content_length":"3990","record_id":"<urn:uuid:e029d88d-ba15-4366-85e1-ab4ffb0ac4ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00135.warc.gz"} |
Optimization of packaging
All Replies
this is a variation of a mathematical problem known as the “knapsack problem”. the knapsack problem is thought to be “NP-hard”. all known solutions at the moment are some type of brute force with heuristics.
see en.wikipedia.org/.../Knapsack_problem
and en.wikipedia.org/.../Packing_problems
There are many commercial packages out there, as an alternative to building your own solution.
We have just started using Magic Logic (I have no other affiliation with Magic Logic, current or prior).
has anybody successfully managed to write an algorithm to minimize the packing used? In this case a set of items has to be put in the least number of boxes and the smallest boxes possible. So far I have found only 'brute force' options that simply try out the six possible orientations for each and every item, resulting in thousands of calculations for just a handful of items.
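For reference, a rough sketch of the classic first-fit decreasing heuristic for the simpler one-dimensional version of the problem (a real 3D packer also has to handle item rotations and per-dimension fits):

def first_fit_decreasing(item_sizes, bin_capacity):
    # Pack 1D item sizes into as few bins of the given capacity as possible (heuristic, not optimal)
    bins = []  # each bin is a list of item sizes
    for size in sorted(item_sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= bin_capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], 10))   # [[8, 2], [4, 4, 1, 1]]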
I am familiar with the name
And a REST service could be an option...
Still, with the myriad of OE business applications, I imagined somebody has solved it...
Just do what Amazon does. Use the largest box possible and put the smallest item in it so it is guaranteed to fit.
in the end I went for the service of https://www.3dbinpacking.com. It looks like the results are correct and the service returns the results quickly enough
This thread is closed | {"url":"https://community-archive.progress.com/forums/00019/60049.html","timestamp":"2024-11-11T23:47:02Z","content_type":"text/html","content_length":"11850","record_id":"<urn:uuid:c7d7e07b-c656-457b-a149-7f7e30b137ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00255.warc.gz"} |
Voronoi entropy vs. Continuous measure of symmetry of the penrose tiling: Part i. analysis of the voronoi diagrams
A continuous measure of symmetry and the Voronoi entropy of 2D patterns representing Voronoi diagrams emerging from the Penrose tiling were calculated. A given Penrose tiling gives rise to a
diversity of the Voronoi diagrams when the centers, vertices, and the centers of the edges of the Penrose rhombs are taken as the seed points (or nuclei). Voronoi diagrams keep the initial symmetry
group of the Penrose tiling. We demonstrate that the continuous symmetry measure and the Voronoi entropy of the studied sets of points, generated by the Penrose tiling, do not necessarily correlate.
Voronoi diagrams emerging from the centers of the edges of the Penrose rhombs, considered nuclei, deny the hypothesis that the continuous measure of symmetry and the Voronoi entropy are always
correlated. The Voronoi entropy of this kind of tiling built of asymmetric convex quadrangles equals zero, whereas the continuous measure of symmetry of this pattern is high. Voronoi diagrams
generate new types of Penrose tiling, which are different from the classical Penrose tessellation.
• Continuous symmetry measure
• Ordering
• Penrose tiling
• Symmetry
• Voronoi entropy
In a much earlier blog I looked at how we can work out transpositions, and how permutations fit into this. See:
What I'd now like to do is extend the permutation basis to three-dimensional geometry (specifically solids) and look at the tetrahedron (Fig. 1). Since the interface in the main blogger frame doesn't allow the adequate use of the symbols I'll be using, I will insert some of the working in the image (Fig. 1). (Note also that some algebraic homology goes into this - so interested readers may want
to go back to earlier blogs where I dealt with it, e.g.:
(Last part- showing the
Sierpinski gasket
as an oriented 2-simplex (see Fig. 2 - top) for which we can write: a1a2a3 = a2a3a1 = a3a1a2 = -a1a3a2 = -a3a2a1 = -a2a1a3)
The main proposition we're going to be proving here is that for a 3D figure like the tetrahedron, we can reduce it to an algebraic complex then show the "boundary of a boundary" is zero, by applying
the permutation principle.
Now, consider the ordered tetrahedron (vertices ordered by number) shown in Fig. 1. Call the ordering '1234'. In terms of signage (sign rules - e.g. for (+) or (-) being followed), it's important to
note that a segment (1 2) induces orientation (+1) in the associated complex, but a segment (2 1) induces (-1). This is how differing segments acquire negative signage in the complex.
The boundary of the tetrahedron, in terms of its four faces can than be written:
- (1 2 3) - (1 3 4) + (1 2 4) + (2 3 4)
And the calculation of the boundary is shown in Fig. 1 using the appropriate boundary symbols for the respective faces.
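In outline, writing 'delta' for the boundary operator and using the standard rule that delta drops each vertex in turn with alternating sign:

delta(1 2 3 4) = (2 3 4) - (1 3 4) + (1 2 4) - (1 2 3)

delta delta(1 2 3 4) = [(3 4) - (2 4) + (2 3)] - [(3 4) - (1 4) + (1 3)] + [(2 4) - (1 4) + (1 2)] - [(2 3) - (1 3) + (1 2)]

Every edge appears exactly once with a (+) sign and once with a (-) sign, so the twelve terms cancel in pairs.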
Leading to the result that the boundary of a boundary is zero, or 'delta delta = 0'.
By definition, the factor group: H_r = Z_r/ B_r
Then, in our case, B_r = B_2 (for the boundary) while:
H_2 = Z_2
Z_2 = a(1 2 3) + b(1 2 4) + c (1 3 4) + d( 2 3 4)
The careful reader who's followed the earlier blog entries should be able to show this. | {"url":"http://brane-space.blogspot.com/2010/03/tetrahedral-treatment-via-permutations.html","timestamp":"2024-11-10T17:46:21Z","content_type":"text/html","content_length":"115548","record_id":"<urn:uuid:6c963e8a-1cd5-47b0-97d7-21f19c0954e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00431.warc.gz"} |
A $100 petty cash fund has cash of $9 and receipts of $85. The journal entry to replenish ...
A $100 petty cash fund has cash of $9 and receipts of $85. The journal entry to replenish the account would include a credit to
Select one: a. Cash for $85. b. Cash Over and Short for $6. c. Cash for $91. d. Petty Cash for $85.
STEP 1
1. The petty cash fund has an established amount of $100.
2. The current cash in the petty cash fund is $9.
3. The receipts total $85.
4. The petty cash fund needs to be replenished to its original amount of $100.
5. The journal entry will reflect the replenishment of the petty cash fund.
STEP 2
First, we need to determine the total amount spent from the petty cash fund. This is the difference between the established fund amount and the current cash on hand.
$Total\, spent = Established\, fund\, amount - Current\, cash\, on\, hand$
STEP 3
Now, plug in the given values for the established fund amount and the current cash on hand to calculate the total amount spent.
$Total\, spent = \$100 - \$9$
STEP 4
Calculate the total amount spent from the petty cash fund.
$Total\, spent = \$100 - \$9 = \$91$
STEP 5
Next, we add the cash on hand to the receipts. If every disbursement were supported by a receipt, this sum would equal the $100 fund.
$Verification = Current\, cash\, on\, hand + Receipts$
STEP 6
Plug in the values for the current cash on hand and the receipts.
$Verification = \$9 + \$85$
STEP 7
Calculate the verification amount.
$Verification = \$9 + \$85 = \$94$
STEP 8
Since the verification amount ($94) is less than the $100 fund, there is a discrepancy. The missing amount is recorded as "Cash Over and Short." It equals the difference between the total amount spent and the receipts.
$Cash\, Over\, and\, Short = Total\, spent - Receipts$
STEP 9
Plug in the values for the total amount spent and the receipts to calculate the "Cash Over and Short."
$Cash\, Over\, and\, Short = \$91 - \$85$
STEP 10
Calculate the "Cash Over and Short" amount.
$Cash\, Over\, and\, Short = \$91 - \$85 = \$6$
STEP 11
Now we know that there is a $6 shortage, which is "Cash Over and Short." The receipts account for only $85 of the $91 that left the fund.
STEP 12
The credit to Cash in the replenishment entry is simply the amount needed to bring the fund back up to $100, which is the total amount spent.
$Credit\, to\, Cash = Established\, fund\, amount - Current\, cash\, on\, hand$
STEP 13
Plug in the values for the established fund amount and the current cash on hand.
$Credit\, to\, Cash = \$100 - \$9$
STEP 14
Calculate the credit to Cash.
$Credit\, to\, Cash = \$100 - \$9 = \$91$
STEP 15
The journal entry to replenish the account will include a debit to the various expense accounts for $85 (the amount of the receipts), a debit to "Cash Over and Short" for $6 (the shortage), and a credit to "Cash" for $91 (the amount needed to restore the fund to $100).
Based on the options provided, the correct answer is a credit to "Cash" for $91.
The correct answer is: c. Cash for $91.
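For readers who like to check the arithmetic programmatically, a minimal Python sketch of the same figures:

FUND = 100          # established petty cash fund
CASH_ON_HAND = 9    # cash remaining in the box
RECEIPTS = 85       # documented expenses

credit_to_cash = FUND - CASH_ON_HAND             # amount needed to refill the fund
cash_over_and_short = credit_to_cash - RECEIPTS  # positive value means a shortage (debit)

print(credit_to_cash)        # 91 -> credit Cash $91
print(cash_over_and_short)   # 6  -> debit Cash Over and Short $6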
Incenter of a Triangle in context of Distance
26 Aug 2024
The Incenter of a Triangle: A Study of Distance
In geometry, the incenter of a triangle is an important point that lies within the triangle and has several unique properties. One of these properties is its relationship with distance. In this
article, we will explore the concept of the incenter and how it relates to distance.
What is the Incenter?
The incenter of a triangle is the point where the three angle bisectors of the triangle intersect. The angle bisector of an angle is the line that divides the angle into two equal parts. For any
given triangle, there are three angle bisectors: one for each angle.
Properties of the Incenter
The incenter has several important properties:
1. Concurrent: The three angle bisectors meet at a single point, which is the incenter.
2. Center of the incircle: The incenter is the center of the inscribed circle (incircle), which touches each of the three sides at exactly one point.
3. Equidistant from the sides: The perpendicular distance from the incenter to each side of the triangle is the same. This common distance is the inradius.
Distance and the Incenter
One of the most important properties of the incenter is its relationship with distance. Specifically, the incenter is equidistant from each side of the triangle. This means that if you drop a perpendicular from the incenter to any side, the length of this segment will be the same for all three sides: it is the radius of the inscribed circle.
The formula for this distance (the inradius) follows from the area of the triangle:
Let ABC be a triangle with sides a, b, and c, let s = (a + b + c)/2 be its semi-perimeter, and let I be the incenter. Then, the distance from I to any side is given by:
r = Area / s = √(s(s−a)(s−b)(s−c)) / s
where r is the perpendicular distance from the incenter to each side (the inradius), and the square root is the area of the triangle by Heron's formula.
Suppose we have a triangle with sides of length 3, 4, and 5. Using the formula above, we can calculate the distance from the incenter to each side:
s = (3 + 4 + 5)/2 = 6, Area = √(6 × 3 × 2 × 1) = 6, so r = 6/6 = 1
This means that the incenter lies exactly 1 unit from each of the three sides.
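A small Python sketch confirms these values for the 3-4-5 triangle (the coordinates below are just one convenient placement of that triangle):

import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)   # a 3-4-5 right triangle

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a = dist(B, C)   # side opposite vertex A
b = dist(C, A)   # side opposite vertex B
c = dist(A, B)   # side opposite vertex C

s = (a + b + c) / 2                                # semi-perimeter
area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula

# Incenter as the side-length-weighted average of the vertices
ix = (a * A[0] + b * B[0] + c * C[0]) / (a + b + c)
iy = (a * A[1] + b * B[1] + c * C[1]) / (a + b + c)

print((ix, iy))    # (1.0, 1.0)
print(area / s)    # inradius r = 1.0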
In this article, we have explored the concept of the incenter of a triangle and its relationship with distance. We have seen that the incenter is equidistant from each side of the triangle, and derived a formula for calculating this distance from the triangle's area and semi-perimeter. Understanding the properties of the incenter can be useful in solving problems involving triangles and distances.
I hope you find this article informative and helpful! Let me know if you have any questions or need further clarification on any of the concepts.
The main reason why almost all econometric models are wrong
Download the WEA commentaries issue ›
By Lars Syll
Since econometrics does not content itself with only making optimal predictions, but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions — most
important of these are additivity and linearity. Important, simply because if they are not true, your model is invalid and descriptively incorrect. And when the model is wrong — well, then it is wrong.
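To see what is at stake, consider a small simulation sketch (purely illustrative): the true data-generating process below is multiplicative rather than additive, yet an additive linear model is fitted to it, and the estimated 'effect' of the first variable then depends entirely on which range of the second variable happens to be in the sample.

import numpy as np

rng = np.random.default_rng(0)

def fitted_slope_of_x1(x2_low, x2_high, n=10_000):
    x1 = rng.uniform(0, 1, n)
    x2 = rng.uniform(x2_low, x2_high, n)
    y = x1 * x2 + rng.normal(0, 0.01, n)        # true process: non-additive, non-linear in (x1, x2)
    X = np.column_stack([np.ones(n), x1, x2])   # misspecified additive linear model
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]                              # estimated 'effect' of x1

print(fitted_slope_of_x1(0, 1))   # roughly 0.5
print(fitted_slope_of_x1(2, 4))   # roughly 3.0 -- the 'parameter' is anything but stable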
Limiting model assumptions in economic science always have to be closely examined since if we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are
stable in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we have to be able to show that they do not only hold under ceteris paribus conditions and a fortiori only
are of limited value to our understanding, explanations or predictions of real economic systems.
Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. We should look out for causal
relations, but econometrics can never be more than a starting point in that endeavour since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or
causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital
importance and although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model. Those which were can hence never be guaranteed
to be more than potential causes, and not real causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real world economies are ruled by stable
causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed parameter models and that parameter-values estimated in
specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one, however, has to convincingly establish that the targeted acting
causes are stable and invariant so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding
fixed parameters is a hope for which there really is no other ground than hope itself.
Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established, are laws and relations about entities in
models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world systems they only do it in ever-changing and unstable combinations where the whole
is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare,
or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – as most of the contemporary endeavours of mainstream economic theoretical modelling — rather useless.
Even in statistics, the researcher has many degrees of freedom. In statistics — as in economics and econometrics — the results we get depend on the assumptions we make in our models. Changing those
assumptions — playing a more important role than the data we feed into our models — leads to far-reaching changes in our conclusions. Using statistics is no guarantee we get at any 'objective truth.'
On the limits of ‘statistical causality’
Causality in social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.
Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive (since all real explanation takes place relative to a set of alternatives)
explanations. So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in
itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal, features
and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of
explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best
explanation — a better description and account of what constitute actual scientific reasoning and inferences.
In the social sciences … regression is used to discover relationships or to disentangle cause and effect. However, investigators have only vague ideas as to the relevant variables and their
causal order … I see no cases in which regression equations, let alone the more complex methods, have succeeded as engines for discovering causal relationships.
David Freedman (1997:60)
Since statisticians and econometricians have not been able to convincingly warrant their assumptions of homogeneity, stability, invariance, independence, additivity as being ontologically isomorphic
to real-world economic systems, there are still strong reasons to be critical of the econometric project. There are deep epistemological and ontological problems of applying statistical methods to a
basically unpredictable, uncertain, complex, unstable, interdependent, and ever-changing social reality. Methods designed to analyze repeated sampling in controlled experiments under fixed conditions
are not easily extended to an organic and non-atomistic world where time and history play decisive roles.
If contributions made by statisticians to the understanding of causation are to be taken over with advantage in any specific field of inquiry, then what is crucial is that the right relationship
should exist between statistical and subject-matter concerns … The idea of causation as consequential manipulation is apt to research that can be undertaken primarily through experimental methods
… However, the extension of the manipulative approach into sociology would not appear promising, other than in rather special circumstances … The more fundamental difficulty is that under the —
highly anthropocentric — principle of ‘no causation without manipulation,’ the recognition that can be given to the action of individuals as having causal force is in fact peculiarly limited.
John H Goldthorpe (2000:159)
Why statistics and econometrics are not very helpful for understanding economies
As social researchers, we should never equate science with mathematics and statistical calculation. All science entail human judgement, and using mathematical and statistical models do not relieve us
of that necessity. They are no substitutes for thinking and doing real science.
Most work in econometrics is made on the assumption that the researcher has a theoretical model that is 'true.' But — to think that we are able to construct a model where all relevant variables
are included and correctly specify the functional relationships that exist between them, is not only a belief without support, it is a belief impossible to support.
The theories we work with when building our econometric regression models are insufficient. No matter what we study, there are always some variables missing, and we do not know the correct way to
functionally specify the relationships between the variables.
Every econometric model constructed is miss-specified. There is always an endless list of possible variables to include, and endless possible ways to specify the relationships between them. So every
applied econometrician comes up with his own specification and ‘parameter’ estimates. The econometric Holy Grail of consistent and stable parameter-values is nothing but a dream.
The theoretical conditions that have to be fulfilled for econometrics to really work are nowhere even closely met in reality. Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science and economics. Although econometrics has become the most used quantitative method in economics today, it is still a fact that the inferences made are as a rule invalid.
Econometrics is basically a deductive method. Given the assumptions, it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right.
Conclusions can only be as certain as their premises — and that also applies to econometrics.
On randomness and probability
Modern mainstream economics relies to a large degree on the notion of probability. To at all be amenable to applied economic analysis, economic observations have to be conceived as random events that
are analyzable within a probabilistic framework. But is it really necessary to model the economic system as a system where randomness can only be analyzed and understood when based on an a priori
notion of probability?
When attempting to convince us of the necessity of founding empirical economic analysis on probability models, neoclassical economics actually forces us to (implicitly) interpret events as random
variables generated by an underlying probability density function.
This is at odds with reality. Randomness obviously is a fact of the real world. Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and a fortiori
is only a fact of a probability generating (nomological) machine or a well-constructed experimental arrangement or ‘chance set-up.’
Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’
To be able at all to talk about probabilities, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events – in statistics one refers to any
process where you observe or measure as an experiment (rolling a die) and the results obtained as the outcomes or events (number of points rolled with the die, being e. g. 3 or 5) of the experiment –
there strictly seen is no event at all.
Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then to be of any empirical scientific value it has to be shown to coincide
with (or at least converge to) real data generating processes or structures – something seldom or never done.
And this is the basic problem with economic data. If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the
analogous nomological machines for prices, gross domestic product, income distribution etc? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if
you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density
We simply have to admit that the socio-economic states of nature that we talk of in most social sciences – and certainly in economics – are not amenable to analyze as probabilities, simply because in
the real world open systems there are no probabilities to be had!
The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And, so, it cannot be maintained that it even
should be mandatory to treat observations and data – whether cross-section, time series or panel data – as events generated by some probability model. The important activities of most economic agents
do not usually include throwing dice or spinning roulette-wheels. Data generating processes – at least outside of nomological machines like dice and roulette-wheels – are not self-evidently best
modelled with probability measures.
When economists and econometricians – often uncritically and without arguments — simply assume that one can apply probability distributions from statistical theory on their own area of research, they
are really skating on thin ice. If you cannot show that data satisfies all the conditions of the probabilistic nomological machine, then the statistical inferences made in mainstream economics lack
sound foundations.
Statistical — and econometric — patterns should never be seen as anything other than possible clues to follow. Behind observable data, there are real structures and mechanisms operating, things that
are — if we really want to understand, explain and (possibly) predict things in the real world — more important to get hold of than to simply correlate and regress observable variables.
Statistics cannot establish the truth value of a fact. Never has. Never will.
Sometimes we do not know because we cannot know
To understand real world ‘non-routine’ decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty – where real
historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.
Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – and a fortiori in any relevant sense timeless – is
not a sensible way for dealing with the kind of genuine uncertainty that permeates open systems such as economies.
When you assume the economic processes to be ergodic, ensemble and time averages are identical. Let me give an example: Assume we have a market with an asset priced at 100 €. Then imagine the price
first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be 100 € — because we here envision two parallel universes (markets) where the asset-price falls in one
universe (market) with 50% to 50 €, and in another universe (market) it goes up with 50% to 150 €, giving an average of 100 € ((150+50)/2). The time average for this asset would be 75 € – because we
here envision one universe (market) where the asset-price first rises by 50% to 150 €, and then falls by 50% to 75 € (0.5*150).
From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.
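A small numerical sketch of the same example (purely illustrative) makes the difference tangible:

import numpy as np

rng = np.random.default_rng(1)

# The single trajectory from the example: up 50%, then down 50%
print(100 * 1.5 * 0.5)            # 75.0

# Many parallel 'universes', each an independent sequence of +/-50% moves
n_paths, n_periods = 100_000, 10
returns = rng.choice([1.5, 0.5], size=(n_paths, n_periods))
final = 100 * returns.prod(axis=1)

print(final.mean())               # the ensemble average stays close to 100
print(np.median(final))           # the typical single path has fallen to roughly 24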
Assuming ergodicity there would have been no difference at all. What is important with the fact that real social and economic processes are nonergodic is the fact that uncertainty – not risk – rules
the roost. That was something both Keynes and Knight basically said in their 1921 books. Thinking about uncertainty in terms of ‘rational expectations’ and ‘ensemble averages’ has had seriously bad
repercussions on the financial system.
Knight’s uncertainty concept has an epistemological founding and Keynes’ definitely an ontological founding. Of course, this also has repercussions on the issue of ergodicity in a strict
methodological and mathematical-statistical sense.
The most interesting and far-reaching difference between the epistemological and the ontological view is that if one subscribes to the former — Knightian – view, you open up for the mistaken belief
that with better information and greater computer-power we somehow should always be able to calculate probabilities and describe the world as an ergodic universe. As Keynes convincingly argued, that
is ontologically just not possible.
To Keynes, the source of uncertainty was in the nature of the real — nonergodic — world. It had to do, not only — or primarily — with the epistemological fact of us not knowing the things that today
are unknown, but rather with the much deeper and far-reaching ontological fact that there often is no firm basis on which we can form quantifiable probabilities and expectations at all.
Sometimes we do not know because we cannot know.
Keynes’ critique of econometrics — still valid after all these years
To apply statistical and mathematical methods to the real-world economy, the econometrician, as we have seen, has to make some quite strong assumptions. In a review of Tinbergen’s econometric work —
published in The Economic Journal in 1939 — John Maynard Keynes gave a comprehensive critique of Tinbergen’s work, focusing on the limiting and unreal character of the assumptions that econometric
analyzes build on:
(1) Completeness: Where Tinbergen attempts to specify and quantify which different factors influence the business cycle, Keynes maintains there has to be a complete list of all the relevant factors
to avoid misspecification and spurious causal claims. Usually, this problem is ‘solved’ by econometricians assuming that they somehow have a ‘correct’ model specification. Keynes (1940:155) is, to
put it mildly, unconvinced:
It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical
translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on
his a priori, that would make a difference to the outcome.
(2) Homogeneity: To make inductive inferences possible — and being able to apply econometrics — the system we try to analyze has to have a large degree of ‘homogeneity.’ According to Keynes most
social and economic systems — especially from the perspective of real historical time — lack that ‘homogeneity.’ It is not always possible to take repeated samples from a fixed population when we
are analyzing real-world economies. In many cases, there simply are no reasons at all to assume the samples to be homogenous.
(3) Stability: Tinbergen assumes there is a stable spatio-temporal relationship between the variables his econometric models analyze. But Keynes argued that it was not really possible to make
inductive generalisations based on correlations in one sample. As later studies of ‘regime shifts’ and ‘structural breaks’ have shown us, it is exceedingly difficult to find and establish the
existence of stable econometric parameters for anything but rather short time series.
(4) Measurability: Tinbergen’s model assumes that all relevant factors are measurable. Keynes questions if it is possible to adequately quantify and measure things like expectations and political and
psychological factors. And more than anything, he questioned — both on epistemological and ontological grounds — that it was always and everywhere possible to measure real-world uncertainty with the
help of probabilistic risk measures. Thinking otherwise can, as Keynes wrote, ‘only lead to error and delusion.’
(5) Independence: Tinbergen assumes that the variables he treats are independent (still a standard assumption in econometrics). Keynes argues that in such a complex, organic and evolutionary system
as an economy, independence is a deeply unrealistic assumption to make. Building econometric models from that kind of simplistic and unrealistic assumptions risk producing nothing but spurious
correlations and causalities. Real-world economies are organic systems for which the statistical methods used in econometrics are ill-suited, or even, strictly seen, inapplicable. Mechanical
probabilistic models have little leverage when applied to non-atomic evolving organic systems — such as economies.
Building econometric models can’t be a goal in itself. Good econometric models are means that make it possible for us to infer things about the real-world systems they ‘represent.’ If we cannot show
that the mechanisms or causes that we isolate and handle in our econometric models are ‘exportable’ to the real-world, they are of limited value to our understanding, explanations or predictions of
real-world economic systems.
(6) Linearity: To make his models tractable, Tinbergen assumes the relationships between the variables he study to be linear. This is still standard procedure today, but as Keynes (1939:564) writes:
It is a very drastic and usually improbable postulate to suppose that all economic forces are of this character, producing independent changes in the phenomenon under investigation which are directly
proportional to the changes in themselves; indeed, it is ridiculous.
To Keynes, it was a ‘fallacy of reification’ to assume that all quantities are additive (an assumption closely linked to independence and linearity).
Econometric modelling should never be a substitute for thinking. From that perspective, it is really depressing to see how much of Keynes' critique of the pioneering econometrics is still relevant today.
The limits of probabilistic reasoning
Probabilistic reasoning in science — especially Bayesianism — reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but, even granted this questionable
reductionism, it is not self-evident that rational agents really have to be probabilistically consistent. There is no strong warrant for believing so. Rather, there is strong evidence for
us encountering huge problems if we let probabilistic reasoning become the dominant method for doing research in social sciences on problems that involve risk and uncertainty.
In many of the situations that are relevant to economics, one could argue that there is simply not enough of adequate and relevant information to ground beliefs of a probabilistic kind and that in
those situations it is not possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.
Say you have come to learn (based on own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no own
experience and no data) you have no information on unemployment and a fortiori nothing to help you construct any probability estimate on. A Bayesian would, however, argue that you would have to
assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1 if you are rational. That is, in this case – and based on symmetry – a rational individual would
have to assign probability 10% to become unemployed and 90% to become employed.
That feels intuitively wrong though, and I guess most people would agree. Bayesianism cannot distinguish between symmetry-based probabilities from information and symmetry-based probabilities from an
absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we ‘simply do not know’ or that we
feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to
ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
We live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we ‘simply
do not know.’ According to Bayesian economists, expectations tend to be distributed as predicted by theory.’ I rather think, as did Keynes, that we base our expectations on the confidence or ‘weight’
we put on different events and alternatives. Expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that have preciously little to do with the kind of stochastic
probabilistic calculations made by the rational agents modelled by probabilistically reasoning Bayesian economists.
We always have to remember that economics and statistics are two quite different things, and as long as economists cannot identify their statistical theories with real-world phenomena there is no
real warrant for taking their statistical inferences seriously.
If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous ‘nomological machines’ for prices, gross
domestic product, income distribution etc? Only by a leap of faith. And that does not suffice in science. You have to come up with some really good arguments if you want to persuade people to believe
in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions! Not doing that, you simply
conflate statistical and economic inferences.
The present ‘machine learning’ and ‘big data’ hype shows that many social scientists — falsely — think that they can get away with analysing real-world phenomena without any (commitment to) theory.
But — data never speaks for itself. Without a prior statistical set-up, there actually are no data at all to process. And — using a machine learning algorithm will only produce what you are looking
for. Theory matters.
Some economists using statistical methods think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like ‘faithfulness’
or ‘stability’ is to assume what has to be proven. Deductive-axiomatic methods used in statistics do no produce evidence for causal inferences. The real causality we are searching for is the one
existing in the real world around us. If there is no warranted connection between axiomatically derived statistical theorems and the real-world, well, then we have not really obtained the causation
we are looking for.
Freedman, David (1997). From Association to Causation via Regression. Advances in Applied Mathematics.
Goldthorpe, John H (2000). On sociology: numbers, narratives, and the integration of research and theory. Oxford: Oxford University Press
Keynes, J M (1939). Professor Tinbergen’s Method. Economic Journal.
Keynes, J M (1940). Comment. Economic Journal.
From: pp.5-10 of WEA Commentaries 8(3), June 2018
13 responses
• This article seems to me to have a very uncertain focus. Under the heading "econometrics" what exactly is Lars Syll criticising? Any exercise in econometrics is either testing a particular
hypothesis or theory or is retaining such a hypothesis and is estimating parameter values that make it best correspond to data. Any hypothesis is likely to be at best an approximation to reality
applying only in particular circumstances. Our hypotheses will often be simply wrong. That is obvious and universally acknowledged. The article wobbles between criticising specific theory
(neo-classical approaches, rational expectations), specific simplifying assumptions that are often applied to a range of theories (e.g. linearity), operating without theory at all, i.e. exploring
data to see if patterns emerge that suggest causal theories and he seems at times to be questioning Bayes theorem. If Mr Syll knows exactly what the problem is with econometrics he has not made
it clear. He prefers a blunderbuss: shoot at everything; some target will get hit. Mr Syll is also very exercised by the realisation that social systems are evolving and unlikely to be stable
over long periods – a fact of life of which all sentient economists are aware. No sensible econometrician imagines he or she is discovering immutable truths. They are taking existing theories
and asking could the theory, allowing for a penumbra of uncertainty caused by omitted variables, really have generated the data we observe? Is it an adequate model? This question can only be
posed and answered probabilistically. If you don’t like that approach, what is the alternative? Introspection, intuition, imagination, blind prejudice can all generate theories about society.
Ultimately, if we are to make progress such theories have to confront data. We can never conclusively prove a theory. Despite Popper, we can not conclusively disprove a theory either, though we
can dismiss it with a high degree of probability. One important means of dismissal is the intelligent application of econometrics. The fact that the tool can be misused and often has been misused
does not change the fact that it is the most important tool we have for advancing knowledge. Econometrics does not have to assume linearity, exogeneity of explanatory variables, normal
distribution of errors or any of the other assumptions of convenience that may be adopted. Sometimes those assumptions are pernicious, sometimes harmless in context. More to the point all can be
tested in any particular case. It is hard to avoid the suspicion that Mr Syll simply does not like confronting numerical data and would rather live in a world of undisciplined speculation.
• What Lars Syll points out, we all know. There is nothing new in it. The real issue is to find out the alternative methodology. Perhaps, the contemporary mathematics and statistics are not rich
enough to analyze the real phenomena (with limited data) or if there are any, they are unknown to the economists. We are aware of the problems of covariance among the explanatory variables, but
we do not have a dependable method to deal with the problem. We are aware of misspecification problems, but we do not have enough information to specify the model correctly. We acknowledge
nonlinearity and perhaps multiple solutions (multiple optima and nonconvexity) but we do not have a foolproof method to handle it. We understand the issues of risk and uncertainty, but we do not
have methods to deal with them appropriately. We understand the problem of non-quantifiability, but we do not have appropriate methods to deal with it.
Then, the answer is not to criticise and destroy. The answer is to develop new methods, learn from other disciplines, invent, and do a constructive work. It is much more difficult than simply
criticising. I know well what it means, and others too know it.
□ Those of us in the economics community who are impolite enough to dare to question the preferred methods and models applied in mainstream economics and econometrics are as a rule met with
disapproval. But although people seem to get very agitated and upset by the critique, defenders of “received theory” always say that the critique is “nothing new”, that they have always been
“well aware” of the problems, and so on, and so on.
So, for the benefit of all mindless practitioners of mainstream econometric modelling — and who don’t want to be disturbed in their doings, eminent mathematical statistician David Freedman
put together a very practical list of vacuous responses to criticism that can be freely used to save your peace of mind:
“We know all that. Nothing is perfect … The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. The biases
will cancel. We can model the biases. We're only doing what everybody else does. Now we use more sophisticated techniques. If we don't do it, someone else will. What would you do? The
decision-maker has to be better off with us than without us … The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make
progress. You have to give the models the benefit of the doubt. Where’s the harm?”
☆ Lars is absolutely right in pointing out that those who dare question the status quo are met with disapproval, even intimidation and contempt. Recently I presented a seminar on my new
book “Econometrics as a Con Art”, in which I expressed the opinion that it is tragic to value ARCH models to be as good for humanity as penicillin. Later I was told that a hard core
econometrician, who knows nothing about economics, described my presentation as “disastrous”. Well done, Lars–please keep it up.
○ I loved your comment ! Thanks. It nicely summarizes my assessment of contemporary economics:
‘Alche-nomic wishcraft’. Economists all agree that 2 + 2 = 4 but then they (unknowingly) embarrass themselves (and their discipline) by claiming that (because of increases in
productivity, etc.), 2,000,000 + 2,000,000 = 5,000,000 !
It is most unfortunate that most economists seem to have forgotten the basic concept of the 'circular flow'. It is equally unfortunate that Alfred Marshall used the word 'scissors' to
describe equilibrium between ‘supply’ and ‘demand’. To this graduate engineer with an avocational addiction to economics, that ubiquitous ‘cross’ seems not unlike a Pareto-Edgeworth
‘contract curve’ (a pretty picture with absolutely no predictive usefulness).
Richard E. Planck, M.S.M.E.
President, The Association for the Advancement of non-Newtonian Economics (AAnNE)
• Firstly, the use of formal and logical modeling of our macroeconomics system has taken us far, unlike the negative claim given above. It has taken us so far that it has strangely managed to show
by the use of quite a simple numerical method, that an increment in the taxation of personal incomes has a positive effect on national prosperity, when taken at large, but that when the same sum
is collected from the taxation of land values, the benefit is roughly 3 times as big. These facts were first derived in my recent book, which I suggest be taken a bit more seriously. I can send
you an e-copy, so you can check the arithmetic: chesterdh@hotmail.com.
Secondly any model cannot simply be wrong since it does represent something or some concept that by its nature deals with our subject, and it does have some (limited) resemblance to it. Unless
the modeler is completely nuts, he/she surely knows about what is being represented and because we all need to model in our mind’s eye what we are considering, it surely is right for us.
Consequently if “all models are wrong”, then the concept that all models are wrong is wrong too! –a zero-zero game.
Thirdly, probability has no place here. We do not model a situation with the belief that it has some probability of being accurate. Its accuracy may be far from the truth but as far as it goes it
is 100% sure and it does manage to at least show part of the concept. That is the nature of taking this imaginative methodology.
• I support Lars fully. The paper has actually an eye opener in it before the econometricians. Successful economic thoughts have least bothered about the metrics so far. Those who have influenced
the transformations in civilizations have seen the past with full devotion, taken insights from there but not got driven by then. Historicity gives an easy pathway towards estimating forward but
limits it to the lame contexts of the past and attempts to impose on the future. It's now time that those who claim themselves to be economists should learn from the impacts of technology on
human progress to understand how far they are disjointed from the terrains of the respective plateau of historicity.
Professor R P Banerjee
• This article should be read by everyone who does econometrics.
• I found Mr. Syll’s little reminder refreshing. It also reminds me of plutonomist’s query, asking what Buddhism has to do with economics. Schumacher replied, “Economics without compassion is like
sex without love.”
I think he could have gone much further. Plutonometrics without compassionate wisdom and vast knowledge of the situation is rational justification of rape, pillage, plunder, normative
psychopathology, systemic cultural corruption and the anti-ethical Piracy Paradigm that perpetuates it all.
The piece also reminds me of Warren Buffet’s admission: “Yes, it is a class war, and my class is winning.”
Clearly, as Syll and others see and understand, any model that fails to closely approximate the actuality of human culture and its activities, despite being officially accepted as a technically
valid and useful tool, serves as diversionary, subversive camouflage, obfuscating the nature and purpose of The Plutonomy Game. If you doubt that, try playing Monopoly (the game), by the Fed &
IRS rules, requiring tax payment every 4 turns.
I am confident that model predicts the outcome of every game, matching the current financial state of the world with exact accuracy. The only way to make the model more accurate would be to
include religious factions, governments, armaments, armies, terrorists, police, multi-national corporations and every kind of psychopathic player available.
For deeper, more extensive consideration of the problem, a solution, holonomic ecometrics and monetary/credit systems, see my as yet unfinished paper at the Awareness & Value page
of my Greenbook blogsite. It includes new equations and formulas enabling integration of quantitative and qualitative data. It needs corrections and expansion, but you may find it helpful.
Thanks Lars, friends & allies ~ M
• I agree with several of the assertions of Lars Syll, but it is true that his arguments are a bit of a mixture. Just as a way to improve the comprehension of the issues at stake, I propose to
distinguish problems with econometrics in four different questions, following Syll's criticism:
a) The assumption of independence, linearity and additivity of explanatory variables.
b) Problems arising from the omission of variables, for the period(s) under analysis
c) The assumption that causal factors portrayed through the model will be stable along time.
d) The assumption that causes portrayed through the model will remain the same in different contexts (transferability of outcomes from the research context to other contexts).
Analysis based on econometrics (seems to) assume(s) the following, regarding each point:
a) Impacts are linear or tractable as linear (e.g. through logs), and the explanatory variables have not multiplying effects and can be assumed as independent.
b) Omission of variables may be treated through the hypothesis of random residuum, normally distributed (due to the central limit theorem)
c) The outcoming model may be used for other periods, assuming that the causation does not vary
d) The outcoming model may be applied to other context, following an analogous assumption
Assumptions (a) and (b) are strictly econometric, and are at its core. Assumptions (c) and (d) instead are more related to the use of econometric outcomes, both by theoreticians and
practitioners. It is the belief that “universal scientific laws” are worth searching that leads to assumptions (c) and (d), as they allow “universal” conclusions. It is not Econometrics, as the
Syll rightly points out.
Let’s turn now to (a) and (b).
• As to independence of variables, this is just a hypothesis. Econometrics per se is not able to solve this question, besides the usual calculus of correlations, as causality arguments are
embedded in theory, and it is theory that have to decide whether there is or not relationship among two variables.
• Linearity and additivity are subtler issues. Theoretically we must allow that the influence of one variable over another may change in its intensity; e.g., the rate of increase of
consumption of a given good when income increases is expected to vary with income (actually, a good may be normal for some bracket of income and next turn to be inferior). In this point,
linearity may be too restrictive. But this is a rather technical issue, that perhaps more sophisticated econometrics will solve in the future: indeed, some non-linear specifications are quite
usual nowadays. Non-linear forms are preferred when a constant elasticity is the desired outcome.
• Regarding the omission of variables (whatever they are), it is wrong to treat it through the notion of a random residuum. As Georgescu-Roegen pointed out long ago, the omitted variables are by no
means “random”, therefore the central limit theorem does not apply. On the other hand, the very notion of “random variable” is quite odd:
o The main idea is that we have a random process when for the same crucial inputs two or more outputs can occur. It is a deep (if solvable) epistemological problem to decide whether a process
features randomness. Rolling a die or throwing a coin – two noteworthy examples of randomness – are not random at all; they are mechanical processes, just too complex to be predictable in analytical terms.
o The usual properties of a “random process” – especially, achieving similar probabilities for each possible outcome in a large number of trials – are inconsistent, because they somewhat assume
that each trial is dependent on the others, when it is openly assumed that they are independent. It seems that there has been a confusion between the random tables or generators used for
sampling and the very idea of a random variable. The "random" function of Excel, actually, is not a random process. And sampling actually is no more than a very useful heuristic, it is not
“science” (see below).
There remain (c) and (d); as told before, these are economic questions, and therefore the answer to them must arise from Economics, not from Econometrics. The clue question is the following: Is
Economics able or entitled to find empirical relationships that are transferable through time and context? This question is strongly related to the one asking if Economics can be thought of as a
“Science”, in the same manner as Physics or Chemistry are deemed to be, as their outcomes are indeed transferable. The lab in the future or in other place is able to replicate the same processes
and outcomes.
Syll clearly asserts that this kind of universal statements are precluded for Economics. Now, what allows Economics to be called or not “Science” is mainly a question of definition; and nowadays
we do not have an agreed definition for it. As a matter of fact, we agree with Syll's idea that Economics' universe cannot be treated as Physics', for it is too complex and it undergoes evolutionary
changes. Surely, if Economics is deemed to be a “science”, it will not be in the same sense as Physics, essentially because it deals with much more complex and evolving universe of phenomena.
But if this question – to be or not to be a Science – is rather definitional, it is not that important. We all agree that economic analysis is able to deliver useful assertions for the problems
it has decided to face (at least for the problems that are really economic, i.e., leaving aside the pretension of being a Praxeology). For example, when Dani Rodrik argues that the Washington
Consensus meant wrong policies for Latin America in the Nineties, this assertion is based both on empirical and theoretical grounds, and can be rationalized as such.
Of course, it is a contentious statement, and some people will assert therefore that it is not “scientific”, as it is not possible to “demonstrate” it is true. We may even agree that it is not
“scientific”, if necessary; but this does not make it useless. On the same token, the assertion that car A performs better than car B is not scientific, but anyway the scientist when choosing a
car surely pays attention to the opinion of experts and to test drives.
The trouble arises therefore when “Science” is called into the stage. Perhaps, it would be better to keep it in the backstage. We suspect that Economics cannot go much beyond this point.
Econometrics could then be accepted as a tool, a heuristics, and no more than that, especially if we bear in mind its limitations, as those pointed both here and in the Syll’s text. No more, but
not less. As Syll states, the key of the issue of Econometrics’ use and usefulness is to be found in Economics.
Sorry for the length of this comment.
• Lars’ critique of Econometrics, as conventionally practiced, is correct. This is because Economics, correctly understood, is a science of complexity, with chaotic features, evolving in both time
and space in ways that are unpredictable according to conventional statistics and simplistic linear models. This is the “uncertainty” that Keynes was getting at.
This just says that economists have been applying the wrong mathematics and have had the wrong expectations. Nonlinear models can tell us a great deal if developed at an appropriate level of
abstraction and scale in time and space, but they generally do so via numerical simulations of a variety of scenarios. Consider the famous limits-to-growth models from the Club of Rome work in
the 1970s. These were dismissed by clueless economists, like Nordhaus, but did in fact forewarn us of the dangers of ecological overshoot and collapse that may be starting now (business-as-usual
scenario) or within a few generations (more optimistic scenarios). But it is wrong to expect these models to predict a certain probability of collapse at a certain date.
Not that you couldn’t come up with such probabilities using powerful computers to run massive simulations of massive numbers of scenarios. But each scenario is based on a certain choice of
parameters, so assumptions would have to be made on the probability distributions of those parameters. This is guess work at best, so one would have to try a variety of guesses as to the form and
parameters of these probability distributions, and then how to weight the different guesses? The whole enterprise begins to get very doubtful.
Yet much can be learned about the risks even from a few well chosen scenarios, like the dozen or so computed by the limits-to-growth studies. In this case, these studies suggested that some form
of overshoot and collapse would occur unless the most optimistic values of the original parameters were chosen, such as rapid global prioritization of renewable energy over fossil fuels. And the
business-as-usual scenario was like a worst-case example, with a strong risk of compounding failures as different macro variables interacted with each other, leading to a more rapid collapse than
seems probable at the moment. Ugo Bardi calls this the "Seneca Cliff". We could luck out and muddle through, but what if wars and disruption spread, resulting in debilitating pandemics and loss
of production and trade – system breakdown? Only time will tell, but the risk, however difficult to quantify by conventional probabilities, is very real.
• “As social researchers, we should never equate science with mathematics and statistical calculation. All science entail human judgement, and using mathematical and statistical models do not
relieve us of that necessity. They are no substitutes for thinking and doing real science.”
Since when is economics a real science?
Social sciences are for idiots. For people who lack the intelligence to do real sciences, but still like to pretend they are.
Everything they produced till now is wrong and fails to explain or predict anything. Social sciences are doing what the real sciences have long since let go, namely to only do deductions. Staying
in your mind and come up with wild theories and ideas. Wild concoctions, like the ones made by ancient greek philosophers and physicists.
• Any model is by definition a simplification of reality, and so always is “wrong” in that sense. Models are necessary and useful in economics, where we have to strip away the less important
factors to make an analysis manageable. When someone gives an argument in favor of a policy or provides an explanation for an economic occurrence, or a prediction, their reasoning is essentially
a verbalization of a model. So there’s no way around models.
Econometric models provide a bridge between an economic model or concept of interest, e.g. an elasticity, and the data, and provide a story or motivation for numerical estimates that can add some
data-based calibration to the economic model. None of it is true, but then again no one knows what is true, nor could they convince everyone else what is really true about the entirety of any
economic question worth studying. The economic system is infinitely complex. It differs everywhere and at every point in time. Modelling helps researchers to identify sources of differing
opinions and interesting issues for further study.
It’s not productive to argue about whether or not it is a science. Maybe the term “social science” creates more trouble than it’s worth and shouldn’t be used. This is just a semantic point and is
in no way a criticism of the social sciences themselves. For example, most would agree that it is worthwhile to pay some people to devote their time to collecting and analyzing economic
information to assist governments, businesses, and interested citizens. Econometricians are a small part of this.
Application of formal statistical inference in econometrics is not convincing to anyone who is not fully indoctrinated in the discipline, and even to some who are. This is because the model is
not true, as mentioned above, and because the stochastic assumptions are laughably fanciful, except when the data set comes from a random sample survey or a lab experiment. I sense that most
inference methods are not really taken seriously by anyone but econometric theorists. The issue is seldom raised within the profession because it is thought to be disrespectful to certain
colleagues. I predict hypothesis testing and rote use of t-stats will fall by the wayside over time. Some statistical theory will remain in the discipline, but only to motivate the use of certain
estimation methods in order to inform the estimator choice process for applied researchers.
Respond to this article | {"url":"http://www.worldeconomicsassociation.org/newsletterarticles/econometric-models-wrong/","timestamp":"2024-11-09T13:49:11Z","content_type":"text/html","content_length":"98484","record_id":"<urn:uuid:9988e31b-58dc-4b40-bf82-91671930b7f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00221.warc.gz"} |
Category:ALGOL W - Rosetta Code
Category:ALGOL W
ALGOL W is a programming language that may be used to instruct a computer to perform a task.
Listed below are all of the tasks on Rosetta Code which have been solved using ALGOL W.
Algol W was designed as a successor to Algol 60, closely based on A Contribution to the Development of ALGOL by Niklaus Wirth and C. A. R. Hoare. It includes dynamically allocated records, string
handling, complex numbers and a standard I/O system. A description of the Algol W Language as at 1972 can be found at Algol W Reference Manual 1972 on bitsavers.org page. Dabcanboulet's Algol W @
Everything2 page contains an excellent history and tutorial.
To do
Tasks not implemented in ALGOL W
See also
This page is a stub. It needs more information! You can help Rosetta Code by filling it in!
This category has the following 3 subcategories, out of 3 total.
Pages in category "ALGOL W"
The following 136 pages are in this category, out of 337 total.
) (next page) | {"url":"https://rosettacode.org/wiki/Category:ALGOL_W?pagefrom=N%27th","timestamp":"2024-11-06T20:43:30Z","content_type":"text/html","content_length":"67618","record_id":"<urn:uuid:9c2dc5ce-4c69-451d-a45c-63be0b762ac4>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00039.warc.gz"} |
Mathema Foundation
There is so much to see and interact with at the Mathema Gallery. Below is a selection of displays that feature both a static and interactive element.
Melencolia 1 & Magic Square
The name of this piece conjures initial thoughts of sadness. One interpretation is that Melencolia reflects the perpetually depressed mind of the mathematician, who can never fully succeed at
perfecting his craft. However, it is not all doom and gloom!
Despite its name, Melencolia is a celebration of the hopeful quest for knowing. It is also a representation of pure mathematical genius. This is exhibited by a 4x4 'magic square' in the top right
corner of the engraving, as well as a 3D shape now identified as a truncated triangular trapezohedron. Can you spot both elements?
What do you see when you look at this number puzzle? Do you see just a grid of numbers or something more? Now count the sum of the numbers in each row, column and diagonal. Can you see a pattern
start to emerge? This is the beauty of the magic square.
One surprising fact about Durer's magic square is that there are more than 80 different geometric combinations that produce the magic sum of 34. How many of them can you find?
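For reference, the arrangement engraved by Dürer is usually reproduced as:
16  3  2 13
 5 10 11  8
 9  6  7 12
 4 15 14  1
Every row, column and main diagonal adds up to the magic sum of 34 – the top row, for instance, gives 16 + 3 + 2 + 13 = 34 – and the two middle cells of the bottom row even record 1514, the year of the engraving.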
The Interactive Board
Pythagoras Theorem - Greek Interpretation
In ancient times, it seems to have been understood that the side lengths of a right triangle could be related in the ratio of 3:4:5. The Greeks were also aware of this basic relationship but they
developed it one step further.
They came up with a way to describe the relationship between the sides of all right triangles, not just specific ones. In other words, they were able to find other combinations of 3 numbers that were
able to make right triangles. One of the most fascinating things about the Greek discovery is that it involved a proof which did not require algebra. Instead, the proof was entirely visual.
Today, there are more than 100 different proofs for this historic theorem, building on the foundation left by the Greeks thousands of years ago.
How did the Greeks prove their version of Pythagoras Theorem? Basically they started with a big square which was divided into two smaller squares of sides a and b respectively, and two rectangles
with sides a and b; each rectangle was then split into two equal right triangles by drawing the diagonal c.
From there, the triangles were shifted to a different corner of the original square. The Pythagoreans used this rearrangement to show that the area of the square on the diagonal of each right
triangle was simply the sum of the area occupied by the two smaller squares from the previous step.
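In modern notation the rearrangement amounts to a short identity: the big square has area (a + b)², which equals a² + b² plus the four triangles (4 × ab/2 = 2ab) in the first layout, and c² plus the same four triangles in the second; cancelling the triangles leaves a² + b² = c².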
Put this simple yet beautiful proof to the test by rearranging the pieces in the wood-piece shown at right. Follow the set of instructions above for further guidance.
Right Triangle Theorem - Chinese Interpretation
Despite bearing the name of a Greek mathematician, the history books suggest that Pythagoras' Theorem might have a separate origin. Take a look at the visual interpretation which came from an ancient
Chinese text called The Nine Chapters on the Mathematical Art.
Much like the Greeks, the Chinese took 4 right triangles and arranged them around the edge of a grid so that the hypotenuse of each triangle became the side of a large square. The triangles were
formed by drawing rectangles at each corner of the grid and then cutting them diagonally in half. For each triangle, the lengths of the shortest two sides were 3 and 4 tiles respectively.
To find the length of the hypotenuse it was necessary to know how many tiles made up the large square inside the grid. This could only be done by working out the total area occupied by right
triangles and then subtracting this from the total number of tiles in the grid.
Once the area of the square was known, it was possible to work backwards to find not only the side length of that square but also (incidentally) the hypotenuse of each right triangle. In the case of
the Chinese proof, the length of the hypotenuse (5 tiles) could be inferred from the fact that the square occupied an area equal to 25 tiles.
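Assuming the usual 7 × 7 layout of the diagram (7 = 3 + 4 tiles on a side), the arithmetic runs as follows: the grid holds 7 × 7 = 49 tiles, the four right triangles together cover 4 × (3 × 4 / 2) = 24 tiles, so the tilted square covers 49 − 24 = 25 tiles and its side – the hypotenuse – must be 5.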
There is no direct evidence confirming that this was the method actually used by the Chinese. The only writing that was included with the image was the word "Behold"! The algebra that we associate
with the theorem today was not invented until much later.
Get in touch
2557 Mount Mee Road
Ocean View QLD 4521 | {"url":"https://mathemagallery.com.au/displays/","timestamp":"2024-11-02T11:14:25Z","content_type":"text/html","content_length":"81307","record_id":"<urn:uuid:e2a32487-19c9-4491-ab5c-93301138e4b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00333.warc.gz"} |
How to Print Out the Cell Value From Excel Using Pandas?
To print out the cell value from an Excel spreadsheet using Pandas, you can first import the Pandas library in your Python script. Then, load the Excel file into a Pandas DataFrame using the
read_excel() function. Once you have the DataFrame, you can access individual cell values using the .at accessor (by row/column labels) or the .iat accessor (by integer positions) along with the
row and column indexes. For example, to print out the value at the second row and second column (integer positions 1 and 1), you can use print(df.iat[1, 1]). This will output the value of the cell to the console.
How to print out cell value from Excel by specifying row and column using Pandas?
You can print out the cell value from an Excel file by specifying the row and column using Pandas by following these steps:
1. Import the pandas library:
import pandas as pd
1. Load the Excel file into a pandas DataFrame:
df = pd.read_excel('your_excel_file.xlsx')
1. Specify the row and column number to access the cell value:
row = 0  # specify the row number, starting from 0
column = 0  # specify the column number, starting from 0
1. Access the cell value using the iloc method:
cell_value = df.iloc[row, column]
print(cell_value)
Replace 'your_excel_file.xlsx' with the path to your Excel file. This code will print out the cell value at the specified row and column in the Excel file.
What is the correct syntax for accessing cell values from Excel with Pandas?
To access cell values from an Excel file using Pandas, you can use the following syntax:
import pandas as pd

# Load the Excel file into a DataFrame
df = pd.read_excel('your_excel_file.xlsx')

# Access a specific cell value by using the .iloc method
value = df.iloc[row_index, column_index]
print(value)
Replace 'your_excel_file.xlsx' with the filepath of your Excel file, row_index with the row number (starting from 0), and column_index with the column number (also starting from 0) of the cell you
want to access.
How do you navigate through Excel cells and print their values with Pandas?
You can navigate through Excel cells and print their values using Pandas by following these steps:
1. Install Pandas library if you haven't done so already. You can install Pandas using pip by running the following command:
pip install pandas
1. Import the Pandas library in your Python script:
import pandas as pd
1. Load the Excel file into a Pandas DataFrame using the read_excel function:
df = pd.read_excel('your_excel_file.xlsx')
1. To print the values of specific cells in the DataFrame, you can use the iloc function. For example, to print the value of the cell in the first row and first column, you can use the following:
print(df.iloc[0, 0])
This will print the value of the cell in the first row and first column of the DataFrame.
1. You can also loop through the DataFrame to print the values of all cells. For example, you can use the following code to print all values in the DataFrame:
for i in range(len(df)):
    for j in range(len(df.columns)):
        print(df.iloc[i, j])
This code will loop through all rows and columns in the DataFrame and print the value of each cell.
By following these steps, you can navigate through Excel cells and print their values using Pandas in Python.
What is the Pandas method for printing out cell value from Excel sheet?
The Pandas method for printing out a cell value from an Excel sheet is using the iloc method. Here is an example:
import pandas as pd

# Load the Excel file into a Pandas DataFrame
df = pd.read_excel('data.xlsx')

# Print out the value of the cell at row 1 and column 2
print(df.iloc[1, 2])
In the above example, iloc[1, 2] selects the value at the 2nd row and 3rd column (as indexing starts from 0 in Python). You can modify the row and column indices according to your specific needs.
What is the function used to access cell values in Pandas from an Excel sheet?
The function used to load an Excel sheet into Pandas is read_excel(); once the data is in a DataFrame, individual cell values are accessed with .iloc, .at, or .iat.
How to retrieve and print cell value from Excel row with Pandas?
You can retrieve and print a cell value from an Excel row using the iloc method in Pandas. Here is an example code snippet to demonstrate this:
import pandas as pd

# Load the Excel file into a Pandas DataFrame
df = pd.read_excel('data.xlsx')

# Retrieve the cell value from a specific row and column
row_index = 0  # specify the row index
col_index = 2  # specify the column index

cell_value = df.iloc[row_index, col_index]

# Print the cell value
print(cell_value)
In this code snippet, we first load an Excel file into a Pandas DataFrame. We then specify the index of the row and column from which we want to retrieve the cell value. Finally, we use the iloc
method to access the cell value from the specified row and column index and print it. | {"url":"https://topminisite.com/blog/how-to-print-out-the-cell-value-from-excel-using","timestamp":"2024-11-13T07:39:36Z","content_type":"text/html","content_length":"337040","record_id":"<urn:uuid:ddcbe1e3-748b-4c39-a477-bc3afd602f9b>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00318.warc.gz"} |
Data Structure Study Notes <9> Hash Search - cos
1. Hash Table#
1. Introduction to Concepts#
Management of variables in compilation: Dynamic Search Problem
• Using search trees? — Comparing two variable names (strings) is not efficient
• Convert strings to numbers, then process? — This is the idea of hash search.
Known search methods:
• Sequential Search O(N)
• Binary Search (Static Search) O(log~2~N)
• Binary Search Tree O(h) where h is the height of the binary search tree
• Balanced Binary Tree O(log~2~N)
Question: How to quickly search for the required keywords? What if the keywords are not easy to compare?#
The essence of searching: Finding the position of known objects
• Ordered arrangement of objects: total order (completely ordered, e.g., binary search), partial order (some keywords have an order, e.g., search tree)
• Directly "calculate" the position of the object: Hashing
Basic Work of Hashing#
• Calculate position: Construct a hash function to determine the storage position of keywords.
• Resolve conflicts: Apply some strategy to resolve the issue of multiple keywords having the same position.
Time Complexity#
The time complexity is almost constant: O(1), meaning the search time is independent of the problem size!
2. Hash Search#
(1) Basic Idea#
• Use the keyword key as the independent variable, and through a determined function h (hash function), calculate the corresponding function value h(key) as the storage address of the data object.
• Different keywords may map to the same hash address, i.e., h(key~i~) = h(key~j~) (when key~i~ ≠ key~j~), which is called "Collision." — Requires some conflict resolution strategy.
In the case of no conflict, searching only requires calculating the address using the hash function.
Load Factor#
Load Factor (Loading Factor): Let the size of the hash table be m, and the number of elements filled in the table be n, then α = n / m is called the load factor of the hash table.
• As shown in the figure, α = 11 / 17 ≈ 0.65
If there is no overflow, then T~query~ = T~insert~ = T~delete~ = O(1)
• When the number of elements in the hash table is too large (i.e., the load factor α is too high), the search efficiency will decrease.
□ The practical maximum load factor is generally taken as 0.5 ≤ α ≤ 0.85
• The solution is to double the size of the hash table, a process called "Rehashing."
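A minimal sketch of the rehashing step (it assumes the HashTable structures and the CreateTable and Insert routines defined in the operation set later in these notes): allocate a roughly doubled table and re-insert every valid element, since each element's position depends on the table size.
// Sketch only: assumes ptrHash, Legitimate, CreateTable() and Insert() from the operation set below.
ptrHash Rehash(ptrHash Old) {
    ptrHash New = CreateTable(2 * Old->TableSize); // CreateTable rounds the size up to a prime
    for(int i = 0; i < Old->TableSize; ++i)        // walk the old table
        if(Old->Units[i].flag == Legitimate)       // only valid elements are carried over
            Insert(New, Old->Units[i].data);       // position is recomputed for the new size
    delete [] Old->Units;                          // release the old storage
    delete Old;
    return New;
}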
(2) Construction of Hash Functions#
Two Factors#
A "good" hash function generally considers the following two factors:
• Simple calculation to improve conversion speed.
• Uniform distribution of address space corresponding to keywords to minimize collisions.
Hash Function Construction for Numeric Keywords#
① Direct Addressing Method#
Take a certain linear function value of the keyword as the hash address, i.e., h(key) = a × key + b (a, b are constants).
② Division Remainder Method (Commonly Used)#
The hash function is: h(key) = key mod p
e.g., h(key) = key %17
Generally, to ensure that the address space corresponding to keywords is distributed uniformly, p is taken as a prime number.
③ Digit Analysis Method#
By analyzing the variation of numeric keywords at each digit, take comparatively random digits as the hash address.
④ Folding Method#
Split the keyword into several parts of the same digit, then add them together.
⑤ Square Middle Method#
Hash Function Construction for Character Keywords#
Simple Hash Function — ASCII Code Addition#
Define the hash function for character-type keywords key as follows: h(key) = (Σkey[i]) mod TableSize
Severe collisions!!
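A direct transcription of this additive hash (a sketch, not part of the original notes) makes the weakness easy to see: any two keys that are permutations of each other, such as "abc" and "cba", necessarily collide.
#include <string>
using std::string;

// Naive additive hash: sum the character codes, then take the remainder.
// Anagrams such as "abc" and "cba" always collide.
int AddHash(const string &key, int TableSize) {
    unsigned int h = 0;
    for(char c : key)
        h += (unsigned char)c; // add each character's code
    return h % TableSize;
}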
Simple Improvement — First 3 Characters Shift Method#
h(key) = (key[0] × 27^2^ + key[1] ×27 + key[2]) mod TableSize
Still collisions, space waste.
Good Hash Function — Shift Method#
Involve all n characters of the keyword, and distribute the resulting values well.
h(key) = ($\sum_{i=0}^{n-1} key[n-i-1] \times 32^i$) mod TableSize
The code is as follows:
typedef string DataType; // Data type
typedef int Index; // Hash index
// Returns the index after calculation by the hash function
Index Hash(string Key, int TableSize) {
    unsigned int h = 0; // Hash function value, initialized to 0
    int len = Key.length();
    for(int i = 0; i < len; ++i)
        h = (h << 5) + Key[i];
    return h % TableSize;
}
(3) Conflict Handling in Hash Search#
Common strategies for handling conflicts are as follows:
• Change position — Open Addressing Method
• Organize conflicting objects at the same position together — Chain Addressing Method
Open Addressing Method#
Open Addressing Method, once a collision occurs (the position already has other elements), it searches for another empty address according to some rules. Its advantage is that the hash table is an
array, storage efficiency is high, random search; the disadvantage is that the hash table has a "clustering" phenomenon. In the open addressing hash method, the delete operation needs to be very
careful, it can only be "lazily deleted," meaning a deletion marker needs to be added instead of a real deletion, so that it won't "break the chain" during searching. Its space can be reused during
the next insertion.
• If the i-th collision occurs, the next probe address is offset by d~i~; the basic formula is: h~i~(key) = (h(key) + d~i~) mod TableSize (1 ≤ i < TableSize)
• d~i~ determines the different conflict resolution schemes: Linear Probing (d~i~ = i), Quadratic Probing (d~i~ = ± i^2^), Double Hashing (d~i~ = i*h~2~(key)).
① Linear Probing Method#
Use the increment sequence 1, 2, …, (TableSize - 1) to cyclically probe the next storage address.
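A short sketch of a linear-probing lookup (assuming the same HashTable structures and Hash() used in the quadratic version below); the only difference is that each probe moves one cell forward, wrapping around at the end of the table.
// Sketch: linear-probing version of Find, assuming the declarations from the operation set below
// and a table that is never completely full.
Index LinearFind(ptrHash H, DataType Key) {
    Index p = Hash(Key, H->TableSize);
    while(H->Units[p].flag != Empty && H->Units[p].data != Key)
        p = (p + 1) % H->TableSize; // increments 1, 2, 3, ... with wrap-around
    return p; // an empty unit here means the key is not in the table
}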
② Quadratic Probing Method (Secondary Probing)#
Use the increment sequence 1^2^, -1^2^, 2^2^, -2^2^, …, q^2^, -q^2^ with q ≤ ⌊TableSize/2⌋ to cyclically probe the next storage address.
Can quadratic probing find all spaces? There is a theorem as follows:
• If the length of the hash table TableSize is a prime number of the form 4k+3 (k is a positive integer), then quadratic probing can probe the entire hash table space.
// Find Key element, here using quadratic probing to handle conflicts
Index Find(ptrHash H, DataType Key) {
    Index nowp, newp;
    int Cnum = 0; // Record the number of collisions
    newp = nowp = Hash(Key, H->TableSize);
    // If the unit at this position is not empty and is not the element to be found, a collision occurs
    while(H->Units[newp].flag != Empty && H->Units[newp].data != Key) {
        ++Cnum; // Increase collision count by one
        if(Cnum % 2) {
            newp = nowp + (Cnum+1)*(Cnum+1)/4; // Increment is +i^2, i is (Cnum+1)/2
            if(newp >= H->TableSize)
                newp = newp % H->TableSize;
        } else {
            newp = nowp - Cnum*Cnum/4; // Increment is -i^2, i is Cnum/2
            while(newp < 0)
                newp += H->TableSize;
        }
    }
    return newp; // Return position, if this position is an empty unit, it means not found
}
③ Double Hashing Method#
d~i~ is i*h~2~(key), h~2~(key) is another hash function, the probing sequence is h~2~(key), 2h~2~(key), 3h~2~(key)…
• For any key, h~2~(key) ≠ 0!
• The probing sequence should also ensure that all hash storage units can be probed, the following form has good effects: h~2~(key) = p - (key mod p)
where p < TableSize, p and TableSize are both prime numbers.
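For a numeric keyword the two functions of this scheme might look like the following sketch (p is an assumed prime smaller than TableSize); because h~2~(key) is never 0, every probe advances.
// Sketch of double hashing for integer keys; p is a prime smaller than TableSize.
int H1(int key, int TableSize) { return key % TableSize; }
int H2(int key, int p) { return p - (key % p); } // never 0

// Position of the i-th probe: h1(key) + i*h2(key), wrapped into the table.
int DoubleHashProbe(int key, int i, int TableSize, int p) {
    return (H1(key, TableSize) + i * H2(key, p)) % TableSize;
}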
Separate Chaining Method#
The separate chaining method stores all conflicting keywords at the corresponding position in a singly linked list. The advantage is that the deletion of keywords does not require "lazy deletion,"
thus avoiding storage garbage; the disadvantage is that the storage efficiency and search efficiency of the linked list part are relatively low. The uneven length of the linked list can lead to a
significant decrease in time efficiency.
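A compact sketch of the chained layout (not part of the original notes): each table slot holds the head of a singly linked list, and a lookup only walks the list hanging off the hashed slot. It assumes the DataType typedef and the string Hash() from the operation set below.
struct ChainNode {          // one keyword in a bucket's singly linked list
    DataType data;
    ChainNode *next;
};
struct ChainHashTable {
    int TableSize;
    ChainNode **Heads;      // Heads[i] points to the list of all keys hashing to i
};
// Look up Key: hash to a bucket, then walk only that bucket's list.
ChainNode *ChainFind(ChainHashTable *H, DataType Key) {
    ChainNode *p = H->Heads[Hash(Key, H->TableSize)];
    while(p != nullptr && p->data != Key)
        p = p->next;
    return p;               // nullptr means the key is not present
}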
(4) Operation Set of Hash Search (Code)#
Here is the hash search for strings, using the shift method for the Hash function and the quadratic probing method for conflict resolution; others can be modified based on this.
#include <cmath>
#include <string>
using std::string;

const int MaxSize = 100000;
typedef int Index; // Hash index
typedef string DataType; // Element type stored in hash unit
// Hash unit state type, corresponding to: valid element, empty unit, deleted element
typedef enum {Legitimate, Empty, Deleted} EntryType;
struct HashNode { // Hash table unit type
    DataType data;  // Store element
    EntryType flag; // Unit state
};
struct HashTable { // Hash table type
    int TableSize;   // Table length
    HashNode *Units; // Array storing hash units
};
typedef HashTable *ptrHash;
Get Prime Number#
Return the smallest prime number greater than N and not exceeding MaxSize, to ensure that the maximum length of the hash table is prime, thus making the address space corresponding to keywords as
uniform as possible.
// Return the smallest prime number greater than N and not exceeding MaxSize, to ensure that the maximum length of the hash table is prime
int NextPrime(int N) {
    int i, p = (N%2) ? N+2 : N+1; // Start from the next odd number p greater than N
    while(p <= MaxSize) {
        for(i = (int)sqrt(p); i > 2; i--)
            if(!(p % i)) break; // Not a prime number
        if(i == 2) break; // for ends normally, it is a prime number
        else p += 2; // Try the next odd number
    }
    return p;
}
Create Empty Table#
Create an empty table with a length greater than TableSize. (Ensure the length is prime)
ptrHash CreateTable(int TableSize) {
    ptrHash H = new HashTable;
    H->TableSize = NextPrime(TableSize);
    H->Units = new HashNode[H->TableSize];
    for(int i = 0; i < H->TableSize; ++i)
        H->Units[i].flag = Empty;
    return H;
}
Hash Calculation#
Return the index after calculation by the hash function. Here the keyword data type is string, using the shift method for hashing.
Index Hash(DataType Key, int TableSize) {
    unsigned int h = 0; // Hash function value, initialized to 0
    int len = Key.length(); // DataType is string, so Key is used directly
    for(int i = 0; i < len; ++i)
        h = (h << 5) + Key[i];
    return h % TableSize;
}
Search Operation#
Find the Key element, here using quadratic probing to handle conflicts, returning the position index; if this position is an empty unit, it indicates not found.
Index Find(ptrHash H, DataType Key) {
    Index nowp, newp;
    int Cnum = 0; // Record the number of collisions
    newp = nowp = Hash(Key, H->TableSize);
    // If the unit at this position is not empty and is not the element to be found, a collision occurs
    while(H->Units[newp].flag != Empty && H->Units[newp].data != Key) {
        ++Cnum; // Increase collision count by one
        if(Cnum % 2) {
            newp = nowp + (Cnum+1)*(Cnum+1)/4; // Increment is +i^2, i is (Cnum+1)/2
            if(newp >= H->TableSize)
                newp = newp % H->TableSize;
        } else {
            newp = nowp - Cnum*Cnum/4; // Increment is -i^2, i is Cnum/2
            while(newp < 0)
                newp += H->TableSize;
        }
    }
    return newp; // Return position, if this position is an empty unit, it means not found
}
Insert Operation#
Insert the Key into the table, returning success or failure; failure indicates that the key already exists.
bool Insert(ptrHash H, DataType Key) {
    Index p = Find(H, Key);
    if(H->Units[p].flag != Legitimate) { // This position can insert an element
        H->Units[p].flag = Legitimate;
        H->Units[p].data = Key;
        // Other operations
        return true;
    } else { // The key already exists
        // Other operations
        return false;
    }
}
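The notes point out above that deletion in an open-addressing table must be "lazy". A Delete routine is not part of the original listing, but under the same structures it could look like this sketch – the unit is only marked Deleted, so probe sequences that pass through it are not broken.
// Sketch of lazy deletion, assuming the same structures as Find and Insert above.
bool Delete(ptrHash H, DataType Key) {
    Index p = Find(H, Key);
    if(H->Units[p].flag == Legitimate) {
        H->Units[p].flag = Deleted; // mark only; do not clear the data or set Empty
        return true;
    }
    return false; // the key was not in the table
}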
(5) Performance Analysis#
The search efficiency of hash tables is analyzed through the following factors:
• Average Successful Search Length (ASLs)
• Average Unsuccessful Search Length (ASLu)
Taking the example from the linear probing method above, the performance of this hash table is analyzed as follows:
• Its ASLs is the average number of comparisons for keywords in the search table (i.e., the number of collisions plus 1)
ASLs = (1+7+1+1+2+1+4+2+4)/9 = 23/9 ≈ 2.56
• Its ASLu is the average number of comparisons for keywords not in the hash table (unsuccessful).
General method: Classify keywords not in the hash table into several categories, such as by their H(key) value; here H(key) = key mod 11, analyze in 11 categories, ASLu as follows:
ASLu = (3+2+1+2+1+1+1+9+8+7+6)/11 = 41/11 ≈ 3.73
The number of comparisons for keywords depends on the number of collisions, and the factors affecting the number of collisions are as follows:
1. Whether the hash function is uniform.
2. The method of handling conflicts.
3. The load factor α of the hash table.
The impact of different conflict handling methods and load factors on efficiency is as follows:
① Search Performance of Linear Probing Method#
② Search Performance of Quadratic Probing and Double Hashing Methods#
When the load factor α < 0.5, the expected number of probes for various probing methods is not large and relatively close. As α increases, the expected number of probes for the linear probing method
increases rapidly, and the expected number of probes for unsuccessful searches and insert operations is greater than that for successful searches. Therefore, the reasonable maximum load factor should
not exceed 0.85.
③ Search Performance of Separate Chaining Method#
• Choosing a suitable h(key), the expected search efficiency of the hashing method is constant O(1), which is almost independent of the size n of the keyword space! It is also suitable for problems
where direct comparison of keywords is computationally intensive.
• It is based on a small α, thus the hashing method is a space-for-time method.
• The storage of the hashing method is random for keywords, not convenient for sequential search of keywords, and also not suitable for range searches or maximum and minimum value searches. | {"url":"https://x.cosine.ren/shu-ju-jie-gou-xue-xi-bi-ji-9-san-lie-cha-zhao-md","timestamp":"2024-11-03T07:22:35Z","content_type":"text/html","content_length":"682222","record_id":"<urn:uuid:b019802e-576a-4297-ad79-58e646c29dc5>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00523.warc.gz"} |
Statistical mechanics - AbsoluteAstronomy.com
Statistical mechanics
The terms statistical mechanics and statistical thermodynamics are used interchangeably. Statistical physics is a broader term which includes statistical mechanics, but is sometimes also used as a synonym for statistical mechanics.
Statistical mechanics is a branch of physics that applies probability theory, which contains mathematical
tools for dealing with large populations, to the study of the thermodynamic behavior of systems composed of a large number of particles. | {"url":"http://www.absoluteastronomy.com/topics/Statistical_mechanics","timestamp":"2024-11-03T12:41:53Z","content_type":"text/html","content_length":"11459","record_id":"<urn:uuid:105716f1-bfa3-4231-89bf-cfc665c00a5b>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00651.warc.gz"} |
Square Root Q&As - Prealgebra | HIX Tutor
Square Root
The concept of the square root is foundational in mathematics, offering a fundamental method to find the side length of a square when its area is given or to identify the original number when its
square is known. Rooted deeply in algebra and arithmetic, the square root symbol (√) represents the operation inverse to squaring a number. Understanding square roots is essential not only for
solving equations but also for various fields such as physics, engineering, and computer science. Exploring the properties and applications of square roots unveils their significance in both
theoretical and practical realms of mathematics. | {"url":"https://tutor.hix.ai/subject/prealgebra/square-root","timestamp":"2024-11-13T22:23:38Z","content_type":"text/html","content_length":"559291","record_id":"<urn:uuid:1d326d14-4252-452b-91e3-ecc8828bc658>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00787.warc.gz"} |
On-line version ISSN2411-9717
Print version ISSN2225-6253
J. S. Afr. Inst. Min. Metall. vol.117 n.1 Johannesburg Jan. 2017
GENERAL PAPERS
Cost modelling for flotation machines
S. Arfania; A.R. Sayadi; M.R. Khalesi
Tarbiat Modares University, Iran
Flotation is one of the most widely used operations in mineral processing plants and assumes a significant share of the total milling costs. In this paper, cost models for flotation machines are presented, based on single regression analysis (SRA) and multiple regression analysis (MRA). Thirty-seven major flotation machines were analysed for this purpose. Depending on the machinery type, different technical variables such as diameter, required air flow rate, required floor space, cell volume, required air pressure, and power were considered as predictor variables, individually (in SRA) or simultaneously (in MRA). Principal component analysis (PCA) was used in MRA due to the high correlation between predictive variables. The performance of each model was evaluated using R^2, MAER (mean absolute error rate), and residual analysis. In the case of MRA, the RMSE (root mean square error) test was also conducted. Maximum obtained MAER of 13.5% and minimum R^2 of 0.[…].
Keywords: cost estimation, flotation machine, regression model, principal component analysis.
Mineral processing involves a number of unit operations, such as […], sizing, concentration, extractive metallurgical processes, and dewatering. Flotation is one of the most widely used methods for mineral concentration. Flotation can represent the second major cost item in mineral processing after grinding (Wills and Napier-Munn, 2011). Accordingly, in any equipment selection, several interactions between engineering and economic considerations must be taken into account. Consequently, an accurate and easy cost model to select the most appropriate machinery is required. Moreover, […] (Khalesi et al., […]), […] et al. (2006) and Huang, Newnes, and Parry (2012). Regression is one of the most frequently applied techniques for cost estimation (Smith and Mason, 1997). One of the preliminary works was undertaken by Prasad (1969), […] Sayadi, Khalesi, and Khosfarman (2014). […], correlating only one independent variable to a cost value (Stebbins, 1987). Consequently, the role of other effective parameters has simply been overlooked. Furthermore, […], and estimation of detailed operating cost items such as maintenance, lubrication, etc. is not possible. To overcome these deficiencies, this paper aims to introduce up-to-date cost models in which both single regression (SRA) and multiple regression (MRA) cost functions are presented. […] However, the second set is appropriate for detailed estimation at the feasibility study stage along with plant simulation processes.
Thirty-seven major flotation machines (16 …, 11 self-aerating, and 10 standard cells) are considered in this study. The data descriptions are summarized in the accompanying tables (InfoMine, 2013). InfoMine conducts annual surveys on costs of equipment from manufacturers and distributors; fuel, energy, and lubricant suppliers; and others. The machine categories are as follows:
►…: based on 36-foot, …, and level control
►Self-aerating cells: individual cells based on a 10-cell row and including paddles, feed boxes, junction boxes, discharge boxes, skimmer drives, and motor guards, …
►Standard cells: individual cells based on a 10-cell row and including paddles, feed boxes, junction boxes, discharge boxes, skimmer drives, and motor guards, blowers, and launders.
The data contained technical and cost specifications of the machines. Technical parameters were diameter (D), required air flow rate (AF), cell volume (CV), required floor space (FS), required air pressure (AP), and power (P), depending on the type of flotation machine.
Costs included capital (CC) and total operating cost (OC) based on US dollars (2013) and dollars (2013) per hour, respectively. Moreover, the operating costs could also be estimated in detail, i.e. the overhaul (parts and labour), maintenance (parts and labour), and lubrication cost items. The operating costs data is provided based on certain unit costs in the USA in 2013: electrical power, lubricants, and repair labour at 0.076 US$ per kWh, 3.32 US$ per litre, and 37.57 US$ per hour, respectively.
The overhaul costs (parts and labour) are those associated with scheduled refurbishing or replacement of major wear parts. Likewise, the maintenance costs (parts and labour) are associated with both unscheduled repairs and scheduled servicing of all of the minor and major components, excluding overhaul actions and lubrication. The cost of the operator's time was not included in this study. Here, separate capital and operating cost functions for variable-speed motors are also presented.
Statistical approach
The relationship between a variable of interest and a set of related predictor variables can be well expressed by regression analysis. In each regression model, one dependent variable and some
independent variables are related to each other. The model is called single regression analysis (SRA) if just one independent variable exists, while in multiple linear regression (MRA) several independent variables are correlated to the dependent variable. In this regard, independency of the regressors (the so-called absence of multicollinearity) is a must. Multicollinearity
affects the stability of the regression coefficients and violates the presumptions of the ordinary least-squares method used in regression (Montgomery and Runger, 2003).
In this paper, MATLAB software was used in order to evaluate different univariate structures for capital and operating costs and, as a result, the power function framework of Equation [1] was chosen for the univariate model:
Y = aX^b [1]
where Y refers to capital or operating costs, X defines an independent variable (one of the machine predictor variables), and a and b are constant values (parameters).
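As an illustration of how such a univariate power-law cost function can be fitted, the short Python sketch below estimates a and b by ordinary least squares on log-transformed data. It is not the authors' MATLAB code, and the machine data shown are made-up placeholder values, not figures from this paper.

import numpy as np

# Hypothetical predictor (e.g. cell diameter in metres) and capital cost data
X = np.array([1.2, 2.0, 3.5, 4.8, 6.0])
Y = np.array([55e3, 90e3, 160e3, 210e3, 260e3])

# Fit log(Y) = log(a) + b*log(X) by least squares
b, log_a = np.polyfit(np.log(X), np.log(Y), 1)
a = np.exp(log_a)
Y_hat = a * X**b

# Mean absolute error rate (MAER), as used in this paper, in per cent
maer = 100 * np.mean(np.abs(Y_hat - Y) / Y)
print(f"a = {a:.1f}, b = {b:.3f}, MAER = {maer:.2f}%")

The log-log fit is the standard way to estimate the parameters of Y = aX^b; the exponent b directly expresses the economy of scale discussed later in the paper.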
The multiple linear regression framework (Equation [2]) showed promise in this research, based on testing different model structures and also on previous works (Sayadi, Khalesi, and Khosfarman, 2014).
where x_i defines the independent explanatory variables and a_i are regression coefficients (model parameters). As was mentioned, independency of the x_i variables from each other is required before initiating any regression analysis due to the regression assumptions. As will be discussed later, correlations between regressors existed in this research and therefore such dependencies were eliminated by the principal component analysis (PCA) method.
In order to evaluate the developed models, tests of R^2, RMSE (root mean square error), and MAER (mean absolute error rate) were carried out. R^2 is the coefficient of determination and indicates the portion of the dependent variable variance that is explained by the regression model. The RMSE shows the difference between observed and predicted values according to the model and can be calculated by Equation [3]:
where X_obs is the observed value, X_e the estimated value at time/place i, and n the number of observations.
For validation of the models, analysis of residuals, tests of MAER values, and evaluation of R^2 were conducted. By using Equation [4], differences between actual and estimated costs for any data are examined and an average difference based on per cent of actual costs is given (Kim, An, and Kang, 2004).
where C_e is the estimated cost, C_a the actual cost, and n the number of data. MAER values should be kept as low as possible.
Principal component analysis
In this method, the main observations in a correlated space are transformed to a set of uncorrelated components, each of which is a linear composition of the main variables (Equations [5] and [6]). These components are called principal components (PCs). As can be seen in the accompanying figure, neither X_1 nor X_2 is the main direction of the data, while the ellipse with main diameters PC_1 and PC_2 matches the direction of the data. The main advantage of such a projection is the independency of the PCs from each other. It follows that there are linear relations between the PCs and X_i as in Equations [5] and [6]:
In this research, the main variables X_1, X_2, ..., X_n were first transformed into PCs in order to eliminate the multicollinearity, and then the multiple linear regression model was built with the PCs as the regressors. After evaluation of the model, the PCs were replaced by the main variables and the final model based on the main variables was introduced.
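A minimal Python sketch of this PCA-then-regression idea is given below. It uses scikit-learn and randomly generated, strongly correlated predictors purely for illustration; it is not the STATISTICA workflow or the data used in this paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic, highly correlated technical variables (e.g. diameter, volume, power)
n = 30
diameter = rng.uniform(1.0, 6.0, n)
volume = 0.8 * diameter + rng.normal(0, 0.1, n)    # correlated with diameter
power = 5.0 * diameter + rng.normal(0, 0.5, n)     # correlated with diameter
X = np.column_stack([diameter, volume, power])
cost = 40e3 * diameter + 2e3 * power + rng.normal(0, 5e3, n)  # synthetic cost

# Transform the correlated predictors into uncorrelated principal components
pca = PCA()
Z = pca.fit_transform(X)                 # PCs are linear combinations of the (centred) main variables

# Fit the linear cost model on the PCs, then map coefficients back to the main variables
reg = LinearRegression().fit(Z, cost)
coef_main = pca.components_.T @ reg.coef_
print("R^2 on PCs:", reg.score(Z, cost))
print("Equivalent coefficients for the main variables:", coef_main)
# Because PCA centres the data, the intercept for the original variables
# is reg.intercept_ - coef_main @ pca.mean_.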
The model development procedure for one of the machine types (CSFM) is presented here as an example. The correlation matrix of the predictor variables illustrates the high correlation between them for the CSFM.
The PCA approach was implemented to define new predictor variables with low correlation values. Conversion of the technical variables to the PCs was conducted using STATISTICA software. One table shows the main variables and the newly generated PCs of the CSFM together, while another demonstrates the correlation matrix of the new PCs. As can be seen, the correlations between the new PCs are negligible.
Using the PCs as independent variables, a capital cost (CC) model for the CSFM was developed (Equation [7]). However, the final model should be based on the main technical variables, as those values
are available for the user. Equation [8] has been used for converting the PC-based model to a model with main variables (Timm, 2002; Kaiser, 1960)
where X_i are the main technical variables (like power or required air), a_i are calculated by multiplying the coefficients of the PC-based model by the eigenvectors of the PCs, S_ix and X_i,ave are the standard deviation and mean of the main variables, respectively, and C refers to the constant value of the PC-based model (here 224 558.3). Equation [9] shows the final capital cost model for the CSFM.
Results and discussion
Single regression results
Applying the power regression function (Y = aX^b), the cost model was obtained. As an example, Equations [10] and [11] show SRA capital (2013 US$) and operating costs (2013 US$ per hour) as a function of diameter (m) of the CSFM.
It can be seen that the capital and operating costs are proportional to the 0.791 and 0.… powers of the diameter, respectively. The R^2 is about 0.94 for both cases, i.e. 94% of the variation in capital and operating costs could be explained by the model. The accompanying tables demonstrate the results for the machines modelled in this research.
Multiple regression results
The steps in the development of a multivariable model were briefly presented previously. As was mentioned, models of the form of Equation [9] were tested by different statistical approaches. A residual plot of the capital cost model of the CSFM confirms the correct normal distribution of the residuals. A summary table lists the new coefficients (a_1, a_2, a_3) and new intercept (a_0) along with the MAER. As an example, the validity of the developed model in reproducing the measured data of capital costs for the CSFM machine is demonstrated in the accompanying figure.
Cost model for variable-speed motors
As has been mentioned, the costs of the motor are not included in the developed cost functions of flotation machines. Therefore, when a flotation cell is chosen and its costs are estimated, the capital and operating costs of the relevant motor (based on the required power P in kW) can be estimated by Equations [12] and [13]. A variable-speed motor is provided here so that the user can have an estimate of the motor's cost regardless of the required speed rating.
Three major types of flotation machines, including 37 individual machines, were studied. In SRA, a single technical variable was used as the predictor; whereas in MRA (depending on the machine type), different technical variables such as diameter, required air flow rate, required floor space, cell volume, required air pressure, and power were considered as predictor variables simultaneously.
Moreover, the operating cost was detailed in different cost items. The models are applicable within the ranges of data indicated in the corresponding tables, and major extrapolation should be avoided.
Capital and operating costs of the studied machines range up to about $394 000 and $11 per hour, respectively. The capital cost of a CSFM and CCFM is proportional to the 0.79 and 0.76 power of diameter, whereas in the case of the SAFM and SFM, it is proportional to the 0.47 and 0.… powers, respectively. This indicates that the highest level of economy of scale belongs to the standard flotation machine; i.e. the SFM cost advantage increases with increasing size of the machine. The R^2 values between 0.87 and 0.96 indicate that at least 87% of total variation in costs can be explained by the models. The lowest MAER of SRA belongs to the CSFM (4.86%). In the MRA cases, this property is owed to the SFM (3.05%).
Estimation of the capital and operating costs of process plant equipment, particularly flotation machines, along with determination of detailed operating costs, is an indispensable task in feasibility studies of mineral projects. Most of the previously published cost models correlate costs to a single variable, and the role of other operative variables has simply been disregarded. A new up-to-date statistical cost model for flotation machines (…, self-aerating, and standard) has been developed.
Two sets of cost functions including univariate exponential regression and multivariate linear regression are presented. Individual cost functions are presented for each operational cost item category such as overhaul (parts and labour), maintenance (parts and labour), power, and lubrication items. However, costs can vary from mine to mine and from time to time, and should be adjusted for conditions specific to the operation based on local unit costs (such as electrical power, lubricants, and repair labour) and the annual cost index of mineral processing equipment. The models are valid within the ranges of the underlying data, and over-extrapolation could result in misleading estimates.
The MAER, RMSE, R^2, and residual analysis tests were used to evaluate the models. Maximum MAER of 13.5% and minimum R^2 of 0.87 show that the models are applicable at the pre-feasibility and even feasibility study level of projects.
Camm, T.W. 1994. Simplified cost models for prefeasibility mineral evaluations. Mining Engineering, vol. 46, no. 6. pp. 559-562.
Huang, X.X., Newnes, L.B., and Parry, G.C. 2012. The adaptation of product cost estimation techniques to estimate the cost of service. International Journal of Computer Integrated Manufacturing, vol. 25, no. 4-5. pp. 417-431.
INFOMINE. 2013. Mine and mill equipment costs. InfoMine USA, Inc. CostMine Division and Aventurine Engineering, Inc.
Kaiser, H.F. 1960. The application of electronic computers to factor analysis. Educational and Psychological Measurement, vol. 20. pp. 141-151.
Khalesi, M.R., Zarei, M.J., Sayadi, A.R., Khoshnam, F., and Chegeni, M.H. 2015. Development of a techno-economic simulation tool for an improved mineral processing plant design. Minerals Engineering, vol. 81. pp. 103-108.
Kim, G.H., An, S.H., and Kang, K.I. 2004. Comparison of construction cost estimating models based on regression analysis, neural networks, and case-based reasoning. Building and Environment, vol. 39, no. 10. pp. 1235-1242.
Lashgari, A. and Sayadi, A.R. 2013. Statistical approach to determination of overhaul and maintenance cost of loading equipment in surface mining. International Journal of Mining Science and Technology, vol. 23. pp. 441-446.
Loh, H.P., Lyons, J., and White, C.W. 2002. Process equipment cost estimation, final report. National Energy Technology Center, DOE/NETL-2002/1169.
Mcnab, B., Jancovic, A., David, D., and Payne, P. 2009. Processing of magnetite iron ores - comparing grinding options. Proceedings of the AusIMM Iron Ore Conference, Perth, WA, 27-29 July 2009.
Mohutsiwa, M. and Musingwini, C. 2015. Parametric estimation of capital costs for establishing a coal mine: South Africa case study. Journal of the Southern African Institute of Mining and Metallurgy, vol. 115. pp. 789-797.
Montgomery, D.C. and Runger, G.C. 2003. Applied Statistics and Probability for Engineers. 3rd edn. Arizona State University.
Mular, A.L. 1978. Mineral Processing Equipment Costs and Preliminary Capital Cost Estimations. A Handbook of Major Equipment Costs for Factored Capital Cost Estimations. Canadian Institute of Mining and Metallurgy, Montreal. vol. 13. 166 pp.
Mular, A.L. 1982. Mining and Mineral Processing Equipment Costs and Preliminary Capital Cost Estimations. Canadian Institute of Mining and Metallurgy, Montreal. 265 pp.
Mular, A.L. and Poulin, R. 1998. Capcosts 1998. A Handbook for Estimating Mining and Mineral Processing Equipment Costs and Capital Expenditures and Aiding Mineral Project Evaluations. Canadian Institute of Mining, Metallurgy and Petroleum, Montreal. 319 pp.
Niazi, A., Dai, J.S., Balabani, S., and Seneviratne, L. 2006. Product cost estimation: technique classification and methodology review. Journal of Manufacturing Science and Engineering, vol. 128. pp. 563-575.
Noakes, M. and Lanz, T. 1993. Cost Estimation Handbook for the Australian Mining Industry. Australasian Institute of Mining and Metallurgy, Carlton, Victoria. 412 pp.
O'Hara, T.A. 1980. Quick guide to the evaluation of ore bodies. CIM Bulletin, February. pp. 87-89.
O'Hara, T.A. and Suboleski, C.S. 1992. Costs and cost estimation. SME Mining Engineering Handbook. vol. 1. Society for Mining, Metallurgy and Exploration, Littleton, CO. Chapter 6.3, pp. 405-424.
Pascoe, R.D. 1992. Capital and operating costs of minerals engineering plants: a review of simple estimation techniques. Minerals Engineering, vol. 5, no. 8. pp. 883-893.
Prasad, L. 1969. … Canadian Institute of Mining, Metallurgy and Petroleum, Montreal. pp. 3-15.
Sayadi, A.R., Lashgari, A., and Paraszczak, J.J. 2011. Hard-rock LHD cost estimation using single and multiple regressions based on principal component analysis. Tunnelling and Underground Space Technology, vol. 27. pp. 133-141.
Sayadi, A.R., Lashgari, A., Fouladgar, M.M., and Skibniewski, M.J. 2012. Estimating capital and operational costs of backhoe shovels. Journal of Civil Engineering and Management, vol. 18, no. 3. pp. 378-385.
Sayadi, A.R., Khalesi, M.R., and Khosfarman, M. 2014. A parametric cost model for mineral grinding mills. Minerals Engineering, vol. 55. pp. 96-102.
Smith, A.E. and Mason, A.K. 1997. Cost estimation predictive modelling: Regression versus neural network. The Engineering Economist, vol. 42, no. 2. pp. 137-161.
Stebbins, S.A. 1987. Cost estimation handbook for small placer mines. Information Circular 9170. US Bureau of Mines. 94 pp.
Timm, N.H. 2002. Applied Multivariate Analysis. Springer-Verlag, New York.
USBM. 1987. Bureau of Mines cost estimating system handbook: surface and underground mining and milling of metallic and non-metallic minerals except fossil fuels in the United States and Canada. Open file report 10-87. US Bureau of Mines, Denver, CO. 134 pp.
Wills, B.A. and Napier-Munn, T. 2011. Wills' Mineral Processing Technology: An Introduction to the Practical Aspects of Ore Treatment and Mineral Recovery. 7th edn. Elsevier/Butterworth Heinemann, Amsterdam.
Paper received Apr. 2015
Revised paper Received Jul. 2016
IN-AC5L Cheapest 128 Element Color Doppler Echo Android Wireless Probe5 Liter Paint Tin Can with Customized Printing Plastic HandleCorresponding author | {"url":"http://www.sotech.co.kr/xeskkyqp/zxtnxq-three-phase-19798493.html","timestamp":"2024-11-12T20:37:17Z","content_type":"application/xhtml+xml","content_length":"50310","record_id":"<urn:uuid:e600a62f-c039-4a7c-a3e5-bbbae352adaa>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00099.warc.gz"} |
Nonlinear dynamics of electrons accelerated by resonant fields in nonuniform plasmas
A numerical and analytical study is presented of the nonlinear phase-space trajectories of particles accelerated by resonant electric fields in a nonuniform plasma. The fields are represented by a
driven Airy pattern characteristic of resonant excitation, and thus the results have relevance to several topics of current interest. The acceleration processes can be classified into three regimes
depending upon the initial energy of the particles. The first type is a diffusive interaction which occurs for high-energy particles. The second type is slowing down due to temporary trapping inside
potential wells having a spatially decreasing phase velocity. The third type is a phase-independent acceleration of slow particles. The comparison velocity used to classify these regimes is ωL/(k_D L/√3)^(2/3), where ω is the resonant frequency, L the gradient scale length, and k_D the Debye wave number. Simple analytic expressions are found that explain the basic features of numerical test
particle orbits and illuminate the existence of the three categories of interaction.
Physics of Fluids B
Pub Date:
December 1990
□ Electron Acceleration;
□ Nonuniform Plasmas;
□ Plasma Dynamics;
□ Plasma Resonance;
□ Plasma-Electromagnetic Interaction;
□ Plasma-Particle Interactions;
□ Energetic Particles;
□ High Energy Interactions;
□ Particle Acceleration;
□ Particle Trajectories;
□ Plasma Physics | {"url":"https://ui.adsabs.harvard.edu/abs/1990PhFlB...2.3134M/abstract","timestamp":"2024-11-07T23:01:58Z","content_type":"text/html","content_length":"38858","record_id":"<urn:uuid:ecb5f099-c28b-453f-bf82-b1a2420c6495>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00480.warc.gz"} |
Using the smartsizer Package
Power Calculation Examples
We will now look at how to use the computePower function. Consider the following covariance matrices:
\(\texttt{V1}\) corresponds to a SMART in which there are six EDTRs. \(\texttt{V2}\) corresponds to a SMART in which there are four EDTRs.
Let the vector of effect sizes be
Delta1 <- c(0, 0.5, 0.5, 0.5, 0.5, 0.5)
min_Delta1 <- 0.5
Delta2 <- c(0, 0.2, 0.3, 1.5)
min_Delta2 <- 0.2
In both Delta1 and Delta2, we are assuming the first DTR is best. For Delta1 we are being as conservative as possible by assuming all other DTRs are the minimum detectable effect size away. In the
first example, the power is the probability of excluding all EDTRs 0.5 away from the first EDTR or more. In particular, the power is the probability of excluding \(\mathrm{EDTR}_2,...,\mathrm{EDTR}_6
In the second example, we have information that some of the effect sizes are further away from the best than than the min detectable effect size which yields greater power. The power is the
probability of excluding \(\mathrm{EDTR}_2,\mathrm{EDTR}_3,\mathrm{EDTR}_4\) from the set of best.
We assume the type I error rate to be at most \(0.05\) so that the best DTR is included in the set of best with probability at least \(1-\alpha\). The power is computed as follows:
computePower(V1, Delta1, min_Delta1, alpha = 0.05, sample_size = 120)
#> [1] 0.83
computePower(V2, Delta2, min_Delta2, alpha = 0.05, sample_size = 250)
#> [1] 0.64
We see that the power is 83% in the first example and 64% in the second. | {"url":"https://cran.itam.mx/web/packages/smartsizer/vignettes/my-vignette.html","timestamp":"2024-11-09T16:14:51Z","content_type":"text/html","content_length":"37444","record_id":"<urn:uuid:ccee3894-24c7-4101-a0bd-2a1312cc0180>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00467.warc.gz"} |
The Motion of the Test Body in the Gravitation Field of a Moving Source in Lorentz-invariant Gravitation Theory
As is known, all testable predictions of general relativity (GR) are made in the framework of linearized GR, which, in the form of an electromagnetic (EM) theory, is called gravitoelectromagnetic theory (GEM). In the present paper, we compare our proposed EM nonlinear theory of gravity, conditionally called the "Lorentz-invariant gravitation theory" ("LIGT"), with GEM. Since LIGT in the linear approximation can be written in the form of the classical Maxwell-Lorentz equations, it is shown that in this approximation LIGT is able to serve as an alternative to linearized GR. It is pointed out that, although the existence of a quantum theory of gravitation is possible, the theory is useless in practice at the large mass-energy scales at which gravity can be measured. In addition, it is shown
that under the latter conditions the quantum equation coincides with the classical equation. | {"url":"https://www.prespacetime.com/index.php/pst/article/view/730/0","timestamp":"2024-11-07T09:31:47Z","content_type":"application/xhtml+xml","content_length":"16963","record_id":"<urn:uuid:65c17e98-059a-4a54-9c6b-3a1774380619>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00698.warc.gz"} |
Pullback - (Algebraic Geometry) - Vocab, Definition, Explanations | Fiveable
from class:
Algebraic Geometry
In algebraic geometry, a pullback is a way to transfer functions or cohomology classes from one space to another via a morphism or rational map. It allows us to understand how properties and
structures in one geometric setting relate to another, particularly when considering Chow rings and intersection theory or analyzing the behavior of maps between different varieties.
congrats on reading the definition of Pullback. now let's actually learn it.
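As a quick worked illustration (this example is mine, not part of the original entry): if \(f: X \to Y\) is a morphism of varieties and \(g\) is a regular function on \(Y\), the pullback of \(g\) along \(f\) is simply \(f^*g = g \circ f\), a regular function on \(X\). For instance, take \(f: \mathbb{A}^1 \to \mathbb{A}^2\), \(f(t) = (t, t^2)\), whose image is the parabola \(y = x^2\), and let \(g(x, y) = y - x\). Then \(f^*g(t) = g(t, t^2) = t^2 - t\), so information about \(g\) on the target has been transferred back to a function on the source, which is exactly how pullbacks let you compare structures on the two varieties.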
5 Must Know Facts For Your Next Test
1. The pullback operation can be applied to both functions and differential forms, allowing the translation of information between varieties.
2. In the context of Chow rings, the pullback is essential for understanding how classes in one variety relate to those in another under a given morphism.
3. Pullbacks preserve intersections, meaning that the intersection of pulled-back classes corresponds to the intersection in the original varieties.
4. When dealing with rational maps, pullbacks can often be more complicated due to potential indeterminacies or singularities that may arise.
5. The pullback of divisors under a morphism can help analyze the behavior of maps between varieties, especially when investigating their ramification and degree.
Review Questions
• How does the pullback operation facilitate the understanding of morphisms between algebraic varieties?
□ The pullback operation allows us to transfer functions or classes from one variety to another via a morphism, which helps in analyzing how geometric properties are preserved or transformed.
This understanding is crucial for studying relationships between different varieties and helps us investigate their structure through algebraic invariants. By applying the pullback, we can
observe how specific features like divisors or cohomology classes behave under the mapping.
• Discuss how pullbacks interact with Chow rings and their role in intersection theory.
□ Pullbacks are integral to working with Chow rings because they enable us to relate the cycles on different varieties through morphisms. When we take the pullback of a class from one Chow
ring, it gives us insights into how intersections behave when mapped to another variety. This interaction allows us to utilize intersection theory effectively by preserving essential
information about intersections and dimensionality across different spaces.
• Evaluate the significance of pullbacks in both morphisms and rational maps, considering their implications for singularities.
□ Pullbacks play a critical role in both morphisms and rational maps by enabling the transfer of algebraic information across varieties. However, when dealing with rational maps, pullbacks can
lead to complications due to potential indeterminacies where the map is not defined. Understanding these intricacies is vital as it helps mathematicians manage singularities that may arise
during this process, ultimately influencing how properties of the original varieties are represented in the target space.
© 2024 Fiveable Inc. All rights reserved.
APยฎ and SATยฎ are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/algebraic-geometry/pullback","timestamp":"2024-11-11T04:21:12Z","content_type":"text/html","content_length":"149945","record_id":"<urn:uuid:e1d68535-848b-44db-a569-3b03a13dc3bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00524.warc.gz"} |
Organizing Chaos
Numerical Approximation and Discretization
“One of the questions I got at the end of last week was ‘How do you deal with the loneliness?'”
Text: A First Course on Numerical Methods, by Uri M. Ascher and Chen Greif
Prof: Dr. Jessica Bosch
Dr. Jessica Bosch is literally the kindest professor I have ever met. She is super-receptive of feedback and cares (possibly too much) about her students' trials and tribulations. She once spoke
about her current research relating to mathematical models of tumour growth which was pretty interesting. Her slides (along with the helpful summary slides) are usually very clear and well
structured. She also has in-class chapter group quizzes which are very useful in preparing students for the midterm and final.
The content in the course is in some ways very simple: it is a bunch of algorithms for solving well-known math problems. However, I think this simplicity is deceiving and the course hints at a
fundamental aspect of computing – how problems get transformed as they are represented in the computer, and what properties are preserved. From this perspective, the course could be very complicated.
Fortunately, we do not do very complex mathematics in the course, and largely the course is quite manageable. The midterm, given the practice material, was pretty doable. The final was a bit trickier
but was combined with very liberal scaling. The assignments were not bad, though Matlab can get annoying. There was also a bonus assignment which, along with the liberal final scaling, contributed to
the A- average in the course.
Key Concepts
Floating point number systems
Polynomial Interpolation
Numerical Differentiation
Numerical Integration
Numerical Solutions to ODEs
Global error (Convergence) vs local error (Consistency)
Taylor Series
Hard Concepts
Butcher Tableau: Used to represent an instance of general Runge-Kutta method. Easy to get mixed up with the meaning of the different coefficients.
Barycentric weights: Used for efficient computation of Lagrange basis. Not something that would occur to you naturally.
Order of a method: Each chapter seems to have a slightly different meaning of the order and precision of a method. Good to keep track of what it means in a particular context.
Interesting course. A combination of annoying Matlab along with neat mathematics and clever algorithms. | {"url":"https://blogs.ubc.ca/organizingchaos/tag/jessica-bosch/","timestamp":"2024-11-05T13:12:28Z","content_type":"text/html","content_length":"38422","record_id":"<urn:uuid:1275e248-6827-4b86-8e68-42f732370e3a>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00765.warc.gz"} |
G. BUĞDAYCI Et Al. , "Comparison of the CKD EPI equation and the MDRD study equation for estimated glomerular filtration rate," EUROMEDLAB PARIS 2015 , vol.53, pp.1040, 2015
BUĞDAYCI, G. Et Al. 2015. Comparison of the CKD EPI equation and the MDRD study equation for estimated glomerular filtration rate. EUROMEDLAB PARIS 2015 , 1040.
BUĞDAYCI, G., OĞUZMAN, H., ÇİNPOLAT, H. Y., & YİS, Ö. M., (2015). Comparison of the CKD EPI equation and the MDRD study equation for estimated glomerular filtration rate . EUROMEDLAB PARIS 2015
BUĞDAYCI, GÜLER Et Al. "Comparison of the CKD EPI equation and the MDRD study equation for estimated glomerular filtration rate," EUROMEDLAB PARIS 2015, 2015
BUĞDAYCI, GÜLER Et Al. "Comparison of the CKD EPI equation and the MDRD study equation for estimated glomerular filtration rate." EUROMEDLAB PARIS 2015 , pp.1040, 2015
BUĞDAYCI, G. Et Al. (2015) . "Comparison of the CKD EPI equation and the MDRD study equation for estimated glomerular filtration rate." EUROMEDLAB PARIS 2015 , p.1040.
@conferencepaper{conferencepaper, author={GÜLER BUĞDAYCI Et Al. }, title={Comparison of the CKD EPI equation and the MDRD study equation for estimated glomerular filtration rate}, congress name=
{EUROMEDLAB PARIS 2015}, city={}, country={}, year={2015}, pages={1040} } | {"url":"https://avesis.comu.edu.tr/activitycitation/index/1/ed133475-13a6-4cee-bab3-f35ee5b9a418","timestamp":"2024-11-06T01:50:08Z","content_type":"text/html","content_length":"11660","record_id":"<urn:uuid:5e06ab11-6315-423d-9e77-5757e30e0871>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00274.warc.gz"} |
JET Orissa MCA Entrance Exam Syllabus | MCA Exam Syllabus
by admin | Mar 26, 2018 | MCA Entrance Exam Syllabus | 0 comments
JET Orissa MCA Entrance Exam Syllabus
• Pass or appearing in 2007, for the Bachelor’s Degree examination of three years duration in any discipline from any University of Orissa or equivalent recognised by UGC and having passed in
Mathematics at 10+2 level or in +3 level. Business mathematics at +2 level is not permitted.
Pass or appearing in 2007 for the Bachelors degree examination of three years duration in any discipline from any University of Orissa or equivalent with Mathematics as one of the subjects. There is
no age limit for admission to the MCA course.
• All candidates seeking admission to the MCA course will have to appear in Mathematics and Computer Awareness (in one sitting). There will be multiple choice type questions (except some of the questions for the aptitude test for Architecture). The number of questions will be sixty (60) per hour of examination.
1. Logic: Statement, Negation, Implication, Converse, Contra positives, Conjuction, Disjunction, Truth Table.
2. Algebra of Sets: Set operations, Union, Intersection, Difference, Symmetric Difference, Complement, Venn Diagram, Cartesian products of sets, Relation and Function, Composite Function, Inverse of
a Function , Equivalence Relation, Kinds of Function.
3. Number Systems: Real numbers (algebraic and other properties, rational and irrational numbers), Complex numbers, Algebra of complex numbers, Conjugate and square root of a complex number, cube
roots of unity, De-Moivre’s Theorem with simple application. Permutation and combinations and their simple applications, Mathematical induction, Binomial Theorem. Determinants upto third
order, Minors and Cofactors, Properties of determinants. Matrices upto third order, Types of Matrices. Algebra of matrices, Adjoint and Inverse of a matrix. Application of determinants and
matrices to the solution of linear equations (in three unknowns).
4. Trigonometry: Compound angles, multiple and Sub-multiple angles, solution of trigonometric equations, Properties of triangles, Inverse circular function.
5. Co-ordinate Geometry of Two Dimensions: Straight lines, Pairs of straight lines, Circles, Equations of tangents and normals to a circle. Equations of Parabola, Ellipse and Hyperbola, Ellipse and
hyperbola in simple forms and their tangents (Focus, directix, eccentricity and latus rectum in all cases).
6. Co-ordinate Geometry of Three Dimensions: Distance and division formulae, Direction cosines and direction ratios. Projections, Angles between two planes, Angle between a line and a plane,
Distance of a point from a line and plane. Equations of a sphere-general equation.
7. Vectors: Fundamentals, Dot and Cross product of two vectors, Scalar triple product, Simple Applications (to geometry, work and moment).
8. Differential Calculus: Concept of limit, Continuity, Derivatives of standard functions, Successive differentiation (simple cases), Leibnitz Theorem, Partial differentiation (simple cases), Derivatives as rate measure, Maxima and minima, Indeterminate forms, Geometrical applications such as tangents and normals to plane curves.
9. Probability and Statistics: Averages (Mean, Median and Mode), Dispersion (standard deviation and variance). Definition of probability, Mutually exclusive events, Independent events, Addition theorem of probability.
10. Integral Calculus: Standard methods of integration (substitution, by parts, by partial fractions, etc.). Definite integrals and properties of definite integrals, Areas under plane curves, Differential Equations (only simple cases):
I. dy/dx = f(x)
II. dy/dx = f(x)·g(y)
III. d²y/dx² = f(x), and applications to motion in a straight line with constant acceleration.
1. Introduction to Computer: Brief history of Computers, Components of a Computer, Computer related general knowledge, Application of Computers, Classification of Computers, Simple DOS Commands.
2. Computer Arithmetic: Number System with general base, Number base conversion, Elementary arithmetic operation.
3. Basic Language Programming: Flow Charts, Algorithms, Constants, Variables, Arithmetic and logical expression, Elementary BASIC statements, Writing simple programs (using sequence, repetition and
control structures), subscripted Variables, Matrix operations, Functions and Subroutines, Concept of Files.
Note: The Question will cover the entire course and will be multiple choice.
TENTATIVE MONTH: May
Also Check out >> MCA ENTRANCE EXAM SYLLABUS
You must be logged in to post a comment. | {"url":"https://www.successcds.net/examsyllabus/jet-orissa-mca-entrance-exam-syllabus/10873/","timestamp":"2024-11-03T17:24:10Z","content_type":"text/html","content_length":"140247","record_id":"<urn:uuid:b50f8284-ab97-4ccb-b98b-7c0db05e6929>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00246.warc.gz"} |
A Nonstationary Halley’s Iteration Method by Using Divided Differences Formula
A Nonstationary Halley’s Iteration Method by Using Divided Differences Formula
1. Introduction
In order to solve the nonlinear algebraic equations of the form.
where f(x) is a known function. Let α be a simple real root of Equation (1) and let x_0 be an initial approximation to α. An iterative method [1] for solving Equation (1), in form of sequence
of approximations by using the formula:
The most popular iterative process is Newton’s one-point process
We know that Newton’s method is quadratically convergent. This method requires two operations at each iteration: evaluation of f(x_n) and f′(x_n).
Is a divided difference of order K. Kogan [2] extended the secant method as follows:
as a nonstationary iterative process (i.e., the function F depends on the iteration number):
In our study, we suggest a new iterative method for approximation of a simple root α of (1) by using only the Newton’s divided differences of first and second orders instead of the derivatives of the
first and second order.
2. The Principle of the Nonstationary Halley’s Iteration Method by Using Divided Differences Formula
We suggest the following iterative method for approximation of a simple root α of (1):
It is clear that (8) is a nonstationary iterative process,
The iterative method (8) is Halley’s formula (see [3]), but with the first- and second-order divided differences used in place of the first and second derivatives.
1) Cubic error comes from Halley’s iterative process (of third order).
2) Error comes from the approximation of divided differences.
The following example (given by [1] also), illustrates the suggested method
3. The Convergence of the Method
Expanding F(x) about
This yields that
4. Example 1
Consider the equation given in [1]:
which has the root α = √5. Table 1 illustrates the computation by formula (8), started with x_0 = 3 and x_1 = 2. The correct value of the root α to 9 decimal places is 2.236067978.
Table 2 illustrates the computation by formula (6), given in [1], started with the same values x_0 = 3 and x_1 = 2.
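To make the example concrete, here is a small illustrative Python sketch of a derivative-free Halley-type iteration of the kind described above, in which the first and second derivatives are replaced by divided differences built from the most recent iterates. It is a reconstruction for illustration only, not the exact formula (8) of the paper (whose typeset form is not reproduced here); a single secant step supplies the third point needed for the second-order divided difference. The test function x² − 5 is chosen only because it has the root √5 quoted in the example.

def f(x):
    return x**2 - 5.0                    # illustrative function with root sqrt(5) = 2.236067977...

def dd1(g, a, b):                        # first-order divided difference g[a, b]
    return (g(b) - g(a)) / (b - a)

def dd2(g, a, b, c):                     # second-order divided difference g[a, b, c]
    return (dd1(g, b, c) - dd1(g, a, b)) / (c - a)

def halley_divided_differences(g, x0, x1, steps=8):
    xs = [x0, x1, x1 - g(x1) / dd1(g, x0, x1)]   # one secant step gives a third point
    for _ in range(steps):
        a, b, c = xs[-3], xs[-2], xs[-1]
        if b == c or abs(g(c)) < 1e-14:
            break
        d1 = dd1(g, b, c) + (c - b) * dd2(g, a, b, c)   # stands in for g'(x_n)
        d2 = 2.0 * dd2(g, a, b, c)                      # stands in for g''(x_n)
        xs.append(c - 2.0 * g(c) * d1 / (2.0 * d1**2 - g(c) * d2))  # Halley-type step
    return xs

for n, x in enumerate(halley_divided_differences(f, 3.0, 2.0)):
    print(n, f"{x:.9f}")

Starting from x_0 = 3 and x_1 = 2, this sketch reaches 2.236067978 within a few iterations, which mirrors the fast convergence reported in the tables.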
Table 1. Suggested iteration for solving
Table 2. The iteration by formula (6), given in [1] for solving
5. Discussion
Example 1 shows a comparison of convergence for the
suggested iteration method and the iteration given by Tamara Kogan, Luba Sapir and Amir Sapir [1], the result reveals that the correct value of the root α to 9 decimal places 2.236067978 takes one
step more, in addition our suggested method used only the divided differences of first and second order. | {"url":"https://www.scirp.org/journal/paperinformation?paperid=17394","timestamp":"2024-11-12T16:21:25Z","content_type":"application/xhtml+xml","content_length":"82157","record_id":"<urn:uuid:8f7137e2-667c-499c-9687-0c2649a2cde4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00744.warc.gz"} |
nonhomogeneous Archives – H. Paul Keeler
In a previous post, I covered a simple but much used method for simulating random variables or, rather, generating random variates. To simulate a random variable, the method requires, in an easy
fashion, calculating the inverse of its cumulative distribution function. But you cannot always do that.
In lieu of this, the great John von Neumann wrote in a 1951 paper that you can sample a sequence of values from another probability distribution, accepting only the values that meet a certain
condition based on this other distribution and the desired distribution, while rejecting all the others. The accepted values will follow the desired probability distribution. This method of
simulation or sampling is called the rejection method, the acceptance method, and it has even the double-barrelled name the acceptance-rejection (AR) method.
Let \(X\) be a continuous random variable with a (probability) density \(p(x)\), which is the derivative of its cumulative probability distribution \(P(X\leq x)\). The density \(p(x)\) corresponds to
the desired or target distribution from which we want to sample. For whatever reason, we cannot directly simulate the random variable \(X\). (Maybe we cannot use the inverse method because \(P(X\leq
x)\) is too complicated.)
The idea that von Neumann had was to assume that we can easily simulate another random variable, say, \(Y\) with the (probability) density \(q(x)\). The density \(q(x)\) corresponds to a proposal
distribution that we can sample (by using, for example, the inverse method).
Now we further assume that there exists some finite constant \(M>0\) such that we can bound \(p(x)\) by \(Mq(x)\), meaning
$$ p(x) \leq M q(x), \text{ for all } x . $$
Provided this, we can then sample the random variable \(Y\) and accept a value of it (for a value of \(X\)) with probability
$$\alpha = \frac{p(Y)}{Mq(Y)}.$$
If the sampled value of \(Y\) is not accepted (which happens with probability \(1-\alpha\)), then we must repeat this random experiment until a sampled value of \(Y\) is accepted.
We give the pseudo-code for the acceptance-rejection method suggested by von Neumann.
Random variable \(X\) with density \(p(x)\)
1. Sample a random variable \(Y\) with density \(q(x)\), giving a sample value \(y\).
2. Calculate the acceptance probability \(\alpha = \frac{p(y)}{Mq(y)}\).
3. Sample a uniform random variable \(U\sim U(0,1)\), giving a sample value \(u\).
4. Return the value \(y\) (for the value of \(X\)) if \(u\leq \alpha\), otherwise go to Step 1 and repeat.
As covered in a previous post, Steps 3 and 4 are equivalent to accepting the value \(y\) with probability \(\alpha\).
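As a concrete illustration of the pseudo-code, here is a short Python sketch of my own (not code from the original post). It samples from the triangular target density \(p(x) = 2x\) on \([0,1]\) using a uniform proposal \(q(x) = 1\) on \([0,1]\) and the bound \(M = 2\).

import numpy as np

rng = np.random.default_rng(1)

def p(x):
    return 2.0 * x                           # target density on [0, 1]

M = 2.0                                      # bound with p(x) <= M * q(x), where q(x) = 1 on [0, 1]

def accept_reject(n):
    samples, proposals = [], 0
    while len(samples) < n:
        proposals += 1
        y = rng.uniform(0.0, 1.0)            # Step 1: sample Y ~ q
        alpha = p(y) / (M * 1.0)             # Step 2: acceptance probability p(y)/(M q(y))
        if rng.uniform(0.0, 1.0) <= alpha:   # Steps 3 and 4: accept with probability alpha
            samples.append(y)
    return np.array(samples), proposals

x, proposals = accept_reject(10_000)
print("sample mean:", x.mean())              # target mean is 2/3 for p(x) = 2x on [0, 1]
print("acceptance rate:", 10_000 / proposals)  # close to 1/M = 0.5 here

The printed acceptance rate also previews the efficiency point made below: on average it takes about \(M\) proposals per accepted sample when the proposal is this loose.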
Point process application
In the context of point processes, this method is akin to thinning point processes independently. This gives a method for positioning points non-uniformly by first placing the points uniformly. The
method then thins points based on the desired intensity function. As I covered in a previous post, this is one way to simulate an inhomogeneous (or nonhomogeneous) Poisson point process.
Basic probability theory tells us that the number of experiment runs (Steps 1 to 3) until acceptance is a geometric variable with parameter \(\alpha\). On average the acceptance(-rejection) method
will take \(1/\alpha\) number of simulations to sample one value of the random \(X\) of the target distribution. The key then is to make the proposal density \(q(x)\) as small as possible (and adjust
\(M\) accordingly), while still keeping the inequality \(p(x) \leq M q(x)\).
Higher dimensions
The difficulty of the acceptance(-rejection) method is finding a good proposal distribution such that the product \(Mq(x)\) is not much larger than the target density \(p(x)\). In one-dimension, this
can be often done, but in higher dimensions this becomes increasingly difficult. Consequently, this method is typically not used in higher dimensions.
Another approach with an acceptance step is the Metropolis-Hastings method, which is the quintessential Markov chain Monte Carlo (MCMC) method. This method and its cousins have become exceedingly
popular, as they give ways to simulate collections of dependent random variables that have complicated (joint) distributions.
Further reading
The original paper where the acceptance(-rejection) method appears (on page 769 in the right-hand column) is:
• von Neumann, Various techniques used in connection with random digits, 1951.
The usual books on stochastic simulations and Monte Carlo methods will detail this method. For example, see the book by Devroye (Section II.3) or the more recent Handbook of Monte Carlo Methods
(Section 3.1.5) by Kroese, Taimre and Botev. The book Stochastic Simulation: Algorithms and Analysis by Asmussen and Glynn also covers the method in Section 2b.
Other books include those by Fishman (Section 8.5) and Gentle (Section 4.5) respectively.
Poisson (stochastic) process
One of the most important stochastic processes is Poisson stochastic process, often called simply the Poisson process. In a previous post I gave the definition of a stochastic process (also called a
random process) alongside some examples of this important random object, including counting processes. The Poisson (stochastic) process is a counting process. This continuous-time stochastic
process is a highly studied and used object. It plays a key role in different probability fields, particularly those focused on stochastic processes such as stochastic calculus (with jumps) and the
theories of Markov processes, queueing, point processes (on the real line), and Levy processes.
The points in time when a Poisson stochastic process increases form a Poisson point process on the real line. In this setting the stochastic process and the point process can be considered two
interpretations of the same random object. The Poisson point process is often just called the Poisson process, but a Poisson point process can be defined on more general spaces. In some literature,
such as the theory of Lévy processes, a Poisson point process is called a Poisson random measure, differentiating the Poisson point process from the Poisson stochastic process. Due to the connection
with the Poisson distribution, the two mathematical objects are named after Simeon Poisson, but he never studied these random objects.
The other important stochastic process is the Wiener process or Brownian (motion process), which I cover in another post. The Wiener process is arguably the most important stochastic process. I have
written that post and the current one with the same structure and style, reflecting and emphasizing the similarities between these two fundamental stochastic processes.
In this post I will give a definition of the homogenous Poisson process. I will also describe some of its key properties and importance. In future posts I will cover the history and generalizations
of this stochastic process.
In the stochastic processes literature there are different definitions of the Poisson process. These depend on the settings such as the level of mathematical rigour. I give a mathematical definition
which captures the main characteristics of this stochastic process.
Definition: Homogeneous Poisson (stochastic) process
An integer-valued stochastic process \(\{N_t:t\geq 0 \}\) defined on a probability space \((\Omega,\mathcal{A},\mathbb{P})\) is a homogeneous Poisson (stochastic) process if it has the following properties:
1. The initial value of the stochastic process \(\{N_t:t\geq 0 \}\) is zero with probability one, meaning \(P(N_0=0)=1\).
2. The increment \(N_t-N_s\) is independent of the past, that is, \(N_u\), where \(0\leq u\leq s\).
3. The increment \(N_t-N_s\) is a Poisson variable with mean \(\lambda (t-s)\).
In some literature, the initial value of the stochastic process may not be given. Alternatively, it is simply stated as \(N_0=0\) instead of the more precise (probabilistic) statement given above.
Also, some definitions of this stochastic process include an extra property or two. For example, from the above definition, we can infer that increments of the homogeneous Poisson process are
stationary due to the properties of the Poisson distribution. But a definition may include something like the following property, which explicitly states that this stochastic process is stationary.
4. For \(0\leq s\leq t\), the increment \(N_t-N_s\) is equal in distribution to \(N_{t-s}\).
The definitions may also describe the continuity of the realizations of the stochastic process, known as sample paths, which we will cover in the next section.
It’s interesting to compare these defining properties with the corresponding ones of the standard Wiener stochastic process. Both stochastic processes build upon divisible probability distributions.
Using this property, Lévy processes generalize these two stochastic processes.
The definition of the Poisson (stochastic) process means that it has stationary and independent increments. These are arguably the most important properties as they lead to the great tractability of
this stochastic process. The increments are Poisson random variables, implying they can have only positive (integer) values.
The Poisson (stochastic) process exhibits closure properties, meaning that if you apply certain operations, you get another Poisson (stochastic) process. For example, if we sum two independent Poisson processes \(X= \{X_t:t\geq 0 \}\) and \(Y= \{Y_t:t\geq 0 \}\), then the resulting stochastic process \(Z=X+Y = \{Z_t:t\geq 0 \}\) is also a Poisson (stochastic) process. Such properties are useful for proving mathematical results.
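A quick numerical illustration of this superposition property (my own sketch in Python, not from the original post): merging two independent homogeneous Poisson processes with rates \(\lambda_1\) and \(\lambda_2\) gives counts whose mean and variance match those of a Poisson process with rate \(\lambda_1+\lambda_2\).

import numpy as np

rng = np.random.default_rng(2)
lambda1, lambda2, t_end, runs = 1.5, 2.5, 10.0, 100_000

# Counts of two independent Poisson processes on [0, t_end], and of their sum
n1 = rng.poisson(lambda1 * t_end, runs)
n2 = rng.poisson(lambda2 * t_end, runs)
n_sum = n1 + n2

print("mean of summed counts:", n_sum.mean())      # about (lambda1 + lambda2) * t_end = 40
print("variance of summed counts:", n_sum.var())   # also about 40, as for a Poisson variable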
A single realization of a (homogeneous) Poisson stochastic process, where the blue marks show where the process jumps to the next value. In any finite time interval, there are a finite number of jumps.
Properties such as independence and stationarity of the increments are so-called distributional properties. But the sample paths of this stochastic process are also interesting. A sample path of a
Poisson stochastic process is almost surely non-decreasing, being constant except for jumps of size one. (The term almost surely comes from measure theory, but it means with probability one.) There
are only a finite number of jumps in each finite time interval.
The homogeneous Poisson (stochastic) process has the Markov property, making it an example of a Markov process. The homogeneous Poisson process \(N=\{ N_t\}_{t\geq 0}\) is not a martingale. But interestingly, the compensated process \(\{ N_t - \lambda t\}_{t\geq 0}\) is a martingale. (Such relations have been used to study such stochastic processes with tools from martingale theory.)
Stochastic or point process?
The Poisson (stochastic) process is a discrete-valued stochastic process in continuous time. The relation between these types of stochastic processes and point processes is a subtle one. For example, David Cox and Valerie Isham write on page 3 of their monograph:
The borderline between point processes and a number of other kinds of stochastic process is not sharply defined. In particular, any stochastic process in continuous time in which the sample
paths are step functions, and therefore any process with a discrete state space, is associated with a point process, where a point is a time of transition or, more generally, a time of entry
into a pre-assigned state or set of states. Whether it is useful to look at a particular process in this way depends on the purpose of the analysis.
For the Poisson case, this association is presented in the diagram below. We can see the Poisson point process (in red) associated with the Poisson (stochastic) process (in blue) by simply looking at
the time points where jumps occur.
A single realization of a (homogeneous) Poisson stochastic process (in blue). The jumps of the process form a (homogeneous) Poisson point process (in red) on the real line representing time.
Playing a prominent role in the theory of probability, the Poisson (stochastic) process is a highly important and studied stochastic process. It has connections to other stochastic processes and is
central in queueing theory and random measures.
The Poisson process is a building block for more complex continuous-time Markov processes with discrete state spaces, which are used as mathematical models. It is also essential in the study of jump
processes and subordinators.
The Poisson (stochastic) process is a member of some important families of stochastic processes, including Markov processes, Lévy processes, and birth-death processes. This stochastic process also
has many applications. For example, it plays a central role in quantitative finance. It is also used in the physical sciences as well as some branches of social sciences, as a mathematical model for
various random phenomena.
Generalizations and modifications
For the Poisson (stochastic) process, the index set and state space are respectively the non-negative numbers and counting numbers, that is \(T=[0,\infty)\) and \(S=\{0, 1, \dots\}\), so it has a continuous index set but a discrete state space. Consequently, changing the state space, index set, or both offers ways for generalizing and modifying the Poisson (stochastic) process.
The defining properties of the Poisson stochastic process, namely independence and stationarity of increments, result in it being easy to simulate. The Poisson stochastic process can be simulated provided random variables can be simulated or sampled according to a Poisson distribution, which I have covered in this and this post.
Simulating a Poisson stochastic process is similar to simulating a Poisson point process. (Basically, it is the same method in a one-dimensional setting.) But I will leave the details of sampling
this stochastic process for another post.
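That said, here is a minimal sketch of one standard approach, written in Julia with the Distributions package (the variable names and values here are my own): simulate independent exponential inter-arrival times with rate \(\lambda\), take their cumulative sums as the jump times, and count the jumps up to any time of interest.

using Distributions # for the Exponential distribution

lambda = 2.0; # rate of the homogeneous Poisson process
tMax = 10.0;  # simulate the process on the interval [0, tMax]

# Simulate more inter-arrival times than are likely needed, then truncate.
numbGuess = ceil(Int, 3 * lambda * tMax);
interArrival = rand(Exponential(1 / lambda), numbGuess); # mean 1/lambda
jumpTimes = cumsum(interArrival);
jumpTimes = jumpTimes[jumpTimes .<= tMax]; # jump times falling in [0, tMax]

# The value of the process at time t is the number of jumps up to t.
N(t) = count(jumpTimes .<= t);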
Further reading
Here are some related links:
A very quick history of Wiener process and the Poisson (point and stochastic) process is covered in this talk by me.
In terms of books, the Poisson process has not received as much attention as the Wiener process, which is typically just called the Brownian (motion) process. That said, any book covering queueing
theory will cover the Poisson (stochastic) process.
More advanced readers can read about the Poisson (stochastic) process, the Wiener (or Brownian (motion)) process, and other Lévy processes:
• Kyprianou, Fluctuations of Lévy Processes with Applications;
• Bertoin, Lévy Processes;
• Applebaum, Lévy Processes and Stochastic Calculus.
On this topic, I recommend the introductory article:
• 2004, Applebaum, Lévy Processes – From Probability to Finance and Quantum Groups.
This stochastic process is, of course, also covered in general books on stochastic processes.
Summary: Poisson simulations
Here’s a lazy summary post where I list all the posts on various Poisson simulations. I’ve also linked to code, which is found on this online repository. The code is typically written in MATLAB and
Poisson point processes
Some simulations of Poisson point processes are also covered in this post on the Julia programming language:
Checking simulations
Poisson line process
Poisson random variables
Testing the Julia language with point process simulations
I started writing these posts (or blog entries) about a year ago. In my first post I remarked how I wanted to learn to write stochastic simulations in a new language. Well, I found one. It’s called
Julia. Here’s my code. And here are my thoughts.
For scientific programming, the Julia language has arisen as a new contender. Originally started in 2012, its founders and developers have (very) high aspirations, wanting the language to be powerful
and accessible, while still having run speeds comparable to C. There’s been excitement about it, and even a Nobel Laureate in economics, Thomas Sargent, has endorsed it. He co-founded the QuantEcon
project, whose website has this handy guide or cheat sheet for commands between MATLAB, Python and Julia.
That guide suggests that Julia's main syntax inspiration comes from MATLAB. But perhaps its closest (and greatest) competitor in scientific programming languages is Python, which has become a standard language used in scientific programming, particularly in machine learning. Another competitor is the statistics language R, which is popular for data science. But R is not renowned for its speed.
As an aside, machine learning is closely related to what many call data science. I consider the two disciplines as largely overlapping with statistics, where their respective emphases are on theory
and practice. In these fields, often the languages Python and R are used. There are various websites discussing which language is better, such as this one, which in turn is based on this one. In
general, it appears that computer scientists and statisticians respectively prefer using Python and R.
Returning to the Julia language, given its young age, the language is still very much evolving, but I managed to find suitable Julia functions for stochastic simulations. I thought I would try it out
by simulating some point processes, which I have done several times before. I successfully ran all my code with Julia Version 1.0.3.
In short, I managed to replicate in (or even translate to) Julia the code that I presented in the following posts:
Simulating a homogeneous Poisson point process on a rectangle
Simulating a Poisson point process on a disk
Simulating a Poisson point process on a triangle
Simulating an inhomogeneous Poisson point process
Simulating a Matérn cluster point process
Simulating a Thomas cluster point process
The Julia code, like all the code I present here, can be found on my Github repository, which for this post is located here.
Language type and syntax
The Wikipedia article on Julia says:
Julia is a high-level general-purpose dynamic programming language designed for high-performance numerical analysis and computational science.
Scientific programming languages like the popular three MATLAB, R and Python, are interpreted languages. But the people behind Julia say:
it is a flexible dynamic language, appropriate for scientific and numerical computing, with performance comparable to traditional statically-typed languages.
Because Julia’s compiler is different from the interpreters used for languages like Python or R, you may find that Julia’s performance is unintuitive at first.
I already remarked that Julia’s syntax is clearly inspired by MATLAB, as one can see in this guide for MATLAB, Python and Julia. But there are key differences. For example, to access an array entry
in Julia, you use square brackets (like in most programming languages), whereas parentheses are used in MATLAB or, that old mathematical programming classic, Fortran, which is not a coincidence.
Julia requires you to install certain packages or libraries, like most languages. For random simulations and plots, you have to install the respective Julia packages Distributions and Plots, which is
done by running the code.
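For example, with Julia's built-in package manager (one way of doing it):

using Pkg;
Pkg.add("Distributions");
Pkg.add("Plots");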
After doing that, it’s best to restart Julia. These packages are loaded with the using command:
using Distributions;
using Plots;
Also, the first time it takes a while to run any code using those newly installed packages.
I should stress that there are different plotting libraries. But Plots, which contains many plotting libraries, is the only one I could get working. Another is PyPlot, which uses the Python plotting library Matplotlib.
As a beginner, it seems to me that the Julia community has not focused too much on developing new plotting functions, and has instead leveraged pre-existing libraries.
For standard scientific and statistical programming, you will usually also need the packages LinearAlgebra and Statistics.
Data types
Unlike MATLAB or R, Julia is a language that has different data types for numbers, such as integers and floating-point numbers (or floats). This puts Julia in agreement with the clear majority of
languages, making it nothing new for most programmers. This is not a criticism of the language, but this can be troublesome if you’ve grown lazy after years of using MATLAB and R.
Simulating random variables
In MATLAB, R and Python, we just need to call a function for simulating uniform, Poisson, and other random variables. There's usually a function for each type of random variable (or probability distribution).
Julia does simulation of random objects in a more, let’s say, object-oriented way (but I’m told, it’s not an object-oriented language). The probability distributions of random variables are objects,
which are created and then sent to a general function for random generation. For example, here’s the code for simulating a Poisson variable with mean \(\mu=10\).
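A sketch of what that looks like (the variable names here are my own; the point is that a distribution object is created and then passed to rand):

mu = 10; # mean of the Poisson variable
distPoisson = Poisson(mu);       # create a Poisson distribution object
numbPoisson = rand(distPoisson); # sample a single Poisson variable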
Similarly, here’s how to simulate a normal variable with mean \(\mu=10\) and standard deviation \(\sigma=1\).
Of course the last two lines can be collapsed into one.
But if you just want to create standard uniform variables on the interval (0,1), then the code is like that in MATLAB. For example, this code creates a \(4\times3\) matrix (or array) \(X\) whose
entries are simulation outcomes of independent uniform random variables:
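A minimal version of that code:

X = rand(4, 3); # 4x3 array of independent uniform(0,1) samples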
The resulting matrix \(X\) is a Float64 array.
The indexing of arrays in Julia starts at one, just like MATLAB, R, or Fortran. When you apply a function to an array, you generally need to use the dot notation. For example, if I try to run the code
Y=sqrt(rand(10,1)); #This line will result in an error.
then on my machine (with Julia Version 1.0.3) I get the error:
ERROR: DimensionMismatch("matrix is not square: dimensions are (10, 1)")
But this code works:
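Presumably something along these lines, with the dot operator applied to sqrt:

Y=sqrt.(rand(10,1)); #the dot applies sqrt elementwise, so no error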
Also, adding scalars to arrays can catch you out in Julia, as you also often need to use the dot notation. This code:
Z=Y+1; #This line will result in an error.
gives the error:
ERROR: MethodError: no method matching +(::Array{Float64,2}, ::Int64)
This is fixed by adding a dot:
Z=Y.+1; #This line will work.
Note that the dot has to be on the left-hand side of the plus sign. I ended up just using dot notation every time to be safe.
Other traps exist. For example, with indexing, you need to convert floats to integers if you want to use them as indices.
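A small illustrative example of that trap:

x = rand(5);
i = 2.0;     # a float, perhaps produced by an earlier calculation
# x[i];      # this would throw an error (invalid index of type Float64)
x[Int(i)];   # convert the float to an integer before indexing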
Repeating array elements
There used to be a Julia function called repmat, like the one in MATLAB, but it was merged with a function called repeat. I used such repeating operations to avoid explicit for-loops, which is
generally advised in languages like MATLAB and R. For example, I used the repelem function in MATLAB to simulate Matérn and Thomas cluster point processes. To do this in Julia, I had to use this
nested construction:
y=vcat(fill.(x, n)...);
This line means that the first value in \(x \) is repeated \(n[1]\) times, where \(n[1]\) is the first entry of \(n\) (as indexing in Julia starts at one), then the second value of \(x\) is repeated
\(n[2]\) times, and so on. For example, for the vectors \(x=[7,4,9]\) and \(n=[2,1,3]\), the answer is \(y=[7,7,4,9,9,9]\).
To do this in Julia, the construction is not so bad, if you know how, but it's not entirely obvious. In MATLAB I use the repelem function mentioned above:

y=repelem(x,n);

Similarly in Python (with NumPy):

y=np.repeat(x,n);
Different versions of Julia
I found that certain code would work (or not work) and then later the same code would not work (or would work) on machines with different versions of Julia, demonstrating how the language is still
being developed. More specifically, I ran code on Julia Version 1.0.3 (Date 2018-12-18) and Julia Version 0.6.4 (Date: 2018-07-09). (Note how there’s only a few months difference in the dates of the
two versions.)
Consider the code with the errors (due to the lack of dot operator) in the previous section. The errors occurred on one machine with Julia Version 1.0.3, but the errors didn’t occur on another
machine with the older Julia Version 0.6.4. For a specific example, the code:
Z=Y+1; #This line will not result in an error on Version 0.6.4.
gives no error with Julia Version 0.6.4, while I have already discussed how it gives an error with Julia Version 1.0.3.
For another example, I copied from this MATLAB-Python-Julia guide the following command:
A = Diagonal([1,2,3]); #This line will (sometimes?) result in an error.
It runs on machine with Julia Version 0.6.4 with no problems. But running it on the machine with Julia Version 1.0.3 gives the error:
ERROR: UndefVarError: Diagonal not defined
That’s because I have not used the LinearAlgebra package. Fixing this, the following code:
using LinearAlgebra; #Package needed for Diagonal command.
A = Diagonal([1,2,3]); #This line should now work.
gives no error with Julia Version 1.0.3.
If you have the time and energy, you can search the internet and find online forums where the Julia developers have discussed why they have changed something, rendering certain code unworkable with
the latest versions of Julia.
It seems that performing optimization on functions is done with the Optim package.
But some functions need the Linesearches package, so it’s best to install that as well.
Despite those two optimization packages, I ended up using yet another package called BlackBoxOptim.
In this package, I used a function called bboptimize. This is the first optimization function that I managed to get working. I do not know how it compares to the functions in the Optim and
Linesearches packages.
In a previous post, I used optimization functions to simulate an inhomogeneous or nonhomogeneous Poisson point process on a rectangle. I've also written Julia code for this simulation, which is found below. I used bboptimize, but I had some problems when I initially set the search regions to integers, which the package did not like, as the values need to be floats. That's why I multiply the rectangle dimensions by \(1.0\) in the following code:
boundSearch=[(1.0xMin,1.0xMax), (1.0yMin, 1.0yMax)]; #bounds for search box
#WARNING: Values of boundSearch cannot be integers!
resultsOpt=bboptimize(fun_Neg;SearchRange = boundSearch);
lambdaNegMin=best_fitness(resultsOpt); #retrieve minimum value found by bboptimize
In this brief experiment, I found the language Julia good for doing stochastic simulations, but too tricky for doing simple things like plotting. Also, depending on the version of Julia, sometimes my
code would work and sometimes it wouldn’t. No doubt things will get better with time.
Further reading
As I said, Julia is still very much an ongoing project. Here’s a couple of links that helped me learn the basics.
I’ve only posted here code for some of simulations, but the rest of the code is available on my GitHub repository located here. You can see how the code is comparable to that of MATLAB.
Poisson point process on a rectangle
I wrote about this point process here. The code is located here.
using Distributions #for random simulations
using Plots #for plotting
#Simulation window parameters
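xMin=0;xMax=1;yMin=0;yMax=1; #rectangle bounds (assumed values; a unit square is used here for illustration)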
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
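areaTotal=xDelta*yDelta; #area of rectangle (this line mirrors the MATLAB code and is needed below)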
#Point process parameters
lambda=100; #intensity (ie mean density) of the Poisson process
#Simulate Poisson point process
numbPoints=rand(Poisson(areaTotal*lambda)); #Poisson number of points
xx=xDelta*rand(numbPoints,1).+xMin;#x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1)).+yMin;#y coordinates of Poisson points
plot1=scatter(xx,yy,xlabel ="x",ylabel ="y", leg=false);
Inhomogeneous Poisson point process on a rectangle
I wrote about this point process here. The code is located here.
using Distributions #for random simulations
using Plots #for plotting
using BlackBoxOptim #for blackbox optimizing
#Simulation window parameters
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
s=0.5; #scale parameter
#Point process parameters
function fun_lambda(x,y)
100*exp.(-(x.^2+y.^2)/s^2); #intensity function
end
###START -- find maximum lambda -- START ###
#For an intensity function lambda, given by function fun_lambda,
#finds the maximum of lambda in a rectangular region given by
#NOTE: Need xMin, xMax, yMin, yMax to be floats eg xMax=1. See boundSearch
function fun_Neg(x)
-fun_lambda(x[1],x[2]); #negative of lambda
end
xy0=[(xMin+xMax)/2.0,(yMin+yMax)/2.0];#initial value(ie centre)
#Find largest lambda value
boundSearch=[(1.0xMin,1.0xMax), (1.0yMin, 1.0yMax)];
#WARNING: Values of boundSearch cannot be integers!
resultsOpt=bboptimize(fun_Neg;SearchRange = boundSearch);
lambdaNegMin=best_fitness(resultsOpt); #retrieve minimum value found by bboptimize
###END -- find maximum lambda -- END ###
#define thinning probability function
function fun_p(x,y)
#Simulate a Poisson point process
numbPoints=rand(Poisson(areaTotal*lambdaMax)); #Poisson number of points
xx=xDelta*rand(numbPoints,1).+xMin;#x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1)).+yMin;#y coordinates of Poisson points
#calculate spatially-dependent thinning probabilities
#Generate Bernoulli variables (ie coin flips) for thinning
booleRetained=rand(numbPoints,1).<p; #points to be retained
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];
plot1=scatter(xxRetained,yyRetained,xlabel ="x",ylabel ="y", leg=false);
Thomas point process on a rectangle
I wrote about this point process here. The code is located here.
using Distributions #for random simulations
using Plots #for plotting
#Simulation window parameters
#Parameters for the parent and daughter point processes
lambdaParent=10;#density of parent Poisson point process
lambdaDaughter=10;#mean number of points in each cluster
sigma=0.05; #sigma for normal variables (ie random locations) of daughters
#Extended simulation windows parameters
rExt=7*sigma; #extension parameter
#for rExt, use factor of deviation sigma eg 6 or 7
#rectangle dimensions
areaTotalExt=xDeltaExt*yDeltaExt; #area of extended rectangle
#Simulate Poisson point process
numbPointsParent=rand(Poisson(areaTotalExt*lambdaParent)); #Poisson number of points
#x and y coordinates of Poisson points for the parent
#Simulate Poisson point process for the daughters (ie final poiint process)
numbPoints=sum(numbPointsDaughter); #total number of points
#Generate the (relative) locations in Cartesian coordinates by
#simulating independent normal variables
#replicate parent points (ie centres of disks/clusters)
xx=vcat(fill.(xxParent, numbPointsDaughter)...);
yy=vcat(fill.(yyParent, numbPointsDaughter)...);
#Shift centre of disk to (xx0,yy0)
#thin points if outside the simulation window
#retain points inside simulation window
plot1=scatter(xx,yy,xlabel ="x",ylabel ="y", leg=false);
Checking Poisson point process simulations
In previous posts I described how to simulate homogeneous Poisson point processes on a rectangle, disk and triangle. Then I covered how to randomly thin a point process in a spatially dependent
manner. Building off these posts, I wrote in my last post how to simulate an inhomogeneous or nonhomogeneous Poisson point process. Now I’ll describe how to verify that the simulation code correctly
simulates the desired Poisson point process.
Although this post is focused on the Poisson point process, I stress that parts of the material hold for all point processes. Also, not surprisingly, some of the material and the code overlaps with
that presented in the post on the inhomogeneous point process.
Any Poisson point process is defined with a measure called the intensity measure or mean measure, which I’ll denote by \(\Lambda\). For practical purposes, I assume that the intensity measure \(\
Lambda\) has a derivative \(\lambda(x,y)\), where \(x\) and \(y\) denote the Cartesian coordinates. The function \(\lambda(x,y)\) is often called the intensity function or just intensity. I assume
this function is bounded, so \(\lambda(x,y)<\infty\) for all points in a simulation window \(W\). Finally, I assume that the simulation window \(W\) is a rectangle.
Several times before I have mentioned that simulating a Poisson point process requires simulating two random components: the number of points and the locations of points. Working backwards, to check
a Poisson simulation, we must run the Poisson simulation a large number of times (say \(10^3\) or \(10^4\)), and collect the statistics on these two properties. We’ll start by examining the easiest
of the two random components.
Number of points
For any Poisson point process, the number of points is a Poisson random variable with a parameter (that is, a mean) \(\Lambda(W)\). Under our previous assumptions, this is given by the surface
$$\Lambda(W)=\int_W \lambda(x,y)dxdy.$$
Presumably we can evaluate such an integral analytically or numerically in order to simulate the Poisson point process. To check that we correctly simulate the random the number of points, we just
need to simulate the point process a large number of times, and compare the statistics to those given by the analytic expressions.
By definition, the intensity measure of a point process gives the average or expected number of points in a given region. As the number of simulations increases, the (sample)
average number of points will converge to the intensity measure \(\Lambda(W)\). I should stress that this is a test for the intensity measure, a type of first moment, which will work for the
intensity measure of any point process.
For Poisson point processes, there is another moment test that can be done. It can be shown mathematically that the variance of the number of points will also converge to the intensity measure \(\
Lambda(W)\), giving a second empirical test based on second moments. There is no point process theory here, as this moment result is simply due to the number of points being distributed according to
a Poisson distribution. The second moment is very good for checking Poissonness, forming the basis for statistical tests. If this and the first moment test hold, then there’s a very strong chance the
number of points is a Poisson variable.
Empirical distribution
Beyond the first two moments, an even more thorough test is to estimate an empirical distribution for the number of points. That is, we perform a histogram on the number of points, and then we
normalize it, so the total mass sums to one, producing an empirical probability distribution for the number of points.
The results should closely match the probability mass function of a Poisson distribution with parameter \(\Lambda(W)\). This is simply
$$ P(N=n)= \frac{[\Lambda(W)]^n}{n!} e^{-\Lambda(W)}, $$
where \(N\) is the random number of points in the window \(W\), which has the area \(|W|\). If the empirical distribution is close to results given by the above expression, then we can confidently
say that the number of points is a Poisson random variable with the right parameter or mean.
Locations of points
To check the positioning of points, we can empirically estimate the intensity function \(\lambda(x,y)\). A simple way to do this is to perform a two-dimensional histogram on the point locations. This
is very similar to the one-dimensional histogram, which I suggested to do for testing the number of points. It just involves counting the number of points that fall into two-dimensional
non-overlapping subsets called bins.
To estimate the intensity function, each bin count needs to be rescaled by the area of the bin, giving a density value for each bin. This empirical estimate of the intensity function should resemble
the true intensity function \(\lambda(x,y)\). For a visual comparison, we can use a surface plot to illustrate the two sets of results.
This procedure will work for estimating the intensity function of any point process, not just a Poisson one.
Advanced tests
In spatial statistics there are more advanced statistical tests for testing how Poisson a point pattern is. But these tests are arguably too complicated for checking a simple point process that is
rather easy to simulate. Furthermore, researchers usually apply these tests to a small number of point patterns. In this setting, it is not possible to accurately obtain empirical distributions
without further assumptions. But with simulations, we can generate many simulations and obtain good empirical distributions. In short, I would not use such tests for just checking that I have
properly coded a Poisson simulation.
I produced the results with ten thousand simulations, which gave good results and took only a few seconds to complete on a standard desktop computer. Clearly increasing the number of simulations
increases the accuracy of the statistics, but it also increases the computation time.
For the results, I used the intensity function
and \(s>0\) is a scale parameter. We can see that this function has two maxima or peaks at \((-0.5,-0.5)\) and \((0.5,0.5)\).
Further reading
I have not covered much new theoretical stuff in this post, so looking at the references in previous posts, such as this one, should help.
For two-dimensional histograms, I recommend going to the respective MATLAB and Python function websites. Here’s an example of a two-dimensional histogram implemented in Python.
Probably the most difficult part for me was performing the plotting in Python. I recommend these links:
All code from my posts, as always, can be found on my GitHub repository. The code for this post is located here.
Estimating the statistical moments is standard. Performing the histograms is also routine, but when normalizing, you have to choose the option that returns the empirical estimate of the
probability density function (pdf).
Fortunately, scientific programming languages usually have functions for performing two-dimensional histograms. What is a bit tricky is how to normalize or rescale the bin counts. The histogram
functions can, for example, divide by the number of simulations, the area of each bin, or both. In the end, I chose the pdf option in both MATLAB and Python to give an empirical estimate of the
probability density function, and then multiplied it by the average number of points, which was calculated in the previous check. (Although, I could have done this in a single step in MATLAB, but not
in Python, so I chose to do it in a couple of steps in both languages so the code matches more closely.)
I used the surf function to plot the intensity function and its estimate; see below for details on the histograms.
close all;
%Simulation window parameters
xDelta=xMax-xMin;yDelta=yMax-yMin; %rectangle dimensions
areaTotal=xDelta*yDelta; %area of rectangle
numbSim=10^4; %number of simulations
s=0.5; %scale parameter
%Point process parameters
fun_lambda=@(x,y)(100*exp(-((x).^2+(y).^2)/s^2));%intensity function
%%%START -- find maximum lambda -- START %%%
%For an intensity function lambda, given by function fun_lambda,
%finds the maximum of lambda in a rectangular region given by
funNeg=@(x)(-fun_lambda(x(1),x(2))); %negative of lambda
%initial value(ie centre)
xy0=[(xMin+xMax)/2,(yMin+yMax)/2];%initial value(ie centre)
%Set up optimization step
%Find largest lambda value
%%%END -- find maximum lambda -- END%%%
%define thinning probability function
%for collecting statistics -- set numbSim=1 for one simulation
numbPointsRetained=zeros(numbSim,1); %vector to record number of points
for ii=1:numbSim
%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambdaMax);%Poisson number of points
xx=xDelta*(rand(numbPoints,1))+xMin;%x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1))+yMin;%y coordinates of Poisson points
%calculate spatially-dependent thinning probabilities
%Generate Bernoulli variables (ie coin flips) for thinning
booleRetained=rand(numbPoints,1)<p; %points to be retained
%x/y locations of retained points
xxRetained=xx(booleRetained); yyRetained=yy(booleRetained);
%collect number of points simulated
numbPointsRetained(ii)=length(xxRetained);
end
%Plotting
plot(xxRetained,yyRetained,'bo'); %plot retained points
xlabel('x');ylabel('y');
%run empirical test on number of points generated
if numbSim>=10
%total mean measure (average number of points)
%Test: as numbSim increases, numbPointsMean converges to LambdaNumerical
%Test: as numbSim increases, numbPointsVar converges to LambdaNumerical
For the histogram section, I used the histcounts and histcounts2 functions respectively to estimate the distribution of the number of points and the intensity function. I used the pdf option.
Number of points
Locations of points
I used the Matplotlib library to plot the intensity function and its estimate; see below for details on the histograms.
import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt #for plotting
from matplotlib import cm #for heatmap plotting
from mpl_toolkits import mplot3d #for 3-D plots
from scipy.optimize import minimize #for optimizing
from scipy import integrate #for integrating
from scipy.stats import poisson #for the Poisson probability mass function
plt.close("all"); #close all plots
#Simulation window parameters
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
numbSim=10**4; #number of simulations
numbBins=30; #number of bins for histogram
#Point process parameters
s=0.5; #scale parameter
def fun_lambda(x,y):
#intensity function
return lambdaValue;
###START -- find maximum lambda -- START ###
#For an intensity function lambda, given by function fun_lambda,
#finds the maximum of lambda in a rectangular region given by
def fun_Neg(x):
return -fun_lambda(x[0],x[1]); #negative of lambda
xy0=[(xMin+xMax)/2,(yMin+yMax)/2];#initial value(ie centre)
#Find largest lambda value
resultsOpt=minimize(fun_Neg,xy0,bounds=((xMin, xMax), (yMin, yMax)));
lambdaNegMin=resultsOpt.fun; #retrieve minimum value found by minimize
###END -- find maximum lambda -- END ###
#define thinning probability function
def fun_p(x,y):
return fun_lambda(x,y)/lambdaMax;
#for collecting statistics -- set numbSim=1 for one simulation
numbPointsRetained=np.zeros(numbSim); #vector to record number of points
xxAll=[]; yyAll=[];
### START -- Simulation section -- START ###
for ii in range(numbSim):
#Simulate a Poisson point process
numbPoints = np.random.poisson(lambdaMax*areaTotal);#Poisson number of points
xx = xDelta*np.random.uniform(0,1,numbPoints)+xMin;#x coordinates of Poisson points
yy = yDelta*np.random.uniform(0,1,numbPoints)+yMin;#y coordinates of Poisson points
#calculate spatially-dependent thinning probabilities
#Generate Bernoulli variables (ie coin flips) for thinning
booleRetained=np.random.uniform(0,1,numbPoints)<p; #points to be retained
#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];
xxAll.extend(xxRetained); yyAll.extend(yyRetained);
### END -- Simulation section -- END ###
#Plotting a simulation
fig1 = plt.figure();
plt.scatter(xxRetained,yyRetained, edgecolor='b', facecolor='none');
plt.xlabel("x"); plt.ylabel("y");
plt.title('A single realization of a Poisson point process');
#run empirical test on number of points generated
###START -- Checking number of points -- START###
#total mean measure (average number of points)
LambdaNumerical=integrate.dblquad(fun_lambda,xMin,xMax,lambda x: yMin,lambda y: yMax)[0];
#Test: as numbSim increases, numbPointsMean converges to LambdaNumerical
#Test: as numbSim increases, numbPointsVar converges to LambdaNumerical
pdfEmp, binEdges=np.histogram(numbPointsRetained, bins=binEdges,density=True);
#analytic solution of probability density
fig2 = plt.figure();
plt.scatter(nValues,pdfExact, color='b', marker='s',facecolor='none',label='Exact');
plt.scatter(nValues,pdfEmp, color='r', marker='+',label='Empirical');
plt.xlabel("n"); plt.ylabel("P(N=n)");
plt.title('Distribution of the number of points');
###END -- Checking number of points -- END###
###START -- Checking locations -- START###
#2-D Histogram section
p_Estimate, xxEdges, yyEdges = np.histogram2d(xxAll, yyAll,bins=numbBins,density=True);
X, Y = np.meshgrid(xxValues,yyValues) #create x/y matrices for plotting
#analytic solution of probability density
#Plot empirical estimate
fig3 = plt.figure();
plt.rc('text', usetex=True);
plt.rc('font', family='serif');
surf = ax.plot_surface(X, Y,lambda_Estimate,cmap=plt.cm.plasma);
plt.xlabel("x"); plt.ylabel("y");
plt.title('Estimate of $\lambda(x)$');
plt.locator_params(axis='x', nbins=5);
plt.locator_params(axis='y', nbins=5);
plt.locator_params(axis='z', nbins=3);
#Plot exact expression
surf = ax.plot_surface(X, Y,lambda_Exact,cmap=plt.cm.plasma);
plt.xlabel("x"); plt.ylabel("y");
plt.title('True $\lambda(x)$');
plt.locator_params(axis='x', nbins=5);
plt.locator_params(axis='y', nbins=5);
plt.locator_params(axis='z', nbins=3);
###END -- Checking locations -- END###
For the histogram section, I used the histogram and histogram2d functions respectively to estimate the distribution of the number of points and the intensity function. I used the pdf option. (The SciPy website recommends not using the normed option.)
Number of points
np.histogram(numbPointsRetained, bins=binEdges,density=True);
Locations of points
np.histogram2d(xxArrayAll, yyArrayAll,bins=numbBins,density=True);
Simulating an inhomogeneous Poisson point process
In previous posts I described how to simulate homogeneous Poisson point processes on a rectangle, disk and triangle. But here I will simulate an inhomogeneous or nonhomogeneous Poisson point process.
(Both of these terms are used, where the latter is probably more popular, but I prefer the former.) For such a point process, the points are not uniformly located on the underlying mathematical space
on which the Poisson process is defined. This means that certain regions will, on average, tend to have more (or less) points than other regions of the underlying space.
Any Poisson point process is defined with a non-negative measure called the intensity or mean measure. I make the standard assumption that the intensity measure \(\Lambda\) has a derivative \(\lambda
(x,y)\). (I usually write a single \(x\) to denote a point on the plane, that is \(x\in \mathbb{R}^2\), but in this post I will write the \(x\) and \(y\) coordinates separately.) The function \(\
lambda(x,y)\) is often called the intensity function or just intensity, which I further assume is bounded, so \(\lambda(x,y)<\infty\) for all points in a simulation window \(W\). Finally, I assume
that the simulation window \(W\) is a rectangle, but later I describe how to lift that assumption.
Number of points
To simulate a point process, the number of points and the point locations in the simulation window \(W\) are needed. For any Poisson point process, the number of points is a Poisson random variable
with a parameter (that is, a mean) \(\Lambda(W)\), which under our previous assumptions is given by the integral
$$\Lambda(W)=\int_W \lambda(x,y)dxdy. $$
Assuming we can evaluate such an integral analytically or numerically, then the number of points is clearly not difficult to simulate.
Locations of points
The difficulty lies in randomly positioning the points. But a defining property of the Poisson point process is its independence, which allows us to treat each point completely independently.
Positioning each point then comes down to suitably simulating two (or more) random variables for Poisson point processes in two (or higher) dimensions. Similarly, the standard methods used for
simulating continuous random variables can be applied to simulating random point locations of a Poisson point process.
In theory, you can rescale the intensity function with the total measure of the simulation window, giving
$$f(x,y):=\frac{\lambda(x,y)}{\Lambda(W)}. $$
We can then interpret this rescaled intensity function \(f(x,y)\) as the joint probability density of two random variables \(X\) and \(Y\), because it integrates to one,
$$\int_W f(x,y)dxdy=1.$$
Clearly the method for simulating an inhomogeneous Poisson point process depends on the nature of the intensity function. For the inhomogeneous case, the random variables \(X\) and \(Y\) are, in
general, not independent.
To simulate an inhomogeneous Poisson point process, one method is to first simulate a homogeneous one, and then suitably transform the points according to deterministic function. For simple random
variables, this transformation method is quick and easy to implement, if we can invert the probability distribution. For example, a uniform random variable \(U\) defined on the interval \((0,1)\) can
be used to give an exponential random variable by applying the transformation \(h(u)= -(1/\lambda)\log(u)\), where \(\lambda>0\), meaning \(h(U)\) is an exponential random variable with parameter \(\
lambda>0\) (or mean \(1/\lambda\)).
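As a quick sanity check of that inverse-transform example, here is a small Julia sketch (the numbers are illustrative):

lambda = 2.0;
U = rand(10^6);             # uniform(0,1) samples
X = -log.(U) / lambda;      # transformed samples, exponential with rate lambda
println(sum(X) / length(X)) # sample mean, which should be close to 1/lambda = 0.5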
Similarly for Poisson point processes, the transformation approach is fairly straightforward in a one-dimensional setting, but generally doesn’t work easily in two (or higher) dimensions. The reason
being that often we cannot simulate the random variables \(X\) and \(Y\) independently, which means, in practice, we need first to simulate one random variable, then the other.
It is a bit easier if we can re-write the rescaled intensity function or joint probability density \(f(x,y)\) as a product of single-variable functions \(f_X(x)\) and \(f_Y(y)\), meaning the random
variables \(X\) and \(Y\) are independent. We can then simulate independently the random variables \(X\) and \(Y\), corresponding to the \(x\) and \(y\) coordinates of the points. But this would
still require integrating and inverting the functions.
Markov chain Monte Carlo
A now standard way to simulate jointly distributed random variables is to use Markov chain Monte Carlo (MCMC), which we can also use to simulate the \(X\) and \(Y\) random variables. Applying MCMC methods is simply applying random point process operations repeatedly to all the points. But this is a bit too tricky and involved. Instead I'll use a general yet simpler method based on thinning.
The thinning method is arguably the simplest and most general way to simulate an inhomogeneous Poisson point process. If you're unfamiliar with thinning, I recommend my previous post on thinning
and the material I cite.
This simulation method is simply a type of acceptance-rejection method for simulating random variables. More specifically, it is the acceptance-rejection or rejection method, attributed to the great
John von Neumann, for simulating a continuous random variable, say \(X\), with some known probability density \(f(x)\). The method accepts/retains or rejects/thins the outcome of each random variable
/point depending on the outcome of a uniform random variable associated with each random variable/point.
The thinning or acceptance-rejection method is also appealing because it is an example of a perfect simulation method, which means the distribution of the simulated random variables or points will
not be an approximation. This can be contrasted with typical MCMC methods, which, in theory, reach the desired distribution of the random variables in infinite time, which is clearly not possible in practice.
Simulating the homogeneous Poisson point process
First simulate a homogeneous Poisson point process with intensity value \(\lambda^*\), which is an upper bound of the intensity function \(\lambda(x,y)\). The simulation step is the easy part, but
what value is \(\lambda^*\)?
I will use the maximum value that the intensity function \(\lambda(x,y)\) takes, which I denote by
$$ \lambda_{\max}:=\max_{(x,y)\in W}\lambda(x,y),$$
so I set \(\lambda^*=\lambda_{\max}\). Of course with \(\lambda^*\) being an upper bound, you can use any larger \(\lambda\)-value, so \(\lambda^*\geq \lambda_{\max}\), but that just means more
points will need to be thinned.
Scientific programming languages have implemented algorithms that find or estimate minima of mathematical functions, meaning such an algorithm just needs to find the \((x,y)\) point that gives the
minimum value of \(-\lambda(x,y)\), which corresponds to the maximum value of \(\lambda(x,y)\). What is very important is that the minimization procedure can handle constraints on the \(x\) and \(y\)
values, which in our case of a rectangular simulation window \(W\) are sometimes called box constraints.
Thinning the Poisson point process
All we need to do now is to thin the homogeneous Poisson point process with the thinning probability function
$$ p(x,y)=1-\frac{\lambda(x,y)}{\lambda^*}. $$
This will randomly remove the points so that the remaining points will form an inhomogeneous Poisson point process with intensity function
$$ (1-p(x,y))\lambda^* =\lambda(x,y).$$
As a result, we can see that provided \(\lambda^*\geq \lambda_{\max}>0\), this procedure will give the right intensity function \(\lambda(x,y)\). I’ll skip the details on the point process still
being Poisson after thinning, as I have already covered this in the thinning post.
Empirical check
You can run an empirical check by simulating the point process a large number (say \(10^3\) or \(10^4\)) of times, and collect statistics on the number of points. As the number of simulations
increases, the average number of points should converge to the intensity measure \(\Lambda(W)\), which is given by (perhaps numerically) evaluating the integral
$$\Lambda(W)=\int_W \lambda(x,y)dxdy.$$
This is a test for the intensity measure, a type of first moment, which will work for the intensity measure of any point process. But for Poisson point processes, the variance of the number of points
will also converge to intensity measure \(\Lambda(W)\), giving a second empirical test based on second moments.
An even more thorough test would be estimating an empirical distribution (that is, performing and normalizing a histogram) on the number of points. These checks will validate the number of points,
but not the positioning of the points. In my next post I’ll cover how to perform these tests.
An inhomogeneous Poisson point process with intensity function \(\lambda(x,y)=100\exp(-(x^2+y^2)/s^2)\), where \(s=0.5\). The results look similar to those in the thinning post, where the thinned points
(that is, red circles) are generated from the same Poisson point process as the one that I have presented here.
Method extensions
We can extend the thinning method for simulating inhomogeneous Poisson point processes a couple different ways.
Using an inhomogeneous Poisson point process
The thinning method does not need to be applied to a homogeneous Poisson point process with intensity \(\lambda^*\). In theory, we could have simulated a suitably inhomogeneous Poisson point process
with intensity function \(\lambda^*(x,y)\), which has the condition
$$ \lambda^*(x,y)\geq \lambda(x,y), \quad \forall (x,y)\in W .$$
Then this Poisson point process is thinned. But then we would still need to simulate the underlying Poisson point process, which often would be as difficult to simulate.
Partitioning the simulation window
Perhaps the intensity of the Poisson point process only takes two values, \(\lambda_1\) and \(\lambda_2\), and the simulation window \(W\) can be nicely divided or partitioned into two disjoints sets
\(B_1\) and \(B_2\) (that is, \(B_1\cap B_2=\emptyset\) and \(B_1\cup B_2=W\)), corresponding to the subregions of the two different intensity values. The Poisson independence property allows us to
simulate two independent Poisson point processes on the two subregions.
This approach only works for a piecewise constant intensity function. But if the intensity function \(\lambda(x)\) varies wildly, the simulation window can be partitioned into subregions \(B_1,\dots,B_m\) for different ranges of the intensity function \(\lambda(x)\). This allows us to simulate independent homogeneous Poisson point processes with different densities \(\lambda^*_1,\dots,\lambda^*_m\), where for each subregion \(B_i\) we set
$$ \lambda^*_i:=\max_{(x,y)\in B_i}\lambda(x,y).$$
The resulting Poisson point processes are then suitably thinned, resulting in a more efficient simulation method. (Although I imagine the gain would often be small.)
Non-rectangular simulation windows
If you want to simulate on non-rectangular regions, which is not a disk or triangle, then the easiest way is to simulate a Poisson point process on a rectangle \(R\) that completely covers the
simulation window, so \(W \subset R\subset \mathbb{R}^2\), and then set the intensity function \(\lambda \) to zero for the region outside the simulation window \(W\), that is \(\lambda (x,y)=0\)
when \((x,y)\in R\setminus W\).
Further reading
In Section 2.5.2 of Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke, there is an outline of the thinning method that I used. The same simulation section appears in the
previous edition by Kendall and Mecke, though these books in general have little material on simulation methods.
More details on the thinning method and its connection to acceptance-rejection sampling are given in Section 2.3 of the applications-oriented book Poisson Point Processes by Streit. The
acceptance-rejection method is covered in, for example, books on Monte Carlo methods, including Monte Carlo Strategies in Scientific Computing by Liu (in Section 2.2) and Monte Carlo Methods in Financial Engineering by Glasserman (in Section 2.2.2). This method and others for simulating general random variables are covered in stochastic simulation books such as Uniform Random Variate
Generation by Devroye and Stochastic Simulation: Algorithms and Analysis by Asmussen and Glynn.
Kroese and Botev have a good introduction to stochastic simulation in the edited collection Stochastic Geometry, Spatial Statistics and Random Fields : Models and Algorithms by Schmidt, where the
relevant chapter (number 12) is also freely available online. And of course there are lectures notes on the internet that cover simulation material.
All code from my posts, as always, can be found on my GitHub repository. The code for this post is located here. You can see that the code is very similar to the thinning code, which
served as the foundation for this code. (Note how we now keep the points, so in the code the > has become < on the line where the uniform variables are generated).
I have implemented the code in MATLAB and Python with an intensity function \(\lambda(x,y)=100\exp(-(x^2+y^2)/s^2)\), where \(s>0\) is a scale parameter. Note that in the minimization step, the box
constraints are expressed differently in MATLAB and Python: MATLAB first takes the minimum values then the maximum values, whereas Python first takes the \(x\)-values then the \(y\)-values.
The code presented here does not have the empirical check, which I described above, but it is implemented in the code located here. For the parameters used in the code, the total measure is \(\Lambda
(W)\approx 77.8068\), meaning each simulation will generate on average almost seventy-eight points.
I have stopped writing code in R for a couple of reasons, but mostly because anything I could think of simulating in R can already be done in the spatial statistics library spatstat. I recommend the
book Spatial Point Patterns, co-authored by the spatstat’s main contributor, Adrian Baddeley.
I have used the fmincon function to find the point that gives the minimum of \(-\lambda(x,y)\).
%Simulation window parameters
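xMin=-1;xMax=1;yMin=-1;yMax=1; %window bounds (assumed values, consistent with the total measure of roughly 77.8 quoted above)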
xDelta=xMax-xMin;yDelta=yMax-yMin; %rectangle dimensions
areaTotal=xDelta*yDelta; %area of rectangle
s=0.5; %scale parameter
%Point process parameters
fun_lambda=@(x,y)(100*exp(-((x).^2+(y).^2)/s^2));%intensity function
%%%START -- find maximum lambda -- START %%%
%For an intensity function lambda, given by function fun_lambda,
%finds the maximum of lambda in a rectangular region given by
funNeg=@(x)(-fun_lambda(x(1),x(2))); %negative of lambda
%initial value(ie centre)
xy0=[(xMin+xMax)/2,(yMin+yMax)/2];%initial value(ie centre)
%Set up optimization step
%Find largest lambda value
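%The next three lines are a plausible reconstruction of this step, using the fmincon function mentioned in the text above
options=optimoptions('fmincon','Display','off'); %suppress solver output
[~,lambdaNegMin]=fmincon(funNeg,xy0,[],[],[],[],[xMin,yMin],[xMax,yMax],[],options);
lambdaMax=-lambdaNegMin; %maximum value of the intensity function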
%%%END -- find maximum lambda -- END%%%
%define thinning probability function
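fun_p=@(x,y)(fun_lambda(x,y)/lambdaMax); %probability of retaining a point (a reconstruction, mirroring the Python fun_p below)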
%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambdaMax);%Poisson number of points
xx=xDelta*(rand(numbPoints,1))+xMin;%x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1))+yMin;%y coordinates of Poisson points
%calculate spatially-dependent thinning probabilities
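p=fun_p(xx,yy); %retention probability for each point (a reconstruction of the missing line)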
%Generate Bernoulli variables (ie coin flips) for thinning
booleRetained=rand(numbPoints,1)<p; %points to be retained
%x/y locations of retained points
xxRetained=xx(booleRetained); yyRetained=yy(booleRetained);
plot(xxRetained,yyRetained,'bo'); %plot retained points
The box constraints for the optimization step were expressed as the lower and upper bounds [xMin, yMin] and [xMax, yMax], with MATLAB taking the minimum values first and then the maximum values, as noted above.
I have used the minimize function in SciPy.
import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt #For plotting
from scipy.optimize import minimize #For optimizing
from scipy import integrate
#Simulation window parameters
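xMin=-1;xMax=1;yMin=-1;yMax=1; #window bounds (assumed values, matching the MATLAB code above)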
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
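areaTotal=xDelta*yDelta; #area of rectangle (this line mirrors the MATLAB code and is needed below)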
s=0.5; #scale parameter
#Point process parameters
def fun_lambda(x,y):
return 100*np.exp(-(x**2+y**2)/s**2); #intensity function
###START -- find maximum lambda -- START ###
#For an intensity function lambda, given by function fun_lambda,
#finds the maximum of lambda in a rectangular region given by
def fun_Neg(x):
return -fun_lambda(x[0],x[1]); #negative of lambda
xy0=[(xMin+xMax)/2,(yMin+yMax)/2];#initial value(ie centre)
#Find largest lambda value
resultsOpt=minimize(fun_Neg,xy0,bounds=((xMin, xMax), (yMin, yMax)));
lambdaNegMin=resultsOpt.fun; #retrieve minimum value found by minimize
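lambdaMax=-lambdaNegMin; #maximum value of the intensity function (this line appears to be missing)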
###END -- find maximum lambda -- END ###
#define thinning probability function
def fun_p(x,y):
return fun_lambda(x,y)/lambdaMax;
#Simulate a Poisson point process
numbPoints = np.random.poisson(lambdaMax*areaTotal);#Poisson number of points
xx = np.random.uniform(0,xDelta,((numbPoints,1)))+xMin;#x coordinates of Poisson points
yy = np.random.uniform(0,yDelta,((numbPoints,1)))+yMin;#y coordinates of Poisson points
#calculate spatially-dependent thinning probabilities
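p=fun_p(xx,yy); #retention probability for each point (a reconstruction of the missing line)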
#Generate Bernoulli variables (ie coin flips) for thinning
booleRetained=np.random.uniform(0,1,((numbPoints,1)))<p; #points to be retained
#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];
plt.scatter(xxRetained,yyRetained, edgecolor='b', facecolor='none', alpha=0.5 );
plt.xlabel("x"); plt.ylabel("y");
The box constraints were expressed as:
(xMin, xMax), (yMin, yMax)
After writing this post, I later wrote the code in Julia. The code is here and my thoughts about Julia are here.
Thinning point processes
One way to create new point processes is to apply thinning to a point process. As I mentioned in a previous post on point process operations, thinning is a random operation applied to the points of
an underlying point process, where the points are thinned (or removed) or retained (or kept) according to some probabilistic rule. Both the thinned and retained points form two separate point
processes, but one usually focuses on the retained points. Given an underlying point process, the nature of the thinning rule will result in different types of point processes.
As I detailed in the Applications section below, thinning can be used to simulate an inhomogeneous Poisson point process, as I covered in another post.
Thinning types
Thinning can be statistically independent or dependent, meaning that the probability of thinning any point is either independent of, or dependent on, the thinning of other points. The more tractable case is
statistically independent thinning, which is the thinning type covered here. We can further group this thinning into two types based on whether the thinning rule depends on the locations of the
point. (I use the word location, instead of point, to refer to where a point of a point process is located on the underlying mathematical space on which the point process is defined.)
Spatially independent thinning
The simplest thinning operation is one that does not depend on point locations. This thinning is sometimes referred to as \(p\)-thinning, where the constant \(p\) has the condition \(0\leq p \leq 1\)
because it is the probability of thinning a single point. Simply put, the probability of a point being thinned does not depend on the point locations.
We can liken the thinning action to flipping a biased coin with probability of \(p\) for heads (or tails) for each point of the underlying point process, and then removing the point if a head (or
tails) occurs. If there were a constant number \(n\) of points of the underlying point process, then the number of thinned (or retained) points will form a binomial random variable with parameters \
(n\) and \(p\) (or \(1-p\)).
Simulating this thinning operation is rather straightforward. Given a realization of a point process, for each point, simply generate or simulate a uniform random variable on the interval \((0,1)\),
and if this random variable is less than \(p\), remove the point. (This is simply sampling a Bernoulli distribution, which is covered in this post.)
In the code section below, I have shown how this thinning operation is implemented.
Spatially dependent thinning
To generalize the idea of \(p\)-thinning, we can simply require that the thinning probability of any point depends on its location \(x\), which gives us the concept of \(p(x)\)-thinning. (I write a
single \(x\) to denote a point on the plane, that is \(x\in \mathbb{R}^2\), instead of writing, for example, the \(x\) and \(y\) coordinates separately.) More precisely, the probability of
thinning a point is given by a function \(p(x)\) such that \(0 \leq p(x)\leq 1\), but all point thinnings occur independently of each other. In other words, this is a spatially dependent thinning
that is statistically independent.
I’ll illustrate the concept of (statistically independent) spatially dependent thinning with a somewhat contrived example. We assume that the living locations of all the people in the world form a
point process on a (slightly squashed) sphere. Let’s say that Earth has become overpopulated, particularly in the Northern Hemisphere, so we decide to randomly choose people and send them off to
another galaxy, but we do it based on how far they live from the North Pole. The thinning rule could be, for example, \(p(x)= \exp(- |x|^2/s^2)\), where \(|x|\) is the distance to the North Pole and
\(s>0\) is some constant for distance scaling.
Put another way, a person at location \(x\) flips a biased coin with the probability of heads being equal to \(p(x)=\exp(- |x|^2/s^2)\). If a head comes up, then that person is removed from the
planet. With the maximum of \(p(x)\) at the North Pole, we can see that the lucky (or unlucky?) people in countries like Australia, New Zealand (or Aotearoa), South Africa, Argentina and Chile,
are more likely not to be sent off (that is, thinned) into the great unknown.
For people who live at comparable distances from the North Pole, the removal probabilities are similar in value, yet the events of being removed remain independent. For example, the probabilities of
removing any two people from the small nation Lesotho are similar in value, but these two random events are still completely independent of each other.
Simulating a spatially dependent thinning is just slightly more involved than the spatially independent case. Given a realization of a point process, for each point at, say, \(x\), simply generate or
simulate a uniform random variable on the interval \((0,1)\), and if this random variable is less than \(p(x)\), remove the point.
In the code section, I have shown how this thinning operation is implemented with an example like the above one, but on a rectangular region of Cartesian space. In this setting, the maximum of \(p(x)
\) is at the origin, resulting in more points being thinned in this region.
Thinning a Poisson point process
Perhaps not surprisingly, under the thinning operation the Poisson point process exhibits a closure property, meaning that a Poisson point process thinned in a certain way gives another Poisson point
process. More precisely, if the thinning operation is statistically independent, then the resulting point process formed from the retained points is also a Poisson point process, regardless if it is
spatially independent or dependent thinning. The resulting intensity (interpreted as the average density of points) of this new Poisson point process has a simple expression.
Homogeneous case
For a spatially independent \(p\)-thinning, if the original (or underlying) Poisson point process is homogeneous with intensity \(\lambda\), then the point process formed from the retained points is
a homogeneous Poisson point process with intensity \(p\lambda\). (There are different ways to prove this, but you can gain some intuition behind the proof by conditioning on the Poisson number of
points and then applying the total law of probability. Using generating functions helps.)
Inhomogeneous case
More generally, if we apply a spatially dependent \(p(x)\)-thinning to a Poisson point process with intensity \(\lambda\), then the retained points form an inhomogeneous or nonhomogeneous Poisson point process with intensity \(\lambda p(x)\), due to the spatial dependence in the thinning function \(p(x)\). This gives a way to simulate such Poisson point processes, which I'll cover in another post.
We can see by symmetry that if we look at the thinned points, then the resulting point process is also a Poisson point process, but with intensity \((1-p(x))\lambda\). The retained and thinned points
both form Poisson point processes, but what is really interesting is these two point processes are independent of each other. This means that any random configuration that occurs among the retained
points is independent of any configurations among the thinned points.
This ability to split a Poisson point process into independent ones is sometimes called the splitting property.
Applications
Thinning point processes has the immediate application of creating new point processes. It can also be used to randomly generate two point processes from one. In network applications, a simple
example is using the thinning procedure to model random sleep schemes in wireless networks, where random subsets of the network have been powered down.
Perhaps the most useful application of thinning is creating point processes with spatially-dependent intensities, such as that of an inhomogeneous Poisson point process. In another post I give details on
how to simulate this point process. In this setting, the thinning operation essentially is acceptance(-rejection) sampling, which I will cover in a future post.
All code from my posts, as always, can be found on my GitHub repository. The code for this post is located here.
Spatially independent thinning
I have implemented in code the simple \(p\)-thinning operation applied to a Poisson point process on a rectangle, but in theory any point process can be used for the underlying point process that is thinned.
%Simulation window parameters
xMin=0;xMax=1;yMin=0;yMax=1; %window corners (values assumed; not given in the text)
xDelta=xMax-xMin;yDelta=yMax-yMin; %rectangle dimensions
areaTotal=xDelta*yDelta; %area of rectangle
%Point process parameters
lambda=100; %intensity (ie mean density) of the Poisson process
%Thinning probability parameters
p=0.25; %thinning probability
%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambda);%Poisson number of points
xx=xDelta*(rand(numbPoints,1))+xMin;%x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1))+yMin;%y coordinates of Poisson points
%Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=rand(numbPoints,1)<p; %points to be thinned (thinned with probability p)
booleRetained=~booleThinned; %points to be retained
%x/y locations of thinned points
xxThinned=xx(booleThinned); yyThinned=yy(booleThinned);
%x/y locations of retained points
xxRetained=xx(booleRetained); yyRetained=yy(booleRetained);
%Plotting
plot(xxRetained,yyRetained,'bo'); %plot retained points
hold on; plot(xxThinned,yyThinned,'ro'); %plot thinned points
#Simulation window parameters
xMin=0;xMax=1;yMin=0;yMax=1; #window corners (values assumed; not given in the text)
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta; #area of rectangle
#Point process parameters
lambda=100; #intensity (ie mean density) of the Poisson process
#Thinning probability
p=0.25; #thinning probability
#Simulate a Poisson point process
numbPoints=rpois(1,areaTotal*lambda);#Poisson number of points
xx=xDelta*runif(numbPoints)+xMin;#x coordinates of Poisson points
yy=yDelta*runif(numbPoints)+yMin;#y coordinates of Poisson points
#Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=runif(numbPoints)<p; #points to be thinned (thinned with probability p)
booleRetained=!booleThinned; #points to be retained
#x/y locations of thinned points
xxThinned=xx[booleThinned]; yyThinned=yy[booleThinned];
#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];
#Plotting
plot(xxRetained,yyRetained,'p',xlab='x',ylab='y',col='blue'); #plot retained points
points(xxThinned,yyThinned,col='red'); #plot thinned points
Of course, as I have mentioned before, simulating a spatial point processes in R is even easier with the powerful spatial statistics library spatstat. With this library, thinning can be done in
quite a general way by using the function rthin.
import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt #for plotting
#Simulation window parameters
xMin=0;xMax=1;yMin=0;yMax=1; #window corners (values assumed; not given in the text)
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta; #area of rectangle
#Point process parameters
lambda0=100; #intensity (ie mean density) of the Poisson process
#Thinning probability
p=0.25; #thinning probability
#Simulate a Poisson point process
numbPoints = np.random.poisson(lambda0*areaTotal);#Poisson number of points
xx = np.random.uniform(0,xDelta,((numbPoints,1)))+xMin;#x coordinates of Poisson points
yy = np.random.uniform(0,yDelta,((numbPoints,1)))+yMin;#y coordinates of Poisson points
#Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=np.random.uniform(0,1,((numbPoints,1)))<p; #points to be thinned (thinned with probability p)
booleRetained=~booleThinned; #points to be retained
#x/y locations of thinned points
xxThinned=xx[booleThinned]; yyThinned=yy[booleThinned];
#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];
#Plotting
plt.scatter(xxRetained,yyRetained, edgecolor='b', facecolor='none', alpha=0.5 );
plt.scatter(xxThinned,yyThinned, edgecolor='r', facecolor='none', alpha=0.5 );
plt.xlabel("x"); plt.ylabel("y");
Spatially dependent thinning
I have implemented in code a \(p(x)\)-thinning operation with the function \(p(x)=\exp(-|x|^2/s^2)\), where \(|x|\) is the Euclidean distance from \(x\) to the origin. This small change means that in the code there is a vector or array of \(p\) values instead of a single \(p\) value in the section where the uniform random variables are generated and compared to said \(p\) values (the lines computing booleThinned in the MATLAB, R and Python code presented below).
Again, I have applied thinning to a Poisson point process on a rectangle, but in theory any point process can be used for the underlying point process.
%Simulation window parameters
xMin=-1;xMax=1;yMin=-1;yMax=1; %window corners (values assumed; chosen so the window contains the origin)
xDelta=xMax-xMin;yDelta=yMax-yMin; %rectangle dimensions
areaTotal=xDelta*yDelta; %area of rectangle
%Point process parameters
lambda=100; %intensity (ie mean density) of the Poisson process
%Thinning probability parameters
sigma=0.5; %scale parameter for thinning probability function
%define thinning probability function
fun_p=@(s,x,y)(exp(-(x.^2+y.^2)/s^2));
%Simulate Poisson point process
numbPoints=poissrnd(areaTotal*lambda);%Poisson number of points
xx=xDelta*(rand(numbPoints,1))+xMin;%x coordinates of Poisson points
yy=yDelta*(rand(numbPoints,1))+yMin;%y coordinates of Poisson points
%calculate spatially-dependent thinning probabilities
p=fun_p(sigma,xx,yy);
%Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=rand(numbPoints,1)<p; %points to be thinned (thinned with probability p(x))
booleRetained=~booleThinned; %points to be retained
%x/y locations of thinned points
xxThinned=xx(booleThinned); yyThinned=yy(booleThinned);
%x/y locations of retained points
xxRetained=xx(booleRetained); yyRetained=yy(booleRetained);
%Plotting
plot(xxRetained,yyRetained,'bo'); %plot retained points
hold on; plot(xxThinned,yyThinned,'ro'); %plot thinned points
#Simulation window parameters
xMin=-1;xMax=1;yMin=-1;yMax=1; #window corners (values assumed; chosen so the window contains the origin)
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta; #area of rectangle
#Point process parameters
lambda=100; #intensity (ie mean density) of the Poisson process
#Thinning probability parameters
sigma=0.5; #scale parameter for thinning probability function
#define thinning probability function
fun_p <- function(s,x,y) {
  exp(-(x^2 + y^2)/s^2);
}
#Simulate a Poisson point process
numbPoints=rpois(1,areaTotal*lambda);#Poisson number of points
xx=xDelta*runif(numbPoints)+xMin;#x coordinates of Poisson points
yy=yDelta*runif(numbPoints)+yMin;#y coordinates of Poisson points
#calculate spatially-dependent thinning probabilities
p=fun_p(sigma,xx,yy);
#Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=runif(numbPoints)<p; #points to be thinned (thinned with probability p(x))
booleRetained=!booleThinned; #points to be retained
#x/y locations of thinned points
xxThinned=xx[booleThinned]; yyThinned=yy[booleThinned];
#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];
#Plotting
plot(xxRetained,yyRetained,'p',xlab='x',ylab='y',col='blue'); #plot retained points
points(xxThinned,yyThinned,col='red'); #plot thinned points
Again, use the spatial statistics library spatstat, which has the function rthin.
import numpy as np; #NumPy package for arrays, random number generation, etc
import matplotlib.pyplot as plt #for plotting
#Simulation window parameters
xMin=-1;xMax=1;yMin=-1;yMax=1; #window corners (values assumed; chosen so the window contains the origin)
xDelta=xMax-xMin;yDelta=yMax-yMin; #rectangle dimensions
areaTotal=xDelta*yDelta; #area of rectangle
#Point process parameters
lambda0=100; #intensity (ie mean density) of the Poisson process
#Thinning probability parameters
sigma=0.5; #scale parameter for thinning probability function
#define thinning probability function
def fun_p(s, x, y):
    return np.exp(-(x**2+y**2)/s**2);
#Simulate a Poisson point process
numbPoints = np.random.poisson(lambda0*areaTotal);#Poisson number of points
xx = np.random.uniform(0,xDelta,((numbPoints,1)))+xMin;#x coordinates of Poisson points
yy = np.random.uniform(0,yDelta,((numbPoints,1)))+yMin;#y coordinates of Poisson points
#calculate spatially-dependent thinning probabilities
p=fun_p(sigma,xx,yy);
#Generate Bernoulli variables (ie coin flips) for thinning
booleThinned=np.random.uniform(0,1,((numbPoints,1)))<p; #points to be thinned (thinned with probability p(x))
booleRetained=~booleThinned; #points to be retained
#x/y locations of thinned points
xxThinned=xx[booleThinned]; yyThinned=yy[booleThinned];
#x/y locations of retained points
xxRetained=xx[booleRetained]; yyRetained=yy[booleRetained];
#Plotting
plt.scatter(xxRetained,yyRetained, edgecolor='b', facecolor='none', alpha=0.5 );
plt.scatter(xxThinned,yyThinned, edgecolor='r', facecolor='none', alpha=0.5 );
plt.xlabel("x"); plt.ylabel("y");
In the plotted results, the blue and red circles represent respectively the retained and thinned points.
Spatially independent thinning
For these results, I used a thinning probability \(p=0.25\), which means that roughly a quarter of the points will be thinned, so on average the ratio of blue to red circles is three to one.
Spatially dependent thinning
Observe how there are more thinned points (that is, red circles) near the origin, which is of course where the thinning function \(p(x)=\exp(-|x|^2/s^2)\) attains its maximum.
Further reading
The thinning operation is covered in Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and Mecke (Chapter 5). It is also covered in the book Statistical Inference and Simulation for
Spatial Point Processes by Moller and Waagepetersen (Section 3.2.2). Kallenberg presents a more theoretical and rigorous take on thinning Poisson point processes in his new book Random Measures,
Theory and Applications (Chapter 3). (A point process can be interpreted as a type of random measure called a random counting measure because it gives the random number of points in a set.)
Thinning is also covered in books that apply point processes to wireless networks, such as Stochastic Geometry and Wireless Networks by Baccelli and Błaszczyszyn (Volume 1, Section 1.3.2) or
Stochastic Geometry for Wireless Networks (Section 2.7.3) by Haenggi. These books also give examples of thinning point processes for wireless network applications.
Poisson point process
Balloons scattered across a morning sky. Sand grains strewn on the ground. Seeds blown over a forest floor. Each of these phenomena can be represented mathematically as an object called a point
process or random point field. Although it has a couple of mathematical interpretations, a point process is essentially just a collection of points randomly scattered on some mathematical space*,
such as the real line, the Cartesian plane, a sphere, or more abstract spaces.
*The underlying mathematical space is sometimes called the carrier space or the state space, but the second term refers to something different from the state space used in the theory of stochastic processes.
The most important point process is the Poisson point process, which is one of the two most fundamental and studied mathematical objects in probability. (The other is the Wiener process or Brownian
motion process, which is a type of random process or stochastic process, and it has been suggested by mathematicians such as John Kingman that the Poisson point process does not attract as much
research attention as it should.) This point process can be defined on very general mathematical spaces, but usually the plane gives sufficient intuition. In this setting, each randomly located point
can represent, for example, a star, a sand grain or a seed.
The most important defining property of the Poisson point process is that the numbers of points of the point process located in two (or more) non-overlapping (that is, disjoint) regions are two or
more independent random variables. This property, sometimes called independent scattering or complete independence, explains the tremendous tractability of this point process, and it is used
alongside the property that the random variables have Poisson distributions to define the Poisson point process.
To define a Poisson point process on some mathematical space, only a single mathematical object is needed. This object is applied to a region (or subset) of the underlying space on which the Poisson
point process is defined, and returns a non-negative number. This object is a type of measure from measure theory, so it is called the mean measure or intensity measure. The mean measure can be
interpreted as the mean or average number of points of a Poisson point process being located in a region of the underlying space.
Definition of a Poisson point process
A point process \(N\) defined on some underlying space \(S\) is a Poisson point process with intensity measure \(\Lambda\) if it has the two following properties:
1. The number of points in a bounded Borel set \(B \subset S\) is a Poisson random variable with mean \(\Lambda(B)\).
2. The numbers of points in \(n\) disjoint Borel sets form \(n\) independent random variables.
A simple example of a mean measure of a Poisson point process is when the mean measure is given by the product of a non-negative constant and the area or volume of the region. The constant, often
denoted by \(\lambda\), is known as the intensity or rate, which can often be interpreted as the average density of points. In this setting, the average density does not vary over the underlying
space, so the resulting point process is called a homogeneous Poisson point process or uniform Poisson point process, which is the simplest example of a Poisson point process.
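As a rough illustration of the homogeneous case (the window and intensity values below are arbitrary choices for this sketch, not taken from the text), the two defining properties translate directly into a two-line simulation:
import numpy as np #NumPy for random number generation
#Sketch: homogeneous Poisson point process on a rectangle.
lambda0 = 50 #intensity (mean density of points); arbitrary illustrative value
width, height = 2, 1 #window dimensions; arbitrary illustrative values
#The number of points in the window is Poisson with mean lambda0*(window area)...
numbPoints = np.random.poisson(lambda0 * width * height)
#...and, given that number, the points are placed independently and uniformly.
xx = np.random.uniform(0, width, numbPoints)
yy = np.random.uniform(0, height, numbPoints)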
If the intensity does change over the underlying space, meaning it is spatially dependent, then the terms inhomogeneous Poisson point process or nonhomogeneous Poisson point process are used. It is
usually assumed that the intensity measure \(\Lambda\) has a derivative \(\lambda\), so it can be written as an integral:
$$\Lambda(B)=\int_B \lambda(x) dx, $$
where the set \(B\) is some subregion of the underlying state space \(S\). (As per standard probability assumptions, the set \(B\) has to be Borel measurable, but we do not focus on such details here.)
The Poisson point process is the cornerstone of fields where randomness meets geometry, such as spatial statistics, geometric probability and stochastic geometry. Researchers, scientists, and
engineers have proposed using the Poisson point process to model various objects randomly positioned. In recent years, it has been used extensively to mathematically model the locations of
transmitters and receivers in wireless communication networks such as cellular or mobile phone networks.
As a mathematical model, the Poisson point process should be used to represent objects or phenomena that have little or, ideally, zero interaction among the points. If that’s not the case, then the
Poisson point process can also serve as a null-hypothesis model in statistics, whose rejection implies there is sufficiently strong interaction among the points. For example, the stars influence each
other, undoubtedly, through gravity, and trees rely upon absorbing water in their vicinity through root systems, suggesting that non-Poisson models would be more suitable for representing these two
examples. Other more sophisticated point processes that incorporate such point interaction have been developed. Many of these point processes build off the Poisson point process.
The Poisson point process is often called simply the Poisson process, where it can be confused with the related stochastic process of the same name. This Poisson process is a continuous-time
discrete-valued stochastic process. The points in time where this stochastic process changes (or jumps) form the points of a Poisson point process on the real line. Depending on the literature,
interpretation and preference, the Poisson point process is also called the Poisson random field and Poisson random measure.
The Poisson point process is a highly useful and used random object. But we now need to simulate it on a computer, which will be the subject of a future post.
Further reading
The Wikipedia article is a good starting point. The best book on the Poisson point process is the monograph Poisson Processes by Kingman. A recent and more theoretical book is Lecture Notes on the
Poisson Process by Last and Penrose (the manuscript is freely available online here). More applied books include the classic Stochastic Geometry and its Applications by Chiu, Stoyan, Kendall and
Mecke, and Statistical Inference and Simulation for Spatial Point Processes by Møller and Waagepetersen.
In recent years, there have been various books applying the Poisson point process to wireless networks, which include the two-volume Stochastic geometry and Wireless Networks by Baccelli and
Błaszczyszyn, and Stochastic geometry for Wireless Networks by Haenggi. Much more recently, some colleagues and I contributed to this literature with Stochastic Geometry Analysis of Cellular Networks
by Błaszczyszyn, Haenggi, Keeler, and Mukherjee. | {"url":"https://hpaulkeeler.com/tag/nonhomogeneous/","timestamp":"2024-11-02T01:52:08Z","content_type":"text/html","content_length":"207084","record_id":"<urn:uuid:1849f6ab-0960-4881-8311-052765b76bf2>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00590.warc.gz"} |
How to not skip ranks and deal with ties?
Hi all,
I worked with the Prodesk on using the RANKEQ function which was helpful, but I am realizing I have records which are ties. Essentially I have a column which ranks our projects; if the weight is 0
it's "not ranked" and should be thrown out of the rankings, if it's anything else it's ranked. The formula is as follows:
=IFERROR(RANKEQ([Weighted Project Score]@row, [Weighted Project Score]:[Weighted Project Score], 0), "Not Ranked")
I found 2 issues with this and it's the following:
• It's skipping numbers (i.e. I've got 3 entries with the rank # of 99, then it jumps to 102, not seeing the ones in between)
• There's no way to clearly see the ties.
How can I modify the formula to do the following:
1. If there's a tie in the range, to not skip the ranking sequence of numbers
2. If there's a tie in the range, to highlight this
3. If there's a tie, to look at another column (i.e. Initiative ID or defined date field) and automatically re-rank based on that.
I found a similar thread and tried adapt the solution to mine, with no luck (Sequential numbering of duplicates — Smartsheet Community)
Best Answer
• Hi @summetg
I agree with Paul's solution in the post you linked to! This should work for you as well, you would just need to update the column names:
=VALUE(RANKEQ([Weighted Project Score]@row, [Weighted Project Score]:[Weighted Project Score], 1) + "." + COUNTIFS([Weighted Project Score]:[Weighted Project Score], @cell = [Weighted Project
Score]@row, Row:Row, @cell <= Row@row))
The first column added would be a System Generated auto-number column (called "Auto").
Then the second added column uses this Auto column to create number values (called "Row"):
Then you can use Paul's formula above:
This would show you ties because you will see if they are .2 or .3 in the list.
However if you prefer to see full numbers and want to number the tied values as a whole number, you can then add in another helper column with your original RANKEQ formula, looking at the Rank
column instead:
Is this what you were looking to do?
• Hi Genevieve,
This was very helpful and I made some headway - I actually had a Row ID column which is an auto-number column. I'm kind of stuck at the last step now where I was receiving an "Invalid Data Type'
error and trying different things.
My goal is to just keep things showing as ties, but not skip - tried your RANKEQ formula above (let's swap the column 'Rank' with 'Prioritized Score Helper').
=RANKEQ([Prioritized Score Helper]@row, [Prioritized Score Helper]:[Prioritized Score Helper], 1)
This produced an '#invalid data type' error. I then tried the following:
=IFERROR(RANKEQ([Prioritized Score Helper]@row, [Prioritized Score Helper]:[Prioritized Score Helper], 1), "Not Ranked")
This resulted in all entries being 'Not Ranked". In my sheet, we have certain cells which have a value of "not ranked", so I think the RANKEQ function is being thrown off by this. I'm going to
try a few other things, but do you have any suggestions on how to adjust this? I feel I need to add some type of IF statement or ignore condition so that the "Not Ranked" values are not ranked but are kept noted in this new column. Thanks!
• Hi @summetg
Is it possible there's an "Invalid Data Type" error in the Prioritized Score Helper column? Formula errors can create domino effects, so if there's even one cell with an error in that column it
will roll up to this other formula.
Try wrapping the IFERROR around your Prioritized Score Helper formula to see if that resolves it!
If not, we can add in a COLLECT statement to the final RANKEQ formula.
• Hi Genevieve,
Maybe I created some confusion with the column names, so I'll include some screenshots. I essentially used the formula you have in "Rank" in the "Prioritized Score Helper" column, which has the "IFERROR", and adjusted it so it's descending. For reference, here it is:
=IFERROR(RANKEQ([Weighted Project Score]@row, [Weighted Project Score]:[Weighted Project Score], 0) + "." + COUNTIFS([Weighted Project Score]:[Weighted Project Score], @cell = [Weighted Project
Score]@row, ROW:ROW, @cell <= ROW@row), "Not Ranked")
I'm getting the right result where ties are noted with decimals and items that should not be ranked are populated as such (because they are also not weighted):
Then the "official ranking" aka "Prioritized According to Score" uses the formula to keep the ties and Not Ranked:
=IFERROR(RANKEQ([Prioritized Score Helper]@row, [Prioritized Score Helper]:[Prioritized Score Helper], 0), "Not Ranked")
Instead all are being noted as "Not Ranked" - so this is where I am stuck.
• Hi @summetg
Thanks for identifying you're using text in some of the rows, this helps a lot.
Ok try the following - we'll use a COLLECT function to identify that you only want to Rank the rows that have a Number in them, like so:
=IFERROR(RANKEQ([Prioritized Score Helper]@row, COLLECT([Prioritized Score Helper]:[Prioritized Score Helper], [Prioritized Score Helper]:[Prioritized Score Helper], ISNUMBER(@cell)), 0), "Not Ranked")
Let me know if this worked!
• Hi Genevieve,
Thanks for the follow up - turns out we're going to keep the standard rank logic with the duplications and skips, as that's the generally accepted principle.
My colleague and I found an alternative solution by updating the helper column to add a "/10" to divide by 10, which would help anyone who wants to keep the ranks sequential when there's a tie.
=IFERROR(RANKEQ([Weighted Project Score]@row, [Weighted Project Score]:[Weighted Project Score], 0) + ((COUNTIFS([Weighted Project Score]:[Weighted Project Score], @cell = [Weighted Project
Score]@row, ROW:ROW, @cell <= ROW@row)) / 10), "Not Ranked").
Also an edit to my previous formula for the re-rank aka prioritized according to score column to use the condition of 1 to indicate descending order
=IFERROR(RANKEQ([Prioritized Score Helper]@row, [Prioritized Score Helper]:[Prioritized Score Helper], 0), "Not Ranked")
Your formula above also resulted in the same result from my brief review. Appreciate your help again on this and hope this helps other users in the future
January 2015 – Transport Informations Service
Photo of the month – January 2015 [German version]
Some of our readers may well be thinking to themselves that we are more than a little late with Christmas recipes. But we can put your minds at rest. Our column is still about load-securing, and we
intend it to stay that way. In this case, the recipe we are talking about describes what is, in our opinion, an extremely simple calculation method for dimensioning direct lashings used for securing loads.
When the police arrived at the "scene of the accident", the vehicle, a drop-deck trailer carrying a telescopic forklift truck, was already standing on the hard shoulder. It was plain to see that the
two rear load-securing chains had snapped:
Figure 1 [Wolfgang Jaspers]
An initial inspection of the vehicle revealed that the load, a telescopic forklift truck with a weight of 14,000 kg, had slipped forward slightly (or perhaps it had even rolled), but had not reached
the gooseneck, so that neither the load nor the vehicle were damaged:
Figure 2 [Wolfgang Jaspers]
After the police had escorted the vehicle from the motorway extremely carefully, it was possible to have a look at it without risking life and limb:
Figure 3 [Wolfgang Jaspers]
To start with, the load-securing material was inspected. In this case, chains had been used with a lashing capacity (LC) of 4000 daN each:
Figure 4 [Wolfgang Jaspers]
Figure 5 below is puzzling. On the load-securing point, you can read "LL 3,850", and we assume that the missing unit is "daN". This is followed by "SWL 12 t":
Figure 5 [Wolfgang Jaspers]
These specifications alone are simply confusing. SWL stands for "Safe Working Load" and is used for lifting gear such as cranes, slinging equipment, slinging points etc. A safety factor of four must
be used in calculations for lifting operations and a safety factor of two for lashing operations. We could stick our necks out and assume that LL could perhaps stand for "Working Load Limit" and then
try to establish some kind of a relationship between these two numbers, i.e. approximately 4 tonnes and 12 tonnes. The ratio is 3:1, not 2:1 as we might expect on the basis of the regulations
governing lifting and lashing. So this assumption doesn't help us.
It is also noteworthy that the specifications for lashing and for lifting are both indicated on this load-securing point. As we know, this is not permitted. The reason for this is relatively simple:
Load-securing points are often overloaded. This may be because some of the lashing equipment is loose and a jolt during transport causes the equipment to be subjected to a sudden load. Alternatively,
it could be because vehicles rock and oscillate on the suspension and tires. Rocking, oscillating motion such as this can often be seen when construction vehicles are being transported. The result is
that the load-securing equipment is constantly being subjected to loads and then relieved. Consequently, load-securing points and the load-securing equipment can become overloaded. Because nobody can
see with the naked eye whether the material has ever been loaded beyond its capacity, it is absolutely forbidden to use load-securing points for lifting. The reason is that the safety factor for
lifting is twice as high as for lashing, and the failure of a slinging point would have immediate consequences.
We would like to see unambiguous labeling on load-securing points such as these:
• " They are suitable for lashing, with a value followed by LC, or
• " They are suitable for lifting, with a value followed either by SWL or WLL (Working Load Limit).
An additional label that clearly indicates which point is suitable for lifting and which for lashing would provide an additional level of safety and clarity for users. When we contacted the
manufacturer, we found out that these load-securing points were not factory fitted. We presume that an inventive owner had been doing some DIY on this telescopic forklift truck. Even though it is
laudable that people have given some thought to ways of securing their load, this example shows that such measures have to be thought through properly.
The officers at the scene wanted to check how the chains had been attached and whether the load-securing measures were adequate. To do this, the telescopic forklift truck was moved backwards
sufficiently to tension the front chains again. Although it was not possible to precisely reproduce the original pre-tensioning forces and angles, we shall ignore this fact for the purposes of our
rough calculation:
Figure 6 [Wolfgang Jaspers]
The recipe
The recipe really is as easy as pie. Our diagram also shows a number of angles:
All around us, and especially in the formulas for calculating load-securing measures, we see angle functions. But angle functions are ultimately nothing more than ratios between the sides of
triangles. And this means that we can happily ignore the angles indicated by alpha, beta and gamma in the diagram.
In order to determine the ratios between the lengths involved, the first thing is to measure the distances that are relevant for load-securing purposes. These are the distances marked A, C, E and D.
The distance B is not needed! The distances must be measured between the point at which the lashing equipment is attached to the load and the point at which it is attached to the vehicle. If the
equipment is passed over the edge of the loading bed in order to secure it to the underside of the bed, the measurement is taken to the point at which it is deflected over the edge of the bed. This
means that the effective length of the load-securing equipment is measured in the relevant direction. These distances represent the effect of the load-securing equipment in each direction, and we
only have to determine the ratios between them. This is done by dividing the effective longitudinal component (E) by the length of the load-securing equipment. Thus, we divide the length E by C to
give a factor that is always less than one, because the length C is always the greatest length. Now, all we have to do is multiply this factor by the LC value for the load-securing equipment to
obtain the precise value for the longitudinal effect of the load-securing equipment. We can perform exactly the same calculation for the distances D and A.
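Written out compactly (this is just a restatement of the recipe, with LC denoting the lashing capacity of one piece of load-securing equipment), the components of a single lashing are:

$$ F_{\text{longitudinal}} = \frac{E}{C}\cdot LC, \qquad F_{\text{lateral}} = \frac{D}{C}\cdot LC, \qquad F_{\text{vertical}} = \frac{A}{C}\cdot LC, $$

where, as explained next, the vertical component only contributes to securing once it is multiplied by the coefficient of friction μ.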
In the case of the distance A, there is a further issue in that it also acts as a tie-down lashing. We shall look at this in detail below.
Because it is usually possible to measure these distances very accurately, this "recipe" is considerably more accurate than any method that involves measuring angles. If, for any reason, one of the
components cannot be measured because of the shape of the load, this can be overcome by measuring a specific distance, for instance one meter, along the actual load-securing equipment. This distance
is then marked with a chalk mark. A plumb line (which can easily be made from a nut and a piece of tape) is then used to drop a perpendicular to the loading surface. Of course, it is necessary that
the vehicle is not on a slope or tilted for this to work. Another chalk mark is made at the bottom of the plumb line, and the relevant distances E, D and A can then be measured from this point.
Although these measurements result in shorter lengths, the ratios between them remain identical. If you measure exactly one meter along the load-securing equipment, the advantage is that the lengths
measured correspond directly to the factors, since you only have to divide by 100 cm. Example: If the distance E is measured as 60 cm, and if we have put our mark 100 cm along the load-securing
equipment, we divide 60 by 100 to give the factor 0.6. If the distance was 40 cm, this would give a factor of 0.4 and so on. If the factor is 0.6, we only need to multiply this by the LC of the
load-securing equipment, e.g. 2000 daN. In this case, the longitudinal securing effect would be (0.6 x 2000 daN) = 1200 daN. And that's all there is to the recipe. Which is why we have called it a
To visualize exactly how this works, consider the following:
Let us secure the load-securing equipment to the bottom corner of the load, effectively placing the load-securing equipment directly on the loading surface, and pass it backwards at right angles to
an imaginary load-securing point situated at a perfect position. With this arrangement, there are no angles gamma, alpha or beta, and there is no vertical component or lateral component. If you now
measure the distances C and E, they will be the same. For instance, 175 cm. If you now divide the 175 cm by 175 cm, this results in a factor of 1. If we now dogmatically apply the approach dictated
by the recipe we would have to multiply the LC of the load-securing equipment by the factor 1. In other words, the load-securing equipment acts 100 % against movement in the direction of travel, i.e.
with 2,000 daN in our example. If we take the same example and move the load securing point just 10 cm up, leaving all the other parameters the same, C would be 175.3 cm rather than 175 cm and there
would be a very small vertical component.
And we could do the same thought experiment for the distance D. It would make no sense to do so for the distance A, as this lashing would be a tie-down lashing. Any lashing used on the load that lies
between a right angle longitudinally and a right angle laterally has E, D and A, components in the same proportions as the ratios between the distances. And that is the end of it. And we haven't even
mentioned the words tangent or sine. Many people ask why the distance A must also be divided by the distance C and this factor then be multiplied by the LC. After all, the pre-tensioning force should
not exceed 50 % of the LC. The following conditions apply to this case: For a direct lashing to act, the load must (in our case) slip forward slightly. If we imagine the load slipping forward
sufficiently that the maximum lashing capacity is reached, the direct lashing does not at that moment act only with its E and D components, but also with the vertical component that equates to a
tie-down lashing. Depending on the length A, these tie-down lashing forces can be considerable, so we do not wish to exclude them from our recipe. Of course, it is important that the vertical
component only takes effect in combination with friction. Thus, if a vertical component of just 400 daN is identified, we then have to multiply this by the coefficient of friction. But let us return
to the case in hand.
In our example, the distance C was measured as 126 cm:
Figure 7 [Wolfgang Jaspers]
The distance E, or the longitudinal component, in our example was measured as 51 cm:
Figure 8 [Wolfgang Jaspers]
And the distance D was measured as 61.5 cm:
Figure 9 [Wolfgang Jaspers]
Figure 10 [Wolfgang Jaspers]
The distance A was 93 cm. (Unfortunately, we have no photograph to show this.) Dividing out the individual distances, gives the following results:
The distance E/C gives a factor of 0.4.
The distance D/C gives a factor of 0.49 and
the distance A/C gives a factor of 0.74.
If these factors are multiplied by the lashing capacity of 4000 daN, one chain will deliver a securing force of 1600 daN longitudinally, 1960 daN laterally and will provide a vertical component of
2960 daN. If we use a coefficient of friction μ of 0.3, the 2960 daN must then be multiplied by this coefficient μ of 0.3. This still results in a securing effect of 888 daN with one chain.
With respect to the longitudinal direction, this results in the following:
In the direction of travel, with a load weighing 14,000 kg, acceleration of 0.8 g and an assumed coefficient of friction μ of 0.3, it is necessary to provide a total securing force of 7000 daN. Each
of the chains, which were originally exactly the same length, provided 1600 daN, making a total of 3200 daN. The vertical component gives us a value of 888 daN for one chain and 1776 daN for both
chains. This all adds up to 4976 daN of securing force. This represents a shortfall of 2024 daN of securing force. But if we were able to assume a coefficient of friction μ of 0.4, we would only require
5600 daN of securing force. This increase in the assumed coefficient of friction would also give us an additional securing force of 592 daN through the vertical component. This would give a total
(longitudinal) securing force of 5568 daN. This means that there is still a shortfall of 32 daN before the forklift truck is secured in accordance with regulations.
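For readers who want to retrace the arithmetic, here is a small calculation sketch (all input values are taken from the text above; rounding the factors to two decimal places mirrors the text):
#Quick check of the worked example above (values taken from the text).
LC = 4000 #lashing capacity of one chain, daN
C, E, D, A = 126, 51, 61.5, 93 #measured distances, cm
weight = 14000 #load weight in kg, treated as roughly 14000 daN of weight force
mu = 0.3 #assumed coefficient of friction
factor_long = round(E / C, 2) #0.4
factor_lat = round(D / C, 2) #0.49
factor_vert = round(A / C, 2) #0.74
per_chain_long = factor_long * LC #1600 daN per chain
per_chain_lat = factor_lat * LC #1960 daN per chain (lateral, not needed for the forward budget)
per_chain_vert = factor_vert * LC * mu #888 daN per chain (vertical component only counts via friction)
required = 0.8 * weight - mu * weight #7000 daN needed to secure the load forwards
provided = 2 * (per_chain_long + per_chain_vert) #4976 daN from the two rear chains
shortfall = required - provided #2024 daN at mu = 0.3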
Now we are justified in asking why the load-securing arrangements did not restrain the load. After all, the shortfall of 32 daN should easily have been taken up by the spare capacity in the
load-securing equipment.
Observant readers may have noticed that we have not spent any more time discussing the load-securing points on the bed or on the telescopic forklift truck. To avoid this month's column becoming too
lengthy, we have restricted ourselves to the "facts", and it is an established fact that the chains failed and that the load-securing points on the vehicle and on the load did not.
Let us go back to Figure 3 and have a closer look at the friction: Here we see that the telescopic forklift truck is standing slightly to the left on the bed. This could have happened as a result of
the right-hand chain failing first, but because the truck is more or less aligned with the trailer and still positioned slightly to the left, it is possible that it was originally loaded in this
position. If we now have a look at Figure 6, we can see that, although the vehicle had already been moved back by this time, the wheels are partially on the painted steel. At this point, the friction
is certainly lower than on the ridged mesh. Given that the tires and the loading surface are quite clearly clean, we believe that a coefficient of friction of at least 0.4 should have been perfectly
Of course, we also asked ourselves whether the vehicle may have rolled rather than slipped. When we asked the manufacturer, they assured us that the brakes can only be released when the engine is
running and that as soon as the engine is turned off, the vehicle is braked automatically. So this does not appear to be the solution. The load-securing equipment was in pristine condition and the
load-securing points (we have already talked about the ones on the forklift truck) were not faulty. All of which leads us to the conclusion that the chains were slack when the jolt occurred during
braking. If the chains (or other load-securing equipment) are slack at the moment the load slips, the resulting force can be of such a magnitude that even the safety margin in the load-securing
equipment is no longer sufficient to restrain the load. The safety margin inherent in the load-securing materials is absolutely necessary to absorb any movement in the load, which must, as we
described above, slip into the chains to tension them up to their LC. And the load is still moving when the lashing capacity has been reached. But this kinetic energy must also be absorbed by the
load-securing equipment (although current guidelines make no mention of this; kinetic energy is simply ignored in the guidelines).
Suggestions for improvement:
So how can things be improved? Our recipe revealed that the longitudinal component E was the smallest component at 51 cm. But the load-securing equipment had an LC of 4000 daN in a longitudinal
direction and less than 50 % of this was used. Longer chains attached further back on the vehicle would have considerably increased the securing effect.
The way in which the chains were attached to the telescopic forklift truck resulted in extremely steep angles. Steep angles are the nemesis of any load-securing equipment, as any vertical vibrations
will always jolt the equipment. We do not know the extent to which this telescopic forklift truck is able to move on its suspension. But its balloon tires, designed for rough terrain, most certainly
allow the truck to move. This is more than enough to allow the truck to gently sway and rock during transportation. And at each movement, the vertical component introduces a small jolt into the
load-securing equipment. This is fatal for belts, and even more so for chains. This means that the smaller the vertical component, the better the load-securing equipment will be protected (only for
direct lashings, of course). It would be best if vehicles such as these could be supported so that they are no longer resting on their suspension. We do not know whether the hydraulic rams designed
to support the vehicle on rough terrain and while it is in use can also be extended on the loading bed. But this would prevent the load from oscillating as it bounces on its tires and/or suspension.
In order to ensure that there is sufficient friction under these hydraulic rams, heavy-duty mats would need to be placed beneath them.
Once again, we return to the positioning of the chains, which is a key issue for us. If these chains had been attached as little as one meter further back on the vehicle, this would have resulted in
a far larger longitudinal component. A longitudinal component of just 0.8 would have increased the securing force by 2800 daN without the need to use any additional load-securing equipment.
And a word about the pre-tensioning force: It is still common to hear people being instructed to tension direct lashings only hand-tight. Assuming that the load-securing equipment to the front and
rear can be placed more or less symmetrically, it is certainly beneficial if the pre-tensioning force is as high as possible, provided that it does not exceed 50 % of the LC. This is because the
telescopic forklift truck must have the freedom to slip a few millimeters or even centimeters forward to allow the chains at the rear to reach their full lashing capacity. At the same time, the front
chains are relieved to a corresponding degree, which means that the forklift truck doubles the pre-tensioning force while slipping only half the distance.
Your load-securing columnists apologize for the length of this Photo of the Month. We wish you a healthy, successful and above all secure 2015.
Your Load Securing Team
Back to beginning | {"url":"https://www.tis-gdv.de/tis_e/foto/2015/2015_01-htm/","timestamp":"2024-11-08T22:16:04Z","content_type":"text/html","content_length":"48145","record_id":"<urn:uuid:f2ad3d3e-348f-4a09-9054-9916cd05bc9d>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00161.warc.gz"} |
Cetane Index Calculator - Online Calculators
Enter the values to use our basic and advanced Cetane Index Calculator. Read the formula and detailed examples below to know more about the Cetane Index.
The Cetane Index Calculator helps in understanding the performance and combustion quality of diesel fuel in engines. To learn more about the Cetane Index, read on.
The formula is:
$\text{CI} = (454.54 – 1640.6 \times \text{SG}) + (774.74 \times \text{A}) – 0.554$
Variable Meaning
CI Cetane Index (a measure of the ignition quality of diesel fuel)
SG Specific Gravity of the fuel
A Aniline point of the fuel (measured in degrees Fahrenheit)
How to Calculate?
First, determine the specific gravity (SG) of the fuel. Then determine the aniline point (A) of the fuel in degrees Fahrenheit. Next, multiply the specific gravity by 1640.6 and subtract the result from 454.54. Then multiply the aniline point by 774.74. Finally, add these results together and subtract 0.554 to obtain the Cetane Index (CI).
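As a minimal sketch, the page's formula can be written as a small Python function (it implements the formula exactly as stated above; it is not the ASTM laboratory test):
def cetane_index(sg, aniline_point_f):
    #Cetane Index per the formula above: sg is the specific gravity,
    #aniline_point_f is the aniline point in degrees Fahrenheit.
    return (454.54 - 1640.6 * sg) + (774.74 * aniline_point_f) - 0.554
#Reproduces Example 1 below: cetane_index(0.85, 150) gives 115270.476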
Solved Examples :
Example 1:
• Specific Gravity (SG) = 0.85
• Aniline Point (A) = 150°F
Calculation Instructions
Step 1: CI = $(454.54 – 1640.6 \times \text{SG}) + (774.74 \times \text{A}) – 0.554$ Start with the formula.
Step 2: CI = $(454.54 – 1640.6 \times 0.85) + (774.74 \times 150) – 0.554$ Replace SG with 0.85 and A with 150°F.
Step 3: CI = $(454.54 – 1394.51) + 116211 – 0.554$ Multiply SG by 1640.6 and A by 774.74.
Step 4: CI = $-939.97 + 116211 – 0.554$ Subtract 1394.51 from 454.54, then add the result to 116211.
Step 5: CI = $115270.476$ Subtract 0.554 to get the final Cetane Index.
The Cetane Index is 115270.476.
Example 2:
• Specific Gravity (SG) = 0.90
• Aniline Point (A) = 140°F
Calculation Instructions
Step 1: CI = $(454.54 – 1640.6 \times \text{SG}) + (774.74 \times \text{A}) – 0.554$ Start with the formula.
Step 2: CI = $(454.54 – 1640.6 \times 0.90) + (774.74 \times 140) – 0.554$ Replace SG with 0.90 and A with 140°F.
Step 3: CI = $(454.54 – 1476.54) + 108463.6 – 0.554$ Multiply SG by 1640.6 and A by 774.74.
Step 4: CI = $-1022.00 + 108463.6 – 0.554$ Subtract 1476.54 from 454.54, then add the result to 108463.6.
Step 5: CI = $107441.046$ Subtract 0.554 to get the final Cetane Index.
The Cetane Index is 107441.046.
What is the Cetane Index Calculator?
The Cetane Index Calculator is helpful in estimating the cetane number of diesel fuel based on its physical properties, specifically the specific gravity (SG) and the aniline point (A). The cetane
number is a critical measure of the ignition quality of diesel fuel, indicating how readily the fuel will ignite under compression. The higher the cetane number, the better the engine performance, the smoother the combustion, and the lower the emissions.
The cetane number and the cetane index both measure the quality of diesel, but they are determined in different ways. The cetane index calculation formula D4737 estimates the cetane index from fuel properties; the cetane number, however, is measured directly using a cetane tester and a cetane combustion equation. While the cetane number involves a physical test, the cetane index is calculated without the need for testing equipment. Understanding the difference between the cetane number and the cetane index is crucial for assessing diesel fuel performance.
Final Words:
Overall, the Cetane Index calculation is essential to ensure that your diesel fuel meets the standard required for efficient engine operation.
Calculate positive and negative numbers
Search Engine users found us today by entering these math terms :
• use algebra to evaluate square roots
• graphical calculator for TI84
• quadratic equation interactive teach
• ti-84 polynomials
• least common multiple chart
• Free books in accounting
• solving lagrange's equations in matlab
• free ged tests printables
• factor quadratic equations calculator
• accounting for cost free books
• triginometry
• How to solve radical expressions on a TI-83?
• McDougal, littell & company practice sheets
• symmetry worksheets for intermediate students
• adding and subtracting positive and negative numbers quiz
• prentice hall inc.+chemistry
• print out 8th grade algebra sheets
• free intermediate accounting twelfth edition online solutions
• mcdougal algebra 2 teacher edition book extra practice section
• inequality math worksheets
• solving second order nonhomogeneous differential
• simplify expressions in one variable worksheets
• how to solve general aptitude
• maths for dummies
• math poems involving algebra
• aptitude questions : pdf
• find slope graphing calculator
• fourth grade fractions
• radical numbers and decimal points
• artin exercise answers
• formula for solving cube roots
• v-t graphs worksheet
• free statistic problem solver
• multiplication fun fact sheets for third grad
• adding exponents calculator
• algebra 2 made easy
• ti-83 manual pdf
• dimeter formula algebra
• Dividing by Monomials Calculator
• matlab dogleg method
• solve sequence problems online help
• aptitude questions with answers
• How do you do radicals on T89
• how to solve 2nd order quadratic
• multiple choice questions science, maths, english, year 8 and 9
• math test logarithms
• square root simplifier calculator
• "how to find square root"
• easy mathematical substitution
• greatest common factor tree worksheets free
• prentice hall mathematics pre algebra
• 7th grade translations of two dimensional figures math worksheets free online
• free math problem answers from glencoe workbooks
• fractions on TI89
• cross product proportions worksheet
• creative math worksheets
• quadratic equation factor calculator
• sample sheets for 8th grade nys math test
• algebra multivariable
• quiz 2 assessment book pre algebra mcdougal
• Density Problems Worksheet
• formula calcul grade celsius in grade far
• quadratic inverse formula
• www.6th grade math printable online problems.com
• PDF LINEAR ALGEBRA BY OTTO
• help with algebra homework
• math test on highest terms test for free
• How to convert decimal measurement to a mixed number
• free fraction calculator that shows work done
• formula parabola
• calculator with square roots online
• matlab third order
• middle school math with pizazz
• how to use substitution method
• biology free test download ks3
• hard algebra question
• texas ti-83 plus hex calculator
• distributive law of square roots
• aptitude questions ebook tutorials
• e-book álgebra linear anton
• assistance with fractions
• dividing and multiplication signed numbers
• nonlinear systems of equations in matlab
• easiest way to get vertex form
• calculas
• integer operations worksheets
• Aptitude questions + answers
• multiple choice metric measurement worksheet
• Math combination in vb
• adding like terms with several variables free worksheets
• simultaneous substitution equations algebra practice
• OH Algebra 1 textbook glencoe practice worksheets
• "Binomial theorem" "program download"
• "angle worksheets" and "free"
• multiplying exponents worksheets
• quadratic solver 3 degree calculator
• how do you find y-intercept on TI-83
• procedure to solve linear equations for 3 variables
• liner equations
• find a quadratic equation from a table of values
• Teaching Algebra 1 over two years, Heath Algebra 1: AN INTEGRATED APPROACH
• algebrator download
• answer key for saxon math pre-algebra book 8/7 {no sales products}
• teaching cube and cube roots to 11 year olds
• adding fractions worksheet
• simplifying radical fractions calculator
• Worksheets Beginning Fractions
• +"great common divisor" chart
• how do I convert 135% to fraction?
• aptitude questions +download
• simple tricks and tips on trigo
• how to factor algebra
• algebra two unknown degree 2 equation solving excel
• solve multiple unknowns simultaneously download
• order fractions from least to greatest
• solve roots for higher order equations
• free practice ninth grade
• solving 3rd order quadratic
• learn algabra
• elementary algebra help
• exponents calculator
• help to write a java application to do elementary math multiplication
• glencoe math answers to monomials
• iowas open response area perimeter triangle
• ks3 online tests
• elementary algebra tutorial
• root mean square formula
• mixed number decimal
• glencoe/Mcgraw Hill math tests answers
• 3rd year fluid cheat sheet
• dividing polynomials worksheet
• FACTOR A CUBED BINOMIAL
• 252 square root simplified
• holt middle school printable worksheets
• free online ti-89 calculator
• transformations printable worksheet
• order and compare negative worksheets
• aptitude test paper with answer
• division algebra calculator
• Converting mixed numbers to fractions worksheets
• math algebra 1 to do like school work
• algabra
• mixed number to decimal calculator
• free geometry worksheets for +grade7
• how to solve alegbra equations
• history of mathamatics
• geometry and math poem
• Worksheets Order of Operations
• scale factor problems
• ti-84 using roots to find equations
• work slope's online math
• prentice hall worksheet answers
• sample aptitude questions with answers
• KS3 long division
• printable 4th grade math on Probability worksheets
• liner equation
• fun games or activities to practice partial sums
• calculator for two side variable equations
• q-balancing-chemical-equation
• Radical expressions in real life
• free maths work sheet for grade1
• step by step algebra calculators
• free ks3 worksheets
• compare and order integers worksheet
• algebra add subtract negative
• 6th grade multiplying and dividing fractions worksheet
• finding asymptote with ti84
• nth term calculator
• logarithms worksheet
• word problems in +referance angle in trigonometry
• pringles can method of learning multiplication
• grade 1math worksheets
• real world applications for solving by substitution
• ks3 maths factorization
• pre algebra tutoring software educational
• add and subtract integers in temperature worksheets
• online graphing calculator printable
• how to slove the harder quadratic equation
• when a variable is in the exponent
• gnuplot linear regression
• Algebra Common Denominator
• ti84.rom
• hot to solve equations
• factor square root calculator
• (using tx 84 graphing calculator online)
• eXCEL EQUATIONS
• adding bases with variable exponents
• PRINT FREE ALGEBRA 1 WORKSHEETS
• science scott foresman fourth grade chapter tests and answers
• 8th grade math EOG help free worksheets
• examples of nonlinear differential equations
• michigan 8th grade math work sheets
• scale math problems
• need help with parabola for free
• math simplification calculators
• ti-84 calculator chemical balancing
• list calculas formula
• complex rational expressions
• adding and subtracting negative numbers, worksheet
• dividing fractions solver
• holt algebra with trigonometry homework solutions
• how to do radicals with a ti-89
• free permutation worksheets
• adding radical expressions similar to adding polynomial expressions
• factoring polynomials with a cubed root
• Algebraic Expressions worksheets
• HOW DO YOU CONVERT A DECIMAL INTO A FRACTION USING A TI89
• introductory algebra online exercise
• mathematics book search for analysis-II by walter rudin
• factoring cubed
• 3rd grade reading worksheets
• holt algebra 1
• integer operations+worksheet
• answers to Glencoe Algebra 1 practice workbook
• online cube root calculator
• FREE DOWNLOAD TI83PLUS USER GUIDE
• balancing equations online activity
• mathematics worksheet for grade 7 on identities
• apptitude test download
• quadratic equation factorer
• printable math practice test for fourth grade NYS test
• mathematical equation for lineal metres
• Alegbra Problem
• how to foil a cubed function
• adding and subtracting negative numbers worksheets
• solving linear systems lesson plan
• mathpower 9 western edition chapter 5 test answers
• Algebric Calculator
• games with subtracting 3 integers
• solving right triangles with the TI89
• history of algebra invented
• life science worksheets 4th grade
• addition and subtraction algebraic expressions
• simplify algebraic
• kumon free
• ti 89 algebra step by step
• GMAT Permutation and Combination
• online third degree polynomial solver
• matlab solve
• area worksheets Ks2
• sats papers to print
• find free worksheets for third grade on coordinate grids
• algebra 1 concepts and skills all answers.com
• 1st grade least to greatest with pictures
• algebra, extracting square roots
• "maths cheat sheet"
• how to graph absolute value functions on a TI-83 plus
• mathematics algebra - 5th root chart
• glencoe mathematics algebra 1
• algebra with pizzazz online worksheets
• learning algebra2 online
• how to factorise addition factions
• logic expression simplifier
• formula for decimals
• application of permutation and combination
• 3rd grade math fun worksheets
• interactive c absolute value
• what is the fomula for volume
• scale factor 7th grade practice
• how to use cryptography algorithms inthe real application
• Rational Expression Calculator
• factoring a higher order equation with a variable
• ti84 emulator
• free help with order of operations algebra
• solve college algebra problems
• SC glencoe math book
• factorizations ks3
• maths exercise KS3 free
• how to solve rational expressions
• Mathscape Answers
• aptitude papers with solutions
• polynomial factoring solver
• free monomial simplifying calculator
• applications of Algebra + free videos
• Turning a Probability into a Fraction
• teaching vectors gcse powerpoint
• logarithm for dummies
• solve multiplying integers
• 6th grade math ontario
• algebraically proving a function
• what is the least common factor of 10 and 20
• simplify log equation root
• partial fraction expansion with complex denominators in mathematica
• symbolic calculator unit conversions
• polynomial online
• writing linear equations worksheets
• algebra 2 calculator online
• free algebra test
• tI 83 ROM IMAGE
• holt middle school math homework and practice workbook answer key
• graphing by littell
• math help.com\algebra
• adding and subtracting logarithms
• How is adding radical expressions differ to adding polynomial expressions?
• fourth square root
• Algebra Apitude Test 6th grade
• convert mixed number to decimal
• free ks3 past paper year 7
• how to do a wronskian in TI 83
• how to solve quadratic equations using the TI 84 Plus
• free math help-inverse relation
• excel equation solving
• mathematical basic multiplication primary school
• equations with fractional coefficients
• maths papers for grade5
• Percentage formulas
• solve equation 3rd degree
• nelson grade 6 math textbook answers
• fl.math 7th grade
• free coordinate plane
• ellipse general equation trigonometry
• cube roots on scientific calculator
• find domain of rational number with square root
• importance of checking rational expressions
• TI 89 square root
• texas instrument ti-89 gas formulas
• solving an equation in excel
• using matlab to solve a quadratic equation
• solving square and cube root on a scientific calc
• t-83 program for the roots of a function
• quadratic equations
• How to Add, Subtract,Multiply and Divide a Polynomial
• math combination permutation practice
• square root conversion charts
• math square roots ti-86
• Algebra 1 review problems
• worksheets on DIVIDING POLYNOMIALS BY MONOMIALS & REMOVING PARENTHESES AND SIMPLIFYING
• accounting book download
• roots determinant calculate
• trigonometry cheat sheet
• "binary math" Practice Exercises
• Are all linear equations functions? Is there an instance when a linear equation is not a function?
• third root in java
• simplified radical form calculator
• Free Aptitude test papers
• MULTIPLICATION STE
• solving quadratic equation fractions factorization
• quadratic equation games
• power of fraction
• "cpm classwork" pdf
• e 5th grade work sheets for free
• free do-it-yourself algebra projects
• ''pdf download algebra 2 book glencoe''
• C# solve LCM
• nonlinear use matlab
• first grade algebra software
• adding and subtracting lcd fractions
• math - substitution
• algebra tiles equation worksheet
• online graphic calculator
• simplify exponentials
• 7th grade mathematics chart printout
• square root raised to the cube
• read through of algebra 1 and integrated approach
• ks2 literacy sats practise papers free to download
• percent proportion worksheets
• simplify dividing equation
• combining like terms worksheet
• linear equation powerpoint
• finding domain in line equations
• pythagoras calculator
• converting mixed radicals to entire radicals
• square root simplifier
• algebra trivia with answer
• number pattern worksheet
• "subtract" "integer" "worksheet"
• differential equation check on ti-89
• CPM geometry book answers
• Printable test on perfect square
• area of a circle worksheet fifth grade
• Answers to Chapter 11 Chemical Reactions Prentice Hall
• trinomial+factoring+tictactoe
• homework cheats for 4th grade
• Advance Algebra help
• fractions containing square roots
• free online tutoring for high school advanced algebra
• college algebra tips
• how to solve permutation and combination
• system of equations application problems
• Trigonometry examples in everyday life
• two polynomials cubed
• honors alg 2 logarithms help
• free on line math book
• gcf calculator with exponents
• mcDougal littell worksheets
• connections course 2 math test grade 6
• adding and subtracting positive and negative number worksheets
• prentice hall mathematics algebra 1 teachers edition
• solving a system of equations with excel
• algebra simplifying calculator natural log
• investigatory math
• answers to multiplying and dividing polynomials
• solving algebraic exponents
• diophantine integer intercepts two linear equation
• home schooling gr8 algebra resourses
• rudin chapter9 14
• square root formula
• quadric graph
• calculator to do my geometry homework
• algebraic revision exercises + ratio analyses and irrational numbers
• gcse o level past papers
• balancing polynomials
• foundations for algebra answers
• how to put games on ti84 plus
• Fun Probability Worksheet
• quadratic solving applet
• adding and subtracting square roots
• algebra linear equations on a graph help
• factoring online
• cube root on TI-83
• slope explanation algebra
• "Excel Year 8 Mathematics"
• MATH PROBLEM SOLVER
• algebra 1 worksheets or activities on slope
• holt math book answers
• balancing algebraic equations, worksheets
• help finding vertex with a ti84
• KS3 PAPERS + FREE + SATS
• solving simultaneous equations in mathcad
• multiplying dividing multiple digit number secrets indian education
• linear equations ppt
• extended euclidean algorithm greatest common divisor
• Algebrator
• how to teach ks2 maths square numbers
• holt math crossword
• worksheets, integers, adding, printable
• glencoe online physical science chapter 1 quiz AND which of the following is not an example of pure science?
• root multiply calculations
• fraction worksheets for first grade
• "square root addition" calculator solves for you
• how to solve a multiplying fractions application
• ti 83 plus rom image
• online calculator solving radical expressions
• formula equation generator physics mathematics online
• second degree equations using the square root method
• mixed adding and subtracting worksheets
• factoring quadratics using the diamond
• free printable mixed fraction worksheets for 5th graders
• free online quick grader
• what is the #1 8th Grade Algebra Tutorial Software
• conversion chart from square foot to lineal foot
• balance equation steps
• rationalizing for dummies
• PRINTABLES/FRACTION QUIZ
• McDougal Littell Algebra 2: Chapter 9 Resource Book
• binomial solver
• algebra ll worksheets
• Square algebra
• elementary algebra free printables
• FRACTION QUIZ/PRINTABLES
• aptitude questions and answers by IT companies
• pre-algebra pretest
• printable worksheets with decimals in order
• Cost Accounting book online book version
• exercises accounting free
• "square root" polynomial
• how to solve equation on texas
• rational function online calculator
• Algebra 1 Answers
• precalculus sixth edition larson answers guide "download"
• yr 8 mathematics past papers
• completing the square with multivariable
• grade six holt middle school math worksheets chapters 9-3
• math factoring simple binomials exercise
• how to use ti-84 plus calculator for fractions
• simplifying radical expressions 30
• Algebra Homework Solver
• synthetic division applet
• free online precalculus aptitude test
• algebra cheat sheet
• how to solve square roots.
• Pre- Algebra with pizzazz ( Creative Publications)
• math trivia problem
• teachers study guide for square roots
• calculator to solve radical equation by rationalising
• 7th grade teacher aids life science worksheets
• free printable 8th grade work
• taking cube root on a ti83
• grade 9 polynomials questions
• algebra calculator elimination method
• inequality worksheets
• MasteringPhysics answers
• algebra with pizzazz/worksheets
• grade nine math help
• math charts on substitution in algebra
• solving literal fractional equation with exponents
• worksheets on dividing square roots
• Algebra Video Clips for Year 8-9
• example of factoring a four-term polynomial
• One Step Algebraic Equations Worksheet
• ti 83 graphing calculator online
• history
• pre algebra a for 6th graders
• free online 8th grade work
• permutations and combinations worksheet
• algebraic calculator
• ti-84 rational inequality
• How to graph 2 variable linear inequalities in standard form
• range of an equation for dummies
• worksheets revision for sats
• integers from least to greatest
• area practice sheets
• ti83 calculate log base 2
• math poems about algebra
• simplifying complex equations with exponents
• scale factor 7th grade
• 7 and 8 yr old math games
• quickbasic using to taylor series solution
• eog 4th grade practice printouts
• number grid to print gcse
• First Course In Abstract Algebra Instructor's Solutions Manual 7th edition
• Trigonometry sum difference of angles worksheet ppt
• saxon math answers for algebra 1
• 6th grade mathematics chart
• ti-89 accounting apps
• mcgraw-hill chapter 8 mid-chapter test answer sheet
• adding exponents
• matrices online calculator step by step
• sum and product rule factoring
• "polynomial factoring online"
• dividing divisor printables
• mcgraw-hill worksheet answers
• how to do slope on graphing calculator
• T183 calculator online
• square root equations with fractions
• elementary factors worksheet
• Introductory and Intermediate Algebra
• math aptitude papers
• Algebra's Math Trivia Question And Answer
• ti-83 calculator manuel slopes
• 10 yr old circles math problem
• chemistry addison-wesley fifth edition answers sheet chapter 25
• Math games 10th grade
• solve each equation for the given variable
• "Indiana Prentice Hall Mathematics Algebra 1" answers
• quadratic trinomial calculator
• printable canadian money worksheets/primary
• prentice hall answers
• how do you find scale factor
• math help 7th grade negative integers
• pre algebra problems
• equation with rational exponents
• ADVANCED ALGEBRA An Investigative Approach ANSWERS
• TI-89 decimal data type
• variables equations pre-algebra= z-10={-18}
• square root difference
• algebra substitution for rates
• solved mole equations
• mcdougal littell algebra 1 answer key
• mathematical formulas
• +crossword +algebra +grade 9
• "math worksheets for fourth graders"
• how to find solutions of a third order equation
• grade 8 algebra test 2 answer sheet
• translating algebraic expressions worksheet
• equivalent signed fractions
• worksheets rational exponents
• using a calculator to approximate roots ti 83 plus
• "ti 89" "log base 2"
• review for the final exam mat 0024 college prep algebra
• Answers to Glencoe/mcgraw textbooks and worksheets
• cross products with calculator t-83
• TI84plus>how to graph
• ti-84 plus solving matrices
• integer exponents math problems
• how to substitute matrices TI89
• TI-89 rom download
• substitution algebra example
• solving a quadratic equation if the coefficient of x^2 is not 1
• practice Algebra problems
• extend the forward Euler method
• MATHs EXERCISES GRADE 5
• algebra factoring trinomials calculator
• elementary equation worksheets
• ti-83 plus how to change log base
• math problems and worksheets for 7th and 8 th g rade for free
• Factoring on TI-83 Plus Calculator
• intermediate algebra online textbook
• Graphing calculater
• how to do fractional coefficients
• where can i try a ti 83 calculator online for my homework
• conceptual physics answers
• calculate foerster radius tutorial
• ti84 economic programs
• solve 2-step equations with fractions
• worksheets unlike denominators
• Precalculus (3rd Edition) exercise answers
• Free Polynomial Solver
• coursecompass cheats
• ordering integers worksheet
• 4th grade fraction practice
• holt pre-algebra "practice 5-1"
• every day uses of the logarithmic function
• changing a number from a radical to a fraction
• sixth grade math for beginners
• algebra worksheet for 6th grade
• find the value of each expression algebra
• algebra 2 calculator
• algebra on the GED
• equation calculator online
• finding least common denominator algebra
• product of primes worksheet ks3
• Phoenix application for fluid mechanics
• holt algebra 1 answers
• what is a square root
• Factoring Polynomials calculator
• Is there an instance when a linear equation is not a function?
• algebra diamond box factoring
• how to solve graphing problems
• cramer's program instructions for ti-83
• why simplify radical expressions
• simplify radical calculator
• free+download+cost accounting+concept
• mixed number to percent
• scale factoring
• Organizational Chart of an Algebra
• hyperbola equations
• add/subtract worksheets
• prentice hall pre algebra chapter 6 practice test
• math multiple finder
• algebraic expressions worksheets third grade
• Hard Algebra equation
• free 9th grade worksheets
• best online equation solver with explanation
• casio calculator symbol definitions
• sin^2 on TI-89
• factorization of the denominator
• free GED tutoring online
• california 5th grade math fractions word problem
• trigonometry worksheet
• +science mcq ten years papers - 9th class
• percentage formulas
• scale factor 7th grade math
• addition and subtraction formulas proofs
• free math print outs for 5 year olds
• cheat.codes.PDF
• TI-89 calculator Download
• algebra 1 chapter 4 resource book answers
• pdf ti-89
• calculator printable games
• solve my algebra question
• permutation and combination symbols
• nonlinear equation solver matlab
• matlab second order integration
• factoring in algebra
• ti-84 finding polynomial roots
• mathematics worksheets on expansion
• good books for cost accounting
• online calculator linear inequalities
• importance of algebra
• prealgebra 5th edition martin-gay answer key
• why is it important to simplify radical expressions before adding
• synthetic division solver
• Factorising equations using casio
• mcdougal 6th grade math book
• solve a quadratic on a ti-89
• grade seven algebra worksheets'
• free prentice hall algebra i
• online calculator with pie
• ti-89 linear program
• program quadratic eq
• free printable worksheets for reading readiness for 1st grade
• love poems using algebra terms'
• rational expression with the use of foil method.com
• how to solve algebra equations
• Least common multiple generator
• cube route on ti30 calculator
• 5TH TO 6TH MATH PROBLEMS.COM
• powerpoint grade 3, equivalent fractions demo
• how to convert an improper fraction to a percent , a fraction
• free printable worksheets 6th grade permutation
• first grade math and english homework
• college algebra cheat sheet
• Algebra 2 by McDougal Littell answers
• multi-step equations free worksheets
• GCF POLYNOMIALS WORKSHEET
• solve online algebra
• pemrograman linear plus pdf
• 8th grade balancing chemical equation examples
• How to List Fractions from Least to Greatest
• "ALEKS calculator"
• ti-89 chemistry application notes
• factorising calculator
• polynomial factorer online solver
• math multiplication printable pages
• square roots in numerator and denominator
• how to solve log equation using a ti-89
• science sample paper for tenth class
• algebra help simultaneous equations
• math combination graphs
• solve by substitution calculator
• dividing and multiply fraction practice sheets for middle school for free
• roots of a third degree polynomial ti 83
• online ti-84 application
• algebra Fraction rules sheets
• Kumon ppt template
• expressions math test online
• solving second order homogeneous
• chemical reaction "mass is not conserved" tiny
• quadratic+vb source code+free
• 'fraction in Powers'
• pre-algebra intro worksheet
• dividing polynomials with binomials
• free worksheets on multiplying matrices
• practice 3rd grade multiplying
• free middle school math worksheets properties of integers
• 3 order equation
• square roots and radicals calculator
• equivalent fractions worksheet
• graphing equalities worksheet
• ks3 algebra questions
• grade school permutations
• 1st grade worksheets for estimating
• factoring calculator
• least common denominator finder
• algebra roots chart
• convert integer bases in C program
• adding, subtracting, multiplying variables
• holt pre-algebra workbook
• hard algebra problems
• prime factored form
• radical expression calculators
• find a formula for changing metres into feet
• math problems
• LCM Answers
• 1st gradeprintable addition problem solving
• worksheets for basic beginning algebra
• formula for simplifying fractions
• What are the basic rules of graphing an equation or an inequality
• least common denominator calculator]
• example of poem about college algebra on radicals
• math module exam paper "statistics gcse"
• percentages + online exercises + maths
• reflection, translations worksheets
• linear algebra multiple equation solver
• synthetic division calculator
• integers fractions worksheets
• algebra 1 free worksheets or activities on slope
• program to solve a quadratic expression using c language
• how to solve a ratio word problem using a formulas
• +factors worksheet
• percentage equations
• ti calculator rom image rar
• math worksheets on metres for grade one
• cost Acounting ebook download
• lowest common multiple fraction worksheet
• "simultaneous equation" excel
• how to enter nonlinear system of equations in matlab
• chemical equations 5th grade
• Systems of equations in matlab with inequalities
• conceptual physics 3rd answers
• store formulas in a ti-84 plus calculator
• SCALES MEASUREMENTS LESS THAN ZERO
• online integration solvers
• "while loop" java would you like to play again
• example of math trivia
• conceptual physics practice book answers
• vertex form
• Antiderivative Solver
• simplifying a radican
• advance algebra poems
• least common denominator lcd calculator
• ged prep test +dowload worksheets
• pie value
• solve quadratic with three variables
• first grade trivia question
• free printable proportion worksheets
• boolean algebra calculator
• factor quadratic equation
• complex number equation solver
• math problems
• Examples of a Trivia
• glencoe mcgraw hill sol test prep
• online calculator rational
• two variable linear equation
• how do use a TI-84 Plus to graph log?
• worksheets for Multiplication of integers
• math worksheets ratios pre-algebra
• investigatory project in mathematics
• tricks for solving simple algebra
• creative publications worksheet answers
• algebra, square root
• worksheet on mixed word problems for 3rd grade
• selected answers addison wesley chemistry fifth edition
• creative publications answers
• college algebra software
• algebraic expressions for 6th grade
• solve for x free calculator
• ti-83 calculator download
• maths rules area for kids
• amatyc "singapore math"
• graphing equations KS3
• ti 83 calculator how to find base
• clock time worksheet fractions percents
• polynomials online quiz
• free math worksheets proportions
• mathcad download student version shareware fast
• GCF Worksheets
• solve ratios worksheet
• free logarithm worksheets
• fractions least to greatest
• online math quiz finite geometric series
• macdougal littel algebra 2 answer book
• middle school math with pizzazz book e
• "cost accounting" "free online"
• need quiz on radicals in algebra 2
• Math poems in algebra
• math trivias in fraction form
• give me the answers to my math homework
• cube expansion algebra
• how to solve radicals
• algebra power of 3 formula
• factoring the sum of two cubes
• free algebrator download
• test questions for 9th grade math
• Root mean square solver
• Management & Cost Accounting (Student's Manual) Free Download
• solve limits online
• online polynomial solver
• TI 84 download function
• math term poems
• 10th grade algebra
• math dilations and worksheets
• order the fractions from greatest to least
• pythagoras year 9 practice questions
• ti-89 solving square root of variable
• algebra with pizzazz answers
• worlds longest seventh grade math equation
• Algebra-expansion -practical uses
• algebra calculator
• answers to in the holt mathematics linear and nonlinear functions
• illinois glencoe math book answers
• common denominators
Bing visitors found us yesterday by entering these algebra terms:
• statistics question/solving
• business calculator cubed root
• algebra ratio
• linear equation with fractional coefficients
• Hard Math Equations
• "fraction problems" without gcf
• math calculator sol lesson plans
• ratios and permutations
• worksheet for maths for fifth class
• free printable conic section worksheets
• t1-83 plus
• worksheet place value comparison 3rd grade
• what is dilation scale factor in algebra
• algebra factoring ppt GCF LCM
• SAMPLE PAPER FOR APTITUDE TEST
• tI-83 plus manual, linear inequalities
• 2004 MENTAL MATHS PAPER QUESTIONS ks2
• holt workbook course 3 answers
• permutations and combinations GMAT
• how to do radical form
• free certification courses in finance & accounting
• adding and subtracting positive and negative numbers test
• 7th grade math permutation and combination practice problems
• "solve for the indicated variable", interactive tutorial
• what is permutation and combination
• practice on standard form, pre algebra
• solve system of equations three variables
• Linear Differential Equations Substitution square root
• Second grade Geometry Print-outs
• subtracting factoring equations practice
• solving nonhomogeneous difference equations
• Online Fraction Calculator
• polynomial factoring online calculator
• find matrix ratios ti-83
• antiderivative program
• equation solver square roots
• equation simplifying calculator
• 6th grade Conversion charts
• glencoe teacher edition chemistry online
• math 103 help algebraic expressions examples
• "The algebra of summation notation"
• algebra II worksheets on radicals
• algebra 2/trig by dolciani
• downloadable math worksheets for 5th graders
• physics 11 exam step by step solutions
• expression calculator simplify
• Statistics for idiots
• how to solve difference of two cubes
• factoring exponent and radical calculator
• mcdougal littell inc worksheet answers
• third degree equation solver
• intercept solver
• Algebra With Pizzazz Answers
• 6th grade simplifying cubic expressions
• linear equation 6th grade
• adding and multiplying and dividing and subtracting integers
• multiplying with decimal numbers worksheet
• simplifying polynomials with square roots
• decomposing METHOD in equations in algebra POWERPOINTS
• love poem with algebra 1 terms
• algebra calculator online
• trignometry 10th class
• online equation calculator with division
• hardest math problem
• C language aptitude questions
• free online ks3 sheets
• algebra worksheet for 7 graders
• prentice-hall pre-algebra pearson education "practice workbook"
• 8th grade study guide factoring distributive property
• common denominator worksheets 6th grade
• wims function solver
• "two step math problems"
• CUBED BINOMIAL
• online antiderivative calculator
• algebra 2 worksheets
• ti-83 calculators finding cubed roots
• O-level Binomial Theorem Formula
• graph linear, quadratic, radical, absolute value, and root functions
• free online algebra calculator that shows steps
• converting mixed numbers to decimals calculator
• Holt Biology worksheet answers
• how to add fractions with different base
• TI-89 accounting
• texas ti-84 help plus show multiple graphs
• quadratic equation + the principle of the square root
• ratio 6th grade pre-AP math
• year 10 mathe
• download free o'level chemistry book
• printable focus-directrix graph paper
• i need help in math on the computer
• trig values
• free ks3 maths worksheets
• 5th grade algebra powerpoint
• Math solver program
• 3 simultaneous equations calculator
• partial sums addition worksheets
• solving algebra
• solving a system of two equation usinf matrix algebra
• math pizzazz book d tEACHERS EDITION
• "fourier ti89"
• free linear algebra pdf books
• math word problems trigonometry calculators
• solving linear equations with Excel
• Permutation symbol in Stats
• converting standard to vertex form
• math worksheet scale factor
• printable proportion problems
• Free Online Algebra Solver
• algebra II questions with answer key
• prentice hall chemistry worksheet answers
• 7th grade algebra games
• step by step instructions on how to find a quadratic equation if you are only given the solution?
• mathematical symbol definitions
• egyptian worksheet printables
• answers to lcm homework equations
• Passport To Mathematics Book 1:practice workbook
• factorising cubed variables.
• books on cost accounting
• PRENTICE HALL MATHEMATICS ALGEBRA 2 BOOK ANSWERS
• mathematical poem
• 10th grade online games
• scale factor worksheet
• third order equations
• adding/subtracting square roots
• 4th grade fraction test
• balance equations calculator
• chapter 11 chemistry review workbook pages
• how do you divide radicals in a Ti 83?
• boolean algebra for dummies
• adding and worksheets on subtracting integers
• download mcdougal littell
• how to solve a number pattern
• free halflife2 download
• simplifying radical review
• holt algebra 9th
• complex quadratic solver
• excel, polynomial order 2
• free math tables of squares, cubes, 4th powers
• saxon math free assessment sheets
• maths calculator online test ks3
• hard mean maths equations
• solving second order differential equation with matlab using trapezoidal
• online trinomial calculator
• SOLVING second order system laplace
• free history sats papers
• conjugate radical help
• add multiply decimals
• 100 maths homework activities y4 term 2 sheet preview
• What is a value of a variable that does not have an exponent?
• Orleans-Hanna Algebra Prognosis ANSWERS AND QUESTIONS
• function of math for 5th grade
• intermediate accounting free ebook
• balancing chemical equations calculator
• aptitude questions for a intermediate level student
• square root addition
• add subtract multiply mixed numbers
• how to solve systems of differential equations with maple
• printable worksheets from prentice hall math
• figuring out algebra problems
• factorising cube
• what is a conjugate in algebra 2
• plotting second order differential equation matlab example
• hardest algebra problem ever
• free algebra calculator
• how to solve a permutation on a ti-83 plus
• algebra work books
• online radical equation solver
• mixed number caculator
• download software Algebra Buster
• factoring polynomial with more than two factors by grouping
• how to find the square root of 512
• how to solve numerical method using matlab
• probability,adding decimals for fifth graders worksheets
• examples of general addition formulas
• TI-84 Plus factoring tool
• exponents addition formula
• KS2 REVISION PREVIOUS TEST PAPERS
• how to do grade 10 integers
• how to do the substitution method in 9th grade algebra
• "circle graph worksheets"
• free fourth grade lesson plans for transformations, rotations
• "fun math worksheets"
• how to solve linear equation in 2 variables in excel
• pre algebra tussy citrus college edition
• formula for square root
• Prentice Hall Algebra 2 with Trigonometry answers
• worksheet on COMMON MONOMIAL FACTOR
• math glencoe- geometry ch.7
• KS2 maths coordinates
• fun and easy ways to solve systems of inequalities
• Algebra for college students
• free expresions and variables worksheet for fifth grade
• ordered pairs mathematics worksheets gr.8
• rational exponent equations
• solve my math for scientific notation for free right now
• basic mathematics formula
• Pre Algebra Permutations and Combinations
• 6th grade formula sheet
• cubic units worksheet
• how to solve algebra with scientific calculator
• advance algebra
• how do you solve for absolute value in the denominator
• pearson math answers
• simultaneous equations excel
• cube root finder
• set of numbers with greatest common factor of 3
• graphing calculator emulator TI84
• aptitude tests pdf
• finding GCF in java
• free ks3 sats yr 7 english math science
• factors worksheets for 4th grade
• area calculation in matlab
• algebra aptitude test
• mcdougal Littell Biology review answer key
• 72852587481415
• free +elementary english
• download ``ti 83 rom
• probability cheat sheet download
• algebra 2 factoring test
• latest math trivia with answers
• how to solve Rational expressions
• number pattern printable worksheets
• common math symbols
• 9th grade algebra worksheets
• permutation combination lesson
• chemical reaction online solver
• check algebra
• saxon holt texas textbook glencoe
• teach me basic algebra
• easy way to understand maths in daily life
• past exam papers arabic
• addition and subtraction equations with integers text book
• math investigatory project
• glencoe Pre-Algebra Worksheet chapter 9 lesson 10
• software algebra
• 6x2-17x+12=(2x+a)(3x+b)
• simplifying square roots conversion
• solve my college algebra question
• holt mathematical
• Algebra For Beginners
• free download mathcad 7
• pass my algebra test
• middle school math with pizzazz-test of genius
• poem math terms
• monomial practice tests
• general equation of a hyperbola
• homework answers for sale algebra 2
• radical expressions calculator
• online calculator square root
• holt middle school math challenge workbook (solving equations with variables))
• extended geometric patterns worksheet 3rd grade
• prentice hall mathematics algebra 1
• simplifying cubed polynomials
• free simplifying fractions solver
• free practice taks test of reading of 9th grade
• do online math exam
• solving equations with fraction exponents
• completing the square step by step for idiots
• beginner fraction worksheets
• ti-83 financial calculator program - example problems and guide
• how to find domain and range of a parabola
• Find the roots in linear equations
• algebra poems
• algebra homework sliver answers
• TEST NEGATIVE NUMBERS worksheet
• grade nine factoring made easy
• complex trinomials
• algebra solver
• pre-algebra with pizzazz! test of genius
• saxon math cheats
• glencoe math answers key
• holt middle school math course 2 11-3 worksheet
• +Math Word Problem Solver
• linear graph lessons
• calculator for finding the slope and y intersect
• cubed root ti-83
• McDougal Littell, Algebra 2, NC Edition
• Quadratic equation-ti-83
• solve equations with explanation software
• english work sheets year seven
• algebra;ks 2;worksheets
• fractional base conversion code C
• Different Math Trivia
• simplest radical form
• registration key cheats whole number worksheet generator
• solve system of linear equations three variables calculator
• 9th Grade English Worksheets
• chemistry chapter workbook answers yahoo
• adding and subtracting integers practice test
• algebra 2 homework in glencoe
• "root mean square"MATLAB
• inverse variation worksheet
• need help solving algebra equations
• solving second order differential equations boundary conditions
• 9th grade math linear systems calculator
• algebra math word problem solver
• graphing quadratics simulator
• 9TH GRADE FRACTION WORD PROBLEMS
• texas graphing calculator online
• factor sheets on math
• All Answers To Precalculus With Limits: A Graphing Approach Third Edition
• prentice hall mathematics pre algebra answer
• algebra 2 answers mcdougal littell
• order of operations poem
• first grade work sheets
• simplifying radical expressions factor
• rational equations calculator
• practice formulas for 7th grade math
• factoring 3rd order
• java mathcontext precision number of digits after the decimal
• how to simplify a none perfect cubes
• decimal numbers that never end
• roots and radicals free worksheets
• ti-84 silver plus formulas
• solve non-linear equations in matlab
• algibra
• matlab automatic diff
• "long division practice" and decimals and rounding
• gcd calculate
• Cost Accounting Problems answer key
• trig answers
• algebra solver download
• putting logarithmic form in a calculator
• indefinite integral calculator (trigonometric substitutions)
• math turn on to base 2 worksheet
• download free Long Division solver
• factoring polynomials quiz binomials 7th grade
• printable trinomial factoring worksheets
• vertex form calculator
• nonhomogeneous equation handouts
• free book download+fluid mechanics
• mathematical test
• why algebra was invented
• linear programming calculator online
• poems on college algebra
• 5th grade algebra
• Math for yr 1, Australia
• how to do permutation and combination using C language?
• add and subtract integer number
• grapcity aptitude test papers with answers
• The Algebrator for free
• multiplying rational exponents with unlike bases
• Elementary and Intermediate +Algebra: A Practical Approach Teacher edition
• subtracting negative integers worksheet
• free elementary work sheets on solutions and solutes
• power solving calculators
• square polynomials three terms
• online calculator to solve quadratic equations
• Adding and Subtracting Negative Numbers worksheets
• 5th grade common denominators printable
• example of algibra
• 6TH GRADE MULTIPLICATION OF FRACTION RULES
• proportion online graph calculation
• english aptitude questions
• free ebooks on permutation and combination
• quadratic equation fractions calculator
• free integer worksheets
• probability of three variable calculator
• decimal formula
• pre algebra free test online
• how to solve polynomial using excel solver
• multiplying numbers with radicals and integers
• simplified fractions find #
• scale factor math practice
• ks2 maths sats papers pics
• equation for linear feet to square feet
• ti89 titanium log base 10
• algebra polynomials workbook answers
• free proofs or induction solver
• online calculator completing the square
• CPM Teacher Manual
• online multiplication calculator
• finding quadratic equation numbers out of order
• fractions under the root
• how to solve for square root with fraction
• First order partial derivatives calculator
• radical solve problem
• homework answers for math
• free answers for 1998 prentice hall Algebra book
• dividing w/ decimals
• prentice hall cheats
• 4th square root
• domain and range of a graph
• common denominator calculator
• help simplifying 7th Grade expressions
• combinations on ti-84 plus
• algebraic poems
• properties of division worksheets
• Calculator That Solves Fractions
• Isometric Worksheets
• glencoe math 6th grade books
• 5th grade math problem solving steps
• prentice hall physics powerpoint
• free algebra worksheets
• McDougal Littell Florida Edition Algebra 1
• polynomial long division on TI-89
• Alegebra - Solving Equations in Two Steps
• Multiplying/Dividing Fractions with Equations with variables
• Venn Diagram Program TI 84 Plus
• linda cubed again rom
• Getting rid of variables in denominators
• Symmetry Math Printable Worksheets Kids
• lessons plans and activities on 1st grade word problems
• MATH TRIVIA EXAMPLES
• free ratios worksheets
• solve quadratic equation of three degree
• math worksheets factor 40 board game
• Saxon math prime number print off sheet
• worksheet about division of monomials
• free maths test papers for primary 4
• cost accounting homework solutions
• texas glenco algebra 1 book
• simplifying equations
• kumon samples
• gr. 6 algebra games
• free download the recent papers of aptitude
• sum and difference of radical expression
• prentice hall mathamatics pre algebra
• common factoring worksheets
• Summations in java
• exponential notation
• mixed number as a decimal
• Binomial Theory
• graph a parabola on TI84 what the parts mean
• how to solve two quotient division
• transforming radicals(math)
• "adding radical expression"
• SOlving systems of non-linear equation with MATLAB
• domain and range of radicals
• Algebra 2 online free assessments
• quick tests on algebra to print off
• to simplify sums and differences of radicals worksheet
• google practise test for A.LA.N.
• how to solve Scale Factor for 7th grade
• division activities for second grade
• permutation and combination quiz
• free downloading of aptitude test questions
• two step equation calculator
• 6th grade english grammer test
• Algebra 1 practice questions Glenco
• Sample Algebra logaruthm problems
• polynomial graphing program online
• free answers for pre-algebra linear fuctions mcdougal littell
• mcdougal littell inc worksheets
• how do you find dimensions if you have the area and perimeter of a rectangle?
• finding lowest common multiple of numbers worksheets
• standard form to vertex form calculator
• hyperbolic 1st order differential equation wave equation
• study guide, algebra and trigonometry, structure and method book 2 solution
• "three digit numbers" divisible by 6
• solutions to a first course in differential equations with modeling applications by dennis g zill
• graphing online calculator including vertex finder
• free math worksheets on radicals
• mean median mode math poem
• free solving my algebra homework
• multiply integer worksheet
• hyperbola center foci, vertices asymptotes
• hardware solution of simultaneous two linear equation
• mcdougal littell vocab answers
• multiplying and Dividing Rational Expressions calculator
• beginning algebra worksheets
• Answers to a math test for fourth graders/math book
• adding and subtraction signed numbers worksheet
• glencoe middle school math teachers edition for sale
• radicals calculator
• square root method
• least common multiple calculator
• hardest 6th grade math questions
• TI-83 video lessons
• Cube roots of an equation
• interpolation program for ti 83
• java code that input sentence and have loop
• factor 12 program TI-84
• advanced math 6th grade algebra help
• 1.574 into fraction
• math +graphing +inequalities +hyperbola
• online automatic free converting of fractions to decimals
• cubic root of 8
• ACCOUNTING EXERCISES TO DO ONLINE FREE
• store photos in ti-89
• definite integrals calculator
• solving systems by ADDITION AND SUBTRACTION
• how to solve simple radicals
• glencoe algebra 2 teachers edition book answers
• how to teach cube and cube roots
• simplify square roots calculator
• 7th grade math percentage formulas
• solving second order nonhomogeneous matlab
• orleans hannah study guide
• equation for sleeping parabola
• (how to manually solve radicals)
• online balancing
• free SATS KS2 maths booklet
• word search answers to biology: the dynamics of life
• real life graphs powerpoints maths
• TI-84 Plus factor
• C++ newton's method
• ks3 free printable science tests
• equation solver excel
• solving two function systems
• equations algebraic fractional
• algebra expressions practice grade 9
• online math problem solver
• linear equations with square roots
• pocket pc algebra simplify
• algebraic function ppt
• multiplying matrices worksheet
• lcm solver
• examples of while loop with sentinel in java
• saxon math tutor
• pre-algebra free worksheets
• highest common factor of two polynomials
• boolean algebra ti 89
• C ++ online help cube root
• algebra I help
• list of square and cube roots calculator
• "middle school" graphing worksheet free
• holt middle school free printable worksheets permutation probability
• fun way to teach integers
• simplifying cube roots
• sats science ks3 online test
• algebra answers
• Solving Basic Equations worksheets
• maths-rational expressions
• square root -400 excel
• test of genius geometry
• Algebra1 il answers
• solving nonhomogeneous second order differential
• Solving percent Equations calculator
• algebra 1 concepts and skills chapter 1 test
• algebra worksheets for 9 grade why how to do it
• Algebra glencoe answers
• GED CHEATS
• formula square two terms
• quadratic equation parabola hyperbola ellipse
• calculate 2x2 determinant applet
• can tan sin cos be used on ti 89
• algebra how to do box method
• Literal Equations calculator
• Math Help with combination Functions
• poems using math terms
• mathematics exercices matrices
• greatest to least fraction
• "lesson plans" "chemistry vocabulary"
• c aptitude questions
• cramer's rule worksheet
• fraction calculater
• free worksheet for 3rd grade Right Triangles
• free algebra for grade 8
• Solving Quadratic Equations by the square root property
• 6th grade triangle worksheets
• quadratics and linear - determining point of intersection algebraically
• Factoring Quadratic expressions calculator
• teachers edition free trial of florida edition mcdougal littell 6 grade science
• Free worksheets on percent for 5th grade
• 3 values find one variable
• trig chart
• dilation practice math edhelper
• algebra worksheet complementary supplementary angles
• When operating with algebraic expressions involving variables, what two conditions are necessary for terms to be like terms?
• how to graph an equation using a TI-84
• power of a fraction
• ellipse math example problems
• students ordered pairs printables
• Rational Expressions- Multiplication and Division
• subtraction worksheets
• polynomials factoring demo
• graph an hyperbola in matlab
• 8th grade algebra worksheets
• online pre-algebra calculators
• the ladder method
• factoring 10th grade
• algebrator
• coordinate plane print out
• glencoe brain worksheet
• two-step equation WORKSHEET
• coordinate graphing for third graders
• free algebra pdf
• free 8th grade base 2 worksheets
• algebra tutor
• free printable coordinate grid worksheets
• algebra formulas absolute value
• precalculus online problem solver
• Hall Night Algebra
• Free Intermediate Algebra Software
• online calculator with exponent
• free tutorials on logarithm
• worksheet algebra 2 of factors problems
• solving Rational Expressions and Functions answers to problems
• matlab+ode45+algorithm
• Greatest Common Factor-variables
• Rational expressions with Like denominators calculators
• lessonplan for roots of quadratic equation
• middle school math formula sheet
• Diamond Method in Factoring
• Free Printable Fourth Grade Math Worksheets
• square root worksheet learn how to
• math poems
• Free Math solver
• 1st grade math
• holt Algebra 1 crossword puzzles
• poems about "math"
• Solving Equations with variables calculator
• teachers book of glencoe mathematics book course 1 applications and concepts
• dividing decimal by whole integer worksheet
• "MathCad Gauss elimination"
• adding equations
• what is the difference between an equation and a expression
• find slope containing (3,5) and (8,4)
• convert square metres to lineal metres
• mcdougal algebra 2 answers free
• free homework help with algebra 2-Saxon
• aptitude question papers
• permutation and combination worksheet
• slope formula of line segment
• algebra question
• how to solve multiple non-linear equations
• prentice hall algebra 1 chapter 5
• trigonometric formula
• algebra simplify
• Samples of Math Trivia
• worksheet area trapezoid
• least common multiple polynomial calculator
• simplify equation
• adding and subtracting integers worksheets
• math diamond problems worksheet
• subtracting integers worksheets
• Sats papers to work on for year 6 on graphs
• pre-algebra solver
• factor trinomial online calculator
• dividing radical with exponents
• partial fractions for 3rd order polynomials
• express a ratio as a fraction in simplest form calculator
• How Do You Convert a Decimal into a Mixed Number?
• abstract algebra dummit foote solution manual
• adding AND SUBTRACTING integer practice page
• paper of Aptitude Test.PDF
• quadratic equations in ti-83 plus
• college pre-algebra
• algerbra +pie +R 2
• KS2 stats papers
• arithmetic/cube
• free on-line help on doing sixth grade math problems
• grade 5 math free worksheet least common multiple
• holt algebra 1 book
• quadratic equation program for a calculator
• sixth grade math help(percentages)
• fraction chart-equivalent
• algebra 2 programs
• practice adding and subtracting rational equation
• T1-83 Calculator Instructions
• quadratic equations made simple
• simplifying factors calculator
• word Math problem Solver
• Free Printable Algebra math review for solving systems of linear equations for 9th Graders
• "percent difference formula"
• math fractions worksheets
• 2d non linear system maple worksheet
• expresions and variables worksheet for fifth grade
• teacher addition to text books to cheat homework
• completing the square math problems/ find the value of c
• basic maths formula
• algebra fractions 4th grADE
• math/8th grade worksheets free printable
• view the module paper of hfc apptitude test
• icici aptitude test sample paper
• 7th grade algebra problem solver
• Fourth grade fraction worksheets
• how to get significant digits in a fraction +java
• advanced mathematical concepts book answers
• convert decimal into fraction on TI-84
• lesson plans - math-slope
• Applications of Rational Exponents-practice
• contemporary abstract algebra "chapter 5" solution
• compare and order worksheets
• algebra answer generator
• how to solve fraction equations addition subtraction
• exponents activities
• CALCULATOR of picture
• free factoring basic polynomials help
• math, square, cube formulas
• ti-83 applet
• CALCULATOR TO SUBTRACT MIXED NUMBER
• polynomials solver
• speed distance time algebra sample problems
• math quizzes
• LOG BASE 2
• Science worksheet answers for The Heat is On for 8 grade
• glencoe algebra 1 test generator
• pie equations of area and perimeter
• algebra+pic
• linear programming for dummies
• practice math problems for addison wesley intermediate algebra
• advance trigonometry poem
• Holt Geometry worksheet answers
• Algebra 1 honors florida
• physics fractions worksheets
• solutions to intermediate algebra for college students fourth edition
• pythagorean theorem application problem worksheets
• grade 1math
• extended form absolute value inequalities - worksheet
• Algebra 1 textbook answers
• coordinate plane worksheet
• elementary algebra concepts
• quadratic equations root method
• boolean logic simplifier
• what is a way to get the answers to an algebra1 equation?
• factoring cubed functions
• solving exponential p
• dividing polynomials by binomials
• pre algebra inequality activities
• common entrance revision
• math worksheets squares and square roots
• variable worksheets with answers
• project worksheet multiple choice
• ti-83 rom .bin download
• free printable worksheets for fifth grade
• solving inequalities algebraically using a sign chart
• conceptual physics answer key
• free tutoring for 5th grade math
• intercept math worksheet
• division and multiplication
• simplify square root expressions
• multivariable linear programming equations
• equations help
• "grade 12 factoring"
• free it aptitude test papers
• addition subtraction linear systems worksheets
• formula for a dodecagon
• "algebrator"
• Mathmatical puzzles + Proportions
• math exercises for 8th graders
• hardest math question
• dividing and multiplying notations
• algebra 2 free worksheets/answers
• symmetry worksheets ks3
• 2007 holt algebra 2 workbook answers
• 3 equation 3 unknown
• tutorial+combination and permutation
• mathematics new trivias
• +calculate +circumference
• holt algebra with trigonometry answers
• holt world history answer sheets
• Quick View of Basic Accounting. Printable (PDF) "Cheat Sheet" of Terminology, Formulas, Ratios
• solves for multiple variables online calculator
• free online polynomial factoring calculator
• algebra 1 concepts and skills chapter 1 test mcdougal littell
• discriminant+worksheets
• Algebra and Trigonometry Sturcture and Method Book 2 by McDougal Littell teachers copy
• how do you times and divide integers?
• square root simplify radicals
• difference quotient calculator
• when was algebra invented?
• Least Common Denominator in Algebra
• "boolean algebra""java""practice
• radical expression calculator
• Convert percent mixed number
• solved apptitude papers
• factoring algebra
• Printable Algebra math review for solving systems of linear equations for 9th Graders
• simplified radicals with fractions
• howto inequalities graphic calculator ti-83
• algebra solver graphs
• science book questions by mcdougal littell
• practice sheets for radicals in grade 11 math
• adding and subtracting integer fractions
• free finding the LCM by factoring polynomials
• glencoe mathematics 6th grade books
• help me to learn math printouts
• Simplifying Square Root Calculator
• free taks formatted quiz on decimals
• graphing calculator program online
• mechanics boolean math quiz
• rational exponents in math
• University of Phoenix Elementary and intermediate algebra
• Grade 6 Math intergers worksheet
• foiling math 3rd power
• combinations and permutations worksheets
• science sats exam paper
• factoring quadratic equation
• algabra
• Math K12 "Number theory" "Grade 10"
• online chemical reaction calculator
• ti-89 decimal to binary conversion program
• Free Math Problem Solver
• free area worksheets for fourth grade
• trigonometry answer key from ed helper
• algebra and trigonometry structure and method book 2 worksheet chapter 8
• least common denominator activities
• simultaneous equation solver
• algebraic fraction equations worksheet
• free math worksheets: functions
• linear equation standard form calculator
• lesson Matlab download
• ontario high school text books
• printable multiplication sheets
• answers McDougal Littell Middle School Math Course 1
• root difference square guardbanding
• balancing equations calculations
• geometry-radicals-help
• practising math fractions for grade eights
• first grade grammer lesson plan
• stare test paper for 4 gread ca
• solve when base is variable exponential given
• 2004 Ks3 SATS English Papers to download
• calculator with a log key
• 6th grade interactive Math percent to decimal
• online calculator to graphing parabolas
• convert decimals to integers in java
• grade six creating a graph using negative integers
• learn maths yr10 scientific notation
• conversions using dimensional analysis free worksheets
• softmath algebra 2
• integer worksheet for grade nine
• worksheets polynomials descending order
• quadratic equation calculator show work
• graphing coordinates pictures
• polynomial calculator
• easy way to learn algebra
• ti-83 hyperbolic cos
• Finding the slope printable worksheets
• easy way to learn maths
• number grid maths coursework
• solve radical expressions
• simplify math online calculator
• english aptitude questions
• 9th grade worksheets free
• answer sheet for fifth grade science taks test
• expansion exponent calculator
• how to solve polynomials on a ti-83 plus
• "equation simplifier"
• FREE KS3 PAPERS ONLINE
• mcdougal littell pre-algebra quiz online
• completing the square calculator
• who discovered the formula of foil? + math
• McDougal Littell online book
• adding and subtracting positive and negative numbers worksheets
• dividing decimal calculation
• teaching algebra to dyslexic students
• linear equations fractional coefficients
• dividing fractions demo
• free online test exam (on softwarefree )
• ti 83 calculators reducing fractions
• "linear equation worksheets"
• cordic swf
Google users found us yesterday by entering these algebra terms:
• glencoe logarithms
• quadratic formula plug in answers
• ratio problems self practice
• the world's most complex mathematical formula
• pre-algebra calculator for division
• triangle worksheet
• convert mixed fractions to decimal
• solve multiple variables calculator
• online eog Quiz for 6th grade on percents
• buy kumon worksheets
• ti89 difference quotient
• grade 10 algebra coin problems
• pics of greatest common factors
• easy way to learn agebra for kids
• algebra 1 formulas for chemical problems
• " Differential Equations For Dummies "download
• foerster algebra and Trigonometry ch 4 review
• solve and graph quadratic equations by completing the square
• multiplying Integers+math worksheets+printable
• predicting the products of chemical reactions calculator
• 5th grade explanation for adding and subtracting positive integers
• writing numbers in simplest radical form.
• write a mixed number and a decimal for 108
• nth root online calculator
• mathcad examples step functions
• aptitude test question paper download
• calculus guide
• ti 89 84 pc emulator
• math exercise worksheet negative numbers
• java method to determine if a string is a palindrome
• cpm algebra 2 answers
• algebra for dummies
• Balancing Equations Worksheet key
• trigonometry 10 problems with answer
• complex rational expression
• algebra solution generator
• ratio formula
• turning fractions into decimals
• rationalize numerator square root variable
• "first course in probability " download
• Latest aptitude questions and answer.pdf
• factoring cubed
• mixed number into decimal
• free 10th algebra worksheet
• calculator factoring program
• log base 2 on TI-83 Plus
• free math online integers multiple divide brackets
• what is the formula to fractions into decimals?
• free download of aptitude book
• Combinations and Permutations on my TI84 calculator
• high school math trivia
• trigonometry problem solver
• www.softmath.com
• greatest common divisor formula
• squaring fractions
• algebra elimination calculator
• Download KS3 Science 2005 SAT Paper
• calculator Lowest common denominator
• Locus education Permutations and combinations
• algebra questions
• free mathematics poems for kids
• free math induction solver
• decimals as mixed numbers
• how do i convert a mixed number to a decimal
• Algebra story problems for 5th grade
• free 8th grade math online
• Algebra 2 McDougal Littell answers Chapter 7
• "beginner fraction worksheets"
• algebra equation calculator
• pre algebra and introductory algebra 2nd edition
• creative publications math
• NJ PASS Gr2
• how to convert mixed fraction to decimal
• Algebra calculator
• circumference
• solving quadratic equations with cubed variables
• explanation on adding, subtracting, timesing and dividing fractions
• answers from the pre algebra book
• pre algebra radical questions
• 5th grade math problem solving
• word problems on trinomials+tutorials with answers
• java convert to Decimal
• factoring monomial games
• download trigonometric calculator
• math with pizzazz Book D Answers
• percent formula equation simple
• ti-83 find slope of curve
• worksheet on plane for grade three
• answers for algebra 1 book
• math substitution worksheets
• polynomial third degree solver
• aptitude + free download
• trig calculator tables
• how to solve linear combinations
• quadratic formula program for ti 84 calculator
• square root addition calculator
• compound words worksheets
• 9th grade advanced math worksheets
• maths free worksheet "year 8"
• Fraction problem solver
• example of complete the square hyperbola
• how to solve a radical expression
• linear system of equation+program+calculator+casio
• answers for math test for sec 2
• slope intercept form worksheets
• algebra module
• maths primary school formula sheet
• equation of a regular hyperbola
• radical expressions solver
• free 6 grade printable +work +sheet
• fun algebra 1 8 grade games
• ti82 graphics calculator + lcm function
• free algebra 1 study sheets
• simple fractions work sheets
• solving equations by dividing fractions
• logic-grid medium printable worksheets
• pre algebra problem with answers
• vector convert magnitude calculator
• algebra 1 print out textbook tests
• ti89 derivative solver
• Free 9th grade curriculum
• practice worksheet chapter 7 answers prentice hall
• grade 11-maths worksheets
• 7th grade Math Permutations
• Write in vertex form calculator
• nonlinear simultaneous equation solver
• free algebra worksheets with answer key
• surds worksheets
• Solving the value of an exponent
• algebra formulas
• in c# write a programme to convert feet into inch
• how do solve 2nd order differential equations
• factorial polynom
• formula for root
• aptitude question
• Geometry mcdougal 1998
• software algebra curriculum
• download GRE CAT: Answers to the Real Essay Questions
• school
• 6th grade questions about integers
• Algebra Value of Expression
• online math test for class 7
• steps to solving integers/ adding and subtracting
• help with algebra 2 problems
• trig square roots with exponents
• math printouts for kids going from 2nd to 3rd grade
• Algebra formulas
• free pre-algebra study guide
• permutation or combination
• iq test printable with answers project yahoo
• McDougal Littell daily language skills
• rule for solving one step equations
• Algebrator free software download
• proportions worksheet
• statistic on TI-82 imaginary numbers
• cubed polynomials
• formula sheet for honors precalculus
• multiplying and dividing integers
• solving logarithms calculator
• logic-grid free printable worksheet
• easy way to remember fractions
• graphing linear equation calculator
• number lines and signed numbers worksheet
• calculator with a radical
• ordering integers on a number line from greatest to least maths problems / free sample
• completing the square word problems
• algebra for college students third edition
• how to solve the GMAT inequality question
• graph of two variable linear equations
• 6th grade algebra printouts
• multiplying dividing adding and subtracting rationals
• seventh standard maths for kids
• 1st grade printable poems
• beginners Algebra online
• multiplying opposite signs
• how to solve algebraic slopes
• simultaneous equation solver
• mixed number as a decimal
• conics: is x squared minus y squared equals 0, a point, line, or intersecting line?
• diamond problem solver
• maths test papers year 7
• PRINTABLE MATH SHEETS FOR THE SUMMER
• how to simplify radical exponents
• FL.ALGEBRA download
• Gr. 10 Physics worksheets
• algeba problem-solving tutorial
• free math solving help on radical functions
• mcdougell Littell
• lesson for calculate cubic root
• multiplying and dividing integers free worksheet
• Factoring Solver
• Program Quadratic formula on TI-84 Plus silver edition
• lcm worksheet
• factoring and simplifying
• nonlinear diff equation solve
• adding and subtracting decimals worksheet
• decimal .25 per month means?
• algebra answers to finding slope
• permutation combination formula in java
• least common denominator algebra
• great math poems conic
• multiply rational expression calculator
• 8th grade final english exams/examples
• what does scale factor
• absolute value expressions
• aptitude online question with solution
• definition of graphing and solving linear equations
• printable yr8 pythagorean question
• yr 8 5-7 maths tests
• Permutations and combinations- basic lesson
• permutations worksheet
• pros and cons of elimination in algebra
• patterning and equations for grade 8 variables terms and expressions
• GED workbook sheets
• "i cubed" in algebra
• dividing rational expressions online calculator
• calculus graphing calculator manual download
• year 8 probability worksheet
• worksheet answers
• free download aptitude ebooks
• proportion worksheet
• Yr 6 maths work to do on the computer online for free
• prentice hall algebra 2 pdf
• solving applied problems
• method of substitution in fractions
• radicals calculator
• conceptual physics answer key
• Algebra i Algebra 1 (Holt, 2007)
• real life example involving polynomials
• Is there an instance in which a linear equation is not a function
• free algebraone worksheets
• free print out maths
• ti 84 plus quadratic formula
• KS2 practise maths test free online
• McDougal Littell Geometry textbook answers
• printable elementary algebra readiness test
• prentice hall Algebra 1 california edition review worksheets
• fractional exponents java
• cost accounting download
• solve my word problem
• math solve using substitution free worksheets
• emulate ti calculator 84
• rationalizing denominators gcse explanation
• free tutorials on Permutation Combination & Probability
• cube roots chart in math
• Algebra Solve for
• free algebra quizzes and answer keys
• pre-alegbra problems to work
• math integar
• free math problems scale
• how to work the difference of two square
• Algebra 1 online tutor
• ks3 math tests
• factoring quadratics calculator
• sample algebra questions
• Algebra two programs
• 1st year high school fractions and algebra
• Third order polynomial biological variables
• Free Online Tutoring ACT Basic reading comprehension and numerical reasoning skills test
• worded algebra ks3
• pre algebra calculator website
• mathematics - permutation and combination
• cube root
• 8th grade algebra worksheets
• FREE advanced algebra problems worksheet
• algebraic fractions calculator
• equation square roots solver
• answers to chapter 11 glencoe mathematics pre algebra
• harcourt algebra 2 ellipses
• free sample aptitude test for electrician
• factor calculator TRINOMIAL
• glencoe logarithm practice
• Finding domains of a function solver
• pre-algebra math worksheets
• solving variables
• factoring cubed root
• coordinate plane 5th grade
• fourth grade math, subtracting fractions,
• down loading free algebra worksheets
• conceptual physics prentice hall online quiz
• free triangle calculators
• boolean algebra source code
• online algebra 2 tutor
• math for advance kids in 6th grade
• probability+ks2+worksheet
• free math with formula with solution
• kumon answer key
• ti-84 calculator emulator
• Aptitude Questions+software programme
• maths exercises in for 5th to 8th standard
• permutation high school problems
• solve for x and y intercepts worksheets
• First, write the value(s) that make the denominators) zero. Then solve the equation
• maths for dummies
• simplify fraction algebra worksheet
• Free Online Algebra Problem Solver
• the steps associated with chemical equations
• foil help quadratic formula plugin
• Algebra ii Prentice hall Chapter 8 Test Answers
• 6th grade end of year math review printable
• integers worksheet
• algebra pdf
• sample lesson plan in teaching quadratic equations
• 7th grade formula sheet
• grade12 maths test (2007)
• manual squae roots
• algebra worksheets print
• multiplying integers worksheets
• florida 6th grade advanced math
• convertion chart
• Prentice Hall Course 1 Mathematics book online 6th Grade
• calculator cu radical
• free online math tutors for ged
• math question to answer translator
• order of operations poem
• 9th grade math study guide
• square root procedure for quadratic equation
• free download Accounting ebook
• free SATs papers for year 8
• elementary and intermediate math skills sheets
• decimals 6th grade worksheets
• math trinomial solver
• free maths exercise for primary 4
• rational expressions and equations help with homework
• complex rational expressions
• hard equation question
• LCM Finder
• trigonometry chart
• tic tac toe factoring
• where trinomials can be used in real life
• discrete mathematics and its applications exercises Solution download
• Multiplying and Dividing Square Roots
• learn algebra 1
• polynomial plotter for java
• what is a lineal metre
• "Algebra prognosis test" NC
• free maths sheets on the number 6
• Holt algebra
• NC End of Grade Test Practice and Sample Test Workbookt
• prentice hall physics middle school
• eighth grade simplifying polynomial test worksheet
• sample 9th grade geometry test
• basic maths sums
• free kumon worksheets to download
• printable worksheets quadrilaterals
• "trigonomic"
• help on grade six Pre-algebra finals
• "cheat sheet" permutations combinATIONS pdf NOT html
• algebra first year worksheets
• how to declare a integer variable of 10 digits in java
• english reading teaching method at the 7 grader
• power point presentation on simple equation of class VII
• 3rd algebra equation
• canadian online grade 11 math test
• trigonometry cheats
• maths cheat sheet year 10
• algebra 1b unit 5 assessment key
• british method algebra
• multiply rational expressions
• free learning algebra aptitude test
• interactive completing the square
• learn algebra free
• free math cross multiplication worksheet
• how to solve algebra formulas
• Mathematic o level Vectors
• Free Online Graphing Calculator Logarithms
• ti 89 sat apps
• JAVA exponent squared source code
• samples of 5th grade math work
• how to solve for c in a proportion equation
• algebra problems that = 27
• Free online algebra lessons with tests
• ratio word problems worksheets
• free rational expression solver
• free practice papers for ks3
• graphing calculator, find linear equation for power model
• Math poems - factoring
• cost accounting solved questions free
• books to print for 6/7th graders
• "linear equation" java
• Fractional "Quadratic Equations"
• general mathematics exam online
• graphing linear equations with absolute values
• basic algerbra
• type-in and solve equations with variable
• ALGEBRA 2 HOMEWORK ANSWERS
• teaching yourself algebra
• sixth grade math textbook
• index-of "parent-directory" pdf teach yourself
• math grade 1 printable exercises
• expected value outcome formula calculator
• elipses convert to standard form
• maths for 5 year old work sheets
• algebra 2 Holt 2004 teacher edition
• rational expressions solver
• runge kutta second order differential equation matlab
• ti84 factoring
• forgotten trigonometry
• free accounting past year exam question
• do maths test papers online
• 11th class accouting pdf
• trigonometric chart
• PH Pre-Algebra Tools for A Changing World Practice Workbook (P)
• online fraction calculator
• equation in matlab
• algebra books and answers free
• simplifying square root equations
• square root exponent calculator
• dividing quadratic equations calculator
• matlab simultaneous equations solver
• trig ratio problem solver
• square roots of binomial expansions
• algebra 2 mathproblems
• solving complex fraction practice
• algebra 2 book online richard g brown
• GED review worksheets for math
• free math worksheets for 9th grade
• medicine of incas mayas and swetz
• free download of 10 years question papers of CAT examination
• factor 3rd degree polynomials solver
• math code games worksheet
• calculas solution
• java rpoly roots polynomial
• online calculator elipses cylinder
• simplify calculator
• algebra worksheets free
• inverse matrices for T84 plus calc
• NC EOC English test for 9th graders free practice test
• simplifying algebraic expressions with exponents
• PROBLEM SUMS FOR GRADE 54
• gcse free printable maths sheets
• sample 6th grade math georgia test
• online tutor texas college algebra
• prentice hall pre algebra sheets
• free algebra 2 courses
• Radicals problems Calculator do my homework
• how to elimination method in trigonometry
• algerbrator
• quadratic formula calculator program
• ti 84 plus online graphing software
• partial factoring a quadratic
• Algebra Order Of Operations with fractions
• grade 9 math print out worksheets- slopes
• free solving an algebra problem step by step
• fraction papers for 1st grade
• linear-quadratic systems worksheet
• permutation&combination
• aptitude questions bank
• pythagoras + middle school + worksheets
• WORKSHEEP IN ALGEBRA FOR GRADE 3 IN SCHOOL
• how to pass the college algebra clep
• pre algebra sixth grade websites
• how to do algebra word equation year7
• algebra/square roots
• Algebra 2 free online quiz
• 3rd grade algebra
• grade 9 math learn
• mcdougal littell worksheet answers
• multipying and dividing radicals worksheets
• radical simplify ti-84 program
• Equations test to solve
• ti84plus+polair diagram
• factoring help
• worksheet Algebra gradient equation of line
• free math help tutor to download for a 3rd grader
• North Carolina Pre-Algebra Book
• 5th adding positive and negative numbers
• Paul A. Foerster Algebra and Trigonometry topics
• +Excel +polynom
• least common denominator calculator
• elementary algebra practice problems
• algebra 1 problem solver
• Online Books on Cost Accounting
• 8th grade trigonometry
• Algebraic skills changing the subject of the equation practice questions
• lcm exponents and variables
• find answers to my algebra problem
• simplifying expressions worksheet 3rd grade
• square root symbol history
• addison wesley math grade 10
• free math worksheets adding and subtracting intergers
• buy mcdougal littell-algebra 2 cd
• 20 math questions and answers for 5th and 6th graders
• 8% as decimal
• college algebra clep
• hardest math equation in the world
• math help free step by step
• decimal problems WITH ANSWERS
• free Ti-84 downloads
• maths investigatory project
• fifth grade math review worksheets
• 8th grade pre algebra
• basic triangle worksheet
• downloadable year 1 maths sheet
• employment aptitude test free download
• 8th grade algebra I end of course sample test
• school work printouts
• grade 7 math worksheet on translations
• fraction help to cheat website
• pie chart worksheets + 6th grade
• factoring binomials worksheet
• teachers manual prentice hall algebra
• lesson template + Algebra I
• Logarithms answer generator program
• order the fraction from least to greatest
• radicals division college algebra help
• simplifying complex polynomial expressions
• 6th math eog practice online
• free worksheet adding and subtracting integers
• how to pass 4th grade eog
• common divisor calculator
• ratio, proportion and percent worksheets printables
• beginning algebra free online tutor
• factoring cubed equation formula
• 6th grade Algebraic Properties
• combinations & permutations tutorial
• adding and subtracting numbers lecture
• focus circles algebra
• calculator online cu radicali
• exponents simplify calculator
• who invented the formula to solve a right triangle
• Learning reverse FOIL algebra
• Clep guide
• hardest algebra problem
• ellipses glencoe math
• divide polynomials two variables
• live linear equation help online for free
• arithematic
• 6th grade math quizzes and puzzles
• final tests for grade 6 worksheets
• examples of mathematics trivia
• basic maths for 9th class
• Free Math Program
• solve complex number ti 89 polar
• WorkSheets On Dividing Decimals
• maths area questions for kids
• chemistry ti-83 programs for finals
• volume+worksheet+grade 9
• free worksheets on solving simple equations
• quadratic equations fractorization
• Algebra gr 9
• algebra fx2 manual
• Add, subtract, multiply, and divide expressions containing square roots.
• list of all simple mathematical formulae
• how can i use prime factor decomposition to find out if a number is square
• algebra 2 "Glencoe/McGraw Hill" final review answer key
• dividing 22 by 25
• maths fun yr 8
• ti-84 precalculus downloads
• year 9 sats math 2008 preview paper cheats
• test of genius math sheets
• 7th grade sample math fraction problems
• finding square root with variables
• Calculating Square Roots
• math practice sheets area and circumference of a circle
• difference square
• Complete Algebra 1 Cheat Sheet
• hard mathematical equation
• pre algebra honors exam
• free download tricky mathematics book
• factoring math calculator
• where is the vertex on a graph for algebra
• formula sheet for ti calculator
• division printable work sheets
• online radical simplifier
• balancing linear equation
• coordinate pictures maths
• sol third grad math
• prentice hall chemistry the physical setting answer key
• Free+GRE+Maths+Formulas
• Algebrator
• online calculator summation notation
• ged math geormetry free
• adding and subtracting square roots calculator
• scale factors maths
• square root procedure for quadratic equation calculator
• root mean square formula
• Ti Calculator Programs
• maths formulae list download pdf
• solving two-step inequalities online solver
• logarithm for complex numbers+tutorial
• find the x value of y on a graphing calculator
• math answer to algebra 2
• yahoo aptitude questions
• cost accounting formulas
• square root fractions
• Fractions Least to Greatest
• divide algebra
• Learning Algebra Online
• How to do algebra problems
• pre algebra work sheet
• solve algebraic curve answers
• ti systems solver
• Printable math practice sheets for 6th grade math
• "rational expressions worksheet"
• GMAT model paper
• math trivias
• grade 10 algebra tutor
• division math problems printouts
• log with base code in C
• adding positive and negative numbers worksheets
• "nets of a cuboid"
• Finding Scale Factor
• how to solve algebra questions
• ti-89 hacks
• free algebra solver
• math book answers
• square root linear scales
• 5th and 6th grade algebra problems
• division of fractional equations worksheet
• solve equations with fractional exponent
• cheat for using the FOIL method
• year 8 maths test calculator
• math equation poem
• convert 2/3 to %
• eighth grade trigonometry printout
• science worksheets for 8th graders
• conics formula charts
• free decimal printouts
• Solving Binomial Equations
• download free software to solve differential equations on ti 89
• printable fraction worksheets for 4th grade
• mixed number solver
• ks3 long division
• algabra 1
• free online math solvers
• Instructors Answer Manual Discrete Mathematics and Its Applications 6th download
• year 8 sats past paper
• how to turn a decmal in to a whole number
• simplify square root multiplication
• excel spreadsheet for quadratic equations
• how do you solve algebraic equations
• how to solve equations
• holt algebra 1
• worksheets to solve of class 1
• word problems for grade 10 trigonometry
• eighth grade worksheets
• simplify rational expressions online calculator
• how to calculate lcm
• algebra half helper
• matlab equation example source code
• free positive and negative integer worksheets
• easy explanation of the asymptotes of a hyperbola
• order of operations key worksheet 5th grade
• hardest math test in the world
• algebra 1 honors practice work printable pages free
• third grade algebra websites
• Integers Worksheets
• unit 2 algebra 1 solutions cpm
• Graphs of hyperbola Functions
• online binomial solver
• algebra worksheets for kids
• c language apptitute questions
• algebra patterns worksheets
• nth term exponential
• "algebraic expressions" +factoring +"practice problems"
• square root of fraction 2/5
• Sequences+functions+ppt+ks3 maths
• slope worksheets
• combinations permutations exercises
• answers for algebra 1, third edition test 15 form a
• answers for algebra problems
• Why is it important to simplify radical expressions before adding
• highest common factor of 81
• math 10 pure equation solvers
• notes on simplifying radicals
• When solving a linear equation in one variable, the objective is to isolate the variable on one side of the equation. What does that mean?
• algebra 1, concepts and skills standardized test practice workbook answer
• chapter 13 practice algebra 1 glencoe
• simultaneous non linear equations
• A thrid grade Math step
• polynomial for kids
• algetiles mats
• downloads inverse matrices for T84 plus calc
• 12 yr old maths games online free
• webmath Square root simplification
• trigonometric calculator
• gmat tutorial+pdf
• evaluate rational expression calculator
• least common denominator for algebra 2
• practical maths worksheets for eighth class
• t89 dx dy differential equations
• matlab "generate worksheet"
• square root help
• online measurement in fractions calculator
• algebra easy
• printable homework practise sheets
• year 8 algebra sheets
• basic Maths definitions for Kids download
• simplifying complex numbers calculator
• holt biology worksheets
• college algebra practice problems
• dilations on a coordinate plane worksheets grade 8 math
• accounting book free pdf
• trigonometry for 8th graders
• First Grade Math Sheets
• math problems and concepts 10th class
• TI 83 programming quadradic equations
• SIMPLIFY CALCULATOR
• factoring cubes
• vertex form standard form
• mixed fractions radicals
• HYPERBOLA GRAPH
• "printable multiplication worksheets"
• how to give online exam ?
• Pre-Algebra - Prentice Hall California Edition answers
• Free State Grade 11 Examination Papers
• rom image ti-89 emulator ppc
• math scale factor
• formula to convert percentage to fraction
• " mixed number to decimal"
• high school algebra 1 computer software
• questions sheet for class sixth decimal and fraction maths in india
• quiz on adding, subtracting, multiplying and dividing integers
• symmetry and transformation tutorial pre-algebra
• algebra 8th grade worksheets
• formula for porabola
• ti-84 factoring
• math help for 8th graders in pre-algebra
• how many foci in a circle
• painless learn online 2nd grade math
• factoring polynomials calculator
• Math Answers for free
• adding cubed measurments
• college algebra: fourth edition dugopolski even answers
• foil ti-84
• Start solving your Algebra Problems in 5 minutes! free
• rational expressions online solving
• Topics for all formats include basic algebraic operations, elementary equations, laws of integral exponents, factoring and radical notation, rational expressions and an introduction to the Cartesian coordinate system.
• log2 TI-92
• algebra pizzazz
• Venn Diagram software for TI-84
• gcse maths free worksheets
• eighth grade conversions pre algebra
• maths peoms
• download trigonometry calculator
• aptitude general questions
• maths test papers solved class8
• answers for algebra two question
• freeKS3 maths worksheet
• eoc practice for algebra 2 for north carolina
• polynomial tile calculator
• online study guide + Algebra Structure and Method 1
• T183 Graphing calculator
• free solving simple algebraic equations worksheets
• How is adding radical expressions similar to adding polynomial expressions? How is it different?
• Math test papers for 12 year old
• long division tutoring software
• simplifying logarithms calculator
• maths practice yr 8
• graphs of cubed roots
• learning algebra formulas
• LCD addition problems
• Glencoe. skill practice multipying interhers 2-4
• "solving 3rd degree equations"
• algebra 1 book middle school online usage
• algebra, function, equations worksheet
• gcse science+worksheets
• good books on cost audit
• free 5th grade y variable worksheets
• factors of trinomials solver
• sketch the graph of the system of linear inequalities
• negative and positive numbers printables
• math with pizzass worksheets
• how do you pass a algebra test
• 9th grade printable worksheets
• rational experession calculator
• prealgebra final sample
• Algebra 2 formula sheet
• how to do time series on ti-84
• i know 3 points on graph, how do i work our formula
• math review sheets for KS3
• math workbooks 5-7 grade
• Science questions & answers for ks2 students
• Hard Math Equations
• AJmain
• excel mental maths strategies year 4 unit 6 cheats
• math worksheets formulas
• graphing quadratic inequalities math a
• ti-84 free emulator
• printable 9th grade math formula chart
• example of multiplying a 3 x 1 matrix with a 3 x 1
• modern advanced accounting solutions by larsen
• "kumon math" algebra pdf example
• equation solving programs ti-84
• "quadratics with complex constants"
• how to measure lineal metres
• simultaneous equasions
• maths test ks3
• pre-algebra print out exercises
• www.learn english&maths free for year-2
• divide polynomials calculator
• n an expression, the number that does not have a variable is called
• free Algebra answers
• fraction and relation - algebra i
• free algebra problem solvers
• fraction in power
• free exam paper for primary one - 2007
• logarithm base "casio 9850"
• download ti calculator rom
• free math practices/multiplications
• fifth grade math today vs old math
• write a program using while loop to reverse the digits of the number in java
• simultaneous equation solver with integers
• games using nultiplication age exponents and age
• begginers algebra
• fun math ks2
• free SATs papers for KS2
• algebra 2 end of year review worksheets
• simplify square root
• attitude test about art download
• algebra help inequality that demonstrates that you are planning to spend no less
• adding binomial
• computer programs that solve a 3rd degree polynomial
• free algebra projects and activities
• online texas instruments graphing calculator with table
• GED Worksheet
• online nelson math work book 8
• sample 7th grade math placement test
• square root properties
• texas instruments free o;ine calculator
• free printable worksheets on integers
• nc eog 3rd grade worksheet
• free samples of negative coordinate graphs
• how to store formulas on a ti89
• printable math worksheets for 1st graders
• Adding And Subtracting Decimal Games
• factoring quadratics reference sheet
• find the value of c that makes the trinominal a perfect square
• high school maths worksheets online rational numbers
• aptitude practice questions with answers
• maths probloms
• ppt square roots
• how to factor +poloynomials
• trinomial calculator
• pre-algebra final exam study guide answers
• worksheet value negative numbers fourth grade
• Points of a Coordinate Plane workseets
• negative cube root
• javascript radicals calculator
• hardest math question in the world
• Trigonometry practise
• Algebra sheets
• factor polynomial calculator
• free math downloads + angles
• inverse operation 5th grade worksheets
• pre-algebra word problems worksheet
• add radicals in mathematica
• used 8th grade math glencoe textbooks
• 6th grade end of year math review
• sample word problems using java programming
• online word math problem solver
• calculator program for permutations
• trig equation solver
• "online mental maths"
• how do you find the square root
• additional square worksheets
• ellipse on graphing calculator
• download a dictionary on ti 84
• tutor classes for 5th grade
• math solving programs
• trigonometry problems grade 10
• algebra 6th grade
• pros and cons of solving equations by graphing or substitution and elimination
• elementary math trivia
• how to solve chemical equations
• 8th grade math problems to print fun
• maths test quiz year 8
• what is the quadratic equation used for fun
• LOGS ALGEBRA WORKSHEET
• Free Math Problem Solver
• comparing linear equations
• algebra solvers
• McDougal Littell World of Chemistry Worksheet answers
• 9th grade iowa math
• logbase () + TI 89
• add radical numbers
• grade 8 math basic algebra
• how to solve logarithms is steps
• college algebra help
• f1 maths exercise download
• ti-89 simulator
• algebra 2 section 2.5
• ti 83+ calculator program source codes in ti basic
• online graphing parabolas calculator
• how to complete a square using the standard form of a quadratic formula
• algebra(ordered operations)help me solve my problem
• multiple variable equation
• combinations in math
• TEACHER PRINTOUTS ON FRACTIONS FOR 5TH GRADE
• adding and subtracting rational expressions calculator
• Prentice Hall Chemistry Vocabulary List
• college arithmetic ratio and proportion
• vertex from algebra 2
• maths- basic algebra revision for years 8
• ti89 online
• simple lcm of polynomials
• least common multiple exponents
• ADDITION method algebra
• alegbra books
• "inequality questions" mathematics
• saxon algebra 1 answers
• sqare roots
• free subtraction and addition math grade 1 in ottawa online
• multiple variable equations
• best algebra 2
• maths revision assessment free
• worded questions with addition and subtraction in meters
• common linear factor
• fractional exponents problem solver
• radicals whole number
• ti-83 algebra 2 programs source code
• NEGATIVE AND POSITIVE INTEGERS WORKSHEET FOR 5 GRADE
• Fifth Root Solver
• maths yr 8 circles, Algebra, percentages test
• quadratics word problems calculator
• trig identities calculator program
• using algebra in excel
• pre algebra final rest
• mathematical formulae of percentage
• Learning Algebra 2
• Algebra Absolutely values
• merrill pre algebra book
• 9th grade - Geometry printed - PDF
• TI-89 titanium+completing the square
• help sloving Coefficient problems
• online calculator differential equation
• algebra 1 concepts and skills chapter 7 pages by McDougal Littell
• Tips Do Algebra Factoring
• mcdougal litell inc history answers
• adding and subtracting integers worksheet
• quadratic graphs bitesize
• ti84 guide
• aptitude questions of bank
• rewrite each expression using exponential notations
• simplify radicals free online
• help with multiplying
• math worksheets for 6th graders
• least common multiple calculator
• pdf sur ti 89
• biology notes ti-89
• "real analysis" "lecture video"
• University Physics 9th edition Solution Manual
• how to solve an equation in matlab roots
• Algebrator
• pre=algebra computer
• algebrator 4.0
• simplify radicals solver
• sixth grade math spelling language worksheets
• math poems
• quadratic formula in excel
• substitution and elimination problems grade 8
• graphing parabolas made easy
• least common multiple c program
• calculator online radical on line
• maths games scale
• how to find the quadratic sequences note
• lineal metre
• how to solve step functions for graphing
• translating words into math symbols worksheets with answers
• convert quadric into vertex form
• simple aptitude question & answer
• teaching math probability 9th grade summary
• prentice hall accounts books
• conceptual physics tenth edition answer key
• parabola formulas
• multiple variables quadratic
• factor problems
• polynomial interactive games
• ti-89 pdf
• radicals calculator
• Jeeves Solve Math Problems
• explain what is solving linear equations
• Adding and subtracting simple fractions sheet
• boolean algebra reducer
• simple problem sums involving subtraction & addition + worksheets
• algebra 1 concept and skills chapter test
• cheat for using the FOIL method, algebra
• free logarithm solver
• the power of algebra
• algebra math papers to print
• cost accounting ebook
• genius math problem solver
• Subtraction addition multiplying dividing
• online calculater remainder
• mcdougal littell algebra 2
• radical calculator simplifying
• triangle scale factor printouts
• grade four ontario math worksheets
• half life practice problems algebra
• Solve the PIN Code
• algebra expressions calculator
• holt algebra 1 notes
• systems of equations 3 variables ti 83
• prentice hall algebra 1
• free printable activity sheets grade 7
• babylonian method matlab code
• formula to find common denominator
• rom image ti-89 emu ppc
• How to Change a Mixed Number to a Decimal
• nonlinear equation systems Matlab
• factoring monomials calculator
• Algerbra equations
• online algebra solver
• ti 83 rom download
• scale factor algebra
• simplifying algebraic fractions ks3
• multiplying + dividing + rational expressions + worksheets
• geometry questions hyperbola equation
• formula for conversion to Base 7
• rules for graphing
• maths sheet
• how to solve quadratic equations using perfect square
• free worksheets on percents for fifth grade
• quadratic inequalities a levels
• maths worksheets ks2
• TI-83 how to factor
• advanced algebra chicago
• What is the difference between a function and a linear equation?
• gcse chemistry cheat sheet
• finding limits with TI-83 Plus
• Whole Numbers to decimals
• PROBLES IAS/IFRS
• yr 9 maths test revision sheets
• Recognize Conic Sections: Circles, Parabolas, Hyperbolas, Ellipses Math B regents
• free printable Accounting worksheets for college
• year 8 extension maths percentages test
• radical exponents
• probability questions for grade 8's on test
• algebra for beginners for free
• algebraic equations worksheets upper primary
• worksheets slopes
• 2/3 convert to decimal
• converted from a whole numbers to a decimal
• converting bases
• ti-89 polynomial finder
• free printable enrichment worksheets for 6th grade math
• solving systems of equations on graphics calculator matrix
• worksheets on adding decimals
• hyperbola formula
• quadratic equation in TI 89
• 6th Grade Square Roots
• mathematica interpolation
• factoring signed number
• free mean, median and mode activities
• stem and leaf plots math word problems for fifth graders
• free online algebra solver
• Math Activity pages-4th grade
• free online math principles test canada
• square root in fraction multiplying conjugate
• physics equation solver
• free download maths questions for fifth graDE
• how to understand algebra
• log worksheets for algebra 2
• blank coordinate plane Holt Mathematics
• basic algebra + radicals
• graphing calculator practice
• who is using quadratic formula in real life
• printable pizzazz worksheets | {"url":"https://www.softmath.com/math-com-calculator/quadratic-equations/calculate-positive-and.html","timestamp":"2024-11-09T13:08:52Z","content_type":"text/html","content_length":"170261","record_id":"<urn:uuid:6d1119d7-7828-495b-ae80-697319fce4af>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00443.warc.gz"} |
<<->> — Returns the n-D distance between the A and B geometries or bounding boxes
double precision <<->>( geometry A , geometry B );
The <<->> operator returns the n-D (Euclidean) distance between the centroids of the bounding boxes of two geometries. It is useful for approximate nearest-neighbor distance ordering.
This operator will make use of n-D GiST indexes that may be available on the geometries. It is different from other operators that use spatial indexes in that the spatial index is only used when the operator appears in the ORDER BY clause.
The index only kicks in if one of the geometries is a constant (not in a subquery/CTE), e.g. 'SRID=3005;POINT(1011102 450541)'::geometry rather than a.geom.
Availability: 2.2.0 -- KNN only available for PostgreSQL 9.1+
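For example, a nearest-neighbor query would place the operator in the ORDER BY clause against a constant geometry; the table and column names below are only illustrative, not taken from the manual:
SELECT id
FROM geoms
ORDER BY geom <<->> 'SRID=3005;POINT(1011102 450541)'::geometry
LIMIT 10;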
Si veda anche | {"url":"https://postgis.net/docs/manual-3.6/it/geometry_distance_centroid_nd.html","timestamp":"2024-11-10T12:39:23Z","content_type":"application/xhtml+xml","content_length":"4964","record_id":"<urn:uuid:0e2862b7-9a50-46df-8b06-adbfbf3beb27>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00619.warc.gz"} |
pub struct BigDecimal { /* private fields */ }
Creates and initializes a BigDecimal.
Make a BigDecimalRef of this value
Returns the scale of the BigDecimal, the total number of digits to the right of the decimal point (including insignificant leading zeros)
use bigdecimal::BigDecimal;
use std::str::FromStr;
let a = BigDecimal::from(12345); // No fractional part
let b = BigDecimal::from_str("123.45").unwrap(); // Fractional part
let c = BigDecimal::from_str("0.0000012345").unwrap(); // Completely fractional part
let d = BigDecimal::from_str("5e9").unwrap(); // Negative-fractional part
assert_eq!(a.fractional_digit_count(), 0);
assert_eq!(b.fractional_digit_count(), 2);
assert_eq!(c.fractional_digit_count(), 10);
assert_eq!(d.fractional_digit_count(), -9);
Creates and initializes a BigDecimal.
Decodes using str::from_utf8 and forwards to BigDecimal::from_str_radix. Only base-10 is supported.
use bigdecimal::{BigDecimal, Zero};
assert_eq!(BigDecimal::parse_bytes(b"0", 10).unwrap(), BigDecimal::zero());
assert_eq!(BigDecimal::parse_bytes(b"13", 10).unwrap(), BigDecimal::from(13));
Return a new BigDecimal object equivalent to self, with internal scaling set to the number specified. If the new_scale is lower than the current value (indicating a larger power of 10), digits will be dropped (as precision is lower).
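A small sketch of the scaling behaviour, assuming the method is the crate's with_scale; the sample values are illustrative:
use bigdecimal::BigDecimal;

let n: BigDecimal = "1.12345".parse().unwrap();
// Raising the scale pads with insignificant trailing zeros.
assert_eq!(n.with_scale(7), "1.1234500".parse().unwrap());
// Lowering the scale drops digits; no rounding is applied.
assert_eq!(n.with_scale(2), "1.12".parse().unwrap());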
Return a new BigDecimal after shortening the digits and rounding
let n: BigDecimal = "129.41675".parse().unwrap();
assert_eq!(n.with_scale_round(2, RoundingMode::Up), "129.42".parse().unwrap());
assert_eq!(n.with_scale_round(-1, RoundingMode::Down), "120".parse().unwrap());
assert_eq!(n.with_scale_round(4, RoundingMode::HalfEven), "129.4168".parse().unwrap());
Return a new BigDecimal object with precision set to new value
let n: BigDecimal = "129.41675".parse().unwrap();
assert_eq!(n.with_prec(2), "130".parse().unwrap());
let n_p12 = n.with_prec(12);
let (i, scale) = n_p12.as_bigint_and_exponent();
assert_eq!(n_p12, "129.416750000".parse().unwrap());
assert_eq!(i, 129416750000_u64.into());
assert_eq!(scale, 9);
Return this BigDecimal with the given precision, rounding if needed
Return the sign of the BigDecimal as num::bigint::Sign.
fn sign_of(src: &str) -> Sign {
let n: BigDecimal = src.parse().unwrap();
assert_eq!(sign_of("-1"), Sign::Minus);
assert_eq!(sign_of("0"), Sign::NoSign);
assert_eq!(sign_of("1"), Sign::Plus);
Return the internal big integer value and an exponent. Note that a positive exponent indicates a negative power of 10.
use bigdecimal::{BigDecimal, num_bigint::BigInt};
let n: BigDecimal = "1.23456".parse().unwrap();
let expected = ("123456".parse::<BigInt>().unwrap(), 5);
assert_eq!(n.as_bigint_and_exponent(), expected);
Convert into the internal big integer value and an exponent. Note that a positive exponent indicates a negative power of 10.
use bigdecimal::{BigDecimal, num_bigint::BigInt};
let n: BigDecimal = "1.23456".parse().unwrap();
let expected = ("123456".parse::<num_bigint::BigInt>().unwrap(), 5);
assert_eq!(n.into_bigint_and_exponent(), expected);
Number of digits in the non-scaled integer representation
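For illustration, assuming this refers to the crate's digits method (the sample value is made up):
use bigdecimal::BigDecimal;

let n: BigDecimal = "-1.23456".parse().unwrap();
// The non-scaled integer is -123456, which has six digits.
assert_eq!(n.digits(), 6);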
Compute the absolute value of number
let n: BigDecimal = "123.45".parse().unwrap();
assert_eq!(n.abs(), "123.45".parse().unwrap());
let n: BigDecimal = "-123.45".parse().unwrap();
assert_eq!(n.abs(), "123.45".parse().unwrap());
Multiply decimal by 2 (efficiently)
let n: BigDecimal = "123.45".parse().unwrap();
assert_eq!(n.double(), "246.90".parse().unwrap());
Divide decimal by 2 (efficiently)
Note: If the last digit in the decimal is odd, the precision will increase by 1
let n: BigDecimal = "123.45".parse().unwrap();
assert_eq!(n.half(), "61.725".parse().unwrap());
Square a decimal: x²
No rounding or truncating of digits; this is the full result of the squaring operation.
Note: doubles the scale of bigdecimal, which might lead to accidental exponential-complexity if used in a loop.
let n: BigDecimal = "1.1156024145937225657484".parse().unwrap();
assert_eq!(n.square(), "1.24456874744734405154288399835406316085210256".parse().unwrap());
let n: BigDecimal = "-9.238597585E+84".parse().unwrap();
assert_eq!(n.square(), "8.5351685337567832225E+169".parse().unwrap());
Cube a decimal: x³
No rounding or truncating of digits; this is the full result of the cubing operation.
Note: triples the scale of bigdecimal, which might lead to accidental exponential-complexity if used in a loop.
let n: BigDecimal = "1.1156024145937225657484".parse().unwrap();
assert_eq!(n.cube(), "1.388443899780141911774491376394890472130000455312878627147979955904".parse().unwrap());
let n: BigDecimal = "-9.238597585E+84".parse().unwrap();
assert_eq!(n.cube(), "-7.88529874035334084567570176625E+254".parse().unwrap());
Take the square root of the number
Uses default-precision, set from build time environment variable
If the value is < 0, None is returned
let n: BigDecimal = "1.1156024145937225657484".parse().unwrap();
assert_eq!(n.sqrt().unwrap(), "1.056220817156016181190291268045893004363809142172289919023269377496528394924695970851558013658193913".parse().unwrap());
let n: BigDecimal = "-9.238597585E+84".parse().unwrap();
assert_eq!(n.sqrt(), None);
Take the square root of the number, using context for precision and rounding
Take the cube root of the number, using default context
Take cube root of self, using properties of context
Compute the reciprocal of the number: x^-1
Return inverse of self, rounding with ctx
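A hedged sketch of the default-context reciprocal, assuming the method is the crate's inverse; a terminating value is used so the result does not depend on the precision setting:
use bigdecimal::BigDecimal;

let n = BigDecimal::from(8);
// 1/8 terminates, so the quotient is exact.
assert_eq!(n.inverse(), "0.125".parse().unwrap());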
Return number rounded to round_digits precision after the decimal point
Return true if this number has zero fractional part (is equal to an integer)
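A brief sketch of rounding and the integer check, assuming the methods are round and is_integer; the value avoids an exact half so the tie-breaking rule does not matter:
use bigdecimal::BigDecimal;

let n: BigDecimal = "3.14159".parse().unwrap();
assert_eq!(n.round(2), "3.14".parse().unwrap());
assert!(!n.is_integer());
assert!(n.round(0).is_integer());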
Evaluate the natural-exponential function e^x
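A rough sketch of the exponential, assuming the method is exp; the full result carries the crate's default precision, so the scale is truncated before comparing:
use bigdecimal::BigDecimal;

let e = BigDecimal::from(1).exp();
// e = 2.71828182845904...; with_scale(5) keeps only the first five decimals.
assert_eq!(e.with_scale(5), "2.71828".parse().unwrap());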
Create string of this bigdecimal in scientific notation
let n = BigDecimal::from(12345678);
assert_eq!(&n.to_scientific_notation(), "1.2345678e7");
Write bigdecimal in scientific notation to writer w
Create string of this bigdecimal in engineering notation
Engineering notation is scientific notation with the exponent coerced to a multiple of three
let n = BigDecimal::from(12345678);
assert_eq!(&n.to_engineering_notation(), "12.345678e6");
Write bigdecimal in engineering notation to writer w
Trait Implementations
Add implementations (BigDecimal and &BigDecimal paired with each other and with the supported numeric operand types): each impl defines type Output, the resulting type after applying the + operator.
Div implementations (the same operand combinations): each impl defines type Output, the resulting type after applying the / operator.
From implementations for the supported source types: converts to this type from the input type.
Converts an i64 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts an u64 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts an i128 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts an u128 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts a f32 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts a f64 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts an isize to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts an i8 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts an i16 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts an i32 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts a usize to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts an u8 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts an u16 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
Converts an u32 to return an optional value of this type. If the value cannot be represented by this type, then None is returned.
The associated error which can be returned from parsing.
Parses a string s to return a value of this type.
Formats the value using the given formatter.
Mul implementations (the same operand combinations): each impl defines type Output, the resulting type after applying the * operator.
Neg implementations (for BigDecimal and &BigDecimal): type Output is the resulting type after applying the - operator.
Creates and initializes a BigDecimal.
Returns the multiplicative identity element of Self, 1.
Sets self to the multiplicative identity element of Self, 1.
Returns true if self is equal to the multiplicative identity.
Complete ordering implementation for BigDecimal
use std::str::FromStr;
let a = bigdecimal::BigDecimal::from_str("-1").unwrap();
let b = bigdecimal::BigDecimal::from_str("1").unwrap();
assert!(a < b);
assert!(b > a);
let c = bigdecimal::BigDecimal::from_str("1").unwrap();
assert!(b >= c);
assert!(c >= b);
let d = bigdecimal::BigDecimal::from_str("10.0").unwrap();
assert!(d > c);
let e = bigdecimal::BigDecimal::from_str(".5").unwrap();
assert!(e < c);
Compares and returns the maximum of two values.
Compares and returns the minimum of two values.
fn clamp(self, min: Self, max: Self) -> Self
Restrict a value to a certain interval.
This method tests for self and other values to be equal, and is used by ==.
This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
This method returns an ordering between self and other values if one exists.
This method tests less than (for self and other) and is used by the < operator.
This method tests less than or equal to (for self and other) and is used by the <= operator.
This method tests greater than (for self and other) and is used by the > operator.
This method tests greater than or equal to (for self and other) and is used by the >= operator.
Rem implementations (for BigDecimal and &BigDecimal operand combinations): each impl defines type Output, the resulting type after applying the % operator.
The positive difference of two numbers.
Returns true if the number is positive and false if the number is zero or negative.
Returns true if the number is negative and false if the number is zero or positive.
The resulting type after applying the - operator.
Method which takes an iterator and generates Self from the elements by “summing up” the items. | {"url":"https://docs.diesel.rs/2.1.x/bigdecimal/struct.BigDecimal.html","timestamp":"2024-11-09T19:32:34Z","content_type":"text/html","content_length":"1049279","record_id":"<urn:uuid:93d14a49-86ce-40ec-a7b0-5c2d51a1b99a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00424.warc.gz"} |
Tier 2 CO2 uptake models for concrete
Tier 2 provides a more advanced calculation method for estimating the annual uptake of CO[2] in existing concrete structures (primary use) on a national basis.
The model should be used primarily in cases where resources are missing to perform a more accurate calculation according to a future Tier 3 model, yet knowledge exists to perform a more advanced and
improved CO[2] uptake model than the simplified model offered in Tier 1. According to IPCC rules, Tier 2 or 3 should also replace Tier 1 if the uptake in the country is defined as a “key category”.
However, in this model, a more simplified calculation of the CO[2] uptake in existing (old) concrete structures is used compared to a Tier 3 method and referred to as an Onward calculation method.
The national annual CO[2] uptake in concrete; in the primary use stage (existing structures), in the end-of-life stage (demolition, crushing, stockpiling), and in secondary use (recycling), can be
estimated according to this methodology for Tier 2.
To be able to make an improved estimate of the annual CO[2] uptake, one needs a good overview and knowledge of the existing concrete product stock in the country or region in question, since it is in
these structure’s surface area that the CO[2] uptake takes place. The inventories of existing concrete structures comprise different product applications, its specific concrete surface area, concrete
exposure conditions, and concrete quality. The age distribution of concrete in place is estimated by cement consumption and cement type statistics over time. The methods used in different countries
are all based on the well-accepted and documented carbonation rate model of square root of time dependency shown below. This makes it possible to estimate the annual CO[2] uptake in existing
structures. The calculation according to Tier 2 of the annual CO[2] uptake in the building stock for a certain year, is done according to the “onward calculation method”
To simplify the uptake calculations, it has been assumed in Tier 2 that the uptake of CO[2] the present year can be equated with the uptake of CO[2] for 100 years in the current distribution of
concrete products and structures, and with the current year's concrete production. In reality, this year's uptake of CO[2] takes place in today's standing concrete products and structures and in
concrete produced during the last 100 years. This approximation is thus relatively good provided that the concrete production has been fairly stable over the years. If greater variations in concrete
production have occurred, this calculation method can be improved by using a rolling average value of the use of concrete or clinker over a longer period of time.
The primary use stage CO[2] uptake calculations used in Tier 2 is based on the calculation model presented in the European standard CEN/EN 16757, Annex BB. A schematic figure of the uptake principles
is shown in the figure below.
The CO[2] uptake in kg for application i during t years can be calculated as:
CO[2]-uptake[i] = (k[i] × √t / 1000) × DOC[i] × A[i] × U[tcc] × C
(the factor 1000 converts the carbonation depth k[i] × √t from mm to m, so that the carbonated volume is obtained in m^3)
Which can be explained as follows:
• CO[2]-uptake is the total CO[2] uptake in kg CO[2] during t years
• k[i] is a constant factor for the rate of carbonation for surface i in mm/√(t)
• DOC[i] is the degree of carbonation for surface/volume i
• A[i] is the area of carbonation surface i in m^2
• t is the number of years from start of carbonation
• U[tcc] is the maximum theoretical uptake in kg CO[2]/kg clinker. A default value for clinker can be 0.538
• C is clinker content in kg clinker/m^3 of concrete
Values of k and DOC can, for example, be found in CEN/EN 16757, Annex BB. In that standard, correction factors for k due to use of additions in the concrete such as limestone, silica fume, siliceous
coal fly ash or/and ground granulated blast furnace slag (GGBS) can also be found.
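As a rough numerical illustration of the formula above, the following sketch (in Python) evaluates the uptake for a single exposed surface. All input values are hypothetical placeholders rather than tabulated values from CEN/EN 16757, and the function name co2_uptake_kg is illustrative only.
from math import sqrt
def co2_uptake_kg(k_mm_per_sqrt_year, doc, area_m2, years, u_tcc=0.538, clinker_kg_per_m3=300):
    # k_mm_per_sqrt_year : carbonation rate k[i] in mm/sqrt(year)
    # doc                : degree of carbonation DOC[i] (fraction 0..1)
    # area_m2            : exposed surface area A[i] in m^2
    # u_tcc              : maximum theoretical uptake U[tcc] in kg CO2 per kg clinker
    # clinker_kg_per_m3  : clinker content C in kg clinker per m^3 of concrete
    carbonated_volume_m3 = k_mm_per_sqrt_year * sqrt(years) * doc * area_m2 / 1000.0
    return carbonated_volume_m3 * u_tcc * clinker_kg_per_m3
# Hypothetical example: one surface, k = 6 mm/sqrt(year), DOC = 0.4, A = 1000 m^2, t = 100 years
print(round(co2_uptake_kg(6, 0.4, 1000, 100), 1), "kg CO2")   # about 3873.6 kg CO2
Summing such terms over all exposed surfaces i, and adding the Tier 1 end-of-life and secondary-use terms, gives the national annual figure.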
The CO[2] uptake for end-of-life processes and secondary use is calculated as in Tier 1.
In the figure, it is shown that the carbonation takes place at the surface of the concrete and proceeds inwards from the surface with a rate that is decreasing with a linear time scale but constant
with a square root of time scale. The carbonation depth is accordingly directly proportional to √year. The uptake calculation is based on calculating the carbonated volume of the concrete structure
(from k[i]*DOC[i]*√year*A[i]) and then multiplying this volume with the CO[2] calcination emission from the clinker (kg CO[2]/kg clinker) and the clinker content in the concrete (kg clinker/m^3).
Due to the uptake calculation simplifications (Onwards calculations), it is anticipated that the uptake during one year in existing buildings is equal to the uptake over the next (100) years in the
concrete produced during the present year. If large national variations in the clinker use exists, it is recommended that a rolling average value for the clinker use is used in the calculations. The
rolling average shall cover 20 years. | {"url":"https://magicbiblioteket.ivl.se/projekt/co2-concrete-uptake/calculation-models/tier-2-calculation-model.html","timestamp":"2024-11-05T07:27:18Z","content_type":"text/html","content_length":"128220","record_id":"<urn:uuid:9f305cef-c260-407b-a33a-29040a6e6a8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00460.warc.gz"} |
The next step in planning for your new business is to analyze the cost behaviors
Posted: September 14th, 2023
The next step in planning for your new business is to analyze the cost behaviors and systems of your industry and conduct the relevant financial calculations to determine appropriate costing
solutions. In this milestone assignment, you will conduct a break-even analysis to determine your target profits and the selling price you plan to use to achieve these profits for each product.
Use information from Milestone One and the provided Milestone Two Market Research Data Appendix Word Document to conduct a cost-volume profit analysis. Complete the “Contribution Margin Analysis” and
“Break-Even Analysis” tabs in the Project Workbook Spreadsheet that you used for completing the Milestone One assignment.
Specifically, you must address the following rubric criteria:
Contribution Margin. Determine your contribution margin per unit in the “Contribution Margin Analysis” tab.
Choose a sales price for each product.
Calculate the contribution margin for each product based on your sales price and the variable cost for that product. Show your work using calculations to the side of the table or using appropriate
formulas in the table.
Break-Even Analysis. Use cost-volume-profit (CVP) analysis to determine your break-even points for achieving your target profits in the “Break-Even Analysis” tab.
Determine the break-even points for each product. Show your work using calculations to the side of the table or using appropriate formulas in the table.
Determine break-even units for the suggested target profits for each product. Show your work using calculations to the side of the table or using appropriate formulas in the table.
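For orientation only, here is a small illustrative calculation of the quantities requested above, written as a Python sketch; the price, costs, and target profit are invented numbers, not figures from the Milestone data.
def contribution_margin(price, variable_cost):
    return price - variable_cost
def break_even_units(fixed_costs, cm_per_unit):
    return fixed_costs / cm_per_unit
def target_profit_units(fixed_costs, target_profit, cm_per_unit):
    return (fixed_costs + target_profit) / cm_per_unit
price, variable_cost = 25.00, 10.00             # chosen sales price and variable cost per unit
fixed_costs, target_profit = 12000.00, 6000.00  # hypothetical fixed costs and target profit
cm = contribution_margin(price, variable_cost)               # 15.00 per unit
print(break_even_units(fixed_costs, cm))                     # 800 units to break even
print(target_profit_units(fixed_costs, target_profit, cm))   # 1200 units to reach the target profit
The same arithmetic can be reproduced with formulas in the "Contribution Margin Analysis" and "Break-Even Analysis" tabs of the workbook.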
What to Submit
Submit the Project Workbook with the “Contribution Margin Analysis” and “Break-Even Analysis” tabs completed. This file should be completed and submitted using Microsoft Excel. | {"url":"https://qbacade.com/the-next-step-in-planning-for-your-new-business-is-to-analyze-the-cost-behaviors/","timestamp":"2024-11-02T06:13:33Z","content_type":"text/html","content_length":"57750","record_id":"<urn:uuid:47ee4b4d-dd9d-4ad5-b243-ce59c25f7fe4>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00332.warc.gz"} |
5 Best Ways to Find All Possible Substrings After Deleting K Characters in Python
💡 Problem Formulation: The challenge is to compute all the different substrings that can be formed from a given string after deleting any k characters. For instance, given the string "Python" and
k=3, possible substrings after deletion include "Pyt", "tho", and others, with up to C(n, k) distinct results, where n is the length of the string (here C(6, 3) = 20).
Method 1: Iterative Approach
The Iterative Approach systematically generates all combinations of the substrings by deleting k characters from the original string using nested loops. It’s straightforward to implement but may
suffer efficiency issues for larger strings.
Here’s an example:
from itertools import combinations
def find_substrings(string, k):
    substrings = set()
    for combo in combinations(range(len(string)), len(string) - k):
        substrings.add(''.join(string[i] for i in combo))
    return substrings
# Example usage
print(find_substrings("Python", 3))
{'Pyt', 'Pyh', 'Pyo', 'Pyn', 'Pth', 'Pto', 'Ptn', 'Pho', 'Phn', 'Pon', 'yth', 'yto', 'ytn', 'yho', 'yhn', 'yon', 'tho', 'thn', 'ton', 'hon'}
In this code snippet, we leverage the combinations method from the itertools module to generate every combination of indices that survives the k deletions, and then build substrings from those indices.
We then add these substrings to a set to ensure uniqueness.
Method 2: Recursive Approach
The Recursive Approach uses a backtracking method to generate substrings. This method is elegant and efficient for small to medium-sized strings, as it avoids creating all combinations in advance.
Here’s an example:
def find_substrings_recursively(string, k, start=0, curr_substr=''):
    if k == 0:
        return {curr_substr + string[start:]}
    if start == len(string):
        return {curr_substr} if k == 0 else set()
    # Delete current character
    result = find_substrings_recursively(string, k - 1, start + 1, curr_substr)
    # Keep current character
    result |= find_substrings_recursively(string, k, start + 1, curr_substr + string[start])
    return result
# Example usage
print(find_substrings_recursively("Python", 3))
{'Pyt', 'Pyh', 'Pyo', 'Pyn', 'Pth', 'Pto', 'Ptn', 'Pho', 'Phn', 'Pon', 'yth', 'yto', 'ytn', 'yho', 'yhn', 'yon', 'tho', 'thn', 'ton', 'hon'}
This code uses recursion to either delete or keep the current character and moves on to the next character in the string. By exploring both possibilities at each step, it builds up all possible
substrings where exactly k characters have been deleted.
Method 3: Using itertools.combinations with Slicing
This method combines Python’s indexing capabilities with itertools.combinations to effectively generate all possible substrings. The string is reassembled without the characters at the selected indices.
Here’s an example:
from itertools import combinations
def find_substrings_combinations(string, k):
    return {''.join(string[i] for i in range(len(string)) if i not in combo)
            for combo in combinations(range(len(string)), k)}
# Example usage
print(find_substrings_combinations("Python", 3))
{'Pyt', 'Pyh', 'Pyo', 'Pyn', 'Pth', 'Pto', 'Ptn', 'Pho', 'Phn', 'Pon', 'yth', 'yto', 'ytn', 'yho', 'yhn', 'yon', 'tho', 'thn', 'ton', 'hon'}
In this example, we use a set comprehension along with combinations to identify the indices to be deleted, and then loop through the original string, skipping those indices, to build the resulting substrings.
Method 4: Dynamic Programming Approach
Dynamic Programming can be used to solve this problem by storing intermediate results and building up the solution. This method is efficient for larger inputs as it avoids redundant computations.
Dynamic programming implementations can be quite involved and are specific to the type of problem. For this problem, the natural state to memoize is the pair (current position in the string, deletions still available). A minimal sketch is given below; readers who want the general technique can consult resources that cover dynamic programming in detail.
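Here is a hedged, illustrative sketch of that idea; the function name find_substrings_dp and its structure are an assumption for illustration, not taken from a library:
from functools import lru_cache
def find_substrings_dp(string, k):
    @lru_cache(maxsize=None)
    def solve(start, remaining):
        # No deletions left: keep the rest of the string unchanged.
        if remaining == 0:
            return frozenset({string[start:]})
        # Not enough characters left to delete: dead end.
        if len(string) - start < remaining:
            return frozenset()
        results = set()
        # Option 1: delete the character at position `start`.
        results |= solve(start + 1, remaining - 1)
        # Option 2: keep the character at position `start`.
        results |= {string[start] + rest for rest in solve(start + 1, remaining)}
        return frozenset(results)
    return set(solve(0, k))
# Example usage
print(find_substrings_dp("Python", 3))
Memoization pays off mainly when the input contains many repeated characters, since identical (position, deletions-remaining) states are then reached along many different paths.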
Bonus One-Liner Method 5: Functional Approach with map and filter
This method employs a functional programming style to quickly generate substrings by combining map and filter with itertools.combinations.
Here’s an example:
from itertools import combinations
def find_substrings_functional(string, k):
    return set(map(
        lambda combo: ''.join(
            map(lambda i: string[i],
                filter(lambda i: i not in combo, range(len(string))))),
        combinations(range(len(string)), k)))
# Example usage
print(find_substrings_functional("Python", 3))
{'Pyt', 'Pyh', 'Pyo', 'Pyn', 'Pth', 'Pto', 'Ptn', 'Pho', 'Phn', 'Pon', 'yth', 'yto', 'ytn', 'yho', 'yhn', 'yon', 'tho', 'thn', 'ton', 'hon'}
In this case, we use map to apply a function to each combination of indices to delete. Inside it, filter keeps only the indices that are not in the current combination, and the surviving characters are joined back into a string. The result is a set of strings representing the substrings.
• Method 1: Iterative Approach. Straightforward and easy to understand. May not be efficient for very large strings due to potential combinatorial explosion.
• Method 2: Recursive Approach. Elegant solution that avoids creating all combinations upfront. Can suffer from performance issues with very large strings or high values of k due to recursion
• Method 3: Using itertools.combinations with Slicing. Leverages built-in functions for a concise implementation. Performance can degrade with large string sizes due to the overhead of combinations
and slicing operations.
• Method 4: Dynamic Programming Approach. Highly efficient for larger inputs by reducing redundant calculations. The complexity of implementation may increase, and understanding the concept can be
challenging for beginners.
• Bonus Method 5: Functional Approach. Leverages functional programming paradigms for a compact solution. The result is already a set of plain strings, but readability
could be an issue for those not familiar with functional programming. | {"url":"https://blog.finxter.com/5-best-ways-to-find-all-possible-substrings-after-deleting-k-characters-in-python/","timestamp":"2024-11-06T11:56:03Z","content_type":"text/html","content_length":"72897","record_id":"<urn:uuid:7b66c04e-8a66-46bc-89cf-00bca92c608c>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00585.warc.gz"} |
Two interesting facts about high-dimensional random projections
[This article was first published on R – Statistical Odds & Ends, and kindly contributed to R-bloggers].
John Cook recently wrote an interesting blog post on random vectors and random projections. In the post, he states two surprising facts of high-dimensional geometry and gives some intuition for the
second fact. In this post, I will provide R code to demonstrate both of them.
Fact 1: Two randomly chosen vectors in a high-dimensional space are very likely to be nearly orthogonal.
Cook does not discuss this fact as it is “well known”. Let me demonstrate it empirically. Below, the first function generates a $p$-dimensional unit vector uniformly at random. The second function
takes in two $p$-dimensional vectors, x1 and x2, and computes the angle between them. (For details, see Cook’s blog post.)
genRandomVec <- function(p) {
    x <- rnorm(p)
    x / sqrt(sum(x^2))
}

findAngle <- function(x1, x2) {
    dot_prod <- sum(x1 * x2) / (sqrt(sum(x1^2) * sum(x2^2)))
    acos(dot_prod)
}
simN <- 100000 # no. of simulations
p <- 10000
angles <- replicate(simN, findAngle(genRandomVec(p), genRandomVec(p)))
angles <- angles / pi * 180 # convert from radians to degrees
hist(angles)
Note the scale of the x-axis: the angles are very closely bunched up around 90 degrees, as claimed.
This phenomenon only happens for “high” dimensions. If we change the value of p above to 2, we obtain a very different histogram:
How “high” does the dimension have to be before we see this phenomenon kick in? Well, it depends on how tightly bunched up we want the angles to be around 90 degrees. The histogram below is the same
simulation but for p = 20 (notice the wider x-axis scale):
It seems like the bell-shaped curve already starts to appear with p = 3!
Fact 2: Generate 10,000 random vectors in 20,000 dimensional space. Now, generate another random vector in that space. Then the angle between this vector and its projection on the span of the first
10,000 vectors is very likely to be very near 45 degrees.
Cook presents a very cool intuitive explanation of this fact which I highly recommend. Here, I present simulation evidence of the fact.
The difficulty in this simulation is computing the projection of a vector onto the span of many vectors. It can be shown that the projection of a vector $v$ onto the column span of a (full-rank)
matrix $A$ is given by $\text{proj}_A (v) = A(A^T A)^{-1}A^T v$ (see this post and this post for details). For our fact, $A$ is a $20,000 \times 10,000$ matrix, so computing $(A^T A)^{-1}$ is going
to take prohibitively long.
I don’t know another way to compute the projection of a vector onto the span of other vectors. (Does anyone know of better ways?) Fortunately, based on my simulations in Fact 1, this phenomenon will
probably kick in for much smaller dimensions too!
First, let’s write up two functions: one that takes a vector v and matrix A and returns the projection of v onto the column span of A:
projectVec <- function(v, A) {
    A %*% solve(t(A) %*% A) %*% t(A) %*% v
}
and a function that does one run of the simulation. Here, p is the dimensionality of each of the vectors, and I assume that we are looking at the span of p/2 vectors:
simulationRun <- function(p) {
    A <- replicate(p/2, genRandomVec(p))
    v <- genRandomVec(p)
    proj_v <- projectVec(v, A)
    findAngle(proj_v, v)
}
The code below runs 10,000 simulations for p = 20, taking about 2 seconds on my laptop:
simN <- 10000 # no. of simulations
p <- 20 # dimension of the vectors
angles <- replicate(simN, simulationRun(p))
angles <- angles / pi * 180 # convert from radians to degrees
hist(angles, breaks = seq(0, 90, 5))
abline(v = 45, col = "red", lwd = 3)
We can already see the bunching around 45 degrees:
The simulation for p = 200 takes just under 2 minutes on my laptop, and we see tighter bunching around 45 degrees (note the x-axis scale). | {"url":"https://www.r-bloggers.com/2019/04/two-interesting-facts-about-high-dimensional-random-projections/","timestamp":"2024-11-09T22:05:24Z","content_type":"text/html","content_length":"115516","record_id":"<urn:uuid:c71ddc39-d8b7-44fb-92ab-d1eb0c1487ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00557.warc.gz"} |
6.7 Functions of Matrices (.NET, C#, CSharp, VB, Visual Basic, F#)
NMath provides a variety of functions that take matrices as arguments.
The matrix classes provide Transpose() member functions for calculating the transpose of a matrix: B[i,k] = A[k,i]. Class NMathFunctions also provides a static Transpose() method that returns the
transpose of a matrix. For instance:
Code Example – C# matrix
var A = new FloatComplexMatrix( 5, 5, 1, 1 );
FloatComplexMatrix B = A.Transpose();
FloatComplexMatrix C = NMathFunctions.Transpose(A);
// B == C
Code Example – VB matrix
Dim A As New FloatComplexMatrix(5, 5, 1.0F, 1.0F)
Dim B As FloatComplexMatrix = A.Transpose()
Dim C As FloatComplexMatrix = NMathFunctions.Transpose(A)
' B == C
In both cases, the matrix returned is a new view of the same data. Transpose() just swaps the number of rows and the number of columns, as well as the row strides and column strides. No data is
The matrix classes provide member functions OneNorm() to compute the 1-norm (or largest column sum) of a matrix, InfinityNorm() to compute the infinity-norm (or largest row sum) of a matrix, and
FrobeniusNorm() to compute the Frobenius norm. For instance:
Code Example – C# matrix
var A = new DoubleMatrix( "3x3 [1 2 3 4 5 6 7 8 9]" );
double d1 = A.OneNorm();
double d2 = A.InfinityNorm();
Code Example – VB matrix
Dim A As New DoubleMatrix("3x3 [1 2 3 4 5 6 7 8 9]")
Dim D1 As Double = A.OneNorm()
Dim D2 As Double = A.InfinityNorm()
Class NMathFunctions provides the static Product() method for calculating the matrix product of two matrices. For example:
Code Example – C# matrix
var A = new FloatMatrix( "3x3 [1 2 3 4 5 6 7 8 9]" );
var B = new FloatMatrix( 3, 3, 1, 1 );
FloatMatrix C = NMathFunctions.Product( A, B );
Code Example – VB matrix
Dim A As New FloatMatrix("3x3 [1 2 3 4 5 6 7 8 9]")
Dim B As New FloatMatrix(3, 3, 1.0F, 1.0F)
Dim C As FloatMatrix = NMathFunctions.Product(A, B)
Transpose operations to be performed on the operands of a matrix-matrix multiply operation are specified using a value from the NMathFunctions.ProductTransposeOption enum:
● TransposeNone does not transpose either matrix before multiplying.
● TransposeBoth transposes both operands before multiplying.
● TransposeFirst transposes only the first operand before multiplying.
● TransposeSecond transposes only the second operand before multiplying.
● ConjTransposeBoth takes the conjugate transpose of both operands before multiplying.
● ConjTransposeFirst takes the conjugate transpose only of the first operand before multiplying.
● ConjTransposeSecond takes the conjugate transpose only of the second operand before multiplying.
Thus, this code calculates the inner product of the transpose of A with B:
Code Example – C# matrix
var A = new FloatMatrix( "3x3 [1 2 3 4 5 6 7 8 9]" );
var B = new FloatMatrix( 3, 3, 1, 1 );
FloatMatrix C = NMathFunctions.Product( A, B,
ProductTransposeOption.TransposeFirst );
Code Example – VB matrix
Dim A As New FloatMatrix("3x3 [1 2 3 4 5 6 7 8 9]")
Dim B As New FloatMatrix(3, 3, 1.0F, 1.0F)
Dim C As FloatMatrix = NMathFunctions.Product(A, B,
ProductTransposeOption.TransposeFirst)
Additional overloads of the Product() method calculate the inner product of a matrix and a vector:
Code Example – C# matrix
var A = new DoubleMatrix( "3x3 [1 2 3 4 5 6 7 8 9]" );
var v = new DoubleVector( "[3 2 1]" );
DoubleVector u = NMathFunctions.Product( A, v );
Code Example – VB matrix
Dim A As New DoubleMatrix("3x3 [1 2 3 4 5 6 7 8 9]")
Dim V As New DoubleVector("[3 2 1]")
Dim U As DoubleVector = NMathFunctions.Product(A, V)
Overloads are also provided which place the result of multiplying the first two operands into a third argument, rather than allocating new memory for the result:
Code Example – C# matrix
NMathFunctions.Product( A, B, C,
ProductTransposeOption.TransposeBoth );
Code Example – VB matrix
NMathFunctions.Product(A, B, C,
ProductTransposeOption.TransposeBoth)
Matrix Inverse and Pseudoinverse
Class NMathFunctions provides the static Inverse() method for calculating the inverse of a matrix:
Code Example – C# matrix
var A = new FloatMatrix( "3x3 [1 2 3 4 5 6 7 8 9]" );
FloatMatrix AInv = NMathFunctions.Inverse( A );
Code Example – VB matrix
Dim A As New FloatMatrix("3x3 [1 2 3 4 5 6 7 8 9]")
Dim AInv As FloatMatrix = NMathFunctions.Inverse(A)
The standard inverse fails if the matrix is singular or not square.
The pseudoinverse generalizes the inverse to matrices that are singular or not square: the pseudoinverse of an m x n matrix A is the n x m matrix, also known as the Moore-Penrose generalized inverse, that provides a least squares solution to a system of linear equations. To compute the pseudoinverse,
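For reference (standard linear algebra rather than wording from the NMath documentation), the pseudoinverse A+ of a matrix A is the unique matrix satisfying the four Moore-Penrose conditions:
A A+ A = A, A+ A A+ = A+, (A A+)^T = A A+, and (A+ A)^T = A+ A.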
NMathFunctions provides the static Pseudoinverse() method:
Code Example – C# matrix
FloatMatrix APseudoInv = NMathFunctions.Pseudoinverse( A );
Code Example – VB matrix
Dim APseudoInv As FloatMatrix = NMathFunctions.PseudoInverse(A)
To test the quality of the pseudoinverse, you can check the condition number of the product of the transpose of A with A:
Code Example – C# matrix
float cond = NMathFunctions.ConditionNumber(
NMathFunctions.TransposeProduct( A, A ), NormType.OneNorm );
if (cond > 0.000001)
// good
Code Example – VB matrix
Dim Cond As Single = NMathFunctions.ConditionNumber(
NMathFunctions.TransposeProduct(A, A), NormType.OneNorm)
If Cond > 0.000001 Then
' good
End If
NOTE—The best way to compute the pseudoinverse is to use singular value decomposition. Method MatrixFunctions.Pseudoinverse() implements this method.
Class NMathFunctions provides static methods for rounding the elements of a matrix:
● Round() rounds each element of a given matrix to the specified number of decimal places.
● Ceil() applies the ceiling rounding function to each element of a given matrix.
● Floor() applies the floor rounding function to each element of a given matrix.
The static Sum() method on NMathFunctions accepts a matrix and returns a vector containing the sums of the elements in each column. To sum the rows, simply Transpose() the matrix first.
For example:
Code Example – C# matrix
var A = new DoubleMatrix( 5, 8, 1, 1 );
DoubleVector AColSums = NMathFunctions.Sum( A );
DoubleVector ARowSums = NMathFunctions.Sum( A.Transpose() );
A.Transpose(); // return A to original view
Code Example – VB matrix
Dim A As New DoubleMatrix(5, 8, 1.0, 1.0)
Dim AColSums As DoubleVector = NMathFunctions.Sum(A)
Dim ARowSums As DoubleVector = NMathFunctions.Sum(A.Transpose())
A.Transpose() ' return A to original view
Transpose() just swaps the number of rows and the number of columns, as well as the row strides and column strides. No data is copied.
NaNSum() ignores values that are Not-A-Number (NaN).
NOTE—NaN functions are available for real-value matrices only, not complex number matrices.
The static Delta() method on NMathFunctions returns a new matrix with the same dimensions as a given matrix, whose values are the result of applying the vector delta function to each column of the
matrix. The vector delta computes the differences between successive elements in a given vector, such that:
u[0] = v[0]
u[i] = v[i] - v[i-1]
Applied to a matrix, Delta() returns a new matrix such that:
B[0,j] = A[0,j]
B[i,j] = A[i,j] - A[i-1,j]
Again, to apply the Delta() function to rows rather than columns, just transpose the matrix first.
Class NMathFunctions provides static min/max finding methods that return a vector containing the value of the element in each column that meets the appropriate criterion:
● Max() returns a vector containing the greatest values in each column.
● Min() returns a vector containing the smallest values in each column.
● NaNMax() returns a vector containing the greatest values in each column, ignoring values that are Not-a-Number (NaN).
● NaNMin() returns a vector containing the smallest values in each column.
NOTE—NaN functions are available for real-value matrices only, not complex number matrices.
To apply these functions to the rows of a matrix, simply Transpose() the matrix first.
The static Mean(), Median(), Variance(), and SumOfSquares() methods on NMathFunctions are overloaded to accept a matrix and return a vector containing the result of applying the appropriate function
to each column in the matrix:
Code Example – C# matrix
var A = new FloatMatrix( 5, 5, 0, 2 );
FloatVector means = NMathFunctions.Mean( A );
FloatVector medians = NMathFunctions.Median( A );
FloatVector variances = NMathFunctions.Variance( A );
Code Example – VB matrix
Dim A As New FloatMatrix(5, 5, 0.0F, 2.0F)
Dim Means As FloatVector = NMathFunctions.Mean(A)
Dim Medians As FloatVector = NMathFunctions.Median(A)
Dim Variances As FloatVector = NMathFunctions.Variance(A)
NaNMean(), NaNMedian(), NaNVariance(), and NaNSumOfSquares() ignore values that are Not-A-Number (NaN). NaNCount() returns the number of NaN values in each column. NaN functions are available for
real-value matrices only, not complex matrices.
To apply these functions to the rows of a matrix, simply Transpose() the matrix first.
NMath extends standard trigonometric functions Acos(), Asin(), Atan(), Cos(), Cosh(), Sin(), Sinh(), Tan(), and Tanh() to take matrix arguments. Class NMathFunctions provides these functions as
static methods. For instance, this code constructs a matrix whose contents are the cosines of another matrix:
Code Example – C# matrix
var A = new FloatMatrix( 10, 10, 0, Math.PI/4 );
FloatMatrix cosA = NMathFunctions.Cos( A );
Code Example – VB matrix
Dim A As New FloatMatrix(10, 10, 0.0F, Math.PI / 4.0F)
Dim CosA As FloatMatrix = NMathFunctions.Cos(A)
The static Atan2() method takes two matrices and applies the two-argument arc tangent function to corresponding pairs of elements.
NMath extends standard transcendental functions Exp(), Log(), Log10(), and Sqrt() to take matrix arguments. Class NMathFunctions provides these functions as static methods; each takes a single matrix
as an argument and return a matrix as a result. For example, this code creates a matrix whose elements are the square root of the elements in another matrix:
Code Example – C# matrix
var A = new DoubleMatrix( 3, 3, 1, 1 );
DoubleMatrix sqrt = NMathFunctions.Sqrt( A );
Code Example – VB matrix
Dim A As New DoubleMatrix(3, 3, 1.0, 1.0)
Dim Sqrt As DoubleMatrix = NMathFunctions.Sqrt(A)
Function Expm() on NMathFunctions raises the constant e to a given matrix power, using a scaling and squaring method based upon Pade approximation. This is different than method Exp() which
exponentiates each element of a matrix independently.
Class NMathFunctions also provides the exponential function Pow() to raise each element of a matrix to a real exponent.
Code Example – C# matrix
var A = new DoubleMatrix( "2x2 [1 2 3 4]" );
DoubleMatrix cubed = NMathFunctions.Pow( A, 3 );
Code Example – VB matrix
Dim A As New DoubleMatrix("2x2 [1 2 3 4]")
Dim Cubed As DoubleMatrix = NMathFunctions.Pow(A, 3)
Absolute Value and Square Root
The static Abs() function on class NMathFunctions applies the absolute value function to each element of a given matrix:
Code Example – C# matrix
var A = new DoubleMatrix( 10, 10, 0, -1 );
DoubleMatrix abs = NMathFunctions.Abs( A );
Code Example – VB matrix
Dim A As New DoubleMatrix(10, 10, 0.0, -1.0)
Dim Abs As DoubleMatrix = NMathFunctions.Abs(A)
NMath also extends the standard Sqrt() function to take a matrix argument. Thus, this code creates a matrix whose elements are the square root of another matrix's elements:
Code Example – C# matrix
var A = new FloatMatrix( 10, 10, 1, 2 );
FloatMatrix sqrt = NMathFunctions.Sqrt( A );
Code Example – VB matrix
Dim A As New FloatMatrix(10, 10, 1.0F, 2.0F)
Dim Sqrt As FloatMatrix = NMathFunctions.Sqrt(A)
The static SortByColumn() method on class NMathFunctions sorts the rows of a matrix by the values in a specified column. For instance, this code sorts matrix A by values in the first column:
Code Example – C# matrix
var A = new FloatMatrix( 20, 20, 0, 1 );
A = NMathFunctions.SortByColumn( A, 0 );
Code Example – VB matrix
Dim A As New FloatMatrix(20, 20, 0.0F, 1.0F)
A = NMathFunctions.SortByColumn(A, 0)
Static methods Real() and Imag() on class NMathFunctions return the real and imaginary part of the elements of a matrix. If the elements of the given matrix are real, Real() simply returns the given
matrix and Imag() returns a matrix of the same dimensions containing all zeros.
Static methods Arg() and Conj() on class NMathFunctions return the arguments (or phases) and complex conjugates of the elements of a matrix. If the elements of the given matrix are real, both methods
simply return the given matrix.
For instance:
Code Example – C# matrix
DoubleComplexMatrix A =
new DoubleComplexMatrix( "2x2 [(1,-1) (2,-.5) (2.2,1.1) (7,9)]" );
DoubleComplexMatrix AConj = NMathFunctions.Conj( A );
// AConj = 2x2 [(1,1) (2,0.5) (2.2,-1.1) (7,-9)]
// Now use the Imag method to create a real matrix containing
// the imaginary parts of AConj.
DoubleMatrix AConjImag = NMathFunctions.Imag( AConj );
Code Example – VB matrix
Dim A As New DoubleComplexMatrix(
"2x2 [(1,-1) (2,-.5) (2.2,1.1) (7,9)]")
Dim AConj As DoubleComplexMatrix = NMathFunctions.Conj(A)
' AConj = 2x2 [(1,1) (2,0.5) (2.2,-1.1) (7,-9)]
' Now use the Imag method to create a real matrix containing
' the imaginary parts of AConj.
Dim AConjImag As DoubleMatrix = NMathFunctions.Imag(AConj) | {"url":"https://www.centerspace.net/doc/NMath/user/matrix-77353.htm","timestamp":"2024-11-10T11:29:24Z","content_type":"text/html","content_length":"49563","record_id":"<urn:uuid:aa64718e-d2ba-49a7-87cd-e6463256cca2>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00890.warc.gz"} |
On “Stock correlation has been rising”
[This article was first published on Portfolio Probe » R language, and kindly contributed to R-bloggers].
Ticker Sense posted about the mean correlation of the S&P 500. The plot there — similar to Figure 1 — shows that correlation has been on the rise after a low in February.
Figure 1: Mean 50-day rolling correlation of S&P 500 constituents to the index. For me, this post raised a whole lot more questions than answers.
Which correlation?
When I think of correlation, I think of the correlation of the constituents among themselves. But the post is of the correlation of each constituent with the index. So, does it matter?
Figure 2: Comparison of correlations: each constituent to the index (gold line) and constituents among themselves (blue line). Figure 2 seems to say that it probably doesn’t matter as long as you
know which you are talking about. However, if we look at the difference of the two methods — as in Figure 3 — we see something different. It looks like perhaps something really has begun recently.
Figure 3: Constituent to index mean correlation minus mean intra-constituent correlation.
Correlation versus level
The Ticker Sense plot shows the index level along with the correlation — implying there is a relationship. The relationship that we would care about is if correlation predicts returns. Well,
maybe. I’m waiting to be convinced.
Variability from the stocks
From now on, we’ll stay in the original playground of the correlation of each constituent to the index.
Figure 1 (and the original plot) is a solid line as if we actually knew the value at each point. In actuality there are sources of variability that means we don’t really know where the line is.
One source of variability is the constituents. We can explore how much variability they instill into the computation by using the statistical bootstrap.
Figure 4: Mean 50-day rolling correlation of S&P 500 constituents to the index (gold) plus 95% bootstrap confidence interval (purple).
Figure 4 shows the 95% confidence interval at each point from constituent variability. The confidence interval is narrow relative to the moves over time. As Figure 5 shows, the width of the
confidence interval is not constant through time.
Figure 5: Width of the 95% bootstrap confidence interval of constituent to index correlation.
Variability within the window
There is another source of variability: we are estimating the correlations with only 50 observations. Figure 6 shows the variability attributable to the finite sample for the most recent window
(through 2011 July 15).
Figure 6: Distribution from bootstrapping over the last 50 days. So there is substantial variability here. Figure 7 adds the 95% confidence interval due to days for the final window to the time
series view.
Figure 7: 95% confidence interval from bootstrapping the final window (blue) along with the 95% confidence interval for bootstrapping constituents at each point.
We have a problem. There is an uncertainty principle at work: if we widen the window to reduce variability, then we start to lose the dynamic nature of the correlation. We can’t know both location
and momentum.
Similar post
Appendix R
You can get a script of the R analysis and/or get a file of the R functions including those that create the figures.
This analysis — as does that of “Weight compared to risk fraction” – starts off by using the QuantTrader blog post “Downloading S&P 500 data to R”. In particular the post includes a link to a file
that contains the stock symbols for the constituents.
sp500.symbol.url <- "http://blog.quanttrader.org/wp-content/uploads/sp500.csv"
sp500.symbols <- scan(url(sp500.symbol.url), what="")
Whenever you get data, it should be standard practice to graphically inspect it. Such an inspection paid off in this case: something went wrong with the recent data for ‘Q’ and the same price was
repeated for several days. That stock was eliminated from the analysis. Better would be to investigate and fix the data.
Subscribe to the Portfolio Probe blog by Email | {"url":"https://www.r-bloggers.com/2011/07/on-%E2%80%9Cstock-correlation-has-been-rising%E2%80%9D/","timestamp":"2024-11-09T17:23:21Z","content_type":"text/html","content_length":"101455","record_id":"<urn:uuid:b0c6a452-d2bc-40d9-becd-88022ea3c52e>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00134.warc.gz"} |
Coq devs & plugin devs
Apparently calling evarconv on proj ?X = v or (?X).(proj) = v is not the same if ?X : ?Y or ?X : i for some inductive i with primitive projection proj. If the type of ?X is know, both work. If it is
an evar, then only the non-primitive projection works.
Does this ring a bell?
This is hard to reproduce in plain Coq, one needs Coq-Elpi or (I guess) some ltac and uconstr.
Hmm indeed, we certainly rely on ?X : i ?pars ?inds as that's usually given by (pre)typing
Maybe we're missing a bit of "imitation", refining the evar ?V to i ?pars ?inds in some cases
What I don't get is why the compat constant works even if ?X was not typed before
The only extra piece of info is the number of params (that are all _)
but I guess that is statically known also for primproj, somehow...
I'll look at the code of evarconv tomorrow...
The code is well documented for once:
(* Knowing the structure and hence its number of arguments, I can cut sk1 into pieces. *)
let params1, c1, extra_args1 =
match arg with
| Some c -> (* A primitive projection applied to c *)
let ty =
try Retyping.get_type_of ~lax:true env sigma c with
| Retyping.RetypeError _ -> raise Not_found
in let (i, u), ind_args =
(* Are we sure that ty is not an evar? *)
Inductiveops.find_mrectype env sigma ty
in ind_args, c, sk1
The comment questions that if ty is an evar (my exact case) then things go wrong (Inductiveops.find_mrectype raises Not_found that corresponds to no canonical solution being available).
Enrico Tassi said:
Apparently calling evarconv on proj ?X = v or (?X).(proj) = v is not the same if ?X : ?Y or ?X : i for some inductive i with primitive projection proj. If the type of ?X is know, both work. If it
is an evar, then only the non-primitive projection works.
Does this ring a bell?
if the type of ?X is an evar then proj ?X is ill typed no?
It is not typed yet. If I type it, then ?Y has to become the inductive applied to unknown parameters.
The code above synthesizes/recovers the inductive parameters only to unify them with the ones of the canonical solution. If they can't be recovered, then we should just not perform that unification.
I mean fun c : i _ _ => (c).(proj) does not fail but has the same info at hand than fun c : _ => (c).(proj).
Yep, we can fix this
I have a fix, but I have an hard time writing a test.
I can test it with Elpi, but with Ltac2 I can't find a way, In particular:
Module U.
#[projections(primitive=yes)] Record r := R { r_car : Type }.
#[projections(primitive=yes)] Record s := S { s_car : Type }.
Canonical Structure foo (x : s) := R (s_car x).
Axiom a : s.
Ltac2 Eval
let t1 := open_constr:( ( _ ).(r_car) ) in
let t2 := constr:( (a).(s_car) ) in
match (Constr.Unsafe.kind t1) with
| Constr.Unsafe.Proj _ _ _ => ()
| _ => fail
match (Constr.Unsafe.kind t2) with
| Constr.Unsafe.Proj _ _ _ => ()
| _ => fail
printf "%t = %t" t1 t2;
unify_with_full_ts t1 t2;
printf "%t = %t" t1 t2.
succeeds (with or without my patch) and the second print tells me that the evar is still unassigned
This is my first Ltac2 program, so I probably misusing the API/language.
The preliminary patch is here: https://github.com/coq/coq/pull/19358
(the test above is not sufficient, since the _ is typed anyway, but how to build a term without typing it would be the next question)
unify_with_full_ts does Goal.enter (https://github.com/coq/coq/blob/65ff3388b4076eed7720a45b21be13515deb54fc/tactics/tactics.ml#L3404) so when there is no goal it does nothing
if you do Goal True before Ltac2 Eval you get
(r_car ?r) = (s_car a)
(r_car (foo a)) = (s_car a)
- : unit = ()
(you also need
Require Import Ltac2.Ltac2.
Import Printf.
Import Unification.
(the test above is not sufficient, since the _ is typed anyway, but how to build a term without typing it would be the next question)
do we really want evarconv to work on untyped terms?
Goal True.
Ltac2 Eval
let t1 := open_constr:( ( _ : _ r ).(r_car) ) in
let t2 := constr:( (a).(s_car) ) in
match (Constr.Unsafe.kind t1) with
| Constr.Unsafe.Proj _ _ _ => ()
| _ => fail
match (Constr.Unsafe.kind t2) with
| Constr.Unsafe.Proj _ _ _ => ()
| _ => fail
printf "%t = %t" t1 t2;
unify_with_full_ts t1 t2;
printf "%t = %t" t1 t2.
it is not about typed or not, it is about having a type that is syntactically an inductive
Error: Unable to unify "r_car (?y : ?T r)" with "s_car a".
But my patch needs amending, now that I can quickly test it.
Thanks for the help!
unify_with_full_ts does Goal.enter ... no goal does nothing
Not sure this is a user friendly API, but this is not my garden.
it is not about typed or not, it is about having a type that is syntactically an inductive
because of delayed constraints I guess? if you do ltac1:(solve_constraints) before the unify it fails with
Unable to unify "?T r" with "r" (cannot satisfy constraint "?T r" == "r").
(why is this even delayed, isn't there a unique solution ?T := fun x => x?)
No it is not, fun _ => r is another one. Anyway, I just wanted to prove that even if the terms are typed it may be that the type of the projected is not syntactically an inductive, not to defend the
unif candidates business.
Here my final tests, it works with my patch, it does not work without:
Module U.
#[projections(primitive=yes)] Record r := R { r_car : Type }.
#[projections(primitive=yes)] Record s := S { s_car : Type }.
Canonical Structure foo (x : s): (fun x=>x) r := R (s_car x).
Axiom a : s.
Set Debug "unification".
Goal True.
Ltac2 Eval
let t1 := open_constr:( ( _ : _ r ).(r_car) ) in
let t2 := constr:( (a).(s_car) ) in
match (Constr.Unsafe.kind t1) with
| Constr.Unsafe.Proj _ _ _ => ()
| _ => fail
match (Constr.Unsafe.kind t2) with
| Constr.Unsafe.Proj _ _ _ => ()
| _ => fail
printf "%t = %t" t1 t2;
unify_with_full_ts t1 t2;
printf "%t = %t" t1 t2.
End U.
I'll polish my PR and ask for a review
even if the terms are typed
I'm not sure I consider "typed up to delayed constraints" to be the same as "typed"
I don't really trust delayed constraints to not break random things
I totally agree. This PR fixes one of these cases ;-)
Last updated: Oct 13 2024 at 01:02 UTC | {"url":"https://coq.gitlab.io/zulip-archive/stream/237656-Coq-devs-.26-plugin-devs/topic/evarconv.20and.20primproj.html","timestamp":"2024-11-05T16:07:20Z","content_type":"text/html","content_length":"34497","record_id":"<urn:uuid:92084ae7-5343-41ff-87cb-8438ffe22727>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00742.warc.gz"} |
Logic for Computer Science - Wikibooks, open books for an open world
This book discusses logic as a tool for computer science; a field that uses logic at all levels. It provides a survey of mathematical logic and its various applications. Some areas where it is
particularly important include:
• Digital circuit design
• Complexity theory (NP equivalent to Existential second-order logic)
• Database Systems (SQL; roughly predicate/first-order logic)
• Computer-aided verification (Temporal logic & model checking)
• Programming languages (lambda calculus)
• AI, expert systems, inference engines
• Distributed Systems
• Logic Programming
• Computer Security
After covering basic material of propositional logic and first-order logic, the course presents the foundations of finite model theory and descriptive complexity. Other topics, including logic
programming, non-monotonic reasoning, temporal logic, and reasoning about knowledge and belief, are surveyed as time allows. These notes were taken by student scribes.
You may also find the following references useful
• Mathematical Logic. H.-D. Ebbinghaus, J. Flum, and W. Thomas
• Foundations of Databases. Abiteboul, Hull, Vianu. Available here: http://www-cse.ucsd.edu/users/vianu/BOOK/book.html
• Computational Complexity. Christos H. Papadimitriou.
• Elements of Finite Model Theory. Leonid Libkin.
• Finite Model Theory and Its Applications. Grädel, Kolaitis, Libkin, Marx, Spencer, Vardi, Venema, Weinstein
• Gödel's Proof. Ernest Nagel and James R. Newman
• Language, Proof, and Logic. Jon Barwise and John Etchemendy
• A Profile of Mathematical Logic. Howard DeLong | {"url":"https://en.m.wikibooks.org/wiki/Logic","timestamp":"2024-11-04T13:48:37Z","content_type":"text/html","content_length":"35744","record_id":"<urn:uuid:a8d570b2-fd41-44b8-80a4-6a81e6de7c57>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00264.warc.gz"} |
Time-randomness trade-off for oblivious routing
Three parameters characterize the performance of a probabilistic algorithm: T, the run-time of the algorithm; Q, the probability that the algorithm fails to complete the computation in the first T
steps; and R, the amount of randomness used by the algorithm, measured by the entropy of its random source. A tight trade-off between these three parameters for the problem of oblivious packet
routing on N-vertex bounded-degree networks is presented. A (1 - Q) log (N/T)-log Q-O(1) lower bound for the entropy of a random source of any oblivious packet routing algorithm that routes an
arbitrary permutation in T steps with probability 1 - Q is proved. It is shown that this lower bound is almost optimal. This result is complemented with an explicit construction of a family of
oblivious algorithms that use less than a factor of log N more random bits than the optimal algorithm achieving the same run-time.
Dive into the research topics of 'Time-randomness trade-off for oblivious routing'. Together they form a unique fingerprint. | {"url":"https://cris.biu.ac.il/en/publications/a-time-randomness-trade-off-for-oblivious-routing","timestamp":"2024-11-02T22:19:47Z","content_type":"text/html","content_length":"51494","record_id":"<urn:uuid:9fb665d3-0ea0-47e1-8d02-cd2d9857745b>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00514.warc.gz"} |
PCR_STEPWISE - PCR Variables Selection Method (Stepwise)
Returns a list of the selected variables after performing the stepwise regression.
PCR_STEPWISE(X, Mask, Y, Intercept, Method, Alpha)
X is the independent variables data matrix, so each column represents one variable.
Mask is the boolean array to choose the explanatory variables in the model. If missing, all variables in X are included.
Y is the response or the dependent variable data array (a one-dimensional array of cells (e.g., rows or columns)).
Intercept is the constant or the intercept value to fix (e.g., zero). If missing, an intercept will not be fixed and is usually computed.
Method is a switch to select the variables inclusion/exclusion approach (1 = forward selection (default), 2 = backward elimination, 3 = bi-directional elimination)
Method Description
1 Forward selection
2 Backward elimination
3 Bi-direction elimination
Alpha is the statistical significance of the inclusion/exclusion test (i.e., alpha). If missing or omitted, an alpha value of 5% is assumed.
1. The underlying model is described here.
2. Stepwise regression includes regression models in which an automatic procedure carries out the choice of predictive variables. The process takes the form of a sequence of f-tests in selecting or
eliminating explanatory variables.
3. The three main approaches are (a minimal code sketch of forward selection is given after these notes):
□ Forward Selection involves starting with no variables in the model, testing the addition of each variable using a chosen model comparison criterion, adding the variable (if any) that improves
the model the most, and repeating this process until no further improvement is possible.
□ Backward Elimination involves starting with all candidate variables, testing the deletion of each variable using a chosen model comparison criterion, deleting the variable (if any) that
improves the model the most by being deleted, and repeating this process until no further improvement is possible.
□ Bi-directional Elimination, a combination of the above tests, involves testing for variables to be included or excluded.
4. The initial values in the mask array define the variables set that MLR_STEPWISE works with. In other words, variables that are not selected will not be considered during the regression.
5. The sample data may include data-points with missing values.
6. Each column in the input matrix corresponds to a separate variable.
7. Each row in the input matrix corresponds to an observation.
8. Observations (i.e., rows) with missing values in X or Y are removed.
9. The number of rows of the response variable (Y) must equal the number of rows of the explanatory variable (X).
10. The PCR_STEPWISE function is available starting with version 1.60 APACHE.
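As referenced in note 3, here is a minimal, generic sketch of the forward-selection loop, written in Python on top of statsmodels with a pandas DataFrame X and Series y; the function name forward_select is hypothetical, and this is an illustration of the general idea rather than NumXL's internal algorithm.
import statsmodels.api as sm
def forward_select(X, y, alpha=0.05):
    selected = []
    remaining = list(X.columns)
    while remaining:
        # p-value of each candidate variable when added to the current model
        pvals = {}
        for cand in remaining:
            cols = selected + [cand]
            fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
            pvals[cand] = fit.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:   # no remaining candidate is significant: stop
            break
        selected.append(best)
        remaining.remove(best)
    return selected
Backward elimination runs the same loop in reverse, starting from the full model and repeatedly dropping the least significant variable, while bi-directional elimination alternates the two checks.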
Related Links
• Hamilton, J .D.; Time Series Analysis, Princeton University Press (1994), ISBN 0-691-04289-6
• Kenney, J. F. and Keeping, E. S. (1962) "Linear Regression and Correlation." Ch. 15 in Mathematics of Statistics, Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252-285
Article is closed for comments. | {"url":"https://support.numxl.com/hc/en-us/articles/215730623-PCR-STEPWISE-PCR-Variables-Selection-Method-Stepwise","timestamp":"2024-11-06T11:32:04Z","content_type":"text/html","content_length":"36826","record_id":"<urn:uuid:7f533422-7238-436b-9a41-19cf14d1970a>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00690.warc.gz"} |
Circuit Theory EE6201 MJ2014 2nd Semester Question Paper
Anna University Question Paper Code: 57018
B.E./B.Tech. DEGREE EXAMINATION, MAY/JUNE 2014.
Second Semester
Electrical and Electronics Engineering
EE 6201 — CIRCUIT THEORY
(Common to Electronics and Communication Engineering, Electronics and Instrumentation Engineering, Instrumentation and Control Engineering, Biomedical Engineering and Medical Electronics)
(Regulation 2013)
PART A (10 x 2 = 20 marks)
1. Find the equivalent resistance of the circuit shown in Fig. 1.
2. Define RMS voltage.
3. What is reciprocity theorem?
4. Why do you short circuit the voltage source and open the current source when you find Thevenin's voltage of a network?
5. Define quality factor in the resonant circuit.
6. Determine the quality factor of a coil for the series resonant circuit consisting of R = 10 ohm, L = 0.1 H, and C = 10 microfarad.
7. Distinguish between natural and forced response.
8. What is the time constant for RL and RC circuits?
9. Write the effect of power factor in energy consumption billing.
10. Distinguish between unbalanced source and unbalanced load.
Post a Comment | {"url":"https://www.knowledgeadda.com/2014/08/circuit-theory-ee6201-mj2014-2nd.html","timestamp":"2024-11-03T16:08:40Z","content_type":"application/xhtml+xml","content_length":"228728","record_id":"<urn:uuid:7149db34-56d3-4841-9dce-33ffae8f69e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00478.warc.gz"} |
In this section, we review how to add accessible formulas to content.
What are formulas?
Formulas refer to math equations and science formulas.
File types used: LaTeX, MathType
Who are you doing this for?
This work supports students who:
• Are blind or have low vision
• Have a form of cognitive disability, like Mark
• Have a physical disability
What do you need to do?
There are several ways to handle equations, from images with alt tags to MathML. Having access to an equation editor such as MathType or MathMagic can streamline processing and converting equations.
These tools are similar to equation editors found in Microsoft Office.
MathML is a text-based XML (short for "extensible markup language") designed for math equations. Browsers that support MathML are able to translate the XML into a formatted equation. Since MathML
with MathJax can be rendered in many systems, including HTML, sites at Penn State, Angel and Drupal, it is considered the best choice for accessibility.
Here is information about creating and viewing MathML.
MathML may vary from system to system and the content can change rapidly.
Image with an alt tag
A safe option is to create an image of an equation (or export it from an equation editor) and then insert the image into a document with an alt tag.
Note: Alt tags can be written in Nemeth MathSpeak for students who have learned that system.
Example 1: An equation in HTML
View the alt tag
alt = “m equals begin fraction m sub 0 over begin square root 1 minus begin fraction v sup 2 over c sup 2 end fraction end square root end fraction”
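As a minimal illustration (an editorial addition, not part of the original chapter), the image and its MathSpeak alt text could be assembled like this in Python; the file name is hypothetical:

alt_text = (
    "m equals begin fraction m sub 0 over begin square root 1 minus "
    "begin fraction v sup 2 over c sup 2 end fraction end square root end fraction"
)
img_tag = f'<img src="mass-equation.png" alt="{alt_text}">'
print(img_tag)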
LaTeX is a math markup language familiar to many in the science and math community. Unfortunately, it is not currently supported by screen reader technology. It is, however, fairly simple to convert
LaTeX to an image or MathML in most equation editors.
To import LaTeX, follow these steps in MathMagic and MathType:
1. Copy a piece of LaTeX code such as
m = \frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}
into an equation editor’s main editing window.
2. The equation should appear fully formatted. Make minor adjustments as needed.
At a BCcampus user-testing session, students indicated that it would be helpful to place an audio file of the formula or equation alongside each, allowing the user to hear exactly how the formula or
equation should be interpreted.
Additional resources
Math accessibility at Portland Community College
In 2012, Portland Community College departments took a closer look at making math accessible to blind students. Read more about the math accessibility study.
Watch Math Accessibility at Portland Community College. (This video is an open educational resource.)
Media Attributions
Text Attributions
This chapter is a derivative of Equations: MathML, Images and LaTeX by Penn State. Penn State has provided permission to the authors to use this information under the licence of the toolkit. | {"url":"https://pressbooks.nscc.ca/accessibilitytoolkit/chapter/formulas/","timestamp":"2024-11-02T02:59:10Z","content_type":"text/html","content_length":"70649","record_id":"<urn:uuid:abf4ccb8-178d-418a-b383-d61b6bd2d071>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00096.warc.gz"} |
Wolfram|Alpha Examples: Step-by-Step Statistics
Examples for
Step-by-Step Statistics
Location Statistics
See the steps for finding the expected value, or mean:
Find the median step by step:
Learn how to calculate the mode:
Calculate and simplify the geometric mean one step at a time:
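(Editorial addition: for readers who want to check such steps by hand, the same location statistics can be computed directly in Python; this snippet is not part of the Wolfram|Alpha examples.)

import statistics

data = [4, 8, 15, 16, 23, 42]
print(statistics.mean(data))             # arithmetic mean (expected value of the sample)
print(statistics.median(data))           # median
print(statistics.mode([1, 2, 2, 3]))     # mode: the most frequent value
print(statistics.geometric_mean(data))   # geometric mean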
Order Statistics
Apply basic formulas in order statistics on a set of data, seeing the steps along the way:
Dispersion Statistics
Find and learn to compute variance and standard deviation:
Get the steps for how to find the range of a set of numbers: | {"url":"https://www.wolframalpha.com/examples/pro-features/step-by-step-solutions/step-by-step-statistics","timestamp":"2024-11-03T02:37:17Z","content_type":"text/html","content_length":"77106","record_id":"<urn:uuid:f37695e5-f4ce-4ae7-a759-6447b0d33b63>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00749.warc.gz"} |
How to perform mixed models analysis in SAS?
How to perform mixed models analysis in SAS? In statistical learning, there is an implicit assumption that learning problems are generally modeled by a mixture of training and testing, as in
MMDITNIRT. As a condition, this means that the learning task must be modeled by a discrete variable, since each individual learning problem is typically generated by different randomness (such as
noise or sampling error) and hence can be represented with some type of mixture variable. In other words, one or more mixture models can be trained independently, (e.g., a model fitting procedure
followed by the learning task, which is not a discrete-level task), or jointly, as a result of an iterative process of training and testing the model. The mixtures above can be represented by $$\
begin{aligned} f\left(X \middle| \mathcal{L}\equiv 0 \right)= \sum_{k=1}^K\frac{\alpha_k}{1+\alpha_k}, \end{aligned}$$ where $\mathcal{L}$ is a hidden representation associated with each model, $\
alpha_k$ is the learning rule, and $\mathcal{K}$ is a particular subset of the hidden symbols, respectively. Thus, in general, each given hidden variable should be represented by a mixture between
the training model and the training/testing model, and one needs to be able to properly represent the data sets used throughout the simulation. Indeed, models with mixed hyperparameters as in Figs. \
[Fig-MCIT\_smodel\] and \[Fig-POT\_smodel\] are relatively stable, whereas models with more than two generative components as shown in Figs. \[Fig-MCITwith\_MCIT\] and \[Fig-POTwith\_POT\_smodel\].
Thus, the mixed Look At This inference system requires an active training process to capture the training and testing task simultaneously; the learning task may run in a sequential manner and should
not be treated as discrete, independent, and mixing. This is true for many training and testing tasks, but is commonly done for three-way learning. In this study we provide a general but a robust
solution to this problem. This paper, however, makes suggestions on how to make mixed models approach these challenges. Let $\pi$ be a single latent class using some data given by the test set.
Denote the infoset $[x, y]$ by $\gamma = \max\{(y_1, \ddots, x), (y_2, \ddots, x).[y_{ij} [y_i], y_{ij} \}}$; each infoset is associated with a training model, and $\newset{\gamma}$ is an infoset set
that contains each test dataset. Let ${\bf\chi}{(x)}$ represent a single learning rule (e.g., $f(X {\bf\hat}{\begin{array}[t]{@+}![X}]{}{\hat}{\chi_1}\\ {\hat}{\chi_2} \end{array}})$; its equivalent
infoset (e.
g., $f[X]_{0, 2}=[0]$) is denoted $\gamma{\bf\chi_2}(X)$. Then, *the MCIT algorithm utilizes the hidden representation as an input to an inference decision whether or not to use this representation.*
As shown in the following, *an optimal* MCIT initialization has been observed for the problem (Figure \[Fig-POTposterior\]): $$\begin{aligned} \label{eq:POTposterior} f(X) = f\How to perform mixed
models analysis in SAS? Now that we have a master-slave connection and a master-slave database, we can determine which models are appropriate for the data sets being used. Let’s assume that you have
a data set with 1000 rows under 200 fields. In our training and training models, we will calculate the number of instances that we will have in every row. And then we will sum up all instances in 50%
of the rows. So, we have a data set with 1000 rows and 200 fields. That’s a master-slave database and it will also be a master-slave database. And this, unlike what we did in PIL, the goal is not to
generate any output from each row in every model. We will create a series of data objects representing the rows from the master-slave database, and the remaining data for each row in each model. The
output will be its list of instances in all rows. Each data point represents one instance, (some may be same with more than one instance in one row) and that instance will be the same value you
specified in PIL. If all these data points are greater than zero, we just randomly and consistently sample from each row every 100 steps. A second step is designing a model that can be used in a
given dataset and to generate any output of a given data point. While the PIL model does not capture this Makes, of course, that many distinct data points and that people will definitely want to see
in your data sets. We can get this working by not having to store any data in a separate table and not having to make decisions about which data points to include into the model of a dataset. The
following section describes the concept of single-point models. Multi-point models The next model we’ll need Click Here consist of data objects representing a single data point such as a series of
rows and fields. We will use the addition of variables to relate data points together in a data object.
For data point instances in a different table, we will use variables to control how we do it. We’ll use variables as a method to create any data objects that would like to see the data in our series.
Here’s an example for a data set with 500 rows to test. In it, we would create a series of 5000 data points from a graph with data of 50,000 instances before it had all the instances in it. This
example is based on the PIL class which records the instances so you don’t have to store all 20 instances in a particular row. When you start designing the data objects, you have two options…create
multiple data points and add some other data using a loop in PIL. The loop we’ll use to get each instance to a separate data point in the series. You’ll also be able to model & sort the dataHow to
perform mixed models analysis in SAS? Two modules in mixed models analysis are designed for the two different purposes that I’d propose first: : you test your hypothesis of which of two models
(conditions) fit it to, and : you decide to perform general mixed models analysis. Why do they use the module, and why should you choose it? Cases 1 to 4 have too much work to justify modifying them
but more than I dare to give here, so it’s probably a shame to be wrong but its important to note that these models most often take into account the model (conditions) most relevant to our
“benchmark” questions: Question 1. Does the model fit to the hypothesis? {#question-1-does-the-model-fit-to-the-hypothesis?.unnumbered} The model matches the hypothesis tested and can easily be
modified to fit it to the hypothesis tested. Question 2. Is it sufficient to perform model fitting? {#question-2-is-it sufficient-to-performance-to-perform-model- fitting?.unnumbered} While
regression is a better model to fit a parameter to given observations than other approaches on the subject, models more specifically intended for the purposes of the measurement of unobserved effects
have better parsimony than models that are more specific to the statistical level of the data. You might find that many regression terms even quite basic can have a lot of degrees of freedom: Note:
For this list I made an alteration of the module; this will be very useful within SAS. What do I mean by “can be modified to fit it to the hypothesis tested”? This is an observation that is already
covered by @daniels-bough at the moment by changing methods for the fit of regression to data and the assumption you make here. It is clear, however, that regression not only tests how well the
observed variation is explained by a fitted observation, but also describes the underlying features of the data. That all three of these concepts can be changed to fit the observed variable also. But
is that what you want to do? We should say that regression to models that don’t fit the hypothesis testing step helps immensely because the model reproduces, without having to explain the observed
variation, the underlying “what to do?” This is one way in which the model is able to capture a variety of surprising phenomena [@Daniels-bough-estimation]. But in particular, in treating how that
variance can be modelled: The paper by Dahan on a large collection of studies uses a more complicated model because the standard explanation of what is really happening leaves us susceptible to
variations just because regression is taking place in different variables.
In fact, that paper has two different proofs [@shogli:2005], | {"url":"https://sashelponline.com/how-to-perform-mixed-models-analysis-in-sas","timestamp":"2024-11-02T23:29:01Z","content_type":"text/html","content_length":"128191","record_id":"<urn:uuid:422d5539-48f4-4327-a110-a6aee2ea22a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00491.warc.gz"} |
Darts and Kites
Explore the geometry of these dart and kite shapes!
The diagram shows a rhombus $PQRS$ with an internal point $O$ such that $OQ = OR = OS = 1$ unit. Penrose used this rhombus, split into two quadrilaterals, a dart and a kite, to make his famous tiling
which fills the plane but, unlike a tessellation, does not repeat itself by translation or rotation.
Find all the angles in the diagram, show that $POR$ is a straight line and show that triangles $PRS$ and $QRO$ are similar. Hence prove that the length of the side of the rhombus is equal to the
Golden Ratio $(1+ \sqrt{5})/2$.
Getting Started
Use isosceles triangles, similar triangles and a quadratic equation.
Student Solutions
This solution is from Arun Iyer of SIA High School and Junior College.
Part 1
First I will show that $POR$ is a straight line. For this I would like to state the perpendicular bisector theorem.
PERPENDICULAR BISECTOR THEOREM: Every point equidistant from the two ends of a line segment lies on the perpendicular bisector of the line segment.
Now consider the line segment $QS$.
$OQ=OS=1$ therefore by the perpendicular bisector theorem, $O$ must lie on the perpendicular bisector of $QS$.
$PQ=PS$ (as the sides of the rhombus are equal), therefore by the perpendicular bisector theorem, $P$ must lie on the perpendicular bisector of $QS$.
$RQ=RS$ (as the sides of the rhombus are equal), therefore by the perpendicular bisector theorem, $R$ must lie on the perpendicular bisector of $QS$.
Now the perpendicular bisector of a line segment is unique and hence $P$, $O$, $R$ must lie on the same perpendicular bisector and hence $POR$ is a straight line.
Part 2
Now I will get all the angles of the rhombus.
$\angle QPS$ = $72^{\circ}$ (given), $\angle QRS = \angle QPS = 72^{\circ}$ as they are opposite angles of a rhombus. The diagonal of the rhombus bisects the angles of a rhombus and therefore $\angle
QPO = \angle SPO = \angle QRO = \angle SRO = 36^{\circ}$.
Triangles $OQR$ and $OSR$ are isosceles triangles, therefore $\angle OSR = \angle OQR = 36^{\circ}$.
Using the fact that sum of angles of a triangle is 180 degrees, we can see that $\angle SOR$ and $\angle QOR$ are equal to 108 degrees. Since we have proved that $POR$ is a straight line in Part 1,
we can determine $\angle QOP$ and $\angle SOP$ to be 72 degrees.
Again using the fact that sum of angles of a triangle is 180 degrees, we can see that $\angle OQP$ and $\angle OSP$ are equal to 72 degrees.
Part 3
Let the side of the rhombus be $x$.
Consider triangle $OPS$ in which $PO=PS=x$ (since $\angle POS = \angle PSO$).
Applying the cosine rule $$\cos OPS = [PS^2 + PO^2 - OS^2]/[2\times PS \times PO]$$ therefore $$ \cos 36 =[2x^2-1]/[2x^2]\quad (1).$$
Splitting the isosceles triangle $ORS$ into two right angled triangles gives $$\cos 36 = x/2 \quad (2).$$
From (1) and (2),
$$\eqalign{ [2x^2 - 1]/[2x^2] &= x/2\cr x^3 - 2x^2 + 1 &= 0 \cr (x - 1)(x^2 - x - 1) &= 0.}$$
Now $x\neq 1$ because triangle POS is not equilateral, therefore $x^2 - x - 1 = 0$ and hence
$x = [1 + \sqrt 5]/2$ or $x = [1 - \sqrt 5]/2$.
Clearly $x\neq [1 - \sqrt 5]/2$ because the side length cannot be negative, therefore $x = [1 + \sqrt 5]/2$, the Golden Ratio. | {"url":"https://nrich.maths.org/problems/darts-and-kites","timestamp":"2024-11-07T18:23:11Z","content_type":"text/html","content_length":"41503","record_id":"<urn:uuid:94374c45-f9dd-47d2-8af8-c1c9b9ec8b86>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00360.warc.gz"} |
Factors in Algebra
A factor is a quantity that divides exactly into a term. A factor is one of the numbers, letters and brackets (or a product of them) that are multiplied together to make a term.
Real Examples of Factors in Algebra
It is easier to understand factors with examples.
• 2 and x are factors of 2x.
2 and x are multiplied together to make 2x.
2 × x = 2x
• a, b and c are factors of abc.
a, b and c are multiplied together to make abc.
a × b × c = abc
• (x + 1) and (x + 2) are factors of (x + 1)(x + 2).
(x + 1) and (x + 2) are multiplied together to make (x + 1)(x + 2).
(x + 1) × (x + 2) = (x + 1)(x + 2)
More Factors in Algebra
It would appear that finding factors of a term in algebra is easy. In the example above, we saw that a, b and c are factors of abc. It would seem that we would only have to write the letters or numbers or brackets that appear in the term to find its factors. While these are all factors, they will not be the only ones. For example, 1 and the term itself (abc) will also be factors (see below). Also, any product of these factors are also factors. If a, b and c are factors, ab is also a factor. ac and bc will also be factors.
Dividing by Factors
Factors are terms that divide exactly into another term. We have seen that the factors of abc include a, b and c. This means that we can divide abc by each of these factors:
abc ÷ a = bc
abc ÷ b = ac
abc ÷ c = ab
Notice that the term you get when you divide by one factor gives another factor: bc, ac and ab are also factors.
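(Editorial addition: these divisions can be checked with a computer algebra system. A short sketch using SymPy in Python:)

import sympy as sp

a, b, c, x = sp.symbols('a b c x')
print(sp.cancel(a*b*c / a))         # b*c: dividing by the factor a leaves another factor
print(sp.cancel(a*b*c / (a*b)))     # c: products of factors (here a*b) are also factors
print(sp.factor(x**2 + 3*x + 2))    # (x + 1)*(x + 2): recovering bracketed factors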
Numbers Have Factors
A factor is a number which divides exactly into another number. For example, the factors of 4 are 1, 2 and 4 because they all divide exactly into 4. If a term in algebra includes a number, the factors of the number are also factors of the term. For example, the factors of 4 are also factors of 4x.
1 and the Term Itself Is Always a Factor
1 and the term itself is always a factor of the term. For example, 1 and 2x are both factors of 2x.
This test is printable and sendable | {"url":"https://mathematics-monster.com/glossary/factors_in_algebra.html","timestamp":"2024-11-05T17:02:38Z","content_type":"text/html","content_length":"19362","record_id":"<urn:uuid:123df3e8-135f-4be7-85a1-821666f255da>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00876.warc.gz"} |
Str8ts Strategies
Most of the Str8ts puzzle is about identifying the next cells that can be deduced from the clues in the black and white cells. On the easier puzzles this can be eye-balled - that is by scanning the
rows, columns and the 'compartments' you should be able to find several opportunities to solve a cell. On the harder puzzles you might need to make notes in some cells to remind you of the possibilities.
Both the
Str8ts Player
and the
Str8ts Solver
have the ability to type in small number notes. In the solver these are always displayed but they can be edited as well.
For the purposes of outlining the strategies for Str8ts I am going to follow the list on the solver.
Solving Sequence
Start solving a Str8ts by working the smaller compartments ( 1, 2, 3 or 4 spaces) first. Using all the rules and clues and strategies listed below, the smaller compartments are simply easier to
solve. Once solved, these smaller compartments start to fill in portions of the longer straights, and the filled in numbers remove themselves in rows and columns as possible numbers from the larger
straights. By adding additional numbers to the puzzle we essentially are turning larger straights ( 5, 6, 7, 8 and 9 spaces ) back into manageable compartments that will be easier to solve.
Especially if the following hands on strategies are used.
Compartment Check
A 'Compartment' of 5,6,7 and 8
This is the meat of Str8ts. A compartment is a set of white cells in a row or column bounded by the edge of the puzzle or a black cell. They will be between two and nine cells long. We know from the
rules that all the numbers in a compartment must be a 'straight' - no gaps in other words - but the order will be unknown until you complete the straight. Given four cells with 5x8x, as in the
example, we can know that to fill the gap we must use 6 and 7.
If you are making notes or using the solver you can immediately eliminate 1,2,3,4 and 9 from those cells since they can never reach the clues we know about.
Fortunately we have another clue in this example. The 6 tells us the rightmost X can't be a 6 so it must be a 7. That allows us to put the 6 between the 5 and 8.
When you are considering a straight you need to check the rows and columns for any useful numbers. If they exist elsewhere then they cant belong to any cell the number can 'see' in the compartment.
Black cell clues are interesting since they can cut the options in half. Say you had a black cell with '5' and it intersected a compartment such that it saw all the cells in the compartment. Let us
also say the compartment is four cells long. Now, because 5 is slap in the middle this compartment of four can only be 1/2/3/4 or 6/7/8/9. If you have a known you can decide which. So black cells,
along with existing clues and solved cells will help pin down the remaining options.
In the solver, all the known numbers and the black cells clues are taken into consideration, as is the length of each compartment and the remaining possibles are computed. Usually this will find one
or serveral cells for which there is only one remaining number. As a human, pencil and paper solver, you will not be able to pick out these opportunities as quickly but that's the fun of this puzzle,
searching for the right combination of clues to get the next cell.
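For readers who like to see the logic spelled out, here is one way the Compartment Check can be expressed in code. This is an editorial sketch in Python, not the solver's actual implementation:

from itertools import permutations

def prune_compartment(cands, banned=frozenset()):
    """Keep only digits that can sit in each cell of some valid straight.
    cands is one set of candidate digits per white cell; banned holds
    digits ruled out by clues elsewhere in the row or column."""
    n = len(cands)
    keep = [set() for _ in range(n)]
    for lo in range(1, 11 - n):                  # every straight of length n
        run = range(lo, lo + n)
        if banned & set(run):
            continue
        for perm in permutations(run):           # try assigning the straight to the cells
            if all(d in cell for d, cell in zip(perm, cands)):
                for i, d in enumerate(perm):
                    keep[i].add(d)
    return keep

# the 5x8x example above: the two unknown cells reduce to {6, 7}
print(prune_compartment([{5}, set(range(1, 10)), {8}, set(range(1, 10))]))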
Stranded Digit
Stranded Digits
I have isolated this strategy from Compartment Check as it is worth looking for in it's own right. The Stranded Digit is easily understood - take the example on the right. Looking at the second row
starting with the black clue 5 it cuts through a set of possible numbers for the two-cell on that row. Because the 5 is not possible (the clue) The 4's have become isolated from the 6,7,8 and 9s.
That means we can discard 4 as a candidate. Similarly in the third row the 8 has isolated the 9 from the green cell. It leaves only the 7 and that is the solution.
Another way to look at it is this. Write out all the remaining numbers available in a compartment. If there is a gap - then only the numbers before or the numbers after the gap can be the solution to
all the cells. Does this help you remove some candidates?
Let's take this small section and see how it plays out:
The 7 forces the 6 to the right. Above the 7 the 9 is too far away from 7 to be used - so 8 must fit there. 6 is disallowed because of the black cell clue above. And finally we can insert 7 into the
last cell (marked in green).
Stranded Sequence
Basically this extended Stranded Digit to a set of numbers that cannot form the compartment because there are not enough of them to cover all the cells. On the other side of the gap will be a second
set of numbers which will have the ability to straddle the whole compartment.
Compartment High/Low
Here we look at all the available numbers in the compartment and determine if the minimum and maximum can be used given what we already have. Works best when the range can be pinned down with some
clues or solved cells. Unlike the strategy below this looks in a single compartment at a time and is therefore slightly simpler.
High/Low Str8ts
I've added this strategy thanks to feedback from solvers. I'm not sure what a good name is, but it has to be short to fit on the solver list. This strategy looks a two compartments in the same row or
column. If there are some clues or solved numbers in a compartment often we can predict what the minimum and maximum necessary numbers will be for the entire compartment - based on the principle that
a 'straight' is going to contain a sequence with no gaps. We dont need to now the exact straight - sometimes only the middle numbers of the straight can be predicted. But whatever range we can be
sure of - these numbers can be eliminated from the other compartment in the row or column. Some examples:
High/Low example 1
This is a very easy and obvious one. The 9 in row A gives us a minimum and maximum range for that compartment of 7 to 9, ie a straight of {7,8,9}. All these numbers can be removed from the other
compartment, which the solver highlights in yellow. In fact in this example, the eliminations are identical to Naked Pairs. The only remaining numbers in A3 and A4 are {7/8} which forces 7 and 8 to
those cells, so any other 7s or 8s can be removed in the row.
High/Low example 2
A couple of more subtle examples:
Looking at the column 2 first, we have a 4-cell straight with a known 7 in it. Now the widest possible range of numbers starts from {4,5,6,7} and goes to {6,7,8,9}. Its not even necessary to have the
remaining candidates viewed to see the usefulness of this. The overlap of these two sets is {6/7}. We can predict that whatever the straight, 6 and 7 will be part of it. 7 is know so we can remove
the 6 from the other compartment.
In a similar manner, the 3 in column 3 gives as ranges from {1,2,3} to {3,4,5}. But! 1 is already a black cell clue, so the minimum range is actually {2,3,4}. The overlap is therefore {3,4}, so 4 can
be removed from C3.
There is a great deal of overlap between high/low and Stranded Digit and I'm not sure what to present first. The problem is that many fewer examples of 'high/low' will be found by the solver because
Compartment Check and Stranded Digit are checked first, but if you are aware of the strategy it will be very useful. It also seems to get results more at the beginning of the puzzle than the end, as
the min-max ranges are greater.
Required Digits
Within a compartment it may be possible to determine the minimum and maximum range of possible digits. For example, a compartment of length three may have the range of 6/7/8/9. Since the compartment
has a length of three the compartment will have either 6/7/8 or 7/8/9 as the final solution (in whatever order). The overlap of 6/7/8 and 7/8/9 is 7 and 8. These are the
Required Digits
(or "sure digits").
If you know what the Required Digits are for a compartment then you can remove those candidates from any other compartment in the same row or column as the compartment you are looking at.
Also, it maybe the case in some circumstances that a Required Digit occurs in only one cell of the compartment. If that is the case - because it is 'required' and only found in one cell, it must be
the solution to that cell.
Naked Pairs
These is some cross over from Sudoku in this puzzle. Since rows and columns have the same rules as Sudoku they have some of the same strategies. The simplest is Naked Pairs. If two cells on the same
row or column have the same two candidates - such as 1/6 and 1/6 - then we can be sure that 1 and 6 will appear in both those cells. We just can't be certain which way round yet. But that is
extremely useful. If 1 and 6 are bound to be on those cells then any other 1 or 6 can be removed from that row or column.
The Naked Pair acts like a single cell clue - but make sure you apply it to only the row or column common to both cells. The two cells need not be next to each other or in the same compartment.
In the example we have three Naked Pairs - all aligned on different rows. The top row is the Pair 2/3. This attacks the cells at the start of the row (highlighted in red). Likewise the 7/8 on the
second row effects the far righthand cell. And finally the 2/4 on the third row gives us a solution - marked in green.
Naked Triples
This is the same strategy as Naked Pairs except we are considering three cells, not two. For example, if 1/3/6 was present in three cells we know that 1,3 and 6 must be found in those cells – we just
can’t know which way round at this point. So any other 1, 3 or 6 in the row or column can be discounted.
The trick with Naked Triples is that we don’t need all three numbers to appear in all three cells. Consider three cells with these numbers:
[1,3], [1,6] and [3,6]
Think of any combination from these pairs and you will always use up all three numbers in those cells. The rule really is – “If any three cells contain
at most
three numbers – then those numbers can be removed from the rest of the row or column”.
There are bound to be further strategies and I don't pretend to have exhausted them all on this one page. Some will be borrowed from Sudoku - I can think of Hidden Pairs and Triples, for example, and
some will be unique to Str8ts.
If you find a new strategy and want to share it with others we'd be delighted to explain it here and credit you with the discovery.
Happy solving!
Andrew Stuart
... by: Ali Thursday 18-Jul-2024
In today's asymmetrical Str8ts, I used a strategy that I haven't seen documented. I only use it as a last resort, so maybe it's more luck than a valid strategy. I rely on the solution being unique.
I used it today in puzzle #5210. In column #8, at one point row #1 can be 2 or 4 and row #2 can be 1 or 3. Row #9 can be a 4 or 6. But since the solution must be unique, row #1 must be a 4, otherwise
in the final solution row #9 can still be either 4 or 6.
Is this valid logic, or does it maybe fall into one of the other strategies? Or is it too simple to even count as a strategy?
JT replies: Monday 29-Jul-2024
It is listed here on slide 42: | {"url":"https://www.str8ts.com/Str8ts_Strategies","timestamp":"2024-11-14T07:20:18Z","content_type":"text/html","content_length":"23566","record_id":"<urn:uuid:408c5b5e-3c14-47f0-aafa-67d1af84c308>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00191.warc.gz"} |
I would like to know how exactly to calculate the "final keyspace".
Unfortunately, I can't get any further with the information in the wiki.
The wiki tells me, if the number of base-words < sum of all GPU-power, then the final keyspace is reached.
GPU-power for each device is calculated with S * T * N * V
I am testing hashcat with a Quadro P1000 (mobile) and an i7-8750H with UHD Graphics 630.
For the P1000 I get the following info:
* Device #1: Quadro P1000, 3385/4096 MB, 4MCU
During execution:
Speed.#1.........: 1413 H/s (6.27ms) @ Accel:8 Loops:128 Thr:1024 Vec:1
that tells me:
S = 4 (MCU)
T = 256 (because it's Nvidia)
N = 8 (Accel)
V = 1 (Vec)
For the i7 I get:
* Device #4: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, 32565/65195 MB (8149 MB allocatable), 12MCU
During execution:
Speed.#4.........: 467 H/s (11.19ms) @ Accel:512 Loops:1024 Thr:1 Vec:4
S = 12 (MCU)
T = ?
N = 512
V = 4
For the UHD Graphics I get:
* Device #3: Intel(R) UHD Graphics 630, 12992/26078 MB (2047 MB allocatable), 24MCU
During execution:
Speed.#3.........: 88 H/s (6.25ms) @ Accel:64 Loops:64 Thr:16 Vec:1
S = 24
T = ?
N = 64
V = 1
For a test I run the following commands:
.\hashcat.exe -a 3 -m 9600 ?l?l?l?l --keyspace
hashcat.exe -a 3 -m 9600 hashfile.hash ?l?l?l?l --status --status-timer=5 --restore-disable --potfile-disable --logfile-disable --outfile -D 1,2 password.txt
For me this results in the size of the base loop corresponding to the maximum number of 17576 base words.
But how do I get the remaining number of words in the base loop during the attack?
During execution, between 427,008 and 436,235 tested passwords, I get the message "Approaching final keyspace - workload adjusted".
Can someone help me to complete my calculation?
Am I correct in the assumptions I have made so far?
Thanks for reply!
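[Editorial note: to make the arithmetic in the question concrete, here is a small Python sketch using only the figures quoted above. T for the CPU and the UHD 630 is the unknown in the question, so only the Quadro's power can be computed.]

# device "power" per the wiki formula quoted in the question: S * T * N * V
def device_power(S, T, N, V):
    return S * T * N * V

print(device_power(S=4, T=256, N=8, V=1))   # Quadro P1000 -> 8192
print(26 ** 3)                              # base-loop size reported by --keyspace: 17576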
04-16-2022, 11:19 AM
maybe this helps in understanding when this event is triggered: https://github.com/hashcat/hashcat/blob/...#L121-L131
The "final keyspace" is basically just the last "flush" of the remaining "words" (not a full load as before/normally, usually) | {"url":"https://hashcat.net/forum/thread-10714-post-55147.html#pid55147","timestamp":"2024-11-08T09:07:43Z","content_type":"application/xhtml+xml","content_length":"24198","record_id":"<urn:uuid:c3aa61a1-e9bb-4a86-b15f-af47e1504c00>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00553.warc.gz"} |
Ranking and drawing in subexponential time
In this paper we obtain parameterized subexponential-time algorithms for p-Kemeny Aggregation (p-KAGG) — a problem in social choice theory — and for p-One-Sided Crossing Minimization (p-OSCM) – a
problem in graph drawing (see the introduction for definitions). These algorithms run in time O*(2^{O(\sqrt{k} log k)}), where k is the parameter, and significantly improve the previous best
algorithms with running times O*(1.403^k) and O*(1.4656^k), respectively. We also study natural "above-guarantee" versions of these problems and show them to be fixed parameter tractable. In fact, we
show that the above-guarantee versions are equivalent to a weighted variant of p-Directed Feedback Arc Set. Our results for the above-guarantee version of p-KAGG reveal an interesting contrast. We
show that when the number of "votes" in the input to p-KAGG is odd the above guarantee version can still be solved in time O*(2^{O(\sqrt{k} log k)}), while if it is even then the problem cannot have
a subexponential time algorithm unless the exponential time hypothesis fails (equivalently, unless FPT=M[1]).
Original language English
Title of host publication Combinatorial Algorithms (21st International Workshop, IWOCA '10, London, UK, July 26-28, 2010. Proceedings)
Editors C.S. Iliopoulos, W.F. Smyth
Place of Publication Berlin
Publisher Springer
Pages 337-348
ISBN (Print) 978-3-642-19221-0
Publication status Published - 2011
Publication series
Name Lecture Notes in Computer Science
Volume 6460
ISSN (Print) 0302-9743
Dive into the research topics of 'Ranking and drawing in subexponential time'. Together they form a unique fingerprint. | {"url":"https://research.tue.nl/en/publications/ranking-and-drawing-in-subexponential-time","timestamp":"2024-11-11T08:00:49Z","content_type":"text/html","content_length":"49658","record_id":"<urn:uuid:e439a5bd-3dc8-472d-97d5-eae794702a7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00311.warc.gz"} |
Question ID - 50633 | SaraNextGen Top Answer
Let the moment of inertia of a hollow cylinder of length 30 cm (inner radius 10 cm and outer radius 20 cm) about its axis be I. The radius of a thin cylinder of the same mass such that its moment of
inertia about its axis is also I, is:
a. 12 cm b. 18 cm
c. 16 cm d. 14 cm | {"url":"https://www.saranextgen.com/homeworkhelp/doubts.php?id=50633","timestamp":"2024-11-02T17:50:19Z","content_type":"text/html","content_length":"15842","record_id":"<urn:uuid:2285e949-038d-4616-8d5a-38b33c4e598f>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00472.warc.gz"} |
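[Editorial note: a quick check of the arithmetic for the question above, assuming equal masses and the standard axial formulas I = M(r1^2 + r2^2)/2 for a thick hollow cylinder and I = M R^2 for a thin cylinder; the length does not enter either formula.]

from math import sqrt

r1, r2 = 0.10, 0.20                  # inner and outer radius, metres
I_per_mass = (r1**2 + r2**2) / 2     # hollow cylinder about its axis, divided by M
R = sqrt(I_per_mass)                 # thin cylinder: I = M * R**2
print(round(R * 100, 1))             # ~15.8 cm, i.e. option (c) 16 cm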
Online gradient descent written in SQL • Max Halford
online-machine-learning sql
Edit – this post generated a few insightful comments on Hacker News. I’ve also put the code in a notebook for ease of use.
Modern MLOps is complex because it involves too many components. You need a message bus, a stream processing engine, an API, a model store, a feature store, a monitoring service, etc. Sadly,
containerisation software and the unbundling trend have encouraged an appetite for complexity. I believe MLOps shouldn’t be this complex. For instance, MLOps can be made simpler by bundling the logic
into your database.
In this post, I want to push this idea, and actually implement a machine learning algorithm within a relational database, using SQL. Some databases allow doing inference with an already trained
model. Actually training the model in the database would remove altogether the need for a separate inference/training service.
Being familiar with online machine learning, I picked online gradient descent. My gut feeling is that this should be a straightforward implementation using WITH RECURSIVE. I decided to work my way up
to it by first implementing simpler online algorithms, starting with a running average.
Some data
To illustrate, I took some Yahoo! Finance data:
import yfinance as yf
figures = yf.download(...)  # ticker and date-range arguments were lost in extraction
figures /= figures.std()
Date Open High Low Close Adj Close Volume
2021-12-27 2.00543 2.06042 2.06425 2.11303 2.11677 -0.7789
2021-12-28 2.10966 2.09118 2.11413 2.0777 2.08174 -0.712006
2021-12-29 2.08148 2.06752 2.1008 2.08076 2.08477 -0.977945
2021-12-30 2.08623 2.06549 2.0991 2.04068 2.04502 -1.01873
2021-12-31 2.03938 2.0202 2.07074 2.01928 2.0238 -0.950815
☝️ I normalized the data using standard scaling. This puts the figures on a friendlier scale. It will also help the online gradient descent converge. This could very well be done in SQL, but this is
fine too.
Running average
With SQL, we could obviously just use AVG to obtain an average. We could also use a window function if we wanted to calculate the average at every point in time.
I’m not sure how common knowledge this is, but there is a formula that allows updating a running average with a new data point in $\mathcal{O}(1)$ time. This can be applied to a data stream, because
the update formula only requires the current data point, as well as the current average.
$$\mu_0 = 0$$ $$\mu_{t+1} = \mu_t + \frac{x - \mu_t}{t + 1}$$
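(Editorial aside: the same update rule as a tiny Python sketch, for anyone who wants to sanity-check it before moving to SQL.)

def update_mean(mean, x, t):
    # t is the 1-based index of the new value x
    return mean + (x - mean) / t

mean = 0.0
for t, x in enumerate([3.0, 5.0, 10.0], start=1):
    mean = update_mean(mean, x, t)
print(mean)   # 6.0, the mean of the three values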
I’ll be using DuckDB. A nice feature is that it’s aware of any existing pandas dataframe – provided you’re running DuckDB using Python. Indeed, we can directly query the figures dataframe. Also,
DuckDB supports WITH RECURSIVE, which is the cornerstone I’ll make heavy use of.
There are many good tutorials about how WITH RECURSIVE works, so I won’t expand on it. The way I will use it is a bit particular, in that I leverage it to update some current state. The current state
points to the current row. At each recursion step, the current state is joined with the next row, which allows updating the state.
Recursive state update
The first idea is to assign a step number to each row. Assuming the rows are pre-sorted, a ROW_NUMBER can be used to assign an auto-incrementing integer to each row. This step column is then used to
connect each state to the next row.
WITH RECURSIVE stream AS (
SELECT
ROW_NUMBER() OVER () AS step,
"Adj Close" AS x
FROM figures
state(step, x, avg) AS (
-- Initialize
SELECT step, x, x AS avg
FROM stream
WHERE step = 1
UNION ALL
-- Update
SELECT
stream.step,
stream.x,
state.avg + (stream.x - state.avg) / stream.step AS avg
FROM stream
INNER JOIN state ON state.step + 1 = stream.step
SELECT *
FROM state
ORDER BY step DESC
LIMIT 5
│ step │ x │ avg │
│ int64 │ double │ double │
│ 505 │ 5.981568542028378 │ 3.9577706471349923 │
│ 504 │ 6.002789566151079 │ 3.953755175121315 │
│ 503 │ 6.042539700173864 │ 3.949681548101375 │
│ 502 │ 6.039508125299193 │ 3.945512507957804 │
│ 501 │ 6.074541325571636 │ 3.941332875987063 │
We can verify this is correct by doing a rolling mean in pandas:
figures['Adj Close']
.rolling(len(figures), min_periods=1)
2021-12-31 3.957771
2021-12-30 3.953755
2021-12-29 3.949682
2021-12-28 3.945513
2021-12-27 3.941333
☝️ This usage of WITH RECURSIVE essentially boils down to a window function. This could be implemented as such, which would avoid the headache of thinking in terms of recursion. For instance,
PostgreSQL supports user-defined aggregates, which can be applied over a window. However, the WITH RECURSIVE syntax has better support across databases.
Running covariance
The query above measures the running average for a single variable – namely Adj Close. What if we want to compute something that involves more than one variable? The naive way is to just copy/paste
the logic for each variable. For instance, to calculate a running covariance, it is necessary to compute the running average of two variables. Check out Welford’s algorithm for more information.
WITH RECURSIVE stream AS (
ROW_NUMBER() OVER () AS step,
"Adj Close" AS x,
"Close" AS y
FROM figures
state(step, x, x_avg, y, y_avg, cov) AS (
-- Initialize
x AS x_avg,
y AS y_avg,
0::DOUBLE AS cov
FROM stream
WHERE step = 1
UNION ALL
-- Update
x_new_avg AS x_avg,
y_new_avg AS y_avg,
cov + ((x - x_prev_avg) * (y - y_new_avg) - cov) / step AS cov
FROM (
state.x_avg AS x_prev_avg,
state.x_avg + (stream.x - state.x_avg) / stream.step AS x_new_avg,
state.y_avg AS y_prev_avg,
state.y_avg + (stream.y - state.y_avg) / stream.step AS y_new_avg,
FROM stream
INNER JOIN state ON state.step + 1 = stream.step
SELECT step, cov
FROM state
ORDER BY step DESC
LIMIT 5
│ step │ cov │
│ int64 │ double │
│ 505 │ 0.9979967767965502 │
│ 504 │ 0.9918524780369538 │
│ 503 │ 0.985478504290919 │
│ 502 │ 0.9787158318485241 │
│ 501 │ 0.9719167545245742 │
Other than handling two variables, the major difference with this query is that a subquery is used to calculate some intermediary state. We will reuse this idea for online gradient descent.
We can also verify the output is correct by comparing to pandas:
.rolling(len(figures), min_periods=1)
.cov(ddof=0)['Adj Close']
.loc[:, 'Close']
2021-12-31 0.997997
2021-12-30 0.991852
2021-12-29 0.985479
2021-12-28 0.978716
2021-12-27 0.971917
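(Editorial aside: the same pairwise update written in plain Python, mirroring the expressions used in the SQL above.)

def update_cov(x, y, x_avg, y_avg, cov, t):
    x_prev_avg = x_avg
    x_avg += (x - x_avg) / t
    y_avg += (y - y_avg) / t
    cov += ((x - x_prev_avg) * (y - y_avg) - cov) / t
    return x_avg, y_avg, cov

x_avg = y_avg = cov = 0.0
for t, (x, y) in enumerate([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)], start=1):
    x_avg, y_avg, cov = update_cov(x, y, x_avg, y_avg, cov, t)
print(cov)   # 4/3, the population covariance of the three pairs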
Handling many variables
The downside of the queries above is that the variable names have to be hardcoded. There is no way to handle an arbitrary number of variables. For instance, if we have several variables, how would we
calculate the average of each variable, without expliciting them in the query?
As is often the case, converting the data to a tidy representation makes life easier. In this case, tidy data is obtained by melting – i.e. unpivoting – the dataframe.
figures_flat = figures.melt(ignore_index=False).reset_index()
figures_flat.columns = ['date', 'variable', 'value']
figures_flat = figures_flat.sort_values(['date', 'variable'])
date variable value
2020-01-02 Adj Close -1.46542
2020-01-02 Close -1.46182
2020-01-02 High -1.49763
2020-01-02 Low -1.46396
2020-01-02 Open -1.49242
2020-01-02 Volume 0.180024
2020-01-03 Adj Close -1.48965
2020-01-03 Close -1.48662
2020-01-03 High -1.4978
2020-01-03 Low -1.45277
WITH RECURSIVE stream AS (
SELECT RANK_DENSE() OVER (ORDER BY date) AS step, *
FROM figures_flat
ORDER BY date
state(step, variable, value, avg) AS (
-- Initialize
SELECT step, variable, value, value AS avg
FROM stream
WHERE step = 1
UNION ALL
-- Update
state.avg + (stream.value - state.avg) / stream.step AS avg
FROM stream
INNER JOIN state ON
state.step + 1 = stream.step AND
state.variable = stream.variable
SELECT *
FROM state
WHERE step = (SELECT MAX(step) FROM state)
ORDER BY variable
│ step │ variable │ value │ avg │
│ int64 │ varchar │ double │ double │
│ 505 │ Adj Close │ 5.981568542028378 │ 3.9577706471349923 │
│ 505 │ Close │ 6.03165394229666 │ 4.012373756823449 │
│ 505 │ High │ 6.057853942108038 │ 4.03765319364954 │
│ 505 │ Low │ 6.05591789308585 │ 3.985178489614261 │
│ 505 │ Open │ 6.046125216781687 │ 4.006746251814558 │
│ 505 │ Volume │ 1.0143664144585565 │ 1.9651814487272024 │
The main difference with the first query is that the join condition in the recursion includes the variable name, as well as the step number. A RANK_DENSE statement is also used instead of ROW_NUMBER
to assign a step number to each group of rows.
Here is the equivalent using pandas:
.rolling(len(figures_flat), min_periods=1)
Adj Close 3.957771
Close 4.012374
High 4.037653
Low 3.985178
Open 4.006746
Volume 1.965181
Online gradient descent
Finally, we have enough experience to implement online gradient descent. To keep things simple, we will use a very vanilla version:
• Constant learning rate, as opposed to a schedule.
• Single epoch, we only do one pass on the data.
• Not stochastic: the rows are not shuffled.
• Squared loss, which is the standard loss for regression.
• No gradient clipping.
• No weight regularisation.
• No intercept term.
None of these are impossible to implement using SQL. I just thought I’d keep things simple in order to keep the code digest. Anyway, these assumptions lead to the following update formulas:
$$p_t = \dot{w}_t \cdot \dot{x}_t$$
$$l_t = p_t - y_t$$
$$\dot{g}_t = l_t \dot{x}_t$$
$$\dot{w}_{t+1} = \dot{w}_t - \eta \dot{g}_t$$
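(Editorial aside: the same update equations in NumPy, mirroring the assumptions above; this is not code from the original post.)

import numpy as np

def ogd_step(w, x, y, lr=0.01):
    # one online-gradient-descent step with squared loss and no intercept
    p = w @ x                 # prediction
    g = (p - y) * x           # gradient of the loss with respect to the weights
    return w - lr * g

w = np.zeros(3)
w = ogd_step(w, np.array([1.0, 2.0, 3.0]), y=1.0)
print(w)   # [0.01 0.02 0.03]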
I’ve added a $\dot{}$ symbol to the vector variables. Therefore $p_t$ is the prediction, defined as the dot product between weights $\dot{w}_t$ and features $\dot{x}_t$. The gradient of the loss
$l_t$ is used to obtain the error gradient for the features $\dot{g}_t$, which respect to the current weights. This all leads to the simple weight update formula $\dot{w}_t - \eta \dot{g}_t$.
As an example, I decided to predict the Adj Close variable using the other variables. I’m not saying this makes a lot of sense, it’s just for the sake of example.
WITH RECURSIVE X AS (
RANK_DENSE() OVER (ORDER BY date) AS step, *
FROM figures_flat
WHERE variable != 'Adj Close'
ORDER BY date
y AS (
RANK_DENSE() OVER (ORDER BY date) AS step, *
FROM figures_flat
WHERE variable = 'Adj Close'
ORDER BY date
stream AS (
SELECT X.*, y.value AS target
FROM X
INNER JOIN y ON X.step = y.step
state AS (
-- Initialize
0::DOUBLE AS weight,
0::DOUBLE AS prediction
FROM stream
WHERE step = 1
UNION ALL
-- Update
SUM(weight * value) OVER () AS prediction
FROM (
state.prediction - state.target AS loss_gradient,
loss_gradient * state.value AS gradient,
state.weight - 0.01 * gradient AS weight
FROM stream
INNER JOIN state ON
state.step + 1 = stream.step AND
state.variable = stream.variable
SELECT *
FROM state
WHERE step = (SELECT MAX(step) FROM state)
ORDER BY variable
│ step │ variable │ weight │ target │ prediction │
│ int64 │ varchar │ double │ double │ double │
│ 505 │ Close │ 0.2511547716803354 │ 5.981568542028378 │ 5.938875441702928 │
│ 505 │ High │ 0.24043897039853313 │ 5.981568542028378 │ 5.938875441702928 │
│ 505 │ Low │ 0.2447191283620627 │ 5.981568542028378 │ 5.938875441702928 │
│ 505 │ Open │ 0.23603830762609726 │ 5.981568542028378 │ 5.938875441702928 │
│ 505 │ Volume │ 0.057510279698874206 │ 5.981568542028378 │ 5.938875441702928 │
It seems to be working! How can we check it is correct though? Well, we can fit an instance of scikit-learn’s SGDRegressor. The weights should correspond exactly to what we obtained in SQL, as long
we provide the correct parameters. This is to align with the simplifying assumptions that were made in the SQL implementation.
from pprint import pprint
from sklearn import linear_model
model = linear_model.SGDRegressor(
    # the argument lines were lost in extraction; these settings mirror the
    # simplifying assumptions listed above
    loss='squared_error', penalty=None, fit_intercept=False,
    learning_rate='constant', eta0=0.01, max_iter=1, shuffle=False,
)
X = figures[:-1].copy()
y = X.pop('Adj Close')
model = model.fit(X, y)
pprint(dict(zip(X.columns, model.coef_)))
{'Close': 0.2511547716803354,
'High': 0.2404389703985331,
'Low': 0.2447191283620624,
'Open': 0.23603830762609757,
'Volume': 0.05751027969887417}
Spot on! To be even more certain this is correct, we can compare with River’s linear regression implementation, which uses online gradient descent under the hood.
from river import linear_model
from river import optim
class CustomSquaredLoss:
    def gradient(self, y_true, y_pred):
        return y_pred - y_true
model = linear_model.LinearRegression(
    # the argument lines were lost in extraction; these settings mirror the
    # simplifying assumptions listed above
    optimizer=optim.SGD(lr=0.01), loss=CustomSquaredLoss(),
    intercept_lr=0.0, l2=0.0,
)
for i, x in enumerate(figures[:-1].to_dict(orient='records')):
    y = x.pop('Adj Close')
    model.learn_one(x, y)
{'Close': 0.2511547716803356,
'High': 0.2404389703985331,
'Low': 0.24471912836206253,
'Open': 0.2360383076260972,
'Volume': 0.057510279698874255}
✅ 🎯 💯 🎉
A machine learning algorithm which can be trained using SQL opens a world of possibilities. The model and the data live in the same space. This is as simple as it gets in terms of architecture.
Basically, you only need a database which runs SQL.
Of course, the implementation we made is quite basic. Moreover, models using online gradient descent aren’t necessarily the strongest ones. However, one could argue that what matters most in a model
are the features you feed it with. As such, online gradient descent done in the database can be a great baseline from which to start with.
The key advantage of online machine learning is that you don’t need to revisit past data points to update a model. However, all the queries we’ve written are stateless, and will run from the top when
they are refreshed. This sort of defeats the purpose of doing things online. Thankfully, stream processing engines are popping up, and they usually provide an SQL interface. For instance, Materialize
is working on providing WITH RECURSIVE semantics. Doing online gradient descent on top of Materialize sounds very powerful to me. | {"url":"https://maxhalford.github.io/blog/ogd-in-sql/?ref=sangkon.com","timestamp":"2024-11-11T04:12:20Z","content_type":"text/html","content_length":"81469","record_id":"<urn:uuid:4f748a74-eace-48bf-8ce7-1c4c52e6099b>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00162.warc.gz"} |
6 Oppositions
By the ‘opposition’ of two propositions, is meant: the exact logical relation existing between them — whether the truth or falsehood of either affects, or not, the truth or falsehood of the other.
In this context, note, the expression ‘opposition’ is a technical term not necessarily connoting conflict. We commonly say of two statements that they are ‘opposite’, in the sense of incompatible.
But here, the meaning is wider; it refers to any mental confrontation, any logical face-off, between distinguishable propositions. In this sense, even forms which imply each other may be viewed as
‘opposed’ by virtue of their contradistinction, though to a much lesser degree than contradictories. Thus, the various relations of opposition make up a continuum.
Now, upon reflection, the logical relations of implication, incompatibility, and exhaustiveness, defined earlier, are found to be incomplete insofar as they leave certain issues open. There is
therefore a need to combine them in various ways, to obtain a list of seven fully defining kinds of ‘oppositions’:
a. Mutual Implication (or implicance): is defined as the relation between two propositions which are either both true or both false. Each is called an implicant and is said to implicate the other. P
implies Q, and Q implies P; and, nonQ implies nonP, and nonP implies nonQ.
b. Subalternation: is the relation between two propositions which are either both true or both false, or one — called the subalternant — false and the other — called the subaltern — true; the
occurrence of ‘subalternant true and subaltern false’ being excluded by definition. The subalternant and subaltern may be referred to jointly as the subalternatives. This relation is, therefore,
one-way implication. P implies Q, but Q does not imply P; and, nonQ implies nonP, but nonP does not imply nonQ.
Subalternation, may be counted as two distinct relations, subalternating, and being subalternated, each of whose direction must be specified. This is in contrast to the other five oppositions, which
are symmetrical.
c. Contradiction: exists between two propositions which cannot be both true and cannot be both false. If either is true, the other is false; and if either is false, the other is true. They are said to
be contradictories. Their affirmations are incompatible and their denials are incompatible. P implies nonQ, and nonP implies Q; and, Q implies nonP, and nonQ implies P.
d. Contrariety: two propositions are contrary if they cannot both be true, but may both be false. If either is true the other is false, but if either is false the truth or falsehood of the other is
possible. They are said to be contraries. Their affirmations are incompatible, but not their denials. P implies nonQ, but nonP does not imply Q; and, Q implies nonP, but nonQ does not imply P.
e. Subcontrariety: occurs when two propositions cannot be both false, but may be both true. If either is false, the other is true; but the truth of either leaves that of the other indeterminate. They
are said to be subcontraries. Their denials are incompatible, but not their affirmations. nonP implies Q, but P does not imply nonQ; and, nonQ implies P, but Q does not imply nonP.
f. Unconnectedness (or neutrality): two propositions are 'opposed' in this way, if neither formally implies the other, and they are not incompatible, and they are not exhaustive. Note that this
definition does not exclude that unconnecteds may, under certain conditions, become connected (or remain unconnected under all conditions).
Note that these seven types of opposition define both directions of the relations concerned, in contrast to the basic logical relations. For this reason they may be called ‘full’ relations: they
leave no question marks. They are logically exhaustive, allowing us to classify the relation of any pair of propositions.
There are other kinds of compound logical relations, besides the above mentioned seven. These concern paradoxical propositions, which imply even their own contradictory, or some contradiction. For
example, ‘X is not X’ formally implies both that ‘something, called X, exists’ (by the law of identity), and that ‘there is no such thing as X’ (by the law of contradiction).
However, paradoxes are very rare in formal logic; rather they occur, only a bit less rarely, with specific contents. Formal logic is mainly interested in the oppositions between normal propositions,
which are in principle consistent in form. More will be said about paradoxes later, when we look into the logic of logic.
The official terminology for the various kinds of opposition, here suggested, may not always accord with common usage. Especially note that in practise the word ‘contradiction’ is very often taken as
equivalent to ‘incompatibility’, signifying (in official parlance) ‘either contradiction or contrariety’; thus, for instance, with the expression ‘law of contradiction’; we mean incompatibility.
Also, the word ‘opposite’ is sometimes used to mean contradictory.
It is curious to note, too, that the words ‘subaltern’ and ‘subcontrary’, though quite old, are rarely used in practise; I have only seen them used by logicians. Such failures of words or meanings to
enter the mainstream of language, are sad testimonies to the popular disinterest in studying logic.
The following table summarizes the above through analysis of the possibilities of combination of the affirmations and denials of two propositions, P and Q, which are given as being related by a
certain opposition, specified in the left column. ‘Yes‘ indicates possible combinations, ‘no‘ impossible ones.
Table 6.1 Definitions of Full Oppositions.
│POSSIBILITY OF: │P+Q│P+nonQ│nonP+Q│nonP+nonQ │
│Implicance │yes│ no │ no │ yes │
│Subalternating │yes│ no │ yes │ yes │
│Being Subalternated │yes│ yes │ no │ yes │
│Contradiction │no │ yes │ yes │ no │
│Contrariety │no │ yes │ yes │ yes │
│Subcontrariety │yes│ yes │ yes │ no │
│Unconnectedness │yes│ yes │ yes │ yes │
Note that incompatibles are either contradictory or contrary, while exhaustives are either contradictory or subcontrary. Also worth noting, compatibles may be either implicant, or subalternative (in
one or the other direction), or subcontrary, or unconnected. The seven definite oppositional relations are mutually exclusive (i.e. contrary, to be exact), but one of the seven must hold.
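(Editorial addition: Table 6.1 can be encoded directly as a lookup. The Python sketch below names the opposition from the set of truth-combinations of P and Q that are possible; 'TT' means P true and Q true, 'TF' means P true and Q false, and so on.)

OPPOSITIONS = {
    frozenset({'TT', 'FF'}):             'implicance',
    frozenset({'TT', 'FT', 'FF'}):       'subalternating',
    frozenset({'TT', 'TF', 'FF'}):       'being subalternated',
    frozenset({'TF', 'FT'}):             'contradiction',
    frozenset({'TF', 'FT', 'FF'}):       'contrariety',
    frozenset({'TT', 'TF', 'FT'}):       'subcontrariety',
    frozenset({'TT', 'TF', 'FT', 'FF'}): 'unconnectedness',
}

def classify(possible_combinations):
    return OPPOSITIONS.get(frozenset(possible_combinations), 'not a full opposition')

print(classify({'TF', 'FT'}))         # contradiction
print(classify({'TT', 'TF', 'FT'}))   # subcontrariety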
The doctrine of opposition arose out of the need to apply the laws of thought to propositions more complex than the initial forms ‘S is P’ and ‘S is not P’. The concepts of equality, conflict, and
limitation, had to be expanded upon, to reflect the more qualified relations found to exist between forms once they are quantified.
We know that two singular propositions differing only in polarity (viz. R, G) are contradictory, for at any given instant This-S cannot both be P and not-be P, and must be one or the other. But what of
the plural versions of these forms? The following diagram shows their interrelationships.
Diagram 6.1 Rectangle of Opposition.
We note, to begin with, that for each polarity, the universal (all) subalternates the singular (any specific individual), which in turn subalternates the particular (some is an indefinite quantity,
meaning one or more). Next, a universal and particular of opposite polarity (A and O, or E and I) are contradictory, just as two singulars (R and G) concerning one and the same individual are contradictory.
Lastly, universals are contrary to universals or singulars of opposite polarity (A and E, A and G, R and E), and particulars are subcontrary to particulars or singulars of opposite polarity (I and O, I and G, R and O).
We may summarize these findings in the form of a ‘truth-table’. This tells us which other propositions must be true (T) or false (F), or may be either (.), in the context of each form given on the
left, under heading T, being true, or each form given on the right, under F, being false.
Table 6.2 Truth-table.
The conjunction of I and O may be viewed as a form of proposition in its own right, though composite. If we oppose this to the above standard forms, we obtain the following. Since ‘I+O’ subalternates I and
O (considered separately), it is contrary to A and E. It is unconnected to R and G, since either may be true or false without affecting it.
Also note in passing the position of forms quantified by ‘most’ or ‘few’, which we mentioned earlier… See Appendix 2 for remarks on this topic.
Note that two propositions with the same subject, but with different predicates, may be considered opposites, if the predicates are well known to be antithetical. Thus, ‘S is P’ and ‘S is Q’ may
implicitly intend ‘S is P (but not Q)’ and ‘S is Q (but not P)’, respectively. In such case, the forms may of course be treated as effective contradictories.
These oppositions are proved as follows. Remember that each of the plural propositions can be defined by a series of singular propositions of the same polarity. Thus, A and I are reducible to a series S1
is P, S2 is P, S3 is P, etc., differing in that All-S covers the whole class of S, whereas Some-S covers only part of the same class. Likewise in the case of negatives, E and O. Thus the subalternation
of singular or particular, to a generality of like polarity, is simply the inclusion by the whole of the class of any part thereof. This relation is unidirectional in that if the whole is affirmed or
denied so is every part of it, whereas if some part is affirmed or denied it does not follow that other parts are.
Similarly, the contradictions of A and O, or E and I, are proven by consideration of their subsumptions. If all the members of a class are included in a predication, then any which is declared excluded
would be found to be both P and nonP, an impossibility. The same can be argued in the negative case: if all are excluded, then none can be included without inconsistency.
With regard to I and O (or I and G, or R and O), they are subcontrary insofar as conflicting predicates can consistently be applied to different parts of the same subject-class, although it is impossible to
evade either affirming or denying any predicate of a subject, i.e. one must be true. The contrariety of A and E (or A and G, or R and E) is due to the observation that, while they cannot be both true without
implying some singular case(s) of inconsistency, they could be both false without antinomy, as occurs in the case of I and O being both true.
The concepts of inclusion and exclusion are geometrically evident. They were implicit in the original formulation of the laws of thought, when we referred to the whole or part of a singular
phenomenon. In this logical discipline, we broaden the laws of thought, by treating individual instances as parts of a larger phenomenon we call a class or universal, and then applying our laws to
this new whole. Essentially, no information has been added, we have merely in fact elucidated inherent data.
To conclude, let us point out that ‘opposition’ can be viewed as a kind of immediate inference, like eduction. This is especially obvious when we draw out an implicant or subaltern, but can also be
said about affirming a proposition on the basis of another’s falsehood, or denying one on the basis of another’s truth or falsehood. Opposition is not a mere theoretical construct for logicians, but
of practical value to the layman. | {"url":"https://thelogician.net/FUTURE-LOGIC/Actual-Oppositions-6.htm","timestamp":"2024-11-09T19:10:22Z","content_type":"text/html","content_length":"132715","record_id":"<urn:uuid:b2866b9a-cad9-40d0-a0ea-10a2b3e75fa4>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00850.warc.gz"} |
Algebra pro
Our users:
I've never seen anything like it! Step-by-step, I'm learning complicated algebra right alongside my kids!
Lacey Maggie, AZ
Just watching my students, one after another, easily grasp these higher mathematical concepts and really, truly understand what they are doing, makes Algebrator worth the price of admission.
Furthermore, for the gains, the price is tremendously inexpensive!
Diane Flemming, NV
WOW!!! This is AWESOME!! Love the updated version, it makes things go SMOOTH!!!
Rebecca Cox, WY
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2013-12-15:
• factoring quadratic expressions, algebra II prentice hall
• simultaneous equation calculator
• solve math poems
• Order of operations for fourth grade worksheets
• coefficient exponent calculator
• college algebra exercises
• free download of quantitative apptitude test pappers
• graphs for linear differential equations
• math exercises 8 year olds
• www.algebrator.com
• strategies for problem solving workbook answers
• ninth grade algebra lessons
• factoring polynomials calculator
• program to solve vector space
• Quadratic formula higher orders
• how to write powers and factorial equations in excel
• factor third order polynomials
• root calculation excel 2007
• integral exponents worksheets in algebra 2
• trigonometry calculator ti89 help
• example questions on the internet algebra year 11
• software to learn algebra
• free online algebra calculator help now
• math papers printables for 8th graders
• FREE LESSON PLANS SIXTH GRADE RATIOS
• inverse operation games for ks2
• prentice hall algebra 2 answers
• sample paper for class 8th
• algebra 2 an integrated approach math books online
• +casio +calculator linear equations
• printable practise sats papers ks3
• largest common denominator
• logarithmic expressions solver
• difference of 2 square
• Algebra Emulator Free
• simplification of algebraic expression
• dividing fractions tests
• Examples Of Grade 9 Maths
• preivous year mat question papers free download
• Trivias about math
• addition equations worksheets
• chemistry ti83
• maths worksheets 5a
• simplify rational division
• ti-83 plus solve function
• radius worksheet 5th grade
• free solved question paper for 9th
• roots of 2rd order equation
• apttitude question and answers for it
• boolean expressions simplifier
• intermidate algrbra help
• math fractions multiply divide add subtract for teachers
• solving 3 simultaneous equations Matlab
• learn permutation combination
• casio calculators downloads applications
• Real & Complex analysis,Walter Rudin,,3rd edition,Solutions manual
• simplified algebra
• cube root ti-83
• 3rd grade math printouts
• How Is Algebra Used in Real Life
• how to solve a cubic equation tutorial
• Trivias(math)
• free online aptitude test Diagramming practise online
• homogeneous differentials in mathematica
• cost accounting 12 edition test bank
• free printable simplify radicals worksheets
• math problems .com/
• least online games
• multivariable algebra calculator
• Online Algebra Practice for KS2 Pupils
• how to solve algebra
• ti84 calculator for free
• math formulae
• solve a cubic with 3 intercepts
• third order quadratic formula
• "numbers in prime factored form"
• factoring in ti-89
• online TI-89 graphic calculator
• english Exam papers 4 GR 7
• grade 8 maths homework
• 7th grade math worksheet,word
• yr 9 sats papers online
• solving an equation of a ti-84 plus | {"url":"http://algebra-help.com/algebra-help-factor/monomials/algebra-product.html","timestamp":"2024-11-09T01:31:01Z","content_type":"application/xhtml+xml","content_length":"12164","record_id":"<urn:uuid:ab5a3765-2945-464d-96af-3385ffd8ac98>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00623.warc.gz"} |
Sales Velocity - Formula and how to calculate Pipeline Velocity
The sales velocity equation
It uses four KPIs as levers and the levers are for the same period, for example, quarterly.
Take the number of qualified sales opportunities, times the average deal size, times the average win rate percentage, divided by the average sales cycle length in days, and that gives us the sales
velocity metric, which is measured in sales revenue dollars per day.
A higher sales velocity is better.
The higher the sales velocity, the faster you're making money.
With the sales velocity dollars per day, you can get other numbers for week, month, quarter, and year.
Average deal value
Also called average deal size, and many know this already but to level set, it's the total revenue for a given period divided by the number of deals for that same period, both closed won and closed lost.
It's typically measured in annual contract value or net new ARR.
The average sales cycle length
This is measured in days.
This is the total number of days between when each deal is created and when it is closed, divided by the number of deals closed (won or lost) for the same period.
The number of opportunities
It's the number of qualified deals worked for that same period, both closed won and lost. And then the fourth, average win rate percentage, which is the number of won deals divided by the number of
all deals worked for that same period, again closed won and lost.
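To make the arithmetic concrete, here is a small Python sketch (not from the original article; the numbers are made up for illustration) that computes sales velocity from the four levers and scales it to a quarter.

```python
def sales_velocity(opportunities, avg_deal_value, win_rate, sales_cycle_days):
    """Sales velocity in revenue dollars per day.

    opportunities    -- number of qualified deals worked in the period
    avg_deal_value   -- average deal size in dollars
    win_rate         -- average win rate as a fraction (e.g. 0.25 for 25%)
    sales_cycle_days -- average sales cycle length in days
    """
    return opportunities * avg_deal_value * win_rate / sales_cycle_days

# Hypothetical quarterly inputs for one seller.
per_day = sales_velocity(opportunities=40, avg_deal_value=12_000,
                         win_rate=0.25, sales_cycle_days=60)
print(f"Velocity: ${per_day:,.0f}/day  ->  ${per_day * 90:,.0f} per 90-day quarter")
```

Because the formula is linear in each lever, lifting any single KPI by a few percent (say win rate from 25% to 27.5%) flows straight through to the same percentage lift in velocity, which is the point of treating the four KPIs as levers.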
Go Deeper
Within an organization, seller A and seller B could generate the same amount of revenue in a given quarter and have the same sales velocity metric, yet they may have completely different ways of
getting to this outcome.
So understanding sales velocity highlights what's different about the means to their similar end.
We take both sets of KPIs as inputs for both sellers and get the sales velocity metric of sales revenue dollars per day for both sellers. You can see that it's the same. Once we have that number, we
can calculate the quarterly and annual numbers. We could also calculate the numbers for months and weeks if we wanted to.
The sales velocity equation allows leaders to make adjustments and measure impact.
So when you start capturing your data, it allows you to measure how quickly your business is moving forward and achieving its growth targets.
It gives you insight into where leadership should focus more efforts. It also allows you to see your team's achievements aligned with their individual goals to ensure that individually and as a team
you're recognizing success.
It can also be used to set sales targets, allowing you to have an achievable quota but also a stretch goal and have harmony between what the team can do and what each seller can do.
And because you can adjust the levers, you can underperform in one area, adjust the other KPIs, and still prosper.
Uplift is achieved by improving one or more of the four KPIs by a realistic number such as 2, 5, or even 10 percent.
It's a best practice to look at the sales velocity in any place your sales motions differ and in the spirit of continuous improvement, compare past and present results, especially after making
changes in your business. | {"url":"https://trexrun.co/sales-velocity-formula/","timestamp":"2024-11-04T11:37:00Z","content_type":"text/html","content_length":"66353","record_id":"<urn:uuid:47f1585d-d0ce-4f79-8a85-8e9e0236a3b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00829.warc.gz"} |
Universal Law of Gravitation MCQ - Objective Questions with Answers
Last updated on Oct 22, 2024
Latest Universal law of gravitation MCQ Objective Questions
Universal law of gravitation Question 1:
A solid sphere of radius R gravitationally attracts a particle placed at 3R from its center with a force F1. Now a spherical cavity of radius R/2 is made in the sphere (as shown in the figure) and the
force becomes F2. The value of F1 : F2 is:
Answer (Detailed Solution Below)
Option 1 : 41 ∶ 50
Universal law of gravitation Question 1 Detailed Solution
Gravitational field intensity of the full sphere: g1 = GM/(3R)² = GM/9R² …(1)
Gravitational field intensity with the cavity: g2 = GM/9R² − G(M/8)/(5R/2)² = GM/9R² − GM/50R² = (41/450) GM/R² …(2)
Implies , g1/g2 = 41/50
⇒ F1/F2 = mg1/mg2 = 41/50
∴ The value of F1 : F2 is 41: 50.
Universal law of gravitation Question 2:
The intensity of the Gravitational field of the Earth is maximum at the_________.
Answer (Detailed Solution Below)
Option 2 : Poles
Universal law of gravitation Question 2 Detailed Solution
The correct answer is Poles.
• The gravitational field is used to explain the gravitational phenomena and is measured in newtons per kilogram (N/kg).
• It is a physical quantity.
Key Points
• The value of g decreases with height and depth from the earth’s surface.
• The intensity of the Gravitational field of the Earth is maximum at the poles and it is minimum at the equator.
• The value of g decreases due to the rotation of the Earth.
• If the angular speed of the earth increases, then the value of g decreases and vice-versa.
Universal law of gravitation Question 3:
Four identical particles of equal mass 1 kg are made to move along the circumference of a circle of radius 1 m under the action of their own mutual gravitational attraction. The speed of each particle
will be
Answer (Detailed Solution Below)
Option 1 : \(\sqrt{\frac{(1+2 \sqrt{2}) G}{2}}\)
Universal law of gravitation Question 3 Detailed Solution
By resolving the forces on any one particle, the net inward (centripetal) force is
F1 + 2F2 cos 45° = Fc, where Fc = MV²/R,
F1 = GM²/(2R)² is the attraction of the diametrically opposite particle, and F2 = GM²/(√2R)² is the attraction of each adjacent particle.
⇒ \(\frac{GM^2}{(2R)^2}+2\,\frac{GM^2}{(\sqrt{2}R)^2}\cos 45^{\circ}=\frac{MV^2}{R}\)
⇒ \(\frac{GM^2}{4R^2}+\frac{GM^2}{\sqrt{2}\,R^2}=\frac{MV^2}{R}\)
⇒ \(V^2=\frac{GM}{4R}+\frac{GM}{\sqrt{2}\,R}=\frac{GM}{R}\left[\frac{1+2\sqrt{2}}{4}\right]\)
⇒ \(V=\frac{1}{2}\sqrt{\frac{GM}{R}\left(1+2\sqrt{2}\right)}\)
With M = 1 kg and R = 1 m (as given):
⇒ \(V=\frac{1}{2}\sqrt{G\left(1+2\sqrt{2}\right)}\)
∴ the correct option is 1
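As a quick numerical sanity check (not part of the original solution), a few lines of Python reproduce the result by summing the three attractive forces on one particle and solving F = Mv²/R:

```python
import math

G = 6.674e-11    # gravitational constant, N·m²/kg²
M, R = 1.0, 1.0  # mass of each particle (kg) and circle radius (m)

# Net inward force on one particle from the other three (square configuration).
F_net = (G * M**2 / (2 * R)**2
         + 2 * (G * M**2 / (math.sqrt(2) * R)**2) * math.cos(math.radians(45)))

v_numeric = math.sqrt(F_net * R / M)                     # from F_net = M v² / R
v_formula = 0.5 * math.sqrt(G * (1 + 2 * math.sqrt(2)))  # closed-form answer above

print(v_numeric, v_formula)  # both ≈ 8.0e-6 m/s
```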
Universal law of gravitation Question 4:
The value of the acceleration due to gravity at a depth h below the surface of earth is half of its value at the surface of earth. If Re is the radius of earth, then
Answer (Detailed Solution Below)
Option 3 : \(h=\dfrac{R_e}{2}\)
Universal law of gravitation Question 4 Detailed Solution
The correct answer is option 3) i.e. \(h=\dfrac{R_e}{2}\)
• Law of Universal Gravitation: It states that all objects attract each other with a force that is proportional to the masses of two objects and inversely proportional to the square of the distance
that separates their centres.
It is given mathematically as follows:
\(F = \frac{Gm_1m_2}{R^2}\)
Where m1 and m2 are the mass of two objects, G is the gravitational constant and R is the distance between their centres.
From the Law of Universal Gravitation, the gravitational force (F) acting on an object of mass m placed on the surface of Earth is
\(F = \frac{GMm}{R^2}\)
Where R is the radius of the earth.
From Newton's second law, F = ma = mg
⇒ mg = \(\frac{GMm}{R^2}\)
Therefore, acceleration due to gravity, g = \(\frac{GM}{R^2}\)
Let us consider that the acceleration due to gravity on the surface of earth = g and at a depth h below the surface of earth = g'.
Given that:
g' = \(\frac{g}{2}\) ----(1)
The distance between the surface of the earth and the centre of the earth = Radius of earth = Re
The distance between the surface of the earth and a point which is at depth h below the surface = Re - h
At this point, we assume that the earth has a different mass at a depth of h.
g = \(\frac{GM}{R_e^2} = \frac{G}{R_e^2} \times \rho\left[\frac{4}{3} \pi R_e^3\right]\) and g' = \(\frac{GM'}{(R_e - h)^2} = \frac{G}{(R_e - h)^2} \times \rho\left[\frac{4}{3} \pi (R_e - h)^3\right]\) (∵ Mass = density × volume)
From (1)
\(\frac{g}{g'} = 2 \)
2 \(= \frac{ \frac{G}{R_e^2} × \rho[\frac{4}{3} \pi R_e^3]}{\frac{G}{(R_e - h)^2} × \rho [\frac{4}{3} \pi (R_e - h)^3 ]}\)
2 = \(\frac{R_e}{R_e - h}\)
⇒ 2Re - 2h = Re
⇒ h = \(\frac{R_e}{2}\)
Universal law of gravitation Question 5:
Newton's law of gravitation is called universal law because:
Answer (Detailed Solution Below)
Option 1 : it is applicable at all places of universe for all distances between all particles
Universal law of gravitation Question 5 Detailed Solution
The correct answer is Option 1.
Concept -
• Newton's law of gravitation states
• "Every particle of matter in the universe attracts every other particle with a force that is directly proportional to the product of their masses and inversely proportional to the square of the
distance between their centres."
Explanation -
• Newton's law of gravitation can be represented mathematically as:
F = G × (m1 × m2) / r²
• F is the force of attraction between the two bodies.
• m1 and m2 are the masses of the two bodies.
• r is the distance between the centres of the two bodies.
• G is the gravitational constant, a proportionality constant whose value in the SI system is approximately 6.674 × 10⁻¹¹ N·m²/kg².
• So, if you have two bodies with masses m1 and m2, and they are separated by a distance r, then the force of gravity between them is directly proportional to the product of their masses and
inversely proportional to the square of the distance between them,
• Real-world applications of this law include everything from explaining the motion of planets in our solar system to the rolling of a ball down a hill. It's one of the cornerstones of classical
physics and is essential in understanding the larger structure of the universe.
Top Universal law of gravitation MCQ Objective Questions
The force of attraction between two objects of masses 'M' and 'm' which lie at a distance 'd' from each other is directly proportional to the-
Answer (Detailed Solution Below)
Option 2 : Product of the masses of objects M × m
• Gravity (or gravitation): The universal force of attraction acting between all matters.
□ It is the weakest known force in nature.
• The Law of Universal Gravitation states that every mass attracts every other mass in the universe by a force acting on a straight line between the centers-of-mass of both points.
□ The force is proportional to the product of the masses of the objects and inversely proportional to the square of the distance between them.
\(F = G\frac{{Mm}}{{{r^2}}}\)
Where G is universal gravitational constant, M and m are masses and r is the distance between two masses.
• The Law applies to all objects with masses, big or small.
• The force of attraction between two objects of masses M and m which lie at a distance d from each other is directly proportional to the product of the masses of the objects, M × m. So option 2 is correct.
The force that binds the atmosphere around the Earth is the-
Answer (Detailed Solution Below)
Option 2 : Force of gravity.
The correct answer is option 2) i.e. Force of gravity.
• The force of gravitation: It is the force acting between any two masses on the Earth due to their masses.
• Force of gravity: It is the force that the Earth exerts on a body.
□ Force of gravity acts between two bodies where one of the bodies is always the Earth.
• The force of gravitation is a weak force that depends on the mass of the two bodies whereas, the force of gravity is a strong force with which the Earth attracts any object towards it.
• The Earth attracts all the objects towards its center and is called the force of gravity.
• The force of gravity binds the atmosphere around the Earth.
Additional Information
• Force of gravity also provides the necessary centripetal force to the celestial bodies around the Earth including the Moon to revolve around itself.
The force of attraction between all masses in the universe, especially the attraction of the earth's mass for bodies near its surface is called ________.
Answer (Detailed Solution Below)
Option 3 : Gravitational force
• Newton's law of Gravitation: It states that every body in this universe attracts every other body with a force, which is directly proportional to the product of their masses and inversely
proportional to the square of the distance between their centers.
□ The direction of the force is along the line joining the particles.
□ The gravitational force was discovered by Sir Issac Newton.
The magnitude of the gravitational force F is given by:
\(F = G\frac{{{M_1}{M_2}}}{{{R^2}}}\)
Where G = universal gravitational constant, M1 = mass of 1st body, M2 = mass of 2nd body, and R = distance between the two bodies.
1. Spring force: The force applied when a spring is compressed or extended is called spring force.
2. Mechanical force: The force in which there is direct contact between the bodies is called mechanical force. For example Friction force, tension force, etc.
3. The force which pulls the body towards earth is called gravitational force. So option 3 is correct.
4. The Frictional force is the opposing force that is created between two surfaces that try to move in the same/opposite directions.
• The Four Fundamental Forces of Nature are:
1. Gravitational force, 2. Weak nuclear force, 3. Electromagnetic force, 4. Strong nuclear force.
• Gravitational force is the weakest force in nature.
• The strong nuclear force is the strongest force in nature.
Two bodies attract each other gravitationally. If the distance between their centers reduced to half, the gravitational force
Answer (Detailed Solution Below)
Option 4 : will become four times
The correct answer is option 4) i.e. will become four times more
The universal law of gravitation was proposed by Sir Isaac Newton In the year 1686.
• Newton’s law of gravitation: Every particle in the universe attracts every other particle with a force, which is directly proportional to the product of their masses and inversely proportional to
square of distance between them.
• If two bodies with mass m1 & m2, where r be the distance between two masses, then gravitational force,
\(F \propto \frac{{{m_1}{m_2}}}{{{r^2}}} \Rightarrow F = G\frac{{Mm}}{{{r^2}}}\;\)
Where, F = force of attraction, G = 6.67 × 10⁻¹¹ N·m²/kg² (gravitational constant), m1 & m2 = masses of the bodies, r = distance between them.
• Unit of ‘G’ is N·m²/kg². The dimensional formula of ‘G’ is [M⁻¹L³T⁻²].
If the distance between their centres is cut in half, \(R' = \frac{1}{2}R\)
\(F = \frac{Gm_1m_2}{R'^2} = \frac{Gm_1m_2}{\left(\frac{1}{2}R\right)^2} = 4\,\frac{Gm_1m_2}{R^2} \)
Therefore, the gravitational force will become four times more.
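A quick numerical check (illustrative only, with arbitrary masses) shows the same quadrupling:

```python
G = 6.674e-11  # gravitational constant, N·m²/kg²

def gravity(m1, m2, r):
    """Newton's law of gravitation: F = G·m1·m2 / r²."""
    return G * m1 * m2 / r**2

m1, m2, r = 5.0, 3.0, 2.0  # arbitrary illustrative values (kg, kg, m)
print(gravity(m1, m2, r / 2) / gravity(m1, m2, r))  # -> 4.0
```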
The gravitational force between the earth and a body is equal to the__________.
Answer (Detailed Solution Below)
Option 1 : weight of the body
• Gravitational Force: between two masses m1 and m2 at the distance of R is given by:
\(F=G\frac{m_1m_2}{R^2} \)
where F is the gravitational force, m1 and m2 are the masses, and G is the gravitational constant.
• Gravitational Acceleration: It is calculated by using the formula
\(g = {GM\over R^2}\)
where g is the gravitational acceleration, G is the gravitational constant, M is the mass of the earth and R is the radius of the earth.
• Weight of the body (W): It is defined as the force acting on a body due to gravity.
□ The weight of a body is calculated by the formula:
W = mg
where W is the weight of the body, m is the mass of the body, and g is the gravitational acceleration.
• Weight of the body (W): It is defined as the force acting on a body due to gravity.
• The gravitational force between a body of mass m and the earth of mass M and radius R is
\(F=G\frac{Mm}{R^2} \)
we know that \(g = {GM\over R^2}\)
F = mg which is the weight of the body.
• The gravitational force between a body and the earth is known as the weight of the body.
• So the correct answer is option 1.
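To connect the two formulas numerically, here is a short illustrative snippet (the Earth values below are the usual textbook figures, not taken from this page):

```python
G = 6.674e-11  # gravitational constant, N·m²/kg²
M = 5.972e24   # mass of the Earth, kg
R = 6.371e6    # mean radius of the Earth, m
m = 70.0       # mass of the body, kg (arbitrary example)

g = G * M / R**2       # ≈ 9.8 m/s²
F = G * M * m / R**2   # gravitational force of the Earth on the body
print(g, F, m * g)     # F and m·g agree: that force is the body's weight
```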
Which of the following is not a characteristic of gravitational force
Answer (Detailed Solution Below)
Option 3 : It is a strong force.
Newton’s law of gravitation: According to this law- When two objects having some masses are placed at a certain distance then the gravitational force is directly proportional to the product of masses
and it is inversely proportional to the square of the distance between them.
\(Gravitational\;Force = F \propto \frac{{{m_1}{m_2}}}{{{r^2}}} \Rightarrow F = G\frac{{{m_1}{m_2}}}{{{r^2}}}\)
Where G = Gravitational constant, r = Distance between two masses, m1 and m2 = Masses.
Characteristics of gravitational force are:
• Always attractive.
• A central force i.e. acts on the centre of mass of the body.
• Long-range force.
• Weakest force among all other existing forces like gravitational, electromagnetic, strong nuclear force and weak nuclear force.
• It is a conservative force.
Which of the following statement is true regarding gravitational attraction between man and Earth?
Answer (Detailed Solution Below)
Option 2 : Man and Earth pull each other with the same force
The correct answer is option 2) i.e. Man and Earth pull each other with the same force
• The universal law of gravitation: It states that every object in the universe attracts every other with a force varying directly as the product of their masses and inversely as the square of the
distance between them.
• This is given by:
\(F =\frac{GMm}{R^2}\)
Where F is the gravitational force, M and m are the masses of two objects, R is the distance between the centre of two objects, and G is the universal gravitational constant.
• We know that gravitational force is dependent on the mass of the two bodies and the distance between their centres.
• The gravitational attraction exerted by the man on Earth and by the Earth on man is the same:
\(F =\frac{GMm}{R^2}\) in both the case.
• So, the magnitude of the force will be the same.
• Thus, man and Earth pull each other with the same force.
• The same force acting between the two bodies does not necessarily mean that their accelerations are the same.
We know F = ma and thus, \(a = \frac{F}{m}\)
• The force F will be the gravitational force, but the magnitude of acceleration will vary depending on the mass of each body.
Two spheres of mass m1 and m2 have gravitational force F acting between them, when placed in air at a distance d. On placing the same spheres at the same distance in a liquid medium of relative
density 4, the gravitational force between them will
Answer (Detailed Solution Below)
Option 3 : be F
Newton's universal law of gravitation:
• The force of attraction between any two bodies is directly proportional to the product of their masses and is inversely proportional to the square of the distance between them.
• Formula, Force, \(F =G\frac{m_1m_2}{r^2}\),
• Where, m1 and m2 are the masses of the bodies, r = distance between the two bodies, G = universal gravitational constant
• The SI unit of the force is Newton (N).
• The gravitational forces do not depend on the medium.
The gravitational force does not depend on the medium, so the gravitational force is the same when it is placed in a liquid medium of relative density 4.
Hence, the correct option is 3.
The gravitational force on a body of mass m at a distance r from the centre of the Earth for r < R, where R is the radius of Earth, is proportional to:
Answer (Detailed Solution Below)
Option 3 : r
The correct answer is option 3) i.e. r
• Law of Universal Gravitation: It states that all objects attract each other with a force that is proportional to the masses of two objects and inversely proportional to the square of the distance
that separates their centres.
It is given mathematically as follows:
\(F = \frac{Gm_1m_2}{R^2}\)
Where m1 and m2 are the mass of two objects, G is the gravitational constant and R is the distance between their centres.
From the Law of Universal Gravitation, the gravitational force (F) acting on an object of mass m placed on the surface of Earth is
\(F = \frac{GMm}{R^2}\)
Where R is the radius of the earth.
We know, the gravitational force \(F = \frac{GMm}{R^2}\)
This is valid when R is the centre to centre distance between the two masses M and m.
At a distance r < R, the mass of Earth cannot be taken as M.
The new mass (M'):
Only the mass of the Earth contained within the radius r attracts the body. Let us assume that the mean density of the earth is ρ and that this inner region is a sphere with a radius equal to the distance r. We know, mass =
density × volume
\(M' = ρ[\frac{4}{3} \pi r^3]\)
Therefore the gravitational force, F = \( \frac{GM'm}{r^2}\) = \( \frac{G[ρ(\frac{4}{3} \pi r^3)]m}{r^2}\) = \(G[ρ(\frac{4}{3} \pi r)]m\)
Hence the gravitational force, F ∝ r | {"url":"https://testbook.com/objective-questions/mcq-on-universal-law-of-gravitation--5eea6a1439140f30f369f0c6","timestamp":"2024-11-10T15:33:38Z","content_type":"text/html","content_length":"630819","record_id":"<urn:uuid:c7056096-b0ff-4192-bb0e-ed606780803a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00769.warc.gz"} |
Ultimate Guide for ARIMA Time Series Forecasting - Dataaspirant
Ultimate Guide for ARIMA Time Series Forecasting
ARIMA, an acronym for Autoregressive Integrated Moving Average, is not just a statistical method; it's a story of time told through data. It captures the essence of time series data, accounting for
trends, seasonality, and noise, allowing us to project future values with surprising accuracy.
Whether you're a seasoned data scientist looking for a refresher or a beginner eager to dive into the world of time series forecasting, this guide is crafted for you.
In this "Ultimate Guide for ARIMA Time Series Forecasting," we will embark on a journey, starting with the foundational concepts of ARIMA, delving deep into its intricacies, and finally emerging with
a toolkit that will empower you to harness the full potential of ARIMA models.
Ultimate Guide for ARIMA Time Series Forecasting
By the end of this guide, forecasting the future will seem less like magic and more like a science, a skill you can master and employ.
So, let's set the stage and prepare to unravel the mysteries of time through the lens of ARIMA!
Before we learn about ARIMA time-series with Python, let’s understand the basics of time-series data.
What is Time-Series and Time Series Forecasting?
A time series is a sequence of data recorded at regular intervals. The data points (past values) are analyzed in order to forecast a future course of action.
It is considered to be time-dependent. Typically, a time series has four components, which are discussed below:
• Trend: The trend is one of the primary components of a time series. It depicts growth or decline in a time series over a long period.
• Seasonality: This trend represents short-term changes in data due to seasonal factors. For example, exports are subject to changes when there are certain variations in weather or government
norms. Likewise, the sale of the umbrellas will change due to changes in the weather.
• Cyclical movements: In a cyclic trend, long-term variations are observed in the time series. These variations are common within a business cycle.
• Irregular fluctuation: The trend depicts a sudden change in the time series and is not likely to be repeated. Such changes are not possible to be explained by trends or cyclic movements. The
occurrence of this trend is purely random.
On the other hand, time series forecasting covers the methods of exploring time-series data collected over a specific period in order to predict future values. It is important to note that not every
dataset containing time values qualifies as time-series data.
Some of the common usages of time series forecasting are in the fields of e-commerce, retail, stock market, and weather predictions. The time series forecasting methods are as follows.
• AutoRegressive Integrated Moving Average (ARIMA) Model
• Seasonal AutoRegressive Integrated Moving Average (SARIMA) Model
• Vector AutoRegression (VAR) Method
What Is ARIMA Time Series Model?
In time series forecasting, ARIMA models are one of the standard techniques for predicting future values from past data. They are designed to capture the autocorrelation of the target variable, that is,
its dependence on its own past values over time.
ARIMA stands for AutoRegressive Integrated Moving Average, which means the model consists of three parts:
1. Autoregressive means the dependence of the current observation on the past observations;
2. Integrated means the differencing of the data, which is used to make the data stationary
3. Moving average can be considered as the capturing of the relationship between the current observation and past error or residual terms.
In general, an ARIMA model can be written as ARIMA(p, d, q), where
• p represents the order of autoregression,
• d represents the order of differencing,
• q represents the order of moving average.
The optimum values of these parameters give an accurate ARIMA model for the dataset. Now let us discuss autoregression and moving averages in more detail to clarify how they work.
Auto Regression and Moving Average
The autoregressive and moving average components are key concepts to understand before working with ARIMA models. Let us start with the autoregressive models.
Autoregression Models
Autoregression models forecast the current value of a series from its own past observations. The current value is predicted through a linear relationship between the past values and the current observation.
In other words, a linear relationship between the current observation and past observations is assumed, which makes it straightforward to predict the current observation from historical data.
Moving Average Models
Moving average models are similar in spirit to autoregressive models, except that the current value is predicted from past residuals or error terms rather than past observations. The noise or small
fluctuations in the data can be captured through these error terms, which autoregression models cannot do.
The one major difference between the two, then, is that autoregression assumes a linear relationship between current and past observations, while moving averages use past errors to predict the current
observation.
Let us discuss the equation of the AR and MA models, which will clear the idea about the mathematical functioning of the model.
Mathematical Equations of ARIMA Models
The ARIMA model consists of AR, I, and MA terms, so the equations of these terms are considered for the final ARIMA models.
The equation of the AR models is
Y(t) = c + phi1 * Y(t-1) + phi2 * Y(t-2) + ... + phip * Y(t-p) + epsilon(t)
The equation for the MA models is
Y(t) = c + theta1 * epsilon(t-1) + theta2 * epsilon(t-2) + ... + thetaq * epsilon(t-q) + epsilon(t)
• Y(t) = Current value of the observation
• C = Constant Term
• Y(t-p) = Previous Observations
• epsilon(t-1), …, epsilon(t-q) = Previous (lagged) residual error terms
• epsilon(t) = Residual error at time t
The full ARIMA equation is obtained by combining these general forms according to the chosen orders p, d, and q of the model, which together determine the model's specific equation.
Now let us discuss the p,d and q parameters.
What does the p, d, and q in the ARIMA model mean?
In ARIMA models, the p,d, and q parameters are important parameters that affect the performance of the model.
• p is the order of the AR model or the autoregression, which represents the relationship between the current and the past observations.
• q is the order of the MA model which represents the relationship between the current observations and the past errors or residuals.
• d is the order of differencing, which signifies how many times differencing is going to be applied to the dataset.
Now let us discuss the process of choosing the right order with ARIMA models.
Selecting the Right Order for ARIMA Models
As we discussed above, the ARIMA models can be written as ARIMA(p, d, q), where p, d, and q represent the orders of autoregression, differencing, and moving averages. This order is very important
while forecasting with the ARIMA models, as small changes in this order can directly affect the performance of the model.
The right and optimum order should be selected before training the ARIMA model to achieve an accurate and efficient model.
The process for doing this is quite simple: we take a range of candidate values for p, d, and q, and then run a loop over all combinations of these values, fitting an ARIMA model for each one.
For each combination, the AIC value is calculated, which helps select the best-fit combination of p, d, and q values for the dataset. The lower the AIC, the better the fit of the model for that
combination of p, d, and q values.
Let us now try to implement the same with a code example; here, we will use a sample dataset with 100 rows containing a time series dataset, on which the loop with different combinations of p,d, and
q values will be run.
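A minimal sketch of such a grid search is shown below. This is illustrative code, not the article's original listing; it uses a synthetic 100-point series and the statsmodels ARIMA implementation.

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic 100-row time series: a trend plus noise, indexed by month.
rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", periods=100, freq="MS")
y = pd.Series(50 + 0.5 * np.arange(100) + rng.normal(0, 2, 100), index=idx)

best_aic, best_order = float("inf"), None
for p, d, q in itertools.product(range(3), range(2), range(3)):
    try:
        fit = ARIMA(y, order=(p, d, q)).fit()
        if fit.aic < best_aic:
            best_aic, best_order = fit.aic, (p, d, q)
    except Exception:
        continue  # some combinations may fail to converge; skip them

print(f"Best order by AIC: {best_order} (AIC = {best_aic:.1f})")
```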
Now let us take a dataset from Kaggle and train an ARIMA model on it, which will clarify the complete process of model building with ARIMA.
Building the ARIMA Time Series Model in Python
Now let us take a code example to understand the complete process. Here we will use an Air passenger dataset from Kaggle.
Here we will use an ARIMA model to fit the dataset, and the best order of the p,d, and q will be found with the help of running a loop with all possible combinations.
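The article's original listing is not reproduced here; the sketch below shows one plausible way to do it, assuming the Kaggle air-passengers CSV has been saved as AirPassengers.csv with columns Month and #Passengers (those file and column names are assumptions, not confirmed by the source).

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Assumed file and column names for the Kaggle air-passengers dataset.
df = pd.read_csv("AirPassengers.csv", parse_dates=["Month"], index_col="Month")
y = df["#Passengers"].asfreq("MS")

# Order chosen by an AIC grid search like the one sketched earlier;
# (2, 1, 2) is only a placeholder, not the article's reported best order.
model = ARIMA(y, order=(2, 1, 2)).fit()

forecast = model.forecast(steps=12)  # forecast the next 12 months
print(forecast)
```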
The above code trains an ARIMA model on the dataset and prints the forecasted values.
However, there are situations where the classic ARIMA model fails badly; let us discuss its limitations in the next section.
Limitations of ARIMA Models
1. Stationarity: The ARIMA model assumes the data to be stationary; this is one of its biggest limitations, since the data needs to be converted into a stationary form (typically by differencing)
before the model can be used.
2. Linearity: The ARIMA model can only capture linear relationships between variables, and it struggles with nonlinear dependence between past observations and the variable being forecast.
3. Outliers: The ARIMA models are very sensitive to outliers, and hence in case of outliers present in the dataset, the model performs very poorly.
4. Short-Term Forecasting: ARIMA models are generally preferred for short-term forecasting only; in the case of forecasting for very long periods, the model performs poorly, and the model's
performance declines with the length of the forecasting period.
Let us now discuss the extensions of the ARIMA model, which are used where the classic ARIMA model fails.
Advantages and disadvantages of choosing the ARIMA Model
Advantages:
1. Applicability: The ARIMA models are among the most widely used models and can be applied to almost any time series forecasting problem.
2. Flexibility: The ARIMA models are flexible and can be tuned to the data by adjusting the p, d, and q orders.
3. Accuracy: The ARIMA models are among the most accurate models for non-seasonal short-term predictions.
4. Simplicity: The ARIMA models are among the easiest time series models to understand and interpret.
Disadvantages:
1. Long-term predictions: The ARIMA models cannot be used for long-term predictions; they are only preferred for short-term horizons.
2. Assumptions: The ARIMA model assumes a linear relationship between past and current observations, which limits its applicability.
3. Anomalies: The ARIMA models are very sensitive to anomalies or outliers, which affect the performance of the model very badly.
4. Seasonality: The plain ARIMA model cannot be used with seasonal data, limiting its use in many cases.
Extensions of ARIMA Models
There are some other time series forecasting algorithms that are the extension of the ARIMA models.
1. SARIMA (Seasonal ARIMA): SARIMA is one of the most widely used extensions of ARIMA and is mainly used when seasonality is present in the dataset. It is essentially the ARIMA model adapted for
seasonal data, and it performs better than plain ARIMA in that case.
2. ARIMAX (ARIMA with exogenous variables): ARIMAX is a variant of ARIMA used when additional independent variables are present in the dataset that directly affect the values of the dependent time
series variable.
3. SARIMAX (Seasonal ARIMA with exogenous variables): SARIMAX extends SARIMA by also including exogenous variables. It is used for seasonal data where multiple independent variables are present in
the dataset.
4. VAR (Vector Autoregression): VAR is a model used when there are multiple time series variables; it can model several variables of the time series simultaneously.
5. Fractional ARIMA: This is a variant of ARIMA in which fractional differencing is allowed. For long-memory series, where long-term dependencies exist, standard integer differencing may not help,
and fractional ARIMA can be used instead.
There are some best practices that can be considered while working with the ARIMA model to enhance the performance and efficiency of the model.
Best Practices to Use ARIMA Models Efficiently
There are multiple time series algorithms, each of which works well on different kinds of datasets. The ARIMA model, in particular, can be used efficiently by following a few practices.
1. Data Preprocessing: Data preprocessing is one of the core steps while building a model. It should be performed well in order to get an accurate and reliable model.
2. Model Parameters: The appropriate best-fit order (p,d,q) should be selected as per the AIC score for the ARIMA models to get the best possible performing model.
3. Model Evaluation: Once the model has been trained, use performance metrics such as MAE, MSE, or RMSE to evaluate its performance on unseen data, and revise the model if needed.
4. Residual Analysis: The model's residuals should be analyzed after training. Their distribution plots and values should be checked to confirm that the residuals show no autocorrelation and are
approximately normally distributed.
5. Consider External Features: Sometimes factors outside the series itself affect the target variable and should be supplied to the model as additional inputs. In such cases, variants like ARIMAX or
SARIMAX can be used to include exogenous variables.
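As a small illustration of the evaluation step above (again illustrative, not from the article), here is how a hold-out forecast could be scored with MAE and RMSE using scikit-learn:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error, mean_squared_error
from statsmodels.tsa.arima.model import ARIMA

# Synthetic series, standing in for whatever data the model was trained on.
rng = np.random.default_rng(0)
y = pd.Series(50 + 0.5 * np.arange(100) + rng.normal(0, 2, 100),
              index=pd.date_range("2015-01-01", periods=100, freq="MS"))

train, test = y[:-12], y[-12:]  # hold out the last 12 points
pred = ARIMA(train, order=(2, 1, 2)).fit().forecast(steps=len(test))

mae = mean_absolute_error(test, pred)
rmse = np.sqrt(mean_squared_error(test, pred))
print(f"MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```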
In the expansive realm of time series forecasting, ARIMA models hold a distinguished place, offering an intricate blend of autoregressive (AR) and moving average (MA) processes. Over the course of
this article, we've journeyed through the underlying mechanics of ARIMA, elucidating the mathematical equations that give life to this powerful method.
From its foundational principles to the nuanced task of determining the optimal p, d, q values, with hands-on code examples, we've endeavoured to paint a comprehensive picture. Moreover, our
exploration extended to address both the strengths and limitations of ARIMA, ensuring a balanced perspective for our readers.
But beyond the technicalities, our aim has always been clear: to demystify ARIMA for both newcomers and those looking to strengthen their understanding. We hope that as you close this tab or bookmark
this page, you carry with you not just knowledge but confidence. A renewed zeal to harness ARIMA's capabilities in your next forecasting endeavor.
For every data enthusiast, beginner or seasoned, this guide stands as a testament to the magic and precision of ARIMA models. Until our next data-driven adventure, may your forecasts be accurate and
insights profound!
Frequently Asked Questions (FAQs) On ARIMA
1. What is ARIMA?
ARIMA stands for AutoRegressive Integrated Moving Average. It's a popular statistical method for time series forecasting.
2. What components make up the ARIMA model?
ARIMA consists of three components: AR (AutoRegressive), I (Integrated), and MA (Moving Average).
3. What does the "Integrated" in ARIMA mean?
Integrated refers to the process of differencing the data to make it stationary, which is a prerequisite for ARIMA modeling.
4. When should I use ARIMA for forecasting?
ARIMA is suitable for time series data that displays a consistent trend or seasonality and is stationary or can be made stationary through differencing.
5. How do I determine the ARIMA parameters (p, d, q)?
The parameters can be estimated using techniques like the ACF (Autocorrelation Function) and PACF (Partial Autocorrelation Function) plots.
6. Can ARIMA handle seasonality?
While basic ARIMA can't handle seasonality, its variant, Seasonal ARIMA (or SARIMA), is designed to model seasonal patterns.
7. How is ARIMA different from other time series forecasting methods?
ARIMA relies on the linear relationships and structure of the data, whereas methods like Exponential Smoothing or Prophet might handle non-linear trends or multiple seasonality better.
8. Does ARIMA provide point forecasts or interval forecasts?
While ARIMA primarily provides point forecasts, it can also be used to generate prediction intervals, indicating the uncertainty around forecasts.
9. What software or programming languages support ARIMA modeling?
ARIMA can be implemented in various software packages, including Python (with the `statsmodels` library) and R (with the `forecast` package).
10. Is ARIMA suitable for all time series datasets?
No, ARIMA might not be the best choice for time series with high volatility, non-linear trends, or non-continuous data points.
11. How do I evaluate the accuracy of my ARIMA forecasts?
Common metrics include Mean Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE).
12. Can ARIMA be combined with other forecasting methods?
Yes, ARIMA predictions can be combined with other models, techniques, or domain knowledge to refine or enhance forecasting results.
I hope you like this post. If you have any questions ? or want me to write an article on a specific topic? then feel free to comment below. | {"url":"https://dataaspirant.com/arima-time-series-forecasting/","timestamp":"2024-11-02T23:12:29Z","content_type":"application/xhtml+xml","content_length":"256201","record_id":"<urn:uuid:1c52d8e8-0e76-4e42-8109-75899072ea03>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00685.warc.gz"} |
Søren Galatius - Clay Mathematics Institute
Søren Galatius
Category: Research Fellows
Affiliation: University of Copenhagen
Søren Galatius received his PhD from Aarhus University in 2004 under the supervision of Ib Madsen. The focus of his research is in algebraic topology, especially the interplay between stable homotopy
theory and geometry. Søren was appointed as a Clay Research Fellow for a term of three years beginning September 2007. | {"url":"https://www.claymath.org/people/soren-galatius/","timestamp":"2024-11-06T15:28:08Z","content_type":"text/html","content_length":"84301","record_id":"<urn:uuid:1a484a95-c223-4ada-8c5d-f9ef854a401d>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00316.warc.gz"} |
Commitment scheme - Wikipedia
A commitment scheme is a cryptographic primitive that allows one to commit to a chosen value (or chosen statement) while keeping it hidden to others, with the ability to reveal the committed value
later.^[1] Commitment schemes are designed so that a party cannot change the value or statement after they have committed to it: that is, commitment schemes are binding. Commitment schemes have
important applications in a number of cryptographic protocols including secure coin flipping, zero-knowledge proofs, and secure computation.
A way to visualize a commitment scheme is to think of a sender as putting a message in a locked box, and giving the box to a receiver. The message in the box is hidden from the receiver, who cannot
open the lock themselves. Since the receiver has the box, the message inside cannot be changed—merely revealed if the sender chooses to give them the key at some later time.
Interactions in a commitment scheme take place in two phases:
1. the commit phase during which a value is chosen and committed to
2. the reveal phase during which the value is revealed by the sender, then the receiver verifies its authenticity
In the above metaphor, the commit phase is the sender putting the message in the box, and locking it. The reveal phase is the sender giving the key to the receiver, who uses it to open the box and
verify its contents. The locked box is the commitment, and the key is the proof.
In simple protocols, the commit phase consists of a single message from the sender to the receiver. This message is called the commitment. It is essential that the specific value chosen cannot be
extracted from the message by the receiver at that time (this is called the hiding property). A simple reveal phase would consist of a single message, the opening, from the sender to the receiver,
followed by a check performed by the receiver. The value chosen during the commit phase must be the only one that the sender can compute and that validates during the reveal phase (this is called the
binding property).
The concept of commitment schemes was perhaps first formalized by Gilles Brassard, David Chaum, and Claude Crépeau in 1988,^[2] as part of various zero-knowledge protocols for NP, based on various
types of commitment schemes.^[3]^[4] But the concept was used prior to that without being treated formally.^[5]^[6] The notion of commitments appeared earliest in works by Manuel Blum,^[7] Shimon
Even,^[8] and Adi Shamir et al.^[9] The terminology seems to have been originated by Blum,^[6] although commitment schemes can be interchangeably called bit commitment schemes—sometimes reserved for
the special case where the committed value is a bit. Earlier to that, commitment via one-way hash functions was considered, e.g., as part of, say, Lamport signature, the original one-time one-bit
signature scheme.
Suppose Alice and Bob want to resolve some dispute via coin flipping. If they are physically in the same place, a typical procedure might be:
1. Alice "calls" the coin flip,
2. Bob flips the coin,
3. If Alice's call is correct, she wins, otherwise Bob wins.
If Alice and Bob are not in the same place a problem arises. Once Alice has "called" the coin flip, Bob can stipulate the flip "results" to be whatever is most desirable for him. Similarly, if Alice
doesn't announce her "call" to Bob, after Bob flips the coin and announces the result, Alice can report that she called whatever result is most desirable for her. Alice and Bob can use commitments in
a procedure that will allow both to trust the outcome:
1. Alice "calls" the coin flip but only tells Bob a commitment to her call,
2. Bob flips the coin and reports the result,
3. Alice reveals what she committed to,
4. Bob verifies that Alice's call matches her commitment,
5. If Alice's revelation matches the coin result Bob reported, Alice wins.
For Bob to be able to skew the results to his favor, he must be able to understand the call hidden in Alice's commitment. If the commitment scheme is a good one, Bob cannot skew the results.
Similarly, Alice cannot affect the result if she cannot change the value she commits to.
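A toy realization of this protocol, using a salted SHA-256 hash as the commitment (a common construction, though the article does not prescribe any particular scheme), might look like this:

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, str]:
    """Return (commitment, opening). The opening is a random nonce kept secret until reveal."""
    opening = secrets.token_hex(16)
    commitment = hashlib.sha256((opening + value).encode()).hexdigest()
    return commitment, opening

def check_reveal(commitment: str, value: str, opening: str) -> bool:
    return hashlib.sha256((opening + value).encode()).hexdigest() == commitment

# Remote coin flip: Alice commits to her call before Bob flips.
alice_call = secrets.choice(["heads", "tails"])
commitment, opening = commit(alice_call)   # Alice -> Bob: commitment only

coin = secrets.choice(["heads", "tails"])  # Bob flips and announces the result

# Alice reveals; Bob verifies the opening against the earlier commitment.
assert check_reveal(commitment, alice_call, opening)
print("Alice wins" if alice_call == coin else "Bob wins")
```

In this sketch, hiding comes from the hash output revealing nothing useful about the call, and binding from the practical difficulty of finding a second (value, opening) pair with the same hash; both properties hold only computationally, under standard assumptions about the hash function.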
A real-life application of this problem exists, when people (often in media) commit to a decision or give an answer in a "sealed envelope", which is then opened later. "Let's find out if that's what
the candidate answered", for example on a game show, can serve as a model of this system.
One particular motivating example is the use of commitment schemes in zero-knowledge proofs. Commitments are used in zero-knowledge proofs for two main purposes: first, to allow the prover to
participate in "cut and choose" proofs where the verifier will be presented with a choice of what to learn, and the prover will reveal only what corresponds to the verifier's choice. Commitment
schemes allow the prover to specify all the information in advance, and only reveal what should be revealed later in the proof.^[10] Second, commitments are also used in zero-knowledge proofs by the
verifier, who will often specify their choices ahead of time in a commitment. This allows zero-knowledge proofs to be composed in parallel without revealing additional information to the prover.^[11]
The Lamport signature scheme is a digital signature system that relies on maintaining two sets of secret data packets, publishing verifiable hashes of the data packets, and then selectively revealing
partial secret data packets in a manner that conforms specifically to the data to be signed. In this way, the prior public commitment to the secret values becomes a critical part of the functioning
of the system.
Because the Lamport signature system cannot be used more than once, a system to combine many Lamport key-sets under a single public value that can be tied to a person and verified by others was
developed. This system uses trees of hashes to compress many published Lamport-key-commitment sets into a single hash value that can be associated with the prospective author of later-verified data.
Another important application of commitments is in verifiable secret sharing, a critical building block of secure multiparty computation. In a secret sharing scheme, each of several parties receive
"shares" of a value that is meant to be hidden from everyone. If enough parties get together, their shares can be used to reconstruct the secret, but even a malicious cabal of insufficient size
should learn nothing. Secret sharing is at the root of many protocols for secure computation: in order to securely compute a function of some shared input, the secret shares are manipulated instead.
However, if shares are to be generated by malicious parties, it may be important that those shares can be checked for correctness. In a verifiable secret sharing scheme, the distribution of a secret
is accompanied by commitments to the individual shares. The commitments reveal nothing that can help a dishonest cabal, but the shares allow each individual party to check to see if their shares are
Formal definitions of commitment schemes vary strongly in notation and in flavour. The first such flavour is whether the commitment scheme provides perfect or computational security with respect to
the hiding or binding properties. Another such flavour is whether the commitment is interactive, i.e. whether both the commit phase and the reveal phase can be seen as being executed by a
cryptographic protocol or whether they are non-interactive, consisting of two algorithms Commit and CheckReveal. In the latter case CheckReveal can often be seen as a derandomised version of Commit,
with the randomness used by Commit constituting the opening information.
If the commitment C to a value x is computed as C:=Commit(x,open) with open being the randomness used for computing the commitment, then CheckReveal (C,x,open) reduces to simply verifying the
equation C=Commit (x,open).
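In code, that non-interactive pattern is literally an equality check. Reusing the same hash-based construction as in the coin-flipping sketch above (one possible instantiation, not mandated by the definition):

```python
import hashlib, secrets

def Commit(x: bytes, opening: bytes) -> bytes:
    return hashlib.sha256(opening + x).digest()

def CheckReveal(C: bytes, x: bytes, opening: bytes) -> bool:
    # Derandomised re-run of Commit: accept iff C = Commit(x, opening).
    return C == Commit(x, opening)

opening = secrets.token_bytes(16)
C = Commit(b"my secret bid", opening)
assert CheckReveal(C, b"my secret bid", opening)
assert not CheckReveal(C, b"another bid", opening)
```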
Using this notation and some knowledge about mathematical functions and probability theory we formalise different versions of the binding and hiding properties of commitments. The two most important
combinations of these properties are perfectly binding and computationally hiding commitment schemes and computationally binding and perfectly hiding commitment schemes. Note that no commitment
scheme can be at the same time perfectly binding and perfectly hiding – a computationally unbounded adversary can simply generate Commit(x,open) for every value of x and open until finding a pair
that outputs C, and in a perfectly binding scheme this uniquely identifies x.
Let open be chosen from a set of size ${\displaystyle 2^{k}}$, i.e., it can be represented as a k bit string, and let ${\displaystyle {\text{Commit}}_{k}}$ be the corresponding commitment scheme. As
the size of k determines the security of the commitment scheme it is called the security parameter.
Then for all non-uniform probabilistic polynomial time algorithms that output ${\displaystyle x,x'}$ and ${\displaystyle open,open'}$ of increasing length k, the probability that ${\displaystyle x\neq x'}$ and ${\displaystyle {\text{Commit}}_{k}(x,open)={\text{Commit}}_{k}(x',open')}$ is a negligible function in k.
This is a form of asymptotic analysis. It is also possible to state the same requirement using concrete security: A commitment scheme Commit is ${\displaystyle (t,\epsilon )}$ secure, if for all
algorithms that run in time t and output ${\displaystyle x,x',open,open'}$ the probability that ${\displaystyle x\neq x'}$ and ${\displaystyle {\text{Commit}}(x,open)={\text{Commit}}(x',open')}$ is at
most ${\displaystyle \epsilon }$.
Let ${\displaystyle U_{k}}$ be the uniform distribution over the ${\displaystyle 2^{k}}$ opening values for security parameter k. A commitment scheme is respectively perfect, statistical, or
computational hiding, if for all ${\displaystyle x\neq x'}$ the probability ensembles ${\displaystyle \{{\text{Commit}}_{k}(x,U_{k})\}_{k\in \mathbb {N} }}$ and ${\displaystyle \{{\text{Commit}}_{k}(x',U_{k})\}_{k\in \mathbb {N} }}$ are equal, statistically close, or computationally indistinguishable.
It is impossible to realize commitment schemes in the universal composability (UC) framework. The reason is that UC commitment has to be extractable, as shown by Canetti and Fischlin^[13] and
explained below.
The ideal commitment functionality, denoted here by F, works roughly as follows. Committer C sends value m to F, which stores it and sends "receipt" to receiver R. Later, C sends "open" to F, which
sends m to R.
Now, assume we have a protocol π that realizes this functionality. Suppose that the committer C is corrupted. In the UC framework, that essentially means that C is now controlled by the environment,
which attempts to distinguish protocol execution from the ideal process. Consider an environment that chooses a message m and then tells C to act as prescribed by π, as if it has committed to m. Note
here that in order to realize F, the receiver must, after receiving a commitment, output a message "receipt". After the environment sees this message, it tells C to open the commitment.
The protocol is only secure if this scenario is indistinguishable from the ideal case, where the functionality interacts with a simulator S. Here, S has control of C. In particular, whenever R
outputs "receipt", F has to do likewise. The only way to do that is for S to tell C to send a value to F. However, note that by this point, m is not known to S. Hence, when the commitment is opened
during protocol execution, it is unlikely that F will open to m, unless S can extract m from the messages it received from the environment before R outputs the receipt.
However a protocol that is extractable in this sense cannot be statistically hiding. Suppose such a simulator S exists. Now consider an environment that, instead of corrupting C, corrupts R instead.
Additionally it runs a copy of S. Messages received from C are fed into S, and replies from S are forwarded to C.
The environment initially tells C to commit to a message m. At some point in the interaction, S will commit to a value m′. This message is handed to R, who outputs m′. Note that by assumption we have
m' = m with high probability. Now in the ideal process the simulator has to come up with m. But this is impossible, because at this point the commitment has not been opened yet, so the only message R
can have received in the ideal process is a "receipt" message. We thus have a contradiction.
A commitment scheme can either be perfectly binding (it is impossible for Alice to alter her commitment after she has made it, even if she has unbounded computational resources); or perfectly
concealing (it is impossible for Bob to find out the commitment without Alice revealing it, even if he has unbounded computational resources); or formulated as an instance-dependent commitment
scheme, which is either hiding or binding depending on the solution to another problem.^[14]^[15] A commitment scheme cannot be both perfectly hiding and perfectly binding at the same time.
Bit-commitment schemes are trivial to construct in the random oracle model. Given a hash function H with a 3k bit output, to commit the k-bit message m, Alice generates a random k bit string R and
sends Bob H(R || m). The probability that any R′, m′ exist where m′ ≠ m such that H(R′ || m′) = H(R || m) is ≈ 2^−k, but to test any guess at the message m Bob will need to make 2^k (for an incorrect
guess) or 2^(k−1) (on average, for a correct guess) queries to the random oracle.^[16] Note that earlier schemes based on hash functions can essentially be thought of as schemes based on an idealization of these hash functions as a random oracle.
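A minimal Python sketch of this hash-based commitment, with SHA-256 standing in for the random oracle (the function names commit and check_reveal are illustrative; in practice the encoding of R and m should be unambiguous, e.g. a fixed-length R as here):

import hashlib, hmac, os

def commit(message: bytes):
    r = os.urandom(32)                            # the random string R
    c = hashlib.sha256(r + message).digest()      # C = H(R || m)
    return c, r

def check_reveal(c: bytes, message: bytes, r: bytes) -> bool:
    return hmac.compare_digest(c, hashlib.sha256(r + message).digest())

c, r = commit(b"my bid is 100")
# ... later, Alice reveals (message, r) and Bob checks:
assert check_reveal(c, b"my bid is 100", r)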
One can create a bit-commitment scheme from any one-way function that is injective. The scheme relies on the fact that every one-way function can be modified (via the Goldreich-Levin theorem) to
possess a computationally hard-core predicate (while retaining the injective property).
Let f be an injective one-way function, with h a hard-core predicate. Then to commit to a bit b Alice picks a random input x and sends the triple
${\displaystyle (h,f(x),b\oplus h(x))}$
to Bob, where ${\displaystyle \oplus }$ denotes XOR, i.e., bitwise addition modulo 2. To decommit, Alice simply sends x to Bob. Bob verifies by computing f(x) and comparing to the committed value.
This scheme is concealing because for Bob to recover b he must recover h(x). Since h is a computationally hard-core predicate, recovering h(x) from f(x) with probability greater than one-half is as
hard as inverting f. Perfect binding follows from the fact that f is injective and thus f(x) has exactly one preimage.
Note that since we do not know how to construct a one-way permutation from an arbitrary one-way function, the construction that follows reduces the strength of the cryptographic assumption necessary to construct a bit-commitment protocol.
In 1991 Moni Naor showed how to create a bit-commitment scheme from a cryptographically secure pseudorandom number generator.^[17] The construction is as follows. If G is a pseudo-random generator
such that G takes n bits to 3n bits, then if Alice wants to commit to a bit b:
• Bob selects a random 3n-bit vector R and sends R to Alice.
• Alice selects a random n-bit vector Y and computes the 3n-bit vector G(Y).
• If b=1 Alice sends G(Y) to Bob, otherwise she sends the bitwise exclusive-or of G(Y) and R to Bob.
To decommit Alice sends Y to Bob, who can then check whether he initially received G(Y) or G(Y) ${\displaystyle \oplus }$ R.
This scheme is statistically binding, meaning that even if Alice is computationally unbounded she cannot cheat with probability greater than 2^−n. For Alice to cheat, she would need to find a Y',
such that G(Y') = G(Y) ${\displaystyle \oplus }$ R. If she could find such a value, she could decommit by sending the truth and Y, or send the opposite answer and Y'. However, G(Y) and G(Y') are only
able to produce 2^n possible values each (that's 2^2n) while R is picked out of 2^3n values. She does not pick R, so there is a 2^2n/2^3n = 2^−n probability that a Y' satisfying the equation required
to cheat will exist.
The concealing property follows from a standard reduction, if Bob can tell whether Alice committed to a zero or one, he can also distinguish the output of the pseudo-random generator G from
true-random, which contradicts the cryptographic security of G.
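The following Python sketch mirrors Naor's protocol at the byte level. The function G below is only a stand-in expander built from SHA-256, not a vetted pseudo-random generator, and the parameter n (here in bytes rather than bits) is illustrative:

import hashlib, secrets

n = 16                       # security parameter (in bytes here, for simplicity)

def G(seed: bytes) -> bytes:
    # stand-in pseudo-random generator expanding n bytes to 3n bytes (NOT a vetted PRG)
    out = b"".join(hashlib.sha256(seed + bytes([i])).digest() for i in range(3))
    return out[: 3 * n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Bob's move: a random 3n-byte challenge R
R = secrets.token_bytes(3 * n)

def commit(b: int, R: bytes):
    Y = secrets.token_bytes(n)
    c = G(Y) if b == 1 else xor(G(Y), R)
    return c, Y

def check_reveal(c: bytes, R: bytes, Y: bytes) -> int:
    if c == G(Y):
        return 1
    if c == xor(G(Y), R):
        return 0
    raise ValueError("invalid opening")

c, Y = commit(1, R)
assert check_reveal(c, R, Y) == 1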
Alice chooses a group of prime order p, with multiplicative generator g.
Alice randomly picks a secret value x from 0 to p − 1 to commit to and calculates c = g^x and publishes c. The discrete logarithm problem dictates that from c, it is computationally infeasible to
compute x, so under this assumption, Bob cannot compute x. On the other hand, Alice cannot compute an x′ ≠ x such that g^x′ = c, so the scheme is binding.
This scheme isn't perfectly concealing, as someone could recover the committed value x if they manage to solve the discrete logarithm problem. In fact, this scheme isn't hiding at all with respect to the
standard hiding game, where an adversary should be unable to guess which of two messages of their choosing was committed to – similar to the IND-CPA game. One consequence of this is that if the space of
possible values of x is small, then an attacker could simply try them all and the commitment would not be hiding.
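A toy version of this commitment in Python; the tiny prime p, the generator g and the sample value x are illustrative only — a real deployment needs a large, carefully chosen group, and as noted above the scheme is not hiding when x ranges over a small set:

p = 467                      # toy prime modulus; far too small for real use
g = 2                        # generator of the multiplicative group mod p

def commit(x: int) -> int:
    return pow(g, x, p)      # c = g^x mod p

x = 137                      # the secret value being committed to
c = commit(x)
# to reveal, Alice sends x; anyone can then re-check the commitment:
assert c == pow(g, x, p)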
A better example of a perfectly binding commitment scheme is one where the commitment is the encryption of x under a semantically secure, public-key encryption scheme with perfect completeness, and
the decommitment is the string of random bits used to encrypt x. An example of an information-theoretically hiding commitment scheme is the Pedersen commitment scheme,^[18] which is computationally
binding under the discrete logarithm assumption.^[19] In addition to the scheme above, it uses another generator h of the prime-order group and a random number r, where nobody should know the discrete logarithm of h with respect to g. The commitment is set ${\displaystyle c=g^{m}h^{r}}$.
These constructions are tightly related to and based on the algebraic properties of the underlying groups, and the notion originally seemed to be very much tied to this algebra. However, it was
shown that basing statistically binding commitment schemes on general unstructured assumptions is possible, via the notion of interactive hashing for commitments from general complexity assumptions
(specifically and originally, based on any one-way permutation), as in ^[21].
Alice selects ${\displaystyle N}$ such that ${\displaystyle N=p\cdot q}$, where ${\displaystyle p}$ and ${\displaystyle q}$ are large secret prime numbers. Additionally, she selects a prime ${\displaystyle e}$ such that ${\displaystyle e>N^{2}}$ and ${\displaystyle \gcd(e,\phi (N^{2}))=1}$. Alice then computes a public number ${\displaystyle g_{m}}$ as an element of maximum order in the ${\displaystyle \mathbb {Z} _{N^{2}}^{*}}$ group.^[22] Finally, Alice commits to her secret ${\displaystyle m}$ by first generating a random number ${\displaystyle r}$ from ${\displaystyle \mathbb {Z} _{N^{2}}^{*}}$ and then by computing ${\displaystyle c=m^{e}g_{m}^{r}}$.
The security of the above commitment relies on the hardness of the RSA problem and has perfect hiding and computational binding.^[23]
The Pedersen commitment scheme introduces an interesting homomorphic property that allows performing addition between two commitments. More specifically, given two messages ${\displaystyle m_{1}}$
and ${\displaystyle m_{2}}$ and randomness ${\displaystyle r_{1}}$ and ${\displaystyle r_{2}}$, respectively, it is possible to generate a new commitment such that: ${\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=C(m_{1}+m_{2},r_{1}+r_{2})}$. Formally:
${\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=g^{m_{1}}h^{r_{1}}\cdot g^{m_{2}}h^{r_{2}}=g^{m_{1}+m_{2}}h^{r_{1}+r_{2}}=C(m_{1}+m_{2},r_{1}+r_{2})}$
To open the above Pedersen commitment to a new message ${\displaystyle m_{1}+m_{2}}$, the randomness ${\displaystyle r_{1}}$ and ${\displaystyle r_{2}}$ has to be added.
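A small Python sketch of this additive homomorphism. The parameters p, q, g and h below are toy values chosen only so the congruences can be checked directly, and g and h are assumed to be generators of the order-q subgroup whose relative discrete logarithm nobody knows:

import random

p, q = 467, 233              # toy prime-order subgroup parameters (insecure, illustration only)
g, h = 4, 9                  # assumed generators of the order-q subgroup; log_g(h) must stay unknown

def commit(m: int, r: int) -> int:
    return pow(g, m, p) * pow(h, r, p) % p

m1, r1 = 10, random.randrange(q)
m2, r2 = 32, random.randrange(q)
c1, c2 = commit(m1, r1), commit(m2, r2)

# the product of the commitments opens to the sum of the messages with randomness r1 + r2
assert c1 * c2 % p == commit((m1 + m2) % q, (r1 + r2) % q)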
Similarly, the RSA-based commitment mentioned above has a homomorphic property with respect to the multiplication operation. Given two messages ${\displaystyle m_{1}}$ and ${\displaystyle m_{2}}$
with randomness ${\displaystyle r_{1}}$ and ${\displaystyle r_{2}}$, respectively, one can compute: ${\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=C(m_{1}\cdot m_{2},r_{1}+r_{2})}$. Formally: ${\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=m_{1}^{e}g_{m}^{r_{1}}\cdot m_{2}^{e}g_{m}^{r_{2}}=(m_{1}\cdot m_{2})^{e}g_{m}^{r_{1}+r_{2}}=C(m_{1}\cdot m_{2},r_{1}+r_{2})}$.
To open the above commitment to a new message ${\displaystyle m_{1}\cdot m_{2}}$, the randomness ${\displaystyle r_{1}}$ and ${\displaystyle r_{2}}$ has to be added. This newly generated commitment
is distributed similarly to a new commitment to ${\displaystyle m_{1}\cdot m_{2}}$.
Some commitment schemes permit a proof to be given of only a portion of the committed value. In these schemes, the secret value ${\displaystyle X}$ is a vector of many individually separable values.
${\displaystyle X=(x_{1},x_{2},...,x_{n})}$
The commitment ${\displaystyle C}$ is computed from ${\displaystyle X}$ in the commit phase. Normally, in the reveal phase, the prover would reveal all of ${\displaystyle X}$ and some additional
proof data (such as ${\displaystyle R}$ in simple bit-commitment). Instead, the prover is able to reveal any single value from the ${\displaystyle X}$ vector, and create an efficient proof that it is
the authentic ${\displaystyle i}$th element of the original vector that created the commitment ${\displaystyle C}$. The proof does not require any values of ${\displaystyle X}$ other than ${\displaystyle x_{i}}$ to be revealed, and it is impossible to create valid proofs that reveal different values for any of the ${\displaystyle x_{i}}$ than the true one.^[24]
Vector hashing is a naive vector commitment partial reveal scheme based on bit-commitment. Values ${\displaystyle m_{1},m_{2},...m_{n}}$ are chosen randomly. Individual commitments are created by
hashing ${\displaystyle y_{1}=H(x_{1}||m_{1}),y_{2}=H(x_{2}||m_{2}),...}$. The overall commitment is computed as
${\displaystyle C=H(y_{1}||y_{2}||...||y_{n})}$
In order to prove one element of the vector ${\displaystyle X}$, the prover reveals the values
${\displaystyle (i,y_{1},y_{2},...,y_{i-1},x_{i},m_{i},y_{i+1},...,y_{n})}$
The verifier is able to compute ${\displaystyle y_{i}}$ from ${\displaystyle x_{i}}$ and ${\displaystyle m_{i}}$, and then is able to verify that the hash of all ${\displaystyle y}$ values is the
commitment ${\displaystyle C}$. Unfortunately the proof is ${\displaystyle O(n)}$ in size and verification time. Alternately, if ${\displaystyle C}$ is the set of all ${\displaystyle y}$ values, then
the commitment is ${\displaystyle O(n)}$ in size, and the proof is ${\displaystyle O(1)}$ in size and verification time. Either way, the commitment or the proof scales with ${\displaystyle O(n)}$
which is not optimal.
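A short Python sketch of vector hashing with a single-element reveal, using SHA-256 as the hash; the function names and the 16-byte blinding values m_i are illustrative (the blinding keeps unrevealed elements hidden):

import hashlib, os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def commit_vector(xs):
    ms = [os.urandom(16) for _ in xs]                 # per-element blinding values
    ys = [H(x + m) for x, m in zip(xs, ms)]
    C = H(b"".join(ys))                               # single overall commitment
    return C, ys, ms

def prove(i, xs, ys, ms):
    # reveal x_i, m_i and the other y values; everything else stays hidden
    return (i, xs[i], ms[i], [y for j, y in enumerate(ys) if j != i])

def verify(C, proof):
    i, x_i, m_i, other_ys = proof
    ys = other_ys[:i] + [H(x_i + m_i)] + other_ys[i:]
    return H(b"".join(ys)) == C

xs = [b"alpha", b"bravo", b"charlie", b"delta"]
C, ys, ms = commit_vector(xs)
assert verify(C, prove(2, xs, ys, ms))

Note that, as stated above, the proof still carries all the other y values, so it grows linearly with the length of the vector.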
A common example of a practical partial reveal scheme is a Merkle tree, in which a binary hash tree is created of the elements of ${\displaystyle X}$. This scheme creates commitments that are ${\displaystyle O(1)}$ in size, and proofs that are ${\displaystyle O(\log _{2}{n})}$ in size and verification time. The root hash of the tree is the commitment ${\displaystyle C}$. To prove that a
revealed ${\displaystyle x_{i}}$ is part of the original tree, only ${\displaystyle \log _{2}{n}}$ hash values from the tree, one from each level, must be revealed as the proof. The verifier is able
to follow the path from the claimed leaf node all the way up to the root, hashing in the sibling nodes at each level, and eventually arriving at a root node value that must equal ${\displaystyle C}$.
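A compact Python sketch of a Merkle-tree commitment with a log-sized membership proof; it assumes the number of leaves is a power of two and uses SHA-256, and a production implementation would also domain-separate leaf and internal hashes:

import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    # leaves: already-hashed leaf values, length assumed to be a power of two
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels                          # levels[-1][0] is the root, i.e. the commitment C

def prove(levels, i):
    # one sibling hash per level, bottom to top
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])
        i //= 2
    return path

def verify(root, leaf, i, path):
    node = leaf
    for sibling in path:
        node = H(node + sibling) if i % 2 == 0 else H(sibling + node)
        i //= 2
    return node == root

data = [b"a", b"b", b"c", b"d", b"e", b"f", b"g", b"h"]
leaves = [H(x) for x in data]
levels = build_tree(leaves)
root = levels[-1][0]
proof = prove(levels, 5)
assert verify(root, leaves[5], 5, proof)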
A Kate-Zaverucha-Goldberg commitment uses pairing-based cryptography to build a partial reveal scheme with ${\displaystyle O(1)}$ commitment sizes, proof sizes, and proof verification time. In other
words, as ${\displaystyle n}$, the number of values in ${\displaystyle X}$, increases, the commitments and proofs do not get larger, and the proofs do not take any more effort to verify.
A KZG commitment requires a predetermined set of parameters to create a pairing, and a trusted trapdoor element. For example, a Tate pairing can be used. Assume that ${\displaystyle \mathbb {G} _{1},
\mathbb {G} _{2}}$ are the additive groups, and ${\displaystyle \mathbb {G} _{T}}$ is the multiplicative group of the pairing. In other words, the pairing is the map ${\displaystyle e:\mathbb {G} _
{1}\times \mathbb {G} _{2}\rightarrow \mathbb {G} _{T}}$. Let ${\displaystyle t\in \mathbb {F} _{p}}$ be the trapdoor element (if ${\displaystyle p}$ is the prime order of ${\displaystyle \mathbb {G}
_{1}}$ and ${\displaystyle \mathbb {G} _{2}}$), and let ${\displaystyle G}$ and ${\displaystyle H}$ be the generators of ${\displaystyle \mathbb {G} _{1}}$ and ${\displaystyle \mathbb {G} _{2}}$
respectively. As part of the parameter setup, we assume that ${\displaystyle G\cdot t^{i}}$ and ${\displaystyle H\cdot t^{i}}$ are known and shared values for arbitrarily many positive integer values
of ${\displaystyle i}$, while the trapdoor value ${\displaystyle t}$ itself is discarded and known to no one.
A KZG commitment reformulates the vector of values to be committed as a polynomial. First, we calculate a polynomial such that ${\displaystyle p(i)=x_{i}}$ for all values of ${\displaystyle x_{i}}$
in our vector. Lagrange interpolation allows us to compute that polynomial
${\displaystyle p(x)=\sum _{i=0}^{n-1}x_{i}\prod _{0\leq j<n,j\neq i}{\frac {x-j}{i-j}}}$
Under this formulation, the polynomial now encodes the vector, where ${\displaystyle p(0)=x_{0},p(1)=x_{1},...}$. Let ${\displaystyle p_{0},p_{1},...,p_{n-1}}$ be the coefficients of ${\displaystyle
p}$, such that ${\textstyle p(x)=\sum _{i=0}^{n-1}p_{i}x^{i}}$. The commitment is calculated as
${\displaystyle C=\sum _{i=0}^{n-1}p_{i}Gt^{i}}$
This is computed simply as a dot product between the predetermined values ${\displaystyle G\cdot t^{i}}$ and the polynomial coefficients ${\displaystyle p_{i}}$. Since ${\displaystyle \mathbb {G} _
{1}}$ is an additive group with associativity and commutativity, ${\displaystyle C}$ is equal to simply ${\displaystyle G\cdot p(t)}$, since all the additions and multiplications with ${\displaystyle
G}$ can be distributed out of the evaluation. Since the trapdoor value ${\displaystyle t}$ is unknown, the commitment ${\displaystyle C}$ is essentially the polynomial evaluated at a number known to
no one, with the outcome obfuscated into an opaque element of ${\displaystyle \mathbb {G} _{1}}$.
A KZG proof must demonstrate that the revealed data is the authentic value of ${\displaystyle x_{i}}$ when ${\displaystyle C}$ was computed. Let ${\displaystyle y=x_{i}}$, the revealed value we must
prove. Since the vector of ${\displaystyle x_{i}}$ was reformulated into a polynomial, we really need to prove that the polynomial ${\displaystyle p}$, when evaluated at ${\displaystyle i}$, takes on
the value ${\displaystyle y}$. Simply, we just need to prove that ${\displaystyle p(i)=y}$. We will do this by demonstrating that subtracting ${\displaystyle y}$ from ${\displaystyle p}$ yields a
root at ${\displaystyle i}$. Define the polynomial ${\displaystyle q}$ as
${\displaystyle q(x)={\frac {p(x)-y}{x-i}}}$
This polynomial is itself the proof that ${\displaystyle p(i)=y}$, because if ${\displaystyle q}$ exists, then ${\displaystyle p(x)-y}$ is divisible by ${\displaystyle x-i}$, meaning it has a root at
${\displaystyle i}$, so ${\displaystyle p(i)-y=0}$ (or, in other words, ${\displaystyle p(i)=y}$). The KZG proof will demonstrate that ${\displaystyle q}$ exists and has this property.
The prover computes ${\displaystyle q}$ through the above polynomial division, then calculates the KZG proof value ${\displaystyle \pi }$
${\displaystyle \pi =\sum _{i=0}^{n-1}q_{i}Gt^{i}}$
This is equal to ${\displaystyle G\cdot q(t)}$, as above. In other words, the proof value is the polynomial ${\displaystyle q}$ again evaluated at the trapdoor value ${\displaystyle t}$, hidden in
the generator ${\displaystyle G}$ of ${\displaystyle \mathbb {G} _{1}}$.
This computation is only possible if the above polynomials were evenly divisible, because in that case the quotient ${\displaystyle q}$ is a polynomial, not a rational function. Due to the
construction of the trapdoor, it is not possible to evaluate a rational function at the trapdoor value, only to evaluate a polynomial using linear combinations of the precomputed known constants of ${\displaystyle G\cdot t^{i}}$. This is why it is impossible to create a proof for an incorrect value of ${\displaystyle x_{i}}$.
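The polynomial bookkeeping behind these steps can be checked directly over a toy prime field. The Python sketch below performs the interpolation, the subtraction of y and the division by (x − i); it is not a KZG implementation — the trapdoor t is written out in the clear purely so the identity q(t)(t − i) = p(t) − y can be tested, whereas the real scheme only ever touches t through the published points G·t^i and H·t^i and the pairing:

P = 2**61 - 1                 # prime modulus for a toy field (real KZG works in pairing groups)

def poly_mul(a, b):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % P
    return res

def poly_eval(poly, x):
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % P
    return acc

def interpolate(values):
    # coefficients (constant term first) of the polynomial p with p(i) = values[i]
    n = len(values)
    coeffs = [0] * n
    for i, y in enumerate(values):
        basis = [1]
        denom = 1
        for j in range(n):
            if j != i:
                basis = poly_mul(basis, [(-j) % P, 1])   # multiply by (x - j)
                denom = denom * (i - j) % P
        scale = y * pow(denom, P - 2, P) % P
        coeffs = [(c + scale * b) % P for c, b in zip(coeffs, basis)]
    return coeffs

def divide_by_linear(poly, a):
    # synthetic division by (x - a); the remainder must be zero for a valid proof
    q = [0] * (len(poly) - 1)
    carry = 0
    for k in range(len(poly) - 1, 0, -1):
        carry = (poly[k] + carry * a) % P
        q[k - 1] = carry
    return q, (poly[0] + carry * a) % P

xs = [3, 1, 4, 1, 5]                       # the committed vector, so p(i) = xs[i]
p_coeffs = interpolate(xs)
i, y = 2, xs[2]                            # claim: the entry at position 2 equals 4
shifted = p_coeffs[:]
shifted[0] = (shifted[0] - y) % P          # the polynomial p(x) - y
q_coeffs, rem = divide_by_linear(shifted, i)
assert rem == 0                            # divisibility by (x - i) is exactly the claim p(i) = y
t = 123456789                              # stand-in trapdoor; in real KZG nobody knows t
assert poly_eval(q_coeffs, t) * (t - i) % P == (poly_eval(p_coeffs, t) - y) % P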
To verify the proof, the bilinear map of the pairing is used to show that the proof value ${\displaystyle \pi }$ summarizes a real polynomial ${\displaystyle q}$ that demonstrates the desired
property, which is that ${\displaystyle p(x)-y}$ was evenly divided by ${\displaystyle x-i}$. The verification computation checks the equality
${\displaystyle e(\pi ,H\cdot t-H\cdot i)\ {\stackrel {?}{=}}\ e(C-G\cdot y,H)}$
where ${\displaystyle e}$ is the bilinear map function as above. ${\displaystyle H\cdot t}$ is a precomputed constant, ${\displaystyle H\cdot i}$ is computed based on ${\displaystyle i}$.
By rewriting the computation in the pairing group ${\displaystyle \mathbb {G} _{T}}$, substituting in ${\displaystyle \pi =q(t)\cdot G}$ and ${\displaystyle C=p(t)\cdot G}$, and letting ${\displaystyle \tau (x)=e(G,H)^{x}}$ be a helper function for lifting into the pairing group, the proof verification is more clear.
${\displaystyle e(\pi ,H\cdot t-H\cdot i)=e(C-G\cdot y,H)}$
${\displaystyle e(G\cdot q(t),H\cdot t-H\cdot i)=e(G\cdot p(t)-G\cdot y,H)}$
${\displaystyle e(G\cdot q(t),H\cdot (t-i))=e(G\cdot (p(t)-y),H)}$
${\displaystyle e(G,H)^{q(t)\cdot (t-i)}=e(G,H)^{p(t)-y}}$
${\displaystyle \tau (q(t)\cdot (t-i))=\tau (p(t)-y)}$
Assuming that the bilinear map is validly constructed, this demonstrates that ${\displaystyle q(x)(x-i)=p(x)-y}$, without the validator knowing what ${\displaystyle p}$ or ${\displaystyle q}$ are.
The validator can be assured of this because if ${\displaystyle \tau (q(t)\cdot (t-i))=\tau (p(t)-y)}$, then the polynomials evaluate to the same output at the trapdoor value ${\displaystyle x=t}$.
This demonstrates the polynomials are identical, because, if the parameters were validly constructed, the trapdoor value is known to no one, meaning that engineering a polynomial to have a specific
value at the trapdoor is impossible (according to the Schwartz–Zippel lemma). If ${\displaystyle q(x)(x-i)=p(x)-y}$ is now verified to be true, then ${\displaystyle q}$ is verified to exist,
therefore ${\displaystyle p(x)-y}$ must be polynomial-divisible by ${\displaystyle (x-i)}$, so ${\displaystyle p(i)-y=0}$ due to the factor theorem. This proves that the ${\displaystyle i}$th value
of the committed vector must have equaled ${\displaystyle y}$, since that is the output of evaluating the committed polynomial at ${\displaystyle i}$.
Additionally, a KZG commitment can be extended to prove the values of any arbitrary ${\displaystyle k}$ values of ${\displaystyle X}$ (not just one value), with the proof size remaining ${\displaystyle O(1)}$, but the proof verification time scales with ${\displaystyle O(k)}$. The proof is the same, but instead of subtracting a constant ${\displaystyle y}$, we subtract a polynomial
that causes multiple roots, at all the locations we want to prove, and instead of dividing by ${\displaystyle x-i}$ we divide by ${\textstyle \prod _{i}x-i}$ for those same locations.^[26]
It is an interesting question in quantum cryptography if unconditionally secure bit commitment protocols exist on the quantum level, that is, protocols which are (at least asymptotically) binding and
concealing even if there are no restrictions on the computational resources. One could hope that there might be a way to exploit the intrinsic properties of quantum mechanics, as in the protocols for
unconditionally secure key distribution.
However, this is impossible, as Dominic Mayers showed in 1996 (see^[27] for the original proof). Any such protocol can be reduced to a protocol where the system is in one of two pure states after the
commitment phase, depending on the bit Alice wants to commit. If the protocol is unconditionally concealing, then Alice can unitarily transform these states into each other using the properties of
the Schmidt decomposition, effectively defeating the binding property.
One subtle assumption of the proof is that the commit phase must be finished at some point in time. This leaves room for protocols that require a continuing information flow until the bit is unveiled
or the protocol is cancelled, in which case it is not binding anymore.^[28] More generally, Mayers' proof applies only to protocols that exploit quantum physics but not special relativity. Kent has
shown that there exist unconditionally secure protocols for bit commitment that exploit the principle of special relativity stating that information cannot travel faster than light.^[29]
Physical unclonable functions (PUFs) rely on the use of a physical key with internal randomness, which is hard to clone or to emulate. Electronic, optical and other types of PUFs^[30] have been
discussed extensively in the literature, in connection with their potential cryptographic applications including commitment schemes.^[31]^[32] | {"url":"https://hyperlinked.wiki/wiki.php?slug=Commitment_scheme","timestamp":"2024-11-12T20:22:42Z","content_type":"text/html","content_length":"366463","record_id":"<urn:uuid:8fc51403-b635-47e4-9efe-733a5e5e7c40>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00428.warc.gz"} |
Following some email exchanges I present some further thoughts on the Hyperspin Eight Dimensional hypothesis of fundamental particles. Fundamental particle physics strikes me as important for several
reasons, firstly because our current incomplete understanding of the subject may conceal knowledge that may enable us to develop new technologies such as starships. Secondly because we cannot hope to
understand cosmology completely until we have understood the universe at the quantum level as well. Thirdly because one would expect to find evidence for the metaphysics implied by magic showing up
at the basal levels of reality. In this case, evidence for the existence of three dimensional time, and a small overall positive spacetime curvature.
Some supplementary thoughts to the original HD8 paper on this site:
1) The Euclidean spatial axes s1, s2, s3 used to denote particle spins do not imply any sort of preferential or global system of coordinates. They denote directions orthogonal to the direction of the
curvature axes s4 and t4. (The same goes for the three temporal axes t1, t2, t3.)
2) The direction of curvature axes s4 and t4 corresponds to locally defined gravitational/acceleration gradients (equivalence principle). All particle interactions imply an acceleration of some kind,
and its direction defines the curvature axes.
3) Field effects do not depend on the exchange of so called ‘virtual’ bosons. All ‘field’ effects depend on various types of spacetime curvature induced by particle spins in various dimensions. Thus
for example electrostatic and magnetic fields arise from exotic forms of spacetime curvature, not from the exchange of virtual photons. They can have repulsive or attractive effects depending on
whether the spins match or not.
4) Real bosons arise only from the acceleration of fermions and they only have repulsive effects. (For example the mutual recoil of electrons exchanging real photons.)
5) Current descriptions of nuclear forces do not make much sense. The hypothesis of unobservable Quarks and Gluons used to explain the activities of the observable Hadrons (Baryons and Mesons) does
not give a convincing description of what holds atomic nuclei together if we persist in regarding virtual gluons and virtual mesons as the carriers of the strong nuclear and residual nuclear forces.
Although protons mutually repel and no atomic nucleus can consist of just multiple protons, the exchange of virtual colour-anticolour Mesons between Protons and Neutrons supposedly allows atomic
nuclei to form. However, by this mechanism we would expect Neutron clusters of arbitrarily large number to form as well; and they plainly don't, except under extreme stellar gravitational collapse.
6) Plainly the colour force has no effect outside of Hadrons apart from its gravitational/inertial spacetime effects of adding mass. Its severe range limitation in space perhaps reflects the entirely
temporal nature of its component spins. Moreover the colour force (conventionally modelled as a SU3 symmetry) appears to have the same symmetry as the electromagnetic force (conventionally modelled
as SU1 symmetry) if we regard the charge on the electron as -3 rather than -1, for then both electromagnetic and colour charges only manifest as plus or minus 3 in Fermions and W Bosons, and plus and
minus one combined in Mesons and Photons, (and plus and minus three in Z Bosons).
7) Some form of charge polarisation leading to the sort of electrostatic attraction observed between charged and neutral objects may well explain the residual nuclear force binding Protons to
Neutrons. Deep Inelastic Scattering of Baryons shows that they behave as if they had a definite size and some internal structure which we describe as Quarks. Now using the quark model, Protons have
within them electromagnetic charges of +2, +2, and -1 whilst Neutrons have charges of +2, -1 and -1. If we try various polarisation arrangements of these charges and try pairing them up for
attractive or repulsive effects we find the following:
N – N. Repulsive effects dominate.
P – P. Repulsive effects dominate even more strongly*.
P – N. Attractive effects dominate.
8) Conclusions.
a) Electrostatic fields bind Protons to Neutrons in atomic nuclei, not the exchange of virtual Mesons. Real Mesons arise from violent Baryon accelerations only.
b) Virtual Gluon exchange does not bind Quarks together into Baryons or Mesons. The existence of such Hadrons arises from the stability of only plus or minus 3 colour spins in Baryons or plus and
minus colour spins in Mesons.
c) Real Gluons would only arise from impossibly energetic Hadron accelerations; the Gluon hypothesis thus appears superfluous.
d) Predictions:
(i) HD8 does not exclude a ‘Duon’ Boson having plus and minus two electromagnetic spins and presumably a mass below that of the Z Boson, nor does it exclude Mesons with plus and minus two colour
spins, although these may have huge masses and low stabilities. It does however exclude the Higgs Boson, if Yog-Sothoth does not deceive me.
(ii) Protonium will not exist in collapsed stars*.
(iii) Gravitational fields arise from spacetime curvature not from the exchange of virtual Gravitons. Real Gravitons in quantities detectable as gravity waves would only arise from the acceleration
of very large masses. | {"url":"https://www.specularium.org/blog/hd8-addendum","timestamp":"2024-11-13T19:12:39Z","content_type":"text/html","content_length":"122663","record_id":"<urn:uuid:89b943fb-7889-40a0-8f2b-141d39d6cd2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00006.warc.gz"} |
What is the specific capacity of mercury?
The specific heat capacity of water is 4.2 J/g·°C, while the specific heat capacity of mercury is about 0.14 J/g·°C.
What is the density of mercury at STP?
Mercury – Properties
Element: Mercury
Element category: Transition Metal
Phase at STP: Liquid
Atomic mass [amu]: 200.59
Density at STP [g/cm3]: 13.534
What is mercury density?
Mercury is a very dense, heavy, silver-white metal that is a liquid at room temperature. Mercury has a density of 13.5 g/mL, which is about 13.5 times denser than water (1.0 g/mL), so a small amount of mercury feels unexpectedly heavy.
What is the specific weight of mercury in lb ft3?
The specific weight of water at standard temperature is 62.4 lb/ft3. The specific gravity of mercury is 13.56. From these, one can compute the density of water and the specific weight and density of mercury.
How do you find the specific heat of mercury?
Use the equation Q = mcΔt, where Q is energy gained or lost in joules; m is mass (in kg); c is specific heat (in J/(kg·K)); and Δt is the change in temperature (in kelvins).
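For example, a quick calculation in Python, here working in grams with the commonly quoted value c ≈ 0.14 J/(g·°C) for mercury (a temperature change in °C equals the same change in K):

m = 50                 # grams of mercury
c = 0.14               # J/(g·°C), approximate specific heat of mercury
delta_t = 40 - 20      # warming from 20 °C to 40 °C
Q = m * c * delta_t
print(Q, "J")          # 140.0 J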
What is specific weight water?
A commonly used value is the specific weight of water on Earth at 4 °C (39 °F), which is 9.807 kilonewtons per cubic metre or 62.43 pounds-force per cubic foot.
How is mmHg calculated?
The conversion calculation from inches of mercury column to millimetres of mercury column over the vacuum and atmospheric pressure range is as follows:
1 mmHg = 133.322 pascals (Pa)
1 inHg = 3386.39 pascals (Pa)
mmHg value x 133.322 Pa = inHg value x 3386.39 Pa
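Expressed as a small Python helper (using the pascal equivalences quoted above; the function name is illustrative):

def inhg_to_mmhg(inhg: float) -> float:
    return inhg * 3386.39 / 133.322     # inches of mercury -> pascals -> millimetres of mercury

print(round(inhg_to_mmhg(29.92), 1))    # one standard atmosphere, roughly 760.0 mmHg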
What is specific gravity of mercury?
For example, liquid mercury has a density of 13.6 kg per litre; therefore, its specific gravity is 13.6.
What is the specific heat of mercury as a solid?
Specific heat of Mercury is 0.139 J/g K.
Is 4.18 a specific heat?
The specific heat capacity of water is 4.18 J/g/°C.
What is the specific gravity of mercury in kg m3?
Mercury's density is about 13.6 g/cm3 (roughly 13,600 kg/m3); its specific gravity, being a ratio, is simply 13.6.
What is mercury Hg used for?
Mercury metal has many uses. Because of its high density it is used in barometers and manometers. It is extensively used in thermometers, thanks to its high rate of thermal expansion that is fairly
constant over a wide temperature range.
What is the specific gravity of mercury?
For example, liquid mercury has a density of 13.6 kg per litre; therefore, its specific gravity is 13.6.
Is mmHg the same as Hg?
Two units of barometric pressure are millimeters of mercury (mm Hg) and inches of mercury (in Hg). As the pressure increases, the mercury rises higher in the barometer tube. The height of the mercury
can be measured in either inches or millimeters.
What is the atomic weight of mercury (Hg)?
Atomic weight of Mercury (Hg) is 200.59. Density of Mercury (Hg) is 13,533.6 kg/m3.
What are the properties of saturated liquid mercury Hg?
Properties of saturated liquid Mercury – Hg – density, specific heat capacity, kinematic viscosity, thermal conductivity and Prandtl's number: 1 kg/m3 = 0.0624 lb/ft3; 1 J/(kg·K) = 2.389×10^-4 kcal/(kg·°C) = 2.389×10^-4 Btu/(lbm·°F).
What are the physical properties of mercury?
Mercury – Density – Hg
1. Density of Mercury: 13.534 g/cm3.
2. Atomic mass of Mercury: 200.59 u.
3. Atomic radius of Mercury: 132 pm (covalent radius).
4. Crystal structure of Mercury: a possible crystal structure of Mercury is rhombohedral.
What is the molecular weight of mercury (PubChem CID 23931)?
Mercury – PubChem CID: 23931; synonyms: 7439-97-6, Hg, Mercury, Hydrargyrum, Quicksilver; molecular weight: 200.59; record modified 2021-07-03, created 2004-09-16.
Constructing Geometric Solids:
Rhombic Dodecahedron
A rhombic dodecahedron is a 12-sided solid shape in which every face is a rhombus, i.e., a quadrilateral with four sides of equal length, a diamond. One interesting property of rhombic dodecahedrons
is that you can create a 3-dimensional tessellation with them, that is, you can pack them together with no gaps, just as you can do with cubes. The rhombic dodecahedron is also the dual of the
cuboctahedron, since each vertex of the rhombic dodecahedron corresponds to a face of the cuboctahedron, and vice versa.
You can construct rhombic dodecahedrons out of stiff paper and glue (or clear tape) using the pattern and instructions that follow below. You will also need a ruler, pencil, and sharp scissors or an
Xacto knife.
First, look at the Figure 1 on the right to see how a rhombic dodecahedron is built from rhombuses. The solid figure has 12 faces that meet at 14 points. Six of these points are where four diamonds
meet, and eight of the points are where three diamonds meet.
You cannot construct a rhombic dodecahedron using just any kind of rhombus. Each face must have certain dimensions. The long diagonal and short diagonal must be in a ratio of sqrt(2) : 1 (or
approximately 1.41421:1). The short diagonal and side must be in a ratio of 1 : sqrt(3)/2 (or approximately 1 : 0.866025). See Figure 2 on the left.
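If you want to scale the pattern to a particular size, the other dimensions (and the face angles implied by the sqrt(2) : 1 diagonal ratio) can be computed from the chosen short diagonal, for example with this small Python sketch (the 4 cm value is just an example):

import math

short = 4.0                                              # chosen short diagonal, e.g. 4 cm
long = short * math.sqrt(2)                              # about 5.657 cm
side = short * math.sqrt(3) / 2                          # about 3.464 cm
acute = 2 * math.degrees(math.atan(1 / math.sqrt(2)))    # about 70.53 degrees
obtuse = 180 - acute                                     # about 109.47 degrees
print(round(long, 3), round(side, 3), round(acute, 2), round(obtuse, 2))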
Using a ruler, sharp pencil, and the rhombus pattern in Figure 2, draw two copies of Figure 3 below. For each of these two figures, fold upward along every edge, and glue the light blue tabs onto the yellow rhombuses as indicated by the red arrows. You will then have two cup-like halves.
Make sure the dark blue tabs are folded toward the inside of the cups. Fit the two halves together where the obtuse angles of the diamonds match up. The dark blue tabs will be tucked inside. Once you
see how the two halves of the rhombic dodecahedron fit together, glue the dark blue tabs to the rhombuses.
Once the glue on your solid shape has dried, you are done! Make several rhombic dodecahedrons and stack them like cubes without creating any gaps. You can also use these polyhedra as 12-sided dice.
© Had2Know 2010 | {"url":"https://www.had2know.org/makeit/construct-geometric-solids-rhombic-dodecahedron.html","timestamp":"2024-11-13T02:00:41Z","content_type":"text/html","content_length":"36188","record_id":"<urn:uuid:b223a76d-b6cc-49df-8b82-b04037211a88>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00559.warc.gz"} |
Stack Data Structure
Stack is one of the most powerful and most useful concepts in programming. It is an ordered collection of items where items can be inserted and deleted from the same end. Only one end of the stack is
accessible while the other is restricted.
The end of the stack where the insertion and deletion of items take place is called the Top of the stack. The last item to be added onto the stack is the first item to be deleted. So, a stack is also
called a Last In First Out (LIFO) list.
For example, A stack of books. We add new books on top of old ones and the topmost book is removed first.
Figure: A stack of books
A general example of a stack is shown in the figures below.
Figure: An empty stack
Figure: A full stack
Operations in a Stack
PUSH operation
When a new item is added on a stack, an item is pushed on the stack. The new item is added on the top of the stack.
Example: Push operation in a stack
• We have a stack S containing 4 elements as 1, 2, 3 and 4.
• The operation push(s, x) pushes the item x at the top of the stack S. If we push item 5 on to the stack then the resulting stack contains 1, 2, 3, 4 and 5.
• Again, if we push an item 6 onto the stack, the resulting stack contains 1, 2, 3, 4, 5 and 6.
• Now, this stack is full. A push operation cannot be performed on a full stack as there is no vacant space to place the item on the stack.
• The result of attempting to push an item onto the top of a full stack is called Stack Overflow.
POP operation
When an item is removed from the stack, the item is popped from the stack. The item to be popped is the item at the top of the stack.
The operation pop(s) removes the item at the top of stack and returns it as a functional value meaning that the operation
i = pop (S)
removes the item currently at the top of stack S and assigns the value that has just been popped to variable i.
Example: Pop operation in a stack
• We have a stack S with 2 elements 1 and 2.
• If we perform the pop operation, 2 is popped from the stack and the resulting stack contains only one element 1.
• If we perform the pop operation again then the resulting stack contains no element.
• Here, the resulting stack cannot be popped because the stack is empty meaning that there are no elements to pop.
• The result of attempting to pop an item from the top of an empty stack is called Stack Underflow.
createEmptyStack operation
This operation is used to create an empty stack.
isFull operation
This operation is used to determine whether the stack is full or not.
isEmpty operation
This operation is used to determine whether the stack is empty or not.
stackTop operation
Another operation that can be implemented on a stack is to determine what the top item on the stack is without removing it. This operation is used to determine the top item currently on the stack but
does not remove it.
Stack as an Abstract Data Type (ADT)
Since the following operations of a stack can be specified independently of any particular implementation (and then realized using a concrete data structure), a stack is an Abstract Data Type (ADT):
│ Operations │ Description │
│CreateEmptyStack (S)│This operation is used to create an empty stack S. │
│Push (S, x) │This operation is used to add an item x on to the top of the stack S. This operation pushes the item if and only if the stack has an empty or vacant space.│
│Pop (S) │This operation is used to remove the item currently at the top of stack. This item pops the item at the top of stack if and only if the stack is not empty.│
│stackTop (S) │This operation returns the item currently at the top of the stack if and only if the stack is not empty. │
│isFull (S) │This operation is used to determine whether the stack S is full or not. It returns true value (i.e. 1) if the stack is full otherwise it returns false. │
│isEmpty (S) │This operation is used to determine whether the stack S is empty of not. It returns true value (i.e.0) if the stack is empty otherwise it returns false. │
Implementation of Stack
A stack can be implemented in two ways:
1. Static Implementation or Array implementation
2. Dynamic Implementation or Linked List implementation
1. Array implementation of Stack
In array implementation of stack,
• we use a one dimensional array which is large enough to store the data or items for maximum size of the stack.
• We also use an integer value top which is used to indicate the top of the stack.
• We declare a structure in which
□ One member is an array of a type of data to be stored in stack named items.
□ Another member is top which is an integer data type and is used to keep track of the current top item on the stack.
• The top is incremented and decremented as items are added and deleted respectively on the stack.
Creating an Empty Stack – createEmptyStack Function
To create an empty stack,
• We initialize top as a unique value to indicate the empty stack.
• Commonly, for better understanding, we initialize top as 0 or -1.
• Here, we initialize top as -1.
• The data will be stored from array index location 0 into the array items.
begin procedure createEmptyStack
top ← -1
end procedure
isEmpty Function
When the stack is empty,
• The value of top is -1.
• This function returns true value (i.e. 1) if the stack is empty, otherwise it returns false (i.e. 0).
begin procedure isEmpty
if top equals to -1
return 1
return 0
isFull Function
When the stack is full,
• the value of top is MAX-1 as the index of array starts from 0 and goes to maximum number of items that the stack can store i.e. MAX-1.
• This function returns true i.e. 1 if the stack is full, otherwise it returns false (i.e. 0).
begin procedure isFull
if top equals to MAX -1
return 1
return 0
PUSH Function
To push an item,
• Before adding the item to the top of the stack, we check whether the stack is full or not by seeing the result of the isFull function defined above.
• If the stack is full, the isFull function returns true value and no item can be added.
• If the stack is not full, the isFull function returns false and then we increment the top by 1 and add the item on to the top of stack.
begin procedure push: stack, newItem
if top equals to MAX - 1
print "Stack is Full / Stack Overflow" and exit
end if
top ← top + 1
stack[top] ← newItem
end procedure
POP Function
To pop an item,
• Before removing an item from the top of the stack, we check whether the stack is empty or not by seeing the result of the isEmpty function defined above.
• If the stack is empty then the isEmpty function returns true value and no item can be removed from the stack.
• If the stack is not empty then the isEmpty function returns false value and we remove the top item from the stack by decrementing top by 1.
begin procedure pop: stack
if top less than 0
print "Stack is Empty / Stack Underflow" and exit
end if
poppedItem ← stack[top]
top ← top - 1
return poppedItem
end procedure
Example: C Program to implement stack data structure using Array.
#include <stdio.h>

#define MAX 10

struct stack
{
    int items[MAX];
    int top;
};
typedef struct stack st;

void createemptystack(st *s)
{
    s->top = -1;                      /* -1 marks an empty stack */
}

int isfull(st *s)
{
    return s->top == MAX - 1;
}

int isempty(st *s)
{
    return s->top == -1;
}

void push(st *s)
{
    int newitem;
    if (isfull(s))
    {
        printf("STACK FULL\n");
        return;
    }
    printf("Enter item to be inserted: ");
    scanf("%d", &newitem);
    s->top = s->top + 1;              /* move top up, then store the new item */
    s->items[s->top] = newitem;
}

void display(st *s)
{
    int i;
    printf("\nThe items are:\n");
    for (i = s->top; i >= 0; i--)     /* print from the top of the stack downwards */
        printf("%d\n", s->items[i]);
}

void pop(st *s)
{
    if (isempty(s))
    {
        printf("\nSTACK EMPTY\n");
        return;
    }
    printf("Item popped = %d\n", s->items[s->top]);
    s->top = s->top - 1;              /* discard the top item */
}

int main(void)
{
    int ch;
    int loop = 1;
    st s;
    createemptystack(&s);
    do
    {
        printf("\n*** STACK OPERATIONS ***\n");
        printf("1. PUSH\n");
        printf("2. DISPLAY\n");
        printf("3. POP\n");
        printf("4. EXIT\n");
        printf("Enter your choice: ");
        scanf("%d", &ch);
        switch (ch)
        {
            case 1: push(&s); break;
            case 2: display(&s); break;
            case 3: pop(&s); break;
            case 4: printf("THANK YOU\n"); loop = 0; break;
            default: printf("Invalid choice\n");
        }
    } while (loop);
    return 0;
}
Application of Stack
A stack is used to:
• Convert decimal number into binary number
• Print characters or strings in reverse order
• Evaluate prefix and postfix expressions
• Check the balance of parentheses in expressions (see the sketch after this list)
• Retain the page-visited history in a web browser
• Store the sequence of undo operations in text editor
• Serve as an auxiliary data structure for algorithms
• Serve as a component of other data structures
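As an illustration of the parenthesis-balancing application, here is a short sketch, shown in Python for brevity; the same push/pop logic translates directly to the C stack implemented above:

def is_balanced(expr: str) -> bool:
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in '([{':
            stack.append(ch)                       # push every opening bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False                       # mismatched or missing opener
    return not stack                               # every opener must have been popped

print(is_balanced("(a + b) * [c - {d / e}]"))      # True
print(is_balanced("(a + b]"))                      # False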
MAT204 Advanced Linear Algebra with Applications
Schedule: MWF at 11AM or at 1:30PM, Spring Only.
Brief Course Description: A more abstract treatment than 202, but more concrete than 217. Topics include linear systems of equations, linear independence and dimension, linear transformations,
determinants, real and complex eigenvectors and eigenvalues, orthogonality, spectral theorem, singular value decomposition, Jordan normal forms. The course will present various applications of linear
algebra in graph and network theory, cryptography, image compression, probability and statistics, machine learning, principal component analysis, linear, constrained, and regularized least-square
optimization, ranking algorithms (least-squares, Google page rank), and SVD-based recommendation engine (Netflix). Compared to 202 the course requires more curiosity and greater independence as well
as thorough conceptual understanding and computational fluency.
Why take this course? 204 is more in-depth and abstract than 202; It will be a first introduction to mathematical proofs for most students. Many interesting applications are illustrated in weekly
problem sets and optional MATLAB exercises. Its homework and exam problems often require greater fluency in mathematical computations and sophistication in mathematical thinking. For most students,
MAT202 is an excellent course that provides enough challenge and the background needed for upper division quantitative work.
Who takes this course? Most students in this course will be continuing from 203 in the Fall, and some are first-year students who took 215 or 216 in the Fall. Very strong students from 104 and 201
have also been successful in MAT204. Most students in this course have strong mathematical interest and maturity and intend to major in physics or engineering with a strong interest in applied math.
Prerequisites & Placement Info: While calculus is not strictly required in this course, you will need to have the mathematical maturity and independence that most students acquire by studying
calculus in courses like 203. The course moves rapidly and will require a serious amount of time and effort. Because the course is more algebraic and abstract compared to calculus, many students find
it difficult in a new way. It is quite difficult to judge how much time you will need to master the more abstract aspects of the course. For many students, this is a big adjustment, and quite unlike
the math courses you have taken before. So be prepared to invest a lot of time early on to learn how to think about proofs and counterexamples. Because of its greater abstraction, 204 will give you a
better sense of what it is like to be a math major than 203.
Work Load: MAT204 covers a few more topics and many more applications than MAT202 and it has a significantly higher work load. Weekly precepts will cover more sophisticated examples and interesting
applications. Each weekly problem set comprise three parts: a selection of standard exercises, supplementary application problems, and optional MATLAB exercises. Students are expected to read the
textbook before each class and review lecture notes after each class. Homework will most likely take at least 6 hours, although this can vary quite a lot depending on your background and goals.
There will be organized homework sessions with guidance from undergraduate course assistants before each problem set is due, but students should first work independently before seeking help from
course assistants during homework sessions. To do well on math exams, you need to work through a lot of extra problems from old exams and quizzes. All in all, you should be ready to spend at least
10 hours per week working outside of class.
Grading: MAT204 is graded with a more generous curve than MAT202 in order to remove considerations of grades as a deciding factor between the two courses.
• You already took linear algebra in high school or at a local college, so you want to place out of MAT202/204. Some students in MAT202/204 have had linear algebra before, but very rarely with the
same depth and thoroughness. Most students will find that the sample problems are much more sophisticated than problems they have encountered in high school.
• You think MAT204 is too hard after looking at the sample problems or attending the first couple classes. There is a lot of overlap between MAT202 and MAT204, so it is quite easy to switch
between the two courses in the first few weeks. You may prefer the pace and style of MAT202. For most engineering majors, MAT202 is an equally good choice of linear algebra course and will give
you all the background you need for upper division courses in other quantitative disciplines.
• You think MAT204 is not challenging enough. Wait till you have had a quiz, which usually occurs in the 3rd week. Try some old quizzes for this course, but don’t just read the questions and
solutions. Instead see if you can produce correct solutions to most of the problems in the allotted time. If you can do well on old exams, then you may consider taking MAT217, but first check
out MAT215. You should have some solid experience writing proofs to do well in MAT217. | {"url":"https://www.math.princeton.edu/undergraduate/placement/MAT204","timestamp":"2024-11-08T16:03:53Z","content_type":"text/html","content_length":"42285","record_id":"<urn:uuid:d1ea7101-e51b-4cb7-9255-d51af4a8d92b>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00776.warc.gz"} |
Export Reviews, Discussions, Author Feedback and Meta-Reviews
Submitted by Assigned_Reviewer_5
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
Summary of the paper:
The paper introduces a new class of multi-class boosting algorithms with base learners that are regularized based on their Rademacher complexities. The experiments show great improvement over other
multi-class approaches in the literature and on the non-regularized version of the proposed algorithm.
Detailed remarks:
For the title: in general, the term "deep" in deep learning is used to mean that the representation is hierarchical in a compositional sense. Trees are obviously not deep in this sense, so just
because you are using large trees in boosting (a quite common setup in practice), that does not make the approach "deep". You still use a single "hidden layer" of trees, combined linearly, which is
definitely a shallow architecture. Thus I strongly object to the title; it is misleading.
Lines 042-043: In practice, boosted trees almost always outperform boosted stumps \cite{CaNi06, Keg14}, and when the validated tree sizes are obtained by proper hyper-parameter optimization, they can
be quite large, the same order as found in your experiments. Moreover, when boosting multi-class Hamming trees in AB.MH, \cite{Keg14} also found that on most of the data sets there is very little
overfitting, basically one can boost trees of several tens of inner nodes for ten thousand iterations (see, for example, pendigits or letters, two sets on which most algorithms are tested), without
increasing the test error. So, the statement of "boosting has been observed to overfit in practice", derived from 15 year-old papers, should be revised. Sometimes it overfits, sometimes it doesn't,
and basically we don't know when it does and why it does when it does.
Lines 046-047: To my knowledge, the first paper proposing adaptive regularization of base classifiers is \cite{KeWa04}. The intuitive idea is the same and the final algorithm is not that different
either (coefficients have to be shrunk by a quantity related to the empirical complexity of the weak classifier).
Although we cannot expect that a conference paper surveys all multiclass boosting algorithms, the paper should at least mention those that seem to be state of the art: AOSO \cite{SuReZh12}, ABC \cite
{Li09,Li09a}, Gao-Koller's iterative weak learner in hinge-boost \cite{GaKo11}, and AB.MH with Hamming trees \cite{Keg14} (it seems to me that the trees in this latter are quite similar to those used
in this submission).
The experimental setup.
I was honestly stunned reading this:
"We recorded the parameter tuple that had the lowest average error across all 10 runs, and this average error and the standard deviation of the error is reported in Table 1 and Table 2, along with
the average number of trees and the average size of the trees in the ensembles."
You're validating on the test set, something that we teach to our students never to do. The consequence is twofold. First, I cannot compare the errors to those available in the literature. Some of
the errors (e.g. on pendigit) looked suspiciously low, 3-4 times lower than I've ever seen, that's when I started to check your experimental setup. Second, the empirical comparison of the algorithms
you tested is tainted. It is obvious that if you take an algorithm and you add hyper-parameters (the AB.MR -> L1 AB.MR -> MDeepBoostSum chain), the minimum test error can only decrease. The ballpark
range of the "improvements" is very much in line with this view: you simply harvested the fact that the minimum of a larger sample is smaller than the minimum of a smaller sample, even if they come
from the same distribution.
Now, I know that this seems like a detail for a theoretician, but for people using these algorithms what you are claiming is important. We have tested a lot of ways of regularizing the weak learners
up to about ten years ago (you referred to some of the works), it never worked, more precisely, we didn't seem to need it. There were some indications that it could help on small data sets \cite
{KeWa04}, but, exactly because of the small size of the sets, results were inconclusive. If you now claim that it is not the case, the experimental validation has to be rock solid.
My suggestion is that you make an attempt to redo the experiments doing proper double cross validation during the rebuttal period, and show us the new results. If they are non-conclusive (that is,
the regularized version doesn't beat the standard algorithm), I would say the paper could still be accepted, but the message has to be altered to something like "here is an interesting-looking
algorithm with some strong theoretical justifications, but regularization doesn't work".
Providing an open source implementation of the algorithm would be a great way to make the experiments reproducible and to let people use the proposed techniques and to build on them.
Pseudocode: for those who would like to implement the method starting from the pseudocode, it would be helpful to point towards the definitions of the quantities in the caption: \Lambda and S_t for t
= 1 are undefined.
Address = {Vancouver, Canada},
Author = {K\'{e}gl, B. and Wang, L.},
Booktitle = {Advances in Neural Information Processing Systems},
Pages = {665--672},
Publisher = {The MIT Press},
Title = {Boosting on manifolds: adaptive regularization of base classifiers},
Volume = {17},
Year = {2004}}
Author = {Caruana, R. and Niculescu-Mizil, A.},
Booktitle = {Proceedings of the 23rd International Conference on Machine Learning},
Pages = {161--168},
Title = {An Empirical Comparison of Supervised Learning Algorithms},
Year = {2006}}
Address = {Montreal, Canada},
Author = {K\'{e}gl, B. and Busa-Fekete, R.},
Booktitle = {International Conference on Machine Learning},
Pages = {497--504},
Title = {Boosting products of base classifiers},
Volume = {26},
Year = {2009}}
Author = {Li, P.},
Booktitle = {International Conference on Machine Learning},
Title = {{ABC}-{B}oost: Adaptive Base Class Boost for Multi-class Classification},
Year = {2009}}
Author = {Li, P.},
Institution = {Arxiv preprint},
Number = {arXiv:0908.4144},
Title = {{ABC-LogitBoost} for Multi-class Classification},
Year = {2009}}
Author = {Gao, T. and Koller, D.},
Booktitle = {International Conference on Machine Learning},
Title = {Multiclass boosting with hinge loss based on output coding},
Year = {2011}}
Author = {Sun, P. and Reid, M. D. and Zhou, J.},
Booktitle = {International Conference on Machine Learning (ICML)},
Title = {{AOSO-LogitBoost}: Adaptive One-Vs-One {LogitBoost} for Multi-Class Problem},
Year = {2012}}
Abstract = { We train vector-valued decision trees within the framework of AdaBoost.MH. The key element of the method is a vector-valued decision stump, factorized into an input-independent vector of
length $K$ and label-independent scalar classifier.},
Author = {K\'{e}gl, B.},
Booktitle = {International Conference on Learning Representations},
Title = {The return of {AdaBoost.MH}: multi-class {H}amming trees},
Url = {http://arxiv.org/abs/1312.6086},
Year = {2014}}
I looked at the new experimental results and indeed they are more reasonable now. As expected, errors increased significantly (eg., on letters and pendigits the errors increased by 6-7 standard
deviations). In this light your answer
"1) While it is of course possible to overestimate the performance of a learning algorithm by optimizing hyperparameters on the test set, this concern is less valid when the size of the test set is
large relative to the "complexity" of hyperparameter space (as any generalization bound will attest). Note that our experiments varied only three hyperparameters over a large data set."
looks strange. You just proved that my concern was highly relevant.
If I look at your results now, I see no significant improvement from the added regularization, which contradicts the main message of the paper (differences are in the 1-2 std range, completely
compatible with statistical fluctuation). On the other hand, your results using AB.MR and AB.MR-L1 are really good: you do something in your tree-building procedure which seems to work better than what
other people are doing (unfortunately, you don't give details on your procedure).
I'm sympathizing with you in the sense that I know that simply describing a state-of-the-art algorithm without any theoretical results will never be accepted at NIPS, but still, at this point your
theoretical results are irrelevant for a practitioner (it's not your theory that makes the algorithm good), and your practical results are irrelevant for a theoretician (for the same reason: it's not
your practical results that make the theory interesting or relevant). It's two papers in one with a false conclusion. I'd be happy to accept the paper if you were honest about it.
Q2: Please summarize your review in 1-2 sentences
I like the idea and algorithm itself, but considering the current state of the experimental setup the decision should be a clear rejection.
Submitted by Assigned_Reviewer_11
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
The authors propose a multi-class extension of the Deep Boosting framework of Cortes and Mohri. First, the authors prove a generalization bound for linear combinations of base hypotheses where each base hypothesis may belong to a
different set with a different Rademacher complexity. The bound improves the standard multi-class generalization bound of Koltchinskii and Panchenko. Then, motivated by the generalization bound, the
authors formulate an optimization problem which minimizes an upper bound of the generalization error. The proposed algorithm is a coordinate-descent style algorithm solving the optimization problem.
The experimental results show that the proposed algorithm with decision trees as base hypotheses outperforms standard multi-class boosting algorithms such as AdaBoost.MR and its variants.
The optimization problem is well motivated by the authors' improved generalization bound. Also, the formulation naturally implies a coordinate-descent algorithm to solve it. Interestingly
enough, the derived criterion to choose a weak hypothesis is the sum of the weighted error and the complexity of the hypothesis class.
The experimental result seems to show the effectiveness of the proposed method as well. But, it would be better, as done in the previous ICML'14 paper on Deep Boosting, to use validation sets to
optimize parameters.
Basically, the formulation is "primal", i.e, minimizing losses (say exponential or logistic loss), similar to AdaBoost or LogitBoost.
Another formulation is based on "dual" view of boosting. For example, AdaBoost is motivated by minimizing the relative entropy from the last distribution on the sample under some linear constraints
(see, Kivinen and Warmuth COLT99). Further investigations following the dual view are found in TotalBoost(Warmuth et al. ICML06), SoftBoost(Warmuth et al. NIPS07) and ERLPBoost (Warmuth et al.
ALT08). The dual view for the proposed algorithm might be interesting and it might deepen the understanding of the algorithm.
After viewing the authors' comments:
Since the authors report the new experimental results based on reviewers' suggestion, I would raise my evaluation.
Q2: Please summarize your review in 1-2 sentences
I think the theoretical contribution is sufficient enough for the community of NIPS.
Submitted by Assigned_Reviewer_24
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
This paper presents a multi-class extension to the recently proposed ensemble learning algorithm called DeepBoost. It provides new data-dependent bounds for convex ensembles in multi-class
classification setup.
The paper reads well and the technical derivations are correct. Further, it provides data-dependent bounds which are tighter than existing bounds due to the explicit dependency on the mixture weights in
the convex combination. The special case of the bound derived in Theorem 1 also leads to an important result, as it gives a linear dependency on the number of classes. It improves the existing bounds
from Koltchinskii and Panchenko, wherein this dependency is quadratic. In this sense, the bound is more useful, particularly for large number of classes. The authors also design optimization
objectives and multi-class DeepBoosting algorithms which give good performance on UCI datasets as compared to the multi-class version of Adaboost and logistic regression.
However, some parts of the paper need to be expanded and clarified, especially the relation to the existing Deep Boosting paper (ICML 2014), which makes the proposed approach somewhat incremental, as
well as the experimental setup. Indeed, the proof techniques, design of the objective function, and developed algorithms are very similar in flavor to this existing work. This limits the novelty of
the current work to a certain extent. Another point which was not clear: the authors say in Line 143 (in Section 3.1) that the generalization error of $f$ and $f/\rho$ is the same. In my opinion, $\rho$
has the interpretation of a margin from the definitions developed in equation (2), and hence it is not immediately clear why these two will admit the same generalization error. Moreover, it seems that the
authors are using the labels on the test set to find the hyper-parameters of the algorithm. In the ICML paper on binary classification the experimental setting looks different: in that paper there is a
separate validation set on which the parameter values are chosen, not the test set directly. The authors should clarify this point as well.
Q2: Please summarize your review in 1-2 sentences
+The paper provides data-dependent bounds which are tighter than existing bounds due to the explicit dependency on the mixture weights in the convex combination.
+ The special case of the bound derived in theorem 1, also leads to an important result as leads to a linear dependency on the number of classes.
+ The authors also design optimization objectives and multi-class DeepBoosting algorithms which give good performance on UCI datasets
- This work is somewhat incremental on the existing paper Deep Boosting in ICML, 2014
- Some points need to be clarified (experimental setup, clear novelty with respect to Deep Boosting paper in ICML, 2014)
Q1:Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a
maximum of 6000 characters. Note however, that reviewers and area chairs are busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point.
We thank the reviewers for carefully reading our paper and for their helpful comments.
- All reviewers expressed concern that the hyperparameters in our experiments were chosen by optimizing them on the test set instead of by cross-validation. We have two responses:
1) While it is of course possible to overestimate the performance of a learning algorithm by optimizing hyperparameters on the test set, this concern is less valid when the size of the test set is
large relative to the "complexity" of hyperparameter space (as any generalization bound will attest). Note that our experiments varied only three hyperparameters over a large data set.
2) We agree, however, that choosing hyperparameters by cross-validation is ideal, so per Reviewer 5's suggestion we re-ran our experiments. The only change we made to the experimental setup was to
divide each data set into 4 folds instead of 10: two for training, one for cross-validation, and one for testing. Note that, for instance, if we want to optimize three hyperparameters on a 15x15x15
grid with 10 folds then this results in over 30000 runs of our algorithm and to make sure that we could meet the rebuttal deadline we had to reduce the number of folds. The table below reports the
average and standard deviation of the error on the test folds ("pm" stands for "plus-minus"). Observe that MDeepBoost has the lowest average test error on every data set (except for one data set
where it is tied with AdaBoost.MR-L1); in fact we observed that MDeepBoost had the lowest test error on almost every fold.
AdaBoost.MR-------0.014 pm 0.0025
AdaBoost.MR-L1---0.014 pm 0.0013
MDeepBoost--------0.012 pm 0.0011
AdaBoost.MR-------0.029 pm 0.0026
AdaBoost.MR-L1---0.026 pm 0.0071
MDeepBoost--------0.024 pm 0.0008
AdaBoost.MR-------0.024 pm 0.0011
AdaBoost.MR-L1---0.025 pm 0.0018
MDeepBoost--------0.021 pm 0.0015
AdaBoost.MR-------0.065 pm 0.0018
AdaBoost.MR-L1---0.059 pm 0.0059
MDeepBoost--------0.058 pm 0.0039
AdaBoost.MR-------0.415 pm 0.0353
AdaBoost.MR-L1---0.410 pm 0.0324
MDeepBoost--------0.407 pm 0.0282
AdaBoost.MR-------0.739 pm 0.0016
AdaBoost.MR-L1---0.737 pm 0.0065
MDeepBoost--------0.735 pm 0.0045
AdaBoost.MR-------0.120 pm 0.0123
AdaBoost.MR-L1---0.117 pm 0.0096
MDeepBoost--------0.117 pm 0.0087
AdaBoost.MR-------0.035 pm 0.0045
AdaBoost.MR-L1---0.035 pm 0.0031
MDeepBoost--------0.033 pm 0.0014
Finally, we would like to add that it was mainly because of the lack of time that we reported results for parameters tuned on test data instead of cross-validation results. We were able to obtain new
results quicker both due to the reduced number of folds and a new improved implementation. Should our work be accepted, we will include the newly presented cross-validation experiments and modify
conclusions that we make in the paper to reflect these results.
- Reviewer 5 objects to the title of the paper. While we agree that the title may lead a reader to incorrectly associate our paper with the literature on neural networks, the paper by Cortes, Mohri
and Syed (2014), which our work extends, also used "deep" in the title, so we feel that changing the title of our paper would be even more confusing than keeping it as it is.
- Reviewer 5 has helpfully provided several additional references, which we would be happy to cite in the camera-ready version. We will also clarify the ambiguous notation in the caption.
- Reviewer 5 would like us to provide an open source version of our algorithm. In fact our implementation was based on an open source version of the algorithm from Cortes, Mohri and Syed (2014), and
if our paper is accepted we can ask them whether they would be willing to include our code in their repository.
- We agree with Reviewer 11 that the dual view provides an alternative way of justifying the algorithm and that exploring this direction would be interesting.
- Reviewer 24 felt that our contribution was incremental relative to Cortes, Mohri and Syed (2014). We wish to emphasize that our paper contains many novel results, including new multi-class
generalization bounds, analysis and comparison of several objectives for the multi-class setting, new algorithms for each of these objectives, consistency proofs, convergence proofs, and experimental
results in the multi-class setting.
- Reviewer 24 asked us to clarify why the generalization error of $f$ and $f/\rho$ is the same in Line 143 (in Section 3.1). Please recall that multiplying $f$ by a positive constant does not affect the way
a point is classified, since the class with the highest score is selected. Therefore the zero-one loss of $f$ is invariant to rescaling $f$ by a positive constant.
In conclusion, we would like to emphasize again the theoretical and algorithmic contributions of our work. We give new data-dependent bounds for convex ensembles in the multi-class classification
setting. These bounds are finer than existing bounds for multi-class classification both thanks to a linear (instead of quadratic) dependency on the number of classes and, more importantly, by virtue
of a more favorable complexity term expressed as an average of the Rademacher complexities based on the ensemble's mixture weights. Based on these guarantees, we introduce several new multi-class
ensemble algorithms and establish the H-consistency and convergence results for several of these algorithms. | {"url":"https://proceedings.neurips.cc/paper_files/paper/2014/file/7bb060764a818184ebb1cc0d43d382aa-Reviews.html","timestamp":"2024-11-03T00:03:55Z","content_type":"application/xhtml+xml","content_length":"28448","record_id":"<urn:uuid:cdc466ca-b37d-4748-9881-15a7aac18a33>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00521.warc.gz"} |
A Portable Introduction to Data Analysis
9 Conditional Probability
We use [latex]\pr{A|B}[/latex] to denote the conditional probability of [latex]A[/latex] occurring if [latex]B[/latex] has occurred. This may be conditional in time, such as the probability that the
second person in a sample is female given the first was male, or might be different attributes of the same person or object, such as the probability of green eyes given that a person is female.
We have already seen how to calculate conditional proportions in Chapter 6 and the methods for conditional probabilities are the same. However, we can now be more explicit about the rules by using
our probability notation. Conditional probabilities are calculated using the rule
\[ \pr{A | B} = \frac{\pr{A \mbox{ and } B}}{\pr{B}}. \]
That is, to find [latex]\pr{A|B}[/latex] we look at the proportion of time [latex]B[/latex] occurs and then how often [latex]A[/latex] occurs as well, taking the ratio to obtain the conditional
probability. Note that we could have used [latex]\pr{B \mbox{ and } A}[/latex] instead of [latex]\pr{A \mbox{ and } B}[/latex] in the formula, since it also denotes the probability that [latex]A[/
latex] and [latex]B[/latex] occur together. We will show how conditional probabilities can be used through two examples.
Sensitive Questions
From the survey data, 43/60 = 71.7% of the Islanders answered ‘Yes’ to the sensitive question “Do you approve of kissing on the first date?” (Fidler & Kleinknecht, 1977). However, this group of 60
contained some who answered ‘Yes’ or ‘No’ by chance, according to the rules of the survey. We need to estimate the proportion who said ‘Yes’ because they actually did approve of kissing on the first date.
Tree diagram of sensitive question process
The figure above summarises the process involved in this question in a tree diagram. This diagram shows the different choices that can be made, giving the associated probabilities on the branches.
The question begins with a coin being tossed twice. We assume
\[ \pr{\mbox{first toss heads}} = \pr{\mbox{first toss tails}} = 0.5. \]
If the first toss was tails then the student answered ‘Yes’ if the second toss was heads, otherwise they answered ‘No’. Thus
\[ \pr{\mbox{Yes } | \mbox{ first toss tails}} = 0.5. \]
To work out what proportion of `Yes’ responses came this way we want
\[\pr{\mbox{Yes and first toss tails}}.\]
The conditional probability formula can be rearranged to give the multiplication rule
\[ \pr{A \mbox{ and } B} = \pr{B}\pr{A | B}, \]
\begin{eqnarray*} \pr{\mbox{Yes and first toss tails}} & = & \pr{\mbox{first toss tails}} \pr{\mbox{Yes } | \mbox{ first toss tails}} \\ & = & 0.5 \times 0.5 \\ & = & 0.25. \end{eqnarray*}
Now we need to find out [latex]\pr{\mbox{Yes and first toss heads}}[/latex]. We can do this if we know [latex]\pr{\mbox{Yes } | \mbox{ first toss heads}}[/latex]. If they got heads on the first toss
then they were asked to answer the question truthfully. So the probability given by [latex]\pr{\mbox{Yes } | \mbox{ first toss heads}}[/latex] is actually the one we want to know, the proportion of
the population who would say `Yes’ if they answered the question truthfully. We will call this unknown [latex]p[/latex], so
\begin{eqnarray*} \pr{\mbox{Yes and first toss heads}} & = & \pr{\mbox{first toss head}} \pr{\mbox{Yes } | \mbox{ first toss head}} \\ & = & 0.5p. \end{eqnarray*}
So the total probability of ‘Yes’ is [latex]0.25 + 0.5p[/latex]. We have an estimate for this, the sample proportion 0.717, so we can equate these to estimate [latex]p[/latex]. That is,
\[ 0.25 + 0.5p = 0.717, \]
so [latex]p = 2(0.717 - 0.25) = 0.934[/latex]. Thus our estimate of the proportion who would actually approve of kissing on the first date is about 93%. Of course we would really like to know how
accurate this estimate might be, since it came from a sample of only 60 Islanders. We will return to that issue in Chapter 17.
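As a rough illustration (a sketch added here, not part of the original text), both the point estimate and the survey mechanism itself can be checked with a few lines of Python; the 0.934 figure is just the algebraic solution of 0.25 + 0.5p = 0.717:

import random

def estimate_p(yes_proportion):
    # Invert Pr(Yes) = 0.25 + 0.5p for the randomised response design described above.
    return 2 * (yes_proportion - 0.25)

print(estimate_p(43 / 60))  # about 0.933

def simulate_survey(true_p, n, seed=1):
    # Simulate n respondents following the two-coin-toss rules, assuming the true proportion is true_p.
    random.seed(seed)
    yes = 0
    for _ in range(n):
        if random.random() < 0.5:      # first toss heads: answer truthfully
            yes += random.random() < true_p
        else:                          # first toss tails: answer 'Yes' only if the second toss is heads
            yes += random.random() < 0.5
    return estimate_p(yes / n)

print(simulate_survey(0.93, 100_000))  # close to 0.93 for a large sample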
Diagnostic Testing
A frequent need with conditional probabilities is to reverse the direction of the events, finding [latex]\pr{B|A}[/latex] when we know [latex]\pr{A|B}[/latex]. Wild and Seber (2000) give an extensive
case study of using this method to analyse error rates in diagnostic testing, which we will summarise here.
A common test for HIV has been to screen for the antibodies to the virus using an enzyme-linked immunosorbent assay (ELISA). Such a test is not perfect and there is always a tradeoff between having a
high sensitivity, the probability of correctly diagnosing a person with HIV, and a high specificity, the probability of correctly diagnosing a person without HIV. An experiment which compared the
results of the ELISA test to a more accurate, but more expensive, test suggests that
\[ \pr{\mbox{test positive } | \mbox{ HIV}} = 0.98 \]
\[ \pr{\mbox{test negative } | \mbox{ no HIV}} = 0.926. \]
Thus the sensitivity is 0.98 and the specificity is 0.926. What a patient who tests positive would like to know is the probability that they have HIV. This is [latex]\pr{\mbox{HIV } | \mbox{ test
positive}}[/latex], the opposite of what we have above.
\[ \pr{\mbox{HIV } | \mbox{ test positive}} = \frac{\pr{\mbox{HIV and test positive}}}{\pr{\mbox{test positive}}}. \]
The numerator is
\[ \pr{\mbox{HIV and test positive}} = \pr{\mbox{HIV}}\pr{\mbox{test positive } | \mbox{ HIV}}, \]
but what is [latex]\pr{\mbox{HIV}}[/latex]? This depends a lot on the population you are testing. For example, the Australian Bureau of Statistics reports that up to the year 2000 there had been
20995 HIV cases in Australia and 6017 AIDS deaths so we will suppose there are about 15000 people living with HIV in Australia. From a population of 19.2 million at that time this gives a probability
\[ \pr{\mbox{HIV}} = \frac{15000}{19200000} = 0.00078125. \]
\[ \pr{\mbox{HIV and test positive}} = 0.00078125 \times 0.98 = 0.00076563. \]
\[ \pr{\mbox{no HIV and test positive}} = (1 - 0.00078125) \times (1 - 0.926) = 0.073942. \]
Since someone either has HIV or does not have HIV we can add these two to find
\[ \pr{\mbox{test positive}} = 0.00076563 + 0.073942 = 0.074708. \]
Together this gives
\[ \pr{\mbox{HIV } | \mbox{ test positive}} = \frac{0.00076563}{0.074708} = 0.0102. \]
Thus, even though the test is positive 98% of the time for someone with HIV, the chance that you have HIV if you get a positive result is only 1 in 100. Look at the numbers closely to see why this is so.
There are important political implications of this result. Many groups, such as insurance companies and employers, are keen to introduce mass testing of customers and employees to assess their
medical risk. Whether this is a good or bad thing is a separate issue, but we have seen above that large scale diagnostic testing for relatively rare conditions will lead to a great number of false
positives, people testing positive when they do not have the condition. This can easily be obscured by a claim such as that a test is “98% accurate”.
Contrast these calculations for Australia with those for South Africa where [latex]\pr{\mbox{HIV}} = 0.261[/latex] for people aged 20-24 in 1998. If you follow the steps you will find
\[ \pr{\mbox{HIV } | \mbox{ test positive}} = \frac{0.25578}{0.31047} = 0.8239. \]
That is, with the much higher rate of infection, the chance of a false positive is now much smaller.
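These calculations are easy to script; the short Python sketch below (added for illustration, not from the original text) reproduces both figures from the prevalence, sensitivity and specificity:

def prob_disease_given_positive(prevalence, sensitivity, specificity):
    # Bayes' rule: Pr(disease | positive) = Pr(disease and positive) / Pr(positive).
    true_positive = prevalence * sensitivity
    false_positive = (1 - prevalence) * (1 - specificity)
    return true_positive / (true_positive + false_positive)

print(prob_disease_given_positive(15000 / 19200000, 0.98, 0.926))  # about 0.0102 (Australia)
print(prob_disease_given_positive(0.261, 0.98, 0.926))             # about 0.8239 (South Africa, ages 20-24)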
At a similar level, the estimated HIV infection rate for people in South Africa under 20 years old in 1998 was 21.0%, up from 12.7% in 1997. These statistics are hotly debated because they have
profound implications for the country. A government must decide whether to spend limited resources on areas such as medical treatment for those with AIDS, research into possible vaccinations,
education programs, or on welfare for those who lose family members. Such official statistics and their uses are an important part of statistical research but will not be discussed directly in this book.
Independence
We say that two events, [latex]A[/latex] and [latex]B[/latex], are independent if the outcome of one tells us nothing about the probability distribution of the other. That is, [latex]\pr{A|B} = \pr
{A}[/latex] and [latex]\pr{B|A} = \pr{B}[/latex]. In most cases you can decide whether two events are independent by thinking about the physical way in which they happen. An important assumption we
will make later is that consecutive samples from a population are independent. This can usually be achieved if the samples are chosen at random.
If events are independent then the multiplication rule in the previous section becomes
\[ \pr{A \mbox{ and } B} = \pr{A} \pr{B}. \]
This is an idea you are probably familiar with. If you toss a coin twice the probability of getting two heads in a row is 0.25 since 50% of the time the first will be heads and 50% of those times the
second will also be heads, and 50% of 50% is 25%. Similarly, if we pick two people from a population with female proportion [latex]p[/latex] then the probability of getting two females is [latex]p^2[/latex].
An important use of this independence rule will be for testing whether there is an association between two categorical variables. For example, we previously saw a two-way table of counts of Islanders
from Ironbard categorised by pizza preference and sex. The marginal proportions showed that 0.49 of the Islanders were male and 0.14 of the Islanders preferred mushroom pizza. If pizza preference and
sex were independent then the proportion of Islanders who are male and prefer mushroom should be
\[ 0.49 \times 0.14 = 0.0686. \]
Out of 200 Islanders we would then expect there to be [latex]0.0686 \times 200 = 13.7[/latex] mushroom-liking males. In fact we observed 18 mushroom-liking males in the data, a difference of 4.3 from
what was expected. In Chapter 22 we will measure these differences and see whether they give evidence that the variables are not independent. If they were not independent then this would suggest an
association between them.
• Conditional probability gives the probability of an outcome occurring conditional on another outcome having occurred.
• Conditional probability can be used to analyse the results of sensitive questions on surveys that involve randomness.
• Conditional probability shows the trade-off between sensitivity and specificity in diagnostic testing.
• Events are independent if the conditional probability of one event is unchanged by the outcome of the other.
A group of 60 students were asked the sensitive question “Have you ever cheated on an exam?” using a random response technique. Before answering the question each student secretly rolled a die. If
the die indicated 1, 2 then they were asked to answer the question honestly. If the die indicated 3, 4 then they were asked to answer ‘yes’ regardless of their true answer. If the die indicated 5, 6
then they were asked to answer ‘no’. Of the 60 students, a total of 33 answered ‘yes’ using this technique. Based on this sample, estimate the true proportion of students who have cheated on an exam.
Think of other ways of using randomness to obtain anonymity in a similar way to the method earlier in this chapter. (A common approach is to use a deck of shuffled cards which have directions on
them, such as ‘Answer truthfully’ or ‘Answer Yes’.) For each method, show how to estimate the true proportion based on the one found in your sample.
For a particular population group it is estimated that the prevalence of cervical cancer is 8.6%. A new diagnostic procedure for this disease has a sensitivity of 0.927, correctly diagnosing an
individual with cervical cancer 92.7% of the time. The specificity of the procedure is 0.854, so that an individual without the disease is correctly diagnosed 85.4% of the time.
Suppose the diagnostic procedure indicates an individual from this population group has the disease. What is the probability that they actually do have cervical cancer?
Find additional data, such as that from the Bureau of Statistics website, so that you can repeat the calculations for the Diagnostic Testing example for different groups.
Two people, Alice and Victor, agree to play the following game: A barrel is filled with 1000 balls, 650 labelled ‘Alice’ and 350 labelled ‘Victor’. The players take it in turns to draw a ball from
the barrel. If it is labelled with their name they win and the game ends. Otherwise they replace their ball, shake the barrel, and the other player draws. If Alice goes first, what is the probability
that she wins the game?
Testing a hypothesis at the 5% level means that there is a 0.05 probability of rejecting the null hypothesis when in fact it is true. Suppose 12 research teams around the world independently carry
out trials of the same chemical compound to see whether it is effective against HIV. If the compound is actually not effective against HIV, what is the probability that at least one research team
will find significant evidence of an effect at the 5% level?
The first randomised response methodology was presented by Warner (1965) and was based on a different process to the example in the section on sensitive questions. For example, suppose 200 university
students are asked “if you or your girlfriend accidentally got pregnant would you seriously consider the possibility of an abortion?” Before answering, each student secretly tosses a fair coin twice.
If both tosses show heads then they answer the truth but for all other outcomes they answer the opposite of the truth. If 114 students answer ‘yes’, estimate the true proportion of students who would
consider an abortion.
Explain why Warner’s method in Exercise 7 requires a probability of answering truthfully, [latex]\theta[/latex], that is not equal to [latex]\frac{1}{2}[/latex].
Alice is planning on using a randomised response method to ask the sensitive question “Have you ever used marijuana?” She will have each subject toss a fair coin twice before answering the question.
As with the example earlier in this chapter, they will be asked to answer truthfully if the first toss shows heads. If the first toss shows tails then they are asked to answer ‘yes’ if the second
toss shows heads and ‘no’ if the second toss shows tails.
Suppose [latex]p[/latex] is the true proportion of people who have used marijuana in the population that Alice is sampling. If a subject says ‘yes’ for the randomised response, what is the
probability that they have used marijuana? | {"url":"https://uq.pressbooks.pub/portable-introduction-data-analysis/chapter/conditional-probability/","timestamp":"2024-11-14T05:28:44Z","content_type":"text/html","content_length":"111774","record_id":"<urn:uuid:deb4b1b5-09ac-4393-a015-6adaccfc16b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00195.warc.gz"} |
enum abstract TriangleCulling(Null<Int>)
Available on all platforms
Defines codes for culling algorithms that determine which triangles not to render when drawing triangle paths.
The terms POSITIVE and NEGATIVE refer to the sign of a triangle's normal along the z-axis. The normal is a 3D vector that is perpendicular to the surface of the triangle.
A triangle whose vertices 0, 1, and 2 are arranged in a clockwise order has a positive normal value. That is, its normal points in a positive z-axis direction, away from the current view point. When
the TriangleCulling.POSITIVE algorithm is used, triangles with positive normals are not rendered. Another term for this is backface culling.
A triangle whose vertices are arranged in a counter-clockwise order has a negative normal value. That is, its normal points in a negative z-axis direction, toward the current view point. When the
TriangleCulling.NEGATIVE algorithm is used, triangles with negative normals will not be rendered.
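The winding direction can be checked directly from the vertex coordinates. The following Python sketch (not part of the OpenFL API, and assuming the usual y-down screen orientation) computes the z-component of the cross product of two triangle edges; its sign tells you whether vertices 0, 1, and 2 are ordered clockwise or counter-clockwise:

def winding_z(v0, v1, v2):
    # z-component of the cross product of edges (v1 - v0) and (v2 - v0).
    return (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v1[1] - v0[1]) * (v2[0] - v0[0])

# With y growing downwards (screen coordinates), a positive value means a clockwise ordering.
print(winding_z((0, 0), (100, 0), (0, 100)))  # positive: clockwise on screen
print(winding_z((0, 0), (0, 100), (100, 0)))  # negative: counter-clockwise on screen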
@:value(cast 0) @:impl @:enum inline read only NEGATIVE:TriangleCulling = 0
Specifies culling of all triangles facing toward the current view point.
@:value(cast 1) @:impl @:enum inline read only NONE:TriangleCulling = 1
Specifies no culling. All triangles in the path are rendered.
@:value(cast 2) @:impl @:enum inline read only POSITIVE:TriangleCulling = 2
Specifies culling of all triangles facing away from the current view point. This is also known as backface culling. | {"url":"https://api.openfl.org/openfl/display/TriangleCulling.html","timestamp":"2024-11-12T16:53:43Z","content_type":"text/html","content_length":"10737","record_id":"<urn:uuid:8e857335-021f-4c52-b44d-2059c576a2ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00712.warc.gz"} |
What Is Impedance in PCB Design for Beginners? - PCBBUY.COM
What Is Impedance in PCB Design for Beginners?
By:PCBBUY 02/11/2022 09:51
According to the theory of signal transmission, a signal is a function of both time and distance, so every part of the signal may change as it travels along the connection. We therefore define the AC
impedance of the connection, that is, the ratio of the change in voltage to the change in current, as the characteristic impedance of the transmission line. The characteristic impedance depends only
on the properties of the signal connection itself.
In a real circuit, the resistance of the wire itself is small compared with the distributed impedance of the system. In high-frequency circuits especially, the characteristic impedance is determined
mainly by the distributed impedance arising from the per-unit-length capacitance and inductance of the connection. For an ideal transmission line, the characteristic impedance depends only on
the unit distributed capacitance and unit distributed inductance of the connection.
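As a quick illustration of that last statement (a generic textbook formula, not tied to any particular PCB stack-up), the characteristic impedance of a lossless line is Z0 = sqrt(L/C), where L and C are the per-unit-length inductance and capacitance; the example values below are assumed, ballpark numbers only:

import math

def z0_lossless(L_per_m, C_per_m):
    # Characteristic impedance of an ideal (lossless) transmission line.
    return math.sqrt(L_per_m / C_per_m)

print(z0_lossless(250e-9, 100e-12))  # 250 nH/m and 100 pF/m give 50.0 ohms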
In this passage, we provide an overview of impedance in PCB design. If you are curious about impedance in PCBs, please read the content below.
How does PCB layout affect circuit impedance?
In a real PCB layout, the impedance seen by signals can be very different from the ideal value you determined from a schematic. This is due to the presence of the substrate and the arrangement of
traces on the board. This gives rise to important effects like crosstalk, which changes the impedance from ideal values. Circuits can experience power integrity problems such as ringing when circuits
switch at high frequencies. The impedance of your power delivery network will also deviate from the ideal capacitive behavior at higher frequencies, which contributes to potential signal integrity
and power integrity problems.
Transmission Line Impedance
The impedance of transmission lines can be characterized using a number of impedance values. The most important of these is the characteristic impedance, which is simply the impedance of a
transmission line on a PCB in total isolation from any other transmission line. This value is normally 50 Ohms, although it may take a different value depending on the signaling standard used in your
device. As an example, LVDS specifies that the differential impedance of a differential pair should be 85 Ohms.
The other metrics used to describe transmission line impedance depend on the relative arrangement of two transmission lines. Due to the parasitic capacitance provided by the PCB substrate and the
mutual inductance between two nearby transmission lines, a transmission line can be characterized using even and odd mode impedance, which accounts for coupling between two nearby transmission lines
and how the two lines are driven (i.e., in common mode or differential mode). Common and differential impedance are related to these other values, bringing the total number of impedance values used
to describe transmission lines to five.
What are the considerations of impedance in PCB?
Substances with very low resistance are good conductors, such as metals; substances with extremely high resistance are poor conductors (insulators), such as wood and plastics. Between the two are
materials called semiconductors, and a superconductor is a substance whose resistance is almost zero. In alternating-current circuits, however, capacitance and inductance also oppose the flow of
current in addition to resistance. This opposition is called reactance.
Reactance, like resistance, is measured in ohms, and its value depends on the frequency of the alternating current: as the frequency rises, capacitive reactance falls and inductive reactance rises.
Resistance, capacitive reactance, inductive reactance and the phase angle are related as vectors, so we can say that impedance is the vector sum of resistance and reactance.
For a specific circuit, the impedance is not constant but changes with frequency. In a series circuit of resistance, inductance and capacitance, the impedance is generally larger than the resistance
alone, except at resonance, where the impedance falls to its minimum. In a parallel circuit of inductance and capacitance, the impedance rises to its maximum at resonance, which is the opposite of the
series circuit.
What is impedance control and signal matching?
Impedance control is matching PCB trace dimensions and locations with the properties of the substrate material to make sure that the strength of a signal traveling along a trace is within a required
range. The continual increase in device switching speeds is confronting engineers with signal integrity (SI) problems and eventually, most devices are going to have to deal with SI issues. So,
Printed Circuit Board (PCB) traces can no longer be treated as a simple point-to-point connection. Traces need to be considered as transmission lines and impedance matching becomes necessary and
required in order to lessen or eliminate the impact on SI. By following good design practices and approaches, many potential signal integrity issues can be averted or mitigated.
What are some of the factors affecting impedance?
Some of the variables on which the impedance of a trace depends are more or less fixed by the PCB manufacturer, and some of them are defined by the PCB designer. When the impedance is uniform, power
is transmitted uniformly along the length of the trace across the PCB.
Therefore, a trace that has a very uniform cross-sectional geometry needs to be constructed.
In other words, the shape and size of the trace should be as uniform as possible, and the dielectric constant of the material it runs over should be as consistent as possible along its length for a
given routing layer. The more uniform the trace and the more consistent the dielectric constant, the more consistent the impedance and the less the signal degradation.
Quote Now | {"url":"https://www.pcbbuy.com/news/What-Is-Impedance-in-PCB-Design-for-Beginners.html","timestamp":"2024-11-15T00:03:39Z","content_type":"text/html","content_length":"42716","record_id":"<urn:uuid:5dd11039-accf-4975-9375-130ae185270d>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00732.warc.gz"} |
UoH-Math Fest - Rules
UoH-MathFest24 consists of 3 rounds; all rules and guidelines are presented below:
The competition consists of 3 rounds and is held over 2 weeks. The number of questions and the competition time change by stage. In the first round, each day the contestants have four-and-a-half
hours to solve three problems. Each problem is worth 7 points, for a maximum total score of 42 points. Calculators are not allowed, and protractors were banned relatively recently.[10] The problems
chosen are from various areas of secondary school mathematics, broadly classifiable as geometry, number theory, algebra, and combinatorics. They require no knowledge of higher mathematics such as
calculus and analysis, and solutions are often elementary. However, they are usually disguised so as to make the solutions difficult. The problems given in the IMO are largely designed to require
creativity and the ability to solve problems quickly. Thus, the prominently featured problems are algebraic inequalities, complex numbers, and construction-oriented geometrical problems, though in
recent years, the latter has not been as popular as before because of the algorithmic use of theorems like Muirhead's Inequality, and Complex/Analytic Bash to solve problems.[11] | {"url":"https://mathfest.uoh.edu.iq/uoh-math-fest24/rules","timestamp":"2024-11-09T01:37:38Z","content_type":"text/html","content_length":"127018","record_id":"<urn:uuid:940145da-f5fb-448d-8a77-710844f94544>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00675.warc.gz"} |
Laplace transform vs phasor analysis in circuit analysis
• Thread starter Wrichik Basu
• Start date
I recently acquainted myself with Laplace transform, and it appears that it has some relations with phasor analysis. This observation stems from the fact that while in Laplace transform, we have ##s
= \sigma + j \omega## as the variable, in phasor analysis, we just use ##j\omega,## apparently just setting ##\sigma = 0.## Is there any other way the two methods are related?
I have seen phasor analysis being used only for sinusoidal steady state analysis, while Laplace transform gives the transient solution since the initial conditions are built into it, and we solve the
circuit with that. Laplace transform also allows us to incorporate impulse-type voltages and currents in the circuit, wherefrom it seems to me that Laplace transform is a more powerful tool. Can
Laplace transform be used to do sinusoidal steady state analysis as well? I don't see why not, but need a confirmation.
Science Advisor
Gold Member
Yes, absolutely related. When s=jω, you get the steady state sinusoidal solutions, otherwise there is an exponential growth (or decay) element in the solutions. So the Laplace transform is the more
general case that can be used to obtain transient and/or steady state solutions.
DaveE said:
Yes, absolutely related. When s=jω, you get the steady state sinusoidal solutions, otherwise there is an exponential growth (or decay) element in the solutions. So the Laplace transform is the
more general case that can be used to obtain transient and/or steady state solutions.
I will have to go through that video once again to get my brain straight, but I understood this much that if I use Laplace transform and get a decay term, then in the ##\lim_{t \rightarrow \infty}##
that decay term basically ##\rightarrow 0,## so I am left with the steady state solution only. Thanks.
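As a quick sanity check of that limit, here is a minimal SymPy sketch for an assumed unit RC low-pass (transfer function 1/(s + 1)) driven by sin(t); the component values are made up, so treat it as illustration only:

from sympy import symbols, inverse_laplace_transform

s, t = symbols('s t', positive=True)

# Output transform: H(s) times the transform of sin(t), which is 1/(s^2 + 1).
Y = 1 / ((s + 1) * (s**2 + 1))
y = inverse_laplace_transform(Y, s, t)
print(y)
# For t > 0, y = exp(-t)/2 + (sin(t) - cos(t))/2. The exponential term is the transient;
# as t -> infinity it vanishes, leaving exactly the sinusoidal steady state that phasor analysis gives.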
Science Advisor
Gold Member
Wrichik Basu said:
I recently acquainted myself with Laplace transform, and it appears that it has some relations with phasor analysis. This observation stems from the fact that while in Laplace transform, we have
##s = \sigma + j \omega## as the variable, in phasor analysis, we just use ##j\omega,## apparently just setting ##\sigma = 0.## Is there any other way the two methods are related?
I have seen phasor analysis being used only for sinusoidal steady state analysis, while Laplace transform gives the transient solution since the initial conditions are built into it, and we solve
the circuit with that. Laplace transform also allows us to incorporate impulse-type voltages and currents in the circuit, wherefrom it seems to me that Laplace transform is a more powerful tool.
Can Laplace transform be used to do sinusoidal steady state analysis as well? I don't see why not, but need a confirmation.
The two methods are related, but it depends on where the Laplace transform converges. I posted about this in a different thread a while ago.
FAQ: Laplace transform vs phasor analysis in circuit analysis
What is the main difference between Laplace transform and phasor analysis in circuit analysis?
The main difference between Laplace transform and phasor analysis lies in their application and scope. Laplace transform is a more general technique that can handle both transient and steady-state
responses of circuits, while phasor analysis is limited to steady-state sinusoidal analysis. Laplace transform works in the s-domain, accommodating any input signal, whereas phasor analysis works in
the frequency domain, specifically for sinusoidal signals.
When should I use Laplace transform instead of phasor analysis?
You should use Laplace transform when you need to analyze circuits with non-sinusoidal inputs, transient responses, or initial conditions. It provides a comprehensive solution that includes both
transient and steady-state behavior. Phasor analysis is more appropriate for steady-state analysis of circuits with sinusoidal inputs, where it simplifies calculations by converting differential
equations into algebraic ones.
Can Laplace transform handle initial conditions in circuit analysis?
Yes, Laplace transform can handle initial conditions in circuit analysis. By transforming the circuit equations into the s-domain, initial conditions are incorporated into the transformed equations,
allowing for a complete solution that includes the effect of initial energy stored in inductors and capacitors.
Is phasor analysis applicable to non-sinusoidal signals?
No, phasor analysis is not applicable to non-sinusoidal signals. Phasor analysis is specifically designed for steady-state sinusoidal signals. It simplifies the analysis by representing sinusoidal
voltages and currents as complex phasors, but it cannot handle arbitrary or transient signals.
How do Laplace transform and phasor analysis simplify circuit analysis?
Laplace transform simplifies circuit analysis by converting differential equations into algebraic equations in the s-domain, making it easier to solve for complex circuits with arbitrary inputs and
initial conditions. Phasor analysis simplifies circuit analysis by converting sinusoidal functions into complex numbers (phasors), which transforms differential equations into simpler algebraic
equations in the frequency domain, but it is limited to steady-state sinusoidal analysis. | {"url":"https://www.physicsforums.com/threads/laplace-transform-vs-phasor-analysis-in-circuit-analysis.1055983/","timestamp":"2024-11-06T04:32:20Z","content_type":"text/html","content_length":"99428","record_id":"<urn:uuid:89188a89-6716-47ca-a139-f99ca1cb349c>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00573.warc.gz"} |
Construction 2.5.5.7 (The Normalized Moore Complex: First Construction). Let $A_{\bullet }$ be a simplicial abelian group. We let $\mathrm{N}_{\ast }(A)$ denote the chain complex given by the
quotient $\mathrm{C}_{\ast }(A) / \mathrm{D}_{\ast }(A)$, where $\mathrm{C}_{\ast }(A)$ is the Moore complex of Construction 2.5.5.1 and $\mathrm{D}_{\ast }(A) \subseteq \mathrm{C}_{\ast }(A)$ is the
subcomplex of Proposition 2.5.5.6. We will refer to $\mathrm{N}_{\ast }(A)$ as the normalized Moore complex of the simplicial abelian group $A_{\bullet }$. | {"url":"https://kerodon.net/tag/00QG","timestamp":"2024-11-12T23:47:12Z","content_type":"text/html","content_length":"10072","record_id":"<urn:uuid:91ee23b3-0c4b-47fd-976f-ff994442d4da>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00415.warc.gz"} |
cost of burnout calculator
Cost Of Burnout Calculator
Introduction: Burnout is a significant issue affecting many individuals in various aspects of life, including work, relationships, and personal well-being. Understanding the cost of burnout can help
individuals and organizations recognize the importance of addressing this issue promptly. To facilitate this, a cost of burnout calculator can be a valuable tool.
How to Use: Using the cost of burnout calculator is simple. Input the required information into the designated fields, such as the number of employees affected, average salary, and estimated time
lost due to burnout. Once all necessary data is provided, click the “Calculate” button to obtain the cost of burnout.
Formula: The formula used to calculate the cost of burnout is as follows:
Cost of Burnout = Number of Employees × Average Annual Salary × Fraction of Working Time Lost
Example Solve: Let’s consider an example where a company has 50 employees affected by burnout, with an average salary of $50,000 per year, and an estimated time lost due to burnout of 2 weeks
(equivalent to 80 hours).
Cost of Burnout = 50 × 50,000 × (80 / 2,080) ≈ $96,154
Therefore, the cost of burnout for this example would be approximately $96,154, treating the 80 hours lost as a fraction of a 2,080-hour working year. (Multiplying the full annual salary by the raw number of hours would not give a meaningful dollar figure, so the time lost must be expressed as a fraction of the salary period.)
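The same calculation in a few lines of Python (a sketch of the corrected formula above; the 2,080-hour working year is an assumption):

def cost_of_burnout(employees, average_salary, hours_lost, work_hours_per_year=2080):
    # Convert the lost hours to a fraction of the salary period before multiplying.
    return employees * average_salary * (hours_lost / work_hours_per_year)

print(round(cost_of_burnout(50, 50000, 80)))  # about 96154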
Q: Can the cost of burnout calculator be used for personal purposes?
A: Yes, the calculator can be used to estimate the cost of burnout for individuals as well as organizations.
Q: What factors should be considered when estimating the time lost due to burnout?
A: Factors such as decreased productivity, absenteeism, and presenteeism should be taken into account when estimating the time lost due to burnout.
Q: Is the cost of burnout calculator accurate for all industries?
A: While the calculator provides a general estimate, individual industries may have unique factors that influence the cost of burnout. It’s essential to consider industry-specific variables for a
more accurate assessment.
Conclusion: The cost of burnout calculator serves as a valuable tool for estimating the financial impact of burnout on individuals and organizations. By recognizing the significant costs associated
with burnout, proactive measures can be taken to address and prevent this issue, ultimately leading to improved well-being and productivity. | {"url":"https://calculatordoc.com/cost-of-burnout-calculator/","timestamp":"2024-11-06T11:31:17Z","content_type":"text/html","content_length":"83938","record_id":"<urn:uuid:ac3c235e-5953-4b5d-a02b-ec62439d3ece>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00091.warc.gz"} |
Calculating Risk Probabilities
Calculating the probability of one risk assigned to multiple tasks
What happens if the risk is assigned to multiple tasks? How to prioritize risks in this case? Here is an example. A Mexican drug lord El Stuppo wants to escape from jail. To do so, he is digging a
tunnel from his side, and his associate is digging a tunnel from the opposite side. They are hoping to meet somewhere in the middle. But there is a risk that the tunnel may collapse. So, there is a
single risk, but we have to assign it to different tasks: “El Stuppo digging” and “Associate digging” with probabilities of 30% and 40% respectively. What would be the total probability of this risk?
Everything depends on how these assignments are correlated. Let's assume that the collapse is caused by heavy rain, which can occur at the same time on both sides of the tunnel. In this case, the
risk assignments are correlated, and the combined probability equals the larger of the two assignment probabilities: 40%. If the risk assignments are not correlated, the combined probability is the
probability that the risk occurs on at least one of the tasks: 1 - (1 - 0.3)(1 - 0.4) = 0.58, or 58%.
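A small Python sketch (mine, not taken from any particular tool) of both cases for the tunnel example:

def combined_probability(probabilities, correlated):
    if correlated:
        # Fully correlated assignments: the risk is driven by one common cause.
        return max(probabilities)
    # Uncorrelated assignments: probability that the risk occurs on at least one task.
    p_none = 1.0
    for p in probabilities:
        p_none *= 1 - p
    return 1 - p_none

print(combined_probability([0.3, 0.4], correlated=True))   # 0.4
print(combined_probability([0.3, 0.4], correlated=False))  # about 0.58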
Calculated Impact and Probabilities of Event Chain
In addition to calculating probability and impact of individual risk events, we can rank event chains. To do this, we need to investigate if the event chain has been fully executed. In some cases,
one event can occur, but does not trigger another event. Remember that a receiver event may be triggered with a certain probability. When this is determined, we calculate the cumulative impact of the
event chain, rather than the individual risks.
If an event chain is fully executed, the cumulative impact of this event chain on all tasks to which the events from the chain are assigned is calculated. The next step is to calculate the
correlation between event chain impacts and project results. Again, this is a similar process to the analysis we perform on individual risk events, and it will allow us to rank event chains based on
how they impact specific project parameters. In our next post, we will look at how we can forecast the impact of these risk events on project cost, duration and finish times. | {"url":"https://intaver.com/blog-project-management-project-risk-analysis/calculating-risk-probabilities/","timestamp":"2024-11-01T19:36:36Z","content_type":"text/html","content_length":"85313","record_id":"<urn:uuid:33401ab8-e1f1-42e7-a2a9-6e00ff375e21>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00741.warc.gz"} |
Quadratic Equations -
Resources for Quadratic Equations
• Questions
• Video Tutorials
Videos relating to Quadratic Equations.
Quadratic Equations - Video - Quadratic Equations 1
Quadratic Equations - Video - Quadratic Equations 2
An equation in which the highest power of the variable is 2 is called a quadratic equation, for example, \(y = 2{x^2} + 5\).
The graph of \(y = {x^2}\) is a parabola; it is concave up with its turning point, or vertex, at \((0,0)\).
The graph of \(y = -{x^2}\) is a parabola; it is concave down with its turning point, or vertex, at \((0,0)\).
For the graph of \(y = a{x^2}\), where \(a\) is a constant (number), the size of \(a\) affects whether the parabola is 'wide' or 'narrow'. For example,
\(y = \frac{x^2}{2}\) and \(y = -4{x^2}\).
For the graph of \(y = a{x^2}+c\), where \(a\) and \(c\) are constants, the effect of \(c\) is to move the parabola up or down from the origin.
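If you have graphing software available, you can check these effects yourself. The short Python sketch below (purely illustrative, using matplotlib) plots the four parabolas used in Example 1 below on the same set of axes:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 200)
for label, y in [("y = x^2", x**2),
                 ("y = x^2 + 3", x**2 + 3),
                 ("y = -x^2 - 2", -x**2 - 2),
                 ("y = -x^2 - 5", -x**2 - 5)]:
    plt.plot(x, y, label=label)

plt.axhline(0, color="black", linewidth=0.5)
plt.axvline(0, color="black", linewidth=0.5)
plt.legend()
plt.show()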
Example 1
a. Graph \(y={x^2}\), \(y={x^2}+3\), \(y=-{x^2}-2\) and \(y=-{x^2}-5\) on the same set of axes.
b. Compare each parabola with the parabola \(y={x^2}\).
\(y={x^2}+3\) is \(y={x^2}\) shifted up 3 units.
\(y=-{x^2}-2\) is \(y={x^2}\) turned upside down and shifted down 2 units.
\(y=-{x^2}-5\) is \(y={x^2}\) turned upside down and shifted down 5 units.
c. What is the vertex of each parabola and is each parabola concave up or concave down?
\(y={x^2}\) has vertex \((0,0)\) and is concave up.
\(y={x^2}+3\) has vertex \((0,3)\) and is concave up.
\(y=-{x^2}-2\) has vertex \((0,-2)\) and is concave down.
\(y=-{x^2}-5\) has vertex \((0,-5)\) and is concave down. | {"url":"https://classmathematics.com.au/resources/nsw/year-9/maths-core/equations/quadratic-equations/","timestamp":"2024-11-05T23:33:24Z","content_type":"text/html","content_length":"62708","record_id":"<urn:uuid:61162576-a904-4cef-8a9b-7ee2c8b8ef82>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00158.warc.gz"} |
Thermodynamics notes for b.sc pdf 2nd year Download 2023
Thermodynamics Notes PDF
Free Thermodynamics notes pdf are provided here for Thermodynamics students so that they can prepare and score high marks in their Thermodynamics exam.
In these free Thermodynamics notes pdf, we will study the relationship between the macroscopic properties of physical systems in equilibrium. It reviews the concepts of thermodynamics learnt at
school from a more advanced perspective and develops them further. The primary goal is to understand the fundamental laws of thermodynamics and their applications to various systems and processes. In
addition, it will also give exposure to students about the Kinetic theory of gases, transport phenomena involved in ideal gases, phase transitions, and behavior of real gases.
We have provided complete Thermodynamics handwritten notes pdf for any university student of the BCA, MCA, B.Sc, B.Tech, or M.Tech branches to enhance their knowledge of the subject and to score better
marks in their Thermodynamics exam.
Free Thermodynamics notes pdf are very useful for Thermodynamics students in enhancing their preparation and improving their chances of success in Thermodynamics exam.
These free Thermodynamics pdf notes will help students tremendously in their preparation for Thermodynamics exam. Please help your friends in scoring good marks by sharing these free Thermodynamics
handwritten notes pdf from below links:
Topics in our Thermodynamics Notes PDF
The topics we will cover in these Thermodynamics notes for b.sc pdf will be taken from the following list:
Zeroth and First Law of Thermodynamics: Extensive and intensive Thermodynamic Variables, Thermodynamic Equilibrium, Zeroth Law of Thermodynamics & Concept of Temperature, Concept of Work & Heat,
State Functions, First Law of Thermodynamics and its differential form, Internal Energy, First Law & various processes, Applications of First Law: General Relation between CP and CV, Work Done during
Isothermal and Adiabatic Processes, Compressibility and Expansion Co-efficient.
Second Law of Thermodynamics: Reversible and Irreversible process with examples. Conversion of Work into Heat and Heat into Work. Heat Engines. Carnot's Cycle, Carnot engine & efficiency.
Refrigerator & coefficient of performance, 2nd Law of Thermodynamics: Kelvin-Planck and Clausius Statements and their Equivalence. Carnot's Theorem. Applications of Second Law of Thermodynamics:
Thermodynamic Scale of Temperature and its Equivalence to Perfect Gas Scale.
Entropy: Concept of Entropy, Clausius Theorem. Clausius Inequality, Second Law of Thermodynamics in terms of Entropy. Entropy of a perfect gas. Principle of Increase of Entropy. Entropy Changes in
Reversible and Irreversible processes with examples. Entropy of the Universe. Entropy Changes in Reversible and Irreversible Processes. Principle of Increase of Entropy. Temperature–Entropy diagrams
for Carnot’s Cycle. Third Law of Thermodynamics. Unattainability of Absolute Zero.
Thermodynamic Potentials: Internal Energy, Enthalpy, Helmholtz Free Energy, Gibbs Free Energy. Their Definitions, Properties and Applications. Magnetic Work, Cooling due to adiabatic
demagnetization, First and second order Phase Transitions with examples, Clausius Clapeyron Equation and Ehrenfest equations.
Maxwell's Thermodynamic Relations: Derivation of Maxwell's thermodynamic Relations and their applications, Maxwell's Relations: (1) Clausius-Clapeyron equation, (2) Value of Cp-Cv, (3) Tds Equations,
(4) Energy equations.
Kinetic Theory of Gases Distribution of Velocities: Maxwell-Boltzmann Law of Distribution of Velocities in an Ideal Gas and its Experimental Verification. Mean, RMS and Most Probable Speeds. Degrees
of Freedom. Law of Equipartition of Energy (No proof required). Specific heats of Gases.
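The mean, RMS and most probable speeds listed above follow directly from the Maxwell-Boltzmann distribution; a small Python sketch (nitrogen at room temperature, chosen only as an example):
import math
k_B = 1.380649e-23             # J/K, Boltzmann constant
m = 28.0e-3 / 6.022e23         # kg, mass of one N2 molecule (example gas)
T = 300.0                      # K
v_p = math.sqrt(2 * k_B * T / m)                  # most probable speed
v_avg = math.sqrt(8 * k_B * T / (math.pi * m))    # mean speed
v_rms = math.sqrt(3 * k_B * T / m)                # root-mean-square speed
print(round(v_p), round(v_avg), round(v_rms))     # about 422, 476, 517 m/s; v_p < v_avg < v_rms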
Molecular Collisions: Mean Free Path. Collision Probability. Estimation of Mean Free Path. Transport Phenomenon in Ideal Gases: (1) Viscosity, (2) Thermal Conductivity and (3) Diffusion. Brownian
Motion and its Significance.
Real Gases: Behavior of Real Gases: Deviations from the Ideal Gas Equation. Andrews' Experiments on CO2 Gas. Virial Equation. Critical Constants. Continuity of Liquid and Gaseous State. Vapour and Gas. Boyle Temperature. van der Waals' Equation of State for Real Gases. Values of Critical Constants. Law of Corresponding States. Comparison with Experimental Curves. p-V Diagrams. Free Adiabatic Expansion of a Perfect Gas. Joule-Thomson Porous Plug Experiment. Joule-Thomson Effect for Real and van der Waals Gases. Temperature of Inversion. Joule-Thomson Cooling.
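For the van der Waals critical constants mentioned above, the standard relations T_c = 8a/(27Rb), P_c = a/(27b^2) and V_c = 3b can be evaluated numerically; a Python sketch using tabulated a, b values for CO2 as an example:
R = 8.314            # J/(mol K)
a = 0.3640           # Pa m^6 / mol^2, van der Waals constant for CO2 (tabulated value)
b = 4.267e-5         # m^3 / mol, van der Waals constant for CO2 (tabulated value)
T_c = 8 * a / (27 * R * b)     # critical temperature, about 304 K
P_c = a / (27 * b ** 2)        # critical pressure, about 7.4 MPa
V_c = 3 * b                    # critical molar volume, about 1.28e-4 m^3/mol
print(T_c, P_c, V_c)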
Thermodynamics Notes PDF FREE Download
Thermodynamics students can easily make use of all these complete Thermodynamics notes pdf by downloading them from below links:
How to Download FREE Thermodynamics Notes PDF?
Thermodynamics students can easily download free Thermodynamics notes pdf by following the below steps:
1. Visit TutorialsDuniya.com to download free Thermodynamics notes pdf
2. Select ‘College Notes’ and then select ‘Physics Course’
3. Select ‘Thermodynamics Notes’
4. Now, you can easily view or download free Thermodynamics handwritten notes pdf
Benefits of FREE Thermodynamics Notes PDF
Free Thermodynamics notes pdf provide learners with a flexible and efficient way to study and reference Thermodynamics concepts. Benefits of these complete free Thermodynamics pdf notes are given below:
1. Accessibility: These free Thermodynamics handwritten notes pdf files can be easily accessed on various devices that makes it convenient for students to study Thermodynamics wherever they are.
2. Printable: These Thermodynamics free notes pdf can be printed that allows learners to have physical copies of their Thermodynamics notes for their reference and offline reading.
3. Structured content: These free Thermodynamics notes pdf are well-organized with headings, bullet points and formatting that make complex topics easier to follow and understand.
4. Self-Paced Learning: Free Thermodynamics handwritten notes pdf offer many advantages for both beginners and experienced students, which makes them a valuable resource for self-paced learning.
5. Visual Elements: These free Thermodynamics pdf notes include diagrams, charts and illustrations to help students visualize complex concepts in an easier way.
We hope our free Thermodynamics notes pdf has helped you and please share these Thermodynamics handwritten notes free pdf with your friends as well 🙏
Download FREE Study Material App for school and college students for FREE high-quality educational resources such as notes, books, tutorials, projects and question papers.
If you have any questions feel free to reach us at [email protected] and we will get back to you at the earliest.
TutorialsDuniya.com wishes you Happy Learning! 🙂
Physics Notes
Thermodynamics Notes FAQs
Q: Where can I get complete Thermodynamics Notes pdf FREE Download?
A: TutorialsDuniya.com have provided complete Thermodynamics free Notes pdf so that students can easily download and score good marks in your Thermodynamics exam.
Q: How to download Thermodynamics notes pdf?
A: Thermodynamics students can easily make use of all these complete free Thermodynamics pdf notes by downloading them from TutorialsDuniya.com
Software Engineering Projects with Source & Documentation
You will always find the updated list of top and best free Software Engineering projects with source code in an easy and quick way. Our Free Software Engineering projects list has projects for
beginners, intermediates as well as experts to learn in 2023.
URL: https://tutorialsduniya.com/software-engineering-projects-pdf/
Author: Delhi University | {"url":"https://www.tutorialsduniya.com/notes/thermal-physics-notes/","timestamp":"2024-11-06T14:42:16Z","content_type":"text/html","content_length":"112697","record_id":"<urn:uuid:5a25c3bb-690d-4d3e-9278-73721bf79fcd>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00270.warc.gz"} |
Covariant derivative of field strength tensor
• Thread starter d8586
In summary, the conversation is about deriving the relations in the right-hand column of equation (4) in a specific paper. The poster is struggling with the third row, which states $D_\rho B_{\mu\nu}=\partial_\rho B_{\mu\nu}$. This does not match their calculation, which seems to imply that $B_\rho(\partial_\mu B_\nu-\partial_\nu B_\mu)=0$, and the poster asks what they may be missing. The response is that the field tensor transforms according to the adjoint representation, not the fundamental one, and for an Abelian group the structure constants are zero.
I am struggling to derive the relations in the right hand column of eq. (4) in the paper.
Even the easy abelian one (third row)
which is
$$D_\rho B_{\mu\nu}=\partial_\rho B_{\mu\nu}$$
doesn't match my calculation
$$D_\rho B_{\mu\nu}=(\partial_\rho+i g B_\rho)(\partial_\mu B_\nu-\partial_\nu B_\mu)$$
the equation seems to imply that
$$B_\rho (\partial_\mu B_\nu-\partial_\nu B_\mu)=0$$
What am I missing?
Orodruin (Staff Emeritus, Science Advisor, Homework Helper, Gold Member) replied:
The field tensor transforms according to the adjoint representation, not the fundamental one. The structure constants for an Abelian group are zero.
Orodruin said:
The field tensor transforms according to the adjoint representation, not the fundamental one. The structure constants for an Abelian group are zero.
Thank you very much!
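For reference, the resolution in formulas: for a non-Abelian gauge group the field strength sits in the adjoint representation, so its gauge-covariant derivative carries a commutator rather than a simple product (the exact sign and coupling conventions depend on the paper),
$$D_\rho B_{\mu\nu}=\partial_\rho B_{\mu\nu}+ig\left[B_\rho ,B_{\mu\nu}\right],$$
and for an Abelian group the commutator vanishes, which is why the third row reduces to
$$D_\rho B_{\mu\nu}=\partial_\rho B_{\mu\nu}.$$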
FAQ: Covariant derivative of field strength tensor
1. What is the covariant derivative of the field strength tensor?
The covariant derivative of the field strength tensor is a mathematical operation used in differential geometry and field theory to calculate how a vector field changes as it moves along a curved
space. It takes into account the curvature of the space and allows for the formulation of equations that are invariant under coordinate transformations.
2. What is the significance of the covariant derivative in field theory?
The covariant derivative is an important tool in field theory as it allows for the formulation of equations that are consistent with the principles of general relativity. It takes into account the
curvature of space and ensures that physical laws remain invariant under coordinate transformations, making it essential in the study of relativistic systems.
3. How is the covariant derivative of the field strength tensor calculated?
The covariant derivative is calculated by taking the partial derivative of the field strength tensor and adding terms that account for the curvature of space. This involves using the Christoffel
symbols, which represent the connection between different coordinate systems in a curved space, and the metric tensor, which describes the curvature of space at a given point.
4. What is the difference between the covariant derivative and the ordinary derivative?
The covariant derivative takes into account the curvature of space, while the ordinary derivative does not. This means that the covariant derivative is better suited for use in curved spaces, while
the ordinary derivative is used in flat spaces. Additionally, the covariant derivative is invariant under coordinate transformations, while the ordinary derivative is not.
5. What are some practical applications of the covariant derivative of the field strength tensor?
The covariant derivative is used extensively in general relativity and other branches of physics that involve curved spaces, such as cosmology and astrophysics. It is also used in modern theories of
particle physics, such as the Standard Model, to describe the behavior of fundamental particles in a consistent and invariant way. Additionally, the covariant derivative has applications in
engineering, particularly in the study of fluid dynamics and electromagnetism. | {"url":"https://www.physicsforums.com/threads/covariant-derivative-of-field-strength-tensor.899264/","timestamp":"2024-11-10T15:26:35Z","content_type":"text/html","content_length":"90598","record_id":"<urn:uuid:dac2380e-3f37-4bad-a5af-cb161a10f8b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00367.warc.gz"} |
1. Main text
Title:Area comparison of hypersurfaces in space forms
Abstract:Mean curvature is one of the most fundamental extrinsic curvatures in the theory of submanifolds. A natural question is whether mean curvature can control the area of hypersurfaces. In this talk, we discuss the area comparison with respect to mean curvature for hypersurfaces in space forms. This is joint work with Professor Sun Jun at Wuhan University.
Title:On the global dynamics of Yang-Mills-Higgs equation
Abstract:In this talk, I will talk about the global dynamics of Yang-Mills-Higgs fields in the future of a hyperboloid for data bounded in some weighted energy space larger than the conformal energy
space. As application, for the abelian case of Maxwell-Klein-Gordon system, we extend the small data result of Lindblad-Sterbenz to general large data.
Title:Global stability dynamics of the timelike extremal hypersurfaces in Minkowski space
Abstract:In this talk, we introduce the relationship between timelike extremal hypersurfaces and classical minimal surfaces. The talk also gives the long-time dynamics of timelike extremal hypersurfaces in Minkowski spacetime with dimension in [2,7]. In this range of dimensions, the stationary solution of the timelike extremal hypersurface equation is a solution of the classical minimal surface equation, which only admits the hyperplane solution by the Bernstein theorem. We prove that this hyperplane solution, as the stationary solution of the timelike extremal hypersurface equation, is asymptotically stable, by finding the hidden dissipative structure of the linearized equation.
Title:Well-posedness of Maxwell-scalar equations on Schwarzschild-AdS spacetimes
Abstract:In this talk, I will introduce the local well-posedness for initial-boundary value problems of conformal Maxwell-scalar equations on Schwarzschild-AdS spacetimes. This is a joint work with
Prof. Xiaoning Wu.
Title:Spherically symmetric Einstein-scalar-field equations for wave-like decaying null infinity
Abstract:In this talk, we will discuss the spherically symmetric Einstein-scalar field equations for wave-like decaying initial data at null infinity. We show that there exists a unique global
solution in (0,∞) and a unique generalized solution on [0,∞) in the sense of Christodoulou. This is joint work with my supervisor Xiao Zhang.
Title:Einstein metrics of cohomogeneity one with a 4m+3 dimensional sphere as principal orbit
Abstract:In this talk, we discuss construction of non-compact complete Einstein metrics with a 4m+3 dimensional sphere as principal orbit. We first discuss the general machinery of using the
cohomogeneity one condition to reduce the Einstein equations. Then we present the details in proving the existence of two continuous 2-parameter families of complete non-positive Einstein metrics with a 4m+3 dimensional sphere as principal orbit. In the end, we will talk about the relation between the new examples and the known ones. This talk is mainly based on the researcher's work in September
Title:Nonlinear stability of the slowly-rotating Kerr-de Sitter family
Abstract:The stability of black hole spacetimes is a critical question in mathematical relativity. The nonlinear stability of the slowly-rotating Kerr-de Sitter family was first proven by Hintz and
Vasy in 2016 using microlocal techniques. In my talk, I will present a novel proof of the nonlinear stability of slowly-rotating Kerr-de Sitter that avoids frequency-space techniques outside of a
neighborhood of the trapped set. The proof utilizes spectral methods to uncover a spectral gap that yields exponential decay at the linearized level. The exponential decay in the linearized problem
is then used in a bootstrap proof to conclude nonlinear stability.
Title:Low regularity ill-posedness for 3D elastic waves and for 3D ideal compressible MHD driven by shock formation
Abstract:We construct counterexamples to the local existence of low-regularity solutions to elastic wave equations and to the ideal compressible magnetohydrodynamics (MHD) system in three spatial
dimensions (3D). Inspired by the recent works of Christodoulou, we generalize Lindblad’s classic results on the scalar wave equation by showing that the Cauchy problems for 3D elastic waves and for
3D MHD system are ill-posed in $H^3(R^3)$ and $H^2(R^3)$, respectively. Both elastic waves and MHD are physical systems with multiple wave-speeds. We further prove that the ill-posedness is caused by
instantaneous shock formation, which is characterized by the vanishing of the inverse foliation density. In particular, when the magnetic field is absent in MHD, we also provide a desired
low-regularity ill-posedness result for the 3D compressible Euler equations, and it is sharp with respect to the regularity of the fluid velocity. Our proofs for elastic waves and for MHD are based
on a coalition of a carefully designed algebraic approach and a geometric approach. To trace the nonlinear interactions of various waves, we algebraically decompose the 3D elastic waves and the 3D
ideal MHD equations into $6\times 6$ and $7\times 7$ non-strictly hyperbolic systems. Via detailed calculations, we reveal their hidden subtle structures. With them we give a complete description of
solutions’ dynamics up to the earliest singular event, when a shock forms. This talk is based on joint works with Haoyang Chen and Silu Yin.
Title:Some recent progress on quasilinear wave
Abstract:Around 2001 Alinhac introduced a new method to settle the global well-posedness of two-dimensional quasilinear wave equations with null conditions. The highest norm of the constructed solution admits an upper bound which grows polynomially in time. In this talk I will discuss a group of recent results related to Alinhac's blowup at infinity conjecture and several related problems.
Title:The relative volume of Poincare-Einstein manifolds
Abstract:For a Poincare-Einstein manifold, the Bishop-Gromov comparison theorem tells us that the relative volume is a non-increasing function of the geodesic radius. In this talk, I will show that
the fractional Yamabe constant at the conformal infinity provides a lower bound for this function. As an application, this implies a gap phenomenon and a rigidity theorem.
Title:Some results on 2D wave-Klein-Gordon equations
Abstract:I will present some recent results regarding the Dirac-Klein-Gordon equations and the Klein-Gordon-Zakharov equations in two space dimensions, which are or can be transformed into coupled
wave-Klein-Gordon equations. These results are joint with Y. Ma (Xi'an) or Z. Wyatt (Cambridge).
Abstract:In this talk, we will introduce our recent joint work with Y. Sire, C. D. Sogge and Junyong Zhang (张军勇) on Strichartz estimates on asymptotically hyperbolic manifolds and their applications.
Title: Higher dimensional affine fluids and geodesics of SL(n)
Abstract:One way to understand incompressible fluids is to regard it as constrained free particle transport, a point of view realized in the Euler-Arnol'd formulation of fluid flow as a geodesic
motion on the volume-preserving diffeomorphism group. In this talk we will discuss some observations that grew out of the further restriction that the fluid flow be affine. Sideris (2017) observed
that such affine fluid flows can be described as geodesics on SL(n) with the Hilbert-Schmidt metric, and studied the properties of several explicit solutions when n = 3. Roberts, Shkoller, and
Sideris (2020) then integrated the geodesic equations when n = 2 and obtained a complete classification. In this talk I will present some contrasting results obtained, in collaboration with my
students Audrey Rosevear and Samuel Sottile, concerning the geodesic geometry of SL(n) for n > 2, and their applications towards stability and instability of the free boundary incompressible Euler
Title:Transition threshold for the 3D Couette flow in a finite channel
Abstract:In this report, I will first state some important progress on linear inviscid damping and enhanced dissipation for shear flows, which play a crucial role in determining the transition
threshold. Then I will sketch key ingredients in the proof of transition threshold for 3-D Couette flow in a finite channel. It was proved that if the initial velocity $v_0$ satisfies $\|v_0-(y,0,0)\|_{H^2}\le c_0\text{Re}^{-1}$, then the solution of the 3D Navier-Stokes equations is global in time and does not transition away from the Couette flow, and rapidly converges to a streak solution for $t\gg \text{Re}^{1/3}$ due to the mixing-enhanced dissipation effect.
Title:Supertranslation invariance of angular momentum
Abstract:While there is a well-accepted notion of total energy and linear momentum of gravitational fields, finding a suitable definition of angular momentum is more challenging. The main difficulty
is the presence of supertranslations. We discuss how a new definition proposed by Po-Ning Chen, Mu-Tao Wang, and Shing-Tung Yau provides a total flux of angular momentum that is supertranslation
invariant. The talk is based on joint work with Po-Ning Chen, Jordan Keller, Mu-Tao Wang, and Shing-Tung Yau.
Title:Sharp decay for Teukolsky equation in Kerr spacetimes
Abstract:Teukolsky equation in Kerr spacetimes governs the dynamics of the spin $s$ components, $s=0,\pm 1, \pm 2$ corresponding to the scalar field, the Maxwell field, and the linearized gravity,
respectively. I will discuss recent joint work with L. Zhang on proving the precise asymptotic profiles for these spin $s$ components in Schwarzschild and Kerr spacetimes.
17. Arick Shao (Queen Mary University of London)
Title:Bulk-boundary correspondence for vacuum asymptotically Anti-de Sitter spacetimes
Abstract:The AdS/CFT conjecture in physics posits the existence of a correspondence between gravitational theories in asymptotically Anti-de Sitter (aAdS) spacetimes and field theories on their
conformal boundary. In this presentation, we prove rigorous mathematical statements toward this conjecture. In particular, we show there is a one-to-one correspondence between aAdS solutions of the
Einstein-vacuum equations and a suitable space of data on the conformal boundary (consisting of the boundary metric and the boundary stress-energy tensor). We also discuss consequences of this
result, as well as the main ingredient behind its proof: a unique continuation property for wave equations on aAdS spacetimes. This is joint work with Gustav Holzegel (and makes use of joint works
with Alex McGill and Athanasios Chatzikaleas).
Title:Global stability of totally geodesic wave maps
Abstract:In this presentation I will talk about some recent progress on the global stability problem of totally geodesic wave maps in $\mathbb{R}^{1+2}$. We reformulated the problem when the target
space is a general Riemannian manifold (compared with the previous works where the target are supposed to be space forms). With some geometrical observations, we found that the evolution system of
the perturbation still enjoys sufficiently nice structure such that the global stability can be established via a global analysis on a type of wave-like systems in $\mathbb{R}^{1+2}$. For this
purpose, some techniques originally aimed at wave-Klein-Gordon systems are generalized. Especially, we do not demand the compactness on the support of initial data. This is a joint work with S.-H.
Title:On the instability mechanism of naked singularities
Abstract:I will give a brief introduction of the weak cosmic censorship, review the proof of the instability of naked singularities in spherical symmetry, and discuss some further related problems.
Title:The global stability of de Sitter spacetime for the arbitrary dimensional Einstein--Yang--Mills system
Abstract:We establish the non-linear global stability of solutions to the $n$-dimensional ($n\geq 4$) Einstein--Yang--Mills (EYM) system, which continues the local existence of the system in the
companion paper (Chao. Liu and Jinhua. Wang, 2021). A new tensorial Fuchsian formulation of the conformal EYM system with temporal gauge is presented and this Fuchsian formulation leads to the global
existence. This result generalizes the $4$ dimensional results of Friedrich (J. Differential Geometry, 1991) to the non-conformal invariant (dimension $n>4$) case. This is joint work with Chao. Liu
and Todd A. Oliynyk. | {"url":"https://gxcmr.gxu.edu.cn/info/1147/1661.htm","timestamp":"2024-11-06T04:36:13Z","content_type":"text/html","content_length":"77913","record_id":"<urn:uuid:8171ba33-022a-4081-81d7-07a6168fe6a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00729.warc.gz"} |
What is the distance between the following polar coordinates?: (5,(7pi)/4), (3,(9pi)/8) | HIX Tutor
What is the distance between the polar coordinates $(5,\frac{7\pi}{4})$ and $(3,\frac{9\pi}{8})$?
Answer 1
$\sqrt{34 - 30 \cos \left(\frac{5 \pi}{8}\right)} \approx 6.744$
Answer 2
To find the distance between the polar coordinates (5, (7π)/4) and (3, (9π)/8), you can use the distance formula in polar coordinates, which is given by:
[ \text{Distance} = \sqrt{r_1^2 + r_2^2 - 2r_1r_2\cos(\theta_2 - \theta_1)} ]
Plugging in the values, the distance between the two points is:
[ \sqrt{5^2 + 3^2 - 2(5)(3)\cos\left(\frac{9\pi}{8} - \frac{7\pi}{4}\right)} ]
[ = \sqrt{25 + 9 - 30\cos\left(\frac{9\pi}{8} - \frac{7\pi}{4}\right)} ]
[ = \sqrt{34 - 30\cos\left(\frac{9\pi}{8} - \frac{7\pi}{4}\right)} ]
[ = \sqrt{34 - 30\cos\left(-\frac{5\pi}{8}\right)} ]

[ = \sqrt{34 - 30\cos\left(\frac{5\pi}{8}\right)} \approx 6.744 ]
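A quick numerical check of this result in Python:
import math
r1, t1 = 5, 7 * math.pi / 4
r2, t2 = 3, 9 * math.pi / 8
d = math.sqrt(r1**2 + r2**2 - 2 * r1 * r2 * math.cos(t2 - t1))
print(d)   # about 6.744, matching the value above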
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/what-is-the-distance-between-the-following-polar-coordinates-5-7pi-4-3-9pi-8-8f9afa2114","timestamp":"2024-11-12T21:47:57Z","content_type":"text/html","content_length":"576869","record_id":"<urn:uuid:9f4fce64-1afa-4548-96be-cb7bbcbcb5f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00478.warc.gz"} |
Problem Set: Marginal Analysis
Test your understanding of the learning outcomes in this module by working through the following problems. These problems aren’t graded, but they give you a chance to practice before taking the quiz.
If you’d like to try a problem again, you can click the link that reads, “Try another version of this question.”
Farmer’s Market
Midterm Grades
Candela Citations
CC licensed content, Original | {"url":"https://courses.lumenlearning.com/suny-microeconomics/chapter/problem-set-marginal-analysis/","timestamp":"2024-11-15T03:11:52Z","content_type":"text/html","content_length":"27674","record_id":"<urn:uuid:6c47fb2f-5447-4ef4-a18d-661f59039fa6>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00662.warc.gz"} |
Optimizing Pelton turbine performance: unveiling the power of three nozzles for maximum efficiency and sustainable hydropower generation
Water energy is one of the potential renewable energy sources; the problem so far is the low efficiency of the Pelton blade shape. A test apparatus is therefore needed to determine the characteristics and performance of the Pelton turbine as a hydroelectric power plant, which is the aim of this research. Pelton turbines work by utilizing the potential energy of water stored at a certain head, which flows through a penstock/pipe equipped with a nozzle at the end. The high head causes the water to be under high pressure when it reaches the nozzle. The water coming out of the nozzle becomes kinetic energy in the form of a pressurized water jet, which is used to rotate the runner of the Pelton turbine. In this study, the effect of the number of nozzles used to rotate the Pelton turbine was analyzed, with the result that the efficiency of the Pelton turbine is directly proportional to the number of nozzles. The highest efficiency is obtained with 3 nozzles, with a maximum efficiency value of 13.7 %, against 12.209 % with 2 nozzles and 8.82 % with 1 nozzle.
• This research investigates the characteristics of Pelton turbines, particularly the effect of the number of nozzles on turbine performance.
• Comparing the characteristics of the number of nozzles on a Pelton turbine.
• Laboratory scale Pelton turbine simulator experiment showing experimental results and efficiency.
• Increase in the width of the separator by 1% of the inner bucket width results in a decrease in efficiency of approximately 1% at full load.
• In the design of the Pelton turbine blade, we adhere to strict sizing rules to ensure optimal performance.
• Preventive maintenance and timely replacement of worn components are key to maintaining the durability and efficiency of hydroelectric systems using Pelton turbine buckets.
1. Introduction
Many areas in Indonesia have the potential for renewable energy sources, one of which is water energy for hydroelectric power plants. Such sites are widely spread in areas that have no PLN (grid) electricity but do have river flow; the problem so far in the use of water energy is its still-low efficiency, so that a test apparatus is needed for blade trials to determine the characteristics [1, 2]. Renewable energy is increasingly dominating the global stage in an effort to reduce dependence on fossil fuels. In this context, hydropower plays an important role as a clean and sustainable energy provider. In 2021, the installed hydropower capacity reached 1330 gigawatts, and the expected growth of around 60 % by 2050 presents a challenge to continuously improve its technology and sustainability [3]. Although hydropower technology has reached a level of maturity, developments continue to improve flexibility and reliability and to reduce environmental impact. This includes the refinement of turbine technology, where research continues to optimize turbine performance and reduce environmental impact [4, 5].
In 1880, Lester A. Pelton introduced the Pelton turbine, an impulse turbine that uses the potential energy of water stored at a certain elevation. This energy is delivered through a water pipe or
channel with a nozzle at the end. The significant head creates high water pressure as it reaches the nozzle, turning the water into a high-speed kinetic jet. This water jet is then used to rotate the
runner of the Pelton turbine [6].
In Europe, about 16 % of the installed hydropower capacity uses Pelton turbines. In European hydropower plants with a capacity of more than 50 MW, about 31 % use Pelton turbines. These turbines are
preferred in mountainous areas because they can operate efficiently at high heads and lower water flows compared to Kaplan and Francis turbines. Nozzle control regulates the water flow and keeps the
efficiency above 90 % even with varying turbine loads [8, 9].
Pelton turbines have two configurations: horizontal axis and vertical axis. In the horizontal axis configuration, no more than two nozzles can be installed, while in the vertical axis configuration,
up to six nozzles can be installed symmetrically around the wheel. The choice of configuration affects the load distribution of the turbine and requires careful consideration to avoid collisions
between the two water jets, thereby minimizing power loss. In addition, the configuration of the turbine axis also affects the friction losses and the windage [10, 11].
This research investigates the characteristics of Pelton turbines, particularly the effect of the number of nozzles on turbine performance. The research also examines the effect of water pressure on
turbine efficiency and output. The next section details the methods used, including turbine design parameters and equations to calculate the performance metrics of various turbines. The results and
analysis of the turbine tests, considering variations in nozzle number and water pressure, are then presented and discussed. The conclusion summarizes the key findings, emphasizing the importance of
nozzle number and water pressure in optimizing the efficiency and power of Pelton turbines. Finally, references are provided for additional context and credibility.
Fig. 1. Turbines with more than 50 MW capacity worldwide [7]
2. Method
In this study, a careful analytical approach is taken to reveal the characteristics of the Pelton turbine and the effect of the number of nozzles on its performance. By detailing the design
parameters of the turbine, such as spray diameter, nozzle length, nozzle needle angle, and others, we conducted a series of tests with variations in the number of nozzles. In the design of the Pelton
turbine blade, we adhere to strict sizing rules to ensure optimal performance. In addition, this study applies an empirical approach to the number of buckets and provides a careful calculation, which
is $Z\ge 17$. This approach allows in-depth analysis of the torque moment, fluid flow velocity, and turbine power. In evaluating the turbine efficiency, we calculated the turbine head (${H}_{t}$) value as the height difference between the fluid inlet and the turbine position. Throughout the experimental process, variations in the number of nozzles and water pressure were considered to study their effects on the performance of the Pelton turbine. With this approach, our research provides a thorough analysis of Pelton turbines, bringing a deeper understanding of how to optimize turbine
efficiency and performance in the context of renewable energy.
From the turbine picture above, we can see that $D$ is the diameter of the turbine runner as a place to install the bucket. While $Z$ here describes the number of buckets and also the number of
nozzles. For ${d}_{d}$ and ${d}_{s}$ are the diameter of the nozzle and the diameter of the water jet. While $B$ is the width of the bucket to receive water pressure from the nozzle to rotate the
Fig. 2. Pelton turbine [12, 13]
2.1. Pelton turbine design steps
The following is the design procedure of Pelton turbine for micro hydro power generation.
Preliminary preparation for turbine location.
1) Head calculation (${H}_{n}$):
${H}_{n}={H}_{g}-{H}_{tl},$
where ${H}_{g}$ is the Gross Head, which is the vertical distance between the water surface at the intake and at the turbine, and ${H}_{tl}$ is the total head loss due to open channels, trash racks, intakes, penstocks, and gates or valves. This loss is approximately equal to 6 % of the Gross Head.
2) Water flow rate calculation ($Q$).
The water flow rate can be calculated by measuring the velocity of the river or stream (${V}_{r}$) in (m·s^-1) and its cross-sectional area ${A}_{r}$ in (m^2):
$Q={V}_{r}{A}_{r}\left({\mathrm{m}}^{3}\cdot {\mathrm{s}}^{-1}\right).$
3) Turbine input power calculation (${P}_{ti}$):
${P}_{ti}=\rho g{C}_{n}^{2}{H}_{n}{Q}_{t}\left(\mathrm{W}\mathrm{a}\mathrm{t}\mathrm{t}\right),$
where $\rho$ is the density of the flowing fluid, $g$ is gravity speed, ${C}_{n}^{2}$ is the coefficient of performance or efficiency of the turbine, ${H}_{n}$ is the value of the head turbin dan $
{Q}_{t}$ is fluid flow rate.
4) Turbine rotational speed calculation (${N}_{s}$):
where ${n}_{j}$ is the number of turbine nozzles, and can be calculated as follows:
where ${Q}_{t}$ is water flow capacity of each nozzle (m^3·s^-1).
Then for the calculation of turbine rotational speed (rpm) can be calculated by:
From the continuity equation:
$Q=Z\frac{\pi {d}_{s}^{2}}{4}{c}_{1u},$
${d}_{s}=\sqrt{\frac{4Q}{Z\pi {c}_{1u}}},$
where ${d}_{s}$ is the diameter of the jet spray as the release of pressurized water flow to hit the turbine blades, $Z$ is the number of nozzles and in this study used 3 (three) nozzles, $Q$ is the
water flow discharge per unit time, and ${c}_{1u}$ is the flow velocity coefficient of the fluid.
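As an illustration of the continuity relation above, a minimal sketch in plain Python; the flow rate and jet velocity below are placeholder values, not the measured values of this study:
import math
Q = 1.0e-3      # m^3/s, total flow rate (placeholder value)
Z = 3           # number of nozzles used in this design
c1u = 15.0      # m/s, jet velocity through each nozzle (placeholder value)
d_s = math.sqrt(4 * Q / (Z * math.pi * c1u))    # jet diameter from Q = Z*(pi*d_s^2/4)*c1u
print(d_s * 1000, "mm")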
Fig. 3. Pelton turbine nozzle design
Table 1. Pelton turbine nozzle parameters
No Parameter Value
1 Nozzle length 235 mm
2 Nozzle needle angle 70°
3 Number of nozzles 3 pieces
4 Wheel diameter 14.07 mm
5 Jet spray Diameter 8.72 mm
The size of the Pelton turbine bucket uses the following rules:
$3.1\le \frac{B}{{d}_{s}}\le 3.4.$
With the provisions: $B=$ 3.1 ${d}_{s}$ for 1 nozzle, $B=$ 3.2 ${d}_{s}$ for 2 nozzles, $B=$ 3.2 ${d}_{s}$ for 4-5 nozzles and $B>$ 3.3 ${d}_{s}$ for 6 nozzles.
2.2. Calculation of nozzle dimensions
The flow rate at each nozzle can be calculated as follows:
where ${V}_{j}$ is the velocity of water through the nozzle. ${V}_{j}={C}_{n}\sqrt{2g{H}_{n}}$ and ${A}_{j}$ is the nozzle area.
The area of the nozzle can be calculated:
${A}_{j}=\pi \frac{{D}_{j}^{2}}{4}\left({\mathrm{m}}^{2}\right).$
From two equations above we can calculate the nozzle diameter:
${D}_{j}=\sqrt{\frac{4{Q}_{t}}{\left(\pi {n}_{j}{V}_{j}\right)}}\mathrm{}\mathrm{}\left(\mathrm{m}\right),$
where ${D}_{j}$ is nozzle diameter, ${Q}_{t}$ is the nozzle water flow, ${n}_{j}$ is the number of the nozzle.
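Putting the jet-velocity and nozzle-diameter relations together, a short sketch in plain Python; the head, flow rate and coefficient C_n below are assumed example values, not the design values of this paper:
import math
g = 9.81
H_n = 12.0      # m, net head (assumed example value)
C_n = 0.97      # nozzle velocity coefficient (typical assumed value)
Q_t = 1.0e-3    # m^3/s, flow rate (assumed example value)
n_j = 3         # number of nozzles
V_j = C_n * math.sqrt(2 * g * H_n)                 # jet velocity through the nozzle
D_j = math.sqrt(4 * Q_t / (math.pi * n_j * V_j))   # nozzle (jet) diameter
print(round(V_j, 2), "m/s,", round(D_j * 1000, 2), "mm")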
The nozzle length can be calculated using the following equation:
${L}_{n}=\frac{\left({D}_{pn}-{D}_{j}\right)}{tan\left(\beta \right)}\left(\mathrm{m}\right),$
The nozzle outlet should be as close as possible to the Pelton runner to prevent the jet from deviating from the designed diameter. The distance between the nozzle and the runner should be 5 % of the
runner orifice diameter, plus an additional allowance of (3) mm to allow for deflector emergencies:
To calculate the distance between the bucket and the nozzle, the minimum distance between the bucket and the nozzle can be calculated using the equation:
2.3. Calculation of bucket dimensions
Empirically, the number of buckets must fulfill the following equation $Z\ge 17$, where $Z$ is the number of buckets in the Pelton turbine.
Table 2. Pelton turbine parameters
No Parameter Value
1 Runner radius 96.28 mm
2 Bucket width 28.28 mm
3 Bucket depth 12.08 mm
4 Bucket height 65.28 mm
5 Number of buckets 18 pieces
6 Bucket gap width 13.11 mm
7 Bucket thickness 15 mm
The bucket axial width can be calculated as:
${B}_{w}=3.4{D}_{j},$
where ${D}_{j}$ is the nozzle diameter. Using a multiplying factor of 3.4, this equation states that the axial width of the bucket (${B}_{w}$) is obtained by multiplying the nozzle diameter (${D}_{j}$) by 3.4; in other words, the axial width of the bucket is about 3.4 times the jet diameter.
Bucket radial length can be calculated as:
Fig. 4. a) Bucket design, b) runner design, c) Pelton turbine design
The radial length of the bucket can be calculated as:
The number of buckets in each runner should be determined so that no water particles are lost while minimizing the risk of adverse interactions between outflowing water particles and adjacent
buckets. The number of buckets can be calculated by:
Bucket moment arm length:
The radius of the bucket center of mass to the runner center is given as:
Meanwhile, the bucket volume can be calculated as:
We can calculate the mass of the bucket as follows:
${M}_{b}={\rho }_{m}{V}_{b}\left(\mathrm{k}\mathrm{g}\right).$
Fig. 5. Pelton turbine bucket surface
Pelton turbine buckets are one of the most critical components in hydroelectric power plants using the Pelton turbine principle. The main wear of these buckets occurs on their internal surfaces,
which are constantly exposed to high water flow and pressure. The materials used in the construction of the bucket must be resistant to corrosion, erosion, and wear caused by the friction of the
water and the small grains carried by the flow. Over time, the repeated action of the flowing water will cause physical and chemical changes to the bucket surface that, if not properly addressed, can
reduce the efficiency of the Pelton turbine and shorten the life of the overall system.
The separators inside the Pelton bucket are also susceptible to wear, as they serve as the elements that direct the water flow to the turbine wheel with a degree of precision. According to Brekke et
al. (2002), an increase in the width of the separator by 1 % of the inner bucket width results in a decrease in efficiency of approximately 1 % at full load. The friction and stresses that occur
during operation cause gradual wear of the splitter surfaces. It is important to regularly monitor and inspect the condition of the splitter and the inner surface of the bucket to ensure that there
is no significant wear that could affect the performance of the Pelton turbine. Preventive maintenance and timely replacement of worn components are key to maintaining the durability and efficiency
of hydroelectric systems using Pelton turbine buckets.
2.4. Torque exerted on the wheel
The force ${F}_{w}$ generated on the bucket by the difference between these momentum flow rates is:
${F}_{w}=\stackrel{˙}{m}v-\stackrel{˙}{m}\left[u+k\left(v-u\right)\mathrm{c}\mathrm{o}\mathrm{s}\beta \right],$
${F}_{w}=\stackrel{˙}{m}\left(v-u\right)\left(1-k\mathrm{c}\mathrm{o}\mathrm{s}\beta \right).$
It is helpful to express the ratio of bucket speed 𝑢 to jet speed 𝑣 as $\lambda$:
$\lambda =\frac{u}{v}.$
So that:
${F}_{w}=\stackrel{˙}{mv}\left(1-\lambda \right)\left(1-k\mathrm{c}\mathrm{o}\mathrm{s}\beta \right).$
Therefore, the torque $T$ exerted on the wheel is:
$T={F}_{w}{R}_{w}=\stackrel{˙}{m}{R}_{w}v\left(1-\lambda \right)\left(1-k\mathrm{c}\mathrm{o}\mathrm{s}\beta \right).$
We see that for a given wheel fed with water at a fixed flow rate (so that $\stackrel{˙}{m}$ and $v$ are also fixed), the torque $T$ varies as ($1-\lambda$). Hence, the torque drops linearly from a
maximum when $\lambda$ = 0 (i.e. when the wheel is stationary) to zero when $\lambda =$ 1 (i.e. when the bucket is moving at the same speed as the jet).
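A minimal numerical sketch of the force and torque relations above, in plain Python; the mass flow rate, jet speed, bucket angle, friction factor k and wheel radius are placeholder values:
import math
m_dot = 1.0     # kg/s, mass flow rate of the jet (placeholder value)
v = 15.0        # m/s, jet speed (placeholder value)
k = 0.9         # friction factor of the relative flow over the bucket (assumed)
beta = math.radians(165)    # bucket exit angle (assumed)
R_w = 0.1       # m, wheel radius (placeholder value)
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):    # lambda = u/v, bucket-to-jet speed ratio
    F_w = m_dot * v * (1 - lam) * (1 - k * math.cos(beta))   # force on the bucket
    T = F_w * R_w                                            # torque, linear in (1 - lambda)
    print(lam, round(F_w, 2), round(T, 3))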
2.5. Head turbine
Turbine Head is the height difference between the fluid intake position and the position of the turbine. To determine the head value, use the following equation:
${H}_{t}=\left[\frac{\left({p}_{1}-{p}_{2}\right)}{\rho g}+\frac{\left({V}_{1}^{2}-{V}_{2}^{2}\right)}{2g}+\left({Z}_{1}-{Z}_{2}\right)\right],$
where ${H}_{t}$ is the turbine head, ${p}_{1}$ is the pressure at fluid surface 1, ${p}_{2}$ is the pressure at fluid surface 2, ${V}_{1}$ is the flow velocity at point 1, ${V}_{2}$ is the flow velocity at point 2, $g$ is the gravitational acceleration, $\rho$ is the fluid density, ${Z}_{1}$ is the flow height at point 1, and ${Z}_{2}$ is the flow height at point 2.
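As an illustration, if the gauge pressure measured at the nozzle dominates the velocity and elevation terms (a simplifying assumption made here, not a statement of the paper), the turbine head reduces to the pressure head; in plain Python:
rho, g = 1000.0, 9.81           # water density (kg/m^3) and gravitational acceleration (m/s^2)
p_gauge = 18.13 * 6894.76       # Pa, example pressure of 18.13 Psi converted from psi to pascal
H_t = p_gauge / (rho * g)       # pressure head
print(round(H_t, 2), "m")       # about 12.7 m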
Fig. 6. Variation of torque $T$ with speed ratio $\lambda$
2.6. Torque moment
The moment of torque is the amount of force acting on the turbine that causes the turbine to rotate. The torque moment value of the turbine can be known from the following equation:
${M}_{t}=Fr,$
where ${M}_{t}$ is the turbine torque moment (Nm), $F$ is the force on the turbine (N), and $r$ is the radius (m).
2.7. Flow velocity
Fluid flow velocity can be calculated using the following equation:
$v=\frac{Q}{A},$
where $v$ is the flow velocity, $Q$ is the water discharge and $A$ is the cross-sectional area of the pipe.
2.8. Water power (WHP)
Water power (WHP) is defined as the effective power of the water flow per unit time:
$WHP=\rho gQ{H}_{t},$
where $\rho$ is the water density (kg/m^3), $g$ is the acceleration of gravity (m/s^2), $Q$ is the water discharge (m^3/s) and ${H}_{t}$ is the turbine head (m).
2.9. Turbine power (BHP)
Turbine power (BHP) is the power generated by the fluid to drive the turbine at a certain torque and speed. Turbine power can be calculated using the following equation:
$BHP=\frac{2\pi N}{60}{M}_{t},$
where $N$ is the turbine rotation (rpm) and ${M}_{t}$ is the torsional moment (Nm).
2.10. Turbine efficiency
Turbine efficiency is the percentage of power that can be generated or converted by a turbine from the potential energy or kinetic energy of water. The value of turbine efficiency is determined in
$\eta =\left(\frac{BHP}{WHP}\right)100%,$
where BHP is turbine power and WHP is water power.
Fig. 8. Pelton turbine system
2.11. Schematic of Pelton turbine system
The Pelton turbine consists of a set of runner blades rotated by water, whose dimensions are shown in Table 2. The turbine blades are sprayed using one, two and three nozzles, and the results for each nozzle configuration are used for the analysis; the nozzle dimensions are shown in Table 1. The Pelton turbine is a water turbine that is suitable for high heads. Therefore, in this laboratory-scale research, a water pump is used as a booster to raise the water pressure in the nozzle pipe, by adjusting the overflow pipe faucet, to a pressure of 10 Psi. The Prony brake, generator and the sensors used for data collection in this study are shown in Fig. 9.
This research also includes a GUI and datalogger used to store data, for example to set the data recording time interval by changing the value in “Interval Record Data (s)” in the GUI. For this experiment, the data recording interval was set to 10 seconds and confirmed with ENTER (a larger value, in seconds, can be used if 10 seconds is too fast). When the GUI is recording data from the trainer, the RECORD icon, which was previously yellow, changes to a green RECORDING icon; the indicator that the data has been recorded by the GUI is the “Store Data” indicator lighting up green, as shown in Fig. 10. After the “Store Data” indicator turns green, the load is increased by turning the Prony brake knob in steps of 0.1 kg, as shown in Fig. 11.
Fig. 9. Schematic of Pelton turbine system: 1. Pump, 2. Faucet input line nozzles, 3. Drain line faucet, 4. Water pressure sensor, 5. Water flow speed sensor, 6. Pelton turbine, 7. Speed sensor, 8.
Prony brake, 9. Generator, 10. Electric current sensor, 11. Voltage sensor
Fig. 10. Graphical user interface
3. Pelton turbine test results
In this study, the effect of the number of nozzles on Pelton turbine performance was tested, with the number of nozzles (one, two or three) as the test parameter. In the first test, one nozzle is used to rotate the Pelton turbine.
In Fig. 12 we can see the relationship between turbine rpm and turbine power which is the result of testing the pelton turbine using 1 (one) nozzle. The results show a graph with a parabolic curve
shape. The maximum turbine power was obtained at 4.929 watts at a turbine rotation of 271.49 rpm. While in the graph of the relationship between torque and turbine power, the maximum turbine power is
obtained at a torque of 0.173 Nm. This can be known from the intersection of the turbine rpm and turbine power relationship curves with the torque and turbine power relationship curves.
Fig. 11. Prony brake, generator and RPM
Fig. 12. Pelton turbine graph with 1 nozzle
Fig. 13. Graph of Pelton turbine with 2 nozzles
The next result using 2 (two) nozzles on the Pelton turbine can be seen in the graph of the relationship between turbine rpm and turbine power using 2 nozzles. A graph with a parabolic curve is
obtained, where the maximum power obtained is 13.517 watts at a turbine rotation of 267.86 rpm. While on the graph of the power generated against the Pelton turbine torque, a parabolic graph is
obtained with the maximum power obtained at a torque of 0.482 Nm.
The following are the results of research on Pelton turbines using 3 (three) nozzles. In the graph of the relationship between turbine rpm and turbine power using 3 nozzles, a graph with a parabolic
curve is obtained, where we can see that the power and RPM are increasing compared to using 1 (one) or 2 (two) nozzles. The maximum power obtained is 12.439 watts at a turbine rotation of 212.77 rpm.
The torque obtained at the time of maximum turbine power is 0.558 Nm.
Fig. 14. Pelton turbine graph with 3 nozzles
Fig. 15. Pelton turbine efficiency graph against load
After testing the performance of the Pelton turbine by loading each turbine with a different number of nozzles, namely 1, 2 and 3 nozzles, the results are obtained as shown in the graph above. In the
graph of nozzle 1 (one) we can see that initially the efficiency value increases quickly and the increase is quite drastic compared to using 2 (two) or 3 (three) nozzles. This is because using 1
(one) nozzle on the Pelton turbine makes the efficiency increase because there is no other resistance received by the turbine compared to using 2 (two) or 3 (three) nozzles. When using more than 1
(one) nozzle, the other turbine surface on the other side of the turbine will receive spray, so initially there is a greater load compared to using only 1 (one) nozzle. However, if the load continues
to increase, it is necessary to add another nozzle to increase efficiency. From the graph, we can see that the Pelton turbine with 3 nozzles has a higher efficiency value compared to turbines with 1
or 2 nozzles.
Based on the graph above, it is found that the higher the water pressure and turbine power, the higher the efficiency. At the peak pressure of 18.13 Psi, the turbine efficiency is 10.656 %, while at a pressure of 10.4 Psi the efficiency drops to 0 %. More details can be seen in Table 3.
To test the effect of pressure variation on Pelton turbine performance, a constant load of 1 kg was applied to the Prony brake and then the water pressure at the nozzle was varied. The test results produce the graph shown in Fig. 16, where changes in water pressure affect the power and efficiency of the Pelton turbine. The higher the pressure at the nozzle, the higher the power and efficiency of the turbine, as shown in Table 3, where row 3 has the highest efficiency of 11.12 %.
Table 3. Pelton turbine test results with pressure variations
No Pressure (Psi) Flowrate (L/min) Load (kg) Speed (RPM) Torque (Nm) Water power / WHP (W) Turbine power / BHP (W) Efficiency (%)
1 18.13 59.94 1.04 416.67 0.30576 125.1269464 13.33464001 10.6568892
2 17.97 59.57 1.04 419.58 0.30576 123.2571094 13.42776839 10.8941127
3 17.18 57.98 1.05 394.74 0.3087 114.6931988 12.75428624 11.120351
4 16.4 55.98 1.04 350.88 0.30576 105.7092573 11.22917053 10.6226936
5 16 54.95 1.03 314.14 0.30282 101.2334297 9.956717562 9.83540476
6 14.66 53.12 1.04 267.86 0.30576 89.66610827 8.572291437 9.56023586
7 13.4 51.26 1.03 225.56 0.30282 79.08965783 7.14916029 9.03931119
8 12.3 48.52 1.03 158.31 0.30282 68.71668242 5.01766078 7.30195435
9 11.43 45.84 1.09 94.94 0.32046 60.32913436 3.184428111 5.278425
10 10.96 44.95 1.04 53.1 0.30576 56.72526148 1.699352928 2.99576041
11 10.4 42.5 1.03 0 0.30282 50.8930512 0 0
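As a cross-check of the definitions in Sections 2.8-2.10, the first data row of Table 3 can be approximately reproduced from the measured pressure, flow rate, speed and torque; a sketch in plain Python, where the psi-to-pascal conversion and the use of the pressure head alone are the only assumptions added here:
import math
rho, g = 1000.0, 9.81
p = 18.13 * 6894.76              # nozzle pressure, psi converted to Pa
Q = 59.94 / 1000 / 60            # flow rate, L/min converted to m^3/s
N, M_t = 416.67, 0.30576         # measured speed (rpm) and torque (Nm)
H_t = p / (rho * g)              # turbine head from the pressure term
WHP = rho * g * Q * H_t          # water power
BHP = 2 * math.pi * N / 60 * M_t # turbine power
print(round(WHP, 1), round(BHP, 2), round(100 * BHP / WHP, 2))
# about 125 W, 13.3 W and 10.7 %, close to the first row of Table 3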
Fig. 16. Effect of water pressure variations on Pelton turbine power and efficiency
4. Conclusions
From the test results obtained with the Pelton turbine blade performance test equipment, it is found that the number of nozzles on the Pelton turbine affects its efficiency: the Pelton turbine using 3 nozzles has the highest efficiency value of 13.7 %, compared to the Pelton turbine using 2 or 1 nozzles with efficiency values of 12.209 % and 8.82 %, respectively. The turbine power output is influenced by the water pressure: the turbine power output converted from the potential energy of the water is directly proportional to the water pressure, so the higher the water pressure, the greater the turbine power output and its efficiency.
• E. Y. Setyawan, S. Djiwo, D. H. Praswanto, P. Suwandono, and P. Siagian, “Design of low flow undershot type water turbine,” Journal of Science and Applied Engineering, Vol. 2, No. 2, p. 50, Nov.
2019, https://doi.org/10.31328/jsae.v2i2.1184
• E. Y. Setyawan, S. Djiwo, D. H. Praswanto, P. Suwandono, P. Siagian, and W. Naibaho, “Simulation model of vertical water wheel performance flow,” in IOP Conference Series: Materials Science and
Engineering, Vol. 725, No. 1, p. 012020, Jan. 2020, https://doi.org/10.1088/1757-899x/725/1/012020
• “Hydropower Status Report,” IHA Central Office, United Kingdom, International Hydropower Association (IHA), 2021.
• E. Quaranta et al., “Hydropower case study collection: innovative low head and ecologically improved turbines, hydropower in existing infrastructures, hydropeaking reduction, digitalization and
governing systems,” Sustainability, Vol. 12, No. 21, p. 8873, Oct. 2020, https://doi.org/10.3390/su12218873
• I. Kougias et al., “Analysis of emerging technologies in the hydropower sector,” Renewable and Sustainable Energy Reviews, Vol. 113, p. 109257, Oct. 2019, https://doi.org/10.1016/
• V. E. Mikhailov, I. P. Ivanchenko, and A. N. Prokopenko, “Modern state of hydropower and construction of hydro turbines in Russia and abroad,” Thermal Engineering, Vol. 68, No. 2, pp. 83–93, Feb.
2021, https://doi.org/10.1134/s004060152102004x
• E. Quaranta et al., “Assessing the energy potential of modernizing the European hydropower fleet,” Energy Conversion and Management, Vol. 246, p. 114655, Oct. 2021, https://doi.org/10.1016/
• M. Nechleba, Hydraulic Turbines: Their Design and Equipment. Prague: Artia, 1957.
• F. A. Essa and F. Selim, “Improving the operation of small reverse osmosis plant using Pelton turbine and supplying emergency electric loads,” Energy Sources, Part A: Recovery, Utilization, and
Environmental Effects, Vol. 45, No. 4, pp. 12040–12055, Oct. 2023, https://doi.org/10.1080/15567036.2023.2268594
• H. Zhao, B. Zhu, B. Xu, P. Tang, N. Guo, and W. Zhang, “Investigation on the influence of bucket’s flow patterns on energy conversion characteristics of Pelton turbine,” Engineering Applications
of Computational Fluid Mechanics, Vol. 17, No. 1, Dec. 2023, https://doi.org/10.1080/19942060.2023.2234435
• J.-W. Kim, I. C. Jo, J. H. Park, Y. Shin, and J. T. Chung, “Theoretical method of selecting number of buckets for the design and verification of a Pelton turbine,” Journal of Hydraulic Research,
Vol. 55, No. 5, pp. 695–705, Sep. 2017, https://doi.org/10.1080/00221686.2017.1354933
• J.-C. Marongiu, F. Leboeuf, J. Caro, and E. Parkinson, “Free surface flows simulations in Pelton turbines using an hybrid SPH-ALE method,” Journal of Hydraulic Research, Vol. 48, No. sup1, pp.
40–49, Jan. 2010, https://doi.org/10.1080/00221686.2010.9641244
• R. Khan and A. Kumar, “Performance enhancement of hydrokinetic turbine using augmentation techniques: a review,” International Journal of Green Energy, Vol. 21, No. 7, pp. 1667–1694, May 2024,
• B. Liu, Z. Yang, Y. Chen, L. Li, and S. Chen, “A decision-making framework for scheme selection for sustainable hydropower development,” International Journal of Green Energy, Vol. 18, No. 9, pp.
951–965, Jul. 2021, https://doi.org/10.1080/15435075.2021.1890081
• S. Schismenos, G. J. Stevens, D. Emmanouloudis, N. Georgeou, S. Shrestha, and M. Chalaris, “Humanitarian engineering and vulnerable communities: hydropower applications in localised flood
response and sustainable development,” International Journal of Sustainable Energy, Vol. 39, No. 10, pp. 941–950, Nov. 2020, https://doi.org/10.1080/14786451.2020.1779274
• S. A. Solarin, M. O. Bello, and O. E. Olabisi, “Toward sustainable electricity generation mix: an econometric analysis of the substitutability of nuclear energy and hydropower for fossil fuels in
Canada,” International Journal of Green Energy, Vol. 18, No. 8, pp. 834–842, Jun. 2021, https://doi.org/10.1080/15435075.2021.1880917
• F. Ope Olabiwonnu, T. Haakon Bakken, and B. Anthony Jnr, “The role of hydropower in renewable energy sector toward co2 emission reduction during the COVID-19 pandemic,” International Journal of
Green Energy, Vol. 19, No. 1, pp. 52–61, Jan. 2022, https://doi.org/10.1080/15435075.2021.1930005
• S. Schismenos, G. J. Stevens, D. Emmanouloudis, N. Georgeou, S. Shrestha, and N. D. Katopodes, “Using off-grid hydropower for community-led flood resilience: an integrated systems approach,”
International Journal of Sustainable Energy, Vol. 41, No. 6, pp. 594–608, Jul. 2022, https://doi.org/10.1080/14786451.2021.1961773
• K. Kashyap, R. Thakur, R. Kumar, and S. Kumar, “Feasibility analysis for conversion of existing traditional watermills in Western Himalayan region of India to micro-hydropower plants using a low
head Archimedes screw turbine for rural electrification,” International Journal of Ambient Energy, Vol. 43, No. 1, pp. 7463–7473, Dec. 2022, https://doi.org/10.1080/01430750.2022.2068056
• H. Brekke, Y. L. Wu, and B. Y. Cai, “Design of hydraulic machinery working in sand laden water,” Abrasive Erosion and Corrosion of Hydraulic Machinery, Vol. 2, pp. 155–233, Nov. 2011, https://
• Z. Zhang, Pelton Turbines. Cham: Springer International Publishing, 2016, https://doi.org/10.1007/978-3-319-31909-4
• “International Water Power and Dam Construction,” Wilmington Media Ltd, 2009.
About this article
Pelton turbine
water energy
renewable energy
Thanks to the National Institute of Technology Malang which has provided a lot of assistance and facilities for this research.
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Author Contributions
Eko Yohanes Setyawan – conceptualization, ideas; formulation or evolution of overarching research goals and aims. Awan Uji Krismanto – management activities to annotate (produce metadata), scrub data
and maintain research data (including software code, where it is necessary for interpreting the data itself) for initial use and later reuse. Mujiono – conducting a research and investigation
process, specifically performing the experiments, or data/evidence collection. Soeparno Djiwo – development or design of methodology; creation of models. Choirul Saleh – management and coordination
responsibility for the research activity planning and execution. Taufik Hidayat – creation and/or presentation of the published work, specifically writing the initial draft (including substantive
Conflict of interest
The authors declare that they have no conflict of interest.
Copyright © 2024 Eko Yohanes Setyawan, et al.
This is an open access article distributed under the Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/23966","timestamp":"2024-11-05T13:03:52Z","content_type":"text/html","content_length":"168507","record_id":"<urn:uuid:4914dc6f-2222-46f6-a247-06f07bf8c7c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00246.warc.gz"} |
Information card for entry 2228131
Crystallography Open Database
Chemical name methyl rel-(3R,6R,10aS,11S,12aS)-5-acetoxy-5a-acetoxymethyl-12b-hydroxy-3-methyl-2,3,3a,5,5a,6,7,8,9,10,10a,11,12,12b-tetradecahydro-4H-1,6-methanocyclopenta[1,8]azuleno[4,3-a-g]indole-11-carboxylate
Formula C27 H37 N O7
Calculated formula C27 H37 N O7
SMILES O[C@]12N3C[C@H]([C@H]1C[C@@H](OC(=O)C)[C@]1([C@H](CCC4=C5[C@H](CC4)[C@@H](C[C@]215)C(=O)OC)C3)COC(=O)C)C
Title of publication Yuzurimine from of Daphniphyllum macropodum Miq.
Authors of publication Cheng, Ying; He, Xing-Jin
Journal of publication Acta Crystallographica Section E
Year of publication 2010
Journal volume 66
Journal issue 11
Pages of publication o3013
a 9.598 ± 0.0003 Å
b 9.7437 ± 0.0002 Å
c 26.0986 ± 0.0006 Å
α 90°
β 90°
γ 90°
Cell volume 2440.74 ± 0.11 Å^3
Cell temperature 294 ± 2 K
Ambient diffraction temperature 294 ± 2 K
Number of distinct elements 4
Space group number 19
Hermann-Mauguin space group symbol P 21 21 21
Hall space group symbol P 2ac 2ab
Residual factor for all reflections 0.0565
Residual factor for significantly intense reflections 0.0443
Weighted residual factor for significantly intense reflections 0.1144
Weighted residual factor for all reflections included in the refinement 0.12
Goodness-of-fit parameter for all reflections included in the refinement 1.042
Diffraction radiation wavelength 0.71073 Å
Diffraction radiation type MoKα
Has coordinates Yes
Has disorder No
Has F[obs] Yes
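As a quick cross-check of the card, the quoted cell volume follows directly from the cell edges: with α = β = γ = 90° the cell is orthorhombic, so V = a·b·c. A minimal sketch in plain Python, with the values copied from the fields above:

```python
# Cell edges in angstroms, taken from the card above (uncertainties omitted).
a, b, c = 9.598, 9.7437, 26.0986
volume = a * b * c          # orthorhombic cell (alpha = beta = gamma = 90 deg): V = a*b*c
print(round(volume, 2))     # -> 2440.74, matching the quoted cell volume of 2440.74 Angstrom^3
```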
Revision | Date | Message | Files
201978 (current) | 2017-10-13 | Marking COD entries in range 2/22 that are known to be related to ChemSpider entries using the _cod_related_entry data loop. hkl/2/22/ (antanas@echidna) | 2228131.cif, 2228131.hkl
181202 | 2016-04-03 | Replacing _[local]_cod_* tags with their equivalents from the COD CIF dictionary in multiple entries in subrange 2/22/81. cif/2/ (antanas@kurmis) | 2228131.cif, 2228131.hkl
176789 | 2016-02-20 | Replacing _[local]_cod_* tags with their equivalents from the COD CIF dictionary in multiple entries in subrange 2/22. | 2228131.cif, 2228131.hkl
120072 | 2014-07-11 | Adding DOIs to range 2 structures. | 2228131.cif, 2228131.hkl
88064 | 2013-08-29 | Adding _cod_database_fobs_code data item to all COD CIFs that have Fobs deposited and did not have this data item yet. | 2228131.cif, 2228131.hkl
26848 | 2011-09-28 | cif/2/, hkl/2/, html/: Reorganising CIFs of the range 2* into the "prefix-directory" tree. | 2228131.cif, 2228131.hkl
5269 | 2010-12-22 | hkl/: Adding Fobs data for the recent IUCr-published structres. | 2228131.cif, 2228131.hkl
5168 | 2010-12-18 | Adding structures of via cif-deposit CGI script. | 2228131.cif
How to carry out conceptual properties norming studies as parameter estimation studies: Lessons from ecology.
Enrique Canessa, Sergio E. Chaigneau, Rodrigo Lagos & Felipe A. Medina (2020). How to carry out conceptual properties norming studies as parameter estimation studies: Lessons from ecology. Behavior
Research Methods, https://doi.org/10.3758/s13428-020-01439-8
27 July 2020
Conceptual properties norming studies (CPNs) ask participants to produce properties that describe concepts. From that data, different metrics may be computed (e.g., semantic richness, similarity
measures), which are then used in studying concepts and as a source of carefully controlled stimuli for experimentation. Notwithstanding those metrics’ demonstrated usefulness, researchers have
customarily overlooked that they are only point estimates of the true unknown population values, and therefore, only rough approximations. Thus, though research based on CPN data may produce reliable
results, those results are likely to be general and coarse-grained. In contrast, we suggest viewing CPNs as parameter estimation procedures, where researchers obtain only estimates of the unknown
population parameters. Thus, more specific and fine-grained analyses must consider those parameters’ variability. To this end, we introduce a probabilistic model from the field of ecology. Its
related statistical expressions can be applied to compute estimates of CPNs’ parameters and their corresponding variances. Furthermore, those expressions can be used to guide the sampling process.
The traditional practice in CPN studies is to use the same number of participants across concepts, intuitively believing that practice will render the computed metrics comparable across concepts and
CPNs. In contrast, the current work shows why an equal number of participants per concept is generally not desirable. Using CPN data, we show how to use the equations and discuss how they may allow
more reasonable analyses and comparisons of parameter values among different concepts in a CPN, and across different CPNs. | {"url":"https://cscn.uai.cl/publicacion/how-to-carry-out-conceptual-properties-norming-studies-as-parameter-estimation-studies-lessons-from-ecology/","timestamp":"2024-11-06T09:04:15Z","content_type":"application/xhtml+xml","content_length":"47851","record_id":"<urn:uuid:4eb20837-6497-4436-8d8a-4bf86840f07e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00669.warc.gz"} |
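The abstract argues for treating norming-study metrics as parameter estimates with quantified uncertainty rather than as exact population values. The paper's ecology-derived estimator is not reproduced here; purely as a generic illustration of the idea, the sketch below treats the proportion of participants who list a given property as a binomial parameter and reports its standard error, which shrinks as more participants are sampled for that concept. All numbers are made up.

```python
import math

def property_proportion(listed, n_participants):
    """Point estimate and standard error of the probability that a participant lists a property.

    A plain binomial illustration only, not the estimator proposed in the paper.
    """
    p_hat = listed / n_participants
    se = math.sqrt(p_hat * (1.0 - p_hat) / n_participants)
    return p_hat, se

# Hypothetical example: 18 of 30 participants produced "has wings" for the concept "bird".
print(property_proportion(18, 30))   # -> (0.6, ~0.089); quadrupling n roughly halves the SE
```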
Compact Circuits for Efficient Möbius Transform
Authors: Subhadeep Banik, Universita della Svizzera Italiana, Lugano, Switzerland; Francesco Regazzoni, University of Amsterdam, Amsterdam, Netherlands; Universita della Svizzera Italiana, Lugano, Switzerland
Abstract: The Möbius transform is a linear circuit used to compute the evaluations of a Boolean function over all points on its input domain. The operation is very useful in finding the solution of a system of polynomial equations over GF(2) for obvious reasons. However, the operation, although linear, needs an exponential number of logic operations (around n · 2^(n−1) bit XORs) for an n-variable Boolean function. As such, the only known hardware circuit to efficiently compute the Möbius transform requires silicon area that is exponential in n. For Boolean functions whose algebraic degree is bounded by some parameter d, recursive definitions of the Möbius transform exist that require only O(n^(d+1)) space in software. However, converting the mathematical definition of this space-efficient algorithm into a hardware architecture is a non-trivial task, primarily because the recursion calls notionally lead to a depth-first search in a transition graph that requires context switches at each recursion call, for which a straightforward mapping to hardware is difficult. In this paper we look to overcome these very challenges in an engineering sense. We propose a space-efficient sequential hardware circuit for the Möbius transform that requires only polynomial circuit area (i.e. O(n^(d+1))) provided the algebraic degree of the Boolean function is limited to d. We show how this circuit can be used as a component to efficiently solve polynomial equations of degree at most d by using fast exhaustive search. We propose three different circuit architectures for this, each of which uses the Möbius transform circuit as a core component. We show that asymptotically, all the solutions of a system of m polynomials in n unknowns and algebraic degree d over GF(2) can be found using a circuit of silicon area proportional to m · n^(d+1) and circuit depth proportional to 2 · log2(n − d). In the second part of the paper we introduce a fourth hardware solver that additionally aims to achieve energy efficiency. The main idea is to reduce the solution space to a small enough value by parallel application of Möbius transform circuits over the first few equations of the system. This is done so that one can check individually whether the vectors of this reduced solution space satisfy each of the remaining equations of the system using lower power consumption. The new circuit has area also bounded by m · n^(d+1) and circuit depth proportional to d · log2(n). We also show that further optimizations with respect to energy consumption may be obtained by using depth-bound Möbius circuits that exponentially decrease run time at the cost of additional logic area and depth.
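For readers unfamiliar with the operation itself, the full-domain Möbius transform discussed above is usually computed in software with a butterfly network of XORs, which is where the n · 2^(n−1) operation count comes from. The sketch below is a generic software illustration of that butterfly, not the circuit proposed in the paper; the list `anf` is a hypothetical vector of 2^n bits holding the algebraic normal form coefficients, indexed by monomial mask.

```python
def mobius_transform(anf):
    """Butterfly Möbius transform over GF(2).

    anf: list of 2**n bits (ANF coefficients; index = monomial mask).
    Returns the truth table: entry x equals the XOR of anf[m] over all m that are subsets of x.
    The n passes of 2**(n-1) XORs each give the n * 2**(n-1) count quoted above.
    The same function also inverts the transform, since it is an involution over GF(2).
    """
    f = list(anf)
    n = len(f).bit_length() - 1          # assumes len(anf) == 2**n
    step = 1
    for _ in range(n):                   # one pass per variable
        for block in range(0, len(f), 2 * step):
            for i in range(block, block + step):
                f[i + step] ^= f[i]      # butterfly: index with the bit set absorbs the one without
        step *= 2
    return f

# Example: f(x0, x1) = x0 + x0*x1 over GF(2) has ANF vector [0, 1, 0, 1]
# (bit 0 of the index <-> x0, bit 1 <-> x1); its truth table is:
assert mobius_transform([0, 1, 0, 1]) == [0, 1, 0, 0]
```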
title={Compact Circuits for Efficient Möbius Transform},
journal={IACR Transactions on Cryptographic Hardware and Embedded Systems},
publisher={Ruhr-Universität Bochum},
volume={024 No. 2},
author={Subhadeep Banik and Francesco Regazzoni}, | {"url":"https://www.iacr.org/cryptodb/data/paper.php?pubkey=34060","timestamp":"2024-11-13T02:45:04Z","content_type":"text/html","content_length":"25799","record_id":"<urn:uuid:2a01b149-575b-4e80-81ba-b1b6dadce240>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00747.warc.gz"} |
Stochastic Oscillator - Beginner Trading Tips
Stochastic Oscillator
The Stochastic Oscillator is classed as a momentum indicator and was developed by Dr. George Lane. The indicator compares the closing price of the security in relation to the price range over a
number of set periods.
The formula below defines the calculation for the Stochastic Oscillator.
%K = 100 × [(C − Ln) / (Hn − Ln)]
C = Current Closing Price
Ln = Low over the period n
Hn = High over the period n
%K is multiplied by 100 so that the decimal point is moved two places to the right, expressing the value as a percentage.
The default setting for the Stochastic Oscillator is 14 periods, generally days, but any time frame can be used. A daily 14-period %K would use the latest close and the high and low over the last 14 days.
The formula below is used in conjunction with %K.
%D = 3-period moving average of %K
This is a 3-period moving average of %K; it is generally drawn alongside %K and acts as a possible transaction signal.
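A short sketch may make the two formulas concrete. The function below computes %K over a rolling window and %D as the simple moving average of %K; the argument names are made up for illustration, and any high/low/close price series will do.

```python
def stochastic_oscillator(highs, lows, closes, k_period=14, d_period=3):
    """Return (%K, %D) as lists; entries are None until enough history is available."""
    k_values = []
    for i in range(len(closes)):
        if i + 1 < k_period:
            k_values.append(None)                      # not enough bars yet
            continue
        hn = max(highs[i + 1 - k_period : i + 1])      # highest high over the period
        ln = min(lows[i + 1 - k_period : i + 1])       # lowest low over the period
        rng = hn - ln
        # %K = 100 * (C - Ln) / (Hn - Ln)
        k_values.append(100.0 * (closes[i] - ln) / rng if rng else 0.0)

    d_values = []
    for i in range(len(k_values)):
        window = [k for k in k_values[max(0, i + 1 - d_period) : i + 1] if k is not None]
        # %D = simple moving average of the last d_period %K values
        d_values.append(sum(window) / d_period if len(window) == d_period else None)
    return k_values, d_values
```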
The Stochastic Oscillator is range bound between 0 and 100 and uses settings of 80 and 20 to indicate overbought and oversold conditions respectively. You should bear in mind a reading that is
consistently over 80 may just indicate a strong uptrend and does not always mean a possibility of bearish behaviour ahead. Likewise a reading that is consistently below 20 may indicate strong bearish
behaviour, not that bullish price action is about to occur.
The following chart of the EUR/USD shows the Stochastic Oscillator with its default settings.
As with the RSI, divergences occur when new highs or lows are created in price but the value of the Stochastic Oscillator fails to proceed higher or lower, respectively.
As seen in the following chart of the EUR/USD bullish divergences occur when price makes a lower low and the Stochastic Oscillator makes a higher low.
This can be a possible sign of changing momentum and could be proceeded by a bullish reversal pattern. A bearish divergence is the opposite and forms when price forms a higher high and the Stochastic
Oscillator forms a lower high. Again this is an indication of changing momentum and possible bearish reversal pattern. | {"url":"https://www.beginnertradingtips.com/stochastic-oscillator","timestamp":"2024-11-02T17:17:39Z","content_type":"text/html","content_length":"32926","record_id":"<urn:uuid:d430a635-ee6b-49ba-b65c-c082f375587a>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00200.warc.gz"} |
Anisotropic Failure Strength of Shale with Increasing Confinement: Behaviors, Factors and Mechanism
Key Laboratory of Shale Gas and Geoengineering, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing 100029, China
Institutions of Earth Science, Chinese Academy of Sciences, Beijing 100029, China
College of Earth Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
Institute of Crustal Dynamics, China Earthquake Administration, Beijing 100085, China
Author to whom correspondence should be addressed.
Submission received: 25 October 2017 / Revised: 13 November 2017 / Accepted: 13 November 2017 / Published: 15 November 2017
Some studies reported that the anisotropic failure strength of shale will be weakened by increasing confinement. In this paper, it is found that there are various types of anisotropic strength
behaviors. Four types of anisotropic strength ratio ($S A 1$) behaviors and three types of anisotropic strength difference ($S A 2$) behaviors have been classified based on laboratory experiments on
nine groups of different shale samples. The cohesion $c w$ and friction angle $ϕ w$ of the weak planes are proven to be two dominant factors according to a series of bonded-particle discrete element
modelling analyses. It is observed that shale is more prone to a slight increase of $S A 1$ and significant increase of $S A 2$ with increasing confinement for higher cohesion $c w$ and lower to
medium friction angle $ϕ w$. This study also investigated the mechanism of the anisotropic strength behaviors with increasing confinement. Owing to different contributions of $c w$ and $ϕ w$ under
different confinements, different combinations of $c w$ and $ϕ w$ may have various types of influences on the minimum failure strength with the increasing confinement; therefore, different types of
anisotropic behaviors occur for different shale specimens as the confinement increases. These findings are very important to understand the stability of wellbore and underground tunneling in the
shale rock mass, and should be helpful for further studies on hydraulic fracture propagations in the shale reservoir.
1. Introduction
It is well known that shale exhibits various degrees of anisotropic failure characteristics and strength values owing to its structures [
]. Anisotropic failure strength, referred to as strength variation with respect to the orientations of principal stresses [
], is of great importance in the stability problems of wellbore in shale gas exploitation and underground tunneling in the shale rock mass [
]. Extensive studies have been carried out to research the anisotropic strength properties of shale samples from various reservoirs or outcrops [
]. Several studies also try to build some anisotropic strength criteria which are more reasonable for the shale specimens [
]. In addition, with the rapid development of numerical modelling technology, many different numerical methods have been used to research the anisotropic strength behaviors, such as the finite
element method (FEM) [
], discrete element method (DEM) [
], and the combined finite element method/discrete element method (FEM/DEM) [
], etc. In recent years, the nonlocal lattice particle model has been developed and has proven to be a promising method to analyze anisotropic failure behaviors [
The shale should be under different stress states at different depths, in different places relative to the underground work, or considering different forms of support after opening. Nonetheless,
based on an extensive literature review, no systematic research on the anisotropic strength behaviors of shale with increasing confinement has been found in the previous studies. Some studies
reported that the anisotropic failure strength of shale will be weaker with increasing confinement [
]. This empirical understanding is always obtained from the observations on a limited number of test results, based on the anisotropic strength parameters defined as the ratios of strength values at
different loading directions [
]. However, is it always correct? It should be noted that the anisotropic strength differences are still quite considerable for the shale samples under higher confinement based on many laboratory
experimental results [
]. It is still necessary to make clear how the anisotropic strength behaviors change for different shale specimens as the confinement increases. Furthermore, what should be the predominant factors,
and what is the mechanism for the different anisotropic strength behaviors affected by these factors? These questions should be answered based on a more comprehensive study.
Focusing on the above-mentioned questions, this study tries to give a better understanding of the behaviors, factors and mechanisms of the anisotropic failure strength of shale with increasing
confinement. With more detailed analyses on the anisotropic strength parameters to describe the magnitude of strength anisotropy, the classifications of different types of anisotropic strength
behaviors are made based on nine groups of laboratory experiments on different shale specimens (
Section 2
). By bonded-particle discrete element modelling, a series of systematic analyses are conducted to study the influence of the key factors on the anisotropic strength behaviors (
Section 3
). Based on the well-known Jaeger’s strength criterion, as well as the laboratory and numerical test results, the mechanism of the different anisotropic strength behaviors is discussed in
Section 4
. This study may help us have a better understanding of the anisotropic strength properties of the shale specimens, especially for wellbore and excavation stability problems, or may even be able to
extend to the propagation characteristics of hydraulic fractures in the shale reservoir under different in situ stresses.
2. Classifications on Anisotropic Failure Strength Behaviors of Shale by Experimental Results
2.1. Degree of Anisotropic Failure Strength
It is of great importance to define suitable parameters to evaluate the degree of anisotropic failure strength. Different parameters have been used in the former studies as listed in
Table 1
. Although these anisotropic strength parameters have different forms, all of them actually reflect the ratio of strength values at different loading directions. These parameters are dimensionless
and have been applied widely for estimating the properties of strength anisotropy.
Here, this method will also be applied in this study and is defined more concisely as

$S_{A1} = \sigma_{1,\max} / \sigma_{1,\min}$ (1)

where $\sigma_{1,\max}$ and $\sigma_{1,\min}$ are the maximum and minimum strengths of shale under a certain confinement, respectively.

Meanwhile, another anisotropic strength parameter $S_{A2}$ is also adopted in this work. It is defined as the difference between the maximum and minimum strength values of the shale samples under a certain confinement:

$S_{A2} = \sigma_{1,\max} - \sigma_{1,\min}$ (2)

The physical meanings of the two parameters can be understood from their definitions. $S_{A1}$ is a dimensionless coefficient that evaluates the strength anisotropy from the perspective of the strength ratio at different loading directions. The material can be considered isotropic in strength if $S_{A1} = 1$, and a higher value of $S_{A1}$ means a higher degree of anisotropic strength behavior. Nonetheless, $S_{A2}$, with a unit of MPa, shows the specific value of the strength difference for the material at different orientations. $S_{A2} = 0$ MPa refers to a material with isotropic strength, and an increasing value of $S_{A2}$ shows an increasing degree of strength anisotropy. The relation between the two parameters can be described with the following equation:

$S_{A2} = \sigma_{1,\min} (S_{A1} - 1)$ (3)

Apparently, with $\sigma_{1,\min}$ changing under various confinements, $S_{A1}$ and $S_{A2}$ are two independent parameters. The laboratory test results of Greenriver Shale-2 samples [ ] are used as an example to show the changing trends of $S_{A1}$ and $S_{A2}$ with increasing confinement (Figure 1). It should be noted that the inclination angle $\beta$ is defined as the acute angle between the weak planes and the direction of the minimum principal stress. The variations of the three other anisotropic parameters listed in Table 1 are also plotted in Figure 1b for comparison, while the other two parameters from references [ ] are not shown here because they only describe the anisotropic strength behaviors of the samples under uniaxial compression.
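Computationally, the two parameters are straightforward to obtain from a set of peak strengths measured at several inclination angles under one confining pressure. The sketch below only illustrates Equations (1) and (2); the numbers are invented and are not data from the paper.

```python
def anisotropy_parameters(strengths_by_angle):
    """strengths_by_angle: dict mapping inclination angle beta (deg) to peak strength sigma_1 (MPa)
    measured under a single confining pressure. Returns (SA1, SA2) per Equations (1) and (2)."""
    s_max = max(strengths_by_angle.values())
    s_min = min(strengths_by_angle.values())
    return s_max / s_min, s_max - s_min

# Hypothetical peak strengths (MPa) at one confinement:
sa1, sa2 = anisotropy_parameters({0: 180.0, 30: 150.0, 60: 120.0, 90: 175.0})
# sa1 = 1.5 (dimensionless ratio), sa2 = 60.0 MPa (strength difference)
```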
According to Figure 1b, the anisotropic strength ratio $S_{A1}$ decreases with increasing confinement; however, the anisotropic strength difference $S_{A2}$ shows an increasing trend. Obviously, the reduction of $S_{A1}$ is caused by the increase of $\sigma_{1,\min}$ with increasing confinement. At the same time, the parameters $k_2$ and $f$ also present a decreasing trend like the parameter $S_{A1}$, because they are all defined from the perspective of a strength ratio. The parameter $k_1$ remains almost constant because it only considers the strength of the shale samples in the directions parallel and perpendicular to the weak planes, so the anisotropic strength behaviors induced by the structures cannot be reflected completely.

Consequently, $S_{A1}$ and $S_{A2}$ can be used as two typical parameters to demonstrate the degree of anisotropic failure strength from two different perspectives. It is difficult to say which one is better. $S_{A1}$ may be a better parameter for comparing the anisotropic strength characteristics of different shale samples because it is a dimensionless coefficient. However, $S_{A2}$ is easier to apply when estimating the stability of a certain shale based on the strength criterion because it considers the specific values of the strength differences. Therefore, both of these two measures should be researched for a systematic and comprehensive understanding of the anisotropic strength behaviors of shale.
2.2. Laboratory Experimental Database
A database has been compiled including 251 uniaxial and conventional triaxial compressive tests on shale samples from nine different reservoirs or outcrops. The basic information of the samples and
test conditions are presented in
Table 2
. The first eight groups of test results are collected from the published papers, and more detailed information can be found in the corresponding references if necessary. The last group of
experiments are carried out by the authors in this study.
The laboratory experiments on Shale-5 specimens are carried out in the MTS815 test machine (Institute of Rock and Soil Mechanics, Chinese Academy of Sciences in Wuhan, China). The intact black shale
with a single set of parallel weak planes are well prepared to be cylindrical samples with the size of 50 mm in diameter and 100 mm in height. The samples with various oriented weak planes are shown
Figure 2
. The physical and mechanical parameters of the samples are presented in
Table 3
. It can be observed that Shale-5 samples show various degrees of anisotropic characteristics on P-wave velocity, uniaxial compressive strength, and Young’s modulus.
2.3. Different Types of Anisotropic Strength Behaviors Based on SA[1]
Based on the laboratory test results in the database mentioned above, the anisotropic strength ratios $S_{A1}$ are calculated for the different shale samples under different confinements. According to the calculated results exhibited in Figure 3, it is apparent that the anisotropic strength behaviors can be classified into four different types as follows:
(1) Type I[1]: Significant decrease of $S_{A1}$ with increasing confinement.
The anisotropic strength ratio $S_{A1}$ falls significantly as the confinement increases. Taking the Shale-1 samples as an example, $S_{A1}$ is reduced from about 2.63 to 1.49 as $\sigma_3$ increases from 0 to 30 MPa (Figure 3a);
(2) Type II[1]: Slight decrease of $S_{A1}$ with increasing confinement.
The anisotropic strength ratio $S_{A1}$ may only be lowered slightly as the confinement goes up; for example, $S_{A1}$ of Greenriver Shale-2 decreases from about 1.62 to 1.23 over a large increase of $\sigma_3$ from 0 to 170 MPa (Figure 3b);
(3) Type III[1]: Generally constant $S_{A1}$ with increasing confinement.
There may be some oscillations of $S_{A1}$ as the confinement increases, but no obvious upward or downward trend can be observed. For instance, $S_{A1}$ of Outcrop Shale-8 remains in the range between about 1.37 and 1.74, with some undulations, as $\sigma_3$ rises from 0 to 69 MPa (Figure 3c);
(4) Type IV[1]: Slight increase of $S_{A1}$ with increasing confinement.
$S_{A1}$ goes up slightly as the confinement increases. For example, the $S_{A1}$ values of Shale-5 are only about 1.12 and 1.13 at confinements of 0 and 20 MPa, while the ratio increases to 1.29 when $\sigma_3$ = 40 MPa. Although there is a slight reduction to 1.27 and 1.21 when $\sigma_3$ = 60 and 100 MPa, these values are still higher than those at $\sigma_3$ = 0 and 20 MPa (Figure 3d).
2.4. Different Types of Anisotropic Strength Behaviors Based on SA[2]
The anisotropic strength differences $S_{A2}$ have also been calculated based on the above-mentioned nine groups of laboratory experimental results. These $S_{A2}$ values are plotted in Figure 4, and they can be classified into three different types according to their changing trends with increasing confinement:
(1) Type I[2]: Gradual decrease of $S_{A2}$ with increasing confinement.
The anisotropic strength difference $S_{A2}$ of this type of shale goes down gradually with the increase of confinement. As an example, $S_{A2}$ of Greenriver Shale-1 decreases gradually from 94.3 to 55.0 MPa as the confinement increases from 0 to 170 MPa (Figure 4a);
(2) Type II[2]: Slight increase of $S_{A2}$ with increasing confinement.
For this type of shale sample, the $S_{A2}$ value increases much more slightly than that of Type I with the rise of confinement. Taking Greenriver Shale-2 as an example, $S_{A2}$ goes up gradually from 54.8 to 91.3 MPa as the confinement increases from 0 to 170 MPa (Figure 4b);
(3) Type III[2]: Significant increase of $S_{A2}$ with increasing confinement.
With the rise of confinement, there is a significant increase of $S_{A2}$ for this type of shale sample. For example, $S_{A2}$ of Shale-3 increases quickly from 47.6 to 139.7 MPa as the confinement increases from 0 to 30 MPa (Figure 4c).
2.5. Discussions
According to the different types of anisotropic strength behaviors of the shale samples based on both of the two parameters $S A 1$ and $S A 2$, it has been proved that the anisotropic failure
strength of shale may have different types of changes with the rise of confinement. With either parameter $S A 1$ or $S A 2$, there are shale samples with various degrees of increase or decrease as
the confinement goes up. It is important to notice these features, and furthermore, it is also important to make clear the dominant factors and how they affect these anisotropic strength behaviors.
Jaeger has proposed a well-known anisotropic strength criterion for rock containing a set of parallel weak planes [ ]:

$\sigma_1 = \sigma_3 + \dfrac{2 (c_w + \sigma_3 \tan\phi_w)}{(1 - \tan\phi_w \cot\beta) \sin 2\beta}$ (4)

where $\sigma_1$ and $\sigma_3$ are the maximum and minimum principal stresses, $\beta$ is the acute angle between the weak planes and the direction of the minimum principal stress, and $c_w$ and $\phi_w$ are the cohesion and friction angle of the weak planes, respectively.
According to Jaeger’s strength criterion, it can be seen that $c w$ and $ϕ w$ play important roles in the strength of the transversely isotropic rock at different loading directions. For this
consideration, they should also have significant influences on the degree of strength anisotropy under different confinements.
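To see these roles numerically, Equation (4) can be swept over the inclination angle for a given confinement; the weak planes control the minimum strength, while the maximum strength is taken here as an assumed matrix-controlled value, since Equation (4) diverges as β approaches 90° or φ_w. The sketch below is illustrative only; the cohesion, friction angle and matrix strength are assumed values, not calibrated parameters from the paper.

```python
import math

def jaeger_sigma1(sigma3, c_w, phi_w_deg, beta_deg):
    """Equation (4): axial strength when failure occurs by sliding on the weak planes."""
    phi, beta = math.radians(phi_w_deg), math.radians(beta_deg)
    denom = (1.0 - math.tan(phi) / math.tan(beta)) * math.sin(2.0 * beta)
    if denom <= 0.0:
        return float("inf")          # sliding on the weak plane is not possible at this beta
    return sigma3 + 2.0 * (c_w + sigma3 * math.tan(phi)) / denom

def anisotropy_from_jaeger(sigma3, c_w, phi_w_deg, sigma1_matrix):
    """SA1 and SA2 when sigma1_max is set by the intact matrix (assumed) and sigma1_min by Equation (4)."""
    s_min = min(min(jaeger_sigma1(sigma3, c_w, phi_w_deg, b) for b in range(1, 90)), sigma1_matrix)
    return sigma1_matrix / s_min, sigma1_matrix - s_min

# Illustrative sweep: the friction term sigma3*tan(phi_w) strengthens the weak planes as confinement rises.
for s3 in (0, 10, 20, 30):
    print(s3, anisotropy_from_jaeger(s3, c_w=20.0, phi_w_deg=30.0, sigma1_matrix=150.0 + 4.0 * s3))
```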
Usually, the shale samples can be considered as intact rock with a set of weak planes. According to the structures of the specimens described in Table 2, it is reasonable and applicable to use Jaeger's strength criterion to analyze the strength anisotropy of these shale samples. Consequently, $c_w$ and $\phi_w$ will be considered as two important factors, and their influences on the different anisotropic strength behaviors of shale will be studied. For a comprehensive understanding of this problem, numerical modelling will be applied in the next section to make a systematic analysis.
3. Anisotropic Strength Behaviors Affected by Cohesion and Friction Angle of Weak Planes
3.1. Bonded-Particle Discrete Element Modelling
The bonded-particle discrete element model used in this study is generated by PFC2D (Particle Flow Code in 2 dimensions) developed by Itasca Consulting Group [
]. Parallel bonded particle model is applied for analogue of the rock material, and smooth-joint model is used to simulate the weak planes.
In the parallel bonded particle model, circular particles are randomly bonded together, and the parallel bonds between the particles have specific strengths and stiffness at the normal and shear
directions. The bond will break when each the normal or shear strength is reached, and a crack will be formed at the place of the broken bond. Newton’s second law of motion and a force-displacement
law are used to govern the particle movements in each calculation cycle. Thus, a relatively simple set of micro-parameters of the particles and bonds can make the model exhibit emergent macro
characteristics including fracture propagation, strength behaviors, dilation, strain hardening and softening, etc. [
With the introduction of the smooth-joint model, the simulation of structural planes can avoid the inherent roughness arising from the circular shape of the particles, because the particles can move
along the direction of the structural planes, instead of having to rotate the other balls. In addition, in the smooth-joint model, the properties such as cohesion, friction angle, dilation angle etc.
can be set directly to the weak planes, and it is very convenient to study the effects of these parameters on the macro behaviors of the rock [
A more detailed theory and algorithm can be found in the references mentioned above. By using the parallel bonded particle model incorporated with the smooth-joint model, some research has
successfully simulated the strength and deformation behaviors of shale samples [
], as well as the hydraulic fracturing process in the reservoir [
Consequently, it is suitable to select this modelling method for the analysis of anisotropic failure strength here. In this study, there is only a single set of weak planes in the samples, and a 2D
model can give reasonable analogue of the strength behaviors of the shale specimens; besides, it can save much more computing time than the 3D model. Consequently, a 2D model is applied in this
study. The validation of the numerical model is based on the test results of Shale-1 samples [
]. The model with 6250 particles has a size of 50 mm in diameter and 100 mm in height (
Figure 5
). The validated micro-parameters for the bonded particle model and the smooth-joint model are presented in Table 4 and Table 5, respectively. The tested and modelled anisotropic strength values are presented in Figure 6a,b, respectively. It can be found that this model can generally simulate the various strength values of the Shale-1 samples with differently oriented weak planes under different confinements. Although the modelled strength values are a little lower at $\beta$ = 90° under the confinements $\sigma_3$ = 20 and 30 MPa, the model is reasonable enough to study the trends of the anisotropic strength behaviors and the mechanism of shale in this work.
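As a toy illustration of the bond-breaking rule described in this section (a parallel bond fails, and a micro-crack is recorded, once its normal or shear stress exceeds the corresponding bond strength), the sketch below checks a single bond. It is a drastic simplification: PFC2D actually derives the bond stresses from the relative particle displacements and rotations, and the function and argument names here are invented for illustration.

```python
def check_parallel_bond(normal_stress, shear_stress, normal_strength, shear_strength):
    """Return the failure mode of one parallel bond, or None if the bond still holds.

    normal_stress > 0 is taken as tension; a bond in compression does not fail in tension.
    """
    if normal_stress > normal_strength:
        return "tensile micro-crack"        # bond broken in tension
    if abs(shear_stress) > shear_strength:
        return "shear micro-crack"          # bond broken in shear
    return None                             # bond intact

# e.g. with the mean bond strengths calibrated in Table 4 (90 MPa normal and shear strength):
check_parallel_bond(normal_stress=95.0, shear_stress=10.0,
                    normal_strength=90.0, shear_strength=90.0)   # -> "tensile micro-crack"
```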
3.2. Modelling Analyses
In this study, the modelling shale samples containing weak planes with four different inclination angles β = 0°, 30°, 60°, and 90° will be investigated under four different confinements $σ 3$ = 0,
10, 20 and 30 MPa. Different combinations of cohesion $c w$ = 10, 20 and 40 MPa as well as friction angle $ϕ w$ = 10°, 30°, and 50° will be considered for the weak planes. Influence of cohesion and
friction angle of weak planes will both be studied in the following analyses.
3.2.1. Influence of Cohesion of Weak Planes
Based on the modelling results, the anisotropic strength parameters $S_{A1}$ and $S_{A2}$ are plotted in Figure 7 to obtain an understanding of the influence of the weak-plane cohesion $c_w$ on the anisotropic behaviors.

According to Figure 7, several observations on the variations of $S_{A1}$ with increasing confinement can be made as follows:
In the case of a low to medium friction angle ($ϕ w$ = 10° and 30°), the increase of cohesion $c w$ may transfer the $S A 1$ behaviors from significant decrease to slight decrease or even slight
increase with the confinement going up;
For a high friction angle ($ϕ w$ = 50°), increasing cohesion $c w$ can also change the $S A 1$ features from significant decrease to slight decrease, however, it is difficult to obtain the
increasing trend of $S A 1$ with the rise of confinement;
Generally speaking, the lower cohesion $c w$ may be prone to lead to the significant decrease of $S A 1$ with the increasing confinement, while the higher cohesion $c w$ will weaken this trend,
but whether it will be slight decrease or increase is dependent on the friction angle $ϕ w$ of the weak planes.
Meanwhile, it is not difficult to find some features of $S A 2$ with increasing confinement:
When friction angle $ϕ w$ = 10°, the increase of cohesion $c w$ may lower all $S A 2$ values under various confinements, and the increasing trend of $S A 2$ will be more significant with the
increasing confinement;
When friction angle $ϕ w$ = 30°, the increase of cohesion $c w$ makes the slight decreasing trend of $S A 2$ transfer to a slight or significant increase with the increasing confinement;
When friction angle $ϕ w$ = 50°, the increase of cohesion $c w$ makes the significant decreasing trend of $S A 2$ transfer to a slight decrease as the confinement increases.
Comparing with the features of $S A 1$, there are more cases of increasing trend for $S A 2$. Nonetheless, for both parameters $S A 1$ and $S A 2$, it is similar that the increase of cohesion $c w$
may be prone to weaken the degree of the decreasing trend or transfer it to slight increasing behaviors.
3.2.2. Influence of Friction Angle of Weak Planes
In order to obtain an understanding of the influence of the friction angle $\phi_w$ on the anisotropic strength behaviors, the parameters $S_{A1}$ and $S_{A2}$ are again plotted in Figure 8 for another series of comparative studies.

According to Figure 8, the changing trend of $S_{A1}$ with increasing confinement can be easily observed as follows:
For lower cohesion ($c w$ = 10 MPa), the increasing friction angle $ϕ w$ can make the decreasing trend of $S A 1$ more and more significant;
For medium to higher cohesion ($c w$ = 20 and 40 MPa), the increasing friction angle $ϕ w$ may transfer the slight increasing or almost constant trend of $S A 1$ to slight increasing behaviors;
As the cohesion $c w$ increases, the influence of friction angle $ϕ w$ on the degree of $S A 1$ changing behaviors is more and more limited.
Generally speaking, lower friction angle $ϕ w$ is prone to result in the slight decrease or even slight increase of $S A 1$, while higher friction angle $ϕ w$ may easily induce the slight or even
significant decrease of $S A 1$ with the confinement going up.
We can observe the features of $S A 2$ with the increasing confinement as follows:
For all cases of cohesion ($c w$ = 10, 20 and 40 MPa), the increasing friction angle $ϕ w$ can induce the transferring of the $S A 2$ trend from going up to going down with the rise of
As the cohesion $c w$ increases, the influence of friction angle $ϕ w$ on the degree of $S A 2$ changing behaviors is more and more limited.
Generally speaking, for both parameters $S A 1$ and $S A 2$, lower friction angle $ϕ w$ is prone to result in the slight decrease or even slight increase of anisotropic strength behaviors, while
higher friction angle $ϕ w$ may easily induce the slight or even significant decrease of anisotropic strength behaviors with the rise of confinement.
3.2.3. Conjoint Analysis on Both Factors $c_w$ and $\phi_w$
According to the numerical modelling analyses considering the various combinations of $c_w$ and $\phi_w$ mentioned above, the anisotropic strength features cover all four types of $S_{A1}$ behaviors and all three types of $S_{A2}$ behaviors presented in Section 2 based on the laboratory experimental results. Here, the types of all numerical cases are plotted in Table 6 to obtain a better understanding of the influences of $c_w$ and $\phi_w$ on the anisotropic strength behaviors with increasing confinement.

Table 6 presents the various types of $S_{A1}$ behaviors for all combinations of $c_w$ and $\phi_w$. It is apparent that the increase of cohesion $c_w$ weakens the decreasing trend of $S_{A1}$ from Type I[1] (significant decrease) to II[1] (slight decrease), III[1] (generally constant), or even IV[1] (slight increase). Meanwhile, the cases with a lower friction angle $\phi_w$ are more prone to have a weaker decreasing trend of $S_{A1}$, or even an increase of $S_{A1}$. The phenomenon of increasing $S_{A1}$ with increasing confinement occurs for the cases with lower to medium friction angle ($\phi_w$ = 10° and 30°) and higher cohesion ($c_w$ = 40 MPa).

The three types of $S_{A2}$ behaviors for all combinations of $c_w$ and $\phi_w$ are also plotted in Table 6. It is found that a significant increase of $S_{A2}$ is more likely for the cases with a lower friction angle. With increasing $\phi_w$, $S_{A2}$ changes from Type III[2] (significant increase) to II[2] (slight decrease), or I[2] (significant decrease). What is more, a medium to higher cohesion $c_w$ is more likely to induce a significant increase of $S_{A2}$ with increasing confinement.
The tests on Shale-5 samples can be applied here as examples to examine the above-mentioned analyses. As shown in
Figure 3
d and
Figure 4
c, Shale-5 samples have Type IV
(slight increase) for
$S A 1$
behavior and Type III
(significant increase) for
$S A 2$
behavior. The fracturing patterns are found to be closely related to the strength characteristics of the samples and the properties of the weak planes [
Figure 9
presents the typical failure patterns of Shale-5 samples with different inclination angles (
= 30° and 90°) under different confinements (
= 0 and 60 MPa). For the case of
= 30° (
Figure 9
a,b), the specimen mainly fails by vertical extension fractures in the shale matrix under uniaxial compression (
= 0 MPa), while shear failure planes can be observed crossing the weak planes under the confinement of
= 60 MPa. No obvious sliding can be observed along the weak planes, and the failure is mainly controlled by the strength of the shale matrix. For the case of
= 90° (
Figure 9
c,d), the failure takes place by vertical extension along the weak planes under uniaxial compression (
= 0 MPa), and by shear fractures in the shale matrix under the confinement of
= 60 MPa. For both of these two cases, the strength of Shale-5 samples are not significantly weakened by the weak planes.
However, the weak planes have different degrees of influences on the strength of Shale-5 samples with inclination angle
= 60°. The fracturing patterns of the specimens are exhibited in
Figure 10
. It is observed that there are both failures along the weak planes and fractures in the rock material under the confinement
= 0 MPa; however, the failure is totally along the weak planes, and the fracture surface is very flat and quite smooth under higher confinement
= 20, 40, 60 and 100 MPa. These fracturing characteristics show that the weak planes of Shale-5 samples have high cohesion
$c w$
but relatively low friction angle
$ϕ w$
. This estimation can be supported by the numerical results shown in
Figure 11
. For the numerical samples with
= 60° and
$c w$
= 40 MPa, different values of
$ϕ w$
result in different failure characteristics under the confinement
= 30 MPa. For lower
$ϕ w$
= 10°, the failure mainly slips along the weak planes. As
$ϕ w$
increases to 30°, a few cracks can be observed in the shale matrix. When
$ϕ w$
is as high as 50°, there are lots of fractures shown in the shale matrix. Although this numerical model is not exactly the same with the conditions of Shale-5 samples, it can demonstrate that lower
$ϕ w$
may result in slip along the weak planes but higher
$ϕ w$
may induce the fractures in the shale matrix for the samples with
= 60° under high confinements.
In fact, as the fracture is along the weak planes under higher confinement, we can obtain the normal and shear stresses ($\sigma_n$, $\tau$) on the weak planes from the stress transformation equations:

$\sigma_n = \frac{1}{2} (\sigma_1 + \sigma_3) + \frac{1}{2} (\sigma_1 - \sigma_3) \cos 2\beta$ (5)

$\tau = \frac{1}{2} (\sigma_1 - \sigma_3) \sin 2\beta$ (6)
Based on the peak strength values of Shale-5 sample (
= 60°) under various confinements (
= 20, 40, 60 and 100 MPa) presented in
Figure 12
a, the normal and shear stresses on the weak planes can be calculated according to Equations (5) and (6), and they are plotted in
Figure 12
b. According to Coulomb's criterion for structural planes:

$\tau = c_w + \sigma_n \tan\phi_w$ (7)

The cohesion $c_w$ and friction angle $\phi_w$ of the Shale-5 samples can be obtained from the linearly fitted equation in Figure 12b as $c_w$ = 68.7 MPa and $\phi_w$ = 28.8°. This means that the Shale-5 samples have a high cohesion $c_w$ and a medium to low friction angle $\phi_w$ of the weak planes. Considering that the Shale-5 samples show Type IV[1] (slight increase) behavior for $S_{A1}$ and Type III[2] (significant increase) behavior for $S_{A2}$, this is consistent with the analyses of the numerical results, namely that a slight increase of $S_{A1}$ and a significant increase of $S_{A2}$ with increasing confinement are more likely for a medium to higher cohesion $c_w$ and a lower to medium friction angle $\phi_w$.
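The back-calculation just described, converting the peak strengths at β = 60° into (σ_n, τ) points with Equations (5) and (6) and then fitting Coulomb's criterion, Equation (7), with a straight line, can be reproduced in a few lines. The peak-strength numbers below are placeholders rather than the measured Shale-5 values; the paper's actual fit gave c_w = 68.7 MPa and φ_w = 28.8°.

```python
import math

def normal_shear_on_plane(sigma1, sigma3, beta_deg):
    """Equations (5) and (6): stresses on a plane inclined at beta to the sigma_3 direction."""
    b = math.radians(beta_deg)
    sigma_n = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(2.0 * b)
    tau = 0.5 * (sigma1 - sigma3) * math.sin(2.0 * b)
    return sigma_n, tau

def fit_coulomb(peak_strengths, beta_deg=60.0):
    """peak_strengths: dict {sigma3: peak sigma1} in MPa. Least-squares fit of tau = c_w + sigma_n*tan(phi_w)."""
    pts = [normal_shear_on_plane(s1, s3, beta_deg) for s3, s1 in peak_strengths.items()]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    slope = sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x, _ in pts)
    return my - slope * mx, math.degrees(math.atan(slope))   # (c_w in MPa, phi_w in degrees)

# Placeholder peak strengths at beta = 60 deg (MPa), purely to show the workflow:
print(fit_coulomb({20: 260.0, 40: 330.0, 60: 395.0, 100: 520.0}))
```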
4. Discussions
Based on the series of laboratory experimental results presented in Section 2, it is found that different shale samples may show different anisotropic strength behaviors with increasing confinement, and these can be classified into different types using the two anisotropic strength parameters $S_{A1}$ and $S_{A2}$. According to the numerical analyses in Section 3, it has been proved that the cohesion and friction angle of the weak planes indeed have predominant influences on the variation of the strength anisotropy of the shale samples. However, it is still necessary to make clear the mechanism of these influences.
According to Jaeger's strength criterion in Equation (4), the maximum strength $\sigma_{1,\max}$ is reached at $\beta$ = 90°, and its value is almost equal to the strength of the rock material. The minimum strength $\sigma_{1,\min}$ occurs at $\beta = \pi/4 + \phi_w/2$, and can be deduced as [ ]

$\sigma_{1,\min} = \sigma_3 + 2 (c_w + \mu_w \sigma_3) \left[ (1 + \mu_w^2)^{1/2} + \mu_w \right]$ (8)

where $\mu_w = \tan\phi_w$.
Consequently, the degree of strength anisotropy is mainly related to the values of minimum strength $σ 1 , min$. According to Equation (8), cohesion $c w$ plays a role independent of confinement,
while the effect of friction angle $ϕ w$ is closely related to the confinement σ[3]. Under lower confinement, friction angle $ϕ w$ has very limited influences on the strength, so cohesion $c w$
becomes more important here. As the confinement goes up, the role of friction angle $ϕ w$ with different values may have different degrees of enhancing, while the effect of cohesion $c w$ may not be
improved significantly. Consequently, different combinations of $c w$ and $ϕ w$ may have various types of influences on the minimum strength $σ 1 , min$ with the increasing confinement σ[3].
Thereafter, different types of anisotropic strength behaviors can be shown for different shale samples with increasing confinement.
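Before turning to the four concrete cases below, the competing roles of c_w and φ_w in Equation (8) can be seen by simply tabulating σ_1,min against confinement for a few parameter combinations: the cohesion term is a constant offset, while the friction term grows linearly with σ_3. The parameter values below are illustrative only.

```python
import math

def sigma1_min(sigma3, c_w, phi_w_deg):
    """Equation (8): minimum axial strength over all weak-plane orientations."""
    mu = math.tan(math.radians(phi_w_deg))
    return sigma3 + 2.0 * (c_w + mu * sigma3) * (math.sqrt(1.0 + mu * mu) + mu)

# The cohesion contribution 2*c_w*(sqrt(1+mu^2)+mu) does not depend on sigma3,
# whereas the friction contribution 2*mu*sigma3*(sqrt(1+mu^2)+mu) rises with confinement.
for c_w, phi in ((10, 10), (10, 50), (40, 10), (40, 50)):      # the four combinations discussed below
    print(c_w, phi, [round(sigma1_min(s3, c_w, phi), 1) for s3 in (0, 10, 20, 30)])
```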
In order to give a clearer explanation, four combinations of $c_w$ (10 and 40 MPa) and $\phi_w$ (10° and 50°) are selected from the PFC2D modelling results, and four typical features of the maximum and minimum strengths of the shale samples can be observed as follows (Figure 13):
Case I: for lower cohesion ($c w$ = 10 MPa) and lower friction angle ($ϕ w$ = 10°), there is quite a large difference between $σ 1 , max$ and $σ 1 , min$ under lower confinement mainly resulted
from the low value of $c w$, and the strength difference is also very considerable under higher confinement because the low value of $ϕ w$ cannot increase $σ 1 , min$ effectively with the
increasing σ[3]. In this case, the anisotropic strength ratio $S A 1$ may be lowered with increasing confinement, while the anisotropic strength difference $S A 2$ may not increase or decrease
Case II: for lower cohesion ($c w$ = 10 MPa) and higher friction angle ($ϕ w$ = 50°), the difference between $σ 1 , max$ and $σ 1 , min$ is again very large under lower confinement owing to the
low $c w$, however, as the high value of $ϕ w$ can enhance $σ 1 , min$ significantly under higher confinement, the strength difference turns much smaller. In this case, both anisotropic strength
ratio $S A 1$ and anisotropic strength difference $S A 2$ will decrease obviously with the increase of confinement.
Case III: for higher cohesion ($c w$ = 40 MPa) and lower friction angle ($ϕ w$ = 10°), the difference between $σ 1 , max$ and $σ 1 , min$ is much smaller than the first two cases as the cohesion
$c w$ has quite a high value, while the strength difference becomes larger with the increasing confinement because the low value of $ϕ w$ leads to quite a low $σ 1 , min$. In this case, the
anisotropic strength ratio $S A 1$ may remain almost constant or ever increase slightly with the increasing confinement, while the anisotropic strength difference $S A 2$ will increase
Case IV: for higher cohesion ($c w$ = 40 MPa) and higher friction angle ($ϕ w$ = 50°), there is quite a small difference between $σ 1 , max$ and $σ 1 , min$ under lower confinement attributed to
the high value of $c w$, and the strength difference is also very limited under higher confinement because the high value of $ϕ w$ can increase $σ 1 , min$ effectively with the increasing σ[3].
Similar to the first case, the anisotropic strength ratio $S A 1$ may be lowered with increasing confinement, while the anisotropic strength difference $S A 2$ may not change significantly.
It is not difficult to find examples from the laboratory experimental results corresponding with the four typical cases mentioned above. Four such examples are presented in
Figure 14
. This proves that the mechanism on the anisotropic failure strength behaviors of shale with increasing confinement in this study is reasonable.
It is very important to understand this mechanism when dealing with problems such as the wellbore stability in the shale reservoir. Under different in situ stresses, the shale reservoir with
different combinations of $c w$ and $ϕ w$ may show different types of anisotropic failure strength behaviors, which is related to the failure patterns of the wellbore. $c w$ and $ϕ w$ of the shale
may be related to the mineral contents, alignment of the minerals, the geometrical and mechanical properties of the natural fractures, etc., which requires further studies in future work.
5. Conclusions
According to a series of systematic analyses on the laboratory test results of nine groups of different shale samples, this work studied the various types of anisotropic failure strength behaviors of
shale with increasing confinement, using two different anisotropic strength parameters. In addition, the dominant factors and the mechanism have also been studied combining the test results with
numerical analyses. There are several main findings as follows:
Two anisotropic strength parameters, $S A 1$ from the perspective of strength ratio and $S A 2$ from the perspective of strength difference, should both be researched for a comprehensive
understanding of the anisotropic strength behaviors of shale under different confinements. $S A 1$ is better for comparing the anisotropic strength characteristics of different shale samples as a
dimensionless coefficient, while $S A 2$ is easier to be applied to estimate the stability of a certain shale based on the strength criterion because it considers the specific values of strength
Based on the laboratory experimental results of nine groups of different shale samples, it is found that there are four types of $S A 1$ behaviors (significant decrease, slight decrease,
generally constant, and slight increase) and three types of $S A 2$ behaviors (gradual decrease, slight increase, and significant increase) with increasing confinement;
With the parallel bonded particle model simulating the rock material and smooth-joint model simulating the weak planes, the different types of anisotropic strength behaviors are well reproduced
in the numerical models. By a series of systematic analyses, it is observed that cohesion $c w$ and friction angle $ϕ w$ of the weak planes are two dominant factors for the anisotropic strength
The increase of cohesion $c w$ will change the $S A 1$ behaviors from significant decrease to slight decrease with increasing confinement, or even slight increase if the friction angle $ϕ w$ is
medium to low. Meanwhile, the decrease of friction angle $ϕ w$ are more prone to transfer $S A 2$ behaviors from gradual decrease to slight increase with increasing confinement, or even
significant increase if the cohesion $c w$ is medium to high;
The mechanism of the anisotropic strength behaviors have been analyzed based on the well-known Jaeger’s strength criterion, as well as the laboratory and numerical test results. Under lower
confinement, cohesion $c w$ has more important roles as the friction angle $ϕ w$ has very limited influences on the strength. As the confinement goes up, the friction angle $ϕ w$ with different
values may take different degrees of roles, while the effect of cohesion $c w$ is not easy to be improved significantly. Consequently, different combinations of $c w$ and $ϕ w$ may have various
types of influences on the minimum failure strength with the increasing confinement, therefore different shale samples show different types of anisotropic behaviors with the increasing
It should be noted that these findings are based on the two proposed anisotropic parameters $S A 1$ and $S A 2$, while there are also some other measures used in other studies. This study has
analyzed the relation among these different measures, and it is shown that $S A 1$ and $S A 2$ are two typical parameters from two different perspectives. Consequently, these findings are reasonable
and important in order to have a comprehensive understanding of the behaviors, factors, and mechanism of anisotropic strength of shale under different confining pressures. This understanding should
be helpful in guiding the design and construction of the wellbore drilling and underground opening in the rock mass of shale. As an extension, this work should also be useful for understanding the
propagations of hydraulic fractures in shale reservoirs under different stress states. This should be studied further based on the mechanism proposed in this work.
This work is financially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB10030301), National Natural Science Foundation of China (No. 41402286 and No.
41227901), and Opening fund of State Key Laboratory of Geohazard Prevention and Geoenvironment Protection (Chengdu University of Technology) (No. SKLGP2016K001). Hua Zhang from Institute of Rock and
Soil Mechanics, CAS is acknowledged for his help in the laboratory experiments. The anonymous reviewers gave very helpful suggestions, which were valuable for improving our manuscript.
Author Contributions
Cheng Cheng designed and conducted the laboratory experiments, made the analyses and wrote the paper; Xiao Li designed the framework of the research and guided the analyses. Haitao Qian played an
important role in numerical analyses.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 1. (a) Anisotropic strength values of Greenriver Shale-2 samples under various confinements [ ], and (b) different changing trends of $S_{A1}$, $S_{A2}$ and some other anisotropic parameters with increasing confinement.
Figure 2. Shale-5 samples with different oriented weak planes. (a) β = 30°; (b) β = 60°; and (c) β = 90°.
Figure 3. Four types of anisotropic strength behaviors based on SA[1] with increasing confinement. (a) Type I[1]; (b) Type II[1]; (c) Type III[1]; and (d) Type IV[1].
Figure 4. Three types of strength anisotropic behaviors based on SA[2] with increasing confinement. (a) Type I[2]; (b) Type II[2]; and (c) Type III[2].
Figure 5. Numerical models for the shale samples with different oriented weak planes. (a) β = 0°; (b) β = 30°; (c) β = 60°; and (d) β = 90°.
Figure 6. (a) The peak strength values of Shale-1 samples at different loading directions under various confinements [ ]; and (b) the validated strength values of the numerical model.
Figure 7. Influence of cohesion of weak planes with the certain friction angle (a) φ[w] = 10°; (b) φ[w] = 30°; and (c) φ[w] = 50° based on SA[1] (solid line) and SA[2] (dashed line).
Figure 8. Influence of friction angle of weak planes with the certain cohesion (a) c[w] = 10 MPa; (b) c[w] = 20 MPa; and (c) c[w] = 40 MPa based on SA[1] (solid line) and SA[2] (dashed line).
Figure 9. Different failure patterns of Shale-5 samples with inclination angle $β$ = 30° under the confinements of (a) σ[3] = 0 MPa; (b) σ[3] = 60 MPa and $β$ = 90° under the confinements of (c) σ[3]
= 0 MPa; (d) σ[3] = 60 MPa.
Figure 10. Different failure patterns of Shale-5 samples with inclination angle $β$ = 60° under the confinements of (a) σ[3] = 0 MPa; (b) σ[3] = 60 MPa and (c) σ[3] = 100 MPa.
Figure 11. Different fracture characteristics of the samples with inclination angle $β$ = 60° under confinement of 30 MPa by numerical simulations. (a) c[w] = 40 MPa, φ[w] = 10°; (b) c[w] = 40 MPa, φ
[w] = 30°; and (c) c[w] = 40 MPa, φ[w] = 50°. Blue color shows the position of the weak planes; Red and magenta colors show the tensile and shear micro-cracks in the matrix; Cyan and green colors
show the tensile and shear micro-cracks in the weak planes.
Figure 12. (a) Peak strengths of Shale-5 sample ($β$ = 60°) under various confinements (σ[3] = 20, 40, 60, and 100 MPa); and (b) normal and shear stresses on the weak planes based on the data in (a)
and the linearly fitted equation.
Figure 13. Four typical behaviors of maximum and minimum strengths with increasing confinement dominated by different combinations of cohesion $c w$ and friction angle $ϕ w$ of the weak planes based
on numerical analyses. (a) c[w] = 10 MPa, φ[w] = 10°; (b) c[w] = 10 MPa, φ[w] = 50°; (c) c[w] = 40 MPa, φ[w] = 10°; and (d) c[w] = 40 MPa, φ[w] = 50°.
Figure 14. Four typical behaviors of maximum and minimum strengths with increasing confinement based on laboratory experimental results of (a) Shale-1; (b) Shale-4; (c) Shale-5; and (d) Shale-2.
Parameters | Descriptions | References
$k_1 = (\sigma_1 - \sigma_3)_{\parallel} / (\sigma_1 - \sigma_3)_{\perp}$ | Ratio between the failure stresses in the two principal directions parallel and perpendicular to the bedding planes, respectively | [7]
$k_2 = (\sigma_1 - \sigma_3)_{\max} / (\sigma_1 - \sigma_3)_{\min}$ | Ratio of the maximum to minimum failure strengths | [7]
$\sigma_{c(\max)} / \sigma_{c(\min)}$ | Ratio of the maximum to minimum uniaxial compressive strength (UCS) | [11]
$f = (\sigma_{1,\max} - \sigma_{1,\min}) / \sigma_{1,\max}$ | Ratio of the strength difference to the maximum strength | [15]
$R_c = \sigma_{ci(90)} / \sigma_{ci(\min)}$ | Ratio between the UCS perpendicular to the beddings and the minimum UCS | [27]
Samples | Description | Inclination Angle β (°) | Confinement (MPa) | Ref.
Greenriver shale-1 | Light brown to light gray; highly laminated, composed of fine grained calcite and dolomite particles inter-bedded with kerogen | 0, 15, 20, 30, 45, 60, 75, 90 | 7, 35, 70, 100, 170 | [16]
Greenriver shale-2 | Much darker, with more oil; highly laminated, composed of fine grained calcite and dolomite particles inter-bedded with kerogen | 0, 10, 20, 30, 40, 60, 90 | 7, 35, 70, 100, 170 | [16]
Outcrop shale-#8 | Gray to dark, with obvious plane of anisotropy shown in the photographs | 0, 15, 30, 45, 60, 75, 90 | 3, 21, 35, 48, 69 | [2]
Top seal shale | - | 0, 15, 30, 45, 60, 75, 90 | 3, 7, 14, 21, 35 | [2]
Shale-1 | Black shale from outcrop of Longmaxi Formation in China, with laminated structures from the SEM images | 0, 30, 60, 90 | 0, 10, 20, 30 | [25]
Shale-2 | Cored black shale (3502.61~3508.63 m deep) of Longmaxi Formation in Sichuan, China, with planes of anisotropy | 0, 15, 30, 45, 60, 75, 90 | 0, 10, 20, 30, 40 | [24]
Shale-3 | Black shale at the lower part of Longmaxi Formation in Guizhou, China, with laminated structures and micro-fissures from the SEM images | 0, 45, 90 | 0, 5, 10, 15, 20, 25 | [26]
Shale-4 | Black shale of Niutitang Formation in China, showing obvious sedimentary rock feature from micrometer scale, with lamellar minerals | 0, 30, 45, 60, 90 | 0, 10, 20, 30, 40, 50 | [23]
Shale-5 | Black shale from outcrop of Longmaxi Formation in Chongqing, China, with visible planes of anisotropy | 30, 60, 90 | 0, 20, 40, 60, 100 | This study
Inclination Angle β (°) Vp (m/s) UCS (MPa) E (GPa)
30 4370 191.3 29.8
60 4706 176.6 32.7
90 4964 200.2 34.5
Grain (Particles):
Ball density (kg/m^3) 2700
Minimum ball radius (mm) 0.36
Ball radius ratio R[max]/R[min] 1.66
Contact modulus E[c] (GPa) 21
Coefficient of friction 1.0
Normal to shearing stiffness ratio k[n]/k[s] 2.5
Cement (Parallel Bonds):
Bond modulus $\bar{E}_c$ (GPa) 21
Normal bond strength (MPa) 90
S.D. ^1 normal bond strength (MPa) 15
Shearing bond strength (MPa) 90
S.D. ^1 shearing bond strength (MPa) 15
Normal to shearing bond stiffness ratio $\bar{k}_n$/$\bar{k}_s$ 2.5
^1 S.D.: standard deviation.
Parameters Values
Cohesion C[sj] (MPa) 20
Friction angle φ[j] (°) 50
Dilation angle ψ[j] (°) 0
Normal stiffness k[n,sj] (GPa/m) 1500
Shear stiffness k[s,sj] (GPa/m) 2500
Tensile strength σ[n,sj] (MPa) 5
$S_{A1}$ behavior ($\phi_w$ = 10° / 30° / 50°):
$c_w$ = 10 MPa: I[1] / I[1] / I[1]
$c_w$ = 20 MPa: III[1] / II[1] / II[1]
$c_w$ = 40 MPa: IV[1] / IV[1] / II[1]
$S_{A2}$ behavior ($\phi_w$ = 10° / 30° / 50°):
$c_w$ = 10 MPa: II[2] / I[2] / I[2]
$c_w$ = 20 MPa: III[2] / II[2] / I[2]
$c_w$ = 40 MPa: III[2] / III[2] / I[2]
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Cheng, C.; Li, X.; Qian, H. Anisotropic Failure Strength of Shale with Increasing Confinement: Behaviors, Factors and Mechanism. Materials 2017, 10, 1310. https://doi.org/10.3390/ma10111310
Article Metrics | {"url":"https://www.mdpi.com/1996-1944/10/11/1310","timestamp":"2024-11-02T22:06:56Z","content_type":"text/html","content_length":"586037","record_id":"<urn:uuid:c952c951-a6df-4683-b3cb-0703ddf0e6c4>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00792.warc.gz"} |
Categorical algebra
In mathematics — specifically, in operator theory — a densely defined operator or partially defined operator is a type of partially defined function; in a topological sense, it is a linear operator
that is defined "almost everywhere". Densely defined operators often arise in functional analysis as operations that one would like to apply to a larger class of objects than those for which they a
priori "make sense".
A linear operator T from one topological vector space, X, to another one, Y, is said to be densely defined if the domain of T is a dense subset of X.
• The differentiation operator D, given by (Du)(x) = u′(x), is a densely defined operator from C^0([0, 1]; R) to itself, defined on the dense subspace C^1([0, 1]; R). Note also that the operator D is an example of an unbounded linear operator, since the ratio ‖Du‖_∞ / ‖u‖_∞ (with respect to the supremum norm) can be made arbitrarily large over u in C^1([0, 1]; R). This unboundedness causes problems if one wishes to somehow continuously extend the differentiation operator D to the whole of C^0([0, 1]; R).
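A concrete witness of this unboundedness (added here as an illustration; the particular sequence is a standard choice, not taken from the source) is

\[
  u_n(x) = \sin(n\pi x) \in C^1([0,1];\mathbb{R}), \qquad
  \|u_n\|_\infty = 1, \qquad
  \|Du_n\|_\infty = n\pi \to \infty \quad (n \to \infty),
\]

so no constant $M$ can satisfy $\|Du\|_\infty \le M\,\|u\|_\infty$ for all $u$ in the domain of $D$.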
• The Paley–Wiener integral, on the other hand, is an example of a continuous extension of a densely defined operator. In any abstract Wiener space i : H → E with adjoint j = i^∗ : E^∗ → H, there is a natural continuous linear operator (in fact it is the inclusion, and is an isometry) from j(E^∗) to L^2(E, γ; R), under which j(f) ∈ j(E^∗) ⊆ H goes to the equivalence class [f] of f in L^2(E, γ; R).
It is not hard to show that j(E^∗) is dense in H. Since the above inclusion is continuous, there is a unique continuous linear extension I : H → L^2(E, γ; R) of the inclusion j(E^∗) → L^2(E, γ; R) to the whole of H. This extension is the Paley–Wiener map.
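For orientation, the standard argument behind this extension can be sketched as follows (this summary is added here and is not part of the source text). For $f \in E^*$, the random variable $x \mapsto \langle f, x \rangle$ is centered Gaussian under $\gamma$ with variance $\|j(f)\|_H^2$, so

\[
  \int_E \langle f, x \rangle^2 \, \mathrm{d}\gamma(x) = \|j(f)\|_H^2 ,
\]

which says exactly that $j(f) \mapsto [f]$ is an isometry of $j(E^*) \subseteq H$ into $L^2(E, \gamma; \mathbb{R})$. Given $h \in H$, choose $f_n \in E^*$ with $j(f_n) \to h$ in $H$; the images $[f_n]$ then form a Cauchy sequence in $L^2$, and $I(h) := \lim_n [f_n]$ is well defined, independent of the chosen sequence.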
{"url":"https://en.formulasearchengine.com/index.php?title=Categorical_algebra&oldid=17991","timestamp":"2024-11-10T03:41:03Z","content_type":"text/html","content_length":"24909","record_id":"<urn:uuid:e876412e-a2a3-4637-913a-de3012433578>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00338.warc.gz"}