Construct fixed-point numeric object
To assign a fixed-point data type to a number or variable, create a fi object using the fi constructor. You can specify numeric attributes and math rules in the constructor or by using the
numerictype and fimath objects.
a = fi returns a signed fi object with no value, a 16-bit word length, and a 15-bit fraction length.
a = fi(v) returns a signed fi object with value v, a 16-bit word length, and best-precision fraction length.
a = fi(v,s) returns a fi object with value v, signedness s, a 16-bit word length, and best-precision fraction length.
a = fi(v,s,w) returns a fi object with value v, signedness s, and word length w.
a = fi(v,s,w,f) returns a fi object with value v, signedness s, word length w, and fraction length f.
a = fi(v,s,w,slope,bias) returns a fi object with value v, signedness s, word length w, slope, and bias.
a = fi(v,s,w,slopeadjustmentfactor,fixedexponent,bias) returns a fi object with value v, signedness s, word length w, slopeadjustmentfactor, fixedexponent, and bias.
a = fi(___,F) returns a fi object with fimath F.
a = fi(___,Name,Value) returns a fi object with property values specified by one or more name-value pair arguments.
Input Arguments
v — Value
scalar | vector | matrix | multidimensional array
Value of the fi object, specified as a scalar, vector, matrix, or multidimensional array.
The value of the returned fi object is the value of the input v quantized to the data type specified in the fi constructor. When the input v is a non-double and you do not specify the word length or
fraction length, the returned fi object retains the numerictype of the input. For an example, see Create fi Object from Non-Double Value.
You can specify the non-finite values -Inf, Inf, and NaN as the value only if you fully specify the numerictype of the fi object. When fi is specified as a fixed-point numerictype,
• NaN maps to 0.
• When the 'OverflowAction' property of the fi object is set to 'Wrap', -Inf and Inf map to 0.
• When the 'OverflowAction' property of the fi object is set to 'Saturate', Inf maps to the largest representable value, and -Inf maps to the smallest representable value.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical | fi
s — Signedness
true or 1 (default) | false or 0
Signedness of the fi object, specified as a numeric or logical 1 (true) or 0 (false). A value of 1 (true) indicates a signed data type. A value of 0 (false) indicates an unsigned data type.
Data Types: logical
w — Word length in bits
16 (default) | positive scalar integer
Word length in bits of the fi object, specified as a positive scalar integer.
The word length must be an integer in the range 1 ≤ w ≤ 65535.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
f — Fraction length in bits
15 (default) | scalar integer
Fraction length in bits of the stored integer value of the fi object, specified as a scalar integer. The fraction length must be an integer in the range -65535 ≤ f ≤ 65535.
If you do not specify a fraction length, the fi object automatically uses the fraction length that gives the best precision while avoiding overflow for the specified value, word length, and signedness.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
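As an illustration of best-precision scaling, the following Python sketch finds the largest fraction length at which a rounded value still fits in the stored-integer range. (fi itself is MATLAB-only; the helper name and the downward search are our own assumptions, not MathWorks code.)

```python
def best_precision_fraction_length(v, signed=True, w=16):
    """Largest fraction length f such that round(v * 2**f) still fits in a
    w-bit stored integer -- a sketch of fi's best-precision scaling for a
    nonzero scalar value. Searches downward from a generous starting f."""
    lo, hi = (-(2 ** (w - 1)), 2 ** (w - 1) - 1) if signed else (0, 2 ** w - 1)
    f = 64  # large enough for typical double-precision magnitudes
    while not (lo <= round(v * 2 ** f) <= hi):
        f -= 1
    return f
```

For example, pi at the default signed 16-bit type gives a fraction length of 13, and an unsigned 16-bit type gives 14, matching the constructor examples later in this page.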
slope — Slope
positive scalar
Slope of the scaling of the fi object, specified as a positive scalar.
The real-world value of a slope-bias scaled number is given by: real-world value = slope × stored integer + bias.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
bias — Bias
Bias of the scaling of the fi object, specified as a scalar.
The real-world value of a slope-bias scaled number is given by: real-world value = slope × stored integer + bias.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
slopeadjustmentfactor — Slope adjustment factor
scalar greater than or equal to 1 and less than 2
Slope adjustment factor of the fi object, specified as a scalar greater than or equal to 1 and less than 2.
The slope, fixed exponent, and slope adjustment factor are related by: slope = slopeadjustmentfactor × 2^fixedexponent.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
fixedexponent — Fixed exponent
Fixed exponent of the fi object, specified as a scalar.
The slope, fixed exponent, and slope adjustment factor are related by: slope = slopeadjustmentfactor × 2^fixedexponent.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
T — Numeric type properties
numerictype object
Numeric type properties of the fi object, specified as a numerictype object.
F — Fixed-point math properties
fimath object
Fixed-point math properties of the fi object, specified as a fimath object.
If no fimath properties are specified, the fi constructor uses nearest rounding and saturates on overflow for the creation of the fi object regardless of globalfimath settings. For an example of this
behavior, see Specify Rounding and Overflow Modes in fi Object Constructor.
The fi object has three types of properties: data properties, fimath properties, and numerictype properties.
You can set these properties when you create a fi object. Use the data properties to access data in a fi object. The fimath properties and numerictype properties are, by transitivity, also properties
of the fi object. fimath properties determine the rules for performing fixed-point arithmetic operations on fi objects. The numerictype object contains all the data type and scaling attributes of a
fixed-point object.
Create fi Object
Create a fi object using the default constructor. The constructor returns a signed fi object with no value, a 16-bit word length, and a 15-bit fraction length.
a = fi
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 15
Create a signed fi object with a value of pi, a 16-bit word length, and best-precision fraction length. The fraction length is automatically set to achieve the best precision possible without
a = fi(pi)
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 13
Create an unsigned fi object with a value of pi. When you specify only the value and the signedness of the fi object, the word length defaults to 16 bits with best-precision fraction length.
a = fi(pi,0)
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Unsigned
WordLength: 16
FractionLength: 14
Create a signed fi object with a word length of 8 bits and best-precision fraction length. In this example, the fraction length of a is 5 because three bits are required to represent the integer
portion of the value when the data type is signed.
a = fi(pi,1,8)
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 8
FractionLength: 5
If the fi object is unsigned, only two bits are needed to represent the integer portion, leaving six fractional bits.
b = fi(pi,0,8)
b =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Unsigned
WordLength: 8
FractionLength: 6
Create a signed fi object with a value of pi, a word length of 8 bits, and a fraction length of 3 bits.
a = fi(pi,1,8,3)
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 8
FractionLength: 3
Create an array of fi objects with 16-bit word length and 12-bit fraction length.
a = fi((magic(3)/10),1,16,12)
a =
0.8000 0.1001 0.6001
0.3000 0.5000 0.7000
0.3999 0.8999 0.2000
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 12
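The quantized values above can be reproduced outside MATLAB. This Python sketch (the `quantize` helper is written for illustration and is not the fi implementation) rounds each element to the nearest multiple of 2^-12 and saturates to the signed 16-bit range, fi's default behavior:

```python
import math

def quantize(v, f, w=16, signed=True):
    """Quantize v to a binary-point-scaled fixed-point value with fraction
    length f: round to the nearest multiple of 2**-f (ties toward +inf,
    like fi's 'Nearest') and saturate to the w-bit stored-integer range."""
    scale = 2 ** f
    q = math.floor(v * scale + 0.5)  # round to nearest, ties up
    lo, hi = (-(2 ** (w - 1)), 2 ** (w - 1) - 1) if signed else (0, 2 ** w - 1)
    q = max(lo, min(hi, q))          # saturate on overflow
    return q / scale

magic3 = [[8, 1, 6], [3, 5, 7], [4, 9, 2]]  # MATLAB's magic(3)
a = [[quantize(x / 10, f=12) for x in row] for row in magic3]
```

For instance, 0.1 becomes 410/4096 ≈ 0.1001 and 0.4 becomes 1638/4096 ≈ 0.3999, matching the displayed array.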
Create fi Object with Slope and Bias Scaling
The real-world value of a slope and bias scaled number is represented by: real-world value = slope × stored integer + bias.
To create a fi object that uses slope and bias scaling, include the slope and bias arguments after the word length in the constructor. For example, create a fi object with a slope of 3 and a bias of 2.
a = fi(pi,1,16,3,2)
a =
DataTypeMode: Fixed-point: slope and bias scaling
Signedness: Signed
WordLength: 16
Slope: 3
Bias: 2
The DataTypeMode property of the fi object a is Fixed-point: slope and bias scaling.
Alternatively, you can specify the slope adjustment factor and fixed exponent, where slope = slopeadjustmentfactor × 2^fixedexponent.
For example, create a fi object with a slope adjustment factor of 1.5, a fixed exponent of 1, and a bias of 2.
a = fi(pi,1,16,1.5,1,2)
a =
DataTypeMode: Fixed-point: slope and bias scaling
Signedness: Signed
WordLength: 16
Slope: 3
Bias: 2
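Numerically, the slope relationship and the slope-bias value mapping are easy to check with a short Python sketch (the helper names here are illustrative assumptions, not MathWorks code):

```python
def slope_from(slope_adjustment_factor, fixed_exponent):
    # slope = slopeadjustmentfactor * 2**fixedexponent
    return slope_adjustment_factor * 2 ** fixed_exponent

def real_world_value(stored_integer, slope, bias):
    # real-world value = slope * stored integer + bias
    return slope * stored_integer + bias

def quantize_slope_bias(v, slope, bias):
    # nearest representable value: round the stored integer
    stored = round((v - bias) / slope)
    return real_world_value(stored, slope, bias)
```

With a slope adjustment factor of 1.5 and a fixed exponent of 1, the slope is 1.5 × 2^1 = 3, matching the display above.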
Create fi Object from numerictype Object
A numerictype object contains all of the data type information of a fi object. numerictype properties are also properties of fi objects.
You can create a fi object that uses all of the properties of an existing numerictype object by specifying the numerictype object in the fi constructor.
T =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Unsigned
WordLength: 24
FractionLength: 16
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Unsigned
WordLength: 24
FractionLength: 16
Create fi Object with Associated fimath
The arithmetic attributes of a fi object are defined by a fimath object which is attached to that fi object.
Create a fimath object and specify the OverflowAction, RoundingMethod, and ProductMode properties.
F = fimath('OverflowAction','Wrap',...
'RoundingMethod','Floor','ProductMode','KeepMSB')
F =
RoundingMethod: Floor
OverflowAction: Wrap
ProductMode: KeepMSB
ProductWordLength: 32
SumMode: FullPrecision
Create a fi object and specify the fimath object F in the constructor.
a = fi(pi,F)
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 13
RoundingMethod: Floor
OverflowAction: Wrap
ProductMode: KeepMSB
ProductWordLength: 32
SumMode: FullPrecision
Use the removefimath function to remove the associated fimath object and restore the math settings to their default values.
a = removefimath(a)
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 13
Create fi Object from Non-Double Value
When the input argument v of a fi object is not a double and you do not specify the word length or fraction length properties, the returned fi object retains the numeric type of the input.
Create fi Object from Built-in Integer
When the input is a built-in integer, the fixed-point attributes match the attributes of the integer type.
v1 = uint32(5);
a1 = fi(v1)
a1 =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Unsigned
WordLength: 32
FractionLength: 0
v2 = int8(5);
a2 = fi(v2)
a2 =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 8
FractionLength: 0
Create fi Object from fi Object
When the input value is a fi object, the output uses the same word length, fraction length, and signedness as the input fi object.
v = fi(pi,1,24,12);
a = fi(v)
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 24
FractionLength: 12
Create fi Object from Logical
When the input value is a logical, the DataTypeMode property of the output fi object is Boolean.
a =
DataTypeMode: Boolean
Create fi Object from Single
When the input value is single, the DataTypeMode property of the output is Single.
v = single(pi);
a = fi(v)
a =
DataTypeMode: Single
Specify Rounding and Overflow Modes in fi Object Constructor
You can set fimath properties, such as rounding and overflow modes during the creation of the fi object.
a = fi(pi,'RoundingMethod','Floor',...
'OverflowAction','Wrap')
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 13
RoundingMethod: Floor
OverflowAction: Wrap
ProductMode: FullPrecision
SumMode: FullPrecision
The RoundingMethod and OverflowAction properties are properties of the fimath object. Specifying these properties in the fi constructor associates a local fimath object with the fi object.
Use the removefimath function to remove the local fimath and set the math properties back to their default values.
a = removefimath(a)
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 13
Creating a fi object with no fimath properties set in the constructor uses the default RoundingMethod and OverflowAction, regardless of any globalfimath settings.
To observe this behavior, specify globalfimath.
ans =
RoundingMethod: Floor
OverflowAction: Wrap
ProductMode: FullPrecision
SumMode: FullPrecision
Construct a fi object with no fimath settings in the constructor.
b =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 8
FractionLength: 0
The resulting value of b uses Nearest rounding and Saturate as the overflow action. If this behavior is not suitable for your application, see fi Constructor Does Not Follow globalfimath Rules for a workaround.
Reset the globalfimath to restore default values.
Set Data Type Override on fi Object
This example shows how to use the DataTypeOverride setting of the fipref object to override fi objects with doubles, singles, or scaled doubles. The fipref object defines the display and logging attributes for all fi objects.
Save the current fipref settings to restore later.
fp = fipref;
initialDTO = fp.DataTypeOverride;
Create a fi object with the default settings and original fipref settings.
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 13
Use the fipref object to turn on data type override to doubles.
ans =
NumberDisplay: 'RealWorldValue'
NumericTypeDisplay: 'full'
FimathDisplay: 'full'
LoggingMode: 'Off'
DataTypeOverride: 'TrueDoubles'
DataTypeOverrideAppliesTo: 'AllNumericTypes'
Create a new fi object without specifying its DataTypeOverride property so that it uses the data type override settings specified using fipref.
a =
DataTypeMode: Double
Create another fi object and set its DataTypeOverride setting to off so that it ignores the data type override settings of the fipref object.
b = fi(pi,'DataTypeOverride','Off')
b =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 13
Restore the fipref settings saved at the start of the example.
fp.DataTypeOverride = initialDTO;
fi Behavior for -Inf, Inf, and NaN
To use the non-numeric values -Inf, Inf, and NaN as fixed-point values with fi, you must fully specify the numeric type of the fixed-point object. Automatic best-precision scaling is not supported
for these values.
Saturate on Overflow
When the numeric type of the fi object is specified to saturate on overflow, then Inf maps to the largest representable value of the specified numeric type, and -Inf maps to the smallest
representable value. NaN maps to zero.
x = [-inf nan inf];
a = fi(x,1,8,0,'OverflowAction','Saturate')
b = fi(x,0,8,0,'OverflowAction','Saturate')
a =
-128 0 127
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 8
FractionLength: 0
RoundingMethod: Nearest
OverflowAction: Saturate
ProductMode: FullPrecision
SumMode: FullPrecision
b =
0 0 255
DataTypeMode: Fixed-point: binary point scaling
Signedness: Unsigned
WordLength: 8
FractionLength: 0
RoundingMethod: Nearest
OverflowAction: Saturate
ProductMode: FullPrecision
SumMode: FullPrecision
Wrap on Overflow
When the numeric type of the fi object is specified to wrap on overflow, then -Inf, Inf, and NaN map to zero.
x = [-inf nan inf];
a = fi(x,1,8,0,'OverflowAction','Wrap')
b = fi(x,0,8,0,'OverflowAction','Wrap')
a =
0 0 0
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 8
FractionLength: 0
RoundingMethod: Nearest
OverflowAction: Wrap
ProductMode: FullPrecision
SumMode: FullPrecision
b =
0 0 0
DataTypeMode: Fixed-point: binary point scaling
Signedness: Unsigned
WordLength: 8
FractionLength: 0
RoundingMethod: Nearest
OverflowAction: Wrap
ProductMode: FullPrecision
SumMode: FullPrecision
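The saturate and wrap mappings for non-finite inputs can be mimicked in a few lines of Python. This is an illustrative sketch of the documented rules for a fraction length of 0, not MathWorks code:

```python
import math

def fi_nonfinite(x, signed, w, overflow='Saturate'):
    """Map -inf, inf, and nan the way fi does for a fully specified
    fixed-point type with fraction length 0 (an illustrative sketch)."""
    lo, hi = (-(2 ** (w - 1)), 2 ** (w - 1) - 1) if signed else (0, 2 ** w - 1)
    if math.isnan(x):
        return 0                    # NaN maps to 0
    if math.isinf(x):
        if overflow == 'Wrap':
            return 0                # -Inf and Inf map to 0 on wrap
        return hi if x > 0 else lo  # saturate to the representable range
    return x

x = [-math.inf, math.nan, math.inf]
a = [fi_nonfinite(v, True, 8) for v in x]  # signed 8-bit, saturate
```

For the signed saturating case this reproduces the -128 0 127 row shown for a above; the unsigned case gives 0 0 255, and either wrap case gives all zeros.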
• Use the fipref object to control the display, logging, and data type override preferences for fi objects.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
• The default constructor syntax without any input arguments is not supported.
• If the numerictype is not fully specified, the input to fi must be a constant, a fi, a single, or a built-in integer value. If the input is a built-in double value, it must be a constant. This
limitation allows fi to autoscale its fraction length based on the known data type of the input.
• All properties related to data type must be constant for code generation.
• numerictype object information must be available for nonfixed-point Simulink® inputs.
HDL Code Generation
Generate VHDL, Verilog and SystemVerilog code for FPGA and ASIC designs using HDL Coder™.
Version History
Introduced before R2006a
R2021a: Inexact property names for fi, fimath, and numerictype objects not supported
In previous releases, inexact property names for fi, fimath, and numerictype objects would result in a warning. In R2021a, support for inexact property names was removed. Use exact property names instead.
R2020b: Change in default behavior of fi for -Inf, Inf, and NaN
In previous releases, fi returned an error when passed the non-finite input values -Inf, Inf, or NaN. fi now treats these inputs in the same way that MATLAB® and Simulink® handle -Inf, Inf, and NaN for integer data types.
When fi is specified as a fixed-point numeric type,
• NaN maps to 0.
• When the 'OverflowAction' property of the fi object is set to 'Wrap', -Inf and Inf map to 0.
• When the 'OverflowAction' property of the fi object is set to 'Saturate', Inf maps to the largest representable value, and -Inf maps to the smallest representable value.
For an example of this behavior, see fi Behavior for -Inf, Inf, and NaN.
Best-precision scaling is not supported for input values of -Inf, Inf, or NaN.
Question asked by Filo student
The term "difference" that is used in the definition means the distance to the farther point minus the distance to the closer point. The two fixed points are called the foci of the hyperbola. The mid-point of the line segment joining the foci is called the centre of the hyperbola. The line through the foci is called the transverse axis, and the line through the centre and perpendicular to the transverse axis is called the conjugate axis. The points at which the hyperbola intersects the transverse axis are called the vertices of the hyperbola (Fig 11.29). We denote the distance between the two foci by 2c, the distance between the two vertices (the length of the transverse axis) by 2a, and we define the quantity b as b = √(c² − a²). Also, 2b is the length of the conjugate axis (Fig 11.30). To find the constant P₁F₂ − P₁F₁: by taking the point P at A and B in Fig 11.30, we have, by the definition of the hyperbola, BF₁ − BF₂ = AF₂ − AF₁, i.e. AF₁ = BF₂. So that BF₁ − BF₂ = BA + AF₁ − BF₂ = BA = 2a.
Updated Dec 20, 2022
Topic Calculus
Subject Mathematics
Class Class 12
Lecture: Mauro Maggioni
Data Science Seminar
Mauro Maggioni (Johns Hopkins University)
You may attend the talk either in person in Walter 402 or register via Zoom. Registration is required to access the Zoom webinar.
Title: Two estimation problems for dynamical systems: linear systems on graphs, and interacting particle systems
Abstract: We are interested in problems where certain key parameters of a dynamical system need to be estimated from observations of trajectories of the dynamical systems. In this talk I will discuss
two problems of this type.
The first one is the following: suppose we have a linear dynamical systems on a graph, represented by a matrix A. For example, A may be a random walk on the graph. Suppose we observe some entries of
A, some entries of A^2, …, some entries of A^T, for some time T, and wish to estimate A. We are interested in the regime when the number of entries observed at each time is small relative to the
total number of entries of A. When T=1 and A is low-rank, this is a matrix completion problem. When T>1, the problem is interesting also in the case when A is not low rank, as one may hope that
sampling at multiple times can compensate for the small number of entries observed at each time. We develop conditions that ensure that this estimation problem is well-posted, introduce a procedure
for estimating A by reducing the problem to the matrix completion of a low-rank structured block-Hankel matrix, obtain results that capture at least some of trade-offs between sampling in space and
time, and finally show that this estimator can be constructed by a fast algorithm that provably locally converges quadratically to A. We verify this numerically on a variety of examples. This is
joint work with C. Kuemmerle and S. Tang.
The second problem is when the dynamical system is nonlinear, and models a set of interacting agents. These systems are ubiquitous in science, from modeling of particles in Physics to prey-predator
and colony models in Biology, to opinion dynamics in social sciences. Oftentimes the laws of interactions between the agents are quite simple, for example they depend only on pairwise interactions,
and only on pairwise distance in each interaction. We consider the following inference problem for a system of interacting particles or agents: given only observed trajectories of the agents in the
system, can we learn what the laws of interactions are? We would like to do this without assuming any particular form for the interaction laws, i.e. they might be “any” function of pairwise
distances. We discuss when this problem is well-posed, we construct estimators for the interaction kernels with provably good statistically and computational properties, and discuss extensions to
second-order systems, more general interaction kernels, and stochastic systems. We measure empirically the performance of our techniques on various examples, that include extensions to agent systems
with different types of agents, second-order systems, families of systems with parametric interaction kernels, and settings where the interaction kernels depend on unknown variables. We also conduct
numerical experiments to test the large time behavior of these systems, especially in the cases where they exhibit emergent behavior. This is joint work with F. Lu, J. Feng, P. Martin, J. Miller, S. Tang and M. Zhong.
Introductory Chemical Engineering Thermodynamics, 2nd ed.
Common Property Change Calculations (uakron, 11min). When we need to compute a change in energy or enthalpy, we may quickly resort to CvΔT or CpΔT, but you should also note that large changes can
occur due to phase change. These considerations motivate careful consideration of the definitions of Cv and Cp, and the development of convenient equations for estimating heat of vaporization. To
know when to apply the heat of vaporization, you need to know the saturation conditions, for which a quick estimate can be obtained from the short-cut vapor pressure (SCVP) equation. When the
chemical of interest is H2O, these hand calculation methods can be compared to the properties given in the steam tables. Sample calculations of property changes (uakron, 21min) can be used to
illustrate the precision of the quick estimates obtained from Eqs. 2.45, 2.47 and the back flap. These calculations provide practice with the steam tables at unusual conditions as well as validating
your skills with the hand calculation formulas.
Comprehension Questions:
1. Develop an adaptation of props.xlsx that is most convenient for you personally to compute quick estimates of saturation temperature, saturation pressure, ideal gas enthalpy changes. You might want
to view the props.xlsx and shortcut Antoine coefficients software tutorials.
2. Quickly estimate the change in enthalpy as CO2 goes from 350K, 1bar to 300K, 1bar.
3. Quickly estimate the change in internal energy as CO2 goes from 350K, 1bar to 300K, 1bar.
4. Quickly estimate the change in enthalpy as CO2 goes from 350K, 1bar to 300K, 100bar. Hint: the change in enthalpy to go from a saturated liquid to a compressed liquid can be computed from the
adiabatic, reversible pump work.
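For question 1, a minimal Python version of the short-cut vapor pressure estimate can be sketched as follows. The function name is our own, and the equation shown is the common SCVP form, log10(Psat/Pc) = (7/3)(1 + ω)(1 − Tc/T); the critical constants Tc, Pc, and the acentric factor ω must be supplied by the user:

```python
def scvp_psat(T, Tc, Pc, omega):
    """Short-cut vapor pressure (SCVP) estimate:
    log10(Psat / Pc) = (7/3) * (1 + omega) * (1 - Tc / T).
    T and Tc must be in the same absolute temperature units;
    Psat is returned in the same pressure units as Pc."""
    return Pc * 10 ** ((7.0 / 3.0) * (1 + omega) * (1 - Tc / T))
```

A quick sanity check: at T = Tc the exponent is zero, so the estimate returns the critical pressure exactly, and below Tc the estimated vapor pressure falls below Pc.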
First some notation....
n^x = n to the power of x
Log_n (n^x) = x (where the _n denotes "subscript n" and means the Log was in base n)
For example:
Log_10 (10) = 1 (The subscript 10 means the Log was in base 10)
Log_10 (1000) = 3
Log_2 (32) = 5
Log_e (e) = 1
Log base e is referred to as "the natural log" and written as the function "ln" (pronounced Lin)
Where e is Euler's number
Log_e (x) = ln(x) -- much easier to write, no need for subscripts.
By convention, if you write Log(x) without specifying a base, then you assume it to be base 10.
Log(1) = 0
Ln(1) = 0
Because n^0 = 1
Log is undefined for 0 and negative numbers
As a positive number gets smaller and smaller and closer to zero, the Log of the number becomes a huge negative number
e.g. 10^-7 is 0.0000001
So Log(0.0000001) = -7
What happens when the number reaches 0? We are in spooky 'undefined' territory
ln(0) = undefined
ln(-3) undefined
Basic log rules (these work for any base)
Log(m^n) = n Log(m)
Log(a) + Log(b) = Log(a*b)
Log(10) + Log(1000) = 1+3 = 4
Log(10000) = 4
10000 = 10*1000
Log(a) - Log(b) = Log(a/b)
Log(10000) - Log(100) = Log(10000/100) = Log(100) = 2
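These rules are easy to check numerically. A small Python sketch using math.log10 (base-10 logs, as in the examples above):

```python
import math

log10 = math.log10

# Power rule: log(m**n) = n * log(m)
assert math.isclose(log10(2 ** 5), 5 * log10(2))
# Product rule: log(a) + log(b) = log(a * b)
assert math.isclose(log10(10) + log10(1000), log10(10 * 1000))
# Quotient rule: log(a) - log(b) = log(a / b)
assert math.isclose(log10(10000) - log10(100), log10(100))
```

The same checks pass with math.log (natural log) in place of math.log10, since the rules hold in any base.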
The "Log base switch rule"
log_b(c) = 1 / log_c(b)
Convert to the natural log
Sometimes you'll have an equation that has a base other than 10 or e. To be able to get an answer on your calculator you'll need to convert it to base 10 or base e.
In such cases you use: the log base change rule
log_b(x) = log_c(x) / log_c(b)
So perhaps you've ended up with an answer of: log_12 (14)
To turn this into a number....
log_12 (14) = ln(14) / ln(12)
...which you can plug into a calculator (provided it has a ln button, i.e. it is a 'scientific' calculator)
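In Python the same base change is one line; math.log also accepts an explicit base argument, which gives a quick cross-check:

```python
import math

x = math.log(14) / math.log(12)  # log base 12 of 14, via ln(14) / ln(12)

# math.log(value, base) applies the same change-of-base internally
assert math.isclose(x, math.log(14, 12))
# sanity check: 12**x should recover 14
assert math.isclose(12 ** x, 14)
```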
18 April
The difference of two squares formula is useful when factoring quadratic expressions. In general form, a difference of two squares is $$a^{2}-b^{2}=(a+b)(a-b)$$
Simplify Algebraic Expressions Involving Exponents
Whether you face an algebraic fraction with exponents or without exponents, there is no difference in the solution steps. What we need to mention here is that the types of algebraic fractions in the
previous tutorial were all monomials.
Solving Two Linear Equations
An equation is linear only if the exponents of the unknown variables equal to one. For a system of linear equations, an equation should have at least one variable.
An Introduction to Fraction Arithmetic
Fraction, which means breaking in Latin, is a way of representing a part or several parts of a whole unit. Fractions are noted or written down with two numbers, one at the top (aka numerator) and the
other at the bottom (denominator), separated by a bar.
How To Find Common Factors
To understand this term, we first have to define factors. A simple definition of factors is that they are whole numbers whose product gives another number. In other words, if a is a factor of A, then
dividing A by should leave no remainder.
How To Find Where Two Parabolas Intersect
After looking at the intersection between a parabola and a line in our previous tutorial, let’s now look at how we can find a solution to a system of two parabolas. Such a system has a solution if
and only if they meet at one or more points. One solution suggests that the two parabolas are tangential to one another.
Solving Quadratic Equations Using The Quadratic Formula
The quadratic formula simplifies the solution of quadratic equation problems. If you equate a trinomial (polynomial with three terms) to zero, you get a quadratic equation. It takes the form $$a x^{2}+b x+c=0$$
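As an illustration of the quadratic formula in code (a sketch with a function name of our choosing; complex roots are simply rejected here):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 via the quadratic formula
    x = (-b +/- sqrt(b**2 - 4ac)) / (2a). Returns a tuple of two roots
    (repeated for a double root); raises ValueError if none are real."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real roots")
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))
```

For example, x² − 5x + 6 = 0 factors as (x − 2)(x − 3), and the formula returns the roots 3 and 2.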
How To Solve Absolute Value Equations
An absolute value of any number is its distances from the origin to either side of the number line. That is, an absolute value can take negative or positive values.
How To Calculate Area Of A Circle Or A Square
Circles and squares are basic shapes that we interact within our everyday lives. Most of the objects have these shapes. The knowledge of their areas and perimeters is key in making such objects.
Think of your ventilation holes in the wall as an example. What of those circular windows? In this tutorial, we will shift our focus to squares and circles. You can check other shapes in our previous tutorials.
Exponent Arithmetic Rules
Exponent Arithmetic is like any game that has rules. If you play by the rules, you never get into trouble with the referee. Your work is to master the rules of exponents through continuous practice.
Everything will be easy when you know these rules. Exponent Arithmetic involves working with the bases and the exponents. The representation of exponents is $$a^{n}$$ where a is the base and n the
exponent. To understand this topic, we will first state all the rules and then wrap it up with solved examples to show how the rules work.
Measurements of Non-Grinding Forces and Power
Zhongde Shi^ and Helmi Attia
National Research Council Canada
5145 Ave. Decelles, Montréal, Québec H3T 2B2, Canada
^ Corresponding author
May 28, 2020
August 26, 2020
January 5, 2021
Keywords: grinding forces, grinding power, net forces, net power, fluid forces
Grinding forces and power are important parameters for evaluating grinding process performance, and they are typically measured in grinding experiments. Forces are usually measured with a load cell or a dynamometer, whereas power is measured with an electrical power sensor that monitors the spindle motor. Direct readings of these measurements include both the net grinding force and power components associated with material removal and non-grinding components such as the impingement of the grinding fluid. The net components must therefore be extracted from the direct readings. One approach to extracting the net grinding forces and power is to perform additional spark-out grinding passes with no down feed; the forces and power recorded in a complete spark-out pass are taken as the non-grinding components, and the net grinding components are obtained by subtracting them from the corresponding totals for actual grinding passes. This approach becomes less accurate when large depths of cut are involved, particularly large depths of cut combined with short grinding lengths. A new experimental approach is developed in this study to measure the non-grinding force and power components and to extract the net components. Compared with the existing approach, the new approach is more accurate for grinding with large depths of cut or short grinding lengths. In this approach, two additional grinding passes on an easy-to-grind material, one with and one without grinding fluid, are conducted using the same setup and conditions as for the actual test material. The measured forces and power are taken as the non-grinding components of the actual material and subtracted from its total force and power components to obtain the net values. To illustrate the application of the approach, surface grinding experiments are conducted to collect the forces and power. The extracted net power is consistent with the power predicted from the extracted net forces.
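The subtraction step described in the abstract is simple enough to sketch. The numbers below are hypothetical readings for illustration, not data from the paper:

```python
def net_components(total, non_grinding):
    """Subtract measured non-grinding components from the totals
    recorded during actual grinding passes (same units throughout)."""
    return {k: total[k] - non_grinding[k] for k in total}

# Hypothetical readings: tangential force Ft [N], normal force Fn [N], power P [W]
total_reading = {"Ft": 85.0, "Fn": 190.0, "P": 2100.0}   # actual grinding pass
non_grinding  = {"Ft": 12.0, "Fn": 35.0,  "P": 400.0}    # fluid-only baseline pass
net = net_components(total_reading, non_grinding)

# Consistency check in the spirit of the paper: net power should roughly equal
# net tangential force times wheel speed (an assumed 23 m/s wheel speed here).
predicted_power = net["Ft"] * 23.0
print(net["P"], predicted_power)   # 1700.0 1679.0
```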
Cite this article as:
Z. Shi and H. Attia, “Measurements of Non-Grinding Forces and Power,” Int. J. Automation Technol., Vol.15 No.1, pp. 80-88, 2021.
Sampling Importance Resampling (SIR) was introduced in Gordon, et al. (1993), and is the original particle filtering algorithm (and this family of algorithms is also known as Sequential Monte Carlo).
A distribution is approximated with importance weights, which are approximations to the relative posterior densities of the particles, and the sum of the weights is one. In this terminology, each sample in the distribution is a "particle". SIR is a sequential or recursive form of importance sampling. As in importance sampling, the expectation of a function can be approximated as a weighted average. The optimal proposal distribution is the target distribution.
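The core of the algorithm can be sketched as follows (a generic illustration in Python/NumPy, not the LaplacesDemon implementation):

```python
import numpy as np

def sir(draws, log_target, log_proposal, n, rng=None):
    """Sampling Importance Resampling: reweight draws from a proposal
    toward a target density, then resample using the normalized weights."""
    rng = np.random.default_rng() if rng is None else rng
    log_w = log_target(draws) - log_proposal(draws)   # log importance ratios
    w = np.exp(log_w - log_w.max())                   # stabilize before exponentiating
    w /= w.sum()                                      # importance weights sum to one
    idx = rng.choice(len(draws), size=n, replace=True, p=w)
    return draws[idx]                                 # each retained draw is a "particle"
```

For instance, drawing from a wide N(0, 3) proposal and resampling toward an N(2, 1) target (log densities up to a constant) yields samples whose mean and standard deviation are close to 2 and 1.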
In the LaplacesDemon package, the main use of the SIR function is to produce posterior samples for iterative quadrature, Laplace Approximation, or Variational Bayes, and SIR is called
behind-the-scenes by the IterativeQuadrature, LaplaceApproximation, or VariationalBayes function.
Iterative quadrature estimates the posterior mean and the associated covariance matrix. Assuming normality, this output characterizes the marginal posterior distributions. However, it is often useful
to have posterior samples, in which case the SIR function is used to draw samples. The number of samples, n, should increase with the number and intercorrelations of the parameters. Otherwise,
multimodal posterior distributions may occur.
Laplace Approximation estimates the posterior mode and the associated covariance matrix. Assuming normality, this output characterizes the marginal posterior distributions. However, it is often
useful to have posterior samples, in which case the SIR function is used to draw samples. The number of samples, n, should increase with the number and intercorrelations of the parameters. Otherwise,
multimodal posterior distributions may occur.
Variational Bayes estimates both the posterior mean and variance. Assuming normality, this output characterizes the marginal posterior distributions. However, it is often useful to have posterior samples, in which case the SIR function is used to draw samples. The number of samples, n, should increase with the number and intercorrelations of the parameters. Otherwise, multimodal posterior distributions may occur.
SIR is also commonly used when considering a mild change in a prior distribution. For example, suppose a model was updated in LaplacesDemon, and it had a least-informative prior distribution, but the
statistician would like to estimate the impact of changing to a weakly-informative prior distribution. The change is made in the model specification function, and the posterior means and covariance
are supplied to the SIR function. The returned samples are estimates of the posterior, given the different prior distribution. This is akin to sensitivity analysis (see the SensitivityAnalysis function).
In other contexts (for which this function is not designed), SIR is used with dynamic linear models (DLMs) and state-space models (SSMs) for state filtering.
Parallel processing may be performed when the user specifies CPUs to be greater than one, implying that the specified number of CPUs exists and is available. Parallelization may be performed on a
multicore computer or a computer cluster. Either a Simple Network of Workstations (SNOW) or Message Passing Interface (MPI) is used. With small data sets and few samples, parallel processing may be
slower, due to computer network communication. With larger data sets and more samples, the user should experience a faster run-time.
This function was adapted from the sir function in the LearnBayes package.
CWG Issue 566
This is an unofficial snapshot of the ISO/IEC JTC1 SC22 WG21 Core Issues List revision 115d. See http://www.open-std.org/jtc1/sc22/wg21/ for the official list.
566. Conversion of negative floating point values to integer type
Section: 7.3.11 [conv.fpint] Status: NAD Submitter: Seungbeom Kim Date: 13 March 2006
Section 7.3.11 [conv.fpint] paragraph 1 states:
An rvalue of a floating point type can be converted to an rvalue of an integer type. The conversion truncates; that is, the fractional part is discarded.
Here, the concepts of “truncation” and “fractional part” seem to be used without precise definitions. When -3.14 is converted into an integer, is the truncation toward zero or away from zero? Is the fractional part -0.14 or 0.86? The standard seems to give no clear answer to these questions.
Suggested resolution:
1. Replace “truncates” with “truncates toward zero.”
2. Replace “the fractional part” with “the fractional part (where that of x is defined as x-floor(x) for nonnegative x and x-ceiling(x) for negative x);” there should be a better wording for this,
or the entire statement “that is, the fractional part is discarded” can be removed, once the meaning of “truncation” becomes unambiguous as above.
Rationale (October, 2006):
The specification is clear enough: “fractional part” refers to the digits following the decimal point, so that -3.14 converted to int becomes -3.
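The behavior is easy to demonstrate. Python's int() conversion and math.trunc follow the same truncate-toward-zero rule as the C++ floating-integral conversion, so they serve as a convenient illustration:

```python
import math

# Truncation toward zero: the digits after the decimal point are discarded,
# so converting -3.14 yields -3, not -4.
assert int(-3.14) == -3
assert math.trunc(-3.14) == -3

# floor, by contrast, rounds toward negative infinity:
assert math.floor(-3.14) == -4
```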
"Statistically Insignificant"? Watch out!
Experts can detect non-experts just by the way they use technical terms. Something is said that no one trained in the field would actually say, and it sets one's teeth on edge to hear it...
The great fictional detective Nero Wolfe once identified the culprit, who claimed to be a law student, by asking him at dinner if they had taught him to "draft torts." When the student assured Wolfe
that they had, he knew him to be a fraud.
This brings us to the term "statistically insignificant." It is a phrase that would not be used by statisticians because it is misleading and has no technical meaning. It is sometimes substituted
when someone means "not statistically significant." To the casual observer, this might seem to be a distinction without a difference. Nit-picking.
That view would be wrong. The distinction is important. Anyone following the markets should understand and appreciate this distinction, because it can lead to a clear trading edge versus those who do
not get it. I am trying to make this as non-technical as possible, so please read on.
Statistical significance is a technical term related to the measurement of sampling error. In most cases, researchers are attempting to accept or reject a null hypothesis. For these purposes, the
advancement of science is very cautious about rejecting a null hypothesis, so tests of significance generally require probabilities that the "effect" measured is extremely unlikely to have occurred
as a result of a sample (perhaps from a survey) differing by chance from the entire population that it is expected to represent. It does not address non-sampling error -- including many things like
poorly designed questions, not identifying the relevant population to be polled, interviewer bias, etc.
Back in the old days, students learned in an early class the difference between statistical significance and substantive significance. Let's suppose, for example, that we did a survey of voters in
Illinois about the upcoming gubernatorial election. It found that likely male voters planned to choose the incumbent (male, Democratic) candidate at a rate of 55.3%. Female voters planned to choose the incumbent (over the female, Republican challenger) at a rate of 54.9%. This small difference between expected male and female voters is not very important in a substantive sense. The headline
of the news story might be that men and women see the election the same way. Despite this main story theme, if the sample were large enough, the difference would be statistically significant. That
would mean only that the .4% difference was very unlikely to be the result of sampling error. The null hypothesis of "no difference" could be rejected.
In short -- large samples narrow the confidence interval, often called the "margin of error" in journalistic terms. Making the sample large does not mean that the difference identified is important. Substantive and statistical significance are two completely different things.
If anyone is still reading at this point, let's check out how it applies to market data.
In his influential and widely-read blog, The Big Picture, Barry Ritholtz delved into the new home starts data from the Census Bureau. Barry wrote as follows:
"Here is the data point released by the Census Bureau:
Privately-owned housing starts in September were at a seasonally adjusted annual rate of 1,772,000. This is 5.9 percent (±8.9%)*
Single-family housing starts in September were at a rate of 1,426,000; this is 4.3 percent (±8.4%)* above the August figure of 1,367,000.
What is the mathematical significance of this release? ABSOLUTELY ZERO. Any datapoint below the margin of error is statistically insignificant.
As the Census Bureau notes:
* 90% confidence interval includes zero. The Census Bureau does not have sufficient statistical evidence to conclude that the actual change is different from zero.
Insufficient evidence to conclude the change is different from zero. So September starts up 5.9% with a +/- 8.9% error rate means nothing. Single Family Home starts of 4.3% and a +/- 8.4% margin
is meaningless."
Before analyzing this, let me make a few points:
• We are not necessarily taking issue with anything related to housing, or the blip Barry identifies in his later discussion.
• We agree that this is a "noisy" series and one that is difficult to interpret.
• We applaud Barry's effort to educate his readers and highlight the issues in the data.
• The point we are making is technical but important. Virtually everyone commenting on the markets says similar things. Barry is just providing a convenient example for the illustration.
Now to the analysis --
To say that the results are "statistically insignificant" or that there is "absolutely zero" mathematical significance is incorrect. If Barry wants to test this, I will construct a game where we draw marbles out of a cloaked container holding a 60-40 mix of blue and white marbles. The sample size will not be enough to attain "statistical significance." We will each start with a stack of hundred-dollar bills. I get to choose the color based upon a series of sample draws that do not attain statistical significance; he has to take the other side. I will swiftly prove that just because something is not "statistically significant" does not mean that it has no value. If he agrees to play long enough, I'll fly him to Chicago for the game!
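A quick simulation of the proposed game (with hypothetical rules filled in for illustration: guess the majority color from a five-marble sample, then bet on one fresh draw) shows the edge:

```python
import random

def play(sample_size, n_rounds, seed=0):
    """Bet on the urn's majority color using tiny samples that would never
    pass a significance test; the edge shows up anyway over many rounds."""
    rng = random.Random(seed)
    urn = ["blue"] * 60 + ["white"] * 40            # the 60-40 container
    wins = 0
    for _ in range(n_rounds):
        sample = rng.sample(urn, sample_size)       # small, "insignificant" sample
        guess = max(set(sample), key=sample.count)  # pick the sample's majority color
        wins += (rng.choice(urn) == guess)          # bet on one fresh draw
    return wins / n_rounds

print(play(5, 20000))   # comfortably above 0.5 in the long run
```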
What would I (or any expert statistician) conclude from the actual data cited?
• The single most likely value for new housing starts is an increase of 5.9%.
• The increase in housing starts is statistically significant at a level of about 70%. That is we can be 70% sure that the actual increase is not zero. The Census Bureau uses 90%. Journal writers
use 95% or 99%. The choice for one's margin of error is arbitrary.
• If we could know the "true" increase in housing starts (which we never will) that number is just as likely to be 11.8% as it is to be zero.
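Assuming a normal approximation and the 90% two-sided z-value of 1.645 used by the Census Bureau, the confidence level at which the reported interval just excludes zero can be backed out from the release numbers (a sketch with a hypothetical helper name):

```python
from math import erf, sqrt

def confidence_excluding_zero(estimate, margin_90):
    """Two-sided confidence level at which an interval centered on the
    estimate, with the given 90% margin of error, just excludes zero."""
    se = margin_90 / 1.645          # back out the standard error from the margin
    z = abs(estimate) / se          # how many standard errors from zero
    return erf(z / sqrt(2))         # equals 2*Phi(z) - 1 for a normal

# September housing starts: +5.9% with a 90% margin of +/- 8.9%
print(round(confidence_excluding_zero(5.9, 8.9), 2))   # 0.72 -- "about 70%"
```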
If you want to read more about substantive significance (called oomph in this excellent essay), check out this source. Among other things, it points out that 96% of the articles in the leading economics journal misused statistical significance during the '80s. Barry is far from alone here!
Bottom line for investors and traders: Much of what we see comes from surveys of one sort or another. The information does have value, but any single data point may be suspect, requiring careful
interpretation. We have tried to help in the educational process with our Payroll Employment Game, where players get to see how the survey results affect their predictions. Please give it a try and
read the excellent technical notes by my colleague Allen Russell.
hi, i'm a fan of stats and irony so this page tickled me. herein, you claim to never take advice from someone who uses the term in question, however:
'What is the mathematical significance of this release? ABSOLUTELY ZERO. Any datapoint below the margin of error is "statistically insignificant".'
while i agree with you intellectually, i've found that the use of that term is more a misnomer, and it would be more accurate to say the term should be "statistically irrelevant" -- as in, it is irrelevant that x occurred in (any range in) this analysis whatsoever and should otherwise not be counted or weighed as heavily (if at all).
though arguing semantics back and forth IS a great way to get off track.
Manual for previous SnPM versions
In this section we analyze a simple motor activation experiment with the SnPM software. The aim of this example is three-fold:
i. Demonstrate the steps of an SnPM analysis
ii. Explain and illustrate the key role of exchangeability
iii. Provide a benchmark analysis for validation of an SnPM installation
Please read the existing publications; in particular, Nichols & Holmes (2001) provides an accessible presentation on the theory and thoughtful use of SnPM. This example is also a way to understand key concepts and practicalities of the SnPM toolbox.
This example will use data from a simple primary motor activation experiment. The motor stimulus was the simple finger opposition task. For the activation state subjects were instructed to touch
their thumb to their index finger, then to their middle finger, to their ring finger, to their pinky, then repeat; they were to do this at a rate of 2 Hz, as guided by a visual cue. For baseline,
there was no finger movement, but the visual cue was still present. There was no randomization and the task labeling used was
A B A B A B A B A B A B
You can down load the data from ftp://www.fil.ion.ucl.ac.uk/spm/data/PET_motor.tar.gz or, in North America, from ftp://rowdy.pet.upmc.edu/pub/outgoing/PET_motor.tar.gz. We are indebted to Paul
Kinahan and Doug Noll for sharing this data. See this reference for details: Noll D, Kinahan et al. (1996) "Comparison of activation response using functional PET and MRI" NeuroImage 3(3):S34.
Currently this data is normalized with SPM94/95 templates, so the activation site will not map correctly to ICBM reference images.
The most important consideration when starting an analysis is the choice of exchangeability block size and the impact of that choice on the number of possible permutations. We don't assume that the
reader is familiar with either exchangeability or permutation tests, so we'll attempt to motivate the permutation test through exchangeability, then address these central considerations.
First we need some definitions.
Labels & Labelings A designed experiment entails repeatedly collecting data under conditions that are as similar as possible except for changes in an experimental variable. We use the term labels
to refer to individual values of the experimental variable, and labeling to refer to a particular assignment of these values to the data. In a randomized experiment the labeling used in the
experiment comes from a random permutation of the labels; in a non-randomized experiment the labeling is manually chosen by the experimenter.
Null Hypothesis The bulk of statistical inference is built upon what happens when the experimental variable has no effect. The formal statement of the "no effect" condition is the null hypothesis
Statistic We will need to use the term statistic in it's most general sense: A statistic is a function of observed data and a labeling, usually serving to summarize or describe some attribute of
the data. Sample mean difference (for discrete labels) and the sample correlation (for continuous labels) are two examples of statistics.
We can now make a concise statement of exchangeability. Observations are said to be exchangeable if their labels can be permuted without changing the expected value of any statistic. We will always consider the exchangeability of observations under the null hypothesis.
We'll make this concrete by defining these terms with our data, considering just one voxel (i.e 12 values). Our labels are 6 A's and 6 B's. A reasonable null hypothesis is ``The A observations have
the same distribution as the B observations''. For a simple statistic we'll use the difference between the sample means of the A & B observations. If we say that all 12 scans are exchangeable under
the null hypothesis we are asserting that for any permutation of A's and B's applied to the data the expected value of the difference between A's & B's would be zero.
This should seem reasonable: if there is no experimental effect, the labels A and B are arbitrary, and we should be able to shuffle them without changing the expected outcome of a statistic.
But now consider a confound. The most ubiquitous confound is time. Our example data took over two hours to collect, hence it is reasonable to suspect that the subject's mental state changed over that
time. In particular, we would have reason to think that the difference between the sample means of the A's and B's for the labeling
A A A A A A B B B B B B
would not be zero under the null because this labeling will be sensitive to early versus late effects. We have just argued, then, that in the presence of a temporal confound all 12 scans are not exchangeable.
Exchangeability Blocks
The permutation approach requires exchangeability under the null hypothesis. If all scans are not exchangeable we are not defeated; rather, we can define exchangeability blocks (EBs), groups of scans which can be regarded as exchangeable, and then only permute within each EB.
We've made a case for the non-exchangeability of all 12 scans, but what if we considered groups of 4 scans? While the temporal confound may not be eliminated, its magnitude within the 4 scans will be smaller simply because less time elapses during those 4 scans. Hence if we only permute labels within blocks of 4 scans we can protect ourselves from temporal confounds. In fact, the most temporally
confounded labeling possible with an EB size of 4 is
A A B B A A B B A A B B
Number of Permutations
This brings us to the impact of EB size on the number of permutations. The table below shows how EB size affects the number of permutations for our 12-scan, 2-condition activation study. As the EB size gets smaller, we have fewer possible permutations.
EB size   Num EBs   Num Permutations
  12         1      C(12,6)  =  924
   6         2      C(6,3)^2 =  400
   4         3      C(4,2)^3 =  216
   2         6      C(2,1)^6 =   64
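The counts in the table follow from a product of binomial coefficients, one per block (illustrative Python, assuming equal A/B counts within each block):

```python
from math import comb

def n_perms(n_blocks, block_size):
    """Distinct within-block relabelings for a 2-condition design with
    equal numbers of A and B scans inside each exchangeability block."""
    return comb(block_size, block_size // 2) ** n_blocks

assert n_perms(1, 12) == 924   # one 12-scan block: C(12,6)
assert n_perms(2, 6) == 400    # C(6,3)^2
assert n_perms(3, 4) == 216    # C(4,2)^3
assert n_perms(6, 2) == 64     # C(2,1)^6
```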
This is important because the crux of the permutation approach is calculating a statistic for lots of labelings, creating a permutation distribution. The permutation distribution is used to calculate
significance: the p-value of the experiment is the proportion of permutations with statistic values greater than or equal to that of the correct labeling. But if there are only, say, 20 possible relabelings, then the most significant result possible is 1/20 = 0.05 (which occurs if the correctly labeled data yields the largest statistic).
Hence we have to make a trade off. We want small EBs to ensure exchangeability within block, but very small EBs yield insufficient numbers of permutations to describe the permutation distribution
well, and hence assign significance finely. We usually will use the smallest EB that allows for at least hundreds of permutations (unless, of course, we were untroubled by temporal effects).
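For a single voxel, the whole procedure (enumerate the within-block relabelings, compute the statistic for each, and take the p-value as the proportion at or above the observed value) can be sketched as follows; this is a simplified illustration, not SnPM code:

```python
import itertools
import statistics

def perm_pvalue(data, labels, block_size):
    """Within-block permutation test for one voxel: the p-value is the
    proportion of relabelings whose mean(A) - mean(B) meets or beats
    the statistic for the correct labeling."""
    def stat(labs):
        a = [x for x, l in zip(data, labs) if l == "A"]
        b = [x for x, l in zip(data, labs) if l == "B"]
        return statistics.mean(a) - statistics.mean(b)

    blocks = [tuple(labels[i:i + block_size])
              for i in range(0, len(labels), block_size)]
    # every combination of distinct label orderings, block by block
    relabelings = [sum(combo, ()) for combo in itertools.product(
        *[sorted(set(itertools.permutations(b))) for b in blocks])]
    observed = stat(labels)
    hits = sum(stat(r) >= observed for r in relabelings)
    return hits / len(relabelings)

# 12 scans, EB size 4: 216 relabelings, matching the table above
labels = list("ABABABABABAB")
data = [10, 0] * 6                      # a maximal A-vs-B effect
print(perm_pvalue(data, labels, 4))     # 1/216, about 0.0046
```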
It is intended that you are actually sitting at a computer and are going through these steps with Matlab. We assume that you either have the sample data on hand or a similar, single subject 2
condition with replications data set.
First, if you have a choice, choose a machine with lots of memory. We found that this example causes the Matlab process to grow to at least 90MB.
Create a new directory where the results from this analysis will go. Either start Matlab in this directory, or cd to this directory in an existing Matlab session.
Start SnPM by typing
snpm
which will bring up the SnPM control panel (and the three SPM windows if you haven't started SPM already). Click on the setup button.
A popup menu will appear. Select the appropriate design type from the menu. Our data conforms to
Single Subject: 2 Conditions, replications
It then asks for
# replications per conditions
We have 6.
Now we come to
Size of exchangeability block
From the discussion above we know we don't want a 12-scan EB, so we will use an EB size of 4, since this gives over 200 permutations yet is a small enough EB size to protect against severe temporal confounds.
The help text for each SnPM plug-in file gives a formula to calculate the number of possible permutations given the design of your data. Use the formula when deciding what size EB you should use.
It will now prompt you to select the image data files. In the dialog box, navigate to the correct directory, then click on the image data files (.img) one by one. It is important that you enter them in time order. Alternatively, click on the All button to choose all of the files. When you have finished choosing the files, click on "Done".
Next you need to enter the "conditions index." This is a sequence of A's and B's (A's for activation, B's for baseline) that describes the labeling used in the experiment. Since this experiment was not randomized we have a nice neat arrangement:
A B A B A B A B A B A B
Exchangeability Business
Next you are asked about variance smoothing. If there are fewer than 20 degrees of freedom available to estimate the variance, variance smoothing is a good idea. If you have around 20 degrees of freedom you might look at the variance from an SPM run (soon we'll give a way to look at the variance images from any SPM run). This data has 12-2-1 = 9 degrees of freedom at each voxel, so we definitely want to smooth the variance.
It is our experience that the size of the variance smoothing is not critical, so we suggest 10 mm FWHM variance smoothing. Values smaller than 4 mm won't do much smoothing, and larger values probably won't buy you anything and will take more time. Specify 0 for no smoothing.
The next question is "Collect Supra-Threshold stats?" The default statistic is the maximum intensity of the t-statistic image, or max pseudo-t if variance smoothing is used. If you would like to use
the maximum supra-threshold cluster size statistic you have to collect extra data at the time of the analysis. Beware, this can take up a tremendous amount of disk space; the more permutations the
more disk space required. This example generates a 70MB suprathreshold statistics mat file. Answer 'yes' to collect these stats. Or click on 'no' to save some disk space.
The remaining questions are the standard SPM questions. You can choose global normalization (we choose 3, AnCova), choose 'global calculation' (we choose 2, mean voxel value), then choose 'Threshold masking' (we choose proportional) and keep 'Prop'nal threshold' at its default of 0.8. Choose 'grand mean scaling' (we choose 1, scaling of overall grand mean) and keep 'scale overall grand mean' at its default of 50.
Now SnPM will run for a short while as it builds a configuration file that will completely specify the analysis. When finished it will display a page (or pages) with file names and design information.
When it is finished you are ready to run the SnPM engine.
In the SnPM window click on
Compute
You will be asked to find the configuration file SnPM has just created (it should be in the directory where you ran matlab); it's called
SnPM_cfg.mat
Some text messages will be displayed and the thermometer progress bar will also indicate progress.
On fast new machines, like Sun Sparc Ultras or a Hewlett Packard C180, the computation of the permutations should only take about 5 minutes.
One of the reasons that SnPM is divided into 3 discrete operations (Configure, Compute, Results) is to allow the Compute operation to be run at a later time or in the background. To this end, the 'Compute' function does not need any of the SPM windows and can be run without initializing SPM (though the MATLABPATH environment variable must be set). This may be useful to remember if you have trouble with running out of memory.
To maximize the memory available for the 'Compute' step, and to see how to run it in batch mode, follow these steps.
1. If running, quit matlab
2. In the directory with the SnPM_cfg.mat file, start matlab
3. At the matlab prompt type
snpm_cp .
This will 'Compute' just as before but there will be no progress bar. When it is finished you could type 'spm PET' to start SPM99, but since Matlab is not known for its brilliant memory management,
it is best to quit, then restart matlab and SnPM.
On a Sun UltraSPARC 167 MHz this took under 6 minutes.
In the SnPM window click on
You will be asked to find the configuration file SnPM has just created; it's called
Next it will prompt for positive or negative effects. Positive corresponds to ``A-B'' and negative to ``B-A''. If you are interested in a two-tailed test, repeat this whole procedure twice but halve
your p-value threshold in the next entry.
Then, you will be asked questions such as 'Write filtered statistic img?' and 'Write FWE-corrected p-value img?'. You can choose 'no' for both of them. It will save time and won't change the final results.
Next it will ask for a corrected p value for filtering. The uncorrected and FWE-corrected p-values are exact, meaning if the null hypothesis is true exactly 5% of the time you'll find a P-value 0.05
or smaller (assuming 0.05 is a possible nonparametric p-value, as the permutation distribution is discrete and all p-values are multiples of 1/nPerm). FDR p-values are valid based on an assumption
of positive dependence between voxels; this seems to be a reasonable assumption for image data. Note that SPM's corrected p values derived from the Gaussian random field theory are only approximate.
Next, if you collected supra-threshold stats, it will ask if you want to assess spatial extent. For now, let's not assess spatial extent.
You will be given the opportunity to write out the statistic image and the p-value image. Examining the location of activation on an atlas image or coregistered anatomical data is one of the best ways
to understand your data.
Shortly the Results screen will first show the permutation distributions.
You need to hit the ENTER button in the matlab main window to get the second page. The screen will show a maximum intensity projection (MIP) image, the design matrix, and a summary of the
significantly activated areas.
The figure is titled ``SnPM{Pseudo-t}'' to remind you that the variance has been smoothed and hence the intensity values listed don't follow a t distribution. The tabular listing indicates that there
are 68 voxels significant at the 0.05 level; the maximum pseudo-t is 6.61 and it occurs at (38, -28, 48).
The information at the bottom of the page documents the parameters of the analysis. ``bhPerms=1'' notes that only half of the permutations were calculated; this is because this simple A-B
paradigm gives you two labelings for every calculation. For example, the maximum pseudo-t of the
A A B B B B A A A A B B
labeling is the minimum pseudo-t of the
B B A A A A B B B B A A
labeling.
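As a sketch of why only half the permutations are needed, here is a toy sign-flipping permutation test in Python; the difference values and the simple mean statistic are hypothetical illustrations, not SnPM code:

```python
import itertools

# Hypothetical per-voxel A-B difference values for 6 scan pairs.
diffs = [1.2, 0.8, -0.3, 1.5, 0.9, 0.4]

def stat(signs):
    # Statistic for one relabeling: mean of the sign-flipped differences.
    return sum(s * d for s, d in zip(signs, diffs)) / len(diffs)

# All 2^6 = 64 relabelings; a sign flip swaps the A and B labels of one pair.
perms = list(itertools.product([1, -1], repeat=len(diffs)))
stats = [stat(s) for s in perms]

# Flipping every label negates the statistic, so (as with bhPerms=1) only
# half of the permutations actually need to be computed.
for s in perms:
    assert abs(stat(s) + stat(tuple(-x for x in s))) < 1e-12

# P-values from a discrete permutation distribution are multiples of 1/nPerm.
observed = stat(perms[0])  # the actually observed labeling (all +1)
p = sum(1 for t in stats if t >= observed) / len(stats)
print(p)  # → 0.03125, i.e. 2/64
```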
Now click again on
proceeding as before but now answer Yes when it asks to assess spatial extent. Now you have to decide on a threshold. This is a perplexing issue which we don't have good suggestions for right now.
Since we are working with the pseudo-t, we can't relate a threshold to a p-value, or we would suggest a threshold corresponding to, say, 0.01.
When SnPM saves supra-threshold stats, it saves all pseudo-t values above a given threshold for all permutations. The lower limit shown (it is 1.23 for the motor data) is this ``data
collection'' threshold. The upper threshold is the pseudo-t value that corresponds to the corrected p-value threshold (4.98 for our data); there is no sense entering a threshold above this value since
any voxels above it are already significant by intensity alone.
Trying a couple different thresholds, we found 2.5 to be a good threshold. This, though, is a problem. The inference is strictly only valid when the threshold is specified a priori. If this were a
parametric t image (i.e. we had not smoothed the variance) we could specify a univariate p-value which would translate into a t threshold; since we are using a pseudo t, we have no parametric
distributional results with which to convert a p-value into a pseudo t. The only strictly valid approach is to determine a threshold from one dataset (by fishing with as many thresholds as desired)
and then applying that threshold to a different dataset. We are working to come up with guidelines which will assist in the threshold selection process.
Now we see that we have identified one 562 voxel cluster as being significant at 0.005 (all significances must be multiples of 1 over the number of permutations, so this significance is really 1/216=
0.0046). This means that when pseudo-t images from all the permutations were thresholded at 2.5, no permutation had a maximum cluster size greater than 562.
Contact Info
Room D0.03
Department of Statistics
University of Warwick
CV4 7AL
United Kingdom
Tel: +44(0)24 761 51086
Email: t.e.nichols 'at' warwick.ac.uk
Web: http://nisox.org
Blog: NISOx blog
Handbook of fMRI Data Analysis by Russ Poldrack, Thomas Nichols and Jeanette Mumford
Analytical Research on Deformation Monitoring of Large Span Continuous Rigid Frame Bridge during Operation
1. Introduction
In recent years, with rapid economic development, the spans of bridges in China have been increasing. Among these, the continuous rigid frame bridge is one of the main forms of long-span bridge.
The continuous rigid frame bridge is widely used because of the continuity of its superstructure, its long span, its relatively simple construction, its driving comfort,
its easy maintenance, its low cost and so on [1] . Nevertheless, a number of factors, such as increasing traffic, overloaded vehicles and the natural aging of structural materials, cause
large deformations of the bridge, which affect its safety, durability and driving comfort. Therefore, it is highly important to carry out bridge deformation
monitoring [2] [3] .
The excessive mid-span deflection is the main reason for the accident of the continuous rigid frame bridge during operation. According to the long-term measured deflection data of the Humen Bridge
auxiliary channel bridge―a long span continuous rigid frame bridge, Peijin Wang established the finite element model and accurately predicted long-term creep deformation after completion. Further,
the long-term growing deflection coefficient was obtained. His achievements provide the basis for the long-term deflection prediction of similar bridges [4] .
For deformation monitoring of long-span continuous rigid frame bridges, related research has been carried out in China and abroad [5] -[17] . Based on long-term monitoring results for a long-span rigid frame
bridge, Xinhong Yuan obtained the bridge deformation law and the stress and strain characteristics during operation. A calculation model of the bridge was then established, analyzing the
influence on deformation of different specifications, loading age, concrete shrinkage and creep under the ambient relative humidity, and prestress loss. That work summarized the deformation law and the
influencing factors of deformation, but no safety assessment of the deformation was made.
Based on a long-span continuous rigid frame bridge, this paper has determined the content and method of deformation monitoring according to the relevant specification [18] -[25] . Through the finite
element software Midas Civil, the calculated value of deformation monitoring has been obtained, which is compared with the measured value, getting the relevant conclusion about deformation
monitoring. The deformation monitoring analysis of the long-span continuous rigid frame bridge can provide a certain basis for similar bridges, which has good practical significance.
2. Project Profile
The long-span continuous rigid frame bridge is located in Chongqing section of Shanghai-Chengdu Expressway, which is a two-parallel bridge. The total length is 750 m and the main bridge is 612 m
long. For the main bridge, the upper structure is a prestressed concrete continuous rigid frame bridge. Full bridge span is: 5(4) × 30 m (prestressed concrete continuous T-beam) + (110 + 200 + 110) m
(prestressed concrete continuous rigid frame) + 4 × 40 m (continuous prestressed concrete T-beam). The single width bridge is 12.00 m wide, and the lateral arrangement is as follows: 0.50 m (crash
barrier) + 11.00 m (roadway) + 0.5 m (crash barrier).
Main bridge upper structure adopts cast-in-place concrete box girder whose section is a single box single room variable cross-section. The beam in the pivot is 11.57 m high and the one in the
mid-span is 3.50 m high. The height of the beam adopts 1.5 times parabola. The bottom slab in the pivot is 1.20 m thick and the one in the mid-span is 0.32 m thick. The thickness of the bottom slab
adopts linear gradient. Around the top of the pier, the thickness of web is 1.20 m. The other webs thickness adopts three levels (70 cm, 60 cm and 50 cm). The width of the top slab is 12.10 m, the
one of the bottom slab is 7.00 m and the one of the flange slab is 2.55 m. The box girder roof sets one-way cross-sectional slope of 2.0%. A single width sets nine diaphragm plates.
Among the bottom structure, main piers and transition piers are reinforced concrete cross-section hollow piers, bored piles foundation with pile caps. The surfacing layer of the bridge deck pavement
adopts 10 cm thick asphalt concrete.
The bearings of the main bridge adopt GPZ pot rubber bearing, which are set at 5# and 8# pier, each place setting 2 sets of GPZ5DX unidirectional sliding bearings (single width). For the main bridge,
at the end of two side spans, SSFB240 type expansion joints are respectively set.
For the main bridge, the box girder adopts C50 concrete, the bridge deck pavement leveling layer, bent caps, piers, prefabricated T-beams of the approach bridges adopt C40 concrete and the crash
barriers adopt C25 concrete. The bridge design load is highway-grade I.
The bridge elevation layout is shown in Figure 1.
3. Deformation Monitoring Methods and Measuring Point Arrangement
According to the bridge situation, the study monitored the horizontal control network, vertical control points, main pier deformation, bridge deck alignment and expansion joint deformation.
Figure 1. Bridge elevation layout (unit: cm).
3.1. Deformation Monitoring Methods
3.1.1. Horizontal Control Network
The horizontal control network is set up by triangulation network. A Leica TCR1201 total station (Grade 1”) with optical prism is used directly on the middle of base. According to the second level
horizontal control network technology in “Building Deformation Measurement Procedures” (JGJ8-2007), the horizontal control network monitoring data are achieved, measured by traverse survey method.
3.1.2. Vertical Control Points
Vertical datum points are measured by NA2 Leica precise level (Accuracy of 0.1 mm) with the invar rod. According to the national second level measurement method, all observation leveling lines are
formed into closed ones. The station observation sequence, observation method, return measure and so on are carried out according to “the National First and Second Grade Leveling Specification” (GB/T
3.1.3. Pier Deformation Monitoring
The Leica TCR1201 total station (Grade 1”) with optical prism is used to monitor the pier deformation. According to the secondary plane accuracy requirement in “Building Deformation Measurement
Procedures” (JGJ8-2007), the polar method is adopted for measurements. The main pier vertical deformation is in accord with the relevant bridge deck linear point.
3.1.4. Bridge Deck Alignment Monitoring
The precision level with the invar rod is used to measure the bridge deck alignment according to the second level observation.
The sight length of leveling, distance between the front and rear sight and the precision level repeated measurement times all conform to the specification requirements for “secondary precision
level”. Before the measurement, the “I” angle of level should be checked first. The datum point and the bridge linear observation points should form a closed loop, the observation sequence is
“back-front-front-back” for the odd station and “front-back-back-front” for the even station. Then the observation data should be adjusted.
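As an illustration of the closed-loop adjustment just described, here is a minimal Python sketch; the height differences and station counts are hypothetical, and distributing the misclosure in proportion to station count is one common convention, not necessarily the exact adjustment used in this study:

```python
# Hypothetical closed leveling loop: observed height differences (m) between
# successive benchmarks, with the number of instrument stations per leg.
legs = [
    {"dh": +1.2345, "stations": 4},
    {"dh": -0.5672, "stations": 3},
    {"dh": +0.8910, "stations": 5},
    {"dh": -1.5570, "stations": 4},
]

# Around a closed loop the height differences should sum to zero;
# the residual is the loop misclosure.
misclosure = sum(leg["dh"] for leg in legs)

# Distribute the correction in proportion to the station count of each leg.
total_stations = sum(leg["stations"] for leg in legs)
adjusted = [leg["dh"] - misclosure * leg["stations"] / total_stations
            for leg in legs]

print(round(misclosure, 4))           # → 0.0013
print(round(abs(sum(adjusted)), 10))  # → 0.0 after adjustment
```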
3.1.5. Expansion Joint Deformation Monitoring
Expansion joint deformation monitoring includes both sides (large stake mark side and small stake mark side) deck transverse displacement monitoring of expansion joints and steel gap change
monitoring of expansion joints. A total station with optical prism is used to measure both sides deck transverse displacement and the steel tap is used to measure the steel gap change.
3.2. Measuring Point Arrangement
3.2.1. Arrangement Principle
The points of the deformation monitoring network are divided into datum points, work basis points and deformation observation points. The layout should follow the following principles:
1) Datum points: Steady and reliable locations should be chosen outside of the deformation area, and each bridge should have at least three datum points;
2) Work basis points: The points should be stable and easy to use;
3) Deformation observation points: The points should be set up in the locations which can reflect the detection deformation characteristics or in the test section.
3.2.2. Measuring Point Arrangement
1) Horizontal control network
Three points which can reflect the characteristics of the detection plane or are in the detection section are selected as the measuring points.
2) Vertical control points
Two points which can reflect the elevation characteristics or are in the detection section are selected as the measuring points.
3) Main pier deformation
For deformation monitoring of the main piers, 8 observation points are laid, 2 points laid at each pier, which are respectively arranged at the 0 # block diaphragm center and the outside of left,
right width piers. Measuring point numbers of the left width 5# pier are D5-1 and D5-2, and measuring point numbers of the left width 6# pier are D6-1 and D6-2. Measuring point numbers of the right
width 6# pier are D6-3 and D6-4, and measuring point numbers of the right width 7# pier are D7-3 and D7-4. The total station reflector plates are directly pasted on the surface of concrete as
observation points. Measuring point arrangement is shown in Figure 2 and Figure 3 below.
Figure 2. The longitudinal arrangement of deformation measuring points in main pier (unit: cm).
Figure 3. The transversal arrangement of deformation measuring points in main pier (unit: cm).
4) Bridge deck alignment
The bridge deck alignment monitoring points make full use of the historical monitoring points, and some measuring points are added to fill measurement gaps. The specific arrangement of
points is as follows: the bridge deck alignment monitoring only covers the main bridge alignment. The longitudinal measuring points are arranged at the mid-span of the main bridge, the maximum
displacement points, L/4, and the pivots of the side spans. Also, according to the requirement of not more than 20 m stationing spacing, linear observation points are added.
The linear observation points of transverse direction are set on the bridge deck crash barrier inner side bottom of the right and left width and on the inner side top of median strip concrete fence
base. 4 vertical sections are set for linear observation points, which are left outside (L), left inside (L’), right inside (R) and right outside (R’). The bridge sets 116 deck linear observation
points in total. The observation points adopt stainless steel round head testing nails, which are anchored by anchor adhesive after drilling holes and are marked with red oil paint. Measuring point
arrangement is shown in Figure 4 and Figure 5.
5) Expansion joint deformation
There is one observation section set in each expansion joint of the main bridge for this expansion joint monitoring. Points are laterally set on the outside of the deck. Each observation section has
two measuring points. There are four observation sections and eight measuring points in total. Measuring point arrangement is shown in Figure 6. In this figure, B, B1, B2 and B3 are the longitudinal
distance of expansion joint, measured by the steel ruler.
4. Monitoring Results and Analysis
4.1. Deformation Monitoring Calculation for Model
Figure 4. The longitudinal and plane arrangement of the bridge deck linear measuring points (unit: cm).
According to the main material parameters and load conditions of the bridge, based on the “General Code for Design of Highway Bridges and culverts” (JTG D60-2004) and the “General Code for Design of
Highway Reinforced Concrete and Prestressed Concrete Bridges and culverts” (JTG D62-2004), the finite element
analysis software Midas Civil is used for modeling. The main beam and piers are simulated by beam elements. There are 193 nodes and 190 elements in total. Because the bridge was closed to traffic
during monitoring, the model does not consider the vehicle load effect. The considered bridge load types mainly include concrete creep and shrinkage, the loss of prestress, temperature change and so
on. In view of the bridge final internal force condition decided by the bridge construction steps and technology, according to the designed construction steps, the construction stage analysis is
carried out in the calculation. The structure discrete figure is shown in Figure 7.
Figure 5. The transversal arrangement of the bridge deck linear measuring points (unit: cm).
Figure 6. Monitoring points arrangement on expansion joint deformation.
Figure 7. Structure discrete model of the main bridge.
Through the model calculation and analysis, the calculated values of the main pier deformation, bridge deck alignment and expansion joint deformation can be obtained.
4.2. Comparative Analysis of Calculated and Measured Values
The calculation values are obtained by the finite element software Midas Civil. According to the bridge monitoring content and method, the bridge deformation measured values can be obtained and
compared with the calculated values. The comparative analysis results of the main pier deformation, the bridge deck alignment and the expansion joint deformation are as follows:
4.2.1. Comparative Analysis of Main Pier Vertical Displacement Results
In order to know the main piers settlement situation and its influence on the bridge deck alignment in the monitoring period, the main pier vertical displacements are measured. “Measured main pier
vertical displacements” is obtained by “the relative elevations in the second period (December 2012) minus the ones in the first period (September 2012)”. “Calculated main pier vertical
displacements” is the model calculated values considering the overall cooling and shrinkage creep. “Negative main pier vertical displacements” means the downward deformations. Comparison results of
measured and calculated values are shown in Table 1.
From the test results of the main pier vertical displacement observation points, the following results can be obtained: When the temperature of the left width drops 9.5˚C, the vertical displacement
of 5# pier is 9.86 mm, less than calculated value 10.13 mm; The vertical displacement of 6# pier is 9.00 mm, less than calculated value 10.44 mm. When the temperature of the right width drops 11.7˚C,
the vertical displacement of 6# pier is 9.97 mm, less than calculated value 12.15 mm; the vertical displacement of 7# pier is 11.85 mm, less than calculated value 12.52 mm.
The above analysis shows that the measured vertical displacement of each main pier is less than the calculated value, which can determine that there was no obvious settlement deformation for each
main pier in the monitoring period.
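The screening rule applied in this comparison — a pier is of concern only when its measured displacement exceeds the calculated value — can be expressed directly, using the vertical displacements quoted above (variable names are illustrative):

```python
# Vertical displacements of the main piers (mm): (measured, calculated),
# taken from the comparison in the text above.
piers = {
    "left 5#":  (9.86, 10.13),
    "left 6#":  (9.00, 10.44),
    "right 6#": (9.97, 12.15),
    "right 7#": (11.85, 12.52),
}

# Flag any pier whose measured displacement exceeds the calculated envelope.
flagged = [name for name, (meas, calc) in piers.items() if meas > calc]
print(flagged)  # → [] : no obvious settlement deformation in the period
```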
4.2.2. Comparative Analysis of Main Pier Deviation Results
In order to know each main pier deviation situation in the monitoring period, the main pier displacements along the bridge longitudinal direction are measured. “Measured main pier longitudinal
displacements” is the bridge axis direction conversion values according to the measured points displacement in the first and the second period. “Calculated main pier longitudinal displacements” is
the calculated values considering the overall cooling and shrinkage creep. “Positive main pier longitudinal displacement” means the rightward displacement. Comparison results of measured and
calculated values are shown in Table 2 and Table 3.
From the test results of the main pier observation points, the following results can be obtained: When the temperature of the left width drops 17.7˚C, the longitudinal displacement towards the river
of 5# pier top is 12 mm, less than calculated value 15.55 mm; The longitudinal displacement towards the river of 6# pier top is 13 mm, less than calculated value 15.88 mm. The specific longitudinal
displacement sketch of main piers on left width is shown in Figure 8. When the temperature of the right width drops 17.2˚C, the longitudinal displacement towards the river of 6# pier top is 8 mm,
less than calculated value 15.20 mm; The longitudinal displacement towards the river of 7# pier top is 12 mm, less than calculated value 15.52 mm. The specific longitudinal displacement sketch of
main piers on right width is shown in Figure 9. The above analysis shows that the measured deviation of each main pier is less than the calculated value, which can determine that the main pier
Table 1. Comparison between the calculated and practical vertical displacement values of main piers.
Table 2. Comparison between the calculated and practical longitudinal displacement values of main piers on left width (unit: mm).
Table 3. Comparison between the calculated and practical longitudinal displacement values of main piers on right width (unit: mm).
Figure 8. The longitudinal displacement sketch of main piers on left width.
Figure 9. The longitudinal displacement sketch of main piers on right width.
was within the security scope in the monitoring period.
4.2.3. Comparative Analysis of the Bridge Deck Alignment Deflection Results
Analyzing comparatively two period measured results, “Measured deflections” is obtained by “the relative elevations in the second period minus the ones in the first period”. Negative value represents
downwarping. “Calculated deflections” is the model calculated values considering the comprehensive influence of the overall cooling and shrinkage creep on the bridge deck alignment. According to the
main pier vertical displacement monitoring, it can be concluded that there was no settlement for the bridge main pier. Thus, the measured deflections and the calculated deflections can be compared,
which is shown in Figure 10, Figure 11.
From above, for the bridge left width outside deck, the measured maximum deflection is 19.98 mm (downward) at the 7L/12 section of 6# span. For the left width inside deck, the measured maximum
deflection is 19.39 mm (downward) at the 5L/12 section of 6# span. For the right width outside deck, the measured maximum deflection is 23.44 mm (downward) at L/2 section of 7# span. For the right
width inside deck, the measured maximum deflection is 23.11 mm (downward) at 7L/12 section of 7# span.
As shown in Figure 10 and Figure 11, the inside and outside deflection change rule of the bridge left and right widths is basically consistent with the calculated deflection
curve. The measured deflection curve is relatively smooth, with no obvious mutation. The measured deflections are less than the calculated values except at individual non-critical section measuring
points of the right width inside.
Figure 10. Comparison between the calculated and measured (including inside and outside) deflection values of the left line.
Figure 11. Comparison between the calculated and measured (including inside and outside) deflection values of the right line.
Table 4. Comparison on theoretical and practical expansion joint clearance of main piers (unit: mm).
4.2.4. Comparative Analysis of the Expansion Joint Deformation Results
By the total station and steel tape, four expansion joints gap measured values of the bridge left and right width are compared with the calculated values. “The measured deformation” is calculated
according to the measured data of the 1st and 2nd periods by the total station and steel tape. “Calculated deformation” is obtained by model calculation on the basis of the actual cooling. The results
are shown in Table 4.
From measured results, the width of each expansion joint shows the expanding trend on the cooling condition. When the temperature of left width drops 17.7˚C, the L2# expansion joint had an expansion
of 44 mm, and the L3# expansion joint had an expansion of 49 mm, both less than the calculated value. When the temperature of right width drops 17.2˚C, the R2# expansion joint had an expansion of 43
mm, and the R3# expansion joint had an expansion of 46 mm, both less than the calculated value. All expansion joints are in normal work condition. By contrast, for the left width, the expansion of L2
# expansion joint is less than the one of L3# expansion joint. For the right width, the expansion of R2# expansion joint is less than the one of R3# expansion joint.
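As a rough plausibility check (not part of the paper's analysis), the measured joint openings are of the order predicted by linear thermal contraction, dL = alpha * L * dT; the expansion coefficient and the contributing girder length below are assumed values, not figures reported in the paper:

```python
# Linear thermal contraction estimate for one expansion joint.
alpha = 1.0e-5       # 1/degC, a typical value for concrete (assumption)
length_mm = 250_000  # assumed girder length feeding one joint (assumption)
dT = 17.7            # degC of cooling for the left width (from the text)

dL = alpha * length_mm * dT
print(f"{dL:.0f} mm")  # → 44 mm, same order as the measured 44 mm opening
```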
5. Conclusions
Comparing the field-measured data with the data calculated by the finite element software Midas Civil, the following conclusions can be obtained:
1) For the long-span continuous rigid frame bridge, each main pier measured vertical displacement is less than the calculated value, thus it can be initially determined that, there was no obvious
settlement for each main pier during the monitoring period.
2) For each main pier, the measured deviation value is less than the calculated value, which shows that the measured stiffness for each main pier is bigger than calculation stiffness and each main
pier deviation is within the security scope.
3) The inside and outside deflection change rule of the bridge left and right widths is basically consistent with the calculated deflection curve. The measured deflection curve is relatively smooth,
with no obvious mutation. The measured deflections are less than the calculated values except at individual non-critical section measuring points of the right width inside.
4) In the field, the main bridge expansion joint deformation measured values are less than the calculated values and each expansion joint is in good working condition.
5) The bridge monitoring results show that the bridge is basically in normal working condition.
Currently, the bridge is only monitored without moving load, assessing structural performance by the structure static deformation. It is advised that the main bridge health and safety monitoring
system should be established at the right time. Adopting modern sensor technology, the structural responses (dynamic characteristics and vibration, dynamic strain, dynamic deflection and so on) under
various environments should be monitored in real time during operation to obtain information reflecting structure condition and environment. Finally, the bridge structure condition can be
comprehensively assessed.
Probability Distributions Archives - Data Science | Learning Keystone
Probability Distributions
The Poisson Distribution gives the probability of a given number of events occurring in a fixed time interval.
Poisson Distribution Explained Read More »
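As a quick sketch (the function name and example rate are illustrative, not from the article), the Poisson PMF can be computed with the standard library alone:

```python
import math

def poisson_pmf(k, lam):
    # P(X = k) for a Poisson random variable with mean rate lam per interval.
    return math.exp(-lam) * lam**k / math.factorial(k)

# e.g. an average of 3 events per hour: probability of exactly 2 in an hour
print(round(poisson_pmf(2, 3.0), 4))  # → 0.224
```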
In a Uniform Distribution the Probability Density Function (PDF) is the same for all possible X values. Sometimes this is called a Rectangular Distribution. There are two parameters in this
distribution: a minimum (A) and a maximum (B).
Uniform Probability Distribution Read More »
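A minimal sketch of the rectangular PDF (the function name is illustrative):

```python
def uniform_pdf(x, a, b):
    # Constant density 1/(B - A) inside [A, B], zero outside.
    return 1.0 / (b - a) if a <= x <= b else 0.0

print(uniform_pdf(5, 0, 10))   # → 0.1, the same for every x inside [0, 10]
print(uniform_pdf(12, 0, 10))  # → 0.0 outside the interval
```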
Normal Distribution is the most important probability distribution in Probability and Statistics. A normal probability distribution is a bell shaped curve. Many numerical populations have
distributions that can be fit very closely by an appropriate normal curve.
Normal Distribution, Z Scores and Standardization Explained Read More »
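A small sketch of standardization and z-scores (helper names are illustrative); the standard normal CDF is obtained from the error function in the standard library:

```python
import math

def z_score(x, mu, sigma):
    # Standardize: how many standard deviations x lies from the mean.
    return (x - mu) / sigma

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# e.g. a score of 85 in a population with mean 70 and sd 10
z = z_score(85, 70, 10)
print(z)                 # → 1.5
print(round(phi(z), 4))  # → 0.9332, the share of the population below 85
```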
Earlier we used Probability Mass Function to describe how the total probability of 1 is distributed among the possible values of the Discrete Random Variable X.
Probability Density Function Read More »
A Random Variable is any rule that maps (links) a number with each outcome in sample space S. Mathematically, a random variable is a function with the sample space as its domain. Its range is the set
of real numbers.
Random Variables in Statistics Read More »
In the Negative Binomial Distribution, we are interested in the number of Failures in n trials; this is why the prefix “Negative” is there. When we are interested only in the number of
trials required for a single success, it is called a Geometric Distribution.
Negative Binomial Distribution Read More »
Binomial Distribution is used to find probabilities related to Dichotomous Population. It can be applied to a Binomial Experiment where it can result in only two outcomes. Success or Failure. In
Binomial Experiments, we are interested in the number of Successes.
Binomial Probability Distribution Read More »
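A minimal sketch of the binomial PMF (the function name is illustrative):

```python
from math import comb

def binomial_pmf(k, n, p):
    # Probability of exactly k successes in n independent trials.
    return comb(n, k) * p**k * (1 - p)**(n - k)

# e.g. exactly 3 heads in 5 fair coin flips
print(binomial_pmf(3, 5, 0.5))  # → 0.3125
```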
Probability Mass Function (PMF) of X says how the total probability of 1 is distributed (allocated to) among the various possible X values.
Probability Mass Function Read More »
Expected Value is the average value we get for a certain Random Variable when we repeat an experiment a large number of times. It is the theoretical mean of a Random Variable. Expected Value is based
on population data. Therefore it is a parameter.
Expected Value of a Random Variable Read More »
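The definition E[X] = Σ x·P(X = x) can be sketched directly (names are illustrative):

```python
def expected_value(pmf):
    # E[X] = sum over x of x * P(X = x) for a discrete random variable.
    return sum(x * p for x, p in pmf.items())

# e.g. a fair six-sided die
die = {x: 1 / 6 for x in range(1, 7)}
print(round(expected_value(die), 10))  # → 3.5
```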
11. Learning: Identification Trees, Disorder - AILEPHANT
11. Learning: Identification Trees, Disorder
Identification Trees
• Non-numeric data
• Not all characteristics matter
• Some do matter but not all of the time
• Cost: certain tests may be more expensive than others
Occam’s razor
The objective is to build the smallest tree possible (to reduce costs and computation) and because the simplest explanation is always the best.
Testing data
Small data sets
The different tests upon the data can be ranked by the homogeneous groups they produce and the total number of items in each homogeneous group.
Using the most efficient tests first, the remaining ambiguous data is checked through the other tests, and so on until all the data is sorted.
Large data sets
For large data sets, no tests may divide the data in homogeneous group. The results of tests must therefore be ranked according to their level of disorder.
If P = Positive results, N = negative results, T = total
Disorder(Set) = -(P/T) * log2(P/T) - (N/T) * log2(N/T)
The resulting curve of this equation is a concave curve (often sketched like a parabola) with a maximum of y = 1 at x = 1/2 and minima of y = 0 at x = 0 and x = 1, where x = P/T.
So the quality of each test can be defined as follows:
Quality(Test) = Sum[for each set produced] ( Disorder(Set) * Number of samples in set / Number of samples handled by test )
Decision boundaries
Contrary to nearest neighbors, identification trees always separate the data space with decision boundaries parallel to the coordinate axes.
Worksheets for 8th Class
Recommended Topics for you
Transformations & Symmetry
Symmetry (Lines, Reflective symmetry, Order of Rotation)
Symmetry and Reflection Symmetry
2.5 Practice Problems - Symmetry
Transformations and Symmetry Review
Reflexive/Rotational Symmetry
Rigid Transformations & Symmetry
Explore Symmetry Worksheets by Grades
Explore Symmetry Worksheets for class 8 by Topic
Explore Other Subject Worksheets for class 8
Explore printable Symmetry worksheets for 8th Class
Symmetry worksheets for Class 8 are an excellent way for teachers to introduce and reinforce the concepts of symmetry in Math and Geometry. These worksheets are designed to challenge students'
understanding of various types of symmetry, such as reflection, rotation, and translation. By incorporating these worksheets into their lesson plans, teachers can provide students with engaging and
interactive activities that will help them grasp the importance of symmetry in the world around them. Additionally, these worksheets can be used as a form of assessment, allowing teachers to gauge
their students' progress and understanding of the topic. With a wide range of activities and exercises available, Class 8 teachers can easily find symmetry worksheets that cater to their students'
needs and learning styles.
Quizizz is a fantastic platform that offers a variety of educational resources for teachers, including Symmetry worksheets for Class 8, Math, and Geometry. This platform not only provides teachers
with access to high-quality worksheets but also allows them to create interactive quizzes and games that can be used in conjunction with the worksheets. This combination of resources helps to create
a more engaging and dynamic learning experience for students, ensuring that they remain interested and motivated throughout the lesson. Furthermore, Quizizz offers real-time feedback and analytics,
allowing teachers to monitor their students' progress and adjust their teaching strategies accordingly. By incorporating Quizizz into their lesson plans, Class 8 teachers can provide their students
with a comprehensive and enjoyable learning experience that covers all aspects of symmetry in Math and Geometry.
Common Problems and Solutions
Teaching: 5 min
Exercises: 0 min
□ What could possibly go wrong?
□ Identify some common mistakes
□ Avoid making common mistakes
Now let’s take a look at some common problems with CMake code and with builds.
1: Low minimum CMake version
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
Okay, I had to put this one in. But in some cases, just increasing this number fixes problems. 3.0 or less, for example, has a tendency to do the wrong thing when linking on macOS.
Solution: Either set a high minimum version or use the version range feature and CMake 3.12 or better. The lowest version you should ever choose is 3.4 even for an ultra-conservative project; several
common issues were fixed by that version.
2: Building inplace
CMake should never be used to build in-place; but it’s easy to accidentally do so. And once it happens, you have to manually clean the directory before you can do an out-of-source build again.
Because of this, while you can run cmake . from the build directory after the initial run, it’s best to avoid this form just in case you forget and run it from the source directory. Also, you can add
the following check to your CMakeLists.txt:
### Require out-of-source builds
file(TO_CMAKE_PATH "${PROJECT_BINARY_DIR}/CMakeLists.txt" LOC_PATH)
if(EXISTS "${LOC_PATH}")
message(FATAL_ERROR "You cannot build in a source directory (or any directory with "
"a CMakeLists.txt file). Please make a build subdirectory. Feel free to "
"remove CMakeCache.txt and CMakeFiles.")
endif()
One or two generated files cannot be avoided, but if you put this near the top, you can avoid most of the generated files as well as immediately notify the user (possibly you) that you've made a mistake.
3: Picking a compiler
CMake may pick the wrong compiler on systems with multiple compilers. You can use the environment variables CC and CXX when you first configure, or CMake variables CMAKE_CXX_COMPILER, etc. - but you
need to pick the compiler on the first run; you can’t just reconfigure to get a new compiler.
4: Spaces in paths
CMake’s list and argument system is very crude (it is a macro language); you can use it to your advantage, but it can cause issues. (This is also why there is no “splat” operator in CMake, like f(*args) in Python.) If you have multiple items, that’s a list (distinct arguments):
set(VAR a b c)
The value of VAR is a list with three elements, or the string "a;b;c" (the two things are exactly the same). So, if you do:
set(MY_DIR "/path/with spaces/")
target_include_directories(target PRIVATE ${MY_DIR})
that is identical to:
target_include_directories(target PRIVATE /path/with spaces/)
which is two separate arguments, which is not at all what you wanted. The solution is to surround the original call with quotes:
set(MY_DIR "/path/with spaces/")
target_include_directories(target PRIVATE "${MY_DIR}")
Now you will correctly set a single include directory with spaces in it.
Key Points
□ Setting a CMake version too low.
□ Avoid building inplace.
□ How to select a compiler.
□ How to work with spaces in paths.
Our users:
At first, I was under the impression your software was aimed at the elementary and high school level, so I didn't use it at all. Finally, one night on a whim I tried it out and, after getting past the initial learning curve, was just blown away at how advanced it really is! I mean, I'll be using it all the way through my BS in Anthropology!
Tom Sandy, NE
I never regret the day I purchased Algebrator and I was blown away. The step by step problem solving method is unlike any other algebra program i've seen.
D.C., Maryland
As a mother who is both a research scientist and a company president (we do early ADME Tox analyses for the drug-discovery industry), I am very concerned about my daughter's math education. Your algebra software was tremendously helpful for her. Its patient, full explanations were nearly what one would get with a professional tutor, but far more convenient and, needless to say, less expensive.
Patricia, MI
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2012-07-09:
• banking exams previous exam question paper and worked out tutor
• Rational Exponents Calculator
• INEQUALITY WITH A SLOPE AS A FRACTION STEPS
• ti-84 plus instructions to solve long division polynomials and division algorithm
• pdf texas ti-89
• algebraic editor
• calculating mathematical equation in oracle
• Algebric question
• Chase Platinum Visa Credit Card
• B12 Vitamins
• plotting algebric equations - free lesson plans
• Overseas Travel
• basic rules of algebra for beginners
• solved aptitude questions
• online graphing calculator simultaneous equations cubic
• 9th grade mathematical chart
• hard maths equations
• how to ignore punctuation in sentence in java
• how has learning about radical expression compare to how you had learn it previously?
• trivias on math\
• Intermediate Accounting, 7th Canadian Edition, Volume 1 , "Solutions"
• Oklahoma Law
• integrated algebra square-root radicals
• NH Carlton
• combination practice problems
• Ti 83 Concrete download
• lcm and gcf on excel
• free college algebra solving
• college algebra clep exam
• free download of algebra solver
• math scale factor word problems
• lesson plan percent convert decimal
• Algebra for 9th review
• solve algebra problems free
• algebra 1a worksheets 7-8 grade
• how to solve a system of equations algebraically
• why was algebra inveted
• algebra poems
• pathways to thinking in second year algerbra mathematic teacher
• real life problems using algebra KS3
• math integrated algebra tutorial
• Recovering Data
• Data Backup Windows XP
• grade 10 high school math book ontario
• percentages downloadable worksheets
• use casio calculator to solve simultaneous equation
• maths area activities ks2
• grade 10 algebra help
• simplifying binomials with multiple variables
• rewrite division as multiplication
• I can learn algebra program
• Sap Software
• difference of two squares worksheet
• Cosmetic Surgery UK
• CLEP cheat
• simple fractions
• solution system first order partial differential equation
• Fitness Membership
• pre- algebra with pazzazz worksheets
• math game printouts
• simple algebra word questions
• how you get ride of a fraction in a math problem?
• algebra worksheet printouts
• INTERMEDIATE ALGEBRA QUIZ
• 9th grade integrated algebra regents worksheets
• Free Probability Problems for Junior HIgh
• turn a fraction into a decimale
• list of real life situations using positive and negative integers
• Free Legal Advice in California
• Choice Health Insurance Plan
• typing the second power in algebra equations
• conics notes for ti 89
• conceptual physics the high school physics program answers
• math worksheets toprint out
• solved apti question papers
• factorising third degree polynomials
• Bankruptcy Fee
• algebraic expressions free worksheet
• accounting worksheet and answer key + free
• online lcm solver
• how do i factor cubed equations
• middle school algebra readiness practice test
• sample grade nine math exam
• algabra online
• how to do 8th grade math dilations
• calculator fractions algerbra equations
• answers to prentice hall review book chemistry
• celsius-convertion
• Broadband Microfilter
• system of equation graphing table
• can a graph be used to determine how many solutions an equation has
• test on problems linear equation
• convert 100% into decimal form
• algebra grade 9 practice problems
• c# symbolic calculation
• solution manual ross introduction to probability models
• real estate math formulas free
Go to the source code of this file.
subroutine zla_heamv (UPLO, N, ALPHA, A, LDA, X, INCX, BETA, Y, INCY)
ZLA_HEAMV computes a matrix-vector product using a Hermitian indefinite matrix to calculate error bounds.
Function/Subroutine Documentation
subroutine zla_heamv ( integer UPLO,
integer N,
double precision ALPHA,
complex*16, dimension( lda, * ) A,
integer LDA,
complex*16, dimension( * ) X,
integer INCX,
double precision BETA,
double precision, dimension( * ) Y,
integer INCY
ZLA_HEAMV computes a matrix-vector product using a Hermitian indefinite matrix to calculate error bounds.
ZLA_HEAMV performs the matrix-vector operation
y := alpha*abs(A)*abs(x) + beta*abs(y),
where alpha and beta are scalars, x and y are vectors and A is an
n by n symmetric matrix.
This function is primarily used in calculating error bounds.
To protect against underflow during evaluation, components in
the resulting vector are perturbed away from zero by (N+1)
times the underflow threshold. To prevent unnecessarily large
errors for block-structure embedded in general matrices,
"symbolically" zero components are not perturbed. A zero
entry is considered "symbolic" if all multiplications involved
in computing that entry have at least one zero multiplicand.
[in] UPLO
UPLO is INTEGER
On entry, UPLO specifies whether the upper or lower triangular part of the array A is to be referenced:
UPLO = BLAS_UPPER: only the upper triangular part of A is to be referenced.
UPLO = BLAS_LOWER: only the lower triangular part of A is to be referenced.
Unchanged on exit.
[in] N
N is INTEGER
On entry, N specifies the number of columns of the matrix A. N must be at least zero.
Unchanged on exit.
[in] ALPHA
ALPHA is DOUBLE PRECISION
On entry, ALPHA specifies the scalar alpha.
Unchanged on exit.
[in] A
A is COMPLEX*16 array, DIMENSION ( LDA, n )
Before entry, the leading m by n part of the array A must contain the matrix of coefficients.
Unchanged on exit.
[in] LDA
LDA is INTEGER
On entry, LDA specifies the first dimension of A as declared in the calling (sub) program. LDA must be at least max( 1, n ).
Unchanged on exit.
[in] X
X is COMPLEX*16 array, DIMENSION at least ( 1 + ( n - 1 )*abs( INCX ) )
Before entry, the incremented array X must contain the vector x.
Unchanged on exit.
[in] INCX
INCX is INTEGER
On entry, INCX specifies the increment for the elements of X. INCX must not be zero.
Unchanged on exit.
[in] BETA
BETA is DOUBLE PRECISION
On entry, BETA specifies the scalar beta. When BETA is supplied as zero then Y need not be set on input.
Unchanged on exit.
[in,out] Y
Y is DOUBLE PRECISION array, dimension ( 1 + ( n - 1 )*abs( INCY ) )
Before entry with BETA non-zero, the incremented array Y must contain the vector y. On exit, Y is overwritten by the updated vector y.
[in] INCY
INCY is INTEGER
On entry, INCY specifies the increment for the elements of Y. INCY must not be zero.
Unchanged on exit.
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Further Details:
Level 2 Blas routine.
-- Written on 22-October-1986.
Jack Dongarra, Argonne National Lab.
Jeremy Du Croz, Nag Central Office.
Sven Hammarling, Nag Central Office.
Richard Hanson, Sandia National Labs.
-- Modified for the absolute-value product, April 2006
Jason Riedy, UC Berkeley
Definition at line 178 of file zla_heamv.f.
Practice Programs for COP 3223
Variables and Assignment Statement
1) Write a program that prompts the user for the area of a circle and calculates the radius of that circle and prints it out.
2) Write a program that asks the user for two pieces of information: the price of an item, and the percentage of sales tax, and uses that to calculate the total cost of the item with tax and print
this amount to the screen.
3) Write a program that asks the user for the price of gasoline per gallon, the number of gallons of gas currently in their car, the miles per gallon their car gets, and the length of their road trip
in miles and calculates and prints out the amount the user will have to spend on extra gas to complete the road trip. (You may assume that the user will have to buy some gas to complete the trip.)
4) Write a program that asks the user for how many slices are in a whole pizza and how many people are eating the pizza. Since we must be perfectly fair, all people must get exactly the same number
of slices. Any leftover slices (which must be less than the number of people) will be given to the family dog. Your program should output how many slices everyone eats and how many slices the dog gets.
5) Write a program to guesstimate the total number of jelly beans in a right circular cylinder. In particular, the user must enter both the radius and the height of the cylinder, as well as the
radius of the jellybean (we'll assume it's a sphere). For simplicity's sake, assume that the amount of volume a single jellybean takes up is simply the volume of the cube it would fit into. (Thus, if
the radius of a jellybean is 2 units, then the total volume it takes up in the jar is 8 cubic units.) You should output a guess as to how many jellybeans are in the jar. Your guess need not be an
integer. (If you want to enhance the program you may output the nearest integer to the actual value.)
If Statement
6) Write a program that does temperature conversion from either Fahrenheit to Celsius, or the other way around. Your program should first prompt the user for which type of conversion they want to do.
Then your program should prompt the user for the temperature they want to convert. Finally, your program should output the proper converted temperature. Incidentally, the formula for conversion from
celsius to fahrenheit is:
F = 1.8*C + 32
7) Debbie likes numbers that have the same tens digit and units digit. For example, Debbie likes 133 and 812355, but she does not like 137 or 4. Write a program that asks the user for a number and
then prints out whether or not Debbie likes the number.
8) Write a program that prompts the user for 2 pieces of information: (1) age, (2) amt. of cash they have. Based upon these inputs, your program should produce one of the four outputs below for the
given situations:
Situation / Output
A<21, M<100 / "You have some time before you need more money."
A<21, M>=100 / "You have got it made!"
A>=21, M<100 / "You need to get a job!"
A>=21, M>=100 / "You are right on track."
9) Write a program that asks the user for a positive even integer input n, and then outputs the sum 2+4+6+8+...+n, the sum of all the positive even integers up to n.
10) Write a program that calculates the amount of money stored in a bank account after a certain number of years. The user should enter in the initial amount deposited into the account, along with
the annual interest percentage and the number of years the account will mature. The output should provide the amount of money in the account after every year.
11) Write a program to take in a positive integer n > 1 from the user and print out whether or not the number is prime. (A prime number is only divisible by 1 and itself, and no other positive integers.)
12) Write a program that allows a user to play a guessing game. Pick a random number in between 1 and 100, and then prompt the user for a guess. Based on their guess, then them that their guess is
too high, too low, or correct. If the guess is not correct, reprompt the user for a new guess. Continue doing so until the user has properly picked the number.
13) Write a program to play a game of marbles. The game starts with 32 marbles and two players. Each player must take 1, 2 or 3 marbles on their turn. Turns go back and forth between the two players.
The winner is the person who takes the last marble. Your program should prompt each player with a message that states the current number of marbles and asks them how many they'd like to take.
Continue until there is a winner. Then your program should print out the winner (either player #1 or player #2.) (Incidentally, if both players play optimally, who always wins? What is their strategy?)
14) Write a program that takes as input two positive integers, the height and length of a parallelogram and prints a parallelogram of that size with stars to the screen. For example, if the height
were 3 and the length were 6, the following would be printed:
or if the height was 4 and the length was 2, the following would be printed:
15) Write a program that prints out all ordered triplets of integers (a,b,c) with a < b < c such that a+b+c = 15. (If you'd like, instead of the sum being 15, you can have the user enter an integer
greater than or equal to 6.) You should print out one ordered triplet per line.
16) A palindrome is a word that reads the same forwards and backwards. Write a program that reads in a string from the user without spaces (of no more than 79 characters) and prints out whether or
not that string is a palindrome. Do both a case-sensitive check and a case-insensitive check. (For example, if you are doing case-sensitive, "Madam" is NOT a palindrome, since 'M' and 'm' are
considered different characters. But if you are doing a case-insensitive check, "Madam" IS a palindrome.
17) Write functions that perform the tasks assigned in problems 4, 5, 6, 9, 10 and 11. Try to design the functions yourself, determining which parameters are reasonable. If necessary, get help from
the TAs to design the function prototypes.
18) Using the function you wrote to test for primality, write a program that reads in 10 integers from the user and prints out the largest prime number in the list. If the list only contains
composite numbers, print out a message stating that no prime numbers were entered.
19) Write a function that converts its input parameter (a double) from a temperature in Fahrenheit to a temperature in Celsius.
20) Write a function that converts its input parameter (a double) from a temperature in Celsius to a temperature in Fahrenheit.
21) Write a function that calculates how many times a particular positive integer, base, divides evenly into a second positive integer, total. For example, the integer 2 divides into 96 5 times, since
2^5 = 32 and 96 is divisible by 32, and since 2^6 = 64 and 96 is NOT divisible by 64.
22) Write a function that takes in the coefficients a (not 0), b, and c (all doubles) to a quadratic equation and returns the smaller of the two roots (a double) as the result. You may assume that
the roots of the quadratic are real. The formula for the roots of a quadratic is given below:
x = (-b ± sqrt(b^2 - 4ac)) / (2a)
23) Write a function that takes in a string, s, and an integer, n, and prints that string exactly n times, once per line.
24) Write a function that takes in a string, s and a character c, and returns the number of times the character occurs in the string.
25) Write a similar function to #24, except this time make the comparison case INSENSITIVE. So, count the number of times either the upper or lower case version of the character c appears in the
string s.
26) Write a function that takes in a single positive integer n and returns n! (n factorial). Note that n! = 1x2x3x4...xn.
27) Write a function that takes in a string and returns the number of permutations of that string. (Note: this problem requires arrays.) Look up the appropriate formula in a Discrete Math textbook.
28) The Fibonacci sequence is defined as follows: F(0)=0, F(1)=1, F(n)=F(n-1)+F(n-2), for all integers n > 1. Thus, the first few terms of the sequence are: 0, 1, 1, 2, 3, 5, 8, etc. Write a function
that takes in a non-negative integer n and returns the nth Fibonacci number.
29) Edit your function in problem 28 so that it is void but it prints out one line for each Fibonacci number upto n. For example, if the input to the function is 6, then your function should print
F(0) = 0
F(1) = 1
F(2) = 1
F(3) = 2
F(4) = 3
F(5) = 5
F(6) = 8
30) Write a function that takes in six integer parameters: x1, y1, r1, x2, y2 and r2, where (x1,y1) is the center of a circle with radius r1 and (x2, y2) is the center of a circle with radius r2.
Your program should return the number of intersection points between the two circles. (Hint: This number will always be 0, 1 or 2.)
File I/O
31) Write a program that reads in a set of test scores from a file called test.in and prints out to the screen the average of those test scores. The first line of the file will contain a positive
integer n, representing the number of test scores that follow in the file. The following n lines will contain one test score each. Each test score is guaranteed to be an integer in between 0 and 100,
32) Write a program that reads in the same file as above, but also calculates the standard deviation of those values and prints that to the screen.
33) A file nintendo.in contains a single non-negative integer on each line representing donations (in dollars) to your Nintendo game buying fund. The end of the list will be signified by the value 0
on the last line of the file. (All other integers in the file will be positive.) A Nintendo game costs $50 to buy. Read in the file of donations and output a statement listing the maximum number of
games you can buy as well as how much money you will have left over after buying that many games.
34) Read in any text file and output the total number of uppercase letter and lowercase letters in that textfile. Also include a count for how many non-alphabetic characters there are.
35) Consider the following encryption scheme: Given a key (A through Z), and a file with only uppercase alphabetic characters to encrypt, encrypt the file as follows:
1) Add the value(0-25) of the key to the first letter in the file. This becomes your first letter of cipher text output.
2) Add that cipher text letter to the following letter in the file to create the next cipher text output.
3) Continue in this manner until the whole file is encrypted.
Any time the addition exceeds 25, just mod the result by 26, thus wrapping the answer around. Consider the following example with the key = 'C' and the file containing the text "HELLO":
HELLO (original input)
JNYJX (cipher text)
Write a program that takes in a file that only contains uppercase letters and asks the user for a key, and outputs the encrypted file to another output file using this particular code.
36) Write a program that asks the user to enter in 10 numbers and then prints out those numbers in reverse order.
37) Write a program that reads in a regular text file as input and outputs how many of each letter appeared in that file.
38) Write a program that reads in a text file of test scores in the format described in question 31 and prints out a histogram of the scores. The histogram should have one row on it for each distinct
test score in the file, followed by one star for each test score of that value. For example, if the tests scores in the file were 55, 67, 80, 80, 95, 95, 95 and 98, the output to the screen should
look like:
39) Write a program that prints out a random permutation of the numbers 1 through n, where n is a positive integer entered by the user.
40) Write a program that reads a set of dart throws from a user and computes the user's score. Assume that the user enters 21 dart throws and each throw is a single integer in between 1 and 20,
inclusive. To compute the user's score, look at each number in between 15 and 20, inclusive that the user threw more than three times, and add up the points of the throws, after the third throw. For
example, if the user throws 5 20s, 3 19s, 4 18s, 6 14s and 2 1s, then the user's score is 2x20+1x18 = 58, since the user threw 2 more 20s than 3 and 1 more 18 than 3. The 14s don't count since only
15 through 20 count for points.
5.4 Acoustic Deformation Potential Scattering
In this section, the coupling of the electrons with acoustic phonons is analyzed. Displacements of the atoms from their lattice sites are induced by crystal vibrations, which in turn modify the bandstructure. For electrons in the conduction band, the variation of the conduction band edge E_c can be induced by acoustic phonons, and the corresponding interaction Hamiltonian H^AC_e-ph is given by
For small displacements, δE_c can be written as
δE_c = Ξ_ac δV/V,
where Ξ_ac denotes the acoustic deformation potential and δV is the variation of the crystal volume V. The local variation of the volume results from the lattice displacement U = x' - x. The volume of a cube generated by the orthogonal vectors a = (δx,0,0), b = (0,δy,0) and c = (0,0,δz) is given by V = δx δy δz.
The cube is distorted according to the transformations
and the new volume can be written as
where the lattice displacement is given by [85]
the interaction Hamiltonian reads
Here, ρ is the mass density of the semiconductor, and w_q denotes the polarization vector. The coupling coefficient can be written as [86]
The electron scattering rate with the assistance of acoustic phonons can be written in the following form [87] (see Appendix B.2)
where Ξ_ac is the acoustic deformation potential, ρ is the density of the material, and v_s stands for the sound velocity. This equation is only valid for ℏω_q ≪ k_B T, i.e. when the thermal energy is much larger than the energy of the phonon involved in the transition, and in the elastic approximation limit ℏω_q → 0 (see Appendix B.2).
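For reference, combining the elastic and equipartition approximations above with a parabolic band of effective mass m* gives the standard textbook form of the acoustic-deformation-potential scattering rate. This expression is added for context (it is not reproduced from the source above); the symbols follow the section above:

```latex
\frac{1}{\tau(E)} \;=\; \frac{\sqrt{2}\,(m^{*})^{3/2}\,\Xi_{ac}^{2}\,k_{B}T}{\pi\,\hbar^{4}\,\rho\,v_{s}^{2}}\,\sqrt{E}
```

The rate is proportional to the density of states, hence the square-root energy dependence, and grows linearly with temperature through the equipartition phonon occupation.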
Program to Check Children Sum Property in a Binary Tree
• Write a C program to check if a binary tree satisfy Children Sum Property.
Given a binary tree, we have to check whether it satisfies the children sum property. We will traverse each node of the binary tree and check whether the property holds true for that node.
Children Sum Property of Binary Tree
If the value of every node of a binary tree is equal to the sum of its left and right child nodes, then the binary tree satisfies the children sum property.
• A sub tree rooted at a leaf node satisfies children sum property because leaf nodes don't have any child nodes.
• An Empty tree satisfies Children sum property.
Algorithm to check Children Sum property of a binary tree
Let "node" be the pointer to any node of binary tree.
• If node is NULL, then return true.
• If node is leaf node, then return true.
• If the node's value is equal to the sum of its left and right child nodes, and the left and right subtrees also satisfy the children sum property, then the subtree rooted at node satisfies the children sum property.
Time Complexity: O(n), as we traverse the binary tree only once.
C program to check children sum property of binary tree
#include <stdio.h>
#include <stdlib.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

struct node* getNewNode(int data) {
    /* dynamically allocate memory for a new node */
    struct node* newNode = (struct node*)malloc(sizeof(struct node));
    /* populate data in new Node */
    newNode->data = data;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}

/*
This function returns a binary tree which
satisfies the children sum property
        10
       /  \
      4    6
     / \  / \
    2   2 3  3
*/
struct node* generateBTree(){
    // Root Node
    struct node* root = getNewNode(10);
    root->left = getNewNode(4);
    root->right = getNewNode(6);
    root->left->left = getNewNode(2);
    root->left->right = getNewNode(2);
    root->right->left = getNewNode(3);
    root->right->right = getNewNode(3);
    return root;
}

/* Checks whether a tree satisfies the children sum
   property or not. If tree satisfies children
   sum property then it returns 1 otherwise 0 */
int isChildrenSumTree(struct node *root) {
    if(root == NULL)
        return 1;
    if(root->left == NULL && root->right == NULL)
        return 1;
    int leftData = (root->left == NULL) ? 0 : root->left->data;
    int rightData = (root->right == NULL) ? 0 : root->right->data;
    if(isChildrenSumTree(root->left) && isChildrenSumTree(root->right) &&
       (leftData + rightData == root->data))
        return 1;
    return 0;
}

int main() {
    struct node *root = generateBTree();
    /* Check for Children sum property */
    if(isChildrenSumTree(root)) {
        printf("Tree Satisfy Children Sum Property\n");
    } else {
        printf("Tree Don't Satisfy Children Sum Property");
    }
    /* Changing the value of a node such that
       it won't satisfy children sum property */
    root->left->data = 100;
    if(isChildrenSumTree(root)) {
        printf("Tree Satisfy Children Sum Property\n");
    } else {
        printf("Tree Don't Satisfy Children Sum Property");
    }
    return 0;
}
Output
Tree Satisfy Children Sum Property
Tree Don't Satisfy Children Sum Property
GCSE Maths Past Papers and Predicted Papers
MME 2022 Predicted Papers
Maths Made Easy offers a variety of maths resources that will help with your upcoming maths exams. One of these is the set of 2022 GCSE maths predicted papers, which are a perfect way to revise for your GCSE
maths exams.
GCSE Maths Past Papers
Our partnership with Maths Made Easy has made it possible to offer top-quality past papers. Past papers are a great way to revise for your upcoming GCSE maths exams and any other tests you have
coming up.
Maths Genie Predicted Papers
Our maths genie predicted papers 2022 dedicated page is where our own genie has given his best guess for the upcoming 2022 GCSE maths exams. Have a look at our predicted papers at the link below.
31 CFR § 344.7 - What are Demand Deposit securities?
§ 344.7 What are Demand Deposit securities?
Demand Deposit securities are one-day certificates of indebtedness that are automatically rolled over each day until you request redemption.
(a) How is the rate for Demand Deposit securities determined? Each security shall bear a rate of interest based on an adjustment of the average yield for 13-week Treasury bills at the most recent
auction. A new annualized effective Demand Deposit rate and daily factor for the Demand Deposit rate are effective on the first business day following the regular auction of 13-week Treasury bills
and are shown in the SLGS rate table. Interest is accrued and added to the principal daily. Interest is computed on the balance of the principal, plus interest accrued through the preceding day.
(1) How is the interest rate calculated? (i) First, you calculate the annualized effective Demand Deposit rate in decimals, designated “I” in Equation 1, as follows:
(Equation 1)
I = Annualized effective Demand Deposit rate in decimals. If the rate is determined to be negative, such rate will be reset to zero.
P = Average auction price for the most recently auctioned 13-week Treasury bill, per hundred, to six decimals.
Y = 365 (if the year following issue date of the 13-week Treasury bill does not contain a leap year day) or 366 (if the year following issue date of the 13-week Treasury bill does contain a leap year day).
DTM = The number of days from date of issue to maturity for the most recently auctioned 13-week Treasury bill.
MTR = Estimated marginal tax rate, in decimals, of purchasers of tax-exempt bonds.
TAC = Treasury administrative costs, in decimals.
(ii) Then, you calculate the daily factor for the Demand Deposit rate as follows:
DDR = (1 + I)^(1/Y) − 1
(Equation 2)
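As a rough illustration (not an official Treasury computation), Equation 2 can be sketched in Python. The annualized rate `I` below is an assumed placeholder value, since the inputs to Equation 1 (the auction price P, the marginal tax rate MTR, and administrative costs TAC) come from auction data and Federal Register publications.

```python
# Sketch of the daily-factor calculation in Equation 2: DDR = (1 + I)^(1/Y) - 1.
# The annualized effective rate I is an assumed illustrative value here.

def demand_deposit_daily_factor(annual_rate: float, year_basis: int) -> float:
    """Daily factor for the Demand Deposit rate (Equation 2)."""
    if annual_rate < 0:        # a negative rate is reset to zero per (a)(1)(i)
        annual_rate = 0.0
    return (1 + annual_rate) ** (1 / year_basis) - 1

I = 0.05   # assumed annualized effective rate (illustrative only)
Y = 365    # no leap-year day in the following year
ddr = demand_deposit_daily_factor(I, Y)

print(ddr)                 # ~1.34e-4 per day
print((1 + ddr) ** Y - 1)  # compounding daily over Y days recovers ~0.05
```

Compounding the daily factor over the full year basis reproduces the annualized rate, which is the defining property of the conversion in Equation 2.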
(2) Where can I find additional information? Information on the estimated average marginal tax rate and Treasury administrative costs for administering Demand Deposit securities, both to be
determined by Treasury from time to time, will be published in the Federal Register.
(b) What happens to Demand Deposit securities during a debt limit contingency? At any time the Secretary determines that issuance of obligations sufficient to conduct the orderly financing operations
of the United States cannot be made without exceeding the statutory debt limit, we may invest any unredeemed Demand Deposit securities in special 90-day certificates of indebtedness.
(1) Funds left invested in Demand Deposit securities remain subject to the normal terms and conditions for such securities as set forth in this part.
(2) Funds invested in 90-day certificates of indebtedness earn simple interest equal to the daily factor in effect at the time Demand Deposit security issuance is suspended, multiplied by the number
of days outstanding. Ninety-day certificates of indebtedness are subject to the same request for redemption notification requirements as those for Demand Deposit securities and will be redeemed at
par value plus accrued interest. If a 90-day certificate of indebtedness reaches maturity during a debt limit contingency, we will automatically roll it into a new 90-day certificate of indebtedness,
along with accrued interest, that earns simple interest equal to the daily factor in effect at the time that the new 90-day certificate of indebtedness is issued, multiplied by the number of days
outstanding. When regular Treasury borrowing operations resume, the 90-day certificates of indebtedness, along with accrued interest, will be reinvested in Demand Deposit securities.
3.3: Reversible and Irreversible Pathways
The most common example of work in the systems discussed in this book is the work of expansion. It is also convenient to use the work of expansion to exemplify the difference between work that is
done reversibly and that which is done irreversibly. The example of expansion against a constant external pressure is an example of an irreversible pathway. It does not mean that the gas cannot be
re-compressed. It does, however, mean that there is a definite direction of spontaneous change at all points along the expansion.
Imagine instead a case where the expansion has no spontaneous direction of change because there is no net force pushing the gas to seek a larger or smaller volume. The only way this is possible is if the
pressure of the expanding gas is the same as the external pressure resisting the expansion at all points along the expansion. With no net force pushing the change in one direction or the other, the
change is said to be reversible or to occur reversibly. The work of a reversible expansion of an ideal gas is fairly easy to calculate.
If the gas expands reversibly, the external pressure (\(p_{ext}\)) can be replaced by a single value (\(p\)) which represents both the pressure of the gas and the external pressure.
\[ dw = -pdV \nonumber \]
\[ w = - \int p dV \nonumber \]
But now that the external pressure is not constant, \(p\) cannot be extracted from the integral. Fortunately, however, there is a simple relationship that tells us how \(p\) changes with changing \(V
\) – the equation of state! If the gas is assumed to be an ideal gas
\[ w = - \int p dV = -\int \left( \dfrac{nRT}{V}\right) dV \nonumber \]
And if the temperature is held constant (so that the expansion follows an isothermal pathway) the nRT term can be extracted from the integral.
\[ w = -nRT \int_{V_1}^{V_2} \dfrac{dV}{V} = -nRT \ln \left( \dfrac{V_2}{V_1} \right) \label{isothermal} \]
Equation \ref{isothermal} is derived for ideal gases only; a van der Waals gas would result in a different expression.
What is the work done by 1.00 mol of an ideal gas expanding reversibly from a volume of 22.4 L to a volume of 44.8 L at a constant temperature of 273 K?
Using Equation \ref{isothermal} to calculate this
\[\begin{align*} w & = -(1.00 \, \cancel{mol}) \left(8.314\, \dfrac{J}{\cancel{mol}\,\cancel{ K}}\right) (273\,\cancel{K}) \ln \left( \dfrac{44.8\,L}{22.4 \,L} \right) \nonumber \\[4pt] & = -1573 \,J = -1.57 \;kJ \end{align*} \]
Note: A reversible expansion will always require more work than an irreversible expansion (such as an expansion against a constant external pressure) when the final states of the two expansions are
the same!
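The worked example, and the claim that the reversible expansion does more work in magnitude than the irreversible one, can be checked numerically. This is an illustrative sketch; the constant external pressure for the irreversible case is assumed equal to the final pressure \(p_2 = nRT/V_2\), matching the comparison described in the text.

```python
import math

# Reversible isothermal expansion of an ideal gas: w = -nRT ln(V2/V1).
R = 8.314            # J / (mol K)
n, T = 1.00, 273     # mol, K
V1, V2 = 22.4, 44.8  # L (only the ratio matters for the reversible work)

w_rev = -n * R * T * math.log(V2 / V1)
print(round(w_rev))  # ≈ -1573 J, i.e. about -1.57 kJ

# Irreversible expansion against a constant external pressure p2 = nRT/V2
# (volumes converted to m^3 so that p is in Pa and w in J).
p2 = n * R * T / (V2 / 1000)       # Pa
w_irrev = -p2 * (V2 - V1) / 1000   # J
print(round(w_irrev))              # ≈ -1135 J, smaller in magnitude
```

The reversible pathway does more work (|w_rev| > |w_irrev|), consistent with the note above.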
The work of expansion can be depicted graphically as the area under the p-V curve depicting the expansion. Comparing examples \(\PageIndex{1}\) and \(3.1.2\), for which the initial and final volumes
were the same, and the constant external pressure of the irreversible expansion was the same as the final pressure of the reversible expansion, such a graph looks as follows.
The work is depicted as the shaded portion of the graph. It is clear to see that the reversible expansion (the work for which is shaded in both light and dark gray) exceeds that of the irreversible
expansion (shaded in dark gray only) due to the changing pressure of the reversible expansion. In general, it will always be the case that the work generated by a reversible pathway connecting
initial and final states will be the maximum work possible for the expansion.
It should be noted (although it will be proven in a later chapter) that \(\Delta U\) for an isothermal reversible process involving only p-V work is 0 for an ideal gas. This is true because the
internal energy, U, is a measure of a system’s capacity to convert energy into work. In order to do this, the system must somehow store that energy. The only mode in which an ideal gas can store this
energy is in the translational kinetic energy of the molecules (otherwise, molecular collisions would not need to be elastic, which as you recall, was a postulate of the kinetic molecular theory!)
And since the average kinetic energy is a function only of the temperature, it (and therefore \(U\)) can only change if there is a change in temperature. Hence, for any isothermal process for an
ideal gas, \(\Delta U=0\). And, perhaps just as usefully, for an isothermal process involving an ideal gas, \(q = -w\), as any energy that is expended by doing work must be replaced with heat, lest
the system temperature drop.
Constant Volume Pathways
One common pathway which processes can follow is that of constant volume. This will happen if the volume of a sample is constrained by a great enough force that it simply cannot change. It is not
uncommon to encounter such conditions with gases (since they are highly compressible anyhow) and also in geological formations, where the tremendous weight of a large mountain may force any processes
occurring under it to happen at constant volume.
If reversible changes in which the only work that can be done is that of expansion (so-called p-V work) are considered, the following important result is obtained:
\[ dU = dq + dw = dq - pdV \nonumber \]
However, \(dV = 0\) since the volume is constant! As such, \(dU\) can be expressed only in terms of the heat that flows into or out of the system at constant volume
\[ dU = dq_v \nonumber \]
Recall that \(dq\) can be found by
\[ dq = \left( \dfrac{\partial q}{\partial T}\right) dT = C\, dT \label{eq1} \]
This suggests an important definition for the constant volume heat capacity (\(C_V\)) which is
\[C_V \equiv \left( \dfrac{\partial U}{\partial T}\right)_V \nonumber \]
When Equation \ref{eq1} is integrated, the heat is
\[q = \int _{T_1}^{T_2} nC_V dT \label{isochoric} \]
Consider 1.00 mol of an ideal gas with \(C_V = 3/2 R\) that undergoes a temperature change from 125 K to 255 K at a constant volume of 10.0 L. Calculate \(\Delta U\), \(q\), and \(w\) for this change.
Since this is a constant volume process
\[w = 0 \nonumber \]
Equation \ref{isochoric} is applicable for an isochoric process,
\[q = \int _{T_1}^{T_2} nC_V dT \nonumber \]
Assuming \(C_V\) is independent of temperature:
\[\begin{align*} q & = nC_V \int _{T_1}^{T_2} dT \\[4pt] &= nC_V ( T_2-T_1) \\[4pt] & = (1.00 \, mol) \left( \dfrac{3}{2} 8.314\, \dfrac{J}{mol \, K}\right) (255\, K - 125 \,K) \\[4pt] & = 1620 \,J =
1.62\, kJ \end{align*} \]
Since this a constant volume pathway,
\[ \begin{align*} \Delta U & = q + \cancel{w} \\ & = 1.62 \,kJ \end{align*} \]
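As a quick numerical check of the constant-volume example above:

```python
# Isochoric process for an ideal gas: w = 0, q = n Cv (T2 - T1) = ΔU.
R = 8.314           # J / (mol K)
n = 1.00            # mol
Cv = 1.5 * R        # J / (mol K), Cv = 3/2 R
T1, T2 = 125, 255   # K

w = 0.0                  # no p-V work at constant volume
q = n * Cv * (T2 - T1)   # heat absorbed
dU = q + w               # first law
print(q, dU)             # both ≈ 1621 J ≈ 1.62 kJ
```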
Constant Pressure Pathways
Most laboratory-based chemistry occurs at constant pressure. Specifically, it is exposed to the constant air pressure of the laboratory, glove box, or other container in which reactions are taking
place. For constant pressure changes, it is convenient to define a new thermodynamic quantity called enthalpy.
\[ H \equiv U+ pV \nonumber \]
\[\begin{align*} dH &\equiv dU + d(pV) \\[4pt] &= dU + pdV + Vdp \end{align*} \]
For reversible changes at constant pressure (\(dp = 0\)) for which only p-V work is done
\[\begin{align*} dH & = dq + dw + pdV + Vdp \\[4pt] & = dq - \cancel{pdV} + \cancel{pdV} + \cancelto{0}{Vdp} \\ & = dq \label{heat} \end{align*} \]
And just as in the case of constant volume changes, this implies an important definition for the constant pressure heat capacity
\[C_p \equiv \left( \dfrac{\partial H}{\partial T} \right)_p \nonumber \]
Consider 1.00 mol of an ideal gas with \(C_p = 5/2 R\) that undergoes a temperature change from 125 K to 255 K at a constant pressure of 10.0 atm. Calculate \(\Delta U\), \(\Delta H\), \(q\), and \(w\)
for this change.
\[q = \int_{T_1}^{T_2} nC_p dT \nonumber \]
assuming \(C_p\) is independent of temperature:
\[ \begin{align*} q & = nC_p \int _{T_1}^{T_2} dT \\ & = nC_p (T_2-T_1) \\ & = (1.00 \, mol) \left( \dfrac{5}{2} 8.314 \dfrac{J}{mol \, K}\right) (255\, K - 125\, K) = 2700\, J = 2.70\, kJ \end
{align*} \]
So via Equation \ref{heat} (specifically the integrated version of it using differences instead of differentials)
\[ \Delta H = q = 2.70 \,kJ \nonumber \]
\[ \begin{align*} \Delta U & = \Delta H - \Delta (pV) \\ & = \Delta H -nR\Delta T \\ & = 2700\, J - (1.00 \, mol) \left( 8.314\, \dfrac{J}{mol \, K}\right) (255\, K - 125 \,K) \\ & = 1620 \,J = 1.62
\, kJ \end{align*} \]
Now that \(\Delta U\) and \(q\) are determined, then work can be calculated
\[\begin{align*} w & =\Delta U -q \\ & = 1.62\,kJ - 2.70\,kJ = -1.08\;kJ \end{align*} \]
It makes sense that \(w\) is negative since this process is a gas expansion.
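The constant-pressure example can be verified numerically in the same way:

```python
# Isobaric process for an ideal gas:
# q = ΔH = n Cp ΔT,  ΔU = ΔH - nR ΔT,  w = ΔU - q.
R = 8.314           # J / (mol K)
n = 1.00            # mol
Cp = 2.5 * R        # J / (mol K), Cp = 5/2 R
T1, T2 = 125, 255   # K
dT = T2 - T1

q = n * Cp * dT       # heat absorbed at constant pressure
dH = q                # ΔH = q for a constant-pressure, p-V-work-only process
dU = dH - n * R * dT  # since Δ(pV) = nR ΔT for an ideal gas
w = dU - q            # first law
print(q, dU, w)       # ≈ 2702 J, ≈ 1621 J, ≈ -1081 J
```

The negative \(w\) confirms that the heated gas expands against the constant pressure.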
Calculate \(q\), \(w\), \(\Delta U\), and \(\Delta H\) for 1.00 mol of an ideal gas expanding reversibly and isothermally at 273 K from a volume of 22.4 L and a pressure of 1.00 atm to a volume of
44.8 L and a pressure of 0.500 atm.
Since this is an isothermal expansion, Equation\ref{isothermal} is applicable
\[ \begin{align*} w & = -nRT \ln \dfrac{V_2}{V_1} \\ & = -(1.00 \, mol) \left( 8.314\, \dfrac{J}{mol \, K}\right) (273\, K) \ln \left(\dfrac{44.8\,L}{22.4\,L} \right) \\ & = -1573\,J = -1.57\,kJ \\[4pt]
\Delta U & = q + w = 0 \\[4pt] q &= -w = 1.57\,kJ \end{align*} \]
Since this is an isothermal expansion
\[\Delta H = \Delta U + \Delta (pV) = 0 + 0 \nonumber \]
where \(\Delta (pV) = 0\) due to Boyle’s Law!
Adiabatic Pathways
An adiabatic pathway is defined as one in which no heat is transferred (\(q = 0\)). Under these circumstances, if an ideal gas expands, it is doing work (\(w < 0\)) against the surroundings (provided
the external pressure is not zero!) and as such the internal energy must drop (\(\Delta U <0 \)). And since \(\Delta U\) is negative, there must also be a decrease in the temperature (\(\Delta T < 0
\)). How big will the decrease in temperature be and on what will it depend? The key to answering these questions comes in the solution to how we calculate the work done.
If the adiabatic expansion is reversible and done on an ideal gas,
\[dw = -pdV \nonumber \]
and, since \(dU = dw\) when \(q = 0\),
\[dw = nC_vdT \label{Adiabate2} \]
Equating these two terms yields
\[- pdV = nC_v dT \nonumber \]
Using the ideal gas law for an expression for \(p\) (\(p = nRT/V\))
\[ - \dfrac{nRT}{V} dV = nC_vdT \nonumber \]
And rearranging to gather the temperature terms on the right and volume terms on the left yields
\[\dfrac{dV}{V} = -\dfrac{C_V}{R} \dfrac{dT}{T} \nonumber \]
This expression can be integrated on the left between \(V_1\) and \(V_2\) and on the right between \(T_1\) and \(T_2\). Assuming that \(C_V/R\) is independent of temperature over the range of
integration, it can be pulled from the integrand in the term on the right.
\[ \int_{V_1}^{V_2} \dfrac{dV}{V} = -\dfrac{C_V}{R} \int_{T_1}^{T_2} \dfrac{dT}{T} \nonumber \]
The result is
\[ \ln \left(\dfrac{V_2}{V_1} \right) = - \dfrac{C_V}{R} \ln \left( \dfrac{T_2}{T_1} \right) \nonumber \]
\[ \left(\dfrac{V_2}{V_1} \right) = \left(\dfrac{T_2}{T_1} \right)^{- \frac{C_V}{R}} \nonumber \]
\[ V_1T_1^{\frac{C_V}{R}} = V_2T_2^{\frac{C_V}{R}} \nonumber \]
\[T_1 \left(\dfrac{V_1}{V_2} \right)^{\frac{R} {C_V}} = T_2 \label{Eq4Alternative} \]
Once \(\Delta T\) is known, it is easy to calculate \(w\), \(\Delta U\) and \(\Delta H\).
1.00 mol of an ideal gas (\(C_V = 3/2 R\)) initially occupies 22.4 L at 273 K. The gas expands adiabatically and reversibly to a final volume of 44.8 L. Calculate \(\Delta T\), \(q\), \(w\), \(\Delta U\), and \(\Delta H\) for the expansion.
Since the pathway is adiabatic:
\[q =0 \nonumber \]
Using Equation \ref{Eq4Alternative}
\[ \begin{align*} T_2 & = T_1 \left(\dfrac{V_1}{V_2} \right)^{- \frac{R} {C_V}} \\ & =(273\,K) \left( \dfrac{22.4\,L}{44.8\,L} \right)^{2/3} \\ & = 172\,K \end{align*} \]
\[\Delta T = 172\,K - 273\,K = -101\,K \nonumber \]
For calculating work, we integrate Equation \ref{Adiabate2} to get
\[ \begin{align*} w & = \Delta U = nC_v \Delta T \\ & = (1.00 \, mol) \left(\dfrac{3}{2} 8.314\, \dfrac{J}{mol \, K} \right) (-101\,K ) \\ & = -1260 \,J = -1.26\,kJ \end{align*} \]
\[ \begin{align*} \Delta H & = \Delta U + nR\Delta T \\ & = -1260\,J + (1.00 \, mol) \left(8.314\, \dfrac{J}{mol \, K} \right) (-101\,K ) \\ & = -2100\,J \end{align*} \]
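The adiabatic example can be checked numerically, using the form \(T_2 = T_1 (V_1/V_2)^{R/C_V}\) that the worked example applies:

```python
# Reversible adiabatic expansion of an ideal gas:
# T2 = T1 (V1/V2)^(R/Cv), then w = ΔU = n Cv ΔT and ΔH = n Cp ΔT (q = 0).
R = 8.314                 # J / (mol K)
n = 1.00                  # mol
Cv, Cp = 1.5 * R, 2.5 * R
T1 = 273                  # K
V1, V2 = 22.4, 44.8       # L

T2 = T1 * (V1 / V2) ** (R / Cv)   # exponent R/Cv = 2/3 here
dT = T2 - T1
q = 0.0                   # adiabatic pathway
w = n * Cv * dT           # equals ΔU
dH = n * Cp * dT
print(round(T2), round(w), round(dH))  # ≈ 172 K, ≈ -1260 J, ≈ -2100 J
```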
The following table shows recipes for calculating \(q\), \(w\), \(\Delta U\), and \(\Delta H\) for an ideal gas undergoing a reversible change along the specified pathway.
Table 3.2.1: Thermodynamics Properties for a Reversible Expansion or Compression
Pathway \(q\) \(w\) \(\Delta U\) \(\Delta H\)
Isothermal \(nRT \ln (V_2/V_1) \) \(-nRT \ln (V_2/V_1) \) 0 0
Isochoric \(C_V \Delta T\) 0 \(C_V \Delta T\) \(C_V \Delta T + V\Delta p\)
Isobaric \(C_p \Delta T\) \(- p\Delta V\) \(C_p \Delta T - p\Delta V\) \(C_p \Delta T\)
adiabatic 0 \(C_V \Delta T\) \(C_V \Delta T\) \(C_p \Delta T\)
Linear Expansivity: Definition and Calculations
What is Linear Expansivity?
In this article, you will learn about linear expansivity: definition and calculations. I will also acquaint you with the symbols, and units of linear expansivity.
Definition: Linear expansivity of a substance can be defined as the fractional increase in length per unit length per degree rise in temperature.
The symbol for linear expansivity is α.
Also, the unit of linear expansivity is per degree celsius (^0C^-1) or per kelvin (K^-1).
Additionally, Linear expansivity, often referred to as thermal expansion, is a material’s tendency to change its dimensions in response to temperature variations.
When a substance is heated, its particles gain energy and move more vigorously, causing them to expand and increasing the material’s volume.
When heat is applied to a metal, the change in temperature causes the object to expand. Different metals have different expansion rates when the same quantity of heat is applied to them; some metals tend to expand more than others at the same temperature.
Linear expansivity is also known as the coefficient of linear expansion.
The coefficient of linear expansion is the reason these metals have different rates of expansion at the same temperature.
Linear Expansivity Formula and Increase in Length (Expansion) Formula
The linear expansivity formula is α = e/L[0]Δθ
But since e = extension = L – L[0]
α = (L – L[0] )/L[0]Δθ
We can also write the above formula as
α = (L – L[0] )/L[0](T[2] – T[1]) or α = e/L[0](T[2] – T[1])
where Δθ = T[2] – T[1]
The formula for calculating the increase in length is L = L[0](1 + αΔθ).
α = Linear expansivity
L = new (final) length after expansion
Also, L[0] = original length
e = extension
Δθ = Change in temperature = T[2] – T[1]
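As an illustrative sketch (the function names below are mine, not from the article), the two formulas above can be coded directly; the numbers match Question 1 and Question 4 worked later in the article.

```python
# alpha = e / (L0 * dTheta)  and  L = L0 * (1 + alpha * dTheta)

def linear_expansivity(L0, L, T1, T2):
    """alpha = (L - L0) / (L0 * (T2 - T1)), in per-kelvin."""
    return (L - L0) / (L0 * (T2 - T1))

def expanded_length(L0, alpha, T1, T2):
    """New length L = L0 * (1 + alpha * (T2 - T1))."""
    return L0 * (1 + alpha * (T2 - T1))

# Question 1: invar, 50 m -> 50.5 m when heated from 20 C to 70 C
alpha_invar = linear_expansivity(50, 50.5, 20, 70)
print(alpha_invar)   # 2e-4 per kelvin

# Question 4: copper, 50 m, alpha = 1.7e-5 per kelvin, heated from 20 C to 30 C
L_copper = expanded_length(50, 1.7e-5, 20, 30)
print(L_copper)      # 50.0085 m
```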
Table for Substances and their Linear Expansivity
Here is a table for materials, their symbols, and linear expansivity:
The material Symbol Linear expansivity Unit
Aluminium Al 0.000023 k^-1
Iron Fe 0.000012 k^-1
Copper Cu 0.000017 k^-1
Brass 0.000018 k^-1
Lead Pb 0.000029 k^-1
Platinium Pt 0.000009 k^-1
Zinc Zn 0.000030 k^-1
Alloy (Invar) 0.000001 k^-1
Glass 0.0000085 k^-1
Silica SiO[2] 0.0000004 k^-1
Linear Expansivity Calculations
Here are a few solved problems to help you understand how to calculate linear expansivity:
Question 1:
An invar rod of length 50 meters increases to 50.5 meters when heated from 20^0C to 70^0C. Calculate its linear expansivity.
Original length, L[0] = 50 m
The increase in length, L = 50.5 m
Initial temperature, T[1] = 20^0C
Final temperature, T[2] = 70^0C
Unknown value to find
Linear expansivity, α =?
α = (L – L[0] )/L[0](T[2] – T[1])
α = (L – L[0] )/L[0](T[2] – T[1]) = (50.5 – 50)/50(70-20) = 0.5/(50 x 50) = 0.5/2,500 = 0.0002 K^-1
Which implies that
α = 2 x 10^-4 K^-1
Therefore, the linear expansivity of the invar is 2 x 10^-4 per kelvin.
Question 2:
A wire of length 5 meters is heated from a temperature of 20^0C to 70^0C. If it undergoes a change in length of 2 centimeters, find the linear expansivity of the wire.
Original length, L[0] = 5 m
Change in length = extension = 2 cm = (2/100) m = 0.02 m
Initial temperature, T[1] = 20^0C
Final temperature, T[2] = 70^0C
Unknown value to find
Linear expansivity, α =?
α = e/L[0](T[2] – T[1])
α = e/L[0](T[2] – T[1]) = 0.02/5(70-20) = 0.02/(5 x 50) = 0.02/250 = 0.00008 K^-1
Which implies that
α = 8 x 10^-5 K^-1
Therefore, the linear expansivity of the invar is 8 x 10^-5 per kelvin.
Question 3:
When 15 meters of a metallic rod is heated from 10^0C to 110^0C, its length becomes 15.03 meters. What is the linear expansivity of the metal?
Original length, L[0] = 15 m
The increase in length, L = 15.03 m
Initial temperature, T[1] = 10^0C
Final temperature, T[2] = 110^0C
Unknown value to find
Linear expansivity, α =?
α = (L – L[0] )/L[0](T[2] – T[1])
α = (15.03 – 15)/15(110 – 10) = 0.03/(15 x 100) = 0.03/1500 = 0.00002 K^-1
Which implies that
α = 2 x 10^-5 K^-1
Therefore, the linear expansivity of the metal rod is 2 x 10^-5 per kelvin.
Question 4:
The length of copper is 50 meters when it’s 20^0C. By how much will it expand when the temperature increases to 30^0C? [Take the linear expansivity of a copper to be 1.7 x 10^-5 K^-1]
Data: Extract your data from the above question.
Original length, L[0] = 50 m
Linear expansivity, α = 1.7 x 10^-5 K^-1
Initial temperature, T[1] = 20^0C
Final temperature, T[2] = 30^0C
Unknown value to find
The increase in length, L =?
The formula we need to apply
L = L[0](1 + αΔθ)
L = L[0](1 + αΔθ) = L[0][1 + α (Δθ = T[2] – T[1])] = 50 [1 + 1.7 x 10^-5 (30 – 20 )]
And we will now have
L = 50 [ 1 + 0.000017 x 10 ]
Which will become
L = 50 [ 1 + 0.00017]
After adding 1 to 0.00017 you will then have
New length, L = 50 x 1.00017 = 50.0085 m
L = 50.0085 m
Therefore, the new length of the copper rod is 50.0085 meters. Thus, the rod expanded by 0.0085 meters from its original size.
Question 5:
Compute the increase in length of 500 meters of a copper wire, when its temperature changes from 12^0C to 32^0C. The linear expansivity of a copper wire is 17 x 10^-5 per degree celsius.
Data: Always extract your data from the question before you start solving the problem.
Original length, L[0] = 500 m
Linear expansivity, α = 17 x 10^-5 ^0C^-1
Initial temperature, T[1] = 12^0C
Final temperature, T[2] = 32^0C
Unknown value to find
The increase in length, L =?
The formula to solve the problem is
L = L[0](1 + αΔθ)
L = L[0](1 + αΔθ) = L[0] [1 + α (T[2] – T[1])] = 500 [1 + 17 x 10^-5 (32 – 12)]
The above expression will give us
L = 500 [ 1 + 0.00017 x 20 ]
We will then obtain
L = 500 [ 1 + 0.0034]
we will then add 1 to 0.0034 to get
New length, L = 500 x 1.0034 = 501.7 m
L = 501.7 m
Therefore, the new length of the copper wire is 501.7 meters. Thus, the wire expanded by 1.7 meters from its original size.
Question 6
If an iron rail of 8 meters long are laid close up end to end when the temperature is 30^0C. What gap will be provided between consecutive rails when the temperature rises to 60^0C?
(Take linear expansivity of iron = 1.2 x 10^-5 K^-1)
Linear expansivity of iron, α = 1.2 x 10^-5 K^-1
Change in Temperature, Δθ = (60 K – 30 K) = 30 K
Original length of rail, L[1] = 8 m
The increase in length due to an increase in temperature, = αL[1]Δθ
and by inserting our data into the above formula, we will have
The increase in length due to increase in temperature, = 1.2 x 10^-5 x 8 x 30 = 0.00288 m ≈ 0.29 cm
Hence, the gap that will be provided between consecutive rails is about 0.29 centimetres
Question 7
Steel bars, each of length 3m at 29^0C are to be used for constructing a rail line. If the linear expansivity of steel is 1.0 x 10^-5 K^-1. Calculate the safety gap that must be left between
successive bars if the highest temperature expected is 41^0C.
The final answer to the above question is 3.6 x 10^-4 m
We will make ΔL subject of the formula from the equation α = ΔL /L[0]Δθ to obtain
ΔL = αL[0]Δθ
where ΔL = the increase in length (the safety gap)
α = 1.0 x 10^-5 K^-1
L[0] = 3m
Δθ = (θ[2] – θ[1]) = (41 – 29) = 12^0C
Hence by inserting our data into the main formula ΔL = αL[0]Δθ, we will have
ΔL = αL[0]Δθ = 1.0 x 10^-5 x 3 x 12 = 0.00036 m = 3.6 x 10^-4 m
Therefore, the safety gap that must be left is 3.6 x 10^-4 m.
Question 8
A metal rod of length 50 centimeters is heated from 40^0C to 80^0C. If the linear expansivity of the material is α, calculate the increase in length of the rod (in meters) in terms of α.
The final answer to the above question is 20α
We will use the formula ΔL = αL[0](θ[2] – θ[1]) to solve the problem
Therefore, since
L[0] = 50 cm = 0.5 m
θ[1] = 40 °C
θ[2] = 80 °C
ΔL = αL[0](θ[2] – θ[1]) = α x 0.5 x (80 – 40) = α x 0.5 x 40 = 20α
Therefore, the increase in length of the rod is 20α
Question 9
A metal rod of length 100 cm is heated through 100 °C. Calculate the change in length of the rod. (Linear expansivity of the material of the rod is 3 x 10^-5 K^-1.)
The final answer to the question is 0.3 cm
We will apply the formula ΔL = αL[0]Δθ to find the change in length. Since L[0] is given in centimetres, ΔL comes out in centimetres:
ΔL = αL[0]Δθ = 3.0 x 10^-5 x 100 x 100 = 0.3 cm
Question 10
A bridge made of steel is 600 m long. What is the daily variation in its length if the night-time and day-time temperatures are 10 °C and 35 °C respectively? (Take the linear expansivity of steel as 1.2 x 10^-5 K^-1.)
The final answer to the above question is 18 cm
We will also use the same formula in question 9 (ΔL = αL[0]Δθ ) to solve this problem
ΔL = αL[0]Δθ = 1.2 x 10^-5 x 600 x 25 = 0.18 m = 18 cm
Question 11
A metal rod of length 40.00 cm at 20 °C is heated to a temperature of 45 °C. If the new length of the rod is 40.05 cm, calculate its linear expansivity.
The final answer to this question is 5 x 10^-5 K^-1
The formula α = (L – L[0] )/L[0](θ[2] – θ[1]) will help us to find the linear expansivity of the metal rod.
L = 40.05 cm
L[0] = 40.00 cm
θ[1] = 20 °C
θ[2] = 45 °C
Thus, by putting our data into the formula
α = (L – L[0]) / [L[0](θ[2] – θ[1])] = (40.05 – 40.00) / [40.00 x (45 – 20)]
The above expression will become
α = 0.05 / (40 x 25) = 0.00005 K^-1 = 5 x 10^-5 K^-1
Therefore, the linear expansivity of the metal rod is 5 x 10^-5 per kelvin
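Question 11's rearranged formula can be sketched the same way; this helper (the naming is mine, not from the text) returns α directly from the measured lengths and temperatures:

```python
def expansivity(L0, L, theta1, theta2):
    """alpha = (L - L0) / (L0 * (theta2 - theta1)), in per kelvin.

    L0 and L must share a unit (here cm); the length unit cancels out.
    """
    return (L - L0) / (L0 * (theta2 - theta1))

alpha = expansivity(40.00, 40.05, 20, 45)   # about 5e-05 per kelvin
```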
Question 12
A brass rod is 2 m long at a certain temperature. What is its length after a temperature rise of 100 K, if the linear expansivity of brass is 18 x 10^-6 K^-1?
The final answer to the above question is 2.0036 meters
We will make L the subject of the formula α = (L – L[0])/(L[0]Δθ)
Thus, by inserting our numerical information from the question into the formula, we will get
18 x 10^-6 = (L – 2)/(2 x 100)
which gives us
L – 2 = 18 x 10^-6 x 200
The above expression will now become
L – 2 = 36 x 10^-4 = 0.0036
Rearranging, we will now have
L = 2 + 0.0036 = 2.0036 m
Therefore, the length after a temperature rise of 100 K will be 2.0036 meters.
Question 13
The linear expansivity of a metal P is twice that of another metal Q. When these materials are heated through the same temperature change, their increase in length is the same. Calculate the ratio of
the original length of P to that of Q.
The final answer to the above question is P : Q = 1 : 2
Linear expansivity of Q = α
Linear expansivity of P = 2α
Original length of P = L[P]
Original length of Q = L[Q]
We will use the formula α = ΔL /L[0]Δθ
At P, we will have
2α = ΔL /L[P]Δθ
At Q, we will have
α = ΔL /L[Q]Δθ
Making the lengths the subject in each case gives
L[P] = ΔL/(2αΔθ) and L[Q] = ΔL/(αΔθ)
Hence the ratio of L[P] to L[Q] is
ΔL/(2αΔθ) : ΔL/(αΔθ)
Thus, we now have
1/2 : 1
After multiplying both sides by 2, we are left with
Ratio of P to Q = 1 : 2
Question 14
The ratio of the coefficients of linear expansion of two metals, α[1]/α[2], is 3:4. When the metals are heated through the same temperature change, the ratio of their increases in length, ΔL[1]/ΔL[2],
is 1:2. What is the ratio of their original lengths, L[1]/L[2]?
The final answer to the above question is 2:3
α[1] = 3
α[2] = 4
The change in temperature is the same for the two rods: Δθ = θ
ΔL[1] = 1
ΔL[2] = 2
From α = ΔL/(L[0]Δθ), we make the original length the subject:
L[0] = ΔL/(αΔθ)
For rod 1:
L[1] = 1/(3θ)
For rod 2:
L[2] = 2/(4θ)
The ratio of L[1] to L[2] = (1/3θ) : (2/4θ) = (1/3) : (1/2) = 2 : 3
Therefore, our final answer is 2 : 3
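The ratio manipulation in Questions 13 and 14 can be checked exactly with Python's fractions module. The sketch below reproduces Question 14 (variable names are mine, and the unit values 3, 4, 1, 2 simply stand in for the given ratios):

```python
from fractions import Fraction

alpha1, alpha2 = Fraction(3), Fraction(4)   # alpha[1] : alpha[2] = 3 : 4
dL1, dL2 = Fraction(1), Fraction(2)         # increases in length, ratio 1 : 2

# From alpha = dL / (L0 * dT) with the same dT for both rods:
#   L0 = dL / (alpha * dT), so the common dT cancels in the ratio.
ratio = (dL1 / alpha1) / (dL2 / alpha2)
print(ratio)   # -> 2/3, i.e. L[1] : L[2] = 2 : 3
```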
How Different Materials Respond to Linear Expansivity
Metals and Linear Expansivity Metals typically have a higher linear expansivity compared to non-metals. This property is harnessed in various applications. For instance, bimetallic strips, which
consist of two different metals bonded together, are used in thermostats to regulate temperature. As temperature changes, the two metals expand at different rates, causing the strip to bend and
operate the thermostat.
Concrete and Linear Expansivity Concrete, a widely used construction material, also exhibits linear expansivity. Understanding this property is essential to prevent structural issues due to
temperature changes. Bridges, for example, incorporate expansion joints that allow the material to expand or contract without causing damage.
Liquids and Gases Unlike solids, liquids and gases have no fixed shape, so linear expansivity does not strictly apply to them. Their expansion is instead described by volume (cubic) expansivity, and
their volume changes with temperature are generally much larger than those of solids.
Applications of Linear Expansivity
Engineering and Infrastructure Linear expansivity plays a pivotal role in designing structures like bridges, roads, and buildings. Engineers consider how materials will respond to temperature
variations to ensure long-term stability and safety.
Thermal Stress Management In industries where materials are subjected to extreme temperature changes, such as aerospace and manufacturing, understanding linear expansivity helps manage thermal
stress. Components can be designed to accommodate expansion and contraction, reducing the risk of mechanical failure.
Cooking Utensils and Culinary Science In the kitchen, linear expansivity is employed in cooking utensils. For instance, metal spatulas can be used to flip pancakes because the metal expands with
heat, creating a gap between the spatula and the food.
Frequently Asked Questions: Linear Expansivity
Q: Does every material expand the same way when heated?
A: No, different materials have varying linear expansivity coefficients, causing them to expand at different rates.
Q: Is linear expansivity always noticeable?
A: Not always. In many everyday scenarios, the changes in size due to linear expansivity are negligible and not easily observable.
Q: How is linear expansivity related to temperature?
A: The expansion itself is directly proportional to the temperature change: for a given material, the increase in length ΔL grows in proportion to Δθ, with the expansivity α as the constant of proportionality.
Q: Can linear expansivity be negative?
A: Yes, in some cases, materials can contract with increasing temperature, resulting in a negative linear expansivity coefficient.
Q: Are there materials with zero linear expansivity?
A: No, all materials exhibit some degree of linear expansivity, although the extent may vary.
Q: Why is linear expansivity important in scientific research?
A: Linear expansivity is a critical factor when studying material behavior, especially when it comes to predicting how substances will respond to temperature fluctuations.
I hope you are now familiar with linear expansivity: definition and calculations. Drop a comment if there is anything you would like me to explain further.
You may also like to read:
How to Derive the Formula For Increase in Volume
Also How to Calculate Cubic Expansivity with Examples
How to Calculate the Relative Density of a Liquid
Linear Expansivity Definition and Calculations | {"url":"https://physicscalculations.com/linear-expansivity-definition-and-calculations/","timestamp":"2024-11-08T08:29:51Z","content_type":"text/html","content_length":"155564","record_id":"<urn:uuid:4ee69f24-884e-450c-99c4-87cbbed81d3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00671.warc.gz"} |
SPOSVXX - Linux Manuals (3)
sposvxx.f -
subroutine sposvxx (FACT, UPLO, N, NRHS, A, LDA, AF, LDAF, EQUED, S, B, LDB, X, LDX, RCOND, RPVGRW, BERR, N_ERR_BNDS, ERR_BNDS_NORM, ERR_BNDS_COMP, NPARAMS, PARAMS, WORK, IWORK, INFO)
SPOSVXX computes the solution to a system of linear equations A * X = B for PO matrices
Function/Subroutine Documentation
subroutine sposvxx (character FACT, character UPLO, integer N, integer NRHS, real, dimension(lda,*) A, integer LDA, real, dimension(ldaf,*) AF, integer LDAF, character EQUED, real, dimension(*) S,
real, dimension(ldb,*) B, integer LDB, real, dimension(ldx,*) X, integer LDX, real RCOND, real RPVGRW, real, dimension(*) BERR, integer N_ERR_BNDS, real, dimension(nrhs,*) ERR_BNDS_NORM, real,
dimension(nrhs,*) ERR_BNDS_COMP, integer NPARAMS, real, dimension(*) PARAMS, real, dimension(*) WORK, integer, dimension(*) IWORK, integer INFO)
SPOSVXX uses the Cholesky factorization A = U**T*U or A = L*L**T
to compute the solution to a real system of linear equations
A * X = B, where A is an N-by-N symmetric positive definite matrix
and X and B are N-by-NRHS matrices.
If requested, both normwise and maximum componentwise error bounds
are returned. SPOSVXX will return a solution with a tiny
guaranteed error (O(eps) where eps is the working machine
precision) unless the matrix is very ill-conditioned, in which
case a warning is returned. Relevant condition numbers also are
calculated and returned.
SPOSVXX accepts user-provided factorizations and equilibration
factors; see the definitions of the FACT and EQUED options.
Solving with refinement and using a factorization from a previous
SPOSVXX call will also produce a solution with either O(eps)
errors or warnings, but we cannot make that claim for general
user-provided factorizations and equilibration factors if they
differ from what SPOSVXX would itself produce.
The following steps are performed:
1. If FACT = 'E', real scaling factors are computed to equilibrate
the system:
diag(S)*A*diag(S) *inv(diag(S))*X = diag(S)*B
Whether or not the system will be equilibrated depends on the
scaling of the matrix A, but if equilibration is used, A is
overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
2. If FACT = 'N' or 'E', the Cholesky decomposition is used to
factor the matrix A (after equilibration if FACT = 'E') as
A = U**T* U, if UPLO = 'U', or
A = L * L**T, if UPLO = 'L',
where U is an upper triangular matrix and L is a lower triangular matrix.
3. If the leading i-by-i principal minor is not positive definite,
then the routine returns with INFO = i. Otherwise, the factored
form of A is used to estimate the condition number of the matrix
A (see argument RCOND). If the reciprocal of the condition number
is less than machine precision, the routine still goes on to solve
for X and compute error bounds as described below.
4. The system of equations is solved for X using the factored form
of A.
5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
the routine will use iterative refinement to try to get a small
error and error bounds. Refinement calculates the residual to at
least twice the working precision.
6. If equilibration was used, the matrix X is premultiplied by
diag(S) so that it solves the original system before equilibration.
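SPOSVXX itself is a Fortran routine, but steps 1, 2, 4 and 6 above can be illustrated with a short NumPy sketch. This is my own simplification, not the LAPACK implementation: it uses exact 1/sqrt(diag) scale factors rather than the powers-of-radix factors LAPACK computes, and performs no iterative refinement or error-bound estimation.

```python
import numpy as np

def posvxx_sketch(A, B):
    """Equilibrate, Cholesky-factor, solve, and unscale (steps 1, 2, 4, 6)."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    s = 1.0 / np.sqrt(np.diag(A))            # step 1: scale factors S
    A_eq = (s[:, None] * A) * s[None, :]     #   diag(S) * A * diag(S)
    B_eq = s[:, None] * B                    #   diag(S) * B
    L = np.linalg.cholesky(A_eq)             # step 2: A_eq = L @ L.T
    Y = np.linalg.solve(L, B_eq)             # step 4: forward substitution,
    X_eq = np.linalg.solve(L.T, Y)           #   then back substitution
    return s[:, None] * X_eq                 # step 6: undo the equilibration

A = np.array([[4.0, 2.0], [2.0, 3.0]])       # symmetric positive definite
B = np.array([[1.0], [2.0]])
X = posvxx_sketch(A, B)                      # A @ X reproduces B
```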
Some optional parameters are bundled in the PARAMS array. These
settings determine how refinement is performed, but often the
defaults are acceptable. If the defaults are acceptable, users
can pass NPARAMS = 0 which prevents the source code from accessing
the PARAMS argument.
FACT is CHARACTER*1
Specifies whether or not the factored form of the matrix A is
supplied on entry, and if not, whether the matrix A should be
equilibrated before it is factored.
= 'F': On entry, AF contains the factored form of A.
If EQUED is not 'N', the matrix A has been
equilibrated with scaling factors given by S.
A and AF are not modified.
= 'N': The matrix A will be copied to AF and factored.
= 'E': The matrix A will be equilibrated if necessary, then
copied to AF and factored.
UPLO is CHARACTER*1
= 'U': Upper triangle of A is stored;
= 'L': Lower triangle of A is stored.
N is INTEGER
The number of linear equations, i.e., the order of the
matrix A. N >= 0.
NRHS is INTEGER
The number of right hand sides, i.e., the number of columns
of the matrices B and X. NRHS >= 0.
A is REAL array, dimension (LDA,N)
On entry, the symmetric matrix A, except if FACT = 'F' and EQUED =
'Y', then A must contain the equilibrated matrix
diag(S)*A*diag(S). If UPLO = 'U', the leading N-by-N upper
triangular part of A contains the upper triangular part of the
matrix A, and the strictly lower triangular part of A is not
referenced. If UPLO = 'L', the leading N-by-N lower triangular
part of A contains the lower triangular part of the matrix A, and
the strictly upper triangular part of A is not referenced. A is
not modified if FACT = 'F' or 'N', or if FACT = 'E' and EQUED =
'N' on exit.
On exit, if FACT = 'E' and EQUED = 'Y', A is overwritten by diag(S)*A*diag(S).
LDA is INTEGER
The leading dimension of the array A. LDA >= max(1,N).
AF is REAL array, dimension (LDAF,N)
If FACT = 'F', then AF is an input argument and on entry
contains the triangular factor U or L from the Cholesky
factorization A = U**T*U or A = L*L**T, in the same storage
format as A. If EQUED .ne. 'N', then AF is the factored
form of the equilibrated matrix diag(S)*A*diag(S).
If FACT = 'N', then AF is an output argument and on exit
returns the triangular factor U or L from the Cholesky
factorization A = U**T*U or A = L*L**T of the original
matrix A.
If FACT = 'E', then AF is an output argument and on exit
returns the triangular factor U or L from the Cholesky
factorization A = U**T*U or A = L*L**T of the equilibrated
matrix A (see the description of A for the form of the
equilibrated matrix).
LDAF is INTEGER
The leading dimension of the array AF. LDAF >= max(1,N).
EQUED is CHARACTER*1
Specifies the form of equilibration that was done.
= 'N': No equilibration (always true if FACT = 'N').
= 'Y': Both row and column equilibration, i.e., A has been
replaced by diag(S) * A * diag(S).
EQUED is an input argument if FACT = 'F'; otherwise, it is an
output argument.
S is REAL array, dimension (N)
The row scale factors for A. If EQUED = 'Y', A is multiplied on
the left and right by diag(S). S is an input argument if FACT =
'F'; otherwise, S is an output argument. If FACT = 'F' and EQUED
= 'Y', each element of S must be positive. If S is output, each
element of S is a power of the radix. If S is input, each element
of S should be a power of the radix to ensure a reliable solution
and error estimates. Scaling by powers of the radix does not cause
rounding errors unless the result underflows or overflows.
Rounding errors during scaling lead to refining with a matrix that
is not equivalent to the input matrix, producing error estimates
that may not be reliable.
B is REAL array, dimension (LDB,NRHS)
On entry, the N-by-NRHS right hand side matrix B.
On exit,
if EQUED = 'N', B is not modified;
if EQUED = 'Y', B is overwritten by diag(S)*B;
LDB is INTEGER
The leading dimension of the array B. LDB >= max(1,N).
X is REAL array, dimension (LDX,NRHS)
If INFO = 0, the N-by-NRHS solution matrix X to the original
system of equations. Note that A and B are modified on exit if
EQUED .ne. 'N', and the solution to the equilibrated system is inv(diag(S))*X.
LDX is INTEGER
The leading dimension of the array X. LDX >= max(1,N).
RCOND is REAL
Reciprocal scaled condition number. This is an estimate of the
reciprocal Skeel condition number of the matrix A after
equilibration (if done). If this is less than the machine
precision (in particular, if it is zero), the matrix is singular
to working precision. Note that the error may still be small even
if this number is very small and the matrix appears ill-conditioned.
RPVGRW is REAL
Reciprocal pivot growth. On exit, this contains the reciprocal
pivot growth factor norm(A)/norm(U). The "max absolute element"
norm is used. If this is much less than 1, then the stability of
the LU factorization of the (equilibrated) matrix A could be poor.
This also means that the solution X, estimated condition numbers,
and error bounds could be unreliable. If factorization fails with
0<INFO<=N, then this contains the reciprocal pivot growth factor
for the leading INFO columns of A.
BERR is REAL array, dimension (NRHS)
Componentwise relative backward error. This is the
componentwise relative backward error of each solution vector X(j)
(i.e., the smallest relative change in any element of A or B that
makes X(j) an exact solution).
N_ERR_BNDS is INTEGER
Number of error bounds to return for each right hand side
and each type (normwise or componentwise). See ERR_BNDS_NORM and
ERR_BNDS_COMP below.
ERR_BNDS_NORM is REAL array, dimension (NRHS, N_ERR_BNDS)
For each right-hand side, this array contains information about
various error bounds and condition numbers corresponding to the
normwise relative error, which is defined as follows:
Normwise relative error in the ith solution vector:
max_j (abs(XTRUE(j,i) - X(j,i))) / max_j abs(X(j,i))
The array is indexed by the type of error information as described
below. There currently are up to three pieces of information returned.
The first index in ERR_BNDS_NORM(i,:) corresponds to the ith
right-hand side.
The second index in ERR_BNDS_NORM(:,err) contains the following
three fields:
err = 1 "Trust/don't trust" boolean. Trust the answer if the
reciprocal condition number is less than the threshold
sqrt(n) * slamch('Epsilon').
err = 2 "Guaranteed" error bound: The estimated forward error,
almost certainly within a factor of 10 of the true error
so long as the next entry is greater than the threshold
sqrt(n) * slamch('Epsilon'). This error bound should only
be trusted if the previous boolean is true.
err = 3 Reciprocal condition number: Estimated normwise
reciprocal condition number. Compared with the threshold
sqrt(n) * slamch('Epsilon') to determine if the error
estimate is "guaranteed". These reciprocal condition
numbers are 1 / (norm(Z^{-1},inf) * norm(Z,inf)) for some
appropriately scaled matrix Z.
Let Z = S*A, where S scales each row by a power of the
radix so all absolute row sums of Z are approximately 1.
See Lapack Working Note 165 for further details and extra cautions.
ERR_BNDS_COMP is REAL array, dimension (NRHS, N_ERR_BNDS)
For each right-hand side, this array contains information about
various error bounds and condition numbers corresponding to the
componentwise relative error, which is defined as follows:
Componentwise relative error in the ith solution vector:
max_j abs(XTRUE(j,i) - X(j,i)) / abs(X(j,i))
The array is indexed by the right-hand side i (on which the
componentwise relative error depends), and the type of error
information as described below. There currently are up to three
pieces of information returned for each right-hand side. If
componentwise accuracy is not requested (PARAMS(3) = 0.0), then
ERR_BNDS_COMP is not accessed. If N_ERR_BNDS .LT. 3, then at most
the first (:,N_ERR_BNDS) entries are returned.
The first index in ERR_BNDS_COMP(i,:) corresponds to the ith
right-hand side.
The second index in ERR_BNDS_COMP(:,err) contains the following
three fields:
err = 1 "Trust/don't trust" boolean. Trust the answer if the
reciprocal condition number is less than the threshold
sqrt(n) * slamch('Epsilon').
err = 2 "Guaranteed" error bound: The estimated forward error,
almost certainly within a factor of 10 of the true error
so long as the next entry is greater than the threshold
sqrt(n) * slamch('Epsilon'). This error bound should only
be trusted if the previous boolean is true.
err = 3 Reciprocal condition number: Estimated componentwise
reciprocal condition number. Compared with the threshold
sqrt(n) * slamch('Epsilon') to determine if the error
estimate is "guaranteed". These reciprocal condition
numbers are 1 / (norm(Z^{-1},inf) * norm(Z,inf)) for some
appropriately scaled matrix Z.
Let Z = S*(A*diag(x)), where x is the solution for the
current right-hand side and S scales each row of
A*diag(x) by a power of the radix so all absolute row
sums of Z are approximately 1.
See Lapack Working Note 165 for further details and extra cautions.
NPARAMS is INTEGER
Specifies the number of parameters set in PARAMS. If .LE. 0, the
PARAMS array is never referenced and default values are used.
PARAMS is REAL array, dimension NPARAMS
Specifies algorithm parameters. If an entry is .LT. 0.0, then
that entry will be filled with default value used for that
parameter. Only positions up to NPARAMS are accessed; defaults
are used for higher-numbered parameters.
PARAMS(LA_LINRX_ITREF_I = 1) : Whether to perform iterative
refinement or not.
Default: 1.0
= 0.0 : No refinement is performed, and no error bounds are computed.
= 1.0 : Use the double-precision refinement algorithm,
possibly with doubled-single computations if the
compilation environment does not support DOUBLE PRECISION.
(other values are reserved for future use)
PARAMS(LA_LINRX_ITHRESH_I = 2) : Maximum number of residual
computations allowed for refinement.
Default: 10
Aggressive: Set to 100 to permit convergence using approximate
factorizations or factorizations other than LU. If
the factorization uses a technique other than
Gaussian elimination, the guarantees in
err_bnds_norm and err_bnds_comp may no longer be trusted.
PARAMS(LA_LINRX_CWISE_I = 3) : Flag determining if the code
will attempt to find a solution with small componentwise
relative error in the double-precision algorithm. Positive
is true, 0.0 is false.
Default: 1.0 (attempt componentwise convergence)
WORK is REAL array, dimension (4*N)
IWORK is INTEGER array, dimension (N)
INFO is INTEGER
= 0: Successful exit. The solution to every right-hand side is guaranteed.
< 0: If INFO = -i, the i-th argument had an illegal value
> 0 and <= N: U(INFO,INFO) is exactly zero. The factorization
has been completed, but the factor U is exactly singular, so
the solution and error bounds could not be computed. RCOND = 0
is returned.
= N+J: The solution corresponding to the Jth right-hand side is
not guaranteed. The solutions corresponding to other right-
hand sides K with K > J may not be guaranteed as well, but
only the first such right-hand side is reported. If a small
componentwise error is not requested (PARAMS(3) = 0.0) then
the Jth right-hand side is the first with a normwise error
bound that is not guaranteed (the smallest J such
that ERR_BNDS_NORM(J,1) = 0.0). By default (PARAMS(3) = 1.0)
the Jth right-hand side is the first with either a normwise or
componentwise error bound that is not guaranteed (the smallest
J such that either ERR_BNDS_NORM(J,1) = 0.0 or
ERR_BNDS_COMP(J,1) = 0.0). See the definition of
ERR_BNDS_NORM(:,1) and ERR_BNDS_COMP(:,1). To get information
about all of the right-hand sides check ERR_BNDS_NORM or ERR_BNDS_COMP.
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
April 2012
Definition at line 495 of file sposvxx.f.
Generated automatically by Doxygen for LAPACK from the source code. | {"url":"https://www.systutorials.com/docs/linux/man/docs/linux/man/3-SPOSVXX/","timestamp":"2024-11-09T07:09:26Z","content_type":"text/html","content_length":"25885","record_id":"<urn:uuid:0a80d961-dac8-4f48-ae82-775137fdb262>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00397.warc.gz"} |
Encryption Algorithms
Common Encryption Types, Protocols and Algorithms Explained - Comparitech
A range of encryption types underlie much of what we do when we are on the internet, including 3DES, AES, and RSA. These and others are used in many of our secure protocols, such as TLS/SSL, IPsec, SSH, and PGP. In this
article, we will discuss what encryption actually is, what it does, and some of the key concepts behind it.
807 research outputs found
Let $(u_n)_{n\ge 0}$ denote the Thue-Morse sequence with values $\pm 1$. The Woods-Robbins identity below and several of its generalisations are well-known in the literature \begin{equation*}\label{WR}\prod_{n=0}^\infty\left(\frac{2n+1}{2n+2}\right)^{u_n}=\frac{1}{\sqrt 2}.\end{equation*} No other such product involving a rational function in $n$ and the sequence $u_n$ seems to be known in
closed form. To understand these products in detail we study the function \begin{equation*}f(b,c)=\prod_{n=1}^\infty\left(\frac{n+b}{n+c}\right)^{u_n}.\end{equation*} We prove some analytical
properties of $f$. We also obtain some new identities similar to the Woods-Robbins product.Comment: Accepted in Proc. AMMCS 2017, updated according to the referees' comment
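The Woods-Robbins identity is easy to test numerically. A quick sketch (function names are mine), using the fact that $u_n = (-1)^{s_2(n)}$ where $s_2(n)$ counts the 1-bits of $n$:

```python
from math import sqrt

def u(n):
    """Thue-Morse sequence with values +/-1: u_n = (-1)**(popcount of n)."""
    return -1 if bin(n).count("1") % 2 else 1

def woods_robbins(N):
    """Partial product of ((2n+1)/(2n+2))**u_n for n = 0 .. N-1."""
    p = 1.0
    for n in range(N):
        p *= ((2 * n + 1) / (2 * n + 2)) ** u(n)
    return p

# The partial products approach 1/sqrt(2) = 0.7071...
approx = woods_robbins(1 << 16)
```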
We show that various aspects of k-automatic sequences -- such as having an unbordered factor of length n -- are both decidable and effectively enumerable. As a consequence it follows that many
related sequences are either k-automatic or k-regular. These include many sequences previously studied in the literature, such as the recurrence function, the appearance function, and the
repetitivity index. We also give some new characterizations of the class of k-regular sequences. Many results extend to other sequences defined in terms of Pisot numeration systems
We discuss the summation of certain series defined by counting blocks of digits in the $B$-ary expansion of an integer. For example, if $s_2(n)$ denotes the sum of the base-2 digits of $n$, we show
that $\sum_{n \geq 1} s_2(n)/(2n(2n+1)) = (\gamma + \log \frac{4}{\pi})/2$. We recover this previous result of Sondow in math.NT/0508042 and provide several generalizations.Comment: 12 pages,
Introduction expanded, references added, accepted by J. Number Theor
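The first identity quoted in the abstract above can likewise be checked by brute force. The sketch below (names are mine) hard-codes the Euler-Mascheroni constant and compares a long partial sum against the closed form:

```python
from math import log, pi

EULER_GAMMA = 0.5772156649015329   # Euler-Mascheroni constant, gamma

def s2(n):
    """Sum of the base-2 digits of n."""
    return bin(n).count("1")

closed_form = (EULER_GAMMA + log(4 / pi)) / 2                    # about 0.40939
partial = sum(s2(n) / (2 * n * (2 * n + 1)) for n in range(1, 200_000))
```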
For a map $S:X\to X$ and an open connected set ($=$ a hole) $H\subset X$ we define $\mathcal J_H(S)$ to be the set of points in $X$ whose $S$-orbit avoids $H$. We say that a hole $H_0$ is
supercritical if (i) for any hole $H$ such that $\bar{H_0}\subset H$ the set $\mathcal J_H(S)$ is either empty or contains only fixed points of $S$; (ii) for any hole $H$ such that $\bar{H}\subset H_0$
the Hausdorff dimension of $\mathcal J_H(S)$ is positive. The purpose of this note is to completely characterize all supercritical holes for the doubling map $Tx=2x\bmod1$.Comment: This is a new
version, where a full characterization of supercritical holes for the doubling map is obtained
We study Pisot numbers $\beta \in (1, 2)$ which are univoque, i.e., such that there exists only one representation of 1 as $1 = \sum_{n \geq 1} s_n\beta^{-n}$, with $s_n \in \{0, 1\}$. We prove in
particular that there exists a smallest univoque Pisot number, which has degree 14. Furthermore we give the smallest limit point of the set of univoque Pisot numbers.Comment: Accepted by Mathematics
of Computation
cmath — Mathematical functions for complex numbers
This module provides access to mathematical functions for complex numbers. The functions in this module accept integers, floating-point numbers or complex numbers as arguments. They will also accept
any Python object that has either a __complex__() or a __float__() method: these methods are used to convert the object to a complex or floating-point number, respectively, and the function is then
applied to the result of the conversion.
On platforms with hardware and system-level support for signed zeros, functions involving branch cuts are continuous on both sides of the branch cut: the sign of the zero distinguishes one side of
the branch cut from the other. On platforms that do not support signed zeros the continuity is as specified below.
Conversions to and from polar coordinates
A Python complex number z is stored internally using rectangular or Cartesian coordinates. It is completely determined by its real part z.real and its imaginary part z.imag. In other words:
Polar coordinates give an alternative way to represent a complex number. In polar coordinates, a complex number z is defined by the modulus r and the phase angle phi. The modulus r is the distance
from z to the origin, while the phase phi is the counterclockwise angle, measured in radians, from the positive x-axis to the line segment that joins the origin to z.
The following functions can be used to convert from the native rectangular coordinates to polar coordinates and back.
cmath.phase(x)
Return the phase of x (also known as the argument of x), as a float. phase(x) is equivalent to math.atan2(x.imag, x.real). The result lies in the range [-π, π], and the branch cut for this
operation lies along the negative real axis, continuous from above. On systems with support for signed zeros (which includes most systems in current use), this means that the sign of the result
is the same as the sign of x.imag, even when x.imag is zero:
>>> phase(complex(-1.0, 0.0))
>>> phase(complex(-1.0, -0.0))
The modulus (absolute value) of a complex number x can be computed using the built-in abs() function. There is no separate cmath module function for this operation.
cmath.polar(x)
Return the representation of x in polar coordinates. Returns a pair (r, phi) where r is the modulus of x and phi is the phase of x. polar(x) is equivalent to (abs(x), phase(x)).
cmath.rect(r, phi)
Return the complex number x with polar coordinates r and phi. Equivalent to r * (math.cos(phi) + math.sin(phi)*1j).
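A quick round trip between the two coordinate systems (a usage sketch, not part of the reference text):

```python
import cmath

z = complex(-1.0, 1.0)
r, phi = cmath.polar(z)          # modulus and phase: (sqrt(2), 3*pi/4)
assert r == abs(z)
assert phi == cmath.phase(z)

z_back = cmath.rect(r, phi)      # back to rectangular coordinates
# The round trip is only approximate because of floating-point rounding:
assert abs(z_back - z) < 1e-12
```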
Power and logarithmic functions
Classification functions
Note that the selection of functions is similar, but not identical, to that in module math. The reason for having two modules is that some users aren’t interested in complex numbers, and perhaps
don’t even know what they are. They would rather have math.sqrt(-1) raise an exception than return a complex number. Also note that the functions defined in cmath always return a complex number, even
if the answer can be expressed as a real number (in which case the complex number has an imaginary part of zero).
A note on branch cuts: They are curves along which the given function fails to be continuous. They are a necessary feature of many complex functions. It is assumed that if you need to compute with
complex functions, you will understand about branch cuts. Consult almost any (not too elementary) book on complex variables for enlightenment. For information of the proper choice of branch cuts for
numerical purposes, a good reference should be the following:
See also
Kahan, W: Branch cuts for complex elementary functions; or, Much ado about nothing’s sign bit. In Iserles, A., and Powell, M. (eds.), The state of the art in numerical analysis. Clarendon Press
(1987) pp165–211. | {"url":"https://django.fun/docs/python/3.10/library/cmath/","timestamp":"2024-11-13T05:00:27Z","content_type":"text/html","content_length":"46344","record_id":"<urn:uuid:277c8bb4-8e55-4959-98fd-eeb297906f2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00803.warc.gz"} |
Discussion on Bonetrousle Challenge
• python3: pure math solution
def bonetrousle(n, k, b):
    """Find b distinct integers in [1, ..., k] that sum to n."""
    # Define these convenient variables:
    N = n - b*(b+1)//2
    d = k - b
    # For a solution to exist, n must lie between the sum of the first b
    # box numbers and the sum of the last b box numbers.
    if (N < 0) or (N > b*d):
        return [-1]
    elif N == 0:  # also covers the edge case d == 0
        return [*range(1, b+1)]
    # Imagine starting with the first b boxes. While keeping the sum below n,
    # replace b -> k, (b-1) -> (k-1), ... . Each full shift adds d to the
    # total. Finally add the missing balance r to the next (b-q) term, i.e.
    # the largest value in [1, 2, ..., b] that did not get shifted.
    q, r = N // d, N % d
    # Take care of the edge case b - q + r == 0:
    mid = [] if (b == q - r) else [b - q + r]
    return [*range(1, b-q)] + mid + [*range(k, k-q, -1)] | {"url":"https://www.hackerrank.com/challenges/bonetrousle/forum/comments/1412993","timestamp":"2024-11-14T05:08:43Z","content_type":"text/html","content_length":"947785","record_id":"<urn:uuid:c1633b1f-7ae6-434c-b49e-f84d421a4d65>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00317.warc.gz"}
Search results: Catalogue data for Spring Semester 2022
Physics Master
Physics and Mathematics Electives
Selection: Solid State Physics
Number Title Type ECTS Hours Lecturers
Group Theory and its Applications
402-0516-10L W 12 credits 3V + 3U Not yet known
Does not take place this semester.
Abstract: This lecture introduces the use of group theory to solve problems of quantum mechanics, condensed matter physics and particle physics. Symmetry is at the root of quantum mechanics: this lecture is also a tutorial for students who would like to understand the practical side of the (often difficult) mathematical exposition of regular courses on quantum mechanics.
Learning objective: The aim of this lecture is to give practical knowledge on the application of symmetry in atomic, molecular, condensed matter and particle physics. The lecture is intended for students at the Master's and PhD level in Physics who would like to have a practical and comprehensive view of the role of symmetry in physics. Students in their third year of the Bachelor's programme will be perfectly able to follow the lecture and can use it for their future Master's curriculum. Students from other departments are welcome, as the lecture is designed to be (almost) self-contained. As symmetry is omnipresent in science and in particular quantum mechanics, this lecture is also a tutorial on quantum mechanics for students who would like to understand what is behind the often difficult mathematical exposition of regular courses on quantum mechanics.
Content:
1. Abstract group theory and representation theory of groups (fundamentals of groups, groups and geometry, point and space groups, representation theory of groups (H. Weyl, 1885-1955), reducible and irreducible representations, properties of irreducible representations, characters of a representation and theorems involving them, symmetry-adapted vectors)
2. Group theory and eigenvalue problems (general introduction and practical examples)
3. Representations of continuous groups (the circle group, the full rotation group, atomic physics, the translation group and the Schrödinger representation of quantum mechanics, crystal field splitting, the Peter-Weyl theorem, the Stone-von Neumann theorem, the Harish-Chandra character)
4. Space groups and their representations (elements of crystallography, irreducible representations of the space groups, non-symmorphic space groups)
5. Topological properties of groups and half-integer spins: tensor products, applications of tensor products, an introduction to the universal covering group, the universal covering group of SO(3), SU(2), how to deal with the spin of the electron, Clebsch-Gordan coefficients, double point groups, the Clebsch-Gordan coefficients for point groups, the Wigner-Eckart-Koster theorem and its applications
6. The application of symmetry to phase transitions (Landau)
7. Young tableaux: many-electron and particle physics (SU(3))
Lecture notes: A manuscript is made available.
Literature:
- B.L. van der Waerden, Group Theory and Quantum Mechanics, Springer Verlag. ("Old" but still modern.)
- L.D. Landau, E.M. Lifshitz, Lehrbuch der Theoretischen Physik, Band III, "Quantenmechanik", Akademie-Verlag Berlin, 1979, Chap. XII, and ibidem, Band V, "Statistische Physik", Teil 1, Akademie-Verlag 1987, Chaps. XIII and XIV. (Very concise and practical.)
- A. Fässler, E. Stiefel, Group Theoretical Methods and Their Applications, Birkhäuser. (A classical book on practical group theory, from a strong ETHZ school.)
- C. Isham, Lectures on Groups and Vector Spaces for Physicists, World Scientific. (More mathematical but very didactical.)
Ferromagnetism: From Thin Films to Spintronics
402-0536-00L W 6 credits 3G R. Allenspach
UZH students must book the module PHY434 directly at UZH.
Abstract: This course extends the introductory course "Introduction to Magnetism" to the latest, modern topics in research in magnetism and spintronics. After a short revisit of the basic magnetism concepts, emphasis is put on novel phenomena in (ultra)thin films and small magnetic structures, displaying effects not encountered in bulk magnetism.
Learning objective: Knowing the most important concepts and applications of ferromagnetism, in particular on the nanoscale (thin films, small structures). Being able to read and understand scientific articles at the forefront of research in this area. Learning how and why magnetic storage, sensors, memories and logic concepts function. Learning to condense and present the results of research articles so that colleagues understand them.
Content: Magnetization curves, magnetic domains, magnetic anisotropy; novel effects in ultrathin magnetic films and multilayers: interlayer exchange, spin transport; magnetization dynamics, spin precession. Applications: magnetic data storage, magnetic memories, spin-based electronics, also called spintronics.
Lecture notes: Lecture notes will be handed out (in English).
Prerequisites / Notice: This course can easily be followed without having attended the "Introduction to Magnetism" course. Language: English.
402-0318-00L Semiconductor Materials: Characterization, Processing and Devices W 6 credits 2V + 1U S. Schön, M. Shayegan
Abstract: This course gives an introduction to the fundamentals of semiconductor materials. The main focus in this semester is on state-of-the-art characterization, semiconductor processing and devices.
Learning objective: Basic knowledge of semiconductor physics and technology. Application of this knowledge to state-of-the-art semiconductor device processing.
Content:
1. Material characterization: structural and chemical methods
1.1 X-ray diffraction methods (powder diffraction, HRXRD, XRR, RSM)
1.2 Electron microscopy methods (SEM, EDX, TEM, STEM, EELS)
1.3 SIMS, RBS
2. Material characterization: electronic methods
2.1 Van der Pauw technique
2.2 Hall effect
2.3 Cyclotron resonance spectroscopy
2.4 Quantum Hall effect
3. Material characterization: optical methods
3.1 Absorption methods
3.2 Photoluminescence methods
3.3 FTIR, Raman spectroscopy
4. Semiconductor processing: lithography
4.1 Optical lithography methods
4.2 Electron beam lithography
4.3 FIB lithography
4.4 Scanning probe lithography
4.5 Direct growth methods (CEO, nanowires)
5. Semiconductor processing: structuring of layers and devices
5.1 Wet etching methods
5.2 Dry etching methods (RIE, ICP, ion milling)
5.3 Physical vapor deposition methods (thermal, e-beam, sputtering)
5.4 Chemical vapor deposition methods (PECVD, LPCVD, ALD)
5.5 Cleanroom basics & tour
6. Semiconductor devices
6.1 Semiconductor lasers
6.2 LEDs & detectors
6.3 Solar cells
6.4 Transistors (FET, HBT, HEMT)
Lecture notes: https://moodle-app2.let.ethz.ch/course/view.php?id=16802
Prerequisites / Notice: The "compulsory performance element" of this lecture is a short presentation of a research paper complementing the lecture topics. Several topics and corresponding papers will be offered on the Moodle page of this lecture.
402-0596-00L Electronic Transport in Nanostructures W 6 credits 2V + 1U T. M. Ihn
Abstract: The lecture discusses modern topics in quantum transport through nanostructures, including the underlying materials. Topics are: quantum transport effects, transport in graphene and other 2D layered materials, quantum dot qubits for quantum information processing, and decoherence of quantum states.
Learning objective: Students are able to understand modern experiments in the field of electronic transport in nanostructures. They can critically reflect on published research in this field and explain it to an audience of physicists. Students know and understand the fundamental phenomena of electron transport in the quantum regime and their significance. They are able to apply their knowledge to practical experiments in a modern research lab.
Lecture notes: The lecture is based on the book: T. Ihn, Semiconductor Nanostructures: Quantum States and Electronic Transport, ISBN 978-0-19-953442-5, Oxford University Press, 2010.
Prerequisites / Notice: A solid basis in quantum mechanics, electrostatics, quantum statistics and solid state physics is required. Having passed the lecture Semiconductor Nanostructures (fall semester) may be advantageous, but is not required. Students of the Master in Micro- and Nanosystems should at least have attended the lecture by David Norris, Introduction to Quantum Mechanics for Engineers. They should also have passed the exam of the lecture Semiconductor Nanostructures.
402-0564-00L W 6 credits 2V + 1U L. Degiorgi
Does not take place this semester.
Abstract: The interaction of light with condensed matter is the basic idea and principal foundation of several experimental spectroscopic methods. This lecture is devoted to the presentation of those experimental methods and techniques which allow the study of the electrodynamic response of solids. I will also discuss recent experimental results on materials of high interest in on-going solid-state research.
Learning objective: The lecture will give a basic introduction to optical spectroscopic methods in solid state physics.
Content:
Chapter 1: Maxwell equations and interaction of light with the medium
Chapter 2: Experimental methods: a survey
Chapter 3: Kramers-Kronig relations; optical functions
Chapter 4: Drude-Lorentz phenomenological method
Chapter 5: Electronic interband transitions and band structure effects
Chapter 6: Selected examples: strongly correlated systems and superconductors
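For reference, the Kramers-Kronig relation mentioned in Chapter 3 links the dispersive and absorptive parts of the optical response function (standard textbook form, added here for context):

```latex
\operatorname{Re}\,\chi(\omega)=\frac{2}{\pi}\,\mathcal{P}\!\int_{0}^{\infty}\frac{\omega'\,\operatorname{Im}\,\chi(\omega')}{\omega'^{2}-\omega^{2}}\,\mathrm{d}\omega'
```

Here \(\mathcal{P}\) denotes the Cauchy principal value; causality of the response alone guarantees this relation.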
Lecture notes: A manuscript (in English) is provided.
Literature: F. Wooten, Optical Properties of Solids (Academic Press, New York, 1972), and M. Dressel and G. Gruener, Electrodynamics of Solids (Cambridge University Press, 2002).
Prerequisites / Notice: Exercises will be proposed every week for one hour. There will also be the possibility to prepare a short presentation based on recent scientific literature (more at the beginning of the lecture).
Competencies:
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Decision-making (fostered); Media and Digital Technologies (fostered); Problem-solving (assessed); Project Management (fostered)
Social Competencies: Communication (assessed); Cooperation and Teamwork (assessed); Customer Orientation (fostered); Leadership and Responsibility (fostered); Self-presentation and Social Influence (fostered); Sensitivity to Diversity (fostered); Negotiation (fostered)
Personal Competencies: Adaptability and Flexibility (fostered); Creative Thinking (assessed); Critical Thinking (assessed); Integrity and Work Ethic (fostered); Self-awareness and Self-reflection (assessed); Self-direction and Self-management (fostered)
402-0528-12L Ultrafast Methods in Solid State Physics W 6 credits 2V + 1U S. Johnson, M. Savoini
Abstract: In condensed matter physics, "ultrafast" refers to dynamics on the picosecond and femtosecond time scales, the time scales on which atoms vibrate and electronic spins flip. Measuring real-time dynamics on these time scales is key to understanding materials in nonequilibrium states. This course offers an overview and understanding of the methods used to accomplish this in modern research laboratories.
Learning objective: The goal of the course is to enable students to identify and evaluate experimental methods to manipulate and measure the electronic, magnetic and structural properties of solids on the fastest possible time scales. This offers new fundamental insights into the couplings that bind solid-state systems together. It also opens the door to new technological applications in data storage and processing involving metastable states that can be reached only by driving systems far from equilibrium. This course offers an overview of ultrafast methods as applied to condensed matter physics. Students will learn which methods are appropriate for studying relevant scientific questions, and will be able to describe their relative advantages and limitations.
Content: The topical course outline is as follows:
Chapter 1: Introduction
- Important time scales for dynamics in solids and their applications
- Time-domain versus frequency-domain experiments
- The pump-probe technique: general advantages and limits
Chapter 2: Overview of ultrafast processes in solids
- Carrier dynamics in response to ultrafast laser interactions
- Dynamics of the lattice: coherent vs. incoherent phonons
- Ultrafast magnetic phenomena
Chapter 3: Ultrafast optical-frequency methods
- Ultrafast laser sources (oscillators and amplifiers)
- Generating broadband pulses
- Second and third order harmonic generation
- Optical parametric amplification
- Fluorescence spectroscopy
- Advanced optical pump-probe techniques
Chapter 4: THz- and mid-infrared frequency methods
- Low frequency interactions with solids
- Difference frequency mixing
- Optical rectification
- Time-domain spectroscopy
Chapter 5: VUV and x-ray frequency methods
- Synchrotron based sources
- Free electron lasers
- High-harmonic generation
- X-ray diffraction
- Time-resolved x-ray microscopy & coherent imaging
- Time-resolved core-level spectroscopies
Chapter 6: Time-resolved electron methods
- Ultrafast electron diffraction
- Time-resolved electron microscopy
Lecture notes: Will be distributed via Moodle.
Literature: Will be distributed via Moodle.
Prerequisites / Notice: Although the course "Ultrafast Processes in Solids" (402-0526-00L) is useful as a companion to this course, it is not a prerequisite.
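As a toy model of the pump-probe technique from Chapter 1 (my sketch, not course material): the measured transient is the intrinsic response, here a simple exponential decay switched on at zero delay, convolved with a finite Gaussian instrument response.

```python
import numpy as np

def pump_probe_signal(t_ps, tau_ps, irf_fwhm_ps):
    """Toy pump-probe transient: exponential decay (step at t=0, lifetime tau)
    convolved with a Gaussian instrument response of the given FWHM."""
    dt = t_ps[1] - t_ps[0]
    intrinsic = np.where(t_ps >= 0, np.exp(-t_ps / tau_ps), 0.0)
    sigma = irf_fwhm_ps / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    kernel_t = np.arange(-5.0 * sigma, 5.0 * sigma + dt, dt)
    kernel = np.exp(-kernel_t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()            # normalize so the amplitude is preserved
    return np.convolve(intrinsic, kernel, mode="same")

t = np.arange(-2.0, 10.0, 0.01)                        # delay axis in ps
signal = pump_probe_signal(t, tau_ps=1.5, irf_fwhm_ps=0.2)
```

Fitting such a convolved model, rather than a bare exponential, is what lets the lifetime be extracted even when it is comparable to the pulse duration.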
402-0532-00L Quantum Solid State Magnetism W 6 credits 2V + 1U K. Povarov
Abstract: This course is based on the principal modern tools used to study collective magnetic phenomena in the solid state, namely correlation and response functions. It is quite quantitative, but doesn't contain any "fancy" mathematics. Instead, the theoretical aspects are balanced by numerous experimental examples and case studies. It is aimed at theorists and experimentalists alike.
Learning objective: Learn the modern theoretical foundations and "language", as well as the principles and capabilities of the latest experimental techniques, used to describe and study collective magnetic phenomena in the solid state.
Content:
- Magnetic response and correlation functions. Analytic properties. Fluctuation-dissipation theorem. Experimental methods to measure static and dynamic correlations.
- Magnetic response and correlations in metals. Diamagnetism and paramagnetism. Magnetic ground states: ferromagnetism, spin density waves. Excitations in metals, spin waves. Experimental examples.
- Magnetic response and correlations of magnetic ions in crystals: quantum numbers and effective Hamiltonians. Application of group theory to classifying ionic states. Experimental case studies.
- Magnetic response and correlations in magnetic insulators. Effective Hamiltonians. Magnetic order and propagation vector formalism. The use of group theory to classify magnetic structures. Determination of magnetic structures from diffraction data. Excitations: spin wave theory and beyond. "Triplons". Measuring spin wave spectra.
Lecture notes: A comprehensive textbook-like script is provided.
Literature: In principle, the script is sufficient as study material. Additional reading:
- "Magnetism in Condensed Matter" by S. Blundell
- "Quantum Theory of Magnetism: Magnetic Properties of Materials" by R. M. White
- "Lecture Notes on Electron Correlations and Magnetism" by P. Fazekas
Prerequisites / Notice: Not a prerequisite, but good companion courses:
402-0861-00L Statistical Physics
402-0501-00L Solid State Physics
402-0871-00L Solid State Theory
402-0257-00L Advanced Solid State Physics
402-0535-00L Introduction to Magnetism
Introducing Photons, Neutrons and Muons for Materials Characterisation
327-2130-00L W 2 credits 3G A. Hrabec
Only for MSc Materials Science and MSc Physics.
Abstract: The course takes place at the campus of the Paul Scherrer Institute. The program consists of introductory lectures on the use of photons, neutrons and muons for materials characterization, as well as tours of the large scale facilities of PSI.
Learning objective: The aim of the course is that the students acquire a basic understanding of the interaction of photons, neutrons and muons with matter and of how one can use these as tools to solve specific problems.
Content: The course runs for one week in June (20th to 24th), 2022. It takes place at the campus of the Paul Scherrer Institute. The mornings consist of introductory lectures on the use of photons, neutrons and muons for materials characterization. In the afternoons, tours of the large scale facilities of PSI (Swiss Light Source, Swiss Spallation Neutron Source, Swiss Muon Source, Swiss Free Electron Laser) are foreseen, as well as in-depth visits to some of the instruments. At the end of the week, the students are required to give an oral presentation about a scientific topic involving the techniques discussed. Time for preparing the presentations will be allocated in the afternoon.
• Interaction of photons, neutrons and muons with matter
• Production of photons, neutrons and muons
• Experimental setups: optics and detectors
• Crystal symmetry, Bragg’s law, reciprocal lattice, structure factors
• Elastic and inelastic scattering with neutrons and photons
• X-ray absorption spectroscopy, x-ray magnetic circular dichroism
• Polarized neutron scattering for the study of magnetic materials
• Imaging techniques using x-rays and neutrons
• Introduction to muon spin rotation
• Applications of muon spin rotation
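A back-of-the-envelope helper for the Bragg's law bullet above (my sketch; the specific numbers are illustrative, not course material):

```python
import math

def bragg_angle_deg(wavelength, d_spacing, order=1):
    """Bragg's law n*lambda = 2*d*sin(theta); returns theta in degrees.
    wavelength and d_spacing must be given in the same units."""
    s = order * wavelength / (2.0 * d_spacing)
    if not 0.0 < s <= 1.0:
        raise ValueError("no diffraction peak: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Cu K-alpha radiation (0.154 nm) on Si(111) planes (d = 0.3136 nm):
theta = bragg_angle_deg(0.154, 0.3136)   # about 14.2 degrees
```

The guard clause reflects the physical cutoff: planes whose spacing satisfies 2d < nλ cannot diffract at any angle.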
Lecture notes: Slides from the lectures will be available on the internet prior to the lectures.
Literature:
- Philip Willmott: An Introduction to Synchrotron Radiation: Techniques and Applications, Wiley, 2011
- J. Als-Nielsen and D. McMorrow: Elements of Modern X-Ray Physics, Wiley, 2011
- G.L. Squires: Introduction to the Theory of Thermal Neutron Scattering, Dover Publications, 1997
- Alain Yaouanc and Pierre Dalmas de Réotier: Muon Spin Rotation, Relaxation, and Resonance: Applications to Condensed Matter, Oxford University Press, ISBN: 9780199596478
- A. Amato: "Physics with Muons: from Atomic Physics to Condensed Matter Physics"
Prerequisites / Notice: This is a block course for students who have attended courses on condensed matter or materials physics. Registration at the PSI website (http://indico.psi.ch/event/PSImasterschool) required by March 20th, 2022.
402-0533-00L Quantum Acoustics and Optomechanics W 6 credits 2V + 1U Y. Chu
Abstract: This course gives an introduction to the interaction of mechanical motion with electromagnetic fields in the quantum regime. There are parallels between the quantum descriptions of mechanical resonators, electrical circuits, and light, but each system also has its own unique properties. We will explore how interfacing them can be useful for technological applications and fundamental science.
Learning objective: The course aims to prepare students for performing theoretical and/or experimental research in the fields of quantum acoustics and optomechanics. For example, after this course, students should be able to:
- understand and explain current research literature in quantum acoustics and optomechanics
- predict and simulate the behavior of mechanical quantum systems using tools such as the QuTiP package in Python
- apply concepts discussed in the class toward designing devices and experiments
Content: The focus of this course will be on the properties of and interactions between mechanical and electromagnetic systems in the context of quantum information and technologies. We will only briefly touch upon precision measurement and sensing with optomechanics, since it is the topic of another course (227-0653-00L). Some topics that will be covered are:
- Mechanical motion and acoustics in solid state materials
- Quantum description of motion, electrical circuits, and light
- Different models for quantum interactions: optomechanical, Jaynes-Cummings, etc.
- Mechanisms for mechanical coupling to electromagnetic fields: piezoelectricity, electrostriction, radiation pressure, etc.
- Coherent interactions vs. dissipative processes: phenomena and applications in different regimes
- State-of-the-art electromechanical and optomechanical systems
Lecture notes: Notes will be provided for each lecture.
Literature: Parts of books and research papers will be used.
Prerequisites / Notice: Basic knowledge of quantum mechanics is required.
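The Jaynes-Cummings model listed in the course content can be sketched numerically even without QuTiP, using plain NumPy (my illustration; the parameter values are arbitrary). On resonance, the first excited doublet exhibits the vacuum Rabi splitting of 2g:

```python
import numpy as np

def jaynes_cummings_h(n_max, w_cavity, w_atom, g):
    """Jaynes-Cummings Hamiltonian (hbar = 1) on a Fock space truncated at
    n_max photons, tensored with a two-level atom; basis is |photon> x |g/e>."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)      # photon annihilation
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])             # atomic lowering |g><e|
    i_ph, i_at = np.eye(n_max), np.eye(2)
    return (w_cavity * np.kron(a.T @ a, i_at)
            + w_atom * np.kron(i_ph, sm.T @ sm)         # |e><e| projector
            + g * (np.kron(a, sm.T) + np.kron(a.T, sm)))  # a sig+ + a^dag sig-

# On resonance the single-excitation doublet sits at w_cavity +/- g:
evals = np.sort(np.linalg.eigvalsh(jaynes_cummings_h(6, 1.0, 1.0, 0.1)))
splitting = evals[2] - evals[1]   # vacuum Rabi splitting, equal to 2g
```

The same construction generalizes directly to the optomechanical Hamiltonian by swapping the interaction term; QuTiP packages these operators (`destroy`, `tensor`) for convenience.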
Competencies:
Subject-specific Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed)
Method-specific Competencies: Analytical Competencies (assessed); Decision-making (fostered); Media and Digital Technologies (assessed); Problem-solving (assessed); Project Management (assessed)
Social Competencies: Communication (assessed); Cooperation and Teamwork (assessed); Customer Orientation (fostered); Leadership and Responsibility (fostered); Self-presentation and Social Influence (fostered); Sensitivity to Diversity (fostered); Negotiation (fostered)
Personal Competencies: Adaptability and Flexibility (assessed); Creative Thinking (assessed); Critical Thinking (assessed); Integrity and Work Ethic (fostered); Self-awareness and Self-reflection (assessed); Self-direction and Self-management (assessed)
Quantum Solid State Magnetism II
402-0532-50L W 6 credits 2V + 1U
Does not take place this semester.
Abstract: This course covers modern developments and problems in the field of solid state magnetism. It has a special emphasis on phenomena that go beyond the semiclassical approximation, such as quantum paramagnets, spin liquids and magnetic frustration. The course is aimed at both experimentalists and theorists, and the theoretical concepts are balanced by experimental data.
Learning objective: Learn the modern approach to the complex magnetic phases of matter and the transitions between them. A number of theoretical approaches that go beyond linear spin wave theory will be discussed during the course, and an overview of the experimental status quo will be given.
Content:
- Phase transitions in magnetic matter. Classical and quantum criticality. Consequences of broken symmetries for the spectral properties. Absence of order in low-dimensional systems. Berezinskii-Kosterlitz-Thouless transition and its relevance to "layered" magnets.
- Failures of linear spin wave theory. Spin wave decays. Antiferromagnets as bosonic systems. Gapped "quantum paramagnets" and their phase diagrams. Extended spin wave theory. Magnetic "Bose-Einstein condensation".
- Spin systems in one dimension: XY, Ising and Heisenberg models. Lieb-Schultz-Mattis theorem. Tomonaga-Luttinger liquid description of the XXZ spin chains. Spin ladders and Haldane chains. Critical points in one dimension and the generalized phase diagram.
- Effects of disorder in magnets. Harris criterion. "Spin islands" in depleted gapped magnets.
- Introduction to magnetic frustration. Order-from-disorder phenomena and the triangular lattice in a magnetic field. Frustrated chain and frustrated square lattice models. Exotic magnetic states in two dimensions.
Lecture notes: A comprehensive textbook-like script is provided.
Literature: In principle, the script is sufficient as study material. Additional reading:
- "Interacting Electrons and Quantum Magnetism" by A. Auerbach
- "Basic Aspects of the Quantum Theory of Solids" by D. Khomskii
- "Quantum Physics in One Dimension" by T. Giamarchi
- "Quantum Theory of Magnetism: Magnetic Properties of Materials" by R. M. White
- "Frustrated Spin Systems" ed. H. T. Diep
Prerequisites / Notice: Not a prerequisite, but good companion courses:
402-0861-00L Statistical Physics
402-0501-00L Solid State Physics
402-0871-00L Solid State Theory
402-0257-00L Advanced Solid State Physics
402-0535-00L Introduction to Magnetism
402-0532-00L Quantum Solid State Magnetism I
Selection: Quantum Electronics
Number Title Type ECTS Hours Lecturers
Nanomaterials for Photonics
402-0468-15L W 6 credits 2V + 1U R. Grange
Does not take place this semester.
Abstract: The lecture describes various nanomaterials (semiconductor, metal, dielectric, carbon-based...) for photonic applications (optoelectronics, plasmonics, ordered and disordered structures...). It starts with concepts of light-matter interactions, then covers the fabrication methods, the optical characterization techniques, the description of the properties and the state-of-the-art applications.
Learning objective: The students will acquire theoretical and experimental knowledge about the different types of nanomaterials (semiconductors, metals, dielectrics, carbon-based, ...) and their uses as building blocks for advanced applications in photonics (optoelectronics, plasmonics, photonic crystals, ...). Together with the exercises, the students will learn (1) to read, summarize and discuss scientific articles related to the lecture, (2) to estimate orders of magnitude with calculations using the theory seen during the lecture, (3) to prepare a short oral presentation and report about one topic related to the lecture, and (4) to imagine an original photonic device.
Content:
1. Introduction to nanomaterials for photonics
a. Classification of nanomaterials
b. Light-matter interaction at the nanoscale
c. Examples of nanophotonic devices
2. Wave physics for nanophotonics
a. Wavelength, wave equation, wave propagation
b. Dispersion relation
c. Interference
d. Scattering and absorption
e. Coherent and incoherent light
3. Analogies between photons and electrons
a. Quantum wave description
b. How to confine photons and electrons
c. Tunneling effects
4. Characterization of Nanomaterials
a. Optical microscopy: Bright and dark field, fluorescence, confocal, High resolution: PALM (STORM), STED
b. Light scattering techniques: DLS
c. Near field microscopy: SNOM
d. Electron microscopy: SEM, TEM
e. Scanning probe microscopy: STM, AFM
f. X-ray diffraction: XRD, EDS
5. Fabrication of nanomaterials
a. Top-down approach
b. Bottom-up approach
6. Plasmonics
a. What is a plasmon; the Drude model
b. Surface plasmon and localized surface plasmon (sphere, rod, shell)
c. Theoretical models to calculate the radiated field: electrostatic approximation and Mie scattering
d. Fabrication of plasmonic structures: Chemical synthesis, Nanofabrication
e. Applications
7. Organic and inorganic nanomaterials
a. Organic quantum-confined structure: nanomers and quantum dots.
b. Carbon nanotubes: properties, bandgap description, fabrication
c. Graphene: motivation, fabrication, devices
d. Nanomarkers for biophotonics
8. Semiconductors
a. Crystalline structure, wave function
b. Quantum well: energy levels equation, confinement
c. Quantum wires, quantum dots
d. Optical properties related to quantum confinement
e. Example of effects: absorption, photoluminescence
f. Solid-state-lasers: edge emitting, surface emitting, quantum cascade
9. Photonic crystals
a. Analogy photonic and electronic crystal, in nature
b. 1D, 2D, 3D photonic crystal
c. Theoretical modelling: frequency and time domain technique
d. Features: band gap, local enhancement, superprism...
10. Nanocomposites
a. Effective medium regime
b. Metamaterials
c. Multiple scattering regime
d. Complex media: structural colour, random lasers, nonlinear disorder
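The Drude model in section 6a can be summarized in one formula (the sign change of Re ε at the plasma frequency below is the standard textbook result, added here for context, not course material):

```python
import numpy as np

def drude_epsilon(omega, omega_p, gamma=0.0):
    """Drude dielectric function: eps(w) = 1 - wp^2 / (w^2 + i*gamma*w)."""
    omega = np.asarray(omega, dtype=complex)
    return 1.0 - omega_p**2 / (omega**2 + 1j * gamma * omega)

# Below the plasma frequency Re(eps) < 0 and the metal is reflective;
# above it the metal becomes transparent (frequencies in units of omega_p):
eps_below = drude_epsilon(0.5, omega_p=1.0)   # Re(eps) = -3.0
eps_above = drude_epsilon(2.0, omega_p=1.0)   # Re(eps) = +0.75
```

A negative Re ε is also the ingredient that makes surface plasmons possible at a metal-dielectric interface, which is why this model opens the plasmonics chapter.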
Lecture notes: Slides and book chapters will be available for download.
Literature: References will be given during the lecture.
Prerequisites / Notice: Basics of solid-state physics (i.e. energy bands) can help.
Optical Frequency Combs: Physics and Applications
402-0470-17L W 6 credits 2V + 1U G. Scalari
Does not take place this semester.
Abstract: In this lecture, the goal is to review the physics behind mode-locking in the various comb-generating devices, as well as to discuss the most important novelties and applications of the newly developed sources.
Learning objective: Review the physics behind mode-locking in the various comb-generating devices and discuss the most important novelties and applications of the newly developed sources.
Content: Since their invention, optical frequency combs have proven to be a key technological tool with applications in a variety of fields ranging from astronomy and metrology to spectroscopy and telecommunications. Concomitant with this expansion of the application domains, the range of technologies used to generate optical frequency combs has recently widened to include, beyond solid-state and fiber mode-locked lasers, optical parametric oscillators, microresonators and quantum cascade lasers.
Chapter 1: Fundamentals of optical frequency comb generation
- Physics of mode-locking: time-domain picture
- Propagation and stability of a pulse, soliton formation
- Dispersion compensation
- Solid-state and fiber mode-locked lasers
Chapter 2: Direct generation
- Microresonator combs: Lugiato-Lefever equation, solitons
- Quantum cascade lasers: frequency-domain picture of mode-locking
- Mid-infrared and terahertz QCL combs
Chapter 3: Non-linear optics
- DFG, OPOs
Chapter 4: Comb diagnostics and noise
- Jitter, linewidth
Chapter 5: Self-referenced combs and their applications
Chapter 6: Dual combs and their applications to spectroscopy
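All of the sources above obey the same two-parameter comb equation (standard relation, added here for context): every mode frequency is fixed by just the repetition rate and the carrier-envelope offset.

```python
def comb_line(n, f_rep, f_ceo):
    """Frequency of the n-th comb mode: f_n = f_ceo + n * f_rep (all in Hz)."""
    return f_ceo + n * f_rep

# A 100 MHz oscillator with a 20 MHz carrier-envelope offset:
# mode n = 1_930_000 lands near 193 THz, in the telecom C-band.
f_n = comb_line(1_930_000, 100e6, 20e6)
```

Measuring f_rep and f_ceo with RF electronics therefore pins down every optical line at once, which is the essence of self-referencing in Chapter 5.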
402-0498-00L Trapped-Ion Physics W 6 credits 2V + 1U D. Kienzler
Abstract: This course covers the physics of trapped ions at the quantum level, described as harmonic oscillators coupled to spin systems, for which the 2012 Nobel prize was awarded. Trapped-ion systems have achieved an extraordinary level of control and provide leading technologies for quantum information processing and quantum metrology.
Learning objective: The objective is to provide a basis for understanding the wide range of research currently being performed with trapped-ion systems: fundamental quantum mechanics with spin-spring systems, quantum information processing and quantum metrology. During the course students can expect to gain an understanding of the current frontier of research in these areas, and of the challenges which must be overcome to make further advances. This should provide a solid background for tackling recently published research in these fields, including experimental realisations of quantum information processing using trapped ions.
Content: This course will cover trapped-ion physics. It aims to cover both theoretical and experimental aspects. In all experimental settings the role of decoherence and the quantum-classical transition is of great importance, and this will therefore form one of the key components of the course. The topics of the course were cited in the Nobel prize which was awarded to David Wineland in 2012.
Topics which will be covered include:
- Fundamental working principles of ion traps and modern trap geometries, quantum description of the motion of trapped ions
- Electronic structure of atomic ions, manipulation of the electronic state, Rabi and Ramsey techniques, principle of an atomic clock
- Quantum description of the coupling of electronic and motional degrees of freedom
- Laser cooling
- Quantum state engineering of coherent, squeezed, cat, grid and entangled states
- Trapped-ion quantum information processing basics and scaling, current challenges
- Quantum metrology with trapped ions: quantum logic spectroscopy, optical clocks, search for physics beyond the standard model using high-precision spectroscopy
Literature:
- S. Haroche and J-M. Raimond, "Exploring the Quantum" (recommended)
- M. Scully and M.S. Zubairy, "Quantum Optics" (recommended)
Prerequisites / Notice: Preceding attendance of the scheduled lecture Quantum Optics (402-0442-00L) or a comparable course is required.
402-0558-00L Crystal Optics in Intense Light Fields W 6 KP 2V + 1U M. Fiebig
Because of their aesthetic nature crystals are termed "flowers of mineral kingdom". The aesthetic aspect is closely related to the symmetry of the crystals which in turn determines
Kurzbeschreibung their optical properties. It is the purpose of this course to stimulate the understanding of these relations with a particular focus on those phenomena occurring in intense light
fields as they are provided by lasers.
Lernziel In this course students will at first acquire a systematic knowledge of classical crystal-optical phenomena and the experimental and theoretical tools to describe them. This will be the basis for the core part of the lecture, in which they will learn how to characterize ferroelectric, (anti)ferromagnetic and other forms of ferroic order and their interaction by nonlinear optical techniques. See also http://www.ferroic.mat.ethz.ch/research/index.
Inhalt Crystal classes and their symmetry; basic group theory; optical properties in the absence and presence of external forces; focus on magnetooptical phenomena; density-matrix formalism of light-matter interaction; microscopy of linear and nonlinear optical susceptibilities; second harmonic generation (SHG); characterization of ferroic order by SHG; outlook towards other nonlinear optical effects: devices, ultrafast processes, etc.
Skript Extensive material will be provided throughout the lecture.
Literatur (1) R. R. Birss, Symmetry and Magnetism, North-Holland (1966)
(2) R. E. Newnham, Properties of Materials: Anisotropy, Symmetry, Structure, Oxford University Press (2005)
(3) A. K. Zvezdin, V. A. Kotov, Modern Magnetooptics and Magnetooptical Materials, Taylor & Francis (1997)
(4) Y. R. Shen, The Principles of Nonlinear Optics, Wiley (2002)
(5) K. H. Bennemann, Nonlinear Optics in Metals, Oxford University Press (1999)
Voraussetzungen / Besonderes Basic knowledge in solid state physics and quantum (perturbation) theory will be very useful. The lecture is addressed to students in physics and to students in materials science with an affinity to physics.
402-0466-15L Quantum Optics with Photonic Crystals, Plasmonics and Metamaterials W 6 KP 2V + 1U G. Scalari
Kurzbeschreibung In this lecture, we would like to review new developments in the emerging topic of quantum optics in very strongly confined structures, with an emphasis on sources and photon
statistics as well as the coupling between optical and mechanical degrees of freedom.
Integration and miniaturisation have strongly characterised fundamental research and industrial applications in the last decades, both for photonics and electronics.
Lernziel The objective of this lecture is to provide insight into the most recent solid-state implementations of strong light-matter interaction, from micro- and nanocavities to nanolasers and quantum optics. The content of the lecture focuses on the achievement of extremely subwavelength radiation confinement in electronic and optical resonators. Such resonant structures are then functionalized by integrating active elements to achieve devices with extremely reduced dimensions and exceptional performance. Plasmonic lasers and Purcell emitters are discussed, as well as ultrastrong light-matter coupling and optomechanical systems.
Inhalt 1. Light confinement
1.1. Photonic crystals
1.1.1. Band structure
1.1.2. Slow light and cavities
1.2. Plasmonics
1.2.1. Light confinement in metallic structures
1.2.2. Metal optics and waveguides
1.2.3. Graphene plasmonics
1.3. Metamaterials
1.3.1. Electric and magnetic response at optical frequencies
1.3.2. Negative index, cloaking, left-handedness
2. Light coupling in cavities
2.1. Strong coupling
2.1.1. Polariton formation
2.1.2. Strong and ultra-strong coupling
2.2. Strong coupling in microcavities
2.2.1. Planar cavities, polariton condensation
2.3. Polariton dots
2.3.1. Microcavities
2.3.2. Photonic crystals
2.3.3. Metamaterial-based
3. Photon generation and statistics
3.1. Purcell emitters
3.1.1. Single photon sources
3.1.2. THz emitters
3.2. Microlasers
3.2.1. Plasmonic lasers: where is the limit?
3.2.2. g(1) and g(2) of microlasers
3.3. Optomechanics
3.3.1. Micro ring cavities
3.3.2. Photonic crystals
3.3.3. Superconducting resonators
402-0484-00L Experimental and Theoretical Aspects of Quantum Gases W 6 KP 2V + 1U T. U. Donner, T. Esslinger
Kurzbeschreibung Quantum Gases are the most precisely controlled many-body systems in physics. This provides a unique interface between theory and experiment, which allows addressing fundamental
concepts and long-standing questions. This course lays the foundation for the understanding of current research in this vibrant field.
Lernziel The lecture conveys a basic understanding of the current research on quantum gases. Emphasis will be put on the connection between theory and experimental observation. It will
enable students to read and understand publications in this field.
Inhalt Cooling and trapping of neutral atoms
Bose and Fermi gases
Ultracold collisions
The Bose-condensed state
Elementary excitations
Interference and Correlations
Optical lattices
Skript Notes and material accompanying the lecture will be provided.
Literatur C. J. Pethick and H. Smith, Bose-Einstein Condensation in Dilute Gases;
Proceedings of the Enrico Fermi International School of Physics, Vol. CXL, ed. M. Inguscio, S. Stringari, and C. E. Wieman (IOS Press, Amsterdam, 1999)
402-0444-00L Dissipative Quantum Systems W 6 KP 2V + 1U A. Imamoglu
Kurzbeschreibung This course builds on the material covered in the Quantum Optics course. The emphasis will be on the analysis of dissipative quantum systems and quantum optics in condensed-matter systems.
Lernziel The course aims to provide the knowledge necessary for pursuing advanced research in the field of Quantum Optics in condensed matter systems. Fundamental concepts and techniques of
Quantum Optics will be linked to experimental research in interacting photonic systems.
Inhalt Description of open quantum systems using master equation and quantum trajectories. Decoherence and quantum measurements. Dicke superradiance. Dissipative phase transitions.
Signatures of electron-exciton and electron-electron interactions in optical response.
Skript Lecture notes will be provided
C. Cohen-Tannoudji et al., Atom-Photon-Interactions (recommended)
Literatur Y. Yamamoto and A. Imamoglu, Mesoscopic Quantum Optics (recommended)
A collection of review articles (will be pointed out during the lecture)
Voraussetzungen / Besonderes Master's-level quantum optics knowledge
Fachspezifische Kompetenzen (subject-specific competencies): Concepts and theories (assessed); Techniques and technologies (fostered)
Methodenspezifische Kompetenzen (method-specific competencies): Analytical competencies (assessed); Decision-making (fostered); Media and digital technologies (fostered); Problem-solving (assessed); Project management (fostered)
Soziale Kompetenzen (social competencies): Communication (assessed); Cooperation and teamwork (assessed); Customer orientation (fostered); Leadership and responsibility (fostered); Self-presentation and social influence (fostered); Sensitivity to diversity (fostered); Negotiation (fostered)
Persönliche Kompetenzen (personal competencies): Adaptability and flexibility (fostered); Creative thinking (assessed); Critical thinking (assessed); Integrity and work ethic (fostered); Self-awareness and self-reflection (fostered); Self-direction and self-management (fostered)
402-0486-00L Frontiers of Quantum Gas Research: Few- and Many-Body Physics W 6 KP 2V + 1U
Not offered this semester.
Kurzbeschreibung The lecture will discuss the most relevant recent research in the field of quantum gases. Bosonic and fermionic quantum gases with emphasis on strong interactions will be studied. The topics include low-dimensional systems, optical lattices and quantum simulation, the BEC-BCS crossover and the unitary Fermi gas, transport phenomena, and quantum gases in optical cavities.
Lernziel The lecture is intended to convey an advanced understanding of the current research on quantum gases. Emphasis will be put on the connection between theory and experimental observation. It will enable students to follow current publications in this field.
Inhalt Quantum gases in one and two dimensions
Optical lattices, Hubbard physics and quantum simulation
Strongly interacting Fermions: the BEC-BCS crossover and the unitary Fermi gas
Transport phenomena in ultracold gases
Quantum gases in optical cavities
Skript no script
C. J. Pethick and H. Smith, Bose-Einstein condensation in dilute Gases, Cambridge.
T. Giamarchi, Quantum Physics in one dimension
Literatur I. Bloch, J. Dalibard, W. Zwerger, Many-body physics with ultracold gases, Rev. Mod. Phys. 80, 885 (2008)
Proceedings of the Enrico Fermi International School of Physics, Vol. CLXIV, ed. M. Inguscio, W. Ketterle, and C. Salomon (IOS Press, Amsterdam, 2007).
Additional literature will be distributed during the lecture
Voraussetzungen / Besonderes Presumably, Prof. Päivi Törmä from Aalto University in Finland will give part of the course. The exercise classes will be partly in the form of a Journal Club, in which a student presents the achievements of a recent important research paper. More information is available at http://www.quantumoptics.ethz.ch/
151-0172-00L Microsystems II: Devices and Applications W 6 KP 3V + 3U C. Hierold, C. I. Roman
Kurzbeschreibung The students are introduced to the fundamentals and physics of microelectronic devices as well as to microsystems in general (MEMS). They will be able to apply this knowledge for
system research and development and to assess and apply principles, concepts and methods from a broad range of technical and scientific disciplines for innovative products.
Lernziel The students are introduced to the fundamentals and physics of microelectronic devices as well as to microsystems in general (MEMS), basic electronic circuits for sensors, RF-MEMS, chemical microsystems, BioMEMS and microfluidics, magnetic sensors and optical devices, and in particular to the concepts of nanosystems (with a focus on carbon nanotubes), based on the respective state of research in the field. They will be able to apply this knowledge for system research and development and to assess and apply principles, concepts and methods from a broad range of technical and scientific disciplines for innovative products.
During the weekly 3-hour exercise module on Mondays, the students will learn the basics of COMSOL Multiphysics and use this software to simulate MEMS devices, to understand their operation more deeply and to optimize their designs.
Inhalt Transducer fundamentals and test structures
Pressure sensors and accelerometers
Resonators and gyroscopes
RF MEMS
Acoustic transducers and energy harvesters
Thermal transducers and energy harvesters
Optical and magnetic transducers
Chemical sensors and biosensors, microfluidics and bioMEMS
Nanosystem concepts
Basic electronic circuits for sensors and microsystems
Skript Handouts (on-line)
402-0414-00L Strongly Correlated Many-Body Systems: From Electrons to Ultracold Atoms to Photons W 6 KP 2V + 1U A. Imamoglu, E. Demler
Not offered this semester.
Kurzbeschreibung This course covers the physics of strongly correlated systems that emerge in diverse platforms, ranging from two-dimensional electrons, through ultracold atoms in atomic lattices, to photons.
Lernziel The goal of the lecture is to prepare the students for research in strongly correlated systems currently investigated in vastly different physical platforms.
Inhalt Feshbach resonances, Bose & Fermi polarons, Anderson impurity model and the s-d Hamiltonian, Kondo effect, quantum magnetism, cavity-QED, probing noise in strongly correlated
systems, variational non-Gaussian approach to interacting many-body systems.
Skript Hand-written lecture notes will be distributed.
Voraussetzungen Knowledge of Quantum Mechanics at the level of QM II and exposure to Solid State Theory.
/ Besonderes
Distinct Values Function
This page describes a VBA Function that will return an array of the distinct values in a range or array of input values.
Excel has some manual methods, such as Advanced Filter, for getting a list of distinct items from an input range. The drawback of using such methods is that you must manually refresh the results when
the input data changes. Moreover, these methods work only with ranges, not arrays of values, and, not being functions, cannot be called from worksheet cells or incorporated into array formulas. This
page describes a VBA function called DistinctValues that accepts as input either a range or an array of data and returns as its result an array containing the distinct items from the input list. That
is, the elements with all duplicates removed. The order of the elements in the output array is the same as their order in the input values. The function can be called from an array-entered range on a worksheet (see this page for information about array formulas), from an array formula in a single worksheet cell, or from another VB function.
The function declaration is shown below:
Function DistinctValues(InputValues As Variant, _
Optional IgnoreCase As Boolean = False) As Variant
You can download an example workbook or just the bas module file with the complete code.
The parameter InputValues is either a range on a worksheet or an array of values. If it is a worksheet range, the range must have exactly one column or one row. Two-dimensional ranges are not supported. If InputValues is an array, it must be a single-dimensional array. Two-dimensional arrays are not supported. The parameter IgnoreCase indicates whether the comparisons should be case-sensitive or case-insensitive. If this value is True, case is ignored and "abc" is considered equal to "ABC". If this value is False, case is taken into account and "abc" is considered different from "ABC".
If the function is array entered into a range on a worksheet, the size of the returned array is equal to the size of the range into which the function was entered, regardless of the number of
distinct elements found, and unused entries at the end of the resulting array are set to vbNullString. This prevents #N/A errors from appearing. Note that this differs from the default behavior of
Excel's own array formulas. If the function is entered in a single cell array formula, the size of the result array is equal to the number of distinct elements from the input list. Similarly, if the
function is called from another VB function, not from a worksheet cell, the result array contains only the distinct elements.
Empty elements, those with a value of vbNullString or Empty are not counted as distinct elements -- they are ignored. Thus, the array {"a","b","","","c"} has three distinct elements, a, b, and c. The
empty string is ignored by the function. Spaces and zero values, however, are considered when creating the list of distinct elements.
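The rules above (first-seen order preserved, empty entries skipped, optional case-insensitive comparison, numbers compared by their string form) are not Excel-specific. Here is a rough Python sketch of the same semantics; the function name and details are this page's description restated, not Chip Pearson's actual VBA code:

```python
def distinct_values(values, ignore_case=False):
    """Return the distinct items from `values` in first-seen order.

    Empty entries ("" or None) are skipped entirely, mirroring the rule
    that vbNullString/Empty are not counted among the distinct values.
    Comparison goes through str(), mirroring StrComp(CStr(...)), so 2
    and "2" are not distinct.
    """
    seen = set()
    result = []
    for v in values:
        if v is None or v == "":
            continue  # empty elements are ignored, not treated as values
        key = str(v).lower() if ignore_case else str(v)
        if key not in seen:
            seen.add(key)
            result.append(v)
    return result
```

Note that, as in the VBA version, spaces and zeros are kept as real values; only truly empty entries are dropped.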
If an array, not a range, is passed into DistinctValues, that array must not contain any Object type variables (other than Excel.Range objects) and must not contain any Null values.
The most common usage is to array-enter the DistinctValues function into a range of cells and pass it another range of cells as the input list. For example, select cells B1:B10, type =DistinctValues(A1:A10) and press CTRL SHIFT ENTER. The list of distinct values from cells A1:A10 will be returned to cells B1:B10. Unpopulated cells in B1:B10 will be filled with empty strings.
You can also use DistinctValues in an array formula. For example, =MATCH("chip",DistinctValues(A1:A10),0) will return the position of the string "chip" in the list of distinct values from cells A1:A10.
To count the number of distinct values in a range, just pass the results of DistinctValues to the COUNT or COUNTA function: =COUNTA(DistinctValues(A1:A10)).
In addition, the DistinctValues function may be called from other VB code, passing either a Range or an Array as the input parameter. For example,
Sub Test()
Dim InputRange As Range
Dim ResultArray As Variant
Dim Ndx As Long
Set InputRange = Range("InputValues")
ResultArray = DistinctValues(InputValues:=InputRange, IgnoreCase:=True)
If IsArray(ResultArray) = True Then
For Ndx = LBound(ResultArray) To UBound(ResultArray)
Debug.Print ResultArray(Ndx)
Next Ndx
Else
If IsError(ResultArray) = True Then
Debug.Print "ERROR: " & CStr(ResultArray)
Else
Debug.Print "UNEXPECTED RESULT: " & CStr(ResultArray)
End If
End If
End Sub
In addition to a range, the InputValues parameter can be an array literal. For example, =DistinctValues({1,2,2,3,1}) returns {1,2,3}.
The code for the DistinctValues function is shown below. It requires the NumberOfArrayDimensions, TransposeArray, and Transpose1DArray functions, all of which are listed below following the listing for DistinctValues.
You can download an example workbook or just the bas module file with the complete code.
Option Explicit
' modDistinctValues
' By Chip Pearson, 5-November-2007, chip@cpearson.com, www.cpearson.com
' This page: www.cpearson.com/Excel/DistinctValues.apsx
' This module contains the DistinctValues function and supporting procedures. You
' should import the entire module into your project. The DistinctValues function
' takes in a Range or an Array as input and returns an Array containing the distinct
' values from that array of inputs.
Function DistinctValues(InputValues As Variant, _
Optional IgnoreCase As Boolean = False) As Variant
' DistinctValues
' This function accepts a set of values in InputValues and returns an Array
' containing the distinct items in that input set. The order of elements in the result
' array is the same as in the InputValues. InputValues may be either a Range object
' or an Array. In either case, it must be one-dimensional (in the case of a Range,
' it may be either a row or column range). If InputValues has more than one dimension,
' the function returns a #REF error. The IgnoreCase parameter indicates whether to do
' a case-sensitive or case-insensitive comparison when comparing text values. If TRUE,
' case is ignored and 'abc' is treated the same as 'ABC'. If FALSE, case is taken into
' account and 'abc' is treated differently than 'ABC'.
' If the function is called from a worksheet, it must be array entered (CTRL SHIFT ENTER)
' into the array of cells that will receive the resulting Distinct values. The size of
' the returned array will be the same size as the array into which the function was
' entered. The Distinct values will fill the first N cells and the remaining array entries
' will be vbNullStrings. The result is properly transposed (or not) depending on whether
' it was called from a row-range or a column-range of cells on the worksheet.
' The result array is always sized to match the size of the range into which it was
' entered, even if that array contains more entries than the InputValues range. This behavior
' differs from the standard behavior of Excel's own array functions.
' If the function is called by another VBA procedure, not from worksheet cells, the
' array is a single dimensional array with only enough elements to contain the Distinct
' elements. The LBound of the array is 1. The variable that receives the array of distinct
' values should be declared as a Variant:
' Dim Res As Variant
' Res = DistinctElements(MyArray,True)
' Empty elements, those with a value of vbNullString or Empty, are not compared. Thus,
' vbNullString and Empty are not considered values in their own right and are not counted
' amongst the Distinct Values. NULL values are not allowed in the InputValues and the
' presence of a NULL value will cause a #NULL error. If there is an Object type variable
' in the InputValues other than a Range object, a #VALUE error will be returned.
' String representations of numbers are considered the same as numbers, so 2 and "2"
' are not distinct values.
Dim ResultArray() As Variant
Dim UB As Long
Dim TransposeAtEnd As Boolean
Dim N As Long
Dim ResultIndex As Long
Dim M As Long
Dim ElementFoundInResults As Boolean
Dim NumCells As Long
Dim ReturnSize As Long
Dim Comp As VbCompareMethod
Dim V As Variant
' Set the text comparison value to be used by StrComp based on
' the setting of IgnoreCase.
If IgnoreCase = True Then
Comp = vbTextCompare
Else
Comp = vbBinaryCompare
End If
' This first large block of code determines whether the function
' is being called from a worksheet range or by another function.
' If it is being called from a worksheet, it must be called from
' a range with only one column or only one row. Two-dimensional
' ranges will cause a #REF error.
If IsObject(Application.Caller) = True Then
If Application.Caller.Rows.Count > 1 And Application.Caller.Columns.Count > 1 Then
DistinctValues = CVErr(xlErrRef)
Exit Function
End If
' Save the size of the region from which the
' function was called and save a flag indicating
' whether we need to transpose the result upon
' returning.
If Application.Caller.Rows.Count > 1 Then
TransposeAtEnd = True
ReturnSize = Application.Caller.Rows.Count
Else
TransposeAtEnd = False
ReturnSize = Application.Caller.Columns.Count
End If
End If
' Were we passed a Range object or a VBA array?
If IsObject(InputValues) = True Then
If TypeOf InputValues Is Excel.Range Then
' Input is a Range object.
If InputValues.Rows.Count > 1 And InputValues.Columns.Count > 1 Then
DistinctValues = CVErr(xlErrRef)
Exit Function
End If
If InputValues.Rows.Count > 1 Then
NumCells = InputValues.Rows.Count
Else
NumCells = InputValues.Columns.Count
End If
UB = NumCells
Else
DistinctValues = CVErr(xlErrRef)
Exit Function
End If
Else
' InputValues is not a Range object.
If IsArray(InputValues) = True Then
Select Case NumberOfArrayDimensions(InputValues)
Case 0
' Zero dimensional array (scalar).
' Return an array of 1 element with
' that value.
ReDim ResultArray(1 To 1)
ResultArray(1) = InputValues
DistinctValues = ResultArray
Exit Function
Case 1
UB = UBound(InputValues) - LBound(InputValues) + 1
' If we were passed in an array from a worksheet
' function (e.g., =DISTINCTVALUES({1,2,3}), we
' need to set NumCells to the size of the input array.
' This is used later to properly resize the result array.
If IsObject(InputValues) = False Then
NumCells = UB
End If
Case Else
DistinctValues = CVErr(xlErrValue)
Exit Function
End Select
Else
ReDim ResultArray(1 To 1)
ResultArray(1) = InputValues
DistinctValues = ResultArray
Exit Function
End If
End If
' Ensure we don't have any NULLs or Objects in the InputValues.
' A Range object is allowed.
For Each V In InputValues
If IsNull(V) = True Then
DistinctValues = CVErr(xlErrNull)
Exit Function
End If
If IsObject(V) = True Then
If Not TypeOf V Is Excel.Range Then
DistinctValues = CVErr(xlErrValue)
Exit Function
End If
End If
Next V
' Allocate the ResultArray and fill it with either
' vbNullStrings if we were called from a worksheet
' or with Empty values if called by a VB procedure.
ReDim ResultArray(1 To UB)
For N = LBound(ResultArray) To UBound(ResultArray)
If IsObject(Application.Caller) = True Then
ResultArray(N) = vbNullString
Else
ResultArray(N) = Empty
End If
Next N
' This is the logic that actually tests for duplicate values.
ResultIndex = 1
' We can always assume that the
' first element in the InputValues
' will be distinct so far.
ResultArray(1) = InputValues(1)
' Loop through the entire InputValues
' array.
For N = 2 To UB
' Set our Found flag = False. This
' flag is used to indicate whether
' we find Input(N) in the list of
' distinct elements. If we found it
' earlier, it is no longer a distinct
' element and we won't put it in the
' ResultArray.
ElementFoundInResults = False
For M = 1 To N
' Scan through the array ResultArray
' looking for Input(N). If we find it,
' Input(N) is a duplicate so set the
' Found flag to True.
If StrComp(CStr(ResultArray(M)), CStr(InputValues(N)), Comp) = 0 Then
ElementFoundInResults = True
Exit For
End If
Next M
' If we didn't find Input(N) in ResultArray
' then Input(N) is distinct so we increment
' ResultIndex and add Input(N) to ResultArray.
If ElementFoundInResults = False Then
ResultIndex = ResultIndex + 1
ResultArray(ResultIndex) = InputValues(N)
End If
Next N
' Here, we resize the ResultArray to the appropriate number of
' elements. ResultIndex is equal to the number of distinct elements found.
' If the function was called from a worksheet, ReturnSize is
' positive, equal to the number of cells in the array into which
' the function was entered and NumCells is the number of cells in
' the InputRange. If the function was called by another VB function,
' not from a worksheet, ReturnSize and NumCells will be 0. Thus,
' if ReturnSize is not 0 and ResultIndex, the number of distinct elements,
' is less than the number of cells in the InputValues, we
' set ResultIndex to the number of cells from which the function was called.
' This allows us in the For N loop that follows to pad out the
' entire Application.Caller range with vbNullStrings to prevent
' #N/A errors if the function is called from a range with more cells
' than there were distinct elements. Note that this behavior differs
' from Excel's normal array formula handling.
If ReturnSize <> 0 Then
If ResultIndex < NumCells Then
If ResultIndex < ReturnSize Then
ResultIndex = ReturnSize
End If
End If
End If
ReDim Preserve ResultArray(1 To ResultIndex)
If UBound(ResultArray) > NumCells Then
For N = NumCells + 1 To ReturnSize
ResultArray(N) = vbNullString
Next N
End If
' If we were called from a Column range on a worksheet (Rows.Count > 1),
' we need to transform ResultArray into a 2-dimensional array and transpose
' it so it will be properly stored in the column. Transpose1DArray does this.
' If the function was not called from a worksheet, then the
' TransposeAtEnd flag will be false and we just return the array.
If TransposeAtEnd = True Then
DistinctValues = Transpose1DArray(Arr:=ResultArray, ToRow:=False)
Else
DistinctValues = ResultArray
End If
End Function
Function TransposeArray(Arr As Variant) As Variant
' TransposeArray
' This function tranposes the array Arr. Arr must be
' a two dimensional array. If Arr is not an array, the
' result is just Arr itself. If Arr is a 1-dimensional
' array, the result is just Arr itself. If you need to
' transpose a 1-dimensional array from a row to a column
' in order to properly return it to a worksheet, use
' Transpose1DArray. If Arr has more than two dimensions,
' an error value is returned.
Dim R1 As Long
Dim R2 As Long
Dim C1 As Long
Dim C2 As Long
Dim LB1 As Long
Dim LB2 As Long
Dim UB1 As Long
Dim UB2 As Long
Dim Res() As Variant
Dim NumDims As Long
If IsArray(Arr) = False Then
TransposeArray = Arr
Exit Function
End If
NumDims = NumberOfArrayDimensions(Arr)
Select Case NumDims
Case 0
If IsObject(Arr) = True Then
Set TransposeArray = Arr
Else
TransposeArray = Arr
End If
Case 1
TransposeArray = Arr
Case 2
LB1 = LBound(Arr, 1)
UB1 = UBound(Arr, 1)
LB2 = LBound(Arr, 2)
UB2 = UBound(Arr, 2)
R2 = LB1
C2 = LB2
ReDim Res(LB2 To UB2, LB1 To UB1)
For R1 = LB1 To UB1
For C1 = LB2 To UB2
Res(C1, R1) = Arr(R1, C1)
C2 = C2 + 1
Next C1
R2 = R2 + 1
Next R1
TransposeArray = Res
Case Else
TransposeArray = CVErr(9)
End Select
End Function
Function NumberOfArrayDimensions(Arr As Variant) As Long
' NumberOfArrayDimensions
' This returns the number of dimensions of the array
' Arr. If Arr is not an array, the result is 0.
Dim LB As Long
Dim N As Long
On Error Resume Next
N = 1
Do Until Err.Number <> 0
LB = LBound(Arr, N)
N = N + 1
Loop
NumberOfArrayDimensions = N - 2
End Function
Function Transpose1DArray(Arr As Variant, ToRow As Boolean) As Variant
' Transpose1DArray
' This function transforms a 1-dim array to a 2-dim array and
' transposes it. This is required when returning arrays back to
' worksheet cells. The ToRow parameter determines if the array is
' to be returned to the worksheet as a row (TRUE) or as a columns (FALSE).
' This should only be used for 1-dim arrays that are going back to
' a worksheet.
Dim Res As Variant
Dim N As Long
If IsArray(Arr) = False Then
Transpose1DArray = CVErr(xlErrValue)
Exit Function
End If
If NumberOfArrayDimensions(Arr) <> 1 Then
Transpose1DArray = CVErr(xlErrValue)
Exit Function
End If
If ToRow = True Then
ReDim Res(LBound(Arr) To LBound(Arr), LBound(Arr) To UBound(Arr))
For N = LBound(Res, 2) To UBound(Res, 2)
Res(LBound(Res), N) = Arr(N)
Next N
Else
ReDim Res(LBound(Arr) To UBound(Arr), LBound(Arr) To LBound(Arr))
For N = LBound(Res, 1) To UBound(Res, 1)
Res(N, LBound(Res)) = Arr(N)
Next N
End If
Transpose1DArray = Res
End Function
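The dimension-counting trick used by NumberOfArrayDimensions, probing LBound with an increasing dimension number until an error occurs, carries over to other languages as a probe-until-failure pattern. As a rough sketch (not part of the original page), the same idea applied to nested Python lists looks like this; unlike the VBA version, this naive sketch reports 0 for an empty list:

```python
def number_of_dimensions(arr):
    """Count nesting depth by probing one level deeper until indexing
    fails, analogous to calling LBound(Arr, N) for N = 1, 2, ... until
    VBA raises an error."""
    n = 0
    while True:
        try:
            if isinstance(arr, (str, bytes)):
                raise TypeError  # treat strings as scalars, not sequences
            arr = arr[0]         # probe one dimension deeper
            n += 1
        except (TypeError, IndexError, KeyError):
            return n             # probing failed: n dimensions existed
```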
This page last updated: 5-November-2007
Integer Range Table Function
Integer Range Table Function
Having participated in SQL Server forums for a while, I have come across a question asking whether there was any built-in table in SQL Server, whether a temporary table or a system table, that
contains just integer values from 1 to any given number.
Unfortunately, there is no temporary table or system table available in SQL Server, and, if I am not mistaken, in other database engines either, that contains this data. The best way to generate such data is not to use a temporary table but to use a table-valued user-defined function.
The following user-defined function accepts as input an integer value that will serve as the maximum value in the returned integer table.
CREATE FUNCTION [dbo].[ufn_GenerateIntegers] ( @MaxValue INT )
RETURNS @Integers TABLE ( [IntValue] INT )
AS
BEGIN
    DECLARE @Index INT
    SET @Index = 1
    WHILE @Index <= @MaxValue
    BEGIN
        INSERT INTO @Integers ( [IntValue] ) VALUES ( @Index )
        SET @Index = @Index + 1
    END
    RETURN
END
The user-defined function is quite straightforward. It simply inserts into the returned TABLE variable (@Integers) the integer values from 1 to the maximum integer value supplied in the parameter, using a WHILE loop. If the value supplied in the parameter is less than 1, then the table returned will not contain any records.
To use this function, you can simply issue a simple SELECT statement and using this in the FROM clause of the SELECT, as follows:
SELECT * FROM [dbo].[ufn_GenerateIntegers] ( 1000 )
This will produce a result set containing one column called [IntValue] with records containing values from 1 through 1000.
To give some flexibility to the user-defined function, it would be a good idea to include the starting value in the parameter instead of having it always start with 1. The following user-defined
function performs the same task as above but includes a parameter for the starting value (@MinValue).
CREATE FUNCTION [dbo].[ufn_GenerateIntegers] ( @MinValue INT, @MaxValue INT )
RETURNS @Integers TABLE ( [IntValue] INT )
AS
BEGIN
    WHILE @MinValue <= @MaxValue
    BEGIN
        INSERT INTO @Integers ( [IntValue] ) VALUES ( @MinValue )
        SET @MinValue = @MinValue + 1
    END
    RETURN
END
To use this user-defined function, you simply have to supply the starting value and ending value as parameters, as follows:
SELECT * FROM [dbo].[ufn_GenerateIntegers] ( 100, 500 )
This will produce a result set containing one column called [IntValue] with records containing values from 100 through 500.
Alternative Approach
Here's a different approach to producing the same integer-table output. It minimizes the number of loops performed and makes use of a CROSS JOIN of tables.
CREATE FUNCTION [dbo].[ufn_GenerateIntegers] ( @MaxValue INT )
RETURNS @Integers TABLE ( [IntValue] INT )
AS
BEGIN
    DECLARE @Digits TABLE ( [Digit] INT )
    DECLARE @Counter INT
    SET @Counter = 0
    WHILE @Counter < 10
    BEGIN
        INSERT INTO @Digits ( [Digit] ) VALUES ( @Counter )
        SET @Counter = @Counter + 1
    END
    INSERT INTO @Integers ( [IntValue] )
    SELECT (Thousands.Digit * 1000) + (Hundreds.Digit * 100) +
           (Tens.Digit * 10) + Ones.Digit
    FROM @Digits Thousands, @Digits Hundreds, @Digits Tens, @Digits Ones
    WHERE (Thousands.Digit * 1000) + (Hundreds.Digit * 100) +
          (Tens.Digit * 10) + Ones.Digit BETWEEN 1 AND @MaxValue
    ORDER BY 1
    RETURN
END
The maximum value returned by this version of the user-defined function is 9,999, but it can easily be extended to accommodate larger numbers. The concept behind this approach is that each digit
within the output, basically the ones, tens, hundreds and thousands place is made up of the numbers from 0 to 9. So the table variable created only contains 10 records, one for each number, and only
10 loops are executed. The output table is then populated by joining the same table variable to itself once for each position. From the user-defined function above, there were 4 tables that were
CROSS-JOINed to produce the output table, each digit represented by one table variable (Ones, Tens, Hundreds and Thousands).
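The digit cross-join can be illustrated outside T-SQL; the following Python sketch (an illustrative rendering, not part of the article's code) mirrors the four-way join and the BETWEEN filter:

```python
from itertools import product

# Four "digit tables" of 0-9, one per place value; the cross join of the four
# enumerates every integer expressible with four decimal digits.
digits = range(10)
max_value = 500
integers = sorted(
    th * 1000 + h * 100 + t * 10 + o
    for th, h, t, o in product(digits, repeat=4)
    if 1 <= th * 1000 + h * 100 + t * 10 + o <= max_value
)
print(integers[0], integers[-1], len(integers))  # 1 500 500
```

Only 40 "rows" (10 per position) are materialized, yet the cross product covers the whole range, which is exactly why the SQL version needs only 10 loop iterations.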
One possible use of this user-defined function is in identifying missing IDENTITY values in a table. Let's say you have the following table, which contains an IDENTITY column, with the following
sample data:
-- Step #1: Create Table and Populate with Values
CREATE TABLE #MissingID ( [ID] INT IDENTITY, [Name] VARCHAR(20) )
INSERT INTO #MissingID ( [Name] ) VALUES ( 'Bentley' )
INSERT INTO #MissingID ( [Name] ) VALUES ( 'BMW' )
INSERT INTO #MissingID ( [Name] ) VALUES ( 'Ferrari' )
INSERT INTO #MissingID ( [Name] ) VALUES ( 'Lamborghini' )
INSERT INTO #MissingID ( [Name] ) VALUES ( 'Hummer' )
INSERT INTO #MissingID ( [Name] ) VALUES ( 'Jaguar' )
INSERT INTO #MissingID ( [Name] ) VALUES ( 'Lexus' )
INSERT INTO #MissingID ( [Name] ) VALUES ( 'Mercedes Benz' )
INSERT INTO #MissingID ( [Name] ) VALUES ( 'Porsche' )
INSERT INTO #MissingID ( [Name] ) VALUES ( 'Volvo' )
SELECT * FROM #MissingID
The output of the SELECT statement will be as follows:
ID Name
------ --------------
1 Bentley
2 BMW
3 Ferrari
4 Lamborghini
5 Hummer
6 Jaguar
7 Lexus
8 Mercedes Benz
9 Porsche
10 Volvo
Let's say certain records have been deleted from the table, as shown in the following script:
-- Step #2: Delete IDs
DELETE FROM #MissingID WHERE [ID] IN (3, 4, 9)
SELECT * FROM #MissingID
The table now has the following records:
ID Name
------ --------------
1 Bentley
2 BMW
5 Hummer
6 Jaguar
7 Lexus
8 Mercedes Benz
10 Volvo
To identify the missing IDENTITY values, in this case the deleted IDs 3, 4 and 9, we can make use of the Integer Range Table user-defined function to generate a table of integers to be used to join
with our table as shown in the following script:
-- Step #3: Identify Missing IDENTITY Values
DECLARE @MaxID INT
SELECT @MaxID = MAX([ID]) FROM #MissingID
SELECT A.*
FROM [dbo].[ufn_GenerateIntegers] ( @MaxID ) A LEFT OUTER JOIN #MissingID B
ON A.[IntValue] = B.[ID]
WHERE B.[ID] IS NULL
The first step is to determine the highest IDENTITY value that has been used in the table (SELECT @MaxID = MAX([ID]) FROM #MissingID). This maximum value is then passed as a parameter to the user-defined
function; here we use the first version of the user-defined function, which expects only one parameter. The table generated by the user-defined function, which contains values
from 1 to the maximum ID of our table, is then LEFT JOINed with our table to identify the missing IDs (WHERE B.[ID] IS NULL). | {"url":"https://sql-server-helper.com/functions/integer-table.aspx","timestamp":"2024-11-05T16:41:48Z","content_type":"text/html","content_length":"15938","record_id":"<urn:uuid:2f12597d-f0ef-4913-9cc3-2c3388a63627>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00088.warc.gz"} |
{"citation":{"ieee":"B. Sturmfels and C. Uhler, “Multivariate Gaussians, semidefinite matrix completion, and convex algebraic geometry,” Annals of the Institute of Statistical Mathematics, vol. 62,
no. 4. Springer, pp. 603–638, 2010.","chicago":"Sturmfels, Bernd, and Caroline Uhler. “Multivariate Gaussians, Semidefinite Matrix Completion, and Convex Algebraic Geometry.” Annals of the Institute
of Statistical Mathematics. Springer, 2010. https://doi.org/10.1007/s10463-010-0295-4.","ama":"Sturmfels B, Uhler C. Multivariate Gaussians, semidefinite matrix completion, and convex algebraic
geometry. Annals of the Institute of Statistical Mathematics. 2010;62(4):603-638. doi:10.1007/s10463-010-0295-4","short":"B. Sturmfels, C. Uhler, Annals of the Institute of Statistical Mathematics 62
(2010) 603–638.","ista":"Sturmfels B, Uhler C. 2010. Multivariate Gaussians, semidefinite matrix completion, and convex algebraic geometry. Annals of the Institute of Statistical Mathematics. 62(4),
603–638.","apa":"Sturmfels, B., & Uhler, C. (2010). Multivariate Gaussians, semidefinite matrix completion, and convex algebraic geometry. Annals of the Institute of Statistical Mathematics.
Springer. https://doi.org/10.1007/s10463-010-0295-4","mla":"Sturmfels, Bernd, and Caroline Uhler. “Multivariate Gaussians, Semidefinite Matrix Completion, and Convex Algebraic Geometry.” Annals of
the Institute of Statistical Mathematics, vol. 62, no. 4, Springer, 2010, pp. 603–38, doi:10.1007/s10463-010-0295-4."},"day":"01","publist_id":"3332","oa":1,"acknowledgement":"B. Sturmfels is
supported in part by NSF grants DMS-0456960 and DMS-0757236. C. Uhler is supported by an International Fulbright Science and Technology
abs/0906.3529","open_access":"1"}],"_id":"3308","year":"2010","intvolume":" 62","status":"public","abstract":[{"lang":"eng","text":"We study multivariate normal models that are described by linear
constraints on the inverse of the covariance matrix. Maximum likelihood estimation for such models leads to the problem of maximizing the determinant function over a spectrahedron, and to the problem
of characterizing the image of the positive definite cone under an arbitrary linear projection. These problems at the interface of statistics and optimization are here examined from the perspective
of convex algebraic geometry."}],"page":"603 - 638","date_published":"2010-08-01T00:00:00Z","author":[{"last_name":"Sturmfels","full_name":"Sturmfels, Bernd","first_name":"Bernd"},
{"full_name":"Caroline Uhler","first_name":"Caroline","orcid":"0000-0002-7008-0216","last_name":"Uhler","id":"49ADD78E-F248-11E8-B48F-1D18A9856A87"}],"publication":"Annals of the Institute of
Statistical Mathematics","title":"Multivariate Gaussians, semidefinite matrix completion, and convex algebraic geometry"} | {"url":"https://research-explorer.ista.ac.at/record/3308.jsonl","timestamp":"2024-11-13T22:02:14Z","content_type":"text/plain","content_length":"3836","record_id":"<urn:uuid:0256c2ab-0b15-42a8-b972-afb8a9adc991>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00666.warc.gz"} |
John's train is on time
A train leaves on time. After it has gone 8 miles (at 33mph) the driver looks at his watch and sees that the hour hand is exactly over the minute hand. When did the train leave the station?
A train leaves the station on time. After it has gone 8 miles the driver looks at his watch and sees that the hour hand is exactly over the minute hand.
The average speed of the train over the 8 miles was 33mph.
When did the train leave the station?
Getting Started
I think it is reasonable to assume that the train leaves on a whole number of minutes.
How many degrees do the hour and minute hands travel in a minute?
Can you form an equation that equates the angular distances travelled when the hour and minute hands coincide?
Student Solutions
Not as easy as it first appeared. Many of you sent in attempts at this very popular problem. The most common mistakes in your arguments involved rounding to the nearest minute and assuming that the only time the hands cross is at twelve o'clock. One of you pointed out that on a standard clock the hour hand never passes over the minute hand - it is always the other way around - a bit of a red herring.
Two correct solutions were received from Hannah of the School of St. Helens and St. Katharine and Andrei of School 205, Bucharest. It is Hannah's solution that is given below. Well done, Hannah.
This took me a couple of attempts; it wasn't as simple as it first seemed because of all the fractions of time that are involved.
Firstly - 8 miles is approximately a quarter of the distance travelled in an hour at a speed of 33mph, so we are looking at a journey time of just less than 15 minutes.
More precisely:-
$$ \frac{8}{33} \times 60 = 0.242424\dots \times 60 = 14.545454\dots = 14 \frac{6}{11} \mbox{ minutes.} $$
Secondly - it does not seem unreasonable to assume that the train leaves on a whole number of minutes past (or to) the hour.
This means that the time when the hands cross that we are looking for will probably end in $0.545454\dots$ minutes, that is, $\frac{6}{11}$ of a minute, so that when we take the journey time off we will end up with a precise
departure time (no seconds or part seconds left over).
We know that the hands cross at 12 o'clock and that they cross 11 times in every 12 hours, so that is every
$$ \frac{12}{11} = 1.090909\dots = 1 \frac{1}{11} \mbox{ hours.} $$
$$ \mbox{In terms of time this is } 1 \mbox{ hour } 5 \frac{5}{11} \mbox{ minutes} = 1 \frac {1}{11} \times 60 \mbox{ minutes.} $$
If this is so, then the tenth time the hands pass each other after 12:00 will be at 10 hours $54\frac{6}{11}$ minutes.
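The arithmetic can be double-checked with exact fractions; this Python sketch (illustrative, not part of the original solution) reproduces the calculation:

```python
from fractions import Fraction

# Hands coincide every 12/11 hours; the tenth coincidence after 12:00
crossing = 10 * Fraction(12, 11) * 60   # minutes after 12:00
journey = Fraction(8, 33) * 60          # 8 miles at 33 mph, in minutes
departure = crossing - journey
print(departure)  # 640, i.e. exactly 10 hours 40 minutes after 12:00
```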
If we subtract the journey time of 14 6/11 off this then we will end up with an exact departure time for John's train of 10:40 or 22:40. | {"url":"https://nrich.maths.org/problems/johns-train-time","timestamp":"2024-11-13T17:34:46Z","content_type":"text/html","content_length":"39883","record_id":"<urn:uuid:783d1aa7-7e01-413e-b1b4-e40b2df0832b>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00144.warc.gz"} |
3rd Grade
Alabama Course of Study Standards: 26
Recognize and describe polygons (up to 8 sides), triangles, and quadrilaterals (rhombuses, rectangles, and squares) based on the number of sides and the presence or absence of square corners.
1. Draw examples of quadrilaterals that are and are not rhombuses, rectangles, and squares.
Common Core State Standards: Math.3.G.1 or 3.G.A.1
Understand that shapes in different categories (e.g., rhombuses, rectangles, and others) may share attributes (e.g., having four sides), and that the shared attributes can define a larger category
(e.g., quadrilaterals). Recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories.
Georgia Standards of Excellence (GSE): 3.GSR.6.2
Classify, compare, and contrast polygons, with a focus on quadrilaterals, based on properties. Analyze specific 3-dimensional figures to identify and describe quadrilaterals as faces of these figures.
Mississippi College- and Career-Readiness Standards: 3.G.1
Understand that shapes in different categories (e.g., rhombuses, rectangles, circles, and others) may share attributes (e.g., having four sides), and that the shared attributes can define a larger
category (e.g., quadrilaterals). Recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories.
North Carolina - Standard Course of Study: 3.G.1
Reason with two-dimensional shapes and their attributes.
• Investigate, describe, and reason about composing triangles and quadrilaterals and decomposing quadrilaterals.
• Recognize and draw examples and non-examples of types of quadrilaterals including rhombuses, rectangles, squares, parallelograms, and trapezoids.
New York State Next Generation Learning Standards: 3.G.1
Recognize and classify polygons based on the number of sides and vertices (triangles, quadrilaterals, pentagons, and hexagons). Identify shapes that do not belong to one of the given subcategories.
Note: Include both regular and irregular polygons, however, students need not use formal terms "regular" and "irregular," e.g., students should be able to classify an irregular pentagon as "a
pentagon," but do not need to classify it as an "irregular pentagon."
Ohio's Learning Standards: 3.G.1
Draw and describe triangles, quadrilaterals (rhombuses, rectangles, and squares), and polygons (up to 8 sides) based on the number of sides and the presence or absence of square corners (right angles).
Tennessee Academic Standards: 3.G.A.1
Understand that shapes in different categories may share attributes and that the shared attributes can define a larger category. Recognize rhombuses, rectangles, and squares as examples of
quadrilaterals and draw examples of quadrilaterals that do not belong to any of these subcategories.
Pennsylvania Core Standards: CC.2.3.3.A.1
Identify, compare,and classify shapes and their attributes.
Pennsylvania Core Standards: M03.C-G.1.1.1
Explain that shapes in different categories may share attributes and that the shared attributes can define a larger category.
Pennsylvania Core Standards: M03.C-G.1.1.2
Recognize rhombi, rectangles, and squares as examples of quadrilaterals and/or draw examples of quadrilaterals that do not belong to any of these subcategories.
Florida - Benchmarks for Excellent Student Thinking: MA.3.GR.1.2
Identify and draw quadrilaterals based on their defining attributes. Quadrilaterals include parallelograms, rhombi, rectangles, squares and trapezoids.
Arkansas Academic Standards: 3.GM.1
Understand that quadrilaterals in different categories may share attributes. | {"url":"https://www.learningfarm.com/web/practicePassThrough.cfm?TopicID=271","timestamp":"2024-11-07T04:20:49Z","content_type":"application/xhtml+xml","content_length":"34484","record_id":"<urn:uuid:1d2d0b2b-04f8-4d1e-a778-1632c3ea9bb7>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00523.warc.gz"} |
Convert Petabyte (PB) (Bytes / Bits)
1. Choose the right category from the selection list, in this case 'Bytes / Bits'.
2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), square root (√), brackets and
π (pi) are all permitted at this point.
3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Petabyte [PB]'.
4. The value will then be converted into all units of measurement the calculator is familiar with.
5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so.
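As an illustrative sketch (not the site's actual code), the conversion performed in steps 2-4 for the 'Bytes / Bits' category looks like this in Python, assuming the SI (decimal) definition of a petabyte:

```python
# One petabyte is 10**15 bytes under the decimal (SI) prefix convention.
BYTES_PER_PB = 10**15

value_pb = 416
value_bytes = value_pb * BYTES_PER_PB
print(value_bytes)           # 416000000000000000
print(f"{value_bytes:.3e}")  # the same value in scientific notation
```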
Utilize the full range of performance for this units calculator
With this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '416 Petabyte'. In so doing, either the full name of the unit or its
abbreviation can be used; as an example, either 'Petabyte' or 'PB'. Then, the calculator determines the category of the measurement unit that is to be converted, in this case 'Bytes / Bits'.
After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will also be sure to find the conversion you originally sought. Regardless of which of
these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. All of that is taken over
by the calculator, and it gets the job done in a fraction of a second.
Furthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(48 * 44) PB'. But different
units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '56 Petabyte + 52 Petabyte' or '40mm x 36cm x 32dm = ? cm^3'. The units
of measure combined in this way naturally have to fit together and make sense in the combination in question.
The mathematical functions sin, cos, tan and sqrt can also be used. Example: sin(π/2), cos(pi/2), tan(90°), sin(90) or sqrt(4).
If a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 3.643 733 300 175 4×10^21. For this form of presentation, the number
will be segmented into an exponent, here 21, and the actual number, here 3.643 733 300 175 4. For devices on which the possibilities for displaying numbers are limited, such as, for example, pocket
calculators, one also finds the way of writing numbers as 3.643 733 300 175 4E+21. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at
this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 3 643 733 300 175 400 000 000. Independent of the presentation of the
results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications. | {"url":"https://www.convert-measurement-units.com/convert+Petabyte.php","timestamp":"2024-11-04T14:26:52Z","content_type":"text/html","content_length":"57761","record_id":"<urn:uuid:31413797-fb43-42be-992e-48e690e9571a>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00159.warc.gz"} |
BalanceTrials balances a set of factors given the factor levels. It is
identical to BalanceFactors except that the first argument is the number
of trials desired. It outputs one or more vectors containing factor
values for each trial, balanced and, optionally, randomized.
[F1, F2, ...] = BalanceTrials(NTRIALS, RND, LVL1, LVL2, ...)
BalanceTrials must be called with three or more input arguments. The
first argument, NTRIALS, specifies the number of trials desired. The
second argument, RND, determines whether or not the returned factors
should be shuffled (non-zero values lead to shuffling).
The remaining input arguments specify the levels for each of a set of
factors. Factor levels can be specified as numeric vectors or cell
arrays (e.g., for category names). The returned factor lists will be the
same class as the corresponding levels.
WARNING: If NTRIALS is not a multiple of the product of the number of
levels, then the actual number of trials generated will be more than
NTRIALS. To detect this situation, test whether numel(F1) == NTRIALS.
[targetPresent, setSize] = BalanceTrials(80, 0, 0:1, [3 6 9 12]);
[target, setSize, dur] = ...
BalanceTrials(72, 1, [0 1], [4 8 12], [0 100 200]);
[samediff, mask] = ...
BalanceTrials(20, 1, {'same', 'diff'}, {'pattern', 'meta'});
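The balancing behavior described above can be sketched in Python; this is an assumed reconstruction of the logic, not the Psychtoolbox MATLAB source:

```python
import itertools
import random

def balance_trials(n_trials, rnd, *levels, seed=None):
    """Sketch: form the full factorial crossing of the levels, repeat it until
    at least n_trials rows exist, and optionally shuffle the rows."""
    cells = list(itertools.product(*levels))
    reps = -(-n_trials // len(cells))         # ceiling division
    rows = cells * reps                       # may exceed n_trials (see WARNING)
    if rnd:
        random.Random(seed).shuffle(rows)
    return [list(col) for col in zip(*rows)]  # one list per factor

target_present, set_size = balance_trials(80, 0, [0, 1], [3, 6, 9, 12])
print(len(target_present))  # 80: each of the 8 cells appears exactly 10 times
```

Note how the WARNING above falls out of the ceiling division: asking for 10 trials over 8 cells yields 16 rows, which is why callers should test whether the returned length equals NTRIALS.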
See also: BalanceFactors | {"url":"http://psychtoolbox.org/docs/BalanceTrials","timestamp":"2024-11-14T12:07:18Z","content_type":"text/html","content_length":"7011","record_id":"<urn:uuid:362015d5-e8cc-4a65-a195-bd9cde465c61>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00429.warc.gz"} |
What is a degenerate basic feasible solution?
Degenerate basic feasible solution: A basic feasible solution where one or more of the basic variables is zero. Discrete Variable: A decision variable that can only take integer values. Feasible
Solution: A solution that satisfies all the constraints.
What does the term degenerate solution mean?
Definition: An LP is degenerate if in a basic feasible solution, one of the basic variables takes on a zero value. Degeneracy is a problem in practice, because it makes the simplex algorithm slower.
What is degenerate basic feasible solution in RMT?
A basic feasible solution is called degenerate if one of its RHS coefficients (excluding the objective value) is 0. This bfs is degenerate.
What is a basic solution called non-degenerate?
Non-degenerate: if none of the basic variables is zero, the solution is non-degenerate.
Degenerate: if one or more of the basic variables vanish, the solution is called a degenerate basic solution.
How do you know if a solution is degenerate?
A basic feasible solution is degenerate if at least one of the basic variables is equal to zero. A standard form linear optimization problem is degenerate if at least one of its basic feasible
solutions is degenerate.
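The test in this definition is mechanical; a minimal Python sketch (illustrative, with assumed names) checks a vector of basic-variable values:

```python
# A basic feasible solution is degenerate exactly when some basic variable
# equals zero (within numerical tolerance).
def is_degenerate(basic_values, tol=1e-9):
    if any(v < -tol for v in basic_values):
        raise ValueError("not feasible: a basic variable is negative")
    return any(abs(v) <= tol for v in basic_values)

print(is_degenerate([2.0, 3.0, 1.0]))  # False
print(is_degenerate([2.0, 0.0, 1.0]))  # True
```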
What are degenerate equations?
In mathematics, something is called degenerate if it is a special case of an object which has, in some sense, “collapsed” into something simpler. A degenerate conic is given by an equation
ax2+2hxy+by2+2fx+2gy+c=0 where the solution set is just a point, a straight line or a pair of straight lines.
What is degenerate linear equation?
A system of equations is degenerate if more than one set of solutions exists, and non-degenerate if only one set of solutions exists. A system of equations is inconsistent if no solution exists. A
system of equations is consistent if solutions exist – either a unique set of solutions or more than one.
What do you mean by Modi method?
modified distribution method
The modified distribution method, also known as the MODI method or (u – v) method, provides a minimum-cost solution to transportation problems. This model studies the minimization of the
cost of transporting a commodity from a number of sources to several destinations.
What is feasible solution and non-degenerate solution in transportation problem?
Non-degenerate basic feasible solution: A basic feasible solution to an (m x n) transportation problem is said to be non-degenerate if the total number of non-negative allocations is exactly m + n – 1 (i.e., the number of independent constraint equations), and these m + n – 1 allocations are in independent positions.
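The m + n – 1 count can be checked mechanically; this small Python helper is illustrative (the names are assumptions, not from the original):

```python
# In an m-source, n-destination transportation problem, a basic feasible
# solution should occupy exactly m + n - 1 cells; fewer signals degeneracy.
def classify_bfs(occupied_cells, m, n):
    return "non-degenerate" if occupied_cells == m + n - 1 else "degenerate"

# A 3-source, 4-destination problem needs 3 + 4 - 1 = 6 occupied cells.
print(classify_bfs(6, 3, 4))  # non-degenerate
print(classify_bfs(5, 3, 4))  # degenerate
```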
What characteristic best describes a degenerate solution?
A solution where an anomaly takes place. The shadow price of a non-binding constraint is zero.
In order to resolve degeneracy, the conventional method is to allocate an infinitesimally small amount ε to one of the independent cells, i.e., to allocate a small positive quantity ε to one or more
unoccupied cells that have the lowest transportation costs, so as to make m + n – 1 allocations (i.e., to satisfy the condition N …
Empirical Risk Minimization for Probabilistic Grammars: Sample Complexity and Hardness of Learning
Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. They are used ubiquitously in computational linguistics. We present a framework,
reminiscent of structural risk minimization, for empirical risk minimization of probabilistic grammars using the log-loss. We derive sample complexity bounds in this framework that apply both to the
supervised setting and the unsupervised setting. By making assumptions about the underlying distribution that are appropriate for natural language scenarios, we are able to derive
distribution-dependent sample complexity bounds for probabilistic grammars. We also give simple algorithms for carrying out empirical risk minimization using this framework in both the supervised and
unsupervised settings. In the unsupervised case, we show that the problem of minimizing empirical risk is NP-hard. We therefore suggest an approximate algorithm, similar to expectation-maximization,
to minimize the empirical risk.
Learning from data is central to contemporary computational linguistics. It is common in such learning to estimate a model in a parametric family using the maximum likelihood principle. This
principle applies in the supervised case (i.e., using annotated data) as well as in semisupervised and unsupervised settings (i.e., using unannotated data). Probabilistic grammars constitute a range of
such parametric families we can estimate (e.g., hidden Markov models, probabilistic context-free grammars). These parametric families are used in diverse NLP problems ranging from syntactic and
morphological processing to applications like information extraction, question answering, and machine translation.
Estimation of probabilistic grammars, in many cases, indeed starts with the principle of maximum likelihood estimation (MLE). In the supervised case, and with traditional parametrizations based on
multinomial distributions, MLE amounts to normalization of rule frequencies as they are observed in data. In the unsupervised case, on the other hand, algorithms such as expectation-maximization are
available. MLE is attractive because it offers statistical consistency if some conditions are met (i.e., if the data are distributed according to a distribution in the family, then we will discover
the correct parameters if sufficient data is available). In addition, under some conditions it is also an unbiased estimator.
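The count-and-normalize view of supervised MLE can be sketched in Python; this toy example (illustrative, not the authors' code) treats each observed derivation as a bag of rule applications:

```python
from collections import Counter, defaultdict

# With a multinomial parametrization and fully observed derivations,
# maximizing likelihood reduces to normalizing rule counts per nonterminal.
def mle_rule_probs(derivations):
    counts = Counter(rule for deriv in derivations for rule in deriv)
    totals = defaultdict(int)
    for (lhs, _), c in counts.items():
        totals[lhs] += c
    return {rule: c / totals[rule[0]] for rule, c in counts.items()}

# Toy data: each derivation is a list of (lhs, rhs) rule applications.
data = [[("S", "NP VP"), ("NP", "D N")],
        [("S", "NP VP"), ("NP", "Pro")]]
theta = mle_rule_probs(data)
print(theta[("NP", "D N")])   # 0.5: NP -> D N used once out of two NP expansions
print(theta[("S", "NP VP")])  # 1.0
```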
An issue that has been far less explored in the computational linguistics literature is the sample complexity of MLE. Here, we are interested in quantifying the number of samples required to
accurately learn a probabilistic grammar either in a supervised or in an unsupervised way. If bounds on the requisite number of samples (known as “sample complexity bounds”) are sufficiently tight,
then they may offer guidance to learner performance, given various amounts of data and a wide range of parametric families. Being able to reason analytically about the amount of data to annotate, and
the relative gains in moving to a more restricted parametric family, could offer practical advantages to language engineers.
We note that grammar learning has been studied in formal settings as a problem of grammatical inference—learning the structure of a grammar or an automaton (Angluin 1987; Clark and Thollard 2004; de
la Higuera 2005; Clark, Eyraud, and Habrard 2008, among others). Our setting in this article is different. We assume that we have a fixed grammar, and our goal is to estimate its parameters. This
approach has shown great empirical success, both in the supervised (Collins 2003; Charniak and Johnson 2005) and the unsupervised (Carroll and Charniak 1992; Pereira and Schabes 1992; Klein and
Manning 2004; Cohen and Smith 2010a) settings. There has also been some discussion of sample complexity bounds for statistical parsing models, in a distribution-free setting (Collins 2004). The
distribution-free setting, however, is not ideal for analysis of natural language, as it has to account for pathological cases of distributions that generate data.
We develop a framework for deriving sample complexity bounds using the maximum likelihood principle for probabilistic grammars in a distribution-dependent setting. Distribution dependency is
introduced here by making empirically justified assumptions about the distributions that generate the data. Our framework uses and significantly extends ideas that have been introduced for deriving
sample complexity bounds for probabilistic graphical models (Dasgupta 1997). Maximum likelihood estimation is put in the empirical risk minimization framework (Vapnik 1998) with the loss function
being the log-loss. Following that, we develop a set of learning theoretic tools to explore rates of estimation convergence for probabilistic grammars. We also develop algorithms for performing
empirical risk minimization.
Much research has been devoted to the problem of learning finite state automata (which can be thought of as a class of grammars) in the Probably Approximately Correct setting, leading to the
conclusion that it is a very hard problem (Kearns and Valiant 1989; Pitt 1989; Terwijn 2002). Typically, the setting in these cases is different from our setting: Error is measured as the probability
mass of strings that are not identified correctly by the learned finite state automaton, instead of measuring KL divergence between the automaton and the true distribution. In addition, in many
cases, there is also a focus on the distribution-free setting. To the best of our knowledge, it is still an open problem whether finite state automata are learnable in the distribution-dependent
setting when measuring the error as the fraction of misidentified strings. Other work (Ron 1995; Ron, Singer, and Tishby 1998; Clark and Thollard 2004; Palmer and Goldberg 2007) also gives treatment
to probabilistic automata with an error measure which is more suitable for the probabilistic setting, such as Kullback-Lielder (KL) divergence or variation distance. These also focus on learning the
structure of finite state machines. As mentioned earlier, in our setting we assume that the grammar is fixed, and that our goal is to estimate its parameters.
We note an important connection to an earlier study about the learnability of probabilistic automata and hidden Markov models by Abe and Warmuth (1992). In that study, the authors provided positive
results for the sample complexity for learning probabilistic automata—they showed that a polynomial sample is sufficient for MLE. We demonstrate positive results for the more general class of
probabilistic grammars which goes beyond probabilistic automata. Abe and Warmuth also showed that the problem of finding or even approximating the maximum likelihood solution for a two-state
probabilistic automaton with an alphabet of an arbitrary size is hard. Even though these results extend to probabilistic grammars to some extent, we provide a novel proof that illustrates the
NP-hardness of identifying the maximum likelihood solution for probabilistic grammars in the specific framework of “proper approximations” that we define in this article. Whereas Abe and Warmuth show
that the problem of maximum likelihood maximization for two-state HMMs is not approximable within a certain factor in time polynomial in the alphabet and the length of the observed sequence, we show
that there is no polynomial algorithm (in the length of the observed strings) that identifies the maximum likelihood estimator in our framework. In our reduction, from 3-SAT to the problem of maximum
likelihood estimation, the alphabet used is binary and the grammar size is proportional to the length of the formula. In Abe and Warmuth, the alphabet size varies, and the number of states is two.
This article proceeds as follows. In Section 2 we review the background necessary from Vapnik's (1998) empirical risk minimization framework. This framework is reduced to maximum likelihood
estimation when a specific loss function is used: the log-loss.¹ There are some shortcomings in using the empirical risk minimization framework in its simplest form. In its simplest form, the ERM
framework is distribution-free, which means that we make no assumptions about the distribution that generated the data. Naively attempting to apply the ERM framework to probabilistic grammars in the
distribution-free setting does not lead to the desired sample complexity bounds. The reason for this is that the log-loss diverges whenever small probabilities are allocated in the learned hypothesis
to structures or strings that have a rather large probability in the probability distribution that generates the data. With a distribution-free assumption, therefore, we would have to give treatment
to distributions that are unlikely to be true for natural language data (e.g., where some extremely long sentences are very probable).
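The divergence of the log-loss is easy to see numerically; this illustrative Python sketch (not from the article) computes the expected log-loss of a model q under a true distribution p:

```python
import math

# The expected log-loss blows up as the model assigns vanishing probability
# to an event that the true distribution makes likely.
def expected_log_loss(p, q):
    return sum(pi * -math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
print(round(expected_log_loss(p, [0.5, 0.5]), 3))       # 0.693 (the entropy of p)
print(expected_log_loss(p, [1e-12, 1.0 - 1e-12]) > 13)  # True: already huge
```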
To correct for this, we move to an analysis in a distribution-dependent setting, by presenting a set of assumptions about the distribution that generates the data. In Section 3 we discuss
probabilistic grammars in a general way and introduce assumptions about the true distribution that are reasonable when our data come from natural language examples. It is important to note that this
distribution need not be a probabilistic grammar.
The next step we take, in Section 4, is approximating the set of probabilistic grammars over which we maximize likelihood. This is again required in order to overcome the divergence of the log-loss
for probabilities that are very small. Our approximations are based on bounded approximations that have been used for deriving sample complexity bounds for graphical models in a distribution-free
setting (Dasgupta 1997).
Our approximations have two important properties: They are, by themselves, probabilistic grammars from the family we are interested in estimating, and they become a tighter approximation around the
family of probabilistic grammars we are interested in estimating as more samples are available.
Moving to the distribution-dependent setting and defining proper approximations enables us to derive sample complexity bounds. In Section 5 we present the sample complexity results for both the
supervised and unsupervised cases. A question that lingers at this point is whether it is computationally feasible to maximize likelihood in our framework even when given enough samples.
In Section 6, we describe algorithms we use to estimate probabilistic grammars in our framework, when given access to the required number of samples. We show that in the supervised case, we can
indeed maximize likelihood in our approximation framework using a simple algorithm. For the unsupervised case, however, we show that maximizing likelihood is NP-hard. This fact is related to a notion
known in the learning theory literature as inherent unpredictability (Kearns and Vazirani 1994): Accurate learning is computationally hard even with enough samples. To overcome this difficulty, we
adapt the expectation-maximization algorithm (Dempster, Laird, and Rubin 1977) to approximately maximize likelihood (or minimize log-loss) in the unsupervised case with proper approximations.
In Section 7 we discuss some related ideas. These include the failure of an alternative kind of distributional assumption and connections to regularization by maximum a posteriori estimation with
Dirichlet priors. Longer proofs are included in the appendices. A table of notation that is used throughout is included as Table D.1 in Appendix D.
This article builds on two earlier papers. In Cohen and Smith (2010b) we presented the main sample complexity results described here; the present article includes significant extensions, a deeper
analysis of our distributional assumptions, and a discussion of variants of these assumptions, as well as related work, such as that about the Tsybakov noise condition. In Cohen and Smith (2010c) we
proved NP-hardness for unsupervised parameter estimation of probabilistic context-free grammars (PCFGs) (without approximate families). The present article uses a similar type of proof to achieve
results adapted to empirical risk minimization in our approximation framework.
2. Empirical Risk Minimization and Maximum Likelihood Estimation
We begin by introducing some notation. We seek to construct a predictive model that maps inputs from a space X (strings) to outputs from a space Z (derivations), where the pairs (x, z) are generated by an unknown joint distribution p(x,z). (Here and elsewhere, p will denote a probability mass function.) We are interested in estimating the distribution p from examples, either in a supervised setting, where we are provided with examples of the form (x[i], z[i]), or in an unsupervised setting, where only the strings x[i] are observed; we return to the unsupervised setting in Section 5. We will use q to denote the estimated distribution.
In order to estimate p as accurately as possible using q(x,z), we are interested in minimizing the log-loss, that is, in finding q[opt] = argmin[q ∈ Q] E[p][−log q(x,z)] from a fixed family of distributions Q. Note that if p ∈ Q then q[opt] = p, in which case the value of the log-loss is the entropy of p. Indeed, more generally, this optimization is equivalent to finding q such that it minimizes the KL divergence from p to q.
Because p is unknown, we cannot hope to minimize the log-loss directly. Given a set of examples (x[1], z[1]), …, (x[n], z[n]), however, there is a natural candidate to use instead of p: the empirical distribution p̃, defined as
p̃(x, z) = (1/n) ∑[i=1..n] I((x, z) = (x[i], z[i]))     (1)
where I((x, z) = (x[i], z[i])) equals 1 if (x, z) = (x[i], z[i]) and 0 otherwise.
We then set up the problem as one of empirical risk minimization (ERM), that is, trying to find q* such that
q* = argmin[q ∈ Q] E[p̃][−log q(x, z)]     (3)
Equation (3) immediately shows that minimizing empirical risk using the log-loss is equivalent to maximizing likelihood, which is a common statistical principle used for estimating a probabilistic grammar in computational linguistics (Charniak 1993; Manning and Schütze 1999).
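The equivalence between minimizing empirical log-loss and maximizing likelihood can be checked numerically on a toy family of distributions. This is only an illustrative sketch: the sample, the candidate grid, and the function names are invented here, not taken from the article.

```python
import math

def empirical_risk(sample, q):
    """Empirical log-loss: (1/n) * sum over the sample of -log q(z_i)."""
    return sum(-math.log(q[z]) for z in sample) / len(sample)

def log_likelihood(sample, q):
    """Log-likelihood of the sample under q."""
    return sum(math.log(q[z]) for z in sample)

# A toy sample over the outcome space {"a", "b"} and a small grid of
# candidate distributions q (standing in for the family Q).
sample = ["a", "a", "b", "a"]
candidates = [{"a": t, "b": 1.0 - t} for t in (0.25, 0.5, 0.75, 0.9)]

erm = min(candidates, key=lambda q: empirical_risk(sample, q))
mle = max(candidates, key=lambda q: log_likelihood(sample, q))
assert erm is mle   # the two criteria select the same hypothesis
print(erm["a"])     # 0.75 -- the relative frequency of "a" in the sample
```

The hypothesis minimizing the empirical log-loss is exactly the one maximizing the likelihood, which is all the equivalence asserts.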
We are interested in bounding the excess risk of q*, the difference between the expected risk of q* and the expected risk of q[opt]. The excess risk equals zero if q[opt] = p, in which case the expected risk equals the entropy of p. In a typical case, where we do not necessarily have p ∈ Q, the excess risk is bounded from above by the KL divergence between p and q.
If, for every q ∈ Q, the empirical risk is within ε of the expected risk (with high probability), then we can “sandwich” the following quantities:
E[p][−log q[opt]] ≤ E[p][−log q*] ≤ E[p̃][−log q*] + ε ≤ E[p̃][−log q[opt]] + ε ≤ E[p][−log q[opt]] + 2ε
where the inequalities come from the fact that q[opt] minimizes the expected risk and q* minimizes the empirical risk. The consequence of Equation (7) is that the expected risk of q* is at most 2ε away from the expected risk of q[opt], and as a result, the excess risk of q* is smaller than 2ε. Intuitively, this means that, under a large sample, q* does not give much worse results than q[opt] under the criterion of the log-loss.
Unfortunately, the regularity conditions which are required for the convergence of the empirical risk to the expected risk do not hold in our setting, because the log-loss is unbounded for probabilistic grammars.
We note that all discussion of convergence in this section has been about convergence in probability. For example, we want Equation (6) to hold with high probability—for most samples of size n. We
will make this notion more rigorous in Section 2.2.
2.1 Empirical Risk Minimization and Structural Risk Minimization Methods
It has been noted in the literature (Vapnik 1998; Koltchinskii 2006) that often the class Q is too complex for the empirical risk minimizer to behave well, and that it is preferable to minimize the risk over a sequence of approximating subclasses of increasing complexity. Structural risk minimization (Vapnik 1998) and the method of sieves (Grenander 1981) are examples of methods that adopt such an approach. Structural risk minimization, for example, can be represented in many cases as a penalization of the empirical risk method, using a regularization term.
In our case, the level of “complexity” is related to the allocation of small probabilities to derivations in the grammar by a distribution q: such small probabilities make the log-loss diverge. To solve this issue with the complexity of Q, we define in Section 4 a series of approximations Q[1], Q[2], …, where the index m is the number of samples we draw for the learner. We are then interested in the convergence to zero of the empirical process sup[q ∈ Q[n]] |E[p̃][−log q] − E[p][−log q]|. In Section 4 we show that the minimizer of the empirical risk over Q[n] is an asymptotic empirical risk minimizer (in our specific framework), which means that its expected risk converges to the optimal expected risk over Q.
2.2 Sample Complexity Bounds
Knowing that we are interested in the convergence of this empirical process, we can ask: “At what rate does this empirical process converge?”
Because the quantity depends on a random sample, the question is posed probabilistically (Vapnik 1998): “How many samples n are required so that with probability 1 − δ the empirical risk is within ε of the expected risk?” The answer depends, in general, on the distribution p that generates the data.
A complete distribution-free setting is not appropriate for analyzing natural language. This setting poses technical difficulties with the convergence of the log-loss. Instead, we make empirically motivated assumptions about p, parametrize these assumptions in several ways, and then calculate sample complexity bounds of the above form that depend on the parameters describing p.
The learning setting, then, can be described as follows. The user decides on a level of accuracy (ε) which the learning algorithm has to reach with confidence (1 − δ). Then, n samples are drawn from p and presented to the learning algorithm. The learning algorithm then returns a hypothesis according to Equation (9).
3. Probabilistic Grammars
We begin this section by discussing the family of probabilistic grammars. A probabilistic grammar defines a probability distribution over a certain kind of structured object (a derivation of the
underlying symbolic grammar) explained step-by-step as a stochastic process. Hidden Markov models (HMMs), for example, can be understood as a random walk through a probabilistic finite-state network,
with an output symbol sampled at each state. PCFGs generate phrase-structure trees by recursively rewriting nonterminal symbols as sequences of “child” symbols (each itself either a nonterminal
symbol or a terminal symbol analogous to the emissions of an HMM).
Each step or emission of an HMM and each rewriting operation of a PCFG is conditionally independent of the others given a single structural element (one HMM or PCFG state); this Markov property
permits efficient inference over derivations given a string.
In general, a probabilistic grammar 〈G, θ〉 defines the joint probability of a string x and a grammatical derivation z:
p(x, z | θ, G) = ∏[k=1..K] ∏[i=1..N[k]] θ[k,i]^ψ[k,i](z)     (11)
where ψ[k,i] is a function that “counts” the number of times the kth distribution's ith event occurs in the derivation. The parameters θ are a collection of K multinomials 〈θ[1], … , θ[K]〉, the kth of which includes N[k] competing events. If we let θ[k] = 〈θ[k,1], … , θ[k,N[k]]〉, each θ[k,i] is a probability, such that θ[k,i] ≥ 0 and ∑[i=1..N[k]] θ[k,i] = 1.
We denote by Θ[G] this parameter space for θ. The grammar G dictates the support of q in Equation (11). As is often the case in probabilistic modeling, there are different ways to carve up the random variables. We can think of x and z as correlated structure variables (often x is known if z is known), or we can treat the derivation event counts ψ(z) as the random variable of interest. The string x is always a deterministic function of z, so we use the distribution p(z) interchangeably with p(x,z).
Note that there may be many derivations z for a given string x—perhaps even infinitely many in some kinds of grammars. For HMMs, there are three kinds of multinomials: a starting state multinomial, a
transition multinomial per state and an emission multinomial per state. In that case K = 2s + 1, where s is the number of states. The value of N[k] depends on whether the kth multinomial is the
starting state multinomial (in which case N[k] = s), transition multinomial (N[k] = s), or emission multinomial (N[k] = t, with t being the number of symbols in the HMM). For PCFGs, each multinomial
among the K multinomials corresponds to a set of N[k] context-free rules headed by the same nonterminal. The parameter θ[k,i] is then the probability of the ith rule for the kth nonterminal.
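The product form of the joint probability can be evaluated directly from the sufficient statistics ψ. A minimal sketch, with a toy parameter set and toy counts invented purely for illustration:

```python
import math

def joint_log_prob(theta, psi):
    """log p(x, z | theta) = sum over (k, i) of psi[k][i] * log theta[k][i],
    i.e., the log of the product form of the joint probability."""
    return sum(psi[k][i] * math.log(theta[k][i])
               for k in range(len(theta)) for i in range(len(theta[k])))

# A toy grammar with K = 2 multinomials (values invented for illustration):
theta = [[0.7, 0.3],          # e.g., a binomial over two competing rules
         [0.5, 0.25, 0.25]]   # e.g., a 3-event emission distribution
psi = [[2, 1],                # event counts psi[k][i] for one derivation z
       [1, 0, 2]]
lp = joint_log_prob(theta, psi)
# Sanity check against the direct product of powers:
assert abs(math.exp(lp) - 0.7**2 * 0.3 * 0.5 * 0.25**2) < 1e-12
```

Working in log space, as here, is also what keeps the computation stable when derivations are long and the individual probabilities are small.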
We assume that G denotes a fixed grammar, such as a context-free or regular grammar. We let D(G) denote the set of all possible derivations of G. We define D[x](G) = {z ∈ D(G) | yield(z) = x}. We use deg(G) to denote the “degree” of G, i.e., deg(G) = max[k] N[k]. We let |x| denote the length of the string x, and |z| the length (number of event tokens) of the derivation z.
Going back to the notation in Section 2, q would be a specific probabilistic grammar with a specific θ. We therefore treat the problem of ERM with probabilistic grammars as the problem of parameter estimation—identifying θ from complete data or incomplete data (strings x are visible but the derivations z are not). We can also view parameter estimation as the identification of a hypothesis from the concept space {h[θ] | θ ∈ Θ[G]} (where h[θ] is a distribution of the form of Equation [11]) or, equivalently, from the negated log-concept space {−log h[θ] | θ ∈ Θ[G]}. In what follows, we keep the grammar G fixed and use the parametrization θ to index hypotheses.
3.1 Distributional Assumptions about Language
In this section, we describe a parametrization of assumptions we make about the distribution p(x,z), the distribution that generates derivations from D(G) (note that p does not have to be a
probabilistic grammar). We first describe empirical evidence about the decay of the frequency of long strings x.
Figure 1 shows the frequency of sentence length for treebanks in various languages.
The trend in the plots clearly shows that in the extended tail of the curve, all languages have an exponential decay of probabilities as a function of sentence length. To test this, we performed a simple regression of frequencies using an exponential curve. We estimated each curve for each language using a curve of the form f(l) = c·r^l. This estimation was done by minimizing squared error between the frequency versus sentence length curve and the approximate version of this curve. The data points used for the approximation are (l, f(l)), where l denotes sentence length and f(l) denotes frequency, selected from the extended tail of the distribution. Extended tail here refers to all points with length longer than the length with the highest frequency in the treebank. The goal of focusing on the tail is to avoid approximating the head of the curve, which is actually a monotonically increasing function. We plotted the approximate curve together with a length versus frequency curve for new syntactic data. It can be seen (Figure 1) that the approximation is rather accurate in these corpora.
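The tail fit just described can be sketched in a few lines. For simplicity this version minimizes squared error in log space (a log-linear fit) rather than on the raw frequencies, and the synthetic data stand in for real treebank counts; both are simplifying assumptions, not the article's exact procedure.

```python
import math

def fit_exponential_tail(lengths, freqs, tail_start):
    """Fit f(l) ~= c * r**l to the tail (l >= tail_start) by least squares
    on log-frequencies (a log-linear simplification of the fit in the text)."""
    pts = [(l, math.log(f)) for l, f in zip(lengths, freqs)
           if l >= tail_start and f > 0]
    n = len(pts)
    sx = sum(l for l, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(l * l for l, _ in pts); sxy = sum(l * y for l, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), math.exp(slope)  # c, r

# Synthetic length/frequency data with true decay rate r = 0.8 in the tail.
lengths = list(range(5, 41))
freqs = [1000 * 0.8 ** l for l in lengths]
c, r = fit_exponential_tail(lengths, freqs, tail_start=10)
assert abs(r - 0.8) < 1e-6 and abs(c - 1000) < 1e-3
```

On real counts the fit would only be approximate, of course; the point is that a single rate r summarizes the tail.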
As a consequence of this observation, we make a few assumptions about G and p(x,z):
• Derivation length proportional to sentence length: There is an α ≥ 1 such that, for all z, |z| ≤ α|yield(z)|. Further, |z| ≥ |x|. (This prohibits unary cycles.)
• Exponential decay of derivations: There is a constant r < 1 and a constant L ≥ 0 such that p(z) ≤ Lr^|z|. Note that the assumption here is about the frequency of length of separate derivations,
and not the aggregated frequency of all sentences of a certain length (cf. the discussion above referring to Figure 1).
• Exponential decay of strings: Let Λ(k) = |{z ∈ D(G) | |z| = k}| be the number of derivations of length k in G. We assume that Λ(k) is an increasing function, and complete it such that it is defined over all positive numbers. Taking r as before, we assume there exists a constant q < 1 such that Λ^2(k) r^k ≤ q^k (and as a consequence, Λ(k) r^k ≤ q^k). This implies that the number of derivations of length k may be exponentially large (e.g., as with many PCFGs), but is bounded by (q/r)^k.
• Bounded expectations of rules: There is a B < ∞ such that E[p][ψ[k,i](z)] ≤ B for all k and i.
These assumptions hold for any p whose support consists of a finite set. These assumptions also hold in many cases when p itself is a probabilistic grammar. Also, we note that the last requirement of bounded expectations is optional, and it can be inferred from the rest of the requirements with B = L/(1 − q)^2. We make this requirement explicit for simplicity of notation later. We denote by 𝒫 the family of distributions that satisfy all of these requirements.
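To make the relationship between Λ, r, and q concrete, here is a toy numerical check. The grammar with Λ(k) = 2^k derivations per length and the particular constants are invented for illustration only.

```python
# Toy check of the "exponential decay of strings" assumption: with
# Lambda(k) = 2**k derivations of length k (a highly ambiguous toy grammar)
# and per-derivation decay rate r = 0.1, the condition
# Lambda(k)**2 * r**k <= q**k holds with q = 4 * r = 0.4 < 1.
r, q = 0.1, 0.4
for k in range(1, 30):
    lam = 2 ** k
    assert lam ** 2 * r ** k <= q ** k * (1.0 + 1e-9)
    assert lam * r ** k <= q ** k          # the weaker consequence
print("assumption holds for k = 1..29")
```

The check also illustrates the trade-off in the assumption: the more ambiguous the grammar (faster-growing Λ), the faster the per-derivation probabilities must decay for some q < 1 to exist.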
There are other cases in the literature of language learning where additional assumptions are made on the learned family of models in order to obtain positive learnability results. For example, Clark
and Thollard (2004) put a bound on the expected length of strings generated from any state of probabilistic finite state automata, which resembles the exponential decay of strings we have for p in
this article.
An immediate consequence of these assumptions is that the entropy of p is finite and bounded by a quantity that depends on L, r and q.^^5 Bounding entropy of labels (derivations) given inputs
(sentences) is a common way to quantify the noise in a distribution. Here, both the sentential entropy (H[s](p) = − ∑[x]p(x) log p(x)) and the derivational entropy (H[d](p) = − ∑[x,z]p(x,z) log p(x,z)) are bounded. This is stated in the following result.
First note that H[s](p) ≤ H[d](p) holds by the data processing inequality (Cover and Thomas 1991), because the sentential probability distribution p(x) is a coarser version of the derivational probability distribution p(x,z). Now, consider H[d](p). For simplicity of notation, we use p(z) instead of p(x,z); the yield of z is a function of z, and therefore can be omitted from the distribution. It holds that
H[d](p) = − ∑[z ∈ A] p(z) log p(z) − ∑[z ∈ B] p(z) log p(z)
where A = {z | p(z) > 1/e} and B = {z | p(z) ≤ 1/e}. Note that the function −t log t reaches its maximum for t = 1/e. We therefore have
− ∑[z ∈ A] p(z) log p(z) ≤ |A|/e
We give a bound on |A|, the number of “high probability” derivations. Because we have p(z) ≤ Lr^|z|, we can find the maximum length of a derivation that has a probability of more than 1/e (and hence, it may appear in A) by solving 1/e ≤ Lr^|z| for |z|, which leads to |z| ≤ log(Le)/log(1/r). Therefore, there are at most Λ(log(Le)/log(1/r)) derivations in A, and we have
− ∑[z ∈ A] p(z) log p(z) ≤ Λ(log(Le)/log(1/r))/e     (12)
where we use the monotonicity of Λ. Consider B (the “low probability” derivations). Grouping derivations by length and using p(z) ≤ Lr^|z| together with the monotonicity of −t log t on (0, 1/e], we have
− ∑[z ∈ B] p(z) log p(z) ≤ − ∑[k] Λ(k) Lr^k log(Lr^k)     (13)
which is finite because Λ(k) r^k ≤ q^k with q < 1; Equation (13) holds from the assumptions about p. Putting Equations (12) and (14) together, we obtain the result.▪
We note that another common way to quantify the noise in a distribution is through the notion of Tsybakov noise (Tsybakov 2004; Koltchinskii 2006). We discuss this further in Section 7.1, where we
show that Tsybakov noise is too permissive, and probabilistic grammars do not satisfy its conditions.
3.2 Limiting the Degree of the Grammar
When approximating a family of probabilistic grammars, it is much more convenient when the degree of the grammar is limited. In this article, we limit the degree of the grammar by making the
assumption that all N[k] ≤ 2. This assumption may seem, at first glance, somewhat restrictive, but we show next that for PCFGs (and as a consequence, other formalisms), this assumption does not limit
the total generative capacity that we can have across all context-free grammars.
We first show that any context-free grammar with arbitrary degree can be mapped to a corresponding grammar with all N[k] ≤ 2 that generates derivations equivalent to derivations in the original grammar. Such a grammar is also called a “covering grammar” (Nijholt 1980; Leermakers 1989). Let G be a CFG. Let A be the kth nonterminal. Consider the rules A → β[1], …, A → β[N[k]] in which A appears on the left side. For each rule A → β[i], we create a new nonterminal A[i] in G′ such that A[i] has two rewrite rules: A[i] → β[i] and A[i] → A[i+1]. In addition, we create the rule A → A[1]. Figure 2 demonstrates an example of this transformation on a small context-free grammar.
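The covering-grammar transformation can be sketched in code. The rule encoding (a dict from nonterminal to right-hand sides) and the naming scheme A_1, A_2, … are illustrative choices, not the article's notation.

```python
def binarize(rules):
    """Map a CFG given as {nonterminal: [rhs tuples]} to a covering grammar
    in which every nonterminal has at most two rewrite rules, by chaining
    fresh nonterminals A_1, ..., A_N as described above."""
    new_rules = {}
    for A, rhss in rules.items():
        if len(rhss) <= 2:
            new_rules[A] = list(rhss)
            continue
        new_rules[A] = [(A + "_1",)]                 # A -> A_1
        for i, rhs in enumerate(rhss, start=1):
            options = [rhs]                          # A_i -> beta_i
            if i < len(rhss):
                options.append((f"{A}_{i + 1}",))    # A_i -> A_{i+1}
            new_rules[f"{A}_{i}"] = options
    return new_rules

grammar = {"S": [("NP", "VP"), ("NP",), ("VP",), ("S", "S")]}
covering = binarize(grammar)
assert all(len(opts) <= 2 for opts in covering.values())
print(sorted(covering))  # ['S', 'S_1', 'S_2', 'S_3', 'S_4']
```

Collapsing the chain nonterminals recovers the original derivations, which is what makes this a covering grammar rather than a change of language.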
It is easy to verify that the resulting grammar G′ has an equivalent capacity to the original CFG, G. A simple transformation that converts each derivation in the new grammar to a derivation in the
old grammar would involve collapsing any path of nonterminals added to G′ (i.e., all A[i] for nonterminal A) so that we end up with nonterminals from the original grammar only. Similarly, any
derivation in G can be converted to a derivation in G′ by adding new nonterminals through unary application of rules of the form A[i] → A[i+1]. Given a derivation z in G, we denote by z′ the corresponding derivation in G′ obtained after adding the new nonterminals A[i] to z. Throughout this article, we will refer to the normalized form of G′ as a “binary normal form.”^^6
Note that K′, the number of multinomials in the binary normal form, is a function of both the number of nonterminals in the original grammar and the number of rules in that grammar. The following shows that any probabilistic context-free grammar can be translated to a PCFG with max[k]N[k] ≤ 2 such that the two PCFGs induce equivalent distributions over derivations.
Utility Lemma 1. Let a[i] ∈ [0,1], i ∈ {1, … , N}, such that ∑[i]a[i] = 1. Define b[1] = a[1], c[1] = 1 − a[1], and for i ≥ 2 define b[i] = a[i]/∏[j<i]c[j] and c[i] = 1 − b[i]. Then a[i] = b[i] ∏[j<i]c[j].
See Appendix A for the proof of Utility Lemma 1.
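The binomialization underlying the lemma is a stick-breaking decomposition. This sketch (function names invented) checks numerically that the chain probabilities reconstruct the original multinomial:

```python
def binomialize(a):
    """Stick-breaking: turn a multinomial (a_1, ..., a_N) into chain
    probabilities b_i such that a_i = b_i * prod_{j<i} (1 - b_j)."""
    b, rest = [], 1.0
    for a_i in a[:-1]:
        b_i = a_i / rest
        b.append(b_i)
        rest *= 1.0 - b_i
    return b

def recover(b):
    """Invert binomialize: rebuild the multinomial from the chain."""
    a, rest = [], 1.0
    for b_i in b:
        a.append(b_i * rest)
        rest *= 1.0 - b_i
    a.append(rest)   # the last event takes all remaining mass
    return a

theta = [0.5, 0.3, 0.2]
chain = binomialize(theta)   # -> [0.5, 0.6]
assert all(abs(x - y) < 1e-12 for x, y in zip(theta, recover(chain)))
```

Each b_i is a conditional probability ("event i, given that none of the earlier events occurred"), which is why a chain of binomials suffices.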
Theorem 1. Let 〈G, θ〉 be a probabilistic context-free grammar. Let G′ be the binarizing transformation of G as defined earlier. Then, there exists θ′ for G′ such that for any z ∈ D(G) we have p(z | θ, G) = p(z′ | θ′, G′).
Proof. For the grammar G, index the set {1, …, K} with nonterminals ranging from A[1] to A[K]. Define G′ as before. We need to define θ′. Index the multinomials in G′ by (k,i), each having two events. Following Utility Lemma 1, let μ[(k,1),1] = θ[k,1] and μ[(k,1),2] = 1 − θ[k,1], and for i ≥ 2 set μ[(k,i),1] = θ[k,i]/∏[j<i]μ[(k,j),2] and μ[(k,i),2] = 1 − μ[(k,i),1].
From Chi (1999), we know that the weighted grammar 〈G′, μ〉 can be converted to a probabilistic context-free grammar 〈G′, θ′〉, through a construction of θ′ based on μ, such that p(z′ | μ, G′) = p(
z′ | θ′, G′).▪
The proof for Theorem 1 gives a construction of the parameters θ′ of G′ such that 〈G, θ〉 is equivalent to 〈G′, θ′〉. The construction of θ′ can also be reversed: Given θ′ for G′, we can construct θ
for G so that again we have equivalence between 〈G, θ〉 and 〈G′, θ′〉.
In this section, we focused on presenting parametrized, empirically justified distributional assumptions about language data that will make the analysis in later sections more manageable. We showed
that these assumptions bound the amount of entropy as a function of the assumption parameters. We also made an assumption about the structure of the grammar family, and showed that it entails no loss
of generality for CFGs. Many other formalisms can follow similar arguments to show that the structural assumption is justified for them as well.
4. Proper Approximations
In order to follow the empirical risk minimization described in Section 2.1, we have to define a series of approximations for the family of probabilistic grammars (Equation [16]) with convergence on the sequence of concept spaces we defined (Equation [10]). The concept spaces in the sequence vary as a function of the number of samples we have. We next construct the sequence of concept spaces, and in Section 5 we return to the learning model. Our approximations are based on the concept of bounded approximations (Abe, Takeuchi, and Warmuth 1991; Dasgupta 1997), which were originally designed for graphical models.^^7 A bounded approximation is a subset of a concept space which is controlled by a parameter that determines its tightness. Here we use this idea to define a series of subsets of the original concept space.
Let F[m], m ∈ {1, 2, …}, be a sequence of concept spaces. We consider three properties of elements of this sequence, which should hold for m > M for a fixed M: containment, boundedness, and tightness. We say that the sequence properly approximates the original concept space if there exist ε[tail](m), ε[bound](m), and K[m] such that, for all m larger than some M, containment, boundedness, and tightness all hold.
In a good approximation, ε[tail](m) and ε[bound](m) decrease quickly as a function of m while K[m] does not grow too quickly. As we will see in Section 5, we cannot have an arbitrarily fast convergence rate (by, for example, taking a subsequence of the approximations), because K[m] has a great effect on the number of samples required to obtain accurate estimation.
4.1 Constructing Proper Approximations for Probabilistic Grammars
We now focus on constructing proper approximations for probabilistic grammars whose degree is limited to 2. Proper approximations could, in principle, be used with losses other than the log-loss,
though their main use is for unbounded losses. Starting from this point in the article, we focus on using such proper approximations with the log-loss.
We construct the approximations using a transformation T(θ, γ) that shifts every binomial parameter pair θ[k] = 〈θ[k,1], θ[k,2]〉 in the probabilistic grammar by at most γ: whenever a parameter falls below γ, it is raised to γ, and its sibling parameter is decreased accordingly so that the pair still sums to one. Note that γ ≤ 1/2. Fix a constant s > 1. We denote by T(θ, γ) the same transformation on θ (which outputs the new shifted parameters) and we denote by T(Θ[G], γ) the set {T(θ, γ) | θ ∈ Θ[G]}. For each m ∈ ℕ, define the mth approximation to be the family obtained with γ = m^−s.
When considering our approach to approximate a probabilistic grammar by increasing its parameter probabilities to be over a certain threshold, it becomes clear why we are required to limit the grammar to have only two rules per multinomial and why we are required to use the normal form from Section 3.2 with grammars of degree 2. Consider the PCFG rules in Table 1. There are different ways to move probability mass to the rule with small probability. This leads to a problem with identifiability of the approximation: How does one decide how to reallocate probability to the small-probability rules? By binarizing the grammar in advance, we arrive at a single way to reallocate mass when required (i.e., move mass from the high-probability rule to the low-probability rule). This leads to a simpler proof for sample complexity bounds and a single bound (rather than different bounds depending on different smoothing operators). We note, however, that the choices made in binarizing the grammar imply a particular way of smoothing the probability across the original rules.
Rule . θ . General . η = 0 . η = 0.01 . η = 0.005 .
S → NP VP 0.09 0.01 0.1 0.1 0.1
S → NP 0.11 0.11 − η 0.11 0.1 0.105
S → VP 0.8 0.8 − γ + η 0.79 0.8 0.795
We now describe how this construction of approximations satisfies the properties mentioned in Section 4, specifically, the boundedness property and the tightness property.
Proposition 2 (Boundedness). There exists a constant β(L, q, p, N) > 0 such that the boundedness property holds with K[m] = sN log^3 m and with ε[bound](m) decreasing in m.
See Appendix A for the proof of Proposition 2.
See Appendix A for the proof of Proposition 3.
We now have proper approximations for probabilistic grammars. These approximations are defined as a series of probabilistic grammars, related to the family of probabilistic grammars we are interested in estimating. They satisfy three properties: containment (they are a subset of the family of probabilistic grammars we are interested in estimating), boundedness (their log-loss does not diverge to infinity quickly), and tightness (there is only a small probability mass on which they are not tight approximations).
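One way to picture the approximation family is as a clipping operator applied to each binomial. This is an illustrative sketch: the function name and the exact margin schedule γ = m^−s are stated here as assumptions.

```python
def truncate_binomial(theta, gamma):
    """Clip a binomial parameter pair so that both coordinates lie in
    [gamma, 1 - gamma], keeping the pair summing to 1 (gamma <= 1/2)."""
    t1 = min(max(theta[0], gamma), 1.0 - gamma)
    return (t1, 1.0 - t1)

m, s = 100, 2                 # approximation index and a rate constant s > 1
gamma = float(m) ** (-s)      # margin gamma = m**(-s) (assumed schedule)
print(truncate_binomial((0.00001, 0.99999), gamma))  # (0.0001, 0.9999)
```

The clipped grammar is still a probabilistic grammar (containment), its log-loss is bounded because no parameter is below γ (boundedness), and as m grows the margin shrinks, so less and less of the parameter space is perturbed (tightness).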
4.2 Coupling Bounded Approximations with Number of Samples
At this point, the number of samples n is decoupled from the bounded approximations. We need to choose m as a function of the number of samples, m(n). As mentioned earlier, there is a clear trade-off between choosing a fast rate for m(n) (such as m(n) = n^k for some k > 1) and a slower rate (such as m(n) = log n). The faster the rate is, the tighter the family of approximations that we use for n samples. If the rate is too fast, however, then K[m] grows quickly as well. In that case, because our sample complexity bounds are increasing functions of such K[m], the bounds will degrade.
To balance the trade-off, we choose m(n) = n. As we see later, this gives sample complexity bounds which are asymptotically interesting for both the supervised and unsupervised cases.
4.3 Asymptotic Empirical Risk Minimization
It would be compelling to determine whether the empirical risk minimizer over the approximation family is an asymptotic empirical risk minimizer. This would mean that the risk of the empirical risk minimizer over the approximation family converges to the optimal risk as we increase n, the number of samples, and m, the index of the approximation of the concept space. Let g[n] be the minimizer of the empirical risk over the nth approximation.
Let D = {z[1],…,z[n]} be a sample from p(z). Asymptotic empirical risk minimization requires this convergence as n → ∞ (Shalev-Shwartz et al. 2009). Then, we have the following lemma.
See Appendix A for the proof of Lemma 1.
Let D = {z[1],…,z[n]} be a sample of derivations from G. Let A[i], for i ∈ {1,…,n}, be the event that the ith derivation exceeds a given length threshold, and let A = ∪[i]A[i]. Applying a union bound over the events A[i], where Equation (16) comes from the z[i] being independent and L is the constant from Section 3.1, we obtain the stated bound.
5. Sample Complexity Bounds
Equipped with the framework of proper approximations as described previously, we now give our main sample complexity results for probabilistic grammars. These results hinge on the convergence of covering numbers for the approximating concept spaces.
5.1 Covering Numbers and Bounds on Covering Numbers
We next give a brief overview of covering numbers. A cover provides a way to reduce a class of functions to a much smaller (finite, in fact) representative class such that each function in the original class is represented using a function in the smaller class. Let d(f,g) be a distance measure between two functions f, g from a class F. An ε-cover is a subset F′ of F such that for every f ∈ F there exists an f′ ∈ F′ with d(f, f′) < ε. The covering number N(ε, F) is the size of the smallest ε-cover of F.
Lemma 2. Uniform convergence of the empirical risk to the expected risk holds, with high probability, for a class of functions whose covering number N(ε, F) grows sufficiently slowly as ε → 0.
See Pollard (1984; Chapter 2, pages 30–31) for the proof of Lemma 2. See also Appendix A.
Covering numbers are rather complex combinatorial quantities which are hard to compute directly. Fortunately, they can be bounded using the pseudo-dimension (Anthony and Bartlett 1999), a generalization of the Vapnik-Chervonenkis (VC) dimension for real functions. In the case of our “binomialized” probabilistic grammars, the pseudo-dimension of the concept space is at most N, because we have N parameters. Hence, the covering number grows polynomially with exponent N. We then have the following lemma (from Pollard [1984] and Haussler [1992]), bounding the covering numbers of the approximating concept spaces.
5.2 Supervised Case
We now give an analysis for the supervised case. This analysis is mostly described as a preparation for the unsupervised case. In general, the families of probabilistic grammars we treat are parametric families, and the maximum likelihood estimator for these families is a consistent estimator in the supervised case. In the unsupervised case, however, lack of identifiability prevents us from getting these traditional consistency results. Also, the traditional results about the consistency of MLE are based on the assumption that the sample is generated from the parametric family we are trying to estimate. This is not the case in our analysis, where the distribution that generates the data does not have to be a probabilistic grammar.
Lemmas 2 and 3 can be combined to get the following sample complexity result.
Theorem 2. Let G be a grammar. Let p(z) be a distribution over derivations that satisfies the requirements in Section 3.1. Let D = {z[1],…,z[n]} be a sample of derivations. Then there exist a constant β(L, q, p, N) and a constant s > 1 such that for any 0 < ε < 1 and 0 < δ < 1, if n is sufficiently large (as a function of ε, δ, N, and β), then with probability at least 1 − δ the empirical risk of every hypothesis in the approximating family is within ε of its expected risk, where β(L, q, p, N) is the constant from Proposition 2. The main idea in the proof is to solve for n in the two inequalities bounding the probability of deviation (based on Equation [17]; see the following) while relying on Lemma 3.
Theorem 2 gives little intuition about the number of samples required for accurate estimation of a grammar because it considers the “additive” setting: The empirical risk is within ε from the
expected risk. More specifically, it is not clear how we should pick ε for the log-loss, because the log-loss can obtain arbitrary values.
We turn now to converting the additive bound in Theorem 2 to a multiplicative bound. Multiplicative bounds can be more informative than additive bounds when the range of the values that the log-loss can obtain is not known a priori. It is important to note that the two views are equivalent (i.e., it is possible to convert a multiplicative bound to an additive bound and vice versa). Let ρ ∈ (0,1) and choose ε = ρK[n]. Then, substituting this ε in Theorem 2, we get that if n is large enough, then, with probability 1 − δ, the ratio between the empirical risk and the expected risk is controlled up to a term involving H(p), where H(p) is the Shannon entropy of p. This means that if we are interested in computing a sample complexity bound such that the ratio between the empirical risk and the expected risk (for log-loss) is close to 1 with high probability, we need to pick ρ such that the right-hand side of Equation (17) is smaller than the desired accuracy level (between 0 and 1). Note that Equation (17) is an oracle inequality—it requires knowing the entropy of p or some upper bound on it.
5.3 Unsupervised Case
It does not immediately follow that the unsupervised case enjoys the same guarantees as the supervised case. We can, however, show boundedness with the same K[n] and the same form of ε[bound](n) as in Proposition 2 (with a constant β′(L, q, p, N) = β′ > 0). This relies on the property of bounded derivation length of p (see Appendix A, Proposition 7). The following result shows that we have tightness as well.
Utility Lemma 2. For a[i], b[i] ≥ 0, if − log ∑[i]a[i] + log ∑[i]b[i] ≥ ε then there exists an i such that − log a[i] + log b[i] ≥ ε.
Computing either the covering number or the pseudo-dimension of the unsupervised concept space directly is hard. Dasgupta (1997) overcomes this problem for Bayesian networks with fixed structure by giving a bound on the covering number for (his respective) concept space. Unfortunately, we cannot fully adopt this approach, because the derivations of a probabilistic grammar can be arbitrarily large. Instead, we present the following proposition, which is based on the “Hidden Variable Rule” from Dasgupta (1997). This proposition bounds the covering number of the unsupervised concept space in terms of the supervised one, using the properties of p mentioned in Section 3.
Utility Lemma 3. For any two positive-valued sequences (a[1],…,a[n]) and (b[1],…,b[n]) we have that − log ∑[i]a[i] + log ∑[i]b[i] ≤ max[i] (− log a[i] + log b[i]).
Proposition 6 (Hidden Variable Rule for Probabilistic Grammars). Consider the concept spaces for the supervised and unsupervised cases, and consider the distribution that uniformly divides the probability mass q(x) across all derivations for the specific x, that is, q(x)/|D[x](G)| for each z ∈ D[x](G). Then the covering number for the unsupervised concept space is bounded by the covering number of the supervised concept space at a finer resolution. The inequality in Equation (18) stems from Utility Lemma 3.
Set m to be the quantity that appears in the proposition to get the necessary result (f′ and f are arbitrary functions in the respective concept spaces, and f[0] is taken from the respective cover).▪
For the unsupervised case, then, we get the following sample complexity result.
Theorem 3. Let G be a grammar. Let p(x,z) be a distribution over derivations which satisfies the requirements in Section 3.1. Let D = {x[1],…,x[n]} be a sample of strings from p(x). Then there exist a constant β′(L, q, p, N) and a constant s > 1 such that for any 0 < δ < 1, 0 < ε < 1, and any n sufficiently large (as a function of ε, δ, N, β′, and the ambiguity function Λ), with probability at least 1 − δ the empirical risk of every hypothesis in the approximating family is within ε of its expected risk.
Theorem 3 states that the number of samples we require in order to accurately estimate a probabilistic grammar from unparsed strings depends on the level of ambiguity in the grammar, represented as Λ(m). We note that this dependence is polynomial, and we consider this a positive result for unsupervised learning of grammars. More specifically, if Λ is an exponential function (such as the case with PCFGs), then, when compared to supervised learning, there is an extra multiplicative factor in the sample complexity in the unsupervised setting that grows with the ambiguity of the grammar.
We note that the bound in Equation (20) can again be reduced to a multiplicative case, similarly to the way we described it for the supervised case. Setting ε = ρK[n] (ρ ∈ (0,1)), we get the corresponding requirement on n.
6. Algorithms for Empirical Risk Minimization
We turn now to describing algorithms and their properties for minimizing empirical risk using the framework described in Section 4.
6.1 Supervised Case
ERM with proper approximations leads to simple algorithms for estimating the probabilities of a probabilistic grammar in the supervised setting. Given an ε > 0 and a δ > 0, we draw n examples
according to Theorem 2. We then set γ = n^−s. To minimize the log-loss with respect to these n examples, we use the proper approximation
Because we make the assumption that deg(G) ≤ 2 (Section 3.2), each multinomial has two events. We minimize the log-loss in Equation (21) under the constraints that γ ≤ θ[k,i] ≤ 1 − γ and θ[k,1] + θ[k,2] = 1. It can be shown that the solution for this optimization problem is a truncated relative-frequency estimate (Equation (22)), where the counts measure how often each rule fires in the examples. (We include a full derivation of this result in Appendix B.) The interpretation of Equation (22) is simple: We count the number of times a rule appears in the samples and then normalize this value by the total number of times rules associated with the same multinomial appear in the samples. This frequency count is the maximum likelihood solution with respect to the full hypothesis class (see Appendix B). Because we constrain ourselves to obtain a value away from 0 or 1 by a margin of γ, we need to truncate this solution, as done in Equation (22). This truncation to a margin γ can be thought of as a smoothing factor that enables us to compute sample complexity bounds. We explore the connection to smoothing with a Dirichlet prior in a maximum a posteriori (MAP) Bayesian setting in Section 7.2.
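To make the truncated estimate of Equation (22) concrete, the following sketch (a hypothetical helper, not from the original derivation) normalizes the rule counts of one binary multinomial and clamps the result to [γ, 1 − γ]:

```python
def truncated_mle(c1, c2, gamma):
    """Truncated relative-frequency estimate for one binary multinomial:
    normalize the rule counts, then clamp to [gamma, 1 - gamma]."""
    total = c1 + c2
    theta1 = c1 / total if total > 0 else 0.5
    theta1 = min(max(theta1, gamma), 1 - gamma)  # truncation by the margin
    return theta1, 1 - theta1

n = 10_000
gamma = n ** -0.5  # gamma = n^(-s), with the illustrative choice s = 1/2
print(truncated_mle(60, 40, gamma))   # interior case: plain relative frequency
print(truncated_mle(9999, 1, gamma))  # boundary case: clamped to 1 - gamma
```

The margin shrinks as the sample grows, so with enough data the estimator coincides with the untruncated maximum likelihood solution.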
6.2 Unsupervised Case
6.2.1 Hardness of ERM with Proper Approximations
It turns out that minimizing Equation (23) under the specified constraints is actually an NP-hard problem when G is a PCFG. This result follows from a proof similar to the one in Cohen and Smith (2010c) for the hardness of Viterbi training and of maximizing log-likelihood for PCFGs. We give the full derivation of this hardness result for PCFGs, together with the modification required to adapt the results of Cohen and Smith to the case of an arbitrary γ margin constraint.
In order to show an NP-hardness result, we need to convert the minimization of Equation (23) into a decision problem, stated as follows.
Problem 1 (Unsupervised Minimization of the Log-Loss with Margin)
Input: A binarized context-free grammar G, a set of sentences x[1], …, x[n], a value γ ∈ [0, 1/2), and a value α ∈ (0, 1].
Output: 1 if there exists θ ∈ Θ(γ) satisfying Equation (24), and 0 otherwise.
We will show the hardness result both when γ is not restricted at all and when we require γ > 0. The proof of the hardness result is achieved by reducing the problem 3-SAT (Sipser 2006), known to be NP-complete, to Problem 1. The problem 3-SAT is defined as follows:
Input: A formula φ = C[1] ∧ ⋯ ∧ C[m] in conjunctive normal form, such that each clause C[j] contains three literals.
Output: 1 if there is a satisfying assignment for φ, and 0 otherwise.
Given an instance of the 3-SAT problem, the reduction will, in polynomial time, create a grammar and a single string such that solving Problem 1 for this grammar and string will yield a solution for
the instance of the 3-SAT problem.
Let C[j] = a[j] ∨ b[j] ∨ c[j] be the jth clause in φ, where a[j], b[j], and c[j] are literals over the set of variables {Y[1],…,Y[N]} (a literal refers to a variable Y[j] or its negation). We define the following CFG G[φ] and string to parse s[φ]:
• 1.
The terminals of G[φ] are the binary digits Σ = {0,1}.
• 2.–4.
We create nonterminals and rules, for each r ∈ {1,…,N}, that encode the two possible truth values of Y[r]; the full rule set follows Cohen and Smith (2010c).
• 5.
We create the rule S[1] → A[1]. For each j ∈ {2,…,m}, we create a rule S[j] → S[j − 1]A[j], where each S[j] is a new nonterminal indexed by j, and each A[j] is also a new nonterminal indexed by j ∈ {1,…,m}.
• 6.
Let C[j] = a[j] ∨ b[j] ∨ c[j] be clause j in φ. Let Y(a[j]) be the variable that a[j] mentions. Let (y[1],y[2],y[3]) be a satisfying assignment for C[j], where y[k] ∈ {0,1} is the value of Y(a[j]), Y(b[j]), and Y(c[j]), respectively, for k ∈ {1,2,3}. For each such clause-satisfying assignment, we add a rule rewriting A[j] accordingly. For each A[j], we would have at most seven rules of this form, because one assignment is logically inconsistent with a[j] ∨ b[j] ∨ c[j].
• 7.
The grammar's start symbol is S[m].
• 8.
The string to parse is s[φ] = (10)^3m, that is, 3m consecutive occurrences of the string 10.
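The clause rules of step 6 can be enumerated mechanically. The sketch below uses hypothetical rule naming ("A1", …; the actual nonterminal inventory follows Cohen and Smith 2010c) and lists, for each clause, its at most seven satisfying assignments:

```python
from itertools import product

def clause_rules(clauses):
    """For each clause C_j = a_j v b_j v c_j (literals given as signed ints:
    +r for Y_r, -r for its negation), enumerate the clause-satisfying
    assignments of step 6, one rule per assignment."""
    rules = []
    for j, clause in enumerate(clauses, start=1):
        for assignment in product([0, 1], repeat=3):
            # A literal is satisfied when its sign matches the assigned value.
            if any((lit > 0) == bool(v) for lit, v in zip(clause, assignment)):
                rules.append((f"A{j}", assignment))
    return rules

# phi = (Y1 v Y2 v not Y3): only the assignment (0, 0, 1) falsifies the
# clause, so exactly seven rules are produced, as stated in step 6.
print(len(clause_rules([(1, 2, -3)])))  # 7
```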
A parse of the string s[φ] using G[φ] will be used to extract an assignment for {Y[1],…,Y[N]}, read off from the rules that appear in the parse tree; we first convert G[φ] to the binary normal form described in Section 3.2. The following lemma gives a condition under which the assignment is consistent (so that contradictions do not occur in the parse tree).
Lemma 4
Let φ be an instance of the 3-SAT problem, and let G[φ] be a probabilistic CFG based on the given grammar with weights θ[φ]. If the (multiplicative) weight of the Viterbi parse (i.e., the highest scoring parse according to the PCFG) of s[φ] is 1, then the assignment extracted from the parse tree is consistent.
Because the probability of the Viterbi parse is 1, all rules that participate in it have probability 1. An inconsistent assignment could arise in only two ways:
• 1.
For any r, an appearance of both rules encoding Y[r] = 0 and rules encoding Y[r] = 1; these rules share a multinomial, so they cannot both have probability 1.
• 2.
For any r, an appearance of rules encoding conflicting values of Y[r] in different clauses; these are ruled out for the same reason.
Thus, both possible inconsistencies are ruled out, resulting in a consistent assignment.▪
Figure 3
gives an example of an application of the reduction.
Lemma 5
Define φ and G[φ] as before. There exists θ[φ] such that the weight of the Viterbi parse of s[φ] is 1 if and only if φ is satisfiable. Moreover, the satisfying assignment is the one extracted from the parse tree with weight 1 of s[φ] under θ[φ].
Assume φ is satisfiable. Each clause C[j] = a[j] ∨ b[j] ∨ c[j] is satisfied using a tuple (y[1],y[2],y[3]), which assigns values for Y(a[j]), Y(b[j]), and Y(c[j]). This assignment corresponds to one of the rules rewriting A[j]: set its probability to 1, and set the probabilities of all other rules of A[j] to 0. In addition, for each r, if Y[r] = y, set the probabilities of the rules encoding that value accordingly. The probabilities of the rules S[j] → S[j − 1]A[j] are set to 1. This assignment of rule probabilities results in a Viterbi parse of weight 1. The converse direction follows from Lemma 4.▪
We are now ready to prove the following result.
Theorem 4
Problem 1 is NP-hard, both when requiring γ > 0 and when fixing γ = 0.
We first describe the reduction for the case γ = 0. In Problem 1, set γ = 0, α = 1, G = G[φ], and x[1] = s[φ]. If φ is satisfiable, then the left side of Equation (24) can attain the value 0 by setting the rule probabilities according to Lemma 5, hence we would return 1 as the result of running Problem 1.
If φ is unsatisfiable, then we could still get the value 0 only if L(G) = {s[φ]}. If G[φ] generates a single derivation for (10)^3m, then we actually do have a satisfying assignment, from Lemma 4. Otherwise (more than a single derivation), the optimal θ cannot concentrate all probability mass on a single derivation of (10)^3m, the only generated sentence, and this is a contradiction to getting the value 0 for Problem 1.
We next show that Problem 1 is NP-hard even if we require γ > 0. Consider the weights from Lemma 5 after being shifted with a margin of γ. Because there is a derivation that uses only rules that have probability 1 − γ, and because the size of the parse tree for (10)^3m is at most 10m (using the binarized G[φ]), the log-loss under these shifted weights can be bounded, assuming α and γ are chosen such that γ < (1 − γ)^10m. The required inequality indeed holds whenever the log-loss exceeds −log α, in which case Problem 1 would return 0.▪
6.2.2 An Expectation-Maximization Algorithm
Instead of solving the optimization problem implied by Equation (21), we propose a rather simple modification to the expectation-maximization (EM) algorithm (Dempster, Laird, and Rubin 1977) to
approximate the optimal solution—this algorithm finds a local maximum for the maximum likelihood problem using proper approximations. The modified algorithm is given in Algorithm 1.
The modification from the usual expectation-maximization algorithm is done in the M-step: Instead of using the expected value of the sufficient statistics by counting and normalizing, we truncate the
values by γ. It can be shown that if θ^(0) ∈ Θ(γ), then the likelihood is guaranteed to increase (and hence, the log-loss is guaranteed to decrease) after each iteration of the algorithm.
The reason for this likelihood increase stems from the fact that the M-step solves the optimization problem of minimizing the log-loss (with respect to θ ∈ Θ(γ)) when the posterior calculated at the E-step is used as the base distribution. This means that the M-step minimizes, at iteration t, the expected log-loss under that posterior; the argument is analogous to the standard convergence proof (e.g., Bishop 2006) for the EM algorithm.
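A minimal sketch of the modified M-step follows, assuming the expected sufficient statistics have already been computed by a standard E-step (e.g., inside–outside); the function name and data layout are illustrative:

```python
def truncated_m_step(expected_counts, gamma):
    """Modified M-step of the algorithm: normalize the expected sufficient
    statistics from the E-step, then truncate each probability into
    [gamma, 1 - gamma] so the new parameters stay inside Theta(gamma)."""
    theta = {}
    for k, (e1, e2) in expected_counts.items():
        p = e1 / (e1 + e2) if e1 + e2 > 0 else 0.5
        p = min(max(p, gamma), 1 - gamma)  # truncation by gamma
        theta[k] = (p, 1 - p)
    return theta

# Hypothetical expected counts for two binary multinomials.
counts = {"k1": (97.3, 2.7), "k2": (40.0, 60.0)}
print(truncated_m_step(counts, gamma=0.05))
```

Only the final projection differs from the usual count-and-normalize M-step, which is why the monotone-likelihood argument carries over when the iterates stay in Θ(γ).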
Our framework can be specialized to trade off two main criteria: the tightness of the proper approximation and the sample complexity. For example, we can improve the tightness of our proper approximations by taking a subsequence of K[n] that grows faster. Table 2 shows the trade-offs between parameters in our model and the effectiveness of learning.
criterion                          | as K[n] increases … | as s increases …
tightness of proper approximation  | improves            | improves
sample complexity bound            | degrades            | degrades
We note that the sample complexity bounds that we give in this article give insight about the asymptotic behavior of grammar estimation, but are not necessarily sufficiently tight to be used in
practice. It still remains an open problem to obtain sample complexity bounds which are sufficiently tight in this respect. For a discussion about the connection of grammar learning in theory and
practice, we refer the reader to Clark and Lappin (2010).
It is also important to note that MLE is not the only option for estimating finite state probabilistic grammars. There have been some recent advances in learning finite state models (HMMs and finite
state transducers) by using spectral analysis of matrices which consist of quantities estimated from observations only (Hsu, Kakade, and Zhang 2009; Balle, Quattoni, and Carreras 2011), based on the
observable operator models of Jaeger (1999). These algorithms are not prone to local minima, and converge to the correct model as the number of samples increases, but require some assumptions about
the underlying model that generates the data.
7.1 Tsybakov Noise
In this article, we chose to introduce assumptions about distributions that generate natural language data. The choice of these assumptions was motivated by observations about properties shared among
treebanks. The main consequence of making these assumptions is bounding the amount of noise in the distribution (i.e., the amount of variation in probabilities across labels given a fixed input).
There are other ways to restrict the noise in a distribution. One condition for such noise restriction, which has received considerable recent attention in the statistical literature, is the Tsybakov
noise condition (Tsybakov 2004; Koltchinskii 2006). Showing that a distribution satisfies the Tsybakov noise condition enables the use of techniques (e.g., from Koltchinskii 2006) for deriving
distribution-dependent sample complexity bounds that depend on the parameters of the noise. It is therefore of interest to see whether Tsybakov noise holds under the assumptions presented in Section
3.1. We show that this is not the case, and that Tsybakov noise is too permissive. In fact, we show that p can be a probabilistic grammar itself (and hence, satisfy the assumptions in Section 3.1),
and still not satisfy the Tsybakov noise conditions.
Tsybakov noise was originally introduced for classification problems (Tsybakov 2004), and was later extended to more general settings, such as the one we are facing in this article (Koltchinskii 2006
). We now explain the definition of Tsybakov noise in our context.
Let C > 0 and κ ≥ 1. We say that a distribution p satisfies the (C, κ) Tsybakov noise condition if, for any ε > 0, the set of hypotheses with excess risk at most ε satisfies the bound in Equation (25). This interpretation of Tsybakov noise implies that the diameter of the set of functions from the concept class that have small excess risk should shrink to 0 at the rate in Equation (25). Distribution-dependent bounds from Koltchinskii (2006) are monotone with respect to the diameter of this set of functions, and therefore demonstrating that it goes to 0 enables sharper derivations of sample complexity bounds.
Let G be a grammar with K ≥ 2 and degree 2. Assume that p is 〈G, θ*〉 for some θ*, such that c[1] ≤ c[2]. If A[G](θ*) is positive definite, then p does not satisfy the Tsybakov noise condition for
any (C,κ), where C > 0 and κ ≥ 1.
See Appendix C for the proof of Theorem 5.
In Appendix C we show that A[G](θ) is positive semi-definite for any choice of θ. The main intuition behind the proof is that given a probabilistic grammar p, we can construct a hypothesis h such that the KL divergence between p and h is small, but dist(p,h) is bounded away from 0.
We conclude that probabilistic grammars, as generative distributions of data, do not generally satisfy the Tsybakov noise condition. This motivates an alternative choice of assumptions that could
lead to better understanding of rates of convergences and bounds on the excess risk. Section 3.1 states such assumptions which were also justified empirically.
7.2 Comparison to Dirichlet Maximum A Posteriori Solutions
The transformation T(γ) from Section 4.1 can be thought of as a smoothing operation for the probabilities θ: It ensures that the probability of each rule is at least γ (and as a result, the probabilities of all rules cannot exceed 1 − γ). Adding pseudo-counts to frequency counts is also a common way to smooth probabilities in models based on multinomial distributions, including probabilistic grammars (Manning and Schütze 1999). These pseudo-counts can be framed as a maximum a posteriori (MAP) alternative to the maximum likelihood problem, with the choice of Bayesian prior over the parameters in the form of a Dirichlet distribution. In comparison to our framework, with (symmetric) Dirichlet smoothing, instead of truncating the probabilities with a margin γ, we would set the probability of each rule (in the supervised setting) to the smoothed relative frequency of Equation (26), for i = 1,2, where the counts measure how often each event in multinomial k fires in the examples. Dirichlet smoothing can be formulated as the result of adding a symmetric Dirichlet prior over the parameters θ with hyperparameter α. Then Equation (26) is the mode of the posterior after observing the counts in multinomial k.
The effect of Dirichlet smoothing becomes weaker as we have more samples, because the frequency counts grow. In our framework, γ = n^−s, where n is the number of samples—the more samples we have, the more we trust the counts
in the data to be reliable. There is a subtle difference, however. With the Dirichlet MAP solution, the smoothing is less dominant only if the counts of the features are large, regardless of the
number of samples we have. With our framework, smoothing depends only on the number of samples we have. These two scenarios are related, of course: The more samples we have, the more likely it is
that the counts of the events will grow large.
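The contrast can be made concrete numerically. The sketch below compares the symmetric-Dirichlet MAP estimate (the standard posterior-mode formula for α > 1, assumed here to match Equation (26)) with the truncated estimate of Equation (22), using illustrative counts:

```python
def dirichlet_map(c1, c2, alpha):
    """Posterior mode under a symmetric Dirichlet(alpha) prior, alpha > 1."""
    return (c1 + alpha - 1) / (c1 + c2 + 2 * (alpha - 1))

def truncated(c1, c2, gamma):
    """Truncated relative frequency with margin gamma."""
    return min(max(c1 / (c1 + c2), gamma), 1 - gamma)

# Small counts: the Dirichlet prior pulls the estimate away from 1 ...
print(dirichlet_map(3, 0, alpha=2.0))     # 4/5 = 0.8
# ... but with large counts its effect nearly vanishes, regardless of n.
print(dirichlet_map(3000, 0, alpha=2.0))  # 3001/3002, about 0.9997
# Truncation depends only on gamma = n^(-s), not on the size of the counts.
print(truncated(3000, 0, gamma=0.02))     # 0.98
```

The example illustrates the subtle difference discussed above: with large counts, MAP smoothing is negligible even for a fixed sample size, whereas the truncation margin is fixed by the sample size alone.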
7.3 Other Derivations of Sample Complexity Bounds
In this section, we discuss other possible solutions to the problem of deriving sample complexity bounds for probabilistic grammars.
7.3.1 Using Talagrand's Inequality
Our bounds are based on VC theory together with classical results for empirical processes (Pollard 1984). There have been some recent developments to the derivation of rates of convergence in
statistical learning theory (Massart 2000; Bartlett, Bousquet, and Mendelson 2005; Koltchinskii 2006), most prominently through the use of Talagrand's inequality (Talagrand 1994), which is a
concentration of measure inequality, in the spirit of Lemma 2.
The bounds achieved with Talagrand's inequality are also distribution-dependent, and are based on the diameter of the ε-minimal set—the set of hypotheses which have an excess risk smaller than ε. We
saw in Section 7.1 that the diameter of the ε-minimal set does not follow the Tsybakov noise condition, but it is perhaps possible to find meaningful bounds for it, in which case we may be able to
get tighter bounds using Talagrand's inequality. We note that it may be possible to obtain data-dependent bounds for the diameter of the ε-minimal set, following Koltchinskii (2006), by calculating the diameter of the ε-minimal set using the empirical distribution.
7.3.2 Simpler Bounds for the Supervised Case
As noted in Section 6.1, minimizing empirical risk with the log-loss leads to a simple frequency count for calculating the estimated parameters of the grammar. Corazza and Satta (2006) also note that to minimize the non-empirical risk, it is necessary to set the parameters of the grammar to the normalized expected counts of the features.
This means that we can get bounds on the deviation of a certain parameter from the optimal parameter by applying modifications to rather simple inequalities such as Hoeffding's inequality, which
determines the probability of the average of a set of i.i.d. random variables deviating from its mean. The modification would require us to split the event space into two cases: one in which the
count of some features is larger than some fixed value (which will happen with small probability because of the bounded expectation of features), and one in which they are all smaller than that fixed
value. Handling these two cases separately is necessary because Hoeffding's inequality requires that the count of the rules is bounded.
The bound on the deviation from the mean of the parameters (the true probability) can potentially lead to a bound on the excess risk in the supervised case. This formulation of the problem would not
generalize to the unsupervised case, however, where the empirical risk minimization does not amount to simple frequency count.
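As a small illustration of the first step of this argument, the bound of Hoeffding's inequality for [0,1]-valued variables can be compared against simulation (illustrative only; the formal argument also needs the case split described above, because feature counts are unbounded):

```python
import math
import random

def hoeffding_bound(n, eps):
    """Two-sided Hoeffding bound for i.i.d. variables in [0, 1]:
    P(|mean - E[mean]| >= eps) <= 2 exp(-2 n eps^2)."""
    return 2 * math.exp(-2 * n * eps * eps)

random.seed(0)
n, eps, trials = 500, 0.05, 2000
deviations = 0
for _ in range(trials):
    # Sample mean of n Uniform(0, 1) draws; the true mean is 0.5.
    mean = sum(random.random() for _ in range(n)) / n
    if abs(mean - 0.5) >= eps:
        deviations += 1
print(deviations / trials, "<=", hoeffding_bound(n, eps))
```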
7.4 Open Problems
We conclude the discussion with some directions for further exploration and future work.
7.4.1 Sample Complexity Bounds with Semi-Supervised Learning
Our bounds focus on the supervised case and the unsupervised case. There is a trivial extension to the semi-supervised case: Take the objective function to be the (possibly weighted) sum of the likelihood of the labeled data and the marginalized likelihood of the unlabeled data. Then, use the sample complexity bounds for each summand to derive a sample complexity bound on the sum.
It would be more interesting to extend our results to frameworks such as the one described by Balcan and Blum (2010). In that case, our discussion of sample complexity would attempt to identify how
unannotated data can reduce the space of candidate probabilistic grammars to a smaller set, after which we can use the annotated data to estimate the final grammar. This reduction of the space is
accomplished through a notion of compatibility, a type of fitness that the learner believes the estimated grammar should have given the distribution that generates the data. The key challenge in the
case of probabilistic grammars would be to properly define this compatibility notion such that it fits the log-loss. If this is achieved, then similar machinery to that described in this article (with
proper approximations) can be followed to derive semi-supervised sample complexity bounds for probabilistic grammars.
7.4.2 Sharper Bounds for the Pseudo-Dimension of Probabilistic Grammars
The pseudo-dimension of a probabilistic grammar with the log-loss is bounded by the number of parameters in the grammar, because the logarithm of a distribution generated by a probabilistic grammar
is a linear function. Typically the set of counts for the feature vectors of a probabilistic grammar resides in a subspace of a dimension which is smaller than the full dimension specified by the
number of parameters, however. The reason for this is that there are usually relationships (which are often linear) between the elements in the feature counts. For example, with HMMs, the total
feature count for emissions should equal the total feature count for transitions. With PCFGs, the total number of times that rules headed by a nonterminal fire equals the total number of times that features with that nonterminal in the right-hand side fired, again reducing the pseudo-dimension. An open problem that remains is the characterization of the exact pseudo-dimension for a given grammar, determined by consideration of various properties of that grammar. We conjecture, however, that a lower bound on the pseudo-dimension would be rather close to the full dimension of the grammar (the number of parameters).
It is interesting to note that there has been some work to identify the VC dimension and pseudo-dimension for certain types of grammars. Bane, Riggle, and Sonderegger (2010), for example, calculated
the VC dimension for constraint-based grammars. Ishigami and Tani (1993, 1997) computed the VC dimension for finite state automata with various properties.
8. Conclusion
We presented a framework for performing empirical risk minimization for probabilistic grammars, in which sample complexity bounds, for the supervised case and the unsupervised case, can be derived.
Our framework is based on the idea of bounded approximations used in the past to derive sample complexity bounds for graphical models.
Our framework required assumptions about the probability distribution that generates sentences or derivations in the language of the given grammar. These assumptions were tested using corpora, and
found to fit the data well.
We also discussed algorithms that can be used for minimizing empirical risk in our framework, given enough samples. We showed that directly trying to minimize empirical risk in the unsupervised case
is NP-hard, and suggested an approximation based on an expectation-maximization algorithm.
Appendix A. Proofs
We include in this appendix proofs for several results in the article.
Let ∑[i] a[i] = 1. Define b[1] = a[1], c[1] = 1 − a[1], and, for i ≥ 2, b[i] = a[i] and c[i] = 1 − b[i]. Then
Note first that the empirical risk minimizer attains the stated bound, which equals the right side of Equation (A.1).
There exists a β′(L, q, p, N) > 0 such that K[m] = sN log^3 m and the stated bound holds.
(From Dasgupta [1997].) Let a ∈ [0,1] and let b = a if a ∈ [γ, 1 − γ], b = γ if a ≤ γ, and b = 1 − γ if a ≥ 1 − γ. Then for any ε ≤ 1/2 such that γ ≤ ε/(1 + ε), we have log(a/b) ≤ ε.
Let γ′ = m^−s. Without loss of generality, assume Equation (A.2) holds; apply it to get that the bound holds for all m, once the sample is fixed.▪
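The truncation lemma of Dasgupta (1997) above can be checked numerically; the sketch below verifies log(a/b) ≤ ε over a grid of values of a, with γ set to its largest admissible value ε/(1 + ε):

```python
import math

def truncate(a, gamma):
    """b from the lemma: a clipped to [gamma, 1 - gamma]."""
    return min(max(a, gamma), 1 - gamma)

eps = 0.4                # any eps <= 1/2 works
gamma = eps / (1 + eps)  # largest margin the lemma allows
worst = max(
    math.log(a / truncate(a, gamma))
    for a in (i / 1000 for i in range(1, 1001))  # a ranges over (0, 1]
)
# Worst case is a = 1: log(1 / (1 - gamma)) = log(1 + eps) <= eps.
print(worst <= eps)  # True
```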
There exists a β′(L, p, q, N) > 0 such that K[m] = sN log^3 m and the stated bound holds.
From the requirement on p, we know that for any x we have a z such that yield(z) = x and |z| ≤ α|x|. Therefore, if we let f[1](x,z) be the corresponding function in the cover, the result follows.
Utility Lemma 3. For a[i], b[i] ≥ 0, if −log ∑[i] a[i] + log ∑[i] b[i] ≥ ε, then there exists an i such that −log a[i] + log b[i] ≥ ε.
Assume −log a[i] + log b[i] < ε for all i. Then b[i] < e^ε a[i] for all i, so ∑[i] b[i] < e^ε ∑[i] a[i], and therefore −log ∑[i] a[i] + log ∑[i] b[i] < ε, which is a contradiction to −log ∑[i] a[i] + log ∑[i] b[i] ≥ ε.▪
The next lemma is the main concentration of measure result that we use. Its proof requires some simple modifications to the proof given for Theorem 24 in Pollard (1984, pages 30–31).
At this point, we can follow the proof of Theorem 24 in Pollard (1984), and its extension on pages 30–31, to get Lemma 2, using the shifted set of functions.▪
Appendix B. Minimizing Log-Loss for Probabilistic Grammars
Central to our algorithms for minimizing the log-loss (both in the supervised case and the unsupervised case) is a convex optimization problem of the form given in Equation (B.1), for constants c[k,i] which depend on γ, the margin determined by the number of samples. This minimization problem can be decomposed into several optimization problems, one for each k, each having the following form, where c[i] ≥ 0 and 1/2 > γ ≥ 0. Ignore for a moment the constraints γ ≤ β[i] ≤ 1 − γ. In that case, this can be thought of as a regular maximum likelihood estimation problem, so β[i] = c[i] / (c[1] + c[2]). We give a derivation of this result in this simple case for completeness. We use Lagrangian multipliers to solve this problem. Let F(β[1],β[2]) = c[1] log β[1] + c[2] log β[2]. Define the Lagrangian:
Setting the derivatives to 0 for minimization yields the objective function of the dual problem of Equations (B.1) and (B.2). We would like to minimize Equation (B.5) with respect to the dual variable λ. Equating the derivative of Equation (B.5) to 0, we get λ = −(c[1] + c[2]), and therefore the solution is β[i] = c[i] / (c[1] + c[2]), which is indeed the optimal solution for Equations (B.1) and (B.2).
Note that if γ ≤ c[i] / (c[1] + c[2]) ≤ 1 − γ, then this is the solution even when again adding the constraints in Equations (B.3) and (B.4). When c[1] / (c[1] + c[2]) < γ, the solution is β[1] = γ and β[2] = 1 − γ; symmetrically, when c[2] / (c[1] + c[2]) < γ, the solution is β[2] = γ and β[1] = 1 − γ. To see this, we want to show that for any choice of β ∈ [0,1] such that β < γ, we have Equation (B.6).
Equation (B.6) is precisely the definition of the KL divergence between the distribution of a coin with probability γ of heads and the distribution of a coin with probability β of heads, and
therefore the right side in Equation (B.6) is positive, and we get what we need.
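The case analysis above can be checked against a brute-force grid search over the feasible interval [γ, 1 − γ] (illustrative sketch; the grid step is coarse but sufficient here because the objective is strictly monotone on the interval in this instance):

```python
import math

def objective(beta1, c1, c2):
    """Negative log-likelihood -(c1 log b1 + c2 log b2), with b2 = 1 - b1."""
    return -(c1 * math.log(beta1) + c2 * math.log(1 - beta1))

def truncated_solution(c1, c2, gamma):
    """Closed-form solution: relative frequency clipped to [gamma, 1 - gamma]."""
    return min(max(c1 / (c1 + c2), gamma), 1 - gamma)

c1, c2, gamma = 1.0, 99.0, 0.05
closed_form = truncated_solution(c1, c2, gamma)  # raw frequency 0.01, clipped to 0.05
grid = [gamma + i * (1 - 2 * gamma) / 10_000 for i in range(10_001)]
brute = min(grid, key=lambda b: objective(b, c1, c2))
print(closed_form, brute)
```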
Appendix C. Counterexample to Tsybakov Noise (Proofs)
A = A[G](θ) is positive semi-definite for any probabilistic grammar 〈G, θ〉.
We have that the quantity of interest is ≤ 0 if Equation (C.1) holds. First, we show Equation (C.2), which holds after substitution: one factor is greater than 1, and the remaining expression is nonnegative for small enough values of its argument (this can be shown by taking the derivative, which is positive for small enough values, and noticing that the expression equals 0 at 0). Therefore, Equation (C.2) is true. Combining Equations (C.2) and (C.1), we have that the quantity of interest is ≤ 0 if Equation (C.3) holds. Taking again the derivative of the left side of Equation (C.3), we have that it is an increasing function of its argument, and in addition at 0 it obtains a nonnegative value. Therefore, Equation (C.3) holds, and therefore the quantity of interest is ≤ 0 for small enough values.▪
Let G be a grammar with K ≥ 2 and degree 2. Assume that p is 〈G, θ*〉 for some θ*, such that c[1] ≤ c[2]. If A[G](θ*) is positive definite, then p does not satisfy the Tsybakov noise condition for
any (C, κ), where C > 0 and κ ≥ 1.
Define λ[min] to be the eigenvalue of A[G](θ) with the smallest value (λ[min] is positive, because A[G](θ*) is positive definite). Also, define a vector indexed by the rules of the grammar, for which simple algebra gives the identity we require. For C > 0 and κ ≥ 1, let ε > 0 be small. First, we construct an h such that the excess risk of h is smaller than ε/2 but dist(p,h) is bounded away from 0 as ε → 0. The construction follows: Parametrize h by θ such that θ is identical to θ* except for the coordinates i = 1,2 of one multinomial, which are perturbed. Note that the perturbation is at most 1/2, and Equation (C.6) holds. We also have Equation (C.7); this can be shown by dividing Equation (C.6) and then using the concavity of the logarithm function, and from Lemma 7. Therefore, after algebraic manipulation and additional simplification, the excess risk of h is smaller than ε/2.
A fact from linear algebra states that x⊤Ax ≥ λ[min]‖x‖² for any x, where λ[min] is the smallest eigenvalue of A. From the construction of h and Equation (C.4), we have that dist(p,h) is bounded away from 0, which means p does not satisfy the Tsybakov noise condition with parameters (C, κ) for any C > 0.▪
Appendix D. Notation
Table D.1 gives a table of notation for symbols used throughout this article.
The authors thank the anonymous reviewers for their comments and Avrim Blum, Steve Hanneke, Mark Johnson, John Lafferty, Dan Roth, and Eric Xing for useful conversations. This research was supported
by National Science Foundation grant IIS-0915187.
It is important to remember that minimizing the log-loss does not equate to minimizing the error of a linguistic analyzer or natural language processing application. In this article we focus on the
log-loss case because we believe that probabilistic models of language phenomena have inherent usefulness as explanatory tools in computational linguistics, aside from their use in systems.
We note that being able to attain the minimum through a hypothesis q* is not necessarily possible in the general case. In our instantiations of ERM for probabilistic grammars, however, the minimum
can be attained. In fact, in the unsupervised case the minimum can be attained by more than a single hypothesis. In these cases, q* is arbitrarily chosen to be one of these minimizers.
Treebanks offer samples of cleanly segmented sentences. It is important to note that the distributions estimated may not generalize well to samples from other domains in these languages. Our argument
is that the family of the estimated curve is reasonable, not that we can correctly estimate the curve's parameters.
For simplicity and consistency with the log-loss, we measure entropy in nats, which means we use the natural logarithm when computing entropy.
We note that this notion of binarization is different from previous types of binarization appearing in computational linguistics for grammars. Typically in previous work about binarized grammars such
as CFGs, the grammars are constrained to have at most two nonterminals in the right side in Chomsky normal form. Another form of binarization for linear context-free rewriting systems is restriction
of the fan-out of the rules to two (Gómez-Rodríguez and Satta 2009; Gildea 2010). We, however, limit the number of rules for each nonterminal (or more generally, the number of elements in each multinomial).
There are other ways to manage the unboundedness of KL divergence in the language learning literature. Clark and Thollard (2004), for example, decompose the KL divergence between probabilistic
finite-state automata into several terms according to a decomposition of Carrasco (1997) and then bound each term separately.
By varying s we get a family of approximations. The larger s is, the tighter the approximation is. Also, the larger s is, as we see later, the looser our sample complexity bound will be.
The “permissible class” requirement is a mild regularity condition regarding measurability that holds for proper approximations. We refer the reader to Pollard (1984) for more details.
Abe, N., J. Takeuchi, and M. K. Warmuth. 1991. Polynomial learnability of probabilistic concepts with respect to the Kullback-Leibler divergence. In Proceedings of the Conference on Learning Theory.
Abe, N. and M. K. Warmuth. 1992. On the computational complexity of approximating distributions by probabilistic automata. Machine Learning.
Angluin, D. 1987. Learning regular sets from queries and counterexamples. Information and Computation.
Anthony, M. and P. L. Bartlett. 1999. Neural Network Learning: Theoretical Foundations. Cambridge University Press.
Balcan, M.-F. and A. Blum. 2010. A discriminative model for semi-supervised learning. Journal of the Association for Computing Machinery.
Balle, B., A. Quattoni, and X. Carreras. 2011. A spectral learning algorithm for finite state transducers. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.
Bane, M., J. Riggle, and M. Sonderegger. 2010. The VC dimension of constraint-based grammars.
Bartlett, P. L., O. Bousquet, and S. Mendelson. 2005. Local Rademacher complexities. Annals of Statistics.
Bishop, C. M. 2006. Pattern Recognition and Machine Learning. Springer.
Boyd, S. and L. Vandenberghe. 2004. Convex Optimization. Cambridge University Press.
Carrasco, R. C. 1997. Accurate computation of the relative entropy between stochastic regular grammars. Theoretical Informatics and Applications.
Carroll, G. and E. Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. Technical report, Brown University, Providence, RI.
Charniak, E. 1993. Statistical Language Learning. MIT Press, Cambridge, MA.
Charniak, E. and M. Johnson. 2005. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the Association for Computational Linguistics.
Chi, Z. 1999. Statistical properties of probabilistic context-free grammars. Computational Linguistics.
Clark, A., R. Eyraud, and A. Habrard. 2008. A polynomial algorithm for the inference of context free languages. In Proceedings of the International Colloquium on Grammatical Inference.
Clark, A. and S. Lappin. 2010. Unsupervised learning and grammar induction. In A. Clark, C. Fox, and S. Lappin, editors, The Handbook of Computational Linguistics and Natural Language Processing.
Clark, A. and F. Thollard. 2004. PAC-learnability of probabilistic deterministic finite state automata. Journal of Machine Learning Research.
Cohen, S. B. and N. A. Smith. 2010a. Covariance in unsupervised learning of probabilistic grammars. Journal of Machine Learning Research.
Cohen, S. B. and N. A. Smith. 2010b. Empirical risk minimization with approximations of probabilistic grammars. In Proceedings of Advances in Neural Information Processing Systems.
Cohen, S. B. and N. A. Smith. 2010c. Viterbi training for PCFGs: Hardness results and competitiveness of uniform initialization. In Proceedings of the Association for Computational Linguistics.
Collins, M. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics.
Collins, M. 2004. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In H. Bunt, J. Carroll, and G. Satta, editors, New Developments in Parsing Technology (Text, Speech and Language Technology).
Corazza, A. and G. Satta. 2006. Cross-entropy and estimation of probabilistic context-free grammars. In Proceedings of the North American Chapter of the Association for Computational Linguistics.
Cover, T. M. and J. A. Thomas. 1991. Elements of Information Theory. Wiley.
Dasgupta, S. 1997. The sample complexity of learning fixed-structure Bayesian networks. Machine Learning.
de la Higuera, C. 2005. A bibliographical study of grammatical inference. Pattern Recognition.
Dempster, A. P., N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B.
Gildea, D. 2010. Optimal parsing strategies for linear context-free rewriting systems. In Proceedings of the North American Chapter of the Association for Computational Linguistics.
Gómez-Rodríguez, C. and G. Satta. 2009. An optimal-time binarization algorithm for linear context-free rewriting systems with fan-out two. In Proceedings of the Association for Computational Linguistics–International Joint Conference on Natural Language Processing.
Grenander, U. 1981. Abstract Inference. Wiley, New York.
Haussler, D. 1992. Decision-theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation.
Hsu, D., S. M. Kakade, and T. Zhang. 2009. A spectral algorithm for learning hidden Markov models. In Proceedings of the Conference on Learning Theory.
Ishigami, Y. and S. Tani. 1993. The VC-dimensions of finite automata with n states. In Proceedings of Algorithmic Learning Theory.
Ishigami, Y. and S. Tani. 1997. VC-dimensions of finite automata and commutative finite automata with k letters and n states. Discrete Applied Mathematics.
Jaeger, H. 1999. Observable operator models for discrete stochastic time series. Neural Computation.
Kearns, M. and L. Valiant. 1989. Cryptographic limitations on learning Boolean formulae and finite automata. In Proceedings of the 21st Association for Computing Machinery Symposium on the Theory of Computing.
Kearns, M. J. and U. V. Vazirani. 1994. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA.
Klein, D. and C. D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of the Association for Computational Linguistics.
Koltchinskii, V. 2006. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics.
Leermakers, R. 1989. How to cover a grammar. In Proceedings of the Association for Computational Linguistics.
Manning, C. D. and H. Schütze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA.
Massart, P. 2000. Some applications of concentration inequalities to statistics.
Annales de la Facult´e des Sciences de Toulouse
Context-Free Grammars: Covers, Normal Forms, and Parsing
(volume 93 of Lecture Notes in Computer Science)
PAC-learnability of probabilistic deterministic finite state automata in terms of variation distance.
Proceedings of Algorithmic Learning Theory
F. C. N.
Inside-outside reestimation from partially bracketed corpora.
Proceedings of the Association for Computational Linguistics
Inductive inference, DFAs, and computational complexity.
Analogical and Inductive Inference
Convergence of Stochastic Processes
New York
Automata Learning and Its Applications
Ph.D. thesis
Hebrew University of Jerusalem
, and
On the learnability and usage of acyclic probabilistic finite automata.
Journal of Computer and System Sciences
, and
Learnability and stability in the general learning setting.
Proceedings of the Conference on Learning Theory
Introduction to the Theory of Computation, Second Edition
Thomson Course Technology
Boston, MA
Sharper bounds for Gaussian and empirical processes.
Annals of Probability
S. A.
On the learnability of hidden Markov models.
In P. Adriaans,H. Fernow, & M. van Zaane
Grammatical Inference: Algorithms and Applications
(Lecture Notes in Computer Science)
Optimal aggregation of classifiers in statistical learning.
The Annals of Statistics
V. N.
Statistical Learning Theory
New York
Author notes
Department of Computer Science, Columbia University, New York, NY 10027, United States. E-mail: [email protected]. This research was completed while the first author was at Carnegie Mellon
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, United States. E-mail: [email protected].
© 2012 Association for Computational Linguistics
MIT Press | {"url":"https://direct.mit.edu/coli/article/38/3/479/2169/Empirical-Risk-Minimization-for-Probabilistic","timestamp":"2024-11-14T00:54:30Z","content_type":"text/html","content_length":"909065","record_id":"<urn:uuid:6ecfbe46-74fc-43e8-b994-f61ee5c82949>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00270.warc.gz"} |
This Quanta article
A New Agenda for Low-Dimensional Topology
highlighted an important part of doing math that I think is underrated: making lists of interesting unsolved problems. An early, impactful example of this was David Hilbert's list of 23 unsolved
problems he introduced in 1900.
The first Hilbert problem was Cantor's continuum hypothesis, which asks if there is a size of infinity strictly between \( \aleph_0 \) (the number of integers) and \( c\), where \( c = |\mathbb{R}|
\) (the number of real numbers).
The second Hilbert problem, The compatibility of the arithmetical axioms, was what Bertrand Russell worked on in his
Principia Mathematica
, which a young Austrian genius named Kurt Gödel used as a starting point for his work on his
Incompleteness Theorem
. It cannot be overstated how much this changed our philosophical understanding of mathematics. Inspired by Gödel, a young Brit named Alan Turing applied Gödel's ideas in his work on computable and
uncomputable functions, and Turing's work became the basis for the development of computers.
Hilbert's list sparked a revolution in mathematics, philosophy and computer science.
The Quanta article points to a similar list, but for the niche field of low-dimensional topology, which includes knot theory and 3-manifolds.
The analogous list is from a mathematician named
Rob Kirby
Kirby attributes this early-career success in part to the existence of the Milnor list, which provided him with a greater variety of projects to choose from than he would have received from the
people immediately around him in graduate school.
“If you’re writing a letter of recommendation for someone and they’ve solved a Kirby problem, you mention that in your letter,” said John Baldwin, a mathematician at Boston College who
participated in the workshop and is helping to edit the list.
The lists serve a social function, naming relevant problems and making them legible to the wider community. This is an essential way to transfer status to budding young mathematicians. It also helps
the young mathematicians by presenting them with a mathematical frontier they can explore that has been mapped out, but not conquered, by the previous generation. This succession process is
incredibly important to the long-term success of mathematics, and it should be more widely appreciated. | {"url":"https://tobilehman.com/archive/tlehman.blog/p/listmaking-in-mathematics.html","timestamp":"2024-11-09T22:50:42Z","content_type":"text/html","content_length":"9118","record_id":"<urn:uuid:c65c5e07-2862-42b4-8a54-52bcc70dd1e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00428.warc.gz"} |
relaxation (algorithms)
background info
Recall asymptotic analysis. We remember that:
constant time < logarithmic time < linear time < polynomial time < exponential time
The question? What happens if dynamic programming is too slow/not good enough for the problem? What if dynamic programming is not needed; instead, why don’t we just settle for a pretty good solution?
Take, for instance, Nueva Courses. The optimal solution is “most students get their highest possible preferences.” However, this is impractical and pretty much impossible. Instead, what if we
endeavor to figure out a schedule that generally maximizes happiness?
relaxation methods
constraint relaxation
constraint relaxation is a relaxation method to remove extra constraints.
Motivating problem: traveling salesman problem
• Visit all towns in a given location
• Travel the minimal distance to do so
• Cannot visit any town more than once
Calculating the basic, naive solution of checking every possible route is \(O(n!)\). The best known exact solution is \(O(2^nn^2)\), which is still slow. It's also an \(NP\)-hard problem.
Hence, to actually solve it in a reasonable time, we are going to make two relaxations.
1. The salesman can visit a town more than once
2. The salesman can teleport to visited towns
By these two relaxations, we convert the traveling salesman problem to the minimum spanning tree problem.
We know that the cost of the MST is no worse than the cost of the optimal TSP tour, since the MST solves the relaxed problem and relaxing can only lower the optimal cost. We will solve MST, then use that solution to bound the solution to TSP: the MST cost is a lower bound, and walking the tree yields a tour at most twice the optimum.
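A minimal sketch of this bounding argument in Python (the helper names and toy usage are my own; it assumes a symmetric distance matrix obeying the triangle inequality, which the 2x guarantee requires):

```python
import math

def mst_prim(dist):
    """Prim's algorithm on a complete graph given as a distance matrix.
    Returns (total MST weight, adjacency lists of the tree)."""
    n = len(dist)
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest edge connecting v to the tree so far
    parent = [None] * n
    best[0] = 0.0
    total = 0.0
    adj = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        total += best[u]
        if parent[u] is not None:
            adj[parent[u]].append(u)
            adj[u].append(parent[u])
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    return total, adj

def tsp_2_approx(dist):
    """Walk the MST in preorder, "teleporting" past towns already seen.
    MST cost <= optimal tour cost <= this tour's cost <= 2 * MST cost,
    so the returned tour is within a factor of 2 of optimal."""
    mst_cost, adj = mst_prim(dist)
    tour, seen = [], set()
    stack = [0]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        tour.append(u)
        stack.extend(reversed(adj[u]))
    tour.append(0)  # close the cycle by returning home
    cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return mst_cost, tour, cost
```

Brute-forcing a small instance confirms the sandwich MST ≤ optimal ≤ tour ≤ 2·MST; the relaxation buys a polynomial-time bound in place of the \(O(n!)\) search.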
continuous relaxation
continuous relaxation is a relaxation method to convert difficult discrete problems into continuous ones.
Motivating problem: set cover
You are having a party, and you want your friends to get a nice paper invite.
• you will send invitations to some subset of your friends
• tell them to send invitations to all your mutual friends with them
What’s the minimum number of friends to invite, and who?
Set cover is also hard, and also NP-hard. The problem is that sending an invitation is a discrete decision.
Hence, to solve it, we make it possible to send fractions of invitations. We can then prove that a solution rounded back to whole invitations is guaranteed to be within known bounds of the optimum.
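Concretely, writing \(x_i\) for whether friend \(i\) receives an invitation from you and \(S_i\) for the group of friends that friend \(i\) can pass invitations to, the relaxation is a one-line change to the optimization problem (a sketch of the standard LP relaxation; the exact guarantee depends on the rounding scheme used):

```latex
\begin{aligned}
\text{minimize}   \quad & \textstyle\sum_i x_i && \text{(invitations you send yourself)} \\
\text{subject to} \quad & \textstyle\sum_{i \,:\, f \in S_i} x_i \;\ge\; 1 && \text{for every friend } f \text{ (everyone is reached)} \\
                        & x_i \in \{0, 1\} && \text{(original discrete problem, NP-hard)} \\
                        & 0 \le x_i \le 1 && \text{(continuous relaxation, polynomial time)}
\end{aligned}
```

The relaxed optimum can only be smaller than the true optimum, so it serves as a lower bound, and rounding the fractional \(x_i\) back to whole invitations yields a feasible solution whose size can be compared against that bound.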
Lagrangian relaxation
Lagrangian relaxation is a relaxation method to convert hard-limit constraints into flexible penalization (negative values).
Motivating problem: shortest paths problem with a constraint.
You need to drive the shortest number of miles while also satisfying a hard constraint: the trip must be completed within a certain time.
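A hedged sketch of how this plays out (the graph format, names, and numbers below are invented for illustration): for any multiplier \(\lambda \ge 0\), replace the hard time limit with a per-minute charge of \(\lambda\), solve the now-unconstrained shortest-path problem, and subtract \(\lambda\) times the budget. The result is a valid lower bound on the best feasible route.

```python
import heapq

def dijkstra(graph, src, dst, weight):
    """Plain shortest path; `weight` maps an edge's data to a scalar cost.
    `graph` is {node: [(neighbor, edge_data), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, edge in graph[u]:
            nd = d + weight(edge)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def lagrangian_bound(graph, src, dst, time_budget, lam):
    """Relax the hard time limit: each minute of driving is charged at a
    rate of `lam` instead of being forbidden past the budget.  For any
    lam >= 0, the value returned is a lower bound on the mileage of the
    best route that actually respects the budget (weak duality)."""
    # Edge data is (miles, minutes); penalized cost = miles + lam * minutes.
    penalized = dijkstra(graph, src, dst, weight=lambda e: e[0] + lam * e[1])
    return penalized - lam * time_budget
```

Sweeping \(\lambda\) and keeping the largest bound recovers the Lagrangian dual. On a toy graph with a 10-mile/60-minute direct road and a 14-mile/30-minute detour under a 40-minute budget, \(\lambda = 0\) gives the bound 10, while \(\lambda = 0.2\) tightens it to 12 against a true constrained optimum of 14 miles.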
We can instead relax the problem so that driving overtime becomes a penalty (a negative value) in the solution, rather than a hard limit. | {"url":"https://www.jemoka.com/posts/kbhrelaxation_algorithums/","timestamp":"2024-11-10T12:50:26Z","content_type":"text/html","content_length":"8443","record_id":"<urn:uuid:03f0ecec-4f8e-43d3-b9a3-608f59d077b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00409.warc.gz"}
Is there any way to find decomposition group and ramification groups
Let $L/K$ be a Galois extension of number fields with Galois group $G$. Let $O_K$ and $O_L$ be the rings of algebraic integers of $K$ and $L$ respectively. Let $P\subseteq O_K$ be a prime. Let $Q\subseteq O_L$ be a prime lying over $P$.
The decomposition group is defined as $$D(Q|P)=\lbrace \sigma\in G\text{ }|\text{ }\sigma(Q)=Q\rbrace$$
The $n$-th ramification group is defined as $$E_n(Q|P)=\lbrace \sigma\in G:\sigma(a)\equiv a\text{ mod } Q^{n+1}\text{ for all } a\in O_L\rbrace$$
I want to compute the decomposition group and ramification groups of the cyclotomic field $\mathbb{Q}(\zeta)$ over $\mathbb{Q}$ where $\zeta$ is a root of unity.
How to do this ? Any idea ?
1 Answer
Searching the web for ["decomposition group" galois extension sagemath] gives hints.
Here is how to compute the decomposition group and ramification group.
sage: K = QQ
sage: L = CyclotomicField(4)
sage: G = L.galois_group()
sage: OK = K.ring_of_integers()
sage: OL = L.ring_of_integers()
sage: P = OK.ideal(5)
sage: Q = L.primes_above(5)[0]
Check setup.
sage: K
Rational Field
sage: L
Cyclotomic Field of order 4 and degree 2
sage: G
Galois group of Cyclotomic Field of order 4 and degree 2
sage: P
Principal ideal (5) of Integer Ring
sage: Q
Fractional ideal (-zeta4 - 2)
Decomposition group and ramification group.
sage: Q.decomposition_group()
Subgroup [()] of Galois group of Cyclotomic Field of order 4 and degree 2
sage: Q.ramification_group(2)
Subgroup [()] of Galois group of Cyclotomic Field of order 4 and degree 2
Please suggest any edits to make the examples more interesting.
| {"url":"https://ask.sagemath.org/question/35472/is-there-any-way-to-find-decomposition-group-and-ramification-groups/","timestamp":"2024-11-13T17:58:24Z","content_type":"application/xhtml+xml","content_length":"53845","record_id":"<urn:uuid:aaafa964-67ea-4535-b4d4-345c20e3e830>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00768.warc.gz"}
Andrew McGregor
See also Google Scholar and DBLP.
Journal Publications:
1. Graphical House Allocation with Identical Valuations.
JAAMAS (with H. Hosseini, J. Payan, R. Sengupta, R. Vaish, V. Viswanathan).
2. Trace Reconstruction: Generalized and Parameterized
IEEE Transactions on Information Theory (with A. Krishnamurthy, A. Mazumdar, S. Pal)
3. Correlation Clustering in Data Streams
Algorithmica (with K. Ahn, G. Cormode, S. Guha, A. Wirth)
4. Verifiable Stream Computation and Arthur–Merlin Communication
SIAM Journal of Computing, 2019 (with A. Chakrabarti, G. Cormode, J. Thaler, S. Venkatasubramanian)
5. Storage Capacity as an Information-Theoretic Analogue of Vertex Cover
IEEE Transactions on Information Theory, 2019 (with A. Mazumdar, S. Vorotnikova)
6. Structural Results on Matching Estimation with Applications to Streaming.
Algorithmica, 2019 (with M. Bury, E. Grigorescu, M. Monemizadeh, C. Schwiegelshohn, S. Vorotnikova)
7. Better Approximation of The Streaming Maximum Coverage Problem
Theory of Computer Systems 2019 (with H. Vu)
8. AUTOMAN: A Platform for Integrating Human-Based and Digital Computation
Communications of the ACM 59(6): 102-109 2016 (with D. Barowy, C. Curtsinger, E. Berger)
9. Robust Lower Bounds for Communication and Stream Computation
Theory of Computing 2016 (with A. Chakrabarti G. Cormode)
10. The matrix mechanism: optimizing linear counting queries under differential privacy
VLDB Journal 2015 (with C. Li, G. Miklau, M. Hay, V. Rastogi)
11. Space-Efficient Estimation of Statistics over Sub-Sampled Streams
Algorithmica, 2015. (with A. Pavan, S. Tirthapura, D. Woodruff)
12. Annotations in Data Streams
ACM Transactions on Algorithms, 11 (2014), no. 1, pg. 1-30 (with A. Chakrabarti, G. Cormode, J. Thaler)
13. Information Cost Tradeoffs for Augmented Index and Streaming Language Recognition
SIAM Journal of Computing, 42 (2013), no. 1, pg. 61–83. (with A. Chakrabarti, G. Cormode, R. Kondapally)
14. SCALLA: A Platform for Scalable One-Pass Analytics using MapReduce
ACM Trans. Database Syst, 37 (2012), no. 4, pg. 1-43 (with B. Li, E. Mazur, Y. Diao, P. Shenoy)
15. CLARO: Modeling and Processing Uncertain Data Streams
VLDB Journal, 21 (2012), no. 5, pg. 651-676 (with T. Tran, L. Peng, Y. Diao, A. Liu)
16. A Near-Optimal Algorithm for Computing the Entropy of a Stream
ACM Transactions on Algorithms, 6 (2010), no. 3, pg. 1-21 (with A. Chakrabarti, G. Cormode)
17. On the Hardness of Approximating Stopping and Trapping Sets
IEEE Transactions on Information Theory, 56 (2010), no. 4, pg. 1640-1650 (with O. Milenkovic)
18. Sub-linear Estimation of Entropy and Information Distances
ACM Transactions on Algorithms, 5 (2009), no. 4, pg. 1-16. (with S. Guha, S. Venkatasubramanian)
19. Stream Order and Order Statistics: Quantile Estimation in Random-Order Streams
SIAM Journal of Computing, 38 (2009), no. 5, 2044-2059 (with S. Guha)
20. Graph Distances in the Data Stream Model
SIAM Journal of Computing, 38 (2008), no. 5, pg. 1709-1727 (with J. Feigenbaum, S. Kannan, S. Suri, J. Zhang)
21. Estimating Statistical Aggregates on Probabilistic Data Streams
ACM Trans. Database Syst, 33 (2008), no. 4, pg. 1-30 (with T. S. Jayram, S. Muthukrishnan, E. Vee)
22. Sketching Information Divergences
Journal of Machine Learning, 72 (2008), no. 1-2, pg. 5-19 (with S. Guha, P. Indyk)
23. Distance distribution of binary codes and the error probability of decoding
IEEE Transactions on Information Theory, 51 (2005), no. 12, pg. 4237-4246. (with A. Barg)
24. On Graph Problems in a Semi-Streaming Model
Theoretical Computer Science, 348 (2005), no. 2-3, pg. 207-216. (with J. Feigenbaum, S. Kannan, S. Suri, J. Zhang)
Conference Publications (N.B. Links point to journal/extended versions if appropriate):
1. Matchings in Low-Arboricity Graphs in the Dynamic Graph Stream Model.
FCTTCS 2024 (with C. Konrad, R. Sengupta, C. Than)
2. Improved Algorithms for Maximum Coverage in Dynamic and Random Order Streams
ESA 2024 (with A. Chakrabarti and A. Wirth)
3. Scalable Scheduling Policies for Quantum Satellite Networks
QCE 2024 (with A. Williams, N. Panigrahy, D. Towsley)
4. Reconstruction from Noisy Random Subgraphs
ISIT 2024 (with R. Sengupta)
5. Tight Approximations for Graphical House Allocation
AAMAS 2024 (with H. Hosseini, R. Sengupta, R. Vaish and V. Viswanathan)
6. Estimation of Entropy in Constant Space with Improved Sample Complexity
NeurIPS 2022 (with M. Aliakbarpour, J. Nelson, E. Waingarten)
7. Improving the Efficiency of the PC Algorithm by Using Model-Based Conditional Independence Tests
CML4Impact 2022 (NeurIPS Workshop) (with E. Cai, D. Jensen)
8. Non-Adaptive Edge Counting and Sampling via Bipartite Independent Set Queries
ESA 2022 (with R. Addanki, C. Musco)
9. Graph Reconstruction from Random Subgraphs
ICALP 2022 (with R. Sengupta)
10. Improved Approximation and Scalability for Fair Max-Min Diversification
ICDT 2022 (with R. Addanki, A. Meliou, Z. Moumoulidou)
11. PredictRoute: A Network Path Prediction Toolkit.
Sigmetrics 2021 (with R. Singh, D. Tench, P. Gill)
12. Cluster Trellis: Data Structures & Algorithms for Exact Inference in Hierarchical Clustering
AISTATS 2021 (with C. Greenberg, S. Macaluso, N. Monath, J. Lee, P. Flaherty, K. Cranmer, A. McCallum)
13. Intervention Efficient Algorithms for Approximate Learning of Causal Graphs
ALT 2021 (with R. Addanki, C. Musco)
14. Diverse Data Selection under Fairness Constraints
ICDT 2021 (with Z. Moumoulidou, A. Meliou)
15. Maximum Coverage in the Data Stream Model: Parameterized and Generalized
ICDT 2021 (with D. Tench, H. Vu)
16. Cache me Outside: A New Look at DNS Cache Probing
PAM 2021 (with A. Niaki, W. Marczak, S. Farhoodi, P. Gill, N. Weaver)
17. Efficient Intervention Design for Causal Discovery with Latents
ICML 2020 (with R. Addanki, S. Kasiviswanathan, C. Musco)
18. Triangle and Four Cycle Counting in the Data Stream Model
PODS 2020 (with S. Vorotnikova)
19. Algebraic and Analytic Approaches for Parameter Learning in Mixture Models
ALT 2020 (with A. Krishnamurthy, A. Mazumdar, S. Pal)
20. Vertex Ordering Problems in Directed Graph Streams
SODA 2020 (with A. Chakrabarti, P. Ghosh, S. Vorotnikova)
21. Sample Complexity of Learning Mixture of Sparse Linear Regressions
NeurIPS 2019 (with A. Krishnamurthy, A. Mazumdar, S. Pal)
22. Trace Reconstruction: Generalized and Parameterized
ESA 2019 (with A. Krishnamurthy, A. Mazumdar, S. Pal)
23. Mesh: Compacting Memory Management for C/C++ Applications
PLDI 2019 (with B. Powers, D. Tench, E. Berger)
24. The Complexity of Counting Cycles in the Adjacency List Streaming Model
PODS 2019 (with J. Kallaugher, E. Price, S. Vorotnikova)
25. Compact Representation of Uncertainty In Clustering
NeurIPS 2018 (with C. Greenberg, A. Kobren, N. Monath, P. Flaherty, A. McCallum)
26. Connect the Dots to Prove It: A Novel Way to Learn Proof Construction
SIGCSE 2018 (with M. McCartin-Lim, B. Woolf)
27. A Simple, Space-Efficient, Streaming Algorithm for Matchings in Low Arboricity Graphs
SOSA 2018 (with S. Vorotnikova)
28. Storage Capacity as an Information-Theoretic Analogue of Vertex Cover
ISIT 2017 (with A. Mazumdar, S. Vorotnikova)
29. Better Approximation of The Streaming Maximum Coverage Problem
ICDT 2017 (with H. Vu)
30. Planar Matchings in Streams Revisited
APPROX 2016 (with S. Vorotnikova)
31. Stochastic Streams: Sample Complexity vs. Space Complexity
ESA 2016 (with M. Crouch, G. Valiant, D. Woodruff)
32. Better Algorithms for Counting Triangles in Data Streams
PODS 2016 (with S. Vorotnikova, H. Vu)
33. Sketching, Embedding, and Dimensionality Reduction for Information Spaces
AISTATS 2016 (with A. Abdullah, R. Kumar, S. Vassilvitskii, S. Venkatasubramanian)
34. Kernelization via Sampling with Applications to Dynamic Graph Streams
SODA 2016 (with R. Chitnis, G. Cormode, H. Esfandiari, M. Hajiaghayi, M. Monemizadeh, S. Vorotnikova)
35. Run Generation Revisited: What Goes Up May or May Not Come Down
ISAAC 2015 (with M. Bender, S. McCauley, S. Singh, H. T. Vu)
36. Catching the head, tail, and everything in between: a streaming algorithm for the degree distribution
ICDM 2015 (with O. Simpson, C. Seshadhri)
37. Densest Subgraph in Dynamic Graph Streams
MFCS 2015 (with D. Tench, S. Vorotnikova, H. Vu)
38. Correlation Clustering in Data Streams
ICML 2015 (with K. Ahn, G. Cormode, S. Guha, A. Wirth)
39. Evaluating Bayesian Networks via Data Streams
COCOON 2015 (with H. T. Vu)
40. Vertex and Hyperedge Connectivity in Dynamic Graph Streams
PODS 2015 (with S. Guha, D. Tench)
41. Verifiable Stream Computation and Arthur–Merlin Communication
CCC 2015 (with A. Chakrabarti, G. Cormode, J. Thaler, S. Venkatasubramanian)
42. Trace Reconstruction Revisited
ESA 2014 (with E. Price, S. Vorotnikova)
43. Dynamic Graphs in the Sliding-Window Model
ESA 2013 (with M. Crouch, D. Stubbs)
44. Sketching Earth-Mover Distance on Graph Metrics
APPROX 2013 (with D. Stubbs)
45. Spectral Sparsification of Dynamic Graph Streams
APPROX 2013 (with K. Ahn, S. Guha)
46. Efficient Nearest-Neighbor Search in the Probability Simplex
ICTIR 2013 (with K. Krstovski, D. Smith, H. Wallach)
47. Homomorphic Fingerprints under Misalignments
STOC 2013 (with A. Andoni, A. Goldberger, E. Porat)
48. AUTOMAN: A Platform for Integrating Human-Based and Digital Computation
OOPSLA 2012 (with D. Barowy, C. Curtsinger, E. Berger). [Press Coverage].
49. Approximate Principal Direction Trees
ICML 2012 (with M. McCartin-Lim, R. Wang)
50. Space-Efficient Estimation of Statistics over Sub-Sampled Streams
PODS 2012 (with A. Pavan, S. Tirthapura, D. Woodruff)
51. Graph Sketches: Sparsfiers, Spanners, and Subgraphs
PODS 2012 (with K. Ahn, S. Guha)
52. Analyzing Graph Structure via Linear Measurements
SODA 2012 (with K. Ahn, S. Guha)
53. The Shifting Sands Algorithm
SODA 2012 (with P. Valiant)
54. Periodicity and Cyclic Shifts via Linear Sketches
APPROX 2011 (with M. Crouch)
55. A Platform for Scalable One-pass Analytics using MapReduce
SIGMOD 2011 (with B. Li, E. Mazur, Y. Diao, P. Shenoy)
56. Polynomial Fitting of Data Streams with Applications to Codeword Testing
STACS 2011 (with A. Rudra, S. Uurtamo)
57. Fast Query Expansion Using Approximations of Relevance Models
CIKM 2010 (with M. Cartright, J. Allan, V. Lavrenko)
58. The Limits of Two-Party Differential Privacy
FOCS 2010 (with I. Mironov, T. Pitassi, O. Reingold, K. Talwar, S. Vadhan)
59. Information Cost Tradeoffs for Augmented Index and Streaming Language Recognition
FOCS 2010 (with A. Chakrabarti, G. Cormode, R. Kondapally)
60. Conditioning and Aggregating Uncertain Data Streams: Going Beyond Expectations
VLDB 2010 (with T. Tran, Y. Diao, L. Peng, A. Liu)
61. Optimizing Linear Counting Queries Under Differential Privacy
PODS 2010 (with C. Li, M. Hay, V. Rastogi, G. Miklau)
62. Space-Efficient Estimation of Robust Statistics and Distribution Testing
ICS 2010 (with S. Chien, K. Ligett)
63. Probabilistic Histograms for Probabilistic Data
VLDB 2009 (with G. Cormode, A. Deligiannakis, M. Garofalakis)
64. The Oil Searching Problem
ESA 2009 (with K. Onak, R. Panigrahy)
65. Annotations in Data Streams
ICALP 2009 (with A. Chakrabarti, G. Cormode).
66. Sampling to Estimate Conditional Functional Dependencies
SIGMOD 2009 (with G. Cormode, L. Golab, F. Korn, D. Srivastava, X. Zhang)
67. Finding Metric Structure in Information Theoretic Clustering
COLT 2008 (with K. Chaudhuri)
68. Tight Lower Bounds for Multi-Pass Stream Computation via Pass Elimination
ICALP 2008 (with S. Guha)
69. Approximation Algorithms for Clustering Uncertain Data
PODS 2008 (with G. Cormode)
70. Robust Lower Bounds for Communication and Stream Computation
STOC 2008 (with A. Chakrabarti, G. Cormode) Slides
71. Sorting and Selection with Random Costs
LATIN 2008 (with S. Angelov, K. Kunal)
72. Declaring Independence via the Sketching of Sketches
SODA 2008 (with P. Indyk)
Manuscripts, Invited Papers, etc.:
NB: Since most of the papers above are published, the copyright has been transferred to the respective publishers. Therefore, the papers cannot be duplicated for commercial purposes. See below for
ACM's copyright notice.
Copyright 20xy by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided
that copies are not made or distributed for profit or commercial advantage and that new copies bear this notice and the full citation on the first page. Copyrights for components of this work owned
by others than ACM must be honored. Abstracting with credit is permitted. | {"url":"https://people.cs.umass.edu/~mcgregor/research/research.html","timestamp":"2024-11-14T03:59:18Z","content_type":"application/xhtml+xml","content_length":"27074","record_id":"<urn:uuid:28b9dd96-bcf6-44f4-ac2b-383c6d60ac34>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00174.warc.gz"} |
Pops two values off the stack, checks if the first value is greater than or equal to the second, and then pushes the result back on to the stack. This looks at the values as unsigned integers (positive values only).
if ($first >= $second) {...}
Stack In
i64 The first value to check with.
i64 The second value to check against.
Stack Out
i32 The result of the comparison. If the first value is greater than or equal to the second value then the result will be 1. Otherwise the result will be 0.
;; Push the i64 value 101 onto the stack
i64.const 101
;; Push the i64 value 42 onto the stack
i64.const 42
;; Pop the two i64 values off the stack, check if the first is greater
;; than or equal to the second and push the result back onto the stack
;; The stack contains an i32 value of 1 (101 >= 42 = is greater or equal)
;; Push the i64 value 101 onto the stack
i64.const 101
;; Push the i64 value 101 onto the stack
i64.const 101
;; Pop the two i64 values off the stack, check if the first is greater
;; than or equal to the second and push the result back onto the stack
;; The stack contains an i32 value of 1 (101 >= 101 = is greater or equal)
;; Push the i64 value 42 onto the stack
i64.const 42
;; Push the i64 value 101 onto the stack
i64.const 101
;; Pop the two i64 values off the stack, check if the first is greater
;; than or equal to the second and push the result back onto the stack
;; The stack contains an i32 value of 0 (42 >= 101 = is not greater or equal)
;; Push the i64 value 0xF23AB02CF178CD56 (17454457012904054102)
;; onto the stack
i64.const 0xF23AB02CF178CD56
;; Push the i64 value 101 onto the stack
i64.const 101
;; Pop the two i64 values off the stack, check if the first is greater
;; than or equal to the second and push the result back onto the stack
;; The stack contains an i32 value of 1
;; 17454457012904054102 >= 101 = is greater or equal | {"url":"https://coderundebug.com/learn/wat-reference/i64/i64-ge-u.html","timestamp":"2024-11-11T04:07:14Z","content_type":"text/html","content_length":"6276","record_id":"<urn:uuid:ac49a710-72b1-4c6b-a7df-c76ec571d890>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00612.warc.gz"} |
A Variation on Trade Study, Pt. 2
By Daniel Nguyen Posted on May 26, 2022 In GENESYS No Comments
MBSE and GENESYS alone cannot help you make the right design decision in a trade study. But the descriptive architecture in GENESYS can surely point the detailed designs in the right direction.
In Part One of this trade study discussion, we established some parameters for the motorcycle components to be evaluated. We will no go into how they can be useful in keeping the trades fair and
consistent. If our descriptive architecture is doing a great job so far to help evaluate the point-and-shoot vs. corner-speed options qualitatively, the introduction of the Constraint Solver will
add the quantitative aspect to our evaluation.
When we recall the goal of the Grand Prix motorcycle design, and of this trade study in particular, it is all about achieving the quickest time to enter, complete, and exit a turn on the race track.
We have determined that two component parameters contribute to the ability to reach this goal. Together, the combination of these two parameters decides what kind of motorcycle (and riding strategy)
will win the trade study. Suppose we have a way to calculate the total time it takes for each motorcycle design to attack a turn:
• For a typical point-and-shoot motorcycle:
vTimeCornerStab = (1 – vLongStiff) * 1.63 + 200 / vMaxPwr * 1.46
vTimeCornerStab is the total time where chassis stability and engine max power are the sensible winning combination.
vLongStiff is the chassis longitudinal stiffness defined in Part One. vLongStiff range is between 0 and 1, with 1 being the most rigid design value.
vMaxPwr is the engine max power output also defined in Part One. vMaxPwr range is equal to or greater than 200 hp.
In a real trade study, the (fictitious in this example) mathematical expression would have originated from historical and empirical evaluations of past point-and-shoot designs. This is why we stress
that MBSE alone cannot give you all the tools necessary for a trade study. In this case, data engineering and applied mathematics are also essential.
• For a typical corner-speed motorcycle:
vTimeCornerFlex = 1 / (1 – vLongStiff) * 1.63 + 200 / vMaxPwr * 1.46
vTimeCornerFlex is the total time where chassis flexibility is the key.
vLongStiff is the chassis longitudinal stiffness defined in Part One. vLongStiff range is between -1 and 0, with -1 being the most flexible design value.
vMaxPwr is the engine max power output also defined in Part One. vMaxPwr range is equal to or less than 200 hp. Historically, we could not fit a more powerful engine into this type of chassis.
Again, the real-life mathematical expression would originate from historical and empirical evaluations of past corner-speed designs. We see that each physical component can place or remove a
constraint upon another component. This is why a real-life trade study is more complex than just selecting the best available part technologies and mixing them into a system design that works.
Once we are confident with the mathematical expressions to be used in the trade study (total time to complete a turn in this case) we can model them in GENESYS as a Constraint Definition. More
specifically, we will constrain the performance of the motorcycle functions to the characteristics of its selected components. To be perfectly clear, the mathematical expressions of a constraint
definition do not automatically solve for the optimum parameters: chassis longitudinal stiffness and engine max power output. They are simply an attribute of a GENESYS class entity to help us
quantitatively compare system’s performances when different combinations of parts are utilized.
Our constraint definition setup looks like this:
Standard practice is applied here when defining a constraint:
• Set the constrains target
• Identify the parameters used in the constraint definition
• Define dependent variables and independent variables
• Write the expression(s)
• Map the variables to the uses
The resulting parametric diagram looks like this:
The objective values set for each of the parameters (longStiffness and maxPowerOut) will be used by the Constraint Solver to execute the expressions for vTimeCornerStab and vTimeCornerFlex. Let us
try two sets of values, each representing the best-case scenario for each motorcycle design option.
The best possible point-and-shoot design with the best available chassis and engine:
• vLongStiff = 1
• vMaxPwr = 256 hp
The constraint solver returns the results as:
• totalTimeFlex = Inf
• totalTimeStab = 1.1406 sec
Notice that the result for the corner-speed design is infinity. This is okay because one of the parameter values is outside of the expected range for this constraint definition. We do have the
result for the best possible point-and-shoot design: 1.1406 seconds.
Similarly, with a corner-speed design, the best available chassis and engine:
• vLongStiff = -1
• vMaxPwr = 200 hp
The constraint solver returns the results as:
• totalTimeFlex = 2.275 sec
• totalTimeStab = 4.720 sec
The result for the best possible corner-speed design is 2.275 sec. We can also observe that the result for the point-and-shoot design is way off (4.720 sec). This is okay because the parameter values used for this solver run are set for the corner-speed expression; they are out of range for the point-and-shoot expression. (In theory, we could split these two expressions into two separate constraint definitions and not worry about which result is valid with which inputs. I only chose to combine them into one constraint definition to reduce the number of diagrams.)
What does this finding tell us?
Given the component design availability for both design options, the best point-and-shoot design (1.1406 sec) beats the best corner-speed design (2.275 sec). As a manufacturer, we might want to go with the point-and-shoot design if the motorcycle’s turning performance is our top (or only) priority.
We can also identify where a design can be improved should component technology advance in the future. For example, if we run the solver again with vLongStiff = -1.5 (outside of the current corner-speed chassis technology), totalTimeFlex drops to 2.112 seconds! A 0.163-second gain is a huge improvement in MotoGP. So, should we as a manufacturer really want to go with the corner-speed design to fit a particular rider’s style (Fabio Quartararo), we can accept the current limitations of component characteristics and then invest heavily in component R&D to improve the overall chassis design. GENESYS and MBSE allow us to make those informed decisions by exposing the interconnections between parts, and between the parts and the whole system.
“Cool light-weight calculations! I can do that in Excel or in MATLAB. Why do I need GENESYS for it?” one might ask. It is a good question. The quantitative trade study can be done with a calculator, but it would exist alone, incoherent without the qualitative understanding of how parts connect to make up a motorcycle system. This is where MBSE helps designers approach a solution holistically and avoid the reductionist thinking of the Industrial Revolution age.
Another critical area where GENESYS can help a holistic trade study is modeling Behavior. Gluing different parts together is not the only way to create different systems. Understanding the behaviors of the parts and of the system is essential for a successful trade study. This is particularly apparent when software and electronics are involved in a complex system design. Though the two motorcycle designs are greatly constrained by the components available to build them, each design can be further improved by manipulating its behaviors. After all, the whole is greater than the sum of its parts.
If we recall, the concept of inheritance is useful in leveraging the understanding of a physical baseline to explore two new physical architectures. Similarly, the concept of Threads and Integrated Behavior facilitates further development of a known baselined behavior into two complex behavior architectures. The GENESYS schema relationship for this part of the trade study is reflects (or reflected in, if going in the opposite direction):
The baselined 1000RR motorcycle has a proven behavior with predictable stimuli and responses, and predictable functions performed by the chassis, electronics, engine, etc. The more complex and
relatively unknown root functions of the MkI and MkII designs reflect the root function of the 1000RR baseline. The MkI and MkII component functions also reflect the 1000RR baselined component
functions. Now, the team working on the MkI design can develop the best functions for a point-and-shoot motorcycle. The MkII team does the same thing for a corner-speed design. All the while, both
teams can reference back to the baselined thread behavior, and in turn, to each other’s design. At the end of the process, both teams can compare and contrast the two functional architectures to one
another, and to the known baseline. We can also apply a parametric analysis (as discussed earlier) to functions where appropriate.
At the beginning of this trade study discussion, I mentioned that the use of threads – integrated behavior was going to be a “variation” of the normal concept. This is because the typical threads –
integrated behavior transition is done within the boundary of one system. In a trade study, there is more than one system being considered. Hence, a set of thread functions will need to converge to
however-many integrated behavior architectures in the trade study:
GENESYS is not the silver-bullet solution for a trade study (nor is any other single tool). A trade study is a monumental task jointly executed by many engineering disciplines with many different tools. This is where GENESYS and MBSE shine: being the connective tissue that binds everything together. The descriptive architecture with the concepts of Physical Inheritance, Behavioral Reflection, and Constraint Definition facilitates traceability from a known baseline to the potential designs to be explored. This traceability through the GENESYS schema allows the trade study to be conducted consistently and fairly. And along the way, every design decision is made in an informed and defensible manner.
About the Author
Daniel Nguyen
Daniel Nguyen is a product manager at Zuken Vitech. Nguyen leads the specification, definition, and management of several Vitech model-based systems engineering (MBSE) software products. He has also
participated in pre-sales technical support, business development, technology research, consulting for industry, and MBSE training in previous roles at Vitech. Prior to joining Zuken Vitech, Nguyen
was a systems engineer at a large U.S. defense contractor, working on aerospace platforms for the U.S. Navy and U.S. Air Force. Daniel Nguyen holds a bachelor’s degree in aerospace engineering from
UCLA and a master’s degree in systems engineering from Stevens Institute of Technology. Outside of work, Nguyen enjoys motorcycle rides with his wife Lauren and being a dad to a lively 3-year-old.
Utilization of the Brinkman Penalization to Represent Geometries in a High-Order Discontinuous Galerkin Scheme on Octree Meshes
Simulation Techniques and Scientific Computing, Department Mechanical Engineering, University of Siegen, 57076 Siegen, Germany
Authors to whom correspondence should be addressed.
Submission received: 1 July 2019 / Revised: 28 August 2019 / Accepted: 3 September 2019 / Published: 5 September 2019
We investigate the suitability of the Brinkman penalization method in the context of a high-order discontinuous Galerkin scheme to represent wall boundaries in compressible flow simulations. To
evaluate the accuracy of the wall model in the numerical scheme, we use setups with symmetric reflections at the wall. High-order approximations are attractive as they require few degrees of freedom
to represent smooth solutions. Low memory requirements are an essential property on modern computing systems with limited memory bandwidth and capability. The high-order discretization is especially
useful to represent long traveling waves, due to their small dissipation and dispersion errors. An application where this is important is the direct simulation of aeroacoustic phenomena arising from
the fluid motion around obstacles. A significant problem for high-order methods is the proper definition of wall boundary conditions. The description of surfaces needs to match the discretization
scheme. One option to achieve a high-order boundary description is to deform elements at the boundary into curved elements. However, creating such curved elements is delicate and prone to numerical
instabilities. Immersed boundaries offer an alternative that does not require a modification of the mesh. The Brinkman penalization is such a scheme that allows us to maintain cubical elements and
thereby the utilization of efficient numerical algorithms exploiting symmetry properties of the multi-dimensional basis functions. We explain the Brinkman penalization method and its application in
our open-source implementation of the discontinuous Galerkin scheme, Ateles. The core of this presentation is the investigation of various penalization parameters. While we investigate the
fundamental properties with one-dimensional setups, a two-dimensional reflection of an acoustic pulse at a cylinder shows how the presented method can accurately represent curved walls and maintains
the symmetry of the resulting wave patterns.
1. Introduction
In simulations of fluid motion for engineering scenarios, we generally need to deal with obstacles or containment of a non-trivial shape. In mesh-based schemes, we have two options to represent such
geometries: we can try to align the mesh with the geometries, such that the walls build a boundary of the mesh, or we can try to embed the boundary conditions inside the mesh elements. The first option
eases the formulation of boundary conditions and their treatment in the scheme [
]. The second option avoids the need to adapt the mesh to the, possibly complex, geometry [
]. Correctly aligning the mesh with arbitrary geometries in the first option can become cumbersome for high-order approximations. Thus, the embedding method is attractive for high-order schemes.
Another application area, where the embedded boundaries provide a benefit, are moving geometries, as the need for new meshes can be avoided during simulations.
High-order discretization schemes can represent smooth solutions with few degrees of freedom. This is an essential property for algorithms on modern computing systems as the memory bandwidth is a
strongly limiting factor on new systems, due to the widening memory gap. A numerical scheme that allows for high-order approximations of the solution is the discontinuous Galerkin finite element
method. In this method, the solution within elements is represented by a function series (usually a polynomial series). In this work, we are concerned with a high-order discontinuous Galerkin scheme
and the embedded geometry representation within it. Besides the possibility to use high-order approximations, the discontinuous Galerkin scheme also offers a relatively loose coupling between
elements, resulting in a high computational locality, which in turn is advantageous for modern parallel computing systems. Discontinuous Galerkin methods are, therefore, increasingly popular.
Peskin [
] was one of the first scientists trying to impose immersed boundaries for his investigations. For his studies, he simulated the flow around heart valves considering the incompressible Navier–Stokes
equations while introducing the immersed boundaries, using an elastic model and applying forces to the fluid, thus changing the momentum equations. His work was extended by Saiki and Biringen [
], and they considered feedback forces for the immersed boundaries to represent a rigid body while using an explicit time-stepping, hence resulting in stiff problems and very small time-stepping for
the simulation. An important fact, which makes immersed boundary methods more attractive, is the introduction of the effect of the geometry in the governing equations themselves. Embedding the
boundaries in the mesh relaxes the requirements on the elements, and using simple elements allows for efficient numerical algorithms that can, for example, exploit inherent symmetric properties of
the discretization. The additionally introduced terms can either be considered in the numerical discretization or the continuous equations. Applying forcing terms in the discretization allows for
better control of the numerical accuracy and the conservation properties of the used discretization method; on the other hand, the generality and flexibility of these methods disappear when
considering different solvers using different discretization methods. In contrast, the volume penalization method imposes additional forcing penalty terms on the continuous equations, while the
discretization is done as usual [
]. The Brinkman Penalization Method (BPM) is one of these methods. It was originally developed by Arquis and Caltagirone [
] for numerical simulations of isothermal obstacles in incompressible flows. The idea is to model the obstacle as a porous material, with material properties approaching zero. The major benefit of
this method is error estimation, which can be rigorously predicted in terms of the penalization parameters [
]. Furthermore, the boundary conditions can be enforced to a precision, without changing the numerical discretization of the scheme. Kevlahan and Ghidaglia already applied this method for
incompressible flows, while considering a non-moving, as well as a moving geometry. They used a pseudo-spectral method [
] in their works.
Liu and Vasilyev employed the volume penalization for the compressible Navier–Stokes equations. In their publication [
], they discussed a 1D and a 2D test case. They used a wavelet method for the discretization and showed error convergence and resulting pressure perturbations for acoustic setups. In other
investigations, various numerical discretization methods were used, which showed promising results using the Brinkman penalization method. In [
], the pseudo-spectral methods, in [
], wavelet, and in [
], the finite volume/finite element methods were used. However, as far as we know, no work on this kind of penalization in the context of high-order discontinuous Galerkin methods for compressible
Navier–Stokes equations has been done so far. Thus, this paper will look into the Brinkman penalization employed within a high-order discontinuous Galerkin solver. Our implementation is available in
our open-source solver Ateles [ ].
2. Numerical Method
The flow of compressible viscous fluids can mathematically be described by the Navier–Stokes equations governing the conservation of mass, momentum, and energy. In this section, we describe the
compressible Navier–Stokes equation with the Brinkman penalization method to model solid obstacles as proposed in [
]. We apply this penalization in the frame of the Discontinuous Galerkin (DG) method and introduce this method also briefly in this section. The additional source terms introduced by the penalization
increase the stiffness of the scheme considerably, and the last part of this section discusses how this can be overcome by an implicit-explicit time integration scheme.
2.1. The Compressible Navier–Stokes Equation
The Navier–Stokes equations describe the motion of fluids and model the conservation of mass, momentum, and energy. The non-dimensional compressible equations in conservative form can be written as:

$\frac{\partial \rho}{\partial t} + \nabla \cdot m = 0$

$\frac{\partial m_i}{\partial t} + \sum_{j=1}^{3} \frac{\partial}{\partial x_j}\left(m_i v_j + p\,\delta_{ij}\right) - \frac{1}{Re}\sum_{j=1}^{3}\frac{\partial}{\partial x_j}\tau_{ij} = 0, \quad i = 1, 2, 3$

$\frac{\partial \rho e}{\partial t} + \nabla \cdot \left[\left(e + \frac{p}{\rho}\right) m\right] - \frac{1}{Re}\sum_{j=1}^{3}\frac{\partial}{\partial x_j}\left(\sum_{i=1}^{3}\tau_{ij} v_i - \frac{1}{\gamma - 1}\frac{\mu}{Pr}\frac{\partial T}{\partial x_j}\right) = 0$

where the conserved quantities are the density $\rho$, the momentum $m = \rho v$, and the total energy $e$, given by the sum of the kinetic and internal energy:

$e = \frac{1}{2}\,|v|^2 + \frac{p}{\rho(\gamma - 1)}.$

Here, $v = (v_1, v_2, v_3)^T$ is the velocity vector, $\delta_{ij}$ is the Kronecker delta, $Re$ is the reference Reynolds number, and $Pr$ the reference Prandtl number. $\gamma$ stands for the isentropic expansion factor, given by the heat capacity ratio of the fluid, and $T$ denotes the temperature. Viscous effects are described by the shear stress tensor

$\tau_{ij} = \mu\left(\frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}\right)$

and the dynamic viscosity $\mu$. To close the system, we use the ideal gas law as the equation of state, which yields the relation $p = \rho R T$, where $R$ represents the gas constant.
2.2. The Brinkman Penalization
Penalization schemes employ additional, artificial terms to the equations in regions where the flow is to be inhibited (penalized). In the conservation of momentum and energy, we can make use of
local source terms that penalize deviations from the desired state. With the Brinkman penalization, we also inhibit mass flow through obstacles by introducing the Brinkman porosity model and using a
low porosity, where obstacles are to be found. Extending the compressible Navier–Stokes equations from Section 2.1 by the penalization terms, we obtain Equations (7)–(9):

$\frac{\partial \rho}{\partial t} = -\nabla \cdot \left[\left(1 + \left(\frac{1}{\phi} - 1\right)\chi\right) m\right]$

$\frac{\partial m_i}{\partial t} + \sum_{j=1}^{3}\frac{\partial}{\partial x_j}\left(m_i v_j + p\,\delta_{ij}\right) - \frac{1}{Re}\sum_{j=1}^{3}\frac{\partial}{\partial x_j}\tau_{ij} = -\frac{\chi}{\eta}\left(v_i - U_{o,i}\right), \quad i = 1, 2, 3$

$\frac{\partial \rho e}{\partial t} + \nabla \cdot \left[\left(e + \frac{p}{\rho}\right) m\right] - \frac{1}{Re}\sum_{j=1}^{3}\frac{\partial}{\partial x_j}\left(\sum_{i=1}^{3}\tau_{ij} v_i - \frac{1}{\gamma - 1}\frac{\mu}{Pr}\frac{\partial T}{\partial x_j}\right) = -\frac{\chi}{\eta_T}\left(T - T_o\right)$

The obstacle has the porosity $\phi$, the velocity $U_o$, and the temperature $T_o$. The strength of the source terms can be adjusted by the viscous permeability $\eta$ and the thermal permeability $\eta_T$. The masking function $\chi$ describes the geometry of obstacles and is zero outside of obstacles and one inside. It is also referred to as the characteristic function. It is capable of dealing not only with complex geometries but also with variations in time:

$\chi(x, t) = \begin{cases} 1, & \text{if } x \in \text{obstacle}, \\ 0, & \text{otherwise}. \end{cases}$
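Evaluated pointwise, the masking function is simple to implement. As a minimal sketch, assuming a static circular cylinder obstacle (the shape used in the two-dimensional test case later; the center and radius here are illustrative, and a moving geometry would additionally take the time t into account):

```python
import numpy as np

def chi(x, y, center=(0.0, 0.0), radius=0.5):
    """Masking (characteristic) function: 1.0 inside the obstacle, 0.0 outside.

    Sketch for a static circular cylinder of the given center and radius.
    """
    dx = np.asarray(x) - center[0]
    dy = np.asarray(y) - center[1]
    return np.where(dx**2 + dy**2 <= radius**2, 1.0, 0.0)
```

Because the function accepts arrays, it can be evaluated for all quadrature points of an element in one call.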
To represent a solid wall for compressible fluids properly, Liu et al. [ ] stated that the porosity $\phi$ should be as small as possible, i.e., $0 < \phi \ll 1$. They scaled the permeabilities with the porosity and introduced according scaling factors $\alpha$ and $\alpha_T$. The permeabilities were then defined by $\eta = \alpha\phi$ and $\eta_T = \alpha_T\phi$. With these relations, Liu et al. [ ] found a modeling error of $O(\eta^{1/2}\phi)$ for resolved boundary layers in the material and $O((\eta/\eta_T)^{1/4}\phi^{3/4})$ for non-resolved boundary layers. In both cases, the error was dominated by the porosity. Nevertheless, the error can still be minimized with sufficiently small viscous permeabilities $\eta$.
Moreover, small values of the porosity caused stability issues and imposed a heavy time-step restriction with our numerical scheme. With the introduction of $\phi$, the eigenvalues of the hyperbolic system change, which has adverse effects on stability. The eigenvalues of the system of equations along with penalization terms [ ] are given by the following characteristic equation:

$-(\lambda - u)^3 + \left[c^2 + \frac{u^2}{2}\left(\phi^{-1} - 1\right)(\gamma - 3)\right](\lambda - u) - c^2 u \left(\phi^{-1} - 1\right)(\gamma - 1) = 0,$

where $c = (\gamma p / \rho)^{1/2}$, and $\gamma$, $p$, $\rho$, and $u$ are the ratio of specific heats, pressure, density, and velocity, respectively. For $\phi = 1$, the system of equations yields the three eigenvalues $u$, $u + c$, $u - c$, which implies the speed of sound $c$ in the medium, which is what we would like to achieve. However, with $0 < \phi \ll 1$, the eigenvalues can no longer be evaluated easily and are linked to $\phi$, which causes problems for the hyperbolic part.
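The cubic characteristic equation can be checked numerically. A small sketch (the values of u, c, and γ are arbitrary illustrations) confirming that for φ = 1 the extra terms vanish and the usual eigenvalues u and u ± c are recovered:

```python
import numpy as np

def penalized_eigenvalues(u, c, gamma, phi):
    """Solve -(mu)^3 + A*mu - B = 0 with mu = lambda - u, where
    A = c^2 + (u^2/2)*(1/phi - 1)*(gamma - 3) and
    B = c^2 * u * (1/phi - 1)*(gamma - 1),
    and return the eigenvalues lambda = mu + u in ascending order."""
    s = 1.0 / phi - 1.0
    A = c**2 + 0.5 * u**2 * s * (gamma - 3.0)
    B = c**2 * u * s * (gamma - 1.0)
    # Coefficients of -mu^3 + 0*mu^2 + A*mu - B in numpy.roots ordering.
    mu = np.roots([-1.0, 0.0, A, -B])
    return np.sort(mu.real + u)

# For phi = 1 the penalization terms vanish: eigenvalues u - c, u, u + c.
lam = penalized_eigenvalues(u=0.3, c=1.0, gamma=1.4, phi=1.0)
```

Evaluating the same function for small φ shows how strongly the wave speeds, and with them the explicit time-step limit, depend on the porosity.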
2.3. The Discontinuous Galerkin Discretization
In this section, we briefly introduce the semi-discrete form of the Discontinuous Galerkin finite element method (DG) for compressible inviscid flows. The compressible Euler equations are derived from the Navier–Stokes equations by neglecting diffusive terms. They still provide a model for the conservation of mass, momentum, and energy in the fluid and can be described in vectorial notation as

$\partial_t u + \nabla \cdot F(u) = 0,$

equipped with suitable initial and boundary conditions. Here, $u$ is the vector of the conservative variables, and the flux function $F(u) = (f(u), g(u))^T$ for two spatial dimensions is given by:

$u = \begin{pmatrix} \rho \\ \rho u \\ \rho v \\ \rho E \end{pmatrix}, \quad f(u) = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ \rho u v \\ (\rho E + p) u \end{pmatrix}, \quad g(u) = \begin{pmatrix} \rho v \\ \rho u v \\ \rho v^2 + p \\ (\rho E + p) v \end{pmatrix},$

where $\rho$, $v = (u, v)^T$, $E$, and $p$ denote the density, velocity vector, specific total energy, and pressure, respectively. The system is closed by the equation of state assuming the fluid obeys the ideal gas law, with the pressure defined by $p = (\gamma - 1)\,\rho\left(E - \frac{1}{2}(u^2 + v^2)\right)$, where $\gamma = c_p/c_v$ is the ratio of specific heat capacities and $E$ is the total energy per unit mass.
The discontinuous Galerkin formulation of the above equation was obtained by multiplying it with a test function $\psi$ and integrating it over the domain $\Omega$. Thereafter, integration by parts was used to obtain the following weak formulation:

$\int_\Omega \psi\,\frac{\partial u}{\partial t}\, d\Omega + \oint_{\partial\Omega} \psi\, F(u) \cdot n\, ds - \int_\Omega \nabla\psi \cdot F(u)\, d\Omega = 0, \quad \forall \psi,$

where $ds$ denotes the surface element. A discrete analogue of the above equation was obtained by considering a tessellation of the domain $\Omega$ into $n$ closed, non-overlapping elements given by $T = \{\Omega_i \mid i = 1, 2, \ldots, n\}$, such that $\Omega = \cup_{i=1}^{n}\Omega_i$ and $\Omega_i \cap \Omega_j = \emptyset\ \forall i \neq j$. We define a finite element space consisting of discontinuous polynomial functions of degree $m \geq 0$ given by:

$P_m = \left\{ f \in [L^2(\Omega)]^m : f|_{\Omega_k} \in P_m(\Omega_k)\ \forall \Omega_k \in \Omega \right\}$

where $P_m(\Omega_k)$ is the space of polynomials with largest degree $m$ on element $\Omega_k$. With the above definition, we can write the approximate solution $u_h(x, t)$ within each element using a polynomial of degree $m$:

$u_h(x, t) = \sum_{i=1}^{m} \hat{u}_i\,\phi_i, \quad \psi_h(x) = \sum_{i=1}^{m} \hat{v}_i\,\phi_i,$

where the expansion coefficients $\hat{u}_i$ and $\hat{v}_i$ denote the degrees of freedom of the numerical solution and the test function, respectively. Notice that there is no global continuity requirement for $u_h$ and $\psi_h$ in the previous definition. Splitting the integrals of the weak formulation into a sum of integrals over the elements $\Omega_i$, we obtain the space-discrete variational formulation:

$\sum_{i=1}^{n}\left( \frac{\partial}{\partial t}\int_{\Omega_i} \psi_h\, u_h\, d\Omega + \oint_{\partial\Omega_i} \psi_h\, F(u_h) \cdot n\, ds - \int_{\Omega_i} \nabla\psi_h \cdot F(u_h)\, d\Omega \right) = 0, \quad \forall \psi_h \in P_m.$
Due to the element-local support of the numerical representation, the flux term is not uniquely defined at the element interfaces. The flux function is, therefore, replaced by a numerical flux function $F^*(u_h^-, u_h^+, n)$, where $u_h^-$ and $u_h^+$ are the interior and exterior traces at the element face and $n$ is the direction normal to the interface. An appropriate numerical flux can then be selected from several numerical flux schemes. For our simulations, we used the Lax–Friedrichs scheme as the numerical flux.
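The Lax–Friedrichs flux mentioned here can be sketched for the one-dimensional Euler equations (state layout [ρ, ρu, ρE] and the ideal gas closure from above; the function names are ours, not from Ateles):

```python
import numpy as np

def euler_flux_1d(q, gamma=1.4):
    """Physical flux f(q) of the 1D Euler equations for q = [rho, rho*u, rho*E]."""
    rho, m, rE = q
    u = m / rho
    p = (gamma - 1.0) * (rE - 0.5 * rho * u**2)
    return np.array([m, m * u + p, (rE + p) * u])

def lax_friedrichs_flux(qL, qR, gamma=1.4):
    """Numerical flux F*(qL, qR) = 0.5*(f(qL) + f(qR)) - 0.5*lmax*(qR - qL),
    where lmax is the largest wave speed |u| + c of the two interface traces."""
    def max_speed(q):
        rho, m, rE = q
        u = m / rho
        c = np.sqrt(gamma * (gamma - 1.0) * (rE - 0.5 * rho * u**2) / rho)
        return abs(u) + c
    lmax = max(max_speed(qL), max_speed(qR))
    return (0.5 * (euler_flux_1d(qL, gamma) + euler_flux_1d(qR, gamma))
            - 0.5 * lmax * (np.asarray(qR) - np.asarray(qL)))
```

The scheme is consistent: for identical traces, the added dissipation term vanishes and the physical flux is recovered.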
For simplicity, we can re-write the equation above in matrix-vector notation and obtain:

$\frac{\partial}{\partial t}\hat{u} = M^{-1}\left(S \cdot F(\hat{u}) - M_F \cdot F^*(\hat{u})\right) =: rhs(\hat{u}).$

Here, $M$ and $S$ denote the mass and the stiffness matrices, and $M_F$ are the so-called face mass lumping matrices. The above obtained ordinary differential equation can be solved in time using any standard time-stepping method, e.g., a Runge–Kutta method.
In our implementation, we exploited the fact that we only used cubical elements. This choice of simple elements allowed for a tensor-product notation in the multi-dimensional basis functions. The
symmetry of the elements enabled efficient dimension-by-dimension algorithms in the computation.
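The dimension-by-dimension evaluation enabled by the tensor-product basis can be illustrated with a small sketch (the operator A here is a hypothetical n×n 1D matrix, e.g. a mass or differentiation matrix): applying A along each direction of a 3D coefficient array is equivalent to applying the Kronecker product A ⊗ A ⊗ A to the flattened vector, but far cheaper.

```python
import numpy as np

def apply_1d_operator_3d(A, u):
    """Apply the 1D operator A along each direction of the tensor-product
    coefficient array u of shape (n, n, n). Equivalent to (A kron A kron A)
    acting on u.ravel(), but at O(n^4) instead of O(n^6) cost."""
    u = np.einsum('ij,jkl->ikl', A, u)  # apply along direction 1
    u = np.einsum('ij,kjl->kil', A, u)  # apply along direction 2
    u = np.einsum('ij,klj->kli', A, u)  # apply along direction 3
    return u
```

This factorization is the main reason cubical elements with tensor-product bases keep high-order DG operators affordable.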
2.4. The Implicit-Explicit Time Discretization
The penalization introduces stiff terms to the equations, and for accuracy we want them to be as stiff as possible; we therefore introduce an implicit time integration for those terms. Combined with an otherwise explicit time integration scheme, this results in an implicit-explicit time-stepping scheme, achieved by splitting the right-hand side of the equations into an explicitly integrated part and an implicitly integrated part. To perform the time integration of the system, we use a Diagonally Implicit Runge–Kutta (DIRK) scheme with three explicit and four implicit stages, as presented in [ ]. The following section first considers a single implicit Euler step to discuss the arising equations that need to be solved in each implicit stage of the higher-order time discretization.
We denote the right-hand side by $Q$, and employ the superscript $\iota$ for the implicit part and the superscript $\xi$ for the explicit part. By using the conservative quantities as subscripts ($\rho$, $m_i$, and $e$), we can distinguish the right-hand sides for the different equations. Thus, we get:

$\frac{\partial \rho}{\partial t} = Q_\rho^\xi + Q_\rho^\iota$

$\frac{\partial m_i}{\partial t} = Q_{m_i}^\xi + Q_{m_i}^\iota$

$\frac{\partial e}{\partial t} = Q_e^\xi + Q_e^\iota$

and we chose the implicit parts as

$Q_{m_i}^\iota = -\frac{\chi}{\eta}\left(u_i - U_{o,i}\right)$

$Q_e^\iota = -\frac{\chi}{\eta_T}\left(T - T_o\right)$

out of Equations (8) and (9).
This choice restricts the implicit computation to the local source terms, which can be evaluated pointwise. Unfortunately, the Brinkman porosity introduced in Equation (7) affects the flux and introduces spatial dependencies. To avoid the need for the solution of an equation system across the whole domain for this dependency, the porosity part will be computed in the explicit time-stepping scheme.

Observation for the Implicit Part

Considering Equations (18)–(20) only with their implicit parts, we get the following equation system:

$\frac{\partial \rho}{\partial t} = 0$

$\frac{\partial m_i}{\partial t} = -\frac{\chi}{\eta}\left(u_i - U_{o,i}\right)$

$\frac{\partial e}{\partial t} = -\frac{\chi}{\eta_T}\left(T - T_o\right).$

Notice that these equations can be solved pointwise, as no spatial derivatives appear. A discretization of these equations in time with a backward Euler scheme yields the solvable equation system:

$\frac{\rho(t + \Delta t) - \rho(t)}{\Delta t} = 0$

$\frac{m_i(t + \Delta t) - m_i(t)}{\Delta t} = -\frac{\chi}{\eta}\left(u_i(t + \Delta t) - U_{o,i}\right)$

$\frac{e(t + \Delta t) - e(t)}{\Delta t} = -\frac{\chi}{\eta_T}\left(T(t + \Delta t) - T_o\right).$

The first equation trivially yields $\rho(t + \Delta t) = \rho(t)$. With the implied constant density, we can now solve the equation for the change in momentum (28) and arrive at an explicit expression for the velocity $u_i(t + \Delta t)$ at the next point in time:

$\frac{\rho(t + \Delta t)\, u_i(t + \Delta t) - \rho(t)\, u_i(t)}{\Delta t} = -\frac{\chi}{\eta}\left(u_i(t + \Delta t) - U_{o,i}\right)$

$u_i(t + \Delta t) = \frac{\rho(t)\, u_i(t) + \frac{\chi \Delta t}{\eta}\, U_{o,i}}{\rho(t) + \frac{\chi \Delta t}{\eta}}.$

Finally, density and velocity at the new point in time can be used to find the new temperature as well, by substituting the above results in Equation (29) and solving for the temperature at the next point in time. We find:

$T(t + \Delta t) = \frac{\frac{\chi \Delta t}{\eta_T}\, T_o + c_v\, \rho(t)\, T(t) + \frac{\rho(t)}{2}\left(u_i^2(t) - u_i^2(t + \Delta t)\right)}{c_v\, \rho(t) + \frac{\chi \Delta t}{\eta_T}},$

where $u_i(t + \Delta t)$ is given by Equation (31).
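The closed-form updates derived above translate directly into a pointwise routine. A sketch (one velocity component per quadrature point for brevity; variable names are ours):

```python
def implicit_penalization_step(rho, u, T, chi, dt, eta, eta_T, U_o, T_o, c_v):
    """One backward-Euler step for the penalization source terms only.

    The density is unchanged; velocity and temperature follow the explicit
    solution formulas of the pointwise implicit system.
    """
    u_new = (rho * u + chi * dt / eta * U_o) / (rho + chi * dt / eta)
    T_new = (chi * dt / eta_T * T_o
             + c_v * rho * T
             + 0.5 * rho * (u**2 - u_new**2)) / (c_v * rho + chi * dt / eta_T)
    return rho, u_new, T_new
```

Outside of obstacles (χ = 0) the step is the identity; inside, for vanishing permeabilities, velocity and temperature are driven toward the obstacle state regardless of the time step size, which is exactly why the stiffness of the source terms poses no stability problem here.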
Thus, this specific choice of terms for the implicit part of the time integration scheme yields a system that can be solved explicitly and without much additional computational effort. At the same time, the implicit discretization allows for arbitrarily small values of $\eta$ and $\eta_T$. A similar approach was developed by Jens Zudrop to model perfectly electrically conducting boundaries in the Maxwell equations; more details can be found in his thesis [ ].
To solve the complete system, we then employed the diagonally implicit Runge–Kutta scheme with three explicit stages and four implicit stages [
]. It provides a scheme that is third order in time and L-stable.
Note that, while this approach overcomes time step limitations with respect to the permeabilities $\eta$ and $\eta_T$, the porosity term changes the eigenvalues of the hyperbolic system and affects the explicit time step restriction.
3. Results and Discussion
To investigate the penalization scheme in our discontinuous Galerkin implementation, we first analyzed the fundamental behavior in two one-dimensional setups and then considered the scattering at a
cylinder in a two-dimensional setup.
As explained in Section 2.2, the modeling error of the penalization for the compressible Navier–Stokes equations, as found by Liu and Vasilyev [ ], is expected to scale with the porosity $\phi$ by an exponent between $3/4$ and one, and with the viscous permeability $\eta$ by an exponent between $1/4$ and $1/2$. To achieve low errors, one may, therefore, be inclined to minimize $\phi$. However, with the implicit-explicit time integration scheme presented in Section 2.4, we can eliminate the stiffness issues due to small permeabilities at little additional cost, while the stability limitation by the porosity persists. Because of this, we deem it more feasible to utilize a small viscous permeability instead of a small porosity. At the same time, the ratio between the viscous permeability $\eta$ and the thermal permeability $\eta_T$ gets small without overly large $\eta_T$. Therefore, we used a slightly different scaling than proposed by Liu and Vasilyev [ ]: we introduce the scaling parameter $\beta$ and define the permeabilities accordingly in relation to the porosity. Note that we then expect the modeling error to be of size $O(\beta^{1/4}\phi^{3/4})$.
3.1. One-Dimensional Acoustic Wave Reflection
To assess how well the penalization scheme can capture the reflective nature of a solid wall, we used the reflection of an acoustic wave at the material. The initial pressure distribution is shown in Figure 1. It is described by a Gaussian pulse centered at $x = 0.25$ in the left half of the domain ($x \leq 0.5$):

$\rho' = u' = p' = \epsilon \exp\left(-\ln(2)\,\frac{(x - 0.25)^2}{0.004}\right)$

For the amplitude $\epsilon$ of the wave, we used a value of $\epsilon = 10^{-3}$. The perturbations in density $\rho'$, velocity $u'$, and pressure $p'$ are applied to a constant, non-dimensionalized state with a speed of sound of one. This results in the initial condition for the conservative variables density $\rho$, momentum $m$, and total energy $e$:

$\rho = 1 + \rho', \quad m = \rho\, u', \quad e = \frac{1}{\gamma(\gamma - 1)} + \frac{p'}{\gamma - 1} + \frac{1}{2}\,\rho\,(u')^2$
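The initial state above can be set up in a few lines, e.g. (γ = 1.4 assumed here; the background state has density one and speed of sound one):

```python
import numpy as np

def initial_condition(x, eps=1e-3, gamma=1.4):
    """Gaussian pulse perturbation on a uniform background state with speed
    of sound 1; returns the conservative variables (rho, m, e)."""
    pert = eps * np.exp(-np.log(2.0) * (x - 0.25)**2 / 0.004)
    rho_p, u_p, p_p = pert, pert, pert      # rho' = u' = p'
    rho = 1.0 + rho_p
    m = rho * u_p
    e = 1.0 / (gamma * (gamma - 1.0)) + p_p / (gamma - 1.0) + 0.5 * rho * u_p**2
    return rho, m, e
```

At the pulse center $x = 0.25$ the perturbation reaches its full amplitude $\epsilon$, and it decays to half that value at a distance of about 0.063.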
The penalization with the porous medium is applied in the right half of the domain ($x > 0.5$). In acoustic theory, the reflection should be perfectly symmetric, and the reflected pulse should have the same shape and size, only with opposite velocity. This simple setup allows us to analyze the dampening of the reflected wave and the induced phase errors. Reflected waves for different settings of $\beta$, as defined above, are shown in Figure 2. The pressure distribution is shown for the state after a simulation time of $t = 0.5$. With linear acoustic wave transport and a speed of sound of one, the pulse should return to its original starting point, just with an opposite traveling direction. This symmetry makes it easy to judge both the loss in wave amplitude and the phase shift of the reflected pulse.
While the analytical result for linear wave transport provides a good reference for the acoustic wave in general, it deviates sufficiently from the nonlinear behavior to limit its suitability for a convergence analysis to small error values. Therefore, we compared the simulations with the penalization method to numerical results with traditional wall boundary conditions and a high resolution. This reference was computed with the same element length, but the domain ended at $x = 0.5$ with a wall boundary condition, and a maximal polynomial degree of 255 was used (256 degrees of freedom per element) to approximate the smooth solution. The resulting pressure profiles for different settings of $\beta$ and a fixed porosity of $\phi = 1.0$ are shown in Figure 2. This illustrates how well the wave is reflected for different settings of $\beta$, and that the solid wall reflection is well approximated for sufficiently small values of $\beta$. These numerical results were obtained with 48 elements and a maximal polynomial degree of 31 (32 degrees of freedom per element). Note that this setup aligns the wall interface with an element interface, where the discontinuity in the penalization is actually allowed by the numerical scheme. Later, we will discuss the changes observed when moving the wall surface into the elements.
Figure 3
illustrates the impact of the porosity on the error in amplitude of the reflected wave for the same discretization with 48 elements and a maximal polynomial degree of 31. We plotted the error
$e = ( ϵ − p ′ ( t = 0.5 ) ) / ϵ$
over porosity
for various scaling parameters of
between one and
$10 − 6$
. A scaling parameter of
$β = 1$
means that the error is only driven by the porosity
, and for large values of
$β ≥ 10 − 2$
, we observed the expected reduction in the error with decreasing porosity. However, with
$β = 10 − 3$
, this comes eventually to an end (no improvements for
$ϕ < 2 × 10 − 2$
), and for smaller values of
, no improvements for the error can be achieved by lowering the porosity anymore. As can be seen in this figure, a sufficiently small permeability can yield the accuracy as a small porosity.
In Figure 3, for $\beta = 10^{-3}$, we also observed a drop in the error, which then increased again with smaller $\phi$ before finally reaching a converged value. We would like to point out that this behavior was expected from our numerical scheme, which uses polynomials to represent the solution. Each data point along such a line represents a slightly different test case in terms of boundary layer thickness, as pointed out in Section 2.2. A sweet spot is reached when the polynomial degree used for the simulation correctly captures the boundary layer of the problem. As we move further left from this point, the sweet spot is gradually lost with the further thinning of the boundary layer. With the same polynomial degree, one would also expect to see this behavior for lines representing $\beta < 10^{-3}$ at correspondingly larger $\phi$. This is exactly what we see for $\beta = 10^{-4}$ at values of $\phi$ close to 1.0. For all other lines in the plot, this spot does not fall within the range of the figure.
By using the implicit mixed explicit scheme from Section 2.4, it is possible to exploit drastically smaller values for the permeabilities to cover for the lack of porosity in the penalization. The porosity, on the other hand, cannot be treated as easily in our discretization, and even moderate values of $\phi$ can have a dramatic impact on the time step restriction due to the changed eigenvalues in the hyperbolic part of the equations.
Next, we performed a convergence analysis shifting the position of the wall such that it intersected the element at different locations. The reason for such an analysis becomes apparent when we consider the high-order numerical scheme used. We represented the solution state within an element using polynomials. For the pointwise evaluation of the nonlinear terms, we employed Gaussian integration points, at which the masking function of the penalization also needs to be evaluated. Within an element, these integration points were scattered non-uniformly, being more concentrated near the element interfaces and rather sparse at the center. Therefore, in a validation test case like the previous one, where the wall was aligned with the element interface, the wall had the advantage of being represented very precisely even for comparatively few degrees of freedom, due to the abundance of interpolation points there. In actual simulations, however, the wall interface may intersect an element at any point. We, therefore, also need to consider such intersections through the element and ensure that the solution converges to the reference solution. The penalization method itself is not subject to any such limitation and can represent a wall irrespective of its location within an element.
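The clustering of integration points toward the element interfaces can be illustrated with a quick check of the Gauss-Legendre nodes on the reference element (a sketch for illustration only; the actual node set used by the scheme may differ):

```python
import numpy as np

# Gauss-Legendre nodes on the reference element [-1, 1] for 8 points
nodes, _ = np.polynomial.legendre.leggauss(8)
nodes.sort()

# Gaps between neighboring nodes: widest at the element center,
# tightest near the interfaces, so a wall cutting through the center
# is sampled more coarsely than one at the interface.
spacing = np.diff(nodes)
print(spacing.max() / spacing.min())  # > 1, roughly 2.2 for 8 nodes
```

This is why a wall aligned with an element interface is resolved better than one crossing the element center.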
Thus, we performed and compared a convergence analysis on two different discretizations, one where the wall lay at an element interface and a second where the wall intersected one element exactly in the middle. We would like to point out that the latter scenario yields a worst-case estimate for the approximation of the jump in the masking function within an element. As explained, this simply comes from the scarcity of integration points around the center of an element. For the following convergence analysis, we ignored the porosity (i.e., set $\phi = 1$) and used small permeabilities by choosing $\beta = 10^{-6}$. We now considered the $L_2$ error norm in the fluid domain. As a reference solution, we employed a numerical simulation with a traditional wall boundary and a high maximal polynomial degree of 255 (256 degrees of freedom per element). The error was measured at $t = 0.5$, after the reflected wave had reached its initial position again.
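A discrete $L_2$ error of this kind can be sketched as a quadrature of the squared difference between the numerical and the reference solution over the fluid domain (a minimal illustration on sample points; the paper's solver evaluates this on the polynomial representation):

```python
import numpy as np

def l2_error(x, p_num, p_ref):
    """Discrete L2 norm of (p_num - p_ref) over sample points x (trapezoidal rule)."""
    d2 = (p_num - p_ref) ** 2
    integral = np.sum(0.5 * (d2[1:] + d2[:-1]) * np.diff(x))
    return np.sqrt(integral)

# Toy check on the fluid part of the domain, x in [0, 0.5]:
x = np.linspace(0.0, 0.5, 201)
print(l2_error(x, np.sin(x), np.sin(x)))  # identical fields -> 0.0
```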
Figure 4 shows the $L_2$ norm of the error for the reflected pressure wave with a maximal polynomial degree of seven over an increasing number of elements (h-refinement). This plot compares the two discretizations explained above.
As expected, in Figure 4, we observe superior convergence behavior for the case where the wall lies at the element interface in comparison to the case where the wall crosses through the element center. However, in both cases, we observed proper convergence towards the solution with a traditional solid wall boundary condition. The order of error convergence did not match the high order of the discretization in either case, but this was expected due to the discontinuity introduced by the masking function of the penalization.
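The observed convergence order in such an h-refinement study can be estimated from two successive error levels; the error values below are made up for illustration only:

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Convergence order from errors on two grids whose h differs by `refinement`."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# Hypothetical errors after doubling the element count:
print(observed_order(4.0e-3, 1.0e-3))  # -> 2.0, i.e., second-order convergence
```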
Next, we performed another convergence study using the same two discretizations, but this time keeping the number of elements constant and increasing the order of polynomial representation within
those elements.
Figure 5
shows the error convergence over the maximal polynomial degree in the discretization scheme (p-refinement) with the number of elements fixed at 24. Here, also, one observes the solution converging to the reference solution in both cases. While no spectral convergence was achieved for this discontinuous problem, quadratic convergence can be observed. This shows that a high-order approximation was beneficial even with the discontinuous masking function for the penalization.
Finally, we also looked at the case where the wall was close to the element interface, but not exactly on it. This is a potentially critical configuration for the numerical scheme, as the discontinuity close to the surface needs to be properly captured. We put the wall at 5% of the element length away from the element surface and measured the error as before, resulting in the graph shown in Figure 6. For this case as well, we observed a similar convergence rate as before, though the error was slightly worse than with the wall on the interface.
For a smooth solution, the advantage of high-order methods to attain a numerical solution of a given quality using fewer degrees of freedom is well documented [ ]. However, for a complex nonlinear problem with a discontinuity introduced by the porous medium, it is not so clear whether a high-order discretization still offers a computational advantage. To investigate this, we ran the wall reflection problem for several orders and plotted the convergence with respect to the required computational effort, as seen in Figure 7. This test was performed starting with 16 elements in each data series, providing the leftmost point for the respective spatial scheme order. For subsequent data points, the number of elements was always increased by a factor of two, up to the point where an error of $10^{-6}$ was achieved.
Figure 7a depicts the observed $L_2$ error over the total number of degrees of freedom in the simulation. Here, we can see that to attain a certain level of accuracy, the number of degrees of freedom required was always smaller when using a higher spatial order, even though the convergence rate did not increase with the scheme order. The high-order discretization thus allowed for memory-efficient computations, even in this case with a discontinuity present at the wall.
While there seems to be a clear benefit of high-order discretizations for the memory consumption, it is not so clear whether this still holds for the required computing time. The time step limitation of the explicit scheme required more time steps for higher spatial scheme orders, increasing the computational effort to reach the desired simulation time. Additionally, the number of operations increased with higher orders due to the nonlinearity of the equations. Figure 7b shows the measured running times for the same runs on a single computing node with 12 Intel Sandy Bridge cores. Again, the achieved accuracy is plotted, but this time over the observed running time in seconds. As can be seen, the advantage in terms of running time was not as clear as in terms of memory. However, we still observed faster times to solution with higher spatial scheme orders, despite the increased number of time steps. In conclusion, we found some computational benefit from higher spatial scheme orders even in the presence of a discontinuity in this setup.
3.2. One-Dimensional Shock Reflection
After considering the reflection of an essentially linear acoustic wave, we now look into the reflection of a shock, where nonlinear terms play an important role. However, we neglected viscosity in
this setup and only solved the inviscid Euler equations. The reflection of a one-dimensional shock wave at a wall was described and numerically investigated by Piquet et al. [
], for example. We used their setup to validate the penalization method in our discontinuous Galerkin framework, even though a high-order scheme is not ideal for the representation of shocks.
The downstream state in front of the shock (denoted by 1) is given in Table 1. The upstream state after the shock (denoted by 2) is then given by the Rankine–Hugoniot conditions for the shock Mach number $Ma_s$. These yield:
$\frac{\rho_2}{\rho_1} = \frac{\gamma + 1}{\gamma - 1 + 2\,Ma_s^{-2}}$
for the relation of the densities up- and downstream of the shock and
$\frac{p_2}{p_1} = \frac{2\gamma\,Ma_s^2 - (\gamma - 1)}{\gamma + 1}$
for the relation of the pressures. With these relations, the ratio of the upstream ($p_3$) and downstream ($p_2$) pressures for the reflected shock wave is [ ]:
$\frac{p_3}{p_2} = \frac{Ma_s^2\,(3\gamma - 1) - 2\,(\gamma - 1)}{2 + Ma_s^2\,(\gamma - 1)}$
For the computation of the velocity $u_{rs}$ of the reflected shock wave, we considered Equation ( ) [ ]:
$u_{rs} = \frac{1}{Ma_s}\left(1 + \frac{2\,(Ma_s^2 - 1)}{(\gamma + 1)/(\gamma - 1)}\right) c_1$
With an incident shock wave Mach number of $Ma_s = 1.2$, we obtained a pressure relation across the shock of $p_2/p_1 \approx 1.513$ from Equation ( ). The shock was simulated in the unit interval $x \in [0, 1]$ with the wall located at $x = 0.5$. Thus, half of the domain ($x \in [0.5, 1]$) was covered by the porous material to model the solid wall. The shock was initially located at $x = 0.25$.
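The shock relations above are straightforward to evaluate; the following sketch (function name and structure are ours, not from the paper) reproduces the ratios for the values used in this section, $Ma_s = 1.2$ and $\gamma = 1.4$:

```python
def shock_relations(Ma_s, gamma, c1=1.0):
    """Rankine-Hugoniot ratios and reflected-shock quantities for an incident shock."""
    rho_ratio = (gamma + 1.0) / (gamma - 1.0 + 2.0 * Ma_s**-2)          # rho2/rho1
    p_ratio = (2.0 * gamma * Ma_s**2 - (gamma - 1.0)) / (gamma + 1.0)   # p2/p1
    p_refl = (Ma_s**2 * (3.0 * gamma - 1.0) - 2.0 * (gamma - 1.0)) / (
        2.0 + Ma_s**2 * (gamma - 1.0))                                  # p3/p2
    # Lab-frame velocity of the reflected shock:
    u_rs = c1 / Ma_s * (1.0 + 2.0 * (Ma_s**2 - 1.0) / ((gamma + 1.0) / (gamma - 1.0)))
    return rho_ratio, p_ratio, p_refl, u_rs

rho21, p21, p32, u_rs = shock_relations(1.2, 1.4)
print(f"p2/p1 = {p21:.4f}, p3/p2 = {p32:.5f}, u_rs = {u_rs:.4f}")
# -> p2/p1 = 1.5133, p3/p2 = 1.47826, u_rs = 0.9556
```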
For the numerical discretization, we used $n = 256, 512, 1024$, and 2048 elements in total ($\Delta x = 1/n$) and a scheme order (O) of 32, 16, 8, and 4, respectively. As explained in the inspection of the linear wave transport, our numerical scheme preferred strong permeabilities over penalization with the porosity. Therefore, we ignored the porosity and chose $\phi = 1$, while the scaling factor from Equations ( ) and ( ) was set to a small value of $\beta = 10^{-6}$.
In Figure 8, the shock wave after its reflection is shown for different spatial resolutions. The discretizations with O(8) and 1024 elements and with O(4) and 2048 elements were chosen to have the same number of degrees of freedom, while the third discretization with O(8) and 2000 elements provided a high-resolution comparison.
The exact solution for the normalized pressure ($p_3/p_2$) according to Equation ( ) was approximately 1.4783. In Table 2, the ratio of the pressures ($p_3/p_2$), the relative error between the numerical and the exact solution (error in $p_3/p_2$ in %) close to the shock, as well as the difference between the location of the shock wave after the reflection and its origin ($\Delta x$: phase shift) are listed. The table illustrates that with a higher scheme order, but a constant number of degrees of freedom, the error in the pressure ratio as well as in the phase shift reduces considerably, even for this discontinuous solution. From the obtained results, we can conclude that we achieved the same error as in [ ] when using O(16) and 512 elements. As can be seen in
Figure 8
b, the plateau after the shock was not fully flat, but rather had a slope that asymptotically approached the expected constant value. Except for the fourth-order approximation, this constant plateau was well obtained, but it remained slightly off the exact solution. This remaining error is also stated in the table as the minimal error and had a value of $0.0129\%$ for $\beta = 10^{-6}$.
In Table 3, the results for the reflected shock wave are presented for the case that the wall lies inside an element instead of at its edge. Again, the error was reduced by an increased scheme order at a fixed total number of degrees of freedom. Notably, the error in the pressure ratio was reduced for small element counts in relation to the case where the wall coincided with an element interface. This was due to the fact that an additional element was introduced here, and the element length was accordingly smaller. However, we can see that the phase shift of the shock was larger in this case. This can be attributed to the larger spacing of the Gaussian integration points around the element center, which were used to represent the wall interface.
For a better comparison of the simulation results, Figure 9 illustrates the different test cases. The plot presents the solution from the previous investigation when using a scheme order of O(16). As a reference, we considered a no-slip wall located at the same place as the porous material, using the same scheme order. Figure 9 highlights how close the solution with the wall modeled as a porous material is to the reference wall solution. As can be seen in Figure 9a, the high-order discretization introduced Gibbs oscillations around the shock, but otherwise, the discontinuity was well preserved by the numerical scheme. For the solid wall, some of those oscillations remained inside the modeled material (see Figure 9b), but again, the discontinuity was well preserved. Further, the reflected shock exhibited an over- and an undershoot. Since the material was represented in polynomial space, and the Gibbs phenomenon predicts a deviation of about 9% of the jump height, we also computed those over- and undershoots. For the material located inside an element, the overshoot was around 2.052% and the undershoot around 8.639%. Locating the porous material at the element interface resulted in 1.821% and 7.681%, respectively.
3.3. Scattering at a Cylinder
While the one-dimensional setups served well to demonstrate the basic numerical properties of the penalization scheme, they did not show the benefit of this approach, as only in multiple dimensions does mesh generation become problematic for high-order schemes. Thus, we now turn to the scattering of a two-dimensional acoustic wave at a cylinder. The result was compared against the analytical solution of the linearized equations presented in [ ]. In this case, the surface of the object is curved, and the wave does not impinge only in the normal direction of the obstacle. The expected symmetric scattering pattern of the reflected pulse eases the identification of numerical issues introduced by the modeling of the cylinder wall. Thus, this setting illustrates the treatment of curved boundaries in the high-order approximation scheme by penalization within simple square elements. The problem setup is depicted in Figure 10 and consists of a cylinder of diameter $d = 1.0$ with its center $P$ lying at the coordinates $(10, 10)$.
The initial condition prescribed a circular Gaussian pulse in pressure with its center at the point $S = (14, 10)$ and a half-width of 0.2. Thus, the initial condition for the perturbation of the pressure is given by:
$p' = \epsilon \exp\left(-\ln(2)\,\frac{(x - 14)^2 + (y - 10)^2}{0.04}\right).$
The amplitude $\epsilon = 10^{-3}$ of the pulse was chosen sufficiently small for the full compressible Navier–Stokes simulation to nearly match the linear reference solution.
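The pulse expression can be checked directly; the sample points below are ours for illustration and confirm the peak amplitude and the half-width behavior:

```python
import math

EPS = 1e-3  # pulse amplitude epsilon

def pressure_pulse(x, y):
    """Initial Gaussian pressure perturbation centered at S = (14, 10)."""
    r2 = (x - 14.0) ** 2 + (y - 10.0) ** 2
    return EPS * math.exp(-math.log(2.0) * r2 / 0.04)

print(pressure_pulse(14.0, 10.0))  # -> 0.001, the peak value epsilon
print(pressure_pulse(14.2, 10.0))  # half the peak at distance 0.2 from the center
```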
The initial condition in terms of the conservative variables was given as:
$\rho = \rho_0 + p', \quad m_1 = m_2 = 0, \quad e = c_p T - \frac{p}{\rho}.$
Here, $\rho_0$ is the background density, chosen as $\rho_0 = 1.0$; $m_1$ and $m_2$ are the momenta in the $x$ and $y$ directions, respectively; and $e$ is the total energy. $T$ and $c_p$ are the temperature and the specific heat at constant pressure, respectively. The ratio of specific heats was chosen to be $\gamma = 1.4$. The Reynolds number used was $Re = 5 \times 10^5$, calculated using the diameter of the cylinder $d = 1.0$ as the characteristic length.
Figure 10
shows the test case setup magnified around the area of interest.
The overall simulation domain was $\Omega = [0, 24] \times [0, 20]$ to ensure that the boundaries were sufficiently far away to avoid interference from reflections during the simulated time interval. To test the accuracy of the simulation, five probing points were chosen around the obstacle. The points were located in different directions with respect to the obstacle $P$ and the source $S$. The incident and the reflected acoustic wave pass through these probes at different points in time. This is intended to capture both phase and amplitude errors that arise from the Brinkman penalization. The porosity was set to $\phi = 1.0$. The viscous and the thermal permeabilities $\eta$ and $\eta_T$ were defined with the help of the scaling parameter $\beta = 10^{-6}$ according to Equations ( ) and ( ). Results were obtained by solving the compressible Navier–Stokes equations in two dimensions with a spatial scheme order of $O = 8$, i.e., 64 degrees of freedom per element. Square elements with an edge length of $dx = 1/64$ were used to discretize the complete domain. The simulation was carried out for a total time of $t_{max} = 10$.
The pressure perturbation in the initial condition resulted in the formation of an acoustic wave that propagated cylindrically outwards as depicted in
Figure 11
a. Eventually, the wave impinged on the obstacle, where it was reflected as shown in
Figure 11
b with the pressure perturbation at
$t = 4$
. The quality of this reflected wave was completely dependent on the quality of the obstacle representation.
A third wave was generated when the initial wave, disrupted by the obstacle, traveled further to the left and joined again. This is visible in
Figure 11
c, and its further evolution is visible in
Figure 11
d, which shows the pressure perturbation at
$t = 8$
. These three circular acoustic waves had different centers (shifted along the x-axis), but coincided left of the obstacle. As can be seen from these illustrations, the expected reflection pattern
was nicely generated by the obstacle representation via the penalization. For a more quantitative assessment of the resulting simulation, we looked at the time evolution of the pressure perturbations
at the chosen probing points.
Figure 12
shows the time evolution of the pressure fluctuations monitored at each of the five observation points around the cylinder. The numerical results were compared with the analytical solution for linear
equations at these points. Here, we can observe the principal wave and the reflected wave arriving at different probing points at different times. We also observed that the computational results
obtained showed excellent agreement with the analytical solution at all probes. The simulation predicted the amplitudes and the pressure behavior nearly perfectly, without visible phase shifts.
4. Conclusions
We showed that, with the help of an implicit mixed explicit time integration approach, it was feasible to implement wall boundaries accurately in a high-order discontinuous Galerkin scheme. The
additional source terms introduced for the penalization can be efficiently computed in the implicit part of the mixed time integration without the need for iterative solvers. This implicit treatment
enabled us to utilize arbitrarily small values for the permeabilities and freed us from the need for the porosity introduced by Liu and Vasilyev for compressible flows. The viability of the approach
was shown in one-dimensional examples, where we saw that the solid wall can be well approximated with small permeabilities in the high-order discontinuous Galerkin scheme. Even for the reflection of
a shock wave, for which a high-order discretization is problematic due to the oscillations incurred by the discontinuity, the penalization provided small errors and convergence with higher polynomial
degrees. The real strength of the penalization method, however, came through in multiple dimensions, where curved boundaries could easily be represented by the penalization consistent with the
scheme. As an example of such a setting, we looked at the acoustic wave scattering at a cylinder.
With the presented method, it was therefore possible to exploit the benefit of reduced memory consumption of the high-order discretization even for complex geometries, without the need for advanced mesh generation.
Author Contributions
Conceptualization by N.A. and H.K.; N.A. wrote the original draft preparation and N.E.P. contributed the shock reflection setup; all authors were involved in the review and editing process; N.A.,
N.E.P. and H.K. worked on the presented methodology; H.K. and N.E.P. worked on the employed software; investigation and validation were carried out by N.A. and N.E.P., who also did the visualization to produce the graphs and images. Supervision, S.R.; funding acquisition, S.R.
Neda Ebrahimi Pour was financially supported by the priority program 1648–Software for Exascale Computing 214 (
) of the Deutsche Forschungsgemeinschaft.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the
decision to publish the results.
1. Thompson, J.F.; Warsi, Z.U.; Mastin, C.W. Boundary-fitted coordinate systems for numerical solution of partial differential equations: A review. J. Comput. Phys. 1982, 47, 1–108.
2. Mittal, R.; Iaccarino, G. Immersed Boundary Methods. Annu. Rev. Fluid Mech. 2005, 37, 239–261.
3. Peskin, C.S. Flow patterns around heart valves: A numerical method. J. Comput. Phys. 1972, 10, 252–271.
4. Saiki, E.; Biringen, S. Numerical Simulation of a Cylinder in Uniform Flow: Application of a Virtual Boundary Method. J. Comput. Phys. 1996, 123, 450–465.
5. Brown-Dymkoski, E.; Kasimov, N.; Vasilyev, O.V. A characteristic based volume penalization method for general evolution problems applied to compressible viscous flows. J. Comput. Phys. 2014, 262, 344–357.
6. Arquis, E.; Caltagirone, J.P. Sur les conditions hydrodynamiques au voisinage d'une interface milieu fluide-milieu poreux: Application à la convection naturelle. CR Acad. Sci. Paris II 1984, 299, 1–4.
7. Angot, P.; Bruneau, C.H.; Fabrie, P. A penalization method to take into account obstacles in incompressible viscous flows. Numer. Math. 1999, 81, 497–520.
8. Kevlahan, N.K.R.; Ghidaglia, J.M. Computation of turbulent flow past an array of cylinders using a spectral method with Brinkman penalization. Eur. J. Mech. B/Fluids 2001, 20, 333–350.
9. Liu, Q.; Vasilyev, O.V. A Brinkman penalization method for compressible flows in complex geometries. J. Comput. Phys. 2007, 227, 946–966.
10. Jause-Labert, C.; Godeferd, F.; Favier, B. Numerical validation of the volume penalization method in three-dimensional pseudo-spectral simulations. Comput. Fluids 2012, 67, 41–56.
11. Pasquetti, R.; Bwemba, R.; Cousin, L. A pseudo-penalization method for high Reynolds number unsteady flows. Appl. Numer. Math. 2008, 58, 946–954.
12. Ramière, I.; Angot, P.; Belliard, M. A fictitious domain approach with spread interface for elliptic problems with general boundary conditions. Comput. Methods Appl. Mech. Eng. 2007, 196, 766–781.
13. Simulationstechnik und Wissenschaftliches Rechnen Uni Siegen. Ateles Source Code. 2019. Available online: https://osdn.net/projects/apes/scm/hg/ateles/ (accessed on 26 August 2019).
14. Alexander, R. Diagonally Implicit Runge–Kutta Methods for Stiff O.D.E.'s. SIAM J. Numer. Anal. 1977, 14, 1006–1021.
15. Zudrop, J. Efficient Numerical Methods for Fluid- and Electrodynamics on Massively Parallel Systems. Ph.D. Thesis, RWTH Aachen University, Aachen, Germany, 2015.
16. Hesthaven, J.S.; Warburton, T. Nodal Discontinuous Galerkin Methods: Algorithms, Analysis, and Applications, 1st ed.; Springer: New York, NY, USA, 2007.
17. Piquet, A.; Roussel, O.; Hadjadj, A. A comparative study of Brinkman penalization and direct-forcing immersed boundary methods for compressible viscous flows. Comput. Fluids 2016, 136, 272–284.
18. Ben-Dor, G.; Igra, O.; Elperin, T. (Eds.) Handbook of Shock Waves; Academic Press: Cambridge, MA, USA, 2001.
19. Glazer, E.; Sadot, O.; Hadjadj, A.; Chaudhuri, A. Velocity scaling of a shock wave reflected off a circular cylinder. Phys. Rev. E 2011, 83, 066317.
20. Tam, C.K.W.; Hardin, J.C. Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems; NASA, Langley Research Center: Hampton, VA, USA, 1997.
Figure 1. One-dimensional acoustic wave setup: the center of the initial pressure pulse is located at $x = 0.25$ and has an amplitude of $ϵ = 10 − 3$. Discretization by 48 elements as denoted by grid
lines, and the right half of the domain ($x > 0.5$) is penalized. Note that the wall coincides with an element interface.
Figure 2. Plot of the pressure profile of the reflected wave at $t = 0.5$ for different scaling factors $β$. The numerical reference is obtained with a traditional wall boundary condition and a high resolution.
Figure 3. Plot of the error in the wave amplitude at $t = 0.5$ with decreasing porosity and different scaling factors $β$. The error e is given by the relative error in the pulse amplitude after the
reflection at the wall.
Figure 4. $L_2$-error for a polynomial degree of seven over an increasing number of elements (h-refinement).
Figure 6. $L_2$-error over varying polynomial degree, with the wall just 5% of the element length away from the element surface.
Figure 7. Behavior of the error in the reflected acoustic pulse with respect to computational effort. The figure on the left (a) shows the error convergence for various spatial orders over the
required memory in terms of degrees of freedom. The right figure (b) shows the same runs, but now over the computational effort in terms of running time in seconds. All simulations were performed on
a single node with 12 cores using 12 processes.
Figure 8. Different curves represent different discretizations using different scheme orders and a different number of elements. (a) Normalized pressure of the reflected shock wave. (b) Zoom of the
reflected shock.
Figure 9. Different curves represent different locations of the porous material in the element and the solution when using a no-slip wall. (a) illustrates the normalized pressure of the reflected
shock wave, and (b) depicts a zoom-in of the front area of the reflected shock.
Figure 10. Test case setup for the wave scattering; only the section containing the cylindrical obstacle, the probing points, and the initial pulse is shown, and the actual computational domain is
larger. The cylindrical obstacle is represented by the black circle located at $P ( 10 , 10 )$. Five observation points $( A , … , E )$ around the obstacle are shown as circles. The initial pulse in
pressure is indicated by the black dot with a turquoise circle around it located at $S ( 14 , 10 )$.
Figure 11. Simulation snapshots of pressure perturbations captured at successive points in time. The cylindrical obstacle is visible as a black disk and the probe points surrounding it as white dots.
The scale of the pressure perturbation is kept constant for all snapshots.
Figure 12. Time evolution of pressure perturbations at all five observation points surrounding the cylinder up to $t = 10$. Be aware that the perturbation pressure plotted along the y axis is scaled
differently from probe to probe to illustrate the pressure profile better.
Downstream speed of sound $c_1$ 1.0
Shock Mach number $Ma_s$ 1.2
Shock velocity $u_s$ 1.2
Downstream density $\rho_1$ 1.0
Downstream pressure $p_1$ $\gamma - 1$
Downstream velocity $u_1$ 0.0
Isentropic coefficient $\gamma$ 1.4
Test Case $p_3/p_2$ Error in $p_3/p_2$ in [%] $\Delta x \cdot 10^{-4}$
n2048, O(4) 1.46053873 1.19885086 32.0161
n1024, O(8) 1.47642541 0.12416375 13.0319
n512, O(16) 1.47700446 0.08499256 8.1828
n256, O(32) 1.47714175 0.07570497 7.6228
n128, O(64) 1.47721414 0.07080803 6.4032
n2000, O(8) 1.47740998 0.05755990 7.0317
min. error 1.47806952 0.012944346 -
Table 3. Comparison of the simulation results with the exact solution for the case that the porous material was located in the middle of an element.
Test Case $p_3/p_2$ Error in $p_3/p_2$ in [%] $\Delta x \cdot 10^{-4}$
n2049, O(4) 1.44333420 2.36268639 25.6373
n1025, O(8) 1.47333865 0.33297335 21.5178
n513, O(16) 1.47750687 0.05100577 15.4564
n257, O(32) 1.47751832 0.05023153 13.0052
min. error 1.47801754 0.01646083 -
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Anand, N.; Ebrahimi Pour, N.; Klimach, H.; Roller, S. Utilization of the Brinkman Penalization to Represent Geometries in a High-Order Discontinuous Galerkin Scheme on Octree Meshes. Symmetry 2019,
11, 1126. https://doi.org/10.3390/sym11091126
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
Article Metrics | {"url":"https://www.mdpi.com/2073-8994/11/9/1126","timestamp":"2024-11-01T23:27:11Z","content_type":"text/html","content_length":"532945","record_id":"<urn:uuid:ba27196d-956d-4033-9452-e2948a1dc3bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00268.warc.gz"} |
How could you double the maximum speed of a simple harmonic oscillator?
• By doubling the amplitude, or by changing either the spring constant k or the mass of the spring (for a mass-spring oscillator, w = sqrt(k/m), so doubling w requires quadrupling k or quartering m).
• A simple harmonic oscillator is anything that oscillates with an acceleration proportional to its displacement, but in the opposite direction: x'' = -w^2 x. The solution of this equation is A cos(w(t - t0)). The maximum displacement is A. The maximum speed is A w. To double the maximum speed, double the maximum displacement A, or double the natural angular frequency w of the oscillator. How to double w depends on the specific oscillator: for instance, quartering the length of a pendulum (w is proportional to 1/sqrt(L)), or quadrupling the tension in a vibrating string (w is proportional to sqrt(T)).
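As a quick numerical check of the relation v_max = A w (the amplitude and frequency values below are illustrative only):

```python
import math

def v_max(amplitude, omega):
    """Maximum speed of x(t) = A*cos(w*(t - t0)) is A*w."""
    return amplitude * omega

A, w = 0.1, 2.0 * math.pi              # e.g., 0.1 m amplitude, 1 Hz oscillator
print(v_max(2 * A, w) / v_max(A, w))   # doubling A doubles v_max -> 2.0
print(v_max(A, 2 * w) / v_max(A, w))   # doubling w doubles v_max -> 2.0
```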
Copyright 2023, Wired Ivy, LLC | {"url":"https://www.answerbag.com/q_view/173911","timestamp":"2024-11-02T21:13:35Z","content_type":"application/xhtml+xml","content_length":"31344","record_id":"<urn:uuid:1a33338b-7175-4646-90db-3a21b0bef29c>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00503.warc.gz"} |
Topics: Tangent Structures
Tangent Structures to a Manifold
Tangent Vector at a Point > s.a. vector; vector calculus; vector field.
$ Def: There are various possible definitions, including:
(1) A derivation on the algebra of germs of differentiable functions at x ∈ X;
(2) An equivalence class of triples (x, φ, V), with (x, φ', V') ~ (x, φ, V) if V' = D(φ' \(\circ\) φ^−1)|[x] V (i.e., V transforms like a vector);
(3) An equivalence class of curves, tangent to each other at x.
> Online resources: see MathWorld page; Wikipedia page on tangent vector and tangent space.
Tangent Bundle
$ Def: The set TM of all tangent vectors at all points of an n-dimensional manifold M, with a differentiable fiber bundle structure.
* Fibers: The tangent spaces T[p]M at each p ∈ M; Structure group: GL(n, \(\mathbb R\)).
* Coordinates: Given coordinates {x^i} on M, natural coordinates on TM are {x^i, v^i}, where a tangent vector is expanded as v = v^i ∂/∂x^i.
* Relationships: It is an associated bundle to the frame bundle FM of a manifold M, with structure group GL(n, \(\mathbb R\)).
@ References: Yano & Ishihara 73; Morandi et al PRP(90); Hindeleh 09 [of Lie groups].
> Online resources: see Wikipedia page.
Related Concepts > s.a. Jet and Jet Bundle; tensor; tensor field.
* Distribution: A distribution S of dimension r on M is an assignment, to each p ∈ M, of an r-dimensional subspace S[p] ⊂ T[p]M; Involutive distribution: A distribution S such that for all X, Y ∈ S,
[X, Y] ∈ S.
* Push-forward map: Given a map f : M → N between differentiable manifolds, the pushforward f ' or f[*] is a map between vector fields.
* Tangent map: Given a map f : M → N between differentiable manifolds, the tangent map Tf is a map between vectors (elements of TM and TN).
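For concreteness, the tangent map of the last entry can be written in local coordinates (a standard coordinate expression added for illustration; it matches the transformation rule in definition (2) above):

```latex
% Tangent map Tf : TM -> TN of a smooth map f : M -> N,
% in natural coordinates (x^i, V^i) on TM:
Tf(x^i, V^i) \;=\; \Bigl( f^a(x),\; \frac{\partial f^a}{\partial x^i}\Big|_{x} V^i \Bigr)
```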
Cotangent Structures > s.a. differential forms.
$ Cotangent vector: A cotangent vector at a point p ∈ M is a dual vector, i.e., a map ω: T[p]M → \(\mathbb R\) from vectors to the reals.
$ Cotangent bundle: The set T*M of all cotangent vectors at all points of an n-dimensional manifold M, with a differentiable fiber bundle structure.
Generalizations > s.a. Topological Tangent Bundle.
@ Second-order tangent structures: Dodson & Galanis JGP(04) [infinite-dimensional manifolds].
@ Related topics: in Boroojerdian IJTP(13)-a1211 [\(\mathbb Z\)[2]-graded tangent bundle].
main page – abbreviations – journals – comments – other sites – acknowledgements
send feedback and suggestions to bombelli at olemiss.edu – modified 14 jan 2016
Annular Leakage (IL)
Models annular leakage between a circular tube and a round insert in an isothermal flow
Since R2020a
Simscape / Fluids / Isothermal Liquid / Valves & Orifices / Orifices
The Annular Leakage (IL) block models annular leakage between a circular tube and a round insert in an isothermal liquid network. The insert can be located off-center from the tube and can have
varying lengthwise overlap with the tube.
Ports A and B correspond with the orifice inlet and outlet. The input ports L and E are optional physical signal ports that model variable overlap length (L) and variable eccentricity (E).
The Mass Flow Rate Equation
The leakage mass flow rate is calculated from the pressure-flow rate equation
$\dot{m}=\frac{\pi (R-r)^{3}(R+r)}{12\nu}\,\frac{p_{A}-p_{B}}{l}\left[1+3\epsilon_{sat}^{2}\,\frac{R}{R+r}+\frac{3}{8}\epsilon^{4}\cdots\right]$
where:
• R is the annulus outer radius.
• r is the annulus inner radius.
• p[A] is the pressure at port A.
• p[B] is the pressure at port B.
• ν is the fluid kinematic viscosity.
• l is the overlap length.
• ε is the eccentricity ratio, $\epsilon =\frac{e}{R-r}$, where e is the eccentricity, which can be defined as a physical signal or constant value.
• ε[sat] is the saturated eccentricity ratio, which is ε for constant orifices, or the saturated value of the physical signal connected to port E for variable orifices. The eccentricity ratio is always between 0 and 1.
When modeling a variable overlap length, the user-defined minimum overlap length is used if the physical signal falls below the value of the Minimum overlap length parameter.
Assumptions and Limitations
The pressure-flow equation is valid only for fully developed, laminar flows. The flow Reynolds number can be determined using $\mathrm{Re}=\frac{\dot{m}D_{h}}{\mu \pi (R^{2}-r^{2})}$ or by checking the simulation log in the Results Explorer or the Simulation Data Inspector. For more information, see Data Logging.
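A minimal numeric sketch of the pressure-flow relation above, keeping only the concentric term and the leading eccentricity correction that are legible in the source equation (the function and variable names here are illustrative, not part of the block's interface):

```python
import math

def annular_leakage_mdot(R, r, p_A, p_B, nu, l, eps_sat=0.0):
    """Laminar annular leakage mass flow rate, kg/s.

    Sketch of the leading terms of the pressure-flow equation:
        mdot = pi*(R - r)**3*(R + r)/(12*nu) * (p_A - p_B)/l
               * (1 + 3*eps_sat**2 * R/(R + r) + ...)
    Higher-order eccentricity terms are dropped in this sketch.
    """
    base = math.pi * (R - r) ** 3 * (R + r) / (12.0 * nu) * (p_A - p_B) / l
    return base * (1.0 + 3.0 * eps_sat ** 2 * R / (R + r))

# Block defaults: insert radius 9.8 mm, tube radius 10 mm, 1 mm overlap;
# pressures and viscosity are arbitrary example values.
mdot = annular_leakage_mdot(R=1e-2, r=9.8e-3, p_A=2e5, p_B=1e5,
                            nu=1e-6, l=1e-3)
```

Note that the flow rate scales linearly with the pressure drop and with the cube of the radial gap (R - r), and that eccentricity only increases leakage.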
A — Liquid port
isothermal liquid
Entry or exit port of the liquid to or from the orifice.
B — Liquid port
isothermal liquid
Entry or exit port of the liquid to or from the orifice.
L — Variable overlap length
physical signal
Optional physical signal port that provides the variable overlap length between the sleeve and annular insert.
To expose this port, set Overlap length specification to Variable.
E — Variable eccentricity
physical signal
Optional physical signal port that provides variable eccentricity of the annular insert to the sleeve.
To expose this port, set Eccentricity specification to Variable.
Overlap length specification — Type of insert-sleeve overlap
Constant (default) | Variable
Whether the length-wise sleeve-insert overlap is constant or variable.
Overlap length — Overlap between tube and insert
1e-3 m (default) | positive scalar
Length-wise overlap between the insert and tube.
To enable this parameter, set Overlap length specification to Constant.
Minimum overlap length — Minimum overlap between tube and insert
1e-6 m (default) | positive scalar
Lower limit to the length-wise overlap between the insert and tube. Any length shorter than this value will be set to the specified minimum.
To enable this parameter, set Overlap length specification to Variable.
Eccentricity specification — Eccentricity variability
Constant (default) | Variable
Whether the orifice eccentricity changes or remains at a set value.
Eccentricity — Center of insert offset with respect to sleeve center
0 m (default) | nonnegative scalar
The distance between the center of the orifice and the center of the insert.
To enable this parameter, set Eccentricity specification to Constant.
Inner radius — Radius of insert
9.8e-3 m (default) | positive scalar
Insert radius r, measured from the insert center to its outer wall.
Outer radius — Radius of tube
1e-2 m (default) | positive scalar
Tube radius R, measured from the tube center to the inner wall.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
Version History
Introduced in R2020a
Boyle's Law
09-06-2012, 10:02 PM #1
Senior Member Pod of the Midwest
Pod of the Great Lakes
Join Date
Jul 2011
Oshkosh, WI
Boyle's Law
In this thread, I'm going to talk about things that most dive instructors won't talk about for legal reasons. "You can't change the laws of physics, but lawyers love to try!" However, I will talk
about ALL the implications of the law, and what it can allow.
Moderators, please sticky this thread. This is important for not only scuba divers, but for freedivers and mermaids as well.
Boyle's Law
If the temperature remains constant, the volume of a gas will vary inversely as the absolute pressure, and density will vary directly.
P1 * V1 = P2 * V2
P1 = Initial Pressure
V1 = Initial Volume
P2 = Ending Pressure
V2 = Ending Volume
Broken down this gives us three possibilities.
P1 > P2 and V1 < V2 Pressure decreases, volume increases.
P1 = P2 and V1 = V2 Pressure and volume stay the same.
P1 < P2 and V1 > V2 Pressure increases, volume decreases.
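The three cases can be checked with a one-line rearrangement of the law; a minimal sketch (pressures must be absolute, units are otherwise arbitrary):

```python
import math

def boyle_v2(p1, v1, p2):
    """Solve P1*V1 = P2*V2 for the final volume V2 (isothermal)."""
    return p1 * v1 / p2

# Pressure decreases -> volume increases.
assert math.isclose(boyle_v2(29.4, 100, 14.7), 200.0)
# Pressure unchanged -> volume unchanged.
assert math.isclose(boyle_v2(14.7, 100, 14.7), 100.0)
# Pressure increases -> volume decreases.
assert math.isclose(boyle_v2(14.7, 100, 29.4), 50.0)
```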
This law is very important to anyone who enters the water.
It is the cause of most diving related injuries.
At sea level, you are breathing air at 14.7 psi. 14.7 psi is considered as 1 atmosphere (ATM). As it's all around us, it is considered as ambient pressure or absolute pressure. It only takes 33 feet of saltwater (34 freshwater) to equal 14.7 psi. So, for each 33 feet of depth, we gain 1 atmosphere of pressure. Hence, at 33 feet, we are at a total of 2 atmospheres ambient pressure. This will continue as we descend.
From the chart, we can see how the pressure increases the deeper we go. We can also see what happens to a 100 cu/ft volume of air that started at 1 atmosphere. As we can see, the volume decreases until at 132 feet it is only 20 cu/ft, or 20% of its original volume. This volume change with pressure change is what causes the pressure related injuries. On descent, the air cavities in the human body will decrease in size as pressure increases. If the cavity cannot be equalized, tissue damage will occur.
In the last column of the chart, we can see what happens to the same 100 cu/ft volume of air at 132 feet, 5 ATM. Notice that as we ascend, the volume of air increases until at the surface, it now occupies 500 cu/ft. In the human body, this expansion creates the most serious injuries, and equalization becomes extremely important.
For snorkeling, the pressure change is the reason you don't see snorkel tubes longer than about 1 foot. On the surface, you have 14.7 pounds of pressure over every square inch of your chest. Altogether, that can equal hundreds of pounds! However, you have air in your lungs that's pushing back, and counteracts all that pressure. In the case of a long snorkel tube, the air in your lungs stays at 14.7 psi, but say at 4 feet of depth, there's an extra 2 psi of ambient pressure for 16.7 psi. The pressures become unequal, and the extra 2 psi will feel like there's an elephant sitting on your chest. You could say the average human chest area is about 3 square feet. So at 14.7 psi, that's 6350 lbs, and at 16.7 psi, 7214 lbs, for a difference of 864 lbs. So you see why there are no long snorkel tubes!
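The chest-loading arithmetic checks out; a quick sketch using the post's own rough numbers (the 3 ft² chest area is the author's estimate, not a measured figure):

```python
# Re-checking the chest-loading arithmetic from the post.
chest_area_in2 = 3 * 144              # 3 square feet in square inches
surface_lbs = 14.7 * chest_area_in2   # total force at the surface, lbs
depth_lbs = 16.7 * chest_area_in2     # total force at ~4 ft of water, lbs

assert chest_area_in2 == 432
assert round(surface_lbs) == 6350
assert round(depth_lbs) == 7214
assert round(depth_lbs - surface_lbs) == 864
```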
For scuba, this pressure difference is why we must breathe compressed air on a regulator, or why mermaids breathe compressed air from hoses. We have to breathe air at ambient pressure in order to breathe at all!
For freedivers, they too breathe compressed air! The air naturally compresses in their lungs due to Boyle's Law as they descend. However, they are not normally subject to ascent-type injuries, as the volume they went down with is equal to the volume they surface with.
Divers and mermaids aren't so lucky! As they are submerged breathing air at increased pressure, they have to know the effects of Boyle's Law and how to prevent injury. Back at the table, look at the last column, and how a pressurized air volume expands as you ascend. A diver holding his breath and swimming to the surface from 132 feet would see that air expand to 5 times its original volume. The difference in pressure would be 58.8 psi. Now the lungs can handle 2 psi before damage occurs, which is about the pressure difference of 4 feet of water. So our diver would most likely explode before reaching the surface. But before that, he would experience the worst injury that a diver could face, Air Embolism. Because our lungs are so fragile, divers came up with the first rule of scuba: "NEVER HOLD YOUR BREATH."
If we take a good look at Boyle's Law, we will find that it actually allows us to hold our breath while breathing compressed air. It is only in the first instance of P1 > P2 and V1 < V2 that we
have expansion of the air volume in our lungs over that of what we initially took in. With P1 being greater than P2, the volume must increase. And therefore, if a diver holds his breath and
ascends, he will suffer overexpansion of the lungs.
With the second case of P1 = P2, we begin to see how we can hold our breath. In this case, P1 stays equal to P2, and causes V1 to stay equal to V2. There is NO expansion, and thus, NO expansion-type injury can occur. However, temperature can cause some expansion to occur, but it would be small. Our bodies are very adept at warming and humidifying the air we take into our lungs, and so by the time we have finished inhaling, that air is almost at body temperature. So there would be negligible thermal expansion. Heliox and trimix divers may be more susceptible to thermal expansion, as the helium in the mixes can absorb more heat than air.
In the third case of P1 < P2, there is no expansion at all! The volume must contract because the pressure is increasing. No expansion injury is possible!
So for the last two cases it is actually possible to hold your breath while breathing compressed air. As long as P2 stays greater than or equal to P1 there can be no expansion greater than the
lungs can handle.
Does this mean a diver can do it in all situations? NO!!!
Normal scuba diving is generally changing depths throughout the entire dive. Even when neutrally buoyant, a diver rises when he inhales, and sinks when he exhales. Swimming around, it's very hard
to keep track of depth, unless it's a very controlled situation. For normal diving practice, stick with the first rule of scuba, NEVER HOLD YOUR BREATH.
So when can you hold your breath? For modeling and performing.
For modeling, the model is usually weighted so she will stay put at constant depth. This keeps her in a P1 = P2 situation. However, when the model is changing depth, like going from a prone
position to a standing position, the model may change depth enough to get her into trouble. So therefore, models should be breathing when changing positions (changing positions is usually a rest
time for the model anyway). Some shots may require a model to change depth. As long as she descends, she may hold, and may return to the depth where she took her last breath from. If the model has to ascend from a deeper starting point, she must begin exhaling, or she will go into overexpansion.
For performance, the mermaids at Weeki Wachee hold their breath regularly. They also use their lungs as a buoyancy compensator to remain neutrally buoyant. This means that they are holding less
than full lung volume. This gives them the ability to do things that seem impossible to scuba divers. They have room for limited expansion to occur, but generally, they operate from the P1 = P2
area and P1 < P2 areas where volume remains the same or decreases. In a way, it's like freediving, only the surface is 15' down.
Last edited by Capt Nemo; 09-06-2012 at 10:14 PM.
Very informative, Capt! That's pretty cool to know, especially to see the formula behind it.
I was always scared of diving because I always heard your lungs could explode xD
I never learned why so I just avoided it altogether.
Thanks for posting this, it's actually very essential if you're going to be freediving. Well,
if you actually have the skill to go down that deep!
If I stay shallow do I still have to worry about this? YES!!!
As you can see from the chart, the greatest change in pressure occurs in the first 33 feet. The pressure doubles and volume decreases by 1/2! You could get an embolism on ascent in as little as 4
feet of water. After 33 feet, pressure won't double again for another 66 feet.
Why didn't we learn this much in scuba class?
Teaching about all the aspects of Boyle's Law has been severely curtailed due to the current legal environment. Insurance companies are also to blame. In order to get insurance coverage, PADI had to remove much about Boyle's Law, and go to scare tactics about the first rule of scuba, in order to have their resort course. The same is true for the other certification agencies. Now it's only
really taught at the divemaster or instructor levels. European diving agencies, have fewer legal restrictions, and thus, are teaching much more about it.
Now that we have learned about how Boyle's Law works, let's talk about how it affects the body, and how to prevent injury.
The Squeezes
Mask Squeeze
Mask Squeeze occurs as the volume shrinks as pressure increases during descent. This causes the mask to begin to press against the face, and can press hard enough to rupture blood vessels around
the eyes, and can also damage the eyes. To prevent this, divers blow air through their nose to equalize this airspace. Some freedivers will fill their mask with saline solution to eliminate this
airspace, and remove the need to equalize. However, this will prevent clear vision unless special contacts or lenses are used.
Goggles or masks without nose pockets put a diver at high risk for mask squeeze, as there is no way to equalize the pressure. As above, eliminating the air pockets in the goggles will make them safe. Other methods like "pipe" or "balloon" goggles have been used with some success. These goggles equalize with either a tube running to the mouth and blowing, or by ambient pressure forcing air out of a balloon and into the goggles. Normal swimming goggles do not have any means of equalization, and should not be used for anything but surface swimming.
Here is a very good example of mask squeeze.
Sinus Squeeze
Sinus squeeze occurs when inflamed tissue or mucus blocks a sinus passage. As the diver descends, pressure transmitted through the blood forces blood into the sinus tissues, which can rupture. This causes blood to take up the space as the air contracts. The primary symptom is usually a sharp pain or wedging sensation directly above the eyes. The pain may decrease as the tissue ruptures. If tissue damage occurs, there may be blood draining from the nose at the end of the dive.
This squeeze can also happen in reverse. The sinuses may also become blocked at depth, and upon ascent, the trapped air expands and forces its way through the sinus passage. The force may tear the lining of the sinus passage and force it into the nasal cavity. Again, there may be blood draining from the nose. Lesser blockages will blow mucus out of the nose in a sneeze-like fashion. This can also cause sharp pain as above.
To prevent this, first, do not dive if you have a cold or congestion. Do not try to dive using decongestants unless a doctor permits it. Many drugs change under pressure or vacuum, and decongestants may stop working, or even become poisonous at depth. Very little pressure testing of medications has been done. Using the Valsalva technique will equalize the sinuses and ears at the same time. If pain begins that Valsalva cannot remedy, ascend and see if it will clear. If it does not, abort the dive. If a sinus squeeze occurs, it is generally not required to seek medical attention, unless pain or congestion persists.
Ear Squeeze
Ear squeeze occurs as water pressure forces the eardrum inward as the air space behind it contracts, since no air enters through the eustachian tube. As the pressure increases, you will begin to feel a pressure sensation, which will turn to pain, and eventually the eardrum will rupture. When rupture occurs, a diver may experience nausea, dizziness and vertigo. A diver may also become unconscious. Medical attention should be sought if pain persists after a dive, or if there is any bleeding. Divers wearing ear plugs may also experience ear squeeze. In that case, air trapped between the plug and the eardrum will contract, and pull the eardrum outward, rupturing it.
Prevention of ear squeeze can be as simple as swallowing, yawning, or rotating the jaw. If those do not work, use the Valsalva technique. You should begin equalizing as soon as you begin to descend. Waiting until things get painful may already be too late! If nothing seems to work, ascend a few feet until the pain subsides and try again. If that doesn't work, abort the dive. Do not dive with a cold or congestion, or with ear plugs.
Note: There are ear plugs on the market that will prevent ear squeeze from the plugs. These are the only ones you can dive with.
Lung Squeeze
Lung squeeze can occur when the lungs are compressed below residual volume. Generally, this does not affect scuba divers, only freedivers. Originally, this was thought to occur at about 66 ft and
deeper, as the lungs would be at residual volume at this depth. But as freediving has shown, this doesn't begin to occur until about 200 - 250 ft. It may also occur if a freediver were to dive
and exhale on the bottom. Most divers do not realize it has occurred until ascent, or they cough up some blood, as there is usually little pain. Contact a physician as soon as possible if there
is any evidence that lung squeeze has occurred.
Tooth Squeeze
Tooth squeeze can occur when there is a pocket of air between the tooth and filling. As the diver descends, the nerve gets pressed into the pocket causing pain in the affected tooth. Consult your
dentist if this occurs, as the tooth may need refilling. And let the dentist know that you are a diver.
Intestinal Squeeze
Intestinal Squeeze is kind of a joke among divers, but it is very real. As you digest certain foods, they may create gas. When this occurs at depth, these bubbles begin to expand on ascent,
causing pain, and in severe conditions intestinal rupture. Most often the pain may be relieved by flatulence. If the pain persists, descend and wait for it to pass before ascending. Do not eat
foods that can cause gas before diving. And, DON'T OPEN THE DRYSUIT OF A DIVER THAT HAD BEANS!
Suit Squeeze
Suit squeeze generally occurs with drysuit divers only. In a drysuit, the suit is sealed so that there is a pocket of air between the suit and diver. This allows the diver to wear thermal
undergarments beneath the suit to stay warmer. At depth, the water pressure may press or pinch the suit on the skin. This will break capillaries beneath the skin. Most drysuits have valves to
allow air into the suit to relieve this pressure.
Ascent Injuries
The ascent injuries are the most serious ones that a person who breathes compressed air may encounter. Two or more of these injuries may occur at the same time. All are caused by overexpansion of the lungs. They may also occur in freediving if lung squeeze has occurred. While the lesser injuries do not require recompression, patients should still be taken to a recompression facility.
Subcutaneous Emphysema
This occurs when air forces its way out of the lung, and moves up along the windpipe into the region around the collarbone and neck. This will cause swelling around the neck, voice changes, breathing difficulties, and crepitation (a crackling sensation upon touching the affected area).
Mediastinal Emphysema
This occurs when air escapes the lungs and into the area of the heart. The air presses on the heart and surrounding vessels. This will cause chest pain, breathing difficulties, collapse, and
cyanosis of the lips and nail beds. This may have to be treated prior to recompression for air embolism.
Pneumothorax
This occurs when air escapes the lung and moves between the covering of the lungs and the ribcage. This will produce pressure on the lung and will tend to collapse it. It will show the same signs as mediastinal emphysema, only breathing difficulties will be more pronounced, especially if both lungs are collapsed. Lung collapse will have to be treated before recompression for air embolism.
Air Embolism
THIS IS THE MOST SERIOUS INJURY!!! This occurs when air escapes the lung into the blood stream. Healthy lungs can withstand about 2 psi, or a 4' depth change, before rupture. Smoking and disease may reduce this. When the bubbles escape the lungs, they will then pass through the heart and flow to the brain. The bubbles expand on ascent and block circulation in the brain, causing brain damage in 4-6 minutes. The diver may surface unconscious, or lose consciousness within 4-6 minutes after surfacing. The diver may also experience headache; vertigo; visual, auditory, and speech abnormalities; loss of small and large motor control and paralysis; unconsciousness and coma; respiratory and circulatory distress and failure. There may be some blood at the mouth. Recompression must be immediate! If you survive, you can expect brain damage. This may also be caused by decompression sickness due to Henry's Law of dissolved gases.
Almost all of the ascent injuries can be prevented by making sure that there is no expansion in the lungs over that of a full breath. For normal diving practice, this means breathing continuously and not holding your breath. This keeps you as close as possible to a P1 = P2 situation. Fast ascent can also put you into serious expansion. Keep it slow!!! Breath holds should only be done under controlled conditions where depth is fixed, or references are easily available as to where the last breath was taken.
Newbies should stay away from any ideas about using household compressors, or anything that could compress the air by Boyle's Law. NO ONE SHOULD ATTEMPT BREATHING COMPRESSED AIR UNTIL THEY ARE
SCUBA CERTIFIED.
Last edited by Capt Nemo; 09-14-2012 at 02:18 PM.
fascinating. thanks for doing all this work
Thanks for the diving tips Capt. Nemo. So from what I see here it is bad to hold your breath even when you are just swimming a few feet below the surface of the water?
Very interesting indeed! Thanks for taking the time to give us all this information!
Although I will ask.. what about freedivers? I know a few that can go 60-100ft down with dive times of 2-3 minutes down and back. People do freediving all over the world with no problem, and I've
never heard anything about this kind of danger with breath holding before.
Only if you are breathing compressed air. If you are freediving, the air compresses, but you come up with the same volume you went down with, so no overall expansion takes place. But on scuba, a
submerged breath could kill if you held it all the way to the surface.
Let's say you were in a pool with a deep end. You set your cylinder on the bottom in the shallow end right where the bottom breaks to the deep end of the pool.
You could take a breath at the level of that cylinder, and then swim down to the bottom of the deep end, and return to that cylinder. If you have not gone above where you took that last breath,
then no overall expansion has occurred, and you are totally safe. But if you went above that point, you would have to begin exhaling air, as overall expansion is taking place. In this situation,
you are working from a fixed air source at fixed depth, so it is easy to know the P1 = P2 point. However, when normal scuba diving, you are moving around too much to get an accurate depth of
where you took the last breath from. So in that kind of situation, you always follow the never hold your breath rule.
I'm not done yet posting everything I want to in this thread. So stay tuned.....
Very interesting indeed! Thanks for taking the time to give us all this information!
Although I will ask.. what about freedivers? I know a few that can go 60-100ft down with dive times of 2-3 minutes down and back. People do freediving all over the world with no problem, and I've
never heard anything about this kind of danger with breath holding before.
Take a look at the chart! In the 4th column, look what happens to the 100 cu/ft of air that's at 1 ATM. At 33 feet, the volume has dropped by 50 cu/ft and is now at 50 cu/ft. The same happens to a freediver's lungs at that depth. He only has half of the original lung volume that he had at the surface. But as the freediver returns to the surface, the air expands back to full lung volume.
Here's the math for a freedive to 99 ft.
Start. surface 1 ATM lung volume 100 cu/in P1 = P2 V1 = V2
Mid point. 99ft 4 ATM lung volume 25 cu/in P1 < P2 V1 > V2
End point. surface 1 ATM lung volume 100 cu/in P1 = P2 V1 = V2
Last edited by Capt Nemo; 09-08-2012 at 01:51 AM.
Continuing with the math of the above post.
P1 = 14.7 psi, V1 = 100 cu/in, so P1 * V1 = 1470.
As the freediver descends P2 increases, so volume must decrease to remain equal to the product of P1 * V1.
At 0 ft. P2 = 14.7 psi, V2 = 100 cu/in, so P2 * V2 = 1470. (P1 = P2)
At 33ft. P2 = 29.4 psi, V2 = 50 cu/in, so P2 * V2 = 1470.
At 66ft. P2 = 44.1 psi, V2 ≈ 33.3 cu/in, so P2 * V2 ≈ 1470.
At 99ft. P2 = 58.8 psi, V2 = 25 cu/in, so P2 * V2 = 1470.
So we see no overall expansion in the case of a freedive. The air volume contracted and expanded back to its original volume.
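The freedive numbers above can be reproduced directly; a small sketch showing that P·V stays at 1470 through the whole dive:

```python
import math

P0, V0 = 14.7, 100.0      # surface: 1 ATM (psi), full lungs (cu in)
product = P0 * V0         # 1470, conserved at every depth

for depth_ft in (0, 33, 66, 99):
    p = P0 * (1 + depth_ft / 33)         # absolute pressure at depth
    v = product / p                      # compressed lung volume
    assert math.isclose(p * v, product)  # Boyle's Law holds throughout

# At 99 ft (4 ATM) the lungs are at a quarter of surface volume...
assert math.isclose(product / (4 * P0), 25.0)
# ...and back at the surface the volume returns to V0: no net expansion.
assert math.isclose(product / P0, V0)
```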
I saw a picture once of Tanya Streeter on a deep dive. Her lungs were so compressed that I could have put my fist between her ribcage and stomach and still not touched her. That gives you a real idea of how pressure can affect volume.
Now, let's look at the case of a scuba diver that holds his breath at 33 ft and ascends to the surface.
P1 = 29.4 psi, V1 = 100 cu/in, and P2 = 14.7 psi.
P1 * V1 = P2 * V2
29.4 * 100 = 14.7 * V2
2940 = 14.7 * V2
V2 = 2940 / 14.7
V2 = 200 cu/in
So the air in the diver's lungs has gained 100 cu/in of volume. If he doesn't exhale this extra volume out, it will kill him! If he breathed normally during the ascent, he keeps changing the P1 =
P2 depth upward, so that any expansion during ascent is kept minimal, and well within the limits of what the body can handle.
Last edited by Capt Nemo; 09-08-2012 at 02:03 PM.
Yay finally finished the pressure injury part.
Great post. But why did you start by saying most instructors don't talk about this? We talk about this all the time.
Some instructors give the basics as per most dive courses, but rarely go in depth, and outright refuse to talk about all its possibilities. When I contacted a former NASDS Master Instructor/UW photographer, he told me about a few run-ins with models who couldn't believe that their instructors refused to tell them everything. Even my instructor would cut me off and refuse to talk about it. The instructors at my current dive shop told me that this information isn't really covered until at least the divemaster/pre-instructor course. Everything is geared toward the "first rule of scuba", and the rest is hidden. Most of it is fear of lawsuits, and the ability to get insurance. Outside the US, you don't have the pack of sharks waiting to pounce, and much more is taught.
It's my belief that knowing everything about Boyle's Law is safer than just knowing the reason for the first rule of scuba. I've done a skip-breathing ascent in a bad situation, and doing so required total knowledge of the law, and that's more than what I was taught. My understanding came from understanding the mathematics, which showed me that there was more to it than what I was taught. I realize, too, that basic scuba courses are trying to change the students' instincts. Their instinct is to hold their breath immediately if anything goes wrong. This is why there is so much stress on never holding one's breath. And with the brevity of modern scuba courses, I can understand the brick wall that has formed. While scuba is relatively simple, I believe that it is better taught with a longer 10-12 week course, rather than the short courses as taught today. The longer course prepares divers better because the diver has had the time to really learn it to the point of becoming automatic before hitting open water.
I would think most entry level students don't really understand the topic and would just be confused by the whole discussion. I've had a few that wanted to know more and those divers usually went
on to higher diver ratings. But the average person who just wants to dive for fun will get quickly confused and discouraged. A basic entry level course teaches the basics...then advanced courses
teach more advanced topics. Studying the PPO2 is important for free divers as well. That's what causes the blackouts.
PPO2 IS VERY IMPORTANT! And it's something for all divers, not just mixed gas scuba. It gets into some strange territory on fast ascents during freediving. It's one thing that's lacking in most
courses when covering basic snorkeling/freediving.
It's not taught in scuba classes because the freediving/snorkeling skills in those classes are designed for very shallow dives. I think it should be taught though. At least, introduced. The topic is covered thoroughly in freediving courses.
And you wonder why I like the idea of 10 week scuba courses, rather than the 3 hour cruises!
10 weeks is too long for just an entry-level course. But I'm not a fan of the quick 3-day courses either. I like 6-8 weeks for divemaster courses. I know a friend who completed her divemaster course in 8 days. I can only imagine the quality of that course.
10 weeks is what I had to go through for advanced open water (basic was snorkeling which ended the 3rd week) with NASDS at the time. Then another 10 for the advanced gold card.
Many boat captains were relieved when they'd find out they had a boat full of NASDS or NAUI divers. It was the PADI ones they were always worried about!
The problem today is, no one wants to go that long, they want it NOW! It seems that planning ahead is a lost art these days.
Ironic...because around here, captains worry about the NAUI divers.
Nov 2012 | {"url":"https://mernetwork.com/index/showthread.php?3096-Boyle-s-Law&s=34c1c7f1e3f2c08f76ac464d8c14c436&p=39638&viewfull=1","timestamp":"2024-11-02T18:30:55Z","content_type":"application/xhtml+xml","content_length":"142656","record_id":"<urn:uuid:3a8a10a3-a3f3-42cb-a6eb-b25bd2bb6680>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00014.warc.gz"} |
Calculate Reynolds number for the feed and drying air. The viscosity of the milk is 2.127 cP. The Reynolds number is calculated using the air velocity based on Figure 3. The inlet pipe diameter for air is 0.4 cm and that for the feed is 0.2 cm. - Brilliant Essay Help
The language should be British English
5. Lab report
Write the lab report according to the guidance given to you (see Canvas; word count 2200) while incorporating or answering the following information or questions:
• Draw a flow diagram of the process, labelling all inputs, outputs and process conditions
• Perform energy and mass balances and calculate the total drying time, t, if the density of the particle is ρp = 1500 kg/m³. The feed droplets generated by the nozzle have an estimated range of diameters from 40 to 95 µm and their critical moisture content is 40 % wet basis.
• Calculate the overall efficiency
• Calculate the Reynolds number for the feed and the drying air. The viscosity of the milk is 2.127 cP. The Reynolds number is calculated using the air velocity based on Figure 3. The inlet pipe diameter for the air is 0.4 cm and that for the feed is 0.2 cm.
NB. The drying air is sucked through the drying chamber by the aspirator motor. Therefore the amount of heated drying air can be increased or decreased by regulating the aspirator speed.
• Estimate the heat transfer from a gas to a single particle of initial diameter 100 µm. The latent heat of vaporisation is 2360 kJ/kg.
• Estimate the rate of mass transfer from a particle of initial diameter 100 µm to a gas
• Assuming the ratio of height to diameter of the drying chamber is 2:1, estimate the dimensions of the drying chamber.
• Estimate a droplet free-fall velocity, assuming a milk droplet falls in the dry air at 95 ˚C under Stokes conditions
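The Reynolds-number and Stokes free-fall items above can be sketched in a few lines. Only the pipe diameters, the milk viscosity (2.127 cP), and the particle density (1500 kg/m³) come from the brief; the velocities and the hot-air properties marked "assumed" are placeholders to be replaced with values read from Figure 3 and property tables.

```python
def reynolds(rho, v, d, mu):
    """Re = rho * v * d / mu, all quantities in SI units."""
    return rho * v * d / mu

def stokes_velocity(rho_p, rho_f, d, mu, g=9.81):
    """Terminal free-fall velocity of a small sphere under Stokes conditions."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

# Feed (milk) line: mu = 2.127 cP and D = 0.2 cm are from the brief.
mu_milk = 2.127e-3   # Pa s
d_feed = 0.002       # m
rho_milk = 1030.0    # kg/m^3 (assumed typical value for milk)
v_feed = 0.5         # m/s (placeholder -- read the real velocity from Figure 3)
print(f"Re_feed = {reynolds(rho_milk, v_feed, d_feed, mu_milk):.0f}")

# Drying-air line: D = 0.4 cm from the brief; air properties at 95 C assumed.
mu_air = 2.2e-5      # Pa s
rho_air = 0.96       # kg/m^3
d_air = 0.004        # m
v_air = 5.0          # m/s (placeholder -- read from Figure 3)
print(f"Re_air = {reynolds(rho_air, v_air, d_air, mu_air):.0f}")

# Free fall of a 100 um droplet (rho_p = 1500 kg/m^3 from the brief).
v_t = stokes_velocity(1500.0, rho_air, 100e-6, mu_air)
print(f"v_t = {v_t:.3f} m/s")

# Stokes' law strictly requires the particle Reynolds number << 1; if this
# check is not small, an intermediate-regime drag correlation is needed.
print(f"Re_particle = {reynolds(rho_air, v_t, 100e-6, mu_air):.2f}")
```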
• What differences did you observe (if any) in the solubility of the powder and the yield when varying the inlet temperature? Use Table 1 when presenting your results
• Discuss different ways of controlling the outlet temperature
• Discuss and interpret the results in comparison to the theory and the quality of other commercial dried powder products such instant coffee. | {"url":"https://www.brilliantessayhelp.com/calculate-reynold-number-for-the-feed-and-drying-air/","timestamp":"2024-11-14T12:04:41Z","content_type":"text/html","content_length":"30745","record_id":"<urn:uuid:82fe7e0c-c7e2-40fd-9362-888fead86c6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00043.warc.gz"} |
Create Multiple Charts In Excel At Once Vba
Create Multiple Charts In Excel At Once Vba – You can create a multiplication chart in Excel by using a template. Below you will find a number of sample templates and learn how to format your multiplication chart using them, along with some tips and tricks for building one. Once you have a working formula, all you have to do is copy it and paste it into a new cell. You can then use this formula to multiply one set of numbers by another.
Multiplication table template
If you need to create a multiplication table, you may want to learn how to write a simple formula. First, lock the header row and column with absolute references, then multiply the column-A header by the row-1 header. A clean way to do this is with mixed references: enter the formula =$A2*B$1 in cell B2, using $A2 for the column-A header and B$1 for the row-1 header. The result is a multiplication table with a single formula that works when filled across all rows and columns.
You can use the multiplication table template to create your table if you are using an Excel program. Just open the spreadsheet with the multiplication table template and change the label to the student's name. You can also modify the sheet to fit your own needs. There is an option to change the colour of the cells to alter the appearance of the multiplication table, too. You can then adjust the range of multiples to suit your needs.
Building a multiplication chart in Excel
Even when you are not using a template, you can easily build a simple multiplication table in Excel. Simply create a sheet with rows and columns numbered from one to thirty. Where a row and a column intersect is the answer: if a row has the header three and a column has the header five, for example, then the value at the intersection is three times five. The same goes the other way around.
First, enter the numbers that you need to multiply. If you need to multiply two digits by three, for example, you can type a formula for each number starting in cell A1. To cover more numbers, select the cells from A1 to A8 and extend the selection to a larger range of cells. You can then type the multiplication formula in the cells in the other rows and columns.
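As a quick sanity check of the mixed-reference idea ($A2 times B$1), the same one-to-thirty table can be generated outside Excel: every interior cell is just the row header multiplied by the column header. This Python sketch is illustrative only; writing the values into a worksheet is left to Excel.

```python
# Build the 30 x 30 multiplication table described in the text.
n = 30
headers = list(range(1, n + 1))
table = [[row * col for col in headers] for row in headers]

# Row header 3, column header 5 -> 15, matching the example above.
print(table[2][4])   # indices are zero-based; prints 15
```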
Leave a Comment | {"url":"https://www.multiplicationchartprintable.com/create-multiple-charts-in-excel-at-once-vba/","timestamp":"2024-11-12T06:39:44Z","content_type":"text/html","content_length":"52732","record_id":"<urn:uuid:7a3aacf4-6b11-4476-a37e-2172f0bd223f>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00860.warc.gz"} |
Thermomass Theory: A Mechanical Pathway to Analyze Anomalous Heat Conduction in Nanomaterials
The Fourier law, proposed in 1822 [1], is the foundation of thermal conduction. It states that the heat flux passing through a material is proportional to the local temperature gradient,

q = -κ∇T, (1)

where q is the heat flux, ∇T is the local temperature gradient, and κ is the thermal conductivity, which represents the material's capability of transferring heat. For a long time, the Fourier law modeled heat conduction accurately. In the middle of the twentieth century, theoretical physicists started to question the Fourier law because of its contradiction with the second law of thermodynamics [2]. After that, heat waves were observed in low-temperature experiments [3] and aroused interest as well as controversy. In the 1980s, short-pulse laser experiments stimulated a great deal of research and led to several relaxational [4], hyperbolic [5], or lagging-type [6] models, which can be regarded as generalizations of the Fourier law. The above research focused on the distortion of ordinary heat transfer at short time scales. On the other hand, the shrinking of space scales caused another type of distortion, first perceived in the early 1990s as a sign of the failure of the Fourier law in thin dielectric films [7]. The phenomena of anomalous heat transfer in small-scale materials can be fundamentally understood through the kinetic theory of phonons; that is, the thermal conductivity of dielectric materials can be formulated as [8, 9]

κ = Cvλ/3, (2)
where C is the specific heat per unit volume, v is the average group velocity of phonons, and λ is the phonon mean free path (MFP). When the material size is much larger than the MFP, the MFP can be regarded as a constant, dominated by the intrinsic phonon-phonon and phonon-defect scattering rates; the thermal conductivity is then independent of the system size. In contrast, when the material size is reduced to a value comparable with the MFP, phonon-boundary scattering becomes considerable. In this condition, a smaller system size induces a higher boundary scattering rate and consequently a shorter effective phonon MFP. Using Eq. (2), one then finds the reduced thermal conductivity of nanomaterials.
The reduced thermal conductivity of nanofilms is a disadvantage for heat dissipation in IC chips or semiconductor lasers. Nevertheless, it is an advantage for thermoelectric devices. Experiments showed that silicon nanowires have a very high figure of merit (ZT) [10, 11]. Nanocomposites also demonstrate a considerable ZT, benefiting from nano-sized superlattices or grains that significantly scatter phonons and reduce the effective thermal conductivity [12, 13]. Therefore, a lot of effort has been made to fabricate materials with ultra-low thermal conductivity through nanotechnology, with the target of a high ZT for applications in advanced heating and cooling, waste heat recovery [14], and solar thermoelectric generators [15].
Due to the fast growth of energy-related nanomaterial synthesis and its transition from the laboratory to industrial applications, modeling the thermal conducting behavior in nanosystems is urgently needed. Ideally, such a model should rise from the perspective of characterizing the fundamental physics and lead to a simply structured theory that can be conveniently used by engineers. Nevertheless, this goal has not been satisfactorily achieved, and current research is paving the way toward it. The gray model proposed by Majumdar is a pioneering work on this path. It predicts the effective thermal conductivity as [7]

κ[eff] = κ[0]/(1 + βKn), (3)
where κ[eff] is the effective thermal conductivity, κ[0] is the thermal conductivity of the bulk material, L is the characteristic size of the system, and β is a dimensionless parameter. Except when the temperature is much lower than the Debye temperature, phonon scattering at most engineering surfaces can be regarded as diffusive. In this case, it was derived that β = 3/8 for the in-plane heat conduction of nanofilms, β = 4/3 for the cross-plane heat conduction of nanofilms, and β can also be selected as 4/3 for the longitudinal heat conduction of nanowires [16]. Kn is the Knudsen number, the ratio of the MFP over L. Kn is actually a concept from gas dynamics, where it is well known that rarefaction effects should be considered in high-Kn situations [17]. Eq. (3) was derived from an analogy between photons and phonons as wave packets of energy; therefore, radiative transfer was assumed for phonons. It is easily seen that Eq. (3) reduces to the Fourier law when the system size is much larger than the MFP, that is, at the bulk limit. When the system size is comparable with the MFP, Eq. (3) delineates the size dependency of the thermal conductivity. However, with the progress in measuring the thermal conductivity of thin silicon films [18–21], the accuracy of Eq. (3) was questioned. It was claimed that the MFP of monocrystalline silicon should be around 300 nm to match the experimental results [19], while the value based on Eq. (2) is around 42 nm. Chen et al. [22–24] proposed that the phonon MFPs of single-crystal Si at room temperature should be 210–260 nm, considering that phonons of different frequencies contribute differently to the heat conduction. This amendment partly resolves the inaccuracy of the gray model, but it still exhibits considerable deviations when predicting the experimental values for nanowires [25]. McGaughey et al. [16] developed a model which accounts for the full dispersion relation and the direction-dependent scattering probabilities at surfaces. This model matches experiments well for nanofilms, while still overestimating them for nanowires.
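As a quick numerical illustration of the gray model, κ[eff] = κ[0]/(1 + βKn), the sketch below evaluates Eq. (3) with the bulk silicon values quoted in the text (κ[0] = 148 W/(m K), bulk MFP about 42 nm) and the β values listed above. It shows the model's scaling only; as discussed, the more accurate models deviate from it.

```python
def k_eff_gray(k0, mfp, size, beta):
    """Gray-model effective thermal conductivity, Eq. (3), size in meters."""
    kn = mfp / size          # Knudsen number, Kn = MFP / L
    return k0 / (1.0 + beta * kn)

K0, MFP = 148.0, 42e-9       # bulk Si values quoted in the text
for size in (20e-9, 100e-9, 1e-6):
    film = k_eff_gray(K0, MFP, size, 3.0 / 8.0)   # in-plane film, beta = 3/8
    wire = k_eff_gray(K0, MFP, size, 4.0 / 3.0)   # nanowire, beta = 4/3
    print(f"L = {size*1e9:7.0f} nm: film {film:6.1f}, wire {wire:6.1f} W/(m K)")
```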
Phonon hydrodynamics [26–31] is another pathway to model nanoscale heat conduction. It originates from solving the linearized Boltzmann equation. An additional term representing the second-order spatial derivative of the heat flux, ∇^2q, is involved in the governing equation of heat conduction. Since the heat flux is similar to a fluid flow flux, ∇^2q is in analogy with the viscous dissipation term in the Navier-Stokes equation of fluid mechanics. Therefore, the heat flux can be nonuniform over the heat transfer cross-section due to the drag from the boundary, forming a Poiseuille flow of heat. This behavior motivates the terminology "phonon hydrodynamics." Analysis based on phonon hydrodynamics indicated that the effective thermal conductivity of nanosystems should be inversely proportional to the square of Kn due to the nonuniform distribution of the heat flux profile. However, experiments indicated that the effective thermal conductivity is approximately linear in the characteristic size rather than in its square. It was thereby further elucidated that boundary velocity slip would happen in the case of large Kn [29, 31]. By introducing a slip boundary condition into the governing equation, a linearly size-dependent effective thermal conductivity can be achieved. The drawbacks of present phonon hydrodynamics analyses are: (1) the arbitrariness in choosing the style and parameters of the slip boundary condition; (2) the deviation from the physical picture of the original derivation of the Boltzmann equation, where it was the normal (N) scattering processes that induced the second-order spatial derivative of the heat flux. Present phonon hydrodynamic models simply use the MFP of the resistive (R) scattering processes as the parameter of ∇^2q.
Given the abovementioned progress and its defects, the development of better models characterizing heat conduction in nanomaterials should be based on capturing the essential features of its physics. In recent years, the thermomass theory has been developed in our group; it proposes a mechanical analysis framework for heat transfer [32–35]. Generalized heat conduction governing equations are established based on such an analysis. In the following sections, we present the application of the thermomass theory to nanomaterial heat conduction. The size dependency of the thermal conductivity, thermal rectification, and thermoelectric effects will be addressed.
Thermomass theory
Historically, the nature of heat was regarded either as a fluid or as motion. The caloric theory regards heat as a weightless, self-repulsive fluid; in the eighteenth century and the first half of the nineteenth century it was the mainstream theory. It became extinct after the mid-nineteenth century and was replaced by the dynamic theory, in which the nature of heat is the random motion of the particles in a body. In the twentieth century, Einstein's relativity theory introduced the well-known mass-energy equivalence relation, E = mc^2, where c is the speed of light. According to this theory, thermal energy should correspond to a certain amount of mass. To illustrate his theory, Einstein elucidated that "a piece of iron weighs more when red hot than when cool" [36]: adding thermal energy to a material, that is, raising its temperature, at the same time increases its mass. The mass increase induced by heat is defined as "thermomass," which is very small under ordinary conditions. For example, the thermomass of Si at room temperature is 10^-12 of the total mass. Such a small amount of mass is negligible when dealing with dynamic problems, like the movement and balance of a body. However, heat conduction is the movement of the thermomass itself relative to the molecules or the lattice. It is driven by the pressure gradient induced by the concentration difference of thermomass within the material. The forces and inertia of the thermomass are comparable and lead to its limited acceleration and drift velocity. The advantage of bringing in the concept of thermomass is that the analysis of heat conduction can follow a mechanical framework, in which the corresponding forces, velocities, accelerations, and momenta can be properly defined.
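The quoted 10^-12 figure for Si is easy to check to order of magnitude: divide the thermal energy density by c^2. The sketch below estimates the energy density as ρ c_p T with handbook values for silicon; that estimate is an assumption for illustration, since the text computes e from the phonon spectrum.

```python
c = 2.998e8          # speed of light, m/s
rho_si = 2330.0      # density of Si, kg/m^3 (handbook value, assumed)
cp_si = 700.0        # specific heat of Si, J/(kg K) (handbook value, assumed)
T = 300.0            # room temperature, K

e = rho_si * cp_si * T    # rough thermal energy density, J/m^3
rho_tm = e / c**2         # thermomass density via E = mc^2, kg/m^3

# Fraction of the total mass carried as thermomass; prints ~2.3e-12,
# consistent with the 10^-12 order of magnitude quoted in the text.
print(f"thermomass fraction = {rho_tm / rho_si:.1e}")
```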
Consider dielectric solids, in which phonons are the main heat carriers. In this case, the internal energy per unit volume, e, is the summation of all phonon energies [9],

e = Σ[n] ∫ ħω[n](k) f(k,n) dk, (4)

where ħ is the reduced Planck constant (Dirac constant), ω is the phonon frequency, k is the wave vector, and n denotes the index of the phonon branch. f is the phonon distribution function. In the equilibrium state, f obeys the Bose-Einstein distribution

f[0] = 1/[exp(ħω/k[B]T) - 1], (5)

where k[B] is the Boltzmann constant. The density of the thermomass, that is, the equivalent mass of the phonon gas, is obtained by using Einstein's mass-energy equivalence relation,

ρ[TM] = e/c^2. (6)
It should be remembered that the frequently used expression for the thermal conductivity of phonon systems, Eq. (2), comes from the analogy between a gas and the heat carriers. The scattering of phonons induces resistance to heat transport. Generally, the scattering accounted for in the thermal resistance consists of the R processes, including Umklapp scattering, defect scattering, and boundary scattering. These scattering events eliminate the quasi-momentum of phonons. The MFP defined in Eq. (2) refers to the distance traveled by a phonon between succeeding R scatterings. However, in ideal gas systems, the collisions among gas molecules do not destroy the momentum. Therefore, the R processes of phonons more closely resemble the collision of gas molecules with stationary barriers. This is the case when a gas flows through a porous medium: the collision frequency between the gas molecules and the material skeleton determines the resistance experienced by the gas flow. In porous flow, Darcy's law states that the effective flow velocity is proportional to the pressure gradient,

u = -(K/μ)∇p. (7)

The pressure gradient can be regarded as the driving force of the flow. From the viewpoint of force balance, the driving force is actually balanced by the friction force; thereby, Eq. (7) essentially depicts that the friction force is proportional to the flow velocity. This is the general case in laminar flow.
In analogy to the gas flow in a porous medium, the velocity of the thermomass is defined as

u[TM] = q/(ρ[TM]c^2). (8)

The mass and momentum balance equations of the thermomass can be derived as [32–34]

∂ρ[TM]/∂t + ∇·(ρ[TM]u[TM]) = 0, (9)

ρ[TM](∂u[TM]/∂t + u[TM]·∇u[TM]) + ∇p[TM] + f[TM] = 0, (10)

where p[TM] is the phonon gas pressure and f[TM] is the friction force impeding the phonon gas. Eq. (9) gives the energy conservation equation when Eqs. (6) and (8) are applied. Eq. (10) characterizes the heat transport, which is the motion of the thermomass through the material. To obtain the explicit heat transport governing equation, the pressure and friction terms need to be determined. If the phonons are viewed as moving particles with finite mass, their pressure can be derived by accounting for the momentum change when these particles hit and rebound from a unit area of the container surface, in analogy to the kinetic theory of gases. As a result, the pressure of the phonon gas can be expressed in terms of the phonon group velocity as

p[TM] = ρ[TM]v[g]^2/3, (11)
where v[g] is the group velocity of the phonons. For bulk materials, the friction experienced by the thermomass can be extracted from Eq. (7). When discussing nanosystems, the boundary effect needs to be considered: Darcy's law for porous flow was extended to the Darcy-Brinkman relation when the boundary effect is nonnegligible [37, 38],

∇p = -(μ/K)u + μ∇^2u, (12)

where μ is the viscosity and K is the permeability, with units of m^2. Eq. (12) indicates that the boundary slip velocity attenuates from the boundary, over a characteristic length of K^(1/2), to the uniform velocity in the porous medium. The introduction of the second-order spatial derivative term also makes Eq. (12) of the same order as the governing equations for free flow. In a steady flow, the driving force is balanced by the friction force. Following the form of Eq. (12), when the boundary effect is considered, the friction of the thermomass can be formulated as Eq. (13), where χ is the friction factor, with the permeability of the thermomass in the heat conducting medium given by Eq. (14).
In large systems, the boundary effect is negligible. Then, Eq. (13) reduces to Darcy's law, with the first term on the right-hand side much more important than the second. When the spatial gradients and changing rates of the physical quantities are not significant, the first and second terms in Eq. (10) can be neglected. In this case, Eq. (10) exhibits the balance between the driving force and the friction force: the heat conduction in such a nonequilibrium system is steady. Combining Eqs. (13) and (10) then leads to Eq. (15). For the simplest case, v[g] and C are assumed to be temperature independent; Eq. (15) then actually gives the Fourier law, with the thermal conductivity given by Eq. (16). When the boundary effect is considerable, the second term in Eq. (13) needs to be accounted for. In this case, the combination of Eqs. (13) and (10) gives

q = -κ∇T + l[B]^2∇^2q, (17)

where l[B], equal to the square root of K[TM], is a characteristic length.
Eq. (17) is a generalization of the Fourier law for cases in which the boundary effect needs to be considered. It predicts the reduction of the effective thermal conductivity in nanosystems through the additional resistance term. When the system size is larger, the spatial gradient of q is smaller; thus, κ[eff] increases as the system size grows. Nevertheless, to quantitatively predict the size dependency of κ[eff] and compare it with experiments, the exact value of l[B] needs to be determined for each material. The thermal conductivity is a macroscopic physical quantity, usually obtained by experiments; similarly, with plenty of experimental data, the values of the thermomass permeability and l[B] could be evaluated. However, experiments on nanosystems are still expensive and carry large uncertainties. Therefore, in the following, a bottom-up strategy, namely one rising from the microscopic phonon properties, is used to extract the value of l[B].
Phonon Boltzmann derivation
For dielectric solids, the Boltzmann equation describes the evolution of the phonon density of states as [26, 27]

Df = Cf, (18)

where D is the drift operator and C is the collision operator. Eq. (18) indicates that the phonon gas would drift freely without the disturbance of collisions. The drift operator is

D = ∂/∂t + v[k]∂/∂x, (19)

where v[k] is the phonon velocity in one Cartesian direction. Collisions, such as phonon-phonon scattering, reshape the phonon distribution function. In phonon theory, the collisions can be sorted into R and N processes. The R processes break the phonon quasi-momentum, while the N processes conserve it. In this sense, the collision operator can be simply formulated as

Cf = -(f - f[0])/τ[R] - (f - f[D])/τ[N], (20)

where τ[R] and τ[N] are the characteristic relaxation times between succeeding R and N events, f[0] is the equilibrium distribution given by Eq. (5), and f[D] is the displaced distribution

f[D] = 1/{exp[(ħω - ħk·u[D])/(k[B]T)] - 1}, (21)

where u[D] is the drift velocity of the phonon gas. Eq. (20) illustrates that the R processes tend to bring f back to f[0], while the N processes tend to bring f toward f[D].
If f can be approximated by f[D], a solution of Eq. (20), denoted Eq. (22), can be obtained with a second-order Taylor expansion of f[D] around f[0] followed by integration [33]. If the friction force in Eq. (10) has only the first term, which is linear in the thermomass velocity, Eq. (22) is identical to Eq. (10) except for a coefficient of 15/16 in front of the second term on the left-hand side. This difference is caused by the Doppler effect during the drift motion of the phonon gas. From this perspective, the phonon gas differs slightly from a real gas: the phonon energy varies due to the dispersion, causing the "eclipse" of the convection term. In a nondispersive medium, the frequency is independent of k; then Eq. (22) is consistent with Eq. (10). Nevertheless, the second-order spatial derivative term, as in Eq. (17), is missing. In nanosystems, the boundary condition should be considered in solving Eq. (18). For example, if the boundary is completely diffusive, the drift velocity in Eq. (21) is dragged to zero. In this case, the phonon distribution function is assumed to have the form of Eq. (23). It indicates that, with a diffusive boundary, the N processes induce a deviation from f[D], with a relaxation length λ[N] = v[g]τ[N], i.e., the MFP of the N processes. The additional term in Eq. (23) gives a second-order spatial derivative term. By integrating Eq. (18), one gets Eq. (25).
Eq. (25) can be regarded as the first-order Chapman-Enskog expansion [17] of the phonon distribution function. In fluid mechanics, the viscous term in the Navier-Stokes equation can be derived from the first-order Chapman-Enskog expansion of the state distribution function of the fluid molecules. Without the Chapman-Enskog expansion, the solution of the Boltzmann equation gives the Euler equation, that is, the dynamic equation without viscous dissipation. This case occurs when the region of interest is far away from the boundary, or when the boundary layer thickness is negligible compared with the flow region, as in the large-Reynolds-number flow around aircraft. The difference between the thermomass flow and an ordinary gas flow is that the R processes cause a resident friction force on the flow, which makes the transfer diffusive. In low-temperature crystals, or in low-dimensional materials such as graphene, the R processes can be rare; the heat conduction will then exhibit obvious hydrodynamic behaviors. Therefore, based on the phonon Boltzmann derivation, the value of l[B] in Eq. (17) can be determined as l[B]^2 = λ[R]λ[N]/5.
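Plugging in the Si room-temperature MFPs used later in the text (λ[R] = 42 nm, λ[N] = 360 nm) gives a quick numerical value for this characteristic length:

```python
import math

lam_R = 42e-9        # resistive-process MFP of Si at room temperature, m
lam_N = 360e-9       # normal-process MFP of Si at room temperature, m

# l_B^2 = lambda_R * lambda_N / 5, from the phonon Boltzmann derivation
l_B = math.sqrt(lam_R * lam_N / 5.0)
print(f"l_B = {l_B*1e9:.0f} nm")   # prints 55 nm
```

So for silicon at room temperature the Darcy-Brinkman boundary layer of the phonon gas extends over roughly 55 nm, which is why films and wires tens of nanometers thick show strongly reduced κ[eff].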
Phonon gas flow in Si nanosystems
Based on Eq. (25) we can calculate the effective thermal conductivity of nanosystems. Silicon nanofilms and nanowires are investigated here because experimental results are available for comparison. The geometries of the nanofilms and nanowires are shown in Figure 1. The direction of heat conduction is in-plane for the nanofilms and longitudinal for the nanowires.
Assume the boundary is completely diffusive, i.e., the phonon gas drift velocity is zero on the boundary. The solution of Eq. (25) for a nanofilm is Eq. (26) [39], where l is the thickness of the film and Br = l[B]/l is the Brinkman number. The solution for a nanowire is Eq. (28), where l is the diameter (thickness) of the wire and J is the cylindrical Bessel function. Eqs. (26) and (28) show that the heat flux is nonuniform over the cross-section. If the system size is much larger than l[B], q(y) tends to be constant; the effective thermal conductivity then reaches the bulk limit, κ[0]. If the system size is comparable with l[B], q(y) is significantly affected by the boundary, and κ[eff] is thereby strongly reduced.
The analytical derivation of Eqs. (26)–(30) is based on the assumption that l[B] is constant. However, in nanosystems the phonons scatter with the boundary, which shortens the MFPs; for a purely diffusive boundary, the scattering at the boundary terminates the free paths. This can be seen as an additional collision event on top of the ordinary scatterings. If the boundary is located a distance r away from the originating point, the effective MFP of the phonons can be expressed as Eq. (31). In this way, the effective MFPs in nanosystems can be obtained by integrating over the sphere angle. For nanofilms, the local value of the MFP is given by Eq. (32) [40], where α = (l/2−y)/λ[0], β = (l/2+y)/λ[0], and Ei(x) = ∫[1]^∞ t^(-1)e^(-tx) dt. For nanowires, the corresponding result is Eq. (33).
Therefore, the MFPs are significantly shortened in nanosystems. This reveals that the boundary has dual effects on heat conduction in nanosystems. First, the second spatial derivative of the heat flux, which represents the viscous effect of the phonon gas, imposes additional resistance on the heat transfer due to the nonslip boundary condition. Second, the collisions at the boundary change the effective MFPs. This effect is similar to the rarefaction of a gas flow in the high-Kn case. By accounting for both of these effects, the thermal conduction in nanosystems is described by Eq. (34).
It is worth noting that in fluid mechanics, based on the Darcy-Brinkman relation, rarefaction does not necessarily occur at the same time as the viscous effect. Consider water flow in a porous material. The permeability of the porous flow is determined by the size of the pores, typically on the order of micrometers, while the MFP of the water molecules is typically subnanometer. Therefore, the square root of the permeability differs greatly from the MFP, and the effects of the Darcy-Brinkman boundary layer and of rarefaction can be decoupled. On the other hand, if the fluid is replaced by a gas, the MFP of the fluid can be comparable to the square root of the permeability; in this case, the Darcy-Brinkman boundary layer and the rarefaction should be considered simultaneously. For the phonon gas flow, the relative magnitudes of λ[R], λ[N], and l decide the coupling of the boundary layer and rarefaction: λ[R] represents the "size of the pores," while λ[N] represents the viscosity of the phonon gas. The bulk limit is reached when l >> λ[R] and l >> λ[N]. If λ[R] >> l >> λ[N], the first term on the right-hand side of Eq. (34) is less important than the second term; the flow mimics a dense fluid passing through a sparse medium, the boundary transmits momentum efficiently across the flow region, and phonon hydrodynamics can be observed. If λ[N] >> l >> λ[R], the flow mimics a dilute fluid passing through a dense medium; the velocity profile will be close to linear, and only the rarefaction effect needs to be considered. If λ[N] >> l and λ[R] >> l, both the rarefaction and the boundary drag affect the resistance to the flow and need to be modeled simultaneously.
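Eq. (31) itself is not reproduced above, but a standard truncated-path form consistent with the exponential-integral expressions quoted for Eqs. (32) and (33) is λ_eff(r) = λ[0](1 − e^(−r/λ[0])): a phonon a distance r from a diffusive boundary keeps its bulk MFP when r >> λ[0] and is limited to roughly r when r << λ[0]. The sketch below evaluates that form under this assumption.

```python
import math

def lambda_eff(lam0, r):
    """Boundary-limited MFP for a phonon a distance r from a diffusive wall.

    Assumed truncated-path form: lam0 * (1 - exp(-r / lam0)).
    """
    return lam0 * (1.0 - math.exp(-r / lam0))

lam0 = 42e-9   # bulk resistive MFP of Si at room temperature, m
for r in (10e-9, 42e-9, 1e-6):
    print(f"r = {r*1e9:7.0f} nm -> lambda_eff = {lambda_eff(lam0, r)*1e9:5.1f} nm")
```

Angle-averaging this expression over all phonon directions (r varies with direction) is what produces the Ei functions in Eqs. (32) and (33).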
The numerical solution of Eq. (34) gives the effective thermal conductivities of Si nanofilms and nanowires at room temperature, as shown in Figure 2. The physical properties are adopted as κ[0] = 148 W/(m K) (the standard experimental value for monocrystalline Si), λ[R,0] = 42 nm (according to the direct calculation based on Eq. (2)), and λ[N,0] = 360 nm. The predictions based on the gray model [7], the McGaughey model [16], and the Ma model [31] are also presented in Figure 2. It shows that the gray model and the McGaughey model overestimate the thermal conductivities. The Ma model gives results close to the experiments; however, it assumes an MFP of 210 nm, which lacks physical support, and it shows an unreasonable drop at D = 1000–2000 nm for nanowires. According to Figure 2, our model achieves the best agreement with the currently available experimental and numerical results.
Thermal rectification in nanosystems
Thermal rectification refers to the phenomenon that heat conduction in one direction of a device leads to a higher heat flux than conduction in the opposite direction, even though the same temperature difference is applied. It has attracted much interest since the first experimental report in carbon nanotubes [43]. The thermal rectification effect is anticipated to enable thermal diodes [44], thermal logic gates [45], and thermal transistors [46, 47]. Though much effort has been devoted to searching for useful mechanisms and realizing considerable rectification ratios, the ambitious goal of controlling heat as electricity is still far away [48].
The mechanism of thermal rectification has been widely studied. It is found that various effects can induce rectification, such as the different temperature dependences of the thermal conductivity at
the different parts of the device [49], the asymmetric transmission rates of phonons across the interfaces [50], and the temperature dependence of electromagnetic resonances [51]. Here, another
rectification mechanism is proposed through the thermomass theory, following an analogy to fluid mechanics. In the Navier-Stokes equations, the convective acceleration term describes the speeding up or slowing down of the fluid. Therefore, if the cross-sectional area of a flow channel changes (e.g. a trapezoidal channel), the flow rate under the same pressure difference differs between the convergent and divergent directions. In the convergent direction, the channel serves as a nozzle, which accelerates the fluid and converts part of its potential energy into kinetic energy. In the divergent direction, the channel serves as a diffuser, which decelerates the fluid and converts part of its kinetic energy into potential energy. The acceleration of the fluid increases the velocity head and consumes the dynamic head of the flow. Therefore, the total fluid flux in the convergent direction will be less than that in the divergent direction. In terms of thermal conduction, this means that with the same temperature difference between the heat source and sink, the total heat flux in the wide-to-narrow direction is smaller than that in the narrow-to-wide direction, which is the thermal rectification. Nevertheless, it should be stressed that for a flow channel with a large angle of divergence, flow separation could happen when the fluid velocity is high. In that case, the effective resistance of the diffuser is much increased, which may make the total heat flux in the wide-to-narrow direction larger than that in the narrow-to-wide direction, that is, reverse the rectification.
In steady state, the generalized conduction law, Eq. (10), can be reformulated as
The difference between Eqs. (35) and (25) is the additional convective term, −τ[R]∇[j](q[i]q[j])/e. The first term on the left hand side is analogous to the spatial inertia term in fluid mechanics. It induces the rectification effect. Consider a trapezoidal material with heat conducting along the symmetry axis, as shown in Figure 3. The thickness of the material is H; the widths at the narrow and wide ends are L[N] and L[W], respectively; and the separation between these ends is L. If L is much larger than L[N] and L[W], the heat conduction can be assumed to be quasi-one-dimensional. The mainstream of the heat flux is in the x direction, q[x] >> q[y]. The total heat flux (Q) at each cross-section perpendicular to the x direction is constant. Due to the boundary friction, the Laplacian of q[x] in the y direction is much larger than that in the x direction. Then, the x component of Eq. (35) is
where C[R] consists of two terms
The sign of the first term of C[R] is positive for heat conduction in a convergent channel, which means the acceleration of the heat flux creates additional effective resistance and reduces the total heat flux. Conversely, heat conduction in a divergent channel increases the total heat flux. The second term of C[R] does not change sign with the direction of heat transport. It characterizes the acceleration due to density variation, since thermomass is compressible. It is insignificant except in the case of ultra-high heat flux [52].
To enhance the thermal rectification, the direction-sensitive part in Eq. (36) should be amplified relative to the direction-insensitive part. If the diffusive boundary condition is replaced with a slip boundary condition, or the system size is large compared with the boundary layer, the Laplacian term of the heat flux can be neglected. At room temperature, the second term of C[R] is usually much less than the first term. In this case, Eq. (36) can be simplified to
Consider a silicon ribbon with an average temperature of 300 K. Assume that H = 1000 nm, L = 300 nm, L[N] = 300 nm, and L[W] varies from 300 to 2000 nm. The relaxation time τ[R] is set to 1.5 × 10^−10 s based on experimental results [53]. The temperatures at the two ends are 330 and 270 K, respectively. By numerically solving Eq. (38), we can get the rectification ratio (defined as the thermal conductance in the narrow-to-wide direction over that in the opposite direction), as shown in Figure 4. It shows that the rectification ratio grows with L[W] from zero to a considerable value of 32.3%. This value is large enough to construct a thermal diode or thermal logic gate.
Thermoelectricity of nanosystems
The ZT of nanomaterials can be much enhanced [10–13]. One mechanism for such enhancement is that the nanostructures reduce the thermal conductivity through strong phonon-boundary scattering while maintaining the electrical conductivity. Although a lot of work has been done on searching for high-ZT materials through nanotechnology, the thermodynamic analysis and the role of nonlocal and nonlinear transport, which are very likely to occur in nanosystems, have not been fully discussed [54, 55]. In recent years, the nonlocal effects raised by the MFP reduction due to geometric constraints [56], the electron and phonon temperatures [57], and the breakdown of the Onsager reciprocal relation (ORR) [58, 59] in nanosystems have been investigated within the framework of extended irreversible thermodynamics (EIT). These works showed that the nonlinear and nonlocal effects influence the efficiency of devices. The breakdown of the ORR not only possesses theoretical importance but also sheds light on approaches to further increase the efficiency.
Here, we analyze the thermoelectric effect from the perspective of the thermomass theory. Various effects can arise when the individual motions of the phonon gas and the electron gas are considered separately. The most apparent one is the energy exchange between phonons and electrons [60]. In a one-dimensional thermoelectric medium, the conservation of energy gives
where I is the electrical current and E is the electrical field. The product IE equals the rate at which thermal energy is added or removed. Dividing Eq. (39) by c^2 shows that the electrical current acts as a mass source or sink of thermomass. The nonconservation of mass introduces an additional term into Eq. (10). In steady state, we obtain the governing equation of thermomass momentum as
The second term on the left hand side is nonzero because of the energy conversion. It increases the spatial inertia of thermomass. For simplicity, we do not consider the Brinkman extension of the friction force and assume that the material cross-section is constant; Eq. (40) then becomes
Compared with Eq. (37), the first term of C[R] has a coefficient of 2 because of the energy exchange between phonons and electrons. The electrical current couples with the heat flux and induces an additional spatial acceleration force on the thermomass flow. This inertia increase is insignificant under ordinary conditions due to the small value of τ[R]. It could be considerable in the case of a high-power thermoelectric converter with a large electrical current and an intense electrical field. Neglecting the second term of C[R], it can be derived that the effective thermal conductivity and Seebeck coefficient change to
Since ZT = S^2σT/κ, the effective ZT is (1 + C[R])^−1 times the original value without considering the inertia effect of thermomass. Therefore, when IE > 0, the electrical energy converts to thermal energy; this is typically the case of a thermoelectric cooler. The heat flux is additionally impeded, and the ZT is decreased. When IE < 0, the temperature gradient drives the electric current; this is typically a thermoelectric generator. The heat flux is further pumped, and the effective ZT is enhanced. The inertia effect could be beneficial for a higher device ZT in this case.
In this chapter, we present a mechanical analysis of thermal conduction in nanosystems with the thermomass theory. First, the boundary resistance to heat flow in nanosystems is modeled with the Darcy-Brinkman analogy. The permeability of thermomass in materials is derived based on the phonon Boltzmann equation. The size-dependent effective thermal conductivity of Si nanosystems is thereby accurately predicted with the present model. Then, the spatial inertia effect of thermomass is shown to induce thermal rectification in asymmetric nanosystems. The predicted rectification ratio can be as high as 32.3% in a trapezoidal Si nanoribbon. Finally, the energy conversion in thermoelectric devices can be coupled with the spatial inertia of the thermomass flow. The ZT tends to be increased in the case of a thermoelectric generator.
This study presents the application of a semi-analytical and numerical solution technique to both Volterra and Fredholm integro-differential difference equations by employing the Differential Transform Method, which depends on the Taylor series expansion, and by introducing new differential transform theorems with their proofs. To illustrate the computational efficiency and reliability of the method relative to other common numerical methods in the open literature, some examples are carried out; it is found that the results are highly accurate and reliable.
Implementing an Autoencoder in PyTorch
Building an autoencoder model for reconstruction
This is the PyTorch equivalent of my previous article on implementing an autoencoder in TensorFlow 2.0, which you may read through the following link,
Implementing an Autoencoder in TensorFlow 2.0
by Abien Fred Agarap
First, to install PyTorch, you may use the following pip command,
pip install torch torchvision
The torchvision package contains the image data sets that are ready for use in PyTorch.
More details on its installation can be found in this guide from pytorch.org.
Since the linked article above already explains what an autoencoder is, we will only briefly discuss it here.
An autoencoder is a type of neural network that finds the function mapping the features x to itself. This objective is known as reconstruction, and an autoencoder accomplishes this through the
following process: (1) an encoder learns the data representation in lower-dimension space, i.e. extracting the most salient features of the data, and (2) a decoder learns to reconstruct the original
data based on the learned representation by the encoder.
Mathematically, process (1) learns the data representation z from the input features x, which then serves as an input to the decoder.
Then, process (2) tries to reconstruct the data based on the learned data representation z.
The encoder and the decoder are neural networks that build the autoencoder model, as depicted in the following figure,
To simplify the implementation, we write the encoder and decoder layers in one class as follows,
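A minimal sketch of such a module, consistent with the component names discussed below (the 128-unit hidden width is an assumption):

```python
import torch

class AE(torch.nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        # encoder: input features -> 128-dimensional representation
        self.encoder_hidden_layer = torch.nn.Linear(
            in_features=kwargs["input_shape"], out_features=128
        )
        self.encoder_output_layer = torch.nn.Linear(in_features=128, out_features=128)
        # decoder: representation -> reconstruction of the input
        self.decoder_hidden_layer = torch.nn.Linear(in_features=128, out_features=128)
        self.decoder_output_layer = torch.nn.Linear(
            in_features=128, out_features=kwargs["input_shape"]
        )

    def forward(self, features):
        activation = torch.relu(self.encoder_hidden_layer(features))
        code = torch.relu(self.encoder_output_layer(activation))
        activation = torch.relu(self.decoder_hidden_layer(code))
        # the reconstruction has the same feature size as the input
        return torch.relu(self.decoder_output_layer(activation))
```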
The autoencoder model written as a custom torch.nn.Module.
Explaining some of the components in the code snippet above,
• The torch.nn.Linear layer creates a linear function (θx + b), with its parameters initialized (by default) with He/Kaiming uniform initialization, as can be confirmed here. This means we will call an activation/non-linearity for such layers.
• The in_features parameter dictates the feature size of the input tensor to a particular layer, e.g. in self.encoder_hidden_layer, it accepts an input tensor with the size of [N, input_shape]
where N is the number of examples, and input_shape is the number of features in one example.
• The out_features parameter dictates the feature size of the output tensor of a particular layer. Hence, in the self.decoder_output_layer, the feature size is kwargs[“input_shape”], denoting that
it reconstructs the original data input.
• The forward() function defines the forward pass for a model, similar to call in tf.keras.Model. This is the function invoked when we pass input tensors to an instantiated object of a
torch.nn.Module class.
To optimize our autoencoder to reconstruct data, we minimize the following reconstruction loss,
We instantiate an autoencoder class, and move (using the to() function) its parameters to a torch.device, which may be a GPU (a cuda device, if one exists in your system) or a CPU. Then, we create an optimizer object that will be used to minimize our reconstruction loss.
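A sketch of this setup (the use of Adam with lr=1e-3 and mean squared error are assumptions here, and a single linear layer stands in for the autoencoder so the snippet is self-contained):

```python
import torch

# stand-in for the autoencoder module described above, so this snippet runs on its own
model = torch.nn.Linear(784, 784)

# use a GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# the optimizer will minimize the reconstruction loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# mean squared error between the input and its reconstruction
criterion = torch.nn.MSELoss()
```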
Instantiating an autoencoder model, an optimizer, and a loss function for training.
For this article, let's use our favorite dataset, MNIST. In the following code snippet, we load the MNIST dataset as tensors using the torchvision.transforms.ToTensor() class. The dataset is
downloaded (download=True) to the specified directory (root=<directory>) when it is not yet present in our system.
Loading the MNIST dataset, and creating a data loader object for it.
After loading the dataset, we create a torch.utils.data.DataLoader object for it, which will be used in model computations.
Finally, we can train our model for a specified number of epochs as follows,
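A sketch of such a loop, written as a function here for reuse; it assumes the model, optimizer, criterion, and data loader set up as described above:

```python
import torch

def train(model, optimizer, criterion, train_loader, device, epochs=20):
    for epoch in range(epochs):
        loss = 0.0
        for batch_features, _ in train_loader:
            # flatten [N, 28, 28] images into [N, 784] vectors
            batch_features = batch_features.view(-1, 784).to(device)
            optimizer.zero_grad()            # reset gradients accumulated on prior passes
            outputs = model(batch_features)  # reconstruction of the batch
            train_loss = criterion(outputs, batch_features)
            train_loss.backward()            # backpropagate the reconstruction error
            optimizer.step()                 # update parameters from current gradients
            loss += train_loss.item()
        loss = loss / len(train_loader)      # average loss across the epoch
        print("epoch : {}/{}, loss = {:.6f}".format(epoch + 1, epochs, loss))
    return loss
```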
Training loop for the autoencoder model.
In our data loader, we only need to get the features since our goal is reconstruction using autoencoder (i.e. an unsupervised learning goal). The features loaded are 3D tensors by default, e.g. for
the training data, its size is [60000, 28, 28]. Since we defined our in_features for the encoder layer above as the number of features, we pass 2D tensors to the model by reshaping batch_features
using the .view(-1, 784) function (think of this as np.reshape() in NumPy), where 784 is the size for a flattened image with 28 by 28 pixels such as MNIST.
At each epoch, we reset the gradients back to zero by using optimizer.zero_grad(), since PyTorch accumulates gradients on subsequent passes. Of course, we compute a reconstruction on the training
examples by calling our model on it, i.e. outputs = model(batch_features). Subsequently, we compute the reconstruction loss on the training examples, and perform backpropagation of errors with
train_loss.backward() , and optimize our model with optimizer.step() based on the current gradients computed using the .backward() function call.
To see how our training is going, we accumulate the training loss for each epoch (loss += train_loss.item()), and compute the average training loss across an epoch (loss = loss / len(train_loader)).
For this article, the autoencoder model was trained for 20 epochs, and the following figure plots the original (top) and reconstructed (bottom) MNIST images.
In case you want to try this autoencoder on other datasets, you can take a look at the available image datasets from torchvision.
Closing Remarks
I hope this has been a clear tutorial on implementing an autoencoder in PyTorch. To further improve the reconstruction capability of our implemented autoencoder, you may try to use convolutional
layers (torch.nn.Conv2d) to build a convolutional neural network-based autoencoder.
The corresponding notebook to this article is available here. In case you have any feedback, you may reach me through Twitter.
1. A.F. Agarap, Implementing an Autoencoder in TensorFlow 2.0 (2019). Towards Data Science.
2. I. Goodfellow, Y. Bengio, & A. Courville, Deep learning (2016). MIT press.
3. A. Paszke, et al. PyTorch: An imperative style, high-performance deep learning library (2019). Advances in Neural Information Processing Systems.
4. PyTorch Documentation. https://pytorch.org/docs/stable/nn.html.
RoughPy - SciPy Proceedings
Streaming data is rarely smooth
Rough path theory is a branch of mathematics arising out of stochastic analysis. One of the main tools of rough path analysis is the signature, which captures the evolution of an unparametrised path
including the order in which events occur. This turns out to be a useful tool in data science applications involving sequential data. RoughPy is our new Python package that aims to change the way we
think about sequential streamed data, by viewing it through the lens of rough paths. In RoughPy, data is wrapped in a stream object which can be composed and queried to obtain signatures that can be
used in analysis. It also provides a platform for further exploration of the connections between rough path theory and data science.
Keywords: sequential data, unparametrised paths, time series, rough paths, signatures, data science, machine learning, signature kernels, Log-ODE method
Sequential data appears everywhere in the modern world: text, finance, health records, radio (and other electromagnetic spectra), sound (and speech), etc. Traditionally, these data are tricky to work
with because of the exponential complexity and different scales of the underlying process. Until recently, with the development of transformers and large language models, it has been difficult to
capture the long-term pattern whilst also capturing the short-term fine detail. Rough path theory gives us tools to work with sequential, ordered data in a mathematically rigorous way, which should
provide a means to overcome some of the inherent complexity of the data. In this paper, we introduce a new package RoughPy for working with sequential data through the lens of rough path theory,
where we can perform rigourous analyses and explore different ways to understand sequential data.
Rough paths arise in the study of controlled differential equations (CDEs), which generalise ordinary differential equations (ODEs) and stochastic differential equations Lyons, 1998; Lyons et al., 2007. These are equations of the form $\mathrm{d}Y_t = f(Y_t, \mathrm{d}X_t)$, subject to an initial condition $Y_0 = y_0$, that model a non-linear system driven by an input path $X$. One simple CDE turns out to be critical to the theory:
$\mathrm{d}S_t = S_t \otimes \mathrm{d}X_t \qquad S_0 = \mathbf{1}.$
The solution of this equation is called the signature of $X$. It is analogous to the exponential function for ODEs, in that the solution of any CDE can be expressed in terms of the signature of the
driving path. When the path $X$ is sufficiently regular, the signature can be computed directly as a sequence of iterated integrals. In other cases, we can still solve CDEs if we are given higher
order data that can be used in place of the iterated integrals. A path equipped with this higher order data is called a rough path.
The signature turns out to be a useful summary of sequential data. It captures the order of events but not necessarily the rate at which these events occur. The signature is robust to irregular
sampling and provides a fixed-size view of the data, regardless of how many observations are used to compute it. This means the signature can be a useful feature map to be used in machine learning
for sequential data. There are numerous examples of using signatures for analysing sequential data outlined in Section 2.
Besides signatures, there are two other rough path-based methods that have found their way into data science in recent years. These are the signature kernel and neural CDEs. Both of these enjoy the
same robustness as the signature, and expand the range of applications of rough path-based methods. We give a short overview of these methods in Section 1.2.
There are several Python packages for computing signatures of sequential data, including esig Lyons & Maxwell, 2017, iisignature Reizenstein & Graham, 2020, and signatory Kidger & Lyons, 2020. These
packages provide functions for computing signatures from raw, structured data presented in an $n\times d$ array, where $d$ is the dimension of the stream and $n$ is the number of samples. This means
the user is responsible for interpreting the data as a path and arranging the computations that need to be done.
RoughPy is a new package for working with sequential data and rough paths. The design philosophy for this package is to shift the emphasis from simply computing signatures on data to instead work
with streams. A stream is a view of some data as if it were a rough path, that can be queried over intervals to obtain a signature. The actual form of the data is abstracted away in favour of stream
objects that closely resemble the mathematics. The aim is to change the way that users think about sequential data and advance the understanding of path-like data analysis.
On top of the streams, RoughPy also provides concrete implementations for elements of the various algebras associated with rough path analysis. These include free tensor algebras, shuffle tensor
algebras, and Lie algebras (see Section 1.1). This allows the user to easily manipulate signatures, and other objects, in a more natural manner. This allows us to quickly develop methods by following
the mathematics.
The paper is organised as follows. In the remainder of this section, we give a brief overview of the mathematics associated with rough path theory, and provide some additional detail for the
signature kernel and neural CDEs. In Section 2 we list several recent applications of signatures and rough path-based methods in data science applications. These applications should serve to motivate
the development of RoughPy. Finally, in Section 3 we give a more detailed overview of the RoughPy library, the types and functions it contains, and give an example of how it can be used.
RoughPy is open source (BSD 3-Clause) and available on GitHub https://github.com/datasig-ac-uk/roughpy.
1.1 Mathematical background
In this section we give a very short introduction to signatures and rough path theory that should be sufficient to inform the discussion in the sequel. For a far more comprehensive and rigorous
treatment, we refer the reader to the recent survey Lyons & McLeod, 2022. For the remainder of the paper, we write $V$ for the vector space $\mathbb{R}^d$, where $d \geq 1$.
A path in $V$ is a continuous function $X:[a, b] \to V$, where $a < b$ are real numbers. For the purposes of this discussion, we shall further impose the condition that all paths are of bounded
variation. The value of a path $X$ at some parameter $t\in[a, b]$ is denoted $X_t$.
The signature of $X$ is an element of the (free) tensor algebra. For $n \geq 0$, the $n$th tensor power of $V$ is defined recursively by $V^{\otimes 0} = \mathbb{R}$, $V^{\otimes 1} = V$, and $V^{\otimes (n+1)} = V \otimes V^{\otimes n}$ for $n \geq 1$. For example, $V^{\otimes 2}$ is the space of $d\times d$ matrices, and $V^{\otimes 3}$ is the space of $d\times d\times d$ tensors. The tensor
algebra over $V$ is the space
$\mathrm{T}((V)) = \{\mathbf{x} = (x_0, x_1, \dots) : x_j \in V^{\otimes j} \,\forall j \geq 0\}$
equipped with the tensor product $\otimes$ as multiplication. The tensor algebra is a Hopf algebra, and comes equipped with an antipode operation $\alpha_V:\mathrm{T}((V)) \to \mathrm{T}((V))$. It
contains a group $\mathrm{G}(V)$ of elements under tensor multiplication and the antipode. The members of $\mathrm{G}(V)$ are called group-like elements. For each $n \geq 0$, we write $\mathrm{T}^n
(V)$ for the truncated tensor algebra of degree $n$, which is the space of all $\mathbf{x} = (x_0, x_1, \dots)$ such that $x_j = 0$ whenever $j > n$. Similarly, we write $\mathrm{T}^{>n}((V))$ for
the subspace of elements $\mathbf{x} = (x_0, x_1,\dots)$ where $x_j = 0$ whenever $j \leq n$, which is an ideal in $\mathrm{T}((V))$ and $\mathrm{T}^n(V) = \mathrm{T}((V)) / \mathrm{T}^{>n}((V))$.
The truncated tensor algebra is an algebra, when given the truncated tensor product, obtained by truncating the full tensor product.
The signature $\mathrm{S}(X)_{s, t}$ of a path $X:[a,b] \to V$ over a subinterval $[s, t)\subseteq [a, b]$ is $\mathrm{S}(X)_{s,t} = (1, \mathrm{S}_1(X)_{s,t}, \dots)\in \mathrm{G}(V)$ where for each
$m\geq 1$, $\mathrm{S}_m(X)_{s, t}$ is given by the iterated (Riemann-Stieltjes) integral
$\mathrm{S}_m(X)_{s, t} = \underset{s < u_1 < u_2 < \dots < u_m < t} {\int \dots \int} \mathrm{d}X_{u_1}\otimes \mathrm{d}X_{u_2}\otimes\dots\otimes \mathrm{d}X_{u_m}.$
The signature respects concatenation of paths, meaning $\mathrm{S}(X)_{s, t} = \mathrm{S}(X)_{s, u} \otimes \mathrm{S}(X)_{u, t}$ for any $s < u < t$. This property is usually called Chen’s relation.
Two paths have the same signature if and only if they differ by a tree-like path Hambly & Lyons, 2010. The signature is translation invariant, and it is invariant under reparametrisation.
The dual of $\mathrm{T}((V))$ is the shuffle algebra $\mathrm{Sh}(V)$. This is the space of linear functionals $\mathrm{T}((V))\to \mathbb{R}$ and consists of sequences $(\lambda_0, \lambda_1, \dots)
$ with $\lambda_k\in (V^{\ast})^{\otimes k}$ and where $\lambda_k = 0$ for all $k$ larger than some $N$. (Here $V^{\ast}$ denotes the dual space of $V$. In our notation $V^{\ast} \cong V$.) The
multiplication on $\mathrm{Sh}(V)$ is the shuffle product, which corresponds to point-wise multiplication of functions on the path. Continuous functions on the path can be approximated (uniformly) by
shuffle tensors acting on the signature (an element of $\mathrm{G}(V)$). This is a consequence of the Stone-Weierstrass theorem. This property is sometimes referred to as universal non-linearity.
There are several Lie algebras associated to $\mathrm{T}((V))$. Define a Lie bracket on $\mathrm{T}((V))$ by the formula $[\mathbf{x}, \mathbf{y}] = \mathbf{x} \otimes \mathbf{y} - \mathbf{y}\otimes
\mathbf{x}$, for $\mathbf{x},\mathbf{y}\in \mathrm{T}((V))$. We define subspaces $L_m$ of $\mathrm{T}((V))$ for each $m\geq 0$ inductively as follows: $L_0 = \{\mathbf{0}\}$, $L_1 = V$, and, for $m \
geq 1$,
$L_{m+1} = \mathrm{span}\{[\mathbf{x}, \mathbf{y}] : \mathbf{x}\in V, \mathbf{y} \in L_m\}.$
The space of formal Lie series $\mathcal{L}(V)$ over $V$ is the subspace of $\mathrm{T}((V))$ containing sequences of the form $(\ell_0, \ell_1, \cdots)$, where $\ell_j\in L_j$ for each $j\geq 0$.
Note that $\mathcal{L}(V)\subseteq \mathrm{T}^{>0}(V)$. For any $\mathbf{x} \in \mathrm{T}(V)$ we define
$\exp(\mathbf{x}) = \sum_{n=0}^\infty \frac{\mathbf{x}^{\otimes n}}{n!} \quad\text{and}\quad \log(\mathbf{1} + \mathbf{x}) = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\mathbf{x}^{\otimes n}.$
For any path $X$, we have $\mathrm{LogSig}(X)_{s, t} := \log(\mathrm{S}(X)_{s, t})\in \mathcal{L}(V)$, and we call this the log-signature of $X$ over $[s, t)$. This is an alternative representation
of the path, but does not enjoy the same universal non-linearity as the signature.
1.2 Rough paths in data science
Now we turn to the applications of rough path theory to data science. Our first task is to form a bridge between sequential data and paths. Consider a finite, ordered sequence $\{(t_1, \mathbf{x}_1),\dots, (t_N,\mathbf{x}_N)\}$ of observations, where $t_j\in \mathbb{R}$, and $\mathbf{x}_j\in V$. (More generally, we might consider $\mathbf{x}_j\in\mathcal{L}(V)$ instead. That is, data that already
contains higher-order information. In our language, it is a genuine rough path.) We can find numerous paths that interpolate these observations; a path $X:[t_1, t_N]\to V$ such that, for each $j$,
$X_{t_j} = \mathbf{x}_j$. The simplest interpolation is to take the path that is linear between adjacent observations.
Once we have a path, we need to be able to compute signatures. For practical purposes, we truncate all signatures (and log-signatures) to a particular degree $M$, which we typically call the depth.
The dimension of the ambient space $d$ is usually called the width. Using linear interpolation, we can compute the iterated integrals explicitly using a free tensor exponential of the difference of
successive terms:
$\mathrm{Sig}^M([t_j, t_{j+1})) = \exp_M(\mathbf{x}_{j+1} - \mathbf{x}_j) := \sum_{k=0}^M \frac{1}{k!}(\mathbf{x}_{j+1} - \mathbf{x}_j)^{\otimes k}.$
Here, and in the remainder of the paper, we shall denote the empirical signature over an interval $I$ by $\mathrm{Sig}(I)$ and the log-signature as $\mathrm{LogSig}(I)$. We can compute the signature
over arbitrary intervals by taking the product of these terms, using the multiplicative property of the signature.
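The computation just described can be sketched directly in NumPy (an illustration of the algebra, not RoughPy's own API; a truncated signature is stored here as a list of dense tensor levels):

```python
import numpy as np

def tensor_product(a, b, depth):
    """Truncated tensor product: level k of a ⊗ b is the sum over i + j = k of a_i ⊗ b_j."""
    c = [np.zeros_like(level) for level in a]
    for i in range(depth + 1):
        for j in range(depth + 1 - i):
            c[i + j] = c[i + j] + np.multiply.outer(a[i], b[j])
    return c

def tensor_exp(x, depth):
    """Truncated tensor exponential exp_M(x) of a level-one increment x."""
    levels = [np.array(1.0)]
    power = np.array(1.0)
    factorial = 1.0
    for k in range(1, depth + 1):
        power = np.multiply.outer(power, x)  # x^{⊗k}, built incrementally
        factorial *= k
        levels.append(power / factorial)
    return levels

def signature(points, depth):
    """Signature of the piecewise-linear path through `points`, via Chen's relation."""
    sig = tensor_exp(np.zeros(points.shape[1]), depth)  # the identity (1, 0, 0, ...)
    for increment in np.diff(points, axis=0):
        sig = tensor_product(sig, tensor_exp(increment, depth), depth)
    return sig
```

Chen's relation can be checked numerically: the signature over the whole path equals the truncated tensor product of the signatures over its two halves, and level one is the total displacement.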
1.2.1 The signature transform
In most of the early applications of rough paths in data science, the (truncated) signature was used as a feature map Kidger et al., 2019. This provides a summary of the path that is independent of the
parameterisation and the number of observations. Unfortunately, the signature grows geometrically with truncation depth. If $d > 1$, then the dimension of $\mathrm{T}^M(V)$ is
$\sum_{m=0}^M d^m = \frac{d^{M+1} - 1}{d - 1}$
The size of the signature is a reflection of the complexity of the data, where data with a higher complexity generally needs a higher truncation level and thus a larger signature. It is worth noting
that this still represents a significant compression of stream information in many cases.
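The dimension formula above can be wrapped in a small helper to see the geometric growth (the function name is a choice made here; the closed form applies only when $d > 1$):

```python
def truncated_sig_size(d, M):
    """Dimension of T^M(V): number of coefficients in a depth-M signature of a width-d path."""
    if d == 1:
        return M + 1  # one coefficient per level when d = 1
    return (d ** (M + 1) - 1) // (d - 1)
```

For example, a width-4 stream truncated at depth 3 has 1 + 4 + 16 + 64 = 85 coefficients.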
For some applications, it might be possible to replace the signature with the log-signature. The log-signature is smaller than the signature, but we lose the universal non-linearity property of the
signature. Alternatively, we might turn to other techniques that don’t require a full calculation of the signature (such as the signature kernel below). As the connection between rough paths and data
science becomes more mathematically mature, we will likely find new ways to use the signature without requiring its full size.
1.2.2 Signature kernels
Kernel methods are useful tools for learning with sequential data. Mathematically, a kernel on a set $W$ is a positive-definite function $k:W\times W\to \mathbb{R}$. Kernels are often quite easy to
evaluate because of the kernel trick, which involves embedding the data in an inner product space, with a feature map, in which the kernel can be evaluated by simply taking an inner product.
Informally, kernels measure the similarity between two points. They are used in a variety of machine learning tasks such as classification.
The signature kernel is a kernel induced on the space of paths by combining the signature with an inner product defined on the tensor algebra Kiraly & Oberhauser, 2019. The theory surrounding the
signature kernel has been expanded several times since its introduction Fermanian et al., 2021; Cass et al., 2024. Typically, the inner product on $\mathrm{T}((V))$ will itself be derived from an
inner product on $V$, extended to the tensor algebra.
Signatures are infinite objects, so we can’t simply evaluate inner products on the tensor algebra. Fortunately, we can approximate the signature kernel by taking inner products of truncated
signatures. Even better, it turns out that, in certain cases, the signature kernel can be realised as the solution to a partial differential equation (PDE) of Goursat type. This means the full
signature kernel can be computed from raw data without needing to compute full signatures Salvi et al., 2021.
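To illustrate the PDE viewpoint, the sketch below implements a first-order finite-difference scheme for the Goursat equation $\partial^2 k/\partial s\partial t = \langle \dot x_s, \dot y_t\rangle\, k$ satisfied by the signature kernel of two piecewise-linear paths. This is our own minimal illustration of the idea in Salvi et al., 2021, not a reference implementation, and the function name is ours:

```python
import numpy as np

def sig_kernel(x, y):
    """First-order Goursat scheme for the signature kernel of two
    piecewise-linear paths, given as (n_points, d) arrays of samples."""
    dx, dy = np.diff(x, axis=0), np.diff(y, axis=0)  # path increments
    m, n = len(dx), len(dy)
    k = np.ones((m + 1, n + 1))                      # k(0, .) = k(., 0) = 1
    for i in range(m):
        for j in range(n):
            # discretise k_st = <x', y'> k with a left-point rule
            k[i + 1, j + 1] = (k[i + 1, j] + k[i, j + 1]
                               + (dx[i] @ dy[j] - 1.0) * k[i, j])
    return k[-1, -1]

# For the 1-D paths x_t = y_t = t on [0, 1], the exact kernel is
# sum_m 1/(m!)^2, roughly 2.2796; the scheme approximates it.
t = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
print(sig_kernel(t, t))
```

The higher-order solvers mentioned below replace this first-order rule with more accurate update formulas.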
In fact, a recent preprint has shown that there are higher-order solvers for signature kernels, obtained by rewriting the kernel as the solution of a system of PDEs of Goursat type Lemercier & Lyons, 2024.
A critical part of their method involves the adjoint of both left and right free tensor multiplication, which are not available in any current package for computing signatures. These functions are
provided by RoughPy.
1.2.3Neural controlled differential equations¶
Neural CDEs are a method for modelling irregular time series. We consider CDEs of the form
$\mathrm{d}Y_t = f_\theta(Y_t)\,\mathrm{d}X_t$
where $f_\theta$ is a neural network. We can treat the path $Y$ as a “hidden state” that we can tune using data to understand the relationship between the driving path $X_t$ and some response. Neural
CDEs can be regarded as a continuous-time analogue of a recurrent neural network Kidger et al., 2020.
Neural CDEs initially showed some promising results on several benchmarks but now lag behind current state-of-the-art approaches to time series modelling. The latest iteration of neural CDEs are the
recently introduced Log-neural controlled differential equations Walker et al., 2024, which make use of the Log-ODE method for solving rough differential equations in order to boost the performance
of neural CDEs.
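The dynamics $\mathrm{d}Y_t = f_\theta(Y_t)\,\mathrm{d}X_t$ can be approximated with a simple Euler scheme, $Y_{n+1} = Y_n + f_\theta(Y_n)(X_{t_{n+1}} - X_{t_n})$. The sketch below uses a fixed, hand-written vector field in place of a trained network $f_\theta$; it illustrates the CDE mechanics only, not a full neural CDE:

```python
import numpy as np

def euler_cde(f, y0, x):
    """Euler scheme for dY = f(Y) dX, with driver samples x of shape (n, d).

    f(y) must return a (dim Y, d) matrix, the vector field evaluated at y."""
    y = np.asarray(y0, dtype=float)
    for dx in np.diff(x, axis=0):   # increments of the driving path
        y = y + f(y) @ dx
    return y

# Scalar sanity check: dY = Y dX driven by X_t = t gives Y_1 = e * Y_0.
x = np.linspace(0.0, 1.0, 1001).reshape(-1, 1)
y1 = euler_cde(lambda y: y.reshape(1, 1), np.array([1.0]), x)
print(y1)   # close to e ~ 2.71828
```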
2Current applications of rough paths¶
In this section we enumerate several applications where rough paths have been used to develop or improve methods. The list presented here is certainly not exhaustive. In addition to the literature
cited below, there are numerous additional references and worked examples, in the form of Jupyter notebooks, available on the DataSig website (https://datasig.ac.uk/examples).
2.1Detecting interference in radio astronomy data¶
Radio frequency interference (RFI) is a substantial problem in the field of radio astronomy. Even small amounts of RFI can obscure the faint signals generated by distant stellar objects and events.
The problem of identifying RFI in a signal falls into a class of semi-supervised learning tasks called novelty (or anomaly) detection. Rough path methods have been used to develop a novelty
detection framework for RFI in radio astronomy data from several radio telescopes Arrubarrena et al., 2024. Their results show that the framework is effective at
detecting even faint RFI within the test data. This work is based on a general novelty detection framework Shao et al., 2020.
Signature kernels have also been used for the similar problem of detecting malware by inspecting the streaming tree of processes on a computer system Cochrane et al., 2021. Their method uses a
support vector machine classifier to identify processes that are malicious compared to “normal” behaviour learned via training on a corpus of normality.
2.2Tracking mood via natural language processing¶
One application of rough paths in natural language processing has been in the domain of mental health Tseriotou et al., 2023Tseriotou et al., 2024. In this work, the authors present a model for
identifying changes in a person’s mood based on their online textual content. Many mental health conditions have symptoms that manifest in a person’s (textual) expression, so this could be a powerful
tool for mental health professionals to identify changes in patients and intervene before a condition develops further. Their model achieves state-of-the-art performance versus existing models on two datasets.
2.3Predicting battery cell degradation¶
Another recent application of signatures is to predict the degradation of lithium-ion cells Ibraheem et al., 2023. They use signature features to train a model that can accurately predict the end of
life of a cell using relatively low-frequency sampling compared to existing models. They also observed that the performance at higher frequency was comparable to other models.
2.4Prediction of sepsis in intensive care data¶
One of the first effective demonstrations of the utility of signature and rough path based methods in healthcare was the 2019 PhysioNet challenge Morrill et al., 2020, in which teams
were invited to develop models to predict sepsis in patients from intensive care unit data. A team utilising signatures to enhance predictive power placed first in the official
phase of the challenge. Since then, signatures and other rough path based approaches have been used in several other clinical contexts Cohen et al., 2024Falcioni et al., 2023Tseriotou et al., 2024.
Clinical data is often irregularly sampled and often exhibits a high degree of missingness, but it can also be very high-frequency and dense. Rough path based methods can handle these data in an
elegant way, and retain the structure of long and short term dependencies within the data.
2.5Human action recognition¶
The task of identifying a specific action performed by a person from a short video clip is very challenging. Signatures derived from landmark data extracted from the video have been used to train
classification models that achieved state-of-the-art performance compared with contemporary models Yang et al., 2022Cheng et al., 2024Liao et al., 2021. (See also the preprints Ibrahim & Lyons,
2023Jiang et al., 2024.) Also in the domain of computer vision, signatures have been used to produce lightweight models for image classification Ibrahim & Lyons, 2022 and in handwriting recognition
tasks Xie et al., 2018.
3RoughPy¶
RoughPy is a new library that aims to support the development of connections between rough path theory and data science. It represents a shift in philosophy from simple computations of signatures for
sequential data, to a representation of these data as a rough path. The design objectives for RoughPy are as follows:
1. provide a class that presents a rough path view of some source of data, exposing methods for querying the data over intervals to get a signature or log-signature;
2. provide classes and functions that allow the users to interact with the signatures and other algebraic objects in a natural, mathematical manner;
3. all operations should be differentiable and objects should be interoperable with objects from machine learning, such as TensorFlow (JAX) and PyTorch.
The first two objectives are simple design and implementation problems. The final objective presents the most difficulty, especially interoperability between RoughPy and common machine learning
libraries. There are array interchange formats for NumPy-like arrays, such as the Python Array API standard Meurer et al., 2023 and the DLPack protocol DLPack, 2023. These provide part of the
picture, but in order for them to be fully supported, RoughPy must support a variety of compute backends such as CUDA (NVidia), ROCm/HIP (AMD), and Metal (Apple).
RoughPy is a substantial library with numerous components, mostly written in C++ with a Python interface defined using Pybind11 Jakob et al., 2017. The original design of the library closely followed
the C++ template libraries libRDE and libalgebra Buckley et al., 2006, although it has seen many iterations since.
In the remainder of this section, we discuss some of the core components of RoughPy, give an example of using RoughPy, and discuss the future of RoughPy.
3.1Free tensors, shuffle tensors, and Lie objects¶
In order to properly support rough path based methods and allow users to write code based on mathematical concepts, we provide realisations of several algebra types. The algebras provided in RoughPy
are FreeTensor, ShuffleTensor, and Lie, which define elements of a particular free tensor algebra, shuffle tensor algebra, and Lie algebra respectively. Each of these algebras is initialized with a
width, depth, and scalar coefficient type, encapsulated in a Context object.
In addition to the algebra classes, RoughPy provides a number of supporting functions, including antipodes and half-shuffle products for FreeTensor/ShuffleTensor objects, and adjoint operators for
left free tensor multiplication. These are operations that are frequently used in the theory of rough paths, and will likely be necessary in developing new applications later (as in the signature kernel methods mentioned above).
RoughPy algebras are designed around a flexible scalar ring system that allows users to perform calculations with different accuracy, or derive expressions by using polynomial coefficients. For most
applications, single or double precision floating point numbers will provide a good balance between performance and accuracy. (Double precision floats are the default.) When more precision is
required, rational coefficients can be used instead. These are backed by GMP rationals for fast, arbitrary precision rational arithmetic Granlund & the GMP development team, 2012. Polynomial
coefficients can be used to derive formulae by performing calculations. This is a powerful technique for understanding the terms that appear in the result, particularly whilst testing and debugging.
RoughPy is very careful in the way it handles intervals. All intervals in RoughPy are half-open, meaning that they include one end point but not the other; they are either clopen $[a, b) := \{t: a\
leq t < b\}$ or opencl $(a, b] := \{t : a < t \leq b\}$. Besides the type (clopen or opencl), all intervals must provide methods for retrieving the infimum ($a$ in the above notation) and the
supremum ($b$ above) of the interval as double precision floats. This is enforced by means of an abstract base class Interval. The main concrete interval types are RealInterval, an interval with
arbitrary real endpoints, and DyadicInterval, as described below. For brevity, we shall only consider clopen intervals.
A dyadic interval is an interval $D_k^n := [k/2^n, (k+1)/2^n)$, where $k$, $n$ are integers. The number $n$ is often described as the resolution of the interval. The family of dyadic intervals of a
fixed resolution $n$ partitions the real line, so that every real number $t$ belongs to a unique dyadic interval $D_k^n$. Moreover, the family of all dyadic intervals has the property that two dyadic
intervals are either disjoint or one contains the other (including the possibility that they are equal).
In many cases, RoughPy will granularise an interval into dyadic intervals. The dyadic granularisation of $[a, b)$ with resolution $n$ is $[k_1/2^n, k_2/2^n)$ where $k_1 = \max\{k: k/2^n \leq a\}$
and $k_2 = \max\{k: k/2^n \leq b\}$. In effect, the dyadic granularisation is the result of “rounding” each end point to the included end of the unique dyadic interval that contains it.
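The rounding rule can be transcribed directly (this is a sketch of the definition, not RoughPy's internal implementation):

```python
from math import floor

def granularise(a, b, n):
    """Dyadic granularisation of [a, b) at resolution n."""
    k1 = floor(a * 2 ** n)   # max k with k/2^n <= a
    k2 = floor(b * 2 ** n)   # max k with k/2^n <= b
    return k1 / 2 ** n, k2 / 2 ** n

# At resolution 2, end points are rounded onto the grid of quarters.
print(granularise(0.3, 0.8, 2))
```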
Streams are central to RoughPy. A RoughPy Stream is a rough path view of some underlying data. It provides two key methods to query the object over intervals to retrieve either a signature or
log-signature. Importantly, once constructed, the underlying data is inaccessible except by querying via these methods. Streams are designed to be composed in various ways, such as by concatenation,
in order to build up more complex streams. A Stream is actually a (type-erasing) wrapper around a more minimal StreamInterface abstract class.
We construct streams using a factory function associated with each different StreamInterface, which might perform some compression of the underlying data. For example, a basic StreamInterface is the
LieIncrementStream, which can be constructed using the associated from_increments factory function (a static method of the class), which accepts an $n \times d$ array of increment data. These data
will typically be the differences between successive values of the data (but could also include higher-order Lie terms). This is similar to the way that libraries such as esig, iisignature, and
signatory consume data.
RoughPy streams cache the result of log-signature queries over dyadic intervals so they can be reused in later calculations. To compute the log-signature over any interval $I$, we granularise at a
fixed stream resolution $n$ to obtain the interval $\tilde I = [k_1/2^n, k_2/2^n)$, and then compute
$\mathrm{LogSig}(\tilde{I}) = \log\biggl(\prod_{k=k_1}^{k_2-1} \exp(\mathrm{LogSig}(D_k^n))\biggr).$
The $\mathrm{LogSig}(D_k^n)$ terms on the right-hand-side are either retrieved from the cache, or computed from the underlying source. This is essentially the Campbell-Baker-Hausdorff formula applied
to the log-signatures at the finest level. In practice, we can actually reduce the number of terms in the product, by merging complementary dyadic intervals that appear in the granularisation. We
further optimise by using a fused multiply-exponential ($A\exp(B)$) operation.
Signatures are always computed by first computing the log-signature and then exponentiating. Directly computing the signature as a product of exponentials of (cached) log-signatures might accumulate
enough numerical errors to drift slightly from a group-like tensor. That is, the result might not actually be a true signature. Taking the logarithm and then exponentiating back to obtain the
signature has the effect of correcting this numerical drift from a true signature.
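To make the relationship between increments, exponentials, and signatures concrete, the sketch below computes a depth-2 signature of a piecewise-linear path as a product of truncated tensor exponentials of its increments, via Chen's identity. It is plain NumPy, independent of RoughPy, and the names are ours:

```python
import numpy as np

def sig_depth2(increments):
    """Depth-2 signature of a piecewise-linear path, built as
    S = exp(dx_1) * exp(dx_2) * ... in the truncated tensor algebra."""
    d = increments.shape[1]
    lvl1, lvl2 = np.zeros(d), np.zeros((d, d))
    for dx in increments:
        # truncated product: (1, a1, a2) * (1, b1, b2)
        #   = (1, a1 + b1, a2 + b2 + a1 (x) b1), with b = exp(dx)
        lvl2 = lvl2 + np.outer(dx, dx) / 2.0 + np.outer(lvl1, dx)
        lvl1 = lvl1 + dx
    return lvl1, lvl2

# Two unit steps, first in direction 0 then direction 1: the (0, 1)
# entry of level two records that step 0 happened before step 1.
lvl1, lvl2 = sig_depth2(np.array([[1.0, 0.0], [0.0, 1.0]]))
print(lvl1, lvl2, sep="\n")
```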
Aside from the basic LieIncrementStream, there are several other implementations of the StreamInterface currently available in RoughPy. The BrownianStream approximates Brownian motion by generating
normally distributed increments over dyadic intervals of arbitrary resolution on demand, forming a reasonable approximation of true Brownian motion. The ExternalDataStream is an interface for loading
data from various external sources, such as from a database or specialised data format. Currently, only sound files are supported but we plan to extend support for other sources as the need arises.
This will certainly include “online” data sources such as computer peripheral devices (e.g. microphones).
The other main StreamInterface implementation is the PiecewiseAbelianStream, which is an important construction from the theory of CDEs. A piecewise Abelian path, or log-linear path, is an example of a smooth rough
path, which generalises piecewise linear approximations of an arbitrary stream. Formally, an Abelian path $Y$ is a pair $([a, b), \mathbf{y})$ where $a < b$ and $\mathbf{y}\in\mathcal{L}(V)$. The
log-signature over an arbitrary interval $[u, v) \subseteq [a, b)$ is given by
$\mathrm{LogSig}(Y)_{u, v} = \frac{v - u}{b - a}\mathbf{y}.$
A piecewise Abelian path is the concatenation of finitely many Abelian paths with adjacent intervals. For any rough path $X$ and partition $\{a = t_0 < t_1 < \dots < t_N = b\}$ there is a piecewise
Abelian approximation for this path given by
$\{([t_{j-1}, t_j), \mathrm{LogSig}(X)_{t_{j-1}, t_j}): j=1, \dots, N\}.$
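The scaling rule for an Abelian path's log-signature is a one-liner; in the sketch below $\mathbf{y}$ is treated as a flat array of log-signature coefficients (the names are ours):

```python
import numpy as np

def abelian_logsig(interval, y, u, v):
    """Log-signature of the Abelian path ([a, b), y) over [u, v) in [a, b)."""
    a, b = interval
    # LogSig over [u, v) is the length fraction (v - u)/(b - a) times y
    return (v - u) / (b - a) * np.asarray(y)

# Half of the interval carries half of the log-signature:
print(abelian_logsig((0.0, 2.0), [2.0, 4.0], 0.0, 1.0))
```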
This construction turns out to be vital for computing signature kernels Meurer et al., 2023 and for solving CDEs Lyons et al., 2007Walker et al., 2024. In particular, this construction can be used to
compress data at one truncation degree, which can then be used in computations at a higher degree.
In this section we show a very simple example of how to use RoughPy to construct a stream and compute a signature. This example is similar to the first few steps of the tutorial found in the RoughPy
documentation, to which we refer the reader for much more detail. RoughPy can be installed using pip; prebuilt wheels are available for Windows, Linux, and macOS.
We will construct a stream in $\mathbb{R}^{26}$ by taking each letter in a word, “scipy” in this example, as the increments of a path:
import numpy as np
text = "scipy"
increments = np.zeros((5, 26), dtype="int8")
for i, c in enumerate(text):
    increments[i, ord(c) - 97] = 1
Now we import RoughPy and construct a Stream using the factory mentioned above. One other critical ingredient is the algebra Context, which is used to set up a consistent set of algebra objects with
the desired width (26), truncation level (2), and coefficient type (Rational).
import roughpy as rp
ctx = rp.get_context(width=26, depth=2,
                     coeffs=rp.Rational)
stream = rp.LieIncrementStream.from_increments(
    increments, ctx=ctx)
Now we can compute the signature of the stream over the whole domain of the stream $[0, 4]$ by omitting the interval argument:
sig = stream.signature()
# { 1() 1(3) 1(9) 1(16) 1(19) 1(25) 1/2(3,3)
# 1(3,9) 1(3,16) 1(3,25) 1/2(9,9) 1(9,16)
# 1(9,25) 1/2(16,16) 1(16,25) 1(19,3) 1(19,9)
# 1(19,16) 1/2(19,19) 1(19,25) 1/2(25,25) }
The first term of the signature is always 1, and the empty parentheses indicate the empty tensor word. The next five terms correspond to the counts of each unique letter that appears; the number in
parentheses indicates the letter (with a being 1). The final terms indicate the order in which each pair of letters appears in the word. For instance, the term 1(3,9) indicates that a c appears before
an i.
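The level-two interpretation can be checked directly from the word: for each ordered pair of distinct letters, counting how often the first occurs strictly before the second reproduces the corresponding off-diagonal level-two terms. A sketch:

```python
from itertools import combinations

def pair_counts(word):
    """Count, for each ordered pair of letters, occurrences of the first
    letter strictly before the second -- the off-diagonal level-2 terms."""
    counts = {}
    for x, y in combinations(word, 2):   # all index pairs i < j
        counts[(x, y)] = counts.get((x, y), 0) + 1
    return counts

counts = pair_counts("scipy")
print(counts[("c", "i")])   # matches the 1(3,9) term above
```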
This is only the beginning of the story. From here, we can use the signatures to compute the similarity between streams, via the signature kernel for instance, or use them as features in a variety of
machine learning problems. More detailed examples of how to use signatures in data science are given on the DataSig website https://datasig.ac.uk/examples.
3.5The future of RoughPy¶
RoughPy is continuously evolving. At the time of writing, the current version uses libalgebra and libalgebra-lite (libalgebra with fewer templates) for computations. Unfortunately, this makes it difficult
to achieve the differentiability and computation device support that we want. We are currently changing the way we implement vectors and algebras to provide the support for on-device computation that
we want. Making the operations differentiable is crucial for machine learning, and will be the biggest challenge.
Long term, we need to expand support for signature kernels and CDEs. As applications of these tools grow in data science, we will need to devise new methods for computing kernels, or solving CDEs. We
will also build a framework for constructing and working with linear maps, and homomorphisms. For example, one very useful linear map is the extension of the $\log$ function to the whole tensor algebra.
The use of rough path theory in data science is rapidly expanding and provides a different way to view sequential data. Signatures, and other methods arising from rough path theory, are already used
in a wide variety of applications, with great effect. The next steps in overcoming the difficulty in modeling sequential data will require a change of perspective. Viewing these data through the lens
of rough path theory might provide this change.
RoughPy is a new Python library for working with streamed data using rough path methods. It is designed to abstract away the form and source of data so that analysis can be performed by querying
path-like objects. This approach is much closer to the mathematics. It also allows users to interact with the various algebras associated with rough paths (free tensor algebra, shuffle tensor
algebra, Lie algebra) in a natural way. RoughPy is under active development, and a long list of improvements and extensions are planned.
This work was supported in part by EPSRC (NSFC) under Grant EP/S026347/1, in part by The Alan Turing Institute under the EPSRC grant EP/N510129/1, the Data Centric Engineering Programme (under the
Lloyd’s Register Foundation grant G0095), and the Defence and Security Programme (funded by the UK Government). Terry Lyons was additionally supported by the Hong Kong Innovation and Technology
Commission (InnoHK project CIMDA). For the purpose of Open Access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
1. Lyons, T. J. (1998). Differential equations driven by rough signals. Revista Matemática Iberoamericana, 14(2), 215–310. http://eudml.org/doc/39555
2. Lyons, T. J., Caruana, M., & Lévy, T. (2007). Differential Equations Driven by Rough Paths: École d’Été de Probabilités de Saint-Flour XXXIV - 2004. In Lecture Notes in Mathematics. Springer
Berlin Heidelberg. 10.1007/978-3-540-71285-5
3. Lyons, T., & Maxwell, D. (2017). esig.
4. Reizenstein, J. F., & Graham, B. (2020). Algorithm 1004: The Iisignature Library: Efficient Calculation of Iterated-Integral Signatures and Log Signatures. ACM Transactions on Mathematical
Software, 46(1), 1–21. 10.1145/3371237
5. Kidger, P., & Lyons, T. (2020). Signatory: differentiable computations of the signature and logsignature transforms, on both CPU and GPU. arXiv. 10.48550/ARXIV.2001.00706
ALGSEQ_1: Construction of Finite Sequences over Ring and Left-, Right-, and Bi-Modules over a Ring
Lm1: for R being non empty ZeroStr
for p being AlgSequence of R ex m being Nat st m is_at_least_length_of p
theorem Th2:
for k being Nat
for R being non empty ZeroStr
for p being AlgSequence of R st ( for i being Nat st i < k holds p . i <> 0. R ) holds
len p >= k
theorem Th4:
for R being non empty ZeroStr
for p, q being AlgSequence of R st len p = len q & ( for k being Nat st k < len p holds p . k = q . k ) holds
p = q
Lm2: for R being non empty ZeroStr
for p being AlgSequence of R st p = <%(0. R)%> holds
len p = 0
Dynamical Systems is the study of iteration of maps. The UNT Dynamical Systems Seminar covers a broad range of topics in dynamics that are of interest to current faculty and students, including, but
not limited to, complex dynamics, symbolic dynamics, ergodic theory and fractal geometry, as well as connections of dynamics with classical analysis, probability, and number theory. Talks are given
by faculty, graduate students and, from time to time, outside speakers.
Random Interval Maps with Holes and Local Dimension Multifractals
In this talk we will introduce a random class of interval maps with holes whose thermodynamic formalism and statistical properties have been recently described by Atnip, Froyland, González-Tokman,
and Vaienti over the past few years (2021-2023). For this class of random dynamical systems, we use this thermodynamic formalism along with an adaptation of a method of Mayer, Urbański, and Skorulski
(2011) to formulate conditions for which a random multifractal analysis of these systems with respect to local dimension is viable. In the process of developing a multifractal theory for this class
of systems we also improve upon a Bowen's Formula type result by addressing the topological requirements of such a system. This work is currently in preparation and joint with Jason Atnip at the
University of Queensland in Australia.
Date Speaker Title
2024-10-25 Jackson Morrow, UNT How can dynamical systems, equilibrium measures, and equidistribution results be applied to questions in arithmetic geometry?
2024-10-04 Daniel Prokaj, UNT Self-similar sets, dimension drop and Okamoto's function
2024-9-27 Bill Mance, Adam Mickiewicz University Independence of notions from dynamics: a descriptive set theoretic approach
2024-8-30 Pieter Allaart, UNT What does Okamoto's function have to do with beta-expansions?
2024-5-10 Anna Zdunik, University of Warsaw Hausdorff and packing measure for limit sets of conformal repellers and iterated function systems
2024-4-19 Nathan Dalaklis, UNT Extremal F-exponents of finitely irreducible CGDMS's
2023-11-3 Bunyamin Sari, UNT Coarse Embeddings into Banach spaces
2023-10-27 Johannes Jaerisch, Nagoya University Multifractal analysis of growth rates for the geodesic flow on hyperbolic surfaces
2023-10-20 Jiajie Zheng, UNT Twisted recurrence in measurable dynamical systems
2023-10-6 Mariusz Urbanski, UNT Ruelle's Operator and Conformal Measures with Applications in Fractal Geometry and Number Theory
All Tasks - Asymptote
Uncle Scrooge monies
Uncle Scrooge has a lot of money and he usually swims in it. If there are 10000 coins of 1 euro, 15000 coins of 2 euros, 8000 coins of 50 cents, and 2000 banknotes of 5 €, how many euros
does he have?
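Worked out, the total is a plain weighted sum of counts times values:

```python
# coin/banknote counts paired with their euro values
holdings = [(10_000, 1.00), (15_000, 2.00), (8_000, 0.50), (2_000, 5.00)]
total = sum(count * value for count, value in holdings)
print(total)   # 54000.0 euros
```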
# Operations with natural numbers
Linear equation transformation
Find the value of x solving the equation.
Properties of linear functions
Is the statement true or false? The graph for the equation f(x)=4.5x+2 is a straight line through the origin.
Properties of linear functions
Is the statement true or false? The following pairs of values are all pairs of values of the function equation f(x)=2x-5.
Area T-A 1
The value of the area of the figure is equal to:
Area R-A
Applying the definite integral, determine the value of $k$, given that the area $A$ is $18m^2$.
Can someone provide guidance on quantum algorithms for solving problems in quantum robotics and autonomous systems for my computer science assignment? Computer Science Assignment and Homework Help By CS Experts
Can someone provide guidance on quantum algorithms for solving problems in quantum robotics and autonomous systems for my computer science assignment? Does this problem require a quantum computer to
perform all the mathematical forms of computer science (e.g. mathematics, computer programming, DNA, mathematics)? Many of the questions that are discussed for this assignment are appropriate for
solving equations of quantum mechanics. Are there any questions that would require a quantum computer to perform mathematical forms like numbers, geometric morphisms, etc.? If not, what quantum
computers do you think may be helpful for this assignment? I assume that this is a problem related to quantum physics, a 2-dimensional space, and that there is some general application of quantum
mechanics to this assignment too. But any other applications that I can imagine would involve quantum computers would not have any information about a specific mathematical object. This is a problem,
it is not a simple problem, it is exactly the same as in physical engineering, basically. While using a different form of computer to solve a system of equations, it is impossible for a
quantum computer to do physical or mathematical operations when using a different form of computer. In this respect, quantum computers need to get some information about the physical objects they are
running on, or the mathematical structures they are running on that are exactly the type of physical objects that are useful. Anyhow, such a problem has the associated “communication” field that can
be checked, but for some applications it is almost impossible to do this, due to the fact that the total information cannot lead to anything. Q: Are there any problems that involve a quantum system
that are capable of running computationally the usual way? A: You’re right to suggest that, of course, they have different form of computer use so that their interaction should become more
complicated. But you’re right that the quantum computer needs to get some information about the objects they are running on. Q: Do you think the answer for this assignment is “no,” as mentioned
earlier? Can someone provide guidance on quantum algorithms for solving problems in quantum robotics and autonomous systems for my computer science assignment? 1. Yes and no. 2. So this is the title
of your project. The first paragraph describes the algorithm: Where and When to Look is provided to the topic of quantum computers by the author: Quantising (Quantum)
Deciding in the future PXF http://bitstream.mit.edu/upload.php?link=pixf However, I should start by stating that this is a PXF question, as my supervisor in the research group noted.
It would not help a big city if they answered the OP without having to send me an answer. As you will see I left that part and I can also see that you too will get away from the questions above, if
you want to clarify the question at all. So my question is: Do quantum algorithms for solving the above NP-complete problems exist? Which is it correct?- the standard quantum algorithm?- If so, good
answer: as to the OP with the right context: This post is just my tip i.e i should see a pixf question that does not explain quantum algorithms for solving NP-complete problems or the standard
quantum algorithm for solving NP-complete tasks. The code is simple, but nevertheless not completely general. [thanks] 1. Why would it not be sufficient that a quantum algorithm be in practice? 2.
Quantum algorithms based on Feynman rules must use the Feynman rules that are described in these wikis: http://wikipedia.org/wiki/Feynman 3. In my opinion the question in line 3 is important: it is
about whether a quantum algorithm is in practice or not. 4. When there is no relevant noise, how “difficult” is the algorithm if you can find any algorithm that works on that noise? Do you think the
problem would keep finding any “well-behaved” quantum algorithms rather than the less-obvious Google algorithm for quantum computing? The page is not much good for this concern, so please don’t
question my point: Q&A: How can you describe this question? 1. What would the questions about the “difficult” or “difficult” one I seem to find in your answer be? 2. What are the methods to know and
suggest a better approach? 3. What if, or at least describe this approach from the source? 4. The PXF answer to the question above also means to reduce the burden upon you prior to answering again
the OP and allowing your supervisor to provide more guidance that is possible I have been meaning to ask you any questions at this house. I answer them all in one answer and will respond in another
post. I have been trying to get my mind a bit clearer with these blog posts. In the past, I have reached these. Can someone provide guidance on quantum algorithms for solving problems in
quantum robotics and autonomous systems for my computer science assignment? Friday, August 8, 2016 The challenge is this: What are the potential benefits and risks to solving a problem using quantum
algorithms? A problem can be represented as a problem of the form of some systems of a certain type. Sometimes the term is used to refer to problems within a structure, etc.
Here I use word-by-word examples to help you understand what is possible and under what circumstances quantum mechanics is so useful. Here is one way to represent the process in question: The
computational process is accomplished by computer science, where the task is to create some piece of input at a given time and compute some output, obtained by solving a problem. In the example
above, only about 100 of the 10 options that can be used for solving the puzzle are available for the computer scientist. The solution set for the puzzles is the set of all possible solutions and is
decided on by the algorithms that the researchers are using. One may perform an approximation algorithm to solve all possible solutions, and the result should be equal after compressing the small
sets of the input. Because the algorithm is quite heavy, the next step is to perform simulations to evaluate the memory needed to speed up the process. This is done by performing numerical
simulations to locate the time necessary for solving a given problem on which the algorithms are being used. In such simulations that can be performed, the solution space may not be much large
compared to the actual dimension of the problem and computation power may be needed. We will be using an object that has the form of a rectangle to figure out the value of the potential energy, so we
are going to make an approximation that does not need to be performed. We are going to make several simulations with the algorithm’s potential energy. A comparison of the result becomes as shown
below. This is like a simulation of a robot. Just because the input is quite large, it should be performed in fairly short
Using the Slope Formula to Find the Slope between Two Points
Learning Outcomes
• Use the slope formula to find the slope of a line between two points
• Find the slope of horizontal and vertical lines
Sometimes we need to find the slope of a line between two points and we might not have a graph to count out the rise and the run. We could plot the points on grid paper, then count out the rise and
the run, but there is a way to find the slope without graphing.
Before we get to it, we need to introduce some new algebraic notation. We have seen that an ordered pair [latex]\left(x,y\right)[/latex] gives the coordinates of a point. But when we work with
slopes, we use two points. How can the same symbol [latex]\left(x,y\right)[/latex] be used to represent two different points?
Mathematicians use subscripts to distinguish between the points. A subscript is a small number written to the right of, and a little lower than, a variable.
We will use [latex]\left({x}_{1},{y}_{1}\right)[/latex] to identify the first point and [latex]\left({x}_{2},{y}_{2}\right)[/latex] to identify the second point. (If we had more than two points, we
could use [latex]\left({x}_{3},{y}_{3}\right),\left({x}_{4},{y}_{4}\right)[/latex], and so on.)
The Slope Formula
You’ve seen that you can find the slope of a line on a graph by measuring the rise and the run. You can also find the slope of a straight line without its graph if you know the coordinates of any two
points on that line. Every point has a set of coordinates: an [latex]x[/latex]-value and a [latex]y[/latex]-value, written as an ordered pair [latex](x, y)[/latex]. The [latex]x[/latex] value tells
you where a point is horizontally. The [latex]y[/latex] value tells you where the point is vertically.
Consider two points on a line—Point 1 and Point 2. Point 1 has coordinates [latex]\left(x_{1},y_{1}\right)[/latex] and Point 2 has coordinates [latex]\left(x_{2},y_{2}\right)[/latex].
The rise is the vertical distance between the two points, which is the difference between their [latex]y[/latex]-coordinates. That makes the rise [latex]\left(y_{2}-y_{1}\right)[/latex]. The run
between these two points is the difference in the [latex]x[/latex]-coordinates, or [latex]\left(x_{2}-x_{1}\right)[/latex].
So, [latex] \displaystyle \text{Slope}=\frac{\text{rise}}{\text{run}}[/latex] or [latex] \displaystyle m=\frac{{{y}_{2}}-{{y}_{1}}}{{{x}_{2}}-{{x}_{1}}}[/latex].
To see how the rise and run relate to the coordinates of the two points, let’s take another look at the slope of the line between the points [latex]\left(2,3\right)[/latex] and [latex]\left(7,6\
right)[/latex] below.
Since we have two points, we will use subscript notation.
On the graph, we counted the rise of [latex]3[/latex]. The rise can also be found by subtracting the [latex]y\text{-coordinates}[/latex] of the points.
[latex]\begin{array}{c}{y}_{2}-{y}_{1}\\ 6 - 3\\ 3\end{array}[/latex]
We counted a run of [latex]5[/latex]. The run can also be found by subtracting the [latex]x\text{-coordinates}[/latex].
[latex]\begin{array}{c}{x}_{2}-{x}_{1}\\ 7 - 2\\ 5\end{array}[/latex]
We know [latex]m={\Large\frac{\text{rise}}{\text{run}}}[/latex]
So [latex]m={\Large\frac{3}{5}}[/latex]
We rewrite the rise and run by putting in the coordinates. [latex]m={\Large\frac{6 - 3}{7 - 2}}[/latex]
But [latex]6[/latex] is the [latex]y[/latex] -coordinate of the second point, [latex]{y}_{2}[/latex]
and [latex]3[/latex] is the [latex]y[/latex] -coordinate of the first point [latex]{y}_{1}[/latex] . [latex]m={\Large\frac{{y}_{2}-{y}_{1}}{7 - 2}}[/latex]
So we can rewrite the rise using subscript notation.
Also [latex]7[/latex] is the [latex]x[/latex] -coordinate of the second point, [latex]{x}_{2}[/latex]
and [latex]2[/latex] is the [latex]x[/latex] -coordinate of the first point [latex]{x}_{1}[/latex] . [latex]m={\Large\frac{{y}_{2}-{y}_{1}}{{x}_{2}-{x}_{1}}}[/latex]
So we rewrite the run using subscript notation.
We’ve shown that [latex]m={\Large\frac{{y}_{2}-{y}_{1}}{{x}_{2}-{x}_{1}}}[/latex] is really another version of [latex]m={\Large\frac{\text{rise}}{\text{run}}}[/latex]. We can use this formula to find
the slope of a line when we have two points on the line.
Slope Formula
The slope of the line between two points [latex]\left({x}_{1},{y}_{1}\right)[/latex] and [latex]\left({x}_{2},{y}_{2}\right)[/latex] is
[latex]m={\Large\frac{{y}_{2}-{y}_{1}}{{x}_{2}-{x}_{1}}}[/latex]
Say the formula to yourself to help you remember it:
[latex]\text{Slope is }y\text{ of the second point minus }y\text{ of the first point}[/latex]
[latex]x\text{ of the second point minus }x\text{ of the first point.}[/latex]
Find the slope of the line between the points [latex]\left(1,2\right)[/latex] and [latex]\left(4,5\right)[/latex].
We’ll call [latex]\left(1,2\right)[/latex] point #1 and [latex]\left(4,5\right)[/latex] point #2: [latex]\stackrel{{x}_{1},{y}_{1}}{\left(1,2\right)}\text{and}\stackrel{{x}_{2},{y}_{2}}{\left(4,5\right)}[/latex]
Use the slope formula. [latex]m={\Large\frac{{y}_{2}-{y}_{1}}{{x}_{2}-{x}_{1}}}[/latex]
Substitute the values in the slope formula:
[latex]y[/latex] of the second point minus [latex]y[/latex] of the first point [latex]m={\Large\frac{5 - 2}{{x}_{2}-{x}_{1}}}[/latex]
[latex]x[/latex] of the second point minus [latex]x[/latex] of the first point [latex]m={\Large\frac{5 - 2}{4 - 1}}[/latex]
Simplify the numerator and the denominator. [latex]m={\Large\frac{3}{3}}=1[/latex]
Let’s confirm this by counting out the slope on the graph.
The rise is [latex]3[/latex] and the run is [latex]3[/latex], so
[latex]\begin{array}{}\\ m=\frac{\text{rise}}{\text{run}}\hfill \\ m={\Large\frac{3}{3}}\hfill \\ m=1\hfill \end{array}[/latex]
It is important to remember that the slope is the same no matter which order we select the points. Previously, whenever we found the slope by looking at the graph, we always selected our points from
left to right so that our run was always a positive value. Now, let’s take a look at an example in which we select our points from right to left.
The point [latex](0,2)[/latex] is indicated as Point 1, and [latex](−2,6)[/latex] as Point 2. So you are going to move from Point 1 to Point 2. A triangle is drawn above the line to help
illustrate the rise and run.
You can see from the graph that the rise going from Point 1 to Point 2 is [latex]4[/latex], because you are moving [latex]4[/latex] units in a positive direction (up). The run is [latex]−2[/latex],
because you are then moving in a negative direction (left) [latex]2[/latex] units (think of it like running backwards!). Using the slope formula,
[latex] \displaystyle \text{Slope}=\frac{\text{rise}}{\text{run}}=\frac{4}{-2}=-2[/latex].
You do not need the graph to find the slope. You can just use the coordinates, keeping careful track of which is Point 1 and which is Point 2. Let’s organize the information about the two points:
Name Ordered Pair Coordinates
Point 1 [latex](0,2)[/latex] [latex]\begin{array}{l}x_{1}=0\\y_{1}=2\end{array}[/latex]
Point 2 [latex](−2,6)[/latex] [latex]\begin{array}{l}x_{2}=-2\\y_{2}=6\end{array}[/latex]
The slope, [latex]m=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}=\frac{6-2}{-2-0}=\frac{4}{-2}=-2[/latex]. The slope of the line, m, is [latex]−2[/latex].
Remember, it doesn’t matter which point is designated as Point 1 and which is Point 2. You could have called [latex](−2,6)[/latex] Point 1, and [latex](0,2)[/latex] Point 2. In that case, putting the
coordinates into the slope formula produces the equation [latex]m=\frac{2-6}{0-\left(-2\right)}=\frac{-4}{2}=-2[/latex]. Once again, the slope is [latex]m=-2[/latex]. That’s the same slope as before.
The important thing is to be consistent when you subtract: you must always subtract in the same order [latex]\left(y_{2}-y_{1}\right)[/latex] and [latex]\left(x_{2}-x_{1}\right)[/latex].
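The arithmetic above is easy to check with a few lines of code. This short sketch (not part of the original lesson) computes the slope from two ordered pairs and confirms that swapping which point is called Point 1 leaves the slope unchanged:

```python
def slope(p1, p2):
    """Slope of the line through two points (x1, y1) and (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    # Subtract in the same order in numerator and denominator.
    return (y2 - y1) / (x2 - x1)

print(slope((0, 2), (-2, 6)))   # -> -2.0
print(slope((-2, 6), (0, 2)))   # -> -2.0  (order of the points does not matter)
```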
try it
Find the slope of the line through the points [latex]\left(-2,-3\right)[/latex] and [latex]\left(-7,4\right)[/latex].
Show Solution
try it
What is the slope of the line that contains the points [latex](4,2)[/latex] and [latex](5,5)[/latex]?
Show Solution
The example below shows the solution when you reverse the order of the points, calling [latex](5,5)[/latex] Point 1 and [latex](4,2)[/latex] Point 2.
What is the slope of the line that contains the points [latex](5,5)[/latex] and [latex](4,2)[/latex]?
Show Solution
Notice that regardless of which ordered pair is named Point 1 and which is named Point 2, the slope is still [latex]3[/latex].
Example (Advanced)
What is the slope of the line that contains the points [latex](3,-6.25)[/latex] and [latex](-1,8.5)[/latex]?
Show Solution
Watch these videos to see more examples of how to determine slope given two points on a line.
Finding the Slopes of Horizontal and Vertical Lines
Now, let’s revisit horizontal and vertical lines. So far in this section, we have considered lines that run “uphill” or “downhill.” Their slopes are always positive or negative numbers. But what
about horizontal and vertical lines? Can we still use the slope formula to calculate their slopes?
Consider the line above. We learned in the previous section that because it is horizontal, its slope is [latex]0[/latex]. You can also use the slope formula with two points on this horizontal line
to calculate it. Using [latex](−3,3)[/latex] as Point 1 and [latex](2,3)[/latex] as Point 2, you get:
[latex] \displaystyle \begin{array}{l}m=\frac{{{y}_{2}}-{{y}_{1}}}{{{x}_{2}}-{{x}_{1}}}\\\\m=\frac{3-3}{2-\left(-3\right)}=\frac{0}{5}=0\end{array}[/latex]
The slope of this horizontal line is [latex]0[/latex].
Let’s consider any horizontal line. No matter which two points you choose on the line, they will always have the same y-coordinate. So, when you apply the slope formula, the numerator will always be
[latex]0[/latex]. Zero divided by any non-zero number is [latex]0[/latex], so the slope of any horizontal line is always [latex]0[/latex].
The equation for the horizontal line [latex]y=3[/latex] is telling you that no matter which two points you choose on this line, the y-coordinate will always be [latex]3[/latex].
How about vertical lines? In their case, no matter which two points you choose, they will always have the same x-coordinate. The equation for this line is [latex]x=2[/latex].
So, what happens when you use the slope formula with two points on this vertical line to calculate the slope? Using [latex](2,1)[/latex] as Point 1 and [latex](2,3)[/latex] as Point 2, you get:
[latex] \displaystyle \begin{array}{l}m=\frac{{{y}_{2}}-{{y}_{1}}}{{{x}_{2}}-{{x}_{1}}}\\\\m=\frac{3-1}{2-2}=\frac{2}{0}\end{array}[/latex]
But division by zero has no meaning for the set of real numbers. Because of this fact, it is said that the slope of this vertical line is undefined. This is true for all vertical lines—they all have
a slope that is undefined.
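The two special cases can be handled the same way in code. This sketch (again, not part of the original lesson) returns [latex]0[/latex] for a horizontal line and `None` when the denominator is zero, mirroring the undefined slope of a vertical line:

```python
def slope_or_none(p1, p2):
    """Return the slope, or None when the line is vertical (x2 == x1)."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        return None          # division by zero: the slope is undefined
    return (y2 - y1) / (x2 - x1)

print(slope_or_none((-3, 3), (2, 3)))  # -> 0.0   (horizontal line)
print(slope_or_none((2, 1), (2, 3)))   # -> None  (vertical line)
```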
What is the slope of the line that contains the points [latex](3,2)[/latex] and [latex](−8,2)[/latex]?
Show Solution
Try It
Big Data
Posted on
As data usage gains momentum, the need for efficient and environmentally friendly architectures to manage big data has become increasingly critical. Historically, Big Data software required the usage
of very large infrastructure commonly found only in large data centers. The environmental impact of these data centers, which consume vast amounts of energy, has caused many concerns with respect to
excessive energy consumption. Recently, a new Big Data solution arrived on the market with many innovations in green technologies and sustainable practices.
The High Energy Consumption of the Data Centers
Data centers are the backbone of the digital age, housing the servers and storage systems that power everything from social media to financial transactions. However, their energy consumption is a
major concern. According to the International Energy Agency (IEA), data centers and transmission networks accounted for about 1% of global electricity use in 2020. As demand for data processing
grows, so does the need for energy-efficient Big Data solutions.
Key Strategies for Green Big Data Solutions:
Efficient Data Management
Streamlined data management practices, such as effective data partitioning, indexing, and compression, can significantly reduce the computational load. Efficient data storage and retrieval mechanisms
minimize the energy needed for data processing tasks. In particular, compression algorithms can be made more efficient and thus more eco-friendly. For example, this type of highly efficient
compression algorithm is found inside the native storage used inside the TIMi solution, a common solution used in the fields of Big Data and Data Science.
Optimized Algorithms
Algorithms designed for energy efficiency are central to green data architectures. These include in-memory computing, which reduces the need for repeated data fetching from storage, lazy evaluation
techniques that only compute data as required.
The energy consumption of a big data solution is directly proportional to (1) its running time and (2) the number of nodes used to run the different big data operations. Let’s take a closer look at
the running time of different algorithms.
Algorithms are very often judged based on their “complexity.” What does “complexity” mean? Let’s take an example. For example, the time required to run a classical sorting algorithm (such as
“HeapSort”) is proportional to “n log(n),” with “n” being the number of items to sort. In such a situation, we’ll say that the complexity of the “heapsort” algorithm is “O(n log(n)).”
A better sorting algorithm that is more environmentally friendly than the “HeapSort” would have a linear complexity that is noted: “O(n).” Linear complexity is faster and thus better because “O(n) <
O(n log(n)).” In the context of the sorting algorithm, such a linear complexity (“O(n)”) is very uncommon (e.g., it’s not found inside the common data solution: Spark, R, Python, JS, Postgres,
Oracle). It’s only found in a handful of software solutions, including Matlab and TIMi. When using TIMi, in some specific but common situations, the users can also decide to use another, even better,
sorting algorithm with a “O(1)” complexity (this is unique to TIMi).
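TIMi's internal sorting algorithm is not described here, so the following is only an illustration of how a linear-time, O(n), sort is possible at all. Counting sort achieves O(n + k) time for non-negative integer keys bounded by a known maximum k, a restriction that general comparison sorts such as HeapSort do not have:

```python
def counting_sort(values, max_value):
    """Sort non-negative integers <= max_value in O(n + max_value) time."""
    counts = [0] * (max_value + 1)
    for v in values:           # one linear pass to tally each key
        counts[v] += 1
    result = []
    for v, c in enumerate(counts):
        result.extend([v] * c)  # emit each key as many times as it occurred
    return result

print(counting_sort([5, 1, 4, 1, 3], 5))  # -> [1, 1, 3, 4, 5]
```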
Regarding the algorithm used to compute “aggregations” (a very common operation in the field of Big Data), all big data solutions typically use an algorithm based on either a “hashtable” algorithm or
a “B tree” algorithm. The complexity of an aggregation operation that is based on a “B tree” algorithm is very, very bad: “O(n² log(n))” with “n” being the number of rows to aggregate. The complexity
of an aggregation operation based on a hashtable algorithm is better: it’s between “O(n)” and “O(n²)”. Unfortunately, the true observed complexity of the hashtable-based algorithms in the field of
Big Data is nearly always approaching “O(n²)” (due to a large number of “collisions” inside the hashcodes because of the large data volumetry).
Amongst all big data solutions, TIMi is the only one to provide an algorithm to compute aggregations with a guaranteed complexity of “O(n)”. This guarantees a more efficient and faster aggregation,
making the TIMi solution much “greener” than its competitors.
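As an illustration of the hashtable-based aggregation described above (not TIMi's actual algorithm, which is not disclosed), here is a one-pass group-by-sum whose expected running time is O(n), assuming few hash collisions — precisely the assumption the text notes often fails at big data volumes:

```python
def aggregate_sum(rows, key, value):
    """Group-by-sum using a hash table: one pass, expected O(n) time."""
    totals = {}
    for row in rows:                       # single pass over the n rows
        k = row[key]
        totals[k] = totals.get(k, 0) + row[value]
    return totals

rows = [{"city": "A", "amt": 10}, {"city": "B", "amt": 5}, {"city": "A", "amt": 7}]
print(aggregate_sum(rows, "city", "amt"))  # -> {'A': 17, 'B': 5}
```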
Let’s now talk about another very common operation in the field of Big Data: the “join” operation, that is used to join two tables (here below named T1 and T2). Big data solutions are using
algorithms to perform “join” operations that have a complexity that is typically between “O(n1 log(n2))” and “O(n1.n2)”, where “n1” is the number of rows of the table T1 and “n2” is the number of
rows of the table T2.
Amongst all big data solutions, TIMi is the only one to provide an algorithm to compute join with a guaranteed complexity of “O(n1+n2)”. In some specific but common situations, TIMi’s users can also
decide to use another join algorithm with complexity “O(n1)”. This guarantees a more efficient and faster join than all other big data solutions, making the TIMi solution much faster and thus much
more energy efficient than its competitors. Less energy consumed directly translates to a lower carbon footprint.
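The O(n1+n2) bound mentioned above matches the classic hash join: build a hash index over one table in O(n2), then probe it once per row of the other table in O(n1). This sketch is a generic illustration, not TIMi's implementation, and the bound is expected-case under well-behaved hashing:

```python
def hash_join(t1, t2, key):
    """Equi-join two lists of row dicts: build in O(n2), probe in O(n1)."""
    index = {}
    for row in t2:                          # build phase over table T2
        index.setdefault(row[key], []).append(row)
    joined = []
    for row in t1:                          # probe phase over table T1
        for match in index.get(row[key], []):
            joined.append({**row, **match})
    return joined

t1 = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
t2 = [{"id": 1, "city": "X"}, {"id": 1, "city": "Y"}]
print(hash_join(t1, t2, "id"))
```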
Optimized Implementations
In the previous paragraph, we introduced the notion of “complexity.” This is a very important and insightful notion when evaluating the quality of algorithms and software in general, but it can
sometimes be misleading.
For example, let’s talk again about the algorithms used for “aggregation” operations. The complexity of the TIMi’s algorithm is guaranteed to be “O(n),” and the complexity of the “hashtable”
algorithm (in a very favorable situation on a small table) is also “O(n).” What does this mean? It means that the running time to compute the aggregation is simply proportional to “n” (where “n” is
the number of rows to aggregate), or, in other words, the running time is “k·n” (where “k” is a constant value named the “constant term”).
This “constant term” k changes from one algorithm (and one implementation) to the other. One very important and notable difference between two pieces of software lies in the value of the “constant term” k
(that must be as small as possible). Inside TIMi, the different codes that are used to implement all the algorithms are written using low-level assembler code (and low-level, close to the metal, “C”
code) to get “k” as small as possible. Almost all other big data solutions rely on slower programming languages (Java, Scala, Python) that impose a much higher “k” value.
So, to summarize, at first view, the algorithms implemented inside TIMi (for aggregations) seem to have approximatively the same speed as the other implementations found inside other big data
solutions since all these algorithms have, at least on paper, more or less the same complexity “O(n)”.
But, in practice, the “aggregation” algorithm implemented inside TIMi is several orders of magnitude faster than other implementations found in other big data solutions because of the much smaller
“constant term” k found inside TIMi’s implementation. The same reasoning holds for all other algorithms (e.g., for sorting and joining tables) commonly found in big data solutions.
Distributed Computing
Distributing data processing tasks across multiple nodes can sometimes reduce the running time of the most common operations found in big data (e.g., aggregations, joins, sorts). Frameworks such
as Apache Hadoop, Databricks, Redshift, Snowflake, and Spark are designed to execute the most common big data operations using many nodes.
The software architects that designed these solutions spent the vast majority of their time designing the architecture of these solutions to always be able to use all the nodes inside a cluster. The
idea was to favor, as much as possible, what is commonly called “horizontal scalability”. In other words, the idea was to have a software architecture that runs faster each time you add one more node/server.
This also means that, in comparison, practically no efforts were made to improve “vertical scalability” (i.e., no efforts were made to get the best algorithms and implementations while running on one
single node/server). This very obvious lack of interest in “vertical scalability” might explain why TIMi is several orders of magnitude faster than all other big data solutions when running on one
single node.
One very important notion when estimating the quality of a distributed computation engine is the “incompressible running time.” Let’s take a quick “deep dive” into this notion. If you want more
information on this subject, please refer to the Wikipedia page on Amdahl’s Law.
Let’s assume that you need to run some (distributed) computations on a cluster of machines. These computations are divided into two parts. The time required to execute this first part is named the
“compressible time”: i.e., each time you add a node inside your cluster, this “compressible time” gets reduced (i.e., it gets “compressed”). On the other hand, the second part is named the
“incompressible time”: it is a constant duration that does not change when you add more nodes inside your cluster of machines.
In an infinite infrastructure with an infinite number of nodes, the “compressible time” is reduced to almost zero while the “incompressible time” always stays the same. The “incompressible time” is
typically expressed in percentage with respect to the running time on one node.
For example, an “incompressible time” of 20% means that, on an infinite infrastructure, the computing time is reduced to 20% of the computing time on a single node. In such a situation (with an
“incompressible time” of 20%), thanks to the “distributed” computations, we get a maximum speed-up of “5” (=100%/20%) on an infinite infrastructure. In other words, the distributed computations are 5
times faster than the computation on a single node.
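The speed-up numbers above follow directly from Amdahl's law. This small helper (an illustration using the article's terminology of an “incompressible” fraction) reproduces them: a 50% incompressible time caps the speed-up near 2, and a 20% incompressible time near 5, no matter how many nodes you add:

```python
def speedup(incompressible_fraction, nodes):
    """Amdahl's law: speed-up with a fixed incompressible (serial) fraction."""
    s = incompressible_fraction
    # The compressible part shrinks with the node count; the rest does not.
    return 1.0 / (s + (1.0 - s) / nodes)

print(round(speedup(0.5, 1000), 2))  # -> 2.0  (approaches 100%/50% = 2)
print(round(speedup(0.2, 1000), 2))  # -> 4.98 (approaches 100%/20% = 5)
```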
When using a classical distributed framework, it’s very common to have “incompressible times” between 20% to 50% for the most common operations (for more details and numbers, see the “TIMi vs Spark
TPC-H benchmarks” on Github or the academic paper titled “Amdahl’s law in Big Data: alive and kicking”). In some rare instances (such as computing very small aggregations with Teradata), the
“incompressible times” can be reduced to 5%, but it’s very uncommon. It should be noted that, almost all the time, when using Databricks and Spark, the “incompressible time” is around 50%. What does
this “50%” means in practice?
It means that, when using a very large infrastructure, with thousands of nodes, you will only achieve a speed-up of “2” (=100%/50%). In other words, the distributed computations on thousands of nodes
are 2 times faster than the computations on a single node. From the point of view of energy consumption, it’s catastrophic: You are providing electricity and power to thousands of servers to get a
speed-up of “2”. You are effectively polluting thousands of times more to get a speed-up of two.
How to get a higher speed-up, higher than “2”? A possible solution for getting a higher speed-up on all your big data problems would be to switch to TIMi. Indeed, one TIMi server is between 20 and 102 times faster than a server running any other big data solution currently available on the market.
This is due to the deep focus of the TIMi’s team to achieve the highest “vertical scalability” possible (a technical choice that was not followed by any other current big data solution providers:
i.e., these were only focusing on “horizontal scalability”). One possible explanation for the total disregard for “vertical scalability” and the utmost & total focus on “horizontal scalability”
displayed by all these other big data solution providers would be the current enormous “hype” around cloud technologies.
Switching to TIMi running on a single node will effectively produce a speed-up of between 20 and 102 compared to a poorly designed distributed computation engine (with a 50% incompressible time, such as Databricks, Spark, Redshift, and many others). Not only will your computation run faster, but the energy consumption and CO₂ production will also be greatly reduced. The reduction in your CO₂ emissions is double because you are, at the same time, (1) reducing the hardware infrastructure to a single node and (2) reducing the duration of the computations by a factor of at least 10.
It’s also possible to run distributed computations with TIMi. In such a situation, TIMi’s incompressible time is between 0% and 10%, depending on the operation. So, if you are an unconditional fan of
distributed computations, TIMi is also one of the best solutions possible for you thanks to its very low “incompressible time.”
TIMi: Leading the Way in Sustainable Big Data
Founded in Belgium on August 3, 2007, by Frank Vanden Berghen, TIMi (The Intelligent Mining Machine) has been at the forefront of developing tools for efficient and sustainable big data processing.
Initially known as Business-Insight SPRL, the company rebranded to TIMi on December 28, 2017. TIMi’s software suite, which includes TIMi Modeler, Stardust, Anatella, and Kibella, exemplifies the
principles of energy-efficient big data architectures.
TIMi’s Contributions to Sustainable Big Data
Anatella, TIMi’s core component, is designed for high-performance data transformation. Unlike general-purpose ETL (Extract-Transform-Load) tools, Anatella is optimized for analytical tasks. Thanks to
a deep focus on vertical scalability, Anatella processes large datasets with minimal infrastructure. This efficiency translates to lower energy consumption.
Anatella’s performance was highlighted in a vendor-neutral benchmark, TPC-H, where a single Anatella node exhibited higher speed than a whole Spark-based cluster with hundreds of nodes. Another
example of Anatella’s efficiency would be the benchmark organized by the “National Bank of Belgium” (the governmental institution that prints Euros in Belgium). Anatella running on a single 2000€
laptop completed this benchmark in 18 hours, while the bank’s entire Databricks cluster, composed of hundreds of nodes, ran for 3 months (more details here).
TIMi Modeler is one of the first auto-ML (automated machine learning) tools available worldwide. Its design allows for scalable, energy, and resource-efficient machine learning operations. The TIMi
Modeler reduces the computational load and energy required for model training and deployment by automating complex machine-learning processes.
Stardust, another critical component of the TIMi suite, excels in computing accurate clusters for large populations. This capability is crucial for sectors like telecommunications, where analyzing
data from millions of users is commonplace.
Future Directions
The future of environmentally friendly big data architectures lies in continued innovation and the adoption of cutting-edge technologies. Advances in edge computing, algorithm innovation, and
integrating renewable energy sources will play significant roles.
Environmentally-friendly architectures for big data are essential for sustainable development in the digital age. Companies like TIMi are leading the charge with innovative solutions that enhance
efficiency and reduce the environmental impact of data processing. By leveraging advanced technologies and adopting sustainable practices, we can ensure that the growth of big data does not come at
the expense of our planet.
Source : Reducing the Big Data Carbon Footprint with TIMi by The Washington Times
Posted in Anatella, News & Events
Water current - math word problem (15461)
Water current
John swims upstream. After a while, he passes a bottle, and from that moment he swims for 20 more minutes in the same direction. He then turns around and swims back, and from the point of his first
meeting with the bottle, the bottle drifts 2 kilometers before he reaches it again. What is the speed of the current? John always swims at the same speed relative to the water.
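A sketch of one standard way to solve this (not from the source page, which hides its answer): work in the reference frame of the water, where the bottle is stationary and John swims at a constant speed. He swims away from the bottle for 20 minutes, so the return leg also takes 20 minutes; meanwhile the bottle drifts 2 km with the current. This interprets the 2 km as the bottle's drift between the two meetings:

```python
# Assumption: the 2 km is the distance the bottle drifts between the
# first meeting and the moment John catches it again.
swim_away_min = 20
total_drift_min = 2 * swim_away_min   # away leg + return leg = 40 minutes
drift_km = 2
current_kmh = drift_km * 60 / total_drift_min
print(current_kmh)  # -> 3.0 km/h
```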
Correct answer:
Did you find an error or inaccuracy? Feel free to write us. Thank you!
You need to know the following knowledge to solve this word math problem:
Units of physical quantities:
Themes, topics:
Grade of the word problem:
Related math problems and questions:
Carpenter's Rule Theorem
In 2000, we solved the following three problems positively:
• Convexifying polygons:
Given a simple polygon in the plane (whose edges are considered rigid bars and whose vertices are considered hinges), is there a continuous motion of the polygon that preserves the lengths of the
edges and never causes edges to cross, and results in the polygon being convex?
• Carpenter's rule conjecture:
Given an open polygonal chain (also called a polygonal path or polygonal arc) in the plane, is there a continuous motion with the same properties that results in the arc becoming straight?
• A generalization:
Given a collection of polygonal arcs and simple polygons in the plane, none of which intersect each other, such that no polygon contains another arc or polygon, there is a motion that preserves
all of the edge lengths and never crosses any edges, and results in the arcs becoming straight and the polygons being convex.
• If desired, an arc or polygon can be contained inside another polygon, but this arc or polygon cannot be guaranteed to be straightened or convexified (as this is not always possible).
• The motion is expansive: the distance between every pair of vertices only increases.
• The motion is piecewise-differentiable.
• The motion preserves any symmetries present in the initial configuration.
• The configuration space of a polygonal arc or simple polygon, modulo isometries, is contractible.
The proof ends up being quite simple with the right ideas (in particular the notion of increasing pairwise distances) and tools (in particular the theory of rigidity) in hand.
Animations illustrate three different algorithms for unfolding and refolding polygonal chains:
The main paper appears in the journal Discrete & Computational Geometry. The technical report includes an additional appendix which strengthens the result but is highly technical. Extended abstracts
also appear in the Proceedings of the 41st Annual Symposium on Foundations of Computer Science (11 pages, November 2000) and Proceedings of the 16th European Workshop on Computational Geometry (4
pages, March 2000). Papers about this work:
• Ivars Peterson's Science News article “Unlocking Puzzling Polygons” (September 23, 2000, volume 158, number 13, pages 200-201) is an overview of our results written for a general audience.
• Joseph O'Rourke's “Computational Geometry Column 39” (July 2000) is a 4-page overview of our result and Ileana Streinu's related result.
These problems were introduced and circulated independently by several researchers, including Stephen Schanuel in the early 1970s, Ulf Grenander in 1986–1989, William Lenhart and Sue Whitesides in
March 1991, and Joseph Mitchell in December 1992. See our paper for details.
A fairly large group of people was involved in trying to construct and prove or disprove “locked” examples (counterexamples to the above theorems) at various times over the past few years. Typically,
someone in the group would distribute an example that s/he constructed or was given by a colleague. We would try various motions that did not work, and we would often try proving that the example was
locked because it appeared so! For some examples, it took several months before we found an opening motion.
My main interest in this research was initiated at the International Workshop on Wrapping and Folding organized by Anna Lubiw and Sue Whitesides at the Bellairs Research Institute of McGill
University in February 1998. At this workshop, a bond between several linkage openers began: Therese Biedl, Martin Demaine, Hazel Everett, Sylvain Lazard, Anna Lubiw, Joseph O'Rourke, Mark Overmars,
Steven Robbins, Ileana Streinu, Godfried Toussaint, Sue Whitesides, and myself. The group of “linkage openers” mentioned above grew to include Robert Connelly, Sándor Fekete, Joseph Mitchell, and
Günter Rote. Other people working on the problem include Eric Babson, Prosenjit Bose, Christopher Croke, Branko Grünbaum, Michael Kaufmann, Paul Kearney, William Lenhart, Giuseppe Liotta, John
Milnor, János Pach, Irena Pashchenko, Octavia Petrovici, Richard Pollack, Heiko Schröder, Michael Soss, Einar Steingrimsson, and Jorge Urrutia. One of the key meetings that started this paper is the
Monte Verità Conference on Discrete and Computational Geometry in Ascona, Switzerland, organized by Jacob Goodman, Richard Pollack, and Emo Welzl in June 1999. In particular, Bob, Günter, and I first
met at this conference. Our work continued at the 4th Geometry Festival, an international workshop on Discrete Geometry and Rigidity, in Budapest, Hungary, organized by András Bezdek, Károly Bezdek,
Károly Böröczky, and Robert Connelly, in November 1999.
Valuation (PowerPoint Presentation)
2. Contents • Introduction – Where Value Comes From • Discounting Basics • Overview of Alternative Valuation Methods • Valuation Using Multiples • Valuation Using Projected Earnings • Case Studies
3. Valuation, Decision Making and Risk Every major decision a company makes is in one way or another derived from how much the outcome of the decision is worth. It is widely recognized that
valuation is the single financial analytical skill that managers must master. • Valuation analysis involves assessing • Future cash flow levels, (cash flow is reality) and • Risks in valuing
assets, debt and equity • Measuring value – forecasting and risk assessment – is a very complex and difficult problem. • Intrinsic value is an estimate and not observable Reference: Chapter 6
4. Valuation Overview Valuation is a huge topic. Some Key issues in valuation analysis. • Cost of Capital in DCF or Discounted Earnings • Selection of Market Multiple and Adjustment • Growth Rates
in Earnings and Cash Flow Projections • Terminal Value Method and Calculation • Use several vantage points • Do not assume false precision
5. Tools for Valuation • Financial Models: • Valuation model with project earnings or cash flows • Statistical Data: • Industry Comparative Data to establish Multiples and Cost of Capital •
Industry, company knowledge and judgment • Knowledge about risks and economic outlook to assess risks and value drivers in the forecasts • Valuation should not be intimidating
6. Valuation Basics • A Company’s value depends on: • Return on Invested Capital • Weighted Average Cost of Capital • Ability to Grow • All of the other ratios – gross margins, effective tax rates,
inventory turnover etc. are just details.
7. Analytical framework for Valuation – Combine Forecasts of Economic Performance with Cost of Capital Competitive position such as pricing power and cost structure affects ROIC In financial terms,
value comes from ROIC and growth versus cost of capital P/E ratio and other valuation come from ROIC and Growth
8. Value Comes from Two Things • What you think future cash flows will be • How risky are those cash flows • We will deal with how to measure future cash flows and how to deal with quantifying the
risk of those cash flows • Value comes from the ability to earn higher returns than the opportunity cost of capital • One of the few things we know is that there is a tradeoff between risk and
return. Reference: Folder on Yield Spreads
9. Valuation and Cash Flow • Ultimately, value comes from cash flow in any model: • DCF – directly measure cash flow from explicit cash flow and cash flow from selling after the explicit period •
Multiples – The size of a multiple ultimately depends on cash flow in formulas • FCF/(k-g) = Multiple • They still have implicit cost of capital and growth that must be understood • Replacement
Cost – cash from selling assets • Growth rate in cash flow is a key issue in any of the models Investors cannot buy a house with earnings or use earnings for consumption or investment
10. Valuation Diagram • Valuation using discounted cash flows requires forecasted cash flows, application of a discount rate and measurement of continuing value (also referred to as horizon value or
terminal value) Continuing Value Cash Flow Cash Flow Cash Flow Cash Flow Discount Rate is WACC Enterprise Value Net Debt Reference: Private Valuation; Valuation Mistakes Equity Value
11. Value Comes from Economic Profit and Growth Economic profit is the difference between profit and opportunity cost This implies that there are three variables – return, growth and cost of capital
that are central to valuation analysis Once you have a good thing, you should grow
12. The Value Matrix - Stock Categorisation What is the economic reason for getting here and how long can the performance be maintained Throwing good money after bad Give the money to investors Try
to get out of the business
13. Issues with ROIC include Will the ROIC move to WACC because of competitive pressures Evidence suggests that ROIC can be sustained for long periods Consider the underlying economic characteristics
of the firm and the industry What is the expected change in ROIC When ROIC moves to sustainable level, then can move to terminal value calculation Examine the ROIC in models to determine if
detailed assumptions are leading to implausible results Migration table ROIC Issues
14. Reasonable Estimates of Growth The short term Based on best estimate of likely outcome • The medium term outlook • Assessment of industry outlook and company position • ROIC fades towards the
cost of capital • Growth fades towards GDP • The long run • Long run assumptions: • ROIC = Cost of capital • Real growth = 0% Much of valuation involves implicitly or explicitly making growth
estimates – High P/E comes from high growth Reference: Level and persistence of growth rates
15. Growth issues include Growth is difficult to sustain Law of large numbers means that it is more difficult to maintain growth after a company becomes large Investment analysts overestimate growth
Examine sustainable growth formulas from dividend payout and from depreciation rates Growth Issues
16. Sustaining Growth and ROIC > WACC • Mean Reversion of Long-term Growth • Competition tends to compress margins and growth opportunities, and sub-par performance spurs corrective actions. • With
the passage of time, a firm’s performance tends to converge to the industry norm. • Consideration should be given to whether the industry is in a growth stage that will taper down with the
passage of time or whether its growth is likely to persist into the future. • Competition exerts downward pressure on product prices and product innovations and changes in tastes tend to erode
competitive advantage. The typical firm will see the return spread (ROIC-WACC) shrink over time. A study by Chan, Karceski, and Lakonishok titled, “The Level and Persistence of Growth Rates,”
published in 2003. According to this study, analyst “growth forecasts are overly optimistic and add little predictive power.”
18. Alternative Valuation Models • There are many valuation techniques for assets and investments including: • Income Approach • Discounted Cash Flow • Venture Capital method • Risk Neutral Valuation
• Sales Approach • Multiples (financial ratios) from Comparable Public Companies or from Transactions or from Theoretical Analysis • Liquidation Value • Cost Approach • Replacement Cost (New) and
Reproduction Cost of similar assets • Other • Break-up Value • Options Pricing • The different techniques should give consistent valuation answers See the appraisal folder in the financial
20. Risk Neutral Valuation • Theory – If one can establish value with one financial strategy, the value should be the same as the value with alternative approaches • In risk neutral valuation, an
arbitrage strategy allows one to use the risk free rate in valuing hedged cash flows. • Forward markets are used to create arbitrage • Risk neutral valuation does not work with risks that cannot
be hedged • Use risk free rate on hedged cash flow • Example • Valuation of Oil Production Company • Costs Known • No Future Capital Expenditures
21. Practical Implications of Risk Neutral Valuation • Use market data whenever possible, even if you will not actually hedge • Use lower discount rates when applying forward market data in models
Valuation with high discount rates And Uncertain cash flows Valuation with Forward Markets and Low Discount Rates
22. Venture Capital Method • Two Cash Flows • Investment (Negative) • IPO Terminal Value (Positive) • Terminal Value = Value at IPO x Share of Company Owned • Valuation of Terminal Value • Discount
Rates of 50% to 75% • Risky cash flows • Other services See the article on private valuation
23. Valuation Diagram – Venture Capital • Valuation in venture capital focuses on the value when you will get out, the discount rates and how much of the company you will own when you exit.
Continuing Value Cash Flow Cash Flow Cash Flow Cash Flow • In the extreme, if you have given half of your company away, and the cash flow is the same before and after your giveaway, then
the amount you would pay for the share must account for how much you will give away. Discount Rates Enterprise Value Evaluate how much of the equity value that you own Net Debt Equity Value
24. Venture Capital Method • Determine a time period when the company will receive positive cash flow and earnings. • e.g. projection of earnings in year 7 is 20 million. • At the positive cash flow
period, apply a multiple to determine the value of the company. • e.g. P/E ratio of 15 – terminal value is 20 x 15 • Use high discount rate to account for optimistic projections, strategic advice
and high risk; • e.g. 50% discount rate – [20 x 15]/[1+50%]^7 = 17.5 million • Establish percentage of ownership you will have in the future value through dividing investment by total value •
e.g. 5 million investment / 17.5 million = 28.5% • You make an investment and receive shares (your current percent). You know the investment and must establish the number of shares
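The venture capital arithmetic in this slide can be reproduced with a short Python sketch. The figures below are the slide's illustrative example (20M year-7 earnings, P/E of 15, 50% discount rate, 5M invested), not real data:

```python
def vc_value(exit_earnings, pe_multiple, discount_rate, years, investment):
    """Venture capital method: discount the exit value at a high rate,
    then compute the ownership share implied by the investment."""
    terminal_value = exit_earnings * pe_multiple            # value at IPO/exit
    present_value = terminal_value / (1 + discount_rate) ** years
    ownership = investment / present_value                  # required stake
    return present_value, ownership

# Slide example: 20M earnings in year 7, P/E of 15, 50% discount rate, 5M invested
pv, stake = vc_value(20, 15, 0.50, 7, 5)
print(round(pv, 1), round(stake * 100, 1))  # ~17.6 (million) and ~28.5 (%)
```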
25. Venture Capital Method Continued • In the venture capital method, there are only two cash flows • The investment • The value when the company is sold • The value received when the company is sold
depends on the percentage of the company that is owned. If there is dilution in ownership, the value is less. • Therefore, an adjustment must be made for dilution and the percent of the company
retained. See the Cost of Capital folder for an example • e.g. Share value without dilution = 17.5/700,000 = 25 per share • If an additional 30% of shares is floated, the value per share must be
increased by 30% to maintain the value. • Value per share = 17.5/((500,000+VC shares) x 1.3) • VC Shares: (25 x 1.3)/17.5-500,000 = 343,373
26. Replacement Cost • First a couple of points regarding replacement cost theory • In theory, one can replace the assets of a company without investing in the company. If you are valuing a company,
you may think about creating the company yourself. • If you replaced a company and really measured the replacement cost, the value of the company may be more than replacement cost because the
company manages the assets better than you could. • By replacing the assets and entering the business, you would receive cash flows. You can reconcile the replacement cost with the discounted
cash flow approach
27. Measuring Replacement Cost • Replacement cost includes: • Value of hard assets • Value of patents and other intangibles • Cost of recruiting and training management • Analysis • Begin with
balance sheet categories, account for the age of the plant • Add: cost of hiring and training management • If the company is generating more cash flow than that would be produced from replacement
cost, the management may be more productive than others in managing costs or be able to realize higher prices through differentiation of products. • The ratio of market value to replacement cost
is a theoretical ratio that measures the value of management contribution
28. Replacement Value and Tobin’s Q • Recall Tobin’s Q as: • Q = Enterprise Value / Replacement Cost • Buy assets and talent etc and should receive the ROIC. Earn industry average ROIC. • If the ROIC
> industry average, then Q > 1. • If the ROIC < industry average, then Q < 1
29. Real Options and Problems with DCF • The DCF model has many conceptual flaws, the most significant of which is assuming that cash flows are normally distributed around the mean or base case
level. • For many investments, the cash flows are skewed: • When an asset is to be retired, there is more upside than downside because the asset will continue to operate when times are good, but
it will be scrapped when times are bad. • An investment decision often involves the possibility to expand in the future. When the expansion decision is made, it will only occur when the economics
are good. • During the period of constructing an asset, it is possible to cancel the construction expenditures and limit the downside if it becomes clear that the project will not be economic.
30. Real Options and DCF Problems - Continued • Problems with DCF because of flexibility in managing assets: • In operating an asset, the asset can be shut down when it is not economic and re-started
when it becomes economic. This allows the asset to retain the upside but not incur negative cash flows. • When developing a project, there is a possibility to abandon the project that can limit
the downside as more becomes known about the economics of the project. • In deciding when to construct an investment, one can delay the investment until it becomes clear that the decision is
economic. This again limits the downside cash flows. • In each of these cases, management flexibility provides protection in the downside which means that DCF model produces biased results.
31. Fundamental Valuation • What was behind the bull market of 1980-1999 • EPS rose from 15 to 56 • Nominal growth of 6.9% -- about the growth in the real economy (the real GDP) • Keeping P/E
constant would have large share price increase • Long-term interest rates fell – lower cost of capital increases the P/E ratio • Real Market • Value by ROIC versus growth • Select strategies that
lead to economic profit • Market value from expected performance
32. Three Primary Methods Discussed in Remainder of Slides • Market Multiples • Discounted Free Cash Flow • Discounted Earnings and Dividends • Warning: No method is perfect or completely precise •
Use industry expertise and judgement in assessing discount rates and multiples • Different valuation methods should yield similar results • Bangor Hydro Case
34. Debt (Bond) Valuation
B_t = I_{t+1}/(1+r)^1 + I_{t+2}/(1+r)^2 + I_{t+3}/(1+r)^3 + ... + I_{t+n}/(1+r)^n + F/(1+r)^n
• B_t is the value of the bond at time t • Discounting in the NPV formula assumes END of period • I_{t+n} is the interest payment in period t+n • F is the principal payment (usually the debt's face value) • r is the interest rate (yield to maturity) Case exercise to illustrate the effect of discounting (credit spread) on the value of a bond
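The bond formula can be checked with a minimal Python sketch (illustrative numbers; as expected, when the coupon rate equals the yield the bond prices at par):

```python
def bond_value(coupon, face, r, n):
    """Price a bond as discounted coupons plus discounted principal,
    with end-of-period discounting as in the formula above."""
    pv_coupons = sum(coupon / (1 + r) ** k for k in range(1, n + 1))
    pv_face = face / (1 + r) ** n
    return pv_coupons + pv_face

# 3-year bond, 5 annual coupon on 100 face, 5% yield -> prices at par (100)
print(round(bond_value(5, 100, 0.05, 3), 2))
```

Raising the yield above the coupon rate pushes the price below par (a discount bond); lowering it produces a premium bond.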
35. Risk Free Discounting • If the world would involve discounting cash flows at the risk free rate, life would be easy and boring
36. Equity – Dividend Discount Valuation and Gordon's Model
V_t = E(D_{t+1})/(1+k)^1 + E(D_{t+2})/(1+k)^2 + E(D_{t+3})/(1+k)^3 + ... + E(D_{t+n})/(1+k)^n + ...
• V_t is the value of an equity security at time t • D_{t+n} is the dividend in period t+n • k is the equity cost of capital – difficult to find (CAPM) • E() refers to expected dividends • If dividends have no growth the value is D/k • If dividends have constant growth the value is D/(k-g) • Terminal Value is logically a multiple of book value per share
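The two capitalization cases (D/k and D/(k-g)) can be sketched in a few lines of Python; the inputs below are hypothetical:

```python
def gordon_value(dividend, k, g=0.0):
    """Gordon growth model: next-period dividend capitalized at (k - g).
    With g = 0 this reduces to the no-growth perpetuity D / k."""
    if g >= k:
        raise ValueError("growth must be below the cost of equity")
    return dividend / (k - g)

# Illustrative numbers: a 2.00 dividend, 10% cost of equity, 4% growth
print(round(gordon_value(2.0, 0.10, 0.04), 2))  # 33.33
print(round(gordon_value(2.0, 0.10), 2))        # 20.0
```

The guard on g >= k reflects the model's own limitation: the perpetuity formula is only meaningful when growth is strictly below the cost of equity.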
37. Example of Capitalization Rates • Proof of capitalization rates using excel and growing cash flows
38. Equity Valuation - Free Cash Flow Model
V_t = E(FCF_{t+1})/(1+k)^1 + E(FCF_{t+2})/(1+k)^2 + E(FCF_{t+3})/(1+k)^3 + ... + E(FCF_{t+n})/(1+k)^n + ...
• FCF_{t+n} is the free cash flow in the period t+n [often defined as cash flow from operations less capital expenditures] • k is the weighted average or un-leveraged cost of capital • E(•) refers to an expectation • Alternative Terminal Value Methods
39. Practical Discounting Issues in Excel • NPV formula assumes end of period cash flow • Growth rate is ROE x Retention rate • If you are selling the stock at the end of the last period and doing a
long-term analysis, you must use the next period EBITDA or the next period cash flow. • If there is growth in a model, you should use the add one year of growth to the last period in making the
calculation • To use mid-year of specific discounting use the IRR or XIRR or sumproduct
40. Valuation and Sustainable Growth • Value depends on the growth in cash flow. Growth can be estimated using alternative formulas: • Growth in EPS = ROE x (1 – Dividend Payout Ratio) • Growth in
Investment = ROIC x Reinvestment Rate • Growth = (1+growth in units) x (1+inflation) – 1 • When evaluating NOPLAT rather than earnings, a similar concept can be used for sustainable growth. •
Growth = (Capital Expenditures/Depreciation – 1) x Depreciation Rate • Unrealistic to assume growth in units above the growth in the economy on an ongoing basis.
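Two of the growth formulas above can be sketched directly; the inputs (12% ROE, 40% payout, 2% unit growth, 3% inflation) are illustrative:

```python
def growth_from_retention(roe, payout_ratio):
    # Growth in EPS = ROE x retention ratio, where retention = 1 - payout
    return roe * (1 - payout_ratio)

def nominal_growth(unit_growth, inflation):
    # Growth = (1 + growth in units) x (1 + inflation) - 1
    return (1 + unit_growth) * (1 + inflation) - 1

print(round(growth_from_retention(0.12, 0.40), 4))  # 0.072
print(round(nominal_growth(0.02, 0.03), 4))         # 0.0506
```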
42. Advantages Objective – does not require discount rate of terminal value Simple – does not require elaborate forecast Flexible – can use alternative multiples and make adjustments to the multiples
Theoretically correct – consistent with DCF method if there are stable cash flows and constant growth. Disadvantages Implicit Assumptions: Multiples come from growth, discount rates and returns.
Valuation depends on these assumptions. Too simple: Does not account for prospective changes in cash flow Accounting Based: Depends on accounting adjustments in EBITDA, earnings Timing Problems:
Changing expectations affect multiples and using multiples from different time periods can cause problems. Advantages and Disadvantages of Multiples There are reasons similar companies in an
industry should have different multiples because of ROIC and growth – this must be understood
43. Multiples - Summary • Useful sanity check for valuation from other methods • Use multiples to avoid subjective forecasts • Among other things, well done multiple that accounts for • Accounting
differences • Inflation effects • Cyclicality • Use appropriate comparable samples • Use forward P/E rather than trailing • Comprehensive analysis of multiples is similar to forecast • Use
forecasts to explain why multiples are different for a specific company
44. Mechanics of Multiples • Find market multiple from comparable companies • Rarely are there truly comparable companies • Understand economics that drive multiples (growth rate, cost of capital and
return) • P/E Ratio (forward versus trailing) • Value/Share = P/E x Projected EPS • P/E trailing and forward multiples • Market to Book • Value/Share = Market to Book Ratio x Book Value/Share •
EV/EBITDA • Value/Share = (EV/EBITDA x EBITDA – Debt) divided by shares • P/E and M/B use equity cash flow; EV/EBITDA uses free cash flow In the long-term P/E ratios tend to revert to a mean of
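The per-share mechanics of the two multiples above can be sketched as follows (the inputs are hypothetical; the slide subtracts Debt from enterprise value, which is reproduced here as-is):

```python
def value_per_share_ev_ebitda(ev_ebitda, ebitda, debt, shares):
    """Value/Share = (EV/EBITDA x EBITDA - Debt) / shares, per the slide."""
    enterprise_value = ev_ebitda * ebitda
    equity_value = enterprise_value - debt
    return equity_value / shares

def value_per_share_pe(pe, projected_eps):
    """Value/Share = P/E x projected EPS (forward multiple)."""
    return pe * projected_eps

print(value_per_share_ev_ebitda(8.0, 50.0, 100.0, 30.0))  # 10.0
print(value_per_share_pe(15.0, 2.0))                      # 30.0
```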
45. Valuation from Multiples • Financial Multiples • P/E Ratio • EV/EBITDA • Price/Book • Industry Specific • Value/Oil Reserve • Value/Subscriber • Value/Square Foot • Issues • Where to find the
multiple data – public companies • What income or cash flow base to use • 15-20% Discount for lack of marketability
46. Which Multiple to Use • Valuation from multiples uses information from other companies • It is relevant when the company is already in a steady state situation and there is no reason to expect
that you can improve estimates of EBITDA or Earnings • One of the challenges is to understand which multiple works in which situation: • Consumer products • EV/EBITDA may be best • Intangible
assets make book value inappropriate • Different leverage makes P/E difficult • Banks/Insurance • Market/Book may be best • Not many intangible assets, so book value is meaningful • Book value is
the value of loans which is adjusted with loan loss provisions • Cost of capital and financing is very important because of the cost of deposits
47. Multiples in M&A • Public company comparison • Precedent Transactions • Issues • Where to find the data • Finding comparable companies • Timing (changes in multiples with market moves) • What
data to apply data to (e.g. next year’s earnings) • What do ratios really mean (e.g. P/E Ratio) • Adjustments for liquidity and control premium
48. Example of Valuation with Multiples – Comparison of Different Transactions Note how multiples cover the cycle in a commodity business Demonstrates that the multiple in the merger is consistent
with other transactions
49. Multiples in Pennzoil Merger – Comparison of Merger Consideration to Trading Multiples
50. Comparable companies analysis data in Banking Merger Note the ratios used to value banks are equity based – the Market value to Book Value and the P/E ratio related to various earnings measures
Roti Prata SPOJ Problem Solution | Binary Search | Copyassignment
Hey everyone! Today we are going to discuss one of the most popular problems of binary search which is the Roti Prata SPOJ problem. We will discuss the Roti Prata SPOJ problem solution, its
explanation, solution approach, and then C++ and Java code. There are many problems based on this particular pattern of binary search known as modified binary search or advanced binary search. Many
product-based companies like Google, Microsoft, and Uber ask these types of problems in their coding round or online assessment tests. Let’s quickly jump to the problem.
Table of Contents
• Roti-Prata SPOJ problem statement
• Problem Explanation
• Roti Prata SPOJ Solution Approach
• C++ Solution of Roti Prata SPOJ Problem
• Java Solution of Roti Prata SPOJ Problem
Roti Prata SPOJ Problem Statement
There is an Independence day function at Harry’s school. The principal asked to serve prata after the event is done. The teachers and staff members are asked to go to a restaurant and get P (P<=1000)
pratas packed for the Independence day function. The food stall has L chefs (L<=50) and each chef has a rank R (1<=R<=8). A chef with a rank R can cook 1 prata in the first R minutes, 1 more prata in
the next 2R minutes, 1 more prata in 3R minutes, and so on. A chef can only cook a complete prata and not partially.
For example, if a chef is ranked 2, he will cook one prata in 2 minutes then one more prata in the next 4 mins, and one more in the next 6 minutes hence in total 12 minutes he cooks 3 pratas. In 13
minutes also he can cook only 3 pratas as he does not have enough time for the 4th prata. Since the event at school is about to start so teachers are in hurry. The teachers and staff members want to
know the minimum time to get the order done by the restaurant. Please write a program to help them out.
The first line tells the number of test cases. Each test case consists of 2 lines. In the first line of the test case, we have P the number of prata ordered by the teachers. In the next line, the
first integer denotes the number of chefs L and L integers follow in the same line each denoting the rank of a cook.
Print an integer that tells the number of minutes needed to get the order done.
Roti Prata SPOJ Problem Explanation
Whoof…Huge problem statement right? It’s ok if you did not understand anything in the given problem statement. Let me explain this problem in simple language. We are given L number of chefs. Each
chef has some rank R, which will tell how much time the chef will take to make 1st prata or paratha. Then for making the 2nd prata, he will take 2R time, for the 3rd prata he will take 3R time, and
so on.
This means that if a chef has rank 2 then he will make 1st prata in 2 minutes. Then 2nd prata in 4 minutes. That means he will take 6 minutes (2 + 2*2) to make 2 pratas. For making 3 pratas he will
take 2 + 2*2 + 3*2 = 12 minutes, for making 4 pratas he will take 2 + 2*2 + 3*2 + 4*2 = 20 minutes and so on.
Now we’re given the order of making P number of pratas and we have to tell the minimum number of minutes to complete the order. I hope now you’re clear with the problem statement. You just need to
tell the “minimum” number of minutes to complete the order of P pratas. You need to “minimize” the time taken by the chefs to complete the order of pratas.
Let’s see one example test case.
P = 10
L = 4
R[ ] = 1 2 3 4
We have an order of 10 pratas. We have 4 chefs. Their ranks are 1, 2, 3, and 4. Its output is 12 minutes. Why 12 minutes? Why not 15 minutes? Why not 11 minutes?
Chefs would definitely complete the order of 12 minutes in 15 minutes. No need to check. They can’t complete an order of 10 pratas in 11 minutes. Let’s see why.
Chef 1 has rank 1
How many pratas he will make in 11 minutes? The general equation would be:
R + 2R + 3R … nR <= X minutes
R (1+2+3+…n) <= X (Taking R common)
R[ (n * (n+1)) / 2 ] <= X
(n * (n+1)) <= 2*X / R
Putting X = 11, R=1
n * (n+1) <= 2*11
n * (n+1) <= 22
Therefore chef 1 would make 4 pratas in 11 minutes. Notice that he actually finishes his 4th prata at the 10-minute mark; since partial pratas do not count, he still has only 4 pratas at 11 minutes.
Chef 2 has rank 2
(n * (n+1)) <= 2*X / R
X = 11, R = 2
(n * (n+1)) <= 11
Therefore chef 2 would make 2 pratas in 11 minutes.
Chef 1 and Chef 2 together will make 6 pratas in 11 minutes. 4 made by chef 1 and 2 made by chef 2.
Chef 3 has rank 3
(n * (n+1)) <= 2*X / R –> n * (n+1) <= 7 –> n = 2
Therefore chef 3 would make 2 pratas in 11 minutes. Now in total, we have 8 pratas.
Chef 4 has rank 4
(n * (n+1)) <= 2*X / R –> n * (n+1) <= 5 –> n = 1.
In this way, 4 chefs will be able to make only 9 pratas in 11 minutes. That means 11 minutes are less to complete the order of 10 pratas.
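The walkthrough above can be verified with a tiny Python sketch that simulates each chef minute by minute:

```python
def pratas_in(minutes, ranks):
    """Total pratas all chefs can finish within the given minutes.
    A chef of rank r takes r, 2r, 3r, ... minutes per successive prata."""
    total = 0
    for r in ranks:
        t, k = r, 1          # t = finish time of the next prata
        while t <= minutes:
            total += 1
            k += 1
            t += k * r
    return total

print(pratas_in(11, [1, 2, 3, 4]))  # 9  -> 11 minutes are not enough
print(pratas_in(12, [1, 2, 3, 4]))  # 11 -> 12 minutes complete the order of 10
```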
Roti Prata SPOJ Problem Solution Approach
Now how we can solve this problem? You have to make P pratas using given Chefs with minimum time. Now, whenever we have to minimize something and we have a particular condition to fulfill we think of
binary search. To apply a Binary search we need to decide 3 things.
1. The range on which we will apply the binary search. That is the value of low and high.
2. How we will decide in which direction to continue the search.
3. The validation criteria: if we're at mid, how will we know whether this is our answer or whether we have to search further?
But before that, it's important to understand why we're applying binary search here. Think of it this way: we have a range, i.e., a solution search space. We don't know the exact minimum time, but we have a range in which the answer must lie. The smallest possible answer is 1 minute (a chef with rank 1 finishes his first prata in 1 minute). For the worst case, imagine the complete order going to a single chef: the time needed is at most max(rank[i]) * ( P * (P+1) / 2 ), the time the slowest chef takes to make all P pratas alone.
Now, the 1st step is done. Low = 1 and High = max(rank[i]) * ( P * (P+1) / 2 ).
In the 2nd step, we need to decide on criteria that will tell whether mid is a valid answer or not. We have to check whether we can make P pratas in the mid number of minutes or not.
If yes, then we will store this mid in our answer variable and will move high to mid – 1. Why? What if our mid is 15 and our correct answer is 10? If we can make P pratas in 15 minutes, we will try
to search a minimum then this. And therefore will move in the left direction.
If not, we will try to increase the time, therefore will move in the right direction and update low to mid + 1.
Step 2 is done. If we’re at mid and this is a valid answer then we will store this mid in our answer variable and try to find a better solution in the left direction otherwise will go to the right.
Now, the question arises how do we check if time = mid is a valid answer or not? For this, we have to write a valid function. We will calculate how many pratas the chefs can make in time = mid
minutes. For each chef, we will calculate how many pratas he can make in the given time and then add them up. If this count is greater than or equal to the required order P, then we will return true
otherwise we’ll return false.
Step 3 is also done. We have to run a loop for all chefs. We will calculate how many pratas chef[i] will make in a given time. Sum them up and check if it is greater or equal to the required P pratas
or not.
Now, let’s see the C++ solution to the roti prata SPOJ problem.
C++ Solution of Roti Prata SPOJ Problem
#include <bits/stdc++.h>
using namespace std;

// Returns true if the chefs can cook at least P pratas within t minutes.
bool valid(long long t, long long rank[], long long P, int n)
{
    long long cp = 0;
    for(int i = 0; i < n; i++)
    {
        long long tt = rank[i];     // time at which the first prata is done
        long long count = 0, val = 1;
        while(tt <= t)
        {
            count++;
            val++;
            tt += val * rank[i];    // the next prata takes val * rank[i] minutes
        }
        cp += count;
    }
    return cp >= P;
}

int main()
{
    long long t;
    cin >> t;
    while(t--)
    {
        long long P, n;
        cin >> P >> n;
        long long rank[n];
        for(int i = 0; i < n; i++)
            cin >> rank[i];
        long long low = 1, high = 0;
        for(int i = 0; i < n; i++)
            high = max(high, rank[i] * (P * (P + 1) / 2));
        long long ans = 0;
        while(low <= high)
        {
            long long mid = low + (high - low) / 2;
            if(valid(mid, rank, P, n))
            {
                ans = mid;          // mid works; try to find something smaller
                high = mid - 1;
            }
            else
                low = mid + 1;      // mid is too little time; search higher
        }
        cout << ans << endl;
    }
    return 0;
}
Java Solution of Roti Prata SPOJ Problem
import java.util.Scanner;

public class Main {

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int t = sc.nextInt();
        while (t > 0) {
            int P = sc.nextInt();
            int n = sc.nextInt();
            int[] rank = new int[n];
            for (int i = 0; i < rank.length; i++) {
                rank[i] = sc.nextInt();
            }
            System.out.println(parathaSpoj(rank, P));
            t--; // move on to the next test case
        }
    }

    // Returns true if the chefs can cook at least P pratas within mid minutes.
    static boolean isvalid(int[] arr, int P, int mid) {
        long cp = 0;
        for (int i = 0; i < arr.length; i++) {
            long time = arr[i]; // time at which the first prata is done
            int j = 2;
            while (time <= mid) {
                cp++;
                time = time + (long) arr[i] * j; // next prata takes arr[i] * j minutes
                j++;
            }
        }
        return cp >= P;
    }

    static int parathaSpoj(int[] arr, int paratha) {
        int ans = -1;
        int low = 1, high = Integer.MAX_VALUE;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (isvalid(arr, paratha, mid)) {
                ans = mid;          // mid works; try to find something smaller
                high = mid - 1;
            } else {
                low = mid + 1;      // mid is too little time; search higher
            }
        }
        return ans;
    }
}
If you want us to add more coding questions, kindly tell us in the comments. Don’t forget to share it with your friends. Thanks 🙂
Also Read: | {"url":"https://copyassignment.com/roti-prata-spoj-problem-solution/","timestamp":"2024-11-04T05:24:38Z","content_type":"text/html","content_length":"81811","record_id":"<urn:uuid:dfa683ac-c9c7-48aa-abb5-0893ef341b7c>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00693.warc.gz"} |
Generate and return the natural logarithm of the volume of an \(\ndim\)-dimensional ellipsoid.
See the documentation of pm_ellipsoid for computational and algorithmic details.
[in] gramian : The input matrix of the same type and kind as the output logVolEll, containing the upper triangle and diagonal of the representative Gramian matrix of the ellipsoid.
logVolEll : The output scalar of,
1. type real of any kind supported by the processor (e.g., RK, RK32, RK64, or RK128),
containing the natural logarithm of the volume of the \(\ndim\)-dimensional hyper-ellipsoid.
Possible calling interfaces ⛓
logVolEll = getLogVolEll(gramian)
If the Cholesky factorization of the input Gramian fails, the procedures of this generic interface will abort the program by calling error stop.
The condition size(gramian, 1) == size(gramian, 2) must hold for the corresponding input arguments.
The condition \(0. < \left|\ms{gramian}\right|\) must hold for the corresponding input arguments.
In other words, the input Gramian must be a positive definite matrix.
These conditions are verified only if the library is built with the preprocessor macro CHECK_ENABLED=1.
The pure procedure(s) documented herein become impure when the ParaMonte library is compiled with preprocessor macro CHECK_ENABLED=1.
By default, these procedures are pure in release build and impure in debug and testing builds.
Computing the volume of an ellipsoid from its Gramian in a fixed dimension, as implemented by the procedures of this generic interface, is computationally costly.
The unnecessary costs can be eliminated by precomputing the natural logarithm of the volume of the unit ball in the desired dimension once via setLogVolUnitBall and adding to it the sum of the
natural logarithms of the diagonal elements of the Cholesky factorization of the representative Gramian matrix of the ellipsoid.
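As an aside, the shortcut described above can be sketched in stdlib-only Python (this is an illustration of the formula log V = log V_unitBall(ndim) + Σ log L_ii, not part of the ParaMonte library; all function names here are hypothetical):

```python
from math import lgamma, log, pi, sqrt

def cholesky_lower(a):
    # Lower-triangular Cholesky factor; math.sqrt raises ValueError for a
    # non-positive-definite input, loosely mirroring the `error stop` above.
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = sqrt(a[i][i] - s) if i == j else (a[i][j] - s) / L[j][j]
    return L

def log_vol_unit_ball(ndim):
    # log of pi**(ndim/2) / Gamma(ndim/2 + 1), the volume of the unit ball
    return 0.5 * ndim * log(pi) - lgamma(0.5 * ndim + 1.0)

def log_vol_ell(gramian):
    # log-volume = log unit-ball volume + sum of logs of the Cholesky diagonal
    L = cholesky_lower(gramian)
    return log_vol_unit_ball(len(L)) + sum(log(L[i][i]) for i in range(len(L)))
```

For the 2-D identity Gramian, exp(log_vol_ell(...)) recovers pi, the area of the unit disk.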
See also
Example usage ⛓
gramian = reshape([1., 0., 0., 1.], [2,2])     ! correlation 0.
exp(getLogVolEll(gramian))
gramian = reshape([1., 0.5, 0.5, 1.], [2,2])   ! correlation 0.5
exp(getLogVolEll(gramian))
gramian = reshape([1., 0.99, 0.99, 1.], [2,2]) ! correlation 0.99
exp(getLogVolEll(gramian))
Example Unix compile command via Intel ifort compiler ⛓
ifort -fpp -standard-semantics -O3 -Wl,-rpath,../../../lib -I../../../inc main.F90 ../../../lib/libparamonte* -o main.exe
Example Windows Batch compile command via Intel ifort compiler ⛓
ifort /fpp /standard-semantics /O3 /I:..\..\..\include main.F90 ..\..\..\lib\libparamonte*.lib /exe:main.exe
Example Unix / MinGW compile command via GNU gfortran compiler ⛓
gfortran -cpp -ffree-line-length-none -O3 -Wl,-rpath,../../../lib -I../../../inc main.F90 ../../../lib/libparamonte* -o main.exe
Example output ⛓
High Priority: A positive-definiteness runtime check for the gramian input argument of this generic interface must be added.
Final Remarks ⛓
If you believe this algorithm or its documentation can be improved, we appreciate your contribution and help to edit this page's documentation and source file on GitHub.
For details on the naming abbreviations, see this page.
For details on the naming conventions, see this page.
This software is distributed under the MIT license with additional terms outlined below.
1. If you use any parts or concepts from this library to any extent, please acknowledge the usage by citing the relevant publications of the ParaMonte library.
2. If you regenerate any parts/ideas from this library in a programming environment other than those currently supported by this ParaMonte library (i.e., other than C, C++, Fortran, MATLAB, Python,
R), please also ask the end users to cite this original ParaMonte library.
This software is available to the public under a highly permissive license.
Help us justify its continued development and maintenance by acknowledging its benefit to society, distributing it, and contributing to it.
Amir Shahmoradi, April 23, 2017, 1:36 AM, Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin
Definition at line 746 of file pm_ellipsoid.F90. | {"url":"https://www.cdslab.org/paramonte/fortran/latest/interfacepm__ellipsoid_1_1getLogVolEll.html","timestamp":"2024-11-07T19:10:18Z","content_type":"application/xhtml+xml","content_length":"39162","record_id":"<urn:uuid:20e153ae-b1fc-4769-b58c-5e4a041b4604>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00080.warc.gz"} |
From a point Q, the length of the tangent to a circle is 24 cm and the distance of Q from the centre is 25 cm. The radius of the circle is (A) 7 cm (B) 12 cm (C) 15 cm (D) 24.5 cm
To find the radius of the circle, we can use the Pythagorean theorem. In this scenario, we have a right-angled triangle formed by the radius of the circle, the tangent from point Q to the circle, and
the line segment from Q to the center of the circle.
The length of the tangent (from point Q to the point of tangency on the circle) is given as 24 cm.
The distance from Q to the center of the circle is 25 cm.
The radius of the circle (which we need to find) is the third side of the right-angled triangle.
In a right-angled triangle, the Pythagorean theorem states that the square of the hypotenuse (the side opposite the right angle, which in this case is the line segment from Q to the center) is equal
to the sum of the squares of the other two sides.
Let r be the radius of the circle. Then, according to the Pythagorean theorem:
25² = 24² + r²
625 = 576 + r²
r² = 625 − 576
r² = 49
r = √49
r = 7 cm
Therefore, the radius of the circle is 7 cm.
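The same arithmetic can be verified with a couple of lines of Python (purely illustrative):

```python
import math

distance_to_center = 25.0  # cm (the hypotenuse, from Q to the centre)
tangent_length = 24.0      # cm (one leg, the tangent from Q)

# The radius is the remaining leg of the right triangle.
radius = math.sqrt(distance_to_center ** 2 - tangent_length ** 2)
print(radius)  # 7.0
```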
Introduction to the Problem
In geometric problems involving circles and tangents, one often encounters scenarios where the relationship between the radius of the circle, the length of the tangent from an external point, and the
distance from this point to the circle’s center is explored. A classic example of such a problem is determining the radius of a circle given the length of a tangent from an external point and the
distance from this point to the circle’s center. This problem not only tests one’s understanding of basic geometric principles but also their ability to apply the Pythagorean theorem in a practical setting.
Understanding the Given Data
In the problem presented, we have two key pieces of information: the length of the tangent from a point Q to the circle is 24 cm, and the distance from Q to the center of the circle is 25 cm. These
two data points are crucial as they form two sides of a right-angled triangle. The point where the tangent touches the circle forms a right angle with the radius at that point. This right angle is
fundamental to applying the Pythagorean theorem, which is central to solving this problem.
The Role of the Pythagorean Theorem
The Pythagorean theorem is a cornerstone of geometry, stating that in a right-angled triangle, the square of the length of the hypotenuse (the side opposite the right angle) is equal to the sum of
the squares of the other two sides. In our scenario, the hypotenuse is the line segment from point Q to the center of the circle, and the other two sides are the radius of the circle and the length
of the tangent from Q to the circle.
Calculating the Radius
To find the radius of the circle, we set up an equation based on the Pythagorean theorem. The square of the distance from Q to the center (25 cm) equals the sum of the squares of the radius (unknown)
and the length of the tangent (24 cm). Mathematically, this is represented as 25² = r² + 24². Solving this equation will give us the value of the radius.
Solving the Equation
Substituting the known values into the equation, we get 625 = r² + 576. Rearranging the equation to solve for r², we find r² = 625 − 576, which simplifies to r² = 49. The final step is to find the
square root of 49, which yields r = 7. This calculation is straightforward but requires careful attention to ensure accuracy.
The radius of the circle, in this case, is found to be 7 cm. This problem is a classic example of applying the Pythagorean theorem in a geometric context. It demonstrates how a seemingly complex
problem can be broken down into simpler parts using fundamental principles of mathematics. Such problems not only enhance one’s problem-solving skills but also deepen their understanding of how
geometry is applied in various scenarios. | {"url":"https://www.tiwariacademy.com/ncert-solutions/class-10/maths/chapter-10/exercise-10-2/from-a-point-q-the-length-of-the-tangent-to-a-circle-is-24-cm-and-the-distance-of-q-from-the-centre-is-25-cm-the-radius-of-the-circle-is-a-7-cm-b-12-cm-c-15-cm-d-24-5-cm/","timestamp":"2024-11-04T07:29:09Z","content_type":"text/html","content_length":"240802","record_id":"<urn:uuid:7c9ea7b3-2359-40ed-b1f1-a341c4ddb9d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00585.warc.gz"} |
ACT Math
– ACT Math –
How to Amplify Your ACT Math Performance
When it comes to the math section, the ACT is much more straightforward than the SAT. The SAT is famous for its “puzzle-like” problems, which not only test your knowledge of the material, but also
test your ability to figure out what the heck they’re asking in the first place!
“Farmer Joe had a tractor that was going at a certain speed, but the speed got cut by 1/3rd. However, he’s on a track that is 40% longer than Farmer Mike’s track, whose tractor is….”
And so on and so forth.
Fortunately for you, the ACT Math section is really just figuring out one thing: do you know your stuff?
If you’ve never taken an in-depth look at ACT math before, you can see an entire practice section for free here:
Make no mistake – many of these problems are still difficult. But they’re not particularly tricky. If you know your stuff, you should be able to get a perfect or near-perfect score. Yet many students
who are awesome at math in school have a nightmarish time doing well on the ACT math section. Why?
The ACT Math section has much more to do with TIME than it does with MATERIAL
The questions in this section aren’t very tough on their own – but the time limits imposed on you by the section make them very tough.
If I ask you to put on a dress shirt and make a bowl of cereal, you’re not going to be very worried. Easy! But if I ask you to do both of these things in under 8 seconds each, you’re in a very
different boat!
The ACT Math section is the same way. If you learn the following skills, you’ll find this section to be extremely doable:
• How to manage your time effectively and invest it in the right ways.
• How to enhance your speed on the most common problem types.
If you practice working on your timing strategy, you’re going to eliminate your number one impediment to high ACT math scores. This takes discipline, focus, and a lot of practice – but it’s extremely
doable (and I’ve taught thousands of students to do it with big results).
Of course, you still need to know your stuff. If you don’t know your algebra, arithmetic, geometry, etc., you’re going to need to work on that too (section six will cover this in more detail).
However, by working on your timing strategy as you learn your material, you’ll become a much more holistically skilled ACT test taker.
With that in mind, let’s take a deeper look at each of the two key ACT Math timing strategies:
Investing Your Time Like an ACT Master
The ACT Math section contains 60 problems. You have to complete it in 60 minutes. If you do the math (get it?), this gives you one minute per problem.
Let’s make things even simpler: every problem is worth one point.
Therefore, timing shouldn’t be that big of an issue, right? Sixty problems, sixty points, sixty minutes. Now things get a bit more complicated:
ACT Math problems get progressively more difficult as the section progresses. The easiest problems are in the beginning of the section, and the hardest problems are at the end.
At first, this seems like a non-problem. After all, if the easiest stuff is in order, and the harder stuff is later on, doesn’t it just mean that you should just go in order? That would be the
intuitive thing to do, but there’s an issue here:
“Easier” for most people might not be easier FOR YOU – and vice versa!
In other words, the problems are statistically arranged from easiest to hardest. The most students get #1 right, and the fewest students get #60 right. But you aren’t “most students.” If you’re
awesome at functions, a #59 having to do with functions might be a joke. But if you’re not so good at fractions, a #3 having to do with fractions might be practically impossible for you.
With that in mind, the best way to invest your time on the ACT math section is to solve the stuff that’s easiest for YOU first, THEN go back to the other stuff later on.
There’s a pretty simple process for all this:
1. Read each problem.
2. If you know how to solve it right away, and it doesn’t seem like it’ll take a while, do it immediately.
3. If you read it, and you know how to do it, but it seems like it’ll take you a while, mark it as such (with an X, a circle, whatever) and move on to the next problem
4. If you read it, and you have no idea what the heck it’s even asking, mark it another way, then move on
5. When you’re done with all sixty problems, fly through and answer all the problems you marked in step three.
6. When you’re done with all those, go back and finish all the problems you marked in step four (if you have time)
That’s all there is to it! By working through the problems in this way – a way that perfectly matches your own skill levels – you’re making sure to scoop as many points as possible. You’re nabbing
the easy points first, then the “doable but sort of time consuming problems next,” and attempting the really tricky stuff at the end. This is efficiency at its absolute finest!
My online ACT program, Green Test Prep, will give you a much more in-depth system for working through this process (and for learning all the math facts you’ll need to employ it more effectively).
However, even this basic understanding of the proper ACT Math pacing strategy will pay dividends when it comes to your scores.
Once you practice this process, there’s another, even more powerful strategy that you can build into your routine:
Thievery: Your Key to Enhancing Your Speed on Every Problem
Picking the easiest problems first is a great way to save time and grab more points. However, in a perfect world, every problem would be easy and fast. Fortunately, there’s one strategy that applies
to almost 50% of all ACT Math problems that automatically saves you time and helps you to get to the answer more quickly: stealing the answers.
The ACT Math section is pure multiple choice. That means that every single problem is putting the right answer in front of you. Sure, there are four wrong answers, too – but if you get in the habit
of obsessing over the available choices, you’ll start to become ludicrously efficient.
Before you put any work into any ACT Math problem, the first question you should ask yourself is: can I steal, plug in, or eliminate the answer choices instead of doing any real work?
Don’t ever put pencil to paper until you’ve taken a good look at the answer choices. Not only will they show you what “form” the answer will need to be in, and give you clues as to how to solve the
problem (for instance, if there’s a “root 3” in there, there’s probably a 30/60/90 triangle in the problem too) – they can actually be used to solve the problem.
For instance, if you’re asked: “what’s the smallest number that does X, Y, and Z?” You could spend minutes puzzling over the possibilities – or you could just take the answers, starting with the
smallest one first, and see if they do X, Y, and Z. As soon as you find the answer choice that fulfills those criteria, you’re done!
You can use this strategy on over half of all ACT Math problems!
Which number has [these qualities]?
What’s the least common multiple of X and Y and Z?
What number does NOT fulfill these requirements?
What number solved [this equation]?
Don’t do work unless you absolutely have to! If you focus on using the answer choices to help you find the answer choice, rather than coming up with the answer and checking it against the answer
choices, you’re going to be twice as fast (and half as frustrated).
Two Tricks to Rule Them All
If you focus on “picking the low hanging fruit” by answering the easiest math problems first, and if you use the answer choices to short-circuit your problem solving process, you’re going to be
significantly faster. And, as you’ve learned, speed is 90% of the game when it comes to getting a great ACT Math score.
As soon as you have a chance, grab a practice ACT Math section and give both strategies a shot with a watch in hand – you’ll be blown away by how much more quickly you move through!
Shameless plug alert: if you’re looking for a much more in-depth analysis of how to use these strategies, along with the entire set of additional strategies, tactics, and tricks that I use to hack
the ACT Math section (plus a guide on learning EVERY single fact, figure, and formula that you need to beat ACT Math), I recommend checking out Green Test Prep. You can get started today and work on
your own schedule. No other program on the market has a higher student score improvement, and the sooner you get started, the more you’ll be able to improve!
If you’ve had enough math in your life, it’s time to turn over to the next frontier: READING! Let’s go to the next section to figure out how to: | {"url":"https://greentestprep.com/resources/act-prep/act-crash-course/act-math/","timestamp":"2024-11-12T09:04:03Z","content_type":"text/html","content_length":"68268","record_id":"<urn:uuid:5ec97874-10a3-4113-852f-ff216acf6235>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00328.warc.gz"} |
SQL select Sum of joined table
Calculating the Sum of Values from a Joined Table in SQL
Let's say you have two tables: Orders and Order_Items. Orders contains information about individual orders, while Order_Items stores details about each item within an order. You want to find the
total amount spent on each order.
Here's a sample scenario:
Orders Table:
OrderID CustomerID OrderDate
1 101 2023-03-01
2 102 2023-03-05
3 101 2023-03-10
Order_Items Table:
OrderItemID OrderID ItemName Price Quantity
1 1 Product A 10.00 2
2 1 Product B 15.00 1
3 2 Product C 20.00 3
4 3 Product D 8.00 4
The original code might look something like this:
SELECT
    o.OrderID,
    SUM(oi.Price * oi.Quantity) AS TotalAmount
FROM
    Orders o
JOIN
    Order_Items oi ON o.OrderID = oi.OrderID
GROUP BY
    o.OrderID;
Breakdown of the Code:
• SELECT o.OrderID, SUM(oi.Price * oi.Quantity) AS TotalAmount: This line selects the OrderID from the Orders table (aliased as o) and calculates the sum of the product of Price and Quantity from
the Order_Items table (aliased as oi). The result is then labelled as TotalAmount.
• FROM Orders o JOIN Order_Items oi ON o.OrderID = oi.OrderID: This joins the two tables based on the shared OrderID column.
• GROUP BY o.OrderID: This groups the results by OrderID, ensuring that the sum is calculated for each individual order.
This query will return a result table like this:
OrderID TotalAmount
1 35.00
2 60.00
3 32.00
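To see the query run end to end, here is a self-contained sketch using Python's built-in sqlite3 module with the sample data above (table and column names as in the example):

```python
import sqlite3

# Build the sample Orders / Order_Items tables in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders (OrderID INTEGER, CustomerID INTEGER, OrderDate TEXT);
CREATE TABLE Order_Items (OrderItemID INTEGER, OrderID INTEGER,
                          ItemName TEXT, Price REAL, Quantity INTEGER);
INSERT INTO Orders VALUES (1, 101, '2023-03-01'), (2, 102, '2023-03-05'),
                          (3, 101, '2023-03-10');
INSERT INTO Order_Items VALUES
    (1, 1, 'Product A', 10.00, 2), (2, 1, 'Product B', 15.00, 1),
    (3, 2, 'Product C', 20.00, 3), (4, 3, 'Product D', 8.00, 4);
""")

rows = conn.execute("""
    SELECT o.OrderID, SUM(oi.Price * oi.Quantity) AS TotalAmount
    FROM Orders o
    JOIN Order_Items oi ON o.OrderID = oi.OrderID
    GROUP BY o.OrderID
    ORDER BY o.OrderID
""").fetchall()
print(rows)  # [(1, 35.0), (2, 60.0), (3, 32.0)]
```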
Important Points to Consider:
• Data Types: Ensure that the columns used for the calculation (Price, Quantity, and the final TotalAmount) have appropriate data types. You typically want to use decimal or numeric types for
monetary values.
• Null Values: If any of the Price or Quantity columns might contain null values, you'll need to handle them appropriately. You can either exclude null values using WHERE clause or use a function
like ISNULL to replace them with a default value (e.g., 0).
• Performance: For larger datasets, consider adding indexes to the columns used in the JOIN and GROUP BY operations.
Additional Tips:
• You can extend this query to calculate other useful metrics, such as the average price per item per order, the total number of items sold, or the total revenue for a specific period.
• You can incorporate other SQL features like WHERE to filter the results, HAVING to filter groups, or ORDER BY to sort the output.
Understanding the SQL SUM Function:
The SUM function is a very useful aggregate function in SQL. It calculates the sum of a specified column within a group of rows. It is commonly used in conjunction with the GROUP BY clause to
calculate sums for different categories or groups within your data.
By understanding this concept, you can apply it to other SQL queries and build more complex calculations, making your data analysis even more powerful. | {"url":"https://laganvalleydup.co.uk/post/sql-select-sum-of-joined-table","timestamp":"2024-11-15T03:14:47Z","content_type":"text/html","content_length":"83284","record_id":"<urn:uuid:318de16d-22e6-49e9-8e68-0bbc95b669b8>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00603.warc.gz"} |
NCERT Class 11 Mathematics/Sets – Chapter 1
NCERT Class 11 Mathematics/Sets – Chapter 1 is about the important points to be studied about Sets. Here you can find out the important definitions of sets and different types of sets.
NCERT Class 11 Mathematics/SETS – Chapter 1
Important Points to Remember:
Set: A set is a well-defined collection of objects.
There are two methods of representing a set:
i) Roster or tabular form
ii) Set-builder form
In roster form, the order in which the elements are listed is immaterial.
While writing the set in roster form, an element is generally not repeated.
The Empty Set or the Null Set or the Void Set:
A set which does not contain any element is called the empty set or the null set or the void set.
Finite and Infinite Sets:
A set which is empty or consists of a definite number of elements is called finite; otherwise, the set is called infinite.
Equal Sets: Two sets A and B are said to be equal if they have exactly the same elements and we write A=B.
Otherwise, the sets are said to be unequal.
A set does not change if one or more elements of the set are repeated.
Subsets: A set A is said to be a subset of a set B if every element of A is also an element of B.
Power set: The collection of all subsets of a set A is called the power set of A. It is denoted by P (A).
Union of sets:
The union of two sets A and B is the set C which consists of all those elements which are either in A or in B (including those which are in both).
Intersection of sets:
The intersection of two sets A and B is the set of all those elements which belong to both A and B.
Difference of sets:
The difference of the sets A and B in this order is the set of elements which belong to A but not to B. Symbolically, we write A – B.
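As an aside (not part of the NCERT text), the union, intersection, and difference operations map directly onto Python’s built-in set type:

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}

print(A | B)  # union of A and B: {1, 2, 3, 4, 5}
print(A & B)  # intersection of A and B: {3, 4}
print(A - B)  # difference A - B: {1, 2}
```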
Complement of a set:
Let U be the universal set and A a subset of U. Then the complement of A is the set of all elements of U which are not the elements of A. | {"url":"https://www.learnmathsonline.org/cbse-class-11-maths/ncert-class-11-mathematics-sets-chapter-1/","timestamp":"2024-11-06T23:19:30Z","content_type":"text/html","content_length":"71788","record_id":"<urn:uuid:439341d1-2c8d-42b9-a61c-4401e82ec8a7>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00470.warc.gz"} |
Plane Algebraic Curves with Prescribed Singularities
We give a survey on the known results about the problem of the existence of complex and real algebraic curves in the plane with prescribed singularities up to analytic and topological equivalence.
The question is whether, for a given positive integer d and a finite number of given analytic or topological singularity types, there exist a plane (irreducible) curve of degree d having singular
points of the given type as its only singularities. The set of all such curves is a quasiprojective variety, which we call an equisingular family, denoted by ESF. We describe, in terms of numerical
invariants of the curves and their singularities, the state of the art concerning necessary and sufficient conditions for the non-emptiness and T-smoothness (i.e., being smooth of expected dimension)
of the corresponding ESF. The considered singularities can be arbitrary, but we pay special attention to plane curves with nodes and cusps, the most studied case, where still no complete answer is
known in general. An important result is, however, that the necessary and the sufficient conditions show the same asymptotics for T-smooth equisingular families if the degree goes to infinity.
• Deformation problem
• Equisingular families
• Existence problem
• Irreducibility problem
• Many singularities
• Plane algebraic curves
• T-smoothness problem
Dive into the research topics of 'Plane Algebraic Curves with Prescribed Singularities'. Together they form a unique fingerprint. | {"url":"https://cris.tau.ac.il/en/publications/plane-algebraic-curves-with-prescribed-singularities","timestamp":"2024-11-07T09:29:21Z","content_type":"text/html","content_length":"50944","record_id":"<urn:uuid:e86ecb86-8450-42ab-b72e-aaecfd1f0290>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00118.warc.gz"} |
ell to Planck length Converter
Switch to Planck length to ell Converter
How to use this ell to Planck length Converter
Follow these steps to convert given length from the units of ell to the units of Planck length.
1. Enter the input ell value in the text field.
2. The calculator converts the given ell into Planck length in real time using the conversion formula, and displays the result under the Planck length label. You do not need to click any button. If the input changes, the Planck length value is re-calculated automatically.
3. You may copy the resulting Planck length value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the reset button present below the input field.
What is the Formula to convert ell to Planck length?
The formula to convert given length from ell to Planck length is:
Length[(Planck length)] = Length[(ell)] × 7.072800964237818e+34
Substitute the given value of length in ell, i.e., Length[(ell)] in the above formula and simplify the right-hand side value. The resulting value is the length in planck length, i.e., Length[(Planck length)].
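For illustration, the conversion formula can be wrapped in a small Python function (a sketch, not the site’s actual implementation; the function name is made up):

```python
ELL_TO_PLANCK = 7.072800964237818e+34  # conversion factor from the formula above

def ell_to_planck_length(ell):
    # Length[(Planck length)] = Length[(ell)] * 7.072800964237818e+34
    return ell * ELL_TO_PLANCK

print(ell_to_planck_length(5))  # ~3.5364e+35, matching Example 1 below
```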
Consider that a traditional Scottish fabric is sold in lengths of 5 ells.
Convert this length from ells to Planck length.
The length in ell is:
Length[(ell)] = 5
The formula to convert length from ell to planck length is:
Length[(Planck length)] = Length[(ell)] × 7.072800964237818e+34
Substitute given weight Length[(ell)] = 5 in the above formula.
Length[(Planck length)] = 5 × 7.072800964237818e+34
Length[(Planck length)] = 3.536400482118909e+35
Final Answer:
Therefore, 5 ell is equal to 3.536400482118909e+35 Planck length.
The length is 3.536400482118909e+35 Planck length, in planck length.
Consider that a tailor measures 2 ells of cloth for a garment.
Convert this measurement from ells to Planck length.
The length in ell is:
Length[(ell)] = 2
The formula to convert length from ell to planck length is:
Length[(Planck length)] = Length[(ell)] × 7.072800964237818e+34
Substitute given weight Length[(ell)] = 2 in the above formula.
Length[(Planck length)] = 2 × 7.072800964237818e+34
Length[(Planck length)] = 1.4145601928475636e+35
Final Answer:
Therefore, 2 ell is equal to 1.4145601928475636e+35 Planck length.
The length is 1.4145601928475636e+35 Planck length, in planck length.
ell to Planck length Conversion Table
The following table gives some of the most used conversions from ell to Planck length.
ell (ell) Planck length (Planck length)
0 ell 0 Planck length
1 ell 7.072800964237818e+34 Planck length
2 ell 1.4145601928475636e+35 Planck length
3 ell 2.1218402892713455e+35 Planck length
4 ell 2.8291203856951273e+35 Planck length
5 ell 3.536400482118909e+35 Planck length
6 ell 4.243680578542691e+35 Planck length
7 ell 4.9509606749664724e+35 Planck length
8 ell 5.6582407713902545e+35 Planck length
9 ell 6.3655208678140366e+35 Planck length
10 ell 7.072800964237818e+35 Planck length
20 ell 1.4145601928475636e+36 Planck length
50 ell 3.536400482118909e+36 Planck length
100 ell 7.072800964237818e+36 Planck length
1000 ell 7.072800964237819e+37 Planck length
10000 ell 7.072800964237819e+38 Planck length
100000 ell 7.072800964237818e+39 Planck length
An ell is a unit of length used historically in textiles and other measurements. One ell is equivalent to approximately 45 inches or 1.143 meters.
The ell was originally based on the length of a person's arm or the length of a specific type of cloth, and its definition varied between regions and periods. The unit was commonly used in the
textile industry for measuring fabric lengths.
Ells are less commonly used today but remain of historical interest in the study of historical measurements and practices, particularly in textiles and historical trade.
Planck length
The Planck length is a fundamental unit of length in physics, representing the smallest measurable distance in the universe. One Planck length is approximately 1.616 × 10^(-35) meters.
The Planck length is defined based on fundamental physical constants, including the speed of light, the gravitational constant, and Planck's constant. It represents a theoretical limit below which
the concept of distance may not have any physical meaning due to quantum fluctuations and the effects of gravity.
The Planck length is used in theoretical physics to explore the limits of our understanding of space and time, particularly in quantum gravity and theories of quantum mechanics. It provides a scale
for studying the fundamental structure of the universe and the interplay between quantum mechanics and gravity.
Frequently Asked Questions (FAQs)
1. What is the formula for converting ell to Planck length in Length?
The formula to convert ell to Planck length in Length is:
ell * 7.072800964237818e+34
2. Is this tool free or paid?
This Length conversion tool, which converts ell to Planck length, is completely free to use.
3. How do I convert Length from ell to Planck length?
To convert Length from ell to Planck length, you can use the following formula:
ell * 7.072800964237818e+34
For example, if you have a value in ell, you substitute that value in place of ell in the above formula, and solve the mathematical expression to get the equivalent value in Planck length.
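The formula above can be sketched in Python; the constant is the ell-to-Planck-length factor from the table, and the function name is illustrative:

```python
# Conversion factor from the formula above: Planck lengths per ell.
ELL_TO_PLANCK = 7.072800964237818e+34

def ell_to_planck(ell):
    """Convert a length in ells to Planck lengths (illustrative helper)."""
    return ell * ELL_TO_PLANCK
```

For example, `ell_to_planck(2)` reproduces the table entry for 2 ell.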
Bayesian linear regression model with semiconjugate priors for stochastic search variable selection (SSVS)
The Bayesian linear regression model object mixsemiconjugateblm specifies the joint prior distribution of the regression coefficients and the disturbance variance (β, σ^2) for implementing SSVS (see
[1] and [2]) assuming β and σ^2 are dependent random variables.
In general, when you create a Bayesian linear regression model object, it specifies the joint prior distribution and characteristics of the linear regression model only. That is, the model object is
a template intended for further use. Specifically, to incorporate data into the model for posterior distribution analysis and feature selection, pass the model object and data to the appropriate
object function.
PriorMdl = mixsemiconjugateblm(NumPredictors) creates a Bayesian linear regression model object (PriorMdl) composed of NumPredictors predictors and an intercept, and sets the NumPredictors property.
The joint prior distribution of (β, σ^2) is appropriate for implementing SSVS for predictor selection [2]. PriorMdl is a template that defines the prior distributions and the dimensionality of β.
PriorMdl = mixsemiconjugateblm(NumPredictors,Name,Value) sets properties (except NumPredictors) using name-value pair arguments. Enclose each property name in quotes. For example, mixsemiconjugateblm
(3,'Probability',abs(rand(4,1))) specifies random prior regime probabilities for all four coefficients in the model.
You can set writable property values when you create the model object by using name-value argument syntax, or after you create the model object by using dot notation. For example, to exclude an
intercept from the model, enter
PriorMdl.Intercept = false;
NumPredictors — Number of predictor variables
nonnegative integer
Number of predictor variables in the Bayesian multiple linear regression model, specified as a nonnegative integer.
NumPredictors must be the same as the number of columns in your predictor data, which you specify during model estimation or simulation.
When specifying NumPredictors, exclude the intercept term from the count.
After creating a model, if you change the value of NumPredictors using dot notation, then these parameters revert to their default values:
• Variable names (VarNames)
• Prior mean of β (Mu)
• Prior variances of β for each regime (V)
• Prior correlation matrix of β (Correlation)
• Prior regime probabilities (Probability)
Data Types: double
Mu — Component-wise mean hyperparameter of Gaussian mixture prior on β
zeros(Intercept + NumPredictors,2) (default) | numeric matrix
Component-wise mean hyperparameter of the Gaussian mixture prior on β, specified as an (Intercept + NumPredictors)-by-2 numeric matrix. The first column contains the prior means for component 1 (the
variable-inclusion regime, that is, γ = 1). The second column contains the prior means for component 2 (the variable-exclusion regime, that is, γ = 0).
• If Intercept is false, then Mu has NumPredictors rows. mixsemiconjugateblm sets the prior mean of the NumPredictors coefficients corresponding to the columns in the predictor data set, which you
specify during estimation, simulation, or forecasting.
• Otherwise, Mu has NumPredictors + 1 rows. The first row corresponds to the prior mean of the intercept, and all other rows correspond to the predictor variables.
To perform SSVS, use the default value of Mu.
Example: In a 3-coefficient model, 'Mu',[0.5 0; 0.5 0; 0.5 0] sets the component 1 prior mean of all coefficients to 0.5 and sets the component 2 prior mean of all coefficients to 0.
Data Types: double
V — Component-wise variance hyperparameter of Gaussian mixture prior on β
repmat([10 0.1],Intercept + NumPredictors,1) (default) | positive numeric matrix
Component-wise variance hyperparameter of the Gaussian mixture prior on β, specified as an (Intercept + NumPredictors)-by-2 positive numeric matrix. The first column contains the prior variance factors for
component 1 (the variable-inclusion regime, that is, γ = 1). The second column contains the prior variance factors for component 2 (the variable-exclusion regime, that is, γ = 0).
• If Intercept is false, then V has NumPredictors rows. mixsemiconjugateblm sets the prior variance factor of the NumPredictors coefficients corresponding to the columns in the predictor data set,
which you specify during estimation, simulation, or forecasting.
• Otherwise, V has NumPredictors + 1 rows. The first row corresponds to the prior variance factor of the intercept, and all other rows correspond to the predictor variables.
• To perform SSVS, specify a larger variance factor for regime 1 than for regime 2 (for all j, specify V(j,1) > V(j,2)).
• For more details on what value to specify for V, see [1].
Example: In a 3-coefficient model, 'V',[100 1; 100 1; 100 1] sets the component 1 prior variance factor of all coefficients to 100 and sets the component 2 prior variance factor of all coefficients
to 1.
Data Types: double
Probability — Prior probability distribution for variable inclusion and exclusion regimes
0.5*ones(Intercept + NumPredictors,1) (default) | numeric vector of values in [0,1] | function handle
Prior probability distribution for the variable inclusion and exclusion regimes, specified as an (Intercept + NumPredictors)-by-1 numeric vector of values in [0,1], or a function handle in the form
@fcnName, where fcnName is the function name. Probability represents the prior probability distribution of γ = {γ[1],…,γ[K]}, where:
• K = Intercept + NumPredictors, which is the number of coefficients in the regression model.
• γ[k] ∈ {0,1} for k = 1,…,K. Therefore, the sample space has a cardinality of 2^K.
• γ[k] = 1 indicates variable VarNames(k) is included in the model, and γ[k] = 0 indicates that the variable is excluded from the model.
If Probability is a numeric vector:
• Rows correspond to the variable names in VarNames. For models containing an intercept, the prior probability for intercept inclusion is Probability(1).
• For k = 1,…,K, the prior probability for excluding variable k is 1 – Probability(k).
• Prior probabilities of the variable-inclusion regime, among all variables and the intercept, are independent.
If Probability is a function handle, then it represents a custom prior distribution of the variable-inclusion regime probabilities. The corresponding function must have this declaration statement
(the argument and function names can vary):
logprob = regimeprior(varinc)
• logprob is a numeric scalar representing the log of the prior distribution. You can write the prior distribution up to a proportionality constant.
• varinc is a K-by-1 logical vector. Elements correspond to the variable names in VarNames and indicate the regime in which the corresponding variable exists. varinc(k) = true indicates VarNames(k) is included in the model, and varinc(k) = false indicates that it is excluded from the model.
You can include more input arguments, but they must be known when you call mixsemiconjugateblm.
For details on what value to specify for Probability, see [1].
Example: In a 3-coefficient model, 'Probability',rand(3,1) assigns random prior variable-inclusion probabilities to each coefficient.
Data Types: double | function_handle
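Because inclusion events are independent a priori when Probability is a numeric vector, the prior probability of any particular regime vector is a simple product. The following is an illustrative Python sketch of that product (not the MATLAB API; `regime_prior` is a hypothetical name):

```python
def regime_prior(gamma, prob):
    """Joint prior probability of a regime vector gamma (entries 0 or 1),
    given per-variable inclusion probabilities prob (assumes independence)."""
    p = 1.0
    for g, pk in zip(gamma, prob):
        # included variables contribute prob(k); excluded ones 1 - prob(k)
        p *= pk if g == 1 else 1.0 - pk
    return p
```

With the default Probability of 0.5 for K = 4 coefficients, each of the 2^4 regimes has prior probability 0.5^4 = 0.0625.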
Correlation — Prior correlation matrix of β
eye(Intercept + NumPredictors) (default) | numeric, positive definite matrix
Prior correlation matrix of β for both components in the mixture model, specified as an (Intercept + NumPredictors)-by-(Intercept + NumPredictors) numeric, positive definite matrix. Consequently, the
prior covariance matrix for component j in the mixture model is diag(sqrt(V(:,j)))*Correlation*diag(sqrt(V(:,j))), where V is the matrix of coefficient variances.
Rows and columns correspond to the variable names in VarNames.
By default, regression coefficients are uncorrelated, conditional on the regime.
You can supply any appropriately sized numeric matrix. However, if your specification is not positive definite, mixsemiconjugateblm issues a warning and replaces your specification with
CorrelationPD, where:
CorrelationPD = 0.5*(Correlation + Correlation.');
For details on what value to specify for Correlation, see [1].
Data Types: double
Object Functions
estimate Perform predictor variable selection for Bayesian linear regression models
simulate Simulate regression coefficients and disturbance variance of Bayesian linear regression model
forecast Forecast responses of Bayesian linear regression model
plot Visualize prior and posterior densities of Bayesian linear regression model parameters
summarize Distribution summary statistics of Bayesian linear regression model for predictor variable selection
Create Prior Model for SSVS
Consider the multiple linear regression model that predicts the US real gross national product (GNPR) using a linear combination of industrial production index (IPI), total employment (E), and real
wages (WR).
${\text{GNPR}}_{t}={\beta }_{0}+{\beta }_{1}{\text{IPI}}_{t}+{\beta }_{2}{\text{E}}_{t}+{\beta }_{3}{\text{WR}}_{t}+{\epsilon }_{t}.$
For all $t$, ${\epsilon }_{t}$ is a series of independent Gaussian disturbances with a mean of 0 and variance ${\sigma }^{2}$.
Assume these prior distributions for $\mathit{k}$ = 0,...,3:
• ${\beta }_{k}|{\sigma }^{2},{\gamma }_{k}={\gamma }_{k}\sqrt{{V}_{k1}}{Z}_{1}+\left(1-{\gamma }_{k}\right)\sqrt{{V}_{k2}}{Z}_{2}$, where ${Z}_{1}$ and ${Z}_{2}$ are independent, standard normal random variables. Therefore, the coefficients have a Gaussian mixture distribution. Assume all coefficients are conditionally independent, a priori.
• ${\sigma }^{2}\sim IG\left(A,B\right)$. $A$ and $B$ are the shape and scale, respectively, of an inverse gamma distribution.
• ${\gamma }_{k}\in \left\{0,1\right\}$, and it represents the random variable-inclusion regime variable with a discrete uniform distribution.
Create a prior model for SSVS. Specify the number of predictors p.
p = 3;
PriorMdl = mixsemiconjugateblm(p);
PriorMdl is a mixsemiconjugateblm Bayesian linear regression model object representing the prior distribution of the regression coefficients and disturbance variance. mixsemiconjugateblm displays a
summary of the prior distributions at the command line.
Alternatively, you can create a prior model for SSVS by passing the number of predictors to bayeslm and setting the ModelType name-value pair argument to 'mixsemiconjugate'.
MdlBayesLM = bayeslm(p,'ModelType','mixsemiconjugate')
MdlBayesLM =
mixsemiconjugateblm with properties:
NumPredictors: 3
Intercept: 1
VarNames: {4x1 cell}
Mu: [4x2 double]
V: [4x2 double]
Probability: [4x1 double]
Correlation: [4x4 double]
A: 3
B: 1
| Mean Std CI95 Positive Distribution
Intercept | 0 2.2472 [-5.201, 5.201] 0.500 Mixture distribution
Beta(1) | 0 2.2472 [-5.201, 5.201] 0.500 Mixture distribution
Beta(2) | 0 2.2472 [-5.201, 5.201] 0.500 Mixture distribution
Beta(3) | 0 2.2472 [-5.201, 5.201] 0.500 Mixture distribution
Sigma2 | 0.5000 0.5000 [ 0.138, 1.616] 1.000 IG(3.00, 1)
PriorMdl and MdlBayesLM are equivalent model objects.
You can set writable property values of created models using dot notation. Set the regression coefficient names to the corresponding variable names.
PriorMdl.VarNames = ["IPI" "E" "WR"]
PriorMdl =
mixsemiconjugateblm with properties:
NumPredictors: 3
Intercept: 1
VarNames: {4x1 cell}
Mu: [4x2 double]
V: [4x2 double]
Probability: [4x1 double]
Correlation: [4x4 double]
A: 3
B: 1
| Mean Std CI95 Positive Distribution
Intercept | 0 2.2472 [-5.201, 5.201] 0.500 Mixture distribution
IPI | 0 2.2472 [-5.201, 5.201] 0.500 Mixture distribution
E | 0 2.2472 [-5.201, 5.201] 0.500 Mixture distribution
WR | 0 2.2472 [-5.201, 5.201] 0.500 Mixture distribution
Sigma2 | 0.5000 0.5000 [ 0.138, 1.616] 1.000 IG(3.00, 1)
MATLAB® associates the variable names to the regression coefficients in displays.
Plot the prior distributions.
The prior distribution of each coefficient is a mixture of two Gaussians: both components have a mean of zero, but component 1 has a large variance relative to component 2. Therefore, their
distributions are centered at zero and have the spike-and-slab appearance.
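As a sanity check on the summary table above, the prior standard deviation displayed for each coefficient can be reproduced from the mixture. This sketch assumes the default hyperparameters (regime probability 0.5, variance factors 10 and 0.1):

```python
import math

# Equal-weight mixture of N(0, 10) and N(0, 0.1): both components have
# mean 0, so the mixture variance is the weighted average of variances.
prior_var = 0.5 * 10 + 0.5 * 0.1
prior_std = math.sqrt(prior_var)
print(round(prior_std, 4))  # 2.2472, matching the Std column
```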
Perform Variable Selection Using SSVS and Default Options
Consider the linear regression model in Create Prior Model for SSVS.
Create a prior model for performing SSVS. Assume that $\beta$ and ${\sigma }^{2}$ are independent (a semiconjugate mixture model). Specify the number of predictors p and the names of the regression coefficients.
p = 3;
PriorMdl = mixsemiconjugateblm(p,'VarNames',["IPI" "E" "WR"]);
Display the prior regime probabilities and Gaussian mixture variance factors of the prior $\beta$.
priorProbabilities = table(PriorMdl.Probability,'RowNames',PriorMdl.VarNames,...
priorProbabilities=4×1 table
Intercept 0.5
IPI 0.5
E 0.5
WR 0.5
priorV = array2table(PriorMdl.V,'RowNames',PriorMdl.VarNames,...
'VariableNames',["gammaIs1" "gammaIs0"])
priorV=4×2 table
gammaIs1 gammaIs0
________ ________
Intercept 10 0.1
IPI 10 0.1
E 10 0.1
WR 10 0.1
PriorMdl stores prior regime probabilities in the Probability property and the regime variance factors in the V property. The default prior probability of variable inclusion is 0.5. The default variance factors for each coefficient are 10 for the variable-inclusion regime and 0.1 for the variable-exclusion regime.
Load the Nelson-Plosser data set. Create variables for the response and predictor series.
load Data_NelsonPlosser
X = DataTable{:,PriorMdl.VarNames(2:end)};
y = DataTable{:,'GNPR'};
Implement SSVS by estimating the marginal posterior distributions of $\beta$ and ${\sigma }^{2}$. Because SSVS uses Markov chain Monte Carlo (MCMC) for estimation, set a random number seed to
reproduce the results.
PosteriorMdl = estimate(PriorMdl,X,y);
Method: MCMC sampling with 10000 draws
Number of observations: 62
Number of predictors: 4
| Mean Std CI95 Positive Distribution Regime
Intercept | -1.5629 2.6816 [-7.879, 2.703] 0.300 Empirical 0.5901
IPI | 4.6217 0.1222 [ 4.384, 4.865] 1.000 Empirical 1
E | 0.0004 0.0002 [ 0.000, 0.001] 0.976 Empirical 0.0918
WR | 2.6098 0.3691 [ 1.889, 3.347] 1.000 Empirical 1
Sigma2 | 50.9169 9.4955 [35.838, 72.707] 1.000 Empirical NaN
PosteriorMdl is an empiricalblm model object that stores draws from the posterior distributions of $\beta$ and ${\sigma }^{2}$ given the data. estimate displays a summary of the marginal posterior
distributions at the command line. Rows of the summary correspond to regression coefficients and the disturbance variance, and columns correspond to characteristics of the posterior distribution. The
characteristics include:
• CI95, which contains the 95% Bayesian equitailed credible intervals for the parameters. For example, the posterior probability that the regression coefficient of E is in [0.000, 0.001] is 0.95.
• Regime, which contains the marginal posterior probability of variable inclusion ($\gamma =1$ for a variable). For example, the posterior probability that E should be included in the model is 0.0918.
Assuming that variables with Regime < 0.1 should be removed from the model, the results suggest that you can exclude E (total employment) from the model.
By default, estimate draws and discards a burn-in sample of size 5000. However, a good practice is to inspect a trace plot of the draws for adequate mixing and lack of transience. Plot a trace plot
of the draws for each parameter. You can access the draws that compose the distribution (the properties BetaDraws and Sigma2Draws) using dot notation.
for j = 1:(p + 1)
The trace plots indicate that the draws seem to mix well. The plots show no detectable transience or serial correlation, and the draws do not jump between states.
Specify Custom Prior Regime Probability Distribution for SSVS
Consider the linear regression model in Create Prior Model for SSVS.
Load the Nelson-Plosser data set. Create variables for the response and predictor series.
load Data_NelsonPlosser
VarNames = ["IPI" "E" "WR"];
X = DataTable{:,VarNames};
y = DataTable{:,"GNPR"};
Assume the following:
• The intercept is in the model with probability 0.9.
• IPI and E are in the model with probability 0.75.
• If E is included in the model, then the probability that WR is included in the model is 0.9.
• If E is excluded from the model, then the probability that WR is included is 0.25.
Declare a function named priorssvsexample.m that:
• Accepts a logical vector indicating whether the intercept and variables are in the model (true for model inclusion). Element 1 corresponds to the intercept, and the rest of the elements
correspond to the variables in the data.
• Returns a numeric scalar representing the log of the described prior regime probability distribution.
function logprior = priorssvsexample(varinc)
%PRIORSSVSEXAMPLE Log prior regime probability distribution for SSVS
% PRIORSSVSEXAMPLE is an example of a custom log prior regime probability
% distribution for SSVS with dependent random variables. varinc is
% a 4-by-1 logical vector indicating whether 4 coefficients are in a model
% and logPrior is a numeric scalar representing the log of the prior
% distribution of the regime probabilities.
% Coefficients enter a model according to these rules:
% * varinc(1) is included with probability 0.9.
% * varinc(2) and varinc(3) are in the model with probability 0.75.
% * If varinc(3) is included in the model, then the probability that
% varinc(4) is included in the model is 0.9.
% * If varinc(3) is excluded from the model, then the probability
% that varinc(4) is included is 0.25.
pWR = varinc(3)*0.9 + (1 - varinc(3))*0.25; % P(WR included | regime of E)
logprior = log(varinc(1)*0.9 + (1 - varinc(1))*0.1) + ...
    log(varinc(2)*0.75 + (1 - varinc(2))*0.25) + ...
    log(varinc(3)*0.75 + (1 - varinc(3))*0.25) + ...
    log(varinc(4)*pWR + (1 - varinc(4))*(1 - pWR));
Create a prior model for performing SSVS. Assume that $\beta$ and ${\sigma }^{2}$ are independent (a semiconjugate mixture model). Specify the number of predictors p, the names of the regression coefficients, and the custom prior probability distribution of the variable-inclusion regimes.
p = 3;
PriorMdl = mixsemiconjugateblm(p,'VarNames',["IPI" "E" "WR"],...
    'Probability',@priorssvsexample);
Implement SSVS by estimating the marginal posterior distributions of $\beta$ and ${\sigma }^{2}$. Because SSVS uses MCMC for estimation, set a random number seed to reproduce the results.
PosteriorMdl = estimate(PriorMdl,X,y);
Method: MCMC sampling with 10000 draws
Number of observations: 62
Number of predictors: 4
| Mean Std CI95 Positive Distribution Regime
Intercept | -1.4658 2.6046 [-7.781, 2.546] 0.308 Empirical 0.5516
IPI | 4.6227 0.1222 [ 4.385, 4.866] 1.000 Empirical 1
E | 0.0004 0.0002 [ 0.000, 0.001] 0.976 Empirical 0.2557
WR | 2.6105 0.3692 [ 1.886, 3.346] 1.000 Empirical 1
Sigma2 | 50.9621 9.4999 [35.860, 72.596] 1.000 Empirical NaN
Assuming that variables with Regime < 0.1 should be removed from the model, the results suggest that you can include all variables in the model.
Forecast Responses Using Posterior Predictive Distribution
Consider the regression model in Create Prior Model for SSVS.
Perform SSVS:
1. Create a Bayesian regression model for SSVS with a semiconjugate prior for the data likelihood. Use the default settings.
2. Hold out the last 10 periods of data from estimation.
3. Estimate the marginal posterior distributions.
p = 3;
PriorMdl = bayeslm(p,'ModelType','mixsemiconjugate','VarNames',["IPI" "E" "WR"]);
load Data_NelsonPlosser
fhs = 10; % Forecast horizon size
X = DataTable{1:(end - fhs),PriorMdl.VarNames(2:end)};
y = DataTable{1:(end - fhs),'GNPR'};
XF = DataTable{(end - fhs + 1):end,PriorMdl.VarNames(2:end)}; % Future predictor data
yFT = DataTable{(end - fhs + 1):end,'GNPR'}; % True future responses
rng(1); % For reproducibility
PosteriorMdl = estimate(PriorMdl,X,y,'Display',false);
Forecast responses using the posterior predictive distribution and the future predictor data XF. Plot the true values of the response and the forecasted values.
yF = forecast(PosteriorMdl,XF);
hold on
plot(dates((end - fhs + 1):end),yF)
h = gca;
hp = patch([dates(end - fhs + 1) dates(end) dates(end) dates(end - fhs + 1)],...
h.YLim([1,1,2,2]),[0.8 0.8 0.8]);
legend('Forecast Horizon','True GNPR','Forecasted GNPR','Location','NW')
title('Real Gross National Product: 1909 - 1970');
hold off
yF is a 10-by-1 vector of future values of real GNP corresponding to the future predictor data.
Estimate the forecast root mean squared error (RMSE).
frmse = sqrt(mean((yF - yFT).^2))
The forecast RMSE is a relative measure of forecast accuracy. Specifically, you estimate several models using different assumptions. The model with the lowest forecast RMSE is the best-performing
model of the ones being compared.
When you perform Bayesian regression with SSVS, a best practice is to tune the hyperparameters. One way to do so is to estimate the forecast RMSE over a grid of hyperparameter values, and choose the
value that minimizes the forecast RMSE.
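The tuning loop described above can be outlined as follows. This is a hedged sketch, not the MATLAB workflow: `fit_and_forecast` stands in for the estimate/forecast calls, and the candidate values are placeholders.

```python
import math

def forecast_rmse(y_true, y_pred):
    """Root mean squared forecast error over a holdout sample."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def tune(candidates, fit_and_forecast, y_true):
    """Return the candidate hyperparameter value minimizing forecast RMSE.
    fit_and_forecast(v) is a placeholder for estimating the model with
    hyperparameter value v and forecasting over the holdout sample."""
    return min(candidates,
               key=lambda v: forecast_rmse(y_true, fit_and_forecast(v)))
```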
Copyright 2018 The MathWorks, Inc.
More About
Bayesian Linear Regression Model
A Bayesian linear regression model treats the parameters β and σ^2 in the multiple linear regression (MLR) model y[t] = x[t]β + ε[t] as random variables.
For times t = 1,...,T:
• y[t] is the observed response.
• x[t] is a 1-by-(p + 1) row vector of observed values of p predictors. To accommodate a model intercept, x[1t] = 1 for all t.
• β is a (p + 1)-by-1 column vector of regression coefficients corresponding to the variables that compose the columns of x[t].
• ε[t] is the random disturbance with a mean of zero and Cov(ε) = σ^2I[T×T], where ε is a T-by-1 vector containing all disturbances. These assumptions imply that the data likelihood is
$\ell \left(\beta ,{\sigma }^{2}|y,x\right)=\prod _{t=1}^{T}\varphi \left({y}_{t};{x}_{t}\beta ,{\sigma }^{2}\right).$
ϕ(y[t];x[t]β,σ^2) is the Gaussian probability density with mean x[t]β and variance σ^2 evaluated at y[t].
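The likelihood above can be written as a short numerical sketch (illustrative Python, not part of the toolbox; the names are assumptions):

```python
import math

def log_likelihood(beta, sigma2, X, y):
    """Log of the Gaussian data likelihood: the sum over t of
    log phi(y_t; x_t * beta, sigma2)."""
    ll = 0.0
    for x_t, y_t in zip(X, y):
        mu = sum(b * v for b, v in zip(beta, x_t))   # x_t * beta
        ll += -0.5 * math.log(2 * math.pi * sigma2) \
              - (y_t - mu) ** 2 / (2 * sigma2)
    return ll
```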
Before considering the data, you impose a joint prior distribution assumption on (β,σ^2). In a Bayesian analysis, you update the distribution of the parameters by using information about the
parameters obtained from the likelihood of the data. The result is the joint posterior distribution of (β,σ^2) or the conditional posterior distributions of the parameters.
Stochastic Search Variable Selection
Stochastic search variable selection (SSVS) is a predictor variable selection method for Bayesian linear regression that searches the space of potential models for models with high posterior
probability, and averages the models it finds after it completes the search.
SSVS assumes that the prior distribution of each regression coefficient is a mixture of two Gaussian distributions, and the prior distribution of σ^2 is inverse gamma with shape A and scale B. Let γ
= {γ[1],…,γ[K]} be a latent, random regime indicator for the regression coefficients β, where:
• K is the number of coefficients in the model (Intercept + NumPredictors). γ[k] = 1 means that β[k]|σ^2,γ[k] is Gaussian with mean 0 and variance c[1].
• γ[k] = 0 means that β[k]|σ^2,γ[k] is Gaussian with mean 0 and variance c[2].
• A probability mass function governs the distribution of γ, and the sample space of γ is composed of 2^K elements.
More specifically, given γ[k] and σ^2, β[k] = γ[k]√c[1]Z + (1 – γ[k])√c[2]Z, where:
• Z is a standard normal random variable.
• For conjugate models (mixconjugateblm), c[j] = σ^2V[j], j = 1,2.
• For semiconjugate models (mixsemiconjugateblm), c[j] = V[j].
c[1] is relatively large, which implies that the corresponding predictor is more likely to be in the model. c[2] is relatively small, which implies that the corresponding predictor is less likely to be in the model because its distribution is concentrated around 0.
In this framework, if the potential exists for a total of K coefficients in a model, then the space has 2^K models through which to search. Because computing posterior probabilities of all 2^K models
can be computationally expensive, SSVS uses MCMC to sample γ = {γ[1],…,γ[K]} and estimate posterior probabilities of corresponding models. The models that the algorithm chooses often have higher
posterior probabilities. The algorithm composes the estimated posterior distributions of β and σ^2 by computing the weighted average of the sampled models. The algorithm attributes a larger weight to
those models sampled more often.
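The key conditional in that MCMC sampler can be sketched under two simplifying assumptions (diagonal prior correlation, known regime variance factors v1 > v2): given a draw of β[k], γ[k] is Bernoulli with inclusion probability proportional to p[k]·φ(β[k]; 0, v1) against (1 − p[k])·φ(β[k]; 0, v2). This illustrates the idea only and is not the toolbox implementation:

```python
import math
import random

def normal_pdf(x, var):
    """Density of N(0, var) evaluated at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def inclusion_probability(beta_k, p_k, v1, v2):
    """Posterior probability that gamma_k = 1 given a draw of beta_k."""
    num = p_k * normal_pdf(beta_k, v1)
    return num / (num + (1 - p_k) * normal_pdf(beta_k, v2))

def sample_gamma(beta, p, v1, v2, rng=random):
    """One Gibbs-style update of the regime vector given coefficient draws."""
    return [1 if rng.random() < inclusion_probability(b, pk, v1, v2) else 0
            for b, pk in zip(beta, p)]
```

A coefficient drawn far from zero is almost surely assigned to the slab (inclusion) component, while one near zero usually falls in the spike.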
The resulting posterior distribution for semiconjugate mixture models is analytically intractable. For details on the posterior distribution, see Analytically Tractable Posteriors.
Alternative Functionality
The bayeslm function can create any supported prior model object for Bayesian linear regression.
[1] George, E. I., and R. E. McCulloch. "Variable Selection Via Gibbs Sampling." Journal of the American Statistical Association. Vol. 88, No. 423, 1993, pp. 881–889.
[2] Koop, G., D. J. Poirier, and J. L. Tobias. Bayesian Econometric Methods. New York, NY: Cambridge University Press, 2007.
Version History
Introduced in R2018b
Kids.Net.Au - Encyclopedia > Numeral system
A numeral is a symbol or group of symbols that represents a number. Numerals differ from numbers just as words differ from the things they refer to. The symbols "11", "eleven" and "XI" are different numerals, but they all represent the same number. This article treats differing systems of numerals representing the same system of numbers. The system of real numbers, the system of complex numbers, the system of p-adic numbers, etc., may be called different number systems, but those are not the topic of this article.
History
Tallies carved from wood and stone have been used since prehistoric times. Stone-age cultures, including the American Indians, used tallies for gambling with horses, slaves, personal services and trade goods.
The earliest known written tallies appear in the ruins of the Sumerian empire, using clay tablets impressed with a sharp stick and baked. The Sumerians had quite an exotic system based on counts to
60, used in astronomical and other calculations. This system was imported to and used by every mediterranean nation that used astronomy, including the Greeks, Romans and Egyptians. We still use it to
count time (minutes per hour), and angle (degrees).
In China, armies and provisions were counted using modular tallies of prime numbers. Unique numbers of troops and measures of rice appear as unique combinations of these tallies. A great convenience
of modular arithmetic is that it is easy to multiply, though quite difficult to add. This makes use of modular arithmetic for provisions especially attractive. Conventional tallies are quire
difficult to multiply and divide. In modern times modular arithmetic is sometimes used in Digital signal processing.
The Roman empire used tallies written on wax, papyrus and stone, and roughly followed the Greek custom of assigning letters to various numbers. The Roman system remained in common use in Europe use
until positional notation came into common use in the 1500s.
The Incan Empire ran a large command economy using quipu, tallies made by knotting colored fibers. Knowledge of the encodings of the knots and colors was suppressed by the Spanish conquistadors in
the 16th century, and has not survived although simple quipu-like recording devices are still used in the Andean region.
Some authorities believe that positional arithmetic began with the wide use of the abacus in China. The earliest written positional records seem to be tallies of abacus results in China around 400 AD. In particular, zero was correctly described by Chinese mathematicians around 932 AD, and seems to have originated as a circle representing a place empty of beads.
From China, both the abacus and written tallies may have moved to India, perhaps via Chinese traders and businesses. In India, recognizably modern numerals appeared in the Mogul Empire, used for astronomy and accounting.
From India, the thriving trade between Islamic Moguls and Africa carried the concept to Cairo, where Al-Khwarizmi wrote the document that, in Latin translation, popularized positional notation in Europe.
In a positional numeral system of base b, b basic symbols (or digits) corresponding to the first b natural numbers including zero are used. To generate the rest of the numerals, the position of the
symbol in the figure is used. The symbol in the last position has its own value, and as it moves to the left its value is multiplied by b. In this way, with only finitely many different symbols,
every number can be expressed. This is unlike systems which uses different symbols for different orders of magnitude, like the system of Roman numerals or the number names in spoken languages.
For example, when 4327 is written in the decimal system (base 10), it actually means (4x10^3) + (3x10^2) + (2x10^1) + (7x10^0), noting that 10^0 = 1.
In general, if b is the base, we write a number in the numeral system of base b by expressing it in the form a[1]b^k + a[2]b^(k-1) + a[3]b^(k-2) + ... + a[k+1]b^0 and writing the digits a[1]a[2]a[3] ... a[k+1] in order. The digits are natural numbers between 0 and b-1, inclusive.
If a text (such as this one) discusses multiple bases, and if ambiguity exists, the base is added in subscript to the right of the number, like this: number[base]. Numbers without subscript are
considered to be decimal.
The term scale of notation is also used for a number system.
Note that, no matter the base, a numeral has a terminating or repeating expansion if and only if it represents a rational number. A number that terminates in one base may repeat in another (thus 0.3[10] = 0.0100110011001...[2]). An irrational number has an aperiodic expansion (infinitely many non-repeating digits) in every base. Thus, for example, in base 2, π = 3.1415926...[10] is written as the aperiodic 11.001001000011111...[2].
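The repeating base-2 expansion of 0.3[10] quoted above can be reproduced with exact rational arithmetic (a sketch; the function name is illustrative):

```python
from fractions import Fraction

def frac_base2_digits(f, ndigits):
    """First ndigits binary digits of a fraction 0 <= f < 1."""
    digits = []
    for _ in range(ndigits):
        f *= 2
        bit = int(f)          # the integer part is the next binary digit
        digits.append(str(bit))
        f -= bit
    return "".join(digits)

print(frac_base2_digits(Fraction(3, 10), 13))  # 0100110011001
```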
The base-10 system, the one most commonly used by humans today, originated because we have ten fingers, thus allowing for simple counting. A base-eight system was devised by the Yuki people of California, who used the spaces between the fingers to count. The Maya and other civilizations of Pre-Columbian Mesoamerica used base 20 (possibly originating from the number of a person's fingers and toes). Base 60 was used by the Sumerians and survives today in our system of time (hence the division of an hour into 60 minutes and a minute into 60 seconds). Base-12 systems were popular mainly because the year has twelve months; we still have a special word for "dozen" and use 12 hours for every night and day.
Electronic components (first vacuum tubes, then transistors) have only two possible states: on (1) and off (0). Because these correspond exactly to the binary digits, and because arithmetic in a binary system is the easiest to implement electronically (using Boolean algebra), the binary system became natural for electronic computers. It is used to perform integer arithmetic in almost all electronic computers (the only exceptions being the exotic base-3 and base-10 designs that were discarded very early in the history of computing). Note, however, that a computer does not treat all of its data as integers; some of it may be treated as text and program data. Real numbers (numbers that need not be whole) are usually represented in floating-point notation, which has different rules of arithmetic.
If b=p is a prime number, one can define base-p numerals whose expansion to the left never stops; these are called the p-adic numbers.
See also: Computer numbering formats
External Resources: D. Knuth, The Art of Computer Programming, Volume 2, 3rd ed., Addison-Wesley, pp. 194-213, "Positional Number Systems".
Improved asymptotic analysis of the average number of steps performed by the self-dual simplex algorithm
Mathematical Programming
In this paper we analyze the average number of steps performed by the self-dual simplex algorithm for linear programming, under the probabilistic model of spherical symmetry. The model was proposed
by Smale. Consider a problem of n variables with m constraints. Smale established that for every number of constraints m, there is a constant c(m) such that the number of pivot steps of the self-dual
algorithm, ρ(m, n), is less than c(m)(ln n)^(m(m+1)). We improve upon this estimate by showing that ρ(m, n) is bounded by a function of m only. The symmetry of the function in m and n implies that ρ(m,
n) is in fact bounded by a function of the smaller of m and n. © 1986 The Mathematical Programming Society, Inc.
Problem: Bulls and Cows
We all know the game called "Bulls and Cows" (https://en.wikipedia.org/wiki/Bulls_and_cows). Given a particular 4-digit secret number and a 4-digit suggested number, the following rules apply:
• If a digit in the suggested number matches a digit in the secret number and is located at the same position, we have a bull.
• If a digit in the suggested number matches a digit in the secret number, but is located at a different position, we have a cow.
Secret number:    1 4 8 1    Suggested number: 8 8 1 1    ->  Bulls = 1, Cows = 2
Secret number:    2 2 4 1    Suggested number: 9 9 2 4    ->  Bulls = 0, Cows = 2
Upon having a particular secret number and the bulls and cows pertaining to it, our task is to find all possible suggested numbers in ascending order.
If there are no suggested numbers that match the criteria provided from the console, we must print "No".
Input Data
The input data is read from the console. The input consists of 3 text lines:
• The first line contains the secret number.
• The second line contains the number of bulls.
• The third line contains the number of cows.
The input data will always be valid. There is no need to verify them.
Output Data
The output data must be printed on the console. The output must consist of a single line, holding all suggested numbers, space separated. If there are no suggested numbers that match the criteria
provided from the console, we must print “No”.
• The secret number will always consist of 4 digits in the range [1..9].
• The number of cows and bulls will always be in the range [0..9].
• Allowed execution time: 0.15 seconds.
• Allowed memory: 16 MB.
Sample Input and Output
Hints and Guidelines
We will solve the problem in a few steps:
• We will read the input data.
• We will generate all possible four-digit combinations (candidates for verification).
• For each generated combination we will calculate how many bulls and how many cows it has according to the secret number. Upon matching the needed bulls and cows, we will print the combination.
Reading the Input Data
We have 3 lines in the input data:
• Secret number.
• Number of desired bulls.
• Number of desired cows.
Reading the input data is trivial:
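The original page's code snippet did not survive extraction; the book itself uses C#, but a minimal Python sketch of the same step (the helper name and list-of-lines interface are illustrative) might look like:

```python
def read_input(lines):
    """Parse the three console lines: secret number, bulls, cows.

    (Hypothetical helper -- the original chapter reads the three
    lines directly from the console.)
    """
    secret = lines[0].strip()
    bulls = int(lines[1])
    cows = int(lines[2])
    return secret, bulls, cows
```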
Declaring a Flag
Before starting to write the algorithm for solving our problem, we must declare a flag that indicates whether a solution is found:
If after finishing our algorithm this flag is still false, then we will print No on the console, as specified in the requirements.
Generating Four-Digit Numbers
Let's start analyzing our problem. What we need to do is analyze all numbers from 1111 to 9999, excluding those that contain zeroes (for example 9011, 3401, etc. are invalid). What is the easiest way
to generate all these numbers? We will use nested loops. As we have a 4-digit number, we will have 4 nested loops, as each of them will generate an individual digit in our number for testing.
Thanks to these loops, we have access to every digit of all numbers that we need to check. Our next step is to separate the secret number into digits. This can be achieved very easily using a
combination of integer division and modular division.
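The nested loops and the digit separation described above can be sketched in Python as follows (the chapter's own code is in C#; variable names here are illustrative):

```python
secret = 1481

# Separate the secret number into its four digits using a
# combination of integer division and modular division.
s1 = secret // 1000        # thousands digit
s2 = secret // 100 % 10    # hundreds digit
s3 = secret // 10 % 10     # tens digit
s4 = secret % 10           # units digit

# The candidates come from four nested loops, one per digit,
# each running over 1..9 (numbers containing zeroes are excluded):
candidates = [
    (d1, d2, d3, d4)
    for d1 in range(1, 10)
    for d2 in range(1, 10)
    for d3 in range(1, 10)
    for d4 in range(1, 10)
]
```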
Creating Additional Variables
Only two last steps remain until we start analyzing how many cows and bulls there are in a particular number. Accordingly, the first one is the declaration of counter variables in the nested loops,
in order to count the cows and bulls for the current number. The second step is to make copies of the digits of the current number that we will analyze, in order to prevent problems upon working with
nested loops, in case we make changes to them.
We are ready to start analyzing the generated numbers.
Counting the Bulls
What logic can we use? The easiest way to check how many cows and bulls there are inside a number is via a sequence of if-else conditions. This is not the most efficient approach, but in order to
stick to what is covered in the current book, we will use it.
What conditions do we need?
The condition for the bulls is very simple – we check whether the first digit of the generated number matches the same digit in the secret number. We remove the digits that are already checked in
order to avoid repetitions of bulls and cows.
We repeat the action for the second, third and fourth digit.
Counting the Cows
We will apply the following condition for the cows – first we will check whether the first digit of the generated number matches the second one, the third one or the fourth digit of the secret
number. An example for the implementation:
After that, we sequentially check whether the second digit of the generated number matches the first one, the third one or the fourth digit of the secret number; whether the third digit of the
generated number matches the first one, the second one or the fourth digit of the secret number; and finally, we check whether the fourth digit of the generated number matches the first one, the
second one or the third digit of the secret number.
Printing the Output
After completing all conditions, we just need to check whether the bulls and cows in the currently generated number match the desired bulls and cows read from the console. If this is true, we print
the current number on the console.
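Putting the steps together, a Python sketch of the whole check might look like this (the book's reference solution is in C# with explicit if-else chains; the sentinel-marking below implements the same "remove digits that are already checked" idea in loop form):

```python
def bulls_and_cows(secret, guess):
    """Count bulls and cows for one candidate against the secret number."""
    s = [int(c) for c in secret]   # working copies, so the originals
    g = [int(c) for c in guess]    # are never modified
    bulls = cows = 0
    # Bulls: same digit in the same position. Checked digits are
    # marked with sentinels so they are not counted again as cows.
    for i in range(4):
        if g[i] == s[i]:
            bulls += 1
            s[i], g[i] = -1, -2
    # Cows: same digit in a different position.
    for i in range(4):
        if g[i] < 0:
            continue
        for j in range(4):
            if s[j] == g[i]:
                cows += 1
                s[j] = -1
                break
    return bulls, cows

def solve(secret, want_bulls, want_cows):
    """All matching 4-digit candidates (digits 1..9) in ascending order."""
    matches = [
        f"{d1}{d2}{d3}{d4}"
        for d1 in range(1, 10)
        for d2 in range(1, 10)
        for d3 in range(1, 10)
        for d4 in range(1, 10)
        if bulls_and_cows(secret, f"{d1}{d2}{d3}{d4}") == (want_bulls, want_cows)
    ]
    return " ".join(matches) if matches else "No"
```

The two worked examples from the problem statement (1481 vs 8811, and 2241 vs 9924) come out as (1, 2) and (0, 2), as expected.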
Testing in the Judge System
Test your solution here: https://judge.softuni.org/Contests/Practice/Index/519#2.
Folded Normal Distribution & Half-Normal Distribution
A folded normal distribution is the distribution of the absolute values of a normal distribution. It is used when you're only interested in the size of a random variable (e.g. how many standard
deviations it lies from the mean) and not its direction or sign (positive or negative). This happens in many practical situations where only the magnitude of a random variable is recorded. It's
called a "folded" normal distribution because, quite literally, the probability mass on the left half of the distribution has been folded over to the right half; the absolute values are taken from
the left half and added to the right half.
The mean (μ) and standard deviation (σ) of the original normal distribution become the location parameter (μ) and scale parameter (σ) of the folded distribution. A more formal definition uses these
two facts:
If Y is a normally distributed random variable with mean μ (the location parameter) and variance σ² (the scale parameter), that is, if Y ∼ N(μ, σ²), then the random variable X = |Y| has a folded
normal distribution.
Folded Normal Distribution Calculator
This calculator on the University of Alabama in Huntsville website allows you to create a CDF of the folded normal distribution and change the parameters of the function. You can also calculate the
mean and the first and third quartiles. To use the calculator, select "folded normal distribution" from the drop down menu and set the view to CDF.
Probability density function (PDF) of the half normal distribution with standard deviation 1/2 [1].
The half normal distribution is the distribution of the absolute value of a normally distributed random variable. It is a special case of the folded normal and truncated normal distributions. In
the common σ parameterization the probability density function is

f(x) = (√2 / (σ√π)) · exp(−x² / (2σ²)),  x ≥ 0,

where the underlying normal has mean zero; the cited reference parameterizes the scale by θ [2]. The folded normal distribution is also defined as the distribution of the absolute value of a
normally distributed random variable, which means that the folded normal distribution only considers the positive values of the normal distribution. The half-normal distribution is the special case
where the mean (μ) of the underlying normal distribution is zero; in other words, when μ = 0, the folded normal distribution becomes the half-normal distribution. Thus, it could be more aptly
called the "half standard normal" distribution. The half normal distribution has some useful applications, such as modeling measurement and lifetime data. In fact, this is one of the most
important variations of the folded normal, because you're more likely to be interested in normal distributions with a mean of 0 (i.e. a standard normal). For example, the half-normal distribution
models Brownian motion, the random movement of microscopic particles suspended in a liquid or gas.
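The folding relationship can be sketched numerically with the standard library alone: taking absolute values of N(0, σ²) samples yields a half-normal sample whose mean is σ√(2/π), a standard fact about the half-normal distribution.

```python
import math
import random

random.seed(1)
sigma = 0.5
# Taking absolute values "folds" the left half of N(0, sigma^2)
# onto the right; the result is half-normal with mean sigma*sqrt(2/pi).
samples = [abs(random.gauss(0.0, sigma)) for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)
theory_mean = sigma * math.sqrt(2.0 / math.pi)
```

With 200,000 samples the empirical mean agrees with σ√(2/π) to within about a thousandth.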
Applications of the half normal Distribution
One application for which this type of distribution can be used is in modeling measurement data. Measurement data can often be skewed or contain outliers that make it difficult to model
accurately. By using a half-normal distribution, however, one can easily identify these anomalies and create an accurate model for the data that excludes these outliers. The half-normal
distribution can also be used to model lifetime data, such as when analyzing failure rates in components or products over time. By using this type of distribution, one can determine how many
failures are expected over a certain period of time and use this information to plan accordingly. The half-normal distribution also models Brownian motion, the random movement of microscopic
particles suspended in a liquid or gas.
[1] Graph created with .
[2] Half-normal distribution. Retrieved September 6, 2023 from: https://archive.lib.msu.edu/crcmath/math/math/h/h026.htm
Convert Circular Mils to Square Mils (c mil to sq mil) | JustinTOOLs.com
Category: area. Conversion: Circular Mils to Square Mils
The base unit for area is square meters (Non-SI/Derived Unit)
[Circular Mils] symbol/abbrevation: (c mil)
[Square Mils] symbol/abbrevation: (sq mil)
How to convert Circular Mils to Square Mils (c mil to sq mil)?
1 c mil = 0.78539816324633 sq mil.
1 x 0.78539816324633 = 0.78539816324633 Square Mils.
Always check the results; rounding errors may occur.
A circular mil is a unit of area, equal to the area of a circle with a diameter of one mil (one thousandth of an inch). It corresponds to 5.067×10⁻⁴ mm².
A square mil is a unit of area, equal to the area of a square with sides of length one mil. A mil is one thousandth of an international inch. This unit of area is usually used in specifying the area
of the cross section of a wire or cable.
In relation to the base unit of [area] => (square meters), 1 Circular Mils (c mil) is equal to 5.06707479E-10 square-meters, while 1 Square Mils (sq mil) = 6.4516E-10 square-meters.
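The conversion factor above is just π/4, the ratio of a circle's area to that of its enclosing square. A minimal sketch (function names illustrative):

```python
import math

def circular_mils_to_square_mils(c_mil):
    """A circle of diameter d mils has area (pi/4)*d^2 square mils,
    so 1 circular mil = pi/4 ~= 0.7853981632 square mils."""
    return c_mil * math.pi / 4.0

def circular_mils_to_square_meters(c_mil):
    # 1 square mil = 6.4516e-10 m^2 (a mil is 0.0254 mm)
    return circular_mils_to_square_mils(c_mil) * 6.4516e-10
```

Both conversion constants quoted in the page (0.78539816324633 sq mil and 5.06707479E-10 m²) fall out of these two lines.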
1 Circular Mils to common area units
1 c mil = 5.06707479E-10 square meters (m2, sq m)
1 c mil = 5.06707479E-6 square centimeters (cm2, sq cm)
1 c mil = 5.06707479E-16 square kilometers (km2, sq km)
1 c mil = 5.4541562597548E-9 square feet (ft2, sq ft)
1 c mil = 7.8539816324633E-7 square inches (in2, sq in)
1 c mil = 6.0601710127031E-10 square yards (yd2, sq yd)
1 c mil = 1.9564085141688E-16 square miles (mi2, sq mi)
1 c mil = 0.78539816324633 square mils (sq mil)
1 c mil = 5.06707479E-14 hectares (ha)
1 c mil = 1.2521003419935E-13 acres (ac)
A 1 μA beam of protons with a cross-sectional area of 0.5 sq. mm is moving with a velocity of 3×10⁴ m/s. Then the charge density of the beam is
The correct answer is: B
Charge density = charge / volume = (i × t) / (A × d). Taking unit time, the distance covered is d = v, so
ρ = i / (A × v) = 10⁻⁶ / (0.5 × 10⁻⁶ × 3 × 10⁴) ≈ 6.7 × 10⁻⁵ C/m³
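As a quick numerical check of ρ = i / (A × v), in SI units:

```python
# Charge density of the beam: rho = I / (A * v)
current = 1e-6        # 1 microampere, in amperes
area = 0.5e-6         # 0.5 mm^2, in m^2
speed = 3e4           # 3 x 10^4 m/s
rho = current / (area * speed)   # charge density in C/m^3
```

This gives about 6.7 × 10⁻⁵ C/m³, matching the derivation above.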
Knowledge Check
• What will be de Broglie's wavelength of an electron moving with a velocity of 1.2×10⁵ m/s?
• In a particle accelerator, a current of 500 μA is carried by a proton beam in which each proton has a speed of 3×10⁷ m/s. The cross-sectional area of the beam is 1.50 mm². The charge density in this
beam in coulomb/m³ is close to
• A current of 0.5 amperes flows in a conductor of cross-sectional area 10⁻² m². If the electron density is 0.3×10²⁸ m⁻³, then the drift velocity of free electrons is
2014 AGU Fall Meeting
Statistics of large detrital geochronology datasets
Tuesday, 16 December 2014
Implementation of quantitative metrics for inter-sample comparison of detrital geochronological data sets has lagged the increase in data set size, and ability to identify sub-populations and
quantify their relative proportions. Visual comparison or application of some statistical approaches, particularly the Kolmogorov-Smirnov (KS) test, that initially appeared to provide a simple way of
comparing detrital data sets, may be inadequate to quantify their similarity. We evaluate several proposed metrics by applying them to four large synthetic datasets drawn randomly from a parent
dataset, as well as a recently published large empirical dataset consisting of four separate (n = ~1000 each) analyses of the same rock sample. Visual inspection of the cumulative probability density
functions (CDF) and relative probability density functions (PDF) confirms an increasingly close correlation between data sets as the number of analyses increases. However, as data set size increases
the KS test yields lower mean p-values implying greater confidence that the samples were not drawn from the same parent population and high standard deviations despite minor decreases in the mean
difference between sample CDFs. We attribute this to the increasing sensitivity of the KS test when applied to larger data sets, which in turn limits its use for quantitative inter-sample comparison
in detrital geochronology.
Proposed alternative metrics, including Similarity, Likeness (complement to Mismatch), and the coefficient of determination (R^2) of a cross-plot of PDF quantiles, point to an increasingly close
correlation between data sets with increasing size, although they are the most sensitive at different ranges of data set sizes. The Similarity test is most sensitive to variation in data sets with n
< 100 and is relatively insensitive to further convergence between larger data sets. The Likeness test reaches 90% of its asymptotic maximum at data set sizes of n = 200. The PDF cross-plot R^2 value
is sensitive across the maximum range of data set sizes, reaching 90% of its maximum at n = 400. These alternatives are easily implemented, provide quantitative comparisons regardless of the relative
sample sizes, and, particularly in the case of the PDF cross-plot approach, are sensitive over a large range of data set sizes.
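The KS-sensitivity effect described in the abstract can be sketched numerically (standard library only; this is an illustration, not the paper's analysis): the two-sample KS statistic D stabilizes near the true CDF difference, while the rejection threshold shrinks like 1/√n, so ever-larger samples of near-identical populations are eventually judged "different".

```python
import math
import random

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic:
    maximum distance between the two empirical CDFs."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    i = j = 0
    for v in sorted(xs + ys):
        while i < len(xs) and xs[i] <= v:
            i += 1
        while j < len(ys) and ys[j] <= v:
            j += 1
        d = max(d, abs(i / len(xs) - j / len(ys)))
    return d

random.seed(0)
results = {}
for n in (100, 1000, 10000):
    a = [random.gauss(0.00, 1.0) for _ in range(n)]
    b = [random.gauss(0.05, 1.0) for _ in range(n)]  # slightly shifted parent
    crit = 1.36 * math.sqrt(2.0 / n)  # asymptotic 5% rejection threshold
    results[n] = (ks_statistic(a, b), crit)
```

Because `crit` falls with n while D does not, the same tiny distributional difference that passes at n = 100 can fail at n = 10000, which is the behavior the abstract attributes to the KS test on large detrital datasets.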
Our enemies explicitly confess to everything that I have accused them of
Whites Must Develop, like the Jews, An Ethno-Religious Nationalist Movement
The Jewish people are a tribal people that believe that god chose them; and the Jewish people accepted their god and made a covenant with him.
The stories of the Torah — the first five books of the Bible document the stories of the Jews in the ancient near east and their interactions with other peoples and god himself. From the stories
of their interaction with god they were able to extract the rules and regulations of how Jews are to live. The rules and regulations are categorised and explained in the Talmud and deeply shape
how Jews think about the world. To be a Jew is to follow the Mosaic law — which means to follow the “game plan” so to speak set forward by the elders of their religion.
We see the strength of their philosophy and religion particularly after the war with the Roman Empire. The Jews fought a war with the Roman Empire that is meticulously documented by the
Jewish-Roman historian Josephus. The main theme is that the Jews wanted an independent Palestine, which before the revolt was the property of the Roman Empire. The Jews lost the revolt and were
cast out of the region and became a diaspora people.
That’s 2000 years ago that they became a diaspora people and less than a hundred years ago that they were awarded their own homeland. If they were able to survive for almost two-thousand years as
a diaspora people and not only stay united as a people but come to predominance in nations built by other peoples, it must be said that Jews have sound survival strategies.
Yes, they do. So the noble response is for everyone else to work ever harder and join forces ever more broadly to ensure that Jews’ sound but ignoble survival strategies are unsuccessful. That is our
response. But that is not your response.
Jews believe that their race, their tribal group, their religious community is about themselves in their totality are a nation. Their Jewish national identity supersedes any other national
identity. That is to say, they are Jewish before German. Jewish before Italian.
That is a tribal identity.
This means that they are loyal to themselves rather than loyal to the geographical boundaries in which they live. To them they are the Jewish nation and since Jews are scattered all around the
world, their “nation” is a transnational, borderless, invisible Jewish network that spans the globe.
A “transnational nation” is an oxymoron. What you are talking about is a transnational tribe, what earlier generations of writers accurately referred to as international Jewry.
What if we became white nationalists - meaning what happens if all white gentiles begin to see themselves as “white” and create a “nation” around their race - in the same way that jews have
created a “nation” out of their tribal group.
Then the world will have another tribe (“international whiteness”?) doing to everyone else what Jews are already doing to everyone else. Except this tribe will be several orders of magnitude larger
in number. Imagine Zionism multiplied hundredfold.
This is why every anti-Zionist has a duty to also be anti-”white”.
We should do exactly what they are doing and we shouldn’t feel bad about it.
That means that we should take from Judaism that which has given them strength as a people and the ability to persist in the diaspora for thousands of years.
We need to see ourselves as a race, religion, and nation.
That means we need to identify with our race (easy), but we also need a religion that unites us, a national (racial) identity, and the institutions of a traditional nation state that we can use
to secure the existence of our people.
That’s why we need to take our moral compass from something outside of the mass culture. For Jews that is the Mosaic law. For us it would be our own religious law that has rules and regulations
that are designed for us to get stronger as a group. It would differ entirely from the universalist morality of Christianity. An example of a religious rule: do not give charity to outgroups.
I am glad at least to see you admit that authentic Christianity indeed teaches moral universalism, and that hence you as a WN are not and can never be Christian, any more than a Jew can be Christian.
The holy book would be a book of stories that would become our history. It would include parables that would become the basis of our group strategy. It would also include commands from god on how
we should behave.
An example would be: we talk about how in the “old time” we were convinced by an outgroup to bring foreigners into our midst and they exploited our humanitarian spirit and in the end we got
attacked, abused, and displaced. The moral would be that we do not take our moral ideals from outgroups because that is a prime way to get manipulated. We must have our own religious moral code
that we do not deviate from and it is entirely focused on the strength, wealth, prosperity, and fecundity of ourselves. We would say that it is not our place to care about other groups and uplift
them. We want to uplift ourselves because that is the only way to abide by the one true law of nature: all organisms must first and foremost ensure their own existence and the existence of their
In short, you worship Yahweh exactly as Jews do.
What I’m saying is that we need to become like Jews because their strategy has not only allowed them to thrive but has brought them to prominence as a people.
I understand perfectly well what you are saying. In fact, I predicted more than a decade ago that you would be saying it.
My response is: you need to be treated like Jews.
75 Responses to Our enemies explicitly confess to everything that I have accused them of
1. How can you compare this to gangs ? Gangs live through their own rules and not the state laws
2. Hitler supported white minority rule in South Africa, that’s why he met with Oswald Pirow.
3. Diversity of what? And which native europeans exactly?
4. “Hitler supported white minority rule in South Africa, that’s why he met with Oswald Pirow.”
Adolf Hitler had no major interest in Oswald Pirow. It was Pirow which had interest in Hitler and met with the Führer in order to convince Germany to be less antijewish. After the II ww Pirow
declared himself closer to Salazar than Hitler and made clear not to be antijewish.
This alone should demonstrate how much Pirow was not a National Socialist.
5. @KronosSS
The native Europeans who are ethnically European who have been resident on the European continent for thousands of years? You’re just another leftist who loves to hate on Europeans, exactly like
the Jews do.
The Afrikaners who later implemented apartheid also supported Germany in the war against the British.
White means somebody of European descent, it’s not that hard to understand.
6. It’s no coincidence that having a mass coloured population in European cities that more crime and rape rates have skyrocketed.
7. Multiracial societies do not work, having different peoples who originate from different continents and environments mixed into one society brings nothing but trouble, look at the state of
London, Berlin, Paris, Harlem and especially in Stockholm.
8. @MN
“look at the state of”
I do this often.
Not Harlem but close enough:
Not Stockholm but close enough:
“hate on Europeans, exactly like the Jews do.”
9. @Manuel Nigao
>Hitler supported white minority rule in South Africa, that’s why he met with Oswald Pirow.
Certainly he was worried about Europe’s hegemony being superseded by Asia (if Goebbels is any indication), but it’s a stretch to say that he was even remotely sympathetic to Oswald Pirow.
The article “Fascist or opportunist?” by F.A. Mouton plainly indicates that “Pirow also left Hitler unimpressed after meeting him on 24 November 2023 at Berchtesgaden.”
And no wonder! He turned his back on Mosley the moment he ceased being influential and even dared to side with A. F. X. Baron’s group, which was opposed to Mosley’s Europeanism. Certainly Mosley
lived in a world of unrealizable dreams, but it must not be overlooked that this is the case for most political planners. Not to defend Mosley or fascism in any way.
Pirow is a perfect match for the following description from Mein Kampf:
“All that is needed is that one man should strike out on a new road and then a crowd of poltroons will prick up their ears and begin to hope that some trifling gain may lie at the end of that
The moment they think they have discovered where the reward is to be reaped they hasten to find another route by which to reach the goal more quickly.
As soon as a new movement is founded and has formulated a definite programme, people of that kind come forward and proclaim that they are fighting for the same cause.”
Pirow only managed to curry favour with the theoretician Rosenberg.
>Adolf Hitler had no major interest in Oswald Pirow. It was Pirow which had interest in Hitler and met with the Führer in order to convince Germany to be less antijewish.
Savitri, citing his report “Was the Second World War unavoidable?” (Hans Grimm has highlighted the obscurity of this report), adds that this was on the behalf of General Jan Smuts. So it was not
even carried out on Pirow’s own initiative.
>After the II ww Pirow declared himself closer to Salazar than Hitler and made clear not to be antijewish. This alone should demonstrate how much Pirow was not a National Socialist.
Well said.
As for Salazar, it is indicated in Churchill’s memoirs that he had made his intentions known to flee to the Azores in the event of German aggression upon Portugal and that he was dependent on
Britain for protection.
Both Goebbels and Ribbentrop note that both Portugal and Spain wavered indecisively between England and Germany, they were strictly concerned about being invaded and occupied by foreign nations.
In the Table Talk, Hitler clearly draws a distinction between fascism and Franco’s regime.
10. “The native Europeans who are ethnically European who have been resident on the European continent for thousands of years? You’re just another leftist who loves to hate on Europeans, exactly like
the Jews do.”
Speaking under a genetic point of view: “native Europeans” are made up by at least 3 ancestral components, or root races as you prefer to call. At least. And of these 3 components (but actually
the thing is quite more complex) at least 2 are not native Europeans. So as you see it is not as simple.
About being leftist or rightist: these two categories were created after the “french” revolution, which actually was a jewish revolution. I am not interested in being herded in the “left” or
“right” even because National Socialism is beyond these façades.
However: I do not “hate” Europeans, as I don’t “love” them just for the sake to be. How about all those degenerated Europeans that supports jews and their servants? Do you “love” or “hate” them?
11. “The Afrikaners who later implemented apartheid also supported Germany in the war against the British.
White means somebody of European descent, it’s not that hard to understand.”
About the first sentence: it’s not that simple. Actually some Afrikaners, including Pirow, tried to be “in the middle” and prevent a war between Germany and England.
When it was clear that England (or better: the zionist rulers of England) actually did want war at any cost, Afrikaners disappeared, without taking a precise position.
After the war, some right-wing parties emerged and the apartheid state was established. Just to make things clear: jews during apartheid literally flourished, and they were considered as “white”.
This point alone should speaknclear about the distance between National Socialism and afrikaner right wing movements.
Yes, I know the basic classifications of white, black excetera which actually are very broad, generic, and don’t say anything specific about quality and spirituality.
For example, how do you consider baltic and alpine? As European sub-races or not? But it should be clear that baltics and alpines are clearly turanized types. So what?
12. “It’s no coincidence that having a mass coloured population in European cities that more crime and rape rates have skyrocketed.”
The whites or how you prefer to call that support jews and zionism? How fo you consider them?
13. “Multiracial societies do not work, having different peoples who originate from different continents and environments mixed into one society brings nothing but trouble, look at the state of
London, Berlin, Paris, Harlem and especially in Stockholm.”
Yes but this is about quality, which in turn is surely racial, but not in the sense that you mean.
So I ask you: what about the thousands of alpine, cromagnoid and baltid that live in Stockholm? By your considerations those are whites, but metrically and racially they are turanized and giant
racial types which live in Europe since millennia, yet they are quite different from Aryans according to an Aryanist racial doctrine. Do you want to preserve them too? Just because “blonde” hair
and “blue” eyes which actually don’t say anything about a truly racial point of view? And the thousands of brachycephalic “whites” of Stockholm, clearly derived from turanians? How do you
consider them?
14. “Both Goebbels and Ribbentrop note that both Portugal and Spain wavered indecisively between England and Germany, they were strictly concerned about being invaded and occupied by foreign nations.
In the Table Talk, Hitler clearly draws a distinction between fascism and Franco’s regime.”
The authentic Italian fascism (not the Badoglio one but the true Fascism of the Social Republic) was anti-jewish and very close to National Socialism. I speak it knowing it since I am Italiam and
I know personally Italian fascist veterans and one veteran of Italian Division of SS too.
So yes what you have said it is right. Plus Salazar offered many passports to jews to hide them from Axis.
About Franco: I don’t know if he really was a jew as this site says, so I ask to some of this Aryanist movement if they have more information about it.
Actually I know that Franco helped many SS and National Socialists to hide in Spain after the war. That being said, Franco also helped a lot of jews to hide from Germany during the war.
So actually I don’t think Franco was a jew. With this being said, it is clear that Franco wasn’t even a National Socialist. I have the impression he didn’t care for anything but having a sort of
“authoritarian” State in Spain and nothing else. Neither a jew nor a NS.
15. Franco was not a Jew and he hated Jews like any good European should. This stupid site says he was a Jew is not true, he hated the Freesmasons too.
The Nordic, Alpine, East Baltic and Mediterranean races are sub-races of the white Aryan race. The Nordic race is the greatest race in history because it is the creator of the greatest
civilization. Why do you think the National Socialists were obsessed with the race and it was constantly referenced to in literature. The NSDAP Race Office was there to promote Nordic racial
16. “Franco was not a Jew and he hated Jews like any good European should. This stupid site says he was a Jew is not true, he hated the Freesmasons too.”
He didn’t care about jews. He was not antijewish.
17. “The Nordic, Alpine, East Baltic and Mediterranean races are sub-races of the white Aryan race”
You have mentioned two races very similar in craniometrical measures as well as body proportions (Nordic and Medirerranean) and two very similar between themselves (Baltic and Alpine). Yet,
baltic and alpine are not two races of Aryan descent, like it or not. They are turanian/cromagnoid in origin
18. “The Nordic race is the greatest race in history because it is the creator of the greatest civilization.”
Surely it is a great race, as the Mediterranean race is as great as the Nordic one since they are two branches of the same Aryan race. Or would you deny the Roman Empire? Persia? Hittites?
19. “Why do you think the National Socialists were obsessed with the race and it was constantly referenced to in literature.”
They were interested in Aryanism, which is not the same as nordicism. See the allies of Germany: Japan, Italy, Romania, Norway, Croatia, India. This should be clear that Aryanism is quite
different as Nordicism.
20. “The NSDAP Race Office was there to promote Nordic racial interests.”
Too simplistic. They promoted Aryanism; of course Germany promoted the Nordic branch of the Aryan race, while Italy promoted the Mediterranean branch of the Aryan race, which by some was called
“Aryan-Roman, which basically has its origins in Troy
21. And in Iran, among other Aryan centres.
While neither Germany nor Italy promoted Alpine and Baltics. The judgment on baltic race was not so good, and the reason is that baltic race is basically turanian.
Anyway race is not only biology but also soul and spirituality, so metaphysical. It was a restoration of various Aryan elements in order to re-aryanize Europe again. I suggest you to read the
Hitlerjugend manual (version of 1938) to have a clearer view.
Just a question for you: as a physical Aryan ideal, would you prefer the Iranian actress Mahtab Keramati or the russian Olga Kurylenko?
22. *Russian
23. I would like to contact some prominent members of this movement in order to speak them in private about some things regarding ancient civilizations and aryan, turanian, giant, excetera…
Who are the most experts in this ancient civilizations domains? Who has written the Aryan diffusion pages? I would like to exchange some things, I prefer in private.
Thank you.
24. @KronosSS:
“I would like to contact some prominent members of this movement in order to speak them in private about some things regarding ancient civilizations and aryan, turanian, giant, excetera…
Who are the most experts in this ancient civilizations domains? Who has written the Aryan diffusion pages? I would like to exchange some things, I prefer in private.”
Probably AS or JJ. Contacting them via the TTL forums is the best way:
25. I would prefer an email contact actually, but thank you for having sent me a link.
This entry was posted in Aryan Sanctuary.
Combinatorics | World of Mathematics – Mathigon
Combinatorics is a branch of mathematics which is about counting – and we will discover many exciting examples of “things” you can count.
The first combinatorial problems were studied by ancient Indian, Arabian and Greek mathematicians. Interest in the subject increased during the 19th and 20th centuries, together with the development of graph theory and problems like the four colour theorem. Some of the leading mathematicians in the field were Blaise Pascal (1623 – 1662), Jacob Bernoulli (1654 – 1705) and Leonhard Euler (1707 – 1783).
Combinatorics has many applications in other areas of mathematics, including graph theory, coding and cryptography, and probability.
Combinatorics can help us count the number of orders in which something can happen. Consider the following example:
In a classroom there are n pupils and n chairs standing in a row. In how many different orders can the pupils sit on these chairs?
Let us list the possibilities, with the different pupils represented by different colours of chairs.
For 2 pupils there are 2 different possible orders, for 3 pupils there are 6, for 4 pupils 24, and for 5 pupils 120. Notice that the number of possible orders increases very quickly as the number of pupils increases. With 6 pupils there are 720 different possibilities and it becomes impractical to list all of them. Instead we want a simple formula that tells us how many orders there are for n people to sit on n chairs. Then we can simply substitute 3, 4 or any other number for n to get the right answer.
Suppose we have n chairs and we want to place n pupils on them. There are n pupils who could sit on the first chair. Then there are n – 1 pupils who could sit on the second chair, n – 2 choices for the third chair, and so on, until only one choice remains for the final chair. In total, there are
n × (n – 1) × (n – 2) × … × 2 × 1
possibilities. To simplify notation, mathematicians use a “!” called factorial. For example, 5! (“five factorial”) is the same as 5 × 4 × 3 × 2 × 1. Above we have just shown that there are n!
possibilities to order n objects.
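We can verify this directly for small n (a quick check in Python, using only the standard library):

```python
from itertools import permutations
from math import factorial

# List all seating orders of n pupils explicitly and compare with n!.
for n in range(1, 7):
    listed = sum(1 for _ in permutations(range(n)))
    assert listed == factorial(n)
    print(n, factorial(n))
```

The loop prints the pairs 1 1, 2 2, 3 6, 4 24, 5 120, 6 720 — the same values as the table above.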
In how many different ways could 23 children sit on 23 chairs in a Maths Class? If you have 4 lessons a week and there are 52 weeks in a year, how many years does it take to get through all different
possibilities? Note: The age of the universe is about 14 billion years.
For 23 children to sit on 23 chairs there are 23! = 25,852,016,738,884,800,000,000 possibilities (this number is too big to be displayed on a calculator screen). Trying all possibilities would take
23! / (4 × 52) = 124,288,542,000,000,000,000 years.
This is nearly 10 million times as long as the current age of the universe!
The method above required us to have the same number of pupils as chairs to sit on. But what happens if there are not enough chairs?
How many different possibilities are there for k out of n pupils to sit on k chairs, when there are fewer chairs than pupils? Note that n – k pupils will be left standing, which we don’t have to include when listing the possibilities.
Let us start again by listing all possibilities:
To find a simple formula like the one above, we can think about it in a very similar way. There are n pupils who could sit on the first chair, then n – 1 pupils who could sit on the second chair, and so on, stopping once every chair is filled. We don’t care about the remaining n – k children left standing. In total there are
possibilities. Again we should think about generalising this. We start like we would with factorials, but we stop before we reach 1. In fact, we stop as soon as we reach the number of students left without a chair. When placing 7 students on 3 chairs there are
7 × 6 × 5 = (7 × 6 × 5 × 4 × 3 × 2 × 1) / (4 × 3 × 2 × 1) = 7! / 4! = 7! / (7 – 3)!
possibilities, since the 4 × 3 × 2 × 1 will cancel each other. Again there is a simpler notation for this: 7P3. If we want to place n objects in m positions there are
nPm = n! / (n – m)!
possibilities. The P stands for “permutations”, since we are counting the number of permutations (orders) of objects. If m and n are the same, as they were in the problem at the beginning of this
article, we have
nPn = n! / (n – n)! = n! / 0!.
To make sense of this we define 0! = 1. Now nPn = n! as we would expect from our solution to the first problem.
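In code, nPm is available directly (a Python illustration; `math.perm` assumes Python 3.8 or later):

```python
from math import perm, factorial

# 7 students on 3 chairs: 7P3 = 7!/4! = 210 ordered seatings
assert perm(7, 3) == 210
assert perm(7, 3) == factorial(7) // factorial(7 - 3)

# With m = n we recover n!, using the convention 0! = 1
assert perm(5, 5) == factorial(5) == 120
print(perm(7, 3))  # 210
```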
Unfortunately you can’t remember the code for your four-digit lock. You only know that you didn’t use any digit more than once. How many different ways do you have to try? What do you conclude about
the safety of those locks?
There are 10 digits (0, 1, …, 9) and each one appears at most once. The number of ways to arrange 4 of the 10 digits in order is 10P4 = 5040. It would take a very long time to test that many possibilities by hand, so 4-digit locks are very safe.
Permutations are used when you select objects and care about their order – like the order of children on chairs. However in some problems you don’t care about the order and just want to know how many
ways there are to select a certain number of objects from a bigger set.
In a shop there are five different T-shirts you like, coloured red, blue, green, yellow and black. Unfortunately you only have enough money to buy three of them. How many ways are there to select
three T-shirts from the five you like?
Here we don’t care about the order (it doesn’t matter if we buy black first and then red, or red first and then black), only about the number of combinations of T-shirts. Listing the possible colour triples, there are 10 in total. If we had calculated 5P3 = 60 instead, we would have over-counted some possibilities, as the following table shows:
With permutations, we count every combination of three T-shirts 6 times, because there are 3! = 6 ways to order the three T-shirts. To get the number of combinations from the number of permutations
we simply need to divide by 6. We write
5C3 = 5P3 / 3! = 60 / 6 = 10.
Here the C stands for “combinations”. In general, if we want to choose r objects from a total of n there are
nCr = nPr / r! = n! / (r! (n – r)!)
different combinations. Instead of nCr, mathematicians often write the binomial coefficient, with n above r in brackets like a fraction without the line in between. (To simplify typesetting we will continue using the nCr notation inline.)
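The relationship between permutations and combinations can be checked in a few lines (Python, standard library only):

```python
from math import comb, perm, factorial

# 3 T-shirts out of 5: every set of 3 appears 3! = 6 times among the permutations
assert perm(5, 3) == 60
assert comb(5, 3) == perm(5, 3) // factorial(3) == 10

# General identity: nCr = n! / (r! (n - r)!)
n, r = 10, 4
assert comb(n, r) == factorial(n) // (factorial(r) * factorial(n - r))
print(comb(5, 3))  # 10
```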
(a) There are 10 children in your class but you can invite only 5 to your birthday party. How many different combinations of friends could you invite? Explain whether to use combinations or permutations.
(b) At a party there are 75 people. Everybody shakes everybody else’s hand once. How many handshakes are there in total? Hint: How many people are involved in each handshake?
(a) The number of combinations of friends you can invite is 10C5 = 252. We used combinations because it doesn’t matter in which order we invite the friends, only which ones we invite.
(b) You want to find the number of all possible pairs of party guests. This is simply 75C2 = 2775. (That’s a lot of handshakes!)
Combinatorics and Pascal’s Triangle
Let’s calculate some values of nCr. We start with 0C0. Then we find 1C0 and 1C1. Next, 2C0, 2C1 and 2C2. Then 3C0, 3C1, 3C2 and 3C3. We can write down all these results in a table:
0C0 = 1
1C0 = 1 1C1 = 1
2C0 = 1 2C1 = 2 2C2 = 1
3C0 = 1 3C1 = 3 3C2 = 3 3C3 = 1
4C0 = 1 4C1 = 4 4C2 = 6 4C3 = 4 4C4 = 1
5C0 = 1 5C1 = 5 5C2 = 10 5C3 = 10 5C4 = 5 5C5 = 1
This is exactly Pascal’s triangle which we explored in the article on sequences. It can be created more easily by observing that any cell is the sum of the two cells above. Hidden in Pascal’s
triangle there are countless patterns and number sequences.
Now we also know that the rth number in the nth row is also given by nCr (but we always have to start counting at 0, so the first row or column is actually the zeroth row). If we apply what we know
about creating Pascal’s triangle to our combinations, we get
nCr + nC(r + 1) = (n + 1)C(r + 1).
This is known as Pascal’s Identity. You can derive it using the definition of nCr in terms of factorials, or you can think about it the following way:
We want to choose r + 1 objects from a set of n + 1 objects. This is exactly the same as marking one object of the n + 1, call it X, and either choosing X plus r others (from the remaining n), or not choosing X and instead choosing all r + 1 objects from the remaining n.
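Pascal’s Identity is easy to verify mechanically for small values (a quick Python check):

```python
from math import comb

# Check Pascal's identity nCr + nC(r+1) = (n+1)C(r+1) for all small n, r
for n in range(10):
    for r in range(n):
        assert comb(n, r) + comb(n, r + 1) == comb(n + 1, r + 1)

# One concrete instance, matching the triangle printed above:
print(comb(4, 1) + comb(4, 2), comb(5, 2))  # 10 10
```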
Many problems in combinatorics have a simple solution if you think about it the correct way, and a very complicated solution if you just try to use algebra…
Stars and Bars
A greengrocer at a market stocks a large supply of n different kinds of fruit. In how many ways can we make up a bag of r fruits? Note that r can be smaller than, equal to, or bigger than n.
Note that with r ≤ n there are nCr ways to select r fruits of r different kinds. However we are also allowed to take more than one fruit of each kind, for example two apples and one strawberry.
We can represent any valid selection of fruit by a string of stars and bars, as shown in this example:
★★★ | ★★ | | ★★ | ★
3 of type 1 2 of type 2 0 of type 3 2 of type 4 1 of type 5
In total there are r stars (representing the r fruit we are allowed to take) and there are n – 1 bars (dividing the n different kinds of fruit). This makes r + n – 1 places in total. Any ordering of
r stars and n – 1 bars corresponds to precisely one valid selection of fruit.
Now we can apply our combinatorial tools: there are r + n – 1 places and we want to choose n – 1 of them to be bars (all the others are stars). So there are exactly (r + n – 1)C(n – 1) possibilities to do that!
Suppose there are five kinds of fruit and we want to take ten items. From what we have calculated above, there are
(10 + 5 – 1)C(5 – 1) = 14C4 = 1,001
possibilities. Think about that next time you go shopping!
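For small numbers we can confirm the stars-and-bars formula by brute force (Python; the values n = 3 kinds and r = 4 fruits are chosen small enough to enumerate):

```python
from itertools import product
from math import comb

# A bag is described by how many of each kind we take; order inside the bag
# is irrelevant, so each tuple of counts summing to r is one valid bag.
n, r = 3, 4
bags = {counts for counts in product(range(r + 1), repeat=n) if sum(counts) == r}

assert len(bags) == comb(r + n - 1, n - 1)  # 6C2 = 15
print(len(bags))  # 15
```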
Combinatorics and Probability
Combinatorics has many applications in probability theory. You often want to find the probability of one particular event and you can use the equation
P(X) = probability that X happens = (number of outcomes where X happens) / (total number of possible outcomes)
You can use combinatorics to calculate the “total number of possible outcomes”. Here is an example:
Four children, called A, B, C and D, sit randomly on four chairs. What is the probability that A sits on the first chair?
We have already shown that in total there are 24 ways to sit on four chairs. If you look back at our solution, you will also find that A sits on the first chair in six of the cases. Therefore
P(A sits on the first chair) = (number of outcomes where A sits on the first chair) / (total number of possible outcomes) = 6 / 24 = 1 / 4.
This answer was expected, since each of the four children is equally likely to sit on the first chair. But other cases are not quite as straightforward…
(a) A postman has to deliver four letters to four different houses in a street. Unfortunately the rain has erased the addresses, so he just distributes them randomly, one letter per house. What is
the probability that every house gets the right letter? (☆ What is the probability that every house gets a wrong letter?)
(b) In a lottery you have to guess 6 out of 49 numbers. What is the probability that you get all of them right? If you submit 100 guesses every week, how long on average will it take you to win?
(a) There are 4! = 24 ways to randomly distribute the letters, and only one way to get all of them right. Thus the probability that every letter is delivered to the right house is 1 / 24 = 0.0417 = 4.17%.
To find the probability that every letter gets delivered to the wrong house is a bit more difficult. It is not simply 1 – 0.0417, since there are many cases in which one or two, but not all, houses get the right letter. In this simple case, the easiest solution is to write down all 24 possibilities. You will find that in 9 out of the 24 cases every house gets a wrong letter, which gives a probability of 9 / 24 = 0.375 = 37.5%. If there are too many houses to write down all possibilities, you can use an idea called the Inclusion-Exclusion principle.
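The 24 possibilities are small enough to enumerate in code (Python); this confirms both the 1/24 and the 9/24 counts:

```python
from itertools import permutations

# A delivery is a permutation: letter i goes to house p[i].
letters = range(4)
all_ways = list(permutations(letters))
assert len(all_ways) == 24

all_right = sum(1 for p in all_ways if all(p[i] == i for i in letters))
all_wrong = sum(1 for p in all_ways if all(p[i] != i for i in letters))

assert all_right == 1   # every house correct: probability 1/24 ≈ 4.17%
assert all_wrong == 9   # every house wrong (derangements of 4): 9/24 = 37.5%
```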
(b) There are 49C6 = 13,983,816 possible outcomes of the lottery, so the probability of getting the right solution is 1 / 49C6 = 0.000000072.
On average it will also take 13,983,816 attempts to win. If we submit 100 guesses every week this corresponds to 139,838 weeks, which is the same as 2,689 years. The lesson to learn: don’t play the lottery!
Bluff Body Aerodynamics | Drag, Vortex Shedding & Flow
Explore the essentials of bluff body aerodynamics, covering drag, vortex shedding, flow dynamics, and their impact on design and engineering.
Understanding Bluff Body Aerodynamics: Drag, Vortex Shedding, and Flow Dynamics
Bluff body aerodynamics is a fascinating and complex field, central to understanding how air flows around objects that are not streamlined. This understanding is crucial in various engineering and
design applications, from building construction to vehicle design. The primary aspects of bluff body aerodynamics include drag, vortex shedding, and the flow dynamics around these bodies.
Drag in Bluff Bodies
Drag is a force that opposes an object’s motion through a fluid (like air or water). In the context of bluff bodies, which are typically characterized by their broad, flat surfaces, drag is a major
concern. This is because these shapes do not allow air to flow smoothly around them, causing high pressure zones in the front and low pressure zones at the rear. The differential in pressure results
in drag. The drag coefficient (C_D) is a dimensionless number used to quantify drag in relation to an object’s area and the fluid’s density and velocity.
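One common way to make this relationship concrete is the standard drag equation, F_D = ½ ρ v² C_D A (a textbook formula, not stated explicitly above). The sketch below uses illustrative, assumed values for a bluff body in air:

```python
def drag_force(rho, v, cd, area):
    """Standard drag equation: F_D = 0.5 * rho * v**2 * C_D * A (newtons)."""
    return 0.5 * rho * v**2 * cd * area

# Assumed illustrative values, not from the article:
rho = 1.225   # air density, kg/m^3
v = 30.0      # flow speed, m/s
cd = 1.1      # a typical bluff-body drag coefficient (assumed)
area = 2.0    # frontal area, m^2

F = drag_force(rho, v, cd, area)  # roughly 1.2 kN for these values
print(F)
```

Because C_D enters linearly, halving the drag coefficient of a shape halves the drag force at a given speed, which is why streamlining matters so much.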
Vortex Shedding and Flow Dynamics
Vortex shedding is another critical aspect of bluff body aerodynamics. As fluid flows past a bluff body, it separates from the object’s surface, creating vortices. These vortices are areas of
rotating fluid that form alternately on either side of the body, leading to an oscillating flow pattern known as the Von Kármán vortex street. This phenomenon can induce vibrations in the body, which
are particularly significant in engineering structures like bridges or tall buildings.
The flow dynamics around bluff bodies are complex and vary significantly based on the shape and size of the object, as well as the fluid’s velocity and viscosity. Turbulent flow, characterized by
chaotic and irregular fluid motion, is common in bluff body aerodynamics. Understanding these flow patterns is crucial for predicting and mitigating potential issues such as excessive drag or
structural vibrations.
In the next section, we will delve deeper into the applications and implications of bluff body aerodynamics in real-world scenarios, including strategies to optimize designs for reduced drag and
enhanced stability.
Applications and Implications of Bluff Body Aerodynamics
The principles of bluff body aerodynamics have wide-ranging applications in various fields. In architectural engineering, understanding these principles is vital for designing buildings that can
withstand wind forces. The study of vortex shedding, for instance, informs the design of skyscrapers to ensure they are resistant to wind-induced oscillations. Similarly, in automotive and aerospace
engineering, the knowledge of drag is essential for designing more fuel-efficient vehicles and aircraft by minimizing air resistance.
Strategies for Optimizing Bluff Body Designs
To mitigate the adverse effects of drag and vortex shedding, several strategies are employed. One common approach is the use of aerodynamic add-ons such as spoilers on vehicles, which alter the flow
dynamics to reduce drag. In building design, architects may incorporate features like tapered shapes or helical structures to disrupt vortex shedding patterns. Additionally, the use of computational
fluid dynamics (CFD) simulations allows engineers to predict and optimize the aerodynamic performance of bluff bodies before physical prototypes are built.
Advancements and Future Directions
Advancements in technology and materials science continue to push the boundaries of bluff body aerodynamics. For example, the development of smart materials that can change shape in response to wind
forces is a promising area of research. This adaptability could lead to more efficient and safer designs in various applications. Furthermore, ongoing improvements in CFD modeling are enabling more
accurate and detailed analysis of complex flow patterns, paving the way for innovative design solutions.
Bluff body aerodynamics is a crucial field of study with significant implications in engineering and design. Understanding the principles of drag, vortex shedding, and flow dynamics around bluff
bodies enables the development of more efficient, stable, and safe designs across multiple industries. As technology advances, the potential for innovative solutions in this domain continues to
expand, offering exciting prospects for future developments. With continued research and application of these principles, we can expect to see further advancements in the design and optimization of
structures and vehicles, contributing to a more sustainable and efficient world.
#005: Transmission lines are awesome!
If you study engineering, you are likely to encounter the topic of transmission line model in a course like “Radio Frequency Fundamentals” or something to that effect. However, the concept of
transmission lines is certainly not limited to electromagnetic propagation; for example, it has great usage within acoustics, as will be demonstrated now.
Imagine a straight tube carrying a propagating sound pressure:
The sound pressure is assumed constant in the cross-section, so that we have so-called plane wave propagation. The task is now to describe the sound field along the propagation axis. Since we can
have several wavelengths along the tube, we need a so-called transmission matrix, which relates the four quantities 1) input pressure, 2) input volume velocity, 3) output pressure, and 4) output
volume to each other. In its general form, this matrix system can be written as
The matrix elements are worked out [1] for a tube with length l and cross-sectional area S
So if the density and the sound speed are known, the tube is fully described. Other pairs of quantities that fully describe the tube are equally valid, but this will be touched further upon in
upcoming blog posts. The wavenumber k is simply related to the sound speed.
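For reference, the standard lossless form of this transmission matrix for a tube of length l and cross-sectional area S (a textbook result consistent with [1], reconstructed here since the explicit matrix elements are not reproduced above) is:

```latex
\begin{bmatrix} p_{\mathrm{in}} \\ U_{\mathrm{in}} \end{bmatrix}
=
\begin{bmatrix}
\cos(kl) & j\,\dfrac{\rho c}{S}\,\sin(kl) \\[6pt]
j\,\dfrac{S}{\rho c}\,\sin(kl) & \cos(kl)
\end{bmatrix}
\begin{bmatrix} p_{\mathrm{out}} \\ U_{\mathrm{out}} \end{bmatrix},
\qquad k = \frac{\omega}{c}.
```

Here ρc/S is the characteristic impedance seen by the volume velocity; with complex ρ and c the same form covers the lossy case.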
In general, the density and the sound speed can both be complex. This way, so-called viscous and thermal losses can also be included in the transmission matrix for microacoustic applications. If you
work with transmission line models in e.g. MATLAB, complex systems can be calculated efficiently, by combining cascaded and parallel systems. For Finite Element Modelling such systems can also be
included, which can be very useful: Two larger volumes are connected via a tube with a circular cylindrical cross-section. The tube is not explicitly drawn or meshed; it is included as a mathematical
abstraction using the excellent coupling and constraint options in COMSOL Multiphysics® [2].
The dashed line indicates a mathematical description of a transmission line model for a circular cylindrical tube. Its cross-sectional area is indicated as interface surfaces on the two larger volumes.
The sound pressure level at one end of this system is found for a given velocity input at the other end, with the assumption that the tube is lossfree.
This sound pressure level matches the level that would have been found for a model with the tube included explicitly, but by including the tube only via mathematics you save degrees of freedom
(thereby lowering solution time), especially for long tubes with thermoviscous losses included in the transmission matrix. It is also easier to run a parameter study on the length of the tube, since
no geometry movement or remeshing is needed.
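The cascading idea can be sketched in a few lines (Python here rather than MATLAB, purely as an illustration; the physical values are assumed, and the matrix is the standard lossless textbook form). A useful check is that two cascaded tubes of the same cross-section behave exactly like one tube of the combined length:

```python
import cmath

def tube_matrix(length, k, zc):
    """Lossless plane-wave transmission matrix of a uniform tube.
    Relates (p_in, U_in) to (p_out, U_out); zc = rho*c/S."""
    kl = k * length
    return [[cmath.cos(kl), 1j * zc * cmath.sin(kl)],
            [1j * cmath.sin(kl) / zc, cmath.cos(kl)]]

def cascade(t1, t2):
    """2x2 matrix product: transmission matrix of two systems in series."""
    return [[t1[0][0]*t2[0][0] + t1[0][1]*t2[1][0],
             t1[0][0]*t2[0][1] + t1[0][1]*t2[1][1]],
            [t1[1][0]*t2[0][0] + t1[1][1]*t2[1][0],
             t1[1][0]*t2[0][1] + t1[1][1]*t2[1][1]]]

# Assumed illustrative values: air at 1 kHz, tube area 1 cm^2
rho, c, S = 1.2, 343.0, 1e-4
k = 2 * cmath.pi * 1000.0 / c
zc = rho * c / S

t_combined = cascade(tube_matrix(0.05, k, zc), tube_matrix(0.07, k, zc))
t_single = tube_matrix(0.12, k, zc)
for i in range(2):
    for j in range(2):
        assert abs(t_combined[i][j] - t_single[i][j]) \
            < 1e-6 * abs(t_single[i][j]) + 1e-9
```

With complex wavenumber and impedance (thermoviscous losses) the same cascading code applies unchanged, which is what makes the method so efficient for long chains of elements.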
This brief introduction to transmission line modelling hopefully illustrates the power of this technique, and I would suggest employing it whenever possible. In future blog posts other related
techniques will be described, which can also make calculations run more efficiently, and provide more insight than purely numerical models.
1): Propagation Of Sound Waves In Ducts, Finn Jacobsen, Acoustic Technology, Department of Electrical Engineering, Technical University of Denmark, Building 352, Ørsteds Plads, DK-2800 Lyngby, Denmark.
2): COMSOL Multiphysics® v. 5.2. www.comsol.com. COMSOL AB, Stockholm, Sweden.
Gini Coefficient: (Definition, Formula & How to Calculate)
Gini Coefficient Definition
The Gini coefficient is a statistical measure used to quantify inequality within a nation. It does so by measuring the distribution of income or wealth between members of the population. The calculation produces a result between 0 and 1, with 0 representing perfect equality and 1 absolute inequality.
Originally devised by Corrado Gini in 1912, it is most commonly used to measure income inequality. Both the World Bank and the UN produce annual statistics for the Gini coefficient, and many governments use it to track inequality. For example, the UK’s Office for National Statistics (ONS) produces an annual report highlighting historical statistics for the Gini coefficient. The US Census Bureau also releases a number of reports on the Gini coefficient, keeping track of its movements.
Key Points
1. The Gini Coefficient is a statistical measure that calculates inequality.
2. It measures inequality by measuring the distribution of income across the country.
3. Although the Gini coefficient measures wealth inequality, it doesn’t measure or factor in overall wealth.
The Gini coefficient, also known as the Gini Index, is widely used across the world. It is one of the most efficient and easily understood figures on inequality, which makes it easier to compare
countries. At the same time, it does have its drawbacks which we will look at later.
How is the Gini Coefficient Calculated
There are two ends of the measurement, ranging from 0 to 1. At 1, the measurement would show that one person receives all the national income. By contrast, a measurement of 0 would suggest that
income is perfectly split between all members of society. In other words, 0 = complete equality, and 1 = complete inequality.
The Gini coefficient is calculated using the Lorenz Curve. This can be illustrated in the graph below. To explain, each percentile is plotted on the graph with a line situated at 45 degrees. This
line represents perfect equality. So the bottom 10 percent of the population receives 10 percent of income, whilst the top 10 percent also receives 10 percent of income.
The cumulative income share of each percentile is then plotted below this line, creating what is known as the Lorenz Curve. The curve sits below the line of equality because a situation cannot exist whereby the bottom 50 percent receive more than 50 percent of income; if they did, they wouldn’t be the bottom 50 percent.
Once each percentile has been placed onto the graph, we are left with the curve. The area between the curve and the line of equality is then used to calculate the Gini coefficient.
The area above the Lorenz Curve and towards the line of equality is referred to as ‘A’ and the line below as ‘B’. This can be seen in the chart below.
To calculate the Gini coefficient, we must first measure the area of ‘B’. We can do this by splitting down each segment into triangles and squares. Let us take a simple example below. If we split it
down, we can create 3 segments illustrated below.
Area 1 = width (50) x height (25) x 0.5 (as it’s a triangle) = 625
Area 2 = width (50) x height (25) = 1250
Area 3 = width (50) x height (75) x 0.5 (as it’s a triangle) = 1875
So the total area of ‘B’ equals the combined total of the three areas. This equals 625 + 1250 + 1875 = 3750. The area of ‘B’ is therefore 3750.
Gini Coefficient Formula
The Gini Coefficient formula is calculated using = A / (A + B). Where ‘A’ is the area above the Lorenz Curve and ‘B’ is the area below.
From the previous example, we have already worked out that B = 3750. We can then work out the area of ‘A’. The total area under the line of equality is that of the triangle: 100 x 100 x 0.5 = 5000. This is the total area of the triangle, which includes both A and B. Since we already have the area of B (3750), the area of A is the difference: 5000 - 3750 = 1250.
We can now plug those figures into the formula:
Gini Coefficient = 1250 / (1250 + 3750) = 0.25
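Putting the pieces together, the whole calculation can be sketched in a few lines (the figures are those from the example above):

```python
# Total area under the line of equality: a triangle, 100 x 100 x 0.5.
total_area = 100 * 100 * 0.5   # 5000.0

area_b = 3750                  # area under the Lorenz Curve (computed earlier)
area_a = total_area - area_b   # area between the curve and the line of equality

gini = area_a / (area_a + area_b)
print(gini)  # 0.25
```

Note that A + B is simply the total triangle, so the formula is equivalent to Gini = 1 - B / total_area.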
What the Gini doesn’t show
The Gini coefficient shows the distribution of income within a country. What it doesn’t show is how wealthy that country is in the first place. For instance, OECD data shows that the United States and Lithuania have a similar Gini coefficient score of around 0.38. However, the Gini coefficient does not consider quality of life or general economic well-being, so it only tells part of the story.
Although the US and Lithuania have similar Gini coefficients, the US has a GDP per capita 3.7 times that of Lithuania. Wages are similarly disproportionate.
We should also consider the average quality of life. Perfect equality may be considered a positive thing, but not if everyone is equally poor. If everyone has an income of $1 a month, they may all be equal, but they aren’t going to be able to afford much.
Related Topics
FAQs on the Gini Coefficient
What does a high Gini coefficient mean?
A high Gini coefficient means that the nation has a high level of income inequality. So the highest earners in society take home a significant proportion of the nation’s income.
How does the Gini Coefficient work?
The Gini coefficient is calculated on a scale of 0 to 1, with 1 representing complete inequality and 0 representing complete equality.
How do you calculate the Gini coefficient?
The Gini coefficient can be calculated using the formula: Gini Coefficient = A / (A + B), where A is the area above the Lorenz Curve and B is the area below the Lorenz Curve.
About Paul
Paul Boyce is an economics editor with over 10 years experience in the industry. Currently working as a consultant within the financial services sector, Paul is the CEO and chief editor of BoyceWire.
He has written publications for FEE, the Mises Institute, and many others.
UCAS code
Start date: September 2025
Delivery type: On campus
Duration: 4 years full time
Work placement
Study abroad
Typical A-level offer: AAA/A*AB (specific subject requirements)
Typical Access to Leeds offer
Course overview
Mathematics is key to the sciences and to a cross-section of business disciplines. The ongoing drive for economic efficiency, the increasing importance of technology and big data, and new emerging
areas such as climate science all mean that mathematics continues to have a significant impact on the world. Demand for mathematics skills comes from all sectors – from business and technology to
science, research and development, and IT – meaning the career options available are varied and rewarding.
Studying a mathematics degree at Leeds will provide you with a range of core mathematical skills whilst enhancing your abilities in logical thinking, problem solving and decision making – all of
which are highly valued by employers. You can explore topics as diverse as fluid dynamics, mathematical biology, number theory, risk management, stochastic calculus and topology. You can choose to
specialise in a particular area of interest or to delve into several different areas. Choosing this integrated Masters degree (MMath, BSc) is particularly suitable if you wish to work closer to the
frontiers of research or to use mathematics at a higher level in your career.
Here at Leeds, we understand the importance that mathematics has in everyday life, which is why we have one of the largest mathematics research departments in the UK, shaping our curriculum. We will
equip you with the relevant knowledge, skills and experience you need to begin your career in this highly valued specialism.
Why study at Leeds:
• Our School’s globally-renowned research feeds into the course, shaping your learning with the latest thinking in areas such as pure mathematics, applied mathematics, statistics and financial mathematics.
• Learn from expert academics and researchers who specialise in a variety of mathematical areas.
• Small tutorial groups support the teaching, providing you with regular feedback and advice from the academic staff throughout your degree.
• Access excellent facilities and computing equipment which are complemented by social areas, communal problem-solving spaces and quiet study rooms.
• Broaden your experience and enhance your career prospects with our industrial placement opportunities and study abroad programmes.
• Make the most of your time at Leeds by joining our student society MathSoc where you can meet more of your peers, enjoy social events and join the MathSoc football or netball team.
Benefits of an integrated Masters
Learn more about what an integrated Masters is and how it can benefit your studies and boost your career.
Accreditation is the assurance that a university course meets the quality standards established by the profession for which it prepares its students.
The School of Mathematics at Leeds has a successful history of delivering courses accredited by the Royal Statistical Society (RSS). This means our mathematics courses have consistently met the
quality standards set by the RSS.
As we are reviewing our curriculum, we are currently seeking reaccreditation from the RSS.
Course details
Our core mathematics degree offers opportunities to study a broad range of topics within the discipline, spanning pure mathematics, applied mathematics and statistics. Our academic staff have
extensive research interests, which is why we're able to offer a wide choice of modules. You’ll graduate as a multi-skilled mathematician, perhaps with particular expertise in an area of interest or
with the training necessary to work in a particular industry.
Each academic year, you'll take a total of 120 credits.
Course Structure
The list shown below represents typical modules/components studied and may change from time to time. Read more in our terms and conditions.
Most courses consist of compulsory and optional modules. There may be some optional modules omitted below. This is because they are currently being refreshed to make sure students have the best
possible experience. Before you enter each year, full details of all modules for that year will be provided.
For more information and a list of typical modules available on this course, please read Mathematics MMath, BSc in the course catalogue.
Year 1
Compulsory modules
Core Mathematics – 40 credits
You’ll learn the foundational concepts of function, number and proof, equipping you with the language and skills to tackle your mathematical studies. The module also consolidates basic calculus,
extending it to more advanced techniques, such as functions of several variables. These techniques lead to methods for solving simple ordinary differential equations. Linear algebra provides a basis
for wide areas of mathematics and this module provides the essential foundation.
Real Analysis – 20 credits
Calculus is arguably the most significant and useful mathematical idea ever invented, with applications throughout the natural sciences and beyond. This module develops the theory of differential and
integral calculus of real-valued functions in a precise and mathematically rigorous way.
Computational Mathematics and Modelling – 20 credits
You'll be introduced to computational techniques, algorithms and numerical solutions, as well as the mathematics of discrete systems. You'll learn basic programming using the language Python and
apply computational techniques to the solution of mathematical problems.
Introduction to Group Theory – 10 credits
Group theory is a fundamental branch of mathematics, central also in theoretical physics. The concept of a group may be regarded as an abstract way to describe symmetry and structure. In this module,
we will introduce group theory, with motivation from, and application to, specific examples of familiar mathematical structures such as permutations of lists and symmetries of shapes.
Dynamics and Motion – 10 credits
In its broadest sense, dynamics refers to the mathematical modelling of things which change with time. The main focus of this module is that of Newtonian mechanics, where forces cause accelerations
which govern the motion of objects (their dynamics), but the module will also explore other examples and applications. You’ll build on the methods of calculus (especially solution of ordinary
differential equations) from the ‘Core Mathematics’ module. You’ll also be introduced to a simple numerical method which allows equations for dynamics to be solved approximately on computers.
Probability and Statistics – 20 credits
'Probability is basically common sense reduced to calculation; it makes us appreciate with exactitude what reasonable minds feel by a sort of instinct.' So said Laplace. In the modern scientific and
technological world, it is even more important to understand probabilistic and statistical arguments. This module will introduce you to key ideas in both areas, with probability forming the
theoretical basis for statistical tests and inference.
Year 2
Compulsory modules
Investigations in Mathematics – 20 credits
You’ll be introduced to ideas and methods of mathematical research. Examples and applications will be drawn from across the spectrum of pure mathematics, applied mathematics and statistics. You’ll
investigate a mathematical theory or concept and produce a report.
Further Linear Algebra and Discrete Mathematics – 20 credits
Explore the more abstract ideas of vector spaces and linear transformations, together with introducing the area of discrete mathematics.
Vector Calculus and Partial Differential Equations – 20 credits
Vector calculus is the extension of ordinary one-dimensional differential and integral calculus to higher dimensions, and provides the mathematical framework for the study of a wide variety of
physical systems, such as fluid mechanics and electromagnetism.
These systems give rise to partial differential equations (PDEs), which can be solved and analysed. Students will learn to use, among others, techniques introduced in earlier modules as well as being
introduced to Fourier methods for PDEs.
Optional modules
You’ll study optional modules within one of the following pathways:
• Pure and Applied Mathematics
• Pure Mathematics and Statistics
• Applied Mathematics and Statistics
Please note: The modules listed below are indicative of typical options and some of these options may not be available, depending on other modules you have selected already.
Pure and Applied Mathematics
Calculus, Curves and Complex Analysis – 20 credits
The ideas of differential geometry and complex analysis are powerful products of nineteenth- and twentieth-century pure mathematics: interesting and beautiful in their own right, and with application
to theoretical physics. Differential geometry is concerned with describing, understanding and quantifying properties of curved objects, while complex analysis extends and, in some ways, simplifies
the differential calculus by considering functions of a complex variable. This module offers an introduction to each of these subjects.
Mathematical Modelling – 20 credits
Learn analytical and computational techniques for the solution of ordinary and partial differential equations, which describe particle motion in fields, fluids, waves, diffusion and many other phenomena.
Optional modules:
Introduction to Logic – 10 credits
This module is an introduction to mathematical logic introducing formal languages that can be used to express mathematical ideas and arguments. It throws light on mathematics itself, because it can
be applied to problems in philosophy, linguistics, computer science and other areas.
Optimisation – 10 credits
Optimisation, “the quest for the best”, plays a major role in financial and economic theory, such as maximising a company's profits or minimising its production costs. This module develops the theory
and practice of maximising or minimising a function of many variables, and thus lays a solid foundation for progression onto more advanced topics, such as dynamic optimisation, which are central to
the understanding of realistic economic and financial scenarios.
Rings and Polynomials – 10 credits
Rings are one of the fundamental concepts of mathematics, and they play a key role in many areas, including algebraic geometry, number theory, Galois theory and representation theory. The aim of this
module is to give an introduction to rings. The emphasis will be on interesting examples of rings and their properties.
Calculus of Variations – 10 credits
The calculus of variations concerns problems in which one wishes to find the extrema of some quantity over a system that has functional degrees of freedom. Many important problems arise in this way
across pure and applied mathematics. In this module, you’ll meet the system of differential equations arising from such variational problems: the Euler-Lagrange equations. These equations and the
techniques for their solution, will be studied in detail.
Pure Mathematics and Statistics
Calculus, Curves and Complex Analysis – 20 credits
The ideas of differential geometry and complex analysis are powerful products of nineteenth- and twentieth-century pure mathematics: interesting and beautiful in their own right, and with application
to theoretical physics. Differential geometry is concerned with describing, understanding and quantifying properties of curved objects, while complex analysis extends and, in some ways, simplifies
the differential calculus by considering functions of a complex variable. This module offers an introduction to each of these subjects.
Statistical Methods – 20 credits
Statistical models are important in many applications. They contain two main elements: a set of parameters with information of scientific interest and an "error distribution" representing random
variation. This module lays the foundations for the analysis of such models. We’ll use practical examples from a variety of statistical applications to illustrate the ideas.
Optional modules:
Introduction to Logic – 10 credits
This module is an introduction to mathematical logic introducing formal languages that can be used to express mathematical ideas and arguments. It throws light on mathematics itself, because it can
be applied to problems in philosophy, linguistics, computer science and other areas.
Stochastic Processes – 10 credits
A stochastic process refers to any quantity which changes randomly in time. The capacity of a reservoir, an individual’s level of no claims discount and the size of a population are all examples from
the real world. The linking model for all these examples is the Markov process. With appropriate modifications, the Markov process can be extended to model stochastic processes which change over
continuous time, not just at regularly spaced time points. You’ll explore the key features of stochastic processes and develop your understanding in areas like state, space and time, the Poisson
process and the Markov property.
Rings and Polynomials – 10 credits
Rings are one of the fundamental concepts of mathematics, and they play a key role in many areas, including algebraic geometry, number theory, Galois theory and representation theory. The aim of this
module is to give an introduction to rings. The emphasis will be on interesting examples of rings and their properties.
Time Series – 10 credits
In time series, measurements are made at a succession of times, and it is the dependence between measurements taken at different times which is important. This module will concentrate on techniques
for model identification, parameter estimation, diagnostic checking and forecasting within the autoregressive moving average family of models and their extensions.
Applied Mathematics and Statistics
Statistical Methods – 20 credits
Statistical models are important in many applications. They contain two main elements: a set of parameters with information of scientific interest and an "error distribution" representing random
variation. This module lays the foundations for the analysis of such models. We’ll use practical examples from a variety of statistical applications to illustrate the ideas.
Mathematical Modelling – 20 credits
Learn analytical and computational techniques for the solution of ordinary and partial differential equations, which describe particle motion in fields, fluids, waves, diffusion and many other phenomena.
Optional modules:
Stochastic Processes – 10 credits
A stochastic process refers to any quantity which changes randomly in time. The capacity of a reservoir, an individual’s level of no claims discount and the size of a population are all examples from
the real world. The linking model for all these examples is the Markov process. With appropriate modifications, the Markov process can be extended to model stochastic processes which change over
continuous time, not just at regularly spaced time points. You’ll explore the key features of stochastic processes and develop your understanding in areas like state, space and time, the Poisson
process and the Markov property.
Optimisation – 10 credits
Optimisation, “the quest for the best”, plays a major role in financial and economic theory, such as maximising a company's profits or minimising its production costs. This module develops the theory
and practice of maximising or minimising a function of many variables, and thus lays a solid foundation for progression onto more advanced topics, such as dynamic optimisation, which are central to
the understanding of realistic economic and financial scenarios.
Time Series – 10 credits
In time series, measurements are made at a succession of times, and it is the dependence between measurements taken at different times which is important. This module will concentrate on techniques
for model identification, parameter estimation, diagnostic checking and forecasting within the autoregressive moving average family of models and their extensions.
Calculus of Variations – 10 credits
The calculus of variations concerns problems in which one wishes to find the extrema of some quantity over a system that has functional degrees of freedom. Many important problems arise in this way
across pure and applied mathematics. In this module, you’ll meet the system of differential equations arising from such variational problems: the Euler-Lagrange equations. These equations and the
techniques for their solution, will be studied in detail.
Year 3
Compulsory modules
Project in Mathematics – 40 credits
This project is a chance for you to build invaluable research skills and develop and implement a personal training plan by conducting your own independent research project in a topic in mathematics.
You’ll meet in groups to discuss the project topic, with each group member researching a specific aspect of the topic and producing an individual project report. You’ll then come together as a group
to present your results, with each person contributing their own findings.
Optional modules
Please note: The modules listed below are indicative of typical options and some of these options may not be available, depending on other modules you have selected already.
Pure and Applied Mathematics
Optional modules:
Groups and Symmetry – 20 credits
Group theory is the mathematical theory of symmetry. Groups arise naturally in pure and applied mathematics, for example in the study of permutations of sets, rotations and reflections of geometric
objects, symmetries of physical systems and the description of molecules, crystals and materials. Groups have beautiful applications to counting problems, answering questions like: "How many ways are
there to colour the faces of a cube with m colours, up to rotation of the cube?"
Methods of Applied Mathematics – 20 credits
This module develops techniques to solve ordinary and partial differential equations arising in mathematical physics. For the important case of second-order PDEs, we distinguish between elliptic
equations (e.g., Laplace's equation), parabolic equations (e.g., heat equation) and hyperbolic equations (e.g., wave equation), and physically interpret the solutions. When there is not an exact
solution in closed form, approximate solutions (so-called perturbation expansions) can be constructed if there is a small or large parameter.
Metric Spaces and Measure Theory – 20 credits
If you would like to undertake a rigorous study of a physical, geometrical or statistical law, it is likely you’ll need to use both of these concepts. Metric spaces have a notion of distance between points, and measure theory generalises familiar ideas of volume, underpinning integration. We will study these exciting topics, proving results fundamental to pure and applied mathematics: the Picard–Lindelöf theorem from ODEs, the inverse and implicit function theorems, and Lebesgue’s dominated convergence theorem.
Computational Applied Mathematics – 20 credits
The equations that model real-world problems can only rarely be solved exactly. The basic idea employed in this module is that of discretising the original continuous problem to obtain a discrete
problem, or system of equations, that may be solved with the aid of a computer. This course introduces and applies the techniques of finite differences, numerical linear algebra and stochastic
Numbers and Codes – 20 credits
Number theory explores the natural numbers. Central themes include primes, arithmetic modulo n, and Diophantine equations as in Fermat's Last Theorem. It is a wide-ranging current field with many
applications, e.g. in cryptography.
Error-correcting codes tackle the problem of reliably transmitting digital data through a noisy channel. Applications include transmitting satellite pictures, designing registration numbers and
storing data. The theory uses methods from algebra and combinatorics.
This module introduces both subjects. It emphasises common features, such as algebraic underpinnings, and applications to information theory, both in cryptography (involving secrecy) and in
error-correcting codes (involving errors in transmission).
Proof and Computation – 20 credits
The main goal of this module is to prove Gödel's First Incompleteness Theorem (1931) which shows that, if any reasonable formal theory has strong enough axioms, there are statements which it can
neither prove nor refute. This module will also provide background to the impact of Gödel's Theorem on the modern world and the way it sets an agenda for further research.
Entropy and Quantum Mechanics – 20 credits
The material world is composed of countless microscopic particles. When three or more particles interact, their dynamics is chaotic and impossible to predict in detail. Further, at the microscopic, atomic scale, particles behave like waves, with dynamics that is known only statistically. So why is it that the materials around us behave in predictable and regular ways? One reason is that random
behaviour on the microscopic scale gives rise to collective behaviour that can be predicted with practical certainty, guided by the principle that the total disorder (or entropy) of the universe
never decreases. A second reason is that the mathematics of quantum mechanics provides incredibly accurate predictions at the atomic scale. This module studies calculations involving both entropy and
quantum mechanics, as applied to the matter that makes up our world.
Fluid Dynamics – 20 credits
Fluid dynamics is the science that describes the motion of materials that flow. It constitutes a significant mathematical challenge with important implications in an enormous range of fields in
science and engineering, including aerodynamics, astrophysics, climate modelling, and physiology. This module sets out the fundamental concepts of fluid dynamics, for both inviscid and viscous flows.
It includes a formal mathematical description of fluid flow and the derivation of the governing equations, using techniques from vector calculus. Solutions of the governing equations are derived for
a range of simple flows, giving you a feel for how fluids behave, along with experience in modelling everyday phenomena.
Mathematics in Social Context C – 20 credits
Mathematics is possessed of what Bertrand Russell called a cold and austere beauty; and yet it has roots in deeply human concerns. In this module, you’ll gain insight into ways in which
mathematicians can bridge the ‘two cultures’ and see how mathematics shapes our world and our cultures.
Graph Theory and Combinatorics – 20 credits
Graph theory is one of the primary subjects in discrete mathematics. It arises wherever networks such as those seen in computers or transportation are found, and it has applications to fields as diverse as
chemistry, computing, linguistics, navigation and more. More generally, combinatorics concerns finding patterns in discrete mathematical structures, often with the goal of counting the occurrences of
such patterns. This module provides a foundation in graph theory and combinatorics.
Differential Geometry – 20 credits
Differential geometry is the application of calculus to describe, analyse and discover facts about geometric objects. It provides the language in which almost all modern physics is understood. This
module develops the geometry of curves and surfaces embedded in Euclidean space. A recurring fundamental theme is curvature (in its many guises) and its interplay with topology.
Mathematical Biology – 20 credits
Mathematics is increasingly important in biological and medical research. This module aims to introduce you to some areas of mathematical biology and medicine, using tools from applied mathematics.
Nonlinear Dynamical Systems and Chaos – 20 credits
Many applications, ranging from biology to physics and engineering, are described by nonlinear dynamical systems, in which a change in output is not proportional to a change in input. Nonlinear
dynamical systems can exhibit sudden changes in behaviour as parameters are varied and even unpredictable, chaotic dynamics. This module will provide you with the mathematical tools to analyse
nonlinear dynamical systems, including identifying bifurcations and chaotic dynamics.
Pure Mathematics and Statistics
Optional modules:
Groups and Symmetry – 20 credits
Group theory is the mathematical theory of symmetry. Groups arise naturally in pure and applied mathematics, for example in the study of permutations of sets, rotations and reflections of geometric
objects, symmetries of physical systems and the description of molecules, crystals and materials. Groups have beautiful applications to counting problems, answering questions like: "How many ways are
there to colour the faces of a cube with m colours, up to rotation of the cube?"
Statistical Modelling – 20 credits
The standard linear statistical model is powerful but has limitations. In this module, we study several extensions to the linear model which overcome some of these limitations. Generalised linear
models allow for different error distributions; additive models allow for nonlinear relationships between predictors and the response variable; and survival models are needed to study data where the
response variable is the time taken for an event to occur.
Metric Spaces and Measure Theory – 20 credits
If you would like to undertake a rigorous study of a physical, geometrical or statistical law, it is likely you’ll need to use both of these concepts. Metric spaces have a notion of distance between points, and measure theory generalises familiar ideas of volume, underpinning integration. We will study these exciting topics, proving results fundamental to pure and applied mathematics: the Picard–Lindelöf theorem from ODEs, the inverse and implicit function theorems, and Lebesgue’s dominated convergence theorem.
Actuarial Mathematics 1 – 20 credits
The module introduces the theory of interest rates and the time value of money in the context of financial transactions such as loans, mortgages, bonds and insurance. The module also introduces the
basic theory of life insurance where policy payments are subject to mortality probabilities.
Numbers and Codes – 20 credits
Number theory explores the natural numbers. Central themes include primes, arithmetic modulo n, and Diophantine equations as in Fermat's Last Theorem. It is a wide-ranging current field with many
applications, e.g. in cryptography.
Error-correcting codes tackle the problem of reliably transmitting digital data through a noisy channel. Applications include transmitting satellite pictures, designing registration numbers and
storing data. The theory uses methods from algebra and combinatorics.
This module introduces both subjects. It emphasises common features, such as algebraic underpinnings, and applications to information theory, both in cryptography (involving secrecy) and in
error-correcting codes (involving errors in transmission).
Proof and Computation – 20 credits
The main goal of this module is to prove Gödel's First Incompleteness Theorem (1931) which shows that, if any reasonable formal theory has strong enough axioms, there are statements which it can
neither prove nor refute. This module will also provide background to the impact of Gödel's Theorem on the modern world and the way it sets an agenda for further research.
Stochastic Calculus and Derivative Pricing – 20 credits
Stochastic calculus is one of the main mathematical tools to model physical, biological and financial phenomena (among other things). This module provides a rigorous introduction to this topic.
You’ll develop a solid mathematical background in stochastic calculus that will allow you to understand key results from modern mathematical finance. This knowledge will be used to derive expressions
for prices of derivatives in financial markets under uncertainty.
Mathematics in Social Context C – 20 credits
Mathematics is possessed of what Bertrand Russell called a cold and austere beauty; and yet it has roots in deeply human concerns. In this module, you’ll gain insight into ways in which
mathematicians can bridge the ‘two cultures’ and see how mathematics shapes our world and our cultures.
Graph Theory and Combinatorics – 20 credits
Graph theory is one of the primary subjects in discrete mathematics. It arises wherever networks such as those seen in computers or transportation are found, and it has applications to fields as diverse as
chemistry, computing, linguistics, navigation and more. More generally, combinatorics concerns finding patterns in discrete mathematical structures, often with the goal of counting the occurrences of
such patterns. This module provides a foundation in graph theory and combinatorics.
Differential Geometry – 20 credits
Differential geometry is the application of calculus to describe, analyse and discover facts about geometric objects. It provides the language in which almost all modern physics is understood. This
module develops the geometry of curves and surfaces embedded in Euclidean space. A recurring fundamental theme is curvature (in its many guises) and its interplay with topology.
Multivariate Analysis and Classification – 20 credits
Multivariate datasets are common: it is typical that experimental units are measured for more than one variable at a time. This module extends univariate statistical techniques for continuous data to
a multivariate setting and introduces methods designed specifically for multivariate data analysis (cluster analysis, principal component analysis, multidimensional scaling and factor analysis). A
particular problem of classification arises when the multivariate observations need to be used to divide the data into groups or “classes”.
Actuarial Mathematics 2 – 20 credits
The module expands on the theory of life insurance introduced in Actuarial Mathematics 1. Instead of considering a single life and single decrement, we will consider policies with multiple lives and
multiple decrements. In addition, the module includes profit testing for different types of insurance policies.
Applied Mathematics and Statistics
Optional modules:
Methods of Applied Mathematics – 20 credits
This module develops techniques to solve ordinary and partial differential equations arising in mathematical physics. For the important case of second-order PDEs, we distinguish between elliptic
equations (e.g., Laplace's equation), parabolic equations (e.g., heat equation) and hyperbolic equations (e.g., wave equation), and physically interpret the solutions. When there is not an exact
solution in closed form, approximate solutions (so-called perturbation expansions) can be constructed if there is a small or large parameter.
Statistical Modelling – 20 credits
The standard linear statistical model is powerful but has limitations. In this module, we study several extensions to the linear model which overcome some of these limitations. Generalised linear
models allow for different error distributions; additive models allow for nonlinear relationships between predictors and the response variable; and survival models are needed to study data where the
response variable is the time taken for an event to occur.
Computational Applied Mathematics – 20 credits
The equations that model real-world problems can only rarely be solved exactly. The basic idea employed in this module is that of discretising the original continuous problem to obtain a discrete
problem, or system of equations, that may be solved with the aid of a computer. This course introduces and applies the techniques of finite differences, numerical linear algebra and stochastic simulation.
Actuarial Mathematics 1 – 20 credits
The module introduces the theory of interest rates and the time value of money in the context of financial transactions such as loans, mortgages, bonds and insurance. The module also introduces the
basic theory of life insurance where policy payments are subject to mortality probabilities.
Stochastic Calculus and Derivative Pricing – 20 credits
Stochastic calculus is one of the main mathematical tools to model physical, biological and financial phenomena (among other things). This module provides a rigorous introduction to this topic.
You’ll develop a solid mathematical background in stochastic calculus that will allow you to understand key results from modern mathematical finance. This knowledge will be used to derive expressions
for prices of derivatives in financial markets under uncertainty.
Entropy and Quantum Mechanics – 20 credits
The material world is composed of countless microscopic particles. When three or more particles interact, their dynamics is chaotic, and impossible to predict in detail. Further, at the
atomic scale, particles behave like waves, with dynamics that is known only statistically. So, why is it that the materials around us behave in predictable and regular ways? One reason is that random
behaviour on the microscopic scale gives rise to collective behaviour that can be predicted with practical certainty, guided by the principle that the total disorder (or entropy) of the universe
never decreases. A second reason is that the mathematics of quantum mechanics provides incredibly accurate predictions at the atomic scale. This module studies calculations involving both entropy and
quantum mechanics, as applied to the matter that makes up our world.
Fluid Dynamics – 20 credits
Fluid dynamics is the science that describes the motion of materials that flow. It constitutes a significant mathematical challenge with important implications in an enormous range of fields in
science and engineering, including aerodynamics, astrophysics, climate modelling, and physiology. This module sets out the fundamental concepts of fluid dynamics, for both inviscid and viscous flows.
It includes a formal mathematical description of fluid flow and the derivation of the governing equations, using techniques from vector calculus. Solutions of the governing equations are derived for
a range of simple flows, giving you a feel for how fluids behave and experience of modelling everyday phenomena.
Mathematics in Social Context C – 20 credits
Mathematics is possessed of what Bertrand Russell called a cold and austere beauty; and yet it has roots in deeply human concerns. In this module, you’ll gain insight into ways in which
mathematicians can bridge the ‘two cultures’ and see how mathematics shapes our world and our cultures.
Multivariate Analysis and Classification – 20 credits
Multivariate datasets are common: it is typical that experimental units are measured for more than one variable at a time. This module extends univariate statistical techniques for continuous data to
a multivariate setting and introduces methods designed specifically for multivariate data analysis (cluster analysis, principal component analysis, multidimensional scaling and factor analysis). A
particular problem of classification arises when the multivariate observations need to be used to divide the data into groups or “classes”.
Mathematical Biology – 20 credits
Mathematics is increasingly important in biological and medical research. This module aims to introduce you to some areas of mathematical biology and medicine, using tools from applied mathematics.
Nonlinear Dynamical Systems and Chaos – 20 credits
Many applications, ranging from biology to physics and engineering, are described by nonlinear dynamical systems, in which a change in output is not proportional to a change in input. Nonlinear
dynamical systems can exhibit sudden changes in behaviour as parameters are varied and even unpredictable, chaotic dynamics. This module will provide you with the mathematical tools to analyse
nonlinear dynamical systems, including identifying bifurcations and chaotic dynamics.
Actuarial Mathematics 2 – 20 credits
The module expands on the theory of life insurance introduced in Actuarial Mathematics 1. Instead of considering a single life and single decrement, we will consider policies with multiple lives and
multiple decrements. In addition, the module includes profit testing for different types of insurance policies.
Year 4
Compulsory modules
Assignment in Mathematics – 45 credits
You'll engage in independent research on an individual basis, on a title negotiated with an academic supervisor. This will include training in the skills necessary to plan, execute and report on a
project in advanced mathematics. Although this is an independent project, our academic staff will be there to supervise and support you throughout.
Optional modules
Please note: The modules listed below are indicative of typical options and some of these options may not be available, depending on other modules you have selected already.
Pure and Applied Mathematics
Optional modules:
Topology – 15 credits
A topological space is a set with the minimal added structure that makes it possible to define continuity of functions. In this module, we will define topological spaces and explain what it means for
them to be connected, path-connected and compact. In the second half of this module, we study algebraic topology and show some applications to Euclidean space: e.g. any continuous map from a disk to
itself must have a fixed point.
Models and Sets – 15 credits
Set Theory is generally accepted as a foundation for mathematics, in an informal sense, but is also a formal axiomatic system. Model Theory is the study of formal axiomatic systems and depends on Set
Theory for many of its basic definitions and results. Model Theory and Set Theory constitute two of the basic strands of mathematical logic. In this module, we explain the basic notions of these
interrelated subjects.
Functional Analysis and its Applications – 15 credits
Solving problems in infinite dimensional space is fundamental to our understanding of the world; finding the optimal depth for a wine cellar, or the natural frequencies of an object, fall within the
same mathematical playground of Functional Analysis. Many such problems admit solutions in the form of an infinite sum or integral expression, but how, why, and are these expressions genuine? We will
develop the mathematical theory to rigorously ask and answer these fundamental questions.
Evolutionary Modelling – 15 credits
Darwin’s natural selection theory is a cornerstone of modern science. Recently, mathematical and computational modelling has led to significant advances in our understanding of evolutionary puzzles,
such as what determines biodiversity or the origin of cooperative behaviour. On this module, you will be exposed to fundamental ideas of evolutionary modelling, and to the mathematical tools needed
to pursue their study. These will be illustrated by numerous examples motivated by exciting developments in mathematical biology.
Advanced Mathematical Methods – 15 credits
Many real-world problems can be modelled by ordinary or partial differential equations or formulated as a complicated integral. In this module, advanced techniques are developed to solve such
problems and interpret their solutions, motivated by examples from mathematical physics, continuum mechanics, and mathematical biology. These techniques include so-called asymptotic methods, which
yield approximate solutions if the problem contains a small or large parameter.
Astrophysical and Geophysical Fluids – 15 credits
This module concerns mathematical modelling of various phenomena observed in astrophysical and geophysical flows, meaning those in planetary and stellar atmospheres and interiors. The focus is on
understanding key dynamical processes in such flows, including those due to rotation and density stratification and, in many astrophysical flows, the electrical conductivity of the fluid (which can
thus support a magnetic field). These effects lead to various interesting waves and instabilities, with physical and observational significance.
Riemannian Geometry – 15 credits
Riemannian geometry is the study of length, angle, volume and curvature. It is a far-reaching generalisation of the theory of curves and surfaces to higher dimensions. Famously, it formed the basis
of Einstein's theory of general relativity and it remains a primary language of modern theoretical physics. In this module, you’ll learn the basic concepts of Riemannian geometry and study some
fascinating theorems relating the geometry of a manifold to its topological properties.
Algebras and Representations – 15 credits
An algebra is a vector space with a compatible binary operation, or multiplication. A natural example is the set of all square matrices of a fixed size with complex entries, under matrix
multiplication. Semisimple algebras form an important class of algebras and one of the highlights of the course is the structure theorem classifying the semisimple algebras in terms of matrix
algebras. The module will also study representations of general algebras via matrices.
Environmental and Industrial Flows – 15 credits
Many flows found in nature such as avalanches and glaciers or in industrial applications such as 3D printing and coatings involve complex fluids, whose properties can be very different from those of
simple Newtonian fluids like air and water. This course gives an introduction to the often-surprising behaviour of these fluids and how they can be modelled mathematically using differential equations.
Classical and Quantum Hamiltonian Systems – 15 credits
The Hamiltonian formulation of dynamics is the most mathematically beautiful form of mechanics and a stepping stone to quantum mechanics. Hamiltonian systems are conservative dynamical systems with a
very interesting algebraic and geometric structure: the Poisson bracket. Hamilton's equations are invariant under a very wide class of transformations and this leads to a number of powerful solution methods.
Pure Mathematics and Statistics
Optional modules:
Topology – 15 credits
A topological space is a set with the minimal added structure that makes it possible to define continuity of functions. In this module, we will define topological spaces and explain what it means for
them to be connected, path-connected and compact. In the second half of this module, we study algebraic topology and show some applications to Euclidean space: e.g. any continuous map from a disk to
itself must have a fixed point.
Models and Sets – 15 credits
Set Theory is generally accepted as a foundation for mathematics, in an informal sense, but is also a formal axiomatic system. Model Theory is the study of formal axiomatic systems and depends on Set
Theory for many of its basic definitions and results. Model Theory and Set Theory constitute two of the basic strands of mathematical logic. In this module, we explain the basic notions of these
interrelated subjects.
Functional Analysis and its Applications – 15 credits
Solving problems in infinite dimensional space is fundamental to our understanding of the world; finding the optimal depth for a wine cellar, or the natural frequencies of an object, fall within the
same mathematical playground of Functional Analysis. Many such problems admit solutions in the form of an infinite sum or integral expression, but how, why, and are these expressions genuine? We will
develop the mathematical theory to rigorously ask and answer these fundamental questions.
Statistical Theory – 15 credits
This module gives a general unified theory of the problems of estimation and hypothesis testing. It covers Bayesian inference, making comparisons with classical inference.
Statistical Computing – 15 credits
Statistical computing is the branch of mathematics that concerns the use of computational techniques for situations that either directly involve randomness, or where randomness is used as part of a
mathematical model. This module gives an overview of the foundations and basic methods in statistical computing.
Riemannian Geometry – 15 credits
Riemannian geometry is the study of length, angle, volume and curvature. It is a far-reaching generalisation of the theory of curves and surfaces to higher dimensions. Famously, it formed the basis
of Einstein's theory of general relativity and it remains a primary language of modern theoretical physics. In this module, you’ll learn the basic concepts of Riemannian geometry and study some
fascinating theorems relating the geometry of a manifold to its topological properties.
Algebras and Representations – 15 credits
An algebra is a vector space with a compatible binary operation, or multiplication. A natural example is the set of all square matrices of a fixed size with complex entries, under matrix
multiplication. Semisimple algebras form an important class of algebras and one of the highlights of the course is the structure theorem classifying the semisimple algebras in terms of matrix
algebras. The module will also study representations of general algebras via matrices.
Advanced Statistical Modelling – 15 credits
This module builds on statistical models introduced in earlier studies, developing advanced techniques for analysis of datasets. These include methods for estimating the probability density function
from a data set and approaches to constraining the number of variables that contribute to a linear model.
Applied Mathematics and Statistics
Optional modules:
Statistical Theory – 15 credits
This module gives a general unified theory of the problems of estimation and hypothesis testing. It covers Bayesian inference, making comparisons with classical inference.
Statistical Computing – 15 credits
Statistical computing is the branch of mathematics that concerns the use of computational techniques for situations that either directly involve randomness, or where randomness is used as part of a
mathematical model. This module gives an overview of the foundations and basic methods in statistical computing.
Evolutionary Modelling – 15 credits
Darwin’s natural selection theory is a cornerstone of modern science. Recently, mathematical and computational modelling has led to significant advances in our understanding of evolutionary puzzles,
such as what determines biodiversity or the origin of cooperative behaviour. On this module, you will be exposed to fundamental ideas of evolutionary modelling, and to the mathematical tools needed
to pursue their study. These will be illustrated by numerous examples motivated by exciting developments in mathematical biology.
Advanced Mathematical Methods – 15 credits
Many real-world problems can be modelled by ordinary or partial differential equations or formulated as a complicated integral. In this module, advanced techniques are developed to solve such
problems and interpret their solutions, motivated by examples from mathematical physics, continuum mechanics, and mathematical biology. These techniques include so-called asymptotic methods, which
yield approximate solutions if the problem contains a small or large parameter.
Astrophysical and Geophysical Fluids – 15 credits
This module concerns mathematical modelling of various phenomena observed in astrophysical and geophysical flows, meaning those in planetary and stellar atmospheres and interiors. The focus is on
understanding key dynamical processes in such flows, including those due to rotation and density stratification and, in many astrophysical flows, the electrical conductivity of the fluid (which can
thus support a magnetic field). These effects lead to various interesting waves and instabilities, with physical and observational significance.
Advanced Statistical Modelling – 15 credits
This module builds on statistical models introduced in earlier studies, developing advanced techniques for analysis of datasets. These include methods for estimating the probability density function
from a data set and approaches to constraining the number of variables that contribute to a linear model.
Environmental and Industrial Flows – 15 credits
Many flows found in nature such as avalanches and glaciers or in industrial applications such as 3D printing and coatings involve complex fluids, whose properties can be very different from those of
simple Newtonian fluids like air and water. This course gives an introduction to the often-surprising behaviour of these fluids and how they can be modelled mathematically using differential equations.
Classical and Quantum Hamiltonian Systems – 15 credits
The Hamiltonian formulation of dynamics is the most mathematically beautiful form of mechanics and a stepping stone to quantum mechanics. Hamiltonian systems are conservative dynamical systems with a
very interesting algebraic and geometric structure: the Poisson bracket. Hamilton's equations are invariant under a very wide class of transformations and this leads to a number of powerful solution methods.
One-year optional work placement or study abroad
During your course, you’ll be given the opportunity to advance your skill set and experience further. You can apply to either undertake a one-year work placement or study abroad for a year, choosing
from a selection of universities we’re in partnership with worldwide.
Learning and teaching
You’ll be taught through lectures, tutorials, workshops and practical classes. You’ll enjoy extensive tutorial support and have freedom in your workload and options.
We offer a variety of welcoming spaces to study and socialise with your fellow students. There are social and group study areas, a library with a café and a seminar room, as well as a Research
Visitors Centre and a Mathematics Active Learning Lab.
Taster lectures
Watch our taster lectures to get a flavour of what it’s like to study at Leeds:
On this course, you’ll be taught by our expert academics, from lecturers through to professors. You may also be taught by industry professionals with years of experience, as well as trained
postgraduate researchers, connecting you to some of the brightest minds on campus.
You’re assessed through a range of methods, including formal exams and in-course assessment.
Entry requirements
A-level: AAA/A*AB including a minimum of grade A in Mathematics.
AAA/A*AB including a minimum of grade A in Mathematics, AAB/A*BB including a minimum of grade A in Mathematics plus Further Mathematics, or AAB/A*BB including a minimum of grade A in Mathematics,
plus A in AS Further Mathematics.
Where an A-Level Science subject is taken, we require a pass in the practical science element, alongside the achievement of the A-Level at the stated grade.
Excludes A-Level General Studies or Critical Thinking.
GCSE: English Language at grade C (4) or above, or an appropriate English language qualification. We will accept Level 2 Functional Skills English in lieu of GCSE English.
Other course specific tests:
Extended Project Qualification (EPQ), International Project Qualification (IPQ) and Welsh Baccalaureate Advanced Skills Challenge Certificate (ASCC): We recognise the value of these qualifications
and the effort and enthusiasm that applicants put into them, and where an applicant offers the EPQ, IPQ or ASCC we may make an offer of AAB/A*BB including a minimum of grade A in Mathematics, plus A
in EPQ/IPQ/Welsh Bacc ASCC.
Alternative qualification
Access to HE Diploma
Normally only accepted in combination with grade A in A Level Mathematics or equivalent.
BTEC qualifications in relevant disciplines are considered in combination with other qualifications, including grade A in A-level Mathematics, or equivalent.
Cambridge Pre-U
D3 D3 M2 or D2 M1 M1 where the first grade quoted is in Mathematics OR D3 M1 M2 or D2 M2 M2 including Further Maths where the first grade quoted is Mathematics.
International Baccalaureate
17 at Higher Level including 6 in Higher Level Mathematics (Mathematics: Analysis and Approaches is preferred).
Irish Leaving Certificate (Higher Level)
H2 H2 H2 H2 H2 H2 including Mathematics.
Scottish Highers / Advanced Highers
Suitable combinations of Scottish Highers and Advanced Highers are acceptable, though Mathematics must be presented at Advanced Higher level. Typically AAAABB including grade A in Advanced Higher Mathematics.
Other Qualifications
We also welcome applications from students on the Northern Consortium UK International Foundation Year programme, the University of Leeds International Foundation Year, and other foundation years
with a high mathematical content.
Read more about UK and Republic of Ireland accepted qualifications or contact the School's Undergraduate Admissions Team.
Alternative entry
We’re committed to identifying the best possible applicants, regardless of personal circumstances or background.
Access to Leeds is a contextual admissions scheme which accepts applications from individuals who might be from low income households, in the first generation of their immediate family to apply to
higher education, or have had their studies disrupted.
Find out more about Access to Leeds and contextual admissions.
Typical Access to Leeds offer: ABB including A in Mathematics and pass Access to Leeds OR A in Mathematics, B in Further Mathematics and C in a 3rd subject and pass Access to Leeds.
Foundation years
If you do not have the formal qualifications for immediate entry to one of our degrees, you may be able to progress through a foundation year.
We offer a Studies in Science with Foundation Year BSc for students without science and mathematics qualifications.
You could also study our Interdisciplinary Science with Foundation Year BSc which is for applicants whose background is less represented at university.
On successful completion of your foundation year, you will be able to progress onto your chosen course.
We accept a range of international equivalent qualifications. For more information, please contact the Admissions Team.
International Foundation Year
International students who do not meet the academic requirements for undergraduate study may be able to study the University of Leeds International Foundation Year. This gives you the opportunity to
study on campus, be taught by University of Leeds academics and progress onto a wide range of Leeds undergraduate courses. Find out more about International Foundation Year programmes.
English language requirements
IELTS 6.0 overall, with no less than 5.5 in any one component, or IELTS 6.5 overall, with no less than 6.0 in any one component, depending on other qualifications present. For other English
qualifications, read English language equivalent qualifications.
Improve your English
If you're an international student and you don't meet the English language requirements for this programme, you may be able to study our undergraduate pre-sessional English course, to help improve
your English language level.
UK: To be confirmed
International: £29,000 (per year)
Tuition fees for UK undergraduate students starting in 2024/25
Tuition fees for UK full-time undergraduate students are set by the UK Government and will be £9,250 for students starting in 2024/25.
The fee may increase in future years of your course in line with inflation only, as a consequence of future changes in Government legislation and as permitted by law.
Tuition fees for UK undergraduate students starting in 2025/26
Tuition fees for UK full-time undergraduate students starting in 2025/26 have not yet been confirmed by the UK government. When the fee is available we will update individual course pages.
Tuition fees for international undergraduate students starting in 2024/25 and 2025/26
Tuition fees for international students for 2024/25 are available on individual course pages. Fees for students starting in 2025/26 will be available from September 2024.
Tuition fees for a study abroad or work placement year
If you take a study abroad or work placement year, you’ll pay a reduced tuition fee during this period. For more information, see Study abroad and work placement tuition fees and loans.
Read more about paying fees and charges.
There may be additional costs related to your course or programme of study, or related to being a student at the University of Leeds. Read more on our living costs and budgeting page.
Scholarships and financial support
If you have the talent and drive, we want you to be able to study with us, whatever your financial circumstances. There is help for students in the form of loans and non-repayable grants from the
University and from the government. Find out more in our Undergraduate funding overview.
Apply to this course through UCAS. Check the deadline for applications on the UCAS website.
We may consider applications submitted after the deadline. Availability of courses in UCAS Extra will be detailed on UCAS at the appropriate stage in the cycle.
Admissions guidance
Read our admissions guidance about applying and writing your personal statement.
What happens after you’ve applied
You can keep up to date with the progress of your application through UCAS.
UCAS will notify you when we make a decision on your application. If you receive an offer, you can inform us of your decision to accept or decline your place through UCAS.
How long will it take to receive a decision
We typically receive a high number of applications to our courses. For applications submitted by the January UCAS deadline, UCAS asks universities to make decisions by mid-May at the latest.
Offer holder events
If you receive an offer from us, you’ll be invited to an offer holder event. This event is more in-depth than an open day. It gives you the chance to learn more about your course and get your
questions answered by academic staff and students. Plus, you can explore our campus, facilities and accommodation.
International applicants
International students apply through UCAS in the same way as UK students.
We recommend that international students apply as early as possible to ensure that they have time to apply for their visa.
Read about visas, immigration and other information here.
If you’re unsure about the application process, contact the admissions team for help.
Admissions policy
University of Leeds Admissions Policy 2025
This course is taught by
Contact us
School of Mathematics Undergraduate Admissions
Email: maths.admiss@leeds.ac.uk
Career opportunities
Mathematical skills are highly valued in virtually all walks of life, which means that the employment opportunities for mathematics graduates are far-reaching and have the potential to take you all
over the world.
Plus, University of Leeds students are among the top 5 most targeted by top employers according to The Graduate Market 2024, High Fliers Research.
Qualifying with a degree in Mathematics from Leeds will give you the core foundations you need to pursue an exciting career across a wide range of industries and sectors, including:
• Accountancy
• Insurance
• Banking and finance
• Asset management and investment
• Engineering
• Teaching
• Data analysis
• Law
• Consultancy
The numerical, analytical and problem-solving skills you will develop, as well as your specialist subject knowledge and your ability to think logically, are highly valued by employers. This course
also allows you to develop the transferable skills that employers seek.
Here’s an insight into the job roles some of our most recent graduates have obtained:
• Category Management Analyst, Accenture
• Business Intelligence Engineer, Amazon
• Financial Analyst, American Express
• Consultant Statistician, AstraZeneca
• Audit Associate, Deloitte
• Senior Credit Risk Analyst, HSBC
• Senior Actuary, KPMG
• Retail Analyst, Emma Bridgewater
• Statistician, Nestle
• Senior Actuarial Associate, PwC
• Risk Analyst, SkyBet
• Statistical Analyst, Office for National Statistics
Careers support
At Leeds, we help you to prepare for your future from day one. Our Leeds for Life initiative is designed to help you develop and demonstrate the skills and experience you need for when you graduate.
We will help you to access opportunities across the University and record your key achievements so you are able to articulate them clearly and confidently.
You will be supported throughout your studies by our dedicated Employability Team, who will provide you with specialist support and advice to help you find relevant work experience, internships and
industrial placements, as well as graduate positions. You’ll benefit from timetabled employability sessions, support during internships and placements, and presentations and workshops delivered by employers.
Explore more about your employability opportunities at the University of Leeds.
You will also have full access to the University’s Careers Centre, which is one of the largest in the country.
Study abroad and work placements
Study abroad
Studying abroad is a unique opportunity to explore the world, whilst gaining invaluable skills and experience that could enhance your future employability and career prospects too.
From Europe to Asia, the USA to Australasia, we have many University partners worldwide you can apply to, spanning across some of the most popular destinations for students.
This programme offers you the option to spend time abroad as part of your four-year MMath course.
Once you’ve successfully completed your year abroad, you'll be awarded the ‘international’ variant in your degree title, which demonstrates your added experience to future employers.
Find out more at the Study Abroad website.
Work placements
A placement year is a great way to help you decide on a career path when you graduate. You’ll develop your skills and gain a real insight into working life in a particular company or sector. It will
also help you to stand out in a competitive graduate jobs market and improve your chances of securing the career you want.
Benefits of a work placement year:
• 100+ organisations to choose from, both in the UK and overseas
• Build industry contacts within your chosen field
• Our close industry links mean you’ll be in direct contact with potential employers
• Advance your experience and skills by putting the course teachings into practice
• Gain invaluable insight into working as a professional in this industry
• Improve your employability
If you decide to undertake a placement year, this will extend your period of study by 12 months and, on successful completion, you will be awarded the ‘industrial’ variant in your degree title to
demonstrate your added experience to future employers.
With the help and support of our dedicated Employability Team, you can find the right placement to suit you and your future career goals.
Here are some examples of placements our students have recently completed:
• Data Scientist, Department for Work & Pensions
• Cyber Crime Researcher, Department for Work & Pensions
• Risk Analyst - Infrastructure/ Strategy, Lloyds Banking Group
• Operations Analyst, Tracsis Rail Consultancy
• Risk analyst, Lloyds Banking Group
Find out more about Industrial placements. | {"url":"https://courses.leeds.ac.uk/f417/mathematics-mmath-bsc","timestamp":"2024-11-09T03:57:35Z","content_type":"text/html","content_length":"139386","record_id":"<urn:uuid:67abe199-f8cd-4e5d-876d-b6f075fb4b12>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00702.warc.gz"} |
Derived Variables¶
Derived variables are those defined in terms of variables read in from the file. Use of derived variables allows one to rename variables or set groups as vectors. One can also define new variables through simple mathematical expressions.
Accessing a variable by a different name
Group anygroupname0 {
ATT vsType = "vsVars" // Required string
ATT vsMesh = "resultCartGrid" // [Optional string] The new variable can be associated with a different,
// though necessarily compatible, mesh.
ATT E = "YeeElecField" // Declares a new variable "E" that is equivalent to "YeeElecField"
}
This shows how one can use a different name for a variable. This can be useful, as some visualization tools label plots using the variable name. By default the variable is associated with the same
mesh as the original variable. However, use of vsMesh allows one to associate this variable with a new mesh. This can be useful, e.g., when one wants to view the original variable on some mesh but
the new variable on the transformed mesh. One use case is where one defines a uniform mesh in (r, phi, z) and by transformation gets an irregular structured mesh in (x, y, z) as shown in an example
on the page on meshes.
Defining a scalar as one component of an array
Group anygroupname1 {
ATT vsType = "vsVars"
ATT weight = "electrons_6"
}
Defining a vector from three scalar variables
Below are two examples, one for fields and one for particles.
Group anygroupname2 {
ATT vsType = "vsVars" // Required string
ATT Evec = "{YeeElecField_0, YeeElecField_1, YeeElecField_2}" // Required string showing actual
// construction
}
Group anygroupname3 {
ATT vsType = "vsVars" // Required string
ATT velocity = "{electrons_3, electrons_4, electrons_5}" // Definition for particle variables
}
All components must live on the same mesh. The braces must contain 2 or 3 components for a 2D mesh and 3 components for a 3D mesh.
Defining an array from more than three scalar variables
For now we see no reason to have collections of variables that are not vectors.
Defining a scalar from a mathematical expression
Group anygroupname5 {
ATT vsType = "vsVars" // Required string
ATT elecEnergyDensity = "0.5*8.854e-12*(E_0*E_0 + E_1*E_1 + E_2*E_2)" // Required string definition
}
Definitions in terms of simple math as understood by the final visualization tool are supported.
Updated by Ted Sume almost 5 years ago · 2 revisions | {"url":"https://ice.txcorp.com:3000/projects/vizschema/wiki/DerivedVariables","timestamp":"2024-11-08T06:21:37Z","content_type":"text/html","content_length":"12034","record_id":"<urn:uuid:988ce66b-f7c6-4031-b948-e35ef1323453>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00797.warc.gz"} |
Difference between effective annual interest rate and nominal interest rate
27 Nov 2016 Annual percentage rate, or APR, goes a step beyond simple interest by On the other hand, effective annual percentage rate, also known as 1% interest per month, multiplying it by 12 gives
a nominal APR of 12% per year. Definition of Effective Interest Rate The effective interest rate is the true rate of the annual percentage rate (APR), and the targeted or required interest rate. Ex a
$1,000 bond with a stated, contractual, face, or nominal interest rate of 5%. Nominal vs. effective interest rates. Nominal interest rate: rate quoted based on an annual period. (APR). Effective
interest rate: actual interest earned or paid in a
interest rates and nominal interest rates? The nominal interest rate is also defined as a stated interest rate. It works according to simple interest and does not take the compounding periods into account. The effective interest rate is the one that accounts for the compounding periods during a payment plan. What is APR? APR, or Annual Percentage Rate, is the most straightforward way to compare different loans, credit cards and mortgages. APR is the amount of interest repaid in a year and can be expressed, like other interest rates, as either a
nominal or effective rate. APR also takes into account any fees or additional costs associated with the loan. Formula of Effective Interest Rate: Let r equal the effective annual interest rate, i the nominal annual interest rate, and m the number of compounding periods per year. The equivalence between the two rates suggests that if a principal P is invested for n years, the two compound amounts would be the same, or P(1 + r)^n = P(1 + i/m)^(mn), which gives r = (1 + i/m)^m - 1. The difference between the two is the result of the compounding periods that the effective interest rate takes into account. Compounding Is the Main Difference Between Rates Compounding periods refer to the number of times per year interest charges are calculated and added to your outstanding balance. In this case, the nominal annual interest rate is 10%, and the
effective annual interest rate is also 10%. However, if compounding is more frequent than once per year, then the effective interest rate will be greater than 10%. The more often compounding occurs,
the higher the effective interest rate. The relationship between nominal annual and
1) How do we handle non-annual cash flows and interest rates? 2) r is the effective interest rate when converting to a different period (length of time). Note: the difference between the nominal rate and the inflation rate is a pretty good approximation of the real rate. The difference between nominal and effective interest rates is that only the latter accounts for compounding. It is commonly expressed on an annual basis as the effective annual rate ia, but
but 22 Oct 2011 Note that when we talk about a nominal (stated) interest rate we mean the annual rate (e.g., 10% annual rate of return on an investment). 8 Sep 2014 The higher the interest rate, the
more important the compounding period is. That is, the difference between daily and annual compounding is a
If you have a nominal interest rate of 10% compounded annually, then the Effective Interest Rate or Annual Equivalent Rate is the same as 10%. If you have a nominal interest rate of 10% compounded
six-monthly, then the Annual Equivalent rate is the same as 10.25%.
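The conversion described above can be sketched in a few lines (Python used for illustration; the function name is ours, not from any of the quoted sources):

```python
def effective_annual_rate(nominal_rate, periods_per_year):
    """Effective annual rate r from a nominal rate i compounded m times per year."""
    m = periods_per_year
    return (1 + nominal_rate / m) ** m - 1

print(effective_annual_rate(0.10, 1))   # ~0.10   -- annual compounding changes nothing
print(effective_annual_rate(0.10, 2))   # ~0.1025 -- the 10.25% quoted above
print(effective_annual_rate(0.12, 12))  # ~0.1268 -- 1% per month beats a flat 12% APR
```

The more compounding periods per year, the larger the gap between the nominal and effective rates.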
24 Jun 2019 This interest rate is more appropriately called the annual percentage rate. The opposite of such a nominal interest rate is the effective interest rate. Typically, nominal interest is shown as an annual interest rate. In Switzerland, effective annual interest rates are used for personal loans and credit card loans. The
difference of 4.71 francs is the result of interest being charged on the The annual nominal interest rate is the stated interest rate of a given loan. the difference between the nominal and
effective rates increases with the number of
In particular, we like to summarise the effect that compounding has on the underlying or nominal interest rate. This leads us to the idea of the `effective' annual
22 Feb 2017 Learn the differences between nominal interest rates, real interest rates, and effective interest rates and see how to calculate them. i(p)= nominal rate per annum payable p times a year.
The relationship between the effective and nominal interest rate is: 1 + i = (1 + i(p)/p)^p. The nominal rate is the interest rate as stated, usually compounded more than once per year. The effective rate (or
effective annual rate) is a rate that, Annual Percentage Rate and Effective Interest Rate. The most common and comparable interest rate is the APR (annual percentage rate), also called nominal
The stated annual rate is usually referred to as the nominal rate. Interest may be compounded semiannually, quarterly, and monthly, the interest earned during a
The nominal interest rate, also called annual percentage rate (APR), is simply the On a loan with a life of only one year, the difference between 12% and | {"url":"https://tradingktknpe.netlify.app/epps44836mu/difference-between-effective-annual-interest-rate-and-nominal-interest-rate-fywy","timestamp":"2024-11-14T11:08:10Z","content_type":"text/html","content_length":"32932","record_id":"<urn:uuid:b59d16ee-86c0-45d2-8f89-391b4fbfe855>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00212.warc.gz"} |
Star Citizen Analytics Project
Our neighborhood Goon numbers guy is at it again.
In view of the Crytek lawsuit and the possible imminent demise of the funding tracker, I'm presenting an analysis that I put off for quite a while. This post is the third in the (very infrequently
updated) Theoretical Cetology series. I know many people believe the tracker is faked, but irrespective of that issue, one can still try to figure out what the tracker is actually telling us.
My initial motivation in looking at the funding tracker was to try to tie together the three F's (Fleet, Fans, and Funds) to get a better sense of who was buying what. The
hourly scraped data
maintained by Nehkara on Google Docs is not well suited to this, because so many transactions get lumped together per hour that it's difficult or impossible to tease out the individual contributions.
Therefore I used scraped data with a 5 minute update rate, which strikes a balance between high update rate and not being rude. As it turned out, the three F's are updated on different schedules and
possibly with differing time lags, so it remains impractical to do the dreamed-of joint analysis.
However, it turns out that the Funds data (i.e. the money counter) is updated in real time, or close enough to real time from the perspective of a 5-minute scrape. This tells us how much cash is
going into the tracker at each 5 minute interval.
An excerpt from a typical day is shown below. The table shows the size of each tracker move in dollars, and the number of times a move of that size was observed.
A few interesting facts immediately jump out:
● Many moves are even multiples of $5.
● Some moves are not whole numbers. I am not sure how this happens; more on this later.
● There are a decent number of very small moves, like $5 or $10.
● $45 and $60 are by far the most common move sizes, probably due to their being starter packages. We also see spikes at combinations of $45 and $60, such as $90 and $105.
As the example of $105 illustrates, a tracker move may be composed of multiple smaller transactions. As long as the typical 5-minute interval does not contain "too many" transactions, we may be able
to infer the individual transaction sizes, at least in a probabilistic sense.
The frequent $5 moves are particularly interesting because they are not likely to be composed of smaller transactions and because there isn't anything exciting on the store that costs $5. I believe
that they are probably mostly CCU activity, possibly related to the grey market, but I welcome better explanations.
My assumption is that the tracker is honest in the sense that applied store credit is not shown as additional revenue. This would allow us to see transactions of all sizes (due to varying amounts of
store credit being applied) even if the store has no item at a particular price.
To keep from having too many transactions thus making the data too hard to analyze, I used a crude proxy for non-sale days by taking all days with daily funding total < $60K. This gives a total of
116,111 data points from "quiet" or "typical" days.
The method I will apply below relies heavily on the assumption that tracker moves are round numbers, i.e., multiples of $5. Thus, data points that do not fit this assumption must be excluded, leaving
108,621 data points remaining (which represents a loss of 6.5% of the data). Interestingly, non-round tracker moves tend to arrive bunched together. The below plot shows the percentage of non-round
moves in a rolling temporal window, restricted to data points from quiet days.
Part of the cause may be temporally limited availability of items, such as the Squadron 42 Military Cap, that aren't multiples of $5. As for the fractional dollar amounts, the only hypothesis I can
come up with is if an amount of store credit is somehow acquired untaxed but then has VAT taken out of it later. Partially defraying the cost of an item with the resulting store credit could give
rise to strange transaction sizes.
To account for the effect of multiple transactions, I formulated a probability model for the data as a price-weighted sum of independent Poisson random variables. Going to the store and clicking on
"Extras" shows CCUs valued at every multiple of $5 up to about $300, so I set the maximum allowable transaction size to $300. I then estimated the parameters of the model using maximum likelihood.
The round transactions assumption is required to make the fitting process tractable; we can apply a crude correction for the exclusion of the non-round transactions afterward.
Below we show the raw histogram of tracker moves.
The result of the fitting process is an average transaction rate for each transaction size, i.e., the average number of transactions of each size, per hour.
The dominant effect is the spikes at $45 and $60, reaching 4.2 and 5.0 transactions/hour, respectively. If we assume that every one of these sales is a starter game package, and that every game
package is sold to a new customer, this means 9.2 commandos are buying into Star Citizen per hour outside of special events, or about 80,000 commandos per year. By contrast, the "fans" number has
increased by 237,325 so far in 2017, with no major recruiting events to speak of.
There is about 1 transaction per hour for every "small" transaction size below $45. Unless this is CCU activity, I'm not sure what it could represent. Are people buying $5 skins and UEC chits?
The estimated average daily revenue from starter packages is $11,700, versus an average daily revenue (for the quiet days used in this analysis) of roughly $39,500. Similarly, the average daily
revenue from small transactions of $40 or less is $4000. The majority of funding, about $20K, comes from transactions that are $65 or larger.
The average daily total implied by our model is about $37K. Using a crude 7% correction for the excluded data points gives an average daily total of $39,800 which is fairly close to the true average
of $39,500.
How literally should we interpret the fitted parameters? I think the inferred rate of starter packages, as well as the small transactions, is roughly accurate. As price increases from there, we
should expect a general decline in the frequency of transactions, but not as steep a decline as the model implies. You can see an artifact of this where the model has boosted the rates of
transactions near $300 to try to match the heavier tails of the actual data.
There are reasons to expect this model to underestimate the frequency of very large transactions, meaning that it will underestimate daily variability. That being said, the daily standard deviation
of funding implied by the fitted model is $2,040, which implies that we would expect to see successive daily totals to be within about $5800 of each other 95% of the time.
A quick eyeballing suggests that the real data is indeed more variable than the model estimate. This is unsurprising both because there are dynamic influences on the store activity that we do not
model, and because we would expect overdispersion even in the absence of such effects.
A better answer would involve looking at autocorrelation, but :effort: | {"url":"http://www.dereksmart.com/forum/index.php?topic=53.150","timestamp":"2024-11-12T10:17:49Z","content_type":"application/xhtml+xml","content_length":"84737","record_id":"<urn:uuid:415d3160-a0fc-48ce-a3f8-5552946f8d4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00333.warc.gz"} |
Prison Cell Problem
In Transum prison there are 50 prisoners in cells numbered 1 to 50.
On day 1, the guard turns the key in every lock to open every cell.
On day 2, the guard turns the key in every cell which is a multiple of 2. This locks all the even numbered cells.
On day 3, the guard turns the key in every cell which is a multiple of 3, locking or unlocking them.
On day 4, the guard turns the key in every cell which is a multiple of 4, locking or unlocking them.
This continues for fifty days. The prisoners whose cells are open after the 50th day are set free. Which prisoners will be set free?
Click on the cells above to open and close the doors. When you have worked out what the situation will be after 50 days click the 'check' button to see if you are correct.
The key is turned for each factor in the prison cell number. Does that give you a clue? You can use the grid of cells above to simulate the 50 days of activity or you could think of the problem more analytically.
So far this activity has been accessed 54079 times and 4411 Transum Trophies have been awarded for completing it. There are many more fascinating maths puzzles on Transum.org.
Tuesday, February 6, 2018
"I have gone through it and it continuously says it is wrong! Am I doing something wrong? I have cells 1,4,9,16,25,36,48 and 49 open... Have I missed any?
[Transum: Well done Lexy for finding all of the answers doors that will be left open but you have included one that should remain locked. The correct answer makes a well known number pattern.]"
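A short simulation (Python, for illustration) confirms the pattern discussed in the comment above — a cell's key is turned once for every factor of its number, so only cells with an odd number of factors end up open:

```python
# doors[i] is True when cell i is open; all cells start locked.
doors = [False] * 51

for day in range(1, 51):
    for cell in range(day, 51, day):  # every cell that is a multiple of `day`
        doors[cell] = not doors[cell]

open_cells = [c for c in range(1, 51) if doors[c]]
print(open_cells)  # [1, 4, 9, 16, 25, 36, 49] -- the perfect squares
```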
Holy Cross Maths, Twitter
Tuesday, November 9, 2021
Do you have any comments? It is always useful to receive feedback and helps make this free resource even more useful for those learning Mathematics anywhere in the world. Click here to enter your
The solutions to this and other Transum puzzles, exercises and activities are available in this space when you are signed in to your Transum subscription account. If you do not yet have an account
and you are a teacher or parent you can apply for one here.
A Transum subscription also gives you access to the 'Class Admin' student management system and opens up ad-free access to the Transum website for you and your pupils. | {"url":"https://www.transum.org/software/SW/prison/prison.asp","timestamp":"2024-11-05T18:24:51Z","content_type":"text/html","content_length":"33469","record_id":"<urn:uuid:f5925ad3-8330-42bf-904c-eec09c410f1f>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00232.warc.gz"} |
As I mentioned in an earlier post, I’ve spent the last few weeks working on Google’s Machine Learning Crash Course. After several mornings and weekend hours of work, I managed to finish all the
lessons. In Google’s own words, the crash course is “A self-study guide for aspiring machine learning practitioners.” From a workflow perspective, the course is broken up into 25 lessons, each of
which has at least one power-point style lecture from Google researchers, as well as a combined 40+ exercises. They also try to use real-world case studies as examples, which helps make the course
material feel a bit less abstract.
The website lists an expected completion time of 15 hours, but I’d say 20-30 hours is probably a bit more realistic if you’re not already a Machine Learning expert, not counting studying
prerequisites of course. Still, this is probably one of the most compact machine learning resources I’ve seen. It puts almost everything you need in your web browser and gives a good balance
between too much information and not enough. It glosses over some of the more technical and involved details, such as detailed mathematical proofs, but makes sure you have enough intuition to work
through the logic and understand what you’re doing.
I have an undergraduate level of knowledge about statistics, and I work with Python on a daily basis, so I figured I had the prerequisites covered. The prerequisites for the course specifically
1. Mastery of intro-level algebra
□ Mostly check, I am not afraid of matrices, vectors, and I’ve even been known to dabble in identities. I got a D the first time I took vector calculus, but that was mostly due to being sick
and missing the first couple of weeks of classes–I took it again and got an A at least. I’ve also been working through a great MIT online course Probabilistic Systems Analysis and Applied
Probability. That turned out to be a great refresher for a lot of the basic ideas addressed in the crash course.
2. Proficiency in programming basics, and some experience coding in Python
…There’s a lot of good information on the linked prerequisites page and it’s a great place to start if you need to brush up on a few concepts. I especially liked the visual explanation of the back
propagation algorithm.
Once you start the actual course, you are presented with a list of lessons covering fields like “Reducing Loss“, “Classification“, and “Training Neural Nets“. Each lesson has a expected completion
time. I found that these time estimations are a bit low overall compared to the time it took me to finish them, but it varies a good deal lesson to lesson. Sometimes exercises will require
additional time just to train the machine learning models for instance and I felt that extra time wasn’t really factored in.
The video lessons are probably the most unexpectedly well done part of the whole crash course. They do a good job of introducing basic concepts, and they’re paced well to boot. Sometimes I’d need
to go back and replay a certain “slide”, but doing so was made easy by the interface. It also has an option to play the slides at 1.5x or 2.0x speed, if that works better for you.
The exercises are very useful and interactive. Each exercise takes the form of a Jupyter notebook, hosted via the Colaboratory Google research project. They hold your hand enough that you don’t get
too lost, but also have hidden solution sections you can reveal if you’re stumped or just want to double check your work.
I feel that the course gave me a more thorough understanding of the basic principles of machine learning and gave me a solid foundation to work from. The introductions to Tensorflow, NumPy, and
Pandas are probably the most useful gems in the crash course. Together the provided Python tools make for a very powerful and flexible machine learning toolbox.
To wrap everything up, I definitely recommend the Google Machine Learning Crash Course as long as you’re not looking for a single source to teach you everything about the field. And, I suppose, to
expect that would be overlooking the “crash course” part. Instead, this is a first step that will point you in the right direction to learn more. But it serves that purpose very well. Well done to
the folks at Google!
Great new machine learning crash course from Google
Over the last couple of weeks, I’ve been working through a new Google online Machine Learning Crash Course. I’ve worked through several tutorials on basic machine learning tools in the past, but
this one is by far the most easy to use such tutorial I’ve found. It uses Jupyter Notebooks, similar to my previous post detailing my machine learning homelab. However, everything runs directly in
the browser, requiring no additional setup to run the notebooks. By default, these notebooks do not include GPU acceleration, but when you run the same notebooks in my homelab environment, they
should automatically become GPU accelerated. Alternatively, with a bit of tweaking, you can even use GPU acceleration directly in the browser. NEAT!!!
It’s really nice to see more folks jumping on the Jupyter Notebooks bandwagon these days. They’re easy to manage and at least somewhat portable. You can find even more such notebooks at https://distill.pub/.
Update: Fixed incorrect assertion that you can’t do GPU acceleration with Tensorflow in-browser.
Building a machine learning homelab (w/ Docker + Linux + Nvidia 1080 GPU)
Lessons Learned
Before I begin, I’ll start with a bit of brief background on how and why I developed my current machine learning homelab. I’ve spent much of my professional career working with technology on the
cutting edge of what’s possible with modern machine learning. My personal background is more on the web development and back end infrastructure side of things, so I’ve helped monitor and improve the
general reliability and tractability of lots of different software including machine learning models.
However, sometimes I just want to noodle around (or should I say dabble?) with various machine learning models on my own at home–without worrying about nuking a production system. So, I designed and
built the homelab system described below as a simple, easy to manage machine learning test bed using my existing home gaming desktop. I went through a lot of trial and error to get to this final
design, but I’ll just present the final product as I’m currently using it. Please drop me an email or leave a comment if you find any bugs or have any suggestions on how I can improve my design.
Part of what makes this system so easy to manage is the integration between the Docker stack and the Linux kernel. As such, Linux is required, and non-Ubuntu Linux environments may behave somewhat
differently depending on the distribution. For this tutorial, I’ll assume the following environment to start…
Setup nvidia-docker
(Skip this section for CPU-only mode)
Once you’ve got your system ready, it’s time to install the magic package that will make our Nvidia GPU integration work. It’s called “nvidia-docker” and allows us to run docker containers that will
automatically connect to your local GPU(s) and make them available to the container software…
Install: https://github.com/NVIDIA/nvidia-docker#quickstart
Please see the above linked documents for detailed instructions on setting up your environment (especially the apt settings). However, the following will work for many users as a quick-and-dirty
setup process, assuming you already have some Nvidia drivers and Docker installed.
1. Get Docker
(Skip this section if you’ve already got a recent Docker installation)
$ sudo apt-get install docker-ce
2. Install Nvidia drivers
(Skip this section if you’ve already got recent Nvidia drivers for your GPU)
Install a compatible nvidia driver package on your local system.
$ sudo apt-get install nvidia-384
3. Get nvidia-docker
Get sources for the latest nvidia-docker package.
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu16.04/amd64/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update
Install the latest nvidia-docker package.
$ sudo apt-get install nvidia-docker2
Reload the Docker daemon configuration.
$ sudo pkill -SIGHUP dockerd
4. Setup Jupyter Notebooks:
Change to your home directory, or some other directory where you’d like the keras package to live. Keras is a machine learning package with a built-in Jupyter Notebooks environment that we will use below.
$ cd $HOME
Get the keras code base using git.
$ git clone https://github.com/dannydabbles/keras.git
Change directories to the keras folder.
$ cd keras
Set up git submodules.
$ git submodule init
$ git submodule update
Change directories to the docker folder.
$ cd docker
Launch the Jupyter Notebook environment. Make sure to note the notebook URL link in the terminal output from this command.
(Leave out “GPU=0” for CPU-only mode)
$ make notebook BACKEND=tensorflow GPU=0 # Note URL in output
5. Point your favorite web browser at the “0.0.0.0:8888…” URL you just generated
Open this URL in your web browser of choice and you’ll now have access to a persistent environment that you can use to run machine learning models. The default setup is running a tensorflow
environment, but many related tools will also work in this environment. You may also want to try out the theano flavor of the environment by switching the backend option to “BACKEND=theano”.
6. From the Jupyter Notebooks landing page, navigate to workspace>examples>deep-learning-keras-tensorflow
Run the example notebook “0. Preamble.ipynb” by clicking on it, then clicking Cell>Run All
In the deep-learning-keras-tensorflow folder, you will find several Jupyter Notebooks (all ending in “*.ipynb”) that will run using your attached GPU. There are many other notebooks you can download
and play with without GPU support as well. Have a look around and see what you can find.
NOTE: Any data not stored under the “workspace” directory on the Jupyter Notebooks landing page will not persist once you stop your Jupyter Notebooks Keras container.
A good chunk of getting modern machine learning models to work is just setting up the proper infrastructure. I’ve found other guides online to be either overly verbose or incomplete when it comes to setting up modern machine learning infrastructure in a homelab setting. Hopefully I’ve helped fill that gap somewhat here. Please comment below or shoot me an email if you find bugs or have general comments on my post.
Hello world!
Hello and welcome to my dabbling blog! As the name suggests, this is a place for me to dabble with whatever grabs my interest and share my findings with the world. I do not yet have any idea whether this will turn out to be more of a solitary portfolio or a larger conversation about my interests. I hope the latter, and I’m always honored by thoughtful commentary. I make no claims of special skills or prowess; I simply hope you can share in my curiosity and maybe take part. I will endeavor to share as much of my projects and code as I can, but I don’t make any guarantees of quality and I can’t take responsibility for anything that may break for you (user beware!). With those caveats, I’ll be posting more soon, but for now I’ll just leave this simple hello. Welcome!
MATH 503 HW 1
Where to find the solutions? Most of them are in Lovász’ book. Check the solutions in
the book.
Question 1. Find a formula for the number of subsets of an n element set with cardinality divisible by 3.
A similar question (with 7 instead of 3) is solved in 1.42.(f).
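The book is the authoritative source for the solution; as an independent sanity check (not from Lovász’ book), a roots-of-unity filter gives the closed form $(2^n + 2\cos(n\pi/3))/3$, which can be verified numerically:

```python
from math import comb, cos, pi

def subsets_div3_closed(n):
    # Roots-of-unity filter: (1/3) * sum_{j=0}^{2} (1 + w^j)^n with w = exp(2*pi*i/3);
    # the two complex terms pair up into the real expression 2*cos(n*pi/3).
    return round((2**n + 2 * cos(n * pi / 3)) / 3)

def subsets_div3_direct(n):
    # Brute-force count: subsets of cardinality 0, 3, 6, ...
    return sum(comb(n, k) for k in range(0, n + 1, 3))

assert all(subsets_div3_closed(n) == subsets_div3_direct(n) for n in range(31))
print(subsets_div3_closed(10))  # 341 subsets of a 10-element set
```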
Question 2.* Find a formula for the number of connected graphs with exactly one cycle
on vertices labeled 1 to n.
There is a nice book where among many other interesting results you can find the answer;
ASYMPTOPIA by Spencer and Florescu. If you are not that motivated then follow the
link below.
http://math.umn.edu/~jblewis/4707docs/UnicyclicGraphs.pdf
Question 3. Prove that $\binom{n}{n-k}$ is a polynomial in n for each fixed k.
Question 4. Prove the following identities. Use combinatorial considerations if you can.
$$\sum_{k=0}^{m}\binom{m}{k}\binom{n+k}{m}=\sum_{k=0}^{m}\binom{m}{k}\binom{n}{k}2^{k},$$

$$\sum_{k=1}^{n-1}\binom{n}{k}\,k^{k-1}(n-k)^{n-k-1}=2(n-1)n^{n-2}.$$
Return values while using Bootstrap
I keep getting errors regarding variables in my module although these values are well defined. Sometimes SAS gives no error at all, and other times SAS doesn't let it go. It feels like the program is being haunted😊
Can anyone please help me figure out where the problem is.
Thank you all
08-19-2023 03:33 PM
Torsional Stress - S.B.A. Invent
Torsion is caused by a twisting moment called torque. To calculate torque, multiply the applied force that is perpendicular to the centroid by the distance from the centroid. For a cylinder, the distance from the centroid would be the radius of the cylinder. To calculate torque, use equation 1 and refer to the image below.
(Eq 1) $T=Fd$
T = Torque
F = Perpendicular Force
d = Distance from Central Axis
From the above it can be seen that torque increases as the distance from the centroid increases, while for the same force the torque decreases as the distance from the centroid decreases. This relationship is what makes gearing systems work.
Torque can be negative or positive depending on its direction. Positive torque goes in the counter-clockwise direction; negative torque goes in the clockwise direction. If you’re confused, use your right hand and point your thumb toward yourself; the way your fingers curl represents a positive torque. This is called the right-hand rule. The figure shown above shows a negative torque, while the figure below shows a positive torque.
Torsional Stress
All stress caused by torsion is shear stress. For a circular rod, maximum stress is found on the surface, and it decreases linearly to zero as you approach the central axis, refer to the figure
below. For objects that are not circular the resulting stress field is different. This will be discussed in another section.
In order to calculate the maximum shear stress on a rod due to torque the equation below would be used.
(Eq 2) $τ_{max}=\frac{Tr_o}{J}$
$τ_{max}$ = maximum shear stress
$r_o$ = outer radius
$J$ = polar moment of inertia
Notice from equation 2 that there is a variable J called the polar moment of inertia. The polar moment of inertia is included in the equation above because it represents the ability of the specific shape to resist torsional deformation. This value has no dependence on the material properties of the given object. To calculate the polar moment of inertia for a circular rod, equation 3 would be used.
(Eq 3) $J=\frac{π}{2}\left(r_o^4-r_i^4 \right)$
For the above equation, $r_i$ goes to 0 if the cylinder is solid. If the cylinder is a tube, $r_i$ is the inner radius of the tube.
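As a quick numerical sketch of equations 2 and 3 (the shaft dimensions and torque below are made-up illustrative values, in SI units):

```python
import math

def polar_moment(r_o, r_i=0.0):
    """Polar moment of inertia J for a solid (r_i = 0) or hollow circular shaft (Eq 3)."""
    return math.pi / 2 * (r_o**4 - r_i**4)

def max_shear_stress(torque, r_o, r_i=0.0):
    """Maximum torsional shear stress at the outer surface (Eq 2)."""
    return torque * r_o / polar_moment(r_o, r_i)

# Solid shaft, 20 mm outer radius, carrying 500 N*m of torque
tau = max_shear_stress(torque=500.0, r_o=0.020)
print(f"tau_max = {tau / 1e6:.1f} MPa")  # about 39.8 MPa
```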
Angle of Twist
For any applied stress to an object there must also be a resulting deflection of the object. In the case of torsion it is a twisting motion called the angle of twist. To determine the angle of twist, Hooke’s law would be used. The resulting equation can be seen below.
(Eq 4) $Φ=\frac{TL}{JG}$
$G$ = shear modulus
When using equation 4, the length L is the distance from the applied torque to the constrained point of the part, not the entire length of the part.
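Equation 4 can be sketched the same way (reusing illustrative values; a shear modulus of about 79 GPa is typical for steel):

```python
import math

def angle_of_twist(torque, length, r_o, shear_modulus, r_i=0.0):
    """Angle of twist in radians (Eq 4); length is measured from the applied
    torque to the constrained point, not the whole part."""
    j = math.pi / 2 * (r_o**4 - r_i**4)  # polar moment of inertia (Eq 3)
    return torque * length / (j * shear_modulus)

# Solid steel shaft: 500 N*m over 1.5 m, 20 mm outer radius
phi = angle_of_twist(torque=500.0, length=1.5, r_o=0.020, shear_modulus=79e9)
print(f"phi = {math.degrees(phi):.2f} deg")
```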
On the scaling of wind turbine rotors
Articles | Volume 6, issue 3
© Author(s) 2021. This work is distributed under the Creative Commons Attribution 4.0 License.
This paper formulates laws for scaling wind turbine rotors. Although the analysis is general, the article primarily focuses on the subscaling problem, i.e., on the design of a smaller-sized model
that mimics a full-scale machine. The present study considers both the steady-state and transient response cases, including the effects of aerodynamic, elastic, inertial, and gravitational forces.
The analysis reveals the changes to physical characteristics induced by a generic change of scale, indicates which characteristics can be matched faithfully by a subscaled model, and states the
conditions that must be fulfilled for desired matchings to hold.
Based on the scaling laws formulated here, the article continues by considering the problem of designing scaled rotors that match desired indicators of a full-scale reference. To better illustrate
the challenges implicit in scaling and the necessary tradeoffs and approximations, two different approaches are contrasted. The first consists in a straightforward geometric zooming. An analysis of
the consequences of zooming reveals that, although apparently simple, this method is often not applicable in practice, because of physical and manufacturing limitations. This motivates the
formulation of scaling as a constrained optimal aerodynamic and structural matching problem of wide applicability.
Practical illustrations are given considering the scaling of a large reference 10MW wind turbine of about 180m in diameter down to three different sizes of 54, 27, and 2.8m. Results indicate that,
with the proper choices, even models characterized by very significant scaling factors can accurately match several key performance indicators. Additionally, when an exact match is not possible,
relevant trends can at least be captured.
Received: 25 Mar 2020 – Discussion started: 30 Mar 2020 – Revised: 18 Nov 2020 – Accepted: 03 Jan 2021 – Published: 03 May 2021
This article is concerned with the aeroservoelastic scaling of wind turbine rotors. The general scaling problem includes both up- and subscaling (or downscaling). This work primarily focuses on the
latter aspect – i.e., on the design of subscaled models – but also briefly touches upon the former. Specifically, this work tries to answer the following scientific questions:
• What are the effects of a change of scale (i.e., both in the case of up- and subscaling) on the steady and transient response of a wind turbine?
• What steady and transient characteristics of the response of a full-scale wind turbine can be matched by a subscaled model?
• What are the most suitable ways to design the aerodynamic and structural characteristics of a subscaled model?
The understanding of both up- and subscaling is relevant to contemporary wind energy technology.
Regarding upscaling, wind turbines have experienced a continuous growth in size in the past decades. This trend has been mostly driven by a desire for increased capacity factors, which can be
obtained essentially through two main design parameters: by lowering the specific power – which, for a given power rating, means a larger rotor-swept area – and by designing taller towers, which
reach higher above ground, where wind blows faster. In turn, improved capacity factors have a positive effect on the cost of energy, which has helped propel the penetration of wind in the energy mix.
The design of the next-generation wind turbines, especially for offshore applications, is expected to follow this same path, with rotor diameters of present and future products already exceeding
200m (IRENA, 2019; GE, 2019; Siemens Gamesa, 2020). Unfortunately, larger blades cannot be obtained by simply scaling up existing smaller blades but must be designed to beat the cubic law of growth.
In fact, weight (and hence cost) grows with volume – i.e., with the cube of size – whereas power capture only grows with rotor swept area, i.e., with the square of size (Sieros et al., 2012). Against
this background, it is clearly useful to understand the changes that can be expected in a turbine response as the result of an increase in size.
Subscaling, on the other hand, is useful as a research tool: by designing and testing smaller-scale versions of full-scale references, one can validate simulation tools, explore ideas, compare
alternative solutions, and deepen the knowledge and understanding of complex physical phenomena. Among other advantages, scaled testing is usually much cheaper and less risky than full-scale testing.
In addition, full-scale testing is typically performed on prototypes or even commercial products, which raises often unsurmountable issues because of intellectual property rights and trade secrecy.
In turn, this limits opportunities for publication, data sharing, and full exploitation of the results from the scientific community. With commercial turbine sizes expected to grow even further in
the future, it is becoming more important than ever to fully understand how to best employ subscaling as a research tool.
Two subscaled testing activities are possible: wind tunnel testing with small-scale models and field testing with small turbines. In both cases, the goal is to match at least some of the
characteristics of the original full-scale problem. Clearly, this requires a complete understanding of the effects of a change (in this case, a reduction) of scale on the response of a wind turbine.
Wind tunnel testing of subscaled wind turbine models offers some unique opportunities. First, the operating conditions in a wind tunnel are to a large extent controllable and typically highly
repeatable. Second, measurements – especially of flow quantities – that are possible in the lab environment are generally more difficult, are less precise, and have a lower resolution at full scale.
Third, costs and risks are much more limited than in the case of field testing, and the time for the conduction of the experiments is shorter (not only because of the reduced challenges but also
because of time acceleration, as explained later). Fourth, since a small-scale model cannot exactly match a full-scale product, property right issues are typically much less of a constraint.
The first wind tunnel experiments on wind turbine aerodynamics were conducted in the last decades of the 20th century, as summarized in Vermeer et al. (2003). Studies carried out during the Unsteady
Aerodynamics Experiment (Simms et al., 2001) with a stall-regulated 10m diameter, 20kW turbine were, among others, key to uncovering the importance of specific flow phenomena, such as dynamic
stall, 3D rotational effects, and tower–wake interactions. Later, the 4.5m diameter scaled models designed for the Model rotor EXperiments In controlled COnditions (MEXICO) project enabled the
validation of multiple aerodynamic models, ranging from blade element momentum (BEM) to computational fluid dynamics (CFD) (Snel et al., 2009). These wind turbine models were designed following a set
of scaling laws aimed at replicating as accurately as possible the aerodynamic behavior of full-scale machines. More recently, the inclusion of closed-loop controls and aeroservoelastic
considerations in the scaling process expanded the scope of wind tunnel testing beyond aerodynamics (Campagnolo et al., 2014). Nowadays, wind tunnel tests are extensively used to gain a better
understanding of wake effects, to validate simulation tools, and to help develop novel control strategies (Bottasso and Campagnolo, 2020). The recent study of Wang et al. (2020) tries to quantify the
level of realism of wakes generated by small-scale models tested in a boundary layer wind tunnel.
Unfortunately, the exact matching of all relevant physical processes between full-scale and subscale models is typically not possible. This mismatch increases with the scale ratio, and it becomes
especially problematic when large wind turbines (with rotor sizes on the order of 10^2m and power ratings on the order of 10^6–10^7W) are scaled to very small size wind tunnel models (characterized
by rotors on the order of 10^−1–10^0m and power ratings on the order of 10^0–10^2W). To limit the scale factor, instead of using very small models in a wind tunnel, testing can be conducted in the
field with small-size wind turbines (with a rotor on the order of 10^1m and power ratings on the order of 10^5W).
Examples of state-of-the-art experimental test sites realized with small-size wind turbines are the Scaled Wind Farm Technology (SWiFT) facility in Lubbock, Texas (Berg et al., 2014), which uses
three 27m diameter Vestas V27 225kW turbines, or the soon-to-be-ready WINSENT complex-terrain facility in the German Swabian Alps (ZSW, 2016), which uses two 54m diameter S&G 750kW turbines.
Reducing the scaling ratios and moving to the field offers the opportunity to overcome some of the constraints typically present in wind tunnel testing, although some of the advantages of wind
tunnels are clearly lost. Indeed, the range of testing conditions cannot be controlled at will, measurements are more difficult, and costs are higher. Here research has so far mainly focused on
steady-state aerodynamics and wake metrics. For example, within the National Rotor Testbed project (Resor and Maniaci, 2014), teams at the University of Virginia, Sandia National Laboratories, and
National Renewable Energy Laboratory have designed a blade for the SWiFT experimental facility, replacing the original Vestas V27 blade. The scaling laws were specifically chosen to replicate the
wake of a commercial 1.5MW rotor at the subscale size of the V27 turbine. To capture the dynamic behavior of very large wind turbines, additional effects must, however, be considered in the scaling
laws. For example, Loth et al. (2017) have recently proposed a methodology to include gravity in the scaling process, and they have demonstrated their approach to scale a 100m blade down to a 25m
size. Gravity is also crucially important in floating offshore applications (Azcona et al., 2016) to balance buoyancy and correctly represent flotation dynamics, with its effects on loads, stability,
and performance and with implications for control design.
This paper considers the general problem of scaling a wind turbine rotor to a different size, including the effects caused by aerodynamic, elastic, inertial, and gravitational forces. The study is
structured in two main parts.
Initially, an analysis of the problem of scaling is presented. The main steady and transient characteristics of a rotor in terms of performance, aeroservoelasticity, and wake shedding are considered,
and the effects caused by a generic change of scale are determined. The analysis reveals that, in principle, most of the response features can be faithfully represented by a subscaled model. However,
an exact matching of all features is typically impossible because of chord-based Reynolds effects, which lead to changes in the aerodynamic behavior of the system. Another limit comes from wind
conditions: the wind field is not scaled when using utility-size models in the field, and wind tunnel flows can only partially match the characteristics of the atmospheric boundary layer. The
analysis also shows that scaling is essentially governed by two parameters: the geometric (length) scaling factor and the time scaling factor. Based on these two parameters, all matched and unmatched
quantities can be fully characterized.
In the second part, the paper continues by looking at the problem of designing a subscaled model. Two different approaches are considered. The first is a straightforward zooming down of all blade
characteristics based on a pure geometrical scaling (Loth et al., 2017), which is appealing for its apparent simplicity. The second is based on a complete aerostructural redesign, which is formulated
here in terms of two constrained optimizations: the aerodynamic one defines the external shape of the blade, whereas the structural optimization sizes the structural components. Both strategies aim
at replicating the dynamic behavior (including gravitational effects) of a full-scale wind turbine at a smaller scale, and they are therefore based on the same scaling laws. Clearly, the complete
redesign is a more complicated process than the pure geometric zooming-down approach. However, the main goal of scaling is that of designing a rotor that matches at scale the behavior of a target
full-scale machine as closely as possible. From this point of view, the simplicity of design – which is a one-off activity – is less of a concern, especially today, when sophisticated automated rotor
design tools are available (Bortolotti et al., 2016). Apart from simplicity, zooming is very often simply not possible for large scale factors because of unrealistically small sizes (especially the
thickness of shell structures), non-achievable material characteristics, or impossible-to-duplicate manufacturing processes (Wan and Cesnik, 2014; Ricciardi et al., 2016). In all those cases, a
different aerodynamic shape, a different structural configuration, and different materials are used to obtain the desired behavior, as shown, for example, in the design of a small-size
aeroelastically scaled rotor by Bottasso et al. (2014), or as customarily done in the design of scaled flutter models for aeronautical applications (Busan, 1998).
Although the intrinsic limits of the straightforward zooming-down approach are probably well understood, these two alternative methodologies are compared here in order to give a better appreciation
of the complexities that one has to face in the design of scaled models. To give practical and concrete examples, a very large rotor is scaled down to three different model sizes, including two
different utility wind turbines and a small-scale wind tunnel model. For each model, the zooming-down approach is adopted when possible for its simplicity and then replaced by the redesign method
when fidelity or physical limits make it impractical or impossible.
Furthermore, the paper analyzes the accuracy with which the subscale models successfully mirror relevant key characteristics of the full-scale reference, in terms both of absolute values and of
trends. This is indeed an important aspect of scaling: even if the exact matching of certain quantities is sometimes not possible, scaled models can still be highly valuable if they are able to at
least capture trends. As an example of such a trend analysis, the subscale models are used here to explore changes in loading between unwaked and waked inflow conditions, which are then validated
against the corresponding loading changes of the full-scale machine. Results indicate that even the smallest model is capable of capturing complex details of wake interaction, including an
interesting lack of symmetry for left/right wake impingements caused by rotor uptilt.
A final section completes the paper, listing the main conclusions that can be drawn from the results and highlighting their limits.
Buckingham's Π theorem (Buckingham, 1914) states that a scaled model (labeled $(\cdot)_\mathrm{M}$) has the same behavior as a full-scale physical system (labeled $(\cdot)_\mathrm{P}$) if all the m relevant nondimensional variables $\pi_i$ are matched between the two systems. In other words, when the governing equations are written as

$$\varphi(\pi_{1\mathrm{P}},\dots,\pi_{m\mathrm{P}})=0, \tag{1a}$$
$$\varphi(\pi_{1\mathrm{M}},\dots,\pi_{m\mathrm{M}})=0, \tag{1b}$$

the two systems are similar if

$$\pi_{i\mathrm{P}}=\pi_{i\mathrm{M}},\quad i=(1,m). \tag{2}$$
Depending on the scaled testing conditions, not all dimensional quantities can usually be matched. In the present case, we consider testing performed in air, either in a wind tunnel or in the field,
neglecting hydrodynamics.
The length (geometric) scale factor between scaled and full-scale systems is defined as

$$n_\mathrm{l}=\frac{l_\mathrm{M}}{l_\mathrm{P}}, \tag{3}$$

where l is a characteristic length (for example the rotor radius R), whereas the scale factor for time t is defined as

$$n_\mathrm{t}=\frac{t_\mathrm{M}}{t_\mathrm{P}}. \tag{4}$$

As a consequence of these two definitions, one can determine the angular velocity and wind speed scaling factors, which are respectively written as $n_\Omega=\Omega_\mathrm{M}/\Omega_\mathrm{P}=1/n_\mathrm{t}$ and $n_\mathrm{v}=V_\mathrm{M}/V_\mathrm{P}=n_\mathrm{l}/n_\mathrm{t}$. A nondimensional time can be defined as $\tau=t\,\Omega_\mathrm{r}$, where $\Omega_\mathrm{r}$ is a reference rotor speed, for example the rated one. It is readily verified that, by the previous expressions, nondimensional time is matched between the model and physical system, i.e., $\tau_\mathrm{M}=\tau_\mathrm{P}$. The two factors $n_\mathrm{l}$ and $n_\mathrm{t}$ condition, to a large extent, the characteristics of a scaled model.
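The bookkeeping implied by these two factors can be collected in a few lines of Python (a sketch; the Mach, Reynolds, and Froude entries use the nondimensional-number definitions given in this section, assuming testing in air at full-scale gravity, and the numeric example values are illustrative, not the paper's design cases):

```python
def scale_factors(n_l, n_t):
    """Scale factors implied by length factor n_l and time factor n_t,
    for testing in air at full-scale gravity (same rho, mu, a_s, g)."""
    return {
        "length": n_l,
        "time": n_t,
        "rotor_speed": 1.0 / n_t,    # n_Omega = Omega_M / Omega_P
        "wind_speed": n_l / n_t,     # n_v = V_M / V_P
        "tsr": 1.0,                  # lambda = Omega R / V is always matched
        "mach": n_l / n_t,           # Ma = W / a_s scales with flow speed
        "reynolds": n_l**2 / n_t,    # Re = rho l u / mu
        "froude": n_l / n_t**2,      # Fr = V^2 / (g R)
    }

# Example: 10x geometric reduction with matched wind speed (n_t = n_l)
f = scale_factors(n_l=0.1, n_t=0.1)
print(f["reynolds"], f["froude"])  # Re drops 10x while Fr grows 10x
```

The example highlights a central tension of subscaling: with this choice of time scaling, the Reynolds and Froude numbers move in opposite directions and cannot both be preserved.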
The following Sects. 2.1 and 2.2 analyze the main steady and transient characteristics of a rotor in terms of performance, aeroservoelasticity, and wake shedding. The analysis discusses which of
these characteristics can be matched by a scaled model and which conditions are required for the matchings to hold. Next, Sect. 2.3 offers an overview on the main scaling relationships and discusses
the choice of scaling parameters.
2.1 Steady state
2.1.1 Rotor aerodynamics
The power coefficient characterizes the steady-state performance of a rotor, and it is defined as $C_\mathrm{P}=P/(\frac{1}{2}\rho A V^3)$, where P is the aerodynamic power, ρ the density of air, $A=\pi R^2$ the rotor disk area, and V the ambient wind speed. The thrust coefficient characterizes the wake deficit and the rotor loading and is defined as $C_\mathrm{T}=T/(\frac{1}{2}\rho A V^2)$, where T is the thrust force. For a given rotor, the power and thrust coefficients depend on the tip-speed ratio (TSR) $\lambda=\Omega R/V$ and blade pitch β, i.e., $C_\mathrm{P}=C_\mathrm{P}(\lambda,\beta)$ and $C_\mathrm{T}=C_\mathrm{T}(\lambda,\beta)$.
It is readily verified that $\lambda_\mathrm{M}=\lambda_\mathrm{P}$ for any $n_\mathrm{l}$ and $n_\mathrm{t}$, which means that it is always possible to match the scaled and full-scale TSR. This ensures the same velocity triangle at the blade sections and the same wake helix pitch.
Ideally, a scaled model should match the $C_\mathrm{P}$ and $C_\mathrm{T}$ coefficients of a given full-scale target; it is clearly desirable for the match to hold not at a single operating point but over a range of conditions. BEM theory (Manwell et al., 2002) shows that both rotor coefficients depend on the steady-state aerodynamic characteristics of the airfoils. In turn, the lift $C_\mathrm{L}$ and drag $C_\mathrm{D}$ coefficients of the aerodynamic profiles depend on the angle of attack and on the Mach and Reynolds numbers.
The local Mach number accounts for compressibility effects and is defined as $\mathit{Ma}=W/a_\mathrm{s}$, where W is the flow speed relative to a blade section, and $a_\mathrm{s}$ is the speed of sound. Using the previous expressions, the Mach number of the scaled model is $\mathit{Ma}_\mathrm{M}=\mathit{Ma}_\mathrm{P}\,n_\mathrm{l}/n_\mathrm{t}$. Because of typical tip speeds, compressibility does not play a significant role in wind turbines. Hence, the matching of the Mach number can usually be neglected for current wind turbines. The situation might change for future offshore applications, where, without the constraints imposed by noise emissions, higher tip-speed and TSR rotors may have interesting advantages.
The Reynolds number represents the ratio of inertial to viscous forces and is defined as $\mathit{Re}=\rho l u/\mu$, where l is a characteristic length, u a characteristic speed, and μ the dynamic viscosity. In the present context, the most relevant definition of the Reynolds number is the one based on the blade sections, where $l=c$ is the chord length, and $u=W$ is the flow speed relative to the blade section. In fact, the Reynolds number has a strong effect on the characteristics and behavior of the boundary layer that develops over the blade surface, which in turn, through the airfoil polars, affects the performance and loading of the rotor. Testing in air in a wind tunnel or in the field (hence with similar ρ and μ but with a reduced chord c) leads to a mismatch between the scaled and full-scale chord-based Reynolds numbers, as $\mathit{Re}_\mathrm{M}=\mathit{Re}_\mathrm{P}\,n_\mathrm{l}^2/n_\mathrm{t}$.
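To get a feel for the size of this mismatch, the chord-based Reynolds number can be evaluated for a hypothetical blade section (the chord, relative speed, and scale factor below are illustrative, not values from the paper):

```python
RHO_AIR = 1.225    # kg/m^3, sea-level air density
MU_AIR = 1.81e-5   # Pa*s, dynamic viscosity of air

def chord_reynolds(chord, w_rel, rho=RHO_AIR, mu=MU_AIR):
    """Chord-based Reynolds number Re = rho*c*W/mu of a blade section."""
    return rho * chord * w_rel / mu

# Hypothetical full-scale section: c = 3 m, relative flow speed W = 70 m/s
re_full = chord_reynolds(3.0, 70.0)            # ~1.4e7
# 1:60 geometric scale with matched wind speed (n_t = n_l): W is unchanged,
# the chord shrinks by n_l, so Re drops by the same factor of 60
re_model = chord_reynolds(3.0 / 60, 70.0)      # ~2.4e5
print(f"Re full-scale: {re_full:.2e}, model: {re_model:.2e}")
```

The two-orders-of-magnitude drop is what forces the airfoil substitutions and chord adjustments discussed next.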
The effects due to a chord-based Reynolds mismatch can be mitigated by replacing the airfoils of the full-scale system with others better suited for the typical Reynolds conditions of the scaled model (Bottasso et al., 2014). A second approach is to increase the chord of the scaled model. This, however, has the effect of increasing the rotor solidity – defined as $\Sigma=B A_\mathrm{b}/A$, where B is the number of blades and $A_\mathrm{b}$ the blade planform area – which may have additional consequences. In fact, the TSR of the maximum power coefficient is directly related to rotor solidity. This can be shown by using classical BEM theory with wake swirl, which gives the optimal blade design conditions by maximizing power at a given design TSR $\lambda_\mathrm{d}$. By neglecting drag, the optimal design problem can be solved analytically to give the chord distribution of the optimal blade along the spanwise coordinate r (Manwell et al., 2002):

$$\frac{c(r)}{R}=\frac{16\pi}{9\,B\,C_\mathrm{L}\,\lambda_\mathrm{d}^{2}\,r/R}. \tag{5}$$
Although based on a simplified model that neglects some effects, this expression shows that chord distribution and design TSR are linked. This means that, if one increases solidity (and hence chord) to counteract the Reynolds mismatch while keeping $C_\mathrm{L}$ fixed, the resulting rotor will have a lower TSR for the maximum power coefficient. Therefore, this technique of correcting the Reynolds number moves the optimal TSR away from that of the full-scale reference, which may or may not be acceptable, depending on the goals of the model. For example, if one wants to match the behavior of the $C_\mathrm{P}$–λ curves over a range of TSRs, such an approach would not be suitable. As shown by Eq. (5), this effect can be eliminated or mitigated by changing the design $C_\mathrm{L}$ accordingly; however, if this moves the operating condition of the airfoil away from its point of maximum efficiency, a lower maximum power coefficient will be obtained.
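Equation (5) transcribes directly into code; the blade count, design lift coefficient, and design TSR below are illustrative placeholders, not values from the paper:

```python
import math

def optimal_chord_over_R(r_over_R, n_blades=3, c_l=1.0, lambda_d=8.0):
    """Nondimensional optimal chord c(r)/R from BEM with wake swirl, Eq. (5)."""
    return 16 * math.pi / (9 * n_blades * c_l * lambda_d**2 * r_over_R)

# Chord tapers inversely with span; doubling the design C_L halves the chord,
# which is the trade-off discussed in the text for Reynolds mitigation.
for x in (0.25, 0.5, 0.75, 1.0):
    print(f"r/R = {x:.2f}:  c/R = {optimal_chord_over_R(x):.4f}")
```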
In addition, chord c and lift $C_\mathrm{L}$ are further constrained by the circulation $\Gamma=\frac{1}{2}c\,C_\mathrm{L}\,W$ (Burton et al., 2001), which plays an important role in the aerodynamics of the rotor and its wake.
Considering first the rotor, the lift and drag generated by the airfoils located close to the blade root are modified by the combined effects of centrifugal and Coriolis forces. In fact, the former
causes a radial pumping of the flow that, as a result, moves outboard in the spanwise direction. This radial motion over a rotating body generates chordwise Coriolis forces that alleviate the adverse
pressure gradient on the airfoils and, in turn, delay stall. As shown by the dimensional analysis developed by Dowler and Schmitz (2015), rotational augmentation causes multiplicative corrections,
denoted $g_{C_\mathrm{L}}$ and $g_{C_\mathrm{D}}$, to the nonrotating lift and drag coefficients, which can be written, respectively, as

$$g_{C_\mathrm{L}}=\left(\frac{c}{r}\right)^{2}\left(\frac{\Gamma}{RW}\right)^{1/2}\left(\frac{\Omega r}{2W}\right)^{-2}, \tag{6a}$$
$$g_{C_\mathrm{D}}=\frac{1}{3}\left(\frac{r}{R}\right)\left(\frac{c}{r}\right)^{-1}\left(\frac{\mathrm{d}\theta}{\mathrm{d}r}\frac{R}{\Delta\theta}\right)\left(\frac{\Omega r}{2W}\right), \tag{6b}$$
where Δθ is the total blade twist from root to tip. Equations (6a) and (6b) show that, in order to match the effects of rotational augmentation, the model and full-scale system should have the same blade nondimensional chord and twist distributions; the same nondimensional circulation $\Gamma/(RW)$; and the same Rossby number $\text{Ro}=\Omega r/(2W)$, which represents the ratio of inertial to Coriolis forces. Matching the nondimensional circulation between the two systems implies matching either both the planform shape $c/R$ and the lift coefficient C[L], or the product of the two. As previously noted, some of these options may lead to a different TSR of optimal C[P]. On the other hand, it is readily verified that the Rossby number is always matched for any choice of n[l] and n[t].
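The invariance of the Rossby number, and hence of the lift correction of Eq. (6a), under any choice of n[l] and n[t] can be checked numerically. The following is a minimal sketch with purely illustrative values; the helper names are ours, not from the text:

```python
def scaled(value, exp_l, exp_t, n_l, n_t):
    """Scale a quantity with dimensions [length]^exp_l [time]^exp_t."""
    return value * n_l**exp_l * n_t**exp_t

# Illustrative full-scale quantities (arbitrary, not a real design)
R, r, c, W, Omega, Gamma = 89.0, 30.0, 3.0, 60.0, 1.0, 90.0

# An arbitrary choice of length and time scaling factors
n_l, n_t = 1.0 / 100.0, 1.0 / 20.0

# Lengths scale by n_l, speeds by n_l/n_t, rotor speed by 1/n_t,
# and circulation (length * speed) by n_l^2 / n_t
R_m     = scaled(R, 1, 0, n_l, n_t)
r_m     = scaled(r, 1, 0, n_l, n_t)
c_m     = scaled(c, 1, 0, n_l, n_t)
W_m     = scaled(W, 1, -1, n_l, n_t)
Omega_m = scaled(Omega, 0, -1, n_l, n_t)
Gamma_m = scaled(Gamma, 2, -1, n_l, n_t)

def rossby(Omega, r, W):              # Ro = Omega r / (2 W)
    return Omega * r / (2.0 * W)

def g_CL(c, r, Gamma, R, W, Omega):   # lift correction of Eq. (6a)
    return (c / r)**2 * (Gamma / (R * W))**0.5 * (Omega * r / (2.0 * W))**-2

# Both quantities come out identical for the model and the full-scale system
print(rossby(Omega, r, W), rossby(Omega_m, r_m, W_m))
print(g_CL(c, r, Gamma, R, W, Omega), g_CL(c_m, r_m, Gamma_m, R_m, W_m, Omega_m))
```

The same cancellation of the scaling factors occurs for any values of n[l] and n[t], which is the sense in which the Rossby number is "always automatically matched".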
2.1.2 Wake aerodynamics
The circulation is relevant not only for rotational augmentation but also for wake behavior. In fact, each blade sheds trailing vorticity that is proportional to the spanwise gradient $\mathrm{d}\Gamma/\mathrm{d}r$ (Schmitz, 2020). Therefore, designing a blade that matches the spanwise distribution of Γ (and, hence, also its spanwise gradient) ensures that the scaled rotor sheds the same trailed vorticity. Additionally, a matched circulation also ensures a matched thrust, which is largely responsible for the speed deficit in the wake and for its deflection in misaligned conditions (Jiménez et al., 2010).
The Reynolds mismatch derived earlier also applies to its rotor-based definition, which is relevant to wake behavior and is obtained by using l=2R and u=V. However, Chamorro et al. (2012) showed that the wake is largely unaffected by this parameter as long as Re>10^5, which is typically the case unless extremely small model turbines are used. The same is true for the terrain-height-based Reynolds number definition that applies to flows over complex terrains, where Reynolds-independent results are obtained when Re>10^4 (McAuliffe and Larose, 2012).
The detailed characterization of the behavior of scaled wakes is considered out of the scope of the present investigation, and the interested reader is referred to Wang et al. (2020) for a specific study on this important topic.
The Froude number represents the ratio of aerodynamic to gravitational forces and is written as $\text{Fr}=V^{2}/(gR)$, where g is the acceleration of gravity. The Froude number of the scaled model is readily found to be $\text{Fr}_{\mathrm{M}}=\text{Fr}_{\mathrm{P}}\,n_{\mathrm{l}}/n_{\mathrm{t}}^{2}$. Enforcing Froude similarity (Fr[M]=Fr[P]) sets the time scaling factor to $n_{\mathrm{t}}=\sqrt{n_{\mathrm{l}}}$. This condition determines the only remaining unknown in the scaling laws, so that the scalings of all nondimensional parameters can now be expressed in terms of the sole geometric scaling factor n[l]. Froude scaling is used when gravity plays an important role, for example in the loading of very large rotors or in floating offshore applications, where weight and buoyancy forces should be in equilibrium.
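Under Froude scaling, the factors for all derived quantities follow from $n_{\mathrm{t}}=\sqrt{n_{\mathrm{l}}}$ by dimensional analysis. A small sketch (the helper function and the 1:100 example are ours, for illustration only):

```python
import math

def froude_factors(n_l):
    """Model/full-scale factors implied by Froude scaling, n_t = sqrt(n_l)."""
    n_t = math.sqrt(n_l)          # time
    n_v = n_l / n_t               # speed (length/time) = sqrt(n_l)
    n_f = 1.0 / n_t               # frequency
    n_Re = n_v * n_l              # Reynolds ~ V*l, i.e., n_l^(3/2)
    n_Fr = n_v**2 / n_l           # Froude ~ V^2/(gR) = 1 by construction
    return dict(time=n_t, speed=n_v, frequency=n_f, Reynolds=n_Re, Froude=n_Fr)

f = froude_factors(1.0 / 100.0)   # e.g., a hypothetical 1:100 model
print(f)
```

For this example the Froude factor is 1 (matched by construction), while the Reynolds number drops by $n_{\mathrm{l}}^{3/2}$, i.e., by a factor of 1000, which is the mismatch discussed later for the zoomed-down blade.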
The steady deflections due to aerodynamic loading of the scaled and full-scale wind turbines can be matched by adjusting the stiffness of the scaled model. In fact, consider the very simplified model of a blade represented by a clamped beam of length R under a uniformly distributed aerodynamic load per unit span, $q=\frac{1}{2}\rho W^{2}cC_{\mathrm{L}}$. The beam nondimensional tip deflection is $s/R=qR^{3}/(8EJ)$, where EJ is the bending stiffness, E is Young's modulus, and J is the cross-sectional moment of inertia. By the previous definitions of the length scale and timescale, one gets $(s/R)_{\mathrm{M}}=(s/R)_{\mathrm{P}}$ if $(EJ)_{\mathrm{M}}=(EJ)_{\mathrm{P}}\,n_{\mathrm{l}}^{6}/n_{\mathrm{t}}^{2}$. Hence, nondimensional deflections can be matched, provided that the stiffness is adjusted as shown. Meeting this requirement may imply changing the material and/or the configuration of the structure, because of technological, manufacturing, and material property constraints (Busan, 1998; Ricciardi et al., 2016), as discussed in more detail later on.
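The cancellation behind $(s/R)_{\mathrm{M}}=(s/R)_{\mathrm{P}}$ can be verified directly: the load q scales as $n_{\mathrm{l}}^{3}/n_{\mathrm{t}}^{2}$, $R^{3}$ as $n_{\mathrm{l}}^{3}$, and the prescribed EJ scaling absorbs both. A sketch with purely illustrative numbers (not a real blade):

```python
# Verify that s/R = q R^3 / (8 EJ) is preserved when EJ_M = EJ_P n_l^6 / n_t^2.
rho, W, c, C_L, R, EJ = 1.225, 60.0, 3.0, 1.0, 89.0, 5.0e9  # illustrative only

def tip_deflection_ratio(rho, W, c, C_L, R, EJ):
    q = 0.5 * rho * W**2 * c * C_L    # distributed aerodynamic load per unit span
    return q * R**3 / (8.0 * EJ)      # s/R of a clamped beam under uniform load

n_l, n_t = 1.0 / 50.0, 1.0 / 10.0     # an arbitrary choice of scaling factors

sR_full = tip_deflection_ratio(rho, W, c, C_L, R, EJ)
sR_model = tip_deflection_ratio(
    rho,                              # air density unchanged
    W * n_l / n_t,                    # speeds scale by n_l/n_t
    c * n_l,                          # lengths scale by n_l
    C_L,                              # lift coefficient matched
    R * n_l,
    EJ * n_l**6 / n_t**2,             # required stiffness scaling
)
print(sR_full, sR_model)              # identical nondimensional deflection
```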
2.2 Transient response
A scaled model should obey some additional conditions in order for the transient response of the full-scale system to be matched.
2.2.1 Rotor aerodynamics and inflow
As mentioned earlier, any aerodynamically scaled model can always be designed to enforce the TSR without additional conditions. To extend the similitude to dynamics, the nondimensional time derivative of the TSR should also be matched, i.e., $\lambda'_{\mathrm{M}}=\lambda'_{\mathrm{P}}$, where a nondimensional time derivative is denoted as $(\cdot)'=\mathrm{d}(\cdot)/\mathrm{d}\tau$. By using the definition of λ, one gets
$$\lambda'=\frac{\Omega' R}{V}-\lambda\frac{V'}{V}.\qquad\text{(7)}$$
The rotor dynamic torque balance is written as $I\dot{\Omega}=Q$. In this expression, I is the rotor polar moment of inertia, $\dot{(\cdot)}=\mathrm{d}(\cdot)/\mathrm{d}t$ indicates a derivative with respect to time, and $Q=Q_{\mathrm{a}}-(Q_{\mathrm{e}}+Q_{\mathrm{m}})$ is the shaft torque. The aerodynamic torque is $Q_{\mathrm{a}}=\frac{1}{2}\rho AV^{2}R\,C_{\mathrm{P}}/\lambda$, while Q[e] is the electrical torque provided by the generator and Q[m] the mechanical losses. The aerodynamic torque scales as $Q_{\mathrm{a}_{\mathrm{M}}}=Q_{\mathrm{a}_{\mathrm{P}}}\,n_{\mathrm{l}}^{5}/n_{\mathrm{t}}^{2}$, and clearly Q[e]+Q[m] must scale accordingly. Since the mechanical losses depend on friction, it might be difficult to always match Q[m], especially in a small-scale model. This problem, however, can be eliminated by simply providing the electrical torque necessary to generate the correct sum Q[e]+Q[m]. Considering that the dimensions of I are [I]=[ρ[m]][l]^5, where ρ[m] is the material density and l a characteristic length, the first term $\Omega' R/V$ in Eq. (7) is matched between the two models if the material density is matched, i.e., if $\rho_{\mathrm{m}_{\mathrm{M}}}=\rho_{\mathrm{m}_{\mathrm{P}}}$. The second term, $\lambda V'/V$, in Eq. (7) is matched if the two systems operate at the same TSR and if the wind speed has the same spectrum as the wind in the field. The matching of wind fluctuations (clearly, only in a statistical sense) induces not only the same variations in the TSR, and hence in the rotor response, but also the same recovery of the wake, which is primarily dictated by the ambient turbulence intensity (Vermeer et al., 2003).
Matching of the wind spectrum is in principle possible in a boundary layer wind tunnel if a flow of the desired characteristics can be generated. Turbulent flows can be obtained by active (Hideharu,
1991; Mydlarski, 2017) or passive means (Armitt and Counihan, 1968; Counihan, 1969). Active solutions are more complex and expensive but also more flexible and capable of generating a wider range of
conditions. When testing in the field, the flow is invariably not scaled. This will have various effects on the scaled model response, which might be beneficial or not depending on the goals of
scaled testing. In fact, the acceleration of time (t[M]=t[P]n[t]) implies a shift in the wind frequency spectrum. Among other effects, this means that low-probability (extreme) events happen more
frequently than at full scale. Similarly, the scaling of speed (${V}_{\mathrm{M}}={V}_{\mathrm{P}}{n}_{\mathrm{l}}/{n}_{\mathrm{t}}$) implies higher amplitudes of turbulent fluctuations and gusts
than at full scale.
The magnitude and phase of the aerodynamic response of an airfoil (as modeled, for example, by Theodorsen's theory; Bisplinghoff and Ashley, 2002) are governed by the reduced frequency $\kappa=\omega_{\mathrm{m}}c/(2W)$, where ω[m] is the circular frequency of motion. Harmonic changes in the angle of attack take place at various frequencies $\omega_{\mathrm{m}_{j}}$ and are caused by the inhomogeneities of the flow (shears, misalignment between the rotor axis and the wind vector), blade pitching, and structural vibrations in bending and twisting. The reduced frequency can be written as $\kappa_{j}=\tilde{\omega}_{\mathrm{m}_{j}}\,\Omega c/(2W)$, where $\tilde{\omega}_{\mathrm{m}_{j}}=\omega_{\mathrm{m}_{j}}/\Omega$ indicates a nondimensional frequency. This expression shows that, once the nondimensional frequencies $\tilde{\omega}_{\mathrm{m}_{j}}$ (due to inflow, pitch, and vibrations) are matched, the corresponding reduced frequencies are also matched, as the term $\Omega c/(2W)$ is always automatically preserved between the scaled and full-scale systems for any n[l] and n[t].
Dynamic stall effects depend on reduced frequency κ and chord-based Reynolds number. Typical dynamic stall models depend on the lift, drag, and moment static characteristics of an airfoil and various
time constants that describe its unsteady inviscid and viscous response (Hansen et al., 2004). As previously argued, κ can be matched, and all time constants are also automatically matched by the
matching of nondimensional time. However, a mismatch of the chord-based Reynolds number is typically unavoidable and will imply differences in the dynamic stall behavior of the scaled and full-scale
models, which will have to be quantified on a case-by-case basis.
2.2.2 Wake aerodynamics
The Strouhal number is associated with vortex shedding, which is relevant to tower and rotor wake behavior; the Strouhal number has also recently been used to describe the enhanced wake recovery obtained by dynamic induction control (Frederik et al., 2019). A rotor–wake-relevant definition of this nondimensional parameter is $\text{St}=f\,2R/V$, where f is a characteristic frequency. Using the previous relationships, it is readily shown that $\text{St}_{\mathrm{M}}/\text{St}_{\mathrm{P}}=n_{\mathrm{l}}/(n_{\mathrm{t}}n_{\mathrm{v}})=1$, where $n_{\mathrm{v}}=n_{\mathrm{l}}/n_{\mathrm{t}}$ is the speed scaling factor; i.e., the Strouhal number is always exactly matched between the scaled and full-scale models for any n[l] and n[t].
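The cancellation is immediate: frequencies scale by $1/n_{\mathrm{t}}$, lengths by $n_{\mathrm{l}}$, and speeds by $n_{\mathrm{l}}/n_{\mathrm{t}}$. A one-line numerical check with illustrative values of our choosing:

```python
# St = f 2R / V is invariant: the scale factors of f, R, and V cancel exactly.
def strouhal(f, R, V):
    return f * 2.0 * R / V

f0, R, V = 0.25, 89.0, 10.0           # illustrative full-scale values
n_l, n_t = 1.0 / 75.0, 1.0 / 30.0     # any choice of scaling factors

St_full = strouhal(f0, R, V)
St_model = strouhal(f0 / n_t, R * n_l, V * n_l / n_t)
print(St_full, St_model)              # identical for any n_l, n_t
```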
During transients, spanwise vorticity is shed that is proportional to the temporal gradient of the circulation. Using BEM theory (Manwell et al., 2002, p. 175), the nondimensional spanwise circulation distribution is computed as
$$\frac{\Gamma}{RW}=\frac{1}{2}\frac{c}{R}C_{L,\alpha}\left(\frac{U_{\mathrm{P}}}{U_{\mathrm{T}}}-\theta\right).\qquad\text{(8)}$$
In this expression, C[L,α] is the slope of the lift curve, θ is the sectional pitch angle, and U[P] and U[T] are the flow velocity components at the blade section, respectively perpendicular and tangent to the rotor disk plane, such that $W^{2}=U_{\mathrm{P}}^{2}+U_{\mathrm{T}}^{2}$. The flow speed component tangential to the rotor disk is $U_{\mathrm{T}}=\Omega r+u_{\mathrm{T}}$, where u[T] contains terms due to wake swirl and yaw misalignment. The flow speed component perpendicular to the rotor disk is $U_{\mathrm{P}}=(1-a)V+\dot{d}+u_{\mathrm{P}}$, where a is the axial induction factor, $\dot{d}$ the out-of-plane blade section flapping speed, and u[P] the contribution due to yaw misalignment and vertical shear. Neglecting u[P] and u[T] and using Eq. (8), the nondimensional time rate of change of the circulation becomes
$$\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\frac{\Gamma}{RW}\right)=\frac{1}{2}\frac{c}{R}C_{L,\alpha}\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\frac{1-a+\dot{d}/V}{\lambda}\left(\frac{R}{r}\right)-\theta\right).\qquad\text{(9)}$$
For a correct similitude between the scaled and full-scale systems, the nondimensional derivatives λ′, a′, θ′, and $(\dot{d}/V)'$ should be matched. The matching of λ′ has already been addressed. The term a′ accounts for dynamic changes in the induction, which are due to the speed of actuation (of torque and blade pitch) and the intrinsic dynamics of the wake. The speed of actuation is matched if the actuators of the scaled model are capable of realizing the same rates of change as the full-scale system, i.e., if θ′ is matched. The intrinsic dynamics of the wake are typically modeled by a first-order differential equation (Pitt and Peters, 1981):
$$\dot{\boldsymbol{a}}+\mathbf{A}\boldsymbol{a}=\boldsymbol{b},\qquad\text{(10)}$$
where $\boldsymbol{a}$ represents the inflow states and $\mathbf{A}$ is a matrix of coefficients proportional to $V/R$. It is readily verified that the matching of nondimensional time results in the matching of a′. Finally, the term $(\dot{d}/V)'$ is due to the elastic deformation of the blade, which is addressed next.
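A scalar sketch can illustrate why the matching of nondimensional time matches a′: with the coefficient proportional to $V/R$, the inflow dynamics of Eq. (10), once written in $\tau$, are scale free. The following toy integration (illustrative values and helper names are ours, not a real Pitt–Peters model) produces identical induction histories in nondimensional time at the two scales:

```python
def simulate(V, R, b_tilde, tau_end, steps):
    """Euler integration of  da/dt + k (V/R) a = b_tilde / T,  sampled in
    nondimensional time tau = t / T, with reference time T = R / V."""
    k = 0.5                        # illustrative nondimensional coefficient
    T = R / V
    dtau = tau_end / steps
    a = 0.0
    for _ in range(steps):
        # multiplying the physical ODE by T gives da/dtau = b_tilde - k a
        a += dtau * T * (b_tilde / T - k * (V / R) * a)
    return a

n_l, n_t = 1.0 / 100.0, 1.0 / 10.0
a_full = simulate(V=10.0, R=89.0, b_tilde=0.3, tau_end=5.0, steps=20000)
a_model = simulate(V=10.0 * n_l / n_t, R=89.0 * n_l, b_tilde=0.3,
                   tau_end=5.0, steps=20000)
print(a_full, a_model)             # equal: induction histories match in tau
```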
Considering blade flapping, the Lock number Lo is defined as
$$\text{Lo}=\frac{C_{L,\alpha}\,\rho\,cR^{4}}{I_{\mathrm{b}}},\qquad\text{(11)}$$
where I[b] is the blade flapping inertia. Matching the Lock number ensures the same ratio of aerodynamic to inertial forces. Considering that the flapping inertia is dimensionally proportional to [ρ[m]][l]^5, where ρ[m] is the material density and l a characteristic length, the Lock number can be matched by simply matching the material density of the blade, i.e., ρ[mM]=ρ[mP]. A similar definition of the Lock number can be developed for the fore–aft motion of the rotor due to the flexibility of the tower, leading to the same conclusion.
The system's ith nondimensional natural frequency is defined as $\tilde{\omega}_{i}=\omega_{i}/\Omega$, where ω[i] is the ith dimensional natural frequency. Matching the lowest N nondimensional frequencies means that the corresponding eigenfrequencies in the scaled and full-scale systems have the same relative placement among themselves and with respect to the harmonic excitations at the multiples of the rotor speed. In other words, the two systems have the same Campbell diagram (Eggleston and Stoddard, 1987). In addition, by matching nondimensional frequencies, the ratio of elastic to inertial forces is correctly scaled. Considering that the bending natural frequency of a blade is dimensionally proportional to $\sqrt{EJ/(\rho_{\mathrm{m}}l^{6})}$, the matching of nondimensional natural frequencies implies $(EJ)_{\mathrm{M}}=(EJ)_{\mathrm{P}}\,n_{\mathrm{l}}^{6}/n_{\mathrm{t}}^{2}$, which is the same result obtained in the steady case for the matching of static deflections under aerodynamic loading. The same conclusions are obtained when considering deformation modes other than bending, so that in general one can write $K_{\mathrm{M}}=K_{\mathrm{P}}\,n_{\mathrm{l}}^{6}/n_{\mathrm{t}}^{2}$, where K is a stiffness. Here again, it can be concluded that, for each given n[l] and n[t], one can match the frequencies by adjusting the stiffness of the scaled model.
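That the prescribed stiffness scaling preserves the Campbell-diagram placement can be verified numerically. A sketch with illustrative values (the helper is ours and uses the dimensional proportionality stated above, not a real modal model):

```python
import math

# Check that K_M = K_P n_l^6 / n_t^2, with matched material density,
# preserves the nondimensional natural frequency omega / Omega.
def omega_bending(EJ, rho_m, l):
    """Bending frequency, dimensionally proportional to sqrt(EJ / (rho_m l^6))."""
    return math.sqrt(EJ / (rho_m * l**6))

rho_m, l, EJ, Omega = 1800.0, 89.0, 5.0e9, 1.0   # illustrative values only
n_l, n_t = 1.0 / 60.0, 1.0 / 8.0                 # arbitrary scaling factors

ratio_full = omega_bending(EJ, rho_m, l) / Omega
ratio_model = omega_bending(EJ * n_l**6 / n_t**2, rho_m, l * n_l) / (Omega / n_t)
print(ratio_full, ratio_model)                   # identical Campbell placement
```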
It should be remarked that this condition only defines the stiffnesses that should be realized in the scaled model, not how these are actually obtained. As noted earlier, it is typically difficult if
not impossible to simply zoom down a complex realistic structure, and the model design may require a different configuration and choice of materials (Busan, 1998). An optimization-based approach to
the structural matching problem is described later in this work.
It is worth noting that matching both the Lock number and the placement of the nondimensional natural frequencies implies that the structural deflections caused by aerodynamic loads are correctly scaled. In fact, the Lock number is the ratio of aerodynamic to inertial forces, while $\tilde{\omega}_{i}^{2}$ is proportional to the ratio of elastic to inertial forces. Therefore, if both ratios are preserved, then $\text{Lo}/\tilde{\omega}_{i}^{2}$, being the ratio of aerodynamic to elastic forces, is also preserved. In symbols, this ratio is written as
$$\frac{\text{Lo}}{\tilde{\omega}_{i}^{2}}=\frac{qL^{3}}{EJ},\qquad\text{(12)}$$
where the right-hand side is indeed proportional to the nondimensional tip deflection $\tilde{s}=s/R$ of a clamped beam subjected to a distributed load $q=C_{L,\alpha}\,\rho\,c(R\Omega)^{2}$.
The matching of frequencies is also relevant to the matching of transient vorticity shedding in the wake, as mentioned earlier. In fact, assume that the blade flapping motion can be expressed as the single mode $d=d_{0}e^{\omega_{\mathrm{f}}t}$, where d is the flapping displacement and ω[f] the flapping eigenfrequency. Then, the term $(\dot{d}/V)'$ of Eq. (9) becomes
$$\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\frac{\dot{d}}{V}\right)=\frac{d_{0}}{R}\lambda\,\tilde{\omega}_{\mathrm{f}}^{2}e^{\tilde{\omega}_{\mathrm{f}}\tau},\qquad\text{(13)}$$
where $\tilde{\omega}_{\mathrm{f}}=\omega_{\mathrm{f}}/\Omega$ is the nondimensional flapping frequency. This term is matched between the scaled and full-scale models if the nondimensional flapping frequency is matched.
2.3 Subscaling criteria
As shown earlier, scaling is essentially governed by two parameters: the geometric scaling factor n[l] and the time scaling factor n[t]. No matter what choice is made for these parameters, the exact
matching of some nondimensional parameters can always be guaranteed; these include nondimensional time, TSR, and Strouhal and Rossby numbers. In addition, the matching of other nondimensional
quantities can be obtained by properly scaling some model parameters, again independently from the choice of n[l] and n[t]. For example, selecting the material density as ρ[mM]=ρ[mP] enforces the
matching of the Lock number, whereas scaling the stiffness as ${K}_{\mathrm{M}}={K}_{\mathrm{P}}{n}_{\mathrm{l}}^{\mathrm{6}}/{n}_{\mathrm{t}}^{\mathrm{2}}$ ensures the proper scaling of the system
nondimensional natural frequencies. This way, several steady and unsteady characteristics of the full-scale system can be replicated by the scaled system. Other quantities, however, cannot be
simultaneously matched, and one has to make a choice.
Table 1 summarizes the main scaling relationships described earlier. The reader is referred to the text for a more comprehensive overview of all relevant scalings.
The choice of the scaling parameters n[l] and n[t] is highly problem dependent. Indeed, given a full-scale reference, n[l] is set by the size of its scaled replica, which is usually predefined to a
large extent. For instance, the choice of the subscale size for a wind tunnel model depends on the characteristics of the target tunnel, to limit blockage (Barlow et al., 1999). When scaling down to
a utility size, one typically chooses to reblade an existing turbine (Berg et al., 2014; Resor and Maniaci, 2014), thereby setting the scaling factor. The choice of n[t] is often not straightforward
and typically implies tradeoffs among quantities that cannot all be simultaneously matched.
For example, when the effects of gravity have to be correctly represented by the scaled model, the matching of the Froude number must be enforced. By setting Fr[M]=Fr[P], one obtains the condition
for the time scaling factor ${n}_{\mathrm{t}}=\sqrt{{n}_{\mathrm{l}}}$. Having set n[t], the scalings of all nondimensional parameters can now be expressed in terms of the sole geometric scaling
factor n[l].
Another example is given by the design of small-scale wind turbine models for wind tunnel testing, which typically leads to small geometric scaling factors n[l]. Bottasso et al. (2014) defined an
optimal scaling by minimizing the error in the Reynolds number and the acceleration of scaled time. The latter criterion was selected to relax the requirements on closed-loop control sampling time:
since ${\mathit{\text{Re}}}_{\mathrm{M}}={\mathit{\text{Re}}}_{\mathrm{P}}{n}_{\mathrm{l}}^{\mathrm{2}}/{n}_{\mathrm{t}}$, small geometric scaling factors might require very fast scaled times and
hence high sampling rates, which could be difficult to achieve in practice for closed-loop control models. Bottasso and Campagnolo (2020) used a different criterion, where the best compromise between the Reynolds mismatch and power density is sought. In fact, the power density (defined as power P over volume or, in symbols, $\rho_{\mathrm{P}}=P/R^{3}$) scales as $\rho_{\mathrm{P}_{\mathrm{M}}}/\rho_{\mathrm{P}_{\mathrm{P}}}=n_{\mathrm{l}}^{2}/n_{\mathrm{t}}^{3}$ and, hence, increases rapidly for small n[t]. For small n[l] it becomes increasingly difficult, if not altogether impossible, to equip the scaled models with functional components (e.g., drivetrain, generator, actuation systems, sensors) that fit in the dimensions prescribed by the scaling factors. The adoption of larger components may or may not be acceptable, depending on the nonphysical effects generated by their bigger dimensions and on the goals of the model.
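The tradeoff can be made concrete by tabulating the two ratios for a fixed geometric scale and several candidate time scalings. The numbers below are illustrative, not taken from any of the cited designs:

```python
# Tradeoff between Reynolds mismatch and power density for a fixed n_l
# and several candidate time scaling factors n_t.
def tradeoff(n_l, n_t):
    re_ratio = n_l**2 / n_t          # Re_M / Re_P
    pd_ratio = n_l**2 / n_t**3       # power density ratio rho_PM / rho_PP
    return re_ratio, pd_ratio

n_l = 1.0 / 100.0                    # hypothetical 1:100 wind tunnel model
for n_t in (1.0 / 10.0, 1.0 / 50.0, 1.0 / 200.0):
    re_ratio, pd_ratio = tradeoff(n_l, n_t)
    print(f"n_t = {n_t:.4g}: Re ratio = {re_ratio:.3g}, "
          f"power density ratio = {pd_ratio:.3g}")
```

Faster scaled time (smaller n[t]) reduces the Reynolds mismatch but inflates the power density, making the packaging of functional components correspondingly harder.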
Yet another example of how delicate these choices can be is found in the experiments described by Kress et al. (2015). In this work, a scaled rotor was designed for experiments in a water tank, with
the goal of comparing upwind and downwind turbine configurations. The rotor of the model was scaled geometrically from a full-scale reference; however, the same scaling ratio could not be used for
the nacelle because of the need to house the necessary mechanical components. As a result, the model was equipped with an unrealistically large nacelle that, combined with the lower Reynolds number
(which causes a thicker boundary layer), likely increased the redirection of the flow towards the outer-blade portions in the downwind configuration. In turn, this led to the conclusion that nacelle
blockage improves power production in downwind rotors. Although this may be true for the scaled experiment, there is little evidence that the same conclusion holds for a full-scale machine (
Anderson et al., 2020). Because of miniaturization constraints, a larger nacelle is also used in the TUM G1 scaled turbine (Bottasso and Campagnolo, 2020), a machine designed to support wake studies
and wind farm control research. The effects of the out-of-scale nacelle on the wake have, however, been verified and appear in this case to be very modest (Wang et al., 2020).
Additionally, particular combinations of n[l] and n[t] can make it difficult to find suitable designs. A clear example is found in the structural redesign of an aeroelastically subscaled blade.
Indeed, as previously discussed, the scaled blade should have stiffnesses that scale as $K_{\mathrm{M}}=K_{\mathrm{P}}\,n_{\mathrm{l}}^{6}/n_{\mathrm{t}}^{2}$ and a material density that scales as ρ[mM]=ρ[mP] to ensure the same nondimensional frequencies and Lock number. Depending on the values of the scaling parameters chosen, these scaling relationships might lead to unconventional combinations of stiffness and mass properties, which can be challenging to fulfill, as shown in the next section.
Upscaling is a design effort driven by different criteria, including, among others, annual energy production (AEP), cost of material and manufacturing, logistics, and transportation. The situation is
different for subscaling. In fact, the previous section has clarified the scaling relationships that exist between a full-scale system and its scaled model. The analysis has revealed that in general
several steady and unsteady characteristics of the original system can be preserved in the scaled one. The question is now how to design such a scaled model in order to satisfy the desired matching
conditions. This problem is discussed in this section.
3.1 Straightforward zooming down
This approach is based on the exact geometric zooming of the blade, including both its external and internal shape, and it has been advocated by Loth et al. (2017).
Regarding the external blade shape, geometric zooming implies that the same airfoils are used for both the scaled and the full-scale models. The mismatch of the Reynolds number (which is $\text{Re}_{\mathrm{M}}=\text{Re}_{\mathrm{P}}\,n_{\mathrm{l}}^{3/2}$ for Froude scaling) may imply a different behavior of the polars, especially for large values of n[l]. On the other hand, as shown earlier, geometric scaling ensures the near matching (up to the effects due to changes in the polars) of various characteristics, such as the optimum TSR, the nondimensional circulation, rotational augmentation, and vorticity shedding.
Regarding the internal blade shape, the skin, shear webs, and spar caps are also geometrically scaled down when using straightforward zooming. It should be noted that, for large geometric scaling factors n[l], elements such as the skin or the shear webs may become very thin, possibly thinner than typical composite plies.
The zoomed-down blade has to satisfy two constraints on the properties of the materials used for its realization.
The first constraint is represented by the matching of material density (ρ[mM]=ρ[mP]), which is necessary to ensure the same Lock number. It should be remarked that the overall material density of
the blade includes not only the density of the main structural elements but also contributions from coatings, adhesive, and lightning protection. These components of the blade may not be simply
scaled down, so this problem may deserve some attention.
The second constraint is represented by the scaling of the stiffness, which is necessary to ensure the matching of the nondimensional natural frequencies. For Froude scaling, the stiffness changes as $K_{\mathrm{M}}=K_{\mathrm{P}}\,n_{\mathrm{l}}^{5}$. Considering bending, the stiffness is K=EJ. For a blade made of layered composite materials, the bending stiffness is more complicated than the simple expression EJ, and it will typically need to be computed with an ad hoc methodology, for example using the anisotropic beam theory of Giavotto et al. (1983). However, the simple expression EJ is sufficient for the dimensional analysis required to understand the effects of scaling. Since the sectional moment of inertia J is dimensionally proportional to l^4, with l being a characteristic length of the blade cross section, this constraint requires Young's modulus to change according to E[M]=E[P]n[l]. This implies that all materials used for the scaled blade, including the core, should have a lower stiffness than (and the same density as) the materials used at full scale; as shown later, this constraint is not easily met.
Since strain ϵ is defined as the ratio of a displacement to a reference length, it follows that ϵ[M]=ϵ[P]. Therefore, given that σ=Eϵ and E[M]=E[P]n[l], it follows that σ[M]=σ[P]n[l], and the stresses in the scaled model are reduced compared to the ones in the full-scale model. Still, one would have to verify that the admissible stresses and strains of the material chosen for the scaled blade are sufficient to ensure integrity.
The critical buckling stress of a curved rectangular plate is
$$\sigma_{\text{cr}}=k_{\mathrm{c}}\frac{\pi^{2}E}{12\left(1-\nu^{2}\right)}\left(\frac{d}{b}\right)^{2},\qquad\text{(14)}$$
where k[c] is a coefficient that depends on the aspect ratio of the panel, its curvature, and its boundary conditions; ν is Poisson's ratio; d is the panel thickness; and b is the length of the loaded edges of the plate (Jones, 2006). Here again, the expression of the critical stress of a layered anisotropic composite plate would be more complex than the one reported in Eq. (14), but this is enough for the present dimensional analysis. By using the scaling relationships for length and for E, Eq. (14) readily leads to $\sigma_{\text{cr}_{\mathrm{M}}}=\sigma_{\text{cr}_{\mathrm{P}}}\,n_{\mathrm{l}}$. This means that, if the full-scale blade is buckling free, so is the scaled one, as both the critical buckling stress and the stresses themselves scale in the same manner.
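The preservation of the buckling margin can be checked directly from Eq. (14): with $E_{\mathrm{M}}=E_{\mathrm{P}}n_{\mathrm{l}}$ and the geometric ratio d/b unchanged, the critical stress scales exactly like the working stress. A sketch with illustrative panel data of our choosing:

```python
import math

def sigma_cr(k_c, E, nu, d, b):
    """Critical buckling stress of a rectangular plate, Eq. (14)."""
    return k_c * math.pi**2 * E / (12.0 * (1.0 - nu**2)) * (d / b)**2

k_c, E, nu, d, b = 4.0, 40.0e9, 0.3, 0.04, 1.0    # illustrative panel data
sigma = 100.0e6                                    # illustrative working stress

n_l = 1.0 / 50.0
sigma_cr_full = sigma_cr(k_c, E, nu, d, b)
sigma_cr_model = sigma_cr(k_c, E * n_l, nu, d * n_l, b * n_l)  # d/b unchanged

margin_full = sigma_cr_full / sigma
margin_model = sigma_cr_model / (sigma * n_l)      # stresses also scale by n_l
print(margin_full, margin_model)                   # identical buckling margin
```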
3.2 Aerostructural redesign
An alternative approach to the design of a subscale model is to identify an external shape and an internal structure that match, as closely as possible, the aeroelastic behavior of the full-scale
blade. This approach offers more degrees of freedom, at the cost of an increased design complexity; indeed, one designs a new blade that, although completely different from the full-scale one,
matches some of its characteristics.
In this second approach, the first step consists of defining a blade shape that can mimic the aerodynamic behavior of the full-scale system. As previously discussed, this can be obtained according to
different criteria. Here, the following three conditions are considered. First, a new set of airfoils is selected to match as closely as possible, despite the different Reynolds number of operation,
the polar coefficients of the airfoils of the full-scale blade; this is relevant for the matching of the performance and loading of the rotor. Second, the two rotors should have similarly shaped
power coefficient curves, which is relevant for performance on and off the design point. Finally, the blades should have the same spanwise circulation distribution, which is relevant for a similar
aerodynamic loading of the blade and wake behavior. The resulting scaled blade shape (both in terms of cross sections, because of the changed airfoils, and in terms of chord and twist distributions)
will be different from the full-scale rotor. However, this is clearly irrelevant, as the goal is to match some quantities of interest between the two rotors, not their shape.
The aerodynamic design problem can be formally expressed as
$$\min_{\boldsymbol{p}_{\mathrm{a}}}\;J_{\mathrm{a}}(\boldsymbol{p}_{\mathrm{a}}),\qquad\text{(15a)}$$
$$\text{subject to}\quad\boldsymbol{m}_{\mathrm{a}}(\boldsymbol{p}_{\mathrm{a}})=0,\qquad\text{(15b)}$$
$$\boldsymbol{c}_{\mathrm{a}}(\boldsymbol{p}_{\mathrm{a}})\le 0.\qquad\text{(15c)}$$
The vector p[a] indicates the aerodynamic design variables, which include the chord and twist distributions c(η) and θ(η), appropriately discretized in the spanwise direction, while J[a] is a design figure of merit, m[a] are matching constraints, and c[a] are additional design conditions. This formulation of the aerodynamic design problem is very general, and different choices of the figure of merit and of the constraints are possible, depending on the goals of the scaled model.
In the present work, the aerodynamic optimization cost function is formulated as
$\begin{array}{}\text{(16)}& {J}_{\mathrm{a}}=\sum _{i}^{{N}_{{C}_{\mathrm{P}}}}{\left(\frac{{C}_{\mathrm{P}}\left({\mathit{\lambda }}_{i}\right)-{\stackrel{\mathrm{^}}{C}}_{\mathrm{P}}\left({\mathit
{\lambda }}_{i}\right)}{{\stackrel{\mathrm{^}}{C}}_{\mathrm{P}}\left({\mathit{\lambda }}_{i}\right)}\right)}^{\mathrm{2}}.\end{array}$
This cost drives the design towards the power coefficient $\hat{C}_\mathrm{P}$ of the target full-scale model at $N_{C_\mathrm{P}}$ control stations. It ensures that the subscale model – whose airfoils generally present a reduced efficiency due to the lower chord-based Reynolds number – has a C[P] that is as close as possible to that of the full-scale model. Using $N_{C_\mathrm{P}}=1$ leads to a design with the best C[P] at the single TSR λ[1].
Within the vector of matching equality constraints, one set of conditions enforces the matching of the spanwise distribution of the circulation $\hat{\Gamma}$ at N[Γ] control stations:
$$
\frac{\Gamma(\eta_i) - \hat{\Gamma}(\eta_i)}{\hat{\Gamma}(\eta_i)} = 0, \quad i = 1, \ldots, N_\Gamma, \qquad \text{(17)}
$$
where $\hat{(\cdot)}$ indicates in general a to-be-matched scaled quantity of the target full-scale model. Another constraint may be added to prescribe that the maximum power coefficient occur at the same design TSR, i.e., $\lambda_{\max(C_\mathrm{P})} = \lambda_{\max(\hat{C}_\mathrm{P})}$.
Finally, vector c[a] specifies additional design inequality constraints, which may include a margin to stall, maximum chord, and others, depending on the application.
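As a minimal sketch of the matching quantities above (in Python, with hypothetical function names; evaluating C[P] and Γ themselves requires an aerodynamic solver, such as the BEM code described later), the cost of Eq. (16) and the constraint residuals of Eq. (17) can be written as:

```python
import numpy as np

def aero_cost(cp_model, cp_target):
    """Eq. (16): sum of squared relative errors of the power
    coefficient at the N_CP control tip-speed ratios."""
    cp_model = np.asarray(cp_model, dtype=float)
    cp_target = np.asarray(cp_target, dtype=float)
    return np.sum(((cp_model - cp_target) / cp_target) ** 2)

def circulation_constraints(gamma_model, gamma_target):
    """Eq. (17): relative circulation mismatch at the N_Gamma
    spanwise control stations; each residual must equal zero."""
    gamma_model = np.asarray(gamma_model, dtype=float)
    gamma_target = np.asarray(gamma_target, dtype=float)
    return (gamma_model - gamma_target) / gamma_target
```

An optimizer would drive `aero_cost` to a minimum while enforcing the residuals of `circulation_constraints` as equality constraints.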
Once the new aerodynamic shape is identified, the second step consists in the design of an internal blade structure that can mimic the full-scale aeroelastic behavior while ensuring integrity and
satisfying manufacturing and realizability constraints. This approach allows for more freedom than the zooming-down approach; for example, one can use different materials than the ones used for the
full-scale design, and nonstructural masses can be added without affecting the matching characteristics of the scaled blade.
The structural design problem can be formally expressed as
$$
\begin{aligned}
& \min_{\boldsymbol{p}_\mathrm{s}} \; J_\mathrm{s}(\boldsymbol{p}_\mathrm{s}), && \text{(18a)} \\
& \text{subject to} \quad \boldsymbol{m}_\mathrm{s}(\boldsymbol{p}_\mathrm{s}) = \boldsymbol{0}, && \text{(18b)} \\
& \phantom{\text{subject to}} \quad\; \boldsymbol{c}_\mathrm{s}(\boldsymbol{p}_\mathrm{s}) \le \boldsymbol{0}. && \text{(18c)}
\end{aligned}
$$
Vector p[s] indicates the structural design variables, which include the size of the various blade structural elements (skin, spar caps, shear webs, and leading- and trailing-edge reinforcements),
discretized span- and chordwise. Here again, this formulation is very general, and specific goals will lead to different choices of the merit function and of the constraints.
For example, assuming the blade to be modeled as a beam, the structural optimization cost can be formulated as
$$
\begin{aligned}
J_\mathrm{s} = & \sum_{i}^{N_\mathrm{s}} \left( \frac{M_{p}(\eta_i) - \hat{M}_{p}(\eta_i)}{\hat{M}_{p}(\eta_i)} \right)^{2} \\
& + w_\mathrm{s} \sum_{i}^{N_\mathrm{s}} \left( \frac{K_{q}(\eta_i) - \hat{K}_{q}(\eta_i)}{\hat{K}_{q}(\eta_i)} \right)^{2}, \qquad p \in \mathcal{S}_\mathrm{M},\; q \in \mathcal{S}_\mathrm{K}, \qquad \text{(19)}
\end{aligned}
$$
where w[s] is a tuning weight, M[p] and K[q] are elements of the mass and stiffness matrices, and the sets 𝒮[M] and 𝒮[K] identify the elements that should be considered within the generally fully
populated symmetric mass and stiffness matrices. The first term in the cost aims at the matching of the scaled target mass distribution, while the second aims at that of the stiffness distribution. Vector m[s] indicates the matching equality constraints. These may include the matching of a desired number of natural frequencies, $\omega_i = \hat{\omega}_i$, and of a desired number of mode shapes and/or static deflections, $\boldsymbol{u}_j(\eta_i) = \hat{\boldsymbol{u}}_j(\eta_i)$, at a given number of spanwise stations η[i]. Finally, vector c[s] specifies the additional design inequality constraints. These constraints express all other necessary and desired conditions that
must be satisfied in order for the structural design to be viable and in general include maximum stresses and strains for integrity, maximum tip deflection for safety, buckling, and manufacturing and
technological conditions.
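The structural matching cost of Eq. (19) can be sketched similarly (a hypothetical helper; `M` and `K` here are vectors of the selected mass- and stiffness-matrix entries, sampled at the N[s] spanwise stations):

```python
import numpy as np

def structural_cost(M, M_hat, K, K_hat, w_s):
    """Eq. (19): relative mismatch of selected mass-matrix (M) and
    stiffness-matrix (K) entries at the N_s spanwise stations,
    with tuning weight w_s on the stiffness term."""
    M, M_hat = np.asarray(M, dtype=float), np.asarray(M_hat, dtype=float)
    K, K_hat = np.asarray(K, dtype=float), np.asarray(K_hat, dtype=float)
    mass_term = np.sum(((M - M_hat) / M_hat) ** 2)
    stiffness_term = np.sum(((K - K_hat) / K_hat) ** 2)
    return mass_term + w_s * stiffness_term
```

Setting `w_s = 0` recovers the mass-only matching used later for the W and S models.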
It should be noted that the matching of the scaled beam stiffness and mass distributions – if it can be achieved – is an extremely powerful condition. In fact, a geometrically exact nonlinear beam
model is fully characterized entirely in terms of its reference curve, stiffness, and mass matrices (Bottasso and Borri, 1998). This means that exactly matching all of these quantities would ensure
the same nonlinear structural dynamic behavior of the full-scale target. As shown later, this is not always possible because of limits due to technological processes, material characteristics, chosen
configuration of the scaled model, etc. In this case, there is a partial match between the full-scale and scaled beam models, and the sets 𝒮[M] and 𝒮[K] include only some elements of the mass and
stiffness matrices. When this happens, additional matching constraints can help in ensuring as similar a behavior as possible between the scaled and full-scale structures, for example by including
static deflections and/or modal shapes, as shown later.
4 Application and results: subscaling of a 10MW rotor
The two strategies of straightforward zooming and aerostructural redesign are applied here to the subscaling of a 10MW machine, developed in Bottasso et al. (2016) as an evolution of the original
Danmarks Tekniske Universitet (DTU) 10MW reference wind turbine (Bak et al., 2013). The main characteristics of the turbine are reported in Table 2. Some of the principal blade characteristics are
given in Table 3, which reports the position of the airfoils, whereas Table 4 details the blade structural configuration and Table 5 summarizes the material properties.
Three different subscalings are considered here. The first subscale model, denominated the W model, is based on the German WINSENT test site (ZSW, 2016), which is equipped with two 750kW turbines
with a rotor diameter of 54m (ZSW, 2017). The reference rotor blades are scaled down to match the span of the WINSENT blades; reblading one of the WINSENT turbines yields a subscale model of the
full-scale 10MW turbine suitable for field testing. The second model, denominated the S model, is based on the SWiFT test site, which is equipped with Vestas V27 turbines. Here, the full-scale rotor
is scaled down to a diameter of 27m. Finally, the T model is a wind tunnel model with a rotor diameter of 2.8m, which is similar to the scaled floating turbine tested in the Nantes wave tank in the
INNWIND.EU project (Azcona et al., 2016).
Table 6 reports the different geometric scaling factors and a few additional key quantities of the three subscale models. For all, Froude scaling is used, which sets the timescale factor as
previously explained. The application of the scaling laws to the full-scale turbine results in the characteristics listed in Table 7. Independently of the approach chosen to define the internal and
external shape, the scaled models must fulfill these conditions to correctly mirror the dynamic behavior of the full-scale wind turbine.
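As an illustrative sketch, assuming standard Froude similitude (unscaled gravity and fluid properties), the derived scale factors follow directly from the geometric scale factor n[l]:

```python
import math

def froude_factors(n_l):
    """Derived scale factors under Froude scaling, given the
    geometric (length) scale factor n_l = l_M / l_P < 1. Time and
    velocity follow from keeping the Froude number v^2/(g l)
    invariant with unscaled gravitational acceleration."""
    n_t = math.sqrt(n_l)     # timescale factor
    n_v = n_l / n_t          # velocity factor = sqrt(n_l)
    n_omega = 1.0 / n_t      # rotor-speed factor
    n_re = n_v * n_l         # chord-based Reynolds factor = n_l^1.5
    return {"time": n_t, "velocity": n_v,
            "rot_speed": n_omega, "reynolds": n_re}
```

The steep drop of the Reynolds factor (n_l^1.5) explains why the small T model suffers the largest aerodynamic mismatch.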
The gravo-aeroservoelastic scaling laws lead to very light and flexible subscale blades. For instance, the standard blades of the V27 weigh 600kg (Vestas, 1994), 4 times the mass of the gravo-aeroservoelastically scaled blades of the S model. It should, however, be remarked that this ratio would be smaller for a modern blade, since the V27 was designed more than 25 years ago and its blades are heavier than ones based on contemporary technology.
The following sections detail the design of the external and internal shape of the three subscale blades. Section 4.1 describes the aeroservoelastic and design tools used to this end. Then, Sects.
4.2 and 4.3 discuss, respectively, the strengths and limitations of each design strategy for each subscale model.
4.1 Aeroservoelastic and design tools
The aeroservoelastic models are implemented in Cp-Lambda (Bottasso et al., 2012). The code is based on a multibody formulation for flexible systems with general topologies described in Cartesian
coordinates. A complete library of elements – including rigid bodies, nonlinear flexible elements, joints, actuators, and aerodynamic models – is available, as well as sensor and control elements.
The aerodynamic characteristics of the blade are described through lifting lines, including spanwise chord and twist distribution and aerodynamic coefficients. The code is coupled with aerodynamic
models based on the BEM model, formulated according to stream-tube theory with annular and azimuthally variable axial and swirl inductions, unsteady corrections, root and blade tip losses, and a
dynamic stall model.
The tower and rotor blades are modeled by nonlinear, geometrically exact beams of arbitrary initially undeformed shapes, which are bending, shear, axial, and torsion deformable. The structural and
inertial characteristics of each beam section are computed with ANBA (Giavotto et al., 1983), a 2D finite-element cross-sectional model. Finally, full-field turbulent wind grids are computed with
TurbSim (Jonkman et al., 2009) and used as input flow conditions for the aeroservoelastic simulations.
Cp-Max (Bortolotti et al., 2016) is a design framework wrapped around Cp-Lambda, which implements optimization algorithms to perform the coupled aerostructural design optimization of the blades and,
optionally, of the tower. For the present work, the code was modified to also implement the scaled-design matching optimizations defined by Eqs. (15) and (18). All optimization procedures are solved with a sequential quadratic programming algorithm, in which gradients are computed by means of finite differences.
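The finite-difference gradients mentioned above can be sketched as follows (an illustrative forward-difference helper, not the actual Cp-Max implementation):

```python
import numpy as np

def fd_gradient(f, p, h=1e-6):
    """Forward finite-difference gradient of a scalar cost f at the
    design point p, as would be supplied to a sequential quadratic
    programming solver when analytic derivatives are unavailable."""
    p = np.asarray(p, dtype=float)
    f0 = f(p)
    g = np.zeros_like(p)
    for i in range(p.size):
        dp = p.copy()
        dp[i] += h          # perturb one design variable at a time
        g[i] = (f(dp) - f0) / h
    return g
```

Each gradient evaluation costs one cost-function call per design variable, which is why the number of design variables is kept moderate in such matching optimizations.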
4.2 External shape design
For all three models, the design of the subscale external blade shape aims at replicating the aerodynamic characteristics of the full-scale rotor, including its wake. As long as the chord-based
Reynolds numbers are sufficiently large, a zooming-down approach is clearly the simplest strategy for designing the external shape of a scaled blade.
Airfoil FFA-W3-241 equips the outermost part of the full-scale blade (see Table 3). Its performance at the three typical Reynolds numbers of the full-scale, W, and S models was computed with ANSYS
Fluent (ANSYS, Inc., 2019). The results are reported in Fig. 1. The performance of the airfoil is clearly affected by the Reynolds number, with a particularly significant drop in efficiency for the
lowest Reynolds case. Notwithstanding these Reynolds effects, the zooming-down approach is selected for the W and S models, since the airfoils still perform well at their corresponding typical subscale Reynolds numbers. A redesign with alternative airfoils was not attempted here, as it would probably yield only marginal improvements in aerodynamic performance.
On the other hand, for the small geometric scaling factor of the T model, the aerodynamic redesign approach is necessary. In general, smooth airfoils present a large reduction in aerodynamic
efficiency below a critical Reynolds number of about 70000 (Selig et al., 1995). Efficient profiles specifically developed for low-Reynolds-number applications are generally necessary in order to
get a good matching of the full-scale aerodynamic performance. As an alternative to the original airfoil, the 14% thick airfoil RG14 (Selig et al., 1995) is selected, because its aerodynamic
characteristics at the scaled Reynolds number are in reasonable agreement with the ones of the original airfoil at its full-scale Reynolds number (Fig. 1). The blade is then completely redesigned,
using the RG14 airfoil along its full span.
The blade shape is parameterized by means of chord and twist spanwise distributions. The design problem is formulated as the maximization of the power coefficient at the design TSR λ[d] of the
full-scale rotor, solving Eq. (15) with the cost given by Eq. (16) for $N_{C_\mathrm{P}}=1$ and λ[1]=λ[d]. The nonlinear constraints expressed by Eq. (17) enforce the same spanwise
nondimensional circulation distribution of the full-scale blade.
Figure 2 shows the external shapes of the full-scale blade and the three subscale models in terms of chord, relative thickness, twist, and Reynolds number. Clearly, the shape curves for the W and S
models overlap with the full-scale ones, because zooming is used in these two cases, as previously explained.
The three subscale models have the same TSR in region II as the full-scale machine and the correspondingly subscaled rated rotor speeds. The rated wind speeds do not exactly match the subscale ones,
on account of the differences in the C[P]-TSR curves caused by the Reynolds effect.
4.3 Design of the internal structure
The definition of the internal structure has to achieve the following goals: the matching of the full-scale aeroelastic behavior, the integrity of the blade under loading, and the feasibility of the
manufacturing process. In the next two sections, the zooming-down and the redesign approaches are applied to the structure of the three subscale blades.
4.3.1 Limits of the zooming-down approach
The straightforward zooming-down approach can be applied to the internal structure of the W- and S-model blades, as their external geometrical shape has also been defined following this approach. The
resulting structures satisfy all scaling constraints but present some critical challenges.
First, the thicknesses of some of the components are unrealistically low. The blade root of the W model is, for example, only 20mm thick and is therefore unable to accommodate the root-bolted
connections. Furthermore, the scaling of the outer shell skin leads to a laminate thickness of less than one ply. The third web of the S-model blade is also extremely thin (less than 1mm) and very
close to the trailing edge.
Additionally, the scaled structure requires materials characterized by very peculiar mechanical properties. Indeed, as previously shown, the scaling laws require the modulus of elasticity to obey the
relationship E[M]=E[P]n[l] and the material density to be ρ[mM]=ρ[mP]. For example, the outer shell of the W-model blade requires an elasticity modulus of 6.6GPa and a density of 1845kgm^−3, which
are not typical values of conventional materials (see Fig. 3). Finally, nonstructural masses – such as glue, paint, and lightning protection – cannot be exactly zoomed down by geometric scaling and
need to be treated separately.
One may try to relax some of these hurdles by increasing the necessary component thicknesses and choosing materials with mechanical properties that compensate for this increase. For example, a 3-fold
increase of the skin thickness in the W model would be able to accommodate the root-bolted connection and would satisfy manufacturing tolerances. To meet the mass and inertia constraints, a material
should be used that has a lower density, $\rho_{\text{mM}} = \rho_{\text{mP}}/3$, and a lower elasticity modulus, $E_\mathrm{M} = E_\mathrm{P} n_\mathrm{l}/3$. Figure 3 reports Ashby's diagram of Young's modulus vs. density (Cambridge University Engineering Department, 2003). In this plot, the values corresponding to the outer shell skin
materials have been marked with × symbols. A red symbol indicates the full-scale blade, a yellow symbol is used for the W model considering the exact zooming-down approach, and a green symbol
indicates the solution with a 3-fold thickness increase. It should be noted that, although the properties of the scaled models do correspond to existing materials, these are typically not employed
for the manufacturing of blades. Therefore, their actual use for the present application might indeed pose some challenges.
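This trade-off can be sketched as follows (an illustrative helper with hypothetical input values; a thickening factor k must be compensated by dividing both modulus and density by k to preserve the sectional stiffness and mass of the zoomed-down component):

```python
def zoomed_material(E_full, rho_full, n_l, thickness_factor=1.0):
    """Required subscale material properties under zooming-down.
    Exact scaling demands E_M = E_P * n_l and rho_M = rho_P; if a
    component is thickened by k for manufacturability, modulus and
    density must both be divided by k in compensation."""
    k = thickness_factor
    E_model = E_full * n_l / k
    rho_model = rho_full / k
    return E_model, rho_model
```

Plotting the resulting (density, modulus) pairs on an Ashby chart, as done in Fig. 3, then reveals whether any real material family covers the required combination.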
Overall, the zooming-down approach for the structural design is not really straightforward and is significantly more complicated than in the case of the aerodynamic design. An alternative is offered
by a complete redesign of the internal structure, which is illustrated in the next section.
4.3.2 Redesign of the W and S models
An alternative to the zooming-down approach is the redesign of the internal structure. This consists of a typical blade design process, subjected not only to additional constraints that enforce the
desired scaling relationships but, crucially, also to all other conditions that are necessary to make the design viable. For example, here a lower bound to the thickness of all structural components
is set to 1mm, while a minimum thickness of 60mm is assumed at the root to accommodate the bolted connection of the W and S models.
Additionally, one has greater freedom in the choice of materials. For the present applications, the glass-fiber-reinforced plastic (GFRP) composites of the full-scale blade appear to be suitable
choices also for the W model. On the other hand, these materials are too stiff for the S model, due to its smaller geometric scaling. An alternative was found within the family of thermoplastic
materials, which have typical stiffness values between 1 and 3GPa and densities between 900 and 1400kgm^−3 (Brondsted et al., 2005). Although not strictly of interest here, thermoplastics also have
interesting advantages over thermosets, such as reduced cycle times, lower capital costs of tooling and equipment, smaller energy consumption during manufacturing, and enhanced recyclability at the
end of their life (Murray et al., 2018).
During the design phase of the subscale models, more careful attention can also be paid to the distributions of nonstructural masses. Specifically, masses from shell and sandwich cores must be
recomputed for the new scaled structure in order to prevent buckling of the sandwich panels. Additional masses from surface finishing and painting are also recomputed according to the surface of the
external shell. In fact, if a zooming-down strategy is chosen for the design of the external geometry, these masses will scale with the length scale factor. Masses from resin uptake in the outer
shell and shear webs are recomputed for the scaled structure assuming a constant area density. Indeed, this value does not change from the full scale to the subscale, since it depends on the material
and manufacturing process. A different assumption is taken for the masses of bonding plies and adhesive along the shear webs and leading and trailing edge. Since these masses are chordwise dependent,
the linear density of these materials in the subscale size must be corrected by the length scale factor. Finally, the linear density of the lightning protection system is assumed to be constant for
all sizes.
The structural design is formulated as the matching optimization problem expressed by Eq. (18). The cost function given by Eq. (19) considers the sole spanwise matching of the mass distribution,
i.e., it neglects inertia terms in 𝒮[M] and uses w[s]=0. The matching constraints m[s] include the lowest three natural frequencies, and the static deflected shape of the outboard 40% section of the
blade. This static condition was chosen to represent the maximum tip displacement resulting from turbulent simulations in power production for the full-scale machine (design load case (DLC) 1.1; see
IEC (2005)). Finally, the additional design constraints c[s] include stresses, strains, fatigue, and technological constraints in the form of bounds on thickness and thickness rate of change of the structural components.
The structural design for the W and S models is based on a typical thin-walled composite configuration, where the design variables are defined as the spanwise thicknesses of the skin, shear webs,
spar caps, and leading- and trailing-edge reinforcements. Given the smaller size of the scaled blades, one single shear web is used instead of the three used in the full-scale 10MW model. Table 8
describes the mechanical properties of the materials used for these two blades, while Table 9 associates the various structural elements with the materials.
For the S model, the thermoplastic materials polymethyl methacrylate (PMMA) and polyoxymethylene (POM) are chosen because of their lower level of stiffness. The use of polymer materials reduces the
nonstructural masses, as the adhesive is no longer necessary. Due to the reduced fatigue characteristics of these materials, the blade lifetime is limited to 5 years. This is assumed to be acceptable
in the present case, given the research nature of these blades. Constraints on maximum stresses and strains are satisfied with an ample margin for these blades. However, the inclusion of a larger set
of DLCs (including extreme events and parked conditions) might create more challenging situations, which could increase the material strength requirements and possibly lead to
the selection of different materials.
Figure 4 reports the internal structure of the W and S models, as well as the overall mass distributions, including realistic nonstructural masses. The scaled mass distribution follows quite closely
the reference one along the blade span, with the exception of the root because of the additional thickness that must be ensured to accommodate the bolted connection. The blade satisfies the scaling
inertial and elastic constraints within a tolerance of less than 5%.
4.3.3 Redesign of the T model
The very small size of the wind tunnel model blade prevents the use of a typical thin-walled solution. Following Bottasso et al. (2014) and Campagnolo et al. (2014), this scaled blade is not hollow
but presents a full cross section obtained by machining a foamy material. Two unidirectional spar caps provide the required flapwise stiffness distribution. The surface smoothness is obtained by a
very thin layer of skin made of glue. Although Bottasso et al. (2014) and Campagnolo et al. (2014) considered different scaling laws, their blade design configuration was found to be a suitable
choice even in the present gravo-aeroservoelastic scaling exercise. The selection of appropriate materials represents a critical aspect of the problem, and the mechanical properties listed in the
Cambridge University Materials Data Book (Cambridge University Engineering Department, 2003) were used to guide the material selection for the spar caps and core. A rigid polymer foam is chosen as
filler, because of its relatively high level of stiffness and lightness. For the spar caps, thermoplastic polymers are again found to be the most suitable solution even though their
stiffness-to-density ratio is much lower than materials traditionally used for spar caps. Moreover, the use of thermoplastics allows for alternative and simpler manufacturing processes, leading to a
higher flexibility in the spar cap design. From this family of materials, polypropylene is chosen because of its low stiffness modulus. Finally, the external shell is covered by a very thin layer of
the epoxy structural adhesive Scotch Weld AF 32 (3M Adhesives Division, 2000).
The design variables are represented by the spanwise thickness and width of the two spars. The design problem is formulated according to the constrained matching optimization expressed by Eq. (18).
The cost function of Eq. (19) considers the spanwise mass distribution in 𝒮[M] and the flapwise stiffness distribution in 𝒮[K]. The matching constraints m[s] include the lowest three natural
frequencies and the flapwise static extreme tip deflection. Both the cost and the constraints only consider the flapwise characteristics of the blade, because the structural configuration consisting
of a solid core and two spar caps allows for limited control of the edgewise characteristics. As a result, the scaled blade presents a higher level of edgewise stiffness than the full-scale reference.
Figure 5 reports the results of the design optimization. The desired matching of mass and flapwise stiffness is achieved, except at the blade root. Even though the placement of the first flapwise natural frequency with respect to the rotor speed is ensured, the constraint on the lowest edgewise natural frequency could not be exactly satisfied due to the large chord. Small disparities in the mass distribution introduce a difference of about 1% in the blade flapping inertia.
5 Comparison of the scaled and full-scale models

In this section, the behavior of the scaled models is compared to that of the full-scale machine. The main goal here is to assess to what extent the subscale models are capable of successfully mirroring
relevant key characteristics and load trends of the full-scale reference.
The same collective-pitch/torque controller governs all machines. The controller uses a look-up table for torque to operate at rated TSR in region II and a proportional–integral–derivative (PID)
pitch loop to maintain constant rated power in region III. The PID gains used for the scaled models are obtained by transforming the ones of the full-scale machine using the scaling laws, and the
regulation trajectory is adapted to each model to account for differences in the C[P]-TSR curves. Notice that the scaling of gains is a conservative approach: in the case of an exact matching at
scale of all aeroelastic characteristics of the turbines, the use of a scaled controller will also ensure an identical closed-loop response. However, if the scaled models do not exactly represent the
full-scale reference – which is invariably the case in practice – an ad hoc retuned controller (i.e., a controller specifically optimized for the scaled model) will in general have better performance
than the one obtained by the scaling of the gains. The choice of gain scaling instead of retuning was made here to consider a worst-case scenario.
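As an illustrative sketch of the gain scaling, assume a pitch PID acting on the rotor-speed error (rad/s in, pitch rad out) under Froude scaling: each time integration or differentiation of the error rescales the corresponding gain by a power of the timescale factor n_t. This is one plausible dimensional transformation, not necessarily the exact one used in the paper:

```python
import math

def scale_pid_gains(kp, ki, kd, n_l):
    """Dimensional scaling of full-scale pitch-PID gains to model scale
    under Froude similitude (n_t = sqrt(n_l)). Assumes the loop maps a
    rotor-speed error (1/s) to a pitch angle (dimensionless):
    kp carries units of s, ki is dimensionless, kd carries s^2."""
    n_t = math.sqrt(n_l)
    return kp * n_t, ki, kd * n_t ** 2
```

Under this convention the proportional and derivative actions speed up together with the scaled dynamics, which is what preserving the closed-loop response at scale requires.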
5.1 Relevant key indicators
The models are simulated in a power production state at five different wind speeds from cut-in to cut-out. The winds of the scaled simulations are obtained by velocity scaling the turbulent winds
used for the full-scale machine (i.e., the integral space and timescales are both correctly scaled). The matching between the scaled and full-scale turbines is assessed with the help of 10 different
indicators: AEP; maximum flapwise tip displacement (MFTD); maximum thrust at main shaft (ThS); maximum combined blade root moment (CBRM); maximum flapwise bending root moment (FBRM); maximum edgewise
bending root moment (EBRM); and the Weibull-averaged damage equivalent load (DEL) for ThS, CBRM, FBRM, and EBRM.
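A common way to Weibull-average per-wind-speed DELs – sketched here with assumed Weibull parameters and a simple midpoint binning, since the paper does not detail them – combines the bin occurrence probabilities through the Wöhler exponent m:

```python
import numpy as np

def weibull_averaged_del(dels, wind_speeds, m, C=10.0, k=2.0):
    """Combine per-wind-speed damage equivalent loads into a single
    Weibull-averaged DEL. Bin probabilities from a Weibull(C, k) wind
    speed distribution weight the DELs through the Woehler exponent m."""
    v = np.asarray(wind_speeds, dtype=float)
    dels = np.asarray(dels, dtype=float)
    # midpoint bin edges covering the whole wind speed range
    edges = np.concatenate(([0.0], 0.5 * (v[1:] + v[:-1]), [np.inf]))
    cdf = 1.0 - np.exp(-((edges / C) ** k))
    p = np.diff(cdf)                      # probability mass of each bin
    return np.sum(p * dels ** m) ** (1.0 / m)
```

If all per-speed DELs are equal, the weighted average correctly returns that common value, independent of the Weibull parameters.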
5.1.1 Utility-scale models
As previously discussed, the design both of the external shape and of the internal structure may induce differences in the behavior of a scaled model with respect to its full-scale reference. To
better understand the effects of these differences and their origins, three different sets of results are presented in Fig. 6.
The first plot (a) compares the indicators of the full-scale turbine with the upscaled ones of the W and S models. Both the internal structure and the external shape are obtained by zooming down, and
Reynolds effects are accounted for by CFD-computed polars. Although a zoomed-down structure cannot really be a practical solution – as discussed earlier – because of excessively thin structural
elements or the need for peculiar material properties, this solution is shown here because it highlights the sole effects of the Reynolds mismatch. In other words, since this is a purely numerical
study, the thicknesses and mechanical properties were used exactly as produced by scaling, resulting in a nearly exact satisfaction of the matching of all structural characteristics. Therefore, the
differences of the indicators between the full-scale and scaled models shown in this plot can be entirely attributed to Reynolds effects. The full-scale and utility-size models are equipped with
airfoil polars at different Reynolds numbers computed with the CFD code ANSYS Fluent (ANSYS, Inc., 2019).
The second plot (b) compares the indicators for the W and S models featuring a zoomed-down external shape (which neglects Reynolds effects) and a redesigned internal structure. Although Reynolds
effects would, in reality, be present, by neglecting them here – which is again possible because this is a purely numerical study – one can assess from this solution the sole effects of the
structural redesign on the matching of the indicators.
Finally, the third and last plot (c) considers the solution obtained by zooming down the aerodynamic shape, considering Reynolds effects and a redesigned internal structure. As argued earlier, this
is indeed the solution that is practically realizable, and, therefore, these are the more realistic results of the set considered here. Hence, differences between the full-scale and scaled models are
due to mismatches caused by both the Reynolds number and the redesign procedure.
As expected from the size difference, the results shown in the first plot indicate a larger effect of the Reynolds number mismatch for the S model than for the W model. This results in a drop in all indicators because of the decreased airfoil efficiency.
The second plot shows a similar matching for both models. Indeed, most of the key loads are matched within 5% for both the W and the S model. A larger difference between the two models is found for
EBRM and DEL EBRM, which are only poorly matched by the W model, whereas they are quite accurate for the S model. The mismatch is due to a slightly higher sectional mass in the last 20% of the blade
of the W model, as shown in Fig. 4. A significant difference with respect to full scale is also observed for the maximum flapwise tip displacement of both the W and S models. This difference is
caused by a slightly different dynamic behavior induced by mismatches in the flapwise and torsional stiffness distributions. Even though FBRM matches very well for both the W and S model at the root,
these differences lead to a poorer match at sections toward the blade tip, which in the end impacts MFTD.
Overall, both models are capable of matching the key indicators of the full-scale target reasonably well, considering both Reynolds effects and a redesigned structure.
5.1.2 Wind tunnel model
The behavior of the T model is compared with the 10MW baseline in Fig. 7. The additional indicator, maximum edgewise tip deflection (METD), is considered in this case. The polars for the T model are
computed with Xfoil (Drela, 2013).
The comparison shows satisfactory behavior of the wind tunnel model for most key indicators, notwithstanding the very different Reynolds numbers (about 10^7 for the full-scale reference and about
2×10^4 for the T model). As expected, the largest mismatch is found for the maximum edgewise tip displacement. This can be explained by the inability of the structural design variables (limited to the two spar caps) to control the edgewise stiffness.
5.2 Load trends in waked conditions
Scaled models can also be used to capture trends, instead of absolute values. Indeed, the goal of scaled testing is often to understand the trends generated on some metric by, for example, a control
technology or by a particular operating condition or other factors, whereas the exact quantitative assessment of the induced effects must be left to a final full-scale verification.
As an example of the analysis of trends, the scaled models designed here are used to explore changes in loading between unwaked and waked inflow conditions. To this end, the full-scale turbine is
simulated with an average inflow velocity of 7ms^−1, considering a shear exponent of 0.2 and a turbulence intensity of 8%. The wake deficit generated by an upstream 10MW machine is then added to
this inflow, in order to simulate a waked condition. The wake is modeled by the superposition of a turbulent wind grid generated with TurbSim (Jonkman et al., 2009) and the first-order solution of
the deficit of the Larsen model (EWTSII model) (Bottasso et al., 2017). The downstream turbine is located at a longitudinal downstream distance of 4D from the upstream machine, and its lateral
distance from the wake center is varied from −1.25D (right, looking downwind) to 1.25D (left), realizing different degrees of wake–rotor overlap. The scaled models are simulated by velocity-scaling
the full-scale inflows. The key indicators considered are AEP; ThS; FBRM; and DELs for CBRM, FBRM, and EBRM.
Figure 8 reports changes in key indicators at several degrees of wake overlap with respect to unwaked inflow conditions. The full-scale machine presents the largest reduction in AEP and ThS in full
wake overlap. An asymmetrical load trend of the DELs for FBRM, EBRM, and CBRM is visible when the rotor is operating in partial wake. This behavior is mostly due to the rotor uptilt angle, which
introduces an additional velocity component in the rotor plane. In fact, for a clockwise-rotating (when looking downstream) rotor, this extra velocity component increases the in-plane velocity at the
blade sections when the blade is on the right side of the rotor (i.e., during the downstroke; here left and right are defined for an observer looking downstream). Additionally, when a wake impinges
on the right side of the rotor, the out-of-plane velocity component decreases, because of the wake deficit. Both of these effects tend to decrease the angle of attack at the blade sections. On the
other hand, when a wake impinges on the left portion of the rotor, the effect of the decreased out-of-plane component is in part balanced by the also decreased in-plane component. Because of this
different behavior, larger load fluctuations (and hence higher fatigue loads) are observed for right wake impingements than for left ones. A similar effect is caused by the elasticity of the tower:
under the push of the thrust, the tower bends backwards, which in turn tilts the rotor upward, adding to the previously described phenomenon. Other minor effects are also due to the elastic
deformations caused by gravity, which again contribute to breaking the symmetry of the problem.
Overall, the largest scaled models follow the trends very well, with the S model performing slightly better than the W model. Indeed, the W model is better than the S model when looking at
Weibull-averaged quantities (Fig. 6), but the S model presents a slightly superior matching of blade loads at the specific speed at which the load trend study is performed. The trends are also
reasonably captured by the smaller-scale T model, but with significant differences in DEL FBRM. Specifically, there is an overestimation of this quantity around the −0.5D lateral wake center
position. A detailed analysis of the results revealed this behavior to be caused by the blade operating at angles of attack close to the stalling point. This indicates another possible limit of
models with large-scale factors, whose airfoils may have very different stall and post-stall behavior than their full-scale counterparts.
This paper has analyzed the scaling conditions that should be met by a subscale model to match a full-scale reference in terms of its full aeroservoelastic response. The analysis has shown that many
relevant key aspects of the steady and unsteady response of a machine, considered as flexible, can indeed be matched. Part of this analysis can also be used to understand expected changes due to
upscaling, which can be useful in the design of larger rotors. To the authors' knowledge, this is one of the most comprehensive analyses of the problems of scaling wind turbines presented thus far.
Within this framework, this paper has considered two alternative ways of designing a scaled rotor. The first is based on the idea of exactly zooming down the full-scale reference to obtain the
subscale model. An alternative strategy is to completely redesign the rotor, from both an aerodynamic and a structural point of view. This produces a scaled blade that, although possibly very
different from the full-scale one, matches some of its key characteristics as closely as possible.
These two alternative strategies have been tested on the gravo-aeroservoelastic scaling of a conceptual 10MW blade to three different subscale models: two utility-scale ones to be used for the
reblading of small existing turbines and one for equipping a very small model turbine to conduct experiments in the controlled environment of a wind tunnel.
The following conclusions can be drawn from the application of the two strategies to these three different scaling problems.
The simplest strategy to design the external shape of utility-scale blades is the straightforward zooming-down approach, as long as the subscale Reynolds number is sufficiently high. This strategy
benefits from a simple implementation and leads to an acceptable match of the blade aerodynamic performance. However, when the blade aerodynamic performance is compromised by the Reynolds mismatch –
which is the typical case of wind tunnel models – the alternative but more complex strategy of redesigning the aerodynamic shape becomes preferable if not altogether indispensable. Special
low-Reynolds-number airfoils may be used to mitigate the effects caused by the reduced Reynolds regime. However, different behavior at and around stall might lead to different loads when operating at
large angles of attack.
The straightforward zooming down of the blade internal structure is instead typically very difficult for all scaling ratios. In fact, the need for materials of unusual characteristics and the
nonscalability of nonstructural masses unfortunately hinder the applicability of this simple approach. An alternative is found in the structural redesign strategy, which offers more flexibility at
the price of increased complexity. Even here, however, the problem is nontrivial. For example, materials may play a critical role, due to the very flexible nature of some of these scaled blades.
The aeroservoelastic analyses conducted herein have shown that, in general, it is not possible to exactly match all the characteristics of a full-scale machine with a subscale model. However, with
the proper choices, some key indicators are nicely captured. In addition, changes in operating conditions are represented quite well even at the smaller scale. For example, it was shown that changes
in loading from an unwaked to a waked condition are accurately represented by all scaled models, which successfully capture intricate and possibly unexpected couplings with design aspects such as
nacelle uptilt and tower deflection. The good performance of the models in capturing such complex effects opens up a range of applications and use cases. For example, with the right design choices,
scaled models can be employed to better understand rotor–wake interactions or test sophisticated control strategies at the turbine and/or plant levels.
Further improvements in the performance of the subscale models are certainly possible. Indeed, while some of the limitations result from the choice of quantities to be matched, others can be overcome
by technological advances. For instance, improvements in measurement technology can relax the requirements on the scaling of time, allowing for a better match of other quantities. Additionally,
advances in material and manufacturing may ease the application of unconventional materials; relax sizing constraints; and lead to more accurate, simpler, faster-to-develop, and cheaper models.
This work has exclusively focused on the wind turbine itself, and the effects of scaling have been quantified for the aerodynamic performance and loading of the rotor. The recent study of Wang et al.
(2020) expands this analysis by considering the effects of scaling on wake behavior. Even in that case the conclusion is that properly scaled models can produce very realistic wakes.
Further work should focus on expanding the scope of the scaling analysis, introducing the effect of hydrodynamics. Indeed, as floating wind energy is expected to significantly grow in the coming
years, it is becoming increasingly important to better understand which aspects of the aero-hydroservoelastic response of these machines can be matched and how to best design subscale models. This
is, however, only part of the problem. Research efforts are also necessary to better understand how to replicate the inflow conditions that full-scale machines face in various types of atmospheric
and terrain conditions. This is a challenging task, since it requires a deep understanding of atmospheric flows, their interaction with the terrain orography and the vegetation, and technology to
replicate these flows at scale.
It is the hope of the authors that the results shown in this paper will increase the confidence in scaled testing, in the belief that scaled models have a significant role to play in the advancement
of wind energy science.
a Axial induction factor
a[s] Speed of sound
c Chord length
d Out-of-plane blade section flapping displacement
f Characteristic frequency
g Acceleration of gravity
l Characteristic length
n[l] Geometric scaling factor, i.e., $l_{\mathrm{M}}/l_{\mathrm{P}}$
n[t] Time scaling factor, i.e., $t_{\mathrm{M}}/t_{\mathrm{P}}$
n[Ω] Angular velocity scaling factor, i.e., $\Omega_{\mathrm{M}}/\Omega_{\mathrm{P}}$
n[v] Wind speed scaling factor, i.e., $V_{\mathrm{M}}/V_{\mathrm{P}}$
p Vector of design parameters
r Spanwise coordinate
s Tip deflection
t Time
u Characteristic speed
A Rotor disk area
A[b] Blade planform area
B Number of blades
C[D] Drag coefficient
C[L] Lift coefficient
C[L,α] Slope of the lift curve
C[P] Power coefficient
C[T] Thrust coefficient
E Young's modulus or airfoil efficiency, i.e., $C_{\mathrm{L}}/C_{\mathrm{D}}$
EJ Bending stiffness
Fr Froude number
I Rotor polar moment of inertia
I[b] Blade flapping inertia
J Cost function
K Stiffness
Lo Lock number
M Mass
Ma Mach number
P Aerodynamic power
Q Torque
R Rotor radius
Re Reynolds number
Ro Rossby number
St Strouhal number
T Thrust force
U[P] Flow velocity component perpendicular to the rotor disk plane
U[T] Flow velocity tangent to the rotor disk plane
V Wind speed
W Flow speed relative to a blade section
β Blade pitch
ϵ Strain
θ Sectional pitch angle
κ Reduced frequency
λ Tip-speed ratio
λ[d] Design TSR
μ Fluid dynamic viscosity
ν Poisson coefficient
ρ Air density
ρ[m] Material density
ρ[P] Power density
σ Stress
τ Nondimensional time
ω Natural frequency
Γ Circulation
Δθ Total blade twist from root to tip
Σ Rotor solidity
Φ Rotor uptilt angle
Ξ Rotor cone angle
Ω Rotor angular velocity
(⋅)[a] Pertaining to the aerodynamic design
(⋅)[s] Pertaining to the structural design
(⋅)[M] Scaled system
(⋅)[P] Full-scale physical system
$\dot{(\cdot)}$ Derivative with respect to time, i.e., $\mathrm{d}(\cdot)/\mathrm{d}t$
$(\cdot)'$ Derivative with respect to nondimensional time, i.e., $\mathrm{d}(\cdot)/\mathrm{d}\tau$
$\tilde{(\cdot)}$ Nondimensional quantity
$\hat{(\cdot)}$ To-be-matched scaled quantity
AEP Annual energy production
BEM Blade element momentum
Bx Biaxial
CBRM Combined bending root moment
CFD Computational fluid dynamics
CFRP Carbon-fiber-reinforced plastic
DEL Damage equivalent load
DLC Design load case
EBRM Edgewise bending root moment
FBRM Flapwise bending root moment
GFRP Glass-fiber-reinforced plastic
LD Low density
LE Leading edge
MFTD Maximum flapwise tip displacement
METD Maximum edgewise tip displacement
PID Proportional integral derivative
PMMA Polymethyl methacrylate
POM Polyoxymethylene
PP Polypropylene
SQP Sequential quadratic programming
ThS Thrust at main shaft
TSR Tip-speed ratio
TE Trailing edge
Tx Triaxial
Ux Uniaxial
Code and data availability
The data used for the present analysis can be obtained by contacting the authors.
HC modified the Cp-Max code to support the scaled matching optimization, designed the subscale models, performed the simulations, and analyzed the results; CLB devised the original idea of this
research, performed the theoretical scaling analysis, formulated the matching optimization problem, and supervised the work; and PB collaborated in the modification of the software, the design of the
subscale models, and the conduction of the numerical simulations. HC and CLB wrote the manuscript. All authors provided important input to this research work through discussions and feedback and by
improving the manuscript.
The authors declare that they have no conflict of interest.
The authors would like to thank Chengyu Wang and Daniel J. Barreiro of the Technical University of Munich for the computation of the airfoil polars using CFD for multiple Reynolds numbers.
Additionally, credit goes to Eric Loth of the University of Virginia for having introduced the authors to the zooming approach and to Filippo Campagnolo of the Technical University of Munich for
fruitful discussions and support. This work was authored in part by the National Renewable Energy Laboratory, operated by the Alliance for Sustainable Energy, LLC, for the US Department of Energy
(DOE) under contract no. DE-AC36-08GO28308. Funding was provided by the US Department of Energy Office of Energy Efficiency and Renewable Energy Wind Energy Technologies Office. The views expressed
in the article do not necessarily represent the views of the DOE or the US Government. The US Government retains and the publisher, by accepting the article for publication, acknowledges that the
US Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for US Government purposes.
This research has been supported by the BMWi through the WINSENT project (grant no. 0324129F) and the DOE (grant no. DE-AC36-08GO28308).
This paper was edited by Katherine Dykes and reviewed by two anonymous referees.
Anderson, B., Branlard, E., Vijayakumar, G., and Johnson, N.: Investigation of the nacelle blockage effect for downwind wind turbines, J. Phys. Conf. Ser., 1618, 062062, https://doi.org/10.1088/
1742-6596/1618/6/062062, 2020.a
ANSYS Fluent: https://www.ansys.com/products/fluids/ansys-fluent (last access: 18 December 2019), 2019.a, b
Armitt, J. and Counihan, J.: The simulation of the atmospheric boundary layer in a wind tunnel, J. Atmos. Environ., 2, 49–61, https://doi.org/10.1016/0004-6981(68)90019-X, 1968.a
Azcona, J., Lemmer, F., Matha, D., Amann, F., Bottasso, C. L., Montinari, P., Chassapoyannis, P., Diakakis, K., Spyros, V., Pereira, R., Bredmose, H., Mikkelsen, R., Laugesen, R., and Hansen, A. M.:
INNWIND. EU Deliverable D4.24: Results of wave tank tests, http://www.innwind.eu/publications/deliverable-reports (last access: 18 December 2019), 2016.a, b
Bak, C., Zahle, F., Bitsche, R., Kim, T., Yde, A., Natarajan, A., and Hansen, M. H.: INNWIND. EU Deliverable D1.21: Reference Wind Turbine Report, http://www.innwind.eu/publications/
deliverable-reports (last access: 18 December 2019), 2013.a
Barlow, J. B., Rae, W. H., and Pope, A.: Low-speed wind tunnel testing, 3rd Edn., Wiley, Hoboken, New Jersey, USA, 1999.a
Berg, J., Bryant, J., LeBlanc, B., Maniaci, D., Naughton, B., Paquette, J., Resor, B., and White, J.: Scaled Wind Farm Technology Facility Overview, in: 32nd ASME Wind Energy Symposium, AIAA SchiTech
Forum, 13–17 January 2014, National Harbor, Maryland, https://doi.org/10.2514/6.2014-1088, 2014.a, b
Bisplinghoff, R. L. and Ashley, H., Principles of Aeroelasticity, Dover Publications, Mineola, New York, USA, 2002.a
Bortolotti, P., Bottasso, C. L., and Croce, A.: Combined preliminary-detailed design of wind turbines, Wind Energ. Sci., 1, 71–88, https://doi.org/10.5194/wes-1-71-2016, 2016.a, b
Bottasso, C. L. and Borri, M.: Integrating finite rotations, Comput. Method. Appl. Mech., 164, 307–331, https://doi.org/10.1016/S0045-7825(98)00031-0, 1998.a
Bottasso, C. L. and Campagnolo, F.: Wind tunnel testing of wind turbines and farms, Handbook of Wind Energy Aerodynamics, edited by: Stoevesandt, B., Schepers, G., Fuglsang, P., Sun, Y., Springer
Nature, Cham, https://doi.org/10.1007/978-3-030-05455-7_54-1, 2021.a, b, c
Bottasso, C. L., Campagnolo, F., and Croce, A.: Multi-disciplinary constrained optimization of wind turbines, Multibody Syst. Dyn., 27, 21–53, https://doi.org/10.1007/s11044-011-9271-x, 2012.a
Bottasso, C. L., Campagnolo, F., and Petrovic, V.: Wind tunnel testing of scaled wind turbine models: Beyond aerodynamics, J. Wind Eng. Ind. Aerodyn., 127, 11–28, https://doi.org/10.1016/
j.jweia.2014.01.009, 2014.a, b, c, d, e
Bottasso, C. L., Bortolotti, P., Croce, A., and Gualdoni, F.: Integrated aero-structural optimization of wind turbines, Multibody Syst. Dyn., 4, 317–344, https://doi.org/10.1007/s11044-015-9488-1,
2016.a, b
Bottasso, C. L., Cacciola, S., and Schreiber, J.: Local wind speed estimation, with application to wake impingement detection, Renew. Energ., 116, 155-168, https://doi.org/10.1016/
j.renene.2017.09.044, 2017.a
Brondsted, P., Lilholt, H., and Lystrup, A.: Composite Materials For Wind Power Turbine Blades, Annu. Rev. Mater. Res., 35, 505–538, https://doi.org/10.1146/annurev.matsci.35.100303.110641, 2005.a
Buckingham, E.: On Physically Similar Systems, Illustrations of the Use of Dimensional Equations, Phys. Rev., 4, 345–376, https://doi.org/10.1103/PhysRev.4.345, 1914.a
Burton, T., Jenkins, N., Sharpe, D., and Bossanyi, E.: Wind energy handbook, John Wiley & Sons, West Sussex, UK, 2001.a
Busan, R.: Flutter Model Technology, WL-TR-97-3074, Wright-Patterson Air Force Base, OH, USA, 1998.a, b, c
Cambridge University Engineering Department: Materials Data Book, http://www-mdp.eng.cam.ac.uk/web/library/enginfo/cueddatabooks/materials.pdf (last access: 18 December 2019), 2003.a, b, c
Campagnolo, F., Bottasso, C. L., and Bettini, P.: Design, manufacturing and characterization of aero-elastically scaled wind turbine blades for testing active and passive load alleviation techniques
within a ABL wind tunnel, J. Phys. Conf. Ser., 524, 012061, https://doi.org/10.1088/1742-6596/524/1/012061, 2014.a, b, c
Chamorro, L. P., Arndt, R. E. A., and Sotiropoulos, F.: Reynolds number dependence of turbulence statistics in the wake of wind turbines, Wind Energy, 15, 733–742, https://doi.org/10.1002/we.501,
Counihan, J.: An improved method of simulating an atmospheric boundary layer in a wind tunnel, Atmos. Environ., s3, 197–200, https://doi.org/10.1016/0004-6981(69)90008-0, 1969.a
Drela, M.: Xfoil 6.99 Documentation, http://web.mit.edu/drela/Public/web/xfoil/ (last access: 18 December 2019), 2017.a
Dowler, J. L. and Schmitz, S.: A solution-based stall delay model for horizontal-axis wind turbines, Wind Energy, 18, 1793-1813, https://doi.org/10.1002/we.1791, 2015.a
Eggleston, D. M. and Stoddard, F. S.: Wind Turbine Engineering Design, Van Nostrand Reinhold, New York, NY, USA, 1987.a
Frederik, J. A., Weber, R., Cacciola, S., Campagnolo, F., Croce, A., Bottasso, C., and van Wingerden, J.-W.: Periodic dynamic induction control of wind farms: proving the potential in simulations and
wind tunnel experiments, Wind Energ. Sci., 5, 245–257, https://doi.org/10.5194/wes-5-245-2020, 2020.a
General Electric: GE Renewable Energy unveils the first Haliade-X 12MW, the world's most powerful offshore wind turbine, Press release, available at: https://www.ge.com/news/press-releases/
ge-renewable-energy-unveils-first-haliade-x-12-mw-worlds- (last access: 26 April 2021), 2019.a
Giavotto, V., Borri, M., Mantegazza, P. and Ghiringhelli, G.: Anisotropic beam theory and applications, Comput. Struct., 16, 403–413, https://doi.org/10.1016/0045-7949(83)90179-7, 1983.a, b
Hansen, M. H., Gaunaa, M., and Madsen, H. A.: A Beddoes-Leishman type dynamic stall model in state-space and indicial formulations, Technical University of Denmark, Riso, Denmark, https://
backend.orbit.dtu.dk/ws/portalfiles/portal/7711084/ris_r_1354.pdf (last access: 18 December 2019), 2004.a
Hideharu, M.: Realization of a large-scale turbulence field in a small wind tunnel, Fluid Dyn. Res., 8, 1–4, https://doi.org/10.1016/0169-5983(91)90030-M, 1991.a
International Electrotechnical Commission: International Electrotechnical Commission, IEC 61400-1 Edn. 3: Wind turbines – Part 1: Design requirements, IEC, Geneva, Switzerland, 2005.a
International Renewable Energy Agency: Future of wind: Deployment, investment, technology, grid integration and socio-economic aspects (A Global Energy Transformation paper), Abu Dhabi, 2019.a
Jiménez, Á., Crespo, A., and Migoya E.: Application of a LES technique to characterize the wake deflection of a wind turbine in yaw, Wind Energy, 13, 559–572, https://doi.org/10.1002/we.380, 2010.a
Jones, R. M.: Buckling of Bars, Plates, and Shells, Bull Ridge Publishing, Virginia, 2006.a
Jonkman, J.: TurbSim User's Guide, NREL Report TP-500-36970, NREL, Golden, CO, USA, https://doi.org/10.2172/15020326, 2009.a, b
Kress, C., Chokani, N., and Abhari, R. S.: Downwind wind turbine yaw stability and performance, Renew. Energ., 83, 1157–1165, https://doi.org/10.1016/j.renene.2015.05.040, 2015.a
Loth, E., Kaminski, M., Qin, C., Fingersh, L. J., and Griffith, D. T.: Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines, in: 35th AIAA Applied Aerodynamics Conference, AIAA AVIATION Forum,
Denver, CO, https://doi.org/10.2514/6.2017-4215, 2017.a, b, c
McAuliffe, B., Larose, G.: Reynolds-number and surface-modeling sensitivities for experimental simulation of flow over complex topography, J. Wind Eng. Ind. Aerod., 104–106, 603–613, https://doi.org/
10.1016/j.jweia.2012.03.016, 2012.a
Manwell, J. F., McGowan, J. G., and Rogers, A. L.: Wind energy explained: theory, design and application, Second Edition, John Wiley & Sons Publication, West Sussex, United Kingdom, 2009.a, b, c
Murray, R. E., Jenne, S., Snowberg, D., Berry, D., and Cousins, D.: Techno-Economic Analysis of a Megawatt-Scale Thermoplastic Resin Wind Turbine Blade, Renew. Energ., 131, 111–119, https://doi.org/
10.1016/j.renene.2018.07.032, 2018.a
Mydlarski, L.: A turbulent quarter century of active grids: from Makita (1991) to the present, Fluid Dyn. Res., 49, 061401, https://doi.org/10.1088/1873-7005/aa7786, 2017.a
Pitt, D. M. and Peters, D. A.: Theoretical prediction of dynamic-inflow derivatives, Vertica, 5, 21–34, 1981.a
Resor, B. R. and Maniaci, D. C.: Definition of the National Rotor Testbed: An Aeroelastically Relevant Research-Scale Wind Turbine Rotor, in: 32nd ASME Wind Energy Symposium, AIAA SciTech Forum,
13–17 January 2014, National Harbor, MD, https://doi.org/10.2514/6.2014-0357, 2014.a, b
Ricciardi A. P., Canfield, R., Patil, M. J., and Lindsley, N.: Nonlinear aeroelastic scaled-model design, J. Aircraft, 53, 20–32, https://doi.org/10.2514/1.C033171, 2016.a, b
Schmitz, S.: Aerodynamics of Wind Turbines, A Physical Basis for Analysis and Design, John Wiley & Sons Ltd, Hoboken, NJ, USA, 2020.a
Selig, M., Guglielmo, J., Broeren, A., and Giguère, P.: Summary of Low-Speed Airfoil Data, SoarTech Publications, Virginia, 1995.a, b
Siemens Gamesa: Powered by change: Siemens Gamesa launches 14MW offshore direct drive turbine with 222-meter rotor, Press release, available at: https://www.siemensgamesa.com/newsroom/2020/05/
200519-siemens-gamesa-turbine-14-222-dd (last access: 26 April 2021), 2020.a
Sieros, G., Chaviaropoulos, P., Sorensen, J. D., Bulder, B. H., and Jamieson, P.: Upscaling wind turbines: theoretical and practical aspects and their impact on the cost of energy, Wind Energy, 15,
3–17, https://doi.org/10.1002/we.527, 2012.a
Simms, D., Schreck, S., Hand, M., and Fingersh, L. J.: NREL Unsteady Aerodynamics Experiment in the NASA-Ames Wind Tunnel: A Comparison of Predictions to Measurements, available at: http://
citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.452.974&rep=rep1&type=pdf (last access: 26 April 2021), 2001.a
Snel, H., Schepers, J. G., and Siccama, N. B.: Mexico Project: The Database and Results of Data Processing and Interpretation, in: 47th AIAA Aerospace Sciences Meeting Including The New Horizons
Forum and Aerospace Exposition, Orlando, FL, https://doi.org/10.2514/6.2009-1217, 2009.a
3M Adhesives Division: Scotch-Weld Structural Adhesive Film AF 32, Technical Data Issue No. 3, available at: https://multimedia.3m.com/mws/media/241415O/
3mtm-scotch-weldtm-structural-adhesive-film-af-32.pdf (last access: 18 December 2019), 2000.a
Vermeer, L. J., Sorensen, J. N., and Crespo, A.: Wind Turbine Wake Aerodynamic, Prog. Aerosp. Sci., 39, 467–510, https://doi.org/10.1016/S0376-0421(03)00078-2, 2003.a, b
Vestas General Specification: Vestas V27–225kW, 50Hz Windturbine with Tubular/Lattice Tower, Version 1.2.0.24, 1994. a
Wan, Z. and Cesnik, C. E. S.: Geometrically nonlinear aeroelastic scaling for very flexible aircraft, AIAA J., 52, 2251–2260, https://doi.org/10.2514/1.J052855, 2014.a
Wang, C., Campagnolo, F., Canet, H., Barreiro, D. J., and Bottasso, C. L.: How realistic are the wakes of scaled wind turbine models?, Wind Energ. Sci. Discuss. [preprint], https://doi.org/10.5194/
wes-2020-115, in review, 2020.a, b, c, d
ZSW – Zentrum für Solarenergie- und Wasserstoff-Forschung Baden-Württemberg: New WindForS project: Wind Energy Research in the Swabian Alps, available at: https://www.zsw-bw.de/en/newsroom/news/
news-detail/news/detail/News/new-windfors-project-wind-energy-research-in-the-swabian-alps.html (last access: 18 December 2019), 2016.a, b
ZSW – Zentrum für Solarenergie- und Wasserstoff-Forschung Baden-Württemberg: ZSW and S&G Engineering Join Forces to Set Up Wind Power Field-Test Site, available at: https://www.zsw-bw.de/en/newsroom/
news/news-detail/news/detail/News/zsw-and-sg-engineering-join-forces-to-set-up-wind-power-field-test-site.html (last access: 18 December 2019), 2017.a
The x coordinate of the point on the curve xy = (2 + x)², the normal at which cuts off numerically equal intercepts on the coordinate axes, is
Solution: On the curve xy = (2 + x)², y = (2 + x)²/x = 4/x + 4 + x, so y' = 1 − 4/x² = (x² − 4)/x². The slope of the normal at a point is −1/y' = x²/(4 − x²). Now, a line has numerically equal intercepts on the axes iff its slope is ±1. Setting x²/(4 − x²) = 1 gives x² = 2, i.e. x = ±√2; setting it to −1 gives −4 = 0, which is impossible. Hence the required x coordinate is ±√2.
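The answer x = ±√2 can also be sanity-checked numerically. The sketch below is not part of the original solution (the function names are illustrative); it verifies that the slope of the normal equals 1 at x = ±√2, so the intercepts there are numerically equal:

```python
def f(x):
    # y as a function of x on the curve x*y = (2 + x)**2
    return (2 + x) ** 2 / x

def normal_slope(x, h=1e-6):
    # slope of the normal = -1 / y', with y' from a central difference
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return -1.0 / fprime

for x0 in (2 ** 0.5, -(2 ** 0.5)):
    # both points give a normal slope of +1 (to 4 decimal places)
    print(f"x = {x0:+.4f}: normal slope = {normal_slope(x0):+.4f}")
```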
Topic: Application of Derivatives
Subject: Mathematics
Class: Class 12
Updated on: Jun 27, 2023
Game Helper for Teen Patti, Rummy, Fruit Party and more
I. Rummy
1.1 Game Introduction
Originally from Israel, Rummy is also known as Israeli mahjong and is generally played by 2–6 players, each holding a hand of 7–13 tiles. Players combine the tiles in their hands according to the rules; the game ends when one player has combined all of their tiles, after which scores and winnings are calculated from the tiles remaining in the other players' hands.
1.2 Basic game settings
1. Number of players in the game: 2-5 players
2. Number of cards in hand: 13 cards
3. Number of cards used: two fixed decks, excluding the small jokers (2 × 53 = 106 cards in total)
1.3 Basic rules of the game
The goal of Rummy is to arrange all 13 cards into valid sequences/sets and be the first player to complete the [Rummy Show]. To win, a player's hand must satisfy the following requirements.
1. Sequences and sets
1. A run of at least 3 consecutive cards of the same suit is called a [sequence].
2. A sequence that contains no Joker cards is a [pure sequence].
3. A sequence that contains Joker cards is an [impure sequence].
4. A combination of 3 or 4 cards of the same rank but different suits is called a [set].
5. A [set] only counts as valid once the player has a second life.
2. 1st/2nd life
1. The first [pure sequence] a player forms is called the [1st life].
2. The sequence formed after the [1st life] is called the [2nd life].
3. The [2nd life] may be either a [pure sequence] or an [impure sequence].
3. Fixed/random wild cards
1. Wild cards can be used in place of other cards when forming a sequence; there are two types of wild card in Rummy.
2. All printed Joker cards are [fixed] wild cards.
3. Cards with the same rank as the card turned face-up from the deck are [random] wild cards.
4. Scoring
1. The score is the total of the cards that are not part of a valid sequence/set; the lower the score, the better.
2. J, Q, K, and A each count as 10 points.
3. Number cards count their face value.
4. Joker cards count as 0 points.
5. Once the player has a 1st life, wild cards also count as 0 points.
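The Rummy scoring rules above can be sketched in code. This is an illustrative helper, not the game's actual implementation; the card encoding and the `wild_ranks` parameter are assumptions:

```python
FACE_CARDS = {"J": 10, "Q": 10, "K": 10, "A": 10}

def card_points(rank, wild_ranks=frozenset()):
    """Penalty points for one ungrouped card.

    rank is "A", "2".."10", "J", "Q", "K", or "JOKER" (a printed Joker).
    wild_ranks holds the rank of the random wild card; once the player
    has a 1st life, those cards also count as 0 points.
    """
    if rank == "JOKER" or rank in wild_ranks:
        return 0
    if rank in FACE_CARDS:
        return FACE_CARDS[rank]
    return int(rank)  # number cards count their face value

def hand_penalty(ungrouped, wild_ranks=frozenset()):
    """Total score of the cards not in any valid sequence/set (lower is better)."""
    return sum(card_points(r, wild_ranks) for r in ungrouped)

# Q (10) + 7 (7) + wild 5 (0) = 17 penalty points
print(hand_penalty(["Q", "7", "5"], wild_ranks={"5"}))  # 17
```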
1.4 Operating Instructions
1. 【Sort】: arrange and sort the hand.
2. 【Add】: add the selected card to an existing sequence.
3. 【Declare】: declare the hand.
1. This button is shown only when exactly one card is selected. Clicking it submits the hand to be judged, with the selected card as the discard (if the hand has already been played, no card is discarded). The button is hidden once the final hands are displayed.
2. If the declaration conditions are not met, clicking Declare shows the prompt "Does not meet the declared conditions" in the middle of the screen; the prompt disappears after two seconds and the player's turn continues.
3. Once the Rummy Show condition is met, or another player has already clicked Declare, you can click the button to enter the declaration phase after arranging your hand. When all players have declared, settlement begins: the chips are animated first, then the settlement box pops up.
4. 【Discard】: discard a card.
II. Teen-patti
2.1 Game Introduction
Teen Patti is a popular Indian multiplayer card game played with 3-card hands and multiple betting rounds, similar to the Chinese bluffing game Zha Jinhua.
2.2 Basic game settings
1. Number of players in the game: 2-5
2. Number of cards in hand: 3 cards
3. Number of cards used: two fixed decks, excluding the small Jokers, for a total of 106 cards
2.3 Basic rules of the game
Each player is dealt three cards; hands are compared by hand type, and the higher hand wins.
2.4 Card Type
• Single cards rank from high to low: A, K, Q, J, 10, 9, 8, 7, 6, 5, 4, 3, 2
• Hand ranking: Three of a Kind > Straight Flush > Straight > Flush > Pair > High Card
Details are shown below.
1. Three of a Kind
The card power from highest to lowest is shown below.
2. Straight Flush
The strength of the cards from highest to lowest is shown below.
3. Straight
The strength of the cards from highest to lowest is shown below.
4. Flush
The card power from high to low is shown below.
5. Pairs
The card power from high to low is shown below.
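The hand ranking in section 2.4 can be sketched as a small classifier. This is an illustrative sketch, not the game's actual implementation; the function name and the assumption that A-2-3 counts as a sequence are mine.

```python
# Hypothetical sketch of Teen Patti hand classification, following the
# ranking above: Three of a Kind > Straight Flush > Straight > Flush > Pair > High Card.
RANKS = {r: i for i, r in enumerate(
    ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"], start=2)}

def classify(cards):
    """cards: list of three (rank, suit) tuples, e.g. [("A", "s"), ...].
    Returns a category number; a higher category beats a lower one."""
    vals = sorted(RANKS[r] for r, _ in cards)
    suited = len({s for _, s in cards}) == 1
    # A-2-3 is treated as a sequence here (an assumption; house rules vary)
    straight = (vals[2] - vals[0] == 2 and len(set(vals)) == 3) or vals == [2, 3, 14]
    if len(set(vals)) == 1:
        return 5  # Three of a kind
    if straight and suited:
        return 4  # Straight flush (pure sequence)
    if straight:
        return 3  # Straight (sequence)
    if suited:
        return 2  # Flush
    if len(set(vals)) == 2:
        return 1  # Pair
    return 0      # High card
```

For example, a trail of aces outranks a king-high straight flush: `classify([("A","s"),("A","h"),("A","d")]) > classify([("K","s"),("Q","s"),("J","s")])`.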
2.5 Operating Instructions
⑤ 【PACK】: Fold
⑥ 【SIDE SHOW】: Compare hands privately with the previous betting player
⑦ 【SEE】: Look at your hand
⑧ Reduce the bet amount
⑨ 【BLIND】: Bet the amount shown on the button without seeing your hand
⑩ Increase the bet amount
⑪ 【CHAAL】: Bet the amount shown on the button after seeing your hand
III. Texas Hold’em
3.1 Game Introduction
Texas Hold’em is a poker game that originated in Texas, USA, and is now the most popular poker game in the world. Players form the best five-card combination from their two hole cards and the five community cards; the player with the best hand wins.
3.2 Basic game settings
1. Number of players: 2-6 players
2. Number of cards used: one fixed deck, excluding the Jokers, for a total of 52 cards
3.3 Card Type
• Card Type
• Single cards rank from high to low: A, K, Q, J, 10, 9, 8, 7, 6, 5, 4, 3, 2
• Hand ranking: Royal Flush > Straight Flush > Four of a Kind > Full House > Flush > Straight > Three of a Kind > Two Pair > Pair > High Card
• Hands of the same type are compared according to the following rules.
1. Royal Flush
a) Tie
2. Four of a Kind
a) Compare the rank of the four matching cards; the higher rank wins.
b) If the ranks are equal, compare the kickers; the higher kicker wins. If the kickers are also equal, it is a tie.
3. Full House
a) Compare the rank of the three matching cards; the higher rank wins.
b) If the three matching cards are equal in rank, compare the pairs; the higher pair wins.
c) If the pairs are also equal in rank, it is a tie.
4. Flush
a) Compare the highest card in each flush; the higher card wins.
b) If the highest cards are equal, compare the next-highest cards in turn until a winner is found.
c) If all 5 cards are equal in rank, it is a tie.
5. Straight
a) Compare the highest card of each straight; the higher card wins. If equal, it is a tie.
b) A-2-3-4-5 is the lowest straight.
6. Three of a Kind
a) Compare the rank of the three matching cards; the higher rank wins.
b) If the three matching cards are equal in rank, compare the kickers in turn; the higher kicker wins.
c) If the kickers are also equal, it is a tie.
7. Two Pair
a) Compare the higher pairs; the higher pair wins.
b) If the higher pairs are equal in rank, compare the lower pairs; the higher pair wins.
c) If both pairs are equal in rank, compare the kickers; the higher kicker wins.
d) If all are equal, it is a tie.
8. Pair
a) First compare the rank of the pairs; the higher pair wins.
b) If the pairs are equal in rank, compare the kickers in turn; the higher kicker wins.
c) If the kickers are also equal, it is a tie.
9. High Card
a) Compare the highest cards; the higher card wins.
b) If the highest cards are equal, compare the second-highest cards; the higher card wins.
c) If still equal, keep comparing in descending order down to the last card; the higher card wins.
d) If all 5 cards are equal in rank, it is a tie.
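The tie-breaking rules above (compare the primary ranks first, then kickers in descending order) reduce neatly to tuple comparison. This is a hedged sketch of that idea, not the game's actual code; `tiebreak_key` is a name of my choosing, and it assumes both hands have already been classified as the same type.

```python
# Sketch: encode one already-classified 5-card hand as a tuple so that
# Python's built-in tuple comparison applies the kicker rules in section 3.3.
from collections import Counter

def tiebreak_key(ranks):
    """ranks: five card values (2..14, ace high) of one hand.
    Sorts ranks first by how many times they appear (pairs/trips first),
    then by value, descending."""
    counts = Counter(ranks)
    return tuple(sorted(ranks, key=lambda r: (counts[r], r), reverse=True))

# Two pair example: KK 99 5 beats KK 88 A, because the second pair is
# compared before the kicker is ever consulted.
assert tiebreak_key([13, 13, 9, 9, 5]) > tiebreak_key([13, 13, 8, 8, 14])
```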
3.4 Operating Instructions
• 【FOLD】: Discard your hand
• 【CALL】: Match the current bet
• 【RAISE】: Increase the bet
• 【CHECK】: Pass without betting
• 【CONFIRM】: Confirm
• Swipe up or down to change the bet amount
• 【3X BB】: Bet three times the big blind
• 【ALL IN】: Bet all your chips
IV. Fruit Party
4.1 Introduction to the game
Fruit Party is one of the world’s most popular slot (fruit machine) games, found in more overseas casinos than any other, and loved by players for its easy-to-understand gameplay and frequent small wins.
4.2 Basic rules of the game
There are 25 winning lines in the game; 3 or more consecutive identical symbols, counted from the first column on the left, win the corresponding multiplier. The [SCATTER] symbols do not need to be on a payline.
4.3 Odds
When the number of consecutive symbols on a line reaches the required count, the reward is (bet × sum of multipliers).
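The payout rule above can be written out directly. This is an illustrative sketch; the line multiplier values in the example are hypothetical, not taken from the game's actual paytable.

```python
# Sketch of the payout rule: reward = bet * (sum of the multipliers of all
# winning lines in the spin).
def payout(bet, line_multipliers):
    return bet * sum(line_multipliers)

# e.g. a 10-credit bet hitting three lines worth 0.5x, 1x and 2.5x
assert payout(10, [0.5, 1.0, 2.5]) == 40.0
```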
4.4 Payout Line Winning Instructions
There are 25 payout lines in the game, as shown in the following chart.
All winning symbols except [SCATTER] need to be counted from the leftmost reel and must be on consecutive pay lines.
4.5 Special rule description
1) When 3, 4 or 5 [SCATTER] symbols appear, you get 5, 10 or 15 free games respectively
2) [WILD] can replace all symbols except [SCATTER].
4.6 Operating Instructions
① Increase or decrease the betting amount
② Switch to the maximum bet amount
③ Start the game
④ Enter the card-flip mini-game
4.7 Card-flip mini-game
1. After winning a prize, you can wager your winnings in the card-flip mini-game
2. Bet on black/red; a correct guess pays a 2x bonus
3. Bet on the suit; a correct guess pays a 4x bonus
4. After a successful guess, you can continue to wager the winnings you have received.
5. Click [Score] to leave the game
• Place your bet on red/black
• Place a bet on the suit
• Collect and leave
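The mini-game's maths, as described above, is a double-or-nothing ladder: 2x for a correct color, 4x for a correct suit, with winnings re-wagerable. A small sketch (function and parameter names are mine, not the game's):

```python
# Sketch of the flip mini-game payouts: a correct red/black guess pays 2x,
# a correct suit guess pays 4x, a wrong guess loses the stake.
def flip_result(stake, guess_kind, correct):
    mult = {"color": 2, "suit": 4}[guess_kind]
    return stake * mult if correct else 0

# Win on red/black, then re-wager the winnings on a suit guess:
after_color = flip_result(100, "color", correct=True)       # 200
after_suit = flip_result(after_color, "suit", correct=True)  # 800
assert after_suit == 800
```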
V. Texas Cowboy
5.1 Game Introduction
The game has two sides, Cowboy and Bull. Each side is dealt 2 concealed cards, and the system then deals 5 community cards (the first of which is face up). Players profit by betting on the winner or loser, on either side’s initial two cards, or on the winning hand type.
5.2 Basic game settings
1. Number of people in the game: no limit
2. Number of cards used: one fixed deck, excluding the Jokers, 52 cards in total.
3. Field: No field limit, all players play in the same room
4. Betting areas
• Players can bet on areas including Cowboy win, Bull win, Draw, and the special hand areas
• Odds for both the Cowboy win and Bull win zones are 1:1.9; a Draw pays 1:20
• The special hand areas pay different odds for different hands.
5.3 Card Type
• Card Type
• Single cards rank from high to low: A, K, Q, J, 10, 9, 8, 7, 6, 5, 4, 3, 2
• Royal Flush > Straight Flush > Four of a Kind > Full House > Flush > Straight > Three of a Kind > Two Pair > Pair > High Card
• Hands of the same type are compared according to the following rules.
• Royal Flush
1. Tie
• Straight Flush
1. Compare the highest card of each straight flush; the higher card wins. If equal, it is a tie.
2. A-2-3-4-5 is the lowest straight flush.
• Four of a Kind
1. Compare the rank of the four matching cards; the higher rank wins.
2. If the ranks are equal, compare the kickers; the higher kicker wins. If the kickers are also equal, it is a tie.
• Full House
1. Compare the rank of the three matching cards; the higher rank wins.
2. If the three matching cards are equal in rank, compare the pairs; the higher pair wins.
3. If the pairs are also equal in rank, it is a tie.
• Flush
1. Compare the highest card in each flush; the higher card wins.
2. If the highest cards are equal, compare the next-highest cards in turn until a winner is found.
3. If all 5 cards are equal in rank, it is a tie.
• Straight
1. Compare the highest card of each straight; the higher card wins. If equal, it is a tie.
2. A-2-3-4-5 is the lowest straight.
• Three of a Kind
1. Compare the rank of the three matching cards; the higher rank wins.
2. If the three matching cards are equal in rank, compare the kickers in turn; the higher kicker wins.
3. If the kickers are also equal, it is a tie.
• Two Pair
1. Compare the higher pairs; the higher pair wins.
2. If the higher pairs are equal in rank, compare the lower pairs; the higher pair wins.
3. If both pairs are equal in rank, compare the kickers; the higher kicker wins.
4. If all are equal, it is a tie.
• Pair
1. First compare the rank of the pairs; the higher pair wins.
2. If the pairs are equal in rank, compare the kickers in turn; the higher kicker wins.
3. If the kickers are also equal, it is a tie.
• High Card
1. Compare the highest cards; the higher card wins.
2. If the highest cards are equal, compare the second-highest cards; the higher card wins.
3. If still equal, keep comparing in descending order down to the last card; the higher card wins.
4. If all 5 cards are equal in rank, it is a tie.
5.4 Betting Instructions
• Win/Lose/Draw
1. Includes Cowboy win, Bull win, and Draw, each with its corresponding odds
• Either side’s hand (based on the original two cards of Cowboy or Bull)
1. Suited / connected / suited connectors: two cards of the same suit / two consecutive ranks / two consecutive ranks of the same suit
2. Pair: two cards of the same rank (including a pair of aces)
3. Pair of aces: two aces
4. If Cowboy and Bull hold the same hand, a hit on the betting area is counted only once
5. If Cowboy and Bull hold different hands and both hit a betting area, both areas count as hits
• Winning hand
1. Includes high card/pair, two pair, three of a kind/straight/flush, full house, four of a kind/straight flush/royal flush, each paying its corresponding odds
5.5 Operating Instructions
• Select chips
• Automatic betting
• Game betting area
PS: Click a chip icon and then click a betting area to place a bet; the same area can be bet on repeatedly.
VI. Red VS Black
6.1 Game Introduction
The game has two sides, red and black. Each side is dealt three cards, and the winner is determined by comparing the two hands. Players profit by betting on the winner or on the special hand areas.
6.2 Basic game settings
1. Number of people in the game: no limit
2. Number of cards used: one fixed deck, excluding the big and small Jokers, 52 cards in total
3. Field: No field limit, all players play in the same room
4. Betting area
① Players can bet on the Black, Red, and 3 special hand areas
② The odds for both the Black and Red zones are 1:1.9
③ The special hand areas pay different odds for different hands.
6.3 Card description and size
• Card Type
• Card Size
Three of a Kind > Straight Flush > Flush > Straight > Pair > High Card
• Suit order
When comparing hands, if rank alone does not determine a winner, the suit of the highest card is compared, and the higher suit wins. The suit order is: Spades > Hearts > Clubs > Diamonds
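The suit tie-break described here can be sketched as a tiny comparison helper. This is an illustrative sketch only; the function name and the rank/suit encoding are my assumptions.

```python
# Sketch of the tie-break: Spades > Hearts > Clubs > Diamonds, consulted
# only when ranks alone cannot separate the two hands.
SUIT_ORDER = {"diamonds": 0, "clubs": 1, "hearts": 2, "spades": 3}

def compare_top_card(a, b):
    """a, b: (rank, suit) of each side's highest card; rank is 2..14.
    Returns 1 if a wins, -1 if b wins (the suit order makes ties impossible)."""
    if a[0] != b[0]:
        return 1 if a[0] > b[0] else -1
    return 1 if SUIT_ORDER[a[1]] > SUIT_ORDER[b[1]] else -1

# Equal ranks: the ace of spades beats the ace of hearts.
assert compare_top_card((14, "spades"), (14, "hearts")) == 1
```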
• Hands of the same type are compared according to the following rules
1. Three of a Kind
• Compare the rank of the three matching cards; the higher rank wins
2. Straight Flush
• Compare the highest card of each straight flush; the higher card wins. If equal, compare the suits of the highest cards.
• A-2-3-4-5 is the lowest straight flush
3. Flush
• Compare the highest card in each flush; the higher card wins.
• If the highest cards are equal, compare the next-highest cards in turn until a winner is found.
• If all 3 cards are equal in rank, compare the suits.
4. Straight
• Compare the highest card of each straight; the higher card wins. If equal, compare the suits of the highest cards.
• A-2-3-4-5 is the lowest straight
5. Pair
• First compare the rank of the pairs; the higher pair wins
• If the pairs are equal in rank, compare the single cards; the higher card wins.
• If still equal, compare the suits of the pairs
6. High Card
• First compare the highest cards; the higher card wins
• If the highest cards are equal, compare the second-highest cards; the higher card wins
• If still equal, compare the third cards; the higher card wins
• If all 3 cards are equal in rank, compare the suit of the highest card
• Ace is the highest single card
6.4 Betting Instructions
• Bets can be placed in three zones: Red win, Black win, and the hand-type zone.
• At settlement, the two hands are compared to determine the winner; the winning area is Red or Black depending on the winning side.
• The winning hand type determines whether the hand-type area pays out.
6.5 Operating Instructions
• Selecting chips
• Automatic betting
• Game betting area
PS: Players click on the chip icon and then click on the game betting area to bet, the same betting area can be repeated.
VII. Burst Point
7.1 Game Introduction
Burst Point is an online multiplayer guessing game with two modes: Classic and Trenball.
In Classic mode, a rising multiplier curve can crash at any time. Before the round starts, players have 6 seconds to place bets; once it starts, the multiplier rises from 1x. A player can click “Escape” at any time to lock in the current multiplier, and the payout is the bet multiplied by that multiplier. The later a player escapes, the higher the payout; however, the curve can crash at any time, and a player who has not escaped before the crash loses the entire bet.
Trenball follows the same general flow as Classic. Bet in advance on a multiplier range: Red, Green, or Moon. If the crash multiplier falls in your betting range, you are rewarded accordingly; otherwise you lose your bet.
Red: the crash multiplier is in [1, 2)
Green: the crash multiplier is in [2, ∞)
Moon: the crash multiplier is equal to or greater than 10
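The three Trenball ranges above can be sketched as a simple classifier. Note that, as stated in the source, Green ([2, ∞)) and Moon (≥ 10) overlap; this sketch (names mine) simply reports every zone the crash multiplier falls in rather than resolving that overlap.

```python
# Sketch of the Trenball zones exactly as stated: Red = [1, 2),
# Green = [2, inf), Moon = [10, inf). Moon overlaps Green as written.
def winning_zones(crash_multiplier):
    zones = []
    if 1 <= crash_multiplier < 2:
        zones.append("red")
    if crash_multiplier >= 2:
        zones.append("green")
    if crash_multiplier >= 10:
        zones.append("moon")
    return zones

assert winning_zones(1.5) == ["red"]
assert winning_zones(12.0) == ["green", "moon"]
```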
You can play Classic and Trenball separately or at the same time!
7.2 Basic game settings
1. Number of people in the game: no limit
2. Field: No field limit, all players play in the same room
7.3 Bonus Pool
1. The prize pool is the pool of money used to pay winning players
2. The profit a player can earn in a round depends on the size of the prize pool
3. No single player may win more than 1% of the prize pool per round; if your payout would exceed 1% of the pool, you are forced to escape and cash out
4. The total winnings of all players in a round may not exceed 1.5% of the prize pool; if the sum of all players’ rewards would exceed 1.5% of the pool, everyone is forced to escape and cash out
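The two caps above (1% per player, 1.5% per round) can be sketched as follows. This is a hedged illustration: the names are mine, and the proportional scaling when the round total is exceeded is an assumption — the source only says everyone is forced to cash out at the cap.

```python
# Sketch of the prize-pool caps: each player's payout is capped at 1% of
# the pool, and the round total at 1.5% of the pool.
def apply_pool_caps(pool, payouts):
    """payouts: dict of player -> intended payout. Returns capped payouts."""
    per_player_cap = 0.01 * pool
    capped = {p: min(v, per_player_cap) for p, v in payouts.items()}
    total_cap = 0.015 * pool
    total = sum(capped.values())
    if total > total_cap:
        # Scale everyone down proportionally (an assumption, see lead-in).
        scale = total_cap / total
        capped = {p: v * scale for p, v in capped.items()}
    return capped

caps = apply_pool_caps(10000, {"a": 150, "b": 40})
assert caps["a"] == 100  # capped at 1% of a 10000 pool
assert caps["b"] == 40   # under the cap, unchanged
```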
7.4 Operating Instructions
• Classic
1. Classic and Trenball play switch
2. Automatic betting switch
3. Increase or decrease the bet amount
4. Place bet button
5. Auto-escape switch. Tap it, then slide up and down on the panel to set the escape multiplier.
• Trenball
1. Place a bet button
Design Calculator
Useful Calculation Examples
Here you can roughly calculate your necessary heat requirement when purchasing a new heating system for an old or new building.
Energy & Building Services, Heating Technology, Heating Engineering |
free to use
Here, the design of a roller conveyor belt in industry or in the construction sector is calculated in a simplified way.
Industry, Machinery, Economy, Production, Transport, Automation, Special Machinery, Industrial Machines, Goods, Industrial Goods |
free to use
Air duct calculation as an aid to the design of ventilation ducts. Note: this design aid does not replace proper planning of a ventilation system, and the values are given without guarantee. The ratio of the two duct lengths should not be greater than 1:4.
Technique, Ventilation System, Design Ventilation |
free to use
What are the costs of a stove with wood fuel compared to one with wood pellets fuel? For many, this is the all-important question when choosing the right wood heating system.
Building, Living, Interior Design, Heating, Heating Technology, Fuels, Homes, Heating Plant |
free to use
Here you can calculate the generator power and the voltage range of a photovoltaic inverter.
Building & Living, Building Services, Electrics, Electricity, Photovoltaics |
free to use
Online calculator for working out the necessary number of wallpaper rolls, in linear meters, depending on the wall surface to be wallpapered. Calculated with the standard wallpaper width of 53 cm.
Building & Living, Interior Design |
free to use
Simple quote cost calculator for painting contractors to calculate the cost of painting a house or apartment (room).
Building, Painting, Renovation, Crafts, Interior Design |
free to use
This tool roughly calculates the necessary pond pump size over the circulation cycles.
Garden & Pond Technology & Garden Pond & Building |
free to use
Small calculation tool for calculating the design of an infrared heating system or power heating or also called electric heating.
Building & Living & Heating |
free to use
Now quickly calculate how much fondant is needed for a cake.
Food & Cooking, Baking |
free to use