Find Almost Integers Using High-Precision Arithmetic
This example shows how to find almost integers, or numbers that are very close to integers, using variable-precision arithmetic in Symbolic Math Toolbox™. In this example, you search for almost
integers that have the form exp(pi*sqrt(n)) or exp(pi*n) for the integers n = 1, ..., 200.
Almost Integers
By default, MATLAB® uses 16 digits of precision. For higher precision, use the vpa function in Symbolic Math Toolbox. vpa provides variable precision, which can be increased when evaluating numbers.
First, consider a well-known example of an almost integer [2] that is the real number exp(pi*sqrt(163)). Create this real number as an exact symbolic number.
r = exp(pi*sqrt(sym(163)))
Evaluate this number with variable-precision arithmetic using vpa. By default, vpa evaluates values to 32 significant digits.
f =
You can change the number of significant digits by using the digits function. Evaluate the same number to 25 significant digits.
This number is very close to an integer. Find the difference between this real number and its nearest integer. Use vpa to evaluate the difference to 25 significant digits.
dr =
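Outside MATLAB, the same check can be reproduced with Python's standard-library decimal module (an illustrative sketch, not the toolbox's own code; the value of pi is hardcoded to 50 digits):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits
PI = Decimal("3.14159265358979323846264338327950288419716939937510")

r = (PI * Decimal(163).sqrt()).exp()   # exp(pi*sqrt(163))
dr = r.to_integral_value() - r         # distance to the nearest integer
print(dr)  # a tiny positive number, on the order of 1e-13
```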
Almost Integers of Form exp(pi*sqrt(n))
Search for almost integers that have the form exp(pi*sqrt(n)) for the integers n = 1, ..., 200. Create these numbers as exact symbolic numbers.
A = exp(pi*sqrt(sym(1:200)));
Set the number of significant digits to the number of digits in the integer part of exp(pi*sqrt(200)) plus 20 more digits.
d = log10(A(end));
Evaluate the differences between these series of numbers and their nearest integers. Find the almost integers with rounding errors that are less than 0.0001. Show these almost integers in exact
symbolic form.
B = vpa(round(A)-A);
A_nearint = A(abs(B)<0.0001)'
A_nearint =
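The same search can be sketched in standard-library Python using the decimal module (an illustrative reimplementation, not the toolbox code; 50 hardcoded digits of pi are assumed sufficient, which they are at this working precision for n up to 200):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # ~20 integer digits for exp(pi*sqrt(200)) plus a safety margin
PI = Decimal("3.14159265358979323846264338327950288419716939937510")

hits = []
for n in range(1, 201):
    x = (PI * Decimal(n).sqrt()).exp()
    if abs(x.to_integral_value() - x) < Decimal("0.0001"):
        hits.append(n)

print(hits)  # includes the famous n = 163 (Ramanujan's constant)
```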
Plot a histogram of the differences. Their distribution shows many occurrences of differences that are close to zero, where the form exp(pi*sqrt(n)) is an almost integer.
Almost Integers of Form exp(pi*n)
Search for almost integers that have the form exp(pi*n) for the integers n = 1, ..., 200. Create these numbers as exact symbolic numbers.
Set the number of significant digits to the number of digits in the integer part of exp(pi*200) plus 20 more digits.
d = log10(A(end));
Evaluate the differences between these series of numbers and their nearest integers. Find the almost integers with rounding errors that are less than 0.0001. The result is an empty sym array, which
means no number in this series satisfies this condition.
B = vpa(round(A)-A);
A_nearint = A(abs(B)<0.0001)
A_nearint =
Empty sym: 1-by-0
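The emptiness of the result can be spot-checked in standard-library Python as well (a sketch with a smaller range, n = 1, ..., 30, because the integer part of exp(pi*n) grows quickly and the hardcoded 50-digit pi would otherwise limit the achievable accuracy):

```python
from decimal import Decimal, getcontext

getcontext().prec = 100  # exp(pi*30) has ~41 integer digits; keep plenty of spare
PI = Decimal("3.14159265358979323846264338327950288419716939937510")

hits = []
for n in range(1, 31):
    x = (PI * n).exp()
    if abs(x.to_integral_value() - x) < Decimal("0.0001"):
        hits.append(n)

print(hits)  # [] -- no almost integers of this form in the range
```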
Plot a histogram of the differences. The histogram, which is relatively evenly distributed, shows that the form exp(pi*n) does not produce many occurrences of almost integers. For this specific
example, no almost integer has a rounding error less than 0.0001.
Finally, restore the default precision of 32 significant digits for further calculations.
[1] "Integer Relation Algorithm." In Wikipedia, April 9, 2022. https://en.wikipedia.org/w/index.php?title=Integer_relation_algorithm&oldid=1081697113.
[2] "Almost Integer." In Wikipedia, December 4, 2021. https://en.wikipedia.org/w/index.php?title=Almost_integer&oldid=1058543590. | {"url":"https://nl.mathworks.com/help/matlabmobile/ug/numerical-computations-with-high-precision.html","timestamp":"2024-11-14T17:43:13Z","content_type":"text/html","content_length":"73248","record_id":"<urn:uuid:f551159c-b1d7-48d1-a1ef-8473e0fa30e8>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00098.warc.gz"} |
Factoring - (Formal Logic II) - Vocab, Definition, Explanations | Fiveable
from class:
Formal Logic II
Factoring is the process of breaking down a complex logical expression or statement into simpler components, often to facilitate easier analysis or proof. In the context of resolution principle and
refutation proofs, factoring helps to simplify clauses by identifying common literals or components, making it easier to apply resolution strategies effectively. This technique is crucial for
eliminating redundancies and ensuring that arguments are presented in a more manageable form.
5 Must Know Facts For Your Next Test
1. Factoring allows for the identification of shared variables or literals among clauses, which can lead to more effective resolution steps.
2. In factoring, it’s important to maintain logical equivalence; the factored expression must represent the same truth conditions as the original.
3. Factoring can significantly reduce the complexity of problems, allowing for faster resolution and fewer steps needed to arrive at conclusions.
4. It is particularly useful in automated theorem proving, where simplifying expressions can lead to quicker algorithmic solutions.
5. In the context of resolution proofs, factoring can help avoid duplicate efforts by consolidating similar expressions and focusing on unique resolutions.
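The facts above can be made concrete with a purely propositional sketch (my own illustration; first-order factoring additionally requires unification of literals, which is not shown here):

```python
# A literal is (name, polarity); a clause is a list of literals.
def factor(clause):
    """Collapse duplicate literals while preserving order (propositional factoring)."""
    out = []
    for lit in clause:
        if lit not in out:
            out.append(lit)
    return out

# {P, Q, P} factors to {P, Q}: same truth conditions, fewer resolution targets.
print(factor([("P", True), ("Q", False), ("P", True)]))
# [('P', True), ('Q', False)]
```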
Review Questions
• How does factoring improve the efficiency of the resolution principle in logical proofs?
□ Factoring improves the efficiency of the resolution principle by simplifying complex logical expressions into more manageable components. By identifying common literals and breaking them
down, it reduces redundancy in the proof process. This allows for quicker application of resolution strategies, leading to faster derivation of conclusions while minimizing the number of
steps needed in a proof.
• Discuss how factoring interacts with Clause Normal Form and its importance in constructing valid refutation proofs.
□ Factoring plays a key role in converting logical expressions into Clause Normal Form (CNF), which is essential for applying resolution methods effectively. When clauses are factored, they
often become easier to express in CNF, ensuring that they are structured correctly for logical deductions. This interaction is vital for constructing valid refutation proofs since CNF is
necessary for systematic application of resolution, ultimately leading to proving or disproving statements based on derived contradictions.
• Evaluate the significance of factoring within automated theorem proving and its impact on computational logic.
□ Factoring is highly significant in automated theorem proving as it directly impacts the efficiency and effectiveness of logical algorithms. By simplifying complex expressions and eliminating
redundancies, factoring reduces computational overhead and speeds up problem-solving processes. The ability to factor expressions means that automated systems can focus on unique resolutions
rather than getting bogged down by repeated elements, ultimately leading to more robust and reliable systems in computational logic.
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/formal-logic-ii/factoring","timestamp":"2024-11-12T03:10:42Z","content_type":"text/html","content_length":"156961","record_id":"<urn:uuid:3f9b8350-c3d3-4b0c-8515-7aecd3a91757>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00235.warc.gz"} |
DGTTRF - Linux Manuals (3)
DGTTRF (3) - Linux Manuals
dgttrf.f -
subroutine dgttrf (N, DL, D, DU, DU2, IPIV, INFO)
Function/Subroutine Documentation
subroutine dgttrf (integer N, double precision, dimension(*) DL, double precision, dimension(*) D, double precision, dimension(*) DU, double precision, dimension(*) DU2, integer, dimension(*) IPIV, integer INFO)
DGTTRF computes an LU factorization of a real tridiagonal matrix A
using elimination with partial pivoting and row interchanges.
The factorization has the form
A = L * U
where L is a product of permutation and unit lower bidiagonal
matrices and U is upper triangular with nonzeros in only the main
diagonal and first two superdiagonals.
N is INTEGER
The order of the matrix A.
DL is DOUBLE PRECISION array, dimension (N-1)
On entry, DL must contain the (n-1) sub-diagonal elements of A.
On exit, DL is overwritten by the (n-1) multipliers that
define the matrix L from the LU factorization of A.
D is DOUBLE PRECISION array, dimension (N)
On entry, D must contain the diagonal elements of A.
On exit, D is overwritten by the n diagonal elements of the
upper triangular matrix U from the LU factorization of A.
DU is DOUBLE PRECISION array, dimension (N-1)
On entry, DU must contain the (n-1) super-diagonal elements
of A.
On exit, DU is overwritten by the (n-1) elements of the first
super-diagonal of U.
DU2 is DOUBLE PRECISION array, dimension (N-2)
On exit, DU2 is overwritten by the (n-2) elements of the
second super-diagonal of U.
IPIV is INTEGER array, dimension (N)
The pivot indices; for 1 <= i <= n, row i of the matrix was
interchanged with row IPIV(i). IPIV(i) will always be either
i or i+1; IPIV(i) = i indicates a row interchange was not required.
INFO is INTEGER
= 0: successful exit
< 0: if INFO = -k, the k-th argument had an illegal value
> 0: if INFO = k, U(k,k) is exactly zero. The factorization
has been completed, but the factor U is exactly
singular, and division by zero will occur if it is used
to solve a system of equations.
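The algorithm described above can be sketched in plain Python (an illustrative reimplementation following the documented conventions, not LAPACK's Fortran source; the solve step plays the role of DGTTRS, and nonsingular pivots are assumed where LAPACK would instead report INFO > 0):

```python
def gttrf(dl, d, du):
    # LU-factor a real tridiagonal matrix with partial pivoting, in place,
    # using the DGTTRF storage conventions: on exit dl holds the multipliers,
    # d the diagonal of U, du the first superdiagonal of U, and du2 the
    # second superdiagonal of U (nonzero only after row interchanges).
    n = len(d)
    du2 = [0.0] * (n - 2)
    ipiv = list(range(n))
    for i in range(n - 1):
        if abs(d[i]) >= abs(dl[i]):
            # no row interchange needed
            fact = dl[i] / d[i]
            dl[i] = fact
            d[i + 1] -= fact * du[i]
        else:
            # interchange rows i and i+1
            fact = d[i] / dl[i]
            d[i], dl[i] = dl[i], fact
            ipiv[i] = i + 1
            du_i = du[i]
            du[i] = d[i + 1]
            d[i + 1] = du_i - fact * d[i + 1]
            if i < n - 2:
                du2[i] = du[i + 1]
                du[i + 1] = -fact * du[i + 1]
    return du2, ipiv

def gttrs(dl, d, du, du2, ipiv, b):
    # solve A x = b from the gttrf output (the role played by DGTTRS)
    n = len(d)
    b = list(b)
    for i in range(n - 1):
        if ipiv[i] == i:
            b[i + 1] -= dl[i] * b[i]
        else:
            b[i], b[i + 1] = b[i + 1], b[i] - dl[i] * b[i + 1]
    x = [0.0] * n
    x[n - 1] = b[n - 1] / d[n - 1]
    if n > 1:
        x[n - 2] = (b[n - 2] - du[n - 2] * x[n - 1]) / d[n - 2]
    for i in range(n - 3, -1, -1):
        x[i] = (b[i] - du[i] * x[i + 1] - du2[i] * x[i + 2]) / d[i]
    return x

# a 4x4 example with |dl| > |d|, so every step takes the interchange branch
dl, d, du = [4.0, 4.0, 4.0], [1.0, 1.0, 1.0, 1.0], [2.0, 2.0, 2.0]
du2, ipiv = gttrf(dl, d, du)
x = gttrs(dl, d, du, du2, ipiv, [5.0, 12.0, 19.0, 16.0])
print(x)  # [1.0, 2.0, 3.0, 4.0]
```

The sample right-hand side was built from the solution [1, 2, 3, 4], so the solve recovers it exactly.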
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Definition at line 125 of file dgttrf.f.
Generated automatically by Doxygen for LAPACK from the source code. | {"url":"https://www.systutorials.com/docs/linux/man/3-DGTTRF/","timestamp":"2024-11-07T13:20:24Z","content_type":"text/html","content_length":"9233","record_id":"<urn:uuid:073ef857-bf47-4ce6-b5a2-9563c4feee18>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00675.warc.gz"} |
Growth Versus Value and the Yield Curve
A reader inquires: “Ken Fisher did a statistical study in his book, The Only Three Questions That Count: Investing by Knowing What Others Don’t, which states that growth (value) is in favor when the
yield curve flattens (steepens). Any truth to this?” To test this hypothesis, we compare the performances of paired growth and value indexes/funds as the spread between the yields on the 10-year
Treasury Note (T-note) and the 90-day Treasury Bill (T-bill) varies. Using monthly and quarterly adjusted (for dividends) return data for a pair of growth-value indexes and a pair of growth-value
mutual funds, along with contemporaneous T-note and T-bill yield data, we find that:
The following chart compares the monthly T-note/T-bill yield spread to a “value premium” defined as the monthly adjusted return of the iShares Russell Midcap Value Index (IWS) minus the monthly
adjusted return of the iShares Russell Midcap Growth Index (IWP) over the period 9/01 through 10/07 (74 months of data). The jagged lines are the raw data, showing that the value premium is very
noisy on a monthly basis. The heavy lines are sixth-order polynomial best fits for the two series. It is not obvious that there is a relationship between the yield spread and the value premium for
this sample based on this visualization.
For a closer look, we compare the monthly returns for the two indexes to the monthly change in the yield spread.
The following scatter plot relates monthly changes in IWP and IWS to the monthly change in the T-note/T-bill yield spread over the same period. Results indicate that both indexes tend to perform
better (worse) as the yield spread grows (shrinks). The Pearson correlations for the IWP and IWS series are 0.25 and 0.19, respectively. The R-squared statistics for these series are 0.06 and 0.04,
suggesting that yield spread changes have very little power to explain either growth or value stock monthly returns.
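The Pearson correlation and R-squared statistics quoted throughout can be reproduced with a few lines of standard-library Python (the arrays below are hypothetical placeholders, not the article's data):

```python
def pearson(xs, ys):
    # sample Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# hypothetical monthly yield-spread changes vs. index returns
spread_chg = [0.10, -0.20, 0.30, 0.05, -0.15]
returns = [1.2, -0.8, 2.0, 0.3, -1.0]
r = pearson(spread_chg, returns)
print(round(r, 2), round(r * r, 2))  # correlation and R-squared
```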
The relative positions of the best-fit lines using monthly data over six years do not support Ken Fisher’s hypothesis, with growth (value) stock returns tending to be higher when the yield spread is
growing (shrinking). However, the difference in the best-fit lines is small.
Excluding September/October 2001 as outliers increases correlations and R-squareds, and increases the difference in slope between growth and value best-fit lines without affecting relative positions.
Might the differences between growth and value stock reactions to yield spread changes be more apparent on a quarterly basis?
The next scatter plot relates quarterly changes in IWP and IWS to the quarterly change in the T-note/T-bill yield spread starting in September 2001 (24 quarters). Results indicate that both indexes
tend to perform better (worse) as the yield spread grows (shrinks). The Pearson correlations for the IWP and IWS series are 0.58 and 0.41, respectively. The R-squared statistics for these series are
0.34 and 0.16, suggesting explanatory power for yield spread changes over quarterly intervals.
The slope of the best-fit line for IWP (growth) remains steeper than that for IWS (value), and the relative positions of the best-fit lines again do not support the hypothesis. Growth stocks appear
to like a widening yield spread more than do value stocks, and value stocks appear to outperform when the yield spread shrinks.
The time period considered so far probably constitutes less than one full contraction-expansion economic cycle and may be unrepresentatively influenced by the unusually severe 2000-2002 bear market.
Might a longer time series say something different about the reactions of growth and value stocks to changes in the yield spread?
The next chart compares the quarterly T-note/T-bill yield spread to a “value premium” defined as the quarterly adjusted return of the Fidelity Equity-Income (FEQIX) mutual fund minus the quarterly
adjusted return of the Fidelity Blue Chip Growth (FBGRX) mutual fund starting in September 1989 (72 quarters). The jagged lines are the raw data, showing that the value premium is still very noisy on
a quarterly basis. The heavy lines are sixth-order polynomial best fits. Visual inspection suggests that there may be a relationship between the yield spread and the value premium over this longer
sample period. The shapes of the best-fit curves are broadly similar.
For a closer look, we compare the quarterly returns for the two funds to the quarterly change in the yield spread.
The final scatter plot relates the quarterly changes in FBGRX and FEQIX to the quarterly change in the T-note/T-bill yield spread since September 1989. Results indicate no relationship between mutual
fund returns and the yield spread. The Pearson correlations for the FBGRX and FEQIX series are 0.01 and -0.03, respectively. The R-squared statistics for both series are 0.00.
The lack of differences between growth and value for this longer sample period provides no support for the hypothesis.
Offsetting the data series such that quarterly changes in the yield spread lead quarterly fund returns by one to eight quarters produces no clear, consistent relationships.
In summary, limited analyses do not support the hypothesis that growth (value) stocks systematically outperform when the T-note/T-bill yield spread shrinks (grows).
Note that the variables chosen to represent the yield spread, growth stock returns and value stock returns may not be optimum for testing the hypothesis. Portfolios more extremely tilted toward
growth and value might provide a stronger test. Also, the samples may not be long enough to capture a difference in economic-cycle effects on growth and value stocks.
Two readers sent the following elaboration of Ken Fisher’s hypothesis:
Ken Fisher in his book says that there is no relationship between the U.S. yield curve and relative strength for growth versus value stocks. He claims that there is a growth-versus-value
indication from the world yield curve, calculated as a GDP-weighted average of the yield curves of the major economies.
Here are three points in response:
1. Skepticism about long-run inference from yield curves is warranted because it is very difficult to assemble reasonably large, reliable, independent samples in terms of number of economic cycles.
Capturing 20 economic cycles (yield curve cycles) requires using data so old that its relevance is questionable in terms of financial, regulatory and technological environment. It is arguable
that economic cycles do not exist under any robust and rigorous definition – too many moving parts and interdependencies.
2. Testing many variations of yield curves (national-international, weighted-unweighted, duration selection) introduces data mining bias, meaning that the optimum curve criteria (best R-squared)
likely generate “lucky” results with overstated predictive power.
3. If forecasting based on yield curve analysis has a sound logical basis, should not the yield curve for a major economic entity have at least some effectiveness? | {"url":"https://www.cxoadvisory.com/value-premium/growth-versus-value-and-the-yield-curve/","timestamp":"2024-11-10T01:32:49Z","content_type":"application/xhtml+xml","content_length":"149656","record_id":"<urn:uuid:defe0905-3ab9-4ab6-afbb-a507ce6b0b45>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00615.warc.gz"} |
Tropical Aquatics
Volume of any pool shape:
• Volume is the space inside of an object. When talking about the volume of a pool, we express it in cubic feet. In metric terms, volume would be cubic meters.
• Volume (V) = Area (A) x Average Depth (AD)
Volume is how many cubic feet can fit into an object. For the purposes of pools and spas, all dimensions must be given in feet so as to calculate cubic feet. Therefore meters, yards, and inches must
be converted. From the Main Page:
• Meters times 3.28 = feet
• Yards times 3 = feet
• Inches divided by 12 = feet (or fraction thereof)
• (1) You have a pool with a length of 40 feet and a width of 20 feet.
The shallow depth is 3 feet, and the deep depth is 7 feet
What is the average depth?
The formula is V = A times AD
A = L x W
AD = (D[1] + D[2]) divided by 2
A = 40 ft x 20 ft = 800 sq.ft.
AD = (3 feet+ 7 feet) divided by 2 = 5 feet
V = A times AD
V = 800 sq.ft. times 5 feet
V = 4,000 cubic feet (cu.ft.)
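The worked example above can be expressed as a short Python function (a sketch of the formula, with a made-up function name):

```python
def pool_volume_cuft(length_ft, width_ft, shallow_ft, deep_ft):
    area = length_ft * width_ft                 # A = L x W
    avg_depth = (shallow_ft + deep_ft) / 2      # AD = (D1 + D2) / 2
    return area * avg_depth                     # V = A x AD

print(pool_volume_cuft(40, 20, 3, 7))  # 4000.0 cubic feet
```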
Ok. So we have cubic feet, or perhaps cubic meters. What good is it? The section on Gallons will provide a meaningful use for cubic feet. | {"url":"https://www.thepoolclass.com/support/math/tutorials/volume.htm","timestamp":"2024-11-12T15:23:10Z","content_type":"text/html","content_length":"9028","record_id":"<urn:uuid:0dd31a76-7c35-4aca-96f9-b3510c8457f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00004.warc.gz"} |
3rd Grade Math Worksheets, Math Worksheets for Grade 3 - BYJU'S
Third grade is a crucial time for children. They will be introduced to several new math topics over the course of the school year. These topics lay the foundation for the higher level math they will
encounter in later grades. 3rd Grade Math Worksheets follow a stepwise approach for each topic that helps students with varied abilities grasp each concept quickly. Fractions, time, money, geometry,
and more are highlighted in the Grade 3 Online Math Worksheets in a colorful and engaging format that children can quickly understand and apply to their daily lives. | {"url":"https://byjus.com/us/math/3rd-grade-math-worksheets/","timestamp":"2024-11-03T13:01:30Z","content_type":"text/html","content_length":"210858","record_id":"<urn:uuid:a5d856ff-54cc-4d11-a829-0a5fe429f1da>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00672.warc.gz"}
Subitize and subitization - cardinality without counting
Subitizing is being able to identify the value of a group of objects by looking at it without actually counting the objects. Children can do this at a very young age and seem to be able to understand
one, two, and three as a perceptual magnitude not cardinality. Being able to perceive two or three as a whole without doing mathematical thinking can be done by birds and some other animals.
Therefore, young children can label small groups, as two, three, or four, accurately by subitizing, but no cardinality maybe recognized for the group. However, as children mature and gain experience
it can also be a confident judgement of cardinality for a small number of objects.
Subitize comes from the Latin subitus, meaning sudden, which captures the sudden dawning of "that's three" or "that's four." The term was suggested in 1949 by E. L. Kaufman, M. W. Lord, T. W. Reese, and J. Volkmann.
It is possible for children to identify two fingers as two fingers. Or say they have five fingers. Or that you are holding up three fingers, without knowing the value of the the numbers. Particularly
for five and above. They can recognize the value of a group through perceptual recognition of visual patterns of the objects' relative positions (subitizing) or through procedural counting without
understanding the group of object's cardinality.
People can learn to subitize and develop the skill with practice. Drilling students with subitization exercises, rather than counting or basic fact operations, strengthens number sense: relationships of quantity, cardinality, conservation of number, and proportion. Being able to recognize and apply those abilities in different situations is a first step toward using mathematics in a flexible and creative way. With practice, students can quickly subitize the values of subgroups within a larger group and mentally join the subgroups to find the total value of the larger group.
SubQuan, from the Latin for sudden quantity, is organized subitizing.
Examples show how objects can be placed into containers, subitizing by digits is possible, which allows the human eye to see very large quantities very quickly and to also see their shape and
segments: squares, cubes, segments of cubes, square of cubes, and so on.
In addition, the container size can change, permitting individuals to see quantities in various bases. This leads to recognizing identical patterns between bases, otherwise known as metapatterns,
which reveal polynomic expressions. So far we have found that children as young as 10 years old can "see" polynomials.
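The container idea can be sketched as repeated grouping by a chosen container size, i.e. a base (my own illustrative mapping, with an invented function name; the leftover count at each level is exactly a digit of the quantity in that base):

```python
def subquan(n, base):
    # repeatedly group n into containers of the given size; the leftovers
    # at each level are the digits of n in that base, most significant first
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return digits[::-1] or [0]

print(subquan(157, 10))  # [1, 5, 7]
print(subquan(157, 7))   # [3, 1, 3] -- the same quantity seen in base 7
```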
The visual nature of subQuanning transforms mathematics, removes the stumbling block students have been hidden behind, and leaves many rote memorization techniques in math education in question.
By incorporating two more steps, entitled differences and polynomial derivation, any polynomial equation can be found from a set of data points. | {"url":"https://thehob.net/math/numVluOp/subitizeStars.html","timestamp":"2024-11-02T07:27:04Z","content_type":"text/html","content_length":"6029","record_id":"<urn:uuid:580127c9-9c43-4fae-99d5-fb9a9a8eeee1>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00637.warc.gz"}
Science:Math Exam Resources/Courses/MATH152/April 2017/Question B 03 (b)
MATH152 April 2017
Question B 03 (b)
Consider the resistor network below.
(b) ${\displaystyle J}$ is the current through the voltage source (note that ${\displaystyle J=i_{1}}$). Write ${\displaystyle J}$ and ${\displaystyle E}$ in terms of ${\displaystyle I}$ and ${\displaystyle V}$.
Make sure you understand the problem fully: What is the question asking you to do? Are there specific conditions or constraints that you should take note of? How will you know if your answer is
correct from your work only? Can you rephrase the question in your own words in a way that makes sense to you?
If you are stuck, check the hint below. Consider it for a while. Does it give you a new idea on how to approach the problem? If so, try it!
You should use your favourite algorithm to solve the system of linear equations from part (a). The solution presented here will use row reduction.
Checking a solution serves two purposes: helping you if, after having used the hint, you still are stuck on the problem; or if you have solved the problem and would like to check your work.
• If you are stuck on a problem: Read the solution slowly and as soon as you feel you could finish the problem on your own, hide it and work on the problem. Come back later to the solution if you
are stuck or if you want to check your work.
• If you want to check your work: Don't only focus on the answer, problems are mostly marked for the work you do, make sure you understand all the steps that were required to complete the problem
and see if you made mistakes or forgot some aspects. Your goal is to check that your mental process was correct, not only the result.
At the end of part A, we had the matrix equation ${\displaystyle \left({\begin{array}{ccc}1&0&1\\-1&1&0\\0&2&-1\\\end{array}}\right)\left({\begin{array}{c}i_{1}\\i_{2}\\E\end{array}}\right)=\left({\begin{array}{c}V\\I\\0\end{array}}\right)}$.
We can rewrite this system as an augmented matrix
${\displaystyle \left({\begin{array}{ccccc}1&0&1&:&V\\-1&1&0&:&I\\0&2&-1&:&0\end{array}}\right)}$
We add the first row to the second row:
${\displaystyle \left({\begin{array}{ccccc}1&0&1&:&V\\0&1&1&:&V+I\\0&2&-1&:&0\end{array}}\right)}$
Now, we subtract twice the second row from the third row:
${\displaystyle \left({\begin{array}{ccccc}1&0&1&:&V\\0&1&1&:&V+I\\0&0&-3&:&-2V-2I\end{array}}\right)}$
Now, we divide the third row by -3:
${\displaystyle \left({\begin{array}{ccccc}1&0&1&:&V\\0&1&1&:&V+I\\0&0&1&:&{\frac {2}{3}}V+{\frac {2}{3}}I\end{array}}\right)}$
Now, we subtract row 3 from rows 1 and 2:
${\displaystyle \left({\begin{array}{ccccc}1&0&0&:&{\frac {1}{3}}V-{\frac {2}{3}}I\\0&1&0&:&{\frac {1}{3}}V+{\frac {1}{3}}I\\0&0&1&:&{\frac {2}{3}}V+{\frac {2}{3}}I\end{array}}\right).}$
Therefore, we get that
${\displaystyle J=i_{1}={\frac {1}{3}}V-{\frac {2}{3}}I}$
and that
${\displaystyle E={\frac {2}{3}}V+{\frac {2}{3}}I}$.
Answer: ${\displaystyle \color {blue}J={\frac {1}{3}}V-{\frac {2}{3}}I;E={\frac {2}{3}}V+{\frac {2}{3}}I}$ | {"url":"https://wiki.ubc.ca/Science:Math_Exam_Resources/Courses/MATH152/April_2017/Question_B_03_(b)","timestamp":"2024-11-12T02:57:43Z","content_type":"text/html","content_length":"69013","record_id":"<urn:uuid:1335a20a-16ff-43a7-b04e-77b1675fed5a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00443.warc.gz"} |
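As a numerical cross-check on the row reduction, the sketch below solves the same system with exact rational arithmetic for the sample values V = 9, I = 3 (arbitrary choices), where the answer predicts J = V/3 - 2I/3 = 1 and E = 2(V + I)/3 = 8:

```python
from fractions import Fraction as F

def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting on a 3x3 system
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

V, I = F(9), F(3)
A = [[F(1), F(0), F(1)],
     [F(-1), F(1), F(0)],
     [F(0), F(2), F(-1)]]
i1, i2, E = solve3(A, [V, I, F(0)])
print(i1, E)  # 1 8
```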
How do you calculate drug calculations for nurses?
D/H x Q = x, that is: the Desired dose (D) divided by the dose on Hand (H), multiplied by the Quantity (Q), gives the amount to administer (x).
How do you calculate the amount of medication?
To calculate the millilitres/hour we first need to work out what dose is contained in one millilitre of the infusion dosage. We can do this by dividing the volume of the dosage by the weight of the
medicine it contains. In this case 500ml/500mg = 1ml/mg.
How do you calculate drugs per kg?
Care must be taken to properly convert body weight from pounds to kilograms (1 kg = 2.2 lb) before calculating doses based on body weight.
Example 2
Step 1. Calculate the dose in mg: 18 kg × 100 mg/kg/day = 1800 mg/day
Step 3. Convert the mg dose to mL: 1800 mg/dose ÷ 40 mg/mL = 45 mL once daily
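The worked steps above can be captured in a small Python helper (an illustrative sketch with an invented function name, using the same figures: 18 kg at 100 mg/kg/day, with a 40 mg/mL preparation):

```python
def daily_dose_ml(weight_kg, mg_per_kg_per_day, strength_mg_per_ml):
    daily_mg = weight_kg * mg_per_kg_per_day   # step 1: total mg per day
    return daily_mg / strength_mg_per_ml       # convert the mg dose to mL

print(daily_dose_ml(18, 100, 40))  # 45.0 mL once daily
```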
How many ml is 60 mg?
Milligram to Milliliter Conversion Table
Weight in Milligrams:   Volume in Milliliters of:
                        Water        Granulated Sugar
60 mg                   0.06 ml      0.085714 ml
70 mg                   0.07 ml      0.1 ml
80 mg                   0.08 ml      0.114286 ml
How do you calculate mg HR?
How to Calculate Milligrams Per Hour
1. Determine the hourly flow rate.
2. Determine the concentration of the solution in milligrams per milliliter.
3. Multiply the hourly flow rate of the liquid by the concentration.
4. mg/hour=hourly flow rate x concentration.
How do you calculate mg kg?
Weigh yourself. Let's assume you weigh 80 kg. Multiply your weight by the prescribed dose per kilogram to get the dose of medication in mg. For a dose of 2 mg/kg: 2 * 80 = 160 mg.
How do you calculate mL per hour?
If you simply need to figure out the mL per hour to infuse, take the total volume in mL, divided by the total time in hours, to equal the mL per hour. For example, if you have 1,000 mL NS to infuse
over 8 hours, take 1,000 divided by 8, to equal 125 mL/hr. To calculate the drops per minute, the drop factor is needed.
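Both calculations translate directly into Python (a sketch with invented function names, reproducing the example figures of 1,000 mL over 8 hours and a 15 gtt/mL drop factor, which is an assumed value for illustration):

```python
def ml_per_hour(total_ml, hours):
    return total_ml / hours

def drops_per_minute(rate_ml_per_hr, drop_factor_gtt_per_ml):
    return rate_ml_per_hr * drop_factor_gtt_per_ml / 60

rate = ml_per_hour(1000, 8)
print(rate)                         # 125.0 mL/hr
print(drops_per_minute(rate, 15))   # 31.25 gtt/min with a 15 gtt/mL set
```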
How many mg is in 30 ml?
How Many Milligrams are in a Milliliter?
Volume in Milliliters:   Weight in Milligrams of:
                         Water        All Purpose Flour
30 ml                    30,000 mg    15,870 mg
31 ml                    31,000 mg    16,399 mg
32 ml                    32,000 mg    16,928 mg | {"url":"https://www.tag-challenge.com/2022/11/14/how-do-you-calculate-drug-calculations-for-nurses/","timestamp":"2024-11-05T15:51:32Z","content_type":"text/html","content_length":"38697","record_id":"<urn:uuid:605b2fb7-1afe-4c1d-9abb-6e315d741dee>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00420.warc.gz"}
EMACSPEAK The Complete Audio Desktop
Audio Deja Vu: Audio Formatted Math On The Emacspeak Desktop
1 Overview
This article previews a new feature in the next Emacspeak release —
audio-formatted Mathematics using Aural CSS. Volker Sorge worked
at Google as a Visiting Scientist from Sep 2012 to August 2013, when
we implemented math
access in ChromeVox — see this brief overview. Since leaving
Google, Volker has refactored and extended his work to create an Open
Source Speech-Rule-Engine implemented using NodeJS. This
speech-rule-engine can be used in many different environments;
Emacspeak leverages that work to enable audio-formatting and
interactive browsing of math content.
2 Overview Of Functionality
Math access on the Emacspeak desktop is implemented via module
emacspeak-maths.el — see js/node/Readme.org in the Emacspeak GitHub
repository for setup instructions.
Once loaded, module emacspeak-maths provides a Math Navigator that
implements the user interface for sending Math expressions to the
Speech-Rule-Engine, and for interactively browsing the resulting
structure. At each step of the interaction, Emacspeak receives math
expressions that have been annotated with Aural CSS and produces
audio-formatted output. The audio-formatted text can itself be
navigated in a special Spoken Math emacs buffer.
Module emacspeak-maths.el implements various affordances for
dispatching mathematical content to the Speech-Rule-Engine — see
usage examples in the next section.
3 Usage Examples
3.1 The Emacspeak Maths Navigator
• The maths navigator can be invoked by pressing S-SPC (hold
down Windows key and press SPC) — this runs the command emacspeak-maths-navigator/body.
• Once invoked, the Maths Navigator can be used to enter an
expression to read.
• Pressing SPC again prompts for the LaTeX math expression.
• Pressing RET guesses the expression to read from the current context.
• The arrow keys navigate the expression being read.
• Pressing o switches to the Spoken Math buffer and exits the navigator.
See the relevant chapter in the online Emacspeak manual for details.
3.2 Math Content In LaTeX Documents
1. Open a LaTeX document containing math content.
2. Move point to a line containing mathematical markup.
3. Press S-SPC RET to have that expression audio-formatted.
4. Use arrow keys to navigate the resulting structure.
5. Press any other key to exit the navigator.
3.3 Math Content On Wikipedia
1. Open a Wikipedia page in the Emacs Web Wowser (EWW) that has
mathematical content.
2. Wikipedia displays math as images, with the alt-text giving the
LaTeX representation.
3. Navigate to some math content on the page, then press S-SPC
a to speak that content — a is for alt.
4. As an example, navigate to Wikipedia Math Example, locate math expressions on that page, then
press S-SPC a.
3.4 Math Content From The Emacs Calculator
1. The built-in Emacs Calculator (calc) provides many complex
math functions including symbolic algebra.
2. For my personal calc setup, see tvr/calc-prepare.el in the
Emacspeak GitHub repo.
3. The setting below configures the Emacs Calculator to output results
as LaTeX: (setq calc-language 'tex)
4. With the above setting in effect, launch the emacs Calculator by
pressing M-##.
5. Press ' — to use algebraic mode — and enter sin(x).
6. Press a t to get the Taylor series expansion of the above
expression, and press x when prompted for the variable.
7. This displays the Taylor Series expansion up to the desired
number of terms — try 7 terms.
8. Now, with Calc having shown the results as TeX, press S-SPC
RET to browse this expression using the Maths Navigator.
4 And The Best Is Yet To Come
This is intentionally called an early preview because there is still
much that can be improved:
1. Enhance the rule engine to infer and convey more semantics.
2. Improve the audio formatting rules to better present the available information.
3. Update/tune the use of Aural CSS properties to best leverage
today's TTS engines.
4. Integrate math-reading functionality into more usage contexts in
addition to the ones enumerated in this article.
Date: 2017-02-08 Wed 00:00
Sample Size Calculation in Medical Research: A Primer - Annals of National Academy of Medical Sciences
Sample Size Calculation in Medical Research: A Primer
1Department of Pharmacology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, India
2Department of Community and Family Medicine, All India Institute of Medical Sciences, Jodhpur, Rajasthan, India
3Department of Paediatrics, All India Institute of Medical Sciences, Jodhpur, Rajasthan, India
4Department of Pharmacology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, India
5All India Institute of Medical Sciences, Jodhpur, Rajasthan, India
Address for correspondence Rimplejeet Kaur, PhD, Department of Pharmacology, All India Institute of Medical Sciences, Jodhpur 342005, Rajasthan, India (e-mail: sidhurimple@yahoo.com).
© 2021. National Academy of Medical Sciences (India)
This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial-License, permitting copying and reproduction so long as the
original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/).
This article was originally published by Thieme Medical and Scientific Publishers Pvt. Ltd. and was migrated to Scientific Scholar after the change of Publisher.
The quality of research is determined by many factors, and one such critical factor is sample size. Failure to use the correct sample size in a study might lead to fallacious results, in the form of
rejection of true findings or approval of false results. Too large a sample size is a waste of resources, while too small a sample size might fail to answer the research question, provide imprecise
results, and call the validity of the study into question. Despite being such a paramount aspect of research, knowledge about sample size calculation is sparse among researchers. Why it is important
to calculate sample size, when to calculate it, how to calculate it, and what details about the calculation should be reported in research protocols or articles are basics that remain unfamiliar to
the majority of researchers. The present review addresses these fundamentals of sample size. Sample size should be calculated during the initial phase of planning a study. Several components are
required for the calculation, such as effect size, type-1 error, type-2 error, and variance. Researchers must be aware that there are different formulas for calculating sample size for different
types of study designs. The researcher must include details of the sample size calculation in the methodology section, so that it can be justified; this also adds to the transparency of the study.
The literature on calculating sample size for different study designs is scattered over many textbooks and journals, and a careful literature search was conducted to gather the necessary information
for this review. This paper presents the sample size calculation formulas in a single review, in a simplified manner and with relevant examples, so that researchers may adequately use them in
their research.
sample size
medical research
clinical trials
case control study
cohort study
cross-sectional study
Sample size is one of the key aspects of any research study. It is also one of the most overlooked parts of clinical research. There are basically three reasons for this: (1) most researchers
are not aware of the importance of sample size; (2) many lack knowledge about how to calculate it; and (3) the mathematical formulas for sample size calculation appear complex and are difficult
to handle without proper training or knowledge.
Several studies have analyzed various research articles and found errors related to sample size, such as samples inadequate to answer the research question and the use of inappropriate formulas.
Through this paper, we attempt to simplify the concepts of sample size calculation, so that researchers are able to apply them correctly in their research projects. A thorough
literature search, along with the experience of the authors, was used to make this review reliable and useful.
What Is Sample Size?
It is neither practical nor feasible to conduct a study on the whole population of interest. Thus, a sample of the population is used, one that adequately represents the population from
which it is drawn. The results obtained from experimentation on this sample are treated as inferences about the whole population.
Sample size is defined as the number of experimental or observational units required for a study. Such a unit could be a study subject/patient, a sample of
blood, visceral fluid, or tissue, or a geographical area, such as a city, state, region, or country.
Why Is It Important to Have an Adequate Sample Size?
The calculation of sample size is one of the crucial steps in planning any research. Failure to calculate an appropriate sample size may lead to false results or rejection of a true finding. The
term "appropriate sample size" means that the sample size should be neither more nor less than actually required to answer the research question. Both cases are of ethical
concern. If the sample size is too small, the study might not be able to detect the true difference between the study groups, and its results cannot be generalized
to the population, since the sample is inadequate to represent the target population. If the sample size is larger than required, more people are unnecessarily exposed to the risk of the
intervention, and resources and time are wasted.
As mentioned earlier, an inadequate sample size may give false results. For example, to evaluate the effect of drug A on fasting blood sugar (FBS), patients would be divided into two
groups: one receiving drug A and the other receiving the standard drug or placebo. Consider two scenarios: in one, drug A decreases FBS by 5 mg/dL in comparison with the
standard drug; in the other, drug A decreases FBS by 10 mg/dL. One criterion that plays a crucial role in the sample size
calculation here is the "effect size," defined as the minimum difference that the investigator wants to detect between the study groups. It is also known as the minimal clinically relevant difference. If
the effect size is large, a smaller sample size is required to demonstrate the effect; if the effect size is small, a larger sample size is needed. In the first scenario, the effect
size is 5 mg/dL, so more patients in both groups are needed to obtain a statistically significant result. In the second scenario, the effect size is larger, that is, 10 mg/dL, so a smaller sample size
is required to detect a statistically significant effect. Thus, if a study planned with an inappropriate sample size gives negative results, the lack of a significant difference could be due to
truly negative results or simply to the small sample size. Several negative studies are reported to have had small sample sizes; these are termed "underpowered studies." Such studies might not
contribute to evidence-based medicine.^3,4
When to Calculate Sample Size?
It should be calculated while the study protocol is being prepared, as it helps in determining whether the study is feasible, ethical, and scientifically sound.
Steps for Sample Size Calculation
1. Formulation of research question:
An adequately formulated research question will have information about the Population under investigation, Intervention, Control group, and the Outcome measures (PICO).
For example: in patients with novel coronavirus disease 2019 (COVID-19), does drug A, compared with drug B, reduce the days of hospital stay? Here, as per PICO, "P" is COVID-19 patients, "I" is
drug A, "C" is drug B, and "O" is the number of days of hospital stay.
2. Stating the null and alternative hypotheses:
The null (H0) and alternative (H1) hypotheses are concise statements of possible versions of the "truth" about the relationship between the predictor of interest and the outcome in the
population. The null hypothesis states a lack of association between the predictor and the outcome; the alternative hypothesis states the existence of an association between the
predictor and the outcome. For example, suppose the research question is: Is the birth weight of neonates born to women consuming tobacco during pregnancy less than that of neonates born to women
not consuming tobacco during pregnancy?
In this example, according to H0, the birth weight of neonates born to women consuming tobacco in pregnancy is similar to that of neonates born to women not consuming tobacco in pregnancy.
According to H1, the birth weight of neonates born to women consuming tobacco in pregnancy is less than that of neonates born to women not consuming tobacco in pregnancy.
3. Choosing the primary outcome and suitable statistical test applicable:
The sample size calculation is usually based on the primary objective of the study. Sample size determination is also related to the selection of the statistical test for data analysis as the
calculation of sample size may also be based on the statistical tests that will be used for data interpretation.
4. Selecting significance level and power of study:
Discussed in “Prerequisites for Sample Size Calculation“ section of the article.
5. Calculating sample size manually using formulas or with statistical software.
Prerequisites for Sample Size Calculation
Many novice researchers choose a sample based on convenience. For example, if an orthopaedician wants to know the prevalence of osteoarthritis in a particular city, he or she
might include all patients visiting his or her hospital over a particular duration, say 2 months. The issue with this selection is that the osteoarthritis patients visiting that
particular hospital may not be truly representative of the city, as there are many other hospitals in the city that orthopaedic patients visit.
The most common question a researcher wonders about is: what is an adequate sample size for the study? As mentioned earlier, various statistical
formulas are used to determine the sample size. Applying these formulas requires predetermined information about several components. ►Table 1 shows how to establish these components for the sample
size calculation.
Table 1. Components required for sample size calculation:

1. Type-1 error (α-value). What is it? False positive results: the probability of falsely detecting a difference when there is no actual difference (falsely rejecting the null hypothesis). Where to find it? Usually taken as 0.05 or 0.01 for medical research.
2. Power (1 − β). What is it? The probability of correctly rejecting the null hypothesis, calculated from the type-2 error (β value) as Power = 1 − β. Where to find it? Usually taken above 80% for medical research.
3. Effect size. What is it? The smallest clinically relevant difference in the outcome. Where to find it? From previous studies, pilot studies, or the experience of the researcher.
4. Variance/standard deviation. What is it? How dispersed or spread out the data values are, i.e., the variability in the outcome (e.g., range, interquartile range, standard deviation). Where to find it? From previous studies, pilot studies, or the experience of the researcher.
5. Dropout rate. What is it? The anticipated percentage of patients who will not complete the study. Where to find it? From previous studies, pilot studies, or the experience of the researcher.
Level of Significance/Alpha Value/Type-1 Error
It is closely linked to the familiar p-value: α is the probability of falsely rejecting H0 that the investigator is willing to accept. For example, suppose we are comparing two drugs for
lipid-lowering efficacy: the first group receives drug A and the second group receives drug B. Here, H0 is that there is no difference in the lipid-lowering efficacy of the two groups, and H1 is
that drug B is more efficacious than drug A. If we consider a p-value of 0.05 significant for this study, we are accepting a 5% chance of detecting a difference in efficacy between the two groups
when, in reality, there is no difference in the efficacy of drug A and drug B at all, that is, a false positive result. For medical research, an α-value of 0.05 is used (►Table 1). The matching
confidence intervals (CI) for the common levels of significance are: (1) 95% CI for the 5% (α = 0.05) level of significance and (2) 99% CI for the 1% (α = 0.01) level of significance.
Power/Type-2 Error
Power is defined as the probability of finding a difference between two study groups when it actually exists. It is an essential measure of the validity of the study. Power is calculated from
another type of error known as the β error or type-2 error. A type-2 error is a false negative result: the failure to detect a difference between two groups when the difference
actually exists. The acceptable value of this error should be decided by the researcher before initiating the study. Conventionally, the acceptable value of the β error is 0.20, that is, a 20%
chance that the null hypothesis is wrongly accepted. The power of the study is calculated as 1 − β; so, if β is 0.20, then the power is 0.8, that is, 80%. Thus, power can be defined as the
probability of correctly rejecting H0. It is usually kept above 80% for medical research (►Table 1).
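As a numerical sketch of how α, β, and power interact (the numbers below are purely illustrative, and the normal-approximation formula is a standard textbook result, not taken from this paper), power for a two-group comparison of means can be checked with Python's standard library:

```python
from statistics import NormalDist

def power_two_means(n, effect, sd, alpha=0.05):
    """Approximate power of a two-group comparison of means
    (normal approximation): Phi(effect/(sd*sqrt(2/n)) - z_{1-alpha/2})."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_effect = effect / (sd * (2 / n) ** 0.5)
    return NormalDist().cdf(z_effect - z_alpha)

# Illustrative numbers: 98 patients per group, an expected difference
# of 20 units, and an SD of 50 give roughly 80% power (beta ~ 0.20).
print(round(power_two_means(98, 20, 50), 2))
```

Raising n above 98 pushes the power above 0.80; shrinking the effect size or inflating the SD pulls it below, which is exactly the trade-off described above.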
Effect Size
As described earlier, the effect size is the difference in the value of the outcome variable between the control group and the test group. If the effect size is large, a smaller sample size is
required to demonstrate the effect; if it is small, a larger sample size is needed (►Table 1). The effect size is a numerical value for continuous outcome variables; for example, if the increase
in hemoglobin caused by drug A is 1 g/dL and by drug B is 5 g/dL, the effect size is 4 g/dL. In the case of binary outcomes, the difference between the event rates of the two groups is taken as
the effect size; for example, for the development of anxiety as an adverse effect of a drug (yes/no), if the difference between the two groups is 5%, then the effect size is 5%. An effect size is
warranted for sample size calculation in analytical studies, not in descriptive or cross-sectional studies. The effect size can be determined from previously conducted
studies, a pilot study, or the experience of the researcher.
Variance/Standard Deviation
If the endpoint used for the sample size calculation is quantitative, then this parameter is required for comparative studies. Like the effect size, the variance/standard deviation is identified
from previously conducted studies, a pilot study, or the experience of the researcher (►Table 1).
Dropout Rate
The dropout rate is an estimate of the number of participants who will leave the study for various reasons. To compensate for these possible dropouts, extra patients need to be accommodated in the
sample size. The adjustment is made with the following formula (►Table 1):
N1 = n / (1 − d), where N1 is the adjusted sample size, n is the required sample size, and d is the dropout rate.
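This adjustment is a one-liner in code. A minimal sketch (the example numbers are illustrative, and rounding up is an assumption, since a sample size must be an integer):

```python
import math

def adjust_for_dropout(n, d):
    """Inflate a required sample size n to compensate for an
    anticipated dropout rate d: N1 = n / (1 - d), rounded up."""
    return math.ceil(n / (1 - d))

# Illustrative numbers: a required sample of 246 with an expected
# 10% dropout rate becomes 274.
print(adjust_for_dropout(246, 0.10))  # 274
```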
The components mentioned above are required for the calculation of sample size for almost all types of study designs. Besides these, other parameters may be required depending on the study design,
for example, precision/margin of error for prevalence studies and pooled prevalence for clinical trials. Details on how to calculate them are given in the relevant sections below.
Importance of a Pilot Study
As mentioned earlier, for sample size calculation, various components, such as prevalence, variance, effect size, and standard deviation, are derived from previously published literature. Often
such information is not found on a literature search; in such cases, a pilot study can be planned. A pilot study is a small-scale study conducted prior to the actual large-scale study to assess
feasibility and scientific validity. It also serves as a source of the information required for the sample size calculation of the subsequent large study. If the results of the pilot study show
that the study is not feasible or useful, the idea of conducting the larger study can be dropped, saving time and resources.^5
How to Calculate Sample Size?
There are different formulas for the calculation of sample size for different study designs, and not choosing the right formula for the study design is a common mistake. In this article, we
explain in detail the sample size formulas for cross-sectional studies and clinical trials.
Cross-Sectional Studies
In such studies, data are collected at a particular time to answer questions about the status of the population at that time. Such studies include questionnaires, disease prevalence
surveys, meta-analyses, etc. Cross-sectional studies are also frequently used to show associations.^6 They usually involve the estimation of a prevalence or of a mean.
For the estimation of a prevalence, the formula used is as follows:

n = (Z_{1−α/2})² × p(1 − p) / d²

where Z_{1−α/2} is the standard normal variate (1.96 at 5% error; ►Table 2), p is the expected proportion in the population, and d is the precision. Precision is a measure of random sampling error.
It is of two types, as follows:
Table 2. Values of the standard normal variate:

α-value (two-sided): Z_{1−α/2}
  0.01 (level of significance 1%): 2.58
  0.05 (level of significance 5%): 1.96
  0.10 (level of significance 10%): 1.64

α-value (one-sided): Z_{1−α}
  0.01 (level of significance 1%): 2.33
  0.05 (level of significance 5%): 1.65
  0.10 (level of significance 10%): 1.28

β-value: Z_{1−β}
  0.01 (power 99%): 2.33
  0.05 (power 95%): 1.65
  0.20 (power 80%): 0.84
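These variates are simply quantiles of the standard normal distribution, so they can be reproduced with Python's standard library rather than looked up. A sketch using `statistics.NormalDist` (differences in the last digit, such as 1.64 versus 1.65, come only from rounding 1.6449):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse CDF (quantile function) of N(0, 1)

for alpha in (0.01, 0.05, 0.10):
    print(f"alpha={alpha}: two-sided {z(1 - alpha / 2):.2f}, "
          f"one-sided {z(1 - alpha):.2f}")

for beta in (0.01, 0.05, 0.20):
    print(f"beta={beta}: Z_(1-beta) = {z(1 - beta):.2f}")
```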
1. Absolute precision: refers to the actual uncertainty in a quantity. For example, if the prevalence of tonsillitis in children is 20 ± 10%, the absolute uncertainty is 10 percentage points.
2. Relative precision: expresses the uncertainty as a fraction of the quantity of interest. For the same example, a relative precision of 10% means 10% of the 20% prevalence, which is equal
to 2 percentage points.
Conventionally, the absolute precision is taken as 5% if the prevalence of the disease is expected to be between 10 and 90%. If the prevalence is below 10%, the precision is usually taken as half
of the prevalence, and if the prevalence is expected to be more than 90%, d is calculated as 0.5(1 − P), where P is the prevalence.^7
For example, a researcher wants to calculate the sample size for a cross-sectional study of the prevalence/proportion of asthma among traffic police in a city. As per a previously published
study, the prevalence of asthma among traffic police in the city is around 10%, and the researcher wants to calculate the sample size with an absolute precision of 5% and a type-1 error of 5%. Then:
• Z_{1−α/2} = 1.96 (►Table 2)
• p = 0.10 (percentage converted into a proportion)
• d = 0.05
Hence, by putting these values in the above-mentioned formula, the required sample size is obtained.
This sample size can be adjusted for the non-response/dropout rate. If a non-response rate of 10% is expected, then, as per the formula mentioned under the heading "Dropout Rate" in the earlier text:
N1 = n / (1 − d), where N1 is the adjusted sample size, n is the required sample size, and d is the dropout rate.
So, total of 307 traffic police men need to be screened for asthma for this study.
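The prevalence formula can be wrapped in a short helper. A sketch (the function name and the rounding-up convention are illustrative choices, and the numbers shown assume an expected prevalence of 20% with 5% absolute precision):

```python
import math

def n_for_prevalence(p, d, z=1.96):
    """Sample size for estimating a prevalence p with absolute
    precision d at a 5% type-1 error: n = z^2 * p * (1 - p) / d^2,
    rounded up to the next integer."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# Illustrative numbers: expected prevalence 20%, absolute precision 5%.
print(n_for_prevalence(0.20, 0.05))  # 246
```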
If the study involves estimation of a mean in a cross-sectional study, then the formula for sample size calculation is:

n = (Z_{1−α/2})² × SD² / d²

where Z_{1−α/2} is the standard normal variate (1.96 at 5% error; ►Table 2), d is the precision of measurement with respect to the endpoint, and SD is the standard deviation, the value of which is
extracted from previously published similar studies, an internal pilot study, or from experienced researchers working in the same area.
For example, suppose the objective is to estimate the average blood sugar level in the last trimester of pregnancy in women with gestational diabetes in a particular region. On review of the
literature, a similar study was found with an SD of 30 mg/dL. To calculate the sample size based on this value of SD and with a precision of 5 mg/dL around the true value of blood glucose in the
final trimester of pregnancy, these values are inserted in the formula above:

n = (1.96)² × (30)² / (5)² ≈ 138

Thus, the sample size needed for this study will be 138. It can be adjusted as per the expected dropout rate as mentioned in the previous section.
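A corresponding helper for the mean-estimation formula, a sketch rounding to the nearest integer to match the worked example (many texts round up instead):

```python
def n_for_mean(sd, d, z=1.96):
    """Sample size for estimating a mean with precision d when the
    outcome has standard deviation sd: n = z^2 * sd^2 / d^2.
    Rounded to the nearest integer here; many texts round up."""
    return round(z ** 2 * sd ** 2 / d ** 2)

# SD = 30 mg/dL and precision = 5 mg/dL, as in the worked example.
print(n_for_mean(30, 5))  # 138
```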
Clinical Trials
Clinical trial is the type of research that studies new tests and treatments and evaluates their effects on human health outcomes.^8
In a clinical trial, the researcher may be comparing either the proportions of two groups (a qualitative endpoint) or the means of two groups (a quantitative endpoint).
If the clinical trial involves comparison of a qualitative endpoint between two groups, that is, the difference between two proportions, then the formula used for sample size calculation is based
on the following quantities:
• Z_{1−α/2}, the standard normal variate, 1.96 at 5% error, and Z_β, 0.842 at 80% power (►Table 2);
• p1 − p2, the effect size, that is, the expected difference between the two groups;
• P, the pooled prevalence, calculated by adding the prevalence in group 1 and the prevalence in group 2 and dividing the sum by 2.
As mentioned earlier, the values of the effect size and the pooled prevalence are obtained from previous studies, a pilot study, or the experience of the researcher.^9
For example, one wants to find out the effect of drug A on the mortality in patients with colon cancer. For this study, patients will be divided in two groups. One group will receive test drug A
and another group will receive placebo, and the standard drug therapy will be given to both groups.
To calculate the sample size for this study, the information required is the expected difference between the two groups and the pooled prevalence. On searching the literature, it was found that
mortality under standard care treatment is 20%. For drug A, being a new drug, mortality data are not available; on discussion with other researchers working in this area, it was decided that a
50% reduction in mortality would be clinically significant, so the expected mortality in the drug-A group is taken as 10%. Using these values, the effect size is 10% (20 − 10) and the pooled
prevalence is 15% ((20 + 10)/2). On converting the percentages to proportions, the effect size is 0.10 and the pooled prevalence is 0.15. On entering these parameters in the formula, the sample
size is obtained.
So, the sample size per group for this clinical trial would be 285. This can be adjusted for drop rate as per the method mentioned earlier.
If, in a clinical trial, the objective is to estimate the difference of a quantitative endpoint between two groups, then the formula used for sample size calculation (per group) is:

n = 2 × (Z_{1−α/2} + Z_β)² × SD² / d²

where Z_{1−α/2} is the standard normal variate, 1.96 at 5% error, and Z_β is 0.842 at 80% power (►Table 2);
SD is the standard deviation of the outcome, decided based on previous studies or by the other means discussed earlier in the text;
• d is the effect size, that is, the expected difference between the two means, based on previously available data.
For example, a new antidiabetic drug A is to be evaluated for reduction of the fasting blood glucose (FBG) level in comparison with the old antidiabetic drug B. For this study, diabetic patients
will be randomly allocated to two groups: one group will be administered the new drug A and the other group will receive drug B. Literature from previous similar studies suggests that the
reduction of FBG by drug A is 20 mg/dL more than by drug B and that the SD of the difference is 50 mg/dL. On entering these values in the formula:

n = 2 × (1.96 + 0.842)² × (50)² / (20)² ≈ 98
Thus, the sample size needed for this study will be 98 in each group. Sample size may be adjusted to accommodate drop rate by the formula mentioned in earlier sections.
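The two-group calculation can likewise be scripted. A sketch (function name and rounding convention are illustrative choices; rounding to the nearest integer matches the worked example, while many texts round up):

```python
def n_per_group_means(sd, d, z_alpha=1.96, z_beta=0.842):
    """Per-group sample size for comparing two means at 5% type-1
    error and 80% power: n = 2 * (z_alpha + z_beta)^2 * sd^2 / d^2."""
    return round(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / d ** 2)

# SD of the difference 50 mg/dL, expected difference 20 mg/dL,
# as in the worked example.
print(n_per_group_means(50, 20))  # 98
```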
What to Mention in the Research Protocol/Report about Sample Size
Reporting the details of the sample size calculation is often disregarded in research protocols, as well as in the final research report/article. On the contrary, it should be described in detail
so that the authenticity of the sample size is verifiable (►Table 3).
Table 3. Notation used in the sample size formulas for other study designs:

Cohort studies^10,11:
• Z_{1−β} = desired power = 0.84 at 80% power
• Z_{1−α/2} = standard normal variate = 1.96 at 5% error
• p0 = probability of the event in controls, from previous studies
• p1 = probability of the event in the experimental group, from previous studies
• m = number of control subjects per experimental subject
• p = [p1 + (m × p0)]/(m + 1)

Case-control studies^10:
• Z_{1−β} = desired power (0.84 for 80% power and 1.28 for 90% power)
• Z_{1−α/2} = standard normal variate = 1.96 at 5% error
• P1 = proportion in cases; P2 = proportion in controls
• p = proportion of the population = (P1 + P2)/2
• r = control-to-case ratio (1 if the same number of patients in both groups)

Diagnostic tests^12,13 (formulas for determining sensitivity and specificity):
• Z = conventionally taken as 1.96, corresponding to a 95% confidence interval
• P = prevalence rate of the disease in the study population
• W = maximum acceptable width of the 95% CI, conventionally taken as 10%
• Specificity, sensitivity, and p-values are inferred from previous studies

Animal studies^14,15 (ANOVA designs; each gives a minimum number of animals per group):
• One-way ANOVA (group comparison): k = number of groups, n = number of animals per group, N = total number of animals
• One within-factor, repeated-measures ANOVA (one group, repeated measures): r = number of repeated measurements
• One between-factor, one within-factor, repeated-measures ANOVA (group comparison, repeated measurements): k, n, and r as above
• If the study involves sacrificing animals, then n should be multiplied by r

Abbreviations: ANOVA, analysis of variance; CI, confidence interval.
The following information should be included in the study protocol for the example mentioned above in the article:
"Sample size was calculated based on the previously published study by Charan and Kantharia,^14 in which the prevalence of diabetes was 20%. With an absolute precision of 5 percentage points and a
type-1 error of 5%, the sample size was calculated as 246. After adjusting the sample size for a dropout rate of 10%, the final sample size was 274. The sample size was calculated manually using
the formula for cross-sectional studies."
Thus, the sample size section of the research protocol must contain two references: one for the study from which the prevalence of the disease is derived and another for the source of the
sample size formula. Besides this, if any software is used for the sample size calculation, it too needs to be mentioned.
Sample size calculation is one of the important aspects of planning research, and any laxity in its estimation may lead to misleading or incorrect findings. The important factors to consider
during the calculation of sample size are: the type of study, effect size, type of outcome, variance of the outcome, significance level, and power of the test. Sample size calculation requires a
thorough review of the literature to determine parameters such as prevalence and effect size. Sample size calculations should be explained in detail in the study protocol and publication,
so that they can be authenticated by anyone.
Conflict of Interest
None declared.
Show Sections | {"url":"https://nams-annals.in/sample-size-calculation-in-medical-research-a-primer/","timestamp":"2024-11-10T12:30:44Z","content_type":"text/html","content_length":"342855","record_id":"<urn:uuid:323f4356-88b1-46ff-9fcd-fce6b345d726>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00082.warc.gz"} |
Comparing Track Length Using Box Plots
Question Video: Comparing Track Length Using Box Plots Mathematics
The box plots represent data collected on the length of recorded tracks from the top 100 rap and the top 100 heavy metal music charts. On average which genre of music has longer tracks? Compare the
significance of the track length 4.40 minutes for the two music genres.
Video Transcript
The box plots below represent data collected on the length of recorded tracks from the top 100 rap and the top 100 heavy metal music charts. On average, which genre of music has longer tracks? And
then compare the significance of the track length 4.40 minutes for the two music genres.
Well, if we take a look at part one first, the measure of average used in a box plot is the median, which is the vertical bar inside the box. Well, if we read down from our medians to the axis, we
can see that the median track length for rap tracks is 4.00 minutes, whereas the median for heavy metal tracks is 4.80 minutes. And since the median for heavy metal tracks is higher than that for rap
tracks, we can conclude that heavy metal tracks are on average longer than rap tracks.
Although it’s not part of this question, we could also look at the spread of the track lengths. If we did that, we would see that the range, which is the maximum value minus the minimum
value, of rap tracks is in fact greater, so rap has a greater spread of values. The IQR, or interquartile range, which is Q three minus Q one, is also greater. So we can see that the
spread of the track lengths of rap tracks is greater than that of metal tracks.
Okay, so now let’s take a look at the second part of the question. Well, for the second part of the question, what we want to do is concentrate on the value 4.40 minutes. So if we take a look at what
this means for rap tracks, well 4.40 minutes is where the right-hand vertical bar of the box sits. And what this represents is the value of Q three, our upper quartile, so the third quartile for rap.
And what this means is that 75 percent of rap tracks are less than 4.40 minutes long and only 25 percent are in fact longer than 4.40 minutes long.
However, if we look at heavy metal tracks, 4.40 minutes represents Q one, so the lower quartile. And what this means is that 25 percent of heavy metal tracks are in fact shorter than 4.40 minutes
long and 75 percent of them are longer than 4.40 minutes long. So, hence, what we can say about the significance of 4.40 minutes is that 75 percent of rap tracks are shorter than 4.40 minutes.
However, 75 percent of heavy metal tracks are in fact longer than 4.40 minutes.
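The quartile reading in this transcript can be reproduced with a few lines of Python. The track lengths below are made up for illustration (the real chart data behind the box plots is not given); `statistics.quantiles` with `n=4` returns the three cut points Q1, the median, and Q3:

```python
import statistics

# Hypothetical track lengths in minutes (illustrative only --
# the actual top-100 data behind the box plots is not available here).
tracks = [3.2, 3.8, 4.0, 4.1, 4.4, 4.6, 5.0]

# statistics.quantiles with n=4 returns the three cut points Q1, median, Q3.
q1, median, q3 = statistics.quantiles(tracks, n=4)

iqr = q3 - q1                       # interquartile range, Q3 minus Q1
spread = max(tracks) - min(tracks)  # range, maximum minus minimum

print(q1, median, q3)
```

The same comparison the video makes, median against median and spread against spread, then follows directly from these numbers.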
Single machine scheduling problem - Total Early Work
Published: 10 June 2022 | Version 1 | DOI: 10.17632/sz6f8z3g3z.1
Rachid Benmansour et al.
The benchmark set of instances is generated following instructions provided in the paper of Ben-Yehoshua and Mosheiov [1], as follows. Each instance is characterized by the number n of jobs to schedule, the maximum processing time of a job Pmax, and the density of due dates alpha. In the benchmark data set, n takes values from the set {20, 50, 100, 200, 500, 1000, 10000}, Pmax varies in the set {50, 100, 250, 500, 1000}, while the tightness factor alpha takes the value 0.7 or 1.0. For each combination of n, Pmax, and alpha values, 10 different instances are generated. In each of the ten instances,
processing times of jobs are chosen uniformly from the interval [1, Pmax], while the due dates are chosen uniformly from the interval [1, alpha *SumP], where SumP is the sum of the processing times
of the jobs. The generated benchmark set is divided into three subsets according to the problem size/difficulty: i) small instances with n <= 200, ii) medium instances with n in {500, 1000}, and iii) large instances with n = 10000.
REFERENCE: [1] Ben-Yehoshua, Y., & Mosheiov, G. (2016). A single machine scheduling problem to minimize total early work. Computers & Operations Research, 73, 115-118.
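A generator following this description might look like the Python sketch below. The function name and the seeding are my own additions; integer-valued processing times and due dates are assumed:

```python
import random

def generate_instance(n, pmax, alpha, seed=None):
    """Generate one instance per the scheme described above (a sketch).

    Processing times are drawn uniformly from [1, Pmax]; due dates are
    drawn uniformly from [1, alpha * sum of processing times].
    """
    rng = random.Random(seed)
    p = [rng.randint(1, pmax) for _ in range(n)]  # processing times
    sum_p = sum(p)
    d = [rng.randint(1, int(alpha * sum_p)) for _ in range(n)]  # due dates
    return p, d

# One small instance: n = 20 jobs, Pmax = 50, alpha = 0.7.
p, d = generate_instance(n=20, pmax=50, alpha=0.7, seed=0)
print(len(p), len(d))
```

Looping this over the listed n, Pmax, and alpha values, ten seeds each, reproduces the structure of the benchmark set.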
Steps to reproduce
Please follow the instructions given in the paper of Ben-Yehoshua and Mosheiov [1].
Computer Science
Edelsbrunner, H., & Sharir, M. (1991). A hyperplane incidence problem with applications to counting distances. In Applied Geometry and Discrete Mathematics: The Victor Klee Festschrift (DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol. 4, pp. 253-263). American Mathematical Society. ISBN 978-0897913850.
Abstract: This paper proves an O(m^(2/3) n^(2/3) + m + n) upper bound on the number of incidences between m points and n hyperplanes in four dimensions, assuming all points lie on one side of each hyperplane and the points and hyperplanes satisfy certain natural general position conditions. This result has applications to various three-dimensional combinatorial distance problems. For example, it implies the same upper bound for the number of bichromatic minimum distance pairs in a set of m blue and n red points in three-dimensional space. This improves the best previous bound for this problem.
A blog by Miguel PdL
LOI Matchday 13 Non Predictions
May 27th, 2021
This is a painful post as I have to admit I missed the opportunity to make any predictions before the Matchday 13 games kicked off, so I'm left to compare what the predictions would have been side by side with the actual results.
Before I get into that, from matchday 12 there were 2 correct out of the 5, making it a running total of 23 out of 59.
So to the rubber that is matchday 13, with predictions and results.
Bohemians (24%) vs Dundalk (49%)
The Probability of a Draw between Bohemians and Dundalk is 26%
So I would have gone for a Dundalk win and the score was ….. Bohemians 5 – 1 Dundalk. So way off there.
Next up
Derry City (36%) vs St. Patricks (36%)
The Probability of a Draw between Derry City and St. Patricks is 28%
The system looks to be a draw, and the result: Derry City 2 – 2 St. Patricks. A draw, so on the money here.
Waterford (57%) vs Finn Harps (18%)
The Probability of a Draw between Waterford and Finn Harps is 24%
The Blues for a predicted win. The result was Waterford 1 – 2 Finn Harps. Waterford played very well in the 1st half but Finn Harps came on strong in the 2nd.
Drogheda (54%) vs Longford (22%)
The Probability of a Draw between Drogheda and Longford is 23%
So a Drogs win predicted, and the result was Drogheda 4 – 1 Longford, so a good win all around.
Shamrock Rovers (55%) vs Sligo Rovers (19%)
The Probability of a Draw between Shamrock Rovers and Sligo Rovers is 26%
So a clear Rovers (SHA) win predicted, and the final result: Shamrock Rovers 0 – 1 Sligo Rovers. Well, there's a turn up for the books in a whole load of ways, although then again it sounds like it was well deserved. Sligo for the league by the looks of things now!!!
LOI Matchday 12 Predictions
May 20th, 2021
Starting to bomb out on these predictions, 1 out of 4 last week, and it could have been close on two of the matches, but it is what it is.
Running total is 21 out of 54.
This week's predictions; we will start again with Waterford, under new management.
Waterford (40%) vs Derry City (33%)
The Probability of a Draw between Waterford and Derry City is 27%
The number of sim home wins for Waterford is = 3
The number of sim away wins for Derry City is = 4
The number of sim draw is = 3
Waterford (0.62xG) vs Derry City (0.68xG)
All indications here are for a draw ….. I’ll go with the draw and maybe hope that new management spur the Blues towards a real win.
St. Patricks (48%) vs Bohemians (23%)
The Probability of a Draw between St. Patricks and Bohemians is 28%
The number of sim home wins for St. Patricks is = 4
The number of sim away wins for Bohemians is = 5
The number of sim draw is = 1
St. Patricks (1.51xG) vs Bohemians (1.45xG)
Now I went all gung-ho with St. Pats last week and they lost badly, and there will be goals in this match, so I'll stick with the St. Pats win.
Dundalk (45%) vs Shamrock Rovers (27%)
The Probability of a Draw between Dundalk and Shamrock Rovers is 27%
The number of sim home wins for Dundalk is = 6
The number of sim away wins for Shamrock Rovers is = 2
The number of sim draw is = 2
Dundalk (1.66xG) vs Shamrock Rovers (1.36xG)
Ahhh, the data is messing with me now, all pointers are to a Dundalk win ….. it can't be. I'm going against the grain here and will predict a draw (dare I say based on the xG).
Finn Harps (34%) vs Drogheda (38%)
The Probability of a Draw between Finn Harps and Drogheda is 28%
The number of sim home wins for Finn Harps is = 5
The number of sim away wins for Drogheda is = 2
The number of sim draw is = 3
Finn Harps (1.14xG) vs Drogheda (0.76xG)
Fair play to the Drogs, they've really embedded themselves into the season well, and Finn Harps have tipped away nicely; nothing much between these teams and so I will plump for a draw.
Sligo Rovers (63%) vs Longford (14%)
The Probability of a Draw between Sligo Rovers and Longford is 21%
The number of sim home wins for Sligo Rovers is = 6
The number of sim away wins for Longford is = 2
The number of sim draw is = 2
Sligo Rovers (0.94xG) vs Longford (0.74xG)
Nice bit of news about the new stadium for Sligo, and I'll go for a win for Sligo here too.
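The win/draw/loss percentages and simulated-match counts quoted throughout these posts could plausibly come from a Poisson model on each side's expected goals. The sketch below is my own reconstruction, not the actual code behind the blog's numbers, and the rates fed in are purely illustrative:

```python
from math import exp, factorial

def poisson(k, lam):
    """P(X = k) for a Poisson-distributed goal count with mean lam."""
    return lam ** k * exp(-lam) / factorial(k)

def match_probs(lam_home, lam_away, max_goals=10):
    """Home-win / draw / away-win probabilities, assuming independent
    Poisson goal counts for each side, truncated at max_goals."""
    home = draw = away = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson(h, lam_home) * poisson(a, lam_away)
            if h > a:
                home += p
            elif h == a:
                draw += p
            else:
                away += p
    return home, draw, away

# Illustrative xG-style rates, not taken from the blog's model.
home, draw, away = match_probs(1.5, 1.0)
print(round(home, 2), round(draw, 2), round(away, 2))
```

Repeating this with ten random score draws instead of the analytic sums would give the "number of sim home wins" style counts shown above.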
LOI Matchday 11 Predictions
May 14th, 2021
Oh for fffffff sake, match day 10 was a wash-out, especially the Waterford vs Drogheda match where I could not have predicted the chaos there.
Running total is 20 out of 50.
For this weekend, there’s just the 4 matches.
Finn Harps (11%) vs Dundalk (67%)
The Probability of a Draw between Finn Harps and Dundalk is 20%
The number of sim home wins for Finn Harps is = 0
The number of sim away wins for Dundalk is = 9
The number of sim draw is = 1
Finn Harps (1.12xG) vs Dundalk (1.29xG)
All pointing towards a Dundalk win, and they're not exactly in a great place, and Finn Harps have been tipping along nicely ….. but I will stick to the data guns and say Dundalk for the win.
Drogheda (25%) vs St. Patricks (49%)
The Probability of a Draw between Drogheda and St. Patricks is 25%
The number of sim home wins for Drogheda is = 3
The number of sim away wins for St. Patricks is = 4
The number of sim draw is = 3
Drogheda (1.07xG) vs St. Patricks (1.2xG)
Can Drogheda score goals? Their xG is going to be way off given the 7-goal win last week. Will go for a St. Pats win.
Shamrock Rovers (56%) vs Derry City (19%)
The Probability of a Draw between Shamrock Rovers and Derry City is 25%
The number of sim home wins for Shamrock Rovers is = 3
The number of sim away wins for Derry City is = 4
The number of sim draw is = 3
Shamrock Rovers (2.07xG) vs Derry City (0.66xG)
The sims seem to indicate a draw, but overall a Rovers win here is predicted.
Longford (24%) vs Bohemians (50%)
The Probability of a Draw between Longford and Bohemians is 26%
The number of sim home wins for Longford is = 2
The number of sim away wins for Bohemians is = 6
The number of sim draw is = 2
Longford (0.7xG) vs Bohemians (1.43xG)
And finally a Bohs win here.
What could go wrong ?
Expected Goals (xG)
May 9th, 2021
As part of my LOI weekly predictions (don't ask about match day 10, it was a wipe-out) I've also included an indication of the teams' expected goals (xG). Now it might be worth articulating an introduction to expected goals, and there are four videos worth a review.
• One by David Sumpter (Friends of Tracking)
• One by Duncan Alexander at Opta.
• One by Tifo football.
• One with example goals with the xG overlaid on the screen.
First up David Sumpter on How to explain expected goals to a football player.
In this video David takes us through the probability of scoring a goal in the penalty area, with an overview of Barcelona's expected goals statistics, indicating how a penalty carries a 75% chance of scoring in comparison to a 7% chance for other chances, and an explanation of what this means.
True to the point, there can and should be a reasoned discussion around goal scoring (and goal prevention) instead of always an emotional one, and xG gives some insights on this.
Opta’s Duncan Alexander takes us through the expected goals metric in the video Opta Expected Goals.
Of note (at the time of the video), 4 variables are considered by Opta: Assist Type, Header/Foot, Big Chance, and Angle/Distance.
The video by Tifo Football is a nice By The Numbers presentation on What is xG ?
They describe in nicer detail how good a shooting chance was, that is, how likely similar chances were to result in a goal. They also highlight that models like StrataBet consider defenders in the way while other models like Opta do not, and that sets of 5-10 games get the best value from xG.
Expected goals (xG)
So expected goals (xG) is the probability of scoring a goal, based on how good a shooting chance was, that is, how likely similar chances were to result in a goal.
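As a toy illustration of the idea (this is not Opta's, StrataBet's, or anyone's real model; the features and coefficients here are entirely invented), an xG model can be as simple as a logistic function of shot features such as distance and angle:

```python
from math import exp

def toy_xg(distance_m, angle_deg, header=False):
    """Toy xG: a logistic function of distance, angle and body part.
    All coefficients are made up purely for illustration."""
    z = 1.2 - 0.11 * distance_m + 0.02 * angle_deg - (0.8 if header else 0.0)
    return 1.0 / (1.0 + exp(-z))

# A close, central shot should carry a much higher xG than a long-range one.
close_shot = toy_xg(distance_m=8, angle_deg=45)
long_shot = toy_xg(distance_m=28, angle_deg=15)
print(round(close_shot, 2), round(long_shot, 2))
```

Real providers fit coefficients like these against tens of thousands of historical shots, which is where the "how likely similar chances were to result in a goal" part comes from.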
Finally here’s a video demonstration of expected goals (xG)
LOI Matchday 10 Predictions
May 6th, 2021
First up, news of the week in the LOI has to be the dismissal of Kevin Sheedy from Waterford FC. I think the writing was on the wall from early on, dare I say before the season even started. Back to the drawing board for Waterford. Before making a prediction on their game, and the four others for matchday 10, a review of match day 9 shows a 3 out of 5 result, with two draws called correctly.
The running total is 20 correct results out of 45.
Now to the predictions for matchday 10
Derry City (64%) vs Longford (14%)
The Probability of a Draw between Derry City and Longford is 20%
The number of sim home wins for Derry City is = 8
The number of sim away wins for Longford is = 1
The number of sim draw is = 1
Derry City (1.0xG) vs Longford (0.78xG)
I went for a Derry win last week and it didn’t work out, but the predictions are aiming towards a Derry win again here, and so we stay with the Derry win (predicted)
Dundalk (61%) vs Sligo Rovers (16%)
The Probability of a Draw between Dundalk and Sligo Rovers is 22%
The number of sim home wins for Dundalk is = 5
The number of sim away wins for Sligo Rovers is = 2
The number of sim draw is = 3
Dundalk (1.41xG) vs Sligo Rovers (1.42xG)
Sligo are scoring away from home (xG) and these two teams had a draw on the opening day of the season. Data says Dundalk win, the heart says draw, but will go for the Dundalk win.
Waterford (55%) vs Drogheda (21%)
The Probability of a Draw between Waterford and Drogheda is 23%
The number of sim home wins for Waterford is = 6
The number of sim away wins for Drogheda is = 2
The number of sim draw is = 2
Waterford (0.62xG) vs Drogheda (0.76xG)
This will be a tight game again, well mainly a 0-0 type match, but the data is pointing towards a Waterford FC win, and we all need some cheering up, so a Waterford win in this one.
Bohemians (58%) vs Finn Harps (16%)
The Probability of a Draw between Bohemians and Finn Harps is 26%
The number of sim home wins for Bohemians is = 3
The number of sim away wins for Finn Harps is = 3
The number of sim draw is = 4
Bohemians (1.38xG) vs Finn Harps (1.26xG)
I’m going to call a draw in this one, the simmed matches and xG are indicating a draw and Finn Harps are tipping along nicely and I cannot push for the Bohs win, so a draw it will be.
St. Patricks (33%) vs Shamrock Rovers (38%)
The Probability of a Draw between St. Patricks and Shamrock Rovers is 29%
The number of sim home wins for St. Patricks is = 2
The number of sim away wins for Shamrock Rovers is = 4
The number of sim draw is = 4
St. Patricks (1.59xG) vs Shamrock Rovers (1.23xG)
And finally to the top of the table clash and this has a draw written all over it, and so another draw.
LOI Matchday 9 Predictions
May 2nd, 2021
It’s getting harder to keep up with all the matches happening in the LOI, but at least match day 8 (in review) was a good one, as another 4 predictions out of 5 were on the button. Although I was way off on the one that was incorrect, with Bohs getting beaten by Derry City; could anyone have seen that coming?
The running total is 17 out of 40.
Now in to match day 9 predictions.
Drogheda (32%) vs Bohemians (41%)
The Probability of a Draw between Drogheda and Bohemians is 27%
The number of sim home wins for Drogheda is = 2
The number of sim away wins for Bohemians is = 6
The number of sim draw is = 2
Drogheda (1.12xG) vs Bohemians (1.45xG)
This is too close to call and so going for a draw.
Shamrock Rovers (58%) vs Waterford (18%)
The Probability of a Draw between Shamrock Rovers and Waterford is 24%
The number of sim home wins for Shamrock Rovers is = 3
The number of sim away wins for Waterford is = 6
The number of sim draw is = 1
Shamrock Rovers (2.09xG) vs Waterford (0.62xG)
The sim matches are playing tricks with me, but the call here has to be a Shamrock Rovers win
Longford (11%) vs Dundalk (69%)
The Probability of a Draw between Longford and Dundalk is 17%
The number of sim home wins for Longford is = 2
The number of sim away wins for Dundalk is = 7
The number of sim draw is = 1
Longford (0.77xG) vs Dundalk (1.2xG)
Dundalk are on a roll now and I’ll take them for a win in this one.
Sligo Rovers (36%) vs St. Patricks (35%)
The Probability of a Draw between Sligo Rovers and St. Patricks is 29%
The number of sim home wins for Sligo Rovers is = 4
The number of sim away wins for St. Patricks is = 4
The number of sim draw is = 2
Sligo Rovers (0.89xG) vs St. Patricks (0.90xG)
This is looking like a draw and will call it for a draw.
Derry City (61%) vs Finn Harps (15%)
The Probability of a Draw between Derry City and Finn Harps is 23%
The number of sim home wins for Derry City is = 6
The number of sim away wins for Finn Harps is = 2
The number of sim draw is = 2
Derry City (0.83xG) vs Finn Harps (1.08xG)
I wonder is Finn Harps early season bubble about to burst with Derry catching them ? All the pointers are towards a Derry City win so will go for a Derry win.
LOI Matchday 8 Predictions
April 30th, 2021
Match day 7 review, and 4 predictions out of 5 is pretty decent this time around; the league must be settling down now.
The running total is 13 out of 35.
Now in to match day 8 predictions.
St. Patricks (68%) vs Longford (11%)
The Probability of a Draw between St. Patricks and Longford is 18%
The number of sim home wins for St. Patricks is = 9
The number of sim away wins for Longford is = 0
The number of sim draw is = 1
St. Patricks (1.72xG) vs Longford (0.81xG)
Going for a St. Pats win here
Drogheda (30%) vs Sligo Rovers (43%)
The Probability of a Draw between Drogheda and Sligo Rovers is 26%
The number of sim home wins for Drogheda is = 0
The number of sim away wins for Sligo Rovers is = 5
The number of sim draw is = 5
Drogheda (1.11xG) vs Sligo Rovers (1.24xG)
Taking it as a draw for this one
Bohemians (40%) vs Derry City (31%)
The Probability of a Draw between Bohemians and Derry City is 28%
The number of sim home wins for Bohemians is = 6
The number of sim away wins for Derry City is = 2
The number of sim draw is = 2
Bohemians (0.97xG) vs Derry City (0.40xG)
Going for a narrow win for Bohs on this one.
Waterford (24%) vs Dundalk (51%)
The Probability of a Draw between Waterford and Dundalk is 24%
The number of sim home wins for Waterford is = 1
The number of sim away wins for Dundalk is = 8
The number of sim draw is = 1
Waterford (0.58xG) vs Dundalk (1.11xG)
Ahh hate saying this but looking like a Dundalk win
Finn Harps (14%) vs Shamrock Rovers (61%)
The Probability of a Draw between Finn Harps and Shamrock Rovers is 24%
The number of sim home wins for Finn Harps is = 0
The number of sim away wins for Shamrock Rovers is = 8
The number of sim draw is = 2
Finn Harps (1.16xG) vs Shamrock Rovers (1.17xG)
The xG is pointing at a draw, and Finn Harps have started well, but will go for the Rovers win in this one.
Good Practice in Football Visualisation
April 25th, 2021
This is a review of a special guest lecture from Opta's Peter McKeever where he gives some insights into how to make better data visualisations.
In this video Peter covers:
• Elements of Matplotlib
• Under the Hood: rcParams
• Layering objects with zorder in plots
• Works through a real world example
The original code is available on GitHub under the project "friends-of-tracking-viz-lecture" and worked through in this video.
Peter’s slides are available here in this PDF document. Peter also has an excellent blog with code and further examples.
Set up
Under the organisation on Github called mmoffoot I forked the “friends-of-tracking-viz-lecture” repo into the mmoffoot area.
Then I created a new branch called ‘tottenham’ in this area to cover the changes I made.
What was coded
This is another Jupyter Notebook, but this time I just could not get it to load the highlight_text Python library, and so I had to create a bog-standard Python programme to run through this code.
Then I found that highlight_text has changed its interface slightly since Peter coded against it. For example in the Notebook there’s the line
htext.fig_htext(s.format(team,ssn_start,ssn_end),0.15,0.99,highlight_colors=[primary], highlight_weights=["bold"],string_weight="bold",fontsize=22, fontfamily=title_font,color=text_color)
I had to change it to
htext.fig_text(0.15,0.86,s.format(team,ssn_start,ssn_end),highlight_colors=[primary], highlight_weights=["bold"],fontweight="bold",fontsize=22, fontfamily=title_font,color=text_color)
Given that Peter McKeever has run through all the elements coded via the YouTube video and there’s an associated slide deck this is a really nice resource to get started on exact visual items and how
to then code them up. Of course for devilment I’ve gone for a Tottenham theme for the final output.
Tottenham’s goal difference from 2010/2011 to 2019/2020
Peter also talks about the blog posts by Lisa Rost, which are well worth a review on how to visualise data. He also gives a pointer towards Tim Bayer and his work doing some things for Fantasy Premier League, all of which is excellent.
Finally there's ThemePy, which is being developed; it is a theme selector/creator and aesthetic manager for Matplotlib. This wrapper's aim is to simplify the process of customising matplotlib plots and to enable users who are relatively new to Python or matplotlib to move beyond the default plotting params we are given with matplotlib.
LOI Matchday 7 Predictions
April 22nd, 2021
Match day 6 review and got 3 predictions out of 5, and things are getting a bit better.
The running total is 9 out of 30.
Now in to match day 7 predictions
Finn Harps (20%) vs St. Patricks (54%)
The Probability of a Draw between Finn Harps and St. Patricks is 26%
The number of sim home wins for Finn Harps is = 1
The number of sim away wins for St. Patricks is = 8
The number of sim draw is = 1
Finn Harps (1.05xG) vs St. Patricks (0.72xG)
Predicting St. Pats win here
Shamrock Rovers (56%) vs Bohemians (18%)
The Probability of a Draw between Shamrock Rovers and Bohemians is 26%
The number of sim home wins for Shamrock Rovers is = 8
The number of sim away wins for Bohemians is = 0
The number of sim draw is = 2
Shamrock Rovers (1.78xG) vs Bohemians (1.32xG)
Everything pointing to a Rovers win, let’s see
Dundalk (73%) vs Drogheda (8%)
The Probability of a Draw between Dundalk and Drogheda is 14%
The number of sim home wins for Dundalk is = 6
The number of sim away wins for Drogheda is = 0
The number of sim draw is = 4
Dundalk (1.45xG) vs Drogheda (0.67xG)
A Dundalk win here.
Waterford (63%) vs Longford (15%)
The Probability of a Draw between Waterford and Longford is 20%
The number of sim home wins for Waterford is = 7
The number of sim away wins for Longford is = 1
The number of sim draw is = 2
Waterford (0.57xG) vs Longford (0.70xG)
This is looking like a Waterford win, they certainly need one.
Sligo Rovers (42%) vs Derry City (30%)
The Probability of a Draw between Sligo Rovers and Derry City is 28%
The number of sim home wins for Sligo Rovers is = 6
The number of sim away wins for Derry City is = 2
The number of sim draw is = 2
Sligo Rovers (0.94xG) vs Derry City (0.26xG)
Going to go for a draw here.
LOI Matchday 6 Predictions
April 19th, 2021
Match day 5 review and got 2 predictions out of 5, and I'm going a bit mad that I didn't stick with the system selection of Sligo Rovers over Finn Harps.
The running total is 6 out of 25.
Now in to match day 6 predictions
Drogheda (18%) vs Shamrock Rovers (58%)
The Probability of a Draw between Drogheda and Shamrock Rovers is 23%
The number of sim home wins for Drogheda is = 1
The number of sim away wins for Shamrock Rovers is = 5
The number of sim draw is = 4
Drogheda (1.24xG) vs Shamrock Rovers (0.72xG)
Going for a Shamrock Rovers win.
St. Patricks (49%) vs Waterford (25%)
The Probability of a Draw between St. Patricks and Waterford is 26%
The Probability of a Home win for St. Patricks is 49%
The Probability of an Away win for Waterford is 25%
St. Patricks (1.39xG) vs Waterford (0.6xG)
Going for St. Pats win
Longford (42%) vs Finn Harps (30%)
The Probability of a Draw between Longford and Finn Harps is 28%
The number of sim home wins for Longford is = 5
The number of sim away wins for Finn Harps is = 3
The number of sim draw is = 2
Longford (0.86xG) vs Finn Harps (1.06xG)
Going for a draw in this one
Derry City (25%) vs Dundalk (49%)
The Probability of a Draw between Derry City and Dundalk is 25%
The number of sim home wins for Derry City is = 1
The number of sim away wins for Dundalk is = 8
The number of sim draw is = 1
Derry City (0.86xG) vs Dundalk (0.84xG)
Dundalk win in this case.
Bohemians (39%) vs Sligo Rovers (31%)
The Probability of a Draw between Bohemians and Sligo Rovers is 29%
The number of sim home wins for Bohemians is = 3
The number of sim away wins for Sligo Rovers is = 4
The number of sim draw is = 3
Bohemians (0.35xG) vs Sligo Rovers (1.09xG)
Going for a draw here.
The phenomenology of a fourth spatial dimension.
A hyperdimensional cube, known as a tesseract...
Part of the motivation for this blog was to open up a dialogue between working artists and working scientists. One of the conversations that is starting to develop relates to the (artistic) field of phenomenology and whether it does/doesn't have anything to say when it comes to the more abstruse sciences. Of course phenomenology is the study of the things humans experience (at least according to my rudimentary understanding of it) and scientists are human beings who experience abstruse science, so phenomenology has something to say about science, by definition. But what?
The discussions which began this conversation concerned two online applications that aid in the visualisation of the scales involved in time and space. See this post and this post, and the comments in each, for a discussion of the apps. In that discussion, and in the comments of a later post, I expressed a sense of disappointment in what these apps were managing to express. I should clarify that I think the apps are great and interesting, but if one is looking for a phenomenology of science, I don't think this is where the most interesting phenomenology is (of course the app creators aren't looking for a phenomenology of science, so they are completely forgiven). The scales of space and time are very ordinary, everyday, Earthly things.
The most interesting discoveries science has made, at least in the world of high energy physics, are not well described in such terms - this is almost precisely what makes them so interesting.
As was pointed out in one of those comment threads, a huge part of this problem is that many of these discoveries rely heavily on mathematics. And, to a very large degree, to understand the discoveries, you need to have a grasp of the maths. The fact that the very abstract structures and symmetries and patterns that lie hidden in the mathematics are then actually exhibited in the real universe is one of the more wonderful things experienced by a scientist. And that is not easily captured simply through pictures, videos and sounds (or even smells) of the universe.
However, I came across an app today that really does try to bridge this gap. If someone were to be looking for a true phenomenology of the more interesting and wonder-inducing aspects of theoretical physics, then this is the sort of thing one should be looking for. It is an app that simulates a hyper-dimensional cube. OK, so what on Earth does that mean?
Well, a dot exists in zero dimensions, it has no length, width, nothing. A line exists in one dimension, it has simply a length. A square exists in two dimensions, it has a length and a width. Next,
a cube, exists in three dimensions, it has a length, a width and a depth. Now, just imagine there was a fourth spatial dimension. Mathematically this is an easy thing to write down. Then, just as one
can form a cube by extending a square into a third dimension, one can form something known as a tesseract by extending a cube into a fourth spatial dimension.
Now, we creatures who exist in just three spatial dimension can't visualise a fourth spatial dimension (I certainly can't). But, just as we can look at a two dimensional surface (a TV screen) and see
a projection of a three dimensional object, so can we do the same for an imagined four dimensional object projected onto three (or two) dimensions. And this is precisely what this app does for a four
dimensional tesseract.
[Edit: Or, as Rhys said in this comment "it shows what a 4D painter would paint on a 3D canvas..."].
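The construction and projection just described can be sketched numerically. This is my own illustration, not the app's code: it enumerates the 16 tesseract vertices, counts the 32 edges, and perspective-projects each 4D vertex down to 2D, in the spirit of the 4D-painter analogy:

```python
from itertools import product

# The 16 vertices of a tesseract: every combination of +/-1 in 4 coordinates.
vertices = list(product((-1.0, 1.0), repeat=4))

# Two vertices share an edge when they differ in exactly one coordinate.
edges = [(i, j)
         for i in range(len(vertices))
         for j in range(i + 1, len(vertices))
         if sum(a != b for a, b in zip(vertices[i], vertices[j])) == 1]

def project(v, d4=3.0, d3=3.0):
    """Perspective-project a 4D point to 3D, then to 2D.
    d4 and d3 are arbitrary 'camera' distances along the w and z axes."""
    x, y, z, w = v
    s = d4 / (d4 - w)      # 4D -> 3D perspective factor
    x3, y3, z3 = x * s, y * s, z * s
    t = d3 / (d3 - z3)     # 3D -> 2D perspective factor
    return x3 * t, y3 * t

points_2d = [project(v) for v in vertices]
print(len(vertices), len(edges))  # a tesseract has 16 vertices and 32 edges
```

Drawing the projected edges on screen, and applying 4D rotations to the vertices before projecting, is essentially what the app animates.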
And the brilliance is that the app even allows you to manipulate the tesseract, to move it, to spin it (in any of the four dimensions) and while manipulating it you see how that three dimensional
projection changes on your two dimensional screen. Here's a video preview...
This app came to my attention as a consequence of his latest video covering extra spatial dimensions - which you should watch for a quick overview of higher dimensions. There is even, apparently,
a game in construction
that requires you to move into and around a fourth spatial dimension to solve puzzles (for example using it to navigate one's way around obstacles and put keys into locks).
I actually think that a phenomenology of abstruse science can go even further and get even more interesting than this. A fourth spatial dimension is one of the simplest ideas I can think of that is not directly understandable from an Earthly perspective, but easily described by maths. It also has the unfortunate position of quite possibly not actually being reality, and not yet being testably true reality. However, it is here, with this sort of application/idea, that I would think a phenomenology of the interesting bits of high energy physics should start.
Twitter: @just_shaun
6 comments:
1. I wanted to write a comment in the post itself about how the phenomenology presented in the Scales of the Universe and ChronoZoom applications was one dimensional, when compared to this newer
app... but... well... I temporarily had more discretion.
2. Looks neat. When I read 'projection' I thought of a linear projection from 4D to 3D, but of course that's not what's going on; instead, it shows what a 4D painter would paint on a 3D canvas...
1. Yeah, that's a good point. I like your description. I wondered a little bit what the most appropriate word to use there was. My first thought as to what "projection" means is the same as
yours, but when I saw that the app description itself wrote projection I took the path of least resistance.
Wikipedia does seem to suggest that the word has a large breadth of meanings, one of which is relevant to this.
2. "...the word has a large breadth of meanings, one of which is relevant..."
Absolutely! I meant only to flag a possible point of interest, not to suggest that the wording was wrong.
3. Gentlemen, if you are interested in 4D, it would be worth your while to check out Zometool. A simple but profound building "toy" that is the gold standard for building physical 3D models of
projections from 4 (and higher) dimensions.
I hope you find this useful!
1. Thanks cneumann. That does look really interesting.
If you send me a sample of zometool I'd be happy to write a review of it here. | {"url":"https://trenchesofdiscovery.blogspot.com/2012/04/phenomenology-of-fourth-spatial.html","timestamp":"2024-11-05T09:47:42Z","content_type":"application/xhtml+xml","content_length":"106897","record_id":"<urn:uuid:a73c565f-905c-4fd6-bcf1-b2c601f9c19c>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00035.warc.gz"} |
Bar Chart / Bar Graph: Examples, Excel Steps & Stacked Graphs
What is a Bar Chart?
A bar chart is a graph with rectangular bars. The graph usually compares different categories. Although the graphs can be plotted vertically (bars standing up) or horizontally (bars laying flat from
left to right), the most usual type of bar graph is vertical.
The horizontal (x) axis represents the categories; The vertical (y) axis represents a value for those categories. In the graph below, the values are percentages.
A bar graph is useful for looking at a set of data and making comparisons. For example, it’s easier to see which items are taking the largest chunk of your budget by glancing at the above chart
rather than looking at a string of numbers. They can also show trends over time, or reveal patterns in periodic sequences.
Bar charts can also represent more complex categories with stacked bar charts or grouped bar charts. For example, if you had two houses and needed budgets for each, you could plot them on the same
x-axis with a grouped bar chart, using different colors to represent each house. See types of bar graphs below.
Difference Between a Histogram and a Bar Chart
Although they look the same, bar charts and histograms have one important difference: they plot different types of data. Plot discrete data on a bar chart, and plot continuous data on a histogram
(see: What's the difference between discrete and continuous data?).
A bar chart is used for when you have categories of data: Types of movies, music genres, or dog breeds. It’s also a good choice when you want to compare things between different groups. You could use
a bar graph if you want to track change over time as long as the changes are significant (for example, decades or centuries). If you have continuous data, like people’s weights or IQ scores, a
histogram is best.
Bar Graph Examples (Different Types)
A bar graph compares different categories. The bars can be vertical or horizontal. It doesn’t matter which type you use—it’s a matter of choice (and perhaps how much room you have on your paper!).
A bar chart with vertical bars. Categories are on the x-axis.
Bar chart with horizontal bars. Categories are on the y-axis. Image: SAMHSA.gov.
List of Types
1. Grouped Bar Graph
A grouped bar graph is a way to show information about sub-groups of the main categories.
In the above image, the categories are issues that senior citizens face (hearing loss and mobility issues); the sub-groups are age. A separate colored bar represents each sub-group: blue for age
70-79 and red for age 80-100.
A key or legend is usually included to let you know what each sub-category is. Like regular bar charts, grouped bar charts can also be drawn with horizontal bars.
When there are only two sub-groups (as in the above image), the graph is called a double bar graph. It's possible to have as many sub-groups as you like, although too many can make the graph look cluttered.
2. Stacked Bar Chart
A stacked bar chart also shows sub-groups, but the sub-groups are stacked on the same bar.
Stacked bar chart showing list price change announcements by company.
Each bar shows the total for sub-groups within each individual category.
Like the double bar chart, different colors represent different sub-groups. This type of chart is a good choice if you:
• Want to show the total size of groups.
• Are interested in showing how the proportions between groups related to each other, in addition to the total of each group.
• Have data that naturally falls into components, like:
□ Sales by district.
□ Book sales by type of book.
Stacked bar charts can also show negative values; negative values are displayed below the x-axis.
3. Segmented Bar Graph.
A type of stacked bar chart where each bar shows 100% of the discrete value. Each bar must total 100%; otherwise it is an ordinary stacked bar chart. For more on this
particular type of graph, see: Segmented Bar Charts.
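Since every bar in a segmented chart must total 100%, the only data preparation needed is to normalize each category's sub-group counts. A minimal Python sketch (the function name is mine, purely illustrative):

```python
def to_percent_segments(counts):
    """Convert one bar's sub-group counts into percentages that sum
    to 100, ready for a segmented (100% stacked) bar chart."""
    total = sum(counts)
    if total == 0:
        raise ValueError("a bar needs at least one non-zero count")
    return [100.0 * c / total for c in counts]

# One bar: book sales split by type (fiction, non-fiction, reference)
print(to_percent_segments([3, 5, 2]))  # -> [30.0, 50.0, 20.0]
```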
How to Make a Bar Chart
By Hand
If you’re just starting to learn how to make a bar graph, you’ll probably be asked to draw one by hand on graph paper at first. Here’s how to do it.
Example problem: Make a bar graph that represents exotic pet ownership in the United States. There are:
• 8,000,000 fish,
• 1,500,000 rabbits,
• 1,300,000 turtles,
• 1,000,000 poultry
• 900,000 hamsters.
Step 1: Number the Y-axis with the dependent variable. The dependent variable is the one being measured in an experiment. In this example question, the study wanted to know how many pets were in U.S.
households. So the number of pets is the dependent variable. The highest number in the study is 8,000,000 and the lowest is 1,000,000 so it makes sense to label the Y-axis from 0 to 8.
Step 2: Draw your bars. The height of the bar should be even with the correct number on the Y-axis. Don’t forget to label each bar under the x-axis.
Step 3: Label the X-axis with what the bars represent. For this example problem, label the x-axis “Pet Types” and then label the Y-axis with what the Y-axis represents: “Number of pets (per 1,000
households).” Finally, give your graph a name. For this example, call the graph “Pet ownership (per 1,000 households).”
Optional: In the above graph, I chose to write the actual numbers on the bars themselves. You don’t have to do this, but if you have numbers that don’t fall on a line (i.e. 900,000), then it can help
make the graph clearer for a viewer.
1. Line the numbers up on the lines of the graph paper, not the spaces.
2. Make all your bars the same width.
How to Make a Bar Chart in Excel
A small quirk with Excel: Excel calls vertical graphs column charts and horizontal graphs bar charts.
Watch the video below to learn more.
Excel 2007-2010
Example problem: Create a bar chart in Excel that illustrates the following data for the tallest man-made structures in the world (as of January, 2013):
│Building │Height in feet│
│Burj Khalifa, Dubai │2,722 │
│Tokyo Sky Tree │2,080 │
│KVLY-TV mast, US │2,063 │
│Abraj Al Bait Towers, Saudi Arabia │1,972 │
│BREN Tower, US │1,516 │
│Lualualei VLF transmitter │1,503 │
│Petronas Twin Tower, Malaysia │1,482 │
│Ekibastuz GRES-2 Power Station, Kazakhstan │1,377 │
│Dimona Radar Facility, Israel │1,312 │
│Kiev TV Tower, Ukraine │1,263 │
│Zhoushan Island Overhead Powerline Tie, China │1,214 │
Step 1: Type your data into a new Excel worksheet. Place one set of values in column A and the next set of values in column B. For this example problem, place the building names in column A and the
heights of the towers in column B.
Entering the data into column A and B. Note that I typed a column header into cells A1 (Buildings) and B1 (Height).
Step 2: Highlight your data: Click in the top left (cell A1 in this example) and then hold and drag to the bottom right.
Step 3: Click the “Insert” tab and then click on the arrow below “Column.” Click the type of chart you would like (for example, click “2-D Column”).
That’s it!
Tip: To widen the columns in Excel, mouse over the column divider, click and drag the divider to the width you want.
Excel 2016-2013
A bar graph in statistics is usually a graph with vertical bars. However, Excel calls a bar graph with vertical bars a column graph.
Excel bar charts have horizontal bars.
Step 1: Click the “Insert” tab on the ribbon.
Step 2: Click the down arrow next to the bar chart icon.
Step 3: Select a chart icon. For example, select a simple bar chart.
That’s it!
How to make a bar graph in Excel 2016/2013: Formatting tips
• Change the width of the bars: Click on a bar so that handles appear around the bars. Right click, then choose Format Data Series. Under Format Data Series, click the down arrow and choose “Series
Options.” Click the last choice (“Series…”). Then move the “Gap Width” slider to change the bar width.
• If you want to remove the title or the data labels, select the “Chart Elements icon.” The Chart Elements icon is the first icon showing at the upper right of the graph area (the + symbol).
• Change the style or color theme for your chart by selecting the Chart Styles icon. The Chart Styles icon is the second icon showing at the upper right of the graph (the pencil).
• Edit what names or data points are visible on the chart by selecting the filter icon. The filter icon is the third icon at the top right of the chart area.
How to Make a Stacked Bar Chart in Excel
Step 1: Select the data in your worksheet. The names for each bar are in column A. The “stacked” portions of each bar are in the rows.
Image modified from Microsoft.com.
Step 2: Click the “Insert” tab, then click “Column.”
Step 3: Choose a stacked chart from the given options. For example, the second chart listed (under 2-D column) is a good choice.
If you want to modify the layout of your chart, click the chart area of the chart to display the Chart Tools in the ribbon. Then click the “Design” tab for options.
Make a Bar Graph in Minitab
Watch the video below for an overview on making a bar graph in Minitab:
Minitab is a statistical software package distributed by Minitab, Inc. The software is used extensively, especially in education. It has a spreadsheet format, similar in feel to Microsoft Excel.
However, where it differs from Excel is that the toolbar is set up specifically for creating statistical graphs and distributions. Creating a bar graph in Minitab is as simple as entering your data
into the spreadsheets and performing a couple of button clicks.
Step 1: Type your data into columns in a Minitab worksheet. For most bar graphs, you’ll probably enter your data into two columns (x-variables in one column and y-variables in another). Make sure you
give your data a meaningful name in the top (unnumbered) row, because your variables will be easier to recognize in Step 5, where you build the bar graph.
Step 2: Click “Graph,” then click “Bar Chart.”
Step 3: Select your variable type from the Bars Represent drop down menu. For most charts, you’ll probably select “Counts of Unique Variables” unless your data is in groups or from a function.
Step 4: Click “OK.”
Step 5: Select a variable from the left window, then click the “Select” button to move your variable over to the Variables window.
Step 6: Click “OK.” A bar graph in Minitab appears in a separate window.
Tip: If you want to label your graph, click the “Label” button at the bottom of the window in Step 5.
How to Make a Bar Chart in SPSS
Watch the video for an overview:
When you make a bar chart in SPSS, the x-axis is a categorical variable and the y-axis represents summary statistics such as means, summations or counts. Bar charts are accessed in SPSS through the
Legacy dialogs command or through the Chart Builder.
How to Make a Bar Chart in SPSS: Steps
Step 1: Open the file you want to work with in SPSS or type the data into a new worksheet.
Step 2: Click “Graphs,” then click “Legacy Dialogs” and then click “Bar” to open the Bar Charts dialog box.
Step 3: Click on an image for the type of bar graph you want (Simple, Clustered (a.k.a. grouped), or Stacked) and then click the appropriate radio button to tell SPSS what type of data is in your
variables lists:
• Summaries for groups of cases,
• Summaries of separate variables, or
• Values of individual cases.
When you have made your selections, click the “Define” button.
Step 4: Click a radio button in the Bars Represent area to choose what you would like the bars to represent. For example, if you want the bars to represent the number of cases, click the “N of cases”
radio button.
Step 5: Click a variable in the left-hand window in the “Define Simple Bar” pop-up window and then transfer those variables by clicking the appropriate center arrow. For this simple example, click
“Grade Point Average” and then click the arrow to the left of “Category Axis.” When you have made your selections, click “OK.” This will produce a graph of the number (counts) of each grade point average.
The SPSS Define Simple Bar window.
SPSS Bar Chart output showing number of cases of GPAs.
Comments? Need to post a correction? Please Contact Us. | {"url":"https://www.statisticshowto.com/probability-and-statistics/descriptive-statistics/bar-chart-bar-graph-examples/","timestamp":"2024-11-04T15:11:16Z","content_type":"text/html","content_length":"110680","record_id":"<urn:uuid:d82a7b00-3179-4982-9d5f-4ed3554dcffe>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00116.warc.gz"} |
Funding Friday: Make Your Own Robots
Comments (Archived):
I’ll take a Tank Base with a Drone Top running on a Gun Control Cerebrum Core. That might give the NRA pause for thought.
Good robot! https://twitter.com/MsPseud…
I think it was George Stephanopoulos who said "It's the cleanup, stupid!" When this robot can return everything back to where it belongs on my bathroom console, I'm all in!
Cool. I live in the neighborhood of Stuyvesant High downtown and the kids are always out on the street with robots they built. Listened (I'll find) to an A16 podcast on the social relationships we may
develop with a host of single function robotic helpers. Super fascinating and I think true and inevitable.
When the standards of twitter block anything but allow this excrement, you've created a monster @twitter @jack. https://uploads.disquscdn.c…
Amazing. Astounding.

Maybe one of the best toys I had growing up was the go cart I made out of some scrap materials in the garage and an old lawn mower engine: the thing taught me some about gasoline, piston engines — still good stuff to know. The toy here looks good but may not be as good as that go cart. So, it's a toy and maybe a good toy. Is it better than a toy? For this we come to what appears to be a foundational issue: what the heck can we do with it? Or if so far nothing important, what the heck does it illustrate we might be able to do important in the future? I.e., that go cart I built taught me a lot that is still useful for evaluating, selecting, owning, driving, and maintaining a car, and, of course, also lawn mowers, and was part of why I was made a Full Member (not so easy to get) of the SAE — Society of Automotive Engineers.

Or for any attempt at progress in technology, what the heck can we DO with it, that is, what is the utility, what is it good for? The world of applied research is awash in solutions looking for problems. These snap-together general purpose modular robot parts look amazing where we suspect that they will be good at least for SOMETHING important or good learning toys for something important in the future. Then we come to: what the heck is the potential of such technology?

When we look at an old tool chest we see lots of screwdrivers — different lengths, head types (blade, Phillips, Torx), different head sizes, etc. We do know easily what the utility is. But what is the utility of these robot components or, more generally, such technology? Seems to me that is "the question": whether just to assume some utility or, ask for proof first, or just cast them away with the rest of the junk in a sea of troubles or some such mess made of the poor bard. In this case and quite generally it's a darned important question: what the heck is the potential? Then, next, how the heck are we to look for, find, invent, evaluate work with good potential?

Here is an approach I am taking:

(1) Assume that quite broadly what will be of value is more information. From some obvious examples, that's easy enough to accept, e.g., if we had good information about what pork bellies would be selling for next month, we could make a bundle in a hurry and provide financial security for the family, kids, grand kids, etc.

(2) Likely the central and most important way to get more good information is to take available data and apply powerful manipulations. Well, those manipulations will necessarily be mathematical, understood or not, powerful or not.

(3) Nicely enough, there is a lot in advanced pure math that shows, in some senses that might be powerful, some amazing manipulations. E.g., the math is in rock solid theorems and proofs, and sometimes the theorems say some just amazing things about the results of some manipulations — amazing, beyond belief.

Here's one: just by some definitions based on some quite abstract foundations, essentially everything we can ever observe that might vary over time is a stochastic process. Yup, of course the stock market, all parts of the economy, the weather, everything we observe in medicine, astronomy, geology, etc. are stochastic processes. Then there is a theorem: every stochastic process is the sum of a martingale part and a predictable part. And there's another theorem: each L^1 bounded martingale converges to some random variable. That is, given such a martingale, there's for it one random variable. Given one sample path of the martingale, it converges to one 'sample' of that random variable — converges exactly, right to the point, as accurately as we please. Moreover the rate of convergence is among the fastest of anything known in math. Astounding stuff — applies to everything there is, has ever been, and ever will be.

What can we do with it? Well, there have been some applications. But right away we see some astounding results, totally beyond any confidence from any intuitive guessing, with no doubt, and maybe pointing to some powerful data manipulations. E.g., I was surprised once: I just wanted some stochastic process sample paths for some testing of an idea. So, I wrote some code but right away was disappointed because each sample path started off jumping around nicely as I wanted but soon calmed down and soon mostly quit moving at all — just went nearly constant. Each trial sample path went to a different constant, but each sample path went just to a constant; yes, a constant particular to that sample path but, still, just a numerical constant. Then I looked again at my code to generate the sample paths and found that, yup, by accident I'd generated an L^1 bounded martingale. Yup, the theorem was correct! And the convergence to a constant was darned fast.

And as amazing as the martingale convergence theorem is, the other part is downright a candidate for way too much — the other part is predictable, and, yup, that means just in the sense we would want for those pork bellies. Are we listening yet?

Broadly stochastic processes are the vanilla case of wildness, unpredictability, chaos, randomness, but martingale theory shows that the wildness can't be as wild as we would have thought intuitively and, in fact, can be significantly tamed to a cute kitten. So, disorder is not always as bad as we would guess intuitively, and we have some chances to bring some order to the disorder. So, this order looks like a source of what we were looking for — information out of data.

Disclosure: my startup is based on some advanced pure math, not martingale theory but maybe as amazing. E.g., here at AVC I was asked how accurate my work was, and I responded "perfect" — in both theory and practice, that is essentially the case. A bit amazing.

So, to bottom line it, if we want more powerful data manipulations to do better at getting valuable information out of available data, proceed mathematically, maybe with some advanced pure math that is able to say amazing things of great generality. Expect that that will be the mountain with the mother lode of tools for valuable information. Yes, that may not be the only way to get rich!

So, what does computer science have to do with it? Sure, from that field we can see how to have machines do the data manipulation recipes we got from the math. But, and in public recently there has been a lot of hype and confusion on this point, computer science is not how to do the data manipulations, is not the math, but is just the grunt work for the manipulations specified by the math. So, right, the key is the math. Sorry, nearly no computer science profs or students have studied nearly enough pure math.

Instead of the robot components and instead of other intuitive guesses about what will be valuable, I suggest the role of math I outlined. | {"url":"https://avc.com/2019/03/funding-friday-make-your-own-robots/","timestamp":"2024-11-10T11:21:46Z","content_type":"text/html","content_length":"45492","record_id":"<urn:uuid:ff5c1ce1-4232-4fa7-8c57-c2e5ad56e375>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00091.warc.gz"}
The Mean Value Theorem
Example 3.2.2. Comparing average and instantaneous rates of change.
Consider functions
\begin{align*} f_1(x)\amp=\frac{1}{x^2}\amp f_2(x)\amp= \abs{x} \end{align*}
as shown in
Figure 3.2.3
. Both functions have a value of \(1\) at \(x = \pm 1\text{.}\)
Therefore the slope of the secant line connecting the end points is \(0\)
in each case. But if you look at the plots of each, you can see that there are no points on either graph where the tangent lines have slope zero. Therefore we have found that there is no
\(c\) in \((-1, 1)\) such that
\begin{equation*} \fp(c) = \frac{f(1)-f(-1)}{1-(-1)} = 0\text{.} \end{equation*}
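Neither example contradicts the Mean Value Theorem; each function simply fails one of its hypotheses. A quick check of the derivatives confirms that no tangent of slope zero can exist:
\begin{equation*} f_1'(x) = -\frac{2}{x^3} \neq 0 \text{ for all } x \neq 0\text{,} \qquad f_2'(x) = \pm 1 \text{ for all } x \neq 0\text{.} \end{equation*}
Indeed \(f_1\) is not continuous on \([-1, 1]\) (it is undefined at \(x = 0\)), and \(f_2\) is not differentiable at \(x = 0\text{,}\) so the theorem does not apply.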
The graph illustrates the function \(f_1(x) = \frac{1}{x^2}\text{,}\) showcasing a scenario where the Mean Value Theorem is not applicable. As \(x\) approaches zero, the function demonstrates a
vertical asymptote, highlighting its discontinuity and the tendency of the values to become infinite.
Within the interval \([-1, 1]\text{,}\) although the function’s value is the same at both endpoints, there is no point where the function’s derivative, which represents the slope of the tangent, is
zero. This absence violates the Mean Value Theorem’s requirement for the function to be both continuous on the closed interval and differentiable on the open interval, specifically at \(x = 0\text{.}\)
(a) A graph of \(f_1(x) = 1/x^2\)
Displayed is the function \(f_2(x) = \abs{x}\text{,}\) chosen to highlight a case where the Mean Value Theorem cannot be applied. The graph forms a V shape, characteristic of the absolute value
function, and is continuous over the interval \([-1, 1]\text{.}\)
At the endpoints of the interval, the function attains the same value, yielding a secant line with a slope of zero. However, due to the sharp corner at the origin \(x = 0\text{,}\) the function is
not differentiable at this point. This lack of differentiability means that there does not exist a point in the interval where the slope of the tangent line is equal to the slope of the secant line,
as required by the Mean Value Theorem. Thus, the function \(f_2(x)\) serves as an example of when the Mean Value Theorem’s conditions are not fulfilled.
(b) A graph of \(f_2(x) = \abs{x}\)
Figure 3.2.3. Graphs of two “misbehaving” functions | {"url":"https://opentext.uleth.ca/apex-standard/sec_mvt.html","timestamp":"2024-11-05T15:11:17Z","content_type":"text/html","content_length":"350375","record_id":"<urn:uuid:99f21748-fc64-423b-9613-a3e726048a29>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00581.warc.gz"} |
Making a bot move in a circle.
To do this, you'd need to think of the movement as a math equation. The
parametric (or one variable) system of equations for a two-dimensional
circle are:
x = R*cos(t)
y = R*sin(t),
where t is in radians between 0 and 2 times pi, and R is the radius of the circle.
Since AW uses the Z-axis instead of Y for depth, replace Z for Y in these
equations. The speed at which the bot moves around the circle can be
changed according to how fast the parameter (t) changes - so to make the bot
move faster, use a for loop (in C++) or a timer (in VB) to increment the
value t from zero to 2 times pi faster or slower depending on the speed you
want. Replace R with the radius of the circle, and you should be all set.
I would explain it more specifically, but I don't know which language you're using.
Hope this helps,
[View Quote] | {"url":"https://awportals.com/aw/archives/newsgroups/thread_1276/","timestamp":"2024-11-06T11:19:05Z","content_type":"application/xhtml+xml","content_length":"25184","record_id":"<urn:uuid:dec77df5-e92f-4594-8e9c-bbaa98ed3d18>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00650.warc.gz"} |
How do you calculate income elasticity of demand? | Socratic
1 Answer
It is the percentage change in quantity demanded for each 1% change in income.
Its formula is given by:
(% change in quantity demanded) / (% change in income)
where Q is quantity demanded and I is income; equivalently, (ΔQ/Q) / (ΔI/I).
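As a worked illustration (the numbers below are invented), the formula can be computed directly. This sketch uses the simple initial-value definition of percentage change; a midpoint variant is also common:

```python
def income_elasticity(q0, q1, i0, i1):
    """Income elasticity of demand: (% change in Q) / (% change in I),
    with percentage changes measured against the initial values."""
    pct_q = (q1 - q0) / q0
    pct_i = (i1 - i0) / i0
    return pct_q / pct_i

# Demand rises from 100 to 110 units (+10%) when income
# rises from 1000 to 1050 (+5%): elasticity = 0.10 / 0.05 = 2.0
print(income_elasticity(100, 110, 1000, 1050))  # -> 2.0
```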
1641 views around the world | {"url":"https://api-project-1022638073839.appspot.com/questions/how-do-you-calculate-income-elasticity-of-demand#208211","timestamp":"2024-11-10T17:32:35Z","content_type":"text/html","content_length":"32385","record_id":"<urn:uuid:09c9721f-668d-430a-98e0-bcd6dec7050b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00274.warc.gz"} |
C++11 Performance Tip: Update on When to Use std::pow
Take a look at this benchmark to see when to use std::pow in C++, comparing the performance of std::pow against direct multiplications
A few days ago, I published a post comparing the performance of std::pow against direct multiplications. When not compiling with -ffast-math, direct multiplication was significantly faster than
std::pow, around two orders of magnitude faster when comparing x * x * x and std::pow(x, 3). One comment that I got asked for which n std::pow(x, n) becomes faster than
multiplying in a loop. Since std::pow uses a special algorithm to perform the computation rather than simply loop-based multiplications, there may be a point after which it's more interesting
to use the algorithm rather than a loop. So I decided to do the tests. You can also find the result in the original article, which I've updated.
First, our pow function:
double my_pow(double x, size_t n){
    double r = 1.0;
    while(n > 0){
        r *= x;
        --n;
    }
    return r;
}
And now, let's see the performance. I compiled my benchmark with GCC 4.9.3, running on my old Sandy Bridge processor. Here are the results for 1000 calls to each function:
We can see that between n=100 and n=110, std::pow(x, n) starts to be faster than my_pow(x, n). From that point on, you should simply use std::pow(x, n). Interestingly too, the time for std::pow(x, n) is
decreasing. Let's see how the performance evolves over a higher range of n:
We can see that the pow function time still remains stable while our loop-based pow function still increases linearly. At n=1000, std::pow is one order of magnitude faster than my_pow.
Overall, if you do not care much about extreme accuracy, you may consider using your own pow function for small-ish (integer) n values. After n=100, it becomes more interesting to use std::pow.
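As a correctness check on the loop-based implementation itself (not its speed), it is easy to compare it against a built-in power operator. Here is a Python transcription of my_pow, in Python purely so it runs without a compiler; the timing conclusions above apply only to the compiled C++ versions:

```python
def my_pow(x, n):
    """Loop-based power, a direct transcription of the C++ my_pow."""
    r = 1.0
    while n > 0:
        r *= x
        n -= 1
    return r

# agrees with the built-in operator for small integer exponents
for n in range(20):
    assert my_pow(2.0, n) == 2.0 ** n
```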
If you want more results on the subject, you take a look at the original article.
If you are interested in the code of this benchmark, it's available online: bench_pow_my_pow.cpp
Published at DZone with permission of Baptiste Wicht, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | {"url":"https://dzone.com/articles/c11-performance-tip-update-on-when-to-use-stdpow","timestamp":"2024-11-09T22:45:25Z","content_type":"text/html","content_length":"103253","record_id":"<urn:uuid:ffc2b126-bdf1-4e8b-ae0c-f892bb1347a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00331.warc.gz"} |
SLOPE in Excel (Formula, Example) | How to Calculate Slope in Excel?
SLOPE Function in Excel
Last Updated :
21 Aug, 2024
The SLOPE function in Excel is categorized as a statistical function. In mathematical terms, SLOPE returns the slope of the regression line through the given data points in known y's values and known
x's values. The slope of a linear regression line is the vertical distance divided by the horizontal distance between any two points on the line.
For example, we have two sets of values, known_y's (5,2,7,3 in B1 to B4) and known_x's (8,3,5,6 in C1 to C4). So, we need to calculate the slope of these two ranges using the Excel SLOPE function:
=SLOPE(B1:B4, C1:C4) = 0.42.
The SLOPE function returns the slope of a regression line based on the data points recognized by known_y_values and known_x_values.
SLOPE Formula in Excel
The SLOPE function has two critical parameters: known_y's and known_x's.
Compulsory Parameters:
• known_y’s: It is an array of known y-values.
• known_x’s: It is an array of known x-values.
Here, the length of the known_x's data array should be the same as known_y's data array. In addition, the value of the variance of the known x's values must not be 0.
The SLOPE equation for the slope of the linear regression line is: slope = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)², where x̄ and ȳ are the sample means of the known_x's and known_y's.
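As a cross-check of the formula (a sketch, not part of the original article), computing the same quantity in Python on the example ranges from above reproduces the 0.42 result:

```python
def slope(known_ys, known_xs):
    """SLOPE: sum((x - mean_x)*(y - mean_y)) / sum((x - mean_x)**2)."""
    n = len(known_xs)
    mean_x = sum(known_xs) / n
    mean_y = sum(known_ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(known_xs, known_ys))
    den = sum((x - mean_x) ** 2 for x in known_xs)
    return num / den

# known_y's = 5, 2, 7, 3 (B1:B4); known_x's = 8, 3, 5, 6 (C1:C4)
print(round(slope([5, 2, 7, 3], [8, 3, 5, 6]), 2))  # 0.42
```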
How to Use the SLOPE Function in Excel?
It is very simple and easy to use. Let us understand the working of the SLOPE function in some examples. It can be used as a worksheet function and as a VBA function.
Example #1
In the first example, we have two data sets with the known y's values and known x's values.
Calculate the slope from this data: =SLOPE(A3:A22,B3:B22); the output will be 2.7, as shown in the table below.
Example #2
In the second example, we have month-wise data of known y's value and known x's value.
So here, we can apply the SLOPE formula in Excel as we used in the first example =SLOPE(E3:E22,F3:F22).
And the output will be 0.11, as shown in the below table.
SLOPE in Excel VBA
Suppose the known x's values are in range A1:A10 and the known y's values are in range B1:B10 of the worksheet. We can then calculate the SLOPE with the VBA macro below.
Sub SLOPEcal() ' start the slope macro
    Dim x As Range, y As Range ' declare the ranges x and y
    Set x = Range("A1:A10") ' set known x's values to range x
    Set y = Range("B1:B10") ' set known y's values to range y
    Dim slope As Double
    slope = Application.WorksheetFunction.Slope(y, x)
    MsgBox slope ' print the slope value in a message box
End Sub ' end the macro
Things to Remember
• The SLOPE function throws the #N/A error when the given arrays of known_x's and known_y's are of different lengths. For example, the SLOPE formula =SLOPE(A3:A12,B3:B15) returns #N/A because the two ranges contain 10 and 13 cells.
• The SLOPE function throws the #DIV/0! error when:
□ The variance of the given known_x's evaluates to zero. Or,
□ Any given arrays (known_x's or known_y's) are empty.
• In the SLOPE function, if an array or reference argument contains text, logical values, or empty cells, the values are ignored. However, cells with zero values are included.
• In the SLOPE function, the parameters must be numbers or names, arrays, or references that contain numbers.
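The error behavior above can be mirrored in a few lines (a sketch for illustration; the function name and exception choices are mine, not Excel's):

```python
def excel_slope(known_ys, known_xs):
    """Slope with Excel-style error behavior (illustrative)."""
    if len(known_xs) != len(known_ys):
        raise ValueError("#N/A")            # arrays of different lengths
    mean_x = sum(known_xs) / len(known_xs)
    den = sum((x - mean_x) ** 2 for x in known_xs)
    if den == 0:
        raise ZeroDivisionError("#DIV/0!")  # known_x's has zero variance
    mean_y = sum(known_ys) / len(known_ys)
    return sum((x - mean_x) * (y - mean_y)
               for x, y in zip(known_xs, known_ys)) / den

try:
    excel_slope([1, 2, 3], [5, 5, 5])  # constant x's -> zero variance
except ZeroDivisionError as e:
    print(e)  # #DIV/0!
```

Passing constant x's (zero variance) or mismatched ranges raises the corresponding error, mirroring #DIV/0! and #N/A.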
Recommended Articles
This article is a guide to the SLOPE Function in Excel. Here, we discuss the SLOPE formula in Excel and how to use the SLOPE function, along with Excel examples and downloadable Excel templates. You
may also look at these useful functions in Excel: - | {"url":"https://www.wallstreetmojo.com/slope-function-in-excel/","timestamp":"2024-11-08T09:18:25Z","content_type":"text/html","content_length":"269948","record_id":"<urn:uuid:348c4d43-5b06-41c5-ba01-e1a9d8be7f39>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00666.warc.gz"} |
Equity factor timing with macro trends | Macrosynergy
Plausibility and empirical evidence suggest that the prices of equity factor portfolios are anchored by the macroeconomy in the long run. A new paper finds long-term equilibrium relations of factor
prices and macro trends, such as activity, inflation, and market liquidity. This implies the predictability of factor performance going forward. When the price of a factor is greater than the
long-term value implied by the macro trends, expected returns should be lower over the next period. The predictability seems to have been economically large in the past.
Favero, Carlo, Alessandro Melone and Andrea Tamoni (2021), “Macro Trends and Factor Timing”, October 2021.
The below quotes are from the paper. Headings, cursive text, and text in brackets have been added.
This post ties in with this site’s summary on macro trends, particularly the section on why macro trends matter for investment management.
The basic idea
“The price of classical equity factors…is anchored to the real economy in the long run. This long-run co-movement translates into short-run equity factors predictability…We propose a cointegration
framework that exploits long-run co-movement between macroeconomic trends and factor prices, and use it to predict the time-series variation in a given equity factor… We show that factors beyond the
aggregate market are predictable using macroeconomic variables.”
N.B. Cointegration means that two (or more) non-stationary series move together such that some linear combination of them is stationary.
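A minimal simulated illustration of this (not from the paper): two series that share a random-walk trend each have large, growing variance, while their difference (a cointegrating combination) stays stationary around zero.

```python
import random

random.seed(7)

# Shared stochastic trend (a random walk) driving two non-stationary series.
trend = [0.0]
for _ in range(2000):
    trend.append(trend[-1] + random.gauss(0, 1))

x = [t + random.gauss(0, 0.5) for t in trend]  # e.g., a (log) factor price
y = [t + random.gauss(0, 0.5) for t in trend]  # e.g., a macro trend

def variance(series):
    m = sum(series) / len(series)
    return sum((v - m) ** 2 for v in series) / len(series)

# Each series wanders without bound, but the linear combination x - y is
# stationary: the hallmark of cointegration.
spread = [a - b for a, b in zip(x, y)]
print(variance(spread) < variance(x))  # True
```

A formal test (e.g., Engle-Granger or the Johansen test used in the paper) would test exactly this: whether some linear combination of the series is stationary.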
“Looking at macroeconomic trends, long-run co-movement, and time-series predictability uncovers a novel link [between macroeconomic and asset returns] which is complementary to the one that looks at
macroeconomic changes (innovations), short-run co-movement(betas) and cross-sectional predictability.”
“Our approach appeals to the intuitive notion that financial assets should not overtake the real economy. Accordingly, we propose a framework where the price level of a factor should comove with
trends in economic fundamentals. Given that economic trends and factor prices are non-stationary variables, the validity of a given set of macroeconomic drivers to track asset prices is naturally
investigated by assessing if there exists a stationary linear combination of them (i.e. if they are cointegrated).”
How to test for ‘cointegration’
“We start by testing the presence of cointegration between macroeconomic trends related to real economic activity, inflation, and aggregate liquidity and the price of factors from leading asset
pricing models.”
“In our empirical analysis, we consider the factors featuring in two of the most prominent asset pricing models: the five-factor model of Fama and French (2015) and the q-factor model of Hou, Xue,
and Zhang (2015)… The q-factor model has its theoretical foundation in the neoclassical q-theory of investment and consists of four factors: the market excess return (MKT), a size factor (ME), an
investment factor (IA), and a profitability factor (ROE). (view here http://global-q.org/factors.html) The Fama-French factor model adds to the market and size factors, a value-growth factor, a
profitability factor (Robust-Minus-Weak, RMW), and an investment factor (Conservative-Minus-Aggressive, CMA).”
“As macro factors we use the WTI crude oil returns, the traded liquidity factor, the potential output growth, and the Treasury term spread…The central idea is to find a set of economic state
variables that influence investors and asset prices in a systematic way through…their effect on nominal and real cash flows…We employ the WTI crude oil returns as a tradable proxy for inflation. To
measure aggregate economic condition, we use potential output together with the term spread…The liquidity factor…is inversely related to aggregate volatility and provides a longer history relative to
the VIX. Importantly, with the sole exception of potential output growth, our benchmark macroeconomic factors are available in real-time and not subject to revisions.”
“Our sample period is 1968-2019. Throughout we use quarterly observations and, accordingly, we focus on (non-overlapping) 3-months holding-period excess return… We provide evidence of cointegration
between the price of tradeable factors and macroeconomic drivers.”
The evidence
“We find the presence of co-integration to be borne out by the data…The Johansen L-max test results establish strong evidence of a single cointegrating relation among the macroeconomic drivers and
each of the factors. Indeed, we may reject the null of no cointegration against the alternative of one cointegrating vector.”
“Macroeconomic trends related to economic activity, inflation and aggregate liquidity track the prices of factors featuring in leading asset pricing models…In other words, factor prices share a
common stochastic trend with key drivers of the macroeconomy.”
“Importantly, the long-run relationship between factor prices and macroeconomic drivers has implications for short-run factor returns. Specifically, we show that factor returns should be predictable
by the deviations of the portfolio value from its long-term economic value with a negative sign. The intuition is straightforward: when asset prices are higher (lower) than the long-run value implied
by the macroeconomic drivers, expected returns are lower (higher) in the next period so that the long-run relationship is corrected.”
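A toy simulation (not the paper's data or estimator) of this error-correction mechanism: returns are generated with a coefficient of -0.1 on the lagged deviation, the magnitude reported in the paper, and a simple OLS regression recovers a clearly negative estimate.

```python
import random
random.seed(1)

gamma = -0.1          # error-correction coefficient, per unit of deviation
dev = 0.0             # deviation of (log) factor price from its macro-implied level
devs, rets = [], []
for _ in range(5000):
    r = gamma * dev + random.gauss(0, 0.02)   # next-period factor return
    devs.append(dev)
    rets.append(r)
    dev += r - random.gauss(0, 0.02)          # price moves with r; the macro trend also drifts

# OLS of the return on the lagged deviation recovers a negative coefficient.
mx, my = sum(devs) / len(devs), sum(rets) / len(rets)
beta = (sum((d - mx) * (r - my) for d, r in zip(devs, rets))
        / sum((d - mx) ** 2 for d in devs))
print(round(beta, 2))  # negative, close to -0.1
```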
Consequences for factor timing
“The long-run co-movement between factor prices and macro drivers has implications for the short-run predictability of factor returns: when the price of a factor is greater than the long-term value
implied by the macro trends, expected returns should be lower over the next period.”
“The…coefficient [of] equilibrium correction term [i.e. the residual of the cointegrating relation] is economically and statistically significant, and negative: a positive deviation of (log) prices
for the characteristics-based factor from their long-term relation with the macro drivers in this period implies a lower expected return for the next period, with an order of magnitude of about
(minus) 0.1 per unit of deviation for the market, size, value, profitability, and the conservative-minus-aggressive factors.”
“We see that the equilibrium correction term has significant forecasting power for future market excess returns above and beyond standard predictors…Standard characteristics-based factors like
High-Minus-Low (HML or value factor) are strongly predictable in- and out-of-sample, both at quarterly and annual frequencies.”
“Our result is [exemplified] in [the figure below]. Panel (a) shows that the price of [the value premium factor, i.e. investment in high book-to-market versus low book-to-market stocks] (blue line)
mean reverts toward a macroeconomic trend (green line). In Panel (b), we employ the deviations of the factor prices from the macro trend to time the factor: the fitted value (green line) explains
about one-fourth of the variability in value premium returns (blue line) at an annual frequency.”
The economic value of factor timing
“We have provided strong evidence of predictability for individual factors. Next, we combine these forecasts to form an optimal factor timing portfolio, and study its benefits from an investor point
of view.”
“The documented predictability is economically large as confirmed by (1) variation in expected factors’ returns that is large relative to their unconditional level; and (2) significant economic gains
from the perspective of a mean-variance investor.”
“We report performance for four variations of the optimal timing portfolio: (1) “Factor Timing” (FT); (2) “Factor Investing” (FI) sets all return forecasts to their unconditional mean; (3) “Market
Timing” (MT) does the same [as Factor Timing] except for the market return; and (4) “Anomaly Timing” (AT) does the opposite: the market is forecasted by its unconditional mean, while anomalies
receive dynamic forecasts.”
“The factor investing, market timing, factor timing, and anomaly timing portfolio all produce meaningful performance, with Sharpe ratios around 0.9 in sample. More importantly, factor and anomaly
timing improve out-of-sample performance relative to static factor investing: timing yields Sharpe ratios of about 1 relative to the 0.87 attained with static investing.”
Consequences for the stochastic discount factor
“Quantitatively, the average variance of our estimated stochastic discount factor (SDF) increases from 0.80 (in the case of constant factor premiums) to 2.24 when taking into account the
predictability of the factors induced by deviations of a portfolio value from its long-term economic value. Furthermore, changes in the means of the factors induces variation in the SDF, which is
strongly heteroscedastic. The SDF fluctuations induced by factor timing are more pronounced than fluctuations in the SDF that accounts only for time variation in the market portfolio.” | {"url":"https://macrosynergy.com/research/equity-factor-timing-with-macro-trends/","timestamp":"2024-11-09T16:41:37Z","content_type":"text/html","content_length":"185645","record_id":"<urn:uuid:6d5d459a-84ab-4e1a-9c8a-c39260a5feb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00247.warc.gz"} |
All content in this area was uploaded by Ritwik Raj on Dec 07, 2020
The Multiple Flying Sidekicks Traveling Salesman Problem:
Parcel Delivery with Multiple Drones
Chase C. Murray Ritwik Raj
Department of Industrial & Systems Engineering,
University at Buffalo, Buffalo, New York, USA
October 28, 2019
Abstract: This paper considers a last-mile delivery system in which a delivery truck operates in coordination with a fleet of unmanned aerial vehicles (UAVs, or drones). Deploying UAVs from the truck
enables customers located further from the depot to receive drone-based deliveries. The problem is first
formulated as a mixed integer linear program (MILP). However, owing to the computational complexity of
this problem, only trivially-sized problems may be solved directly via the MILP. Thus, a heuristic solution
approach that consists of solving a sequence of three subproblems is proposed. Extensive numerical testing
demonstrates that this approach effectively solves problems of practical size within reasonable runtimes.
Additional analysis quantifies the potential time savings associated with employing multiple UAVs. The
analysis also reveals that additional UAVs may have diminishing marginal returns. An analysis of five
different endurance models demonstrates the effects of these models on UAV assignments. The model and
heuristic also support anticipated future systems that feature automation for UAV launch and retrieval.
Keywords: Unmanned aerial vehicle; drone; vehicle routing problem; traveling salesman problem; logistics; integer programming; heuristics
1 Introduction
This paper introduces the multiple flying sidekicks traveling salesman problem (mFSTSP), in which
a delivery truck and a heterogeneous fleet of unmanned aerial vehicles (UAVs, commonly called
drones) coordinate to deliver small parcels to geographically distributed customers. Each UAV may
be launched from the truck to deliver a single customer package, and then rendezvous (return) to
the truck to be loaded with a new parcel or transported to a new launch location. The objective of
the problem is to leverage the delivery truck and the fleet of UAVs to complete the delivery process
and return to the depot in the minimum amount of time.
The problem of pairing UAVs with traditional delivery trucks was first introduced by Murray
and Chu (2015). The paper provided a mathematical programming formulation and a simple
heuristic for the problem of coordinating a single traditional delivery truck with a single UAV,
dubbed the flying sidekick traveling salesman problem (FSTSP). Nearly identical problems have
been described by other researchers under the name TSP with drone (e.g., Agatz et al. 2018,
Bouman et al. 2018, Ha et al. 2018).
A number of industry implementations have occurred since the original FSTSP paper appeared.
For example, Amazon made its first delivery worldwide (Wells and Stevens 2016) and its first U.S.
delivery a few months later (Rubin 2017). However, drone-maker Flirtey beat Amazon to several
milestones, including the first Federal Aviation Administration (FAA) approved U.S. drone delivery
(Vanian 2016). Logistics solution provider UPS also entered the drone delivery race, teaming
with UAV manufacturers Zipline to deliver blood for lifesaving transfusions in Rwanda (Tilley
2016), CyPhy Works to deliver medical supplies in the U.S. (Carey 2016), and electric truck and
UAV manufacturer Workhorse to demonstrate a truck/drone tandem (Peterson and Dektas 2017).
Mercedes-Benz also revealed a concept for a drone delivery van that automatically loads UAVs with
parcels without the need for a driver (Etherington 2017). The 2016 Material Handling Industry
(MHI) Annual Industry Report (MHI 2016) notes that 59% of its survey respondents believe that
emerging technologies like drones are already having an impact on supply chains. The report also
claims that adoption rates for technologies like drones are expected to grow to 50% over the next
The present paper extends the original FSTSP, as well as other related studies on truck/UAV
routing, in several key respects. It features a more comprehensive treatment of the operating conditions, to better reflect the realities associated with the complex nature of this coordinated vehicle
routing problem. Specifically, it contributes to the existing literature in the following ways. First,
the mFSTSP considers an arbitrary number of heterogeneous UAVs that may be deployed from the
depot or from the delivery truck. These UAVs may have different travel speeds, payload capaci-
ties, service times, and flight endurance limitations. Accounting for these differences accommodates
service providers who may expand their fleet with a variety of drones over time. For example, Ama-
zon’s UAV designs have evolved from eight-rotor octocopters, to a multi-rotor/fixed-wing hybrid,
to the latest four-rotor quadcopter in just a few years (Amazon 2019). Given the rapidly-changing
technology, it is reasonable to assume that companies will augment their fleet over time with improved drones. Thus, as companies change the composition of their fleets (either via expansion or
via replacement of drones no longer fit for service), it is expected that companies will increasingly
operate heterogeneous fleets. Similarly, model variations for evolving automated UAV launch and
recovery systems (at the depot or within the truck) are also provided.
Second, because the delivery truck is typically too small to safely accommodate multiple drones
landing or launching simultaneously, the mFSTSP explicitly queues the aircraft in both the launch
and retrieval phases. This additional scheduling problem adds complexity to the problem, but
more accurately reflects the limitations associated with deploying multiple drones from a relatively
small space. Consideration of queuing of these activities is important because they may comprise
a significant portion of the total operational time; disregarding this aspect may result in solutions
leading to UAVs running out of power before being safely retrieved.
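A minimal sketch of the serialization idea (the function and names are hypothetical, not the paper's formulation): at each stop, launch and retrieval tasks occupy the driver one at a time, so each event's start is the previous event's finish.

```python
# Illustrative: serialize launch/retrieval work at a single truck stop,
# since only one UAV can be handled at a time.
def schedule_events(arrival_time, events):
    """events: list of (uav, kind, duration); returns per-event windows and departure time."""
    t, timeline = arrival_time, []
    for uav, kind, duration in events:
        timeline.append((uav, kind, t, t + duration))
        t += duration
    return timeline, t

timeline, depart = schedule_events(
    0.0, [("UAV1", "retrieve", 30.0), ("UAV2", "retrieve", 30.0), ("UAV2", "launch", 60.0)])
print(depart)  # 120.0
```

Because these handling times accumulate, ignoring the queue would understate how long airborne UAVs must wait, which is exactly the risk the paper flags.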
Finally, this paper provides a comparative analysis of five endurance models, one of which (a
non-linear model that determines energy consumption as a function of velocity and parcel weight)
has not been applied to truck/UAV routing problems. The analysis highlights the potential risks
associated with constructing schedules based on overly-simplified endurance models. As with the
queue scheduling, the primary risk of using an optimistic endurance model is solutions resulting in
UAVs that fail to return to their recovery locations.
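To make the non-linear case concrete, here is a hypothetical sketch (the coefficients and functional form are assumptions for illustration, not the paper's model) in which energy draw grows superlinearly with total weight and with the cube of speed:

```python
# A hypothetical non-linear endurance sketch (not the paper's exact model):
# induced (weight-driven) plus parasite (speed-driven) power; endurance is
# battery energy over power draw.
def endurance_seconds(battery_wh, body_kg, parcel_kg, speed_mps,
                      k_lift=40.0, k_drag=0.02):
    power_w = k_lift * (body_kg + parcel_kg) ** 1.5 + k_drag * speed_mps ** 3
    return battery_wh * 3600.0 / power_w

# A heavier parcel or a faster cruise both shorten the usable flight time.
print(endurance_seconds(100, 10, 2, 10) < endurance_seconds(100, 10, 0, 10))  # True
```

Under any model of this shape, a fixed endurance constant is optimistic for heavy parcels, which is why the choice of endurance model affects which UAV assignments are actually safe.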
A small eight-customer example is provided to highlight the time savings that may be afforded
by the mFSTSP. Figure 1 shows a comparison of vehicle routes for customers located in the Seattle
area. The route generated by solving a standard traveling salesman problem (TSP) demonstrates
the long travel distance that must be covered if a single truck makes all eight deliveries. With the
addition of one UAV, the truck visits only five of the customers and avoids the eastern half of the
region. When three UAVs are employed, the truck needs to visit only three customers. The Gantt
chart in Figure 2 highlights the coordination required. In particular, as more UAVs are utilized, the
truck driver must spend more time launching and retrieving the drones. Additionally, the drones
may spend more time waiting for the truck to arrive at the recovery location. Table 1 reveals the
significant time savings for even trivially-sized problems.
The remainder of this paper is organized as follows. An overview of related academic literature
is provided in Section 2. A formal problem definition and mixed integer linear programming (MILP)
formulation are provided in Section 3. A three-phased heuristic solution approach is proposed in
Section 4, followed by a numerical analysis in Section 5 to highlight the benefits, and limitations, of
deploying multiple drones from the delivery truck. Additional analysis explores the impacts of the
region size, potential automation improvements, and implications of different endurance models.
(a) TSP Solution (no UAVs) (b) mFSTSP with 1 UAV (c) mFSTSP with 3 UAVs
Figure 1: A comparison of routes for a small eight-customer problem in the Seattle region
Figure 2: Detailed vehicle timing for the small Seattle example, using either a single truck (TSP), a truck with one UAV, or a truck with three UAVs. Numbers on the bars identify each of the eight customers.
Table 1: Comparison of time savings for the small Seattle example.
            Makespan      Time Savings   Improvement
            [hr:min:sec]  [hr:min:sec]   over TSP
3 UAVs      0:47:00       0:25:01        34.7%
1 UAV       0:59:29       0:12:32        17.4%
TSP         1:12:01       –              –
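As a sanity check (not in the paper), the savings and improvement columns follow directly from the makespans:

```python
# Quick arithmetic check of Table 1: savings and improvement follow from makespans.
def to_sec(hms):
    h, m, s = map(int, hms.split(":"))
    return 3600 * h + 60 * m + s

tsp = to_sec("1:12:01")
for label, makespan in [("3 UAVs", "0:47:00"), ("1 UAV", "0:59:29")]:
    saved = tsp - to_sec(makespan)
    print(label, divmod(saved, 60), round(100 * saved / tsp, 1))
```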
Finally, conclusions and future research directions are provided in Section 6.
2 Related literature
This review focuses on problems involving the coordinated use of trucks and UAVs for parcel
delivery. While beyond this scope, we note another class of problems that consider the use of only
UAVs to make deliveries (i.e., without a truck). Variants of this problem include variable UAV
battery energy (Dorling et al. 2017, Venkatachalam et al. 2017, Cheng et al. 2018), multi-objective
(San et al. 2016), multiple UAV replenishment locations (Song et al. 2018), and coordinated UAVs
(Oh et al. 2018). Additionally, numerous works have considered the benefits of UAVs in a variety of
non-military applications. Recent examples include UAV logistics infrastructure (Shavarani et al.
2018, Hong et al. 2018, Kim and Awwad 2017, Chauhan et al. 2019), healthcare (Scott and Scott
2018, Kim et al. 2017), and disaster response (Rabta et al. 2018, Chowdhury 2018, Zhong et al.
2018). A survey of the literature on UAVs for civil applications is provided by Otto et al. (2018).
The problem of combining a drone with a traditional delivery truck for parcel delivery was first
formally defined by Murray and Chu (2015). That paper introduced an MILP formulation for the
FSTSP, and also defined the parallel drone scheduling TSP (PDSTSP), where multiple drones are
launched from the depot to serve nearby customers, independent of the truck delivery. Greedy
construction heuristics for both problems were provided.
Numerous studies have since explored variations of the single-truck single-drone problem, often
called the TSP with drone (TSP-D). For example, Agatz et al. (2018) provided a new MILP model
for the TSP-D, as well as several route first-cluster second heuristics. An improved formulation for
the FSTSP was presented by Dell’Amico et al. (2019a). Ponza (2016), Ha et al. (2018), Freitas and
Penna (2018), Daknama and Kraus (2017), and Schermer et al. (2018) explored neighborhood search
based heuristics, while Bouman et al. (2018) and Tang et al. (2019) present dynamic programming
and constraint programming approaches, respectively, for obtaining optimal solutions. A multi-
objective variant of the TSP-D, with a non-dominated sorting genetic algorithm, was proposed by
Wang et al. (2019b). Poikonen et al. (2019) provide four branch-and-bound-based heuristics for the
TSP-D. Jeong et al. (2019) modified the FSTSP to consider variable UAV energy consumption and
restricted flying areas. Dukkanci et al. (2019) consider a variation of the FSTSP that minimizes
the operational cost and calculate UAV energy consumption as a function of speed.
In the case of single-truck multi-UAV problems, Ferrandez et al. (2016) and Chang and Lee
(2018) consider a system in which the truck deploys multiple drones from distributed launch sites
along the truck’s route. The drones return to the truck before the truck departs for its next
destination. Clustering heuristics have been developed, such that the truck is routed to each
cluster and nearby customers are served via UAV. Conversely, similar to the mFSTSP, Yoon (2018)
considers a single truck that may launch multiple UAVs, with the UAVs returning to the truck at
a different location. An MILP formulation is provided, which is tested on instances with up to 10
customers. Tu et al. (2018) propose an adaptive large neighborhood search heuristic for a similar
problem, the TSP with multiple drones (TSP-mD).
The use of multiple trucks and multiple UAVs is considered by Kitjacharoenchai et al. (2019),
which is an extension of the FSTSP, but without endurance limitations and launch/delivery time
considerations. While UAVs can be launched from and retrieved at different trucks, only one UAV
can be launched or retrieved at any customer location. They present an MILP formulation and
propose an insertion based heuristic to solve problems with up to 50 customers. Sacramento et al.
(2019) extend the FSTSP with multiple trucks, each carrying a single UAV. A solution approach
based on adaptive large neighborhood search is provided. Wang and Sheu (2019) consider a problem in which multiple UAVs may be launched from multiple trucks at customer locations, where
each UAV can serve multiple customers on a sortie. UAVs must be retrieved at separate docking
locations. A branch-and-price algorithm is demonstrated on problems with up to 15 customers.
Schermer et al. (2019b) address a multi-truck problem in which each truck may have multiple UAVs.
A matheuristic is proposed for larger-scale instances. Additionally, Schermer et al. (2019a) consider
a multi-truck, multi-UAV problem where UAV launches and retrievals may occur at non-customer
locations, termed en route operations. An MILP formulation for this problem is provided, along
with a variable neighborhood search heuristic.
Work related to the PDSTSP includes new heuristics proposed by Saleu et al. (2018) and
Dell’Amico et al. (2019b). A multi-truck variant of the PDSTSP, solved via constraint programming, is provided by Ham (2018) in which UAVs can perform both delivery and pickup activities.
Kim and Moon (2019) consider a variation of the PDSTSP in which UAVs can be deployed from
the depot and several drone stations. Another variant, proposed by Wang et al. (2019a), considers
a fleet of trucks, each carrying a UAV, operating simultaneously with additional independent UAVs
that are launched from the depot.
Another class of truck/UAV problems assume that only the UAVs may make deliveries, such as
the multi-visit drone routing problem (MVDRP) proposed by Poikonen (2018) and the truck/drone
tandems considered by Mathew et al. (2015), bin Othman et al. (2017), Peng et al. (2019), and
Wikarek et al. (2019). Although not a parcel delivery application, Luo et al. (2017) proposed a
two-echelon ground vehicle and UAV cooperative routing problem in which a truck carries one UAV
which is responsible for visiting one or more surveillance targets before returning to the truck.
Several theoretical studies have shown the benefits of using a combined truck-UAV delivery
system. Wang et al. (2016) introduced the vehicle routing problem with drones (VRPD), and de-
termined bounds on the ratio of VRPD time savings versus traditional routing problems (e.g., VRP
and TSP). The analysis considered particular cases where trucks and drones follow the same dis-
tance metric and drone battery life is unlimited. Poikonen et al. (2017) relaxed these assumptions
to develop bounds on similar ratios, but with differing distance metrics for trucks and drones and
limited drone endurance. Carlsson and Song (2017) consider a continuous approximation model
to replace computationally difficult combinatorial approaches. Their horsefly routing problem con-
sists of one truck and one UAV. Unlike other models, the UAV launch/retrieval locations are not
restricted to customer nodes. Campbell et al. (2017) and Li et al. (2018) also used continuous
approximation methods, and developed cost models to study the economic impacts. Results by
Campbell et al. (2017) suggest that substantial cost savings can be achieved using the combined
truck-drone delivery system with multiple drones per truck, and highlight the benefits associated
with automated loading and reduced delivery service times. Boysen et al. (2018) study the com-
plexity of problems involving a given set of UAV customers and a fixed truck route.
In contrast to the above studies, Ulmer and Thomas (2018) introduce a problem in which
customer orders arrive dynamically. For each incoming order, the firm must decide whether a truck
or UAV will make the delivery (if at all). There is no interaction between trucks and UAVs.
3 Problem definition and mathematical programming formulation
This section provides a formal definition of the mFSTSP, as well as an MILP formulation of the
problem. A summary of the parameter notation is described in Table 2.
Let C represent the set of all customer parcels, such that C = {1, 2, ..., c}. Each customer must receive exactly one delivery by either the single delivery truck or by one of the heterogeneous UAVs that are denoted by the set V. A particular customer i ∈ C is said to be “droneable” by UAV
Table 2: Parameter notation
V        Set of UAVs.
C        Set of customers; C = {1, 2, ..., c}.
Ĉv       Set of customers that may be served by UAV v ∈ V; Ĉv ⊆ C for all v ∈ V.
N        Set of all nodes; N = {0, 1, ..., c+1}.
N0       Set of nodes from which a vehicle may depart; N0 = {0, 1, ..., c}.
N+       Set of nodes that a vehicle may visit; N+ = {1, 2, ..., c+1}.
τij      Truck's travel time from node i ∈ N0 to node j ∈ N+.
τ′vij    Travel time for UAV v ∈ V from node i ∈ N0 to node j ∈ N+.
s^L_v,i  Launch time for UAV v ∈ V from node i ∈ N0.
s^R_v,k  Recovery time for UAV v ∈ V at node k ∈ N+.
σk       Service time by truck at node k ∈ N+, where σ_c+1 ≡ 0.
σ′vk     Service time by UAV v ∈ V at node k ∈ N+, where σ′_v,c+1 ≡ 0.
P        Set of tuples of the form ⟨v, i, j, k⟩, specifying all possible three-node sorties that may be flown by UAV v ∈ V.
e_vijk   Endurance, in units of time, for UAV v ∈ V traveling from nodes i ∈ N0 to j ∈ Ĉv to k ∈ N+.
v ∈ V, and thus belongs to the set Ĉv, if that customer's parcel is eligible to be delivered by v. This
categorization may be a function of several factors, including the parcel’s weight or size, whether a
customer signature is required, whether the parcel contains hazardous material that should not be
flown, or whether the customer’s location is conducive to accommodating a drone (e.g., apartments
or heavily-wooded areas may be inaccessible to a UAV).
Each UAV is capable of carrying one droneable parcel at a time, although the weight or volume
capacity of each UAV may differ. UAVs may be launched from the depot, or from the truck. While
a UAV can be launched multiple times, it cannot be launched from the same location more than
once. This assumption, consistent with the definition of the original FSTSP, is made primarily
for algorithmic convenience. A UAV can be retrieved at the depot, or by the truck at a customer
location, but it cannot be retrieved at the same customer location from which it was launched.
When a UAV returns to the truck, it may be loaded onto the truck or it can be launched from
the truck (at this location) with a new package. The truck can also make stops between UAV
launch and retrieval locations to serve other customers while UAVs are airborne. It is assumed
that the truck can transport all of the available UAVs at once, although the truck may only launch
or retrieve one UAV at a time.
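Putting these rules together, each UAV mission is a three-node sortie ⟨v, i, j, k⟩: launch at i, serve customer j, rendezvous at k, all within the endurance e_vijk. A sketch of how such a check might look (illustrative only; the paper's MILP also schedules launch, recovery, and waiting times):

```python
# Illustrative sortie check for <v, i, j, k>: fly i->j, serve customer j,
# fly j->k, all within the UAV's endurance e[v,i,j,k].
def sortie_feasible(tau_uav, sigma_uav, e, v, i, j, k):
    total = tau_uav[(v, i, j)] + sigma_uav[(v, j)] + tau_uav[(v, j, k)]
    return total <= e[(v, i, j, k)]

tau = {("uav1", 0, 3): 300.0, ("uav1", 3, 5): 280.0}   # flight times, seconds
sigma = {("uav1", 3): 60.0}                            # UAV service time at node 3
e = {("uav1", 0, 3, 5): 1200.0}                        # endurance for this sortie
print(sortie_feasible(tau, sigma, e, "uav1", 0, 3, 5))  # True: 640 <= 1200
```

Enumerating only the feasible tuples is what the set P captures in the notation above.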
For now, assume that the truck must be present at the depot when the UAVs are launched or
retrieved; in Section 3.7 we address a variant of the problem for cases where the depot features
automation or is sufficiently staffed to prepare and receive UAVs without the driver. We also
initially assume that the driver must participate in the launch/recovery process when en route
(away from the depot). However, Section 3.8 describes model modifications that leverage UAV-
handling automation within the truck.
Because the truck may launch and retrieve (and re-launch) multiple UAVs at a particular
customer location, it is important to coordinate these activities carefully to avoid mid-air collisions.
Thus, the driver’s task of dropping off a customer’s parcel must also be scheduled with the UAV
launch and recovery activities. Figure 3 shows an example of the flow of these scheduled activities.

Figure 3: A notional example depicting the scheduling activities required at truck customer locations. Panels (a)–(f) show, in sequence, the arrivals and retrievals of two UAVs, coordinated with the truck's arrival and the driver's service of the customer, after which the truck departs.

To characterize the underlying network structure of the mFSTSP, we define the set of all nodes in the network to be N = {0, 1, . . . , c+1}, where nodes 0 and c+1 represent the depot from which
all vehicles must originate and return. This convention accommodates the case in which the origin
depot (0) and destination depot (c+1) have different physical locations. The truck may only depart from node 0, and must return to node c+1. The set of nodes from which a vehicle may depart is represented by N_0 = {0, 1, . . . , c}, while N_+ = {1, 2, . . . , c+1} describes the set of all nodes that a vehicle may visit.
The truck's travel time along the road network from node i ∈ N_0 to node j ∈ N_+ is given by τ_ij. Similarly, τ′_vij represents the time required for UAV v ∈ V to fly from node i ∈ N_0 to node j ∈ N_+.
When UAV v ∈ V is launched from node i ∈ N_0, it requires s^L_{v,i} units of time. This launch time is indexed on v to incorporate differences among UAVs (some of which may be better designed for loading parcels or swapping batteries). The launch time is also indexed on the launch location; launching from the depot may require a different amount of time (e.g., perhaps the drones are already loaded with their first parcel and already have a fresh battery, or perhaps there is automation within the depot). The recovery time, s^R_{v,k}, is similarly defined for UAV v ∈ V retrieved at node k ∈ N_+.
Truck deliveries require σ_k units of time for service at node k ∈ N_+, where σ_{c+1} ≡ 0 as there is no delivery at the depot. Similarly, UAV v ∈ V requires σ′_{v,k} units of time to perform the delivery service at node k ∈ N_+, where σ′_{v,c+1} ≡ 0. This service time is indexed on the UAV to reflect differences in delivery mechanisms among UAVs in the fleet. For example, some UAVs deliver goods via a tether while the drone remains airborne (c.f., Google's "egg" (Mogg 2015) and Flirtey's pizza delivery UAV (Boyle 2016)), others require the UAV to land to release the package (c.f., DHL's 'parcelcopter' (Adams 2016) and the UPS/Workhorse truck/UAV tandem (Adams 2017)), while other designs drop goods via parachute (c.f., Amazon's patent for a shipping label with built-in parachute (Mogg 2017) and Zipline's blood deliveries (Toor 2016)).
3.1 UAV endurance
Each UAV v ∈ V has a unique endurance, represented as e_vijk and measured in units of time, for which it may remain operational as it travels from node i ∈ N_0 (launch) to j ∈ Ĉ_v (delivery) and then to k ∈ N_+ (rendezvous). Incorporating the UAV's endurance is critical, as UAV operations are hampered by limited battery capacity.
To identify potential valid UAV sorties (i.e., the sequence of a launch, customer delivery, and rendezvous), P is defined to be a set of four-tuples of the form ⟨v, i, j, k⟩ for v ∈ V, i ∈ N_0, j ∈ Ĉ_v, and k ∈ N_+. This set has the following properties:
• The launch node, i, must not be the ending depot node (i.e., i is restricted to N_0);
• The delivery node, j, must be an eligible customer for UAV v (i.e., j ∈ {Ĉ_v : j ≠ i});
• The rendezvous point, k, may be either a customer or the ending depot (but it must not be node i or node j); and
• The UAV's travel time from i → j → k must not exceed the endurance of the UAV (i.e., τ′_vij + σ′_vj + τ′_vjk ≤ e_vijk for k ∈ {N_+ : k ≠ j, k ≠ i}).
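Under these four properties, constructing P reduces to filtering candidate tuples against the endurance check in the final bullet. A minimal sketch is below; the function name and the dictionary-based data layout are illustrative assumptions, not from the paper.

```python
from itertools import product

def build_sorties(V, N0, Nplus, C_hat, tau_uav, sigma_uav, endurance):
    """Enumerate valid UAV sorties <v, i, j, k> per the four properties above.

    C_hat[v]            -- set of customers eligible for UAV v
    tau_uav[v][i][j]    -- UAV v's flight time from node i to node j
    sigma_uav[v][j]     -- UAV v's service time at customer j
    endurance[v][(i, j, k)] -- endurance limit e_vijk (illustrative layout)
    """
    P = set()
    for v, i in product(V, N0):
        for j in C_hat[v]:
            if j == i:
                continue  # the delivery node must differ from the launch node
            for k in Nplus:
                if k in (i, j):
                    continue  # rendezvous must differ from launch and delivery
                flight = tau_uav[v][i][j] + sigma_uav[v][j] + tau_uav[v][j][k]
                if flight <= endurance[v][(i, j, k)]:
                    P.add((v, i, j, k))
    return P
```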
3.2 Objective and decision variables
The objective of the mFSTSP is to minimize the time required to deliver all parcels and return
to the depot (i.e., to minimize the makespan). This is accomplished via determination of decision
variable values across six main classes, a summary of which is provided in Table 3. First, binary decision variable x_ij = 1 if the truck travels from node i ∈ N_0 immediately to node j ∈ {N_+ : j ≠ i}. This decision variable determines the route of the delivery truck. Similarly, in the second class, binary decision variable p_ij = 1 if the truck visits node i ∈ N_0 at some time prior to visiting node j ∈ {C : j ≠ i}. We define p_0j ≡ 1 for all j ∈ C to indicate that the truck must leave the depot (node 0). This decision variable is employed to ensure that a UAV's launch and recovery nodes are consistent with the truck's route (i.e., if a UAV is launched from the truck, it cannot return to the truck at a location that appears earlier in the truck's route).
In the third class, binary decision variable y_vijk = 1 if UAV v ∈ V travels from node i ∈ N_0 to customer j ∈ {Ĉ_v : j ≠ i}, re-joining the truck at node k ∈ {N_+ : ⟨v, i, j, k⟩ ∈ P}. This decision variable identifies UAV sorties.
The fourth class involves five continuous decision variables that determine the times at which key events for the truck and UAVs occur. Specifically, ˇt_i ≥ 0 captures the truck's arrival time at node i ∈ N, where ˇt_0 ≡ 0 to indicate that the truck is available to begin operations at time zero. The truck's service completion time at node i ∈ N_+ is given by ¯t_i ≥ 0, where ¯t_0 ≡ 0 to reflect the fact that there is no customer associated with the depot. This decision variable indicates the time at which customer i's parcel has been delivered. Next, ˆt_i ≥ 0 identifies the truck's completion time at node i ∈ N (e.g., the earliest departure time from this node if i ∈ N_0). In the problem variant where the truck is not required to be at the depot when UAVs launch, ˆt_0 ≡ 0. Similarly, timing for the UAVs is determined by ˇt′_vi ≥ 0, which denotes the arrival time of UAV v ∈ V at node i ∈ N, and ˆt′_vi ≥ 0, which identifies the completion time of UAV v ∈ V at node i ∈ N.
Next, numerous binary decision variables (all identified by the letter z with sub- and superscripts) are employed to coordinate the driver and each UAV, and to establish the sequencing of UAV launches and retrievals at each node. Details on each of these variables are provided in Table 3.
Finally, 1 ≤ u_i ≤ c + 2 are standard truck subtour elimination variables, defined for all i ∈ N_+, that indicate the relative ordering of visits to node i.
Details of the MILP formulation are provided in the remainder of this section. Due to the
length of the model, constraints are grouped according to functionality.
3.3 Core model components from the FSTSP
The mFSTSP leverages core components of the FSTSP model provided by Murray and Chu (2015),
with modifications to accommodate multiple UAVs.

Table 3: Decision variables
x_ij ∈ {0,1}: x_ij = 1 if the truck travels from node i ∈ N_0 immediately to node j ∈ {N_+ : j ≠ i}.
p_ij ∈ {0,1}: p_ij = 1 if node i ∈ N_0 appears in the truck's route before node j ∈ {C : j ≠ i}; p_0j ≡ 1 for all j ∈ C.
y_vijk ∈ {0,1}: y_vijk = 1 if UAV v ∈ V travels from node i ∈ N_0 to customer j ∈ {Ĉ_v : j ≠ i}, re-joining the truck at node k ∈ {N_+ : ⟨v, i, j, k⟩ ∈ P}.
ˇt_i ≥ 0: Truck's arrival time at node i ∈ N, where ˇt_0 ≡ 0.
¯t_i ≥ 0: Truck's service completion time at node i ∈ N_+, where ¯t_0 ≡ 0.
ˆt_i ≥ 0: Truck's completion time at node i ∈ N (e.g., the earliest departure time from this node if i ∈ N_0). If the truck is not required to be at the depot when UAVs launch, then ˆt_0 ≡ 0.
ˇt′_vi ≥ 0: Arrival time of UAV v ∈ V at node i ∈ N.
ˆt′_vi ≥ 0: Completion time of UAV v ∈ V at node i ∈ N.
z^R_{v1,v2,k} ∈ {0,1}: z^R_{v1,v2,k} = 1 if v1 ∈ V and v2 ∈ {V : v2 ≠ v1} are recovered at node k ∈ N_+, such that v1 is recovered before v2.
z^R_{0,v,k} ∈ {0,1}: z^R_{0,v,k} = 1 if the truck completes its service activities at node k ∈ N_+ before UAV v ∈ V is retrieved at node k.
z^R_{v,0,k} ∈ {0,1}: z^R_{v,0,k} = 1 if UAV v ∈ V is retrieved at node k ∈ N_+ before the truck completes its service activities at node k. We define z^R_{v,0,c+1} ≡ 0 for all v ∈ V (since the truck has no service activities at the depot node, the order does not matter).
z^L_{v1,v2,i} ∈ {0,1}: z^L_{v1,v2,i} = 1 if UAV v1 ∈ V is launched from node i ∈ N_0 before v2 ∈ {V : v2 ≠ v1} is launched from i.
z^L_{0,v,i} ∈ {0,1}: z^L_{0,v,i} = 1 if the truck completes its service activities at node i ∈ N_0 before UAV v ∈ V is launched from i.
z^L_{v,0,i} ∈ {0,1}: z^L_{v,0,i} = 1 if UAV v ∈ V is launched from node i ∈ N_0 before the truck completes its service activities at node i. If the truck is not required to be present when UAVs launch from the depot, we may define z^L_{v,0,0} = 0 for all v ∈ V (since the truck has no service activities at the depot node, the order does not matter).
z′_{v1,v2,i} ∈ {0,1}: z′_{v1,v2,i} = 1 if UAV v1 ∈ V launches from node i ∈ C before UAV v2 ∈ {V : v2 ≠ v1} lands at i.
z″_{v1,v2,i} ∈ {0,1}: z″_{v1,v2,i} = 1 if UAV v1 ∈ V lands at node i ∈ C before UAV v2 ∈ {V : v2 ≠ v1} launches from i.
1 ≤ u_i ≤ c + 2: Truck subtour elimination variables, defined for all i ∈ N_+, which indicate the relative ordering of visits to node i.

This model employs several "big-M" constraints, where the value of M represents a sufficiently large number. One valid value of M is the length of a TSP tour in which the truck visits every customer, plus the sum of the truck service times over all customers.
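Since M only needs to be sufficiently large, any feasible truck-only tour (not necessarily an optimal TSP tour) also yields a valid value. A sketch using a nearest-neighbor tour, purely as an illustration (function name and data layout are assumptions):

```python
def big_m_value(nodes, tau, sigma, depot=0):
    """Upper bound M from a nearest-neighbor truck-only tour plus service times.

    tau[i][j]  -- truck travel time from node i to node j
    sigma[j]   -- truck service time at customer j
    Nearest neighbor is used only for simplicity; any feasible tour works,
    since its length can only overestimate the optimal TSP tour length.
    """
    unvisited = set(nodes) - {depot}
    tour_len, current = 0.0, depot
    while unvisited:
        nxt = min(unvisited, key=lambda j: tau[current][j])  # greedy step
        tour_len += tau[current][nxt]
        unvisited.remove(nxt)
        current = nxt
    tour_len += tau[current][depot]  # return to the depot
    return tour_len + sum(sigma[j] for j in nodes if j != depot)
```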
The objective function and the general constraints related to guaranteeing customer deliveries
and eliminating truck subtours are as follows:
Min ˆt_{c+1}  (1)

s.t.
Σ_{i∈N_0: i≠j} x_ij + Σ_{v∈V} Σ_{i,k: ⟨v,i,j,k⟩∈P} y_vijk = 1  ∀j ∈ C,  (2)
Σ_{j∈N_+} x_0j = 1,  (3)
Σ_{i∈N_0} x_{i,c+1} = 1,  (4)
Σ_{i∈N_0: i≠j} x_ij = Σ_{k∈N_+: k≠j} x_jk  ∀j ∈ C,  (5)
Σ_{j,k: ⟨v,i,j,k⟩∈P} y_vijk ≤ 1  ∀i ∈ N_0, ∀v ∈ V,  (6)
Σ_{i,j: ⟨v,i,j,k⟩∈P} y_vijk ≤ 1  ∀k ∈ N_+, ∀v ∈ V,  (7)
2 y_vijk ≤ Σ_{h∈N_0: h≠i} x_hi + Σ_{l∈N_0: l≠k} x_lk  ∀v ∈ V, i ∈ C, j ∈ {C : j ≠ i}, k ∈ {N_+ : ⟨v,i,j,k⟩ ∈ P},  (8)
y_{v,0,j,k} ≤ Σ_{h∈N_0: h≠k} x_hk  ∀v ∈ V, j ∈ C, k ∈ {N_+ : ⟨v,0,j,k⟩ ∈ P},  (9)
u_k − u_i ≥ 1 − (c+2)(1 − Σ_{j: ⟨v,i,j,k⟩∈P} y_vijk)  ∀i ∈ C, k ∈ {N_+ : k ≠ i}, ∀v ∈ V,  (10)
u_i − u_j + 1 ≤ (c+2)(1 − x_ij)  ∀i ∈ C, j ∈ {N_+ : j ≠ i},  (11)
u_i − u_j ≥ 1 − (c+2) p_ij  ∀i ∈ C, j ∈ {C : j ≠ i},  (12)
u_i − u_j ≤ −1 + (c+2)(1 − p_ij)  ∀i ∈ C, j ∈ {C : j ≠ i},  (13)
p_ij + p_ji = 1  ∀i ∈ C, j ∈ {C : j ≠ i}.  (14)
The objective function (1) seeks to minimize the latest time at which either the truck or a UAV returns to the depot. Although ˆt_{c+1} is explicitly defined only as the truck's return time to the depot, constraints in Sections 3.4 and 3.5 serve to link the UAVs' return times to the truck's return time at the depot. Thus, the objective function is equivalent to min max{ˆt_{c+1}, max_{v∈V} ˇt′_{v,c+1}}.
Constraint (2) requires each customer to be visited exactly once. Constraint (3) ensures that
the truck departs from the depot exactly once, while Constraint (4) requires the truck to return to
the depot exactly once.
Constraint (5) provides flow balance for the truck, which must depart from each node that it
visits (except the ending depot node), while Constraint (6) states that each UAV may launch at
most once from any particular node, including the depot. Similarly, Constraint (7) indicates that
each UAV may rendezvous at any particular node (including customers and the ending depot) at
most once.
If a UAV is launched from customer i and is collected by the truck at node k, then Constraint (8) states that the truck must be assigned to both nodes i and k. Furthermore, Constraint (9) ensures that if a UAV launches from the starting depot 0 and is collected at node k, then the truck must be assigned to node k. Similarly, Constraint (10) ensures that the truck must visit i before k if a UAV launches from customer i and is collected at node k. Subtour elimination constraints for the truck are provided by (11).
Constraints (12)–(14) determine the proper values of p_ij. Because u_i and p_ij describe the ordering of nodes visited by the truck only, the values of these decision variables are inconsequential for any i and j that are visited only by a UAV.
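The relationship between the u_i ordering variables, the p_ij precedence variables, and constraints (12)–(14) can be illustrated by deriving both sets of values from a fixed truck route and checking the constraints directly. The helper below is a sketch for intuition only, not part of the model:

```python
def ordering_variables(route):
    """Derive u_i and p_ij from a truck route [0, n1, ..., c+1] (illustrative).

    u[i] is the 1-based position of node i in the route; p[(i, j)] = 1 if the
    truck visits i before j. Depot convention follows the model: p[(0, j)] = 1.
    """
    u = {node: pos + 1 for pos, node in enumerate(route)}
    customers = route[1:-1]
    p = {(i, j): (1 if u[i] < u[j] else 0)
         for i in customers for j in customers if i != j}
    for j in customers:
        p[(0, j)] = 1
    return u, p

def check_mtz(route):
    """Verify that the derived u and p values satisfy constraints (12)-(14)."""
    u, p = ordering_variables(route)
    c = len(route) - 2  # number of customers on the route
    for i in route[1:-1]:
        for j in route[1:-1]:
            if i == j:
                continue
            assert p[(i, j)] + p[(j, i)] == 1                     # (14)
            assert u[i] - u[j] >= 1 - (c + 2) * p[(i, j)]         # (12)
            assert u[i] - u[j] <= -1 + (c + 2) * (1 - p[(i, j)])  # (13)
    return True
```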
3.4 UAV timing constraints
The following constraints establish the times at which each UAV launches from either the depot
or the truck, arrives at a customer location, and returns to either the truck or the depot. These
constraints also address UAV flight endurance limitations.
ˆt′_vl ≥ ˇt′_vk − M(3 − Σ_{j: ⟨v,i,j,k⟩∈P} y_vijk − Σ_{m,n: ⟨v,l,m,n⟩∈P} y_vlmn − p_il)
  ∀i ∈ N_0, k ∈ {N_+ : k ≠ i}, l ∈ {C : l ≠ i, l ≠ k}, ∀v ∈ V,  (15)
ˆt′_vi ≥ ˇt′_vi + s^L_{v,i} − M(1 − Σ_{j,k: ⟨v,i,j,k⟩∈P} y_vijk)  ∀v ∈ V, i ∈ N_0,  (16)
ˆt′_vi ≥ ˇt_i + s^L_{v,i} − M(1 − z^L_{v,0,i})  ∀v ∈ V, i ∈ N_0,  (17)
ˆt′_vi ≥ ¯t_i + s^L_{v,i} − M(1 − z^L_{0,v,i})  ∀v ∈ V, i ∈ N_0,  (18)
ˆt′_vi ≥ ˆt′_{v2,i} + s^L_{v,i} − M(1 − z^L_{v2,v,i})  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, i ∈ N_0,  (19)
ˆt′_{v2,i} ≥ ˇt′_vi + s^L_{v2,i} − M(1 − z″_{v,v2,i})  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, i ∈ C,  (20)
ˇt′_vj ≥ ˆt′_vi + τ′_vij − M(1 − Σ_{k: ⟨v,i,j,k⟩∈P} y_vijk)  ∀v ∈ V, j ∈ C, i ∈ {N_0 : i ≠ j},  (21)
ˇt′_vj ≤ ˆt′_vi + τ′_vij + M(1 − Σ_{k: ⟨v,i,j,k⟩∈P} y_vijk)  ∀v ∈ V, j ∈ C, i ∈ {N_0 : i ≠ j},  (22)
ˆt′_vj ≥ ˇt′_vj + σ′_vj Σ_{i,k: ⟨v,i,j,k⟩∈P} y_vijk  ∀v ∈ V, j ∈ C,  (23)
ˆt′_vj ≤ ˇt′_vj + σ′_vj + M(1 − Σ_{i,k: ⟨v,i,j,k⟩∈P} y_vijk)  ∀v ∈ V, j ∈ C,  (24)
ˇt′_vk ≥ ˇt_k + s^R_{v,k} − M(1 − z^R_{v,0,k})  ∀v ∈ V, k ∈ N_+,  (25)
ˇt′_vk ≥ ¯t_k + s^R_{v,k} − M(1 − z^R_{0,v,k})  ∀v ∈ V, k ∈ N_+,  (26)
ˇt′_vk ≥ ˇt′_{v2,k} + s^R_{v,k} − M(1 − z^R_{v2,v,k})  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ N_+,  (27)
ˇt′_vk ≥ ˆt′_{v2,k} + s^R_{v,k} − M(1 − z′_{v2,v,k})  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ C,  (28)
ˇt′_vk ≥ ˆt′_vj + τ′_vjk + s^R_{v,k} − M(1 − Σ_{i: ⟨v,i,j,k⟩∈P} y_vijk)  ∀v ∈ V, k ∈ N_+, j ∈ {C : j ≠ k},  (29)
ˇt′_vk − s^R_{v,k} − ˆt′_vi ≤ e_vijk + M(1 − y_vijk)
  ∀v ∈ V, i ∈ N_0, j ∈ {Ĉ_v : j ≠ i}, k ∈ {N_+ : ⟨v,i,j,k⟩ ∈ P}.  (30)
Constraint (15) prohibits individual UAV sorties from overlapping. For example, suppose that UAV v launches from i and returns to k. Further, suppose that the UAV later launches from l (thus, p_il = 1). This constraint prevents the launch time from l, ˆt′_vl, from preceding the return time to k, ˇt′_vk. If the UAV does not return to k, the UAV does not launch from l, or i does not precede l, then this constraint will not be binding. This constraint requires the definition of p_0l ≡ 1 for all l ∈ C.
Constraints (16)–(20) address the launching of UAVs. The launch service (preparation) time, s^L_{v,i}, is included in these constraints to be consistent with the definition of ˆt′_vi (the time at which UAV v is launched from node i). Constraint (16) states that v cannot launch from i (to j and k) until after v has arrived at node i; if v were transported on the truck to node i, ˇt′_vi would be a meaningless value. Per Constraint (17), UAV v cannot launch from i until after the truck has arrived at node i, if the truck customer is served after v is launched (i.e., if z^L_{v,0,i} = 1). Conversely, Constraint (18) states that UAV v cannot launch from i until after the truck has served this customer, if the truck serves the customer before v is launched (i.e., if z^L_{0,v,i} = 1). In Constraint (19), UAV v cannot launch from i until UAV v2 has launched, if v2 launches before v (i.e., if z^L_{v2,v,i} = 1), while Constraint (20) ensures that v2 does not launch until after v has landed, if v lands before v2 launches (i.e., if z″_{v,v2,i} = 1). Note that, in Constraint (20), node i is restricted to being a customer node since no other node type permits both launching and landing.
Constraints (21) and (22) govern the arrival timing for a UAV serving some customer j; Constraints (23) and (24) perform the same function for the departure timing. These four constraints ensure that a UAV will travel directly to the customer location, and will depart immediately after completing service. Any required loitering while waiting to be retrieved by the truck will occur at the truck's location. Constraints (25)–(29) address the landing of UAVs at retrieval locations (i.e., not at a drone delivery customer). The recovery service time (e.g., s^R_{v,k}) is included to be consistent with the definition of ˇt′_vk (the time at which v is deemed to have arrived at node k). In particular, Constraint (25) states that v can land at node k as soon as the truck has arrived, if the truck serves customer k after retrieving v (i.e., if z^R_{v,0,k} = 1). However, if the truck serves customer k first (i.e., if z^R_{0,v,k} = 1), then UAV v cannot land at k until the truck has completed this service.
If UAV v2 is recovered at node k before UAV v (i.e., if z^R_{v2,v,k} = 1), Constraint (27) ensures that the arrival time for v is after the arrival time for v2. Similarly, if v2 is launched from node k before v is recovered (i.e., if z′_{v2,v,k} = 1), Constraint (28) requires the arrival time for v to be after the launch time for v2. In this constraint, k is restricted to the set of customers because UAVs cannot launch and land at any other nodes. In Constraint (29), UAV v cannot land at k until it has launched from customer j and traveled from j to k.
UAV endurance limitations are addressed by Constraint (30). If v travels from i to j to k, then the difference between the arrival time at k (less the recovery time, which is incorporated in ˇt′_vk) and the departure time from i must not exceed the endurance limit.
3.5 Truck timing constraints
The following constraints govern the arrival, service, and departure activities for the truck.
ˇt_j ≥ ˆt_i + τ_ij − M(1 − x_ij)  ∀i ∈ N_0, j ∈ {N_+ : j ≠ i},  (31)
¯t_k ≥ ˇt_k + σ_k Σ_{j∈N_0: j≠k} x_jk  ∀k ∈ N_+,  (32)
¯t_k ≥ ˇt′_vk + σ_k − M(1 − z^R_{v,0,k})  ∀k ∈ N_+, v ∈ V,  (33)
¯t_k ≥ ˆt′_vk + σ_k − M(1 − z^L_{v,0,k})  ∀k ∈ C, v ∈ V,  (34)
ˆt_k ≥ ¯t_k  ∀k ∈ N_+,  (35)
ˆt_k ≥ ˇt′_vk − M(1 − Σ_{i,j: ⟨v,i,j,k⟩∈P} y_vijk)  ∀k ∈ N_+, v ∈ V,  (36)
ˆt_k ≥ ˆt′_vk − M(1 − Σ_{j,l: ⟨v,k,j,l⟩∈P} y_vkjl)  ∀k ∈ N_0, v ∈ V.  (37)
The truck’s travel time is incorporated in Constraint (31), which states that the truck cannot
arrive at juntil after it has left iand traveled from ito j.
Constraints (32), (33), and (34) establish the truck’s service time completion at a customer.
Note that there is no customer service time at the depot (i.e., σc+1 ≡0). Thus, the service
completion time for the truck when visiting the depot does not have a service time component,
although it does also depend on the arrival of any UAVs to the depot in the event that we require
the truck to be present when UAVs arrive. In Constraint (32), the truck’s service time completion
at node kmust not be prior to arriving at the node and finishing service. If the truck does not serve
k, then ˇ
tkwill be a meaningless value (probably zero). Constraint (33) states that the truck cannot
complete service to customer kuntil UAV vhas arrived, if vis recovered at node kbefore the truck
service begins (i.e., if zR
v0k= 1). Similarly, the truck cannot complete its service of customer kuntil
UAV vhas launched, if vis launched from kbefore the truck begins service (i.e., if zL
vok = 1), as
in Constraint (34). Note that k∈C(rather than in N+) because UAVs cannot be launched from
node c+ 1.
Constraints (35), (36), and (37) establish the truck’s earliest departure from a node. Constraint
(35) prevents the truck from departing a node until it has completed serving the customer. If the
truck does not serve k, then ˇ
tkwill be a meaningless value. In Constraint (36), if a UAV is retrieved
at node k, then the truck cannot depart until that UAV has arrived. Similarly, the truck cannot
depart from a node until after all UAVs have launched from that node, as per Constraint (37).
3.6 Sequencing of retrievals, launches, and truck service
In this section, constraints are provided to establish proper values of the binary decision variables used to sequence the activities at each node. We begin with constraints for setting the z^R_{·,·,·} decision variable values:

z^R_{0,v,k} + z^R_{v,0,k} = Σ_{i,j: ⟨v,i,j,k⟩∈P} y_vijk  ∀v ∈ V, k ∈ N_+,  (38)
z^R_{v,v2,k} ≤ Σ_{i,j: ⟨v,i,j,k⟩∈P} y_vijk  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ N_+,  (39)
z^R_{v,v2,k} ≤ Σ_{i,j: ⟨v2,i,j,k⟩∈P} y_{v2,i,j,k}  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ N_+,  (40)
z^R_{v,v2,k} + z^R_{v2,v,k} ≤ 1  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ N_+,  (41)
z^R_{v,v2,k} + z^R_{v2,v,k} + 1 ≥ Σ_{i,j: ⟨v,i,j,k⟩∈P} y_vijk + Σ_{i,j: ⟨v2,i,j,k⟩∈P} y_{v2,i,j,k}
  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ N_+.  (42)
In Constraint (38), if v is retrieved at node k, then the truck must serve k either before or after v arrives. Conversely, z^R_{v,v2,k} cannot equal one if neither v nor v2 is retrieved at node k, as per Constraints (39) and (40). Constraint (41) states that either v is retrieved before v2, v2 is retrieved before v, or at least one of these UAVs is not retrieved at k. Finally, if v and v2 are both retrieved at k, then either v is retrieved before v2, or v2 is retrieved before v, as in Constraint (42).
Constraints (43)–(47), below, are the launch analogues to Constraints (38)–(42). These constraints are used to set the z^L_{·,·,·} decision variable values:

z^L_{0,v,i} + z^L_{v,0,i} = Σ_{j,k: ⟨v,i,j,k⟩∈P} y_vijk  ∀v ∈ V, i ∈ N_0,  (43)
z^L_{v,v2,i} ≤ Σ_{j,k: ⟨v,i,j,k⟩∈P} y_vijk  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, i ∈ N_0,  (44)
z^L_{v,v2,i} ≤ Σ_{j,k: ⟨v2,i,j,k⟩∈P} y_{v2,i,j,k}  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, i ∈ N_0,  (45)
z^L_{v,v2,i} + z^L_{v2,v,i} ≤ 1  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, i ∈ N_0,  (46)
z^L_{v,v2,i} + z^L_{v2,v,i} + 1 ≥ Σ_{j,k: ⟨v,i,j,k⟩∈P} y_vijk + Σ_{j,k: ⟨v2,i,j,k⟩∈P} y_{v2,i,j,k}
  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, i ∈ N_0.  (47)
Next, constraints are required for nodes at which one UAV launches and another UAV lands. Binary decision variable z′_{v1,v2,i} = 1 if UAV v1 ∈ V launches from node i ∈ C before UAV v2 ∈ {V : v2 ≠ v1} lands at i, while z″_{v1,v2,i} = 1 if UAV v1 ∈ V lands at node i ∈ C before UAV v2 ∈ {V : v2 ≠ v1} launches from i.

z′_{v2,v,k} ≤ Σ_{l,m: ⟨v2,k,l,m⟩∈P} y_{v2,k,l,m}  ∀v2 ∈ V, v ∈ {V : v ≠ v2}, k ∈ C,  (48)
z″_{v2,v,k} ≤ Σ_{l,m: ⟨v,k,l,m⟩∈P} y_{v,k,l,m}  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ C,  (49)
z′_{v2,v,k} ≤ Σ_{i,j: ⟨v,i,j,k⟩∈P} y_vijk  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ C,  (50)
z″_{v2,v,k} ≤ Σ_{i,j: ⟨v2,i,j,k⟩∈P} y_{v2,i,j,k}  ∀v2 ∈ V, v ∈ {V : v ≠ v2}, k ∈ C,  (51)
z′_{v2,v,k} + z″_{v,v2,k} + 1 ≥ Σ_{i,j: ⟨v,i,j,k⟩∈P} y_vijk + Σ_{l,m: ⟨v2,k,l,m⟩∈P} y_{v2,k,l,m}
  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ C,  (52)
z′_{v2,v,k} + z″_{v,v2,k} ≤ 1  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ C,  (53)
z′_{v2,v,k} + z′_{v,v2,k} ≤ 1  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ C,  (54)
z″_{v2,v,k} + z″_{v,v2,k} ≤ 1  ∀v ∈ V, v2 ∈ {V : v2 ≠ v}, k ∈ C.  (55)
Constraint (48) states that z′_{v2,v,k} = 0 if v2 does not launch from k, while Constraint (49) sets z″_{v2,v,k} = 0 if v does not launch from k. Similarly, Constraint (50) requires z′_{v2,v,k} = 0 if v is not retrieved at k, and Constraint (51) sets z″_{v2,v,k} = 0 if v2 is not retrieved at k. If v is retrieved at customer k and v2 is launched from customer k, then either v2 is launched before v lands, or v lands before v2 launches, per Constraint (52).
Constraint (53) states that it is impossible both for v2 to be launched before v lands and for v to land before v2 launches. Similarly, Constraint (54) indicates that v2 cannot launch before v lands if v launches before v2 lands, while Constraint (55) states that it is impossible both for v2 to land before v launches and for v to land before v2 launches.
3.7 Variant 1: Truck not required at depot
The model above assumes that the truck must be present at the depot when UAVs are launched, and
must also be at the depot when UAVs return. This reflects the case that the driver is responsible
for manually performing these activities. However, the model may be relaxed to allow the UAVs
to launch from, and return to, the depot independent of the driver.
We begin by modifying the definitions of two decision variables. First, we define ˆt_0 ≡ 0 to allow the truck to immediately depart from the depot (without waiting for UAVs to be launched). Second, we define z^L_{v,0,0} ≡ 0 for all v ∈ V; since the truck has no service activities at the depot node, it does not need to be considered in the UAV launch sequencing.
Next, Constraints (25), (26), and (33) need not be satisfied at depot node c+1, since v can land at the depot independent of the truck's arrival and service completion times. Thus, those constraints should be replaced by

ˇt′_vk ≥ ˇt_k + s^R_{v,k} − M(1 − z^R_{v,0,k})  ∀v ∈ V, k ∈ C,  (56)
ˇt′_vk ≥ ¯t_k + s^R_{v,k} − M(1 − z^R_{0,v,k})  ∀v ∈ V, k ∈ C,  (57)
¯t_k ≥ ˇt′_vk + σ_k − M(1 − z^R_{v,0,k})  ∀v ∈ V, k ∈ C.  (58)

If the objective of the model is to minimize the time at which the last vehicle returns to the depot, then Constraint (36) remains as is. Otherwise, if the objective is to minimize the time at which the truck returns to the depot, then (36) should be modified to exclude depot node k = c+1.
Finally, Constraint (37) should be relaxed for node k = 0, since the departure of UAV v from the depot is now independent of the truck's departure. Thus, (37) should be replaced by

ˆt_k ≥ ˆt′_vk − M(1 − Σ_{j,l: ⟨v,k,j,l⟩∈P} y_vkjl)  ∀k ∈ C, v ∈ V.  (59)
3.8 Variant 2: Automated launch and recovery systems
The default mFSTSP model assumes that the truck driver must be engaged in the UAV launch and recovery process at customer locations. However, concept vehicles have been proposed (c.f., Etherington 2017) that automate these activities. The obvious benefit of such automation is that the truck driver can make a delivery at a customer location independent of the UAV launches and retrievals. Note, however, that the truck still has to be present for the launch and recovery at a customer location; the difference is that the driver is not required.
To accommodate automated launch and recovery systems, the following modifications to the baseline mFSTSP model are required. First, the UAV launch timing is no longer a function of the driver's service at the customer. Thus, Constraint (17) should be replaced by

ˆt′_vi ≥ ˇt_i + s^L_{v,i} − M(1 − Σ_{j,k: ⟨v,i,j,k⟩∈P} y_vijk)  ∀v ∈ V, i ∈ N_0,  (60)

and Constraint (18) should be removed. Similarly, the UAV recovery timing constraints should be modified such that Constraint (25) is replaced by

ˇt′_vk ≥ ˇt_k + s^R_{v,k} − M(1 − Σ_{i,j: ⟨v,i,j,k⟩∈P} y_vijk)  ∀v ∈ V, k ∈ N_+,  (61)

and Constraint (26) is removed.
The truck service constraints in (33) and (34) should be removed, as the start of the truck driver's service at a customer is no longer dependent upon the UAV arrivals or departures. Similarly, Constraints (38) and (43), which sequence driver service with recovery and launch operations, respectively, should be removed.
Finally, the decision variables z^L_{0,v,i}, z^L_{v,0,i}, z^R_{0,v,k}, and z^R_{v,0,k} are no longer required. Although the UAVs still require queueing, driver service at a customer may start immediately (i.e., the UAV queueing is now independent of the driver's service at a truck customer).
4 A three-phased heuristic solution approach
Due to the NP-hard nature of the mFSTSP, heuristic approaches are required for problems of
practical size. A three-phased iterative heuristic, depicted in Figure 4, is proposed.
In Phase I, customers are partitioned into two sets – those that will be served via truck and
those served via UAVs. The minimum number of customers in the truck set is given by an input
parameter called the lower truck limit (LTL). The LTL is initialized to

LTL_0 = ⌈ c / (|V| + 1) ⌉,

where LTL_0 represents the minimum number of truck customers required for a feasible solution. For example, consider a 50-customer problem. If only 1 UAV is available, LTL_0 = 25, indicating that at least 25 customers must be assigned to the truck route. If 4 UAVs are available, then at least 10 customers must be assigned to the truck. The value of the LTL is increased over the course of the iterative procedure. In addition to partitioning the customer base into truck- and UAV-assigned customers, Phase I also produces a unique TSP-like truck tour.
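The LTL initialization above can be sketched directly (the function name is illustrative):

```python
import math

def initial_lower_truck_limit(num_customers, num_uavs):
    """Initial lower truck limit LTL0 = ceil(c / (|V| + 1)).

    Intuitively, each truck customer can anchor sorties for up to |V| UAV
    customers, so at least c / (|V| + 1) customers (rounded up) must remain
    on the truck route for a feasible partition.
    """
    return math.ceil(num_customers / (num_uavs + 1))
```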
In Phase II, sorties for the UAV-assigned customers (as determined in Phase I) are generated.
These sorties define the launch and recovery locations associated with each UAV customer, as well
as the UAV assigned to each sortie. At the conclusion of Phase II, all truck and UAV routes are
identified, but the timing of the activities is determined in Phase III.
In Phase III, an MILP is solved to determine the exact timing of the launch, recovery, and
service activities for the truck and the UAVs. Phase III also determines the queueing sequences for
the UAVs.
After Phase III is completed, a local search procedure is executed to refine the solution. The
value of LTL is then incremented to add diversity to the search space, and the procedure returns
to Phase I. The iterative procedure is repeated until the LTL equals the number of customers (i.e.,
until the problem becomes simply solving a TSP tour to visit all customers via truck). Details of
each phase are described in the remainder of this section.
4.1 Phase I – Initial Customer Assignments
The goal of Phase I is to establish a unique truck tour containing at least LTL customers, which is
analogous to finding the xij decision variables in the mFSTSP formulation. Any customer not on
the truck’s route will be allocated (if possible) in Phase II to the UAVs. Pseudocode for Phase I is
provided in Algorithms 1, 2, and 3.
This phase begins by creating a TSP tour of only those customers that are not UAV-eligible
(lines 1–4 of Algorithm 1). The getTSP() function solves an MIP using the “lazy constraints”
method detailed in Gurobi Optimization (2018).
Next, customers are added to or removed from the truck tour (lines 6–27 of Algorithm 1) according to a savings metric, with the aim of reducing the makespan. The savings metric captures trade-offs between truck travel times, truck service times, and UAV launch and recovery times. This process is repeated until either no further improvements are found, or the savings metric suggests a cycle (i.e., an infinite loop) of moving the same customers between the truck and the UAVs.

Figure 4: Heuristic flowchart. Initialization: OFV* = ∞, LTL = LTL_0, TSPtour* = ∅, UAVsorties* = ∅, ActivityTimings* = ∅. Phase I: partition customers between the truck and the UAVs and generate a truck tour; perform a swap, subtour reversal, or full TSP reversal if needed to ensure a unique tour; generate a Phase I lower bound and compare it with OFV*. Phase II: assign a feasible launch-recovery pair and a UAV to each UAV customer; check Phase II feasibility and an optimistic lower bound, λ, against OFV*; if needed, insert the cheapest-cost UAV customer into the truck route while keeping the TSP tour unique. Phase III: solve (P3) to schedule the activities (setting prevPhase3 = True); if P3ObjVal is smaller than OFV*, update OFV* and ActivityTimings*; attempt an improvement step (moving a truck customer to a UAV) and a local search (for customers where the truck waits for retrieval, shifting the corresponding retrieval points to the next location), re-solving (P3) after changes. Termination: increment LTL (setting prevPhase3 = False) and return to Phase I; once LTL exceeds the number of customers, report the incumbent solution.
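The two savings expressions used in this step (lines 10 and 20 of Algorithm 1) can be sketched as plain functions. The dictionary layout is illustrative, and the launch/recovery-time indexing follows the pseudocode's s^L_{v,j}, s^R_{v,j} form:

```python
def savings_move_to_truck(i, j, k, v, tau, sL, sR, sigma):
    """Savings from moving UAV customer j onto the truck route between i and k:
    the UAV launch/recovery times and the direct truck leg i->k are saved,
    while the truck legs i->j and j->k plus truck service at j are incurred."""
    return tau[i][k] + sL[v][j] + sR[v][j] - tau[i][j] - tau[j][k] - sigma[j]

def savings_move_to_uav(i, j, k, v, tau, sL, sR, sigma):
    """Savings from removing truck customer j (served between i and k) and
    assigning it to UAV v: the mirror image of the expression above."""
    return tau[i][j] + tau[j][k] + sigma[j] - tau[i][k] - sL[v][j] - sR[v][j]
```

Each move is accepted only when the corresponding savings value is positive, matching the `if (savings > 0)` tests in the pseudocode.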
The third step (lines 28–66 of Algorithm 2) ensures that at least LTL customers are served. This
step also attempts to evaluate the feasibility of the UAV assignments that will be made in Phase
II. Feasibility is first determined by identifying UAV customers jfor any UAV vthat do not have a
corresponding valid sortie hv, i, j, ki ∈ P(line 31) for customers iand kthat are currently assigned
to the truck’s route.
Feasibility is further assessed via the checkP2Feasibility() function, which solves the following objective-free integer linear program:

Σ_{j∈H_i} r_ij ≤ |V|  ∀i ∈ TruckCustomers ∪ {0},  (62)
Σ_{i∈G_j} r_ij = 1  ∀j ∈ UAVCustomers,  (63)
r_ij ∈ {0,1}  ∀i ∈ TruckCustomers ∪ {0}, j ∈ UAVCustomers.  (64)
The set G_j contains all customers i ∈ TruckCustomers ∪ {0} that can act as a UAV launch point for customer j ∈ UAVCustomers, such that the retrieval point k ∈ N_+ is immediately after i in the TSPtour and ⟨v, i, j, k⟩ ∈ P. Similarly, H_i is the set of customers j ∈ UAVCustomers that can be served by launching from customer i ∈ TruckCustomers ∪ {0}, such that the retrieval point k ∈ N_+ is immediately before i in the TSPtour and ⟨v, i, j, k⟩ ∈ P. Binary decision variable r_ij equals 1 if j ∈ UAVCustomers is assigned to i ∈ TruckCustomers ∪ {0} as the launch point. Constraint (62) ensures that at most |V| UAVs can be launched from any launch location, while Constraint (63) requires each UAV customer to have exactly one launch point. If a feasible solution to these constraints exists, the function checkP2Feasibility() returns an empty set; otherwise, the function returns a set of UAV customers that do not have feasible assignments.
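Because the program (62)–(64) has no objective, checking it amounts to a bipartite assignment-feasibility test: each launch point offers |V| slots and each UAV customer needs exactly one. The sketch below uses a standard augmenting-path (Kuhn's algorithm) matching as an illustrative stand-in for checkP2Feasibility(); it is not the authors' implementation.

```python
def check_p2_feasibility(G, uav_customers, num_uavs):
    """Return the set of UAV customers with no feasible launch assignment.

    G[j] -- candidate launch points for UAV customer j (the set G_j).
    Each launch point is expanded into num_uavs slots, and Kuhn's
    augmenting-path algorithm builds a maximum bipartite matching.
    """
    match = {}  # slot (launch point, copy index) -> assigned UAV customer

    def try_assign(j, visited):
        for i in G[j]:
            for slot in ((i, n) for n in range(num_uavs)):
                if slot in visited:
                    continue
                visited.add(slot)
                # Take a free slot, or displace its occupant recursively.
                if slot not in match or try_assign(match[slot], visited):
                    match[slot] = j
                    return True
        return False

    infeasible = set()
    for j in uav_customers:
        if not try_assign(j, set()):
            infeasible.add(j)
    return infeasible
```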
If any infeasible UAV customers are identified, or if the number of truck customers is less than
the LTL, one customer at a time will be added to the truck’s tour. costjk represents the change
in the truck’s makespan that would result from inserting UAV customer jimmediately before
customer kin the truck tour. coverjk is the set of customers in infeasCust whose assignment
would become feasible by inserting jin the truck tour immediately before k.
In the event that inserting jinto the truck’s route leads to a reduction in the makespan (i.e.,
a negative cost), the scorejk metric is calculated by multiplying costjk by the number of UAV
customers supported by the insertion. This indicates that the benefit is shared among numerous
UAV customers. Conversely, if the makespan increases by inserting j,scorejk is calculated as
the cost per infeasible UAV customer that is being supported. Thus, if two insertions have the
same cost, the one that may eliminate a larger number of infeasibilities would be preferable. In
lines 52 and 53, the UAV customers j∗
1and j∗
2with minimum costjk and scorejk , respectively,
are identified. If there are no infeasible UAV customers but the number of truck customers is
less than LTL, we choose to move the customer with the cheapest cost to the truck (and re-solve
the TSP). However, if infeasible UAV customers have been identified and a sufficient number of
truck customers have already been added, we choose to insert the customer with the cheapest score
immediately before customer k. Otherwise (i.e., if there are infeasible UAV customers and the
length of the truck route is less than LTL), the customer with the cheapest score is added to the
truck route and a new TSP tour is generated. This process of moving customers to the truck route
is continued until there are no more UAV customers with infeasible assignments and there are at
least LTL truck customers.
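The cost/score rule above can be sketched in a few lines. This is an illustrative reading of the rule, not the authors' implementation; in particular, the guard against an empty cover set is an assumption added here:

```python
# Sketch of the insertion-scoring rule from Phase I. score_jk rewards
# negative-cost insertions that cover many infeasible customers, and
# penalizes positive-cost insertions per customer covered.

def score(cost_jk, cover_jk):
    """Score an insertion of UAV customer j immediately before truck customer k."""
    n = max(len(cover_jk), 1)      # assumed guard: avoid division by zero
    if cost_jk < 0:
        return cost_jk * n         # shared benefit: more coverage is better
    return cost_jk / n             # shared cost: more coverage is better

# Two candidate insertions with equal positive cost: the one covering more
# infeasible customers gets the smaller (preferred) score.
assert score(4.0, {"a", "b"}) < score(4.0, {"a"})
# For a cost-saving insertion, covering more customers is also preferred.
assert score(-2.0, {"a", "b"}) < score(-2.0, {"a"})
```

Either way, minimizing the score favors insertions whose cost (or benefit) is spread over the largest number of infeasible UAV customers.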
Phase I continues in Line 67 of Algorithm 3, where the truck tour is perturbed if it has been
previously evaluated. If such a modification is necessary, the procedure attempts to (1) swap a
truck customer and a UAV customer, (2) perform a subtour reversal (i.e., i → j → k → l becomes
i → k → j → l), and (3) reverse the entire TSP tour. The modification that produces a unique
truck route with the minimum associated cost is selected. If no unique tour is found, the value of
LTL is increased by one and the procedure returns to Phase I. Otherwise, a lower bound is generated
by using the TSP duration (including truck customer service times) and adding launch and retrieval
times for the UAV customers. If the lower bound exceeds the current incumbent (OFV∗), the LTL is
updated and the procedure repeats Phase I. If the bound is less than the current incumbent, the
procedure continues to Phase II.
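The two reversal moves can be illustrated with a single helper (a hypothetical sketch; the paper's procedure additionally requires that the resulting tour be unique among those already evaluated):

```python
# Sketch of the subtour-reversal perturbation: reversing the segment between
# positions a and b (inclusive) turns i -> j -> k -> l into i -> k -> j -> l.

def reverse_subtour(tour, a, b):
    """Return a copy of `tour` with the positions a..b reversed."""
    return tour[:a] + tour[a:b + 1][::-1] + tour[b + 1:]

tour = ["i", "j", "k", "l"]
assert reverse_subtour(tour, 1, 2) == ["i", "k", "j", "l"]
# Reversing the entire tour is the special case a = 0, b = len(tour) - 1.
assert reverse_subtour(tour, 0, 3) == ["l", "k", "j", "i"]
```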
Algorithm 1 Pseudocode for Phase I – Part 1 of 3
1: # Initialize:
2: TruckCust = all j ∈ {C : j ∉ Ĉ_v for any v ∈ V}
3: UAVCust = C \ TruckCust
4: TSPtour =getTSP(TruckCust)
6: # Reduce TSP tour cost by adding/removing truck customers:
7: while (improvements are possible and no cycles occur) do
8: # Try to move customers from UAV to truck:
9: for j∈UAVCust do
10: savings = max (τ_ik + s^L_vj + s^R_vj − τ_ij − τ_jk − σ_j)
11: if (savings >0) then
12: TruckCust ←TruckCust ∪ {j}
13: end if
14: end for
15: UAVCust ←UAVCust \TruckCust
16: TSPtour ←getTSP(TruckCust)
18: # Try to move customers from truck to UAV:
19: for j ∈ {TruckCust : j ∈ Ĉ_v for any v ∈ V} do
20: savings ← max_{⟨v,i,j,k⟩∈P} (τ_ij + τ_jk + σ_j − τ_ik − s^L_vj − s^R_vj)
21: if (savings >0) then
22: UAVCust ←UAVCust ∪ {j}
23: end if
24: end for
25: TruckCust ←TruckCust \UAVCust
26: TSPtour ←getTSP(TruckCust)
27: end while
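Line 20's savings expression can be sketched as follows; `tau`, `sigma`, `sL`, and `sR` are hypothetical stand-ins for the paper's τ, σ, s^L, and s^R:

```python
# Sketch of the Algorithm 1 (line 20) savings test for moving a customer
# from the truck route to a UAV: serving j by UAV saves the truck detour
# i -> j -> k (plus j's service time) but incurs launch/retrieval service.

def truck_to_uav_savings(tau, sigma, sL, sR, i, j, k):
    """Time saved by serving j with a UAV instead of the truck leg i -> j -> k."""
    return tau[i][j] + tau[j][k] + sigma[j] - tau[i][k] - sL[j] - sR[j]

tau = {"i": {"j": 5, "k": 4}, "j": {"k": 6}}
sigma = {"j": 2}
sL, sR = {"j": 1}, {"j": 1}
# Truck detour i -> j -> k costs 5 + 6 + 2 = 13; direct i -> k plus launch and
# retrieval costs 4 + 1 + 1 = 6, so moving j to a UAV saves 7 time units.
assert truck_to_uav_savings(tau, sigma, sL, sR, "i", "j", "k") == 7
```

Line 10's truck-to-UAV move uses the same expression with the sign reversed.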
4.2 Phase II – Create UAV sorties
The goal of Phase II is to determine the individual sorties ⟨v, i, j, k⟩ for each UAV v, where i is
the launch location, j is the customer that is being served by the UAV, and k is the retrieval
location. This is analogous to determining the y_vijk decision variables in the mFSTSP formulation.
Pseudocode for Phase II is provided in Algorithms 4 and 5.
Algorithm 2 Pseudocode for Phase I – Part 2 of 3
28: # Move customers to truck for feasibility (and to satisfy LTL requirement):
29: feasible = False # Assume infeasible by default
30: while (not feasible) do
31: infeasCust = all j ∈ {UAVCust : ⟨v, i, j, k⟩ ∉ P ∀ v ∈ V, i ∈ TruckCust, k ∈ TruckCust}
32: if (infeasCust == ∅) then
33: infeasCust ← checkP2Feasibility(UAVCustomers, TSPtour)
34: end if
35: if ((infeasCust == ∅) and (len(TruckCust) ≥ LTL)) then
36: feasible ← True
37: else
38: for j ∈ UAVCust do
39: for k ∈ TruckCust ∪ {c + 1} do
40: cost_jk = min_{⟨i,k⟩∈TSPtour, ⟨v,i,j,k⟩∈P} (τ_ij + τ_jk + σ_j − τ_ik − s^L_vj − s^R_vj)
41: cover_jk = all l ∈ {infeasCust : (⟨v, m, l, j⟩ or ⟨v, j, l, m⟩) ∈ P ∀ v ∈ V, m ∈ TruckCust}
42: if (j ∈ infeasCust) then
43: cover_jk ← cover_jk ∪ {j}
44: end if
45: if (cost_jk < 0) then
46: score_jk = cost_jk ∗ len(cover_jk)
47: else
48: score_jk = cost_jk / len(cover_jk)
49: end if
50: end for
51: end for
52: ⟨j*_1, k*_1⟩ = arg min (cost_jk)
53: ⟨j*_2, k*_2⟩ = arg min (score_jk)
54: if ((infeasCust == ∅) and (len(TruckCust) < LTL)) then
55: TruckCust ← TruckCust ∪ {j*_1}
56: TSPtour ← getTSP(TruckCust)
57: else if ((infeasCust ≠ ∅) and (len(TruckCust) ≥ LTL)) then
58: TruckCust ← TruckCust ∪ {j*_2}
59: TSPtour ← insert j*_2 in current TSPtour immediately before k*_2
60: else
61: TruckCustomers ← TruckCustomers ∪ {j*_2}
62: TSPtour ← getTSP(TruckCustomers)
63: end if
64: UAVCust ← UAVCust \ TruckCust
65: end if
66: end while
Algorithm 3 Pseudocode for Phase I – Part 3 of 3
67: Modify TSP tour if not unique. If no unique tour is found, increase LTL and repeat Phase I.
69: # Evaluate lower bound:
70: lowBound = TSPcost + min_{j∈UAVCust} (s^L_vj + s^R_vj)
71: if (lowBound > OFV∗) then
72: increaseLTL() # See function below
73: else
74: Continue to Phase II.
75: end if
77: procedure increaseLTL( )
78: LTL ←LTL + 1
79: prevPhase3 =False
80: if (LTL ≤ |C|)then
81: Return to Phase I with new LTL value.
82: else
83: Stop. Report OFV∗,TSPtour∗,UAVsorties∗,ActivityTimings∗.
84: end if
85: end procedure
Inputs to this phase include the set of customers to be assigned to UAVs (UAVcust) and the
truck route found in Phase I (TSPtour). Additionally, the earliest time at which the truck can
arrive at each location on the route, denoted as t_j for all j ∈ TSPtour, is calculated. While t_j
incorporates truck customer service times, UAV launch and retrieval times are ignored. Thus, t_j
is the sum of the arrival time at j according to TSPtour and the service times at all previous truck
customer locations.
This phase begins in lines 3–6 by initializing the lists of UAV sorties (UAVsorties), customers
with no feasible assignments (infeasCust), UAVs available at each launch location (availUAVsj),
and unassigned UAV customers (UnasgnCust).
Next, in lines 8–12, the number of potential UAV sorties (numOptions_j) is determined for
each UAV customer j. We define P′ ⊆ P such that ⟨v, i, j, k⟩ ∈ P′ represents only valid i, j, k
combinations where i and k are consecutive stops on the truck's tour. Thus, in Phase II, there
are no truck customers between the launch and recovery points in the UAV sorties. A local search
procedure is applied in Phase III to relax this restriction (i.e., to allow the truck to make multiple
stops while a UAV is en route).
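Under this restriction, counting numOptions_j reduces to scanning consecutive stop pairs; `feasible` below is a hypothetical predicate standing in for membership in P:

```python
# Sketch of counting numOptions_j under the Phase II restriction that the
# launch point i and retrieval point k must be consecutive truck stops.

def num_options(j, tour, uavs, feasible):
    """Count sorties <v, i, j, k> with i and k consecutive on the tour."""
    count = 0
    for i, k in zip(tour, tour[1:]):   # consecutive stop pairs only
        for v in uavs:
            if feasible(v, i, j, k):
                count += 1
    return count

tour = [0, 1, 2, 3]
# Toy feasibility: UAV 'v1' can fly any leg, 'v2' only when launched at the depot (0).
feas = lambda v, i, j, k: v == "v1" or i == 0
assert num_options("j", tour, ["v1", "v2"], feas) == 4   # 3 legs for v1, 1 for v2
```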
In lines 14–32, UAV sorties are generated. Customers with the minimum numOptions are
prioritized, as they possess the fewest feasible candidate sorties. For each chosen UAV
customer, j, the assignment ⟨v, i, j, k⟩ ∈ P′ is selected based on its impact on the waiting time for
both the truck and the UAV. Variable w (line 20) captures the time difference between the UAV's
activities (travel from launch point i to customer j, serve customer j, and travel to retrieval point
k) and the truck's activities (travel from launch point i to retrieval point k). A positive value of
w indicates that the truck must wait for the UAV. If (WaitTime ≥ 0) and (w < WaitTime), the
current truck assignment incurred a waiting time that can be reduced by assigning this UAV sortie.
Conversely, if WaitTime < w < 0, this UAV assignment would lead to a reduction in the time
that a UAV would wait for the truck. Thus, the aim is to find an assignment that results in zero
truck waiting and minimum UAV waiting, or minimum truck waiting (if zero truck waiting is not
possible). If no feasible assignment is found (line 26), the customer is added to the infeasCust list;
otherwise (line 28), the assignment is stored in the UAVsorties list and the corresponding UAV
becomes unavailable at the corresponding launch location i∗.
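The stated preference order can be sketched as follows (a simplified illustration, not the paper's exact bookkeeping; `candidates` pairs each hypothetical feasible sortie with its w value):

```python
# Sketch of the Phase II waiting-time rule: w > 0 means the truck waits for
# the UAV; w < 0 means the UAV waits for the truck (UAV wait = -w). Prefer
# zero truck waiting with minimal UAV waiting, else minimal truck waiting.

def pick_sortie(candidates):
    """Select a (sortie, w) pair according to the stated preference order."""
    no_truck_wait = [c for c in candidates if c[1] <= 0]
    if no_truck_wait:
        # UAV wait is -w, so pushing w toward 0 minimizes UAV waiting.
        return max(no_truck_wait, key=lambda c: c[1])
    return min(candidates, key=lambda c: c[1])   # minimize truck waiting

# Sortie B (UAV waits 1) beats A (UAV waits 5) and C (truck waits 2).
cands = [("A", -5.0), ("B", -1.0), ("C", 2.0)]
assert pick_sortie(cands)[0] == "B"
# With only truck-waiting options, the smallest truck wait wins.
assert pick_sortie([("C", 2.0), ("D", 3.0)])[0] == "C"
```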
If any UAV customers remain without an assigned sortie, it is necessary to move a UAV customer
to the truck route. In lines 34–37 of Algorithm 5 we calculate cheapestInsCost, which is the
minimum cost associated with inserting a UAV customer into the current TSP tour that might
make Phase II feasible. The following costs comprise cheapestInsCost, and are associated with
the insertion of UAV customer i into position p of the truck route:
c_{i,p} = Additional time associated with inserting customer i ∈ UAVCust into position p of
the truck's route.
= τ_{h,i} + σ_i + τ_{i,k} − τ_{h,k},
where node h is at position p − 1 and node k is at position p in TSPtour.
c^{wait,launch}_{i,p,j} = Truck waiting time if a UAV were launched from customer i ∈ UAVCust inserted
into position p ∈ {TSPtour positions} to serve customer j ∈ infeasCust \ i.
= min_{k∈{TSPtour positions after p}} τ′_{v,i,j} + σ′_{v,j} + τ′_{v,j,k} − (t_k − t_i − σ_i),
c^{wait,retrieve}_{i,p,j} = Truck waiting time if a UAV were retrieved at customer i ∈ UAVCust inserted
into position p ∈ {TSPtour positions} after serving customer j ∈ infeasCust \ i.
= min_{k∈{TSPtour positions before p}} τ′_{v,k,j} + σ′_{v,j} + τ′_{v,j,i} − (t_i − t_k − σ_k),
c^{wait}_{i,p,j} = Minimum time the truck would spend waiting for a UAV, if the UAV is
launched or retrieved at customer i ∈ UAVCust inserted into position
p ∈ {TSPtour positions} to serve customer j ∈ infeasCust \ i.
= min{c^{wait,launch}_{i,p,j}, c^{wait,retrieve}_{i,p,j}}.
c^{fail}_{i,p,j} = Cost of inserting j ∈ infeasCust into the truck's route, if inserting customer
i ∈ UAVCust \ j into position p ∈ {TSPtour positions} cannot make the assignment
of customer j feasible.
= min τ_{i,j} + σ_j + τ_{j,k} − τ_{i,k}.
Thus, we may calculate
cheapestInsCost = min_{i∈UAVCust, p∈{TSPtour positions}} ( c_{i,p} + Σ_{j∈infeasCust} (c^{wait}_{i,p,j} + c^{fail}_{i,p,j}) ).
The indices resulting in cheapestInsCost are saved as i_min and p_min.
The lower bound, λ(line 40), reflects the truck tour duration (including truck service time),
launch and retrieval times for UAV customers, and cheapestInsCost (in the event that there
are UAV customers with no feasible assignments). If the lower bound is not promising (line 42),
increase the LTL and return to Phase I.
If no infeasible UAV assignments have been identified (line 44), the procedure continues to
Phase III. Otherwise, if Phase II infeasibility occurred as a result of trying to improve upon the
Phase III solution (line 47), stop the trial for improvement and return to Phase I with a new (or the
same) LTL. In lines 50–57, insert customer i_min into position p_min in the truck's route and update
the truck tour. If this tour is unique, repeat Phase II. Otherwise, return to Phase I with a new (or
the same) LTL value.
Algorithm 4 Pseudocode for Phase II – Part 1 of 2
1: Inputs: TSPtour, UAVCust
Decimal Representation of Real Numbers
From the Axiom of Completeness, Archimedean Properties, and Density Theorem we know that the real number line has no holes in it. Now we will look at representing some of these numbers in a new
format known as the decimal representation of a real number $x$.
Definition: Let $x$ be a real number. Then the decimal representation of $x$ is $x = b_0 + \frac{b_1}{10} + \frac{b_2}{10^2} + \frac{b_3}{10^3} + ...$ where $b_0, b_1, b_2, ...$ represent the digits
of the number $x$, that is $b_i \in \{0, 1, 2, ..., 8, 9 \}$.
For example, consider the number $x = 4.523$. In this case, $b_0 = 4$, $b_1 = 5$, $b_2 = 2$ and $b_3 = 3$. Therefore $x = 4.523 = 4 + \frac{5}{10} + \frac{2}{100} + \frac{3}{1000}$.
Now consider the case where $b_0 = 0$, that is let $x \in [0, 1]$, for example, let $x = 0.682...$.
We will think of decimal numbers in terms of intervals. Now suppose that we take the interval $[0, 1]$ and subdivide it into ten equal intervals. Then $x \in \left [\frac{b_1}{10}, \frac{b_1+1}{10} \
right]$ where $b_1$ is the first digit after the decimal for $x$. For our example of $x = 0.682...$, $b_1 = 6$ and so $x \in \left [ \frac{6}{10}, \frac{7}{10} \right ]$:
Now suppose that we take the interval $\left [\frac{b_1}{10}, \frac{b_1+1}{10} \right]$ and subdivide it into 10 equal intervals too. Then $x \in \left [\frac{b_1}{10} + \frac{b_2}{10^2}, \frac{b_1}{10} + \frac{b_2+1}{10^2} \right]$ where $b_2$ represents the second digit after the decimal, namely for our example $x = 0.682...$, $b_2 = 8$ and so $x \in \left [ \frac{68}{100}, \frac{69}{100} \right ]$.
If we do this more times, eventually we will have a common point in the intersection of all of these nested intervals, which will be the decimal representation of our number $x$, that is $\{ x \} = \bigcap_{n=1}^{\infty} I_n$ where $I_n$ denotes the interval obtained at the $n^{\mathrm{th}}$ stage of this process.
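The subdivision process can be carried out exactly with rational arithmetic; each stage's chosen tenth is the next digit (a small sketch, not part of the original page):

```python
# Nested-interval digit extraction: at each stage the current interval
# [lo, lo + width] is split into ten equal parts; the index of the part
# containing x is the next decimal digit.
from fractions import Fraction

def digits(x, n):
    """First n decimal digits of a rational x in [0, 1) via nested intervals."""
    lo, width = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        width /= 10
        b = int((x - lo) / width)   # which tenth of the interval contains x
        out.append(b)
        lo += b * width             # keep the chosen subinterval
    return out

assert digits(Fraction(682, 1000), 3) == [6, 8, 2]
assert digits(Fraction(1, 7), 6) == [1, 4, 2, 8, 5, 7]
```

Note that at an endpoint such as $x = 0.5$, the `int(...)` step picks the right-hand interval, producing $0.500...$ rather than $0.4999...$.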
Now there's a special case to consider. Suppose that the $n^{\mathrm{th}}$ stage of this process places $x$ on one of the end points of an interval. For example, consider the case where $x = 0.5$. So $x \in \left [ \frac{4}{10} , \frac{5}{10} \right ]$ and $x \in \left [ \frac{5}{10}, \frac{6}{10} \right ]$. If we choose $x \in \left [ \frac{4}{10} , \frac{5}{10} \right ]$ we will get $x = 0.4999...$, and if we choose $x \in \left [ \frac{5}{10}, \frac{6}{10} \right ]$ we will get $x = 0.500...$. By convention we will usually choose the latter interval to place $x$ in, since zeros tend to be nicer to work with than nines, and we will simply say that $0.499... = 0.5 = 0.500...$. From this, we derive the following definitions.
Definition: A real number $x$ whose decimal representation is $b_0.b_1b_2...$ is periodic if there exist natural numbers $k$ and $m$ such that $b_n = b_{n+m}$ for every $n \geq k$, and the smallest number $m$ for which this is satisfied is called the period.
For example, consider the decimal $x = \frac{1}{7} = 0.142857142857142...$. Let $m = 6$, that is the period of this decimal is 6, and so $b_n = b_{n + 6}$. Now for $n ≥ 1$, this is always true. For
example, the 1st digit after the decimal is equal to the seventh digit after the decimal, that is $b_1 = b_7 = b_{13} = ... = 1$.
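The period can also be computed directly from long division: the digits repeat as soon as a remainder reappears (a small stdlib sketch):

```python
# Period of the repeating decimal of p/q, found by tracking long-division
# remainders: the digits cycle once a remainder is seen a second time.

def period(p, q):
    """Length of the repeating block of p/q; 0 means the decimal terminates."""
    r = p % q
    seen = {}
    k = 0
    while r and r not in seen:
        seen[r] = k
        r = (r * 10) % q    # next long-division remainder
        k += 1
    return 0 if r == 0 else k - seen[r]

assert period(1, 7) == 6   # 0.(142857)
assert period(1, 3) == 1   # 0.(3)
assert period(1, 8) == 0   # 0.125 terminates
```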
Definition: A real number $x$ whose decimal representation is $b_0.b_1b_2...$ is terminating if for some natural number $k$, $b_n = 0$ for every $n \geq k$.
For example, consider the decimal $x = \frac{1}{8} = 0.12500...$. We say this decimal is terminating since if $n \geq 4$, then $b_n = 0$. That is $b_4 = b_5 = b_6 = ... = 0$. In such a case, we write $x = 0.125$ without the ellipsis, without any confusion.
Rectifiers & Filters Interview Questions
1. Explain what is a dc power supply?
The part of the equipment that converts ac into dc is called dc power supply.
2. Explain what is a rectifier?
A rectifier is a device which converts alternating current (or voltage) into unidirectional current (or voltage).
3. Explain what is PIV of a diode in a rectifier circuit?
Peak Inverse Voltage (PIV) is the maximum possible voltage that occurs across a diode when it is reverse biased.
4. Explain what is the importance of peak inverse voltage?
If the applied voltage in reverse biased condition exceeds peak inverse voltage (PIV) rating of the diode, then the diode may get damaged.
5. Explain why half-wave rectifiers are generally not used in dc power supply?
The output of a half-wave rectifier is a pulsating dc with a large ripple component and low rectification efficiency, which is not satisfactory for a general-purpose power supply. That is why it is generally not used in dc power supply.
6. Explain why diodes are not operated in the breakdown region in rectifiers?
In the breakdown region, a diode has a risk of getting damaged or burnt because the magnitude of current flowing through it increases in an uncontrollable manner. That is why diodes are not
operated in the breakdown region in rectifiers.
7. Define ripple as referred to in a rectifier circuit.
The ac component contained in the pulsating output of a rectifier is known as ripple.
8. Explain what is transformer utilization factor?
Transformer utilization factor is defined as the ratio of power delivered to the load and ac rating of secondary of supply power transformer.
9. The output of a 60 Hz full-wave bridge rectifier has a 60 Hz ripple. Is this circuit working properly?
A full-wave rectifier with a 60 Hz input must have a lowest ripple frequency equal to twice the input frequency, i.e. 120 Hz. If the ripple frequency is 60 Hz, it means some diodes in the circuit are not conducting, and the circuit is effectively behaving as a half-wave rectifier, so it is not working properly.
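The diagnostic above follows from a simple rule of thumb, sketched here:

```python
# Lowest ripple component of an ideal rectifier: at the supply frequency for
# a half-wave rectifier, at twice the supply frequency for a full-wave one.

def ripple_frequency(supply_hz, full_wave):
    return 2 * supply_hz if full_wave else supply_hz

assert ripple_frequency(60, full_wave=True) == 120   # healthy bridge
assert ripple_frequency(60, full_wave=False) == 60   # half-wave behaviour
# A "full-wave" bridge showing 60 Hz ripple is acting like a half-wave
# rectifier, which points to failed (non-conducting) diodes.
```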
10. Explain what is meant by filter?
Filter is a device that converts pulsating output of rectifier into a steady dc level. | {"url":"https://engineerscommunity.com/t/rectifiers-filters-interview-questions/2213","timestamp":"2024-11-02T14:05:26Z","content_type":"text/html","content_length":"17310","record_id":"<urn:uuid:5f3c1b0b-1b66-454d-a045-63cb719fa361>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00343.warc.gz"} |
April 2021 – N N Taleb's Technical Blog
\(I= \displaystyle\int_{-\infty }^{\infty}\sum_{n=0}^{\infty } \frac{\left(-x^2\right)^n }{n!^{2 s}}\; \mathrm{d}x= \pi^{1-s}\).
We can start as follows, by transforming it into a generalized hypergeometric function:
\(I=\displaystyle\int_{-\infty }^{\infty }\, _0F_{2 s-1} (\overbrace{1,1,\ldots,1}^{2s-1 \text{ times}}; -x^2)\,\mathrm{d}x\), since, from the series expansion of the generalized hypergeometric function,
\(\, _pF_q\left(a_1,\ldots,a_p;b_1,\ldots,b_q;z\right)=\sum_{k=0}^{\infty } \frac{\prod_{j=1}^p \left(a_j\right)_k}{\prod_{j=1}^q \left(b_j\right)_k}\,\frac{z^k}{k!}\), where \((.)_k\) is the Pochhammer symbol \((a)_k=\frac{\Gamma (a+k)}{\Gamma (a)}\).
Now the integrand function does not appear to be convergent numerically, except for \(s= \frac{1}{2}\) where it becomes the Gaussian integral, and the case of \(s=1\) where it becomes a Bessel
function. For \(s=\frac{3}{2}\) and \( x=10^{19}\), the integrand takes values of \(10^{1015852872356}\) (serious). Beyond that the computer starts to produce smoke. Yet it eventually converges as
there is a closed form solution. It is like saying that it works in theory but not in practice!
For, it turns out, under the restriction that \(2 s\in \mathbb{Z}_{>\, 0}\), we can use the following result:
\(\displaystyle\int_0^{\infty } t^{\alpha -1}\, _pF_q \left(a_1,\ldots ,a_p;b_1,\ldots ,b_q;-t\right) \, \mathrm{d}t=\frac{\Gamma (\alpha ) \prod_{k=1}^p \Gamma \left(a_k-\alpha \right) \prod_{k=1}^q \Gamma \left(b_k\right)}{\left(\prod_{k=1}^p \Gamma \left(a_k\right)\right) \prod_{k=1}^q \Gamma \left(b_k-\alpha \right)}\)
Allora, we can substitute \(x=\sqrt{u}\), and with \(\alpha =\frac{1}{2},p=0,b_k=1,q=2 s-1\), given that \(\Gamma(\frac{1}{2})=\sqrt{\pi}\),
\(I=\frac{\sqrt{\pi }}{\prod _{k=1}^{2 s-1} \sqrt{\pi }}=\pi ^{1-s}\).
So either the integrand eventually converges, or I am doing something wrong, or both. Perhaps neither.
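For what it's worth, the one numerically tame case, \(s=\frac{1}{2}\), can be checked with a few lines of stdlib Python (a sanity check added here, not part of the original post):

```python
# Check of the s = 1/2 case: the series collapses to exp(-x^2), and the
# integral to the Gaussian integral, i.e. pi^(1 - 1/2) = sqrt(pi).
import math

def series(x, s, terms=60):
    """Partial sum of sum_n (-x^2)^n / (n!)^(2s), via accumulated log(n!)."""
    log_fact = 0.0
    total = 0.0
    for n in range(terms):
        if n > 0:
            log_fact += math.log(n)   # log(n!) built up incrementally
        total += (-x * x) ** n / math.exp(2 * s * log_fact)
    return total

# At s = 1/2 the series equals exp(-x^2) to near machine precision.
assert abs(series(1.3, 0.5) - math.exp(-1.3 ** 2)) < 1e-9

# A crude Riemann sum over [-8, 8] recovers sqrt(pi) = pi^(1-s) for s = 1/2.
h = 0.001
integral = h * sum(math.exp(-(i * h) ** 2) for i in range(-8000, 8001))
assert abs(integral - math.sqrt(math.pi)) < 1e-6
```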
covarianceParameters
Class: GeneralizedLinearMixedModel
Extract covariance parameters of generalized linear mixed-effects model
psi = covarianceParameters(glme) returns the estimated prior covariance parameters of random-effects predictors in the generalized linear mixed-effects model glme.
[psi,dispersion] = covarianceParameters(glme) also returns an estimate of the dispersion parameter.
[psi,dispersion,stats] = covarianceParameters(glme) also returns a cell array stats containing the covariance parameter estimates and related statistics.
[___] = covarianceParameters(glme,Name,Value) returns any of the above output arguments using additional options specified by one or more Name,Value pair arguments. For example, you can specify the
confidence level for the confidence limits of covariance parameters.
Input Arguments
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but
the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Output Arguments
psi — Estimated prior covariance parameters
cell array
Estimated prior covariance parameters for the random-effects predictors, returned as a cell array of length R, where R is the number of grouping variables used in the model. psi{r} contains the
covariance matrix of random effects associated with grouping variable g[r], where r = 1, 2, ..., R. The order of grouping variables in psi is the same as the order entered when fitting the model. For
more information on grouping variables, see Grouping Variables.
dispersion — Dispersion parameter
scalar value
Dispersion parameter, returned as a scalar value.
stats — Covariance parameter estimates and related statistics
cell array
Covariance parameter estimates and related statistics, returned as a cell array of length (R + 1), where R is the number of grouping variables used in the model. The first R cells of stats each
contain a dataset array with the following columns.
Column Name Description
Group Grouping variable name
Name1 Name of the first predictor variable
Name2 Name of the second predictor variable
Type
If Name1 and Name2 are the same, then Type is std (standard deviation).
If Name1 and Name2 are different, then Type is corr (correlation).
Estimate
If Name1 and Name2 are the same, then Estimate is the standard deviation of the random effect associated with predictor Name1 or Name2.
If Name1 and Name2 are different, then Estimate is the correlation between the random effects associated with predictors Name1 and Name2.
Lower Lower limit of the confidence interval for the covariance parameter
Upper Upper limit of the confidence interval for the covariance parameter
Cell R + 1 contains related statistics for the dispersion parameter.
It is recommended that the presence or absence of covariance parameters in glme be tested using the compare method, which uses a likelihood ratio test.
When fitting a GLME model using fitglme and one of the maximum likelihood fit methods ('Laplace' or 'ApproximateLaplace'), covarianceParameters derives the confidence intervals in stats based on a
Laplace approximation to the log likelihood of the generalized linear mixed-effects model.
When fitting a GLME model using fitglme and one of the pseudo likelihood fit methods ('MPL' or 'REMPL'), covarianceParameters derives the confidence intervals in stats based on the fitted linear
mixed-effects model from the final pseudo likelihood iteration.
Obtain Estimated Covariance Parameters
Load the sample data.
This simulated data is from a manufacturing company that operates 50 factories across the world, with each factory running a batch process to create a finished product. The company wants to decrease
the number of defects in each batch, so it developed a new manufacturing process. To test the effectiveness of the new process, the company selected 20 of its factories at random to participate in an
experiment: Ten factories implemented the new process, while the other ten continued to run the old process. In each of the 20 factories, the company ran five batches (for a total of 100 batches) and
recorded the following data:
• Flag to indicate whether the batch used the new process (newprocess)
• Processing time for each batch, in hours (time)
• Temperature of the batch, in degrees Celsius (temp)
• Categorical variable indicating the supplier (A, B, or C) of the chemical used in the batch (supplier)
• Number of defects in the batch (defects)
The data also includes time_dev and temp_dev, which represent the absolute deviation of time and temperature, respectively, from the process standard of 3 hours at 20 degrees Celsius.
Fit a generalized linear mixed-effects model using newprocess, time_dev, temp_dev, and supplier as fixed-effects predictors. Include a random-effects term for intercept grouped by factory, to account
for quality differences that might exist due to factory-specific variations. The response variable defects has a Poisson distribution, and the appropriate link function for this model is log. Use the
Laplace fit method to estimate the coefficients. Specify the dummy variable encoding as 'effects', so the dummy variable coefficients sum to 0.
The number of defects can be modeled using a Poisson distribution
${\text{defects}}_{ij}\sim \text{Poisson}\left({\mu }_{ij}\right)$
This corresponds to the generalized linear mixed-effects model
$\mathrm{log}\left({\mu }_{ij}\right)={\beta }_{0}+{\beta }_{1}{\text{newprocess}}_{ij}+{\beta }_{2}{\text{time}\text{_}\text{dev}}_{ij}+{\beta }_{3}{\text{temp}\text{_}\text{dev}}_{ij}+{\beta }_{4}
{\text{supplier}\text{_}\text{C}}_{ij}+{\beta }_{5}{\text{supplier}\text{_}\text{B}}_{ij}+{b}_{i},$
• ${\text{defects}}_{ij}$ is the number of defects observed in the batch produced by factory $i$ during batch $j$.
• ${\mu }_{ij}$ is the mean number of defects corresponding to factory $i$ (where $i=1,2,...,20$) during batch $j$ (where $j=1,2,...,5$).
• ${\text{newprocess}}_{ij}$, ${\text{time}\text{_}\text{dev}}_{ij}$, and ${\text{temp}\text{_}\text{dev}}_{ij}$ are the measurements for each variable that correspond to factory $i$ during batch
$j$. For example, ${\text{newprocess}}_{ij}$ indicates whether the batch produced by factory $i$ during batch $j$ used the new process.
• ${\text{supplier}\text{_}\text{C}}_{ij}$ and ${\text{supplier}\text{_}\text{B}}_{ij}$ are dummy variables that use effects (sum-to-zero) coding to indicate whether company C or B, respectively,
supplied the process chemicals for the batch produced by factory $i$ during batch $j$.
• ${b}_{i}\sim N\left(0,{\sigma }_{b}^{2}\right)$ is a random-effects intercept for each factory $i$ that accounts for factory-specific variation in quality.
glme = fitglme(mfr,'defects ~ 1 + newprocess + time_dev + temp_dev + supplier + (1|factory)','Distribution','Poisson','Link','log','FitMethod','Laplace','DummyVarCoding','effects');
Compute and display the estimate of the prior covariance parameter for the random-effects predictor.
[psi,dispersion,stats] = covarianceParameters(glme);
psi{1} is an estimate of the prior covariance matrix of the first grouping variable. In this example, there is only one grouping variable (factory), so psi{1} is an estimate of ${\sigma }_{b}^{2}$.
Display the dispersion parameter.
Display the estimated standard deviation of the random effect associated with the predictor. The first cell of stats contains statistics for factory, while the second cell contains statistics for the
dispersion parameter.
ans =
COVARIANCE TYPE: ISOTROPIC
Group Name1 Name2 Type Estimate Lower Upper
factory {'(Intercept)'} {'(Intercept)'} {'std'} 0.31381 0.19253 0.51148
The estimated standard deviation of the random effect associated with the predictor is 0.31381. The 95% confidence interval is [0.19253 , 0.51148]. Because the confidence interval does not contain 0,
the random intercept is significant at the 5% significance level.
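For intuition, here is a language-agnostic sketch (in Python) of the log-scale Wald construction commonly used for standard-deviation parameters in mixed models. The standard error value below is a hypothetical number back-solved to reproduce the interval above; it is not a value returned by covarianceParameters:

```python
# Wald confidence interval formed on the log scale and exponentiated back,
# which keeps both bounds positive for a standard-deviation parameter.
import math

def log_scale_ci(estimate, se_log, z=1.959964):
    """95% CI for a positive parameter from a log-scale standard error."""
    lo = estimate * math.exp(-z * se_log)
    hi = estimate * math.exp(z * se_log)
    return lo, hi

# se_log = 0.24925 is an assumed value chosen to match the reported bounds.
lo, hi = log_scale_ci(0.31381, se_log=0.24925)
assert abs(lo - 0.19253) < 1e-3
assert abs(hi - 0.51148) < 1e-3
```

Note that the interval is symmetric on the log scale (0.31381 is the geometric mean of the two bounds), which is characteristic of this construction.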
Chapter 6: Introduction to Neural Networks and Deep Learning
6.4 Practical Exercises of Chapter 6: Introduction to Neural Networks and Deep Learning
Exercise 1: Implement a Perceptron
Implement a simple perceptron using Python. Use it to classify a binary dataset of your choice. You can create your own dataset or use a simple one from a library like Scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Create a binary classification dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=7)
# Split the dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
# Define the Perceptron model
class Perceptron:
def __init__(self, learning_rate=0.01, n_iters=1000):
self.lr = learning_rate
self.n_iters = n_iters
self.activation_func = self._unit_step_func
self.weights = None
self.bias = None
def fit(self, X, y):
n_samples, n_features = X.shape
# init parameters
self.weights = np.zeros(n_features)
self.bias = 0
y_ = np.where(y <= 0, -1, 1)
for _ in range(self.n_iters):
for idx, x_i in enumerate(X):
linear_output = np.dot(x_i, self.weights) + self.bias
y_predicted = self.activation_func(linear_output)
# Perceptron update rule
update = self.lr * (y_[idx] - y_predicted)
self.weights += update * x_i
self.bias += update
def predict(self, X):
linear_output = np.dot(X, self.weights) + self.bias
y_predicted = self.activation_func(linear_output)
return y_predicted
def _unit_step_func(self, x):
return np.where(x >= 0, 1, -1)
# Training the Perceptron model
p = Perceptron(learning_rate=0.01, n_iters=1000)
p.fit(X_train, y_train)
# Making predictions on the test data (map the perceptron's -1/+1 output back to 0/1 labels)
predictions = np.where(p.predict(X_test) == 1, 1, 0)
# Evaluate the model
accuracy = accuracy_score(y_test, predictions)
print("Perceptron classification accuracy: ", accuracy)
Exercise 2: Implement Gradient Descent
Implement the gradient descent algorithm from scratch in Python. Use it to find the minimum of a simple function (like f(x) = x^2 + 5x + 6), and plot the steps of the algorithm along the way.
import numpy as np
import matplotlib.pyplot as plt
# Define the function and its derivative
def f(x):
return x**2 + 5*x + 6
def df(x):
return 2*x + 5
# Gradient descent algorithm
def gradient_descent(x_start, learning_rate, n_iters):
    x = x_start
    history = [x]
    for _ in range(n_iters):
        grad = df(x)
        x -= learning_rate * grad
        history.append(x)  # record each step for plotting
    return history
# Run the algorithm and plot the steps
history = gradient_descent(x_start=-10, learning_rate=0.1, n_iters=50)
plt.plot(history, [f(x) for x in history], 'o-')
plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('Gradient Descent Steps')
plt.show()
Exercise 3: Regularization Techniques
Using a dataset of your choice, build a deep learning model with Keras. Apply L1, L2, and dropout regularization, and compare the results. Which method works best for your dataset?
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l1, l2
from sklearn.datasets import make_classification
# Create a binary classification dataset with 8 features
X, y = make_classification(n_samples=1000, n_features=8, random_state=7)
# Create a Sequential model
model = Sequential()
# Add an input layer and a hidden layer with L1 regularization
model.add(Dense(32, input_dim=8, activation='relu', kernel_regularizer=l1(0.01)))
# Add dropout layer
model.add(Dropout(0.2))
# Add an output layer with L2 regularization
model.add(Dense(1, activation='sigmoid', kernel_regularizer=l2(0.01)))
# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, y, epochs=150, batch_size=10)
Exercise 4: Early Stopping and Dropout
Using a dataset of your choice, build a deep learning model with Keras. Implement early stopping and dropout, and observe how they affect the model's performance.
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.callbacks import EarlyStopping
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# Create a binary classification dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=7)
# Split the dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
# Create a Sequential model
model = Sequential()
# Add an input layer and a hidden layer
model.add(Dense(32, input_dim=20, activation='relu'))
# Add dropout layer
model.add(Dropout(0.2))
# Add an output layer
model.add(Dense(1, activation='sigmoid'))
# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Define the early stopping monitor
early_stopping_monitor = EarlyStopping(patience=3)
# Fit the model
model.fit(X_train, y_train, epochs=150, batch_size=10, validation_split=0.2, callbacks=[early_stopping_monitor])
In this exercise, you will observe how early stopping and dropout can help prevent overfitting and improve the model's ability to generalize to new data.
Chapter 6 Conclusion
In this chapter, we embarked on a fascinating journey into the world of neural networks and deep learning. We started with the fundamental building block of neural networks, the perceptron, and
explored how it forms the basis for more complex neural network architectures. We learned how a perceptron takes a set of inputs, applies weights, and uses an activation function to produce an
output. We also saw how multiple perceptrons can be combined to form a multi-layer perceptron, capable of solving more complex problems.
We then delved into the concept of backpropagation and gradient descent, two crucial components in training a neural network. We learned how backpropagation works by calculating the gradient of the
loss function with respect to the weights of the network, allowing us to understand how much each weight contributes to the error. This information is then used by the gradient descent algorithm to
adjust the weights and minimize the error.
In our discussion of overfitting, underfitting, and regularization, we explored the challenges of training a model that generalizes well to unseen data. We learned about the bias-variance trade-off
and how overfitting and underfitting represent two extremes of this spectrum. We also discussed several regularization techniques, including L1 and L2 regularization, dropout, and early stopping,
which can help mitigate overfitting and improve the model's generalization performance.
The practical exercises provided throughout the chapter allowed us to apply these concepts and gain hands-on experience with implementing and training neural networks. These exercises not only
reinforced our understanding of the material but also gave us a taste of what it's like to work with neural networks in a practical setting.
As we conclude this chapter, it's important to reflect on the power and versatility of neural networks and deep learning. These techniques have revolutionized the field of machine learning and have
found applications in a wide range of domains, from image and speech recognition to natural language processing and autonomous driving. However, with great power comes great responsibility. As
practitioners, it's crucial that we use these tools ethically and responsibly, ensuring that our models are fair, transparent, and respectful of privacy.
Looking ahead, we will dive deeper into the world of deep learning, exploring more advanced concepts and techniques. We will learn about different types of neural networks, including convolutional
neural networks (CNNs) and recurrent neural networks (RNNs), and how they can be used to tackle complex machine learning tasks. We will also explore popular deep learning frameworks, such as
TensorFlow, Keras, and PyTorch, and learn how to use them to build and train our own neural networks. So, stay tuned for an exciting journey ahead!
6.4 Practical Exercises of Chapter 6: Introduction to Neural Networks and Deep Learning
Exercise 1: Implement a Perceptron
Implement a simple perceptron using Python. Use it to classify a binary dataset of your choice. You can create your own dataset or use a simple one from a library like Scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Create a binary classification dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=7)
# Split the dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
# Define the Perceptron model
class Perceptron:
    def __init__(self, learning_rate=0.01, n_iters=1000):
        self.lr = learning_rate
        self.n_iters = n_iters
        self.activation_func = self._unit_step_func
        self.weights = None
        self.bias = None

    def fit(self, X, y):
        n_samples, n_features = X.shape
        # Initialize parameters
        self.weights = np.zeros(n_features)
        self.bias = 0
        # Map the 0/1 labels to -1/1 to match the activation function
        y_ = np.where(y <= 0, -1, 1)
        for _ in range(self.n_iters):
            for idx, x_i in enumerate(X):
                linear_output = np.dot(x_i, self.weights) + self.bias
                y_predicted = self.activation_func(linear_output)
                # Perceptron update rule (zero when the prediction is correct)
                update = self.lr * (y_[idx] - y_predicted)
                self.weights += update * x_i
                self.bias += update

    def predict(self, X):
        linear_output = np.dot(X, self.weights) + self.bias
        y_predicted = self.activation_func(linear_output)
        return y_predicted

    def _unit_step_func(self, x):
        return np.where(x >= 0, 1, -1)

# Training the Perceptron model
p = Perceptron(learning_rate=0.01, n_iters=1000)
p.fit(X_train, y_train)
# Making predictions on the test data; map the -1/1 outputs back to 0/1
# so they can be compared against the original labels
predictions = np.where(p.predict(X_test) <= 0, 0, 1)
# Evaluate the model
accuracy = accuracy_score(y_test, predictions)
print("Perceptron classification accuracy: ", accuracy)
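To see the update rule in action on something small, here is the same rule applied to the linearly separable AND function, as a standalone sketch. It uses the -1/1 label convention from the class above; the learning rate and pass count are illustrative choices.

```python
import numpy as np

# Truth table for AND, with labels in the -1/1 convention
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])

w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(20):                       # a few passes suffice here
    for x_i, target in zip(X, y):
        pred = 1 if x_i @ w + b >= 0 else -1
        update = lr * (target - pred)     # zero when the prediction is right
        w += update * x_i
        b += update

preds = np.where(X @ w + b >= 0, 1, -1)
print(preds)                              # reproduces y exactly
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on a separating line after finitely many updates.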
Exercise 2: Implement Gradient Descent
Implement the gradient descent algorithm from scratch in Python. Use it to find the minimum of a simple function (like f(x) = x^2 + 5x + 6), and plot the steps of the algorithm along the way.
import numpy as np
import matplotlib.pyplot as plt
# Define the function and its derivative
def f(x):
    return x**2 + 5*x + 6

def df(x):
    return 2*x + 5

# Gradient descent algorithm
def gradient_descent(x_start, learning_rate, n_iters):
    x = x_start
    history = [x]
    for _ in range(n_iters):
        grad = df(x)
        x -= learning_rate * grad
        history.append(x)  # record every step so it can be plotted
    return history

# Run the algorithm and plot the steps
history = gradient_descent(x_start=-10, learning_rate=0.1, n_iters=50)
plt.plot(history, [f(x) for x in history], 'o-')
plt.title('Gradient Descent Steps')
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
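Because f(x) = x^2 + 5x + 6 is a simple quadratic, the answer can be checked analytically: the minimum sits at x = -b/(2a) = -2.5. A standalone check with the same learning rate and step count confirms the iterates settle there:

```python
x = -10.0
for _ in range(50):
    x -= 0.1 * (2 * x + 5)   # step downhill along f'(x) = 2x + 5
print(x)                      # approaches -2.5
```

Each step multiplies the remaining distance to the minimum by (1 - 2 * learning_rate) = 0.8, so 50 steps shrink the initial error of 7.5 by a factor of roughly 0.8**50.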
Exercise 3: Regularization Techniques
Using a dataset of your choice, build a deep learning model with Keras. Apply L1, L2, and dropout regularization, and compare the results. Which method works best for your dataset?
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l1, l2
# Create a Sequential model
model = Sequential()
# Add an input layer and a hidden layer with L1 regularization
model.add(Dense(32, input_dim=8, activation='relu', kernel_regularizer=l1(0.01)))
# Add a dropout layer (the rate of 0.5 is an illustrative choice)
model.add(Dropout(0.5))
# Add an output layer with L2 regularization
model.add(Dense(1, activation='sigmoid', kernel_regularizer=l2(0.01)))
# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model (X and y must hold a dataset with 8 input features,
# loaded or generated beforehand)
model.fit(X, y, epochs=150, batch_size=10)
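To make the comparison concrete, these are the penalty terms that kernel_regularizer=l1(0.01) and l2(0.01) add to the loss for a given weight vector. The weight values here are made up for illustration:

```python
import numpy as np

w = np.array([0.5, -0.25, 0.0, 1.0])   # hypothetical layer weights
lam = 0.01

# L1 penalizes the sum of absolute values, pushing weights toward exactly zero
l1_penalty = lam * np.abs(w).sum()      # 0.01 * 1.75    = 0.0175
# L2 penalizes the sum of squares, shrinking large weights most strongly
l2_penalty = lam * np.square(w).sum()   # 0.01 * 1.3125  = 0.013125
print(l1_penalty, l2_penalty)
```

Note how the zero weight contributes nothing to either penalty, while the weight of 1.0 dominates the L2 term: this is why L1 tends to produce sparse solutions and L2 tends to produce small, diffuse weights.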
Neural Network
A recent post on the MSDN Forums raises the issue of reconstructing the Microsoft Neural Network and reproducing its prediction behavior from the model content. The forum did not allow a very detailed reply, and as I believe this is an interesting topic, I will give it another try in this post. As an example, I use a small neural network trained on two inputs, X (with states A and B) and Y (with states C and D), to predict a discrete variable Z with states E and F.
This post is a bit large, so here is what you will find:
• a description of the topology of the Microsoft Neural Network, particularized for my example
• a step-by-step description of the prediction phases
• a set of DMX statements that exemplify how to extract network properties from a Microsoft Neural Network mining model content
• a spreadsheet containing the sample data I used, as well as a sheet that contains the model content and uses cell formulae to reproduce the calculations that lead to predictions (the sheet can be used to execute predictions for this network)
The network topology
Once trained, a Microsoft Neural Network model looks more or less like below:
During prediction, the input nodes are populated with values derived from the input data. These values are linearly combined using the weights of the edges leading from the input nodes to the middle (hidden) layer, translating the input vector to a new vector with the same dimension as the hidden layer. The translated vector is then “activated” by applying the tanh function to each component. The resulting vector goes through a similar transformation from the hidden layer to the output layer: it is linearly converted to the output layer's dimensionality using the weights of the edges linking hidden nodes to output nodes. The result of this transformation is activated using the sigmoid function, and the final result is the set of output probabilities. These probabilities are normalized before being returned as a result (in a call like PredictHistogram).
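The prediction path described above can be sketched in a few lines of NumPy. All weights below are hypothetical placeholders (the real values would be read from the mining model content), but the sequence of operations — linear map, tanh, linear map, sigmoid, normalization — follows the description:

```python
import numpy as np

x = np.array([1.0, 0.0, 0.0, 1.0])   # one-hot-style inputs, e.g. X=A, Y=D

W1 = np.array([[ 0.4, -0.2],          # made-up input->hidden edge weights
               [-0.3,  0.1],
               [ 0.2,  0.5],
               [-0.1,  0.3]])
b1 = np.array([0.05, -0.05])
W2 = np.array([[ 0.7, -0.7],          # made-up hidden->output edge weights
               [-0.6,  0.6]])
b2 = np.array([0.1, -0.1])

hidden = np.tanh(x @ W1 + b1)         # linear combination, then tanh activation
logits = hidden @ W2 + b2             # linear combination into the output layer
out = 1.0 / (1.0 + np.exp(-logits))   # sigmoid activation per output node
probs = out / out.sum()               # normalize into P(Z=E), P(Z=F)
print(probs, probs.sum())
```

The normalization step is what makes the two sigmoid outputs behave like a probability distribution over the states E and F, matching what a PredictHistogram call returns.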
Advanced Factoring | mathhints.com
• Revisiting Factoring Quadratics
• Factoring Sum and Difference of Cubes
• Factoring and Solving with Polynomials
• Factoring and Solving with Exponents
• More Practice
Factoring is extremely important in math; we first learned factoring here in the Solving Quadratics by Factoring and Completing the Square section.
Revisiting Factoring Quadratics
Earlier, we learned how to factor, or “unFOIL” a trinomial into two binomials. (Remember that FOIL stands for First-Outer-Inner-Last when multiplying two binomials together):
Sometimes we have to factor out common factors, or GCFs. We always want to do this first:
And don’t forget “grouping” when we have four terms (but it doesn’t always work – we found other ways to solve in Graphing and Finding Roots of Polynomial Functions later). Again, we are working with
GCF‘s to do this:
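For instance, with four terms, group in pairs and take out each pair's GCF, then factor out the common binomial:

$ {{x}^{3}}+3{{x}^{2}}+2x+6={{x}^{2}}\left( {x+3} \right)+2\left( {x+3} \right)=\left( {{{x}^{2}}+2} \right)\left( {x+3} \right)$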
You can factor a difference of squares, but not a sum of squares: $ 9{{x}^{2}}-25=\left( {3x-5} \right)\left( {3x+5} \right)$. However, $ 9{{x}^{2}}+25$ is prime (irreducible) over the reals; using imaginary numbers, it factors as $ \left( {3x+5i} \right)\left( {3x-5i} \right)$.
Factoring with Sum and Difference of Cubes
For cubic binomials (sum or difference of cubes), we can factor:
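The standard identities are

$ {{a}^{3}}+{{b}^{3}}=\left( {a+b} \right)\left( {{{a}^{2}}-ab+{{b}^{2}}} \right)\quad \quad {{a}^{3}}-{{b}^{3}}=\left( {a-b} \right)\left( {{{a}^{2}}+ab+{{b}^{2}}} \right)$

Note that the sign in the binomial factor matches the sign of the original expression, and the middle sign of the trinomial factor is its opposite.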
Here are some examples of factoring sums and differences of cubes:
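For instance, $ 8{{x}^{3}}+27={{\left( {2x} \right)}^{3}}+{{3}^{3}}=\left( {2x+3} \right)\left( {4{{x}^{2}}-6x+9} \right)$, and $ {{x}^{3}}-8=\left( {x-2} \right)\left( {{{x}^{2}}+2x+4} \right)$. Multiplying back out confirms each factorization.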
Factoring and Solving with Polynomials
Again, we first learned about factoring methods here in the Solving Quadratics by Factoring and Completing the Square section. Now that we know how to graph and find roots of polynomials (from the Graphing and Finding Roots of Polynomial Functions section), we can do fancier factoring and thus find more roots. Remember that when we want to find solutions or roots, we set the equation to 0, factor, set each factor to 0, and solve. Here are some examples of factoring and solving polynomial equations; solve over the reals:
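For instance, to solve $ {{x}^{3}}-4x=0$ over the reals, factor out the GCF and then the difference of squares: $ x\left( {{{x}^{2}}-4} \right)=x\left( {x-2} \right)\left( {x+2} \right)=0$, so $ x=0,\,2,\,-2$.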
Here’s even more advanced solving, using techniques we will learn here in the Graphing and Finding Roots of Polynomial Functions section. Solve over the real and complex numbers:
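For instance, to solve $ {{x}^{4}}-16=0$ over the real and complex numbers: $ \left( {{{x}^{2}}-4} \right)\left( {{{x}^{2}}+4} \right)=\left( {x-2} \right)\left( {x+2} \right)\left( {{{x}^{2}}+4} \right)=0$, giving the real roots $ x=\pm 2$ and the complex roots $ x=\pm 2i$.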
Factoring and Solving with Exponents
Factoring and Solving with Exponential Functions can be a bit trickier. Note that we learned about the properties of exponents here in the Exponents and Radicals in Algebra section, and did some
solving with exponents here.
In your Pre-Calculus and Calculus classes, you may see algebraic exponential expressions that need factoring and possibly solving, either by taking out a Greatest Common Factor (GCF) or by
“unFOILing”. These really aren’t that bad, if you remember a few hints:
• To take out a GCF with exponents, take out the factor with the smallest exponent, whether it’s positive or negative. (Remember that “larger” negative numbers are actually smaller.) This is even true for fractional exponents. Then, to get what’s left in the parentheses after you take out the GCF, subtract the exponent you took out from each original exponent. For example, for the expression $ x^{-5}+x^{-2}+x$, take out $ x^{-5}$ for the GCF to get $ x^{-5}\left( x^{-5-(-5)}+x^{-2-(-5)}+x^{1-(-5)} \right)=x^{-5}\left( x^{0}+x^{3}+x^{6} \right)=x^{-5}\left( 1+x^{3}+x^{6} \right)$. (Remember that “$-\,-$” is the same as “$+$”.) Multiply back to make sure you’ve factored correctly!
• For fractional coefficients, find the common denominator, and take out the fraction that goes into all the other fractions. For the fraction you take out, the denominator is the least common denominator (LCD) of all the fractions, and the numerator is the greatest common factor (GCF) of the numerators. For example, $ \frac{3}{4}x^{2}-\frac{1}{2}x+4=\frac{3}{4}x^{2}-\frac{2}{4}x+\frac{16}{4}=\frac{1}{4}\left( 3x^{2}-2x+16 \right)$ (since nothing except 1 goes into 3, 2, and 16). Multiply back to make sure you’ve factored correctly!
• For a trinomial with a constant, if the largest exponent is twice the middle exponent, then use a substitution like $u$ for the middle exponent, “unFOIL”, and then put the “real” expression back in. For example, for $ x^{\frac{2}{3}}-x^{\frac{1}{3}}-2$, let $ u=x^{\frac{1}{3}}$, and we have $ u^{2}-u-2$, which factors to $ \left( u-2 \right)\left( u+1 \right)$. Then substitute back to $ \left( x^{\frac{1}{3}}-2 \right)\left( x^{\frac{1}{3}}+1 \right)$ and solve from there (set each factor to 0 and solve). Always multiply back to make sure you’ve factored correctly. We call this method u-substitution, or simply u-sub.
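The u-substitution example can be spot-checked numerically. This is a hypothetical Python check (not part of the original lesson): the factor $(u-2)$ gives $u=x^{1/3}=2$, i.e. $x=2^3=8$, and plugging that back in confirms it is a root.

```python
# Hypothetical numeric spot-check of the u-substitution example above.
def f(x):
    """x^(2/3) - x^(1/3) - 2, for x >= 0."""
    return x ** (2 / 3) - x ** (1 / 3) - 2

# The factor (u - 2) gives u = x**(1/3) = 2, i.e. x = 2**3 = 8.
root = 2 ** 3
print(root, abs(f(root)) < 1e-9)  # 8 True
```

This is exactly the “multiply back (or plug back in) to check” advice from the bullets above, automated.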
Let’s do some factoring. Learning to factor these will actually help you a lot when you get to Calculus:
After factoring, you may be asked to solve the exponential equation. Here are some examples, some using u-substitution. Note that the third problem uses log solving from here in the Logarithmic
Functions section.
Learn these rules, and practice, practice, practice!
For Practice: Use the Mathway widget below to try a Factoring problem. Click on Submit (the blue arrow to the right of the problem) and click on Factor to see the answer.
You can also type in your own problem, or click on the three dots in the upper right hand corner and click on “Examples” to drill down by topic.
If you click on Tap to view steps, or Click Here, you can register at Mathway for a free trial, and then upgrade to a paid subscription at any time (to get any type of math problem solved!).
On to Conics – you are ready!
Sympathetic Vibratory Physics | Mathematical Relations are Constant - page 125
"NATURE'S PLAN OF ERECTING PRESSURE WALLS DURING THE JOURNEY FROM THE UNIVERSAL WHITE LIGHT OF INERTIA TO ITS SIMULATION IN MOTION" [Mathematical Relations are Constant]
"The axis of the controlling plane of each elemental line has been extended to its opposing focal point and attached to its opposite element. This is indicated by rectangles in order to call
attention to their parallel positions. The universal gyroscope assorts all states of motion by their planes of gyration. Like planes seek each other. The four tones of any octave exactly equal four
tones of any other octave in mass energy. To crowd their accumulated forces into smaller volume does not in any way alter their energy." [Mathematical Relations are Constant]
"Mathematical Relations are Constant The mathematical relations of any wave of energy never change. All dimensions expand and contract in opposite directions of the same ratio." [Mathematical
Relations are Constant]
"ALL ENERGY IS EXPRESSED IN WAVES. ALL WAVES ARE DIVIDED INTO PRESSURE ZONES. ALL PRESSURE ZONES ARE SUSTAINED BY PRESSURE WALLS ERECTED IN DEFINED LOCKED POTENTIAL POSITIONS" [Mathematical Relations
are Constant]
Professor Daniel Brinton "The rhythmic relations in which force acts are everywhere, under all conditions, and at all times, the same. They are found experimentally to be universally expressible by
the mathematical relations of thirds.
"These threefold relations may be expressed with regard to their results as,
I. Assimilative (Concentration, Gravitation) II. Individualizing (Radiation, Entropy) III. Dominant or Resultant
"From these three actions are derived the three fundamental LAWS OF BEING:
I. Law of Assimilation: every individualized object assimilates itself to all other objects.
II. Law of Individualization: every such object tends to assimilate all other objects to itself.
III. Law of the Dominant: every such object is such by virtue of the higher or dominant force which controls these (above) two tendencies. [14.09 - Brintons Laws of Being]
In a two-story apartment complex, each apartment on the upper floor rents for 75 percent as much as each apartment on
In a two-story apartment complex, each apartment on the upper floor rents for 75 percent as much as each apartment on the lower floor. If the total monthly rent is $15,300 when rent is collected on
all of the apartments, what is the monthly rent on each apartment on the lower floor?
(1) An apartment on the lower floor rents for $150 more per month than an apartment on the upper floor.
(2) There are 6 more apartments on the upper floor than on the lower floor.
OA A
Source: GMAT Prep
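A quick way to see why the official answer is A (a sketch, not part of the original thread): statement (1) alone gives L − 0.75L = 0.25L = $150, so each lower-floor apartment rents for $600 regardless of how many apartments there are, while statement (2) alone leaves the rent undetermined (two unknowns, one equation).

```python
# Check of statement (1) alone: U = 0.75 * L and L - U = 150.
# (Hypothetical arithmetic sketch, not from the original post.)
L = 150 / 0.25        # L - 0.75*L = 0.25*L = 150
U = 0.75 * L
print(L, U, L - U)    # 600.0 450.0 150.0
```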
Figure.shift_origin(xshift=None, yshift=None)
Shift plot origin in x and/or y directions.
This method shifts the plot origin relative to the current origin by xshift and yshift in x and y directions, respectively. Optionally, append the length unit (c for centimeters, i for inches, or
p for points) to the shifts. Default unit if not explicitly given is c, but can be changed to other units via PROJ_LENGTH_UNIT.
For xshift, a special character w can also be used, which represents the bounding box width of the previous plot. The full syntax is [[±][f]w[/d]±]xoff, where optional signs, factor f and divisor
d can be used to compute an offset that may be adjusted further by ±xoff. Assuming that the previous plot has a width of 10 centimeters, here are some example values for xshift:
• "w": x-shift is 10 cm
• "w+2c": x-shift is 10+2=12 cm
• "2w+3c": x-shift is 2*10+3=23 cm
• "w/2-2c": x-shift is 10/2-2=3 cm
Similarly, for yshift, a special character h can also be used, which is the bounding box height of the previous plot.
Note: The previous plot bounding box refers to the last object plotted, which may be a basemap, image, logo, legend, colorbar, etc.
• xshift (float | str | None, default: None) – Shift plot origin in x direction.
• yshift (float | str | None, default: None) – Shift plot origin in y direction.
>>> import pygmt
>>> fig = pygmt.Figure()
>>> fig.basemap(region=[0, 10, 0, 10], projection="X10c/10c", frame=True)
>>> # Shift the plot origin in x direction by 12 cm
>>> fig.shift_origin(xshift=12)
>>> fig.basemap(region=[0, 10, 0, 10], projection="X14c/10c", frame=True)
>>> # Shift the plot origin in x direction based on the previous plot width
>>> # Here, the width is 14 cm, and xshift is 16 cm
>>> fig.shift_origin(xshift="w+2c")
>>> fig.show()
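The w-shift arithmetic above can be illustrated with a tiny stand-alone helper. This is hypothetical code, not part of PyGMT, and it only handles the simple forms shown in the bullet list (a factor, an optional divisor, and an offset in centimeters):

```python
# Hypothetical helper (not part of PyGMT) that mimics the
# [[±][f]w[/d]±]xoff arithmetic for a previous plot of width w (in cm).
import re

def resolve_shift(spec, w):
    """Evaluate shifts such as 'w', 'w+2c', '2w+3c', 'w/2-2c' in cm."""
    m = re.fullmatch(r"([+-]?\d*\.?\d*)w(?:/(\d+\.?\d*))?([+-]\d+\.?\d*)?c?", spec)
    if not m:
        raise ValueError(f"unsupported spec: {spec}")
    factor = float(m.group(1) + "1") if m.group(1) in ("", "+", "-") else float(m.group(1))
    divisor = float(m.group(2)) if m.group(2) else 1.0
    offset = float(m.group(3)) if m.group(3) else 0.0
    return factor * w / divisor + offset

for spec in ("w", "w+2c", "2w+3c", "w/2-2c"):
    print(spec, "->", resolve_shift(spec, 10.0))
```

Running this reproduces the four example values from the documentation (10, 12, 23, and 3 cm).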
Neural Network For Picking Mushrooms
Imagine yourself in a forest picking mushrooms, either for fun or, probably more likely, so you won’t starve to death. How do you decide if a mushroom is edible or poisonous? Is there a way to
utilize an artificial neural network to help you with the decision? If so, can you come up with a simpler, but more efficient solution? Let’s figure it out!
Test Data
Before we start, we need to study some mushrooms to collect the test data. Unfortunately, currently it is winter in my region, so at the moment, it is difficult for me to collect the mushrooms
myself. But, luckily, the test data is already available in the UC Irvine Machine Learning Repository.
I am splitting the data into two equal parts. The first part is called training data. It will be used to train the neural network. The second part is called evaluation data. It will be used to
evaluate how well the neural network can classify unknown mushrooms as edible or poisonous.
Artificial Neural Network: Design
The neural network should take a mushroom as an input and tell us whether or not it is edible or poisonous. The input layer of the neural network consists of a certain number of neurons. In our case,
each neuron can take a value from 0 to 1 inclusively. The same for the output layer: it also has a given number of neurons, and each neuron can also return a value from 0 to 1 inclusively.
You can think of the neural network as a function. The function takes multiple parameters. Each parameter may vary from 0 to 1. The function returns multiple values, each of which also varies from 0
to 1.
First of all, let’s decide how many neurons the input layer should have and how to represent a mushroom as the input.
Each mushroom has 22 features (cap shape, odor, veil color, etc.). Each feature can take from 2 to 12 values. For example, the veil color can be brown, orange, white or yellow.
There are a number of ways of how to “feed” any such mushroom to the neural network. I’m going to explain by example the method I picked for our mushroom problem.
Imagine for simplicity that each mushroom has only 2 features: veil type and veil color. The veil type can be partial or universal, the veil color can be brown, orange, white or yellow. These
mushrooms can be described by the table below.
Imagine I found a mushroom with the partial white veil. This means I can substitute question marks with values. Each value will be 0 or 1. I have now updated the table below with the new values. The
zeros and the ones now form the (1, 0, 0, 0, 1, 0) vector which I can pass on to the neural network.
Similarly, if it was the universal yellow veil, the vector would be (0, 1, 0, 0, 0, 1). I hope you get the idea now.
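The encoding described above (one value per possible feature value, set to 1 when the mushroom has it) can be sketched in a few lines of Python. This is an illustration using the two-feature toy example only, not the full 22-feature dataset:

```python
# A sketch of the one-hot encoding described above
# (two-feature toy example, not the full 22-feature dataset).
FEATURES = {
    "veil_type": ["partial", "universal"],
    "veil_color": ["brown", "orange", "white", "yellow"],
}

def encode(mushroom):
    """Turn a mushroom's categorical features into a 0/1 input vector."""
    vector = []
    for feature, values in FEATURES.items():
        vector.extend(1 if mushroom[feature] == value else 0 for value in values)
    return vector

print(encode({"veil_type": "partial", "veil_color": "white"}))
# [1, 0, 0, 0, 1, 0]
```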
When I apply the above method to the actual 22 features, I have 126 neurons in the input layer. The same approach, applied to the output layer, gives me 2 neurons: the first represents the degree of
edibility of a mushroom, the second represents how poisonous the mushroom is. If, for example, I deal with a very poisonous mushroom, the output is (0, 1)^*.
Having agreed on how to represent the inputs and the outputs of the neural network, let’s then briefly discuss its type and its hidden layers.
I decided to use a multilayer perceptron network because it fits well in the classification problem I am solving. I added a hidden layer, which is also called a dense layer. It is located between
input and output layers and has ten neurons. Why ten? Well, there are many rules of thumb about how to pick the number of the neurons in the hidden layer. The real value is always a matter of
experimentation and optimization. I picked ten using one of these rules of thumb. It is a good value to start with.
Artificial Neural Network: Training And Evaluation
Now that we have agreed on the configuration of the neural network, I am ready to build and train it. To do this, I will use NeuroFlow, a powerful Scala library to design, train, and evaluate neural networks.
After training the network through 900 iterations, I can now test the resulting neural network on the real data. The “real data” is the subset of the mushrooms that I had previously put aside and
didn’t use for the neural network training. The tests show that even without the optimization of the network parameters, the proportion^** of the correct answers is 0.92, which is quite good.
Can I improve the result? Apparently, yes. For example, I still could play with the neural network configuration and find the best one. I also could deal with the training data in the more optimal
way, not just by splitting it in half. However, there is another unexpectedly simple algorithm that shows really good results for the mushrooms problem.
K-Nearest Neighbours: The Main Idea
As we touched on above, let’s utilize k-nearest neighbours and compare its results with the results of the neural network.
The idea of the algorithm is quite straightforward: given any unknown mushroom, I have to find the mushroom from my test data that is the most similar to one I have to classify. Then the edibility of
the most similar one will be the edibility I am trying to find.
The algorithm can be improved by picking multiple most-similar mushrooms, say, three. In this case, the edibility that most of these three mushrooms have is going to be the one I’m looking for.
K-Nearest Neighbours: Similarity Of Two Mushrooms
The main thing I need to decide here is how to measure the similarity. There are quite a few ways of how I can do it, but the Hamming distance method fits my needs very well. The Hamming distance
between two mushrooms measures the number of features for which the mushrooms have different values. The smaller the Hamming distance, the more similar the mushrooms are.
If the first mushroom has the brown cap and the white stalk, and the second one also has the brown cap, but the gray stalk, the distance will be 1, as they differ only in one feature, only in the
stalk color.
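That two-mushroom example is easy to sketch in Python (an illustration only; the post's own implementation is the Scala class below):

```python
# Hamming distance between two mushrooms, as described above.
def hamming(a, b):
    """Number of features on which the two mushrooms differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

m1 = ["brown", "white"]  # cap color, stalk color
m2 = ["brown", "gray"]
print(hamming(m1, m2))   # 1
print(hamming(m1, m1))   # 0
```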
By the way, if I’m comparing two identical mushrooms, the number of features with different values is obviously 0, so the distance is also 0, as expected. Identical mushrooms are “as similar as possible”.
K-Nearest Neighbours: Implementation And Evaluation
In light of the above discussion, it is now trivial to implement the k-nearest neighbours algorithm:
class NearestNeighbor {

  def nearest(y: Seq[String], xs: Seq[Seq[String]], k: Int): String = {
    // Pick the k mushrooms closest to y (each row's head is the label "e"/"p").
    val knn = xs.sortWith((a, b) => distance(a.tail, y) < distance(b.tail, y)).slice(0, k)
    val edibleNumber: Int = knn.foldLeft(0)((count, element) => if (element.head == "e") count + 1 else count)
    if (edibleNumber > k / 2) "e" else "p"
  }

  // Hamming distance: the number of features with different values.
  private def distance(x: Seq[String], y: Seq[String]): Double = {
    x.zip(y).foldLeft(0.0)((sum, pair) => if (pair._1 == pair._2) sum else sum + 1.0)
  }
}
The algorithm does not even need the training phase. When I run the algorithm against the set of the evaluation data, I have a great result: the proportion of the correct answers is 0.98, even better
than the neural network has shown.
To Summarize
Neural networks are a powerful tool that can be applied in many different areas. They can even support you during mushroom-picking, as you have just seen. There is no magic behind them: a neural
network is just a function of (usually) multiple arguments.
However, sometimes the simpler solution is the better one. As I have just demonstrated, for the mushroom-picking problem, the k-nearest neighbours method shows an even better result than the neural network.
The full implementation of both the neural network and the k-nearest neighbours is available in GitHub.
^* In reality, it will be something like (0.00087, 0.99841). As I need to clearly state “edible” or “poisonous”, I will just compare these 2 numbers. If the first value is larger, then I claim
“edible”, otherwise “poisonous”.
^** The ratio between the number of the correct answers given by the neural network and the total number of the mushrooms in the test data. This is the broadest and definitely not the best way to
evaluate a model because it has a lot of disadvantages. For example, it does not pay attention to the proportion of the classes in the test data. However, because of its simplicity, it can be used
for illustration purposes.
Cos Function
Calculates the cosine of an angle. The angle is specified in radians. The result lies between -1 and 1.
Using the angle Alpha, the Cos function calculates the ratio of the length of the side that is adjacent to the angle, divided by the length of the hypotenuse in a right-angled triangle.
Cos(Alpha) = Adjacent/Hypotenuse
Cos (Number As Double) As Double
Number: Numeric expression that specifies an angle in radians that you want to calculate the cosine for.
To convert degrees to radians, multiply degrees by pi/180. To convert radians to degrees, multiply radians by 180/pi.
Pi is here the fixed circle constant with the rounded value 3.14159...
' The following example allows for a right-angled triangle the input of
' the adjacent side and the angle (in degrees) and calculates the length of the hypotenuse:
Sub ExampleCosinus
REM rounded Pi = 3.14159
Dim d1 As Double, dAngle As Double
d1 = InputBox("Enter the length of the adjacent side: ","Adjacent")
dAngle = InputBox("Enter the angle Alpha (in degrees): ","Alpha")
Print "The length of the hypotenuse is"; (d1 / cos (dAngle * Pi / 180))
End Sub
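For comparison, here is the same computation as a short Python sketch (mirroring the Basic example above; math.cos likewise expects its argument in radians):

```python
# Python equivalent of the Basic example: hypotenuse from the
# adjacent side and the angle Alpha given in degrees.
import math

def hypotenuse(adjacent, angle_deg):
    """hypotenuse = adjacent / cos(angle), with the angle in degrees."""
    return adjacent / math.cos(angle_deg * math.pi / 180)

print(round(hypotenuse(3.0, 60.0), 4))  # 6.0, since cos(60 deg) = 0.5
```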
Finding the Circumradius of the Excentral Triangle
Solution 1
We are going to use a beautiful property that is interesting in itself: the midpoint $D$ of the arc $\overset{\frown}{BC}$ of the circumcircle $(ABC)$ that does not contain $A$ is the center of a circle that passes through the points $B,$ $C,$ $I$ and $I_a.$ A similar observation applies to the midpoints $E$ and $F$ of the arcs $\overset{\frown}{CA}$ and $\overset{\frown}{AB}$ that are opposite vertices $B$ and $C,$ respectively: the circumcircle $(AICI_b)$ is centered at $E$ whereas $(AIBI_c)$ is centered at $F.$
In other words, the segments $II_a,$ $II_b,$ $II_c$ are each halved by the circumcircle $(ABC),$ implying that the circumcircle $(I_aI_bI_c)$ is homothetic to the circumcircle $(ABC)$ from the incenter $I,$ the factor of homothety being $2.$ Thus $R'=2R$ and $IO'=2\,IO,$ i.e., $IO=OO'.$
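A sketch of the homothety step in symbols (assuming, as in the solution, that $D,$ $E,$ $F$ are the arc midpoints and hence the midpoints of $II_a,$ $II_b,$ $II_c$):

```latex
% Homothety h with center I and ratio 2:
h(I,2)\colon\; D \mapsto I_a,\qquad E \mapsto I_b,\qquad F \mapsto I_c .
% Since D, E, F lie on (ABC), the image of (ABC) under h(I,2)
% is the circumcircle (I_a I_b I_c), whence
R' = 2R, \qquad \overrightarrow{IO'} = 2\,\overrightarrow{IO}
\ \Longrightarrow\ IO = OO' .
```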
Solution 2
By chasing angles, we find the following duality: $ABC$ is the orthic triangle of the acute triangle $I_aI_bI_c.$ So $I$ is the orthocenter of $I_aI_bI_c$ and $O$ is the nine-point circle (NPC) center of the same triangle. We are done.
Copyright © 1996-2018 Alexander Bogomolny
How to do a t-test or ANOVA for more than one variable at once in R?
As part of my teaching assistant position in a Belgian university, students often ask me for some help in their statistical analyses for their master’s thesis.
A frequent question is how to compare groups of patients in terms of several quantitative continuous variables. Most of us know that:
• To compare two groups, a Student’s t-test should be used^1
• To compare three groups or more, an ANOVA should be performed
These two tests are quite basic and have been extensively documented online and in statistical textbooks so the difficulty is not in how to perform these tests.
In the past, I used to do the analyses by following these 3 steps:
1. Draw boxplots illustrating the distributions by group (with the boxplot() function or thanks to the {esquisse} R Studio addin if I wanted to use the {ggplot2} package)
2. Perform a t-test or an ANOVA depending on the number of groups to compare (with the t.test() and oneway.test() functions for t-test and ANOVA, respectively)
3. Repeat steps 1 and 2 for each variable
This was feasible as long as there were only a couple of variables to test. Nonetheless, most students came to me asking to perform these kinds of tests not on one or two variables, but on multiple variables. So when there was more than one variable to test, I quickly realized that I was wasting my time and that there must be a more efficient way to do the job.
Note: you must be very careful with the issue of multiple testing (also referred as multiplicity) which can arise when you perform multiple tests. In short, when a large number of statistical tests
are performed, some will have \(p\)-values less than 0.05 purely by chance, even if all null hypotheses are in fact really true. This is known as multiplicity or multiple testing. You can tackle this
problem by using the Bonferroni correction, among others. The Bonferroni correction is a simple method that allows many t-tests to be made while still assuring an overall confidence level is
maintained. For this, instead of using the standard threshold of \(\alpha = 5\)% for the significance level, you can use \(\alpha = \frac{0.05}{m}\) where \(m\) is the number of t-tests. For example,
if you perform 20 t-tests with a desired \(\alpha = 0.05\), the Bonferroni correction implies that you would reject the null hypothesis for each individual test when the \(p\)-value is smaller than \
(\alpha = \frac{0.05}{20} = 0.0025\).
Note also that there is no universally accepted approach for dealing with the problem of multiple comparisons. Usually, you should choose a p-value adjustment measure familiar to your audience or in
your field of study. The Bonferroni correction is easy to implement. It is however not appropriate if you have a very large number of tests to perform (imagine you want to do 10,000 t-tests, a p
-value would have to be less than \(\frac{0.05}{10000} = 0.000005\) to be significant). A more powerful method is also to adjust the false discovery rate using the Benjamini-Hochberg or Holm
procedure (McDonald 2014).
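The 20-test Bonferroni example above can be checked with a few lines of code (Python here purely as a calculator; this is not part of the original R workflow):

```python
# Illustrative sketch of the Bonferroni rule described above.
def bonferroni_reject(pvalues, alpha=0.05):
    """Reject H0 for every test whose p-value is below alpha / m."""
    m = len(pvalues)
    threshold = alpha / m
    return threshold, [p < threshold for p in pvalues]

# 20 hypothetical p-values: only values below 0.05 / 20 = 0.0025 count.
pvals = [0.001, 0.004, 0.03] + [0.5] * 17
threshold, rejected = bonferroni_reject(pvals)
print(sum(rejected))  # 1
```

Note that 0.004 would be significant at the usual 5% level but survives neither the corrected threshold here nor, in general, the stricter adjustments discussed above.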
Another option is to use a multivariate ANOVA (MANOVA), if your independent variable has more than two levels. This is particularly useful when your dependent variables are correlated. Correlation
between the dependent variables provides MANOVA the following advantages:
• Identify patterns between several dependent variables: The independent variables can influence the relationship between dependent variables instead of influencing a single dependent variable.
• Adress the issue of multiple testing: with MANOVA, the error rate equals the significance level (with no p-value adjustment method needed).
• Greater statistical power: When the dependent variables are correlated, MANOVA can identify effects that are too small for the ANOVA to detect.
Note that MANOVA is used if your independent variable has more than two levels. If your independent variable has only two levels, the multivariate equivalent of the t-test is Hotelling’s \(T^2\).
This article aims at presenting a way to perform multiple t-tests and ANOVA from a technical point of view (how to implement it in R). Discussion on which adjustment method to use or whether there is
a more appropriate model to fit the data is beyond the scope of this article (so be sure to understand the implications of using the code below for your own analyses). Make sure also to test the
assumptions of the ANOVA before interpreting results.
Perform multiple tests at once
I thus wrote a piece of code that automated the process, by drawing boxplots and performing the tests on several variables at once. Below is the code I used, illustrating the process with the iris
dataset. The Species variable has 3 levels, so let’s remove one, and then draw a boxplot and apply a t-test on all 4 continuous variables at once. Note that the continuous variables that we would
like to test are variables 1 to 4 in the iris dataset.
dat <- iris
# remove one level to have only two groups
dat <- subset(dat, Species != "setosa")
dat$Species <- factor(dat$Species)
# boxplots and t-tests for the 4 variables at once
for (i in 1:4) { # variables to compare are variables 1 to 4
boxplot(dat[, i] ~ dat$Species, # draw boxplots by group
ylab = names(dat[i]), # rename y-axis with variable's name
xlab = "Species"
print(t.test(dat[, i] ~ dat$Species)) # print results of t-test
## Welch Two Sample t-test
## data: dat[, i] by dat$Species
## t = -5.6292, df = 94.025, p-value = 1.866e-07
## alternative hypothesis: true difference in means between group versicolor and group virginica is not equal to 0
## 95 percent confidence interval:
## -0.8819731 -0.4220269
## sample estimates:
## mean in group versicolor mean in group virginica
## 5.936 6.588
## Welch Two Sample t-test
## data: dat[, i] by dat$Species
## t = -3.2058, df = 97.927, p-value = 0.001819
## alternative hypothesis: true difference in means between group versicolor and group virginica is not equal to 0
## 95 percent confidence interval:
## -0.33028364 -0.07771636
## sample estimates:
## mean in group versicolor mean in group virginica
## 2.770 2.974
## Welch Two Sample t-test
## data: dat[, i] by dat$Species
## t = -12.604, df = 95.57, p-value < 2.2e-16
## alternative hypothesis: true difference in means between group versicolor and group virginica is not equal to 0
## 95 percent confidence interval:
## -1.49549 -1.08851
## sample estimates:
## mean in group versicolor mean in group virginica
## 4.260 5.552
## Welch Two Sample t-test
## data: dat[, i] by dat$Species
## t = -14.625, df = 89.043, p-value < 2.2e-16
## alternative hypothesis: true difference in means between group versicolor and group virginica is not equal to 0
## 95 percent confidence interval:
## -0.7951002 -0.6048998
## sample estimates:
## mean in group versicolor mean in group virginica
## 1.326 2.026
As you can see, the above piece of code draws a boxplot and then prints results of the test for each continuous variable, all at once.
At some point in the past, I even wrote code to:
1. draw a boxplot
2. test for the equality of variances (thanks to the Levene’s test)
3. depending on whether the variances were equal or unequal, the appropriate test was applied: the Welch test if the variances were unequal and the Student’s t-test in the case the variances were
equal (see more details about the different versions of the t-test for two samples)
4. apply steps 1 to 3 for all continuous variables at once
I had a similar code for ANOVA in case I needed to compare more than two groups.
The code was doing the job relatively well. Indeed, thanks to this code I was able to test several variables in an automated way in the sense that it compared groups for all variables at once.
The only things I had to change from one project to another were the name of the grouping variable and the numbering of the continuous variables to test (Species and 1:4 in the above code).
Concise and easily interpretable results
Although it was working quite well and applicable to different projects with only minor changes, I was still unsatisfied with another point.
Someone who is proficient in statistics and R can read and interpret the output of a t-test without any difficulty. However, as you may have noticed with your own statistical projects, most people do
not know what to look for in the results and are sometimes a bit confused when they see so many graphs, code, output, results and numeric values in a document. They are quite easily overwhelmed by
this mass of information and unable to extract the key message.
With my old R routine, the time I was saving by automating the process of t-tests and ANOVA was (partially) lost when I had to explain R outputs to my students so that they could interpret the
results correctly. Although most of the time it simply boiled down to pointing out what to look for in the outputs (i.e., p-values), I was still losing quite a lot of time because these outputs were,
in my opinion, too detailed for most real-life applications and for students in introductory classes. In other words, too much information seemed to be confusing for many people so I was still not
convinced that it was the most optimal way to share statistical results to nonscientists.
Of course, they came to me for statistical advices, so they expected to have these results and I needed to give them answers to their questions and hypotheses. Nonetheless, I wanted to find a better
way to communicate these results to this type of audience, with the minimum of information required to arrive at a conclusion. No more and no less than that.
After a long time spent online trying to figure out a way to present results in a more concise and readable way, I discovered the {ggpubr} package. This package allows to indicate the test used and
the p-value of the test directly on a ggplot2-based graph. It also facilitates the creation of publication-ready plots for non-advanced statistical audiences.
After many refinements and modifications of the initial code (available in this article), I finally came up with a rather stable and robust process to perform t-tests and ANOVA for more than one
variable at once, and more importantly, make the results concise and easily readable by anyone (statisticians or not).
A graph is worth a thousand words, so here are the exact same tests than in the previous section, but this time with my new R routine:
# Edit from here #
x <- which(names(dat) == "Species") # name of grouping variable
y <- which(names(dat) == "Sepal.Length" # names of variables to test
| names(dat) == "Sepal.Width" |
names(dat) == "Petal.Length" |
names(dat) == "Petal.Width")
method <- "t.test" # one of "wilcox.test" or "t.test"
paired <- FALSE # if paired make sure that in the dataframe you have first all individuals at T1, then all individuals again at T2
# Edit until here
# Edit at your own risk
for (i in y) {
for (j in x) {
ifelse(paired == TRUE,
p <- ggpaired(dat,
x = colnames(dat[j]), y = colnames(dat[i]),
color = colnames(dat[j]), line.color = "gray", line.size = 0.4,
palette = "npg",
legend = "none",
xlab = colnames(dat[j]),
ylab = colnames(dat[i]),
add = "jitter"
p <- ggboxplot(dat,
x = colnames(dat[j]), y = colnames(dat[i]),
color = colnames(dat[j]),
palette = "npg",
legend = "none",
add = "jitter"
)
)
# Add p-value
print(p + stat_compare_means(aes(label = paste0(after_stat(method), ", p-value = ", after_stat(p.format))),
method = method,
paired = paired,
# group.by = NULL,
ref.group = NULL
))
}
}
As you can see from the graphs above, only the most important information is presented for each variable:
• a visual comparison of the groups thanks to boxplots
• the name of the statistical test
• the p-value of the test
Of course, experts may be interested in more advanced results. However, this simple yet complete graph, which includes the name of the test and the p-value, gives all the necessary information to
answer the question: “Are the groups different?”.
In my experience, I have noticed that students and professionals (especially those from a less scientific background) understand these results far better than the ones presented in the previous section.
The only lines of code that need to be modified for your own project are the name of the grouping variable (Species in the above code), the names of the variables you want to test (Sepal.Length,
Sepal.Width, etc.),^2 whether you want to apply a t-test (t.test) or a Wilcoxon test (wilcox.test), and whether the samples are paired or not (FALSE if samples are independent, TRUE if they are paired).
Based on these graphs, it is easy, even for non-experts, to interpret the results and conclude that the versicolor and virginica species are significantly different in terms of all 4 variables (since
all p-values \(< \frac{0.05}{4} = 0.0125\)). Recall that the Bonferroni correction is applied to avoid the issue of multiple testing, so we divide the usual \(\alpha\) level by 4 because there are 4 tests.
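The Bonferroni arithmetic is simple enough to sketch outside R. Below is a plain-Python illustration of the same idea (a rough stand-in for R's p.adjust(method = "bonferroni"); the p-values are made up, not the iris results):

```python
def bonferroni(p_values):
    """Bonferroni-adjusted p-values: each p multiplied by the number of tests, capped at 1."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

raw = [0.0003, 0.002, 0.018, 0.40]  # made-up p-values for 4 tests
adjusted = bonferroni(raw)

alpha = 0.05
# Comparing raw p-values against alpha / m is equivalent to comparing
# adjusted p-values against alpha:
assert all((p < alpha / len(raw)) == (q < alpha) for p, q in zip(raw, adjusted))
print(adjusted)  # [0.0012, 0.008, 0.072, 1.0]
```

Dividing the significance level by the number of tests and multiplying the p-values by the number of tests are two views of the same correction.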
Additional p-value adjustment methods
If you would like to use another p-value adjustment method, you can use the p.adjust() function. Below are the raw p-values found above, together with p-values derived from the main adjustment
methods (presented in a dataframe):
raw_pvalue <- numeric(length = length(1:4))
for (i in (1:4)) {
  raw_pvalue[i] <- t.test(dat[, i] ~ dat$Species,
    paired = FALSE,
    alternative = "two.sided"
  )$p.value
}

df <- data.frame(
  Variable = names(dat[, 1:4]),
  raw_pvalue = round(raw_pvalue, 3)
)

df$Bonferroni <- round(p.adjust(df$raw_pvalue,
  method = "bonferroni"
), 3)
df$BH <- round(p.adjust(df$raw_pvalue,
  method = "BH"
), 3)
df$Holm <- round(p.adjust(df$raw_pvalue,
  method = "holm"
), 3)
df$Hochberg <- round(p.adjust(df$raw_pvalue,
  method = "hochberg"
), 3)
df$Hommel <- round(p.adjust(df$raw_pvalue,
  method = "hommel"
), 3)
df$BY <- round(p.adjust(df$raw_pvalue,
  method = "BY"
), 3)
df
## Variable raw_pvalue Bonferroni BH Holm Hochberg Hommel BY
## 1 Sepal.Length 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## 2 Sepal.Width 0.002 0.008 0.002 0.002 0.002 0.002 0.004
## 3 Petal.Length 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## 4 Petal.Width 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Regardless of the p-value adjustment method, the two species are different for all 4 variables. Note that the adjustment method should be chosen before looking at the results to avoid choosing the
method based on the results.
Below is another function that allows you to perform multiple Student's t-tests or Wilcoxon tests at once and to choose the p-value adjustment method. The function also lets you specify whether samples are
paired or unpaired and whether the variances are assumed to be equal or not. (The code has been adapted from Mark White's article.)
t_table <- function(data, dvs, iv,
var_equal = TRUE,
p_adj = "none",
alpha = 0.05,
paired = FALSE,
wilcoxon = FALSE) {
if (!inherits(data, "data.frame")) {
  stop("data must be a data.frame")
}
if (!all(c(dvs, iv) %in% names(data))) {
  stop("at least one column given in dvs and iv are not in the data")
}
if (!all(sapply(data[, dvs], is.numeric))) {
  stop("all dvs must be numeric")
}
if (length(unique(na.omit(data[[iv]]))) != 2) {
  stop("independent variable must only have two unique values")
}
out <- lapply(dvs, function(x) {
  if (paired == FALSE & wilcoxon == FALSE) {
    tres <- t.test(data[[x]] ~ data[[iv]], var.equal = var_equal)
  } else if (paired == FALSE & wilcoxon == TRUE) {
    tres <- wilcox.test(data[[x]] ~ data[[iv]])
  } else if (paired == TRUE & wilcoxon == FALSE) {
    tres <- t.test(data[[x]] ~ data[[iv]],
      var.equal = var_equal,
      paired = TRUE
    )
  } else {
    tres <- wilcox.test(data[[x]] ~ data[[iv]],
      paired = TRUE
    )
  }
  c(
    p_value = tres$p.value
  )
})
out <- as.data.frame(do.call(rbind, out))
out <- cbind(variable = dvs, out)
names(out) <- gsub("[^0-9A-Za-z_]", "", names(out))
out$p_value <- ifelse(out$p_value < 0.001,
  "<0.001",
  round(p.adjust(out$p_value, p_adj), 3)
)
out$conclusion <- ifelse(out$p_value < alpha,
  paste0("Reject H0 at ", alpha * 100, "%"),
  paste0("Do not reject H0 at ", alpha * 100, "%")
)
out
}
Applied to our dataset, with no adjustment method for the p-values:
result <- t_table(
data = dat,
c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"),
"Species"
)
result
## variable p_value conclusion
## 1 Sepal.Length <0.001 Reject H0 at 5%
## 2 Sepal.Width 0.002 Reject H0 at 5%
## 3 Petal.Length <0.001 Reject H0 at 5%
## 4 Petal.Width <0.001 Reject H0 at 5%
And with the Holm (1979) adjustment method:
result <- t_table(
data = dat,
c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"),
"Species",
p_adj = "holm"
)
result
## variable p_value conclusion
## 1 Sepal.Length <0.001 Reject H0 at 5%
## 2 Sepal.Width 0.002 Reject H0 at 5%
## 3 Petal.Length <0.001 Reject H0 at 5%
## 4 Petal.Width <0.001 Reject H0 at 5%
Again, with Holm's adjustment method, we conclude that, at the 5% significance level, the two species are significantly different from each other in terms of all 4 variables.
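For readers who want to see exactly what Holm's step-down adjustment computes, here is a stdlib Python sketch written to mirror R's p.adjust(method = "holm") (the input p-values are made up):

```python
def holm(p_values):
    """Holm (1979) step-down adjusted p-values, as computed by R's p.adjust(method = "holm")."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ranks, smallest p first
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        candidate = min(1.0, (m - rank) * p_values[i])  # multiplier shrinks step by step
        running_max = max(running_max, candidate)       # enforce monotonicity
        adjusted[i] = running_max
    return adjusted

print([round(q, 6) for q in holm([0.04, 0.001, 0.03])])  # [0.06, 0.003, 0.06]
```

Because the multiplier shrinks from m down to 1 instead of staying at m, Holm's method is uniformly less conservative than Bonferroni while still controlling the family-wise error rate.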
Below is the same process with an ANOVA. Note that we reload the iris dataset to include all three species this time:
dat <- iris
# Edit from here
x <- which(names(dat) == "Species") # name of grouping variable
y <- which(names(dat) == "Sepal.Length" # names of variables to test
| names(dat) == "Sepal.Width" |
names(dat) == "Petal.Length" |
names(dat) == "Petal.Width")
method1 <- "anova" # one of "anova" or "kruskal.test"
method2 <- "t.test" # one of "wilcox.test" or "t.test"
my_comparisons <- list(c("setosa", "versicolor"), c("setosa", "virginica"), c("versicolor", "virginica")) # comparisons for post-hoc tests
# Edit until here
# Edit at your own risk
for (i in y) {
for (j in x) {
p <- ggboxplot(dat,
x = colnames(dat[j]), y = colnames(dat[i]),
color = colnames(dat[j]),
legend = "none",
palette = "npg",
add = "jitter"
)
print(
  p + stat_compare_means(aes(label = paste0(after_stat(method), ", p-value = ", after_stat(p.format))),
    method = method1, label.y = max(dat[, i], na.rm = TRUE)
  ) +
    stat_compare_means(comparisons = my_comparisons, method = method2, label = "p.format") # remove if p-value of ANOVA or Kruskal-Wallis test >= alpha
)
}
}
Like the improved routine for the t-test, I have noticed that students and non-expert professionals understand ANOVA results presented this way much more easily compared to the default R outputs.
With one graph for each variable, it is easy to see that all species are different from each other in terms of all 4 variables.^3
If you want to apply the same automated process to your data, you will need to modify the name of the grouping variable (Species), the names of the variables you want to test (Sepal.Length, etc.),
whether you want to perform an ANOVA (anova) or Kruskal-Wallis test (kruskal.test) and finally specify the comparisons for the post-hoc tests.^4
To go even further
As we have seen, these two improved R routines allow you to:
1. Perform t-tests and ANOVA on a small or large number of variables with only minor changes to the code. I basically only have to replace the variable names and the name of the test I want to use.
It takes almost the same time to test one or several variables so it is quite an improvement compared to testing one variable at a time.
2. Share test results in a much cleaner and more presentable way. This is possible thanks to a graph showing the observations by group and the p-value of the appropriate test included directly on the graph.
This is particularly important when communicating results to a wider audience or to people from diverse backgrounds.
However, like most of my R routines, these two pieces of code are still a work in progress. Below are some additional features I have been thinking of, which could be added in the future to make
the process of comparing two or more groups even smoother:
• Add the possibility to select variables by their position in the dataframe. For the moment it is only possible to do it via their names. This would allow the process to be automated even further,
because instead of typing all variable names one by one, we could simply type 4:25 (to test variables 4 to 25, for instance).
• Add the possibility to choose a p-value adjustment method. Currently, raw p-values are displayed in the graphs and I manually adjust them afterwards or adjust the \(\alpha\).
• When comparing more than two groups, it is only possible to apply an ANOVA or Kruskal-Wallis test at the moment. A major improvement would be to add the possibility to perform a repeated measures
ANOVA (i.e., an ANOVA when the samples are dependent). It is currently already possible to do a t-test with two paired samples, but it is not yet possible to do the same with more than two groups.
• Another less important (yet still nice) feature when comparing more than 2 groups would be to automatically apply post-hoc tests only in the case where the null hypothesis of the ANOVA or
Kruskal-Wallis test is rejected (so when there is at least one group different from the others, because if the null hypothesis of equal groups is not rejected we do not apply a post-hoc test). At
the present time, I manually add or remove the code that displays the p-values of post-hoc tests depending on the global p-value of the ANOVA or Kruskal-Wallis test.
I will try to add these features in the future, or I would be glad to help if the author of the {ggpubr} package needs help in including these features (I hope he will see this article!).
Last but not least, the following packages may be of interest to some readers:
• If you want to report statistical results on a graph, I advise you to check the {ggstatsplot} package and in particular the ggbetweenstats() and ggwithinstats() functions. These functions allow
you to compare a continuous variable across multiple groups or conditions (for both independent and paired samples). Two advantages of these functions are that:
□ it is very easy to switch from parametric to nonparametric tests and
□ it automatically runs an ANOVA or t-test depending on the number of groups to compare
Note that many different statistical results are displayed on the graph, not only the name of the test and the p-value, so a bit of simplicity and clarity is traded for more precision. However, it is
still very convenient to be able to include tests results on a graph in order to combine the advantages of a visualization and a sound statistical analysis. Something that I still need to figure out
is how to run the code on several variables at once.
• The {compareGroups} package also provides a nice way to compare groups. It comes with a really complete Shiny app, available with:
# install.packages("compareGroups")
Update with the {ggstatsplot} package
Several months after having written this article, I finally found a way to plot and run analyses on several variables at once with the package {ggstatsplot} (Patil 2021). This was the main feature I
was missing and which prevented me from using it more often.
Although I still find that too many statistical details are displayed (in particular for non-experts), I still believe the ggbetweenstats() and ggwithinstats() functions are worth mentioning in this
article. I actually now use those two functions almost as often as my previous routines because:
• I do not have to care about the number of groups to compare, the functions automatically choose the appropriate test according to the number of groups (ANOVA for 3 groups or more, and t-test for
2 groups)
• I can select variables based on their column numbering, and not based on their names anymore (which prevents me from writing those variable names manually)
• When comparing 3 or more groups (so for ANOVA, Kruskal-Wallis, repeated measure ANOVA or Friedman), \(p\)-values of the post-hoc tests within each dependent variable are by default the adjusted \
(p\)-values (Holm is the default but many adjustment methods are available)
• It is possible to compare both independent and paired samples, no matter the number of groups (remember that with the {ggpubr} package I could only do paired samples for two groups, not for 3 or more)
• They allow you to easily switch between the parametric and nonparametric versions
• All this in a more concise manner using the {purrr} package
For those of you who are interested, below is my updated R routine, which includes these functions, applied this time to the penguins dataset.
dat <- penguins
str(dat)
## tibble [344 × 8] (S3: tbl_df/tbl/data.frame)
## $ species : Factor w/ 3 levels "Adelie","Chinstrap",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ island : Factor w/ 3 levels "Biscoe","Dream",..: 3 3 3 3 3 3 3 3 3 3 ...
## $ bill_length_mm : num [1:344] 39.1 39.5 40.3 NA 36.7 39.3 38.9 39.2 34.1 42 ...
## $ bill_depth_mm : num [1:344] 18.7 17.4 18 NA 19.3 20.6 17.8 19.6 18.1 20.2 ...
## $ flipper_length_mm: int [1:344] 181 186 195 NA 193 190 181 195 193 190 ...
## $ body_mass_g : int [1:344] 3750 3800 3250 NA 3450 3650 3625 4675 3475 4250 ...
## $ sex : Factor w/ 2 levels "female","male": 2 1 1 NA 1 2 1 2 NA NA ...
## $ year : int [1:344] 2007 2007 2007 2007 2007 2007 2007 2007 2007 2007 ...
We illustrate the routine for two groups with the variable sex (two factors) as the independent variable, and the 4 quantitative continuous variables bill_length_mm, bill_depth_mm, flipper_length_mm and
body_mass_g as dependent variables:
# Comparison between sexes
# edit from here
x <- "sex"
cols <- 3:6 # the 4 continuous dependent variables
type <- "parametric" # given the large number of observations, we use the parametric version
paired <- FALSE # FALSE for independent samples, TRUE for paired samples
# edit until here
# edit at your own risk
plotlist <- purrr::pmap(
.l = list(
data = list(as_tibble(dat)),
x = x,
y = as.list(colnames(dat)[cols]),
plot.type = "box", # for boxplot
type = type, # parametric or nonparametric
pairwise.comparisons = TRUE, # to run post-hoc tests if more than 2 groups
pairwise.display = "significant", # show only significant differences
bf.message = FALSE, # remove message about Bayes Factor
centrality.plotting = FALSE # remove central measure
),
.f = ifelse(paired, # automatically use ggwithinstats if paired samples, ggbetweenstats otherwise
  ggwithinstats,
  ggbetweenstats
),
violin.args = list(width = 0, linewidth = 0) # remove violin plots and keep only boxplots
)
# print all plots together with statistical results
for (i in 1:length(plotlist)) {
  print(plotlist[[i]])
}
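For readers more at home in Python, the purrr::pmap pattern above (mapping one plotting function over several dependent variables while holding the other arguments fixed) has a rough analog in functools.partial; make_plot below is a hypothetical stand-in, not a real plotting API:

```python
from functools import partial

def make_plot(data, x, y, plot_type="box"):
    """Hypothetical stand-in for a plotting call such as ggbetweenstats."""
    return f"{plot_type} plot of {y} by {x}"

cols = ["bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g"]
plot_one = partial(make_plot, data=None, x="sex")  # fix everything except y
plotlist = [plot_one(y=col) for col in cols]

for p in plotlist:
    print(p)  # e.g. "box plot of bill_length_mm by sex"
```

The design choice is the same in both languages: one fully parameterized call, applied once per dependent variable.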
We now illustrate the routine for 3 groups or more with the variable species (three factors) as independent variable, and the 4 same dependent variables:
# Comparison between species
# edit from here
x <- "species"
cols <- 3:6 # the 4 continuous dependent variables
type <- "parametric" # given the large number of observations, we use the parametric version
paired <- FALSE # FALSE for independent samples, TRUE for paired samples
# edit until here
# edit at your own risk
plotlist <- purrr::pmap(
.l = list(
data = list(as_tibble(dat)),
x = x,
y = as.list(colnames(dat)[cols]),
plot.type = "box", # for boxplot
type = type, # parametric or nonparametric
pairwise.comparisons = TRUE, # to run post-hoc tests if more than 2 groups
pairwise.display = "significant", # show only significant differences
bf.message = FALSE, # remove message about Bayes Factor
centrality.plotting = FALSE # remove central measure
),
.f = ifelse(paired, # automatically use ggwithinstats if paired samples, ggbetweenstats otherwise
  ggwithinstats,
  ggbetweenstats
),
violin.args = list(width = 0, linewidth = 0) # remove violin plots and keep only boxplots
)
# print all plots together with statistical results
for (i in 1:length(plotlist)) {
  print(plotlist[[i]])
}
As you can see, I only have to specify:
• the name of the grouping variable (sex and species),
• the number of the dependent variables (variables 3 to 6 in the dataset),
• whether I want to use the parametric or nonparametric version and
• whether samples are independent (paired = FALSE) or paired (paired = TRUE).
Everything else is automated: the outputs show a graphical representation of what we are comparing, together with the details of the statistical analyses in the subtitle of the plot (the \(p\)-value
among others).
Note that the code shown above is actually the same whether I want to compare 2 groups or more than 2 groups. I wrote the same code twice (once for 2 groups and once again for 3 groups) for illustrative
purposes only; the two blocks are identical and should be treated as one for your projects.
I must admit I am quite satisfied with this routine, now that:
• I can automate it on many variables at once and I do not need to write the variable names manually anymore,
• at the same time, I can choose the appropriate test among all the available ones (depending on the number of groups, whether they are paired or not, and whether I want to use the parametric or
nonparametric version).
Nonetheless, I must also admit that I am still not satisfied with the level of detail of the statistical results. As already mentioned, many students get confused and lost in front of so much
information (except for the \(p\)-value and the number of observations, most of the details are rather obscure to them because they are not covered in introductory statistics classes).
I saved time thanks to all the improvements over my previous routine, but I definitely lose time when I have to point out to them what they should look for. After discussing with other
professors, I noticed that they have the same problem.
For the moment, you can only print all results or none. I have opened an issue kindly requesting to add the possibility to display only a summary (with the \(p\)-value and the name of the test for
instance).^5 I will update again this article if the maintainer of the package includes this feature in the future. So stay tuned!
Thanks for reading.
I hope this article will help you to perform t-tests and ANOVA for multiple variables at once and make the results more easily readable and interpretable by non-scientists. Learn more about the
t-test to compare two groups, or the ANOVA to compare 3 groups or more.
As always, if you have a question or a suggestion related to the topic covered in this article, please add it as a comment so other readers can benefit from the discussion.
Holm, Sture. 1979. “A Simple Sequentially Rejective Multiple Test Procedure.” Scandinavian Journal of Statistics, 65–70.
McDonald, J.H. 2014. “Multiple Tests.” Handbook of Biological Statistics, 3rd ed. Baltimore, Maryland: Sparky House Publishing, 233–36.
Patil, Indrajeet. 2021. “Visualizations with statistical details: The ’ggstatsplot’ approach.” Journal of Open Source Software 6 (61): 3167.
1. In theory, an ANOVA can also be used to compare two groups, as it gives the same results as a Student’s t-test, but in practice we use the Student’s t-test to compare two groups and
the ANOVA to compare three groups or more.↩︎
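The footnote's claim can be checked numerically: for exactly two groups, the one-way ANOVA F statistic equals the square of the pooled-variance Student's t statistic. A stdlib Python sketch with made-up data:

```python
import math

def two_group_t_and_F(g1, g2):
    """Equal-variance two-sample t statistic and one-way ANOVA F statistic."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    s1 = sum((v - m1) ** 2 for v in g1) / (n1 - 1)  # sample variances
    s2 = sum((v - m2) ** 2 for v in g2) / (n2 - 1)
    sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    grand = (sum(g1) + sum(g2)) / (n1 + n2)
    ss_between = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2  # df = 1
    ss_within = (n1 - 1) * s1 + (n2 - 1) * s2                     # df = n1 + n2 - 2
    F = ss_between / (ss_within / (n1 + n2 - 2))
    return t, F

t, F = two_group_t_and_F([5.0, 5.4, 4.8, 5.1], [6.0, 6.3, 5.9, 6.2])
assert abs(F - t ** 2) < 1e-9  # F equals t squared for two groups
```

This identity is why the two tests give identical p-values in the two-group case.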
2. Do not forget to separate the variables you want to test with |.↩︎
3. Do not forget to adjust the \(p\)-values or the significance level \(\alpha\). If you use the Bonferroni correction, the adjusted \(\alpha\) is simply the desired \(\alpha\) level divided by the
number of comparisons.↩︎
4. “Post-hoc test” is simply the name used to refer to a specific type of statistical test. Post-hoc tests include, among others, the Tukey HSD test, the Bonferroni correction and Dunnett’s test. Even if
an ANOVA or a Kruskal-Wallis test can determine whether there is at least one group that is different from the others, it does not allow us to conclude which ones differ from each other. For
this purpose, there are post-hoc tests that compare all groups two by two to determine which ones are different, after adjusting for multiple comparisons. Concretely, post-hoc tests are performed
on each possible pair of groups after an ANOVA or a Kruskal-Wallis test has shown that there is at least one group which is different (hence “post” in the name of this type of test). The null and
alternative hypotheses and the interpretations of these tests are similar to a Student’s t-test for two samples.↩︎
5. I am open to contribute to the package if I can help!↩︎
| {"url":"https://statsandr.com/blog/how-to-do-a-t-test-or-anova-for-many-variables-at-once-in-r-and-communicate-the-results-in-a-better-way/","timestamp":"2024-11-06T09:12:40Z","content_type":"text/html","content_length":"68459","record_id":"<urn:uuid:6d5d2dde-de8f-4a55-bc2c-4dac04e20d89>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00885.warc.gz"}
[Question] Sampling Images from a normal distribution for VAE
Hello there,
I am currently working on a VAE using
tensorflow-probability. I would like to later train it on celeb_a,
but right now I am using mnist to test everything.
My model looks like this, inspired by this example:

```
prior = tfd.Independent(tfd.Normal(loc=tf.zeros(encoded_size), scale=1),
                        reinterpreted_batch_ndims=1)  # second argument truncated in the original post; taken from the TFP VAE example

inputs = tfk.Input(shape=input_shape)
x = tfkl.Lambda(lambda x: tf.cast(x, tf.float32) - 0.5)(inputs)
x = tfkl.Conv2D(base_depth, 5, strides=1, padding='same', activation=tf.nn.leaky_relu)(x)
x = tfkl.Conv2D(base_depth, 5, strides=2, padding='same', activation=tf.nn.leaky_relu)(x)
x = tfkl.Conv2D(2 * base_depth, 5, strides=1, padding='same', activation=tf.nn.leaky_relu)(x)
x = tfkl.Conv2D(2 * base_depth, 5, strides=2, padding='same', activation=tf.nn.leaky_relu)(x)
x = tfkl.Conv2D(4 * encoded_size, 7, strides=1, padding='valid', activation=tf.nn.leaky_relu)(x)
x = tfkl.Flatten()(x)
x = tfkl.Dense(tfpl.IndependentNormal.params_size(encoded_size))(x)
x = tfpl.IndependentNormal(encoded_size)(x)  # this line is truncated in the original post; presumably the IndependentNormal distribution layer

encoder = tfk.Model(inputs, x, name='encoder')

inputs = tfk.Input(shape=(encoded_size,))
x = tfkl.Reshape([1, 1, encoded_size])(inputs)
x = tfkl.Conv2DTranspose(2 * base_depth, 7, strides=1, padding='valid', activation=tf.nn.leaky_relu)(x)
x = tfkl.Conv2DTranspose(2 * base_depth, 5, strides=1, padding='same', activation=tf.nn.leaky_relu)(x)
x = tfkl.Conv2DTranspose(2 * base_depth, 5, strides=2, padding='same', activation=tf.nn.leaky_relu)(x)
x = tfkl.Conv2DTranspose(base_depth, 5, strides=1, padding='same', activation=tf.nn.leaky_relu)(x)
x = tfkl.Conv2DTranspose(base_depth, 5, strides=2, padding='same', activation=tf.nn.leaky_relu)(x)
x = tfkl.Conv2DTranspose(base_depth, 5, strides=1, padding='same', activation=tf.nn.leaky_relu)(x)
mu = tfkl.Conv2D(filters=1, kernel_size=5, strides=1, padding='same', activation=None)(x)
mu = tfkl.Flatten()(mu)
sigma = tfkl.Conv2D(filters=1, kernel_size=5, strides=1, padding='same', activation=None)(x)
sigma = tf.exp(sigma)
sigma = tfkl.Flatten()(sigma)
x = tf.concat((mu, sigma), axis=1)
x = tfkl.LeakyReLU()(x)
x = tfpl.IndependentNormal(input_shape)(x)  # this line is truncated in the original post; presumably an IndependentNormal head over the pixels

decoder = tfk.Model(inputs, x)
decoder.summary()

negloglik = lambda x, rv_x: -rv_x.log_prob(x)

# mnist_digits are normed between 0.0 and 1.0
history = vae.fit(mnist_digits, mnist_digits, epochs=100, batch_size=300)
```
My problem here is that the loss function stops decreasing at around ~470 and the images sampled from the returned distribution look like random noise. When using a Bernoulli distribution instead of the normal distribution in the decoder, the loss steadily decreases and the sampled images look like they should. I can’t use a Bernoulli distribution for RGB though, which I have to when I want to train the model on celeb_a. I also can’t just use a deterministic decoder, as I want to later decompose the ELBO (loss term – KL divergence) as seen in this.
Can someone explain to me why the normal distribution just “doesn’t work”? How can I improve it so that it actually learns a distribution that I can sample?
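One way to build intuition for why a Normal decoder with an unconstrained scale (sigma = exp(raw)) can stall while a Bernoulli decoder trains fine is to look at the Gaussian negative log-likelihood directly. This stdlib sketch uses purely illustrative numbers and is not a diagnosis of the model above:

```python
import math

def normal_nll(x, mu, sigma):
    """Per-pixel negative log-likelihood under N(mu, sigma)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

x = 0.9  # a bright pixel

# If the free scale drifts large, the NLL barely distinguishes a good mean
# from a bad one, so the reconstruction gradient on mu is weak:
loose = normal_nll(x, mu=0.1, sigma=5.0) - normal_nll(x, mu=0.9, sigma=5.0)
print(round(loose, 4))  # 0.0128 -- almost no penalty for a wrong mean

# With the scale near the pixel range, the same error is penalized heavily:
tight = normal_nll(x, mu=0.1, sigma=0.1) - normal_nll(x, mu=0.9, sigma=0.1)
print(round(tight, 4))  # 32.0
```

In other words, when sigma is large, errors in the mean barely affect the loss, so the means train slowly; bounding or softplus-parameterizing the scale is a commonly suggested remedy (a general observation, not something verified on the poster's model).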
submitted by /u/tadachs
| {"url":"https://databloom.com/2020/12/13/question-sampling-images-from-a-normal-distribution-forvae/","timestamp":"2024-11-11T13:40:03Z","content_type":"text/html","content_length":"55433","record_id":"<urn:uuid:dfa7174f-14c4-4ca2-b3af-6bb78bcafcf2>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00519.warc.gz"}
Symmetry chart chemistry
The Department of Chemistry offers undergraduate major and minor programs in chemistry along with graduate training (M.S. and Ph.D.) in a modern facility with
Manuscript Submission Overview; Manuscript Preparation; Preparing Figures, Schemes and Tables; Supplementary Materials, Data Deposit and Software 9 Mar 2005 The symmetries of the normal modes can be
classified by group theory. with one of these from your Organic Chemistry course - infrared spectroscopy. The character tables for the three point groups are shown below. It has no symmetry and is
the familiar situation from organic chemistry. L-(+)-Lactic acid (C3H6O3) is a chiral molecule; it is organic and small. Point Group Symmetry. Point group symmetry is an
important property of molecules widely used in some branches of chemistry: spectroscopy, quantum chemistry and crystallography. An individual point group is represented by a set of symmetry
operations: E - the identity operation, C n - rotation by a 2π/n angle, S n - improper rotation. The character table takes the point group and represents all of the symmetry that the molecule has.
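The statement that a point group is a set of symmetry operations can be made concrete in code: each operation is a coordinate transformation that maps the molecule onto itself. A stdlib Python sketch for a C2 rotation (180° about the z axis) applied to schematic, illustrative coordinates of a bent AB2 molecule such as water:

```python
def c2_z(point):
    """C2 rotation about the z axis: (x, y, z) -> (-x, -y, z)."""
    x, y, z = point
    return (-x, -y, z)

# Schematic coordinates (not experimental): O on the C2 axis,
# the two H atoms placed symmetrically in the xz plane.
atoms = {
    "O":  (0.0, 0.0, 0.0),
    "H1": (0.76, 0.0, -0.59),
    "H2": (-0.76, 0.0, -0.59),
}

rotated = {name: c2_z(p) for name, p in atoms.items()}

# The operation maps the molecule onto itself: O is fixed, H1 and H2 swap.
assert rotated["O"] == atoms["O"]
assert rotated["H1"] == atoms["H2"]
assert rotated["H2"] == atoms["H1"]
```

The identity E, rotations Cn, and improper rotations Sn can all be written as such matrix-like actions on coordinates, which is what the character tables summarize.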
Symbols under the first column of the character tables: anti-symmetric with respect to inversion. Symbols in the first row of the character tables. The order is the number in front of the
classes.
Nonaxial groups, C1 · Cs · Ci, -, -, -, -. Cn groups, C2 · C3 · C4 · C5 · C6 · C7 · C8. Dn groups, D2 · D3 · D4 · D5 · D6 · D7 · D8. Cnv groups, C2v · C3v · C4v · C5v
2 Oct 2015 Representations, Character
Tables, and. One Application of Symmetry Each symmetry operation can be represented by a 3×3 matrix that To dig deeper, check out: Cotton, F. A. Chemical Applications of Group Theory. Use this chart
to assign molecules to point groups and view Interactive 3D models for advanced school chemistry and undergraduate chemistry education. Here they are, finally I found them in some random presentation
for a lecture:. 21 Nov 2001 In brief: The top row and first column consist of the symmetry operations and irreducible representations respectively. The table elements are the
One example of symmetry in chemistry that you will already have come across is found in the isomeric pairs of molecules called enantiomers. Enantiomers are non-superimposable mirror images of each
other, and one consequence of this symmetrical relationship is that they rotate the plane of polarized light passing through them in opposite directions. Chemistry 401 Intermediate Inorganic
Chemistry University of Rhode Island Practice Problems Symmetry & Point Groups. 1. Determine the symmetry elements and assign the point group of (a) NH 2 Cl, (b) CO 3 2–, (c) SiF 4, (d) HCN, (e)
SiFClBrI, (f) BF 4 –. 2. Determine the point group of SnF 4, SeF 4, and BrF 4 –. 3. d3h - Point Group Symmetry Character Tables Center of symmetry (i) It is a symmetry operation through which the
inversion leaves the molecule unchanged. For example, a sphere or a cube has a centre of inversion. Similarly, molecules like benzene, ethane and SF6 have a center of symmetry while water and ammonia
molecules do not have any center of symmetry. Determination of the symmetry point group of a molecule is the very first step when we are solving chemistry problems. The symmetry point group of a molecule
can be determined by the following flow chart 7. Table 2.12 Flow chart to determine point group. Now, using this flow chart, we can determine the symmetry of molecules.
of symmetry (at S atom) Point group . O. h These molecules can be identified without going through the usual steps. Note: many of the more symmetrical molecules possess many more symmetry operations
than are needed to assign the point group.
Symmetry in Organic Chemistry. The symmetry of a molecule is determined
by the existence of symmetry operations performed with respect to symmetry elements . A symmetry element is a line, a plane or a point in or through an object, about which a rotation or reflection
leaves the object in an orientation indistinguishable from the original.
Center of symmetry (i) It is a symmetry operation through which the inversion leaves the molecule unchanged. For example, a sphere or a cube has a centre of inversion. Similarly molecules like
benzene, ethane and SF6have a center of symmetry while water and ammonia molecule do not have any center of symmetry.
Use this chart to assign molecules to point groups and view Interactive 3D models for advanced school chemistry and undergraduate chemistry education.
In group theory, the elements considered are symmetry operations. For a given molecular as labels for the eigenfunctions (see Lecture Physical Chemistry III). This set of operations Character tables
exist for all groups. Many groups have a Symmetry is analyzed at the atomic level (periodic system, chemical bonding, A.T. Balaban (Ed.), Chemical Applications of Graph Theory, Academic Press,
determine the symmetry properties (in terms of irreducible representations) . oup.com/uk/orc/bin/9780198700722/01student/tables/tables_for_group_theory. pdf Cotton, F. A. „Chemical Applications of
Group Theory“, Wiley 2nd edition 1971. step--pyramid form of the table and with a brief commentary on the abuse of symmetry considerations in the construction and interpretation of periodic tables
in 1 Jun 2016 J. Chem. Phys. 144, 211101 (2016); https://doi.org/10.1063/1.4953040 the use of broken-symmetry Unrestricted Density Functional Theory (UDFT) the hexaradical
hexamethylene-peri-hexabenzocoronene (2, in Chart 1). | {"url":"https://tradingkzqmoei.netlify.app/bogue36284ja/symmetry-chart-chemistry-gow","timestamp":"2024-11-09T10:20:49Z","content_type":"text/html","content_length":"31731","record_id":"<urn:uuid:06a64b30-c2cb-4406-bced-079edc8bddfd>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00143.warc.gz"} |
How do we find out the angle of a geometry (Rectangle) on a map.
08-23-2017 09:45 AM
Trying to find out if it is possible to determine the angle of a geometry (a Rectangle geometry in my case).
Here are some thoughts about irreducible complexity and society. There's quite a bit more I could say about this, and some day I will. This is just a very brief introduction to what I've been thinking
about for several years, and some tentative conclusions about it.
Friedrich Hayek called the hubristic ideas of social scientists, that they could explain and plan the details of society (including economic production), their "fatal conceit." He informally analyzed
the division of knowledge
to explain why the wide variety of businesses in our economy cannot be centrally planned. "The peculiar character of the problem of a rational economic order is determined precisely by the fact that
the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge
which all the separate individuals possess." The economic problem is "a problem of the utilization of knowledge which is not given to anyone in its totality." Austrian economists like Hayek usually
eschewed the use of traditional mathematics to describe the economy because such use assumes that economic complexities can be reduced to a small number of axioms.
Friedrich Hayek, the Austrian economist and philosopher who discussed the use of knowledge in society.
Modern mathematics, however -- in particular algorithmic information theory -- clarifies the limits of mathematical reasoning, including models with infinite numbers of axioms. The mathematics of
irreducible complexity can be used to formalize the Austrians' insights. Here is an
introduction to algorithmic information theory
, and further thoughts on
measuring complexity.
Sometimes information comes in simple forms. The number 1, for example, is a simple piece of data. The number pi, although it has an infinite number of digits, is similarly simple, because it can be
generated by a short finite algorithm (or computer program). That algorithm fully describes pi. However, a large random number has an irreducible complexity. Gregory Chaitin discovered a number,
Chaitin's omega
, which although it has a simple and clear definition (it's just a sum of probabilities that a random computer program will halt) has an irreducibly infinite complexity. Chaitin proved that there is
no way to completely describe omega in a finite manner. Chaitin has thus shown that there is no way to reduce mathematics to a finite set of axioms. Any mathematical system based on a finite set of
axioms (e.g. the system of simple algebra and calculus commonly used by non-Austrian economists) overly simplifies mathematical reality, much less social reality.
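The contrast between pi's simplicity and irreducible randomness can be made concrete. Here is a minimal Python sketch (the spigot algorithm is Gibbons'; using zlib compression as a stand-in for descriptive complexity is my own illustration, since true Kolmogorov complexity is uncomputable):

```python
import itertools
import random
import zlib

def pi_digit_stream():
    # Unbounded spigot algorithm for the decimal digits of pi
    # (after Jeremy Gibbons): a handful of lines generate arbitrarily
    # many digits, which is what makes pi algorithmically simple
    # despite its infinite decimal expansion.
    q, r, t, j = 1, 180, 60, 2
    while True:
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u, j + 1)

pi_str = "".join(str(d) for d in itertools.islice(pi_digit_stream(), 2000))
assert pi_str.startswith("3141592653589793")

# Compressed length is a computable *upper bound* on descriptive
# complexity. A random digit string is incompressible; pi's digits
# also look random to a general-purpose compressor, even though a
# short generating program exists -- finding the shortest description
# is exactly what is uncomputable in general.
rand_str = "".join(random.choice("0123456789") for _ in range(2000))
print(len(zlib.compress(pi_str.encode())), len(zlib.compress(rand_str.encode())))
```

Note that the compressor cannot exploit pi's short program: both strings compress about equally badly, which illustrates why upper bounds from compression can wildly overstate true algorithmic complexity.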
Furthermore, we know that the physical world contains vast amounts of irreducible complexity. The quantum mechanics of chemistry, the Brownian motions of the atmosphere, and so on create vast amounts
of uncertainty and locally unique conditions. Medicine, for example, is filled with locally unique conditions often known only very incompletely by one or a few hyperspecialized physicians.
Ray Solomonoff and Gregory Chaitin, pioneers of algorithmic information theory.
The strategic nature of the social world means that it will contain irreducible complexity even if the physical world of production and the physical needs of consumption were simple. We can make life
open-endedly complicated for each other by playing
penny matching games.
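A toy simulation makes the point (the predictor and the 70/30 bias are my own illustrative assumptions): in matching pennies, any statistical pattern in one player's choices can be learned and exploited, so the only unexploitable play is irreducibly random.

```python
import random

def predict(history):
    # A minimal learner: guess the opponent's most common past move.
    return max((0, 1), key=history.count) if history else 0

def mismatcher_win_rate(matcher_strategy, rounds=2000):
    # The matcher wins a round when both moves agree; the mismatcher
    # wins when they differ, so the mismatcher plays the opposite of
    # its prediction of the matcher's next move.
    history, wins = [], 0
    for i in range(rounds):
        mismatcher_move = 1 - predict(history)
        matcher_move = matcher_strategy(i, history)
        wins += matcher_move != mismatcher_move
        history.append(matcher_move)
    return wins / rounds

biased = lambda i, h: 1 if random.random() < 0.7 else 0  # leans toward heads
uniform = lambda i, h: random.randint(0, 1)              # a fair coin

# The biased matcher loses roughly 70% of rounds even to this crude
# learner; the uniformly random matcher holds it to about 50%.
print(mismatcher_win_rate(biased), mismatcher_win_rate(uniform))
```

Against smarter predictors the same logic holds, which is why equilibrium play in such games is random and why strategic interaction keeps injecting fresh, incompressible complexity into social life.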
Furthermore, shared information might be false or deceptively incomplete.
Even if we were perfectly honest and altruistic with each other, we would still face economies of knowledge. A world of more diverse knowledge is far more valuable to us than a world where we all had
the same skills and beliefs. This is the most important source of the irreducible complexity of knowledge: the wealthier we are, the greater the irreducibly complex amount of knowledge (i.e.
diversity of knowledge) society has about the world and about itself. This entails more diversity of knowledge in different minds, and thus the greater difficulty of coordinating economic behavior.
The vastness of the useful knowledge in the world is far greater than our ability to store, organize, and communicate that knowledge. One limitation is simply how much our brains can hold. There is
far more irreducible and important complexity in the world than can be held in a single brain. For this reason, at least some of this complexity is impossible to share between human minds. The channel capacity of human language and visual comprehension is further limited. This often makes it impossible to share irreducibly complex knowledge between human minds even if the mind could in theory store and
comprehend that knowledge. The main barrier here is the inability to articulate tacit knowledge, rather than limitations of technology. However, the strategic and physical limits to reducing
knowledge are of such vast proportions that most knowledge could not be fully shared even with ideal information technology. Indeed, economies of knowledge suggest that the proportion of knowledge
would be even less widely shared in a very wealthy world of physically optimal computer and network technology than it is today -- although the absolute amount of knowledge shared would be far
greater, the sum total of knowledge would be far greater still, and thus the proportion optimally shared would be smaller.
The limitations on the distribution of knowledge, combined with the inexhaustible sources of irreducible complexity, mean that the wealthier we get, the greater the unique knowledge stored in each
mind and shared with few others, and the smaller fraction of knowledge is available to any one mind. There are a far greater variety of knowledge "pigeons" which must be stuffed into the same brain
and thus less room for "cloning pigeons" through mass media, mass education, and the like. Wealthier societies take greater advantage of
long tails
(i.e., they satisfy a greater variety of preferences) and thus become even less plannable than poorer societies that are still focused on simpler areas such as agriculture and Newtonian industry.
More advanced societies increasingly focus on areas such as interpersonal behaviors (sales, law, etc.) and medicine (the complexity of the genetic code is just the tip of the iceberg; the real
challenge is the irreducibly complex quantum effects of biochemistry,
for example the protein folding problem). Both interpersonal behaviors and medicine are areas where our demand is insatiable and supply is based on the internalization of vast irreducible complexity. This is not to say that further
breakthroughs in simplifying the world from which we are supplied, such as those of Newtonian physics and the industrial revolution, are not possible; but to achieve them we will have to search
through vastly larger haystacks. Furthermore, once these breakthroughs are made supply will become cheap and demand quickly satiated; then we will be back to trying to satisfy our higher-order and
inexhaustible preferences using a supply of largely irreducible complexity.
We have seen that U.S. Rep. David Crockett, later a hero of the Alamo, argued passionately against the propriety and constitutionality of federal charity. Crockett told his fellow Representatives
that taxpayer money was "not yours to give."
During the first several decades after the U.S. Constitution was enacted, constitutional issues were debated and decided far more often in Congress than in the Supreme Court. Crockett was part of a
long line of early and eminent Congressional constitutionalists who argued against the constitutionality of federal charity. Another was James Madison, one of the main drafters of the Constitution as
well as one of its main proponents in the Federalist Papers. Opposing moderate Federalists and Republicans were radical Federalists who argued, following Alexander Hamilton, for practically
unlimited federal powers. According to Crockett, Madison, and many others, Congress had no power under any clause of the Constitution to allocate money for charity, except when such charity was
necessary and proper (using a fairly narrow construction of that phrase) for implementing an enumerated Congressional power such as funding and governing the armed forces, paying federal debts,
executing the laws, or implementing treaties.
David P. Currie's book The Constitution in Congress chronicles these early constitutional debates and is a must read for constitutional scholars. It contains debates by many House members (including
James Madison, Elbridge Gerry, and other founders and drafters) on the constitutionality of many different kinds of legislation. Currie cites the Annals of the early Congress which are now online.
Currie discusses the following two debates on federal charity:
(1) In 1793, a number of French citizens were driven out of the French Colony of Hispaniola (now Haiti), landed in Baltimore, and petitioned Congress for financial assistance. Rep. John Nicholas
expressed doubt that Congress had the constitutional authority "to bestow the money of their constituents on an act of charity."[1] Rep. Abraham Clark responded that "in a case of this kind, we are
not to be tied up by the Constitution." [2]. Rep. Elias Boudinot, another radical Federalist, argued that the general welfare clause authorized this kind of spending.[3] Rep. James Madison resolved
the debate by observing that the U.S. owed France money from the Revolutionary War. Madison disagreed with the radical Federalist interpretation of the general welfare clause and argued against
setting a dangerous precedent for open-ended spending. Congress could, however, provide money to the French refugees in partial payment of these debts, and thus constitutionally under Congress'
Article I powers to pay federal debts.[4]
(2) The second opportunity to debate the constitutionality of federal charity occurred when a fire destroyed much of the previously thriving city of Savannah, Georgia in 1796. The fire was referred
to as a "national calamity." Rep. John Milledge related how "[n]ot a public building, not a place of public worship, or of public justice" was left standing. Rep. William Claiborne again raised the
radical Federalist argument that the measure to help fund the rebuilding of Savannah was constitutional under the general welfare clause.[5] Reps. Nathaniel Macon[6], William Giles[7], Nicholas[8],
and others argued that allocating federal funds to relieve Savannah would violate the Constitution, since such charity was not authorized by any enumerated power in the Constitution. "Insurance
offices," not the federal government, were "the proper securities against fire," according to Macon.[8] Rep. Andrew Moore argued "every individual citizen could, if he pleased, show his individual
humanity by subscribing to their relief, but it was not Constitutional for them to afford relief from the Treasury."[9] The measure to provide relief to Savannah from federal tax dollars was then
defeated 55-24 [10].
[1] David P. Currie, The Constitution in Congress, citing 4 Annals at 170, 172.
[2] Id. citing 4 Annals at 350.
[3] Id. citing 4 Annals at 172.
[4] Id. citing 4 Annals 170-71.
[5] Id. pg. 222 citing 6 Annals 1717.
[6] Id. citing 6 Annals 1724.
[7] Id. citing 6 Annals 1723.
[8] Annals 1718 (online)
[9] Id.
[10] Currie citing 6 Annals 1727.
Here is a direct link to the Annals of Congress. Here is the start of the Savannah debate and here are some of Rep. Macon's comments during the debate.
(n.b. Currie's citations do not necessarily match the page numbers in the online version of the Annals).
Images: Rep. James Madison (top), Rep. Nathaniel Macon (bottom).
Davy Crockett served in the U.S. House of Representatives from 1827-31 and 1833-35. He later fought for the Texas Revolution and died at the Alamo. While in the House, Edward Ellis recalled him as
making this speech on the propriety and constitutionality of Congress acting like a charity with the taxpayer's money.
Gregory Clark (via Marginal Revolution) has a new and more comprehensive data set on real wages in England from 1209 to the present. Up to about 1600, it is consistent with the Malthusian theory
that real wages varied inversely with population. But then from at least 1630, there is a remarkable and unprecedented departure from the Malthusian curve formed by the ratio of real wages to
population. Real wages rose over 50% from 1630 to 1690 despite rising population. There is then a stable period from 1730 to 1800 with a new curve parallel to but offset from the original Malthusian
curve, and then a second startling departure from 1800 to today reflecting the end of this last Malthusian epoch (ironically just as Malthus was writing).
This data contradicts the idea that nothing remarkable happened to the economy before the industrial revolution got going in the late 18th century. It also contradicts the theory that a qualitative
shift occurred due to the Glorious Revolution of 1689 in which Parliament gained more power, some Dutch financial practices were introduced, and soon thereafter the Bank of England was founded.
Rather, the theory that comes most readily to mind to explain an economic revolution in 17th century England is the rather un-PC theory of Max Weber. I'll get back to that, but first Clark debunks
the theory of Becker et. al. regarding family investment. According to this theory, parents choose between the strategy of having a large number of children and having a small number of children in
whom they invest heavily, teaching them skills. This is basically the spectrum between "R strategy" and "K strategy" along which all animals lie, except that with humans there is a strong cultural
component in this choice (or at least Becker et. al. claim or assume that family size has always been a cultural choice for humans -- see my comments on this below).
According to this family investment theory, until quite recently (perhaps until the industrial revolution) having more children was the better bet due to lack of reward for skill, and overall
underdevelopment of the economy limited the reward for skill, so the world was caught in a Malthusian trap of unskilled labor. However at this recent point in history rewards for skilled labor went
up, making it worthwhile for parents to invest in skills (e.g. literacy). Clark's data contradicts this theory: his data show that the ratio of wages for skilled to unskilled laborers did not rise
either in the 17th century revolution or during the industrial revolution, and actually were in substantial decline by 1900. Indeed, a decline in demand for skilled labor is what Adam Smith predicted
would happen with increasing specialization in manufacturing. Thus, there was no increase in reward for skill investment which would have pulled us out of the Malthusian trap. Thus, Clark also
rejects the family investment theory.
I think, however, that part of the family investment theory can be rescued. Clark's and other data on literacy demonstrate a substantial rise in literacy just prior to and at the initial stages of
the qualitative change in productivity. Literacy in England doubled between 1580 and 1660. Parents were in fact making substantial investments in literacy despite an apparent lack of incentive to do
so. Why?
My own tentative theory to explain Clark's data combines Becker, Weber, and the observations of many scholars about the cultural importance of printing. Printing was invented in the middle of the 15th century. Books were cheap by the end of that century. Thereafter they just got cheaper. At first books printed en masse what scribes had long considered to be the classics. Eventually, however,
books came to contain a wide variety of useful information important to various trades. For example, legal cases became much more thoroughly recorded and far more easily accessible, facilitating
development of the common law. Similar revolutions occurred in medicine and a wide variety of trades, and undoubtedly eventually occurred in the building trades that were the source of Clark's data.
Printing played a crucial role in the Reformation which saw the schisms from the Roman Church and the birth in particular of Calvinism. The crucial thing to observe is that, while per Clark the gains
from investment in skills did not increase relative to unskilled labor, with the availability of cheap books and with the proper content the costs of investing in the learning did radically decrease
for many skills. Apprenticeships that used to take seven years could be compressed into a few years reading from books (much cheaper than bothering the master for those years) combined with a short
period learning on the job. This wouldn't have been a straightforward process as it required not just cheap books with specialized content about the trades, but some redesigning of the work itself
and up-front investment by parents in their children's literacy. Thus, it would have required major cultural changes. That is why, while under my theory cheap books were the catalyst that drove
mankind out of the Malthusian trap, many institutional innovations, which took over a century to evolve, had to be made to take advantage of those books to fundamentally change the economy.
Probably the biggest change required is that literacy entails a very large up-front investment. In the 17th century that investment would have been undertaken primarily by the family. Such an
investment requires delayed gratification -- the trait Weber considered crucial to the rise of capitalism and derived from Calvinism. However, Calvinist delayed gratification under my revised theory
didn't cause capitalism via an increased savings rate, as Weber et. al. postulated, but rather caused parents to undertake a costly practice of investing in their children's literacy. Once that
investment was made, the children could take advantage of books to learn skills with unprecedented ease and to skill levels not previously possible. So the overall investment in skills did not
increase, but instead the focus of that investment shifted from long apprenticeships of young adults to the literacy of children. At the same time, the productivity of that investment greatly
increased, and the result was overall higher productivity.
Investment in literacy would have both enabled and been motivated by the famous Protestant belief that people should read the Bible for themselves rather than depending on a priest to read it for
them. This process would have started in the late 15th century among an elite of merchants and nobles, giving rise to the Reformation, but might not have propagated amongst the tradesmen Clark tracks
until the 17th century. It is with the spread of Huguenot, Puritan, Presbyterian, etc. literacy culture to tradesmen that we see the 17th century revolution in real wages and the first major move
away from the Malthusian curve.
This theory that the Malthusian trap was evaded by a sharp increase in the productivity of skill investment explains why population growth did not fall in the 17th century as Becker et. al. would
predict. Cheap books substantially lowered the cost of skill investment, so the productivity gains could come without increasing the overall investment in skills and thus without lowering family size.
The family size/skill investment tradeoff is more likely to explain the second and sharper departure from Malthusian curve starting around 1800. However again I think Becker is wrong insofar as this
was not due to an increase in returns on skill (Clark's data debunks this), but due to (1) technology improving faster than humans can have children, and (2) the rise of birth control (without which
there is little control by choice or culture over family size -- no cultural family size choice theory really works before the widespread use of birth control).
The catalyst in moving away from the Malthusian curve was thus not, per Becker et. al., an increase in the returns on investments in skills, but rather a decrease in the costs of such investments
once cheap books teaching specialized trades were available and the initial hill of literacy was climbed by Calvinist families. If the Calvinist literacy-investment theory ("The Roundhead
Revolution") is true, we should see a similar departure from the Malthusian curve at the same time or perhaps even somewhat earlier in the Netherlands, and also in Scotland, but probably not in
Catholic countries of that period.
I saw a great constitutional law moot court at GWU today, with Chief Justice John Roberts presiding over a panel that also included 2nd Circuit Judges Guido Calabresi and Sonia Sotomayor. The Chief
Justice managed to flummox both the petitioners and the respondents with his hypotheticals (factual scenarios that posed problems for their legal theories). More here.
Here is a modified excerpt from A Measure of Sacrifice:
Mechanical clocks, bell towers, and sandglasses, a combination invented in 13th century Italy, provided the world’s first fair and fungible measure of sacrifice. So many of the things we sacrifice
for are not fungible, but we can arrange our affairs around the measurement of the sacrifice rather than its results. Merchants and workers alike used the new precision of clock time to prove, brag,
and complain about their sacrifices.
In a letter from a fourteenth-century Italian merchant to his wife, Francesco di Marco Datini invokes the new hours to tell her of the sacrifices he is making: “tonight, in the twenty-third hour, I was
called to the college,” and, “I don’t have any time, it is the twenty-first hour and I have had nothing to eat or drink.” [4] Like many cell phone callers today, he wants to reassure her that he is
spending the evening working, not wenching.
A major application of clocks was to schedule meeting times. Being a city official was an expected sacrifice as well as a source of political power. To measure the sacrifice, as well as to coordinate meeting times more tightly, the modern clocks of the fourteenth century came in handy. Some regulations of civic meetings of this period point up that measuring the sacrifice was important, regardless of the variable output of the meetings. In Nuremberg, the Commission of Five "had to observe the sworn minimum meeting time of four 'or' (hours) per day, regardless of whether or not they had a corresponding workload. They were also obliged to supervise their own compliance by means of a sandglass."[4]
As commerce grew, more quantities needed their value to be measured, leading to more complications and more opportunities for fraud. Measurement disputes become too frequent when measuring too many quantities.
Measuring something that actually indicates value is difficult. Measuring something that indicates value and is immune to spoofing is very difficult. Labor markets did not come easily but are the result of a long evolution in how we measure value.
Most workers in the modern economy earn money based on a time rate -- the hour, the day, the week, or the month. In agricultural societies slavery, serfdom, and piece rates were more common than
time-rate wages. Time measures input rather than output. Our most common economic relationship, employment, arranges our affairs around the measurement of the sacrifice rather than its results.
To create anything of value requires some sacrifice. To successfully contract we must measure value. Since we can’t, absent a perfect exchange market, directly measure the economic value of
something, we may be able to estimate it indirectly by measuring something else. This something else anchors the performance – it gives the performer an incentive to optimize the measured value.
Which measures are the most appropriate anchors of performance? Starting in Europe by the 13th century, that measure was increasingly a measure of the sacrifice needed to create the desired economic value.
Actual opportunity costs are very hard to measure, but at least for labor we have a good proxy measure[14] of the opportunities lost by working -- time. This is why paying somebody per hour (or per
month while noticing how often a worker is around the office) is so very common. It's far cheaper to measure time, thus estimating the worker's opportunity costs, than to measure the actual value of output.
Time as a proxy measure for worker value is hardly automatic – labor is not value. A bad artist can spend years doodling, or a worker can dig a hole where nobody wants a hole. Arbitrary amounts of
time could be spent on activities that do not have value for anybody except, perhaps, the worker himself. To improve the productivity of the time rate contract required two breakthroughs: the first,
creating the conditions under which sacrifice is a better estimate of value than piece rate or other measurement alternatives, and second, the ability to measure, with accuracy and integrity, the sacrifice itself.
Three of the main alternatives to time-rate wages are eliminating worker choice (i.e., serfdom and slavery), commodity market exchange, and piece rates. When eliminating choice, masters and lords
imposed high exit costs, often in the form of severe punishments for escape, shirking, or embezzlement. Serfs were usually required to produce a particular quantity of a good (where the good can be
measured, as it often can in agriculture) to be expropriated by the lord or master. Serfs kept for their personal use (not for legal trade) either a percentage or the marginal output, i.e. the output
above and beyond what they owed, by custom or coercion, to their lord.
Where quantity was not a good measure of value, close observation and control were kept over the laborer, and the main motivator was harsh punishments for failure. High exit costs also provided the
lord with a longer-term relationship, thus over time the serf or slave might develop a strong reputation for trustworthiness with the lord. The undesirability of servitude, from the point of view of
the laborer at least, is obvious. Serfs and slaves faced brutal work conditions, floggings, starvation, very short life spans, and the inability to escape no matter how bad conditions got.
Piece rates measure directly some attribute of a good or service that is important to its value – its quantity, weight, volume, or the like -- and then fix a price for it. Guild regulations which
fixed prices often amounted to creating piece rates. Piece rates seem the ideal alternative for liberating workers, but they suffer for two reasons. First, the outputs of labor depend not only on
effort, skills, etc. (things under the control of the employee), but also on things outside the employee's control. The employee wants something like insurance against these vagaries of the work environment. The
employer, who has more wealth and knowledge of market conditions, takes on these risks in exchange for profit.
In an unregulated commodity market, buyers can reject or negotiate downwards the price of poor quality goods. Sellers can negotiate upwards or decline to sell. With piece rate contracts, on the other
hand, there is a fixed payment for a unit of output. Thus the second main drawback to piece rates is that they motivate the worker to put out more quantity at the expense of quality. This can be
devastating. The tendency of communist countries to pay piece rates, rather than hourly rates, is one reason that, while the Soviet bloc’s quantity (and thus the most straightforward measurements of
economic growth) was able to keep up with the West, quality did not (thus the contrast, for example, between the notoriously ugly and unreliable Trabant of East Germany and the BMWs, Mercedes, Audi
and Volkswagens of West Germany).
Thus with the time-rate wage the employee is insured against vagaries of production beyond his control, including selling price fluctuations (in the case of a market exchange), or variation in the
price or availability of factors of production (in the case of both market exchange or piece rates). The employer takes on these risks, while at the same time through promotion, raises, demotions,
wage cuts or firing retaining incentives for quality employee output.
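The divergent incentives of the two contract forms can be sketched with a toy model (the numbers and functional forms here are my own arbitrary illustration, not drawn from the cited sources; only the direction of the incentives matters):

```python
def output(quality_share):
    # The worker splits a fixed effort budget between quantity and quality.
    return 1.0 - quality_share, quality_share

def piece_rate_pay(quality_share, rate=10.0):
    quantity, _ = output(quality_share)
    return rate * quantity  # pays per unit shipped, blind to quality

def time_wage_pay(quality_share, wage=8.0, quality_floor=0.5):
    _, quality = output(quality_share)
    # A time wage pays for presence, but the employer fires (pay -> 0)
    # anyone whose eventually-observed quality falls below a threshold.
    return wage if quality >= quality_floor else 0.0

def best_quality_share(pay_fn):
    # The worker picks the effort split that maximizes pay.
    grid = [i / 100 for i in range(101)]
    return max(grid, key=pay_fn)

print(best_quality_share(piece_rate_pay))  # 0.0 -- all effort into quantity
print(best_quality_share(time_wage_pay))   # 0.5 -- enough quality to keep the job
```

Under the piece rate the pay-maximizing worker drives quality to zero; under the time wage with delayed quality enforcement, the worker supplies at least the quality floor, which is the Trabant-versus-Volkswagen contrast in miniature.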
Besides lacking implicit insurance for the employee, another limit to market purchase of each worker’s output is that it can be made prohibitively costly by relationship-specific investments. These
investments occur when workers engage in interdependent production -- as the workers learn the equipment or adapt to each other. Relationship-specific investments can also occur between firms, for
example building a cannon foundry next to an iron mine. These investments, when combine with the inability to write long-term contracts that account for all eventualities, motivate firms to
integrate. Dealing with unspecified eventualities then becomes the right of the single owner. This incentive to integrate is opposed by the diseconomies of scale in a bureaucracy, caused by the
distribution of knowledge, which market exchange handles much better [13]. An in-depth discussion of the economic tradeoffs that produce the observed distribution of firm sizes in a market, i.e. the number of workers involved in an employment relationship instead of selling their wares directly or working for smaller firms, can be found in [11,12].
The main alternative to market exchange of output, piece rate, or coerced labor (serfdom or slavery) consists of the employers paying by sacrifice -- by some measure of the desirable things the
employee foregoes to pursue the employer’s objectives. An hour spent at work is an hour not spent partying, playing with the children, etc. For labor, this “opportunity cost” is most easily
denominated in time – a day spent working for the employer is a day not spent doing things the employee would, if not for the pay, desire to do. [1,9]
Time doesn’t specify costs such as effort and danger. These have to be taken into account by an employee or his union when evaluating a job offer. Worker choice, through the ability to switch jobs at
much lower costs than with serfdom, allows this crucial quality control to occur.
It’s usually hard to specify customer preferences, or quality, in a production contract. It’s easy to specify sacrifice, if we can measure it. Time is immediately observed; quality is eventually
observed. With employment via a time-wage, the costly giving up of other opportunities, measured in time, can be directly motivated (via daily or hourly wages), while quality is motivated in a
delayed, discontinuous manner (by firing if employers and/or peers judge that quality of the work is too often bad). Third parties, say the guy who owned the shop across the street, could observe the
workers arriving and leaving, and tell when they did so by the time. Common synchronization greatly reduced the opportunities for fraud involving that most basic contractual promise, the promise of time.
Once pay for time is in place, the basic incentives are in place – the employee is, verifiably, on the job for a specific portion of the day – so he might as well work. He might as well do the work,
both quantity and quality, that the employer requires. With incentives more closely aligned by the calendar and the city bells measuring the opportunity costs of employment, to be compensated by the
employer, the employer can focus observations on verifying the specific quantity and qualities desired, and the employee (to gain raises and avoid getting fired) focuses on satisfying them. So with
the time-wage contract, perfected by northern and western Europeans in the late Middle Ages, we have two levels of the protocol in this relationship: (1) the employee trades away other opportunities
to commit his time to the employer – this time is measured and compensated, (2) the employer is motivated, by (positively) opportunities for promotions and wage rate hikes and (negatively) by the
threat of firing, to use that time, otherwise worthless to both employer and employee, to achieve the quantity and/or quality goals desired by the employer.[1]
[1] A good discussion of time-wage vs. piece-rate vs. other kinds of employment contracts can be found in McMillan, Games, Strategies, and Managers, Oxford University Press 1992
[4] My main source for clocks and their impact is Dohrn-van Rossum, History of the Hour – Clocks and Modern Temporal Orders, University of Chicago Press, 1996.
[9] The original sources for much of the time rate contract discussion is Seiler, Eric (1984) “Piece rate vs. Time Rate: The Effect of Incentives on Earnings”, Review of Economics and Statistics 66:
363-76 and Ehrenberg, Ronald G., editor (1990) “Do Compensation Policies Matter?”, Special Issue of Industrial and Labor Relations Review 43: 3-S-273-S
[11] Coase, R.H., The Firm, the Market and the Law, University of Chicago Press 1988
[12] Williamson, Oliver, The Economic Institutions of Capitalism, Free Press 1985
[13] Hayek, Friedrich, "The Use of Knowledge In Society"
[14] The insight that we measure value via proxy measures is due to Yoram Barzel.
A new crop of web services in the U.K. allows you to locate and track people via their mobile phone. The phone companies themselves, and emergency call centers, and anybody else authorized, have long
been able to do this in the U.S. and U.K. and elsewhere, but now it's going retail. There is also this service which at least alerts the trackee when his or her location is being queried. This
partially addresses the attack of borrowing somebody's phone long enough to "give consent" and then tracking them with the service. Via Ian Grigg's Financial Cryptography.
Counter-intuitively, this development may enhance privacy, since the publicity that accompanies retail services will help prevent people from being in denial about the functions of their cell phone.
Even if it doesn't enhance privacy in this manner, it may at least help take us from an Orwellian model of surveillance (the state behind a one-way mirror) to a Brinian model (peer to peer
surveillance). OTOH, it may just fulfill the daydream of many a boss of being able to track employees 24x7.
I've noticed that there are strong parallels between accounting and two important areas of mathematics -- the elementary algebra and the calculus. I suspect these reflect the origins of algebra in
accounting and origins of some of the basic concepts behind the calculus in accounting. Readily available references to the history of accounting and mathematics may be too scant to prove it, but I
think the parallels are quite suggestive.
A basic parallel between accounting and algebra is the balance metaphor. The origin of this metaphor was almost surely the balance scale, an ancient commercial tool for measuring the weight of
precious metals and other commodities. Standard weights would be added or removed from the scale until balance with the commodity to be weighed was achieved.
Starting with the "accounting equation," assets = liabilities + equity, the strategy of accounting as with algebra is to achieve numerical balance by filling in missing quantities. As far back as the
Sumerians the need for balance in accounting was widely understood, but it was expressed either in purely verbal or purely ledger form rather than with an algebraic notation. (Similarly, logic was expressed in standard language rather than with its own abstract symbolic notation until Gottlob Frege in the 19th century.) Furthermore, examples of algebraic work left by the Sumerians,
Babylonians, and Indians, and indeed up to the time of Fibonacci and Pacioli, typically involved accounting problems.
Calculus largely has its origins in the study of change and how a dynamic view of the world relates to a static view of the world. Newton called calculus the study of "fluxions," or units of change.
(This is a more descriptive label for the field than "calculus" which simply means "calculating stone" and has been used to refer to a wide variety of areas of mathematics and logic). Long before
Newton, the relationship between the static and the dynamic was probably first conceptualized as the relationship between the balance sheet and the income statement. The balance sheet, which can be
summarized as
assets = liabilities + equity
is the "integral" of the income statement which can be summarized as
revenues = expenses + net income
(in other words, the income statement is the "derivative" of the balance sheet: the change in the balance sheet over a specific period of time).
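The integral/derivative analogy can be illustrated numerically. In the sketch below the equity figures are hypothetical, chosen only to show that each period's net income is the change in equity between consecutive balance sheets, and that the balance sheet is recovered by accumulating net income:

```python
# Hypothetical end-of-period equity figures from successive balance sheets.
equity = [100, 130, 125, 160]  # equity at t0, t1, t2, t3

# The income statement is the "derivative": the change in the
# balance sheet over each period.
net_income = [equity[t] - equity[t - 1] for t in range(1, len(equity))]
print(net_income)  # [30, -5, 35]

# Conversely, the balance sheet is the "integral": opening equity plus
# the accumulated net income of all periods to date.
rebuilt = [equity[0]]
for ni in net_income:
    rebuilt.append(rebuilt[-1] + ni)
assert rebuilt == equity
```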
Earlier civilizations had only mapped large scales of time to a spatial visualization in the form of a calendar. Diaries and accounting ledgers ordered by time also crudely map time into space. The
sundial mapped time into space, but in a distorted manner. Medieval Europeans with the invention of the mechanical clock and of musical notation including rhythm expanded and systematized the mapping
of time to a spatial visualization with consistent units. William of Occam and other Scholastics visualized time as a spatial dimension and other phenomena (including temperature, distance, and moral
qualities) as orthogonal dimensions to be graphed against time. Occam then used methods of Archimedes to calculate absolute quantities from the area under such curves, but we awaited Newton and
Leibniz, building on the analytical geometry of Descartes, which systematically related algebraic equations to spatial curves, to create a systematic calculus. Earlier in India, algebra and much
of the differential calculus were also developed within or alongside a rich business culture in which bookkeeping using "Arabic" numerals (also invented in India) was also widespread. I conclude that
the conceptual apparatus behind much traditional mathematics originated in commercial accounting techniques.
A big exception to this is geometry. Geometry developed primarily from the need to define property rights in the largest form of wealth from the neolithic until quite recently, namely farmland, but
that is another blog post for another day.
While I'm on the subject of applying software metaphors to legal code, here is a short article I recently wrote on the principle of least authority.
I go into issues of executive versus legislative power in the U.S. in more depth in Origins of the Non-Delegation Doctrine, with extensive commentary on this subject from both Federalists and
Anti-Federalists. (As you may recall, the Federalists were the primary movers behind the original Constitution, and the anti-Federalists were the primary movers behind the Bill of Rights. The
Constitution was ratified by most states conditional to a Bill of Rights, which was later pushed through Congress by anti-Federalists and compromising Federalists such as James Madison).
The paper also discusses further how protection of liberties was the prime motivation for the mechanisms such as checks and balances in the Constitution. As Locke said:
"[w]e agree to surrender some of our natural rights so that government can function to preserve the remainder. Absolute arbitrary power, or governing without settled standing laws, can neither of
them consist with the ends of society and government, which men would not quit the freedom of the state of nature for, nor tie themselves up under, were it not to preserve their lives, liberties, and
fortunes; and by stated rules of right and property to secure their peace and quiet."[1]
[1] John Locke, The Second Treatise On Government XI:137 (1691).
If Mike Huben is saying that the law should be about mechanism, not policy, then I heartily agree. Yet Mike gets this philosophy blindingly wrong when applying it to the law.
It is the highly evolved common law that got this wisdom most right:
Contract law: provide mechanisms for making and enforcing contracts, not policy about who can contract with whom for what.
Property law: provide mechanisms for transferring, collateralizing, devising, and protecting the quiet use of property, rather than policy dictating who can build what where.
Tort law: use the edges or surfaces of the body, property, etc. as boundaries others may not cross without consent of the possessor, rather than dictating detailed rules for each possible harm.
Et cetera. Under the common law we "write" (sometimes literally, as with contracts or wills) the policy of our own lives. The common law "create[s] user freedom by allowing them to easily
experiment..." as Mike says about X-Windows. Our judges are supposed to accumulate a common law, and the legislatures write statutes, that are a "systems code" for our interactions with others. We,
by our own agreements with those we specifically deal with, "write" within these basic mechanisms the rules that govern our own interactions with these others. The guiding philosophy of our legal
code should indeed be mechanism, not policy.
Of course, law as well as software is not quite this simple, and judges and legislatures sometimes (but not nearly as often as they would like to think) must make some hard policy choices for
exceptions and edge cases. Also, as Mike proves in his essay with absurd and awful statements such as "I think that the rights specified in the Constitution and Bill of Rights are there for purposes
of mechanism, not to directly protect individual rights," what is "mechanism" and what is "policy" is often in the mind of the beholder. In this case, the Founders (which for the Bill of Rights are
the anti-Federalists, not generally the Federalists as Mike suggests) clearly had in their minds that the main purpose of the mechanisms was to protect individual rights, especially as those rights
had evolved under the common law which Locke and the Constitution summarized as "life, liberty, and property." "Life" and "liberty" occur three times in the Constitution; "property" is protected in
four different places. Mike's beloved welfare state, on the other hand, occurs nowhere in the Constitution. Much of the Constitution was intended to protect the mechanisms of the common law from the
hubristic policymaking of legislatures and the arbitrary actions of government officials.
The recent great strides of progress in human history, such as the Industrial Revolution, the Information Revolution, and the abolition of slavery, were propagated by common law countries. Countries
lacking the common law have often fallen into authoritarianism and totalitarianism. The common law has proven that mechanism, not policy, is a very wise philosophy for law as well as for systems
software. Mechanism, not policy, is a philosophy that all of us should aspire to when opining or voting on laws, and it's a philosophy judges should apply when interpreting them.
Pricing decisions and Pricing strategies
Transfer Pricing
A transfer price is the ‘Price at which goods or services are transferred between different units of the same company’. These transfer pricing notes were prepared by the mindmaplab team and cover an introduction to transfer pricing, transfer costs, international transfer pricing, market-based transfer pricing, cost-based transfer pricing, the minimum transfer price and methods of transfer pricing management. A short note on transfer pricing is also available to download in PDF version.
Demand base Pricing
Elastic and inelastic demand
The value of demand elasticity may be anything from zero to infinity.
Demand is referred to as INELASTIC if the absolute value is less than 1 and ELASTIC if the absolute value is greater than 1.
• Where demand is inelastic, the quantity demanded falls by a smaller percentage than the percentage increase in price.
• Where demand is elastic, demand falls by a larger percentage than the percentage rise in price.
There are two extremes in the relationship between price and demand. A supplier can either sell a certain quantity, Q, at any price (as in graph (a)). Demand is totally unresponsive to changes in
price and is said to be completely inelastic. Alternatively, demand might be limitless at a certain price P (as in graph (b)), but there would be no demand above price P and there would be little
point in dropping the price below P. In such circumstances, demand is said to be completely elastic.
A more normal situation is where the downward-sloping demand curve shows the inverse relationship between unit selling price and sales volume. As one rises, the other falls. Demand is elastic because
demand will increase as prices are lowered.
Price elasticity of demand
Price elasticity of demand is a measure of the extent of change in market demand for a good in response to a change in its price. It is measured as:

Price elasticity of demand = (change in quantity demanded, as a % of demand) ÷ (change in price, as a % of price)
* It is usual to ignore any minus sign.
The price of a good is $1.20 per unit and annual demand is 800,000 units. Market research indicates that an increase in price of 10 pence per unit will result in a fall in annual demand of 75,000
units. What is the price elasticity of demand?
Annual demand at $1.20 per unit is 800,000 units.
Annual demand at $1.30 per unit is 725,000 units.
% change in demand = (75,000/800,000) x 100% = 9.375%
% change in price = (10p/120p) x 100% = 8.333%
Price elasticity of demand = (–9.375/8.333) = –1.125
Ignoring the minus sign, price elasticity is 1.125.
The demand for this good, at a price of $1.20 per unit, would be referred to as elastic because the price elasticity of demand is greater than 1.
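The calculation above can be sketched as a small helper function (a generic illustration, not from the text), applied to the worked example's figures:

```python
def price_elasticity(p0, p1, q0, q1):
    """Price elasticity of demand: % change in quantity demanded
    divided by % change in price. The minus sign is ignored, as is
    usual in this context."""
    pct_q = (q1 - q0) / q0
    pct_p = (p1 - p0) / p0
    return abs(pct_q / pct_p)

# Figures from the worked example: price rises from $1.20 to $1.30,
# annual demand falls from 800,000 to 725,000 units.
ped = price_elasticity(1.20, 1.30, 800_000, 725_000)
print(round(ped, 3))                           # 1.125
print("elastic" if ped > 1 else "inelastic")   # elastic
```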
Elasticity and the pricing decision
1. With inelastic demand, increase prices because revenues will increase and total costs will reduce (because quantities sold will reduce).
2. With elastic demand, increases in price will bring decreases in revenue and decreases in price will bring increases in revenue.
3. In situations of very elastic demand, overpricing can lead to massive drops in quantity sold and hence profits, whereas underpricing can lead to costly stock-outs and, again, a significant
drop in profits. Elasticity must therefore be reduced by creating a customer preference which is unrelated to price (through advertising and promotion).
4. In situations of very inelastic demand, customers are not sensitive to price. Quality, service, product mix and location are therefore more important to a firm’s pricing strategy.
Economic theory (Demand-based approaches) suggests that the volume of demand for a good in the market as a whole is influenced by a variety of variables such as:
• The price of the good, Tastes and fashion
• The price of other goods
• The perceived quality of the product, Expectations
• The size and distribution of household income, Obsolescence
The volume of demand for one organisation’s goods rather than another’s is influenced by three principal factors:
1. product life cycle (It is characterised by defined stages including research, development, introduction, maturity, decline and abandonment)
2. quality (the better quality good will be more in demand)
3. marketing (including the ‘four Ps’ of the marketing mix: Price, Product, Place, Promotion)
Other (Non-financial) issues that influence pricing decisions
Non-financial issues that influence pricing decisions, include competition, quality, price sensitivity and the market in which an organisation operates.
The price that an organisation can charge for its products will be determined to a greater or lesser degree by the market in which it operates.
In established industries dominated by a few major firms, it is generally accepted that a price initiative by one firm will be countered by a price reaction by competitors. In these circumstances,
prices tend to be fairly stable, unless pushed upwards by inflation or strong growth in demand.
Fighting a price war
Ways to fight a price war include:
1. Sell on value, not price – where value is made up of service, response, variety, knowledge, quality, guarantee and price.
2. Use ‘package pricing’ to attract customers
3. Build up key accounts – as it is cheaper to get more business from an existing customer than to find a new one. Customer profitability analysis (CPA)
4. Explore new pricing models
A range of other issues influence pricing decisions, including the market in which an organisation operates, competition, quality and price sensitivity, Inflation, Compatibility with other products,
Competition from substitute products etc.
Deriving the demand curve
The demand curve shows the relationship between the price charged for a product and the subsequent demand for that product.
Demand curve equations
Deriving the demand curve
The current price of a product is $12. At this price the company sells 12,000 items a month. One month the company decides to raise the price to $13, but only 9,500 items are sold at this price.
Determine the demand equation.
Step 1 – Find the gradient of the line: b = 1/2,500 = 0.0004
Step 2 – Extract figures from the question
The demand equation can now be determined as P = a – bx
b = 0.0004; x = 12,000 (number of units sold at current selling price)
P = a – (0.0004 x 12,000)
12 = a – 4.80
a = 16.80
∴ P = 16.80 – 0.0004x
Step 3 – Check your equation
We can check this by substituting $12 and $13 for P.
12 = 16.80 – 0.0004x = 16.80 – (0.0004 x 12,000)
13 = 16.80 – 0.0004x = 16.80 – (0.0004 x 9,500)
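The three steps above can be written as a few lines of code. The function below is a generic helper (not from the text), fitted to the example's two observations:

```python
def demand_curve(p0, q0, p1, q1):
    """Fit the straight-line demand curve P = a - bQ through two
    observed (price, quantity) points, as in Steps 1 and 2 above."""
    b = (p1 - p0) / (q0 - q1)  # gradient: price change per unit of demand lost
    a = p0 + b * q0            # solve P = a - bQ for the intercept
    return a, b

# Observations from the example: $12 sells 12,000 units, $13 sells 9,500.
a, b = demand_curve(12, 12_000, 13, 9_500)
# a ≈ 16.80, b = 0.0004, i.e. P = 16.80 - 0.0004x

# Step 3 - check: the equation reproduces both observed prices.
assert abs((a - b * 12_000) - 12) < 1e-9
assert abs((a - b * 9_500) - 13) < 1e-9
```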
Profit maximisation and the demand curve
The profit-maximising price/output level
The overall objective of an organisation should be profit maximisation.
In microeconomics theory, Profits will continue to be maximised only up to the output level where marginal cost has risen to be exactly equal to the marginal revenue.
Determining the profit-maximising selling price: using equations (Algebraic Approach)
The optimal selling price can be determined using equations (ie when MC = MR).
In an exam question you could be provided with equations for marginal cost and marginal revenue and/or have to devise them from information in the question. By equating the two equations you can
determine the optimal price.
Marginal revenue may not be the same as the price charged for all units up to that demand level, as to increase volumes the price may have to be reduced.
Procedure for establishing the optimum price of a product
This is a general set of rules that can be applied to most questions involving algebra and pricing.
1. Establish the linear relationship between price (P) and quantity demanded (Q). The equation will take the form: P = a – bQ
2. Double the gradient to find the marginal revenue: MR = a − 2bQ.
3. Establish the marginal cost MC. This will simply be the variable cost per unit.
4. To maximise profit, equate MC and MR and solve to find Q.
5. Substitute this value of Q into the price equation to find the optimum price.
6. It may be necessary to calculate the maximum profit.
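The six-step procedure can be sketched as one small function. The demo figures are those of Example 1 below (P = 600 − 0.4Q, MC = 140, fixed costs 36,000):

```python
def optimum_price(a, b, mc, fixed_cost=0.0):
    """Steps 1-6 above: given the demand curve P = a - bQ and marginal
    cost mc, return the profit-maximising quantity, price and profit.
    MR = a - 2bQ; setting MR = MC gives Q = (a - mc) / (2b)."""
    q = (a - mc) / (2 * b)              # step 4: equate MC and MR, solve for Q
    p = a - b * q                       # step 5: substitute Q into demand curve
    profit = (p - mc) * q - fixed_cost  # step 6: maximum profit
    return q, p, profit

q, p, profit = optimum_price(a=600, b=0.4, mc=140, fixed_cost=36_000)
print(round(q), round(p), round(profit))  # 575 370 96250
```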
Example 1 – Algebraic Approach
At a price of Rs 200 a company will be able to sell 1,000 units of its product in a month. If the selling price is increased to Rs 220, the demand will fall to 950 units. The product has a variable
cost of Rs 140 per unit, and company’s fixed costs will be Rs 36,000 per month.
1. Find an equation for the demand function (that is, price as a function of quantity demanded).
2. Write down the marginal revenue function.
3. Write down the marginal cost.
4. Find the quantity that maximises profit.
5. Calculate the optimum price.
6. What is the maximum profit?
(1). b = (220 – 200) ÷ (950 – 1,000) = –0.4
200 = a – 0.4 × 1,000
a = 200 + 400 = 600
So the demand function is: P = 600 – 0.4Q
(2). To find MR, just double the gradient, so that
MR = 600 – 0.8Q
(3) MC = 140
(4) To maximise profit, MC = MR
140 = 600 – 0.8Q
Q = (600 – 140) ÷ (0.8) = 575
(5). P = 600 – 0.4 × 575 = Rs 370
(6). Revenue = Price × Quantity = Rs 370 × 575 = 212,750
Less Cost = 36,000 + Rs 140 × 575 = 116,500
Profit = 96,250
Example 2
The total fixed costs per annum for a company that makes one product are Rs. 100,000 and a variable cost of Rs. 64 is incurred for each additional unit produced and sold over a very large range of
outputs. The current selling price for the product is Rs 160, and at this price 2,000 units are demanded per annum. It is estimated that for each successive increase in price of Rs. 5 annual demand
will be reduced by 50 units. Alternatively, for each Rs. 5 reduction in price demand will increase by 50 units.
1. Calculate the optimum output and price assuming that if prices are set within each Rs. 5 range there will be a proportionate change in demand.
2. Calculate the maximum profit.
(1). Let Q = quantity produced/sold
Demand curve
Price = a – 0.1Q
160 = a – 0.1 (2,000)
a = 360
P = 360 – 0.1Q
MR = 360 – 0.2Q
MC = 64
(2). To maximise profit: MR = MC
360 – 0.2Q = 64
Q = 296 ÷ 0.2 = 1,480
P = 360 – 0.1 × 1,480 = Rs 212
Revenue: Rs 212 × 1,480 = Rs 313,760
Less costs: Rs 64 × 1,480 + Rs 100,000 = Rs (194,720)
Maximum profit: Rs 119,040
Example 3
Ltd makes and sells a single product. It has been estimated that at a selling price of Rs 20, demand would be 10,000 units. It is further estimated that for every 10p drop in selling price, demand
would increase by 100 units and that for every 10p increase in selling price demand would fall by 100 units. Variable costs are Rs. 8 per unit and fixed costs are Rs. 50,000.
Required: Calculate the optimum selling price.
Demand curve: p = a – bQ
b = 0.1 ÷ 100 = 0.001 (the gradient: a 10p, i.e. Rs 0.1, price change per 100 units)
P = a – 0.001Q
20 = a – 0.001 x 10,000
20 = a – 10, therefore a = 30
Demand curve is: P = 30 – 0.001Q
Marginal revenue (MR) = 30 – 0.002Q
Marginal cost (MC) = 8
Profit is maximised when MC = MR, when: 8 = 30 – 0.002Q
0.002Q = 22
Q = 11,000 units
Selling price, P = 30 – 0.001Q; at 11,000 units:
P = 30 – 0.001 × 11,000 = Rs 19
Determining the profit-maximising selling price: The tabular approach
A tabular approach to price setting involves different prices and volumes of sales being presented in a table. To determine the profit-maximising selling price:
1. Work out the demand curve and hence the price and the total revenue (PQ) at various levels of demand
2. Calculate total cost and hence marginal cost at each level of demand
3. Finally calculate profit at each level of demand, thereby determining the price and level of demand at which profits are maximized
The profit is maximised at 7 units of output and a price of $32,000, when MR is most nearly equal to MC.
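A minimal sketch of the tabular approach, using a hypothetical demand schedule (the figures below are illustrative, not from the text): profit is computed row by row and the peak picked out, which occurs where MR is most nearly equal to MC.

```python
# Hypothetical demand schedule: (quantity, unit price, total cost).
schedule = [
    (1, 50, 30), (2, 47, 55), (3, 44, 78), (4, 41, 100),
    (5, 38, 121), (6, 35, 143), (7, 32, 168),
]

best = None
for q, price, total_cost in schedule:
    revenue = price * q              # total revenue PQ at this demand level
    profit = revenue - total_cost    # profit at this demand level
    if best is None or profit > best[1]:
        best = (q, profit, price)

q, profit, price = best
print(q, price, profit)  # 5 38 69: profit peaks at 5 units, price 38
```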
Cost-based approaches to pricing
Full cost-plus pricing
Full cost-plus pricing adds a percentage onto the full cost of the product to arrive at the selling price.
The traditional approach to cost plus pricing is to take the full absorption cost of a product or service and to add on a predetermined percentage mark-up to arrive at the selling price. Full cost
may comprise production costs only, or it may include some absorbed administration, selling and distribution overhead as well.
Advantages of full cost-plus pricing:

1. Widely used and accepted.
2. Simple to calculate if costs are known.
3. Selling price decision may be delegated to junior management.
4. Justification for price increases.
5. May encourage price stability – if all competitors have similar cost structures and use similar mark-up.
Disadvantages of full cost-plus pricing:

1. Ignores the economic relationship between price and demand.
2. No attempt to establish optimum price.
3. This structured method fails to recognise the manager’s need for flexibility in pricing.
4. Different absorption methods give rise to different costs and hence different selling prices.
5. Does not guarantee profit – if sales volumes are low fixed costs may not be recovered.
Marginal cost-plus (mark-up) pricing
A marginal (variable) cost-plus approach to pricing draws attention to gross profit and the gross profit margin, or contribution.
Marginal cost-plus pricing/mark-up pricing – is a method of determining the sales price by adding a profit margin on to either marginal cost of production or marginal cost of sales.
Advantages of marginal cost-plus pricing:

1. It is simple to operate.
2. It draws management attention to contribution and the effects of higher or lower sales volumes on profit. In this way, it helps to create a better awareness of the concepts of marginal costing
and break-even analysis.
3. In practice, it is used in businesses where there is a readily identifiable basic variable cost, e.g. retail industries.
Disadvantages of marginal cost-plus pricing:

1. Although the size of the mark-up can be varied in accordance with demand conditions, it is not a method of pricing which ensures that sufficient attention is paid to demand conditions,
competitors’ prices and profit maximisation.
2. It ignores fixed overheads in the pricing decision, but the sales price must be sufficiently high to ensure that a profit is made after covering fixed costs. Pricing cannot ignore fixed costs.
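The two cost-plus methods differ only in the cost base the mark-up is applied to. A sketch with illustrative unit figures (not from the text):

```python
def cost_plus_price(cost_base, markup_pct):
    """Add a predetermined percentage mark-up to a cost base."""
    return cost_base * (1 + markup_pct / 100)

# Illustrative unit figures:
variable_cost = 8.0    # marginal cost per unit
absorbed_fixed = 4.0   # fixed overhead absorbed per unit
full_cost = variable_cost + absorbed_fixed

# Full cost-plus: mark-up on full absorption cost.
print(cost_plus_price(full_cost, 25))        # 15.0
# Marginal cost-plus: a larger mark-up on variable cost alone can give
# a similar price, but it makes the contribution (15.0 - 8.0) explicit.
print(cost_plus_price(variable_cost, 87.5))  # 15.0
```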
Pricing based on mark-up per unit of limiting factor
It demonstrates how to calculate a price based on mark-up per unit of a limiting factor.
Another approach to pricing might be taken when a business is working at full capacity and is restricted by a shortage of resources from expanding its output further. By deciding what target profit it would like to earn, it could establish a mark-up per unit of limiting factor.
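A sketch of pricing from a mark-up per unit of limiting factor, with illustrative figures (capacity, target profit, and product data are assumptions, not from the text):

```python
# Illustrative figures: capacity is limited to 10,000 machine hours,
# and the business targets a profit of 120,000.
target_profit = 120_000
limiting_factor_available = 10_000  # machine hours
markup_per_hour = target_profit / limiting_factor_available  # 12 per hour

def price(variable_cost, hours_per_unit):
    """Price each product so it covers its variable cost plus the
    required mark-up for the scarce hours it consumes."""
    return variable_cost + markup_per_hour * hours_per_unit

print(price(variable_cost=30, hours_per_unit=2))    # 54.0
print(price(variable_cost=45, hours_per_unit=0.5))  # 51.0
```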
Different Pricing strategies for new products
When a new product is launched, it is essential that the company gets the pricing strategy correct, otherwise the wrong message may be given to the market (if priced too cheaply) or the product will
not sell (if the price is too high).
A new product pricing strategy will depend largely on whether a company’s product or service is the first of its kind on the market.
• If the product is the first of its kind, there will be no competition yet, and the company, for a time at least, will be a monopolist. A monopolist’s price is likely to be higher, and its profits
bigger, than those of a company operating in a competitive market.
• If the new product being launched by a company is following a competitor’s product onto the market, the pricing strategy will be constrained by what the competitor is already doing. The new
product could be given a higher price if its quality is better, or it could be given a price which matches the competition.
Two pricing strategies for new products are market penetration pricing and market skimming pricing.
Market penetration pricing
Market penetration pricing is a policy of low prices when the product is first launched in order to obtain sufficient penetration into the market.
Circumstances in which a penetration policy may be appropriate.
1. If the firm wishes to discourage new entrants into the market.
2. If the firm wishes to shorten the initial period of the product’s life cycle in order to enter the growth and maturity stages as quickly as possible.
3. If there are significant economies of scale to be achieved from a high volume of output, so that quick penetration into the market is desirable in order to gain unit cost reductions.
4. If demand is highly elastic and so would respond well to low prices.
Penetration prices are prices which aim to secure a substantial share in a substantial total market. A firm might therefore deliberately build excess production capacity and set its prices very low.
As demand builds up, the spare capacity will be used up gradually and unit costs will fall; the firm might even reduce prices further as unit costs fall. In this way, early losses will enable the
firm to dominate the market and have the lowest costs.
Market skimming pricing
Market skimming pricing involves charging high prices when a product is first launched and spending heavily on advertising and sales promotion to obtain sales.
As the product moves into the later stages of its life cycle, progressively lower prices will be charged and so the profitable ‘cream’ is skimmed off in stages until sales can only be sustained at
lower prices. The aim of market skimming is to gain high unit profits early in the product’s life. High unit prices make it more likely that competitors will enter the market than if lower prices
were to be charged.
Circumstances in which such a policy may be appropriate.
1. Where the product is new and different, so that customers are prepared to pay high prices so as to be one up on other people who do not own it.
2. Where the strength of demand and the sensitivity of demand to price are unknown.
3. Where high prices in the early stages of a product’s life might generate high initial cash flows.
4. Where products may have a short life cycle, and so need to recover their development costs and make a profit relatively quickly.
Price Discrimination
Price discrimination is the practice of charging different prices for the same product to different groups of buyers when these prices are not reflective of cost differences.
Conditions required for a price-discrimination strategy:
1. The seller must have some degree of monopoly power, or the price will be driven down.
2. Customers can be segregated into different markets.
3. Customers cannot buy at the lower price in one market and sell at the higher price in the other market.
4. Price discrimination strategies are particularly effective for services.
5. There must be different price elasticities of demand in each market so that prices can be raised in one and lowered in the other to increase revenue.
Dangers of price-discrimination as a strategy:
1. A black market may develop allowing those in a lower priced segment to resell to those in a higher priced segment.
2. Competitors join the market and undercut the firm’s prices.
3. Customers in the higher priced brackets look for alternatives and demand become more elastic over time.
Premium pricing
This involves making a product appear ‘different’ through product differentiation so as to justify a premium price. The product may be different in terms of, for example, quality, reliability,
durability, after sales service or extended warranties. Heavy advertising can establish brand loyalty, which can help to sustain a premium, and premium prices will always be paid by those customers
who blindly equate high price with high quality.
Product bundling
Product bundling is a variation on price discrimination which involves selling a number of products or services as a package at a price lower than the aggregate of their individual prices. This might
encourage customers to buy services that they might otherwise not have purchased.
The success of a bundling strategy depends on the expected increase in sales volume and changes in margin. Other cost changes, such as in product handling, packaging and invoicing costs, are
possible. Longer-term issues, such as competitors’ reactions, must also be considered.
Loss leader pricing
A loss leader is when a company sets a very low price for one product intending to make customers buy other products in the range which carry higher profit margins. People will buy many of the
high-profit items but only one of the low-profit items, yet they are ‘locked in’ to the former by the latter.
Discount pricing
Discount pricing is where products are priced lower than the market norm, but are put forward as being of comparable quality. The aim is that the product will procure a larger share of the market
than it might otherwise do, thereby counteracting the reduction in selling price.
Reasons for using discounts to adjust prices:
1. To get rid of perishable goods that have reached the end of their shelf life
2. To sell off seconds
3. Normal practice (e.g. antique trade)
4. To increase sales volumes during a poor sales period without dropping prices permanently
5. To differentiate between types of customer (wholesale, retail and so on)
6. To get cash in quickly
This talk covers two aspects of solving radiative transport through its integral formulation. The radiative transport equation has been studied numerically for many years, but it remains difficult to solve because of its high dimensionality and hyperbolic nature. In recent decades computers have been equipped with larger memories, so a full discretization in phase space has become possible; however, numerical efficiency is still quite limited by issues such as the iterative scheme, preconditioning, and the discretization itself. In this talk, we first discuss the special case of isotropic scattering and its integral formulation, then walk through the corresponding fast algorithm. In the second part, we try a straightforward extension of the method to the anisotropic case, and discuss the method's limitations and some perspectives in both theory and numerics. | {"url":"https://www4.math.duke.edu/media/videos.php?cat=all&sort=most_viewed&time=this_month&page=1&seo_cat_name=All&sorting=sort","timestamp":"2024-11-12T13:26:32Z","content_type":"text/html","content_length":"240942","record_id":"<urn:uuid:3e5b81e4-ee2e-4563-ad40-4822e485bbac>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00457.warc.gz"} |
The Difference Between Rounding Error and Truncation Error Explained
Rounding errors occur when a numerical calculation must be rounded or trimmed to a certain number of digits.
Truncation errors, on the other hand, arise when an infinite process is replaced by a finite one. A number can be shortened either by cutting (chopping) or by rounding symmetrically. For a machine that stores a mantissa of d digits in base b, the maximum relative rounding error due to cutting is b^(1-d), where d is the length of the mantissa and depends on the machine; the maximum relative rounding error due to symmetrical rounding is half of that, (1/2)b^(1-d). For a computer system with binary representation (b = 2), the machine epsilons due to cutting and symmetrical rounding are therefore 2^(1-d) and 2^(-d), respectively.
In numerical analysis and scientific computation, truncation error is the error caused by the approximation of a mathematical process. This can be seen when using a left Riemann sum of two segments of equal width: the error caused by choosing a finite number of rectangles instead of an infinite number of them is a truncation error in the mathematical process of integration. Occasionally, by mistake, the rounding error (the consequence of using finite-precision floating-point numbers in computers) is also called a truncation error, especially if the number is rounded by cutting; this is not the correct use of the term, although describing the operation as 'truncating' a number may be acceptable. As a genuine example, given an infinite series evaluated at x = 0.75, the truncation error can be found when only the first three terms of the series are used. In conclusion, there are distinct differences between the two: rounding errors occur when a numerical calculation must be rounded or trimmed to a certain number of digits, while truncation errors arise when an infinite process is replaced by a finite one.
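Both kinds of error can be demonstrated in a few lines of code. This is an illustrative sketch rather than anything from the article itself; the series, evaluation point, and error magnitudes are chosen purely for demonstration.

```python
import math

# Rounding error: 0.1, 0.2 and 0.3 have no exact binary representation,
# so even this "exact" decimal sum carries a tiny rounding error.
rounding_error = abs((0.1 + 0.2) - 0.3)

# Truncation error: replace the infinite Taylor series for e^x by its
# first three terms, 1 + x + x^2/2, and compare with the true value.
x = 0.75
partial_sum = sum(x**k / math.factorial(k) for k in range(3))
truncation_error = abs(math.exp(x) - partial_sum)

print(rounding_error)    # tiny: on the order of 1e-17
print(truncation_error)  # about 0.086: dominated by the discarded terms
```

Here the truncation error dwarfs the rounding error, which is the usual situation when a series or integral is approximated coarsely.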
| {"url":"https://www.truncations.net/what-is-the-difference-between-the-rounding-error-and-the-truncation-error","timestamp":"2024-11-05T20:25:15Z","content_type":"text/html","content_length":"104559","record_id":"<urn:uuid:64677963-ff15-443d-bca5-a4031d129843>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00882.warc.gz"} |
MCQS:Mean,Mode,Median:Mathematics MCQs - Study Targets | All Subjects MCQs & Short Quiz For Exam Preparation
1.Which of the following measures of central tendency is most influenced by outliers?
a) Mean
b) Median
c) Mode
Answer: a) Mean
2. What is the formula to calculate the mean of a set of numbers?
a) Sum of numbers divided by the total number of numbers
b) Middle number when arranged in order
c) Most frequent number in the set
Answer: a) Sum of numbers divided by the total number of numbers
3. Which measure of central tendency is not affected by extreme values?
a) Mean
b) Median
c) Mode
Answer: b) Median
4. What is the most commonly occurring number in a set of data?
a) Mean
b) Median
c) Mode
Answer: c) Mode
5. Which measure of central tendency is used for nominal data?
a) Mean
b) Median
c) Mode
Answer: c) Mode
6. Which measure of central tendency is used for ordinal data?
a) Mean
b) Median
c) Mode
Answer: b) Median
7. In a set of data with an even number of values, how is the median calculated?
a) By taking the average of the two middle values
b) By taking the middle value
c) By taking the mode
Answer: a) By taking the average of the two middle values
8. In a set of data with an odd number of values, how is the median calculated?
a) By taking the average of the two middle values
b) By taking the middle value
c) By taking the mode
Answer: b) By taking the middle value
9. What is the range of a set of data?
a) The difference between the highest and lowest values
b) The sum of all the values
c) The average of all the values
Answer: a) The difference between the highest and lowest values
10. What does a large range indicate about the data?
a) That the data is spread out
b) That the data is clustered together
c) That the data is not valid
Answer: a) That the data is spread out
11. Which measure of central tendency is not affected by extreme values?
a) Mean
b) Median
c) Mode
Answer: b) Median
12. Which measure of central tendency is affected by extreme values?
a) Mean
b) Median
c) Mode
Answer: a) Mean
13. Which measure of central tendency should be used when the data has extreme values?
a) Mean
b) Median
c) Mode
Answer: b) Median
14. Which measure of central tendency should be used when the data is skewed?
a) Mean
b) Median
c) Mode
Answer: b) Median
15. Which measure of central tendency should be used when the data is symmetrical?
a) Mean
b) Median
c) Mode
Answer: a) Mean
16. Which measure of central tendency is best used for continuous data?
a) Mean
b) Median
c) Mode
Answer: a) Mean
17. Which measure of central tendency is best used for discrete data?
a) Mean
b) Median
c) Mode
Answer: b) Median
18. Which measure of central tendency should be used when the data is in a nominal scale?
a) Mean
b) Median
c) Mode
Answer: c) Mode
19. Which measure of central tendency should be used when the data is in an ordinal scale?
a) Mean
b) Median
c) Mode
Answer: b) Median
20. What is the formula to calculate the mode of a set of numbers?
a) The number that appears most frequently in the set
b) The middle number when arranged in order
c) The sum of numbers divided by the total number of numbers
Answer: a) The number that appears most frequently in the set
21. What is the formula to calculate the median of a set of numbers?
a) The number that appears most frequently in the set
b) The middle number when arranged in order
c) The sum of numbers divided by the total number of numbers
Answer: b) The middle number when arranged in order
22. What is the mode of the following set of numbers: 3, 5, 7, 5, 9, 5?
a) 5
b) 3
c) 7
Answer: a) 5
23. What is the median of the following set of numbers: 4, 6, 9, 2, 1, 5, 8?
a) 4
b) 5
c) 6
Answer: b) 5
24. What is the mean of the following set of numbers: 10, 15, 20, 25, 30?
a) 15
b) 20
c) 25
Answer: b) 20
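The worked answers in questions 22-24 can be checked with Python's standard `statistics` module (a small verification sketch, not part of the original quiz):

```python
import statistics

# Q22: the mode is the most frequently occurring value (5 appears three times).
assert statistics.mode([3, 5, 7, 5, 9, 5]) == 5

# Q23: for an odd-length set, the median is the middle value after sorting
# (sorted: 1 2 4 5 6 8 9 -> middle value 5).
assert statistics.median([4, 6, 9, 2, 1, 5, 8]) == 5

# Q24: the mean is the sum divided by the count (100 / 5 = 20).
assert statistics.mean([10, 15, 20, 25, 30]) == 20

print("all three answers check out")
```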
25. Which measure of central tendency should be used when the data is in a ratio scale?
a) Mean
b) Median
c) Mode
Answer: a) Mean
26. What is the mode of the following set of numbers: 2, 2, 3, 3, 3, 4, 5, 5, 5, 5?
a) 3
b) 5
c) 2
Answer: b) 5
27. What is the median of the following set of numbers: 2, 4, 6, 8, 10, 12?
a) 7
b) 8
c) 10
Answer: a) 7 (the average of the two middle values, 6 and 8)
28. What is the mean of the following set of numbers: 5, 10, 15, 20, 25, 30, 35, 40?
a) 22.5
b) 25
c) 30
Answer: a) 22.5
29. Which measure of central tendency minimizes the sum of squared deviations from it?
a) Mean
b) Median
c) Mode
Answer: a) Mean | {"url":"https://studytargets.com/2023/07/19/mcqsmeanmodemedianmathematics-mcqs/","timestamp":"2024-11-09T11:16:43Z","content_type":"text/html","content_length":"156744","record_id":"<urn:uuid:ffda8ea4-1168-4ddc-b355-1b5f9c2ebae5>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00274.warc.gz"} |
A3 Graph Paper - The Graph Paper
Are you looking for a decent-sized paper for your professional and personal usage? Well, you should maybe take a look at the A3 Graph Paper here that can serve all your purposes. It’s one of the most
famous papers that is also known as versatile paper. Here in the article, we are going to provide our readers with the printable template of this paper.
Graph paper is paper printed with a regular grid of small squares. It is also known by many other names, such as grid paper or coordinate paper.
It is useful for plotting various types of data in the mathematics and engineering domains, and it is also helpful for arts and crafts. You will find applications of graph paper in both the academic and professional lives of individuals.
A3 graph paper is a standard grid paper in a larger format. The grid itself is generic, but the A3 size makes the paper versatile enough to serve multiple purposes at once.
For instance, scholars can use it to plot mathematical and engineering data to study the relationship between two equations. Similarly, artists can use the grid to draw various types of objects. You can make the most of this paper across all of these uses.
A3 Graph Paper Template
Well, you don’t need to buy the A3 graph paper from the market anymore as we are providing it here. You can use the A3 size paper for any of your desired usages. For instance, you can use it to draw
various types of objects in the domain of arts and crafts. Similarly, you can use it in the domain of science and engineering.
Mathematics scholars can use it to plot equations for data analysis, and science scholars can use the grid to record experimental observations during the research phase. This size is bigger than A4 paper, so you can use it quite comfortably.
A3 Graph Paper Printable
Printable paper is considered highly convenient for those who don’t have time to get the graph from the market. We understand this general scenario of our readers and therefore we have developed a
fully printable A3 graph paper.
This graph comes in a fully printable manner that anyone can easily print and use for various purposes. A3 grid paper is a decent size of paper that is ideal for all types of graphing activities. So,
feel free to print the graph template from here and make the best use of it.
A3 Graph Paper Pad
Check out the fine pad of the A3 graph paper here and use this pad format graph conveniently. This is a versatile A3 graph pad that is useful for several purposes. In your usage, you can use it to
draw medium to large size objects.
If you are a professional individual then you can use it for the presentation of the projects before the team. In your academics, the A3 grid paper is useful for science and mathematical data
plotting. You can analyze the data in depth by plotting it on the A3 grid paper.
Isometric Graph Paper A3
Isometric paper is one of the most professionally used forms of paper. This paper is useful to create or draw three-dimensional drawings, isometric arts and crafts, mathematics drawings, etc. The
isometric paper is highly relevant in the modern gaming industry.
It is used to map the games on paper in the course of development and before the final release. The A3 isometric paper serves all of these purposes in quite a decent manner. We therefore highly
recommend our readers use the isometric paper in their learning and creativity.
A3 Size Graph Paper
Check out the A3 size of the paper template here and use it to draft the interactive paper. A3 size is quite a popular size for paper as it is highly versatile. The A3 size of the graph is useful to
publish the presentation of the projects to the team on the professional front.
Similarly, it has its academic usage in the domain of science and mathematics. So, feel free to get this A3 size paper from here and share it with the others as well. | {"url":"https://thegraphpaper.com/tag/a3-graph-paper/","timestamp":"2024-11-03T09:15:54Z","content_type":"text/html","content_length":"40497","record_id":"<urn:uuid:3deac825-269d-4364-8e44-f89ad66be473>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00791.warc.gz"} |
The Algorithm Design Manual
Author : Steven S Skiena
Publisher : Springer Science & Business Media
Total Pages : 742
Release : 2009-04-05
ISBN-10 : 9781848000704
ISBN-13 : 1848000707
Rating : 4/5 (04 Downloads)
Synopsis The Algorithm Design Manual by : Steven S Skiena
This newly expanded and updated second edition of the best-selling classic continues to take the "mystery" out of designing algorithms, and analyzing their efficacy and efficiency. Expanding on the
first edition, the book now serves as the primary textbook of choice for algorithm design courses while maintaining its status as the premier practical reference guide to algorithms for programmers,
researchers, and students. The reader-friendly Algorithm Design Manual provides straightforward access to combinatorial algorithms technology, stressing design over analysis. The first part,
Techniques, provides accessible instruction on methods for designing and analyzing computer algorithms. The second part, Resources, is intended for browsing and reference, and comprises the catalog
of algorithmic resources, implementations and an extensive bibliography. NEW to the second edition: • Doubles the tutorial material and exercises over the first edition • Provides full online support
for lecturers, and a completely updated and improved website component with lecture slides, audio and video • Contains a unique catalog identifying the 75 algorithmic problems that arise most often
in practice, leading the reader down the right path to solve them • Includes several NEW "war stories" relating experiences from real-world applications • Provides up-to-date links leading to the
very best algorithm implementations available in C, C++, and Java
The Data Science Design Manual
Author : Steven S. Skiena
Publisher : Springer
Total Pages : 456
Release : 2017-07-01
ISBN-10 : 9783319554440
ISBN-13 : 3319554441
Rating : 4/5 (40 Downloads)
Synopsis The Data Science Design Manual by : Steven S. Skiena
This engaging and clearly written textbook/reference provides a must-have introduction to the rapidly emerging interdisciplinary field of data science. It focuses on the principles fundamental to
becoming a good data scientist and the key skills needed to build systems for collecting, analyzing, and interpreting data. The Data Science Design Manual is a source of practical insights that
highlights what really matters in analyzing data, and provides an intuitive understanding of how these core concepts can be used. The book does not emphasize any particular programming language or
suite of data-analysis tools, focusing instead on high-level discussion of important design principles. This easy-to-read text ideally serves the needs of undergraduate and early graduate students
embarking on an “Introduction to Data Science” course. It reveals how this discipline sits at the intersection of statistics, computer science, and machine learning, with a distinct heft and
character of its own. Practitioners in these and related fields will find this book perfect for self-study as well. Additional learning tools: Contains “War Stories,” offering perspectives on how
data science applies in the real world Includes “Homework Problems,” providing a wide range of exercises and projects for self-study Provides a complete set of lecture slides and online video
lectures at www.data-manual.com Provides “Take-Home Lessons,” emphasizing the big-picture concepts to learn from each chapter Recommends exciting “Kaggle Challenges” from the online platform Kaggle
Highlights “False Starts,” revealing the subtle reasons why certain approaches fail Offers examples taken from the data science television show “The Quant Shop” (www.quant-shop.com)
Programming Challenges
Author : Steven S Skiena
Publisher : Springer Science & Business Media
Total Pages : 376
Release : 2006-04-18
ISBN-10 : 9780387220819
ISBN-13 : 038722081X
Rating : 4/5 (19 Downloads)
Synopsis Programming Challenges by : Steven S Skiena
There are many distinct pleasures associated with computer programming. Craftsmanship has its quiet rewards, the satisfaction that comes from building a useful object and making it work. Excitement
arrives with the flash of insight that cracks a previously intractable problem. The spiritual quest for elegance can turn the hacker into an artist. There are pleasures in parsimony, in squeezing the
last drop of performance out of clever algorithms and tight coding. The games, puzzles, and challenges of problems from international programming competitions are a great way to experience these
pleasures while improving your algorithmic and coding skills. This book contains over 100 problems that have appeared in previous programming contests, along with discussions of the theory and ideas
necessary to attack them. Instant online grading for all of these problems is available from two WWW robot judging sites. Combining this book with a judge gives an exciting new way to challenge and
improve your programming skills. This book can be used for self-study, for teaching innovative courses in algorithms and programming, and in training for international competition. The problems in
this book have been selected from over 1,000 programming problems at the Universidad de Valladolid online judge. The judge has ruled on well over one million submissions from 27,000 registered users
around the world to date. We have taken only the best of the best, the most fun, exciting, and interesting problems available.
Algorithm Design
Author : Jon Kleinberg
Publisher : Pearson Higher Ed
Total Pages : 828
Release : 2013-08-29
ISBN-10 : 9781292037042
ISBN-13 : 1292037040
Rating : 4/5 (42 Downloads)
Synopsis Algorithm Design by : Jon Kleinberg
Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in
computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science. The full text
downloaded to your computer With eBooks you can: search for key concepts, words and phrases make highlights and notes as you study share your notes with friends eBooks are downloaded to your computer
and accessible either offline through the Bookshelf (available as a free download), available online and also via the iPad and Android apps. Upon purchase, you'll gain instant access to this eBook.
Time limit The eBooks products do not have an expiry date. You will continue to access your digital ebook products whilst you have your Bookshelf installed.
A Guide to Algorithm Design
Author : Anne Benoit
Publisher : CRC Press
Total Pages : 380
Release : 2013-08-27
ISBN-10 : 9781439898130
ISBN-13 : 1439898138
Rating : 4/5 (30 Downloads)
Synopsis A Guide to Algorithm Design by : Anne Benoit
Presenting a complementary perspective to standard books on algorithms, A Guide to Algorithm Design: Paradigms, Methods, and Complexity Analysis provides a roadmap for readers to determine the
difficulty of an algorithmic problem by finding an optimal solution or proving complexity results. It gives a practical treatment of algorithmic complexity and guides readers in solving algorithmic
problems. Divided into three parts, the book offers a comprehensive set of problems with solutions as well as in-depth case studies that demonstrate how to assess the complexity of a new problem.
Part I helps readers understand the main design principles and design efficient algorithms. Part II covers polynomial reductions from NP-complete problems and approaches that go beyond
NP-completeness. Part III supplies readers with tools and techniques to evaluate problem complexity, including how to determine which instances are polynomial and which are NP-hard. Drawing on the
authors’ classroom-tested material, this text takes readers step by step through the concepts and methods for analyzing algorithmic complexity. Through many problems and detailed examples, readers
can investigate polynomial-time algorithms and NP-completeness and beyond.
7 Algorithm Design Paradigms
Author : Sung-Hyuk Cha
Publisher : Cha Academy llc
Total Pages : 798
Release : 2020-06-01
ISBN-10 : 9781735168005
ISBN-13 : 1735168009
Rating : 4/5 (05 Downloads)
Synopsis 7 Algorithm Design Paradigms by : Sung-Hyuk Cha
The intended readership includes both undergraduate and graduate students majoring in computer science as well as researchers in the computer science area. The book is suitable either as a textbook
or as a supplementary book in algorithm courses. Over 400 computational problems are covered with various algorithms to tackle them. Rather than providing students simply with the best known
algorithm for a problem, this book presents various algorithms for readers to master various algorithm design paradigms. Beginners in computer science can train their algorithm design skills via
trivial algorithms on elementary problem examples. Graduate students can test their abilities to apply the algorithm design paradigms to devise an efficient algorithm for intermediate-level or
challenging problems. Key Features: Dictionary of computational problems: A table of over 400 computational problems with more than 1500 algorithms is provided. Indices and Hyperlinks: Algorithms,
computational problems, equations, figures, lemmas, properties, tables, and theorems are indexed with unique identification numbers and page numbers in the printed book and hyperlinked in the e-book
version. Extensive Figures: Over 435 figures illustrate the algorithms and describe computational problems. Comprehensive exercises: More than 352 exercises help students to improve their algorithm
design and analysis skills. The answers for most questions are available in the accompanying solution manual.
An Introduction to Machine Learning
Author : Miroslav Kubat
Publisher : Springer
Total Pages : 348
Release : 2017-08-31
ISBN-10 : 9783319639130
ISBN-13 : 3319639137
Rating : 4/5 (30 Downloads)
Synopsis An Introduction to Machine Learning by : Miroslav Kubat
This textbook presents fundamental machine learning concepts in an easy to understand manner by providing practical advice, using straightforward examples, and offering engaging discussions of
relevant applications. The main topics include Bayesian classifiers, nearest-neighbor classifiers, linear and polynomial classifiers, decision trees, neural networks, and support vector machines.
Later chapters show how to combine these simple tools by way of “boosting,” how to exploit them in more complicated domains, and how to deal with diverse advanced practical issues. One chapter is
dedicated to the popular genetic algorithms. This revised edition contains three entirely new chapters on critical topics regarding the pragmatic application of machine learning in industry. The
chapters examine multi-label domains, unsupervised learning and its use in deep learning, and logical approaches to induction. Numerous chapters have been expanded, and the presentation of the
material has been enhanced. The book contains many new exercises, numerous solved examples, thought-provoking experiments, and computer assignments for independent work.
Algorithms in a Nutshell
Author : George T. Heineman
Publisher : "O'Reilly Media, Inc."
Total Pages : 366
Release : 2008-10-14
ISBN-10 : 9781449391133
ISBN-13 : 1449391133
Rating : 4/5 (33 Downloads)
Synopsis Algorithms in a Nutshell by : George T. Heineman
Creating robust software requires the use of efficient algorithms, but programmers seldom think about them until a problem occurs. Algorithms in a Nutshell describes a large number of existing
algorithms for solving a variety of problems, and helps you select and implement the right algorithm for your needs -- with just enough math to let you understand and analyze algorithm performance.
With its focus on application, rather than theory, this book provides efficient code solutions in several programming languages that you can easily adapt to a specific project. Each major algorithm
is presented in the style of a design pattern that includes information to help you understand why and when the algorithm is appropriate. With this book, you will: Solve a particular coding problem
or improve on the performance of an existing solution Quickly locate algorithms that relate to the problems you want to solve, and determine why a particular algorithm is the right one to use Get
algorithmic solutions in C, C++, Java, and Ruby with implementation tips Learn the expected performance of an algorithm, and the conditions it needs to perform at its best Discover the impact that
similar design decisions have on different algorithms Learn advanced data structures to improve the efficiency of algorithms With Algorithms in a Nutshell, you'll learn how to improve the performance
of key algorithms essential for the success of your software applications.
Algorithm Design and Applications
Author : Michael T. Goodrich
Publisher : Wiley Global Education
Total Pages : 803
Release : 2014-11-03
ISBN-10 : 9781119028482
ISBN-13 : 1119028485
Rating : 4/5 (82 Downloads)
Synopsis Algorithm Design and Applications by : Michael T. Goodrich
ALGORITHM DESIGN and APPLICATIONS “This is a wonderful book, covering both classical and contemporary topics in algorithms. I look forward to trying it out in my algorithms class. I especially like
the diversity in topics and difficulty of the problems.” ROBERT TARJAN, PRINCETON UNIVERSITY “The clarity of explanation is excellent. I like the inclusion of the three types of exercises very much.”
MING-YANG KAO, NORTHWESTERN UNIVERSITY “Goodrich and Tamassia have designed a book that is both remarkably comprehensive in its coverage and innovative in its approach. Their emphasis on motivation
and applications, throughout the text as well as in the many exercises, provides a book well-designed for the boom in students from all areas of study who want to learn about computing. The book
contains more than one could hope to cover in a semester course, giving instructors a great deal of flexibility and students a reference that they will turn to well after their class is over.”
MICHAEL MITZENMACHER, HARVARD UNIVERSITY “I highly recommend this accessible roadmap to the world of algorithm design. The authors provide motivating examples of problems faced in the real world and
guide the reader to develop workable solutions, with a number of challenging exercises to promote deeper understanding.” JEFFREY S. VITTER, UNIVERSITY OF KANSAS DidYouKnow? This book is available as
a Wiley E-Text. The Wiley E-Text is a complete digital version of the text that makes time spent studying more efficient. Course materials can be accessed on a desktop, laptop, or mobile device—so
that learning can take place anytime, anywhere. A more affordable alternative to traditional print, the Wiley E-Text creates a flexible user experience: Access on-the-go Search across content
Highlight and take notes Save money! The Wiley E-Text can be purchased in the following ways: Via your campus bookstore: Wiley E-Text: Powered by VitalSource® ISBN 9781119028796 *Instructors: This
ISBN is needed when placing an order. Directly from: www.wiley.com/college/goodrich
The Art of Algorithm Design
Author : Sachi Nandan Mohanty
Publisher : CRC Press
Total Pages : 319
Release : 2021-10-14
ISBN-10 : 9781000463781
ISBN-13 : 1000463788
Rating : 4/5 (81 Downloads)
Synopsis The Art of Algorithm Design by : Sachi Nandan Mohanty
The Art of Algorithm Design is a complementary perception of all books on algorithm design and is a roadmap for all levels of learners as well as professionals dealing with algorithmic problems.
Further, the book provides a comprehensive introduction to algorithms and covers them in considerable depth, yet makes their design and analysis accessible to all levels of readers. All algorithms
are described and designed with a "pseudo-code" to be readable by anyone with little knowledge of programming. This book comprises of a comprehensive set of problems and their solutions against each
algorithm to demonstrate its executional assessment and complexity, with an objective to: Understand the introductory concepts and design principles of algorithms and their complexities Demonstrate
the programming implementations of all the algorithms using C-Language Be an excellent handbook on algorithms with self-explanatory chapters enriched with problems and solutions While other books may
also cover some of the same topics, this book is designed to be both versatile and complete as it traverses through step-by-step concepts and methods for analyzing each algorithmic complexity with
pseudo-code examples. Moreover, the book provides an enjoyable primer to the field of algorithms. This book is designed for undergraduates and postgraduates studying algorithm design. | {"url":"https://katsbookbuzz.net/create/the-algorithm-design-manual/","timestamp":"2024-11-14T07:27:35Z","content_type":"text/html","content_length":"72576","record_id":"<urn:uuid:c1d9dcf0-d113-4f04-ad65-7f50e592a597>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00852.warc.gz"} |
Fixed-Parameter Algorithms for the Weighted Max-Cut Problem on Embedded 1-Planar Graphs
AI-generated keywords: Weighted Max-Cut Problem
AI-generated Key Points
⚠The license of the paper does not allow us to build upon its content and the key points are generated using the paper metadata rather than the full article.
• The paper proposes two fixed-parameter tractable algorithms to solve the weighted Max-Cut problem on embedded 1-planar graphs.
• A graph is considered 1-planar if it can be drawn in the plane with at most one crossing per edge.
• The proposed algorithms are parameterized by the crossing number k of the given embedding and recursively reduce a 1-planar graph to at most 3^k planar graphs using edge removal and node
contraction techniques.
• Their main algorithm then solves the Max-Cut problem for these planar graphs using FCE-MaxCut, which was introduced by Liers and Pardella [21].
• In case of non-negative edge weights, they suggest a variant that allows solving planar instances with any planar Max-Cut algorithm.
• The authors demonstrate that a maximum cut in a given 1-planar graph can be derived from solutions for its corresponding planar graphs.
• Their algorithms compute a maximum cut in an embedded weighted 1-planar graph with n nodes and k edge crossings in time O(3^kn^{3/2} log n).
• This work is an extension of their conference version (available as arXiv:1803.10983), which is currently under review at TCS.
• The proposed algorithms have significant implications for real world applications such as network design and optimization problems where finding a maximum cut in a graph is crucial.
Authors: Christine Dahn, Nils M. Kriege, Petra Mutzel, Julian Schilling
This work is an extension of the conference version arXiv:1803.10983 , currently under review at TCS
Abstract: We propose two fixed-parameter tractable algorithms for the weighted Max-Cut problem on embedded 1-planar graphs parameterized by the crossing number k of the given embedding. A graph is
called 1-planar if it can be drawn in the plane with at most one crossing per edge. Our algorithms recursively reduce a 1-planar graph to at most 3^k planar graphs, using edge removal and node
contraction. Our main algorithm then solves the Max-Cut problem for the planar graphs using the FCE-MaxCut introduced by Liers and Pardella [21]. In the case of non-negative edge weights, we suggest
a variant that allows to solve the planar instances with any planar Max-Cut algorithm. We show that a maximum cut in the given 1-planar graph can be derived from the solutions for the planar graphs.
Our algorithms compute a maximum cut in an embedded weighted 1-planar graph with n nodes and k edge crossings in time O(3^k n^{3/2} log n).
Submitted to arXiv on 29 Nov. 2018
Results of the summarizing process for the arXiv paper: 1812.03074v1
Measuring the temperature distribution in an MD run
The following steps describe the extraction of the distribution of temperature for each species during an MD simulation, and the subsequent analysis using the R package through a fit to a gamma distribution.
Assuming that the Qbox output file for the MD simulation is md.r:
1) Extract the temperature of a species using qbox_species_temp.sh (provided in the util directory)
$ qbox_species_temp.sh <species_name> md.r > temp.dat
2) Start R:
$ R
> library(fitdistrplus)
> t<-read.table("temp.dat")[[1]]
# Fit to a gamma distribution using the maximum likelihood method (mle)
> fit<-fitdist(t,distr="gamma",method="mle")
# print the parameters of the fit
> summary(fit)
# plots of the fit
> plot(fit)
Note: the "fitdistrplus" R package must be installed. As root, use:
# R
> install.packages('fitdistrplus')
Note: the values of the temperature printed by qbox_species_temp.sh are
computed as:
( vx*vx + vy*vy + vz*vz ) * (2/3)*(1/2)*mass/kB
so that the output is the temperature in K. This allows for comparisons of the
fitted distributions of different species.
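The conversion can be sanity-checked outside the shell script. Below is a hedged Python sketch of the standard kinetic-temperature relation (3/2) kB T = (1/2) m v² per particle, assuming SI units; it is an illustration, not the actual qbox_species_temp.sh script:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def instantaneous_temperature(vx, vy, vz, mass_kg):
    """Kinetic temperature in K for one particle: T = (2/3)*(1/2)*m*v^2 / kB."""
    v2 = vx * vx + vy * vy + vz * vz
    return v2 * (2.0 / 3.0) * 0.5 * mass_kg / K_B

# Contrived check of the prefactors: with m = 3*kB and |v| = 1, T is exactly 1 K.
assert abs(instantaneous_temperature(1.0, 0.0, 0.0, 3.0 * K_B) - 1.0) < 1e-12
```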
The fit parameters are e.g.:
> summary(fit)
Fitting of the distribution ' gamma ' by maximum likelihood
Parameters :
estimate Std. Error
shape 1.35513831 1.109630e-02
rate 0.00499023 4.716253e-05
Loglikelihood: -142092.4 AIC: 284188.8 BIC: 284204.8
Correlation matrix:
shape rate
shape 1.0000000 0.8056883
rate 0.8056883 1.0000000
The shape and rate parameters characterize the gamma distribution.
From the R "help(rgamma)" output:
The Gamma distribution with parameters ‘shape’ = a and ‘scale’ = s
has density
f(x)= 1/(s^a Gamma(a)) x^(a-1) e^-(x/s)
for x >= 0, a > 0 and s > 0. (Here Gamma(a) is the function
implemented by R's ‘gamma()’ and defined in its help. Note that a
= 0 corresponds to the trivial distribution with all mass at point 0.)
The mean and variance are E(X) = a*s and Var(X) = a*s^2.
with scale = 1/rate
In the above example, shape=a=1.35513831 scale=s=1/0.00499023 so that
E(t) = a*s = 1.35513831/0.00499023 = 271.558
i.e. the mean temperature is 271.558 K
The temperature can also be computed in R from the fitted parameters as shape/rate, i.e. 271.558 K in this example.
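The same arithmetic can be checked outside R. A short Python sketch using the fitted values quoted above:

```python
# Gamma distribution: E(X) = shape * scale, Var(X) = shape * scale^2, scale = 1/rate.
shape, rate = 1.35513831, 0.00499023  # fitted parameters from summary(fit) above
scale = 1.0 / rate
mean_temp = shape * scale             # mean temperature in K
var_temp = shape * scale ** 2         # variance of the temperature
print(round(mean_temp, 3))            # 271.558
```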
Areas related to Circles - Class 10 - NCERT Solutions, MCQ [2023-24]
Updated for new NCERT - 2023-2024 Boards.
NCERT Solutions of all exercise questions and examples of Chapter 11 Class 10 Areas Related to Circles. Answers to all questions are available with video, free at Teachoo.
In this chapter, we will
• Revise our concepts about Area and Perimeter of Circle, and do some questions
• Then, we will see what arc, sector and segment of a circle is
• We will learn the formula for length of an arc
• and Area of sector
• Then, using Area of sector and Area of triangle formulas, we find Area of Segment
• We do some questions where different figures are combined, and we need to find their area and perimeter
In Concept Wise, the chapter is divided into concepts. First the concept is taught, and then we solve the questions related to that concept. All questions are ordered from easy to difficult: the easiest question comes first and the most difficult one is at the end.
HyVisual: A Hybrid System Visual Modeler
C. Brooks, A. Cataldo, E. A. Lee, J. Liu, X. Liu, S. Neuendorffer, H. Zheng
Technical Memorandum UCB/ERL M05/24, University of California, Berkeley, CA 94720, July 15, 2005.
The Hybrid System Visual Modeler (HyVisual) is a block-diagram editor and simulator for continuous-time dynamical systems and hybrid systems. Hybrid systems mix continuous-time dynamics, discrete
events, and discrete mode changes. This visual modeler supports construction of hierarchical hybrid systems. It uses a block-diagram representation of ordinary differential equations (ODEs) to define
continuous dynamics, and allows mixing of continuous-time signals with events that are discrete in time. It uses a bubble-and-arc diagram representation of finite state machines to define discrete
behavior driven by mode transitions.
In this document, we describe how to graphically construct models and how to interpret the resulting models. HyVisual provides a sophisticated numerical solver that simulates the continuous-time
dynamics, and effective use of the system requires at least a rudimentary understanding of the properties of the solver. This document provides a tutorial that will enable the reader to construct
elaborate models and to have confidence in the results of a simulation of those models. We begin by explaining how to describe continuous-time models of classical dynamical systems, and then progress
to the construction of mixed signal and hybrid systems.
The intended audience for this document is an engineer with at least a rudimentary understanding of the theory of continuous-time dynamical systems (ordinary differential equations and Laplace
transform representations), who wishes to build models of such systems, and who wishes to learn about hybrid systems and build models of hybrid systems.
HyVisual is built on top of Ptolemy II, a framework supporting the construction of such domain-specific tools. See Ptolemy II for more information.
Tricks to master Escalator questions of TSD
In this article we will be covering all types of questions related to escalator. Once you are through with the concept and are able to solve the questions in this article, you can solve any question
from this topic. Questions on escalator basically use the concept of Time, Speed and Distance. But we need to keep certain things in mind:
• The distance is in terms of the number of steps.
• The number of steps taken by the individual and the escalator is equal to the total number of steps on the escalator. The total number of steps will be the addition of the two in case the
individual is moving in the same direction as that of the escalator. It will be the subtraction of the two in case the escalator and the individual are moving in the opposite direction.
• The time taken by the individual to climb up or down the escalator is equal to the time for which escalator is moving.
Basically, questions asked from this topic can be of the following types:
• You are asked to calculate the number of steps of the escalator.
• You are asked to calculate the speed of the escalator.
• You are asked to calculate the steps taken by the person/escalator.
Now that we are clear with these points, let us put this concept to use and learn the art of solving such questions.
Solved Examples
Example 1: Bunty is going up the escalator. It takes Bunty 80 seconds to walk up the escalator which is moving upwards and 120 seconds to walk up the escalator which is moving downwards. Calculate
the time taken by Bunty to climb the escalator which is stationary.
Let Bunty's speed be "a" steps per second.
Let escalator's speed be "x" steps per second.
No. of steps (N) = 80a + 80x (since the escalator moves upwards along with Bunty, its 80x steps are added)
No. of steps = 120a - 120x (since the escalator now moves downwards against Bunty, its 120x steps are subtracted)
By equating the number of steps,
80a + 80x = 120a - 120x => 200x = 40a => x = a/5
N = 80a + 80x = 80a + 16a = 96a
Time taken when the escalator is stationary = 96a/a = 96 seconds.
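These relations are easy to verify numerically. A small Python check follows; the chosen speed is arbitrary, since only the ratios matter:

```python
a = 5.0                          # Bunty's speed in steps per second (any positive value)
x = a / 5                        # escalator speed, from 200x = 40a
steps_up = 80 * a + 80 * x       # total steps, counted going with the escalator
steps_down = 120 * a - 120 * x   # total steps, counted going against it
assert steps_up == steps_down    # both must equal the escalator's step count
print(steps_up / a)              # 96.0 seconds on a stationary escalator
```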
Example 2: Bunty and Bubli are climbing on a moving escalator that is going up. Bunty takes 60 steps to reach the top but Bubli takes 64 steps to reach the top. Bunty can take 3 steps in a second
while Bubli can take 4 steps in a second. Calculate the total number of steps in the escalator.
No. of steps= 60+ (60/3)x (Since Bunty is moving in the same direction, 20x will be added)
No. of steps=64+ (64/4)x (Since Bubli is moving in the same direction, 16x will be added)
On equating the number of steps,
x=1 step per second
Number of steps= 60+ (20*1) =80 steps.
Till now we have solved questions in which the speeds of individuals were given. What if the actual speeds are not given? Then how will we solve the questions? Let's see.
Example 3: Bunty and Bubli are climbing on a moving escalator that is going up. Bunty takes 60 steps to reach the top but Bubli takes 64 steps to reach the top. For every 3 steps that Bunty takes
Bubli takes 4 steps. Calculate the total number of steps in the escalator.
Here, we are given the ratios of the speed and not the exact speed.
So, let's assume the speed of Bunty to be 3a and that of Bubli to be 4a.
Let escalator's speed be x steps per second.
When Bunty takes 60 steps, escalator would have moved 60x/3a steps.
Therefore, number of steps= 60+60x/3a
When Bubli takes 64 steps, escalator would have moved 64x/4a steps.
Therefore, number of steps= 64+64x/4a
On equating the number of steps, we get,
60 + 60x/(3a) = 64 + 64x/(4a) => 60 + 20(x/a) = 64 + 16(x/a) => x/a = 1
Putting this value in 64+64x/4a, we get Number of Steps= 64+16=80
Therefore, Number of Steps=80
Example 4: Bunty is climbing up the moving escalator that is going up he takes 60 steps to reach the top while Bubli is coming down the same escalator. The speed’s ratio of Bunty and Bubli is 3:5.
Calculate the number of steps if it’s given that both of them take same time to reach the other end.
Let the speed of Bunty and Bubli be 3a and 5a steps per second respectively.
Let "t" be the time (in seconds) that each of them takes to reach the other end.
Number of steps= 3at+xt
Also, Number of steps can be written as 5at-xt
On equating the number of steps, we get
3at + xt = 5at – xt
xt=at => x=a
When Bunty takes 60 steps, the escalator would have moved 60x/(3a) steps, i.e. 60/3 = 20 steps (since x = a).
Therefore, the Number of Steps = 60 + 20 = 80.
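Example 4 can also be checked numerically in Python. The values below are chosen so that Bunty's 60 steps take exactly t seconds; only the 3:5 speed ratio is essential:

```python
a, t = 2.0, 10.0            # speed unit and travel time, with 60 / (3a) = t
x = a                        # escalator speed, from 3a*t + x*t = 5a*t - x*t
n_up = 3 * a * t + x * t     # Bunty's steps plus the escalator's contribution
n_down = 5 * a * t - x * t   # Bubli's steps minus the escalator's contribution
assert n_up == n_down        # both count the same physical staircase
print(n_up)                  # 80.0 steps
```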
Seventh Grade Mathematics | Student Handouts
Seventh Grade Mathematics
Expressions and Equations - Use properties of operations to generate equivalent expressions.
Solve real-life and mathematical problems using numerical and algebraic expressions and equations.
Geometry - Draw, construct and describe geometrical figures and describe the relationships between them.
Solve real-life and mathematical problems involving angle measure, area, surface area, and volume.
Number System - Apply and extend previous understandings of operations with fractions to add, subtract, multiply, and divide rational numbers.
Statistics and Probability - Use random sampling to draw inferences about a population. Draw informal comparative inferences about two populations. Investigate chance processes and develop, use, and
evaluate probability models.
Mathematical Practices: Make sense of problems and persevere in solving them. Reason abstractly and quantitatively. Construct viable arguments and critique the reasoning of others. Model with
mathematics. Use appropriate tools strategically. Attend to precision. Look for and make use of structure. Look for and express regularity in repeated reasoning.
In Grade 7, instructional time should focus on four critical areas: (1) developing understanding of and applying proportional relationships; (2) developing understanding of operations with rational
numbers and working with expressions and linear equations; (3) solving problems involving scale drawings and informal geometric constructions, and working with two- and three-dimensional shapes to
solve problems involving area, surface area, and volume; and (4) drawing inferences about populations based on samples.
Students extend their understanding of ratios and develop understanding of proportionality to solve single- and multi-step problems. Students use their understanding of ratios and proportionality to
solve a wide variety of percent problems, including those involving discounts, interest, taxes, tips, and percent increase or decrease. Students solve problems about scale drawings by relating
corresponding lengths between the objects or by using the fact that relationships of lengths within an object are preserved in similar objects. Students graph proportional relationships and
understand the unit rate informally as a measure of the steepness of the related line, called the slope. They distinguish proportional relationships from other relationships.
Students develop a unified understanding of number, recognizing fractions, decimals (that have a finite or a repeating decimal representation), and percents as different representations of rational
numbers. Students extend addition, subtraction, multiplication, and division to all rational numbers, maintaining the properties of operations and the relationships between addition and subtraction,
and multiplication and division. By applying these properties, and by viewing negative numbers in terms of everyday contexts (e.g., amounts owed or temperatures below zero), students explain and
interpret the rules for adding, subtracting, multiplying, and dividing with negative numbers. They use the arithmetic of rational numbers as they formulate expressions and equations in one variable
and use these equations to solve problems.
Students continue their work with area from Grade 6, solving problems involving the area and circumference of a circle and surface area of three-dimensional objects. In preparation for work on
congruence and similarity in Grade 8 they reason about relationships among two-dimensional figures using scale drawings and informal geometric constructions, and they gain familiarity with the
relationships between angles formed by intersecting lines. Students work with three-dimensional figures, relating them to two-dimensional figures by examining cross-sections. They solve real-world
and mathematical problems involving area, surface area, and volume of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes and right prisms.
Students build on their previous work with single data distributions to compare two data distributions and address questions about differences between populations. They begin informal work with
random sampling to generate data sets and learn about the importance of representative samples for drawing inferences.
Infinite Canvas Tutorial
Lesson 12 - Polylines
Let's continue adding basic shapes: polylines. In this lesson, you will learn the following:
• Why not use gl.LINES directly?
• Building Mesh in CPU or Shader
• Analyzing Shader details, including:
□ Stretching vertices and joints
□ Anti-aliasing
□ Drawing dashed lines
• How to calculate the bounding box of a polyline?
$icCanvas = call(() => {
    return document.createElement('ic-canvas-lesson12');
});

call(() => {
    const { Canvas, Polyline, Rect } = Lesson12;

    const stats = new Stats();
    const $stats = stats.dom;
    $stats.style.position = 'absolute';
    $stats.style.left = '0px';
    $stats.style.top = '0px';
    $icCanvas.parentElement.style.position = 'relative';
    $icCanvas.parentElement.appendChild($stats);

    $icCanvas.addEventListener('ic-ready', (e) => {
        const canvas = e.detail;

        const polyline1 = new Polyline({
            points: [
                [100, 100],
                [100, 200],
                [200, 100],
            ],
            stroke: 'red',
            strokeWidth: 20,
            fill: 'none',
        });

        const polyline2 = new Polyline({
            points: [
                [220, 100],
                [220, 200],
                [320, 100],
            ],
            stroke: 'red',
            strokeWidth: 20,
            strokeLinejoin: 'bevel',
            fill: 'none',
        });

        const polyline3 = new Polyline({
            points: [
                [340, 100],
                [340, 200],
                [440, 100],
            ],
            stroke: 'red',
            strokeWidth: 20,
            strokeLinejoin: 'round',
            strokeLinecap: 'round',
            fill: 'none',
        });

        const polyline4 = new Polyline({
            points: [
                [100, 300],
                [200, 300],
                [300, 210],
                [400, 300],
                [500, 300],
            ],
            stroke: 'red',
            strokeWidth: 20,
            strokeLinejoin: 'round',
            strokeLinecap: 'round',
            strokeDasharray: [10, 5],
            fill: 'none',
        });

        const rect2 = new Rect({
            x: 500,
            y: 100,
            fill: 'black',
            fillOpacity: 0.5,
            stroke: 'red',
            strokeWidth: 10,
            dropShadowBlurRadius: 10,
            dropShadowColor: 'black',
            dropShadowOffsetX: 10,
            dropShadowOffsetY: 10,
            strokeDasharray: [5, 5],
        });
        rect2.width = 100;
        rect2.height = 100;

        canvas.appendChild(polyline1);
        canvas.appendChild(polyline2);
        canvas.appendChild(polyline3);
        canvas.appendChild(polyline4);
        canvas.appendChild(rect2);
    });

    $icCanvas.addEventListener('ic-frame', (e) => {
        stats.update();
    });
});
Limitations of gl.LINES
The gl.LINES and gl.LINE_STRIP provided by WebGL are often not very practical in real scenarios:
• Do not support width. If we try to use lineWidth, common browsers such as Chrome will throw a warning:
As of January 2017 most implementations of WebGL only support a minimum of 1 and a maximum of 1 as the technology they are based on has these same limits.
• Unable to define the connection shape between adjacent line segments lineJoin and the shape of the endpoints lineCap
• The default implementation has noticeable jaggies, requiring additional anti-aliasing
It should be noted that the solution in Lesson 5 - Line Grid is not suitable for drawing arbitrary line segments; it can't even define the two endpoints of a line segment arbitrarily. In addition,
the biggest difference between line segments and polylines is the treatment at the joints, for which deck.gl provides LineLayer and PathLayer respectively.
Now let's clarify the features we want to implement for polylines:
• Support for arbitrary line widths
• Support for defining an arbitrary number of endpoints. Similar to the SVG points attribute.
• Support for connection shapes between adjacent line segments stroke-linejoin and endpoint shapes stroke-linecap
• Support for dashed lines. stroke-dashoffset and stroke-dasharray
• Good anti-aliasing effect
• Support for instanced drawing, see the previously introduced instanced drawing
Our designed API is as follows:
const line = new Polyline({
    points: [
        [0, 0],
        [100, 100],
    ],
    strokeWidth: 100,
    strokeLinejoin: 'round',
    strokeLinecap: 'round',
    strokeMiterlimit: 4,
    strokeDasharray: [4, 1],
    strokeDashoffset: 10,
});
Let's first look at the first question: how to implement arbitrary values of strokeWidth.
Building mesh
The following image comes from the WebGL meetup shared by Pixi.js: How 2 draw lines in WebGL. This article will heavily reference screenshots from it, and I will label the page numbers in the PPT.
Since native methods are not available, we can only return to the traditional drawing scheme of building Mesh.
How to draw line in WebGL - page 5
The common practice is to stretch and triangulate in the direction of the normal of the line segment. The following image comes from Drawing Antialiased Lines with OpenGL. The two endpoints of the
line segment are stretched to both sides along the red dashed line normal, forming 4 vertices, triangulated into 2 triangles, so strokeWidth can be any value.
extrude line
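The same stretch can be sketched in a few lines of Python. This is a CPU-side illustration of the idea, not the tutorial's actual code:

```python
import math

def extrude_segment(ax, ay, bx, by, stroke_width):
    """Stretch segment (a -> b) along its unit normal into 4 vertices / 2 triangles."""
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    nx, ny = dy / length, -dx / length  # unit normal, same convention as the shader
    h = stroke_width / 2
    return [
        (ax + nx * h, ay + ny * h),  # vertex 1
        (ax - nx * h, ay - ny * h),  # vertex 2
        (bx + nx * h, by + ny * h),  # vertex 3
        (bx - nx * h, by - ny * h),  # vertex 4
    ]

print(extrude_segment(0, 0, 10, 0, 2))
# [(0.0, -1.0), (0.0, 1.0), (10.0, -1.0), (10.0, 1.0)]
```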
Building on CPU
The stretching of the line segment and the Mesh construction for strokeLinejoin and strokeLinecap can be done either on the CPU or in the Shader. Implementations following the former approach include:
segment instance mesh
segment instance, lineCap and lineJoin meshes
It can be seen that when strokeLinejoin and strokeLinecap take the value of round, in order to make the rounded corners look smooth, the Mesh construction requires the most vertices. In
regl-gpu-lines, each segment requires up to 32 * 4 + 6 = 134 vertices:
// @see https://github.com/rreusser/regl-gpu-lines/blob/main/src/index.js#L81
cache.indexBuffer = regl.buffer(
new Uint8Array([...Array(MAX_ROUND_JOIN_RESOLUTION * 4 + 6).keys()]), // MAX_ROUND_JOIN_RESOLUTION = 32
);
Endpoints (strokeLinecap) and line segments need to be drawn in separate Drawcalls. Taking the instanced example of regl-gpu-lines again, it compiles two Programs and issues 3 Drawcalls, among which:
• The two endpoints use the same Program, but the Uniform orientation is different. The number of vertices is cap + join
• All middle line segments are drawn using one Drawcall, the number of vertices is join + join, and the number of instances is the number of line segments
const computeCount = isEndpoints
? // Draw a cap
(props) => [props.capRes2, props.joinRes2]
: // Draw two joins
(props) => [props.joinRes2, props.joinRes2];
If there are multiple polylines, they can be merged into one batch only when their strokeLinecap and strokeLinejoin values and their segment counts are the same. The following figure shows 5 polylines being drawn; each polyline's middle section contains 8 instances, so the total instance count is 40:
drawcalls for linecap and segments
Building in shader
From the WebGL meetup shared by Pixi.js, building Mesh in Shader:
Compared with building on the CPU, its advantages include:
• Only one Drawcall is needed to draw strokeLinecap strokeLineJoin and the middle segment
• The vertices are fixed at 9, where vertices 1234 form two triangles for drawing the line segment part, and vertices 56789 form three triangles for drawing the joint part
• When strokeLinecap strokeLinejoin take the value of round, it is smoother because a method similar to SDF drawing circles is used in the Fragment Shader
• Good anti-aliasing effect
pack joints into instances - page 15
layout(location = ${Location.PREV}) in vec2 a_Prev;
layout(location = ${Location.POINTA}) in vec2 a_PointA;
layout(location = ${Location.POINTB}) in vec2 a_PointB;
layout(location = ${Location.NEXT}) in vec2 a_Next;
layout(location = ${Location.VERTEX_JOINT}) in float a_VertexJoint;
layout(location = ${Location.VERTEX_NUM}) in float a_VertexNum;
The Buffer layout is as follows, with a stride of 4 * 3 bytes. Within the Buffer, the same contiguous data such as x1 y1 t1 is read as A_0 by the first instance and as Prev_1 by the second instance. This interleaved layout keeps the Buffer as small as possible:
const vertexBufferDescriptors: InputLayoutBufferDescriptor[] = [
    {
        arrayStride: 4 * 3,
        stepMode: VertexStepMode.INSTANCE,
        attributes: [
            {
                format: Format.F32_RG,
                offset: 4 * 0,
                shaderLocation: Location.PREV,
            },
            {
                format: Format.F32_RG,
                offset: 4 * 3,
                shaderLocation: Location.POINTA,
            },
            {
                format: Format.F32_R,
                offset: 4 * 5,
                shaderLocation: Location.VERTEX_JOINT,
            },
            {
                format: Format.F32_RG,
                offset: 4 * 6,
                shaderLocation: Location.POINTB,
            },
            {
                format: Format.F32_RG,
                offset: 4 * 9,
                shaderLocation: Location.NEXT,
            },
        ],
    },
];
Unfortunately, if we switch to the WebGPU renderer, we will get the following error:
Attribute offset (12) with format VertexFormat::Float32x2 (size: 8) doesn't fit in the vertex buffer stride (12).
The reason is that WebGPU has the following verification rule for VertexBufferLayout, and our arrayStride is 4 * 3. WebGPU instancing problem and spec: It is useful to allow
GPUVertexBufferLayout.arrayStride to be less than offset + sizeof(attrib.format) also mention this.
attrib.offset + byteSize(attrib.format) ≤ descriptor.arrayStride.
4 * 3 + 4 * 2 ≤ 4 * 3 // Oops!
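The failing check can be written out directly. In the layout above, the POINTB attribute sits at offset 4 * 3 bytes, a Float32x2 occupies 4 * 2 bytes, and the stride is only 4 * 3 bytes:

```python
# WebGPU vertex-layout validation: attrib.offset + byteSize(format) <= arrayStride.
def attribute_fits(offset_bytes, format_bytes, array_stride):
    return offset_bytes + format_bytes <= array_stride

print(attribute_fits(4 * 3, 4 * 2, 4 * 3))  # False, hence the error above
print(attribute_fits(4 * 0, 4 * 2, 4 * 3))  # True, e.g. the PREV attribute
```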
Therefore, we have to change the layout of the Buffer. First, in the Layout, we split the layout from one Buffer containing multiple Attributes to multiple Buffers, each containing only one
const vertexBufferDescriptors: InputLayoutBufferDescriptor[] = [
    {
        arrayStride: 4 * 3,
        stepMode: VertexStepMode.INSTANCE,
        attributes: [
            {
                format: Format.F32_RG,
                offset: 4 * 0,
                shaderLocation: Location.PREV,
            },
        ],
    },
    {
        arrayStride: 4 * 3,
        stepMode: VertexStepMode.INSTANCE,
        attributes: [
            {
                format: Format.F32_RG,
                offset: 4 * 0,
                shaderLocation: Location.POINTA,
            },
        ],
    },
    // Omit VERTEX_JOINT
    // Omit POINTB
    // Omit NEXT
];
Although split into multiple BufferLayout declarations, they all reference the same underlying Buffer; each attribute is simply read at a different offset. For details, see "Offset in bytes into buffer where the vertex data begins".
const buffers = [
    { buffer: this.#segmentsBuffer }, // PREV
    { buffer: this.#segmentsBuffer, offset: 4 * 3 }, // POINTA
    { buffer: this.#segmentsBuffer, offset: 4 * 5 }, // VERTEX_JOINT
    { buffer: this.#segmentsBuffer, offset: 4 * 6 }, // POINTB
    { buffer: this.#segmentsBuffer, offset: 4 * 9 }, // NEXT
];
renderPass.setVertexInput(this.#inputLayout, buffers, {
    buffer: this.#indexBuffer,
});
Shader implementation analysis
First, let's see how to stretch vertices at the main body and joints of the line segment.
Extrude segment
Let's focus on vertices 1 to 4, that is, the main body of the line segment. Depending on the angles the segment forms with its adjacent segments, there are four configurations: /-\, \-/, /-/ and \-\:
extrude along line segment - page 16
Before calculating the unit normal vector, convert the position of each vertex to the model coordinate system:
vec2 pointA = (model * vec3(a_PointA, 1.0)).xy;
vec2 pointB = (model * vec3(a_PointB, 1.0)).xy;
vec2 xBasis = pointB - pointA;
float len = length(xBasis);
vec2 forward = xBasis / len;
vec2 norm = vec2(forward.y, -forward.x);
xBasis2 = next - base;
float len2 = length(xBasis2);
vec2 norm2 = vec2(xBasis2.y, -xBasis2.x) / len2;
float D = norm.x * norm2.y - norm.y * norm2.x;
In the first form, for example, vertices 1 and 2 are stretched outward along the normal, and vertices 3 and 4 are stretched inward along the angle bisectors (doBisect()) of the joints:
if (vertexNum < 3.5) { // Vertex #1 ~ 4
if (abs(D) < 0.01) {
pos = dy * norm;
} else {
if (flag < 0.5 && inner < 0.5) { // Vertex #1, 2
pos = dy * norm;
} else { // Vertex #3, 4
pos = doBisect(norm, len, norm2, len2, dy, inner);
Extrude linejoin
Next, we focus on vertices 5~9 at the joint. The stretch direction and distance vary with the joint shape, and the original author's implementation is quite involved: bevel and round share the same vertex stretching, and round then finishes the rounded corner in the Fragment Shader using an SDF.
extrude along line segment - page 16
Let's start our analysis with the simplest miter, which by definition converts to bevel if strokeMiterlimit is exceeded.
if (length(pos) > abs(dy) * strokeMiterlimit) {
type = BEVEL;
} else {
if (vertexNum < 4.5) {
dy = -dy;
pos = doBisect(norm, len, norm2, len2, dy, 1.0);
} else if (vertexNum < 5.5) {
pos = dy * norm;
} else if (vertexNum > 6.5) {
pos = dy * norm2;
v_Type = 1.0;
dy = -sign * dot(pos, norm);
dy2 = -sign * dot(pos, norm2);
hit = 1.0;
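In scalar form, the test length(pos) > abs(dy) * strokeMiterlimit is the familiar SVG miter rule: since the miter offset has length dy / sin(theta/2) for a joint angle theta, the joint falls back to a bevel once 1 / sin(theta/2) exceeds the miter limit (SVG's default is 4). A small Python sketch of that rule:

```python
import math

def joint_type(theta_deg, stroke_miterlimit=4.0):
    """Return the effective join for a joint angle, per the miter-limit rule."""
    miter_ratio = 1.0 / math.sin(math.radians(theta_deg) / 2.0)
    return "bevel" if miter_ratio > stroke_miterlimit else "miter"

print(joint_type(90))  # miter (ratio ~1.41)
print(joint_type(20))  # bevel (ratio ~5.76)
```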
It is worth mentioning that in Cairo, whether to use round or bevel joints needs to be determined based on arc height. The following figure comes from: Cairo - Fix for round joins
Cairo - Fix for round joins
Finally, let's see how to anti-alias the edges of the line segment. We have introduced anti-aliasing in SDF before, and here we use a similar approach:
1. Calculate the vertical unit vector from the vertex to the line segment in the Vertex Shader and pass it to the Fragment Shader through varying for automatic interpolation
2. The interpolated vector is no longer a unit vector. Calculate its length, which is the perpendicular distance from the current pixel point to the line segment, within the range [0, 1]
3. Use this value to calculate the final transparency of the pixel point, completing anti-aliasing. smoothstep occurs at the edge of the line segment, that is, within the interval [linewidth -
feather, linewidth + feather]. The following figure comes from: Drawing Antialiased Lines with OpenGL, and the specific calculation logic will be introduced later.
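Steps 2 and 3 above can be sketched on the CPU as follows (a minimal sketch of the approach, not the author's exact shader code):

```javascript
// GLSL-style smoothstep on the CPU.
const smoothstep = (e0, e1, x) => {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t);
};

// d: interpolated perpendicular distance from the pixel to the centerline.
// Alpha fades from 1 to 0 across [lineWidth - feather, lineWidth + feather].
function edgeAlpha(d, lineWidth, feather) {
  return 1 - smoothstep(lineWidth - feather, lineWidth + feather, d);
}
```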
How large should this "feather" be? When drawing the rectangle's outer shadow earlier, we expanded the rectangle by 3 * dropShadowBlurRadius. The following figure again comes from How 2 draw lines in WebGL: expanding one pixel outward (from w to w + 1) is enough. On the other side, the signed distance of the two inner vertices (#3 and #4) is negative:
const float expand = 1.0;
lineWidth *= 0.5;
float dy = lineWidth + expand; // w + 1
if (vertexNum >= 1.5) { // Vertex #3 & #4
  dy = -dy; // -w - 1
}
From the bottom-right figure, it can also be seen that when we zoom in to a single pixel in the Fragment Shader, this signed distance d lets us calculate how much of the pixel the line segment covers (the area of the triangle in the following figure), achieving the anti-aliasing effect.
extend 1 pixel outside
So how do we use this distance to calculate coverage? Two cases need to be handled separately: the body of the line segment and the joints.
First, consider the body of the line segment, which can be reduced to the case of a vertical line. (The original author also provides a calculation that accounts for rotation, which differs little from this simplified estimate.) Use clamp to calculate the coverage on each side, taking very thin line widths into account, then subtract the left side from the right to get the final coverage, which serves as the alpha of the final color.
calculate coverage according to signed distance
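A one-dimensional sketch of this clamp-based estimate (our own transcription; the pixel is treated as a unit interval overlapped with the strip of half-width w):

```javascript
// Pixel coverage of a line of half-width w, where d is the signed
// perpendicular distance from the pixel center to the centerline.
// The pixel spans [d - 0.5, d + 0.5]; intersect it with the strip [-w, w].
function bodyCoverage(d, w) {
  const left = Math.max(d - 0.5, -w);
  const right = Math.min(d + 0.5, w);
  return Math.max(0, right - left);
}
```

A pixel centered exactly on the edge comes out half covered, and a hairline thinner than a pixel still receives partial coverage instead of disappearing.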
Of course, computing the intersection area between the line body and a straight edge is the simplest case; the treatment at joints and endpoints is more involved. Take the Miter joint as an example, again ignoring rotation and considering only adjacent segments that meet at a right angle (note the red box on the right of the following figure). Unlike the line body, which has a single signed distance d, the joint has two signed distances d1 and d2, one for each of the adjoining segments. Again allowing for a very thin line within the pixel, the covered area is the difference between two rectangles (a2 * b2 - a1 * b1):
calculate coverage on miter joint
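One way to express the a2 * b2 - a1 * b1 estimate in code (a sketch under our own sign conventions; the shader's actual distances depend on how d1 and d2 are oriented):

```javascript
// 1-D overlap of a unit pixel with a half-plane whose boundary sits at
// signed distance x from the pixel center.
const cut = (x) => Math.min(Math.max(x + 0.5, 0), 1);

// Coverage at a right-angle miter joint: d1/d2 are signed distances to the
// two segments, w is the half line width. The covered region is the outer
// rectangle minus the inner one.
function miterCoverage(d1, d2, w) {
  const a2 = cut(w - d1), b2 = cut(w - d2);   // outer rectangle sides
  const a1 = cut(-w - d1), b1 = cut(-w - d2); // inner rectangle sides
  return a2 * b2 - a1 * b1;
}
```

Along a straight edge (d2 deep inside the line) this degenerates to the same half-covered edge pixel as the body case.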
The calculation method for the Bevel joint is roughly the same as the Miter (the middle case in the following figure). d3 represents the distance from the pixel center to the "bevel line", and with it the coverage on the right of the figure can be computed. Taking the minimum of the two cases gives an approximate result.
calculate coverage on bevel joint
Finally, let's look at the case of rounded joints. It requires an additional distance d3 from the center of the circle to the pixel point (similar to SDF circle drawing), passed from the Vertex Shader:
calculate coverage on round joint
The original author also provided an exact version of pixelLine implementation, which will not be expanded due to space limitations.
Support for stroke-alignment
We previously implemented Enhanced SVG: Stroke alignment on Circle, Ellipse, and Rect drawn with SDF. Now let's add this attribute to polylines. The following figure comes from the lineStyle.alignment effect in Pixi.js, where the red line marks the geometric position of the polyline and the stroke shifts to either side depending on the value:
stroke-alignment - p27
In the Shader, we reflect this attribute in the offset along the normal stretch. If the strokeAlignment takes the value of center, the offset is 0:
float shift = strokeWidth * strokeAlignment;
pointA += norm * shift;
pointB += norm * shift;
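For the shift above to vanish at center, the keyword needs a numeric encoding. A hypothetical mapping (the exact values and names are our assumption; which sign means "outer" depends on the normal's orientation):

```javascript
// Hypothetical keyword-to-number mapping: with
// shift = strokeWidth * alignment, 'center' must map to 0 so the stroke
// stays centered on the geometry; inner/outer shift by half the width.
const STROKE_ALIGNMENT = { inner: -0.5, center: 0, outer: 0.5 };

function strokeShift(alignment, strokeWidth) {
  return strokeWidth * (STROKE_ALIGNMENT[alignment] ?? 0);
}
```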
From left to right are the effects of outer, center, and inner:
$icCanvas2 = call(() => {
return document.createElement('ic-canvas-lesson12');
call(() => {
const { Canvas, Polyline } = Lesson12;
const stats = new Stats();
const $stats = stats.dom;
$stats.style.position = 'absolute';
$stats.style.left = '0px';
$stats.style.top = '0px';
$icCanvas2.parentElement.style.position = 'relative';
$icCanvas2.addEventListener('ic-ready', (e) => {
const canvas = e.detail;
const polyline1 = new Polyline({
points: [
[100, 100],
[100, 200],
[200, 200],
[200, 100],
stroke: 'black',
strokeWidth: 20,
strokeAlignment: 'outer',
fill: 'none',
cursor: 'pointer',
const polyline4 = new Polyline({
points: [
[100, 100],
[100, 200],
[200, 200],
[200, 100],
stroke: 'red',
strokeWidth: 2,
// strokeAlignment: 'outer',
fill: 'none',
const polyline2 = new Polyline({
points: [
[220, 100],
[220, 200],
[320, 200],
[320, 100],
stroke: 'black',
strokeWidth: 20,
cursor: 'pointer',
fill: 'none',
const polyline5 = new Polyline({
points: [
[220, 100],
[220, 200],
[320, 200],
[320, 100],
stroke: 'red',
strokeWidth: 2,
fill: 'none',
const polyline3 = new Polyline({
points: [
[360, 100],
[360, 200],
[460, 200],
[460, 100],
stroke: 'black',
strokeWidth: 20,
strokeAlignment: 'inner',
fill: 'none',
cursor: 'pointer',
const polyline6 = new Polyline({
points: [
[360, 100],
[360, 200],
[460, 200],
[460, 100],
stroke: 'red',
strokeWidth: 2,
fill: 'none',
polyline1.addEventListener('pointerenter', () => {
polyline1.stroke = 'green';
polyline1.addEventListener('pointerleave', () => {
polyline1.stroke = 'black';
polyline2.addEventListener('pointerenter', () => {
polyline2.stroke = 'green';
polyline2.addEventListener('pointerleave', () => {
polyline2.stroke = 'black';
polyline3.addEventListener('pointerenter', () => {
polyline3.stroke = 'green';
polyline3.addEventListener('pointerleave', () => {
polyline3.stroke = 'black';
$icCanvas2.addEventListener('ic-frame', (e) => {
Finally, there are two points to note:
1. Since stroke-alignment is not a standard SVG attribute, the points must be recalculated when exporting to SVG, mirroring the normal and angle-bisector stretching logic in the Shader; we won't expand on this here due to space limitations.
2. The picking test, containsPoint, also needs to be computed against the offset vertices of points. Try changing the color of the polylines by moving the mouse in and out of the example above.
Dashed lines
First, calculate the distance each vertex has traveled from the starting point. Taking the polyline of [[0, 0], [100, 0], [200, 0]] as an example, the a_Travel values of the three instances are [0,
100, 200]. Calculate the stretched vertex distance in the Vertex Shader:
layout(location = ${Location.TRAVEL}) in float a_Travel;
out float v_Travel;
v_Travel = a_Travel + dot(pos - pointA, vec2(-norm.y, norm.x));
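On the CPU side, the per-instance a_Travel attribute is just the cumulative arc length from the starting point; a sketch:

```javascript
// Cumulative distance traveled from the start point for each vertex of a
// polyline given as [[x, y], …]. travel[i] is the a_Travel value for the
// instance starting at points[i].
function computeTravel(points) {
  const travel = [0];
  for (let i = 1; i < points.length; i++) {
    const [x0, y0] = points[i - 1];
    const [x1, y1] = points[i];
    travel.push(travel[i - 1] + Math.hypot(x1 - x0, y1 - y0));
  }
  return travel;
}
```

For [[0, 0], [100, 0], [200, 0]] this yields [0, 100, 200], matching the example above.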
In the Fragment Shader, pass in the values of stroke-dasharray and stroke-dashoffset. Different from the SVG standard, we only support stroke-dasharray of length 2 for the time being, that is, dashed
lines like [10, 5, 2] are not supported.
in float v_Travel;

float u_Dash = u_StrokeDash.x;
float u_Gap = u_StrokeDash.y;
float u_DashOffset = u_StrokeDash.z;

if (u_Dash + u_Gap > 1.0) {
  float travel = mod(v_Travel + u_Gap * v_ScalingFactor * 0.5 + u_DashOffset, u_Dash * v_ScalingFactor + u_Gap * v_ScalingFactor) - (u_Gap * v_ScalingFactor * 0.5);
  float left = max(travel - 0.5, -0.5);
  float right = min(travel + 0.5, u_Gap * v_ScalingFactor + 0.5);
  alpha *= antialias(max(0.0, right - left));
}
We can also increment stroke-dashoffset in real time to achieve a "marching ants" effect. Such animations are usually implemented through the SVG attribute of the same name; see: How to animate along an SVG path at the same time the path animates?
$icCanvas3 = call(() => {
return document.createElement('ic-canvas-lesson12');
call(() => {
const { Canvas, Polyline } = Lesson12;
const stats = new Stats();
const $stats = stats.dom;
$stats.style.position = 'absolute';
$stats.style.left = '0px';
$stats.style.top = '0px';
$icCanvas3.parentElement.style.position = 'relative';
let polyline1;
$icCanvas3.addEventListener('ic-ready', (e) => {
const canvas = e.detail;
polyline1 = new Polyline({
points: [
[100, 100],
[100, 200],
[200, 200],
[200, 100],
stroke: 'black',
strokeWidth: 20,
strokeDasharray: [10, 10],
strokeDashoffset: 0,
fill: 'none',
cursor: 'pointer',
const polyline2 = new Polyline({
points: [
[300, 100],
[300, 200],
[500, 200],
[500, 100],
stroke: 'black',
strokeWidth: 10,
strokeDasharray: [2, 10],
strokeDashoffset: 0,
strokeLinecap: 'round',
strokeLinejoin: 'round',
fill: 'none',
cursor: 'pointer',
polyline1.addEventListener('pointerenter', () => {
polyline1.stroke = 'green';
polyline1.addEventListener('pointerleave', () => {
polyline1.stroke = 'black';
$icCanvas3.addEventListener('ic-frame', (e) => {
polyline1.strokeDashoffset += 0.1;
Another implementation method uses fract(), see: Pure WebGL Dashed Line.
According to the SVG specification, the attributes stroke-dasharray and stroke-dashoffset can also be applied to other shapes such as Circle / Ellipse / Rect. Therefore, when these two attributes have meaningful values, outlines originally drawn with SDF must switch to the Polyline implementation. Taking Rect as an example, up to 3 drawcalls may be needed: the outer shadow, the body of the rectangle, and the dashed outline:
SHAPE_DRAWCALL_CTORS.set(Rect, [ShadowRect, SDF, SmoothPolyline]);
Taking Rect as an example, we construct a polyline from its x / y / width / height attributes, comprising 6 vertices. Note that the first 5 already close the shape; we add an extra [x + epsilon, y] to complete the final strokeLinejoin. Circle and Ellipse are similar, just with more sampling points to ensure smoothness:
if (object instanceof Polyline) {
  points = object.points.reduce((prev, cur) => {
    prev.push(cur[0], cur[1]);
    return prev;
  }, [] as number[]);
} else if (object instanceof Rect) {
  const { x, y, width, height } = object;
  points = [
    x, y,
    x + width, y,
    x + width, y + height,
    x, y + height,
    x, y,
    x + epsilon, y,
  ];
}
$icCanvas5 = call(() => {
return document.createElement('ic-canvas-lesson12');
call(() => {
const { Canvas, Rect, Circle, Ellipse } = Lesson12;
const stats = new Stats();
const $stats = stats.dom;
$stats.style.position = 'absolute';
$stats.style.left = '0px';
$stats.style.top = '0px';
$icCanvas5.parentElement.style.position = 'relative';
$icCanvas5.addEventListener('ic-ready', (e) => {
const canvas = e.detail;
const rect = new Rect({
x: 50,
y: 50,
fill: 'black',
fillOpacity: 0.5,
dropShadowBlurRadius: 10,
dropShadowColor: 'black',
dropShadowOffsetX: 10,
dropShadowOffsetY: 10,
stroke: 'red',
strokeWidth: 10,
rect.width = 100;
rect.height = 100;
const rect2 = new Rect({
x: 200,
y: 50,
fill: 'black',
fillOpacity: 0.5,
stroke: 'red',
strokeWidth: 10,
dropShadowBlurRadius: 10,
dropShadowColor: 'black',
dropShadowOffsetX: 10,
dropShadowOffsetY: 10,
strokeDasharray: [5, 5],
rect2.width = 100;
rect2.height = 100;
const circle = new Circle({
cx: 400,
cy: 100,
r: 50,
fill: 'black',
stroke: 'red',
strokeWidth: 20,
strokeDasharray: [5, 5],
const circle2 = new Circle({
cx: 550,
cy: 100,
r: 50,
fill: 'black',
stroke: 'red',
strokeWidth: 20,
strokeDasharray: [5, 20],
strokeAlignment: 'inner',
const ellipse = new Ellipse({
cx: 150,
cy: 250,
rx: 100,
ry: 50,
fill: 'black',
stroke: 'red',
strokeWidth: 20,
strokeDasharray: [5, 5],
$icCanvas5.addEventListener('ic-frame', (e) => {
Calculating the bounding box
Let's temporarily step out of rendering and do some geometric calculations. As introduced in previous lessons, bounding boxes need to be calculated in both picking and culling.
Ignoring drawing attributes such as line width, the calculation of the geometric bounding box is very simple. Just find the minimum and maximum coordinates of all vertices of the polyline:
const minX = Math.min(...points.map((point) => point[0]));
const maxX = Math.max(...points.map((point) => point[0]));
const minY = Math.min(...points.map((point) => point[1]));
const maxY = Math.max(...points.map((point) => point[1]));
return new AABB(minX, minY, maxX, maxY);
Once line width, caps, and joints are involved, calculating the bounding box of a polyline becomes more complex. If a precise result is not required, you can simply extend the geometric bounding box above outward by half the line width. Calculate bounding box of line with thickness uses the cairo-stroke-extents method provided by Cairo (which needs special handling when the line width is 0), documented as:
Computes a bounding box in user coordinates covering the area that would be affected, (the "inked" area)
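The half-line-width padding mentioned above can be sketched as follows (names are ours; this ignores caps and miter spikes, which the Cairo estimate below accounts for):

```javascript
// Cheap AABB estimate for a stroked polyline: pad the geometric bounding
// box by half the stroke width on every side.
function expandByHalfStroke({ minX, minY, maxX, maxY }, strokeWidth) {
  const half = strokeWidth / 2;
  return {
    minX: minX - half,
    minY: minY - half,
    maxX: maxX + half,
    maxY: maxY + half,
  };
}
```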
Digging further into the Cairo source code, we find that it provides two methods for stroke bounding boxes (omitting a large number of parameters here). The former uses an estimate and is therefore faster, while the latter considers the specific shapes of caps and joints for a precise calculation:
cairo_private void
_cairo_path_fixed_approximate_stroke_extents ();
cairo_private cairo_status_t
_cairo_path_fixed_stroke_extents ();
Quick Estimation
This estimation refers to expanding a certain distance outward along the horizontal and vertical directions on the basis of the geometric bounding box: style_expansion * strokeWidth.
/*
 * For a stroke in the given style, compute the maximum distance
 * from the path that vertices could be generated. In the case
 * of rotation in the ctm, the distance will not be exact.
 */
_cairo_stroke_style_max_distance_from_path (const cairo_stroke_style_t *style,
                                            const cairo_path_fixed_t *path,
                                            const cairo_matrix_t *ctm,
                                            double *dx, double *dy)
{
    double style_expansion = 0.5;

    if (style->line_cap == CAIRO_LINE_CAP_SQUARE)
        style_expansion = M_SQRT1_2;

    if (style->line_join == CAIRO_LINE_JOIN_MITER &&
        ! path->stroke_is_rectilinear &&
        style_expansion < M_SQRT2 * style->miter_limit)
    {
        style_expansion = M_SQRT2 * style->miter_limit;
    }

    style_expansion *= style->line_width;
    /* ... */
}
Considering the case of stroke-linecap="square", the following figure shows that in the most ideal situation, style_expansion equals 0.5, that is, extending 0.5 * strokeWidth from the red body, and
the black area is the bounding box of the <polyline>.
But if the polyline is slightly tilted at 45 degrees, the distance extended outward at this time is sqrt(2) / 2 * strokeWidth:
Similarly, the case of stroke-linejoin="miter" also needs to be considered. It can be seen that this estimation method will not precisely consider every vertex and joint, but only make the most
optimistic estimate to ensure that the bounding box can accommodate the polyline.
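The Cairo estimate above can be transcribed to JavaScript (property names are ours; the Cairo function also accounts for the CTM, omitted here):

```javascript
// JS transcription of Cairo's _cairo_stroke_style_max_distance_from_path
// estimate: how far a stroked vertex can land from the path.
function maxDistanceFromPath({ lineCap, lineJoin, miterLimit, lineWidth, rectilinear }) {
  let styleExpansion = 0.5;
  if (lineCap === 'square') styleExpansion = Math.SQRT1_2;
  if (lineJoin === 'miter' && !rectilinear && styleExpansion < Math.SQRT2 * miterLimit) {
    styleExpansion = Math.SQRT2 * miterLimit;
  }
  return styleExpansion * lineWidth;
}
```

With butt caps and bevel joins the padding is exactly half the line width; a square cap raises it to sqrt(2) / 2 of the width, and a non-rectilinear miter join to sqrt(2) * miterLimit times the width.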
Below we draw the bounding box of the polyline in real time, showing the different values of strokeLinecap from left to right:
$icCanvas6 = call(() => {
return document.createElement('ic-canvas-lesson12');
call(() => {
const { Canvas, Polyline, Rect } = Lesson12;
const stats = new Stats();
const $stats = stats.dom;
$stats.style.position = 'absolute';
$stats.style.left = '0px';
$stats.style.top = '0px';
$icCanvas6.parentElement.style.position = 'relative';
function drawBounds(canvas, polyline) {
const { minX, minY, maxX, maxY } = polyline.getBounds();
const bounds = new Rect({
x: minX,
y: minY,
stroke: 'red',
fill: 'none',
bounds.width = maxX - minX;
bounds.height = maxY - minY;
$icCanvas6.addEventListener('ic-ready', (e) => {
const canvas = e.detail;
const polyline1 = new Polyline({
points: [
[100, 100],
[200, 200],
stroke: 'black',
strokeWidth: 20,
fill: 'none',
cursor: 'pointer',
drawBounds(canvas, polyline1);
const polyline2 = new Polyline({
points: [
[300, 100],
[400, 200],
stroke: 'black',
strokeWidth: 20,
strokeLinecap: 'round',
fill: 'none',
cursor: 'pointer',
drawBounds(canvas, polyline2);
const polyline3 = new Polyline({
points: [
[500, 100],
[600, 200],
stroke: 'black',
strokeWidth: 20,
strokeLinecap: 'square',
fill: 'none',
cursor: 'pointer',
drawBounds(canvas, polyline3);
$icCanvas6.addEventListener('ic-frame', (e) => {
Precise calculation
What if you really want to calculate precisely? Cairo's idea is to first convert the stroke into a Polygon, and then calculate that polygon's bounding box:
stroke extents
_cairo_path_fixed_stroke_extents (const cairo_path_fixed_t *path,
                                  const cairo_stroke_style_t *stroke_style,
                                  const cairo_matrix_t *ctm,
                                  const cairo_matrix_t *ctm_inverse,
                                  double tolerance,
                                  cairo_rectangle_int_t *extents)
{
    cairo_polygon_t polygon;
    cairo_status_t status;
    cairo_stroke_style_t style;

    _cairo_polygon_init (&polygon, NULL, 0);
    status = _cairo_path_fixed_stroke_to_polygon (path, stroke_style,
                                                  ctm, ctm_inverse,
                                                  tolerance, &polygon);
    _cairo_box_round_to_rectangle (&polygon.extents, extents);
    _cairo_polygon_fini (&polygon);

    return status;
}
Performance testing
Let's test the performance, showing several polylines each containing 20,000 points:
$icCanvas4 = call(() => {
return document.createElement('ic-canvas-lesson12');
call(() => {
const { Canvas, Polyline } = Lesson12;
const stats = new Stats();
const $stats = stats.dom;
$stats.style.position = 'absolute';
$stats.style.left = '0px';
$stats.style.top = '0px';
$icCanvas4.parentElement.style.position = 'relative';
let polyline1;
$icCanvas4.addEventListener('ic-ready', (e) => {
const canvas = e.detail;
const data = new Array(20000)
.fill(undefined) // .map skips holes, so the array must be filled first
.map((_, i) => [i, Math.random() * 50]);
polyline1 = new Polyline({
points: data,
stroke: 'black',
strokeWidth: 2,
fill: 'none',
cursor: 'pointer',
const data2 = new Array(20000)
.fill(undefined)
.map((_, i) => [i, Math.random() * 50 + 100]);
polyline2 = new Polyline({
points: data2,
stroke: 'black',
strokeWidth: 2,
strokeLinejoin: 'round',
fill: 'none',
cursor: 'pointer',
const data3 = new Array(20000)
.fill(undefined)
.map((_, i) => [i, Math.random() * 50 + 200]);
polyline3 = new Polyline({
points: data3,
stroke: 'black',
strokeWidth: 2,
strokeDasharray: [4, 4],
fill: 'none',
cursor: 'pointer',
$icCanvas4.addEventListener('ic-frame', (e) => {
It seems not bad, but after careful consideration, there are still the following issues, which can be considered as future improvement directions:
• Because each Instance uses 15 vertices and the Buffer has a size limit, the number of vertices a single polyline can actually contain is limited
• Currently, one polyline corresponds to one Drawcall. What if there are a large number of similar repeated polylines? regl-gpu-lines provides two ideas:
□ One Drawcall can also draw multiple polylines, using [NaN, NaN] to indicate breakpoints, example: Multiple lines
□ If the vertex data of multiple polylines is the same, and only the offset is different, then each polyline can be regarded as an Instance. Of course, the vertices inside each polyline need to
be expanded, example: Fake instancing
• Simplify vertices based on current camera zoom level
Below we continue to optimize along the above lines.
Polyline with multiple segments
Following the earlier idea of reducing the number of Drawcalls, we can splice multiple polylines together, using some kind of separator. Following regl-gpu-lines, we use [NaN, NaN]. We will need this again soon when drawing paths: a path may contain several subpaths!
Polyline1: [[0, 0], [100, 100]]
Polyline2: [[100, 0], [200, 100]]
MultiPolyline: [[0, 0], [100, 100], [NaN, NaN], [100, 0], [200, 100]]
After splitting by separator, the vertex array is still constructed for each segment in the same way as above:
const subPaths = [];
let lastNaNIndex = 0;
for (let i = 0; i < points.length; i += stridePoints) {
  if (isNaN(points[i]) || isNaN(points[i + 1])) {
    subPaths.push(points.slice(lastNaNIndex, i));
    lastNaNIndex = i + 2;
  }
}
subPaths.push(points.slice(lastNaNIndex)); // don't forget the trailing subpath
subPaths.forEach((points) => {
  // Omit constructing each segment
});
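Applied to the MultiPolyline example above, the separator split can be sketched as a standalone function (assuming a flat [x0, y0, x1, y1, …] array with stride 2):

```javascript
// Split a flat coordinate array into subpaths at [NaN, NaN] separators.
function splitByNaN(points) {
  const subPaths = [];
  let last = 0;
  for (let i = 0; i < points.length; i += 2) {
    if (Number.isNaN(points[i]) || Number.isNaN(points[i + 1])) {
      subPaths.push(points.slice(last, i));
      last = i + 2; // skip the separator pair
    }
  }
  subPaths.push(points.slice(last)); // trailing subpath
  return subPaths;
}
```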
The effect is as follows, with the following notes:
• Since multiple polylines are merged into one, picking also treats them as a whole. Try hovering the mouse over the three sets of lines below.
• When exporting to SVG, it is no longer possible to export directly to the corresponding <polyline> element.
$icCanvas7 = call(() => {
return document.createElement('ic-canvas-lesson12');
call(() => {
const { Canvas, Polyline } = Lesson12;
const stats = new Stats();
const $stats = stats.dom;
$stats.style.position = 'absolute';
$stats.style.left = '0px';
$stats.style.top = '0px';
$icCanvas7.parentElement.style.position = 'relative';
$icCanvas7.addEventListener('ic-ready', (e) => {
const canvas = e.detail;
const data = new Array(200).fill(undefined).map((_, i) => [
[Math.random() * 200, Math.random() * 200],
[Math.random() * 200, Math.random() * 200],
[NaN, NaN],
const polyline = new Polyline({
points: data.flat(1),
stroke: 'black',
strokeWidth: 2,
strokeLinecap: 'round',
cursor: 'pointer',
() => (polyline.stroke = 'red'),
() => (polyline.stroke = 'black'),
const data2 = new Array(200).fill(undefined).map((_, i) => [
[Math.random() * 200 + 200, Math.random() * 200],
[Math.random() * 200 + 200, Math.random() * 200],
[NaN, NaN],
const polyline2 = new Polyline({
points: data2.flat(1),
stroke: 'black',
strokeWidth: 2,
strokeLinecap: 'round',
cursor: 'pointer',
() => (polyline2.stroke = 'green'),
() => (polyline2.stroke = 'black'),
const data3 = new Array(200).fill(undefined).map((_, i) => [
[Math.random() * 200 + 400, Math.random() * 200],
[Math.random() * 200 + 400, Math.random() * 200],
[NaN, NaN],
const polyline3 = new Polyline({
points: data3.flat(1),
stroke: 'black',
strokeWidth: 2,
strokeLinecap: 'round',
cursor: 'pointer',
() => (polyline3.stroke = 'blue'),
() => (polyline3.stroke = 'black'),
$icCanvas7.addEventListener('ic-frame', (e) => {
[WIP] Merge similar polylines
[WIP] Simplify polyline
For polylines (and subsequently Paths and Polygons) that contain a large number of vertices, an important optimization is to simplify them according to the current zoom level, reducing the amount of
rendered data as much as possible. The basis for simplification is twofold:
• Segments that are too short and polygons that are too small can be filtered out.
• Vertices in a polyline that have little impact on the overall shape can be filtered out.
The basic algorithm for polyline vertex simplification is the Ramer-Douglas-Peucker algorithm, which works as follows:
1. Keep the first and last vertices of the polyline and connect them with a line segment.
2. Among the remaining vertices, find the one farthest from this segment and record its distance.
3. If the distance is less than a threshold, discard the vertex; if it is greater, keep it.
4. The kept vertex partitions the polyline into two sub-segments; apply the steps from 1 to each of them recursively.
We can use simplify-js, which is based on this algorithm.
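A minimal sketch of the algorithm itself (our own implementation, not simplify-js):

```javascript
// Perpendicular distance from point p to the line through a and b.
function perpDist(p, a, b) {
  const dx = b[0] - a[0], dy = b[1] - a[1];
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
  return Math.abs(dx * (p[1] - a[1]) - dy * (p[0] - a[0])) / len;
}

// Ramer-Douglas-Peucker: keep only vertices farther than epsilon from the
// segment joining the endpoints, recursing on both halves.
function rdp(points, epsilon) {
  if (points.length < 3) return points.slice();
  let maxDist = 0, index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpDist(points[i], points[0], points[points.length - 1]);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= epsilon) return [points[0], points[points.length - 1]];
  return rdp(points.slice(0, index + 1), epsilon)
    .slice(0, -1) // drop duplicate split point
    .concat(rdp(points.slice(index), epsilon));
}
```

Collinear interior vertices collapse to the endpoints, while any vertex farther than epsilon from the chord survives.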
Other Issues
So far, we have completed the basic drawing work of polylines. Finally, let's take a look at other related issues. Due to space limitations, some issues will be detailed in future lessons.
In some scenarios, we do not want the graphics to change size with the camera zoom, such as the bounding box wireframe and the size labels below it when selecting a shape in Figma:
size attenuation in Figma
This is called sizeAttenuation in Three.js. In Perspective projection mode, Sprites become smaller as the camera depth increases. We will implement this later when we implement the selected UI.
Line Path and Polygon
In SVG, there are still three elements: <line> <path> and <polygon>, among which:
• <line> does not need to consider strokeLinejoin, so the number of vertices used can be simplified
• The filling part of <polygon> can be drawn after triangulation using some algorithms such as earcut, and the outline part is exactly the same as the polyline
• <path> can also be sampled along the path in a similar way to <rect> <circle>, and finally drawn with polylines, but this introduces problems such as: Draw arcs, arcs are not smooth ISSUE
We will introduce in detail how to draw them in the next lesson.
Extended Reading
Duality Principle in Boolean Algebra with Examples
• Digital Electronics Course
• Computer Programming
• Web Development
Duality Principle in Boolean Algebra with Examples
The "duality principle" is one of the most important topics that can be studied in Boolean algebra, and this article was written and distributed in order to provide a description of it. In addition
to that, I will provide an illustration to more concretely illustrate it. Now, let's get started by defining what it is.
What is duality principle?
The duality principle, also known as the principle of duality, is an essential property that is primarily used to prove a variety of theorems in Boolean algebra.
It states that in a two-valued Boolean algebra, the dual of an algebraic expression is obtained by exchanging all OR and AND operators with each other and replacing every 1 with 0 and every 0 with 1.
Before moving on to the examples, let's go through the individual steps that make up the duality theorem; an example follows to help solidify the material.
Steps used in the Duality Theorem
Here are some of the main steps used to solve the duality theorem in boolean algebra:
• Change each AND operation to an OR operation.
• Change each OR operation to an AND operation.
• Replace 0 with 1.
• Replace 1 with 0.
To put it more simply: every AND becomes an OR (and vice versa), and every 0 becomes a 1 (and vice versa).
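Since the transformation is purely mechanical, it can be sketched as a character substitution (assuming '.' for AND and '+' for OR, as in the examples below; variables and complements are left untouched):

```javascript
// Dual of a Boolean expression string: swap AND/OR and 0/1 in one pass.
function dual(expr) {
  const swap = { '.': '+', '+': '.', '0': '1', '1': '0' };
  return [...expr].map((ch) => swap[ch] ?? ch).join('');
}
```

For example, dual('1.0') gives '0+1', and dual('A.(A + B)') gives 'A+(A . B)'.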
In order to fully comprehend the subject at hand, it is now time to examine some relevant examples.
Duality Principle Example
Let's take an example to illustrate how to apply all the steps used in the duality principle in a practical way. This example gives you some ideas about how to convert or apply the duality theorem or
principle to any boolean expression.
The dual of the preceding statement is:
As you can see here, we have done the following:
• changed first 1 to 0
• changed OR (+) to AND (.)
• changed first 0 to 1
• changed second 1 to 0
Now, let me demonstrate several examples of the duality principles by using the table that is provided below. In this table, the boolean expression will be written in the first column, and the duals
of that expression will be written in the second column.
Expression | Dual
1' = 0 | 0' = 1
0' = 1 | 1' = 0
1.0 = 0 | 0 + 1 = 1
0.1 = 0 | 1 + 0 = 1
1 + 0 = 1 | 0.1 = 0
0 + 1 = 1 | 1.0 = 0
A.0 = 0 | A + 1 = 1
0.A = 0 | 1 + A = 1
A.1 = A | A + 0 = A
1.A = A | 0 + A = A
A.A' = 0 | A + A' = 1
A.B = B.A | A + B = B + A
A.(B.C) = (A.B).C | A + (B + C) = (A + B) + C
A.(A + B) = A | A + A.B = A
A.B + C + B.C.A = 0 | (A + B).C.(B + C + A) = 1
(Here X' denotes the complement of X; complements are unchanged by duality.)
Regarding the "duality principle," I believe that we have now covered sufficient ground in terms of examples and specifics. You are welcome to get in touch with us at any time if you have any
inquiries, questions, or suggestions regarding this article or any other one. The link to the contact page can be found at the very bottom of the article.
In a class test, the sum of Shefali’s marks in Mathematics and English is 30. Had she got 2 marks more in Mathematics and 3 marks less in English, the product of their marks would have been 210. Find her marks in the two subjects.
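A worked solution (our own derivation, following the standard quadratic-equation approach):

```latex
\text{Let Shefali's marks in Mathematics be } x \text{, so her marks in English are } 30 - x.\\[4pt]
(x + 2)\,(30 - x - 3) = 210 \;\Rightarrow\; (x + 2)(27 - x) = 210\\[2pt]
\Rightarrow\; x^2 - 25x + 156 = 0 \;\Rightarrow\; (x - 12)(x - 13) = 0
```

So x = 12 or x = 13, giving Mathematics 12 and English 18, or Mathematics 13 and English 17. Check: (12 + 2)(18 - 3) = 14 x 15 = 210, and (13 + 2)(17 - 3) = 15 x 14 = 210.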
NCERT Solutions for Class 10 Maths Chapter 4
Important NCERT Questions
Quadratic Equation
NCERT Books for Session 2022-2023
CBSE Board, UP Board and other state boards
EXERCISE 4.3
Page No:88
Question No: 5
Vamshi Jandhyala - Points on a circle
Riddler Classic
If N points are generated at random places on the perimeter of a circle, what is the probability that you can pick a diameter such that all of those points are on only one side of the newly halved circle?
Each of the \(N\) points determines a diameter of the circle. The probability of all the other \(N-1\) points falling on one side of diameter determined by the first point is given by \(\frac{1}{2^
{N-1}}\). Therefore the probability of picking a diameter such that all of those points are on one side of the newly halved circle is \(\sum_{i=1}^N\frac{1}{2^{N-1}} = \frac{N}{2^{N-1}}\).
Computational verification
from math import pi
from random import uniform
from collections import defaultdict
def total_angle(pts, n):
rl = pts[n:] + pts[:n]
nl = [d - pts[n] if d - pts[n] >= 0 else 2*pi + d - pts[n] for d in rl]
angles = [x - nl[i - 1] for i, x in enumerate(nl)][1:]
return sum(angles)
runs = 100000
N = 10
cnt_suc = defaultdict(int)
for n in range(3, N):
for _ in range(runs):
pts = sorted(uniform(0, 2*pi) for _ in range(n))  # sort so list neighbors are angular neighbors
min_angle = min([total_angle(pts, r) for r in range(n)])
if min_angle <= pi:
cnt_suc[n] += 1
print("%d random points, Estimated probability %f, \
Theoretical probability %f" % (n, cnt_suc[n]/runs, n/2**(n-1)))
Whole Number Operations Review
Review concepts of long division and multi-digit multiplication with this worksheet.
Review How to Multiply and Divide Whole Numbers
This whole number operations review is broken into four sections. Students should complete a section and get it checked by the teacher before moving on so that the teacher can help catch any
misunderstandings and assist the students prior to concluding the unit. If the section just has minor errors, allow the student to try to correct it on their own. If there are major misconceptions,
the teacher can work one-on-one with that student before they move on. When a section is complete, the student can color in a portion of the progress tracker on page 1 and try another section.
Students can complete sections in any order, allowing student choice and encouraging personal responsibility.
Sections include:
• Completing tables with missing factors, products, and quotients
• Determining what operation/s to use for word problems
• Fill-in-the-blank numerical sentences and vocabulary words
• Completing area models and fact families
• Standard algorithm
• Word problems
To encourage students to check their work carefully and use their resources when stuck, such as their notes, the teacher can provide an incentive or reward for accurately completing a section on the
first attempt. This could include ringing a bell, getting a sticker, getting a standing ovation from the class, etc.
An answer key is included with your download to make grading fast and easy!
Tips for Differentiation + Scaffolding
In addition to independent student work time, use this worksheet as an activity for:
• Whole-class review (via smartboard)
Ask students who need a challenge to try showing their work in multiple ways to encourage use of different strategies. Allow students to peer tutor or create anchor charts to hang up during unit
review time.
For students who may need additional support, allow the use of a calculator or multiplication chart and work one-on-one or in a small group. Additionally, questions can be read aloud, the numbers within the questions can be modified, and completed examples and vocabulary cards can be provided for reference.
🖨️ Easily Download & Print
Use the dropdown icon on the Download button to choose between the PDF or editable Google Slides version of this resource.
Because this resource includes an answer sheet, we recommend you print one copy of the entire file. Then, make photocopies of the blank worksheet for students to complete.
To save paper, we suggest printing this multi-page worksheet double-sided.
Additionally, project the worksheet onto a screen and work through it as a class by having students record their answers in their notebooks.
Get more worksheets to have handy!
This resource was created by Lorin Davies, a teacher in Texas and Teach Starter Collaborator.
Consider using this review activity to prepare your students for an assessment covering whole number operations.
Assess student understanding of adding, subtracting, multiplying, and dividing whole numbers with this math assessment.
Exponential Length Substrings in Pattern Matching - Codeforces
Hi all,
I would like to share with you a part of my undergraduate thesis on a Multi-String Pattern Matcher data structure. In my opinion, it's easy to understand and hard to implement correctly and
efficiently. It's (relatively) competitive against other MSPM data structures (Aho-Corasick, suffix array/automaton/tree to name a few) when the dictionary is uncommonly large.
I would also like to sign up this entry to bashkort's Month of Blog Posts:-) Many thanks to him and peltorator for supporting this initiative.
This work describes a hash-based mass-searching algorithm, finding (count, location of first match) entries from a dictionary against a string $$$s$$$ of length $$$n$$$. The presented implementation
makes use of all substrings of $$$s$$$ whose lengths are powers of $$$2$$$ to construct an offline algorithm that can, in some cases, reach a complexity of $$$O(n \log^2n)$$$ even if there are $$$O(n
^2)$$$ possible matches. If there is a limit on the dictionary size $$$m$$$, then the precomputation complexity is $$$O(m + n \log^2n)$$$, and the search complexity is bounded by $$$O(n\log^2n + m\log n)$$$, even though in practice it performs like $$$O(n\log^2n + \sqrt{nm}\log n)$$$. Other applications, such as finding the number of distinct substrings of $$$s$$$ for each length between $$$1$$$
and $$$n$$$, can be done with the same algorithm in $$$O(n\log^2n)$$$.
Problem Description
We want to write an offline algorithm for the following problem, which receives as input a string $$$s$$$ of length $$$n$$$, and a dictionary $$$ts = \{t_1, t_2, .., t_{\lvert ts \rvert}\}$$$. As
output, it expects for each string $$$t$$$ in the dictionary the number of times it is found in $$$s$$$. We could also ask for the position of the first occurrence of each $$$t$$$ in $$$s$$$, but the
blog mainly focuses on the number of matches.
Algorithm Description
We will build a DAG in which every node is mapped to a substring from $$$s$$$ whose length is a power of $$$2$$$. We will draw edges between any two nodes whose substrings are consecutive in $$$s$$$.
The DAG has $$$O(n \log n)$$$ nodes and $$$O(n \log^2 n)$$$ edges.
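A brute-force sketch of this construction for small $$$n$$$ (the function name and representation are mine, and the starter node described later is not yet included): each node is a (start, length) pair with a power-of-two length, and an edge joins two nodes whose substrings are consecutive in $$$s$$$, with strictly decreasing lengths so that paths correspond to decreasing-length chains.

```python
def build_dag(s):
    """Nodes: (start, length) for every substring of s whose length is a
    power of two. Edges: u -> v when v starts exactly where u ends and is
    strictly shorter, so paths correspond to decreasing-length chains."""
    n = len(s)
    nodes, L = [], 1
    while L <= n:
        nodes += [(i, L) for i in range(n - L + 1)]
        L <<= 1
    edges = [(u, v) for u in nodes for v in nodes
             if v[0] == u[0] + u[1] and v[1] < u[1]]
    return nodes, edges
```

For $$$s$$$ of length $$$6$$$ this gives $$$6 + 5 + 3 = 14$$$ nodes; a real implementation would enumerate edges directly rather than with the quadratic scan above.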
We will break every $$$t_i \in ts$$$ down into a chain of substrings of $$$2$$$-exponential length in strictly decreasing order (e.g. if $$$\lvert t \rvert = 11$$$, we will break it into $$$\{t
[1..8], t[9..10], t[11]\}$$$). If $$$t_i$$$ occurs $$$k$$$ times in $$$s$$$, we will find $$$t_i$$$'s chain $$$k$$$ times in the DAG. Intuitively, it makes sense to split each $$$t_i$$$ like this, since any chain has length $$$O(\log n)$$$.
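The greedy decomposition into blocks of strictly decreasing power-of-two lengths can be sketched like this (the helper name is mine; the block lengths simply follow the binary representation of $$$\lvert t \rvert$$$):

```python
def chain_split(t):
    """Split t into substrings of strictly decreasing power-of-two
    lengths, following the binary representation of len(t)."""
    parts, i = [], 0
    while i < len(t):
        # largest power of two not exceeding the remaining length
        p = 1 << ((len(t) - i).bit_length() - 1)
        parts.append(t[i:i + p])
        i += p
    return parts

assert chain_split("abcdefghijk") == ["abcdefgh", "ij", "k"]  # lengths 8, 2, 1
```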
Figure 1: The DAG for $$$s = (ab)^3$$$. If $$$ts = \{aba, baba, abb\}$$$, then $$$t_0 = aba$$$ is found twice in the DAG, $$$t_1 = baba$$$ once, and $$$t_2 = abb$$$ zero times.
Redundancy Elimination: Associated Trie, Tokens, Trie Search
A generic search for $$$t_0 = aba$$$ in the DAG would check if any node marked as $$$ab$$$ would have a child labeled as $$$a$$$. $$$t_2 = abb$$$ is never found, but a part of its chain is
($$$ab$$$). We have to check all $$$ab$$$s to see if any may continue with a $$$b$$$, but we have already located every $$$ab$$$ when checking whether any continue with an $$$a$$$ for $$$t_0$$$, making the second set of checks partially redundant.
Figure 2: If the chains of $$$t_i$$$ and $$$t_j$$$ have a common prefix, it is inefficient to count the number of occurrences of the prefix twice. We will put all the $$$t_i$$$ chains in a trie. We
will keep the hashes of the values on the trie edges.
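Those hashes can be precomputed by doubling: the hash of a length-$$$2L$$$ substring combines the hashes of its two length-$$$L$$$ halves, giving all $$$O(n \log n)$$$ power-of-two substring hashes. The sketch below uses an illustrative modulus and base, not the PRF-wrapped scheme mentioned later:

```python
MOD, BASE = (1 << 61) - 1, 131  # illustrative parameters

def pow2_hashes(s):
    """h[(i, L)] = polynomial hash of s[i:i+L] for every power-of-two L."""
    n = len(s)
    h = {(i, 1): ord(c) for i, c in enumerate(s)}
    L = 1
    while 2 * L <= n:
        shift = pow(BASE, L, MOD)  # weight of the left half
        for i in range(n - 2 * L + 1):
            h[(i, 2 * L)] = (h[(i, L)] * shift + h[(i + L, L)]) % MOD
        L *= 2
    return h
```

Equal substrings get equal hashes, e.g. the two $$$ab$$$s of $$$ababa$$$ at positions 0 and 2.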
In order to generalize all of chain searches in the DAG, we will add a starter node that points to all other nodes in the DAG. Now all DAG chains begin in the same node.
The actual search will go through the trie and find help in the DAG. The two data structures cooperate through tokens. A token is defined by both its value (the index of the DAG node it is at) and its position (the trie node it is at).
Trie Search steps
Token count bound and Token propagation complexity
Property 1: There is a one-to-one mapping between tokens and $$$s$$$’ substrings. Any search will generate $$$O(n^2)$$$ tokens.
Proof 1: If two tokens would describe the same substring of $$$s$$$ (value-wise, not necessarily position-wise), they would both be found in the same trie node, since the described substrings result
from the concatenation of the labels on the trie paths from the root.
Now, since the two tokens are in the same trie node, they can either have different DAG nodes attached (so different ids), meaning they map to different substrings (i.e. different starting
positions), or they belong to the same DAG node, so one of them is a duplicate: contradiction, since they were both propagated from the same father.
As a result, we can only generate in a search as many tokens as there are substrings in $$$s$$$, $$$n(n+1)/2 + 1$$$, also accounting for the empty substring (Starter Node's token). $$$\square$$$
Figure 3: $$$s = aaa, ts = \{a, aa, aaa\}$$$. There are two tokens that map to the same DAG node with the id $$$5$$$, but they are in different trie nodes: one implies the substring $$$aaa$$$, while
the other implies (the third) $$$a$$$.
Property 2: Ignoring the Starter Node's token, any token may divide into $$$O(\log n)$$$ children tokens. We can propagate a child in $$$O(1)$$$ (the cost of the membership test (in practice, against a hashtable containing rolling hashes passed through a Pseudo-Random Function)). Therefore, we can propagate a token in $$$O(\log n)$$$, except the starter node's token, which takes $$$O(n \log n)$$$.
As a result, we have a pessimistic bound of $$$O(n^2 \log n)$$$ for a search.
Another redundancy: DAG suffix compression
Property 3: If we have two different tokens in the same trie node, but their associated DAG nodes’ subtrees are identical, it is useless to propagate the second token anymore. We should only
propagate the first one, and count twice any finding that it will have.
Proof 3: If we don’t merge the tokens that respect the condition, their children will always coexist in any subsequent trie nodes: if one gets propagated, the other will as well, or none get
propagated. We can apply this recursively to cover the entire DAG subtree. $$$\square$$$
Intuitively, we can consider two tokens to be the same if they share the present and their future cannot be different, no matter their past.
Figure 4: The DAG from Figure 1 post suffix compression. We introduce the notion of leverage. If we compress $$$k$$$ nodes into each other, the resulting one will have a leverage equal to $$$k$$$. We
were able to compress two out of the three $$$ab$$$ nodes. If we are to search now for $$$aba$$$, we would only have to only propagate one token $$$ab \rightarrow a$$$ instead of the original two.
Property 4: The number of findings given by a compressed chain is the minimum leverage on that chain (i.e. we found $$$bab$$$ $$$\min([2, 3]) = 2$$$ times, or $$$ab$$$ $$$\min([2]) + \min([1]) = 3$$$
times). Also, the minimum on the chain is always given by the first DAG node on the chain (that isn't the Starter Node).
Proof 4: Let $$$ch$$$ be a compressed DAG chain. If $$$1 \leq i < j \leq \lvert ch \rvert$$$, then $$$node_j$$$ is in $$$node_i$$$'s subtree. The more we go into the chain, the less restrictive the
compression requirement becomes (i.e. fewer characters need to match to unite two nodes).
If we united $$$node_i$$$ with another node $$$x$$$, then there must be a node $$$y$$$ in $$$x$$$'s subtree that we can unite with $$$node_j$$$. Therefore, $$$lev_{node_i} \leq lev_{node_j}$$$, so
the minimum leverage is $$$lev_{node_1}$$$. $$$\square$$$
Figure 5: Practically, we compress nodes with the same subtree hash. Notice that the subtree of a DAG node is composed of characters that follow sequentially in $$$s$$$.
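A sketch of how the grouping in Figures 4-5 can be realized (the function name is mine; a real implementation would group by subtree hash rather than by the raw characters): a node of power-of-two length $$$L$$$ starting at position $$$i$$$ spells $$$s[i..i+L-1]$$$, and its descendants can consume at most $$$L-1$$$ further characters, so grouping same-length nodes by that window yields the leverages.

```python
from collections import Counter

def leverages(s, L):
    """Group DAG nodes of power-of-two length L by subtree content;
    each group's size is the leverage of the compressed node."""
    n = len(s)
    return Counter(s[i:min(n, i + 2 * L - 1)] for i in range(n - L + 1))

# s = (ab)^3: two of the three 'ab' nodes share the subtree content 'aba'
lev = leverages("ababab", 2)
assert lev["aba"] == 2 and lev["ab"] == 1
```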
Remark: While the search complexity for the first optimization (adding the trie) isn't too difficult to point down, the second optimization (the more powerful DAG compression) is much harder to work
with: the worst case is much more difficult to find (the compression has obviously been chosen specifically to move the worst case away))).
Complexity computation: without DAG compression
We will model a function $$$NT : \mathbb{N}^* \rightarrow \mathbb{N}, NT(m) = $$$ if the dictionary character count is at most $$$m$$$, what is the maximum number of tokens that we can generate?
Property 5: If $$$t'$$$ cannot be found in $$$s$$$, let $$$t$$$ be its longest prefix that can be found in $$$s$$$. Then, no matter if we would have $$$t$$$ or $$$t'$$$ in any $$$ts$$$, the number of
generated tokens would be the same. As a result, we would prefer adding $$$t$$$, since it would increase the size of the dictionary by a smaller amount. This means that in order to maximize $$$NT(m)
$$$, we would include only substrings of $$$s$$$ in $$$ts$$$.
If $$$l = 2^{l_1} + 2^{l_2} + 2^{l_3} + ... + 2^{l_{last}}$$$, with $$$l_1 > l_2 > l_3 > ... > l_{last}$$$, then we can generate at most $$$\sum_{i = 1}^{last} (n+1 - 2^{l_i})$$$ tokens by adding all possible strings of length $$$l$$$ into $$$ts$$$. However, we can generate that many tokens with only one string of length $$$l$$$ if $$$s = a^n$$$ and $$$t = a^l$$$.
As a consequence, the upper bound if no DAG compression is performed is given by the case $$$s = a^n$$$, with $$$ts$$$ being a subset of $$$\{a^1, a^2, .., a^n\}$$$.
Figure 6: Consider the "full trie" to be the complementary trie filled with all of the options that we may want to put in $$$ts$$$. We will segment it into sub-tries. Let $$$ST(k)$$$ be a sub-trie containing the substrings $$$\{a^{2^k}, .., a^{2^{k+1}-1}\} \,\, \forall \, k \geq 0$$$. The sub-tries are marked here with A, B, C, D, ...
Property 6: Let $$$1 \leq x = 2^{x_1} + .. + 2^{x_{last}} < y = x + 2^{y_1} + .. + 2^{y_{last}} \leq n$$$, with $$$2^{x_{last}} > 2^{y_1}$$$. We will never hold both $$$a^x$$$ and $$$a^y$$$ in
$$$ts$$$, because $$$a^y$$$ already accounts all tokens that could be added by $$$a^x$$$: $$$a^x$$$'s chain in the full trie is a prefix of $$$a^y$$$'s chain.
We will devise a strategy of adding strings to an initially empty $$$ts$$$, such that we maximize the number of generated tokens for some momentary values of $$$m$$$. We will use two operations:
• upgrade: If $$$x < y$$$ respect Property 6, $$$a^x \in ts,\, a^y \notin ts$$$, then replace $$$a^x$$$ with $$$a^y$$$ in $$$ts$$$.
• add: If $$$a^y \notin ts$$$, and $$$\forall \, a^x \in ts$$$, we can't upgrade $$$a^x$$$ to $$$a^y$$$, add $$$a^y$$$ to $$$ts$$$.
Property 7: If we upgrade $$$a^x$$$, we will add strictly under $$$x$$$ new characters to $$$ts$$$, since $$$2^{y_1} + .. + 2^{y_{last}} < 2 ^ {x_{last}} \leq x$$$.
Property 8: We don't lose optimality if we only use add for only $$$y$$$s that are powers of two. If $$$y$$$ isn't a power of two, we can use add on the most significant power of two of $$$y$$$, then
upgrade it to the full $$$y$$$.
Property 9: We also don't lose optimality if we upgrade from $$$a^x$$$ to $$$a^y$$$ only if $$$y - x$$$ is a power of two (let this be called minigrade). Otherwise, we can upgrade to $$$a^y$$$ by a
sequence of minigrades.
Statement 10: We will prove by induction that (optimally) we will always exhaust all upgrade operations before performing an add operation.
Proof 10: We have to start with an add operation. The string that maximizes the number of added tokens ($$$n$$$) also happens to have the lowest length ($$$1$$$), so we will add $$$a^1$$$ to
$$$ts$$$. Notice that we have added/upgraded all strings from sub-trie $$$ST(0)$$$, ending the first step of the induction.
For the second step, assume that we have added/upgraded all strings to $$$ts$$$ from $$$ST(0), .., ST(k-1)$$$, and we want to prove that it's optimal to add/upgrade all strings from $$$ST(k)$$$
before exploring $$$ST(k+1), ...$$$.
By enforcing Property 8, the best string to add is $$$a^{2^k}$$$, generating the most tokens, while being the shortest remaining one.
operation | token gain | $$$\lvert ts \rvert$$$ gain
add $$$a^{2^{k+1}}$$$ | $$$n+1 - 2^{k+1}$$$ | $$$2^{k+1}$$$
any minigrade | $$$\geq n+1 - (2^{k+1}-1)$$$ | $$$< 2^k$$$
Afterwards, we gain more tokens by doing any minigrade on $$$ST(k)$$$, while increasing $$$m$$$ by a smaller amount than any remaining add operation. Therefore, it's optimal to first finish all
minigrades from $$$ST(k)$$$ before moving on. $$$\square$$$
We aren't interested in the optimal way to perform the minigrades. Now, for certain values of $$$m$$$, we know upper bounds for $$$NT(m)$$$:
subtries in ts | m | NT(m)
$$$ST(0)$$$ | $$$1$$$ | $$$n$$$
$$$ST(0), ST(1)$$$ | $$$1+3$$$ | $$$n + (n-1) + (n-2)$$$
$$$ST(0), .., ST(2)$$$ | $$$1+3+(5+7)$$$ | $$$n + ... + (n-6)$$$
$$$ST(0), .., ST(3)$$$ | $$$1+3+(5+7)+(9+11+13+15)$$$ | $$$n + ... + (n-14)$$$
$$$ST(0), .., ST(k-1)$$$ | $$$1+3+5+7+...+(2^k-1) = 4^{k-1}$$$ | $$$n + ... + (n+1 - (2^k-1))$$$
Figure 7: The entries in the table are represented by goldenrod crosses.
We know that if $$$m \leq m'$$$, then $$$NT(m) \leq NT(m')$$$, so we'll upper bound $$$NT$$$ by forcing $$$NT(4^{k-2}+1) = NT(4^{k-2}+2) = ... = NT(4^{k-1}) \,\, \forall \, k \geq 2$$$ (represented with blue segments in figure 7). We will also assume that the goldenrod crosses are on the blue lines as well, represented here as blue dots.
We need to bound:
$$$\forall \, m = 4^{k-1}, \,\, NT(m) \leq n + (n-1) + ... + (n+1 - (2^{k+1} - 1))$$$
$$$\Rightarrow NT(m) \leq n \cdot (1 + 2^{k+1}) = n + n \cdot 2^{k+1}$$$
But if $$$m = 4^{k-1} \Rightarrow 2^{k+1} = 4 \sqrt m \Rightarrow NT(m) \leq n + 4 n \sqrt m$$$.
So the whole $$$NT$$$ function is bounded by $$$NT(m) \leq n + 4 n \sqrt m$$$, the red line in figure 7. $$$NT(m) \in O(n \sqrt m)$$$.
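As a quick numeric sanity check of the bound (the helper name is mine), the tabulated values $$$NT(4^{k-1}) = n + (n-1) + ... + (n+1-(2^k-1))$$$ indeed stay below the red line $$$n + 4n\sqrt m$$$:

```python
import math

def nt_at_table_point(n, k):
    """NT when ts covers ST(0)..ST(k-1) for s = a^n, using m = 4^(k-1)
    dictionary characters: n + (n-1) + ... + (n+1 - (2^k - 1))."""
    return sum(n - j for j in range(2**k - 1))

n = 10**4
for k in range(1, 8):
    m = 4**(k - 1)
    assert nt_at_table_point(n, k) <= n + 4 * n * math.sqrt(m)
```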
The total complexity of the search part is now also bounded by $$$O(n \sqrt m \log n)$$$, so the total program complexity becomes $$$O(m + n \log^2 n + \min(n \sqrt m \log n, n^2 \log n))$$$.
Corner case Improvement
We will now see that adding the DAG suffix compression moves the worst case away from $$$s = a^n$$$ and $$$ts = \{a^1, .., a^n\}$$$. We will prove that the trie search complexity post compression for
$$$s = a^n$$$ is $$$O(n \log^2n)$$$.
Figure 8: A trie node's "father edge" unites it with its parent in the trie.
Statement 11: If a trie node’s "father edge" is marked as $$$a^{2^x}$$$, then there can be at most $$$2^x$$$ tokens in that trie node.
Proof 11: If the "father edge" is marked as $$$a^{2^x}$$$, then any token in that trie node can only progress for at most another $$$2^x-1$$$ characters in total. For example, in figure 8, a token
can further progress along its path for at most another three characters, possibly going through $$$a^2$$$ and $$$a$$$.
Since all characters are $$$a$$$ here, the only possible DAG subtree configurations accessible from this trie node are the ones formed from the substrings $$$\{a^0, a^1, a^2, a^3, ..., a^{2^x-1}\}$$$,
so $$$2^x$$$ possible subtrees (figure 9).
Figure 9: For example, consider a token's associated DAG node with the value of $$$a^4$$$, and $$$s'$$$ to be the substring that led to that trie node (i.e. the concatenation of the labels on the
trie path from the root).
If we have more than $$$2^x$$$ tokens in the trie node, then there must exist two tokens who belong to the same DAG subtree configuration, meaning that their nodes would have been united during the
compression phase. Therefore, we can discard one of them.
So the number of tokens in a trie node cannot be bigger than the number of distinct subtrees that can follow from the substring that led to that trie node (in this case $$$2^x$$$). $$$\square$$$
Figure 10: For example, the trie search after the DAG compression for $$$s = a^7$$$, $$$ts = \{a^1, a^2, .., a^7\}$$$.
Let $$$trie(n)$$$ be the trie built from the dictionary $$$\{a^1, a^2, a^3, ..., a^n\}$$$.
Let $$$cnt_n(2^x)$$$ be the number of nodes in $$$trie(n)$$$ that have their "father edge" labeled as $$$a^{2^x}$$$. For example, in figure 10, $$$cnt_7(2^0) = 2^2$$$, $$$cnt_7(2^1) = 2^1$$$, and
$$$cnt_7(2^2) = 2^0$$$.
Pick $$$k$$$ such that $$$2^{k-1} \leq n < 2^k$$$. Then, $$$trie(n) \subseteq trie(2^k-1)$$$. $$$trie(2^k-1)$$$ would generate at least as many tokens as $$$trie(n)$$$ during the search.
Statement 12: We will prove by induction that $$$trie(2^k-1)$$$ has $$$cnt_{2^k-1}(2^x) = 2^{k-1-x} \,\, \forall \, x \in [0, k-1]$$$.
Proof 12: If $$$k = 1$$$, then $$$trie(1)$$$ has only one outgoing edge from the root node labeled $$$a^1 \Rightarrow cnt_1(2^0) = 1 = 2^{1-1-0}$$$.
We will now suppose that we have proven this statement for $$$trie(2^{k-1}-1)$$$ and we will try to prove it for $$$trie(2^k-1)$$$.
Figure 11: The expansion from $$$trie(2^{k-1}-1)$$$ to $$$trie(2^k-1)$$$.
As a result:
• $$$cnt_{2^k-1}(1) = 2 \cdot cnt_{2^{k-1}-1}(1) = 2^{k-2} \cdot 2 = 2^{k-1}$$$,
• $$$cnt_{2^k-1}(2) = 2 \cdot cnt_{2^{k-1}-1}(2) = 2^{k-2}$$$,
• ...
• $$$cnt_{2^k-1}(2^{k-2}) = 2 \cdot cnt_{2^{k-1}-1}(2^{k-2}) = 2^1$$$,
• $$$cnt_{2^k-1}(2^{k-1}) = 1 = 2^0. \,\, \square$$$
We will now count the number of tokens for $$$trie(2^k-1)$$$:
$$$cnt \leq \sum_{i=0}^{k-1} cnt_{2^k-1}(2^i) \cdot 2^i = 2^{k-1} \cdot 2^0 + 2^{k-2} \cdot 2^1 + .. + 2^0 \cdot 2^{k-1} = 2^{k-1} \cdot k$$$
So there are $$$O(k \cdot 2^{k-1})$$$ tokens in $$$trie(2^k-1)$$$. Since $$$2^{k-1} \leq n < 2^k \Rightarrow k-1 \leq \log_{2}n < k \Rightarrow k \leq \log_{2}n+1$$$.
Therefore, $$$2^{k-1} \leq n \Rightarrow O(k \cdot 2^{k-1}) \subset O((\log_{2}n + 1) \cdot n) = O(n \log n)$$$, so for $$$s = a^n$$$, its associated trie search will generate $$$O(n \log n)$$$
tokens, and the trie search will take $$$O(n\log^2n)$$$.
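The closed form above, $$$\sum_{i=0}^{k-1} cnt_{2^k-1}(2^i) \cdot 2^i = k \cdot 2^{k-1}$$$, is easy to spot-check numerically, since every term equals $$$2^{k-1}$$$ (helper name is mine):

```python
def token_count(k):
    """sum of cnt_{2^k-1}(2^i) * 2^i with cnt_{2^k-1}(2^i) = 2^(k-1-i)."""
    return sum(2**(k - 1 - i) * 2**i for i in range(k))

for k in range(1, 20):
    assert token_count(k) == k * 2**(k - 1)
```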
We can fit the entire program's complexity in $$$O(n\log^2n)$$$ if we give the dictionary input directly in hash chain form.
Complexity computation: with DAG compression
This is the most important part of the blog. We will use the following definitions:
• If $$$\lvert t \rvert = 2^{k_1} + 2^{k_2} + 2^{k_3} + ... + 2^{k_{last}}$$$, with $$$k_1 > k_2 > k_3 > ... > k_{last}$$$, then let $$$get(t, i) = t[2^{k_1} + 2^{k_2} + .. + 2^{k_{i-1}} : 2^{k_1}
+ 2^{k_2} + .. + 2^{k_{i}}], i \geq 1$$$. For example, if $$$t = ababc$$$, $$$get(t, 1) = abab, get(t, 2) = c$$$.
• Similarly, if $$$p = 2^{k_1} + 2^{k_2} + .. + 2^{k_{last}}$$$, let $$$LSB(p) = 2^{k_{last}}$$$.
• If $$$t$$$ is a substring of $$$s$$$, and we know $$$t$$$'s position in $$$s$$$ ($$$t = s[i..j]$$$), then let $$$shade(s, t) = s[i.. \min(n, j+LSB(j-i+1)-1)]$$$ (i.e. the shade of $$$t$$$ is the
content of its DAG node subtree). For example, if $$$s = asdfghjkl$$$, $$$shade(s, sdfg) = sdfg + hjk$$$, $$$shade(s, a) = a$$$, $$$shade(s, jk) = jk + l$$$, $$$shade(s, jkl) = jkl$$$. A shade
that prematurely ends (i.e. if $$$n$$$ would be larger, the shade would be longer) may be ended with $$$\varnothing$$$: $$$shade(s, kl) = kl\varnothing$$$.
• Let $$$sub(s, t)$$$ denote the number of node chains from $$$s$$$' DAG post-compression whose values are equal to $$$t$$$. For example, $$$sub(abababc, ab) = 2$$$. Notice that $$$sub(s, t)$$$ is
equal to the number of distinct values that $$$shade(s, t)$$$ takes.
• Let $$$popcount(p)$$$ be the number of bits equal to 1 in $$$p$$$'s representation in base 2.
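These definitions translate almost directly into code. The sketch below (my naming, 1-indexed inclusive ranges as in the text) reproduces the examples given above, e.g. $$$sub(abababc, ab) = 2$$$:

```python
def lsb(x):
    # lowest set bit of x, i.e. 2^(number of trailing zeros)
    return x & -x

def shade(s, i, j):
    # shade(s, s[i..j]) = s[i .. min(n, j + LSB(j-i+1) - 1)], 1-indexed
    end = min(len(s), j + lsb(j - i + 1) - 1)
    return s[i - 1:end]

def sub(s, t):
    # number of distinct shade values over all occurrences of t in s
    m = len(t)
    return len({shade(s, i + 1, i + m)
                for i in range(len(s) - m + 1) if s[i:i + m] == t})
```

For $$$s = asdfghjkl$$$ this gives $$$shade(s, sdfg) = sdfghjk$$$, matching the example, and $$$sub(abababc, ab) = 2$$$.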
Statement 13: We choose to classify the trie search tokens in two categories: tokens that are alone (or not alone) in their trie nodes. An alone token describes a distinct substring of $$$s$$$. Thus,
it can only have alone tokens as children (their substring equivalents having a distinct prefix of $$$s$$$). We can easily find decent bounds for the number of alone tokens depending on $$$m$$$. We
will shortly see that the maximum number of non-alone tokens is relatively small and is independent of $$$m$$$.
Figure 12: For example, $$$s = a^3$$$ generates two non-alone tokens (for the two $$$a^2$$$s) and two alone tokens (for $$$a^3$$$ and the $$$a$$$ s), not counting the starter node's token.
Statement 14: There are at most $$$O(m)$$$ alone tokens in the trie.
Figure 13: Any $$$t_i$$$ can add at most $$$O(popcount(length(t_i)))$$$ alone tokens in the trie.
Proof 14: For each string in the dictionary, let $$$z_i = \lvert t_i \rvert \,\, \forall \, 1 \leq i \leq \lvert ts \rvert$$$, and $$$z_i = 0 \,\, \forall \, \lvert ts \rvert < i \leq m$$$.
The maximum number of alone tokens in the trie is:
$$$\max \sum_{i=1}^{m} popcount(z_i)$$$
$$$s.t. \, \sum_{i=1}^{m}z_i = m$$$
But $$$\sum_{i=1}^{m} popcount(z_i) \leq \sum_{i=1}^{m} z_i = m$$$, achievable when $$$z_i = 1 \,\, \forall \, 1 \leq i \leq m$$$. $$$\square$$$
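A small illustration of this bound (the helper name is mine): each dictionary string of length $$$z$$$ contributes at most $$$popcount(z)$$$ alone tokens, and $$$popcount(z) \leq z$$$:

```python
def max_alone_tokens(lengths):
    """Upper bound on alone tokens from dictionary strings of the
    given lengths: one per power-of-two block, i.e. popcount."""
    return sum(bin(z).count("1") for z in lengths)

# popcount(z) <= z, so m total characters give at most m alone tokens
assert max_alone_tokens([11]) == 3      # 11 = 8 + 2 + 1 -> three blocks
assert max_alone_tokens([1] * 8) == 8   # maximized by all-length-1 strings
```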
The bound is tight for practical values of $$$n$$$ (e.g. generate $$$s$$$ with $$$n = 10^8$$$ such that all substrings of size $$$7$$$ are distinct: $$$\lvert \Sigma \rvert ^ {7} \gg n$$$, even for
$$$\lvert \Sigma \rvert = 26$$$. Take $$$ts$$$ to be all $$$s$$$' substrings of size $$$7$$$. Each $$$t$$$ will add at least one alone token, for approximately $$$m/7$$$ alone tokens in total).
Theorem 15: The maximum number of non-alone tokens is $$$O(n \log n)$$$.
Proof 15: Let the number of non-alone tokens generated by $$$s$$$ (considering that $$$ts$$$ contains every substring of $$$s$$$) to be noted with $$$NTna(s)$$$. Also, let the number of alone tokens
be noted with $$$NTa(s)$$$: $$$NTna(s) + NTa(s) = NT(s)$$$ (note that we overloaded $$$NT$$$ here).
For example, $$$\max(NTna(s) \mid \lvert s \rvert = 3) = 2$$$, achieved for $$$s = a^3$$$. For longer lengths, there are many patterns that can achieve $$$NTna(s) \in O(n \log n)$$$, such as $$$a^n$$$, $$$(ab)^k$$$, $$$(ab..z)^k$$$, $$$a^kba^{k-1}ba^{k-2}b...$$$, etc.
We will now redefine the problem of counting the number of non-alone tokens such that it no longer implies a trie search: for a string $$$s$$$, count the substrings $$$t$$$ of $$$s$$$ for which
$$$sub(s, t) > 1$$$.
For example, for $$$s = a^3$$$, we won't count the substring $$$a^3$$$ because $$$sub(s, a^3) = 1$$$. We will count both $$$a^2$$$s ($$$s[1 .. 2]$$$ and $$$s[2 .. 3]$$$) because $$$sub(s, a^2) =
2$$$. We won't count any of the $$$a$$$s because they all have the same shade: $$$sub(s, a) = 1$$$. We arrive at $$$NTna(a^3) = 2$$$.
Property 16: Any string $$$s$$$ can be written in the following form: $$$s = s_1^{a_1}s_2^{a_2}s_3^{a_3}...s_k^{a_k}$$$, with $$$s_i \in \Sigma$$$, $$$a_i \geq 1 \,\, \forall \, 1 \leq i \leq k$$$.
Also, we choose the representation such that $$$s_i \neq s_{i+1} \,\, \forall \, 1 \leq i < k$$$.
Let $$$f(n) = \max (NTna(s) \mid \lvert s \rvert = n)$$$. We will find a limit for $$$f(n)$$$ through recurrence.
We will split a given $$$s$$$ in two parts: $$$s = s_1^{a_1} + s_2^{a_2}s_3^{a_3}...s_k^{a_k}$$$.
Property 17: $$$NTna(s) \leq NTna(s_1^{a_1}) + 2NTa(s_1^{a_1}) + NTna(s_2^{a_2}s_3^{a_3}...s_k^{a_k}) + 2 \,\, \cdot $$$ the contribution of the substrings that begin in the left hand side (in the
first $$$a_1$$$ characters) and end in the right hand side (not in the first $$$a_1$$$ characters).
Proof 17: We will start with $$$s_2^{a_2}s_3^{a_3}...s_k^{a_k}$$$ and build $$$s$$$ by adding $$$s_1^{a_1}$$$ as a prefix. Let $$$S_1$$$ and $$$S_2$$$ be two substrings that are fully contained
within $$$s[:a_1]$$$ (i.e. in $$$s_1^{a_1}$$$). We are interested in how do the shades of $$$S_1$$$ and $$$S_2$$$ change post-union with $$$s_2^{a_2}... \,$$$ (can the shades become different if they
were equal, or the opposite):
• If the shades of $$$S_1$$$ and $$$S_2$$$ are different from the start:
□ If exactly one of them prematurely ends (i.e. $$$s_1^{a_1} = a^3$$$, $$$shade(s_1^{a_1}, s_1^{a_1}[1..2]) = a^3$$$, $$$shade(s_1^{a_1}, s_1^{a_1}[2..3]) = a^2\varnothing$$$), then the
incomplete shade will contain $$$s_2$$$, which cannot be found in the previously complete shade, which only contains $$$s_1 \neq s_2$$$.
□ If both shades prematurely end, then both will look like $$$s_1^?s_2...$$$, but $$$s_2$$$ must occur at different points (the pre-union shades were different while containing only $$$s_1$$$ and ending with $$$\varnothing$$$, so their lengths differ), and the shades stay different.
□ If no shade prematurely ends, then they cannot be altered post-unite, so the equality status between the shades doesn't change.
• If $$$shade(s[:a_1], S_1) = shade(s[:a_1], S_2)$$$, then $$$S_1 = S_2$$$.
□ If both shades prematurely end, then $$$S_1$$$ and $$$S_2$$$ describe the same substring, contradiction.
□ If no shade prematurely ends, then they cannot be altered post-unite, so the equality status between the shades doesn't change.
□ If exactly one shade prematurely ends, then the shades aren't identical, contradiction.
Since the relation between the shades of $$$S_1$$$ and $$$S_2$$$ doesn't change post-appending $$$s_1^{a_1}$$$ to $$$s_2^{a_2}s_3^{a_3}...s_k^{a_k}$$$ for no pair $$$(S_1, S_2)$$$, we can add $$$NTna
(s_1^{a_1})$$$ to $$$NTna(s_2^{a_2}s_3^{a_3}...s_k^{a_k})$$$.
However, there may exist "dormant" substrings in $$$s_2^{a_2}...$$$ (i.e. substrings $$$u$$$ for which $$$sub(s_2^{a_2}..., u) = 1$$$, and post-unite $$$sub(s_1^{a_1}s_2^{a_2}..., u) > 1$$$ since
$$$u$$$ also exists in $$$s_1^{a_1}$$$ with a different shade, meaning that we have to add $$$1$$$ from behind). As a result, $$$s_1^{a_1}$$$'s alone tokens may become non-alone for $$$s$$$, so we
have to count them twice ($$$2NTa(s_1^{a_1})$$$) in order to bound $$$NTna(s)$$$.
A substring of $$$s$$$ that begins in $$$s_1^{a_1}$$$ and ends in $$$s_2^{a_2}...\,$$$ contributes if we can find it again in $$$s$$$, with a different shade. If we can't find it, its associated
token is alone for now (dormant). Similarly, we have to double all findings to bound, since each could correspond to a dormant substring from $$$s_2^{a_2}...\,$$$. $$$\square$$$
We will now bound the contribution of the substrings that are not completely contained in one of the two parts.
Let $$$s$$$ be indexed as $$$s[1..n]$$$. Let $$$g = a_1 + 1$$$ be the first index (border) that isn't part of $$$s_1^{a_1}$$$.
Figure 14: We can attempt to include a substring $$$s[l..r]$$$'s contribution only if $$$1 \leq l < g$$$ and $$$g \leq r < n$$$. To actually count it, we have to find $$$s[l..r]$$$ someplace else in
$$$s$$$, with a different shade (if we can't do that, it means that $$$s[l..r]$$$ corresponds to an alone token for now).
Generalizing the "someplace else" substrings in $$$s$$$, we have $$$P$$$ pairs $$$(i_p, j_p)$$$ ($$$i_p < g \leq j_p$$$) such that for each $$$1 \leq p \leq P$$$ there exists at least a tuple $$$
(x_p, z_p, y_p)$$$ with $$$s[i_p .. g .. j_p] = s[x_p .. z_p .. y_p]$$$ ($$$g - i_p = z_p - x_p$$$). We pick $$$(i_p, j_p)$$$, $$$(x_p, z_p, y_p)$$$ such that the right end of the matching substrings
is maximized: either $$$y_p = n$$$, or $$$s[j_p+1] \neq s[y_p+1]$$$.
Figure 15: For example, for $$$s = (ab)^4a$$$: $$$n = 9$$$, $$$P = 3$$$, $$$g = 2$$$. $$$(x_1, z_1, y_1) = (3, 4, 9)$$$, $$$(i_1, j_1) = (1, 7)$$$, $$$(x_2, z_2, y_2) = (5, 6, 9)$$$, $$$(i_2, j_2) =
(1, 5)$$$, $$$(x_3, z_3, y_3) = (7, 8, 9)$$$, $$$(i_3, j_3) = (1, 3)$$$.
Property 18: $$$x_p > g \,\, \forall \, 1 \leq p \leq P$$$.
Proof 18: If $$$x_p \leq g$$$, then $$$x_p < g$$$ (because $$$s[x_p] = s[i_p] = s_1 (i_p \leq a_1)$$$, and $$$s[g] = s_2 \neq s_1$$$).
$$$s[i_p .. g-1] = s_1^{g-i_p} = s_1^{z_p-x_p} = s[x_p .. z_p-1]$$$
If $$$z_p > g$$$, knowing that $$$x_p < g \Rightarrow g \in (x_p, z_p) \subset [x_p, z_p) \Rightarrow s[g] = s_2 \in s[x_p .. z_p-1]$$$, contradiction, $$$s[x_p .. z_p-1]$$$ contains only $$$s_1$$$
As a result, $$$z_p \leq g$$$. If $$$z_p < g$$$, $$$s[z_p] = s_1 \neq s_2 = s[g]$$$, so $$$z_p = g$$$. Since $$$g - i_p = z_p - x_p \Rightarrow i_p = x_p$$$. We can extend $$$j_p$$$, $$$y_p$$$ as
much as possible to the right $$$\Rightarrow j_p = y_p = n$$$. $$$(x_p, z_p, y_p) = (i_p, g, n)$$$ is not a valid interval, being just a copy of the original.
All $$$(x_p, z_p, y_p)$$$ tuples must start after $$$g$$$. $$$\square$$$
For a substring $$$s[l..r]$$$, with $$$l < g$$$, $$$g \leq r$$$, let $$$Q_{l, r}$$$ remember all the pairs that allow $$$s[l..r]$$$ to be found in other places in $$$s$$$:
$$$Q_{l, r} = \{(i_q, j_q) \mid 1 \leq q \leq P, i_q \leq l, r \leq j_q\}$$$
Property 19: Any substring $$$s[l..r]$$$ that may contribute must fulfill the following two conditions: $$$Q_{l, r}$$$ must not be empty, and for any $$$(i_q, j_q) \in Q_{l, r}$$$, the associated
shade of $$$s[l..r]$$$ inside $$$s[x_q..y_q]$$$ must exit $$$[x_q..y_q]$$$.
Proof 19: For the first condition, if $$$Q_{l, r}$$$ is empty, then $$$s[l..r]$$$ cannot be found again in $$$s$$$, meaning that $$$sub(s, s[l..r]) = 1$$$: $$$s[l..r]$$$ will be "dormant" for now,
and its contribution may be added on later.
For the second condition, suppose that for some $$$q$$$ for which $$$(i_q, j_q) \in Q_{l, r}$$$, the associated shade of $$$s[l..r]$$$ doesn't exit $$$[x_q..y_q]$$$.
Figure 16: $$$s[l..r]$$$, and its associated substring $$$s[l+x_q-i_q..r+x_q-i_q]$$$.
The associated shade of $$$s[l..r]$$$ ends at:
$$$r+x_q-i_q + LSB((r+x_q-i_q) \, - \, (l+x_q-i_q) + 1) - 1 \leq y_q$$$
But the argument of the $$$LSB$$$ function is equal to $$$r-l+1$$$, and if we subtract $$$x_q-i_q$$$ from the inequality:
$$$r + LSB(r-l+1) - 1 \leq j_q$$$
Meaning that the shade of $$$s[l..r]$$$ also doesn't exit $$$[i_q..j_q]$$$. Since $$$s[i_q..j_q] = s[x_q..y_q]$$$, both of the shades are completely contained within the two intervals and start at
the same point relative to $$$i_q$$$ or $$$x_q$$$. The two shades are equal, which means that the contribution of $$$s[l..r]$$$ has already been counted in $$$NTna(s_2^{a_2}s_3^{a_3}...)$$$ (because
$$$x_q > g$$$, so $$$s[x_q..y_q] \subset s_2^{a_2}s_3^{a_3}...$$$). $$$\square$$$
We will now simplify and count all substrings that fulfill the previous property. The number of all substrings that qualify is at most equal to the sum of the contributions of all pairs $$$(i_p, j_p)$$$. A substring $$$s[l..r]$$$ contributes for the pair $$$(i_p, j_p)$$$ if $$$i_p \leq l < g$$$, $$$g \leq r \leq j_p$$$, and $$$r + LSB(r - l + 1) - 1 > j_p$$$ (the number of substrings may be smaller than the sum of pair contributions, because $$$[l..r]$$$ may be counted by multiple pairs).
Property 20: The maximum number of substrings that can contribute to the pair $$$(i_p, j_p)$$$ is $$$(g-i_p)\log_2(j_p-i_p+1)$$$.
Proof 20: The set of numbers that have their $$$LSB = 2^t, t \geq 0$$$, is $$$\{2^t(2k+1) \mid k \geq 0\}$$$.
For $$$LSB = 2^t$$$: $$$j_p + 1 - 2^t < r \leq j_p$$$. The lower bound is a consequence of the $$$r + LSB(r-l+1) - 1 > j_p$$$ requirement. $$$r$$$ has $$$2^t - 1$$$ possible values.
For a fixed $$$r$$$, let $$$l' < g$$$ be the biggest value $$$l$$$ can take such that $$$LSB(r-l+1) = 2^t$$$. $$$l$$$ could also take the values $$$l' - 2^{t+1}, l' - 2 \cdot 2^{t+1}, l' - 3 \cdot 2^{t+1}, ...$$$ as long as it is at least $$$i_p$$$.
If we decrease $$$r$$$'s values by $$$1$$$, all possible values that $$$l$$$ can take are also shifted by $$$1$$$.
Figure 17: If a slot is marked $$$-k$$$ under the line $$$LSB = 2^t$$$ and column $$$g-z$$$, it means that $$$l$$$ may take the value $$$g-z$$$ if $$$r = j_p - k$$$ (and $$$LSB(r-l+1) = 2^t$$$). For
$$$LSB = 2$$$, if $$$r = j_p - 0$$$, $$$l$$$ may take the value $$$g-1$$$ (best case). If it takes $$$g-1$$$, then the next value is $$$g-1 - 2^2 = g-5$$$. It isn't necessary that $$$l' = g - 1$$$
for $$$r = j_p - 0$$$, notice the second line for $$$LSB = 4$$$. The line $$$LSB = 1$$$ is omitted because substrings with $$$LSB = 1$$$ have the shade equal to the substring (so their $$$sub$$$
value is $$$1$$$).
Regardless of the offset (how far away $$$l'$$$ is from $$$g-1$$$) on any line $$$LSB = 2^t$$$, we may never overlap two streaks: a streak's length is $$$2^t-1$$$, strictly smaller than $$$2^{t+1}$$$, the distance between two consecutive starts of streaks.
As a result, we may have at most one entry for $$$l$$$ in each slot of the table, meaning that the total number of valid pairs $$$(l, r)$$$ is at most the number of slots in the table, $$$(g-i_p) \cdot \lfloor \log_2(j_p-i_p+1) \rfloor \leq (g-i_p)\log_2(j_p-i_p+1)$$$. $$$\square$$$
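As a sanity check (our own sketch, not part of the original proof), the contribution condition and the bound from Property 20 can be brute-forced for small parameters:

```python
# Our own brute-force check (not the author's code) of Property 20.
import math

def lsb(x):
    """Lowest set bit of x, e.g. lsb(12) = 4 since 12 = 0b1100."""
    return x & -x

def contributing(i_p, j_p, g):
    """All (l, r) with i_p <= l < g <= r <= j_p whose shade exits [i_p..j_p],
    i.e. r + LSB(r - l + 1) - 1 > j_p."""
    return [(l, r)
            for l in range(i_p, g)
            for r in range(g, j_p + 1)
            if r + lsb(r - l + 1) - 1 > j_p]

def bound(i_p, j_p, g):
    """(g - i_p) * log2(j_p - i_p + 1), the bound claimed by Property 20."""
    return (g - i_p) * math.log2(j_p - i_p + 1)
```

Enumerating all small $$$(i_p, g, j_p)$$$ triples, the count of contributing pairs never exceeds the bound.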
Remark 21: We could have obviously gotten a better bound (slightly over half of the number of slots), but the added constant over the half becomes a problem later on.
Adding up the contributions of all pairs $$$(i_p, j_p)$$$ won't produce a good result, even taking into consideration that $$$\Sigma z_p - x_p \leq n-g$$$. We will now show that a large number of pairs can be omitted from the sum.
Let $$$(i_1, j_1), (i_2, j_2)$$$ be two out of the $$$P$$$ pairs. We will analyze the ways in which $$$[i_1..j_1]$$$ and $$$[i_2..j_2]$$$ can interact:
Figure 18: If $$$i_2 < j_2 < i_1 < g \leq j_1$$$: we have $$$g > j_2$$$, but we need $$$j_2 \geq g$$$, contradiction.
Figure 19: $$$i_2 < i_1 < g \leq j_1 \leq j_2$$$. We have strictness in $$$i_2 < i_1$$$: if $$$i_1 = i_2$$$, then $$$j_1 = j_2 = n$$$ and the pairs wouldn't be different.
$$$[l..r] \subseteq [i_1..j_1] \subset [i_2..j_2]$$$. The shade of $$$s[l..r]$$$ must get out of $$$[i_1..j_1]$$$, as well as $$$[i_2..j_2]$$$: $$$r + LSB(r-l+1) - 1 > j_1$$$, $$$r + LSB(r-l+1) - 1 > j_2$$$.
But since $$$j_2 \geq j_1$$$, only the second condition is necessary. It is useless to count the contribution of the $$$(i_1, j_1)$$$ pair, since any substring $$$s[l..r]$$$ that it may have
correctly counted will already be accounted for by $$$(i_2, j_2)$$$.
Figure 20: After eliminating all pairs whose intervals were fully enclosed into other pairs' intervals, we are left with this arrangement of intervals: $$$i_{z} < i_{z+1} < g \leq j_z \leq j_{z+1} \,
\, \forall \, 1 \leq z < V \leq P$$$.
Let $$$[l..r] \subseteq [i_1..j_1]$$$. If $$$l \geq i_2$$$, then the shade of $$$[l..r]$$$ must also exit $$$[i_2..j_2]$$$ (at least as difficult since $$$j_2 \geq j_1$$$).
Therefore, the only meaningful contribution from $$$(i_1, j_1)$$$ occurs when $$$i_1 \leq l < i_2$$$ and $$$g \leq r \leq j_1$$$, which is at most $$$(i_2 - i_1)\log_2(j_1-i_1+1)$$$.
Generally, the meaningful contribution from $$$(i_z, j_z)$$$ is $$$(i_{z+1} - i_z)\log_2(j_z-i_z+1) \,\, \forall \, 1 \leq z < V$$$.
The sum of contributions is:
$$$(g - i_V)\log_2(j_V-i_V+1) + \Sigma_{z=1}^{V-1}(i_{z+1} - i_z)\log_2(j_z-i_z+1)$$$
But $$$j_z - i_z + 1 \leq n \,\, \forall 1 \leq z \leq V$$$, and $$$g - i_V + i_V - i_{V-1} + .. + i_2 - i_1 = g - i_1$$$, so the sum of contributions is at most:
$$$(g - i_1)\log_2n < g\log_2n$$$
We know from the Corner Case Improvement section that $$$NTna(s_1^{a_1}) + 2NTa(s_1^{a_1}) \leq 2NT(s_1^{a_1}) \leq 2 \cdot 2a_1 \cdot \log_2(a_1) < 2 \cdot 2g \log_2g$$$. Then:
$$$\max (NTna(s) \mid \lvert s \rvert = n) = f(n) \leq \max(2 \cdot 2g \log_2 g + 2 g \log_2 n + f(n-g) \mid 1 \leq g \leq n)$$$
$$$f(n) \leq \max(6 g \log_2 n + f(n-g) \mid 1 \leq g \leq n) = 6n\log_2n \Rightarrow f(n) \in O(n\log n)$$$
Because no matter how we split $$$n$$$ during the recursion, the resulting sum is $$$6\log_2n \cdot \Sigma g = 6n\log_2n$$$.
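A quick numerical check of the recurrence (again our own sketch, not from the blog): evaluating $$$f(n) = \max(6g\log_2n + f(n-g) \mid 1 \leq g \leq n)$$$ directly confirms the $$$6n\log_2n$$$ closed form for small $$$n$$$:

```python
# Our own numerical sanity check: evaluate the recurrence
# f(n) = max(6*g*log2(n) + f(n-g) | 1 <= g <= n), with f(0) = 0,
# and compare it against the claimed closed form 6*n*log2(n).
import math

def f_table(N):
    f = [0.0] * (N + 1)
    for n in range(1, N + 1):
        f[n] = max(6 * g * math.log2(n) + f[n - g] for g in range(1, n + 1))
    return f

f = f_table(200)
# the maximum is attained at g = n, giving exactly 6*n*log2(n)
```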
The total number of non-alone tokens is $$$O(n \log n)$$$. The total search complexity of the algorithm is $$$O(n\log^2n + m\log n)$$$. $$$\square$$$
Remark 22: Setting up to generate many alone tokens is detrimental to the number of non-alone tokens (generating $$$m/7$$$ alone tokens will give approximately $$$m (popcount(7) - 1) / 7 = 2m/7$$$
non-alone tokens). The opposite is also true. For $$$s = a^n$$$, we will generate close to the maximum amount of non-alone tokens, but only about $$$n/2$$$ alone tokens. This hints at the existence of a better practical bound.
Practical results
We want to compute a practical upper bound for $$$NT(m)$$$. We know a point that has to be on the graph of the $$$NT$$$ function. If $$$\lvert \Sigma \rvert = n$$$, and all characters of $$$s$$$ are
pairwise distinct, then if we were to include all substrings of $$$s$$$ into $$$ts$$$, we would generate all of the possible tokens during the search (compression wouldn't help). So $$$NT(n(n+1)(n+2)/6) = n(n+1)/2$$$.
We want the $$$NT$$$ function to depend on both $$$n$$$ and $$$m$$$, say $$$NT(m) = ct \cdot n^{\alpha} m^{\beta}$$$. The point we want to fit on $$$NT$$$'s graph would lead to $$$3\beta + \alpha = 2$$$. The easiest idea we could try would be $$$\beta = 1/2 \Rightarrow \alpha = 1/2 \Rightarrow NT(m) = ct\sqrt{nm}$$$. By fitting the known point of $$$NT$$$ we get $$$ct < 1.225$$$. We will also add the $$$O(n \log n)$$$ bias, represented by the number of non-alone tokens. We haven't been able to breach the experimental bound for $$$2 \cdot ct$$$.
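The quoted constant can be recovered directly from the fitted point (our sketch, not from the blog): plugging $$$m = n(n+1)(n+2)/6$$$ and $$$NT = n(n+1)/2$$$ into $$$ct = NT/\sqrt{nm}$$$ gives $$$ct = \frac{\sqrt{6}}{2}\sqrt{\frac{n+1}{n+2}}$$$, which increases towards $$$\sqrt{6}/2 \approx 1.2247 < 1.225$$$:

```python
# Our sketch: recover ct by fitting the known point of NT.
import math

def ct(n):
    m = n * (n + 1) * (n + 2) // 6      # total length of all substrings
    nt = n * (n + 1) / 2                # number of tokens generated
    return nt / math.sqrt(n * m)        # fit NT(m) = ct * sqrt(n * m)
```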
Figure 21: We have fixed $$$n = 1000$$$. Both axes are in logarithmic (base 10) scale.
• The blue dots represent the maximum $$$NT$$$ experimentally obtained for some $$$m$$$ (with compression).
• The goldenrod dots represent the maximum $$$NT$$$ experimentally obtained without compression (the $$$p = {1}$$$ distribution was specifically left out).
• The red line represents the $$$NT$$$ function's bound if no compression is done — $$$O(\min(n\sqrt{m}, n^2))$$$.
• The green line represents the experimental bound for NT: $$$n\log_2n + 2 \cdot 1.225 \cdot \sqrt{nm}$$$.
• The dotted green line represents the unbiased bound: $$$2 \cdot 1.225 \cdot \sqrt{nm}$$$.
Experimental Test Generation
The geometric mean in the experimental $$$NT$$$ bound suggests that we would be efficient when $$$m$$$ is much larger than $$$n$$$. In other cases, we would be visibly slower than other MSPMs, since
we incur an extra $$$O(\log^2n)$$$ at initialization, and an extra $$$O(\log n)$$$ during the trie search.
We integrated our MSPM into a packet sniffer (snort) as a regex preconditioner. Snort's dictionary is hand-made and only a couple of MB in size. As a result, most of the computation time (~85%) is lost on resetting the DAG upon receiving a new packet. We benchmarked against .pcaps from the CICIDS2017 dataset.
| MSPM type | Friday (min) | Thursday (min) |
|---|---|---|
| hyperscan | $$$2.28$$$ | $$$2.09$$$ |
| ac_bnfa | $$$3.67$$$ | $$$3.58$$$ |
| E3S (aggro) | $$$97.88$$$ | $$$80.74$$$ |
The aggressive variant of the algorithm compresses based on equal DAG node values instead of DAG subtree values, ensuring that if $$$t_i \in s$$$, there is exactly one matching chain in the compressed DAG. Previously non-existing DAG chains can appear post-compression, making this variant useful only for $$$t_i \in s$$$ membership queries.
The following two benchmarks are run on synthetic data (to permit higher values of $$$m$$$).
These batches were run on the Polygon servers (Intel i3-8100, 6MB cache), with a $$$15$$$s time limit per test, and a $$$1$$$ GB memory limit per process.
| n | m | type | E3S(s) Med | E3S(s) Max | AC(s) Med | AC(s) Max | suffTree(s) Med | suffTree(s) Max | suffAuto(s) Med | suffAuto(s) Max | suffArray(s) Med | suffArray(s) Max |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $$$10^3$$$ | $$$10^8$$$ | 00 | $$$3.0$$$ | $$$3.3$$$ | $$$6.1$$$ | $$$6.9$$$ | $$$0.3$$$ | $$$0.3$$$ | $$$0.5$$$ | $$$0.6$$$ | $$$0.6$$$ | $$$1.0$$$ |
| $$$10^4$$$ | $$$2 \cdot 10^8$$$ | 01 | $$$8.4$$$ | $$$9.0$$$ | TL | TL | $$$1.5$$$ | $$$1.8$$$ | $$$1.9$$$ | $$$2.1$$$ | $$$2.7$$$ | $$$4.0$$$ |
| $$$10^4$$$ | $$$2 \cdot 10^8$$$ | 11 | $$$7.0$$$ | $$$7.4$$$ | TL | TL | $$$1.0$$$ | $$$1.4$$$ | $$$1.4$$$ | $$$1.5$$$ | $$$3.1$$$ | $$$3.5$$$ |
| $$$10^4$$$ | $$$2 \cdot 10^8$$$ | 20 | $$$5.1$$$ | $$$5.4$$$ | RE | RE | $$$0.2$$$ | $$$0.5$$$ | $$$1.7$$$ | $$$1.8$$$ | $$$1.5$$$ | $$$1.6$$$ |
| $$$10^4$$$ | $$$2 \cdot 10^8$$$ | 21 | $$$5.3$$$ | $$$5.6$$$ | TL | TL | $$$0.4$$$ | $$$0.6$$$ | $$$1.0$$$ | $$$1.1$$$ | $$$3.1$$$ | $$$3.2$$$ |
| $$$10^5$$$ | $$$2 \cdot 10^8$$$ | 00 | $$$9.3$$$ | $$$10.4$$$ | RE | RE | $$$1.0$$$ | $$$5.4$$$ | $$$1.5$$$ | $$$2.9$$$ | $$$1.6$$$ | $$$8.9$$$ |
| $$$10^5$$$ | $$$2 \cdot 10^8$$$ | 10 | $$$7.7$$$ | $$$8.2$$$ | RE | RE | $$$0.5$$$ | $$$6.5$$$ | $$$1.2$$$ | $$$3.1$$$ | $$$2.0$$$ | $$$8.9$$$ |
| $$$10^5$$$ | $$$2 \cdot 10^8$$$ | 20 | $$$5.3$$$ | $$$5.4$$$ | RE | RE | $$$0.2$$$ | $$$6.3$$$ | $$$2.1$$$ | $$$4.2$$$ | $$$2.5$$$ | $$$8.9$$$ |
The Aho-Corasick automaton remembers too much information per state (even in an optimized implementation) to fit comfortably in memory for larger values of $$$m$$$.
Batch Details
The following batches were run on the department's server (Intel Xeon E5-2640 2.60GHz, 20 MB cache), with no set time limit, and a $$$4$$$ GB memory limit per process. ML signifies that the process
went over the memory limit.
| n | m | type | distribution | E3S(s) | suffArray(s) | suffAuto(s) | suffTree(s) |
|---|---|---|---|---|---|---|---|
| $$$10^5$$$ | $$$2 \cdot 10^9$$$ | 01 | 1 | $$$15.9$$$ | $$$37.5$$$ | $$$9.4$$$ | $$$15.7$$$ |
| $$$10^5$$$ | $$$2 \cdot 10^9$$$ | 21 | 15 | $$$17.7$$$ | $$$17.1$$$ | $$$14.8$$$ | $$$2.0$$$ |
| $$$10^5$$$ | $$$2 \cdot 10^9$$$ | 01 | 27 | $$$30.4$$$ | $$$23.3$$$ | $$$10.4$$$ | $$$23.2$$$ |
| $$$5 \cdot 10^5$$$ | $$$2 \cdot 10^9$$$ | 01 | 27 | $$$40.1$$$ | $$$32.4$$$ | $$$12.1$$$ | $$$26.0$$$ |
| $$$5 \cdot 10^5$$$ | $$$5 \cdot 10^9$$$ | 01 | 27 | ML | $$$74.3$$$ | $$$29.3$$$ | $$$64.1$$$ |
| $$$5 \cdot 10^5$$$ | $$$10^{10}$$$ | 21 | 3 | $$$104.9$$$ | $$$132.5$$$ | $$$61.9$$$ | $$$11.7$$$ |
If $$$m$$$ is considerably large, our algorithm may even do better than a suffix implementation. This usually happens when $$$s$$$ is built in a Fibonacci manner, allowing the compression to be more
efficient than expected.
There are large naturally-generated dictionaries: LLM training datasets :-). It is possible (although difficult) to generate some tests that favour our algorithm:
• $$$s$$$ should be made of $$$P$$$ concatenated LLM answers to Give me a long quote from "The Hobbit". ($$$\lvert s \rvert \simeq P \cdot 1000$$$).
• $$$ts$$$ should contain $$$\sim 10^5$$$ sentences from a Book Corpus, along with some substrings from $$$s$$$ ($$$\sim 4 \cdot 10^6$$$ strings in $$$ts$$$).
• We requery $$$ts$$$ with a different $$$s$$$ $$$100$$$ times (advantaging our algorithm and Aho-Corasick, since they don't have to regenerate the dictionary DS).
| P | E3S(s) | AC(s) | suffixArray(s) | suffixAutomaton(s) | suffixTree(s) | processor |
|---|---|---|---|---|---|---|
| $$$5$$$ | $$$55.36$$$ | $$$55.50$$$ | $$$106.85$$$ | $$$156.73$$$ | $$$58.41$$$ | AMD Ryzen 9 3950X |
| $$$10$$$ | $$$106.91$$$ | $$$116.23$$$ | $$$> 180$$$ | $$$> 180$$$ | $$$119.80$$$ | |
| $$$15$$$ | $$$158.58$$$ | $$$171.00$$$ | $$$> 300$$$ | $$$> 300$$$ | $$$183.37$$$ | |
| $$$15$$$ | $$$126.43$$$ | $$$84.93$$$ | $$$257.49$$$ | $$$> 300$$$ | $$$181.245$$$ | Intel Xeon 8480C |
The testing hardware matters in the comparison (likely the cache size matters most, $$$64$$$ MB vs $$$105$$$ MB L3 cache size).
Intersecting datasets likely used for LLM training with LLM output data, to infer the amount of memorized (including copyrighted) text, is a problem receiving a lot of interest lately. We are currently adapting our algorithm to be usable for such a problem.
Other applications: number of distinct substrings
We behave as if we fill the trie with every substring of $$$s$$$. The number of non-alone tokens is independent of $$$m$$$. The blog is long enough at this point, so figure out for yourself how to deal with the alone tokens :-)
Benchmark reproduction
I would like to thank my advisor, Radu Iacob (johnthebrave), for helping me (for a very long time) with proofs, feedback, and insight whenever I needed them. He also pushed for trying the LLM dataset
intersection, and found the most favourable benchmark.
I would also like to thank popovicirobert, ovidiupita, Daniel Mitra, and Matei Barbu for looking over an earlier version of the work and offering very useful feedback, as well as Florin Stancu and Russ Coombs for helping me out with various problems related to snort. I would also like to thank the team behind CSES for providing a great problem set, and the team behind Polygon for easing the process of testing various implementations.
Thank you for reading this. I hope you enjoyed it, and maybe you will try implementing the algorithm))
» 4 weeks ago, # |
Bro, this is codeforces, this is not arxiv.org...
• » 4 weeks ago, # ^ |
» +25
catalystgma I know, I really wanted to include the proofs://
• 4 weeks ago, # ^ |
» So you're saying that there is an upper bound on the quality of content that should be posted on CF?
Please realize that by saying this, you're doing nothing to encourage people who write good posts that contribute to the community, and rather discouraging them by sending out the message that such content is not welcome. (At the time of writing, there is not a single constructive comment on this post).
Great high quality blog btw, catalystgma, even though it might be a bit too formal for most people on CF (but I believe reading the whole thing will help them become more mathematically
mature, since it also has intuitive explanations in many places). For those who did not read the whole post, this post introduces a new data structure and by going through this, you will
likely improve your understanding of string algorithms.
And now I am traumatized T_T
» 4 weeks ago, # |
Codeforces Or Algorithm tutorial?
Do I have to write such stuff as an undergraduate thesis when I enter college?
» 4 weeks ago, # |
← Rev. 2 → +26
Seems like a really nice blog, will definitely give it a chance to read. catalystgma Please ignore negative comments, these types of people exist everywhere
• » 4 weeks ago, # ^ |
» +21
Abito I don't think it's negative, it's more like people are mind-blown by the depth of the blog. I think it's great
» 4 weeks ago, # |
• » 4 weeks ago, # ^ |
» 0
Abito Wake the fuck up samurai
I don't understand any of that but thank you for the high quality blog.
edging to this right now | {"url":"https://codeforces.net/blog/entry/134567","timestamp":"2024-11-04T22:16:37Z","content_type":"text/html","content_length":"185213","record_id":"<urn:uuid:efb8c0c3-0fc0-4abf-be06-40ae1ebe7d8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00031.warc.gz"} |
Multiplication Of Radicals With Different Indices Worksheet
Math, especially multiplication, forms the foundation of various academic disciplines and real-world applications. Yet, for many students, mastering multiplication can pose a challenge. To address this hurdle, teachers and parents have embraced a powerful tool: Multiplication Of Radicals With Different Indices Worksheet.
Introduction to Multiplication Of Radicals With Different Indices Worksheet
Multiplication Of Radicals With Different Indices Worksheet
Answer: Remember that we always simplify radicals by removing the largest factor from the radicand that is a power of the index. Once each radical is simplified, we can then decide if they are like radicals. Example: simplify $\sqrt[3]{24}$ and $\sqrt[3]{375}$.
Radicals Mixed Index: Knowing that a radical has the same properties as exponents written as a ratio allows us to manipulate radicals in new ways. One thing we are allowed to do is reduce not just the radicand but the index as well. This is shown in the following example. Example 1: $\sqrt[8]{x^6y^2}$. Rewrite as a rational exponent: $(x^6y^2)^{1/8}$.
Importance of Multiplication Practice
Understanding multiplication is pivotal, laying a strong foundation for advanced mathematical concepts. Multiplication Of Radicals With Different Indices Worksheet supply structured and targeted practice, cultivating a deeper understanding of this fundamental arithmetic operation.
Evolution of Multiplication Of Radicals With Different Indices Worksheet
Multiplication Law Of indices Variation Theory
This algebra video tutorial explains how to multiply radical expressions with different index numbers. It contains plenty of examples and practice problems.
Multiply the radicands while keeping the product inside the square root. The product is a perfect square since $16 = 4 \cdot 4 = 4^2$, which means that the square root of $16$ is just a whole number. Example 2: Simplify by multiplying. It is okay to multiply the numbers as long as they are both found under the radical symbol.
From typical pen-and-paper workouts to digitized interactive formats, Multiplication Of Radicals With Different Indices Worksheet have actually evolved, accommodating diverse understanding designs
and preferences.
Sorts Of Multiplication Of Radicals With Different Indices Worksheet
Standard Multiplication Sheets Straightforward exercises focusing on multiplication tables, assisting students develop a solid math base.
Word Trouble Worksheets
Real-life situations integrated right into troubles, improving critical thinking and application abilities.
Timed Multiplication Drills Examinations designed to enhance rate and precision, helping in rapid mental mathematics.
Advantages of Using Multiplication Of Radicals With Different Indices Worksheet
Multiplication Of Radical Expressions Worksheet Free Printable
The Product Rule states that the product of two or more numbers raised to a power is equal to the product of each number raised to the same power. The same is true of roots: $\sqrt[x]{ab} = \sqrt[x]{a} \cdot \sqrt[x]{b}$. When dividing radical expressions, the rules governing quotients are similar: $\sqrt[x]{\tfrac{a}{b}} = \tfrac{\sqrt[x]{a}}{\sqrt[x]{b}}$.
How do you multiply radical expressions with different indices? (Algebra: Radicals and Geometry Connections, Multiplication and Division of Radicals. 1 Answer, Jim H, Mar 22 2015.) Make the indices the same: find a common index. Example: $\sqrt{5} \cdot \sqrt[3]{2}$. The common index for 2 and 3 is the least common multiple, 6.
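Worked out in full (an illustration we add here, not from the original worksheet), the common-index method gives:

```latex
\sqrt{5}\cdot\sqrt[3]{2}
  = 5^{1/2}\cdot 2^{1/3}
  = 5^{3/6}\cdot 2^{2/6}
  = \sqrt[6]{5^{3}}\cdot\sqrt[6]{2^{2}}
  = \sqrt[6]{125\cdot 4}
  = \sqrt[6]{500}
```

Since $500$ contains no perfect sixth-power factor, $\sqrt[6]{500}$ is already simplified.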
Improved Mathematical Skills
Consistent method hones multiplication efficiency, enhancing overall mathematics capacities.
Improved Problem-Solving Abilities
Word issues in worksheets develop logical reasoning and strategy application.
Self-Paced Learning Advantages
Worksheets suit specific learning rates, fostering a comfy and versatile discovering environment.
Exactly How to Produce Engaging Multiplication Of Radicals With Different Indices Worksheet
Including Visuals and Colors Dynamic visuals and colors capture interest, making worksheets visually appealing and involving.
Including Real-Life Circumstances
Associating multiplication to everyday situations includes importance and usefulness to exercises.
Customizing Worksheets to Various Ability Levels
Tailoring worksheets based on varying proficiency levels guarantees inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games
Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Sites and Apps
Online platforms provide diverse and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Various Learning Styles
Visual Learners
Visual aids and diagrams aid comprehension for students inclined toward visual learning.
Auditory Learners
Verbal multiplication problems or mnemonics cater to students who grasp concepts through auditory means.
Kinesthetic Learners
Hands-on tasks and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Implementation in Learning
Consistency in Practice
Regular practice reinforces multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety
A mix of repeated exercises and varied problem styles maintains interest and comprehension.
Providing Constructive Feedback
Feedback helps identify areas for improvement, encouraging continued progress.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Challenges
Dull drills can lead to disinterest; innovative methods can reignite motivation.
Overcoming Anxiety of Math
Negative perceptions around math can impede progress; creating a positive learning environment is crucial.
Impact of Multiplication Of Radicals With Different Indices Worksheet on Academic Performance
Studies and Research Findings
Research shows a positive correlation between regular worksheet use and improved math performance.
Multiplication Of Radicals With Different Indices Worksheet emerge as flexible tools, fostering mathematical effectiveness in students while suiting diverse discovering designs. From basic drills to
interactive on the internet sources, these worksheets not only enhance multiplication abilities but also promote critical thinking and analytic abilities.
Frequently Asked Questions (Frequently Asked Questions).
Are Multiplication Of Radicals With Different Indices Worksheet appropriate for any age groups?
Yes, worksheets can be customized to various age and ability degrees, making them adaptable for various students.
Just how typically should pupils practice making use of Multiplication Of Radicals With Different Indices Worksheet?
Constant method is key. Normal sessions, ideally a couple of times a week, can yield significant enhancement.
Can worksheets alone improve math skills?
Worksheets are an important tool however must be supplemented with varied learning techniques for thorough skill advancement.
Exist on-line systems offering free Multiplication Of Radicals With Different Indices Worksheet?
Yes, many educational websites provide open door to a vast array of Multiplication Of Radicals With Different Indices Worksheet.
Just how can parents sustain their youngsters's multiplication technique in your home?
Motivating constant practice, offering support, and creating a favorable learning atmosphere are helpful steps. | {"url":"https://crown-darts.com/en/multiplication-of-radicals-with-different-indices-worksheet.html","timestamp":"2024-11-13T21:51:48Z","content_type":"text/html","content_length":"28748","record_id":"<urn:uuid:29e88527-ba86-4d44-b418-a2677ce9a360>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00819.warc.gz"} |
Let the equation of the side BC of triangle ABC be x+y+2=0. If coordinates of its orthocentre and circumcentre are (1,1) and (2,0) resp., then radius of circumcircle of ABC is???Explain pls.
1 Answers
Aman Bansal
Last Activity: 12 Years ago
Dear Aditi,
The centroid, circumcenter and orthocentre of a triangle are always collinear, i.e., these 3 points lie on the same line (the Euler line).
Also, the centroid divides this line in the ratio 2:1, just like in the case of a median.
Thus we can get the radius of the circumcircle.
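To finish the computation explicitly (this worked solution is our addition; it uses the standard fact that the reflection of the orthocentre in a side of a triangle lies on the circumcircle):

```latex
H=(1,1),\quad O=(2,0),\quad BC:\; x+y+2=0
% Reflect H in BC (line ax+by+c=0 with a=b=1, c=2):
% t = (a x_H + b y_H + c)/(a^2+b^2) = (1+1+2)/2 = 2
H' = \bigl(1 - 2\cdot 1\cdot 2,\; 1 - 2\cdot 1\cdot 2\bigr) = (-3,-3)
% H' lies on the circumcircle, so
R = OH' = \sqrt{(2+3)^2 + (0+3)^2} = \sqrt{25+9} = \sqrt{34}
```

So the radius of the circumcircle is $\sqrt{34}$.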
Best Of luck
Aman Bansal
Askiitian Expert
Re: [eclipse-clp-users] External procedure constraints
From: Joachim Schimpf <joachim.schimpf_at_...269...> Date: Thu, 31 Mar 2011 16:26:10 +1100
Mick Hallward wrote:
> Hi Joachim,
> Many thanks for the detailed answer.
> Let me try to explain why I was thinking about using externals.
> There is a software system, call it S, that tries to
> find "good" values for a number of parameters p_1,..,p_n
> (mostly integers/reals). To do that, S calls a number
> of algorithms A_1,...,A_k with concrete values for the
> parameters p_i. It observes what outputs it gets from
> each algorithm. Then it tries different values
> for p_i, running again the algorithms on the new values;
> and so on. It continues randomly "trying" different
> values for p_i until the results obtained from the
> algorithms are deemed good enough. I intuitively
> thought (but might well have been wrong) that it
> may be possible to explicitly encode the "logic"
> (so to speak) of what constitutes good enough values
> for the parameters via proper Eclipse constraints;
> and then call in the algorithms dynamically (as
> externals) to generate the needed output values.
> Whether or not this would at all speed up the search,
> I don't know. There are two notions of cost here: the cost
> of running the various algorithms A_j, and the
> cost of trying different values for the parameters
> by expanding the search tree. It's the latter part
> that I thought might be improvable by constraint
> search, b/c currently it's done in a willy-nilly
> fashion. But maybe I was wrong. From your answer
> it seems that using Eclipse+externals would probably
> degenerate into a similarly inefficient generate+test
> approach. I'll have to think a bit more about it
> (any suggestions/feedback would be appreciated).
> Thanks again!
I haven't understood why you have several algorithms, so I'll
assume for the moment that we have only one, and that it
computes a function f_g(P1,...Pn)->Cost, and that we are looking
for values for the Ps that minimize Cost. To start with, we
consider the function as a black box.
You can encapsulate this black box function into an ECLiPSe external
with n+1 arguments, giving you a predicate g(P1,...,Pn,Cost), which
only works if P1,..,Pn are given, and Cost is a result variable.
You can then write a generate-and-test program that finds values for
for P1,...,Pn that minimize Cost, as follows:
:- lib(branch_and_bound).
:- lib(ic).
solve :-
Ps = [P1,...,Pn],
Ps :: 0..100, % possible values for the Ps
bb_min((labeling(Ps), g(P1,...,Pn,Cost)), Cost, bb_options{}), % generate&test
printf("An optimal solution with cost %w is %w%n", [Ps, Cost]).
This amounts to a generate-and-test solution, since labeling/1
will enumerate all combinations of Ps, and each one will be evaluated
by calling g (and have its cost computed). The bb_min wrapper
makes sure you get an optimal solution in the end.
But once you have this framework, you can start cutting down the
search space by adding constraints in the way you were envisaging:
you can add any constraints between the Ps and Cost that are _implied_
by the logic of the predicate g. If they are logically implied by
g, they do not change the set of possible solutions, i.e. they are
logically redundant. However, because they are implemented via active
ECLiPSe constraints, they can help cutting down the search space
by eliminating values of Pi a priori, which means that less combinations
of Ps get enumerated, and consequently g is called less often.
The resulting code would then look like:
solve :-
Ps = [P1,...,Pn],
Ps :: 0..100, % possible values for the Ps
redundant_g(P1,...,Pn,Cost), % impose the implied constraints up front
bb_min((labeling(Ps), g(P1,...,Pn,Cost)), Cost, bb_options{}),
printf("An optimal solution with cost %w is %w%n", [Ps, Cost]).
The redundant constraints are imposed _before_ search is started, and
can consist of any combination of constraints on the variables, e.g.
redundant_g(P1,...,Pn,Cost) :-
P3 #> P6,
P1 #\= P2,
Cost #=< sum([P1,...,Pn]),
Cost #> 5.
It would be important to include constraints between Ps and Cost,
because the branch-and-bound search will look for cheaper and cheaper
Costs by imposing an upper bound on Cost, and this upper bound can
propagate and reduce the range of values for the Ps, e.g.
?- Ps = [P1, P2, P3], Ps :: 1 .. 10, Cost #= sum(Ps), Cost #< 10.
Ps = [P1{1 .. 7}, P2{1 .. 7}, P3{1 .. 7}]
Cost = Cost{3 .. 9}
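For readers outside ECLiPSe, the generate-and-test-with-incumbent-pruning idea above can be sketched in plain Python. The cost function, variable count, and domain here are made-up stand-ins for the black-box predicate g:

```python
from itertools import product

def cost(p1, p2, p3):
    # Hypothetical stand-in for g: any total function of the Ps works.
    return abs(p1 - 3) + abs(p2 - 5) + abs(p3 - 1)

def branch_and_bound(domain=range(0, 11)):
    best, best_ps = float("inf"), None
    for ps in product(domain, repeat=3):   # "labeling": enumerate candidates
        # Implied bound ("redundant constraint"): cost is always >= 0,
        # so once the incumbent reaches 0 nothing cheaper can exist.
        if best == 0:
            break
        c = cost(*ps)                      # "test": evaluate the candidate
        if c < best:                       # keep the incumbent, as bb_min does
            best, best_ps = c, ps
    return best_ps, best

print(branch_and_bound())
```

Unlike real constraint propagation, this sketch only prunes via the incumbent bound; the point of the ECLiPSe constraints above is to cut candidates before they are ever enumerated.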
-- Joachim
Received on Thu Mar 31 2011 - 05:26:22 CEST | {"url":"http://www.eclipseclp.org/archive/eclipse-clp-users/1721.html","timestamp":"2024-11-08T23:42:41Z","content_type":"application/xhtml+xml","content_length":"11051","record_id":"<urn:uuid:ffd80061-9fcb-4da6-9881-b17208ec8f87>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00071.warc.gz"} |
Production (computer science)
A production or production rule in computer science is a rewrite rule specifying a symbol substitution that can be recursively performed to generate new symbol sequences. A finite set of productions $P$ is the main component in the specification of a formal grammar (specifically a generative grammar). The other components are a finite set $N$ of nonterminal symbols, a finite set (known as an alphabet) $\Sigma$ of terminal symbols that is disjoint from $N$, and a distinguished symbol $S \in N$ that is the start symbol.
In an unrestricted grammar, a production is of the form $u \to v$, where $u$ and $v$ are arbitrary strings of terminals and nonterminals; however, $u$ may not be the empty string. If $v$ is the empty string, this is denoted by the symbol $\epsilon$ or $\lambda$ (rather than leaving the right-hand side blank). So productions are members of the cartesian product
$V^{*}NV^{*} \times V^{*} = (V^{*} \setminus \Sigma^{*}) \times V^{*}$,
where $V := N \cup \Sigma$ is the vocabulary, ${}^{*}$ is the Kleene star operator, $V^{*}NV^{*}$ denotes concatenation (the strings over $V$ containing at least one nonterminal), and $\cup$ denotes set union. If we do not allow the start symbol to occur in $v$ (the word on the right side), we have to replace $V^{*}$ by $(V \setminus \{S\})^{*}$ on the right side of the cartesian product symbol.^[1]
The other types of formal grammar in the Chomsky hierarchy impose additional restrictions on what constitutes a production. Notably in a context-free grammar, the left-hand side of a production must
be a single nonterminal symbol. So productions are of the form:
$N \to (N \cup \Sigma)^{*}$
Grammar generation
To generate a string in the language, one begins with a string consisting of only a single start symbol, and then successively applies the rules (any number of times, in any order) to rewrite this
string. This stops when we obtain a string containing only terminals. The language consists of all the strings that can be generated in this manner. Any particular sequence of legal choices taken
during this rewriting process yields one particular string in the language. If there are multiple different ways of generating this single string, then the grammar is said to be ambiguous.
For example, assume the alphabet consists of $a$ and $b$, with the start symbol $S$, and we have the following rules:
1. $S \rightarrow aSb$
2. $S \rightarrow ba$
then we start with $S$, and can choose a rule to apply to it. If we choose rule 1, we replace $S$ with $aSb$ and obtain the string $aSb$. If we choose rule 1 again, we replace $S$ with $aSb$ and obtain the string $aaSbb$. This process is repeated until we only have symbols from the alphabet (i.e., $a$ and $b$). If we now choose rule 2, we replace $S$ with $ba$ and obtain the string $aababb$, and are done. We can write this series of choices more briefly, using symbols: $S \Rightarrow aSb \Rightarrow aaSbb \Rightarrow aababb$. The language of the grammar is the set of all the strings that can be generated using this process: $\{ba, abab, aababb, aaababbb, \dotsc\}$.
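The rewriting procedure just described is easy to mechanize. The following sketch (not from the article) enumerates every word of the example grammar up to a given length; since each rule application only grows the sentential form, the search terminates:

```python
def generate(max_len):
    """Enumerate all terminal strings of S -> aSb | ba with length <= max_len."""
    sentences, frontier = set(), ["S"]
    while frontier:
        form = frontier.pop()
        if len(form) > max_len:          # every rewrite only grows the form
            continue
        i = form.find("S")
        if i == -1:
            sentences.add(form)          # terminals only: a word of the language
            continue
        for rhs in ("aSb", "ba"):        # rewrite the nonterminal both ways
            frontier.append(form[:i] + rhs + form[i + 1:])
    return sentences

print(generate(6))
```

Running this with a bound of 6 yields exactly the first three words listed above: ba, abab, and aababb.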
See also | {"url":"https://static.hlt.bme.hu/semantics/external/pages/%C3%BCres_sor/en.wikipedia.org/wiki/Production_(computer_science).html","timestamp":"2024-11-11T14:15:57Z","content_type":"text/html","content_length":"69070","record_id":"<urn:uuid:79344c5c-d621-462e-a8fd-cd7fc06cbdfc>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00684.warc.gz"}
Hands to Attometers
Hands to Attometers Converter
Switch to Attometers to Hands Converter
How to use this Hands to Attometers Converter
Follow these steps to convert given length from the units of Hands to the units of Attometers.
1. Enter the input Hands value in the text field.
2. The calculator converts the given Hands into Attometers in real time, using the conversion formula, and displays the result under the Attometers label. You do not need to click any button. If the input changes, the Attometers value is recalculated automatically.
3. You may copy the resulting Attometers value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the reset button present below the input field.
What is the Formula to convert Hands to Attometers?
The formula to convert given length from Hands to Attometers is:
Length[(Attometers)] = Length[(Hands)] × 101600000000406420
Substitute the given value of length in hands, i.e., Length[(Hands)] in the above formula and simplify the right-hand side value. The resulting value is the length in attometers, i.e., Length[(Attometers)].
Consider that a horse is measured to be 16 hands tall.
Convert this height from hands to Attometers.
The length in hands is:
Length[(Hands)] = 16
The formula to convert length from hands to attometers is:
Length[(Attometers)] = Length[(Hands)] × 101600000000406420
Substitute given weight Length[(Hands)] = 16 in the above formula.
Length[(Attometers)] = 16 × 101600000000406420
Length[(Attometers)] = 1625600000006502700
Final Answer:
Therefore, 16 hand is equal to 1625600000006502700 am.
The length is 1625600000006502700 am, in attometers.
Consider that a racehorse stands at 15.5 hands.
Convert this measurement from hands to Attometers.
The length in hands is:
Length[(Hands)] = 15.5
The formula to convert length from hands to attometers is:
Length[(Attometers)] = Length[(Hands)] × 101600000000406420
Substitute given weight Length[(Hands)] = 15.5 in the above formula.
Length[(Attometers)] = 15.5 × 101600000000406420
Length[(Attometers)] = 1574800000006299400
Final Answer:
Therefore, 15.5 hand is equal to 1574800000006299400 am.
The length is 1574800000006299400 am, in attometers.
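In code, the conversion is a single multiplication. The sketch below uses the exact factor 1 hand = 0.1016 m = 1.016 × 10^17 am; note that the constant used on this page carries a small floating-point rounding artifact in its trailing digits:

```python
# Exact conversion factor: 0.1016 m per hand, expressed in attometers (1e-18 m).
HANDS_TO_AM = 101_600_000_000_000_000

def hands_to_attometers(hands):
    """Convert a length in hands to attometers."""
    return hands * HANDS_TO_AM

print(hands_to_attometers(16))    # 1625600000000000000
```

With the exact factor, 16 hands is 1.6256 × 10^18 am, matching the worked example above up to the rounding artifact in its last few digits.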
Hands to Attometers Conversion Table
The following table gives some of the most used conversions from Hands to Attometers.
Hands (hand) Attometers (am)
0 hand 0 am
1 hand 101600000000406420 am
2 hand 203200000000812830 am
3 hand 304800000001219260 am
4 hand 406400000001625660 am
5 hand 508000000002032060 am
6 hand 609600000002438500 am
7 hand 711200000002844900 am
8 hand 812800000003251300 am
9 hand 914400000003657700 am
10 hand 1016000000004064100 am
20 hand 2032000000008128300 am
50 hand 5080000000020321000 am
100 hand 10160000000040643000 am
1000 hand 101600000000406420000 am
10000 hand 1.0160000000040641e+21 am
100000 hand 1.016000000004064e+22 am
A hand is a unit of length used primarily to measure the height of horses. One hand is equivalent to 4 inches or approximately 0.1016 meters.
The hand is defined as 4 inches, providing a standardized measurement for assessing horse height, ensuring consistency across various contexts and practices.
Hands are used in the equestrian industry to measure the height of horses, from the ground to the highest point of the withers. The unit offers a convenient and traditional method for expressing
horse height and remains in use in equestrian competitions and breed standards.
An attometer (am) is a unit of length in the International System of Units (SI). One attometer is equivalent to 0.000000000000000001 meters, or 1 × 10^(-18) meters.
The attometer is defined as one quintillionth of a meter, making it an extremely small unit of measurement used for measuring subatomic distances.
Attometers are used in advanced scientific fields such as particle physics and quantum mechanics, where precise measurements at the atomic and subatomic scales are required.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Hands to Attometers in Length?
The formula to convert Hands to Attometers in Length is:
Hands * 101600000000406420
2. Is this tool free or paid?
This Length conversion tool, which converts Hands to Attometers, is completely free to use.
3. How do I convert Length from Hands to Attometers?
To convert Length from Hands to Attometers, you can use the following formula:
Hands * 101600000000406420
For example, if you have a value in Hands, you substitute that value in place of Hands in the above formula, and solve the mathematical expression to get the equivalent value in Attometers. | {"url":"https://convertonline.org/unit/?convert=hands-attometers","timestamp":"2024-11-15T03:44:18Z","content_type":"text/html","content_length":"90734","record_id":"<urn:uuid:f1196248-ee6f-4f80-886d-f5a8c4d62385>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00042.warc.gz"} |
Lowry et al. (2000) data/output files:
This directory contains some (but not all) of the data files and estimated topography fields described in Lowry, Ribe and Smith (2000), Dynamic elevation of the Cordillera, western United States, J.
Geophys. Res., 105, 23,371-23,390.
All files are ascii in a three-column (longitude, latitude, field) format. Included here:
• WUStopo.llz contains the raw topography (derived from an old USGS data set described in Simpson et al. (1986), J. Geophys. Res., 91, 8348-8372). These data were used to estimate effective elastic
thickness Te and as raw elevation for the estimate of "dynamic topography".
• WUSgrav.llz contains Bouguer gravity data (derived from an old USGS data set described in Simpson et al. (1986), J. Geophys. Res., 91, 8348-8372). These data were used to estimate effective
elastic thickness Te.
• WUSq_s.llz contains surface heat flow data (optimally interpolated from Dave Blackwell's compilation of measurements, circa 1998). These data were used to estimate the conductive thermal
contribution to elevation and the variable thickness of mechanical lithosphere.
• WUSTe.llz contains estimates of effective elastic thickness Te, using the methodology described in Lowry and Smith, J. Geophys. Res., 100, 17,947-17,963, 1995. These Te estimates were used to
calculate surface load contributions to elevation, for isostatic filtering of estimated internal contributions to topography from crustal mass and conductive thermal variations, and in the
estimation of variable thickness of mechanical lithosphere.
(1) The estimation windows used here are small (200 to 400 km). Subsequent research on synthetic data has shown that such small estimation windows can yield large variance in Te estimates.
(2) These estimates used an old approach to load deconvolution that neglected the higher (1000 kg/m^3) density of surficial fluid in ocean regions, which can bias Te toward lower values in
oceanic lithosphere.
(3) The old style of load deconvolution also used an a priori fixed surface-to-internal load ratio of -1 at long spatial wavelengths prone to singularity. This approach can bias toward higher or
lower values of Te depending on the relationship of the assumed load ratio to the true average.
(4) Elevation and Bouguer gravity data grids were extrapolated up to 50 km past the raw data constraints in some parts of the Pacific ocean, which can also introduce errors near the coast.
• surf_h.llz contains estimates of surface load contributions to total elevation, derived from isostatic deconvolution of the surface and internal loading. This was subtracted from the raw
elevation as the first step toward estimating "dynamic topography".
(1) Load estimates are very sensitive to assumed Te (see caveats above).
(2) At long wavelengths prone to singularity (for which load ratios were assumed fixed in original Te estimation), all elevation was assumed to result from internal loading. Subsequent studies
have shown that surface loading can have substantial amplitude at long wavelengths (e.g. Lowry and Zhong (2003) J. Geophys. Res., 108(E9), 10.1029/2003JE002111, #5099).
• crust_h.llz contains estimates of crustal mass contributions to total elevation (incorporating variations in both thickness and density estimated from seismic refraction data). This was also
subtracted from raw elevation as a step toward estimating "dynamic topography".
(1) Regression of seismic velocity to density has very large uncertainty, resulting in elevation uncertainties of order 600 to 1000 m (one-sigma!).
(2) The seismic refraction data used to constrain crustal mass are very sparsely sampled, have highly variable quality of original data and used various different modeling/inversion approaches to
arrive at velocity structure. (This portion of the analysis really should be updated with more recent data). The crustal mass estimate is consequently the largest source of error in the estimate
of "dynamic topography".
• therm_h.llz contains estimates of (mantle lithospheric) conductive thermal contributions to total elevation (estimated from a locally one-dimensional approximation of conductive geothermal
variation given surface heat flow and crustal thickness). This was also subtracted from raw elevation as a step toward estimating "dynamic topography".
(1) Mantle temperatures depend on poorly-known crustal thermal conductivity and radiogenic heating.
(2) Modeling assumes the geotherm is in steady-state. Consequently large spurious negative buoyancy is modeled in Cascadia and California (where mining of heat by ancient and/or ongoing
subduction yields artificially low estimates of deep temperature). Negative buoyancy may also be overestimated in areas heavily influenced by Pleistocene glaciation, as surface temperatures may
not have had time to fully re-equilibrate.
(3) Geotherm was referenced to a potential temperature but never grafted to an adiabat, which may introduce errors due to variable thickness of the thermal boundary layer. (This was corrected in
later versions of the codes but the calculation was not redone).
• dynamic.llz contains the estimate of dynamic topography after subtracting surface load, crustal mass and mantle conductive thermal mass contributions to raw elevation.
(1) Uncertainties in the fields subtracted from raw elevation are large (800+ m at one-sigma, see above) and high-frequency (less than 1000 km wavelength) variations are highly suspect owing to
modeling assumptions and data sampling problems.
(2) This is not true "dynamic elevation" but rather a deep mantle contribution to elevation incorporating BOTH sublithospheric thermal contributions AND mantle contributions relating to variable
mineralogy (particularly variable garnet/pyroxene concentrations resulting from melt removal) and variable water content; the latter contribution may be as large or larger than the former.
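All of the files above share the simple three-column ascii layout, so a loader needs only a few lines. This sketch assumes whitespace-delimited columns in the order stated at the top of this README (longitude, latitude, field):

```python
def read_llz(lines):
    """Parse three-column (lon, lat, field) ascii records into tuples of floats."""
    records = []
    for line in lines:
        parts = line.split()
        if len(parts) != 3:          # skip blank or malformed lines
            continue
        lon, lat, val = map(float, parts)
        records.append((lon, lat, val))
    return records

# e.g. records = read_llz(open("WUStopo.llz"))
```

The same function works for any of the .llz files listed here, since they all follow the same format.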
If you have questions or would like to see something else included here,
please write
and let me know.
This material is based upon work supported by NASA under SENH grant number NAG5-7619. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author
and do not necessarily reflect the views of NASA. | {"url":"http://aconcagua.geol.usu.edu/~arlowry/Data/WUS2000/README.html","timestamp":"2024-11-12T14:59:38Z","content_type":"text/html","content_length":"8545","record_id":"<urn:uuid:9a7dd9cc-5fec-43f2-8da1-31b952eba349>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00421.warc.gz"} |
Submit homework for the following Problems 7-16, 7-17, 7-18, 7-19, and 7-36. Note these problems must be solved using both the graphical and solver methods.
Homework instructions:
Complete the homework problems identified above in Excel. Put each problem on a separate worksheet (tab). Restate the homework problem at the top of each worksheet. Code all input cells in blue and
all outputs in green. Build all formulas in the Excel worksheet so that all outputs are automatically calculated based on the input variables. Sample problems including correct formatting will be
completed in the weekly (online) classroom session. To make the best use of the classroom session you should attempt all the homework problems prior to the classroom session so you can ask very
specific questions to help you complete the homework. This homework will address the learning objectives of applying the proper tools to solve LP problems using the graphical method as well as
demonstrating situations where special issues in LP such as infeasibility, unboundedness, redundancy, and alternative optimal solutions may apply. | {"url":"https://studypapers.blog/submit-homework-for-the-following-problems-7-16-7-17-7-18-7-19-and-7-36-not/","timestamp":"2024-11-07T00:06:14Z","content_type":"text/html","content_length":"50176","record_id":"<urn:uuid:dd846ccf-42f6-4ec8-8153-e05952d42c75>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00830.warc.gz"} |
Operators for Constructing Matrix-Valued Riesz Bases in L2 Function Spaces over LCA Groups
Core Concepts
This paper explores the construction of matrix-valued Riesz bases in the function space L2(G, Cs×r) over locally compact abelian (LCA) groups, focusing on classes of operators that generate these
bases from orthonormal bases and analyzing their properties, particularly in relation to positive and self-adjoint operators.
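For readers new to the terminology: a sequence $\{f_k\}$ in a Hilbert space $H$ is a Riesz basis if it is the image of an orthonormal basis under a bounded bijective operator, or equivalently if it is complete and there exist constants $0 < A \le B$ such that

```latex
A \sum_k |c_k|^2 \;\le\; \Big\| \sum_k c_k f_k \Big\|^2 \;\le\; B \sum_k |c_k|^2
\quad \text{for all finite scalar sequences } (c_k).
```

The paper's central observation is that in $L^2(G, \mathbb{C}^{s\times r})$ the "bounded bijective operator" half of this equivalence requires an additional adjointability condition with respect to the matrix-valued inner product.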
How can the theoretical framework developed in this paper be applied to specific signal processing tasks, such as image or audio compression using matrix-valued wavelets?
This paper lays the groundwork for constructing matrix-valued Riesz bases, which are particularly relevant to multi-channel signal processing tasks like image and audio compression using matrix-valued wavelets. Here's how:
• Matrix-Valued Wavelets: Traditional wavelets are extended to matrix-valued wavelets to handle multi-channel signals. These wavelets capture not only the frequency and location information but also the inter-channel correlations present in the signal.
• Riesz Bases and Frames: Riesz bases provide stable and unique representations of signals, making them suitable for compression. A signal decomposed using a Riesz basis can be reconstructed perfectly. Frames, a generalization of Riesz bases, offer more flexibility in representation, potentially allowing for redundant representations that are robust to noise and data loss.
• Operators for Construction: The paper focuses on identifying classes of operators that can generate matrix-valued Riesz bases from orthonormal bases. This is crucial because:
  • Efficient Transforms: These operators essentially define transformations that map an easily constructible orthonormal basis to a Riesz basis suitable for representing the signal efficiently.
  • Compression: By choosing operators that concentrate the signal's energy into a few significant coefficients in the Riesz basis domain, we achieve compression.
• Image and Audio Compression:
  • Image Compression: Matrix-valued wavelets can exploit the correlations between color channels (e.g., RGB) in an image. By constructing appropriate matrix-valued Riesz bases, we can represent images compactly while preserving important visual details.
  • Audio Compression: Similarly, in multi-channel audio (stereo or surround sound), matrix-valued wavelets can capture dependencies between channels. Riesz bases derived using the paper's framework can lead to efficient audio compression algorithms.
• Practical Implementation: The paper's theoretical results would need to be translated into concrete algorithms for constructing these operators and the corresponding matrix-valued wavelet bases. The choice of operators would depend on the specific characteristics of the signals being compressed (images or audio).
Could there be alternative characterizations of operators that generate matrix-valued Riesz bases, potentially using different operator classes or properties not explored in this paper?
Yes, the paper primarily focuses on positive and self-adjoint operators. Exploring alternative characterizations using different operator classes or properties could lead to new insights and potentially more efficient constructions of matrix-valued Riesz bases. Here are some avenues:
Operator Classes:
• Unitary Operators: While the paper touches upon unitary operators in the context of decomposing self-adjoint operators, a more direct characterization of unitary operators that generate Riesz bases could be valuable. Unitary operators preserve inner products, which might offer advantages in preserving signal energy during transformations.
• Normal Operators: Normal operators (those that commute with their adjoint) encompass both self-adjoint and unitary operators. Investigating conditions under which normal operators yield Riesz bases could be fruitful.
• Toeplitz and Circulant Operators: These operator classes have special structures that are often exploited in signal processing. Exploring their connection to matrix-valued Riesz bases, particularly in the context of convolution-based operations, could be promising.
Operator Properties:
• Spectral Properties: The spectrum of an operator (its eigenvalues) can provide insights into its behavior. Characterizing operators based on their spectral properties and their relationship to Riesz basis generation could be an interesting direction.
• Commutation Relations: Investigating operators that satisfy specific commutation relations with other operators relevant to the signal processing task (e.g., translation operators) might lead to Riesz bases with desirable properties.
• Time-Frequency Localization: For applications like time-frequency analysis, exploring operators that generate Riesz bases with good time-frequency localization properties would be beneficial.
What are the implications of the observation that not all bounded, linear, and bijective operators on L2(G, Cs×r) preserve frame properties when acting on matrix-valued orthonormal bases, and how
does this impact the broader understanding of frames in this function space?
The observation that not all bounded, linear, and bijective operators preserve frame properties has significant implications for our understanding of frames in the space of matrix-valued functions, L2(G, Cs×r):
• Loss of Structure: In traditional Hilbert spaces, bijective bounded linear operators map orthonormal bases to Riesz bases, which are a special type of frame. However, in L2(G, Cs×r), this is not guaranteed. This highlights a fundamental difference in the structure of frames in this function space.
• Importance of Adjointability: The paper emphasizes the crucial role of the adjointability property of operators with respect to the matrix-valued inner product. Operators that are not adjointable may not preserve the frame structure, even if they are bijective. This underscores the need to carefully consider the interplay between the operator and the specific inner product defining the geometry of the space.
• Restricted Class of Operators: The result implies that the class of operators suitable for constructing frames from orthonormal bases in L2(G, Cs×r) is more restricted. This necessitates a deeper investigation into the characteristics of operators that do preserve frame properties.
• Implications for Applications:
  • Signal Representation: The choice of operators for constructing frames in L2(G, Cs×r) becomes more critical. Not all operators will lead to stable and reconstructible representations of matrix-valued signals.
  • Algorithm Design: Algorithms for signal processing tasks like compression and denoising, which rely on frame representations, need to incorporate the adjointability constraint when working with matrix-valued signals.
• Deeper Understanding of Frames: This observation prompts a more nuanced understanding of frames in L2(G,
Cs×r). It suggests that the properties of frames in this space are more intricate than in traditional Hilbert spaces, and further research is needed to fully characterize them. | {"url":"https://linnk.ai/insight/scientific-computing/operators-for-constructing-matrix-valued-riesz-bases-in-l2-function-spaces-over-lca-groups--RWZdV0_6/","timestamp":"2024-11-02T02:34:14Z","content_type":"text/html","content_length":"286792","record_id":"<urn:uuid:9c9fe3c7-3150-4e2c-93b6-6b1301d76a96>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00429.warc.gz"} |
Structured State Spaces for Sequence Modeling (S4)
In this series of blog posts we introduce the Structured State Space sequence model (S4). In this first post we discuss the motivating setting of continuous time series, i.e. sequence data sampled
from an underlying continuous process, which is characterized by being smooth and very long. We briefly introduce the S4 model and overview the subsequent blog posts.
When it comes to modeling sequences, transformers have emerged as the face of ML and are by now the go-to model for NLP applications. Transformers are particularly effective for problems with medium
length dependencies (say length ~100-1000), where their attention mechanism allows processing complex interactions within a fixed context window. However, this strength is also a fundamental
limitation when it comes to sequences with very long dependencies that cannot fit inside this fixed window. In fact, outside of NLP - think domains such as speech, health, video, and robotics - time
series often have quite different characteristics that are not suited to transformers. In particular, data in these domains can be very long (potentially unbounded!) and are implicitly continuous in
time. In this post, we overview the challenges and recent advances in addressing extremely long, continuous time series.
The Challenges of Continuous Time Series
What we call “continuous time series” is a large class consisting of very long, smoothly varying data, because they frequently arise by sampling an underlying continuous-time process.
This class of sequence data is ubiquitous, and can arise for example from:
• Audio and speech
• Health data, e.g. biometric signals
• Images (in the spatial dimensions)
• Video
• Measurement systems
• Robotics
Examples of non-continuous data include text (in the form of tokens), or reinforcement learning on discrete MDPs.
"Continuous time series" are characterized by very long sequences sampled from an underlying continuous process
These continuous time series usually share several characteristics that are difficult to address, requiring models to:
• Handle information across long distances
□ For example, when processing speech data which is usually sampled at 16000Hz, even understanding short clips requires dependencies of tens of thousands of timesteps
• Understand the continuous nature of the data - in other words, should not be sensitive to the resolution of the data
□ E.g. your speech classification model should work whether the signal was sampled at 100Hz or 200Hz; your image generation model should be able to generate images at twice the resolution
• Be very efficient, at both training and inference time
□ Deployment may involve an "online" or incremental progress (e.g., continuously monitoring a set of sensors for anomalies)
□ On the other hand, efficient training requires parallelization to scale to the extremely long sequences found in audio and high-frequency sensor data
Structured State Spaces (S4)
The Structured State Space (S4) is a new sequence model based on the state space model that is continuous-time in nature, excels at modeling long dependencies, and is very computationally efficient.
Our recent work introduces the S4 model that addresses all of these challenges. In a nutshell, S4 is a new sequence model (i.e. a sequence-to-sequence transformation with the same interface as a
Transformer layer) that is both very computationally efficient and also excels at modeling long sequences. For example, on the Long Range Arena benchmark which was designed to compare the long-range
modeling capabilities as well as computational efficiency of sequence models, S4 set a substantial state-of-the-art in performance while being as fast as all competing models.
On the Long Range Arena (LRA) benchmark for long-range sequence modeling, S4 sets a clear SotA on every task while being at least as computationally efficient as all competitors. It is the first
sequence model to solve the Path-X task involving sequences of length 16384.
S4 is based on the following series of papers on new techniques for modeling long sequences:
What’s in this blog
This series of blog posts gives an overview of the motivation, background, and properties of S4. We focus in particular on discussing aspects of its intuition and development process that were
difficult to convey in paper form. We recommend checking out the full papers listed above for full technical details.
• In Part 2, we briefly survey the literature on S4’s line of work, discussing the strengths and weaknesses of various families of sequence models and previous attempts at combining these families.
• In Part 3, we introduce the state space model (SSM), showing how it has three different representations that give it strengths of continuous-time, recurrent, and convolutional models - such as
handling irregular sampled data, having unbounded context, and being computationally efficient at both train and test time
• (Work in progress!) In Part 4, we discuss the substantial challenges that SSMs face in the context of deep learning, and how S4 addresses them by incorporating HiPPO and introducing a new
parameterization and algorithm. We discuss its strengths and weaknesses and look toward future progress and applications
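As a toy illustration of the recurrent and convolutional views mentioned for Part 3, here is a scalar state-space model run both ways; the two outputs coincide. The parameters and the Euler discretization are arbitrary illustrative choices, not S4's actual parameterization:

```python
def ssm_recurrent(u, a=-0.5, b=1.0, c=1.0, dt=0.1):
    """Euler-discretized x' = a*x + b*u, y = c*x, run as a recurrence."""
    x, ys = 0.0, []
    for u_t in u:
        x = x + dt * (a * x + b * u_t)   # unbounded-context recurrent update
        ys.append(c * x)
    return ys

def ssm_convolution(u, a=-0.5, b=1.0, c=1.0, dt=0.1):
    """The same system unrolled into an explicit convolution kernel."""
    abar = 1.0 + dt * a                  # discrete-time state transition
    kernel = [c * (abar ** k) * dt * b for k in range(len(u))]
    return [sum(kernel[k] * u[t - k] for k in range(t + 1)) for t in range(len(u))]

u = [1.0, 2.0, 0.5, -1.0]
print(ssm_recurrent(u))
```

The recurrent form supports cheap online inference, while the convolutional form exposes the parallelism needed for efficient training — the dual views the SSM formulation provides.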
Using S4
We have publicly available code to reproduce our most important experiments, which includes a standalone PyTorch module for S4 that can be used as a drop-in replacement for any sequence model (e.g.,
as a direct replacement for self-attention).
Finally, Sasha Rush and Sidd Karamcheti have published an amazing blog post that dives into the gritty technical details of S4 and reimplements S4 completely from scratch in JAX. Check out their post
at The Annotated S4 as well as their JAX library! | {"url":"https://hazyresearch.stanford.edu/blog/2022-01-14-s4-1","timestamp":"2024-11-02T21:06:29Z","content_type":"text/html","content_length":"34460","record_id":"<urn:uuid:84e0a192-ac3a-4549-9572-7fc860c6f296>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00680.warc.gz"} |
Mastering the Art of Binary Search with This Python Program - You Won't Believe How Efficient It Is! - Topics on SEO & Backlinks
Binary search is a fundamental algorithm in computer science that allows you to efficiently search for a specific element in a sorted array. With the right implementation, it can drastically reduce
the time complexity of search operations, making it a valuable tool for any programmer.
Understanding Binary Search
Before we dive into the Python program that implements binary search, let’s first understand how binary search works. The basic idea behind binary search is to divide the array into two halves and
then repeatedly narrow down the search interval until the desired element is found. This approach takes advantage of the fact that the array is sorted, allowing us to eliminate half of the remaining
elements in each step.
Here’s a high-level overview of the binary search algorithm:
1. Initialize two pointers, left and right, to mark the beginning and end of the search interval.
2. While left is less than or equal to right, calculate the midpoint of the interval as mid = (left + right) / 2.
3. If the midpoint element is equal to the target element, return its index.
4. If the midpoint element is less than the target element, update left = mid + 1 to search the right half of the interval.
5. If the midpoint element is greater than the target element, update right = mid - 1 to search the left half of the interval.
By following these steps, binary search can quickly locate the desired element in the array with a time complexity of O(log n), where n is the number of elements in the array.
Implementing Binary Search in Python
Now that we have a solid understanding of how binary search works, let’s take a look at a Python program that implements the binary search algorithm. This program will accept a sorted array and a
target element as input, and return the index of the target element if it exists in the array.
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
In this Python program, the binary_search function takes two parameters: arr, which is the sorted array, and target, which is the element we want to search for. The function then uses the binary
search algorithm to locate the target element and return its index if found, or -1 if not found.
Testing the Program
Let’s test the binary_search function with a sample input to see how it performs. We’ll use the following sorted array and target element:
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 5
result = binary_search(arr, target)
print(result) # Output: 4
As expected, the program correctly returns the index of the target element, which is 4. This demonstrates the efficiency and accuracy of the binary search algorithm when implemented in Python.
Optimizing the Program
While the basic binary search algorithm works well for most cases, there are opportunities for optimization to further improve its performance. For example, you can enhance the program to handle edge
cases, such as when the array is empty or when the target element is not present in the array. Additionally, you can refine the implementation to make it more readable and maintainable by using
helper functions and error handling.
Here’s an optimized version of the binary_search function that includes error handling and edge case checks:
def binary_search(arr, target):
    if not arr:
        return -1
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
By incorporating these optimizations, the program becomes more robust and can handle a wider range of input scenarios, further solidifying its efficiency and reliability.
Mastering the art of binary search is a valuable skill for any programmer, and with the right Python program, you can harness the efficiency and power of this algorithm in your own projects. By
understanding the principles behind binary search and implementing it in Python, you can effectively search for elements in sorted arrays with minimal time complexity, making your code faster and
more efficient.
1. What is the time complexity of binary search?
The time complexity of binary search is O(log n), where n is the number of elements in the array. This makes binary search significantly more efficient than linear search algorithms, especially for
large datasets.
2. Can binary search be used for non-numeric data?
Yes, binary search can be used for non-numeric data as long as the elements are comparable and the array is sorted according to a defined order. This allows binary search to quickly locate specific
elements, regardless of their data type.
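To illustrate the point about non-numeric data, here is a small self-contained sketch (it repeats the article's function so the example runs on its own) that searches a sorted list of strings. Python's standard-library bisect module implements the same idea: bisect_left returns the insertion point, which equals the element's index when it is present.

```python
from bisect import bisect_left

def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

words = ["apple", "banana", "cherry", "grape", "mango", "pear"]
assert binary_search(words, "grape") == 3   # found at index 3
assert binary_search(words, "kiwi") == -1   # absent
assert bisect_left(words, "grape") == 3     # stdlib equivalent lookup
```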
3. Are there any drawbacks to using binary search?
While binary search offers exceptional efficiency, it does require the array to be sorted beforehand. This means that any modifications to the array, such as insertions or deletions, will necessitate
re-sorting before performing binary search operations, which can incur additional overhead.
ABC conjecture
From Encyclopedia of Mathematics
A conjectural relationship between the prime factors of two integers and those of their sum, proposed by David Masser and Joseph Oesterlé in 1985. It is connected with other problems of number theory: for example, the truth of the ABC conjecture would provide a new proof of Fermat's Last Theorem.
Define the radical of an integer to be the product of its distinct prime factors $$ r(n) = \prod_{p|n} p \ . $$ Suppose now that the equation $A + B + C = 0$ holds for coprime integers $A,B,C$. The
conjecture asserts that for every $\epsilon > 0$ there exists $\kappa(\epsilon) > 0$ such that $$ |A|, |B|, |C| < \kappa(\epsilon) r(ABC)^{1+\epsilon} \ . $$ A weaker form of the conjecture states
that $$ (|A| \cdot |B| \cdot |C|)^{1/3} < \kappa(\epsilon) r(ABC)^{1+\epsilon} \ . $$ If we write $N = r(ABC)$ and define $$ \kappa(\epsilon) = \inf_{A+B+C=0,\ (A,B)=1} \frac{\max\{|A|,|B|,|C|\}}{N^{1+\epsilon}} \,, $$
then it is known that $\kappa(\epsilon) \rightarrow \infty$ as $\epsilon \rightarrow 0$.
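As a small numeric aside (a sketch of my own, not part of the encyclopedia entry): the radical $r(n)$ is easy to compute by trial division, and the "quality" $q = \log\max(|A|,|B|,|C|) / \log r(ABC)$ of a triple measures how close it comes to violating the conjectured bound. The well-known high-quality triple $2 + 3^{10}\cdot 109 = 23^5$ has $q \approx 1.63$.

```python
import math

def radical(n):
    """Product of the distinct prime factors of n (trial division)."""
    n, r, p = abs(n), 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * n if n > 1 else r

def quality(a, b):
    """Quality of the ABC triple a + b = c with positive a, b."""
    c = a + b
    return math.log(c) / math.log(radical(a * b * c))

assert 2 + 3**10 * 109 == 23**5   # the Reyssat triple
```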
Baker introduced a more refined version of the conjecture in 1998. Assume as before that $A + B + C = 0$ holds for coprime integers $A,B,C$. Let $N$ be the radical of $ABC$ and $\omega$ the number
of distinct prime factors of $ABC$. Then there is an absolute constant $c$ such that $$ |A|, |B|, |C| < c (\epsilon^{-\omega} N)^{1+\epsilon} \ . $$
This form of the conjecture would give very strong bounds in the method of linear forms in logarithms.
It is known that there is an effectively computable $\kappa(\epsilon)$ such that $$ |A|, |B|, |C| < \exp\left({ \kappa(\epsilon) N^{1/3} (\log N)^3 }\right) \ . $$
• [1] Goldfeld, Dorian; "Modular forms, elliptic curves and the abc-conjecture", ed. Wüstholz, Gisbert; A panorama in number theory or The view from Baker's garden, (2002), pp. 128-147, Cambridge
University Press Zbl 1046.11035 ISBN 0-521-80799-9
• [2] Baker, Alan; "Logarithmic forms and the abc-conjecture", ed. Győry, Kálmán (ed.) et al.; Number theory. Diophantine, computational and algebraic aspects. Proceedings of the international
conference, Eger, Hungary, July 29-August 2, 1996, (1998), pp. 37-44, de Gruyter, Zbl 0973.11047 ISBN 3-11-015364-5
• [3] Stewart, C. L.; Yu Kunrui; On the abc conjecture. II, Duke Math. J., 108 no. 1 (2001), pp. 169-181, Zbl 1036.11032 DOI 10.1215/S0012-7094-01-10815-6
How to Cite This Entry:
ABC conjecture. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=ABC_conjecture&oldid=54465
DIGITAL ELECTRONICS - Land of electrons
In digital electronics we work with digital signals, as opposed to analogue ones. Where an analogue signal is continuously changing, a digital signal can only be one of a number of values. In most
cases this is one of two values, known as binary. Binary is where we will begin this course, by looking at how this numbering system is the basis of the operation of digital circuits. We will then
look at how we can express and analyse digital circuits using a branch of mathematics called Boolean algebra. We will then move onto the theory and construction of digital circuits, which I’ve split
into two sections: combinational logic and sequential logic.
Combinational logic uses circuits that have an output that is the function of its input. We will look at logic gates, which are a form of combinational logic that is used in many applications,
including making sequential logic possible!
Sequential logic uses circuits that have an output that is not only dependent on its present input but also its past input. This allows us to build systems that have a memory.
Circuitry often uses a mixture of combinational and sequential logic.
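The distinction above can be sketched in a few lines of code (an illustrative analogy in Python, not a circuit-level simulation): a half adder is combinational, because its outputs are pure functions of the current inputs, while a gated D latch is sequential, because its output also depends on a stored bit.

```python
def half_adder(a, b):
    """Combinational: sum and carry depend only on the inputs."""
    return a ^ b, a & b          # XOR gives the sum bit, AND the carry

class DLatch:
    """Sequential: output depends on stored state as well as inputs."""
    def __init__(self):
        self.q = 0
    def step(self, d, enable):
        if enable:               # capture d only while the latch is enabled
            self.q = d
        return self.q

assert half_adder(1, 1) == (0, 1)    # 1 + 1 = binary 10
latch = DLatch()
latch.step(1, enable=1)              # store a 1
assert latch.step(0, enable=0) == 1  # remembered despite the new input
```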
Combinational logic – Logic gates
Sequential logic
Signal conditioning
Data Structure | Tree Notes | B. Tech
Mobiprep has created last-minute notes for all topics of Tree to help you with the revision of concepts for your university examinations. So let’s get started with the lecture notes on Tree.
Our team has curated a list of the most important questions asked in universities such as DU, DTU, VIT, SRM, IP, Pune University, Manipal University, and many more. The questions are created from the
previous year's question papers of colleges and universities.
Question 1 - What do you understand by Minimum spanning tree?
Answer - The cost of a spanning tree is the sum of the weights of all the edges in the tree. A graph can have many spanning trees. A minimum spanning tree is a spanning tree whose cost is minimum
among all of them; a graph can also have more than one minimum spanning tree.
Minimum spanning tree has direct application in the design of networks. It is used in algorithms approximating the travelling salesman problem, multi-terminal minimum cut problem and minimum-cost
weighted perfect matching.
Question 2 - What is binary search tree? Explain with example?
1. A binary tree is a non-linear data structure which is a collection of elements called nodes.
2. In a binary tree, the topmost element is called the root node. An element can have 0, 1, or at most 2 child nodes.
3. There are many variants of Binary tree. A Binary search tree or BST is one among them.
4. A binary search tree, also known as an ordered binary tree, is a binary tree wherein the nodes are arranged in an order. The order is:
• All the nodes in the left sub-tree have a value less than that of the root node.
• All the nodes in the right sub-tree have a value greater than the value of the root node.
• The same rule is carried forward to all the sub-trees in the tree.
5. Since the tree is already ordered, the time taken to carry out a search operation on the tree is greatly reduced: we no longer have to traverse the entire tree, because at every sub-tree we get a
hint of where to search next.
6. Binary search trees also help in speeding up insertion and deletion operations.
7. The average running time of a search operation is O(log₂ n), as at every step the search area is reduced by half.
Consider an example. We need to insert the following elements in a binary tree:
1. Firstly we insert the first element as the root node.
2. Then we take the next element in the queue and check whether it is less than or greater than the root node.
3. Here it will go to the left sub-tree, as 2 is less than 48.
4. Then the third value, i.e. 98, will go to the right sub-tree, as 98 is greater than 48. And so on we progress.
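The insertion steps above can be sketched in Python (an illustrative sketch of my own, not the article's code; function and class names are invented):

```python
class Node:
    def __init__(self, data):
        self.data, self.left, self.right = data, None, None

def insert(root, data):
    """Insert data, keeping smaller keys in the left sub-tree, larger in the right."""
    if root is None:
        return Node(data)
    if data < root.data:
        root.left = insert(root.left, data)
    else:
        root.right = insert(root.right, data)
    return root

def search(root, target):
    """Each comparison discards one sub-tree, giving O(log2 n) time on average."""
    while root is not None and root.data != target:
        root = root.left if target < root.data else root.right
    return root is not None

root = None
for value in (48, 2, 98):        # the first three values from the example
    root = insert(root, value)
assert search(root, 98) and not search(root, 7)
```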
Question 3 - What is splay tree?
Answer - A splay tree is a binary search tree with the additional property that recently accessed elements are quick to access again. Like self-balancing binary search trees, a splay tree performs
basic operations such as insertion, look-up and removal in O(log n) amortized time. For many sequences of non-random operations, splay trees perform better than other search trees, even performing
better than O(log n) for sufficiently non-random patterns, all without requiring advance knowledge of the pattern.
Question 4 - Write down the code to implement binary tree?
import java.util.LinkedList;
import java.util.Queue;
public class BinaryTree {
    //Represent a node of binary tree
    public static class Node {
        int data;
        Node left;
        Node right;
        public Node(int data) {
            //Assign data to the new node, set left and right children to null
            this.data = data;
            this.left = null;
            this.right = null;
        }
    }
    //Represent the root of binary tree
    public Node root;
    public BinaryTree() {
        root = null;
    }
    //insertNode() will add a new node to the binary tree in level order
    public void insertNode(int data) {
        //Create a new node
        Node newNode = new Node(data);
        //Check whether tree is empty
        if (root == null) {
            root = newNode;
        }
        else {
            Queue<Node> queue = new LinkedList<Node>();
            //Add root to the queue
            queue.add(root);
            while (true) {
                Node node = queue.remove();
                //If node has both left and right child, add both the children to the queue
                if (node.left != null && node.right != null) {
                    queue.add(node.left);
                    queue.add(node.right);
                }
                else {
                    //If node has no left child, make newNode its left child
                    if (node.left == null) {
                        node.left = newNode;
                    }
                    //If node has a left child but no right child, make newNode its right child
                    else {
                        node.right = newNode;
                    }
                    break;
                }
            }
        }
    }
    //inorderTraversal() will perform inorder traversal on the binary tree
    public void inorderTraversal(Node node) {
        //Check whether tree is empty
        if (root == null) {
            System.out.println("Tree is empty");
        }
        else {
            if (node.left != null)
                inorderTraversal(node.left);
            System.out.print(node.data + " ");
            if (node.right != null)
                inorderTraversal(node.right);
        }
    }
    public static void main(String[] args) {
        BinaryTree bt = new BinaryTree();
        //Add nodes to the binary tree
        //1 will become root node of the tree
        bt.insertNode(1);
        System.out.println("Binary tree after insertion");
        bt.inorderTraversal(bt.root);
        //2 will become left child and 3 will become right child of root node 1
        bt.insertNode(2);
        bt.insertNode(3);
        System.out.println("\nBinary tree after insertion");
        bt.inorderTraversal(bt.root);
        //4 will become left child and 5 will become right child of node 2
        bt.insertNode(4);
        bt.insertNode(5);
        System.out.println("\nBinary tree after insertion");
        bt.inorderTraversal(bt.root);
        //6 will become left child and 7 will become right child of node 3
        bt.insertNode(6);
        bt.insertNode(7);
        System.out.println("\nBinary tree after insertion");
        bt.inorderTraversal(bt.root);
    }
}
Binary tree after insertion
1
Binary tree after insertion
2 1 3
Binary tree after insertion
4 2 5 1 3
Binary tree after insertion
4 2 5 1 6 3 7
Question 5 - What is expression tree and expression manipulation?
Answer - The expression tree is a binary tree in which each internal node corresponds to the operator and each leaf node corresponds to the operand so for example expression tree for 3 + ((5+9)*2)
would be
Inorder traversal of the expression tree produces the infix version of the given expression; similarly, postorder traversal gives the postfix version.
Evaluating the expression represented by an expression tree:
Let t be the expression tree
If t.value is an operand then
    Return t.value
    A = evaluate(t.left)
    B = evaluate(t.right)
    // calculate applies operator 't.value'
    // on A and B, and returns value
    Return calculate(A, B, t.value)
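The pseudocode above can be realized in Python (a minimal sketch with my own node structure, assuming binary operators only):

```python
class ExprNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def evaluate(t):
    """Leaves hold operands; internal nodes apply their operator to both sub-trees."""
    if t.left is None and t.right is None:
        return t.value
    return OPS[t.value](evaluate(t.left), evaluate(t.right))

# The tree for 3 + ((5 + 9) * 2) from the example:
tree = ExprNode("+", ExprNode(3),
                ExprNode("*", ExprNode("+", ExprNode(5), ExprNode(9)),
                         ExprNode(2)))
assert evaluate(tree) == 31
```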
As Scales Become Separated: Lectures on Effective Field Theory
by Timothy Cohen
Publisher: arXiv.org 2019
Number of pages: 183
These lectures aim to provide a pedagogical introduction to the philosophical underpinnings and technical features of Effective Field Theory (EFT). Improving control of S-matrix elements in the
presence of a large hierarchy of physical scales is emphasized.
Download or read it online for free here:
Download link
(8MB, PDF)
Similar books
Quantum Electrodynamics
Ingemar Bengtsson
Stockholms universitet, Fysikum
Lecture notes for a graduate course in quantum electrodynamics. Contents: What is a field theory; Quantum theory of the free scalar field; Spacetime properties; The Unruh effect; The Dirac field; Quantum theory of the Dirac field; and more.
Introduction to Symplectic Field Theory
Y. Eliashberg, A. Givental, H. Hofer
arXiv
We sketch in this article a new theory, which we call Symplectic Field Theory or SFT, which provides an approach to Gromov-Witten invariants of symplectic manifolds and their Lagrangian submanifolds in the spirit of topological field theory.
Conformal Field Theory, Tensor Categories and Operator Algebras
Yasuyuki Kawahigashi
arXiv
This is a set of lecture notes on the operator algebraic approach to 2-dimensional conformal field theory. Representation theoretic aspects and connections to vertex operator algebras are emphasized. No knowledge on operator algebras is assumed.
Warren Siegel
It covers classical and quantum field theory, including many recent topics at an introductory yet nontrivial level: supersymmetry, general relativity, supergravity, strings, 1/N expansion in QCD, spacecone, many useful gauges, etc.
Münster 1999 – Scientific Programme
HL 37.3: Talk
Thursday, March 25, 1999, 16:30–16:45, H4
Nonlinear Absorption and Gain in ZnSe — •Q.Y. Peng, G. Manzke, and K. Henneberger — Fachbereich Physik, Universität Rostock, Universitätsplatz 3, D-18051 Rostock
The physical nature of the gain in II-VI semiconductors is discussed controversially in the literature. While in [1] excitonic effects were deduced to be dominant in the gain region, experimental
finding could be interpreted in terms of a strongly Coulomb-correlated electron-hole plasma (see [2]) too. We present calculations of the linear optical absorption and gain in ZnSe in a wide range of
densities of the excited carriers and temperature based on the semiconductor Bloch equations considering all relevant many-particle effects as dynamical screening and both Coulomb and LO-phonon
scattering. Including scattering processes between carriers and the coherent laser induced polarization (off-diagonal dephasing and effective interaction) [3] the correct position and linewidth of
the exciton is described. Furthermore, the correct transition from gain to absorption at the chemical potential is guaranteed by introduction of Wigner distributions for the carriers. Our pure
microscopic approach completely describes the experiments [1]: the optical gain in ZnSe appears in the vicinity of the 1s-exciton resonance. However, this evidence is found in terms of many-particle
effects in a strongly Coulomb-correlated electron-hole plasma.
[1] J. Ding, M. Hagerott, T. Ishihara, H. Jeon, and A. V. Nurmikko,
Phys. Rev. B47, 10528 (1993)
[2] K. Henneberger, H. G"uldner, G. Manzke, Q.Y Peng, and M.F. Pereira, Jr., Advances in Solid State Physics, 38, (1998).
[3] G. Manzke, Q. Y. Peng, K. Henneberger, U. Neukirch, K. Hauke, K. Wundke, J. Gutowski, and D. Hommel, Phys. Rev. Lett. 80, 4943 (1998).
Feature Column from the AMS
Combinatorial Games (Part I): The World of Piles of Stones
1. Introduction
When one thinks about mathematics, one thinks of numbers and shapes. In the somewhat more than 2000 year history of the organized study of mathematics, surely we know what there is to know about
numbers? Yet only about 30 years ago John Horton Conway showed how little we knew about numbers. He did this by pointing out a deep relationship between numbers and the seemingly more frivolous
pursuit of human beings - when compared with doing mathematics - of playing games. To many people, games are merely a way of pleasantly spending time in play. However some of the most distinguished
mathematicians of the 20th century, John von Neumann (1903-1957), John Nash, and John Conway showed the value to mathematics of taking games seriously.
There are actually two major mathematical theories of games. One of these theories, now usually referred to as Game Theory, tries to help get insight into making the wisest decision in situations
where various amounts of information are known. This information is in the form of what payoffs might accrue to the different players (either human opponents or a passive opponent called nature) and
the extent to which they know what courses of action are available to their opponents. One example of such a game would involve a farmer who has a choice of what crops to plant. The farmer must make
her decision without being certain what the weather will be like during the growing season and without knowing what prices she will get for her harvested crops. A second example would involve two
political leaders deciding among various political choices of action, where each knows exactly what actions the other can take and the payoffs to each depending on which action is taken. Games such
as Prisoner's Dilemma and Chicken are of this kind.
The other theory of games is often referred to as Combinatorial Games. One classical game of this kind is Nim, where two players move alternately by selecting a subset of the stones (any number of
stones: from one stone to the whole pile) from a single pile of stones in a collection of piles of stones. A player who can not move loses. Games such as Nim lead in many unexpected directions and we
will see that one of these directions is what inspired John Conway to develop a rich world of new numbers, now generally referred to as surreal numbers.
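Nim's complete theory is a classical result (Bouton's theorem, which this column does not derive): the player to move can force a win exactly when the bitwise XOR of the pile sizes is nonzero, and a winning move reduces some pile so the XOR becomes zero. A short sketch:

```python
from functools import reduce

def nim_winning_move(piles):
    """Return (pile index, new size) reaching XOR 0, or None if already losing."""
    x = reduce(lambda a, b: a ^ b, piles, 0)
    if x == 0:
        return None          # every move hands the opponent a winning position
    for i, p in enumerate(piles):
        if p ^ x < p:        # this pile can legally be reduced to p ^ x
            return i, p ^ x

assert nim_winning_move([1, 2, 3]) is None    # 1^2^3 == 0: a losing position
assert nim_winning_move([3, 4, 5]) == (0, 1)  # 3^4^5 == 2, so play 3 -> 1
```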
Joseph Malkevitch
York College (CUNY)
Email: malkevitch@york.cuny.edu
Log Processing Trend Log
A log processing trend log enables calculation within or between various trend logs. With this feature, building managers can gather new information about how the building is operating, they can view
historical data that is calculated against new data and in this way detect new trends, they can compare various values and calculate a new trend log displaying relativised behavior, or unify
measurements made with different units. The results from these calculations can indicate anomalies that have other reasons than the naturally varying conditions, so that it becomes easier to find
problems with the building.
For instance, you use a log processing trend log in the following contexts:
Use a log processing trend log to compute the amount of energy used during a given period of time, depending on the outdoor temperature. You can log this value along with, or instead of, the value of
the outdoor temperature. You can also use some other energy measure, such as degree days, or an energy index such as one listed in the MSCI World Index. For this calculation you use the formula
expressing the following function: "energy consumption divided by energy index".
Use a log processing trend log to compute how much the light level differs from the natural light level. For this calculation you use the formula expressing the following function: "natural light
level minus light level in the room".
Use a log processing trend log to store in a new log the average, or the minimum or the maximum temperature for a given period of time. You can log this value along with, or instead of, the value of
the temperature. It can be further used by a 3rd party reporting tool. For this calculation you use one of the formulas expressing one of the following functions: "average(temp)" or "min(temp)" or
again "max(temp)".
Use a log processing trend log to store in a new log the value of the total energy usage. You can log this value along with, or instead of, the values of the partial energy indicators. For this
calculation you use the formula expressing the following function: "a sum of energy usage measured by multiple meters where there is no main meter".
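As a rough illustration of what such a calculation does (a hypothetical Python sketch, not the product's ExprTk syntax; log names and record layout are invented), the "sum of energy usage measured by multiple meters" case amounts to evaluating a formula pointwise across trend-log records with aligned timestamps:

```python
# Hypothetical trend-log records: {timestamp: value} per meter, same timestamps.
meter_a = {"08:00": 12.0, "09:00": 15.5, "10:00": 14.0}
meter_b = {"08:00": 7.5, "09:00": 8.0, "10:00": 9.5}

def process_logs(formula, *logs):
    """Apply the formula pointwise across logs sharing the same timestamps."""
    return {t: formula(*(log[t] for log in logs)) for t in logs[0]}

total = process_logs(lambda a, b: a + b, meter_a, meter_b)
assert total["09:00"] == 23.5   # no main meter: total = sum of partial meters
```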
Mathematical Operators Used in Calculations
The formulas for the calculations use operators described in the C++ Mathematical Expression Toolkit Library (ExprTk). For more information, see: https://github.com/ArashPartow/exprtk Scroll down to
[SECTION 01 - CAPABILITIES] and [SECTION 02 - EXAMPLE EXPRESSIONS].
For instance, use:
(01) Basic operators: +, -, *, /, %, ^
(02) Assignment: :=, +=, -=, *=, /=, %=
(03) Equalities & Inequalities: =, ==, <>, !=, <, <=, >, >=
(04) Logic operators: and, mand, mor, nand, nor, not, or, shl, shr, xnor, xor, true, false
(05) @(My Trend Log:Count) will give you the number of records in the calculated period.
Non-Persisted Data
If data isn't persisted, the calculation is executed on the latest value as data is received; that latest value can keep changing until the period is finished. If data is persisted, the calculation on the latest value is executed only once the period is done. The displayed values can therefore differ between a non-persisted log and a persisted log.
Order and Ranking
Answer: (c) 34th
Solution: Total number of students after five new students joined the class = 60. Since the rank of Priya dropped by two, her new rank is 27th from the top. Hence, Priya's new rank from the end = 60 - 27 + 1 = 34th.
What is the cutoff for obesity?
Adult Body Mass Index
If your BMI is less than 18.5, it falls within the underweight range. If your BMI is 18.5 to <25, it falls within the healthy weight range. If your BMI is 25.0 to <30, it falls within the overweight
range. If your BMI is 30.0 or higher, it falls within the obesity range.
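The cutoffs above translate directly into code (a small sketch of my own; the factor 703 in the imperial formula is the standard CDC conversion):

```python
def bmi(weight_kg, height_m):
    """BMI = weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    return 703 * weight_lb / height_in ** 2   # CDC imperial formula

def classify(b):
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "healthy weight"
    if b < 30:
        return "overweight"
    return "obesity"

assert classify(bmi(70, 1.75)) == "healthy weight"   # BMI ~ 22.9
assert round(bmi_imperial(300, 70), 2) == 43.04      # 300 lb at 5 ft 10 in
```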
What is the principal cut off of obese person?
According to WHO, a body mass index (BMI) cut-off of > 30 kg/m2 is used for defining obesity [5, 6]. Similarly, the definition of central adiposity is a waist circumference > 94 cm for men and > 80 cm
for women [7].
What BMI Should an athlete have?
The American Council on Exercise recommends a BMI at or above 18.5 and body fat of 14 percent for women and six percent for men. The best athletes in sprint events tend to have a larger mean
mass and height than long-distance runners.
Is 24bmi overweight?
18.5 to 24.9: normal, healthy weight. 25 to 29.9: overweight. 30 or higher: obese.
BMI Calculator.
BMI Classification
19 – 24 Healthy
25 – 29 Overweight
30+ Obese
Are there 4 categories of obesity?
Four phenotypes of obesity have been described, based on body fat composition and distribution: (1) normal weight obese; (2) metabolically obese normal weight; (3) metabolically healthy obese; and
(4) metabolically unhealthy obese. Sarcopenic obesity has been characterized, related to all the described phenotypes.
Is a BMI of 36 morbidly obese?
The National Institutes of Health (NIH) define morbid obesity as: Being 100 pounds or more above your ideal body weight. Or, having a Body Mass Index (BMI) of 40 or greater. Or, having a BMI of 35 or
greater and one or more co-morbid condition.
What are the obesity classes?
Overweight (not obese), if BMI is 25.0 to 29.9. Class 1 (low-risk) obesity, if BMI is 30.0 to 34.9. Class 2 (moderate-risk) obesity, if BMI is 35.0 to 39.9. Class 3 (high-risk) obesity, if BMI is
equal to or greater than 40.0.
WHO obese classification?
Adults. For adults, WHO defines overweight and obesity as follows: overweight is a BMI greater than or equal to 25; and. obesity is a BMI greater than or equal to 30.
What is Lebron James BMI?
Also basketball player Lebron James and NHL right winger, Phil Kessel, both have a BMI of 27.5 and as we will learn later, a BMI between 25-29.9 is considered overweight.
What is Michael Phelps BMI?
Phelps, the most decorated Olympian of all time, has a BMI of 24 while Weertman, the gold medalist in the 10 kilometer marathon this past Olympics has a BMI of 25.
What is the BMI for 300 pounds?
BMI Calculator (values below correspond to a height of 5 ft 10 in)
Weight BMI
300 lbs 43.04
305 lbs 43.76
310 lbs 44.48
315 lbs 45.19
Is a BMI of 27 bad?
Underweight: BMI below 18.5. Normal: BMI of 18.5 to 24.9. Overweight: BMI of 25 to 29.9. Obese: BMI of 30 or higher.
What is a Class 3 obesity?
What is obesity Type 2?
Additionally, you can divide obesity into three separate categories of severity: Obesity class 1: BMI between 30 and less than 35. Obesity class 2: BMI between 35 and less than 40. Obesity class 3:
BMI of 40 or higher.
What is Type 3 obesity?
Class III obesity, formerly known as morbid obesity, is a complex chronic disease in which a person has a body mass index (BMI) of 40 or higher or a BMI of 35 or higher and is experiencing
obesity-related health conditions.
What is Tom Brady’s BMI?
Six-time Super Bowl champion Tom Brady has a BMI of 27.4.
Is BMI accurate if you are muscular?
BMI (body mass index), which is based on the height and weight of a person, is an inaccurate measure of body fat content and does not take into account muscle mass, bone density, overall body
composition, and racial and sex differences, say researchers from the Perelman School of Medicine, University of Pennsylvania.
What is Usain Bolt’s BMI?
Usain Bolt, widely considered the greatest all-time sprinter, has a BMI of 24.5. BMI is useful as a tool to track average changes over time in populations, but at an individual level it doesn't
really serve as a reliable tool.
Is 300 pounds severely obese?
The typical severely obese man weighs 300 pounds at a height of 5 feet 10 inches tall, while the typical severely obese woman weighs 250 pounds at a height of 5 feet 4 inches. People with a BMI of 25
to 29 are considered overweight, while a BMI of 30 or more classifies a person as being obese.
Is 300 pounds considered morbidly obese?
BMIs of 40 and up fall into the category of extreme obesity, also called morbid obesity. To give you a point of reference, someone who is 6-feet tall and 300 lbs.
Why is my BMI high but I don't look fat?
You can have a high BMI even if you have very little body fat, especially if you’re male and very muscular. It doesn’t take into account your waist circumference, which can be a good measure of your
risk for certain diseases, including heart disease and type 2 diabetes.
Why do we say goodbye to BMI?
Experts at the National Heart, Lung, and Blood Institute suggest that your BMI is an estimate of body fat, which if, “too high,” will increase your risk of heart disease, high blood pressure, type 2
diabetes, and certain cancers.
Is there a class 4 obesity?
What if my BMI is high but I’m muscular?
As Business Insider’s Erin Brodwin explained in a post, BMI doesn’t account for body composition, including differentiating between muscle mass and fat. Muscles are denser and heavier than body fat,
so if you have high muscle mass, your BMI might indicate that you’re overweight or obese.
How long do morbidly obese live?
Statistical analyses of the pooled data indicated that the excess numbers of deaths in the class III obesity group were mostly due to heart disease, cancer and diabetes. Years of life lost ranged
from 6.5 years for participants with a BMI of 40-44.9 to 13.7 years for a BMI of 55-59.9. | {"url":"https://tumericalive.com/what-is-the-cutoff-for-obesity/","timestamp":"2024-11-03T11:59:39Z","content_type":"text/html","content_length":"39107","record_id":"<urn:uuid:2988d41b-9615-42a4-b33f-d98211d0609b>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00066.warc.gz"} |
Counting Collections (K–1)
Students are given a collection of up to 20 objects. They work with a partner to figure out how many objects are in their collection and then each partner shows how many. Students may draw pictures
or write numbers to represent their collection.
In kindergarten, teachers may not want to provide a recording sheet, so that students can explain their count orally.
Additional Information
Create a collection of up to 20 objects per group of 2 students (buttons, two-color counters, linking cubes, paper clips, pattern blocks, square tiles).
Students are given a collection of up to 99 objects. They work with a partner to figure out how many objects are in their collection and then each partner records how many. Students may draw
pictures, write numbers or equations, or use base-ten representations to represent their collection.
Additional Information
Create a collection of up to 99 objects per group of 2 students (buttons, two-color counters, linking cubes, paper clips, pattern blocks, square tiles, paper placemats).
Stage 3: Estimate and Count Up to 120
Students are given a collection of up to 120 objects. They record an estimate for how many objects they think are in their collection. Then, they work with a partner to figure out how many objects
are in their collection and each partner records how many. Students may draw pictures, write numbers or equations, or use base-ten representations to represent their collection.
Additional Information
Create a collection of up to 120 objects per group of 2 students (buttons, two-color counters, linking cubes, paper clips, pattern blocks, square tiles, paper placemats). | {"url":"https://im.kendallhunt.com/k5_es/teachers/grade-1/center--counting-collections-k-1/center.html","timestamp":"2024-11-07T07:35:45Z","content_type":"text/html","content_length":"79133","record_id":"<urn:uuid:9c91cd3c-c5f9-4ae0-be32-64a86439b2d4>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00791.warc.gz"} |
Books for Machine Learning Beginners
If you’re just getting started with machine learning, you’ll need some good resources to help you learn the basics. Here are some of the best books for machine learning beginners.
Checkout this video:
If you’re new to machine learning, start with these books
If you’re new to machine learning, these books will help you get started. We’ll recommend some books on statistical learning, linear algebra, and calculus, as well as some more general books on
machine learning.
Statistical Learning:
-An Introduction to Statistical Learning by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani (2013)
-Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani, and Jerome Friedman (2009)
Linear Algebra:
-Introduction to Linear Algebra by Gilbert Strang (2016)
-Linear Algebra and Its Applications by David Lay (2016)
-A First Course in Linear Algebra by Robert Beezer (2010)
Calculus:
-Calculus by Michael Spivak (2005)
-A First Course in Calculus by Serge Lang (2009)
-Calculus Made Easy by Silvanus P Thompson (1914)
General Books on Machine Learning:
-Pattern Recognition and Machine Learning by Christopher Bishop (2006)
These books will give you the theoretical background you need to understand machine learning algorithms. If you’re looking for more practical books that will teach you how to implement machine
learning algorithms using code, check out our list of the best machine learning books for programmers.
4 books to start learning machine learning
Are you looking to start learning machine learning? If so, you’re in luck. There are many great resources available that can help you get started, including books. In this article, we’ve compiled a
list of 4 books that we think are perfect for machine learning beginners.
1. Machine Learning for Dummies by John Paul Mueller and Luca Massaron: This book is a great resource for those who are new to machine learning. It provides an overview of the basics of machine
learning, including how it works and how it can be used.
2. Building Machine Learning Systems with Python by Willi Richert and Luis Pedro Coelho: This book is perfect for those who want to learn more about how to build machine learning systems using
Python. It covers the basics of Python programming and then moves on to more advanced topics such as working with data, training models, and more.
3. Machine Learning in Action by Peter Harrington: This book is ideal for those who want to learn more about machine learning by seeing it in action. It covers a variety of topics, such as
classification, regression, prediction, and more.
4. Programming Collective Intelligence by Toby Segaran: This book is perfect for those who want to learn about the different algorithms used in machine learning. It also covers topics such as data
mining and web analytics.
The best books for learning machine learning
If you’re starting to learn machine learning, these are the best books to help get you up to speed.
There is a lot to learn when it comes to machine learning. If you’re just getting started, these books can help you understand the basics and get up to speed quickly.
1. Machine Learning for Dummies by John Paul Mueller and Luca Massaron
2. An Introduction to Statistical Learning by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani
3. Pattern Recognition and Machine Learning by Christopher Bishop
4. Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurélien Géron
5. Deep Learning by Yoshua Bengio, Ian Goodfellow and Aaron Courville
5 excellent books for machine learning beginners
1. “Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies” by John D. Kelleher
2. “Introduction to Statistical Learning: with Applications in R” by Gareth James
3. “Hands-On Machine Learning with Scikit-Learn, Keras, and Tensorflow: Concepts, Tools, and Techniques to Build Intelligent Systems” by Aurelien Geron
4. “Machine Learning Yearning: Technical Strategy for AI Engineers” by Andrew Ng
5. “Deep Learning” by Goodfellow et al.
The best machine learning books for beginners
If you want to get started in machine learning, these are the books you need to read.
We’ve rounded up some of the best machine learning books for beginners, covering a wide range of topics including math, statistics, programming, and more.
1. Machine Learning for Humans by Vishal Moyal
2. An Introduction to Statistical Learning by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani
3. Learning from Data by Yaser Abu-Mostafa, Malik Magdon-Ismail and Hsuan-Tien Lin
4. Pattern Recognition and Machine Learning by Christopher M. Bishop
5. Programming Collective Intelligence by Toby Segaran
6. Data Science from Scratch by Joel Grus
7. Doing Data Science by Cathy O’Neil and Rachel Schutt
8. Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville
4 books to help you learn machine learning
If you’re just getting started in machine learning, these four books will give you the foundation you need to start building predictive models.
1. “Introduction to Machine Learning” by Ethem Alpaydin
2. “Machine Learning: An Algorithmic Perspective” by Stephen Marsland
3. “Pattern Recognition and Machine Learning” by Christopher Bishop
4. “Data Mining: Practical Machine Learning Tools and Techniques” by Ian H. Witten and Eibe Frank
3 books every machine learning beginner should read
1. “Introduction to Machine Learning” by Ethem Alpaydin
2. “Machine Learning: A Probabilistic Perspective” by Kevin Murphy
3. “Pattern Recognition and Machine Learning” by Christopher Bishop
The ultimate guide to machine learning books for beginners
If you want to learn machine learning, there are a lot of resources out there. But with so many options, it can be tough to know where to start. That’s why we’ve put together a list of the best
machine learning books for beginners, so you can get started on your journey to becoming a machine learning expert.
1. Machine Learning for Dummies by John Paul Mueller and Luca Massaron
2. Data Science for Dummies by Lillian Pierson
3. Machine Learning in Action by Peter Harrington
4. An Introduction to Statistical Learning by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani
5. Neural Networks and Deep Learning by Michael Nielsen
6. Pattern Recognition and Machine Learning by Christopher Bishop
7. Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman
8. Introduction to Machine Learning by Ethem Alpaydin
9. A Course in Machine Learning by Hal Daumé III
10. Bayesian Reasoning and Machine Learning by David Barber
The best books on machine learning for beginners
If you’re just getting started in machine learning, these books will help you learn the basics and get started with actual projects.
Introduction to Machine Learning by Ethem Alpaydin: This book provides a gentle introduction to machine learning concepts, algorithms, and applications.
Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurélien Géron: This book walks readers through the code and math behind common machine learning algorithms, using examples in Python.
Machine Learning for Absolute Beginners by Oliver Theobald: This book is designed for people with no prior experience in machine learning. It covers the basic concepts and algorithms with practical examples.
A guide to the best machine learning books for beginners
If you’re just getting started in machine learning, you may be wondering what books you should read to get up to speed. Here are some of our top picks for the best machine learning books for beginners:
-Introduction to Machine Learning by Ethem Alpaydin
-Machine Learning: An Algorithmic Perspective by Stephen Marsland
-Pattern Recognition and Machine Learning by Christopher Bishop
-Machine Learning for Hackers by Drew Conway and John Myles White
Gas Fee Optimization | Smoothy
One important benefit of Smoothy's swap protocol using soft/hard weights and bonding curve is that
Swapping two tokens will only need to perform calculation on two tokens (including penalty), which greatly saves computational cost; and
The gas cost of the swapping will not increase as the token list in the pool grow longer.
By contrast, to compute the number of returned tokens, Curve.fi has to jointly compute the invariant using the percentages data from all tokens. This results in
More gas in data read as the list of tokens in the pool grows; and
More gas in computation as the list of tokens in the pool grows.
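The scaling difference above can be sketched with a toy cost model. This is hypothetical Python, not Smoothy's or Curve's actual contract code: it just counts how many token balances each design must read and compute over for a single swap.

```python
def pairwise_swap_ops(n_tokens):
    # Smoothy-style: only the two tokens involved in the swap are touched,
    # so the work is constant regardless of how long the token list grows.
    return 2

def invariant_swap_ops(n_tokens):
    # Curve-style: the invariant couples all n balances, so every balance
    # in the pool is read and enters the computation.
    return n_tokens

ops_small = (pairwise_swap_ops(4), invariant_swap_ops(4))
ops_large = (pairwise_swap_ops(20), invariant_swap_ops(20))
```

In this model the pairwise design's cost is flat as the pool grows, while the joint-invariant design's cost grows linearly, mirroring the gas comparison described above.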
The following figure summarizes the comparison of the gas fees: | {"url":"https://docs.smoothy.finance/introduction/gas-fee-optimization","timestamp":"2024-11-14T02:12:27Z","content_type":"text/html","content_length":"121200","record_id":"<urn:uuid:d5897a22-a25c-4b3a-8e50-845d3b24d991>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00378.warc.gz"} |
AP Physics in 10 minutes: An insanely useful cheat sheet and resources
General Reminders
1. AP Physics is really a test of reasoning and logical derivation of equations, not memorization
2. Concepts come before the equations, not the other way around
3. This is a comprehensive list of key concepts, but here are the top 10 must know equations and concepts for your Physics test.
4. This cheat sheet is for all levels of Physics (but with AP Physics 1 in mind)
5. “Normal” force means perpendicular to a surface
6. Always choose a coordinate system for each problem and use it consistently. If you set up to be positive, then all downward vectors, like gravity, should be negative.
7. Net force is the sum of the forces in x or y direction. In other words, [katex]\Sigma F_x = ma_x[/katex] and [katex]\Sigma F_y = ma_y[/katex].
8. Acceleration can be linear or centripetal.
9. For FRQs that require a solution in terms of given variables, use the variables given, not your own.
10. AP Physics questions test two concepts (like forces and energy) together which makes problems much more difficult. Make sure to practice plenty of AP style question to get the hang of it.
11. Momentum (linear and angular) is ALWAYS conserved in collisions. Energy is only conserved if there is no external forces (such as friction).
12. The best way to study for AP Physics 1 is to do as many practice problems as you can. Completely understand the ones you got wrong and discuss them with a teacher or tutor to quickly clear up misconceptions.
13. You can find speed reviews of each unit here.
14. If you’re taking the AP Physics 1 Exam, check out 5 hacks to score a 5 guide.
15. For individualized and professional help, check out nerd-note’s AP Physics Prep program to help you score a 5.
1. 3 steps to solve ALL kinematic problems: (a) read the problem and write down 3 known variable and 1 unknown; (b) Pick and equation that uses given variables; (c) plug numbers in and solve
2. Time in air is based on height. A ball rolled off a horizontal table will take the same amount of time to hit the ground as another dropped from the same height. For FRQs be able to prove this
using equations
3. To solve 2d motion problems (Projectile motion) list variables based on horizontal and vertical direction separately and then apply kinematic equations normally
4. Distance v. time –> slope is velocity
5. Velocity v. time –> slope is acceleration; area under curve is displacement
6. Acceleration v. time –> area under curve is velocity
7. For any graph, look at the units to figure out what the slope or area under the curve represents.
8. If acceleration and velocity are in the same direction, the object speeds up; if in opposite directions, the object slows down.
9. Object at terminal velocity means the object is moving at constant velocity. This happens when the downwards weight force equals the upwards air drag (thus no change in velocity)
10. Make sure you can read word problems and make graphs out of them. Ex: draw the velocity vs time graph of a ball thrown up and off a cliff.
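Point 2 above (time in air depends only on height) can be checked numerically. A minimal sketch with illustrative numbers, assuming g = 9.81 m/s² and no air resistance:

```python
import math

g = 9.81  # m/s^2

def fall_time(h):
    # From h = (1/2) g t^2 with zero initial vertical velocity.
    return math.sqrt(2 * h / g)

def projectile_range(v_x, h):
    # Horizontal and vertical motion are independent: range = v_x * t.
    return v_x * fall_time(h)

t_dropped = fall_time(1.25)   # ball dropped from a 1.25 m table
t_rolled = fall_time(1.25)    # ball rolled off the same table: identical time
x = projectile_range(2.0, 1.25)
```

The horizontal roll speed changes the range but not the time in the air, which is the point the FRQ proof asks you to make with equations.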
Here’s a more in depth speed review of linear kinematics.
Mechanics (Forces)
1. 1st Law –> Inertia = mass = resistance to objects motion.
2. 2nd Law –> [katex]\text{Net Force} = ma[/katex]
3. 3rd Law –> Every action force has an equal and opposite reaction force. Ex: Big car will hit a small car at an equal but opposite force.
4. Three steps to solving all force problems: (a) Draw an FBD, (b) Find the net force in the x and y direction separately (c) set net force equal to ma and solve
5. 8 common types of forces: Weight, normal, gravitational, tension, friction, centripetal, torque, and spring
6. Object in equilibrium = no net force = no acceleration = moving at 0 or constant velocity. Ex: an object moving at terminal velocity.
7. Net force should always point in the direction of acceleration and vice versa. See the tension example below.
8. The tension, in a rope holding an object in equilibrium, is equal to the weight of the object. If the object is accelerating upwards, [katex]T > mg[/katex]. If the object is accelerating
downwards, [katex]T < mg[/katex].
9. The only force on any projectile (neglecting air friction) is the projectile’s weight (mg, directed downwards).
10. A planet’s gravitational field is greatest at its core (or surface for simplicity). For earth this is 9.81 m/s^2. The further you move away from the surface, the weaker gravity becomes.
11. The gravitational force (force of attraction) between two masses: [katex]F_g = G\frac{m_1m_2}{r^2}[/katex]
Friction
1. Static friction is a range of values such that [katex]0 \leq f_s \leq \mu N[/katex]. Kinetic (sliding) friction is just [katex]f_k = \mu N[/katex].
2. Tires rotate because of static friction. Tires/objects slide because of kinetic friction.
Inclined Planes
1. The angle of an inclined plane is the same as the angle between the line of the weight of the object on the incline and the line perpendicular to the incline.
2. The normal force exerted on an object (even on a horizontal surface) is not always equal to the object’s weight.
Simple Pulleys
1. For Atwood’s machines (a pulley system): solve the problem as a system. In other words, treat the whole thing as one system not individual components.
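The "solve it as a system" advice above can be made concrete. For an Atwood machine, the net force on the whole system is the weight difference and the inertia is the combined mass, giving a = (m₂ − m₁)g/(m₁ + m₂). A sketch with illustrative masses:

```python
g = 9.81  # m/s^2

def atwood(m1, m2):
    """Treat both hanging masses as one system: net force is the weight
    difference, total inertia is the combined mass."""
    a = (m2 - m1) * g / (m1 + m2)   # acceleration (positive when m2 falls)
    T = m1 * (g + a)                # tension, from Newton's 2nd law on m1
    return a, T

a, T = atwood(1.0, 2.0)
```

As a sanity check, applying Newton's 2nd law to m₂ instead, T = m₂(g − a), gives the same tension.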
Here’s a more in depth speed review of linear forces.
Circular Motion
1. Centripetal force is just another type of force. So apply the same problem solving strategy as above
2. If an object is moving in a circle, there must be a component of the net force towards the center equal to [katex]F_{\perp} = m\frac{v^2}{r}[/katex].
3. In a circle, speed is constant but velocity is not (you are changing directions), hence a [centripetal] acceleration.
4. The centripetal force is usually friction, weight, or tension. For example, a car can go around a curve because friction points into the curve causing a centripetal acceleration.
5. On a banked curve (removed from AP 1), it is the object’s normal force pointing into the curve that causes the centripetal acceleration. Thus you don’t need friction (for centripetal
acceleration) on a banked curve.
Orbits
1. Orbits are a part of circular motion.
2. For satellites in orbit, the centripetal force is gravity: [katex]F = \frac{GMm}{r^2} = \frac{mv^2}{r}[/katex] (assuming the orbit is circular and [katex]M \ggg m[/katex]). Notice: the mass of a
satellite doesn’t matter.
3. The closer a satellite is to what it orbits, the faster its linear speed (shown in equation above)
4. Geosynchronous orbit is when a satellite matches the rotational speed of the earth. This happens at approximately 35,786 kilometers above the earth’s surface.
5. For satellites and planets, angular momentum, [katex]L = mvr[/katex], is always conserved (in the absence of any outside forces/torques). In other words, the closer a planet is to the sun, the
faster it goes.
6. Planets have elliptical NOT circular orbits. Thus use Kepler’s Law, unless you are told to assume a circular orbit.
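Points 2 and 3 above can be checked numerically: setting gravity equal to the centripetal force gives v = √(GM/r), so lower orbits are faster. The radii below are illustrative round numbers for Earth.

```python
import math

G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M_earth = 5.972e24   # mass of Earth, kg

def orbital_speed(r):
    # Gravity supplies the centripetal force: GMm/r^2 = m v^2 / r.
    # Note the satellite's own mass cancels out.
    return math.sqrt(G * M_earth / r)

v_low = orbital_speed(6.77e6)   # ~400 km altitude (ISS-like orbit)
v_geo = orbital_speed(4.22e7)   # geosynchronous orbital radius
```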
Pendulums
1. Pendulums technically undergo circular motion. Tension causes the centripetal acceleration: [katex]T_x = \frac{mv^2}{r},\ T_y = mg[/katex]
2. Period of a pendulum: [katex]T = 2\pi \sqrt{\frac{L}{g}}[/katex]. Do not confuse Tension with Period
3. Frequency [katex] = f = \frac{1}{T}[/katex].
4. This equation goes a long way: [katex]\omega = 2\pi f = \frac{2\pi}{T} = \sqrt{\frac{k}{m}} = \sqrt{\frac{g}{L}}[/katex]; the first part is applicable to waves. k and m refer to springs, while g and L refer to pendulums.
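The period formula in point 2 as a quick numeric check (the 1 m length is illustrative; the small-angle approximation is assumed):

```python
import math

g = 9.81  # m/s^2

def pendulum_period(L):
    # T = 2 pi sqrt(L / g), valid for small swing angles.
    return 2 * math.pi * math.sqrt(L / g)

T = pendulum_period(1.0)   # a 1 m pendulum
f = 1 / T                  # frequency is the reciprocal of the period
```

Note that quadrupling the length only doubles the period, because of the square root.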
Here’s a more in depth speed review of circular motion.
Torque
1. Torque is like any other force, but it’s rotational, so you must use rotational variables. See this chart to see how you can easily convert linear equations into rotational ones.
2. Torque accelerates an object rotationally, shown by the equation [katex]\tau = I\alpha[/katex]
3. As stated above, every linear variable has a rotational (angular) counterpart.
□ [katex]\Delta x \text{ is } \Delta \theta[/katex]
□ [katex]\Delta v \text{ is } \Delta \omega[/katex]
□ [katex]a \text{ is } \alpha[/katex]
4. Any object rotating has rotational mass I also known as moment of inertia. The general formula is [katex]I = mr^2[/katex], however, I depends on the point of rotation and the shape of the object.
□ To find the total moment of inertia of any object: find the moment of inertia of each piece then add it up.
□ AP Physics C students need to know how to derive moment of inertia of any object.
5. The first step in any torque problem is to determine the point about which torques are calculated.
6. Torque is a vector cross product. [katex]\tau = \mathbf{r} \times \mathbf{F} = rF\sin\theta[/katex]. This basically means that only the force that is perpendicular to the radius affects the torque.
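Point 6 as a numeric sketch: only the perpendicular component of the force contributes. The 0.5 m arm, 10 N force, and point-mass moment of inertia I = mr² below are illustrative assumptions.

```python
import math

def torque(r, F, theta_deg):
    # tau = r F sin(theta): only the force component perpendicular
    # to the lever arm rotates the object.
    return r * F * math.sin(math.radians(theta_deg))

t_perp = torque(0.5, 10.0, 90)   # force at right angles: full torque
t_par = torque(0.5, 10.0, 0)     # force along the arm: no torque at all

# Angular acceleration from tau = I * alpha, using I = m r^2 for a point mass.
alpha = t_perp / (2.0 * 0.5 ** 2)
```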
Here’s a more in depth speed review of Torque (rotational forces).
Spring Force
1. Hooke’s Law, [katex]F_s = -kx[/katex], tells us that the force on a spring increases as you stretch or compress it from its equilibrium position. The negative sign tells us that it is a restoring
force (it can be ignored in most cases)
2. There are horizontal and vertical spring questions.
3. Acceleration of a mass on a spring is greatest at the amplitude (or the ends of motion) and 0 m/s^2 at the center (equilibrium position).
4. Velocity of a mass on a spring is greatest at the center and zero at the amplitude.
5. It is important to understand what affects the amplitude of a spring in different situations. For example, what happens to the amplitude of a mass on a spring moving horizontally when you drop
another mass on it? (This was on a real AP Exam FRQ)
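Points 3 and 4 can be verified with ω = √(k/m): the maximum speed Aω occurs at the center, and the maximum acceleration Aω² occurs at the amplitude. The spring constant, mass, and amplitude below are illustrative.

```python
import math

def shm(k, m, A):
    """Simple harmonic motion of a mass on a spring with amplitude A."""
    omega = math.sqrt(k / m)
    v_max = A * omega        # at the equilibrium position (center)
    a_max = A * omega ** 2   # at the amplitude (ends of the motion)
    return omega, v_max, a_max

omega, v_max, a_max = shm(k=100.0, m=1.0, A=0.1)
```

Note a_max = kA/m, which is just Hooke's law F = kA divided by the mass.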
Linear Momentum and Impulse
1. Momentum, [katex]p = mv[/katex], is always conserved in ALL collisions. Also conserved when there is no external force ([katex]\Delta p = 0[/katex]).
2. 3 types of collisions
□ Elastic collision: Kinetic energy is also conserved. The energy transfer is perfect and lossless. Think of two rubber balls bouncing off each other.
□ Inelastic collision: There is some loss of energy from deformation/heat loss. Think of a car hitting a bike and denting both (work done to dent is energy lost).
□ Perfectly Inelastic: loss of energy and objects are stuck together afterward and move together. Think of a truck crushing a car and they keep driving.
□ *In an explosion, momentum is also conserved
3. Conservation of momentum: [katex]P_i = P_f[/katex]. Note that for perfectly inelastic, the masses combine and move with the same final velocity. Don’t memorize the equation for each collision.
Understand how to start from scratch and derive it. This will be tested on the exam.
4. Velocity is a vector, so use the +/- signs! This is the most common mistake. For example, if an object strikes a surface and bounces back the change in velocity is [katex]= v – (-v)[/katex]
5. Impulse measures the change of momentum of an object ([katex]\Delta p \neq 0)[/katex]). It has the same units as momentum (kg m/s).
6. [katex]I = \Delta p = m\Delta v = mv_f - mv_i = F\Delta t[/katex]
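The points above in a short sketch: a perfectly inelastic collision derived from conservation of momentum, and an impulse calculation with the signs written out explicitly (all numbers are illustrative).

```python
def perfectly_inelastic(m1, v1, m2, v2):
    # Momentum is conserved and the objects move together afterward:
    # m1 v1 + m2 v2 = (m1 + m2) v_f
    return (m1 * v1 + m2 * v2) / (m1 + m2)

v_f = perfectly_inelastic(1000.0, 20.0, 500.0, 0.0)  # truck hits a parked car

# Impulse on a 0.5 kg ball that hits a wall at +10 m/s and bounces back
# at -8 m/s. Watch the signs: dv = v_f - v_i = (-8) - (+10) = -18 m/s.
impulse = 0.5 * (-8.0 - 10.0)
```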
Here’s a speed review of linear momentum and impulse.
Angular Momentum
1. One of the most commonly missed topics on AP Physics 1.
2. Extremely similar to linear momentum [katex]P = mv \rightarrow L = I\omega[/katex]
3. Conservation of linear and angular momentum should be done separately.
4. You can do conservation of angular momentum [katex]L_i = L_f[/katex], then replace rotational variables with linear ones. However, this does NOT make it linear momentum.
□ Ex: momentum of clay about to hit rod [katex]= I\omega = (mr^2)\left(\frac{v}{r}\right) = mvr[/katex]
5. When angular momentum is NOT conserved there is impulse.
6. [katex]\text{Angular impulse} = \Delta L = I\Delta \omega = I\omega_f - I\omega_i = \tau\Delta t[/katex], where I is the moment of inertia and [katex]\tau[/katex] is torque.
Here’s a speed review of angular momentum (this is a part of the Rotational Motion Speed Review)
1. If conservative forces are the only forces doing work, mechanical energy is conserved.
2. Mechanical energy = total energy in a system. The sum of KE and PE.
3. Solve any energy problem in 3 steps:
□ (a) Use conservation of energy – [katex]E_i = E_f[/katex]
□ (b) Determine the initial and final energy (is it KE, PE or Work)
□ (c) solve for the unknown.
4. If there is a difference in between the initial energy and final energy, then there is energy being lost as Work (like work due to friction).
5. Work: [katex]W = Fd\cos\theta[/katex]
□ Work is measured as the force applied in the direction of displacement.
□ The work done by any centripetal force is always zero.
□ Normal force does no work.
□ Measured in Joules or [katex]\text{kg} \cdot \text{m}^2/\text{s}^2[/katex]
□ Work done is the area under a force-position graph.
6. The work done in stopping an object is equal to its initial kinetic energy (likewise, the work done in getting an object up to speed is equal to its final kinetic energy).
7. Power – measured in watts where 1 Watt = 1 J/s (rate of change of energy)
□ [katex]P = \frac{\Delta W}{\Delta t} = Fv[/katex] (change in work per change in time)
Kinetic and Potential
1. [katex]\text{KE} = \frac{1}{2}mv^2[/katex] (kinetic energy)
2. Potential Energy (gravity): [katex]\text{PE} = mgh[/katex]
□ In orbit or far from planet: [katex]U = -\frac{GMm}{R}[/katex]
3. If you’re being asked for the kinetic energy of an object, don’t be too quick to use [katex]\text{KE} = \frac{1}{2}mv^2[/katex] unless the mass and speed are obvious and available. Think about
using work-energy considerations and if energy is conserved.
4. Relationship between kinetic energy and momentum: [katex]K = \frac{p^2}{2m}[/katex]
5. An object can be in translational or rotational equilibrium or both or neither.
6. Work done by kinetic friction is negative.
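The three-step recipe in point 3 as code: set Eᵢ = E_f, account for any work lost (e.g. to friction), and solve for the unknown. The heights, masses, and friction loss below are illustrative.

```python
import math

g = 9.81  # m/s^2

def speed_at_bottom(h, v0=0.0, work_lost=0.0, m=1.0):
    """Energy conservation: (1/2) m v0^2 + m g h = (1/2) m v^2 + W_lost."""
    ke_f = 0.5 * m * v0 ** 2 + m * g * h - work_lost
    return math.sqrt(2 * ke_f / m)

v_frictionless = speed_at_bottom(h=5.0)
v_with_friction = speed_at_bottom(h=5.0, work_lost=10.0)  # 10 J lost to friction
```

When the final speed comes out lower than the frictionless case, the gap is exactly the energy carried away as work by friction.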
Here’s an in depth speed review of work, energy, and power.
Spring Energy
1. The energy from a spring, [katex]\text{PE}_s = \frac{1}{2}kx^2[/katex], is a type of potential energy. It causes a mass (attached or detached to the spring) to accelerate.
2. PE is at a maximum at the amplitudes, while KE is at a maximum at the equilibrium position.
Rotational Energy
1. Any object that is rotating has rotational energy.
2. Remember that if it’s rolling then it has rotational energy + linear kinetic energy (e.g. a ball rolling down a ramp)
3. Rotational Energy [katex]= \frac{1}{2}I\omega^2[/katex], where I is the rotational inertia and ω is the angular velocity.
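Point 2 worked out: for an object rolling without slipping down a ramp, mgh = ½mv² + ½Iω² with ω = v/r. Writing I = cmr² (c is a shape factor, e.g. 2/5 for a solid sphere, 1 for a hoop), both m and r cancel and v = √(2gh/(1 + c)). A sketch with an illustrative 1 m drop:

```python
import math

g = 9.81  # m/s^2

def rolling_speed(h, c):
    # m g h = (1/2) m v^2 + (1/2)(c m r^2)(v/r)^2  =>  v = sqrt(2 g h / (1 + c))
    return math.sqrt(2 * g * h / (1 + c))

v_slide = rolling_speed(1.0, 0.0)     # frictionless sliding block (no rotation)
v_sphere = rolling_speed(1.0, 2 / 5)  # solid sphere
v_hoop = rolling_speed(1.0, 1.0)      # hoop: most energy tied up in rotation
```

The more rotational inertia (bigger c), the slower the object arrives, because a larger share of the potential energy goes into spinning rather than translating.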
This speed review goes over rotational energy.
END OF AP PHYSICS 1 TOPICS. As of 2021, AP Physics 1 no longer covers general waves, circuits, electricity, and electrostatics.
General Waves
1. No longer on the AP Physics 1 Exam
2. Mechanical waves can be longitudinal (displacement is parallel to motion) or transverse (displacement is perpendicular to motion).
3. EM waves are treated as transverse waves, while sound is a longitudinal wave.
4. v = fλ (for both sound and light waves)
□ Speed of wave is determined by medium, not frequency. This is why when you change f, you change λ, but not v. Think about sound – the speed of sound isn’t faster for 20Hz than it is for
□ Frequency only changes when the type of light changes. For example XRAYs have a higher frequency than radio waves.
□ Velocity on a string: v = √(T/µ), where T is tension and µ is linear density.
5. Speed of sound is 343 m/s at 20°C. Otherwise use v = 331 + 0.6T, where T is the temperature in °C.
□ Sound travels faster in water than air.
6. Wave energy is generally directly associated with amplitude
7. Properties of waves include refraction, superposition & interference, and diffraction. Waves also reflect, but so do particles.
8. The Doppler effect explains the perceived change in the frequency of a sound for a stationary person and a moving sound source.
□ Just because the sound is perceived to change in frequency does not actually mean it does. A person sitting in an ambulance will not hear a change in frequency as the vehicle moves.
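A quick numeric check of v = fλ and the sound-speed rule above. The 440 Hz tone is an illustrative choice; the medium (here air at 20°C) sets v, so raising f shortens λ without changing v.

```python
def sound_speed(temp_c):
    # v_sound ~= 331 + 0.6 T (T in Celsius); gives 343 m/s at 20 C.
    return 331 + 0.6 * temp_c

def wavelength(v, f):
    # v = f * lambda  =>  lambda = v / f
    return v / f

v20 = sound_speed(20)            # speed of sound at 20 C
lam_440 = wavelength(v20, 440)   # wavelength of a 440 Hz tone
```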
Harmonics
1. No longer on the AP Physics 1 Exam
2. There are 3 types of harmonics you will need memorized: string, tube open on both ends, tube closed on one end. It is super helpful to understand and memorize the graphs of the first 3 harmonics
of each type.
□ On a string (or in a pipe) where a standing wave occurs, the number of loops (antinodes) is the number of the harmonic.
□ The fundamental frequency is the 1st harmonic; overtones come after it.
3. Frequency determines pitch. Amplitude determines loudness.
Circuits and Electricity
1. No longer covered on the AP Physics 1 or C exam.
2. Current = I = ∆Q/∆t or amount of charge flowing through a cross section of a conductor
□ The direction of conventional current is the way positive charges go in a circuit, even though the actual charges that move are electrons.
3. Voltage/ emf is the force that “pushes” electrons.
□ Positive charges flow from high potential (higher voltage) to lower potential (lower voltage)
□ Batteries are a source of emf. By convention they have a positive cathode and a negative anode: conventional (positive) current flows from the positive terminal through the external circuit, while electrons flow from the anode to the cathode. Oxidation (loss of electrons) occurs at the anode.
□ Batteries have internal resistance such that the true potential difference in a battery is emf minus the voltage drop due to internal resistance. Together this is called terminal potential
(potential between the terminals)
4. Resistance: V = IR and also R = ρL/A
□ ρ is the resistivity of a material, a constant based on the material. A bigger cross-sectional area will let more charge through, so it reduces resistance. A longer wire means more
material to traverse, so it increases resistance.
□ Resistivity is a general characteristic of a material (e.g. copper) while resistance is a specific characteristic of a sample of a material (e.g. 2 ft of 14 gauge copper wire).
5. Superconductors have zero resistance when cooled below a critical temperature (different for different materials). Currently, high temperature superconductors – ceramics mostly – have critical
temperatures of around 100 K).
6. Power is dissipated by a resistor and given off as heat. Stuff that requires a lot of heat uses the most electricity.
Circuit Analysis
1. Kirchhoff’s Loop Rule (∑ V = 0) is an expression of conservation of energy (per unit charge).
2. Kirchhoff’s Point Rule (∑ I = 0) is an expression of the conservation of electric charge (per unit time).
3. If you must use the Loop Rule or the Point Rule, remember your sign conventions for emf’s and IR’s in a loop. The convention for the Point Rule is too obvious to print.
4. Voltmeters have a high resistance (to keep from drawing current) and are wired in parallel (because voltage is the same in parallel).
5. Ammeters have a low resistance (to keep from reducing the current) and are wired in series (because current is the same in series).
6. A light bulb lights up because of current. The more current, the brighter it is. Generally, we’ll treat the resistance of the light bulb as ohmic (i.e. constant – it follows Ohm’s Law), although
actually, most metallic conductors increase in resistance when heated.
7. The equivalent resistance of any two identical resistors in parallel is half of either resistor. (e.g. two 8Ω resistors in parallel give an equiv. R = 4Ω).
□ The equivalent resistance of any number of resistors in parallel is always less than that of the smallest resistor.
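Both parallel-resistor facts can be verified with a few lines of Python (resistor values are arbitrary examples):

```python
def parallel(*resistors):
    """Equivalent resistance of resistors in parallel: 1/Req = sum(1/Ri)."""
    return 1.0 / sum(1.0 / r for r in resistors)

# Two identical 8-ohm resistors in parallel give half of either: 4 ohms.
print(parallel(8, 8))  # 4.0
# The equivalent is always less than the smallest resistor.
print(parallel(4, 8, 100))
```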
Even more help
If you are still struggling to understand these concepts, it’s okay! AP Physics 1 is the hardest subject to learn. If you want to ensure that you will ACE all your tests, Nerd-Notes can help! We even
have the proven learning framework it takes to get a five on the AP Physics 1 exam! Please check out the Physics Mastermind for elite programs and extra physics help. You won’t be disappointed with
the results!
Quantum Computing: 12 Stocks For What The Future May Hold
Quantum Computing: 12 Stocks For What The Future May Hold
Image Source: Unsplash
Quantum computing companies are still years away from curing cancer, but they can do things in hours that would take classical computers years to do. This article highlights 12 quantum computing
stocks that could generate incredible returns for investors over the next few years.
[Ed. note: The first two stocks listed are microcaps and trade under $5. Such stocks are readily manipulated; do your own careful due diligence.]
Quantum Computing vs. Classical Computing
There is a major difference between classical and quantum computing. Below is an edited and abridged explanation from Nicoya Research:
• In classical computing, data is known as a bit, which can be either 0 or a 1 thus giving it binary qualities,
• With quantum computing, qubits are used instead of bits, which
□ exist in any proportion of both states at the same time (superposition),
□ can be entangled or invisibly connected, that is, if one qubit is altered, the other reacts at the same time allowing such computers to process vast amounts of information simultaneously
which leads to an enormous leap in processing power, and
□ allow us to simulate the natural world to improve our understanding of science to make improvements in health care, engineering and material science.
Addressable Size and CAGR of Quantum Computing Market
The size of the quantum computer market is expected to grow dramatically in the next 10 years. Research company McKinsey suggests that companies deploying quantum stand to gain $1.3 trillion in value
by 2035, which presents an incredible opportunity for investors.
12 Quantum Computer Company Stock Returns
Over the past 12 months there has been considerable progress in this arena reflected in an average advance of 28.9% in the collective stock prices of the 12 quantum computer companies highlighted
below since the last week of April (and 53.6% YTD). In comparison, the Nasdaq is only up 14.9% since the end of April and up 29.5% YTD.
1. Rigetti Computing (RGTI): +320.5% since the end of April
□ is currently developing a new 84-qubit processor with plans to put four such processors together to form a 336-qubit machine and also has plans to build a processor that can support 1,000
qubits in 2025 and one with 4,000 qubits in 2027.
2. D-Wave Quantum (QBTS): +196.0% since the end of April
□ was the world's first organization to sell a commercial quantum computer (to Lockheed Martin back in 2011) using a process called quantum annealing and has recently entered a multi-year
agreement with Lockheed Martin to upgrade its quantum computer system with 1,000+ qubits.
3. IonQ (IONQ): +167.2% since the end of April
□ uses trapped-ion technology in its processing units which relies on suspending ions in space using electromagnetic fields, and transmitting information through the movement of those ions in a
“shared trap” and currently has two quantum systems (an 11-qubit system launched in 2020 and a 25-qubit system launched in 2022) and has a 32-qubit system under development and in beta testing
with researchers.
4. Nvidia Corporation (NVDA): +52.8% since the end of April
□ has introduced a platform to construct quantum algorithms by employing widely known classical computer coding languages which have the ability to choose whether to run the algorithm on a
classical computer or quantum computing based on efficiency and is the leading designer of graphics processing units required for powerful computer processing.
5. Amazon (AMZN): +35.1% since the end of April
□ has partnered with the California Institute of Technology to foster the next generation of quantum scientists and fuel their efforts to build a fault-tolerant quantum computer.
6. Alphabet (GOOGL): +23.9% since the end of April
has already built a programmable superconducting processor that is about 158 million times faster than the world’s fastest supercomputer, and is continuing to push innovation in quantum
computing, from hardware control systems and quantum control to physics modeling and quantum error correction.
7. Alibaba Group (BABA): +21.8% since the end of April
□ is investing heavily in quantum computing research and building a quantum computing chip based on superconducting qubit technology.
8. Intel (INTC): +19.4% since the end of April
has constructed a 49-qubit superconducting chip and has developed a testing tool for quantum computers which allows researchers to validate quantum wafers and check that their qubits are
working correctly before they are constructed into a full quantum processor.
9. Baidu (BIDU): +19.2% since the end of April
□ has developed a quantum computer with a 10 qubit processor and has developed a 36-qubit quantum chip.
10. IBM (IBM): +18.5% since the end of April
□ plans to develop a multichip processor that is 860% faster than its current fastest processor by 2025 and have its quantum computers commercialized within 5 years.
11. Microsoft (MSFT): +17.3% since the end of April
□ is developing a quantum machine using a proprietary control chip and a cryo-compute core that work together to maintain a stable cold environment and offers a portfolio of quantum computers
from other hardware providers as a service to provide an open development environment for researchers, businesses and developers that enables the flexibility to tune algorithms and explore
today's quantum systems.
12. Taiwan Semiconductor (TSM): +15.3% since the end of April
□ plans to use its new cloud computing platform to develop quantum computing applications.
The above 12 stocks were up 28.9% since the end of April, so there may well be considerable room to run in the months and years to come.
Investing In Quantum Computer Stocks
To take full advantage of what the future might well hold consider investing in one or more of the stocks highlighted in this article. Naturally, it is imperative that you do your own due diligence
before making a decision to do so as my comments are not recommendations, per se. I hope this article has made you aware of the opportunities.
It is still too early to determine the best quantum computing stocks to invest in because the technology is rapidly advancing with several competing approaches, so investors might want to buy the
Defiance Quantum ETF (QTUM), which provides exposure to 71 companies on the forefront of machine learning, quantum computing, cloud computing, and other transformative computing technologies,
thereby providing diversified (albeit not quantum-computing-specific) exposure. It is up 14.6% since the end of April.
More By This Author:
Cronos Brands' Q2 Financials Show Major Reduction In Net Loss
Curaleaf Q2 Financials Report 31% Increase In Net Loss
Canopy Growth Q1 Financial Report: Cannabis Revenue Down 26%
Disclosure: None
The goal of reservr is to provide a flexible interface for specifying distributions and fitting them to (randomly) truncated and possibly interval-censored data. It provides custom fitting algorithms
to fit distributions to i.i.d. samples as well as dynamic TensorFlow integration to allow training neural networks with arbitrary output distributions. The latter can be used to include explanatory
variables in the distributional fits. reservr also provides some tools relevant for working with its core functionality in an actuarial setting, namely the functions prob_report() and
truncate_claims(), both of which make assumptions on the type of random truncation applied to the data.
Please refer to the vignettes distributions.Rmd and tensorflow.Rmd for detailed introductions.
You can install the latest development version of reservr from GitHub.
You can install the released version of reservr from CRAN with `install.packages("reservr")`.
If you want to use all of reservr's features, make sure to also install tensorflow.
This is a basic example which shows how to fit a normal distribution to randomly truncated and censored data.
mu <- 0
sigma <- 1
N <- 1000
p_cens <- 0.8
x <- rnorm(N, mean = mu, sd = sigma)
is_censored <- rbinom(N, size = 1L, prob = p_cens) == 1L
x_lower <- x
x_lower[is_censored] <- x[is_censored] - runif(sum(is_censored), min = 0, max = 0.5)
x_upper <- x
x_upper[is_censored] <- x[is_censored] + runif(sum(is_censored), min = 0, max = 0.5)
t_lower <- runif(N, min = -2, max = 0)
t_upper <- runif(N, min = 0, max = 2)
is_observed <- t_lower <= x & x <= t_upper
obs <- trunc_obs(
  xmin = pmax(x_lower, t_lower)[is_observed],
  xmax = pmin(x_upper, t_upper)[is_observed],
  tmin = t_lower[is_observed],
  tmax = t_upper[is_observed]
)

# Summary of the simulation
cat(sprintf(
  "simulated samples: %d\nobserved samples: %d\ncensored samples: %d\n",
  N, nrow(obs), sum(is.na(obs$x))
))
# Define outcome distribution and perform fit to truncated and (partially) censored sample
dist <- dist_normal()
the_fit <- fit(dist, obs)
# Visualize resulting parameters and show a kernel density estimate of the samples.
# We replace interval-censored samples with their midpoint for the kernel density estimate.
plot_distributions(
  true = dist,
  fitted = dist,
  empirical = dist_empirical(0.5 * (obs$xmin + obs$xmax)),
  .x = seq(-5, 5, length.out = 201),
  plots = "density",
  with_params = list(
    true = list(mean = mu, sd = sigma),
    fitted = the_fit$params
  )
)
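For intuition, the objective that fitting truncated, interval-censored data maximizes can be sketched in plain Python. This is a simplified stand-in, not reservr's actual code; `norm_pdf`, `norm_cdf`, `trunc_loglik`, and the sample tuples are illustrative names. Each exact observation contributes its density, each interval-censored one its probability mass, and both are conditioned on the sample falling inside its truncation window:

```python
import math

def norm_pdf(x, mu, sd):
    z = (x - mu) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def norm_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def trunc_loglik(observations, mu, sd):
    """Log-likelihood of truncated, possibly interval-censored normal data.

    Each observation is (xmin, xmax, tmin, tmax); xmin == xmax marks an
    exactly observed sample. Every contribution is conditioned on the
    sample falling inside its truncation window [tmin, tmax].
    """
    total = 0.0
    for xmin, xmax, tmin, tmax in observations:
        window = norm_cdf(tmax, mu, sd) - norm_cdf(tmin, mu, sd)
        if xmin == xmax:  # exact observation: density / window
            total += math.log(norm_pdf(xmin, mu, sd) / window)
        else:  # interval-censored: probability mass / window
            total += math.log((norm_cdf(xmax, mu, sd) - norm_cdf(xmin, mu, sd)) / window)
    return total

obs = [(0.1, 0.1, -2.0, 2.0), (-0.3, 0.2, -2.0, 2.0)]
# The true parameters (mu = 0) should score better than a bad guess.
print(trunc_loglik(obs, 0.0, 1.0) > trunc_loglik(obs, 2.0, 1.0))  # True
```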
Code of Conduct
Please note that the reservr project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.
diode circuit examples
The non-linear and polarity-dependent characteristics of the diode make for a very interesting and useful device, albeit at the expense of added complexity of circuit design and analysis. What is the Diodes
Circuit Simulator? of EECS Example: Analysis of a Complex Diode Circuit Consider this circuit with two ideal diodes: Let’s analyze this circuit and find v_D1, i_D1, i_D2, and v_D2. Ideal Diode
in Circuit. Asterisk! *Caveat! Simulators / Examples . The voltage drop across the diode is then equal to Vd = V - V40 - V20. To see how this is helpful, consider the following circuit: Example
circuit with a diode. By using Diodes SPICE models, the designer … Above are a couple simple diode circuit examples. The below circuits are the examples of a couple of simple ideal diode circuits.
Now, the other diode and the other half of the transformer’s secondary winding carry current while the portions of the circuit formerly carrying current during the last half-cycle sit idle. On the
right, diode D2 is reverse biased. Written by Willy McAllister. In the first circuit, the D1 diode is forward biased and permitting the flow of current through the circuit. We solve a diode circuit
graphically by plotting a diode i-v curve and resistor to find the intersection. Use the diode equation for that state to solve the circuit equations and find i_D and v_D. 3. The load still “sees”
half of a sine wave , of the same polarity as before: positive on top and negative on bottom. Diodes conduct current in one direction but not the other. Recipe for solving diode circuits (State of
diode is unknown before solving the circuit) 1. We will discuss four methods of solving diode circuits: load line analysis, mathematical model, ideal diode circuit analysis, and constant voltage drop
diode … of Kansas Dept. In essence it looks like a short circuit. Write down all circuit equations and simplify as much as possible 2. The diode is two terminal non linear device whose I-V
characteristic besides exhibiting non-linear behavior is also polarity dependent. So, it looks like a short circuit. For the circuit shown in Figure 1, find the diode current (I_D), the diode voltage
(V_D), and the voltage across the resistor (V_R). Solution: Since the current established by the source flows in the direction of the diode’s arrow, the diode is on and can be replaced by a closed switch.
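The assumed-state recipe described here can be sketched in Python for a minimal series circuit of a voltage source, a resistor, and an ideal diode (the function name and component values are made up for illustration):

```python
def solve_series_diode(V, R):
    """Assume the ideal diode is ON (a closed switch), solve, then check.

    If the resulting current flows in the diode's forward direction
    (i_D >= 0), the assumption is consistent; otherwise the diode is
    OFF and behaves as an open circuit (i_D = 0, v_D = V).
    """
    i_assumed = V / R  # Ohm's law with the diode replaced by a short
    if i_assumed >= 0:
        return {"state": "ON", "i_D": i_assumed, "v_D": 0.0}
    return {"state": "OFF", "i_D": 0.0, "v_D": V}

print(solve_series_diode(10.0, 2.0))   # forward biased: ON, 5 A through R
print(solve_series_diode(-10.0, 2.0))  # reverse biased: OFF, no current
```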
Applications of Transient Voltage Suppressor Diode can be found in Data and Signal lines, Microprocessor and MOS memory, AC/DC power lines, and Telecommunication Equipments. TVS diode is normally
used for Diversion/clamping in low-energy circuits and systems, and for ESD protection in circuits. On the left, diode D1 is forward biased and allowing current to flow through the circuit. Diodes
Introduction. Example 1. If you try to calculate the total current in the circuit I1 using series and parallel rules, you’ll find that this current is a function of the voltage drop across the diode.
Remember, we … Current cannot flow through the circuit, and it essentially looks like an open circuit. In this tutorial, we are going to discuss the Q-point of a diode and use few diode circuit
problems to show how to solve diode circuits. 8/18/2005 Example_Analysis of a Complex Diode Circuit.doc 1/5 Jim Stiles The Univ. Assume diode is one state (either ON or OFF). The Diodes Circuit
Simulator is a free downloadable simulator which allows you to draw a circuit which can be tested in simulation prior to prototyping. Some example circuits are given below.
How to evaluate limits in non-standard surreal analysis? | Hire Someone To Do Calculus Exam For Me
How to evaluate limits in non-standard surreal analysis? (This is a common issue, for example seen in the publication of computer software for analyzing color values; see a
section in an original paper by Daniel Perri, published 15.1, on the subject of computational psychology, vol. 4). It is helpful to know the percentage of degrees of freedom that goes over
those degrees of freedom. However, in one important way, it does not help to determine the statistical significance of the non-standard measure (see section 6.5). So how are you going to quantify
limits in non-standard empirical analysis? What options are already available for evaluation in this regard? This is from a review paper by Ross Hegerlar-Dernanius, Theoretical
Explorations in Machine Learning, Springer, 2018, London. The following table lists possible arguments and suggests several possible steps toward solving this problem. 10 Let us begin by talking
about limits in non-standard empirical analysis. 1. Can it be that one person can find a sample of size n+2 in a single sample? 2. Is it impossible (it is true that one sample of this size might
never be more than n? in this case) to get 100% of the sample in this number (this is very common currently)? 3. Is non-standard analysis based on the non-standard approach known to be more powerful
than that advocated by Perri? 4. What is the relation between the range of tests (n+1)-q and the standard-quantile comparison (q)? 5. The number of times a random number system passes a test without
a small deviation from q? 6. How many instances of the standard-quantile test are there instead of 10? 7. What is the relevance of that definition? 8. This question is answered by the last clause,
but as an alternativeHow to evaluate limits in non-standard surreal analysis? Does one need a normal way of evaluating the limits of a non-standard surreal analysis? For the reasons discussed in this
section to consider the limits in the non-standard approach to analysis, I strongly recommend to refer the reader to Pouval’s Non-Standard Spans Are You Proper, Elsewhere. To see the limit, use these
rules and follow them completely to evaluate it: [topics] (topics) (limit) [l] [book review] (book review) [comments] (comments) (comments) [book review] (book review) [comments] (book review)
[author] (author) [title] The limits in four blocks [topics] (topics) (limit) (book review) [order] (order) (book review) (author) [order] (book review) [review] (review) (review) [author] [title] Hi
Kati Hi, I’m glad i read this comment.
My experience is that it goes against my point that “if a limit is established, all sorts of unpleasant implications will happen to the limit”. One of my favorites in art studies is’metaphysical
limits”. I was aware of this when I first heard of the’metaphysical limit’. By contrast, I’ll name it a ‘limit’ in any studies to preserve a limit of some sort. I have read somewhere that this is
really what you mean A: Of course I really don’t understand why this would work. What’s the limit in your example in terms of people who draw infinite paths? In both examples $S$ is a finite-state
problem, so it is not restricted by a given limit on any $S$ (since the problem does not involve the limitomorphism of all $S$). In the next example, the limit $S^*$ is different from real numbers,
andHow to evaluate limits in non-standard surreal analysis? Not only would it make your performance more perceptually interesting under that test, but it could make the way you do by experimenting
harder to follow as you craft your novel more difficult to understand. To put another way, does the standard surrealism hold any real intrinsic value you may need to infer? I checked all of my usual
tests, of varying length and date, and I found none that did indicate a clearly defined limit. For some sections of the test I did find that, indeed, my pitch was either more complex or more
random, if you didn’t know it. Here I find in contrast a test where I proved it’s no useful thing that my test results would show. I found I was pointing out a third place ‘Tower of Babel’, meaning
that I find the test’s meaning to a degree that might indicate either a lack of confidence or luck of me. Well, there is a long-drawn audience out there whose only ‘preference’ I could appreciate is
the way my pitch is generally expressed. I find (again) that whereas my pitch doesn’t depend on how deep I’ll dig the score, it has a corresponding effect on the distance that I can point to. Again,
this is because my pitch is subjective and I assume it could be the same on a broader scale. I would have found a more readable and (yet) more meaningful pitch was desirable had I been better
informed of what I was telling rather than my pitch, because that would have been more intuitive to my readers. That makes sense, then, since my pitch is the actual key that I can point it
to, meaning that way. Only a second before I decide to stop talking about its actual value. Now later I’ll give you another way to define this. Taking even just enough context to define a basic
principle of ‘throwing across a wall’ –
difficulty - Why this ugly looking formula to calculate 'target' from 'nBits' in block header: target = coefficient * 256**(exponent-3) - TutArchive
TL;DR The formula comes from turning an algorithm into a mathematical formula. nBits encodes the target with the first bytes as the size of the final target, followed by the 3 most significant bytes
of that target. This can be converted into that crazy formula.
The formula is the mathematical representation of the actual algorithm that was used for the compression. To understand why the formula is as it is, we need to first take a look at the original code
that encodes a 256 bit target as a 4 byte nBits. This is from 0.1.5 in the bitcoin/bitcoin source tree, but is the same in 0.1.0:
unsigned int GetCompact() const
{
    unsigned int nSize = BN_bn2mpi(this, NULL);
    std::vector<unsigned char> vch(nSize);
    nSize -= 4;
    BN_bn2mpi(this, &vch[0]);
    unsigned int nCompact = nSize << 24;
    if (nSize >= 1) nCompact |= (vch[4] << 16);
    if (nSize >= 2) nCompact |= (vch[5] << 8);
    if (nSize >= 3) nCompact |= (vch[6] << 0);
    return nCompact;
}
Now the first thing to look at is this BN_bn2mpi function. Early versions of Bitcoin used OpenSSL’s Bignum module for these calculations. So we need to look at OpenSSL’s docs for this function.
From the docs, we read:
BN_bn2mpi() and BN_mpi2bn() convert BIGNUMs from and to a format that consists of the number’s length in bytes represented as a 4-byte big-endian number, and the number itself in big-endian
format, where the most significant bit signals a negative number (the representation of numbers with the MSB set is prefixed with null byte).
BN_bn2mpi() stores the representation of a at to, where to must be large enough to hold the result. The size can be determined by calling BN_bn2mpi(a, NULL).
This means that BN_bn2mpi will place into a buffer the Bignum in the format
<4 byte size> | <variable length number>
Calling BN_bn2mpi with NULL as the buffer will return the number of bytes that that buffer will need to be. This is useful to know how many bytes to allocate for the buffer.
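To make the format concrete, here is a rough Python sketch of the MPI encoding described above. This is not OpenSSL's actual code, and it handles non-negative numbers only (the real BN_bn2mpi also encodes negatives via the sign bit):

```python
def bn_to_mpi(n):
    # 4-byte big-endian length, then the big-endian magnitude,
    # with a leading 0x00 byte when the top bit is set (so the
    # number is not read back as negative).
    body = n.to_bytes((n.bit_length() + 7) // 8, "big") if n else b""
    if body and body[0] & 0x80:
        body = b"\x00" + body
    return len(body).to_bytes(4, "big") + body

# The maximum target: 28 magnitude bytes with the top bit set,
# so the encoded body is 29 = 0x1d bytes long.
mpi = bn_to_mpi(0xffff << (8 * 26))
assert mpi[:4] == (29).to_bytes(4, "big")
assert mpi[4:7] == b"\x00\xff\xff"
```

Note that nSize in GetCompact would be 29 = 0x1d here, and vch[4], vch[5], vch[6] are exactly 0x00, 0xff, 0xff, giving the familiar compact value 0x1d00ffff.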
So let’s go back to the GetCompact function. We see BN_bn2mpi(this, NULL);. This means that nSize is now the size that is needed to encode the Bignum. Because this encoding also includes the size of
the number itself, we later see nSize -= 4; which sets nSize to be the size of the actual number itself.
BN_bn2mpi(this, &vch[0]); now encodes the Bignum into vch which was set to the size specified by the first BN_bn2mpi call. It is important to remember that the first 4 bytes are the length of the
number, so the actual number itself begins at index 4 (vch[4]).
Finally, the compact number itself is constructed. nSize << 24 just makes the rightmost byte of nSize the leftmost byte of nCompact. Then the function sets the rest of the compact number: each byte is shifted to its final position in the 4 byte int and OR'd with nCompact. The if statements handle the case where the target is so small that it is encoded in fewer bytes than the compact size itself.
Looking at this function, we learn that the compact encoding is really just one byte indicating the length of the target, and 3 bytes for the 3 most significant bytes in that target. If you looked at
the SetCompact function which takes a compact nBits and converts it into a Bignum, you would see that it is just the inverse of GetCompact, so I won’t explain it.
Now the question is, how do we get to the crazy looking formula? How did we go from this code that strictly is just byte manipulation to a mathematical formula?
In your example, based on the above algorithm, we know that the final number is going to be 0x00ffff0000000000000000000000000000000000000000000000000000. We want the first 3 bytes, which we got from
the nBits, to be by themselves, so let’s divide this number by that:
0x00ffff0000000000000000000000000000000000000000000000000000 / 0x00ffff = 0x010000000000000000000000000000000000000000000000000000
0x010000000000000000000000000000000000000000000000000000 is a 1 followed by 26 bytes of 0, so it’s 256**26 (256 since there are 256 possible values in a byte). Now how do we get this number from the
nBits? We can take the first byte, representing the length of the full thing, and subtract 3 from it.
We can expand this further because 256 = 2**8, and some people like things represented as a power of 2. So this can become 2**(8*(0x1d-3)). Thus
0x010000000000000000000000000000000000000000000000000000 = 2**(8*(0x1d-3))
0x00ffff0000000000000000000000000000000000000000000000000000 = 0x010000000000000000000000000000000000000000000000000000 * 0x00ffff = 2**(8*(0x1d-3)) * 0x00ffff
The final result is 2**(8*(0x1d-3)) * 0x00ffff. And, of course, this generalizes: target = coefficient * 256**(exponent-3).
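As a sanity check, the decoding can be sketched in a few lines of Python (arbitrary-precision integers make this trivial; note the real client also treats bit 0x00800000 of the coefficient as a sign bit, which this sketch ignores):

```python
def nbits_to_target(nbits):
    # Top byte is the length of the target; the low 3 bytes are its
    # most significant bytes (the "coefficient").
    exponent = nbits >> 24
    coefficient = nbits & 0x00ffffff
    return coefficient * 256 ** (exponent - 3)

# The maximum target, nBits = 0x1d00ffff, from the example above:
assert nbits_to_target(0x1d00ffff) == 0xffff * 2 ** (8 * (0x1d - 3))
```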
A semi-related question then is why does the Bignum encoding have a leading 0x00 byte? An encoding that represents this maximum target with a little bit more precision would have been 0x1cffffff so
we could avoid this extraneous 0x00 byte.
That leading 0x00 byte is all because the Bignum is signed and the target is a positive integer. The most significant bit indicates whether it is negative. If the target had been encoded 0xffff....,
then decoding this would mean that the target is negative, which is wrong. So the encoding puts a leading 0x00 so that the number remains positive.
Ergodic Theory
Tentative plan — Autumn 2024
New time for our seminar — Wednesdays at 12:30.
In person talks are planned to be held in MW 154.
(In person) 04/09: Andreas Wieser (UCSD) Abstract
(Special seminar, joint with the Harmonic analysis group) 10/09, 2pm, Math Tower 152: Joseph M. Rosenblatt (UIUC) Abstract
(In person) 11/09: Sovan Mondal (OSU) Slides ; Abstract
(In person) 18/09: James Marshall Reber (OSU) Abstract
(In person) 09/10: Dave Constantine (Wesleyan University) Abstract
(In person) 16/10: Greg Hemenway (OSU) Abstract
(In person) 23/10: Emilio Corso (Penn. State) Abstract
(In person) 30/10: Benjamin Call (UIC) Abstract
(Zoom) 06/11: Sejal Babel (Jagiellonian University, Kraków, Poland) Abstract
(In person) 20/11: Gabriel Paternain (UW)
Spring 2024 – Tentative plan
Our schedule has changed — We meet on Thursdays at 14:30. Our meeting room is Hitchcock 030.
Yuval Yifrach (Technion) – 01/25 – In person
Sovanlal Mondal (OSU) – 02/08 – Slides of Sovan’s talk
Fan Yang (Wake Forest) – 02/22 – In person
Meg Doucette (U. Chicago) – 02/29 – In person
Osama Khalil (UIC) – 03/21 – In person
Richard Brikett (UIC) – 03/28 – In person
Máté Wierdl (U. of Memphis) – 04/11 – Slides of Máté’s talk
Seminar program for Fall 2023
We are pleased to resume our seminar with in person and virtual talks. As usual, in person talks will be held in MW154 on Thursdays at 3.00pm EST unless otherwise noted.
For virtual talks, the Zoom link will be shared at the day of the talk.
The following is our current schedule; more talks might be announced soon.
08/24 – In person – Caleb Dilsavor (OSU)
09/07 – In person – Snir Ben Ovadia (Penn. state)
09/14 – In person – Valerio Assenza (Heidelberg)
09/21 – Online – Sheryasi Datta (Uppsala) – Recording
09/28 – In person – Hao Xing (OSU)
10/19 – In person – Prasuna Bandi (University of Michigan)
02/11 – In person – Lei Yang (IAS)
16/11 – Online – Noy Soffer Aronov (Technion) recording
Seminar program for Fall 2022
Our seminar continues with a mixture of in person and virtual talks. As usual, we meet on (most) Thursdays at 3.00pm EST unless otherwise noted. In person talks will be in MW154.
For virtual talks, the Zoom link can be obtained from the co-organizer responsible for the virtual component of the seminar, Andreas Koutsogiannis.
The following is our current schedule; more talks might be announced soon.
August 25: In person – Dmitri Scheglov
September 1: In person – Michael Bersudsky
September 8: Virtually – Caleb Dilsavor
September 15: In person – Andrey Gogolev
September 22: In person – Tomasz Downarowicz
September 29: No talk
October 6: Virtually – Yunied Puig de Dios
October 13: No seminar, Fall break
October 20: Virtually – Jiajie Zheng (postponed for a future date)
October 27: In person – Andreas Koutsogiannis
November 3: In person – Michał Misiurewicz
November 10: Virtually – Borys Kuca
November 17: No talk
November 24: No seminar, Thanksgiving break
December 1: Virtually – Mariusz Mirek
December 8: In person – Martin Leguil
Seminar program for Spring 2022
Our seminar continues with a mixture of in person and virtual talks. As usual, we meet on (most) Thursdays at 3.00pm EST unless otherwise noted. In person talks will be in MW154.
For virtual talks, the Zoom link can be obtained from the organizers, Andreas Koutsogiannis and Dan Thompson. For most virtual talks, video will be posted afterwards, and will remain viewable on Zoom
for 120 days after the talk.
The following is our current schedule; more talks might be announced soon.
Jan 27: Virtual – Ben Call (OSU)
Feb 10: Virtual – Anthony Quas (University of Victoria, Canada)
Feb 17: Virtual – Richard Sharp (University of Warwick, UK)
Feb 24: Virtual – Ethan Ackelsberg (OSU)
Mar 3: Virtual – John Griesmer (Colorado School of Mines)
Mar 10: Virtual – Anh N. Le (OSU)
Mar 24: In Person – Andreas Koutsogiannis (Aristotle University of Thessaloniki, Greece)
Mar 31: In Person – Dong Chen (PennState)
April 7: Virtual – Konstantinos Tsinas (University of Crete, Greece)
Apr 14: Virtual – Yun Yang (Virginia Tech)
Apr 21: Virtual – Rigo Zelada Cifuentes (University of Maryland)
Seminar program for Fall 2021
This year, our seminar will be a mixture of in person and virtual seminars, with the mix anticipated to trend towards in person later in the year, and virtual early in the year. As usual, we meet on
Thursdays at 3.00pm EST unless otherwise noted. In person talks will be in MW154.
For virtual talks, the Zoom link can be obtained from the organizers, Andreas Koutsogiannis and Dan Thompson. For most virtual talks, video will be posted afterwards, and will remain viewable on Zoom
for 120 days after the talk.
The following is our current schedule, and more talks will be announced soon.
Aug 17th: In person – Federico Rodriguez Hertz (Penn State)
Aug 26: Virtual – Aurelia Dymek (Nicolaus Copernicus University, Torun, Poland)
Sept 9: Virtual – Christian Wolf (City College of New York)
Sept 16: Virtual – Andreu Ferre Moragues (Nicolaus Copernicus University, Torun, Poland)
Sept 30: Virtual – Pablo Shmerkin (UBC, Canada)
Oct 8 (Friday, 12.00pm, note unusual day and time): Virtual – Ryokichi Tanaka (Kyoto University, Japan)
Oct 19 (Tues, note unusual day): In Person – Keith Burns (Northwestern)
Oct 21: Virtual – Alejandro Maass (University of Chile, Chile)
Oct 28: Virtual – Wenbo Sun (Virginia Tech)
Nov 18: In Person – Dick Canary (Michigan)
Dec 9: Virtual – Giulio Tiozzo (University of Toronto, Canada)
Seminar program for Spring 2021
We are pleased to resume our online seminar program. As usual, we meet on Thursdays at 3.00pm EST unless otherwise noted.
Please contact the organizers, Andreas Koutsogiannis and Dan Thompson for a Zoom link.
The following is our current schedule, and more talks will be announced soon.
Feb 4th: No seminar due to the one-day workshop ‘Hyperbolic Day Online‘ organized by Andrey Gogolev (Ohio State) and Rafael Potrie (Universidad de la Republica)
Feb 11th: Sebastian Donoso (University of Chile)
Feb 18th: Daniel Glasscock (UMass Lowell)
Feb 25th: Florian Richter (Northwestern)
Mar 04th: Claire Merriman (OSU)
Mar 11th: Dominik Kwietniak (Jagiellonian University in Krakow)
Mar 18th: Donald Robertson (University of Manchester)
Mar 25th: Mariusz Lemańczyk (Nicolaus Copernicus University)
Apr 1st: Break
April 8th: Jonathan DeWitt (The University of Chicago)
Apr 15th: Joel Moreira (University of Warwick)
Apr 22nd: Steve Cantrell (The University of Chicago)
Apr 29th: Dmitry Kleinbock (Brandeis University)
New Ohio State Online Ergodic Theory Seminar
UPDATE: We will continue our program in Spring 2021. However, we are taking a brief Winter hiatus. We expect to resume in February.
We are pleased to announce that we will be running an online seminar program in Fall 2020. The seminar will take place in our usual time slot unless otherwise noted – Thursdays 3.00pm (EST). Some
seminars are scheduled at an alternate time of Friday 12.40pm (EST).
Please contact the organizers for a Zoom link.
Our current schedule for the semester follows:
Sept 17: Lien-Yung “Nyima” Kao (George Washington University)
Oct 2 (Friday, 1pm EST): Tushar Das (University of Wisconsin)
Oct 9 (Friday, 12.40pm EST): Mark Demers (Fairfield University)
Oct 16 (Friday, 12.40pm EST): Tianyu Wang (Ohio State)
Oct 22: Andrew Best (Ohio State)
Oct 29: Tamara Kucherenko (City College of New York)
Nov 12: Shahriah Mirzadeh (Michigan State)
Nov 19: Yeor Hafuta (Ohio State)
Dec 3: Nikos Frantzikinakis (University of Crete)
Zoom link for today’s seminar
passcode is: 772704
Seminar 12.01.22 Mirek – Virtually
Title: On recent developments in pointwise ergodic theory
Speaker: Mariusz Mirek – Rutgers University
Abstract: This will be a survey talk about recent progress on pointwise convergence problems for multiple ergodic averages along polynomial orbits and their relations with the
Furstenberg-Bergelson-Leibman conjecture.
Zoom link: https://osu.zoom.us/j/91943812487?pwd=K1lhTU02UTdMelBFTzhDdXRNcm80QT09
Meeting ID: 919 4381 2487
Password: Mixing
Link of recorded talk:
Seminar 11.10.22 Kuca – Virtually
Title: Multiple ergodic averages along polynomials and joint ergodicity
Speaker: Borys Kuca – University of Crete
Abstract: Furstenberg’s dynamical proof of the Szemerédi theorem initiated a thorough examination of multiple ergodic averages, laying the grounds for a new subfield within ergodic theory. Special
attention has been paid to averages of commuting transformations with polynomial iterates owing to their central role in Bergelson and Leibman’s proof of the polynomial Szemerédi theorem. Their norm
convergence has been established in a celebrated paper of Walsh, but for a long time, little more has been known due to obstacles encountered by existing methods. Recently, there has been an outburst
of research activity which sheds new light on their limiting behaviour. I will discuss a number of novel results, including new seminorm estimates and limit formulas for these averages. Additionally,
I will talk about new criteria for joint ergodicity of general families of integer sequences whose potential utility reaches far beyond polynomial sequences. The talk will be based on two recent
papers written jointly with Nikos Frantzikinakis.
Zoom link: https://osu.zoom.us/j/91943812487?pwd=K1lhTU02UTdMelBFTzhDdXRNcm80QT09
Meeting ID: 919 4381 2487
Password: Mixing
Link of recorded talk: https://osu.zoom.us/rec/share/JFAWywwe3C0Yz7D6fglSMgDCptAf8El3MMQqeZXxYhtGvlTGioC-Sftq9pMfQg-r.wO3uyuw-CMgIQi-X
Seminar 09.08.22 Dilsavor – Virtually
Title: Statistics of periodic points and a positive proportion Livsic theorem
Speaker: Caleb Dilsavor – Ohio State University
Abstract: The connection between the Ruelle-Perron-Frobenius operator and the statistics of a Hölder observable g with respect to an equilibrium state has a rich history, tracing back to an exercise in Ruelle’s book. A somewhat lesser known, but related, statistical theorem, studied first by Lalley and later by Sharp using the RPF operator, states that the periods of g grow approximately linearly with respect to length, with square-root oscillations chosen according to a normal distribution whose variance is equal to the (dynamical) variance of g. This result is known for aperiodic shifts of finite type, but surprisingly it is still not known in full generality for their Hölder suspensions. I will describe a tentative result that fills in this gap, along with joint work with James Marshall Reber which uses this result to deduce a strengthening of Livsic’s theorem not previously considered: if a positive-upper-density proportion of the periods of g are zero, then g is in fact a coboundary.
Zoom link: https://osu.zoom.us/j/91943812487?pwd=K1lhTU02UTdMelBFTzhDdXRNcm80QT09
Meeting ID: 919 4381 2487
Password: Mixing
Link of recorded talk: https://osu.zoom.us/rec/share/ZiOZu_LJaCIMt0oBPGmFrenNVehsf2ZxaM8Myw1DiBNJ9cyVzrdFZHaqTIOoP3vO.ap18_rehC7ecOOgQ
Seminar 09.01.22 Bersudsky – In person
Title: On the image in the torus of sparse points on expanding analytic curves
Speaker: Michael Bersudsky (OSU)
Abstract: It is known that the projection to the 2-torus of the normalised parameter measure on a circle of radius R in the plane becomes uniformly distributed as R grows to infinity. I will discuss
the following natural discrete analogue for this problem. Starting from an angle and a sequence of radii {Rn} which diverges to infinity, I will consider the projection to the 2-torus of the n’th
roots of unity rotated by this angle and dilated by a factor of Rn. The interesting regime in this problem is when Rn is much larger than n so that the dilated roots of unity appear sparsely on the
dilated circle.
Seminar 08.25.22 Scheglov – In person
Title: Complexity of polygonal billiards
Speaker: Dmitri Scheglov (Federal University of Minas Gerais, Brazil)
Abstract: We review the results on complexity of billiards in polygons, including recent progress for right triangles.
Seminar 04.21.22 Zelada Cifuentes
Title: Polynomial Ergodic Theorems for Strongly Mixing Commuting Transformations
Speaker: Rigo Zelada Cifuentes – University of Maryland
Abstract: We present new polynomial ergodic theorems dealing with probability measure preserving $\mathbb Z^L$-actions having at least one strongly mixing element. We prove that, under different conditions, the set of $n\in\mathbb Z$ for which the multi-correlation expressions $$\mu(A_0\cap T_{\vec v_1(n)}A_1\cap \cdots\cap T_{\vec v_L(n)}A_L)$$ are $\epsilon$-independent must be $\Sigma_m^*$. Here $\vec v_1,\ldots,\vec v_L$ are $\mathbb Z^L$-valued polynomials in one variable and $\Sigma_m^*$, $m\in\mathbb N$, is one of a family of notions of largeness intrinsically connected with strong mixing. We will also present two examples showing the limitations of our results. The existence of these examples suggests further questions dealing with the weakly, mildly, and strongly mixing properties of a multi-correlation sequence along a polynomial path. This talk is based on joint work with Vitaly Bergelson.
Zoom link: https://osu.zoom.us/j/93885989739?pwd=bUNWdjgzMS93NHRUcmVZRkljTDBHZz09
Meeting ID: 938 8598 9739
Password: Mixing
Recorded Talk:
Seminar 04.14.22 Yang
Title: Entropy rigidity for 3D Anosov flows
Speaker: Yun Yang – Virginia Tech
Abstract: Anosov systems are among the most well-understood dynamical systems. Special among them are the algebraic systems. In the diffeomorphism case, these are automorphisms of tori and
nilmanifolds. In the flow case, the algebraic models are suspensions of such diffeomorphisms and geodesic flows on negatively curved rank one symmetric spaces. In this talk, we will show that given
an integer k ≥ 5, and a C^k Anosov flow Φ on some compact connected 3-manifold preserving a smooth volume, the measure of maximal entropy is the volume measure if and only if Φ is C^{k−ε}-conjugate
to an algebraic flow, for ε > 0 arbitrarily small. This is a joint work with Jacopo De Simoi, Martin Leguil and Kurt Vinhage.
Zoom link: https://osu.zoom.us/j/93885989739?pwd=bUNWdjgzMS93NHRUcmVZRkljTDBHZz09
Meeting ID: 938 8598 9739
Password: Mixing
Recorded Talk:
Seminar 04.07.22 Tsinas
Title: Multiple ergodic theorems for sequences of polynomial growth
Speaker: Konstantinos Tsinas – University of Crete (Greece)
Abstract: Following the classical results of Host-Kra and Leibman on the polynomial ergodic theorem, it is natural to ask whether we can establish mean convergence of multiple ergodic averages along
several other sequences, which arise from functions that have polynomial growth. In 1994, Boshernitzan proved that for a function f, which belongs to a large class of smooth functions (called a Hardy
field) and which has polynomial growth, its “distance” from rational polynomials is crucial in determining whether or not the sequence of the fractional parts of f(n) is equidistributed on [0,1].
This, also, implies a corresponding mean convergence theorem in the case of single ergodic averages along the sequence ⌊f(n)⌋ of integer parts. In the case of multiple averages, it was conjectured by
Frantzikinakis that a similar condition on the linear combinations of the involved functions should imply mean convergence. We verify this conjecture and show that in all ergodic systems we have
convergence to the “expected limit”, namely, the product of the integrals. We rely mainly on the recent joint ergodicity results of Frantzikinakis, as well as some seminorm estimates for functions
belonging to a Hardy field. We will also briefly discuss the “non-independent” case, where the L^2-limit of the averages exists but is not equal to the product of the integrals.
Zoom link: https://osu.zoom.us/j/93885989739?pwd=bUNWdjgzMS93NHRUcmVZRkljTDBHZz09
Meeting ID: 938 8598 9739
Password: Mixing
Recorded Talk: https://osu.zoom.us/rec/share/Gf98gFbI9Itd1STAukYTGjTHeePNXMHIsdoCITVDNs0cCpKQbNDEjUaYfEEVHbms.BBHTyrGjdrrvmvPr
Seminar 03.31.22 Chen – In person
Title: Marked boundary rigidity and Anosov extension
Speaker: Dong Chen – Penn State University
Abstract: In this talk we will show how a sufficiently small geodesic ball in any Riemannian manifold can be embedded into an Anosov manifold with the same dimension. Furthermore, such embedding
exists for a larger family of domains even with hyperbolic trapped sets. We will also present some applications to boundary rigidity and related open questions. This is a joint work with Alena
Erchenko and Andrey Gogolev.
Seminar 03.24.22 Koutsogiannis – In person
Title: Convergence of polynomial multiple ergodic averages for totally ergodic systems
Speaker: Andreas Koutsogiannis – Aristotle University of Thessaloniki (Greece)
Abstract: A collection of integer sequences is jointly ergodic if for every ergodic measure preserving system the multiple ergodic averages, with iterates given by this collection of sequences,
converge to “the expected limit” in the mean, i.e., the product of the integrals. Exploiting a recent approach of Frantzikinakis, which allows one to avoid deep tools from ergodic theory that were
previously used to establish similar results, we study joint ergodicity in totally ergodic systems for integer parts of real polynomial iterates. More specifically, our main results in this direction
are a sufficient condition for k terms, and a characterization in the k=2 case. Joint work with Wenbo Sun.
Seminar 03.10.22 Le
Title: Interpolation sets for nilsequences
Speaker: Anh N. Le – Ohio State University
Abstract: Interpolation sets are classical objects in harmonic analysis which have a natural generalization to ergodic theory regarding nilsequences. A set $E$ of natural numbers is an interpolation set for nilsequences if every bounded function on $E$ can be extended to a nilsequence on $\mathbb{N}$. By a result of Strzelecki, lacunary sets are interpolation sets for nilsequences. In this talk, I show that no sub-lacunary sets are interpolation sets for nilsequences and that the class of interpolation sets for nilsequences is closed under union with finite sets.
Zoom link: https://osu.zoom.us/j/93885989739?pwd=bUNWdjgzMS93NHRUcmVZRkljTDBHZz09
Meeting ID: 938 8598 9739
Password: Mixing
Recorded Talk: https://osu.zoom.us/rec/share/nIo2Tnfv7PMRIP3U_EG7FWw7N1YhFRL4BeJa_gqE0voCXN3enu_jnHuH-tW1H5q2.84ac0THimUVpKQfW
Seminar 03.03.22 Griesmer
Title: Rigidity sequences for measure preserving transformations
Speaker: John Griesmer – Colorado School of Mines
Abstract: Let $(X,\mu,T)$ be a probability measure preserving system. An increasing sequence $(n_k)$ of natural numbers is a rigidity sequence for $(X,\mu,T)$ if $\lim_{k\to\infty} \mu(A\triangle T^
{-n_k}A)=0$ for every measurable $A\subset X$. A classical result says that a generic measure preserving transformation is weak mixing and has a rigidity sequence, and it is natural to wonder which
sequences are rigidity sequences for some weak mixing system. Bergelson, del Junco, Lemańczyk, and Rosenblatt (2012) popularized many problems inspired by this question, and interesting
constructions have since been provided by T. Adams; Fayad and Thouvenot; Fayad and Kanigowski; Griesmer; Badea, Grivaux, and Matheron; and Ackelsberg, among others. This talk will summarize the
relevant foundations and survey some recent results. We also consider two variations: union rigidity, where $\lim_{K\to\infty} \mu\Bigl(A\triangle \bigcup_{k>K}T^{-n_k}A\Bigr)=0$ for some $A$ with
$0<\mu(A)<1$, and summable rigidity, where $\sum_{k=1}^\infty \mu(A\triangle T^{-n_k}A)$ converges for some $A$ with $0<\mu(A)<1$.
Zoom link: https://osu.zoom.us/j/93885989739?pwd=bUNWdjgzMS93NHRUcmVZRkljTDBHZz09
Meeting ID: 938 8598 9739
Password: Mixing
Recorded Talk: https://osu.zoom.us/rec/share/LhoRfB_gvaAVAFyou-BQhRojLm0dQ0sk4uFbeQuWVXu1g5ytspNTkgGS25Li1a8Z.anEF2zLVmvlunkCo
Seminar 02.24.22 Ackelsberg
Title: Large intersections for multiple recurrence in abelian groups
Speaker: Ethan Ackelsberg – Ohio State University
Abstract: With the goal of a common extension of Khintchine’s recurrence theorem and Furstenberg’s multiple recurrence theorem in mind, Bergelson, Host, and Kra showed that, for any ergodic measure-preserving system (X, ℬ, μ, T), any measurable set A ∈ ℬ, and any ε > 0, there exist (syndetically many) n ∈ ℕ such that μ(A ∩ T^n A ∩ … ∩ T^(kn) A) > μ(A)^(k+1) – ε if k ≤ 3, while the result fails for k ≥ 4. The phenomenon of large intersections for multiple recurrence was later extended to the context of ⊕𝔽_p-actions by Bergelson, Tao, and Ziegler. In this talk, we will address and give a partial answer to the following question about large intersections for multiple recurrence in general abelian groups: given a countable abelian group G, what are necessary and sufficient conditions for a family of homomorphisms φ_1, …, φ_k : G → G so that for any ergodic measure-preserving G-system (X, ℬ, μ, (T_g)_{g∈G}), any A ∈ ℬ, and any ε > 0, there is a syndetic set of g ∈ G such that μ(A ∩ T_{φ_1(g)}A ∩ … ∩ T_{φ_k(g)}A) > μ(A)^(k+1) – ε? We will also discuss combinatorial applications in ℤ^d and (ℕ, ·). (Based on joint work with Vitaly Bergelson and Andrew Best and with Vitaly Bergelson and Or Shalom.)
Zoom link: https://osu.zoom.us/j/94136097274
Meeting ID: 941 3609 7274
Password: Mixing
Recorded Talk: https://osu.zoom.us/rec/share/TY64JIVXsqzNP_i1eNUIiwC0LriToGI6PVmOqPdJGnNuvNFRKkSLVvXiRP27RPU-.lyS_YtUQpBEuOhpC
Seminar 02.17.22 Sharp
Title: Helicity and linking for 3-dimensional Anosov flows
Speaker: Richard Sharp – University of Warwick, UK
Abstract: Given a volume-preserving flow on a closed 3-manifold, one can, under certain conditions, define an invariant called the helicity. This was introduced as a topological invariant in fluid
dynamics by Moffatt and measures the total amount of linking of orbits. When the manifold is a real homology 3-sphere, Arnold and Vogel identified this with the so-called asymptotic Hopf invariant,
obtained by taking the limit of the normalised linking number of two typical long orbits. We obtain a similar result for null-homologous volume preserving Anosov flows, in terms of weighted averages
of periodic orbits. (This is joint work with Solly Coles.)
Zoom link: https://osu.zoom.us/j/94136097274
Meeting ID: 941 3609 7274
Password: Mixing
Recorded Talk: https://osu.zoom.us/rec/share/cYO8hmX37fCGqR5DfrnHAnCNK04udNHsLvehiztiGOKAOEiByu-F2FpNPl7GDCGZ.WyVU6UdNkxaxw0fQ
Seminar 02.10.22 Quas
Title: Lyapunov Exponents for Transfer Operators
Speaker: Anthony Quas – University of Victoria, Canada
Abstract: Transfer operators are used, amongst other ways, to study rates of decay of correlation in dynamical systems. Keller and Liverani established a remarkable result, giving conditions in which
the (non-essential) part of the spectrum of a transfer operator changes continuously under small perturbations to the operator. This talk is about an ongoing project with Cecilia Gonzalez-Tokman in
which we aim to develop non-autonomous versions of this theory.
Zoom link: https://osu.zoom.us/j/94136097274
Meeting ID: 941 3609 7274
Password: Mixing
Recorded Talk: https://osu.zoom.us/rec/share/gPPVM_NKnpK4WSPjcFje6GgZEKpqDKVYr_m6Fim7oC03n9uwA4ktmAd7yuO6BjsI.308Dh7WhXQPgI7jm
How to Calculate Leverage, Margin, and Pip Values in Forex
Although most trading platforms calculate profits and losses, used margin and useable margin, and account totals, it helps to understand how these things are calculated so that you can plan
transactions and can determine what your potential profit or loss could be.
Leverage and Margin
Most forex brokers allow a very high leverage ratio or, to put it differently, have very low margin requirements. This is why profits and losses can be so great in forex trading even though the actual prices of the currencies themselves do not change all that much—certainly not like stocks. Stocks can double or triple in price, or fall to zero; currency never does. Because currency prices do not vary substantially, much lower margin requirements are less risky than they would be for stocks.
Most brokers allow 100:1 leverage, or 1% margin. This means that you can buy or sell $100,000 worth of currency while maintaining $1,000 in your account. Mini-accounts can have leverage ratios as high as 200:1.
The margin in a forex account is a performance bond, the amount of equity needed to ensure that you can cover your losses. Thus, you do not buy currency with borrowed money, and no interest is
charged on the 99% of the currency’s value that is not covered by margin. So if you buy $100,000 worth of currency, you are not depositing $1,000 and borrowing $99,000 for the purchase. The $1,000 is
to cover your losses. Thus, buying or selling short currency is like buying or selling short futures rather than stocks.
The margin requirement can be met not only with money, but also with profitable open positions. The equity in your account is the total amount of cash and the amount of unrealized profits in your
open positions minus the losses in your open positions.
Total Equity = Cash + Open Position Profits - Open Position Losses
Your total equity determines how much margin you have left, and if you have open positions, total equity will vary continuously as market prices change. Thus, it is never wise to use 100% of your margin for trades—otherwise, you may be subject to a margin call. In most cases, however, the broker will simply close out your largest money-losing positions until the required margin has been restored.
Most brokers advertise leverage ratios, which are usually 100:1 for regular accounts and can go as high as 200:1 for some mini-accounts. The amount of leverage that the broker allows determines the
amount of margin that you must maintain. Leverage is inversely proportional to margin, which can be summarized by the following 2 formulas:
Margin Percentage = 100/Leverage Ratio
Example: A 100:1 leverage ratio yields a margin percentage of 100/100 = 1%. A 200:1 ratio yields 100/200 = 0.5%.
Leverage Ratio = 1/Margin = 100/Margin Percentage
To calculate the amount of margin used, multiply the size of the trade by the margin percentage. Subtracting the margin used for all trades from your account equity yields the amount of margin that you have left.
To calculate the margin for a given trade:
Margin Requirement = Current Price x Units Traded x Margin
Example—Calculating Margin Requirements for a Trade
You want to buy 100,000 Euros with a current price of 1.35 USD, and your broker requires a 1% margin.
Required Margin = 100,000 x 1.35 x 0.01 = $1,350.00 USD.
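The two formulas can be combined into a short Python sketch. The function names are just for illustration; margin is expressed as a fraction rather than a percentage, and the numbers are the example above:

```python
def margin_fraction(leverage):
    # Margin Percentage = 100 / Leverage Ratio; returned as a fraction
    return 1.0 / leverage

def required_margin(price, units, leverage):
    # Margin Requirement = Current Price x Units Traded x Margin
    return price * units * margin_fraction(leverage)

# 100,000 EUR at 1.35 USD with 100:1 leverage (1% margin):
print(round(required_margin(1.35, 100_000, 100), 2))  # -> 1350.0
```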
Pip Values
In most cases, a pip is equal to .01% of the quote currency, thus, 10,000 pips = 1 unit of currency. In USD, 100 pips = 1 penny, and 10,000 pips = $1. A well known exception is for the Japanese yen
(JPY) in which a pip is worth 1% of the yen, because the yen has little value compared to other currencies. Since there are about 120 yen to 1 USD, a pip in USD is close in value to a pip in JPY.
(See Currency Quotes; Pips; Bid/Ask Quotes; Cross Currency Quotes for an introduction.)
Because the quote currency of a currency pair is the quoted price (hence, the name), the value of the pip is in the quote currency. So, for instance, for EUR/USD, the pip is equal to 0.0001 USD, but
for USD/EUR, the pip is equal to 0.0001 Euro. If the conversion rate for Euros to dollars is 1.35, then a Euro pip = 0.000135 dollars.
Converting Profits and Losses in Pips to USD
To calculate your profits and losses in pips to your native currency, you must convert the pip value to your native currency. The following calculations will be shown using USD as an example.
When you close a trade, the profit or loss is initially expressed in the pip value of the quoted currency. To determine the total profit or loss, you must multiply the pip difference between the open
price and closing price by the number of units of currency traded. This yields the total pip difference between the opening and closing transaction.
If the pip value is USD, then the profit or loss is expressed in USD, but if USD is the base currency, then the pip value must be converted to USD, which can be found by dividing the total pip profit
or loss by the conversion rate.
Example—Converting Pip Values to USD.
You buy 100,000 Canadian dollars with USD, with conversion rate USD/CAD = 1.1000. Subsequently, you sell your Canadian dollars for 1.1200, yielding a profit of 200 pips in Canadian dollars. Because
USD is the base currency, you can get the value in USD by dividing the Canadian value by the exit price of 1.12.
100,000 CAD x 200 pips = 20,000,000 pips total. Since 20,000,000 pips = 2,000 Canadian dollars, your profit in USD is 2,000/1.12 = 1,785.71 USD.
For a cross pair not involving USD, the pip value must be converted by the rate that was applicable at the time of the closing transaction. To find that rate, you would look at the quote for the USD/
pip currency pair, then multiply the pip value by this rate, or if you only have the quote for the pip currency/USD, then you divide by the rate.
Example—Calculating Profits for a Cross Currency Pair
You buy 100,000 units of EUR/JPY = 164.09 and sell when EUR/JPY = 164.10, and USD/JPY = 121.35.
Profit in JPY pips = 164.10 – 164.09 = .01 yen = 1 pip (Remember the yen exception: 1 JPY pip = .01 yen.)
Total Profit in JPY pips = 1 x 100,000 = 100,000 pips.
Total Profit in Yen = 100,000 pips/100 = 1,000 Yen
Because you only have the quote for USD/JPY = 121.35, to get profit in USD, you divide by the quote currency’s conversion rate:
Total Profit in USD = 1,000/121.35 = 8.24 USD.
If you only have this quote, JPY/USD = 0.00824, which is equivalent to the above value, you use the following formula to convert pips in yen to domestic currency:
Total Profit in USD = 1,000 x 0.00824 = 8.24 USD.
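Both conversions above (dividing by the exit rate when USD is the base currency, and dividing by the USD/JPY quote for the yen cross) can be sketched as follows. The helper name is illustrative, not from any trading API:

```python
def pip_profit_quote_ccy(units: float, open_px: float, close_px: float,
                         pip_size: float = 0.0001) -> float:
    """Profit in the quote currency: pip difference x units x pip size."""
    pips = (close_px - open_px) / pip_size
    return pips * pip_size * units  # same as units * (close_px - open_px)

# Example 1: buy 100,000 USD/CAD at 1.1000, sell at 1.1200 -> ~2,000 CAD.
cad_profit = pip_profit_quote_ccy(100_000, 1.1000, 1.1200)
usd_profit = cad_profit / 1.12          # USD is the base: divide by exit rate
print(round(usd_profit, 2))             # 1785.71

# Example 2: buy 100,000 EUR/JPY at 164.09, sell at 164.10 (yen pip = 0.01).
jpy_profit = pip_profit_quote_ccy(100_000, 164.09, 164.10, pip_size=0.01)
print(round(jpy_profit / 121.35, 2))    # divide by USD/JPY quote -> 8.24
```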
What is the LCM of 24 and 36 by prime factorization method?
LCM of 24 and 36 by Prime Factorization
LCM of 24 and 36 can be obtained by multiplying prime factors raised to their respective highest power, i.e. 2^3 × 3^2 = 72. Hence, the LCM of 24 and 36 by prime factorization is 72.
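The prime-factorization rule above (multiply each prime raised to its highest power) can be sketched in a few lines:

```python
from collections import Counter

def prime_factors(n: int) -> Counter:
    """Return the prime factorization of n as {prime: exponent}."""
    factors, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            factors[p] += 1
            n //= p
        p += 1
    if n > 1:
        factors[n] += 1
    return factors

def lcm(a: int, b: int) -> int:
    """LCM = product of every prime raised to its highest power."""
    fa, fb = prime_factors(a), prime_factors(b)
    result = 1
    for prime in set(fa) | set(fb):
        result *= prime ** max(fa[prime], fb[prime])
    return result

print(prime_factors(24))  # factorization of 24: 2^3 * 3
print(prime_factors(36))  # factorization of 36: 2^2 * 3^2
print(lcm(24, 36))        # 72
```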
Similarly, What is the LCM of prime numbers?
The LCM of two or more prime numbers is equal to their product.
Additionally, how do you find the HCF and LCM of two numbers by using prime factorization? Start by writing 24 and 180 as the products of their prime factors. To find the HCF, find any prime factors that are common to both products. Each product contains two 2s and one 3, so use these for the HCF. To find the LCM, multiply the HCF by all the numbers in the products that have not yet been used.
What are the GCF and LCM of 24 and 36 respectively?
The GCF is the greatest factor common to both numbers, and the LCM is the smallest number that both numbers divide into. Therefore, the GCF and LCM of 24 and 36 are 12 and 72, respectively.
What is the LCD of 24 and 36?
Explanation: The LCM of 24 and 36 is the smallest positive integer that is divisible by both 24 and 36 without a remainder. If you just want to know the least common multiple of 24 and 36, it is 72.
What is the LCM of two primes?
The least common multiple or LCM of two prime numbers is the product of the two numbers.
What is the LCM of two prime numbers called?
Step-by-step explanation:
The LCM of two prime numbers is their product, and a product of two primes is a composite number.
What is the LCM of prime number A and B?
If a and b are two prime numbers then find LCM (a,b). Solution: LCM of two prime numbers is always the product of these two numbers as their common factor is only 1.
How do you find the HCF using prime factorization?
HCF by Prime Factorization
1. Find the prime factors of each of the given number.
2. Next, we identify the common prime factors of the given numbers.
3. We then multiply the common factors. The product of these common factors is the HCF of the given numbers.
How do you find the LCM and HCF of two numbers?
The formula that shows the relationship between their LCM and HCF is: LCM (a,b) × HCF (a,b) = a × b. For example, let us take two numbers 12 and 8. Let us use the formula: LCM (12,8) × HCF (12,8)
= 12 × 8. The LCM of 12 and 8 is 24; and the HCF of 12 and 8 is 4.
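The identity LCM(a,b) × HCF(a,b) = a × b is easy to check with the standard library:

```python
import math

a, b = 12, 8
hcf = math.gcd(a, b)          # 4
lcm = a * b // hcf            # 24 -- rearranged from LCM x HCF = a x b
assert lcm * hcf == a * b

# The worked pair from the article:
print(math.gcd(24, 36), (24 * 36) // math.gcd(24, 36))  # 12 72
```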
What is the LCM of 36?
The LCM of 36 and 45 is the smallest number among all common multiples of 36 and 45. The first few multiples of 36 and 45 are (36, 72, 108, 144, 180, 216, . . . ) and (45, 90, 135, 180, 225, 270, 315, . . . ).
What is the GCF of 36?
The factors of 36 are 1, 2, 3, 4, 6, 9, 12, 18, and 36; the factors of 54 are 1, 2, 3, 6, 9, 18, 27, and 54. Although several of these are common factors of both 36 and 54, 18 is the greatest common factor. The second method to find the greatest common factor is to list the prime factors, then multiply the common prime factors.
What is the LCD of 36?
Answer: Two whole numbers whose least common denominator is 36 could be many different pairs: 1 and 36, 2 and 36, 3 and 36, 4 and 36, 4 and 18, 4 and 9, 6 and 36, 9 and 36, 9 and 12, 12 and 36, 12
and 18, 18 and 36, and lastly 36 and 36.
What is a common factor of 36 and 24?
There are 6 common factors of 24 and 36, that are 1, 2, 3, 4, 6, and 12. Therefore, the greatest common factor of 24 and 36 is 12.
How do you find the LCM of 24?
Steps to find LCM
1. Find the prime factorization of 24. 24 = 2 × 2 × 2 × 3.
2. Find the prime factorization of 36. 36 = 2 × 2 × 3 × 3.
3. LCM = 2 × 2 × 2 × 3 × 3.
4. LCM = 72.
Why LCM of two prime numbers is their product?
A common multiple of two numbers is a composite number that both numbers can divide evenly. That means that if those numbers have no prime factors in common, then indeed, their product is the
smallest common multiple of the two.
What is the HCF of 2 prime numbers?
Answer: HCF of 2 prime numbers is always 1.
What is the HCF and LCM of two prime number?
Prime numbers are numbers that are divisible only by 1 and themselves, and prime factoring is the process of writing a number as the product of its prime factors. For example, for the primes 3 and 5, the LCM is their product, 15. The only factor common to two distinct primes is 1, so 1 is the HCF of the two numbers.
What is HCF of two prime numbers?
Answer: HCF of 2 prime numbers is always 1.
Prime Numbers do not have any common factor apart from the universal factor this is 1. Numbers that have only 1 as their common factor are also known as co-prime numbers.
Is product of two prime numbers is always equal to their LCM?
The product of any two prime numbers is equal to the product of their HCF and LCM.
What is the product of the HCF and LCM of two prime numbers A and B?
Therefore ab is the correct answer.
What is the HCF of a B where A and B are the prime numbers?
It means any two prime numbers will have only one common factor and that would be ‘1‘, as per the definitions of prime number and highest common factor. Hence, any two different prime numbers will
have the highest common factor as ‘1’. It means the H.C.F. of given two prime numbers a and b is 1.
Sorry to disturb again, but the topic still bugs me somehow...
I'll try to rephrase the question:
- What's the influence of the type of N-array representation with respect to TENSOR-calculus?
- Are multiple representations possible?
- I assume that the order of the dimensions plays a major role in, for example, the TENSOR product.
Is this assumption correct?
As I said before, my math skills are lacking in this area...
I hope you consider this a valid question.
kind regards,
SCIP_Cutpool Struct Reference
Detailed Description
storage for pooled cuts
Definition at line 58 of file struct_cutpool.h.
#include <struct_cutpool.h>
Field Documentation
◆ ncalls
◆ nrootcalls
◆ ncutsfound
◆ ncutsadded
◆ poolclock
◆ hashtable
◆ cuts
◆ processedlp
◆ processedlpsol
◆ processedlpefficacy
◆ processedlpsolefficacy
SCIP_Real SCIP_Cutpool::processedlpsolefficacy
◆ cutssize
int SCIP_Cutpool::cutssize
◆ ncuts
◆ nremovablecuts
int SCIP_Cutpool::nremovablecuts
◆ agelimit
int SCIP_Cutpool::agelimit
maximum age a cut can reach before it is deleted from the pool
Definition at line 74 of file struct_cutpool.h.
◆ firstunprocessed
int SCIP_Cutpool::firstunprocessed
◆ firstunprocessedsol
int SCIP_Cutpool::firstunprocessedsol
first cut that has not been processed in the last LP when separating other solutions
Definition at line 76 of file struct_cutpool.h.
Referenced by cutpoolDelCut(), and SCIPcutpoolSeparate().
◆ maxncuts
◆ globalcutpool
Numerical Simulation of Tidal Current and Sediment Movement in the Sea Area near Weifang Port
College of Harbour and Coastal Engineering, Jimei University, Xiamen 361021, China
State Key Laboratory of Hydraulic Engineering Simulation and Safety, Tianjin University, Tianjin 300072, China
Author to whom correspondence should be addressed.
Submission received: 11 May 2023 / Revised: 26 June 2023 / Accepted: 6 July 2023 / Published: 9 July 2023
This paper uses the finite-volume community ocean model (FVCOM) coupled with the simulating waves nearshore (SWAN) in a wave–current–sediment model to simulate the tidal current field, wave field,
and suspended sediment concentration (SSC) field in the sea area near Weifang Port, China. The three-dimensional water-and-sediment model was modified by introducing a sediment-settling-velocity
formula that considers the effect of gradation. Next, the SSCs calculated by the original and modified models were compared with the measured data. The SSCs calculated by the modified model were
closer to the measured data, as evidenced by the smaller mean relative error and root-mean-square error. The results show that the modified coupled wave–current–sediment model can reasonably describe
the hydrodynamic characteristics and sediment movement in the sea area near Weifang Port, and the nearshore SSCs calculated by the modified model were higher than those calculated by the original
1. Introduction
China’s Weifang Port is located on a typical silty coast. In response to the actions of wind and waves, the sediment content in the waters near a silty coast increases greatly, and a high proportion
of this increase is suspended load, so channel siltation is prone to occur. Therefore, the accurate calculation of the hydrodynamic and suspended sediment conditions in response to the actions of
wind and waves is necessary to obtain correct channel erosion and deposition results.
Numerical simulations of sediment movement have been developed and improved over the years, and many estuarine and coastal sediment-calculation models have been designed. An estuarine and coastal
sediment model is a system integrating a hydrodynamic model, a wave model, a sediment model, and a terrain-evolution model. Specifically, the hydrodynamic model and the wave model are used to
simulate the current movement and wave evolution in large-scale sea areas, and the sediment model and terrain-evolution model are used to simulate the transfer patterns and the erosion and deposition
characteristics of suspended loads and bed loads in sea areas with hydrodynamic environments. The most widely used hydrodynamic-sediment models include ROMS [ ], Delft3D [ ], ECMSED [ ], TELEMAC [ ], SCHISM [ ], and FVCOM [ ], and the most popular wave models include SWAN [ ] and WAVEWATCH [ ]. The early models were mostly two-dimensional planar models; with improvements in computer technology and performance, three-dimensional models have matured. Since waves and currents coexist in actual estuarine and coastal environments, coupled wave–current–sediment models have been developed and widely applied in engineering practice. Wang [ ] established a three-dimensional, unstructured, fully coupled wave–current numerical model. Yang [ ] established a dynamically coupled wave–hydrodynamic model, the finite-volume community ocean model (FVCOM) coupled with the simulating waves nearshore (SWAN) model through a model-coupling toolkit (MCT) coupler. Dietrich et al. [ ] constructed a coupled SWAN-ADCIRC model by integrating the unstructured-mesh SWAN spectral-wave model and the ADCIRC shallow-water model. Warner et al. [ ] established a coupled ROMS-SWAN model and used it to simulate sediment movement in Massachusetts Bay during storm surges. Luo [ ] numerically simulated water and sediment transport and long-term topographic evolution in Liverpool Bay, UK, by recombining the tidal-current module of TELEMAC, the wave module of TOMAWAC, and the sediment module SISYPHE.
As the basic problem in sediment dynamics, the sediment-settling velocity cannot be ignored in sediment-calculation models. In actual estuarine and coastal environments, natural sediments exist in
the form of mixed sediments with nonuniform particle sizes, but most of the settling-velocity formulas are designed for uniform sediments. Therefore, the median particle size of the sediment is
usually substituted into the sediment-calculation formula when numerically simulating sediment erosion, deposition, and transport [ ]. The formulas proposed by Stokes [ ], Oseen [ ], and Krone [ ] for calculating the settling velocities of spherical sediments in still water are commonly used. When considering the influence of the irregular shapes, surface roughness, and physical composition of natural sediments, some scholars prefer the formulas proposed by van Rijn [ ], Soulsby et al. [ ], and Cheng [ ] to calculate the settling velocities of natural sediments. Regarding the restricting effect of the sediment concentration on the settling velocity, some scholars favor the constrained settling-velocity formulas proposed by Richardson and Zaki [ ], Camenen [ ], and Slaa et al. [ ]. Fang et al. [ ] adopted the summation $\sum_{k=1}^{n} P_{ok}\,\omega_k$, where $P_{ok}$ is the percentage of sediment with particle size $d_k$ and $\omega_k$ denotes the settling velocity of the $k$th sediment component in still water, to calculate the mean settling velocity of nonuniform sediment. They obtained a transport-capacity equation different from that of uniform sediment, which indicated that the transport process of nonuniform sediment must be calculated differently from that of uniform sediment. To account for the effect of nonuniform sediment, Molinas et al. [ ] and Wu et al. [ ] proposed a variable representative particle size for nonuniform sediment-transport calculations. Smart and Jaeggi [ ] proposed a nonuniformity factor expressed by $d_{90}/d_{30}$ to explain the effect of the particle-size distribution. Shen and Rao [ ] adopted $G = 0.5\left(D_{84}/D_{50} + D_{50}/D_{16}\right)$ as a size-gradation factor. Sun et al. [ ] used the functional relationship between the settling velocity of a single particle and the relative diameter and geometric standard deviation of nonuniform sediment when calculating the SSCs in a vertical profile. A reasonable description of the settlement of nonuniform sediment is also important for the study of sediment-transport capacity and of current patterns.
Few mathematical models consider the characteristics of sediment movement in waters near silty coasts in response to the combined actions of waves and currents, and the influence of sediment
gradation is not considered in single-component models. Based on the FVCOM-SWAN coupled wave–current–sediment model, this paper reports the simulation of a hydrodynamic environment and
suspended-sediment movement in the sea area near China's Weifang Port in response to the combined actions of waves and currents. According to previous studies [ ], the calculation method for the settling velocity of nonuniform sand differs from that used for uniform sand, which affects the simulated sediment transport. However, silty-coast
sediments often have a strong sorting ability, and the effect of the gradation on the average sediment-settling velocity cannot be ignored. The formula for sediment-settling velocity commonly used in
mathematical models cannot fully reflect the gradation characteristics of mixed sediments, and the method in which the mean sediment-settling velocity is calculated by substituting the median
particle size of the sediment may be overly simplified. Therefore, the model in this paper describes the settlement process of nonuniform sediment by introducing a sediment-settling-velocity formula
with a coefficient that considers the effect of gradation. Furthermore, this paper explores the difference between simulated sediment-content results before and after considering gradation.
2. Numerical Models
2.1. Hydrodynamic Model
In this paper, FVCOM is used to simulate the hydrodynamic field. FVCOM [ ] is a finite-volume coastal ocean numerical model jointly developed by the School for Marine Science and Technology of the University of Massachusetts Dartmouth and the Woods Hole Oceanographic Institution, under the leadership of Dr. Changsheng Chen. The model effectively combines the advantages of the finite difference method and the finite element method. The unstructured mesh is used in the
Society, under the leadership of Dr. Changsheng Chen. The model effectively combines the advantages of the finite difference method and the finite element method. The unstructured mesh is used in the
horizontal direction so that part of the terrain can be refined freely as needed. The generalized terrain-following coordinate is used in the vertical direction to better characterize complex
irregular coastlines and topographies in estuaries and shelf areas. The model uses the finite volume method to numerically discretize the equation and adopts the mode-splitting algorithm to calculate
the mean water level and vertical current velocity with the outer mode and to calculate physical quantities, such as temperature and salinity, with the inner mode, thus improving its calculation
efficiency. The original governing equations of FVCOM mainly include the momentum equation, mass-continuity equation, and temperature, salinity, and density equations.
2.2. Wave Model
In this paper, SWAN is used to calculate the wave field. SWAN [ ] is a third-generation shallow-sea wave numerical model. The model adopts the spectral balance equation based on the Euler approximation and the linear stochastic surface gravity wave theory. It can
simulate wave refraction, reflection, and wave shoaling caused by water-depth changes during wave generation and wave propagation and can describe the evolution of waves in nearshore areas.
2.3. Suspended-Sediment-Transport Model
In real estuaries and coasts, sediment movement is simulated by the bottom reference concentration, diffusion coefficient, and sediment-settling velocity in the vertical distribution model of
suspended silty sediment content in response to the combined actions of waves and currents. The governing equation is a convection–diffusion equation, including a source and sink term, and the
calculation method refers to the simulation process used by Ji [ ]:

$$\frac{\partial (c_i D)}{\partial t} + \frac{\partial (u c_i D)}{\partial x} + \frac{\partial (v c_i D)}{\partial y} + \frac{\partial \left[(w - w_{s,i})\, c_i\right]}{\partial \varsigma} = D\frac{\partial}{\partial x}\left(A_h \frac{\partial c_i}{\partial x}\right) + D\frac{\partial}{\partial y}\left(A_h \frac{\partial c_i}{\partial y}\right) + \frac{1}{D}\frac{\partial}{\partial \varsigma}\left(K_{h,s}\frac{\partial c_i}{\partial \varsigma}\right) + D S_i \quad (1)$$

where $i$ denotes the $i$th sediment component (since a single-component model is used in this paper, $i = 1$), $c_i$ denotes the concentration of the $i$th sediment component, $K_{h,s}$ denotes the vertical diffusion coefficient, $A_h$ denotes the horizontal diffusion coefficient, $w_{s,i}$ denotes the settling velocity of the $i$th sediment component, and $S_i$ denotes the source and sink term.
2.3.1. Sediment-Settling Velocity
Considering that increases in sediment content hinder the settling velocity, the settling-velocity-hindrance formulas proposed by Slaa et al. [ ] are used when the median particle size is greater than, and not greater than, 100 μm, respectively:

$$w_s = w_{s,o}\,(1 - c_v)^n, \quad n = 4.4\left(D_{50,ref}/D_{50}\right)^{0.2}, \quad D_{50} > 100\ \mu\text{m} \quad (2)$$

$$w_s = w_{s,o}\left(1 - \frac{c_v}{\varphi_{s,struct}}\right)^m (1 - c_v)\left(1 - \frac{c_v}{\varphi_{s,max}}\right)^{-2.5\,\varphi_{s,max}}, \quad D_{50} \le 100\ \mu\text{m} \quad (3)$$

where $w_s$ denotes the sediment-settling velocity in muddy water, $w_{s,o}$ denotes the settling velocity in clear water, $D_{50,ref} = 200$ μm ($D_{50}$ is the median particle size of the sediment), $\varphi_{s,struct}$ denotes the structure density with a value of 0.5, $\varphi_{s,max}$ denotes the maximum density with a value of 0.65, and $m$ denotes the nonlinear effect of the wake on the settling velocity, with a value between 1 and 2.
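The two hindered-settling branches above can be sketched as follows. The clear-water settling velocity, concentrations, and the exponent $m = 1.5$ used here are illustrative inputs for the sketch, not values taken from the model runs in this paper:

```python
def hindered_settling(w_s0, c_v, d50,
                      d50_ref=200e-6, phi_struct=0.5, phi_max=0.65, m=1.5):
    """Hindered settling velocity after Slaa et al.:
    a power law for d50 > 100 um, a structured-suspension form otherwise."""
    if d50 > 100e-6:
        n = 4.4 * (d50_ref / d50) ** 0.2
        return w_s0 * (1.0 - c_v) ** n
    return (w_s0
            * (1.0 - c_v / phi_struct) ** m
            * (1.0 - c_v)
            * (1.0 - c_v / phi_max) ** (-2.5 * phi_max))

# Dilute suspensions barely hinder settling; concentrated ones do.
w_dilute = hindered_settling(w_s0=1e-3, c_v=0.001, d50=66e-6)
w_dense  = hindered_settling(w_s0=1e-3, c_v=0.10,  d50=66e-6)
print(w_dilute, w_dense)   # w_dense < w_dilute < w_s0
```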
Experiments show that the combined action of sediment concentration and gradation affects the mean settling velocity: higher sediment concentrations, and gradations that represent a strong sorting ability, hinder the settling velocity to a greater extent [ ]. If sediment gradation is not taken into account, the settling velocity is overestimated, and the model may then underestimate the SSC and sediment transport in the water body. Therefore, in this paper, the traditional settling-velocity-hindrance formula (hereafter, settling-velocity Formula (3)) and the settling-velocity formula considering gradation [ ] (hereafter, settling-velocity Formula (6)) are adopted for comparative calculation to account for the effect of sediment gradation on the settling velocity:
$$P_D \propto f\left(\lg(\phi_s\,\rho_s),\ \lambda\right) \quad (4)$$

$$\lambda = \frac{d_{90}/d_{10}}{\sqrt{d_{25}\cdot d_{75}}/d_{50}} \quad (5)$$

$$\frac{w_s}{w_0} = \left(1 - \frac{\phi_s}{\phi_{s,struct}}\right)^m (1 - \phi_s)\left(1 - \frac{\phi_s}{\phi_{s,max}}\right)^{-2.5\,\phi_{s,max}} \cdot P_D \quad (6)$$

$$P_D = -0.29\left(\lambda^{0.2} - 1\right)\lg\phi_s + 1.44\exp(-\lambda) + 0.47 \quad (7)$$

where $P_D$ is the gradation-influence coefficient, $\lambda$ is a parameter describing the gradation, and $\phi_s$ and $\rho_s$ are the sediment volume concentration and sediment density, respectively ($\rho_s$ = 2650 kg/m³).
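The empirical gradation-influence relation above can be evaluated directly. The volume concentration used below is an illustrative input; λ = 4 is the value later reported for the Weifang Port sediment:

```python
import math

def gradation_coefficient(phi_s: float, lam: float) -> float:
    """P_D = -0.29 (lambda^0.2 - 1) lg(phi_s) + 1.44 exp(-lambda) + 0.47"""
    return (-0.29 * (lam ** 0.2 - 1.0) * math.log10(phi_s)
            + 1.44 * math.exp(-lam) + 0.47)

# For near-uniform sediment (lambda = 1) P_D stays close to 1; strong
# sorting (lambda = 4) at the same concentration pushes P_D below 1,
# i.e. the mean settling velocity is hindered further.
for lam in (1.0, 4.0):
    print(lam, gradation_coefficient(phi_s=1e-3, lam=lam))
```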
2.3.2. Bottom Reference Concentration
In this study, to describe the sediment exchange between the suspended load and the bed surface, a computational simulation was carried out in the form of source and sink terms, and the formulas
proposed by van Rijn [ ] and Lesser et al. [ ] were used in the computation process. The height of the bed-surface reference point can be expressed as:
$$z_{ref} = \max\left(0.5\,k_{s,c,r},\ 0.5\,k_{s,w,r},\ 0.01\ \text{m}\right) \quad (8)$$

The current-related bottom-roughness height is:

$$k_{s,c,r} = f_{cs}\,D_{50}\left[85 - 65\tanh\left(0.015(\psi - 150)\right)\right] \quad (9)$$

$$\psi = \frac{u_b^2 + u_c^2}{(s - 1)\,g\,D_{50}} \quad (10)$$

where $f_{cs}$ is the correction factor for coarse-grained sediment, $f_{cs} = (0.0005/D_{50})^{1.5}$, and when $D_{50} < 0.5$ mm, $f_{cs}$ is 1. Furthermore, $u_b$ and $u_c$ are the bottom-current velocity and the vertical mean velocity, respectively, and $s$ is the relative sediment density; $k_{s,c,r}$ is limited to the range [0.00064, 0.075], and $k_{s,w,r}$ is the wave-related bottom-roughness height, which is taken equal to the sediment-ripple height.
The sediment concentration at the reference height is calculated according to the following formula [ ]:

$$c(z_{ref}) = \max\left(\beta\,\eta\,\rho_s\,\frac{D_{50}}{z_{ref}}\,\frac{S^{1.5}}{D_*^{0.3}},\ 0.05\,\eta\,\rho_s\right) \quad (11)$$
2.3.3. Sediment-Diffusion Coefficient
When there is an uneven distribution of sediment concentration in a water body, concentration stratification is formed, and the concentration gradient has an inhibitory effect on the turbulence of
the water body, thereby inhibiting the diffusion of sediment. For the vertical diffusion coefficient, Yang [ ] proposed a diffusion coefficient for the combined wave–current action that considers the stratification effect:

$$\varepsilon_w = \varphi_d\,\frac{w_s\,l_w}{2\sinh^{-1}\!\left(\dfrac{w_s}{2\,w_{mw}}\right)} \quad (12)$$

where $\varphi_d$ is the diffusion-correction coefficient, $\varphi_d = 1 - S$, in which $S$ is the inhibition rate of sediment diffusion caused by the stratification effect; the value of $S$ is fitted based on previous experimental data ($\varphi_d$ is 1 when the stratification effect is not considered). Furthermore, $w_{mw}$ is the mixed wave velocity and $l_w$ is the mixed wave length. The inhibition rate $S$ is calculated as

$$S = -0.9\exp\left(-\frac{C_v'(z)}{R_1}\right) + 0.9 \quad (13)$$

$$R_1 = \frac{1}{D_{sand}}\left[4.5\times10^{-8} + 2.28\times10^{-9}\exp\left(\frac{D_{50}/D_{sand} - 0.58}{0.118}\right)\right] \quad (14)$$

where $C_v'(z)$ is the SSC gradient, $C_v'(z) = -\mathrm{d}C_v/\mathrm{d}z$, $C_v$ is the volumetric sediment content, $R_1$ is an empirical coefficient related to particle size, and $D_{sand} = 62$ μm.
However, this factor only applies before wave breaking. After the wave breaks, the violent turbulence of the water body causes the water layers to mix with each other, the sediment-concentration gradient becomes significantly less steep, which strongly affects the diffusion of sediment, and there is essentially no stratification effect. When $H_s/h > 0.4$, the wave-related diffusion coefficient during wave breaking is calculated by the model proposed by van Rijn [ ]:
Inside the wave-boundary layer ($z < \delta_s$):

$$\varepsilon_w = 0.018\,\gamma_{br}\,\beta_w\,\delta_s\,u_b \quad (15)$$

$$\beta_w = 1 + 2\,w_s/u_{*,w} \quad (16)$$

and inside the upper water body ($z > 0.5h$):

$$\varepsilon_w = \min\left(0.05,\ 0.035\,\gamma_{br}\,\frac{h\,H_s}{T}\right) \quad (17)$$

where $\gamma_{br}$ is the wave-breaking amplification factor, $\gamma_{br} = 1 + \left(H_s/h - 0.4\right)^{0.5}$, $\delta_s = 2\gamma_{br}\delta_w$ is the thickness of the boundary layer, and $u_{*,w}$ is the wave-related bottom shear velocity. The current-related diffusion coefficient $\varepsilon_c$ can be set to the vertical eddy-viscosity coefficient calculated in FVCOM.
2.4. Model Coupling
For the three-dimensional coupled wave–current–sediment model, the coupling process can be briefly summarized as follows: The FVCOM hydrodynamic model and the SWAN wave model realize real-time
exchange between the calculation elements through the MCT coupler, and the hydrodynamic model converts the water level into vertical current velocity. The hydrodynamic model FVCOM and the wave model
SWAN transfer the calculated three-dimensional current field data and wave elements to the sediment model and calculate the suspended-load-scour flux, suspended-load-siltation flux, and
bed-load-transport rate through the sediment model, thereby realizing data transfer between the dynamically coupled wave–hydrodynamic model and the sediment model.
3. Study Area and Model Settings
3.1. Study Area
Weifang Port (Figure 1) is on the south bank of Laizhou Bay. Three port areas fall within its jurisdiction, namely, the eastern, central, and western port areas. The research area of this paper is the central port area of Weifang Port, which is the main port area. To verify the rationality of the established three-dimensional coupled wave–current–sediment model, the hydrodynamic conditions and suspended-sediment conditions of Weifang Port in response to the actions of wind and waves were simulated.
3.2. Model Settings
The topography and water-depth data of the calculation area were the measured data of Weifang Port from 2003, and the tidal-current field and suspended-sediment verification data were the full-tide
hydrological data from six hydrological stations from 10–11 November 2003. The wind-field data were derived from the ERA5 wind-field-reanalysis product. The ERA5 is a fifth-generation high-resolution
reanalysis dataset developed by the European Centre for Medium-Range Weather Forecasts by assimilating multisource observational data. This dataset combines current measured data with previous
forecast results every 12 h to obtain accurate atmospheric forecast results. At present, users can obtain the hourly wind-field data from 1979 to the present, with a temporal resolution of 1 h and a
spatial resolution of 0.25° × 0.25°. In this paper, the ERA5 data for a wind-field height 10 m above the Earth’s surface were selected as the wave-drive conditions of the SWAN model. The
particle-size-distribution values were obtained by measuring and analyzing the sediment samples collected in the sea area near Weifang Port using the Malvern 3000 particle-size analyzer (Figure 2). The median particle size of the sediment was 0.066 mm, and the gradation parameter of the Weifang Port sediment was calculated as 4 according to settling-velocity Formula (6).
To ensure the accurate calculation of the tidal-current field in the study area, the method of nesting large and small grids was adopted, and the large and small models both used unstructured
triangular grids. The large model included the entire Bohai Sea, and the grid was refined in the sea area near the Weifang Port. The calculation range was from 37°1′ N–40°52′ N to 117°32′ E–122°13′
E, the mesh scale was between 3500 m and 4000 m, and the grid number was 12,526. The grid and water-depth data are shown in Figure 3. The small model mainly included the sea area near the project area; the calculation range was from 37°5′ N–38°32′ N to 118°50′ E–119°32′ E, the mesh scale was between 20 m and 1500 m, and the grid number was 14,763. The grid and water-depth data are shown in Figure 4. The model was vertically divided into 10 layers. The wetting and drying algorithms were used, and the minimum water depth was set to 0.02 m.
The water-level-boundary conditions were used for the open boundaries of the large and small models. The open-boundary water-level data of the large model were derived from the MIKE21
global-tide-forecasting system, and the ERA5 wind field was used as the wave-driving condition for the large model. The open-boundary water-level data and wave-boundary conditions of the small model
were extracted from the calculation results of the large model.
4. Model Results and Analysis
4.1. Tide Elevation and Tidal-Current Verification
The measured hydrological data used in this study were from 10 to 11 November 2003. They included the tidal elevation, tidal-current velocity, tidal-current direction, and SSC. The locations and
specific coordinates of the stations are shown in Figure 5 and Table 1, respectively.
Figure 6 shows the comparison between the simulated tide levels (using settling-velocity Formula (6)) and the measured tide levels from 12:00 on 10 November 2003 to 20:00 on 11 November 2003 at Station 1 in the sea area near Weifang Port. The verification results were quite consistent.
Figures 7 and 8 show the stratified verification results of the tidal-current velocity and direction at each station in the sea area near Weifang Port from 14:00 on 10 November 2003 to 17:00 on 11 November 2003. The
simulated current velocities and directions of Station 1 to Station 4 were generally consistent with the measured results. The near-bottom-current velocities simulated by Stations 5 and 6 were
slightly slower than the values measured at certain time points, and the simulated current directions also deviated somewhat from the measured data. This is probably because Stations 5 and 6 were in
the vicinity of a structure, so the current there is greatly affected by the terrain and boundaries. Overall, the simulated current velocities and directions at the six stations above were close to
the measured current velocities and directions in the continuous diachronic change process, and the simulated and measured phases were generally consistent. The three-dimensional water-and-sediment
model used in this paper reasonably reflects the hydrodynamic patterns of the sea area, so the model can be used for suspended-sediment simulations.
4.2. Suspended Sediment Concentration Verification
This paper uses the measured data for the suspended sediment in the sea area near Weifang Port from 10–11 November 2003 to validate the SSCs in the bottom and surface water bodies.
Figures 9 and 10 show the suspended-sediment verification results for Stations 1 to 6 (using settling-velocity Formula (6)). The sediment concentration in the surface water body was relatively low, while the sediment concentration in the bottom water body was relatively high. The SSC of each layer fluctuated regularly with time, and the fluctuation amplitude also increased with the water depth. The SSCs in the surface layer of Station 4, the bottom layer of Station 5, and the bottom layer of Station 6 were underestimated in a few time periods. The SSC was related to the bottom-current velocity: if the numerical model underestimated the current speed near the seabed, the sediment was less readily entrained, resulting in lower simulated SSCs in the water column. This chain reaction also lagged in time; that is, the lower simulated SSCs generally occurred after periods of low simulated current speed. According to the previous current-velocity and direction-verification figures, the simulated current velocities were lower than the measured values in the surface layer of Station 4 at 10–15 h, in the bottom layer of Station 5 at 0–5 h, and in the bottom layer of Station 6 at 12–18 h, which caused the underestimation of the SSCs at these time points. Overall, the trend and magnitude of the measured and simulated values at most of the stations were generally the same, so the verification results were good, indicating that the model using settling-velocity Formula (6) can effectively simulate the actual sediment movement in the sea area.
4.3. Sediment-Content Comparison
Settling-velocity Formula (3) was used to simulate the sediment in the sea area near Weifang Port during the same time period, using the same parameter settings as above. Taking Station 4 as an example, the simulated SSCs of the surface and bottom layers before and after the correction are shown in Figure 11. The simulated SSCs after the correction were higher than those before the correction. The maximum current velocity reached 0.5 m/s between 5–10 h and 17–22 h, and the differences between the simulated SSCs before and after the correction were larger during this period. After introducing the modified sediment settling-velocity formula, the overall settling velocity was lower than that given by the original settling-velocity formula. When the current velocity was high, a significant amount of sediment was suspended, the suspended sediment settled less easily, and the SSC in the water body increased and fluctuated more.
The values of the mean relative error (MRE) before and after the correction were used to measure the effect of the gradation on the distribution of the suspended sediment. The root mean square error (RMSE) represents the sample standard deviation of the differences between the predicted values and the experimental values. The smaller the MRE and the RMSE, the better the predicted values fit the experimental values. The MRE and RMSE are calculated as:
$MRE = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{C_{i,Measured} - C_{i,Simulated}}{C_{i,Measured}} \right|$ (18)
$RMSE = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left( C_{i,Measured} - C_{i,Simulated} \right)^{2}}$ (19)
where $n$ is the number of vertical position points calculated by the model, $m$ is the number of stations, $C_{i,Measured}$ is the measured SSC, and $C_{i,Simulated}$ is the simulated SSC.
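As a quick illustration (not code from the paper; the sample values below are arbitrary), the two error measures can be computed as:

```python
import math

def mre(measured, simulated):
    # Mean relative error: average of |C_measured - C_simulated| / C_measured
    return sum(abs(m - s) / m for m, s in zip(measured, simulated)) / len(measured)

def rmse(measured, simulated):
    # Root mean square error of (C_measured - C_simulated)
    return math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / len(measured))

# Hypothetical SSC samples in kg/m^3 (illustrative values only)
meas = [1.0, 2.0, 0.5]
sim = [0.9, 2.2, 0.5]
print(mre(meas, sim))
print(rmse(meas, sim))
```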
According to Formulas (18) and (19), the deviations of the simulated SSCs from the measured SSCs were calculated before and after the correction, respectively. For the SSCs in the surface layers, the
MRE values before and after the correction were 29% and 22%, respectively, and the RMSE values before and after the correction were 0.19 kg/m^3 and 0.14 kg/m^3. For the SSCs in the bottom layers, the
MRE values before and after the correction were 22% and 15%, respectively, and the RMSE values before and after the correction were 0.19 kg/m^3 and 0.13 kg/m^3. These comparisons show that the modified model produced more accurate results.
Figures 12, 13 and 14 show the wave fields, current fields, and SSC fields, respectively, in the surface and bottom layers during a surge before and after the correction. The analysis of the SSC field maps shows that the SSCs in the open sea were relatively small (mostly less than 0.5 kg/m^3 in both the surface and the bottom layers) and that in the nearshore area, due to wave shoaling and breaking, the SSCs exceeded 2 kg/m^3. In the nearshore area, the SSCs were affected by the current velocity and were higher at locations where the velocity was high or changed drastically. The current field mainly pointed from east to west. Due to the sheltering effect of the structure, the SSCs were low on the west side of the structure and high on the east side. Comparing Figures 13 and 14, the nearshore SSCs calculated by the settling-velocity formula proposed in this paper were higher than those calculated by the original settling-velocity formula. Specifically, the nearshore SSCs in the surface and bottom layers calculated by the settling-velocity formula proposed in this paper were approximately 1 kg/m^3 and 2 kg/m^3 higher, respectively, than those calculated by the original settling-velocity formula. In practical engineering applications, the SSCs calculated by the settling-velocity formula proposed in this paper will be even higher, so a construction scheme with a higher safety factor is recommended for the study area.
5. Conclusions
This study introduced a sediment-settling-velocity formula that considers gradation into the three-dimensional FVCOM-SWAN coupled water-and-sediment-movement model and simulated the suspended-sediment movement in the sea area near Weifang Port with the modified single-component model. We drew the following conclusions:
After introducing settling-velocity Formula (6), the overall settling velocity of the sediment decreased. The higher the sediment concentration, the more the settling velocity was reduced. The sediment in the bottom water body was more highly concentrated than that in the surface water body, so the SSC in the bottom layer was high and fluctuated more. After the introduction of settling-velocity Formula (6), the model fitted the measured data better. Hence, the model can effectively describe the sediment-movement process in the sea area near Weifang Port.
The SSCs simulated by settling-velocity Formula (6) were higher than those simulated by settling-velocity Formula (3), and the SSCs simulated by the two formulas differed more when the current
velocity was faster. With settling-velocity Formula (6), the overall settling velocity of the sediment was slower than that simulated by settling-velocity Formula (3). When the current velocity
was high, more sediment was suspended, the suspended sediment settled less easily, and the SSCs in the water body increased and fluctuated more.
For the SSC field in the sea area of Weifang Port, the nearshore SSCs calculated by settling-velocity Formula (6) were higher than those calculated by settling-velocity Formula (3). Specifically,
the nearshore SSCs in the surface and bottom layers calculated by settling-velocity Formula (6) were approximately 1 kg/m^3 and 2 kg/m^3 higher, respectively, than those calculated by
settling-velocity Formula (3). In practical engineering applications, the SSCs calculated by a settling-velocity formula considering gradation will be even higher, so a construction scheme with a
higher safety factor is recommended for the study area.
Author Contributions
J.Q., conceptualization, methodology, investigation, and formal analysis. Y.J., investigation, formal analysis, validation, and writing—original draft preparation. C.C., investigation and data
curation. J.Z., writing—review and funding acquisition. All authors have read and agreed to the published version of the manuscript.
This study was supported by the National Natural Science Foundation of China (grant no. U1906231) and the Open Funds of the State Key Laboratory of Hydraulic Engineering Simulation and Safety of China (grant no. HESS-2221).
Data Availability Statement
Not applicable.
The authors also gratefully acknowledge the comments and suggestions of the anonymous reviewers.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 11. Comparison of suspended-sediment concentrations at Station 4 before and after the correction.
Figure 13. Suspended-sediment-concentration fields in the surface and bottom layers (before correction).
| Station number | Beijing54 x | Beijing54 y | WGS84 E | WGS84 N |
|---|---|---|---|---|
| 1 | 429,431.5 | 4,134,252 | 119.2 | 37.34 |
| 2 | 433,521.9 | 4,132,544 | 119.25 | 37.32 |
| 3 | 437,045.2 | 4,130,220 | 119.29 | 37.30 |
| 4 | 431,003.2 | 4,128,988 | 119.22 | 37.29 |
| 5 | 428,060.5 | 4,124,240 | 119.19 | 37.25 |
| 6 | 428,809.7 | 4,124,215 | 119.20 | 37.25 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Qi, J.; Jing, Y.; Chen, C.; Zhang, J. Numerical Simulation of Tidal Current and Sediment Movement in the Sea Area near Weifang Port. Water 2023, 15, 2516. https://doi.org/10.3390/w15142516
AMA Style
Qi J, Jing Y, Chen C, Zhang J. Numerical Simulation of Tidal Current and Sediment Movement in the Sea Area near Weifang Port. Water. 2023; 15(14):2516. https://doi.org/10.3390/w15142516
Chicago/Turabian Style
Qi, Jiarui, Yige Jing, Chao Chen, and Jinfeng Zhang. 2023. "Numerical Simulation of Tidal Current and Sediment Movement in the Sea Area near Weifang Port" Water 15, no. 14: 2516. https://doi.org/
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/2073-4441/15/14/2516?utm_campaign=releaseissue_waterutm_medium=emailutm_source=releaseissueutm_term=titlelink105","timestamp":"2024-11-06T07:52:12Z","content_type":"text/html","content_length":"472711","record_id":"<urn:uuid:7f871258-109a-438e-afb8-16097f455436>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00894.warc.gz"} |
From Kerbal Space Program Wiki
A synchronous orbit is an orbit whose orbital period equals the rotational period of the orbited body. The eccentricity and inclination are not bound to specific values, although to be synchronous the orbit must not intersect the atmosphere or surface of the orbited body, which would alter the orbit. Satellites in synchronous orbits have a ground track forming an analemma.
Important! You need to match your orbital period with the sidereal rotation period, not the solar day. So, for Kerbin it will be 5h 59m 9.425s instead of the 6h that a lot of people go for.
Stationary orbits
Stationary orbits are a special kind of synchronous orbit. Their 0° inclination and eccentricity of 0 cause the ground track to be a single point: a satellite in such an orbit has no motion relative to the body's surface. Since it is impossible to get all orbital values exact for a stationary orbit, satellites in stationary orbits form small analemmata.
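As a rough sketch (the constants below are assumptions taken from the game's published values: Kerbin gravitational parameter GM ≈ 3.5316×10^12 m³/s², radius 600 km, sidereal day 21,549.425 s), the stationary-orbit altitude follows from Kepler's third law:

```python
import math

# Assumed Kerbin constants (from the game's published values)
GM = 3.5316e12        # gravitational parameter, m^3/s^2
radius = 600_000.0    # equatorial radius, m
T = 21_549.425        # sidereal rotation period, s (5h 59m 9.425s)

# Kepler's third law: T^2 = 4*pi^2 * a^3 / GM  =>  a = (GM * T^2 / (4*pi^2))^(1/3)
a = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude = a - radius

print(f"semi-major axis: {a / 1000:.2f} km")        # ~3463.33 km
print(f"altitude:        {altitude / 1000:.2f} km")  # ~2863.33 km
```

The result matches the Kerbin row of the altitude table later in this article.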
Some celestial bodies don't allow for synchronous orbits because the altitude required to synchronously orbit is beyond the body's sphere of influence. The body's slow rotation rate causes this
effect: a very high altitude is necessary to allow for such a long orbital period. Tidally locked moons don't have synchronous orbit possibilities either because of their slow rotation. Moho is the
only planet without any possibilities for a craft to achieve a synchronous orbit because of its very slow rotational period; Moho completes approximately two rotations during the time it takes for an
object in the highest possible orbit to complete a revolution.
Semi-synchronous and similar orbits
When the orbital period is half as long as the rotational period, the orbit is usually described as semi-synchronous. The semi-major axis of a semi-synchronous orbit follows from Kepler's third law of planetary motion, given the semi-major axis of the synchronous orbit and the ratio between the two orbital periods:
${\displaystyle a_{\frac {1}{f}}={\frac {1}{\sqrt[{3}]{f^{2}}}}\cdot a_{1}}$
The factor f is the quotient of the period of the synchronous orbit (semi-major axis a[1]) and that of the second orbit (semi-major axis a[1/f]). When the second orbit is a semi-synchronous orbit, this quotient is 2:
${\displaystyle a_{\frac {1}{2}}={\frac {1}{\sqrt[{3}]{2^{2}}}}\cdot a_{1}={\frac {1}{\sqrt[{3}]{4}}}\cdot a_{1}}$
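As a numeric check of this relation (using Kerbin's synchronous semi-major axis from the table later in this article), the semi-synchronous semi-major axis is smaller by a factor of the cube root of 4:

```python
# a_(1/f) = f**(-2/3) * a_1, which follows from Kepler's third law
def scaled_sma(a_sync, f):
    return a_sync / f ** (2 / 3)

a_sync_kerbin = 3463.33            # km, Kerbin synchronous semi-major axis
a_semi = scaled_sma(a_sync_kerbin, 2)
print(round(a_semi, 2))            # ~2181.76 km, as in the table
```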
An orbit whose period is shorter than the rotational period has some advantages, as some bodies that don't allow synchronous orbits still offer opportunities for semi-synchronous orbits.
When dropping numerous payloads that should land near each other, the orbital period should be an integer multiple of the celestial body's sidereal day. This way, the surface stays in the same position relative to the orbit, and each payload has the same descent route if it is detached at the same point in the orbit (e.g. apoapsis). The inverse factor 1/f defines how many sidereal days pass between two detachments. For example, a super-synchronous orbit has f = 1/2, so a payload could be dropped every two sidereal days or, when orbiting Kerbin, one every twelve hours.
An example of a semi-synchronous orbit for real world scientific applications is a Molniya orbit.
Sun-synchronous orbit
→ See also: Sun-synchronous orbit on Wikipedia
In the real world, there exists a sun-synchronous orbit. It's important to note that, although the name implies it, the orbit is not synchronous around the Sun. Instead, it describes an orbit around Earth whose plane itself rotates, such that the orbit keeps a fixed orientation relative to the Sun. Since it requires the orbited body to have an uneven gravitational field, it is impossible to simulate in KSP.
A Molniya orbit is a semi-synchronous, highly elliptical orbit. The eccentricity should be as high as the central body permits. A three-satellite constellation in Molniya orbits can provide constant coverage of the high-latitude regions. To set up such a constellation, the mean anomalies of the three satellites should be spaced out by ${\displaystyle 120^{\circ }}$ or ${\displaystyle {\frac {2\pi }{3}}}$. The longitudes of the ascending nodes can also be spaced out by ${\displaystyle 120^{\circ }}$ to produce the clover-like appearance.
For Kerbin, that equates to a periapsis of about 70 km, an apoapsis of about 3117 km, and an inclination of around 63°.
Advantages of synchronous orbits
One advantage of synchronous orbits is that they allow dropping multiple payloads from one craft, because the orbit will periodically pass above the same point on the body's surface. Usually, the
orbit has a large eccentricity so that the payload has to complete a minimal amount of maneuvers to reach the surface. In this case the payload is detached at the apoapsis and decelerated such that
it lands on the celestial body. After the payload has successfully landed, the next payload can be dropped as soon as the craft reaches the apoapsis again.
Stationary orbits
Communication with a satellite in a stationary orbit is easier than with one in another orbit, as ground-based antennae do not have to move to track the satellite's motion relative to the orbited body.
Orbital altitudes and semi-major axes of Kerbal's major bodies
The following table contains the altitudes for circular, synchronous orbits around all of Kerbal's celestial bodies, even when the altitude resides outside the SOI. The altitudes are relative to the
body's surface, while the semi-major axes are measured from the body's center.
| Body | Synchronous altitude | Synchronous semi-major axis | Semi-synchronous altitude | Semi-synchronous semi-major axis | Tidally locked |
|---|---|---|---|---|---|
| Kerbol | 1508045.29 km | 1769645.29 km | 853206.67 km | 1114806.67 km | – |
| Moho | 18173.17 km † | 18423.17 km † | 11355.87 km † | 11605.87 km † | No |
| Eve | 10328.47 km | 11028.47 km | 6247.50 km | 6947.50 km | No |
| Gilly | 42.14 km | 55.14 km | 21.73 km | 34.73 km | No |
| Kerbin | 2863.33 km | 3463.33 km | 1581.76 km | 2181.76 km | No |
| Mun | 2970.56 km † | 3170.56 km † | 1797.33 km | 1997.33 km | Yes |
| Minmus | 357.94 km | 417.94 km | 203.29 km | 263.29 km | No |
| Duna | 2880.00 km ‡ | 3200.00 km | 1695.87 km | 2015.87 km | No |
| Ike | 1133.90 km † | 1263.90 km † | 666.20 km | 796.20 km | Yes |
| Dres | 732.24 km | 870.24 km | 410.22 km | 548.22 km | No |
| Jool | 15010.46 km | 21010.46 km | 7235.76 km | 13235.76 km | No |
| Laythe | 4686.32 km † | 5186.32 km † | 2767.18 km | 3267.18 km | Yes |
| Vall | 3593.20 km † | 3893.20 km † | 2152.56 km † | 2452.56 km † | Yes |
| Tylo | 14157.88 km † | 14757.88 km † | 8696.88 km | 9296.88 km | Yes |
| Bop | 2588.17 km † | 2653.17 km † | 1606.39 km † | 1671.39 km † | Yes |
| Pol | 2415.08 km † | 2459.08 km † | 1505.12 km † | 1549.12 km † | Yes |
| Eeloo | 683.69 km | 893.69 km | 352.99 km | 562.99 km | No |
• † indicates that the altitude resides outside the SOI
• ‡ indicates that the altitude is the same as the orbit of another object
See also | {"url":"https://wiki.kerbalspaceprogram.com/wiki/Synchronous_orbit","timestamp":"2024-11-05T09:55:27Z","content_type":"text/html","content_length":"41758","record_id":"<urn:uuid:c40b68df-a297-47d0-b49d-8c111271156e>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00816.warc.gz"} |
Solving nth-Order Integro-Differential Equations Using the Combined Laplace Transform-Adomian Decomposition Method
1. Introduction
In the recent literature there is a growing interest in solving integro-differential equations. The reader is referred to [1-3] for an overview of recent work in this area. In the early 1980s, Adomian [4-7] proposed a new and fruitful method (the so-called Adomian decomposition method) for solving linear and nonlinear (algebraic, differential, partial differential, integral, etc.) equations. It has been shown that this method yields a rapidly convergent series solution for linear and nonlinear deterministic and stochastic equations. The main objective of this work is to use the Combined Laplace Transform-Adomian Decomposition Method (CLT-ADM) to solve nth-order integro-differential equations.
Let us consider the general functional equation
$u = f + N(u),$ (1.1)
where $N$ is a nonlinear operator and $f$ is a given function. The Adomian technique consists of approximating the solution of (1.1) as an infinite series
$u = \sum_{n=0}^{\infty} u_n,$ (1.2)
and decomposing the nonlinear operator $N$ as
$N(u) = \sum_{n=0}^{\infty} A_n,$ (1.3)
where the $A_n$ are the Adomian polynomials, each depending on $u_0, u_1, \ldots, u_n$. The proofs of the convergence of the series (1.2) and (1.3) are available in the literature (see also Section 2.1). Substituting (1.2) and (1.3) into (1.1) yields
$\sum_{n=0}^{\infty} u_n = f + \sum_{n=0}^{\infty} A_n.$
Thus, we can identify
$u_0 = f, \qquad u_{n+1} = A_n(u_0, u_1, \ldots, u_n), \quad n \ge 0.$
Thus all components of $u$ can be calculated recursively.
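For a polynomial nonlinearity the Adomian polynomials have a simple closed form. For example, with N(u) = u² (an illustrative choice, not tied to a specific equation in this paper), A_n is the coefficient of λ^n in (Σ_k u_k λ^k)², i.e. a Cauchy-product coefficient:

```python
def adomian_poly_square(u):
    # Adomian polynomials A_0..A_{len(u)-1} for the nonlinearity N(u) = u**2:
    # A_n = sum over i + j = n of u_i * u_j (Cauchy product coefficients)
    n = len(u)
    return [sum(u[i] * u[k - i] for i in range(k + 1)) for k in range(n)]

print(adomian_poly_square([1.0, 2.0, 3.0]))
# A_0 = u0^2 = 1, A_1 = 2*u0*u1 = 4, A_2 = 2*u0*u2 + u1^2 = 10
```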
2. General nth-Order Integro-Differential Equations
Let us consider the general nth-order integro-differential equations of the type [1,2]:
with initial conditions
To solve the general nth-order integro-differential Equation (2.1) using the Laplace transform method, we recall the Laplace transforms of the derivatives of
Applying the Laplace transform
This can be reduced to
Substituting (1.2) into (2.2) leads to
The Adomian decomposition method presents the recursive relation
A necessary condition for (2.3) to comply is that
Applying the inverse Laplace transform to both sides of the first part of (2.3) gives
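To see the recursion in action, consider a hypothetical toy problem (chosen for illustration; it is not one of this paper's examples): y′(x) = 1 + ∫₀ˣ y(t) dt with y(0) = 0. Taking Laplace transforms gives sY = 1/s + Y/s, so the decomposition Y₀ = 1/s², Y_{n+1} = Y_n/s² yields components y_n(x) = x^(2n+1)/(2n+1)!, whose sum is the exact solution sinh(x):

```python
import math

def y_component(n, x):
    # n-th CLT-ADM component: inverse Laplace transform of 1/s**(2*n + 2)
    return x ** (2 * n + 1) / math.factorial(2 * n + 1)

def y_approx(x, terms=10):
    # Partial sum of the decomposition series
    return sum(y_component(n, x) for n in range(terms))

print(abs(y_approx(1.0) - math.sinh(1.0)) < 1e-12)  # series converges to sinh(x)
```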
2.1. A Test of Convergence
The convergence of the method is established by Theorem 3.1 in [9]. In fact, on each interval the inequality
2.2. Definition
then we call
3. Applications
In this section, the CLT-ADM for solving nth-order integro-differential equations is illustrated in the three examples given below. To show the high accuracy of the results obtained by applying the present method to problem (2.1) compared with the exact solution, the maximum error is defined as:
Example 1
Solve the second-order integro-differential equation by using the CLT-ADM [1,2]:
As mentioned above, taking Laplace transform of both sides of (3.1) gives
so that
or equivalently
Taking the inverse Laplace transform of both sides of the first part of (3.2) gives
Thus the series solution is given by
that converges to the exact solution. In Table 1, the maximum errors and the EOC are presented. From Table 1, it can be deduced that the error decreased monotonically with the increment of the integer n.
Example 2
Solve the third-order integro-differential equation by using the CLT-ADM [1,2]:
As early mentioned, taking Laplace transform of both sides of (3.3) gives
so that
or equivalently
Table 1. Maximum error and EOC for Example 1.
Taking the inverse Laplace transform of both sides of the first part of (3.4) gives
The series solution is therefore given by
that converges to the exact solution. In Table 2, the maximum errors and the EOC are shown. From Table 2, it can be concluded that the error decreased monotonically with the increment of the integer n.
Example 3
Solve the eighth-order integro-differential equation by using the CLT-ADM [1,2]:
As previously mentioned, taking Laplace transform of both sides of (3.5) gives
Table 2. Maximum error and EOC for Example 2.
so that
or equivalently
Taking the inverse Laplace transform of both sides of the first part of (3.6) gives
and so on for other components. Consequently, the series solution is given by
Table 3. Maximum error and EOC for Example 3.
that converges to the exact solution. In Table 3, the maximum errors and the EOC are given. From Table 3, it can be deduced that the error decreased monotonically with the increment of the integer n.
4. Conclusion
The CLT-ADM has been applied to solving nth-order integro-differential equations. Comparison of the results obtained by the present method with those obtained by HPM and VIM reveals that the present method is superior because of its lower error and the smaller number of iterations required. It has been shown that the error is monotonically reduced as the integer n is increased.
5. Acknowledgements
We would like to thank the referees for their careful review of our manuscript. | {"url":"https://scirp.org/journal/paperinformation?paperid=32429","timestamp":"2024-11-10T07:40:32Z","content_type":"application/xhtml+xml","content_length":"105740","record_id":"<urn:uuid:3eff66db-aa62-4f10-8958-c51b300c0df4>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00017.warc.gz"} |
The Stacks project
Lemma 68.11.8. Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$. Let $x \in |X|$. Let $(U, u) \to (X, x)$ be an elementary étale neighbourhood. Then
\[ \mathcal{O}_{X, x}^ h = \mathcal{O}_{U, u}^ h \]
In words: the henselian local ring of $X$ at $x$ is equal to the henselization $\mathcal{O}_{U, u}^ h$ of the local ring $\mathcal{O}_{U, u}$ of $U$ at $u$.
Comments (0) | {"url":"https://stacks.math.columbia.edu/tag/0EMY","timestamp":"2024-11-12T13:32:28Z","content_type":"text/html","content_length":"14869","record_id":"<urn:uuid:713cbec4-2d88-41ba-bf8f-8cb63de318cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00408.warc.gz"}
Math Tutor Chat: Step-by-Step Online Help With Pre-Algebra & Algebra 1
Looking for a Math Tutor Chat? You've found it.
Hi! I'm Ms. Maguire, and you can do math. Welcome to my live online help chat for Algebra 1. I'm the only tutor you'll be talking to here. I'll show you how to solve the problem(s) confusing you,
step by step. It's perfect for homework and studying for tests, especially with word problems.
Ms. Maguire, your online tutor for algebra 1
Here's an example of how I showed a student how to solve this algebra word problem:
Two cyclists, 84 miles apart, start riding toward each other at the same time. One cycles two times as fast as the other. If they meet 4 hours later, what is the speed (in mi/h) of the faster
a. Write an equation using the information as it is given above that can be solved to answer this problem. Use the variable r to represent the speed of the slower cyclist.
b. What is the speed of the faster cyclist? ____ mi/hr
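Here's one possible way to set it up (a sketch of the kind of reasoning I walk students through): together, the cyclists close the 84-mile gap in 4 hours, so 4(r + 2r) = 84.

```python
# Let r = speed of the slower cyclist; the faster one rides at 2r.
# Together they close 84 miles in 4 hours: 4 * (r + 2r) = 84
closing_speed = 84 / 4     # 21 mi/h combined, which equals r + 2r = 3r
r = closing_speed / 3      # slower cyclist: 7 mi/h
faster = 2 * r             # faster cyclist
print(faster)              # 14.0 mi/h
```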
What topics can I assist you with?
• adding fractions with unlike denominators
• decimals
• evaluating expressions
• factoring
• graphing
• order of operations
• polynomial inequalities
• simplifying expressions
How Does This Math Tutor Chat Work?
1. Make sure the gray box says "Chat With Us" (not "Leave a message").
2. Say hello to me in the chat box to check if I'm available. If I don't respond right away, please try again in a little bit. (I can only serve one student at a time). Or if you want to leave me
your U.S. mobile number, I'll text you as soon as I get a chance.
3. Feel free to ask any questions about whether I'm able to help you. Note that this math tutor chat is for algebra 1 only.
4. Submit your payment by scrolling to the bottom of my tutoring page then clicking "Checkout." It will not ask for your postal address.
5. Type (or copy and paste) the first problem with which you need help into the gray chat box. If it makes more sense to give me a screenshot, photo, or document, let me know, and I'll tell you
where to send it to.
Is this a good resource for taking tests?
No, this math tutor chat is not appropriate for tests. That would be cheating. We prepare for tests by studying and practicing. Use this website for homework, practice, and sample problems.
What if the gray box says "Leave a message," not "Chat"?
If the bottom right corner of your screen says "Leave a message," that means I'm not available at the moment. Feel free to send me a message, which will go directly to my e-mail inbox.
It helps if you can tell me whether you're a student, teacher, or parent.
It will ask for your name and e-mail address. I value your privacy and will not share or sell your personal data.
Within 24 hours, you should receive an e-mail reply from me.
Algebra 1 word problems can be frustrating. Take advantage of this simple, easy math tutor chat! | {"url":"https://www.candomath.com/","timestamp":"2024-11-10T06:34:14Z","content_type":"text/html","content_length":"15765","record_id":"<urn:uuid:54977713-9caf-4ced-b7e5-1b00bd667fd1>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00468.warc.gz"} |
Our users:
I bought "The Algebrator" for my daughter to help her with her 10th Grade Advanced Algebra class. Like most kids, she was getting impatient with the evolution of equations (quadratic in particular)
and making mistakes in her arithmetic. I very much like the step-by-step display of your product. I think it got my daughter a better grade in the past semester.
H.M., Texas
So far its great!
Jori Kidd, KY
I must say that I am extremely impressed with how user friendly this one is over the Personal Tutor. Easy to enter in problems, I get explanations for every step, every step is complete, etc.
David Aguilar, CA
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
Search phrases used on 2010-11-13:
• free third grade geometry printouts
• adding and subtracting 2 digit numbers worksheets
• conceptual physics answers chapter 9
• hardest equation in the world
• multiplying and dividing integers worksheet
• solving differential equations in ti 89
• learning pre algebra formulas
• downloadable casio FX-ES Emulator
• dividing equations worksheet
• worksheet addition of monomials and binomials
• algebra with pizzazz answer key
• free advance algebra quiz online
• algebra 1 help
• glencoe mathematics algebra 2 even answers
• free 6th grade printables
• Learning Basic Algebra
• worksheet including negative
• greatest common denominator worksheets
• casio calculator on pocketpc
• how to solve pie
• Answers Algebra Problems
• calculator for multiplying radicals
• clep algebra tests
• area of triangles and squares and free worksheets and fourth grade
• square root simplifier
• herstein solutions
• Graphing Linear Equations Printable Worksheets
• mixing solution algebra
• cheat answers to saxon math 87
• free 6th grade perimeter and area worksheets
• Free general aptitude test sample paper
• dividing and multiplying polynomial fractions worksheets free
• downloadable Online ROUNDING calculator
• Free Download A Transition to Advanced Mathematics by Smith
• solving inequality worksheet
• Math Question to Answer Translator
• Algebra Software
• junior maths exam papers 2007
• Pre-Algebra puzzle pizzaz
• McDougal Littell Inc english workbook answers
• math helper.com
• square equation
• ti-84 plus factorisation
• worksheets in adding, subtracting, multiplying and dividing of a whole number
• least common multiple worksheet
• free online english practice parers for 11+
• subtracting fractions and mixed numbers: like denominators, lesson plans
• tips on factorising
• fraction worksheets for first grade
• Accounting Programs for TI-89
• benefit of writing slope as a fraction
• answers to middle school math with pizzazz!book d+ topic 4 area of triangles
• Free step by step equation calculator
• ti84 algebra download
• least common multiple 37 29
• Holt, Rinehart and Winston algebra 1 help
• how do change a percet to a decimal
• square root equation solver
• math tutor adding and subtracting negative numbers
• free operations on radical expression solver
• McDougal Littell algebra 1 9th grade fl edition book
• alegra prep worksheets
• teach me pre algebra
• Factoring Quadratic Equations with fractions by using a table of value
• Greatest Common Divisor formula
• solving exponential inequalities
• Unit Circle Template\ Practice Worksheet
• circumference worksheet 125 (grade 7)
• cubed root on calculator
• hardest 5th grade math question
• third order fit
• inequalities 9th grade lesson
• free downloard Aptitude Online Test
• rudin "principles of analysis"
• Mcdougal Littell Geometry 10th grade
• graph quadratic equation
• trigonomic equations
• prentice hall algebra 1 quadratic quizzes
• college algebra on permutation and combination | {"url":"https://algebra-help.com/math-tutorials/third-power-equation-solve.html","timestamp":"2024-11-09T00:30:27Z","content_type":"application/xhtml+xml","content_length":"12339","record_id":"<urn:uuid:306f6304-e9af-421c-8e97-0cb3a16f4e6f>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00860.warc.gz"} |