content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Operation (mathematics)
Jump to navigation Jump to search
This article
needs additional citations for verification
(January 2016) (Learn how and when to remove this template message)
In mathematics, an operation is a calculation from zero or more input values (called operands) to an output value. The number of operands is the arity of the operation. The most commonly studied
operations are binary operations, (that is, operations of arity 2) such as addition and multiplication, and unary operations (operations of arity 1), such as additive inverse and multiplicative
inverse. An operation of arity zero, or nullary operation, is a constant. The mixed product is an example of an operation of arity 3, also called ternary operation. Generally, the arity is supposed
to be finite. However, infinitary operations are sometimes considered, in which context the "usual" operations of finite arity are called finitary operations.
Types of operation[edit]
There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and trigonometric functions. Binary operations, on the other hand, take two
values, and include addition, subtraction, multiplication, division, and exponentiation.
Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and
subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and
intersection and the unary operation of complementation. Operations on functions include composition and convolution.
Operations may not be defined for every possible value. For example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is
defined form a set called its domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its range. For example, in the real
numbers, the squaring operation only produces non-negative numbers; the codomain is the set of real numbers, but the range is the non-negative numbers.
Operations can involve dissimilar objects. A vector can be multiplied by a scalar to form another vector. And the inner product operation on two vectors produces a scalar. An operation may or may not
have certain properties, for example it may be associative, commutative, anticommutative, idempotent, and so on.
The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs.
An operation is like an operator, but the point of view is different. For instance, one often speaks of "the operation of addition" or "addition operation" when focusing on the operands and result,
but one says "addition operator" (rarely "operator of addition") when focusing on the process, or from the more abstract viewpoint, the function + : S × S → S.
An operation ω is a function of the form ω : V → Y, where V ⊂ X[1] × ... × X[k]. The sets X[k] are called the domains of the operation, the set Y is called the codomain of the operation, and the
fixed non-negative integer k (the number of arguments) is called the type or arity of the operation. Thus a unary operation has arity one, and a binary operation has arity two. An operation of arity
zero, called a nullary operation, is simply an element of the codomain Y. An operation of arity k is called a k-ary operation. Thus a k-ary operation is a (k+1)-ary relation that is functional on its
first k domains.
The above describes what is usually called a finitary operation, referring to the finite number of arguments (the value k). There are obvious extensions where the arity is taken to be an infinite
ordinal or cardinal, or even an arbitrary set indexing the arguments.
Often, use of the term operation implies that the domain of the function is a power of the codomain (i.e. the Cartesian product of one or more copies of the codomain),^[1] although this is by no
means universal, as in the example of multiplying a vector by a scalar.
See also[edit] | {"url":"https://static.hlt.bme.hu/semantics/external/pages/tenzorszorzatok/en.wikipedia.org/wiki/Operation_(mathematics).html","timestamp":"2024-11-01T22:49:03Z","content_type":"text/html","content_length":"55806","record_id":"<urn:uuid:29e75086-f9b1-4010-86f7-167dfa66ef42>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00407.warc.gz"} |
Printable Figure Drawings
Improper Fractions To Mixed Numbers Worksheets
Improper Fractions To Mixed Numbers Worksheets - Designed by teachers for third to fifth grade, these activities provide plenty of support in learning to convert mixed numbers and improper fractions.
Web it's easy to get mixed up in math class. Improper fraction worksheets improper fractions are. Web mixed numbers & improper fractions in each problem below, an improper fraction is represented by
blocks beneath a number line. Web this math worksheet gives your child practice converting improper fractions to mixed numbers and vice versa. Conceive of mixed numbers as equivalent to fractions to
complete these pdf worksheets.
The problems may be selected from easy, medium or hard level of difficulty. Here, they're partitioned into equal parts to represent improper fractions, though. 4th, 5th print full size skills
reducing fractions, working with improper fractions, working with mixed numbers common core standards: Web here you will find a wide range of free printable fraction worksheets which will help your
child understand and practice how to convert improper fractions to mixed numbers. This fraction worksheet is great for working on converting improper fractions and mixed numbers.
Web making fractions readable fractions greater than one are much easier to understand when written as mixed numbers. Designed by teachers for third to fifth grade, these activities provide plenty of
support in learning to convert mixed numbers and improper fractions. Improper fraction worksheets improper fractions are. Thirds, fourths, ifths.) 0 0 0 0 0 0 3 4 2 2 2 2 Web improper fractions &
mixed numbers videos 139 and 140 on www.corbettmaths.com question 1:
Web mixed numbers & improper fractions in each problem below, an improper fraction is represented by blocks beneath a number line. Web it's easy to get mixed up in math class. Web to convert a mixed
number into an improper fraction, multiply the whole number with the denominator of the fraction, add the numerator of the fraction to the product,.
Improper fraction worksheets improper fractions are. Change these improper fractions into mixed numbers (a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l) (m) (n) (o) (p) (q) (r) (s) (t) (u) (v) (w) (x)
(y) Web converting improper fractions and mixed numbers. It will produce 15 improper fractions problems and 15 mixed number problems. The problems may.
The problems may be selected from easy, medium or hard level of difficulty. Web mixed numbers & improper fractions in each problem below, an improper fraction is represented by blocks beneath a
number line. Change these improper fractions into mixed numbers (a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l) (m) (n) (o) (p) (q) (r) (s).
Web converting improper fractions and mixed numbers. These shapes aren't anything new for kids! Web this math worksheet gives your child practice converting improper fractions to mixed numbers and
vice versa. 4th, 5th print full size skills reducing fractions, working with improper fractions, working with mixed numbers common core standards: Web making fractions readable fractions greater than
one are much.
Web making fractions readable fractions greater than one are much easier to understand when written as mixed numbers. Web here you will find a wide range of free printable fraction worksheets which
will help your child understand and practice how to convert improper fractions to mixed numbers. Thirds, fourths, ifths.) 0 0 0 0 0 0 3 4 2 2.
Designed by teachers for third to fifth grade, these activities provide plenty of support in learning to convert mixed numbers and improper fractions. 4th, 5th print full size skills reducing
fractions, working with improper fractions, working with mixed numbers common core standards: Change these improper fractions into mixed numbers (a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k).
Web improper fractions & mixed numbers videos 139 and 140 on www.corbettmaths.com question 1: Grade 4 number & operations: Thirds, fourths, ifths.) 0 0 0 0 0 0 3 4 2 2 2 2 Here, they're partitioned
into equal parts to represent improper fractions, though. These shapes aren't anything new for kids!
Designed by teachers for third to fifth grade, these activities provide plenty of support in learning to convert mixed numbers and improper fractions. Change these improper fractions into mixed
numbers (a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l) (m) (n) (o) (p) (q) (r) (s) (t) (u) (v) (w) (x) (y) Web converting improper fractions and.
It will produce 15 improper fractions problems and 15 mixed number problems. Web it's easy to get mixed up in math class. Web mixed numbers & improper fractions in each problem below, an improper
fraction is represented by blocks beneath a number line. Thirds, fourths, ifths.) 0 0 0 0 0 0 3 4 2 2 2 2 Here, they're.
Designed by teachers for third to fifth grade, these activities provide plenty of support in learning to convert mixed numbers and improper fractions. Web here you will find a wide range of free
printable fraction worksheets which will help your child understand and practice how to convert improper fractions to mixed numbers. Web converting improper fractions to mixed numbers |.
Improper Fractions To Mixed Numbers Worksheets - Grade 4 number & operations: Web mixed numbers & improper fractions in each problem below, an improper fraction is represented by blocks beneath a
number line. Web this math worksheet gives your child practice converting improper fractions to mixed numbers and vice versa. Improper fraction worksheets improper fractions are. Give your students
some extra practice with our mixed numbers and improper fractions worksheets! It will produce 15 improper fractions problems and 15 mixed number problems. Here, they're partitioned into equal parts
to represent improper fractions, though. Change these improper fractions into mixed numbers (a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l) (m) (n) (o) (p) (q) (r) (s) (t) (u) (v) (w) (x) (y) Web
here you will find a wide range of free printable fraction worksheets which will help your child understand and practice how to convert improper fractions to mixed numbers. 4th, 5th print full size
skills reducing fractions, working with improper fractions, working with mixed numbers common core standards:
Use the number line to determine what the equivalent mixed number form would be. This fraction worksheet is great for working on converting improper fractions and mixed numbers. Web here you will
find a wide range of free printable fraction worksheets which will help your child understand and practice how to convert improper fractions to mixed numbers. Web mixed numbers & improper fractions
in each problem below, an improper fraction is represented by blocks beneath a number line. These shapes aren't anything new for kids!
Web converting improper fractions and mixed numbers. It will produce 15 improper fractions problems and 15 mixed number problems. Web here you will find a wide range of free printable fraction
worksheets which will help your child understand and practice how to convert improper fractions to mixed numbers. 4th, 5th print full size skills reducing fractions, working with improper fractions,
working with mixed numbers common core standards:
Web here you will find a wide range of free printable fraction worksheets which will help your child understand and practice how to convert improper fractions to mixed numbers. Designed by teachers
for third to fifth grade, these activities provide plenty of support in learning to convert mixed numbers and improper fractions. Change these improper fractions into mixed numbers (a) (b) (c) (d)
(e) (f) (g) (h) (i) (j) (k) (l) (m) (n) (o) (p) (q) (r) (s) (t) (u) (v) (w) (x) (y)
Web improper fractions & mixed numbers videos 139 and 140 on www.corbettmaths.com question 1: Designed by teachers for third to fifth grade, these activities provide plenty of support in learning to
convert mixed numbers and improper fractions. Web mixed numbers & improper fractions in each problem below, an improper fraction is represented by blocks beneath a number line.
Change These Improper Fractions Into Mixed Numbers (A) (B) (C) (D) (E) (F) (G) (H) (I) (J) (K) (L) (M) (N) (O) (P) (Q) (R) (S) (T) (U) (V) (W) (X) (Y)
Web it's easy to get mixed up in math class. Improper fraction worksheets improper fractions are. Web improper fractions & mixed numbers videos 139 and 140 on www.corbettmaths.com question 1: Web
here you will find a wide range of free printable fraction worksheets which will help your child understand and practice how to convert improper fractions to mixed numbers.
Give Your Students Some Extra Practice With Our Mixed Numbers And Improper Fractions Worksheets!
Web to convert a mixed number into an improper fraction, multiply the whole number with the denominator of the fraction, add the numerator of the fraction to the product, and write it as the
numerator of the improper fraction. Designed by teachers for third to fifth grade, these activities provide plenty of support in learning to convert mixed numbers and improper fractions. Use the
number line to determine what the equivalent mixed number form would be. Web this math worksheet gives your child practice converting improper fractions to mixed numbers and vice versa.
Conceive Of Mixed Numbers As Equivalent To Fractions To Complete These Pdf Worksheets.
4th, 5th print full size skills reducing fractions, working with improper fractions, working with mixed numbers common core standards: Here, they're partitioned into equal parts to represent improper
fractions, though. The problems may be selected from easy, medium or hard level of difficulty. Web converting improper fractions and mixed numbers.
Grade 4 Number & Operations:
Web mixed numbers & improper fractions in each problem below, an improper fraction is represented by blocks beneath a number line. This fraction worksheet is great for working on converting improper
fractions and mixed numbers. These shapes aren't anything new for kids! It will produce 15 improper fractions problems and 15 mixed number problems. | {"url":"https://tunxis.commnet.edu/view/improper-fractions-to-mixed-numbers-worksheets.html","timestamp":"2024-11-05T17:22:14Z","content_type":"text/html","content_length":"37455","record_id":"<urn:uuid:c80b6d23-f08b-458d-90cd-741525f9e03b>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00351.warc.gz"} |
Spherical Harmonics
Written by Paul Bourke
February 1990
MSWindows interactive viewer: SHDEmo.exe.gz contributed by Wolfgang Wester
(Requires OpenGL and Win95/98/Me or Win NT/2000)
Contribution by Georg Duemlein: pythonSOP
The following closed objects are commonly called spherical harmonics although they are only remotely related to the mathematical definition found in the solution to certain wave functions, most
notable the eigenfunctions of angular momentum operators.
The formula is quite simple, the form used here is based upon spherical (polar) coordinates (radius, theta, phi).
r = sin(m0 phi)^m1 + cos(m2 phi)^m3 + sin(m4 theta)^m5 + cos(m6 theta)^m7
Where phi ranges from 0 to pi (lines of latitude), and theta ranges from 0 to 2 pi (lines of longitude), and r is the radius. The parameters m0, m1, m2, m3, m4, m5, m6, and m7 are all integers
greater than or equal to 0.
Implementation details
The images here were created using OpenGL. While the parameters m0, m1, m2, m3, m4, m5, m6, m7 can range from 0 upwards, as the degree increases the objects become increasingly "pointed" and a large
number of polygons are required to represent the surface faithfully. All the examples here have a maximum degree of 6. The maximum number of polygons used is 128 x 128 and most were only 64 x 64,
that is, the theta and phi angles are split into 64 equal steps each.
The C function that computes a point on the surface is
XYZ Eval(double theta,double phi, int *m)
double r = 0;
XYZ p;
r += pow(sin(m[0]*phi),(double)m[1]);
r += pow(cos(m[2]*phi),(double)m[3]);
r += pow(sin(m[4]*theta),(double)m[5]);
r += pow(cos(m[6]*theta),(double)m[7]);
p.x = r * sin(phi) * cos(theta);
p.y = r * cos(phi);
p.z = r * sin(phi) * sin(theta);
The OpenGL snippet that creates the geometry is
du = TWOPI / (double)resolution; /* Theta */
dv = PI / (double)resolution; /* Phi */
for (i=0;i<resolution;i++) {
u = i * du;
for (j=0;j<resolution;j++) {
v = j * dv;
q[0] = Eval(u,v);
n[0] = CalcNormal(q[0],
c[0] = GetColour(u,0.0,TWOPI,colourmap);
q[1] = Eval(u+du,v);
n[1] = CalcNormal(q[1],
c[1] = GetColour(u+du,0.0,TWOPI,colourmap);
q[2] = Eval(u+du,v+dv);
n[2] = CalcNormal(q[2],
c[2] = GetColour(u+du,0.0,TWOPI,colourmap);
q[3] = Eval(u,v+dv);
n[3] = CalcNormal(q[3],
c[3] = GetColour(u,0.0,TWOPI,colourmap);
Exercises for the reader
• While it is easy to see that the even parameters m0, m2, m4, m6 need to be integers for closed forms, what about the odd terms m1, m3, m5, m7. Can they be real numbers?
• The figures on this page all use the same colour map, it maps theta onto a colour map that smoothly progresses through the colours red -> yellow -> green -> cyan -> blue -> magenta -> red. Note
it is a circular colour map. There are many other ways of colouring these surfaces, for example a radial intensity variation might look nice.
• While the objects here all look the same size, there is generally quite a bit of variation in the size arising from applying the above formula directly. Find a normalising term (function of the
m0, m1, .... m7) that will make surfaces fit into a unit cube.
Very nice rendering technique by Georg Duemlein (Nov 2008).
For source code see: pythonSOP | {"url":"https://paulbourke.net/geometry/sphericalh/index.html","timestamp":"2024-11-09T23:56:21Z","content_type":"text/html","content_length":"6952","record_id":"<urn:uuid:d55c299c-2970-489a-b757-ff5025b219b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00286.warc.gz"} |
An Intro to Physics Science
Physics science is actually.
It is very important to learn all branches so that you can be much physicist. Learning just 1 branch of mathematics will give you a feel of the whole science. All these are a few of the branches of
Mathematics is the branch. It deals with the source of energy and this world, atoms, and different elements that form the universe. The notions of classical physics have been confirmed again and
again and have helped scientists to learn different branches of mathematics.
Entrainment idea is the branch of mathematics that is commonly taught in senior high school and college diploma courses. It copes with just how our own bodies have been suffering from external
forces. By studying the results of gravity it can be implemented to individual.
Optics is another division. It is utilised to analyze the properties of light and the way it impacts folks and objects .
Gravitational Wave Astronomy can be really a popular branch of math. It studies the presence of gravitational waves. It is just recently that scientists have detected these waves.
Microwave Astronomy studies the presence of microwaves and also the way in which objects are affected by them. It was only that experts discovered the presence of this waves. They are very intriguing
because you may find them shape things like galaxies to review.
Radiation Physics is also a division of math. It is normally utilised in hospitals and different regions which will need to find out how the human body affects. It’s a beneficial branch of physics
that can help health practitioners also to determine and also to take care of patients.
Calculus is actually a hard and tough science to learn. It is utilised to assist you calculate various elements of one’s body. It can likewise be utilised to help you calculate how far away an object
is and in which it is found.
Vitality Physics is just one of many divisions of physics to understand. It addresses the production of electricity and it could also be used to allow you to understand various portions of the human
body. It is harder to master although in a lot of ways it is comparable to physics.
Electromagnetism is your branch of physics that deals with the possessions of power. Items can will help to fully grasp how electricity might be produced by objects and likewise destroy it. It is
also utilised to help explain how power flows through wires and the way in that it moves them around. It may likewise be used to help you fully college paper grasp the ways in.
Quantum math.njit.edu Physics is the branch of physics that deals with atoms and just how particles act and the properties https://www.masterpapers.com/ of their possessions. It is by far the branch
of physics to master. It will take years to comprehend. Most people have to take to know this branch of mathematics.
You’ll find several branches of physics which have helped scientists understand the method by which the planet works. Once you have heard these branches of physics, then you’re able to subsequently
use them to know that the entire world.
Leave a Reply Cancel reply
Author: Site Default
Related posts | {"url":"https://www.akstar.com.tr/2020/06/10/an-intro-to-physics-science/","timestamp":"2024-11-11T20:06:15Z","content_type":"text/html","content_length":"73447","record_id":"<urn:uuid:d3798019-1626-4add-a7ec-cea7b2561ac8>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00287.warc.gz"} |
Quantitative Aptitude Tips to Score Better in CAT/CMAT - Endeavor Magic
Quantitative Aptitude (QA) is a critical section in both the Common Admission Test (CAT) and the Common Management Admission Test (CMAT). It tests your mathematical skills, analytical ability, and
problem-solving proficiency. Scoring well in this section can significantly boost your overall score and enhance your chances of securing admission to a prestigious business school. We are here with
some comprehensive tips and strategies that will help you excel in the Quantitative Aptitude section of CAT and CMAT.
Understanding the Syllabus
Before diving into preparation strategies, it is essential to understand the syllabus for the Quantitative Aptitude section.
CAT Syllabus
1. Arithmetic: Percentages, Profit and Loss, Time and Work, Time, Speed and Distance, Ratios and Proportions, Averages, Simple and Compound Interest, Mixtures and Alligations.
2. Algebra: Linear and Quadratic Equations, Inequalities, Functions, Logarithms, Sequences and Series.
3. Geometry and Mensuration: Lines and Angles, Triangles, Circles, Polygons, Coordinate Geometry, Trigonometry, Mensuration (2D and 3D shapes).
4. Number System: Divisibility, HCF and LCM, Integers, Fractions, Decimals, Factorial, Base System.
5. Modern Math: Set Theory, Permutations and Combinations, Probability, Binomial Theorem.
CMAT Syllabus
1. Arithmetic: Similar to CAT, including additional topics like Ages, Clocks, and Calendars.
2. Algebra: Equations, Inequalities, Sequences and Series, Functions.
3. Geometry and Mensuration: Similar to CAT.
4. Number System: Similar to CAT.
5. Data Interpretation: Bar Graphs, Pie Charts, Line Graphs, Tables.
Preparation Strategies
1. Build a Strong Foundation
Concept Clarity: Start with the basics. Ensure you understand fundamental concepts in arithmetic, algebra, geometry, and the number system. Use standard textbooks like NCERT to clear your basics.
Practice Regularly: Quantitative Aptitude requires consistent practice. Solve a variety of problems to become comfortable with different question types.
Learn Shortcuts: While understanding concepts is crucial, learning shortcuts and tricks can save time during the exam. However, ensure you comprehend the logic behind these shortcuts.
2. Time Management
Set Timed Goals: Practice solving questions within a set time limit. This will help you manage your time effectively during the exam.
Mock Tests: Regularly take mock tests to simulate the exam environment. Analyze your performance to identify time-consuming areas and work on them.
Sectional Time Allocation: During the exam, allocate time to different sections wisely. Don’t spend too much time on any single question.
3. Analyze and Improve
Identify Weak Areas: Regularly analyze your performance in practice tests to identify weak areas. Focus on improving these topics.
Error Analysis: Review mistakes thoroughly. Understand why you made errors and how to avoid them in the future.
Adaptive Learning: Adapt your study plan based on your performance. Spend more time on challenging topics and less on areas where you are already strong.
4. Use Quality Study Material
Books: Use reputed books.
Online Resources: Utilize online additional learning resources and practice questions.
Coaching Institutes: If self-study is not yielding the desired results, consider enrolling in online coaching for CAT.
Topic-wise Tips and Strategies
1. Arithmetic
Percentages and Ratios: Understand the relationship between percentages, fractions, and decimals. Practice problems involving profit and loss, discounts, and markups.
Time and Work: Learn formulas and shortcuts for solving problems related to work and time. Practice problems involving multiple people working together and work equivalency.
Time, Speed, and Distance: Focus on understanding relative speed, average speed, and problems involving trains, boats, and streams. Practice solving problems using unitary methods.
2. Algebra
Equations: Master solving linear and quadratic equations. Understand the graphical representation of equations.
Inequalities: Practice solving inequalities and representing them on a number line. Understand the concepts of absolute values and logarithms.
Sequences and Series: Familiarize yourself with arithmetic and geometric progressions. Practice problems on sums of series and nth-term calculations.
3. Geometry and Mensuration
Lines and Angles: Understand basic geometric properties of lines, angles, and triangles. Practice problems involving parallel lines, transversals, and angle bisectors.
Triangles: Focus on properties of different types of triangles, including Pythagorean theorem, congruence, and similarity.
Circles and Polygons: Study properties of circles, including tangents, secants, and chords. Understand the properties of regular polygons.
Mensuration: Practice problems involving the area, perimeter, and volume of 2D and 3D shapes. Understand the formulas for different geometric shapes.
4. Number System
Divisibility Rules: Learn and practice the rules of divisibility for different numbers.
HCF and LCM: Understand the concepts of the highest common factor and least common multiple. Practice problems involving these concepts.
Base System: Familiarize yourself with different number bases (binary, octal, decimal, hexadecimal). Practice conversion problems between these bases.
Factorials: Understand the concept of factorial and practice problems involving factorial calculations and properties.
5. Modern Math
Set Theory: Understand the basic concepts of sets, subsets, unions, intersections, and complements. Practice problems involving Venn diagrams.
Permutations and Combinations: Learn the fundamental principles of counting, including permutations and combinations. Practice problems involving different scenarios of selection and arrangement.
Probability: Understand basic probability principles, including independent and dependent events. Practice problems involving probability calculations and conditional probability.
Exam-Day Strategies
1. Read Instructions Carefully
Understand the Pattern: Familiarize yourself with the exam pattern and question distribution. This helps in better time allocation.
Sectional Instructions: Read the instructions for each section carefully before starting. Understand the marking scheme and any negative marking.
2. Smart Question Selection
Start with Easy Questions: Begin with questions you find easy to build confidence and save time for tougher questions later.
Mark and Move: If you encounter a difficult question, mark it for review and move on. Return to it after answering the easier questions.
Avoid Guessing: Avoid random guessing in questions with negative markings. If you can eliminate one or two options, an educated guess might be worth the risk.
3. Stay Calm and Focused
Stay Positive: Maintain a positive attitude throughout the exam. Don’t let difficult questions affect your confidence.
Time Checks: Keep an eye on the time but don’t panic if you’re running behind. Adjust your pace accordingly.
Stay Focused: Stay focused on the question at hand. Avoid distractions and manage any anxiety with deep breathing techniques.
Post-Exam Analysis
1. Review Performance
Analyze Results: After the exam, review your performance. Identify areas where you did well and areas that need improvement.
Understand Mistakes: Analyze your mistakes to understand what went wrong. This helps in preparing better for future exams.
2. Plan Next Steps
Focus on Weak Areas: Continue practicing weak areas identified in your analysis. This ensures improvement over time.
Mock Test Analysis: Use insights from mock test performance to fine-tune your preparation strategy.
Stay Updated: Keep yourself updated with any changes in exam patterns or syllabus.
Excelling in the Quantitative Aptitude section of CAT and CMAT requires a blend of strong conceptual understanding, regular practice, effective time management, and smart exam strategies. By building
a solid foundation, practicing consistently, analyzing performance, and staying motivated, you can significantly enhance your chances of scoring well in these competitive exams. Remember, persistence
and dedication are key to success. Keep pushing your limits, and you will achieve your goals. | {"url":"https://endeavormagic.com/2024/06/quantitative-aptitude-tips-to-score-better-in-cat-cmat/","timestamp":"2024-11-03T21:22:57Z","content_type":"text/html","content_length":"120942","record_id":"<urn:uuid:b64d76e7-5d7f-455f-a86f-5a72f05c401c>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00292.warc.gz"} |
» CHINO HILLS ALGEBRA I
Algebra Tutors & Teachers for Lessons, Instruction or Help in Chino Hills, CA
Algebra Tutoring in Chino Hills
Algebra is the foundation course for many advanced courses in Chino Hills Schools. Many students start with Pre-Algebra then move towards Algebra I and then finally take Algebra II. Students taking
Geometry, PreCalculus and Calculus also find the courses easy if they have solid grip on Algebra.
Algebra I Topics in Chino Hills School:
Review of Pre-Algebra topics.
Algebraic equations and solving for unknown variables in linear models.
Graphing system of linear equations and exponential functions.
Linear equations of form y = mx +b. Learn how to create and solve algebraic lines and what it means in term of actual graphical line.
Substitution method in solving linear equations
Elimination or addition method in solving linear equations
Graphing method in solving linear equations.
Factoring algebraic expressions.
Factoring Binomial and Trinomial.
Quadratic Equations: Solve algebraic quadratic equations numerically and graphically.
Simplifying algebraic expressions in rational expressions and equations.
Simply algebraic expression with roots and radicals.
Chino Hills School teaches Algebra I in Grade 9 but advanced learners can take Algebra I in Chino Hills Schools in Grade 8.
Why study Algebra I in Chino Hills Schools:
Algebra I is the pre-requisite for many mathematics courses in Chino Hills high schools.
Aptitude tests such as CAHSEE, ACT, SAT and many courses in mathematics, science and college need the knowledge of Algebra I.
Application of Algebra I:
1) Learn how to compute bills such as telephone bills, electricity bills and gas bills. Then analyze company that is a better provider in terms of cost saving.
2) Calculate time needed to reach any destination given speed and distance of drive.
3) Many jobs require knowledge of these Algebra I concepts.
What is next after Algebra I:
You are ready for more Mathematics then it is time to learn Algebra II and Geometry.
About Algebra I:
Chino Hills schools teach Algebra I in Grade 9.
Algebra II in Chino Hills high schools are taught in Grade 10.
Advanced learner:
It is very possible to complete Algebra I in Chino Hills Schools in Grade 8 so you are with the bright group of students who are taking Algebra II in 9^th Grade.
Mathematics and Algebra I knowledge needed in Professional Education and standardized tests:
Nursing Schools, Army and Navy tests (ASVAB), CAHSEE, ACT, SAT
Adult Education and College:
Many colleges require you to pass placement tests that have mathematics questions based on Algebra I. If you are not proficient in Algebra I then you are required to take foundation mathematics
course in Algebra before you can take any advanced courses in community colleges and universities.
Chino Hills Tutoring Subjects: | {"url":"https://usbesttutors.com/chinohills-tutoring/algebra-tutor-i/","timestamp":"2024-11-03T05:45:46Z","content_type":"text/html","content_length":"33823","record_id":"<urn:uuid:761b09c6-cbb9-461e-92cd-3d554d5f8ab7>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00202.warc.gz"} |
Langlands program
A general principle of functoriality
Langlands then formulated a much more general "Functoriality Principle", which relates automorphic representations of different groups (not just the general linear group) over the adele ring of Q, in
a way which is compatible with their L-functions.
All these conjectures can be formulated for more general fields in place of Q: algebraic number fields (the original and most important case), local fields and function fields (finite extensions of F
[p](t) where p is a prime and F[p](t) is the field of rational functions over the finite field with p elements).
Parts of the program for local fields were completed in 1998 and for function fields in 1999. Laurent Lafforgue received the Fields Medal in 2002 for his work on the function field case. This work
continued earlier investigations by Vladimir Drinfeld, which were honored with the Fields Medal in 1990. Only special cases of the number field case have been proven, some by Langlands himself.
Langlands received the Wolf Prize in 1996 for his work on these conjectures.
• Stephen Gelbart: An Elementary Introduction to the Langlands Program, Bulletin of the AMS v.10 no. 2 April 1984. | {"url":"http://www.fact-index.com/l/la/langlands_program.html","timestamp":"2024-11-02T02:34:53Z","content_type":"text/html","content_length":"10176","record_id":"<urn:uuid:2137609d-d9be-4717-8b51-669617fd8aec>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00042.warc.gz"} |
Old Projects
Analog Decodig - Padova Group
Introduction and aim of the research
The pioneering work of Loeliger's and Hagenauer's groups led to the first successful implementations of analog iterative decoders in BiCMOS technology, and demonstrated the potential of this approach
over the digital approach in terms of maximum attainable speed and power efficiency. Since then, the research efforts aimed at moving from the proof-of-concept design toward real-world applications
have multiplied. The first step was to move from a bipolar to a more attractive full CMOS implementation, though still for a very simple Hamming (8,4) code. Shortly after, Gaudet et al. realized the
first analog turbo decoder, which is a significant step ahead with respect to a decoder for a single convolutional code, but still far from a real application due to the limited interleaver size (16
Within this context we have designed, implemented, and successfully tested the first reported analog turbo decoder for a realistic application, a parallel concatenated, rate 1/3, code defined in the
3GPP standard with interleaver size of 40 bits and a codeword size of 132 bits.
The prototypes realized so far have shown an outstanding improvement in the power efficiency with respect to their digital counterparts, with a varying (from one implementation to the other) yet
limited error-correcting performance loss. A further research effort is necessary to demonstrate that analog decoders for very high performance codes are feasible and maintain their superiority over
the digital implementation. Turbo codes and low-density parity-check codes can get very close to the unconstrained Shannon limit for error correction provided that the data block length is large
enough (in the range of thousands of bits). Modern data communications standards require block length and code rate programmability. The main research issue is then to demonstrate that it possible to
design and realize power-efficient analog decoders with large and reconfigurable block length, programmable code rate and data throughput of hundreds of Mb/s (in CMOS technology) or several Gb/s (in
BiCMOS technology).
Analog Turbo Decoding for UMTS Channel
We have prototyped an analog decoder for the 40-bit block length, rate 1/3, Turbo Code defined in the UMTS standard. This is a significant step ahead in the evolution of analog decoders from simple
proof-of-concept prototypes towards real world applications. The prototype is fully integrated in a three-metal, double-poly, 0.35 µm CMOS technology, and includes an I/O interface that maximizes the
decoder throughput. After the successful implementation of proof-of-concept analog iterative decoders by different research groups both in bipolar and CMOS technologies, this is the first reported
prototype of analog decoder for a realistic error-correcting code. The decoder was successfully tested at the maximum data rate defined in the standard (2 Mbit/s), with an overall power consumption
of 10.3 mW at 3.3 V, going down to 7.6 mW with the decoder core operated at 2 V, and an extremely low energy per decoded bit and trellis state (0.85 nJ for the decoder core alone).
Integrated Circuits for Biomedical Applications
A fully-integrated CMOS cardiac pacemaker
Implantable biomedical devices can benefit from design solutions in submicron CMOS technologies, since the high level of integration can significantly help in reducing the implanted device size,
especially if design techniques that avoid off-chip components are used. Furthermore low-power techniques for the design of CMOS circuits can be profitably exploited in order to reduce the system
power consumption, which is one of the main goals for biomedical devices. It is also worth to consider that often these systems lay in the category of low-voltage applications, because they are
battery operated and are supposed to work correctly even when the battery discharges from its initial value to the end-of-life (EOL) voltage.
This activity is based on this approach, and led to the integration in a pure CMOS technology of the analog blocks of a cardiac pacemaker. The system includes a dual chamber sensing stage that allows
to amplify, filter and digitize both the atrial and the ventricular spontaneous cardiac activity, properly sensed by the pacemaker catheters. In addition the system includes a cuople of voltage
multipliers in order to generate the volatge pulse to stimulate the heart.
The realized circuit makes extensive use of current-mode translinear circuits, with particular emphasis on Log-domain circuits. The CMOS implementations of these circuits have indeed demonstrated a
good power efficiency and are well suited for low-voltage environments. A power-optimized Sigma-Delta A/D converter has been realized for signal digitization, because the chance to use a digital
version of the acquired signal allows more advanced pacing strategies. As a result of the used design approach, the realized system can operate with a supply voltage from 2.8V down to 1.8V, which is
about 200mV below the typical EOL voltage of a lithium-iodine battery, commonly used in pacemakers. More importantly the total current consumption of the whole chip is fully compatible with the
available power budget in cardiac pacemakers. Therefore the realized system gives a realistic contribution to full integration of the pacemaker on a single CMOS chip. This work has also demonstrated
how established low-power and low-voltage design techniques for CMOS circuits can be profitably exploited in order to improve performance in a biomedical system like the cardiac pacemaker.
To further details on this activity please contact us or refer to: | {"url":"http://icarus.dei.unipd.it/?q=node/139","timestamp":"2024-11-06T02:51:23Z","content_type":"application/xhtml+xml","content_length":"20037","record_id":"<urn:uuid:3f126a2b-b15f-4e0b-a260-38a782c7f4c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00505.warc.gz"} |
Help with relations: many-to-many + one-to-one - Drizzle Team
Help with relations: many-to-many + one-to-one
I'm having trouble figuring out how to model the following relationships between papers, authors, and author's names with Drizzle relations. The schema looks like this (I've ommitted some columns for
brevity) Conceptually, the relations are as follows: - each row in authors can be related to many rows in papers, and each row in papers can be related to many rows authors (many to many) - each row
in authors_to_papers is be associated with one row of authors Here are the relations I have set up: I know that I can do the following to return a paper with it's author(s): But how do I get from the
authorIds returned in the output of the above to the firstName and lastName associated with those authorIds? | {"url":"https://www.answeroverflow.com/m/1301217658963886150","timestamp":"2024-11-11T16:18:33Z","content_type":"text/html","content_length":"190947","record_id":"<urn:uuid:8bbefd64-f33b-439f-b3ce-e849ef9c80c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00069.warc.gz"} |
Diamond Carat Weight Calculator
This jewelers suite of calculators includes carat weight estimations of diamonds based on the cut diamonds shape and size that can be measured while still within a setting. This enables the jeweler
to estimate the carat weigh of the diamond without damaging the setting.
The diamond weight formulas contain carat weight equations that are specific to diamonds and diamond cuts:
For the carat weight estimation of other gems (e.g. ruby, sapphire), CLICK HERE.
NOTE: Never use an estimating equation for the weight of a gem if you can weigh it on a quality jeweler's scale. These equations are useful when the jeweler is trying to preserve the setting of the
gem while still providing a carat weight estimate. In this way, the jeweler can ascertain the type of gem, its shape and then using precise measuring tools measure the salient dimensions for use in
these equations. These steps can be done while the gem remains in the setting.
The vCalc Jeweler's calculator is free to use like all the other equations and calculators found in vCalc. Please feel free to comment on this wiki page using the comment button below and help us
make a better Jeweler calculator for you.
This calculator provides a calculator to Jewelers and other merchants of diamonds to compute an estimated current market value for jewelry items. Likewise private owners and insurance providers can
make similar estimates for use in establishing the basis for insurance valuation and coverage.
The carat weight equations and data used in vCalc's jewelry library and calculator were reviewed by a certified gemologist. The equations are based on industry recognized formulas and data. The table
below shows a comparison of computations between vCalc and an industry accepted application (Quantum Leap).
The length, width and depth are in millimeters (mm), and the Quantum Leap and vCalc measurements are in carats (cwt).
The largest variance, an oval faceted alexandrite, which can be seen in the last row above, has been double checked against several source equations which tend to support vCalc's accuracy. | {"url":"https://www.vcalc.com/wiki/diamond-carat-weight-calculator","timestamp":"2024-11-01T23:58:21Z","content_type":"text/html","content_length":"60914","record_id":"<urn:uuid:f614d874-a39c-47f6-acec-f2f960ee1dd5>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00021.warc.gz"} |
Financial Maths, Part 2 - David The Maths TutorFinancial Maths, Part 2
Financial Maths, Part 2
So I am talking about simple interest. In my last post, I explained how to calculate the interest after the money has been invested for one period – one year in our example. It turns out that by
investing $1000 at a simple interest rate of 3%, you earn $30 after one year. This means you have a total of $1000 + $30 = $1030 after one year. What if you want to know what you have after 5 years?
There are two ways of doing this: sequentially or directly. The sequential method has the advantage of showing how your money is growing each period, and the formula is very useful for entering in
spreadsheet applications like MS Excel. Let’s first discuss this sequential method.
So after the first year, you have $1030. Each year, the interest rate of 3% is applied to the initial $1000 investment, and you get an additional $30. Below is a table of how the investment grows
each year. I will explain the headings and the calculations afterwards:
n P r I A[n]
1 1000 3% 30 1030
2 1000 3% 30 1060
3 1000 3% 30 1090
4 1000 3% 30 1120
5 1000 3% 30 1150
So according to this table, you will have $1150 after 5 years. So what are the column headings?
The first 4 were defined in my last post, but I’ll repeat them here. n is the period number. It starts at 0 since this indicates when time starts. You only get interest after the money has been
invested for 1 period (a year in this case). P is the principal which in this scenario, is the amount originally invested. r is the interest rate. I is the amount of interest earned. From my last
post, this is calculated as I = Pr or I = Pr/100, depending if you use the decimal equivalent of r or not (see my last post). A[n] is the total amount you have after n periods. Using a subscript like
this is very common in maths. A[0] is the initial amount after 0 periods. A[1] is the amount after 1 period. A[5] is the amount after 5 periods.
I know the table is a bit repeating with the $1000 and the $30 repeated throughout the table, but I did this so you can see the difference between simple interest and the eventual compound interest
that I will talk about later.
Notice that the difference between A[n] for each adjacent period is $30, that is, $30 is added to the previous A[n] to get the next period total amount, A[n+1]. So the sequential formula to calculate
the next period’s amount is:
A[n+1] = A[n] + 30
This is called a recursion formula as you recursively calculate the next period’s total amount by knowing the previous period’s amount. So starting with A[0]:
A[1] = A[0] + 30 = $1000 + $30 = $1030
A[2] = A[1] + 30 = $1030 + $30 = $1060
and so on until
A[5] = A[4] + 30 = $1120 + $30 = $1150
Now let’s generalise this formula for any interest rate. The $30 in the above example is the interest I from the formula I = Pr or I = Pr/100. So the general recurring formula for the total amount of
interest in a simple interest investment is:
A[n+1] = A[n] + Pr (decimal equivalent r) or A[n+1] = A[n] + Pr/100
That’s all well and good for a spreadsheet formula, but what if you only want to know how much money will you have after 10 years? Do you need to apply this formula 10 times to get the answer? The
answer is “no” because we can get a formula that directly calculates an answer.
If you are adding the same amount each year, after 10 years, the total amount added is 10 × the amount after 10 years. So in our example, after 10 years, the total amount is $1000 + 10×30 = 1000 +
300 = $1300. Notice that I will get the same result as in the table above after 5 years: 1000 + 30×5 = 1000 + 150 = $1150. So you just need to multiply the same amount of interest each year by the
number of periods desired. In general,
A[n] = P + Prn (decimal equivalent r) or A[n] = P + Prn/100
Next time I will introduce compound interest. But to prepare you for this a bit, notice that I can factor out a P from the above formula to get an equivalent one ( please see my posts on the
Distributed Property if you need a review):
A[n] = P(1 + rn) (decimal equivalent r) or A[n] = P(1 +rn/100)
Knowing how to use this form of the equation will help you understand the compound interest formulas. | {"url":"https://davidthemathstutor.com.au/2019/08/31/financial-maths-part-2/","timestamp":"2024-11-03T13:39:39Z","content_type":"text/html","content_length":"48891","record_id":"<urn:uuid:e0e89358-1bbf-4297-b8f9-ba475e71e921>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00758.warc.gz"} |
Buckling length of lattice girder bars – ConsteelConsteel
Knowledge base Buckling length of lattice girder bars January 03, 2024 3 min read
Buckling length of lattice girder bars
Designing a lattice girder
The design of the bars of a truss (lattice girder) structure does not require any special theoretical knowledge: normally, the truss bars are designed as compressed and/or tensioned bars, neglecting
bending moments and shear forces. The dimensioning of compression bars is nowadays carried out using a model-based computer procedure. For details, see the knowledge base material Design of columns
against buckling. Here, only the determination of the deflection length of the compressed bars is presented.
The most important parameter for the dimensioning of a compressed bar is the slenderness:
where the buckling length factor k is recommended by EN1993-1-1 to facilitate manual calculations:
Type of the bar Direction of buckling k
chord - in-plane 0.9
- out-of-plane 0.9
bracing - in-plane 0.9
- out-of-plane 1.0
Software using model-based computational methods (e.g. Consteel software) determines the elastic critical force N[cr] directly by finite element numerical methods, taking into account the behaviour
of the entire lattice girder, instead of the above conservative formula. The following example is intended to illustrate the relationship between the manual design procedure proposed by the standard
and the results of the modern model-based numerical procedure.
• Let the structural model of the lattice girder under consideration be the Consteel model shown in Figure 1.
• Let the load shown correspond to the design load combination of the girder.
• Determine the deflection length of the most stressed compressed chord member using finite element numerical stability analysis.
Fig. 1 Structural model and design load combination of the examined lattice girder
(Consteel software)
Relationship between procedures
The steps of the calculation are:
Buckling stability analysis
The stability analysis of the elastic model shows the governing buckling mode of the lattice structure and the corresponding elastic critical load factor α[cr] (Figure 2).
Fig. 2 Buckling mode and critical load factor given by numerical analysis
We can see that the upper chord of the perfectly elastic model is deflected laterally under load. The load that causes this elastic buckling is the critical load, whose value is given by the product
of the design load and the critical load factor α[cr]=5.99.
Log in to view this content
Online service access and support options are based on subscription plans. Log in to view this content or contact our sales department to upgrade your subscription. If you haven’t tried Consteel yet,
try for free and get Pro access to our learning materials for 30 days! | {"url":"https://consteelsoftware.com/knowledgebase/buckling-length-of-lattice-girder-bars/","timestamp":"2024-11-07T00:58:27Z","content_type":"text/html","content_length":"56198","record_id":"<urn:uuid:7f341d29-5ef0-452e-841b-834309838bfc>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00461.warc.gz"} |
3.3: Geometric Distribution (Special Topic)
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
How long should we expect to flip a coin until it turns up heads? Or how many times should we expect to roll a die until we get a 1? These questions can be answered using the geometric distribution.
We first formalize each trial - such as a single coin flip or die toss - using the Bernoulli distribution, and then we combine these with our tools from probability (Chapter 2) to construct the
geometric distribution.
Bernoulli Distribution
Stanley Milgram began a series of experiments in 1963 to estimate what proportion of people would willingly obey an authority and give severe shocks to a stranger. Milgram found that about 65% of
people would obey the authority and give such shocks. Over the years, additional research suggested this number is approximately consistent across communities and time. (Find further information on
Milgram's experiment at www.cnr.berkeley.edu/ucce50/ag-labor/7article/article35.htm.)
Each person in Milgram's experiment can be thought of as a trial. We label a person a success if she refuses to administer the worst shock. A person is labeled a failure if she administers the worst
shock. Because only 35% of individuals refused to administer the most severe shock, we denote the probability of a success with p = 0:35. The probability of a failure is sometimes denoted with q = 1
- p.
Thus, success or failure is recorded for each person in the study. When an individual trial only has two possible outcomes, it is called a Bernoulli random variable.
Bernoulli random variable (descriptive)
A Bernoulli random variable has exactly two possible outcomes. We typically label one of these outcomes a "success" and the other outcome a "failure". We may also denote a success by 1 and a failure
by 0.
TIP: "success" need not be something positive
We chose to label a person who refuses to administer the worst shock a "success" and all others as "failures". However, we could just as easily have reversed these labels. The mathematical framework
we will build does not depend on which outcome is labeled a success and which a failure, as long as we are consistent.
Bernoulli random variables are often denoted as 1 for a success and 0 for a failure. In addition to being convenient in entering data, it is also mathematically handy. Suppose we observe ten trials:
\[0 1 1 1 1 0 1 1 0 0\]
Then the sample proportion, \( \hat {p}\), is the sample mean of these observations:
\[ \hat {p} = \dfrac {\text {# of successes}}{\text {# of trials}} = \dfrac {0 + 1 + 1 + 1 + 1 + 0 + 1 + 1 + 0 + 0}{10} = 0.6\]
This mathematical inquiry of Bernoulli random variables can be extended even further. Because 0 and 1 are numerical outcomes, we can define the mean and standard deviation of a Bernoulli random
p is the true probability of a success, then the mean of a Bernoulli random variable X is given by
\[\mu = E[X] = P(X = 0) 0 + P(X = 1) 1\]
\[= (1 - p) X 0 + p X 1 = 0 + p = p\]
Similarly, the variance of \(X\) can be computed:
\[ \sigma^2 = P(X = 0)(0 - p)^2 + P(X = 1)(1 - p)^2\]
\[= (1 - p)p^2 + p(1- p)^2 = p(1- p)\]
The standard deviation is \(\sigma = \sqrt {p(1 - p)}\)
Bernoulli random variable (mathematical)
If X is a random variable that takes value 1 with probability of success p and 0 with probability 1 - p, then X is a Bernoulli random variable with mean and standard deviation
\[ \mu = p \sigma = \sqrt {p(1-p)}\]
In general, it is useful to think about a Bernoulli random variable as a random process with only two outcomes: a success or failure. Then we build our mathematical framework using the numerical
labels 1 and 0 for successes and failures, respectively.
Geometric Distribution
Example \(\PageIndex{1}\) illustrates what is called the geometric distribution, which describes the waiting time until a success for independent and identically distributed (iid) Bernoulli random
variables. In this case, the independence aspect just means the individuals in the example don't affect each other, and identical means they each have the same probability of success.
Example \(\PageIndex{1}\)
Dr. Smith wants to repeat Milgram's experiments, but she only wants to sample people until she finds someone who will not inflict the worst shock. (This is hypothetical since, in reality, this sort
of study probably would not be permitted any longer under current ethical standards). If the probability a person will not give the most severe shock is still 0.35 and the subjects are independent,
what are the chances that she will stop the study after the first person? The second person? The third? What about if it takes her n - 1 individuals who will administer the worst shock before finding
her first success, i.e. the first success is on the n^th person? (If the first success is the fifth person, then we say n = 5.)
The probability of stopping after the first person is just the chance the first person will not administer the worst shock: 1 - 0:65 = 0:35. The probability it will be the second person is
\[ P(\text{second person is the first to not administer the worst shock})\]
\[= P(\text{the first will, the second won't}) = (0:65)(0:35) = 0:228\]
Likewise, the probability it will be the third person is (0:65)(0:65)(0:35) = 0:148.
If the first success is on the nth person, then there are \(n - 1\) failures and finally 1 success, which corresponds to the probability \((0.65)^{n-1}(0:35)\). This is the same as \((1 - 0:35)^{n-1}
Figure 3.16 The geometric distribution when the probability of success is p = 0:35.
The geometric distribution from Example \(\PageIndex{1}\) is shown in Figure 3.16. In general, the probabilities for a geometric distribution decrease exponentially fast. While this text will not
derive the formulas for the mean (expected) number of trials needed to find the first success or the standard deviation or variance of this distribution, we present general formulas for each.
Geometric Distribution
If the probability of a success in one trial is \(p\) and the probability of a failure is \(1 - p\), then the probability of finding the first success in the n^th trial is given by
\[(1 - p)_{n-1}p \label{3.30}\]
The mean (i.e. expected value), variance, and standard deviation of this wait time are given by
\[ \mu = \dfrac {1}{7} {\sigma}^2 = \dfrac {1 - p}{p^2} \sigma = \sqrt {1 - p}{p^2} \label {3.31}\]
It is no accident that we use the symbol \(\mu\) for both the mean and expected value. The mean and the expected value are one and the same.
The left side of Equation \ref{3.31} says that, on average, it takes \(\dfrac {1}{p}\) trials to get a success. This mathematical result is consistent with what we would expect intuitively. If the
probability of a success is high (e.g. 0.8), then we don't usually wait very long for a success: \(\dfrac {1}{0.8} = 1.25\) trials on average. If the probability of a success is low (e.g. 0.1), then
we would expect to view many trials before we see a success: \( \dfrac {1}{0.1} = 10\) trials.
Exercise \(\PageIndex{1}\)
The probability that an individual would refuse to administer the worst shock is said to be about 0.35. If we were to examine individuals until we found one that did not administer the shock, how
many people should we expect to check? The first expression in Equation \ref{3.31} may be useful.
We would expect to see about 1=0:35 = 2:86 individuals to find the first success.
Example \(\PageIndex{2}\)
What is the chance that Dr. Smith will find the first success within the first 4 people?
This is the chance it is the first (n = 1), second (n = 2), third (n = 3), or fourth (n = 4) person as the first success, which are four disjoint outcomes. Because the individuals in the sample are
randomly sampled from a large population, they are independent. We compute the probability of each case and add the separate results:
\[P(n = 1, 2, 3, or 4)\]
\[= P(n = 1) + P(n = 2) + P(n = 3) + P(n = 4)\]
\[= (0:65)^{1-1}(0:35) + (0:65)^{2-1}(0:35) + (0:65)^{3-1}(0:35) + (0:65)^{4-1}(0:35)\]
\[= 0:82\]
There is an 82% chance that she will end the study within 4 people.
Exercise \(\PageIndex{2}\)
Determine a more clever way to solve Example 3.33. Show that you get the same result.
First find the probability of the complement: P(no success in first 4 trials) = \(0.65^4\) = 0.18. Next, compute one minus this probability: 1 - P(no success in 4 trials) = 1 - 0.18 = 0.82.
Example \(\PageIndex{3}\)
Suppose in one region it was found that the proportion of people who would administer the worst shock was "only" 55%. If people were randomly selected from this region, what is the expected number of
people who must be checked before one was found that would be deemed a success? What is the standard deviation of this waiting time?
A success is when someone will not inflict the worst shock, which has probability p = 1 - 0.55 = 0.45 for this region. The expected number of people to be checked is \(\dfrac {1}{p} = \dfrac {1}
{0.45} = 2.22\) and the standard deviation is \( \sqrt {\dfrac {(1 - p)}{p^2}} = 1.65\).
Exercise \(\PageIndex{3}\)
Using the results from Example 3.35, \( \mu \) = 2.22 and \( \mu \) = 1.65, would it be appropriate to use the normal model to find what proportion of experiments would end in 3 or fewer trials?
No. The geometric distribution is always right skewed and can never be well-approximated by the normal model.
The independence assumption is crucial to the geometric distribution's accurate description of a scenario. Mathematically, we can see that to construct the probability of the success on the nth
trial, we had to use the Multiplication Rule for Independent Processes. It is no simple task to generalize the geometric model for dependent trials.
Contributors and Attributions
• David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University) | {"url":"https://stats.libretexts.org/Bookshelves/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./03%3A_Distributions_of_Random_Variables/3.03%3A_Geometric_Distribution_(Special_Topic)","timestamp":"2024-11-08T06:23:10Z","content_type":"text/html","content_length":"139672","record_id":"<urn:uuid:1665f17d-35bc-4750-bbe9-ae41f355917e>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00214.warc.gz"} |
Mathematical literacy ability of 9th grade students according to learning styles in Problem Based Learning-Realistic approach with Edmodo
Wardono, - and SCOLASTIKA MARIANI, FMIPA Matematika and Rista Tri Rahayuningsih, - and Endang Retno Winarti, - (2018) Mathematical literacy ability of 9th grade students according to learning styles
in Problem Based Learning-Realistic approach with Edmodo. Unnes Journal of Mathematics Education, 7 (1). pp. 48-56. ISSN 2252-6927
PDF - Published Version
Download (956kB)
PDF - Published Version
Download (851kB)
This study aims to determine the difference and increase of the mathematical literacy ability using PBL-PRS-E, PBL-PS and scientific approach, and to find out difference of the mathematical literacy
ability between learning styles. This study belongs to quantitative research. The population in this study are 9th grade students SMP Negeri 1 Majenang, Cilacap academic year 2016/2017. This study
uses a quasi-experimental design with pretest-posttest control group design. Then, methods of the study are test, questionnaire, and documentation. Data analysis was performed by one way anova, two
way anova, and increase in the gain normalized. The results of the study are (1) the mathematical literacy ability of students in the experimental group 1 is better than the mathematical literacy
ability of students in the experimental group 2 and control group, (2) there is no difference in the mathematical literacy ability between learning styles, (3) there is no interaction between the
mathematical literacy ability based learning models and student's learning styles, and (4) ithe increase of students’ mathematical literacy ability in the experimental group 1 is better than in the
control group but less than the increase of stuednts’ mathematical literacy ability in the experimental group 2. Eventually, this study suggests that 9 grade mathematics teacher in SMPN 1 Majenang
can use PBL-PRS-E model to improve the learning result and mathematical literacy ability of students.
Actions (login required) | {"url":"http://lib.unnes.ac.id/49539/","timestamp":"2024-11-12T03:05:31Z","content_type":"application/xhtml+xml","content_length":"31700","record_id":"<urn:uuid:c8f47b86-3324-436f-95ae-ef1e6301c78b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00026.warc.gz"} |
We’ve got one last Language Fun game for you to try!
Answer a couple of quick personality questions and we’ll tell you if you’ve got what it takes to be a scientist yourself. Are you a Super Scientist, or maybe just a Sort-of-Scientist?
Take our quiz and find out: click here to play!
How did it go? What ranking did you get? Tell us in the comments!
**Although we moderate every comment before it gets posted, please remember to be kind to others and mindful of your personal information before you post here!**
Week 10: Final Results
Welcome to our final post for the summer!
This week is going to be a bit different from the rest as we wrap things up, and instead of a written post, we’ve got a video for you that gives a few updates and summarizes what we accomplished
You can watch that video above! Then, come back here to read the rest of what we’ve got.
We have two final tasks for you, if you want to help out!
First, we’d like to hear your thoughts about our project. Click here to take a quick survey where you can tell us what you liked, what you didn’t like, and how we can make this the best experience
possible next time.
Second, we do plan on running similar Citizen Science programs in the future! If you want to join us again, you can click here to leave your email address so we can keep you in the loop. (You’ll also
have a chance to give us your email in the feedback survey above; if you’ve done that already, no need to fill this one out too!)
That’s a wrap on this summer’s work. Thank you so much for reading and contributing! We’ll see you next time.
• The BLNDIY Team
**Although we moderate every comment before it gets posted, please remember to be kind to others and mindful of your personal information before you post here!**
Language Fun: Dialects
Who do you sound like?
There’s all kinds of variation in the way people talk, which can be influenced by all kinds of things, too. Our personalities, identities, and origins all have a part to play in our unique versions
of our languages. How do different people speak English in the United States, and can we decide where people are from based on their dialect?
Can we guess where you’re from? Or if you don’t live in the U.S., where would you live based on the words you use? Click here to find out!
Did we get it right? What do you think about the different words we might use? Tell us about it in the comments!
**Although we moderate every comment before it gets posted, please remember to be kind to others and mindful of your personal information before you post here!**
Week 9: Analyze Data
The results are in!
First, as always, feel free to sign up for our leaderboard or more if you’d like to help us out even more. If you’re just joining us, we’ve got the whole summer’s work archived for you to look
through and get up to speed, or jump in now anyway.
This week, we’ll be doing some statistical analysis of the data to know which effects are real (or “statistically significant”) and which we need more evidence on.
So, let’s get started!
When researchers conduct statistical analyses, they are trying to draw objective conclusions based solely on the data. Last week, we explored the data visually and looked for any patterns we might
see. Humans naturally look for patterns in everything, though. That’s why people see clouds that look like everyday objects or find faces in burnt pieces of toast. By conducting statistical analysis,
we can decide which of the patterns we saw last week have enough evidence for us to argue that they actually exist!
When a scientist calculates the stats for their data, they are choosing a set of “tests” to run, each of which is designed to look at a specific kind of difference. The exact type of analysis you use
depends on the type of data you’re looking at. Whatever the type, statistical tests give at least two values: the test statistic and the p-value. Because the test statistic is specific to the test
and hard to understand alone, the second value, the p-value, gives the probability that the difference or effect is not present in ‘true’ values. In other words, the p-value tells us whether we
should or should not find a similar result if we repeated the experiment.
If the p-value is less than 0.05, we say that the test is statistically significant and the evidence supports the effect or difference being ‘real’ and likely to be found again if we repeated the
We’ve got a lot of good info for you this week, because we want to look at a lot of potential effects and tell you how we reached our conclusions! So, we’re going to divide things up a bit and let
you jump around to different parts of the post as you see fit. Each section will start with an explanation on the type of test we used on the data and then give our results for the thing we were
looking at, so feel free to only read the bits you want to. Don’t miss the final results for the guesses you made in Week 7, too! Those are in with their relevant categories.
Internal vs. External Speech, & Men vs. Women: T-Tests
Personal Strategies: Fisher’s Exact Test
Test of Difference of Means: T-Test
T-tests are for comparing the mean of two groups. A key idea behind t-tests is that the mean value of a group in the data, say the average number of words remembered by men in our experiment (12.81),
is probably not the exact value we would get if we tested the entire group, i.e. all men. However, this ‘true’ value we would get if we tested all men is likely close to 12.81. To counter this, the
t-test calculates a range that should contain the ‘true’ value that is centered around the observed value (12.81). The t-test compares the amount of overlap between the ranges of possible ‘true’
values for the different groups of participants, and based on the amount of overlap, the t-test calculates a t-statistic and the related p-value. If the p-value is less than 0.05, there is a less
than 5% chance that there is no difference between the two groups. In other words, there is a greater than 95% chance that there is a difference between the ‘true’ values for the two groups.
Statistics are reported in papers in different formats depending on the field. Our lab and field use the American Psychological Association (APA) format. In this format, statistics are presented in
line in the following format: (t-value, p-value).
First, let’s see if there is a difference in performance between our two conditions.
We used a t-test to examine if there was a significant difference in the mean number of words remembered by participants told to repeat the words out loud and by participants told to repeat the words
in their head (our main research question!). The test was not significant (t=0.72, p=0.48), indicating that there was not a significant difference in the ability of the two groups. Here’s what you
all predicted in Week 7. Looks like half of you were right!
So, interestingly (and maybe unfortunately), we didn’t see a clear difference in performance between the two conditions. However, keep reading to see how that isn’t the full story!
Next, let’s see if there is a difference in performance between men and women.
We used a t-test again to examine if there was a significant difference in the mean number of words remembered by participants who identify as men and participants who identify as women. The test was
not significant (t=0.71, p=0.48), indicating that there was not a significant difference in ability based on gender. It should be noted that one participant reported “Other” as their gender and one
participant that preferred not to state their gender. While both of these participants scored above average, multiple participants are necessary to draw group conclusions.. Again, here’s what you
thought was going to happen. You predicted a slight lean towards women performing better in the study, but were still pretty close overall!
These results indicate that there was no clear difference in performance based on gender.
Click here to return to the top of the page!
Test of Relationship: Correlation/Linear Regression
Instead of looking at differences between groups, some tests examine if there is a relation between two variables. For example, the analysis below looks at the relationship between the number of
remembered words and the participant’s age. A correlation examines how one value increases or decreases as the second value increases or decreases. Maybe you’ve heard of ‘causation vs. correlation’
before? That’s what we’re talking about here; a correlation is just an observed relationship between two sets of values, not necessarily a statement on how one causes the other to happen! A
correlation between the number of words remembered and age is asking “as someone’s age increases, does the number of words they remember increase?” A correlation produces an r-value (similar to a
t-test producing a t-value) which gives the strength of the relation, where a higher value indicates a stronger relation. The range of r-values runs from -1 to +1, where -1 indicates that as one
variable increases, the other decreases and +1 indicates that as one variable increases, the other also increases. For example, as a tree ages, it grows taller. A correlation examining the relation
between a tree’s age and its height would have a high positive r-value because as one variable increases (age of the tree), the second variable (height of the tree) also almost always increases.
To determine a p-value for a correlation, we can use a technique known as Linear Regression. This technique tries to create a straight line that comes as close to the actual data as possible. To help
understand what that means, check out the plot below to see where that line falls compared to the other data. To determine the p-value, we can examine the difference between the predicted line and
the actual data.
So, how much would a participant’s age relate to their performance in the study?
A correlation comparing a participant’s age and the mean number of words they remembered was statistically significant (r=0.40, p=0.02), suggesting that there was a moderately strong relation between
age and performance! Linear regression suggested that the number of words a participant could remember increased by 0.18 per each year older. Here are the average predictions you made about age for
this study. The lower the score on the graph, the higher the rating you gave them predicting they would do better in the study. Turns out you were wrong on this one! You gave the youngest
participations the lowest score (putting them towards number 1 most often in your ranking), but older people actually did better!
As you can see in the plot below, while there is not a clear pattern where older participants almost always score well and younger participants almost always score poorly, there is a clear trend
where higher scores tend to fall in the higher age range.
Click here to return to the top of the page!
More Complex Tests: Analysis of Variance (ANOVA)
The tests we discussed before, correlation and t-test, are like the hammer and screwdriver of the scientist’s toolbox. It’s hard to complete a project without using at least one of them. If a t-test
is the screwdriver of the toolbox, the next test we’ll discuss (ANalaysis Of VAriance: ANOVA) is the electric drill. While a t-test is limited to only two groups, ANOVAs allow for comparison between
many different groups and different types of groups. The logic is the same as the t-test though. Take the mean of the group and based on the difference from one participant to the next, create a
range of possible values that contains the value we might get if we tested every single person possible. Then, compare the ranges for each group to decide if the groups are actually different from
one another.
We have two ANOVAs to look at, which can also help explain how ANOVAs are used. First, we want to compare the mean number of words remembered for each of our list categories (animals, objects, and
fruits/vegetables) to see if people did better on some lists compared to others. We have three groups though, so we can’t test all of them at once using a t-test. However, ANOVA can give us a test
statistic (F-value) and p-value based on whether there is a difference between any of the three groups.
Our second ANOVA adds a second layer to the question and shows the real strength of ANOVA. We want to compare if there is a difference in the mean number of words remembered based on the list
category AND whether a participant was told to repeat the words in their head or out loud. We’re comparing the three groups we looked at in the first ANOVA, which compared participants
within-subjects. It compared a participant against themselves, i.e. how many words they remembered for each category. In our second ANOVA, though, we are also comparing between-subjects, by splitting
participants based on whether they were in the “in your head” or “out loud” conditions. The ANOVA produces a F-value and p-value telling us whether there was a difference in the difference between
That might sound a bit confusing, so let’s actually use ANOVA for these questions and see how it works in action.
First, was there a difference based on list category?
An ANOVA examining the effect of list category was statistically significant (F=17.15, p=0.0002), indicating that participants remembered more words for some lists compared to others. We can use
t-tests to examine which specific groups were different. There was not a statistically significant difference between the number of animal words remembered and number of fruit/vegetable words
remembered (t=0.03, p=0.97). There was a significant difference between the number of objects remembered and the number of fruits/vegetables remembered (t=2.80, p=0.007) and between the number of
objects remembered and the number of animals remembered (t=2.68, p=0.009). The results indicate that household objects were significantly harder to remember than animals or fruits/vegetables.
Animals Objects Fruits/Vegetables
14.64 11.72 14.60
Second, did it matter whether participants repeated the words out loud or in their heads?
An ANOVA examining the interaction of list category and task condition (out loud/in your head) was not significant (F=2.16, p=0.15). While the test approached significance (relatively low p-value),
there was not enough evidence to conclude that there was a difference.
Condition Animals Objects Fruits/Vegetables
In Your Head 11.41 15.88 15.29
Out Loud 12.06 13.61 13.94
Click here to return to the top of the page!
Last one! Fisher’s Exact Test for Count Data
We have one more analysis to look at, but it’s pretty straightforward. We want to know if whether the participants were told to repeat the words in their heads or out loud impacted whether or not
they used a strategy. For this we can use a Fisher’s Exact Test, which will examine the ratio of Yes responses to No responses to the question “Did you use a strategy to help you remember?” for the
two conditions. The test was not significant (p=0.16), but there seemed to be a clear trend in the data. We might need to run a follow-up study to explore this relationship further.
Choice In Your Head Out Loud
Yes 14 9
No 4 9
Click here to return to the top of the page!
So, what does it all mean?
Now that we’ve done the analyses and have some objective measures of what happened, it’s time to draw some final conclusions. Were the results what you expected? We had two results with p-values
around .15, which isn’t low enough for us to conclude they’re real but is low enough that it might be worth exploring more.
In the end, it looks like age was a statistically significant factor, as was the category of objects that participants had to remember. But, neither gender nor task condition were significant (though
the latter interaction approached significance). Strategy choice had a clear pattern but was not statistically significant, and this should be investigated further in a following study to see what we
can make of it!
If you’ve read through all of this, well done! If this is your first time looking at data and stats, it may have been a little overwhelming. Don’t worry though! Science is a skill that takes time and
practice, and even just learning a bit about how it all works is a big accomplishment.
Remember that, even though using statistical tests like these gives us a far more objective look at our data, this is only the beginning of an even larger process. We were only testing for our pretty
specific research question this time around, but by trying to answer that question we’ve run into other cool things we might want to learn more about too! What kinds of studies might be good ways to
continue the work we’ve done here so far?
Click here to return to the top of the page!
In the comments section below, tell us about what you think the big takeaway message is from our results! What did we learn about internal language in our study?
Next week, we’ll be making some final conclusions about the study and looking back on everything we did this summer!
**Although we moderate every comment before it gets posted, please remember to be kind to others and mindful of your personal information before you post here!**
Language Fun: The Stroop Effect
Want to trick your brain?
You probably don’t think about the fact that you’re reading whenever you see text, but somehow you can still remember what you saw after driving by a billboard on the highway or after you glance at a
sign somewhere. How does reading work like that? This week, we’ve got a quick version of the Language Pod’s most popular demo, which shows off the Stroop Effect.
Give the game a shot and see you how do! Click here to give it a try.
How did you do? Tell us how it went in the comments! Have your family and friends try it, too, and see who can do the task the easiest/fastest. What might that mean about reading and our brains?
**Although we moderate every comment before it gets posted, please remember to be kind to others and mindful of your personal information before you post here!**
Week 8: Make Observations
We’ve got data!
First, as we always mention, feel free to sign up for our leaderboard or more if you’d like to help us out or get some bragging rights for participating. But just voting or commenting on our posts is
more than enough, and you can do that without signing up! If you’re just joining us, we’ve got the whole summer’s work archived for you to look through and get up to speed, or jump in now anyway.
This week, we can share the raw data from your experiment for you to explore! You’ll get the chance to make some first observations, and be sure to leave a comment with anything cool you find.
So, what did we collect?
After less than a day, we had 40 people complete your study, 20 who were told to remember the words by repeating them out loud and 20 who were told to repeat the words in their heads. Hopefully this
gives us some insight into how inner speech works!
CLICK HERE if you want to see everything we collected in a handy spreadsheet; then you can come back here to learn more and add your thoughts.
We’ve got a couple of graphs to get you started!
First, here’s our main effect. How well did participants seem to remember the pictures in each condition?
What about the picture categories? Did participants remember one set better than others?
Next, what if we separate the results by gender?
Finally, how much would a participant’s age relate to their performance in the study?
Of course, we have so much more we can look at! If you want to take it further, here’s all of our data in a spreadsheet again. Everything we collected is listed there, and you can find things like
what strategies the participants used, whether or not they say they really followed their condition’s requirements, basic demographic info, and more!
When looking at the graphs or the spreadsheet, try to think about how they connect to our original research question! In addition to whatever neat patterns you might find, what do these data say
about internal language?
Now, let’s talk a little bit about why we want to look at our data like this.
Scientists often start by making observations about the general pattern of the data through visual representations. Rather than doing too much number crunching, it’s useful to get a more general idea
of what might be happening.
We have to be careful, though, that making observations in this way doesn’t lead us to conclusions we shouldn’t reach. At what point should we be convinced that an effect we think we can see is real
and not just our minds making things up? That’s what the math part is for (which we’ll get into next week)! Once they have an idea of what is happening in the data and need to make final conclusions,
researchers can do statistical analyses and tests. That way, everyone can agree on what the experiment can tell us objectively rather than through opinions and just what our eyes see!
We have to use visual observations and our intuitions about experimental results together with statistical analysis to make sound conclusions.
So, here’s your job for the week: before we do the stats, what do you see in the data? What questions seem to be answered, and which aren’t? Is there anything you find confusing or surprising?
In the comments section below, tell us about it! What should we be thinking about from our first look at this data? Is there anything cool? Be the scientist and make some observations!
Next week, we’ll go over what we might be looking for when we start doing our statistical analysis, and what that will look like!
**Although we moderate every comment before it gets posted, please remember to be kind to others and mindful of your personal information before you post here!**
Language Fun: Mayan Hieroglyphs
There are all kinds of different ways that languages can be written!
We have an alphabet in English (which you’re using to read right now!), but have you ever heard of or seen hieroglyphs? Ancient Egypt has perhaps the most famous example of this, but they aren’t the
only ones.
Want to learn how to read a Mayan Hieroglyph? Now’s your chance with this week’s Language Fun game!
What did you think? Tell us about how it went in the comments! Did you follow the links at the end to try to learn more?
**Although we moderate every comment before it gets posted, please remember to be kind to others and mindful of your personal information before you post here!**
Week 7: Running the Experiment
It’s finally time to run our study!
After all of the hard work our Citizen Scientists have put into creating, designing, and testing their ideas, Week 7 is the moment of truth.
Before we get into that, even though we’re nearing the end of this summer’s project, we’d still love to have you sign up officially if you want to. That way you can be on our leaderboard or help us
better understand how to do citizen science better! Feel free to keep voting or commenting without signing up, though – we’re just happy you’re here!
This week, we’re sending off the study for real people to participate in! We also want to give you one last chance to make some guesses about what might happen in our experiment.
CLICK HERE TO VOTE!
What do you think we’re going to see? We have a couple questions for you about what the answer might be to our research question, and about how different people might do differently in the study’s
task. Thinking about these kinds of things is fun, and it helps us be more aware of our biases before we start evaluating our results.
Last week, we asked you to help us test out our stimuli and see if you could name our pictures. Everyone did great, and most of the pictures were consistently named! Thanks again to everyone who
tried it out. There were only a couple that had some different answers, like DVD vs. CD and hippo vs. hippopotamus. The nice thing is that these responses still show that the pictures are
recognizable, and as long as participants in the study can prove they remembered a picture, it doesn’t really matter what they’re called exactly!
Now, let’s go over how we are going to get our data in our final experiment.
All of this is going to be run through a service called Prolific. We just have to make an online study, and then we can send it off to be taken by participants all over. We can specify the kinds of
people who should be taking our study, like if we need only adults or those who speak a certain language, and we can pay them a fair rate for their time. If you have any friends or family who you
think might want to take our survey, they can make an account on Prolific and be in all kinds of studies too!
That being said, though we’re sure you’re curious, none of our Citizen Scientists can be in the study you helped us to design. It’s for the same reason that none of us on the BLNDIY research team can
participate, either: we know too much! Even if you wouldn’t mean to, taking the survey when you know the goals of the study can influence our results, especially if you expect or desire a certain
outcome. That’s why we only tell participants so much about an experiment until after they finish it. We only want them to know enough to be able to do the task we’re giving them and feel safe while
doing so. If we said beforehand exactly what we’re looking for, then it wouldn’t be a controlled experiment like good science should be!
Bias is something that’s really important to consider in experiments, which we’ve seen in a couple steps of our experimental designing that we’ve done here. A really cool example of this is the
Clever Hans effect. Here’s a short video telling the story of a horse that appeared to be able to do math. It turns out that the humans testing Hans accidentally gave him clues to answering
arithmetic problems without even knowing they were affecting the results at all… (And here’s an extra article if you want to know more!)
That’s it for this week!
It won’t be long before we have some real data for your experiment.
Again, CLICK HERE to make some bets on what might happen in the experiment.
And don’t forget our Language Fun section on the site! If you haven’t checked it out yet, we have all kinds of cool games and quizzes that are all about language science.
Are you excited? Share what you think is gonna happen down in the comments! Will people remember more words in the out-loud condition, the in-your-head condition, or will it be about the same between
the two? If there’s a difference, how big of a difference will it be?
Next week, we’ll have some preliminary results to share, and we’ll talk about making qualitative observations of our data.
**Although we moderate every comment before it gets posted, please remember to be kind to others and mindful of your personal information before you post here!**
Language Fun: Root Out the Word
Have you ever heard of an affix before?
Affixes are small bits of meaning that are an important part of how words are made! Prefixes are a type of affix and can turn somebody who is ‘likeable’ into someone much less pleasant to be around:
‘unlikeable’. Suffixes are another kind of affix, which might help an adjective like ‘sad’ describe other words as an adverb: ‘sadly‘.
Can you find the affixes? Try our ‘Root Out the Word’ game!
How did it go? Do you have any favorite roots or affixes? Tell us about it in the comments!
**Although we moderate every comment before it gets posted, please remember to be kind to others and mindful of your personal information before you post here!**
Week 6: Stimuli
Welcome back!
First, if you want to help us out even more, make sure you check out our sign up page and everything we’ve got posted over there. Feel free to vote and comment without officially signing up, though!
This week, we need your help to pick the best stimuli for our experiment! Be sure to check out the results of last week’s vote down below, too.
CLICK HERE to help with the stimuli! Read below to find out why we’re doing this.
The pictures we are using are from the Bank of Standardized stimuli (BOSS) pictorial dataset, a set of more than 1400 pictures created by Dr. Martin Brodeur (as well as Dr. Martin Lepage, Dr.
Katherine Guérard, and many others). The dataset has been designed and tested to be standard across a variety of factors that are published in several publications available alongside the dataset. It
was designed as a research tool to be freely shared (with credit given to the researchers responsible) for cognitive and psycholinguistic research. Check out their website, where you can find the
entire set as well as the various studies that have been published!
We’ve picked a subset of these pictures to use in our experiment and we need your help testing them in a similar way to the testing done previously with the BOSS dataset. In our experiment, we’re
asking participants to remember a list of pictures, so we need to ensure that a typical participant would know the name of the object. We’re going to give you 10 pictures and ask you to just write
down what they are. It might seem simple, but this work is actually very important! We need to know that participants weren’t able to write a picture down because they didn’t remember it and not
because they didn’t know what it was. This process is called norming. The BOSS dataset is such an awesome tool because other researchers have already done this for many features that could
potentially impact how participants do, such as color, size of the object, brightness, and perspective. Without a dataset like this, we and many other researchers would have had to use (likely much
simpler) drawings instead of high-quality photographs.
Now for the results from last week’s vote on experimental design!
Sounds like you guys were pretty much in agreement that we should go with easy words and 30 images for our study. Our participants should be grateful that they don’t have to remember the hard words,
and 30 pictures will be a happy medium. That’s what we’ll do then!
We did have some questions last week that are definitely worth answering, too.
Eshmoney said: I thought of another question about the study design: Are the participants going to get all of the pictures at once or are they going to get them one at a time?
This is an important thing to think about with our experimental design, absolutely! The plan is to give each picture one at a time for one second. One at a time should help keep people from getting
overwhelmed or running out of time to scan through all of the pictures, while one second is long enough to fully see the picture without giving too much time to work on memorizing them. How else
might it change our design or results if we gave the pictures all at once or for a different amount of time instead?
PumpkinPie54 asked: With easier words like ”dog”, would a person picture the image they were given, or their own dog? Since “dog” is a pretty relatable word, and since most people have dogs, and most
people don’t have inner dialogues, wouldn’t they remember the picture… as another picture? Sorry if that sounded confusing, and if i am getting a bit off-topic.
That’s an interesting point! When we try to memorize things, we often try to link them to our own experiences and ideas. If we think about our own dogs when trying to remember the word “dog”, it
probably would be easier than if we didn’t have a dog and thus didn’t link it to what we’re memorizing. Hopefully, though, by giving 30 images in a row for only a second, participants won’t really
have enough time to build any kind of tricks or mnemonics that will influence our results. And by giving pictures rather than just popping the written word “dog” up on the screen, we’re helping to
suggest that particular dog in the image to people rather than letting them come up with whatever image they want to!
That’s it for this week!
Here’s the link one more time for the stimuli testing we’d like you to do.
What did you think about our pictures? Did you look through the BOSS dataset website and find anything cool we should know about? Tell us about it down in the comments! Next week, we’ll start
collecting the data and talk about some pitfalls and considerations when running an experiment.
**Although we moderate every comment before it gets posted, please remember to be kind to others and mindful of your personal information before you post here!** | {"url":"https://u.osu.edu/blndiy/author/bednar-48/","timestamp":"2024-11-07T00:04:34Z","content_type":"text/html","content_length":"95680","record_id":"<urn:uuid:8088a56a-f172-443a-8838-2e21f2bc303e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00324.warc.gz"} |
This is the second of two writings talking about the difficulties of probabilistic reasoning; here, we zoom in the problem of applying probability to actual data, examining the paradigms and
techniques of the field of statistics.
Our focus is, as before, on the hidden assumptions people make, the effects these assumptions have on the validity of their inferences, and the absence of perfect solutions — in short, why statistics
is difficult. Along the way, we'll be introducing lots of statistical frameworks, techniques, and models, generally at a relatively rigorous level. You don't necessarily have to follow the more
rigorous aspects of each argument and derivation to get the gist of it, but a familiarity with calculus helps immensely. Useful properties of some common distributions are given in part A of the
Appendices, while the most technical and/or tedious derivations are stowed away in part B.
The Questions of Statistical Inference
Previously, we covered the basic interpretations of probability:
• The classical interpretation, in which the probability of an event is determined by its position in a finite collection of $n$ events which, to us, seem equivalent; these events are each given
equal probability, such that, if they cover all possibilities, then each must have a probability of $1/n$. Made by and for French gamblers, it is only useful in the simplest of problems, where it
is best understood as a description of the influence of ignorance on belief.
• The propensity interpretation, in which the probability of an event is the propensity of the underlying physical system to generate it, either on a per-case basis (single-case propensity) or as a
frequency over many runs (long-run propensity). For practical purposes, this interpretation is basically irrelevant, being little more than a philosophical curiosity.
• The Bayesian interpretation, in which the probability of an event is the degree of belief we are rationally justified in having. By the famous Dutch Book argument, these probabilities, if they
are to be rational, must conform to Kolmogorov's axioms, and must be updated in response to observation in a manner described by Bayes' theorem. This interpretation silently smoldered under the
name of “inverse probability” for about two centuries after its foundation by Bayes and Laplace, only erupting in the middle of the 20th century after intense justification and popularization by
a small club of super-Bayesians (primarily de Finetti, Savage, Jeffreys, and Jaynes).
• The frequentist interpretation, in which the probability of an event is the frequency at which it's observed in a series of trials, either after a finite number of trials (empirical frequentism)
or as the limit of a hypothetically infinite number of trials (hypothetical frequentism). Developed largely as a response to the weaknesses of the classical interpretation, it is strongly
preferred by scientists due to its clear empirical nature.
Now, we'll put probability to the test by figuring out how to use it to understand the world. That it is useful is clear, for it is used all the time — to determine the reliability of industrial
processes, to predict the fluctuations of the stock market, to verify the accuracy of measurements, and, in general, to help us make informed decisions. Hence, we must understand why, how, and when
it is useful, and how to use it correctly.
To paraphrase Bandyopadhyay's Philosophy of Statistics, there are at least four different motivations for the use of statistical techniques:
1. To determine what belief one should hold, and to what degree to hold it to;
2. To understand whether some data constitutes evidence for or against some hypothesis;
3. To figure out what action to take in order to achieve some end;
4. To develop a prediction about the state of some thing from partial information about it.
!tab Obviously, these questions are very tightly connected to one another, but each of them provokes us to probe the data in different ways, to ask different follow-up questions, and to interpret
statements about the data in different ways; as such, they segregate themselves into different interpretations of statistical inference.
!tab At the same time, there are different schools of statistical inference, which generally (but not always) hold fast to one of these interpretations. The distinction I'm making between schools and
interpretations is a rather subtle one, but will be very important; to put it simply, a school is an established body of interlocking techniques of statistical inference, whereas an interpretation is
an assigned purpose for those techniques.
A short synopsis of the main schools of statistical inference:
• The Fisherian school adopts a frequentist interpretation of probability and an evidence-based interpretation of statistics;
• The Neyman-Pearsonian school adopts a frequentist interpretation of probability and an action-based interpretation of statistics;
• The Bayesian school adopts a Bayesian interpretation of probability (surprising, isn't it?), as well as a belief-based interpretation of statistics.
There is a fourth school, known as Likelihoodism, but in terms of influence it remains little more than a peculiarity. We'll go over each of these schools of statistics, explaining their positions,
differences with one another, implicit interpretations, and internal conflicts.
💡 In addition to a particular interpretation of probability, a framework for statistical inference must also commit itself to a particular understanding of the purpose of statistics if it is to be
coherent. These varying understandings, which may be called “interpretations” of statistical inference, are correlated to but distinct from the actual bodies of techniques which make up different
schools of statistical inference.
Mathematical Notation
Recall from the previous article that there is a difference between probability measures and probability distribution functions; the former send events to numbers representing their probability,
whereas the latter send individual points to numbers (events being collections of individual points). Fortunately, our explorations no longer require us to delve into measure theory, so we'll deal
primarily with probability distribution functions.
Some notation:
• We'll generally denote probability distribution functions by $P(x)$, though special classes of functions will get their own notation.
!tab For instance, we write ${\mathcal N}(\mu, \sigma^2)$ for the distribution $P(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\operatorname{exp}(-\frac12 \frac{(x-\mu)^2}{\sigma^2})$, aka the normal
distribution with mean $\mu$ and variance $\sigma^2$.
• When $P$ is parametrized by some parameter $\theta$, as normal distributions are parametrized by their means and variances, we may write $P(x \mid \theta)$, even when not in a Bayesian framework.
• Typically, we denote the sample space in which $x$ resides by $X$, and the parameter space in which $\theta$ resides by $\Omega$.
• To say that some collection of data points is sampled from some distribution, we use $\sim$.
!tab For instance, $D \sim {\mathcal N}(\mu, \sigma^2)$ means that $D$ is a collection of (independent unless otherwise specified) samples $x_1, \ldots, x_n$ from the normal distribution.
• Given a distribution $P(x)$ and some function $f$ of $x$, where $x \in X$, we'll denote by $\mathbb E[f]$ the expectation value of $f$ with respect to $x$.
• We'll write $\mathbb E_x[f]$ if there is any ambiguity about the parameter we're taking the expectation with respect to. This is calculated as $\mathbb E[f] := \int_X f(x)P(x)\, dx$, and is
interpreted as the average value of $f$ evaluated on a randomly sampled $x \sim P$. (This is an integral over all $x \in X$, but if we're dealing with finite distributions, we write $\sum_{x \in
X} f(x)P(x)$ instead).
Confusingly but commonly, $P$ is just as much an abstract symbol pointing to the concept of probability as it is a function — in programming jargon, it's an overloaded operator. If we have
independent samples $x_1, x_2$ from the same distribution $P(x \mid \theta)$, then we may write $P(x_1, x_2 \mid \theta)$ for the product $P(x_1 \mid \theta) P(x_2 \mid \theta)$, even though $P$ is
nominally only a function of one variable at a time. Correspondingly, if we have data $D = \{x_1, \ldots, x_n\}$, then we may write $P(D \mid\theta)$ when we mean $P(x_1\mid\theta) \cdot \ldots \cdot
P(x_n \mid \theta)$.
Frequentist Inference
Statistical inference — the use of statistical tests to extract information from arbitrary data sets in formal, reproducible ways — is largely a product of the 20th century. Its invention can, for
the most part, be attributed to the British statistician Ronald A. Fisher, who in his work analyzing crop data not only developed a great deal of innovative, new techniques for forming and then
testing hypotheses about data, but weaved them into a single framework, known as significance testing.
!tab The idea is as follows: to determine whether some data offers evidence for or against some hypothesis about a hidden parameter, build a model relating the hidden parameter to the data, and then
test the probability of the model producing the data contingent upon the hypothesis being true. The lower this probability, the more significant the data is as evidence against the hypothesis. Fisher
believed that what was to be done with this significance value, whether it be rejecting or accepting the hypothesis, was ultimately up to the researcher, who would have to weigh this decision against
a variety of other pragmatic and epistemic considerations. After all, no model is ever perfectly correct, no experiment is ever perfectly faithful, and unlikely things can happen.
!tab Shortly after the rapid adoption of Fisher's framework, two other statisticians working in Britain, Jerzy Neyman and Egon Pearson, collaborated on a new approach to statistical inference. They
sought to improve on Fisher's method by mathematizing the process of hypothesis rejection. Since unlikely things can always happen, this necessitated the fixing of some error rate and the pinning
down of some particular alternative hypothesis to be in error about.
!tab Their approach was to automatically reject the first hypothesis in favor of the alternative hypothesis if their test yielded such-and-such significance levels, these levels being chosen such
that the probability of being wrong was below the fixed error rate. Neyman and Pearson considered this framework, known as hypothesis testing, an improvement upon Fisher's. He strongly disagreed with
them, and ferociously argued with them over the course of decades; this debate, while famously acrimonious and destructive, shaped much of the early history of statistical inference, and is worth
investigating in detail. We'll start by formalizing each of their views.
💡 The history of modern statistics begins with a foundational divide between Fisher's framework for using statistics to interpret data as evidence and Neyman-Pearson's framework for using statistics
to form binary conclusions from data.
Fisher's Significance Testing
Parametric Hypotheses
Fisher and Neyman-Pearson both worked primarily with parametric models, or models with tunable parameters. These models largely take the form of distributions with tunable parameters; we assume that
data points have been sampled independently from a single distribution with a single parameter value. In statistics parlance, the data points are “independent and identically distributed” (i.i.d.) —
the only task left is to figure out the actual parameter values of the distribution that the data was sampled from.
!tab As mentioned above, we'll call this parametrized distribution $P(x \mid \theta)$, where $\theta$ is the parameter (or vector of parameters) and $x$ is a particular value that the sample may take
on. For instance, this may be a normal distribution with parameters $\theta = (\mu, \sigma^2)$, in which case the probability distribution function (pdf) is given by: $${P(x \mid \theta) = \frac{1}{\
sqrt{2\pi\sigma^2}}e^{-\frac12 \frac{(x-\mu)^2}{\sigma^2}}}$$
A normal distribution is any member of a continuous family of distributions parametrized by their mean $\mu$ and variance $\sigma^2$. Parametric models fix the family, and find fitting parameters. (
A hypothesis, in this parametric framework, concerns a particular value of some parameter. For instance, if we observe IQ data $D = \{114, 103, 109\}$, we might model this as coming from a normal
distribution with known (i.e., irrelevant) variance $\sigma^2$ and unknown mean $\mu$, and create the hypothesis that $\mu$ is equal to some particular value, say $106$. In general, we'll refer to
the hypothesis as $H_0: \theta = \theta_0$, where $\theta$ is the parameter and $\theta_0$ some particular value of the parameter; by tradition, this initial hypothesis is called the null hypothesis.
Testing the Null Hypothesis
Fisher's approach to testing the null hypothesis $H_0: \theta = \theta_0$ is as follows: create a function $t$ of the data $D$ such that $t(D)$ is expected to be higher the further away $\theta$ is
from $\theta_0$, and then compute the probability that $t(D_0) \ge t(D)$ for the observed $t(D)$ and new, hypothetical data $D_0$ sampled from $P(x \mid \theta_0)$. Any such function that depends
solely on the data, rather than unknown parameters, is known as a statistic; much of an introductory course in statistics will consist of rote memorization of particular statistics meant to be
applied to certain tests, of which there are many.
!tab The higher the value $t(D)$ is, the harder it will be for some $t(D_0)$ sampled from $P(x \mid \theta_0)$ to exceed it, and the lower the probability will be that $t(D_0) \ge t(D)$. This
probability is known as a p-value, and is generally denoted simply as $p$. The lower it is, the more reason we have to believe that $D$ wasn't actually sampled from $P(x \mid\theta_0)$.
This p-value was Fisher's number of choice for understanding the significance of some data, and he originally recommended that data be taken as significant evidence against the null hypothesis when
its p-value is below some threshold, his choice being the now-infamous p = 0.05. In his words, “It is convenient to take [p=0.05] as a limit in judging whether a deviation is to be considered
significant or not”.
!tab Obligatory grumbling: p-values are very commonly misinterpreted by scientists as error rates, but they're only the probability that data generated from the hypothetical underlying distribution
would be at least as extreme, in terms of the test statistic, as the actual observed data. The ideas are similar, but not identical, and we will show this later by constructing an example where they
diverge spectacularly.
An example
For instance, suppose that the weight of such-and-such group of people is known to be normally distributed with known variance $\sigma^2 = 100$ and unknown mean. We hypothesize that the mean is equal
to $\mu_0 = 150$, and sample two people with weights $165$, $153$, and $174$. The test statistic generally applied here is the z-test, with test statistic $$z= \frac{\left(\frac{1}{n}\sum_{i=1}^n D_n
\right) - \mu_0}{\sqrt{\sigma^2/n}}$$
Note that this is a statistic because it does not depend on any unknown parameters: $\sigma^2$ is known, and $\mu_0$ is known because it's our hypothesis, not necessarily the real mean of the
underlying distribution. Like most statistics, the $z$ statistic is carefully crafted to have a particular distribution so long as the data has the distribution we think it does — in this case, if $D
\sim {\mathcal N}(150, 100)$, then $z\sim {\mathcal N}(0, 1)$. Thus, if when we apply it to our data we get a $z$ statistic that is unusally extreme for a sample from ${\mathcal N}(0, 1)$, we can
take that to be statistically significant evidence against $D$ being sampled from ${\mathcal N}(150, 100)$ in the first place. This controlled modus-tollens reasoning is the underlying idea of
significance testing.
!tab From here on out, we'll refer to the mean of the sampled data, $\frac1n \sum_{i=1}^n D_i$, as the sample mean $\overline D$. With our given weights, $z$ evaluates to $14/(\sqrt{100/3}) \approx
2.42$. Were the data really distributed according to ${\mathcal N}(150, 100)$, the $z$ statistic would follow a normal distribution ${\mathcal N}(0, 1)$, so the $p$-value will be the probability that
a random $z_0$ sampled from ${\mathcal N}(0, 1)$ will be greater than the observed $z \approx 2.42$. This is given by $p = \int_{2.42}^\infty \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}\, dx \approx
0.00776$. Since this is well under $p =0.05$, Fisher would have us consider the sampled data $D$ as statistically significant evidence that the underlying population distribution is not ${\mathcal N}
(150, 100)$. However, this does not mean that we should conclude that that isn't the underlying distribution.
We can have evidence for things that aren't true; if I drink some suspiciously old milk, developing a stomachache twenty minutes later would be evidence that the milk was bad, but it's possible that
the milk was fine and the stomachache is unrelated. Similarly, an extreme value of some test statistic is evidence that the data is not from the distribution we hypothesize it is, but it's possible
that it is from that distribution and we just had really strange luck. Evidence is just evidence for a particular claim, and its strength as evidence is the extent to which it is more consistent with
the situation given by the claim than other situations.
Note: the perspective I'm attributing to Fisher here, namely that the presence of statistically significant evidence against the null hypothesis should not necessarily imply a rejection of the null
hypothesis, seems not to have been present in his earlier work; my guess is that he largely adopted it as a reaction to Neyman-Pearson's process of automatically rejecting the null hypothesis upon
seeing extreme values, c.f. page 5 of his 1955 “Statistical Methods and Scientific Induction” (pdf). The situation is not helped by the fact that much pedagogical writing on statistical testing
ruthlessly mixes up aspects of the Fisherian and Neyman-Pearsonian frameworks, leading to confusion about who said what.
💡 Fisherian statistics seeks to figure out the extent to which data counts as evidence for some hypothesis by developing methods to calculate how extreme that data is relative to the hypothesis.
Neyman-Pearson's Hypothesis Testing
Alternative Hypotheses and Error Rates
Neyman and Pearson sought to improve Fisher's early approach to significance testing by patching up the aforementioned ambiguity over rejection vs acceptance of the null hypothesis by constructing
procedures to automatically reject or accept it. To this end, they had to add two main ideas:
• The idea of an alternative hypothesis: while it's impossible to be right in our rejections all the time, we should at least analyze our automation algorithm to figure out how likely it is to be
wrong. However, there are endless ways to be wrong: if we hypothesize that $\theta = 2$, we might be wrong because $\theta = 2.007$, or we might be wrong because $\theta = 17$. Some particular
alternative must be fixed so we have a coherent, workable idea of what it means to be wrong.
Typically, we view the null hypothesis as the default — it's “null” because there's nothing unusual going on. The alternative hypothesis, meanwhile, indicates the presence of an abnormality. As such,
accepting the null hypothesis is often called a “negative” result, while accepting the alternative hypothesis is often called a “positive result”. We will always accept one of the two, rejecting the
• The idea of error rates: now that we have an alternative hypothesis, there are two ways to be wrong.
□ We could accept the alternative hypothesis when it is actually false. This is known as a Type I error, or more commonly a “false positive”.
□ We could accept the null hypothesis when it is actually false. This is known as a Type II error, or more commonly a “false negative”.
Let's formalize this.
False Positives and Negatives
As before, take a statistical test $t$ and data $D$ sampled from some $P(x \mid \theta)$. Let $H_0: \theta = \theta_0$ be the null hypothesis and $H_a: \theta =\theta_a$ the alternative hypothesis.
In order to know which hypothesis to reject given some test result, we may endow the test with a rejection region $R$ — whenever $t(D)$ is in this region, we reject $\theta_0$, accepting it (and
thereby rejecting $\theta_a$) whenever $t(D)$ is not in $R$. Usually, these rejection regions come in one of two forms:
1. $R = \{x \in \mathbb R \mid x > c\}$ for some $c$, or, symmetrically, $R = \{x \in \mathbb R \mid x < d\}$. Tests with these rejection regions yield rejections of the null hypothesis when their
values are above (resp. below) the pre-determined values $c$ (resp. $d$).
Because the rejection region is in either case a single wing of the real line, these are commonly called one-tailed tests, this tail either being on the left ($x < d$) or the right ($x > c$).
2. $R = \{x \in \mathbb R \mid |x| > e\}$ for some $e$. A test with this rejection region yields a rejection of the null hypothesis when its value is above $e$ or below $-e$.
Because the rejection region lies on both sides of the real line, these are commonly called two-tailed tests.
Visual diagram comparing the rejection regions of one-tailed tests with two-tailed tests. Source.
Neyman and Pearson's strategy was to, by controlling the boundaries of the rejection regions (the values of $c, d$, and $e$ in the above sets), control the probabilities of type I and type II errors.
By convention, they referred to the type I error, or false positive rate, of a given test as $\alpha$. The type II error, or false negative rate, is referred to as $\beta$. Let's define these
formally: $$\alpha = P(\text{Type I Error}) = P(t(D) \in R \mid \theta = \theta_0)\qquad \beta = P(\text{Type II Error}) = P(t(D) \notin R \mid \theta = \theta_a)$$
The only way to get a test with no chance of a false positive, $\alpha = 0$, is to always accept the null hypothesis, which forces $\beta = 1$; conversely, the only way to get a test with no chance
of a false negative, $\beta = 0$, is to always reject the null hypothesis, which forces $\alpha = 1$. It's clear that we must develop some way to characterize the tradeoff between $\alpha$ and $\
!tab Neyman and Pearson's approach to this was to first fix some $\alpha$, say $\alpha = 0.05$, and then find a test that minimized $\beta$ while maintaining that $\alpha$. To this end, they
introduced the concept of power. The power of a test is the probability that it will reject the null hypothesis when that is the right thing to do. $$\operatorname{Power} = P(t(D) \in R \mid \theta =
\theta_a) = 1 - P(t(D) \notin R \mid \theta = \theta_a) = 1-\beta$$
!tab Since the power of a test is simply $1-\beta$, this isn't exactly a new concept — but minimizing $\beta$ is equivalent to maximizing $1-\beta$, which allows us to rephrase the question in terms
of finding a most powerful test for some given $\alpha$. The lingo for such a test is an “$\alpha$-level most powerful” test.
!tab P-values are still legible in the Neyman-Pearson framework, since we can still speak of the probability of getting a test result more extreme than an observed one. However, the p-value is no
longer the important criterion — the rejection region is. If the p-value isn't at least as small as the fixed $\alpha$, we won't reject, since the test statistic won't have been extreme enough to hit
the rejection region, but even when it is, we'll still report the chance of error of our procedure as $\alpha$, even if the p-value is absolutely microscopic.
!tab Neyman and Pearson's justification for this inflexible protocol was given in the form of a frequentist principle they propounded, which is reproduced in this paper: “In repeated practical use of
a statistical procedure, the long-run average actual error should not be greater than the long-run average reported error”. If we stick with reporting an error rate of $\alpha$, then our actual error
rate should be no greater than our reported rate, whereas if we play fast and loose with p-values, which are not error rates per se, we may end up violating this principle.
The Neyman-Pearson Lemma
As mentioned, the Neyman-Pearson approach relies on selection of a test endowed with a rejection region such that (a) the false positive rate $\alpha$ of the test is equal to some predetermined
value, and (b) the false negative rate $\beta$ of thes test is as low as possible — equivalently, the power $1-\beta$ is as high as possible.
!tab Remarkably, they managed to find the general form of such $\alpha$-level most powerful tests, an achievement that considerably bolstered their position by rendering it practical. Their result,
known as the Neyman-Pearson lemma, goes like this: for a distribution $P(x \mid \theta)$ and null and alternative hypotheses $H_0: \theta = \theta_0,\ H_a: \theta = \theta_a$, the test statistic and
rejection region of the $\alpha$-level most powerful test $t$ is given by $$t(D) := \frac{{\mathcal L}(\theta_0 \mid D)}{{\mathcal L}(\theta_a\mid D)}\quad \quad R = \left\{x \in \mathbb R \mid x > \
eta\right\} $$
Here, ${\mathcal L}(\theta_0 \mid D)$ is the likelihood function, equal to $P(D \mid \theta_0)$; we rewrite it to emphasize that it is a function of $\theta$ rather than $D$, and that it is not a
probability distribution over $\theta$. The number $\eta$ is chosen so as to make the false positive rate equal to the pre-specified $\alpha$ — the calculation has to be done on a case-by-case basis.
💡 Neyman-Pearsonian statistics seeks to devise methods that can form decisive conclusions from data with controllable error rates.
The Neyman-Pearson/Fisher Debate
As mentioned, the debate between Neyman-Pearson and Fisher was very ferocious and very drawn-out, with both sides slightly updating their positions over the years in ways that make the debate hard to
follow without studying the full history. Since we don't have time for that, I'll try to sketch the main points of each side.
My two primary sources for this section are:
• Lehmann's “The Fisher, Neyman-Pearson Theories of Testing Hypotheses: One Theory or Two?” (link)
• Lenhard's “Models and Statistical Inference: The Controversy Between Fisher and Neyman-Pearson” (link)
Critique of Fisherian Testing
P-values aren't frequentist error rates
It's possible to construct situations in which reported $p$-values violate the aforementioned frequentist principle about making sure reported error never exceeds actual error in the long run. I'll
construct such a situation in the indented section below; as far as I know this construction is original, though you can skip it and get straight to the punchline.
!tab Suppose that I generate $n$ samples from ${\mathcal N}(0, 1)$ and $n$ more samples from ${\mathcal N}(m, 1)$. I give you one of these sets at random, affirm that it's normally distributed with
variance $1$, and ask you to check that the mean is $0$, masking the fact that it may not be $0$. You set up your $z$-test with null hypothesis $H_0: \mu = 0$. Half the time, your $z$-value is going
to be randomly sampled from ${\mathcal N}(0, 1)$. The other half of the time, it's going to be distributed as ${{\mathcal N}}(m\sqrt n, 1)$. In both cases, the $p = 0.05$ marker will be at $|z| =
Suppose now that your test does come up as $p < 0.05$. The probability of this happening is given by $$P(p < 0.05) = P(p < 0.05 \mid \mu = 0)P(\mu = 0) + P(p < 0.05 \mid \mu = m)P(\mu = m)\\ = 0.025
+ \frac12F(-1.96) + \frac12(1-F(1.96)) = 0.525 + \frac12\left(F(-1.96)-F(1.96)\right)$$ where $F$ is the cumulative distribution function for ${{\mathcal N}}(m \sqrt n, 1)$. What is the probability
that $H_0$ is true? $$P(\mu = 0 \mid p < 0.05) = \frac{P(p < 0.05 \mid \mu = 0) P(\mu = 0)}{P(p < 0.05)} \\= \frac{(0.05)(0.5)}{0.525 + \frac12\left(F(-1.96)-F(1.96)\right)} = \frac{0.05}{1.05 + F
(-1.96)-F(1.96)}$$ If $m = \frac12$ and $n = 5$, this probability will be $0.199$. So, even if you set your threshold for rejecting the null hypothesis at $p < 0.05$, in the long run $20\%$ of your
rejections will be wrong! By tailoring $m$ and $n$ we can make this percentage lie anywhere between $\frac{0.05}{1.05} \approx 4.8\%$ and $50\%$.
!tab Hence, the use of $p$-values alone violates the aforementioned frequentist principle: the reported error rate can be much greater than the actual error rate in the long run. This is why we
cannot treat $p$-values as false positive rates, and why we need an alternative hypothesis to calculate the false positive rate with respect to if we are to report error rates.
No clear test to use
To test a hypothesis $\theta = \theta_0$, we need a test statistic $t$ for the assumed underlying distribution $P(x \mid \theta)$. Which statistic should we use, though? Different statistics
undoubtedly have different distributions, which may lead us to make different inferences even at the same $p$-value thresholds. The Neyman-Pearson approach answers in terms of power: we should choose
the test $t$ that maximizes power, the probability of rejecting the null hypothesis when the alternative is true. They even specified the form of such a test via their lemma. Hence, the
Neyman-Pearson approach has a clear way to choose a test; Fisher's approach does not.
💡 Fisherian statistics has no real way to control error rates, since p-values can not be treated as error rates, and it offers no clear way to determine whether one test is better than another for a
given purpose.
Critique of Neyman-Pearson Testing
Error values are inflexible
The false positive rate $\alpha$ is the rate at which we reject a true null hypothesis. Because it does not depend on the alternative hypothesis, only the rejection region of the test, we can
calculate it without reference to any alternative hypothesis. If you design a test with $\alpha = 0.05$, and end up rejecting the null hypothesis, the Neyman-Pearson framework advises you to report
your possible error as $0.05$, because that's the overall probability of incorrectly rejecting the null hypothesis.
!tab As intuitive as this rule is, it can sometimes lead to nonsensical behavior. If we suppose that the men from some tribe in the Amazon rainforest are all tightly clustered around an average
height of 5'3” — making this the null hypothesis — only to pick twenty men at random and find that they're all over 7'0”, we can be sure that the chance of this being a false positive is well under
$0.05$. Nevertheless, if our original test was made to have $\alpha = 0.05$, that's what we'd have to report as the chance of a false positive. Of course, the $p$-value of this finding — the chance
that we'd get twenty goliaths in a row when the average truly is clustered tightly around 5'3” — is as astronomically small as it should be. Not all rejections are the same, and the Neyman-Pearson
approach to error reporting is too inflexible to handle this.
!tab The most obvious way to fix this — to report not the predetermined false positive rate of the test but the probability of the null hypothesis producing a result at least as outrageous as yours —
simply reproduces $p$-values, which we have already shown cannot be treated as error rates. The second most obvious way to fix this by finding the probability of the null hypothesis conditional on
the test statistic, but that boils down to the Bayesian method. This is a genuinely difficult problem.
No clear alternative to use
A null hypothesis is merely a proposition about a single value, and we can perform one of a great many tests to find evidence in favor of this proposition. Neyman and Pearson demand that we select a
test to use based on power, but power, being the probability that a test rejects the null hypothesis given that the alternative hypothesis is true, depends on a particular choice of alternative
hypothesis $H_a$. If you pick a different $H_a$, you get a different formula for power, and therefore have to pick a different test. It follows that, as Senn puts it here, “[test] statistics are more
primitive than alternative hypotheses and the latter cannot be made the justification of the former”. Neyman and Pearson have simply shifted the ambiguity from choice of test to choice of alternative
!tab Fisher's main critique to the use of alternatives came from his experience with the way actual science was done at the time. His contention was that in any given experiment the experimenter
knows what they are looking for, and which hypothesis they want to test; all the experimenter needs is a way to tell whether they should take notice of some data that is anomalous with respect to
their hypothesis. Why complicate things unnecessarily by throwing in a second hypothesis?
💡 Neyman-Pearsonian statistics has us throw out clear conclusions in order to maintain an inflexible long-term error rate, and only manages to shift the Fisherian ambiguity re testing to a new
ambiguity re alternative hypotheses, rather than eliminating ambiguity.
The Point of Statistics: Inductive Inference or Inductive Behavior?
Both Fisher and Neyman-Pearson saw their fundamental disagreement as one over the role of statistical testing. As Mayo explains in her 1985 “Behavioristic, Evidentialist, and Learning Models of
Statistical Testing” (pdf), the Neyman-Pearsonian model views the point of statistical testing as providing rules for behavior — namely, the acceptance and rejection of hypotheses — which help
prevent us from making errors too often in the long run. Their viewpoint is commonly called inductive behavior.
!tab Fisher, however, strongly believed that the process of actual decision-making could not be mathematized away, since it relied on a whole universe of scientific intuition, pragmatic constraints,
and, most importantly, an understanding about the scope and applicability of the model. It was, Fisher argued, the role of the statistician to mediate between the statistical model and the thing it
is modelling, using the model as an aid to determine how much evidence some data provides for some belief; this viewpoint is known as inductive inference.
!tab Hence, their debate hinges on their differing interpretations of statistical inference — is it meant to guide us in our decisions, or is it meant to help us interpret the evidence provided by
the data? Unfortunately, this aspect of the debate — if not the very content and existence of the debate — has been elided by most instructional textbooks, as historicized by Halpin and Stam in their
2006 “Inductive Inference or Inductive Behavior: Fisher and Neyman: Pearson Approaches to Statistical Testing in Psychological Research” (pdf).
Exploratory Data Analysis
Of course, there are other interpretations. The most popular of these is Tukey's exploratory data analysis (EDA), a method developed in the 60s and 70s that emphasizes exploratory analysis of the
data in order to figure out how to model it and what hypotheses to test, thereby “letting the data speak for itself”. To this end, he either invented or popularized many tools for getting a
qualitative look at the data without having a model in mind, as described on his Wiki page. This decisively contradicts Neyman-Pearson's behavioral approach, but only mildly concords with Fisher's
evidential approach, as Tukey was even more bearish than Fisher on the validity of any particular model. Through exploratory data analysis, statistics is explicitly interpreted as another tool for
dialectically building a scientific nature of understanding.
!tab The error of choosing an inappropriate model has been called an “error of a third kind”, in a humorous continuation of Neyman-Pearson's errors of the first and second kinds. We shall have more
to say about model selection later, after we've introduced the Bayesian approach to statistical inference.
!tab Of course, when there is a scarcity of data, looking at what data you do have in order to formulate hypotheses will result in hypotheses that are far more likely to apply to your data than to
the population, since you came up with those hypotheses specifically in response to that data; this is known as the problem of post hoc theorizing. Better to come up with hypotheses on a limited set
of data and see if they extrapolate to the rest of the data. Sometimes it's extremely hard to obtain extra data, in which case we're just kinda screwed; Tukey called this kind of situation “
uncomfortable science”.
💡 The Fisher/Neyman-Pearson debate is, at its core, a question over the purpose of statistics; alternative interpretations, such as exploratory data analysis, have been proposed, but have their own
issues as well.
A Curious Note: Fiducial Inference
One way that frequentists reason about the parameter $\theta$ underlying some probability distribution $P(x \mid \theta)$ is to construct confidence intervals for $\theta$: a confidence interval for
$\theta$ with fixed confidence level $c$ consists of a pair of statistics $s_c, t_c$ of $x$ such that $P(s_c(x) < \theta < t_c(x)) = c$.
!tab At first glance it might seem like this probability concerns the distribution of $\theta$, but it does not: being frequentists, we do not view $\theta$ as having a probability distribution. It
just has a value, and correspondingly it is either in the interval or not. The long-run frequency in which it will be in the interval corresponding to the $x$ sampled on any given trial is given by
$c$. (Since Bayesians take probabilities to be beliefs, they can form a distribution over $\theta$ — they understand that it is fixed, they're just not certain about where it's fixed, and as such
their beliefs as to the location assemble into a probability distribution. Their version of the confidence interval is known as the credible interval).
!tab One of Fisher's stranger statistical ideas was the use of these confidence intervals to form a distribution over the parameter $\theta$ itself. Prima facie, this distribution will depend on the
sampled data, since we have to calculate the statistics somehow, but there is a way around this — there are certain functions of the data, known as pivotal quantities, the distributions of which do
not depend on $\theta$, even though the data is sampled from $P(x \mid \theta)$. We saw one example of this with the $z$-score: if $D= \{d_1, \ldots, d_n\} \sim {\mathcal N}(\mu, \sigma^2)$, the
value $z = \frac{\overline D - \mu}{\sigma/\sqrt{n}}$ is distributed as ${\mathcal N}(0, 1)$. This distribution is independent of the parameters $\mu, \sigma^2$, making $z$ a pivotal quantity. Note
that this pivot is not a statistic — a statistic does not directly depend on the parameters of the distribution, whereas $z$ explicitly depends on $\mu$ and $\sigma$.
Fisher's so-called fiducial approach is as follows: first, find $a, b$ such that $P(a < \frac{\overline D - \mu}{\sigma/\sqrt n} < b) = \alpha$ — since the middle term is the $z$-value, this is
simple. Now, rearrange as follows: $${\small \alpha = P(a < \frac{\overline D - \mu}{\sigma/\sqrt n} < b) = P(\frac{a\sigma}{\sqrt n} < \overline D - \mu < \frac{b\sigma}{\sqrt n}) =P(\overline D - \
frac{b\sigma}{\sqrt n} < \mu < \overline D - \frac{a\sigma}{\sqrt n})}$$
Having moved to the equivalent expression on the right-hand side, we now treat the statistic $\overline D$ as fixed, and the parameter $\mu$ as variable, using this rearrangement to find the
probability that $\mu$ lies within some region. This is known as the fiducial probability distribution of $\mu$.
!tab Obviously, by treating $\mu$ as variable, Fisher made an enemy of many frequentists, who conceptualized $\mu$ as a fixed constant already existing in the world. He may have gotten away with this
if not for the fact that this approach was also mathematically unsound at best: it has been shown that this does not actually generate a probability distribution on $\mu$, because additivity does not
necessarily hold, and even Fisher himself admitted that “I don't understand yet what fiducial probability does. We shall have to live with it a long time before we know what it's doing for us”. As
such, fiducial inference didn't get very far, lasting only as a curious footnote in the history of statistical inference.
Likelihoodism is a school of statistical inference that can neither be described as frequentist nor as Bayesian. As the name suggests, it focuses on the use of the likelihood function ${\mathcal L}(\
theta \mid D) := P(D \mid\theta)$; likelihoodists claim that all evidence that $D$ provides about $\theta$ is contained within this function, a claim known as the likelihood principle. The
Neyman-Pearson lemma provides some modest support for the likelihood principle, since it states that the most powerful hypothesis test at any given level is precisely the quotient of the likelihoods
of the two hypotheses with respect to the sample data.
Birnbaum, in his 1962 “On the Foundations of Statistical Inference” (pdf), actually derives the likelihood principle from two simpler principles, which I'll paraphrase here (the actual paper being
rather obtuse):
• The principle of sufficiency: If an experiment $E$ is divided up into several subcomponents $\{E_\lambda\}_{\lambda \in \Lambda}$, such that $\lambda$ is an ancillary statistic of some
distribution and each $E_\lambda$ has its own set of outcomes, then observing outcome $x^\lambda_i$ of experiment $E_\lambda$ provides exactly as much evidence about the underlying distribution
as does observing outcome $x^\lambda_i$ of experiment $E$, without knowing that it comes from $E_\lambda$.
• The principle of conditionality: If $t$ is a sufficient statistic for some parameter $\theta$, and two experiments $E_1$ and $E_2$ yield outcomes $x_1$ and $x_2$ such that $t(x_1) = t(x_2)$, then
the evidence about $\theta$ provided by each of the two experiments is the same.
Birnbaum argues that the likelihood principle is true if and only if both of these two principles are true; since he assumes both of them to be obvious, he concludes that he has proved the likelihood
principle. Naturally, lots of criticism has been directed at this justification, as explained on the Wikipedia page.
Bayesian Inference
Bayesian Testing and Estimation
For the Bayesian, most of the above issues are sidestepped: probabilities are beliefs, and given a model $P(x \mid \theta)$, a prior $P(\theta)$ gives us a degree of partial belief for every possible
hypothesis. Given some data $D$, we know exactly how to update our beliefs in these hypotheses: $$P(\theta = \theta_0 \mid D) = \frac{P(D \mid \theta =\theta_0) P(\theta_0)}{\int_{\Omega} P(D \mid \
theta = \theta_1)P(\theta_1)\, d \theta_1}$$
This is the rational update to make to our belief. We need not confuse ourselves with issues of whether this data is significant or insignificant evidence, whether we have to either reject or accept
any particular hypothesis based on the data, or whether we have to commit to any other particular course of action — Bayesian probability is the logic of partial belief, and, by restricting itself to
belief, extends to a statistical framework that manages to sidestep many of the issues of Fisherian and Neyman-Pearsonian statistics.
!tab Because the Bayesian framework doesn't introduce much in the way of new machinery, as the frequentist schools do with hypothesis tests, error rates, and so on, we'll almost immediately have to
delve right into mathematics to really see what's going on, rather than critiquing this surface-level machinery as we did with the frequentist schools. As such, this section will be highly
!tab A note which exemplifies this: in the denominator of the right hand side of Bayes' law, as written above, we've performed an implicit marginalization. This is the process of removing a parameter
from a probability distribution by taking the expectation with respect to that parameter. This is a perfectly legitimate process with respect to the laws of probability, and makes intuitive sense:
the probability of sampling $D$ is equivalent to the probability of sampling $D$ and having $\theta = \theta_1$ for some $\theta_1$, for $\theta$ must always equal some value. Hence, $P(D)$ should be
a summation over all $\theta_1 \in \Omega$ of $P(D, \theta=\theta_1)$. Since $\theta$ is a continuous variable, we must perform an integral: $$P(D) = \int_{\Omega} P(D, \theta=\theta_1)\, d\theta_1 =
\int_\Omega P(D \mid \theta = \theta_1)P(\theta_1)\, d\theta_1 = \mathbb E_\theta[P(D\mid\theta)]$$
The only problem with this is that it can be tricky to calculate, since there's no guarantee that this integral is tractable; in such cases, we may have to turn to alternative computational methods
such as Variational Bayes or Markov Chain Monte Carlo, both designed to deal with this problem.
!tab At the same time, because it is only a logic of partial belief, it is not naturally suited to many of the tasks that any framework for statistical inference finds itself asked to perform. Hence,
if we consider Bayesianism to be analogous to a computer, then Bayesian probability — the framework of priors, conditionalization, and marginalization — is its kernel, the set of tasks it performs
natively; Bayesian statistical inference, on the other hand, is analogous to a software suite running on this computer, each program in which uses the language of Bayesian probability to perform
tasks well beyond the ambit of Bayesian probability.
I'll give a couple examples of these programs.
Meeting The Demands of Statistical Inference
1. Bayesian estimation: Given some data $D \sim P(x \mid \theta)$ and a prior $P(\theta)$, the classical way that a Bayesian would come up with a single estimate for $\theta$ is to first calculate
the posterior $P(\theta \mid D)$, and then to take the mode of this posterior distribution, yielding the estimate $$\widehat \theta(D) := \operatorname{argmax}_{\theta} P(\theta \mid D)$$ (This
is an example of an estimator, or a function of the data designed to yield an estimate of some parameter. An estimator of a parameter $\theta$ is generally denoted by $\widehat \theta$, as
The estimator given by the mode of the posterior distribution of the parameter $\theta$ is known as the maximum a posteriori (MAP) estimator for $\theta$. An example of MAP estimation for a
normal distribution with known variance and unknown mean is given in Appendix B.5; it turns out to be an elegant weighted average of the sample mean and the mean of the prior placed over the
unknown mean $\mu$.
There are other methods of doing Bayesian estimation, most of which have the following structure:
1. Construct a loss function $L(\theta_1, \theta_2)$, such as the mean squared error $L(\theta_1, \theta_2) :=(\theta_1-\theta_2)^2$.
2. Define a corresponding functional $F(\widehat\theta) := \mathbb E_{\theta, D} \left[L(\widehat\theta (D), \theta)\right]$.
3. Find the estimator $\widehat\theta$ that minimizes this functional.
If we use mean squared error loss, for instance, the estimator that arises is given by the mean of the posterior distribution, as shown in appendix B.6.
2. Bayesian hypothesis testing: If we really want to get a single number representing the strength of one hypothesis $H_0: \theta = \theta_0$ over another hypothesis $H_a: \theta = \theta_a$, we may
calculate the Bayes factor of the two hypotheses. Given some data $D \sim P(x \mid \theta)$, this factor is calculated as $$K := \frac{P(D \mid \theta = \theta_0)}{P(D \mid \theta = \theta_a)}$$
That's it. A value of $K$ over $1$ indicates support for $H_0$, while a value of $K$ under $1$ indicates support for $H_a$. The more extreme this value is, the stronger the evidence; one table
commonly used to convert from Bayes factors to qualitative judgements is as follows:
□ $K < 1$ : Negative
□ $1 < K < 3$ : Trifling
□ $3 < K < 10$ : Substantial
□ $10 < K < 30$ : Strong
□ $30 < K < 100$ : Very strong
□ $100 < K$ : Decisive
💡 The Bayesian approach to statistics consists of a bunch of heuristics and rules loosely tacked on to the Bayesian approach to probability; it's more like engineering, and is of questionable
philosophical coherence.
The Neverending Search For Objective Priors
The biggest weakness in the Bayesian approach to statistical inference is, as with Bayesianism in general, the generation of priors; this falls prey to the same critiques we discussed in the previous
article. As mentioned in that article, it is primarily objective Bayesians who concern themselves with finding priors that could in some sense be considered “canonical”, bringing in no extraneous
information; correspondingly, the subfield of Bayesian inference that relies on such priors is objective Bayesian inference. Of course, the grass isn't much greener on the other side: eschewing
objectivity in our priors forces us to bring in our own assumptions and biases, thereby skewing our analysis in the best case, and in the worst case forcing this analysis to simply tell us what we
already knew.
!tab First, we'll study methods of generating “perfect” objective priors, which bring in absolutely no information. There are at least five ways to do this: use of the principle of maximum entropy to
generate a prior, the use of an algorithmic universal prior, calculation of the Jeffreys, reference, maximal data information, or Haar priors, and empirical Bayesian methods for calculating a
reasonable prior from some data. We've covered algorithmic universal priors and the principle of maximum entropy previously, so we'll cover the other other five methods here.
💡 Bayesian statistics inherits the objectivity issues of Bayesian philosophy.
Jeffreys, Reference, Maximal Data Information, and Haar Priors
Fisher Information and the Geometry of Parameter Space
For some parametrized probability distribution $P(x \mid \theta)$, the Fisher information of $\theta$ is given by $$I(\theta) := \mathbb E \left[\left(\frac{\partial \ln P(x \mid \theta)}{\partial \
When there are multiple parameters, $\vec \theta = [\theta_1,\ldots, \theta_n]^T$, we may generalize this to get an $n \times n$ matrix as follows: $$[I(\vec \theta)]_{ij} := \mathbb E \left[\left(\
frac{\partial \ln P(x \mid \theta_i)}{\partial \theta_i}\right)\left(\frac{\partial \ln P(x \mid \theta_j)}{\partial \theta_j}\right)\right]$$
This is the Fisher information matrix (FIM). (Clearly, if there's only one parameter, then the FIM is the $1\times 1$ matrix containing the Fisher information of that sole parameter). It is designed
to capture a rather intuitive property of the parameter space: changes of some parameters qualitatively affect the distribution more than changes of other parameters, and this relative change depends
on the value of the parameters when they are changed, in the same way that the relative change of some function depends on the value of the function at the point that it is changed.
!tab At some fixed parameter vector $\vec \theta_0$, the infinitesimal change in information that an infinitesimal parameter change $d\vec \theta$ will evoke is given by the product $[(d \vec \theta)
^T I(\vec \theta_0) (d\vec\theta)]^{-1/2}$, which is a scalar of order $O(||d\vec \theta||)$. This product induces a metric on the tangent space to each point of the parameter space, thereby endowing
it with a local geometry.
!tab To really understand this notion requires a course in differential geometry, which I'm not going to provide here. The essential idea is that the Fisher information matrix can be used to endow
the parameter space with a space-like geometry, complete with curvature and geodesics. This is an extremely powerful idea, and motivates the field of information geometry, the study of such parameter
spaces in their geometric aspect. What is relevant for our purposes, though, is that this space has a “local volume” — it stretches and contracts as you move through it, in the same way that the
fabric of the universe stretches and contracts (due to the presence of mass-energy). The quantity measuring the amount of stretching at any given point, i.e. the Riemannian volume form, has magnitude
$|\det I(\vec \theta)|^{-1/2}$; this magnitude is entirely a property of the local geometry of the space, and remains the same under changes of coordinates.
A pseudo-Riemannian metric captures the stretching of some space. GR's metric is dependent on the mass-energy distribution throughout physical space. (Source)
With respect to the FIM metric for the normal parameter space, these are perfect circles of equal size. That they look elliptical indirectly displays the stretching of the parameter space. The volume
form increases as $\sigma^2$ gets smaller; calculating it gives $|\det I(\vec \theta)|^{-1/2} = \sigma^{-1}$. (Source)
The Jeffreys Prior
The volume form derived from the Fisher information metric is what is used to define the Jeffreys prior: $$P(\vec \theta) \propto |\operatorname{det} I(\vec \theta)|^{-1/2}$$
It corresponds to a sort of content-aware scaling on the parameter space, and is invariant under reparametrization.
An example: take the Bernoulli distribution $\operatorname{Bern}(p)$, given by $P(x \mid p) = p^x(1-p)^{1-x}$. The Fisher information of this distribution at parameter $p$ is given by $$I(p) = \
mathbb E\left[\left(\frac{\partial}{\partial p}(x \ln p + (1-x)\ln(1-p))\right)^2\right] = \sum_{x \in \{0, 1\}} \left(\frac{x}{p} -\frac{1-x}{1-p}\right)^2p^x(1-p)^{1-x} = \frac{1-p}{(1-p)^2} + \
frac{p}{p^2} = \frac{1}{1-p}+\frac1p = \frac{1}{p(1-p)}$$
So the Jeffreys prior for $p$ is, prior to normalization, $\sqrt{|I(p)|} = \frac{1}{\sqrt{p(1-p)}}$. This integrates to $\pi$ (Wolfram|Alpha), giving us a normalized prior $P(p) = \frac{1}{\pi\sqrt{p
(1-p)}}$. Fortunately, this is precisely $\operatorname{Beta}\left(\frac12, \frac12\right)$, the conjugate prior for the Bernoulli distribution, so we know how to do inference with it. (Since the
cumulative distribution function of this distribution is given by $P(X \le x) = \frac2\pi \arcsin(\sqrt{x})$, this is sometimes called the arcsine distribution as well).
$\operatorname{Beta}\left(\frac12, \frac12\right)$, the Jeffreys prior for the single parameter $p$ of the Bernoulli distribution. Also known as the arcsine distribution. (Source).
Note that this isn't even uniform. However, uniformness is not invariant under reparametrization. If we let our variable be not the per-trial chance of a success but the logarithm of the chance of a
success, $\ell = \ln p$, then $P(\ell = x) = \frac{d}{dx} P(\ell \le x) = \frac{d}{dx} P(\ln p \le x) = \frac{d}{dx} P(p \le e^x) = e^x$. This is normalized, since $\ell$ is necessarily negative: $\
int_{-\infty}^0 P(\ell) d\ell = e^\ell|_{-\infty}^0 = e^0 - e^{-\infty} = 1$, but it is clearly not uniform. (This is one major problem with uniform distributions: when over infinite sets, they
depend on the particular parametrization of the problem. In the previous article, this phenomenon was the cause of Bertrand's paradox: there is no single uniform distribution over chord space, only a
uniform distribution over some particular parametrization of chord space).
!tab Unfortunately, there's a reason I had to define the Jeffreys prior as only being proportional to the volume form: Jeffreys priors are often not normalizable. For instance, the Fisher information
of the mean of a normal distribution with known variance is given by $I(\mu) = 1/\sigma^2$. The unnormalized Jeffreys prior is therefore $P(\mu) = 1/\sigma$, but this is constant with respect to $\
mu$, and therefore $\int_{-\infty}^\infty P(\mu)\, d\mu = \frac1\sigma \int_{-\infty}^\infty d\mu = \infty$.
!tab This is often the case with Jeffreys priors, a terrible drawback to such an elegant method. This issue will repeatedly come up as we explore more methods of generating objective priors, and we
shall discuss it in detail later.
Reference Priors
Another strategy to construct a noninformative prior is to take the word “noninformative” literally, and construct a prior whose expected information gain upon conditioning upon some observation is
as large as possible.
Typically, the quantity used to measure the change in information effected by moving from one distribution $Q$ to another distribution $P$ is the Kullback-Leibler divergence, defined as $$\
operatorname{KL}(P \mid\mid Q) := \int_{-\infty}^\infty P(x)\ln\frac{P(x)}{Q(x)}\, dx$$
To construct this noninformative prior, then, we'll take some some sufficient statistic $t$ and attempt to maximize the information differential between $P(\theta \mid t)$ and $P(\theta)$, which can
be expressed in terms of the following integral: $$\mathbb E_t\left[\operatorname{KL}(P(\theta) \mid\mid P(\theta \mid t))\right] = \int P(t)\int P(\theta \mid t) \ln \frac{P(\theta \mid t)}{P(\
theta)}\, d\theta\, dt = \iint P(\theta, t) \ln \frac{P(\theta, t)}{P(\theta)P(t)}\, d\theta\,dt$$
The prior $P(\theta)$ that maximizes this value is known as the reference prior for $\theta$. Amazingly, the reference prior for a parameter is often equivalent to the Jeffreys prior for that
parameter; less amazingly, this causes reference priors to inherit the non-normalizability problems of Jeffreys priors.
!tab Furthermore, one might question the choice of the Kullback-Leibler divergence. This divergence isn't even an actual metric on the space of probability distributions, since it isn't symmetric; if
we were to choose another formula for divergence, of which there are many, we could get a different answer, as shown by Ghosh's “Objective Priors: An Introduction for Frequentists” (pdf).
Maximal Data Information Priors
Given a distribution $P(x)$, the entropy of $P(x)$ is defined by $$H[P(x)] := \mathbb E_{x}[\ln P(x)^{-1}] = -\int_X P(x)\ln P(x)\, dx$$
This is an information-theoretic concept like the KL divergence, though its meaning is a bit tricky. For our purposes, we may consider it as a measure of the information inherent to some
distribution. This immediately suggests another way to construct a noninformative prior, in a manner analogous to the construction of the reference prior: given $P(x \mid \theta)$: maximize the sum
of the entropy of the prior $P(\theta)$ with the expected entropy of $P(x \mid \theta)$, thereby constructing a prior which is both noninformative and produces a noninformative model. Hence, what we
want to maximize is the functional $$F[P(\theta)] = H[P(\theta)] + \mathbb E_{\theta}\left[H[P(x \mid \theta)]\right] \\ = -\int_\Omega P(\theta)\ln P(\theta)\, d\theta - \int_\Omega P(\theta)\int_X
P(x \mid \theta) \ln P(x \mid \theta)\, dx\, d\theta$$
The prior that maximizes this functional is known as the maximal data information prior for $\theta$. Often, this yields priors that are different from the above methods: Kass and Wasserman's “The
Selection of Prior Distributions by Formal Rules” (pdf) concerningly says of Zellner's method that it “leads to some interesting priors”. Indeed, it seems rather arbitrary to me to sum the entropy of
$P(\theta)$ with the expected entropy of $P(x \mid \theta)$.
!tab However, I learned the hard way that this approach falls prey to the same non-normalization problem: after being unable to find the MDI prior for the normal distribution with known mean and
unknown variance, I tried calculating it myself (see Appendix B.3), only to find that it was equal to the non-normalizable Jeffreys prior $P(\sigma^2) \propto \sigma^{-1}$. Hence, we get the fatal
flaw of the above approaches plus a spoonful of apprehension.
Haar Priors
🚨 Math warning: group theory, measure theory
If we can't construct a probability distribution over a parameter $\theta$ that is invariant under all transformations, as the Jeffreys prior is, perhaps we can construct a probability distribution
that is invariant under some collection of specified transformations. Because it makes the theory significantly easier, we assume that these transformations form a group: they are closed under
composition, they all have inverses which are all specified transformations, and the identity transformation is a specified translation. Formally, any collection of elements that satisfies these
properties (with respect to some notion of composition and inversion) is called a group, and when each of these elements can be considered as a transformation on some set, e.g. the set of probability
distributions on $\theta$, we speak of a group action. Nevertheless, if you don't already know group theory, I advise you to skip this section.
!tab The first step in this approach is to take a sample space $X$ upon which a group $G$ acts. If $P(x \mid \theta)$ is a parametrized distribution on $X$, with parameter space $\Omega$, then in
certain cases the action of $G$ on $X$ may lift to an action of $G$ on $\Omega$. For instance, if $G$ is the group of affine transformations $x \mapsto ax+b$, and $P$ is ${\mathcal N}(\mu, \sigma^2)
$, then, since $P(ax+b \mid \mu, \sigma^2) = P(x \mid a\mu+b, a^2\sigma^2)$, the element $g \in G$ acting on $X$ by sending $x$ to $a x + b$ can be considered to act on $\Omega$ by sending $(\mu, \
sigma^2)$ to $(a\mu + b, a^2\sigma^2)$. Let $\overline G$ be the group acting on $\Omega$.
!tab Under certain conditions on $G$, measure theory guarantees the existence of two measures on $\overline G$: the left Haar measure $\mu_L$, which satisfies $\mu(\overline g S) = \mu(S)$ for all
measurable $S \subseteq \overline G$ and elements $\overline g \in \overline G$, and the right Haar measure $\mu_R$, which satisfies $\mu(S\overline g) = \mu(S)$. When the lifted group $\overline G$
can be identified with the parameter space $\Omega$, as was the case in the above example, these measures can be identified with measures on $\Omega$, known as the left and right Haar priors.
Kass and Wasserman, linked above, discuss qualitative differences between these priors on pg. 6, and quote a convincing argument by Villegas in favor of the right Haar prior: pick any parameter (set)
$\theta$, and define the map $\phi_{\theta}: G \to \Omega$ by $\phi_\theta(g) = g(\theta)$. Any measure $\nu$ on $G$ can be pushed forward by $\phi_\theta$ to a new measure $\mu$ on $\Omega$ given by
$\mu_\theta(S) = \nu(\phi^{-1}_\theta(S))$$\mu(S) = \nu(\phi^{-1}_\theta(S)) = \nu(\{g \in G \mid g(\theta) \in S\})$. Suppose that this is invariant under choice of $\theta$. Then, for every $h \in
G$ we'll have $\mu_{h(\theta)} = \mu_\theta$. But $\mu_{h(\theta)}(S) = \nu(\{g \in G \mid (gh)(\theta) \in S\}) = \nu(\phi^{-1}_\theta(S)h)$, so we must have $\nu(\phi^{-1}_\theta(S)h) = \nu(\phi^
{-1}_\theta(S))$, evidencing the importance of right translation invariance. Villegas finishes this line of thought to prove that this implies that $\mu_\theta$ is the right Haar prior on $\Omega$.
!tab This construction provides one way to get priors on the parameter $\theta$ where other constructions might fail. However, not only does it require many specific conditions and choices — we have
to have $G$ locally compact, we have to have an identification of the lifting of $G$ with the parameter space, and we have to choose between a left and right Haar prior, which may not be the same,
but it also often fails to yield a normalizable prior in many cases of interest.
!tab For instance, whenever $P(x \mid k, \ell)$ is of the form $\frac{1}{\ell}f((x-k)/\ell)$, the left and right Haar priors for the action $g_{a, b}(x) = ax+b, \overline g_{a, b}(k, \ell) = (ak+b, a
\ell)$ are given by $\mu_L(k, \ell) \propto \ell^{-2}$ and $\mu_R(k, \ell) \propto \ell^{-1}$. In particular, this result covers the normal distribution example above, via $k = \mu, \ell = \sigma$
and $f(r)=e^{-r^2/2}/\sqrt{2\pi}$, from which it follows that the left and right Haar priors are given by $P_L(\mu, \sigma^2) \propto \sigma^{-2}$ and $P_R(\mu, \sigma^2) \propto \sigma^{-1}$.
Neither of these are normalizable.
The principle of indifference, by the way, can be considered from a group-theoretic standpoint as the only probability distribution on a finite set $X$ of cardinality $n$ that is invariant under the
action of $S_n$ (the symmetric group of order $n$), when this group is considered as the group of all bijections $X \to X$.
💡 There are multiple different approaches to constructing “objective” priors — (1) use the inherent geometry of the parameter space, (2) maximize informational distance between prior and posterior,
(3) minimize joint information across prior and model, (4) enforce a fixed set of symmetries. These often (though not always) yield the same results, which lends some credence to their “objectivity”;
tragically, however, they all tend to generate non-normalizable priors.
Are Non-Normalizable Priors Okay?
We've seen that most of the methods used to generate objective Bayesian priors have the extraordinarily bad habit of generating measures that aren't even probability distributions: they integrate to
infinity, and therefore cannot be normalized; these non-normalizable priors are generally called improper. Sometimes, it will be true that the posterior obtained by an improper prior is proper, so
that we can go ahead and proceed with inference.
If you ask an objective Bayesian, they'll tell you this is almost always the case: for instance, given a normal distribution with known variance and unknown mean, setting a constant prior over the
mean, $P(\mu) = c$, gives us a posterior of $$P(\mu \mid x) = \frac{P(x \mid \mu)P(\mu)}{\int_{-\infty}^\infty P(x\mid \mu)P(\mu)\, d\mu} = \frac{\frac{c}{\sqrt{2\pi\sigma^2}}e^{-\frac12\frac{(x-\mu)
^2}{\sigma^2}}}{\int_{-\infty}^\infty \frac{c}{\sqrt{2\pi\sigma^2}}e^{-\frac12\frac{(x-\mu)^2}{\sigma^2}}\,d\mu} = \frac{e^{-\frac12\frac{(x-\mu)^2}{\sigma^2}}}{\int_{-\infty}^\infty e^{-\frac{\mu^2}
{2\sigma^2}}\,d\mu} = \frac{1}{2\pi\sigma^2}e^{-\frac12 \frac{(\mu-x)^2}{\sigma^2}}= {{\mathcal N}}(\mu \mid x, \sigma^2)$$
(Note that $x$, not $\mu$, plays the role of the mean in the RHS; the posterior on $\mu$ is a normal distribution with average given by the observed $x$ and variance given by $\sigma^2$). Hence,
while these improper priors are not actually proper, if we treat them as talking about the relative likelihood of picking one value over another, then we can have our cake and eat it too by being
able not only to interpret them, but to convert them into proper posteriors, giving us true objective priors (modulo the choice of method for generating those priors, at least). However, this does
not always happen; an example is given on page 17 of the Kass and Wasserman paper linked above. This not only makes use of methods that can generate improper priors unsuitable for general use, but
makes us question their coherence in general.
💡 Non-normalizable (improper) priors are sometimes okay, in the sense that they often generate proper posteriors; they do not always do so, though, which brings into question the coherence of any
method that generates them.
Empirical Bayesian Methods
These methods, unlike the previous ones, give us a way to construct prior propers. As usual, we have data $D$ drawn from a distribution $P(x \mid \theta)$, and the question is the generation of a
prior $P(\theta)$. Let's assume that this prior is itself parametrized (e.g., as a conjugate prior is), as $P(\theta \mid \gamma)$ — here $\gamma$ is a hyperparameter. Our previous approaches to
generating a prior that incorporates no additional information have failed: the maxentropy and Jeffreys priors either don't exist or are clearly useless (e.g., as a normal distribution ${\mathcal N}
(0, 1)$ would be useless to model the mass of the sun in kilograms), and universal priors like the Solomonoff would be intractable if we even knew how to specify them precisely in the situation at
!tab Empirical Bayes attempts to construct the prior $P(\theta \mid \gamma)$ from scratch using the observed data $D$, using as little extra information as possible, by choosing the $\gamma$ that
maximizes $P(D \mid \gamma) = \int P(D \mid \theta)P(\theta \mid \gamma)\, d\theta$.
An example: suppose I wanted to figure out the rate of spam emails I'll get next week, having gotten 50 this week and 75 last week. Since these spam emails are sent (for the most part) independent of
one another, I can say that these numbers are the result of spam emails coming in randomly at some rate $\lambda$.
!tab I can model this rate with a Poisson distribution $\operatorname{Pois}(\lambda)$: given a non-negative integer $k$, $P(k \mid \lambda)$ is the chance that I'll receive $k$ spam emails in a week
given that their rate paces out $\lambda$ spam emails in a week. This value is equal to $(\lambda^ke^{-\lambda})/k!$. What we're looking for is a prior over $\lambda$. The conjugate prior here is the
Gamma distribution $\operatorname{Gamma}(\alpha, \beta)$, given by $$P(\lambda \mid \alpha, \beta) = \frac{\beta^{\alpha}\lambda^{\alpha-1}e^{-\beta\lambda}}{\Gamma(\alpha)}$$
We want to use our data $D = \{75, 50\}$ to find the most likely values of $\alpha$ and $\beta$. As mentioned, we'll attempt to do this by maximizing $P(D \mid \alpha, \beta)$. I've shown in Appendix
B that this results in $\alpha/\beta$ being set equal to the sample mean, but I can't solve for $\alpha$ and $\beta$ individually (and I tried for hours); there might not be a closed-form solution
(or I might just be stupid?). It can be solved numerically, in any case, giving us a prior that does not incorporate additional information. One drawback, however, is that our prior already
incorporates the data itself, which is not what priors are supposed to do.
!tab There are many nonparametric approaches to empirical Bayesian estimation, which do slightly better morally by placing no assumptions on the nature of the hyperprior; these often work by finding
a hyperprior that minimizes some loss functional derived from the data, such as the Kiefer-Wolfowitz nonparametric maximum likelihood estimator. However, these tend to be significantly more difficult
to calculate, either theoretically or numerically.
💡 Empirical Bayesian methods allow for the generation of truly proper priors that don't incorporate information beyond the data, at the cost of “double-dipping” from the data.
All known schools of statistical inference suffer from severe flaws: frequentist schools from an inability to properly handle error and ambiguities in the choice of tests/hypothesis, and Bayesian
schools from a lack of canonical solutions to several problems that statistics is expected to perform and a difficulty consistently generating objective priors. These flaws are greatly exacerbated by
the fact that statisticians, both in practice and pedagogy, cover such foundational issues with all the vigilance of a dead rabbit; the most famous example of this is the near-universal misuse of the
p-value, a subtle idea which the rank-and-file statisticians driving science are simply not equipped, let alone incentivized, to properly interpret and apply.
!tab As with probability, then, it seems that the only option that we have for now is to contextualize our statistical inferences by investigating and clarifying the purpose of our investigations,
the rationale for our modeling and testing decisions, and the interpretation of the various numbers we're dealing with. It is so easy to fall into error for any one of a myriad of reasons each of
which is invisible to us until it blows up in our face, but paying attention to foundational issues will at least serve as a prophylactic against error — not a cure, since many errors arise from
incompatibilities in deeply held, often subconscious commitments to certain ideas about the ontology of the world and the nature of its entities — but at least a prophylactic.
Appendix A. Common Distributions
1. The Normal and Log-Normal Distributions
Also called the Gaussian distribution due to its discovery by Carl Friedrich Gauss, the normal distribution is so-called due to its ubiquity in the empirical sciences. The normal distribution $\cal N
(\mu, \sigma^2)$ is defined in reference to a mean $\mu$, around which normally distributed samples will congregate symmetrically, and a variance $\sigma^2$, denoting the average squared distance of
a sampled point from the mean.
PDFs of normal distributions with different parameters. Source.
The normal distribution $\cal N(\mu, \sigma^2)$ can be defined as the distribution with the maximum entropy among all distributions with mean $\mu$ and $\sigma^2$. As Wikipedia demonstrates, this
alone allows us to find its PDF: $$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac12 \frac{(x-\mu)^2}{\sigma^2}}$$
The main reason for the ubiquity of the normal distribution is the central limit theorem, which states that if we have a method of drawing random samples from one underlying population in an
independent manner (the idiosyncratic way of saying this is “independent and identically distributed” random variables), then the mean of these samples will converge to a normal distribution. Namely,
suppose that we call the $m$th draw from this population $X_m$, and let $S_n = \sum_{i=1}^n X_i$ be the $n$th partial sum. If the mean of the population is $\mu$ and the variance is $\sigma^2$, $S_n$
will converge to $\cal N(n\mu, n\sigma^2)$ as $n$ grows large. For instance, suppose each $X_i$ represents a fair coin flip, $X_i = 0$ if tails and $1$ if heads. The mean of the infinite population
of “possible” coin flips is $1/2$, while the variance is $1/4$. As such, the probability distribution of the sum of $n$ coin flips will, as $n$ grows large, converge to the normal distribution with
mean $n/2$ and variance $n/4$.
This isn't the sort of case where the conditions have to be just right for the sampled sum distribution to approximate the normal distribution — this happens in a very wide variety of cases, even
when the $X_i$ are neither independent nor identically distributed. It results in the general $e^{-x^2}$ term appearing in a wide variety of models of empirical processes.
For instance, in Brownian motion, an atom is jostled around randomly by other atoms with no preferred direction. Each jostle can be thought of as a single random movement of the atom along some
arbitrary axis, with mean zero (since no preferred direction) and variance $v$ in inches squared; if there are $r$ collisions per second, then after $t$ seconds there are $rt$ collisions, and we can
expect the location of the particle along the given axis to be distributed as $\cal N(0, rtv)$. In other words, if we place some marked atoms at a fixed point in a solution where they're subject to
Brownian motion, then after $t$ seconds we should expect them to have spread out in a circle with a density distribution matching the normal distribution along any particular axis. This is exactly
what we see, as per the diffusion equation.
The Log-Normal Distribution
Often, we have a large amount of independent variables being multiplied rather than added; since the logarithm of the product of a set of random variables is the sum of the logarithms of the
individual random variables, the CLT may tell us how these large products tend to look.
Suppose we sample some numbers $X_1, \ldots, X_n$ from a fixed random variable $X$ and multiply them. $\prod_{i=1}^n X_i = \operatorname{exp}\left(\sum_{i=1}^n \ln X_i\right)$, so if $\ln X$ has mean
$\mu$ and variance $\sigma^2$, $\sum_{i=1}^n \ln X_i$ will as $n$ grows large converge to a normal distribution with mean $n\mu$ and variance $n\sigma^2$. It follows that the distribution of the
product of the $X_i$ is the exponential of a normal distribution. A distribution which is the exponential of a distribution with mean $\mu$ and variance $\sigma^2$ is known as the log-normal
distribution on $\mu$ and $\sigma^2$, or $\operatorname{LN}(\mu, \sigma^2)$
The expected value of the exponential of some $X \sim \cal N(\mu, \sigma^2)$ is $e^{\mu + \sigma^2/2}$, so the mean of $\operatorname{LN}(\mu, \sigma^2)$ will be $e^{\mu + \sigma^2/2}$. The most
notable property of the log-normal distribution is its extremely long tail: the mean of $\operatorname{LN}(0, 4^2)$ is $e^8\approx 2981$, but the one in a hundred level is 11 thousand, the one in ten
thousand level is 2.9 million, and the one in a million level is 180 million. These distributions and their long tails show up wherever processes are mediated by many multiplicative factors;
Wikipedia gives an impressive list.
2. The Bernoulli and Binomial Distributions
Unlike most other distributions we've talked about, these distributions are strictly over the natural numbers: they assign probabilities to 0, 1, 2, and so on.
The Bernoulli Distribution
The Bernoulli distribution is perhaps the simplest possible nontrivial distribution: $\operatorname{Bern}(p)$ is defined as the distribution on {0, 1} sending $1$ to $p$ and $0$ to $1-p$.
Conceptually, we often consider this as representing the outcome of a probabilistic trial with exactly two outcomes (or, a Bernoulli trial). For instance, if a coin has a 75% chance of landing heads,
then sending heads to $1$ and tails to $0$ allows us to consider this coin as having a $\operatorname{Bern}(0.75)$ distribution. Algebraically, we may write: $$P(x \mid p) = p^x(1-p)^{1-x}$$ As
expected, $P(0 \mid p) = p^0(1-p)^{1-0} = 1-p$, while $P(1 \mid p) = p^1(1-p)^{1-1} = p$. The mean of $\operatorname{Bern}(p)$ is simply $$\mathbb E[x] = \sum_{x \in \{0, 1\}} x P(x \mid p) = 0(1-p)
+ 1(p) = p$$ while the variance is $$\mathbb E[(x-\mu)^2] = \sum_{x \in \{0, 1\}}(x-p)^2P(x \mid p) = (0-p)^2 (1-p) + (1-p)^2p \\ = p^2 - p^3 + p - 2p^2 + p^3 = p(1-p)$$
The Binomial Distribution
Sum up $n$ Bernoulli trials and you get a distribution representing the number of successes in $n$ identical trials each with probability $p$ of success. This is known as the binomial distribution $\
operatorname{Bin}(n, p)$. The probability of getting $k$ successes out of $n$ trials is going to be a sum over the probabilities of all possible ways of getting $k$ successes.
Take $n = 100$. The probability of getting $2$ successes in $100$ trials is the same regardless of whether those successes are on trials #1, #2, and #3 or trials #21, #44, and #87. Since the trials
are independent, each of these runs has a probability given by $p \times p \times (1-p) \times (1-p) \times \ldots \times (1-p) = p^2(1-p)^{100-2}$. However, there are many more ways to get $2$
successes than there are ways to get $1$ success: there are $100$ ways to pick one number between $1$ and $100$, while there are $100 \times 99 / 2 = 4950$ ways to pick two numbers between $1$ and
$100$, disregarding order. In general, the number of ways to pick $k$ numbers out of $n$ will be given by the binomial coefficient $$\binom nk = \frac{\overbrace{n \times \ldots \times (n-k+1)}^{\
text{pick } k \text{ different numbers}}}{\underbrace{k \times \ldots \times 1}_\text{regard same choices with different orders as identical}} = \frac{n!}{k!(n-k)!}$$ Hence, the total probability of
getting $k$ successes out of $n$ trials will be given by $$P(k \mid n, p) = \binom nk p^k(1-p)^{n-k}$$ By the binomial theorem, this is already normalized: $$1 = 1^n = (p+(1-p))^n = \sum_{k=0}^n \
binom nk p^k(1-p)^{n-k} = \sum_{k=0}^n P(k \mid n, p)$$ The mean and variance of $\operatorname{Bin}(n, p)$ are $np$ and $np(1-p)$, though proving this directly from the definition requires some
combinatorial trickery.
The fact that $\operatorname{Bin}(n,p)$ is the sum of $n$ Bernoulli distributions allows us to pull an interesting trick: for $n$ large enough, the Central Limit Theorem implies that this sum will
converge to a normal distribution with mean $np$ and variance $np(1-p)$. In practice, the normal approximation $\operatorname{Bin}(n, p) \approx {\cal N}(np, np(1-p))$ is acceptable when $np$ and $n
(1-p)$ are both above some particular value, such as $9$.
3. The Chi Squared Distribution
Pearson's $\chi^2$ test is a statistical test for determining whether a set of empirical frequencies $e_1, \ldots, e_n$ within some population differs significantly from a set of theoretical
frequencies $t_1, \ldots, t_n$. In other words, it measures the “goodness of fit” of the empirical frequencies to the theoretical frequencies. (Note: this Pearson is Karl Pearson, not the Egon
Pearson forming the Neyman-Pearson duo. Not a coincidence, though — Karl was Egon's father).
For instance, suppose you show me an ocean of evenly mixed jellybeans, assuring me that 60% of them are red, 20% of them are blue, and 20% of them are green, and I scoop up 100 jellybeans and find 50
red, 30 blue, 20 green — should I be suspicious that your numbers are right?
In this case, the empirical frequencies are $e_1, e_2, e_3 = 0.5, 0.3, 0.2$, while the theoretical frequencies are $t_1, t_2, t_3 = 0.6, 0.2, 0.2$, and the sample size is $N= 100$. The test statistic
is given by $$\chi^2 = N\sum_{i=1}^n \frac{(e_i-t_i)^2}{t_i} = 100\left(\frac{0.1^2}{0.6} + \frac{0.1^2}{0.2} + \frac{0^2}{0.2}\right) \approx 6.67$$
Supposing that the theoretical frequencies are correct, the distribution of this $\chi^2$ statistic on $n$ categories (here, $n = 3$, red, green, blue) is known as the $\chi^2$ distribution with
$n-1$ degrees of freedom. This distribution, $\chi^2(k)$ for $n-1$, is given by $$P(x \mid k) = \frac{x^{k/2-1}e^{-x/2}}{2^{k/2}\Gamma(k/2)}$$
Graphs of the $\chi^2$ distribution for varying degrees of freedom $k$. (Source).
So the $p$-value for our test, which is the probability that we'd get a statistic at least as large as $6.67$ given that your theoretical frequencies are accurate, would be $$p = P(x \ge 6.67 \mid k
= 2) =\int_{6.67}^\infty \frac12 e^{-\frac12 x}\, dx = e^{-\frac12 (6.67)} \approx 0.0357$$
Thus, the $\chi^2$ statistic for my sample is high enough that I can be pretty suspicious of your claim.
Theoretically speaking, the $\chi^2$ distribution with $k$ degrees of freedom is the sum of the squares of $k$ samples of the standard normal distribution $\cal N(0, 1)$. Why does the $\chi^2$
statistic follow this distribution, given that no normality was specified there? It helps to rewrite the statistic: $$\chi^2 = N\sum_{i=1}^n \frac{(e_i-t_i)^2}{t_i} = \sum_{i=1}^n \frac{N^2(e_i-t_i)^
2}{Nt_i} = \sum_{i=1}^n \frac{(Ne_i-Nt_i)^2}{Nt_i}$$
Given that $N$ is the number of samples and $e_i$ and $t_i$ are the empirical and theoretical frequencies of category $i$, $Ne_i$ is simply the number of samples in category $i$, and $Nt_i$ the
number of samples expected to be in category $i$. The value $Ne_i$ can be considered as the sum of $N$ samples from Bernoulli distributions with identical parameter $t_i$; each one has mean $t_i$ and
variance $t_i(1-t_i)$, so by the central limit theorem we can expect the distribution of the sum of $N$ samples to approximate a normal distribution with mean $Nt_i$ and variance $Nt_i(1-t_i)$. This
is where the normality in the $\chi^2$ statistic comes from, though showing that it's the sum of standard normal distributions is a bit tricky.
4. Student's t-Distribution
The statistician William Gosset, while working at a brewery, devised a test for determining whether two normal distributions with the same variance have the same mean; his employer, not wanting him
to publish with his real name, had him use a pseudonym. He chose “Student”, and his test — really a family of tests all going by the same name — became known as Student's $t$-test.
Suppose we have $m$ samples $a_1, \ldots, a_m \sim \cal N(\mu_a, \sigma^2)$ and $n$ samples $b_1, \ldots, b_n \sim \cal N(\mu_b, \sigma^2)$. For instance, suppose I go around administering IQ tests —
four members of the anime club get 122, 109, 116, and 128, while five members of the book club get 98, 117, 104, 110, and 108. To test the null hypothesis that the means are equal — that $\mu_a = \
mu_b$ — we use the test statistic $$t = \frac{\overline a - \overline b}{\sqrt{\left(\frac1m + \frac1n\right)\left(\frac{(m-1)s_a^2 + (n-1)s_b^2}{m+n-2}\right)}} $$
Here, $s_a$ and $s_b$ are the usual estimators of the variances of each population, and it is assumed that these variances are similar to one another. (This is one of a large amount of related test
statistics, see here for others). In our example, this evaluates to 2.244.
If the null hypothesis is true, the distribution of $t$ will follow the Student's $t$ distribution; like the $\chi^2$ distribution, this takes as a parameter a number of degrees of freedom $k$,
generally taken to be the total number of samples minus two (subtract one for each group). The formula for $\operatorname{St}(k)$ is given by: $$P(x \mid k) = \frac{\Gamma\left(\frac{k+1}{2}\right)}
{\Gamma\left(\frac{k}{2}\right)\sqrt{\pi k}} \left(1+\frac{x^2}{k}\right)^{-\frac{k+1}{2}}$$
The Student's $t$ distribution, for varying degrees of freedom. (Source).
Assuming the null hypothesis that $\mu_a = \mu_b$, our statistic $t$ would be sampled from $\operatorname{St}(8)$; I'll choose to calculate our $p$-value to be the probability that the absolute value
of $t$ is larger than 2.244, since a priori there's no reason to believe that a statistically significant deviation would go in any particular direction. This $p$ value is given by $$p = P(|t| \ge
2.244 \mid k = 7) = 2 \cdot P(t \ge 2.244 \mid k = 7) = 2\int_{2.244}^\infty \frac{48}{15\pi\sqrt{7}}\left(\frac{x^2+7}{7}\right)^{-4}\, dx$$ I'm not going to do this integral myself, but Wolfram|
Alpha gives $p \approx 0.0597$, which, while suggestive, is not significant.
Appendix B. Results on Distributions and Priors
1. On Functional Differentiation and Lagrange Multipliers
Functional Differentiation
A functional is a map from a space of functions into the real numbers; they're generally called via brackets. For instance, the entropy functional is given by $$H[P] = -\int_X P(x)\ln P(x)\, dx$$
Just as functions can be differentiated with respect to their variables at certain numbers, functionals can be differentiated with respect to their variables at certain functions. The idea is as
follows: if we take a functional $F$, an arbitrary function $\phi$, and an extremely small $\epsilon$, then $F[P+\epsilon\phi]$ should look like $F[P]$ with a small addition that depends on $\phi$.
Specifically, there should be another function $f$ such that $F[P+\epsilon \phi] = F[P] + \int_X f(x) \cdot (\epsilon \phi(x))\, dx$.
The function $f$ which satisfies $\lim_{\epsilon \to 0} \frac{F[P+\epsilon \phi] - F[P]}{\epsilon} = \int f(x)\phi(x)\, dx$ is known as the functional derivative of $F$, and is generally written as $
\frac{\delta F}{\delta P}$.
!tab $F$ can be said to reach a maximum at some function $q$ when there is no arbitrarily small change that can be made to $q$ which increases the value of the functional. In other words, $q$ is a
maximum of $F$ when the functional derivative of $F$ is zero at $q$ no matter the choice of $\phi$.
Lagrange Multipliers
When we want to find the function $q$ that maximizes $F$ subject to some specific constraint, for instance $G[q] = k$, we may use a Lagrange multiplier. This is a functional $\lambda G$, with $\
lambda \in \mathbb R$, which is added to $F$ to produce the new functional $F_\lambda = F + \lambda (G-k)$. The justification for this is as follows: the calculus of variations guarantees that any
maximum of $F$ satisfying $G[q] = k$ will also maximize $F_\lambda$ for some $\lambda$. Hence, to find this $q$, we maximize $F_\lambda$, keeping $\lambda$ an arbitrary real number; $q$ will be
dependent on $\lambda$, and finding that particular $\lambda$ such that $G[q]=k$ gives us our final $q$.
Laplace Approximation
Suppose we have a smooth function $f: \mathbb R^d \to \mathbb R$. The Taylor expansion of $f$ about $\vec v_0$ is given by $$f(\vec v) = f(\vec v_0) + \sum_{n=1}^\infty \frac{1}{n!} \sum_{\substack{0
\le n_1, \ldots, n_d \le n \\ \\ n_1 + \ldots + n_d = n}} \left.\frac{\partial^nf}{\partial ^{n_1}x_1 \cdots \partial^{n_d} x_d}\right|_{\vec v_0}\prod_{i=1}^d (\vec v - \vec v_0)_{i}^{n_i}$$ The
first sum iterates over each order of the expansion, while the second sum iterates over all assignments of non-negative numbers to each dimension which total to the order. For instance, for the
first-order $n=1$ expansion, the only possible assignments that the second sum can iterate over are those that pick out exactly one $n_i$ corresponding to one dimension. As such, the first-order
Taylor expansion is $$f(\vec v) \approx f(\vec v_0) + \sum_{i=1}^d \frac{\partial f}{\partial x_i} (\vec v-\vec v_0)_i = f(\vec v_0) + (\vec v -\vec v_0)^T\nabla f(\vec v_0) $$ Hence, it simply adds
to $f(\vec v_0)$ the dot product of the difference with the gradient of $f$ at $\vec v_0$. The second-order Taylor expansion further adds the sum $\sum_{i=1}^d\sum_{j=1}^d \frac{\partial^2 f}{\
partial x_i\partial x_j} (\vec v-\vec v_0)_i (\vec v-\vec v_0)_j$, which can be expressed as the inner product $\frac12(\vec v - \vec v_0)^T H_f(\vec v_0) (\vec v - \vec v_0)$, where $[H_f(\vec v_0)]
_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}|_{\vec v_0}$.
We're interested in calculating the integral $\int_{} e^{nf(\vec v)}\, d\vec v$. To do this, we find the maximum $\vec v_0$ of $f$, and, to second order, Taylor expand $f$ around that maximum.
Because $\vec v_0$ is a maximum, $\nabla f$ will disappear, leaving us with the known integral $$\int_{\mathbb R^n} e^{nf(\vec v)}\,d\vec v = e^{nf(\vec v_0)} \int _{\mathbb R^n} e^{\frac n2 (\vec v
- \vec v_0)^TH_f(\vec v_0)(\vec v- \vec v_0)}\, d\vec v = e^{nf(\vec v)}\left(\frac{2\pi}{n}\right)^{d/2}|\det H_f(\vec v_0)|^{-\frac12}$$
Conjugate Priors
One particularly nice trick Bayesians use is the use of conjugate priors: suppose we have some data $D$ from a distribution $P(x \mid \theta)$ which is part of some family and a prior $P(\theta)$
that falls into some second family, e.g. it is a normal distribution. We say that this second family is a conjugate prior for the first family if the posterior distribution $P(\theta \mid x)$ is also
part of this second family, albeit with different parameters.
I used this trick above: the conjugate prior for a normal distribution with known variance $\sigma^2$ is another normal distribution with unknown mean and variance. What this means is: if we have
data $D$ sampled from some normal distribution $\cal N(\mu, \sigma^2)$, $\sigma^2$ known, and we set this conjugate prior on the parameter $\mu$, supposing $P(\mu) = \cal N(\mu_0, \sigma^2_0)$, then
$P(\mu \mid D)$ will also be a normal distribution, this time with updated mean $\mu'_0 = \frac{\sigma^2\mu_0 + n\sigma_0^2\overline D}{\sigma^2 + n\sigma_0^2}$ and variance $(\sigma_0^2)' = \frac{\
sigma^2\sigma_0^2}{\sigma^2+n\sigma_0^2}$. Some depth of thought is required here: we are not just dealing with the parameter $\mu$ of the distribution governing the data, but also with the
parameters $\mu_0, \sigma^2_0$ of the distribution governing our beliefs about those parameters. These secondary parameters are known as hyperparameters. If we wanted, we could place priors on these
hyperparameters; these would be known as hyperpriors, and their own parameters as hyperhyperparameters, but we won't go that far.
Another example: The conjugate prior for the Bernoulli distribution is the beta distribution $$P(p \mid \alpha, \beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} p^{\alpha-1} (1-p)^{\
beta-1}$$ This means that if we have some data $D$ from a Bernoulli distribution $P(x\mid p) \sim \operatorname{Bern}(p)$, and a beta-distributed prior $P(p \mid \alpha, \beta) \sim \operatorname
{Beta}(\alpha, \beta)$, then the posterior $P(p \mid D)$ will also be a beta distribution, with updated parameters $\alpha' = \alpha + \sum_{i=1}^n D_i$ and $\beta' = \beta + \sum_{i=1}^n (1-D_i)$;
we add the number of successes to $\alpha$ and the number of failures to $\beta$ to get the posterior distribution. For instance, if we start off with $\alpha = 1, \beta = 1$, which places a uniform
prior over $p$, then after observing five heads (successes) and two tails (failures), our posterior for $p$ will be $\operatorname{Beta}(6, 3)$, or $P(p = p_0 \mid \alpha = 6, \beta = 3) = 21p_0^5
(1-p_0)^2$. The MAP estimate for $p$ is $5/7$, which is precisely the number of heads divided by the total number of flips.
2. The Maximum Entropy Distribution is Normal
Consider the problem of finding the distribution on $\mathbb R$ with maximum entropy among all those distributions with known mean $\mu$ and known variance $\sigma^2$. Three Lagrange multipliers come
into play here, since we have to make sure that the probability distribution $P$ not only has that particular mean and that particular variance, but integrates to one as well. Hence, the functional
$F$ that we want to differentiate will be given by $$F[P] = -\int P(x)\ln P(x)\, dx + \lambda_1 \int xP(x)\, dx + \lambda_2 \int (x-\mu)^2P(x)\, dx + \lambda_3 \int P(x)\, dx - \lambda_1\mu - \
lambda_2 \sigma^2 - \lambda_3$$ Plugging this in to the functional derivative formula $\lim_{\epsilon \to 0} \frac{F[P+\epsilon \phi] - F[P]}{\epsilon} = \int \frac{\delta F}{\delta P}\phi(x)\, dx$,
the Lagrange multiplier parts, relying only linearly on $P(x)$, simplify immediately, giving us $$\lim_{\epsilon \to 0} \frac{F[P+\epsilon \phi] - F[P]}{\epsilon} = \int (\lambda_1 x + \lambda_2 (x-\
mu)^2 + \lambda_3)\phi(x)\, dx - \lim_{\epsilon \to 0} \frac{1}{\epsilon} \left[\int(P(x)+\epsilon \phi(x))\ln(P(x)+\epsilon \phi(x))\, dx - \int P(x)\ln P(x)\, dx\right]$$ The term in the limit
simplifies as $$(P(x)+\epsilon\phi(x))\ln(P(x)+\epsilon \phi(x)) - P(x)\ln P(x)\\ = P(x)\left[\ln( P(x)+\epsilon\phi(x))-\ln P(x)\right] + \epsilon \phi(x) \ln(P(x)+\epsilon \phi(x))$$ $$= P(x) \left
(\ln\frac{P(x)+\epsilon\phi(x)}{P(x)}\right) + \epsilon \phi(x) \left((\ln P(x)) + \frac{\epsilon \phi(x)}{P(x)} + O(\epsilon^2)\right) $$ $$= P(x)\left(\frac{\epsilon \phi(x)}{P(x)} + O(\epsilon^2)\
right) + \epsilon \phi(x)\ln P(x) + O(\epsilon^2) \\ = \epsilon \phi(x) (1+ \ln P(x)) + O(\epsilon^2)$$ Dividing by $\epsilon$ and taking the limit as $\epsilon \to 0$ yields $\phi(x)(1+\ln P(x))$,
allowing us to complete the functional derivative as $$\lim_{\epsilon \to 0} \frac{F[P+\epsilon \phi] - F[P]}{\epsilon} = \int \overbrace{(\lambda_1 x + \lambda_2(x-\mu)^2 + \lambda_3 -1 - \ln P(x))}
^{=\delta F/\delta P}\phi(x)\, dx$$ Hence, we maximize by setting $\lambda_1 x + \lambda_2(x-\mu)^2 + \lambda_3 - 1 - \ln P(x) = 0$, or $P(x) = C\operatorname{exp}(\lambda_1 x + \lambda_2 (x-\mu)^2)
$, where $C = e^{\lambda_3-1}$ is the normalization constant. Finally, we solve for the Lagrange multipliers by calculating the total probability, mean, and variance of $P(x)$: $$1 = \int C e^{\
lambda_1 x + \lambda_2(x-\mu)^2}\, dx = C\frac{\sqrt{\pi} e^{\lambda_2\mu^2-(\lambda_1-2\lambda_2\mu)^2/4\lambda_2}}{\sqrt{-\lambda_2}} \implies C = \frac{\sqrt{-\lambda_2}}{\sqrt{\pi} e^{\lambda_2\
mu^2-(\lambda_1-2\lambda_2\mu)^2/4\lambda_2}}$$ $$\mu = \int xCe^{\lambda_1 x + \lambda_2(x-\mu)^2}\, dx %= Ce^{\lambda_2\mu^2}\int xe^{\lambda_2x^2+(\lambda_1-2\lambda_2\mu)x}\, dx = Ce^{\lambda_2\
mu^2} \frac{(\lambda_1-2\lambda_2\mu)\sqrt\pi }{-2\lambda_2\sqrt{-\lambda_2}}e^{(\lambda_1-2\lambda_2\mu)^2/(-4\lambda_2)}$$ $$= Ce^{\lambda_2\mu^2} \frac{(\lambda_1-2\lambda_2\mu) }{-2\lambda_2}e^
{(\lambda_1-2\lambda_2\mu)^2/(-4\lambda_2)} = \frac{(\lambda_1-2\lambda_2\mu)}{-2\lambda_2} = -\frac12 \frac{\lambda_1}{\lambda_2} + \mu$$ $$\implies \lambda_1 = 0, \ C = \left(\int e^{\lambda_2(x-\
mu)^2}\,dx\right)^{-1} = \left(\frac{\sqrt{\pi}}{\sqrt{-\lambda_2}}\right)^{-1} = \sqrt{\frac{-\lambda_2}{\pi}}$$ $$\sigma^2 = \int C (x-\mu)^2 e^{\lambda_2(x-\mu)^2}\,dx = \frac C2 \sqrt{\frac{\pi}
{-\lambda_2^3}} = -\frac{1}{2\lambda_2} \implies \lambda_2 = -\frac{1}{2\sigma^2}, \ C = \frac{1}{\sqrt{2\pi\sigma^2}}$$ Plugging this back into the definition of $P$, we end up with our final
answer: $$P(x) = Ce^{\lambda_2(x-\mu)^2} = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}} = \cal N(\mu, \sigma^2)$$
Therefore, among all those distributions with a given mean and variance, the one with maximum entropy is precisely the normal distribution with that mean and variance.
3. Maximal Data Information Prior for Normal with Known Mean
For a normal distribution with known mean $\mu$ and unknown variance $s = \sigma^2$, the entropy is $\frac12\ln(2\pi\sigma^2) + \frac12$, giving us $$G = \int P(s)\ln{\sqrt s}\, ds - \int P(s)\ln P
(s)\, ds + \text{const.}$$
Add the Lagrange multiplier $\lambda \int P(s)\, ds$ to $G$ to get a new functional $G_\lambda[P]$. We calculate its functional derivative: $$\int \frac{\delta G_\lambda[P]}{\delta P}(s)\phi(s)\, ds
= \lim_{\epsilon \to 0} \frac{G_\lambda[P+\epsilon\phi] - G_\lambda[P]}{\epsilon}$$ $$ \lim_{\epsilon \to 0} \frac{1}{\epsilon}\left[\int (P(s)+\epsilon \phi(s))\ln\sqrt s\, ds - \int (P(s)+\epsilon
\phi(s))\ln (P(s)+\epsilon \phi(s))\, ds \right.\\ + \left.\lambda \int (P(s)+\epsilon\phi(s))\, ds - \int P(s)\ln{\sqrt s}\, ds + \int P(s)\ln P(s) \, ds - \lambda \int P(s)\, ds\right]$$ $$ = \lim_
{\epsilon \to 0} \frac{1}{\epsilon}\left[\int \epsilon\phi(s)\ln \sqrt{s}\, ds +\int P(s) \ln \frac{P(s)}{P(s)+\epsilon \phi(s)}\, ds \right. \\ - \int \epsilon \phi(s)\ln(P(s)+\epsilon \phi(s))\, ds
+ \lambda \int \epsilon \phi(s)\,ds\left. \right] $$ $$= \int \phi(s)(\lambda + \ln\sqrt{s}) \, ds + \lim_{\epsilon \to 0} \left[\int \frac1\epsilon P(s) \left(\frac{-\epsilon \phi(s)}{P(s)} - O(\
epsilon^2)\right)\, ds - \int \phi(s)\ln(P(s)+\epsilon \phi(s))\, ds \right] $$ $$ = \int \phi(s)(\lambda - 1 + \ln \sqrt{s} + \ln P(s))\, ds$$ which implies that $\frac{\delta G_\lambda[P]}{\delta
P} = \lambda + 1 + \ln \sqrt{s} + \ln P(s)$; we set $\lambda + 1 + \ln \sqrt{s} + \ln P(s) = 0$ to get $P(s) = e^{-\lambda-1-\ln \sqrt{s}} = Cs^{-\frac12} = C\sigma^{-1}$, where $C = e^{-\lambda-1}$
is the normalization constant. Unfortunately, this prior, which is the same as the Jeffreys prior, is non-normalizable: $\int_0^\infty s^{-\frac12}\, ds = \left[2\sqrt{s}\right]_0^\infty = \infty$.
4. Empirical Bayes for the Poisson Distribution With Gamma Prior
Given data $D = \{k_1, \ldots, k_n\}\sim \operatorname{Pois}(\lambda)$ and a prior $\lambda \sim \operatorname{Gamma}(\alpha, \beta)$, we want to find the $\alpha, \beta$ that maximize $P(D \mid \
alpha, \beta)$. Recall that for $k$ a single data point, these distributions are given by $$P(k \mid \lambda) = \frac{\lambda^ke^{-\lambda}}{k!} \qquad P(\lambda \mid \alpha, \beta) = \frac{\beta^{\
alpha}\lambda^{\alpha-1}e^{-\beta\lambda}}{\Gamma(\alpha)}$$ The first step is to write out $P(D \mid \alpha, \beta)$: $$P(D \mid \alpha, \beta) = \int_{0}^\infty P(D \mid \lambda)P(\lambda \mid \
alpha, \beta)\, d\lambda \\ = \int_{0}^\infty \left(\prod_{i=1}^n\frac{\lambda^{k_i}e^{-\lambda}}{{k_i}!}\right)\frac{\beta^\alpha \lambda^{\alpha-1}e^{-\beta\lambda}}{\Gamma(\alpha)}\, d\lambda = \
frac{\beta^\alpha}{\Gamma(\alpha)F}\int_0^{\infty}\lambda^{S+\alpha-1}e^{-(n+\beta)\lambda}\, d\lambda$$ where $S = \sum_{i=1}^n k_i$ and $F = \prod_{i=1}^n k_i!$.
In general, the integral $\int_0^\infty x^ae^{-bx}\,dx$ evaluates to $\Gamma(a+1)/b^{a+1}$ (easily derived from the definition of the gamma function), so the above integral evaluates to $$\frac{\beta
^\alpha}{\Gamma(\alpha)F} \frac{\Gamma(\alpha+S)}{(\beta+n)^{\alpha+S}} = \frac 1F \frac{\beta^\alpha}{(\beta+n)^{\alpha+S}}\frac{\Gamma(\alpha+S)}{\Gamma(\alpha)}$$ This is what we want to maximize
w.r.t. $\alpha$ and $\beta$. We'll take the logarithm and then take the partial derivatives: letting $\ell = \ln P(D \mid \alpha, \beta)$, we have $$\ell = \alpha \ln \beta - (\alpha+S)\ln(\beta + n)
+ \ln \Gamma(\alpha + S) - \ln \Gamma(\alpha) - \ln F \\ \frac{\partial \ell}{\partial \alpha} = \ln \beta - \ln(\beta + n) + \psi(\alpha+S) - \psi(\alpha)\\ \frac{\partial \ell}{\partial \beta} = \
frac{\alpha}{\beta} - \frac{\alpha+S}{\beta+n} $$ $$= \frac{\alpha\beta + \alpha n -\alpha \beta - \beta S}{\beta(n+\beta)} = \frac{\alpha n - \beta S}{\beta(n+\beta)}$$ Here, $\psi(x) = \frac{d}{dx}
\ln \Gamma(x)$ is the digamma function, putting us squarely in Special Function Hell. To maximize, we need to set the derivatives equal to zero, solve for $\alpha, \beta$, and then make sure we
haven't hit a saddle point by checking that $\ell_{\alpha\alpha}\ell_{\beta\beta} > \ell_{\alpha\beta}^2$ and either one of $\ell_{\alpha\alpha},\ell_{\beta\beta}$ is negative.
It is clear that $\partial \ell/\partial \beta = 0$ when $\alpha n = \beta S$, or $\alpha/\beta = S/n = \overline D$.
Solving for $\partial \ell/\partial \alpha = 0$ is extremely hard; one terrible approach is to use the identity $\Gamma(\alpha+S)/\Gamma(\alpha) \approx \alpha^S$, leading us to the line of reasoning
$$\ell \approx \alpha \ln \beta - (\alpha + S) \ln (\beta + n) + S \ln \alpha - \ln F \\ \frac{\partial \ell}{\partial \alpha} \approx \ln \beta - \ln (\beta + n) + \frac{S}{\alpha} = 0 \implies \
frac S\alpha = \ln(\beta + n) - \ln \beta = \ln\left(1 + \frac\beta n\right)$$ $$\implies 1+\frac\beta n = e^{S/\alpha} \implies \beta = n(e^{S/\alpha}-1) = ne^{n/\beta}-n \\ \implies n/\beta+1 = (n/
\beta)e^{n/\beta}$$ The equation $x +1 = xe^x$ has precisely one positive solution at $x \approx 0.806$, so we must have $\beta \approx n/0.806 \approx 1.24n$, and correspondingly $\alpha = \beta S/n
\approx 1.24S$.
This is a really bad approximation and just seems morally wrong, so I won't accept it. However, we should at least check that this is a maximum: we have $$\ell_{\alpha\alpha} \approx \psi_1(\alpha+S)
-\psi_1(\alpha), \ell_{\beta\beta} = \frac{-S\beta(n+\beta) - (n+2\beta)}{\beta^2(n+\beta)^2}, \ell_{\alpha\beta} = \ell_{\beta\alpha} = 0$$
Here, $\psi_1$ is the trigamma function; it is strictly decreasing, so $\ell_{\alpha\alpha}$ is always negative. $\ell_{\beta\beta}$ is clearly always negative as well, so we are at a maximum. In
fact, the fact that both of these are always negative implies that there is a unique global maximum, and therefore a unique maximum likelihood estimate of $\alpha, \beta$.
Work: To solve for $\partial \ell/\partial \alpha = 0$, we may make use of the following identity: for a positive integer $n$, $\psi(x+n)-\psi(x) = \sum_{i=0}^{n-1}\frac{1}{x+i}$. Each data point,
being the number of times something happened in a given period of time, is a positive integer, making their sum $S$ a positive integer as well. Hence, we have $$\frac{\partial \ell}{\partial \alpha}
= \ln \beta - \ln(\beta + n) + \sum_{i=0}^{S-1} \frac{1}{\alpha+i}$$ I've tried a lot of approaches, but I can't find a way to set this equal to zero that isn't either intractable or equivalent to $\
alpha/\beta = S/n$. Oh well. The sum is rather difficult to work with, so we'll approximate it by the integral $\int_0^S \frac{1}{\alpha+x}\, dx = \ln (\alpha + S)-\ln\alpha$, giving us $$\frac{\
partial \ell}{\partial \alpha} = \ln \frac{\beta}{\beta+n} + \ln \frac{\alpha+S}{\alpha} = \ln \frac{\beta (\alpha+S)}{(\beta+n)\alpha} = 0 \\ \implies \frac{\beta(\alpha+S)}{(\beta+n)\alpha} = 1 \
implies \beta \alpha + \beta S = \beta \alpha + n\alpha \\ \implies \beta S = n\alpha \implies \frac{\alpha}{\beta} = \frac{S}{n} = \overline D$$ We get the exact same result. Since the approximation
becomes arbitrarily accurate in the limit, We apply the Euler-Maclaurin formula to “simplify” the sum: $$\sum_{i=a}^{b-1} f(i) \approx \int_a^b f(x)\, dx + \frac{f(a)-f(b)}{2} + \sum_{i=1}^\infty \
frac{B_{2i}}{(2i)!}\left(f^{(2i-1)}(b)-f^{(2i-1)}(a)\right) \\ {\small \sum_{i=0}^{S-1} \frac{1}{\alpha+i} \approx \ln \frac{S+\alpha}{\alpha} + \frac{\frac1\alpha-\frac{1}{S+\alpha}}{2} + \sum_{i=1}
^\infty \frac{B_{2i}}{(2i)!}(-1)^{2i-1}(2i-1)!\left[(S+\alpha)^{-2i}-\alpha^{-2i}\right]}$$ $$ = \ln\left(\frac{S+\alpha}{\alpha}\right) + \frac{S}{2\alpha(S+\alpha)} - \sum_{i=1}^\infty \frac{B_
{2i}}{2i}\left[(S+\alpha)^{-2i}-\alpha^{-2i}\right]$$ Applying $\alpha = \beta S/n$, we get $$0= \ln \left(\frac{\beta}{\beta+n} \right)+ \ln\left(\frac{n+\beta }{\beta }\right) + \frac{n^2}{2S\beta
(n+\beta)} - \sum_{i=1}^\infty \frac{B_{2i}}{2i}\left(\frac nS\right)^{2i}\left[(n+\beta)^{-2i}-\beta^{-2i}\right]$$ The first two terms cancel out, so if we simply ignore the sum, we get leaving us
with $$\frac{1}{\beta(n+\beta)} = \frac{2S}{n^2}\sum_{i=1}^\infty \frac{B_{2i}}{2i}\left(\frac nS\right)^{2i}\frac{\beta^{2i}-(n+\beta)^{2i}}{\beta^{2i}(n+\beta)^{2i}} \\ \implies 1 = \sum_{i=1}^\
infty \frac{B_{2i}}{ni}\left(\frac{n}{S\beta(n+\beta)}\right)^{2i-1}(\beta^{2i}-(n+\beta)^{2i})$$ The first two terms of this sum are $$1 = -\frac{1}{6n}\left(\frac{n}{S\beta(n+\beta)}\right)((n+\
beta)^2-\beta^2) +\frac{1}{30n}\frac{n^3}{S^3\beta^3(n+\beta)^3}((n+\beta)^2+\beta^2)((n+\beta)^2-\beta^2)$$ $$ \implies 30\frac{S^3}{n^2} +10\frac{S^2}{n^2} = \frac{1}{\beta^3} +\frac{n}{\beta^2}+ \
frac{1}{(n+\beta)^2} -\frac{n}{(n+\beta)^3} - 5\frac{S^2}{n^2} \frac{n}{\beta} + 5\frac{S^2}{n^2} \frac{n}{(n+\beta)}$$ For large $n$, this will reduce to $$10\frac{S^2}{n^2}(3S+1) = \frac{n}{\beta^
2} $$
5. MAP for the Mean of a Gaussian Distribution with Known Variance
Take a normal distribution with known variance $\sigma^2$ and unknown mean $\mu$, from which data $D = \{x_1, \ldots, x_n\}$ is sampled. Placing a normal prior ${\cal N}(\mu_0, \sigma_0^2)$, the MAP
estimate for $\mu$ is given by $${\small \widehat \mu = \operatorname{argmax}_\mu \frac{P(D \mid \mu)P(\mu)}{P(D)} = \operatorname{argmax}_\mu\frac{1}{P(D)} \frac{1}{\sqrt{2\pi\sigma_0^2}}e^{-\frac12
\frac{(\mu-\mu_0)^2}{\sigma_0^2}} \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac12\frac{(x_i-\mu)^2}{\sigma^2}}}$$ Since we're just maximizing with respect to $\mu$, we can not only remove all
constants which do not depend on $\mu$ (including the difficult integral $P(D)$), but apply any order-preserving function we want without affecting the maximization process. Removing these constants,
and applying the logarithmic function, we simplify this as $${\small \widehat \mu = \operatorname{argmax}_\mu -\frac12\frac{(\mu-\mu_0)^2}{\sigma_0^2} - \frac{1}{2\sigma^2} \sum_{i=1}^n (x_i-\mu)^2 =
\operatorname{argmin}_\mu \sigma^2(\mu-\mu_0)^2 + \sigma_0^2\sum_{i=1}^n (x_i-\mu)^2 } \\ {\small = \operatorname{argmin}_\mu \sigma^2\mu^2-2\sigma^2\mu_0\mu + \sigma_0^2\sum_{i=1}^n \mu^2-2\mu{x_i}
= \operatorname{argmin}_\mu (\sigma^2+n\sigma_0^2)\mu^2 - 2\left(\sigma^2\mu_0+n\sigma_0^2\overline D\right)\mu}$$ The usual way to minimize such a function is to take the derivative with respect to
$\mu$ and set it equal to zero. Doing this gives us $$2(\sigma^2+n\sigma^2_0)\mu - 2(\sigma^2\mu_0 + n\sigma_0^2\overline D) = 0 \\ \,\\\implies \widehat \mu(D) = \frac{\sigma^2\mu_0+n\sigma_0^2\
overline D}{\sigma^2+n\sigma_0^2} = \frac{\left(\frac{\sigma^2}{\sigma_0^2}\right)\mu_0 + n\overline D}{\left(\frac{\sigma^2}{\sigma_0^2}\right) + n}$$
This is a weighted estimate of the prior and sample means, which weighs the sample mean more heavily insofar as it based off of a greater number of samples and weighs the prior mean more heavily
insofar as its variance is lower than that of the known population variance — beautifully intuitive. In the case where $D = \{x\}$ is a single number and $\sigma^2=\sigma_0^2$, the MAP estimator $\
widehat \mu(D)$ will be perfectly in between $\mu_0$ and $x$.
6. The MSE Bayes Estimator is the Posterior Mean | {"url":"https://n.cohomology.group/statistics-is-difficult.html","timestamp":"2024-11-03T20:22:06Z","content_type":"text/html","content_length":"124815","record_id":"<urn:uuid:8749db3a-189b-40ae-adfc-cf39fb90776c>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00704.warc.gz"} |
Transcript: How to load test with Goose - Part 2: Running a Gaggle
This is a transcript. For the video, see How to load test with Goose - part 2: Running a Gaggle.
Michael Meyers: [00:00:00] Hello, and welcome to Tag1 Team Talks, the podcast and blog of Tag1 Consulting. Today, we're going to be doing distributed load testing how to: a deep dive into running a
Gaggle with Tag1's, open source Goose load testing framework. Our goal is to prove to you that Goose is both the most scalable load testing framework currently available, and the easiest to scale.
We're going to show you how to run a distributed load, test yourself. And we're going to provide you with lots of code and examples to make it easy and possible for you to do this on your own. I'm
Michael Meyers, the managing director at Tag1, and joining me today is a star-studded cast.
[00:00:38] We have Jeremy Andrews, the founder and CEO of Tag1, who's also the original creator of Goose. Fabian Franz, our VP of Technology, who's made major contributions to Goose, especially
around performance and scalability. And Narayan Newton, our CTO who has set up and put together all the infrastructure that we're going to be using to run these load tests.
[00:01:03] Jeremy, why don't you take it away? Give us an overview of what we're going to be covering and let's jump into it.
[00:01:10] Jeremy Andrews: Yeah. So last time we were exploring with setting up a load test from a single server and confirmed that Goose makes great use of that server. It leverages all the CPU's
and ultimately tends to get as far as it can until the uplink slows it down.
[00:01:27] So today what we're going to do is use a feature of Goose called a Gaggle which is a distributed load test. If you're familiar with Locust, it is like a swarm. And the way it, the way that
this works with Goose you have a Manager process that you kick off and you say, I want to simulate 20,000 users and I'm expecting 10 Workers to do this load.
[00:01:49] The Manager process prepares things and, and all the Workers then connect in through a TCP port and it sends each of them a batch of users to run. And then each of them the Manager
coordinates a start, each of the Workers start at the same time. And then they send their statistics back to the Managers so that you can actually see what happened in the end.
[00:02:11] What this nicely solves is if you're uplink can only do so much traffic, or if you want traffic coming from multiple regions around the world you could let Goose manage that for you in all
of these different servers. So today Narayan has set up a pretty cool test where we're going to be spinning up a lot of Workers.
[00:02:28] And he could talk about how many each one is not going to be working too hard. They'll run maybe a thousand users per server, which means it'll be at least 50% idle. It won't be maxing out
the uplink on any given server. But in spite of that, we're going to show that working together in a Gaggle we can generate a huge amount of load.
[00:02:45] So now Narayan, if you can talk about what you've set up here.
[00:02:48] Narayan Newton: Sure. so what I built today is basically a simplistic Terraform tree. What is interesting about this is that we wanted to distribute the load between different regions and
for those people that have used Terraform in the past, that can be slightly odd in that you can only set one region for each AWS provider that Terraform uses to spin things up.
[00:03:12] So how we've done this is defined multiple providers, one for each region and a module that spins up our region Workers. And we basically initialize multiple versions of the module passing
each a different region. So in the default test, we spin up 10 Worker nodes in various regions. the Western part of the United States, the Eastern part of the United States, Ireland, Frankfurt.
[00:03:38] India and Japan with how the test currently works. It's the load testing truss, which is what we decided to call it. it's a little limited because once you start it, you can't really
interact with the Workers themselves. They start up, they pull down Goose and they run the test. Then next revision of this would be something that has a clustering agent between the Workers to, so
that you can actually interact with the Workers after they start it gets very annoying to have to run Terraform, to stand up these VMs all over the world, and then you want to make a change to them,
you have to destroy all of them and then relaunch them which isn't terrible. But as a testing sequence, it adds a lot of time, just because it takes time to destroy and recreate these VMs. So the
next revision of this would be something other than Goose, creating a cluster of these VMs. How it currently works is that we're using Fedora CoreOS so that we have a consistent base at each
[00:04:41] And so I could only send it a single file for initialization. And then Fedora CoreOS pulls down a container that has the Goose load test and a container that has a logging agent so that we
can monitor the Workers and send all the logs from the Goose agents back to a central location.
[00:05:02] Fabian Franz: I had a quick question. So Narayan the basic setup is that we have EC2 instances, like on AWS, and then we run containers like normal Kubernetes like on them, or how is it
[00:05:17] Narayan Newton: It's using Docker. So that is the big thing that I want to improve. And I almost got there before today. What would be nicer is if we could use one of the IOT distributions
or Kubernetes at the edge distributions to run a very slim version of Kubernetes on each Worker node so that we get a few things.
[00:05:37] One is cluster access, so we can actually interact with the clusters spread load, run, multiple instances of Goose. it would be interesting to pack multiple instances of Goose on things
like the higher end and also be able to actually edit the cluster after it's up and not have to destroy it and recreate it each time.
[00:05:56] The other thing is to get containered and not Docker. Just because there are some issues that you can hit with that. as it stands right now, CoreOS ships with Docker running, and that's
how you interact with it for the most part is a systemctl Docker, but you could also use Podman but I ran into issues with that for redirecting the logs.
[00:06:17] So we are actually using Docker itself and Docker is just running the container as you would in a local development environment.
[00:06:24] Fabian Franz: So what we are missing from standard Kubernetes deployment thing that we would normally have is the ability to deploy a new container. You were saying that if I want to
deploy a new container versus simplistic infrastructure right now, I need to shut down the EC2 instance and then start them up again.
[00:06:42] Okay.
[00:06:42] Narayan Newton: So that's, that's the, so like what I did when before this test, Jeremy released a new branch with some changes to make this load test faster as on startup. what I did to
deploy that is run Terraform destroy, wait for it, to kill all their VMs across the world, and then Terraform apply and wait for it to recreate all those VMs across the world.
[00:07:03] And like that is management style, honestly, but in this specific case, because we're doing sometimes micro iterations, it can get really annoying.
[00:07:13] Fabian Franz: Yeah, for sure.
[00:07:14] No, no, that makes perfect sense. I just want to understand, because I was like, in this container world, you can just deploy a new container, but obviously you need a Manager for that.
[00:07:23] Narayan Newton: Yes. Yes. I could totally deploy a new container. So what I could do is have Terraform output the list of IPs, and then I can SSH to each of them and pull a new container.
But at that point,
[00:07:40] But seriously, there's another Git repository that I have started. The version of this that uses a distribution of Kubernetes is called K3s that is designed for CI systems and IOT and
deployments to the edge. And it's a - it's a single binary version of Kubernetes where everything is wrapped into a single binary and starts on edge nodes and then can connect them all together and
so we could have a multi-region global cluster of this little Kubernetes agents.
[00:08:08] And then we could spin up Gooses on that. And that I think will actually work.
[00:08:12] Fabian Franz: You totally blew my mind. So now you've just signed up for follow up to show that because that's, I mean, that's, that's what you want actually, but now I'm really curious,
how does this Terraform configuration actually look, can, can you share a little bit about it?
[00:08:29]Narayan Newton: So this is the current tree. If everyone can see that it's pretty simplistic. So this is the main file that gets loaded. And then for everyone, there's a module that is
named after its region. They're all hitting that same actual module is just different revisions of this module. And then they'll take a Worker count and their region and their provider and the
provider is what is actually separating them into regions.
[00:09:02] And then if you look at the region Worker, which is where most of these things are happening, There's a variables thing, which is interesting because I have to define an AMI map because
every region has a different AMI because the regions are disparate. Like there's no, there's no consensus building between these regions for images.
[00:09:27] So one of the reasons I picked CoreOS is because it exists in each of these regions and can handle a single start-up file. When when we do the K3s version of this K3s kind of run on Ubuntu
of, and Ubuntu obviously exists in all these regions as well. but I'll still have to do something like this, or there's another way I can do it, but this was the way to do it for CoreOS.
[00:09:49] And then we set, instance type, this is this a default. And then the main version of this is very simple. We initialized our key pair, cause I want to be able to SSH into these instances
at some point and upload it to each region. We initialize a simple security group that allows me to SSH into each region. And then a simple instance that doesn't really have much because it's, it
doesn't even have a large root device cause we're not using it at all.
[00:10:21] Basically we're just spinning up a single container and then pushing the logs to Datadog, which is our central log agent. So even the logs aren't being written locally on that we
associated a public IP address. We spin up the AMI, we look up which AMI we should use based on our region. and then we output the Worker address.
[00:10:41] So the other part of this is the Manager. The only real difference in this, we basically spent out the exact same way is we also allow the Goose ports, which is 5115, and we spin up a DNS
record that points to our Manager because that DNS record is what all the region Workers are going to point at.
[00:11:03] Um, and we make use of the fact that they're all using Route 53. So this update propagates really quickly.
[00:11:14] And that's basically that it's pretty simple. each VM is running. Sorry, go ahead.
[00:11:22] Fabian Franz: Where do you actually put in the Goose part? Because I've seen the VM.
[00:11:28] Narayan Newton: Yep. So each CoreOS VM it can take an ignition file. The idea behind CoreOS is it was a project to simplify infrastructure that was based on containers.
[00:11:41] It became an underlying part of a lot of Kubernetes deployments because it's basically read only in essence on a configuration level. It even can auto update itself. It's a very
interesting way of dealing with an operating system. It - its entire concept is you don't really interact with it outside of containers.
[00:11:58] It's just a stable base for containers that remain secure, can auto update is basically read only in its essence and it takes these ignition files that define how it should set itself up
on first boot. So if we look at one of these ignition files,
[00:12:18] Okay. we can see that it's basically YAML. And we defined the SSH key we want to get pushed. We define an /etc/hosts file to push. We then define some systemd units, which include turning
off SELinux because we don't want to deal with that on short-lived Workers. And then we define the Goose service, which pulls down the image.
[00:12:41] And right here actually starts Goose. This is mostly defining the log driver, which ships logs back to Datadog, the log driver, the actual logging agent is started here. but then like,
this is one of the Workers. So we pulled the temp, Umami branch of Goose. We start it up, set it to Worker. Point it to the Manager host, set it to be somewhat verbose, set the log driver to be
Datadog startup data dogs that we get metrics in the logs.
[00:13:12] And then that's just how it runs. And this will restart over and over and over again. So you can actually run multiple tests with the same infrastructure. You just have to restart Goose on
the Manager and then the Workers will kill themselves and then restart.
[00:13:26] Narayan Newton: And so you get this plan, where it shows you where all the instances it's going to spin up. It's actually fairly long just because there are a lot of params for each EC2
instance that we're spinning up 11 of them, 10 plus the Manager, you say that's fine. And it goes.
[00:13:45] And I will stop sharing my screen now is this is going to take a bit.
[00:13:50] So is this already doing something now.
[00:13:53] Narayan Newton: Yes. And this is, you're probably going to see one of the quirks. And this is another thing I dislike about this. Because we're using CoreOS, these are all coming up on an
outdated AMI and they're all going to reboot right there.
[00:14:12] Because they come up, they start pulling the Goose container and then they start the update process and they're not doing anything. So at that point, they think it's safe to update. And so
they update and reboot. it's somewhat cool that that has no impact on anything like the entire infrastructure comes up, updates itself, reboots.
[00:14:31] Then it continues on with what it's doing, but it's another little annoyance that I just don't. You spin up this infrastructure and you don't really have a ton of control over it.
[00:14:41]And so this is the logs of the Manager process of Goose, and it's just waiting for its Workers to connect. They're all, they've all updated, rebooted and Goose is starting on them. As you
can see, eight of them had completed that process.
[00:15:00] Michael Meyers: Is the, you know, all of this, the stuff that you put together here is this going to be available open source for folks to download and leverage?
[00:15:07] Narayan Newton: Yep.
[00:15:08] Michael Meyers: Awesome.
[00:15:10] Narayan Newton: It's all online. On our Tag1 Consulting GitHub organization and the K3s version will be as well. And that's the one I'd recommend you use. This one's real annoying. I know
I keep going on about it, but like, this is how it skunkworks projects work. You make the first revision and you hate it and then you decide to never do that again. And then you make the second
revision. Okay. This is starting. Now I'm going to switch my screen over to the web browser so we can see what it's doing.
[00:15:40] Fabian Franz: Sure.
[00:15:41] Great. the logs that we're seeing there are they coming from the Datadog or just the Manager director. And so
[00:15:49] Narayan Newton: That was a direct connection to the Manager. If we go over to Datadog here, there, these are going to be the logs. As you can see, the host is just like what an EC2 host
name looks like, and they're all changing, but we're getting logs from every agent as well as the Worker. You can see they're launching. If we go back to Fastly, we can see that they're getting
global traffic. So we're getting traffic on the West coast, the East coast, Ireland, Frankfurt, and Mumbai, and the bandwidth will just keep ramping up from here
[00:16:34] Fabian Franz: For the Datadog is that way to also filter by the Manager, like,
[00:16:42] Narayan Newton: Sure. This is the live tail. We'll go to the past 15 minutes and then you can go service Goose. And then we have Worker and Manager, so I can do all my Worker. And that's
sorry, only Manager. The Manager is pretty quiet. The Workers are not.
[00:17:07] Jeremy Andrews: You must've disabled displaying metrics regularly. Cause I would have expected on the server to see that.
[00:17:12]Narayan Newton: If I did, I did not intend to, but I probably did.
[00:17:17] Jeremy Andrews: Can we, is it easy to quickly see what command you passed in or not to go back there from where you're at right now?
[00:17:24]Fabian Franz: It's in Terraform, I think.
[00:17:26] Narayan Newton: It is all set here.
[00:17:31] Jeremy Andrews: So it must be interesting. I have to figure out why you're not getting statistics on the Manager because you should be getting statistics on the Manager. Is this the log
you're tailing or is this what's verbosely put out to the screen?
[00:17:44] Narayan Newton: This is what is put out to the screen.
[00:17:46] Jeremy Andrews: Yeah. Interesting. Okay.
[00:17:48] I would have expected statistics every 30 seconds.
[00:17:53] Narayan Newton: So what's kind of interesting is you can expand this in Fastly and see we're doing significantly less traffic in Asia Pacific, but that makes sense. Considering we're only
hitting one of the PoPs and then Europe and North America tends to be about the same, but you can even drill down further.
[00:18:11]Fabian Franz: One quick question. I saw you hard-code the IP address end point in the Terraform. How does Fastly is still know essentially to which PoP to route and they're doing it through
[00:18:22]Narayan Newton: You mean I'd put the IP the same IP address everywhere in /etc/hosts? Yep. Yeah. It's because of how they're doing traffic.
[00:18:31] So it is the same IP address everywhere, but they the IP address points to different things. Basically. It's cool. A lot of CDNs do it that way. so instead of different IP addresses, it's
basically routing tricks.
[00:18:47] Jeremy Andrews: We seem to have maxed out. Can you look at the
[00:18:49] Narayan Newton: Yeah, this should be about it. It should be all started at this point.
[00:18:53] Yeah. So we've launched a thousand users, we've entered Goose attack. So we have evened out at 14.5 gigabits per second, which is, I think what we got on one server with 10,000 users as
[00:19:05]Jeremy Andrews: This is more, this is more than a single server single server. I think we max out at nine gigabit.
[00:19:10] Michael Meyers: Awesome. Thank you guys. All for joining us. It was really cool to see that in action. All the links we mentioned are going to be posted in the video summary and the blog
posts to that correlates with this. Be sure to check out Tag1.com/goose that's tag the number one.com. That's where we have all of our talks, documentation links to GitHub.
[00:19:33] There's some really great blog posts there that will show you step-by-step with the code, how to do everything that we covered today. So be sure to check that out. If you have any
questions about Goose, please post them to the Goose issue queues that we can share them with the community. Of course, if you like this talk, please remember to upvote subscribe and share it out.
[00:19:53] You can check out our past Tag1 TeamTalks on a wide variety of topics from open source and funding, getting funding on your open source projects to things like decoupled systems and
architectures for web applications at tag1.com/tag1teamtalks as always we'd love your feedback and input on both this episode, as well as ideas for future topics.
[00:20:20] You can email us at ttt@tag1.com Again, a huge thank you, Jeremy, Fabian and Narayan for walking us through this and to everyone who tuned in today. Really appreciate you joining us. Take | {"url":"https://www.tag1consulting.com/transcript-how-load-test-goose-part-2-running-gaggle","timestamp":"2024-11-02T08:15:53Z","content_type":"text/html","content_length":"35170","record_id":"<urn:uuid:48c71d75-6cfd-4d85-9b1c-a7c40190d95f>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00070.warc.gz"} |
Genetic Optimizer
Benefits of Wealth-Lab's Genetic Optimizer
• Speeds up optimizations. Optimal parameters are usually discovered in a fraction of time it takes when using exhaustive search.
• Works where exhaustive method stumbles: when the number of parameter combinations is too large. It takes only a subset of total runs to find the "Top-10" of values using genetic algorithm.
• Aims at the near optimal solution which is likely to be a robust solution. The genetic algorithm does not seek for the absolute optimum which could easily turn an unreliable "peak" value.
• Like user-guided optimization, GA focuses on important regions of solution space, having the benefit of a selective search – yet without the need for human interaction.
• GA can be used by itself or to pave the way for exhaustive optimization – with a much tighter range of possible optimization values.
In a nutshell
Simply put, genetic algorithms are about solving an optimization problem by evolving a population of chromosomes of candidate solutions toward better solutions. A "chromosome" represents a
combination of Strategy Parameters that defines one of the many possible solutions to the optimization problem that the genetic algorithm is trying to solve.
Genetic optimization rules out the poorly performing parameter combinations and keeps the best, letting them proliferate so that each new generation is based upon a hopefully better genotype.
Is Genetic Algorithm any different from Monte Carlo?
Sure. Although both Genetic optimizer and Monte Carlo work fast and use randomness, unlike Monte Carlo, GA does its search purposefully. Genetic algorithm employs selection and recombination,
searching for populations with the best optimization criteria.
How does it work?
After installing
and restarting Wealth-Lab, Genetic Optimizer appears in your list of installed optimizers, next to "Exhaustive" and "Monte Carlo".
• The "evolution" happens in cycles, also known as "generations". The first cycle starts from a population of randomly generated chromosomes.
• A "fitness" function determines the optimality of a chromosome and allows to rank it against the population. In each generation, the fitness of each chromosome is evaluated.
• Based on their fitness, chromosomes are selected from the current population using a Selection method, forming the so called "mating pool". Regardless of chosen Selection method, chromosomes with
better fitness have greater chances of making it to mating pool.
• Then, a new generation is formed by applying two genetic "operators": Crossover and Mutation, to recombine and mutate the optimal chromosomes, producing a new, hopefully better generation.
• This process goes on iteratively upon reaching a satisfactory fitness level for the population, or after producing a number of generations specified as the Generation count (in this case, leaving
a chance of not finding a satisfactory solution).
Configuration cheat sheet
1. Specify population and generation count
Population count
is the number of chromosomes in each generation. The larger is a population, the better is the diversity (especially in first generations) – ensuring that the algorithm won't hit a local extreme by
mischance. Optimization process stops after reaching the number of generations specified as
Generation Count
No formal rule exists for finding the best combination. We recommend starting your optimization using default values for up to 200K runs required. Also, you can use companion
Genetic Optimizer Test
application to quickly determine optimal population and generation counts. Save the results of an exhaustive optimization of a strategy with similar required runs and perform a genetic search in the
Test utility.
2. Choose a fitness function
The metric to optimize is the
function – a measure of a chromosome's optimality among the population. To choose one, you can leave the default "Net profit" option or select any performance metric available from Scorecards
installed in Wealth-Lab – such as Basic, Extended or
if installed
). To change a Scorecard, switch to the "Results" tab, select a different Scorecard, click "Begin optimization" and "Cancel optimization", then re-open "Settings".
3. Choose a selection method
• Roulette wheel: Imagine a roulette wheel where every sector is a chromosome, and the better its fitness, the larger is the sector. Mating pool is formed among the sectors on which the ball falls
after spinning the wheel, until there are enough chromosomes in the population.
• Tournament method repeatedly picks a pair of chromosomes on a random basis and selects the one with the best fitness from the pair.
4. Configuring reproduction
Crossover and mutation are genetic operators that produce the next generation of chromosomes, increasing the average fitness by breeding and mixing. Among several available
WL Crossover 1
WL Crossover 2
are probably the best choices for solving optimization tasks:
• Flat crossover randomly picks an offspring. Generally, is a suboptimal choice since it tends to stop looking for new solutions too soon.
• BLX crossover (aka Blend crossover, alpha=0.5) allows the offspring gene to be located outside the interval.
• WL1 and WL2 crossovers are extensions of the BLX method. WL1 aims at procreating offsprings with better fitness, picking a chromosome among just three possible candidates: the smallest and the
largest values of the range and the mid-range. WL2 is similar to WL1, but when the appropriate genes of two parent chromosomes are identical, it seeks for a better fitness value by taking an
adjacent gene.
Last but not least, you need to configure
: this is what lets GA optimizer to maintain diversity from one generation to the next, i.e., keeps them from becoming too similar to each other, and thus continue evolution. There's one method,
Simple Mutation, that randomly mutates a drawn gene according to specified
: Wealth-Lab's Genetic Optimizer has elitist selection, pushing the elite – Top 3% chromosomes – to the next generation without breeding and mixing. This also protects from loss of good solutions
caused by a mutation rate too high.
Analyzing results
The output of Strategy optimized with GA is available as it happens, in the
tab and on the
First, here's some absolutely necessary geek speak that will help you decipher results coming from the
. Wealth-Lab's genetic optimizer uses the so called "real-coded" genetic algorithm, where each Strategy optimization parameter becomes a "gene" in "chromosome". Let's take the prepackaged "Channel
Breakout VT" strategy for example. It has 2 optimizable parameters: Long Channel (ranging from 35 to 100 with step 5), and Short Channel (ranging from 3 to 33 with step 3).
A chromosome takes the form of a series of numbers. For example, the "chromosome" consisting of the two genes, Long Channel = 100 and Short Channel = 18, would be coded the following way: {13;5}. How
does a gene take one value or the other? It's a zero-based enumeration, as the following table illustrates:
│ Short channel │ Gene │
│ 3 │ 0 │
│ 6 │ 1 │
│ ... │ ... │
│ 18 │ 5 │
│ ... │ ... │
│ 33 │ 10 │
When running a genetic optimization, chromosomes that existed before can ensue, in a different or even in the same generation. Hence, the number of Runs Required of a genetic optimization is unknown
and can not be precomputed by multiplying Population Count by Generation Count. Genetic optimizer saves processing time by not reprocessing duplicate parameter sets, freeing Wealth-Lab from having to
run the Strategy redundantly, so the number of actual runs will be equal to Unique Chromosome count on the
tab. You may filter out non-unique chromosomes by checking "Show unique only", as well as sort the table by a column.
graph illustrates the progress of a genetic optimization. Each step on the red line means a new generation. The first one is generated randomly, and all the rest, as we already know, are the results
of selecting, breeding and mutating of chromosomes.
watching the progress of an optimization in real time can slow it down.
Known issues
• Exception when the system decimal separator is not "." (e.g. ","): System.ArgumentOutOfRangeException: Value '5' is impossible for 'Value'. 'Value' must be between 'Minimum' and 'Maximum'.
□ Workaround: In the WealthLabConfig.txt file, change the number on this line to 0,05: Genetic.GA_MutationProbability=0.05
Powered by | {"url":"http://www2.wealth-lab.com/wl5WIKI/GeneticOptimizer.ashx","timestamp":"2024-11-05T16:51:42Z","content_type":"application/xhtml+xml","content_length":"32683","record_id":"<urn:uuid:b87e7a44-fa51-46ef-ba19-8705b094f3b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00892.warc.gz"} |
PerTOOLS Financial Analysis for Excel
You can choose from a range of over 50 functions (which includes an infinite number of implementations) each of which runs very quickly and can be called from a cell formula in exactly the same way
you might use an Excel formula like
=SUM() or =AVERAGE()
All functions refresh automatically when data is added, deleted or changed and you can easily use thousands of these functions in a single spreadsheet without experiencing speed issues. All
appropriate functions are supplied in both geometric and arithmetic versions so you can choose how to compound returns. A comprehensive manual is also provided which contains full explanations of all
functions together with all necessary mathematical formulae and examples. An example spreadsheet is also provided which shows examples of all functions being used in practice. Full VBA integration is
The functions have been written so that the user only needs to remember a minimum about how to use them. For example, the function =PT_CumRet_A() that computes an Arithmetically-Compounded Return can
produce many different kinds of output depending upon the function inputs. It can produce:
• An output expressed at any frequency. For example, if the input returns are daily, the computed arithmetically-compounded return can be daily, weekly, monthly, annual or any other user-chosen
frequency. This functionality is called UpScaling
• An output for the whole period. In this case, this would be an arithmetically-compounded return expressed over the whole period of the input data
The advantage is that the user doesn’t need to remember many different function names; he just needs to remember one. In this way, fewer functions can have many implementations. All of the functions,
manual and example spreadsheet are included for one price.
The new UpScaling functionality allows the user to compute any of the PerTOOLS statistics over any period. For example, suppose the user wanted to compute a Sharpe Ratio. No matter what the frequency
of the input data is (daily weekly, monthly etc.), the user can compute a daily, weekly, monthly or annual Sharpe Ratio
Available functions include:
• Arithmetically Compounded Return
• Geometrically Compounded Return
• Arithmetic Sharpe Ratio
• Geometric Sharpe Ratio
• Arithmetic Relative Sharpe Ratio
• Geometric Relative Sharpe Ratio
• Standard Deviation
• Gain Deviation
• Loss Deviation
• Downside Deviation
• Semi Deviation
• Maximum Drawdown
• Drawdown Analysis
• Maximum Run-Up
• Run-Up Analysis
• Alpha
• Beta
• Correlation
• R-Squared
• Skew
• Kurtosis
• Arithmetic Average Gain
• Geometric Average Gain
• Arithmetic Average Loss
• Geometric Average Loss
• Information Ratio
• Tracking Error
• Sortino Ratio
• Arithmetic Treynor Ratio
• Geometric Treynor Ratio
• Jensen's Alpha
• Up-Percentage Ratio
• Down-Percentage Ratio
• Percent-Gain Ratio
• Percent-Loss Ratio
• Up Capture
• Down Capture
• Up-Number Ratio
• Down-Number Ratio
• Gain-to-Loss Ratio
• Profit-to-Loss Ratio
• All functions
There are literally hundreds of uses for this product which include:
• Create custom statistics in Excel that update each month when new data is added
• Monitor peer-group performance Create custom functions of your own, easily
• Create custom reports branded with the company logo
• Create statistics for presentations. These statistics will update automatically each month when new data is added
• Integrate different data sources such as Reuters and Bloomberg and create your own custom analytics based on this data
• Analyse data from a database such as Pertrac and have the flexibility to compute statistics based on this data, easily and quickly by using Excel and the PerTOOLS functions
• Speed up VBA development times by using these functions within VBA
• Use in conjunction with PerTOOLS Automatic Data Extractor for Excel to add a whole new dimension to your analysis of Pertrac data
Full instructions are provided in a detailed user-guide. | {"url":"http://aertia.com/en/productos.asp?pid=335","timestamp":"2024-11-06T17:50:38Z","content_type":"text/html","content_length":"14725","record_id":"<urn:uuid:660da92e-3572-417d-9921-5ab789a37ddf>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00790.warc.gz"} |
Mad Minute Multiplication Worksheets
Mathematics, particularly multiplication, develops the foundation of numerous scholastic disciplines and real-world applications. Yet, for numerous learners, grasping multiplication can position a
challenge. To address this difficulty, educators and moms and dads have actually welcomed a powerful device: Mad Minute Multiplication Worksheets.
Introduction to Mad Minute Multiplication Worksheets
Mad Minute Multiplication Worksheets
Mad Minute Multiplication Worksheets -
Rudolph Academy s printable Minute Multiplication Worksheets also known as MAD Minutes are a valuable resource for enhancing math skills fostering a strong foundation in multiplication and instilling
a sense of confidence in young learners For Students Rudolph Academy s MAD Minutes are an engaging and effective way to practice multiplication
I can tell you with a little practice you can get work one of these worksheets to completion with 100 accuracy in just under a minute These two minute timed multiplication worksheets will get kids
ready for Mad Minute or RocketMath multiplication fact practice in third or fourth grade Quick and free printable PDFs with answer keys
Relevance of Multiplication Method Understanding multiplication is crucial, laying a solid foundation for innovative mathematical ideas. Mad Minute Multiplication Worksheets use structured and
targeted technique, cultivating a deeper comprehension of this basic arithmetic procedure.
Development of Mad Minute Multiplication Worksheets
Printable Mad Minute Multiplication Printable Word Searches
Printable Mad Minute Multiplication Printable Word Searches
A multiplication math drill is a worksheet with all of the single digit problems for multiplication on one page A student should be able to work out the 100 problems correctly in 5 minutes 60
problems in 3 minutes or 20 problems in 1 minute
40 Multiplication Worksheets These multiplication worksheets extend the Spaceship Math one minute timed tests with the x10 x11 and x12 facts Even if your school isn t practicing multiplication past
single digits these are valuable multiplication facts to learn for many time and geometry problems Extended Spaceship Math
From conventional pen-and-paper workouts to digitized interactive formats, Mad Minute Multiplication Worksheets have actually advanced, satisfying diverse understanding designs and choices.
Kinds Of Mad Minute Multiplication Worksheets
Standard Multiplication Sheets Straightforward exercises focusing on multiplication tables, assisting students build a solid arithmetic base.
Word Problem Worksheets
Real-life scenarios incorporated right into problems, improving essential reasoning and application skills.
Timed Multiplication Drills Tests made to enhance rate and precision, helping in fast mental math.
Advantages of Using Mad Minute Multiplication Worksheets
27 Mad Minute Math Worksheets Accounting Invoice In 2020 Math multiplication worksheets
27 Mad Minute Math Worksheets Accounting Invoice In 2020 Math multiplication worksheets
12 Pages Multiplication Drill worksheets 6 Pages Mixed up number multiplication practice 12 Pages separate multiplication tests for each of the fact families from 1 to 12 Each quiz contains 20
problems 60 problems in 1 page 5 Pages Mix Up Multiplication by 0 3 s 0 6 s 0 9 s and 0 12 s 3 Pages Filling in times table Mixed up
These mad minutes are a fun and engaging way to help elementary students improve their multiplication and division fact fluenc Subjects Basic Operations Math Mental Math Grades 2 nd 6 th Types
Worksheets Assessment Printables Show 3 included products 10 50
Improved Mathematical Skills
Consistent technique develops multiplication proficiency, improving general math capabilities.
Enhanced Problem-Solving Talents
Word troubles in worksheets establish logical thinking and approach application.
Self-Paced Understanding Advantages
Worksheets accommodate individual discovering speeds, cultivating a comfy and adaptable learning setting.
How to Create Engaging Mad Minute Multiplication Worksheets
Including Visuals and Shades Lively visuals and shades capture attention, making worksheets aesthetically appealing and engaging.
Consisting Of Real-Life Circumstances
Connecting multiplication to day-to-day situations includes significance and usefulness to workouts.
Customizing Worksheets to Different Ability Degrees Personalizing worksheets based on differing efficiency degrees ensures comprehensive understanding. Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games Technology-based resources provide interactive understanding experiences, making multiplication appealing and enjoyable. Interactive Internet Sites and Apps
Online platforms provide varied and accessible multiplication method, supplementing conventional worksheets. Tailoring Worksheets for Various Learning Styles Visual Learners Visual help and layouts
help comprehension for students inclined toward visual learning. Auditory Learners Verbal multiplication troubles or mnemonics cater to learners who comprehend principles with acoustic methods.
Kinesthetic Learners Hands-on activities and manipulatives support kinesthetic students in comprehending multiplication. Tips for Effective Implementation in Knowing Consistency in Practice Routine
practice reinforces multiplication abilities, advertising retention and fluency. Balancing Repeating and Selection A mix of recurring workouts and diverse trouble styles keeps rate of interest and
comprehension. Offering Positive Comments Responses aids in determining areas of renovation, motivating ongoing progression. Obstacles in Multiplication Technique and Solutions Motivation and
Interaction Difficulties Tedious drills can result in disinterest; cutting-edge methods can reignite motivation. Conquering Concern of Mathematics Negative understandings around math can prevent
progress; developing a positive discovering atmosphere is essential. Impact of Mad Minute Multiplication Worksheets on Academic Performance Research Studies and Research Findings Research study
indicates a favorable relationship in between consistent worksheet usage and enhanced math efficiency.
Mad Minute Multiplication Worksheets become flexible tools, promoting mathematical proficiency in students while suiting varied knowing styles. From standard drills to interactive on-line resources,
these worksheets not just improve multiplication abilities but also advertise crucial thinking and analytic abilities.
Mad Minute Math Subtraction Worksheets 99Worksheets
9 Best Images Of Math Minutes Answer Worksheets Multiplication Worksheets 6 Times Tables
Check more of Mad Minute Multiplication Worksheets below
Multiplication Mad Minute Stuff I Want To Make Multiplication Mad Minute Math Subtraction
11 Mad Minute Multiplication Worksheets Samples
11 Best Images Of Five Minute Frenzy Multiplication Worksheets 5 Minute Math Frenzy
Printable Multiplication Mad Minute PrintableMultiplication
10 Best Images Of Mad Minute Math Multiplication Worksheets Mad Minute Math Addition
Multiplication Worksheets Mad Minute PrintableMultiplication
Two Minute Multiplication Worksheets
I can tell you with a little practice you can get work one of these worksheets to completion with 100 accuracy in just under a minute These two minute timed multiplication worksheets will get kids
ready for Mad Minute or RocketMath multiplication fact practice in third or fourth grade Quick and free printable PDFs with answer keys
Math Minute Worksheets Mad Minutes Basic Facts
Minute Math Drills or Math Mad Minutes as they are known to many teachers are worksheets with simple drill and practice basic facts math problems Students are given a short period of time usually
three minutes or so to complete as many problems as they can
I can tell you with a little practice you can get work one of these worksheets to completion with 100 accuracy in just under a minute These two minute timed multiplication worksheets will get kids
ready for Mad Minute or RocketMath multiplication fact practice in third or fourth grade Quick and free printable PDFs with answer keys
Minute Math Drills or Math Mad Minutes as they are known to many teachers are worksheets with simple drill and practice basic facts math problems Students are given a short period of time usually
three minutes or so to complete as many problems as they can
Printable Multiplication Mad Minute PrintableMultiplication
11 Mad Minute Multiplication Worksheets Samples
10 Best Images Of Mad Minute Math Multiplication Worksheets Mad Minute Math Addition
Multiplication Worksheets Mad Minute PrintableMultiplication
Mad Minute Math Worksheets Siteraven Addition Multiplication Printable Timed Math Worksheets
Multiplication Worksheets Mad Minute PrintableMultiplication
Multiplication Worksheets Mad Minute PrintableMultiplication
Minute Math Multiplication And Division Worksheets 99Worksheets
FAQs (Frequently Asked Questions).
Are Mad Minute Multiplication Worksheets appropriate for all age teams?
Yes, worksheets can be tailored to various age and ability degrees, making them versatile for various students.
Exactly how usually should students exercise making use of Mad Minute Multiplication Worksheets?
Regular technique is vital. Routine sessions, preferably a couple of times a week, can generate considerable improvement.
Can worksheets alone boost math skills?
Worksheets are a valuable tool yet needs to be supplemented with diverse knowing methods for detailed skill advancement.
Are there on-line platforms supplying complimentary Mad Minute Multiplication Worksheets?
Yes, several educational web sites offer open door to a wide variety of Mad Minute Multiplication Worksheets.
How can parents sustain their youngsters's multiplication practice at home?
Urging consistent technique, supplying help, and developing a positive understanding environment are advantageous actions. | {"url":"https://crown-darts.com/en/mad-minute-multiplication-worksheets.html","timestamp":"2024-11-12T22:38:58Z","content_type":"text/html","content_length":"29688","record_id":"<urn:uuid:c8f4a6f6-cafa-4981-b618-96cd72a4d149>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00213.warc.gz"} |
Understanding Amortization: Key Concepts and Practical Examples | Samreen Info ✅
Understanding Amortization: Key Concepts and Practical Examples
Amortization is a crucial financial concept that affects loans, mortgages, and the value of intangible assets. Grasping the intricacies of amortization can significantly aid in making informed
financial decisions. This article breaks down the complexities, providing clear explanations, practical examples, and essential formulas.
Whether you're a student, a professional, or simply curious, understanding amortization is invaluable. Let's dive into the world of amortization and expand your financial knowledge.
Amortization is a fundamental financial concept that plays a crucial role in various aspects of business and personal finance. Whether you are dealing with loans, mortgages or the value of intangible
assets, understanding amortization can help you make informed financial decisions.
This article aims to provide a comprehensive overview of amortization, explaining its meaning and significance in simple terms. We'll explore the concept from different angles, including its
definition, examples and practical applications. We'll also delve into the differences between amortization and depreciation, ensuring you have a clear grasp of both concepts.
Whether you're a student, a business professional or simply someone looking to expand your financial knowledge, this article will equip you with the essential information needed to understand and
utilize amortization effectively. Let's dive into the details and unravel the intricacies of this important financial principle.
Amortization is the process of gradually reducing a debt over a specified period through regular payments. Each payment made includes both interest and a portion of the principal amount, which is the
original loan balance. This systematic approach ensures that the debt is paid off in full by the end of the term.
Amortization is also used in accounting to describe the process of expensing the cost of intangible assets, such as patents or trademarks, over their useful life. This helps in reflecting the asset's
diminishing value on financial statements accurately.
In essence, amortization serves two primary purposes:
1. Loan Repayment: It helps borrowers repay their debt in manageable installments.
2. Asset Depreciation: It allows businesses to allocate the cost of intangible assets over their useful life, providing a realistic view of their financial health.
Understanding amortization is essential for managing finances effectively, whether you're dealing with personal loans, business investments or accounting practices.
Is amortization an expense?
Yes, amortization is considered an expense. It refers to the process of gradually writing off the initial cost of an intangible asset over its useful life. In accounting, amortization is treated
similarly to depreciation, which applies to tangible assets.
Amortization is recorded on the income statement as an expense, which reduces the company's taxable income and its overall net income. For example, if a company acquires a patent, the cost of the
patent will be amortized over its useful life, and each year's amortization expense will be reported on the income statement.
What is a good example of amortization?
A good example of amortization is the treatment of a patent. Suppose a company purchases a patent for $100,000, and the patent has a useful life of 10 years. The company will amortize the cost of the
patent over 10 years, meaning each year, it will record $10,000 ($100,000 / 10 years) as an amortization expense on its income statement. This process spreads the cost of the patent over its useful
life, matching the expense with the revenue it generates.
What is amortization of an asset?
Amortization of an asset involves gradually expensing the cost of an intangible asset over its useful life. Intangible assets can include patents, trademarks, copyrights, and goodwill. The purpose of
amortization is to allocate the cost of the asset over the period it is expected to generate economic benefits.
This helps in providing a more accurate representation of the asset's value and the company's financial performance. For example, if a company buys a trademark for $50,000 with an estimated useful
life of 20 years, it would amortize the trademark by expensing $2,500 ($50,000 / 20 years) each year.
What is amortization of a loan?
Amortization of a loan refers to the process of repaying a loan over time through regular payments. Each payment covers both interest and principal, with the interest portion decreasing and the
principal portion increasing over time. This ensures that the loan is fully paid off by the end of the term.
For instance, if you take out a $200,000 mortgage at a 4% interest rate over 30 years, your monthly payments will be calculated to ensure the loan is paid off in full by the end of the 30 years.
Initially, a larger portion of each payment goes toward interest, but over time, more of each payment goes toward reducing the principal balance.
This process is called amortization, and it is often illustrated through an amortization schedule, which details each payment's allocation to interest and principal over the loan term.
Why is it useful to understand amortization?
Understanding amortization is useful for several reasons:
Financial Planning and Analysis
• Accurate Financial Statements: Amortization helps in presenting a more accurate picture of a company's financial health by matching expenses with the revenues they help generate.
• Budgeting: Knowing the amortization expense allows businesses to plan their budgets more accurately, ensuring they set aside the necessary funds to cover these expenses.
• Investment Decisions: Investors and analysts use amortization to assess a company's profitability and performance. It affects key financial metrics like net income and earnings per share (EPS).
Tax Implications
• Tax Deductions: Amortization expenses are deductible for tax purposes, reducing taxable income and therefore the amount of tax a company needs to pay.
• Tax Planning: Understanding the amortization schedule can aid in tax planning, helping companies to manage their tax liabilities more effectively.
Loan Management
• Payment Planning: For loans, knowing how amortization works allows borrowers to understand how their payments are structured, how much of each payment goes towards interest and principal, and how
this changes over time.
• Interest Savings: By understanding the amortization schedule, borrowers can see the impact of making extra payments, which can reduce the total interest paid over the life of the loan.
Asset Management
• Asset Valuation: Amortization helps in determining the book value of intangible assets over time, providing a realistic view of the company's asset base.
• Cost Allocation: It helps in allocating the cost of intangible assets over their useful life, which aids in performance measurement and cost control.
Strategic Decision Making
• Resource Allocation: By understanding how costs are spread over time, businesses can make more informed decisions about resource allocation and investment in intangible assets.
• Business Valuation: Amortization affects earnings and cash flow, which are critical factors in business valuation and mergers and acquisitions.
In summary, understanding amortization is crucial for effective financial management, strategic planning, tax optimization, and informed decision-making in both personal and corporate finance.
Amortization formula
Amortization can refer to the process of spreading out the cost of an intangible asset over its useful life or the process of paying off a loan over time through regular payments. Here are detailed
formulas for both cases:
Amortization of an Intangible Asset
To calculate the annual amortization expense for an intangible asset, you can use the following formula:
Annual Amortization Expense= (Cost of Intangible Asset−Residual Value) / Useful Life
• Cost of Intangible Asset: The initial cost incurred to acquire the intangible asset.
• Residual Value: The estimated value of the asset at the end of its useful life (often zero for intangible assets).
• Useful Life: The estimated period over which the asset will generate economic benefits.
Example of Amortization of an Intangible Asset
If a company purchases a patent for $50,000 with a useful life of 10 years and no residual value:
Annual Amortization Expense = ($50,000−$0) / 10 = $5,000
Amortization of a Loan
For loans, the amortization process involves calculating the regular payment that includes both interest and principal repayment. The formula for calculating the monthly payment on an amortizing loan
A = P x i (1 + i)^n / (1 + i)^n - 1
• A = Monthly payment
• P = Principal loan amount
• i = Monthly interest rate
• n = Number of months
Example of Amortization of a Loan
If you take out a $240,000 mortgage at an annual interest rate of 3.5% for 15 years, then
P = Principal loan amount = $240,000
i = Monthly interest rate = 0.0029167 (which is 3.5% annual interest rate divided by 12 months)
n = Total number of payments = 180 (which is 15 years times 12 months)
Formula is A = P x i (1 + i)^n / (1 + i)^n - 1
Calculation of First Part
(1 + i)^n = (1+0.0029167)^180 = 1.6891777009157
Then P x i (1 + i)^n = 240,000×0.0029167×1.6891777009157=1,182.42
Calculation of Second Part
(1 + i)^n - 1 = 1.6891777009157−1=0.6891777009157
And then dive the two part
A = 1,182.42 / 0.6891777009157 = 1,715.70
The fixed monthly mortgage payment is approximately $1,716
Amortization schedule
The amortization schedule will show the breakdown of each monthly payment into interest and principal, as well as the remaining balance after each payment.
First Few Payments
Interest Payment: 240,000×0.0029167=700
Principal Payment: 1,716−700=1,016
Remaining Balance: 240,000−1,016=238,984
Interest Payment: 238,984×0.0029167=696.69
Principal Payment: 1,716−696.69=1,019.31
Remaining Balance: 238,984−1,019.31=237,964.69
And so on for each month.
Full Amortization Schedule
Month Payment Interest Principal Remaining Balance
1 $1,716 $700 $1,016 $238,984
2 $1,716 $696.69 $1,019.31 $237,964.69
----- ----- ------ ------- -------
179 $1,716 $4.96 $1,711.04 $1,711.04
180 $1,716 $4.99 $1,711.01 $0
How to Use the Schedule
• Track Payments: Use the schedule to see how much of each payment goes towards interest and how much goes towards reducing the principal.
• Plan Finances: Understand how the loan balance decreases over time and plan for future financial needs.
• Evaluate Extra Payments: See how making extra payments can reduce the interest paid over the life of the loan and shorten the loan term.
What's the difference between amortization and depreciation?
Amortization and depreciation are both methods of allocating the cost of an asset over its useful life, but they apply to different types of assets and have distinct characteristics. Here are the key
differences between the two:
1. Types of Assets
• Amortization: Applies to intangible assets, which are non-physical assets such as patents, trademarks, copyrights, goodwill, and franchises.
• Depreciation: Applies to tangible assets, which are physical assets such as buildings, machinery, equipment, vehicles, and furniture.
2. Purpose
• Amortization: Spreads the cost of an intangible asset over its useful life, reflecting the consumption or expiration of the asset's value.
• Depreciation: Spreads the cost of a tangible asset over its useful life, reflecting wear and tear, usage, or obsolescence.
3. Methods
• Amortization: Typically uses the straight-line method, where the cost of the asset is divided equally over its useful life.
• Depreciation: Can use various methods, including:
• Straight-Line Method: Similar to amortization, spreading the cost equally over the asset's useful life.
• Declining Balance Method: Accelerated depreciation, where higher expenses are recorded in the earlier years of the asset's life.
• Units of Production Method: Based on the asset's usage, such as the number of units produced or hours used.
4. Residual Value
• Amortization: Usually assumes no residual value (salvage value) for intangible assets at the end of their useful life.
• Depreciation: Often considers a residual value, which is the estimated amount the asset can be sold for at the end of its useful life.
5. Impact on Financial Statements
• Amortization: Recorded as an expense on the income statement, reducing taxable income and net income. The intangible asset is reduced on the balance sheet.
• Depreciation: Also recorded as an expense on the income statement, reducing taxable income and net income. The tangible asset's book value is reduced on the balance sheet.
6. Tax Implications
• Amortization: Intangible asset expenses are often deductible for tax purposes, following specific rules and guidelines set by tax authorities.
• Depreciation: Tangible asset expenses are also deductible for tax purposes, with tax regulations often providing different depreciation methods or rates than those used in financial accounting.
7. Examples
• Amortization Example: A company purchases a patent for $100,000 with a useful life of 10 years. It amortizes the patent over 10 years, recording $10,000 as an amortization expense each year.
• Depreciation Example: A company buys a machine for $100,000 with a useful life of 10 years and a residual value of $10,000. Using the straight-line method, it depreciates the machine by $9,000
per year (($100,000 - $10,000) / 10 years).
Understanding amortization is essential for effective financial management. It involves systematically spreading the cost of intangible assets or loan repayments over a specified period. This process
helps in accurate financial planning, optimizing tax liabilities, and providing a clear picture of financial health.
By grasping the concepts of amortization for both intangible assets and loans, individuals and businesses can make informed decisions, better manage their finances, and strategically allocate
resources. This knowledge is crucial for maintaining financial stability and achieving long-term financial goals.
If you have any query related to this post titled "Understanding Amortization: Key Concepts and Practical Examples" please comment in the comment box, given below.
Thank you
Samreen info.
এই পোস্টটি পরিচিতদের সাথে শেয়ার করুন
এই পোস্টে এখনো কেউ মন্তব্য করে নি
মন্তব্য করতে এখানে ক্লিক করুন
সামরিন ইনফো এর নীতিমালা মেনে কমেন্ট করুন। প্রতিটি কমেন্ট রিভিউ করা হয়।
comment url | {"url":"https://www.samreeninfo.com/2024/08/Amortization.html","timestamp":"2024-11-08T11:20:18Z","content_type":"text/html","content_length":"113367","record_id":"<urn:uuid:db79aaa8-5cc5-4cdc-a7cc-87526cd2bc72>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00540.warc.gz"} |
Key life events
• 1986: Born in Delhi, India
Formal education
• School (1990-2004): Received school education at Delhi Public School, Noida. Successfully completed schooling.
• Undergraduate studies (2004-2007): Received undergraduate education at Chennai Mathematical Institute in Chennai, Tamil Nadu, India. At the end of it, received a B. Sc. (Hons) degree in
mathematics and computer science.
• Graduate studies (2007-2013): Received graduate education at The University of Chicago (mathematics department). Received a M. S. degree in August 2009 and a Ph.D. degree in December 2013.
Current job
My current job is as a data scientist-cum-software engineer at The Arena Group, a media/tech company that acquired LiftIgniter.
Other notable accomplishments
In my last two years of high school, I was active in the mathematics olympiad. I represented India at the International Mathematical Olympiad in 2003 and 2004, winning Silver Medals both times. | {"url":"https://vipulnaik.com/biography/","timestamp":"2024-11-07T16:24:37Z","content_type":"text/html","content_length":"41777","record_id":"<urn:uuid:74b066b5-a55b-4426-b589-4ae7c27991c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00867.warc.gz"} |
Random Effects (RFX) Group Analysis
BrainVoyager v23.0
Random Effects (RFX) Group Analysis
In order to test whether results obtained for individual subjects are valid at the population level, the statistical procedure must assess the variability of observed effects across subjects. In such
a random effects (RFX) analysis, individual subjects of a study are considered to be a representative sample of a population. If group effects are significant at the random effects level, the
findings from the sample of subjects can be generalized to the population from which the subjects have been drawn. In a fixed effects (FFX) analysis, obtained group results can not be generalized to
the population level since the data of all subjects is concatenated and analyzed as if it stems from a single subject. The error variance in a FFX analysis is, thus, estimated by the variability
across individual measurement time points while in a RFX analysis, the error variance is estimated by the variability of subject-specific effects across subjects.
In order to explicitly estimate the variability of effects across subjects, a RFX analysis usually proceeds in two (or more) levels (e.g. Kirby, 1993). In a first level, the data for each subject is
"collapsed" resulting in mean effect estimates per condition (level 1). The estimated first-level mean effects enter the second level as the new dependent variable (instead of the raw data) and are
analyzed across subjects (group analysis). Since the analysis at the second level explicitly models the variability of the estimated effects across subjects, the obtained results can be generalized
to the population from which the subjects (sample) were drawn. The first level is performed in BrainVoyager by running a General Linear Model estimating condition effects (beta values) separately for
each subject. Instead of one set of beta values in fixed effects analysis, this step provides a separate set of beta values for each subject. At the second level, BrainVoyager offers two approaches
to analyze the data, the summary statistics approach and the ANCOVA approach.
The Summary Statistics Approach
In the summary statistics approach, the same contrast is specified across the beta values of each subject and the mean value of this contrast is tested against zero using a t-test. In a similar way,
two groups are compared by computing the mean of the summary statistic (contrast) for each group followed by a t-test comparing the two group means. In case that a multi-subject GLM has been computed
or loaded, these basic random-effects analysis options are available directly in BrainVoyager's Overlay GLM Options dialog. In order to handle experimental designs with more than two groups or with
multiple factors, the ANCOVA dialog can be used. This dialog can be invoked directly from the Analysis menu but can also be invoked directly from the Overlay GLM Contrasts dialog by clicking the RFX
ANCOVA button.
The ANCOVA Approach
The mean condition effects (beta values per subject or contrasts) can be analyzed at the second level using a standard analysis of variance (ANOVA) approach allowing to model one or more within
subjects (repeated measures) factors. If the study represents data from multiple groups of subjects, up to two between subjects factors for group comparisons can be added. Using the ANOVA approach,
The statistical analysis at the second level does not differ from the usual statistical approach in other human (e.g. psychological or medical) studies. The only major difference to standard
statistics is that the analysis is performed separately for each voxel (or vertex) requiring appropriate corrections for a massive multiple comparisons problem. In addition to the estimated
subject-specific effects of the fMRI design (beta values or contrasts of first level analysis), additional external variables (e.g. an IQ value for each subject) may be incorporated as covariates at
the second level extending the ANOVA approach to a ANCOVA (analysis of covariance) approach.
RFX Analysis Steps
Random effects analysis in BrainVoyager QX operates in three major stages:
Copyright © 2023 Rainer Goebel. All rights reserved. | {"url":"http://brainvoyager.com/bv/doc/UsersGuide/StatisticalAnalysis/RandomEffectsAnalysis/RandomEffectsGroupAnalysis.html","timestamp":"2024-11-13T02:39:38Z","content_type":"text/html","content_length":"40372","record_id":"<urn:uuid:c6a157f3-98ab-4131-8716-496c40c20bdd>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00870.warc.gz"} |
Machine Learning for Jet Physics
Anders ANDREASSEN (Harvard)
Many early applications of Machine Learning in jet physics are classifiers that use Convolutional Neural Networks trained on jet images. We will present a work-in-progress custom probabilistic model,
tailored to learning the physics of jet production in an unsupervised way. Our model is built on a Recurrent Neural Network suited to modeling the approximate sequential splitting of a tree, which
can be explicitly defined through a clustering algorithm. The model also contains fully-connected sub-networks modeling physical quantities like the QCD splitting functions. We train our network on
Pythia jets as a proof-of-principle, but our framework importantly admits training on LHC data, including the potential to be jet-algorithm independent. Given the general structure, our model can be
used as a generative model for jets, though we do not anticipate that to be its primary use. Instead, we will investigate the extraction of splitting functions in various environments and their
sensitivity to global jet structure using unsupervised machine learning. Further possible physics applications will be explored. | {"url":"https://indico.physics.lbl.gov/event/546/contributions/1276/","timestamp":"2024-11-11T19:19:29Z","content_type":"text/html","content_length":"100760","record_id":"<urn:uuid:ab929e2e-c067-430b-b0b8-e23078ef8ef3>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00298.warc.gz"} |
PySDR: A Guide to SDR and DSP using Python
15. Multipath Fading¶
In this chapter we introduce multipath, a propagation phenomenon that results in signals reaching the receiver by two or more paths, which we experience in real-world wireless systems. So far we have
only discussed the “AWGN Channel”, i.e., a model for a wireless channel where the signal is simply added to noise, which really only applies to signals over a cable and some satellite
communications systems.
All realistic wireless channels include many “reflectors”, given that RF signals bounce. Any object between or near the transmitter (Tx) or receiver (Rx) can cause additional paths the signal
travels along. Each path experiences a different phase shift (delay) and attenuation (amplitude scaling). At the receiver, all of the paths add up. They can add up constructively, destructively, or a
mix of both. We call this concept of multiple signal paths “multipath”. There is the Line-of-Sight (LOS) path, and then all other paths. In the example below, we show the LOS path and a single
non-LOS path:
Destructive interference can happen if you get unlucky with how the paths sum together. Consider the example above with just two paths. Depending on the frequency and the exact distance of the paths,
the two paths can be received 180 degrees out of phase at roughly the same amplitude, causing them to null out each other (depicted below). You may have learned about constructive and destructive
interference in physics class. In wireless systems when the paths destructively combine, we call this interference “deep fade” because our signal briefly disappears.
Paths can also add up constructively, causing a strong signal to be received. Each path has a different phase shift and amplitude, which we can visualize on a plot in the time domain called a
“power delay profile”:
The first path, the one closest to the y-axis, will always be the LOS path (assuming there is one) because there’s no way for any other path to reach the receiver faster than the LOS path.
Typically the magnitude will decrease as the delay increases, since a path that took longer to show up at the receiver will have traveled further.
What tends to happen is we get a mix of constructive and destructive interference, and it changes over time as the Rx, Tx, or environment is moving/changing. We use the term “fading” when
referring to the effects of a multipath channel changing over time. That’s why we often refer to it as “multipath fading”; it’s really the combination of constructive/destructive interference
and a changing environment. What we end up with is a SNR that varies over time; changes are usually on the order of milliseconds to microseconds, depending on how fast the Tx/Rx is moving. Beneath is
a plot of SNR over time in milliseconds that demonstrates multipath fading.
There are two types of fading from a time domain perspective:
• Slow Fading: The channel doesn’t change within one packet’s worth of data. That is, a deep null during slow fading will wipe out the whole packet.
• Fast Fading: The channel changes very quickly compared to the length of one packet. Forward error correction, combined with interleaving, can combat fast fading.
There are also two types of fading from a frequency domain perspective:
Frequency Selective Fading: The constructive/destructive interference changes within the frequency range of the signal. When we have a wideband signal, we span a large range of frequencies. Recall
that wavelength determines whether it’s constructive or destructive. Well if our signal spans a wide frequency range, it also spans a wide wavelength range (since wavelength is the inverse of
frequency). Consequently we can get different channel qualities in different portions of our signal (in the frequency domain). Hence the name frequency selective fading.
Flat Fading: Occurs when the signal’s bandwidth is narrow enough that all frequencies experience roughly the same channel. If there is a deep fade then the whole signal will disappear (for the
duration of the deep fade).
In the figure below, the red shape shows our signal in the frequency domain, and the black curvy line shows the current channel condition over frequency. Because the narrower signal is experiencing
the same channel conditions throughout the whole signal, it’s experiencing flat fading. The wider signal is very much experiencing frequency selective fading.
Here is an example of a 16 MHz wide signal that is continuously transmitting. There are several moments in the middle where there’s a period of time a piece of signal is missing. This example
depicts frequency selective fading, which causes holes in the signal that wipe out some frequencies but not others.
Simulating Rayleigh Fading¶
Rayleigh fading is used to model fading over time, when there is no significant LOS path. When there is a dominant LOS path, the Rician fading model becomes more suitable, but we will be focusing on
Rayleigh. Note that Rayleigh and Rician models do not include the primarily path loss between the transmitter and receiver (such as the path loss calculated as part of a link budget), or any
shadowing caused by large objects. Their role is to model the multipath fading that occurs over time, as a result of movement and scatterers in the environment.
There is a lot of theory that comes out of the Rayleigh fading model, such as expressions for level crossing rate and average fade duration. But the Rayleigh fading model doesn’t directly tell us
how to actually simulate a channel using the model. To generate Rayleigh fading in simulation we have to use one of many published methods, and in the following Python example we will be using
Clarke’s “sum-of-sinusoids” method.
To generate a Rayleigh fading channel in Python we need to first specify the max Doppler shift, in Hz, which is based on how fast the transmitter and/or receiver is moving, denoted
We also choose how many sinusoids to simulate, and there’s no right answer because it’s based on the number of scatterers in the environment, which we never actually know. As part of the
calculations we assume the phase of the received signal from each path is uniformly random between 0 and
import numpy as np
import matplotlib.pyplot as plt
# Simulation Params, feel free to tweak these
v_mph = 60 # velocity of either TX or RX, in miles per hour
center_freq = 200e6 # RF carrier frequency in Hz
Fs = 1e5 # sample rate of simulation
N = 100 # number of sinusoids to sum
v = v_mph * 0.44704 # convert to m/s
fd = v*center_freq/3e8 # max Doppler shift
print("max Doppler shift:", fd)
t = np.arange(0, 1, 1/Fs) # time vector. (start, stop, step)
x = np.zeros(len(t))
y = np.zeros(len(t))
for i in range(N):
alpha = (np.random.rand() - 0.5) * 2 * np.pi
phi = (np.random.rand() - 0.5) * 2 * np.pi
x = x + np.random.randn() * np.cos(2 * np.pi * fd * t * np.cos(alpha) + phi)
y = y + np.random.randn() * np.sin(2 * np.pi * fd * t * np.cos(alpha) + phi)
# z is the complex coefficient representing channel, you can think of this as a phase shift and magnitude scale
z = (1/np.sqrt(N)) * (x + 1j*y) # this is what you would actually use when simulating the channel
z_mag = np.abs(z) # take magnitude for the sake of plotting
z_mag_dB = 10*np.log10(z_mag) # convert to dB
# Plot fading over time
plt.plot(t, z_mag_dB)
plt.plot([0, 1], [0, 0], ':r') # 0 dB
plt.legend(['Rayleigh Fading', 'No Fading'])
plt.axis([0, 1, -15, 5])
If you are intending to use this channel model as part of a larger simulation, you would simply multiply the received signal by the complex number z, representing flat fading. The value z would then
update every time step. This means all frequency components of the signal experience the same channel at any given moment in time, so you would not be simulating frequency selective fading, that
requires a multi-tap channel impulse response which we will not get into in this chapter. If we look at the magnitude of z, we can see the Rayleigh fading over time:
Note the deep fades that occur briefly, as well as the small fraction of time where the channel is actually performing better than if there was no fading at all.
Mitigating Multipath Fading¶
In modern communications, we have developed ways to combat multipath fading.
3G cellular uses a technology called code division multiple access (CDMA). With CDMA you take a narrowband signal and spread it over a wide bandwidth before transmitting it (using a spread spectrum
technique called DSSS). Under frequency selective fading, it’s unlikely that all frequencies will be in a deep null at the same time. At the receiver the spreading is reversed, and this
de-spreading process greatly mitigates a deep null.
4G cellular, WiFi, and many other technologies use a scheme called orthogonal frequency-division multiplexing (OFDM). OFDM uses something called subcarriers, where we split up the signal in the
frequency domain into a bunch of narrow signals squashed together. To combat multipath fading we can avoid assigning data to subcarriers that are in a deep fade, although it requires the receiving
end to send channel information back to the transmitter quick enough. We can also assign high order modulation schemes to subcarriers with great channel quality to maximize our data rate. | {"url":"https://pysdr.org/content/multipath_fading.html","timestamp":"2024-11-01T21:57:01Z","content_type":"text/html","content_length":"55902","record_id":"<urn:uuid:3112ce37-389d-4f80-b044-8ac5f59dcc3d>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00156.warc.gz"} |
Optimal penalty parameters for symmetric discontinuous Galerkin discretisations of the time-harmonic Maxwell equations
We provide optimal parameter estimates and a priori error bounds for symmetric discontinuous Galerkin (DG) discretisations of the second-order indefinite time-harmonic Maxwell equations. More
specifically, we consider two variations of symmetric DG methods: the interior penalty DG (IP-DG) method and one that makes use of the local lifting operator in the flux formulation. As a novelty,
our parameter estimates and error bounds are $i)$ valid in the pre-asymptotic regime; $ii)$ solely depend on the geometry and the polynomial order; and $iii)$ are free of unspecified constants. Such
estimates are particularly important in three-dimensional (3D) simulations because in practice many 3D computations occur in the pre-asymptotic regime. Therefore, it is vital that our numerical
experiments that accompany the theoretical results are also in 3D. They are carried out on tetrahedral meshes with high-order ($p = 1, 2, 3, 4$) hierarchic $H(\mathrm{curl})$-conforming polynomial
basis functions.
Original language Undefined
Place of Publication Enschede
Publisher University of Twente
Number of pages 34
Publication status Published - Jan 2010
Publication series
Name Memorandum / Department of Applied Mathematics
Publisher Department of Applied Mathematics, University of Twente
No. 1914
ISSN (Print) 1874-4850
ISSN (Electronic) 1874-4850
• METIS-270717
• Numerical mathematics
• EWI-17325
• Electromagnetic waves
• MSC-00A72
• Scientific computation
• Finite Element Method
• IR-69763 | {"url":"https://research.utwente.nl/en/publications/optimal-penalty-parameters-for-symmetric-discontinuous-galerkin-d-2","timestamp":"2024-11-07T06:49:29Z","content_type":"text/html","content_length":"46371","record_id":"<urn:uuid:af9a075d-3746-4eac-b478-87164f93f214>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00399.warc.gz"} |
6 Best Free Determinant of a Matrix Calculator for Windows
Here is a list of the best free determinant of a matrix calculator for Windows. The Determinant of a matrix is a number calculated using a square matrix. The determinant is a scalar value. It is
helpful for solving linear equations. This is possible by capturing how linear transformation changes area, volume, etc.
In this post, I’m covering 6 free determinant of a matrix calculator software for Windows. It is a collection of free as well as open-source programs that you can download and use for free. Each
calculator can perform various matrix operations. Some of these calculators support other functions as well such as solving linear and quadratic equations. You can go through the list and check out
these calculators in detail. Then you can pick one to use to calculate the determinant of a matrix in seconds.
My Favorite Determinant of a Matrix Calculator
Tibi’s Mathematics Suite is my favorite determinant of a matrix calculator on this list. It offers a simple matrix calculator where you can calculate addition, subtraction, multiplication, division,
transpose, min, max, inverse, determinant, and more. It is easy to use and find the determinant of a matrix of any size. Tibi’s Mathematics is a suite of applications that also packs a graphical
calculator, an equation calculator, and a numeric factorization. So, you get multiple calculators into a single package.
You can check out our lists of the free Probability Calculator Software For Windows, Fibonacci Calculator Software For Windows, and Boolean Expression Calculator Software For Windows.
Tibi's Mathematics Suite
The Tibi’s Mathematics Suite is a package of multiple mathematical calculators. It has a matrix calculator, a scientific calculator, a graphical calculator, and a numeric factorization. After
installation, it shows a Suite Settings page where you can set keyboard shortcuts for opening all these calculators. You can do the same from the system icon tray as well. When you select a
calculator, it opens in a new window. The matrix calculator is quite simple to use here. It has two sections for entering the matrix into the calculator. Below that, it lists various operations that
you can perform on one or both the matrices.
How to calculate the determinant of a matrix?
• Open the matrix calculator and then enter your matrix in the Matrix A section.
• Then select the Det(A) operation from the Operations.
• This gives you the determinant of matrix A in the Result section.
Matrix Calculator
Matrix Calculator is free software that you can use to calculate the determinant of a matrix on Windows. This calculator packs collection matrix operations into a simple and user-friendly interface.
The calculator opens with an excel-like sheet where you can insert your matrices for calculations. You can easily perform basic matrix operations, such as additions, multiplications, subtractions,
transpositions, etc. It shows the results right on the screen. One unique thing about this calculator is that you can export the results of every calculation in HTML format.
How to calculate the determinant of a matrix?
• To do that, open this calculator and select Determinant from the Operations section.
• This opens a new window where you can insert a 2×2 or 3×3 matrix. Select your matrix size and insert the values.
• When you do that, it instantly gives you the determinant of that matrix.
Matrix Reckoner
Matrix Reckoner is a free determinant of a matrix calculator for Windows. This calculator takes up to 2 matrices as input. You can start with picking up the operations that you want to perform. You
can perform addition, subtraction, multiply, transpose, inverse, and determinant. After that, you can pick the size of one or both matrices and then insert the values. This way, you can perform all
these matrix operations within seconds.
How to calculate the determinant of a matrix?
• To calculate the determinant of a matrix, first, open this calculator.
• Then select the Determinant operation from the left side.
• After that, select your matrix size and insert the values.
• Then click Calculate button to get the determinant of your matrix.
Matrix Calculator
Matrix Calculator is a free matrix calculator for Windows. This is a simple and open-source calculator that uses ANSI C++ functions to calculate various matrix operations. This calculator has one
interface for all the operations. On the left side, you can add and manage your matrices. Selecting a matrix from there shows it in the center where you can change the matrix size and add values.
Then you can simply choose any operation to perform on the selected matrix.
How to calculate the determinant of a matrix?
• Open this calculator on your PC and click on the Add button from the left side.
• Then select your matrix size on the top and insert the values.
• After that, click the Det button from the listed operations on the right.
• This gives you the determinant of the matrix.
Ro3n is a free calculator for Windows. This calculator is designed to solve mathematical equations. It works for linear as well as quadratic equations. Apart from that, you can also use this
calculator to find the determinant of a matrix. You can do that for a matrix of 2×2 or 3×3. The calculator instantly gives you the determinant which you can export to HTML.
How to calculate the determinant of a matrix?
• To do that, simply select the matrix size from the top and then pick the Determinant option below.
• Enter the values of your matrix in the box.
• This gives you the determinant of your matrix.
MatrixMath is a matrix calculator for Windows. It can solve linear equations up to the size of 6×6 matrices. Along with that, it can also solve inverse of a matrix, determinant of a matrix, matrices
of determinants, matrices of minors, etc. It can help you find the determinant of a matrix greater than 3×3. The process is quite simple, here is how you can do it.
How to calculate the determinant of a matrix?
• To do that, simply select the matrix size at the top left corner.
• Then insert the values of your matrix and click the Solve button.
• This gives you the Vector Matrix along with the determinant on the right side.
Trying to figure out how things work and writing about them all.
About Us
We are the team behind some of the most popular tech blogs, like: I LoveFree Software and Windows 8 Freeware.
More About Us
Provide details to get this offer | {"url":"https://listoffreeware.com/best-free-determinant-of-a-matrix-calculator-for-windows/","timestamp":"2024-11-02T22:18:58Z","content_type":"text/html","content_length":"109969","record_id":"<urn:uuid:21c2d5db-3022-4d14-8c8a-82f5f8590ebd>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00343.warc.gz"} |
ICEinfer | CRAN/E
Incremental Cost-Effectiveness Inference using Two Unbiased Samples
CRAN Package
Given two unbiased samples of patient level data on cost and effectiveness for a pair of treatments, make head-to-head treatment comparisons by (i) generating the bivariate bootstrap resampling
distribution of ICE uncertainty for a specified value of the shadow price of health, lambda, (ii) form the wedge-shaped ICE confidence region with specified confidence fraction within [0.50, 0.99]
that is equivariant with respect to changes in lambda, (iii) color the bootstrap outcomes within the above confidence wedge with economic preferences from an ICE map with specified values of lambda,
beta and gamma parameters, (iv) display VAGR and ALICE acceptability curves, and (v) illustrate variation in ICE preferences by displaying potentially non-linear indifference(iso-preference) curves
from an ICE map with specified values of lambda, beta and either gamma or eta parameters.
• Version1.3
• R version≥ 3.5.0
• Needs compilation?No
• Last release10/12/2020
Last 30 days
Last 365 days
The following line graph shows the downloads per day. You can hover over the graph to see the exact number of downloads per day.
Data provided by cranlogs | {"url":"https://cran-e.com/package/ICEinfer","timestamp":"2024-11-03T23:05:15Z","content_type":"text/html","content_length":"59477","record_id":"<urn:uuid:681498d1-3b12-4f95-a91a-48d3369360a4>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00152.warc.gz"} |
Class 9 Mathematics
CBSE Class 9 Mathematics Sample Papers
CBSE Class 9 Math sample Papers provided by cbseWizard are prepared by experienced CBSE teachers after following CBSE Guidelines and the latest syllabus of SA1 and SA2 exams. These papers will give
you a clear idea of the questions asked in CBSE class 9 Math paper and their marking scheme.Mathematics is usually considered to be the toughest but can also contribute maximum to your grades, if you
prepared well. All students are advised to solve below
CBSE sample papers for Class 9 math
to prepare well.
CBSE Sample Papers for Class 9 Math SA 1:
• CBSE Sample Paper for Class 9 SA1 Maths Set 1
• CBSE Sample Paper for Class 9 SA1 Maths Set 2
• CBSE Sample Paper for Class 9 SA1 Maths Set 3
• CBSE Sample Paper for Class 9 SA1 Maths Set 4
• CBSE Sample Paper for Class 9 SA1 Maths Set 5
CBSE Sample Papers for Class 9 Math SA 2:
• CBSE Sample Paper for Class 9 SA2 Maths Set 1
• CBSE Sample Paper for Class 9 SA2 Maths Set 2
• CBSE Sample Paper for Class 9 SA2 Maths Set 3
• CBSE Sample Paper for Class 9 SA2 Maths Set 4
• CBSE Sample Paper for Class 9 SA2 Maths Set 5 | {"url":"https://www.cbsewizard.com/cbse-class-ix/sample-papers/mathematics/","timestamp":"2024-11-11T13:53:01Z","content_type":"text/html","content_length":"6065","record_id":"<urn:uuid:cbcb0fc5-846e-4131-82be-c9bc7f59e3ae>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00582.warc.gz"} |
[QSMS Monthly Seminar] Symmetric functions and super duality
Date: 5월28일 금요일, 2021년
Place: 129동
Title: Symmetric functions and super duality
Speaker: 권재훈 교수(SNU)
The ring of symmetric functions is an object which has a close connection with the representations of Lie algebras and symmetric groups. In this talk, we give a categorical interpretation of the
involution on the ring of symmetric functions, more precisely in terms of super duality.
Title: Cluster category
Speaker: 조철현 교수 (SNU)
We give a basic introduction to cluster category theory, and find analogy with symplectic geometry of singularities. | {"url":"https://qsms.math.snu.ac.kr/index.php?mid=board_sjXR83&order_type=asc&page=2&document_srl=1478&sort_index=readed_count&listStyle=viewer","timestamp":"2024-11-14T18:39:46Z","content_type":"text/html","content_length":"21008","record_id":"<urn:uuid:5454e319-af7e-42e2-87c6-84259d307cb7>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00379.warc.gz"} |
Frequency-Domain Fusing Convolutional Neural Network: A Unified Architecture Improving Effect of Domain Adaptation for Fault Diagnosis
National Space Science Center.CAS, University of Chinese Academy of Sciences, Beijing 100190, China
Science and Technology on Complex Aviation System Simulation Laboratory, Beijing 100076, China
Author to whom correspondence should be addressed.
Submission received: 16 December 2020 / Revised: 4 January 2021 / Accepted: 4 January 2021 / Published: 10 January 2021
In recent years, transfer learning has been widely applied in fault diagnosis for solving the problem of inconsistent distribution of the original training dataset and the online-collecting testing
dataset. In particular, the domain adaptation method can solve the problem of the unlabeled testing dataset in transfer learning. Moreover, Convolutional Neural Network (CNN) is the most widely used
network among existing domain adaptation approaches due to its powerful feature extraction capability. However, network designing is too empirical, and there is no network designing principle from
the frequency domain. In this paper, we propose a unified convolutional neural network architecture from a frequency domain perspective for a domain adaptation named Frequency-domain Fusing
Convolutional Neural Network (FFCNN). The method of FFCNN contains two parts, frequency-domain fusing layer and feature extractor. The frequency-domain fusing layer uses convolution operations to
filter signals at different frequency bands and combines them into new input signals. These signals are input to the feature extractor to extract features and make domain adaptation. We apply FFCNN
for three domain adaptation methods, and the diagnosis accuracy is improved compared to the typical CNN.
1. Introduction
Modern machinery and equipment are widely used in industrial production, and their structures are sophisticated and complex. They are usually operated in a high-intensity working environment. Among
them, rotating machinery plays an essential role in modern mechanical equipment, and is fragile and vulnerable to damage, significantly affecting the entire system’s stability. Therefore, fault
diagnosis of rotating machinery is vital in the modern industry. To get better diagnosis results, it is critical to extract significant features. Traditional data-driven fault diagnosis methods
extract features artificially from raw signals, namely handcraft features [
]. These handcraft features can be generated from time domain, frequency domain, time-frequency domain or other signal processing methods, and are classified by pattern recognition algorithms,
such as Support Vector Machine (SVM) [
], K-nearest Neighbors (k-NN) [
], Decision Tree (DT) [
] and so on. However, handcraft features require a lot of experience and professional knowledge, and different problems may require different feature extraction methods. Besides, feature selection
among variously alternative features is also tricky and time-consuming.
In recent years, deep learning has been applied in fault diagnosis [
], which has a powerful ability to learn features from large amounts of data compared with traditional machine learning [
]. It can automatically mine useful features from signals and regularization terms can be added for feature selection. Besides, deep learning can achieve end-to-end learning that combines feature
extraction and classification. The feature extraction and classifier of traditional methods are uncoupled and independent from each other. But feature extractor and classifier of deep learning are
trained jointly, and the extracted features are specific to certain diagnostic tasks [
While deep learning has achieved good performance in fault diagnosis, two problems need to be solved: (a) Exiting deep learning models require a lot of labeled data. However, sensors of industrial
devices will produce a lot of unlabeled data in a short time, and labeling data is very time-consuming and labor-intensive [
]. (b) Operating conditions of actual industrial equipment are often changing, which results in different distributions of collected datasets [
]. a model trained on one specific dataset will have poor generalization ability on another dataset with a different distribution.
To solve the above problems, transfer learning, a branch of machine learning, has been employed in fault diagnosis [
]. In transfer learning, the domain has a lot of labeled data and knowledge is called the
source domain
, and the
target domain
is the object that we want to transfer knowledge to [
]. Based on whether the source domain dataset has labels, transfer learning is divided into three categories: supervised transfer learning, semi-supervised transfer learning and unsupervised transfer
learning [
]. In this paper, we focus on unsupervised transfer learning. a widely used method to solve unsupervised transfer learning is
domain adaptation
, which is to learn common feature expressions between two domains to achieve feature adaptation [
]. Domain adaptation has been proven effective in fault diagnosis and has become one of the research hot spots in fault diagnosis [
]. However, exciting domain adaptation methods for fault diagnosis extract features on a single scale, and do not consider network design from the perspective of frequency-domain. In this paper,
amplitude-frequency characteristics (AFC) curve is utilized to describe the frequency domain characteristics of convolution kernels for the first time. Inspired by the discovery that convolution
kernels of different scales filter signals of different frequency bands, we propose a unified CNN architecture to improve the effect of domain adaptation for fault diagnosis, named Frequency-domain
Fusing CNN (FFCNN). Since a large kernel will increase the number of the networks’ parameters, we use dilated convolution [
] to expand the receptive field of convolution kernel without increasing the number of parameters. FFCNN concatenates several convolution kernels with different dilation rates in the first layer,
which will extract features at different scales of the original signals. Then these features are fused for domain adaptation.
While some papers have proposed similar network architectures of multi-scale convolution [
], our approach differs from theirs in the following respects: (a) Most existing papers focus on general classification problems, but we have verified the effectiveness of multi-scale structure
in domain adaptation; (b) Most methods do not clarify the physical meaning of multi-scale convolution, but our method is driven by the frequency-domain characteristics of convolution kernels,
which has a clear physical meaning. Compared with the previous domain adaptation methods for fault diagnosis, our proposed method is unified and suitable for different domain adaptation losses.
In consequence, the contributions of this paper are summarized as follows:
• We design the network architecture for fault diagnosis from the perspective of frequency-domain characteristics of convolution kernels. The motivation for network design has a clear physical
• For the first time, we use the amplitude-frequency characteristic curve to describe the frequency domain characteristic of the convolution kernels. This provides a new idea for analyzing
the physical meaning of the convolution kernels.
• the proposed FFCNN is suitable for various domain adaptation loss functions, and can significantly improve the performance of domain adaptation for fault diagnosis without increasing
the complexity of the networks.
• Dilated convolution is used in domain adaptation and fault diagnosis. Dilated convolution can improve the receptive field without increasing the number of parameters.
The rest of this paper is organized as follows. In
Section 2
, related work about deep learning methods and domain adaptation methods are introduced. Some background knowledge will be introduced, including domain adaptation, CNN, and dilated convolution in
Section 3
Section 4
will give the motivation of our proposed method.
Section 5
will detail the proposed MSCNN and the training process.
Section 6
will study two cases and provide in-depth analysis from different perspectives. Some usage suggestions, existing problems and future research contents are given in
Section 7
. Finally, the conclusions are drawn in
Section 8
. The symbols used in this paper are listed in Abbreviations.
2. Related Work
Deep learning for fault diagnosis. a variety of deep learning methods have been successfully applied in fault diagnosis in recent years. Jia et al. [
] proposes a Local Connection Network (LCN) constructed by normalized sparse Autoencoder (NSAE), named NSAE-LCN. This method overcomes two shortcomings of traditional methods: (a) They may learn
similar features in feature extraction. (b) the learned features have shift variant properties, which leads to the misclassification of fault types. Yu et al. [
] proposed a component selective Stacked Denoising Autoencoders (SDAE) to extract effective fault features from vibration signals. Then correlation learning is used to fine-tune the SDAE to construct
component classifiers. Finally, a selective ensemble is finished based on these SDAEs for gearbox fault diagnosis. Except for autoencoder, CNN is also a widely used deep learning method. Jing et al.
] developed a 1-D CNN to extract features directly from frequency data of vibration signals. The results showed that the proposed CNN method can extract more effective features than
the manually-extracting method. Huang et al. [
] developed an improved CNN that uses a new layer before convolutional layer to construct new signals of more distinguishable information. The new signals are obtained by concatenating the signals
convolved by kernels of different lengths. Generative adversarial network (GAN) and Capsule Network (CN) are the latest research results of deep learning. Han et al. [
] used adversarial learning as a regularization in CNN. The adversarial learning framework can make the feature representation robust, boost the generalization ability of the trained model, and avoid
overfitting even with a small size of labeled data. Chen et al. [
] proposed a novel method called deep capsule network with stochastic delta rule (DCN-SDR). The effective features are extracted from raw temporal signals, and the capsule layers reserve
the multi-dimensional features to improve the representation capacity of the model.
Domain adaptation for fault diagnosis. Domain adaptation method can use the unlabeled data for transfer learning. In the work of Li et al. [
], the multi-kernel maximum mean discrepancies (MMD) are minimized to adapt the learned features in multiple layers between two domains. This method can learn domain-invariant features and
significantly improve the performance of cross-domain testing. Han et al. [
] proposed an intelligent domain adaptation framework for fault diagnosis, deep transfer network (DTN). DTN extends the marginal distribution adaptation to joint distribution adaptation, guaranteeing
a more accurate distribution matching. Wang et al. [
] applies adversarial learning to domain adaptation, and proposes Domain-Adversarial Neural Networks (DANN). In addition, a unified experimental protocol for a fair comparison between domain
adaptation methods for fault diagnosis is offered. Guo et al. [
] proposes an intelligent method named deep convolutional transfer learning network (DCTLN) consists of condition recognition and domain adaptation. The condition recognition module is a 1-D CNN to
learn features and recognize machines’ health conditions. The domain adaptation module maximizes domain recognition errors and minimizes probability distribution distance to help 1-D CNN learning
domain invariant features. Li et al. [
] proposed a weakly supervised transfer learning method with domain adversarial training. This method aims to improve the diagnostic performance on the target domain by knowledge transferation from
multiple different but related source domain.
3. Background
3.1. Transfer Learning and Domain Adaptation
We consider a deep learning classification task $T$ where $X = { x 1 , x 2 , ⋯ , x n }$ is the dataset sampled form input space $X$ and $Y = { y 1 , y 2 , ⋯ , y n }$ is the labels of dataset from
label space $Y$. Above elements form a specific domain $D$. We need to learn a feature extractor $g ( · ) : X → Z$ and a classifier $h ( · ) : Z → Y$, where Z is the learned features representation.
Given two domains with different distributions named source domain $D S$ and target domain $D T$, transfer learning is to improve the performance of target domain using the knowledge of source
domain, where $X S ≠ X T$ or $Y S ≠ Y T$.
From the perspective of input spaces and label spaces, transfer learning can be divided into the following two types:
• Homogeneous transfer learning. The input spaces of the source domain and target domain are similar and the label spaces are the same, expressed as $X S ∩ X T ≠ ∅$ and $Y S = Y T$.
• Heterogeneous transfer learning. Both the input spaces and the label spaces may be different, expressed as $X S ∩ X T = ∅$ or $Y S ≠ Y T$.
Besides, according to whether the target domain contains labels, transfer learning can also be divided into following three types:
• Supervised transfer learning. All data in the target domain have labels.
• Semi-supervised transfer learning. Only part of the data in the target domain have labels.
• Unsupervised transfer learning. All data in the target domain have no labels.
Most of the research in recent years has focused on unsupervised homogeneous transfer learning [
], which is also the direction of our work. Domain adaptation is a common method to solve unsupervised homogeneous transfer learning. Given source domain
$D S$
and target domain
$D T$
, a labeled source dataset
$X S$
is sampled
$i . i . d$
$D S$
, and an unlabeled target dataset
$X T$
is sampled
$i . i . d$
$D T$
. a domain adaptation problem aims to train a common feature extractor
$g ( · ) : X → Z$
$X S$
$X T$
, and a classifier
$h ( · ) : Z → Y$
learned from
$X S$
with a low target risk [
$e r r D T ( h ) = Pr ( x , y ) ∼ D T h ( g ( x ) ) ≠ y$
To adapt the feature space of source domain and target domain, a specific criterion $d ( Z S , Z T )$ is chosen for measuring the discrepancy between $Z S$ and $Z T$. which is regarded as a loss
3.2. Convolutional Neural Network
In this paper, a one-dimensional convolutional neural network is built to extract features and classify fault types. a typical CNN consists of convolution layers, pooling layers and a fully-connected
layer. Let
$x i l − 1 = x i l − 1 , S , x i l − 1 , T ∈ R N × M$
is the output of
$( l − 1 ) t h$
layer containing source domain data and target domain data,
is the number of channels,
is the dimensional of feature maps. The kernel of
$l t h$
convoluntion layers is
$k l ∈ R C × N × H$
, bias is
$b l ∈ R C$
is the number of channels in the output feature maps,
is kernel size. So the output of
$l t h$
layer is obtained as follows [
$x i , ( oonv ) l = σ x i l − 1 * k l + b l ∈ R N ′ × M ′ N ′ = C M ′ = M − H + p s + 1$
$σ ( · )$
is activation function, * is convolution operation,
is the stride step, and
is padding size to keep the input and output dimensions consistent. After convolution layer, a down-sampling layer is connected to reduce the number of parameters and avoid overfitting [
$x i , ( p o o l ) l = pool x i l ∈ R N ″ × M ′ N ″ = C M ″ = M ′ − L s + 1$
is the pooling step, and
is pooling size. Repeat convolution layer and pooling layer several times to deepen the network. Then the feature maps are flattened into one-dimension to connect a fully-connected layer. Finally,
the softmax layer outputs the predicted classification probability:
$x i l = flatten x i l − 1 y ˜ i = soft max σ w 1 x i l + b 1$
The classification loss used to measure the discrepancy between predictions and labels can be expressed by cross-entropy:
$ℓ clf ( y , y ˜ ) = 1 m ∑ i = 1 m − y i · log y ˜ i ⊤ − 1 − y i · log 1 − y ˜ i ⊤$
$y i$
is the real label of
$i t h$
sample. The objective of the classification task is to optimize the loss function to reduce the classification risk.
3.3. Dilated Convolution
To explain dilated convolution, we compare it with a standard convolution as shown in
Figure 1
. We assume that the input data
$x = [ x 1 , x 2 , x 3 , x 4 , x 5 , x 6 ]$
is six-dimensions, kernel is
$k = [ k 1 , k 2 , k 3 ]$
, stride is 1. According to Equation (1), the output is
$x ′ = [ x 1 ′ , x 2 ′ , x 3 ′ ]$
Figure 1
a, where
$x j ′ = x j k 1 + x j + 1 k 2 + x j + 2 k 3 + b$
In the standard convolution, the adjacent elements of the input data are multiplied and added to the kernel, and the operation is repeated by sliding s strides to the end of input data. Dimension of
output is $6 − 3 1 + 1 = 4$.
In dilated convolution, we denote
the dilation rate. Unlike standard convolution, the elements multiplied and added with the kernel are separated by
$r − 1$
elements in dilated convolution. In
Figure 1
b, dilation rate is 2, and the output becomes
$x ′ = [ x 1 ′ , x 2 ′ ]$
], where
$x j ′ = x j k 1 + x j + r k 2 + x j + 2 r k 3 + b .$
Dilated convolution is equivalent to expanding the kernel size, that is, expanding the receptive field, and the equivalent kernel size is [
$H d i l a t e d = H + ( H − 1 ) ( r − 1 )$
So the dimension of output
$M ′$
$M ′ = M − H × r + r + p − 1 s + 1$
The standard convolution is the dilated convolution of $r = 1$.
4. Motivation
The vibration signal is time domain signal, and most deep learning methods are designed from the perspective of time domain. But vibration signal can be composed of a series of sine wave signals with
different frequencies, phases, and amplitudes, which are the frequency domain representations of the vibration signal. The vibration modes of different fault types are different, and the FFT
spectrograms are also different, as shown in
Figure 2
. Signals of different fault type have different dominant frequency bands, which means that useful information is contained in different frequency bands. Traditional methods usually use some signal
processing techniques to extract features in the time domain and frequency domain. The commonly used CNN can automatically extract features from the original signals and learn related fault modes
based on the labeled data. But what exactly does the learned convolution kernel mean? Here we can regard the first layer of convolution kernels as the preprocessing of the original signals.
To observe the frequency domain characteristics of the convolution kernels, we can draw the amplitude-frequency characteristics (AFC) curve of kernels. Next, the principle of AFC will be explained.
Let the input signal is
, the output signal after a convolutional kernel is
$x ˜$
, and the convolution operation can be seen as a function
$G ( · )$
. To get the AFC curve of
$G ( · )$
, we take a series of sinusoidal signals
$X = { x 1 , x 2 , ⋯ , x i , ⋯ , x m }$
with different frequencies
${ f 1 , f 2 , ⋯ , f i , ⋯ , f m }$
. For each signal, the length is
$n t$
$x i = x i 1 , x i 2 , … , x i t , … , x i n t x i t = sin 2 π · f i$
Then a series of corresponding outputs
$X ˜ = { x ˜ 1 , x ˜ 2 , ⋯ , x ˜ i , ⋯ , x ˜ m }$
will be obtained. The amplitude ratio of the output signal to the input signal is calculated, and the logarithm of 20 times is taken:
$A f i = 20 lg G x i x i$
$| G ( x i ) |$
is amplitude of output signal,
$| x i |$
is amplitude of the input signal. So we will get a set of
${ f i → A ( f i ) | i = 1 , 2 , ⋯ , m }$
. With
$f i$
from low to high as the horizontal axis and
$A ( f i )$
as the vertical axis, we can get the AFC curve. AFC curve shows the ability of a convolution kernel to suppress signals in various frequency bands. In general, the signal amplitude that passes
through the filter will decrease and
$A ( f i )$
will be negative. If the value
$A ( f i )$
is very small, the filter will suppress the signal
$x i$
with frequency
$f i$
. In contrast, the filter does not suppress the signal
$x i$
To explore the meaning of the convolution kernel from a frequency domain perspective, we trained four CNN with different kernel sizes (kernel size is 15, dilation rates are 1, 2, 3, and 5).
The output of signal after the first convolution layer, AFC curve of one of the convolution kernels and FFT spectrogram of output are drawn in
Figure 3
. As we can see that the convolution kernels can be regarded as a series of filters, which can filter out signals of different frequency bands. Observing these AFC curves, we can get
the following points:
• the convolution kernels can be regarded as a series of filters, which can suppress signals in some single frequency bands.
• Different dilation rates have different AFC curves. Convolution kernels with a dilation rate $r > 1$ have multiple suppression bands. And kernels with higher dilation rates have more suppression
The above findings motivate us to design the network architecture from the perspective of the frequency domain. We change the first layer of CNN to a multi-scale convolution kernel fusion method.
The input signal is preprocessed in multiple frequency bands before entering the next stage of feature extraction. Compared with single-scale CNN, the improved CNN can extract richer frequency domain
information to improve CNN’s feature extraction ability.
5. Proposed Method
5.1. Frequency-Domain Fusing CNN
The architecture of the proposed FFCNN is shown in
Figure 4
. Note that the depth of the network should match the size of dataset. a small network will cause underfitting, while a large network will easily cause overfitting and increase training time.
According to the size of dataset used in this paper and some hyper-parameter debugging experiments, we used a CNN including two convolution layers and two fully-connected layers. The details of FFCNN
used in this paper are shown in
Table 1
. For dilation rates, although a large dilation rate will expand receptive field, it is not the bigger the better. According to the debugging experiments, we have selected two sets of dilation rates
with appropriate sizes,
$r = 1 , 2 , 3$
$r = 1 , 3 , 5$
, to evaluate the effect of different dilate rates.
Section 6.3
Section 6.4
will discuss the effect of different dilation rates.
For FFL, there are three convolutional branches with different dilation rates in the first convolution layer. They can preprocess signals on multiple scales and produce feature maps with the same
number of channels and dimensions. Then the three feature maps are connected in the channels axis and followed by a pooling layer. For example, there are three convolution layers with dilation rate
$r = 1 , 2 , 3$ that produce three feature maps with C channels and N dimensions, and the three feature maps are connected to a feature map with the shape of $3 C × N$. Next, the feature map is
followed by standard convolution layers and pooling layers, a feature extractor of the second stage. Then the final convolution layer’s feature map is flattened and followed by fully-connected
layers. Finally, the classification loss and domain loss are obtained.
For domain adaptation, the source data $X S$ and target data $X T$ are trained jointly. Source data and target data are mapped to source features $Z S$ and target features $Z T$ by the feature
extractor. The discrepancy measured by $d ( Z S , Z T )$ between $Z S$ and $Z T$ is calculated as a domain adaptation loss, $Z S$ is classified by softmax layer and classification loss is obtained.
Domain loss and classification loss together are optimized as a total loss. Back propagation (BP) algorithm is used to upgrade each layer’s parameters until the loss converges or reaches the maximum
number of iteration.
5.2. Learning Process
$X S = x i S , y i S i = 1 n s$
be the labeled source domain dataset,
$X T = x i T i = 1 n T$
be the unlabeled target domain dataset. The parameters set of the three branches in the first dilated convolution layer is
$θ r j c o n v 1 = k r j c o n v 1 , b r j c o n v 1 | j = 1 , 2 , 3$
, the output feature maps after dilated convolution and maxpooling are:
$x i , r j c o n v 1 = pool σ x i * k r j c o n v 1 + b r j c o n u 1 ∈ R C 1 × M 1$
$x i = x i S , x i T$
containing source and target domain data. They are connected into one feature map
$x i c o n v 1 = concat x i , r j c o n v 1 | j = 1 , 2 , 3 ∈ R 3 C 1 × M 1$
by channels. The feature map is followed by the second convolution layer and maxpooling layer with parameters
$θ c o n v 2 = k c o n v 2 , b c o n v 2$
and flatten:
$x i c o n v 2 = pool σ x i c o n v 1 * k c o n v 2 + b c o n v 2 ∈ R C 2 × M 2 x i f l a t t e n = flatten x i c o n v 2$
Next a fully-connected layer with parameters
$θ f c = w 1 , b 1$
$θ c l f = w 2 , b 2$
is followed to extract feature representations and classify them:
$z i = σ w 1 x i f l a t t e n + b 1 x i S = w 2 z i S + b 2 p ( y ˜ i , j S = 1 , j = 1 , 2 , … , c | x i S ) = exp x i , j S ∑ j = 1 c exp x i , j S$
is the number of labels. Here we only classify the labeled source feature representations
$x i S$
. The predicted vector can be written as
$y ˜ i S = y ˜ i , 0 S , y ˜ i , 1 S , … , y ˜ i , c S$
To measure the discrepancy between the source and target feature representations, a certain criterion
$d ( z S , z T )$
is chosen as a loss function. To achieve the purpose of domain adaptation, we minimize
$d ( z S , z T )$
and the classification error of source domain
$ℓ c l f ( y S , y ˜ S )$
simultaneously. Thus, the optimization objective of domain adaptation is expressed as [
$min θ ℓ y S , y ˜ S , z S , z T = ℓ c l f y S , y ˜ S + λ d z S , z T$
is the regularization parameter,
$θ = θ r j c o n v 1 , θ c o n v 2 , θ f c , θ c l f$
represents the parameter set of FFCNN.
To optimize the network, we calculate the gradient of objective function with respect to network parameters and upgrade parameters according to the backpropagation (BP) algorithm and mini-batch
stochastic gradient descent (SGD) algorithm [
$θ c l f ← θ c l f − η ∂ ℓ c l f ∂ θ c l f θ f c ← θ f c − η ∂ ℓ c l f ∂ θ f c + λ ∂ d ∂ θ f c θ f c c o v 2 ← θ c o n v 2 − η ∂ ℓ c l f ∂ θ c o n v 2 + λ ∂ d ∂ θ c o n v 2 θ r j c o n v 1 ← θ r j c
o n v 1 − η ∂ ℓ c l f ∂ θ r j c o n v 1 + λ ∂ d ∂ θ r j c o n v 1$
is the learning rate. The complete training process of FFCNN is shown in Algorithm 1.
Algorithm 1 The algorithm of FFCNN back-propagation.
Input: Labeled source domain samples $( x i S , y i ) i = 1 m$, unlabeled target domain samples
$( x i t ) i = 1 m$, regularization parameter $λ$, learning rate $η$, dilate rate ${ r 1 , r 2 , r 3 }$.
Output Network parameters $θ r j c o n v 1 , θ c o n v 2 , θ f c , θ c l f$ and predicted labels for target
domain samples.
Initialization for $θ r j c o n v 1 , θ c o n v 2 , θ f c , θ c l f$.
while stopping criteria is not met do
for each source and target domain samples of mini-batch size $m ′$do
Calculate output $x r j c o n v 1$ of each branch in dilate convolution layer according to
Equation (9).
Connect $x r j c o n v 1 j = 1 3$, and calculate output of the second convolution layer according
to Equation (10).
Calculate features representations $z i$ and output of softmax layer according to
Equation (11).
Calculate loss $ℓ ( y S , y ˜ S , z S , z T )$ according to Equation (12)
Upgrade $θ r j c o n v 1 , θ c o n v 2 , θ f c , θ c l f$ according to Equation (13).
end for
end while
5.3. Diagnosis Procedure
The flowchart of the proposed FFCNN for fault diagnosis is shown in
Figure 5
. It includes following two steps:
• Step 1: Data acquisition. The raw vibration signals are collected by sensors. Then the signals are sliced by a certain length of sliding window with a certain step size. When the samples are
ready, they are divided into different working conditions according to the different operation settings. Among them, working condition
is the source domain, and working condition
is the target domain(
$i ≠ j$
). The samples in each working condition are further divided into training data and testing data.
Section 6.1
will introduce the dataset used in this paper and the working conditions settings.
• Step 2: Domain adaptation. Based on the specific fault diagnosis problem and dataset information, the FFCNN configuration is chosen. The details of FFCNN used in this paper have been stated in
Section 5.1
. For training stage, FFCNN is trained by source training data and target training data based on Algorithm 1. For the testing stage, the target testing data are fed into trained FFCNN to get
classification results.
• Step 3: Results analysis. The diagnosis results will be analyzed form three perspective: network architecture, feature representation and frequency domain.
6. Experiment
6.1. Introduction to Datasets
CWRU bearing dataset. This dataset is provided by Case Western Reserve University (CWRU) Bearing Data Center [
]. Four different bearing conditions are considered in this dataset: normal (N), ball fault (B), inner race (IR) fault, and outer race (OR) fault. Each fault was artificially damaged by electrical
discharge machining. The vibration data are collected under different motor speeds at a sampling frequency of 12kHz or 48kHz. According to the sampling frequency and motor speed, the dataset is
divided into six different working conditions, as shown in
Table 2
Paderborn dataset. This bearing dataset is provided by the Chair of Design and Drive Technology, Paderborn University [
]. There are three types of bearings: healthy bearings, artificially damaged bearings, and realistically damaged bearings. Artificially damaged bearings arise in inner race or outer race, and
realistic damages occur in the form of pitting or plastic deformation. In this paper, we only focus on the diagnosis of the artificial damages. The vibration signals are collected under different
load torque, radial force, and rotational speed at s sampling frequency of 64 kHz. According to these different working conditions, the dataset is divided into four different subsets, as showed in
Table 3
Both above datasets are one-dimensional vibration signals, the example signals of CWRU and Paderborn dataset is shown in
Figure 6
. Because the length of the original signal is very long, the signals are sliced through a sliding window of length 1000, which means that each sample contains 1000 points. We use a sliding window
with a sliding step size of 100 to get samples. For each fault type, we generate 1024 samples, and 20% of which are used as test sets.
6.2. Experiment Settings and Compared Methods
FFCNN is a method to improve the architecture of the domain adaptation network used in the feature representation based domain adaptation methods. These methods extract latent feature representations
of the source domain and target domain, and reduce the discrepancy between them. Here we use three different discrepancy criterions: Maximum Mean Discrepancy (MMD), CORrelation ALignment (CORAL), and
Central Moment Discrepancy (CMD).
• MMD: MMD criterion maps features to a Reproducing Kernel Hilbert Space (RKHS) to measure the discrepancy between source and target domain [
]. It is defined as:
$d M M D z S , z T = 1 n S ∑ i = 1 n s ϕ z i S − 1 n T ∑ j = 1 n T ϕ z i T H$
$ϕ ( · ) : Z → H$
is referred to as the feature space map.
• CORAL: CORAL criterion measures the discrepancy using the second-order statistics of source and target domain feature representations [
]. It is defined as:
$d C O R a L z S , z T = 1 4 d 2 C S − C T F 2 C S = 1 n S − 1 z S ⊤ z S − 1 n S 1 ⊤ z S ⊤ 1 ⊤ z S C T = 1 n T − 1 z T ⊤ z T − 1 n T 1 ⊤ z T ⊤ 1 ⊤ z T$
is a vector with all elements equal to 1.
• CMD: CMD criterion matches the domains by explicitly minimizing differences of higher order central moments for each moment order [
]. It is defined as:
$d C M D z S , z T = 1 | b − a | E z S − E z T + ∑ k = 2 K 1 | b − 1 | k C k z S − C k z S 2$
$E z S = 1 n S ∑ i = 1 n S z i S$
is empirical expectation vector computed on features
$z S$
, and
$C k z S = E z S − E z S k$
is the vector of all
$k t h$
order samples central moments of the coordinates of
$z i S$
For FFCNN, we use two dilate rate settings to evaluate the influence of dilate rate, one is $r = 1 , 2 , 3$ named FFCNN-A, and another is $r = 1 , 3 , 5$ named FFCNN-B. Moreover, we compared FFCNN
with the ordinary CNN under the same computational complexity. In the first layer of FFCNN, each branch has a kernel with 8 channels and a size of 15, so three branches are equivalent to have a
kernel with 24 channels and a size of 15. To keep the same computational complexity, the first layer of ordinary CNN also has a kernel with 24 channels and a size of 15, and the other layers are the
same as the FFCNN. Besides, we also give the direct test results of the target domain data on the model trained by source domain dataset, called source-only. In these experiments, we set the number
of epochs to be 50 and batch size to be 64. Adam optimization algorithm and CosineAnnealingLR with an initial learning rate of 0.001 are applied. Five-fold cross-validation is used for each task. The
code is implemented by Tensorflow 2.0 and run on Tesla K80 GPU.
6.3. Experiment Results
The diagnosis results using CWRU dataset are shown in
Table 4
, and results using Paderborn dataset are shown in
Table 5
. To show the improvement effect of FFCNN more clearly, we average the improved accuracy of FFCNN compared to normal CNN in each source domain. For example, source domain
$B 1$
is transferred to five target domain
$B j ( j = 2 , 3 , 4 , 5 , 6 )$
, the improved accuracies of FFCNN compared with CNN are averaged. The results are shown in
Figure 7
Figure 8
. We can see that the diagnostic accuracy of FFCNN in most tasks is significantly improved compared to CNN. Only the average effect of FFCNN-B using CORAL in CWRU dataset has not improved. Next, we
will illustrate and analyze the results from three aspects in depth.
• The effectiveness of domain adaptation. These tables show that source-only, without domain adaptation, performs poorly. In comparison, domain adaptation methods greatly exceed source-only in most
tasks. For example, in task $B 1 → B 4$, the accuracy of source-only is 30.32%, but the accuracy of domain adaptation is 75.15% at the lowest and 100% at the highest. But domain adaptation fails
in some cases. Such as task $B 2 → B 3$, the accuracy of source-only is 72.27%, compared with 49.8% for CNN-MMD, 60.91% for FFCNN-A, and 55.15% for FFCNN-B. We suppose that these two methods did
not extract the appropriate features to adapt the source domain and target domain. Overall, domain adaptation methods achieved the highest average accuracy, proving the strong generalization of
domain adaptation.
• The effectiveness of FFCNN. FFCNN used different dilation rates to extract features at different scales, so that it may extract better features. Compared with ordinary CNN, FFCNN is more
effective in most tasks. In some tasks, the effect of using FFCNN can be greatly improved. For example, in task $B 5 → B 1$, FFCNN-B improved by 17.34% compared with CNN-MMD, 22.11% compared with
CNN-CORAL, and 12.33% compared with CNN-CMD. But FFCNN may not be effective in some cases, such as FFCNN-A compared with CNN-MMD and FFCNN-B compared with CNN-CORAL in task $B 5 → B 3$. For some
tasks, a feature extracted at a fixed scale may be the most significant, but multi-scale convolution may weaken the influence of such a significant feature. Nevertheless, FFCNN performs well both
in terms of the accuracy for most individual tasks and the average accuracy for all tasks.
• The influence of dilation rate. To clearly illustrate the effect of dilation rate, the average accuracy of FFCNN with different dilation rates on all tasks is shown in
Figure 9
. As directed from the figure, FFCNN with
$r = 1 , 3 , 5$
performs better than FFCNN with
$r = 1 , 2 , 3$
, except CORAL for B tasks. According to Equation (
), the kernels of size
$H = 15$
with dilation rate
$r = 1 , 2 , 3 , 4 , 5$
are equivalent to the kernels of size
$H d i l a t e d = 15 , 29 , 43 , 57 , 71$
. It can be concluded that a large dilation rate has a larger receptive field, which can improve the effect of domain adaptation. Further analysis of dilation rate and dilated convolution will be
discussed in the following sections.
• Dilated convolution v.s. common convolution. Dilated convolution expands the receptive field by expanding the convolution kernel. According to Equation (
), the receptive fields of different dilation rates and the receptive fields of specific size convolution kernels are equivalent. To show the advantage of dilated convolution, take task
$B 5 → B 1$
as an example, dilated convolution and common convolution are applied on CNN and FFCNN. The number of parameters and diagnosis accuracy of dilated convolution and common convolution are compared.
The results are shown in
Table 6
. As we can see, the models using dilated convolution with different dilation rates do not increase the number of parameters. In general, their accuracy is higher than the models using common
convolution kernels. This shows that both in terms of model size and diagnosis accuracy, dilated convolutions have advantages over common convolutions.
6.4. Analysis
6.4.1. Analysis from the Perspective of Network Architecture
FFCNN extracts features from multi scales using dilated convolution without increasing computational complexity, and different dilation rates represent different scales of the receptive field. To
show the effect of frequency-domain fusing convolution, the performance of different single scale CNN is shown in
Figure 10
. Each point in the figure represents the diagnosis accuracy with a single scale on a given task. Here we select task
$B 5 → B 1$
$P 1 → P 2$
as examples to change the dilation rate of the first convolution layer based on of CNN-MMD, CNN-CORAL, and CNN-CMD. The dilation rates on the horizontal axis are
$r = 1 , 2 , 3 , 4 , 5$
, respectively. The dotted red line indicates the highest accuracy of FFCNN for the task in
Section 5.3
. As we can see, increasing the dilation rate may increase accuracy and may also result in a decrease in accuracy. But in most cases, it will not exceed the accuracy of FFCNN. Furthermore, we cannot
know exactly which scale under the current task will get higher accuracy. Therefore, single scale convolution cannot be adapted to extract features to obtain better and more stable performance. On
the other hand, FFCNN can fuse multi-scale information to extract richer features and obtain excellent and stable results in most cases.
6.4.2. Analysis from the Perspective of Feature Representation
Domain adaptation aims to align features of different domains. That is to say, domain adaptation will reduce the classification loss of source domain as well as the discrepancy between the source
domain and target domain (called domain loss). So the features of different categories from the same domain can be dispersed as much as possible, and features of the same category from different
domains can be gathered as much as possible. To illustrate the effectiveness of FFCNN from this perspective, we use task
$B 4 → B 5$
$P 3 → P 2$
as examples to visualize the features after the adaptation using t-SNE algorithm [
] in
Figure 11
Figure 12
. For each subgraph, the domain loss and classification loss are also shown above. From the figures, we can see that the feature distributions of categories between the source domain and target
domain are not aligned well without frequency-fusing method, such as ball fault and inner race fault in CNN-MMD of
Figure 11
. But under FFCNN framework, the improvement of distribution adaptation is noticeable. For example, in CNN-MMD of
Figure 11
, categories of source domain or target domain are separated, but didn’t align the feature distributions of the same category between source and target domain. On the contrary, FFCNN-A-MMD
successfully aligns the feature distributions between domains, and the domain loss is
$3.32756 × 10 − 2$
, which is better than
$4.46758 × 10 − 2$
of CNN-MMD. This improvement has raised the accuracy of CNN-MMD from 80.98% to 94.80%, and reduced the classification loss from 1.23268 to
$1.86748 × 10 − 3$
. Similarly, the improvement of aligning effect will improve accuracy in other tasks.
6.4.3. Analysis from the Perspective of Frequency Domain
Figure 13
Figure 14
Figure 15
give the convolved signals, AFC curves, and FFT spectrogram of each filter in the first layer of CNN-MMD, FFCNN-A, and FFCNN-B from task
$B 5 → B 1$
. Signals, AFC curve and FFT spectrogram form a sub-figure in a figure vertically. For the FFT spectrogram, the blue curve represents the FFT of input signal, and red represents the FFT of convolved
signal. Combining FFT spectrogram, We can see that, compared with multi-scale convolution, the frequency band perceived by ordinary CNN is single. Signals filtered by different frequency bands will
contain more significant useful information, and frequency bands that do not contribute to fault classification will be suppressed. During the training process, the network will learn which frequency
bands are useful and which are not according to the loss function changes.
7. Discussion
This paper has proved the effectiveness of FFCNN with a large number of experiments and explained it from multiple perspectives. For the application of FFCNN, we have the following suggestions:
• FFCNN is a unified domain adaptation architecture for fault diagnosis, it can also be applied to other CNN structures, domain adaptation methods or datasets.
• Which dilation rates are used to construct a FFCNN need to be determined according to the specific task, not necessarily $r = 1 , 2 , 3$ or $r = 1 , 3 , 5$. And the number of combined scales can
also change.
• AFC curve can be considered as a general CNN analysis method. It provides a new perspective for describing the characteristics of the convolution kernel.
• Multi-scale convolution kernels are generally applied in the first layer, and using multi-scale convolution in the middle layers has not been studied to prove its effectiveness.
While FFCNN is effectively applied in domain adaptation for fault diagnosis, we still face the following challenges regarding transfer learning and fault diagnosis:
• While FFCNN can improve the effect of domain adaptation, if the source domain and target domain are too different, FFCNN will also fail. How to further enhance the effect of domain adaptation
still needs to be further studied [
• We explained the FFCNN from the perspective of frequency domain. How to improve the interpretability of deep learning methods for fault diagnosis is a more challenging task [
8. Conclusions
In this paper, a unified CNN architecture for domain adaptation named FFCNN using dilated convolutions with different scale is proposed. Experiments on two bearing datasets have proved the
significant effect of FFCNN. Based on the results and analysis, three main significances of this paper can be concluded. First, the proposed FFCNN is driven from the perspective of frequency-domain
characteristic. This inspires researchers to combine frequency-domain analysis with neural networks. Second, the frequency domain characteristic is described by the AFC curve, providing a new means
to understand CNN. Third, results on different domain loss functions show that FFCNN is suitable for various domain adaptation losses. Thus, FFCNN provides an example for unified domain adaptation
network design. While the proposed FFCNN has certain interpretability, it still does not fully explain the working principle of CNN. Further understanding of CNN to improve the effectiveness of fault
diagnosis will be future work.
Author Contributions
Conceptualization, X.L. and Y.H.; methodology, Y.H. and J.Z.; software, X.L.; validation, M.L., W.M.; writing—original draft preparation, X.L.; writing—review and editing, Y.H. All authors have read
and agreed to the published version of the manuscript.
This research was funded by the National Nature Science Foundation of China, grant number 61703431, and the Youth Innovation Promotion Association CAS.
The computing platform is provided by the STARNET cloud platform of National Space Science Center Public Technology Service Center.
Conflicts of Interest
The authors declare no conflict of interest.
$T$ Classification task
$D$ a specific domain
$D S , D T$ Source domain and target domain
$X$ Input sample space
$Y$ Input label space
$X S , X T$ Input source sample space and target sample space
$Y S , Y T$ Input label sample space and target label space
$X , Y$ Dataset and labels
$x , y$ a sample and a label in dataset
Z Learned features representation
$g ( · )$ Feature extractor of deep learning model
$h ( · )$ Classifier of deep learning model
$ℓ c l f , d ( · )$ classification loss and domain loss
$G ( · )$ a convolution operation
$A ( f i )$ Amplitude frequency characteristic of $G ( · )$ under frequency $f i$
1. Lei, Y.; Lin, J.; He, Z.; Zuo, M.J. A review on empirical mode decomposition in fault diagnosis of rotating machinery. Mech. Syst. Signal Process. 2013, 35, 108–126. [Google Scholar] [CrossRef]
2. Peng, Z.; Peter, W.T.; Chu, F. A comparison study of improved Hilbert–Huang transform and wavelet transform: Application to fault diagnosis for rolling bearing. Mech. Syst. Signal Process. 2005,
19, 974–988. [Google Scholar] [CrossRef]
3. Yan, R.; Gao, R.X.; Chen, X. Wavelets for fault diagnosis of rotary machines: A review with applications. Signal Process. 2014, 96, 1–15. [Google Scholar] [CrossRef]
4. Konar, P.; Chattopadhyay, P. Bearing fault detection of induction motor using wavelet and Support Vector Machines (SVMs). Appl. Soft Comput. 2011, 11, 4203–4211. [Google Scholar] [CrossRef]
5. Zhang, X.; Liang, Y.; Zhou, J. A novel bearing fault diagnosis model integrated permutation entropy, ensemble empirical mode decomposition and optimized SVM. Measurement 2015, 69, 164–179. [
Google Scholar] [CrossRef]
6. Li, Z.; Yan, X.; Tian, Z.; Yuan, C.; Peng, Z.; Li, L. Blind vibration component separation and nonlinear feature extraction applied to the nonstationary vibration signals for the gearbox
multi-fault diagnosis. Measurement 2013, 46, 259–271. [Google Scholar] [CrossRef]
7. Saimurugan, M.; Ramachandran, K.; Sugumaran, V.; Sakthivel, N. Multi component fault diagnosis of rotational mechanical system based on decision tree and support vector machine. Expert Syst.
Appl. 2011, 38, 3819–3826. [Google Scholar] [CrossRef]
8. Muralidharan, V.; Sugumaran, V. Feature extraction using wavelets and classification through decision tree algorithm for fault diagnosis of mono-block centrifugal pump. Measurement 2013, 46,
353–359. [Google Scholar] [CrossRef]
9. Hoang, D.T.; Kang, H.J. A survey on Deep Learning based bearing fault diagnosis. Neurocomputing 2019, 335, 327–335. [Google Scholar] [CrossRef]
10. Liu, R.; Yang, B.; Zio, E.; Chen, X. Artificial intelligence for fault diagnosis of rotating machinery: A review. Mech. Syst. Signal Process. 2018, 108, 33–47. [Google Scholar] [CrossRef]
11. Zhao, R.; Yan, R.; Chen, Z.; Mao, K.; Wang, P.; Gao, R.X. Deep learning and its applications to machine health monitoring. Mech. Syst. Signal Process. 2019, 115, 213–237. [Google Scholar] [
12. Wang, J.; Ma, Y.; Zhang, L.; Gao, R.X.; Wu, D. Deep learning for smart manufacturing: Methods and applications. J. Manuf. Syst. 2018, 48, 144–156. [Google Scholar]
13. Lei, Y.; Yang, B.; Jiang, X.; Jia, F.; Nandi, A.K. Applications of machine learning to machine fault diagnosis: A review and roadmap. Mech. Syst. Signal Process. 2020, 138, 106587. [Google
Scholar] [CrossRef]
14. Li, X.; Hu, Y.; Li, M.; Zheng, J. Fault diagnostics between different type of components: A transfer learning approach. Appl. Soft Comput. 2020, 86, 105950. [Google Scholar]
15. Zhang, R.; Tao, H.; Wu, L.; Guan, Y. Transfer learning with neural networks for bearing fault diagnosis in changing working conditions. IEEE Access 2017, 5, 14347–14357. [Google Scholar] [
16. Zhao, Z.; Zhang, Q.; Yu, X.; Sun, C.; Wang, S.; Yan, R.; Chen, X. Unsupervised Deep Transfer Learning for Intelligent Fault Diagnosis: An Open Source and Comparative Study. arXiv 2019,
arXiv:1912.12528. [Google Scholar]
17. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar]
18. Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A survey of transfer learning. J. Big Data 2016, 3, 9. [Google Scholar] [CrossRef] [Green Version]
19. Wilson, G.; Cook, D.J. A Survey of Unsupervised Deep Domain Adaptation. arXiv 2018, arXiv:1812.02849. [Google Scholar]
20. Wang, M.; Deng, W. Deep visual domain adaptation: A survey. Neurocomputing 2018, 312, 135–153. [Google Scholar] [CrossRef] [Green Version]
21. Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. arXiv 2015, arXiv:1511.07122. [Google Scholar]
22. Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1511.07122. [Google Scholar]
23. Wang, P.; Chen, P.; Yuan, Y.; Liu, D.; Huang, Z.; Hou, X.; Cottrell, G. Understanding convolution for semantic segmentation. In Proceedings of the 2018 IEEE Winter Conference on Applications of
Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 1451–1460. [Google Scholar]
24. Liu, R.; Wang, F.; Yang, B.; Qin, S.J. Multiscale Kernel Based Residual Convolutional Neural Network for Motor Fault Diagnosis Under Nonstationary Conditions. IEEE Trans. Ind. Inform. 2019, 16,
3797–3806. [Google Scholar] [CrossRef]
25. Jiang, G.; He, H.; Yan, J.; Xie, P. Multiscale convolutional neural networks for fault diagnosis of wind turbine gearbox. IEEE Trans. Ind. Electron. 2018, 66, 3196–3207. [Google Scholar] [
26. Qiao, H.; Wang, T.; Wang, P.; Zhang, L.; Xu, M. An adaptive weighted multiscale convolutional neural network for rotating machinery fault diagnosis under variable operating conditions. IEEE
Access 2019, 7, 118954–118964. [Google Scholar] [CrossRef]
27. Huang, W.; Cheng, J.; Yang, Y.; Guo, G. An improved deep convolutional neural network with multi-scale information for bearing fault diagnosis. Neurocomputing 2019, 359, 77–92. [Google Scholar] [
28. Jia, F.; Lei, Y.; Guo, L.; Lin, J.; Xing, S. A neural network constructed by deep learning technique and its application to intelligent fault diagnosis of machines. Neurocomputing 2018, 272,
619–628. [Google Scholar] [CrossRef]
29. Yu, J. A selective deep stacked denoising autoencoders ensemble with negative correlation learning for gearbox fault diagnosis. Comput. Ind. 2019, 108, 62–72. [Google Scholar] [CrossRef]
30. Jing, L.; Zhao, M.; Li, P.; Xu, X. A convolutional neural network based feature learning and fault diagnosis method for the condition monitoring of gearbox. Measurement 2017, 111, 1–10. [Google
Scholar] [CrossRef]
31. Han, T.; Liu, C.; Yang, W.; Jiang, D. A novel adversarial learning framework in deep convolutional neural network for intelligent diagnosis of mechanical faults. Know. Based Syst. 2019, 165,
474–487. [Google Scholar] [CrossRef]
32. Chen, T.; Wang, Z.; Yang, X.; Jiang, K. A deep capsule neural network with stochastic delta rule for bearing fault diagnosis on raw vibration signals. Measurement 2019, 148, 106857. [Google
Scholar] [CrossRef]
33. Li, X.; Zhang, W.; Ding, Q.; Sun, J.Q. Multi-layer domain adaptation method for rolling bearing fault diagnosis. Signal Process. 2019, 157, 180–197. [Google Scholar] [CrossRef] [Green Version]
34. Han, T.; Liu, C.; Yang, W.; Jiang, D. Deep transfer network with joint distribution adaptation: A new intelligent fault diagnosis framework for industry application. ISA Trans. 2019, 97, 269–281.
[Google Scholar] [CrossRef] [PubMed] [Green Version]
35. Wang, Q.; Michau, G.; Fink, O. Domain adaptive transfer learning for fault diagnosis. In Proceedings of the 2019 Prognostics and System Health Management Conference (PHM-Paris), Paris, France,
2–5 May 2019; pp. 279–285. [Google Scholar]
36. Guo, L.; Lei, Y.; Xing, S.; Yan, T.; Li, N. Deep convolutional transfer learning network: A new method for intelligent fault diagnosis of machines with unlabeled data. IEEE Trans. Ind. Electron.
2018, 66, 7316–7325. [Google Scholar] [CrossRef]
37. Li, X.; Zhang, W.; Ding, Q.; Li, X. Diagnosing Rotating Machines With Weakly Supervised Data Using Deep Transfer Learning. IEEE Trans. Ind. Electron. 2020, 16, 1688–1697. [Google Scholar] [
38. Neupane, D.; Seok, J. Bearing Fault Detection and Diagnosis Using Case Western Reserve University Dataset With Deep Learning Approaches: A Review. IEEE Access 2020, 8, 93155–93178. [Google
Scholar] [CrossRef]
39. Ben-David, S.; Blitzer, J.; Crammer, K.; Pereira, F. Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2007;
pp. 137–144. [Google Scholar]
40. Zilong, Z.; Lv, H.; Xu, J.; Zizhao, H.; Qin, W. A Deep Learning Method for Bearing Fault Diagnosis through Stacked Residual Dilated Convolutions. Appl. Sci. 2019, 9, 1823. [Google Scholar]
41. Zellinger, W.; Grubinger, T.; Lughofer, E.; Natschläger, T.; Saminger-Platz, S. Central moment discrepancy (cmd) for domain-invariant representation learning. arXiv 2017, arXiv:1702.08811. [
Google Scholar]
42. Smith, W.A.; Randall, R.B. Rolling element bearing diagnostics using the Case Western Reserve University data: A benchmark study. Mech. Syst. Signal Process. 2015, 64, 100–131. [Google Scholar] [
43. Lessmeier, C.; Kimotho, J.K.; Zimmer, D.; Sextro, W. Condition monitoring of bearing damage in electromechanical drive systems by using motor current signals of electric motors: A benchmark data
set for data-driven classification. In Proceedings of the European Conference of the Prognostics and Health Management Society, Bilbao, Spain, 5–8 July 2016; pp. 5–8. [Google Scholar]
44. Ghifary, M.; Kleijn, W.B.; Zhang, M. Domain adaptive neural networks for object recognition. In Proceedings of the Pacific Rim International Conference on Artificial Intelligence, Gold Coast,
QLD, Australia, 1–5 December 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 898–904. [Google Scholar]
45. Sun, B.; Saenko, K. Deep coral: Correlation alignment for deep domain adaptation. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–10 October 2016;
Springer: Berlin/Heidelberg, Germany, 2016; pp. 443–450. [Google Scholar]
46. Maaten, L.V.D.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
47. Jiao, J.; Zhao, M.; Lin, J.; Liang, K. A comprehensive review on convolutional neural network in machine fault diagnosis. Neurocomputing 2020, 417, 36–63. [Google Scholar] [CrossRef]
Figure 3. Several typical amplitude-frequency characteristic curves and the signals after convolution without activation function. K is the kernel size, and r is the dilation rate. In the four
parallel subgraphs below, the first row is the output of signal after convolution, the second row is the amplitude-frequency characteristics (AFC) curve, and the third row is the FFT spectrogram. In
FFT spectrogram, the blue line represents the original signal, the red line represents the output signal.
Figure 6. Example signals of CWRU and Paderborn dataset. B1 to B6 are the working conditions of CWRU dataset. P1 to P4 are the working conditions of Paderborn dataset.
Figure 9. Average accuracy of FFCNN with different dilate rate on all tasks. B tasks are the tasks evaluated on CWRU dataset, and P tasks are the tasks evaluated on Paderborn dataset.
Figure 11. The visualization of learned features on CWRU dataset. The blue markers represent the source domain, the red markers represent the target domain. They are obtained from task $B 4 → B 5$.
Figure 12. The visualization of learned features on Paderborn dataset. The blue markers represent the source domain, the red markers represent the target domain. They are obtained from task $P 3 → P
Figure 13. Amplitude-frequency characteristic curves of each filter in the first layer of CNN-maximum mean discrepancies (MMD) from task $B 5 → B 1$.
Figure 14. Amplitude-frequency characteristic curves of each filter in the first layer of FFCNN-A from task $B 5 → B 1$. (a–c) represent the branches 1, 2, 3 with a dilation rate = 1, 2, 3,
Figure 15. Amplitude-frequency characteristic curves of each filter in the first layer of FFCNN-B from task $B 5 → B 1$. (a–c) represent the branches 1, 2, 3 with a dilation rate = 1, 3, 5,
Table 1. Details of proposed Frequency-domain Fusing Convolutional Neural Network (FFCNN) architecture.
Layer Hyperparameters
CONV ($r 1$) $r 1 = 1$; channels: 8, kernel size: 15; stride: 1; activation: ReLu; padding: same
CONV ($r 2$) $r 2 = 2$ (or 3); channels: 8, kernel size: 15; stride: 1; activation: ReLu; padding: same
CONV ($r 3$) $r 3 = 3$ (or 5); channels: 8, kernel size: 15; stride: 1; activation: ReLu; padding: same
POOL1 Average Pooling, stride: 2
CONV channels: 32, kernel size:15; stride: 1; activation: ReLu; padding: same
POOL2 Average Pooling, stride: 2
Features layer Node number: 256, activation: ReLu
Softmax layer Node number: number of faults types, activation: softmax
Sampling Frequency Sensor Position Speed (rpm) Name of Setting
48 kHz Driven end 1796 B1
48 kHz Driven end 1772 B2
48 kHz Driven end 1725 B3
12 kHz Driven end 1796 B4
12 kHz Driven end 1725 B5
12 kHz Driven end 1750 B6
Rotating Speed (rpm) Load Torque (Nm) Radial Force (N) Fault Type Name of Setting
900 0.7 1000 P1
1500 0.1 1000 Health, inner fault, outer fault P2
1500 0.7 400 P3
1500 0.7 1000 P4
Table 4. Diagnosis accuracy (%) on different working conditions compared with different methods using CWRU dataset. The values in bold indicate that FFCNN has a higher accuracy rate than CNN.
Tasks Source Only CNN-MMD FFCNN-A FFCNN-B CNN-CORAL FFCNN-A FFCNN-B CNN-CMD FFCNN-A FFCNN-B
$B 1 → B 2$ 75.10 81.13 89.65 90.28 75.20 75.17 75.44 78.49 81.91 83.64
$B 1 → B 3$ 78.69 79.27 81.96 84.15 79.32 81.66 83.77 82.86 87.06 90.09
$B 1 → B 4$ 30.32 98.32 100.00 98.12 75.15 74.66 61.69 97.83 99.44 99.78
$B 1 → B 5$ 31.13 67.48 70.48 80.76 66.90 65.19 71.26 92.90 96.92 96.19
$B 1 → B 6$ 48.73 100.00 100.00 99.98 76.86 76.39 70.46 99.46 99.00 99.29
$B 2 → B 1$ 88.13 90.21 98.66 99.63 89.82 93.46 95.97 90.80 94.41 96.12
$B 2 → B 3$ 72.27 49.80 60.91 55.15 73.54 76.42 74.27 73.49 74.93 75.17
$B 2 → B 4$ 50.00 97.05 97.05 96.12 57.03 68.43 66.58 97.51 98.66 98.68
$B 2 → B 5$ 50.00 54.90 65.31 60.77 50.54 53.37 49.71 89.62 98.17 97.12
$B 2 → B 6$ 40.40 55.91 58.42 59.30 35.33 49.52 44.14 95.80 96.63 99.44
$B 3 → B 1$ 60.76 99.95 100.00 100.00 76.59 92.28 96.66 99.56 99.88 99.98
$B 3 → B 2$ 54.30 66.35 67.01 74.51 61.13 69.80 72.63 75.85 74.85 73.00
$B 3 → B 4$ 50.00 75.02 86.62 85.86 50.00 50.00 50.00 89.19 95.85 98.15
$B 3 → B 5$ 51.25 59.15 96.02 97.37 51.95 51.42 52.00 86.55 92.82 95.68
$B 3 → B 6$ 49.54 99.95 99.22 99.10 49.58 54.24 50.05 95.14 99.34 99.05
$B 4 → B 1$ 25.71 100.00 100.00 99.19 86.33 84.15 86.52 98.90 99.95 99.90
$B 4 → B 2$ 33.45 75.63 75.22 74.98 73.02 74.85 74.05 76.49 76.66 76.29
$B 4 → B 3$ 38.53 59.23 59.30 62.28 47.00 56.47 65.11 70.26 77.54 79.66
$B 4 → B 5$ 58.89 80.98 94.80 95.48 85.25 90.23 93.66 99.39 99.56 99.10
$B 4 → B 6$ 78.05 100.00 90.57 90.59 94.55 89.97 84.45 100.00 100.00 100.00
$B 5 → B 1$ 26.41 76.41 86.23 93.75 53.57 61.45 75.68 84.84 92.58 97.17
$B 5 → B 2$ 25.46 46.09 54.34 52.22 38.91 48.68 47.04 72.70 79.10 75.12
$B 5 → B 3$ 35.65 76.44 71.07 79.57 66.95 67.14 56.86 70.51 77.51 80.47
$B 5 → B 4$ 50.07 51.88 70.55 71.32 50.00 69.19 73.02 99.95 100.00 100.00
$B 5 → B 6$ 50.07 52.39 72.10 71.97 76.66 87.60 87.01 100.00 100.00 100.00
$B 6 → B 1$ 25.00 95.53 95.56 100.00 46.24 42.62 52.88 98.32 99.12 99.93
$B 6 → B 2$ 25.00 59.50 59.42 58.88 36.45 40.87 48.34 70.38 77.66 76.73
$B 6 → B 3$ 35.84 70.07 77.63 82.32 63.38 61.91 51.93 77.03 80.86 87.04
$B 6 → B 4$ 51.56 100.00 100.00 100.00 75.00 75.00 74.98 100.00 100.00 100.00
$B 6 → B 5$ 54.00 76.00 77.66 71.26 67.53 85.18 75.12 99.02 99.63 99.73
AVG 48.14 76.49 81.86 82.83 64.33 68.91 68.71 88.76 91.67 92.42
Table 5. Diagnosis accuracy (%) on different working conditions compared with different methods using Paderborn dataset. The values in bold indicate that FFCNN has a higher accuracy rate than CNN.
Tasks Source Only CNN-MMD FFCNN-A FFCNN-B CNN-CORAL FFCNN-A FFCNN-B CNN-CMD FFCNN-A FFCNN-B
$P 1 → P 2$ 42.71 56.09 69.47 76.33 46.65 51.95 53.48 62.66 68.94 65.79
$P 1 → P 3$ 50.62 18.07 18.30 20.61 57.72 65.04 64.94 42.28 59.80 64.42
$P 1 → P 4$ 41.57 51.31 46.07 54.00 46.39 52.90 53.78 54.75 61.98 63.15
$P 2 → P 1$ 48.92 76.78 88.57 87.37 52.63 61.33 62.24 72.79 74.64 76.30
$P 2 → P 3$ 87.05 94.47 95.15 94.89 90.46 92.35 92.48 93.13 93.78 93.16
$P 2 → P 4$ 88.28 91.96 90.14 92.51 88.64 85.81 86.85 88.64 87.60 90.72
$P 3 → P 1$ 39.81 65.09 80.25 81.24 39.06 40.23 40.53 74.09 74.97 75.91
$P 3 → P 2$ 57.62 92.12 92.90 93.88 62.77 65.10 65.40 87.21 89.78 90.40
$P 3 → P 4$ 51.63 86.20 85.25 85.94 51.40 49.19 47.04 79.85 80.08 78.87
$P 4 → P 1$ 47.07 70.60 74.58 72.69 50.13 59.11 56.93 68.52 70.28 71.48
$P 4 → P 2$ 94.73 95.74 96.09 96.71 95.02 93.46 94.60 94.73 93.98 94.30
$P 4 → P 3$ 60.32 90.82 89.81 90.95 81.05 84.51 84.73 87.04 87.21 88.09
AVG 59.19 74.10 77.22 78.93 63.49 66.75 66.92 75.47 78.59 79.38
CNN $1$
Dilated kernels $1$ Common kernels
Diration rate Params $2$ Acc Kernel size Params $2$ Acc
1 11936 83.03 15 11936 83.03
2 11936 73.55 29 12272 89.3
3 11936 96.65 43 12608 64.85
4 11936 90.09 57 12944 68.58
5 11936 84.48 71 13280 83.87
1, 2, 3 11936 86.23 15, 29, 43 12272 88.06
1, 3, 5 11936 93.75 15, 43, 71 12608 87.11
^1 For fair comparison, dilated convolution and common convolution kernels of varying size only act on the first layer in CNN. ^2 Only count the number of parameters in the convolutional layers.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Li, X.; Zheng, J.; Li, M.; Ma, W.; Hu, Y. Frequency-Domain Fusing Convolutional Neural Network: A Unified Architecture Improving Effect of Domain Adaptation for Fault Diagnosis. Sensors 2021, 21,
450. https://doi.org/10.3390/s21020450
AMA Style
Li X, Zheng J, Li M, Ma W, Hu Y. Frequency-Domain Fusing Convolutional Neural Network: A Unified Architecture Improving Effect of Domain Adaptation for Fault Diagnosis. Sensors. 2021; 21(2):450.
Chicago/Turabian Style
Li, Xudong, Jianhua Zheng, Mingtao Li, Wenzhen Ma, and Yang Hu. 2021. "Frequency-Domain Fusing Convolutional Neural Network: A Unified Architecture Improving Effect of Domain Adaptation for Fault
Diagnosis" Sensors 21, no. 2: 450. https://doi.org/10.3390/s21020450
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/1424-8220/21/2/450","timestamp":"2024-11-08T21:26:21Z","content_type":"text/html","content_length":"630634","record_id":"<urn:uuid:53094041-4c28-4f39-aa4e-1bbd814e97e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00714.warc.gz"} |
Can't Paste Data from One Equation Object to Another
ID: Q143080
The information in this article applies to:
• Microsoft Equation Editor for Windows, version 2.1
• Microsoft Word for Windows 95, version 7.0
• Microsoft PowerPoint for Windows 95, version 7.0
You cannot paste all or part of one equation editor object into another equation editor object.
When you copy part of one equation using the shortcut menu, the entire equation pastes into the new equation object, even though you copied only a portion of the original equation. If you type part
of your equation before you choose paste, your equation will be overwritten by the paste operation.
This problem does not occur when you copy and paste part of an equation object into the current equation object.
This functionality is different from previous versions of Word where you are able to copy and paste portions of one equation object into another.
This problem occurs because Equation Editor is closed when you deactivate the equation object. In contrast, in previous versions of Word, the Equation Editor remains open in the background. This
functionality was changed to improve resource usage while Equation Editor is running.
Microsoft has confirmed this to be a problem in Word for Windows 95 version 7.0. Microsoft is researching this problem and will post new information here in the Microsoft Knowledge Base as it becomes
Start the Equation Editor independent of another program, switch between Word and Equation Editor to piece together your equation, and then paste the equation back into the document in the other
program (for example, paste the equation back into your Word document).
KBCategory: kbusage KBSubcategory: kbole
Additional reference words: 7.00 word95 word7 equation editor object ole copy paste dim dimmed grey gray greyed grayed not available unavailable can't won't doesn't
Last Reviewed: September 10, 1996 | {"url":"https://helparchive.huntertur.net/document/23805","timestamp":"2024-11-12T19:19:14Z","content_type":"text/html","content_length":"5301","record_id":"<urn:uuid:09cd3dbc-d020-4668-aa54-ea367d959aba>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00580.warc.gz"} |
Kevin Allen
Contact Information
Kevin Allen
School of Mathematics and Statistics
University College Dublin
Email: kevin dot allen1 at ucdconnect dot ie
Supervisor: Robert Osburn
Research Interests
Knot Theory, Number Theory, Modular Forms, Combinatorics, Projective Geometry
1. K. Allen, “Generalised rank deviations for overpartitions”, in preparation.
2. K. Allen and R. Osburn, “Unimodal sequences and mixed false theta functions”, submitted.
3. K. Allen and J. Sheekey, “On translation hyperovals in semifield planes”, Designs, Codes and Cryptography, accepted for publication.
Teaching (TA/ Tutor Roles)
MATH 10200 Matrix Algebra
MATH 10210 Foundations of Maths for Computer Science I
MATH 10030 Maths for Business
MATH 10350 Calculus for Mathematical and Physical Sciences
MATH 20320 Quantitative Methods in Business
MATH 20300 Linear Algebra 2 for the Math Sci
MATH 20060 Calculus of Several Variables
ACM 10060 Applications of Differential Equations
ACM 30030 Multivariable Calculus for Engineering II
MST 30070 Differential Geometry
Recent and Upcoming Talks
Canadian Number Theory Association Meeting, Fields Institute, June 2024 (Video of talk)
32èmes Journées Arithmétiques, Université de Lorraine, July 2023
35th Automorphic Forms Workshop, LSU, May 2023
Workshop on Integer Partitions, Nesin Mathematics Village, June 2022
Short PhD Talks, UCD Mathematical Society, Nov 2021
Recent and Upcoming Conferences
Building Bridges: 6th EU/US Summer School & Workshop on Automorphic Forms and Related Topics (BB6) School and Workshop on Automorphic Forms and Related Topics, CIRM, September 2024
Canadian Number Theory Association Meeting, Fields Institute, June 2024
32èmes Journées Arithmétiques, Université de Lorraine, July 2023
35th Automorphic Forms Workshop, LSU, May 2023
Ramanujan and Euler: Partitions, mock theta functions, and q-series, Online Number Theory School, July 2022
Workshop on Integer Partitions , Nesin Mathematics Village, June 2022
2022 NSF-CBMS Regional Research Conferences in the Mathematical Sciences: Ramanujan’s Partition Congruences, Mock Theta Functions, and Beyond, University of Texas RGV, May 2022
Funding and Awards
UCD Research Demonstratorship 2021
UCD Casey Medal 2020
UCD Stage 3 Scholar 2018
UCD Entrance Scholar 2016
• BSc Mathematics: "Semifields: A Classification Problem"
• MSc Mathematical Science: "Hyperovals, Semifields and Cherowitzo's Conjecture" | {"url":"https://maths.ucd.ie/~kallen/","timestamp":"2024-11-03T23:26:17Z","content_type":"text/html","content_length":"6006","record_id":"<urn:uuid:6b88c155-6dc2-454d-9c54-9be73dbf455e>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00114.warc.gz"} |
Algebraic K-theory - Wikiwand
Algebraic K-theory is a subject area in mathematics with connections to geometry, topology, ring theory, and number theory. Geometric, algebraic, and arithmetic objects are assigned objects called K
-groups. These are groups in the sense of abstract algebra. They contain detailed information about the original object but are notoriously difficult to compute; for example, an important outstanding
problem is to compute the K-groups of the integers.
K-theory was discovered in the late 1950s by Alexander Grothendieck in his study of intersection theory on algebraic varieties. In the modern language, Grothendieck defined only K[0], the zeroth K
-group, but even this single group has plenty of applications, such as the Grothendieck–Riemann–Roch theorem. Intersection theory is still a motivating force in the development of (higher) algebraic
K-theory through its links with motivic cohomology and specifically Chow groups. The subject also includes classical number-theoretic topics like quadratic reciprocity and embeddings of number fields
into the real numbers and complex numbers, as well as more modern concerns like the construction of higher regulators and special values of L-functions.
The lower K-groups were discovered first, in the sense that adequate descriptions of these groups in terms of other algebraic structures were found. For example, if F is a field, then K[0](F) is
isomorphic to the integers Z and is closely related to the notion of vector space dimension. For a commutative ring R, the group K[0](R) is related to the Picard group of R, and when R is the ring of
integers in a number field, this generalizes the classical construction of the class group. The group K[1](R) is closely related to the group of units R^×, and if R is a field, it is exactly the
group of units. For a number field F, the group K[2](F) is related to class field theory, the Hilbert symbol, and the solvability of quadratic equations over completions. In contrast, finding the
correct definition of the higher K-groups of rings was a difficult achievement of Daniel Quillen, and many of the basic facts about the higher K-groups of algebraic varieties were not known until the
work of Robert Thomason.
The history of K-theory was detailed by Charles Weibel.^[1]
The Grothendieck group K[0]
In the 19th century, Bernhard Riemann and his student Gustav Roch proved what is now known as the Riemann–Roch theorem. If X is a Riemann surface, then the sets of meromorphic functions and
meromorphic differential forms on X form vector spaces. A line bundle on X determines subspaces of these vector spaces, and if X is projective, then these subspaces are finite dimensional. The
Riemann–Roch theorem states that the difference in dimensions between these subspaces is equal to the degree of the line bundle (a measure of twistedness) plus one minus the genus of X. In the
mid-20th century, the Riemann–Roch theorem was generalized by Friedrich Hirzebruch to all algebraic varieties. In Hirzebruch's formulation, the Hirzebruch–Riemann–Roch theorem, the theorem became a
statement about Euler characteristics: The Euler characteristic of a vector bundle on an algebraic variety (which is the alternating sum of the dimensions of its cohomology groups) equals the Euler
characteristic of the trivial bundle plus a correction factor coming from characteristic classes of the vector bundle. This is a generalization because on a projective Riemann surface, the Euler
characteristic of a line bundle equals the difference in dimensions mentioned previously, the Euler characteristic of the trivial bundle is one minus the genus, and the only nontrivial characteristic
class is the degree.
The subject of K-theory takes its name from a 1957 construction of Alexander Grothendieck which appeared in the Grothendieck–Riemann–Roch theorem, his generalization of Hirzebruch's theorem.^[2] Let
X be a smooth algebraic variety. To each vector bundle on X, Grothendieck associates an invariant, its class. The set of all classes on X was called K(X) from the German Klasse. By definition, K(X)
is a quotient of the free abelian group on isomorphism classes of vector bundles on X, and so it is an abelian group. If the basis element corresponding to a vector bundle V is denoted [V], then for
each short exact sequence of vector bundles:
${\displaystyle 0\to V'\to V\to V''\to 0,}$
Grothendieck imposed the relation [V] = [V′] + [V″]. These generators and relations define K(X), and they imply that it is the universal way to assign invariants to vector bundles in a way compatible
with exact sequences.
Grothendieck took the perspective that the Riemann–Roch theorem is a statement about morphisms of varieties, not the varieties themselves. He proved that there is a homomorphism from K(X) to the Chow
groups of X coming from the Chern character and Todd class of X. Additionally, he proved that a proper morphism f : X → Y to a smooth variety Y determines a homomorphism f[*] : K(X) → K(Y) called the
pushforward. This gives two ways of determining an element in the Chow group of Y from a vector bundle on X: Starting from X, one can first compute the pushforward in K-theory and then apply the
Chern character and Todd class of Y, or one can first apply the Chern character and Todd class of X and then compute the pushforward for Chow groups. The Grothendieck–Riemann–Roch theorem says that
these are equal. When Y is a point, a vector bundle is a vector space, the class of a vector space is its dimension, and the Grothendieck–Riemann–Roch theorem specializes to Hirzebruch's theorem.
The group K(X) is now known as K[0](X). Upon replacing vector bundles by projective modules, K[0] also became defined for non-commutative rings, where it had applications to group representations.
Atiyah and Hirzebruch quickly transported Grothendieck's construction to topology and used it to define topological K-theory.^[3] Topological K-theory was one of the first examples of an
extraordinary cohomology theory: It associates to each topological space X (satisfying some mild technical constraints) a sequence of groups K[n](X) which satisfy all the Eilenberg–Steenrod axioms
except the normalization axiom. The setting of algebraic varieties, however, is much more rigid, and the flexible constructions used in topology were not available. While the group K[0] seemed to
satisfy the necessary properties to be the beginning of a cohomology theory of algebraic varieties and of non-commutative rings, there was no clear definition of the higher K[n](X). Even as such
definitions were developed, technical issues surrounding restriction and gluing usually forced K[n] to be defined only for rings, not for varieties.
K[0], K[1], and K[2]
A group closely related to K[1] for group rings was earlier introduced by J.H.C. Whitehead. Henri Poincaré had attempted to define the Betti numbers of a manifold in terms of a triangulation. His
methods, however, had a serious gap: Poincaré could not prove that two triangulations of a manifold always yielded the same Betti numbers. It was clearly true that Betti numbers were unchanged by
subdividing the triangulation, and therefore it was clear that any two triangulations that shared a common subdivision had the same Betti numbers. What was not known was that any two triangulations
admitted a common subdivision. This hypothesis became a conjecture known as the Hauptvermutung (roughly "main conjecture"). The fact that triangulations were stable under subdivision led J.H.C.
Whitehead to introduce the notion of simple homotopy type.^[4] A simple homotopy equivalence is defined in terms of adding simplices or cells to a simplicial complex or cell complex in such a way
that each additional simplex or cell deformation retracts into a subdivision of the old space. Part of the motivation for this definition is that a subdivision of a triangulation is simple homotopy
equivalent to the original triangulation, and therefore two triangulations that share a common subdivision must be simple homotopy equivalent. Whitehead proved that simple homotopy equivalence is a
finer invariant than homotopy equivalence by introducing an invariant called the torsion. The torsion of a homotopy equivalence takes values in a group now called the Whitehead group and denoted Wh(π
), where π is the fundamental group of the target complex. Whitehead found examples of non-trivial torsion and thereby proved that some homotopy equivalences were not simple. The Whitehead group was
later discovered to be a quotient of K[1](Zπ), where Zπ is the integral group ring of π. Later John Milnor used Reidemeister torsion, an invariant related to Whitehead torsion, to disprove the
The first adequate definition of K[1] of a ring was made by Hyman Bass and Stephen Schanuel.^[5] In topological K-theory, K[1] is defined using vector bundles on a suspension of the space. All such
vector bundles come from the clutching construction, where two trivial vector bundles on two halves of a space are glued along a common strip of the space. This gluing data is expressed using the
general linear group, but elements of that group coming from elementary matrices (matrices corresponding to elementary row or column operations) define equivalent gluings. Motivated by this, the
Bass–Schanuel definition of K[1] of a ring R is GL(R) / E(R), where GL(R) is the infinite general linear group (the union of all GL[n](R)) and E(R) is the subgroup of elementary matrices. They also
provided a definition of K[0] of a homomorphism of rings and proved that K[0] and K[1] could be fit together into an exact sequence similar to the relative homology exact sequence.
Work in K-theory from this period culminated in Bass' book Algebraic K-theory.^[6] In addition to providing a coherent exposition of the results then known, Bass improved many of the statements of
the theorems. Of particular note is that Bass, building on his earlier work with Murthy,^[7] provided the first proof of what is now known as the fundamental theorem of algebraic K-theory. This is a
four-term exact sequence relating K[0] of a ring R to K[1] of R, the polynomial ring R[t], and the localization R[t, t^−1]. Bass recognized that this theorem provided a description of K[0] entirely
in terms of K[1]. By applying this description recursively, he produced negative K-groups K[−n](R). In independent work, Max Karoubi gave another definition of negative K-groups for certain
categories and proved that his definitions yielded that same groups as those of Bass.^[8]
The next major development in the subject came with the definition of K[2]. Steinberg studied the universal central extensions of a Chevalley group over a field and gave an explicit presentation of
this group in terms of generators and relations.^[9] In the case of the group E[n](k) of elementary matrices, the universal central extension is now written St[n](k) and called the Steinberg group.
In the spring of 1967, John Milnor defined K[2](R) to be the kernel of the homomorphism St(R) → E(R).^[10] The group K[2] further extended some of the exact sequences known for K[1] and K[0], and it
had striking applications to number theory. Hideya Matsumoto's 1968 thesis^[11] showed that for a field F, K[2](F) was isomorphic to:
${\displaystyle F^{\times }\otimes _{\mathbf {Z} }F^{\times }/\langle x\otimes (1-x)\colon x\in F\setminus \{0,1\}\rangle .}$
This relation is also satisfied by the Hilbert symbol, which expresses the solvability of quadratic equations over local fields. In particular, John Tate was able to prove that K[2](Q) is essentially
structured around the law of quadratic reciprocity.
Higher K-groups
In the late 1960s and early 1970s, several definitions of higher K-theory were proposed. Swan^[12] and Gersten^[13] both produced definitions of K[n] for all n, and Gersten proved that his and Swan's
theories were equivalent, but the two theories were not known to satisfy all the expected properties. Nobile and Villamayor also proposed a definition of higher K-groups.^[14] Karoubi and Villamayor
defined well-behaved K-groups for all n,^[15] but their equivalent of K[1] was sometimes a proper quotient of the Bass–Schanuel K[1]. Their K-groups are now called KV[n] and are related to
homotopy-invariant modifications of K-theory.
Inspired in part by Matsumoto's theorem, Milnor made a definition of the higher K-groups of a field.^[16] He referred to his definition as "purely ad hoc",^[17] and it neither appeared to generalize
to all rings nor did it appear to be the correct definition of the higher K-theory of fields. Much later, it was discovered by Nesterenko and Suslin^[18] and by Totaro^[19] that Milnor K-theory is
actually a direct summand of the true K-theory of the field. Specifically, K-groups have a filtration called the weight filtration, and the Milnor K-theory of a field is the highest weight-graded
piece of the K-theory. Additionally, Thomason discovered that there is no analog of Milnor K-theory for a general variety.^[20]
The first definition of higher K-theory to be widely accepted was Daniel Quillen's.^[21] As part of Quillen's work on the Adams conjecture in topology, he had constructed maps from the classifying
spaces BGL(F[q]) to the homotopy fiber of ψ^q − 1, where ψ^q is the qth Adams operation acting on the classifying space BU. This map is acyclic, and after modifying BGL(F[q]) slightly to produce a
new space BGL(F[q])^+, the map became a homotopy equivalence. This modification was called the plus construction. The Adams operations had been known to be related to Chern classes and to K-theory
since the work of Grothendieck, and so Quillen was led to define the K-theory of R as the homotopy groups of BGL(R)^+. Not only did this recover K[1] and K[2], the relation of K-theory to the Adams
operations allowed Quillen to compute the K-groups of finite fields.
The classifying space BGL is connected, so Quillen's definition failed to give the correct value for K[0]. Additionally, it did not give any negative K-groups. Since K[0] had a known and accepted
definition it was possible to sidestep this difficulty, but it remained technically awkward. Conceptually, the problem was that the definition sprung from GL, which was classically the source of K
[1]. Because GL knows only about gluing vector bundles, not about the vector bundles themselves, it was impossible for it to describe K[0].
Inspired by conversations with Quillen, Segal soon introduced another approach to constructing algebraic K-theory under the name of Γ-objects.^[22] Segal's approach is a homotopy analog of
Grothendieck's construction of K[0]. Where Grothendieck worked with isomorphism classes of bundles, Segal worked with the bundles themselves and used isomorphisms of the bundles as part of his data.
This results in a spectrum whose homotopy groups are the higher K-groups (including K[0]). However, Segal's approach was only able to impose relations for split exact sequences, not general exact
sequences. In the category of projective modules over a ring, every short exact sequence splits, and so Γ-objects could be used to define the K-theory of a ring. However, there are non-split short
exact sequences in the category of vector bundles on a variety and in the category of all modules over a ring, so Segal's approach did not apply to all cases of interest.
In the spring of 1972, Quillen found another approach to the construction of higher K-theory which was to prove enormously successful. This new definition began with an exact category, a category
satisfying certain formal properties similar to, but slightly weaker than, the properties satisfied by a category of modules or vector bundles. From this he constructed an auxiliary category using a
new device called his "Q-construction." Like Segal's Γ-objects, the Q-construction has its roots in Grothendieck's definition of K[0]. Unlike Grothendieck's definition, however, the Q-construction
builds a category, not an abelian group, and unlike Segal's Γ-objects, the Q-construction works directly with short exact sequences. If C is an abelian category, then QC is a category with the same
objects as C but whose morphisms are defined in terms of short exact sequences in C. The K-groups of the exact category are the homotopy groups of ΩBQC, the loop space of the geometric realization
(taking the loop space corrects the indexing). Quillen additionally proved his "+ = Q theorem" that his two definitions of K-theory agreed with each other. This yielded the correct K[0] and led to
simpler proofs, but still did not yield any negative K-groups.
All abelian categories are exact categories, but not all exact categories are abelian. Because Quillen was able to work in this more general situation, he was able to use exact categories as tools in
his proofs. This technique allowed him to prove many of the basic theorems of algebraic K-theory. Additionally, it was possible to prove that the earlier definitions of Swan and Gersten were
equivalent to Quillen's under certain conditions.
K-theory now appeared to be a homology theory for rings and a cohomology theory for varieties. However, many of its basic theorems carried the hypothesis that the ring or variety in question was
regular. One of the basic expected relations was a long exact sequence (called the "localization sequence") relating the K-theory of a variety X and an open subset U. Quillen was unable to prove the
existence of the localization sequence in full generality. He was, however, able to prove its existence for a related theory called G-theory (or sometimes K′-theory). G-theory had been defined early
in the development of the subject by Grothendieck. Grothendieck defined G[0](X) for a variety X to be the free abelian group on isomorphism classes of coherent sheaves on X, modulo relations coming
from exact sequences of coherent sheaves. In the categorical framework adopted by later authors, the K-theory of a variety is the K-theory of its category of vector bundles, while its G-theory is the
K-theory of its category of coherent sheaves. Not only could Quillen prove the existence of a localization exact sequence for G-theory, he could prove that for a regular ring or variety, K-theory
equaled G-theory, and therefore K-theory of regular varieties had a localization exact sequence. Since this sequence was fundamental to many of the facts in the subject, regularity hypotheses
pervaded early work on higher K-theory.
Applications of algebraic K-theory in topology
The earliest application of algebraic K-theory to topology was Whitehead's construction of Whitehead torsion. A closely related construction was found by C. T. C. Wall in 1963.^[23] Wall found that a
space X dominated by a finite complex has a generalized Euler characteristic taking values in a quotient of K[0](Zπ), where π is the fundamental group of the space. This invariant is called Wall's
finiteness obstruction because X is homotopy equivalent to a finite complex if and only if the invariant vanishes. Laurent Siebenmann in his thesis found an invariant similar to Wall's that gives an
obstruction to an open manifold being the interior of a compact manifold with boundary.^[24] If two manifolds with boundary M and N have isomorphic interiors (in TOP, PL, or DIFF as appropriate),
then the isomorphism between them defines an h-cobordism between M and N.
Whitehead torsion was eventually reinterpreted in a more directly K-theoretic way. This reinterpretation happened through the study of h-cobordisms. Two n-dimensional manifolds M and N are h
-cobordant if there exists an (n + 1)-dimensional manifold with boundary W whose boundary is the disjoint union of M and N and for which the inclusions of M and N into W are homotopy equivalences (in
the categories TOP, PL, or DIFF). Stephen Smale's h-cobordism theorem^[25] asserted that if n ≥ 5, W is compact, and M, N, and W are simply connected, then W is isomorphic to the cylinder M × [0, 1]
(in TOP, PL, or DIFF as appropriate). This theorem proved the Poincaré conjecture for n ≥ 5.
If M and N are not assumed to be simply connected, then an h-cobordism need not be a cylinder. The s-cobordism theorem, due independently to Mazur,^[26] Stallings, and Barden,^[27] explains the
general situation: An h-cobordism is a cylinder if and only if the Whitehead torsion of the inclusion M ⊂ W vanishes. This generalizes the h-cobordism theorem because the simple connectedness
hypotheses imply that the relevant Whitehead group is trivial. In fact the s-cobordism theorem implies that there is a bijective correspondence between isomorphism classes of h-cobordisms and
elements of the Whitehead group.
An obvious question associated with the existence of h-cobordisms is their uniqueness. The natural notion of equivalence is isotopy. Jean Cerf proved that for simply connected smooth manifolds M of
dimension at least 5, isotopy of h-cobordisms is the same as a weaker notion called pseudo-isotopy.^[28] Hatcher and Wagoner studied the components of the space of pseudo-isotopies and related it to
a quotient of K[2](Zπ).^[29]
The proper context for the s-cobordism theorem is the classifying space of h-cobordisms. If M is a CAT manifold, then H^CAT(M) is a space that classifies bundles of h-cobordisms on M. The s-cobordism
theorem can be reinterpreted as the statement that the set of connected components of this space is the Whitehead group of π[1](M). This space contains strictly more information than the Whitehead
group; for example, the connected component of the trivial cobordism describes the possible cylinders on M and in particular is the obstruction to the uniqueness of a homotopy between a manifold and
M × [0, 1]. Consideration of these questions led Waldhausen to introduce his algebraic K-theory of spaces.^[30] The algebraic K-theory of M is a space A(M) which is defined so that it plays
essentially the same role for higher K-groups as K[1](Zπ[1](M)) does for M. In particular, Waldhausen showed that there is a map from A(M) to a space Wh(M) which generalizes the map K[1](Zπ[1](M)) →
Wh(π[1](M)) and whose homotopy fiber is a homology theory.
In order to fully develop A-theory, Waldhausen made significant technical advances in the foundations of K-theory. Waldhausen introduced Waldhausen categories, and for a Waldhausen category C he
introduced a simplicial category S[⋅]C (the S is for Segal) defined in terms of chains of cofibrations in C.^[31] This freed the foundations of K-theory from the need to invoke analogs of exact
Algebraic topology and algebraic geometry in algebraic K-theory
Quillen suggested to his student Kenneth Brown that it might be possible to create a theory of sheaves of spectra of which K-theory would provide an example. The sheaf of K-theory spectra would, to
each open subset of a variety, associate the K-theory of that open subset. Brown developed such a theory for his thesis. Simultaneously, Gersten had the same idea. At a Seattle conference in autumn
of 1972, they together discovered a spectral sequence converging from the sheaf cohomology of ${\displaystyle {\mathcal {K}}_{n}}$, the sheaf of K[n]-groups on X, to the K-group of the total space.
This is now called the Brown–Gersten spectral sequence.^[32]
Spencer Bloch, influenced by Gersten's work on sheaves of K-groups, proved that on a regular surface, the cohomology group ${\displaystyle H^{2}(X,{\mathcal {K}}_{2})}$ is isomorphic to the Chow
group CH^2(X) of codimension 2 cycles on X.^[33] Inspired by this, Gersten conjectured that for a regular local ring R with fraction field F, K[n](R) injects into K[n](F) for all n. Soon Quillen
proved that this is true when R contains a field,^[34] and using this he proved that
${\displaystyle H^{p}(X,{\mathcal {K}}_{p})\cong \operatorname {CH} ^{p}(X)}$
for all p. This is known as Bloch's formula. While progress has been made on Gersten's conjecture since then, the general case remains open.
Lichtenbaum conjectured that special values of the zeta function of a number field could be expressed in terms of the K-groups of the ring of integers of the field. These special values were known to
be related to the étale cohomology of the ring of integers. Quillen therefore generalized Lichtenbaum's conjecture, predicting the existence of a spectral sequence like the Atiyah–Hirzebruch spectral
sequence in topological K-theory.^[35] Quillen's proposed spectral sequence would start from the étale cohomology of a ring R and, in high enough degrees and after completing at a prime l invertible
in R, abut to the l-adic completion of the K-theory of R. In the case studied by Lichtenbaum, the spectral sequence would degenerate, yielding Lichtenbaum's conjecture.
The necessity of localizing at a prime l suggested to Browder that there should be a variant of K-theory with finite coefficients.^[36] He introduced K-theory groups K[n](R; Z/lZ) which were Z/lZ
-vector spaces, and he found an analog of the Bott element in topological K-theory. Soule used this theory to construct "étale Chern classes", an analog of topological Chern classes which took
elements of algebraic K-theory to classes in étale cohomology.^[37] Unlike algebraic K-theory, étale cohomology is highly computable, so étale Chern classes provided an effective tool for detecting
the existence of elements in K-theory. William G. Dwyer and Eric Friedlander then invented an analog of K-theory for the étale topology called étale K-theory.^[38] For varieties defined over the
complex numbers, étale K-theory is isomorphic to topological K-theory. Moreover, étale K-theory admitted a spectral sequence similar to the one conjectured by Quillen. Thomason proved around 1980
that after inverting the Bott element, algebraic K-theory with finite coefficients became isomorphic to étale K-theory.^[39]
Throughout the 1970s and early 1980s, K-theory on singular varieties still lacked adequate foundations. While it was believed that Quillen's K-theory gave the correct groups, it was not known that
these groups had all of the envisaged properties. For this, algebraic K-theory had to be reformulated. This was done by Thomason in a lengthy monograph which he co-credited to his dead friend Thomas
Trobaugh, who he said gave him a key idea in a dream.^[40] Thomason combined Waldhausen's construction of K-theory with the foundations of intersection theory described in volume six of
Grothendieck's Séminaire de Géométrie Algébrique du Bois Marie. There, K[0] was described in terms of complexes of sheaves on algebraic varieties. Thomason discovered that if one worked with in
derived category of sheaves, there was a simple description of when a complex of sheaves could be extended from an open subset of a variety to the whole variety. By applying Waldhausen's construction
of K-theory to derived categories, Thomason was able to prove that algebraic K-theory had all the expected properties of a cohomology theory.
In 1976, R. Keith Dennis discovered an entirely novel technique for computing K-theory based on Hochschild homology.^[41] This was based around the existence of the Dennis trace map, a homomorphism
from K-theory to Hochschild homology. While the Dennis trace map seemed to be successful for calculations of K-theory with finite coefficients, it was less successful for rational calculations.
Goodwillie, motivated by his "calculus of functors", conjectured the existence of a theory intermediate to K-theory and Hochschild homology. He called this theory topological Hochschild homology
because its ground ring should be the sphere spectrum (considered as a ring whose operations are defined only up to homotopy). In the mid-1980s, Bokstedt gave a definition of topological Hochschild
homology that satisfied nearly all of Goodwillie's conjectural properties, and this made possible further computations of K-groups.^[42] Bokstedt's version of the Dennis trace map was a
transformation of spectra K → THH. This transformation factored through the fixed points of a circle action on THH, which suggested a relationship with cyclic homology. In the course of proving an
algebraic K-theory analog of the Novikov conjecture, Bokstedt, Hsiang, and Madsen introduced topological cyclic homology, which bore the same relationship to topological Hochschild homology as cyclic
homology did to Hochschild homology.^[43] The Dennis trace map to topological Hochschild homology factors through topological cyclic homology, providing an even more detailed tool for calculations.
In 1996, Dundas, Goodwillie, and McCarthy proved that topological cyclic homology has in a precise sense the same local structure as algebraic K-theory, so that if a calculation in K-theory or
topological cyclic homology is possible, then many other "nearby" calculations follow.^[44]
The lower K-groups were discovered first, and given various ad hoc descriptions, which remain useful. Throughout, let A be a ring.
The functor K[0] takes a ring A to the Grothendieck group of the set of isomorphism classes of its finitely generated projective modules, regarded as a monoid under direct sum. Any ring homomorphism
A → B gives a map K[0](A) → K[0](B) by mapping (the class of) a projective A-module M to M ⊗[A] B, making K[0] a covariant functor.
If the ring A is commutative, we can define a subgroup of K[0](A) as the set
${\displaystyle {\tilde {K}}_{0}\left(A\right)=\bigcap \limits _{{\mathfrak {p}}{\text{ prime ideal of }}A}\mathrm {Ker} \dim _{\mathfrak {p}},}$
where :
${\displaystyle \dim _{\mathfrak {p}}:K_{0}\left(A\right)\to \mathbf {Z} }$
is the map sending every (class of a) finitely generated projective A-module M to the rank of the free ${\displaystyle A_{\mathfrak {p}}}$-module ${\displaystyle M_{\mathfrak {p}}}$ (this module is
indeed free, as any finitely generated projective module over a local ring is free). This subgroup ${\displaystyle {\tilde {K}}_{0}\left(A\right)}$ is known as the reduced zeroth K-theory of A.
If B is a ring without an identity element, we can extend the definition of K[0] as follows. Let A = B⊕Z be the extension of B to a ring with unity obtaining by adjoining an identity element (0,1).
There is a short exact sequence B → A → Z and we define K[0](B) to be the kernel of the corresponding map K[0](A) → K[0](Z) = Z.^[45]
Relative K[0]
Let I be an ideal of A and define the "double" to be a subring of the Cartesian product A×A:^[49]
${\displaystyle D(A,I)=\{(x,y)\in A\times A:x-y\in I\}\ .}$
The relative K-group is defined in terms of the "double"^[50]
${\displaystyle K_{0}(A,I)=\ker \left({K_{0}(D(A,I))\rightarrow K_{0}(A)}\right)\ .}$
where the map is induced by projection along the first factor.
The relative K[0](A,I) is isomorphic to K[0](I), regarding I as a ring without identity. The independence from A is an analogue of the Excision theorem in homology.^[45]
K[0] as a ring
If A is a commutative ring, then the tensor product of projective modules is again projective, and so tensor product induces a multiplication turning K[0] into a commutative ring with the class [A]
as identity.^[46] The exterior product similarly induces a λ-ring structure. The Picard group embeds as a subgroup of the group of units K[0](A)^∗.^[51]
Hyman Bass provided this definition, which generalizes the group of units of a ring: K[1](A) is the abelianization of the infinite general linear group:
${\displaystyle K_{1}(A)=\operatorname {GL} (A)^{\mbox{ab}}=\operatorname {GL} (A)/[\operatorname {GL} (A),\operatorname {GL} (A)]}$
${\displaystyle \operatorname {GL} (A)=\operatorname {colim} \operatorname {GL} (n,A)}$
is the direct limit of the GL(n), which embeds in GL(n + 1) as the upper left block matrix, and ${\displaystyle [\operatorname {GL} (A),\operatorname {GL} (A)]}$ is its commutator subgroup. Define an
elementary matrix to be one which is the sum of an identity matrix and a single off-diagonal element (this is a subset of the elementary matrices used in linear algebra). Then Whitehead's lemma
states that the group E(A) generated by elementary matrices equals the commutator subgroup [GL(A), GL(A)]. Indeed, the group GL(A)/E(A) was first defined and studied by Whitehead,^[52] and is called
the Whitehead group of the ring A.
Relative K[1]
The relative K-group is defined in terms of the "double"^[53]
${\displaystyle K_{1}(A,I)=\ker \left({K_{1}(D(A,I))\rightarrow K_{1}(A)}\right)\ .}$
There is a natural exact sequence^[54]
${\displaystyle K_{1}(A,I)\rightarrow K_{1}(A)\rightarrow K_{1}(A/I)\rightarrow K_{0}(A,I)\rightarrow K_{0}(A)\rightarrow K_{0}(A/I)\ .}$
Commutative rings and fields
For A a commutative ring, one can define a determinant det: GL(A) → A* to the group of units of A, which vanishes on E(A) and thus descends to a map det : K[1](A) → A*. As E(A) ◅ SL(A), one can also
define the special Whitehead group SK[1](A) := SL(A)/E(A). This map splits via the map A* → GL(1, A) → K[1](A) (unit in the upper left corner), and hence is onto, and has the special Whitehead group
as kernel, yielding the split short exact sequence:
${\displaystyle 1\to SK_{1}(A)\to K_{1}(A)\to A^{*}\to 1,}$
which is a quotient of the usual split short exact sequence defining the special linear group, namely
${\displaystyle 1\to \operatorname {SL} (A)\to \operatorname {GL} (A)\to A^{*}\to 1.}$
The determinant is split by including the group of units A* = GL[1](A) into the general linear group GL(A), so K[1](A) splits as the direct sum of the group of units and the special Whitehead group:
K[1](A) ≅ A* ⊕ SK[1] (A).
When A is a Euclidean domain (e.g. a field, or the integers) SK[1](A) vanishes, and the determinant map is an isomorphism from K[1](A) to A^∗.^[55] This is false in general for PIDs, thus providing
one of the rare mathematical features of Euclidean domains that do not generalize to all PIDs. An explicit PID such that SK[1] is nonzero was given by Ischebeck in 1980 and by Grayson in 1981.^[56]
If A is a Dedekind domain whose quotient field is an algebraic number field (a finite extension of the rationals) then Milnor (1971, corollary 16.3) shows that SK[1](A) vanishes.^[57]
The vanishing of SK[1] can be interpreted as saying that K[1] is generated by the image of GL[1] in GL. When this fails, one can ask whether K[1] is generated by the image of GL[2]. For a Dedekind
domain, this is the case: indeed, K[1] is generated by the images of GL[1] and SL[2] in GL.^[56] The subgroup of SK[1] generated by SL[2] may be studied by Mennicke symbols. For Dedekind domains with
all quotients by maximal ideals finite, SK[1] is a torsion group.^[58]
For a non-commutative ring, the determinant cannot in general be defined, but the map GL(A) → K[1](A) is a generalisation of the determinant.
Central simple algebras
In the case of a central simple algebra A over a field F, the reduced norm provides a generalisation of the determinant giving a map K[1](A) → F^∗ and SK[1](A) may be defined as the kernel. Wang's
theorem states that if A has prime degree then SK[1](A) is trivial,^[59] and this may be extended to square-free degree.^[60] Wang also showed that SK[1](A) is trivial for any central simple algebra
over a number field,^[61] but Platonov has given examples of algebras of degree prime squared for which SK[1](A) is non-trivial.^[60]
John Milnor found the right definition of K[2]: it is the center of the Steinberg group St(A) of A.
It can also be defined as the kernel of the map
${\displaystyle \varphi \colon \operatorname {St} (A)\to \mathrm {GL} (A),}$
or as the Schur multiplier of the group of elementary matrices.
For a field, K[2] is determined by Steinberg symbols: this leads to Matsumoto's theorem.
One can compute that K[2] is zero for any finite field.^[62]^[63] The computation of K[2](Q) is complicated: Tate proved^[63]^[64]
${\displaystyle K_{2}(\mathbf {Q} )=(\mathbf {Z} /4)^{*}\times \prod _{p{\text{ odd prime}}}(\mathbf {Z} /p)^{*}\ }$
and remarked that the proof followed Gauss's first proof of the Law of Quadratic Reciprocity.^[65]^[66]
For non-Archimedean local fields, the group K[2](F) is the direct sum of a finite cyclic group of order m, say, and a divisible group K[2](F)^m.^[67]
We have K[2](Z) = Z/2,^[68] and in general K[2] is finite for the ring of integers of a number field.^[69]
We further have K[2](Z/n) = Z/2 if n is divisible by 4, and otherwise zero.^[70]
Matsumoto's theorem
Matsumoto's theorem^[71] states that for a field k, the second K-group is given by^[72]^[73]
${\displaystyle K_{2}(k)=k^{\times }\otimes _{\mathbf {Z} }k^{\times }/\langle a\otimes (1-a)\mid aot =0,1\rangle .}$
Matsumoto's original theorem is even more general: For any root system, it gives a presentation for the unstable K-theory. This presentation is different from the one given here only for symplectic
root systems. For non-symplectic root systems, the unstable second K-group with respect to the root system is exactly the stable K-group for GL(A). Unstable second K-groups (in this context) are
defined by taking the kernel of the universal central extension of the Chevalley group of universal type for a given root system. This construction yields the kernel of the Steinberg extension for
the root systems A[n] (n > 1) and, in the limit, stable second K-groups.
Long exact sequences
If A is a Dedekind domain with field of fractions F then there is a long exact sequence
${\displaystyle K_{2}F\rightarrow \oplus _{\mathbf {p} }K_{1}A/{\mathbf {p} }\rightarrow K_{1}A\rightarrow K_{1}F\rightarrow \oplus _{\mathbf {p} }K_{0}A/{\mathbf {p} }\rightarrow K_{0}A\
rightarrow K_{0}F\rightarrow 0\ }$
where p runs over all prime ideals of A.^[74]
There is also an extension of the exact sequence for relative K[1] and K[0]:^[75]
${\displaystyle K_{2}(A)\rightarrow K_{2}(A/I)\rightarrow K_{1}(A,I)\rightarrow K_{1}(A)\cdots \ .}$
There is a pairing on K[1] with values in K[2]. Given commuting matrices X and Y over A, take elements x and y in the Steinberg group with X,Y as images. The commutator ${\displaystyle xyx^{-1}y^
{-1}}$ is an element of K[2].^[76] The map is not always surjective.^[77]
The above expression for K[2] of a field k led Milnor to the following definition of "higher" K-groups by
${\displaystyle K_{*}^{M}(k):=T^{*}(k^{\times })/(a\otimes (1-a)),}$
thus as graded parts of a quotient of the tensor algebra of the multiplicative group k^× by the two-sided ideal, generated by the
${\displaystyle \left\{a\otimes (1-a):\ aeq 0,1\right\}.}$
For n = 0,1,2 these coincide with those below, but for n ≧ 3 they differ in general.^[78] For example, we have K^M
[n](F[q]) = 0 for n ≧ 2 but K[n]F[q] is nonzero for odd n (see below).
The tensor product on the tensor algebra induces a product ${\displaystyle K_{m}\times K_{n}\rightarrow K_{m+n}}$ making ${\displaystyle K_{*}^{M}(F)}$ a graded ring which is graded-commutative.^[79]
The images of elements ${\displaystyle a_{1}\otimes \cdots \otimes a_{n}}$ in ${\displaystyle K_{n}^{M}(k)}$ are termed symbols, denoted ${\displaystyle \{a_{1},\ldots ,a_{n}\}}$. For integer m
invertible in k there is a map
${\displaystyle \partial :k^{*}\rightarrow H^{1}(k,\mu _{m})}$
where ${\displaystyle \mu _{m}}$ denotes the group of m-th roots of unity in some separable extension of k. This extends to
${\displaystyle \partial ^{n}:k^{*}\times \cdots \times k^{*}\rightarrow H^{n}\left({k,\mu _{m}^{\otimes n}}\right)\ }$
satisfying the defining relations of the Milnor K-group. Hence ${\displaystyle \partial ^{n}}$ may be regarded as a map on ${\displaystyle K_{n}^{M}(k)}$, called the Galois symbol map.^[80]
The relation between étale (or Galois) cohomology of the field and Milnor K-theory modulo 2 is the Milnor conjecture, proven by Vladimir Voevodsky.^[81] The analogous statement for odd primes is the
Bloch-Kato conjecture, proved by Voevodsky, Rost, and others.
The accepted definitions of higher K-groups were given by Quillen (1973), after a few years during which several incompatible definitions were suggested. The object of the program was to find
definitions of K(R) and K(R,I) in terms of classifying spaces so that R ⇒ K(R) and (R,I) ⇒ K(R,I) are functors into a homotopy category of spaces and the long exact sequence for relative K-groups
arises as the long exact homotopy sequence of a fibration K(R,I) → K(R) → K(R/I).^[82]
Quillen gave two constructions, the "plus-construction" and the "Q-construction", the latter subsequently modified in different ways.^[83] The two constructions yield the same K-groups.^[84]
The +-construction
One possible definition of higher algebraic K-theory of rings was given by Quillen
${\displaystyle K_{n}(R)=\pi _{n}(B\operatorname {GL} (R)^{+}),}$
Here π[n] is a homotopy group, GL(R) is the direct limit of the general linear groups over R for the size of the matrix tending to infinity, B is the classifying space construction of homotopy theory
, and the ^+ is Quillen's plus construction. He originally found this idea while studying the group cohomology of ${\displaystyle GL_{n}(\mathbb {F} _{q})}$^[85] and noted some of his calculations
were related to ${\displaystyle K_{1}(\mathbb {F} _{q})}$.
This definition only holds for n > 0 so one often defines the higher algebraic K-theory via
${\displaystyle K_{n}(R)=\pi _{n}(B\operatorname {GL} (R)^{+}\times K_{0}(R))}$
Since BGL(R)^+ is path connected and K[0](R) discrete, this definition doesn't differ in higher degrees and also holds for n = 0.
The Q-construction
The Q-construction gives the same results as the +-construction, but it applies in more general situations. Moreover, the definition is more direct in the sense that the K-groups, defined via the Q
-construction are functorial by definition. This fact is not automatic in the plus-construction.
Suppose ${\displaystyle P}$ is an exact category; associated to ${\displaystyle P}$ a new category ${\displaystyle QP}$ is defined, objects of which are those of ${\displaystyle P}$ and morphisms
from M′ to M″ are isomorphism classes of diagrams
${\displaystyle M'\longleftarrow N\longrightarrow M'',}$
where the first arrow is an admissible epimorphism and the second arrow is an admissible monomorphism. Note the morphisms in ${\displaystyle QP}$ are analogous to the definitions of morphisms in the
category of motives, where morphisms are given as correspondences ${\displaystyle Z\subset X\times Y}$ such that
${\displaystyle X\leftarrow Z\rightarrow Y}$
is a diagram where the arrow on the left is a covering map (hence surjective) and the arrow on the right is injective. This category can then be turned into a topological space using the classifying
space construction ${\displaystyle BQP}$ , which is defined to be the geometric realisation of the nerve of ${\displaystyle QP}$. Then, the i-th K-group of the exact category ${\displaystyle P}$ is
then defined as
${\displaystyle K_{i}(P)=\pi _{i+1}(\mathrm {BQ} P,0)}$
with a fixed zero-object ${\displaystyle 0}$. Note the classifying space of a groupoid ${\displaystyle B{\mathcal {G}}}$ moves the homotopy groups up one degree, hence the shift in degrees for ${\
displaystyle K_{i}}$ being ${\displaystyle \pi _{i+1}}$ of a space.
This definition coincides with the above definition of K[0](P). If P is the category of finitely generated projective R-modules, this definition agrees with the above BGL^+ definition of K[n](R) for
all n. More generally, for a scheme X, the higher K-groups of X are defined to be the K-groups of (the exact category of) locally free coherent sheaves on X.
The following variant of this is also used: instead of finitely generated projective (= locally free) modules, take finitely generated modules. The resulting K-groups are usually written G[n](R).
When R is a noetherian regular ring, then G- and K-theory coincide. Indeed, the global dimension of regular rings is finite, i.e. any finitely generated module has a finite projective resolution P[*]
→ M, and a simple argument shows that the canonical map K[0](R) → G[0](R) is an isomorphism, with [M]=Σ ± [P[n]]. This isomorphism extends to the higher K-groups, too.
The S-construction
A third construction of K-theory groups is the S-construction, due to Waldhausen.^[86] It applies to categories with cofibrations (also called Waldhausen categories). This is a more general concept
than exact categories.
While the Quillen algebraic K-theory has provided deep insight into various aspects of algebraic geometry and topology, the K-groups have proved particularly difficult to compute except in a few
isolated but interesting cases. (See also: K-groups of a field.)
Algebraic K-groups of finite fields
The first and one of the most important calculations of the higher algebraic K-groups of a ring were made by Quillen himself for the case of finite fields:
If F[q] is the finite field with q elements, then:
• K[0](F[q]) = Z,
• K[2i](F[q]) = 0 for i ≥1,
• K[2i–1](F[q]) = Z/(q^ i − 1)Z for i ≥ 1.
Rick Jardine (1993) reproved Quillen's computation using different methods.
Algebraic K-groups of rings of integers
Quillen proved that if A is the ring of algebraic integers in an algebraic number field F (a finite extension of the rationals), then the algebraic K-groups of A are finitely generated. Armand Borel
used this to calculate K[i](A) and K[i](F) modulo torsion. For example, for the integers Z, Borel proved that (modulo torsion)
• K[i] (Z)/tors.=0 for positive i unless i=4k+1 with k positive
• K[4k+1] (Z)/tors.= Z for positive k.
The torsion subgroups of K[2i+1](Z), and the orders of the finite groups K[4k+2](Z) have recently been determined, but whether the latter groups are cyclic, and whether the groups K[4k](Z) vanish
depends upon Vandiver's conjecture about the class groups of cyclotomic integers. See Quillen–Lichtenbaum conjecture for more details. | {"url":"https://www.wikiwand.com/en/articles/Algebraic_K-theory","timestamp":"2024-11-10T15:09:41Z","content_type":"text/html","content_length":"819057","record_id":"<urn:uuid:7a4982c1-912e-49a1-85c8-f67a306da2c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00041.warc.gz"} |
Turn average working days to projected date using WORKDAY()
The basic ask is to calculate a projected due date based on historical data and duration that a certain task takes. So if x, y, and z parameters are checked, it will find the average amount of time
(from a different sheet) that those parameters take and then calculate a suggested due date.
I want to use WORKDAY(), but am receiving an error message that reads #INVALID DATA TYPE.
So in the column titled [Average Days] I'll have the cell that calculates the average time and outputs a value of working days, let's say 20. In the cell next to it, I type =WORKDAY(TODAY(),[Column
A]1) and receive an error message. But when I try to just hardcode it - =WORKDAY(TODAY(),20), I get a projected date. So there must be something wrong with referencing a calculated/formula-ed cell?
Is there a workaround or has anyone seen anything similar and found a solution?
• Hello,
Happy to help! If you're receiving an #INVALID DATA TYPE with your =WORKDAY(TODAY(),[Column A]1) it's likely because the =AVG being referenced is producing a partial number for example 2.5 days.
To correct the WORKDAY formula I recommending do two things.
1. Adding an IFERROR will ensure that if an error is produced, or if the reference is blank, the formula will still work as desired. It could look like this:
=IFERROR(WORKDAY(TODAY(),[Column A]1), " ")
2. You may want to ROUND the average to ensure the average always produces a whole number. For example:
Smartsheet Support
• Ah ha! There was a partial date happening, so I utilized ROUND() and that solved the problem. Thank you!
Help Article Resources | {"url":"https://community.smartsheet.com/discussion/40916/turn-average-working-days-to-projected-date-using-workday","timestamp":"2024-11-07T06:41:14Z","content_type":"text/html","content_length":"397405","record_id":"<urn:uuid:c5e34dfe-d4cb-44d8-9c3a-a21c49e00e27>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00225.warc.gz"} |
TPS7A80: VDO question for TPS7A8001
Answers 1 answer
Part Number: TPS7A80 Other Parts Discussed in Thread: TPS74401
Hi team,
For the LDO dropout, we can find the Max dropout spec from datasheet.
The max dropout spec is different according to actual load current. I have a question about the LDO Vin dropout calculation. For our actual design which is shown as follow:
Vin=5.5V±3%, Vout=4.984±3.92% (the accuracy of output is calculated according to IC, external resistor accuracy)Imax of load =0.75A;
So the min Vin dropout of our design should meet: Vin dropout>350mV according to datasheet. But I have a question about the Vin dropout calculation. Please see the following calculation method:
Cal Method 1: Vin dropout=Vin(Typ)-Vout(Typ)=5.5-4.984=516mV>350mV;
Cal Method 2: Vin dropout=Vin(Typ)-Vout(Max)=5.5-5.18=320mV<350mV;
Cal Method 3: Vin dropout=Vin(Min)-Vout(Typ)=5.5*(1-3%)-4.984=351mV>350mV;
Cal Method 4: Vin dropout=Vin(Min)-Vout(Max)=5.5*(1-3%)-4.984*(1+3.92%)=155mV<350mV;
Please help to confirm which cal method is true because it is really important for our circuit design. Thanks a lot!
Hi Dane,
The dropout voltage of your application would be around the 350mV range because of your load current. The closest calculation you have is Method 2.
Hi Juliette,
1. If the calculation method is the method 2, the dropout is 320mV@0.75A load will be lower than 350mV. So what is the influence?
2. In my opinion, the Vin spec is 5.5V±3%, it also can be 5.5V*(1-3%)=5.335V at the worst case. So I want to know does LDO test case of the factory do not care this Vin accuracy?
3. The LDO dropout test definition of TI is as follow(it is defined in TPS74401 datasheet):
The dropout definition shows that when Vin-Vout=195mV, the Vout will be 2% below nominal. But if the designed accuracy of Vout 1.5%, even though we can meet the maximum dropout 195mV, the LDO also
can not guarantee the design accuracy. Is it right?
Hi Dane,
The LDO dropout voltage follows a linear scale, not necessarily your calculation.
An explanation can be seen on Understanding Low Drop Out (LDO) Regulators
What do you mean about the test case of the factory? | {"url":"https://e2e.ti.com/support/power-management-group/power-management/f/power-management-forum/980784/tps7a80-vdo-question-for-tps7a8001?tisearch=e2e-sitesearch&keymatch=TPS7A80","timestamp":"2024-11-07T14:08:13Z","content_type":"text/html","content_length":"145290","record_id":"<urn:uuid:282ea2b5-8d52-42b6-bafc-33f52f76703f>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00835.warc.gz"} |
The base class Shape provides a default implementation for calculateArea that returns 0. This might lead to incorrect results if a derived class does not override this method. It would be better to
make calculateArea a pure virtual function to enforce that all derived classes provide their own implementation.
The value of π (pi) is approximated as 3.14, which is not very precise. Consider using a more accurate value of π, such as the constant M_PI from the <cmath> library.
#include <cmath> // ... return M_PI * radius * radius;
double area() const override { return 3.14 * radius * radius;
The calculation of the area of a circle uses a hard-coded value for π (pi), which is not precise. This can lead to inaccuracies in calculations, especially for larger circles or when high precision
is required. It’s recommended to use the M_PI constant from the <cmath> header, which provides a more accurate value for π. Replace 3.14 with M_PI to improve the precision of the area calculation.
Shape shapes[3] = {circle, square}; std::cout << "Total area: " << totalArea(shapes) << std::endl;
The code attempts to create an array of Shape objects, but Shape is an abstract class and cannot be instantiated directly. Moreover, the array is initialized with objects of derived classes (Circle
and Square), which is not allowed in C++ due to object slicing. This will lead to a compilation error. Instead, use a vector of pointers to Shape as initially commented out in line 45. Uncomment line
45 and replace line 46 with the commented-out vector initialization to fix this issue. Additionally, update the totalArea function call in line 48 to pass shapes as a reference to a vector of Shape*
instead of an array.
//raty malejace let raty = []; for(let i=0;i<ilosc_rat;i++){ raty[i]=pozost*oprocen+rata_kap; if(i==0) raty[i]+=prowizja; pozost-=rata_kap; }
The loop for calculating decreasing installments has a potential issue with floating-point arithmetic precision. When dealing with financial calculations, using floating-point numbers can lead to
inaccuracies due to the way these numbers are represented in memory. This can result in slight discrepancies in the calculated installments, which, over time or across many transactions, could lead
to significant financial discrepancies.
Recommendation: Consider using a library designed for precise decimal arithmetic, or represent monetary values in the smallest units (e.g., cents) as integers to avoid precision issues. For example,
instead of using 0.1 for the interest rate, use an integer to represent the rate in basis points (e.g., 1000 for 1%) and adjust calculations accordingly.
let oprocen = 1000; // 1% interest rate represented in basis points // Adjust calculation to account for basis points raty[i] = (pozost * oprocen / 10000) + rata_kap;
The way the commission (prowizja) is added to the first installment could be more transparent and maintainable. Currently, the commission is added directly within the loop, which makes the
calculation harder to follow and modify. This approach also tightly couples the commission calculation with the installment calculation logic.
Recommendation: Separate the commission addition from the loop. Calculate the first installment outside the loop or add the commission to the first installment after the loop. This separation
improves code readability and maintainability.
// Calculate installments for(let i = 0; i < ilosc_rat; i++) { raty[i] = pozost * oprocen + rata_kap; pozost -= rata_kap; } // Add commission to the first installment raty[0] += prowizja;
The calculateArea function in the Shape class should be a pure virtual function to make Shape an abstract class. This ensures that Shape cannot be instantiated and that derived classes must implement
the calculateArea function.
The value of π (pi) is approximated as 3.14, which is not precise. Consider using the constant M_PI from the <cmath> library for better precision.
#include <cmath> // ... double calculateArea() override { return M_PI * radius * radius; }
The value of π (pi) is approximated as 3.14, which is not precise. Consider using the constant M_PI from the <cmath> library for better accuracy.
We use essential cookies to make our site work. With your consent, we may also use non-essential cookies to improve user experience and analyze website traffic. By clicking “Accept,” you agree to our
website's cookie use as described in our Cookie Policy. | {"url":"https://codereviewbot.ai/tags/en-US/precision","timestamp":"2024-11-03T15:11:52Z","content_type":"text/html","content_length":"23151","record_id":"<urn:uuid:91e1ee24-6fea-465e-8ef2-f1b8f346c267>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00594.warc.gz"} |
Electronic Stopping Power
In sr-niel website, the resulting effect from collision energy losses of particles traversing an absorber – e.g., a device – is discussed in terms of electronic stopping power, if not otherwise
explicitly indicated. For instance, the user can find the treatment of such a physical mechanism in Chapter 2 of Leroy and Rancoita (2016) (see also references therein).
However, the energy deposited in an absorber may be lower than the energy actually lost by a particle by collision losses on atomic electrons. In fact, typical radiation detectors or devices are not
thick enough to fully absorb secondary δ-rays, emitted during the process. Thus, in order to account for such an effect, one needs to define an effective detectable maximum transferred energy W[0].
Beyond the ionization loss minimum or for W[0] ≪ 2mc^2 - where m is the electron mass at rest and c the speed of light -, the energy-loss formula reduces to the so-called restricted energy-loss
equation (e.g., see Sect. 2.1.1.4 in Leroy and Rancoita (2016)), which – at high energies – reaches a constant value called Fermi Plateau (for massive particles in silicon absorbers, e.g., see here
), i.e.,
where h is Planck constant and ν[p] is the plasma frequency of the medium, A, Z and ρ are atomic weigth, atomic number and density of the medium, respectively. Finally, z is the charge number of the
incoming particle (i.e., the atomic number of the fully ionized incoming particle).
One has to remark that, , in section 2.6 of ICRU-90 (2014) the linear energy transfer is currently introduced as:
While, in section 4.5 of ICRU-85 (2011) the linear energy transfer was more extensively discussed as in the following.
In addition, it is worth to note that the linear energy transfer used in dealing with SEU and SEE studies is actually the mass electronic stopping power (for instance, see Sect. 2.1.1 of Leroy and
Rancoita (2016) and in Sect. 2 (page 6) of ESCC-25100 (2014)), i.e., the electronic stopping power divided by the absorber density.
[Leroy and Rancoita (2016)] Principles of Radiation Interaction in Matter and Detection - 4th Edition -, World Scientific. Singapore, ISBN-978-981-4603-18-8 (printed); ISBN.978-981-4603-19-5 (ebook);
https://www.worldscientific.com/worldscibooks/10.1142/9167#t=aboutBook; it is also partially accessible via google books.
[ICRU-85 (2011)] ICRUM - International Commission on Radiation Units and Measurements (2011). FUNDAMENTAL QUANTITIES AND UNITS FOR IONIZING RADIATION (Revised), ICRU Report 85, Journal of the ICRU
Volume 11 No 1.
[ICRU-90 (2014)] ICRUM - International Commission on Radiation Units and Measurements (2011). Key Data for Ionizing-Radiation Dosimetry: Measurement Standards and Applications, ICRU Report 90,
Journal of the ICRU Volume 14 No 1.
[ESCC-25100 (2014)] SINGLE EVENT EFFECTS TEST METHOD GUIDELINES (2014), ESCC Basic Specification No. 25100 issue 2 | {"url":"https://www.sr-niel.org/index.php/sr-niel-long-write-up/electronic-stopping-power-and-let","timestamp":"2024-11-14T15:23:24Z","content_type":"text/html","content_length":"27265","record_id":"<urn:uuid:c4794921-e687-45f2-b96f-5ede8c547140>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00812.warc.gz"} |
Important Short Questions Answers: Boolean Algebra and Logic Gates
1. Define binary logic?
Binary logic consists of binary variables and logical operations. The variables are designated by the alphabets such as A, B, C, x, y, z, etc., with each variable having only two distinct values: 1
and 0. There are three basic logic operations: AND, OR, and NOT.
2. What are the basic digital logic gates?
The three basic logic gates are, AND gate OR gate NOT gate.
3. What is a Logic gate?
Logic gates are the basic elements that make up a digital system. The electronic gate is a circuit that is able to operate on a number of binary inputs in order to perform a particular logical
4. Which gates are called as the universal gates? What are its advantages?
The NAND and NOR gates are called as the universal gates. These gates are used to perform any type of logic application.
5. Mention the important characteristics of digital IC’s?
Fan out-Power dissipation Propagation Delay Noise Margin Fan In-Operating temperature Power supply requirements.
6. Define Fan-out?
Fan out specifies the number of standard loads that the output of the gate can drive without impairment of its normal operation.
7. Define power dissipation?
Power dissipation is measure of power consumed by the gate when fully driven by all its inputs.
8. What is propagation delay?
Propagation delay is the average transition delay time for the signal to propagate from input to output when the signals change in value. It is expressed in ns.
9. Define noise margin?
It is the maximum noise voltage added to an input signal of a digital circuit that does not cause an undesirable change in the circuit output. It is expressed in volts.
10. Define fan in?
Fan in is the number of inputs connected to the gate without any degradation in the voltage level.
11. State duality principle
“Every algebraic expression deducible from the postulates of Boolean algebrad ifremainsthe v operators and identity elements are interchanged.
12. Write the applications of gray code.
Used in telegraphy, for robust communication and in error detection and correction..
13. What are the limitations of Karnaugh map?
The size can be limited to 6 variables and also can be used for simplifying Boolean expressions.
14. What are error detecting codes?
To maintain the data integrity between the transmitter and receiver, extra bit or more than one bit are added in the data. The codes which allow only error detection are called error detecting codes.
15. What are different ways to represent a negative number?
ü Ordinary arithmetic-minus sign
ü Signed magnitude-MSB bit as 1.
ü 1’s complement
1. (i) Minimize the following[=̅]expression[̅+̅] using[̅+] k-map[̅] [+ ̅̅+ ̅] (ii)State and prove the De-morgan’s theorem.
2. Simplify the following Boolean function by using Tabulation method F (w, x, y, z) =_ (0, 1, 2, 8, 10, 11, 14,15)
3. Simplify the following Boolean functions by using K-Map in SOP & POS. F (w, x, y, z) =_ (1, 3, 4, 6, 9, 11, 12, 14)
4. Simplify the following Boolean functions by using K-Map in SOP & POS. F (w, x, y, z) =_ (1, 3, 7, 11, 15) + d (0 , 2, 5)
5. Reduce the given expression [(AB)‘ + A‘ +AB‘]
6. Reduce the following function using k-map technique f(A,B,C,D)= _ M(0, 3, 4, 7, 8, 10, 12, 14)+d (2, 6) | {"url":"https://www.brainkart.com/article/Important-Short-Questions-Answers--Boolean-Algebra-and-Logic-Gates_6752/","timestamp":"2024-11-10T11:33:17Z","content_type":"text/html","content_length":"58860","record_id":"<urn:uuid:6a6c4938-48f4-4467-bbcb-9982b47de019>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00160.warc.gz"} |
This exotic strategy is strongly related to trivial patterns like
Naked Pairs
Naked Triples
since we are identifying
Locked Sets
. The building blocks are
Almost Locked Sets
but the most interesting aspect is the alignment of these parts.
To re-cap the terns:
• A Locked Set is a group of N cells that can see each other which have N candidates - an example being a Naked Pair
• An Almost Locked Set is a group of N cells which are mutually visible and share N+1 candidates in some combination.
There is a sequence we can follow:
• An Almost Almost Locked Set is a group of N cells which are mutually visible and share N+2 candidates in some combination.
• An Almost Almost Almost Locked Set is a group of N cells which are mutually visible and share N+3 candidates in some combination.
and so on. Horrible names and often abbreviated AALS, AAALS etc.
Sue-De-Coq Example 1 : Load Example or : From the Start
Sue-De-Coq, named after the forum handle of the clever chap who identified it, starts with an AALS which must be aligned in a row or column AND be wholly contained within a box. This restricts the
size of the AALS group to two or three cells.
In the first example the two yellow cells (N=2) contain {2,3,5,8} which is N+2, or four candidates. The group is contained within box 4. We don't know which of the values {2,3,5,8} will be the
solution to
but clearly two of those four will be. Now, if we look along the unit of alignment (column 2) and within the box we can find single cells that contain two of those candidates.
contains {2,8} (the green cell) and
contains {3,5}. The AALS can see these cells - which is important!
We now know that the solution to the AALS
cannot be 2/8 or 3/5 since it would leave nothing in the cells
The logic is as follows. If neither {2/8} can fill the AALS nor {3,5} then some other combination must fill it that leaves a digit free for the single bi-value cells. So effectively {2,3,5,8} must
fill all coloured cells. Indeed the group in total contains four cells and there are four candidates, so we have identified the total group as a Locked Set. This means we can remove all candidates X
that see all X in the total group. This excludes the 8 in C2 and J2 (aligned in the column) and 2 G2 (also aligned in the columns) and the 3 in E3 (shares the box).
Sue-De-Coq Example 2 : Load Example or : From the Start
The example above was a relatively easy pattern - we found a four cell
Locked Set
. Sue-De-Coq can be used in more complicated patterns like the second example. In the first example we used bi-value single cells as the 'hooks' to make the 4-cell locked sets. Bi-value cells are by
Almost Locked Sets
since they contain two candidates in one cell. Sue-De-Coq can use larger ALSs to make the pattern.
In the second example we combine an AAALS (N+3) in {
} containing {1,3,6,7,8} with two normal ALSs {
} and {
} containing {1,3,7} and {6/8} respectively. The trick is that the total number of cells, 5, equals the total number of candidates in all the cells {1,3,6,7,8}. We get a 5-cell Locked Set.
To eliminate we look at what candidates OUTSIDE the pattern can see ALL the candidates INSIDE the pattern. These are the 6s and 8s in row E, and the 1s, 3s and 7s in box 6.
Generally then...
The general terms the rule for the pattern is as follows:
1. Find a 2-cell or 3-cell group inside a box that is also aligned on a row or column - call it group C
2. C contains a set of candidates, V, which must be two or more than the number of cells in C (N+2, N+3 ALS etc).
3. We need to find at least one bi-value cell (or larger ALS) in the row or column which only contains candidates from set V, called D
4. We need to find at least one bi-value cell (or larger ALS) in the box which only contains candidates from set V, called E
5. The candidates in D and E must be different.
6. Remove any candidates common to C+D not in the cells covered by C or D in the row or column
7. Remove any candidates common to C+E not in the cells covered by C or E in the box
... by: Strmckr
Friday 20-Oct-2023
John Handojo
Wxyz wing is an als xz move with up to 2 rcc,
Which precisely mimics suede coqs that only occupies 4 cells,
The diffrence is that a sue de coq can be any n size als with n +x degrees of freedom(where x=9-n at max),
we have examples in the forums for als dof = 7 sue de coq
That has no equivalent als xz move debunking the od adage that all sue de coqs where only als xz with 2rcc
... by: Megastar
Wednesday 17-May-2023
Our page in Frence confirm it :
... by: Cerberus
Tuesday 12-Apr-2022
Re: Sue de Cog
... you say, generally, the candidates in D & E must be different.
... is that, collectively different, or singular different, would 2, 3 in one, be okay, if 3, 6. was in the other, or must all instances of a 2 or a 3 be absent?
... by: Jonathan Handojo
Friday 11-Jun-2021
In the WXYZ-Wing, you are looking for one non-restricted common. But in this strategy, all candidates are restricted commons. Therefore there's a lot more eliminations in it.
So my simplest way of understanding this is: Where you have a Locked Set and all of its candidates in it are restricted commons (i.e. where all instances of each and every one of those candidates
that can see each other), you can eliminate any other candidates outside that locked set that can see every instance of those candidates within that locked set.
This is why there's a lot more elimination in the Sue-De-Coq rather than the WXYZ Wing.
... by: Jan Laloux
Thursday 28-Jun-2018
When working on my own solver and tackling this strategy I was trying to figure out how it worked. I found it very complicated to be honest. Thinking it trough I realized however that the complexity
is not needed and that SdC is nothing more than a (very) special case of a simple locked set.
It is not the ordinary locked set, but what I call a "dispersed locked set". In a normal LS all cell involved are bound to one unit. In a DLS there is no such restriction. I have not seen this
described anywhere else so far.
You have a DLS of size n when you have n cells anywhere in the grid with in total n digits, with the requirements that for each digit all candidates for that digit are bound to one unit, and that all
cells are "connected". The latter condition is a bit more complicated to explain. I'll demonstrate it with a colouring procedure: colour the first cell, then for each digit that is a candidate in
that cell colour all other cells in the set that have that digit as candidate, repeat the procedure for all the newly coloured cells. At the end all cells in the unit must be coloured.
In the first example above we have a DLS of size 4 with cells B2, D2, E2 and F3 and 4 digits 2, 3, 5 and 8. Digits 2 and 8 are bound to column 2, digits 3 and 5 to box 4. The eliminations follow the
same logic as with an ordinary LS, resulting in the eliminations shown in the SdC example.
There are in the same way also hidden DLS, dispersed almost locked set (DALS) etc. My solver uses DALS wherever an ALS can be used such in an AIC, death blossom, unique rectangle type 3 etc.
... by: Teige
Wednesday 27-Jan-2016
Impressive brain power at work! Great answer!
... by: Sherman
Saturday 10-May-2014
There is a restriction that is implied by the general terms: the size of the set of candidates in C+D+E must equal the number of cells in C+D+E. There are a couple of possible extensions if this
restriction is made explicit.
The first extension is that C can contain candidates not in D and E. This is a possibility if C is N+3 or larger. If C has one or more candidates not in D and E, these candidates can be removed in
the cells not covered by C+D+E in C's box and row or column.
The second extension is that D and E can contain candidates not in C, as long as D and E each contain at least one candidate from C. The candidates in D and E must still be different. The rule saying
D and E must be a subset of C is not needed. The two removal rules are modified to say that the candidates from D can be removed from the cells not covered by C+D in C's row or column and the
candidates from E can be removed from the cells not covered by C+E in C's box.
This works because C+D+E is a locked set and D and E are mutually exclusive. Between C+D+E, all possible locations of their candidates are fixed.
This is not my work. I found it by perusing other Sudoku websites.
... by: SudoNova
Tuesday 28-Aug-2012
In WXYZ-wing comments, Jeff Sandborn asks if there is a VWXYZ-wing.
I think the highlighted cells with digits13678 in example 2 are exactly this.
Following my strategy described in WXYZ-wing, the green-highlighted cell
contains the digits WZ (or perhaps VZ here) and the red highlighted digits
68 in row E are the rogue Z's
... by: Prasolov V.
Wednesday 27-Apr-2011
This strategy "Sue_de_cog" of your solver is not full.
... by: Herbert Jensen
Tuesday 7-Apr-2009
I use your Sudoku Solver daily as I don't like the bother of filling in all the cues manually. I've probably read 10 or more books on Sudoku strategies, but your site remains the best documented
reference work. | {"url":"https://www.sudokuwiki.org/Sue_De_Coq","timestamp":"2024-11-04T04:22:16Z","content_type":"text/html","content_length":"30146","record_id":"<urn:uuid:e0618462-25a7-48c5-b6ba-7d66e7b81017>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00246.warc.gz"} |
CSC412/2056 Assignment #1
Problem 1 (Variance and covariance, 6 points)
Let X and Y be two continuous independent random variables.
(a) Starting from the definition of independence, show that the independence of X and Y implies that their covariance
is zero.
(b) For a scalar constant a, show the following two properties, starting from the definition of expectation:
E(X + aY) = E(X) + aE(Y)
var(X + aY) = var(X) + a
Problem 2 (Densities, 5 points)
Answer the following questions:
(a) Can a probability density function (pdf) ever take values greater than 1?
(b) Let X be a univariate normally distributed random variable with mean 0 and variance 1/100. What is the pdf of
(c) What is the value of this pdf at 0?
(d) What is the probability that X = 0?
Problem 3 (Calculus, 4 points)
Let x, y ∈ Rm and A ∈ Rm×m. Please answer the following questions, writing your answers in vector notation.
(a) What is the gradient with respect to x of x
(b) What is the gradient with respect to x of x
(c) What is the gradient with respect to x of x
(d) What is the gradient with respect to x of x
Problem 4 (Linear Regression, 10pts)
Suppose that X ∈ Rn×m with n ≥ m and Y ∈ Rn
, and that Y ∼ N (Xβ, σ
I). In this question you will derive the result
that the maximum likelihood estimate βˆ of β is given by
βˆ = (X
(a) What are the expectation and covariance matrix of βˆ, for a given true value of β?
(b) Show that maximizing the likelihood is equivalent to minimizing the squared error
(yi − xiβ)
. [Hint: Use ∑
i = a
(c) Write the squared error in vector notation, (see above hint), expand the expression, and collect like terms. [Hint:
Use β
Ty = y
Txβ (why?) and x
Tx is symmetric ]
(d) Take the derivative of this expanded expression with respect to β to show the maximum likelihood estimate βˆ as
above. [Hint: Use results 3.c and 3.d for derivatives in vector notation.]
Problem 5 (Ridge Regression, 10pts)
Suppose we place a normal prior on β. That is, we assume that β ∼ N (0, τ
(a) Show that the MAP estimate of β given Y in this context is
βˆ MAP = (X
TX + λI)
where λ = σ
Estimating β in this way is called ridge regression because the matrix λI looks like a “ridge”. Ridge regression is a
common form of regularization that is used to avoid the overfitting that happens when the sample size is close to
the output dimension in linear regression.
(b) Show that ridge regression is equivalent to adding m additional rows to X where the j-th additional row has its
j-th entry equal to √
λ and all other entries equal to zero, adding m corresponding additional entries to Y that are
all 0, and and then computing the maximum likelihood estimate of β using the modified X and Y.
Problem 6 (Gaussians in high dimensions, 10pts)
In this question we will investigate how our intuition for samples from a Gaussian may break down in higher dimensions. Consider samples from a D-dimensional unit Gaussian x ∼ N (0D, ID) where 0D
indicates a column vector of D
zeros and ID is a D × D identity matrix.
1. Starting with the definition of Euclidean norm, quickly show that the distance of x from the origin is √
2. In low-dimensions our intuition tells us that samples from the unit Gaussian will be near the origin. Draw 10000
samples from a D = 1 Gaussian and plot a normalized histogram for the distance of those samples from the
origin. Does this confirm your intuition that the samples will be near the origin?
3. Draw 10000 samples from D = {1, 2, 3, 10, 100} Gaussians and, on a single plot, show the normalized histograms
for the distance of those samples from the origin. As the dimensionality of the Gaussian increases, what can you
say about the expected distance of the samples from the Gaussian’s mean (in this case, origin).
4. From Wikipedia, if xi are k independent, normally distributed random variables with means µi and standard
deviations σi
then the statistic Y =
is distributed according to the χ-distribution. On the previous
normalized histogram, plot the probability density function (pdf) of the χ-distribution for k = {1, 2, 3, 10, 100}.
5. Taking two samples from the D-dimensional unit Gaussian, xa, xb ∼ N (0D, ID) how is xa − xb distributed? Using
the above result about χ-distribution, how is ||xa − xb
||2 distributed? (Hint: start with a X -distributed random
variable and use the change of variables formula.) Plot the pdfs of this distribution for k = {1, 2, 3, 10, 100}.
How does the distance between samples from a Gaussian behave as dimensionality increases? Confirm this by
drawing two sets of 1000 samples from the D-dimensional unit Gaussian. On the plot of the χ-distribution pdfs,
plot the normalized histogram of the distance between samples from the first and second set.
6. In lecture we saw examples of interpolating between latent points to generate convincing data. Given two samples from a gaussian xa, xb ∼ N (0D, ID) the linear interpolation between them xα is
defined as a function of
α ∈ [0, 1]
lin interp(α, xa, xb
) = αxa + (1 − α)xb
For two sets of 1000 samples from the unit gaussian in D-dimensions, plot the average log-likelihood along the
linear interpolations between the pairs of samples as a function of α. (i.e. for each pair of samples compute
the log-likelihood along a linear space of interpolated points between them, N (xα|0, I) for α ∈ [0, 1]. Plot the
average log-likelihood over all the interpolations.) Do this for D = {1, 2, 3, 10, 100}, one plot per dimensionality.
Comment on the log-likelihood under the unit Gaussian of points along the linear interpolation. Is a higher
log-likelihood for the interpolated points necessarily better? Given this, is it a good idea to linearly interpolate
between samples from a high dimensional Gaussian?
7. Instead we can interpolate in polar coordinates: For α ∈ [0, 1] the polar interpolation is
polar interp(α, xa, xb
) = √
αxa +
(1 − α)xb
This interpolates between two points while maintaining Euclidean norm. On the same plot from the previous
question, plot the probabilitiy density of the polar interpolation between pairs of samples from two sets of 1000
samples from D-dimensional unit Gaussians for D = {1, 2, 3, 10, 100}. Comment on the log-likelihood under the
unit Gaussian of points along the polar interpolation. Give an intuative explanation for why polar interpolation
is more suitable than linear interpolation for high dimensional Gaussians. For 6. and 7. you should have one
plot for each D with two curves on each.
8. (Bonus 5pts) In the previous two questions we compute the average loglikelihood of the linear and polar interpolations under the unit gaussian. Instead, consider the norm along the interpolation,
α xα. As we saw
previously, this is distributed according to the X -distribution. Compute and plot the average log-likelihood of
the norm along the two interpolations under the the X -distribution for D = {1, 2, 3, 10, 100}, i.e. XD(
α xα).
There should be one plot for each D, each with two curves corresponding to log-likelihood of linear and polar interpolations. How does the log-likelihood along the linear interpolation compare to the
log-likelihood of the true
samples (endpoints)? Using your answer for questions 3 and 4, provide geometric intuition for the log-likelihood
along the linear and polar interpolations. Use this to further justify your explanation for the suitability of polar
v.s. linear interpolation. | {"url":"https://codingprolab.com/answer/csc412-2056-assignment-1/","timestamp":"2024-11-14T01:43:27Z","content_type":"text/html","content_length":"117581","record_id":"<urn:uuid:a1a3cbe3-cd7b-4bc6-82d2-bca35532d6a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00020.warc.gz"} |
Simple Sudoku Printable 4×4 | Sudoku Printables
Simple Sudoku Printable 4×4
Simple Sudoku Printable 4×4 – If you’ve had any issues solving sudoku, you’re aware that there are numerous kinds of puzzles that are available which is why it’s difficult for you to pick which ones
to work on. But there are also many options to solve them. In fact, you’ll see that solving a printable version can be an ideal way to begin. Sudoku rules are similar to the rules for solving other
puzzles, however, the format of sudoku varies slightly.
What Does the Word ‘Sudoku’ Mean?
The word ‘Sudoku’ is derived from the Japanese words suji and dokushin which translate to “number” and “unmarried individual and ‘unmarried person’, respectively. The aim of the puzzle is to fill in
all of the boxes with numbers, so that each number between one and nine appears just once on each horizontal line. The term Sudoku is an emblem associated with the Japanese puzzle company Nikoli,
which originated in Kyoto.
The name Sudoku comes by the Japanese word”shuji wa dokushin ni Kagiru, which means ‘numbers must remain single’. The game consists of nine 3×3 squares that have nine smaller squares in. It was
initially referred to as Number Place, Sudoku was a mathematical puzzle that stimulated development. While the origins of this game aren’t known, Sudoku is known to have deep roots in ancient number
Why is Sudoku So Addicting?
If you’ve ever played Sudoku you’ll realize how addictive the game can be. The Sudoku addicted person will be unable to stop thinking about the next challenge they’ll solve. They’re always thinking
about their next adventure, while different aspects of their lives tend to be left to wayside. Sudoku is a game that can be addictive However, it’s vital to keep the addictive nature of the game
under control. If you’ve become addicted to Sudoku here are some ways to curb your addiction.
One of the best methods to determine if you’re addicted to Sudoku is by observing your behavior. A majority of people have magazines and books as well as browse through social posts on social media.
Sudoku addicts carry newspapers, books, exercise books and smartphones everywhere they go. They can be found for hours working on puzzles and aren’t able to stop! Some discover it is easier to
complete Sudoku puzzles than standard crosswords. They simply can’t stop.
Simple Sudoku Printable 4×4
What is the Key to Solving a Sudoku Puzzle?
The best way to solve an printable sudoku game is to practice and play with various approaches. The best Sudoku puzzle solvers don’t employ the same strategy for each puzzle. The most important thing
is to test and practice different methods until you can find one that works for you. After some time, you’ll be able to solve sudoku puzzles without a problem! How do you master to solve the
printable sudoku puzzle?
The first step is to understand the basics of suduko. It is a game of reasoning and deduction, and you need be able to see the puzzle from various angles to spot patterns and solve it. When you are
solving Suduko puzzles, suduko puzzle, do not attempt to guess the numbers. instead, you should look over the grid for ways to recognize patterns. It is also possible to apply this technique to rows
and squares.
Related For Sudoku Puzzles Printable | {"url":"https://sudokuprintables.net/simple-sudoku-printable-4x4/","timestamp":"2024-11-08T20:42:35Z","content_type":"text/html","content_length":"37464","record_id":"<urn:uuid:360e9fa7-3b83-4d82-a6e6-8bf7a6e2fbae>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00577.warc.gz"} |
Lens Distortion White Paper
SynthEyes' lens-distortion algorithm is designed to model the bulk of the distortion created by the lens, but by design, it avoids having a large number of parameters, since having many parameters
provides more opportunity for details in the scene to be interpreted as lens distortion.
SynthEyes Image Coordinates
SynthEyes uses a resolution-independent representation of positions in the image plane: a U coordinate ranging from -1 on the left to +1 on the right, and a V coordinate ranging from -1 at the top to
+1 at the bottom. The exact center of the image occurs at U=V=0, and this is assumed to be the location of the camera and lens' optic axis.
Lens Distortion
The lens distortion algorithm is set up to rapidly distort image coordinates, since this is the calculation used very frequently during solving. The distortion algorithm is what you need to undistort
images: to undistort, for each undistorted output pixel, you compute where it's distorted location would be.
If (u,v) are the coordinates of a feature in the undistorted perfect image plane, then (u', v') are the coordinates of the feature on the distorted image plate, ie the scanned or captured image from
the camera. The distortion occurs radially away from the image center, with correction for the image aspect ratio (image_aspect = physical image width/height), as follows:
r2 = image_aspect*image_aspect*u*u + v*v
f = 1 + r2*(k + kcube*sqrt(r2))
u' = f*u
v' = f*v
The constant k is the distortion coefficient that appears on the lens panel and through Sizzle. It is generally a small positive or negative number under 1%. The constant kcube is the cubic
distortion value found on the image preprocessor's lens panel: it can be used to undistort or redistort images, but it does not affect or get computed by the solver. When no cubic distortion is
needed, neither is the square root, saving time.
Lens Un-distortion
Though distorting the coordinates is quick and painless, un-distorting them (going from distorted coordinates to perfect ones) is a bit of trouble. This direction is the one you need to distort CGI
images to match the distorted original plates, by computing for each (distorted) output pixel, where the original undistorted pixel lies.
Though a partial closed-form solution could probably be used for un-distorting equation plain quadratic distortion (requires a cubic solve), it is easiest to use an iterative technique. You can see
sample Sizzle code in the distort.szl script.
But if you are doing an image's worth, you can build a look-up table on the radius. That's what the lens presets do, using spline interpolation. See the "Lens Information File" section of the manual
for more information on them. | {"url":"https://support.borisfx.com/hc/en-us/articles/24284292845325-Lens-Distortion-White-Paper","timestamp":"2024-11-02T20:49:05Z","content_type":"text/html","content_length":"48728","record_id":"<urn:uuid:2993d9f6-54f8-453b-9ee5-f3fcd16b4629>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00177.warc.gz"} |
Problem I. Floyd-Warshall
You are to write a program that finds shortest distances between all pairs of vertices in a directed weighted graph. Graph consists of
vertices, numbered from 1 to
, and
Input file format
Input file contains two integers
, followed my
triplets of integers
u[i] v[i] w[i]
— starting vertex, ending vertex and weight or the edge. There is at most one edge connecting two vertices in every direction. There are no cycles of negative weight.
Output file format
Output file must contain a matrix of size
. Element in the
-th column of
-th row mush be the shortest distance between vertices
. The distance from the vertex to itself is considered to be 0. If some vertex is not reachable from some other, there must be empty space in corresponding cell of matrix.
0 ≤
≤ 100. All weights are less than 1000 by absolute value.
Sample tests
No. Input file (input.txt) Output file (output.txt) | {"url":"https://imcs.dvfu.ru/cats/problem_text?cid=897365;pid=519004;sid=","timestamp":"2024-11-10T06:42:04Z","content_type":"text/html","content_length":"24190","record_id":"<urn:uuid:3181c0a8-641a-4ca9-90c6-518732cc9e81>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00462.warc.gz"} |
Self-Diffusiophoresis of Slender Catalytic Colloids
We consider the self-diffusiophoresis of axisymmetric particles using a continuum description where the interfacial chemical reaction is modeled by first-order kinetics with a prescribed axisymmetric
distribution of rate-constant magnitude. We employ the standard macroscale framework where the interaction of solute molecules with the particle boundary is represented by diffusio-osmotic slip. The
dimensionless problem governing the solute transport involves two parameters (the particle slenderness ∈ and the Damköhler number Da) as well as two arbitrary functions which describe the axial
distributions of the particle shape and rate-constant magnitude. The resulting particle speed is determined throughout the solution of the accompanying problem governing the flow about the force-free
particle. Motivated by experimental configurations, we employ slender-body theory to investigate the asymptotic limit ∈ ≪ 1. In doing so, we seek algebraically accurate approximations where the
asymptotic error is smaller than a positive power of ∈. The resulting approximations are thus significantly more useful than those obtained in the conventional manner, where the asymptotic expansion
is carried out in inverse powers of ln ∈. The price for that utility is that two linear integral equations need to be solved: one governing the axial solute-sink distribution and the other governing
the axial distribution of Stokeslets. When restricting the analysis to spheroidal particles, no need arises to solve for the Stokeslet distribution. The integral equation governing the solute-sink
distribution is then solved using a numerical finite-difference scheme. This solution is supplemented by a large-Da asymptotic analysis, wherein a subtle nonuniformity necessitates a careful
treatment of the regions near the particle ends. The simple approximations thereby obtained are in excellent agreement with the numerical solution.
All Science Journal Classification (ASJC) codes
• Condensed Matter Physics
• Spectroscopy
• General Materials Science
• Surfaces and Interfaces
• Electrochemistry
Dive into the research topics of 'Self-Diffusiophoresis of Slender Catalytic Colloids'. Together they form a unique fingerprint. | {"url":"https://cris.iucc.ac.il/en/publications/self-diffusiophoresis-of-slender-catalytic-colloids","timestamp":"2024-11-04T11:19:53Z","content_type":"text/html","content_length":"51971","record_id":"<urn:uuid:6e429781-110b-403b-8d29-2bf94a1da284>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00125.warc.gz"} |
An intuitive approach to statistics. Analysis and description of numerical data using frequency distributions, histograms and measures of central tendency and dispersion, elementary theory of
probability with applications of binomial and normal probability distributions, sampling distributions, confidence intervals, hypothesis testing, chi-square, linear regression, and correlation. A
graphing calculator without a CAS (Computer Algebra System) is required; Texas Instruments TI-83 or TI-84 recommended. Prerequisite: Math 1150 or higher. | {"url":"https://corning.cleancatalog.net/mathematics/math-1310","timestamp":"2024-11-04T01:33:27Z","content_type":"text/html","content_length":"12721","record_id":"<urn:uuid:5dcf14fb-2437-46f2-a943-9861a89acc85>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00601.warc.gz"} |
12.6: Analyzing Cash Flow Information (2024)
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}}}\) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{#1}}} \)\(\newcommand{\id}{\mathrm{id}}
\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}
{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \(\newcommand
{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \
newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\
1. Analyze cash flow information.
Question: Companies and analysts tend to use income statement and balance sheet information to evaluate financial performance. In fact, financial results presented to the investing public typically
focus on earnings per share (Chapter 13 discusses earnings per share in detail). However, analysis of cash flow information is becoming increasingly important to managers, auditors, and outside
analysts. What measures are commonly used to evaluate performance related to cash flows?
Answer: Three common cash flow measures used to evaluate organizations are (1) operating cash flow ratio, (2) capital expenditure ratio, and (3) free cash flow. (Further coverage of these measures
can be found in the following article: John R. Mills and Jeanne H. Yamamura, “The Power of Cash Flow Ratios,” Journal of Accountancy, October 1998.) We will use two large home improvement retail
companies, The Home Depot, Inc., and Lowe’s Companies, Inc., to illustrate these measures.
Operating Cash Flow Ratio
Question: The operating cash flow ratio is cash provided by operating activities divided by current liabilities. What does this ratio tell us, and how is it calculated?
Answer: This ratio measures the company’s ability to generate enough cash from daily operations over the course of a year to cover current obligations. Although similar to the commonly used current
ratio, this ratio replaces current assets in the numerator with cash provided by operating activities. The operating cash flow ratio is as follows:
Key Equation
The numerator, cash provided by operating activities, comes from the bottom of the operating activities section of the statement of cash flows. The denominator, current liabilities, comes from the
liabilities section of the balance sheet. (Note that if current liabilities vary significantly from one period to the next, some analysts prefer to use average current liabilities. We will use ending
current liabilities unless noted otherwise.)
As with most financial measures, the resulting ratio must be compared to similar companies in the industry to determine whether the ratio is reasonable. Some industries have a large operating cash
flow relative to current liabilities (e.g., mature computer chip makers, such as Intel Corporation), while others do not (e.g., startup medical device companies).
The operating cash flow ratio is calculated for Home Depot and Lowe’s in the following using information from each company’s balance sheet and statement of cash flows.
Home Depot and Lowe’s are in the same industry and have comparable ratios, which is what we would expect for similar companies.
Capital Expenditure Ratio
Question: The capital expenditure ratio is cash provided by operating activities divided by capital expenditures. What does this ratio tell us, and how is it calculated?
Answer: This ratio measures the company’s ability to generate enough cash from daily operations to cover capital expenditures. A ratio in excess of 1.0, for example, indicates the company was able to
generate enough operating cash to cover investments in property, plant, and equipment. The capital expenditure ratio is as follows:
Key Equation
The numerator, cash provided by operating activities, comes from the bottom of the operating activities section of the statement of cash flows. The denominator, capital expenditures, comes from
information within the investing activities section of the statement of cash flows.
The capital expenditure ratio is calculated for Home Depot and Lowe’s in the following using information from each company’s statement of cash flows.
Since the capital expenditure ratio for each company is above 1.0, both companies were able to generate enough cash from operating activities to cover investments in property, plant, and equipment
(also called fixed assets).
Free Cash Flow
Question: Another measure used to evaluate organizations, called free cash flow, is simply a variation of the capital expenditure ratio described previously. What does this measure tell us, and how
is it calculated?
Answer: Rather than using a ratio to determine whether the company generates enough cash from daily operations to cover capital expenditures, free cash flow is measured in dollars. Free cash flow is
cash provided by operating activities minus capital expenditures. The idea is that companies must continue to invest in fixed assets to remain competitive. Free cash flow provides information
regarding how much cash generated from daily operations is left over after investing in fixed assets. Many organizations, such as Amazon.com, consider this measure to be one of the most important in
evaluating financial performance (see Note 12.34 "Business in Action 12.5"). The free cash flow formula is as follows:
Key Equation
Free cash flow = Cash provided by operating activities − Capital expenditures
The cash provided by operating activities comes from the bottom of the operating activities section of the statement of cash flows. The capital expenditures amount comes from information within the
investing activities section of the statement of cash flows.
The free cash flow amount is calculated for Home Depot and Lowe’s as follows using information from each company’s statement of cash flows.
Because free cash flow for each company is above zero, both companies were able to generate enough cash from operating activities to cover investments in fixed assets and have some left over to
invest elsewhere. This conclusion is consistent with the capital expenditure ratio analysis, which uses the same information to assess the company’s ability to cover fixed asset expenditures.
Formulas for the cash flow performance measures presented in this chapter are summarized in Table 12.1.
Table 12.1 Summary of Cash Flow Performance Measures
Business in Action 12.5
Source: Photo courtesy of James Duncan Davidson, http://www.flickr.com/photos/oreilly/6629275/
Free Cash Flow at Amazon.com
Amazon.com is an online retailer that began selling books in 1996 and has since expanded into other areas of retail sales. The founder and CEO (Jeff Bezos) believes free cash flow is so important,
the annual report included a letter from Mr. Bezos to the shareholders, which began with this statement, “Our ultimate financial measure, and the one we want to drive over the long-term, is free cash
flow per share.”
The company justifies this focus on free cash flow by making the point that earnings presented on the income statement do not translate into cash flows, and shares are valued based on the present
value of future cash flows. This implies shareholders should be most interested in free cash flow per share rather than earnings per share. Mr. Bezos goes on to state, “Cash flow statements often
don’t receive as much attention as they deserve. Discerning investors don’t stop with the income statement.”
Amazon.com’s free cash flow for 2010 totaled $2,164,000,000, compared to $2,880,000,000 in 2009. Net income for 2010 totaled $1,152,000,000, compared to $902,000,000 in 2009. It is interesting to
note that free cash flow is significantly higher than net income for 2010 and 2009.
Source: Amazon.com, Inc., “2010 Annual Report,” www.amazon.com.
Key Takeaway
• Three measures are often used to evaluate cash flow. The operating cash flow ratio measures the company’s ability to generate enough cash from daily operations over the course of a year to cover
current obligations. The formula is as follows:
The capital expenditure ratio measures the company’s ability to generate enough cash from daily operations to cover capital expenditures. The formula is as follows:
Free cash flow measures the company’s ability to generate enough cash from daily operations to cover capital expenditures and determines how much cash is remaining to invest elsewhere in the company.
The formula is as follows:
Free cash flow = Cash provided by operating activities − Capital expenditures
REVIEW PROBLEM 12.8
The following financial information is for PepsiCo Inc. and Coca-Cola Company for fiscal year 2010.
For PepsiCo and Coca-Cola, calculate the following measures and comment on your results:
1. Operating cash flow ratio
2. Capital expenditure ratio (Hint: fixed asset expenditures are the same as capital expenditures.)
3. Free cash flow
Solution to Review Problem 12.8
All dollar amounts are in millions.
1. The formula for calculating the operating cash flow ratio is as follows:
PepsiCo generated slightly more cash from operating activities to cover current liabilities than Coca-Cola.
2. The formula for calculating the capital expenditure ratio is as follows:
Both companies generated more than enough cash from operating activities to cover capital expenditures.
3. The formula to calculate free cash flow is as follows: Free cash flow = Cash provided by operating activities − Capital expenditures
The conclusion reached in requirement two is confirmed here. Both companies generated more than enough cash from operating activities to cover capital expenditures. In fact, PepsiCo had
$5,195,000,000 remaining from operating activities after investing in fixed assets, and Coca-Cola had $7,317,000,000 remaining.
As an expert in financial analysis and cash flow evaluation, I have extensive knowledge and experience in interpreting financial statements, particularly income statements, balance sheets, and
statements of cash flows. My expertise is rooted in practical applications, having worked with various organizations to assess their financial health and performance. I've conducted in-depth analyses
using key financial ratios and metrics to provide valuable insights into cash flow management.
Now, let's delve into the concepts discussed in the provided article:
1. Operating Cash Flow Ratio:
□ Definition: The operating cash flow ratio is calculated as cash provided by operating activities divided by current liabilities.
□ Purpose: It measures a company's ability to generate sufficient cash from daily operations to cover its current obligations over a year.
□ Formula: (\text{Operating Cash Flow Ratio} = \frac{\text{Cash provided by operating activities}}{\text{Current Liabilities}})
2. Capital Expenditure Ratio:
□ Definition: The capital expenditure ratio is calculated as cash provided by operating activities divided by capital expenditures.
□ Purpose: It assesses a company's ability to generate enough cash from operations to cover capital investments, with a ratio above 1.0 indicating the ability to cover such expenditures.
□ Formula: (\text{Capital Expenditure Ratio} = \frac{\text{Cash provided by operating activities}}{\text{Capital Expenditures}})
3. Free Cash Flow:
□ Definition: Free cash flow is a measure of how much cash a company has after covering capital expenditures, indicating funds available for other investments or activities.
□ Purpose: It provides insights into the company's ability to generate cash from daily operations beyond what is needed for capital expenditures.
□ Formula: (\text{Free Cash Flow} = \text{Cash provided by operating activities} - \text{Capital Expenditures})
4. Comparison and Industry Analysis:
□ Emphasizes the importance of comparing these cash flow measures with industry benchmarks to determine their reasonableness.
□ Industries may have different norms for these ratios based on their operational and capital expenditure characteristics.
5. Amazon.com's Emphasis on Free Cash Flow:
□ Highlights Amazon.com's CEO, Jeff Bezos, considering free cash flow per share as the ultimate financial measure.
□ Stresses that earnings on the income statement do not directly translate into cash flows, and shares are valued based on the present value of future cash flows.
6. Summary of Cash Flow Performance Measures (Table 12.1):
□ Provides a summarized table of the key formulas for the cash flow performance measures discussed in the article.
7. Review Problem 12.8:
□ Applies the concepts to financial information for PepsiCo Inc. and Coca-Cola Company for fiscal year 2010, calculating and interpreting the operating cash flow ratio, capital expenditure
ratio, and free cash flow for both companies.
In conclusion, the article underscores the increasing importance of analyzing cash flow information alongside income statements and balance sheets. It provides practical insights into evaluating a
company's financial performance using key cash flow metrics and emphasizes their relevance in different industries. | {"url":"https://dechellytours.com/article/12-6-analyzing-cash-flow-information","timestamp":"2024-11-04T07:50:16Z","content_type":"text/html","content_length":"81023","record_id":"<urn:uuid:6deeee5b-cfb6-4090-98e8-e331edf6bd92>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00724.warc.gz"} |
Nick Brettell, On the graph width parameter mim-width - Discrete Mathematics Group
Nick Brettell, On the graph width parameter mim-width
Wednesday, August 26, 2020 @ 10:30 AM - 11:30 AM KST
Zoom ID: 869 4632 6610 (ibsdimag)
Maximum induced matching width, also known as mim-width, is a width parameter for graphs introduced by Vatshelle in 2012. This parameter can be defined over branch decompositions of a graph G, where
the width of a vertex partition (X,Y) in G is the size of a maximum induced matching in the bipartite graph induced by edges of G with one endpoint in X and one endpoint in Y. In this talk, I will
present a quick overview of mim-width and some key results that highlight why this parameter is of interest from both a theoretical and algorithmic point of view. I will discuss some recent work
regarding the boundedness or unboundedness of mim-width for hereditary classes defined by forbidding one or two induced subgraphs, and for generalisations of convex graphs. I will also touch on some
interesting applications of this work, in particular for colouring and list-colouring. This is joint work with Jake Horsfield, Andrea Munaro, Giacomo Paesani, and Daniel Paulusma. | {"url":"https://dimag.ibs.re.kr/event/2020-08-26/","timestamp":"2024-11-11T23:41:29Z","content_type":"text/html","content_length":"147430","record_id":"<urn:uuid:42897a10-52ae-4f93-8c7d-d10fb3c835b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00341.warc.gz"} |
End of Year Math Game Board Project
Do you have at least one to two weeks left of school?
Do you want your students ENGAGED in school until the end??
If you said YES, then there is a Math Game Board Project for you and your students!
This is seriously, hands down the best way to end the school year with your students. Every year I end with the End of Year Math Game Board Project is a happy year for my students and I. I am able to
pack up my room, finish grading, and I don't have to stay after school. Kids are engaged, they loved making the board games, and the rubrics make it easy for everyone to get an A (which also helped
in grading)!!
Student Made
Teacher Reviews
"This is a fantastic resource! Thank you so much! Can't wait to use it again this year!" -Alissa
"My students loved this!" -Nicole
"This helped keep the kids engaged during that tough time of year!" -Lyndsay
"Awesome activity! My students' creativity shined in this assignment." -Jeanna
"Thank you for this resource. It is excellent. My students took advantage of it. Perfect end of the year project!" -Teacher
"Students loved this. It was the perfect way to end the year on an educational but fun note." -Teacher
"Students loved this." -Michelle
"This was so much fun. My stents got competitive on their games and now I have some for next year!!!!" -Teacher
"This activity saved me with 8th graders the last few weeks of school." -Wendy
This project saved and helped many teachers finish strong and kept the students ENGAGED. I have even had past students come back to my room at the end of the year BEGGING me to play one of the board
games. How cool is that? Students asking to visit your classroom and play educational games? Yes! Truly the best way to end your school year.
I hope this End of Year Project is a HIT in your classroom too! Just remember for the end of the year it's all about keeping those students ENGAGED until the last minute.
Happy Teaching! | {"url":"https://www.kellymccown.com/2017/05/end-of-year-math-game-board-project.html","timestamp":"2024-11-06T12:15:05Z","content_type":"application/xhtml+xml","content_length":"155812","record_id":"<urn:uuid:4688419e-0332-4056-a766-e6be2705dea4>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00543.warc.gz"} |
Playground Calculators | List of Playground Calculators
List of Playground Calculators
Playground calculators give you a list of online Playground calculators. A tool perform calculations on the concepts and applications for Playground calculations.
These calculators will be useful for everyone and save time with the complex procedure involved to obtain the calculation results. You can also download, share as well as print the list of Playground
calculators with all the formulas. | {"url":"https://www.calculatoratoz.com/en/playground-Calculators/CalcList-7226","timestamp":"2024-11-07T05:45:31Z","content_type":"application/xhtml+xml","content_length":"78969","record_id":"<urn:uuid:329c747d-6c51-4a47-be01-a01af702159d>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00010.warc.gz"} |
Field inversion machine learning for a ramp
Edit me
This tutorial elaborates on our latest field inversion machine learning (FIML) capability for steady-state and time-resolved unsteady flow problems. For more details, please refer to our POF paper.
Fig. 1 Mesh and the vertical probe profile for a 45-degree ramp.
FIML for time-accurate unsteady flow over the ramp
To run this case, first download tutorials and untar it. Then go to tutorials-master/Ramp/unsteady/train and run the “preProcessing.sh” script.
Then, use the following command to run the case:
mpirun -np 4 python runScript.py 2>&1 | tee logOpt.txt
The setup is similar to the steady state FIML except that we consider both spatial and temporal evolutions of the flow. In other words, the trained model will improve both spatial and temporal
predictions of the flow. We augment the SA model and make it match the k-omega SST model. Note: the default optimizer is SNOPT. If you don’t have access to SNOPT, you can also choose other
optimizers, such as IPOPT and SLSQP.
Objective function: Regulated prediction error for the bottom wall pressure at all time steps
Design variables: Weights and biases for the built-in neural network
Augmented variables: betaFINuTilda for the nuTilda equations
Training configuration: U0 = 10 m/s
Prediction configuration: U0 = 5 m/s
The optimization exited with 50 iterations. The objective function reduced from 4.756E-01 to 3.274E-02.
Once the unsteady FIML is done. We can copy the last dict from designVariableHist.txt to tutorials-master/Ramp/unsteady/predict/trained. Then, go to tutorials-master/Ramp/unsteady/predict and run
Allrun.sh. This command will run the primal for the baseline (SA), reference (k-omega SST), and the trained (FIML SA) models, similar to the steady-state case.
The following animation shows the comparison among these prediction case. The trained SA model significantly improves the spatial temporal evolution of velocity fields.
Fig. 2 Comparison among the reference, baseline, and trained models.
FIML for steady-state flow over the ramp (coupled FI and ML)
To run this case, first download tutorials and untar it. Then go to tutorials-master/Ramp/steady/train and run the “preProcessing.sh” script.
Then, use the following command to run the case:
mpirun -np 4 python runScript_FIML.py 2>&1 | tee logOpt.txt
This case uses our latest FIML interface, which incorporates a neural network model into the CFD solver to compute the augmented field variables (refer to the POF paper for details). We augment the
k-omega model and make it match the k-omega SST model. The FIML optimization is formulated as:
Objective function: Regulated prediction error for the bottom wall pressure and the CD values
Design variables: Weights and biases for the built-in neural network
Augmented variables: betaFIK and betaFIOmega for the k and omega equations, respectively
Training configurations: c1: U0 = 10 m/s. c2: U0 = 20 m/s
Prediction configuration: U0 = 15 m/s
The optimization converged in 14 iterations, and the objective function reduced from 3.957E+01 to 3.542E+00.
To test the trained model’s accuracy for an unseen flow condition (i.e., U0 = 15 m/s), we can copy the designVariable.json (this file will be generated after the FIML optimization is done.) to
tutorials-master/Ramp/steady/predict/trained. Then, go to tutorials-master/Ramp/steady/predict and run Allrun.sh. This command will run the primal for the baseline (k-omega), reference (k-omega SST),
and the trained (FIML k-omega) models. The trained will read the optimized neural network weights and biases from designVariable.json, assign them to the built-in neural network model (defined in the
regressionModel key in daOption) and run the primal solver.
For the U0 = 15 m/s case, the drag from the reference, baseline, and trained models are: 0.3881, 0.4458, 0.3829, respectively. The trained model significantly improve the drag prediction accuracy for
this case.
FIML for steady-state flow over the ramp (decoupled FI and ML)
We can also first conduct field inversion, save the augmented fields and features to the disk, then conduct an offline ML to train the relationship between the features and augmented fields. This
decoupled FIML approach was used in most of previous steady-state FIML studies.
First, generate the mesh and data for the c1 and c2 cases:
Then, use the following command to run FI for case c1:
mpirun -np 4 python runScript_FI.py -index=0 2>&1 | tee logOpt.txt
After that, use the following command to run FI for case c2:
mpirun -np 4 python runScript_FI.py -index=1 2>&1 | tee logOpt.txt
Once the above two FI cases converge, reconstruct the data for the last optimization iteration (it should be 0.0050). Copy c1/0.0050 to tf_training/c1_data. Copy c2/0.0050 to tf_training/c2_data.
Then, go to tf_training and run TensorFlow training:
python trainModel.py
After the training is done, it will save the model’s coefficients in a “model” folder. Copy this “model” folder to the c1 and c2 folders.
Lastly, we can use the trained model for prediction. For example, you can go to the c1 folder and run:
mpirun -np 4 python runPrimal.py -augmented=True 2>&1 | tee logOpt.txt
DAFoam will read the trained tensorflow model, compute flow features, use the trained tensorflow model to compute the augmented fields, and run the primal flow solutions. In addition to runPrimal,
you can also do an optimization using the trained model. | {"url":"https://dafoam.github.io/mydoc_tutorials_field_inversion_ramp.html","timestamp":"2024-11-11T00:46:28Z","content_type":"text/html","content_length":"30589","record_id":"<urn:uuid:a75e8196-65a6-4caf-8d0c-39d87655bffd>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00783.warc.gz"} |
data analysis – A Blog by Mike Conley
My experiment makes a little bit of an assumption – and it’s the same assumption most teachers probably make before they hand back work. We assume that the work has been graded correctly and
The rubric that I provided to my graders was supposed to help sort out all of this objectivity business. It was supposed to boil down all of the subjectivity into a nice, discrete, quantitative
But I’m a careful guy, and I like back-ups. That’s why I had 2 graders do my grading. Both graders worked in isolation on the same submissions, with the same rubric.
So, did it work? How did the grades match up? Did my graders tend to agree?
Sounds like it’s time for some data analysis!
About these tables…
I’m about to show you two tables of data – one table for each assignment. The rows of the tables map to a single criterion on that assignments rubric.
The columns are concerned with the graders marks for each criterion. The first columns, Grader 1 – Average and Grader 2 – Average, simply show the average mark given for each criteria for each
Number of Agreements shows the number of times the marks between both graders matched for that criterion. Similarly, Number of Disagreements shows how many times they didn’t match. Agreement
Percentage just converts those two values into a single percentage for agreement.
Average Disagreement Magnitude takes every instance where there was a disagreement, and averages the magnitude of the disagreement (a reminder: the magnitude here is the absolute value of the
Finally, I should point out that these tables can be sorted by clicking on the headers. This will probably make your interpretation of the data a bit easier.
So, if we’re clear on that, then let’s take a look at those tables…
Flights and Passengers Grader Comparison
[table id=6 /]
Decks and Cards Grader Comparison
[table id=7 /]
Findings and Analysis
It is very rare for the graders to fully agree
It only happened once, on the “add_passenger” correctness criterion of the Flights and Passengers assignments. If you sort the tables by “Number of Agreements” (or Number of Disagreements), you’ll
see what I mean.
Grader 2 tended to give higher marks than Grader 1
In fact, there are only a handful of cases (4, by my count), where this isn’t true:
1. The add_passenger correctness criterion on Flights and Passengers
2. The internal comments criterion on Flights and Passengers
3. The error checking criterion on Decks and Cards
4. The internal comments criterion on Decks and Cards
The graders tended to disagree more often on design and style
Sort the tables by Number of Disagreements descending, and take a look down the left-hand side.
There are 14 criteria in total for each assignment. If you’ve sorted the tables like I’ve asked, the top 7 criteria of each assignment are:
Flights and Passengers
1. Style
2. Design of __str__ method
3. Design of heaviest_passenger method
4. Design of lightest_passenger method
5. Docstrings
6. Correctness of __str__ method
7. Design of Flight constructor
Decks and Cards
1. Correctness of deal method
2. Style
3. Design of cut method
4. Design of __str__ method
5. Docstrings
6. Design of deal method
7. __str__
Of those 14, 9 have to do with design or style. It’s also worth noting that Doctrings and the correctness of the __str__ methods are in there too.
There were slightly more disagreement in Decks and Cards than in Flights and Passengers
Total number of disagreements for Flights and Passengers: 136 (avg: 9.71 per criterion)
Total number of disagreements for Decks and Cards: 161 (avg: 11.5 per criterion)
Being Hands-off
From the very beginning, when I contacted / hired my Graders, I was very hands-off. Each Grader was given the assignment specifications and rubrics ahead of time to look over, and then a single
meeting to ask questions. After that, I just handed them manila envelopes filled with submissions for them to mark.
Having spoken with some of the undergraduate instructors here in the department, I know that this isn’t usually how grading is done.
Usually, the instructor will have a big grading meeting with their TAs. They’ll all work through a few submissions, and the TAs will be free to ask for a marking opinion from the instructor.
By being hands-off, I didn’t give my Graders the same level of guidance that they may have been used to. I did, however, tell them that they were free to e-mail me or come up to me if they had any
questions during their marking.
The hands-off thing was a conscious choice by Greg and myself. We didn’t want me to bias the marking results, since I would know which submissions would be from the treatment group, and which ones
would be from control.
Anyhow, the results from above have driven me to conclude that if you just hand your graders the assignments and the rubrics, and say “go”, you run the risk of seeing dramatic differences in grading
from each Grader. From a student’s perspective, this means that it’s possible to be marked by “the good Grader”, or “the bad Grader”.
I’m not sure if a marking-meeting like I described would mitigate this difference in grading. I hypothesize that it would, but that’s an experiment for another day.
Questionable Calls
If you sort the Decks and Cards table by Number of Disagreements, you’ll find that the criterion that my Graders disagreed most on was the correctness of the “deal” method. Out of 30 submissions,
both Graders disagreed on that particular criterion 21 times (70%).
It’s a little strange to see that criterion all the way at the top there. As I mentioned earlier, most of the disagreements tended to be concerning design and style.
So what happened?
Well, let’s take a look at some examples.
Example #1
The following is the deal method from participant #013:
def deal(self, num_to_deal):
i = 0
while i < num_to_deal:
print self.deck.pop(0)
i += 1
Grader 1 gave this method a 1 for correctness, where Grader 2 gave this method a 4.
That’s a big disagreement. And remember, a 1 on this criterion means:
Barely meets assignment specifications. Severe problems throughout.
I think I might have to go with Grader 2 on this one. Personally, I wouldn’t use a while-loop here – but that falls under the design criterion, and shouldn’t impact the correctness of the method.
I’ve tried the code out. It works to spec. It deals from the top of the deck, just like it’s supposed to. Sure, there are some edge cases missed here (what is the Deck is empty? What if we’re
asked to deal more than the number of cards left? What if we’re asked to deal a negative number of cards? etc)… but the method seems to deliver the basics.
Not sure what Grader 1 saw here. Hmph.
Example #2
The following is the deal method from participant #023:
def deal(self, num_to_deal):
res = []
for i in range(0, num_to_deal):
Grader 1 gave this method a 0 for correctness. Grader 2 gave it a 3.
I see two major problems with this method. The first one is that it doesn’t print out the cards that are being dealt off: instead, it stores them in a list. Secondly, that list is just tossed out
once the method exits, and nothing is returned.
A “0” for correctness simply means Unimplemented, which isn’t exactly true: this method has been implemented, and has the right interface.
But it doesn’t conform to the specification whatsoever. I would give this a 1.
So, in this case, I’d side more (but not agree) with Grader 1.
Example #3
This is the deal method from participant #025:
def deal(self, num_to_deal):
num_cards_in_deck = len(self.cards)
num_to_deal = int(num_to_deal)
if num_to_deal > num_cards_in_deck:
print "Cannot deal more than " + num_cards_in_deck + " cards\n"
i = 0
while i < num_to_deal:
print str(self.cards[i])
i += 1
self.cards = self.cards[num_to_deal:]
print "Error using deal\n"
Grader 1 also gave this method a 1 for correctness, where Grader 2 gave a 4.
The method is pretty awkward from a design perspective, but it seems to behave as it should – it deals the provided number of cards off of the top of the deck and prints them out.
It also catches some edge-cases: num_to_deal is converted to an int, and we check to ensure that num_to_deal is less than or equal to the number of cards left in the deck.
Again, I’ll have to side more with Grader 2 here.
Example #4
This is the deal method from participant #030:
def deal(self, num_to_deal):
i = 0
while i <= num_to_deal:
print self.cards[0]
del self.cards[0]
Grader 1 gave this a 1. Grader 2 gave this a 4.
Well, right off the bat, there’s a major problem: this while-loop never exists. The while-loop is waiting for the value i to become greater than num_to_deal…but it never can, because i is defined
as 0, and never incremented.
So this method doesn’t even come close to satisfying the spec. The description for a “1” on this criterion is:
Barely meets assignment specifications. Severe problems throughout.
I’d have to side with Grader 1 on this one. The only thing this method delivers in accordance with the spec is the right interface. That’s about it.
Dealing from the Bottom of the Deck
I received an e-mail from Grader 2 about the deal method. I’ve paraphrased it here:
If the students create the list of cards in a typical way, for suit in CARD_SUITS; for rank in CARD_RANKS, and then print using something like:
for card in self.cards
print str(card) + “\n”
Then for deal, if they pick the cards to deal using pop() somehow, like:
for i in range(num_to_deal):
print str(self.cards.pop())
Aren’t they dealing from the bottom
My answer was “yes, they are, and that’s a correctness problem”. In my assignment specification, I was intentionally vague about the internal collection of the cards – I let the participant figure
that all out. All that mattered was that the model made sense, and followed the rules.
So if I print my deck, and it prints:
Q of Hearts
A of Spades
7 of Clubs
Then deal(1) should print:
Q of Hearts
regardless of the internal organization.
Anyhow, only Grader 2 asked for clarification on this, and I thought this might be the reason for all of the disagreement on the deal method.
Looking at all of the disagreements on the deal methods, it looks like 7 out of the 20 can be accounted for because students were unintentionally dealing from the bottom of the deck, and only Grader
2 caught it.
Subtracting the “dealing from the bottom” disagreements from the total leaves us with 13, which puts it more in line with some of the other correctness criteria.
So I’d have to say that, yes, the “dealing from the bottom” problem is what made the Graders disagree so much on this criterion: only 1 Grader realized that it was a problem while they were
marking. Again, I think this was symptomatic of my hands-off approach to this part of the experiment.
In Summary
My graders disagreed. A lot. And a good chunk of those disagreements were about style and design. Some of these disagreements might be attributable to my hands-off approach to the grading portion
of the experiment. Some of them seem to be questionable calls from the Graders themselves.
Part of my experiment was interested in determining how closely peer grades from students can approximate grades from TAs. Since my TAs have trouble agreeing amongst themselves, I’m not sure how
that part of the analysis is going to play out.
I hope the rest of my experiment is unaffected by their disagreement.
Stay tuned.
See anything?
Do my numbers make no sense? Have I contradicted myself? Have I missed something critical? Are there unanswered questions here that I might be able to answer? I’d love to know. Please comment! | {"url":"https://mikeconley.ca/blog/tag/data-analysis/","timestamp":"2024-11-04T18:40:39Z","content_type":"text/html","content_length":"68281","record_id":"<urn:uuid:9d6a4a3c-3b4d-4685-b018-75cbd15b2101>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00027.warc.gz"} |
638100 Yottahertz to Exahertz (YHz to EHz) | JustinTOOLs.com
Please support this site by disabling or whitelisting the Adblock for "justintools.com". I've spent over 10 trillion microseconds (and counting), on this project. This site is my passion, and I
regularly adding new tools/apps. Users experience is very important, that's why I use non-intrusive ads. Any feedback is appreciated. Thank you. Justin XoXo :)
Category: frequencyConversion: Yottahertz to Exahertz
The base unit for frequency is hertz (Non-SI/Derived Unit)
[Yottahertz] symbol/abbrevation: (YHz)
[Exahertz] symbol/abbrevation: (EHz)
How to convert Yottahertz to Exahertz (YHz to EHz)?
1 YHz = 1000000 EHz.
638100 x 1000000 EHz =
Always check the results; rounding errors may occur.
In relation to the base unit of [frequency] => (hertz), 1 Yottahertz (YHz) is equal to 1.0E+24 hertz, while 1 Exahertz (EHz) = 1.0E+18 hertz.
638100 Yottahertz to common frequency units
638100 YHz = 6.381E+29 hertz (Hz)
638100 YHz = 6.381E+26 kilohertz (kHz)
638100 YHz = 6.381E+23 megahertz (MHz)
638100 YHz = 6.381E+20 gigahertz (GHz)
638100 YHz = 6.381E+29 1 per second (1/s)
638100 YHz = 4.0093005468262E+30 radian per second (rad/s)
638100 YHz = 3.8286000015314E+31 revolutions per minute (rpm)
638100 YHz = 6.381E+29 frames per second (FPS)
638100 YHz = 1.3783048211509E+34 degree per minute (°/min)
638100 YHz = 6.381E+17 fresnels (fresnel) | {"url":"https://www.justintools.com/unit-conversion/frequency.php?k1=yottahertz&k2=exahertz&q=638100","timestamp":"2024-11-07T01:09:42Z","content_type":"text/html","content_length":"66034","record_id":"<urn:uuid:d6b9f005-2808-4123-bb02-1455b5170b02>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00162.warc.gz"} |
Being Resourceful - Primary Measures
One day five small animals in my garden were going to have a sports day. They decided to have a swimming race, a running race, a high jump and a long jump.
These pieces of wallpaper need to be ordered from smallest to largest. Can you find a way to do it?
These rectangles have been torn. How many squares did each one have inside it before it was ripped?
My local DIY shop calculates the price of its windows according to the area of glass and the length of frame used. Can you work out how they arrived at these prices?
Some of the numbers have fallen off Becky's number line. Can you figure out what they were? | {"url":"https://nrich.maths.org/being-resourceful-primary-measures","timestamp":"2024-11-02T13:55:54Z","content_type":"text/html","content_length":"43321","record_id":"<urn:uuid:6df22f10-158e-4d4c-bafb-adf3c00e6dbc>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00734.warc.gz"} |
Shaw Prize in Mathematical Sciences Awarded to Gerd Faltings
Posted in
At a press conference on June 1, 2015 in Hong Kong, the Shaw Prize Foundation announced that this year's Shaw Prize in Mathematical Sciences is awarded in equal shares to Gerd Faltings and Henryk
Iwaniec for their introduction and development of fundamental tools in number theory, allowing them as well as others to resolve some longstanding classical problems. The prize consists of a monetary
award of one million US dollars.
From the prize justification of the Shaw Foundation: A polynomial equation of degree n in one variable with coefficients which are rational numbers has just n complex numbers as solutions. Such an
equation has a symmetry group, its Galois group, that describes how these complex solutions are related to each other.
A polynomial equation in two variables with rational coefficients has infinitely many complex solutions, forming an algebraic curve. In most cases (that is, when the curve has genus 2 or more) only
finitely many of these solutions are pairs of rational numbers. This well-known conjecture of Mordell had defied resolution for sixty years before Faltings proved it. His unexpected proof provided
fundamental new tools in Arakelov and arithmetic geometry, as well as a proof of another fundamental finiteness theorem — the Shaferavich and Tate Conjecture — concerning polynomial equations in many
variables. Later, developing a quite different method of Vojta, Faltings established a far-reaching higher dimensional finiteness theorem for rational solutions to systems of equations on Abelian
Varieties (the Lang Conjectures). In order to study rational solutions of polynomial equations by geometry, one needs arithmetic versions of the tools of complex geometry. One such tool is Hodge
theory. Faltings’ foundational contributions to Hodge theory over the p-adic numbers, as well as his introduction of other related novel and powerful techniques, are at the core of some of the recent
advances connecting Galois groups (from polynomial equations in one or more variables) and the modern theory of automorphic forms (a vast generalization of the theory of periodic functions). The
recent striking work of Peter Scholze concerning Galois representations is a good example of the power of these techniques.
Prof. Dr. Gerd Faltings, born in 1954, studied mathematics and physics at the University of Münster where he received his Diploma and Ph.D. in 1978. After visiting Harvard University from 1978-1979,
he was Assistant at the University of Münster from 1979-1982, completing his Habilitation in 1981. Following Professorships at the University of Wuppertal from 1982-1984 and Princeton University from
1985-1994, he became director of the Max Planck Institute for Mathematics in Bonn in 1995. He has already received numerous awards for his work: the Fields Medal in 1986, a Guggenheim Fellowship in
1988, the Gottfried Wilhelm Leibniz Prize in 1996, the Karl Georg Christian von Staudt Prize in 2008, the Heinz Gumin Prize in 2010, and the King Faisal International Prize for Science in 2014.
The Shaw Prize honors individuals who have achieved significant breakthroughs in academic and scientific research or applications and whose work has resulted in a positive and profound impact on
mankind. The prize is awarded annually in the three fields: Astronomy, Life Science and Medicine, and Mathematical Sciences,. This is the twelfth year that the Prize has been awarded and the
presentation ceremony is scheduled for Thursday, 24 September 2015.
(Photo credit: MFO / Gert-Martin Greuel) | {"url":"https://www.mpim-bonn.mpg.de/node/5966","timestamp":"2024-11-15T03:05:06Z","content_type":"application/xhtml+xml","content_length":"20969","record_id":"<urn:uuid:8609be83-07ef-46cd-8f3a-d406ba854bb4>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00539.warc.gz"} |
EViews Help: Estimating a Bayesian VAR in EViews
Estimating a Bayesian VAR in EViews
To estimate a Bayesian VAR in EViews, click on or type var in the command window to bring up the dialog. Select the as the in the radio buttons on the left-hand side of the dialog.
The dialog will change to the BVAR version of the VAR Specification dialog. As with a standard VAR, you may use the page to list of endogenous variables, the included lags, and any exogenous
variables, and to specify the estimation sample:
The three BVAR specific tabs—, , and —allow you to customize your specification. The following discussion of the settings on these three tabs assumes that you are familiar with the basics of the
various prior types and associated settings. For additional detail, see
“Technical Background”
Prior Type
The tab lets you specify the type of prior you wish to use, whether to include dummy observations, and options for calculating the initial residual covariance matrix.
You may use the drop-down menu to choose between , , , , , , and priors.
The check boxes allow you to include dummy/additional observations to the data matrices of the VAR. The setting adds observations to account for possible unit-root issues, while the setting adds
observations to account for possible cointegration issues (see
“Dummy Observations”
For the priors other than or n, a dropdown menu allows you to specify how the initial residual covariance matrix is calculated:
• estimates a univariate AR model (with number of lags matching those specified for the VAR) for each endogenous variable, then constructs the residual covariance matrix as a diagonal matrix with
diagonal elements equal to the residual variance from the estimated univariate models.
• uses the covariance from an estimated classical VAR model, but zeros out the off-diagonals.
• uses the covariance from an estimated classical VAR model.
• is computed in the same way as the , but only one lag is used.
The radio buttons specify whether to include any exogenous variables specified in the VAR in the calculation of the initial covariance matrix.
• If you select either of the choices, the checkbox specifies whether to include those observations in the initial covariance calculation.
• The selection determines whether to use a degrees-of-freedom correction in the covariance estimate.
If the prior type was selected in the drop-down menu, the checkbox selects whether to use the initial covariances as hyper-parameters, or as the starting values for an optimized hyper-parameter
Finally, the box is used to specify the sample that is used to estimate the covariance. If left blank, the same sample as the overall VAR estimation.
The tab allows specification of the hyper-parameters of the prior distribution.
If the prior type was selected on the tab, the checkbox selects whether to use the values as hyper-parameters, or as the starting values for an optimized hyper-parameter selection.
Note that only hyper-parameters that are available for the current prior type and choice of initial dummy observations are available for selection.
The tab offers options for the Giannone, Lenza & Primiceri (GLP) prior and the inverse normal-Wishart prior. The GLP prior requires optimization of the hyper-parameters, and so options relating to
the optimization algorithm; and . The latter requires estimation through a Gibbs Sampler, and so offers options for the number of draws from the sampler, the percentage of draws to discard as burn-in
draws, and the random number generator’s seed value. | {"url":"https://help.eviews.com/content/bVAR-Estimating_a_Bayesian_VAR_in_EViews.html","timestamp":"2024-11-04T09:03:35Z","content_type":"application/xhtml+xml","content_length":"13999","record_id":"<urn:uuid:3cb24c59-44c8-4398-b1ea-2e267e9263ad>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00423.warc.gz"} |
5 Q’s for Conrad Wolfram, Strategic Director and European Cofounder of Wolfram Research
by Joshua New
by Joshua New
The Center for Data Innovation spoke with Conrad Wolfram, strategic director and European cofounder of Wolfram Research, a computational software company with headquarters in Champaign, Illinois and
Oxfordshire, UK. Wolfram described how computation can be beneficial for quantitative problem solving, as well as the problems with the current state of math education around the world.
Joshua New: Most people are probably familiar with the Wolfram name thanks to Wolfram Alpha, the “computational knowledge engine” run by Wolfram Research. What exactly is a computational knowledge
engine? Why might somebody need this over a traditional search engine?
Conrad Wolfram: Essentially, we’re trying to come up with the answers to questions, not search what other people’s answers are. A search engine is a bit like a librarian, and we’re more like your
personal research assistant. The objective is the same—to answer a question—but the technology is there to actually compute a solution based on curated data. This works really well when your question
is more quantitative, or when nobody has put out much information that could help answer your question.
When you type in your question to Wolfram Alpha, the software breaks it down to symbolic representation and tries to understand what you’re actually asking. With this understanding, and our data, it
builds an answer to your question “on the fly.” Traditional search is useful for a lot of questions, but this approach is particularly beneficial for certain cases like medical diagnostics. When you
plug in unique patient data, a unique answer is much more valuable than general information that other people may have published. For example, you want to know what an individual patient’s heart
attack risk would be for a certain procedure, not heart attack risk associated with the procedure as a whole.
New: Wolfram’s computational software is used in a wide variety of industries. What use case do you think is the most valuable, or stands out the most?
Wolfram: Our software stack has been growing since its launch in the 1980s, and there are a lot of different layers and facets. One key underlying piece of the Wolfram language—which is in a sense a
modern programming language—is that computation is built in. This means you can immediately act on data with the language, since algorithms are built in to its function. This language is a building
block to allow people to do huge amount of work and computation much quicker than they could before. Typically, people need to find other languages and tailor them to accomplish specific
computations—there are a lot of moving parts that can slow the process down. Other times, there is software that can accomplish niche computations, such as for finance or engineering, but it’s
difficult to combine these with other tools to build a robust platform. So what we have is a complete, end-to-end solution to compute things—even new things that you maybe hadn’t planned on or things
that have never been computed before.
We’re trying to push the boundaries of computation, and our software stack is designed to do this at every level. One part of this stack in particular is our computable documents. Traditionally, the
method of communicating information from one person to another, particularly in government, is via text reports. This is ok, but reports are “dead”—they’ve been pretty much the same thing for the
last 300 years. Our computable documents allow us to mix interactivity with narrative.
We try to think about which areas are best for computational knowledge, rather than just search. Take transportation for example. We already have apps out there that compute the best route to a
user’s destination. Educational assessment and social security also benefit from this approach. But I think the area with the highest potential for gain is in healthcare. Regardless of a country’s
healthcare system, there are always huge inefficiencies. Better data science and technology has so much to offer here. System-wide, this enables massive process improvements, such as by applying
advanced analytics to medical imaging rather than relying on humans to examine this data. And on a more individual level, patient-generated data is hugely valuable for diagnostics. Around the world,
the rate of success for diagnostics hovers around or below 50 percent. Computation-aided diagnostics, like I mentioned in the previous question, could add so much here.
New: Wolfram’s software has been around for 30 years. How has the program changed over the years in response to the needs of your customers and what they want to do with their data?
Wolfram: When we launched Mathematica, one of our flagship products, people thought we just did math, and that math was a specialized thing. They even said, “it’s weird you guys think you can launch
a company around math.” What’s happened over the last 30 years is that math has turned into computation, and the world has become far more quantitative. This has been an iterative process, partly
because of people pushing the boundaries of what computers can do with math and applying it to areas that didn’t focus heavily on math before. Take biology for example. When we started Mathematica,
very little biology was done with math. Biologists performed standard data analysis, but it was very basic and they didn’t really rely on image processing. Biology today is almost an entirely new
field compared to a couple decades ago, thanks to sophisticated computation. We’ve tried to drive this change. We started out as a math company—and we still are—but now we focus on computation
meeting knowledge. That’s what inspired Wolfram Alpha, which we launched six years ago.
In practical terms, this means we can automate so much more. Before, if someone was trying to solve an equation, they took the time to plug in formulas and tools until they got their answer. But now
we can just give them the answer automatically. Over time, automation has become much better than human capability, and we try to put this tool in as many peoples’ hands as possible. This means two
things. First, people who already use high level math can become much more effective and efficient. And second, many more people can start using this behind the scenes to supplement what they’re
already doing. Computable documents, as I mentioned, lets users make informed decisions without having to expose themselves to all the math going on underneath.
In virtually every walk of live, computation is a critical feature. But, like the early days of computers, the solutions at the enterprise level are piecemeal. We’ve been trying to answer the
question of how to put enterprise-level computation everywhere to make everyone more effective at what they do.
New: It’s pretty apparent that you’re passionate about math, and one of your main projects is a campaign called Computer Based Math, in which you advocate for math education reform. Could you explain
the problem with the current state of math education a little more in depth?
Wolfram: Current math education is based on the idea that humans do the calculating about 80 percent of the time. In real life though, computers are the ones doing the calculating. My gripe is that
today, math is so important to life in ways it wasn’t even just 30 years ago due to the fact that computers have mechanized the process of solving problems. In education we have insisted that humans
have to be the calculators. This used to be essential, but now it’s holding us back because computers are so much better at this than humans ever can be.
What we need to do is rethink the subject of math in education to accommodate for its new role in the outside world. If we start thinking that computers can solve all these basic problems that human
are educated to solve, we can focus our attention on much harder problems. We educate people for 10 years to solve problems so they can graduate and use a machine to do the same task, and we find
that they haven’t actually learned much about how to apply mathematics to solve problems—just to solve math problems themselves. In my view, math is a four-step problem solving process. First, you
have to ask a specific question. How much bandwidth do I need to give a phone call good audio quality, for example. Then, you have to turn to abstract math, or code, to figure out how you want to
solve the problem. Next, you must compute this to get an answer. Finally, you must go back and ensure that your answer is the one you were looking for in your original question. So much of math
education focuses on step three—computing by hand—not steps one, two, and four. People aren’t learning how to set up a problem or abstract it to find answers. And importantly, we’re boring the pants
off most of them because this isn’t applicable to their lives. We need to teach math in a manner that assumes computers exist so we can focus on relevant and conceptual problems.
New: What are some of the key obstacles to changing how math education works?
Wolfram: I think most people would agree that we need to fix math, but the problem is that they haven’t thought enough about how. Most people assume math is just this static “thing” that doesn’t
change, despite the fact that math fundamentally has changed in the outside world. Most government administrators I’ve talked to around the world simply haven’t thought about how math relates to the
world, even if they think it’s important.
Additionally, writing a curriculum based on this theory is really tricky. The people who really know math were trained traditionally, and it’s hard for them—myself included—to step back and say “do
others really need to learn this?” instead of “I learned this, so other people should too.” There was a historical way of doing things, but now there’s a better way.
Building off that, a new problem arises from the fact that it’s mathematicians that set the math curriculum. Obviously they should be involved. But over 96 percent of Mathematica buyers would not
classify themselves as mathematicians—they’re engineers, in finance, government workers, and so on—because math is used in practical applications, not just for itself. When you don’t get this whole
range of people involved, you’re going to get people who set the curriculum who like math and think it’s interesting to teach math just for the sake of math.
Finally, assessment is a huge obstacle. Assessment drives what people teach, for better or for worse. People want good test scores so they can go to college, and these tests all examine hand
calculating ability. If you can do great computer-based problem solving, but not by hand, you’ll still fail. Teaching this calculation takes up so much time in the curriculum as a result that people
can’t learn anything else.
So, it’s somewhat of a chicken-and-egg problem, but we’ve actually had our first country adopt this new approach. Estonia has begun to pilot some of our curriculum so we’re very excited to see how
that turns out.
Joshua New
Joshua New was a senior policy analyst at the Center for Data Innovation. He has a background in government affairs, policy, and communication. Prior to joining the Center for Data Innovation, Joshua
graduated from American University with degrees in C.L.E.G. (Communication, Legal Institutions, Economics, and Government) and Public Communication. His research focuses on methods of promoting
innovative and emerging technologies as a means of improving the economy and quality of life.
next post
Visualizing What’s Warming the Earth
You may also like | {"url":"https://datainnovation.org/2015/08/5-qs-for-conrad-wolfram-strategic-director-and-european-cofounder-of-wolfram-research/","timestamp":"2024-11-09T07:16:06Z","content_type":"text/html","content_length":"125188","record_id":"<urn:uuid:e35b3930-5e7d-4633-9873-30c98cfcf104>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00163.warc.gz"} |
Diagrams for logic in LaTeX
Looking through my school material, I found a lot of LateX graphics which I made for the according courses, and thought I'd share them.
Most of them use TikZ, an excellent graphics and drawing library for LaTeX. Some of the graphics here are modified examples from texample.net.
Beside some examples there is a short piece of code for demonstration purposes, at the bottom of the post there's a link for the full source code and also a compiled PDF with all the examples. It's
quite some material and therefore a bit of a longer post :-) I grouped it into the following categories:
• Binary trees
• Logic circuits
• Directed acyclic graphs (DAG)
• Finite state machines (FSM)
• Karnaugh maps
• Ordered Binary Decision Diagrams (OBDD)
• General graphs & functions
• Misc
Binary trees
Drawing binary trees can bee a one-liner, using qtree:
\Tree [.a [.f [.g [.b c ] [.h i ] ] ] [.e j ] ]
compiles to:
Another example (I believe it's from examining time complexity in loops):
Logic circuits
Especially with logic circuits the beauty of LaTeX became apparent when doing the exercises. Using the package signalflowdiagram (this one needs some additional files, see source) and the TikZ
library shapes.gates.logic and shapes.gates.logic, it's possible to design your own logic circuits. It's still a lot of work
Some code for the half adder:
\tikzstyle{branch}=[fill,shape=circle,minimum size=3pt,inner sep=0pt]
\begin{tikzpicture}[label distance=2mm]
% nodes
\node (x) at (-1,6) {$x$};
\node (y) at ($(x) + (0,-1.2)$) {$y$};
\node[not gate US, draw] at ($(x)+(0.5,-0.8)$) (notx) {};
\node[not gate US, draw] at ($(y)+(0.5,-0.8)$) (noty) {};
\node[and gate US, draw, rotate=-90, logic gate inputs=nn] at (1,3) (A) {};
\node[and gate US, draw, rotate=-90, logic gate inputs=nn] at ($(A)+(2,0)$) (B) {};
\node[and gate US, draw, rotate=-90, logic gate inputs=nn] at ($(B)+(2,0)$) (C) {};
\node[or gate US, draw, rotate=-90, logic gate inputs=nn] at ($(A)+(1,-1.5)$) (D) {};
% draw NOT nodes
\foreach \i in {x,y} {
\path (\i) -- coordinate (punt\i) (\i |- not\i.input);
\draw (\i) |- (punt\i) node[branch] {} |- (not\i.input);
% direct inputs
\draw (puntx) -| (C.input 1);
\draw (punty) -| (C.input 2);
\draw (puntx) -| (B.input 1);
\draw (punty) -| (A.input 2);
\draw (notx) -| (A.input 1);
\draw (noty) -| (B.input 2);
\draw (A.output) -- ([yshift=-0.2cm]A.output) -| (D.input 2);
\draw (B.output) -- ([yshift=-0.2cm]B.output) -| (D.input 1);
\draw (C) -- ($(C) + (0, -1.8)$) -- node[right]{$R$} ($(C) + (0, -2.5)$);
\draw (D.output) -- node[right]{$U$} ($(D) + (0, -1)$);
Here I first draw the nodes based on the shapes.gates.logic-library shapes \ode[not gate US, draw] and then draw the inputs. Note that it's possible to affect how the lines are drawn by setting -|
between the nodes.
As you can see that's quite some code for a single graphic. For the adder network and half adder it's even worse :-), more complex circuits require a lot of tinkering with coordinates and shapes.
Adder network:
Logic circuit of a boolean function:
3-Mux Multiplexer:
Directed acyclic graphs (DAG)
DAGs are quite easy to create. First you create the different nodes with name, displayed text and position:
\begin{tikzpicture}[scale=1.4, auto,swap]
\foreach \pos/\name/\disp in {
\node[minimum size=20pt,inner sep=0pt] (\name) at \pos {\disp};
Then you just connect them using \path and specify the type of line as an arrow (->):
\foreach \source/\dest in {
\path[draw,thick,->] (\source) -- node {} (\dest);
Another example:
Finite state machines
Two graphics for Finite state machines. I already did something on this topic in an earlier blog post, check out my simulation for searching a path on a FSM.
These can also be built with two loops, one for the nodes and the other one for the connections. Remember to use the TikZ library automata.
\begin{tikzpicture}[scale=2, auto,swap]
\foreach \pos/\name/\disp/\initial/\accepting in {
\node[state,\initial,\accepting,minimum size=20pt,inner sep=0pt] (\name) at \pos {$\disp$};
\foreach \source/\dest/\name/\pos in {
\path[draw,\pos,thick,->] (\source) -- node {$\name$} (\dest);
Karnaugh maps
With Karnaugh maps, it is possible to simplify boolean algebra functions. For example, for boolean input variables A, B, C, D (which can either be true or false) and a function F(A, B, C, D) which
maps them to a boolean output value you can draw the following Karnaugh map (for further reading, check the Quine-McCluskey algorithm, minterms and maxterms)
The diagram can be read as follows: For example at cell 3, B and D are overlapping = true, while A and C are false. So the function with the input F(0, 1, 0, 1) = 0.
Another Karnaugh diagram (can you figure out the function behind it? :-) ):
Karnaugh maps are really easy to draw with the kvmacro (which you have to include):
\input kvmacros.tex
Ordered Binary Decision Diagrams (OBDD)
OBDDs are useful in debugging logic circuits (at least our professor told us so). They are drawn like the other graphs, defining the nodes (the end nodes as a special case with other looks) and in
the last step linking them together with arrows.
A sample OBDD:
You can also do some operations on such a tree, which will reduce it an change its form (in german "Verjüngung" and "Elimination"). Also, the initial variable ordering is important and has an
influence on how much you can optimize the tree.
Here is the equivalent graph in two more reduced forms:
General graphs & functions
Not strictly logic, but also functions, plotted with TikZ:
xmin=-1.57, xmax=1.57,
ymin=-5, ymax=5,
ylabel={$\ an(x)$},
\addplot+[no marks] function {tan(x)};
It's also possible to show the an integral area of a function:
Two intersecting lines:
Two functions on a graph:
\draw[->] (-3,0) -- (4.2,0) node[right] {$x$};
\draw[->] (0,-3) -- (0,4.2) node[above] {$y$};
plot ({\x},{\x*\x});
plot ({\y*\y},{\y});
Bonus: Some more graphics which I dug up, also generated with LaTeX :-)
You can download everything (and one or two more circuits which aren't listed here): | {"url":"https://www.kleemans.ch/diagrams-for-logic-in-latex","timestamp":"2024-11-11T20:51:13Z","content_type":"text/html","content_length":"17915","record_id":"<urn:uuid:4bbbcfd0-ffdb-42e2-a2ec-8d275accfd0f>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00029.warc.gz"} |
Build native Windows on Arm applications with Python: Build the application
In this section, you will use the NumPy package you installed previously to create a sample application. The application will use NumPy to perform fast Fourier transforms (FFTs) of the synthesized
sine waves corrupted by the noise. The application will run the FFTs several times for the variable lengths of the input waves. The application will measure the code execution time and we can then
analyze the performance boost of the Python interpreter and NumPy package on Arm64.
You can find the complete code here
Creating the application
Start by creating a new file, sample.py:
Then, import the NumPy and time packages:
import numpy as np
import time
The first package is for numerical computations and the second is for measuring the computation time.
Next, define a function that calculates a signal’s fast Fourier transform (FFT). Here, the signal is composed of a single-frequency sine wave with some random noise:
def perform_sin_fft(signal_length, frequency, trial_count):
start = time.time()
for i in np.arange(1, trial_count+1):
ramp = np.linspace(0, 2 * np.pi, signal_length)
noise = np.random.rand(signal_length)
input_signal = np.sin(ramp * frequency) + 0.1*noise
computation_time = time.time() - start
return computation_time
The above function returns the total time (in seconds) needed for calculating the FFT. Repeat the FFT multiple times (trial_count) to have a stable estimate of the computation time.
To measure the performance, invoke the perform_sin_fft function for various signal lengths:
signal_lengths = [2**10, 2**11, 2**12, 2**13, 2**14]
trial_count = 5000
for signal_length in signal_lengths:
frequency = int(signal_length / 4)
computation_time = perform_sin_fft(signal_length, frequency, trial_count)
print("Signal length {}, Computation time {:.3f} s".format(signal_length, computation_time))
The final form of the sample.py file will look as follows:
import numpy as np
import time
def perform_sin_fft(signal_length, frequency, trial_count):
start = time.time()
for i in np.arange(1, trial_count+1):
ramp = np.linspace(0, 2 * np.pi, signal_length)
noise = np.random.rand(signal_length)
input_signal = np.sin(ramp * frequency) + 0.1*noise
computation_time = time.time() - start
return computation_time
signal_lenghts = [2**10, 2**11, 2**12, 2**13, 2**14]
trial_count = 5000
for signal_length in signal_lenghts:
frequency = int(signal_length / 4)
computation_time = perform_sin_fft(signal_length, frequency, trial_count)
print("Signal length {}, Computation time {:.3f} s".format(signal_length, computation_time))
Measure the performance of Python packages
You will now run the application using non-Arm64 and Arm64 Python 3.12 to measure the performance difference. First, run the application using a non-Arm64 Python interpreter. To do this, in the
command prompt type the following command (make sure to invoke the commands from the same directory where your sample.py is):
py -3.12-64 sample.py
The command executes the script using x64 emulation mode and produces the following output:
Signal length 1024, Computation time 0.610 s
Signal length 2048, Computation time 0.970 s
Signal length 4096, Computation time 1.765 s
Signal length 8192, Computation time 3.312 s
Signal length 16384, Computation time 6.859 s
The computation times depend on the signal length. Specifically, for 16,384 points, the computation time is 6.86 seconds.
Now, use the Arm64 Python 3.12 interpreter. In the command prompt type:
py -3.12-arm64 sample.py
The above command uses Arm64 Python and has much shorter computation times as shown from the following output:
Signal length 1024, Computation time 0.172 s
Signal length 2048, Computation time 0.270 s
Signal length 4096, Computation time 0.702 s
Signal length 8192, Computation time 1.078 s
Signal length 16384, Computation time 2.716 s
The same 16,384-point computation takes 2.72 seconds, reducing the computation time to about 40 percent of the time needed by the emulation mode (x64). This difference represents a performance boost
of about two and a half times the speed of the emulation mode.
This graph illustrates the computation times and the corresponding performance boosts.
This learning path walked you through installing native Arm64 Python 3.12 on Windows 11.
You wrote a simple module that applied a fast Fourier transformation (FFT) to a signal and saw the performance improvements that Arm64 Python unlocked. This performance improvement accelerates
support for Windows on Arm (WoA). One example is Linaro’s demonstration of porting TensorFlow to Arm64 which displays impressive speed improvements and offers tremendous possibilities for AI, data
scientists, and researchers reliant on the ease and power of Python | {"url":"https://learn.arm.com/learning-paths/laptops-and-desktops/win_python/how-to-2/","timestamp":"2024-11-13T01:11:04Z","content_type":"text/html","content_length":"19616","record_id":"<urn:uuid:3b2e0aed-01f3-436a-b207-30e79f7f6cec>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00430.warc.gz"} |
Kraft Stock Boxes (medium)
Kraft stock boxes are sturdy and versatile. The boxes in the range of sizes for medium items (14x10x10 inches to 18x18x24 inches) are all made of 32 ECT (Edge Crush Test) corrugated cardboard.
The ECT measure is a true performance test, directly related to the stacking strength of a carton. ECT is a measure of the edgewise compressive strength of corrugated board.
Kraft stock boxes are also available in a range of sizes for small items (4x4x4 inches to 12x12x40 inches) and large items (20x13x13 inches to 48x40x36 inches).
Box sizes are specified by length x width x height (l x w x h). Make sure you have your measurements in the right order so you get the boxes you expect! Here are some simple diagrams to assist you. | {"url":"https://gmpackaging.com/shop/boxes/stock-boxes-2/kraft-stock-boxes-2/kraft-stock-box-medium/","timestamp":"2024-11-14T21:28:22Z","content_type":"text/html","content_length":"204214","record_id":"<urn:uuid:1f6a2211-9dd4-4d16-b28e-5770896df747>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00747.warc.gz"} |
Seminar Announcement - MSCS
Analysis and Applied Mathematics Seminar
Qirui Peng
University of Illinois Chicago
Non-unique weak solutions of forced SQG
Abstract: We construct non-unique weak solutions $\theta \in C^0_t C^{0-}_x$ for forced surface quasi-geostrophic (SQG) equations. This is achieved through a convex integration scheme adapted to the
sum-difference system of two distinct solutions. Without external forcing, non-unique weak solutions $\theta$ in space $C^0_t C^\alpha_x$ with $\alpha < -1/5$ were constructed by Buckmaster, Shkoller
and Vicol (2019) and Isett and Ma (2021).
Monday March 11, 2024 at 4:00 PM in 636 SEO | {"url":"https://www.math.uic.edu/persisting_utilities/seminars/view_seminar?id=7395","timestamp":"2024-11-10T17:44:23Z","content_type":"text/html","content_length":"11503","record_id":"<urn:uuid:288e7613-3ddb-4987-a3db-0b269dc6e1cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00240.warc.gz"} |
Internal pilot sample size re-estimation in paired comparative diagnostic accuracy trials with a binary response
McCray, Gareth and Titman, Andrew and Ghaneh, Paula and Lancaster, Gillian (2017) Internal pilot sample size re-estimation in paired comparative diagnostic accuracy trials with a binary response.
Trials, 18 (Suppl.): 200. ISSN 1745-6215
The sample size required to power a trial to a nominal level in a paired comparative diagnostic accuracy trial, i.e. Trials in which the diagnostic accuracy of two testing procedures are compared
relative to a gold standard, depends on the correlation between the two diagnostic tests being compared. The lower the correlation between the tests the higher the sample size required, the higher
the correlation between the tests the lower the sample size required. A priori, we usually do not know the correlation between the two tests and thus cannot determine the exact sample size.
Furthermore, the correlation between two tests is a quantity for which 1) it is difficult to make an accurate intuitive estimate and, 2) it is unlikely estimates exist in the literature, particularly
if one of the tests is new, as is very likely to be the case. One option, suggested in the literature, is to use the implied sample size for the maximal negative correlation between the two tests,
thus, giving the largest possible sample size. However, this overly conservative technique is highly likely to be wasteful of resources and unnecessarily burdensome on trial participants - as the
trial is likely to be overpowered and recruit many more participants than needed. A more accurate estimate of the sample size can be determined at a planned interim analysis point where the sample
size is re-estimated - thereby incorporating an internal pilot study into the trial design, with the intention of producing an accurate estimate of the correlation between the tests into the trial.
Methods This paper discusses a sample size estimation and re-estimation method based on the maximum likelihood estimates, under an implied multinomial model, of the observed values of correlation
between the two tests and, if required, prevalence, at a planned interim. The method is illustrated by comparing the accuracy of two procedures for the detection of pancreatic cancer, one procedure
using the standard battery of tests, and the other using the standard battery with the addition of a PET/CT scan all relative to the gold standard of a cell biopsy. Simulation of the proposed method
are also conducted to determine robustness in various conditions. Results The results show that the type I error rate of the overall experiment is stable using our suggested method and that the type
II error rate is close to or above nominal. Furthermore, the instances in which the type II error rate is above nominal are in the situations where the lowest sample size is required, meaning a lower
impact on the actual number of participants recruited. Conclusion We recommend a paired comparative diagnostic accuracy trial which used an internal pilot study to re-estimate the sample size at the
interim. This design would use a maximum likelihood estimate, under a multinomial model, of the correlation between the two tests being compared for diagnostic accuracy, in order to more effectively
estimate the number of participants required to power the trial to at least the nominal level.
Item Type:
Journal Article
Journal or Publication Title:
Uncontrolled Keywords:
?? pharmacology (medical)medicine (miscellaneous) ??
Deposited On:
10 Oct 2017 10:26
Last Modified:
02 Sep 2024 23:49 | {"url":"https://eprints.lancs.ac.uk/id/eprint/88199/","timestamp":"2024-11-14T18:46:34Z","content_type":"application/xhtml+xml","content_length":"28914","record_id":"<urn:uuid:6794af95-f895-4b12-a52f-e2a77a847275>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00246.warc.gz"} |
Finding Math in Ancient Architecture
To this day, mathematics and architecture are close companions. An architect must understand his building materials and environment, and in the modern age such understanding is invariably measured in
quantifiable terms. However, ancient structures also deliberately embedded basic mathematical principle into their design: not because they wielded mathematics as a tool within which to shoehorn a
building, but because the art of mathematics, like that of astronomy, was seen as an echo of the meaning of the universe. Many ancient languages, such as Hebrew, even doubled mathematical symbols
with their written alphabet: resulting in words that would also have numerical significance.
Not coincidentally, many natural geometric principles found in nature also happen to be pleasing to the human eye. Thus, to be aesthetically pleasing, a building often also had to be mathematically
Among the best known mathematical relationships used in ancient architecture is the ratio of the golden rectangle, whose length to side is approximately 1 : 1.62 (or phi), as well as its variants,
the golden triangle and the golden spiral. Important for ancient architects, the golden rectangle can be created very easily using only a compass and straight line by using the midpoint of a starting
square as both defining radius and reference line. Reversed, each part of the spiral obtained from subtracting squares from this quadrilateral shape always creates a new rectangle in the same ratio;
while the spiral itself has broad echoes throughout nature from the spiralling petals of a chrysanthemum to the spiral arms of a galaxy. The Acropolis of Athens is built to repeating ratios of the
golden rectangle.
The columns of the Acropolis also feature a very subtle and carefully calculated mathematical illusion. In order to appear straight to the viewer, they actually have a subtle bulge built in.
Another common relationship used to cleanly square buildings is the Pythagorean principle of the right triangle: a^2 + b^2 = c^2. Not only did this ratio make it very simple to ensure a true right
angle at the corners – very strong in buildings – but this particular ratio of triangle is also pleasing to the human eye. In Islamic architecture, the more common equivalent of the golden rectangle
ratio was 1 : square root of 2, another Pythagorean derivative using the two equal 45 degree angles to balance against the right angle.
The ratio of the circumference of the circle to its diameter, or pi, also seems to have been well-known both to the ancient Egyptians, who built it into the Cheops pyramid, and to the much later
Pythagorean Greeks to whom it is normally attributed. Pi does not seem, however, to be found among highly civilised cultures such as the Inca who happened never to have invented the wheel. Ancient
Indian architecture also aimed to echo divine structure by building around the concept of the mandala, or highly intricate cosmic circle: resulting in tiered architecture in which the full geometric
pattern of the mandala can only be seen from above.
While Islamic art and architecture does not permit reproduction of the human figure, what took its place were sweeping ribbed domes, detailed filigrees, and intricate repeating tilework on six- or
eight-count tessellating patterns.
A few ancient buildings and other structures were lined up very precisely to coincide with carefully calculated astronomical events. Besides its solstice alignment, the 56 Aubrey Holes of Stonehenge
1 also demarcate what might well have been a predictive lunar calendar.
Finally, Roman architecture emphasised straight lines and long rows or circles of half-circle arches. The Roman road surveys provided the groundwork for much of Europe’s modern highway system, while
many constructs such as the old aqueducts remain standing and in use to this day. Here, mathematics for the first time becomes solely a tool of engineering efficiency, echoes of which continue to the
modern day. | {"url":"http://www.actforlibraries.org/finding-math-in-ancient-architecture/","timestamp":"2024-11-12T09:25:07Z","content_type":"text/html","content_length":"22578","record_id":"<urn:uuid:52b6063a-31e6-4f55-966f-8cc6f885feb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00765.warc.gz"} |
True stress and true strain
The stress was calculated by dividing the load P by the initial cross section of the specimen.
True stress and true strain
In drawing the stress-strain diagram as shown in figure 1.13, the stress was calculated by dividing the load P by the initial cross section of the specimen. But it is clear that as the specimen
elongates its diameter decreases and the decrease in cross section is apparent during necking phase. Hence, the actual stress which is obtained by dividing the load by the actual cross sectional area
in the deformed specimen is different from that of the engineering stress that is obtained using undeformed cross sectional area as in equation 1.1 Though the difference between the true stress and
the engineering stress is negligible for smaller loads, the former is always higher than the latter for larger loads.
Similarly, if the initial length of the specimen is used to calculate the strain, it is called engineering strain as obtained in equation 1.9
But some engineering applications like metal forming process involve large deformations and they require actual or true strains that are obtained using the successive recorded lengths to calculate
the strain. True strain is also called as actual strain or natural strain and it plays an important role in theories of viscosity.
TYPES OF STRESSES :
Only two basic stresses exists : (1) normal stress and (2) shear shear stress. Other stresses either are similar to these basic stresses or are a combination of these e.g. bending stress is a
combination tensile, compressive and shear stresses. Torsional stress, as encountered in twisting of a shaft is a shearing stress.
Study Material, Lecturing Notes, Assignment, Reference, Wiki description explanation, brief detail
Mechanical : Strength of Materials : Stress, Strain and Deformation of Solids : True stress and true strain | | {"url":"https://www.brainkart.com/article/True-stress-and-true-strain_5966/","timestamp":"2024-11-07T19:59:18Z","content_type":"text/html","content_length":"30815","record_id":"<urn:uuid:372c223d-72ad-420a-9e4d-823312d793a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00350.warc.gz"} |
Calculus Based Physics
22.1.2 Derive Euler’s equation by expanding the integrand of x? J(a)= / f(y@, 0), Yx(X, a), x) dx xy in powers of a. Note. The stationary condition is dJ(a@)/da@ = 0, evaluated at a = 0. The terms
quadratic in a@ may be useful in establishing the... | {"url":"https://www.transtutors.com/questions/science-math/physics/calculus-based-physics/","timestamp":"2024-11-09T03:18:19Z","content_type":"text/html","content_length":"225617","record_id":"<urn:uuid:f577542d-e600-4d21-b903-b1c2257286ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00311.warc.gz"} |
Time complexity in data structure pdf download
Problem solving with algorithms and data structures using. The definition of a data structure is a bit more involved we begin with the notion of an. Design and analysis of algorithms time complexity
in hindi part 1 asymptotic notation analysis digiimento. They are very common, but i guess some of us are not 100% confident about the exact answer. Bubble sort, selection sort are the example of
on2. Need to brush up on your basics, or learn about the latest libraries or frameworks. The need to be able to measure the complexity of a problem, algorithm or structure, and to obtain bounds and
quantitive relations for complexity arises in more and more sciences. This tutorial will give you a great understanding on data structures needed to. Amortized time is the way to express the time
complexity when an algorithm has the very bad time complexity only once in a while besides the time complexity that happens most of. Ill start by recommending introduction to algorithms, which has a
detailed take on complexity, both time and space, how to calculate it and how it helps you come up with efficient solutions to problems. An algorithm in which during each iteration the input data set
is partitioned into to sub parts is having complexity of ologn. Data structure objective type questions pdf download. When you add an item to a stack, you place it on top of the stack. Space
complexity is more tricky to calculate than time complexity because not all of these variables and data structures may be needed at the same time.
Data structures and algorithms in javascript github. When determining the efficiency of algorithm the time factor is measured by. Algorithms and data structures complexity of algorithms. Global
variables exist and occupy memory all the time. Time complexity measures the amount of work done by the algorithm during solving the problem in the way which is independent on the implementation and
particular input data. Therefore, no algorithm or data structure is presented without an explanation of its running time. You can also visit this to know more about latest trending areas. Space
complexity is the amount of memory used by the algorithm including the input values to the algorithm to execute and produce the result. As we see in the first sentence of the wikipedia definition,
time complexity is expressed in terms of the length of the input. Data structures and algorithms in java 6th edition pdf. For large problem sizes the dominant termone with highest value of exponent
almost completely determines the value of the complexity expression. Put your skills to the test by taking one of our quizzes today.
To get a g on the exam, you need to answer three questions to g standard. To get a vg on the exam, you need to answer five questions to vg standard. This is how we mastered reading and writing,
driving a. Data structures and algorithms school of computer science.
Time complexity of an algorithm signifies the total time required by the program to run till its completion. Concise notes on data structures and algorithms ruby edition christopher fox james madison
university 2011. If you continue browsing the site, you agree to the use of cookies on this website. Option a 22 the complexity of binary search algorithm is. Practice questions on time complexity
analysis geeksforgeeks. Complexity of algorithm and spacetime tradeoff slideshare uses cookies to improve functionality and performance, and to provide you with relevant advertising. Big o notation
fn ogn means there are positive constants c and k such that. For practicality, we evaluated the space and time complexity for airtravel data. Software complexity an overview sciencedirect topics.
This means that the algorithm requires a number of steps proportional to the size of the task. We will start by studying some key data structures, such as arrays, lists.
When you remove an item from a stack, you always remove the topmost item. Design and analysis of algorithms time complexity in. Pdf on jan 1, 2010, tiziana calamoneri and others published algorithms
and complexity. But auxiliary space is the extra space or the temporary space. A course in data structures and algorithms is thus a course in implementing abstract data. A unique data structure
metric for measuring software quality was the number of live variables within a procedure or subroutine as a sign of undue complexity 180. About is a free web service that delivers books in pdf
format to all the users without any restrictions. Show full abstract simple grid data structure and one based on highway hierarchies. Preface to the sixth edition data structures and algorithms in
java provides an introduction to data structures and algorithms, including their design, analysis, and implementation. Further some light is thrown on different types of data structure such as queue,
array, linked list and stack. This is usually a great convenience because we can look for a solution that works in a speci. We will study about it in detail in the next tutorial. Algorithms,
complexity analysis and data structures matter.
Whereas i ndep has no parameter values for the dependencies between y and z, c. Apart from time complexity, its space complexity is also important. One data structure metric surviving to modern times
is the information flow, or fan infan out metric, which measures the number of modules that exchange data 181. Time complexity of algorithmis the number of dominating operations executed by the
algorithm as the function of data size. A on b olog n c on2 d on log n view answer discuss. The better the time complexity of an algorithm is, the faster the algorithm will carry out his work in
practice. In this paper we present a first nontrivial exact algorithm whose running time is in o1.
During these weeks we will go over the building blocks of programming, algorithms and analysis, data structures, object oriented programming. For example, we can store a list of items having the same
datatype using the array data structure. If an algorithms uses nested looping structure over the data then it is having quadratic complexity of on2. This material can be used as a reference manual
for developers, or you can refresh specific topics before an. The time complexity of algorithms is most commonly expressed using the big o notation. How to learn time complexity and space complexity
in data. The term data structure is used to denote a particular way of organizing data for particular types of operation. Complexity rules for computing the time complexity the complexity of each
read, write, and assignment statement can be take as o1 the complexity of a sequence of statements is determined by the summation rule the complexity of an if statement is the complexity of the
executed statements, plus the time for evaluating the condition. All tracks basic programming complexity analysis time and space complexity. What are the time complexities of various data structures.
Its an asymptotic notation to represent the time complexity. In hilberts time, the notion of algorithms was not formalized but he thought that a uni. In this repository, you can find the
implementation of algorithms and data structures in javascript. This page contains detailed tutorials on different data structures ds with topicwise problems. To view our digital bigo algorithm and
data structure complexity cheat sheet click here. The stack data structure is identical in concept to a physical stack of objects. Exam with answers data structures dit960 time monday 30th may 2016,
14. To analyze an algorithm is to determine the resources such as time. This webpage covers the space and time bigo complexities of common algorithms used in computer science. Pdf living with
complexity download full pdf book download. Download pdf living with complexity book full free. Here you can download the free data structures pdf notes ds notes pdf latest and old materials with
multiple file links to download.
Every time you traverse a string and add it to the existing structure, you perform a few operations like initializing. Download objective type questions of data structure pdf. When preparing for
technical interviews in the past, i found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that i
wouldnt be stumped when asked about them. Space complexity and different case of time complexity. Data structures and algorithms multiple choice questions. In some cases, minute details that affect
the running time of the implementation are explored. Fundamentals of data structure, simple data structures, ideas for algorithm design, the table data type, free storage management, sorting, storage
on external media, variants on the set data type, pseudorandom numbers, data compression, algorithms on graphs, algorithms on strings and geometric algorithms. This is essentially the number of
memory cells which an algorithm needs. I am trying to list time complexities of operations of common data structures like arrays, binary search tree, heap, linked list, etc. Sometime auxiliary space
is confused with space complexity. Pradyumansinh jadeja 9879461848 2702 data structure 6 time can mean the number of memory accesses performed, the number of comparisons between integers, the number
of times some inner loop is executed, or some other natural unit related to the amount of real time the algorithm will take. Living with complexity available for download and read online in other
formats. A data structure is a particular way of organizing data in a computer so that it can be used effectively. For i ndep, the zeroorder crf and linearchain crf were run individually, and
parameter values and times were aggregated. | {"url":"https://coalydcori.web.app/559.html","timestamp":"2024-11-02T11:07:23Z","content_type":"text/html","content_length":"15187","record_id":"<urn:uuid:27097da0-0209-4af9-a92d-9bdaa1922683>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00505.warc.gz"} |
Math Grade 6 Quiz Solving Equations - 6.EE.B.7Math Grade 6 Quiz Solving Equations - 6.EE.B.7 | Tutorified.com Quizzes
Math Grade 6 Quiz Solving Equations – 6.EE.B.7
The standard CCSS.MATH.CONTENT.6.EE.B.7 deals with solving real-world and mathematical problems through the writing and solving of equations in the form x + p = q and px = q, where p, q, and x
represent nonnegative rational numbers. This involves understanding how to isolate the variable and solve for its value, applying arithmetic operations inversely, and interpreting the solution in the
context of a problem. | {"url":"https://quizzes.tutorified.com/quizzes/math-grade-6-quiz-solving-equations-6-ee-b-7/","timestamp":"2024-11-06T17:44:29Z","content_type":"text/html","content_length":"126747","record_id":"<urn:uuid:bcd6c992-e770-475c-a9cc-65349ad41b0b>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00052.warc.gz"} |
Components All New MacOS Windows Linux iOS
Examples Mac & Win Server Client Guides Statistic FMM Blog Deprecated Old
Returns the interest rate per period of an annuity.
Component Version macOS Windows Linux Server iOS SDK
Math 14.5 ✅ Yes ✅ Yes ✅ Yes ✅ Yes ✅ Yes
MBS( "Math.Rate"; Nper; pmt; pv { ; fv; type; guess } )
Parameter Description Example Flags
Nper The total number of payment periods in an annuity.
pmt The payment made each period and cannot change over the life of the annuity.
Typically, pmt includes principal and interest but no other fees or taxes. If pmt is omitted, you must include the fv argument.
pv The present value — the total amount that a series of future payments is worth now.
fv The future value, or a cash balance you want to attain after the last payment is made. Optional
If fv is omitted, it is assumed to be 0 (the future value of a loan, for example, is 0). If fv is omitted, you must include the pmt argument.
type The number 0 or 1 and indicates when payments are due. Optional
0 or omitted: At the end of the period
1: At the beginning of the period
guess Your guess for what the rate will be. Optional
* If you omit guess, it is assumed to be 10 percent.
* If RATE does not converge, try different values for guess. RATE usually converges if guess is between 0 and 1.
Returns OK or error.
Returns the interest rate per period of an annuity.
Should be same as Rate() function in Excel.
RATE is calculated by iteration and can have zero or more solutions. If the successive results of RATE do not converge to within 0.0000001 after 20 iterations, RATE returns an error.
See also:
Do a test calculation:
Set Variable [ $LoanAmount ; Value: 8000 ]
Set Variable [ $MonthlyPaymentAmount ; Value: 152,5 ]
Set Variable [ $NumberOfYears ; Value: 5 ]
Set Variable [ $NumberOfPeriods ; Value: 60 ]
Set Variable [ $PaymentPeriodsPerYear ; Value: 12 ]
Set Variable [ $Rate ; Value: MBS("Math.Rate"; $NumberOfPeriods; -$MonthlyPaymentAmount; $LoanAmount) ]
Set Variable [ $Rate ; Value: Round ( $Rate * 100; 2 ) ]
Show Custom Dialog [ "Rate" ; "Monthly: " & $Rate & "%" // shows .45% & ¶ & "Annually: " & ($Rate * $PaymentPeriodsPerYear) & "%" ]
Release notes
• Version 14.5
□ Added Math.Rate function.
Blog Entries
This function checks for a license.
Created 9th October 2024, last changed 9th October 2024
Math.Random.PoissonDistribution - Math.Reciprocal | {"url":"https://www.mbsplugins.eu/MathRate.shtml","timestamp":"2024-11-11T07:25:00Z","content_type":"text/html","content_length":"16583","record_id":"<urn:uuid:46402694-d7a4-407e-9890-53650c418f76>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00198.warc.gz"} |
Rate Of Change
The Rate of Change (Roc) function measures rate of change relative to previous periods.The function is used to determine how rapidly the data is changing. The factor of 100 is usually used to merely
make the numbers easier to interpret or graph. The function can be used to measure the Roc of any data series, such as price or another indicator.
The most popular way to use the indicator is as follows. When indicator is above the 0.5 line, it is a buy signal. If indicator crosses the 0.5 line from above it is a sell signal.
To initialize Roc indicator use one of the following constructors:
Roc – sets default values: period = 14
Roc(Int32) – sets value for period
Roc(TimeSpan) – sets time period
ROC - property to get current value
1// Create new instance
2Roc roc = new Roc(28);
3 4// Number of stored values
5roc.HistoryCapacity = 2;
6 7// Add new data point
910// Get indicator value
11double IndicatorValue = roc.ROC;
12// Get previous value
13if (roc.HistoryCount == 2)
15double IndicatorPrevValue = roc[1]; | {"url":"https://rtmath.net/assets/docs/finanalysis/html/13105123-723c-40c1-a7e1-a4809389aba7.htm","timestamp":"2024-11-05T23:37:19Z","content_type":"text/html","content_length":"22989","record_id":"<urn:uuid:c0cd497c-c1dd-45b3-8122-c57d7098ca08>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00881.warc.gz"} |
Random Partition Forests
The Nearest Neighbor Descent method as usually described is technically a way to optimize an existing estimate of the nearest neighbor graph. You must think of a way to initialize the graph. The
obvious approach and the one used in the description of NND in (Dong, Moses, and Li 2011) is to start with a random selection of neighbors. One of the clever things about the PyNNDescent
implementation is that it uses a random partition forest (Dasgupta and Freund 2008) to come up with the initial guess. Random partition forests are part of a large group of tree-based methods. These
are often very fast and conceptually simple, but can be inaccurate. Much of the literature is devoted to proposals of tweaks to these methods to improve their performance, often at the expense of
their simplicity and speed. PyNNDescent (and rnndescent follows its lead) avoids this because we only need to get to a decent guess of the nearest neighbor graph which we can then improve by nearest
neighbor descent. As long as we don’t take substantially longer than the random initialization to come up with the guess and it’s sufficiently good, we should come out ahead.
Random Partition Forests
Here’s a basic introduction to how random partition forests work.
Building a Space-Partitioning Tree
First, we will consider the recipe for building a space-partitioning tree:
1. Select a dimension.
2. Select a split point along that dimension.
3. Split the data into two child nodes based on the split point.
4. Repeat steps 1-3 on each of the two groups.
5. When the number of items in a group is less than some threshold, the node is now a leaf, and stop splitting.
Variations of steps 1 and 2 determines the vast majority of the differences between the various tree-based methods.
Building a Random Partition Tree
For a random partition tree we:
1. Select two points at random.
2. Calculate the mid-point between those two points.
This is enough to define a hyperplane in the data. This is not exactly the algorithm as described in (Dasgupta and Freund 2008), but it is how it’s done in the very similar method Annoy.
Step 3 then involves calculating which side of the hyperplane each point is on and assigning data to the child nodes on that basis.
From Trees to Forests
A random partition forest is just a collection of random partition trees. Because of the random nature of the trees, they will all be different.
Build a Forest
To build a forest with rnndescent, use the rpf_build function. We’ll use the iris dataset as an example, with the goal of finding the 15-nearest neighbors of each item in the dataset.
Some options at your disposal:
• metric: the type of distance calculation to use. The default is euclidean, but there are a lot to choose from. See the help text for the metric parameter in rpf_build()` for details.
• n_trees: the number of trees to build. The default is to choose based on the size of the data provided, with a maximum of 32: eventually you will get diminishing returns from the number of trees
in a forest.
• leaf_size: the number of items in a leaf. The splitting procedure stops when there are fewer than this number of items in a node. The default is 10 but you will want the leaf size to scale with
the number of neighbors you will look for, so I have increased it to 15 for this example. The bigger this value the more accurate the search will be, but at the cost of a lot more distance
calculations to carry out. Conversely, if you make it too small compared to the number of neighbors, then you may end up with not all items finding k neighbors.
• max_tree_depth: the maximum depth of the tree. If a tree reaches this depth then even if the current node size exceeds the value of leaf_size, it will stop splitting. The point of splitting a
tree is that the size of each leaf should rapidly decrease as you go down the tree, and in an ideal case it would decrease by a factor of two at each level, so ideally we can process datasets
that vary by many orders of magnitude while the depth of the tree only increases by a few levels. The default max_tree_depth is 200, so if you trigger this limit, the answer may not be to
increase the depth. It’s more likely that there is something about the distribution of your data that prevents it from splitting well. In this case, if there’s a different metric to try that
still has relevance for your data, that’s worth a try, but possibly the best solution is to abandon the tree-based approach (for example initialize nearest neighbor descent with random
neighbors). If you set verbose = TRUE you will get a warning about the maximum leaf size being larger than leaf_size.
• margin: this makes a slight modification to how the assignment of data to the sides of the hyperplane is calculated. We’ll discuss this below.
The forest that is returned is just an R list, so you can save it and load it with saveRDS and readRDS without issue. But it’s not something you will want to inspect and definitely don’t modify it.
It’s mainly useful for passing to other functions, like the one we will talk about next.
Finding Nearest Neighbors
To use this to find nearest neighbors, a query point will traverse the tree from the root to a leaf, calculating the side of each hyperplane it encounters. All the items in the leaf in which it ends
up are then candidates for nearest neighbors.
To query the forest we just build, we use the rpf_knn_query function. Apart from the forest itself, we also need the data we want to query (query) and the data used to build the forest (reference),
because the forest doesn’t store that information. In thus case, because we are looking at the k-nearest neighbors or iris, the query and the reference are the same, but they don’t have to be. At
this point, we must also specify the number of neighbors we want.
The iris_query that is returned is a list with two matrices: idx contains for each row the indices of the k-nearest neighbors, and dist contains the distances.
A Small Optimization for the k-Nearest Neighbors
You could use the querying approach mentioned above for finding the k-nearest neighbors of the data that was used in building the tree. However, the data has already been partitioned so if you want
k-nearest neighbor data, there’s a more efficient way to do that: for each leaf, the k-nearest neighbors of each point in the leaf are the other members of that leaf. While usually the distance
calculations take up most of the time when looking for neighbors, you do avoid having to make any tree traversals and the associated hyperplane distance calculations.
This should give the same result as running rpf_build followed by rpf_knn_query (apart from the vagaries of the random number generator), but is a lot more convenient and a bit faster. You have
access to the same parameters for forest building as rpf_build, e.g. leaf_size, n_trees, max_tree_depth etc.
Additionally, if you want the k-nearest neighbors and you also want the forest for future querying, if you set ret_forest = TRUE, the return value will now also contain the forest as the forest item
in the list. In this example we build the forest (and get the 15-nearest neighbors) for the first 50 iris items and then query the remaining 100:
The margin parameter determines how to calculate the side of the hyperplane each item in a split belongs to. The usual method (margin = "explicit") does the same thing as in PyNNDescent: the way the
hyperplane is defined is to use the vector defined by the two points \(a\) and \(b\) as the normal vector to a plane, and then the point midway between them as the point on the plane. We then
calculate the margin of a point \(x\) (effectively the signed distance from the plane to \(x\)) as:
\[ \text{margin}(\mathbf{x}) = ((\mathbf{b} - \mathbf{a}) \cdot (\mathbf{x} - \frac{\mathbf{a} + \mathbf{b}}{2})) \]
Taking dot products of vectors and finding mid points is all totally unexceptional if you are using a Euclidean metric. And because there is a monotonic relationship between the cosine distances and
the Euclidean distance after normalization of vectors, we can define an “angular” version of this calculation that works on the normalized vectors.
But for some datasets this will be a bit weird and un-natural. Imagine a dataset of binary vectors in which you are applying e.g. the Hamming metric. The mid-point of two binary vectors is not a
binary vector, and nor does it make sense to think about the geometric relationship implied by a dot product.
As an alternative to calculating the margin via an explicit creation of a hyperplane, you could instead think about how the distance between \(x\) and \(a\), \(d_{xa}\) compares to the distance
between \(x\) and \(b\), \(d_{xb}\) and what the significance for the margin is. Remember that the vector defined by \(a\) and \(b\) is the normal vector to the hyperplane, so you can think of a line
connecting \(a\) and \(b\), with the hyperplane splitting that line in two equal halves. Now imagine \(x\) is somewhere on that line. If \(x\) is closer to \(a\) than \(b\) it must be on the same
side of the hyperplane as \(a\), and vice versa. Therefore we can calculate the margin by comparing \(d_{xa}\) and \(d_{xb}\) and seeing which value is smaller.
Because we don’t explicitly create the hyperplane, I call this the “implicit” margin method and you can choose to generate splits this way by setting margin = "implicit". We’ll use some random binary
data for this example.
Note the as.logical call: if rnndescent detects binary data in this format and you specify a metric which is appropriate for binary data (e.g. Hamming), and you use margin = "implicit" then a
specialized function is called which should be much faster than the functions written only with generic floating point data in mind.
The following will give the same results but for large datasets is likely to be noticeably slower:
So if the implicit margin method is faster (and makes sense for more metrics) why would you ever want to use the explicit method? Well, the implicit method is only faster for binary data with
specialized metrics. The downside of the implicit method is that determining the side of the hyperplane requires two distance calculations per point, whereas the explicit method only requires the dot
product calculation, which is likely to be only as costly as a single distance calculation. So for floating point data, the explicit method is likely to be about twice as fast. That’s a lot to think
about so the default setting for margin is "auto", which tries to do the right thing: if you are using binary data with a suitable metric, it will use the implicit method, otherwise it will use the
explicit method and normalize the vectors to give a more “angular” approach for some metrics that put more emphasis on angle versus magnitude.
Filtering a Forest
As mentioned at the beginning of this vignette, in rnndescent it’s expected that you would only use random partition forests as an initialization to nearest neighbor descent. In that case, keeping
the entire forest for querying new data is probably unnecessary: we can keep only the “best” trees. PyNNDescent only keeps one tree for this purpose. For determining what tree is “best”, we mean the
tree that reproduces the k-nearest neighbor graph most effectively. You can do this by comparing an existing k-nearest neighbor graph with that produced by a single tree. The rpf_filter function does
this for you:
n_trees is the number of trees to keep. Feel free to keep more if you like, although there is no extra diversification step to ensure that the trees being retained are both good at reproducing the
k-nearest neighbor graph and are diverse from each other (perhaps they reproduce different parts of the neighbor graph well?). The higher quality the k-nearest-neighbor graph is, the better the
filtering will work so although the example above uses the graph from the forest, you might get better results using the graph from having run nearest neighbor descent with the forest result as
Dasgupta, Sanjoy, and Yoav Freund. 2008. “Random Projection Trees and Low Dimensional Manifolds.” In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, 537–46.
Dong, Wei, Charikar Moses, and Kai Li. 2011. “Efficient k-Nearest Neighbor Graph Construction for Generic Similarity Measures.” In Proceedings of the 20th International Conference on World Wide Web, | {"url":"https://cran.itam.mx/web/packages/rnndescent/vignettes/random-partition-forests.html","timestamp":"2024-11-03T12:40:22Z","content_type":"text/html","content_length":"31045","record_id":"<urn:uuid:ac29c071-cc23-46b4-aba7-c0788dc6bb3c>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00735.warc.gz"} |
How to Prepare for the ParaPro Math Test?
What is the ParaPro test and how should you prepare for it? Maybe this is your question too!
The ParaPro test is a standardized test used to assess the aptitude of test takers for admission as teacher’s assistants or paraprofessionals.
The purpose of the ParaPro test is to ensure that test takers have sufficient knowledge and skills to perform well in the classroom and assist teachers in classroom instruction.
Passing this test is required for paraprofessional certification in many US states.
The ParaPro test is designed and administered by the Educational Testing Service (ETS).
One of the advantages of this test is that, unlike many tests, it allows test takers to take the ParaPro test remotely on their personal computer at home or elsewhere.
During this test, the knowledge of the candidates is evaluated in the following three subjects:
The total time students have to answer 90 questions on this test is 150 minutes.
The format of the ParaPro test questions is multiple choice.
In the ParaPro Math test, test takers are not allowed to use the calculator.
The ParaPro test score ranges from 420 to 480, and the score that test-takers need to pass depends on the state or school district.
The math section of the ParaPro test covers the following topics:
• Number and Quantity
• Geometry
• Data Interpretation, Statistics, and Probability
The Absolute Best Book to Ace the ParaPro Math Test
Original price was: $24.99.Current price is: $17.99.
How to Study for the ParaPro Math Test?
Those who want to become paraprofessionals should take the ParaPro Math test. Although it may seem difficult to pass the ParaPro Math test, test-takers will succeed if they are prepared enough.
If you are also planning to take this test, join us in this article. We guide you step by step on how to prepare for the ParaPro Math exam.
1. Choose your study program
One of the most important and first steps in preparing for the ParaPro Math test is to use credible resources and study guides. Fortunately, many authoritative books can help you in this regard.
Large test preparation companies usually publish books to prepare ParaPro test-takers, but finding the best books among them can be a bit confusing.
If you want to start studying ParaPro Math Test but do not know which complete book to use, ParaPro Math for Beginners The Ultimate Step by Step Guide to Preparing for the ParaPro Math Test is a
perfect and comprehensive book that can help you with a lot. This book is a great book to strengthen your skills and boost your self-confidence.
This one is an alternative book:
Original price was: $25.99.Current price is: $13.99.
Learn How to ACE the ParaPro Math Test with this Comprehensive Practice Book:
Original price was: $20.99.Current price is: $15.99.
Also, if you need a ParaPro workbook to gauge your knowledge, the ParaPro Math Practice WorkbookThe Most Comprehensive Review for the Math Section of the ParaPro Test is the best choice for you.
Worksheets can also be very helpful in preparing for the ParaPro Math test. Here are some great free worksheets for you: ParaPro Math Worksheets
There are also many online courses that you can take to prepare for the ParaPro Math exam.
ParaPro Math FREE Resources:
2. Change your attitude about math
The next important step in improving your skills for the ParaPro Math test is to have a positive attitude toward the math lesson. This may not seem like an important tip to you, but it can make a big
A positive attitude will speed up the study process and increase your success rate in the ParaPro Math test. So always try to remind yourself that math is not a burden on you, but a lesson that is
sweet if you learn it well.
3. Make the concepts clear
In the next step, it is better to analyze the mathematical concepts of the ParaPro test carefully. There are many benefits to doing this because it will save your time while studying.
Categorize the content and place each in one of two categories of basic or advanced concepts.
Then, first, study the basic math concepts of the ParaPro test. This will help you better understand the material and will prevent you from getting bored with math lessons.
4. Practice daily
It is a very important tip that your study should be regular and daily. If you have trouble following this point, you can set up a study schedule. Take a short time to study the basic math concepts
first and gradually increase this time. Try to stick to this program as much as you can.
5. Find the best way to learn
It is better to know which learning method is the best way for you to speed up the learning process. There are many methods you can use.
You can hire a private tutor or read tutorial books to learn better.
There are books for beginners that you can use if you want to learn the basics of the ParaPro Math test.
You can also take prep courses and classes. It all depends on which book and which method you feel most comfortable with. So, find the best way to learn according to your situation and pave the way
for your success in the ParaPro Math exam.
Best ParaPro Math Prep Resource
Original price was: $69.99.Current price is: $35.99.
6. Memorize the formulas well
Remembering the essential formulas in each math test can greatly speed up test takers’ answering the questions on test day.
In the ParaPro Math test, you will not be given a formula sheet, and this may even be to your advantage! Because the ParaPro Math test usually is used simple and easy formulas that you probably know
most of them.
It is enough to learn the application of each formula well with more practice so that you can easily remember them on the day of the test.
7. Take Practice Tests
After going through all the above steps and studying enough, you can use the practice tests to gain more mastery of the ParaPro Math test content. This is important because it will improve your
testing skills for the day of the ParaPro Math test.
At this stage, you can use online practice tests or books that include practice tests to further practice.
Try to simulate the test conditions and manage the time during the practice tests, because observing this point will greatly reduce your stress during the test.
8. Register for the test
You should visit the Educational Testing Service (ETS) website for detailed information on how to register.
To register, a Certification of Documentation Form, a ParaPro Registration Form, and an Eligibility Form (If English Is Not your Primary Language) are required and must be completed first then you
should post them or email them to this address.
Remember that all documentation must be received at least three weeks before the expected test date at ETS, and all of them must be verified before the test date.
After approval of the documents and forms by ETS, you will receive an email with registration instructions. So you have to wait for your email approval before registering for the exam.
If you want to register at the test center, you must contact the test center you want to make an appointment with. In this case, you have to pay all the test costs directly to the test center.
If you want to register for the test remotely, you will schedule an appointment and at the same time pay for the remote test.
9. Take the ParaPro Math test
Test day will be a fateful day for you! So try to get ready a little earlier than the start of the test. This will greatly reduce your stress.
Prepare the necessary equipment for the test day the night before. These include a valid identification card, one or two sharpened pencils, a good eraser, and a blue or black pen.
If it is home testing, two tissues are also allowed and the equipment is checked by the proctor before starting the test.
Remember that electronic devices, calculators, and any personal items are not allowed.
The math part of the ParaPro test consists of 30 questions and you have 30 minutes to answer the questions.
You need to be able to manage time efficiently during the test. Try to act fast but respond carefully.
Read the questions carefully and before marking an answer, pay attention to all the answer options.
There is no penalty for answering a question incorrectly, so do not leave any questions unanswered. Finally, if you have finished answering the test questions early, review your answers.
The Best ParaPro Quick Study Guide:
Original price was: $17.99.Current price is: $12.99.
ParaPro FAQs:
Here are some common questions about the ParaPro test:
What kind of math is on the ParaPro test?
The math section of the ParaPro test covers the following topics: Number and Quantity, Algebra, Geometry, Data Interpretation, Statistics, and Probability.
What is a passing score on the paraprofessional Math test?
The ParaPro test score ranges from 420 to 480, and the score that test-takers need to pass depends on the state or school district.
How do I study for the paraprofessional Math test?
You can use good prep books or online resources to prepare for the ParaPro Math test
What kind of questions are on the paraprofessional Math test?
The math section of the ParaPro test consists of 30 multiple-choice questions that test the math knowledge of test-takers.
Is the ParaPro Math test hard?
Like any other test, the ParaPro Math test may seem difficult, but if you are well prepared for test day, there is no reason to worry.
How long is a paraprofessional certificate good for?
The ParaPro test’s certification is valid for 3 years.
Can you take the ParaPro Math test online?
Yes, you can take this test remotely via your personal computer.
Can you use a calculator on the ParaPro Math test?
No, in the ParaPro Math test, test takers are not allowed to use the calculator.
Can you retake the ParaPro test?
Yes, you have to wait for 28 days after the initial test, then you can retake the test.
Looking for the best resources to help you or your student succeed on the ParaPro test?
The Best Book to Ace the ParaPro Test
Original price was: $19.99.Current price is: $14.99.
More from Effortless Math for ParaPro Test …
Did you forget to make a list of formulas for the ParaPro math test?
You do not need to do this anymore because we are giving you this list: ParaPro Math Formulas
Want to learn ParaPro math virtually?
Do not miss our Ultimate ParaPro Math Course (+ FREE Worksheets & Tests)!
Do not know which websites are suitable for learning ParaPro math?
Top 10 Free Websites for ParaPro Math Preparation will introduce you to the top math websites.
The Perfect Prep Books for the ParaPro Math Test
Have any questions about the ParaPro Test?
Write your questions about the ParaPro or any other topics below and we’ll reply!
Related to This Article
What people say about "How to Prepare for the ParaPro Math Test? - Effortless Math: We Help Students Learn to LOVE Mathematics"?
No one replied yet. | {"url":"https://www.effortlessmath.com/blog/how-to-prepare-for-parapro-math-test/","timestamp":"2024-11-01T23:52:12Z","content_type":"text/html","content_length":"124092","record_id":"<urn:uuid:8d6c8b63-00b8-4bd7-8202-ce0b9b28b706>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00702.warc.gz"} |
DAV Class 6 SST Chapter 23 Question Answer - Our Rural Governance
DAV Class 6 SST Chapter 23 Question Answer – Our Rural Governance
These DAV Class 6 SST Solutions and DAV Class 6 SST Chapter 23 Question Answer – Our Rural Governance are thoughtfully prepared by experienced teachers.
DAV Class 6 SST Ch 23 Question Answer – Our Rural Governance
DAV Public School Class 6 SST Chapter 23 Question Answer – Our Rural Governance
Something to Know
A. Tick (✓) the correct option.
Question 1.
Which one of the following is not a local body under the Panchayati Raj System?
(a) Nagar Panchayat
(b) Gram Panchayat
(c) Block Samiti
(d) Zila Parishad
(a) Nagar Panchayat
Question 2.
The Chairperson of a Gram Panchayat is not called as.
(a) Pradhan
(b) Mukhia
(c) President
(d) Sarpanch
(c) President
Question 3.
The best example of direct democracy in India is.
(a) Gram Sabha
(b) Gram Panchayat
(c) Block Samiti
(d) Zila Parishad
(a) Gram Sabha
Question 4.
The administrative work of a Panchayat Samiti are looked after by a.
(a) Public Relations Officer
(b) Health Officer
(c) Block Development Officer
(d) Sub-Divisional Officer
(c) Block Development Officer
Question 5.
The apex body of the Panchayati Raj system is.
(a) Zila Parishad
(b) Nyaya Panchayat
(c) Gram Panchayat
(d) Block Samiti
(a) Zila Parishad
B. Fill in the blanks.
1. Gram Sabha consists of all the registered …………… of the village.
2. The administrative work of the Block Samiti is looked after by a ……………
3. The Zila Parishad acts as a link between ………….. and …………….
4. The Gram Panchayat operates at the …………… level of the Panchayat Raj System.
5. The Zila Parishad distributes grants to the ……………
1. Voters
2. Panchayat Samiti
3. State Government and Block Samiti and the village Panchayat
4. Village
5. Block Samities
C. Write True or False for the following statements.
1. The lowest level of government in India is Nyaya Panchayat.
2. The members of Gram Sabha only elect the member of Gram Panchayat.
3. Zila Parishad implements the Five year Plans in the district.
4. Two or three small villages can have a common panchayat.
5. The Zila Parishad does not have any ex-officio member.
1. True
2. True
3. True
4. True
5. False
D. Answer the following questions in brief.
Question 1.
Mention three levels of the local self- govertning bodies under the Panchayat Raj System.
• Gram Panchayat at the village level
• Block Samiti or Panchayat Samiti at block level.
• Zila Parishad or Zila Panchayat at district level.
Question 2.
Write two main functions of the Gram Sabha.
• The Gram Sabha takes important decisions about the welfare and development of the village.
• It approves the annual budget of the Gram Panchayat.
Question 3.
What is the most important function of a Panchayat Samiti?
A Panchayat Samiti looks after all the affairs of the village. It takes important decisions about the welfare and development of the village.
Question 4.
How does a Village Panchayat gener¬ate its finanical resources?
A Village Panchayat gets its income from taxes and grants or aid from the government. At times, it raises loans to complete its welfare and developmental projects.
Question 5.
How are the Panchs and the Pradhan of a Gram Panchayat elected?
The Sarpanch or Pradhan of a Gram Panchayat is elected by its Gram Sabha.
E. Answer the following questions.
Question 1.
Explain any three functions of the Gram Panchayat.
Three functions of the Gram Pancha yat are:
• Provision of clean drinking water.
• Sanitation and public health and animal husbandry.
• Construction and maintenance of village roads, streets, lights, public wells, tanks, waterways and other public places in the villages.
Question 2.
Describe the composition of a Zila Parishad.
A Zila Parishad is constituted by some elected members, the Chairman of the Block Samitis, members of the Lok Sabha, Rajya Sabha, Vidhan Sabha, Vidhan Parishad, representatives of scheduled castes,
scheduled tribes, and elected women from the district. The Zila Parishad elects a President and a Vice President from amongst its members.
Question 3.
How does the Zila Parishad keep control over the other Panchayati Raj Institutions?
• It keeps the government informed about the working of local self-governing bodies.
• It prepares plans for overall development of the whole district in the field of education, agriculture, animal husbandry, health care, entertainment, village and cottage industries, etc.
• It also distributes government funds to Block Samitis.
Question 4.
Differentiate between a Gram Sabha and a Gram Panchayat.
• The Gram Sabha is the general body of the village. All the men and women of the village who have attained the age of 18 years and are registered as voters, from the Gram Sabha. The Gram Panchayat
is a local self-governing body of the Panchayati Raj System at the village level.
• The members of the Gram Sabha elect the members of the Gram Panchayat. The Gram Sabha puts control over Gram Panchayat.
• The Gram Sabha takes important decisions about the welfare and development of the village. These are later implemented by the Gram Panchayat. The Gram Sabha also approves the annual budget of the
Gram Panchayat.
Question 5.
Highlight the significance of self-governing bodies in a democracy like India.
The issues and problems of an area can be understood better by the local people. Therefore, the solution to the local problems must be left to the people themselves. They would sit together at a
common place, hold discussions and try to find solutions to their day-to-day local problems. Self-governing bodies in India works effectively at the grass-root level.
Value-Based Question
Saryu and Sunder are cousins living in a village in Maharashtra. Once in a meeting of Gram Sabha, Saryu raised the problem of shortage of clean drinking water and the lower level of underground
water. Sunder supported him. In consultation with the experts, an effective plan was prepared. It was implemented within two years, with the result, that there was ample clean drinking water to
fulfill the needs of the villagers.
Question 1.
What would have happened, had there been no cooperation, no determination and no spirit of selfhelp among the villagers?
Do it yourself
Question 2.
Why is it important to provide drin¬king water in each and every part of India?
Do it yourself.
Map Skill
On the political map of India, locate and label the following.
(a) The state with largest number of districts- Uttar Pradesh
(b) The state which has won the National Award for the Best State for successfully implementing Panchayati Raj Programmes- Kerala
(c) The first state in India to fix minimum educational qualification for contesting elections to the Panchayati Raj Institutions- Rajasthan
(d) The states which have implemented 50% reservation for women in Gram Panchayats- Madhya Pradesh, Bihar, Uttarakhand, Himachal Pradesh
(e) The state which has made voting compulsory in the elections for the local bodies-Gujarat
Something to Do
Question 1.
lect a Nyaya Panchayat at the class level to settle the disputes of your class.
For self-attempt.
Question 2.
Find the names of rural local self-bodies of your state. Invite an elected member of Gram Panchayat and discuss with him the working of that body.
For self-attempt.
Question 3.
Enact the story ‘Panch Parmeshwar’ in your school in the form of a skit.
For self-attempt.
Question 4.
Hold a session of the class Panchayat to settle some dispute between two students or two groups of students of your class.
For self-attempt.
Question 5.
Arrange a trip to a nearby village. Find out the achievements of the Gram Panchayat. Also enlist some unfulfilled tasks which you would want the Gram Panchayat to perform.
For self-attempt.
Question 6.
Discuss the importance of justice and impartiality while deciding a dispute.
For self-attempt.
DAV Class 6 Social Science Chapter 23 Question Answer – Democracy and Government
A. Tick (✓) the correct option.
Question 1.
The members of a Gram Panchayat are directly elected for a term of
(a) 5 years
(b) 4 years
(c) 3 years
(d) 2 years
(a) 5 years
Question 2.
The Gram Panchayat.
(a) supplies quality seeds and fertilisers to farmers
(b) keeps*record of births and deaths of its village
(c) looks after public health and animal husbandly
(d) all of the above
(d) all of the above
Question 3.
The most popular fairs organised in the village are.
(a) trade fairs
(b) cattle fairs
(c) handicrafts fairs
(d) none of them
(b) cattle fairs
Question 4. T
here is one Nyaya Panchayat for every.
(a) two villages
(b) two-three villages
(c) three-four villages
(b) four-five villages
(c) three-four villages
Question 5.
The Nyaya Panchayat hears cases regarding.
(a) trespassing
(b) minor thefts
(c) water disputes
(d) all of them
(d) all of them
B. Very Short Answer Type Questions
Question 1.
What is meant by local self-govern¬ment?
The system under which local people govern their day-to-day affairs them¬selves is called local self-government.
Question 2.
What is Panchayati Raj System?
Panchayati Raj System is a process through which people participate in their own government.
Question 3.
Which authority approves the work of the Gram Panchayat?
The Gram Sabha approves the work of the Gram Panchayat.
Question 4.
What is a Gram Sabha?
A Gram Sabha is a general body of a village. All the men and women of the village who have attained the age of 18 years and are registered as voters, from it.
Question 5.
Who works in the absence of the Pradhan or Sarpanch?
The UP-Pmdhan takes over Lhe responsibilities of the Pi adhan in his absence.
Question 6.
What functions does the Panchayat Secretary perform in the Village Panchayat?
1’he Panchayat Secretary helps the elected members in the administer ative work such as maintaining the account of the income of expenditure and preparing reports of the meetings.
Question 7.
What is a Nyaya Panchayat?
It is a form of village court that helps the people to gel speedy and inexpensive justice.
Question 8.
What is the limitations of a Nyaya Panchayat?
It cannot sent a person to jail.
Question 9.
What different names is block Samiti known by?
Block Samiti is known by different names like Khand Samiu, Panchayat Samiti, and Kshetra Samilr. Prakhmid Samiti etc.
Question 10.
How many SC/ST members musi be there in the Panchayat Samiti?
Four SC/ST members must be there in the Panchayat Samiti
Question 11.
What is meant by an ex-officio member?
An ex-officio member is a person who automatically becomes a member of a body as he holds a par ueular post.
Question 12.
Who become the ex-officio members of the Panchayat Samiti s?
Ail the Pradhans of various Gram Panchayats, members of Vidhan Sabha, Vidhan Parishad, Lok Sabha and Rajya sabha who represent that Block, become the ex offcio members of the Panchayat Samitis.
Question 13.
What is the main function of the Zila Parishad?
The Zila Parishad supervises and coordinates the works of all the Block Samities of the district and also of the Gram Panchayats which are
under them.
Question 14.
Who maintains the records and accounts of the Zila Parishad?
The Secretary of the Zila Parishad maintains its records and accounts.
Question 15.
Mention the sources of income of a Zila Parishad.
A Zila Parishad gets financial grant from the State Government. It also gets the rent of its properties and certain other taxes.
C. Short Answer Type Questions
Question 1.
Mention three advantages of the system of the local self-government.
• The system of the local self¬government exists in every village or city to help and assist the people to meet the needs of their community.
• This system gives and opportunity to the people to develop self-reliance, initiative, power of decision-making and participation in the democratic process of the government.
• The system also lessens the burden of the State Government.
Question 2.
Write a short note on the Gram Sabha.
The Gram Sabha is a meeting of all adults who live in the area covered by a Panchayat. This could be only one village or some villages. Anyone who is 18 years old or above, and has the right to vote
is a member of the Gram Sabha. It elects not only the members of Gram Panchayat but also elects its Pradhan or the Headman. The Gram Sabha holds its meetings atleast twice a year. It makes important
decisions about the welfare and development of the village. The Gram Sabha is the best example of direct democracy in India.
Question 3.
What is called Block Samiti? Mention its sources of income.
The local self-body that works for the entire block is called Block Samiti. It acts as a link between the Gram Panchayat and the Zila Parishad.
The income of the Block Samiti comes from two sources-
• By levying taxes on water, land, shops, houses, fairs, expert services, common pastures, etc.
• By getting grants from the State Government.
Question 4.
Write three important points about the Nyaya Panchayat.
• The Nyaya Panchayat is a form of village court which helps the people to get speedy and inexpensive justice. Usually, three or four villages have one Nyaya Panchayat.
• The Nyaya Panchayat hears and decides only civil and criminal cases of minor nature like trespassing, minor thefts, water disputes, etc.
• It can impose a fine of only upto ? 100. But, it cannot send a person to prison.
D. Long Answer Type Questions
Question 1.
Mention the various functions of Panchayat Samiti.
The Panchayat Samiti performs various functions –
• It looks after the developmental and welfare works of the villages of a particular block.
• It gives advice to the villages in the field of agriculture, education, medicine, veterinary, etc.
• It also supervises the projects being undertaken by the Village Panchayats.
• The Panchayat Samiti also looks after agriculture, promotion of cottage industries, poultry and fishery.
• It helps in the formation of co-operative societies, etc.
Question 2.
Mention the various functions of the Gram Panchayat.
The functions of the Gram Panchayat include:
• Providing clean drinking water.
• Provision of centres of adult literacy.
• Sanitation and public health and animal husbandry.
• Planting trees.
• Construction and maintenance of village roads, street lights, public wells, tanks, water ways and other public places in the village.
• Supervision of work of government servants like policemen, workers of Primary Healthcentres, teachers, etc.
• Supplying of quality seeds and fertilisers.
• Organising of fairs and festivals.
• Keeping record of births and deaths. | {"url":"https://www.learncram.com/dav-solutions/dav-class-6-sst-chapter-23-question-answer/","timestamp":"2024-11-06T07:34:47Z","content_type":"text/html","content_length":"80056","record_id":"<urn:uuid:061715d2-dce0-474c-bdd6-db22166e5025>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00172.warc.gz"} |
All commands
Calculates nth Fibonacci number for all n>=0, (much faster than matrix power algorithm from http://everything2.com/title/Compute+Fibonacci+numbers+FAST%2521 ) n=70332 is the biggest value at http://
bigprimes.net/archive/fibonacci/ (corresponds to n=70331 there), this calculates it in less than a second, even on a netbook. UPDATE: Now even faster! Uses recurrence relation for F(2n), see http://
en.wikipedia.org/wiki/Fibonacci_number#Matrix_form n is now adjusted to match Fn at wikipedia, so bigprimes.net table is offset by 1. UPDATE2: Probably fastest possible now ;), uses a simple monoid
operation: uses monoid (a,b).(x,y)=(ax+bx+ay,ax+by) with identity (0,1), and recursion relations: F(2n-1)=Fn*Fn+F(n-1)*F(n-1) F(2n)=Fn*(2*F(n-1)+Fn) then apply fast exponentiation to (1,0)^n = (Fn,F
(n-1)) . Note that: (1,0)^-1=(1,-1) so (a,b).(1,0) = (a+b,a) and (a,b)/(1,0)=(a,b).(1,0)^-1=(b,a-b) So we can also use a NAF representation to do the exponentiation,http://en.wikipedia.org/wiki/
Non-adjacent_form , it's also very fast (about the same, depends on n): time echo 'n=70332;m=(n+1)/2;a=0;b=1;i=0;while(m>0){z=0;if(m%2)z=2-(m%4);m=(m-z)/2;e[i++]=z};while(i--){c=a*a;a=c+2*a*b;b=
c+b*b;if(e[i]>0){t=a;a+=b;b=t};if(e[i]<0){t=a;a=b;b=t-b}};if(n%2)a*a+b*b;if(!n%2)a*(a+2*b)' | bc Show Sample Output | {"url":"https://www.commandlinefu.com/commands/browse/sort-by-votes/14200","timestamp":"2024-11-13T14:22:34Z","content_type":"application/xhtml+xml","content_length":"73556","record_id":"<urn:uuid:44207c15-2c77-4df4-ad17-8ca1ced2927f>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00207.warc.gz"} |
NCERT solutions class 6 maths chapter 5 exercise 5.8
NCERT Solutions for Class 6 Maths Chapter 5 Understanding Elementary Shapes (Ex 5.8) Exercise 5.8
NCERT Solutions for Class 6 Maths Chapter 5 Understanding Elementary Shapes (Ex 5.8) Exercise 5.8
Class 6 is regarded as one of the most critical years in a student’s academic career. It prepares them for the more difficult subjects they will study in Class 7 and later. Plus, it is much harder
than what was taught to students in Class 5, as it marks the beginning of secondary classes. Students in secondary classes face new challenges that they have never faced before. It requires students
to study some new concepts that are of a much higher difficulty level than what they have studied before. Maths is a required subject for students who attend schools connected to the Central Board of
Secondary Education (CBSE). To completely understand the concepts taught in higher classes and get higher scores in examinations, it is essential to have a thorough knowledge of the topics covered in
Class 6. Students may find it challenging and unfamiliar to take multiple exams. If students want to fully understand the topics they learn, they must study the chapters in their entirety and
practice answering as many questions as they can. This can provide them with a significant advantage while taking their final exams. If students are aware of the many question types that can be
encountered on tests, they will be able to respond to more questions on the final examination. Ultimately, they may be able to answer more questions correctly, allowing them to perform better and
earn higher grades.
One of the most challenging subjects for students to grasp is Maths. Class 6 is a crucial year since it lays the foundation for their future courses. The majority of students struggle with the
incredibly intimidating mix of Maths and Class 6. However, if it is learned and taught properly, Maths may develop into a genuinely exciting subject. Students across all boards of education across
the nation struggle with this subject. The CBSE Maths curriculum covers themes that are logic- and concept-focused, in contrast to other subjects. When students are able to understand the concepts,
they can perform well on their tests. The Extramarks portal is available to assist Class 6 students in their preparation for exams. To enhance their understanding of the ideas more quickly and fully,
students can use interactive study tools from Extramarks that include 3D video modules made by subject-matter experts. Students need a variety of tools to help them prepare for their exams, including
the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8. With the help of the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8, students may comprehend the various types of questions that appear
on the question paper and get ready for the yearly exams. The NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8 will teach students effective strategies for responding to the question paper. In
order to perform well in exams, it is essential for students to practise with NCERT solutions, such as the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8.
NCERT Class 6 Maths Chapter 5 Exercise 5.8 covers the topic of Polygons. This topic is part of the chapter on Understanding Elementary Shapes. There are five questions in Class 6 Maths Chapter 5
Exercise 5.8. These questions require students to identify as well as draw the figures of various types of polygons. It is essential that students solve each of these questions to be able to
understand the concept of Polygons thoroughly. This will not only help them score better marks in the Class 6 examinations but also in their future classes. Moreover, Extramarks provides the
solutions to this exercise, so that students will find it easier to solve all the problems.
Access Other Exercises of Class 6 Maths Chapter 5
│Chapter 5 – Understanding Elementary Shapes Exercises │
│Exercise 5.1 │7 Questions & Solutions │
│Exercise 5.2 │7 Questions & Solutions │
│Exercise 5.3 │2 Questions & Solutions │
│Exercise 5.4 │11 Questions & Solutions │
│Exercise 5.5 │4 Questions & Solutions │
│Exercise 5.6 │4 Questions & Solutions │
│Exercise 5.7 │3 Questions & Solutions │
│Exercise 5.9 │2 Questions & Solutions │
NCERT Solutions for Class 6 Maths Chapter 5 Understanding Elementary Shapes (Ex 5.8) Exercise 5.8
It is essential that students have access to the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8 throughout the year so they can practice the questions as and when required. This could be made
possible by having the solutions in PDF format. Having the PDF file in hand will give students offline access to the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8. As a result, they will be
able to use them whenever they want. Hence, Extramarks provides the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8 in PDF format, which can be downloaded from its website and mobile
Access NCERT solutions for Class 6 Maths Chapter 5 – Understanding Elementary Shapes
In order to prepare for exams, students require access to a range of questions and fully developed solutions. The majority of question banks use NCERT textbook exercises as the main source of their
content. When students have access to the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8, answering questions is made easier. That is why Extramarks offers NCERT Solutions Class 6 Maths Chapter
5 Exercise 5.8 on its website and mobile application.
NCERT Solutions for Class 6 Maths Chapter 5 Understanding Elementary Shapes Exercise 5.8
The most crucial resource for practising questions for the final exams is the NCERT exercises. Students are often advised to practise the NCERT textbook exercise questions. In order to check their
answers, they additionally require the right answers to the questions. Thus, the NCERT solutions enable students to receive the assistance they require while they study for the annual exams. By using
the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8, students can apply the concepts from the chapter in a better and more precise manner. Teachers with advanced degrees wrote the thorough and
error-free NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8. These solutions were prepared with the Class 6 students’ intellectual capacity in mind. The difficult problems have been divided into
smaller, more manageable pieces to make writing answers to the questions asked in this exercise easier. Teachers frequently emphasise the importance of detailed, well-explained answers. Students
should carefully follow the instructions in the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8 for the best scores. Since many of the figures provided in the NCERT textbook are difficult for
students to understand and remember, they are unable to identify them to answer the exercises’ questions. However, if students practise using the solutions in the NCERT Solutions Class 6 Maths
Chapter 5 Exercise 5.8, it will be easier for them to remember the figures and answer the questions correctly.
Most of the ideas covered in Chapter 5 are new to students. They will not be adequately prepared for the exams by only reading those topics. To understand such topics, they must practise a lot of the
NCERT questions. To help students complete these exercises, the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8 are accessible on Extramarks. With the help of the NCERT Solutions Class 6 Maths
Chapter 5 Exercise 5.8, students can better understand the kind of questions that are found on the annual exams. After completing the NCERT Solutions for Class 6 Maths Chapter 5 Exercise 5.8, they
will be able to identify the critical topics from which questions frequently appear in exams.They can use Extramarks’ NCERT solutions to learn the best way to format their answers in order to get the
best possible grades. Their time management skills will also advance as they regularly practise questions with the help of the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8. Students can
evaluate their strengths and weaknesses by completing NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8.They can then focus on enhancing their weak areas during revision.
The NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8, along with other study resources on Extramarks, go over every important topic covered by the CBSE syllabus for the Class 6 exams. To assist
students in preparing for possible problems that might appear in exams, highly qualified educators have created the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8. Students can also practise
the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8 to get ready for competitive exams such as the Olympiads. They should practise the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8
multiple times in order to thoroughly comprehend the problems. In order to successfully complete the question paper within the stipulated time period, students must frequently practise the NCERT
textbook exercises. Lack of practise is one of the most likely causes of students’ inability to complete the question paper within the specified time. There are also a few questions from the NCERT
book on the exam paper. Utilising the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8 teaches students how to properly respond to these types of questions, enabling them to finish the exam’s
question paper ahead of schedule.
Q.1 Examine whether the following are polygons. If any one among them is not, say why?
(a) The given figure is not closed, so it is not a polygon.
(b) The given figure is closed and has six line segments, so it is a polygon.
(c) The given figure is closed but has no line segment, so it is not a polygon.
(d) The given figure is closed but is joined by a curve, so it is not a polygon.
Q.2 Name each polygon.
Make two more examples of each of these.
1. The given figure has four sides, so it is a quadrilateral. Example : A table top, book
2. The given figure has three sides, so it is a triangle. Example : Pizza slice, Sandwich
3. The given figure has five sides, so it is a pentagon. Example : Road signs, Rangoli patterns
4. The given figure has eight sides, so it is an octagon. Example : Bolt, Some clock designs.
Q.3 Draw a rough sketch of a regular hexagon. Connecting any three of its vertices, draw a triangle. Identify the type of the triangle you have drawn.
This is the rough sketch of a regular hexagon.
The triangle drawn on connecting any three vertices of the hexagon is an isosceles obtuse triangle.
Q.4 Draw a rough sketch of a regular octagon. (Use squared paper if you wish). Draw a rectangle by joining exactly four of the vertices of the octagon.
Q.5 A diagonal is a line segment that joins any two vertices of the polygon and is not a side of the polygon. Draw a rough sketch of a pentagon and draw its diagonals.
FAQs (Frequently Asked Questions)
1. Does Extramarks provide the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8?
Although it can be challenging to find the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8, Extramarks provides them on its website and mobile application.
2. What advantages do students gain from using the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8?
In the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8 provided by Extramarks, all of the exercises’ questions are addressed. The solutions include both fundamental and highly complex problems.
Answers to all the questions are given completely. Students can use the NCERT Solutions Class 6 Maths Chapter 5 Exercise 5.8 with confidence to get exact answers to all the questions. These problems
are accurately addressed in the Class 6 Maths Chapter 5 Exercise 5.8 Answers, and they are also given in a style that is clear to Class 6 students. Students who correctly respond to these questions
will score higher on their tests. If students have already practised answering all of these questions, they will find it easier to do so when they take their annual exams. | {"url":"https://www.extramarks.com/studymaterials/ncert-solutions/ncert-solutions-class-6-maths-chapter-5-exercise-5-8/","timestamp":"2024-11-09T09:24:30Z","content_type":"text/html","content_length":"638644","record_id":"<urn:uuid:f88b093e-3d82-44c6-9f58-260a01ff14d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00459.warc.gz"} |
Exact Conditional Logistic Regression
The theory of exact logistic regression, also known as exact conditional logistic regression, was originally laid out by Cox (1970), and the computational methods employed in PROC LOGISTIC are
described in Hirji, Mehta, and Patel (1987); Hirji (1992); Mehta, Patel, and Senchaudhuri (1992). Other useful references for the derivations include Cox and Snell (1989); Agresti (1990); Mehta and
Patel (1995).
Exact conditional inference is based on generating the conditional distribution for the sufficient statistics of the parameters of interest. This distribution is called the permutation or exact
conditional distribution. Using the notation in the section Computational Details, follow Mehta and Patel (1995) and first note that the sufficient statistics for the parameter vector of intercepts
and slopes, , are
Denote a vector of observable sufficient statistics as .
The probability density function (PDF) for can be created by summing over all binary sequences that generate an observable and letting denote the number of sequences that generate
In order to condition out the nuisance parameters, partition the parameter vector , where is a vector of the nuisance parameters, and is the parameter vector for the remaining parameters of interest.
Likewise, partition into and , into and , and into and . The nuisance parameters can be removed from the analysis by conditioning on their sufficient statistics to create the conditional likelihood
of given ,
where is the number of vectors such that and . Note that the nuisance parameters have factored out of this equation, and that is a constant.
The goal of the exact conditional analysis is to determine how likely the observed response is with respect to all possible responses . One way to proceed is to generate every vector for which , and
count the number of vectors for which is equal to each unique . Generating the conditional distribution from complete enumeration of the joint distribution is conceptually simple; however, this
method becomes computationally infeasible very quickly. For example, if you had only 30 observations, you would have to scan through different vectors.
Several algorithms are available in PROC LOGISTIC to generate the exact distribution. All of the algorithms are based on the following observation. Given any and a design , let and be the first i
rows of each matrix. Write the sufficient statistic based on these i rows as . A recursion relation results: .
The following methods are available:
• The multivariate shift algorithm developed by Hirji, Mehta, and Patel (1987), which steps through the recursion relation by adding one observation at a time and building an intermediate
distribution at each step. If it determines that for the nuisance parameters could eventually equal , then is added to the intermediate distribution.
• An extension of the multivariate shift algorithm to generalized logit models by Hirji (1992). Since the generalized logit model fits a new set of parameters to each logit, the number of
parameters in the model can easily get too large for this algorithm to handle. Note for these models that the hypothesis tests for each effect are computed across the logit functions, while
individual parameters are estimated for each logit function.
• A network algorithm described in Mehta, Patel, and Senchaudhuri (1992), which builds a network for each parameter that you are conditioning out in order to identify feasible for the vector. These
networks are combined and the set of feasible is further reduced, and then the multivariate shift algorithm uses this knowledge to build the exact distribution without adding as many intermediate
as the multivariate shift algorithm does.
• A hybrid Monte Carlo and network algorithm described by Mehta, Patel, and Senchaudhuri (2000), which extends their 1992 algorithm by sampling from the combined network to build the exact
The bulk of the computation time and memory for these algorithms is consumed by the creation of the networks and the exact joint distribution. After the joint distribution for a set of effects is
created, the computational effort required to produce hypothesis tests and parameter estimates for any subset of the effects is (relatively) trivial. See the section Computational Resources for Exact
Logistic Regression for more computational notes about exact analyses.
Note: An alternative to using these exact conditional methods is to perform Firth’s bias-reducing penalized likelihood method (see the FIRTH option in the MODEL statement); this method has the
advantage of being much faster and less memory intensive than exact algorithms, but it might not converge to a solution.
Consider testing the null hypothesis against the alternative , conditional on . Under the null hypothesis, the test statistic for the exact probability test is just , while the corresponding p-value
is the probability of getting a less likely (more extreme) statistic,
where there exist with , , and .
For the exact conditional scores test, the conditional mean and variance matrix of the (conditional on ) are calculated, and the score statistic for the observed value,
is compared to the score for each member of the distribution
The resulting p-value is
where there exist with , , and .
The mid-p statistic, defined as
was proposed by Lancaster (1961) to compensate for the discreteness of a distribution. See Agresti (1992) for more information. However, to allow for more flexibility in handling ties, you can write
the mid-p statistic as (based on a suggestion by Lamotte (2002) and generalizing Vollset, Hirji, and Afifi (1991))
where, for , is using strict inequalities, and is using equalities with the added restriction that . Letting yields Lancaster’s mid-p.
Caution: When the exact distribution has ties and METHOD=NETWORKMC is specified, the Monte Carlo algorithm estimates with error, and hence it cannot determine precisely which values contribute to the
reported p-values. For example, if the exact distribution has densities and if the observed statistic has probability 0.2, then the exact probability p-value is exactly 0.6. Under Monte Carlo
sampling, if the densities after N samples are and the observed probability is 0.21, then the resulting p-value is 0.39. Therefore, the exact probability test p-value for this example fluctuates
between 0.2, 0.4, and 0.6, and the reported p-values are actually lower bounds for the true p-values. If you need more precise values, you can specify the OUTDIST= option, determine appropriate
cutoff values for the observed probability and score, and then construct the true p-value estimates from the OUTDIST= data set and display them in the SAS log by using the following statements:
data _null_;
set outdist end=end;
retain pvalueProb 0 pvalueScore 0;
if prob < ProbCutOff then pvalueProb+prob;
if score > ScoreCutOff then pvalueScore+prob;
if end then put pvalueProb= pvalueScore=;
Inference for a Single Parameter
Exact parameter estimates are derived for a single parameter by regarding all the other parameters as nuisance parameters. The appropriate sufficient statistics are and , with their observed values
denoted by the lowercase t. Hence, the conditional PDF used to create the parameter estimate for is
for .
The maximum exact conditional likelihood estimate is the quantity , which maximizes the conditional PDF. A Newton-Raphson algorithm is used to perform this search. However, if the observed attains
either its maximum or minimum value in the exact distribution (that is, either or ), then the conditional PDF is monotonically increasing in and cannot be maximized. In this case, a median unbiased
estimate (Hirji, Tsiatis, and Mehta, 1989) is produced that satisfies , and a Newton-Raphson algorithm is used to perform the search.
The standard error of the exact conditional likelihood estimate is just the negative of the inverse of the second derivative of the exact conditional log likelihood (Agresti, 2002).
Likelihood ratio tests based on the conditional PDF are used to test the null against the alternative . The critical region for this UMP test consists of the upper tail of values for in the exact
distribution. Thus, the one-sided significance level is
Similarly, the one-sided significance level against is
The two-sided significance level against is calculated as
An upper % exact confidence limit for corresponding to the observed is the solution of , while the lower exact confidence limit is the solution of . Again, a Newton-Raphson procedure is used to
search for the solutions. Note that one of the confidence limits for a median unbiased estimate is set to infinity, but the other is still computed at . This results in the display of a one-sided %
confidence interval; if you want the limit instead, you can specify the ONESIDED option.
Specifying the ONESIDED option displays only one p-value and one confidence interval, because small values of and support different alternative hypotheses and only one of these p-values can be less
than 0.50.
The mid-p confidence limits are the solutions to for (Vollset, Hirji, and Afifi, 1991). produces the usual exact (or max-p) confidence interval, yields the mid-p interval, and gives the min-p
interval. The mean of the endpoints of the max-p and min-p intervals provides the mean-p interval as defined by Hirji, Mehta, and Patel (1988).
Estimates and confidence intervals for the odds ratios are produced by exponentiating the estimates and interval endpoints for the parameters.
Notes about Exact p-Values
In the “Conditional Exact Tests” table, the exact probability test is not necessarily a sum of tail areas and can be inflated if the distribution is skewed. The more robust exact conditional scores
test is a sum of tail areas and is generally preferred over the exact probability test.
The p-value reported for a single parameter in the “Exact Parameter Estimates” table is twice the one-sided tail area of a likelihood ratio test against the null hypothesis of the parameter equaling | {"url":"http://support.sas.com/documentation/cdl/en/statug/65328/HTML/default/statug_logistic_details52.htm","timestamp":"2024-11-10T06:49:12Z","content_type":"application/xhtml+xml","content_length":"60952","record_id":"<urn:uuid:30318a80-c0d1-4aaa-b896-3178f4e88d94>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00262.warc.gz"} |
The authors are grateful for the financial support received from the National Key Technology R&D Program (No. 2015BAK14B02), the National Natural Science Foundation of China (No. 51578320, 51308321),
and National Non-profit Institute Research Grant of IGP-CEA (Grant No: DQJB14C01).
American Red Cross (ARC) (2002) Standards for hurricane evacuation shelter selection, ARC 4496. Washington, DC
American Society of Civil Engineers (ASCE) (2010) Minimum design loads for buildings and other structures, ASCE 7–10. Reston, VA
Antoniou S, Pinho R (2004) Advantages and limitations of adaptive and non-adaptive force-based pushover procedures. J Earthq Eng 8:497–522
Azarbakht A, Dolšek M (2007) Prediction of the median IDA curve by employing a limited number of ground motion records. Earthq Eng Struct D 36(15): 2401–2421
Behr RA (1998) Seismic performance of architectural glass in mid-rise curtain wall. Journal of Architectural Engineering–ASCE 4:94–98
Braga F, Manfredi V, Masi A, Salvatori A, Vona M (2011) Performance of non-structural elements in RC buildings during the L’Aquila, 2009 earthquake. Bull Earthq Eng 9:307–324
CECS127 (2001) Technical specification for point supported glass curtain wall. China Association for Engineering Construction Standardization, Beijing (In Chinese)
Chan YF, Alagappan K, Gandhi A, Donovan C, Tewari M, Zaets SB (2006) Disaster management following the Chi-Chi Earthquake in Taiwan. Prehospital and Disaster Medicine 21:196–202
Construction Standardization Information Network (CCSN) (2014) Design code for urban disasters emergency shelter (for public comment). http://www.ccsn.gov.cn/ (In Chinese)
Ellidokuz H, Ucku R, Aydin UY, Ellidokuz E (2005) Risk Factors for Death and Injuries in Earthquake: Cross-sectional Study from Afyon, Turkey. Croat Med J 46:613–618
Federal Emergency Management Agency (FEMA) (2008) Design and construction guidance for community safe rooms, FEMA P361. Washington, DC
Federal Emergency Management Agency (FEMA) (2009) Quantification of building seismic performance factors, FEMA P695. Washington DC
Federal Emergency Management Agency (FEMA) (2012) Multi-hazard loss estimation methodology HAZUS–MH 2.1 advanced engineering building module (AEBM) technical and user’s manual. Washington, DC
Ferracuti B, Pinho R, Savoia M, Francia R (2009) Verification of displacement-based adaptive pushover through multi-ground motion incremental dynamic analyses. Eng Struct 31:1789–1799
GB21734 (2008) Emergency shelter for earthquake disasters–site and its facilities. Standards Press of China, Beijing (In Chinese)
GB50011 (2010) Code for seismic design of building. China Architecture Industry Press, Beijing (In Chinese)
GB50413 (2007) Standard for urban planning on earthquake resistance and hazardous prevention. China Architecture Industry Press, Beijing (In Chinese)
Goulet CA, Haselton CB, Mitrani-Reiser J, Beck JL, Deierlein GG, Porter KA, et al. (2007) Evaluation of the seismic performance of a code-conforming reinforced-concrete frame building–from seismic
hazard to collapse safety and economic losses. Earthq Eng Struct D 36:1973–1997
Hamada M, Aydan O, Sakamoto A (2007) A quick report on Noto Peninsula Earthquake on March 25, 2007, Japan Society of Civil Engineers Report. Tokyo
Iervolino I, Cornell CA (2005) Record selection for nonlinear seismic analysis of structures. Earthq Spectra 21:685–713
Iervolino I, Manfredi G, Cosenza E (2006) Ground motion duration effects on nonlinear seismic response. Earthq Eng Struct D 35:21–38
Iervolino I, Galasso C, Cosenza E (2010) REXEL: computer aided record selection for code-based seismic structural analysis. Bull Earthq Eng 8:339–362
International Code Council (ICC) (2009) International Building Code. Country Club Hills, IL
JGJ102 (2003) Technical code for glass curtain wall engineering. China Architecture Industry Press, Beijing (In Chinese)
JGJ133 (2001) Technical code for metal and stone curtain walls engineering. China Architecture Industry Press, Beijing (In Chinese)
Johnston D, Standring S, Ronan K, Lindell M, Wilson T, Cousins J, et al. (2014) The 2010/2011 Canterbury earthquakes: context and cause of injury. Nat Hazards 73(2): 627–637
Kaisera A, Holden C, Beavan J, Beetham D, Benites R, Celentano A, et al. (2012) The Mw 6.2 Christchurch earthquake of February 2011: preliminary report. New Zealand Journal of Geology and Geophysics
55(1): 67–90
Katsanos EI, Sextos AG, Manolis GD (2010) Selection of earthquake ground motion records: A state-of-the-art review from a structural engineering perspective. Soil Dyn Earthq Eng 30:157–169
Li Y, Lu XZ, Guan H, Ye LP (2014a) An energy-based assessment on dynamic amplification factor for linear static analysis in progressive collapse design of ductile RC frame structures. Adv Struct Eng
Li Y, Lu XZ, Guan H, Ye LP (2014b) Progressive collapse resistance demand of RC frames under catenary mechanism, ACI Structural Journal, 111 (5): 1225–1234
Lu X, Lu XZ, Guan H, Ye LP (2013a) Collapse simulation of reinforced concrete high-rise building induced by extreme earthquakes. Earthq Eng Struct D 42:705–723
Lu X, Lu XZ, Guan H, Ye LP (2013b) Comparison and selection of ground motion intensity measures for seismic design of super high-rise buildings. Adv Struct Eng 16(7): 1249–1262
Lu X, Ye LP, Lu XZ, Li MK, Ma XW (2013c) An improved ground motion intensity measure for super high-rise buildings. Sci China Technol Sc 56(6): 1525–1533
Lu XZ, Han B, Hori M, Xiong C, Xu Z (2014) A coarse-grained parallel approach for seismic damage simulations of urban areas based on refined models and GPU/CPU cooperative computing. Adv Eng Softw
70: 90–103
Ma YH, Xie LL (2002) Determination of frequently occurred and seldom occurred earthquakes in consideration of earthquake environment. Journal of Building Structures 23:43–47 (In Chinese)
Mahdavinejad M, Bemanian M, Abolvardi G, Elhamian SM (2012) Analyzing the state of seismic consideration of architectural non-structural components (ANSCs) in design process (based on IBC).
International Journal of Disaster Resilience in the Built Environment 3:133–147
McLaren TM, Myers JD, Lee JS, Tolbert NL, Hampton SD, Navarro CM (2008) MAEviz: an earthquake risk assessment system. In: Proceedings of the 16th ACM SIGSPATIAL International Conference on Advances
in Geographic Information Systems, ACM New York, pp 88
Memari AM, Behr RA, Kremer PA (2003) Seismic behavior of curtain walls containing insulating glass units. Journal of Architectural Engineering–ASCE 9:70–85
Mwafy AM, Elnashai AS (2001) Static pushover versus dynamic collapse analysis of RC buildings. Eng Struct 23:407–424
Pacific Earthquake Engineering Research Center (PEER) (2014) Preliminary notes and observations on the August 24, 2014, South Napa Earthquake, Report No. 2014/13. University of California, Berkeley
Padgett J, Desroches R (2007) Sensitivity of seismic response and fragility to parameter uncertainty. J Struct Eng-ASCE 133:1710–8
Peek-Asa C, Kraus JF, Bourque LB, Vimalachandra D, Yu J, Abrams J (1998) Fatal and hospitalized injuries resulting from the 1994 Northridge earthquake. Int J Epidemiol 27:459–465
Qiu J, Liu G, Wang S, Zhang X, Zhang L, Li Y, et al. (2010) Analysis of injuries and treatment of 3 401 inpatients in 2008 Wenchuan earthquake based on Chinese Trauma Databank. Chinese Journal of
Traumatology (English Edition) 13:297–303
Roy N, Shah H, Patel V, Coughlin RR (2002) The Gujarat earthquake (2001) experience in a seismically unprepared area: community hospital medical response. Prehospital and Disaster Medicine 17:186–95
Shi W, Lu XZ, Guan H, Ye LP (2014) Development of seismic collapse capacity spectra and parametric study. Adv Struct Eng 17:1241–1256
Shi W, Lu XZ, Ye LP (2012) Uniform-risk-targeted seismic design for collapse safety of building structures. Sci China Technol Sc 55:1481–1488
Sucuoǧlu H, Vallabhan CV (1997) Behaviour of window glass panels during earthquakes. Eng Struct 19:685–694
Tothong P, Luco N (2007) Probabilistic seismic demand analysis using advanced ground motion intensity measures. Earthq Eng Struct D 36:1837–1860
Villaverde R (2007) Methods to assess the seismic collapse capacity of building structures: State of the art. J Struct Eng-ASCE 133:57–66
Xu Z, Lu XZ, Guan H, Han B, Ren AZ (2014) Seismic damage simulation in urban areas based on a high-fidelity structural model and a physics engine. Nat Hazards 71(3):1679–1693
Xu Z, Lu XZ, Guan H, Lu X, Ren AZ (2013) Progressive-collapse simulation and critical region identification of a stone arch bridge. J Perform Constr Fac -ASCE 27 (1): 43–52
Zareian F, Krawinkler H (2007) Assessment of probability of collapse and design for collapse safety. Earthquake Eng Struct Dyn 36(13): 1901–1914
Figure captions:
Fig. 1 Proposed simulation framework
Fig. 2 The MCS model for a building
Fig. 3 Fragility curve of P(d[max]≥10 m) against different PGAs
Fig. 4 Seismic hazard curve over 50 years for the site in the case study
Fig. 5 Distribution of total probabilities for falling objects over 50 years
Fig. 6 Distribution probabilities of falling objects in the selected community area
Fig. 7 Hazard regions of falling objects and site location of emergency shelter
Table captions:
Table 1 Comparison between the proposed MCS model and the Hazus method
Table 2 Allowable story drift [ ] (ASCE 2010) | {"url":"http://luxinzheng.net/publication6/Falling_Debris_NH_2015.htm","timestamp":"2024-11-05T17:17:35Z","content_type":"text/html","content_length":"114466","record_id":"<urn:uuid:1b911406-0e51-4714-a8fc-84765ee8f3f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00016.warc.gz"} |
Three Red Hats -
For Geeks & Brainiacs #23- Three Red Hats -
Beth, Anne, and Susan are the three finalists in a logic competition. A final test will now be given to decide the winner. The three girls are sitting together at a round table and are blindfolded.
The moderator informs them that he will either put a red or green hat on each girl's head. Once he instructs them to remove the blindfold, they must raise their hand immediately if they see a red
hat. Based on who raised their hand and what color hats they see, the first girl who can determine the color of her own hat will win.
The game then begins and the moderator decides to put a red hat on each girl's head and informs them they can remove their blindfold. All three girls immediately raise their hand (as each of them
sees at least one red hat). After a long pause, Susan tells the moderator that her hat is red and explains why. Her logic is correct and she wins. How did she do it?
Need a Hint?
: What would the reasoning have been for the two other students if Susan's hat was green?
See Answer
Susan realizes that her hat can't be green because if it was, the two other girls would known right away that their own hat was red. EXPLANATION
: If Susan's hat was green, Beth and Anne would each have known right away that their own hat was red. Why? Because both Beth and Anne raised their hand indicating that they saw a red hat. If Susan's
hat had been green, Beth would have had to have seen a red hat on Anne and Anne would have to have seen a red hat on Beth and both Beth and Anne would have figured this out without much thought. So
Susan realizes that her hat must be red.
Do you have a
for this puzzle (e.g. something that should be mentioned/clarified in the question or solution, bug, typo, etc.)? | {"url":"https://puzzlesandriddles.com/ForGeeksAndBrainiacs23.html","timestamp":"2024-11-05T19:24:09Z","content_type":"text/html","content_length":"13630","record_id":"<urn:uuid:eced1d8b-e5b6-45eb-a8e9-ada1f31463c2>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00508.warc.gz"} |
Quadratic Equations Worksheet - Equations Worksheets
Quadratic Equations Worksheet
If you are looking for Quadratic Equations Worksheet you’ve come to the right place. We have 31 worksheets about Quadratic Equations Worksheet including images, pictures, photos, wallpapers, and
more. In these page, we also have variety of worksheets available. Such as png, jpg, animated gifs, pic art, logo, black and white, transparent, etc.
474 x 649 · jpeg quadratic formula worksheet yahoo image search results quadratics from www.pinterest.com
600 x 800 · jpeg quadratic equation worksheets simplifying rational expressions from www.pinterest.com
988 x 1280 · jpeg quadratic equation worksheet answer key kidsworksheetfun from kidsworksheetfun.com
1275 x 1650 · png math worksheets solving quadratic equations solving quadratic from byveera.blogspot.com
Don’t forget to bookmark Quadratic Equations Worksheet using Ctrl + D (PC) or Command + D (macos). If you are using mobile phone, you could also use menu drawer from browser. Whether it’s Windows,
Mac, iOs or Android, you will be able to download the worksheets using download button.
Leave a Comment | {"url":"https://www.equationsworksheets.com/quadratic-equations-worksheet/","timestamp":"2024-11-12T06:30:00Z","content_type":"text/html","content_length":"76350","record_id":"<urn:uuid:d5ffe340-6fe0-45de-af99-aa24da8d23c7>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00151.warc.gz"} |
12, 21, 28, 64, 144, 168 and 441 and there relationship to the pentagram and phi
There are 12 main vibrational dimensions and 144 sub-vibrational dimensions so I imputed 12 and 144 and got 5 lines showing a connection between the 12 vibrational dimensions and the number 5 which
makes sense since the 12/15 vibrational dimensions can be separated into 5 HU's/5 Kabbalistic worlds. 4 of these lines can correspond to 3 dimensions each and the 5th can correspond to the last 3 and
this shows the 12/13 sephirot/sphere correspondence.
I also reversed these numbers since 21 and 441 form the 441 cubistic matrix and 231 gates and the 231 gates/hebrew alphabet correspond to the 3×3×3 cube(27). When I did impute 21 and 441 I got a
pattern of 40 lines and I could form the prime number cross and the prime number cross corresponds to 12 lines and that leaves 28 other lines and this shows a correspondence to the 10/12 tree of life
and a correspondence between 27 and 28.
Next I decided to look at phi(1.618) and 64 because the 64 tetrahedron grid's higher dimensional form is the E8 lie group which is based on dodecahedron which is made up of pentagrams anyway the
tetrahedrons making up the lower dimensional E8 geometry are Planck length/phi sized(each line making them up are the size of phi) when I did enter 64 and phi I got a pentagon! showing a new
correspondence between the 64 tetrahedron grid, phi and the vibrational dimensions!
If you remember from months ago I showed how the 64 tetrahedron grid corresponds to the 7 star tetrahedron tetractys so I decided to enter the numbers 64 and 28 and when I did I got a star
tetrahedron! And as we know the number 28 is related to the numbers 84 and 168 which are both related to the E8 lie group superstring and then I decided to enter phi and 168(number of roots of E8)
and when I did I got another pentagon! This also shows the correspondence between 5 and 6! | {"url":"https://www.64tge8st.com/post/2020/05/24/12-21-28-64-144-168-and-441-and-there-relationship-to-the-pentagram-and-phi","timestamp":"2024-11-01T20:43:28Z","content_type":"text/html","content_length":"1050486","record_id":"<urn:uuid:a97b32c3-9067-472a-a22e-29873f465dab>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00462.warc.gz"} |
zkSNARks with short proof sizes
November’s IFT ZK Meeting (11/4) main topic will include a brief discussion on zk-SNARKs with small proof size. This year, multiple zk-SNARKs, Polymath (eprint 2024/916) and Pari (eprint 2024/
1245.pdf), have been developed based on Groth16 that attempts to reduce the practical proof size.
Groth16 gained popularity as it has the smallest proof among pairing-based zk-SNARKs. Specifically, a Groth16 proof consists of 3 group elements (2 from \mathbb{G}_1 and 1 from \mathbb{G}_2). It
remains an open question whether a zk-SNARK can be constructed with a proof consists of only 2 group elements. Additionally, Groth16 proofs can be aggregated using SNARKpack (eprint 2021/529).
Groth16 lost some favor due to its reliance on circuit specific setups. That is, Groth16 requires a trusted setup (or ceremony) to construct the common reference string for each application. This
restriction has helped in Plonk-based SNARKs from gaining popularity.
The 3 group elements make Groth16 an ideal proof for publishing on a blockchain. Due to this various projects (e.g., Nexus and RISCZero) use Groth16 as a compression layer.
Pari is a new zk-SNARK developed with an emphasis to minimize proof size with respect to the real storage space instead of the number of element group elements. In fact, in terms of theoretical size,
Polymath has a larger proof than Groth16.
Polymath consists of 3 group elements (from \mathbb{G}_1) and a field element. This is one field element more than Groth16’s proofs consist of! In practice, elements from \mathbb{G}_2 require more
storage space to represent than elements from \mathbb{G}_1. This observation is what Polymath builds off of.
Polymath uses a different circuit representation than Groth16. Instead of R1CS/QAP, Polymath uses the “lesser-known” squaring arithmetic program (SAP). A SAP consists of two gates: addition and
squaring. We note that multiplication can be represented as in terms of these gates (with scalar multiplication): x\cdot y = 2^{-1}(x+y)^2 + 2^{-1}(x-y)^2. Hence, each multiplication gate in R1CS can
be (naively) replaced with a minimal of 5 gates. Groth and Maller (eprint 2017/540.pdf) showed that a SAP constraint system has overhead at most two times that of R1CS.
Additionally, Polymath uses optimization techniques for Groth16 due to Lipmaa (eprint 2021/1700.pdf) which reduces the number of trapdoors from 5 to 2. As with Groth16, Polymath can use techniques
from SNARKpack for aggregation.
Pari further improves on the concrete proof size of Groth16. Pari’s proofs consist of 2 group elements (from \mathbb{G}_1) and 2 field elements. Like Polymath, Pari uses SAP for circuit
representation. The authors of Pari note that Pari could be adapted to use R1CS. This adaption would result in improved prover complexity but an increased concrete proof size (2 elements from \mathbb
{G}_1 and 3 field elements).
Pari is not a modification of Groth16. Instead, Pari uses a new polynomial commitments that enforces equality of coefficients for a set of polynomials over different bases. This is used in a rowcheck
The following table comes from Pari (eprint 2024/1245):
Concretely, for the elliptic curve BLS12-381, a Groth16 proof uses1536 bits, a Polymath proof uses 1408 bits, and a Pari proof uses 1280 bits.
Feel free to join in the conversation concerning these zk-SNARKs. Is the costs savings of a couple hundred bits per proof worth the hassle of potentially changing compression layers? | {"url":"https://forum.vac.dev/t/zksnarks-with-short-proof-sizes/383","timestamp":"2024-11-04T10:55:11Z","content_type":"text/html","content_length":"18210","record_id":"<urn:uuid:cfccf5cd-230f-4ccf-a325-2fb2186ece38>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00780.warc.gz"} |
Math Colloquia - Satellite operators on knot concordance
Concordance is a relation which classifies knots in 3-space via surfaces in 4-space, and it is closely related with low dimensional topology. Satellite operators are one of the main tools in the
study of knot concordance, and it has been widely used to reveal new structures of knot concordance. In this talk, I will explain interplay between concordance and low dimensional topology, and
discuss recent developments on satellite operators. The talk is based on joint work with Jae Choon Cha. | {"url":"http://my.math.snu.ac.kr/board/index.php?mid=colloquia&page=6&sort_index=speaker&order_type=asc&document_srl=1133015&l=en","timestamp":"2024-11-09T17:20:24Z","content_type":"text/html","content_length":"43595","record_id":"<urn:uuid:49adb371-bf0c-4124-9d02-ccf5ace11dc7>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00255.warc.gz"} |
d01ajf (dim1_fin_bad)
NAG FL Interface
d01ajf (dim1_fin_bad)
Note: this routine is deprecated
and will be withdrawn at Mark 31.3. Replaced by
FL Name Style:
FL Specification Language:
1 Purpose
is a general purpose integrator which calculates an approximation to the integral of a function
over a finite interval
2 Specification
Fortran Interface
Subroutine d01ajf ( f, a, b, epsabs, epsrel, result, abserr, w, lw, iw, liw, ifail)
Integer, Intent (In) :: lw, liw
Integer, Intent (Inout) :: ifail
Integer, Intent (Out) :: iw(liw)
Real (Kind=nag_wp), External :: f
Real (Kind=nag_wp), Intent (In) :: a, b, epsabs, epsrel
Real (Kind=nag_wp), Intent (Out) :: result, abserr, w(lw)
C Header Interface
#include <nag.h>
d01ajf_ (
void double (NAG_CALL *f)(const double *x),
const double *a, const double *b, const double *epsabs, const double *epsrel, double *result, double *abserr, double w[], const Integer *lw, Integer iw[], const Integer *liw, Integer *ifail)
The routine may be called by the names d01ajf or nagf_quad_dim1_fin_bad.
3 Description
is based on the QUADPACK routine QAGS (see
Piessens et al. (1983)
). It is an adaptive routine, using the Gauss
-point and Kronrod
-point rules. The algorithm, described in
de Doncker (1978)
, incorporates a global acceptance criterion (as defined by
Malcolm and Simpson (1976)
) together with the
-algorithm (see
Wynn (1956)
) to perform extrapolation. The local error estimation is described in
Piessens et al. (1983)
The routine is suitable as a general purpose integrator, and can be used when the integrand has singularities, especially when these are of algebraic or logarithmic type.
d01ajf requires you to supply a function to evaluate the integrand at a single point.
The routine
uses an identical algorithm but requires you to supply a subroutine to evaluate the integrand at an array of points. Therefore,
may be more efficient for some problem types and some machine architectures.
4 References
de Doncker E (1978) An adaptive extrapolation algorithm for automatic integration ACM SIGNUM Newsl. 13(2) 12–18
Malcolm M A and Simpson R B (1976) Local versus global strategies for adaptive quadrature ACM Trans. Math. Software 1 129–146
Piessens R, de Doncker–Kapenga E, Überhuber C and Kahaner D (1983) QUADPACK, A Subroutine Package for Automatic Integration Springer–Verlag
Wynn P (1956) On a device for computing the $em(Sn)$ transformation Math. Tables Aids Comput. 10 91–96
5 Arguments
1: $f$ – real (Kind=nag_wp) Function, supplied by the user. External Procedure
must return the value of the integrand
at a given point.
The specification of
Fortran Interface
Real (Kind=nag_wp) :: f
Real (Kind=nag_wp), Intent (In) :: x
C Header Interface
double f (const double *x)
1: $x$ – Real (Kind=nag_wp) Input
On entry: the point at which the integrand $f$ must be evaluated.
must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which
is called. Arguments denoted as
be changed by this procedure.
Note: f
should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by
. If your code inadvertently
return any NaNs or infinities,
is likely to produce unexpected results.
2: $a$ – Real (Kind=nag_wp) Input
On entry: $a$, the lower limit of integration.
3: $b$ – Real (Kind=nag_wp) Input
On entry: $b$, the upper limit of integration. It is not necessary that $a<b$.
4: $epsabs$ – Real (Kind=nag_wp) Input
On entry
: the absolute accuracy required. If
is negative, the absolute value is used. See
Section 7
5: $epsrel$ – Real (Kind=nag_wp) Input
On entry
: the relative accuracy required. If
is negative, the absolute value is used. See
Section 7
6: $result$ – Real (Kind=nag_wp) Output
On exit: the approximation to the integral $I$.
7: $abserr$ – Real (Kind=nag_wp) Output
On exit: an estimate of the modulus of the absolute error, which should be an upper bound for $|I-result|$.
8: $w(lw)$ – Real (Kind=nag_wp) array Output
On exit
: details of the computation see
Section 9
for more information.
9: $lw$ – Integer Input
On entry
: the dimension of the array
as declared in the (sub)program from which
is called. The value of
(together with that of
) imposes a bound on the number of sub-intervals into which the interval of integration may be divided by the routine. The number of sub-intervals cannot exceed
. The more difficult the integrand, the larger
should be.
Suggested value: $lw=800$ to $2000$ is adequate for most problems.
Constraint: $lw≥4$.
10: $iw(liw)$ – Integer array Output
On exit: $iw(1)$ contains the actual number of sub-intervals used. The rest of the array is used as workspace.
11: $liw$ – Integer Input
On entry
: the dimension of the array
as declared in the (sub)program from which
is called. The number of sub-intervals into which the interval of integration may be divided cannot exceed
Suggested value: $liw=lw/4$.
Constraint: $liw≥1$.
12: $ifail$ – Integer Input/Output
On entry
must be set to
to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of $−1$ means that an error message is printed while a
value of $1$ means that it is not.
If halting is not appropriate, the value
is recommended. If message printing is undesirable, then the value
is recommended. Otherwise, the value
is recommended since useful values can be provided in some output arguments even when
on exit.
When the value $-1$ or $1$ is used it is essential to test the value of ifail on exit.
On exit
unless the routine detects an error or a warning has been flagged (see
Section 6
6 Error Indicators and Warnings
If on entry
, explanatory error messages are output on the current error message unit (as defined by
Errors or warnings detected by the routine:
Note: in some cases d01ajf may return useful information.
The maximum number of subdivisions allowed with the given workspace has been reached without the accuracy requirements being achieved. Look at the integrand in order to determine the integration
difficulties. If the position of a local difficulty within the interval can be determined (e.g., a singularity of the integrand or its derivative, a peak, a discontinuity, etc.) you will probably
gain from splitting up the interval at this point and calling the integrator on the subranges. If necessary, another integrator, which is designed for handling the type of difficulty involved,
must be used. Alternatively, consider relaxing the accuracy requirements specified by
, or increasing the amount of workspace.
Round-off error prevents the requested tolerance from being achieved: $epsabs=⟨value⟩$ and $epsrel=⟨value⟩$.
Extremely bad integrand behaviour occurs around the sub-interval $(⟨value⟩,⟨value⟩)$. The same advice applies as in the case of $ifail=1$.
Round-off error is detected in the extrapolation table. The requested tolerance cannot be achieved because the extrapolation does not increase the accuracy satisfactorily; the returned result is
the best that can be obtained. The same advice applies as in the case of $ifail=1$.
The integral is probably divergent or slowly convergent.
On entry, $liw=⟨value⟩$.
Constraint: $liw≥1$.
On entry, $lw=⟨value⟩$.
Constraint: $lw≥4$.
An unexpected error has been triggered by this routine. Please contact
Section 7
in the Introduction to the NAG Library FL Interface for further information.
Your licence key may have expired or may not have been installed correctly.
Section 8
in the Introduction to the NAG Library FL Interface for further information.
Dynamic memory allocation failed.
Section 9
in the Introduction to the NAG Library FL Interface for further information.
7 Accuracy
cannot guarantee, but in practice usually achieves, the following accuracy:
$tol = max{|epsabs|,|epsrel|×|I|} ,$
are user-specified absolute and relative error tolerances. Moreover, it returns the quantity
which, in normal circumstances, satisfies
$|I-result| ≤ abserr ≤ tol .$
8 Parallelism and Performance
Background information to multithreading can be found in the
d01ajf is not threaded in any implementation.
9 Further Comments
The time taken by d01ajf depends on the integrand and the accuracy required.
on exit, then you may wish to examine the contents of the array
, which contains the end points of the sub-intervals used by
along with the integral contributions and error estimates over the sub-intervals.
Specifically, for
, let
denote the approximation to the value of the integral over the sub-interval
in the partition of
be the corresponding absolute error estimate. Then,
$∫ ai bi f(x) dx ≃ ri$
$result = ∑ i=1 n ri$
, unless
terminates while testing for divergence of the integral (see Section 3.4.3 of
Piessens et al. (1983)
). In this case,
) are taken to be the values returned from the extrapolation process. The value of
is returned in
, and the values
are stored consecutively in the array
, that is:
• $ai=w(i)$,
• $bi=w(n+i)$,
• $ei=w(2n+i)$ and
• $ri=w(3n+i)$.
10 Example
This example computes
$∫ 0 2π x sin(30x) 1 - (x/2π) 2 dx .$
10.1 Program Text
10.2 Program Data
10.3 Program Results | {"url":"https://support.nag.com/numeric/nl/nagdoc_30.1/flhtml/d01/d01ajf.html","timestamp":"2024-11-10T03:22:33Z","content_type":"text/html","content_length":"51460","record_id":"<urn:uuid:56fc6c3e-3794-4ace-ada2-a76ebcd3a2dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00405.warc.gz"} |
Flow in pipe
Fluid flow mean velocity and pipe diameter for known flow rate
Velocity of fluid in pipe is not uniform across section area. Therefore a mean velocity is used and it is calculated by the continuity equation for the steady flow as:
Calculate pipe diameter for known flow rate and velocity. Calculate flow velocity for known pipe diameter and flow rate. Convert from volumetric to mass flow rate. Calculate volumetric flow rate of
ideal gas at different conditions of pressure and temperature.
Pipe diameter can be calculated when volumetric flow rate and velocity is known as:
where is: D - internal pipe diameter; q - volumetric flow rate; v - velocity; A - pipe cross section area.
If mass flow rate is known than diameter can be calculated as:
where is: D - internal pipe diameter; w - mass flow rate; ρ - fluid density; v - velocity.
Calculate pipe diameter in an easy way
Take a look at these
three simple examples
and find out how you can use the calculator to calculate the pipe diameter for known fluid flow and desired fluid flow rate.
Laminar and turbulent fluid flow regime in pipe, critical velocity
If the velocity of fluid inside the pipe is small, streamlines will be in straight parallel lines. As the velocity of fluid inside the pipe gradually increase, streamlines will continue to be
straight and parallel with the pipe wall until velocity is reached when the streamlines will waver and suddenly break into diffused patterns. The velocity at which this occurs is called "critical
velocity". At velocities higher than "critical", the streamlines are dispersed at random throughout the pipe.
The regime of flow when velocity is lower than "critical" is called laminar flow (or viscous or streamline flow). At laminar regime of flow the velocity is highest on the pipe axis, and on the wall
the velocity is equal to zero.
When the velocity is greater than "critical", the regime of flow is turbulent. In turbulent regime of flow there is irregular random motion of fluid particles in directions transverse to the
direction on main flow. Velocity change in turbulent flow is more uniform than in laminar.
In the turbulent regime of flow, there is always a thin layer of fluid at pipe wall which is moving in laminar flow. That layer is known as the boundary layer or laminar sub-layer. To determine flow
regime use Reynolds number calculator.
Reynolds number, turbulent and laminar flow, pipe flow velocity and viscosity
The nature of flow in pipe, by the work of Osborne Reynolds, is depending on the pipe diameter, the density and viscosity of the flowing fluid and the velocity of the flow. Dimensionless Reynolds
number is used, and is combination of these four variables and may be considered to be ratio of dynamic forces of mass flow to the shear stress due to viscosity. Reynolds number is:
where is: D - internal pipe diameter; v - velocity; ρ - density; ν - kinematic viscosity; μ - dynamic viscosity;
Calculate Reynolds number with this easy to use calculator. Determine if flow is laminar or turbulent. Applicable for liquids and gases.
This equation can be solved using and fluid flow regime calculator.
Flow in pipes is considered to be laminar if Reynolds number is less than 2320, and turbulent if the Reynolds number is greater than 4000. Between these two values is "critical" zone where the flow
can be laminar or turbulent or in the process of change and is mainly unpredictable.
When calculating Reynolds number for non-circular cross section equivalent diameter (four time hydraulic radius d=4xRh) is used and hydraulic radius can be calculated as:
Rh = cross section flow area / wetted perimeter
It applies to square, rectangular, oval or circular conduit when not flowing with full section. Because of great variety of fluids being handled in modern industrial processes, a single equation
which can be used for the flow of any fluid in pipe offers big advantages. That equation is Darcy formula, but one factor - the friction factor has to be determined experimentally. This formula has a
wide application in the field of fluid mechanics and is used extensively throughout on this web site.
Bernoulli equation - fluid flow head conservation
If friction losses are neglected and no energy is added to, or taken from a piping system, the total head, H, which is the sum of the elevation head, the pressure head and the velocity head will be
constant for any point of fluid streamline.
This is the expression of law of head conservation to the flow of fluid in a conduit or streamline and is known as Bernoulli equation:
where is: Z[1,2] - elevation above reference level; p[1,2] - absolute pressure; v[1,2] - velocity; ρ[1,2] - density; g - acceleration of gravity
Bernoulli equation equation is used in several calculators on this site like pressure drop and flow rate calculator, Venturi tube flow rate meter and Venturi effect calculator and orifice plate
sizing and flow rate calculator.
Pipe flow and friction pressure drop, head energy loss | Darcy formula
From Bernoulli equation all other practical formulas are derived, with modifications due to energy losses and gains.
As in real piping system, losses of energy are existing and energy is being added to or taken from the fluid (using pumps and turbines) these must be included in the Bernoulli equation.
For two points of one streamline in a fluid flow, equation may be written as follows:
where is: Z[1,2] - elevation above reference level; p[1,2] - absolute pressure; v[1,2] - velocity; ρ[1,2] - density; h[L] - head loss due to friction in the pipe; H[p] - pump head; H[T] - turbine
head; g - acceleration of gravity;
Flow in pipe is always creating energy loss due to friction. Energy loss can be measured like static pressure drop in the direction of fluid flow with two gauges. General equation for pressure drop,
known as Darcy's formula expressed in meters of fluid is:
where is: h[L] - head loss due to friction in the pipe; f - friction coefficient; L - pipe length; v - velocity; D - internal pipe diameter; g - acceleration of gravity;
To express this equation like pressure drop in newtons per square meter (Pascals) substitution of proper units leads to:
Calculator based on Darcy equation. Calculate pressure drop for known flow rate or calculate flow rate for known pressure drop. Friction factor calculation included. Applicable for laminar and
turbulent flow, circular or rectangular duct.
where is: Δ p - pressure drop due to friction in the pipe; ρ - density; f - friction coefficient; L - pipe length; v - velocity; D - internal pipe diameter; Q - volumetric flow rate;
The Darcy equation can be used for both laminar and turbulent flow regime and for any liquid in a pipe. With some restrictions, Darcy equation can be used for gases and vapors. Darcy formula applies
when pipe diameter and fluid density is constant and the pipe is relatively straight.
Friction factor for pipe roughness and Reynolds number in laminar and turbulent flow
Physical values in Darcy formula are very obvious and can be easily obtained when pipe properties are known like D - pipe internal diameter, L - pipe length and when flow rate is known, velocity can
be easily calculated using continuity equation. The only value that needs to be determined experimentally is friction factor. For laminar flow regime Re < 2000, friction factor can be calculated, but
for turbulent flow regime where is Re > 4000 experimentally obtained results are used. In the critical zone, where is Reynolds number between 2000 and 4000, both laminar and turbulent flow regime
might occur, so friction factor is indeterminate and has lower limits for laminar flow, and upper limits based on turbulent flow conditions.
If the flow is laminar and Reynolds number is smaller than 2000, the friction factor may be determined from the equation:
where is: f - friction factor; Re - Reynolds number;
When flow is turbulent and Reynolds number is higher than 4000, the friction factor depends on pipe relative roughness as well as on the Reynolds number. Relative pipe roughness is the roughness of
the pipe wall compared to pipe diameter e/D. Since the internal pipe roughness is actually independent of pipe diameter, pipes with smaller pipe diameter will have higher relative roughness than
pipes with bigger diameter and therefore pipes with smaller diameters will have higher friction factors than pipes with bigger diameters of the same material.
Most widely accepted and used data for friction factor in Darcy formula is the Moody diagram. On Moody diagram friction factor can be determined based on the value of Reynolds number and relative
The pressure drop is the function of internal diameter with the fifth power. With time in service, the interior of the pipe becomes encrusted with dirt, scale and it is often prudent to make
allowance for expected diameter changes. Also roughness may be expected to increase with use due to corrosion or incrustation at a rate determined by the pipe material and nature of the fluid.
When the thickness of laminar sub layer (laminar boundary layer δ) is bigger than the pipe roughness e the flow is called flow in hydraulically smooth pipe and Blasius equation can be used:
where is: f - friction factor; Re - Reynolds number;
The boundary layer thickness can be calculated based on the Prandtl equation as:
where is: δ - boundary layer thickness; D - internal pipe diameter; Re - Reynolds number;
For turbulent flow with Re < 100 000 (Prandtl equation) can be used:
For turbulent flow with Re > 100 000 (Karman equation) can be used:
Most common equation used for friction coefficient calculation is Colebrook-White formula and it is used for the turbulent flow in the pressure drop calculator:
where is: f - friction factor; Re - Reynolds number; D - internal pipe diameter; k[r] - pipe roughness;
Static, dynamic and total pressure, flow velocity and Mach number
Static pressure is pressure of fluid in flow stream. Total pressure is pressure of fluid when it is brought to rest, i.e. velocity is reduced to 0.
Total pressure can be calculated using Bernoulli theorem. Imagining that flow is in one point of stream line stopped without any energy loss Bernoulli theorem can be written as:
If velocity at point 2 v[2]=0, pressure at point 2 is than total p[2]=p[t]:
where is: p - pressure; p[t] - total pressure; v - velocity; ρ - density;
The difference between total and static pressure represents fluid kinetic energy and it is called dynamic pressure.
Dynamic pressure for liquids and incompressible flow where the density is constant can be calculated as:
where is: p - pressure; p[t] - total pressure; p[d] - dynamic pressure; v - velocity; ρ - density;
If dynamic pressure is measured using instruments like Prandtl probe or Pitot tube velocity can be calculated in one point of stream line as:
where is: p - pressure; p[t] - total pressure; p[d] - dynamic pressure; v - velocity; ρ - density;
For gases and larger Mach numbers than 0.1 effects of compressibility are not negligible.
For compressible flow calculation gas state equation can be used. For ideal gases, velocity for Mach number M < 1 is calculated using following equation:
where is: M - Mach number M=v/c - relation between local fluid and local sound velocity; γ - isentropic coefficient;
It should be said that for M > 0.7 given equation is not totally accurate.
If Mach number M > 1, than normal shock wave will occur. Equation for velocity in front of the wave is given bellow:
where is: p - pressure; p[ti] - total pressure; v - velocity; M - Mach number; γ - isentropic coefficient;
Above equations are used for Prandtl probe and Pitot tube flow velocity calculator.
Note: You can download complete derivation of given equations
Fluid flow rate for the thermal - heat power transfer, boiler power and temperature
Calculate heat energy and thermal power for known flow rate. Calculate flow rate for known heat energy or thermal power. Applicable for boilers, heat exchangers, radiators, chillers, air heaters.
The flow rate of fluid required for the thermal energy - heat power transfer can be calculated as:
where is: q - flow rate [m^3/h]; ρ - density of fluid [kg/m^3]; c - specific heat of fluid [kJ/kgK]; Δ T - temperature difference [K]; P - power [kW];
This relation can be used to calculate required flow rate of, for example, water heated in the boiler, if the power of boiler is known. In that case temperature difference in above equation is the
change of temperature of fluid in front and after the boiler. It should be said that efficiency coefficient should be included in above equation, for precise calculation. | {"url":"https://pipeflowcalculations.com/pipe-valve-fitting-flow/flow-in-pipes.xhtml","timestamp":"2024-11-07T04:20:54Z","content_type":"application/xhtml+xml","content_length":"47250","record_id":"<urn:uuid:3e6326a0-8af9-455d-90b5-2f2398ebb7cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00196.warc.gz"} |
The Move Book
Generics can be used to define functions and structs over different input data types. This language feature is sometimes referred to as parametric polymorphism. In Move, we will often use the term
generics interchangeably with type parameters and type arguments.
Generics are commonly used in library code, such as in vector, to declare code that works over any possible instantiation (that satisfies the specified constraints). In other frameworks, generic code
can sometimes be used to interact with global storage many different ways that all still share the same implementation.
Both functions and structs can take a list of type parameters in their signatures, enclosed by a pair of angle brackets <...>.
Type parameters for functions are placed after the function name and before the (value) parameter list. The following code defines a generic identity function that takes a value of any type and
returns that value unchanged.
fun id<T>(x: T): T {
// this type annotation is unnecessary but valid
(x: T)
Once defined, the type parameter T can be used in parameter types, return types, and inside the function body.
Type parameters for structs are placed after the struct name, and can be used to name the types of the fields.
struct Foo<T> has copy, drop { x: T }
struct Bar<T1, T2> has copy, drop {
x: T1,
y: vector<T2>,
Note that type parameters do not have to be used
When calling a generic function, one can specify the type arguments for the function's type parameters in a list enclosed by a pair of angle brackets.
fun foo() {
let x = id<bool>(true);
If you do not specify the type arguments, Move's type inference will supply them for you.
Similarly, one can attach a list of type arguments for the struct's type parameters when constructing or destructing values of generic types.
fun foo() {
let foo = Foo<bool> { x: true };
let Foo<bool> { x } = foo;
If you do not specify the type arguments, Move's type inference will supply them for you.
If you specify the type arguments and they conflict with the actual values supplied, an error will be given:
fun foo() {
let x = id<u64>(true); // error! true is not a u64
and similarly:
fun foo() {
let foo = Foo<bool> { x: 0 }; // error! 0 is not a bool
let Foo<address> { x } = foo; // error! bool is incompatible with address
In most cases, the Move compiler will be able to infer the type arguments so you don't have to write them down explicitly. Here's what the examples above would look like if we omit the type
fun foo() {
let x = id(true);
// ^ <bool> is inferred
let foo = Foo { x: true };
// ^ <bool> is inferred
let Foo { x } = foo;
// ^ <bool> is inferred
Note: when the compiler is unable to infer the types, you'll need annotate them manually. A common scenario is to call a function with type parameters appearing only at return positions.
address 0x2 {
module m {
using std::vector;
fun foo() {
// let v = vector::new();
// ^ The compiler cannot figure out the element type.
let v = vector::new<u64>();
// ^~~~~ Must annotate manually.
However, the compiler will be able to infer the type if that return value is used later in that function:
address 0x2 {
module m {
using std::vector;
fun foo() {
let v = vector::new();
// ^ <u64> is inferred
vector::push_back(&mut v, 42);
For a struct definition, an unused type parameter is one that does not appear in any field defined in the struct, but is checked statically at compile time. Move allows unused type parameters so the
following struct definition is valid:
struct Foo<T> {
foo: u64
This can be convenient when modeling certain concepts. Here is an example:
address 0x2 {
module m {
// Currency Specifiers
struct Currency1 {}
struct Currency2 {}
// A generic coin type that can be instantiated using a currency
// specifier type.
// e.g. Coin<Currency1>, Coin<Currency2> etc.
struct Coin<Currency> has store {
value: u64
// Write code generically about all currencies
public fun mint_generic<Currency>(value: u64): Coin<Currency> {
Coin { value }
// Write code concretely about one currency
public fun mint_concrete(value: u64): Coin<Currency1> {
Coin { value }
In this example, struct Coin<Currency> is generic on the Currency type parameter, which specifies the currency of the coin and allows code to be written either generically on any currency or
concretely on a specific currency. This genericity applies even when the Currency type parameter does not appear in any of the fields defined in Coin.
In the example above, although struct Coin asks for the store ability, neither Coin<Currency1> nor Coin<Currency2> will have the store ability. This is because of the rules for Conditional Abilities
and Generic Types and the fact that Currency1 and Currency2 don't have the store ability, despite the fact that they are not even used in the body of struct Coin. This might cause some unpleasant
consequences. For example, we are unable to put Coin<Currency1> into a wallet in the global storage.
One possible solution would be to add spurious ability annotations to Currency1 and Currency2 (i.e., struct Currency1 has store {}). But, this might lead to bugs or security vulnerabilities because
it weakens the types with unnecessary ability declarations. For example, we would never expect a resource in the global storage to have a field in type Currency1, but this would be possible with the
spurious store ability. Moreover, the spurious annotations would be infectious, requiring many functions generic on the unused type parameter to also include the necessary constraints.
Phantom type parameters solve this problem. Unused type parameters can be marked as phantom type parameters, which do not participate in the ability derivation for structs. In this way, arguments to
phantom type parameters are not considered when deriving the abilities for generic types, thus avoiding the need for spurious ability annotations. For this relaxed rule to be sound, Move's type
system guarantees that a parameter declared as phantom is either not used at all in the struct definition, or it is only used as an argument to type parameters also declared as phantom.
In a struct definition a type parameter can be declared as phantom by adding the phantom keyword before its declaration. If a type parameter is declared as phantom we say it is a phantom type
parameter. When defining a struct, Move's type checker ensures that every phantom type parameter is either not used inside the struct definition or it is only used as an argument to a phantom type
More formally, if a type is used as an argument to a phantom type parameter we say the type appears in phantom position. With this definition in place, the rule for the correct use of phantom
parameters can be specified as follows: A phantom type parameter can only appear in phantom position.
The following two examples show valid uses of phantom parameters. In the first one, the parameter T1 is not used at all inside the struct definition. In the second one, the parameter T1 is only used
as an argument to a phantom type parameter.
struct S1<phantom T1, T2> { f: u64 }
Ok: T1 does not appear inside the struct definition
struct S2<phantom T1, T2> { f: S1<T1, T2> }
Ok: T1 appears in phantom position
The following code shows examples of violations of the rule:
struct S1<phantom T> { f: T }
Error: Not a phantom position
struct S2<T> { f: T }
struct S3<phantom T> { f: S2<T> }
Error: Not a phantom position
When instantiating a struct, the arguments to phantom parameters are excluded when deriving the struct abilities. For example, consider the following code:
struct S<T1, phantom T2> has copy { f: T1 }
struct NoCopy {}
struct HasCopy has copy {}
Consider now the type S<HasCopy, NoCopy>. Since S is defined with copy and all non-phantom arguments have copy then S<HasCopy, NoCopy> also has copy.
Ability constraints and phantom type parameters are orthogonal features in the sense that phantom parameters can be declared with ability constraints. When instantiating a phantom type parameter with
an ability constraint, the type argument has to satisfy that constraint, even though the parameter is phantom. For example, the following definition is perfectly valid:
struct S<phantom T: copy> {}
The usual restrictions apply and T can only be instantiated with arguments having copy.
In the examples above, we have demonstrated how one can use type parameters to define "unknown" types that can be plugged in by callers at a later time. This however means the type system has little
information about the type and has to perform checks in a very conservative way. In some sense, the type system must assume the worst case scenario for an unconstrained generic. Simply put, by
default generic type parameters have no abilities.
This is where constraints come into play: they offer a way to specify what properties these unknown types have so the type system can allow operations that would otherwise be unsafe.
Constraints can be imposed on type parameters using the following syntax.
// T is the name of the type parameter
T: <ability> (+ <ability>)*
The <ability> can be any of the four abilities, and a type parameter can be constrained with multiple abilities at once. So all of the following would be valid type parameter declarations:
T: copy
T: copy + drop
T: copy + drop + store + key
Constraints are checked at call sites so the following code won't compile.
struct Foo<T: key> { x: T }
struct Bar { x: Foo<u8> }
// ^ error! u8 does not have 'key'
struct Baz<T> { x: Foo<T> }
// ^ error! T does not have 'key'
struct R {}
fun unsafe_consume<T>(x: T) {
// error! x does not have 'drop'
fun consume<T: drop>(x: T) {
// valid!
// x will be dropped automatically
fun foo() {
let r = R {};
// ^ error! R does not have 'drop'
struct R {}
fun unsafe_double<T>(x: T) {
(copy x, x)
// error! x does not have 'copy'
fun double<T: copy>(x: T) {
(copy x, x) // valid!
fun foo(): (R, R) {
let r = R {};
// ^ error! R does not have 'copy'
For more information, see the abilities section on conditional abilities and generic types.
Generic structs can not contain fields of the same type, either directly or indirectly, even with different type arguments. All of the following struct definitions are invalid:
struct Foo<T> {
x: Foo<u64> // error! 'Foo' containing 'Foo'
struct Bar<T> {
x: Bar<T> // error! 'Bar' containing 'Bar'
// error! 'A' and 'B' forming a cycle, which is not allowed either.
struct A<T> {
x: B<T, u64>
struct B<T1, T2> {
x: A<T1>
y: A<T2>
Move allows generic functions to be called recursively. However, when used in combination with generic structs, this could create an infinite number of types in certain cases, and allowing this means
adding unnecessary complexity to the compiler, vm and other language components. Therefore, such recursions are forbidden.
address 0x2 {
module m {
struct A<T> {}
// Finitely many types -- allowed.
// foo<T> -> foo<T> -> foo<T> -> ... is valid
fun foo<T>() {
// Finitely many types -- allowed.
// foo<T> -> foo<A<u64>> -> foo<A<u64>> -> ... is valid
fun foo<T>() {
Not allowed:
address 0x2 {
module m {
struct A<T> {}
// Infinitely many types -- NOT allowed.
// error!
// foo<T> -> foo<A<T>> -> foo<A<A<T>>> -> ...
fun foo<T>() {
address 0x2 {
module n {
struct A<T> {}
// Infinitely many types -- NOT allowed.
// error!
// foo<T1, T2> -> bar<T2, T1> -> foo<T2, A<T1>>
// -> bar<A<T1>, T2> -> foo<A<T1>, A<T2>>
// -> bar<A<T2>, A<T1>> -> foo<A<T2>, A<A<T1>>>
// -> ...
fun foo<T1, T2>() {
bar<T2, T1>();
fun bar<T1, T2> {
foo<T1, A<T2>>();
Note, the check for type level recursions is based on a conservative analysis on the call sites and does NOT take control flow or runtime values into account.
address 0x2 {
module m {
struct A<T> {}
fun foo<T>(n: u64) {
if (n > 0) {
foo<A<T>>(n - 1);
The function in the example above will technically terminate for any given input and therefore only creating finitely many types, but it is still considered invalid by Move's type system. | {"url":"https://move-language.github.io/move/generics.html","timestamp":"2024-11-05T10:45:06Z","content_type":"text/html","content_length":"32759","record_id":"<urn:uuid:fd88df5d-cec1-4d97-b687-391c8336e96c>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00115.warc.gz"} |
stabs implements resampling procedures to assess the stability of selected variables with additional finite sample error control for high-dimensional variable selection procedures such as Lasso or
boosting. Both, standard stability selection (Meinshausen & Bühlmann, 2010, doi:10.1111/j.1467-9868.2010.00740.x) and complementarty pairs stability selection with improved error bounds (Shah &
Samworth, 2013, doi:10.1111/j.1467-9868.2011.01034.x) are implemented. The package can be combined with arbitrary user specified variable selection approaches.
For an expanded and executable version of this file please see
• Current version (from CRAN):
• Latest development version from GitHub:
To be able to use the install_github() command, one needs to install devtools first:
Using stabs
A simple example of how to use stabs with package lars:
## make data set available
data("bodyfat", package = "TH.data")
## set seed
## lasso
(stab.lasso <- stabsel(x = bodyfat[, -2], y = bodyfat[,2],
fitfun = lars.lasso, cutoff = 0.75,
PFER = 1))
## stepwise selection
(stab.stepwise <- stabsel(x = bodyfat[, -2], y = bodyfat[,2],
fitfun = lars.stepwise, cutoff = 0.75,
PFER = 1))
## plot results
par(mfrow = c(2, 1))
plot(stab.lasso, main = "Lasso")
plot(stab.stepwise, main = "Stepwise Selection")
We can see that stepwise selection seems to be quite unstable even in this low dimensional example!
User-specified variable selection approaches
To use stabs with user specified functions, one can specify an own fitfun. These need to take arguments x (the predictors), y (the outcome) and q the number of selected variables as defined for
stability selection. Additional arguments to the variable selection method can be handled by .... In the function stabsel() these can then be specified as a named list which is given to args.fitfun.
The fitfun function then needs to return a named list with two elements selected and path: * selected is a vector that indicates which variable was selected. * path is a matrix that indicates which
variable was selected in which step. Each row represents one variable, the columns represent the steps. The latter is optional and only needed to draw the complete selection paths.
The following example shows how lars.lasso is implemented:
lars.lasso <- function(x, y, q, ...) {
if (!requireNamespace("lars"))
stop("Package ", sQuote("lars"), " needed but not available")
if (is.data.frame(x)) {
message("Note: ", sQuote("x"),
" is coerced to a model matrix without intercept")
x <- model.matrix(~ . - 1, x)
## fit model
fit <- lars::lars(x, y, max.steps = q, ...)
## which coefficients are non-zero?
selected <- unlist(fit$actions)
## check if variables are removed again from the active set
## and remove these from selected
if (any(selected < 0)) {
idx <- which(selected < 0)
idx <- c(idx, which(selected %in% abs(selected[idx])))
selected <- selected[-idx]
ret <- logical(ncol(x))
ret[selected] <- TRUE
names(ret) <- colnames(x)
## compute selection paths
cf <- fit$beta
sequence <- t(cf != 0)
## return both
return(list(selected = ret, path = sequence))
To see more examples simply print, e.g., lars.stepwise, glmnet.lasso, or glmnet.lasso_maxCoef. Please contact me if you need help to integrate your method of choice.
Using boosting with stability selection
Instead of specifying a fitting function, one can also use stabsel directly on computed boosting models from mboost.
### low-dimensional example
mod <- glmboost(DEXfat ~ ., data = bodyfat)
## compute cutoff ahead of running stabsel to see if it is a sensible
## parameter choice.
## p = ncol(bodyfat) - 1 (= Outcome) + 1 ( = Intercept)
stabsel_parameters(q = 3, PFER = 1, p = ncol(bodyfat) - 1 + 1,
sampling.type = "MB")
## the same:
stabsel(mod, q = 3, PFER = 1, sampling.type = "MB", eval = FALSE)
## now run stability selection
(sbody <- stabsel(mod, q = 3, PFER = 1, sampling.type = "MB"))
opar <- par(mai = par("mai") * c(1, 1, 1, 2.7))
plot(sbody, type = "paths")
plot(sbody, type = "maxsel", ymargin = 6)
To cite the package in publications please use
which will currently give you
To cite package 'stabs' in publications use:
Benjamin Hofner and Torsten Hothorn (2021). stabs: Stability
Selection with Error Control, R package version R package version
0.6-4, https://CRAN.R-project.org/package=stabs.
Benjamin Hofner, Luigi Boccuto and Markus Goeker (2015). Controlling
false discoveries in high-dimensional situations: Boosting with
stability selection. BMC Bioinformatics, 16:144.
To cite the stability selection for 'gamboostLSS' models use:
Thomas, J., Mayr, A., Bischl, B., Schmid, M., Smith, A.,
and Hofner, B. (2017). Gradient boosting for distributional regression -
faster tuning and improved variable selection via noncyclical updates.
Statistics and Computing. Online First. DOI 10.1007/s11222-017-9754-6
Use ‘toBibtex(citation("stabs"))’ to extract BibTeX references.
To obtain BibTeX references use | {"url":"http://rsync.jp.gentoo.org/pub/CRAN/web/packages/stabs/readme/README.html","timestamp":"2024-11-14T21:07:53Z","content_type":"application/xhtml+xml","content_length":"26492","record_id":"<urn:uuid:dd8dfe2d-3747-4887-b7a7-b6ce41023567>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00845.warc.gz"} |
Biophysical models that attempt to infer real-world quantities from data usually have many free parameters. This over-parameterisation can result in degeneracies in model inversion and render
parameter estimation ill-posed. However, in many applications, we are not interested in quantifying the parameters per se, but rather in identifying changes in parameters between experimental
conditions (e.g. patients vs controls). Here we present a Bayesian framework to make inference on changes in the parameters of biophysical models even when model inversion is degenerate, which we
refer to as Bayesian EstimatioN of CHange (BENCH). We infer the parameter changes in two steps; First, we train models that can estimate the pattern of change in the measurements given any
hypothetical direction of change in the parameters using simulations. Next, for any pair of real data sets, we use these pre-trained models to estimate the probability that an observed difference in
the data can be explained by each model of change. BENCH is applicable to any type of data and models and particularly useful for biophysical models with parameter degeneracies, where we can assume
the change is sparse. In this paper, we apply the approach in the context of microstructural modelling of diffusion MRI data, where the models are usually over-parameterised and not invertible
without injecting strong assumptions. Using simulations, we show that in the context of the standard model of white matter our approach is able to identify changes in microstructural parameters from
conventional multi-shell diffusion MRI data. We also apply our approach to a subset of subjects from the UK-Biobank Imaging to identify the dominant standard model parameter change in areas of white
matter hyperintensities under the assumption that the standard model holds in white matter hyperintensities. | {"url":"https://www.win.ox.ac.uk/publications/1267748/modal","timestamp":"2024-11-11T16:44:11Z","content_type":"text/html","content_length":"5864","record_id":"<urn:uuid:25c70a1a-ea54-43a2-b378-243da5d2b841>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00720.warc.gz"} |
Euler's theorem, rival inputs and distinguishable particles
Update: Romer responded to Waldmann while I was still writing this post. I added comments below in brackets. Overall
Paul Romer
seems to think Euler's theorem is like the second law of thermodynamics.
Robert Waldmann
put it more succinctly than I ever could [and Romer agrees that the analogy wasn't complete, see the response for more]:
I find this odd, because Euler's theorem is math while the second law of thermodynamics is a scientific hypothesis
A minor quibble: I would have said
scientific hypothesis ... it follows from similar assumptions to the ones this blog makes about economics, by the way. Except humans can make
coordinated decisions
(panic, herd, etc) which causes economic entropy to fall much more than would be allowed by the
fluctuation theorem
(that is to say, the second law of thermodynamics is actually violated in small systems, but the violations in the second law of "econo-dynamics" are too big to be a result of something like the
fluctuation theorem).
The real thing that Romer says Andolfatto violates is the idea that if you double all rival inputs, output should double. [Romer points this out again in his response.] Romer points to
Vollrath's very good explanation
of the basic logic. This seems completely reasonable, but is wrong in general.
Here's an example. Let's say the only industry on Earth involves searching sequenced DNA for markers. Everyone does this with
computers and produces
output (locations of DNA markers). These computers are rival, there is perfect competition and the algorithms ("ideas") are all the same. Typically a good parallel algorithm follows Euler's theorem
-- you double the number of computers, you double output. However, there are actually two massive
super-linear speed-ups
you can get when increasing the number of computers -- when the size of the problem becomes less than the total RAM available on all of the computers together and another when the size of the problem
fits into the total memory cache (I think it is it L4 cache). The number of disk accesses (or RAM accesses) decreases as you approach this amount of memory, and during this transition from many to
zero accesses
the speed of your calculation goes up faster than linearly
and therefore your output goes up faster than linearly. For some
computers produces
(2 + δ)Y
output (because it does it faster). And
can be huge --
δ ~
10 or 100! A computer architecture that has a memory hierarchy like this can be super-linear across several orders of magnitude. Yes, if you allow for
α >> 1, X → α X
does result in
Y → α Y
, but such a large
is not necessarily empirically relevant.
The issue here is that Euler's theorem assumes you're doubling something that's non-interacting. The computers in the example above interact: their memory structures combine to open up a path to
greater efficiency that isn't available for smaller X.
Actually, you can get an interesting violation of "constant returns to scale" in physics -- it's called
Gibb's paradox
and it refers to entropy failing to be extensive (the physics term for constant returns to scale) using a naive entropy formula. Take two boxes and their entropy is twice the entropy of one box (call
Putting these boxes together raises the entropy by a small amount to (2+δ)S according to the naive formula (that I use again in the graph at the bottom of this post).
Separating them reduces the entropy back to 2S -- in violation of the second law!
The resolution of this paradox for the physical system is that the formula fails to take into account the particles are indistinguishable -- you can't tell if a Helium atom that starting in the left
box is in the left or right box when you separate them. Making that correction makes the entropy extensive (i.e. restores constant returns to scale).
However! In economics, with the exception of money (and similar abstract assets), we don't have indistinguishable particles. Amy is different from Bill, so you can tell when their companies merge and
then split up (like the pictures above) who works for which company. In physics you can
in principle
find this information out about the atoms, however it costs energy to do so and ends up raising the temperature of the system to identify every atom. In economics, this information is pretty cheap to
Additionally, the formulas for thermodynamics are all evaluated at large values of the number of particles (
). Really large, like 10²³. However, for large but more
economically relevant
scales (10⁶), we still haven't reached the limits ... in fact, if you look at the
entropy function log N! ≈ N log N - N
you'd have "increasing returns to scale" with the sum of the production function exponents being > 1:
What if that is all secular stagnation is? An approach to the limit where economics is extensive, there are constant returns to scale for rival inputs and all economic growth ... halts. Maybe growth
economists are studying the eventual future of economics ...
Anyway, in conclusion, the things that can allow increasing returns to scale are interacting rival inputs, distinguishable rival inputs and finite rival inputs. If you think of rival inputs (capital)
as an infinite frictionless fluid of indistinguishable molecules, then, yes, Romer seems to be basically correct. However competitive equilibrium with price taking makes only one of these assumptions
(the infinite one) -- in particular, it doesn't make the indistinguishable assumption. That's an important one if
entropy is related to output
. Amy and Bill could be identically productive, the same in every way. However, just allowing for them to have names means that the entropy (and hence output) of Amy, Bill, Catherine and Dave (who
are also the same as Amy and Bill) can be more than just twice the output of Amy and Bill.
18 comments:
1. So surprising (to me), to have come to this conclusion by entropy....one of the most common assumptions in formal economics is anonymity, I have a minor quibble therefore when you say that
competitive equilibrium doesn't require it, I think in most models it either is required or is tacitly baked in by making indexed firms otherwise indistinguishable...when we teach about
monopolistic competition we emphasize branding as a form of breaking competitive equilibrium...agents in new monetarist search models may be unique but acting on information other than their type
is assumed costly by making the odds of interacting with them again arbitrarily small...but yeah I'm with you anonymity is often a dumb assumption ....although I couldn't tell you the names of
any of the traders who traded today on any of the stock exchanges
1. Hi LAL,
Anonymity and an index of firms that are "the same" (Fi = Fj in some way) not quite the same as in-principle indistinguishablity. I should have been more explicit in the post above that the
idea has a specific meaning in physics.
As I mentioned above, money is the closest to being an indistinguishable particle in economics ... but dollar bills have serial numbers. It is impossible in principle to put a serial number
on a Helium atom. Any macroscopic object -- or even things represented by macroscopic objects like the states of electrons in a bank of (nanometer-scale) transistors in a computer
representing a dollar in your bank account -- is distinguishable.
Another analogy is that if economic agents were indistinguishable, you couldn't tell if you gave me money or I gave you money ... and it wouldn't matter!
The key point is even though it may be costly to figure out the identity of every stock trader, it is not an impossible task given the laws of physics. The result is that an entropy term that
looks like:
S ~ N log V
has a correction factor where you divide the number of states by the number of ways they could be exchanged (the over-counted states don't exist) ... N!, so the the entropy gets a term ~ -
log N! ~ - N log N so that S becomes:
S ~ N log V - N log N = N log (V/N)
This makes entropy extensive (V → α V, N → α N, then S → α S) ... i.e. obey Euler's theorem.
2. I see your point, but I promise the real world never gets in the way of economic modelling...there is no way for instance to know who traded with whom in the new monetarist search
models...every agent is assigned a real number but the event of two people meeting is not even a measurable event...
3. I shouldn't even say assigned a real number...they simply exist as a continuum...
4. That's pretty interesting -- such a system is basically a field with no ultraviolet cutoff (minimum size of fluctuations in the field) and the distribution of random fluctuations would have
an ultraviolet catastrophe just like in the days before Planck with black body radiation. That is to say it is inconsistent with random noise traders trading any money (because they would
trade an infinite amount of money).
5. Lol, im usually pretty good at following along your physics examples....this one has me stumped though...I will have to Wikipedia...
6. This comment has been removed by the author.
7. In the days before quantum physics, the calculations for the energy given off by a hot object (e.g. the red glow of a hot heating element) came out infinite. That was because if you uniformly
distribute random energy across an infinite number of light wavelengths (λ) that can have any energy they end up being mostly "ultraviolet" -- actually gamma rays -- and the red hot iron
would be irradiating you. Obviously this was wrong. Planck applied the solution that light only comes in energy E = h c/λ and got the result that things glow red, then yellow, then white and
finally blue.
I always used that as an example of everyday experience with quantum mechanics back when I taught classes.
So my intuition is that if you have random fluctuations in a continuum of traders (random fluctuations of the "trader field" as opposed to the electromagnetic field) and randomly distribute
value across those trades with a uniform distribution, most of them would be high value trades and you'd end up with an infinite amount of money.
Now in the NK models, equilibrium probably comes from some sort of optimization instead of a "maximum entropy" argument so they don't really suffer from this ultraviolet catastrophe -- there
aren't any random traders, just rational agents.
8. does it matter how the fluctuations are random? ...in the new monetarist models I'm talking about, the agents are usually choosing to accept/not accept money and to produce/not produce some
good each at the flip of coins with probabilities p1 and p2...
9. If they are accepting a unit of money and/or producing a unit of goods, then the value is already "quantized" -- in the picture in my comment above, agents could produce as much or as little
or buy as much or as little as they wanted.
2. This comment has been removed by the author.
3. If I recall correctly, the evolution of these models ran into some trouble precisely in making the optimal cash holdings determinate....then they developed the rather strange trick of alternating
between these search like markets with centralized markets....but that transition happened before my time...
4. Thanks. You addressed a question I had not yet formulated about an analogy with Maxwell's Demons in economics. :)
But I don't quite get the question of naming and indistinguishability in regard to fermions. Aren't fermions like identical twins? Not completely indistinguishable -- else we would have
Bose-Einstein statistics? And therefore nameable? Then in the case of uniting and then separating the boxes, we don't know which twin is which (and, even though they have names, they aren't
telling ;)).
1. Hi Bill,
Yes, fermions are identical particles ... there are two fundamental indistinguishable particles (fermions and bosons) that follow Fermi-Dirac or Bose-Einstein statistics.
You can also have indistinguishable particles without specific statistics (or at least where spin-statistics isn't important to the system at the scale you're looking at ... typically
molecules at normal temperatures and pressures). This is essentially the picture when you fix Gibb's paradox above.
What I am talking about in the non-extensive "economic" entropy functions are distinguishable particles -- particles that could be named if you set your mind to it. Normal entropy in physics
doesn't address this too much (except possibly in mesoscale physics) because its usually not relevant to physical systems.
2. Well, I think that the principle of the identity of indistinguishables produces Bose-Einstein statistics. E. g., you can't tell the difference between HHHT, HHTH, HTHH, and THHH, because you
can't distinguish the H's from each other.
Anyway, it's a fairly philosophical question. :)
3. Hi Bill,
Spin-statistics is not the same as whether something is distinguishable. Your example could be fermions or bosons ... I'll do two states because it is easier:
Symmetric boson state with indistinguishable states HT and TH
|TH> = |HT> = (|H>|T> + |T>|H>)/2
Anti-symmetric fermion state with indistinguishable states HT and TH
− |TH> = |HT> = (|H>|T> − |T>|H>)/2
note that the observable state = and the minus sign cancels.
4. HTML fail ... last line should be:
note that the observable state is〈HT|O|HT〉=〈TH|O|TH〉and the minus sign cancels
5. Many thanks, Jason! :)
Comments are welcome. Please see the Moderation and comment policy.
Also, try to avoid the use of dollar signs as they interfere with my setup of mathjax. I left it set up that way because I think this is funny for an economics blog. You can use € or £ instead.
Note: Only a member of this blog may post a comment. | {"url":"https://informationtransfereconomics.blogspot.com/2015/06/eulers-theorem-rival-inputs-and.html","timestamp":"2024-11-08T19:02:32Z","content_type":"text/html","content_length":"146491","record_id":"<urn:uuid:c35b2bc7-621b-4c0a-b855-288686558501>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00440.warc.gz"} |
How to Calculate the Payout Value and Cost of an Annuity
An annuity is a payment plan that pays you a certain amount at regular intervals. The payments may be in the form of regular deposit to a savings account, monthly home mortgage or insurance payments,
or a pension. There are different types of annuities and each is categorized by the frequency of the payment dates. An example of a single type of annuity is a fixed annuity. Another type is a
variable annuity.
The difference between fixed annuities and immediate payments is the amount of money that the annuitant will receive each month. While the former starts paying immediately, the latter takes time to
begin. Depending on the type of annuity, the annuitant may specify a specific age for when they will start receiving payments from the insurance company. Similarly, annuities with a fixed payment
schedule will provide periodic payments to the annuitant over the duration of their life.
The payout value of an annuity is calculated by dividing the present value of all future payments by the discount rate of the purchasing company. The size of a payment is referred to as the payment
period, and the interest rate is the rate of interest. The interest rate determines the PV of an annuity. Divide the number of payment periods by the interest-rate to calculate the annuity’s value.
This formula can be used to estimate the future value of the annuity.
The cost of an annuity is calculated based on the present value of each payment. This calculation requires specific information, such as the number of payments that will be received. The purchasing
company will also use a discount rate to account for the risks in the market. The discount rate directly affects the value of an annuity and the amount you receive from the purchasing company. Once
you calculate the present value of your annuity, you can then decide whether or not you want to purchase it.
If you are looking to purchase an annuity, you will need to know the underlying rate. You will need to use a discount rate that will be the lowest amount possible and allow for compounding. The
current value of an annuity is calculated using a discount rate formula that will allow the purchaser to receive the same amount as the future payment amount. If the annuity has a high discount rate,
it is likely to be more expensive.
To calculate the PV of an annuity, you must know the payment size and interest rate. You can also calculate the present value of an annuity by calculating the PV of individual payments and then
multiplying this number by the rate. The present value of an annuity can be found by dividing the number of payments by the discount rate and the size of each payment. Then, you can divide the total
by the number of payments in a single annuity. | {"url":"https://www.mystructuredsettlementcash.com/blog/how-to-calculate-the-payout-value-and-cost-of-an-annuity/","timestamp":"2024-11-01T23:43:13Z","content_type":"text/html","content_length":"19354","record_id":"<urn:uuid:552efb91-7c16-4e3f-8e2e-40ac0a90fe6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00029.warc.gz"} |
Intrinsic and Extrinsic Semi-Conductors | Solids and Semiconductor | Online Notes Nepal
Intrinsic and Extrinsic Semi-Conductors
The semiconductor is divided into two types. One is an Intrinsic Semiconductor and the other is an Extrinsic semiconductor. The pure form of the semiconductor is known as the intrinsic semiconductor
and the semiconductor in which intentional impurities are added for making it conductive is known as the extrinsic semiconductor. The conductivity of the intrinsic semiconductor becomes zero at room
temperature while the extrinsic semiconductor is very little conductive at room temperature. The detailed explanation of the two types of the semiconductor is given below.
Intrinsic Semiconductor
An extremely pure semiconductor is called an Intrinsic Semiconductor. On the basis of the energy band phenomenon, an intrinsic semiconductor at absolute zero temperature is shown below.
Its valence band is completely filled and the conduction band is completely empty. When the temperature is raised and some heat energy is supplied to it, some of the valence electrons are lifted to
the conduction band leaving behind holes in the valence band as shown below.
The electrons reaching the conduction band move randomly. The holes created in the crystal also free to move anywhere. This behavior of the semiconductor shows that they have a negative temperature
coefficient of resistance. This means that with the increase in temperature, the resistivity of the material decreases, and the conductivity increases.
Extrinsic Semiconductor
A semiconductor to which an impurity at a controlled rate is added to make it conductive is known as an extrinsic Semiconductor.
An intrinsic semiconductor is capable to conduct a little current even at room temperature, but it is not useful for the preparation of various electronic devices. Thus, to make it conducive a small
amount of suitable impurity is added to the material.
Extrinsic Semiconductor
A semiconductor to which an impurity at a controlled rate is added to make it conductive is known as an extrinsic Semiconductor.
An intrinsic semiconductor is capable to conduct a little current even at room temperature, but it is not useful for the preparation of various electronic devices. Thus, to make it conducive a small
amount of suitable impurity is added to the material.
The process by which an impurity is added to a semiconductor is known as Doping. The amount and type of impurity which is to be added to the material have to be closely controlled during the
preparation of extrinsic semiconductor. Generally, one impurity atom is added to 108 atoms of a semiconductor.
The purpose of adding impurity in the semiconductor crystal is to increase the number of free electrons or holes to make it conductive. If a Pentavalent impurity, having five valence electrons is
added to a pure semiconductor a large number of free electrons will exist.
If a trivalent impurity having three valence electrons is added, a large number of holes will exist in the semiconductor.
Depending upon the type of impurity added the extrinsic semiconductor may be classified as n-type semiconductor and p-type semiconductor.
The PN junction
The total charge on each side of a PN Junction must be equal and opposite to maintain a neutral charge condition around the junction. If the depletion layer region has a distance of D, it, therefore,
must, therefore, penetrate into the silicon by a distance of Dp for the positive side, and a distance of Dn for the negative side giving a relationship between the two of Dp*NA = Dn*ND in order to
maintain charge neutrality also called equilibrium. | {"url":"https://onlinenotesnepal.com/intrinsic-and-extrinsic-semi-conductors","timestamp":"2024-11-09T09:38:58Z","content_type":"text/html","content_length":"82348","record_id":"<urn:uuid:988efa90-1582-46a0-8262-796e44f25846>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00253.warc.gz"} |
Turn off box plot in variability chart using JSLTurn off box plot in variability chart using JSL
Hi All,
Does anyone know why the following script failed to turn off the box plot and cell mean? I copy this script after I manually created the variability plot. However, if I run the script, the box plot
and cell mean are not turn off. I am currently using JMP17.
The only workaround I found is by using either of the two options below :
1. << (Variability Analysis[1] << Show Box Plots( 0 ))
2. << Variability Analysis << Show Box Plots( 0 )
In option 1, what does the [1] in Variability Analysis mean? It seems like I can omit without any issue. I found the solution from JMP -> Scripting Index.
dt = Open("$SAMPLE_DATA/Big Class.jmp");
Variability Chart(
Y( :height ),
Model( "Main Effect" ),
X( :sex ),
Sigma Multiplier( 6 ),
Analysis Type( "Choose best analysis (EMS REML Bayesian)" ),
Variability Analysis(
Show Range Bars( 0 ),
Show Cell Means( 0 ),
Std Dev Chart( 0 ),
Points Jittered( 1 ),
Show Box Plots( 0 ) | {"url":"https://community.jmp.com/t5/Discussions/Turn-off-box-plot-in-variability-chart-using-JSL/td-p/801205?code=en-US","timestamp":"2024-11-07T09:24:41Z","content_type":"text/html","content_length":"624920","record_id":"<urn:uuid:1da943c2-a26e-47ec-8350-3129c624b5dc>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00762.warc.gz"} |
We study the divisibility properties of the constant terms of certain meromorphic modular forms for Hecke groups and relate those properties to several O.E.I.S. sequences and several other sequences,
the members of which appear in congruences of Ramanujan. At the end of the article, we construct from elementary arithmetic functions some meromorphic but not necessarily modular functions and study
their constant terms. For our use in subsequent drafts, we work out a variation of the multinomial theorem convenient for application to single-variable power series. | {"url":"https://www.preprints.org/manuscript/202306.1164/v2","timestamp":"2024-11-11T16:50:14Z","content_type":"text/html","content_length":"370908","record_id":"<urn:uuid:0f96dc7e-a086-40d7-8a46-19ac2e8dbf79>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00850.warc.gz"} |
Factor Base Discrete Logarithms in Kummer Extensions
Paper 2015/859
Factor Base Discrete Logarithms in Kummer Extensions
Dianyan Xiao, Jincheng Zhuang, and Qi Cheng
The discrete logarithm over finite fields of small characteristic can be solved much more efficiently than previously thought. This algorithmic breakthrough is based on pinpointing relations among
the factor base discrete logarithms. In this paper, we concentrate on the Kummer extension $ \F_{q^{2(q-1)}}=\F_{q^2}[x]/(x^{q-1}-A). $ It has been suggested that in this case, a small number of
degenerate relations (from the Borel subgroup) are enough to solve the factor base discrete logarithms. We disprove the conjecture, and design a new heuristic algorithm with an improved bit
complexity $ \tilde{O}(q^{1+ \theta} ) $ (or algebraic complexity $\tilde{O}(q^{\theta} )$) to compute discrete logarithms of all the elements in the factor base $\{ x+\alpha | \alpha \in \F_{q^2} \}
$, where $ \theta<2.38 $ is the matrix multiplication exponent over rings. Given additional time $ \tilde{O} (q^4), $ we can compute discrete logarithms of at least $ \Omega(q^3) $ many monic
irreducible quadratic polynomials. We reduce the correctness of the algorithm to a conjecture concerning the determinant of a simple $ (q+1)$-dimensional lattice, rather than to elusive smoothness
assumptions. We verify the conjecture numerically for all prime powers $ q $ such that $ \log_2(q^{2(q-1)}) \leq 5134 $, and provide theoretical supporting evidences.
Note: 19 pages, writing revised, appendix modified
Available format(s)
Publication info
Preprint. MINOR revision.
Contact author(s)
zhuangjincheng @ iie ac cn
2017-02-27: revised
2015-09-06: received
Short URL
author = {Dianyan Xiao and Jincheng Zhuang and Qi Cheng},
title = {Factor Base Discrete Logarithms in Kummer Extensions},
howpublished = {Cryptology {ePrint} Archive, Paper 2015/859},
year = {2015},
url = {https://eprint.iacr.org/2015/859} | {"url":"https://eprint.iacr.org/2015/859","timestamp":"2024-11-11T23:09:57Z","content_type":"text/html","content_length":"15454","record_id":"<urn:uuid:71fb34fc-3880-40ce-b428-3074d1cec828>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00771.warc.gz"} |
Mathematical Advancement in a Group of 2
by Lucy Rüttgers
How the choice of topic came about
Already in the letter project, Edith (now 5;4) mentioned that she really wanted to learn arithmetic next and had already started something with it at home with her mother. I promised her that we
could do an arithmetic project after our letter project. She was thrilled.
In the meantime, I had also noticed Edith’s interest in music. So I talked to her again alone to clarify whether a music project would be more appropriate for her or whether we should do the
arithmetic project first and then a music project? She was very sure that she definitely wanted to learn arithmetic first. She then wanted to do the music project afterwards.
The emergence of the group of two
Already in the letter project it was important for me to bring Edith and Lotte (5;10) closer together. On the one hand, they are two very different children, with different interests and caregivers,
on the other hand, both are very high achievers and both lack a confidant at this level, a friend with whom they can exchange ideas. In addition, it has turned out that both of them will attend
school and our after-school care centre together next summer. So it would probably do them both good to have an „ally“.
…in brief …
Edith (5;4) is a very studious child, also when it comes to dealing with numbers. Her kindergarten teacher works out mathematical knowledge with her and another girl in a flexible way.
This shows that both children are highly motivated.
And it becomes clear that even with two children who are both very advanced, individual differentiation still makes sense.
Most of the time, educators do not have the time for such intensive support processes, but the report makes it clear that a lot can be achieved in just a few hours, not only for the children’s
knowledge and skills, but also for their personality development.
In the letter project, there were situations in which Lotte and Edith continued to work without the other children. I also noticed their similar reading levels. The cooperation in the letter project
was not yet enough to build up a friendship. Before the our new project began, I said to Edith, to whom I had promised this project, „I would like Lotte to join in too. Do you agree? If so, we could
ask Lotte if she wants to take part“. Edith agreed. The joint discussion with Lotte then revealed that she wanted to take part in the numeracy project.
Project description:
The mathematical area is very large and offers many possibilities. However, I only have 12 days until the report deadline for my IHVO Course due to the current staff holiday / illness situation in
our kindergarten. Therefore, I can only cover a small area for now. However, I hope to be able to continue working on the project with the children beyond the deadline as long as they are interested.
For the time being, I have to find out how far Edith and Lotte have come and what they are particularly interested in. I hope that this will result in good support for them.
As with the letter project, I have designed the so-called red thread. Here, too, the children themselves should design / decide the concrete course of the project – according to their interests and
wishes. I can imagine this being very productive with these two children.
The contents of the project chosen by me in advance are:
Counting, naming / recognising numbers (reading), whereby the number range is determined by the children,
Writing numbers. Possibly learning to read and write word numbers through the children’s strength of being able to read. Assign numbers to corresponding quantities.
Types of arithmetic:
Plus, minus, possibly division and multiplication tasks.
Measuring objects, lengths, widths;
Units of measurement: Millimetres, centimetres and metres.
Measuring instruments: ruler, folding rule, tape measure.
Weight units: Gram and kilogram
Arithmetic devices:
Children’s calculator (with balls in 10s mode / abacus), arithmetic chain (self-created in 10s mode), calculator, cash register calculator that prints receipts.
Place value of numbers:
Classify units, tens and possibly hundreds.
The two project children themselves should decide on the exact course of the project, the contents and the duration of the project.
First offer
Objective: Make the selection for the first topic content.
Materials available: scales, metre measure, a game with numbers / quantities,
an apple divided into four parts, an apple divided into two parts, a knife.
We had a conversation about counting and numbers. Then we looked at the numbers game together. It consists of wooden cuboids in different sizes, each size corresponds to a certain amount. On each
cuboid is the corresponding number in the form of the English word, for example „seven“ for the number 7. In addition, the number is represented on the cuboids by a corresponding number of lines.
The cuboids fit together in a wooden box, in a row always resulting in the number 10. Different combinations are possible.
The children worked out this system of the game and the statement of the individual cuboids.
Then we talked about the scales and the metre-measure, about their meaning and their possible uses, and the children tried them out.
Using the apples, we worked on the arithmetic operations: plus, minus and divide.
Finally, the children had to decide what they wanted to start the project with. The result was that they both agreed to measure with a metre-measure at the next meeting.
Duration: about 30 minutes. The kindergarten did not give me more time.
Observations during the offer:
Edith was disappointed about the early end, although I had announced it in advance.
Both children complemented each other in their knowledge. Edith immediately combined different plus and minus tasks with the apples. Lotte understood the division tasks faster than Edith.
Both children treated each other considerately and respectfully.
They both found the metre measure and its technique of unfolding and folding very exciting. They appeared very motivated for the next meeting.
Second offer:
Objective setting:
To get to know and use different measuring instruments, to get to know and use units of measurement.
Both were given a workbook in A4 format. They proudly labelled it with their name.
Part 1
Looking at the measuring instruments together and trying them out briefly.
They named the rows of numbers on the ruler. I know that both children can count very far – with a few gaps. Now I want to know if they can also read numbers in the 10 range in order to be able to
use a measuring instrument at all. It turns out that they already can.
To deepen and extend their skills, they write the number sequences from 1-10 and from 10-20 in their notebooks, using a ruler that reaches 30 cm.
As asked to do so, they write 30, 40 and 50 under the 20. In this way, they can complete the number sequences outside of what we offer, and they have both done so.
Part 2
Using the ruler, they learn about the units of measurement millimetre and centimetre – as well as their spelling abbreviations MM and CM. I chose capital letters because they are more familiar to
In their notebooks they recorded the new information: 10 MM = 1 CM.
Part 3
Each of them measured small objects found in the room with the ruler.
Whoever wanted to, should write down the measurement results in the notebook. Edith did it.
Then the two of them measured large objects and distances together with the metre-measure or the tape measure, depending on their choice.
Duration up to this point: 1 hour and 15 minutes.
When Edith signs up for more activities, Lotte withdraws from the offering saying, „I can’t do it anymore“.
Observations during the offer:
Lotte tired of the theoretical part very quickly. But when it came to the practical part, she was very fit and motivated again. Edith was full of power all the time.
The teamwork of the two was very nice. Large distances that they could not measure alone, they measured together in agreement and with good arrangements. Alternately, one recorded the beginning of
the measurement and the other read off the result. The wishes of the others were taken into account.
Continuation of the offer with Edith:
She wants to work with the children’s calculating machine.
Edith pushes the arithmetic balls together (in the 10s range) to form her own plus tasks, names them aloud and calculates them.
She does the same with minus tasks.
Duration of the follow-up offer: 20 minutes, total duration for Edith: 1 hour 35 minutes.
Observations during the supplementary offer:
She was still very concentrated and highly motivated and showed no signs of tiredness. At the beginning of the arithmetic exercises, she still counted each ball with her finger. After my suggestion
to try it without counting, she first obviously counted with her eyes without using her fingers. Later, she also named smaller quantities without counting. My tip to use the colour gradation of the
balls in steps of 5 and to continue counting only after 5 was partially implemented by her.
While she was working, she discovered the game „Rummikub“ (a number game) on the games shelf (recommended for ages 8 and up) and would have liked to have it explained to her right away.
Unfortunately, this was no longer possible for organisational reasons. I promised her it for the next day.
Third offer:
Deepening of measuring and / or introduction of the game of Rummikub. Extension by recording what has been measured.
Material per child: 1 ruler, pencil, scissors, glue, graph paper.
When I asked the children if they would like to measure objects first or if they would rather play the Rummikub game, they unanimously answered: „Measure!
Part 1
Review of the previous day. We look back at what we have written so far in the booklet. We look at the graph paper and trace the boxes using the ruler.
Part 2
Task: Measure smaller objects of one’s own choice found in the room with the ruler. Draw the object on graph paper using the measurement results. Write the length and width of the object on the
corresponding sides.
They drew two objects like this with my help. Since the task seemed a bit too difficult, the next task was:
Part 3
Draw lines and write their length next to them. I made some initial length guidelines:
10 cm, 5 cm, 8 cm, 15 cm, then their own choice. They both approached the task with great motivation.
Part 4
Cut out the drawings and stick them in the notebook. They have to think about the best way to do it. Both want to cut out each drawing individually.
Duration: 1 hour with me present, and 30 minutes without me for kindergarten organisational reasons.
Observations during the offer:
Both girls were highly motivated and full of joy. Lotte worked very quickly and independently. Edith was slower and more insecure. Her ruler slipped more often and she had difficulties putting the 0
at the beginning of the line correctly. However, she did not seem to be frustrated by this. She confidently decided to cut out her drawings one by one, using the wave and pinking shears, where it is
more difficult not to cut something off by mistake.
Lotte asked Edith if it worked well with the wave scissors, but then decided to use the normal scissors.
One difficulty arose: Because the children chose the objects to be measured themselves, the measurements resulted in decimal numbers. I explained to them that counting the remaining small millimetre
lines would result in a number after a decimal point. But then they needed help again and again.
Fourth offer:
Goal setting:
The previous day we had set for this day: the introduction of the Rummikub game.
(By the way, the game is very easy to make out of cardboard).
The game consists of number tiles from 1 to 13, each number row is available in 4 colours and twice. There are also 2 jokers. It is played similarly to the Rommee card game.
Part 1
Each person chooses a colour and lays out the number line from 1 to 13. To help them, they were both given a ruler on which they could trace the number line and also check that it was correct.
They then did the same again with a second colour to become more confident.
Part 2
They put the same numbers in different colours together.
Part 3
I explain the use of jokers and the rule: There must always be at least 3 tiles next to each other.
Part 4
Start of the game: Each player takes 14 tiles, sorts them by colour and puts them in a number sequence. The winner is the first to get rid of all the tiles.
The player whose turn it is can lay out 3 matching tiles, i.e. either three consecutive numbers in the same colour (for example: 7, 8, 9) or one and the same number in three colours (for example 5 in
blue, red and yellow).
In addition, whoever’s turn it is can put on all the tiles that match the rows of numbers that have already been laid out.
During the game, I could see the number tiles of both children and give them impulses accordingly.
I was then able to add a few levels of difficulty:
1. you can steal a number from a group (for your own use),
2. you are allowed to move rows of numbers apart to create your own, already existing numbers,
3. you can use a joker.
Duration: approx. 1 hour
Observations during the offer:
Both were very concentrated. Lotte was very quick and confident with the numbers up to 13. She did not make any mistakes when putting the numbers together. Edith was confident up to 10, above which
she had difficulties. Looking at the numbers on the ruler helped her to correct it. So did repeated counting. Her difficulty was in recognising the numbers above 10. She worked more slowly and very
deliberately and concentrated.
My question whether I could still explain something difficult to them (see the 3 extensions listed above) made them both excited and proud and they listened very motivated. The difficulties that were
still built in were better understood and implemented by Lotte than by Edith. Edith understood the connections but needed a little more time than Lotte.
Fifth offer:
Objective setting:
Deepening the game Rummikub to give the children the opportunity to play it independently later on. Deepening the handling of ruler and metre in order to gain more confidence with it.
There was a weekend between the last offer and this one.
Both children had decided on the content of the fourth offer. However, I had the feeling that Edith was not quite as keen to repeat the Rummikub game.
Now it looked like this: Edith wanted to measure objects, Lotte wanted to play Rummikub. Her mother also plays the game and at the weekend Lotte could join in, she said proudly.
Edith and Lotte agreed to do some measurements first and then play Rummikub.
Part 1
At first, both of them were very motivated to try to measure huge distances, but then they noticed their technical and physical limits.
As their independent action was important to me and we already had time limits again, they accepted my tip to opt for smaller objects. They then wanted to draw these on graph paper again.
The decimal points again posed a problem. In addition, both of them lacked a certain spatial imagination, as they had in the first drawings. They had difficulty drawing the rectangle that lay before
them. The longitudinal line was the first step and not a problem. But transferring the width in the right place overwhelmed them. They both then switched to using the rectangle as a template and
simply drawing around it.
So they found a solution to their problem themselves.
Part 2
Rummikub. They both wanted me to play along.
Lotte made sure that Edith did not see her number tiles. We played the game like last time. I gave both girls impulses and suggestions when they couldn’t come up with solutions on their own. Lotte
won and then supported Edith in the game against me.
Duration: 1 hour
Observations during the offer:
Lotte had more „insight“, also because she had already played the game at home. She saw more possibilities of placing tiles than Edith and she recognized them faster. Edith had already noticed her
own difficulties during the first game and was therefore not so enthusiastic about playing it again. She has a high demand on herself and doesn’t seem to like it when others can do something better
than her.
In the course of the game, however, she gained confidence. With the help of my suggestions, she made appropriate moves – and she put it away very well that Lotte won, with the prospect of still being
able to beat me. We then had to end the game before the second winner was determined. The game demanded a lot of concentration from both of them.
Despite this, or perhaps because of it, they enjoyed it.
Sixth offer:
At the end of the last meeting, they both expressed the wish: to write arithmetic problems in the notebook.
To write and use the arithmetic signs + (plus) and = (equal), and possibly – (minus) correctly.
Solve small arithmetic problems.
Part 1
To get into the mood: Both count in turn. They both manage to count up to 39 without any mistakes. After that, little help is enough to make them count even more.
Part 2
With the help of the wooden cuboids, the children do the arithmetic problems, calculate or count the solutions and write the problems with the solutions in their notebooks:
1+1=2 2+2=4 3+3=6 4+4=8.
In addition to the wooden cuboid game, both children use their fingers to count. Lotte holds out her fingers, Edith counts.
This is where an important phone call comes in for me. They continue to work independently, laying and counting.
Lotte: 1 +2=3, Edith: 2+5=6, then improves it herself.
Part 3
Edith brought a book with number pictures to the kindergarten. (With a pencil, you have to connect the numbers from 1 up to the largest number und then you can see the whole picture.) They both want
to do this. Each of them choose a picture, they both decide then on the same picture and I make them a slightly enlarged copy.
The series of numbers goes from 1 to 54 and results in a vampire picture. Lotte gets to 18 on her own and then asks for my help. Edith makes it to 28 on her own.
She complains of a sore throat and her nose is running. Nevertheless, they both want to play the Pharaoh game they discovered on the shelf. (The game is called „Der zerstreute Pharao“ 〈“The
Absentminded Pharaoh“〉, it is recommended for the kids from 7 to 16 years.)
Part 4
The game goes like this: Small pyramids cover up motif cards and cards without motifs. Cards that you have to draw tell you the motif you have to find under the pyramids. By moving the pyramids, you
have to find the motif you are looking for without uncovering other motifs. So you have to remember both the places where the motifs are and the places where there are no motifs. There are also
variations, for example, you can turn the game around 180 degrees. Lotte and Edith already know the game.
They need help where the increases in difficulty begin, as these are difficult to read from the cards.
At the end of the game, Lotte has 14 cards and 22 points, Edith has 10 cards and 18 points.
At the end, I wanted to calculate the score together with them using the calculator (abacus) and suggested that they write this calculation in their notebooks. Lotte did this. Edith didn’t want to.
It must have been the flu.
Duration: 2 hours
Observations during the offer:
It was noticeable that Lotte held out for so long. Both were very nice to each other again. Lotte was allowed to draw in Edith’s notebook. Both were fully concentrated and motivated. Writing down the
arithmetic problems in the notebook was a lot of work for them. Some of the numbers were written reversed. They also had to learn how to place the arithmetic signs, which was more „new territory“ for
Edith than for Lotte, who has a sister in the third year of school.
The number picture was a good relaxation afterwards. I could tell that Edith was coming down with a cold. She was not quite as concentrated as usual and made untypical careless mistakes.
Nevertheless, she also wanted to play the Pharaoh game until the end. The two of them spent the rest of the morning alone in the popular gym room, almost forgetting to eat. Edith then spent the
afternoon asleep in the kindergarten.
Seventh offer:
Goal setting:
Short repetition of plus tasks (deepening), introduction of minus tasks (whether with or without writing them down depends on the children’s wishes).
Introduction of easy division tasks. Independent creation of a number picture.
Part 1
Short review of the last joint activity, which took place 6 days ago (weekend and staff shortage again). With the help of the calculator / calculating board, both children had to place and name a
minus task with the beads one after the other. Help was needed to make this possible.
Both children were familiar with plus tasks, but they did not know much about minus tasks at first, although I had already done this with Edith during the second offer.
Edith then placed 3 beads, took 1 away, and 2 remained. The arithmetic task was to be named: 3 -1=2.
With the help of the cuboid numbers game, this could also be done. However, neither of them wanted to write it down in their notebooks.
To make it easier for them, they had to calculate the next tasks with the help of felt pens.
Lotte calculated: 2-1=1.
Edith: 5-3=2, Lotte 4-1=3, Edith again 8-4=4, Lotte didn’t want any more. So we continued with:
Part 2
Trying to make the children understand division tasks. First I tried it with the help of the felt pens.
I have eight pens and one girl. How many pens can I give her? The answer was immediately clear to Lotte, Edith still hesitated.
I have eight pens and would like to distribute them fairly to two children. How many does each child get?
I have eight pencils and would like to give four children the same number of pencils. How many does each child get?
With Lotte it somehow clicked. For Edith, I made it obvious again: we assigned places to the four children and distributed the pencils to each place in turn. That way it became clear.
In order not to let it get boring, now there must be a cake. Each of them had to paint a big round cake.
„Now, if two guests come to visit, how can you distribute the cake so that everyone gets the same amount?“ Easy for both girls! The cake was „cut“ in the middle with a line.
„Now, if 4 guests come to visit, how can you distribute the cake so that everyone gets the same amount?“ Lotte immediately draws the second centre line through the cake. Edith ponders longer, looking
at her cake.
Then 8 guests!
Lotte draws the corresponding diagonal lines, also in the right place. The cake pieces are almost the same size this way. Edith draws them too, although she can also draw from Lotte. To illustrate
the cake pieces, they should use different colours for the lines and number the eight pieces.
Lotte managed 16 pieces of cake well. With Edith, the lines went over each other, resulting in very small uneven pieces.
Writing down these division tasks, they both did not want to learn that – at least not at that time. But they announced very motivatedly that they wanted to paint the cake: with strawberries and
chocolate …
Before that, we shared a square cake. As with the round cake, Edith first drew the vertical and horizontal lines. Then, to get 8 parts, the diagonals. Lotte, on the other hand, drew the diagonals
first and divided them again later – but in such a way that unequal pieces were created. I think the task with the same size was not so clear to her. But you can only see the result when the lines
are already on the paper, maybe she wanted to try something out.
Part 3
After the wonderful colouring of the cake, I asked them if we should stop.
They didn’t want to stop!
I made a new worksheet for Edith. It contained – written from top to bottom – the numbers from 1 to 10 written as a number and next to it the corresponding number words.
Edith read the numbers to me first, but she „cheated“ and didn’t read the number word, but the number. I then wrote down five more number words for her – not in order and not with the number next to
it. She then read them to me and enthusiastically started copying them. Very carefully and accurately!
In the meantime, Lotte very motivatedly drew a fruit skewer (this was Lotte’s own idea), which is ideal for three children to share. Then she also drew a vegetable skewer. After that, she wanted the
same worksheets as Edith. While Lotte then read the numbers and copied them down, Edith then drew skewers. Lotte copied the numbers very quickly and did not pay attention to the size of the letters
or to writing straight (unlike Edith).
Duration: 1 hour and 20 minutes
Observations during the offer:
The two children were not only very nice to each other, but a funny atmosphere developed in between, where both of them really „fraternised“ against me. It was also noticeable that Lotte did not find
an end this time either, although she seemed quite tired at first, during the minus tasks. The fact that Lotte is usually quicker with her answers or with her work doesn’t seem to bother Edith. I
think each child was able to profit from the other and learn something.
Somehow we didn’t get around to making a number picture ourselves. They both understood the task of sharing well.
Read also: Basic Ideas of Mathematics.
Final thoughts
About the project:
The content of the project was very well chosen for both children and is far from being exhausted. Both of them were always happy when I collected them in the morning, even though they then, so that
we had enough time, could not take part in the „morning circle“. Unfortunately, due to a lack of staff, I often had to stop the activities earlier than it would have been suitable for the children.
This was especially true for Edith, who never really wanted to stop.
I tried to explain it to the children in a way they could understand and I think I succeeded. During the sessions, both children were very friendly and respectful with each other. They gave each
other tips and help and were very patient with each other. (Lotte, for example, had to wait more often when Edith was not yet finished because of her thoroughness and accuracy).
Neither grumbled or complained about the other. They clearly enjoyed their privilege of doing these offerings together with me alone.
I hope that I will be able to continue this project with them. Apart from the great knowledge potential of the two, which needs to be fostered, and apart from Edith’s huge intrinsic motivation, this
is also the best way to enable the two to become friends. Apart from the situation in the gym, where the two of them played alone for a very long time, I don’t know of any other situation where they
played together intensively. They both have their own play partners. But the project is really only at the beginning, and I believe that there is still a lot that can develop.
At least they will be able to draw on their shared experiences when they both attend the same school class and experience going to school together.
To Edith (my „observation child“ in the IHVO Course):
Edith’s intrinsic motivation is also huge when it comes to arithmetic. However, she seems to be even more interested in language. It was good for me to have Lotte there for comparison. Lotte is known
as an intelligent child – but will soon be 6 years old and has a sister in third grade. Edith is only 5;4 years old, she is an only child, both parents work and she is usually picked up very late
from kindergarten.
When I look at all this together, I think that Edith is certainly far more gifted than average and needs appropriate support.
Six years later, contact between Edith and Lotte still continues and is supported by the parents. Edith has since been tested, showing high intelligence in the language area.
Read more about Edith:
Project „Letters“ – Activity for Small Groups
Date of publication in German: September 2012
Copyright © Lucy Rüttgers, see Imprint. | {"url":"http://www.ihvo.de/21980/mathematical-advancement-in-a-group-of-2/","timestamp":"2024-11-07T06:14:08Z","content_type":"text/html","content_length":"44437","record_id":"<urn:uuid:80437b25-5b2f-47f0-ae5f-32ecf70dec52>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00242.warc.gz"} |
All Categories
Be strategic! Enjoy the summer but fit in some SAT math practice. I recommend you take 1 day per week for the month of July and take an official practice test. The math is composed of 2 sections one
calculator the other without. Total time commitment is 1 hour 20 minutes per test for the math. Take it Monday morning then take the rest of the week off! Then take another one the following Monday
until you have taken all 4 official practice tests. I know I know I know...your on vacation, right?! Sometimes its the little things that can make a big difference....this is one of those things. If
you want help going over the ones you missed give me a call.
Here is the link to the 4 official SAT practice tests...do your best, time yourself, and learn from your mistakes.
Whether you want to get a head start on next year's class, prepare for the math portion of the ACT, review concepts you don't understand or are taking a summer class give me a call. I'm available
during the summer for a bunch of sessions or just one or two. I usually go away for a week or two but otherwise can accommodate working with students in between all the summer activities. Want to
combine math and technology? Let's try an online Skype math tutoring lesson while you are away. Have a safe and fun summer. - Mario
A new addition to the website!
I've been producing some free math videos to help students with concepts that I'm frequently asked about. I've been mainly working on PreCalculus and Algebra 2 topics recently.
Have your student check back periodically if there is something they are not understanding in their class or let me know and I can add that one to the list.
Also, please excuse my beginner level video production. I'm aiming to improve as I get experience and add additional tutorials. The videos have everything one needs to learn but Hollywood hasn't
called yet.
Keep on studying and good luck with finals this week!
Image courtesy of Stuart Miles at FreeDigitalPhotos.net
0 Comments
One thing that I have noticed about many of the students I work with is how they go through stages.
At first, students can often be very tentative, shy, and unsure of themselves. They don't know what to work on, what questions they have, or why they are even getting tutored. I'll ask them
something like, "Do you have any questions?" and they'll say "no, I pretty much get it"...which doesn't match up with the "C" they just got on their last test.
After students work with me for a few sessions they start to get more comfortable and start asking questions.
As more time passes they start taking more and more responsibility for their math success and have an idea for what they would like to work on in a given session.
Eventually, many students take complete ownership of their tutoring and they want to get the most out of their sessions as possible. When I arrive they have their book and materials out, a list of
questions and topics they want to work on and in what order, and are ready to dive in and get to work. They actively take charge of their learning: making notes to themselves, tackling difficult
problems, skipping parts they already understand and they have a laser like focus.
Students are getting something even more important than just a better grade or math knowledge out of the sessions. They are learning to be mature self-directed adults. It's great to witness their
increased confidence and the skills they are learning will serve them well in college and beyond.
Image courtesy of Sujin Jetkasettakorn at FreeDigitalPhotos.net
0 Comments
One thing that I have noticed recently among a few of the students that I tutor is a discouragement with math. These students however have turned their present lack of success in math into a harsh
self criticism.
I have talked in previous posts about how math is completely neutral. Your math book is completely unaffected by whether you love it or hate it.
Similarly, just because you make a numerical or calculation error is no reason to berate yourself. I recommend just calmly taking a step back and seeing what it is you don't understand. Once you
understand where you went off track you can even completely rework the problem from start to finish. Create the repetition for yourself of doing the problem correctly then pat yourself on the back.
You are excellent at the math that you know you just need to keep building on that strong foundation and keep moving forward.
I recommend a positive approach and positive self talk. Encourage yourself and congratulate yourself on your successes. Look on your mistakes as part of the process needed to get to the
understanding you want.
Tutoring can help accelerate the learning process but it is you that are doing the work, asking the questions, making the mistakes, correcting your mistakes and improving. Adding some positive self
talk can help you create a more conducive internal learning envirionment plus it makes you feel better too!
Image courtesy of stock images at freedigitalphotos.net
0 Comments
A few days ago I met with a new student and within a short time of beginning the session she made it a point to tell me
"I'm really good at other things, but math I struggle with."
I find this really interesting and hear similar things from other students and parents as well. "Johnny is very intelligent but he doesn't put in enough time on his homework." Or, "Kelly is really
bright but she has a personality conflict with her teacher."
And sometimes in trying to break the ice with a student and lighten up the session I ask students if they play sports or do other extracurriculars. Come to find out, they tell me that not only do
they play football but they are the quarterback(
an important position
). Or, not only are they on the cheer team but they are the captain(
). Or still yet, they are a leader on student government(
What I believe all these students(and parents) want is to be respected, treated with dignity, and seen in a positive light. They don't want to be talked down to, judged negatively, or receive
inferior treatment or inferior tutoring.
One of the reasons I feel that I am effective with the students that I tutor is because I really understand and "get this." I treat my students with kindness, patience, encouragement, positivity,
respect and dignity. I believe in students' capacity to learn and improve and I recognize that outside of their math class, regardless of their math ability, they have amazing talents and skills and
are important as people.
Students pick up on the way they are being treated immediately and they either shut down or they are receptive to the assistance that I am offering. I never try to pretend to be a certain way
because young people especially know if they are dealing with a phony. Math is one of those core classes that often requires more time and effort than some others and I understand how it can be
I got my official start in tutoring working for a retired 30-year Detroit Public School teacher. I asked her once what was her secret and what she told me resonated with me and I never forgot: "When
students do something correctly praise them to the moon!" She was a super positive lady and her students felt good about themselves and they did well.
I've been able to make a similar positive impact on a number of students over the years. One in particular goes back maybe 8-10 years now and about once a year I bump into this student's mother in
the grocery store. She always tells me how I really helped her son's
. He's long graduated from college and is pursuing his dream of being a filmmaker out in California.
As you can imagine, events like these keep me going and reinforce my belief that if you believe in people you create the environment for better learning and even other positive qualities such as
better self esteem and self confidence.
Image courtesy of stockimages at FreeDigitalPhotos.net
0 Comments
When you play a sport you're aggressive, right? Why not be as aggressive with your math? Of course, we are talking about a positive form of aggressiveness where you aren't sitting watching on the
sidelines but rather chomping at the bit to get in the game.
Let me ask you a question: When you are in your class are you leaning against the back of your chair with your arms folded and your pencil lying on your desk?...or are you leaning forward with your
pencil in hand actively involved?
When you are doing story problems you should be filling up your paper with diagrams, equations, notes and sentences with intensity!
When I was in college, I was told that when taking a test if I happened to drop my pencil that the students sitting next to me would kick it across the room. The tests were graded on a curve and
everyone wanted an advantage over one another. This, of course, is not the aggressiveness we are going for. By the way, this never actually occurred but gives you an idea of how competitive you
should be with yourself.
You know that final lap your track coach makes you do before practice is officially over? Or the last 10 push ups and 20 sit-ups before your football practice is dismissed? Likewise, you want to be
doing one more challenging word problem, asking one last question, and proving one last theorem.
I've got to tell you a secret. I wasn't the smartest kid in my classes but I worked at school harder than my average classmate. Oftentimes things didn't "click" until that 5th or 10th time...but
once I got it - I really got it.
I'm sure a number of students I work with are in that same boat...so keep on working at it, be aggressive, ask for help when you need it, and keep on improving....
Image courtesy of imagerymajestic at freedigitalphotos.net
Occasionally I come across a student that absolutely has a bad relationship with math. They don't believe they are capable of being good at math, they don't see the point in it, and they don't want
to try and understand it.
I met with one such student recently. We went through the '
' of how to execute certain mathematical operations. We then took that one step further and discussed some
real-life parallels and applications
. We then repeated this pattern a number of times and then it occurred to me to ask this student, 'do you like math?'
Aha! Problem solved, or shall I say semi-solved. To make a long story shorter, I could try and teach, assist, and tutor this individual but without an internal shift my efforts would continue to be
largely rejected.
My suggestions to a student and family such as this would be to first work on changing the student's relationship with math to more of a friendly one.
Second, there is talk these days of 'helicopter' parents. These are parents that are continually 'hovering' over their children too closely making sure all their i's are dotted and t's crossed, etc.
There needs to be a shift in responsibility and accountability from the parent to the child. It may take some time but the parents need to empower the students to be more self accountable, self
responsible, and to take pride in their own efforts to learn and manage their time and studying.
As a tutor, I aim to provide good quality instruction in a positive reinforcing manner. When I see obstacles to learning that go beyond just understanding math concepts I will tactfully mention
these to parents. I always try to take a positive approach but some aspects of learning go beyond what I can personally provide.
I know of one tutor that won't tutor students unless the parents agree to make sure the students get 8 1/2 hours of sleep minimum per night otherwise he won't continue helping them. I haven't gotten
this strict(...yet : ) ) but I do think it's important to continually look at all the factors that contribute to successful learning.
So, in conclusion, first check in with your child and see how they feel about math and their math class. Second, see if they are taking an active role in their own learning. Thirdly, look for other
obstacles to successful learning such as lack of sleep, etc. Then allow tutoring to build on this excellent foundation.
Image courtesy of David Castillo Dominici at FreeDigitalPhotos.net
If you have never been tutored or have not found success with tutoring you may be wondering what goes on in a tutoring session-right?
First, let me start by telling you that a tutoring session can potentially be anything you would like it to be. The better a student is prepared with questions, topics they would like to discuss,
etc. the more the session can be optimized to what a student would like to see happen.
Not every student is as self directed as above and some students are overwhelmed, lost, or at a loss as where to begin so I have a time tested approach that I generally follow and one which I find
works for most students.
I start off by asking if there are any questions or concepts they don't understand. If so, we go over those items first. Then we spend some time going over current concepts followed by reviewing
past topics and previewing upcoming sections. Lastly, we simulate the test taking experience to uncover hidden areas of difficulty and to iron out areas of confusion before exam day.
I've mentioned what can be covered in a tutoring session but I should also mention what should be avoided.
You don't want to turn your tutoring sessions into homework completion sessions. It's ok if you need some help with some problems on your homework but you don't want to spend your valuable time with
a tutor just doing homework. Attempt to complete your homework before you meet so you can spend time on the few problems you might not understand.
Don't use your tutoring sessions as a replacement for classroom learning. Combine the two together so that they build on one another and you further refine and solidify your understanding.
Lastly don't be satisfied with 'good enough' and go into 'coasting' mode. Challenge yourself to go further and achieve higher than you even thought possible. Synergistically combine your effort,
your classroom learning and your tutoring for optimum results!
Image courtesy of stockimages at www.freedigitalphotos.net
Mario DiBartolomeo shares his enthusiasm for learning through the math tutoring (PreAlgebra through PreCalculus) he offers and through his blog at www.mariosmathtutoring.com
Copyright 2015 Mario's Math Tutoring
0 Comments
0 Comments
What do you want from tutoring?
As this school year begins take a few moments to ask yourself what you want from your tutoring. If you know what you want this will help you focus your efforts toward your desired
goal(s). Now, I must say that parents have hopes and dreams for their children but if the students themselves don't share those same ambitions there will be a disconnect and mixed results.Let's look
at some of the outcomes students may want to achieve from their tutoring:
Higher grades (this seems always to be #1 doesn't it?)
More confidence (less able to be quantified but also important)
Less Stress (Tutoring can help you get a grip on where you are at, help you review, and even get a head start so you are better prepared and ready for what comes your way)
Less time spent studying to achieve the same or better result(Tutoring can accelerate your ability to understand and apply concepts but still requires your own independent study)
Deeper Understanding(going beyond just good enough, and "passing" the test)
Study Skills and Organizational Skills(Learning how to learn will help students as they go on into college and are expected to be more self-directed in their studying)...and I'm sure you can come up
with even more beyond these but this is a start...
I have some additional things that I would like for students to get from tutoring as well...
Learn how to take an active(not passive) role in your learning.
Know what you want to get out of your sessions and ask questions. Be involved and aim to get the most out of your sessions.
2) Have Fun. Really get immersed in the learning process and be interested in what you are studying. Even if it doesn't seem on the surface that interesting if you look at it more deeply and dive
into it you will find something that is fascinating about what you are studying and will make learning easier and more satisfying too.
Don't be afraid of what you don't know.
Students often tell me, "Oh this is so easy!" And I agree...What you don't know is difficult but once you really understand it it's super easy. So dig wholeheartedly into the tough stuff so that it
can be "so easy."
I'm looking forward to helping you make this a great year. As I always say: Don't hesitate to call me, text me or e-mail me anytime and I'll get back to you ASAP.
Image courtesy of farconville at FreeDigitalPhotos.net
0 Comments | {"url":"https://www.mariosmathtutoring.com/math-tutor-blog/category/all/2","timestamp":"2024-11-09T07:29:01Z","content_type":"text/html","content_length":"130734","record_id":"<urn:uuid:09ec3201-308b-4799-99a4-cf78fc4542df>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00114.warc.gz"} |
HP 10bii+ Margin/Markup bug
03-23-2021, 07:51 PM
Post: #1
Rick314 Posts: 47
Junior Member Joined: Oct 2014
HP 10bii+ Margin/Markup bug
The HP 10bii+ has MU CST PRC MAR (markup, cost, price, margin) keys. Only certain entry and solve-for sequences are allowed when mathematically the solutions are available. For example (ignoring
percent conversions) MAR = MU/(1 + MU). But entering MU and solving for MAR gives an error. Similarly, MU = MAR/(1 - MAR). But entering MAR and solving for MU gives an error. As another example, from
page 48 of the user manual, enter 9.6 CST 15 MU and solve for PRC = 11.04 and MAR = 13.04 (OK so far). But start over (C ALL) and try to solve for MAR before solving for PRC. You get an error, even
though MAR is known from the entry of MU alone. My guess is that this is because of the way the firmware is using the HP SOLVE app that is inside many of their calculators. They are not keeping track
of all possibilities of what can be determined from the values provided. I don't have an HP 10bii (without the +) but would guess the problem exists there too. Or am I misunderstanding something?
03-23-2021, 08:08 PM
Post: #2
Gene Posts: 1,401
Moderator Joined: Dec 2013
RE: HP 10bii+ Margin/Markup bug
While true that MU and MAR can be computed one from another, I think the original 10B and then the two 10BII models never implemented that relationship.
Page 47-48 of the manual simply state that to use MU and MAR together, compute the CST and PRC first, then compute the other value.
Not a bug per se, since the machine works as the manual specifies, but it could have been implemented to compute it as you point out.
The overriding concern with the HP-10BII+ was to not break any existing 10BII legal keystroke functionality with the addition of new functions. Going back to rewrite the MU and MAR calculations was
not considered, AFAIK.
Good catch, however!
03-23-2021, 08:41 PM
(This post was last modified: 03-23-2021 08:53 PM by Albert Chan.)
Post: #3
Albert Chan Posts: 2,774
Senior Member Joined: Jul 2018
RE: HP 10bii+ Margin/Markup bug
Hi, Rick134
You have 2 ways to compute MU:
PRC = CST * (1+MU%) → MU% = PRC/CST - 1
PRC = CST * (1+MU%) = CST / (1-MAR%) → MU% = 1/(1-MAR%) - 1
2 ways may give different results.
Thus, I believe MU via MAR formula were disabled, on purpose. (vice versa for MAR via MU)
03-23-2021, 09:05 PM
Post: #4
Rick314 Posts: 47
Junior Member Joined: Oct 2014
RE: HP 10bii+ Margin/Markup bug
Thank you Gene, but I think you are defending HP too much. I am a retired HP firmware project leader with 30+ years experience, much of it in user interface design, and this is a bug. You said "Page
47-48 of the manual simply state that to use MU and MAR together, compute the CST and PRC first, then compute the other value." Nothing in that manual example says anything about the sequence shown
being the only way or even the recommended way to use the feature. Also that argument doesn't apply to MU-MAR conversions, since CST and PRC are not required (and might not be known) at all. You
saying "Not a bug per se, since the machine works as the manual specifies" implies that nothing is promised except the values and solving sequences given in the manual. That logic could justify any
entry-solution sequence not demonstrated in the manual as being intentionally unsupported. That isn't how HP determines what a bug is. (I know.)
Thank you Albert, but I think "Both ways will give different results" is incorrect. Your first equation determines MU from PRC and CST. No problem. The second determines MU from MAR. No problem. All
the following conversions are mathematically possible:
MU to MAR
MAR to MU
MU, CST to PRC, MAR (in either order)
MU, PRC to CST, MAR (in either order)
CST, PRC to MU, MAR (in either order)
CST, MAR to MU, PRC (in either order)
PRC, MAR to MU, CST (in either order)
As you read down the list, all required equations exist: MAR = f(MU), MU = f(MAR), PRC = f(MU,CST), MAR = f(MU,CST), CST = f(MU,PRC), MAR = f(MU,PRC), etc. HP calculators keep track of which values
are entered and which values are being solved for. All the above should be recognized and properly programmed.
03-23-2021, 09:21 PM
Post: #5
Albert Chan Posts: 2,774
Senior Member Joined: Jul 2018
RE: HP 10bii+ Margin/Markup bug
(03-23-2021 09:05 PM)Rick314 Wrote: HP calculators keep track of which values are entered and which values are being solved for.
All the above should be recognized and properly programmed.
The problem is CST/PRC/MU/MAR cannot be reduced to 1 formula (unlike TVM)
If user inputted CST, PRC, MAR, and wanted MU, what is it supposed to return ?
03-23-2021, 09:41 PM
Post: #6
Rick314 Posts: 47
Junior Member Joined: Oct 2014
RE: HP 10bii+ Margin/Markup bug
> The problem is CST/PRC/MU/MAR cannot be reduced to 1 formula (unlike TVM)
That is true Albert, but it is not a mathematical requirement. That is why I said "My guess is that this is because of the way the firmware is using the HP SOLVE app that is inside many of their
calculators. They are not keeping track of all possibilities of what can be determined from the values provided."
> If user inputted CST, PRC, MAR, and wanted MU, what is it supposed to return ?
That isn't a real-world problem, whereas all cases in my list are. Try your example on the calculator, in the order given. After entering CST and PRC, enter MAR. Stop. What are you trying to do and
what do you expect? MAR is determined from CST and PRC, so isn't then a valid input in addition to CST and PRC. Yet it is accepted without error by the calculator. So at that point you don't know
what the calculator will use to calculate MU. You have an inconsistent (yet accepted without error) set of inputs. But it doesn't matter since entering CST, PRC, and MAR doesn't make sense.
03-24-2021, 02:09 AM
(This post was last modified: 03-24-2021 02:26 AM by Gamo.)
Post: #7
Gamo Posts: 759
Senior Member Joined: Dec 2016
RE: HP 10bii+ Margin/Markup bug
I have noticed this ERROR too.
When input CST follow by MAR will not
return correct result if follow by MU directly.
CST, MAR → MU // Error
So I have try to program this Pricing Calculations to
various HP programmable calculators.
This program can calculate between MU and MAR and
almost any combinations of two input with only the exception is
the two input is MU and MAR will not give result for CST, PRC
Here is the link to my post
Gamo 3/24/2021
03-24-2021, 02:42 AM
Post: #8
Gamo Posts: 759
Senior Member Joined: Dec 2016
RE: HP 10bii+ Margin/Markup bug
Just noticed that the HP Prime also give same error when
Input Cost and Margin press solve return Error: 0/0
Example: Cost=55, Margin=45
Answer supposed to be Markup=81.82, Price=100
If input Price and Margin will get answer no problem.
03-25-2021, 10:53 PM
(This post was last modified: 03-25-2021 10:55 PM by ijabbott.)
Post: #9
ijabbott Posts: 1,307
Senior Member Joined: Jul 2015
RE: HP 10bii+ Margin/Markup bug
One way it could have been done would be to treat MAR and MU as different views of the same variable. Solving for either MU or MAR would still be based on CST and PRC, but solving for MU would solve
for MAR at the same time (and vice versa). Also, inputting MU would automatically calculate MAR (and vice versa). After inputting or solving for one, you could RCL the other without solving.
For example:
25 MU (input markup on cost)
RCL MAR (recall margin on price -> 20)
100 CST (input cost)
125 PRC (input price)
MAR (solve for margin on price based on CST and PRC -> 20)
RCL MU (recall markup on cost -> 25)
MU (solve for markup on cost based on CST and PRC -> 25)
RCL MAR (recall margin on price -> 20)
— Ian Abbott
03-26-2021, 07:15 AM
(This post was last modified: 03-26-2021 07:24 AM by Gamo.)
Post: #10
Gamo Posts: 759
Senior Member Joined: Dec 2016
RE: HP 10bii+ Margin/Markup bug
Seem like HP use the Equations Solver for when input two knowns variables
will give result to the remaining two unknown variables.
But will not work for all cases espectually when input CST, MAR → MU, PRC
My program for the HP-12C the algorithm use is when one of the MU or MAR is known
go ahead and convert it then calculate the remaining formula. This method is effective
when user run through the input in series thinks of this like the Ohm's Law program.
User input either in the store register or input all in the stacks and put in zero for the varibles
that need to be solve.
For example:
Cost = 0 // Unknown
Price = 10
Markup = 25
Margin = 0 // Unknown
The Markup is known then convert to Margin
Now got result on both Markup and Margin calculate the remaining formula.
But when I try to program this Pricing Calculations to like HP-11C it is very complicated
to imprement all the neccessary situations for all cases with Cost, Price, Markup and Margin.
Took me a long time to get it right and it need to use the "Flags" function extensively in program.
03-26-2021, 09:54 AM
Post: #11
Gamo Posts: 759
Senior Member Joined: Dec 2016
RE: HP 10bii+ Margin/Markup bug
According to the HP-10BII+ manual on Appendex B:
Business Percentage formula
MAR = [(PRC - COST) ÷ PRC] x 100
MU = [(PRC - COST) ÷ COST] x 100
Maybe HP use this two formula together with SOLVE function.
I put this two formula to HP Prime SOLVE app and do calculations.
When input two variables with Price and Margin press [Solve]
Error Message said "Cannot find solution"
03-27-2021, 07:32 PM
(This post was last modified: 03-27-2021 08:20 PM by Rick314.)
Post: #12
Rick314 Posts: 47
Junior Member Joined: Oct 2014
RE: HP 10bii+ Margin/Markup bug
I think this provides an HP 200LX HP CALC Solver function that solves all markup, cost, price, margin (MU CST PRC MAR) problems, finding
all possible solution variables
by solving for
any possible solution variable
. It requires that all 4 variables start as 0, then only meaningful situations be solved. I believe
this could have been put in the HP 10bii and HP 10bii+ firmware
. First, simplify variables:
c = CST
p = PRC
u = MU/100
g = MAR/100
Next, there are only 12 meaningful input-output cases. # = case number, Inp = Input(s), S = Solve for, Other = remaining variable to solve for.
# Inp S Exp Other
== === = ======= =====
01 g u = g/(1-g) none
02 g,c u = g/(1-g) p
03 g,p u = g/(1-g) c
04 c,p u = (p-c)/c g
05 u g = u/(1+u) none
06 u,c g = u/(1+u) p
07 u,p g = u/(1+u) c
08 c,p g = (p-c)/p u
09 p,u c = p/(1+u) g
10 p,g c = p*(1-g) u
11 c,u p = c*(1+u) g
12 c,g p = c/(1-g) u
A long nested if-then-else statement is used, based on the S() function and inputs being zero or non-zero. For example S(u) means the user is solving for u. "S(u) & c & p" means the user is solving
for u and both c and p are non-zero.
The order of cases is important!
SOLVER solves by iterating the variable being solved for, searching for a 0 in the expression provided. Consider the 3 S(u) cases. After "S(u) & c & p" is known to be false, it must be true that
either c or p (but not both) is an input. In the "S(u) & c" case, once p is defined during its first iteration, further iterations happen in the "S(u) & c & p" case. SOLVER also evaluates the L
(var,exp) function (let var have value exp) before any other operators.
In the following table, #, Inp, and S agree with the table above. In the S column the variable in () is the Other variable in the table above. Test shows how SOLVER will identify the case given, in
order. [] indicates things that are known, due to prior IF clauses failing, so they need not be tested for. Expression is what SOLVER will try to set to 0.
# Inp S Test Expression
== === ===== =============== =======================
01 g u c=0 & p=0 u - g/(1-g)
05 u g c=0 & p=0 u - g/(1-g)
04 c,p u (g) S(u) & c & p u - L(g,(p-c)/p)/(1-g)
02 g,c u (p) S(u) & c u - (L(p,c/(1-g))-c)/c
03 g,p u (c) S(u) [& p] u - (p-L(c,p*(1-g)))/c
08 c,p g (u) S(g) & c & p g - L(u,(p-c)/c)/(1+u)
06 u,c g (p) S(g) & c g - (L(p,c*(1+u))-c)/p
07 u,p g (c) S(g) [& p] g - (p-L(c,p/(1+u)))/p
09 p,u c (g) S(c) & u [& p] c - p*(1-L(g,u/(1+u)))
12 p,g c (u) S(c) [& p & g] c - p/(1+L(u,g/(1-g)))
11 c,u p (g) S(p) & u [& c] p - c/(1-L(g,u/(1+u)))
10 c,g p (u) [S(p) & c & g] p - c*(1+L(u,g/(1-g)))
The above table leads to the following HP 200LX SOLVER function.
IF(c=0 AND p=0, u - g/(1-g),
IF(S(u) AND c AND p, u - L(g,(p-c)/p)/(1-g),
IF(S(u) AND c, u - (L(p,c/(1-g))-c)/c,
IF(S(u), u - (p-L(c,p*(1-g)))/c,
IF(S(g) AND c AND p, g - L(u,(p-c)/c)/(1+u),
IF(S(g) AND c, g - (L(p,c*(1+u))-c)/p,
IF(S(g), g - (p-L(c,p/(1+u)))/p,
IF(S(c) AND u, c - p*(1-L(g,u/(1+u))),
IF(S(c), c - p/(1+L(u,g/(1-g))),
IF(S(p) AND u, p - c/(1-L(g,u/(1+u))),
p - c*(1+L(u,g/(1-g)))
That is what I used for development and test. Changing variable names and ordering them as they are on the HP 10bii+ results in the following HP 200LX Solver function.
! Clear Data before each use. !
0*(MU+CST+PRC+MAR) + ! order menu variables !
IF(CST=0 AND PRC=0,
MU - MAR/(1-MAR/100),
IF(S(MU) AND CST AND PRC,
MU - L(MAR,100*(PRC-CST)/PRC)/(1-MAR/100),
IF(S(MU) AND CST,
MU - 100*(L(PRC,CST/(1-MAR/100))-CST)/CST,
MU - 100*(PRC-L(CST,PRC*(1-MAR/100)))/CST,
IF(S(MAR) AND CST AND PRC,
MAR - L(MU,100*(PRC-CST)/CST)/(1+MU/100),
IF(S(MAR) AND CST,
MAR - 100*(L(PRC,CST*(1+MU/100))-CST)/PRC,
MAR - 100*(PRC-L(CST,PRC/(1+MU/100)))/PRC,
IF(S(CST) AND MU,
CST - PRC*(1-L(MAR,MU/(1+MU/100))/100),
CST - PRC/(1+L(MU,MAR/(1-MAR/100))/100),
IF(S(PRC) AND MU,
PRC - CST/(1-L(MAR,MU/(1+MU/100))/100),
PRC - CST*(1+L(MU,MAR/(1-MAR/100))/100)
This development was done using a Windows 10 PC with DOSBox and the 16-bit HPCALC application that is included with the HP 200LX Connectivity Pack. The c p u g function was tested using c=4, p=5, u=
0.25, g=0.2 (12 test cases). The MU CST PRC MAR function was tested with MU=25, CST=4, PRC=5, MAR=20 (12 test cases).
Edit 1: Last table before code, case numbers 07 to 09, 05 to 11 (typos).
03-27-2021, 09:18 PM
Post: #13
Albert Chan Posts: 2,774
Senior Member Joined: Jul 2018
RE: HP 10bii+ Margin/Markup bug
(03-27-2021 07:32 PM)Rick314 Wrote: First, simplify variables:
c = CST
p = PRC
u = MU/100
g = MAR/100
It might simplify more, with g = -MAR/100. This make equations symmetrical:
p = c + c*u = f1(c,u)
c = p + p*g = f1(p,g)
u = (p-c)/c = f2(p,c)
g = (c-p)/p = f2(c,p)
u = -g/(1+g) = f3(g)
g = -u/(1+u) = f3(u)
(03-25-2021 10:53 PM)ijabbott Wrote: One way it could have been done would be to treat MAR and MU as different views of the same variable. Solving for either MU or MAR would still be based on
CST and PRC, but solving for MU would solve for MAR at the same time (and vice versa). Also, inputting MU would automatically calculate MAR (and vice versa). After inputting or solving for one,
you could RCL the other without solving.
I like this idea.
Looking back above formulas, we might need the "other view" anyway !
03-28-2021, 12:39 AM
Post: #14
Rick314 Posts: 47
Junior Member Joined: Oct 2014
RE: HP 10bii+ Margin/Markup bug
> It might simplify more, with g = -MAR/100...
I don't think that will simplify the final solution.
>> ...treat MAR and MU as different views of the same variable...
> I like this idea.
I don't think this is an implementable Solver concept. But in any case, it would be great if anyone can provide a simpler complete solution to the problem: On any HP calculator, a Solver function (so
limited to that tool) that solves all markup, cost, price, margin (MU CST PRC MAR) problems, finding all possible solution variables by solving for any possible solution variable.
Anyone have a simpler way to do that?
03-28-2021, 12:30 PM
(This post was last modified: 03-28-2021 12:33 PM by ijabbott.)
Post: #15
ijabbott Posts: 1,307
Senior Member Joined: Jul 2015
RE: HP 10bii+ Margin/Markup bug
Even though margin/markup conversions are slightly suboptimal on the HP 10bii+ (requiring a dummy value for cost or price and solving for the other), at least we can gloat on the fact that this is
much better than the TI BAII Plus, which has separate worksheets for "profit" (cost / price / margin) and "percentage change" (old (cost) / new (price) / %change (markup)), with no interaction
between the two.
— Ian Abbott
03-29-2021, 01:08 AM
(This post was last modified: 03-29-2021 01:16 AM by Gamo.)
Post: #16
Gamo Posts: 759
Senior Member Joined: Dec 2016
RE: HP 10bii+ Margin/Markup bug
This can be done on the HP-12C with all possible conditions as shown in the clip below.
This same program can make minor change to have user input each four variables
to the TVM functions keys directly to store each variables instead of input all in the stacks.
03-29-2021, 04:22 AM
Post: #17
Gamo Posts: 759
Senior Member Joined: Dec 2016
RE: HP 10bii+ Margin/Markup bug
Here is the full version work on the Free42 PC the HP-42S simulation.
This program work on all possible cases including the conversion between MU and MAR.
Program also do the profit calculations as well.
Video Clip:
03-29-2021, 04:39 PM
(This post was last modified: 04-10-2021 12:05 AM by Rick314.)
Post: #18
Rick314 Posts: 47
Junior Member Joined: Oct 2014
RE: HP 10bii+ Margin/Markup bug
(03-29-2021 01:08 AM)Gamo Wrote: This can be done on the HP-12C ...
Gamo: Sorry, but what you demonstrated does not solve the proposed problem. Maybe that problem isn't clear.
Quote:On any HP calculator, a Solver function (so limited to that tool) that solves all markup, cost, price, margin (MU CST PRC MAR) problems, finding all possible solution variables by solving
for any possible solution variable.
This is a user interface problem, not a problem with making a programming language do the required mathematics using additional keys and keystrokes.
The main point of the thread is that HP firmware engineers could have made the MU CST PRC MAR keys on the HP 10bII+ (and other calculators) work as users expect instead of giving errors for valid
input-output cases. Specifically, once the 4 variables involved are cleared to zero, these 12 keystroke sequences must work (plus entering the inputs in any order).
# Keystrokes Solves For
== ================ ==============
01 20 MAR MU MU
02 20 MAR 4 CST MU MU, PRC
03 20 MAR 5 PRC MU MU, CST
04 4 CST 5 PRC MU MU, MAR
05 25 MU MAR MAR
06 25 MU 4 CST MAR MAR, PRC
07 25 MU 5 PRC MAR MAR, CST
08 4 CST 5 PRC MAR MAR, MU
09 5 PRC 25 MU CST CST, MAR
10 5 PRC 20 MAR CST CST, MU
11 4 CST 25 MU PRC PRC, MAR
12 4 CST 20 MAR PRC PRC, MU
That is what the provided HP 200 LX solution does, and all Solves For values are visible immediately after the last keystroke. If implemented on a calculator with a 1-line display (like the HP 10bII+
or 12C or 42 or ...) then doing the additional keystrokes "RCL <second Solved For variable>" would also be allowed. But that's it. No use of any additional keys or keystrokes.
4/9/21 error correction: "02 20 MAR 4 CST MU -> MU, PRC" (corrected) was "02 25 MU MAR -> MAR" (error, duplicate of row 05).
03-29-2021, 08:08 PM
Post: #19
Rick314 Posts: 47
Junior Member Joined: Oct 2014
RE: HP 10bii+ Margin/Markup bug
I want to digress to a related Big Picture topic. The classic software development lifecycle is Requirements, Design, Coding, Test, Debug, Release, Support. Detailed
requirements are key
to minimizing iteration in the remaining steps. These 12 requirements are the foundation of this problem, and what I am guessing the calculator firmware engineers never produced.
(03-23-2021 09:05 PM)Rick314 Wrote: MU to MAR
MAR to MU
MU, CST to PRC, MAR (in either order)
MU, PRC to CST, MAR (in either order)
CST, PRC to MU, MAR (in either order)
CST, MAR to MU, PRC (in either order)
PRC, MAR to MU, CST (in either order)
Once these requirements are clear multiple designs present themselves and the simplest one can be implemented. I chose a Solver solution. I think any calculator with programmable USER keys can do
similarly using the following design (pseudo-code) for each of the 4 keys involved.
if <this variable can be solved for by non-zero values in the other 3>
then begin
<solve for this variable and save its value>;
if <any other zero-valued variables can now be solved for>
then <solve for each of them too>;
<recall this variable's value to the display>;
end else
<save the user-entered value to this variable>;
Again, my point is that the calculator firmware engineers could have made the MU CST PRC MAR keys work better by realizing and solving the 12 cases that actually make sense.
03-30-2021, 07:51 AM
Post: #20
ijabbott Posts: 1,307
Senior Member Joined: Jul 2015
RE: HP 10bii+ Margin/Markup bug
On the HP-17BII (and HP 17bii+), there are separate menus for solving COST, PRICE, M%C (markup on cost), and COST, PRICE, M%P (markup on price, i.e. margin). They share variables with each other, so
as long as COST and PRICE have values, they can be used to convert between margin and markup.
It would be nice if they added a third menu for solving M%C, M%P not based on COST and PRICE, but with the variables shared with the other two solver menus.
The HP-17BII (and HP 17bii+) have the ability to solve equations stored by the user, so it is easy to add a formula to convert between mark-up and margin, but there is no way to access or modify the
variables used by the built-in conversion functions using this method.
— Ian Abbott
User(s) browsing this thread: | {"url":"https://hpmuseum.org/forum/showthread.php?mode=linear&tid=16517&pid=145182","timestamp":"2024-11-09T14:46:15Z","content_type":"application/xhtml+xml","content_length":"95519","record_id":"<urn:uuid:36f450c1-50e1-4dad-a458-fb030e653fb4>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00513.warc.gz"} |
How Many Kilometers Is 2299 Inches?
2299 inches in kilometers
How many kilometers in 2299 inches?
2299 inches equals 0.0584 kilometers
Unit Converter
Conversion formula
The conversion factor from inches to kilometers is 2.54E-5, which means that 1 inch is equal to 2.54E-5 kilometers:
1 in = 2.54E-5 km
To convert 2299 inches into kilometers we have to multiply 2299 by the conversion factor in order to get the length amount from inches to kilometers. We can also form a simple proportion to calculate
the result:
1 in → 2.54E-5 km
2299 in → L[(km)]
Solve the above proportion to obtain the length L in kilometers:
L[(km)] = 2299 in × 2.54E-5 km
L[(km)] = 0.0583946 km
The final result is:
2299 in → 0.0583946 km
We conclude that 2299 inches is equivalent to 0.0583946 kilometers:
2299 inches = 0.0583946 kilometers
Alternative conversion
We can also convert by utilizing the inverse value of the conversion factor. In this case 1 kilometer is equal to 17.124871135345 × 2299 inches.
Another way is saying that 2299 inches is equal to 1 ÷ 17.124871135345 kilometers.
Approximate result
For practical purposes we can round our final result to an approximate numerical value. We can say that two thousand two hundred ninety-nine inches is approximately zero point zero five eight
2299 in ≅ 0.058 km
An alternative is also that one kilometer is approximately seventeen point one two five times two thousand two hundred ninety-nine inches.
Conversion table
inches to kilometers chart
For quick reference purposes, below is the conversion table you can use to convert from inches to kilometers | {"url":"https://convertoctopus.com/2299-inches-to-kilometers","timestamp":"2024-11-05T01:20:48Z","content_type":"text/html","content_length":"33270","record_id":"<urn:uuid:487a76d0-ac0f-46c1-b48e-ffd043b3ea60>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00473.warc.gz"} |
A13: Generating series for curve counting
Counts of curves can often be organized in a generating series. Important properties of the numbers such as recursions can be expressed in terms of generating series. In connection with a duality
relation of elliptic curves motivated by physics, the so-called mirror symmetry, generating series of counts of curves in a surface which is a Cartesian product of an elliptic curve with a line can
be expressed and studied via Feynman integrals. In this project, we will study further such generating series, using the tropical method. That is, counts of curves will first be expressed via counts
of tropical curves by means of suitable correspondence theorems. The counts of tropical curves can then be studied using combinatorics. The methods that we plan to develop will therefore also be of
interest to tropical geometry. | {"url":"https://www.computeralgebra.de/sfb/projects/derived-categories-of-equivariant-coherent-sheaves/","timestamp":"2024-11-07T22:54:19Z","content_type":"text/html","content_length":"42644","record_id":"<urn:uuid:f0e898fb-9039-4969-b6dd-8c4222c84d6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00783.warc.gz"} |
- 4 minutes read - 644 words
A set is a collection of different things. The things contained in a set are called elements or members. To denote the membership of \(a\) to a set \(A\) we write \(a\in A\) and read “a belongs in A”
or “a is in A”. If we want to indicate that \(a\) is not a member of \(A\), we write \(a\not\in A\) and read “a does not belong in A” or “a is not in A”. A set without elements is called the empty
set, and it is denoted by \(\emptyset\).
• A collection of four natural numbers
\[ A = \left\{4, 2, 1, 3\right\} \]
\[ B = \left\{\text{blue, white, red}\right\} \]
• The set of natural numbers
\[ \mathbb{N} = \left\{1, 2, 3, \dots \right\} \]
\[ \mathbb{Z} = \left\{\dots, -3, -2, -1, 0, 1, 2, 3, \dots \right\} \]
• The set of rational numbers
\[ \mathbb{Q} = \left\{\frac{z}{n}\colon\, z\in\mathbb{Z}, \, n\in\mathbb{N} \right\} \]
\[ \mathbb{R} = \text{…needs more concepts to be described.} \]
• The set of solutions of the parabola \(2x^{2}-6x + 4 = 0\)
\[ S = \left\{x\in \mathbb{R} \colon\, 2x^{2}-6x + 4 = 0 \right\} = \left\{1, 2\right\} \]
Inclusion operators can be defined for sets. If every element of a set \(A\) is a member of a set \(B\), we write \(A\subseteq B\) and say that “A is a subset of B”. We can also say that “B is a
superset of A”. Two sets \(A\) and \(B\) are equal if they have exactly the same elements, i.e., \(A \subseteq B\) and \(B \subseteq A\). We then write \(A = B\).
Cartesian Products
The Cartesian product of two sets \(A\) and \(B\) is a set with elements pairs combining an element of \(A\) and an element of \(B\). Specifically, we write \[ A \times B = \left\{(a,b)\colon\, a\in
A,\, b\in B\right\}. \] Elements of the Cartesian product are denoted by \((a,b)\in A\times B\). Cartesian products are analogously defined of any finite collection of \(n\) sets. For example, \[ \
times_{i=1}^{n} X_{i} = \left\{(x_{1}, \dots, x_{n})\colon\, x_{1}\in X_{1},\, \dots,\, x_{n}\in X_{n}\right\}. \]
Cartesian products are frequently used in economics and finance. For example, consider a situation where one would like to choose the amount of money invested in stocks and bonds. For simplicity, let
\(X_{1} = \mathbb{R}_{\ge 0}\) be the set of potential stock investments in Euros, and \(X_{2} = \mathbb{R}_{\ge 0}\) be the set of potential bond investments in Euros (short sales are excluded). The
Cartesian product \(X_{1}\times X_{2}\) is known as the investment opportunity set in finance, i.e., the set containing all investment choices available to an economic entity.
Convex Combinations and Convex Sets
It is rare for economic choices to be made in isolation. When a person visits a retail store, she purchases a variety of products instead of a single one on most occasions. In production settings,
entrepreneurs combine labor, capital, and other production factors to produce the desired output. In addition, entrepreneurs have to consider not only a particular pair of labor and capital but
rather how this pair compares to other feasible pairs of capital and labor they could employ in production. A commonly used way (for reasons going beyond this introduction’s scope) to mathematically
describe such collections of choices is via their convex combinations. For any real number \(\alpha\in[0,1]\), and any two points \(x_{1}, x_{2} \in X\), we say that \(x = \alpha x_{1} + (1 - \alpha)
x_{2}\) is a convex combination of \(x_{1}\) and \(x_{2}\).
We say that a set \(X\) is convex if it contains all the convex combinations of its elements. Namely, \(X\) is a convex set if for every \(\alpha\in[0,1]\), and every \(x_{1}, x_{2} \in X\), we have
\(\alpha x_{1} + (1 - \alpha) x_{2} \in X\).
Topic's Concepts | {"url":"https://teach.pikappa.eu/methods/topics/sets/","timestamp":"2024-11-02T22:12:34Z","content_type":"text/html","content_length":"21715","record_id":"<urn:uuid:ffd0bb98-0da2-425b-b065-76a6af730097>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00046.warc.gz"} |
Multiplicative Inverse Of Complex Numbers Worksheet
Multiplicative Inverse Of Complex Numbers Worksheet act as fundamental devices in the world of maths, offering an organized yet versatile system for learners to explore and understand mathematical
concepts. These worksheets supply an organized method to comprehending numbers, nurturing a strong foundation upon which mathematical efficiency prospers. From the simplest counting exercises to the
intricacies of advanced estimations, Multiplicative Inverse Of Complex Numbers Worksheet cater to students of diverse ages and skill levels.
Unveiling the Essence of Multiplicative Inverse Of Complex Numbers Worksheet
Multiplicative Inverse Of Complex Numbers Worksheet
Multiplicative Inverse Of Complex Numbers Worksheet -
Multiply and divide complex numbers Simplify powers of i Figure 1 Discovered by Benoit Mandelbrot around 1980 the Mandelbrot Set is one of the most recognizable fractal images The image is built on
the theory of self similarity and the operation of iteration
7 r2p0 K182k 7K 6u Xtra 0 3Swoofxt lw Ja mrKez YLpLHCx d i 6A7lSlX Ir AiTg LhBtls f HrKeis feQrmvTeyd 2 j c BMda ud Leb QwWirt Yhq mISn9f OihnOi6t2e 9 KAmlsg meHbVr va B J2V k Worksheet by Kuta
Software LLC Kuta Software Infinite Algebra 2 Name Operations with Complex Numbers Date Period
At their core, Multiplicative Inverse Of Complex Numbers Worksheet are cars for conceptual understanding. They envelop a myriad of mathematical principles, guiding learners with the labyrinth of
numbers with a series of appealing and deliberate workouts. These worksheets transcend the boundaries of standard rote learning, encouraging energetic involvement and promoting an instinctive grasp
of mathematical partnerships.
Nurturing Number Sense and Reasoning
16 PDF MULTIPLICATIVE INVERSE FOR COMPLEX NUMBERS FREE PRINTABLE DOWNLOAD ZIP
Multiplicative inverse of a number x is a number x which when multiplied by x leads to multiplicative identity i e in complex numbers 1 i0 Let the complex number be a ib and its multiplicative
inverse be c di then a ib c id 1 i0 or ac bd i ad bc 1 i0
Multiplication of Complex Numbers Use the FOIL method or the formula a bi c di ac bd ad bc i to find the product of the complex numbers Cross check your answers with the answer key provided
Rationalize the Denominator Rationalize the denominator by multiplying the numerator and denominator by the complex conjugate of the denominator
The heart of Multiplicative Inverse Of Complex Numbers Worksheet depends on growing number sense-- a deep understanding of numbers' significances and interconnections. They urge expedition, inviting
learners to explore math procedures, figure out patterns, and unlock the secrets of sequences. Through provocative obstacles and logical challenges, these worksheets come to be gateways to refining
reasoning skills, nurturing the analytical minds of budding mathematicians.
From Theory to Real-World Application
Algebra 2 Worksheets Complex Numbers Worksheets
Algebra 2 Worksheets Complex Numbers Worksheets
The inverse property of multiplication states that for any number a where a is not equal to zero there exists a number 1 a such that a 1 a 1 a a 1 Thus the multiplicative
Complex numbers Exercises with detailed solutions 1 Compute real and imaginary part of z i 4 2i 3 2 Compute the absolute value and the conjugate of z 1 i 6 w i17 3 Write in the algebraic form a ib
the following complex numbers z i5 i 1 w 3 3i 8 4 Write in the trigonometric form cos isin the following
Multiplicative Inverse Of Complex Numbers Worksheet act as avenues connecting academic abstractions with the apparent truths of daily life. By instilling functional circumstances into mathematical
workouts, students witness the significance of numbers in their surroundings. From budgeting and measurement conversions to comprehending analytical data, these worksheets encourage trainees to wield
their mathematical expertise beyond the confines of the class.
Varied Tools and Techniques
Adaptability is inherent in Multiplicative Inverse Of Complex Numbers Worksheet, employing a toolbox of pedagogical devices to deal with diverse discovering designs. Aesthetic aids such as number
lines, manipulatives, and electronic resources function as buddies in visualizing abstract principles. This diverse method guarantees inclusivity, suiting learners with various choices, strengths,
and cognitive styles.
Inclusivity and Cultural Relevance
In a progressively diverse globe, Multiplicative Inverse Of Complex Numbers Worksheet welcome inclusivity. They go beyond social boundaries, incorporating examples and problems that resonate with
learners from diverse histories. By including culturally appropriate contexts, these worksheets promote a setting where every learner feels stood for and valued, boosting their link with mathematical
Crafting a Path to Mathematical Mastery
Multiplicative Inverse Of Complex Numbers Worksheet chart a course in the direction of mathematical fluency. They instill perseverance, vital reasoning, and analytical skills, crucial features not
just in maths yet in different aspects of life. These worksheets encourage students to browse the detailed surface of numbers, nurturing an extensive recognition for the sophistication and logic
inherent in mathematics.
Accepting the Future of Education
In an era noted by technical advancement, Multiplicative Inverse Of Complex Numbers Worksheet effortlessly adjust to digital systems. Interactive interfaces and digital resources augment conventional
discovering, using immersive experiences that transcend spatial and temporal boundaries. This combinations of conventional methods with technical technologies proclaims an encouraging era in
education and learning, cultivating a more dynamic and appealing knowing setting.
Final thought: Embracing the Magic of Numbers
Multiplicative Inverse Of Complex Numbers Worksheet illustrate the magic inherent in maths-- a captivating trip of exploration, exploration, and proficiency. They transcend conventional pedagogy,
acting as drivers for stiring up the fires of curiosity and inquiry. Through Multiplicative Inverse Of Complex Numbers Worksheet, learners embark on an odyssey, unlocking the enigmatic world of
numbers-- one trouble, one service, each time.
Find The Multiplicative Inverse Of The Complex Number 4 3i Mathematics Shaalaa
Quiz Worksheet Multiplicative Inverses Complex Numbers Study
Check more of Multiplicative Inverse Of Complex Numbers Worksheet below
Multiplicative Inverse Of A Complex Number
Multiplying Complex Numbers Worksheet
What Is Multiplicative Inverse Of Complex Numbers YouTube
Ex 4 1 13 Find Multiplicative Inverse Of Complex Number i
Multiplicative Inverse Of Complex Numbers YouTube
Operations With Complex Numbers Kuta Software
7 r2p0 K182k 7K 6u Xtra 0 3Swoofxt lw Ja mrKez YLpLHCx d i 6A7lSlX Ir AiTg LhBtls f HrKeis feQrmvTeyd 2 j c BMda ud Leb QwWirt Yhq mISn9f OihnOi6t2e 9 KAmlsg meHbVr va B J2V k Worksheet by Kuta
Software LLC Kuta Software Infinite Algebra 2 Name Operations with Complex Numbers Date Period
Multiplicative Inverse Of A Complex Number Quiz amp Worksheet
Instructions Choose an answer and hit next You will receive your score and answers at the end question 1 of 3 What is the multiplicative inverse of 9 1 9 0 9 1 9 Next Worksheet Print
7 r2p0 K182k 7K 6u Xtra 0 3Swoofxt lw Ja mrKez YLpLHCx d i 6A7lSlX Ir AiTg LhBtls f HrKeis feQrmvTeyd 2 j c BMda ud Leb QwWirt Yhq mISn9f OihnOi6t2e 9 KAmlsg meHbVr va B J2V k Worksheet by Kuta
Software LLC Kuta Software Infinite Algebra 2 Name Operations with Complex Numbers Date Period
Instructions Choose an answer and hit next You will receive your score and answers at the end question 1 of 3 What is the multiplicative inverse of 9 1 9 0 9 1 9 Next Worksheet Print
What Is Multiplicative Inverse Of Complex Numbers YouTube
Multiplying Complex Numbers Worksheet
Ex 4 1 13 Find Multiplicative Inverse Of Complex Number i
Multiplicative Inverse Of Complex Numbers YouTube
The Multiplicative Inverse Reciprocal Of A Complex Number The Complex Hub
Using Multiplicative Inverses To Solve Equations Worksheet Times Tables Worksheets
Using Multiplicative Inverses To Solve Equations Worksheet Times Tables Worksheets
Ex 4 1 11 Find Multiplicative Inverse Of 4 3i Teachoo | {"url":"https://szukarka.net/multiplicative-inverse-of-complex-numbers-worksheet","timestamp":"2024-11-08T12:13:45Z","content_type":"text/html","content_length":"26399","record_id":"<urn:uuid:efc878b8-1642-44fd-9f89-9a73837953eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00232.warc.gz"} |
IWOTA 2019: Abstracts.
By the classical Brown, Halmos (1964) result, there is no commutative $C^\ast$-algebra generated by Toeplitz operators, with non-trivial symbols, acting on the Hardy space $H^2(S^1)$, while there are
only two, rather trivial, commutative Banach algebras generated by Toeplitz operators. For one of them symbols are analytic, and are conjugate analytic, for the other.
At the same time, as it was observed recently, there are many non-trivial commutative $C^\ast$-algebras generated by Toeplitz operators, acting on the Bergman space over the unit disk. Moreover, for
a multidimensional case of the weighted Bergman space $\mathcal{A}^2_{\lambda}(\mathbb{B}^n)$, apart of a wide variety of commutative $C^\ast$-algebras, there exist many commutative Banach algebras,
all of them are generated by Toeplitz operators with symbols from different specific classes.
The aim of the talk is to clarify the situation for a multidimensional Hardy space $H^2(S^{2n-1})$ case.
We present an universal approach that permits us to unhide and describe both commutative $C^\ast$ and Banach algebras generated by Toeplitz operators on $H^2(S^{2n-1})$, as well as to describe some
non-commutative $C^\ast$-algebras. In the latter case we characterize, among others, their irreducible representations and spectral properties of the corresponding Toeplitz operators. | {"url":"https://iwota2019.math.tecnico.ulisboa.pt/abstracts?abstract=5101","timestamp":"2024-11-07T11:54:00Z","content_type":"text/html","content_length":"59389","record_id":"<urn:uuid:9c8a4bae-bde8-4823-8867-18431dc60793>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00787.warc.gz"} |