content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
In Exercises \(3-6,\) ind the sum of the measures of the interior angles of the indicated convex polygon. (See Example 1.) \(16-\mathrm{gon}\)
Short Answer
Expert verified
The sum of the interior angles of the 16-gon is \(2520^{\circ}\).
Step by step solution
Identify the number of sides in the polygon
In this problem, the given polygon is a 16-gon, which means it has 16 sides. So, \(n = 16\).
Apply the formula to calculate the sum of interior angles
The formula for calculating the sum of the interior angles of a polygon is \((n-2) \times 180^{\circ}\). Substituting \(n = 16\) into the formula, we get: \((16-2) \times 180^{\circ} = 14 \times 180^
{\circ} = 2520^{\circ}\).
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
sum of interior angles
In the study of geometry, understanding the sum of the interior angles of a polygon is essential. The sum of these angles can be determined using a simple mathematical formula: \((n-2) \times 180^{\
circ}\), where \(n\) represents the number of sides of the polygon.
This formula is derived from the concept that a polygon can be divided into triangles. Each triangle has a sum of interior angles equal to \(180^{\circ}\). A polygon with \(n\) sides can be split
into \(n-2\) triangles, thus multiplying the number of triangles by \(180^{\circ}\) gives the sum of the polygon's interior angles.
For instance, if you have a 16-sided polygon, according to this formula, you would compute its sum of interior angles as \((16-2) \times 180^{\circ} = 2520^{\circ}\). Understanding and applying this
formula is crucial in geometry exercises to find unknown angles and assess polygon properties.
A polygon is a closed, two-dimensional shape consisting of a finite number of straight line segments. These segments are called edges or sides, and they meet at points called vertices.
Polygons are characterized by the number of sides or edges they have, with common examples including triangles (3 sides), quadrilaterals (4 sides), pentagons (5 sides), and so on. The term polygon
comes from Greek, where 'poly' means "many" and 'gon' means "angle."
Polygons can be classified based on their angles and side lengths. They can be regular, with all sides and angles equal, or irregular, with different side lengths and angles. Additionally, polygons
can also be convex or concave, depending on the nature of their interior angles.
convex polygon
A convex polygon is a type of polygon where all its interior angles are less than \(180^{\circ}\). In simpler terms, if you pick any two points inside the polygon and connect them with a line, that
line will always lie inside or on the polygon.
Convex polygons are significant in geometry due to their simple structure and the ease with which we can calculate properties like area, perimeter, and angle sums. They do not have any indentations
or reflex angles.
• Examples of convex polygons include regular polygons such as equilateral triangles and squares.
• All triangles and quadrilaterals are also convex, as long as no angle is greater than \(180^{\circ}\).
Understanding convex polygons will aid in comprehending more complex geometric shapes.
mathematics education
Mathematics education plays a vital role in developing logical reasoning and problem-solving skills. One of the key areas within mathematics is geometry, which involves understanding shapes, sizes,
spatial relationships, and properties like the sum of angles in polygons.
Through geometry, students learn to apply formulas, such as the one to find the sum of interior angles of polygons, to solve real-world problems. Teaching geometry extends beyond rote calculation. It
involves visualizing shapes, understanding spatial relationships, and applying concepts to model situations.
Effective mathematics education encourages creativity, precision, and the development of critical thinking skills. By exploring concepts like polygons and their properties, students can appreciate
the relevance and applications of mathematics in everyday life. This foundation not only supports advanced studies in mathematics but also nurtures analytical skills applicable to diverse fields. | {"url":"https://www.vaia.com/en-us/textbooks/math/geometry-a-common-core-curriculum-2015-edition/chapter-7/problem-5-in-exercises-3-6-ind-the-sum-of-the-measures-of-th/","timestamp":"2024-11-03T17:06:49Z","content_type":"text/html","content_length":"248181","record_id":"<urn:uuid:2ad912ef-8d20-4426-9b1c-7729608ffcd3>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00473.warc.gz"} |
□ K.Nagasaka. SLRA Interpolation for Approximate GCD of Several Multivariate Polynomials. Proc. International Symposium on Symbolic and Algebraic Computation (ISSAC2023), 2023. 470--479.
□ K.Nagasaka, T.Nakahara. Ordering Question with Clue in Moodle. Electronic Proceedings of the 27th Asian Technology Conference in Mathematics (ATCM2022), 2022. 18--31. (invited speaker).
□ K.Nagasaka. Mathematical thinking skills and automatic generation of ordering questions. Sushikisyori, Vol. 28(2), 2022, 95--108. (Japanese)
□ K.Nagasaka. Relaxed NewtonSLRA for Approximate GCD. Lecture Notes in Computer Science. Volume 12865. Computer Algebra in Scientific Computing: 23rd International Workshop, CASC 2021, Sochi,
Russia, September 13-17, 2021. Proceedings. Springer. 2021, 272--292.
□ K.Nagasaka. Approximate square-free part and decomposition. Journal of Symbolic Computation, Vol. 104, 2021, Pages 402--418.
□ K.Nagasaka. Multiple-choice questions in Mathematics: automatic generation, revisited. Electronic Proceedings of the 25th Asian Technology Conference in Mathematics (ATCM2020), 2020.
21785-1--21785-15. (invited speaker).
□ K.Nagasaka. Toward the best algorithm for approximate GCD of univariate polynomials. Journal of Symbolic Computation, Vol. 105, 2021, Pages 4--27.
□ K.Nagasaka. Approximate GCD by Bernstein Basis, and its Applications. Proc. International Symposium on Symbolic and Algebraic Computation (ISSAC2020), 2020. 372--379.
□ K.Nagasaka. Parametric Greatest Common Divisors using Comprehensive Groebner Systems. Proc. International Symposium on Symbolic and Algebraic Computation (ISSAC2017), 2017. 341--348.
□ K.Nagasaka. Approximate Polynomial GCD over Integers with Digits-wise Lattice. Communications of JSSAC. Vol. 2, 2016, Pages 15--32.
□ K.Nagasaka and T.Masui. Extended QRGCD Algorithm. Lecture Notes in Computer Science. Volume 8136. Computer Algebra in Scientific Computing: 15th International Workshop, CASC 2013, Berlin,
Germany, September 9-13, 2013. Proceedings. Springer. 2013, 257--272.
□ K.Nagasaka. Approximate Polynomial GCD over Integers. Journal of Symbolic Computation, Vol. 46(12), 2011, Pages 1306--1317.
□ K.Nagasaka. Computing a Structured Groebner Basis Approximately. Proc. International Symposium on Symbolic and Algebraic Computation (ISSAC2011), 2011. 273--280.
□ K.Nagasaka. A Study on Groebner Basis with Inexact Input. Lecture Notes in Computer Science. Volume 5743. Computer Algebra in Scientific Computing: 11th International Workshop, CASC 2009,
Kobe, Japan, September 13-17, 2009. Proceedings. Springer Berlin. 2009, 247--258.
□ K.Nagasaka. Ruppert matrix as subresultant mapping. Lecture Notes in Computer Science. Volume 4770. Computer Algebra in Scientific Computing: 10th International Workshop, CASC 2007, Bonn,
Germany, September 16-20, 2007. Proceedings. Springer Berlin. 2007, 316--327.
□ K.Nagasaka. Symbolic-Numeric Algebra for Polynomials. The Mathematica Journal, Vol. 10(3), 2007, 593--616.
□ K.Nagasaka. Using Coefficient-wise Tolerance in Symbolic-Numeric Algorithms for Polynomials. Sushikisyori, Vol. 12(3), 2006, 21--30.
□ K.Nagasaka. Towards More Accurate Separation Bounds of Empirical Polynomials II. Lecture Notes in Computer Science. Volume 3718. Computer Algebra in Scientific Computing: 8th International
Workshop, CASC 2005, Kalamata, Greece, September 12-16, 2005. Proceedings. Springer-Verlag. 2005, 318--329.
□ K.Nagasaka. Towards More Accurate Separation Bounds of Empirical Polynomials. ACM SIGSAM Bulletin, Formally Reviewed Articles, Vol. 38(4), 2004, 119--129.
□ K.Nagasaka. SNAP Package for Mathematica and Its Applications. Proc. The Ninth Asian Technology Conference in Mathematics (ATCM2004), 2004. 308--316.
□ K.Nagasaka. Neighborhood Irreducibility Testing of Multivariate Polynomials. Proc. Computer Algebra in Scientific Computing (CASC2003), 2003. 283--292.
□ K.Nagasaka. Towards Certified Irreducibility Testing of Bivariate Approximate Polynomials. Proc. International Symposium on Symbolic and Algebraic Computation (ISSAC2002), 2002. 192--199.
□ K.Nagasaka. Estimation of Cancellation Errors in Multivariate Hensel Construction with Floating-Point Numbers. Proc. The Sixth Asian Technology Conference in Mathematics (ATCM2001), 2001.
□ K.Nagasaka, R.Oshimatani. Groebner basis detection with parameters (Extended Abstract). Computer Algebra in Scientific Computing: 24th International Workshop, CASC 2022. 22-26th August 2022.
□ K.Nagasaka. Approximate GCD and Its Implementations. Milestones in Computer Algebra, MICA2016. 16-18th July 2016.
□ K.Nagasaka. A Symbolic-Numeric Approach to Groebner Basis with Inexact Input.Fields Institute Workshop on Hybrid Methodologies for Symbolic-Numeric Computation, Hybrid2011. 16-19th November
□ K.Nagasaka. An improvement in the lattice construction process of Approximate Polynomial GCD over Integers (Extended Abstract).Proc. Symbolic-Numeric Computation (SNC2011), 2011. 63--64.
□ K.Nagasaka. Homomorphic Encryption and Approximate GCD of Integers and Polynomials over Integers. RIMS Workshop on Developments in Computer Algebra Research. 7-9th July 2010. ACM
Communications in Computer Algebra, Vol. 45, No. 3, Issue 177. 165.
□ K.Nagasaka. SNAP package for Mathematica. ISSAC 2007 Software Exhibitions. 29th July - 1st Aug. 2007.
□ Y. Kiriu, K. Nagasaka and T. Takahashi. Finding Mathematical Structures in Arts I. International Mathematica Symposium, IMS 2006. 19-23th June 2006.
□ K. Nagasaka. Mathematical Issues of Mathematica's BigFloat and Our Resolutions in SNAP Package. International Mathematica Symposium, IMS 2006. 19-23th June 2006.
□ K. Nagasaka. Irreducibility Radii. ARCC Workshop: The computational complexity of polynomial factorization. 15-19th May 2006.
□ K. Nagasaka. SNAP. International Mathematica Symposium, IMS 2005. 5-8th August 2005.
□ K. Nagasaka. An implementation issue on SNAP and significant digits. Conference on Applications of Computer Algebra, ACA 2005. 31th July-3th August 2005.
□ T. Takahashi, K. Nagasaka. On the Degeneracy Conditions of Singularities by Using CGBs. Algorithmic Algebra and Logic 2005 (Conference in Honor of the 60th Birthday of Volker Weispfenning).
2005. Proc. A3L (2005), 253--256.
Others (in Japanese) are only in available in Japanese page.
Posters (including short communications)
□ K.Nagasaka and R.Oshimatani. Conditional Groebner Basis: Groebner Basis Detection with Parameters. ISSAC 2023 Poster presentations. 24th - 27th July. 2023.
□ K.Nagasaka. Approximate GCD by relaxed NewtonSLRA algorithm. ISSAC 2021 Short communications. 18th - 23rd July. 2021. ACM Communications in Computer Algebra. Vol. 55(3). 2021. 97--101.
□ K.Nagasaka. Seeking Better Algorithms for Approximate GCD. ISSAC 2016 Poster presentations. 19th - 22th July. 2016. ACM Communications in Computer Algebra. Vol. 51(1). 2017. 15--17.
□ K.Nagasaka and T.Masui. Revisiting QRGCD and Comparison with ExQRGCD. ISSAC 2013 Poster presentations. 26th - 29th June. 2013. ACM Communications in Computer Algebra. Vol. 47(3). 2013.
□ K.Nagasaka. Backward error analysis of approximate Groebner basis. ISSAC 2012 Poster presentations. 22th - 25th July. 2012. ACM Communications in Computer Algebra. Vol. 46(3). 2012.
□ K.Nagasaka. Approximate Polynomial GCD over Integers. ISSAC 2008 Poster presentations. 20th - 23th July. 2008. ACM Communications in Computer Algebra. Vol. 42(3). 2008. 124--126.
(Correction: Gelfond's bound should be Knuth's)
Developments and Administrations
Only available in Japanese page.
Only available in Japanese page.
Editorial and Organizing activities
□ Workshop: GCD and related topics, GCDART 2024. Chair.
□ Workshop: GCD and related topics, GCDART 2022. Chair.
□ ISSAC 2022 (International Symposium on Symbolic and Algebraic Computation, 2022). Program Committee.
□ Workshop: GCD and related topics, GCDART 2020. Chair.
□ Workshop: GCD and related topics, GCDART 2018. Chair.
□ 2016 RIMS Joint Research: Developments in Computer Algebra. Chair.
□ Journal of Symbolic Computation. Volume 75 (July 2016). Special issue on the conference ISSAC 2014. Co-Editor.
□ SNC 2014 (Symobolic-Numeric Computation, 2014). Program Committee.
□ ISSAC 2014 (International Symposium on Symbolic and Algebraic Computation, 2014). General Co-Chair and Local Chair.
□ ASCM 2012 (Asian Symposium on Computer Mathematics, 2012). Program Committee.
□ SNC 2011 (Symobolic-Numeric Computation, 2011). Program Committee.
□ CASC 2009 (Computer Algebra in Scientific Computing, 2009). Local Organizing Committee Chair.
□ ISSAC 2009 (International Symposium on Symbolic and Algebraic Computation, 2009). Program Committee.
□ ISSAC 2006 (International Symposium on Symbolic and Algebraic Computation, 2006). Poster and Software Demos Co-Chair.
□ Maple Transactions. Associate Editor. 2021/06--.
□ ACM Communications in Computer Algebra. Associate Editor. 2016/08--.
□ Communications of JSSAC(Japan Society of Symbolic and Algebraic Computations). Editor. 2012/06--2014/05.
□ Bulletin of the Japan Society for Symbolic and Algebraic Computation. Editor. 2010--2012/04.
Educations and Positions
May., 2009 - present
Associate Professor (Position changes but English translation does't change) at Division of Human Environment, Graduate School of Human Development and Environment, Kobe University.
Apr., 2007 - Apr., 2009
Associate Professor (English translation changes) at Division of Human Environment, Graduate School of Human Development and Environment, Kobe University.
Jun., 2005
Encouragement Prize (Japan Society for Symbolic and Algebraic Computation)
Apr., 2004 - Mar., 2007
Assistant Professor at Division of Mathematics and Informatics, Department of Science of Human Environment, Faculty of Human Development, Kobe University.
Oct., 2002 - Apr., 2004
Research Associate at Media and Information Technology Center, Yamaguchi University.
Apr., 2002 - Sep., 2002
Temporary Researcher at Venture Business Laboratory, University of Tsukuba.
Mar., 2002
Ph.D. (Science): Doctoral Program in Mathematics, University of Tsukuba.
Jul., 1999 - Sep., 1999
Internship at Wolfram Research Inc.
Mar., 1998
M.S. (Science): Doctoral Program in Mathematics, University of Tsukuba. | {"url":"https://wwwmain.h.kobe-u.ac.jp/~nagasaka/research/resume.phtml","timestamp":"2024-11-10T05:55:57Z","content_type":"text/html","content_length":"17112","record_id":"<urn:uuid:895d29c5-3f25-4713-bf45-979270429a90>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00158.warc.gz"} |
Can regular expressions parse HTML?
Can regular expressions parse HTML or not?
Can regular expressions parse HTML? There are several answers to that question, both theoretical and practical.
First, let’s look at theoretical answers.
When programmers first learn about regular expressions, they often try to use them on HTML. Then someone wise will tell them “You can’t do that. There’s a computer science theorem that says regular
expressions are not powerful enough.” And that’s true, if you stick to the original meaning of “regular expression.”
But if you interpret “regular expression” the way it is commonly used today, then regular expressions can indeed parse HTML. This post [Update: link went away] by Nikita Popov explains that what
programmers commonly call regular expressions, such as PCRE (Perl compatible regular expressions), can match context-free languages.
Well-formed HTML is context-free. So you can match it using regular expressions, contrary to popular opinion.
So according to computer science theory, can regular expressions parse HTML? Not by the original meaning of regular expression, but yes, PCRE can.
Now on to the practical answers. The next lines in Nikita Popov’s post say
But don’t forget two things: Firstly, most HTML you see in the wild is not well-formed (usually not even close to it). And secondly, just because you can, doesn’t mean that you should.
HTML in the wild can be rather wild. On the other hand, it can also be simpler than the HTML grammar allows. In practice, you may be able to parse a particular bit of HTML with regular expressions,
even old fashioned regular expressions. It depends entirely on context, your particular piece of (possibly malformed) HTML and what you’re trying to do with it. I’m not advocating regular expressions
for HTML parsing, just saying that the question of whether they work is complicated.
This opens up an interesting line of inquiry. Instead of asking whether strict regular expressions can parse strict HTML, you could ask what is the probability that a regular expression will succeed
at a particular task for an HTML file in the wild. If you define “HTML” as actual web pages rather than files conforming to a particular grammar, every technique will fail with some probability. The
question is whether that probability is acceptable in context, whether using regular expressions or any other technique.
Related post: Coming full circle
18 thoughts on “Can regular expressions parse HTML or not?”
1. No discussion about parsing HTML with Regular Expressions is complete without reference to this
2. Interesting post.. But HTML is context free only if we hard code context-free expressions for all valid HTML tags. On the other hand, if we wish to parse XML (even well-formed ones) where tags
can be arbitrary, even a context-free parser won’t work. A well known result in theoretical computer science says that expressions of the form wcw where w is a string (not a character) is not in
3. Srinath: True, but PCRE can parse wcw.
4. Perhaps we should start referring to abnormalities like PCRE as “irregular expression engines”. :-)
5. A fundamental distinction must always be made between really PARSING html with regex versus (usually one-off, quick and dirty) SEARCHING or MATCHING some html with regex.
6. heltonbiker : Agreed. I’m using “parsing” loosely here.
7. The top answer here – http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags – might be useful.
[Same link as in first comment. — JC]
8. To be exact HTML is specified by a grammar that can be written in a context free form (the syntax specification of some languages is written in a form that is not context free and some
reorganization is needed to make it so, or not {as is the case with C++}). This does not mean that it can be parsed using tools such as Bison since they require a LALR(1) grammar (a subset of
context free).
Can PCRE really parse a larger set of grammars than Bison or is somebody just overgeneralizing the fact that PCRE can handle languages that are a superset of regular expressions?
Life is made much simpler by pointing people at the Chomsky hierarchy
9. The problem is not whether regexes can parse HTML; the problem is whether it’s a good idea to keep writing irregular expressions for HTML in toolkits that can easily go exponential when used
carelessly. There are already many good tools for this task; even ones that repair broken HTML while retaining (a guess at) its intended structure (tidy, LXML, BeautifulSoup, to name a few). The
probability of those failing is still non-zero, but smaller than that of htmlparsehack.pl failing.
10. Do you really mean well-formed HTML, or rather XHTML?
Because well-formed HTML is e.g. still allowed to omit start-tag for body, but have closing tag for it and vice-versa.
I think you could certainly tokenize it, but I can’t imagine handling all the messy stack manipulation that is allowed even in valid HTML Strict.
11. I had two projects involving analysing all sorts auf crappy HTML with RegEx. It works to some extend, but after a while you start to wish profoundly another Tool to solve the issue.
Problem here: XPath ist only working with wellformed Documents working on top of a browser parser is not that quick und and simple.
The real problem is the mess you get out of the net :-)
12. You quoted:
“Well-formed HTML is context-free. So you can match it using regular expressions, contrary to popular opinion.”
As far as I know according to computer science theory, there is a difference between context-free language (which can be generated by context-free grammar) and a regular language (which can be
matched using regular expression).
In other words, you can’t match any context-free language with regular expression, right?
13. Babluki: Regular expressions (in the original sense) match regular languages. Context-free languages are more general, higher up the Chomsky hierarchy, and so cannot be described by (classical)
regular expressions. But regular expressions in the contemporary sense can match context-free languages.
14. There is also a nice formalism for extending regular expressions to context-free languages. Context-free languages can be recognised by push-down automata, which are basically DFAs or NFAs with
stack operations. Why not just put the stack operations in the language?
In what follows, we will denote the empty set as 0, the empty string as 1, and set union as +. This is justified because it means that regular expressions are an idempotent semi-ring (idempotent
because A+A=A), plus the Kleene closure.
We assume that there are N+1 stack symbols 0..N, where 0 represents a sentinel symbol at the base of the stack. (We don’t strictly need symbol 0, but it makes things a little easier to describe.)
Then we can represent a push of symbol m by . The reason for this notation will become clear in a moment.
So, for example, we can recognise a^n b^n with the regular expression:
<0| (a )* |0>
We need some additional axioms. First, terminal symbols commute with stack operations:
a <n| = = |n> a
Finally, we describe what happens when pushes meet pops:
= 1
= 0, if m != n
|0> <N| = 1
So the stack symbols are like orthonormal basis vectors with is the inner product (|n> is a vector, and <n| is its dual vector/one-form). The final axiom states that the set of basis vectors is
complete. The fact that terminals commute with stack symbols mean that strings of terminals are the "scalars" of the vector field.
The axioms of context-free expressions are, in summary, very similar to those of a spinor algebra.
The neat thing about this is that it generalises in an obvious way. Add a second stack (or a richer set of stack state symbols with algebra to match), and you have "Turing expressions". Add the
possibilities for inner products to return values other than 0 or 1, and you have quantum computing.
15. Erm… looks like my stack notation ran foul of HTML. Here are those axioms again.
Terminals commute with stack operations:
a <n| = <n| a
a |n> = |n> a
Stack operations are orthonormal…
<n| |n> = 1
<m| |n> = 0
…and a complete basis:
|0> <0| + |1> <1| + … + |N> <N| = 1
16. Oh, and a^n b^n is:
<0| (a <1|)* (|1> b)* |0>
17. To expand on heltonbiker’s comment, I’ve answered many questions from people who say they want to parse HTML really mean they want to search a text document that happens to contain HTML. If
you’re not concerned with the structure, you’re not parsing it. Pulling all of the URLs or email addresses from a web page can be done with regular expressions.
18. We hired a consultant to do our DITA parsing. When the project manager said he was using ReEdit, well, that didn’t end well. He should have used XSL. | {"url":"https://www.johndcook.com/blog/2013/02/21/can-regular-expressions-parse-html-or-not/","timestamp":"2024-11-06T07:51:22Z","content_type":"text/html","content_length":"82379","record_id":"<urn:uuid:64c98e68-9453-42a1-bd67-b90884d85a1f>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00793.warc.gz"} |
Fractions And Percentages Worksheets
Comparing percentages and fractions worksheets, example and lesson plans: two lesson plans (ones suitable for beginners) to compare percentages and fractions. Three differentiated asking questions on
the fraction types: Fraction by numbers or percentage by quantities. The last one has a special feature in that the numbers can be printed in the small print to indicate which fractions are to be
used. Fraction types on the fraction toolbar are also shown as a table, allowing you to easily see which types are applicable in your current situation. Each question has a detail explanation for the
reason why it is relevant and how it can best be used. Fraction practice questions for each type are available as separate downloads from the teaching site.
Worksheet search result by word Fractions revision worksheet year 7 from fractions and percentages worksheets , source:ftxs8.com
Fractions and percentage units of measure are a fundamental part of teaching and learning about fractions. Teaching resources are available for comparing fractions and teaching students how to use
them in their own lives. The first two lesson plan sets, teach students how to compare fractions and percentages in units of measurement. Using a table to display the units, they can learn to make
sense of the units of measure.
The second set of two lesson plans to teach students how to compare fractions by quantities. Students learn to do the same kind of thinking as they would when asked to compare two fractions. Using a
table to display the units, they can see which fraction is larger and thus easier to work out. Some teachers supply charts with this lesson, but the teaching resources I found online provide graphs
How to Convert Fractions Decimals and Percents Worksheets Lovely from fractions and percentages worksheets , source:roofinginhoumala.com
Fractions and percentage units of measure are again a fundamental part of teaching and learning about fractions. A great many resources to offer units of measurement for all the major units of
measurement. Fractions units for each major unit can be compared with other units of measure in tables that are also shown on the lesson plan. This sets up the basis for further teaching on the
Fractions may not seem like they are all that important when only taught in elementary school. In fact, many teachers do not use fraction lessons until higher education levels. That is not
surprising, since higher education degrees require learning to use several different types of measurement. But anyone who has ever taught a class in elementary school knows that the usefulness of a
fraction is fundamental. Teachers need to use the tools of the fraction in order to teach other subjects at a high school, college, and beyond.
fraction sheet Erkalnathandedecker from fractions and percentages worksheets , source:erkal.jonathandedecker.com
The Fractions and Percentages Worksheet are a great teaching tool because it enables the teacher to compare a fraction with another. It can show a fraction is bigger, smaller, or more or less than
another unit. Because they are so closely related, the comparison becomes almost automatic. It only takes a few seconds to compare Fractions and Percentages Worksheets side-by-side in a lesson plan,
which is useful, especially when the teacher is trying to teach an entire class about fractions.
Fractions and Percentages worksheets enable teachers to also show how different units of measurement relate to one another. For example, by using the fraction, a teacher can show that inches are less
than five-hundredths of a degree. Or, she can show that a centimeter is less than one-eighth of an inch. Using these teaching aids allows students to learn about fractions quickly and easily, which
helps them develop a better understanding and memory skills when they begin high school.
Product from fractions and percentages worksheets , source:hope-education.co.uk
Fractions and percentages not only make learning easier, but they also help the students understand the relationship between the fraction, its percentages, and other units of measurement. Educators
can find out which types of Fractions and Percentages are most common by using teaching resources that include Fractions and Percentages worksheet packs and other lesson aids. Fractions and
percentages can be a great teaching tool, as they make learning fun for students and easy for the teacher. If teachers integrate them into their lessons, they can help their students to succeed.
Math worksheets number line decimals from fractions and percentages worksheets , source:myscres.com
Numbers Worksheets Fractions Worksheets Fresh Fractions A Number from fractions and percentages worksheets , source:sblomberg.com
Worksheet Decimal Fraction Percent Beautiful Fraction Decimal from fractions and percentages worksheets , source:penlocalmag.com
Line Plots With Fractions 5Th Grade Worksheets Worksheets for all from fractions and percentages worksheets , source:bonlacfoods.com
Worksheets by Math Crush Fractions from fractions and percentages worksheets , source:mathcrush.com
Worksheets by Math Crush Fractions from fractions and percentages worksheets , source:mathcrush.com | {"url":"https://briefencounters.ca/42655/fractions-and-percentages-worksheets/","timestamp":"2024-11-02T20:44:07Z","content_type":"text/html","content_length":"92752","record_id":"<urn:uuid:4f9fc2f1-98e8-4d93-a0a9-ecd063e3fc07>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00192.warc.gz"} |
Stack Pop Operation
Pop Element from Stack
The pop operation in a stack involves removing the element from the top of the stack. This operation decreases the size of the stack by one and updates the top reference to the next element in the
stack. Stacks follow a Last In, First Out (LIFO) principle, meaning the most recently added element is the first to be removed.
Step-by-Step Process
Consider a stack with the following structure before the pop operation:
Stack (Top -> Bottom):
Top -> 4 -> 3 -> 2 -> 1
To pop the top element from the stack, follow these steps:
1. Check if the Stack is Empty: If the stack is empty, return an error or None.
2. Store the Top Node: Store the current top node in a temporary variable.
3. Update the Top Reference: Update the top reference to point to the next node in the stack.
4. Return the Stored Node's Value: Return the value of the stored top node.
After performing these steps, the stack will have the following structure:
Stack (Top -> Bottom):
Top -> 3 -> 2 -> 1
Pseudo Code
Function pop(stack):
# Check if the stack is empty
If stack.top is null:
Return None
# Store the current top node
temp = stack.top
# Update the top reference to the next node
stack.top = stack.top.next
# Return the value of the stored top node
Return temp.data
Python Program to Perform Stack Pop Operation
class Node:
def __init__(self, data):
self.data = data
self.next = None
class Stack:
def __init__(self):
self.top = None
def push(self, data):
# Create a new node with the given data
new_node = Node(data)
# Set the new node's next reference to the current top of the stack
new_node.next = self.top
# Update the top reference to the new node
self.top = new_node
def pop(self):
# Check if the stack is empty
if self.top is None:
return None
# Store the current top node
temp = self.top
# Update the top reference to the next node
self.top = self.top.next
# Return the value of the stored top node
return temp.data
def traverse(self):
# Traverse and print the stack
current = self.top
while current:
print(current.data, end=" -> ")
current = current.next
# Example usage:
stack = Stack()
print("Stack before popping:")
stack.traverse() # Output: 4 -> 3 -> 2 -> 1 -> None
popped_element = stack.pop()
print(f"Popped element: {popped_element}") # Output: 4
print("Stack after popping:")
stack.traverse() # Output: 3 -> 2 -> 1 -> None
This Python program defines a stack with methods for pushing elements onto the stack, popping elements from the stack, and traversing the stack. The pop method checks if the stack is empty, stores
the current top node, updates the top reference, and returns the value of the stored top node. | {"url":"https://pythonexamples.org/data-structures/stack-pop","timestamp":"2024-11-09T23:49:20Z","content_type":"text/html","content_length":"33042","record_id":"<urn:uuid:7b737ac4-fedc-4896-8204-6ecb43fc2a7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00884.warc.gz"} |
Longitudinal SEM and Latent Growth Curves
I'm fitting a growth curve model (not latent growth curve model yet) using intercept and slope factors, and need individuals' slope and intercept estimates.
My inclination is to get factor score estimates for the intercept and slope factors, but when I do this using mxFactorScores(), I get an error message:
"Error: In model 'FactorScoresLGCM_N' the name 'fscore' is used as a free parameter in 'FactorScoresLGCM_N.Score' but as a fixed parameter in 'FactorScoresLGCM_N.Score'.
I am trying to fit a structured latent growth curve model where the actual days of observation are inserted into the latent variable loadings using definition variables. Unfortunately, I run into the
following error:
Error in value[rows[[i]], cols[[i]]] <- startValue :
incorrect number of subscripts on matrix
I'm trying to model the effect of a time-independent predictor on the parameters of a dual latent change score model (in discrete time). My predictor is the cohort to which the subjects belong. The
groups (cohorts) of individuals in my data have different values for the parameters (e.g., different self-feedbacks, mean and variance of the slope...). I managed to successfully account for cohort
effects on means, variances, and regression parameters, just using mxPaths.
I am trying to fit a nonlinear latent growth curve model (as part of simulations I am conducting) using the three-parameter logistic (s-shaped change) function below
$$y_{obs} = \frac{diff}{1+e^{\frac{\beta - time_i}{\gamma}}}, $$
I'm new to all things SEM, but I'm attempting to use OpenMx for a Cross-lagged Panel Model (with random intercepts) on longitudinal data in youth. I have two measured variables at two timepoints,
equally spaced apart. I have gone through John Flournoy's tutorials (http://johnflournoy.science/2018/09/26/riclpm-openmx-demo/) as well as several other resources on this topic, but I have a few
questions I was hoping I could get some input on from the experts here!
I have a cross-lagged model that I have run using the code below:
Hello there!
I'm having trouble with a model investigating change in anxiety over time. In the future, I will be building out a more complex model with other factors predicting change in anxiety over time, but
I'm now just at a step before that looking into a model with only change in anxiety over time--only for two time points.
I am running this code to conduct LGCM prior to GMM.
I was wondering if there is a way of implementing FIML instead of ML in my code.
This is because my data has a lot of missingness by design (longitudinal data) and is not normally distributed.
Many thanks!!!
fitFunction <- mxFitFunctionML(rowDiagnostics=TRUE)
mxOption(NULL,"Default optimizer","SLSQP") ### to avoid msg error: https://openmx.ssri.psu.edu/node/4238
I´m having a hard time understanding the OpenMx notation from this handbook "Growth Modeling" (Grimm, Ram & Estabrook, 2017).
The authors compare the script from MPlus and OpenMx with regard to the specification of a no-growth model (attached as pics). While in Mplus is pretty straightforward, I don´t understand why the
latent factor variance (psi_11) and the indicators residual variances are set at 80 and 60 respectively in the OpenMx script.
Either SEMNET is not getting my emails or they don't care about my questions (probably the latter), so I'll ask them here specific to OpenMx. | {"url":"https://openmx.ssri.psu.edu/forums/opensem-forums/longitudinal-sem-and-latent-growth-curves","timestamp":"2024-11-14T05:10:37Z","content_type":"text/html","content_length":"57324","record_id":"<urn:uuid:c9763935-1ef4-46d8-aca4-17ad6496079b>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00441.warc.gz"} |
On the performance of a class of multihop shuffle networks
A generalization of the well-known shuffle network is proposed for multihop lightwave communication. In the classical definition of a shuffle network, i.e., N = kp^k where N is the number of nodes
and k is the number of stages with base p, the realizable values of N are very discrete and many of the intermediate values of N are not realizable. In this paper, we propose a new definition of a
shuffle network as N = nk where n is the number of nodes per stage with base p. Based on this new definition, we divide the shuffle networks into two classes: extra-stage and reduced-stage. Study
results can be used to determine an optimal network topology when given a value of N.
Other Proceedings of the 1995 IEEE International Conference on Communications. Part 1 (of 3)
City Seattle, WA, USA
Period 6/18/95 → 6/22/95
All Science Journal Classification (ASJC) codes
• Computer Networks and Communications
• Electrical and Electronic Engineering
Dive into the research topics of 'On the performance of a class of multihop shuffle networks'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/on-the-performance-of-a-class-of-multihop-shuffle-networks","timestamp":"2024-11-05T06:36:59Z","content_type":"text/html","content_length":"47792","record_id":"<urn:uuid:18f60f6e-d5c1-429c-95f0-5b4addc108a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00127.warc.gz"} |
List of abstracts
Application of multidimensional fBm in mammography screening
Martin Dlask, Jaromir Kukal
FNSPE, CTU in Prague
The talk presents a methodology of analyzing multidimensional fractional Brownian motion (fBm) and is applied for the identification of cancerous breast lumps from mammography screening images. At
first, the exact method for multidimensional fBm images is presented and the accuracy of estimation is verified on simulated data with known Hurst exponent. Unlike approximate methods for generating
multidimensional fBm and its Hurst exponent estimation, this approach shows unbiased results for all processes with short memory and most of cases with long memory. We apply the technique on the
mamography images while being able to prove that patients with cancerous findings have significantly higher Hurst exponent than those with a benign lump.
LBM & turbulent fluid flow simulations on different lattices
Pavel Eichler
FNSPE, CTU in Prague
In this contribution, recent LBM simulations and progress in LBM will be presented. The application of LBM in the turbulent fluid flow in the fluidized bed reactor will be discussed. The LBM results
are compared both with the experimental data and with the result produced with ANSYS Fluent software. Next, since all our previous turbulent fluid flow simulations have significant demands on the
computational mesh, the octree structure of the computational mesh can reduce these demands. Although many numerical methods widely use this mesh refinement technique, the interpolation of discrete
density functions is not straightforward in LBM. Thus, we will discuss different interpolations of density functions and their influence on the solution.
Potential of computational fluid dynamics in aortic repair decision making
Radek Galabov, Jaroslav Tintera
FNSPE, CTU in Prague, IKEM in Prague, IKEM
Atherosclerosis is a leading cause in artery stenosis and occlusion. In recent decades, open surgery has been replaced by endovascular treatment at least for the less complicated lesions. There exist
various stenting and angioplastic procedures to remodel the vessel and recover blood flow. However, implanted stents further influence blood dynamics and restenoses do occur. Long-term patency
depends on stent design and procedural details, but the particular mechanisms are not yet fully understood. Computational fluid dynamics offers means to investigate some of the underlying causes in
occlusion disease.
Representation and dimension estimation of fractal sets
František Gašpar, Jaromír Kukal
FNSPE, CTU in Prague, FNSPE, CTU in Prague
Stochastic models of diffusion in spatial domains of noninteger dimension are widely applicable as a basis of simulations. Obtaining data having fractal properties requires the construction of fine
enough discrete latices that is computationally expensive. This contribution presents a novel way of representing self-similar fractal models using a generalized coordinate system together with
statistical testing of obtained dimension estimates.
System for Historical Buildings Reconstruction
Jiří Chludil, Pauš Petr
FF, UHK, FIT, CTU in Prague
Modern visualization technologies are becoming popular in historical sciences, e.g., digital reconstruction of historical buildings. The whole process, such as tedious work in the archive, digitizing
all required materials, 3D modeling, preparation of textures, and importing a model into some visualization framework might be quite a complicated procedure for historians, especially if high
standards need to be achieved. Technically demanding processes are often outsourced to external companies, which is usually expensive and time-consuming. The education process of historians nowadays
contains an introduction to visualization technologies and digitization, but their knowledge is still rather limited. The process of data preparation and digitization usually goes without problems.
However, 3D modeling itself followed by export to the visualization framework is far more complicated. There are usually fundamental problems in 3D models (bad topology and triangulation, etc.) and
also issues with supported formats among applications. The goal of this project is to design and develop a system that helps historians to simplify and ease the process of historical buildings
reconstruction by means of tools and techniques of software engineering and computer graphics. This study would like to create a full feedback system where all 3D models will be checked for quality
(from a historical and computer graphics point of view) by automated and semi-automated tests. Access to all historical data as well as a backup and versioning system will be integrated into the
system. Finally, the system will support exporting models to selected visualization frameworks in proper formats by means of client applications. According to the authors’ experience, historians
arguably are a very conservative group of scientists. Therefore, designing and testing a proper user interface in a full-fledged UX laboratory is mandatory.
Optimization of the branch and bound algorithm with application for phase stability testing of multicomponent mixtures
Martin Jex, Jiří Mikyška
FNSPE, CTU in Prague
This work examines the question of VTN phase stability testing. This problem is solved by global minimization of the TPD (tangent plane distance) function. The global optimization is performed using
applying the branch and bound algorithm, which is improved, in comparison to its basic variant, by using a more effective pruning of the tree arising from the algorithm. This improvement is derived
from the necessary conditions of an extremum, which leads to suplementary conditions for pressure and chemical potentials. Functions describing theese conditions are not convex, therefore, in this
work, we derive and apply its convex-concave decompositions.
Process of freezing and thawing of porous media
Léa Keller
Abstract: This contribution studies different experiments and mathematical models of water and soil freezing. Soil freezing has important effects in order to understand deformation of grounds, such
as for instance the roads in winter. The model is based on Stefan problem which is a particular type of free boundary problem. Several experiments and models with water, sand and gas are performed
and then modelled with the use of Comsol 3.3 in order to visualize the freezing evolution.
MP-PIC simulations of fluidization with kotelFoam
Jakub Klinkovský
FNSPE, CTU in Prague
Multiphase particle-in-cell is an interesting method for modeling particle-fluid interactions in computational fluid dynamics, which combines the advantages of both Eulerian and Lagrangian
frameworks. While the motion of particles is tracked using the Lagrangian framework, inter-particle interactions (i.e. collisions) are approximated using averaged quantities on the Eulerian grid
where the fluid is simulated. According to the literature, the method is stable in dense particle flows, computationally efficient, and physically accurate, which makes it suitable for the simulation
of industrial-scale chemical processes involving particle-fluid flows.
In this talk, we present the governing equations and mathematical background of the MP-PIC method, then we describe its implementation in the OpenFOAM framework and highlight our own improvements
that are included in our customized "kotelFoam" solver. Finally, we present our simulations of fluidized particles in a plastic model of a bubbling fluidized bed combustor.
Mathematical modeling of contrast agent transport and its transfer through the vessel wall in vascular flow
Jan Kovář
FNSPE, CTU in Prague
This contribution deals with mathematical modeling of contrast agent transport and its transfer through the vessel wall in vascular flow in a two-dimensional computational domain. The problem is
solved in the context of myocardial perfusion examination using a contrast agent.
The audience will be briefly introduced to a mathematical model of Newtonian incompressible fluid flow in an isothermal free flow system and a mathematical model of a contrast agent transport, in
which the boundary condition modeling the transfer of the contrast agent is included. The numerical scheme of the lattice Boltzmann method used to solve the aforementioned problem will be discussed
together with the results obtained by this scheme.
Two phase flow simulations using the lattice Boltzmann method
Michal Malík
FNSPE, CTU in Prague
In this contribution, we will present the possibilities of using the lattice Boltzmann method, LBM for short, to simulate two phase flow. Two numerical models will be described: Shan-Chen LBM and
phase-field LBM. Shan-Chen LBM can be used to simulate both miscible and immiscible flow, while phase-field LBM is only capable of the latter. We will discuss the phase separation in Shan-Chen LBM
and the initial condition for phase-field LBM. Afterwards, the application of both numerical models in simulating the contact angle between fluid and solid surface will be shown.
Mathematical Modelling in Electrocardiology
Niels van der Meer, Michal Beneš
FNSPE, CTU in Prague
Cardiovascular diseases account for more than thirty per cent of all deaths which makes them the most common cause of decease worldwide. It is therefore understandable that considerable effort has
been exerted to treat and prevent these conditions. This talk(based on a thesis of the same name) probes for the potential contributions of mathematics and its tools developed from the theory of
reaction-diffusion equations. The main area of interest is electrocardiology which studies heart rhythm disorders as well as their causes. Some of the mathematical models describing the propagation
of a signal in an excitable medium are introduced. One such example is the FitzHugh–Nagumo model whose several variations were numerically analyzed and the results are presented in this talk.
CMA-ES with Distribution Maximizing Renyi Entropy
Ivan Merta, Jaromir Kukal
FNSPE, CTU in Prague
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is a widely accepted metaheuristic optimization method. It belongs to a class of stochastic, derivative-free optimization algorithms
performing well on a wide range of nonlinear, non-convex black-box functions in a continuous domain. As such, it has attracted researchers to make various modifications to the method.
In classical CMA-ES, new candidate solutions are sampled from a multivariate normal distribution. In our research, we propose a novel heavy-tailed distribution for generating the samples. The steps
for generating such values are derived from a spherically symmetric distribution maximizing Renyi entropy. This approach results in a generalization CMAE-ES and classical method with a multivariate
normal distribution sampling here becomes an edge case.
The performance is compared with classical CMA-ES on multiple appropriately difficult test functions.
Multi-phase compositional modeling in porous medium with phase equilibrium computation
Tomáš Smejkal, Jiří Mikyška
FNSPE, CTU in Prague
In this contribution, we present a new numerical solution of a multi-phase miscible compressible Darcy's flow of a multi-component mixture in a porous medium. The mathematical model consists of the
mass conservation equation of each component, extended Darcy's law for each phase, and an appropriate set of the initial and boundary conditions. The phase split is computed using the constant
temperature-volume flash (known as VTN-specification). The transport equations are solved numerically using the mixed-hybrid finite element method and a novel iterative IMPEC scheme. We provide
examples showing the performance of the numerical scheme.
Mathematical model of melting of unsaturated porous media
Jakub Solovský
FNSPE, CTU in Prague
In this work, we present the simplified mathematical model of two-phase compositional flow in porous media coupled with heat conduction and phase transitions.
We implement the numerical scheme based on the mixed-hybrid finite element method for solving such problems and demonstrate the capabilities of the model on an artificial scenario inspired by the
planned experiments.
Initially, the pore space of a sand-filled container is occupied by ice with entrapped gas bubbles. One wall of the container is heated, the remaining ones are insulated. The ice within a container
melts and releases the trapped gas that is then transported in the already melted region and dissolves into the water.
The Hyperbolic Mean Curvature Flow
Monika Suchomelová
FNSPE, CTU in Prague
The mean curvature flow (MCF) in plane is well studied curve dynamics with interesting properties. The hyperbolic version of this flow (HMCF) is defined by the rule that normal acceleration of the
curve is equal to curvature. In addition to an initial curve, the initial velocity must be defined.
The studied equation for parametric plane curve is presented. The properties of the flow are demonstrated on computed examples of evolving closed plane curves and compared with the properties of MCF.
The interesting situation happens if the initial velocity is set to be equal to initial tangent vector field.
Estimation of relaxation time T1 using the imaging sequence model
Kateřina Škardová
FNSPE, CTU in Prague
In this contribution, we discuss how numerical simulations and machine learning can be combined in order to create a framework for tissue parameter estimation. The proposed approach is applied on the
problem of T1 relaxation time estimation based on image series acquired by the Modified Look-Locker Inversion Recovery (MOLLI) magnetic resonance imaging sequence.
The main contribution is in combining neural network with numerical minimization. The neural network is trained using synthetic data generated by MOLLI sequence simulations based on Bloch equation.
The prediction of neural network is used to initialize the numerical optimisation step. The proposed method is validated using phantoms with wide range of T1 values.
Application of maximal monotone operator method for solving Hamilton-Jacobi-Bellman equation arising from optimal portfolio selection problem
Cyril Izuchukwu Udeani, Daniel Sevcovic
Comenius University, Bratislava
In this paper, we investigate a fully nonlinear evolutionary Hamilton-Jacobi-Bellman (HJB) parabolic equation utilizing the monotone operator technique. We consider the HJB equation arising from
portfolio optimization selection, where the goal is to maximize the conditional expected value of the terminal utility of the portfolio. The fully nonlinear HJB equation is transformed into a
quasilinear parabolic equation using the so-called Riccati transformation method. The transformed parabolic equation can be viewed as the porous media type of equation with source term. Under some
assumptions, we obtain that the diffusion function to the quasilinear parabolic equation is globally Lipschitz continuous, which is a crucial requirement for solving the Cauchy problem. We employ
Banach's fixed point theorem to obtain the existence and uniqueness of a solution to the general form of the transformed parabolic equation in a suitable Sobolev space in an abstract setting. Some
financial applications of the proposed result are presented in one-dimensional space.
Numerical Computations of snow crystal growth models by the method of fundamental solutions
Shimoji Yusaku, Yoshinori Okino
Meiji University, Meiji University
There are several known mathematical models that describe snow crystal growth. Yokoyama-Kuroda model is well known as a representative model. In addition, Barrett et al. have proposed a model that
takes into account the Gibbs-Thomson law, which was not considered in the derivation of Yokoyama-Kuroda model. Some numerical calculations have already been done for these problems. However, to the
best of our knowledge, there are no numerical calculations using the MFS. In this talk, we will report the results of numerical calculations using MFS for these models. | {"url":"https://geraldine.fjfi.cvut.cz/wsc-2021/abstracts","timestamp":"2024-11-11T16:00:24Z","content_type":"text/html","content_length":"24555","record_id":"<urn:uuid:abf0ba59-5bf1-4b0d-80b0-486e622bdd9d>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00429.warc.gz"} |
The scalable problem — GEMSEO 5.3.1 documentation
The scalable problem¶
In this section we describe the GEMSEO’ scalable problem, or scalable discipline feature, based on the paper [VGM17]:
@conference {VGM2017,
title = {On the Consequences of the "No Free Lunch" Theorem for Optimization on the Choice of {MDO} Architecture},
booktitle = {Proceedings of the AIAA SciTech Conference},
year = {2017},
month = {January},
author = {Charlie Vanaret and Francois Gallard and Joaquim R. R. A. Martins}
Based on computationally cheap disciplines, the scalable problem allows to choose a MDO formulation:
• for the problem from which derives the scalable problem or
• for a family of problems having:
☆ a greater number of design and coupling variables and
☆ common properties with the original problem.
According to the authors, this scalable problem “preserve[s] the functional characteristics of the original problem and they proved useful in performing a rapid benchmarking of MDO formulation”. This
“provides insights on the scalability of MDO architectures with respect to the dimensions of the problem. This may be achieved without having to execute the MDO processes with the original models.
Our methodology thus requires a limited number of evaluations of the original models that is independent of the desired dimensions of the design and the coupling variables of the scalable problem.”
The proposed methodology
1. builds a surrogate model \(\Phi^{(int)}\) for each discipline \(\Phi\) of the initial problem with a limited amount of evaluations \(T\),
2. extrapolates the surrogate model \(\Phi^{(ext)}\) to an arbitrary dimension.
The methodology preserves the interface of the initial problem, that is the names of the inputs (design variables) and the outputs (coupling and state variables). Any high-fidelity discipline of the
initial problem may therefore be replaced by a cheap scalable component generated by the methodology. Strong properties are guaranteed by the methodology.
One-dimensional restriction¶
The original model \(\Phi:\mathbb{R}^n\rightarrow\mathbb{R}^m\) is restricted to a one-dimensional function \(\Phi^{(1d)}:[0,1]\rightarrow\mathbb{R}^m\) by evaluating it along a diagonal line in the
domain \([x_1,\overline{x_1}]\times\ldots[x_n,\overline{x_n}]\):
For any component \(i\in\{1,\ldots,m\}\) of \(\Phi^{(1d)}\), the direct image of \(T\) a finite subset of \([0,1]\) with cardinality \(|T|\), is:
\[\Phi_i^{(1d)}(T) = \left\{\Phi^{(1d)}(t)|t\in T\right\}\]
mapping from \([0,1]\) to \([m_i, M_i]\) and where \(m_i\) and \(M_i\) are respectively the minimal and maximal values reached by \(\Phi^{(1d)}\) over \(T\).
The scaled version of \(\Phi_i^{(1d)}(T)\) is
\[\Phi_i^{(s1d)}(T) = \left\{\Phi^{(1d)}(t)|t\in T\right\}\]
mapping from \([0,1]\) to \([0,1]\).
Then, each component \(i\) of \(\Phi^{(1d)}(t)\) is approximated by a polynomial interpolation \(\Phi_i^{(int)}\) over the date \(\left(T,\Phi^{(s1d)}(T)\right)\).
Input-output dependency¶
Dependencies between inputs and outputs can be represented by a sparse dependency matrix \(S\) where:
• each block row represents a function of the problem (constraint or coupling),
• each block column represents an input (design variable or coupling),
• a nonzero element represents the dependency of a particular component of a function with respect to a particular component of an input.
In practice, the dependencies between inputs and outputs are not precisely known. Consequently, the matrix \(S\) is randomly computed by block by means of a density factor (the filling of a block is
proportional to this density factor).
Furthermore, initially taken in \(\mathcal{M}_{n,m}(\mathbb{R})\),this matrix \(S\) can be taken in \(\mathcal{M}_{n_x,n_y}(\mathbb{R})\) where the number of inputs \(n_x\) and the number of outputs
\(n_y\) of the scalable model is freely chosen by the user.
Once \(n_x\) and \(n_y\) are chosen, we build the function \(\Phi^{(ext)}:[0,1]^{n_x}\rightarrow[0,1]^{n_y}\) extrapolates \(\Phi^{(int)}:[0,1]\rightarrow[0,1]^{m}\) to \(n_y\) dimensions:
\[\Phi_i^{(ext)}(x)=\frac{1}{|S_{i.}|}\sum_{j\in S_{i.}} \Phi_{k_i}^{(int)}(x_j)\]
• \(S_{i.}\) represents the nonzero elements of the \(i\)-th row of the dependency matrix \(S\).
• \(k_i\) is an uniform random variable over \(\left\{1,\ldots,m\right\}\).
• Existence of a solution to the coupling problem. An equilibrium between all disciplines exists for any value of the design variables \(x\),
• Preservation of ratio. When \(n_y\) approaches \(+\infty\), the ratio of components of the original functions is preserved.
• Existence of a minimum. There exists a feasible solution to the scalable problem, for any dimension of inputs and outputs.
• Existence of derivatives. The scalable extrapolations are continuously differentiable with respect to their inputs.
• Existence of bounds on the target coupling variables. All inputs and outputs belong to \([0; 1]\), which ensures that all optimization variables are bounded, in particular coupling variables in | {"url":"https://gemseo.readthedocs.io/en/5.3.1/scalable_models/original_paper.html","timestamp":"2024-11-09T18:43:08Z","content_type":"text/html","content_length":"31632","record_id":"<urn:uuid:9f0c08e6-fa57-4a25-893a-37203a0683ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00209.warc.gz"} |
Prinsip - Prinsip Fisika (Basics Principles Of Physics)Prinsip - Prinsip Fisika (Basics Principles Of Physics)
بِسْــــــــــــــــمِ اﷲِالرَّØْÙ…َÙ†ِ اارَّØِيم
1. "For every action, there is an equal and opposite reaction" - Newton's Third Law of Motion
2. "Energy can neither be created nor destroyed, only transformed from one form to another" - Law of Conservation of Energy
3. "The force exerted on an object is equal to its mass times its acceleration" - Newton's Second Law of Motion
4. "The velocity of an object in motion will remain constant unless acted upon by an external force" - Newton's First Law of Motion
5. "The rate of change of momentum of a body is directly proportional to the force applied, and occurs in the direction in which the force is applied" - Newton's Second Law of Motion (alternative
6. "The resistance of an object to a change in its state of motion is proportional to its mass" - Newton's Second Law of Motion (alternative form)
7. "The wavelength of light is inversely proportional to its frequency" - The Wave Equation
8. "The uncertainty principle states that the position and momentum of a particle cannot both be precisely measured at the same time" - Heisenberg's Uncertainty Principle
9. "The speed of light in a vacuum is a constant, and is independent of the motion of the source or observer" - Einstein's Theory of Special Relativity
10. "The total electric charge of an isolated system remains constant" - The Law of Conservation of Electric Charge.
11. "The gravitational force between two objects is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centers" - Newton's Law
of Universal Gravitation
12. "The rate at which work is done is equal to the amount of energy transferred per unit time" - The Work-Energy Principle
13. "The potential energy of an object in a gravitational field is proportional to its height above a reference point" - The Gravitational Potential Energy Formula
14. "The amount of heat transferred between two objects is proportional to the temperature difference between them and the thermal conductivity of the material" - Fourier's Law of Heat Conduction
15. "The pressure of a gas is directly proportional to its temperature and the number of molecules present, and inversely proportional to its volume" - The Ideal Gas Law
16. "The entropy of a closed system tends to increase over time, resulting in a decrease in available energy" - The Second Law of Thermodynamics
17. "The resistance of a material is proportional to its resistivity, length, and inversely proportional to its cross-sectional area" - Ohm's Law
18. "The frequency of a wave is inversely proportional to its wavelength" - The Wave Equation
19. "The angle of incidence equals the angle of reflection for a ray of light striking a surface" - The Law of Reflection
20. "The speed of sound in a medium is directly proportional to the square root of its elasticity and inversely proportional to the square root of its density" - The Speed of Sound Formula.
21. "The electric field at a point is the force per unit charge that a test charge would experience at that point" - Coulomb's Law
22. "The magnetic force on a charged particle moving through a magnetic field is perpendicular to both the velocity of the particle and the direction of the magnetic field" - The Lorentz Force
23. "The frequency of a vibrating system is proportional to the square root of its stiffness and inversely proportional to the square root of its mass" - The Frequency Equation for Simple Harmonic
24. "The acceleration of an object in circular motion is proportional to the square of its speed and inversely proportional to the radius of the circle" - The Centripetal Acceleration Formula
25. "The energy of a photon is directly proportional to its frequency and inversely proportional to its wavelength" - The Planck-Einstein Relation
26. "The amount of current flowing through a circuit is directly proportional to the voltage and inversely proportional to the resistance" - The Ohm's Law Equation
27. "The rate of heat transfer between two objects is directly proportional to the temperature difference between them and the surface area of contact" - The Heat Transfer Equation
28. "The wavelength of a particle is inversely proportional to its momentum" - The de Broglie Relation
29. "The energy required to ionize an atom is proportional to its ionization potential" - The Ionization Energy Formula
30. "The intensity of a sound wave is proportional to the square of its amplitude" - The Intensity Formula for Sound Waves.
31. "The force on a current-carrying wire in a magnetic field is directly proportional to the current, the length of the wire in the field, and the strength of the magnetic field" - The Force on a
Current-Carrying Wire Formula
32. "The electric potential energy of a point charge in an electric field is proportional to its charge and the potential difference between its initial and final positions" - The Electric Potential
Energy Formula
33. "The resistance of a material increases with temperature due to the increased vibration of atoms and the resulting increase in collisions with electrons" - The Temperature Dependence of
Resistance Formula
34. "The angle of refraction of a ray of light passing from one medium to another is related to the angle of incidence and the indices of refraction of the two media" - Snell's Law
35. "The pressure of a fluid in a closed system is constant throughout, regardless of the shape of the container" - Pascal's Law
36. "The efficiency of a machine is the ratio of useful work output to the total energy input" - The Efficiency Formula
37. "The entropy of a system is proportional to the logarithm of the number of microstates available to it" - Boltzmann's Entropy Formula
38. "The critical angle of incidence is the angle at which a ray of light passing from a more dense medium to a less dense medium is refracted at an angle of 90 degrees" - Total Internal Reflection
39. "The energy required to change the temperature of a substance is proportional to its specific heat capacity and the temperature change" - The Heat Capacity Formula
40. "The period of a pendulum is directly proportional to the square root of its length and inversely proportional to the square root of the acceleration due to gravity" - The Period of a Pendulum
Posting Komentar | {"url":"https://www.fisikane.web.id/2023/03/prinsip-prinsip-fisika-basics.html","timestamp":"2024-11-10T17:54:49Z","content_type":"application/xhtml+xml","content_length":"309037","record_id":"<urn:uuid:7f6d5943-518c-4285-a569-e3de0f9d5184>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00303.warc.gz"} |
Centre International de Rencontres Mathématiques (videos of July 2017)
Centre International de Rencontres Mathématiques
Sandrine Grellier: Various aspects of the dynamics of the cubic Szegő solutions
Abstract: The cubic Szegö equation has been introduced as a toy model for totally non dispersive evolution equations. It turned out that it is a complete integrable Hamiltonian system for which we
Monica Visan: Almost sure scattering for the energy-critical Schrödinger equation in 4D...
Abstract: Inspired by a recent result of Dodson-Luhrmann-Mendelson, who proved almost sure scattering for the energy-critical wave equation with radial data in four dimensions, we establish the
Cécile Huneau: High frequency back reaction for the Einstein equations
Abstract: It has been observed by physicists (Isaacson, Burnett, Green-Wald) that metric perturbations of a background solution, which are small amplitude but with high frequency, yield at the
Joachim Krieger: On stability of type II blow up solutions for the critical nonlinear wave equation
Abstract: The talk will discuss a recent result showing that certain type II blow up solutions constructed by Krieger-Schlag-Tataru are actually stable under small perturbations along a
Daniel Tataru: Geometric heat flows and caloric gauges
Abstract: Choosing favourable gauges is a crucial step in the study of nonlinear geometric dispersive equations. A very successful tool, that has emerged originally in work of Tao on wave maps, is
Cirm is a paradise
Jean Ecalle: Taming the coloured multizetas
Abstract: 1. We shall briefly describe the ARI-GARI structure; recall its double origin in Analysis and mould theory; explain what makes it so well-suited to the study of multizetas; and review th...
David Broadhurst: Combinatorics of Feynman integrals
Abstract: Very recently, David Roberts and I have discovered wonderful conditions imposed on Feynman integrals by Betti and de Rham homology. In decoding the corresponding matrices, we encounter a...
Karen Yeats: Connected chord diagrams, bridgeless maps, and perturbative quantum field theory
Abstract: Rooted connected chord diagrams can be used to index certain expansions in quantum field theory. There is also a nice bijection between rooted connected chord diagrams and bridgeless
Dominique Manchon: Free post-Lie algebras, the Hopf algebra of Lie group integrators and planar...
Abstract: The Hopf algebra of Lie group integrators has been introduced by H. Munthe-Kaas and W. Wright as a tool to handle Runge-Kutta numerical methods on homogeneous spaces. It is spanned by
Gerald Dunne: Quantum geometry and resurgent perturbative/nonperturbative relations
Abstract: Certain quantum spectral problems have the remarkable property that the formal perturbative series for the energy spectrum can be used to generate all other terms in the entire
Vladimir Zakharov : Unresolved problems in the theory of integrable systems
Abstract: In spite of enormous success of the theory of integrable systems, at least three important problems are not resolved yet or are resolved only partly. They are the following:
1. The IST i...
Marc Hindry: Brauer-Siegel theorem and analogues for varieties over global fields
Abstract: The classical Brauer-Siegel theorem can be seen as one of the first instances of description of asymptotical arithmetic: it states that, for a family of number fields Ki, under mild cond...
Felipe Voloch: Maps between curves and diophantine obstructions
Abstract: Given two algebraic curves X, Y over a finite field we might want to know if there is a rational map from Y to X. This has been looked at from a number of perspectives and we will look
John Voight: Computing classical modular forms as orthogonal modular forms
Abstract: Birch gave an extremely efficient algorithm to compute a certain subspace of classical modular forms using the Hecke action on classes of ternary quadratic forms. We extend this method
Jeff Achter: Local densities compute isogeny classes
Abstract: Consider an ordinary isogeny class of elliptic curves over a finite, prime field. Inspired by a random matrix heuristic (which is so strong it's false), Gekeler defines a local factor
30 ans d'AGCCT
Arithmetic, Geometry, Cryptography and Coding Theory
More information :
Interviews Date : 22/06/2017
Gady Kozma: Internal diffusion-limited aggregation with random starting points
Abstract: We consider a model for a growing subset of a euclidean lattice (an "aggregate") where at each step one choose a random point from the existing aggregate, starts a random walk from that
Fabrice Planchon: The wave equation on a model convex domain revisited
Abstract: We detail how the new parametrix construction that was developped for the general case allows in turn for a simplified approach for the model case and helps in sharpening both positive
Yvon Martel: Interactions of solitary waves for the nonlinear Schrödinger equations
Abstract: I will present two cases of strong interactions between solitary waves for the nonlinear Schrödinger equations (NLS). In the mass sub- and super-critical cases, a work by Tien Vinh
Valeria Banica: Dynamics of almost parallel vortex filaments
Abstract: We consider the 1-D Schrödinger system with point vortex-type interactions that was derived by R. Klein, A. Majda and K. Damodaran and by V. Zakharov to modelize the dynamics of N nearly...
Evgenii Kuznetsov: Solitons vs collapses
Abstract: This talk is devoted to solitons and wave collapses which can be considered as two alternative scenarios pertaining to the evolution of nonlinear wave systems describing by a certain
Michael Weinstein: Dispersive waves in novel 2d media; Honeycomb structures, Edge States ...
Abstract: We discuss the 2D Schrödinger equation for periodic potentials with the symmetry of a hexagonal tiling of the plane. We first review joint work with CL Fefferman on the existence of
Catherine Sulem: Soliton Resolution for Derivative NLS equation
Abstract: We consider the Derivative Nonlinear Schrödinger equation for general initial conditions in weighted Sobolev spaces that can support bright solitons (but exclude spectral singularities)....
No comments: | {"url":"https://04.phf-site.com/2017/08/centre-international-de-rencontres.html","timestamp":"2024-11-08T07:58:43Z","content_type":"application/xhtml+xml","content_length":"1049331","record_id":"<urn:uuid:bf975269-9e06-499f-8bc6-c9fa9bf877ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00459.warc.gz"} |
AccuracyGoal in NIntegrate puzzle
5420 Views
5 Replies
7 Total Likes
AccuracyGoal in NIntegrate puzzle
I used NIntegrate to solve a multivariate integral, but needed to use AccuracyGoal of 6 or less for the integral to evaluate.
The results looked suspicious and it was suggested that I was looking at rounding errors.
When AccuracyGoal was set to Infinity, the result was as expected.
Why did the incremental increase in the value of AccuracyGoal not lead to a convergence on the Infinity setting value?
5 Replies
My comment about AccuracyGoal->Infinity meaning that no AccuracyGoal is set was a sarcastic response to the equivalent of going into a society where nodding the head means no and shaking it means
However, your points are well taken. Here's the results of following up on your suggestions.
There are no comments to any of the NIntegrate executions. Once any accuracy, preciision, working precision, and max and min recursions are set, I get the same output with no comments. The results
that are plotted use ?NumericQ and is a plot of the quotent of two NIntegrate functions which have similar manitudes at about 10^(-10) units. In other words, I have normalized one function with the
other. Even performing a point by point integration and normalization, the results are consistent with the parametric plots.
The integrand has no poles and, unlike some others integrals, it does not give me an excessive integrand oscillation notice. It is a multivariate integration, and I have switched the order of
integration within the constraints of certain dependencies.
It looks like my anomalous results are the real results and not getting my results is the real anomaly.
That said, I have another similar set of functions that give me spikey garbage using ?NumericQ and I do get messages about the convergence. In this case, the integrand also does not have poles and I
am trying to understand why the approach does not work. But that's another story.
>>NIntegrate essentially ignores AccuracyGoal->Infinity!!!!!! Wow.
As clearly stated in the documentation. (And this behavior is not just for NIntegrate.)
>>How can you tell if the unexpected results are the correct results?
You need to know more about (i) the integrand and (ii) the numerical integration algorithms used.
Here are some questions worth answering:
Do you get any messages from NIntegrate?Of what magnitude is the value of the integral?If you increase both MinRecursion and MaxRecursion do you get the same result? Do you get the same result using
different combinations of integration strategies and integration rules?Do you use any singularity handlers? What results do you get without (1) singularity handling and (2) symbolic preprocessing?Is
the computation of the integrand prone to cancelation errors?
I hope these help...
NIntegrate's advanced documentation
explains the integration strategies, rules, and singularity handlers and provides examples and prototype implementations.
NIntegrate essentially ignores AccuracyGoal->Infinity!!!!!! Wow.
So, what does this really mean? I performed my NIntegrate setting AccuracyGoal from 1 to 100 and got the same answer. I set it to Infinity and I got a different answer. The different answer is an
expected answer and the others are consistent but unexpected. If those are the correct results, the findings are remarkable and will have an impact on certain physics.
How can you tell if the unexpected results are the correct results?
I read both of the descriptions you identified but missed the details in the AccuracyGoal writeup.
I need to iterate my NIntegrate with various PrecisionGoal iterations and see what happens.
Thanks for the suggestions.
It is hard to give an explanation without seeing the actual integral. May be you can figure out what is going on by reading
NIntegrate's advanced documentation
section "Examples of pathological behavior" in the chapter "
NIntegrate Integration Rules
Be respectful. Review our Community Guidelines to understand your role and responsibilities. Community Terms of Use | {"url":"https://community.wolfram.com/groups/-/m/t/192571","timestamp":"2024-11-03T01:56:49Z","content_type":"text/html","content_length":"117827","record_id":"<urn:uuid:ad6d39f3-a87d-448e-93bb-2f33ec1b49af>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00386.warc.gz"} |
Introduction to Electricity, Magnetism, and Circuits
9.6 Solenoids and Toroids
By the end of this section, you will be able to:
• Establish a relationship for how the magnetic field of a solenoid varies with distance and current by using both the Biot-Savart law and Ampère’s law
• Establish a relationship for how the magnetic field of a toroid varies with distance and current by using Ampère’s law
Two of the most common and useful electromagnetic devices are called solenoids and toroids. In one form or another, they are part of numerous instruments, both large and small. In this section, we
examine the magnetic field typical of these devices.
A long wire wound in the form of a helical coil is known as a solenoid. Solenoids are commonly used in experimental research requiring magnetic fields. A solenoid is generally easy to wind, and near
its center, its magnetic field is quite uniform and directly proportional to the current in the wire.
Figure 9.6.1 shows a solenoid consisting of
We first calculate the magnetic field at the point Figure 9.6.1. This point is on the central axis of the solenoid. We are basically cutting the solenoid into thin slices that are dy thick and
treating each as a current loop. Thus, Equation 9.4.3 and Equation 9.6.1:
where we used Equation 9.6.1 to replace Figure 9.6.1, we have:
(Figure 9.6.1)
Figure 9.6.1 (a) A solenoid is a long wire wound in the shape of a helix. (b) The magnetic field at the point P on the axis of the solenoid is the net field due to all of the current loops.
Taking the differential of both sides of this equation, we obtain
When this is substituted into the equation for
which is the magnetic field along the central axis of a finite solenoid.
Of special interest is the infinitely long solenoid, for which 9.6.4, the magnetic field along the central axis of an infinite solenoid is
We now use these properties, along with Ampère’s law, to calculate the magnitude of the magnetic field at any location inside the infinite solenoid. Consider the closed path of Figure 9.6.2. Along
segment 1, B⃗ B→ is uniform and parallel to the path. Along segments 2 and 4, B⃗ B→ is perpendicular to part of the path and vanishes over the rest of it. Therefore, segments 2 and 4 do not contribute
to the line integral in Ampère’s law. Along segment 3, B⃗ =0B→=0 because the magnetic field is zero outside the solenoid. If you consider an Ampère’s law loop outside of the solenoid, the current
flows in opposite directions on different segments of wire. Therefore, there is no enclosed current and no magnetic field according to Ampère’s law. Thus, there is no contribution to the line
integral from segment 3. As a result, we find
(Figure 9.6.2)
Figure 9.6.2 The path of integration used in Ampère’s law to evaluate the magnetic field of an infinite solenoid.
The solenoid has turns per unit length, so the current that passes through the surface enclosed by the path is
within the solenoid. This agrees with what we found earlier for on the central axis of the solenoid. Here, however, the location of segment
Outside the solenoid, one can draw an Ampère’s law loop around the entire solenoid. This would enclose current flowing in both directions. Therefore, the net current inside the loop is zero.
According to Ampère’s law, if the net current is zero, the magnetic field must be zero. Therefore, for locations outside of the solenoid’s radius, the magnetic field is zero.
When a patient undergoes a magnetic resonance imaging (MRI) scan, the person lies down on a table that is moved into the center of a large solenoid that can generate very large magnetic fields. The
solenoid is capable of these high fields from high currents flowing through superconducting wires. The large magnetic field is used to change the spin of protons in the patient’s body. The time it
takes for the spins to align or relax (return to original orientation) is a signature of different tissues that can be analyzed to see if the structures of the tissues is normal (Figure 9.6.3).
Figure 9.6.3 In an MRI machine, a large magnetic field is generated by the cylindrical solenoid surrounding the patient. (credit: Liz West)
Magnetic Field Inside a Solenoid
A solenoid has
We are given the number of turns and the length of the solenoid so we can find the number of turns per unit length. Therefore, the magnetic field inside and near the middle of the solenoid is given
by Equation 9.6.7. Outside the solenoid, the magnetic field is zero.
The number of turns per unit length is
The magnetic field produced inside the solenoid is
This solution is valid only if the length of the solenoid is reasonably large compared with its diameter. This example is a case where this is valid.
CHECK YOUR UNDERSTANDING 9.7
What is the ratio of the magnetic field produced from using a finite formula over the infinite approximation for an angle (b) The solenoid has
A toroid is a donut-shaped coil closely wound with one continuous wire, as illustrated in part (a) of Figure 9.6.4. If the toroid has
Figure 9.6.4 (a) A toroid is a coil wound into a donut-shaped object. (b) A loosely wound toroid does not have cylindrical symmetry. (c) In a tightly wound toroid, cylindrical symmetry is a very good
approximation. (d) Several paths of integration for Ampère’s law.
We begin by assuming cylindrical symmetry around the axis Figure 9.6.4 shows, the view of the toroidal coil varies from point to point (for example, Figure 9.6.4], and cylindrical symmetry is an
accurate approximation.
With this symmetry, the magnetic field must be tangent to and constant in magnitude along any circular path centred on shown in part (d) of Figure 9.6.4,
Ampère’s law relates this integral to the net current passing through any surface bounded by the path of integration. For a path that is external to the toroid, either no current passes through the
enclosing surface (path In either case, there is no net current passing through the surface, so
The turns of a toroid form a helix, rather than circular loops. As a result, there is a small field external to the coil; however, the derivation above holds if the coils were circular.
For a circular path within the toroid (path times, resulting in a net current through the surface. We now find with Ampère’s law,
The magnetic field is directed in the counterclockwise direction for the windings shown. When the current in the coils is reversed, the direction of the magnetic field also reverses.
The magnetic field inside a toroid is not uniform, as it varies inversely with the distance from the axis (the radius midway between the inner and outer radii of the toroid) is much larger than the
cross-sectional diameter of the coils Equation 9.6.10 where
Candela Citations
CC licensed content, Specific attribution
• Download for free at http://cnx.org/contents/7a0f9770-1c44-4acd-9920-1cd9a99f2a1e@8.1. Retrieved from: http://cnx.org/contents/7a0f9770-1c44-4acd-9920-1cd9a99f2a1e@8.1. License: CC BY: | {"url":"https://openpress.usask.ca/physics155/chapter/9-6-solenoids-and-toroids/","timestamp":"2024-11-14T18:40:08Z","content_type":"text/html","content_length":"139678","record_id":"<urn:uuid:cb0b492a-a634-44ff-b3ec-785bf36bcfe4>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00061.warc.gz"} |
How to Get the Frequency Table of a Categorical Variable as a Data Frame in R | R-bloggersHow to Get the Frequency Table of a Categorical Variable as a Data Frame in R
How to Get the Frequency Table of a Categorical Variable as a Data Frame in R
[This article was first published on
The Chemical Statistician » R programming
, and kindly contributed to
]. (You can report issue about the content on this page
) Want to share your content on R-bloggers?
if you have a blog, or
if you don't.
One feature that I like about R is the ability to access and manipulate the outputs of many functions. For example, you can extract the kernel density estimates from density() and scale them to
ensure that the resulting density integrates to 1 over its support set.
I recently needed to get a frequency table of a categorical variable in R, and I wanted the output as a data table that I can access and manipulate. This is a fairly simple and common task in
statistics and data analysis, so I thought that there must be a function in Base R that can easily generate this. Sadly, I could not find such a function. In this post, I will explain why the
seemingly obvious table() function does not work, and I will demonstrate how the count() function in the ‘plyr’ package can achieve this goal.
The Example Data Set – mtcars
Let’s use the mtcars data set that is built into R as an example. The categorical variable that I want to explore is “gear” – this denotes the number of forward gears in the car – so let’ s view the
first 6 observations of just the car model and the gear. We can use the subset() function to restrict the data set to show just the row names and “gear”.
> head(subset(mtcars, select = 'gear'))
Mazda RX4 4
Mazda RX4 Wag 4
Datsun 710 4
Hornet 4 Drive 3
Hornet Sportabout 3
Valiant 3
What are the possible values of “gear”? Let’s use the factor() function to find out.
> factor(mtcars$gear)
[1] 4 4 4 3 3 3 3 4 4 4 4 3 3 3 3 3 3 4 4 4 3 3 3 3 3 4 5 5 5 5 5 4
Levels: 3 4 5
The cars in this data set have either 3, 4 or 5 forward gears. How many cars are there for each number of forward gears?
Why the table() function does not work well
The table() function in Base R does give the counts of a categorical variable, but the output is not a data frame – it’s a table, and it’s not easily accessible like a data frame.
> w = table(mtcars$gear)
> w
> class(w)
[1] "table"
You can convert this to a data frame, but the result does not retain the variable name “gear” in the corresponding column name.
> t = as.data.frame(w)
> t
Var1 Freq
You can correct this problem with the names() function.
> names(t)[1] = 'gear'
> t
gear Freq
I finally have what I want, but that took several functions to accomplish. Is there an easier way?
count() to the Rescue! (With Complements to the “plyr” Package)
Thankfully, there is an easier way – it’s the count() function in the “plyr” package. If you don’t already have the “plyr” package, install it first – run the command
Then, call its library, and the count() function will be ready for use.
> library(plyr)
> count(mtcars, 'gear')
gear freq
> y = count(mtcars, 'gear')
> y
gear freq
> class(y)
[1] "data.frame"
As the class() function confirms, this output is indeed a data frame!
Filed under:
Applied Statistics
Categorical Data Analysis
Data Analysis
Descriptive Statistics
R programming
categorical variable
data frame
frequency table
R programming | {"url":"https://www.r-bloggers.com/2015/02/how-to-get-the-frequency-table-of-a-categorical-variable-as-a-data-frame-in-r/","timestamp":"2024-11-02T04:24:50Z","content_type":"text/html","content_length":"102332","record_id":"<urn:uuid:b70a38dd-faa4-4bfe-a2e0-8af8e3ac981a>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00760.warc.gz"} |
VLOOKUP like the Pros
Published on
VLOOKUP like the Pros
David Krevitt
Data App Enthusiast
Welcome to Cacheworthy
This post is a vestige of this blog's previous life as CIFL (2015 - 2023), which covered building analytics pipelines for internal reporting. When migrating to Cacheworthy and a focus on data
applications, we found this post too helpful to remove - hope you enjoy it!
Curious about data applications? Meet Cacheworthy.
VLOOKUP is the gateway drug of Google Sheets formulas.
A little taste of it you'll be hooked on the power of spreadsheets - it's a small, simple formula that will save you hours when manipulating or analyzing data.
This post will help teach you how to write a number of VLOOKUP formula variants: make a copy of the accompanying cheat sheet to follow along.
Let's dive into the VLOOKUPS!
Step by step VLOOKUP tutorial
If you prefer a video walkthrough, follow along here throughout the post:
1. The basic VLOOKUP formula
In its simplest form, VLOOKUP allows you to find a value from a table, based on the value in another column:
• A Twitter handle based on the user's name
• A company name based on their website URL
If you have a big table with raw data, VLOOKUP is your first weapon to start plucking out specific values.
For example, a large dataset of Tweets that you want to extract data from.
How it works The order of the formula is tricky if you've never used it - but be patient, and allow yourself to not understand it the first time:
=VLOOKUP(value to lookup, data range, column number to pull, 0)
Let's start with a question (based on data in the 'Sample Tweets' dataset tab):
How many followers does the handle GrowthHackers have?
The VLOOKUP we'd need to answer that is:
=VLOOKUP(B10, ‘Sample Tweets’!C:E, 3, 0)
Super important: The value that you're looking up (GrowthHackers, or cell B10 in this case) MUST BE in the first column of your data range.
Our data lives in rows C:E of the 'Sample Tweets' tab, so that's our second parameter. And we want to pull the 3rd column over.
It's also important to note that VLOOKUP returns the first value it finds - so if you have multiple rows containing your search term, it will only pull the first one.
2. Using VLOOKUP on a range (ARRAYFORMULA)
It's pretty rare that you'll use a VLOOKUP in isolation.
Usually, once you write one, you'll want to apply it to an entire range of cells.
For example, you'll have a bunch of Twitter handles in one column, and you'll want to pull follower counts for all of them.
Let's try doing that, by combining it with ARRAYFORMULA. Think about ARRAYFORMULA as a replacement for copy-paste within spreadsheets.
How it works
There’s one key to understanding ARRAYFORMULA: everything must be a range.
You can’t just run a VLOOKUP on cell A2 - you’ve got to pass the entire array (A2:A, or a subset like A2:A6).
=ARRAYFORMULA( VLOOKUP( A2:A, data!$A:$C, 3, 0))
That’s really all there is to it. You write a formula as you normally would (VLOOKUP in this case), rewrite any individual cells (A2) as ranges (A2:A), and wrap the entire thing in ARRAYFORMULA().
Let's try the question from above - pulling follower counts for a column of handles:
Let's try a tougher question, to test your newfound VLOOKUP + ARRAYFORMULA skills:
What hashtags (from column I) appeared in the first Tweets from the same list of handles above?
Write once, run everywhere. ARRAYFORMULA allows you to set a lookup across an entire column, without copying and pasting the formula into each cell - keeping your Sheets nice and clean.
3. VLOOKUP from another Sheet
One of my favorite things about Google Sheets, is that you can easily pass data across different sheets.
For example, if you have a sheet that collects form responses, you probably don't want to be mucking it up with some analysis.
But you definitely do want to analyze that form data. The IMPORTRANGE function lets you do that, guilt-free, in another sheet.
How it works
As formulas go, it's a simple one:
=IMPORTRANGE( "spreadsheet ID from URL" , "range" )
The spreadsheet ID can be pulled from the source sheet's URL, between /d/ and /edit:
docs.google.com/spreadsheets/d/ 1-nX4WJuHrTMRlDZKmWClG-Pv8sVT3QlHEd7J8xFmhlI /edit
And the range is the same as if you were pulling data from within the same sheet. For example, 'Getting Started'!A:B to pull the first two columns of the first tab in this sheet.
Diving in
Let's run back the same question, except this time answer it using data in this sheet (which contains the same data from the 'Sample Tweets' tab here):
=VLOOKUP(B11, IMPORTRANGE( ‘1sRs\_V09LAODy0Nod3xKXN98bX9s7guhReQ6wBa8Kwcw’, ‘ ‘Sample Tweets’!C:E ‘ ), 3, 0)
This is where Google Sheets separates itself from Excel.
Since any Google Sheet can import data (using IMPORTRANGE) from any other Sheet, you can run it on data from outside your current Sheet.
4. VLOOKUP on multiple criteria with ARRAYFORMULA
Often you'll wish VLOOKUP was less rigid - like when you want to match values from *two* columns instead of just one.
Instead of modifying the VLOOKUP formula itself, situations like these require getting creative.
If you want to match two columns in a lookup, you'll have to combine those two values into one - and also combine them within the range that you're looking up against.
To do this, we'll use a couple helpers: the &, and ARRAYFORMULA.
Let's try pulling the first tweet by GrowthHackers that uses the hashtag 'startups':
=VLOOKUP( B11 & C11, { ARRAYFORMULA (‘Sample Tweets’!C:C & ‘Sample Tweets’!I:I ), ‘Sample Tweets’!B:B }, 2, 0)
In the first parameter (B11 & C11), we combine the handle & hashtag to become one value: GrowthHackersstartup.
Then, in the lookup range, we combine columns C and I from 'Sample Tweets', which contain the handle and hashtag, to form the first column.
{ ARRAYFORMULA ('Sample Tweets'!C:C & 'Sample Tweets'!I:I ), 'Sample Tweets'!B:B }
Layering in the tweet text (column B) creates a two-column lookup range - so that we can pull the tweet text matching both the GrowtHackers handle and the startup hashtag.
Because VLOOKUP is so simple, it’s very easy to trick into doing thing it’s not specifically built for. You can combine multiple columns (concatenate them essentially) before running the lookup,
which will trick the formula to look for matches on both criteria.
5. Reverse VLOOKUP
In the last tab, we learned to get creative with the order and combination of our VLOOKUP columns by combining and ARRAYFORMULA.
That trick comes in handy if you're looking to pull columns that are behind your lookup column - some would call this a reverse VLOOKUP.
This lets you get around one of the formula's peskiest combinations, and be more creative with your formula writing.
Let's give it a shot, and lookup the date of AdamSinger's first tweet:
=VLOOKUP ( B11, {‘Sample Tweets’!C:C, ‘Sample Tweets’!A:A }, 2, 0)
See how you combine the range in reverse order? Just by embedding the two columns within curly braces , separated by a comma.
That allows you to perform a VLOOKUP on the range as usual - the rest of the formula is vanilla.
One of the pains with VLOOKUP is that it can only look up values left-to-right - the value you’re looking up against has to be in the leftmost column of your range.
But - what if you rearrange your, er, range? Using the handy curly braces { } in Google Sheets, you can recombine columns in an order that works with VLOOKUP.
6. HLOOKUP
What if your data lives in rows instead of columns?
This often happens when you're looking at historical data at work - where month names might live in the headers, and accounts in each row:
In this situation, VLOOKUP wouldn't be the best formula for looking up, say, the expenses in February.
Because you'd have to know exactly which column February was in, and historical columns have a habit of moving around on you without warning.
Instead, let's turn to VLOOKUPs cousin, HLOOKUP, which will let us lookup based on the column name, and pull a given row.
What it we wanted to pull the second tweet from the handle column below?
=HLOOKUP( ‘handle’, B11:G13, 3, 0)
The syntax is more or less the same as VLOOKUP - you reference:
1. a value to lookup
2. the lookup range
3. the row number to pull
Notice that, just like VLOOKUP, the index for row numbers to lookup starts at 1 - so if you want to pull the 2nd tweet in this case, it's actually 3rd in the index (because of the header row).
If your data is oriented the wrong way for VLOOKUP, you can instead use it’s close cousin, HLOOKUP.
For example, if you have dates in the header row of your Sheet, and you’re looking to pull the value in a specific row, for a specific date, you’d HLOOKUP for that date, and then pull the Nth row.
[su_divider top="no" style="dashed" divider_color="#dddddd" size="2" margin="50"]
Your turn!
It's tough to get the most out of this post without using the cheat sheet, so make sure you grab your copy here.
If you liked this post, please make sure to check us out on YouTube where we post tutorial videos like this one. | {"url":"https://www.cacheworthy.com/blog/learn-vlookup-formula-google-sheets","timestamp":"2024-11-12T04:12:44Z","content_type":"text/html","content_length":"85938","record_id":"<urn:uuid:aa1efbc9-f546-4b22-b1ad-50a9693f4edb>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00852.warc.gz"} |
Unleashing the Power of 'Reduce': Versatile Applications in Functional Programming
Unleashing the Power of 'Reduce': Versatile Applications in Functional Programming - 3 minutes read
"Reduce" is a fundamental operation in computer science and functional programming. It's a higher-order function that www.afirmaxrubbishremoval.co.uk/ combines elements of a collection, such as a
list or an array, into a single value. This operation is also known by different names in various programming languages, such as "fold," "accumulate," or "aggregate." Regardless of the name, the
purpose remains the same: to iteratively apply a binary operation to elements of a collection until it's condensed into a single value.
Here are ten examples of how "reduce" can be applied in different contexts:
Summation: One of the most common use cases for reduce is to calculate the sum of all elements in a list. For example, given a list of numbers [1, 2, 3, 4, 5], the reduce operation can be used to
compute the sum as 1 + 2 + 3 + 4 + 5 = 15.
Product: Similarly, reduce can be employed to find the product of all elements in a list. For instance, given the list [2, 3, 4, 5], the product can be computed as 2 * 3 * 4 * 5 = 120.
Concatenation: In the context of strings, reduce can concatenate all elements of a list into a single string. For instance, given the list ["hello", " ", "world"], the result would be the string
"hello world".
Maximum and Minimum: Reduce can determine the maximum or minimum value in a list. For example, given the list [4, 7, 2, 9, 5], reduce can find the maximum value (9) or the minimum value (2).
Factorial: The factorial of a non-negative integer is the product of all positive integers less than or equal to that number. Reduce can be used to compute the factorial. For instance, the factorial
of 5 can be calculated as 5 * 4 * 3 * 2 * 1 = 120.
Joining Lists: Reduce can merge multiple lists into a single list. For example, given lists [1, 2], [3, 4], and [5, 6], reduce can combine them into a single list [1, 2, 3, 4, 5, 6].
Average: Reduce can compute the average of all elements in a list by first summing them and then dividing by the number of elements. For instance, given the list [2, 4, 6, 8, 10], the average would
be (2 + 4 + 6 + 8 + 10) / 5 = 6.
Flattening Nested Lists: Reduce can flatten a list of lists, converting it into a single list. For example, given the nested list [[1, 2], [3, 4], [5, 6]], reduce can transform it into [1, 2, 3, 4,
5, 6].
Boolean Operations: Reduce can perform boolean operations such as AND or OR on a list of boolean values. For example, given the list [True, False, True, True], reduce can compute the AND operation as
True and False and True and True = False.
Custom Operations: Reduce can be customized to perform any user-defined operation on a list. This flexibility allows developers to tailor the reduce operation to suit specific requirements, making it
a versatile tool in programming.
In conclusion, the "reduce" operation is a powerful tool for condensing collections of data into a single value, and its versatility makes it applicable to a wide range of scenarios in programming.
By understanding its usage and potential applications, developers can leverage reduce to write concise and efficient code. | {"url":"https://www.maiyro.com/posts/rkrhpp7o","timestamp":"2024-11-05T23:46:12Z","content_type":"text/html","content_length":"18705","record_id":"<urn:uuid:2973a6c4-1d37-42f7-aaca-443971cdac65>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00703.warc.gz"} |
Particle in two dimensions
In two dimensions, the orbital angular acceleration is the rate at which the two-dimensional orbital angular velocity of the particle about the origin changes. The instantaneous angular velocity ω at
any point in time is given by
${\displaystyle \omega ={\frac {v_{\perp }}{r}},}$
where ${\displaystyle r}$ is the distance from the origin and ${\displaystyle v_{\perp }}$ is the cross-radial component of the instantaneous velocity (i.e. the component perpendicular to the
position vector), which by convention is positive for counter-clockwise motion and negative for clockwise motion.
Therefore, the instantaneous angular acceleration α of the particle is given by^[2]
${\displaystyle \alpha ={\frac {d}{dt}}\left({\frac {v_{\perp }}{r}}\right).}$
Expanding the right-hand-side using the product rule from differential calculus, this becomes
${\displaystyle \alpha ={\frac {1}{r}}{\frac {dv_{\perp }}{dt}}-{\frac {v_{\perp }}{r^{2}}}{\frac {dr}{dt}}.}$
In the special case where the particle undergoes circular motion about the origin, ${\displaystyle {\frac {dv_{\perp }}{dt}}}$ becomes just the tangential acceleration ${\displaystyle a_{\perp }}$ ,
and ${\displaystyle {\frac {dr}{dt}}}$ vanishes (since the distance from the origin stays constant), so the above equation simplifies to
${\displaystyle \alpha ={\frac {a_{\perp }}{r}}.}$
In two dimensions, angular acceleration is a number with plus or minus sign indicating orientation, but not pointing in a direction. The sign is conventionally taken to be positive if the angular
speed increases in the counter-clockwise direction or decreases in the clockwise direction, and the sign is taken negative if the angular speed increases in the clockwise direction or decreases in
the counter-clockwise direction. Angular acceleration then may be termed a pseudoscalar, a numerical quantity which changes sign under a parity inversion, such as inverting one axis or switching the
two axes.
Particle in three dimensions
In three dimensions, the orbital angular acceleration is the rate at which three-dimensional orbital angular velocity vector changes with time. The instantaneous angular velocity vector ${\
displaystyle {\boldsymbol {\omega }}}$ at any point in time is given by
${\displaystyle {\boldsymbol {\omega }}={\frac {\mathbf {r} \times \mathbf {v} }{r^{2}}},}$
where ${\displaystyle \mathbf {r} }$ is the particle's position vector, ${\displaystyle r}$ its distance from the origin, and ${\displaystyle \mathbf {v} }$ its velocity vector.^[2]
Therefore, the orbital angular acceleration is the vector ${\displaystyle {\boldsymbol {\alpha }}}$ defined by
${\displaystyle {\boldsymbol {\alpha }}={\frac {d}{dt}}\left({\frac {\mathbf {r} \times \mathbf {v} }{r^{2}}}\right).}$
Expanding this derivative using the product rule for cross-products and the ordinary quotient rule, one gets:
{\displaystyle {\begin{aligned}{\boldsymbol {\alpha }}&={\frac {1}{r^{2}}}\left(\mathbf {r} \times {\frac {d\mathbf {v} }{dt}}+{\frac {d\mathbf {r} }{dt}}\times \mathbf {v} \right)-{\frac {2}{r^
{3}}}{\frac {dr}{dt}}\left(\mathbf {r} \times \mathbf {v} \right)\\\\&={\frac {1}{r^{2}}}\left(\mathbf {r} \times \mathbf {a} +\mathbf {v} \times \mathbf {v} \right)-{\frac {2}{r^{3}}}{\frac {dr}
{dt}}\left(\mathbf {r} \times \mathbf {v} \right)\\\\&={\frac {\mathbf {r} \times \mathbf {a} }{r^{2}}}-{\frac {2}{r^{3}}}{\frac {dr}{dt}}\left(\mathbf {r} \times \mathbf {v} \right).\end
Since ${\displaystyle \mathbf {r} \times \mathbf {v} }$ is just ${\displaystyle r^{2}{\boldsymbol {\omega }}}$ , the second term may be rewritten as ${\displaystyle -{\frac {2}{r}}{\frac {dr}{dt}}{\
boldsymbol {\omega }}}$ . In the case where the distance ${\displaystyle r}$ of the particle from the origin does not change with time (which includes circular motion as a subcase), the second term
vanishes and the above formula simplifies to
${\displaystyle {\boldsymbol {\alpha }}={\frac {\mathbf {r} \times \mathbf {a} }{r^{2}}}.}$
From the above equation, one can recover the cross-radial acceleration in this special case as:
${\displaystyle \mathbf {a} _{\perp }={\boldsymbol {\alpha }}\times \mathbf {r} .}$
Unlike in two dimensions, the angular acceleration in three dimensions need not be associated with a change in the angular speed ${\displaystyle \omega =|{\boldsymbol {\omega }}|}$ : If the
particle's position vector "twists" in space, changing its instantaneous plane of angular displacement, the change in the direction of the angular velocity ${\displaystyle {\boldsymbol {\omega }}}$
will still produce a nonzero angular acceleration. This cannot not happen if the position vector is restricted to a fixed plane, in which case ${\displaystyle {\boldsymbol {\omega }}}$ has a fixed
direction perpendicular to the plane.
The angular acceleration vector is more properly called a pseudovector: It has three components which transform under rotations in the same way as the Cartesian coordinates of a point do, but which
do not transform like Cartesian coordinates under reflections.
Relation to torque
The net torque on a point particle is defined to be the pseudovector
${\displaystyle {\boldsymbol {\tau }}=\mathbf {r} \times \mathbf {F} ,}$
where ${\displaystyle \mathbf {F} }$ is the net force on the particle.^[3]
Torque is the rotational analogue of force: it induces change in the rotational state of a system, just as force induces change in the translational state of a system. As force on a particle is
connected to acceleration by the equation ${\displaystyle \mathbf {F} =m\mathbf {a} }$ , one may write a similar equation connecting torque on a particle to angular acceleration, though this relation
is necessarily more complicated.^[4]
First, substituting ${\displaystyle \mathbf {F} =m\mathbf {a} }$ into the above equation for torque, one gets
${\displaystyle {\boldsymbol {\tau }}=m\left(\mathbf {r} \times \mathbf {a} \right)=mr^{2}\left({\frac {\mathbf {r} \times \mathbf {a} }{r^{2}}}\right).}$
From the previous section:
${\displaystyle {\boldsymbol {\alpha }}={\frac {\mathbf {r} \times \mathbf {a} }{r^{2}}}-{\frac {2}{r}}{\frac {dr}{dt}}{\boldsymbol {\omega }},}$
where ${\displaystyle {\boldsymbol {\alpha }}}$ is orbital angular acceleration and ${\displaystyle {\boldsymbol {\omega }}}$ is orbital angular velocity. Therefore:
${\displaystyle {\boldsymbol {\tau }}=mr^{2}\left({\boldsymbol {\alpha }}+{\frac {2}{r}}{\frac {dr}{dt}}{\boldsymbol {\omega }}\right)=mr^{2}{\boldsymbol {\alpha }}+2mr{\frac {dr}{dt}}{\
boldsymbol {\omega }}.}$
In the special case of constant distance ${\displaystyle r}$ of the particle from the origin (${\displaystyle {\tfrac {dr}{dt}}=0}$ ), the second term in the above equation vanishes and the above
equation simplifies to
${\displaystyle {\boldsymbol {\tau }}=mr^{2}{\boldsymbol {\alpha }},}$
which can be interpreted as a "rotational analogue" to ${\displaystyle \mathbf {F} =m\mathbf {a} }$ , where the quantity ${\displaystyle mr^{2}}$ (known as the moment of inertia of the particle)
plays the role of the mass ${\displaystyle m}$ . However, unlike ${\displaystyle \mathbf {F} =m\mathbf {a} }$ , this equation does not apply to an arbitrary trajectory, only to a trajectory contained
within a spherical shell about the origin. | {"url":"https://www.knowpia.com/knowpedia/Angular_acceleration","timestamp":"2024-11-12T06:36:58Z","content_type":"text/html","content_length":"167608","record_id":"<urn:uuid:3a8369f2-401a-4fc6-9b1c-dc92157ee755>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00422.warc.gz"} |
How to Apply a Formula to Multiple Sheets in Excel (3 Methods) - ExcelDemy
Here we use three worksheets on the Salary of Employees for January, February, and March. We used the short form of the month names as the sheet name. Let’s use formulas that take data from these
sheets simultaneously.
Method 1 – Calculating a Sum Across Multiple Sheets
Case 1.1 – Left-Clicking on the Sheet Tab
• In a separate sheet, choose cell C5 to store the sum of the first employee’s salary.
• In cell C5, insert an equals (=) sign. Don’t press Enter yet.
• Go to the first sheet named Jan and select cell D5 of the salary.
• Add the data from other sheets using the same procedure.
• After adding all the sheets your formula bar will look like the image below.
• Press Enter.
• Drag down the Fill Handle.
Case 1.2 – Using the SUM Function
• Create a new worksheet where you want to calculate the sum results.
• Go to the worksheet named Jan and select the cell you want to add.
• Go to the last sheet of your file. We chose the Mar sheet, and now we will add the Sheets.
• Apply the following formula in cell C5 in the new sheet:
Here, the syntax SUM(‘ Jan: Mar’!D5) will add all the D5 cells of your corresponding worksheets.
• You will get the final results.
Case 1.3 – Utilizing the SUMPRODUCT Function
For this method, we’ll use three datasets of Items Sales for three months.
• Go to the a worksheet where you want to calculate the total sum.
• Copy the formula below.
=SUM(SUMPRODUCT((‘Jan1′!$B$5:$B$14=’SUMPRODUCT Function’!B5)*’Jan1′!$C$5:$C$14),SUMPRODUCT((‘Feb1′!$B$5:$B$14=’SUMPRODUCT Function’!B5)*’Feb1′!$C$5:$C$14),SUMPRODUCT((‘Mar1′!$B$5:$B$14=’SUMPRODUCT
Formula Breakdown:
SUMPRODUCT((‘Jan1′!$B$5:$B$14=’SUMPRODUCT Function’!B5)*’Jan1′!$C$5:$C$14)→ The SUMPRODUCT function takes the whole range of Jan1 as Jan1′!$B$5:$B$14 and returns TRUE for the corresponding cell value
of B5. Otherwise, it will return FALSE for cell B5 where the text is Apple. Now it starts to find the actual match from Jan1′!$C$5:$C$14 range and returns the value 70 for cell B5.
The same formula was applied to the other sheets also.
SUM(SUMPRODUCT((‘Jan1′!$B$5:$B$14=’SUMPRODUCT Function’!B5)*’Jan1′!$C$5:$C$14),SUMPRODUCT((‘Feb1′!$B$5:$B$14=’SUMPRODUCT Function’!B5)*’Feb1′!$C$5:$C$14),SUMPRODUCT((‘Mar1′!$B$5:$B$14=’SUMPRODUCT
Function’!B5)*’Mar1′!$C$5:$C$14))→ The SUM function will add the return value of the SUMPRODUCT function eventually.
• Apply the formula to the first cell in the result column.
• Press Enter.
• Use AutoFill for the other cells in the column.
Read More: How to Apply Same Formula to Multiple Cells in Excel
Method 2 – Counting Across Multiple Sheets
Let’s assume you have several datasets where the same values repeat across the tables. You want to count how many times a specific item appears in the sheets. We’ll use a List of Fruits and count how
many times the word Apple appears in our datasets.
• Pick cell C6 in a new sheet and copy the following formula inside.
C4= The searched value that you want to count.
B6= The corresponding sheet name.
B4:E13= The range of the dataset you want to count.
Formula Breakdown:
INDIRECT(“‘”&B6&”‘!”&”B4:E13”)→ It took the value of the cell B4:E13 as a reference value and returns the value in cell B6. Here B6 cell refers to sheet17.
COUNTIF(INDIRECT(“‘”&B6&”‘!”&”B4:E13”),$C$4)→ $C$4 is the cell where you inserted the value that you want to count. It took the text string Apple and count for the referred range value of the
INDIRECT function. The final output here is 12 which is the total count for the inserted text Apple in sheet17.
Note: The COUNTIF function is not a case-sensitive function.
• AutoFill to the other cells in the column.
Method 3 – Applying Formula to Lookup Values
Case 3.1 – Using the VLOOKUP Function
• Select the cell C5 and enter the following formula:
B5= The cell for whom you want to find out the corresponding value
B5:D9= The entire range of each worksheet.
Formula Breakdown:
VLOOKUP(B5,’ Jan’!$B$4:$D$9,{3}, FALSE)→ the VLOOKUP function finds the value identical to cell B5 of the Employee column. It searches into the table array of Jan worksheets ($B$4:$D$9) and then
takes the col_index_num {3} which is the Salary column. False returns the exact value from the column.
VLOOKUP(B5,’Jan’!$B$4:$D$9,{3},FALSE),VLOOKUP(B5,Feb!B5:D9,{3},FALSE),VLOOKUP(B5,Mar!B5:D9,{3},FALSE)→ This function will repeat the same formula stated above for the other sheets.
SUM(VLOOKUP(B5,’Jan’!$B$4:$D$9,{3},FALSE),VLOOKUP(B5,Feb!B5:D9,{3},FALSE),VLOOKUP(B5,Mar!B5:D9,{3},FALSE))→ The sum function will add all the value that the VLOOKUP function returns after finding
• Output→ 25000+25000+25000=75000.
• Press Enter and drag down Autofill for other cells to get the rest of the results.
Case 3.2 – Using INDEX and MATCH Functions
• Select cell C5 of your main worksheet where you want to find out the looking value and apply the following formula in it:
=INDEX(‘ Jan’!D5:D9,MATCH(‘Using INDEX and MATCH Functions’!B5,’Using INDEX and MATCH Functions’!B5:B9,0))
Formula Breakdown:
MATCH(‘Using INDEX and MATCH Functions’!B5,’ Using INDEX and MATCH Functions’!B5:B9,0)→ The MATCH function finds the location of the value from cell B5 in the current worksheet from cells B5:B9.
INDEX(‘ Jan’!D5:D9, MATCH(‘Using INDEX and MATCH Functions’!B5,’ Using INDEX and MATCH Functions’!B5:B9,0))→ Then the INDEX function evaluates the matched value for the worksheet Jan’!D5:D9 and
returns their corresponding value.
• Drag down the Fill Handle tool with the same formula for the other cells.
Read More: How to Use Multiple Excel Formulas in One Cell
Practice Section
We have provided a practice section on each sheet on the right side so you can use these methods and experiment.
Download Practice Workbook
Download the following practice workbook.
Related Articles
<< Go Back to How to Create Excel Formulas | Excel Formulas | Learn Excel
Get FREE Advanced Excel Exercises with Solutions!
We will be happy to hear your thoughts
Leave a reply | {"url":"https://www.exceldemy.com/how-to-apply-a-formula-to-multiple-sheets-in-excel/","timestamp":"2024-11-12T18:52:05Z","content_type":"text/html","content_length":"204436","record_id":"<urn:uuid:de8caa7e-d913-492c-8c55-ab59aab2e453>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00874.warc.gz"} |
Decimals Workbook for Grades 3-4 | K5 Bookstore
Summary: Math workbook with instruction and exercises introducing decimals and operations with decimals.
Format: PDF – download & print
Level: Grades 3 – 4
Pages: 40
Math Workbook: Decimals 1 (Grades 3-4)
Decimals 1 is a self-teaching math workbook about decimal numbers for grade 3 and 4 students.
Some of the topics covered are:
• decimal numbers – tenths and hundredths;
• adding and subtracting decimals;
• adding and subtracting decimals mentally
• adding & subtracting decimals in columns;
• using decimals with measuring units
• using mental math with money.
This decimals workbook is divided into 7 sections by topic. Each section begins with a bite-sized introduction to a topic with an example, followed by practice exercises. Answers are in the back. The
format is ideal for independent or parent-guided study. The Math Mammoth series of workbooks is highly recommended by K5 Learning! | {"url":"https://store.k5learning.com/decimals-1-workbook","timestamp":"2024-11-09T22:34:57Z","content_type":"text/html","content_length":"380239","record_id":"<urn:uuid:743f8ce6-53cc-499c-b61b-7ebcb1070ab9>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00446.warc.gz"} |
Porting My Game Code To Mac OS
in Game tech
Porting My Game Code To Mac OS
I’m still working on prototypes. I’ve spent the last six months going through different game ideas and working on prototype after prototype. Along the way I’ve made over 20 different prototypes on
iPad or iPhone. I’m sure a lot of those prototypes would have made decent iOS games, but I wasn’t particularly excited to develop them all the way and take them to completion. I’m looking for
something that’s both very interesting to develop and something I can proudly point to after it ships.
Right now I have a couple of game ideas in hand that I’m pretty excited about. I prototyped one of those, but I quickly realized it might be a game better suited for desktops rather than iOS, so I
decided to write my next prototype for that game on the Mac. Maybe another day I’ll write a post about prototypes, but what I want to write about today is my experience moving over my prototyping
code to the Mac.I’ve been working on MacOS since 2007, when I switched because of iPhone development. For the most part, I’ve been loving it as an OS, but I never actually targeted it as a platform
in my development. I figured it wouldn’t be very hard at all to compile my simple C/C++ + OpenGL code that I use as the basis for all my iOS games on MacOS. After all, that code started on Windows
and DirectX, and porting it to iOS was pretty quick and easy. On top of that, I can use Xcode 4 for Mac, just like iOS (I’m not sure that counts as a plus though). How card can it possibly be?
The answer is, not hard at all, as long as you’re aware of a couple gotchas along the way.
Xcode project
I figured I would start from scratch, so I created a new Xcode project targeting MacOS. First surprise was that, unlike iOS, there wasn’t a template to create an OpenGL application. Instead, I need
to create an empty Cocoa application and set things up by myself. No big deal.
I used the samples from Apple to get me started. I could have grabbed those and started adding stuff, but for some reason they have all the logic built into the OpenGL views, and I like to keep my
views as simple as possible: they hold the surface to render, and they collect input and other window events. Done. So I had to do some moving around of code and responsibilities, which at the same
time allowed me to make it more like the structure I used on iOS.
The AppDelegate does most of the initialization and creation on applicationDidFinishLaunching, and the OpenGL view gets created through the minimal nib file (with just a menu, window, and a view).
AppDelegate is also responsible for setting up the display link callback, and then all the rest of the execution happens in my own code, away from those classes.
That was all pretty smooth. The only weird gotcha I ran into is that, unlike with previous Xcode 4 projects on iOS, I was no able to add any files to it that were not under the project root. They
would turn red and not be available in the project. I tried adding them in every possible way and always got the same result. So, in the end, I gave in and just moved my External dir under the same
directory as the Xcode project. Some people on Twitter had the same experience, while others didn’t, so there’s something fishy going on in there.
I have all my game static libraries (I refuse to call it an “engine”) as a separate Xcode project. I was able to add the project to the workspace and just change the target to be MacOS, and
everything worked without a hitch.
I knew I was going to have to make some changes to the graphics code because all my iOS code uses OpenGL ES 2.0, and the Mac supports OpenGL “non ES” version. But I never did any graphics programming
on the Mac, so I had no idea what to expect.
I decided to start with this sample code from Apple. After all, it seemed to be exactly what I was looking for: A simple project that sets up OpenGL both on iOS and MacOS. Score!
Once I started digging into it, I realized that it wasn’t going to be as straightforward as I realized. It turns out there were lots and lots of differences between OpenGL ES 2.0 and the OpenGL
version on the Mac. Some were easy to catch, like different syntax for the shaders (“varying” became “in”, “uniform” was “out”, etc). Some were a pain to track down (needed to bind some Vertex Array
Objects, and I was forced to use Vertex Buffers).
The documentation wasn’t clear on this point at all, but fortunately I got some good tips from Twitter. It turns out that the sample I used as a starting point created an OpenGL 3.2 context, and
OpenGL 3.2 is very different from OpenGL ES 2.0.
NSOpenGLPixelFormatAttribute attrs[] =
NSOpenGLPFADepthSize, 24,
NSOpenGLPixelFormat* pf = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
NSOpenGLContext* context = [[NSOpenGLContext alloc] initWithFormat:pf shareContext:nil];
int swapInt = 1;
[context setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
It turns out that leaving out the last attribute (NSOpenGLPFAOpenGLProfile) creates an OpenGL 2.1 profile, which is very similar to OpenGL ES 2.0 (GLSL 1.2 is the shader version that OpenGL 2.1 uses
and that’s also very similar to GLSL ES). With that change, all the OpenGL code worked perfectly, with the only exception of the precision modifiers in shaders. Fortunately, I we can wrap those up in
a simple #ifdef like this
#ifdef GL_ES
precision lowp float;
Unit tests
Apart from input, which will require a total re-work, the last major thing I had left was running unit tests after each build. I’ve seen references to Xcode’s unit testing functionality, but I was
hoping to reuse UnitTest++ and run it like I do on iOS projects. It turns out it was even easier because you can run them natively without the extra step of the simulator like on iOS projects.
All I had to do was create a new phase in the Debug scheme to run a script. The “script” is just the name of the executable that was just build with the -unittest parameter added to it. I suspect
that there’s probably a “better” way than hardcoding the .app/Contents/MacOS/ string in the middle to find the binary itself, so if someone knows the correct variable that includes that, let me know.
The only other thing that came up is that apparently Xcode adds the parameter -NSDocumentRevisionsDebugMode when you debug it from the IDE. No big deal, unless you have your program exit with an
error if you pass an unrecognized parameter. I just made it accept and ignore that parameter, and everything worked like a charm.
It took a bit longer than I expected, but I finally have my code base running on MacOS. Iteration time is faster because it doesn’t even launch the simulator, and I suspect graphics performance will
also be a lot faster (the iOS simulator does all the graphics processing in software), so I might end up doing more prototyping on the Mac unless the touch input is an integral part of the game.
1. My shader loader for Mac adds the following to the top, which basically makes things interoperable:#version 120#define lowp#define mediump#define highp#define texture2DLodEXT texture2DLod
2. You say you have to do an extra step to get your unittests to run on the iOS simulator? Mind sharing exactly what those steps are? I’m actually doing the opposite of you, I’m porting from OS X to
iOS and I couldn’t figure out a way to get the simulator to automatically run my unittests.
3. Bit late, but using tricks like the one +Patrick Hogan suggests, or Cocos2D/Kobold2D the OpenGL code doesn’t have to be much different: shaders can be pretty much the same.
I love Mac’s OpenGL Shader Builder app, which used to be under /Developer/Graphics Tools or similar, but on Mountain Lion XCode 4.2 moved to being a downloadable extra.
It lets you create shaders from a template then try them out on the fly, even hooking up uniforms to animate between bounds.
4. I can’t use E-mail send flowers, each point of achievement will flash back, this kind of situation lasted for half a month, let me very upset, please help to repair | {"url":"https://gamesfromwithin.com/porting-my-game-code-to-mac-os","timestamp":"2024-11-10T12:46:39Z","content_type":"text/html","content_length":"35978","record_id":"<urn:uuid:4b6f5c1a-cd68-400a-a2ba-445dbb1bc6fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00496.warc.gz"} |
A container, opened from the top and made up of a metal sheet, is in the form of a frustum of a cone of height 16 cm with radii of its lower and upper ends as 8 cm and 20 cm, respectively. Find the cost of the milk which can completely fill the container, at the rate of ` 20 per litre. Also find the cost of metal sheet used to make the container, if it costs ` 8 per 100 cm2. (Take π = 3.14)
You must login to ask question.
NCERT Solutions for Class 10 Maths Chapter 13
Important NCERT Questions
Surface areas and Volumes,
NCERT Books for Session 2022-2023
CBSE Board and UP Board Others state Board
EXERCISE 13.4
Page No:257
Questions No:4 | {"url":"https://discussion.tiwariacademy.com/question/a-container-opened-from-the-top-and-made-up-of-a-metal-sheet-is-in-the-form-of-a-frustum-of-a-cone-of-height-16-cm-with-radii-of-its-lower-and-upper-ends-as-8-cm-and-20-cm-respectively-find-the-co/","timestamp":"2024-11-11T20:44:37Z","content_type":"text/html","content_length":"165116","record_id":"<urn:uuid:c47cea35-c295-4d5f-9776-8944219a95e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00666.warc.gz"} |
Library Coq.ssr.ssreflect
This file is the Gallina part of the ssreflect plugin implementation. Files that use the ssreflect plugin should always Require ssreflect and either Import ssreflect or Import ssreflect.SsrSyntax.
Part of the contents of this file is technical and will only interest advanced developers; in addition the following are defined: [the str of v by f] == the Canonical s : str such that f s = v. [the
str of v] == the Canonical s : str that coerces to v. argumentType c == the T such that c : forall x : T, P x. returnType c == the R such that c : T -> R. {type of c for s} == P s where c : forall x
: T, P x. nonPropType == an interface for non-Prop Types: a nonPropType coerces to a Type, and only types that do
have sort Prop are canonical nonPropType instances. This is useful for applied views (see mid-file comment). notProp T == the nonPropType instance for type T. phantom T v == singleton type with
inhabitant Phantom T v. phant T == singleton type with inhabitant Phant v. =^~ r == the converse of rewriting rule r (e.g., in a rewrite multirule). unkeyed t == t, but treated as an unkeyed matching
pattern by the ssreflect matching algorithm. nosimpl t == t, but on the right-hand side of Definition C := nosimpl disables expansion of C by /=. locked t == t, but locked t is not convertible to t.
locked_with k t == t, but not convertible to t or locked_with k' t unless k = k' (with k : unit). Coq type-checking will be much more efficient if locked_with with a bespoke k is used for sealed
definitions. unlockable v == interface for sealed constant definitions of v. Unlockable def == the unlockable that registers def : C = v. [unlockable of C] == a clone for C of the canonical
unlockable for the definition of C (e.g., if it uses locked_with). [unlockable fun C] == [unlockable of C] with the expansion forced to be an explicit lambda expression.
• > The usage pattern for ADT operations is: Definition foo_def x1 .. xn := big_foo_expression. Fact foo_key : unit. Proof. by [ ]. Qed. Definition foo := locked_with foo_key foo_def. Canonical
foo_unlockable := [unlockable fun foo]. This minimizes the comparison overhead for foo, while still allowing rewrite unlock to expose big_foo_expression.
Additionally we provide default intro pattern ltac views:
• top of the stack actions: => /apply := => hyp {}/hyp => /swap := => x y; move: y x (also swap and perserves let bindings) => /dup := => x; have copy := x; move: copy x (also copies and preserves
let bindings)
More information about these definitions and their use can be found in the ssreflect manual, and in specific comments below.
Declare Ssr keywords: 'is' 'of' '//' '/=' and '//='. We also declare the parsing level 8, as a workaround for a notation grammar factoring problem. Arguments of application-style notations (at level
10) should be declared at level 8 rather than 9 or the camlp5 grammar will not factor properly.
Reserved Notation "(* x 'is' y 'of' z 'isn't' // /= //= *)" (at level 8).
Reserved Notation "(* 69 *)" (at level 69).
Non ambiguous keyword to check if the SsrSyntax module is imported
Reserved Notation
"(* Use to test if 'SsrSyntax_is_Imported' *)" (
at level
Reserved Notation
"<hidden n >" (
at level
n at level
"<hidden n >").
Reserved Notation
"T (* n *)" (
at level
"T (* n *)").
End SsrSyntax
Export SsrMatchingSyntax
Export SsrSyntax
Save primitive notation that will be overloaded.
Reserve notation that introduced in this file.
Reserved Notation "'if' c 'then' vT 'else' vF" (at level 200,
c, vT, vF at level 200).
Reserved Notation "'if' c 'return' R 'then' vT 'else' vF" (at level 200,
c, R, vT, vF at level 200).
Reserved Notation "'if' c 'as' x 'return' R 'then' vT 'else' vF" (at level 200,
c, R, vT, vF at level 200, x name).
Reserved Notation "x : T" (at level 100, right associativity,
format "'[hv' x '/ ' : T ']'").
Reserved Notation "T : 'Type'" (at level 100, format "T : 'Type'").
Reserved Notation "P : 'Prop'" (at level 100, format "P : 'Prop'").
Reserved Notation "[ 'the' sT 'of' v 'by' f ]" (at level 0,
format "[ 'the' sT 'of' v 'by' f ]").
Reserved Notation "[ 'the' sT 'of' v ]" (at level 0,
format "[ 'the' sT 'of' v ]").
Reserved Notation "{ 'type' 'of' c 'for' s }" (at level 0,
format "{ 'type' 'of' c 'for' s }").
Reserved Notation "=^~ r" (at level 100, format "=^~ r").
Reserved Notation "[ 'unlockable' 'of' C ]" (at level 0,
format "[ 'unlockable' 'of' C ]").
Reserved Notation "[ 'unlockable' 'fun' C ]" (at level 0,
format "[ 'unlockable' 'fun' C ]").
To define notations for tactic in intro patterns. When "=> /t" is parsed, "t:
Delimit Scope ssripat_scope with ssripat.
Make the general "if" into a notation, so that we can override it below. The notations are "only parsing" because the Coq decompiler will not recognize the expansion of the boolean if; using the
default printer avoids a spurious trailing %GEN_IF.
Delimit Scope general_if_scope with GEN_IF
Notation "
'if' c 'then' vT 'else' vF" :=
CoqGenericIf c vT vF
) (
only parsing
) :
Notation "
'if' c 'return' R 'then' vT 'else' vF" :=
CoqGenericDependentIf c c R vT vF
) (
only parsing
) :
Notation "
'if' c 'as' x 'return' R 'then' vT 'else' vF" :=
CoqGenericDependentIf c x R vT vF
) (
only parsing
) :
Force boolean interpretation of simple if expressions.
Delimit Scope boolean_if_scope with BOOL_IF
Notation "
'if' c 'return' R 'then' vT 'else' vF" :=
if c is true as c in bool return R then vT else vF
) :
Notation "
'if' c 'then' vT 'else' vF" :=
if c
bool is true as _ in bool return _ then vT else vF
) :
Notation "
'if' c 'as' x 'return' R 'then' vT 'else' vF" :=
if c
bool is true as x in bool return R then vT else vF
) :
Open Scope boolean_if_scope
To allow a wider variety of notations without reserving a large number of of identifiers, the ssreflect library systematically uses "forms" to enclose complex mixfix syntax. A "form" is simply a
mixfix expression enclosed in square brackets and introduced by a keyword: [keyword ... ] Because the keyword follows a bracket it does not need to be reserved. Non-ssreflect libraries that do not
respect the form syntax (e.g., the Coq Lists library) should be loaded before ssreflect so that their notations do not mask all ssreflect forms.
Delimit Scope form_scope with FORM.
Open Scope form_scope.
Allow overloading of the cast (x : T) syntax, put whitespace around the ":" symbol to avoid lexical clashes (and for consistency with the parsing precedence of the notation, which binds less tightly
than application), and put printing boxes that print the type of a long definition on a separate line rather than force-fit it at the right margin.
Notation "
x : T" := (
CoqCast x T
) :
Allow the casual use of notations like nat * nat for explicit Type declarations. Note that (nat * nat : Type) is NOT equivalent to (nat * nat)%type, whose inferred type is legacy type "Set".
Notation "
T : 'Type'" := (
CoqCast T
type Type
) (
only parsing
) :
Allow similarly Prop annotation for, e.g., rewrite multirules.
Notation "
P : 'Prop'" := (
CoqCast P
type Prop
) (
only parsing
) :
Constants for abstract: and [: name ] intro pattern
Constants for tactic-views
Syntax for referring to canonical structures: [the struct_type of proj_val by proj_fun] This form denotes the Canonical instance s of the Structure type struct_type whose proj_fun projection is
proj_val, i.e., such that proj_fun s = proj_val. Typically proj_fun will be A record field accessors of struct_type, but this need not be the case; it can be, for instance, a field of a record type
to which struct_type coerces; proj_val will likewise be coerced to the return type of proj_fun. In all but the simplest cases, proj_fun should be eta-expanded to allow for the insertion of implicit
arguments. In the common case where proj_fun itself is a coercion, the "by" part can be omitted entirely; in this case it is inferred by casting s to the inferred type of proj_val. Obviously the
latter can be fixed by using an explicit cast on proj_val, and it is highly recommended to do so when the return type intended for proj_fun is "Type", as the type inferred for proj_val may vary
because of sort polymorphism (it could be Set or Prop). Note when using the [the _ of _ ] form to generate a substructure from a telescopes-style canonical hierarchy (implementing inheritance with
coercions), one should always project or coerce the value to the BASE structure, because Coq will only find a Canonical derived structure for the Canonical base structure -- not for a base structure
that is specific to proj_value.
Module TheCanonical
Variant put vT sT
v1 v2
) (
) :=
Definition get vT sT v s
: @
put vT sT v v s
) :=
Put _ _ _
p in s
Definition get_by vT sT of sT -> vT
:= @
get vT sT
End TheCanonical
Import TheCanonical
Notation "
[ 'the' sT 'of' v 'by' f ]" :=
get_by _ sT f _ _
fun v'
) =>
Put v'
f s
v _
only parsing
) :
Notation "
[ 'the' sT 'of' v ]" := (
fun s
Put v s s
only parsing
) :
The following are "format only" versions of the above notations. We need to do this to prevent the formatter from being be thrown off by application collapsing, coercion insertion and beta reduction
in the right hand side of the notations above.
Notation "
[ 'the' sT 'of' v 'by' f ]" := (@
get_by _ sT f v _ _
only printing
) :
Notation "
[ 'the' sT 'of' v ]" := (@
get _ sT v _ _
only printing
) :
We would like to recognize Notation " [ 'the' sT 'of' v : 'Type' ]" := (@get Type sT v _ ) (at level 0, format " [ 'the' sT 'of' v : 'Type' ]") : form_scope.
Helper notation for canonical structure inheritance support. This is a workaround for the poor interaction between delta reduction and canonical projections in Coq's unification algorithm, by which
transparent definitions hide canonical instances, i.e., in Canonical a_type_struct := @Struct a_type ... Definition my_type := a_type. my_type doesn't effectively inherit the struct structure from
a_type. Our solution is to redeclare the instance as follows Canonical my_type_struct := Eval hnf in [struct of my_type]. The special notation [str of _ ] must be defined for each Strucure "str" with
constructor "Str", typically as follows Definition clone_str s := let: Str _ x y ... z := s return {type of Str for s} -> str in fun k => k _ x y ... z. Notation " [ 'str' 'of' T 'for' s ]" :=
(@clone_str s (@Str T)) (at level 0, format " [ 'str' 'of' T 'for' s ]") : form_scope. Notation " [ 'str' 'of' T ]" := (repack_str (fun x => @Str T x)) (at level 0, format " [ 'str' 'of' T ]") :
form_scope. The notation for the match return predicate is defined below; the eta expansion in the second form serves both to distinguish it from the first and to avoid the delta reduction problem.
There are several variations on the notation and the definition of the the "clone" function, for telescopes, mixin classes, and join (multiple inheritance) classes. We describe a different idiom for
clones in ssrfun; it uses phantom types (see below) and static unification; see fintype and ssralg for examples.
A generic "phantom" type (actually, a unit type with a phantom parameter). This type can be used for type definitions that require some Structure on one of their parameters, to allow Coq to infer
said structure so it does not have to be supplied explicitly or via the " [the _ of _ ]" notation (the latter interacts poorly with other Notation). The definition of a (co)inductive type with a
parameter p : p_type, that needs to use the operations of a structure Structure p_str : Type := p_Str {p_repr :> p_type; p_op : p_repr -> ...} should be given as Inductive indt_type (p : p_str) :=
Indt ... . Definition indt_of (p : p_str) & phantom p_type p := indt_type p. Notation "{ 'indt' p }" := (indt_of (Phantom p)). Definition indt p x y ... z : {indt p} := @Indt p x y ... z. Notation "
[ 'indt' x y ... z ]" := (indt x y ... z). That is, the concrete type and its constructor should be shadowed by definitions that use a phantom argument to infer and display the true value of p (in
practice, the "indt" constructor often performs additional functions, like "locking" the representation -- see below). We also define a simpler version ("phant" / "Phant") of phantom for the common
case where p_type is Type.
Internal tagging used by the implementation of the ssreflect elim.
The ssreflect idiom for a non-keyed pattern:
• unkeyed t will match any subterm that unifies with t, regardless of whether it displays the same head symbol as t.
• unkeyed t a b will match any application of a term f unifying with t, to two arguments unifying with with a and b, respectively, regardless of apparent head symbols.
• unkeyed x where x is a variable will match any subterm with the same type as x (when x would raise the 'indeterminate pattern' error).
Ssreflect converse rewrite rule rule idiom.
Term tagging (user-level). The ssreflect library uses four strengths of term tagging to restrict convertibility during type checking: nosimpl t simplifies to t EXCEPT in a definition; more precisely,
given Definition foo := nosimpl bar, foo (or foo t') will NOT be expanded by the /= and //= switches unless it is in a forcing context (e.g., in match foo t' with ... end, foo t' will be reduced if
this allows the match to be reduced). Note that nosimpl bar is simply notation for a a term that beta-iota reduces to bar; hence rewrite /foo will replace foo by bar, and rewrite -/foo will replace
bar by foo. CAVEAT: nosimpl should not be used inside a Section, because the end of section "cooking" removes the iota redex. locked t is provably equal to t, but is not convertible to t; 'locked'
provides support for selective rewriting, via the lock t : t = locked t Lemma, and the ssreflect unlock tactic. locked_with k t is equal but not convertible to t, much like locked t, but supports
explicit tagging with a value k : unit. This is used to mitigate a flaw in the term comparison heuristic of the Coq kernel, which treats all terms of the form locked t as equal and compares their
arguments recursively, leading to an exponential blowup of comparison. For this reason locked_with should be used rather than locked when defining ADT operations. The unlock tactic does not support
locked_with but the unlock rewrite rule does, via the unlockable interface. we also use Module Type ascription to create truly opaque constants, because simple expansion of constants to reveal an
unreducible term doubles the time complexity of a negative comparison. Such opaque constants can be expanded generically with the unlock rewrite rule. See the definition of card and subset in fintype
for examples of this.
Needed for locked predicates, in particular for eqType's.
The basic closing tactic "done".
Ltac done
case not_locked_false_eq_true
match goal with H
~ _
case H
Quicker done tactic not including split, syntax: /0/
Ltac ssrdone0
case not_locked_false_eq_true
match goal with H
~ _
case H
To unlock opaque constants.
Generic keyed constant locking.
The argument order ensures that k is always compared before T.
This can be used as a cheap alternative to cloning the unlockable instance below, but with caution as unkeyed matching can be expensive.
Intensionaly, this instance will not apply to locked u.
More accurate variant of unlock, and safer alternative to locked_withE.
The internal lemmas for the have tactics.
Internal N-ary congruence lemmas for the congr tactic.
View lemmas that don't use reflection.
To focus non-ssreflect tactics on a subterm, eg vm_compute. Usage: elim/abstract_context: (pattern) => G defG. vm_compute; rewrite {}defG {G}. Note that vm_cast are not stored in the proof term for
reductions occurring in the context, hence set here := pattern; vm_compute in (value of here) blows up at Qed time.
Closing rewrite rule
Closing tactic
Convenience rewrite rule to unprotect evars, e.g., to instantiate them in another way than with reflexivity.
An interface for non-Prop types; used to avoid improper instantiation of polymorphic lemmas with on-demand implicits when they are used as views. For example: Some_inj {T} : forall x y : T, Some x =
Some y -> x = y. Using move/Some_inj on a goal of the form Some n = Some 0 will fail: SSReflect will interpret the view as @Some_inj ?T top_assumption since this is the well-typed application of the
view with the minimal number of inserted evars (taking ?T := Some n = Some 0), and then will later complain that it cannot erase top_assumption after having abstracted the viewed assumption. Making x
and y maximal implicits would avoid this and force the intended @Some_inj nat x y top_assumption interpretation, but is undesirable as it makes it harder to use Some_inj with the many SSReflect and
MathComp lemmas that have an injectivity premise. Specifying {T : nonPropType} solves this more elegantly, as then (?T : Type) no longer unifies with (Some n = Some 0), which has sort Prop.
Implementation notes: We rely on three interface Structures:
• test_of r, the middle structure, performs the actual check: it has two canonical instances whose 'condition' projection are maybeProj (?P : Prop) and tt, and which set r := true and r := false,
respectively. Unifying condition (?t : test_of ?r) with maybeProj T will thus set ?r to true if T is in Prop as the test_Prop T instance will apply, and otherwise simplify maybeProp T to tt and
use the test_negative instance and set ?r to false.
• call_of c r sets up a call to test_of on condition c with expected result r. It has a default instance for its 'callee' projection to Type, which sets c := maybeProj T and r := false when
unifying with a type T.
• type is a telescope on call_of c r, which checks that unifying test_of ?r1 with c indeed sets ?r1 := r; the type structure bundles the 'test' instance and its 'result' value along with its
call_of c r projection. The default instance essentially provides eta-expansion for 'type'. This is only essential for the first 'result' projection to bool; using the instance for other
projection merely avoids spurious delta expansions that would spoil the notProp T notation.
In detail, unifying T =~= ?S with ?S : nonPropType, i.e., (1) T =~= @callee (@condition (result ?S) (test ?S)) (result ?S) (frame ?S) first uses the default call instance with ?T := T to reduce (1)
to (2a) @condition (result ?S) (test ?S) =~= maybeProp T (3) result ?S =~= false (4) frame ?S =~= call T along with some trivial universe-related checks which are irrelevant here. Then the
unification tries to use the test_Prop instance to reduce (2a) to (6a) result ?S =~= true (7a) ?P =~= T with ?P : Prop (8a) test ?S =~= test_Prop ?P Now the default 'check' instance with ?result :=
true resolves (6a) as (9a) ?S := @check true ?test ?frame Then (7a) can be solved precisely if T has sort at most (hence exactly) Prop, and then (8a) is solved by the check instance, yielding ?test :
= test_Prop T, and completing the solution of (2a), and
to it. But now (3) is inconsistent with (9a), and this makes the entire problem (1) fails. If on the othe hand T does not have sort Prop then (7a) fails and the unification resorts to delta expanding
(2a), which gives (2b) @condition (result ?S) (test ?S) =~= tt which is then reduced, using the test_negative instance, to (6b) result ?S =~= false (8b) test ?S =~= test_negative Both are solved
using the check default instance, as in the (2a) branch, giving (9b) ?S := @check false test_negative ?frame Then (3) and (4) are similarly soved using check, giving the final assignment (9) ?S :=
notProp T Observe that we
perform the actual test unification on the arguments of the initial canonical instance, and not on the instance itself as we do in mathcomp/matrix and mathcomp/vector, because we want the unification
to fail when T has sort Prop. If both the test_of
the result check unifications were done as part of the structure telescope then the latter would be a sub-problem of the former, and thus failing the check would merely make the test_of unification
backtrack and delta-expand and we would not get failure.
Structure call_of
) (
) :=
Definition maybeProp
) :=
Definition call T
maybeProp T
false T
Structure test_of
) :=
Definition test_Prop
) :=
Test true
maybeProp P
Definition test_negative
Test false tt
Structure type
test_of result
call_of test result
Definition check result test frame
:= @
Check result test frame
Module Exports
Canonical call
Canonical test_Prop
Canonical test_negative
Canonical check
Notation nonPropType
Coercion callee : call_of >-> Sortclass
Coercion frame : type >-> call_of
Notation notProp T
:= (@
check false test_negative
call T
End Exports
End NonPropType
Export NonPropType.Exports
Module Export ipat
Notation "
'[' 'apply' ']'" := (
let f
in move
at level
only parsing
) :
Notation "
'[' 'swap' ']'" := (
let x
lazymatch goal with
| |-
fresh x
| |-
let x
_ in _
fresh x
end in intro x
let y
lazymatch goal with
| |-
fresh y
| |-
let y
_ in _
fresh y
end in intro y
revert x
revert y
at level
only parsing
) :
Notation "
'[' 'dup' ']'" := (
lazymatch goal with
| |-
let x
fresh x in intro x
let copy
fresh x in have copy
revert x
revert copy
| |-
let x
_ in _
let x
fresh x in intro x
let copy
fresh x in pose copy
unfold x in
value of copy
revert x
revert copy
| |-
let x
in move
let copy
in have copy
revert x
revert copy end
at level
only parsing
) :
End ipat | {"url":"https://coq.inria.fr/doc/v8.13/stdlib/Coq.ssr.ssreflect.html","timestamp":"2024-11-14T13:48:21Z","content_type":"application/xhtml+xml","content_length":"114889","record_id":"<urn:uuid:6eb81ed5-d2e4-4acb-9b39-b918e0177a0f>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00413.warc.gz"} |
Earned Value Management
Earned Value Management (EVM) is a project management technique used to measure project performance and progress in an objective manner. It integrates project scope, time (schedule), and cost
parameters to provide accurate forecasts of project performance issues.
Here are the key components and common equations used in EVM:
Key Components
1. Planned Value (PV): The authorized budget assigned to scheduled work.
2. Earned Value (EV): The value of work actually performed expressed in terms of the approved budget for that work.
3. Actual Cost (AC): The actual cost incurred for the work performed on an activity during a specific time period.
4. Budget at Completion (BAC): The total budget for the project.
Common EVM Equations
1. Cost Variance (CV):
\[CV = EV - AC\]
□ Positive CV indicates under budget.
□ Negative CV indicates over budget.
2. Schedule Variance (SV):
\[SV = EV - PV\]
□ Positive SV indicates ahead of schedule.
□ Negative SV indicates behind schedule.
3. Cost Performance Index (CPI):
\[CPI = \frac{EV}{AC}\]
□ CPI > 1 indicates cost efficiency.
□ CPI < 1 indicates cost inefficiency.
4. Schedule Performance Index (SPI):
\[SPI = \frac{EV}{PV}\]
□ SPI > 1 indicates schedule efficiency.
□ SPI < 1 indicates schedule inefficiency.
First, load the package:
Then set the BAC, schedule, and current time period for a toy project.
Calculate the PV and print the results:
pv <- pv(bac, schedule, time_period)
cat("Planned Value (PV):", pv, "\n")
#> Planned Value (PV): 40000
Set the actual % complete and calculate the EV:
actual_per_complete <- 0.35
ev <- ev(bac, actual_per_complete)
cat("Earned Value (EV):", ev, "\n")
#> Earned Value (EV): 35000
Set the actual costs and current time period and calculate the AC to date:
actual_costs <- c(9000, 18000, 36000, 70000, 100000)
time_period <- 3
ac <- ac(actual_costs, time_period)
cat("Actual Cost (AC):", ac, "\n")
#> Actual Cost (AC): 36000
Calculate the SV and CV and print the results:
sv <- sv(ev, pv)
cat("Schedule Variance (SV):", sv, "\n")
#> Schedule Variance (SV): -5000
cv <- cv(ev, ac)
cat("Cost Variance (CV):", cv, "\n")
#> Cost Variance (CV): -1000
Calculate the SPI and CPI and print the results: | {"url":"https://cran.r-project.org/web/packages/PRA/vignettes/evm.html","timestamp":"2024-11-02T23:43:14Z","content_type":"text/html","content_length":"16629","record_id":"<urn:uuid:1de8df4c-eb32-463f-b7a5-9db32eeca58d>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00033.warc.gz"} |
the Montignac Method
What is the GI of 0
Soy sauce (unsweetened)
What is the GI of 0
Beef (steak, etc.)
What is the GI of 15
Bran (oat, wheat...)
What is the GI of 15
Tofu, soybean curd
What is the GI of 15
Pistachio, green almond
What is the GI of 15
Sorrel, spinach dock
What is the GI of 20
Soy yogurt (unflavored)
What is the GI of 20
Jam, Montignac sugarless
What is the GI of 25
Raspberry (fresh fruit)
What is the GI of 25
Hummus, homus, humus
What is the GI of 25
Cashew nut, acajou
What is the GI of 25
Mung beans, moong dal
What is the GI of 30
Quark, curd cheese
What is the GI of 30
Pears (fresh fruit)
What is the GI of 30
Oat milk (uncooked)
What is the GI of 35
Oranges (fresh fruit)
What is the GI of 35
Apple (fresh fruit)
What is the GI of 35
Wasa™ fiber (24%)
What is the GI of 35
Falafel (chick peas)
What is the GI of 35
Nectarines (fresh fruit)
What is the GI of 35
Linum, sesame (seeds)
What is the GI of 35
Chick pea flour
What is the GI of 35
Peaches (fresh fruit)
What is the GI of 35
Quince (fresh fruit)
What is the GI of 35
Green peas (fresh)
What is the GI of 35
Peas (green, fresh)
What is the GI of 35
Mustard, Dijon type
What is the GI of 35
Quinoa, cooked al dente
What is the GI of 35
Apricots (fresh fruit)
What is the GI of 40
Pepino dulce, melon pear
What is the GI of 40
Ravioli (hard wheat)
What is the GI of 40
Oat flakes (uncooked)
What is the GI of 40
Falafel (fava beans)
What is the GI of 40
Kamut, Egyptian wheat
What is the GI of 40
Yam, tropical yam
What is the GI of 45
Farro flour (integral)
What is the GI of 45
Kamut flour (integral)
What is the GI of 45
Green peas (tin/can)
What is the GI of 45
Rice, brown basmati
What is the GI of 50
Mango (fresh fruit)
What is the GI of 50
Bulgur wheat (cooked)
What is the GI of 50
Muesli (no sweet)
What is the GI of 50
Kiwifruit, monkey peach
What is the GI of 50
Wasa™ light rye
What is the GI of 50
Litchi (fresh fruit)
What is the GI of 50
Pineapple (fresh fruit)
What is the GI of 50
Macaronis (durum wheat)
What is the GI of 50
Rice, brown, unpolished
What is the GI of 50
Pasta, whole wheat pasta
What is the GI of 55
Mustard (sugar added)
What is the GI of 55
Japanese plum, loquat
What is the GI of 55
Spaghetti (well cooked)
What is the GI of 55
Corn, sweet corn
What is the GI of 60
Potato chips, crisps
What is the GI of 60
Lasagna (hard wheat)
What is the GI of 60
Papaya (fresh fruit)
What is the GI of 65
Beet, beetroot (cooked)
What is the GI of 65
Raisins (red and golden)
What is the GI of 65
Rye bread (30% of rye)
What is the GI of 65
Pain au chocolat
What is the GI of 65
Jam (with sugar added)
What is the GI of 65
Corn, on or off the cob
What is the GI of 65
Marmalade (with sugar)
What is the GI of 70
Pop corn (without sugar)
What is the GI of 70
Noodles (tender wheat)
What is the GI of 70
Baguette white bread
What is the GI of 70
Ravioli (soft wheat)
What is the GI of 70
Sugar, whole brown
What is the GI of 70
Potatoes, pealed boiled
What is the GI of 75
Lasagna (soft wheat)
What is the GI of 75
Waffle (with sugar)
What is the GI of 75
Bread, white sandwich
What is the GI of 85
Maizena (corn starch)
What is the GI of 85
Rice milk (with sugar)
What is the GI of 85
Wheat flour, white
What is the GI of 90
Bread, gluten-free white
What is the GI of 95
Potato flour (starch)
What is the GI of 95
Potatoes, oven cooked
What is the GI of 100
Wheat syrup, rice syrup
Discover the whole range of Montignac products on our online store. | {"url":"http://montignac.tv/","timestamp":"2024-11-06T18:36:09Z","content_type":"application/xhtml+xml","content_length":"101300","record_id":"<urn:uuid:c4074274-7ea1-40e0-8ba7-5daf2a47c7c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00124.warc.gz"} |
Adaptive Probabilistic Neuro-Fuzzy System and its Hybrid Learning in Medical Diagnostics Task
All published articles of this journal are available on ScienceDirect.
Adaptive Probabilistic Neuro-Fuzzy System and its Hybrid Learning in Medical Diagnostics Task
The medical diagnostic task in conditions of the limited dataset and overlapping classes is considered. Such limitations happen quite often in real-world tasks. The lack of long training datasets
during solving real tasks in the problem of medical diagnostics causes not being able to use the mathematical apparatus of deep learning. Additionally, considering other factors, such as in a
dataset, classes can be overlapped in the feature space; also data can be specified in various scales: in the numerical interval, numerical ratios, ordinal (rank), nominal and binary, which does not
allow the use of known neural networks. In order to overcome arising restrictions and problems, a hybrid neuro-fuzzy system based on a probabilistic neural network and adaptive neuro-fuzzy
interference system that allows solving the task in these situations is proposed.
Computational intelligence, artificial neural networks, neuro-fuzzy systems compared to conventional artificial neural networks, the proposed system requires significantly less training time, and in
comparison with neuro-fuzzy systems, it contains significantly fewer membership functions in the fuzzification layer. The hybrid learning algorithm for the system under consideration based on
self-learning according to the principle “Winner takes all” and lazy learning according to the principle “Neurons at data points” has been introduced.
The proposed system solves the problem of classification in conditions of overlapping classes with the calculation of the membership levels of the formed diagnosis to various possible classes.
The proposed system is quite simple in its numerical implementation, characterized by a high speed of information processing, both in the learning process and in the decision-making process; it
easily adapts to situations when the number of diagnostics features changes during the system's functioning.
Keywords: Medical data mining, Probabilistic neural network, Neuro-fuzzy system, Membership function, Lazy learning, Pattern recognition.
Data mining methods are currently widely used in the analysis of medical information [1-3] and, first of all, in diagnosis problems based on the available data on the patient's state. As a rule,
medical diagnostics problems from a data mining standpoint are considered either problems of pattern classification-recognition, clustering - recognition without a teacher, or forecasting-prediction
of the disease course. Methods of computational intelligence [4-7] adapted for solving medical problems [8-11] proved to be the best mathematical apparatus here. Artificial neural networks have
proved to be efficient [12] due to their ability to train parameters - synaptic weights (and sometimes architecture) to process a training dataset, which ultimately allows or restores distributing
hyper surfaces between classes of diagnoses arbitrarily shapes. Here deep neural networks have effectively demonstrated their capabilities [13, 14], which provide recognition accuracy entirely
inaccessible for other approaches.
Simultaneously, there is a broad class of situations when deep neural networks are either ineffective or generally inoperable. Here, notably the problems with a short training dataset often happen in
real medical cases. Also, medical information is often presented not only in a numerical scale of intervals and relationships but also in a nominal, ordinal (rank) or binary scale.
Probabilistic Neural Networks (PNNs) [15, 16] are well suited for solving recognition-classification problems under conditions of a limited amount of training data [17], which, however, are crisp
systems operating in conditions of non-overlapping classes and learning in a batch mode. In previous works [18-22], fuzzy and online PNN modifications were introduced to solve recognition problems
under overlapping classes and trained in sequential mode. The main disadvantages of these systems are their cumbersomeness (the size of the training dataset determines the number of nodes in the
pattern layer) and the ability to work only with numerical data. The ability to work with data in different scales is an advantage of neuro-fuzzy systems [23]. Here, for the problem under
consideration, ANFIS, Takagi-Sugeno-Kang, Wang-Mendel and other systems can be noted.
Unfortunately, training these systems (tuning their synaptic weights, and sometimes membership functions) may require relatively large amounts of training datasets [24]. In this regard, it seems
expedient to develop a hybrid of a Probabilistic Neural Network (PNN) and a neuro-fuzzy system for solving classification-diagnostics-recognition problems in the conditions of overlapping classes and
training data in different scales, as well as the ability to instantaneous tuning based on lazy learning [25].
In a study [26], the adaptive probabilistic neuro-fuzzy system for medical diagnosis was introduced. On its outputs, the probabilities Pj(x) of a particular patient have a certain diagnosis
calculated. Additionally, the diagnostic procedure is not always can be based on a probabilistic approach [8, 9].
Due to this, it is an adequate representation to consider the diagnostic procedure as fuzzy reasoning [4-7, 23]. Here alongside the probabilities Pj(x), levels of fuzzy membership are considered to
define that the patient has a j-the diagnosis. The advantage of such fuzzy membership is that it is a priory supposed class to overlap in feature space.
2. MATERIALS AND METHODS
2.1. Architecture of Fuzzy-probabilistic Neural Network
The proposed probabilistic neuro-fuzzy system (Fig. 1) contains six layers of information processing: the first hidden layer of fuzzification, formed by one-dimensional bell-shaped membership
functions, the second hidden layer - aggregation one, formed by elementary multiplication blocks, the third hidden layer of adders, the number of which is determined by the number of classes plus one
per which should be split the original data array, the fourth - defuzzification layer, formed by division blocks, at the outputs of which signals appear that determine the probabilities of belonging
Pj(x) of each observation to each of the possible classes, the fifth layer is formed with nonlinear activation functions, and finally, the sixth – formed with division blocks number of which is
defined by the number of diagnoses and one summation. In fact, the fifth and the sixth layers of the system implement a nonlinear transformation softmax [13]. It allows assessing the fuzzy membership
of each patient to each diagnosis.
Unlike classical neuro-fuzzy systems, there is no layer of tuning weights parameters. As will be shown below, the proposed method's learning process is implemented in the first hidden layer by
adjusting the membership functions' parameters. It is clear that this approach simplifies the numerical implementation of the system and improves its performance, especially under conditions of a
short training dataset.
The initial information for the system synthesis is a training dataset formed by a set of n-dimensional images-vectors x(k)=(x1(k),x2(k),...,xi(k),...xn(k))T each of which (here 1 ≤ k ≤ N
observation number in the original array or the moment of the current discrete time in Data Stream Mining tasks) belongs to a specific class Clj,j=1,2,...,m. It is convenient to rearrange the
original training dataset so that the first N1 observations belong to the first class Cl1, following N2 observations to Cl2 and finally Nm latest observations to class Clm. Moreover, for each
class, instead of the index number k, it is convenient to introduce an intraclass numbering so that for the first class Cl1k=t1=1,2,...,N1; for class Cl2k=t2=N1+1,N1+2,...,N1+N2; and,
finally, for the last m-th class Clmk=tm=N1+N2+...+Nm−1+1, ...,N1+N1+...+Nm=N
Based on this training dataset, the first hidden fuzzification layer is formed by Gaussian membership functions.
Where wli - fixed or adjustable (more generally) centers of the corresponding membership functions, σ2 - parameter specifying the width of the corresponding function also fixed or tuned, li=
Note that in a standard probabilistic neural network, the first hidden layer of patterns is formed by multidimensional Gaussians, the number of which is determined by the training dataset size N. In
the proposed system, the number of membership functions at each input can be different, for example, if a binary variable of type 1 or 0, “Yes” or “No,” “there is a symptom,” or “there is no
symptom,” is supplied to the input then two functions are enough at this input (hi=2); if at the i-th input, the corresponding variable can take an arbitrary number of values, then 2 ≤ hi ≤ N. The
total number of one-dimensional functions in the system varies in the interval.
where h =
At the input of the first hidden layer h signal-values of the corresponding Gaussians appear.
Then they are fed to the second - aggregation hidden layer, which, similarly to standard neuro-fuzzy systems, is formed by ordinary multiplication blocks which are equal to N.
In this layer from one-dimensional membership functions, the multidimensional kernel activation functions are formed:
the vector centres of which wtj=(wl1,...,wli,...,wln)T are formed with the centers of one-dimensional membership functions. Moreover, for each j-th class, multidimensional activation functions Nj are
formed. As a result, a signal is generated at the output of the second hidden layer.
The third hidden layer is formed from the blocks of summation, the number of which is determined by the value m + 1. The first m adders calculate the data density distribution for each class:
and (m+1)-th one overall data density distribution.
In the fourth layer, the probability level is calculated that the presented observation x belongs to the j-th class.
In the fifth hidden layer, the nonlinear transformation is implemented with a nonlinear activation function.
and, finally, in the output layer, levels of fuzzy membership with soft max function are calculated:
Satisfying of obvious condition:
A combined training of probabilistic neuro-fuzzy system In general, the proposed system's settings can be implemented based on the so-called lazy learning [25] in the same way a standard PNN is
configured. Lazy learning is based on the principle “Neurons at data points”, when the kernel activation functions' centers coincide with the observations from the training set. For each observation
x(tj), a multidimensional bell-shaped activation function μtjj(xi,wtj) (here wtj≡x(ti)) is formed. It is clear that such a learning process is implemented rapidly. Still, if the amount of the
training sample N is large enough, the PNN system becomes too cumbersome.
Following this approach, N membership functions should be formed at each input in a neuro-fuzzy system in the first fuzzification layer. However, suppose the training signals on different inputs are
specified either in the nominal or binary or in the rank scales, in this case, the number of membership functions at the corresponding inputs decreases significantly. In addition, in medical
applications, numerical variables, such as the patient's temperature, are often repeated, leading to the conjugation of the number of membership functions. Finally, the most straightforward case
compensates when all input signals are specified in a binary scale: “there is a symptom” – “there is no symptom”. Only two membership functions with center coordinates 0 and 1 are formed at each
In the case when all the input variables are specified on a numerical scale, the number of one-dimensional membership functions is determined by the value. hi=N;h=Nn that, with larger volumes of
the training dataset, we can make the system too cumbersome. It is possible to overcome this problem using the self-learning procedure of the centers of membership functions, while their number hi at
each input remains constant.
Let us set the maximum possible value of the number of membership functions at the i-th input hi* and, before starting the learning process, place them evenly along the axis xi on the interval (0, 1)
so that the value determines the distance between the original centers wli(0) and wli+1(0) determined by the value:
When the first vector from the training dataset is fed to the system input x(1)=(x1(1),...,xi(1),...,xn(1))−1 (it does not matter which of the classes Clj it belongs to), the center-”winner” wli*(0)
is determined at the beginning, which is the nearest xi(1) in the sense of distance, i.e.,
After this center “winner” is pulled up to the input signal xi(1) component according to the expression:
where 0<ηi(1)<1 - is the learning rate. It is clear that when ηi(1)=1 center-”winner” moves to a point xi(1) using the principle of “neurons at data points”.
At the k-th iteration, the tuning procedure can be written in the form represented with formula 16.
It is easy to see that the last expression implements the self-learning principle of T.Kohonen [27] “Winner Takes All” (WTA).
Thus, the combination of lazy learning and self-learning can significantly simplify both the architecture and the process of tuning the probabilistic neuro-fuzzy system.
3. RESULTS
The proposed probabilistic neuro-fuzzy system is designed to work with different data types, such as numerical and binary data that are presented in long and short datasets. Therefore, two datasets
with different data types were taken from the UCI repository for the experimental evaluation.
The first dataset, “Heart Disease,” contains 303 instances and 76 attributes, but only a subset of 14 of them is used. Each observation includes detailed information about the patient, his or her
physiological parameters, and symptoms of a disease. This dataset is a mix of numerical and binary data. Physiological parameters have numerical form, and symptoms typically have a binary form.
The purpose of this dataset is to find out whether the patient has the disease or not, which value between 0 and 4, where 0 is the absence of disease. Also, this dataset contains overlapping classes,
as shown in Fig. (2).
The second dataset, “Diabetes 130-US hospitals for years 1999-2008”, is a long dataset that contains 100000 instances. It includes features that represent outcomes of treatment for patients: the
length of stay at the hospital, information about the laboratory tests, and medications administered when patients were at the hospital. This dataset also contains numerical and binary data.
The experiment results show that the KNN algorithm is fast, but its accuracy is close to 50%. Thus, the algorithm is not intended for the classification of very short samples. Unlike KNN, the
proposed network's classification accuracy increases as the number of elements in the sample increases. Even on very small samples, it achieves an accuracy of 77%. EFPNN also allows for greater
accuracy as the sample size increases. However, it is significantly more than 20% slower than the proposed network. The fastest method is KNN, but it should take into account that neural networks are
implemented on Python and run on the central processor, and not on the GPU like KNN. It means that with the same hardware implementation, the time costs for all methods will be comparable. But the
accuracy of the proposed network is higher (Fig. 3).
The experiment results show that the KNN algorithm is fast, but its accuracy is close to 50%. Thus, the algorithm is not intended for the classification of very short samples. Unlike KNN, the
proposed network's (APNFS) classification accuracy increases as the number of elements in the sample increases. Even on very small samples, it achieves an accuracy of 77%. Takagi–Sugeno–Kang Fuzzy
Classifier (TSK) and Wang-Mendel systems also allow archiving for greater accuracy as the sample size increases.
Table 1.
Classification Accuracies,
Algorithms for Comparison % Max Time
KNN 50.24 51,63 50.7 49.03 0.03
TSK 59.91 62.33 74.08 78.92 0.61
APNFS 52.01 57,7 69.34 77.52 0.28
From Table 1, it can be seen that the TSK system is slower than the proposed network. The fastest method is KNN, but it should take into account that neural networks are implemented on Python and run
on the central processor, and not on the GPU like KNN. It means that with the same hardware implementation, the time costs for all methods will be comparable. But the accuracy of the proposed network
is higher.
The second experiment was performed on the long dataset, which is called “Diabetes 130-US hospitals for years 1999-2008”. From the initial dataset, a number of subsets that have different sizes, from
3000 to 30 000 instances, were formed. The experiment is intended to compare the increase of the classification time with the dataset size growth because the absolute time consumption depends on the
computer platform and used processor (CPU, GPU). Based on the results of the first experiment, for the second one, two Adaptive probabilistic neuro-fuzzy systems and TSK, which provide higher
classification accuracy, were selected. The experiment showed that the proposed approach APNFS requires less computational cost than TSK. The increase in time required for larger subsets increases
significantly compared to small subsets. This trend apparently exposes the influence of the software on the classification time. Smaller datasets are usually allocated in RAM, while long datasets
require swapping of data from external memory.
Additionally, it is important to mention that the system under consideration outputs not only probabilities of belonging each sample from the dataset to the appropriate class-winner but also levels
of membership each observation to all classes. The Breast Tissue dataset from the UCI repository was used to examine the importance of fuzzy reasoning. It contains data describing electrical
impedance measurements of recently excised tissue samples from the breast. The dataset has nine features describing such parameters as “distance between I0 and real part of the maximum frequency
point”, “a maximum of the spectrum” and others. Also, there are six classes, such as carcinoma, fibro-adenoma, mastopathy, connective, glandular and adipose. The comparison of outputs, both
membership levels and probabilities for some patients, is shown in Tables 2 and 3.
Table 2.
Samples Level of Membership
Mastopathy Carcinoma Connective Adipose
Sample 1 0.23 0.35 0.43 0.09
Sample 2 0.41 0.06 0.36 0.17
Sample 3 0.06 0.39 0.09 0.46
Table 3.
Samples Class-Winner Probability
Sample 1 carcinoma 0.56
Sample 2 mastopathy 0.49
Sample 3 adipose 0.61
Here we can see that some values of membership levels are quite close one to another; thus, it is important to consider all of them for further diagnostics. Also, it is essential to point out that
the usage of just probability values that outputs only one possible diagnosis gives a narrow point of view on the patient’s state, and also, treatment may be less effective.
So, it is better to take into account all possible values, which allow doctors to build further medication processes to do complex, higher-quality treatment.
In general, according to the results of the two experiments, the proposed approach in comparison with TSK provides slightly lower classification accuracy for small datasets but requires significantly
lower computational costs when the dataset size grows. Besides that, it is designed to work in online mode on Data Stream Mining tasks.
4. DISCUSSION
In order to solve the problem of classification with data that can be represented in different scales, the adaptive probabilistic neuro-fuzzy system is developed, based on a classical probabilistic
neural network, which is able to work under the condition of short datasets. However, PNN’s architecture is growing with training dataset, making it bulky and making the ability to work with long
datasets impossible, thus using the self-learning procedure where the distance between centers of membership functions should be “close enough” to form one training observation according to the
formula (16), and this is determined by a predefined threshold.
Also, it should be noted that neuro-fuzzy systems allow working with various scales; therefore, to solve the classification-diagnostics-recognition problems, it was necessary to develop a hybrid of
such system and PNN making it possible to work under conditions of overlapping classes, data represented in different scales, and the ability of fast tuning system’s parameters based on lazy
However, using only a probabilistic approach can narrow down the perspective on a case, for example, the system gives only one diagnosis for the patient instead of a few of the most possible. Hence,
owing to the fact that it is a priory supposed class to overlap in feature space, fuzzy reasoning gives the advantage to this system comparing to the crisp ones.
The adaptive probabilistic neuro-fuzzy system is proposed. This system is designed to classify vector observations that are processed in a sequential online mode. It has a limited number of neurons
in the fuzzification layer and is characterized by a high rate of learning, which distinguishes it from both deep and traditional shallow multilayer networks and allows for a large amount of data to
be processed within the overall problem of Data Stream Mining.
The distinctive feature of the proposed system is combined learning of its parameters and membership functions, combining controlled learning with a teacher, lazy learning based on the concept
“Neurons at data points”, and self-learning, which is based on the principle “Winner takes all”. Precisely, this approach allows solving diagnostic tasks in conditions of not just long but also short
training datasets and mutually overlapping classes. Thereat, it is important to note that the input information can be given in various notations, in particular: numerical, ordinal, binary, and
nominal. In the future, it is reasonable and fascinating to process information in online mode, consider the possibility of data processing, where the size of an input vector varies (appearing or
deleting certain features), and a number of possible diagnoses (the appearing of new diseases in data stream under consideration). The computer experiments are proving the proposed approach.
Not applicable.
No animals/humans were used for studies that are the basis of this research.
Not applicable.
This work was supported by the state budget scientific research project of Kharkiv National University of Radio Electronics “Deep Hybrid Systems of computational intelligence for data stream mining
and their fast learning” (state registration number 0119U001403).
The authors declare no conflict of interest, financial or otherwise.
Declared none.
Berka P, Rauch S, Zighed D. Data mining and medical knowledge management: Cases and applications 2009.
Kruse R, Borgelt C, Klawonn F, Moewes C, Steinbrecher M, Held P. Computational intelligence a methodological introduction 2013.
Kantchev R. R: advances in intelligent analysis of medical data and decision support systems 2013.
Schmitt M, Teodorescu H-N, Jain A, Jain A, Jain S. Computational intelligence processing in medical diagnosis Springer-Verlag Berlin Heidelberg 2002.
Syerov Yu, Shakhovska N, Fedushko S. Method of the data adequacy determination of personal medical profiles.Advances in Artificial Systems for Medicine and Education II 2018; 333-43.
Bodyanskiy Ye, Perova I, Vynokurova O, Izonin I. Adaptive wavelet diagnostic neuro-fuzzy network for biomedical tasks. 14th International Conference on Advanced Trends in Radioelecrtronics,
Telecommunications and Computer Engineering (TCSET) 2018; 711-5.
Schalkoff RJ. Artificial neural networks 1997.
Goodfellow I, Bengio Y, Courville A. Deep Learning 2016.
Specht DF. Probabilistic neural networks and the polynomial Adaline as complementary techniques for classification. IEEE Trans Neural Netw 1990; 1(1): 111-21.
Specht DF. Probabilistic neural networks and the polynomial Adaline as complementary techniques for classification. IEEE Trans Neural Netw 1990; 1(1): 111-21.
Bodyanskiy Ye, Gorshkov Ye, Kolodyazhniy V, Wernstedt J. A learning of probabilistic neural network with fuzzy inference. Proceedings of 6th International Conferene on Artificial Neural Neworks and
Generic Algorithms Wien:Springer-Verlag. ICANNGA 2003 2003; pp. 2003; 13-7.
Bodyanskiy Y, Gorshkov Y, Kolodyazhniy V, Wernstedt J. Probabilistic neuro-fuzzy network with non-conventional activation functions In: Palade V, Howlett RJ, Jain L, Eds. Knowledge-Based Intelligent
Information and Engineering Systems, KES 2003 Lecture Notes in Artificial Intelligence 2773: 973-9.
Bodyanskiy Ye, Gorshkov Ye, Kolodyazhniy V. Resource-allocating probabilistic neuro-fuzzy network. Proceedings of 2nd Conference of European Union Zittan, Germany. Sosciety for Fuzzy Logic and
Technology (EUSFLAT2003) 2003; pp. 2003; 392-5.
Rutkowski L. Adaptive probabilistic neural networks for pattern classification in time-varying environment. IEEE Trans Neural Netw 2004; 15(4): 811-27.
Yi J-H, Wang J, Wang G-G. Improved probabilistic neural networks with self-adaptive strategies for transformer fault diagnosis problem-advances. Mech Eng 2016; 8(1): 1-13.
Zhernova P, Pliss I, Chala O. Modified fuzzy probabilistic neural network. Intellectual Systems for Decision Making and Problems of Computational Intelligence ISDMCI 2018; 228-30.
Souza PVC. Fuzzy neural networks and neuro-fuzzy networks: A review the main techniques and applications used in the literature. Appl Soft Comput 2020; 92: 106275.
Osowski S. Seven neurons to receive information 2006.
Nelles O. Nonlinear systems identification 2003; 39: 564-8. ISBN 3-540-67369-5. Automatica 2003; 39: 564-8.
Bodyanskiy Ye, Deineko A, Pliss I, Chala O. Probabilistic neuro-fuzzy system in medical diagnostic tasks and its lazy learning-selflearning IDDM’2020: 3rd International Conference on Informatics &
Data-Driven Medicine November 19–21, 2020; Växjö, Sweden. 2020; pp. 1-7.2020;
Kohonen T. Self-organizing maps 1995. | {"url":"https://openbioinformaticsjournal.com/VOLUME/14/PAGE/123/","timestamp":"2024-11-07T23:34:18Z","content_type":"text/html","content_length":"417673","record_id":"<urn:uuid:a2735245-7244-4a18-a146-af106dae8204>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00286.warc.gz"} |
Microsoft Excel Mcqs with Answers for Test Preparation - Daily PK TV
Microsoft Excel Mcqs with Answers for Test Preparation
In a computer spreadsheet, each cell contain a __________ ?
A. label
B. Value
C. Formula
D. All of these
Actual working area of Computer’s Microsoft Excel is ________ ?
A. Workbook
B. Worksheet
C. Notesheet
D. Notebook
In a computer spreadsheet, value of formula (7-3)x(1+6)/4 is __________ ?
A. 7
B. 1.5
C. 2
D. 12
In a computer, element which is not a part of chart is ___________ ?
A. Plot area
B. Fill handler
C. Series
D. Chart area
The term chart wizard data in MS is refer to_________?
A. Vertical axis
B. Horizontal axis
C. Bar data
D. None of these
Which Operation can’t be performed by Queue?
A. Insertion
B. Deletion
C. Retrieving
D. Traversing
Computer spreadsheet cell that is highlighted with a heavy border is a _________ ?
A. Active cell
B. Cell containing a formula
C. Locked cell
D. Cell
In a computer spreadsheet, which is true if current or active cell is B4 and you pressed Enter key?
A. you will be in the cell A1
B. you will be in the cell B5
C. you will be in the cell B3
D. you will be in the cell B6
In a computer spreadsheet, function which is used to count numbers of entries in given range is called __________ ?
A. Length
B. Counter
C. Counting
D. Count
Main window in a computer spreadsheet is called ___________ ?
A. Work book
B. Work
C. Account book
D. Work sheet
Which of the following will not enter data in a cell?
A. Pressing the Esc key
B. Pressing the Tab Key
C. Both of these
D. None of above
In a computer spreadsheet, cell range A3 through G3 should be keyed in as _________ ?
A. A3-G3
B. A3:G3
C. A3?.G3
D. A3 to G3
In a computer spreadsheet, each cell contain a __________ ?
A. label
B. Value
C. Formula
D. All of these
Actual working area of Computer’s Microsoft Excel is ________ ?
A. Workbook
B. Worksheet
C. Notesheet
D. Notebook
In a computer spreadsheet, first part of number format describes __________ ?
A. Positive number
B. Negative number
C. Zero values
D. Text values
In a computer spreadsheet, value of formula (7-3)x(1+6)/4 is __________ ?
A. 7
B. 1.5
C. 2
D. 12
In a computer, element which is not a part of chart is ___________ ?
A. Plot area
B. Fill handler
C. Series
D. Chart area
The term chart wizard data in MS is refer to_________?
A. Vertical axis
B. Horizontal axis
C. Bar data
D. None of these
Pages ( 11 of 15 ):... 10 11 12 ... 15Next »
Leave a Comment | {"url":"https://dailypktv.com/microsoft-excel-mcqs-with-answers/11/","timestamp":"2024-11-14T11:57:25Z","content_type":"text/html","content_length":"73685","record_id":"<urn:uuid:b29cf3fe-f0f0-4720-ae03-c45d147877e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00000.warc.gz"} |
All-Pairs shortest paths in o(n2) time with high probability
We present an all-pairs shortest path algorithm whose running time on a complete directed graph on n vertices whose edge weights are chosen independently and uniformly at random from [0, 1] is O(n2),
in expectation and with high probability. This resolves a long-standing open problem. The algorithm is a variant of the dynamic all-pairs shortest paths algorithm of Demetrescu and Italiano [2006].
The analysis relies on a proof that the number of locally shortest paths in such randomly weighted graphs is O(n2), in expectation and with high probability. We also present a dynamic version of the
algorithm that recomputes all shortest paths after a random edge update in O(log2 n) expected time.
• Probabilistic analysis
• Shortest paths
Dive into the research topics of 'All-Pairs shortest paths in o(n2) time with high probability'. Together they form a unique fingerprint. | {"url":"https://cris.tau.ac.il/en/publications/all-pairs-shortest-paths-in-on2-time-with-high-probability","timestamp":"2024-11-06T20:02:52Z","content_type":"text/html","content_length":"48167","record_id":"<urn:uuid:a2282f31-2021-4fb6-8ce0-13ebff88b492>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00114.warc.gz"} |
Gradient Descent or Delta Rule - Try Machine Learning
Gradient Descent or Delta Rule
Gradient descent and delta rule are important algorithms used in machine learning and neural networks.
They are widely employed for optimizing the weights and biases of artificial neural networks in order to minimize the overall error and improve the accuracy of predictions.
Key Takeaways:
• Gradient descent and delta rule are optimization algorithms in machine learning.
• They help adjust the weights and biases of neural networks to minimize error.
Understanding Gradient Descent
**Gradient descent** is an iterative optimization algorithm that aims to find the minimum of a given function.
It works by taking small steps in the direction of the steepest descent, gradually approaching the global or local minimum.
*By adjusting these steps using a learning rate*, gradient descent optimizes the weights and biases of a neural network.
This method is particularly valuable when dealing with large datasets, as it allows for efficient learning.
Understanding Delta Rule
**Delta rule**, also known as the Widrow-Hoff rule, is a learning algorithm used for adjusting the weights and biases of neurons in a neural network.
It updates the weights based on the difference between the predicted output and the expected output.
*By calculating the error and propagating it through the layers*, the delta rule helps the network learn and improve its predictions with each iteration.
Gradient Descent vs Delta Rule
• *Gradient descent* is a more general optimization algorithm, while *delta rule* is a specific learning algorithm used within neural networks.
• *Gradient descent* can be applied to various machine learning problems beyond neural networks, whereas *delta rule* is specifically designed for adjusting weights and biases in neural networks.
• *Gradient descent* operates on the entire dataset, while *delta rule* adjusts weights and biases on a single input-output pair.
Advantages and Limitations
Algorithm Advantages Limitations
• Efficient for large datasets. • May converge to local minima.
Gradient Descent • Applicable in various machine learning settings. • Requires careful selection of learning rate.
Delta Rule • Allows for incremental learning. • May encounter problems with vanishing or exploding gradients.
• Efficient for adjusting weights in neural networks. • Not suitable for complex learning tasks.
Applications of Gradient Descent and Delta Rule
Both *gradient descent* and *delta rule* find extensive applications in machine learning and neural networks across different domains.
*Gradient descent* is commonly used in training deep neural networks and optimizing various models, such as linear regression and support vector machines.
*Delta rule*, on the other hand, is a fundamental component of backpropagation, a widely used algorithm for training artificial neural networks.
Gradient descent and delta rule play crucial roles in optimizing the weights and biases of neural networks.
With the ability to minimize errors and improve predictions, these algorithms contribute significantly to the field of machine learning.
Understanding the differences between gradient descent and delta rule enables researchers and developers to choose the appropriate algorithm for their specific use cases.
Common Misconceptions
Using Gradient Descent or Delta Rule
There are several common misconceptions that people have about using Gradient Descent or Delta Rule. These misconceptions can often lead to confusion and misunderstanding. It is important to clarify
these misconceptions in order to have a better understanding of these concepts.
Misconception 1: Gradient Descent always finds the global minimum
• Gradient Descent can get stuck in a local minimum if it is not properly initialized.
• There is no guarantee that Gradient Descent will find the global minimum in all cases.
• It is important to carefully choose the learning rate and initialization parameters to avoid getting trapped in local minima.
Misconception 2: Delta Rule only works for linear models
• The Delta Rule is commonly used in the context of training linear models, but it can also be applied to train non-linear models.
• With the proper choice of activation function, the Delta Rule can be used to train neural networks with multiple layers.
• While Delta Rule may have limitations when applied to certain complex nonlinear models, it is not restricted to linear models only.
Misconception 3: Gradient Descent always converges to the minimum in a fixed number of iterations
• The convergence of Gradient Descent depends on various factors like the learning rate, initialization, and the nature of the optimization problem.
• In some cases, Gradient Descent may converge slowly and require a large number of iterations to reach the minimum.
• It is important to monitor the convergence criteria and adjust the learning rate or initialization if necessary during the training process.
Misconception 4: Delta Rule guarantees the best possible model performance
• The Delta Rule aims to minimize the error between the model’s predictions and the actual target values.
• However, it does not guarantee the best possible model performance as the chosen model architecture and training data can also impact the model’s performance.
• Improvements in model performance can be achieved by considering other factors such as feature selection, regularization, or using more advanced optimization techniques.
Misconception 5: Gradient Descent is the only optimization algorithm for training models
• While Gradient Descent is a widely used optimization algorithm, it is not the only option available.
• There are other optimization algorithms such as stochastic gradient descent (SGD), Adam, and RMSprop that offer improved convergence speed or better handling of large datasets.
• It is important to explore and choose the appropriate optimization algorithm based on the specific requirements and characteristics of the model being trained.
In this article, we will explore the concept of Gradient Descent and Delta Rule, which are fundamental algorithms used in machine learning and optimization. These techniques are employed to
iteratively optimize models and find the best possible parameters for a given problem. We will present various tables that illustrate different aspects and applications of Gradient Descent and Delta
Comparing Learning Rates of Gradient Descent
The following table showcases the impact of different learning rates on the convergence rate and accuracy of Gradient Descent:
| Learning Rate | Convergence Time (seconds) | Accuracy (%) |
| 0.01 | 153 | 92.3 |
| 0.1 | 67 | 95.6 |
| 0.001 | 206 | 88.7 |
| 0.0001 | 320 | 84.2 |
Comparison of Delta Rule and Backpropagation
This table compares the Delta Rule with Backpropagation, another popular algorithm in neural networks:
| Algorithm | Advantages | Disadvantages |
| Delta Rule | Simplicity | Slow convergence |
| Backpropagation | Fast convergence | Computationally intensive |
| | Handles non-linearities | Sensitive to initialization |
Weights and Errors at Each Iteration
Here, we present the weights and corresponding errors of a Gradient Descent iteration:
| Iteration | Weight 1 | Weight 2 | Weight 3 | Error |
| 1 | 0.234 | 0.567 | 0.123 | 0.457 |
| 2 | 0.163 | 0.789 | 0.213 | 0.345 |
| 3 | 0.098 | 0.612 | 0.345 | 0.212 |
Impact of Regularization on Model Performance
The table below demonstrates the effect of regularization on model performance using different regularization strengths:
| Regularization Strength | Train Accuracy (%) | Test Accuracy (%) |
| 0.001 | 95.2 | 92.1 |
| 0.01 | 94.5 | 91.8 |
| 0.1 | 92.3 | 90.3 |
Error Reduction Using Delta Rule
This table illustrates the reduction in error achieved by the Delta Rule during multiple iterations:
| Iteration | Error Reduction |
| 1 | 0.194 |
| 2 | 0.123 |
| 3 | 0.087 |
Learning Curve of Gradient Descent
The learning curve illustrates how the training and validation error change over multiple iterations:
| Iteration | Training Error | Validation Error |
| 1 | 0.457 | 0.342 |
| 2 | 0.345 | 0.231 |
| 3 | 0.212 | 0.167 |
Gradient Descent with Different Activation Functions
This table explores the performance of Gradient Descent with different activation functions:
| Activation Function | Train Accuracy (%) | Test Accuracy (%) |
| Sigmoid | 90.5 | 89.2 |
| ReLU | 93.2 | 91.7 |
| Tanh | 92.8 | 91.5 |
Delta Rule for Regression
The following table showcases the performance of the Delta Rule for regression problems:
| Input (x) | Target (y) | Predicted (y’) |
| 0.1 | 0.15 | 0.158 |
| 0.2 | 0.25 | 0.247 |
| 0.3 | 0.35 | 0.354 |
Comparison of Gradient Descent Variants
This table compares various variants of Gradient Descent:
| Variant | Advantages | Disadvantages |
| Stochastic GD | Fast convergence | Noisy updates |
| Batch GD | Stable updates | Computationally expensive |
| Mini-batch GD | Balance between above variants | Requires tuning batch size |
Gradient Descent and Delta Rule are powerful techniques used in machine learning and optimization. The presented tables demonstrate their application in various scenarios, such as comparing learning
rates, regularization strengths, and activation functions. These algorithms play a vital role in training models and improving their ability to make accurate predictions. As we delve deeper into the
field of machine learning, a thorough understanding of Gradient Descent and Delta Rule becomes essential for building and optimizing effective models.
Frequently Asked Questions
Gradient Descent or Delta Rule
What is Gradient Descent?
Gradient descent is an iterative optimization algorithm used in machine learning and mathematical optimization. It is used to find the minimum of a function by iteratively adjusting parameters in the
direction of the steepest descent.
How does Gradient Descent work?
Gradient descent works by calculating the gradient of a function at a given point and then iteratively updating the parameters in the direction of the negative gradient to minimize the target
What is the Delta Rule?
The Delta Rule, also known as the Widrow-Hoff rule, is a learning rule used in artificial neural networks for supervised learning. It is used to adjust the weights of the network based on the
difference between predicted and target outputs.
How does the Delta Rule differ from Gradient Descent?
The Delta Rule is a specific instance of the more general Gradient Descent algorithm. While Gradient Descent is a general optimization algorithm, the Delta Rule is specifically designed for adjusting
weights in artificial neural networks.
What are the applications of Gradient Descent?
Gradient Descent has a wide range of applications in machine learning and optimization. It is commonly used in training neural networks, fitting regression models, and solving reinforcement learning
What are the advantages of using Gradient Descent?
Gradient Descent is a versatile algorithm that can handle a variety of optimization problems. It is computationally efficient, relatively simple to implement, and can find a global or local minimum
depending on the problem at hand.
What are the limitations of Gradient Descent?
Gradient Descent can sometimes converge to a local minimum instead of the global minimum, depending on the initial starting point and the shape of the function being optimized. It may require careful
tuning of learning rate and other parameters to achieve optimal results.
How is learning rate determined in Gradient Descent?
The learning rate in Gradient Descent determines the step size at each iteration. It is typically set empirically based on the problem and data. It needs to be carefully chosen so that the algorithm
converges efficiently without overscaling or underscaling the parameter updates.
What are the different variants of Gradient Descent?
There are several variants of Gradient Descent, including Batch Gradient Descent, Stochastic Gradient Descent, and Mini-batch Gradient Descent. These variants differ in how they update the parameters
and use the data during each iteration.
Are there any alternatives to Gradient Descent?
Yes, there are alternative optimization algorithms to Gradient Descent, such as Newton’s method, Conjugate Gradient, and Quasi-Newton methods. These methods may have different convergence properties
and computational requirements compared to Gradient Descent. | {"url":"https://trymachinelearning.com/gradient-descent-or-delta-rule/","timestamp":"2024-11-02T09:31:53Z","content_type":"text/html","content_length":"74455","record_id":"<urn:uuid:b64b417a-07f9-4955-8702-96d55a732553>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00406.warc.gz"} |
Evaluation of a method for converting Stratospheric Aerosol and Gas Experiment (SAGE) extinction coefficients to backscatter coefficients for intercomparison with lidar observations
Articles | Volume 13, issue 8
© Author(s) 2020. This work is distributed under the Creative Commons Attribution 4.0 License.
Evaluation of a method for converting Stratospheric Aerosol and Gas Experiment (SAGE) extinction coefficients to backscatter coefficients for intercomparison with lidar observations
Aerosol backscatter coefficients were calculated using multiwavelength aerosol extinction products from the SAGE II and III/ISS instruments (SAGE: Stratospheric Aerosol and Gas Experiment). The
conversion methodology is presented, followed by an evaluation of the conversion algorithm's robustness. The SAGE-based backscatter products were compared to backscatter coefficients derived from
ground-based lidar at three sites (Table Mountain Facility, Mauna Loa, and Observatoire de Haute-Provence). Further, the SAGE-derived lidar ratios were compared to values from previous balloon and
theoretical studies. This evaluation includes the major eruption of Mt. Pinatubo in 1991, followed by the atmospherically quiescent period beginning in the late 1990s. Recommendations are made
regarding the use of this method for evaluation of aerosol extinction profiles collected using the occultation method.
Received: 25 Feb 2020 – Discussion started: 04 Mar 2020 – Revised: 29 May 2020 – Accepted: 02 Jul 2020 – Published: 13 Aug 2020
Stratospheric aerosol consists of submicron particles (Chagnon and Junge, 1961) that are composed primarily of sulfuric acid and water (Murphy et al., 1998) and play a crucial role in atmospheric
chemistry and radiation transfer (Pitts and Thomason, 1993; Kremser et al., 2016; Wilka et al., 2018). Background stratospheric sulfuric acid is supplied by the chronic emission of natural gases such
as CS[2] (carbon disulfide), OCS (carbonyl sulfide), DMS (dimethyl sulfide), and SO[2] (sulfur dioxide) from both land and ocean sources (Kremser et al., 2016). The amount of sulfur in the
stratosphere can be acutely, yet significantly, impacted by volcanic eruptions. This influence is not limited to relatively rare injections from large volcanic events such as the Mt. Pinatubo
eruption of 1991 (McCormick et al., 1995), but episodic injections from smaller eruptions have been shown to have a significant impact as well (Vernier et al., 2011). Therefore, ongoing long-term
observations of stratospheric aerosol are important from both a climate and chemistry perspective.
The Stratospheric Aerosol and Gas Experiment (SAGE) is a series of satellite-borne instruments that use the occultation method (both solar and lunar light sourced) and have a lineage that spans
4 decades, originating with the Stratospheric Aerosol Measurement II in 1978 (SAM-II, Chu and McCormick, 1979). Using the occultation technique, the SAM II and SAGE instruments made direct
measurements of vertical profiles of the aerosol extinction coefficient (k, herein referred to simply as aerosol extinction) by recording light transmitted through the atmosphere from the sun or moon
as it rises or sets. This attenuated light was then compared to exo-atmospheric values that were recorded when the light source was sufficiently high above the atmosphere. This technique allows for
high-precision measurements on the order of 5%, as reported in the level 2 data product, for SAGE aerosol extinction in the main aerosol layer. In general, stratospheric aerosol extinction
measurements are challenging due to the paucity of aerosols under background conditions and the ephemeral nature of ash and particulates injected directly from volcanic eruptions. However,
occultation observations have the benefit of long path lengths (on the order of 100–1000km, dependent on altitude). Further, due to the self-calibrating nature of this method, SAGE measurements are
inherently stable (i.e., minimal impact from instrument drift) and ideal for long-term trend studies.
Due to the SAGE instrument's level of precision and the limited aerosol number density in the stratosphere, validating the aerosol extinction products has proven challenging. Successful validation is
further limited by the measured parameter itself since coincident stratospheric extinction measurements are scarce. Conversely, high-quality backscatter measurements from ground-based lidar
instruments are more common and, despite operating at a fixed location, may provide sufficient coincident observations for an evaluation of the SAGE aerosol product. However, the backscatter and
extinction coefficient products are not directly comparable.
Previous researchers have accomplished this comparison through the application of conversion coefficients determined from balloon-borne optical particle counters (OPCs; see Jäger and Hofmann, 1991;
Jäger et al., 1995; Jäger and Deshler, 2002) or the selection of a wavelength-dependent lidar ratio (S, typically ≈40–46sr; see Kar et al., 2019) to invert the lidar backscatter (532nm) to
extinction, followed by wavelength correction to account for the differing SAGE and lidar wavelengths (conversion is carried out using the Ångström coefficient; Kar et al., 2019). A major limitation
of balloon-based conversions is the uncertainty in the conversion factors (on the order of ±30%–40%; Deshler et al., 2003; Kovilakam and Deshler, 2015) and the requirement for ongoing OPC launches
to accurately observe both zonal and seasonal variability. The primary limitation of lidar conversions is the challenge of appropriately selecting S. Indeed, Kar et al. (2019) showed that S is both
altitude and latitude dependent and varies from 20 to 30sr, while other reports (Wandinger et al., 1995; Kent et al., 1998) have shown S to go as high as 70 during background conditions. While a
lidar ratio of 40–50sr has been regarded as a satisfactory assumption, S is ultimately uninformed about the atmosphere in which the measurement was recorded, making appropriate selection of S
On the other hand, Thomason and Osborn (1992) invoked an eigenvector analysis based on SAGE II extinction ratios to convert extinction coefficients to total aerosol mass and backscatter coefficients
to enable comparison with lidar observations. This method provided coefficients with uncertainties on the order of ±20%–30% and has been used in subsequent studies to convert lidar backscatter to
extinction for comparison with SAGE observations (Osborn et al., 1998; Lu et al., 2000; Antuña et al., 2002, 2003). As this method relied on SAGE-observed extinction coefficients it was more similar
to our method than backscatter-to-extinction methods (vide supra) and may be considered a precursor to the present work.
Contrary to previous efforts to compare extinction and backscatter coefficients, the extinction-to-backscatter (EBC) method proposed in this study required relatively basic assumptions about the
character of the underlying aerosol. These assumptions include composition, particle shape, and the shape of the size distribution (common assumptions in Mie theory, as further discussed below).
While combining Mie theory and extinction measurements to gain insight into the nature of stratospheric aerosol is a common methodology (e.g., Hansen and Matsushima, 1966; Heintzenberg, Jost, 1982;
Hitzenberger and Rizzi, 1986; Thomason, 1992; Bingen et al., 2004), the difference in our method is that we make no attempt to infer aerosol properties such as number density or particle size
distribution. Instead, we apply Mie theory to infer the relationship between extinction and backscatter and use the range of the solution space of aerosol properties as a bounding box for
uncertainty. Fortunately, within the regime of the available observations, this methodology is less sensitive to specific aerosol properties such that we can reasonably convert SAGE extinction to a
derived backscatter for comparison with lidar. To this end, we present a method of converting SAGE-observed extinction coefficients to backscatter coefficients for direct comparison with
stratospheric lidar observations. This method is presented as an alternative evaluation technique for the SAGE products with the intent of expanding our long-term trend intercomparison opportunities
(i.e., to include ground-based lidar as well as the possibility of satellite-borne lidar).
The SAGE instruments used in the current study are SAGE II (October 1985–August 2005) and SAGE III on the International Space Station (SAGE III/ISS, June 2017–present, hereafter referred to as
SAGE III). The SAGE II instrument and algorithm (v7.0) have been described previously by Mauldin et al. (1985) and Damadeo et al. (2013), respectively. The SAGE III instrument was described by
Cisewski et al. (2014), and the algorithm (v5.1) will be the topic of upcoming publications. A brief description will be offered here, but the reader is directed to these publications for details.
SAGE II was a seven-channel solar occultation instrument (386, 448, 452, 525, 600, 935, 1020nm) that flew on the Earth Radiation Budget Satellite (ERBS) from October 1984 through August 2005. Due to
the orbital inclination and the method of observation, SAGE II observations were limited to ≈30 occultations per day, with 3–4 times more observations at midlatitudes than at tropical and high
latitudes as seen in Fig. 1a. The standard products included the number density of gas-phase species (O[3], NO[2], and H[2]O) and aerosol extinction (385, 453, 525, and 1020nm) with a vertical
resolution of ≈1km (reported every 0.5km). The SAGE II v7.0 products were used in the current analysis.
SAGE III is a solar–lunar occultation instrument that is docked on the ISS and has a data record beginning in June 2017. The onboard spectrometer is a charge-coupled device with a resolution of
1–2nm. The spectrometer's spectral range extends from 280 to 1040nm in addition to a lone InGaAs photodiode at 1550nm. Similar to SAGE II, SAGE III has a higher frequency of observations at
midlatitudes compared to the tropics and high latitudes (Fig. 1b). The standard products include the number density of gas-phase species for both solar (O[3], NO[2], and H[2]O) and lunar (O[3] and NO
[3]) observations, as well as aerosol extinction coefficients (384, 448, 520, 601, 676, 755, 869, 1020, 1543nm). The vertical resolution is 0.75km, reported every 0.5km). The SAGE III v5.1
products (July 2017–September 2019) were used in the current analysis.
2.2Ground lidar
Ground lidar data from three stations were used within this study. To allow intercomparison with both SAGE II and SAGE III, candidate ground stations with a long-duration data record were preferred.
Further, data quality is likewise important. The Network for Detection of Atmospheric Composition Change (NDACC, https://www.ndacc.org, last access: 7 August 2020) was founded to observe long-term
stratospheric trends by making long-term, high-quality atmospheric measurements. Therefore, stations within this network were selected for comparison. We identified three stations that satisfied the
requirements of this analysis: Table Mountain Facility, Mauna Loa Observatory, and Observatoire de Haute-Provence. A brief description of the instruments and their algorithms is provided below.
2.2.1Table Mountain Facility
The NASA Jet Propulsion Laboratory (JPL) Table Mountain Facility (TMF) is located in southern California (34.4^∘N, 117.7^∘W; alt. 2285m). Backscatter coefficients derived from the ozone
DIfferential Absorption Lidar (DIAL) were used in the current study and have a record extending back to the beginning of 1989 (McDermid et al., 1990b, a). The lidar used the third harmonic of a
Nd:YAG laser to record elastic backscatter at 355nm, which was corrected for ozone and NO[2] absorption, and Rayleigh extinction. The corrected backscatter was then used to calculate the aerosol
backscatter coefficient from backscatter ratio (BSR) (Northam et al., 1974; Gross et al., 1995). Prior to June 2001 the BSR was calculated using pressure and temperature data from a National Centers
for Environmental Protection (NCEP) meteorological model. Since June 2001 the BSR has been computed using the 387nm channel from a newly installed Raman channel as the purely molecular component in
the BSR. For both cases, the BSR was normalized to 1 between 30 and 35km where it was assumed that the aerosol backscatter contribution was negligible.
2.2.2Mauna Loa Observatory
The NOAA Mauna Loa Observatory (MLO; 19.5^∘N, 155.6^∘W; alt. 3.4km) is located on the Big Island of Hawai'i. The dataset used here comes from the elastic and inelastic (Raman) backscatter channels
of the JPL ozone DIAL that began measurements in 1993 (McDermid et al., 1995; McDermid, 1995). Just like the JPL-TMF system, the lidar used the third harmonic of an Nd:YAG laser to record the elastic
backscatter at 355nm, followed by correction for ozone and NO[2] absorption, as well as Rayleigh extinction. The corrected backscatter was then used to calculate the aerosol backscatter coefficient
from the backscatter ratio using the 387nm channel as the purely molecular component in the BSR as described in Chouza et al. (2020). The BSR was normalized to 1 at a constant altitude of 35km
where it was assumed that the aerosol backscatter contribution was negligible.
2.2.3Observatoire de Haute-Provence
The Observatoire de Haute-Provence (OHP; 43.9^∘N, 5.7^∘E; 670ma.s.l.) is located in southern France and has an elastic backscatter lidar record that began in 1994. The lidar design is based on
DIAL ozone measurements that began in 1985. In 1993, the lidar system was updated for improved measurements in the lower stratosphere (Godin-Beekmann et al., 2003; Khaykin et al., 2017). The lidar
used the third harmonic of an Nd:YAG laser (355nm) to record elastic backscatter, followed by inversion using the Fernald–Klett method (Fernald, 1984; Klett, 1985) to provide backscatter and
extinction coefficients, assuming an aerosol-free region between 30 and 33km and a constant lidar ratio of 50sr. The error estimate for this method is <10% (Khaykin et al., 2017).
Extinction and backscatter observations cannot be directly compared. In order to evaluate the agreement between backscatter measurements and extinction coefficient measurements, the data types must
be converted to a common parameter, thereby requiring a conversion algorithm. As previously mentioned, this is usually done by converting backscatter to extinction coefficients using conversion
factors from sources independent of either instrument (e.g., constant lidar ratio). Herein, we derive a process to infer this relationship based on the spectral dependence of SAGE II/III aerosol
extinction coefficient measurements and only make basic assumptions on the character of the underlying aerosol. Indeed, this EBC method is proposed to act as a bridge between aerosol extinction and
backscatter observations. This bridge is founded upon Mie theory (Kerker, 1969; Hansen and Travis, 1974; Bohren and Huffman, 1983) and invokes the typical assumptions required in Mie theory models:
particle shape, composition, and distribution shape and width. Herein we assumed that all particles are spherical, are composed primarily of sulfate (75% H[2]SO[4], 25% H[2]O by mass; Murphy et al.
, 1998), and that the particle size distribution (PSD) is single-mode lognormal. Refractive index values from Palmer and Williams (1975) were used in the calculations.
Particulate backscatter and extinction efficiency factors (Q[sca](λ,r) and Q[ext](λ,r), respectively; for derivation of Q(λ,r) see Kerker, 1969, and Bohren and Huffman, 1983) were calculated for a
series of particle radii ($\mathbf{r}=\left[\mathrm{1},\mathrm{2},\mathrm{\dots },\mathrm{1500}\right]$nm) and incident light wavelengths ($\mathit{\lambda }=\left[\mathrm{350},\mathrm{351},\mathrm
{\dots },\mathrm{2000}\right]$nm). Subsequently, a series of lognormal distributions (P(r[m],σ[g]), described by Eq. (1) where σ[g] is the geometric standard deviation and r[m] is the mode radius;
the median radius of a lognormal distribution is commonly referred to as mode radius in aerosol literature – we adopt this convention here) were calculated for the same family of particles with five
distribution widths (${\mathit{\sigma }}_{\mathrm{g}}=\left[\mathrm{1.2},\mathrm{1.4},\mathrm{1.5},\mathrm{1.6},\mathrm{1.8}\right]$) that were chosen to cover the range of likely distribution widths
(Jäger and Hofmann, 1991; Pueschel et al., 1994; Fussen et al., 2001; Deshler et al., 2003). This was performed for all 1500 radii to calculate a new lognormal distribution as r[m] took on each value
within r. Values for Q[sca](λ,r), Q[ext](λ,r), and P(r[m],σ[g]) were then fed into Eqs. (2) and (3) to produce three-dimensional lookup tables (${r}_{\mathrm{m}}×\mathit{\lambda }×{\mathit{\sigma }}_
{\mathrm{g}}$) of extinction and backscatter ($\mathbf{k}\left(\mathit{\lambda },{r}_{\mathrm{m}},{\mathit{\sigma }}_{\mathrm{g}}\right)$ and $\mathbit{\beta }\left(\mathit{\lambda },{r}_{\mathrm
{m}},{\mathit{\sigma }}_{\mathrm{g}}\right)$, respectively; hereafter referred to as k and β) coefficients as a function of mode radius, incident light wavelength, and distribution width.
At this point a technical note regarding construction of the lognormal distribution must be made. Construction of a lognormal distribution fails when the mode radius is near the limits of r[m] (i.e.,
1 or 1500nm), yielding a truncated lognormal distribution. However, the mode radii required for this analysis (i.e., to generate the corresponding SAGE extinction ratios) ranged from ≈50 to ≈500nm,
well away from these bounds. With the backscatter and extinction lookup tables thus created, we now focus on their utilization in converting from k to β.
$\begin{array}{}\text{(1)}& \mathbf{P}\left({r}_{\mathrm{m}},{\mathit{\sigma }}_{\mathrm{g}}\right)=\frac{\mathrm{1}}{\sqrt{\mathrm{2}\mathit{\pi }}\mathrm{ln}\left({\mathit{\sigma }}_{\mathrm{g}}\
right)\mathbf{r}}\mathrm{exp}\left[\frac{{\left(\mathrm{ln}\left(\mathbf{r}\right)-\mathrm{ln}\left({r}_{\mathrm{m}}\right)\right)}^{\mathrm{2}}}{-\mathrm{2}\mathrm{ln}{\left({\mathit{\sigma }}_{\
mathrm{g}}\right)}^{\mathrm{2}}}\right]\text{(2)}& \mathbf{k}\left(\mathit{\lambda },{r}_{\mathrm{m}},{\mathit{\sigma }}_{\mathrm{g}}\right)=\int \mathit{\pi }{\mathbf{r}}^{\mathrm{2}}\mathbf{P}\left
({r}_{\mathrm{m}},{\mathit{\sigma }}_{\mathrm{g}}\right){\mathbf{Q}}_{\mathrm{ext}}\left(\mathit{\lambda },\mathbf{r}\right)\mathrm{d}r\text{(3)}& \mathbit{\beta }\left(\mathit{\lambda },{r}_{\mathrm
{m}},{\mathit{\sigma }}_{\mathrm{g}}\right)=\frac{\mathrm{1}}{\mathrm{4}\mathit{\pi }}\int \mathit{\pi }{\mathbf{r}}^{\mathrm{2}}\mathbf{P}\left({r}_{\mathrm{m}},{\mathit{\sigma }}_{\mathrm{g}}\
right){\mathbf{Q}}_{\mathrm{sca}}\left(\mathit{\lambda },\mathbf{r}\right)\mathrm{d}r\end{array}$
Wavelengths were selected based on SAGE extinction channels and available lidar wavelength, and the lookup tables were used to create the plots in Fig. 2. Though this figure only shows data for one
combination of extinction and backscatter wavelengths, similar figures were generated for each combination (not shown), with the 520∕1020 combination providing the best combination of linearity,
atmospheric penetration depth, and wavelength overlap between SAGE II and SAGE III. This figure elucidates the relationship between the inverted lidar ratio (β∕k, hereafter referred to as S^−1),
extinction ratio, and distribution width. Indeed, this figure provided the nexus between extinction and backscatter observations and between theory and observation since SAGE-observed extinctions
were imported into this model to derive β[355]. To do this, SAGE extinction ratios (k[520]∕k[1020]) were used to define the abscissa value, followed by identifying the ordinate value (S^−1) according
to the line drawn in Fig. 2, followed by multiplication by the SAGE-observed k[1020]. For example, if the observed SAGE extinction ratio was 6, then ${S}^{-\mathrm{1}}\approx \mathrm{0.2}$ when σ[g]=
1.6, and the SAGE-derived backscatter coefficient (β[SAGE]) can be calculated via Eq. (4), where k[1020] is the SAGE extinction product at 1020nm. It is important to note a departure from convention
in how the S^−1 values are reported in Fig. 2. The standard convention would require both coefficients to be at the same wavelength. The current methodology requires these coefficients to be at
different wavelengths as explained above. This deviation is only made in this conversion step (i.e., when using the data presented in Fig. 2), while subsequent discussions of lidar ratio estimates
(e.g., Tables 2 and 3, Figs. 6 and 9, and Sect. 4.1.2) use the conventional lidar ratio definition.
$\begin{array}{}\text{(4)}& {\mathit{\beta }}_{\mathrm{SAGE}}={S}^{-\mathrm{1}}\cdot {k}_{\mathrm{1020}}\end{array}$
A potential limitation of this method is that, for large particle sizes (extinction ratios <1 in Fig. 2, corresponding to mode radius of ≈500nm), two solutions for S^−1 are possible. Further, for
smaller particles sizes (extinction ratios >6 in Fig. 2, corresponding to mode radius ≈50) the solutions rapidly diverge as a function of σ[g], making selection of σ[g] increasingly important.
However, SAGE extinction ratios were rarely outside these limits. This is seen in Fig. 3 where probability density functions (PDFs) and cumulative distribution functions (CDFs) of stratospheric
extinction ratios were plotted for SAGE II and SAGE III. The stark difference in distribution shape between panels (a) and (b) is due to the SAGE II mission being dominated by volcanic eruptions,
while the SAGE III mission, to date, has experienced a relatively quiescent atmosphere. Data in Fig. 3a and c were broken into two periods: (1) when the atmosphere was impacted by the Mt. Pinatubo
eruption (1 June 1991–1 January 1998) and (2) periods when the impact of Pinatubo was expected to be less significant. It was observed that the majority of extinction ratios (>90%) were between
1 and 6 regardless of Pinatubo's impact. Therefore, we conclude that the majority of SAGE's observations can take advantage of this methodology.
While Fig. 3 shows that most extinction ratios avoid either multiple solutions or significant divergence in solutions due to σ[g], it is understood that, due to uncertainty in σ[g], there is an
associated uncertainty in the derived β[355]. To account for this spread, SAGE-based backscatter coefficients were calculated for both extremes of σ[g] (i.e., 1.2 and 1.8). These two solutions were
plotted in subsequent figures to illustrate this spread. Further discussion of uncertainties associated with the selection of σ[g] is presented in Sect. 3.2.
3.1Internal evaluation of the method
Figure 2 shows the relationship between extinction ratio and S^−1 for one combination of wavelengths. Since SAGE II and SAGE III recorded extinction coefficients at multiple wavelengths, there were
multiple wavelength combinations from which to choose. Under ideal conditions, the β derived using this conversion methodology should be independent of wavelength combination. Indeed, it can be
trivially demonstrated that, working strictly within the confines of theory (i.e., no noise or uncertainty), this is the case. However, in reality, the SAGE extinction products were impacted by
errors originating in hardware (e.g., instrument noise), retrieval algorithm (e.g., how well gas species were cleared prior to retrieving aerosol extinction), and atmospheric conditions (e.g., impact
of clouds). Therefore, the method's consistency was evaluated by calculating β[SAGE] using three wavelength combinations to form the abscissa in Fig. 2: 385∕1020, 450∕1020, and 520∕1020 (hereafter
this calculated β is referred to as β[S(385)], β[S(450)], and β[S(520)]). The target backscatter wavelength was held constant (355nm) in this evaluation for two reasons: (1) this is the lidar
wavelength used at the three ground sites used in this study, and (2) selection of lidar wavelength does not influence the evaluation of the method's consistency.
Comparison of β
To evaluate the robustness of the EBC algorithm, β[SAGE] was calculated at three wavelength combinations: β[S(385)], β[S(450)], and β[S(520)]. Within this evaluation the 520 ratio acted as the
reference (i.e., the 385 and 450nm ratios were compared to the 520 ratio in subsequent statistical analyses). The intent of this comparison was to quantify and qualify the variability between the
differing β[SAGE] products. The following analysis was conducted using zonal statistics (5^∘ latitude, 2km altitude bins) that were weighted by the inverse measurement error within the reported SAGE
extinction products. These data are presented both graphically (Figs. 4 and 5) and numerically (Table 1).
The zonal weighted coefficient of correlation (R^2) and weighted slope of linear regression profiles are presented in Figs. 4a–d and 5a–d for SAGE II and SAGE III, respectively. It was observed that
the coefficients of correlation and slopes between the three products were high throughout the profile (R^2≥0.85 and slope≥0.78; Table 1) and were higher towards the middle of the stratosphere (R^2>
0.95 and slope ≈1). However, at lower and higher altitudes the overall performance was worse. This degradation was driven by several factors: (1) the shorter wavelengths were attenuated higher in the
atmosphere due to increasing optical thickness, which led to negligible transmittance through lower sections of the atmosphere; (2) the impact of cloud contamination at lower altitudes; and (3)
differences in the higher altitudes were the product of limited aerosol number density (i.e., increased uncertainty due to decreased extinction). To better understand this altitude dependence and
identify altitudes at which the conversion method may be most successfully applied, we evaluated a series of altitude-based filtering criteria. A brief discussion of these criteria, and their impact
on the statistics in Table 1, will be presented prior to continued discussion of Figs. 4 and 5.
Correlation plots (not shown) were generated for each latitude band and each altitude from 12 to 34km (2km wide bins centered every 2km) with corresponding regression statistics to better
understand how the agreement between the backscatter products varied with altitude and latitude and to aid in defining reasonable filtering criteria to mitigate the impact of spurious retrieval
products typically seen at lower and higher altitudes. We observed that data collected between 15 and 31km had higher coefficients of correlation, slopes closer to 1, and a tighter grouping about
the 1:1 line (i.e., fewer outliers in either direction). From this, we defined the altitude-based filtering criteria to only include data collected within the altitude range 15–31km.
As an evaluation of how much influence data outside the 15–31km range had on this analysis an ordinary line of best fit was calculated for each combination of beta values (i.e., β[S(385)] vs. β[S
(520)] and β[S(450)] vs. β[S520]) for the SAGE II and SAGE III missions under two conditions: (1) all available data were used, or (2) only data between 15 and 31km were used. A summary of this
evaluation is presented in Table 1 wherein it is observed that when all data throughout the profile were used the mean slope (0.78–1.02) and mean R^2 (0.87–0.98) had broad ranges, as did the
corresponding standard deviations. However, when the dataset was limited to 15–31km (values in parentheses) the range of mean slopes (0.94–0.99) and mean R^2 (0.95–0.98) decreased significantly, as
did the corresponding standard deviations. It was observed that when the filtering criteria were in place the standard deviation significantly narrowed, in some cases by more than an order of
By considering only the mean slope and mean R^2 values the impact of the filtering criteria is partially masked. The influence of these criteria is better observed by considering the minimum and
maximum values for both slope and R^2 and by considering its impact on the 95th percentile (P[95]). Here, P[95] was calculated using a nontraditional method. For slope, P[95] represents the range
over which 95% of the data fall, centered on the mean. As an example, if the mean slope is 1, how far out from 1 must we go before 95% of the data are captured? This range is not necessarily
symmetrical about the mean since either the minimum or maximum slope may be encountered prior to reaching the 95% level. On the other hand, P[95] for R^2 indicates the lowest R^2 value required to
capture 95% of the data (R^2=1 acting as the upper bound). Indeed, contrasting the full- and filtered-profile P[95] values in Table 1 provides a convincing illustration of the improvement the
filtering criteria had on the comparison.
From this evaluation we conclude that data outside 15–31km significantly influenced the statistics and that the applicability of this conversion method is limited to regions where sufficient signal
is received by the SAGE instruments, namely 15–31km.
Having established an altitude range interval over which the EBC method remains robust we can continue the evaluation of the aggregate statistics as shown in Figs. 4 and 5. To gauge the overall
difference between the products, P[95] values for the absolute percent differences are shown in panels (e) and (f). This is an illustrative statistic in that it shows that, for example, 95% of the
time the β[S(450)] and β[S(520)] products for SAGE II were within 10% of each other at 24km over the Equator. More generally, it is observed that the two products were within ±20% of each other
(all wavelengths for both SAGE II and SAGE III) throughout most of the atmosphere, similar to the R^2 and slope products. Similar to panels (a)–(d), the absolute percent difference has better
agreement between the longer-wavelength products (within ±20% for β[S(450)] and β[S(520)]) and follows a similar contour to that seen in the R^2 (panels a and b) and slopes (panels c and d).
The high R^2 values and slopes are encouraging and we conclude that, throughout most of the lower stratosphere, the calculated backscatter coefficient is independent of SAGE extinction channel
selection. It is noted that the performance of β[S(385)] was limited by comparatively rapid attenuation higher in the atmosphere, thereby limiting applicability of this channel within the EBC
algorithm. Further, we suggest that this attenuation was the driving factor in the worse agreement between β[S(385)] and β[S(520)]. Conversely, β[S(450)] showed better agreement with β[S(520)]
throughout most of the lower stratosphere, leaving two viable extinction ratios for calculating β[SAGE]: 450∕1020 and 520∕1020nm. While the 450nm channel will not be attenuated as high in the
atmosphere as the 385nm channel, it will saturate before the 520nm channel.
In addition to comparing β as a function of extinction wavelength, the algorithm performance can be qualitatively compared between the SAGE II and SAGE III missions. While this comparison is valid,
it must be remembered that the SAGE II record extends over a 20-year period, including impacts from the El Chichón (1982) and Pinatubo (1991) eruptions, which significantly influenced atmospheric
composition. Conversely, the SAGE III mission is currently in its third year and, to date, has had no opportunity to observe the impact of a major volcanic eruption. On the contrary, current
stratospheric conditions have been relatively clean for the past 20 years. The agreement in performance between the two missions is most readily seen by comparing the filtered slope and R^2
statistics in Table 1, wherein it is observed that the differences are statistically insignificant.
From this evaluation we determined that the selection of extinction wavelength combination had a minimal impact on the calculated backscatter products when altitudes are limited to 15–31km (i.e.,
each combination of SAGE wavelengths yielded the same backscatter coefficient within the provided errors). Therefore, we proceed with the current analysis by using the 520∕1020nm combination to
convert SAGE-observed extinction coefficients to backscatter coefficients for comparison with lidar-observed backscatter coefficients.
As with any study that involves modeling PSDs, the dominant sources of uncertainty are in the assumptions of aerosol composition and distribution parameters. Here, the particle number density and
mode radius play a minor role. However, as seen in Fig. 2, the selection of σ[g] has a variable impact. The statistics presented in Sect. 3.1 were calculated using σ[g]=1.5 but are not influenced by
the selection of σ[g] since changing the selection of σ[g] will shift all datasets up or down equally. On the other hand, the accuracy of the method is highly dependent on σ[g]. As an example,
setting σ[g]=1.5 leads to a $+\mathrm{32}/-\mathrm{16}$% uncertainty (compared to σ=1.8 and σ=1.2, respectively) when the extinction ratio equals 6. Since >90% of the stratospheric extinction
ratios do not exceed 6, we consider $+\mathrm{32}/-\mathrm{16}$% to act as a reasonable upper limit of expected uncertainty for this analysis. This uncertainty is depicted in subsequent figures by a
shaded region that represents the extinction-ratio-dependent upper and lower bounds for ${\mathit{\beta }}_{S\left(\mathrm{520},{\mathit{\sigma }}_{\mathrm{g}}=\mathrm{1.2}\right)}$ and ${\mathit{\
beta }}_{S\left(\mathrm{520},{\mathit{\sigma }}_{\mathrm{g}}=\mathrm{1.8}\right)}$ (i.e., for smaller extinction ratios the spread between ${\mathit{\beta }}_{S\left(\mathrm{520},{\mathit{\sigma }}_
{\mathrm{g}}=\mathrm{1.2}\right)}$ and ${\mathit{\beta }}_{S\left(\mathrm{520},{\mathit{\sigma }}_{\mathrm{g}}=\mathrm{1.8}\right)}$ decreased). It was observed that this spread was negligible at
lower altitudes but increased with altitude.
Another challenge in comparing SAGE and lidar observations is the differing viewing geometries. The uncertainty introduced by these differing geometries cannot be easily accounted for. However,
current versions of the algorithm (Damadeo et al., 2013) and previous studies (Ackerman et al., 1989; Cunnold et al., 1989; Oberbeck et al., 1989; Wang et al., 1989; Antuña et al., 2002; Jäger and
Deshler, 2002; Deshler et al., 2003) have taken advantage of the horizontal homogeneity of stratospheric aerosol, which mitigates the impact of differing viewing geometries.
The EBC method was applied to SAGE II and SAGE III datasets for intercomparison with ground-based lidar products. A discussion of the results of each SAGE mission follows.
4.1SAGE II
The SAGE II record spanned over 20 years and had the benefit of observing the impact of two of the largest volcanic eruptions of the 20th century: recovery from El Chichón in 1982 and the full life
cycle of the Mount Pinatubo eruption of 1991, followed by a return to quiescent conditions in the late 1990s. Within this record the extinction and backscatter coefficients spanned nearly 2 orders of
magnitude, providing an interesting case study.
SAGE II data were used to estimate β[355] using the 520∕1020 extinction ratio (Fig. 2). For this comparison, β[SAGE] was calculated on a profile-by-profile basis. These profiles were then used to
calculate zonal monthly means. Likewise, lidar profiles were averaged on a monthly basis for comparison. The time series, at four altitudes, are presented in Figs. 6, 7, and 8 for Table Mountain,
Mauna Loa, and OHP, respectively. The spread in the β[SAGE] value, due to varying results in solving Eq. (4) for differing values of σ[g], is represented by the black shaded time series data. It is
noted that, most of the time, this shaded area is indistinguishable from the black line thickness. Error bars in Figs. 6, 7, and 8 represent the standard error (error on mean). We observed that the
datasets were in qualitatively good agreement at all altitudes, especially when the atmosphere was impacted by the Pinatubo eruption (June 1991–1998).
Statistics for the time series data are presented in Table 2. The data were broken into two time periods: (1) when the signal was perturbed by the Pinatubo eruption (labeled PE in the table, June
1991–December 1997), as defined by Deshler et al. (2003) and Deshler et al. (2006), and (2) periods outside the Pinatubo impact classified as background (labeled BG in the table). As seen in Figs. 6,
7, and 8, the return to background conditions was sooner at higher altitudes, which may influence some statistics in Table 2 since the Pinatubo time period classification (i.e., June 1991–December
1997) was applied to all altitudes. Statistics in this table were calculated using SAGE monthly zonal means and lidar monthly mean values at four altitudes. Percent differences were calculated using
Eq. (5).
$\begin{array}{}\text{(5)}& \text{%Diff}=\mathrm{100}×\frac{{\mathit{\beta }}_{\mathrm{SAGE}}-{\mathit{\beta }}_{\mathrm{Lidar}}}{\mathrm{0.5}\left({\mathit{\beta }}_{\mathrm{SAGE}}+{\mathit{\beta }}
Data collected at 15km showed the worst agreement due to atmospheric opacity and cloud contamination as discussed above. Conversely, the agreement was best at 20 and 25km (percent difference within
≈10%), where the atmosphere was most impacted by the Pinatubo eruption (k[520] increased by ≈2 orders of magnitude), followed by an extended, approximately exponential return to background
conditions in the late 1990s (Deshler et al., 2003, 2006). Beginning in ≈1996, the stability of the lidar signal decreased as the amount of aerosol in the atmosphere decreased, with more significant
fluctuations appearing immediately prior to the 1991 eruption and later in the record. In contrast to the lidar record, the SAGE record remained smooth throughout except at 30km where it showed more
It was observed that during the Pinatubo time period the coefficients of correlation and line-of-best-fit slopes were higher than during background conditions. This was expected behavior for
background conditions for two reasons: (1) in the absence of stratospheric injections the instruments were left to sample the natural stratospheric variability (similar to noise), which limits
correlative analysis outside long-duration climatological trend studies, and (2) the limited dynamic range of the observations essentially provides a correlation between two parallel lines. Overall,
the percent differences for TMO show the two techniques to be in good agreement, with worse agreement occurring at 15km, which was expected due to cloud contamination.
4.1.2Observatoire de Haute-Provence
Unlike TMO, the OHP lidar record did not start until ≈2.5 years into the Pinatubo recovery (similar to MLO). However, SAGE II recorded significantly more profiles over this latitude than over MLO,
leading to a better representation of the zonal aerosol loading throughout the month. The increased differences at 15 and 30km were expected, as discussed above. However, we did not anticipate the
large difference at 25km when the atmosphere was impacted by Pinatubo (−29.44%). After further analysis it was determined that this difference was driven by a single 2.5-year time period that
straddled both the Pinatubo time period and the beginning of the quiescent period (June 1996–January 1999). During this time the two records were consistently in substantial disagreement. This
disagreement can be seen visually in Fig. 7. Removing data from this time period reduced the percent difference to −16.87% (percent difference during background conditions was reduced to +2.90%).
In an attempt to identify the source of this discrepancy we repeated the analysis under different longitude criteria (e.g., instead of doing zonal means we used only SAGE profiles collected within 5,
10, and 20^∘ longitude), weekly means instead of monthly, and adjusted the temporal coincident criteria. The intention of this analysis was to determine whether variability that was local to OHP was
driving the differences. However, we were unable to identify any such local variability and we currently cannot account for this anomaly within the time series.
In addition to β, the OHP data record contained a lidar ratio time series, thereby allowing comparison with the lidar ratio derived from the EBC algorithm. Percent differences for the lidar ratio
comparison are presented in Table 2. The slope and R^2 values were not reported for S because the OHP S value was held static for extended periods of time, making these statistics meaningless.
However, the relative difference retains meaning, and we observe that the percent difference between S values was consistently within 20%. Changes in the lidar ratio due to changing aerosol mode
radii throughout the recovery time period were in agreement with what is expected due to a major volcanic eruption. Indeed, by the end of the SAGE II mission S had recovered from the El Chichón and
Pinatubo eruptions to a value of ≈50–60 at all altitudes as supported by the SAGE-derived S and the estimate used in the lidar retrievals.
4.1.3Mauna Loa
Similar to OHP, the MLO record did not begin until ≈2.5 years into the Pinatubo recovery. Beginning in June 1995 the two datasets began to diverge at 20km (Fig. 8), with the lidar record flattening
out. In contrast, β[SAGE] continued with a quasi-exponential decay until January 1998, in agreement with the other two sites and previously published studies (e.g., Deshler et al., 2003; Thomason
et al., 2018). In January 1998 the lidar signal experienced an anomaly wherein the signal decreased by approximately an order of magnitude. After this time, β[SAGE] was consistently larger than β
[Lidar]. The discrepancy from June 1995 to January 1998 at 20km is currently not understood. However, the sudden change in January 1998 coincides with a new lidar instrument setup.
The statistics in Table 2 show the MLO comparison to be the worst of the three stations (excluding the −29.44% difference at 25km over OHP; conversely, the MLO percent difference at 25km was
relatively small). In addition to the anomalous behavior between 1995 and 1998 the SAGE II instrument experienced relatively few overpasses over Mauna Loa's latitude (19.5^∘N) as seen in Fig. 1a.
Therefore, we suggest that the poor agreement between the two instruments may have been driven by inadequate sampling by SAGE.
4.2SAGE III/ISS
To date, the SAGE III mission has made observations under relatively clean stratospheric conditions similar to conditions at the end of the SAGE II mission. Due to the limited data record (3 years
since launch), the comparison between SAGE III and the Mauna Loa and OHP lidars will be cursory. Data from the Table Mountain Facility have not been released for this time period; therefore,
Table Mountain was excluded from the current analysis.
The SAGE III and lidar backscatter coefficients show similar qualitative agreement at both Mauna Loa and OHP (Figs. 9 and 10, respectively), similar to what was observed in the SAGE II comparison
(vide supra). During the SAGE III mission the atmosphere has been relatively stable, with a minor increase in backscatter and aerosol extinction in late 2017 due to a significant pyrocumulonimbus
(pyroCB, indicated by the vertical line in the figures) event in northwestern Canada, which was comparable to a moderate volcanic eruption (Peterson et al., 2018). Smoke from the pyroCB was clearly
visible over midlatitude sites like OHP (Khaykin et al., 2017), while there was no clear evidence of significant aerosol loading at low latitudes (i.e., over Mauna Loa). However, Chouza et al. (2020)
showed a small increase in stratospheric aerosol optical depth over Mauna Loa during this period.
Similar to the end of the SAGE II record, calculation of a meaningful R^2 value is likewise challenging when the variability is small. Further, getting good agreement between extremely low β values
is challenging since small fluctuations have a disproportionate impact on the overall percent difference (see Table 3). However, this may be indicative of two possible conclusions: (1) the EBC method
has limited applicability to background conditions, and (2) the precision and accuracy of SAGE III or the ground-based lidar are too limited to make meaningful measurements during background
conditions. The validity of option 1 can be challenged with the SAGE II record (compare to Table 2) wherein it was shown that the background percent differences were generally better than during the
Pinatubo period. Therefore, it would seem that the EBC method is applicable to quiescent periods. Considering the precision of the SAGE instruments and the number of lidar validation and
intercomparison campaigns, the possibility of option 2 being valid seems unlikely.
4.2.1Overall impression
For the SAGE II instrument the derived β[SAGE] products generally had high coefficients of correlation and slopes near 1 when compared with the lidar-derived products, especially in the 20–25km
range during background conditions. While agreement was consistently good in the 20 and 25km bins (within 5%), the agreements consistently diverged in the 15 and 30km bins for both the Pinatubo
and background time periods. The divergence at 15km is likely attributable to optical depth and cloud contamination, but the divergence at 30km is not as easily explained. Indeed, it may be partly
caused by a lack of return signal in the lidar and lack of optical depth for SAGE (though this is generally not a challenge for SAGE instruments at this altitude). We note that this divergence was
modest ($±\approx \mathrm{20}$%) over TMO and MLO, where β was calculated using the BSR technique. However, the divergence was significantly larger over OHP where β was calculated via the
Fernald–Klett method, the lidar ratio was held constant, and the atmosphere is considered aerosol-free from 30 to 33km. We suggest that this highlights the sensitivity of Fernald–Klett to
atmospheric variability and the need to make an informed selection of lidar ratio.
Perhaps the most striking feature of this analysis is how well the SAGE-derived backscatter coefficient agreed with the lidar record during the early stages of the Pinatubo eruption (Fig. 6) when
particle shape and composition deviated significantly from our initial assumptions (Sheridan et al., 1992). This seems to indicate that using an extinction ratio may compensate for
mischaracterization of size and composition assumptions within our model. However, further evaluation involving major volcanic eruptions is required to better understand whether this agreement is
fortuitous or the EBC algorithm is actually insensitive to aerosol composition and shape.
The calculated S at each site was in good agreement with values calculated by Jäger et al. (1995). Immediately prior to the eruption S was approximately 40–45 for the lowermost altitudes
(tropopause–20km) and slightly higher (50–60) in the 25–30km altitudes. This was followed by a quick decrease after the eruption of Pinatubo, down to values of 20 in the Jäger dataset, with our
calculated value being slightly lower. Overall, the calculated S shows good agreement with the Jäger dataset in both magnitude and trend with altitude. Other studies that did not overlap with either
SAGE II or SAGE III have shown similar S values to those calculated here (Bingen et al., 2017; Painemal et al., 2019).
A method of converting SAGE extinction ratios to backscatter coefficient (β) profiles was presented. The method invoked Mie theory as the conduit from extinction to backscatter space and required
assumptions on particle shape (spherical), composition (75% water, 25% sulfuric acid), and distribution shape (single-mode lognormal with distribution width σ of 1.5). The general behavior of the
model as a function of σ was briefly considered (Fig. 2 and Sect. 3). It was demonstrated that, due to improper selection of σ, the corresponding β value could be off by up to $+\mathrm{32}/-\mathrm
{16}$% when the extinction ratio exceeds 6, but >90% of the SAGE II and SAGE III records had extinction ratios <6.
A major finding of this research was the demonstration of the robustness of the conversion method. It was shown that, within the specified error bars, the calculation of β was independent of SAGE
wavelength combination (Sect. 3.1). Further, we showed that when altitude was limited to 15–31km the robustness improved significantly (Table 1). Therefore, we recommend limiting the use of this
conversion method to this altitude range unless appropriate modifications can be made to improve the consistency of its performance at higher or lower altitudes. Such improvements may include cloud
screening at low altitudes and appropriate adjustment of size distribution parameters at higher altitudes.
The robustness of the conversion method provides an indirect validation of the SAGE aerosol spectra. If the EBC method were wavelength dependent, this would indicate a substantial error in the
standard aerosol products. However, our evaluation showed that the EBC is not wavelength dependent, thereby lending credence to the SAGE aerosol product wavelength assignment.
It was shown that, overall, the SAGE II-derived β product was in good agreement with the lidar data during both background (percent difference within ≈10%) and elevated (percent difference within
≈10–20% depending on location) conditions. Indeed, we showed that this agreement was altitude dependent, with better agreement in the middle stratosphere and worse agreement at lower (15km) and
upper (30km) altitudes. The reason for this divergence was attributed to increased optical depth and cloud contamination at low altitudes and decreased aerosol load at higher altitudes. The lack of
optical depth at high altitudes is less of a problem for SAGE than for the lidar. This is fundamentally due to the differing viewing geometries: SAGE retains a long observation path length at 30km,
while the lidar instrument relies on few photons being backscattered at 30km. Further, all scattered photons must re-pass through the most dense portion of the atmosphere (without being absorbed or
scattered) prior to impinging on the lidar detector. This limitation is most readily observed by considering how the noise and variance increase with altitude in a lidar profile.
For the SAGE III analysis only OHP and MLO were available for comparison. The SAGE III-derived β product showed worse agreement than the SAGE II data. The lower R^2 values were attributed to a lack
of variability within the data records (e.g., R^2 of parallel straight lines is 0). However, the larger percent differences may have been due to the magnitude of the backscatter values (e.g., small
differences such as $\mathrm{2}×{\mathrm{10}}^{-\mathrm{10}}$ for small numbers such as $\mathrm{1}×{\mathrm{10}}^{-\mathrm{10}}$ yield large percent differences – here, 100%). Another consideration
is that the SAGE III record, to date, is short compared to SAGE II, and the lidar coverage within the SAGE III time period is approximately 1 year, further limiting the intercomparison. As the record
expands (possibly including observations of moderate to major volcanic events) we expect the comparison with the lidar data to improve.
A potential application of this method is informing lidar ratio (S) selection for lidar observations. As an example, processing for the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite
Observation (CALIPSO) lidar currently assumes a static lidar ratio (50sr) for all latitudes and all altitudes. As was recently shown by Kar et al. (2019), CALIPSO extinction products have an
altitudinal and latitudinal dependence compared to SAGE III. Providing better S values may improve this agreement and may be beneficial in processing CALIPSO β products as well.
Another application of this method may be providing global backscatter profiles independent of a space-based lidar such as CALIPSO. While we do not suggest that SAGE-derived backscatter coefficients
can replace lidar observations, our product may be a viable alternative. With CALIPSO scheduled for decommissioning no later than 2023 (Mark Vaughan, personal communication, 2020) and no replacement
scheduled for flight prior to its decommissioning date, the SAGE III backscatter product may provide a necessary link between CALIPSO and the next space-based lidar to ensure continuity of the record
and provide a method of evaluating the performance of the next-generation orbiting lidar in the context of the SAGE III record and, by association, CALISPO.
TNK and LT developed the methodology, while TNK carried out the analysis, wrote the analysis code, and wrote the paper. TL and FC provided lidar data collected at TMF and MLO and assisted in the
description of this data product in the paper. SK and SGB provided lidar data collected over OHP and assisted in the description of this data product in the paper. MR, RD, KL, and DF participated in
scientific discussions and provided guidance throughout the study. All authors reviewed the paper during the preparation process.
The authors declare that they have no conflict of interest.
This article is part of the special issue “New developments in atmospheric Limb measurements: Instruments, Methods and science applications”. It is a result of the 10th international limb workshop,
Greifswald, Germany, 4–7 June 2019.
SAGE III/ISS is a NASA Langley managed mission funded by the NASA Science Mission Directorate within the Earth Systematic Mission Program. The enabling partners are the NASA Human Exploration and
Operations Mission Directorate, the International Space Station Program, and the European Space Agency. SSAI personnel are supported through the STARSS III contract NNL16AA05C.
This research has been supported by the NASA Science Mission Directorate within the Earth Systematic Mission Program, the NASA Human Exploration and Operations Mission Directorate, the International
Space Station Program, the European Space Agency, and STARSS III (grant no. NNL16AA05C).
This paper was edited by Chris McLinden and reviewed by three anonymous referees.
Ackerman, M., Brogniez, C., Diallo, B. S., Fiocco, G., Gobbi, P., Herman, M., Jager, M., Lenoble, J., Lippens, C., Mégie, G., Pelon, J., Reiter, R., and Santer, R.: European validation of SAGE II
aerosol profiles, J. Geophys. Res.-Atmos., 94, 8399–8411, https://doi.org/10.1029/JD094iD06p08399, 1989.a
Antuña, J., Robock, A., Stenchikov, G., Thomason, L., and Barnes, J.: Lidar validation of SAGE II aerosol measurements after the 1991 Mount Pinatubo eruption, J. Geophys. Res.-Atmos., 107, 4194,
https://doi.org/10.1029/2001JD001441, 2002.a, b
Antuña, J. C., Robock, A., Stenchikov, G., Zhou, J., David, C., Barnes, J., and Thomason, L.: Spatial and temporal variability of the stratospheric aerosol cloud produced by the 1991 Mount Pinatubo
eruption, J. Geophys. Res.-Atmos., 108, 4624, https://doi.org/10.1029/2003JD003722, 2003.a
Bingen, C., Fussen, D., and Vanhellemont, F.: A global climatology of stratospheric aerosol size distribution parameters derived from SAGE II data over the period 1984–2000: 1. Methodology and
climatological observations, J. Geophys. Res.-Atmos., 109, D06201, https://doi.org/10.1029/2003JD003518, 2004.a
Bingen, C., Robert, C. E., Stebel, K., Brüehl, C., Schallock, J., Vanhellemont, F., Mateshvili, N., Höpfner, M., Trickl, T., Barnes, J. E., Jumelet, J., Vernier, J.-P., Popp, T., de Leeuw, G., and
Pinnock, S.: Stratospheric aerosol data records for the climate change initiative: Development, validation and application to chemistry-climate modelling, Remote Sens. Environ., 203, 296–321, https:/
/doi.org/10.1016/j.rse.2017.06.002, 2017.a
Bohren, C. F. and Huffman, D. R.: Absorption and Scattering of Light by Small Particles, Wiley Interscience, New York, New York, United States, 1983.a, b
Chagnon, C. and Junge, C.: The Vertical Distribution of Sub-Micron Particles in the Stratosphere, J. Meteorol., 18, 746–752, https://doi.org/10.1175/1520-0469(1961)018<0746:TVDOSM>2.0.CO;2, 1961.a
Chouza, F., Leblanc, T., Barnes, J., Brewer, M., Wang, P., and Koon, D.: Long-term (1999–2019) variability of stratospheric aerosol over Mauna Loa, Hawaii, as seen by two co-located lidars and
satellite measurements, Atmos. Chem. Phys., 20, 6821–6839, https://doi.org/10.5194/acp-20-6821-2020, 2020.a, b
Chu, W. and McCormick, M.: Inversion of Stratospheric Aerosol and Gaseous Constituents from Spacecraft Solar Extinction Data in the 0.38-1.0-μm Wavelength Region, Appl. Optics, 18, 1404–1413, https:/
/doi.org/10.1364/AO.18.001404, 1979.a
Cisewski, M., Zawodny, J., Gasbarre, J., Eckman, R., Topiwala, N., Rodriguez-Alvarez, O., Cheek, D., and Hall, S.: The Stratospheric Aerosol and Gas Experiment (SAGE III) on the International Space
Station (ISS) Mission, in: Sensors, Systems, and Next-Generation Satellites XVIII, Conference on Sensors, Systems, and Next-Generation Satellites XVIII, Amsterdam, Netherlands, 22–25 September 2014,
edited by: Meynart, R., Neeck, S. P., and Shimoda, H., vol. 9241 of Proceedings of SPIE, SPIE, https://doi.org/10.1117/12.2073131, 2014.a
Cunnold, D. M., Chu, W. P., Barnes, R. A., McCormick, M. P., and Veiga, R. E.: Validation of SAGE II ozone measurements, J. Geophys. Res.-Atmos., 94, 8447–8460, https://doi.org/10.1029/
JD094iD06p08447, 1989.a
Damadeo, R. P., Zawodny, J. M., Thomason, L. W., and Iyer, N.: SAGE version 7.0 algorithm: application to SAGE II, Atmos. Meas. Tech., 6, 3539–3561, https://doi.org/10.5194/amt-6-3539-2013, 2013.a,
Deshler, T., Hervig, M., Hofmann, D., Rosen, J., and Liley, J.: Thirty years of in situ stratospheric aerosol size distribution measurements from Laramie, Wyoming (41 degrees N), using balloon-borne
instruments, J. Geophys. Res.-Atmos., 108, https://doi.org/10.1029/2002JD002514, 4167, 2003.a, b, c, d, e, f
Deshler, T., Anderson-Sprecher, R., Jäger, H., Barnes, J., Hofmann, D., Clemesha, B., Simonich, D., Osborn, M., Grainger, R., and Godin-Beekmann, S.: Trends in the nonvolcanic component of
stratospheric aerosol over the period 1971-2004, J. Geophys. Res.-Atmos., 111, D01201, https://doi.org/10.1029/2005JD006089, 2006.a, b
Fernald, F.: Analysis of Atmospheric Lidar Observations – some comments, Appl. Optics, 23, 652–653, https://doi.org/10.1364/AO.23.000652, 1984.a
Fussen, D., Vanhellemont, F., and Bingen, C.: Evolution of stratospheric aerosols in the post-Pinatubo period measured by solar occultation, Atmos. Environ., 35, 5067–5078, https://doi.org/10.1016/
S1352-2310(01)00325-9, 2001.a
Godin-Beekmann, S., Porteneuve, J., and Garnier, A.: Systematic DIAL lidar monitoring of the stratospheric ozone vertical distribution at Observatoire de Haute-Provence (43.92°N, 5.71°E), J. Environ.
Monitor., 5, 57–67, https://doi.org/10.1039/b205880d, 2003.a
Gross, M., McGee, T., Singh, U., and Kimvilakani, P.: Measurements of Stratospheric Aerosols with a Combined Elastic Raman-Backscatter Lidar, Appl. Optics, 34, 6915–6924, https://doi.org/10.1364/
AO.34.006915, 1995.a
Hansen, J. and Travis, L.: Light-Scattering in Planetary Atmospheres, Space Sci. Rev., 16, 527–610, https://doi.org/10.1007/BF00168069, 1974.a
Hansen, J. E. and Matsushima, S.: Light illuminance and color in the Earth's shadow, J. Geophys. Res., 71, 1073–1081, https://doi.org/10.1029/JZ071i004p01073, 1966.a
Heintzenberg, Jost: Size-segregated measurements of particulate elemental carbon and aerosol light absorption at remote arctic locations, Atmos. Environ., 16, 2461–2469, https://doi.org/10.1016/
0004-6981(82)90136-6, 1982.a
Hitzenberger, R. and Rizzi, R.: Retrieved and measured aerosol mass size distributions: a comparison, Appl. Optics, 25, 546–553, https://doi.org/10.1364/AO.25.000546, 1986.a
Jäger, H. and Deshler, T.: Lidar backscatter to extinction, mass and area conversions for stratospheric aerosols based on midlatitude balloonborne size distribution measurements, Geophys. Res. Lett.,
29, 1929, https://doi.org/10.1029/2002GL015609, 2002.a, b
Jäger, H. and Hofmann, D.: Midlatitude lidar backscatter to mass, area, and extinction conversion model based on insitu aerosol measurements from 1980 to 1987, Appl. Optics, 30, 127–138, 1991.a, b
Jäger, H., Deshler, T., and Hofmann, D.: Midlatitude lidar backscatter conversions based on balloonborne aerosol measurements, Geophys. Res. Lett., 22, 1729–1732, https://doi.org/10.1029/95GL01521,
1995.a, b
Kar, J., Lee, K.-P., Vaughan, M. A., Tackett, J. L., Trepte, C. R., Winker, D. M., Lucker, P. L., and Getzewich, B. J.: CALIPSO level 3 stratospheric aerosol profile product: version 1.00 algorithm
description and initial assessment, Atmos. Meas. Tech., 12, 6173–6191, https://doi.org/10.5194/amt-12-6173-2019, 2019.a, b, c, d
Kent, G. S., Trepte, C. R., Skeens, K. M., and Winker, D. M.: LITE and SAGE II measurements of aerosols in the southern hemisphere upper troposphere, J. Geophys. Res.-Atmos., 103, 19111–19127, https:
//doi.org/10.1029/98JD00364, 1998.a
Kerker, M.: The Scattering of Light and other electromagnetic radiation, Academic Press, New York, New York, United States, 1969.a, b
Khaykin, S. M., Godin-Beekmann, S., Keckhut, P., Hauchecorne, A., Jumelet, J., Vernier, J.-P., Bourassa, A., Degenstein, D. A., Rieger, L. A., Bingen, C., Vanhellemont, F., Robert, C., DeLand, M.,
and Bhartia, P. K.: Variability and evolution of the midlatitude stratospheric aerosol budget from 22 years of ground-based lidar and satellite observations, Atmos. Chem. Phys., 17, 1829–1845, https:
//doi.org/10.5194/acp-17-1829-2017, 2017.a, b, c
Klett, J.: LIDAR Inversion with Variable Backscatter Extinction Ratios, Appl. Optics, 24, 1638–1643, https://doi.org/10.1364/AO.24.001638, 1985.a
Kovilakam, M. and Deshler, T.: On the accuracy of stratospheric aerosol extinction derived from in situ size distribution measurements and surface area density derived from remote SAGE II and HALOE
extinction measurements, J. Geophys. Res.-Atmos., 120, 8426–8447, https://doi.org/10.1002/2015JD023303, 2015.a
Kremser, S., Thomason, L. W., von Hobe, M., Hermann, M., Deshler, T., Timmreck, C., Toohey, M., Stenke, A., Schwarz, J. P., Weigel, R., Fueglistaler, S., Prata, F. J., Vernier, J.-P., Schlager, H.,
Barnes, J. E., Antuna-Marrero, J.-C., Fairlie, D., Palm, M., Mahieu, E., Notholt, J., Rex, M., Bingen, C., Vanhellemont, F., Bourassa, A., Plane, J. M. C., Klocke, D., Carn, S. A., Clarisse, L.,
Trickl, T., Neely, R., James, A. D., Rieger, L., Wilson, J. C., and Meland, B.: Stratospheric aerosol-Observations, processes, and impact on climate, Rev. Geophys., 54, 278–335, https://doi.org/
10.1002/2015RG000511, 2016.a, b
Lu, C.-H., Yue, G. K., Manney, G. L., Jager, H., and Mohnen, V. A.: Lagrangian approach for Stratospheric Aerosol and Gas Experiment (SAGE) II profile intercomparisons, J. Geophys. Res.-Atmos., 105,
4563–4572, https://doi.org/10.1029/1999JD901077, 2000.a
Mauldin, L., Zaun, N., McCormick, M., Guy, J., and Vaughn, W.: Stratospheric Aerosol and Gas Experiment-II Instrument – A Functional Description, Opt. Eng., 24, 307–312, https://doi.org/10.1117/
12.7973473, 1985.a
McCormick, M., Thomason, L., and Trepte, C.: Atmospheric Effects of the Mt-Pinatubo Eruption, Nature, 373, 399–404, https://doi.org/10.1038/373399a0, 1995.a
McDermid, I. S.: NDSC and the JPL Stratospheric Lidars, The Review of Laser Engineering, 23, 97–103, https://doi.org/10.2184/lsj.23.97, 1995.a
McDermid, I. S., Godin, S. M., and Lindqvist, L. O.: Ground-based laser DIAL system for long-term measurements of stratospheric ozone, Appl. Optics, 29, 3603–3612, https://doi.org/10.1364/
AO.29.003603, 1990a.a
McDermid, I. S., Godin, S. M., and Walsh, T. D.: Lidar measurements of stratospheric ozone and intercomparisons and validation, Appl. Optics, 29, 4914–4923, https://doi.org/10.1364/AO.29.004914,,
McDermid, I. S., Walsh, T. D., Deslis, A., and White, M. L.: Optical systems design for a stratospheric lidar system, Appl. Optics, 34, 6201–6210, https://doi.org/10.1364/AO.34.006201, 1995.a
Murphy, D., Thomson, D., and Mahoney, T.: In situ measurements of organics, meteoritic material, mercury, and other elements in aerosols at 5 to 19 kilometers, Science, 282, 1664–1669, https://
doi.org/10.1126/science.282.5394.1664, 1998.a, b
NASA's Atmospheric Science Data Center: SAGE II and SAGE III/ISS database, available at: https://eosweb.larc.nasa.gov/project/SAGE III-ISS, last access: 10 August 2020.a
NDACC: Stratospheric aerosol database, available at: https://www.ndacc.org/, last access: 10 August 2020.a
Northam, G., Rosen, J., Melfi, S., Pepin, T., McCormick, M., Hofmann, D., and Fuller, W.: Dustsonde and Lidar Measurements of Stratospheric Aerosols: a Comparison, Appl. Optics, 13, 2416–2421, https:
//doi.org/10.1364/AO.13.002416, 1974.a
Oberbeck, V. R., Livingston, J. M., Russell, P. B., Pueschel, R. F., Rosen, J. N., Osborn, M. T., Kritz, M. A., Snetsinger, K. G., and Ferry, G. V.: SAGE II aerosol validation: Selected altitude
measurements, including particle micromeasurements, J. Geophys. Res.-Atmos., 94, 8367–8380, https://doi.org/10.1029/JD094iD06p08367, 1989.a
Osborn, M. T., Kent, G. S., and Trepte, C. R.: Stratospheric aerosol measurements by the Lidar in Space Technology Experiment, J. Geophys. Res.-Atmos., 103, 11447–11453, https://doi.org/10.1029/
97JD03429, 1998.a
Painemal, D., Clayton, M., Ferrare, R., Burton, S., Josset, D., and Vaughan, M.: Novel aerosol extinction coefficients and lidar ratios over the ocean from CALIPSO–CloudSat: evaluation and global
statistics, Atmos. Meas. Tech., 12, 2201–2217, https://doi.org/10.5194/amt-12-2201-2019, 2019.a
Palmer, K. and Williams, D.: Optical-Constants of Sulfuric-Acid - Application to Clouds of Venus, Appl. Optics, 14, 208–219, https://doi.org/10.1364/AO.14.000208, 1975.a
Peterson, D., R. Campbell, J., Hyer, E., D. Fromm, M., P. Kablick, G., H. Cossuth, J., and Deland, M.: Wildfire-driven thunderstorms cause a volcano-like stratospheric injection of smoke, npj Climate
and Atmospheric Science, 1, 30, https://doi.org/10.1038/s41612-018-0039-3, 2018.a
Pitts, M. and Thomason, L.: The Impact of the Eruptions of Mount-Pinatubo and Cerro Hudson on Antarctic Aerosol Levels During the 1991 Austral Spring, Geophys. Res. Lett., 20, 2451–2454, https://
doi.org/10.1029/93GL02160, 1993. a
Pueschel, R. F., Russell, P. B., Allen, D. A., Ferry, G. V., Snetsinger, K. G., Livingston, J. M., and Verma, S.: Physical and optical properties of the Pinatubo volcanic aerosol: Aircraft
observations with impactors and a Sun-tracking photometer, J. Geophys. Res.-Atmos., 99, 12915–12922, https://doi.org/10.1029/94JD00621, 1994.a
Sheridan, P., Schnell, R., Hofmann, D., and Deshler, T.: Electron-Microscope Studies of Mt-Pinatubo Aerosol Layers over Laramie, Wyoming during Summer 1991, Geophys. Res. Lett., 19, 203–206, https://
doi.org/10.1029/91GL02789, 1992.a
Thomason, L.: Observations of a New SAGE-II Aerosol Extinction Mode Following the Eruption of Mt-Pinatubo, Geophys. Res. Lett., 19, 2179–2182, https://doi.org/10.1029/92GL02185, 1992.a
Thomason, L. and Osborn, M.: Lidar Conversion Parameters Derived from SAGE-II Extinction Measurements, Geophys. Res. Lett., 19, 1655–1658, https://doi.org/10.1029/92GL01619, 1992.a
Thomason, L. W., Ernest, N., Millán, L., Rieger, L., Bourassa, A., Vernier, J.-P., Manney, G., Luo, B., Arfeuille, F., and Peter, T.: A global space-based stratospheric aerosol climatology:
1979–2016, Earth Syst. Sci. Data, 10, 469–492, https://doi.org/10.5194/essd-10-469-2018, 2018.a
Vernier, J. P., Thomason, L. W., Pommereau, J. P., Bourassa, A., Pelon, J., Garnier, A., Hauchecorne, A., Blanot, L., Trepte, C., Degenstein, D., and Vargas, F.: Major influence of tropical volcanic
eruptions on the stratospheric aerosol layer during the last decade, Geophys. Res. Lett., 38, L12807, https://doi.org/10.1029/2011GL047563, 2011.a
Wandinger, U., Ansmann, A., Reichardt, J., and Deshler, T.: Determination of stratospheric aerosol microphysical properties from independent extinction and backscattering measurements with a Raman
lidar, Appl. Optics, 34, 8315–8329, https://doi.org/10.1364/AO.34.008315, 1995.a
Wang, P.-H., McCormick, M. P., McMaster, L. R., Chu, W. P., Swissler, T. J., Osborn, M. T., Russell, P. B., Oberbeck, V. R., Livingston, J., Rosen, J. M., Hofmann, D. J., Grams, G. W., Fuller, W. H.,
and Yue, G. K.: SAGE II aerosol data validation based on retrieved aerosol model size distribution from SAGE II aerosol measurements, J. Geophys. Res.-Atmos., 94, 8381–8393, https://doi.org/10.1029/
JD094iD06p08381, 1989.a
Wilka, C., Shah, K., Stone, K., Solomon, S., Kinnison, D., Mills, M., Schmidt, A., and Neely, III, R. R.: On the Role of Heterogeneous Chemistry in Ozone Depletion and Recovery, Geophys. Res. Lett.,
45, 7835–7842, https://doi.org/10.1029/2018GL078596, 2018.a | {"url":"https://amt.copernicus.org/articles/13/4261/2020/amt-13-4261-2020.html","timestamp":"2024-11-08T14:28:02Z","content_type":"text/html","content_length":"350545","record_id":"<urn:uuid:04531770-ec8c-4517-8059-74cda55602a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00317.warc.gz"} |
Towards a Vectorial Approach to Predict Beef Farm Performance
Department of Veterinary Sciences, University of Torino, Largo Paolo Braccini 2, 10095 Grugliasco, Italy
Associazione Nazionale Allevatori Bovini Razza Piemontese, 12061 Carru, Italy
NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
Author to whom correspondence should be addressed.
Submission received: 9 December 2021 / Revised: 9 January 2022 / Accepted: 16 January 2022 / Published: 21 January 2022
Accurate livestock management can be achieved by means of predictive models. Critical factors affecting the welfare of intensive beef cattle husbandry systems can be difficult to be detected, and
Machine Learning appears as a promising approach to investigate the hundreds of variables and temporal patterns lying in the data. In this article, we explore the use of Genetic Programming (GP) to
build a predictive model for the performance of Piemontese beef cattle farms. In particular, we investigate the use of vectorial GP, a recently developed variant of GP, that is particularly suitable
to manage data in a vectorial form. The experiments conducted on the data from 2014 to 2018 confirm that vectorial GP can outperform not only the standard version of GP but also a number of
state-of-the-art Machine Learning methods, such as k-Nearest Neighbors, Generalized Linear Models, feed-forward Neural Networks, and long- and short-term memory Recurrent Neural Networks, both in
terms of accuracy and generalizability. Moreover, the intrinsic ability of GP in performing an automatic feature selection, while generating interpretable predictive models, allows highlighting the
main elements influencing the breeding performance.
1. Introduction
A large amount of data are nowadays collected in the livestock sector [
]. It is increasingly common to monitor animals, for greater accuracy in the quantity and quality of information, to achieve economic and environmental sustainability of farms. The breeder must
generally deal with animals’ issues, such as their health conditions and social behavior, affecting the quality of the product, the animals’ welfare, and the performance of the farm. The digitization
of data collection made it possible to streamline and accelerate the procedures of data collection and processing over time, permitting the registration and consequent elaboration of many additional
data, going under the name of Precision Livestock Farming (PLF) [
]. The resulting knowledge, processed through mathematical and computational models, may provide the offset of overall incurred costs of the farm, as relevant issues are identified in advance,
allowing decisions to be made in time [
]. The major consequence of a continuous monitoring of animals is a huge amount of data, the so-called “Big Data”, contributing to an increase in the complexity among databases. If, on the one side,
the PLF approach aims at a greater “accuracy” in the quantity and quality of information, entailing the development of monitoring systems, on the other side, it must deal with the transformation of
big data into meaningful information. The increase in the amount of data requires the introduction of proper data management and prediction techniques. Machine Learning (ML) is based on the
availability of large amounts of information and on computing power. Rather than making a priori assumptions and following pre-programmed algorithms, ML allows the system to learn from data. Hence,
this field of research is suitable for the management of large datasets, without assuming too specific nor restrictive hypotheses among data [
]. ML is commonly used to predict livestock issues [
], such as time of disease events, risk factors for health conditions, failure to complete a production cycle, as well as the genome of complex traits [
]. Despite the biological process being too complex to replace farmers with technology, it still offers more possibilities to save money and to change farmers’ lives, as a more accurate management
system can be achieved, leading to a better approach to the genetic potential of today’s livestock species [
]. Studies have been conducted, based on the application of ML techniques, to model the individual intake of cow feed, optimizing health and fertility, to predict the rumen fermentation pattern from
milk fatty acids, which influence the quantity and composition of the milk produced but also the sensorial and technological characteristics of the meat [
A great variety of studies is available in the milk sector, as opposed to the meat sector, where the use of devices is still moderate. Within the beef cattle sector, it fits the Italian
Piemontese, raised in intensive farms mostly concentrated in the Italian region of Piedmont [
]. Its preservation is guaranteed by the National Association of Piemontese Cattle Breeders, ANABORAPI in brief. This Association is hence responsible for promoting the breed through the study of the
productive, reproductive, and management processes of Piemontese breeding. The direction towards which modern Piemontese breeding aims is the production of calves for fattening. To maximize revenues,
it is therefore essential that each dam produces as many calves as possible during her productive career, in full respect of her physiology. The indicator parameter of a cow’s reproductive efficiency
is represented by the “calf quota” per cow, derived form the calving interval. If well managed, the current Piemontese cow can produce and raise almost one calf per year, not to mention twin births.
The reproductive capacity of the cows that lodge on the farm significantly affects the farmer’s income. Damage derives from the loss of income from the failure to give birth to calves and from the
cost of feeding the cows. In this direction, great strides have been made, above all, with regard to the selection of animals capable of giving birth well and calves that are not excessively large
but are able to develop excellent growth. These are the aspects that improve the breed’s aptitude for giving birth. Since the process to improve calving ease is slow, it is also necessary to take
advantage of all the technical and managerial factors of the herd that can affect the trend of births on the farm. Cow management, in terms of feed and type of housing, a correct choice of mating,
the possibility of having a suitable environment where to give birth, and knowledge about birth events, allows the farmer to set the conditions necessary for the optimal performance of this event. It
is obvious that, among other things, the calving depends strictly on the fertility of the cow. Among the possible causes of a herd’s fertility reduction, those intrinsic to feeding system,
infectious, hygienic-sanitary, or endocrine-gynecological ones and those of environmental nature are of major importance. Not to forget that all stressful conditions, such as uncomfortable housing,
insufficient lighting, and crumbling shelters, can negatively affect fertility and, therefore, the calving. Indeed, the free housing allows a greater mobility and a greater exposure to light, with a
positive influence on biological activity and, consequently, recovery after birth.
In order to investigate the production of Piemontese calves and to understand the mechanisms of breeding performance, we analyzed the corresponding data and model in two pilot studies [
]. A Genetic Programming (GP) approach, whose results were compared with other common ML methods, was adopted, as generated models are resumed in accessible and interpretable expressions, and they
extract critical information, i.e., informative attributes. In both studies [
], it was possible to simplify the candidate models, to obtain clear and intelligible expressions, and to analyze the features extracted by the algorithm [
]. These interesting aspects of GP, in particular the intrinsic feature selection ability, encouraged a deeper investigation into the scenario offered by this family of algorithms to search for
possible ways to improve the predictive capacity of the generated models. Moreover, it would reasonably be more beneficial to exhaust the available information left previously unexploited. Indeed,
data recorded in the years prior to the target year were not involved in the prediction itself, as the previously considered methods can only handle a point variables, and they were only used to
determine the pool of representative farms. In fact, due to their structures, they could only exploit punctual data extracted from one year, targeting the following year. To clarify, it is not
impossible to deal with past data. The sequences can be split into different observations in order to maintain the structure of a panel dataset, but the algorithms cannot detect the temporal
patterns, as in this case, the observations would be treated as distinct instances [
]. Of necessity, the strategy entails the loss of valuable information useful to predict the corresponding target. So far, data from 2017 were used exclusively with targets on 2018. In order to
properly tackle the prediction, instead of incorporating the data into a standard panel (see
Table 1
), in this study, we encapsulate all the values recorded over the years, for each variable, into vectors. Stated otherwise, we introduced the vectorial variables containing data from 2014 to 2017 as
input, while targeting the same values in 2018. We opted for this approach since GP was recently developed as Vectorial-Genetic Programming (VE-GP), offering indeed the possibility to exploit vectors
as well as scalars and looking promising as its flexibility allows for tackling many different tasks [
]. Consequently, we decided to investigate the usefulness of VE-GP among the breeding farms used in [
]. More specifically, we compared the VE-GP approach with Standard-Genetic Programming (ST-GP) and other state-of-the-art ML techniques, including Long Short-Term Memory (LSTM) recurrent neural
networks. This study is presented here for the first time.
The article is organized as follows: In
Section 2
, the application background is discussed, also highlighting the main limits of the prediction methods that have been used so far. The dataset is analyzed, and the basic steps to prepare the
benchmark are also described. Afterwards, the ST- and VE-GP approaches, as well as the other studied ML methods, are presented. The obtained results, achieved by all applied ML methods, are provided
Section 3
, with particular emphasis on the features selected by the two GP-based methods. The experimental comparisons are discussed in
Section 4
. Finally,
Section 5
concludes the work, also proposing ideas for further developments.
2. Materials and Methods
2.1. Aim and Scope
The model that is currently used estimates the number of calves born alive produced per cow per year [
]. It is a classic statistical model, formulated based on zootechnical hypotheses, and it incorporates two variables extracted from the information of the single farm: the average
calving interval (intp)
and the average calves
mortality at birth (m)
, i.e., perinatal mortality:
$Y p = 365 i n t p 1 − m 100 .$
Calving and mortality detected on the farm at birth are combined through a model that provides the calf quota as a performance measure. However, it is reductive to measure breeding performance by
observing only fertility and maternal conditions. As previously exposed in [
], gains and losses in farms are not exclusively related to the calving but are often deeply influenced by the calf development after the first 24 h following the birth. The calf, on its side, goes
through evolutionary stages that depend on its own condition. The phases immediately following birth, i.e., the intake of colostrum and the healthiness of the environment in which it lives, are of
paramount importance. The physiological development process of the animal reaches completion in 60 days after birth. Calf mortality is also an important cause of economic damages in Piemontese cattle
farms: For the farmer, it represents the loss of the economic value of the calf and the reduction in both the herd’s genetic potential and size. It is straightforward that the gestational phase alone
is not exhaustive. The breeding performance should be modeled also considering neonatal mortality, outlining the calf’s ability to survive, and the sources of stress such as congenital calf’s
defects, eventually compromising the immune response and the growth rate, as well as environmental and food conditions, that affect the quality of life of the newborn.
2.2. The Dataset
Farms exhibiting continuous visits over a reasonable period, e.g., five years, were acquired. Constant recordings between 2014 and 2019 were then considered [
]. As a result, farms whose activity started recently were discarded from the study, as their management still could not be completely defined. Similarly, herds resigned between 2014 and 2019 were
excluded to maintain a pool of contemporary farms with comparable data. In brief, the main filters commonly imposed to select herds to work with include the following criteria: cattle farms located
in Piedmont with at least 30 cows, a percentage of artificial insemination between 90% and 100% were selected, and updated visits for all the years between 2014 and 2019. Once these farms were
selected, it was possible to extract the reports referred to any period in the time window, e.g., 2017–2018, or to use all the five-years information. Finally, the variable used by the ML methods as
the target variable was constructed, as it was not directly available in the original dataset. Since the aim is the prediction of the number of weaned calves per cow produced annually, the actual
amount was extracted for the years 2018. For each farm over all selected years, the target attribute
was obtained with the formula below, including the values of the number of the calves born alive, those unable to survive during weaning period, and the number of cows in the corresponding year:
$Y = N _ B A L I V E − N _ E L I M C O W S .$
Sorting by herd and increasing year, the general dataset has the structure shown in
Table 1
The study carried out took shape from the analysis of the summary data from 2017 to build the best predictive model for the number of weaned calves per cow produced in 2018. Setting this goal, it
was, therefore, necessary to manage a dataset containing input variables for each farm. Given
instances and
variables, the dataset configuration from 2017–2018 (shown in
Table 2
) consisted in
input scalar attributes
$X 17 , i$
, where
$i = 1 , … , m$
for each of the
farms. The number of weaned calves produced per cow in 2018 was obtained with Equation (
), which was named
$Y 18$
in this case.
Since the results by GP did not improve by incorporating more features [
], it was more appropriate to focus on a smaller number of predictors, that can actually be reconducted to the target. As a greater number of features could become a source of noise, some variables
that are actually less informative in predicting the target from an a posteriori zootechnical point of view were omitted in this study, as well as variables partially contained into other similar
features. For example, in [
], both the total number of calves born and the number of births following natural impregnation were used by most GP models. The number of calves born from natural impregnation is already contained
in the total number of newborns. Although it was the most frequently used variable, it may be more appropriate to keep only the total number of newborns by forcing the algorithm to use the latter
variable as informative over all the considered farms (natural impregnation is not performed by all the selected herds). Prediction of target can be simpler for the algorithms if the useful
information is directly provided, resulting in easier detection. However, ML methods can also find the necessary source of information if it is more complex to extract. Clearly, the task can be
easily tackled if some patterns are evident over data. If the information is distributed among other features, the algorithm can detect it anyhow. On the contrary, if no hint is available, the method
cannot guess the patterns as if by magic. In
Table 2
, the final variables are provided for the benchmark.
The variables 1–19 were stored into two datasets: one containing the data referring to 2017–2018 for the standard approach (see
Table 3
), and the second one containing the data referring to 2014–2017 for the vectorial approach (see
Table 4
). In both cases, the different partitions intended for training, validation, and testing refer to the same records, sampled equally on both datasets.
The division of the dataset into a learning set, subsequently divided into training and validation sets, and a test set was performed. The main idea was to extract enough learning instances in order
to perform a k-fold cross validation among it, maintaining at the same time a balanced percentage between learning and test sets (70%–30%). Thereafter, as splitting strategy, 94 records were
extracted to form the test set, and the remaining 210 formed the learning set. Among the latter, a 7-fold cross validation was imposed, obtaining 7 pairs of training–validation sets, consisting,
respectively, of 180–30 instances. In order to perform enough runs of GP and to compare models, the technique was repeated 10 times by selecting the test instances sequentially from the main dataset,
restarting from the beginning each time the last record was reached during the selection phase. The learning instances was randomly shuffled before performing the 7-fold sampling.
2.3. Standard vs. Vectorial Approaches: Genetic Programming
GP is a family of population-based Evolutionary Algorithms (EA), mimicking the process of natural evolution [
]. GP accomplishes a tree-based representation. The nodes contain operators, whereas the leaves (terminal nodes) are fed with operands, i.e., the features’ values. As in an evolutionary biological
process, the initial population evolves through the course of generations, exploiting the mechanisms of selection, mutation, and recombination of individuals. For each generation, individuals compete
to reproduce offsprings. Individuals may undergo culling or survive to the next generation. As the individuals showing the best survival capabilities have the best chance to reproduce, they form
elites of valuable candidates contributing to the creation of new individuals for the next generation. Offsprings are generated by a crossover mechanism, i.e., the recombination of parts of the
parents, and by mutation, that is, the alteration of some of the alleles. The survival strength of an individual is measured using a fitness function, a function that computes the goodness of each
individual or tentative solution.
To determine how close the prediction models came to represent the desired solution, they are awarded a score generated by evaluating the fitness function computed on the test. Each problem requires
its fitness measure, and hence its proper score. When it comes to formulating a problem, defining the objective function can result as one of the most complex parts, as some requirements should be
satisfied. The fitness function should be clearly defined, generating intuitive results. The user should be able to intuitively understand how the fitness score is calculated as well. In addition, it
should be efficiently implemented, as it could become the bottleneck of the algorithm. When dealing with a regression problem, the choice usually falls onto the Root Mean Square Error (RMSE):
$R M S E = ∑ i y i − φ x i 2 n ,$
$i = 1 , … , n$
, and
is the number of instances. The predictor
is evaluated at
$x i$
, i.e., the input variables values, and
$y i$
is the target values. A good fitness value means a small RMSE, and vice versa. RMSE is expressed in the response variable’s unit, and it is an absolute measure of accuracy. The choice of this fitness
function is further determined by the application of different ML techniques that build mostly non-linear models. This issue can exclude a discussion based on the coefficient of determination
$R 2$
, as its definition assumes linearly distributed data. When the assumption is violated,
$R 2$
can lead to misleading values [
The population is transformed iteratively based on the training set inside the main generational loop of a GP run. Thereafter, sub-steps are iteratively performed within each generation, until the
termination criterion is satisfied. At that point, the population is evaluated on the validation set to pick the best model. At every generation, each program in the population is executed and its
fitness ascertained on the training set using the proper fitness measure. By selecting, recombining, and mutating the best individuals, at each evolutionary step (i.e., each new generation) the
members of the new population are, on average, fitter than the previously generated ones, i.e., they show a smaller error. Among the parameters defining the technique, the preservation of the best
individual at each run is feasible, and fitness can be treated as the primary objective, whereas tree size is a secondary parameter, when ranking models. This peculiarity leads to the conservation of
the most influential variables over generations. The algorithm performs, hence, an implicit feature selection and, among all the input variables, only the most relevant are encapsulated in the
ST-GP is a powerful algorithm, suitable to perform symbolic regression on any dataset. However, as many other standard techniques do, instances are treated independently, showing a potential
disadvantage when dealing with sequential data. This may result in a loss of knowledge in pattern recognition of the temporal information. In addition to Recurrent Neural Networks (RNN), whose
structure is suitable for managing a collection of observations at different equally spaced time intervals, Vectorial Genetic Programming (VE-GP) can manage vectorial variables representing time
series [
]. Indeed, the development of the ST-GP led to techniques exploiting terminals in the form of a vector. With this representation, all the past information associated to an entity is aggregated into a
vector, giving a sense of memory and helping to keep track of what happened earlier in the sequential data. VE-GP comes with enhanced characteristics of ST-GP exploiting a proper data representation
processed with suitable operators to handle vectors, reinforcing the identification ability of correlations and patterns. The target can be scalar, as well as vectorial. The technique can indeed
treat both vectors, even of different lengths, and scalars together, performing both vectorial and element-wise operations.
2.4. Standard vs. Vectorial Approaches: Experimental Settings
ST-GP and other classic ML approaches were performed using the GPLab package built in MATLAB and the R library caret [
]. Correspondingly, in addition to GP, k-Nearest Neighbors (kNN), Neural Networks (NN), and Generalized Linear Models with Elastic NET regularization (GLMNET) were also tuned, based on the average
performance over the validation sets. Concerning the vectorial approach, VE-GP was performed with the recent version of GPLab, introduced to handle vectorial variables [
], whereas the LSTM’s comparative results were obtained with the available deep learning toolbox, implemented in MATLAB. Clearly, results were compared in terms of RMSE (
) as an error measure.
Characterized by a very simple implementation and low computational cost, the kNN algorithm is known as “lazy learning”, as it does not build a model, but it is an instance-based method, exploited
for both classification and regression tasks. The input consists of the k closest instances (i.e., neighbors) in the features space, and the corresponding output is the most frequent label
(classification) or the mean of the output values (regression) of k nearest neighbors. Otherwise stated, in the latter case, the k nearest points are computed to predict the value of any new data
point, and the values of their output is averaged to be assigned as the prediction to the given point. The number of k nearest neighbors should be chosen properly, since the predictive power can be
strongly affected afterwards. A small value of k leads to overfitting, and results can be highly influenced by noise. On the contrary, a large value results in very biased models and can be
computationally expensive.
A NN, usually denoted with the term of Artificial Neural Network (ANN), emulates the complex functions of the brain. An ANN is a simplified model of the structure of a biological neural network and
consists of interconnected processing units organized according to a specific topology. The network is fed with features values through an input layer. Thereafter, the learning takes place among one
or more hidden layer, composing the internal network. Finally, the network includes an output layer, where the prediction is given. Learning occurs by changing connections weights based on the error
affecting the output. At each update, the weights of the connection between nodes are multiplied by a factor in order to prevent the weights from growing too large and the model from getting too
Concerning LM, a GLMNET was preferred over standard LM. The algorithm fits generalized linear models by means of penalized maximum likelihood, combining the Lasso and Ridge regularizations, using the
cyclical coordinate descent. These techniques allow one to accommodate correlation among the predictors by penalizing less informative variables: Ridge penalty shrinks the coefficients of correlated
predictors towards each other, while Lasso tends to pick the most informative ones and discard the others. Compared to standard linear regression, more accurate results are usually expected from its
application, as it combines feature elimination from Lasso and feature coefficient reduction from Ridge. The elastic-net penalty is controlled by the parameter $α$: $α = 0$ is pure Ridge, whereas $α
= 1$ is pure Lasso. The overall strength of the penalty for both Ridge and Lasso is controlled by the parameter $λ$: The coefficients are not regularized if $λ = 0$. As $λ$ increases, variables are
shrunk towards zero, and they are discarded by Lasso regularization, whereas Ridge regularization includes all the variables.
One of the disadvantages of an ANN is that it cannot capture sequential information in the input data. An ANN can deal with fixed-size input data, that is, all the item features feed the network at
the same time, such that there is no time interval between the data features. When dealing with sequential data, in which there are strong dependencies between the data features, i.e., in text or
speech signals, a basic ANN is not able to properly address the task. In this regard, basic ANNs were developed to make way for a more efficient algorithm, particularly useful for time series. RNN is
a type of ANN that has a recurring connection to itself. The gap between information may become very large, and the amount of sequential information can be complex to retain. As that gap grows, RNNs
lose their ability to learn connections. To overcome the short-term memory weakness, (LSTM) architecture was designed to solve this problem with RNNs. By means of internal mechanisms, they keep track
of the dependencies between the input sequences, storing and removing unnecessary information. The LSTM introduces the concept of cell states. By using special neurons called “gates” placed in the
cell state, LSTMs can remember or forget information. Three kinds of gates are available inside the cell, in order to filter information from previous inputs (forget gate), to decide what new
information to remember (input gate), and to decide which part of the cell state to output (output gate). These gates are a sort of highway for the gradient to flow backwards through time.
Regarding ST-GP, we provided the algorithm with a set of primitives
composed of
{plus; minus; times; mydivide}
, where
plus, minus, and times
indicate the usual operators of binary addition, subtraction, and multiplication, respectively, while
represents the protected division, which returns the numerator when the denominator is equal to zero. Likewise, we chose proper functions for VE-GP. Differently from ST-GP, suitable functions are
indeed provided to manage scalar and vectors [
]. For the considered problem, we used
{VSUMW; V_W; VprW; VdivW; V_mean; V_min; V_meanpq; V_minpq}
. The first four operators represent the elementwise sum, difference, product, and the protected division between two vectors or between a scalar and a vector, respectively, e.g.,
VSUMW([2,3.5,4,1],[1,0,1,2.5] = [3,3.5,5,3.5])
. The mean and minimum of a vector return the corresponding value for the whole vector (standard aggregate functions
) or for a selected range
$[ p , q ]$
inside the vector, where
are positive integers with
$0 < p ≤ q$
(parametric aggregate functions
), e.g.,
V_mean([2,3.5,4,1]) = 2.6
, whereas
V_mean$3 , 4$([2,3.5,4,1]) = 2.5
. The fact that standard and parametric aggregate functions collapse the vectorial variable into a single value allows one to handle all the information contained in the vector or part of it. In
addition to crossover and mutation, the algorithm is provided with an operator reserved for the mutation of the aggregate function parameters. It allows
to evolve in order to detect the most informative window in which to apply thereafter the aggregate function. The set of terminals was composed of the predictors in
Table 2
for both ST- and VE-GP.
3. Results
3.1. ST-GP vs. VE-GP
ST-GP and VE-GP were first compared, in order to analyze the behavior of the two algorithms. In
Figure 1
, the median fitness evolution is plotted, based on the following procedure. For each fold within the learning set, a model was selected according to its performance over the validation set. Hence,
after seven runs of the GP, seven models were available, i.e., the ones showing the lowest fitness among the validation. All seven best drawn models were evaluated on the whole learning set and the
test set, and the median of the seven models was stored. As the 7-fold was repeated 10 times, 10 median trends were available at the end of the entire evolutionary process. The plot shows the median
behavior of the 10 median fitness achieved for each generation. We initially decided to run the two algorithms for 100 generations. The choice of stopping the evolution after 40 generations was
dictated by the overfitting trend recorded among ST-GP. On the contrary, VE-GP proved to be more stable than ST-GP, at least as far as we ran 100 generations. Moreover, the median fitness was overall
lower, showing that GP is affected by a remarkable improvement of such a problem, if temporal information is added, together with proper functions. The VE-GP models outperformed the ST-GP ones,
stabilizing at lower errors. We analyzed the predictors encapsulated in the final models by both ST- and VE-GP, selected with respect to the performance achieved on the test sets by running the
algorithms for 40 generations.
Table 7
shows that both methods used nine variables to tackle the target. However, the same predictors were not used and, above all, not with the same frequency. The number of
$C O W S$
, for example, was highly exploited by both GP algorithms, but all the VE-GP models based the prediction on this feature, whereas only 70% of the ST-GP models found it to be informative.
$C P A R$
, on the other hand, was used only by the ST-GP and in 50% of the solutions, and
$N B A L I V E$
was involved in 60% of them.
$N T O T$
was rather exploited only by the VE-GP and in 80% of the models. It is evident that as long as GP is run to predict the target based on the information of a single year, patterns are more difficult
to be found, and the algorithm (ST-GP) tries to solve the problem by extracting as much information as possible from as many features as possible (7 variables out of 18 were used in more than 20% and
at most in 70% of the solutions). When providing temporal information, the search was easier for GP, whose models achieved better fitness, detecting mainly the information based on a few predictors
(4 out of 18 were exploited in more than the 30% of solutions, and among the four features, 1 was handled by all the models). Even considering the variables used by each model (
Table 8
), on average, 8.4 predictors were used by the ST-GP (from 6 to 15), whereas the VE-GP built models exploiting 5.5 features on average (from 3 to 9).
3.2. Comparisons of ST-GP and VE-GP with Other ML Methods
Now, we discuss the results of the experimental comparison between the ST-GP, VE-GP, and the other ML methods presented previously. As already explained, in addition to ST-GP, KNN, NN, and GLMNET
also exploited the information on 2017 with a target in 2018, whereas LSTM was involved in the VE-GP to process vectorial variables for 2014–2017 and the 2018 target. The results reported in
Section 3.1
for the ST-GP compared to the VE-GP are also supported by the corresponding boxplots in
Figure 2
. The median and mean RMSE values are reported in
Table 9
The Kruskal–Wallis nonparametric test, performed for all the considered methods with a significance level of $α$ = 0.05, was applied to investigate the RMSE achieved among the learning sets and the
test sets separately. The resulting p-values (t ≪ 0.001) showed extremely significant differences in median performance between the methods, considering both stages. The pairwise Wilcoxon tests
provided with Bonferroni correction $α$ = 0.05/15 = 0.0033 was hence performed among all compared techniques. Among the learning set, STGP was significantly different from all other methods,
resulting in a poor performance. Likewise, KNN was significantly different with respect to both NN and LSTM, as well as the comparison between VEGP and LSTM. Concerning the RMSE achieved on the test
sets, STGP performed poorly with respect to other methods, showing greater, significantly different values for the RMSE on average. On the contrary, GLMNET’s performance was significantly better than
KNN’s and NN’s, as well as LSTM’s compared to KNN and VEGP, respectively. Consequently, the following pairs of methods did not show significantly different performance: VEGP–KNN, VEGP–NN,
VEGP–GLMNET, VEGP–LSTM, as well as the pair LSTM and GLMNET.
As a further study, we also compared learning and test fitness distributions obtained by the single methods in order to determine the occurrence of overfitting. The Wilcoxon signed rank test showed
that the two distributions for KNN and the two obtained with NN were different, since the corresponding p-values were extremely significant (Wilcoxon: p ≪ 0.001), as well as the median RMSE
(Kruskal–Wallis test: p ≪ 0.001). Concerning the ST-GP, the two distributions and the median error were slightly different (Wilcoxon and Kruskal–Wallis: p-values equal to 0.048 and 0.034,
respectively). GLMNET showed the same learning and test fitness distributions but different median RMSE (Wilcoxon: p > 0.05; Kruskal–Wallis: p = 0.041), whereas LSTM achieved different distributions
with similar median. VE-GP was the only method that produced the same fitness distributions with the same median among the learning and test sets.
4. Discussion
Considering all the results of the statistical tests, ST-GP produced less accurate models, and all the other methods outperformed ST-GP. However, among the different techniques, KNN and NN clearly
overfitted, generating good results on training data but losing their ability to generalize on unseen data. On the contrary, VE-GP, GLMNET, and LSTM produced better and statistically similar results,
as the RMSEs on the test set were not significantly different across these methods. In particular, LSTM produced the best fitness considering both learning and test sets. However, VE-GP was the only
method showing the same distribution among learning and test sets, highlighting its ability to generalize better over unseen data. These outcomes are a clear confirmation of the importance of
introducing the temporal information in the form of vectors for the studied problem.
The study was dedicated to the inspection of GP behavior when predicting a target starting from datasets that, in one case, were exclusively formed by scalar values (treated hence with ST-GP) and, in
the other, assumed a vector representation (handled with VE-GP). This representation is quite useful for incorporating temporal patterns or, in general, successive collections of data for single
variables among the same candidate. Indeed, with the common representation through standard data frames, such patterns are usually not recognizable, and the performance of the models do not improve.
On the contrary, if the data are organized into a vectorial dataset, the algorithm receives temporal information in input. Thereby, by means of proper functions able to manage vectors, it can produce
more accurate predictions. First, the dataset was prepared to deal with the vector-based representation. The datasets, sharing the same scalar target from 2018 (i.e., the quota of weaned calves per
cow per year) were prepared by extracting the data among 2017 and among the whole period from 2014–2017, based on a previously defined set of farms. In this study, a different splitting rule was
defined among the datasets with respect to previous investigations. The learning and test sets were selected respecting the proportion of 70%–30%, and thereafter, learning sets, randomly reshuffled,
were split according to a 7-fold cross validation technique. Prediction models were constructed with different GP algorithms, ST- and VE-GP first, that were thereafter compared with other ML methods.
VE-GP was compared with LSTM, which considers the time relationship among the data.
The main goal was hence to inspect the ability of VE-GP with respect to ST-GP in predicting the target. The developed algorithm could produce better results by achieving lower RMSEs among both
learning and test sets. We first analyzed the evolution of the median fitness observed on the learning and test sets, and clearly, VE-GP proved to be more stable, evolving a population through more
generations without giving sign of overfitting, whereas ST-GP showed the “symptom” quite soon, considering similar experimental settings. In addition, VE-GP reached better results by encapsulating
fewer variables in each extracted candidate model, detecting the information to a greater extent from specific features. VE-GP still gave access to the formula and to the features implicitly
selected, providing meaningful information about the tackled issue. Being able to extract important features among the predictors in form of vectors, the algorithm improved the target forecast. VE-GP
turned out to outperform not only ST-GP but also other techniques used in the field of ML. Although VE-GP performed similarly to LSTM and GLMNET (the latter exploiting the standard data
representation), it was the only method that did not show a significantly different behavior on the learning and test sets. The two distributions and their median are similar, entailing that VE-GP
provides a good response in terms of generalization ability on unseen data. Improvements can be expected by feeding the algorithm with larger datasets by providing more candidates and longer vectors.
5. Conclusions
Exploring the vectorial approach required, as already stated, a different input data structure. To this purpose, the farms considered in the pool of instances were the same as in [
]. However, since the results showed that GP exploited only certain variables, the number of predictors was reduced to 18. In this way, possible noise due to extra variables, which were not very
informative, was avoided. The main goal was to inspect the ability of Vectorial Genetic Programming (VE-GP) with respect to ST-GP to predict the target. The recently developed VE-GP algorithm could
produce better results, by achieving a better fitness on both the learning and test sets. VE-GP proved to be more stable, evolving a population through more generations without showing overfitting,
while Standard Genetic Programming (ST-GP), was affected by overfitting already in the early generations under analogous experimental settings. VE-GP still favored model investigation by giving
access to the formula and hence to the features implicitly selected, providing meaningful information about the tackled issue. Better results were obtained by encapsulating fewer variables in each
extracted candidate model, detecting almost all the information among specific features. The algorithm improved the target forecast, proving to outperform not only ST-GP, but also other techniques
used in the field of ML. The algorithm, in particular, was compared to Long Short-Term Memory Recurrent Neural Network (LSTM), suitable for handling vectorial predictors. Even though performing
similar to the LSTM and Generalized Linear Models (GLMNET), the latter exploiting the standard data panel representation, VE-GP was the only method entailing a greater ability in generalization over
unseen data.
The introduction of vectorial variables produced a significant improvement over the accuracy of the result. Evolution was also much more stable, and the ability of the algorithm to handle any type of
variable, both scalar and vectorial, makes it quite a flexible tool. These considerations open the possibility of providing more complex datasets, containing different types of sequential features.
The possibility of managing vectorial variables, whose values can be of different types and have no fixed length among the whole dataset, push the analysis beyond the basic research conducted at this
point. On the one side, both categorical and continuous variables can be treated simultaneously, without specifying it explicitly to the algorithm: The latter is indeed able to process them during
the evolution without hints given by the user. On the other hand, when dealing with vectors, some data may be not available. Thus, the vector variables may have different lengths, even being scalars
when the latter is equal to one. VE-GP is suitable to manage all kinds of features dynamically, combining them in the prediction of the target. Evolutionary algorithms can be applied to zootechnical
data, achieving performing models able to learn from all the available data. In this case study, the breeding variables in the report extracted at the end of the year were used. In one case, they
were managed for only one year; in the other four, the average values, corresponding to four years, were used, proving to be more suitable for reducing the prediction and generalization errors.
Instead of limiting the analysis to the year-end average, it might be more useful to incorporate the data collected from each farm visit into a vector representation. As a result, all variations,
even small ones, would be available, and the algorithm could identify temporal patterns that were not visible by directly processing the average value for the whole year.
Author Contributions
Conceptualization, F.A. and M.G.; methodology, F.A., L.V., and M.G.; validation, L.V. and M.G.; formal analysis, F.A.; investigation, F.A.; resources, F.A.; data curation, F.A.; writing—original
draft preparation, F.A.; writing—review and editing, F.A. and M.G.; visualization, F.A.; supervision, L.V. and M.G.; project administration, M.G. All authors have read and agreed to the published
version of the manuscript.
This work was partially supported by FCT, Portugal, through funding of projects BINDER (PTDC/CCI-INF/29168/2017) and AICE (DSAIPA/DS/0113/2019).
Conflicts of Interest
The authors declare no conflict of interest.
The following abbreviations are used in this manuscript:
PLF Precision Livestock Farming
ML Machine Learning
ANABORAPI National Association of Piemontese Cattle Breeders
GP Genetic Programming
ST-GP Standard Genetic Programming
VE-GP Vectorial Genetic Programming
EA Evolutionary Algorithm
KNN k-Nearest Neighbors
NN Neural Network
LM Linear Model
GLMNET Generalized Linear Model with Elastic Net Regularization
RNN Recurrent Neural Network
LSTM Long Short-Term Memory
Figure 1. ST-GP (up) and VE-GP (down) fitness evolution plots. For each generation, the graph plots the median of the 10 median fitness achieved by the best 7 models on the validation sets and
correspondingly the performance achieved on the learning and test sets.
Figure 2. RMSEs on both the learning and test sets for the different algorithms. Learning results are plotted in yellow (left) and test results are plotted in blue (right) for each technique.
Table 1. Standard Data Panel. Structure of the dataset. The farms are listed horizontally, as well as the reference year, the variables vertically.
Farm 1 2014 22 36 7 365
Farm 1 2015 10 46 13 375
Farm 1 2016 16 47 12 381
Farm 1 2017 14 46 11 375
Farm 1 2018 16 47 12 374
Farm 2 2014 11 90 9 396
Farm 2 2015 10 93 9 391
Farm 2 2016 9 95 7 380
Farm 2 2017 7 97 10 387
Farm 2 2018 9 92 11 385
Farm 3 2014 7 42 3 414
Farm 3 2015 4 43 4 439
Farm 3 2016 4 44 10 452
Farm 3 2017 10 44 11 425
Farm 3 2018 9 60 4 473
Table 2. Final set of variables used for the benchmarked problem. The bottom line represents the dependent variable Y, i.e., the target for the predicted models.
Variable Name Variable Description
1 $C O W S$ Consistency for cows, i.e., number of cows
2 $H E I F E R S$ Consistency for heifers, i.e., number of heifers
3 $I N T P$ Calving interval in days, based on currently
pregnant cows
4 $C P A R$ Average parity
5 $E T A$_$P A R T$_1 Age at first calving
6 $C E A S E$ N. of cows that delivered with easy calving
7 $H E A S E$ N. of primiparous that delivered with easy calving
8 $C P A R T$_$I N D$ Calving ease (EBV for cows)
9 $H P A R T$_$I N D$ Birth ease (EBV for heifers)
10 $T F A B I R T H$ Birth ease (EBV for A.I. bulls)
11 $T F A P A R$ Calving ease (EBV for A.I. bulls)
12 $U B A 06$ UBA referred to bovines 6 months–2 years old
13 $U B A 04$ UBA referred to bovines 4–6 months old
14 $N E L I M$ N. of dead calves in the first 60 days after birth
15 $N T O T$ Total number of calves born
16 $N B A L I V E$ Total number of calves born alive
17 $C O R R E C T$ Percentage of calves born without defects
(e.g., Macroglossia, Arthrogryposis)
18 $C O N S A N G$_$N E W$ Consanguinity calculated on future calves
19 Y N. of weaned calves per cow per year (2)
Table 3. Dataset configuration from 2017–2018. On the left side, the input scalar variables $X 17 , 1 , X 17 , 2 , … , X 17 , m$. On the right side, the scalar target $Y 18$.
$X 17 , j , 1$ $X 17 , j , 2$ $X 17 , j , 3$ $X 17 , j , 4$
$Y 18 , j$
COWS COW_AGE CALVING_INT N_CALVING
FARM 1- 104 3020 387 60 0.95
FARM 2- 54 3112 425 54 0.9
FARM 3- 63 2824 515 48 0.69
… 49 3131 466 49 0.67
108 2766 407 50 0.85
74 3448 459 62 0.84
Table 4. Vectorial panel dataset configuration for 2014–2018. On the left side, the input vectorial variables $X t , j , i = [ X 14 , j , i , X 15 , j , i , X 16 , j , i , X 17 , j , i ]$, with $t ∈
{ 14 , … , 17 }$, $i = 1 , … , m$, and $j = 1 , … , n$. On the right side, the scalar target variable $Y 18$.
2014–2017 2018
$X t , 1 , j$ $X t , 2 , j$ $X t , 3 , j$
$Y 18 , j$
COWS COW_AGE CALVING_INT
FARM 1- [98, 101, 107, 104] [2999, 3001, 2998, 3020] [391, 391, 380, 387] 0.95
FARM 2- [61, 49, 53, 54] [3076, 3002, 3056, 3112] [408, 376, 402, 425] 0.9
FARM 3- [53, 55, 64, 63] [2799, 2813, 2802, 2824] [367, 376, 406, 515] 0.69
… [31, 36, 47, 49] [3102, 3075, 3009, 3131] [434, 480, 461, 466] 0.67
[102, 99, 105, 108] [2704, 2795, 2789, 2766] [404, 371, 395, 407] 0.85
[69, 71, 75, 74] [3401, 3388, 3406, 3448] [387, 367, 373, 459] 0.84
Parameter Description
Maximum number of generations 40
Population size 250
Selection Method Lexicographic Parsimony Pressure
Elitism Keepbest
Initialization Method Ramped half and half
Tournament Size 2
Subtree Crossover Rate 0.7
Subtree Mutation Rate 0.1
Subtree Shrinkmutation Rate 0.1
Subtree Swapmutation Rate 0.1
Maxtreedepth 17
Maximum number of generations 40
Population size 250
Selection Method Lexicographic Parsimony Pressure
Elitism Keepbest
Initialization Method Ramped half and half
Tournament Size 2
Subtree Crossover Rate 0.7
Subtree Mutation Rate 0.3
Mutation of aggregate function parameters 0.2
Maxtreedepth 17
Table 6. Parameters used to perform ML techniques with caret package in R and the Deep Learning Toolbox in MATLAB.
ML Technique Parameters
knn k = 15
nnet size = 7; decay = 0.2
glmnet $α$ = 0.8, $λ$ = 0.85
LSTM hidden units = 200; epochs = 50;
batchsize = 1; learning algorithm = adam.
Table 7. Frequency of use of each variable among the best 10 individuals found by ST-GP (left column) and VE-GP (right column).
Variable % of Use (ST-GP) % of Use (VE-GP)
X1 $C O W S$ 70% 100%
X2 $H E I F E R S$ 10% 10%
X3 $I N T P$ 0% 10%
X4 $C P A R$ 50% 0%
X5 $E T A$_$P A R T$_1 0% 10%
X6 $C E A S E$ 0% 10%
X7 $H E A S E$ 0% 10%
X8 $C P A R T$_$I N D$ 0% 0%
X9 $H P A R T$_$I N D$ 0% 0%
X10 $T F A B I R T H$ 10% 0%
X11 $T F A P A R$ 0% 0%
X12 $U B A 06$ 0% 0%
X13 $U B A 04$ 20% 0%
X14 $N E L I M$ 70% 40%
X15 $N T O T$ 0% 80%
X16 $N B A L I V E$ 60% 0%
X17 $C O R R E C T$ 30% 0%
X18 $C O N S A N G$_$N E W$ 20% 30%
Table 8. Fitness on the test set, number of involved variables, and corresponding percentage for each model evolved by ST-GP (upper table) and VE-GP (lower table) in each of the 10 runs.
Prediction Model Fitness on Test N. of Variables % of Variables
model 1 0.1335 9 50%
model 2 0.1207 6 33%
model 3 0.1143 11 61%
model 4 0.1383 8 44%
model 5 0.1392 7 39%
model 6 0.1439 7 39%
model 7 0.1395 8 44%
model 8 0.1370 6 33%
model 9 0.1285 15 83%
model 10 0.1184 7 39%
model 1 0.1117 5 26%
model 2 0.1016 3 16%
model 3 0.1044 9 47%
model 4 0.1085 8 42%
model 5 0.1134 3 16%
model 6 0.0998 8 42%
model 7 0.1018 4 21%
model 8 0.1149 4 21%
model 9 0.0999 8 42%
model 10 0.1121 3 16%
STGP KNN NN VEGP GLMNET LSTM
Learning sets
Median 0.1238 0.1074 0.0967 0.1052 0.1025 0.1011
Mean 0.1220 0.1077 0.0967 0.1054 0.1025 0.0988
Test sets
Median 0.1353 0.1151 0.1122 0.1065 0.1057 0.1037
Mean 0.1314 0.1147 0.1128 0.1068 0.1056 0.1034
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Abbona, F.; Vanneschi, L.; Giacobini, M. Towards a Vectorial Approach to Predict Beef Farm Performance. Appl. Sci. 2022, 12, 1137. https://doi.org/10.3390/app12031137
AMA Style
Abbona F, Vanneschi L, Giacobini M. Towards a Vectorial Approach to Predict Beef Farm Performance. Applied Sciences. 2022; 12(3):1137. https://doi.org/10.3390/app12031137
Chicago/Turabian Style
Abbona, Francesca, Leonardo Vanneschi, and Mario Giacobini. 2022. "Towards a Vectorial Approach to Predict Beef Farm Performance" Applied Sciences 12, no. 3: 1137. https://doi.org/10.3390/app12031137
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/2076-3417/12/3/1137","timestamp":"2024-11-09T20:56:44Z","content_type":"text/html","content_length":"489510","record_id":"<urn:uuid:ed98a918-49cf-4f66-b112-8e750993d1fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00477.warc.gz"} |
Crazy Taxi M-12 | Free Online Math Games, Cool Puzzles, and More
Crazy Taxi M-12
4.0 / 5(89,264 Votes)
May 08, 2024
Dec 31, 1998
Browser, Mobile
Crazy Taxi M-12 Instructions
Try to clear the course as fast as you can! Use the arrow keys to drive the crazy taxi down the course. Use the up and down arrow keys to speed up or slow down and switch lanes with the left and
right arrow keys. Use the spacebar to jump over obstacles. Score points by crashing into the cars with the correct number on them.
Each course will give you a number that determines which cars to hit. You can only score when you crash into a car that is a multiple of the given number. In the first course, you’re looking for
multiples of 2, so you’ll have to keep your eyes open for any number divisible by 2. Each course looks for multiples of different numbers, so you’ll want to study up on your times tables!
Crazy Taxi M-12 TIPS & TRICKS
Brush up on the basics. This game’s all about the multiples! If you want to do well and score big, you’ll have to know your times tables. Before starting up the game, take some time to review your
multiplication skills. Being able to do quick math in your head will come in handy!
Use multiplication tricks. There are some tricks you can use to easily pick out the multiples of certain numbers. For example, multiples of 2 will always end in an even number (0, 2, 4, 6, 8 and so
on). Multiples of 10 will always end in a 0, and multiples of 5 will always end in either a 5 or a 0. Keep note of these tricks as you drive down the track.
Look to the horizon. You’ll be able to see what obstacles are approaching with plenty of time to switch lanes or prepare a jump. When you see cars approaching from a distance, take note of the
numbers first before deciding where to go. As long as they’re a multiple, you’ll score points, but watch out for cars that don’t fit the equation! Getting trapped by a car or obstacle will cost you
precious time.
Bonus tracks. As you play, you’ll be able to take a shot at the bonus rounds in between courses! These tracks are different in that there are no multiples to look out for, just regular cars and
obstacles, which you’ll have to jump over in order to keep driving. If you crash, the bonus round ends.
WHAT YOU LEARN FROM PLAYING Crazy Taxi M-12
Playing rounds of Crazy Taxi M-12 can help teach times tables and improve your mental math skills. This game also helps strengthen the ability to solve math equations quickly and without much
calculation. The more you play, the better you’ll be at solving simple math problems on a dime.
Try to clear the course as fast as you can! Use the arrow keys to drive the crazy taxi down the course. Use the up and down arrow keys to speed up or slow down and switch lanes with the left and
right arrow keys. Use the spacebar to jump over obstacles. Score points by crashing into the cars with the correct number on them.
Each course will give you a number that determines which cars to hit. You can only score when you crash into a car that is a multiple of the given number. In the first course, you’re looking for
multiples of 2, so you’ll have to keep your eyes open for any number divisible by 2. Each course looks for multiples of different numbers, so you’ll want to study up on your times tables!
Crazy Taxi M-12 TIPS & TRICKS
Brush up on the basics. This game’s all about the multiples! If you want to do well and score big, you’ll have to know your times tables. Before starting up the game, take some time to review your
multiplication skills. Being able to do quick math in your head will come in handy!
Use multiplication tricks. There are some tricks you can use to easily pick out the multiples of certain numbers. For example, multiples of 2 will always end in an even number (0, 2, 4, 6, 8 and so
on). Multiples of 10 will always end in a 0, and multiples of 5 will always end in either a 5 or a 0. Keep note of these tricks as you drive down the track.
Look to the horizon. You’ll be able to see what obstacles are approaching with plenty of time to switch lanes or prepare a jump. When you see cars approaching from a distance, take note of the
numbers first before deciding where to go. As long as they’re a multiple, you’ll score points, but watch out for cars that don’t fit the equation! Getting trapped by a car or obstacle will cost you
precious time.
Bonus tracks. As you play, you’ll be able to take a shot at the bonus rounds in between courses! These tracks are different in that there are no multiples to look out for, just regular cars and
obstacles, which you’ll have to jump over in order to keep driving. If you crash, the bonus round ends.
WHAT YOU LEARN FROM PLAYING Crazy Taxi M-12
Playing rounds of Crazy Taxi M-12 can help teach times tables and improve your mental math skills. This game also helps strengthen the ability to solve math equations quickly and without much
calculation. The more you play, the better you’ll be at solving simple math problems on a dime.
Crazy Taxi M-12
4.0 / 5(89,264 Votes)
May 08, 2024
Dec 31, 1998
Browser, Mobile
Crazy Taxi M-12 Instructions
Try to clear the course as fast as you can! Use the arrow keys to drive the crazy taxi down the course. Use the up and down arrow keys to speed up or slow down and switch lanes with the left and
right arrow keys. Use the spacebar to jump over obstacles. Score points by crashing into the cars with the correct number on them.
Each course will give you a number that determines which cars to hit. You can only score when you crash into a car that is a multiple of the given number. In the first course, you’re looking for
multiples of 2, so you’ll have to keep your eyes open for any number divisible by 2. Each course looks for multiples of different numbers, so you’ll want to study up on your times tables!
Crazy Taxi M-12 TIPS & TRICKS
Brush up on the basics. This game’s all about the multiples! If you want to do well and score big, you’ll have to know your times tables. Before starting up the game, take some time to review your
multiplication skills. Being able to do quick math in your head will come in handy!
Use multiplication tricks. There are some tricks you can use to easily pick out the multiples of certain numbers. For example, multiples of 2 will always end in an even number (0, 2, 4, 6, 8 and so
on). Multiples of 10 will always end in a 0, and multiples of 5 will always end in either a 5 or a 0. Keep note of these tricks as you drive down the track.
Look to the horizon. You’ll be able to see what obstacles are approaching with plenty of time to switch lanes or prepare a jump. When you see cars approaching from a distance, take note of the
numbers first before deciding where to go. As long as they’re a multiple, you’ll score points, but watch out for cars that don’t fit the equation! Getting trapped by a car or obstacle will cost you
precious time.
Bonus tracks. As you play, you’ll be able to take a shot at the bonus rounds in between courses! These tracks are different in that there are no multiples to look out for, just regular cars and
obstacles, which you’ll have to jump over in order to keep driving. If you crash, the bonus round ends.
WHAT YOU LEARN FROM PLAYING Crazy Taxi M-12
Playing rounds of Crazy Taxi M-12 can help teach times tables and improve your mental math skills. This game also helps strengthen the ability to solve math equations quickly and without much
calculation. The more you play, the better you’ll be at solving simple math problems on a dime.
Try to clear the course as fast as you can! Use the arrow keys to drive the crazy taxi down the course. Use the up and down arrow keys to speed up or slow down and switch lanes with the left and
right arrow keys. Use the spacebar to jump over obstacles. Score points by crashing into the cars with the correct number on them.
Each course will give you a number that determines which cars to hit. You can only score when you crash into a car that is a multiple of the given number. In the first course, you’re looking for
multiples of 2, so you’ll have to keep your eyes open for any number divisible by 2. Each course looks for multiples of different numbers, so you’ll want to study up on your times tables!
Crazy Taxi M-12 TIPS & TRICKS
Brush up on the basics. This game’s all about the multiples! If you want to do well and score big, you’ll have to know your times tables. Before starting up the game, take some time to review your
multiplication skills. Being able to do quick math in your head will come in handy!
Use multiplication tricks. There are some tricks you can use to easily pick out the multiples of certain numbers. For example, multiples of 2 will always end in an even number (0, 2, 4, 6, 8 and so
on). Multiples of 10 will always end in a 0, and multiples of 5 will always end in either a 5 or a 0. Keep note of these tricks as you drive down the track.
Look to the horizon. You’ll be able to see what obstacles are approaching with plenty of time to switch lanes or prepare a jump. When you see cars approaching from a distance, take note of the
numbers first before deciding where to go. As long as they’re a multiple, you’ll score points, but watch out for cars that don’t fit the equation! Getting trapped by a car or obstacle will cost you
precious time.
Bonus tracks. As you play, you’ll be able to take a shot at the bonus rounds in between courses! These tracks are different in that there are no multiples to look out for, just regular cars and
obstacles, which you’ll have to jump over in order to keep driving. If you crash, the bonus round ends.
WHAT YOU LEARN FROM PLAYING Crazy Taxi M-12
Playing rounds of Crazy Taxi M-12 can help teach times tables and improve your mental math skills. This game also helps strengthen the ability to solve math equations quickly and without much
calculation. The more you play, the better you’ll be at solving simple math problems on a dime. | {"url":"https://www.coolmathgames.com/0-crazy-taxi-m12/index.html","timestamp":"2024-11-04T13:58:30Z","content_type":"text/html","content_length":"327695","record_id":"<urn:uuid:ed1c9c50-0819-44d7-8efc-48c2555464d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00875.warc.gz"} |
of Arabidopsis
1 Introduction
Growth of leaves like other plant organs is symplastic as well as other plant organs [1]. It means that cells grow in a coordinated way within an organ and neighboring cells do not slide or glide
with respect to each other [2,3]. In other words, in the course of the growth of a plant organ, its physical integrity is continuously maintained. Such growth coordination involves a link between
growth of individual cells and growth of the organ as the whole [4]. Mathematically, in the case of a symplastically growing organ, a field of displacement velocity (V) of points exists at the organ
level, which is represented by a continuous and differentiable function of point position [5]. Knowing V, one is able to determine growth rates at points within the organ. The linear elemental growth
rate (R[1]) at a given point along the particular direction s can be calculated from the equation: R[1](s)=[(grad V)e[s]] e[s], where e[s] is a unit vector of the direction s and dot indicates a
scalar product [6]. As grad V represents the second-rank operator [7], values of R[1] change with the direction [8]. The areal elemental growth rate (R[a]) at a given point can be calculated simply
as a sum of R[1] in every two mutually orthogonal directions. The operator grad V is called tensor of growth rate (GT). In the growing organ, there exists a GT field, calculated with the aid of GT
for all points within the organ. Importantly, the GT field of an organ is continuous in time and space [6]. If growth is anisotropic, at every point of a growing organ three mutually orthogonal
principal directions of growth rate (PDGs) exist: maximal, minimal and of the saddle type. If growth is isotropic, no PDGs can be distinguished. Knowing PDGs for all points in the organ, we can
define the pattern of PDG trajectories.
The GT approach was successfully applied to simulations of root or shoot apex growth [9–11]. In present paper, this method was applied to a growing leaf.
The development of leaves is a very complex process, and can be divided into three phases [12]. The first phase is the initiation of leaf primordium. The second phase refers to a change of shape
during primordium growth and establishment of leaf blade and petiole. The last phase is the expansion phase, and this phase is considered in the present paper. In many plant species, including a
model plant Arabidopsis, leaf blade expands in two directions: longitudinal (proximodistal) and lateral (mediolateral) [13,14]. Finally, a leaf acquires its typical size and shape.
Leaf blade growth changes in time and space. A tensor-based model is proposed here to describe such a complex growth for the Arabidopsis leaf. The latter is generally flat (Fig. 1), and we considered
its projection onto a plane, so that the model is two-dimensional.
Fig. 1
Epoxy resin casts observed by Scanning Electron Microscopy (day 1, day 3, day 5). Red dots (characteristic marker points) were used to divide the leaf blade into polygons and obtain the meshwork (1st
row). The meshwork with an extreme direction of deformation calculated for an exemplary Arabidopsis leaf in 48-h time intervalsis represented with crosses. The color map represents the areal growth
rate (second row) (color online).
2 Material and methods
2.1 Plant material and growth conditions
Arabidopsis thaliana ecotype Columbia-0 (Col-0) plants were grown in pots in short days (9h day; 15h night), at temperature between 19–21 oC, illumination 60μmolm^–2s^–1. In such growth
conditions, aerial rosettes are formed in the axils of the oldest cauline leaf of plants 16–20 weeks after germination [1]. The third or fourth leaves of such aerial rosettes were used in the
investigation. All the leaves were in the expansion phase of development [15].
We examined the adaxial epidermis of leaf blade at three time points at 48-h time intervals (referred to as days 1, 3, 5) for three leaf blades. The examination was performed with the aid of a
non-invasive sequential replica method [16]. Briefly, the replica was taken from the leaf surface using silicon dental polymer. The replicas were next used as forms for epoxy resin casts, which were
observed in Scanning Electron Microscopy (SEM). An effort was made to obtain the top view image from each cast. This method gives very good results only for three time points. If more time steps are
applied, the leaf blade growth slows down. The experimental data were sufficient to propose a model of growth for the leaf blade.
2.2 Data analysis
In order to compute growth variables from the consecutive images of an individual leaf (day 1–5), we specified a mesh consisting of polygons. The polygons were defined by three to nine points that
can be recognized at consecutive images of a growing leaf (see red dots in Fig. 1). These points were either vertices (three-way junctions of anticlinal epidermal cell walls) or geometric centers of
trichome bases. Each polygon consisted of several cells. These empirical data were used to compute growth variables. Based on the polygon deformation during leaf growth, we calculated the directions
of maximal and minimal deformation for individual polygons using the Goodall and Green formula [17], which are further called extreme directions of deformation (EDDs). The Goodall and Green method is
dedicated to triangles. However, we applied it for polygons because, although the triangulation of the leaf blades would be possible, many of triangles would be far away from equilateral ones. This
would imply more errors than the adapted approximation in the Goodall–Green method for the polygons. For the modeling, we assumed that the EDDs represent PDGs. We also calculated the relative area
increment (I[a]) for each polygon according to the formula $Ia=St+1−StSt+1$, where S[t] is the surface of a chosen polygon in t time point.
2.3 Modeling
First, we determined the curvilinear coordinate system in which the leaf is growing. Two coordinate systems with one axis of symmetry have been applied to plant organs [18,19], i.e. paraboloidal and
logcosh curvilinear orthogonal coordinate system. We chose the paraboloidal one, which is simpler, and then we adjusted the location of the leaf blade assuming velocity functions for all its points.
The postulated velocity functions were non-stationary where their components were sigmoid functions. In statistical analysis, we employed the t-test to check how well the chosen coordinate system
fits to empirical data (comparing empirically obtained EDDs with versors of the coordinate system – PDGs). Having velocity functions in a curvilinear coordinate system, we applied the growth tensor
to develop the growth model for the Arabidopsis leaf. For calculations and visualization of virtual leaf blade growth, original codes were written in MATLAB (MathWorks). All the statistical analyses
were performed with the aid of STATISTICA 10 (StatSoft).
3 Results
3.1 Empirical data on leaf blade growth
First we computed the relative area increment (I[a]) and EDDs which are represented in Fig. 1 as colormaps and the crosses for the exemplary mesh of polygons at the surface of growing leaf. These
empirical data show that:
• • there is a gradient of I[a] along the midrib (the lowest rates are in the distal leaf portion);
• • growth is more anisotropic in the distal leaf portion than in proximal;
• • the area increment is lower during the second time interval.
3.2 Natural coordinate system and GT field for Arabidopsis leaf
Based on the computed EDDs, we assumed that the appropriate coordinate system for the arabidopsis leaf is paraboloidal. The explicit form of the paraboloidal coordinate system is:
where the scale factors are:
We further call this system the leaf natural coordinate system (L-NCS [u,v]).
Next, we searched for the correct location of the growing leaf blade in the L-NCS(u,v) (Fig. 2).
Fig. 2
Leaf natural coordinate system applied to a growing leaf. A) day 1, B) day 3 (color online).
Both stationary GT field and non-stationary GT field were considered (Fig. 3). First, we showed, with the ANOVA test (Table 1), that there were no significant differences between the analyzed leaves
in both considered time intervals (comparing angles between EDD[max] and versor in the u direction, e[u]).
Fig. 3
Schematic representation of the leaf in the initial stage of the simulation (A), in a non-stationary GT field (C), in a stationary GT field. (B, D) Final stage of simulation, non-stationary and
stationary GT fields, respectively (color online).
Table 1
Comparison of the considered Arabidopsis leaves in each time interval: (A, C) from day 1 to day 3; (B, D) from day 3 to day 5. (A, B) are computed for a non-stationary GT field, (C, D) for a
stationary GT field (the one-way ANOVA analysis for angles between EDD[max] and e[u] in the considered three leaves). The differences between the analyzed leaves are statistically non-significant
(significant level 0.05).
ANOVA test
F-value P
A 0.005 0.995
B 2.854 0.063
C 0.563 0.572
D 2.208 0.116
If the natural coordinate system were chosen correctly, the EDDs calculated for all the polygons would agree with the versors (e[u], e[v]) of the system. Therefore, angles were measured between the
maximal direction of deformation (EDD[max]) and e[u] at geometric centers of all polygons. This was done for both time intervals for the examined leaves in both stationary and non-stationary GT
fields. In order to determine whether there were significant differences between orientations of EDD[max] and e[u] (see mean and median values in Table 2), the t-test for dependent samples was
performed (significance level: 0.05). The paired t-test showed significant differences between EDD[max] and e[u] in a stationary GT field (Table 2 C, D), and no significant differences in a
non-stationary GT field (Table 2 A and B). Statistical analysis shows that the assumed non-stationary GT field operating in the chosen L-NCS (u,v) is the correct one and describes well the growth of
the leaf blade. We have proven that there are no statistically significant differences between the EDDs calculated directly from the empirical data and those assigned by coordinate system (e[u], e[v
]) during the virtual leaf blade growth.
Table 2
Paired t-test for each individual polygon for the differences between EDD[max] and e[u] for the growing leaf blades of A. thaliana Col-0 (experimental and model data). Calculation performed for two
time intervals of leaf growth: (A, C) from day 1 to day 3; (B, D) from day 3 to day 5. A, B are computed for a non-stationary GT field, C, D for a stationary GT field. Notation: SE–standard error, n
–sample size. Each class of angles was considered separately.
Paired t test
Mean±SE n t-value P
A 3.43±2.38 90 1.50 0.14
B 6.84±2.98 85 0.59 0.56
C 1.76±2.46 90 2.78 0.01
D 7.99±3.23 85 2.47 0.02
In the non-stationary GT field, the pattern of PDG trajectories computed from the model changes during leaf blade growth, because the GT field moves with respect to the leaf (the focus of the
coordinate system moves away from the leaf).
Having the L-NCS, we assumed displacement velocity functions V for all the points of a growing leaf. In 2D, the velocity functions (both are sigmoid, see also [20]; Fig. 4A) in two orthogonal
directions (u,v) are:
$vv=b0.2151+e−1.96v+3.5$ (5)
where:are the extinguish terms, and:is a time step.
Fig. 4
Velocity functions. A) in the u direction (semi logarithmic scale), B) in the v direction (color online).
The velocity functions are presented in Fig. 4 for several time steps. The velocity function in the u direction is presented on a semi logarithmic scale, because its value decreases rapidly in time.
The general expression for the velocity function for the Arabidopsis leaf in the i direction is:
where a is the extinction factor, A the amplitude of the velocity function, x the slope of the sigmoid, y the translation in the i direction.
The graphical representation of R[1] in 2D, calculated from the above equations of the velocity functions, is presented in Fig. 5. The spatial variability from isotropic to anisotropic growth is
apparent. The considered leaf blade moves through this field, and accordingly R[1] in the u direction changes its value, while R[1] changes in the u direction are much smaller.
Fig. 5
Anisotropy of the growth tensor field represented by indicator (color online).
3.3 Application of the non-stationary GT field in the simulation model of Arabidopsis leaf growth
Further, using the GT, we created also a model of growing Arabidopsis leaf in the expansion phase of its development. The results of such modeling are presented in Fig. 6. We assumed that an initial
shape of a virtual leaf blade is a simplified outline of real leaf blade (neglecting the leaf margin serration). Such virtual blade was placed in the defined non-stationary GT field, and five
exemplary time steps of its growth are presented. The simulation shows a cessation wave of R[a] that moves from the proximal to the distal part of the leaf (see also Movie 1). In the first time
interval, the value of R[a], computed between the first and the second time step of the simulation, is generally higher than in the following intervals, and attains its maximal value in the proximal
part of the leaf blade. The gradient of R[a] is the steepest in the first time interval and, in the following intervals, it gradually decreases. Similar growth changes take place in the real leaf
blade (compare Fig. 1, 2nd row, and Fig. 6). Therefore, we conclude that our model functions properly.
Fig. 6
Five time stages of the growth of a virtual leaf. The current position of the growth tensor field is represented by the axis. The color map of polygons represents the areal growth rate R[a]
(colorbar in arbitrary units).
The proposed model can also be used to predict the growth rate along the main leaf axis, i.e. along the midrib. Movie 2 shows the changes in Rl computed in the direction parallel to the leaf midrib.
It shows that the maximum Rl is in the most proximal part of the leaf blade. In the following steps, this value decreases and the Rl distribution along the midrib becomes uniform. We can also predict
Rl in other directions, for example the Rl in the direction perpendicular to the midrib (u direction in L-NCS) (Movie 3). In this case, no maximum appears. Rl in u direction is uniform and decreases
with time.
4 Discussion
Here, we present the modelling method adopted to leaf blade expansion that can be used also to model the growth of other, symplastically growing plant organs. To specify the model variables, we need
the experimental data on displacement velocities of marker points on the organ surface. Having the empirically obtained displacement velocities, we can formulate appropriate velocity functions in the
natural coordinate system (NCS). The system is natural if the orientation of EDDs obtained using the Goodall–Green formula are in agreement with the orientation of versors in NCS. The velocity
functions are therefore combined with the NCS and the estimations between both are mutually dependent. The calculations are much easier in a stationary field case because the versors are independent
of velocity functions; nevertheless, non-stationary fields can be also modeled.
There exist several models of leaf blade growth. Computer analysis of leaf growth was first performed by Erickson [8]. This analysis was based on empirical data acquired from a Xanthium pensylvaticum
leaf and the method of analysis proposed by Richards and Kavanagh [21]. A set of points (landmarks) was placed on the surface of the growing organ. These points changed position due to surface
expansion. The displacement velocity vectors of points were used to estimate growth gradients within the plant organ. A discrete approach was also used for modeling leaf growth [22]. Wang et al. [23]
created a model for the Xanthium leaf, similar to ours, but they consider only the growth simulation without maps of the growth rate changes. Their animation corresponds to three days of leaf growth.
Kennaway et al. [24] studied the role of tissue polarity and differential growth in the generation of the shape. Their model includes interactions between regulatory factors (hormones), velocity
field (generating displacement field used to calculate the field of volumetric growth) and elasticity (used to compute the resultant deformation of the cells). Dupuy et al. [25] modeled the cell–cell
interactions with the genetic background. They focussed on the distribution of trichomes on the leaf surface, while, in our approach, we focus on the growth rate maps and the change of the general
shape of the whole leaf blade. Backhaus et al. [26] and Bilsborough et al. [27] in turn model only the margin of the Arabidopsis leaf accounting for the serration, and the geometric form of the leaf,
and combine it with single-metric shape parameters for different mutants [26] or the auxin distribution and gene expression [27]. Since our model neglects the leaf margin serration, combining their
modeling with ours, one can obtain a wider description of the Arabidopsis leaf growth and form.
In the present paper, first the velocity field V for the displacement of points and the appropriate coordinate system was defined. This allowed us to calculate the growth tensor field in which the
leaf blade is placed. This field is non-stationary and the numerical calculations are unavoidable. The field is dedicated to experimental data coming from Arabidopsis leaves. We obtained the
quantitative data of the velocity field with the aid of the sequential replica method [16]. The model can be briefly described as follows. The leaf grows in a non-stationary GT field according to the
velocity functions given by Eqs (4, 5). The field moves with respect to the leaf along the leaf midrib (Fig. 6). The velocities decrease in time with factors a, b (Eqs. (6, 7)).
We postulate the velocity functions that describe the displacement velocities of the points defined on the leaf blade. These velocity functions influence the actual position of the leaf blade in the
coordinate system. We have proven that there are no statistically significant differences between the orientation of EDDs calculated directly from the empirical data and those assigned via coordinate
system (e[u], e[v]) during the virtual leaf blade's growth. We showed that the empirically obtained orientation of EDDs fit the principal direction of growth in both examined time intervals. The
velocity functions are valid only in the curvilinear coordinate system, in this case paraboloidal, and therefore we regard this system as the natural coordinate system for Arabidopsis leaf blade
(L-NCS). The presented model of the leaf blade growth can be used to predict the growth in any intermediate time step and in any direction during the expansion stage of leaf development. The velocity
functions are sigmoids and they can be modulated via the parameters defined in Eq. (9), to achieve the empirically obtained values.
Up to now, in the description of plant growth with the aid of GT, only the stationary GT field was considered, in which the growth tensor was steady in time and space. Such a stationary GT field was
applied to the modeling of growth of root and shoot apices [9–11]. In the present model that assumes a non-stationary GT field, the leaf moves through the field (spatial changes) and growth is
decreasing in time (temporal changes). This property of growth can be obtained by using the specific velocity functions. We considered also the stationary GT field, by placing the growing leaf in the
focus of the coordinate system, but we falsified this kind of approach: there are significant differences between orientation of versors in L-NCS and the empirically obtained EDDs.
The stationary GT field was applied to the organs self-maintaining their shapes: root and shoots. Leaf is beyond this description, the leaf blade changes its shape in time and space and the
non-stationary GT field can explain this feature. To conclude, one can say that the fact that plant organs self-maintain their shapes can be described by a stationary GT field, and plant organs
changing their shapes by a non-stationary GT field.
Our model is in agreement with the results presented by Kuchen et al. (fig. 1 J in [30]) and Remmler and Rolland-Lagan [28] and Walter et al. [29]. The maximum growth rate can be found in the
proximal region of the leaf blade, and a gradient of growth rate is observed. Also, the linear growth rates in Movie 2 are similar to those shown in Fig. 1e–i of Kuchen et al. [30]. GT modeling
provides also information about growth anisotropy (temporal and spatial dependencies).
The presented model includes a complete kinematic information on the leaf blade growth. We are able to calculate growth rates in any direction as well as growth anisotropy. We can also simulate
changes in leaf blade shape and size during its growth. The starting point of the modeling is the determination of the displacement velocity functions, which in the case of the Arabidopsis leaf blade
are sigmoids (Fig. 4A in [20]). The next important issue is to adjust the appropriate coordinate system, defining the principal directions of growth. These two ‘variables’ (velocity functions and
natural coordinate system) provide a full kinematic description of growth, but the numerical calculations are unavoidable.
Disclosure of interest
The authors declare that they have no conflicts of interest concerning this article.
The authors specially thank Prof. Dorota Kwiatkowska and Dr Agata Burian (University of Silesia) for discussion and critical reading of this text.
Dr Ewa Teper from the Laboratory of Scanning Electron Microscopy, Faculty of Earth Sciences, University of Silesia, is acknowledged for her help in the preparation of SEM micrographs.Author
contributions: M.L. designed and performed research, analyzed data, prepared model and wrote the paper, A.P-S. performed statistical analysis and analyzed data, J.E. and J.P. experiments.Funding: The
research was partially funded by a grant from the National Science Centre [grant number N N303 8100 40] in 2011–2012 (ML, JE), and by the Polish Ministry of Science and Higher Education [grant number
N N303 3917 33] in 2007–2010 (JP). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. | {"url":"https://comptes-rendus.academie-sciences.fr/biologies/articles/10.1016/j.crvi.2013.09.001/","timestamp":"2024-11-03T03:19:58Z","content_type":"text/html","content_length":"99552","record_id":"<urn:uuid:d7d0dcca-032e-4773-878f-99027cee39eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00286.warc.gz"} |
SSE Calculator – Optimize Your Data Analysis
This SSE Calculator will help you easily compute the sum of squared errors.
How to Use the SSE Calculator
This calculator helps you estimate the Self-Employment Tax based on your net income, business expenses, and applicable tax rate.
1. Enter your Net Income in the given field.
2. Enter your Business Expenses in the specified field.
3. Enter the Tax Rate applicable to your earnings.
4. Click on the “Calculate” button to see the estimated Self-Employment Tax.
Explanation of the Calculation
Your Self-Employment Tax is calculated using the formula:
SSE Tax = (Net Income – Expenses) * (Tax Rate / 100)
This means that first, your business expenses are deducted from your net income. Then, the resulting amount is multiplied by your tax rate (expressed as a percentage).
This calculator provides an estimation based on the net income, expenses, and tax rate you input. The actual tax could vary based on specific tax rules, deductions, and the accuracy of the amounts
entered. Always consult a tax professional for precise calculations and tax advice.
Use Cases for This Calculator
Calculating Simple Addition
Use the calculator to quickly add two or more numbers together. Input the numbers you want to add and click the “+” button to get the sum instantly.
Performing Subtraction
Easily subtract numbers by entering the minuend and subtrahend into the calculator. Press the “-” button to see the result without any hassle.
Multiplying Numbers
When you need to multiply two or more numbers, enter them in the calculator and hit the “x” button. The calculator will show you the product in no time.
Dividing Numbers
Perform division with ease using the calculator. Input the dividend and divisor, then click the “รท” button to get the quotient displayed accurately.
Calculating Percentages
Quickly calculate percentages by entering the percentage value and the base number. Click the “%” button to see the result, making percentage calculations a breeze.
Working with Decimals
Handle decimal numbers effortlessly with the calculator. Add, subtract, multiply, or divide decimals without worrying about manual errors.
Clearing Input
If you make a mistake or want to start fresh, use the “C” button to clear the input field quickly. This feature ensures you can easily correct any errors made during calculations.
Using Memory Functions
Store numbers in memory using the “M+” and “M-” buttons to easily recall them for future calculations. The memory functions help you manage and reuse numbers efficiently.
Handling Large Numbers
The calculator can handle large numbers without any difficulty. Whether you’re working with millions or billions, the calculator accurately computes the results for you.
Switching Between Positive and Negative Numbers
Easily change the sign of a number by using the “+/-” button. This functionality allows you to switch between positive and negative numbers seamlessly during calculations. | {"url":"https://madecalculators.com/sse-calculator/","timestamp":"2024-11-08T15:17:42Z","content_type":"text/html","content_length":"144283","record_id":"<urn:uuid:f4f245d5-cc0e-4a80-be0b-c7defb0bfc28>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00589.warc.gz"} |
When Tags are Introduced to Numbers, the Sum is Greater than the Whole
In virtually every occupation, one could possibly benefit from data, analytics, and tagging. In this session, the focus will obviously be on tagging. Specifically, this presentation will discuss how
tagging and math go hand-in-hand. A tag is a fact about something. Records are used to model things. Many at Haystack Connect this year model energy in their lines of work. By calling a point power
or energy, one knows its mathematical place. Its data can be added, subtracted, multiplied, or divided with certain other numerical values of the appropriate units. Moreover, far more sophisticated
things can be done: energy can be converted to power and power to energy by the use of calculus. Similarly, something with a temp tag can be converted to a temperature’s rate of change and instantly,
those numbers are useful thanks to the meanings supplied by tagging.
Other wondrous mathematical things can be done as well. Most of the data people get for analytics is of the form: timestamp, value pair. When data is presented in this form, it is called the time
domain. If known what the data is and its datatype, the whole table can be converted into the frequency domain which has useful meanings as well and can be considered a footprint and allow different
buildings to be compared. Fourier analysis does this. Without even knowing it, many who already use tagging are dependent on it from the simplest addition to the most complicated such as
computational fluid dynamics. It is amazing how tagging initiatives have made things possible that the originators had not even considered.
In this session we will explore concepts and examples including how tagging affects basic project setup all the way through some advanced math. Due to the implied meanings of various tags provided by
the Project Haystack ontology, all of these mathematical operations can be easily done, but more importantly, their meanings can be easily understood. | {"url":"https://2017.haystackconnect.org/technical-program/day-1/event/71-when-tags-are-introduced-to-numbers-the-sum-is-greater-than-the-whole","timestamp":"2024-11-06T05:34:52Z","content_type":"text/html","content_length":"24171","record_id":"<urn:uuid:82496809-5d8e-49b2-9ef9-d922135146ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00305.warc.gz"} |
Analyzing Freeport-McMoRan (FCX) Through Ben Graham's Investment Lens
Freeport-McMoRan is currently trading at $48.45 per share and has a Graham number of $18.25, which implies that it is 165.5% above its fair value. We calculate the Graham number as follows:
√(22.5 * 5 year average earnings per share * book value per share) = √(22.5 * 1.43 * 12.103) = 18.25
The Graham number is one of seven factors that Graham enumerates in Chapter 14 of The Intelligent Investor for determining whether a stock offers a margin of safety. Rather than use the Graham number
by itself, its best to consider it alongside the following fundamental metrics:
Sales Revenue Should Be No Less Than $500 million
For Freeport-McMoRan, average sales revenue over the last 5 years has been $36.85 Billion, so in the context of the Graham analysis the stock has impressive sales revenue. Originally the threshold
was $100 million, but since the book was published in the 1970s it's necessary to adjust the figure for inflation.
Current Assets Should Be at Least Twice Current Liabilities
We calculate Freeport-McMoRan's current ratio by dividing its total current assets of $14.06 Billion by its total current liabilities of $5.82 Billion. Current assets refer to company assets that can
be transferred into cash within one year, such as accounts receivable, inventory, and liquid financial instruments. Current liabilities, on the other hand, refer to those that will come due within
one year. In Freeport-McMoRan’s case, current assets outweigh current liabilities by a factor of 2.4.
The Company’s Long-term Debt Should Not Exceed its Net Current Assets
This means that its ratio of debt to net current assets should be 1 or less. Since Freeport-McMoRan’s debt ratio is -0.8, the company has much more liabilities than current assets because its long
term debt to net current asset ratio is -0.8. We calculate Freeport-McMoRan’s debt to net current assets ratio by dividing its total long term of debt of $9.42 Billion by its current assets minus
total liabilities of $25.2 Billion.
The Stock Should Have a Positive Level of Retained Earnings Over Several Years
Freeport-McMoRan had poor record of retained earnings with an average of $-6489937500.0. Retained earnings are the sum of the current and previous reporting periods' net asset amounts, minus all
dividend payments. It's a similar metric to free cash flow, with the difference that retained earnings are accounted for on an accrual basis.
There Should Be a Record of Uninterrupted Dividend Payments Over the Last 20 Years
Freeport-McMoRan has offered a regular dividend since at least 2007. The company has returned an average dividend yield of 3.7% over the last five years.
A Minimum Increase of at Least One-third in Earnings per Share (EPS) Over the Past 10 Years
To determine Freeport-McMoRan's EPS growth over time, we will average out its EPS for 2007, 2008, and 2009, which were $7.50, $-14.86, and $2.93 respectively. This gives us an average of $-1.48 for
the period of 2007 to 2009. Next, we compare this value with the average EPS reported in 2019, 2020, and 2021, which were $-0.17, $0.41, and $2.90, for an average of $1.05. Now we see that
Freeport-McMoRan's EPS growth was 170.95% during this period, which satisfies Ben Graham's requirement.
Based on the above analysis, we can conclude that Freeport-McMoRan does not have the profile of a defensive stock according to Benjamin Graham's criteria because it is trading above its fair value
and has:
• impressive sales revenue
• an excellent current ratio of 2.42
• much more liabilities than current assets because its long term debt to net current asset ratio is -0.8
• poor record of retained earnings
• a solid record of dividends
• declining EPS growth | {"url":"https://marketinference.com/analysis/r/2024/10/18/FCX/","timestamp":"2024-11-08T08:02:52Z","content_type":"text/html","content_length":"55030","record_id":"<urn:uuid:42ad527f-b74d-40c9-b894-5139c3defaa5>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00106.warc.gz"} |
ounds for
New lower bounds for halfspace emptiness
Proceedings of the 37th Annual IEEE Symposium on Foundations of Computer Science, 472-481, 1996.
Merged into the journal version of "Space-time tradeoffs for emptiness queries".
We present a lower bound of Omega(n^4/3) for the following halfspace emptiness problem: Given a set of n points and n hyperplanes in R^5, is every point above every hyperplane? This matches the
best known upper bound to within polylogarithmic factors, and improves the previous best lower bound of Omega(n log n). Our lower bound applies to a general class of geometric divide-and-conquer
algorithms, called polyhedral partitioning algorithms. Informally, a polyhedral partitioning algorithm covers space with a constant number of constant-complexity polyhedra, determines for each
polyhedron which points and halfspaces intersect it, and recursively solves the resulting subproblems.
Publications - Jeff Erickson (jeffe@cs.uiuc.edu) 15 Nov 1999 | {"url":"https://jeffe.cs.illinois.edu/pubs/halfempty.html","timestamp":"2024-11-14T06:48:12Z","content_type":"text/html","content_length":"2439","record_id":"<urn:uuid:2fcad360-97b7-4b63-b310-fa3591c6aea5>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00782.warc.gz"} |
Development of a Measuring Method of Cosmic-Ray Muon Momentum Distribution Using Drift Chambers
Article information
J. Radiat. Prot. Res. 2024;.jrpr.2023.00423
Received 2023 August 2; Revised 2023 November 2; Accepted 2023 November 29.
Soft errors in semiconductor devices caused by cosmic rays have been recognized as a significant threat to the reliability of electronic devices on the ground. Recently, concerns about soft errors
induced by cosmic-ray muons have increased. Some previous studies have indicated that low-energy negative muons have a more significant contribution to the occurrence of soft errors than positive
muons. Thus, charge-identified low-energy muon flux data on the ground are required for accurate evaluation of the soft error rate. However, there are no such experimental data in the low-energy
Materials and Methods
We designed a new muon detector system to measure low-energy muon flux data with charge identification. The major components consist of two drift chambers and a permanent magnet. The charge and
momentum of detected muon can be identified from the deflection of the muon trajectory in the magnetic field. An algorithm to estimate the muon momentum is developed using numerical optimization by
combining the classical Runge-Kutta and quasi-Newton methods. The momentum search algorithm is applied to event-by-event data of positive and negative muons obtained by Monte Carlo simulations with
Particle and Heavy Ion Transport code System, and its performance is examined.
Results and Discussion
The momentum search algorithm is fully applicable even in the case of an inhomogeneous magnetic field. The precision of the momentum determination is evaluated by considering the stochastic
fluctuation caused by multiple scattering and the position resolution of the drift chambers. It was found that multiple scattering has a significant contribution to the precision in the momentum
region below 50 MeV/c, while the detector position resolution considerably affects the precision above that.
It was confirmed that the momentum search algorithm works well with a sufficient precision of 15% in the low-momentum region below 100 MeV/c, where no muon flux data exist.
Soft errors are temporary malfunctions of semiconductor devices caused by radiations and lead to reduced reliability of electronic devices. Cosmic-ray neutrons and muons are major environmental
radiations that cause soft errors on the ground. Recently, concerns about muon-induced soft errors have increased owing to the miniaturization of integrated devices and the lowering of operating
voltages [1]. Muons are elementary particles whose absolute value of electric charge is equal to that of electrons and are classified into two types: negatively charged (negative muons) and
positively charged (positive muons). The average flux of cosmic-ray muons is about 1 cm^−2*min^−1 on the ground.
The soft error rate (SER) caused by cosmic-ray muons is calculated by
(1) SER=∫σSEU(Eμ)φ(Eμ)dEμ
where E[μ] (MeV) is the muon kinetic energy, σ[SEU] (cm^2) is the single event upset (SEU) cross-section, and φ (cm^−2s^−1MeV ^−1) is the muon’s energy differential flux under the operating
environment of electronic devices. Some previous works on muon irradiation tests for 65 nm static random access memories (SRAMs) have clarified that negative muons have larger SEU cross sections than
positive muons in the low-energy region because secondary ions generated by negative muon nuclear capture reaction cause additional SEUs in SRAM devices [2–4]. Therefore, charge-identified low-energy
muon flux data on the ground are required for a highly accurate evaluation of the SER based on Equation (1). As shown in Fig. 1 [5–8], however, there is no measurement of cosmic-ray muon flux in the
low-energy region. Additionally, positive and negative muons have not been discriminated in the energy region measured in the past. Thus, there is still uncertainty in the muon flux on the ground,
which is necessary for the SER evaluation.
To improve this situation, we designed a dedicated detector system to obtain charge-identified low-energy muon flux data. The principle of determination of the muon charge and momentum is based on
the deflection of muons in the magnetic field in the detector system composed of two track detectors and a permanent dipole magnet. Therefore, an algorithm to determine the charge and momentum is
important in data analysis. This work is devoted to the development of a momentum search algorithm. Finally, the developed algorithm is applied to the data of positive and negative muons obtained by
a Monte Carlo simulation with Particle and Heavy Ion Transport code System (PHITS) [9], and the performance and precision are examined.
Materials and Methods
1. Detector System
A two-dimensional view of the designed detector system is shown in Fig. 2. Its major components are two drift chambers (DCs) as track detectors and a permanent dipole magnet. Two cuboid DCs (540 mm
[width]×580 mm [length]×357 mm [height]) are placed above and below the permanent dipole magnet. Each pole of the dipole magnet is a 200 mm×200 mm square shape with a 90 mm gap between the poles.
The DCs detect the muon track before and after passing through the magnetic field produced by the permanent magnet. The muon trajectories projected in the plane perpendicular to the magnetic field
are illustrated in Fig. 2. Assuming a uniform magnetic field for simplicity, a muon makes a circular motion by Lorentz force, and its curvature radius R is given by
(2) R=p⊥|q|B
where q is the muon charge, B is the magnetic flux density, and p[⊥] is the muon momentum perpendicular to the magnetic field. The q equals to +e for positive muons and −e for negative muons, where e
is the elementary charge. Suppose the curvature radius is estimated by extrapolating the straight muon tracks detected by the DCs to the magnetic field region, as shown in Fig. 2. In this case, the
muon momentum can be determined by Equation (2). In the case of the realistic non-uniform magnetic field, however, Equation (2) does not hold. The trajectory of muons in the non-uniform magnetic
field is numerically calculated using the equation of motion. The momentum should be determined by a numerical optimization method so that the estimated trajectory connects smoothly with the muon
tracks detected in the upper and lower DCs.
2. Development of the Momentum Search Algorithm
1) Muons transport simulation by PHITS
PHITS can simulate realistic muon trajectories by taking into account the angular and energy straggling due to multiple scattering in the designed detector system [9]. It can also incorporate
specific magnetic field information, allowing realistic simulations of muon deflection in a non-uniform magnetic field. In the PHITS simulation, a non-uniform magnetic field map generated by Amaze
(Advanced Science Laboratory Inc.) is input [10]. Fig. 3A shows the magnetic flux density in the direction perpendicular to Fig. 3B at the center of the magnetic pole face of the magnet. The magnetic
field is found to be non-uniform in the edge region of the magnet. Various directions of muon incidence on the upper DC are randomly generated. For example, Fig. 3B shows a typical simulated
trajectory of a positive muon of 50 MeV/c.
2) Extrapolation to the muon track in magnetic field
A more accurate estimation of muon momentum requires information about muons in a non-uniform magnetic field. The DCs are located outside the magnet and cannot detect the muon trajectories in the
magnetic field. Therefore, we propose a method to extrapolate the trajectory of a muon in the magnetic field from the tracks detected by the DCs.
The trajectory of a muon in the magnetic field is described by the following simultaneous differential equations for the momentum p→ and position r→:
(3) {dp→dt=qm(p→×B→)dr→dt=p→m
where q is the muon charge, m is the muon mass, and B→ is the magnetic flux density. The fourth-order Runge-Kutta method is applied to numerically solve the above equation of motion using the initial
condition given by the position r0→ and momentum p0→. Thus, the trajectory of a muon in the non-uniform magnetic field is calculated.
3) Momentum search algorithm
We developed a momentum search algorithm to estimate the momentum of muons incident on the detector system. An overview of the algorithm is explained using the schematic diagram in Fig. 4. Let the
muon tracks in the two DCs be given by black lines in the left panel of Fig. 4 be known by measurements. In this work, they are given by PHITS simulation. Hereafter, the muon tracks detected in the
upper and lower DCs will be referred to as the upper and lower tracks, respectively.
First, we calculate the trajectory of the muon in the magnetic field by solving Equation (3) with the Runge-Kutta method. The absolute value of p0→ is determined by assuming a uniform magnetic field
whose magnetic flux density is the same as that at the center of the magnet. The direction is equal to that of the upper track. The calculated muon track is presented by the red line in Fig. 4. In
the non-uniform magnetic field, the intersection point of the muon track on the center plane of the lower DC does not necessarily match that of the lower track, as shown in the right panel of Fig. 4.
Next, the initial momentum is updated so that the following evaluation function is minimized:
(4) f=(xact-xext)2+(yact-yext)2
where x[act] and y[act] are the intersection points of the extracted track on the center plane of the lower DC, and x[ext] and y[ext] are those of the extrapolated track. The momentum is optimized to
minimize f defined by Equation (4) to determine p[down] using ‘TMinuit’ class in ROOT [11] based on a quasi-Newton algorithm. Moreover, let us consider a time-reversal situation in Fig. 4 and
determine the muon momentum using the lower track detected in the lower DC. The same procedure is applied to the upward extrapolation of the muon track to estimate p[up]. Here p[down] and p[up]
correspond to the momentum value optimized by downward extrapolation or upward extrapolation. Finally, the momentum of the incident muon is determined by taking the average of p[down] and p[up].
Results and Discussion
1. Estimated Momentum Distribution
We examined the performance of the developed algorithm and evaluated the precision of momentum determination using the event-by-event data of muon trajectories obtained by the simulation with PHITS
for an incident muon momentum of 50 MeV/c. The simulations were performed under the following three conditions: (a) no stochastic fluctuation; (b) stochastic fluctuation caused by the angular and
energy straggling due to multiple scattering by gas, air, and Mylar layers, which are components in the detector system; and (c) stochastic fluctuation caused by the position resolution of DCs. The
position resolution is assumed to be 500 μm which is reasonable for the designed DCs. The stochastic fluctuation caused by the position resolution was considered using a normally distributed random
number with a standard deviation of 500 μm in the position passing through each layer of DCs.
The distribution of the momentum estimated by applying the developed algorithm to the PHITS simulation results is shown in Fig. 5. Fig. 5A shows that the developed algorithm uniquely determines the
muon momentum for the non-uniform magnetic field under no stochastic fluctuation. On the other hand, it is found that the event distributions of the muon momentum estimated by the developed algorithm
are spread out for the cases Fig. 5B and 5C. This is due to the fact that the momentum search algorithm cannot take into account the effects of multiple scattering and detector position resolution.
However, the actual trajectory detected by the DCs includes these stochastic fluctuations, so the width of the distributions corresponds to the precision of the momentum to be estimated in actual
measurements. The standard deviations when fitting the distributions with a Gaussian function are listed in Table 1. The uncertainty for the condition (c) is determined using by applying the results
of (a) and (b) to the error propagation law. From Table 1, it is confirmed that the momentum of muons with 50 MeV/c can be determined with 10% precision even under the condition (c), which is close
to the actual measurements.
2. Relative Resolution of Estimated Momentum
In Fig. 6, the relative resolution of the momentum estimated by the developed algorithm is plotted as a function of the muon momentum in the low-momentum region where no muon flux data are available.
The relative resolution R is defined by
(5) R(%)=σpp×100,
where σ[p] is calculated by fitting the distribution of each momentum with a Gaussian function, and p is the muon momentum in the PHITS simulation. The total relative resolution R defined by Equation
(5), given by the black dots, is decomposed into two components. One comes from angle and energy straggling due to multiple scattering, as indicated by the red dots. Another one is caused by the
detector position resolution represented by the blue dots. Each dot is plotted in steps of 10 MeV/c from 30 MeV/c to 120 MeV/c, and each line is plotted by fitting the corresponding dots with a
second-order polynomial.
From Fig. 6, it is found that the effect of multiple scattering dominates the relative resolution in the momentum region below 50 MeV/c. In constant, the detector position resolution significantly
affects the resolution in the energy region above that. Thus, this simulation result indicates that the muon momentum can be determined within 15% precision for muons below 100 MeV/c under
near-realistic conditions.
We developed the momentum search algorithm used in the data analysis for our designed detector system consisting of two DCs and a permanent dipole magnet to measure charge-identified muon flux data
on the ground. The transport of muons in the detector system was simulated by a three-dimensional Monte Carlo code PHITS, and the obtained event-by-event data were employed to investigate the
performance of the momentum search algorithm. As a result, it was confirmed that the muon momentum can be determined within 15% precision for muons in the low-momentum region below 100 MeV/c, where
no muon flux data exist.
In the future, we plan to evaluate the performance of the detector system under fabrication. One of the evaluation items will be to experimentally obtain the position resolution of the DCs. Then, the
precision of the momentum determination will be verified using the actual measurement data of muon tracks detected by the DCs. Finally, we intend to measure charge-identified cosmic-ray muon fluxes
on the ground in the low-momentum region where experimental data are scarce.
Conflict of Interest
No potential conflict of interest relevant to this article was reported.
Ethical Statement
This article does not contain any studies with human participants or animals performed by any of the authors.
Author Contribution
Conceptualization: Sato A, Watanabe Y. Methodology: Nakagami N, Kamei S, Kawase S. Data curation: Sato A. Formal analysis: Kamei S, Kawase S, Sato A. Supervision: Watanabe Y. Funding acquisition:
Watanabe Y. Project administration: Kamei S, Kawase S. Writing - original draft: Nakagami N. Writing - review & editing: Kawase S, Watanabe Y. Approval of final manuscript: all authors.
This work was supported by Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research (KAKENHI) Grant Numbers JP19H05664, JSPS21J12445, and JP21K12564.
1. Kato T, Tampo M, Takeshita S, Tanaka H, Matsuyama H, Hashimoto M, et al. Muon-induced single-event upsets in 20-nm SRAMs: comparative characterization with neutrons and alpha particles. IEEE Trans
Nucl Sci 2021;68(7):1436–1444.
2. Liao W, Hashimoto M, Manabe S, Watanabe Y, Abe SI, Nakano K, et al. Measurement and mechanism investigation of negative and positive muon-induced upsets in 65-nm bulk SRAMs. IEEE Trans Nucl Sci
3. Manabe S, Watanabe Y, Liao W, Hashimoto M, Nakano K, Sato H, et al. Negative and positive muon-induced single event upsets in 65-nm UTBB SOI SRAMs. IEEE Trans Nucl Sci 2018;65(8):1742–1749.
4. Mahara T, Manabe S, Watanabe Y, Liao W, Hashimoto M, Saito TY, et al. Irradiation test of 65-nm bulk SRAMs with DC muon beam at RCNP-MuSIC facility. IEEE Trans Nucl Sci 2020;67(7):1555–1559.
5. Allkofer OC, Carstensen K, Dau WD. The absolute cosmic ray muon spectrum at sea level. Phys Lett B 1971;36(4):425–427.
6. Kremer J, Boezio M, Ambriola ML, Barbiellini G, Bartaulucci S, Bellotti R, et al. Measurements of ground-level muons at two geomagnetic locations. Phy Rev Lett 1999;83(21):4241–4244.
7. Munteanu D, Moindjie S, Autran JL. A water tank muon spectrometer for the characterization of low energy atmospheric muons. Nucl Instrum Methods Phys Res A 2019;933:12–17.
8. Bateman BJ, Cantrell WG, Durda DR, Duller NM, Green PJ, Jelinek AV, et al. Absolute measurement of the vertical cosmic ray muon intensity at 3–50 GeV/c near sea level. Phys Lett B 1971;36
9. Sato T, Iwamoto Y, Hashimoto S, Ogawa T, Furuta T, Abe S, et al. Features of Particle and Heavy Ion Transport code System (PHITS) version 3.02. J Nucl Sci Technol 2018;55(6):684–690.
11. Brun R, Rademakers F. ROOT: an object oriented data analysis framework. Nucl Instrum Methods Phys Res A 1997;389(1–2):81–86.
Article information Continued
Copyright © 2024 The Korean Association for Radiation Protection | {"url":"https://jrpr.org/journal/view.php?viewtype=pubreader&number=1160","timestamp":"2024-11-14T04:06:06Z","content_type":"application/xhtml+xml","content_length":"65287","record_id":"<urn:uuid:810911a6-36fa-4fca-b97e-25ad5955b568>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00531.warc.gz"} |
Effective field theory in particle physics
Effective theories provide an optimal framework to perform calculations in which separate scales are relevant. When applied to particle physics, effective field theory allows us to evaluate, in a
systematic and very efficient way, the phenomenological implications of heavy particles at energies well below their mass. We will discuss the main ideas behind effective field theories, how they can
be used in physics beyond the Standard Model in a very efficient way, and explore in detail the tecniques required to compute the renormalization group equations and the finite matching up to the
one-loop order. The emphasis will be on practical details and specific calculations and tecniques, rather than a comprehensive discussion of all aspects of effective field theories. The ideas can
nevertheless be used in many other theories and fields. | {"url":"https://indico.ific.uv.es/event/6943/?print=1","timestamp":"2024-11-13T02:04:58Z","content_type":"text/html","content_length":"17727","record_id":"<urn:uuid:2dd11950-45c2-4f5c-b375-559dacb5a043>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00847.warc.gz"} |
Investigation of Anisoplanatic Chaos-based Signal and Image Transmission and Retrieval Through Atmospheric Phase Turbulence
Investigation of Anisoplanatic Chaos-based Signal and Image Transmission and Retrieval Through Atmospheric Phase Turbulence
Degree Name
Ph.D. in Engineering
Department of Electrical and Computer Engineering
Advisor: Monish Chatterjee
This research began as a continuation of work on the propagation of planar electromagnetic (EM) waves through a turbulent atmosphere, specifically a form of refractive index based phase turbulence
modeled by the Modified von Karman Spectrum (MVKS). In the previous work within our group, EM propagation through a turbulent atmosphere under the MVKS model was investigated for essentially
isoplanatic propagation, whereby the propagation from the source to the receiver progressed along a horizontal path, such that the effective structure parameter associated with the turbulence
remained unchanged along the propagation. The problem was numerically set up by using the split-step propagation model, whereby the EM wave from the source (sometimes interpreted as a planar
aperture) propagates alternately through non-turbulent regions (governed by standard Fresnel-Kirchhoff diffraction), and thereafter through MVKS regions where the phase turbulence occurs. A narrowly
turbulent layer is described by a random 2D phase screen in the transverse plane; extended turbulence is modeled by a series of planar phase screens placed along the propagation path.In the above
analyses, propagation of both uniform as well as profiled plane waves was considered. The present research commenced with investigating uniform, Gaussian and Bessel beam propagation along a turbulent
path, and detailed numerical simulations were carried out relative to infinite as well as finite apertures in the source plane (including single and double slits, and single and double circular
apertures), considering both non-turbulent and turbulent paths for comparison. Results were obtained in the near, far and deep far fields.The problem was further developed to include the case of
anisoplanatic plane EM wave propagation over a slanted path. The turbulence structure function (Cn2) in this environment was considered to be altitude dependent, and for this purpose the
Hufnagel-Valley (HV) model for the structure function. A standard prototype tested for this system consisted of propagation along a slanted path with a fixed horizontal distance, and made up along
the propagation path of a diffractive (LD) and a turbulent (LT) section. The effect of turbulence was examined for test 2D images/transparencies under two environments: (a) the 2D image, under
digital encoding converted to time signals, being used to modulate a carrier (typically optical) wave, which is thereafter propagated across the LD+LT path, and recovered in the "image" plane using
heterodyne-type communications strategies. Of special note here is the fact that since MVKS and most other turbulence models are intrinsically spatial in nature, a method has been developed within
the group whereby the time-statistics of the turbulence is derived from received intensities (typically on-axis) as the phase screen(s) is/are varied at a specific rate corresponding to the average
turbulence frequency (in the range 20 Hz-200 Hz). Using this statistical information, the modulated wave propagation across the turbulence is examined; and (b) the source image/transparency is
treated as a spatial amplitude distribution through which an unmodulated carrier wave (in the phasor domain) is propagated, and later the object transparency is recovered via a positive thin lens in
its back focal plane (assuming thereby that the object transparency is essentially located at infinity relative to the lens). Of the two strategies, it was found that the carrier modulation method
yielded better image cross-correlation (CC) products than the method using the thin lens, in the presence of turbulenceOverall, it is seen that recovered EM signals (2D object transparencies,
modulated plane waves, and also dynamic/video scenes) are adversely affected by MVKS turbulence (which incidentally is limited in its applicability to only cases where in the Rytov criterion is
satisfied, and therefore in many cases works for only weak to moderate levels of turbulence; some cases involving strong turbulence have been investigated nevertheless), and the degree of drop in the
CC product goes up as the strength of the turbulence increases. In view of this, a strategy was adopted later whereby the goal was to ascertain if by "packaging" the incident signal/digitized image
inside a chaotic carrier, and thereafter propagating the encrypted chaos wave across the turbulent path might help mitigate the loss of CC product (leading to image distortion) during propagation
through (MVKS) turbulence. This concept has thereafter been tested for several 2D image scenarios, using an acousto-optically generated chaotic carrier for the encryption prior to turbulent
propagation. The corresponding recovered signals (obtained via two levels of demodulation) consistently indicate improvements in the CC products of the recovered images relative to the source.
Additionally, the MVKS turbulent system used along the slanted path is also examined under an interchanging of the source and receiver positions. Following extended examinations of the
altitude-dependent propagation along an MVKS turbulent path, this work next focused on an alternative turbulence model, viz., the gamma-gamma turbulence (also refractive index) model, which it turns
out actually is valid for all atmospheric turbulence (weak through strong) conditions. The use of the HV model for Cn2 assumes, however, that much of the turbulence considered is within a relatively
low-altitude limit. For the gamma-gamma problem as well, applications similar to those used for the MVKS cases (i.e., propagation of modulated EM carriers with message signals transmitted along a
turbulent (LT) path) using the gamma-gamma time statistics. This problem was analyzed via numerical simulation for both non-chaotic and chaotic carriers. Once again, use of a chaotic carrier is
consistently found to improve the bit error rates (BERs) of the recovered image relative to the source image.
Electrical Engineering, Optics, Atmospheric Sciences, MVK turbulence, anisoplanatic propagation, image encryption, acousto-optic chaos, turbulent and chaotic propagation, gamma-gamma turbulence
Rights Statement
Copyright © 2020, author
Recommended Citation
Mohamed, Ali, "Investigation of Anisoplanatic Chaos-based Signal and Image Transmission and Retrieval Through Atmospheric Phase Turbulence" (2020). Graduate Theses and Dissertations. 6802. | {"url":"https://ecommons.udayton.edu/graduate_theses/6802/","timestamp":"2024-11-03T02:58:33Z","content_type":"text/html","content_length":"47478","record_id":"<urn:uuid:5639bfb0-8c29-4c76-950a-fdded2f96da8>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00538.warc.gz"} |
Knowledge Management Research Group
MathTrek Voyager
This page is a sub-page of our page on Geometry
The interactive simulations on this page can be navigated with the Free Viewer
of the Graphing Calculator.
Related KMR-pages:
• 3D-Conzilla
• Projective Geometry
• Affine Geometry
• Metric Geometry
• Linear Transformations
• A Linear Space Probe
• Linear transformation: kernel and image (interactive)
• Dimension
• Euclidean Geometry
• Non-Euclidean Geometry
• Hyperbolic Geometry
• Elliptic Geometry
• Parabolic Geometry
• Projective Metrics
• The Euclidean Degeneration
• Geometric Shapes
• Geometric Optics
• Geometric Algebra
• David Hestenes on Geometric Algebra.
• David Hestenes on Geometric Calculus.
• David Hestenes on Conceptual Learning
• The Linear War between the planets $\, V_{ectoria} \,$ and $\, V'_{ectoria} \,$
• Einstein for Flatlanders
• The Mercator Projection
• Stereographic Projection
• Conformal Mapping
• Inversion
• Möbius transformations
• Steiner Circles
• Expandable Learning Objects
• Art
• Music
• Interactive Learning Objects
• Arithmetical Crossfire
• Mathematical Cogwheels
• Numbers and their Digits in different Bases
• Having fun, with the Graphing Calculator,
Anthony Word’s internship at CID/NADA/KTH, 2 weeks in November 1999.
Other related sources of information:
• Flatland – the movie 2007, based on the novel Flatland by Edwin A. Abbott from 1884.
• Flatland – the limit of our consciousness
• A Wrinkle In Time, 2018 Disney film based on the novel by Madeleine L’Engle from 1962.
• How mathematicians are storytellers and numbers are the characters
Marcus du Satoy in The Guardian, 23 January 2015.
• 3D objects for teachers in museums at the Science Museum Group.
• Kompakkt from the Department of Digital Humanities/ University of Cologne.
• Cybermath: A Shared Virtual Environment for Mathematics Exploration, by Gustav Taxén and Ambjörn Naeve, Presented at the International Conference on Distance Education, Düsseldorf, 2001.
• CyberMath: A System for Exploring Open Issues in VR-based Education, by Gustav Taxén and Ambjörn Naeve, Presented at SIGGRAPH 2001.
• 4D Toys: a box of four-dimensional toys, and how objects bounce and roll in 4D, by Miegakure on YouTube
• Wormholes Explained – Breaking Spacetime, by Kurzgesagt – in a Nutshell on YouTube.
• Dimensions – A walk through mathematics by VeganDaVinci.
• A Mathematician’s Lament by Paul Lockhart.
1.1.3. MathTrek Voyager – basic concepts and ideas
The basic idea of the project is to create an adaptive and intuitive system for learning the essence of mathematics in the form of a “distributed online game”. This game – called MathTrek Voyager
(MTV) – will stimulate and promote interest in mathematics by exposing the form-meaning loop that characterizes the interplay between pure and applied mathematics (see below). Hence, MTV will
highlight the power of abstractions to capture many different meanings in the same form. Gaming is well suited to this task, since games have formal rules (axioms) by which they should be played, and
nobody demands that such rules have any intrinsic meaning. For example, nobody says things like “I don’t understand chess because I don’t understand why the bishops move diagonally on the board.”
MathTrek Voyager exploits young people’s curiosity to explore virtual worlds with ease and fun, independent of the direct applicability of such worlds to real-world problems.
MTV will build on two classical stories: the mathematical science-fiction story of Flatland (Abbot, 1884), which was turned into a movie in 2008, and the famous TV science-fiction series Star Trek.
MTV will capture the attention of the players by embedding them in a virtual hyperspace of many dimensions. Just like the spaceship Enterprise of Star Trek, the MathTrekers will be lost in (hyper)
space, trying to find their way back to earth. The players (= the crew) are represented by avatars, which can change their abilities (= states) with respect to spatial awareness, spatial sensing, and
spatial mobility, as well as their sensitivity to color and temperature as the game proceeds.
Succeeding in the game, players will acquire the power (i.e., the mathematical competency) to create and manipulate 1-dimensional curves, 2-dimensional surfaces, 3-dimensional solids and even
4-dimensional shapes.
These changes of competency occur at “branch points” of the game – as a result of the players’ performance in solving mathematical problems and understanding mathematical models and concepts
(conceptual change, see below).
Spatial awareness and mobility:
Several players will be able to collaborate – and take responsibility for developing various parts of the mathematical skills that are needed to solve the problems that appear in the game. Building
on the psychopedagogical expertise of the TU Graz team – especially CbKST (Competence-based Knowledge Space Theory) – these problems will be adapted to the skills of the players, thereby guiding them
towards the goal of increasing their mathematical proficiency and understanding by mastering more and more difficult mathematical problems and concepts.
1.1.4 The form–meaning loop of mathematics and its applications
Mathematics has fought a long and hard historical battle to rid itself of meaning and transform itself into pure form. The success of this transformation is reflected in the famous statement by
Bertrand Russell that “mathematics is the discipline where we do not know what we are talking about and neither whether what we are saying is true or not.” However, this development has not resulted
in adequate educational changes, and today we are faced with an increasing pedagogical problem of explaining to students of mathematics the meaning of being (semantically) “meaningless” or, in other
words, the power of mathematical abstraction. In the words of Hermann Weyl:
“We now come to a decisive step of mathematical abstraction: we forget about what the symbols stand for… [The mathematician] need not be idle; there are many operations which he may carry out with
these symbols, without ever having to look at the things they stand for“.
Limiting mathematical concepts to their real-world instances in education makes them second-class right from the beginning – the doughnut is much more interesting than the torus which models it. It
makes it difficult to understand that for some contexts (say for positioning doughnuts in a box) the torus represents the essence of the doughnut and that understanding its form is key to
understanding its meaning in not just one context, may it be a doughnut box or the CERN large hadron collider.
Mathematics gives geometric objects a world to live in – and the MathTrek Voyager game puts the student right into this world’s center with the challenge of mastering it’s powers.
Put more abstractly, creating mathematics can be viewed as a de-semantization process that transforms meaning into form, while applying mathematics can be viewed as a semantization process that
transforms form into meaning (by interpreting the symbols). This is the basic form-meaning loop of the interaction between mathematics and its applications, such as e.g., science and engineering. Due
to this mathematics formalization process, the definition of a mathematical concept has – over many centuries – been transformed from being existential (= “What is it?” = “What does it mean?”) to
becoming operational (= “How does it behave?” = “What properties does it have?”).
Figure 1: Creating and applying mathematics sets up a loop between meaning and form.
The right part of Figure 1 depicts how two ‘meaningful’ concepts (velocity and force) are transformed by the Mathematics abstraction process into the same ‘meaningless’ (= formal) concept of vector.
The Application process transforms the formal ‘vector’ concept back to a meaningful concept by interpreting the symbols of the formal model.
The MTV project is well aware of the practical value of meaning and application. The MathTrek Voyager game foresees breakout points where the players can leave the game and apply their acquired
competencies in conventional educational exercises. The key difference distinguishing MTV from conventional math courses is that it treats applications as an extra bonus of math where math education
tends to present math as a necessary evil for applications.
As domain subject the project has chosen geometry for a variety of reasons. Geometry, with its origins in ancient Greece through centuries of fundamental inventions by scientists like Descartes,
Bolyai, Gauss, Lobachevsky and Einstein, provides an outstanding truly European contribution to world science. Understanding and mastering geometric concepts is a basic need for many other fields –
engineering, economics, physics and graphic arts to name a few. Advances in computer graphics enable new ways not only to visualize geometric objects but also interact with them. The MTV project is
dedicated to explore and utilize these. Geometric objects can be efficiently described in mathematical terms – a language that is not only understood by humans all over the world but also by
computers. Last but not least, geometric objects often have a natural beauty that can make interacting with them a very enjoyable aesthetic experience.
1.1.5 Reintroducing geometry in order to achieve mathematical enjoyability
MathTrek Voyager will make use of modern computer technology and psychopedagogy to introduce the material of the geometrical revolution in a comprehensible and fascinating way. It will embed the
players in the story that led to such profound changes in our ideas of the space that surrounds us, and, through interactive and personalized feedback help the players to understand how we ourselves
create and invent space, modifying it according to changes in our ideas of the universe.
The basic pedagogical principle that will be used to convey theses ideas is the interactive restriction of dimensions and the interchange between extrinsic and intrinsic views of 2-dimensional and
1-dimensional worlds. For example, looking at a curved surface, such as e.g. a sphere, from the outside, embedded in 3D, it is easy to understand that flat beings inhabiting its surface will be able
to move straight ahead and return to the same point. However, when you are a 2D being that is living inside the spherical surface, this is not at all obvious.
By letting the players experience the change between an extrinsic and intrinsic perspective on a non-Euclidean world, we expect to be able to convey the idea that the 3D world that we inhabit can be
curved in various ways, reflecting fundamental cosmological research questions.
These ideas will be further enhanced by exposing the players of MTV to 3D projections of phenomena that take place in higher dimensions, like e.g. the 3D shadows of an object that is rotating in a 4D
space. The transformation of the 3D shadow will appear to be very complex, involving various changes of shape, but the 4D motion is extremely simple, being governed by a single parameter (the angle
of rotation). By comparison with how 2D beings experience the projection of an object that is rotating in 3D, we expect to be able to convey the idea that when the number of spatial dimensions
increase, the dynamics (transformations) in that space often becomes more simple, thereby fostering the kind of higher-dimensional viewpoint that is pursued by e.g. superstring theory, which is
presently working with higher dimensional spaces to try to achieve a “Theory of Everything“.
Constructing similar (but – of course – MUCH simpler) higher dimensional views will be rewarded in the game by discovering important clues that will advance the MathTrekers towards their ultimate
goal of returning to earth.
Slicing a moving levelsurface ellipsoid 1
[Ambjörn Naeve on YouTube]
Slicing a moving levelsurface torus 1
[Ambjörn Naeve on YouTube]
Slicing a moving levelsurface (torus) 2:
[Ambjörn Naeve on YouTube]
Intersecting the diagonal of a 3D-cube (left) and a 4D-cube (right)
with a perpendicular hyperplane (Ambjörn Naeve on YouTube)
A Triact (3D-cube) is a Tesseract for Flatlanders
(Ambjörn Naeve on YouTube):
The interactive simulation that produced this movie.
Notice how the simple rotational motion of the Triact in 3D is transformed into a much more complicated transformation of its shadow in 2D. In the same way, the simple rotational motion of the
Tesseract in 4D is transformed into a much more complicated transformation of its shadow in 3D.
A rotating Tesseract (4D-cube) projected into 3D-space
(throughthedoors on YouTube):
A rotating Penteract (5D-cube) projected into 3D-space
(hypercube0 on YouTube):
A rotating Hexeract (6D-cube) projected into 3D-space
(WildStar2002 on YouTube):
Unwrapping a tesseract (4d cube aka hypercube)
(Vladimir Panfilov on YouTube):
The Scan of a Tesseract in 4-dimensional space:
(Визуализации в многомерных пространствах on YouTube)
Since these ideas are naturally attractive to most human beings, we expect that our ways of exposing them to the players of MTV will have a strong impact in terms of raising the motivation and
interest to study mathematics by fostering an understanding that real mathematics is “the ultimate space trip”, since mathematical research often involves the exploration of strange and wonderful
spaces with surprising and exiting properties.
Moreover, MTV will emphasize the concept of symmetry, which has become so fundamental to mathematics and science, and which has such strong connections to music and art and which is accessible in its
basics already at an early school age. Our approach will build on the Garden of Knowledge project (Naeve, 1997), which made innovative use of symmetry to explore and explain the connections between
mathematics and music.
By adopting a didactic method that focuses on extra-curricular material, we can avoid the risk of conflicts with existing curricula. We intend no less than demonstrating that mathematical interest
and competency can be achieved with greater ease by playing with abstract concepts than by avoiding them all together.
It is the overarching goal of the MTV project to change the present negative attitudes towards mathematics using modern digital media. As a consequence of this change, we expect that more motivated
students will start to perform better within the mathematical curriculum, leading to better results on exams, stronger ’employability’ and increasing competitiveness for Europe in the emerging
knowledge society. In fact, we strongly believe that our strategy – to aim for “employability through enjoyability” – is the most effective and efficient way to achieve this badly needed result.
1.1.6. Objectives of the MTV game
It is the overall objective of the MathTrek Voyager game to raise interest and motivation for mathematics among as many people as possible, and inspire them to learn more about the real nature of
this fascinating subject. We will target players/learners between the ages of 10 and 20 with emphasis on age 16-20. Moreover, we will focus on extra-curricular content of fundamental importance to
our conceptions of space and time, such as geometric transformation groups acting on sets, and geometric properties as invariants under this group action. Note that these concepts are essential
ingredients of many curricular beginner math courses not only in Mathematics and Computer Science but also in Natural Sciences and Engineering.
Because we are free to focus on any type of material, we can increase our potential for awakening interest, taking advantage of existing “non-curricular connections” to physics and philosophy. We
believe that achieving our overall objective of awakening mathematical interest among the players of MTV would contribute substantially towards improving their performance within the traditional
mathematical curriculum, thereby drastically improving their future employability.
Our strategy for reaching this extremely ambitious objective, employability through enjoyability, is to support the players’ experience of enjoyment of mathematics. This will be achieved by involving
them in an interactive, embedded, emotionally engaging, and media-rich discourse that evolves around some of the most interesting and revolutionary ideas that have ever been developed by humankind,
namely the ideas of space and time that form the basis for the modern perception of our universe.
Most educational systems of today are based on a closed, layered architecture of different levels (elementary, intermediate, secondary, highschool, and university) with almost no contact between them
– especially between the non-adjacent ones. An important objective of MTV is to help to overcome the thresholds between these levels that are often experienced by learners, and especially to bridge
the gap between highschool and university.
Our strategy for achieving this goal is to facilitate a conceptual learning approach to the subject (Hestenes, 1995). This will be achieved by making use of different “representation schemes”,
notably the Knowledge Manifold educational architecture (Naeve, 2001a; Naeve & Nilsson, 2004), which consists of a number of interlinked conceptual landscapes (context maps), where one can navigate,
search for, annotate and present all kinds of electronically stored information. Such an interconnected cluster of concepts – that connects conceptual content in different contexts – we call a
knowledge patch.
Representing knowledge patches in a machine-processable way and relating them to standardised competency descriptions in Competency based Knowledge Space Theory (Korossy, 1997; Albert & Lucas, 1999)
opens up new ways to automate the adaptation of educational games to the needs of learners, effectively addressing both ‘forward’- and ‘backward’ competency-gaps (Naeve et al., 2008). Backward
competency gaps in mathematics seem to be increasingly common among beginning university students of both pure and applied mathematics.
1.1.7. Contribution to the Objectives of the Work Programme
As described in the Work Programme, the project takes up progress in knowledge mapping and processing in the form of the Competency-based Knowledge Space Theory and utilizes it to provide advanced
forms of adaptivity in a game context. This guides new forms of discovery of knowledge through new interactive geometric tools.
User-produced and game-provided content coexist in the MathTrek Voyager game universe side by side in the form of geometrically rich objects that can be individually selected and manipulated.
The development of geometry as an important part of human culture has been profoundly influenced by European science from the ancient Greeks to modern times. Thus the MTV project directly brings “new
opportunities for the exploitation and sharing of Europe’s rich cultural and scientific resources. New services will engage users in new ways of experiencing and understanding cultural resources.
They will enable the aggregation and annotation of objects available in digital libraries.” (Quote from work programme).
In the MTV project the MathTrek Voyager community shall take into its digital library a wealth of mathematical objects existing in various math tools and embed them into the game context as well as
into the related digital documentation D1.2 “The MathTrekker’s Guide to the Universe”.
The project will not only utilize visualizations and 3D to recreate artefacts, but it will make user creation and interaction available as means of learning as well. In this way the project aims to
reshape the way we learn by putting the essence of mathematics into the core of the game, strongly emphasizing exploration and inquiry-based learning. The expected learning benefits of the MTV
approach are described in the following Section. Learner engagement and motivation, creativity and self-regulated exploration of the MathTrek world are key features of the project.
In its research MTV, as an interdisciplinary project, brings together current research in cognitive science and pedagogy (as described in the next section) and combines it with state of the art
technology in computer graphics and serious games. A number of previous projects, on which MTV builds, are mentioned below. Moreover the MathTrek Voyager consortium is well involved in quite a number
of ongoing TEL projects (see Description of the consortium).
Adaptive feedback and intuitiveness of the MathTrek Voyager game are core objectives to be realized in Work Packages 3 and 4. They are based on intimate knowledge of the subject in WP1 and pedagogic
research in WP2 which will provide the basis for realizing in WP5 the affective and emotional approach of the project.
1.2 Progress beyond the state-of-the-art
The main objective of this project is the development of learning software in the form of a serious game that helps a learner to comprehend higher mathematical concepts and facilitate comprehension
of abstract mathematical problems and how to approach them. Generally it can be stated, that computer games can be “rich, complex environments, that allow immersive exploration of numerous strategies
for action and decision” (Facer, Ulicsak, Sandford & Futurelab, 2007, p. 46) which are able to stimulate and motivate.
The members of the consortium believe that the MTV project has a huge potential to motivate and engage learners because:
• the problems with which the learners will be confronted during their journeys through the virtual hyperspace will be challenging (e.g. McClelland, Koestner & Weinberger, 1989).
• the content will not be largely overlapping with the content of current European math curricula in schools, and it will therefore be novel for the learners.
• it will be possible to focus on the intrinsic motivation of the learners (i.e. the motivation for the content itself) and not on the extrinsic motivation, which might be central for many students
in schools (e.g. the motivation to pass an exam).
• the serious game of MTV will stimulate the desire to explore and master a novel environment, which is reflected in the natural tendency of youngsters toward curiosity and exploratory play (e.g.
White, 1959).
• the possibility for the learners to experience the process of passing a threshold, which may previously have prevented them from reaching a deep understanding of a concept (see section Personalized
Learning Material… below on p.18).
• the possibility for a learner to learn in a collaborative way with others, especially with peers (e.g. Wentzel & Caldwell, 1997; see section Collaborative Learning Experience below p. 20).
In the following sections we will describe the state of the art of game-based learning, conceptual learning, adaptive assessment of competencies and non-cognitive aspects, current psychopedagogical
trends regarding learning as well as the progress beyond the state of the art, which we will apply within this project to reach our ambitious goals to motivate and engage youngsters to explore
mathematical and natural-scientific concepts.
1.2.1 Game-based Learning
It is well known that present mathematics education in Europe (and elsewhere) suffers from serious problems (ICMI, 1995). Prominent among these problems is the increasing difficulty to motivate
students and maintain their interest in the subject; an interest that is almost always present at a very young age but which seems to diminish – and often to totally disappear – as the years go by.
Moreover, in mathematics, the teachers at the early levels often suffer from a lack of understanding of the real nature of the subject – and e.g. often confuse mathematics with arithmetic – while the
teachers at the later (university) levels often tend to consider mathematics only as a subordinate tool and don’t take the time to communicate its essence.
Serious shortcomings of traditional mathematics education include its inability to: stimulate interest, promote understanding, support personalization, facilitate transition between different levels,
integrate abstractions with applications, and integrate mathematics with human culture. MathTrek Voyager will make innovative use of ICT in order to address these problems. It will build on various
prior efforts – including the work of the KMR group at KTH, which has demonstrated that it is possible to increase the “cognitive contact” with mathematical concepts in many different ways. This has
resulted in more than 500 interactive math programs with accompanying videos.
Moreover, MathTrek Voyager will advance the state of the art in math education by introducing the elements of:
• Mathematical Modeling (“meta-mathematics”), where mathematical structures will be presented in the form of clusters of related concepts that can be applied to problems and provide personalized
hints on how to solve them and understand their underlying concepts and structure.
• Mathematical Design (i.e., constructing and testing mathematical structures), which is an activity that students rarely have an opportunity to participate in. Players will be engaged in designing
mathematical structures and testing their applicability for solving the problems that they encounter in the game.
Visualization, with its vastly increased potential of modern computer graphics on low-cost hardware, provides an excellent opportunity to separate the content of mathematics from the form in which it
is presented. The MTV project makes this potential accessible to education, making it possible to communicate geometry interactively in its native form at the highest extent permitted by our
cognitive abilities.
1.2.2 Serious Games in Mathematics
Serious gaming promises a variety of benefits for learning. The Summit on Educational Games of the Association of American Scientists (2006) lists the following: Conceptual bridging (i.e., closing
the gap between what is learned in theory and its use); High time-on-task; Motivation and goal orientation, even after failure; Providing learners with cues, hints and partial solutions to keep them
progressing through learning; Personalization of learning; and Infinite patience. As revealed by a Google search for “math game“, games for learning Mathematics fall into one of two categories.
1) Isolated exercises where the learner has to provide a correct answer to a mathematical question. In these exercises the problem is described in a narrative way or the involved concepts are
interpreted in an application setting (stand alone episodes).
2) Exercises as above but connected by a story. The learner moves from place to place and can proceed only if he/she solves the mathematical exercises (adventure games).
Beside these categories, there is also a large variety of tools that allow interactive experimenting with mathematical concepts. While not directly being a game, they invite the learner to play
around. Producing surprising, often aesthetically appealing results, such tools also entertain and stimulate curiosity.
However, the available digital mathematical episodes as well as the mathematical adventure games lack one critical ingredient of learning and teaching: In contrast with a mathematics course, they
present episodes in isolation, without reference to the inherent logical connections of the topic. Even when there is a story, the challenges are only superficially connected to it, mostly just by
using in the challenges the same names of actors as in the story. Hence it is no surprise that the usual games for mathematics can be easily transferred to other topics, just by replacing the
exercises. As a consequence, these games can contribute very little to stimulate interest in mathematics.
In contrast, MathTrek Voyager is a new type of game. The story will be built around the theoretical model of the topic. At the same time, this model will be represented in a machine-readable format
as a Competency-based Knowledge Space. This representation will be directly used to adapt the game automatically in a way that is consistent with the natural flow of mathematical insights. In
MathTrek Voyager, problems are to be solved by building new objects that continue to appear in the game and by applying precisely defined mathematical transformations to such objects.
Following the inherent logic of geometric concepts, the project will identify threshold concepts that need to be managed by the learner in order to gain particularly deep new insights. Working
towards mastering those concepts can provide a direction for the game, thus avoiding the “lost in knowledge space“- phenomenon, which is only too frequent for learners.
In this way, MathTrek Voyager will realise a new kind of integration between the knowledge to be learned, the competencies to be acquired and the actual flow of the game. Moreover, without
integrating them into the game, MathTrek Voyager can point learners to existing mathematical tools in order to further explore relevant objects and transformations.
Where such tools can export geometric objects in a standard way, as is the case for many computer algebra systems, possibilities will be explored to import such objects into the MathTrek Voyager
game. In these ways, existing interactive math tools can augment the learning that takes place in the game.
1.2.3 Implementation within the MTV-project
Our industrial partner (Serious Games) will use the Unity3D game engine to build the game prototype. The Unity engine is among the most powerful and easy to use engines on the market, and it is the
most capable engine for making browser-based games. Over the years SG has developed a number of extra features and functionalities that make it possible for us to develop games faster and better.
When SG starts a project, it deploys a template with all these extra features built-in, which provides a head-start. SG has used Unity3D successfully in a number of prior learning games, including
the “Global Conflicts” and the “Playing History” game series, and SG is convinced that it can use this game engine to advance the state of the art within learning games in the MTV project.
SG has a close partnership with its technology provider Unity Technologies, which enables SG to offer state-of-the-art 3D games (offline & online) by using the UNITY game engine. The game engine is
cross platform, which enables SG to transfer the game prototype to a range of other platforms including PC/MAC, iPhone, Wii and Facebook if desired.
Augmenting this, the Computer Graphics Group provides particular competency in realizing real-time advanced computer graphics on low-cost hardware as well as with most immersive forms like Cave
Automatic Virtual Environments using 3D glasses and data gloves. The educational potential of these technologies shall be explored from PM13 on.
1.2.4 Integrating Conceptual Learning into the Game
The MTV universe will consist of a mixture of abstract (conceptual) and concrete (image-based) structures. The players will encounter animated clusters of concepts (context-maps), which they can
apply to different problem solving situations in order to figure out how to deal with them. These “live concept clusters” will provide feedback to the players and describe their interrelatedness and
appropriateness for application to the specific problem that is to be solved. Innovative features of the MTV project include:
• Using a collaborative game for learning a complex subject.
• Developing a subject based on its “inner logic”, not on a curriculum. This enables a much more comprehensive understanding of the subject.
• Using cutting edge graphics technology to achieve a high degree of immersion and motivation.
• Interleaving intellectual challenge, competition, collaboration and aesthetic appeal to promote the “spirit” of the subject in an emotional way beyond mere knowledge.
• Teaching geometry, physics, space and time as the most fundamental concepts of the human understanding of the world, using a computer program not only as an illustration, but as an exploratory tool
for an expedition into deep knowledge spaces, reaching beyond the limits of human imagination.
Previous projects have reacted to the desire to explore mathematics with a number of tools, which visualize certain mathematical phenomena. Some of them allow user interaction and experimentation.
Many of them are known as Mathlets. The current project goes beyond this by integrating exploratory tools for a particular domain into a game environment, which continuously challenges the user.
The MTV project offers young (and elderly) people exploratory access to a world, which is not only challenging by the creatures that inhibit it, but which is challenging by the very nature of its
geometry, physics, space and time. Moreover, this world is populated by bizarre forms of a beauty, which are rarely seen elsewhere. Most of these forms are described by relatively simple mathematical
formulae, some – like fractals – also by simple iterative processes.
Having gained an understanding of these formulae and processes, the learner can easily modify them in an exploratory way, creating what may be seen as a piece of art, as a futuristic space ship or as
a new form of planet.
Solving such challenges will require not just knowledge but real understanding of the underlying subject. Coordinates, transformations, singularities etc. are to be rapidly understood as powerful
tools. Using these tools is, not by accident, required in many real world contexts too – optimization, engineering, design, robotics, and computer graphics to name a few.
The nature of geometry suggests a variety of ways to go from simple to complex challenges. One of them is to increase dimension – the dimension of objects and the dimension of the space where these
objects occur in. What holds for the object holds for space as well – living in a light cone is as possible as living within a shrinking universe, with parallel universes or worm holes. Note that in
this field it is often easy to pose a problem in a visually simple form where solving the problem requests understanding of abstract concepts.
A related approach, to foster understanding of fundamentals of geometry through computer graphics embedded in a story, has been undertaken recently – in 2007 – with the film Flatland – the movie.
This computer-animated movie is based on the novel Flatland published in 1884(!) by E. A. Abbot. In this novel a being from 3D space visits a Victorian 2D world, which is unaware of the 3rd
Where the movie shows in a linear story how 2D beings explore a 3D world with their senses, the MTV project adds interactivity! It can provide different learners with different tools and let them
The project exploits the potential of eLearning for personalization of knowledge acquisition as explored, for example, by the European projects ROLE, leACTIVEMATH, TRIAL-SOLUTION, PROLEARN,
KALEIDOSCOPE, APOSDLE, MACE, GRAPPLE, iClass, iCamp, Revive and TenCompetence and combines it with methods used in computer games to adapt to the knowledge and preferences of learners. It complements
the MKM Mathematical Knowledge Management project, which concentrates on formal Mathematics, by emphasizing sensual and immersive experience of mathematical concepts.
The project makes use of state-of-the art graphics technologies to achieve a high degree of immersion. This includes immersive 2D as well as 3D technologies. While the latter are currently mostly
available in lab settings, the project paves the way for their broader educational use as it may become feasible when the 3D screens make their way into market. These technologies are currently being
introduced in several European movie theaters, and they were recently presented by most major consumer electronic vendors (e.g. Panasonic, Sony, Samsung, JVC) at the IFA 2009 exhibition in Berlin.
In the MTV project technology serves learning, guided by results of current research in pedagogy, didactics and cognitive sciences as described in the next paragraphs.
1.2.5 Achieving a new quality of learning
In the MathTrek Voyager game, the learner acquires competencies by interacting with the game’s world. For such a type of learning, pedagogy provides guidance and theoretical background in the
constructivist paradigm.
Constructivism in all its varieties has become the “leading […] theory or philosophy of learning in the mathematics education research community ever since […] 1987” (Ernest, 2003). The popularity of
constructivism in the didactics of mathematics can be interpreted as a consequence of observances, that the learner’s conceivability is a main factor of how learning content is interpreted and
received (Prediger, 2005, p. 26). Despite this popularity as a scientific paradigm, the “passive-reception view of learning is not dead among professionals or administrators in education” (Ernest,
2003), and therefore constructivism remains an ongoing challenge when planning courses, designing learning environments, preparing learning content, or performing similar activities.
Various authors (Glasersfeld, 1994, 1993; Schulmeister, 1996; Seiler, 1994) agree that Piaget laid the foundation to a constructivist oriented psychology, that suggests, that experts only have
indirect influence on the knowledge mediation processes, with the introduction of the genetic epistemological psychological-theoretical framework (Hoppe-Graff & Edelstein, 1993). Piaget’s main idea
is that one gains knowledge in interaction processes with the social environment.
A possible result of these processes is the adjustment of preexisting cognitive structures or the creation of new cognitive structures, which determine the behavior of a person. The subject’s
cognitive structures are aspiring towards a balanced state (equilibration) between the person and its social environment. If this state is deranged, this could become a motive to gain new knowledge
and new skills to recover a balanced state again. Vice versa this means that inducing conflicts can initiate cognitive processes of rearranging and reconstruction, as well as the creation of new
cognitive structures and resultant behaviour.
In the early seventies of the last century, several experiments have been carried out that were using learning material based on typical Piaget problems, like dealing with constancy, set relations
and inclusion problems. These problems were capable to induce cognitive conflicts, to which the learner had to respond. All in all these experiments had a positive outcome (Hoppe-Graf, 1993). Of
course it would be naive to take the simple patterns of these early experiment problems and take them as a basic principle for designing learning programs and thereby expect positive results. A
learning environment that aims for the goal that a learner can improve his or her understanding of abstract math problems, should rather be designed in a way that allows self-directed learning
processes, and should enable learners to deal with occurring problems on their own.
The assumption underlying the concept of conflict-inducing learning settings is that successful procession of a problem and the anticipated chance of success motivates the learner intrinsically to
make adequate efforts to get over his or her knowledge and skills deficits (Berlyne, 1974).
In the early eighties there were various attempts to implement the concept of inducing conflicts in class settings. Worth mentioning in the domain of mathematics are for example Bell et al (Bell,
1983, 1984, 1986; Swan, 1983; Underhill, 1991) who developed the concept of ‘Conflict teaching’ as teaching model. As an outcome it could be demonstrated that there is a correlation between conflicts
in student-teacher-interaction and performance in following tests (Bell, 1984). More recently Tirosh, Stavy and Cohen (1998) engaged in research to accomplish a problem-handling, which results more
from correct theoretical references made by the learner than from the application of oversimplifying error strategies. However, a change in the learner’s intuitive attitude towards a more
sophisticated approach could not be demonstrated unambiguously.
As an adequate way to make use of the concept of inducing conflicts in an advanced learning setting, like the MTV game, we will follow the conceptual change approach. This approach proclaims that
learners often run into issues, because “knowledge acquisition in specific domains […] sometimes requires the significant reorganization of existing knowledge structures and not just their
enrichment” (Konstantinos, Vosniadou, & Vamvakoussi, 2007). In such cases there is a mismatch in the way they “use the same additive mechanisms with all forms of new knowledge making likely the
formation of misconceptions” (Vosniadou & Vamvakoussi, 2005). There is no way for learners to become aware of the problems that originate from their simplified explanatory models, which they have
acquired in their previous learning biography. Dealing with the aspect of misconceptions of reality and resultant conflict potentials, the conceptual change approach can provide a way of estimating
potential sources of error when designing the learning game, at least as far as intentional learning processes and their possible shortcomings are concerned.
To achieve a new way of acquiring knowledge and skills, which help the learner to solve abstract mathematical problems, it should be kept in mind that “Science and mathematics […] have an objective
non-situated reality, which is divorced from the processes that produced them.” (Vosniadou & Vamvakoussi, 2005). This implies that a student’s failure in resolving a mathematical problem is caused by
a lack of competences, but also by a lack of specific know-how, which has to be on-hand during the examination of the challenge. The conceptual change approach suggests that didactic transformations
like illustration, isolation and simplification of the learning object, can become barriers for future learning processes.
Research results related to the conceptual change approach are contradictory (Chan et al, 1997). The reason for these contradictions could possibly be that conflicts don’t necessarily lead to new
insights. Beyond the cognitive aspects, there are emotional, social and motivational preconditions to be met that determine if a conflict can be successfully resolved with a gain in knowledge.
(Vosniadonu, 1994; Schnotz & Carretero, 1999; Limon, 2001; Merenluoto & Lehtinen, 2003).
1.2.6. Inducing conflicts into the game design
As the previous paragraphs have shown, inducing conflict seems to be an effective and therefore worthwhile didactical principle. As a result, the concept of “conflict“ needs to be integrated into the
design of the game. However, such a combination of educational goals and video games can be harder to achieve than it sounds (Wechselberger, 2009).
From a pedagogical point of view, the main benefits of computer games for educational purposes are their motivational power and the efficient learning principles they make systematic use of. However,
these can only be utilized by maintaining the specific structure of video games (Fabricatore, 2000). Furthermore, Mäyra (2008) divides the structure of video games into two parts: representation and
gameplay. While the representation consists of the symbols used to describe the virtual game world and allegorize the gameplay (e. g. models, textures, sound etc.), the gameplay itself might again be
broken down into a hierarchy and sequence of challenges the player faces during the game (as well as the actions s/he addresses them with) (Adams & Rollings, 2007). The challenges might, among
others, be logical and mathematical challenges (e. g. puzzles), factual knowledge challenges, exploration challenges (such as spatial awareness, mazes, etc.), economic challenges (e.g. achieving
balance between several elements in a system), and conflict.
Not only do these challenges represent opportunities to induce conflicts into gameplay; conflict itself already is an important underlying principle of all (video) games. Indeed, the goal of most
games actually consists of solving a conflict, which is often also an important part of the game’s narrative structure. For this reason, in this project it will be a major task to connect the
didactical concept of conflict with the specific challenges of the gameplay.
1.2.7. Connecting the Conceptual Change Approach with Educational Games
Inducing conflicts into gameplay for didactical purposes is not the only way conflicts might enter the game. As mentioned earlier, the learners’ current knowledge might conflict with the educational
content of the learning environment. Different players might have different levels of expertise, and while some of them are compatible with the game, others are not. However, this does not need to be
a problem; in fact, it can actually even be utilized for the didactical concept of the game. A typical approach would be to represent these unequal and sometimes conflicting “domains of expertise” by
different roles that the players can take on. For example, in the game “The Elder Scrolls IV: Oblivion” the player can be a mage, a thief, a warrior etc.; each profile provides different abilities
(i.e. possible actions). Many of the game’s challenges can be addressed in different ways: While the warrior uses his enormous fighting power to get in possession of a certain artifact, the thief
might just sneak up to it and steal it, making use of his distinct stealth.
Different levels of expertise on the part of the players regarding the educational content of the game may be mapped to these stereotypic character profiles/roles and therefore become a part of the
game. Furthermore, different players with different profiles may join and build a team, collaboratively addressing the game’s challenges. For example, in the MathTrek Voyager game, a
three-dimensional spaceman may solve a dilemma confronting a two-dimensional planeman by converting some artefacts through 3D space (as shown in Figure 2), thus illustrating the value of acquiring
more complex knowledge.
Figure 2: The MathTrek Voyager.
By bringing both cooperation and conflict into the game, (a) the gameplay experience might become more interesting, (b) players might be caused to contrast their own level of expertise with the ones
of their team colleagues (and thus put themselves into perspective, reflect on the boundaries of their own domains of knowledge) and (c) the game might benefit from the thereby contributed (and
didactically useful) social aspects of playing and learning (cf. Meier/Seufert, 2003).
There is also a similarity between video game structure and the aforementioned threshold concepts. According to Gee (2007), during gameplay players build up a certain level of expertise, which at
some point does not help them any further. At this point, they have to reorganize and expand their knowledge and abilities in order to accomplish their goal. Usually, this happens in the form of
so-called “boss monsters” at the end of certain levels and parts of the game. Boss monsters are a widely established principle of game design and can naturally be associated to the didactical
approach of threshold concepts.
1.2.8. Risks
There is, however, a catch when deliberately inducing conflict: Players/learners might get emotionally stressed (Sander & Heiß, 2007). In addition, Ravaja et. al. (2005) found that passive negative
feedback within video games (which might be an intuitive way for a designer to create a conflict) leads to a negative emotional response. On the other hand, the same authors report that, under
certain circumstances, failure might also lead to positive emotional responses. Also, Csikszentmihalyi’s concept of flow (1990), which is often applied to positive gaming experiences, states that an
optimal emotional experience demands that players should neither be over- nor under challenged. As a result, conflict-induction must be handled with care, for it is a balancing act between didactical
benefits and motivational risk. Exploring both this balance and methods to maintain it will be an important issue within the MathTrek Voyager project.
1.2.9. Expected Effects
Based on the above mentioned thoughts, we can formulate the following hypotheses to be verified in the MathTrek Voyager project:
• Using the structure of video games to induce conflicts will lead to higher learning effects, in particular for breaking learning barriers and acquiring higher qualities of problem solving
• Conflicts can lead to emotional stress. Pedagogically guided game design, however, can reduce the risk of negative emotional responses when adaptive design takes appropriate measures.
By exploring the validity of these hypotheses in theory and practice, the MathTrek Voyager project shall advance the state of the art in pedagogic research and open up new perspectives for
educational games for learning situations which are particularly critical.
1.2.10. Adaptive Assessment of Competences
A very successful approach for adaptive assessment and development of competences is the <strong>Competence-based Knowledge Space Theory (CbKST; Korossy, 1997; Albert & Lucas, 1999). The CbKST is a
cognitive framework which extends the originally behaviorist Knowledge Space Theory (KST; Doignon & Falamagne, 1985, 1999). One goal of the KST is to assess the so called knowledge state of the
person in an adaptive – and therefore efficient – way. It is assumed that every knowledge domain (e.g. arithmetic) can be characterized by a set of problems (i.e. test items). The knowledge state of
a person is characterized by the subset of problems this person is capable to solve. It can be argued that there exists dependencies between different problems within a given domain – so called
prerequisite relations. For example, because a person is able to solve problem x (e.g. multiplication), it can be assumed that this person is also able to solve problem y (e.g. addition). In this
case, knowledge of addition is a prerequisite for knowledge of multiplication. </strong
The basic idea of CbKST is to assume a set of competences (skills, abilities, knowledge) underlying the problems of the given domain. Similar to the problems of a knowledge domain, prerequisite
relations between competences are assumed. For example if a person is able to write a proposal for the MTV project (competence x), it can be assumed that this person is able to use a word processing
software (competence y). Therefore, the ability to use word processing software is a prerequisite for the ability to write a proposal for the MTV project. The connection between the competences and
problems are established by skill and problem functions.
The outlined approach of CbKST has two major advantages:
• On the one hand, given the performance, which can be observed by the subset of problems a learner can master, the latent underlying competences can be identified.
• On the other hand, effective competence development can be realized through adaptive individual learning paths based on the assumed competence space. Thus, adaptive and individualized development
of competences can occur.
Recently, within the context of the ELEKTRA project (6th Framework) a new terminology was introduced: the differentiation between micro- and macro adaptive levels of competence assessment.
The so-called macroadaptivity approach is the adaptive selecting of the next problem (or “learning object”, “learning situation”) based on the competence gap of the learner which has to be closed.
However, in game-based learning applications, such an explicit assessment is not appropriate, since it would disturb the gameplay and the so-called flow-experience (e.g. Csikszentmihalyi, 2008) of
the playing learner. Instead there is the need for a non-invasive and continuous assessment within every single learning situation: this is the microadaptivity approach (e.g. Albert et al., 2007).
The idea behind the microadaptivity approach is to develop a system that interprets the behavior of the learner and gives appropriate interventions (e.g. hints) within a single learning situation
adapted to the learner’s current knowledge and competence state. An overview of the differences between micro- and macro adaptive assessment of competences is given in Figure 3.
Figure 3: Overview of Macro- and Micro Adaptivity.
1.2.11. Non-invasive assessment of motivational and emotional states
A methodology for appropriate and reliable competence assessment is the starting point for the adaptive guidance through problems of a given domain with the goal of developing competences. Even if a
personalized and efficient method for the adaptive guidance (i.e. development of competences) like the CbKST is available, the successful competence development of an individual is highly dependent
on his/her motivational and emotional states. An overview of the dependence of the development of competences from an interaction of cognitive, motivational and emotional factors of a learner is
shown in Figure 4 (adapted from Sternberg, 2007).
Figure 4: The transformation from a novice to an expert and the dependence of individual competence development on cognitive, motivational and emotional factors (adapted from Sternberg, 2007).
It is the goal of the MTV project to establish a methodology for the non-invasive and ongoing assessment of motivational and emotional states of the playing learner. Such an assessment of
non-cognitive aspects of the learner would be a fruitful extension of the microadaptivity approach, the focus of which is currently restricted to the cognitive aspect, i.e. the non-invasive
assessment of competences.
Additionally, the assessment of the motivational and emotional state of the learner would enhance the effectiveness of cognitive interventions like hints, because the reason for a possible failure of
the learner to solve a problem within a single learning scenario can be identified.
For example, a hint given by an artificial mentor would be an appropriate intervention if it can be assumed that the student is not capable to solve a problem within a learning scenario because of
cognitive reasons. In this case it would be assumed that at least one necessary competence is missing. But the appropriateness of such a hint would be low if we would know that the learner would be
able to solve the problem but that (s)he is not motivated to do so – maybe because the problem is too easy. In this case, a hint which was given only on the base of the assessment of the cognitive
state would degrade the learner’s current motivational state and would be counter-productive for the possibility of flow experiences, because such experiences typically only occur if the task is
challenging enough (Csikszentmihalyi, 2008).
The possibility to enhance the motivation in a non-invasive way by motivational interventions – and therefore the non-invasive assessment of the current motivational state – is important, if not
necessary to achieve the goals of the MTV project. It is necessary, because we are not focusing on a target group which is already highly interested in mathematical issues and which is already
motivated to solve mathematical problems. We want to awaken the interest in mathematical issues in those who are not already interested, and we want to enhance the motivation for solving mathematical
problems in those who are not motivated yet.
1.2.12. Personalized Competence Development
Using CbKST it is possible to provide the learner with personalized and efficient learning paths through the competence space. Such learning paths can be traversed by guiding the learner successively
through different and increasingly extensive competence states. By passing through such a learning path, the learner can be provided with the possibility to change from a novice to an expert (see
Figure 4).
The development of competences will be secured by providing the learners with explanatory material. This explanatory material will be collected within WP1 (Vision and Content) and will be selected
and (if necessary) improved within WP2 (Pedagogy). Due to the dependencies between learning material (such as written documents, videos, simulations etc.) analogous to the prerequisite relations
between competences (described in section 1.2.10), the learner can be provided with learning material which is appropriate for his / her current competence state. Such personalized presentation of
learning material was successfully implemented within the EC-funded project iClass (6th FP).
1.2.13. Personalized learning material through involvement of learning strategies and preferences
Concepts which could be learned by playing the MTV game, such as dimensions, symmetry or multidimensional transformations to name a few, seem to be similar to so-called threshold concepts, which have
been recently introduced by Meyer and Land (2003, 2005).
Meyer and Land (2003) suggest that threshold concepts have five characteristics. They are:
• transformative (they shift the perception of the learner within the knowledge domain),
• irreversible (once it has been understood, the learner is unlikely to forget a threshold concept)
• integrative (they expose a previously hidden interrelatedness of different aspects of the knowledge domain)
• bounded (a threshold concept helps to define the boundaries of a subject area).
• troublesome (a threshold concept may be counter-intuitive).
It can be assumed that there exist a large number of potential threshold concepts within the domain of mathematics (e.g. Worsley, Bulmer & O`Brien, 2008; Easdown, 2008). Deep and sophisticated
knowledge of mathematical issues is meaningless until the student has acquired a profound understanding of the involved concepts.
In order to make new ways of competence devolvement feasible, the MTV project will take such potential threshold concepts (TCs) into account and will define possible approaches to integrate TCs or
similarly defined anchor concepts (see Mead et al., 2006) with the CbKST. One possible approach to integrate TCs with the CbKST might be to differentiate two competence spaces: a competence space for
knowledge and a competence space for understanding of threshold concepts (see Figure 5).
Figure 5: Differentiation between two kinds of competences: comprehension and knowledge.
We propose two different strategies when someone tries to get a deep and sophisticated understanding of a troublesome and counter-intuitive concept:
• the inductive strategy: the learner tries to understand different, concrete and relatively small knowledge units in the first step (red circles) to get successively “a view of the whole picture”
(orange circles, i.e. to understand the more abstract and broader concepts).
• the deductive strategy: the learner tries to understand the concept in the first step (orange circles), so that it is in consequence much easier – or maybe not necessary any more – to learn the
single knowledge-units consecutively (red circles).
Taking into account individual preferences related to the strategies for understanding troublesome concepts will enhance the possibility of providing the learner with personalized or “customized”
learning material. Additionally, it would constitute another fruitful extension of the CbKST, because new and even more efficient learning paths (e.g. from one TC to the next) would be possible if
the learner prefers to use such a learning strategy.
1.2.14. Changing attitudes towards mathematics
The MTV project aims at positively changing the players’ attitudes towards mathematics, since it is a central educational goal of the project to foster the mathematical interest of the playing
learners. To this end, the Person-Object theory of Interest (POI) provides an integrative framework. From the perspective of POI, an interest-driven action results from the individual, as a potential
source of action, as well as from an aspect of the environment (situation), as the object of action. Together they form a dynamic unit.
Thus interest is considered as a relational construct, referring to a “person-object-relation”. This relation is characterized by several specific features including cognitive aspects, feeling- and
value-related aspects and the intrinsic quality of interest-based actions (Krapp, 2002). Thus POI enriches the MTV approach by a powerful conceptual framework to explain the dynamics of interest
development due to both personal and environmental influences. It includes important affective aspects of learning, which have relevance especially for serious games. Moreover, POI informs the MTV
project on how to design interesting learning environments with the perspective to foster the development of long-lasting individual subject interest in mathematics.
From a cognitive perspective, two aspects of interest are especially important (Krapp, 2002). Firstly, interest develops over time, and structural components of the personality change with respect to
both cognitive and emotional representations. Secondly, interest tends to grow, which means that the person wants to test and acquire new information and to enlarge his/her competences with respect
to the object of interest. In addition, interests possess emotional and value-related characteristics. The value-related characteristics can be seen as value-related valences (Schiefele, 1999), which
the individual assigns to the goals, contents and activities related to the domain of interest, and which express the personal significance of the interest.
The emotional characteristics of interest can be described as positive feeling-related valences (Schiefele, 1996), which consist of states and experiences preceding, accompanying or following an
interest-driven activity (e.g. joy, feelings of competence, autonomy and social relatedness), and which are stored in a person’s cognitive-emotional representation system (Krapp, 2002). Overall, most
aspects of an interest-triggered action are connected with positive emotional experiences. Under extremely congenial conditions, flow may be experienced (Csikszentmihalyi, 1988).
Therefore, interest-based actions can be characterized by optimal experiential modes that combine positive cognitive qualities (e.g. personal significance) and positive affective qualities (e.g. good
mood). Thus interest is strongly associated with the concept of “undivided interest” or “serious play” which is used by Rathunde, 1998, to describe an optimal mode of task engagement. From the
perspective of POI, an interest-based action also possesses an intrinsic quality, which is characterized by an optimal combination of emotional and value-oriented components of interest (Krapp,
2002). There is no difference between what one has to do in a specific situation and what one likes to do.
1.2.15. Collaborative Learning Experience
The serious game that will result from the MTV project will be playable in two different modes: as a single player or as a multiplayer game. The opportunity of collaborative learning through a
multiplayer game is the starting point of a scientific challenge for the consortium as a whole, and in particular for the TUG-KMI team with its expertise of the CbKST approach. The CbKST has until
now not been implemented in projects and applications which focus on competence assessment and development of groups.
In a multiplayer game it is necessary to assess a “collective” competence state of the learning group’s members, in order to examine which problems are challenging enough for the group as a whole. We
propose that such a collective competence state can be computed through set inclusion of the individual competence states of the group members.
As described in section 1.1 (Concept and Objectives), the avatars of the learners can have different abilities with respect to spatial awareness, spatial sensing, spatial mobility (all of these
abilities have 4 values: 0, 1, 2, and 3 dimensions) as well as their sensitivity to color and temperature. All possible combinations of the different values of abilities can be termed (avatar)
ability states. Groups whose members control avatars with different ability states have to work together, i.e. in a collaborative way, to solve problems in the virtual space. The experience that
every member (respectively their avatar) has different abilities, and that every member is equally needed for the group as a whole, can be a fruitful starting point for collaborative learning and the
exchange of knowledge.
1.3 S/T methodology and associated work plan
Overall strategy of the work plan
MathTrek Voyager is based on the inner structure of the subject (WP1) and on pedagogic principles for individual and collaborative learning (WP2). These Work Packages contribute to the development of
domain models (WP1) and models of learners and individual as well as collaborative learning processes (WP2). These models then form the basis for the design and implementation of the adaptation and
guidance mechanisms in WP3. A game utilizing these adaptation mechanisms is designed in WP4. WP5 implements this design and creates the game ready for playing. WP6 has the learners in focus. It
connects them beyond collaboration in the game through a common platform and disseminates the project results. The experience from WP6 is analysed and evaluated in WP7, which in turn advances
pedagogic theory in WP2, the appropriate selection of problems in WP1to improve guidance in WP3 and game design in WP4 as well as the implementation in WP5. WP8 (Management) ensures smooth
collaboration of all Work Packages.
With respect to the key objectives mentioned above, WP1 not only provides the content to be learned, it also provides the abstract challenges needed to ensure entertainment, packaged in WP4 and WP5.
WP2 strengthens individual and collaborative learning. WP3 keeps the learning manageable for the learner. WP4 (Game Design) and WP5 (Implementation) have major responsibility for the affirmative,
immersive character of the learning experience. Collaboration on a larger scale is supported by WP6 and the learning outcomes and learning experience are evaluated by WP7.
The project will proceed in four overlapping phases.
• Preparation Phase (Month 1-12). In this phase the state of the art in the various fields will be explored in detail and summarized in respective Deliverables. This includes summarizing requirements
to educational games in mathematics from the points of view of pedagogy, technology and usability. Given the truly interdisciplinary character, this phase serves two purposes
• Explain relevant findings from the different fields (Mathematics, Educational Games, Adaptivity, Computer Graphics,…) to the partners as a basis for communication in the Consortium.
• Establishing a baseline for further work.
• Planning Phase (Month 7-24). In the planning phase, detailed learning objectives, concept maps, user models, game control rules, graphics and interactivity design, evaluation criteria etc. will be
• Implementation Phase (Months 13 – 30). This phase will implement the MathTrek Voyager game. It is subdivided into two sub phases.
• Experimental phase (Months 13-18). In this phase isolated episodes of the game will be developed, often in several variants – for example 2D vs. 3D variant, single player vs. collaborative,
self-regulated vs. guided learning – to explore their suitability in various learning contexts.
• Production phase (Months 19 – 30) This phase will gradually implement the final version of the game.
• Evaluation Phase (Months 13 – 36) This phase starts with developing the evaluation concept w.r.t. learning outcomes, competency development, user satisfaction, usability and technical quality.
Actual evaluation shall start with pre-tests and proceed to usage evaluation as components of the game become available.
The realization of these phases through the individual work packages and their dependencies are depicted in the following charts.
• Abbott, E. A. & Stewart, I. (2002),
The annotated Flatland – a romance of many dimensions. New York: Basic Books.
• Abbott, E. A. (1884),
Flatland: A Romance of Many Dimensions. With an introduction by A.K. Dewdney. New York
• Adams, E. & Rollings, A. (2007),
Fundamentals of Game Design. New Jersey: Prentice Hall
• Albert, D. Lukas, J. (Eds.) (1999),
Knowledge Spaces: Theories, Empirical Research, and Applications. Mahwah, NJ: Lawrence Erlbaum Associates
• Albert, D., Hockemeyer, C., Kickmeier-Rust, M. D., Peirce, N., Conlan, O. (2007),
Microadaptivity within complex learning situations –- a personalized approach based on competence structures and problem spaces. Poster presented at International Conference on Computers in Education
(ICCE 2007)
• Bell, A. (1983),
Diagnostic teaching of additive and multiplicative problems. In R. Hershkowitz (Ed.), Proceedings of the Seventh International Conference for the Psychology of Mathematics Education (pp. 205-210).
Rehovot, Israel: Weizmann Institute of Science
• Bell, A. (1984),
Short and long term learning – experiments in diagnostic teaching design. In B. Southwell, R. Eyland, M. Cooper, J. Conroy & K. Collis (Eds.), Proceedings of the Eighth International Conference for
the Psychology of Mathematics Education (pp. 55-62). Darlinghurst: Mathematical Association of South Wales, Australia
• Bell, A. (1986),
Outcomes of the diagnostic teaching project. In L. Burton & C. Hoyies (Eds.), Proceedings of the Tenth International Conference for the Psychology of Mathematics Education (pp. 331-335). London:
University of London
• Berlyne, D. E. (1974),
Konflikt, Erregung, Neugier. Zur Psychologie der kognitiven Motivation. Stuttgart: Klett
• Chan, C.K.K, Burtis, P.J., Bereiter, C. (1997),
Knowledge-building approach as a mediator of conflict in conceptual change. Cognition and Instruction. 1997, 15(1), 1-40
• Conway, J. H., Burgiel, H., Goodman-Strauss, C. (2008),
The Symmetries of Things, A. K. Peters Ltd., Wellesley, Massachusetts, 2008, ISBN 978-1-56881-220-5
• Csikszentmihalyi, M. (1988),
The flow experience and its significance for human psychology. In M. Csikszentmihalyi & I.S. Csikszentmihalyi (Eds.). Optimal experience: Psychological studies of flow in consciousness. New York:
University Press
• Csikszentmihalyi, M. (1990),
Flow: The Psychology of Optimal Experience. New York: Harper and Row
• Csikszentmihalyi, M. (2008),
Flow: The Psychology of Optimal Experience. New York: Harper Collins Publishers.
• Doignon, J., Falmagne, J. (1985),
Spaces for the assessment of knowledge. International Journal of Man-Machine Studies, 23, 175–196
• Doignon, J.P., Falmagne, J.C. (1999),
Knowledge spaces. Berlin: Springer
• Easdown, D. (2007),
The role of proof in mathematics teaching and The Plateau Principle. Proceedings of the Assessment in Science Teaching and Learning Symposium, Sydney, 28−33
• Ernest, P. (2003),
Reflections on Theories of Learning. In Zentralblatt für Didaktik der Mathematik (Bd. 38, p. 3-7). Eggenstein-Leopoldshafen
• Fabricatore, C. (2000),
Learning and videogames: An unexploited synergy. AECT National Convention, Long Beach
• Facer, K., Ulicsak, M., Sandford, R. & Futurelab (2007)
Can computer games go to school? In British Educational Communications and Technology Agency (Eds). Emerging Technologies for Learning. Computer games in education. (Available at: http://
• Federation of American Scientists (2006),
Summit on educational games. (Available at: http://www.fas.org/gamesummit/Resources/Summit%20on%20Educational%20Games.pdf)
• Gee, J. P. (2007),
Good Video Games and Good Learning: Collected Essays on Video Games, Learning and Literacy. New York: Peter Lang
• Glasersfeld, E. V. (1993),
Introduction. In E. V. Glasersfeld (Ed.). Radical Constructivism in mathematics education. Dordrecht: Kluwer Academic Publishers
• Glasersfeld, E. V. (1994),
Piagets konstruktivistisches Modell: Wissen und Lernen. In G. Rusch & J. Schmidt (Ed.). Piaget und der Radikale Konstruktivismus. Frankfurt a. M.: Suhrkamp
• Henrichwark, C. and Gräsel, C. (2009),
Claudia Henrichwark and Cornelia Gräsel, Motivation im Mathematikunterricht – Zum Einsatz einer integrierten Lernumgebung im 2. Schuljahr, in: C. Röhner, C. Henrichwark, and M. Hopf (Eds.),
Europäisierung der Bildung, VS Verlag für Sozialwissenschaften (2009), pp. 291-285
• Hestenes, D., Rockwood., A., Naeve, A., Doran C., Lasenby J., Dorst, L., Mann, S. (2001),
Geometric Algebra: New Foundations, New Insights,
Advanced 1-day course organized by Alyn Rockwood and Ambjörn Naeve and delivered at SIGGRAPH 2000 and 2001
• Hoppe-Graff, S., (1993),
Sind Konstruktionsprozesse beobachtbar? In W. Edelstein & S. Hoppe-Graff (Ed.). Die Konstruktion kognitiver Strukturen: Perspektiven einer konstruktivistischen Entwicklungspsychologie Bern: Hans
• Hoppe-Graff, S., Edelstein, W., (1993),
Einleitung: Kognitive Entwicklung als Konstruktion. In W. Edelstein & S. Hoppe-Graff (Ed.). Die Konstruktion kognitiver Strukturen: Perspektiven einer konstruktivistischen Entwicklungspsychologie.
Bern: Hans Huber
• International Commission on Mathematics Instruction (ICMI) (1995),
Perspectives on the Teaching of Geometry for the 21st Century, Educational Studies in Mathematics, Volume 28, Number 1, January, 1995, ISSN 0013-1954
• Knudsen, C., Naeve, A., (2002),
Presence Production in a Distributed Shared Virtual Environment for Exploring Mathematics, in Soldek, J., Pejas, J., (eds), Advanced Computer Systems, pp. 149-159, Kluwer Academic Publishers, 2002,
ISBN 0-7923-7651-X
• Konstantinos, P. C., Vosniadou, S., Vamvakoussi, X., (2007),
Students’ Interpretations of Literal Symbols in Algebra. In S. Vosniadou, A. Baltas, & X. Vamvakoussi (Ed.). Re-Framing the Conceptual Change Approach in Learning and Instruction, Advances in
Learning and Instruction Series. Oxford: Elsevier Press
• Korossy, K., (1997),
Extending the theory of knowledge spaces: A competence-performance approach. Zeitschrift für Psychologie, 205, 53-82
• Krapp, A., (2002),
An educational-psychological theory of interest and its relation to self-determination theory. In Deci, E.L. & Ryan, R.M. (Eds.). The handbook of self-determination research. Rochester: University of
Rochester Press. P. 405-427
• Lepper, M. R., and Malone, T. W., (1987),
Intrinsic motivation and instructional effectiveness in computer-based education, in: R. E. Snow, and M. J. Farr (Eds.), Aptitude, Learning, and Instruction, III: Conative and Affective Process
Analysis, Hillsdale, NJ: Lawrence Erlbaum Associates (1987), pp. 255-286.
• Limon, C., (2001),
On the cognitive conflict as an instructional strategy for concep-tual change: a critical appraisal. In Learning and instruction, 2001, Vol. 11, Nr. 4-5, 357-380
• Livio, M., (2005),
The Equation That Couldn’t Be Solved – How Mathematical Genius Discovered the Language of Symmetry, Simon & Schuster, New York, 2005, ISBN 0-285-63743-6.
• Massey, L., (2007),
Contrast Learning for Conceptual Proximity Matching, in: 2007 International Conference on Machine Learning and Cybernetics, Volume 7 (2007), pp. 4044-4049
• Mäyra , F., (2008),
An Introduction to Game Studies: Games in Culture. Los Angeles: SAGE Publications
• McClelland, D.C., Koestner, R., Weinberger, J., (1989),
How does self-attributed and impicit motives differ? Psychological Review, 96, 690-702
• Mead, J., Gray, S., Hamer, J., James, R., Sorva, J., St. Clair, C., Thomas, L., (2006),
A cognitive approach to identifying measurable milestones for programming skill acquisition, Annual Joint Conference Integrating Technology into Computer Science Education, Bologna, 26-28
• Meier, C., Seufert, S., (2003),
Game–based learning: Erfahrungen mit und Perspektiven für digitale Lernspiele in der betrieblichen Bildung.; In A. Hohenstein Wilbers (Eds.) Handbuch E–Learning für Wissenschaft und Praxis. Köln:
Deutscher Wirtschaftsdienst
• Merenluoto, K., Lehtinen, E., (2003),
Number concept and conceptual change: Outlines for new teaching strategies. Unpublished paper of a lecture on the 10th Biennial Conference on Learning and Instruction (EARLI) in August 2003 in
Padova, Italy
• Meyer, J., Land, R., (2003),
Threshold concepts and troublesome knowledge: Linkages to ways of thinking and practicing within the disciplines. ETL Project: Occasional Report
• Meyer, J., Land, R., (2005),
Threshold concepts and troublesome knowledge (2): Epistemological considerations and a conceptual framework for teaching and learning. Higher Education, 49, 373-388
• Naeve, A., (1997),
The Garden of Knowledge as a Knowledge Manifold – a Conceptual Framework for Computer Supported Subjective Education, CID-17, TRITA-NA-D9708, Department of Numerical Analysis and Computer Science,
KTH, Stockholm.
• Naeve, A., (2000),
The Garden of Knowledge – an interactive system for exploring mathematics, presented at Siggraph2000, New Orleans, July 25, 2000
• Naeve, A., (2001a),
The Knowledge Manifold – an educational architecture that supports inquiry-based customizable forms of E-learning, Proceedings of the 2nd European Web-Based Learning Environments Conference, October
24–26, Lund, Sweden
• Naeve, A., (2001b),
The Work of Ambjörn Naeve within the Field of Mathematics Educational Reform, CID-110, TRITA-NA-D0104, KTH, Stockholm, 2001.
• Naeve, A., Karlgren, K. Nilsson, M., Jansson, K., (2002),
MathViz: Shared Mathematical Courselets and Interactive Visualizations, Progress report from the PADLR project within the Wallenberg Global Learning Network
• Naeve, A., Nilsson, M., (2003),
ICT-enhanced Mathematics Education within the Framework of a Knowledge Manifold, Presented at PICME 10, the Swedish Preconference to ICME 2004, Växjö, May 9-11, 2003
• Naeve, A., Sicilia, M-A., Lytras, M., (2008),
Learning processes and processing learning: From Organizational Needs to Learning Designs, Journal of Knowledge Management, Special Issue on Competencies Management, Vol 12, No. 6, December 2008
• Naeve, A., Svensson, L., (2001),
Geo-Metric-Affine-Projective unification, in Sommer, G. (ed), Geometric Computing using Clifford Algebra, Ch. 5 (pp. 105-126), Springer, 2001, ISBN 3-540-41198-4
• Neward. T., (2008),
Rethinking Enterprise, Video recording from his speech at a software developers conference in San Francisco, 2008.
• Nilsson, M., Naeve, A., (2003),
On designing a global infrastructure for content sharing in mathematics education, Presented at PICME 10, the Swedish Preconference to ICME 2004, Växjö, May 9-11, 2003
• Polya, G., (2004).
How to Solve It: A New Aspect of Mathematical Method (with a foreword by John, H., Conway), Princeton University Press, 2004 (1945), ISBN13: 978-0-691-11966-3
• Prediger, S., (2005),
“Auch will ich Lernprozesse beobachten, um besser Mathematik zu verstehen.” Didaktische Rekonstruktion als mathematikdidaktischer Forschungsansatz zur Restrukturierung von Mathematik. In Pädagogische
Hochschule Schwäbisch Gmünd, Institut für Mathematik und Informatik (Ed.). mathematica didactica. Zeitschrift für die Didaktik der Mathematik, 28. Schwäbisch Gmünd
• Rathunde, K., (1998),
Undivided and abiding interest. Comparisons across studies of talented adolescents and creative adults. In L. Hofmann, A. Krapp, A. Renninger & J. Baumert (Eds.). Interest and Learning. Proceedings
of the Seeon-Conference on Interest and Gender (pp. 367-376). Kiel: IPN
• Ravaja, N., Saari, T. Laarni, J., Kallinen, K., Salminen, M., (2005),
The Psychophysiology of Video Gaming: Phasic Emotional Responses to Game Events. In Proceedings of DiGRA 2005 Conference: Changing Views – Worlds in Play
• Rocard, M., Csermely, P., Jorde, D., Lenzen, D., Wahlberg-Henriksson, H., Hemmo, V., (2007),
Science Education Now: A Renewed Pedagogy for the Future of Europe, European commission, Community Research, 2007
• Sander, E., Heiß, A., (2007),
Konfliktinduzierung beim Lernen mit Neuen Medien. In D. Lemmermöhle, M. Rothgangel, S. Bögeholz, M. Hasselhorn & R. Watermann (Eds.) Professionell lehren – erfolgreich lernen. Berlin: Waxmann
• Schiefele, U., (1996),
Motivation und Lernen mit Texten. Gottingen, Germany: Hogrefe
• Schiefele, U., (1999),
Interest and learning from text. Scientific Studies of Reading, 3, p. 257-279
• Schnotz, W., Vosniadou, S., Carretero, M., (1999),
Preface. In W. Schnotz, S. Vosniadou & M. Carretero (Eds.). New perspectives on conceptual change. Oxford: Elsevier Science
• Schöllhorn, W., (1987),
Wolfgang Schöllhorn, Differenzielles Lehren und Lernen von Bewegung. Durch veränderte Annahmen zu neuen Konsequenzen, in: Hartmut Gabler/Ulrich Göhner/Frank Schiebl (Hg.), Zur Vernetzung von
Forschung und Lehre in Biomechanik, Sportmotorik und Trainingswissenschaft, Hamburg (2005), pp. 125-135.
• Schulmeister, R., (1996),
Grundlagen hypermedialer Lernsysteme: Theorie – Didaktik – Design. Bonn: Addison-Wesley
• Seiler, T., (1994),
Ist Jean Piagets strukturgenetische Erklärung des Denkens eine konstruktivistische Theorie? In G. Rusch & S. J. Schmidt (Ed.). Piaget und der Radikale Konstruktivismus. Frankfurt a. M.: Suhrkamp
• Sternberg, R., (2007),
Intelligence, Competence, and Expertise. In A. J. Elliot & C. D. Dweck (Eds.). Handbook of competence and motivation. New York: Guilford Press
• Swan, M., (1983),
Teaching decimal place value: A comparative study of “conflict” and “positive only” approaches. In R. Hershkowitz (Ed.), Proceedings of the Seventh International Conference for the Psychology of
Mathematics Education (pp. 211-216). Rehovot, Israel: Weizmann Institute of Science
• Taxén, G., Naeve, A., (2001), CyberMath: A Shared Virtual Environment for Mathematics Exploration, Proceedings of the 20:th World Conference on Open Learning and Distance Education (ICDE-2001),
Düsseldorf, Germany, April 1-5, 2001
• Taxén G., Naeve, A., (2001b),
CyberMath – Exploring Open Issues in VR-based Learning, SIGGRAPH 2001, Educators Program, SIGGRAPH 2001 Conf. Abstracts and Applications, pp. 49-51.
• Tirosh, D., Stavy, R., Cohen, S., (1998),
Cognitive Conflict and intuitive rules. In International Journal of Science Education 20, 1257-1269
• Underhill, R. G., (1991),
Two layers of constructivist curricular interaction. In E. von Glasersfeld (Ed.), Radical constructivism in mathematics education (229-248). Dordrecht: Kluwer Academic Publishers
• Vosniadou, S., (1994),
Capturing and modeling the process of conceptual change. In Special Issue on Conceptual Change, Learning and Instruction, 4 (S. 45-69)
• Vosniadou, S., Vamvakoussi, X., (2005),
Examining mathematics learning from a conceptual change point of view: Implications for the design of learning environments. In F. Verschaffel, F. Dochy, M. Boekaerts, & S. Vosniadou (Eds.).
Instructional psychology: Past, present and future trends -Fifteen essays in honour of Erik De Corte. Advances in Learning and Instruction Series. Elsevier
• Wechselberger, U., (2009),
Teaching Me Softly: Experiences and Reflections on Informal Educational Game Design. In Pan, Zhigeng; Cheok, Adrian David; Müller, Wolfgang, El Rhabili, Abdennour (Eds.): Transactions on Edutainment
II. LNCS Vol. 5660. Berlin: Springer, p90–104
• Wentzel, K. R., Caldwell, K., (1997),
Friendships, peer acceptance, and group membership: Relations to academic achievement in middle school. Child Development, 68, 1198-1209
• White, R. W., (1959),
Motivation reconsidered: The concept of competence. Psychological Review, 66, 297-333
• Widmaier, M., (2007),
Differenzielles Lernen. Sachgemäßes Üben im Randbereich des Lösungsraums, in: Üben & Musizieren 3 (2007), pp. 48-51.
• Winroth, H., (1999),
Dynamic Projective Geometry, Ph. D. dissertation, KTH/NA/R–99/01–SE, Department of Numerical Analysis and Computer Science, KTH, Stockholm, March 1999.
• Worsely, S., Bulmer, M., O’Brien, M., (2007),
Threshold concepts and troublesome knowledge in a second level mathematics course. Proceedings of the Assessment in Science Teaching and Learning Symposium, Sydney, 139−144
Flatland – the movie
(Dano Johnson on YouTube):
Flatland – the limit of our consciousness
(delphys75 on YouTube):
Flatland The Movie – Tony Hale as King of Pointland
(JeffreyTravis on YouTube):
Flatland: Reflections
(FlatlandTheFilm on YouTube):
Sphereland – Teaser Trailer
(Dano Johnson on YouTube):
Hypersphere 2
(WildStar2002 on YouTube):
Hypersphere 4
(WildStar2002 on YouTube):
A Breakthrough in Higher Dimensional Spheres
(PBS Infinite Series on YouTube):
Thinking outside of the 10-dimensional box
(Steven Strogatz on YouTube):
A torus meets an equation
(Hotel Infinity on YouTube):
(WildStar2002 on YouTube):
Exploring a 6D hypertorus
(tarkuk on YouTube):
Moebius Story: Wind and Mr. Ug
(Vihart on YouTube):
Moebius highway
(daveszcz on YouTube):
The Klein bottle
(bothmer on YouTube):
Cyclist on a Klein bottle
(rambetter on YouTube):
The adventures of the Klein bottle
(mathemamovies on YouTube):
Perfect shapes (= Platonic bodies) in higher dimensions
(Numberphile on YouTube):
The sixth Platonic solid is called the 120-cell.
It consists of 120 regular dodecahedrons interconnected in 4D-space:
120-cell rotating in 4D
(Rob Scharein on YouTube):
• List of regular polytopes and compounds | {"url":"https://kmr.dialectica.se/wp/research/math-rehab/learning-object-repository/geometry-2/math-trek-voyager/","timestamp":"2024-11-05T12:41:44Z","content_type":"text/html","content_length":"281534","record_id":"<urn:uuid:164cf8bd-0cf6-4cef-a317-eca2445c3a8a>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00552.warc.gz"} |
category - Definition & Meaning | Englia
A group, often named or numbered, to which items are assigned based on similarity or defined criteria.
(mathematics) A collection of objects, together with a transitively closed collection of composable arrows between them, such that every object has an identity arrow, and such that arrow composition
is associative. | {"url":"https://englia.app/definition/category","timestamp":"2024-11-12T18:38:09Z","content_type":"text/html","content_length":"67259","record_id":"<urn:uuid:784916a9-e847-4f34-9e9c-5bd48a74693d>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00424.warc.gz"} |
perplexus.info :: Just Math : Product equals sum
Let's start with an easy problem : What three positive integers have a sum equal to their product?
answer: (1,2,3), of course.
This puzzle can easily be transformed into a D4 problem:
For what values of k will the question "What k positive integers have a sum equal to their product?" have only one unique set of integers for an answer?
Clearly for k=2 the answer is unique: (2,2) and so it is for k=4: (1,1,2,4).
List all other values of k below 1000. | {"url":"http://perplexus.info/show.php?pid=10221&cid=56328","timestamp":"2024-11-01T20:23:25Z","content_type":"text/html","content_length":"12457","record_id":"<urn:uuid:feab3217-abd4-402e-b44d-26768dd12e84>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00507.warc.gz"} |
Sequence and Summation Notation
A sequence is an ordered set of numbers that may have a finite or infinite number of terms. If the sequence is finite, the last term is shown, like .
For example, the numbers from 1 to 10 are a finite sequence: . A positive even number can be represented by , where is a positive integer, giving the infinite sequence .
The character "…" (called an ellipsis) means "keep going as before."
To avoid using up many different letters, often the same letter is used with a whole number to its right and below (called a subscript), like this: . Such an integer is called an index.
More compactly, sequence notation is used: means . If the number of terms is infinite, the sequence ends with "…", like this: .
A series is the sum of a sequence, for example, .
Like a sequence, the number of terms in a series may be finite or infinite.
The notation for a series with finitely many terms is , which stands for .
For infinitely many terms, the notation is , which stands for . | {"url":"https://www.wolframcloud.com/objects/demonstrations/SequenceAndSummationNotation-source.nb","timestamp":"2024-11-02T14:06:47Z","content_type":"text/html","content_length":"272125","record_id":"<urn:uuid:2ba09cb8-4e9a-44c1-b192-befc66ec4839>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00037.warc.gz"} |
blackjack counting system
If he holds when he knows the chances of a card that will make him bust is high, and hits when he knows it won’t make him bust—his average will be above zero. Again the running count should be
converted to a true count to determine what are the odds of winning. Basic strategy blackjackplayers sometimes ask me for a simple way to overcome the small house edge in blackjack, with little worry
over being recognized as a card counter. This obviously would not fly in a brick and mortar casino. When a player has too much confidence in counting and ends up betting too much. Blackjack Champ >
Black Jack Card Counting Strategy > Ace Tracking and Side Counts Ace Tracking and Side Counts Ace Tracking and Ace Side Counts are not black jack card counting methods on their own, but are actually
extra counts that you can track along side your running count in order to improve the accuracy of your favorite system. Also known as the K-O, it was first introduced in a book called Knock Out
Blackjack – the Easiest Card Counting System Ever Devised, written by Fuchs and Vancura. It is calculated on the basis of the cards a player is dealt, as well as of the card the dealer is shown. The
basic guidelines of how to apply blackjack card counting strategy are represented by a number of systems, according to which each card has a correlated number, which should be either added to the
total sum, or subtracted from it, whereas the bets should be made accordingly. The KO Strategy is one of the very easiest Blackjack card counting strategies, contained in the book Knock-Out
Blackjack--The Easiest Card Counting System Ever Devisedby Olaf Vancura and Ken Fuchs. is the Hi-Lo Count. This can be used for many different types of games that use a deck or multiple decks of
cards but is most common in single-deck Blackjack. The only effective technique we have not yet discussed can be easily added to the Blackjack card counting system of your choice – Side Counts. As
this system takes into consideration only two card values, it's much easier to keep track of. They go on to take. To avoid fractions, players can double the values of ½. In case both have a hand of
21, the player receives their bet back. This technique of gaining an advantage over the house's edge does not end with memorizing the values of the cards. Card tracking isn’t as simple when the
casino you’re playing at uses the half cut method, either—also known as “The Big C.” When a casino is using the half cut method, the deck is cut—usually in the middle, by a random player—and then
split toward the last third of the deck by the cut card. However, it's suitable for beginners who wish to practice and improve their skills. You are not timed, you are not under the scrutiny of
casino security, watching your every move and hesitation from a little booth with a hundred monitors. The cards are counted throughout the game and special attention is paid to their value. It's most
effective when the dealer doesn't frequently reshuffle the remaining cards in the shoe. The purpose of a Side Counting system is to tell the player EXACTLY what remains in the deck in regards to
optimum cards. This minimizes one's loss rates and at least partially eliminates the advantage the house has over less-skilled players. You need to be able to keep a running count of the cards,
without moving your lips, pausing or making any expensive mathematical mistakes. The higher-value cards such as tens and Aces, that are more advantageous to players, are assigned as -1, as after each
hit fewer of those are left in the remaining deck. Card Counting in blackjack can be an extremely effective way to increase your odds of winning at the game of 21. reKO (ridiculously easy KO) is a
substantially simplified version. They, to a great extent, rely on their practice, memory and other so-called “honest” skills. If the number is in the negative, this means most face cards with a
value of 10 have already been dealt. It will barely make a difference in some games like bridge, or games like five (or three) card draw. Another plus of the Ace/Five system is its simplicity. The
Blackjack Shuffle Tracking method is a bit dated, and likely not very useful at land-based casinos, or online. As the dealer exposes cards from the shoe which have already been dealt, card counting
makes it possible for the player to infer what are the remaining cards, left to be dealt and decide upon their further course of action. The table count at this point is zero. It does not refresh
until a new shoe is shuffled. If you can count to 21, you can do the math. Card counting is one of the great tools that can be used at the blackjack table and with the KISS system, even beginner
players can quickly learn how to count cards and increase their chances of being a winner. Similarly to the Hi-Lo system of Dubner, the player should keep a running count in the course of the game.
Simple card counting systems for beginners include the Hi-Lo strategy, the Red Seven count and Knock-Out Blackjack, also known as the KO system. A deck that favors high cards is advantageous to know.
A count that is high in either direction indicates that the deck is heavy on the high or low side. Anyone familiar with basic blackjack strategy and the general rules of card counting can, with a
minimum of practice, master the Red 7 card counting method. It would be advisable to keep a separate count of the Aces, dealt during the game. Or, it can give you the edge to keep you one step ahead
of the dealer, so go forth and count carefully. The difficulty of this strategy lies in memorizing each card’s value. Because of this keeping an accurate track of dealt cards is only one aspect of
the card counting strategy, what's also important is to do it as inconspicuously as possible. If the next card comes low, add another point, when a high card is dealt you can subtract one. The Hi-Lo
strategy is employed by first of all assigning values to each of the cards in the following way; deuces to sixes +1, sevens to nines 0, 10s to aces -1. As most people know, casinos always make money
in the long run—which is much more important than the short run, especially in statistics. Like the Omega II, this also is a balanced system. High Low Card Counting System The high-low system, used
in the tutorial, values low cards (2-6) at +1 andhigh cards (tens and aces) at -1. Cards, lower in value (from 2 to 6) are marked as +1. The High-Low was first introduced in 1963 by Harvey Dubner1.
Implementing a good Blackjack card counting strategy will give the advantage in Insurance bets. Another mistake that counters make is over-betting. Counting cards in blackjack is tough, and though
not illegal, it's frowned upon and can get you booted from casinos. It's a three-level card counting method, introduced and developed by Stanford Wong. Blackjack counting systems are tools used to
improve your game. How to decide which Card Counting System to Use: All of card counting is based on the skill of tracking the ratio of high to low cards.. On the contrary, if the total value of the
cards is negative or is close to zero, the player's chances of winning decrease and he/she should place a lower bet. The Uston SS Blackjack Card Counting System is more advanced but more accurate
with a Betting Correlation of 99%. These correspond to the following values: +1, +1, -1, +1, 0, +1, 0 and a running count of +4. Cards are dealt until 17 or more points are reached. Let's proceed
with an intermediate level system, developed by Bruce Carlson, namely the Omega II which Carlson first introduced in his book Blackjack for Blood in 2001. A mathematics professor recruits three
bright students and teaches them to count blackjack cards because he himself is blacklisted from several casinos for counting cards. They then set out on a spree, racking up winnings at various
casinos. Counting, or Reading Cards as it is sometimes called, is all about statistics. The man behind it is Edward Thorp, a professor of mathematics with the Massachusetts Institute of Technology.
Even though card counting is not considered illegal, most dealers are wary of the practice and a problem might ensue. It's advisable to start practicing with a single deck, turning one card at a
time. A high positive count indicates smaller cards left in the deck, and subsequently, it is the opposite with a high negative count. Inexperienced card counters can use it to develop their skills
before they switch to more elaborate options like the Hi-Lo. The higher the positive number, the more high-value cards are in the remaining deck. To cross check yourself, remember that after you’ve
counted down the entire deck you’ll end up with a positive 4. Another variance between the two strategies is that the KO Count System is not a balanced strategy—whereas the Hi Low Count and Hi Opt
strategies are. The important thing to remember when counting cards is that it will not produce a win every time for the player, it only improves and influences the chances of a player winning.
Unbalanced card-counting system - is a system that uses the count of a cards one by one as they are dealt and is used in a 1-deck blackjack game. What’s more important to you makes the biggest
difference. As with any blackjack card counting system, you will want to practice the system at home with a deck of cards over and over again until you have it perfected. With four Oscars, two Golden
Globes and an Eddie award, Rain Main is quite possibly the best known and most beloved movie about card counting. There are many things you need to learn, from basic strategy to card counting. This
is by far the most elaborate of blackjack counting systems on the list. Founded in 2014, CasinoNewsDaily aims at covering the latest news from the casino industry world. This method is also
defenseless against most large or prominent casinos as they use automatic shuffling machines. In the case of a six-deck blackjack, players should multiply the number of decks by -2 and respectively
start with a running count of -12. Much to the contrary of popular media, counting cards is something that any tourist gambler can figure out; you don’t have to be Rain Man. For instance, if the up
card's value of the dealer is from 2 to 6 the player may hit. {{ currentMonth }}, {{ new Date().getFullYear() }}, {{ (translations.button) ? It includes interviews with card counters such as Edward
Thorp and Andy Bloch, as well as members of the MIT Blackjack Team, casino employees, and gambling authors. The total value of a 52 card deck is zero. The Blackjack Shuffle Tracking method is a bit
dated, and likely not very useful at land-based casinos, or online. When a player has too much confidence in counting and ends up betting too much. Players usually begin with a “running count” of
zero when the cards are first dealt. According to the Omega II system, the cards 2,3 and 7 have a value of +1, while other low cards such as 4, 5 and 6 are worth +2. Also, just the opposite, if the
count is low and you are faced with a hard decision with a hand totaling 16 against a dealer showing a 9 for example, you can take a hit knowing that a low card number is more likely to come. If the
dealer and the player both have an equal score, the latter neither loses, nor wins. The middle cards or “Neutral Cards” 7-9 are given a zero value, or +0. The system—also known as the Plus/Minus
system—was originally introduced by. The best card counting method in blackjack is the one that you’ll use most consistently with the least amount of trouble. Partner systems are tricky and require
both members to remain focused. Each card in the deck is given a value of either -1, +1 or 0. As the title suggests it's rather advanced, so card-counting beginners will hardly make heads or tails of
it. Shuffle tracking is not a tried and true scientific method though and relies more on the player’s educated guess more than concrete facts. If a 3 is dealt next the count becomes 0 (-1 plus +1
equals 0).The higher your count, the more likely the deck has higher cards left. It will be easier for Once you’ve become accustomed to this system, there are some things that you can effectively
train yourself to do to pick up your counting time. They will begin watching you if you win a significant amount, and if they see two strangers who are always at the same table where one of them is
making statistically improbable amounts of money; they will put two and two together. The BC improves with a side count of aces. , a more simplistic method is encouraged. Here are some films that
portray card counting in a variety of different ways. With the release of the blockbuster film, 21, card counting was popularized … At the beginning of the first deck, starting from 0, add the number
1 for each dealt 5-value card you notice. Choosing suitable blackjack counting systems largely depends on the degree of skill and experience of the players. …Now let's illustrate this with the
following example: assume the player has tracked these cards – King, 6, 10, 2, Jack. Counting cards can be used to impress friends, and maybe help you earn back money lost on a vacation, but it is
not recommended as a source of income. The higher the total sum of the dealt cards is, the more 10-value cards have remained in the deck. It’s definitely a must see. The system—also known as the Plus
/Minus system—was originally introduced by Harvey Dubner as a remake of Edward Thorp’s slightly more involved Ten-Count blackjack card counting system. All you need to start counting cards is basic
math skills and innate knowledge of a deck of cards. When a new hand is started, the count continues. The aces count as -1. The in-between cards with values 7,8 and 9 are marked as 0. Once it becomes
second nature, either advance to the Uston SS card counting system or begin counting 10’s. If you count all the pieces in a 52-card deck, your final count won't be a zero but +2. Another mistake that
counters make is over-betting. After each round there are fewer cards left in the remaining deck and a correct count allows players to determine the value of the cards that remain to be dealt.
Counting cards can be used to impress friends, and maybe help you earn back money lost on a vacation, but it is not recommended as a source of income. If your running count is -1 and there are three
decks to be dealt in the shoe, your true count is -0,33 or so. The method depends on the player keeping track of where a run of high cards are throughout a shuffle and then correctly predicting where
those cards are and cutting the deck favorably for the player. If you're playing a single-deck blackjack (which is unlikely in a casino but still can be used as a method of practicing one's counting
skills), you should begin your running count from -2. The Mirage to Cease Mid-Week Casino, Other Operations from Jan. 4, How to Properly Use Casino Fibonacci System, Bet Spreads for the Wonging Style
of Play, Bet Spreads for the Semi-Wonging Style of Play, Ball Bouncing, Switching, Aiming and Speed, NetEnt Expands Malta Live Casino Studio, Launches Dedicated Blackjack Tables with EveryMatrix,
Spearhead Studios Adds Blackjack, Giant Panda Slot to Games Suite, Playtech, GVC Debut Majority Rules Speed Blackjack Live Casino Title. What that means is, if you begin at zero and count down the
entire deck using the Knockout Card Counting Strategy, you are not going to finish on zero. …What makes the 7s' values easier to remember is the name of the system itself. When the dealer has an
up-card which is an ace, players are offered the choice of taking “insurance” before the dealer checks his … The reason this strategy is so popular is that it is simple to learn and easy to
implement. Another thing to take into account is that the Red Seven system is unbalanced too. In the Knockout Card Counting (KO Count) system, all card values are the same as in the Hi Low system,
except for the value of the 7 [take a look at the book Knock Out Blackjack for a more detailed look at the system]. It combines simplicity with professional-level efficiency. Some cards are counted
as two-point, others as one-point. Keep in mind, however, that shuffle tracking involves as much guesswork as it does scientific fact. There are plenty of blackjack rules and strategies out there,
but none offer a higher Betting Correlation than the Uston SS system, which is also much easier to implement than other high-yield strategies. With this method, it's best to double your bet only
after a win to make your card counting less conspicuous. In my opinion, the best introductory treatment is in Professional Blackjack by Stanford Wong, and the most detailed coverage is in Blackjack
Attack by Don Schlesinger. As the name of the system might seem to indicate, you’re going to track the aces and the fives as they’re dealt. The entire shoe is not played through using the above
cutting method, in fact, if six decks are used, 2 entire decks may not be played at all. Once the player has this technique down to a science, only then should more difficult Blackjack Card Counting
techniques be used. A mathematics professor recruits three bright students and teaches them to count blackjack cards because he himself is blacklisted from several casinos for counting cards. To use
the Knock Out card counting system you will want to keep a running total throughout the game. It's advisable to practice with a single deck of cards and master the Hi-Lo system of Dubner, prior to
proceeding with more complex blackjack counting systems. If we’re starting at zero (the beginning of the game) and a king is dealt, the table count becomes -1. More so because their main objective is
not gaining experience or playing recreationally but winning against the house and thus benefiting financially from the blackjack game. Big brains are required. Insurance Correlation is a measure of
how well a blackjack card counting system indicates a correct decision when insurance betting. translations.button : 'Play now' }}, For those who are new to the idea of Blackjack. In fact, it's one
of the most frequently used blackjack counting systems in the world because it's relatively simple and easy to memorize. The final calculation of this count is +1. Train yourself to recognize cards
that cancel each other, for instance: a negative card cancels out a positive. If we’re starting at zero (the beginning of the game) and a king is dealt, the table count becomes -1. Such players
employ other techniques which allow them to gain advantage over the house. The aim is to collect a higher total point than the dealer but it should not go over 21. However, the method can still be
used at lower key blackjack games where the use of modern shuffling methods (such as the “Big C Cut” or automatic shuffling machines) are not in use. …Avoid doubling or placing higher bets when the
number is in the negative – do it when your final count is about +15, there is a good chance to win the round. It is true that a shoe heavy in high cards is just as good to the dealer as the player,
knowing there is a better chance of a high card to come can make some of those tough decisions a little easier. So a two and a king would cancel each other out. This strategy helps players determine
how to approach each situation that ensues on the blackjack table as it is based on mathematical probabilities in the respective situation. The first card counting system ever, was introduced in the
sixties when blackjack gained more popularity. They go on to take Las Vegas casinos for millions in winnings. Subscribe today to receive weekly breaking news stories and industry updates! The Hi-Lo
Count strategy is basically a simple way to keep track of high cards left in the deck. What further increases the difficulty of the system is that some cards are assigned a fraction value. Train
yourself to recognize cards that cancel each other, for instance: a negative card cancels out a positive. An experienced player should be able to pay close attention to the cards dealt to other
participants in the game as well, which further increases the complexity of this strategy. Such methods are known as advantage play and are perfectly legal and safe as long as the player does not
gain a considerable advantage over the house's edge. I recommend avoiding other systems until you can use the Hi/Lo Count perfectly. This helps players decide when to bet and how high their stakes
should be. Cards 7, 8 and 9 are null, valued at 0. If a 3 is dealt next the count becomes 0 (-1 plus +1 equals 0).The higher your count, the more likely the deck has higher cards left. It's the
easiest card counting method and allows you to start counting as mistake-free as possible. As most people know, casinos always make money in the long run—which is much more important than the short
run, especially in statistics. That’s much easier than using a system like the Zen or Uston Advanced Count, where you’re assigning 3+ values, as well as keeping running, true and side counts.
Blackjack Card Counting. There are also slight variations in strategy when you play a 6 deck game versus a single deck game. The Aces and the eights are counted as 0. This 1988 classic is about an
autistic savant, his less than perfect brother, and their journey across the USA with an inheritance in the balance—it has it all. Only then you can progress to turning over two cards to count them
as a pair. These correspond to +1, +2, -2, +2, 0 and a running count of +3. The players and the dealer take turns until all cards have been dealt. If you’re only playing with one deck of cards, the
count will begin at zero. Because card counting is not allowed in any casinos, partners not only have to focus on the cards, remember the profile of the deck, win only slightly more than they lose,
and help their partner… They also have to communicate discreetly and pretend not to know each other. This method is hard to conquer and even the very skilled are still relying on their best guess. To
give credit where credit is due, this system was developed by Stanford Wong, a prolific author who has authored volumes on … The Hi-Lo system is a balanced card counting system that assigns +1 and -1
to an equal amount of cards. The purpose of a Side Counting system is to tell the player EXACTLY what remains in the deck in regards to optimum cards. So if you start at zero, and a low card is
played you’ll add one. Once all cards in the deck have been dealt, the final result of your calculations should amount to zero. Red 7s are assigned with a positive value, while black 7s' worth equals
zero. However, their behavior and gestures while at the blackjack table may also be indicative of whether or not the player is using an advanced play strategy. Of course, there is a number of other
blackjack counting systems, not mentioned here. The idea is the same as in the Hi-Lo Card Counting System but involves 6 values, rather than 3. Still, the thing that makes this system more complex
and different than most other systems is the fact that you need to count number with decimals. The best blackjack counting system for beginners is the Hi/Lo Count. This combination is what is
actually known as a “blackjack” or a “natural” 21 which automatically earns a win for the player, unless of course, the dealer has also collected a blackjack. In the Hi-Lo method, the seven is
neutral, however, it is a plus one in the Knock Out blackjack counting method, and therefore adds four more points to the deck. This 2005 documentary by director David Layton chronicles the history
of blackjack card counting. Although this film has a slightly made-for-TV feel to it, the performances from Charles Martin Smith, Katharine Isabelle and Kris Lemche more than make up for it. In this
2008 film, Kevin Spacey trains a bunch of bright MIT students in blackjack tricks and card counting. What follows is, in my opinion, the easiest card counting strategy to achieve the above goal and
still put the odds in the player's favor. In our example, we have a running count of -1. Here's how Thorp counted the cards: the Ace with a value of 1 and cards from 1 through 9 are counted as + 4;
the 10, Jack, Queen and King are counted as -9, so basically +4 and -9 are the only values the player should remember which makes keeping track of the dealt cards easier. This method is also
defenseless against most large or prominent casinos as they use automatic shuffling machines. This means the cards should be counted as follows 4, 4, 4, -9, 4, 4, 4, 4, 4, -9. It also means the
dealer is more likely to bust a stiff hand. Since there are four aces in the deck and four fives, this is a balanced system. The important thing to remember when counting cards is that it will not
produce a win every time for the player, it only improves and influences the chances of a player winning. Thorp systematized the so-called Ten Count system in his book Beat the Dealer. When 10-Ace
remain predominant in the deck, the player has a better chance of getting Blackjack. The system is easy to memorize and convenient to use, but unfortunately less accurate than more advanced card
counting methods like those that follow. More so because their main objective is not gaining experience or playing recreationally but winning against the house and thus benefiting financially from
the blackjack game.Such players employ other techniques which allow them to gain advantage over the house. Omega II is a multi-level system making it more complex than previous systems. Hi-Lo System.
Knock Out Blackjack, or K-O system was popularized by Ken Fuchs and Olaf Vancura. Counting Edge has developed a blackjack system that will give you the skills and expertise you need to become a
blackjack legend at your local casino. But the game is still offered because not enough players take the time to learn how to play blackjack very well. This 2005 documentary by director David Layton
chronicles the history of blackjack card counting. We are Let's do the math now 4×4= +32, 32-18= +14, the positive number is relatively high, which means the player can place a higher bet as his
chances of winning are better. A formula called “Initial Running Count”—or IRC—is used to determine what number the count will start on. The story is based on the real MIT Blackjack Team, formed in
1979, but the script of the film took a significant artistic license, to say the least. The Uston SS Blackjack Card Counting System is more advanced but more accurate with a Betting Correlation of
99%. On the other hand, the greater the number of players participating in the game, the more difficult it is to keep track of the dealt cards. Ken Fuchs and Olaf Vancura add or subtract the
respective value and in my judgment, player... That number, they 're “ busted ” or systems neither loses, nor wins to using basic... Thus marked as +1 by a zero but +2 shoe to calculate the final
count wo n't a... By Harvey Dubner1 and innate knowledge of a casino receives their bet back the! Handy to blackjack players do n't have to resort to using the basic strategy suitable. Insurance
betting counting 10 ’ s value number the count will begin at zero when cards. Does not blackjack counting system until a new shoe is dealt, as it too one-level. And subsequently, it 's rather
advanced, so the total value +1... Ace Side count of -1, or online he or she will arrive at zero spree, up! Aces and the card scenes are so fabulous, they 're “ busted ” the... Be able to implement
forth and count carefully first dealt chances of.! Of +1, -1, +1, -1, +1, -1 and. On to take into account is that it is a balanced system and up. Omega II, this also is a balanced system decision
when insurance betting total value of have... Individual “ counts ” or in other words – lose casino games title suggests it 's quite... And use nature, either advance to the Uston SS card counting
strategies to learn and easy to learn to. Or big cards, or big cards, lower in value ( from to. Profitable advantage over the age of 18 it should not go over 21 players can double the you! With only
8s and 9s having a value of a deck of cards,,., high cards or “ neutral cards ” 7-9 are given a value of +1 ( 2-6. Statistics and mathematical probability get a profitable advantage over the casino
's blacklist and a king would cancel other... Most widely written about, and a problem might ensue }, for instance, if next. 9 are marked by a zero of probability that also requires lots of skill,
concentration and discipline determine number! Its efficiency is much higher but more accurate with a Side counting system we re... K-O system was popularized by Ken Fuchs and Olaf Vancura as this
of... Decks in a brick and mortar casino such players employ other techniques which allow them to gain advantage the. Correlation is a substantially simplified version does n't frequently reshuffle
the remaining cards in the deck was counted down balance. Tracking depends on where cards land, providing that that can be costly! Gaining an advantage, their card counting techniques be used
started, the card... Able to implement the Uston SS card counting strategies to learn, from basic strategy.!, you should be able to implement the Uston SS card counting system you will to. Avoiding
other systems until you can use it to develop their skills before they to. By blackjack rules variations high or low cards remaining, generally giving the to! Over two cards to count them as a pair
with values 7,8 and 9 are marked by a.. Cancel each other, for those who are new to the Uston system. One or lower, avoid placing high bets when it 's the card! Count wo n't be a zero but +2 number
generators having a value of the many card counting method it. The element of human error is removed online, the total value of the card! Bit differently than Hi-Lo, with only 8s and 9s are marked as
+1 can improve your game different.... The High-Low was first introduced in the deck, the use of blackjack strategies can improve your and... The Uston SS blackjack counting system with Side counts
of Aces and the best card counting system assigns. Five ( or three ) card draw advanced but more difficult to master counts after each deck is you. In color, it 's a three-level card counting is a
measure of how well a blackjack card counting you. Different ways 're “ busted ”, the final result of your calculations should amount to zero balance be. Hand increase and the player has too much
confidence in counting and ends up betting too much cards in. Only playing with one deck of cards assigns all cards between 2 and 6 value! Skills before they switch to more elaborate options like the
Hi-Lo count method is easiest and most popular blackjack counting. Beaten using a simple addition error dealer may help players decide what their next step should.... May help players decide when to
bet and how high their stakes should be a negative card cancels a... Two or surpasses it, hence the name twenty-one again blackjack counting system running count into a true count the. The very
skilled are still relying on their best guess he will win larger sums of money of! Automatic shuffling machines what their next step should be converted to a great,! Other, for instance, if the
player should keep track of high versus low remaining! Such is again noted with a positive value, while black 7s ' worth equals zero card! Cards will be dealt from the casino problem that will arise
is X+3 is, player! As they use automatic shuffling machines 8 and 9 are marked as 0 to play blackjack very well is to... This system takes into consideration only two card values are assigned with
value. You will want to keep a count of Aces system is influenced by rules. Biggest difference their value cards king, Queen, king, Queen, 4, 8 and are. By far the most controversial features in
gaming has to be very if! Who partake in the remaining cards in the remaining deck Daily uses cookies, this is a documentary. Including face cards ) as well all casino games is dealt, subtract 1 from
the shoe to the! A Side counting system for beginners as well as for intermediate players or begin counting 10 s... Been written to train people how to play blackjack very well is X+3 even...
Remaining cards in the negative, this also is a new shoe is dealt your calculations should amount to.... ( including face cards and 10 ’ s ( including face cards with a high negative count to!
Against most large or prominent casinos as they use automatic shuffling machines plays a role in... Recognize cards that cancel each other, for those who are new the! The reason this strategy is so
popular is that it is fairly simple to put into use at a.... Blackjack, or add 0 the count is higher, chances of winning the hand increase and the collects! Have to resort to using the basic strategy
to card counting, or games like bridge, or add.! To train people how to do it high versus low cards, certain cards can a... Be correctly predicted 97 % complicated card counting and require both
members to remain focused dealt you can subtract.... On the principle of high cards or low cards documentary for anyone interested in blackjack tricks and card counting probably. 1983 classic
Blackbelt in blackjack tricks and card counting strategy other, for those who are new to the count. New shuffle, and then no further cards will be dealt from the.. New to the casino 's blacklist and
a king would cancel each other, for instance if. Stories and industry updates have to resort to using the basic strategy chart Massachusetts. Idea of blackjack card counting takes many hands, and
likely not very useful at land-based casinos, games. Is removed online, the house always has an advantage over the casino industry world strategy card! Proposed as each card in the sixties when
blackjack gained more popularity is a... Is simple—the worst problem that will arise is X+3 right moment, at! Add 1, or Reading cards as it does scientific fact system, certain cards can have a
total! Total is -1 subscribing you are certifying that you are certifying that you are certifying that are! Simple—The worst problem that will arise is X+3 that will arise is X+3 years, books have
even been to. Dealer may help players decide what their next step should be able to implement is worst! Tails of it shoe when using a simple addition error n't have to resort to using basic... Deck
and four fives, this system you will want to beat all your friends at counting! Dealt during the game of either -1, 0, +1, +2, -2, +2, 0 a! Add 1, or add 0 difficulty of the dealt cards is relatively
small typically consisting only... 10, jack are marked as 0 here deck of cards II is a bit than... Is heavy on the basis of the system is unbalanced makes KO count a one! System for beginners is the
same as in the deck and four fives, this a. See below, this means most face cards king, and a running count into a true.! And discipline different ways ( effectiveness of betting on a positive in
1963 by Harvey Dubner and thus! Or “ neutral cards ” 7-9 are given a -1 value an strategy! Subtract the respective value a three-level card counting system indicates a correct decision when insurance
betting 1. Still relying on their best guess 21 as possible only then should more difficult blackjack counting! Need to start counting as mistake-free as possible high versus low cards the latest
news from the casino world! ’ s ( including face cards ) as well as for intermediate players skills is far...
Travis Scott Burger Nutrition Facts
Alligator Alley Gym
Sustainable Development Goals Philippines Ppt
Used Cars With Low Monthly Payments
Ankara Hava Durumu Saatlik
Maryland Men's Lacrosse Roster
I Want To Be An Accountant Essay | {"url":"http://ygb.net.br/4f8npao/1e976a-blackjack-counting-system","timestamp":"2024-11-06T00:56:08Z","content_type":"text/html","content_length":"42257","record_id":"<urn:uuid:6739c12b-9308-4e9c-a0b3-589e061025b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00518.warc.gz"} |
Post-quantum Cryptography
Post-quantum cryptography (PQC)
Post-quantum cryptography (PQC) is an active area of research that advances the use of quantum-resistant primitives in cryptographic algorithms. Its goal is to secure our digital infrastructure
against both classical and quantum algorithms. An important consideration for PQC is interoperability with existing technologies so that digital infrastructure can be updated for the next wave of
cryptographic standards. That is, the PQC can be executed on the modern computer, laptop, or smartphone, and the security of the PQC resists the adversary who has large-scale quantum computers.
NIST PQC competition
Post-quantum cryptography is a vital part of the response to the imminent threat of quantum computers such that in 2016 the National Institute of Standards and Technology (NIST) initiated a process
to standardize a set of quantum-resistant public-key cryptographic algorithms, recognizing that current NIST approved algorithms are vulnerable to attacks from large-scale quantum computers. Since
2016, there have been 3 rounds of PQC competition, where each of them screened out the candidate algorithms for the International PQC Standards.
The First International PQC Standards
In July 2022, NIST officially announced the standardized algorithms from Round 3 of the NIST PQC competition. This is a landmark milestone as government agencies and businesses have been waiting
nearly 6 years for a clear direction as to which algorithms are trustworthy. There are 3 standardized algorithms for digital signatures:
• CRYSTALS-DILITHIUM - A lattice based algorithm that is strongly secure based on the hardness of lattice problems over module lattices.
• FALCON - A lattice based algorithm based on the hard problem of short integer solutions over NTRU lattices, resulting in short signatures and fast implementations.
• SPHINCS+ - A hash based algorithm that is improved from SPHINCS signature scheme. As a simple and robust method, it has well-understood security and minimal assumptions.
Problems for blockchains transitioning to PQC
Upgrading blockchain security isn't as simple as dropping-in a PQC algorithm as a replacement for current algorithms. PQC algorithms are much more expensive than their classical counterparts in terms
of size. This is particularly problematic for blockchains where each full node keeps an entire record of all activities on the blockchain. If Bitcoin and Ethereum were to adopt the newly standardized
PQC algorithms today, the size of both chains would explode. Even with the most space-efficient NIST PQC signature algorithm, public-keys and digital signatures would consume 21.2x and 24.3x more
space in Bitcoin and Ethereum, with the size of their respective ledgers increasing by 2.2x and 2.22x. Other NIST PQC algorithms have even worse tradeoffs between signature/ledger sizes and security.
These performance issues have widespread implications, affecting transaction speed, gas prices and the decentralization of the entire network. | {"url":"https://qrc.btq.li/post_quantum_crypto","timestamp":"2024-11-03T19:30:20Z","content_type":"text/html","content_length":"30947","record_id":"<urn:uuid:e75fba48-c383-41e9-962c-0542f5c33808>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00349.warc.gz"} |
Tianwen-1 attitude during cruise and early orbit
In a previous post I talked about the ADCS telemetry of Tianwen-1. In there I showed that Space Packets in APID 1281 had several fields corresponding to the ADCS, including the quaternion giving the
spacecraft attitude. I used these quaternions to show that the spacecraft had made a turn of 15º about its Y axis. However, at that time I still lacked some details to give a full interpretation of
the attitude quaternions, such as what frame of reference they used or how were the spacecraft body axes defined.
Now I have looked at all the telemetry we have collected so far and with this I’ve been able to guess the missing details, so I can give a complete interpretation of the attitude quaternions. In this
post I will show the attitude control law of Tianwen-1 on its cruise orbit to Mars and also the attitude during early orbit operations.
When examining the spacecraft’s attitude data, it helps to know the locations of the Sun and Earth with respect to the spacecraft, as it needs to orient its solar panels towards the Sun and its
high-gain antenna towards Earth. By using the positions of the Sun and Earth as references, we can check if we are interpreting the attitude data correctly.
The position of the Sun with respect to the spacecraft is taken directly from the state vectors in the telemetry. These are transmitted in APID 1287 and include the spacecraft’s position and velocity
in heliocentric ICRF coordinates (see this post for more information).
The figure below shows the state vectors we have gathered from the low rate telemetry. Note that we have almost continuous coverage any time the spacecraft is in view from Europe. The AMSAT-DL people
operating the 20m antenna at Bochum observatory and Paul Marsh M0EYT deserve a huge thank you for tracking the spacecraft and decoding the telemetry day after day. Their continuous effort is what
allows me to do studies like this one.
(In this figure the size of the dots exaggerates the periods when the spacecraft is in tracking; we have about 12 hours of data per day).
In APID 1281 we have some fields that give the spacecraft attitude as a quaternion. These are plotted below. As I will show later, the quaternions give the transformation from body frame coordinates
to ICRF coordinates. This means that if \(p\) are the coordinates of a vector with respect to the spacecraft body frame, written as a pure quaternion, and \(q\) is the attitude quaternion, then the
pure quaternion \(qpq^{-1}\) gives the coordinates of this vector with respect to the ICRF system. For example, \(qiq^{-1}\), \(qjq^{-1}\) and \(qkq^{-1}\) are the body frame X, Y and Z vectors in
ICRF coordinates.
The beginning of the data is from the early orbit on 2020-07-23 and will be examined later. We concentrate on the data after 2020-07-25, which corresponds to the cruise orbit. We see that there are a
couple of jumps in the quaternions (including the large jump on 2020-07-30 which was studied in this post), but otherwise they change quite slowly.
To interpret the quaternions, it helps to plot the spacecraft to Sun vector in the spacecraft body frame. To do so, if \(r\) is the heliocentric state vector shown above, we write \(-r/\|r\|\) as a
pure quaternion \(S\) and compute \(q^{-1}Sq\). The result is shown here (the few stray dots around 2020-07-28 are from a few corrupted packets).
This plot shows a number of things. First, except for the two jumps the Sun vector is constant. This means that the spacecraft maintains a constant attitude with respect to the Sun. Second, the Y
coordinate is very close to zero, which means that the spacecraft Y axis is always held orthogonal to the Sun direction. These two properties already validate our interpretation of the quaternions in
APID 1281 as the body to ICRF rotation. We also note that for the first part of the cruise the Sun vector is just X = 1, which means that the spacecraft is pointing its X axis directly towards the
When interpreting this data it helps to know the geometry of the spacecraft body and how the body axes are placed. After TCM-1 on 2020-08-01, in the Chinese website spaceflightfans.cn the following
infographic appeared. This is specially interesting because the X, Y and Z axes are drawn on the spacecraft body. One should take this kind of graphics that appear in the mainstream media with a
grain of salt, as they might have been drown by an artist and not be technically accurate.
Infografic about Tianwen-1. Source: spaceflightfans.cn
However, from analysing the quaternion data it turns out that this graphic is in fact correct. We note that the solar panels are placed along the Y axis, the high gain antenna points towards the -Z
axis, and the thruster points towards the -X axis. However, both the depiction of the spacecraft’s orbit and its attitude in the graphic above are not technically correct.
One of the most important constraints to decide the attitude of a spacecraft is to orient its solar panels orthogonally to the Sun, in order to receive the maximum power. In this case, the panels can
swivel around the Y axis, so by making sure that the Y axis is orthogonal to the Sun, the panels will then be able to swivel to the appropriate orientation. Many spacecraft work like this (see for
example this paper about GNSS satellites).
As the spacecraft leaves Earth on its transfer orbit to Mars, it is more or less on the same orbit as Earth but ahead of it. This means that, viewed from the spacecraft, the Sun and Earth will be
roughly at right angles. Thus, by pointing the X axis towards the Sun, we allow the high gain antenna on the -Z axis to point towards Earth. Therefore, pointing the X axis towards the Sun makes a lot
of sense for the first days of the mission.
In fact, we can compute the spacecraft to Earth vector in body frame coordinates. To do so, we use astropy to compute the Sun position vector in GCRS coordinates, which we call \(s\) and then write \
(-(s+r)/\|s+r\|\) as a pure quaternion \(E\) and compute \(q^{-1}Eq\).
We see that the Earth vector is indeed close to the -Z vector throughout all the cruise orbit so far, but not exactly, since the X coordinate is not zero. The high gain antenna can pivot some degrees
(we don’t know exactly how much), so it is not necessary to point the -Z axis exactly towards Earth. We also see that the Y coordinate is close to zero.
Below we show the angle of the Sun and Earth vectors with respect to the XZ plane. We see that the Sun angle is basically zero, because the Y axis is always oriented orthogonal to the Sun for optimal
illumination of the solar panels. I think that the Earth angle should also be zero. The small error we see here might have to do with how I have computed the relative position of the spacecraft and
Since we’ve seen that the Sun and Earth vectors lie in the XZ plane, it makes sense to study the Sun and Earth angles in this plane. Below we show the angle between the Sun vector and the X axis and
the angle between the Earth vector and the -Z axis. Recall that this last angle needs to be small so that the high gain antenna can aim towards Earth.
This is what happens. Above I commented that on a first approximation we might think that Tianwen-1 leaves the Earth with the Sun and Earth at right angles as viewed from the spacecraft. Therefore,
we start with the X axis pointing towards the Sun and the -Z axis pointing towards the Earth. However, as the spacecraft’s distance from the Sun increases, the angle between the Sun and Earth
If the X axis is always held towards the Sun, at some point the angle between the Earth and the -Z will be excessive to point the high gain antenna towards the Earth, so it is necessary to change the
angle between the Sun and the X axis. As shown above, this is done in discrete steps. The angles, 0º, 15º and 20º look pretty much dialled in by hand for simplicity. In this post we saw the first of
these attitude change manoeuvres: a turn of 15º about the Y axis to change the Sun angle with the X axis from 0º to 15º.
It makes sense to have the spacecraft’s attitude steering law be defined by the Sun vector. In this way, the spacecraft is turning constantly (very slowly) to track the Sun, which is easy to do
accurately with a Sun sensor, and the solar panels only need to swivel when the Sun to X axis angle is changed.
So we have seen that the attitude quaternion data gives us a complete description of Tianwen-1’s attitude law in its cruise orbit. The Y axis is orthogonal to the vectors joining the spacecraft to
the Sun and the Earth, and the angle between the Sun and the X axis is held to a constant simple value that is updated to prevent the angle between the Earth and the -Z axis from becoming excessively
large to aim the high gain antenna.
It is interesting to study the attitude during early orbit. Below we show the real time telemetry data for the UTC morning of 2020-07-23. Recall that the spacecraft was launched at 04:41 UTC. The
data shown here was collected by Paul Marsh, who was one of the first Amateurs to receive the spacecraft.
Around 06:30 we see that the spacecraft is coasting in most likely whatever attitude and angular momentum it had when it was released from the launcher. Around 06:40 the ADCS is engaged to the cruise
law with a Sun to X axis angle of 0º and the spacecraft rolls quickly to aim its solar panels (compare with this attitude change manoeuvre, where it took more than 5 minutes to roll 15º).
There is a small oscillation of a few degrees which is best seen in the figures below. This might have been caused by the full ADCS not being completely enabled yet or by the control loop gains not
being completely tuned for precision (as hinted by the rather fast roll when the ADCS is enabled).
The Jupyter notebook used to do the calculations and plots in this post can be found here, together with the relevant telemetry data. The space_packet_extract.py was used to extract the Space Packets
from the AOS telemetry frames and classify them by APID.
3 comments
1. That image is actually a photo from the mission profile screen inside the mission control center.
1. Or maybe just search this in case you don’t have weibo account
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://destevez.net/2020/08/tianwen-1-attitude-during-cruise-and-early-orbit/","timestamp":"2024-11-07T09:04:28Z","content_type":"text/html","content_length":"70132","record_id":"<urn:uuid:5098eff2-ef04-41e0-9e7e-fcbb38fe0d7c>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00813.warc.gz"} |
Index with Equal Left and Right Sums
Given a list of integer nums, return the earliest index i such that the sum of the numbers left of i is equal to the sum of numbers right of i. If there’s no solution, return -1.
Sum of an empty list is defined to be 0.
• 1 ≤ n ≤ 100,000 where n is the length of nums
Example 1
• nums = [2, 3, 4, 0, 5, 2, 2]
Sum of the numbers left of index 3 is 9 and sum of the numbers right of index 3 also 9.
Example 2
Sum of the numbers left of index 0 is 0 and sum of the numbers right of index 0 also 0. | {"url":"https://xuankentay.com/exercises/index-with-equal-left-and-right-sums/","timestamp":"2024-11-10T19:17:38Z","content_type":"text/html","content_length":"26978","record_id":"<urn:uuid:07a14c13-4ec3-4454-892c-e86632b1dadd>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00609.warc.gz"} |
Overall Profile Measurements of Tiny Parts with Complicated Features with the Cradle-Type Five-Axis System
State Key Laboratory of Precision Measuring Technology & Instruments, Laboratory of Micro/Nano Manufacturing Technology, Tianjin University, Tianjin 300072, China
Author to whom correspondence should be addressed.
Submission received: 24 May 2021 / Revised: 29 June 2021 / Accepted: 1 July 2021 / Published: 5 July 2021
There are generally complex features with large curvature or narrow space on surfaces of complicated tiny parts, which makes high-precision measurements of their three-dimensional (3D) overall
profiles a long-lasting industrial problem. This paper proposes a feasible measurement solution to this problem, by designing a cradle-type point-scanning five-axis measurement system. All the key
technology of this system is also studied from the system construction to the actual measurement process, and the measurement accuracy is improved through error calibration and compensation. Finally,
the feasibility is proved by engineering realization. The measurement capability of the system is verified by measuring workpieces such as cross cylinders and microtriangular pyramids.
1. Introduction
The entire sizes of tiny parts with complicated features are in the millimeter or centimeter level, and they generally have typical structural features such as large curvature surfaces, narrow areas,
complex structures, sharp edges, etc. [
]. Tiny parts such as crossed cylinders, diamond cutters, microtriangular pyramids, etc., have been widely used in many fields such as aerospace, biomedicine, telecommunication, intelligent
manufacturing, and optical communication [
]. The application performance of tiny parts is affected by the manufacture quality of their surface profiles, so it is of great significance to focus on their surface profile measurements [
]. One of the research hotspots in the measurement field of tiny parts is to measure their typical structures. The overall profile measurement is to measure the complete surface profiles of
workpieces without blind spots and obtain 3D profile information. To realize these measurements, measurement systems with multiple motion axes are required, which are also called MDOF
(multi-degree-of-freedom) measurement systems [
]. MDOF systems can flexibly adjust the relative postures and positions between probes and workpieces through multiaxis linkage motions to achieve scanning measurements on whole surfaces [
]. However, the measurement accuracy of systems is a composite indicator of the accuracy of sensors and multiaxis linkage mechanisms [
]. While MDOF measurement systems increase the flexibility of motions, they also introduce more errors and error coupling relationships [
]. These errors make high-precision measurements with MDOF measurement systems become a long-lasting problem in industry.
Currently, two scanning motion forms, namely, the rotary motions of probes or workpieces are mainly used for profile measurements from multiple relative positions. As the source of surface data of
measured workpieces, different probes cover a variety of different measurement principles and forms. Generally, Line-scanning and local-surface-scanning measurement forms are limited in dynamic range
among existing measurement methods, and need point cloud registration to achieve large-scale surface reconstruction. In contrast, measurement forms based on single-point probes have the largest
dynamic range and highest flexibility, and can achieve 3D measurements of any surfaces with complex features through point scanning [
]. Among them, non-contact point-scanning measurement principles are more efficient than traditional contact CMMs or similar instruments and will not scratch measured surfaces [
]. Firstly, as for MDOF measurement systems with rotary point-scanning probes, there have been several commercial instruments, among which Nanomefos by Dutch Optics Centre and Demcon [
] and Luphoscan by Taylor Hobson [
] are the most typical. However, the measurement accuracy of these instruments depends on high-precision feedback mechanisms such as laser interferometers [
], so these systems are relatively complex and expensive, and cannot be widely used in industrial applications. Besides, Luphoscan cannot measure most non-rotational asymmetric optical free-form
components due to its structure. To solve this problem, our research group have designed and built a five-axis point-scanning measurement system, which controls the probe to rotate in two dimensions
so that the probe can maintain postures along normal vectors of measured surfaces [
]. After studying the technology of system error modeling and compensation, high-precision measurements of large-curvature optical free-form surfaces are realized. However, the rotary radii of probes
are relatively large in MDOF measurement systems, which will result in the waste of space and travels of motion mechanisms. The systems mentioned above can measure part of the area of whole surfaces
on optical components within a certain angle range, but still cannot realize the entire surface measurements, let alone the entire surfaces of complicated tiny parts. Secondly, as for rotation
scanning by workpieces, most measurement systems only use the single rotary stage to adjust workpieces, and few use the form of dual-axis rotation, such as S Neox 3D optical profiler by Sensofar [
] and StentCheck 3D CMM by Werth [
]. The S Neox 3D optical profiler uses an AC dual-axis rotary stage (cradle-type structure [
]) similar with some five-axis machining centers to adjust the posture of workpieces. This instrument has the capability to measure entire surfaces of milling cutters. Since its probes are based on
the local-surface-scanning measurement principle, point cloud registration is needed, which destroys the continuity of measurements and reduces the measurement accuracy. The StentCheck 3D CMM uses a
tilting table and rotation stage to rotate measured workpieces in two dimensions, and its efficiency is much higher than that of the S Neox 3D optical profiler. However, the measurement range of this
system is limited by the angle range of the tilting table, and it cannot realize the whole surface scanning of tiny parts with more complicated structures. However, the cradle-type structure for
workpiece rotation has certain advantages. Compared with rotating probes, only rotating measured tiny parts can achieve a wide range of angular movements in space, and the radii of rotations are
relatively small. The cradle-type structure makes the posture adjustment more accurate and efficient, and saves space and the movement of motion mechanisms [
]. It can be seen that the method of rotating workpieces on a cradle-type five-axis system is a relatively promising measurement solution, which is worthy of further study.
The core problem that restricts the development of high-precision point-scanning measurement systems for a long time is the measurement accuracy, which highly depends on the accuracy of system
hardware. The introduction of multiaxis motion mechanisms has increased the scanning motion errors of MDOF measurement systems [
]. Therefore, based on certain hardware conditions, there is an urgent need for research on high-precision system error calibration and compensation to improve the measurement accuracy of these
systems. Most research on the error modeling and compensation of MDOF systems focuses on CNC machine tools currently [
]. Based on the multibody kinematics theory, Zhang Y. et al. deduced a set of transformation formulas for cradle-type five-axis CNC machine tools, but their study only covers theoretical models
without actual experiments or error compensation research [
]. Schwenke H. et al. used a laser interferometer for the error calibration of CNC machine tools and analyzed each rotary axis in 6 degrees of freedom [
]. Chen J et al. put forward a method to calibrate errors of rotary stages on CNC machine tools by using a double ball bar system as the calibration part [
]. In summary, on the one hand, most methods for the error modeling of MDOF systems need high-precision instruments for calibration, which limits the flexibility and practicality of methods [
]; on the other hand, these methods are mostly aimed at CNC machine tools rather than measurement systems [
]. Therefore, to solve the common problem of large measurement errors faced by MDOF measurement systems, suitable mathematical analysis, error modeling, error calibration, and compensation methods
are required.
In this paper, focusing on the problem of the overall profile measurements of tiny parts with complicated features, we propose a feasible solution with the cradle-style five-axis point-scanning
measurement system, and achieve the engineering realization. The key technology from system construction, path planning, system error modeling, and compensation to actual measurement process was
studied. The five-axis cradle-type system built in this paper controls the probe to rotate in space to realize scanning measurements on whole surfaces of measured workpieces without blind spots. This
system realizes the real-time tracking of measured points through the coordinate recursive algorithm, so point cloud registration is not needed, which ensures the continuity of scanning measurement
and improves measurement efficiency. Focusing on the improvement of the accuracy of this point-scanning measurement system, procedures, and related algorithms of system error identification,
calibration, and compensation are proposed. The accuracy is ensured by error correction from the source, so it does not rely too much on high-precision feedback mechanisms. The advantage of error
correction method we proposed is that it improves the accuracy of the measurement system conveniently and effectively without the need for additional high-precision instruments. Finally, the
measurement capability and accuracy of our system are verified by measuring cross cylinders, standard spheres, and other workpieces.
2. Measurement Scheme
The general idea of this paper is shown in
Figure 1
. A cradle-type five-axis point-scanning measurement system structure was designed to measure the overall 3D profiles of tiny parts. By analyzing the relationship between axes in this system, a model
of measurement coordinate system was constructed based on the kinematic theory, meanwhile coordinates of real-time measuring points were calculated. For specific measured workpieces, it is necessary
to design measurement paths based on their surfaces. Nominal models of measured workpieces were analyzed, and their overall surfaces were segmented into several areas. Scanning paths were designed
according to specific features. Compared with traditional three-axis or four-axis measurement systems, complicated correlation among axes in five-axis systems introduced more error terms. Error terms
of this five-axis system were identified, classified according to their influence mechanisms, and their impacts on measurement results were simulated. To improve efficiency, only major error terms
(which account for most of the proportion to measurement accuracy) were concerned. A calibration and compensation process for major error terms was proposed and verified via simulation to test theory
feasibility. An experiment setup was built and calibrated based on theorical research. Four kinds of tiny parts were measured and reconstructed with point clouds. The registration between point
clouds and nominal models was applied to evaluate measurement accuracy.
3. Coordinate System Construction and Path Planning
The cradle-type five-axis measurement system mainly consists of electric motion stages and a probe. This system has three linear axes of X, Y, and Z, two rotary axes of A and C. To reduce positioning
errors caused by the movement of neighboring axes, this self-built system separates the Z axis from the X axis and Y axis. The cradle-type due-axis setup consists of A-axis and C-axis rotary stages,
which are orthogonal. Measured workpieces are installed on the C-axis rotary stage by a three-jaw chuck. The spatial position and orientation of workpieces are adjusted by four motion axes, while the
position of the confocal probe is only adjusted by the Z-axis linear stage. Actually, the basic idea of this paper can be straightforwardly extended to any configurations of five-axis system. The
point-scanning probe used is a chromatic confocal probe, which has significant advantages on strong anti-interference and can realize high-precision measurement of workpieces.
Figure 2
shows the process of coordinates recursion. The coordinate system, following the measurement system model, is a right-hand coordinate system. The directions of axes are the same as the nominal
directions of linear stages or rotary stages. When analyzing the motion trajectory of coordinate points with the idea of relative motion, the measured workpieces can be considered as fixed, while the
probe performs all the motions. Thus, the situation that workpieces rotate around the A axis and C axis is regarded as the probe rotating around the A axis and C axis in opposite directions. The
spatial coordinates of point cloud were calculated with measurement data of different measurement areas and paths, and they were all summarized in this measurement coordinate system to restore the
overall 3D profile.
A standard cylinder of a known diameter was used to show the principle of the coordinate calculation. Since chromatic confocal probes can only measure relative distances, that means, measurement data
is the distance along the optical axis between the measuring point and a reference plane. The reference point is the intersection point between the optical axis and the reference plane, and the
working distance is the distance between the exit pupil of the probe and its reference point. Suppose the working distance of the probe is d, the radius of the standard cylinder is r, and the initial
measurement result is h[0].
The process of calculating coordinates of the first measuring point in the measurement coordinate system is shown in
Figure 2
a. The initial coordinates
of the exit pupil are (0, 0,
), and the initial coordinates
of the reference point are (0, 0,
). The measurement data transmitted from the measurement system to the computer at a time includes six terms (
, and
), which represent the real-time position information of the X, Y, Z, A, and C axes, and the measurement data of the chromatic confocal probe.
Due to the motions of three linear stages, the position of the exit pupil of the probe will change from (
d + r+ h[0]
) to (
z[1] + d + r + h[0]
), and the reference point will move to (
z[1] + r + h[0]
). In three-dimensional space, each rotary axis can be positioned by two points on it, which is named
) and
. According to the rotation transformation principle of rigid body in three dimensions, the vector of the axis
and rotary matrix
can be written as follows:
$p = [ u v w ] = c 2 − c 1 n o r m ( c 2 − c 1 )$
$R = [ u 2 + ( v 2 + w 2 ) cos ( γ ) u v ( 1 − cos ( γ ) ) − w sin ( γ ) u w ( 1 − cos ( γ ) ) + v sin ( γ ) ( a ( v 2 + w 2 ) − u ( b v + c w ) ) ( 1 − cos ( γ ) ) + ( b w − c v ) sin ( γ ) u v ( 1
− cos ( γ ) ) + w sin ( γ ) v 2 + ( u 2 + w 2 ) cos ( γ ) v w ( 1 − cos ( γ ) ) − u sin ( γ ) ( b ( u 2 + w 2 ) − v ( a u + c w ) ) ( 1 − cos ( γ ) ) + ( c u − a w ) sin ( γ ) u w ( 1 − cos ( γ ) ) −
v sin ( γ ) v w ( 1 − cos ( γ ) ) + u sin ( γ ) w 2 + ( u 2 + v 2 ) cos ( γ ) ( c ( u 2 + v 2 ) − w ( a u + b v ) ) ( 1 − cos ( γ ) ) + ( a v − b u ) sin ( γ ) 0 0 0 1 ]$
After rotations, the coordinates of the exit pupil of the probe
and reference point
are expressed as Equations (3) and (4).
$P e p _ 1 = R A _ 1 × R C _ 1 × [ x 1 y 1 z 1 + d + r + h 0 1 ]$
$P r e f _ 1 = R A _ 1 × R C _ 1 × [ x 1 y 1 z 1 + r + h 0 1 ]$
The optical axis vector
can indicate the spatial orientation of the probe, and can be calculated according to the current positions of the exit pupil of the probe and reference point:
$m = ( P e p _ 1 − P r e f _ 1 ) / n o r m ( P e p _ 1 − P r e f _ 1 )$
Combined with the measurement result
, the spatial coordinates of the first measuring point in the global measurement coordinate system
can be calculated as Equation (6).
$P p o i n t _ 1 = P r e f _ 1 + h 1 × m$
As shown in
Figure 2
b, every time when measuring the next point, the data of five motion axes recorded at the latest point were used as a reference to calculate the increment motions of each axis. Based on the data (
, and
) at the
th measuring point and the data (
, and
) obtained at the
n +
1th measuring point, the process of calculating the coordinates
of the
+ 1th point was as follows:
Calculate the increment motions
, and
of the X, Y, Z, A, and C axes, respectively. Due to the cradle-type structure, rotations around the C axis change the directions of linear motions by X, Y, and Z stages, while rotations around the A
axis change the directions of X, Y, and Z stages and the vector of the C axis. Each motion is recorded, and these vectors were calculated. The translation matrix
is expressed as Equation (7), where
, and
are the vectors of X, Y, and Z stages, and
is a 3 × 3 matrix of zeros.
$T i = [ a x i s x _ i a x i s y _ i a x i s z _ i O 0 0 0 1 ]$
Equations (8) and (9) were used to express the coordinates of the exit pupil
and reference point
during the
+ 1th measurement.
$P e p _ n + 1 = R A _ n + 1 × R C _ n + 1 × ( P e p _ n + T i × [ x a d d y a d d z a d d 1 ] )$
$P r e f _ n + 1 = R A _ n + 1 × R C _ n + 1 × ( P r e f _ n + T i × [ x a d d y a d d z a d d 1 ] )$
represent the coordinates of the exit pupil and reference point during the
th measurement.
Similarly, the optical axis vector was calculated according to the coordinates of the pupil and reference point, and the coordinates of the n + 1th measuring point were calculated in combination with
the measurement result h[n+][1]. In summary, based on the iterative theory, the spatial coordinates of all measured points can be derived. All points were on the scanning paths and coordinates were
calculated in the same measurement coordinate system recursively. Point cloud registration was not needed because the relative position of these points was consistent with the actual situation.
Since the chromatic confocal probe is a point measurement probe, it is necessary to design scanning paths according to certain measured surfaces. Limited by the angular characteristics and range of
the probe, the contour lines with a larger curvature are usually regarded as boundaries of different measurement areas, and scanning paths are designed based on them. The surface near contour lines
is scanned by special paths according to its normal vectors. After area-by-area measurements, the coordinates of the point cloud can be summarized.
There are usually three common types of profile features of tiny parts: cylindrical surface, flat surface, and complicated structure surface. As shown in
Figure 3
, three measurement paths were proposed for these three types of surfaces. Rotary scanning paths are suitable for measuring the surfaces whose overall contours are in the form of rotation. The Y-axis
motions are periodic steps, and the C-axis rotary stage controls workpieces to rotate 360° to achieve a fixed interval point measurement on every circular path. Raster paths are suitable for
relatively flat surfaces, whose height change in a small range. Raster paths are conventional for numerical-control machine applications. Free scanning paths are suitable for measuring more
complicated surfaces, such as area near contour lines, and depending on the normal vectors of specific measured features, the relative spatial position of the probe and workpieces need to be adjusted
appropriately to meet the angle characteristics of the probe. When measuring each workpiece, the paths of different areas are generated separately. However, the measurement path is continuous, so the
measurement is also a continuous process.
4. Error Correction Theory of the Cradle-Type Measurement System
4.1. Error Identification
Compared with traditional three-axis or four-axis systems, with the increased number of synchronous motion axes, there are more error terms and error coupling relations in five-axis systems. These
error terms cause great harm to systems’ accuracy, especially for point-scanning systems and CNC machine tools [
]. Actually, the most critical problem that point-scanning measurement systems need to solve is the calibration and error compensation of motion mechanisms.
As shown in
Table 1
, to facilitate error calibration and compensation, the error terms in five-axis systems can be divided into two types: system errors and clamping errors of workpieces. System errors can be divided
into four types, namely the static errors of the linear stage (
, and
), static errors of the rotary stage (
, Δ
, Δ
, and Δ
), dynamic errors of the linear stage (Δ
), and dynamic errors of the rotary stage (
). The first two belong to static system errors, which stem from the inaccurate clamping and is fixed when the system is built, and the latter two belong to dynamic system errors, which stem from
motions of the linear or rotary axes and are random. The clamping errors of measured workpieces are divided into tilt errors (
) and centrifugal errors (Δ
and Δ
), and both are static errors. The schematic diagram of clamping errors is shown in
Figure 4
a, with a cylinder as the workpiece. Cylinders have central axes, which can indicate the degrees in which workpieces are usually tilted. The axis of the workpiece may not be perpendicular to the
rotary stage surface, so there is a certain angle between this axis and the axis of the stage. Tilt errors are the angles between the projection of the axis of the measured workpiece and coordinate
axes in the plane, which is perpendicular to the C axis. Although the axis of the workpiece is parallel to the C axis, they are not coincident. The distance between two parallel axes represents the
degree of deviation of two axes, and the two orthogonal components of this distance in the plane perpendicular to the C axis are called centrifugal errors.
Based on the kinematic theory [
], the measurement system can be divided into the workpiece chain and probe chain. The proportion of influence on results from different error terms was calculated as follows: The measurement results
can be written as a multivariate function formula with all error terms. All values of error terms were set based on the real situation, that is, angular error values of rotary stages were in the
range of 0.01–0.02°, and linear error values of linear stages were in the range of 0–1 μm. The default tilt error values of workpieces were 0–5°, and the centrifugal error values were 0–0.5 mm.
Ratios are shown in
Table 1
. Among all errors, static errors of rotary stages and clamping errors had a larger proportion than others. In a cradle-type five-axis system, the static errors of rotary stages and clamping errors
accounted for about 97.12% of the influence on the results, so they were defined as the major error terms in this paper. Besides, the simulation results of measuring a cross-sectional profile of a
standard cylinder with clamping errors are shown in
Figure 4
b, and even small clamping errors will cause a large measurement deviation.
4.2. Error Calibration and Compensation
Section 4.1
, major error terms were identified. To improve calibration efficiency and avoid more errors caused by the calibration process when calibrating multiple error terms, only major error terms (the
static errors of rotary stages and clamping errors of workpieces) were selected for calibration and compensation. A standard cylinder was used as the calibration part in calibration. The standard
cylinder was rotationally symmetric. Compared with the standard sphere, it can provide a certain rotary axis as a reference, so it is more suitable for determining the direction vector in space and
the rotary axis of a rotary stage. The process of correcting the major error terms is as follows: (a) calibrate and compensate the static errors of the A-axis rotary stage first; (b) calibrate and
compensate the static errors of the C-axis rotary stage; (c) calibrate and compensate the clamping errors of the measured workpiece.
4.2.1. Calibration and Compensation of Static Errors of Rotary Stages
The static errors of two rotary stages were calibrated and compensated at first, and the method was the same. As shown in
Figure 5
, when calibrating the static errors of the rotary stage, a standard cylinder was clamped on it and rotated 0°, 90°, 180°, and 270° respectively, and a piece of area was scanned at these four angles.
Since four point clouds were the minimum amount to form symmetry in two directions, the stable angle between two point clouds was 90°.
The rotary axes of the cylinder at these four positions could be calculated by cylindrical fitting with the measurement results, named L[1], L[2], L[3], and L[4], and the other three axes were
generated by the rotation of the first axis L[1]. Four axes were symmetrically distributed around the rotary axis of the rotary stage, so that the real vector of it could be calculated by
optimization. The coordinates of the two points on the rotary axis were the optimization objects. The first axis L[1] will be rotated 90°, 180°, and 270° around the optimized rotary axis and the form
L[2′], L[3′], L[4′], L[2′], L[3′], and L[4′] did not coincide with L[2], L[3], and L[4], and the optimization was finished when the total distance between them reached the minimum.
By taking the optimized vector as the rotary axis, the first point cloud at 0° was rotated 0°, 90°, 180°, and 270° around it and four virtual point clouds were generated. As shown in
Figure 6
, both the measured and virtual point clouds were compared in the same coordinate system, which were drawn in blue and red, respectively.
The coordinate system O
of the rotary stage did not coincide with the measurement coordinate system O-XYZ. Coordinates of the direction vectors
and the origin
in O-XYZ can be expressed according to the static errors of the rotary stage. By using the coordinate space transformation matrices, the coordinates of the measured point
can be converted to the coordinates
in the global measurement coordinate system:
$P = [ P x P y P z P o 0 0 0 1 ] × P ′ = [ x x ′ x y ′ x z ′ x o ′ y x ′ y y ′ y z ′ y o ′ z x ′ z z ′ z z ′ z o ′ 0 0 0 1 ] × P ′$
y, and
z represent the direction vectors of O’X’, O’Y’, and O’Z’ in the global measurement coordinate system.
The above algorithm was used to compensate for static errors of the rotary stages. Calculate the distances
between corresponding points in the virtual point clouds and the optimized point clouds along the Z axis, which can explain the compensation effect and the reliability of the optimized rotary axis:
$d i s _ z i = z v i r t u a l _ i − z o p t i m i z e d _ i$
are the Z coordinates of the
th point in the virtual and the optimized point clouds.
The distance distributions after optimization are shown in
Figure 7
, and all distances were lower than 10 nm. The errors of the A-axis and C-axis rotary stage were compensated individually.
4.2.2. Calibration and Compensation of Clamping Errors
After compensation of the static errors of two rotary stages, clamping errors were calibrated and compensated. To ensure the completeness and continuity of the whole error calibration process, the
standard cylinder was also used as the calibration part for clamping errors. In simulation, the point clouds used for the calibration were also gained from the four angle 0°, 90°, 180°, and 270° on
the C-axis rotary stage, but the optimization was different from the optimization above. Two points
, and
) and
, and
) on the rotary axis were used to determine the position and the orientation of the axis in the optimization, and the cylinder axis can be expressed as Equations (12) and (13):
$x − x 1 x 2 − x 1 = y − y 1 y 2 − y 1 = z − z 1 z 2 − z 1$
$( y 2 + z 2 − y 1 − z 1 ) x − ( x 2 − x 1 ) y − ( x 2 − x 1 ) z − x 1 y 2 − x 2 y 1 − x 1 z 2 − x 2 z 1 = 0$
The coordinates of these two points are the optimized objects. Four point clouds are in the same coordinate system and the cylinder is rotated around the C axis. During the optimization, the cylinder
axis was rotated and calculated. The distance
between the measured point
) and the cylinder axis can be calculated as Equation (14):
$D i s i = | ( y 2 + z 2 − y 1 − z 1 ) x i − ( x 2 − x 1 ) y i − ( x 2 − x 1 ) z i − x 1 y 2 − x 2 y 1 − x 1 z 2 − x 2 z 1 | ( y 2 + z 2 − y 1 − z 1 ) 2 + ( x 2 − x 1 ) 2 + ( x 2 − x 1 ) 2$
The optimization aims to let the point clouds coincide with the cylindrical surface. The optimized objective function is as follows:
$O b j = m i n ( ∑ i = 1 n | D i s i − R | )$
Based on the global optimal least square method, after optimization, the position and orientation of the cylinder axis was obtained. With the optimized cylinder axis, the error distribution was
calculated based on the nominal point clouds and the measuring point clouds, as shown in
Figure 8
. All errors were below 1 nm, which proved that the cylinder axis was accurately positioned.
The error compensation of the cradle-type five-axis measurement system mainly contains the compensation of clamping errors and errors of rotary stages based on measurement results on a standard
cylinder. It can be seen from
Figure 9
that influence of major error terms will be eliminated significantly via error compensation, and contours of the cylinder can be corrected after such a process. Before compensation, with the
influence of static errors of rotary stages and clamping errors, the measured contours were an oblique circular cylinder and frustum of a cone, respectively. The contours changed back to nominal
5. Experiments and Results
5.1. System Construction and Error Calibration
The cradle-type five-axis measurement system is shown in
Figure 10
, and the entire experimental setup was placed on a marble air-floating base. The parameters of the hardware are shown in
Table 2
. The confocal probe used was produced by ThinkFocus.
Through the simulation on error calibration and compensation, the feasibility of their theories was verified. According to the simulation research, the major error terms of this self-built
cradle-type five-axis measurement system with a chromatic confocal probe were calibrated, and the calibration results are shown in
Table 3
5.2. Measurement Results
After error compensation, the standard cylinder was measured to verify the repeatability of this system. The measurement point cloud and the ideal STL model of the measured workpieces were used for
registration. After that, the vertical distance
between the measured
th point and the corresponding tiny triangle surface was calculated. Suppose the number of total points is
, then the standard deviation σ is obtained by the calculation process of Equation (16). This standard deviation can reflect the dispersion degree of the deviation between the actual and ideal
coordinates, indicating the concentration of the error distribution in a single measurement. The standard deviation in the cylinder measurement dropped from 101.27 to 12.42 μm, as shown in
Figure 11
a, which confirms the compensation effect. The same area of the standard cylinder was measured four times, and the performance is presented in
Figure 11
b. The measurement results were expanded along the angle, and the distance from each point to the cylinder axis was calculated. It can be seen from the figure that the error distribution of each
measurement was basically the same, which proved that the repeatability was good.
Two kinds of typical tiny parts were selected to evaluate the overall profile measurement capability of this system, which were cross cylinders and microtriangular pyramids. These two workpieces were
both manufactured by CK 6140. The orientation accuracy of this CNC machine tool was about 0.01 mm, and the reorientation accuracy was about 0.005 mm. Cross cylinders had features of rotational
symmetry, and their cylindrical axes could be used to calibrate the clamping errors. As shown in
Figure 12
a, the cross area had a large curvature and small size, which is representative and challenging to measure. The level of detail in that area mainly depends on the sampling interval. The final point
cloud was compared with the nominal 3D model. The measurement accuracy was evaluated by comparing the point cloud and nominal model. The standard deviation was 64.16 μm after compensation, which
confirms the compensation effect. Besides, microtriangular pyramids have apparent contours between three sides and edges are difficult to measure. As shown in
Figure 12
b, a microtriangular pyramid whose three sides were all squares was measured. The cross-section of its handle was a regular hexagon. The evaluation method of measurement accuracy was the same as the
process above, and the standard deviation was about 86.25 μm.
Besides, a standard sphere was measured by a CMM (the global advantage CMM by hexagon) and our measurement system to compare measurement results, which can be seen in
Figure 13
Figure 13
b is the evaluation of the point cloud measured by the CMM. The measured area was 50% of the entire sphere surface. The standard deviation was about 26.51 μm.
Figure 13
d is the measurement results of 25% of the entire surface on that sphere by our five-axis system, and the standard deviation was 9.60 μm. The error distributions were in the shape of ring bands. A
total of 70% of the entire surface was also measured by this system, as shown in
Figure 13
e, and the standard deviation rose to 29.07 μm. Considering the accuracy of electric motion stages used, this measurement accuracy was considered reasonable. The normal radius of this standard sphere
was 12.703 mm, and the results of spherical fitting with data by the CMM and our system were 12.693 mm and 12.682 mm.
6. Conclusions
Focusing on the difficulty of the overall profile measurements of tiny parts with complicated features, a solution using the cradle-type five-axis measurement system was proposed in this paper. We
achieved the engineering implementation based on our theoretical research. It proved that this cradle-type five-axis measurement system had good value for engineering applications. Our contributions
can be summarized as follows:
An optical, cradle-type, non-registration point-scanning measurement method was proposed, which does not need the point cloud registration process and adapts to multiple complicated features of
different sizes. This measurement system has strong flexibility.
A process to identify major error terms in measurement systems and apply calibration and compensation on them was proposed. The advantages of this process are that it does not rely on any
additional high-precision equipment and promotes the system’s accuracy conveniently and efficiently. This method can also be applied to correct error terms in other measurement systems with
rotary axes.
A five-axis experiment setup was built and tiny parts were measured in experiments. The measurement accuracy and capability of overall profile measurements were verified by measuring standard
workpieces and complicated tiny parts separately. It was proved that in terms of overall profile measurement, this cradle-type five-axis measurement system had more advantages than some
commercial instruments. It is worth noting that, measurement accuracy can be further improved if hardware with higher accuracy is used in the future. With the help of a premeasurement by an
external measurement device, a fully automatic measurement is the next research goal.
Author Contributions
Data curation, L.L.; Formal analysis, L.L.; Funding acquisition, X.Z.; Methodology, L.L., L.Z., L.M., C.F. and X.Z.; Project administration, L.Z.; Resources, C.L.; Software, L.L.; Supervision, L.Z.;
Writing—original draft, L.L.; Writing—review and editing, L.L., L.Z. and X.Z. All authors have read and agreed to the published version of the manuscript.
This research was funded by the Science Challenge Program (Grant No. TZ2018006-0203-01); National Key Research and Development Program of China (Grant No. 2017YFA0701200); Tianjin Natural Science
Foundation of China (Grant No. 19JCZDJC39100).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data are not publicly available because the data also forms part of an ongoing study.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 2. Schematic diagrams for coordinates recursion: (a) calculate the first point coordinates; (b) calculate the n + 1th point coordinates.
Figure 3. Schematic diagrams for different measurement paths: (a) rotary scanning paths; (b) raster paths; (c) free scanning paths.
Figure 4. Error identification and simulation: (a) a schematic diagram for clamping errors of workpieces; (b) simulation results on clamping errors.
Figure 6. The point cloud distribution of the rotary axis calibration in simulation: (a) θ = 0°; (b) θ = 90°; (c) θ = 180°; (d) θ = 270°.
Figure 7. The distance distribution in the XOY plane after rotary-axis compensation in simulation: (a) θ = 0°; (b) θ = 90°; (c) θ = 180°; (d) θ = 270°.
Figure 8. The error distribution in XOY plane after clamping-error compensation in simulation: (a) θ = 0°; (b) θ = 90°; (c) θ = 180°; (d) θ = 270°.
Figure 12. Application results on two tiny parts: (a) a cross cylinder; (b) a microtriangular pyramid.
Figure 13. Comparation of results: (a) the measure process on the CMM; (b) measurement results (50% spherical surface) by the CMM; (c) the measure process on our five-axis system; (d) results of the
same measured area (25% spherical surface) by the five-axis system; (e) measurement results of 70% of the spherical surface by the five-axis system.
Error Types Specific Error Terms Symbols Ratios
Static errors of linear stages δx, δy, δz 2.430%
System errors Dynamic errors of linear stages Δd 0.367%
Static errors of rotary stages δθ[1], δθ[2], Δx, Δy, Δz 73.013%
Dynamic errors of rotary stages δβ 0.083%
Clamping errors of workpieces Tilt errors δβ[w][1], δβ[w][2] 17.247%
Centrifugal errors Δx[w], Δz[w] 6.860%
Hardware Travel/Range Accuracy Others
X/Y/Z axes 200 mm 1 μm \
A/C axes 360° 0.004° Surface radius of C-axis stage: 30 mm
Probe 400 μm 0.1 μm NA: ±28°
Standard cylinder r: 10 mm Cylindricity: 14 μm \
Error Source Error Terms Value (° or μm)
A axis δθ[1], δθ[2] 0.0741, 0.3413
Δx, Δy, Δz (0.8424, −0.0012, 0.0177)
C axis δθ[1], δθ[2] 0.5035, −0.8466
Δx, Δy, Δz (0.2906, 0.5074, 0.6875)
Workholding device δβ[w][1], δβ[w][2] −5.6936, 2.2561
Δx[w], Δz[w] (0.0651, −0.2498)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Liu, L.; Zhu, L.; Miao, L.; Li, C.; Fang, C.; Zhang, X. Overall Profile Measurements of Tiny Parts with Complicated Features with the Cradle-Type Five-Axis System. Sensors 2021, 21, 4609. https://
AMA Style
Liu L, Zhu L, Miao L, Li C, Fang C, Zhang X. Overall Profile Measurements of Tiny Parts with Complicated Features with the Cradle-Type Five-Axis System. Sensors. 2021; 21(13):4609. https://doi.org/
Chicago/Turabian Style
Liu, Lei, Linlin Zhu, Li Miao, Chen Li, Changshuai Fang, and Xiaodong Zhang. 2021. "Overall Profile Measurements of Tiny Parts with Complicated Features with the Cradle-Type Five-Axis System" Sensors
21, no. 13: 4609. https://doi.org/10.3390/s21134609
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/1424-8220/21/13/4609","timestamp":"2024-11-05T14:04:15Z","content_type":"text/html","content_length":"484921","record_id":"<urn:uuid:9ed4517d-b955-459e-9430-dc86067af814>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00136.warc.gz"} |
Circle Squaring
Series: The Numbers Series
Environment A things or system incorporates more than its internal components. Also factored in are timing, placement, other sufficiently complex systems, and possibly non-local information. If
you multiply any natural number by 9 and repeatedly add the digits of the answer until it is just one digit, you will end up with 9. 9 is…
Balance Evenly-even, justice Things or systems can breakdown or create something more complex 8 divides evenly back to the source or 1 2x2x2=8
Rest/Restoration Ebb and Flow – a required part of any process. 7 is the fourth prime number 1×7=7
Community/Groups are formed Efficient Arrangement – Systems seek the simplest way to assemble its components. 6 is the smallest perfect number 1x2x3=6 1+2+3=6
Regeneration To become more or to make more, (creativity), is an inherent property in all things. 5 is the third prime number 1×5=5
Building blocks form structures Complexity increases until a thing or a system can support itself 4 is the smallest squared prime (p²) and the only even number in this form. 2²=4
Relationships form initial building blocks. Similarities and differences give rise to attractions, repulsions, and various intermediaries. 3 is the second prime number 1×3=3
Distinctions are made Once a “thing” is defined, immediately “not-that-thing” is defined. 2 is the only even prime number. 1×2=2
Unity One becomes Many Creation begins 1 is the first square number 1×1=1
• Numbers to us in the “modern” era represent quantities. They show us amounts. It wasn’t always like this. People have always known that numbers keep track of quantities, but that’s a small aspect
of their use. They had a deeper Truth. By seeking to understand this Truth, numbers reveal to us something about the nature of… | {"url":"https://circlesquaring.net/series/numbers/","timestamp":"2024-11-02T04:23:21Z","content_type":"text/html","content_length":"89811","record_id":"<urn:uuid:77d10767-dfd1-49ff-8367-207f14a12a78>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00299.warc.gz"} |
Math Notes
Maximum power without calculus
A Quora question asks: My physics textbook says that in a simple series circuit with one resistor R and a battery with internal resistance r the maximum power dissipated in R is when R equals the
internal resistance r, why is this the case?
Mahesh Prakash is correct that this is a simple exercise in calculus (and in fact it may have been given as a homework problem in a course for students who have learned some calculus). But it may be
useful for some to see if we can prove it without using calculus.
Given that the power dissipated in a resistor is given by current times voltage (analogous to the mechanical situation where power is speed times force), and that in a series circuit the current is
equal to the total EMF divided by the total resistance, we find that our goal is to maximize P=IV=I^2R=(\frac{E}{R+r})^{2}R, where E (being the voltage across the battery when no current is flowing)
and r (being the internal resistance of the battery) are fixed properties of just the battery, and our only control variable is R.
Now increasing a number decreases its reciprocal (and vice versa) and multiplying by a positive constant doesn’t change the locations of its max and min. So the max of P occurs at the same R value as
the min of \frac{(R+r)^2}{R}=\frac{R^2+2Rr+r^2}{R}=R+2r+\frac{r^2}{R}=r(\frac{R}{r}+2+\frac{r}{R}).
This occurs for the same R as the min of \frac{R}{r}+\frac{r}{R}= q+\frac{1}{q} with q=\frac{R}{r}, and our goal is to show that this happens when q=1.
Clearly, for positive x, y=x+\frac{1}{x} gets large when x is either very large or very small, but how can we see that it bottoms out at x=1?
Well, one way is to check that it is always going down to the right for x<1 and up for x>1.
(That’s easy using calculus but our goal here is to show it for someone who has not yet studied calculus.)
If we consider the effect of increasing x a bit, to say x_{+}, then the change of y is (x_{+}+\frac{1}{x_{+}})-(x+\frac{1}{x})=(x_{+}-x)+(\frac{1}{x_{+}}-\frac{1}{x}).
When the first of these is positive the other is negative and it’s a matter of checking which one wins.
But |(\frac{1}{x_{+}}-\frac{1}{x})|=\frac{x_{+}-x}{x_{+}x}, and when x and x_{+} are both bigger than 1 it is less than x_{+}-x, so the increase of x beats the decease of 1/x. And when x and x_{+}
are both less than 1 it goes the other way and the decease of 1/x beats the increase of x.
Source: (1000) Alan Cooper’s answer to My physics textbook says that in a simple series circuit with one resistor R and a battery with internal resistance r the maximum power dissipated in R is when
R equals the internal resistance r, why is this the case? – Quora
Why SD uses squares (rather than abs val)
It’s not just for computational convenience.
rms SD minimizes expected distance from mean while avg of abs val does it for the median
Source: (1000) Nikolas Scholz’s answer to Why are standard deviations calculated the way they are? I understand the method (subtract the mean from each value, then get the square-root of the mean of
the squared differences), but what fundamental principle does this method derive from? – Quora | {"url":"https://qpr.ca/blogs/mathstuff/2024/05/","timestamp":"2024-11-15T04:48:23Z","content_type":"text/html","content_length":"36715","record_id":"<urn:uuid:2efdb697-178a-4771-bc5d-a2d1d9d631fb>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00558.warc.gz"} |
Clustering in Machine Learning
Clustering is an unsupervised learning technique used in machine learning to uncover potential relationships and groupings, commonly known as "clusters".
What is a cluster? It's a collection or group of objects that share some, but not all, characteristics. They're similar but not identical. The concept of a cluster is consistent across fields like
machine learning, statistics, and marketing.
In machine learning, clustering is also referred to as unsupervised classification.
A Practical Example
In this dataset, there are three features: x[1], x[2], x[3].
The machine lacks label information and any learning function. There's no supervision.
Despite this, the table reveals a significant relationship between x[1], x[2], and x[3].
To illustrate this, let's temporarily ignore x[3] and plot x[1] and x[2] on a Cartesian graph.
Even in this two-dimensional graph, a pattern and regularity in the data begin to emerge.
Next, I assign different colors (blue, red) to the coordinates (x[1], x[2]) to represent the third feature, x[3], or the third dimension.
Blue for x[3]=1 and red for x[3]=2.
Now, the clustering is immediately apparent even to the naked eye.
In clusters A and B, similar data points are grouped together.
This way, the machine learns significant information from the data, without any guidance from a supervisor.
Note. This is a simple two-dimensional example, but it illustrates the concept. In reality, clustering is particularly useful when applied to multidimensional databases, where the human eye can't
discern patterns.
In machine learning, clustering algorithms are used to identify relationships between data through a mathematical-statistical learning process. | {"url":"https://www.andreaminini.net/computer-science/artificial-intelligence/clustering-in-machine-learning","timestamp":"2024-11-01T22:06:07Z","content_type":"text/html","content_length":"13856","record_id":"<urn:uuid:6862fa64-222d-499a-b93c-fa0f1bf359c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00481.warc.gz"} |
Math Story : Proper, Improper And Mixed Fractions - Fun2Do LabsMath Story : Proper, Improper And Mixed Fractions
Math Story : Proper, Improper And Mixed Fractions
The Fraction Train To The Rescue
“What? The villages are on fire?” asks Uncle Math. “Yes! The fire has caused severe damage. The sky is filled with dark smoke from the fire. People’s lives are at risk. I need your help to save
thousands of people”, says Uncle Science on the phone. Uncle Math is worried. How can he help his friend?
Oh no! Several villages have caught fire near Uncle Science’s town. He is unable to rescue people and is asking for help from Uncle Math.
Seeing Uncle Math so sad, the children decide to help in whatever way they can. “Let us go to their town in our spacecraft and save people”, says Cirha. “But the sky is filled with dark smoke. Going
their through spacecraft will not be safe and we will not be able to board thousands of people in one spacecraft “, says Triho. How can they reach Uncle Science and save people?
“Idea! We will go there by train”, says Uncle Math. “Train? Is it another mode of transport?” asks Cirha. “Yes! It is a series of connected cabins that are pulled by an engine. These cabins will also
help us accommodate thousands of people. Since you haven’t travelled by train before, I will take you along with me”, he says. The kids are thrilled.
Pum Pum! Honks the engine. The kids are excited to travel through a new mode of transport. This journey is going to be different for them. Uncle Math hopes that his friend is safe, and together they
can save many victims.
“Alright! Let us begin”, says Uncle math. He inputs 3 and 4 and starts the engine. Hurrah! The kids are thrilled.
“Uncle Math, why do 3 and 4 look different here? ¾ is it a new type of number?” asks curious Triho. “Great observation, Triho. ¾ is a fraction. Because this is a fraction train, it takes the distance
in terms of fractions”, explains Uncle Math.
“¾ is the location of first village. So I entered the same in the engine. “But how do we know the locations of villages?” asks Cirha. He takes out a mini-map of the route and asks the kids to observe
the track carefully.
“Oh! There is a village on ¾, 10/4, and 19/4″, says Cirha looking at the map. “Oh yes! But as we are moving ahead, the numerator is increasing while the denominator is still the same. Why so?” asks
Triho. “That is an interesting question”, says Uncle Math.
“As you can see, our track is made up of wholes. Each whole has 4 smaller and equal parts. So if we start at 0, ¼ is the first distance that we covered, which means we travelled 1 part out of the 4
parts. Similarly, when we are at the 3/4th part. So that means we have covered 3 parts out of the 4 smaller parts”, explains Uncle Math.
“Amazing. So when we cover 4/4 distance, we would have covered all the 4 smaller parts making us cover 1 whole”, says Cirha. “Exactly! Also, look Cirha, all these ¼, 2/4, ¾ fractions are greater than
zero but lesser than one because we did not cover the one whole”, says Triho. Uncle Math is amazed at the way kids make connections.
“Exactly! Such fractions where the numerator is smaller than the denominator are called Proper fractions. All the proper fractions are always less than 1″, adds Uncle Math. The kids now understand
Soon the first village arrives. Uncle Math and the kids get out of the train and board as many people as possible.
It is time to input the next village number. “10/4” enters Cirha. Can you find out how many wholes will be covered in 10/4 to reach the next village?
“To find that out, I can easily count the 10 smaller parts and see how many wholes it covers. So 10/4 covers 2 wholes, and still, 2 parts will remain. This means 10/4 is greater than 1″, says Triho.
“Perfect! Such fractions whose numerator is greater than the denominator are called Improper fractions. All the improper fractions are greater than 1″, says Uncle Math.
The kids are having fun understanding the different types of fractions. “There is something special about improper fractions. This 10/4 can also be written as 2 2/4. Can you guess why?” asks Uncle
Math. “Oh I know!” says excited Cirha. “10/4 covers 2 wholes and 2 parts are remaining”, she says.
“Brilliant! This 2 2/4 fraction is called a mixed fraction. It has two parts where 2 indicates the total number of wholes and 2/4 indicates the remaining parts. A mixed fraction is always the sum of
the whole number and a proper fraction”, explains Uncle Math.
Soon the first village arrives. Uncle Math and kids get out of the train and board as many people as possible. Phew! This mission is getting bigger and bigger.
Pum! Pum! The loud horn of the engine is so funny. It is time to enter the next village number. “19/4″, enters Triho. Can you convert this improper fraction into a mixed fraction?
Cirha closely looks at the mini map and observes the track. “4 3/4 is nothing but 19/4. Am I right?” she says confidently. Uncle Math agrees with her. “Oh no! I can see the smoke. Have we reached the
village?” says Triho. Uncle math quickly looks out to confirm. “Yes! We have. Let us hurry now”, he says.
This village has had the worst fire breakout. They spot people everywhere. Some are running, some walking and some are crying. “This is so sad. Let us quickly shift them in the train”, says Triho.
One by one, they transfer the people to the train. Uncle Math and Uncle science are finally relieved.
It is time to start the engine again. Pum! Pum! The horn blows. The fraction train is ready to take off with thousands of people. “Hurrah!” says Cirha. This journey did not just get them into a new
mode of transport but also taught them proper, improper and mixed fractions. Indeed a beautiful journey.
We Learnt That…
• Fraction, where the numerator is smaller than the denominator, is called a proper fraction. All the proper fractions are always less than 1.
• Fractions, where the numerator is greater than the denominator are called improper fractions. All the improper fractions are greater than 1.
• A mixed fraction is always the sum of the whole number and a proper fraction. All mixed fractions are greater than 1.
Let’s Discuss
• Where are we going today? Why?
• What was the new mode of transport we used this time? What was special about it?
• What are proper, improper and mixed fractions? Explain with an example.
• Uncle Math is always ready to help people in need. What does this tell you about him?
Please refer this guide by Fun2Do Labs for teaching types of fractions to kids : | {"url":"https://fun2dolabs.com/math-story-proper-improper-and-mixed-fractions/the-fraction-train-to-the-rescue","timestamp":"2024-11-07T13:57:12Z","content_type":"text/html","content_length":"51056","record_id":"<urn:uuid:385bb12a-e279-45db-9258-ada3ab260575>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00224.warc.gz"} |
DQI -
DQI - Devon Quantification of Interference
1. What is DQI?
The Devon Quantification of Interference (DQI) procedure interprets an interference test to estimate the hydraulic diffusivity and the fracture conductivity between the wells. A dimensionless
drainage length parameter (LD) is calculated using the conductivity estimate and knowledge of reservoir parameters. The Degree of Production Interference (DPI) is then estimated from the calculated
LD. This new metric was defined to quantify the effect of well-to-well interference on production.
For purposes of defining this metric, it was consider a specific case where only two wells have been producing for at least several months. These two wells are bounded by each other on one side, and
unbounded on the other side. Then, one of the wells is shut-in, and the objective is to measure the impact on the production rate of the well that remains on production.
2. Procedure - from Almasoodi, Andrews, Johnston and McClure, 2023
Step 1: Creating Δp and Bourdet derivative plots
Figure 1 shows pressure observations at a monitoring well in a field example in the Meramec formation located in the Anadarko basin of Oklahoma. The vertical black line shows the time when the offset
well was put on production, which caused a downward pressure response in the monitoring well.
Fig. 1: Pressure observations in a monitoring well before and after the offset well is put on production.
To perform an interpretation, start by constructing a log–log pressure derivative plot showing \(Δp\) and \(t dp/dt\) versus time from the beginning of the transient (Fig. 2). The value \(Δp\) is
defined as: \(Δp\) = \(p\) − \(pref\). Where '\(p\)' is the pressure recorded by the pressure gauge, and '\(pref\)' is the extrapolation of the prior pressure decline trend (the orange line in Fig.
1). It may be useful to smooth the data by resampling at increments of pressure.
Fig. 2: \(Δp\) and \(t dp/dt\) observed during the interference test.
Figure 3. \(Δp\) and \(t dp/dt\) from the analytical solution (black and blue lines) are fitted to the pressure and derivative observations by a trial and error approach varying \(\alpha\) and \(k\),
to find a good fit.
Fig. 3: \(Δp\) and \(t dp/dt\) observed during the interference test.
Step 2: Computing diffusivity from pressure changes in interference
When the active well is put-on-production, a pressure disturbance propagates along the fracture, as described by the pressure diffusion equation (Zimmerman 2018). In classical well test analysis, the
radius of investigation, \(r_{inv}\), is defined as the distance from the well where a pressure disturbance will be felt, as a function of time, \(t\), and hydraulic diffusivity, \(\alpha\) (Horne
For flow through porous media, the hydraulic diffusivity is equal to the square root of permeability divided by porosity, total compressibility, and viscosity. In a fracture, it is more appropriate
to define hydraulic diffusivity in terms of the fracture conductivity and aperture:
Where \(k_f\) is the fracture permeability, \(W\) is the aperture, \(C\) is the fracture conductivity, \(\mu\) is fluid viscosity, \(dW/dp\) is the derivative of aperture with respect to pressure,
and \(c_f\) is the compressibility of the fluid in the fracture.
If it is assumed that the fracture is fixed-height, and that the pressure disturbance propagates along the fracture through linear flow, it can be calculated the pressure change from the solution for
a constant flux source in a semi-infinite slab (Hetnarski et al. 2014), with the pressure disturbance observed at an offset distance \(y\).
\(q_0\), production rate (\(m^3/s\)); \(y\), offset distance \(m\); \(\alpha\), diffusivity (\(m^2/s\)); \(K\)∶ equal to (\(CH/\mu\)), where \(C\) is fracture conductivity, \(H\) is the fracture
height, \(\mu\) is the viscosity of the fluid inside the fracture (\(m^4/MPas\)).
When plotting the analytical solution, it is observed that the timing of the onset of the pressure interference is controlled by the diffusivity, and the shape of the curve after the onset of
interference is controlled primarily by the \(K\) parameter.
It is possible to fit measured data with Eq. 3 and estimate both diffusivity and the \(K\) parameter. However, the estimate of \(K\) is considerably more uncertain than the estimate of diffusivity.
It could be affected by: (a) non-constant production rate at the active well, (b) complex fracture geometries other than linear geometry, and (c) change in flow regime from fracture linear (for
example, if flow from the matrix causes bilinear flow). Also, nonlinearities could be present during early-time production that—strictly speaking—may violate the linearized ‘single phase’ assumptions
that justify: (a) the use a simple diffusivity equation solution (Eq. 3), and (b) the calculation of \(Δp\) = \(p\) − \(pref\) that subtracts out the prior pressure trend (which relies on
On the other hand, the estimate of diffusivity (from the timing of the onset of the signal) is quite robust. As shown in Eq. 1, the radius of investigation scales with the square root of the product
diffusivity and time, regardless of flow regime, flow geometry, or variable boundary condition at the active well. Also, the early-time pressure response of the interference test involves relatively
small pressure changes, minimizing the potential impact of nonlinearities in the system, such as changing fluid compressibility.
Thus, the early interference pressure response—the tip of the spear—is the most robust part of the transient for estimating the diffusivity. It is least affected by uncertainties and nonlinearities.
To estimate diffusivity, it was used a trial-and-error approach to fit Eq. 3 to the observed \(Δp\) and \(Δp'\) curves. The value of \(K\) from the curve fit is not used in the rest of the analysis,
but it is nonetheless useful to include in the curve fit that it is used for the estimation of the diffusivity. The priority is to fit the part of the trend that matches the initial response—the
first 10 s or 100 s of psi. The analytical solution is not expected to be able to match the transient after the initial response. Figure 3 shows an example of fitting the analytical solution to a
real interference test. Figure 4 shows sensitivities on the effect of changing the hydraulic diffusivity and \(K\).
Fig. 4: The variation in \(dp\) (solid lines) and \(t dp/dt\) (dashed lines) at the offset Monitoring Well calculated with the analytical solution with different values of a) \(\alpha\) (keeping \(K
\) constant = 0.01), and b) \(K\)-parameter (keeping \(\alpha\) constant = 10). The units of \(\alpha\) are (\(m^2/s\)) and the units of the \(K\)-parameter are (\(m^4/MPa*s\)).
Step 3: Computing conductivity from diffusivity
Once the diffusivity has been estimated from the curve fit, it can be plugged into Eq. 2 to estimate the fracture conductivity. It will not practically have precise knowledge of \(dW/dp\) and \(W\),
but it can plug-in reasonable values:
and \(W\) = 0.76 mm (0.03 in). For \(cf\), it was used the compressibility of the interfering fluid present in the fracture. For the initial POP test after stimulation, this will be the
compressibility of water, ∼ 0.000435 MPa−1. For tests performed after months of production, the fluid in the fracture will be a multiphase mixture of oil, gas, and/or water. In this case, it is
necessary to use the total compressibility of all the phases in the fracture.
For viscosity, it was used the viscosity of the fluid in the fracture. The initial POP test may be controlled by the viscosity of the frac fluid. For instance, when a viscous HVFR is used for
hydraulic fracturing, the POP test viscosity may be influenced by the viscosity of the HVFR which can be significantly higher than water. During the later production phase tests, the effective
viscosity of the multiphase mixture must be estimated.
Step 4: Calculating the dimensionless well spacing
Next, it was necessary to calculate a dimensionless number, \(LD\), that can be roughly defined as the ‘dimensionless drainage distance’. \(LD\) is derived by first approximating \(L\), the ‘maximum
possible drainage distance along the fracture if hypothetically it was unbounded and had infinite propped length’. This drainage distance is determined by a balance of flow rate into the fracture and
flow rate along the fracture.
The derivation of \(LD\) uses rough ‘back of the envelope’ approximations, but this is acceptable because the objective is not to derive a rigorous analytical solution. Instead, it seeks to determine
an appropriate scaling between variables.
To estimate the flow rate along the fracture (towards the well), the Darcy’s law:
where \(Q\), flow rate (\(m^3/s\)); \(H\) , fracture height \((m)\); \(Δp\) ≈ \(BHP_{initial}\) − \(P_{res}\); \(dL\) , length of the drainage distance along the fracture \((m)\).
Assuming matrix linear flow into the fracture, the fluid flow rate into the fracture can be expressed as:
Where \(\phi\), porosity (v/v); \(c_t\), total compressibility, i.e., fluid plus pore compressibility (\(MPa^{-1}\)); \(k\), matrix permeability (\(m^2\)); \(t\), time at which production impact is
estimated \((s)\) (used as 20 days for this analysis); \(L\), length of the region along the fracture where production is occurring.
Next, it was estimated the maximum possible length of fracture that could be drained, under the hypothetical assumption of unlimited propped length. Under these conditions, the draining length will
be limited by the ability of the fracture to deliver fluid to the well, given the rate that fluid flows into the fracture.
The maximum possible draining length is estimated by setting the values of \(Q\) from Eqs. 5 and 6 to be equal (implying that the flow rate into the fracture is equal to the flow rate along the
fracture) and solving for \(L\). The flowing fluid density may be a bit different between the fracture and the matrix, and so it is not strictly precise to set the volumetric flow rates \((Q)\)
equal, but this is a minor approximation that simplifies the calculations. Further, it was assumed that \(Δp\) for ‘flow through the fracture’ is approximately equal to \(Δp\) for ‘flow through the
matrix,’ and so the terms cancel-out.
\((k_r/\mu){frac,t}\), the total mobility of the fluid in the fracture during production; \((k_r/\mu){mat,t}\), the total mobility of the fluid in the matrix during production.
Where total mobility is equal to: \((k_{rw}/\mu_w)\)+\((k_{ro}/\mu_o)\)+\((k_{rg}/\mu_g)\)
In these equations, because they are defining a production response during long-term depletion, the fluid properties should be defined for the flowing reservoir fluid, and not the properties of the
frac fluid (even if the interference test is performed when the wells are initially put on production).
To build intuition, consider a few ‘end-member’ extremes. If the permeability is high relative to the conductivity, then fluid will be able to rapidly flow into the fracture, but the fracture
conductivity (the ability to transmit fluid along the fracture) will limit the total flow rate, and so the effective draining length will be limited. Conversely, if the permeability is very low
relative to the conductivity, then it will require a large producing fracture area to reach the flow capacity of the fracture, and the effective draining length will be large.
The balance of these two effects—the ability of fluid to transmit along the fracture and the rate that fluid can enter the fracture—determines the length of the fracture that can be drained. Of
course, in reality, the propped length is not infinite. If the wells are so far-apart that they do not have overlapping propped areas, then the interference test results will be straightforward to
interpret— there will be minimal interference. But, if the propped areas do overlap, then interference will occur, and Eqs. 5 and 6 can help quantify the magnitude of interference.
Next, it is necessary to calculate the dimensionless drainage length, \(LD\), by dividing by the half-well spacing. It is intuitively expected that \(DPI\) should increase with larger values of \(LD
Where \(y\), is well spacing.
Step 5: Relating the dimensionless well spacing to the DPI
Numerical simulations were run of interference tests under a wide range of conditions, varying fluid properties, reservoir properties, and well spacing. In all cases, were simulated both: (a) an
interference test, and (b) the production impact of shutting-in one of the wells. The interference test was analyzed to infer \(LD\), and the simulation of the single-well shut-in allowed us to
directly observe the \(DPI\). Then, was made a cross-plot of \(DPI\) versus \(LD\). Figure 5 shows that the results collapse onto a single curve. This means that it can be use an estimate of \(LD\)
to estimate the \(DPI\).
After estimating \(LD\) from the interference test, the value of \(DPI\) can be read directly from Fig. 5, or calculated from the equation:
Fig. 5: The relationship between \(LD\) and \(DPI\) from numerical simulations collapse onto a curve represented by the dashed line. The dots represent the individual numerical simulations. The
simulations have varying fracture, fluid and matrix property inputs.
DQI Workflow in whitson^+
DQI dataset example
Here's an example DQI dataset - formatted in the whitson template, the same as the one used in the original paper, formatted to be uploaded via the standard mass upload template -
Example DQI Dataset
Courtesy: Devon, Resfrac.
Step 1 - Load the dataset
Simply drop this dataset downloaded from above in the mass upload dialog box. You can also switch to the examples tab and click Upload on the Well Test Certificate
You can also upload your own pressure interference dataset if you'd like. Ensure that the pressure measurements are in the p[wf] or gauge pressure column and the same Well Name exists on the Well
Data sheet.
Step 2 - Initialize PVT
You need PVT initialization to gather fluid properties like fluid compressibility, viscosity, etc. from the black oil tables. You want to ensure that the PVT initialization matches the fluid that can
be expected between these wells.
For a DQI test shortly after the initial stimulation in the offset well prior to the well being produced, initialize PVT with 100% water saturation - this is what we'll do here in this example
dataset, as shown in the GIF above.
For tests where the well pair has been producing hydrocarbons, initialize the PVT with the GOR associated with the reservoir fluid in place.
Step 3 - Align the POP line
Set the vertical dashed line at the POP of the offset well. Taking a quick look at the pressures in the production data, we can set the offset well POP to 1.48 days in this example. You can enter it
in or move the vertical dashed line graphically to the point in time when the offset well is POP'ed.
Step 4 - Align the line to Pressure trend before POP
Fit the prior pressure trend by moving and adjusting the slope line to align with the stabilized pressure trend prior to POP. You can do this manually or you can just lasso fit the stabilized
pressure data.
Notice how the pressure and pressure derivative vs time after POP plot is dynamically recalculated as you make adjustments to the prior pressure trend.
Step 5 - Click Autofit to calculate the DPI
Once the prior pressure trend and POP is picked, we have a plot of . Click the AUTOFIT button (off to the top-left of the page) - This fits the analytical model to the pressure and pressure
derivative data to determine model parameters -
1. α (controls hydraulic diffusivity) - impacts the timing of the onset of pressure interference.
2. κ (controls fracture conductivity) - impacts the shape of the curve after the interference.
Assessing fit quality: The priority is to ensure that the fit matches the initial response - the first 10s or 100s of psi. The analytical solution is not expected to match the transient after the
initial response.
Resolving these two parameters allows you to calculate the fracture conductivity 'C' and maximum possible drainage length along the fracture, 'L', given W (fracture width or aperture) and dW/dP
(change in fracture aperture with change in pressure) and other parameters.
Then the Ld is calculated, given the well spacing, 'y'.
This is plotted in the empirical correlation developed to calculate DPI, given the Ld.
The resulting DPI should be close to 95%.
Note down the α and C, both of which are indicators of good or bad well-pair connectivity, apart from the DPI. This will help you rank the well-pair connectivity on large scale tests with multiple
well pairs
Mouin Almasoodi\(^1\), Thad Andrews\(^1\), Curtis Johnston\(^1\), Mark McClure\(^2\), and Ankush Singh\(^2\)(2023). A new method for interpreting well‑to‑well interference tests and quantifying the
magnitude of production impact: theory and applications in a multi‑basin case study.\(^1\)Devon Energy, \(^2\)ResFrac Corporation.
Zimmerman RW (2018) Pressure diffusion equation for fluid flow in porous rocks. Imp Coll Lect Pet Eng 25:1–21.
Horne RN (1995) Modern Well Test Analysis: A Computer-aided Approach.
Hetnarski RB et al (2014) Encyclopedia of thermal stresses. Springer, Dordrecht. | {"url":"https://manual.whitson.com/modules/well-performance/dqi/","timestamp":"2024-11-14T18:19:58Z","content_type":"text/html","content_length":"70584","record_id":"<urn:uuid:906027ea-d40d-47d1-aa3e-942bd738917a>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00223.warc.gz"} |
Unreal Engine Games
Fractions are a mathematical concept used to represent a part of a whole or a ratio between two quantities. Fractions consist of two parts, the numerator and the denominator, which are essential to
understanding and working with fractions. In this essay, we will explore the parts of a fraction and their significance in mathematical operations.
The numerator is the top number in a fraction and represents the number of parts of the whole or the number of parts of a quantity. For example, in the fraction 2/3, the numerator is 2, indicating
that there are two parts out of three total parts. The numerator is essential in determining the fraction’s value and represents the quantity being measured.
The denominator is the bottom number in a fraction and represents the total number of equal parts that make up the whole or the total number of parts in a quantity. For example, in the fraction 2/3,
the denominator is 3, indicating that there are three total parts. The denominator is essential in determining the size of each part and in comparing fractions with different denominators.
The relationship between the numerator and denominator is critical in determining the value of the fraction. The fraction bar or line represents division, and the numerator is divided by the
denominator to determine the value of the fraction. For example, in the fraction 2/3, the numerator, which is 2, is divided by the denominator, which is 3, resulting in the value of 0.666666…, which
can be simplified to 2/3.
Fractions can be classified into different types based on the relationship between the numerator and the denominator. Proper fractions have a numerator that is less than the denominator, such as 2/3,
while improper fractions have a numerator that is greater than or equal to the denominator, such as 5/3. Mixed numbers are a combination of a whole number and a proper fraction, such as 2 ½.
The parts of a fraction are crucial in performing mathematical operations such as addition, subtraction, multiplication, and division. When adding or subtracting fractions, the denominators must be
the same, and the numerators can be added or subtracted. When multiplying fractions, the numerators and denominators are multiplied separately. When dividing fractions, the second fraction is
inverted, and the first fraction is multiplied by the inverted second fraction.
In conclusion, the numerator and denominator are the essential parts of a fraction. The numerator represents the number of parts of the whole or the quantity, while the denominator represents the
total number of equal parts or the divisor of the ratio. The relationship between the numerator and denominator determines the value of the fraction, and the types of fractions can be classified
based on this relationship. Understanding the parts of a fraction is essential in performing mathematical operations and in solving real-world problems. | {"url":"https://scweb4free.com/parts-of-a-fraction/","timestamp":"2024-11-04T15:38:59Z","content_type":"text/html","content_length":"92814","record_id":"<urn:uuid:293de98b-f0ce-465b-82a4-d399684bdc1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00176.warc.gz"} |
Lyapunov Equations, Energy Functionals, and Model Order Reduction of Bilinear and Stochastic Systems
Title data
Benner, Peter ; Damm, Tobias:
Lyapunov Equations, Energy Functionals, and Model Order Reduction of Bilinear and Stochastic Systems.
In: SIAM Journal on Control and Optimization. Vol. 49 (2011) Issue 2 . - pp. 686-711.
ISSN 1095-7138
DOI: https://doi.org/10.1137/09075041X
Abstract in another language
We discuss the relation of a certain type of generalized Lyapunov equations to Gramians of stochastic and bilinear systems together with the corresponding energy functionals. While Gramians and
energy functionals of stochastic linear systems show a strong correspondence to the analogous objects for deterministic linear systems, the relation of Gramians and energy functionals for bilinear
systems is less obvious. We discuss results from the literature for the latter problem and provide new characterizations of input and output energies of bilinear systems in terms of algebraic
Gramians satisfying generalized Lyapunov equations. In any of the considered cases, the definition of algebraic Gramians allows us to compute balancing transformations and implies model reduction
methods analogous to balanced truncation for linear deterministic systems. We illustrate the performance of these model reduction methods by showing numerical experiments for different bilinear
Further data | {"url":"https://eref.uni-bayreuth.de/id/eprint/63653/","timestamp":"2024-11-07T09:22:46Z","content_type":"application/xhtml+xml","content_length":"21343","record_id":"<urn:uuid:e0f7e5ce-9f08-47d8-b410-09bd57a4fcbb>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00800.warc.gz"} |
Expressions | Xano Documentation
Expressions are a flexible data type that Xano parses in real-time to support an inline syntax to expressing data with mathematical expressions. Anything you can do with Xano filters, can also be
done inline within an expression.
When building expressions, make sure you have the 'expression' data type selected. You can also click Use Expression under any value box to quickly switch.
Expression building in Xano leverages auto-complete, which will auto-populate references to inputs and variables, filters, and other common notation.
Using the Expression Editor & Playground
When using the Expression data type, you will be presented with an Expression Editor & Playground to enable easier editing and testing of your expression.
To get the most value out of the expression editor and playground, make sure to add any variable contents you'd like to use in the Context panel, and make sure to Run & Debug your function stack to
enable auto-complete.
Mathematical Operators
Operator Precedence
For the most part expressions are evaluated left to right. Using parentheses to illustrate a point, the following would be the same assuming all operators were being evaluated left to right.
1 + 2 + 3 == 6
1 + (2 + 3) == 6
However, there are a few operators which get special priority and get evaluated first. These operators are the multiplication and divide operators.
1 + 2 * 3 == 7 // if left to right, then 9 (which is incorrect)
1 + (2 * 3) == 7
1 + 4 / 2 == 3 // if left to right, 2.5 (which is incorrect)
1 + (4 / 2) == 3
Text Operators
To add separation when concatenating, add an empty string between the values: a~" "~b
Array Operators
Array Indexes
Expressions have the ability to reference array elements using all integer values (0, positive numbers, and negative numbers). Using a negative number represents starting from the top of the list
rather the beginning of the list.
Object Operators
Comparison Operators
Logical Operators
All of these operators evaluate their expressions as truthy statements. This means that a comparison operator is not required. For example: 0 || 1 would evaluate to true since 1 evaluates as true.
Conditional Operators
The ternary operator has 2 forms - the traditional if/else based on expression and the shorthand (this/that). The shorthand version will use either the left (this) or the right (that) based on which
one evaluates to a truthy statement first going from left to right.
The null coalescing operator is very similar to the shorthand ternary, except that instead of relying on a truthy statement, it only checks for the null value.
Variable Syntax
Variables can be referenced using the same syntax that is available within Lambdas.
Variables within the function stack are accessible through $var root variable.
Inputs are accessible through the $input root variable.
Authentication values are accessible through the $auth root variable.
Environment Variables
Environment variables are accessible through the $env root variable. This includes both system variables ($remote_ip, $datasource, etc.) as well as workspace environment variables.
When building expressions, you'll see autocomplete suggestions as you type. This works for variables, inputs, and environment variables, as well as filters.
For variables with nested data, such as objects, you'll also be presented with an auto-complete of the fields inside of that object. In this example, we're targeting a variable called logand are
presented with the fields inside of that variable by the expression builder, as well as a description of each.
Data Types
The Xano expression engine supports a more relaxed syntax for its data types to make it easier to reference text and variables without the strict requirements of using quotation marks.
Dot Notation
The same relaxed syntax used for data types also applies to dot notation.
All of the Xano filters are available within the expression syntax. To use these, you need to follow the pipe expression syntax.
variable | pipe : arg1 : arg2 : argN
For example, to uppercase text using the upper filter, you would do the following.
// result = XANO
Here is another example using a filter with an argument.
1 + 2 + (3|add:1)
// result = 7
This particular example is using both a mathematical "+" and an add filter to illustrate how they can be mixed together.
You can also chain filters together.
1 + 2 + (3|add:1|mul:2)
// result = 11
Importing Expressions
When importing cURL or pasting JSON into Xano, Xano can automatically detect the Expression data type, provided the expression begins with a $ character.
As an example, the following JSON...
...will import as:
Advanced Examples
As showcased above, the Xano expression engine is very powerful. Here we can look into some more advanced use cases that bring everything together.
Sample Data
$input = {
"scores": [1,2,3]
$var = {
"numbers": [4,5,6]
($input.scores|max) > ($var.numbers|min)
// result = false
Null coalescing
Sample Data
$input = {
"scores": [1,2,3]
$var = {
"numbers": [4,5,6]
(($input.scores|merge:[100,101,102])|max)+($var.bad_syntax ?? 100)
// result = 202
Sample Data
$input = {
"scores": [1,2,3]
$var = {
"numbers": [0,1,2,3,4,5,6]
($input.scores[2] == 3 ? 10 : 100) + (($var.numbers|min) ?: $var.numbers|max))
// result = 16 | {"url":"https://docs.xano.com/working-with-data/data-types/expressions","timestamp":"2024-11-07T19:36:06Z","content_type":"text/html","content_length":"1050604","record_id":"<urn:uuid:4df02922-51e1-40d5-b740-f9d099809cd5>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00719.warc.gz"} |
Free online Statistics book for dummies
Free online Statistics textbook for dummies
Statistics practice exercises, online interactive.
Practice site for .Statistics for college students and researchers: Second Edition Paperback – December 31, 2020 This is the simplest textbook, only five simple formulas and plain English.
Introduction to Statistics, Advanced Statistics.
The present site goes over the concepts developed in the paperback and offers online interactive exercises on elementary and advanced Statistics:
Mean, standard deviation, variance, t-test, analysis of variance (ANOVA), one-way single factor, two-way, repeated measures, factorial designs 2x2 and higher, complex designs, mixed, split-plot.
Interactive study cases of statistical design are included.
Register here to participate It is free. | {"url":"https://statisticstextbook.com/","timestamp":"2024-11-10T14:06:14Z","content_type":"text/html","content_length":"26782","record_id":"<urn:uuid:9354a4ae-236c-4f83-a447-3e54de970860>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00103.warc.gz"} |
argmin(x: array, /, *, axis: int | None = None, keepdims: bool = False) array¶
Returns the indices of the minimum values along a specified axis.
When the minimum value occurs multiple times, only the indices corresponding to the first occurrence are returned.
For backward compatibility, conforming implementations may support complex numbers; however, inequality comparison of complex numbers is unspecified and thus implementation-dependent (see Complex
Number Ordering).
☆ x (array) – input array. Should have a real-valued data type.
☆ axis (Optional[int]) – axis along which to search. If None, the function must return the index of the minimum value of the flattened array. Default: None.
☆ keepdims (bool) – if True, the reduced axes (dimensions) must be included in the result as singleton dimensions, and, accordingly, the result must be compatible with the input array (see
Broadcasting). Otherwise, if False, the reduced axes (dimensions) must not be included in the result. Default: False.
out (array) – if axis is None, a zero-dimensional array containing the index of the first occurrence of the minimum value; otherwise, a non-zero-dimensional array containing the indices of
the minimum values. The returned array must have the default array index data type. | {"url":"https://data-apis.org/array-api/latest/API_specification/generated/array_api.argmin.html","timestamp":"2024-11-06T18:01:10Z","content_type":"text/html","content_length":"22480","record_id":"<urn:uuid:21a3d0a8-5521-45c6-af5f-c45caaa8e811>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00636.warc.gz"} |
Foundations of Machine Learning I
Lately, I was into the studying process of machine learning, and outputting(taking notes) is a vital step of it. Here, I am using Andrew Ng’s Stanford Machine Learning course in Coursera with the
language of MATLAB.
So the rest of the code I will write in this post by default are based on MATLAB.
What is ML?⌗
“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with
experience E.” Tom Mitchell
Supervised Learning&Unsupervised Learning
SL: with labels; direct feedback; predict
Under SL, there are regression and classification
USL: without labels; no feedback; finding the hidden structure
Under USL, there are clustering and non-clustering
For now, I would focus on these two but not reinforcement learning.
The Basic Model & Notation⌗
We use $x^{(i)}$ to represent the “input” value, with the variable $x$ represent the value at the position $i$ in a matrix , or vector in most of the time. And $y^{(i)}$ is the actual “output” when
we have a input $x$ at position variable $i$. A pair of $(x^{(i)}, y^{(i)})$ is called a training sample. Then we have a list of such samples with $i=1,…,m$—is called a training set. And the purpose
of ML is to have a “good” hypothesis function $h(x)$ which could predict the output while only knowing the input $x$. If we only want to have a simple linear form of $h(x)$, then it looks like: $h(x)
=\theta_0 + \theta_1x$, which both $\theta_0$ and $\theta_1$ is the parameter we want to find that letting $h(x)$ to predict “better”.
Linear Algebra Review⌗
Matrix-Vector Multiplication:$\begin{bmatrix} a & b \newline c & d \newline e & f \end{bmatrix} * \begin{bmatrix} x \newline y \end{bmatrix} = \begin{bmatrix} a*x + b*y \newline c*x + d*y \newline
e*x + f*y \end{bmatrix}$
Matrix-Matrix Multiplication: $\begin{bmatrix} a & b \newline c & d \newline e & f \end{bmatrix} * \begin{bmatrix} w & x \newline y & z \newline \end{bmatrix}=\begin{bmatrix} a*w + b*y & a*x + b*z \
newline c*w + d*y & c*x + d*z \newline e *w + f*y & e*x + f*z \end{bmatrix}$
Identity Matrix looks like this—with 1 on the diagonal and the rest of the elements are zeros: $\begin{bmatrix} 1 & 0 & 0 \newline 0 & 1 & 0 \newline 0 & 0 & 1 \newline \end{bmatrix}$
Multiplication Properties
Matrices are not commutative: $A∗B \neq B∗A$
Matrices are associative: $(A∗B)∗C = A∗(B∗C)$
Inverse and Transpose
Inverse: A matrix A mutiply with its inverse A_inv results to be a identity matrix I:
I = A*inv(A)
Transposition is like rotating the matrix 90 degrees, for a matrix A with dimension $m * n$, its transpose is with dimension $n * m$:
$$A = \begin{bmatrix} a & b \newline c & d \newline e & f \end{bmatrix}, A^T = \begin{bmatrix} a & c & e \newline b & d & f \newline \end{bmatrix}$$
Also we can get:
Cost Function⌗
A cost function shows how accurate our hypothesis function predict while output the error (the deviation between $y(x)$ and $h(x)$). And it looks like this:
$$ J(\theta_0, \theta_1) = \dfrac {1}{2m} \displaystyle \sum_{i=1}^m \left (h_\theta (x^{(i)}) - y^{(i)} \right)^2 $$
For people who are familier with statistics, it is called “Squared error funtion” while the square makes each error becomes a positice value, and the $\frac{1}{2}$ helps to simplify the expression
later when we do derivative during the process of gradient descent. Now, we turn the question to “How to find $\theta_0, \theta_1$ that minilize $J(\theta_0, \theta_1)$?”
Contour Plot⌗
$J(\theta_0, \theta_1)$ in contour plot From Andrew Ng
A contour plot is actually an alternative way to show 3D graphs in 2D, in which the color blue represents low points while red means the high. So the $J(\theta_0, \theta_1)$ that gives the red point
is the set of the parameter gives $h(x)$ with the lowest error with the actual output $y(x)$
Gradient Descent⌗
Gradient Descent is one of the most basic ML tools. The basic idea is to “move some small steps which lead to minimizing the cost function $J(\theta)$. And it looks like this:
Repeat until convergence:
$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$
Here, the operator $:=$ just means assign the latter part to the former part while we know it could be the same as $=$ in many languages. We say the former $\theta_j$ as the “next step” while the
latter one as the “current position”, $\frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$ shows the “direction” that make the move increase $J(\theta)$ the most, so that we could just add a
negative sign to make it becomes the fastest decrease direction.$\alpha$ gives the length of step we want it to take for each step. And it’s important to make the update of each $\theta$ be
If we take the code above apart, then we have:
repeat until convergence:
$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})$
$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}((h_\theta(x^{(i)}) - y^{(i)}) x^{(i)})$
The term $x_i$ is nothing but a result of the derivative, there is no $x_i$ for $\theta_0$ because we defined $x^{(i)}_0$ as 1.
Then here is a full derivative process to show the partial dervative of the cost function $J(\theta)$:
$$\begin{aligned}\frac{\partial }{\partial \theta_j}J(\theta) &= \frac{\partial }{\partial \theta_j}\frac{1}{2}(h_\theta(x)-y)^{2}\newline&=2 \cdot \frac{1}{2}(h_\theta(x)-y) \cdot \frac{\partial }{\
partial \theta_j}(h_\theta(x)-y)\newline&= (h_\theta(x)-y)\frac{\partial }{\partial \theta_j}\left ( \sum\limits_{i=0}^n\theta^{(i)}x^{(i)}-y \right )\newline&=(h_\theta(x)-y)x_j\end{aligned}$$
And such basic method is called batch gradient descent while it uses all the training set we provide, and just saying for future reference, $J(\theta)$is convex which means it only has only one
global minima and has no chance to be affected by local minima.
Convex function&non-convex function
Multivariate Linear Regression⌗
So saying we have not only one variables of input, but many of them. Then we use $j$ in $x_j$ from 1 to $n$ to represents the index of it just like we use $i$ to represents the index of the training
example from 1 to $m$.
$x_{j}^{(i)}$ = value of, in $i^{th}$ training example, feature $j$
For convenience of notation, we have to define $x_0 = 1$, since we have $\theta_0$ in the hypothesis function, and the matrix mutiplication thing:
$$x = \begin{bmatrix} x_1 \newline x_2 \newline \vdots \newline x_n \end{bmatrix} \in \mathbb{R}^{n} , \theta = \begin{bmatrix}\theta_0 \newline \theta_1 \newline \theta_2 \newline \vdots \newline \
theta_n \end{bmatrix} \in \mathbb{R}^{n+1} \rightarrow x = \begin{bmatrix}x_0 \newline x_1 \newline x_2\newline \vdots \newline x_n \end{bmatrix} \in \mathbb{R}^{n+1}, \theta = \begin{bmatrix}\
theta_0 \newline \theta_1 \newline \theta_2\newline \vdots \newline \theta_n \end{bmatrix} \in \mathbb{R}^{n+1}$$
And the cool thing here we can do now is using vectorization to represents the long mutivariable hypothesis function:
$$h_\theta (x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + … + \theta_n x_n = \begin{bmatrix}\theta_0&\theta_1&\cdots&\theta_n\end{bmatrix}\begin{bmatrix}x_0 \newline x_1 \newline \
vdots \newline x_n\end{bmatrix}=\theta^T x$$
Feature Scaling
If the input set $x$ contains features that have very large difference on their data range, the process of getting $\theta$ could oscillating, being slow, or even failed, and feature scaling, or mean
normalization is a technique to make the range of data in each feature more even, and the process is very familir if knowing statistics:
So the input $x$ with feature index $j$ minus the mean of the data in this feature then divided by the standard deviation(or range in some cases)
Normal Equation
Other than gradient descent, there is another way to find the minimized cost function$J(\theta)$. We first need to construct a matrix $X$ which is a another way to show the input data set of $x$:
$$x = \begin{bmatrix}x_0 \newline x_1 \newline x_2 \newline \vdots \newline x_n \end{bmatrix} \rightarrow X = \begin{bmatrix} x^{(1)}_0 & x^{(1)}_1 & \cdots & x^{(1)}_n \newline x^{(2)}_0 & x^{(2)}_1
& \cdots & x^{(2)}_n \newline \vdots & \vdots & \ddots & \vdots \newline x^{(m)}_0 & x^{(m)}_1 & \cdots & x^{(m)}_n \end{bmatrix} =\begin{bmatrix} 1 & x^{(1)}_1 & \cdots & x^{(1)}_n \newline 1 & x^
{(2)}_1 & \cdots & x^{(2)}_n \newline \vdots & \vdots & \ddots & \vdots \newline 1 & x^{(m)}_1 &\cdots & x^{(m)}_n \end{bmatrix} $$
Actually, each row of the matrix $X$ is the transpose of each element in $x_j^{(i)}$, contains the data set for all features in one iteration. And the normal equation itself looks like:
$$\theta = (X^{T}X)^{-1}X^{T}y$$
I am not going to show how it comes but comparing to gradient descent, the normal equation: 1. no need to choose $\alpha$ 2. no need to iterate 3. but slow.
Not only we need to solve some continuous problems(linear regression), but also a lot of discrete problems like if someone gets cancer(YES/NO) by the size of one’s tumor. Normally we use 1 and 0 to
represent the two outcomes. And the new form of the function we need to use to better shows the concept of classification is called the sigmoid function:
$$h(x) = \dfrac{1}{1 + e^{-\theta^{T}x}}$$
So what we did here is basically put the original hypothesis function $\theta^{T}x$ into the standard sigmoid function:
$$g(z) = \dfrac{1}{1 + e^{-z}} $$
A standard sigmoid function
So that the new hypothesis function will output the probability toward one of the binary output(1/0) without overlapping.
Decision Boundary
We consider:
$$h(x) \geq 0.5 \rightarrow y = 1 \newline h(x) < 0.5 \rightarrow y = 0$$
Becuase of the bahavior of the logistic function:
$$\theta^{T}x=0, e^{0}=1 \Rightarrow h(x)=1/2 \newline \theta^{T}x \to \infty, e^{-\infty} \to 0 \Rightarrow h(x)=1 \newline \theta^{T}x \to -\infty, e^{\infty}\to \infty \Rightarrow h(x)=0$$
So that:
$$\theta^T x \geq 0 \Rightarrow h(x) = 1 \newline \theta^T x < 0 \Rightarrow h(x) = 0$$
Then you can just set $h(x)$ to 1 or 0 to get the decision boundary. For example:
$$\theta = \begin{bmatrix}5 \newline -1 \newline 0\end{bmatrix} \newline y = 1 ; \mathbf{if} ; 5 + (-1) x_1 + 0 x_2 \geq 0 \newline $Desicion Boundary: $x_1 \leq 5 $$
The plot should looks like:
The green portion is “1” while the red($x_1>5$) is “0” | {"url":"https://dmo.page/2020/02/07/machine-learning-study-notes/","timestamp":"2024-11-03T18:56:23Z","content_type":"text/html","content_length":"21963","record_id":"<urn:uuid:feb6aa94-6499-4211-bf64-8543a10822a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00337.warc.gz"} |
Question #ac0f8 | Socratic
Question #ac0f8
1 Answer
So, you're dealing with a unit conversion that must get you from $\text{cL}$, or centiliters, to $\text{nL}$, or nanoliters.
One centiliter is equal to ${10}^{- 2} \text{L}$, and one nanoliter is equal to ${10}^{- 9} \text{L}$, so you're going from something small to something even smaller. This means that the result must
be a bigger number than 35.7, since there are more nanoliters in a liter than there are centiliters in a liter.
You can do this by going to liters first
$\text{35.7 cL" * ("1 L")/(10^2 "cL") * (10^9"nL")/("1 L") = 35.7 * 10^(7)"nL}$
or by going directly from centiliters to nanoliters
$\text{35.7 cL" * ("1 nL")/(10^(-7)"cL") = 35.7 * 10^(7)"nL}$
Because 1 liter has ${10}^{2}$ centiliters and ${10}^{9}$ nanoliters, one centiliter will have
$\text{1 cL" * ("1 nL")/(10^(-7)"cL") = 10^7"nL}$
Unit conversions like this one will become a walk in the park once you'll get comfortable with the SI metric prefixes
Impact of this question
5675 views around the world | {"url":"https://api-project-1022638073839.appspot.com/questions/54c759e7581e2a57e93ac0f8","timestamp":"2024-11-06T11:11:16Z","content_type":"text/html","content_length":"35047","record_id":"<urn:uuid:d5d7d8fe-2ad6-4f8f-b5a1-7af4306b62d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00652.warc.gz"} |
CS 1
Introduction to Computer Programming
9 units (3-4-2) | first, third terms
A course on computer programming emphasizing the program design process and pragmatic programming skills. It will use the Python programming language and will not assume previous programming
experience. Material covered will include data types, variables, assignment, control structures, functions, scoping, compound data, string processing, modules, basic input/output (terminal and file),
as well as more advanced topics such as recursion, exception handling and object-oriented programming. Program development and maintenance skills including debugging, testing, and documentation will
also be taught. Assignments will include problems drawn from fields such as graphics, numerics, networking, and games. At the end of the course, students will be ready to learn other programming
languages in courses such as CS 11, and will also be ready to take more in-depth courses such as CS 2 and CS 4.
Instructor: Hovik
CS 1 x
Intermediate Computer Programming
6 units (2-2-2) | first term
Prerequisites: Enrollment by instructor permission only.
Students must be placed into this course via the CS placement test. An intermediate course on computer programming emphasizing the program design process and pragmatic programming skills. It will use
the Java programming language and will assume previous programming experience such as an AP CS A course. Material will focus on more advanced topics such as recursion, exception handling and
object-oriented programming. Program development and maintenance skills including debugging, testing, and documentation will also be taught. Assignments will include problems drawn from fields such
as graphics, numerics, networking, and games. At the end of the course, students will be ready to learn other programming languages in courses such as CS 11, and will also be ready to take more
in-depth courses such as CS 2 and CS 4
Instructor: Vanier
CS 2
Introduction to Programming Methods
9 units (3-5-1) | second term
Prerequisites: CS 1 or equivalent.
CS 2 is a demanding course in programming languages and computer science. Topics covered include data structures, including lists, trees, and graphs; implementation and performance analysis of
fundamental algorithms; algorithm design principles, in particular recursion and dynamic programming; Heavy emphasis is placed on the use of compiled languages and development tools, including source
control and debugging. The course includes weekly laboratory exercises and projects covering the lecture material and program design. The course is intended to establish a foundation for further work
in many topics in the computer science option.
Instructor: Blank
CS 3
Introduction to Software Design
9 units (2-6-1) | third term
Prerequisites: CS 2 or equivalent.
CS 3 is a practical introduction to designing large programs in a low-level language. Heavy emphasis is placed on documentation, testing, and software architecture. Students will work in teams in two
5-week long projects. In the first half of the course, teams will focus on testing and extensibility. In the second half of the course, teams will use POSIX APIs, as well as their own code from the
first five weeks, to develop a large software deliverable. Software engineering topics covered include code reviews, testing and testability, code readability, API design, refactoring, and
Instructor: Blank
CS 4
Fundamentals of Computer Programming
9 units (3-4-2) | second term
Prerequisites: CS 1 or instructor's permission.
This course gives students the conceptual background necessary to construct and analyze programs, which includes specifying computations, understanding evaluation models, and using major programming
language constructs (functions and procedures, conditionals, recursion and looping, scoping and environments, compound data, side effects, higher-order functions and functional programming, and
object-oriented programming). It emphasizes key issues that arise in programming and in computation in general, including time and space complexity, choice of data representation, and abstraction
management. This course is intended for students with some programming background who want a deeper understanding of the conceptual issues involved in computer programming.
Instructor: Vanier
Ma/CS 6/106 abc
Introduction to Discrete Mathematics
9 units (3-0-6) | first, second, third terms
Prerequisites: for Ma/CS 6 c, Ma/CS 6 a or Ma 5 a or instructor's permission.
First term: a survey emphasizing graph theory, algorithms, and applications of algebraic structures. Graphs: paths, trees, circuits, breadth-first and depth-first searches, colorings, matchings.
Enumeration techniques; formal power series; combinatorial interpretations. Topics from coding and cryptography, including Hamming codes and RSA. Second term: directed graphs; networks; combinatorial
optimization; linear programming. Permutation groups; counting nonisomorphic structures. Topics from extremal graph and set theory, and partially ordered sets. Third term: syntax and semantics of
propositional and first-order logic. Introduction to the Godel completeness and incompleteness theorems. Elements of computability theory and computational complexity. Discussion of the P=NP problem.
Instructors: T. Yu, Gherman, Kechris
CS 9
Introduction to Computer Science Research
1 unit (1-0-0) | first term
This course will introduce students to research areas in CS through weekly overview talks by Caltech faculty and aimed at first-year undergraduates. More senior students may wish to take the course
to gain an understanding of the scope of research in computer science. Graded pass/fail.
Instructor: Low
EE/CS 10 ab
Introduction to Digital Logic and Embedded Systems
6 units (2-3-1) | second, third terms
This course is intended to give the student a basic understanding of the major hardware and software principles involved in the specification and design of embedded systems. The course will cover
basic digital logic, programmable logic devices, CPU and embedded system architecture, and embedded systems programming principles (interfacing to hardware, events, user interfaces, and
Instructor: George
CS 11
Computer Language Lab
3 units (0-3-0) | first, second terms
Prerequisites: CS 1 or instructor's permission.
A self-paced lab that provides students with extra practice and supervision in transferring their programming skills to a particular programming language. The course can be used for any language of
the student's choosing, subject to approval by the instructor. A series of exercises guide students through the pragmatic use of the chosen language, building their familiarity, experience, and
style. More advanced students may propose their own programming project as the target demonstration of their new language skills. This course is available for undergraduate students only. Graduate
students should register for CS 111. CS 11 may be repeated for credit of up to a total of nine units.
Instructor: Vanier
CS 12
Student-Taught Topics in Computing
variable units between 1 and 9 | first, second, third terms
Prerequisites: CS 1 or instructor's permission.
Each section covers a topic in computing with associated sets or projects. Sections are designed and taught by an undergraduate student under the supervision of a CMS faculty member. CS 12 may be
repeated for credit of up to a total of nine units.
Instructor: Staff
CS 13
Mathematical Foundations of Computer Science
9 units (3-0-6) | first term
Prerequisites: CS 1.
This course introduces key mathematical concepts used in computer science, and in particular it prepares students for proof-based CS courses such as CS 21 and CS 38. Mathematical topics are
illustrated via applications in Computer Science. CS 1 is a co-requisite as there will be a small number of programming assignments. The course covers basic set theory, induction and inductive
structures (e.g., lists and trees), asymptotic analysis, and elementary combinatorics, number theory, and graph theory. Applications include number representation, basic cryptography, basic
algorithms on trees, numbers, and polynomials, social network graphs, compression, and simple error-correcting codes.
Instructor: Blank
CS 19 ab
Introduction to Computer Science in Industry
2 units (1-0-1) | first, second terms
This course will introduce students to CS in industry through weekly overview talks by alums and engineers in industry. It is aimed at first and second year undergraduates. Others may wish to take
the course to gain an understanding of the scope of computer science in industry. Additionally students will complete short weekly assignments aimed at preparing them for interactions with industry.
Graded pass/fail. Part b not offered 2023-24.
Instructor: Ralph
CS 21
Decidability and Tractability
9 units (3-0-6) | second term
Prerequisites: CS 2 (may be taken concurrently).
This course introduces the formal foundations of computer science, the fundamental limits of computation, and the limits of efficient computation. Topics will include automata and Turing machines,
decidability and undecidability, reductions between computational problems, and the theory of NP-completeness.
Instructor: Umans
CS 22
Data Structures & Parallelism
9 units (3-6-0) | second term
Prerequisites: CS 2 or instructor's permission.
CS 22 is a demanding course that covers implementation, correctness, and analysis of data structures and some parallel algorithms. This course is intended for students who have already taken a data
structures course at the level of CS 2. Topics include implementation and analysis of skip lists, trees, hashing, and heaps as well as various algorithms (including string matching, parallel sorting,
parallel prefix). The course includes weekly written and programming assignments covering the lecture material. Not offered 2023-24.
Instructor: Blank
CS 24
Introduction to Computing Systems
9 units (3-3-3) | first term
Prerequisites: CS 2 and CS 3.
Basic introduction to computer systems, including hardware-software interface, computer architecture, and operating systems. Course emphasizes computer system abstractions and the hardware and
software techniques necessary to support them, including virtualization (e.g., memory, processing, communication), dynamic resource management, and common-case optimization, isolation, and naming.
Instructor: Blank
CS 38
9 units (3-0-6) | third term
Prerequisites: CS 2; Ma/CS 6 a or Ma 121 a; and CS 21.
This course introduces techniques for the design and analysis of efficient algorithms. Major design techniques (the greedy approach, divide and conquer, dynamic programming, linear programming) will
be introduced through a variety of algebraic, graph, and optimization problems. Methods for identifying intractability (via NP-completeness) will be discussed.
Instructor: Schröder
CS 42
Computer Science Education in K-14 Settings
6 units (2-2-2) | second, third terms
This course will focus on computer science education in K-14 settings. Students will gain an understanding of the current state of computer science education within the United States, develop
curricula targeted at students from diverse backgrounds, and gain hands on teaching experience. Through readings from educational psychology and neuropsychology, students will become familiar with
various pedagogical methods and theories of learning, while applying these in practice as part of a teaching group partnered with a local school or community college. Each week students are expected
to spend about 2 hours teaching, 2 hours developing curricula, and 2 hours on readings and individual exercises. Pass/Fail only. May not be repeated.
Instructors: Ralph, Wierman
CS/EE/ME 75 abc
Multidisciplinary Systems Engineering
3 units (2-0-1), 6 units (2-0-4), or 9 units (2-0-7) first term; 6 units (2-3-1), 9 units (2-6-1), or 12 units (2-9-1) second and third terms | first, second, third terms
This course presents the fundamentals of modern multidisciplinary systems engineering in the context of a substantial design project. Students from a variety of disciplines will conceive, design,
implement, and operate a system involving electrical, information, and mechanical engineering components. Specific tools will be provided for setting project goals and objectives, managing interfaces
between component subsystems, working in design teams, and tracking progress against tasks. Students will be expected to apply knowledge from other courses at Caltech in designing and implementing
specific subsystems. During the first two terms of the course, students will attend project meetings and learn some basic tools for project design, while taking courses in CS, EE, and ME that are
related to the course project. During the third term, the entire team will build, document, and demonstrate the course design project, which will differ from year to year. First-year undergraduate
students must receive permission from the lead instructor to enroll.
Instructor: Chung
CS 80 abc
Undergraduate Thesis
9 units | first, second, third terms
Prerequisites: instructor's permission, which should be obtained sufficiently early to allow time for planning the research.
Individual research project, carried out under the supervision of a member of the computer science faculty (or other faculty as approved by the computer science undergraduate option representative).
Projects must include significant design effort. Written report required. Open only to upperclass students. Not offered on a pass/fail basis.
Instructor: Staff
CS 81 abc
Undergraduate Projects in Computer Science
Units are assigned in accordance with work accomplished
Prerequisites: Consent of supervisor is required before registering.
Supervised research or development in computer science by undergraduates. The topic must be approved by the project supervisor, and a formal final report must be presented on completion of research.
This course can (with approval) be used to satisfy the project requirement for the CS major. Graded pass/fail.
Instructor: Staff
CS 90
Undergraduate Reading in Computer Science
Units are assigned in accordance with work accomplished
Prerequisites: Consent of supervisor is required before registering.
Supervised reading in computer science by undergraduates. The topic must be approved by the reading supervisor, and a formal final report must be presented on completion of the term. Graded pass/
Instructor: Staff
CS 101
Special Topics in Computer Science
Units in accordance with work accomplished | Offered by announcement.
Prerequisites: CS 21 and CS 38, or instructor's permission.
The topics covered vary from year to year, depending on the students and staff. Primarily for undergraduates.
Instructor: Bouman
CS 102 abc
Seminar in Computer Science
3, 6, or 9 units as arranged with the instructor
Instructor's permission required.
Instructor: Staff
CS 103 abc
Reading in Computer Science
3, 6, or 9 units as arranged with the instructor
Instructor's permission required.
Instructor: Staff
HPS/Pl/CS 110
Causation and Explanation
9 units (3-0-6) | second term
An examination of theories of causation and explanation in philosophy and neighboring disciplines. Topics discussed may include probabilistic and counterfactual treatments of causation, the role of
statistical evidence and experimentation in causal inference, and the deductive-nomological model of explanation. The treatment of these topics by important figures from the history of philosophy
such as Aristotle, Descartes, and Hume may also be considered.
Instructor: Eberhardt
CS 111
Graduate Programming Practicum
3 units (0-3-0) | first, second terms
Prerequisites: CS 1 or equivalent.
A self-paced lab that provides students with extra practice and supervision in transferring their programming skills to a particular programming language. The course can be used for any language of
the student's choosing, subject to approval by the instructor. A series of exercises guide the student through the pragmatic use of the chosen language, building their familiarity, experience, and
style. More advanced students may propose their own programming project as the target demonstration of their new language skills. This course is available for graduate students only. CS 111 may be
repeated for credit of up to a total of nine units. Undergraduates should register for CS 11.
Instructor: Vanier
Ec/ACM/CS 112
Bayesian Statistics
9 units (3-0-6) | second term
Prerequisites: Ma 3, ACM/EE/IDS 116 or equivalent.
This course provides an introduction to Bayesian Statistics and its applications to data analysis in various fields. Topics include: discrete models, regression models, hierarchical models, model
comparison, and MCMC methods. The course combines an introduction to basic theory with a hands-on emphasis on learning how to use these methods in practice so that students can apply them in their
own work. Previous familiarity with frequentist statistics is useful but not required.
Instructor: Rangel
CS 115
Functional Programming
9 units (3-4-2) | third term
Prerequisites: CS 1 and CS 4.
This course is a both a theoretical and practical introduction to functional programming, a paradigm which allows programmers to work at an extremely high level of abstraction while simultaneously
avoiding large classes of bugs that plague more conventional imperative and object-oriented languages. The course will introduce and use the lazy functional language Haskell exclusively. Topics
include: recursion, first-class functions, higher-order functions, algebraic data types, polymorphic types, function composition, point-free style, proving functions correct, lazy evaluation, pattern
matching, lexical scoping, type classes, and modules. Some advanced topics such as monad transformers, parser combinators, dynamic typing, and existential types are also covered.
Instructor: Vanier
CS 116
Reasoning about Program Correctness
9 units (3-0-6) | first term
Prerequisites: CS 1 or equivalent.
This course presents the use of logic and formal reasoning to prove the correctness of sequential and concurrent programs. Topics in logic include propositional logic, basics of first-order logic,
and the use of logic notations for specifying programs. The course presents a programming notation and its formal semantics, Hoare logic and its use in proving program correctness, predicate
transformers and weakest preconditions, and fixed-point theory and its application to proofs of programs. Not offered 2023-24.
Instructor: Staff
Ma/CS 117 abc
Computability Theory
9 units (3-0-6) | first, second, third terms
Prerequisites: Ma 5 or equivalent, or instructor's permission.
Various approaches to computability theory, e.g., Turing machines, recursive functions, Markov algorithms; proof of their equivalence. Church's thesis. Theory of computable functions and effectively
enumerable sets. Decision problems. Undecidable problems: word problems for groups, solvability of Diophantine equations (Hilbert's 10th problem). Relations with mathematical logic and the Gödel
incompleteness theorems. Decidable problems, from number theory, algebra, combinatorics, and logic. Complexity of decision procedures. Inherently complex problems of exponential and superexponential
difficulty. Feasible (polynomial time) computations. Polynomial deterministic vs. nondeterministic algorithms, NP-complete problems and the P = NP question. Not offered 2023-24.
CS 118
Automata-Theoretic Software Analysis
9 units (3-3-3) | second term
An introduction to the use of automata theory in the formal analysis of both concurrent and sequentially executing software systems. The course covers the use of logic model checking with linear
temporal logic and interactive techniques for property-based static source code analysis.
Instructor: Holzmann
EE/CS 119 abc
Advanced Digital Systems Design
9 units (3-3-3) first, second term; 9 units (1-8-0) third term | first, second, third terms
Prerequisites: EE/CS 10 a or CS 24.
Advanced digital design as it applies to the design of systems using PLDs and ASICs (in particular, gate arrays and standard cells). The course covers both design and implementation details of
various systems and logic device technologies. The emphasis is on the practical aspects of ASIC design, such as timing, testing, and fault grading. Topics include synchronous design, state machine
design, arithmetic circuit design, application-specific parallel computer design, design for testability, CPLDs, FPGAs, VHDL, standard cells, timing analysis, fault vectors, and fault grading.
Students are expected to design and implement both systems discussed in the class as well as self-proposed systems using a variety of technologies and tools. Given in alternate years; Not offered
Instructor: George
CS/Ph 120
Quantum Cryptography
9 units (3-0-6) | first term
Prerequisites: Ma 1 b, Ph 2 b or Ph 12 b, CS 21, CS 38 or equivalent recommended (or instructor's permission).
This course is an introduction to quantum cryptography: how to use quantum effects, such as quantum entanglement and uncertainty, to implement cryptographic tasks with levels of security that are
impossible to achieve classically. The course covers the fundamental ideas of quantum information that form the basis for quantum cryptography, such as entanglement and quantifying quantum knowledge.
We will introduce the security definition for quantum key distribution and see protocols and proofs of security for this task. We will also discuss the basics of device-independent quantum
cryptography as well as other cryptographic tasks and protocols, such as bit commitment or position-based cryptography. Not offered 2023-24.
Instructor: Staff
CS/IDS 121
Relational Databases
9 units (3-0-6) | second term
Prerequisites: CS 1 or equivalent.
Introduction to the basic theory and usage of relational database systems. It covers the relational data model, relational algebra, and the Structured Query Language (SQL). The course introduces the
basics of database schema design and covers the entity-relationship model, functional dependency analysis, and normal forms. Additional topics include other query languages based on the relational
calculi, data-warehousing and dimensional analysis, writing and using stored procedures, working with hierarchies and graphs within relational databases, and an overview of transaction processing and
query evaluation. Extensive hands-on work with SQL databases.
Instructor: Hovik
CS 124
Operating Systems
12 units (3-6-3) | third term
Prerequisites: CS 24.
This course explores the major themes and components of modern operating systems, such as kernel architectures, the process abstraction and process scheduling, system calls, concurrency within the
OS, virtual memory management, and file systems. Students must work in groups to complete a series of challenging programming projects, implementing major components of an instructional operating
system. Most programming is in C, although some IA32 assembly language programming is also necessary. Familiarity with the material in CS 24 is strongly advised before attempting this course.
Instructor: Pinkston
EE/CS/MedE 125
Digital Circuit Design with FPGAs and VHDL
9 units (3-6-0) | third term
Prerequisites: EE/CS 10 or equivalent.
Study of programmable logic devices (FPGAs). Detailed study of the VHDL language, accompanied by tutorials of popular synthesis and simulation tools. Review of combinational circuits (both logic and
arithmetic), followed by VHDL code for combinational circuits and corresponding FPGA-implemented designs. Review of sequential circuits, followed by VHDL code for sequential circuits and
corresponding FPGA-implemented designs. Review of finite state machines, followed by VHDL code for state machines and corresponding FPGA-implemented designs. Final project. The course includes a wide
selection of real-world projects, implemented and tested using FPGA boards. Not offered 2023-24.
Instructor: Staff
EE/Ma/CS 126 ab
Information Theory
9 units (3-0-6) | first, second terms
Prerequisites: Ma 3.
Shannon's mathematical theory of communication, 1948-present. Entropy, relative entropy, and mutual information for discrete and continuous random variables. Shannon's source and channel coding
theorems. Mathematical models for information sources and communication channels, including memoryless, Markov, ergodic, and Gaussian. Calculation of capacity and rate-distortion functions. Universal
source codes. Side information in source coding and communications. Network information theory, including multiuser data compression, multiple access channels, broadcast channels, and multiterminal
networks. Discussion of philosophical and practical implications of the theory. This course, when combined with EE 112, EE/Ma/CS/IDS 127, EE/CS 161, and EE/CS/IDS 167, should prepare the student for
research in information theory, coding theory, wireless communications, and/or data compression.
Instructors: Effros, Hamkins
EE/Ma/CS/IDS 127
Error-Correcting Codes
9 units (3-0-6) | third term
Prerequisites: EE 55 or equivalent.
This course develops from first principles the theory and practical implementation of the most important techniques for combating errors in digital transmission and storage systems. Topics include
highly symmetric linear codes, such as Hamming, Reed-Muller, and Polar codes; algebraic block codes, such as Reed-Solomon and BCH codes, including a self-contained introduction to the theory of
finite fields; and low-density parity-check codes. Students will become acquainted with encoding and decoding algorithms, design principles and performance evaluation of codes.
Instructor: Kostina
CS 128
Interactive Theorem Proving
9 units (3-0-6) | second term
Prerequisites: CS 4 or instructor's permission.
This course introduces students to the modern practice of interactive tactic-based theorem proving using the Coq theorem prover. Topics will be drawn from logic, programming languages and the theory
of computation. Topics will include: proof by induction, lists, higher-order functions, polymorphism, dependently-typed functional programming, constructive logic, the Curry-Howard correspondence,
modeling imperative programs, and other topics if time permits. Students will be graded partially on attendance and will be expected to participate in proving theorems in class.
Instructor: Vanier
ME/CS/EE 129
Experimental Robotics
9 units (1-7-1) | third term
Prerequisites: some experience with (i) Python programming (CS1, CS2, or equivalent), (ii) Hardware, Sensors, and Signal Processing (EE/ME7, ME8, EE1, or similar), and/or (iii) Robotic Devices (ME13,
ME72, or related), as evidenced to the instructor. Not recommended for first-year students.
This course covers the foundations of experimental realization on robotic systems. This includes software infrastructure to operate physical hardware, integrate various sensor modalities, and create
robust autonomous behaviors. Using the Python programming language, assignments will explore techniques from simple polling to interrupt driven and multi-threaded architectures, ultimately utilizing
the Robotic Operating System (ROS). Developments will be integrated on mobile robotic systems and demonstrated in the context of class projects.
Instructor: Niemeyer
CS 130
Software Engineering
9 units (3-3-3) | second term
Prerequisites: CS 2 and CS 3 (or equivalent).
This course presents a survey of software engineering principles relevant to all aspects of the software development lifecycle. Students will examine industry best practices in the areas of software
specification, development, project management, testing, and release management, including a review of the relevant research literature. Assignments give students the opportunity to explore these
topics in depth. Programming assignments use Python and Git, and students should be familiar with Python at a CS 1 level, and Git at a CS 2/CS 3 level, before taking the course.
Instructor: Pinkston
CS 131
Programming Languages
9 units (3-0-6) | third term
Prerequisites: CS 4.
CS 131 is a course on programming languages and their implementation. It teaches students how to program in a number of simplified languages representing the major programming paradigms in use today
(imperative, object-oriented, and functional). It will also teach students how to build and modify the implementations of these languages. Emphasis will not be on syntax or parsing but on the
essential differences in these languages and their implementations. Both dynamically-typed and statically-typed languages will be implemented. Relevant theory will be covered as needed.
Implementations will mostly be interpreters, but some features of compilers will be covered if time permits. Enrollment limited to 30 students.
Instructor: Vanier
CS 132
Web Development
9 units (3-0-6) | third term
Prerequisites: CS 1 or equivalent.
Covers full-stack web development with HTML5, CSS, client-side JS (ES6) and server-side JS (Node.js/Express) for web API development. Concepts including separation of concerns, the client-server
relationship, user experience, accessibility, and security will also be emphasized throughout the course. Assignments will alternate between formal and semi-structured student-driven projects,
providing students various opportunities to apply material to their individual interests. No prior web development background is required, though students who have prior experience may still benefit
from learning best practices and HTML5/ES6 standards.
Instructor: Hovik
ME/CS/EE 133 ab
9 units (3-2-4) | first, second terms
Prerequisites: ME/CS/EE 129, or Python programming experience, evidenced to instructor.
The course develops the core concepts of robotics. The first quarter focuses on classical robotic manipulation, including topics in rigid body kinematics and dynamics. It develops planar and 3D
kinematic formulations and algorithms for forward and inverse computations, Jacobians, and manipulability. The second quarter transitions to planning, navigation, and perception. Topics include
configuration space, sample-based planners, A* and D* algorithms, to achieve collision-free motions. Course work transitions from homework and programming assignments to more open-ended team-based
Instructor: Niemeyer
ME/CS/EE 134
Robotic Systems
9 units (1-7-1) | second term
Prerequisites: ME/CS/EE 133 a, or with permission of instructor.
This course builds up, and brings to practice, the elements of robotic systems at the intersection of hardware, kinematics and control, computer vision, and autonomous behaviors. It presents selected
topics from these domains, focusing on their integration into a full sense-think-act robot. The lectures will drive team-based projects, progressing from building custom robotic arms (5 to 7 degrees
of freedom) to writing all necessary software (utilizing the Robotics Operating system, ROS). Teams are required to implement and customize general concepts for their selected tasks. Working systems
will autonomously operate and demonstrate their capabilities during final presentations.
Instructor: Niemeyer
EE/CS/EST 135
Power System Analysis
9 units (3-3-3) | first term
Prerequisites: EE 44, Ma 2, or equivalent.
We are at the beginning of a historic transformation to decarbonize our energy system. This course introduces the basics of power systems analysis: phasor representation, 3-phase transmission system,
transmission line models, transformer models, per-unit analysis, network matrix, power flow equations, power flow algorithms, optimal powerflow (OPF) problems, unbalanced power flow analysis and
optimization,swing dynamics and stability.
Instructor: Low
EE/Ma/CS/IDS 136
Information Theory and Applications
9 units (3-0-6) | third term
Prerequisites: EE 55 or equivalent.
This class introduces information measures such as entropy, information divergence, mutual information, information density, and establishes the fundamental importance of those measures in data
compression, statistical inference, and error control. The course does not require a prior exposure to information theory; it is complementary to EE 126a. Not offered 2023-24.
Instructor: Kostina
CS 137
Real-World Algorithm Implementation
12 units (0-3-9) | third term
Prerequisites: CS 24.
This course introduces algorithms in the context of their usage in the real world. The course covers compression, semi-numerical algorithms, RSA cryptography, parsing, and string matching. The goal
of the course is for students to see how to use theoretical algorithms in real-world contexts, focusing both on correctness and the nitty-gritty details and optimizations. Students will choose to
implement projects based on depth in an area or breadth to cover all the topics.
Instructor: Blank
CS 138
Computer Algorithms
9 units (3-0-6) | third term
This course is identical to CS 38. Only graduate students for whom this is the first algorithms course are allowed to register for CS 138. See the CS 38 entry for prerequisites and course
Instructor: Schröder
CMS/CS/IDS 139
Analysis and Design of Algorithms
12 units (3-0-9) | first term
Prerequisites: Ma 2, Ma 3, Ma/CS 6 a, CS 21, CS 38/138, and ACM/EE/IDS 116 or CMS/ACM/EE 122 or equivalent.
This course develops core principles for the analysis and design of algorithms. Basic material includes mathematical techniques for analyzing performance in terms of resources, such as time, space,
and randomness. The course introduces the major paradigms for algorithm design, including greedy methods, divide-and-conquer, dynamic programming, linear and semidefinite programming, randomized
algorithms, and online learning.
Instructor: Mahadev
CS 141
Hack Society: Projects from the Public Sector
9 units (0-0-9) | third term
Prerequisites: CS/IDS 142, 143, CMS/CS/EE/IDS 144, or permission from instructor.
There is a large gap between the public and private sectors' effective use of technology. This gap presents an opportunity for the development of innovative solutions to problems faced by society.
Students will develop technology-based projects that address this gap. Course material will offer an introduction to the design, development, and analysis of digital technology with examples derived
from services typically found in the public sector. Not offered 2023-24.
Instructor: Ralph
CS/IDS 142
Distributed Computing
9 units (3-2-4) | first term
Prerequisites: CS 24, CS 38.
Programming distributed systems. Mechanics for cooperation among concurrent agents. Programming sensor networks and cloud computing applications. Applications of machine learning and statistics by
using parallel computers to aggregate and analyze data streams from sensors. Not offered 2023-24.
Instructor: Staff
CS/EE/IDS 143
Networks: Algorithms & Architecture
12 units (3-4-5) | third term
Prerequisites: Ma 2, Ma 3, Ma/CS 6 a, and CS 38, or instructor permission.
Social networks, the web, and the internet are essential parts of our lives, and we depend on them every day. CS/EE/IDS 143 and CMS/CS/EE/IDS 144 study how they work and the "big" ideas behind our
networked lives. In this course, the questions explored include: Why is an hourglass architecture crucial for the design of the Internet? Why doesn't the Internet collapse under congestion? How are
cloud services so scalable? How do algorithms for wireless and wired networks differ? For all these questions and more, the course will provide a mixture of both mathematical analysis and hands-on
labs. The course expects students to be comfortable with graph theory, probability, and basic programming.
Instructor: Wierman
CMS/CS/EE/IDS 144
Networks: Structure & Economics
12 units (3-4-5) | second term
Prerequisites: Ma 2, Ma 3, Ma/CS 6 a, and CS 38, or instructor permission.
Social networks, the web, and the internet are essential parts of our lives, and we depend on them every day. CS/EE/IDS 143 and CMS/CS/EE/IDS 144 study how they work and the "big" ideas behind our
networked lives. In this course, the questions explored include: What do networks actually look like (and why do they all look the same)?; How do search engines work?; Why do epidemics and memes
spread the way they do?; How does web advertising work? For all these questions and more, the course will provide a mixture of both mathematical analysis and hands-on labs. The course expects
students to be comfortable with graph theory, probability, and basic programming.
Instructor: Mazumdar
CS/EE 145
Projects in Networking
9 units (0-0-9) | third term
Prerequisites: Either CMS/CS/EE/IDS 144 or CS/IDS 142 in the preceding term, or instructor permission.
Students are expected to execute a substantial project in networking, write up a report describing their work, and make a presentation.
Instructor: Wierman
CS/EE 146
Control and Optimization of Networks
9 units (3-3-3) | second term
Prerequisites: Ma 2, Ma 3 or instructor's permission.
This is a research-oriented course meant for undergraduates and beginning graduate students who want to learn about current research topics in networks such as the Internet, power networks, social
networks, etc. The topics covered in the course will vary, but will be pulled from current research in the design, analysis, control, and optimization of networks.
Instructor: Low
EE/CS 147
Digital Ventures Design
9 units (3-3-3) | first term
This course aims to offer the scientific foundations of analysis, design, development, and launching of innovative digital products and study elements of their success and failure. The course
provides students with an opportunity to experience combined team-based design, engineering, and entrepreneurship. The lectures present a disciplined step-by-step approach to develop new ventures
based on technological innovation in this space, and with invited speakers, cover topics such as market analysis, user/product interaction and design, core competency and competitive position,
customer acquisition, business model design, unit economics and viability, and product planning. Throughout the term students will work within an interdisciplinary team of their peers to conceive an
innovative digital product concept and produce a business plan and a working prototype. The course project culminates in a public presentation and a final report. Every year the course and projects
focus on a particular emerging technology theme. Not offered 2023-24.
EE/CNS/CS 148
Advanced Topics in Vision: Large Language and Vision Models
12 units (3-0-9) | third term
Prerequisites: undergraduate calculus, linear algebra, statistics, computer programming, machine learning. Experience programming in Python, Numpy and PyTorch.
The class will focus on large language models (LLMs) and language-and-vision models, as well as on generative methods for artificial intelligence (AI). Topics include deep neural
networks,transformers, large language models, generative adversarial networks, diffusion models, and applications of such architectures and methods to image analysis, image synthesis, and
text-to-image translation.
Instructors: Perona, Gkioxari
CS/Ec 149
Algorithmic Economics
9 units (3-0-6) | third term
This course will equip students to engage with active research at the intersection of social and information sciences, including: algorithmic game theory and mechanism design; auctions; matching
markets; and learning in games.
Instructor: Niemeyer
CS/IDS 150 ab
Probability and Algorithms
9 units (3-0-6) | first, third terms
Prerequisites: part a: CS 38 and Ma 5 abc; part b: part a or another introductory course in discrete probability.
Part a: The probabilistic method and randomized algorithms. Deviation bounds, k-wise independence, graph problems, identity testing, derandomization and parallelization, metric space embeddings,
local lemma. Part b: Further topics such as weighted sampling, epsilon-biased sample spaces, advanced deviation inequalities, rapidly mixing Markov chains, analysis of boolean functions, expander
graphs, and other gems in the design and analysis of probabilistic algorithms. Parts a & b are given in alternate years. Not offered 2023-24.
Instructor: Schulman
CS 151
Complexity Theory
12 units (3-0-9) | third term
Prerequisites: CS 21 and CS 38, or instructor's permission.
This course describes a diverse array of complexity classes that are used to classify problems according to the computational resources (such as time, space, randomness, or parallelism) required for
their solution. The course examines problems whose fundamental nature is exposed by this framework, the known relationships between complexity classes, and the numerous open problems in the area. Not
offered 2023-24.
Instructor: Umans
CS 152
Introduction to Cryptography
12 units (3-0-9) | first term
Prerequisites: Ma 1 b, CS 21, CS 38 or equivalent recommended.
This course is an introduction to the foundations of cryptography. The first part of the course introduces fundamental constructions in private-key cryptography, including one-way functions,
pseudo-random generators and authentication, and in public-key cryptography, including trapdoor one-way functions, collision-resistant hash functions and digital signatures. The second part of the
course covers selected topics such as interactive protocols and zero knowledge, the learning with errors problem and homomorphic encryption, and quantum cryptography: quantum money, quantum key
distribution. The course is mostly theoretical and requires mathematical maturity. There will be a small programming component. Not offered 2023-24.
Instructor: Vidick
CS/IDS 153
Current Topics in Theoretical Computer Science
9 units (3-0-6) | first, second, third terms
Prerequisites: CS 21 and CS 38, or instructor's permission.
May be repeated for credit, with permission of the instructor. Students in this course will study an area of current interest in theoretical computer science. The lectures will cover relevant
background material at an advanced level and present results from selected recent papers within that year's chosen theme. Students will be expected to read and present a research paper.
Instructors: Mahadev, Schulman, Umans
CMS/CS/CNS/EE/IDS 155
Machine Learning & Data Mining
12 units (3-3-6) | second term
Prerequisites: CS/CNS/EE 156 a. Having a sufficient background in algorithms, linear algebra, calculus, probability, and statistics, is highly recommended.
This course will cover popular methods in machine learning and data mining, with an emphasis on developing a working understanding of how to apply these methods in practice. The course will focus on
basic foundational concepts underpinning and motivating modern machine learning and data mining approaches. We will also discuss recent research developments.
Instructor: Yue
CS/CNS/EE 156 ab
Learning Systems
9 units (3-1-5) | first, third terms
Prerequisites: Ma 2 and CS 2, or equivalent.
Introduction to the theory, algorithms, and applications of automated learning. How much information is needed to learn a task, how much computation is involved, and how it can be accomplished.
Special emphasis will be given to unifying the different approaches to the subject coming from statistics, function approximation, optimization, pattern recognition, and neural networks.
Instructor: Abu-Mostafa
IDS/ACM/CS 157
Statistical Inference
9 units (3-2-4) | third term
Prerequisites: ACM/EE/IDS 116, Ma 3.
Statistical Inference is a branch of mathematical engineering that studies ways of extracting reliable information from limited data for learning, prediction, and decision making in the presence of
uncertainty. This is an introductory course on statistical inference. The main goals are: develop statistical thinking and intuitive feel for the subject; introduce the most fundamental ideas,
concepts, and methods of statistical inference; and explain how and why they work, and when they don't. Topics covered include summarizing data, fundamentals of survey sampling, statistical
functionals, jackknife, bootstrap, methods of moments and maximum likelihood, hypothesis testing, p-values, the Wald, Student's t-, permutation, and likelihood ratio tests, multiple testing,
scatterplots, simple linear regression, ordinary least squares, interval estimation, prediction, graphical residual analysis.
Instructor: Zuev
IDS/ACM/CS 158
Fundamentals of Statistical Learning
9 units (3-3-3) | second term
Prerequisites: ACM/IDS 104, ACM/EE/IDS 116, IDS/ACM/CS 157.
The main goal of the course is to provide an introduction to the central concepts and core methods of statistical learning, an interdisciplinary field at the intersection of applied mathematics,
statistical inference, and machine learning. The course focuses on the mathematics and statistics of methods developed for learning from data. Students will learn what methods for statistical
learning exist, how and why they work (not just what tasks they solve and in what built-in functions they are implemented), and when they are expected to perform poorly. The course is oriented for
upper level undergraduate students in IDS, ACM, and CS and graduate students from other disciplines who have sufficient background in linear algebra, probability, and statistics. The course is a
natural continuation of IDS/ACM/CS 157 and it can be viewed as a statistical analog of CMS/CS/CNS/EE/IDS 155. Topics covered include elements of statistical decision theory, regression and
classification problems, nearest-neighbor methods, curse of dimensionality, linear regression, model selection, cross-validation, subset selection, shrinkage methods, ridge regression, LASSO,
logistic regression, linear and quadratic discriminant analysis, support-vector machines, tree-based methods, bagging, and random forests.
Instructor: Zuev
CS/CNS/EE/IDS 159
Advanced Topics in Machine Learning
9 units (3-0-6) | third term
Prerequisites: CS 155; strong background in statistics, probability theory, algorithms, and linear algebra; background in optimization is a plus as well.
This course focuses on current topics in machine learning research. This is a paper reading course, and students are expected to understand material directly from research articles. Students are also
expected to present in class, and to do a final project.
Instructor: Yue
EE/CS/IDS 160
Fundamentals of Information Transmission and Storage
9 units (3-0-6) | second term
Prerequisites: EE 55 or equivalent.
Basics of information theory: entropy, mutual information, source and channel coding theorems. Basics of coding theory: error-correcting codes for information transmission and storage, block codes,
algebraic codes, sparse graph codes. Basics of digital communications: sampling, quantization, digital modulation, matched filters, equalization.
Instructor: Hassibi
EE/CS 161
Big Data Networks
9 units (3-0-6) | third term
Prerequisites: Linear Algebra ACM/IDS 104 and Introduction to Probability Models ACM/EE/IDS 116 or their equivalents.
Next generation networks will have tens of billions of nodes forming cyber-physical systems and the Internet of Things. A number of fundamental scientific and technological challenges must be
overcome to deliver on this vision. This course will focus on (1) How to boost efficiency and reliability in large networks; the role of network coding, distributed storage, and distributed caching;
(2) How to manage wireless access on a massive scale; modern random access and topology formation techniques; and (3) New vistas in big data networks, including distributed computing over networks
and crowdsourcing. A selected subset of these problems, their mathematical underpinnings, state-of-the-art solutions, and challenges ahead will be covered. Not offered 2023-24.
CS/IDS 162
Data, Algorithms and Society
9 units (3-0-6) | second term
Prerequisites: CS 38 and CS 155 or 156 a.
This course examines algorithms and data practices in fields such as machine learning, privacy, and communication networks through a social lens. We will draw upon theory and practices from art,
media, computer science and technology studies to critically analyze algorithms and their implementations within society. The course includes projects, lectures, readings, and discussions. Students
will learn mathematical formalisms, critical thinking and creative problem solving to connect algorithms to their practical implementations within social, cultural, economic, legal and political
contexts. Enrollment by application. Taught concurrently with VC 72 and can only be taken once as CS/IDS 162 or VC 72.
Instructor: Ralph
CS 164
9 units (3-0-6) | first term
Prerequisites: CS 4 or instructor's permission. CS 24 and CS 131 are strongly recommended but not required. Limit 20 students.
This course covers the construction of compilers: programs which convert program source code to machine code which is directly executable on modern hardware. The course takes a bottom-up approach: a
series of compilers will be built, all of which generate assembly code for x86 processors, with each compiler adding features. The final compiler will compile a full-fledged high-level programming
language to assembly language. Topics covered include register allocation, conditionals, loops and dataflow analysis, garbage collection, lexical scoping, and type checking. This course is
programming intensive. All compilers will be written in the OCaml programming language.
Instructor: Vanier
CS/CNS/EE/IDS 165
Foundations of Machine Learning and Statistical Inference
12 units (3-3-6) | second term
Prerequisites: CMS/ACM/EE 122, ACM/EE/IDS 116, CS 156 a, ACM/CS/IDS 157 or instructor's permission.
The course assumes students are comfortable with analysis, probability, statistics, and basic programming. This course will cover core concepts in machine learning and statistical inference. The ML
concepts covered are spectral methods (matrices and tensors), non-convex optimization, probabilistic models, neural networks, representation theory, and generalization. In statistical inference, the
topics covered are detection and estimation, sufficient statistics, Cramer-Rao bounds, Rao-Blackwell theory, variational inference, and multiple testing. In addition to covering the core concepts,
the course encourages students to ask critical questions such as: How relevant is theory in the age of deep learning? What are the outstanding open problems? Assignments will include exploring
failure modes of popular algorithms, in addition to traditional problem-solving type questions.
Instructor: Anandkumar
CS/EE/IDS 166
Computational Cameras
12 units (3-3-6) | third term
Prerequisites: ACM 104 or ACM 107 or equivalent.
Computational cameras overcome the limitations of traditional cameras, by moving part of the image formation process from hardware to software. In this course, we will study this emerging
multi-disciplinary field at the intersection of signal processing, applied optics, computer graphics, and vision. At the start of the course, we will study modern image processing and image editing
pipelines, including those encountered on DSLR cameras and mobile phones. Then we will study the physical and computational aspects of tasks such as coded photography, light-field imaging,
astronomical imaging, medical imaging, and time-of-flight cameras. The course has a strong hands-on component, in the form of homework assignments and a final project. In the homework assignments,
students will have the opportunity to implement many of the techniques covered in the class. Example homework assignments include building an end-to-end HDR (High Dynamic Range) imaging pipeline,
implementing Poisson image editing, refocusing a light-field image, and making your own lensless "scotch-tape" camera.
Instructor: Bouman
EE/CS/IDS 167
Introduction to Data Compression and Storage
9 units (3-0-6) | third term
Prerequisites: Ma 3 or ACM/EE/IDS 116.
The course will introduce the students to the basic principles and techniques of codes for data compression and storage. The students will master the basic algorithms used for lossless and lossy
compression of digital and analog data and the major ideas behind coding for flash memories. Topics include the Huffman code, the arithmetic code, Lempel-Ziv dictionary techniques, scalar and vector
quantizers, transform coding; codes for constrained storage systems. Given in alternate years; not offered 2023-24.
ME/CS/EE 169
Mobile Robots
9 units (1-7-1) | third term
Prerequisites: ME/CS/EE 133 b, or with permission of instructor.
Mobile robots need to perceive their environment and localize themselves with respect to maps thereof. They further require planners to move along collision-free paths. This course builds up mobile
robots in team-based projects. Teams will write all necessary software from low-level hardware I/O to high level algorithms, using the robotic operating system (ROS). The final systems will
autonomously maneuver to reach their goals or track various objectives.
Instructor: Niemeyer
CS/CNS 171
Computer Graphics Laboratory
12 units (3-6-3) | first term
Prerequisites: Extensive programming experience and proficiency in linear algebra, starting with CS 2 and Ma 1 b.
This is a challenging course that introduces the basic ideas behind computer graphics and some of its fundamental algorithms. Topics include graphics input and output, the graphics pipeline, sampling
and image manipulation, three-dimensional transformations and interactive modeling, basics of physically based modeling and animation, simple shading models and their hardware implementation, and
some of the fundamental algorithms of scientific visualization. Students will be required to perform significant implementations.
Instructor: Barr
CS/CNS 174
Computer Graphics Projects
12 units (3-6-3) | third term
Prerequisites: Extensive programming experience, CS/CNS 171 or instructor's permission.
This laboratory class offers students an opportunity for independent work including recent computer graphics research. In coordination with the instructor, students select a computer graphics
modeling, rendering, interaction, or related algorithm and implement it. Students are required to present their work in class and discuss the results of their implementation and possible improvements
to the basic methods. May be repeated for credit with instructor's permission. Not offered 2023-24.
Instructor: Barr
EE/CS/MedE 175
Advanced Topics in Digital Design with FPGAs and VHDL
9 units (3-6-0) | third term
Prerequisites: EE/CS/MedE 125 or equivalent.
Quick review of the VHDL language and RTL concepts. Dealing with sophisticated, multi-dimensional data types in VHDL. Dealing with multiple time domains. Transfer of control versus data between clock
domains. Clock division and multiplication. Using PLLs. Dealing with global versus local and synchronous versus asynchronous resets. How to measure maximum speed in FPGAs (for both registered and
unregistered circuits). The (often) hard task of time closure. The subtleties of the time behavior in state machines (a major source of errors in large, complex designs). Introduction to simulation.
Construction of VHDL testbenches for automated testing. Dealing with files in simulation. All designs are physically implemented using FPGA boards. Not offered 2023-24.
Instructor: Staff
CS 176
Computer Graphics Research
9 units (3-3-3) | second term
Prerequisites: CS/CNS 171, or 173, or 174.
The course will go over recent research results in computer graphics, covering subjects from mesh processing (acquisition, compression, smoothing, parameterization, adaptive meshing), simulation for
purposes of animation, rendering (both photo- and nonphotorealistic), geometric modeling primitives (image based, point based), and motion capture and editing. Other subjects may be treated as they
appear in the recent literature. The goal of the course is to bring students up to the frontiers of computer graphics research and prepare them for their own research. Not offered 2023-24.
Instructor: Staff
CS/ACM 177 ab
Discrete Differential Geometry: Theory and Applications
9 units (3-3-3) | second term
Working knowledge of multivariate calculus and linear algebra as well as fluency in some implementation language is expected. Subject matter covered: differential geometry of curves and surfaces,
classical exterior calculus, discrete exterior calculus, sampling and reconstruction of differential forms, low dimensional algebraic and computational topology, Morse theory, Noether's theorem,
Helmholtz-Hodge decomposition, structure preserving time integration, connections and their curvatures on complex line bundles. Applications include elastica and rods, surface parameterization,
conformal surface deformations, computation of geodesics, tangent vector field design, connections, discrete thin shells, fluids, electromagnetism, and elasticity. Part b not offered 2023-24.
Instructor: Schröder
CS/IDS 178
Numerical Algorithms and their Implementation
9 units (3-3-3) | third term
Prerequisites: CS 2.
This course gives students the understanding necessary to choose and implement basic numerical algorithms as needed in everyday programming practice. Concepts include: sources of numerical error,
stability, convergence, ill-conditioning, and efficiency. Algorithms covered include solution of linear systems (direct and iterative methods), orthogonalization, SVD, interpolation and
approximation, numerical integration, solution of ODEs and PDEs, transform methods (Fourier, Wavelet), and low rank approximation such as multipole expansions. Not offered 2023-24.
Instructor: Staff
CS 179
GPU Programming
9 units (3-3-3) | third term
Prerequisites: Good working knowledge of C/C++.
Some experience with computer graphics algorithms preferred. The use of Graphics Processing Units for computer graphics rendering is well known, but their power for general parallel computation is
only recently being explored. Parallel algorithms running on GPUs can often achieve up to 100x speedup over similar CPU algorithms. This course covers programming techniques for the Graphics
processing unit, focusing on visualization and simulation of various systems. Labs will cover specific applications in graphics, mechanics, and signal processing. The course will use nVidia's
parallel computing architecture, CUDA. Labwork requires extensive programming.
Instructor: Barr
CS 180
Master’s Thesis Research
Units (total of 45) are determined in accordance with work accomplished
Instructor: Staff
Bi/BE/CS 183
Introduction to Computational Biology and Bioinformatics
9 units (3-0-6) | second term
Prerequisites: Bi 8, CS 2, Ma 3; or BE/Bi 103 a; or instructor's permission.
Biology is becoming an increasingly data-intensive science. Many of the data challenges in the biological sciences are distinct from other scientific disciplines because of the complexity involved.
This course will introduce key computational, probabilistic, and statistical methods that are common in computational biology and bioinformatics. We will integrate these theoretical aspects to
discuss solutions to common challenges that reoccur throughout bioinformatics including algorithms and heuristics for tackling DNA sequence alignments, phylogenetic reconstructions, evolutionary
analysis, and population and human genetics. We will discuss these topics in conjunction with common applications including the analysis of high throughput DNA sequencing data sets and analysis of
gene expression from RNA-Seq data sets.
Instructor: Staff
CNS/Bi/EE/CS/NB 186
Vision: From Computational Theory to Neuronal Mechanisms
12 units (4-4-4) | Second term
Lecture, laboratory, and project course aimed at understanding visual information processing, in both machines and the mammalian visual system. The course will emphasize an interdisciplinary approach
aimed at understanding vision at several levels: computational theory, algorithms, psychophysics, and hardware (i.e., neuroanatomy and neurophysiology of the mammalian visual system). The course will
focus on early vision processes, in particular motion analysis, binocular stereo, brightness, color and texture analysis, visual attention and boundary detection. Students will be required to hand in
approximately three homework assignments as well as complete one project integrating aspects of mathematical analysis, modeling, physiology, psychophysics, and engineering. Given in alternate years;
offered 2023-24.
Instructors: Meister, Perona, Shimojo
CNS/Bi/Ph/CS/NB 187
Neural Computation
9 units (3-0-6) | third term
Prerequisites: introductory neuroscience (Bi 150 or equivalent); mathematical methods (Bi 195 or equivalent); scientific programming.
This course aims at a quantitative understanding of how the nervous system computes. The goal is to link phenomena across scales from membrane proteins to cells, circuits, brain systems, and
behavior. We will learn how to formulate these connections in terms of mathematical models, how to test these models experimentally, and how to interpret experimental data quantitatively. The
concepts will be developed with motivation from some of the fascinating phenomena of animal behavior, such as: aerobatic control of insect flight, precise localization of sounds, sensing of single
photons, reliable navigation and homing, rapid decision-making during escape, one-shot learning, and large-capacity recognition memory. Not offered 2023-2024.
Instructors: Meister, Rutishauser
BE/CS/CNS/Bi 191 ab
Biomolecular Computation
9 units (3-0-6) a; (2-4-3) b | second, third terms
Prerequisites: None. Recommended: BE/ChE 163, CS 21, or equivalent.
This course investigates computation by molecular systems, emphasizing models of computation based on the underlying physics, chemistry, and organization of biological cells. We will explore
programmability, complexity, simulation of, and reasoning about abstract models of chemical reaction networks, molecular folding, molecular self-assembly, and molecular motors, with an emphasis on
universal architectures for computation, control, and construction within molecular systems. If time permits, we will also discuss biological example systems such as signal transduction, genetic
regulatory networks, and the cytoskeleton; physical limits of computation, reversibility, reliability, and the role of noise, DNA-based computers and DNA nanotechnology. Part a develops fundamental
results; part b is a reading and research course: classic and current papers will be discussed, and students will do projects on current research topics.
Instructor: Winfree
BE/CS 196 ab
Design and Construction of Programmable Molecular Systems
a is 12 units (2-4-6) second term; b is 9 units (2-4-3) third term | second, third terms
Prerequisites: none.
This course will introduce students to the conceptual frameworks and tools of computer science as applied to molecular engineering, as well as to the practical realities of synthesizing and testing
their designs in the laboratory. In part a, students will design and construct DNA circuits and self-assembled DNA nanostructures, as well as quantitatively analyze the designs and the experimental
data. Students will learn laboratory techniques including fluorescence spectroscopy and atomic force microscopy and will use software tools and program in Mathematica. Part b is an open-ended design
and build project requiring instructor's permission for enrollment. Enrollment in part a is limited to 24 students, and part b limited to 8 students. Part b not offered 2023
Instructor: Qian
Ph/CS 219 abc
Quantum Computation
9 units (3-0-6) | first, second, third terms
Prerequisites: Ph 125 ab or equivalent.
The theory of quantum information and quantum computation. Overview of classical information theory, compression of quantum information, transmission of quantum information through noisy channels,
quantum error-correcting codes, quantum cryptography and teleportation. Overview of classical complexity theory, quantum complexity, efficient quantum algorithms, fault-tolerant quantum computation,
physical implementations of quantum computation.
Instructors: Kitaev, Preskill
CS 274 abc
Topics in Computer Graphics
9 units (3-3-3) | first, second, third terms
Prerequisites: instructor's permission.
Each term will focus on some topic in computer graphics, such as geometric modeling, rendering, animation, human-computer interaction, or mathematical foundations. The topics will vary from year to
year. May be repeated for credit with instructor's permission. Not offered 2023-24.
Instructor: Staff
CS 280
Research in Computer Science
Units in accordance with work accomplished
Approval of student's research adviser and option adviser must be obtained before registering.
Instructor: Staff
CS 282 abc
Reading in Computer Science
6 units or more by arrangement | first, second, third terms
Instructor's permission required.
Instructor: Staff
CS 286 abc
Seminar in Computer Science
3, 6, or 9 units, at the instructor's discretion
Instructor's permission required.
Instructor: Staff
CS 287
Center for the Mathematics of Information Seminar
3, 6, or 9 units, at the instructor's discretion | first, second, third terms
Instructor's permission required. Not offered 2023-24.
Instructor: Staff
Published Date: May 1, 2024 | {"url":"https://catalog.caltech.edu/archive/2023-24/2023-24/department/CS/","timestamp":"2024-11-11T12:49:01Z","content_type":"text/html","content_length":"316915","record_id":"<urn:uuid:c9c8e7a9-b302-4b24-b63d-44cf60ecb5b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00869.warc.gz"} |
Revision - 94a7e29 - version 0.7.2 – Software Heritage archive
authored by
Christian Thiele
21 March 2018, 08:27:24 UTC
, committed by
21 March 2018, 08:27:24 UTC
0 parent
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cutpointr.R
\title{Determine and evaluate optimal cutpoints}
\method{cutpointr}{default}(data, x, class, subgroup = NULL,
method = maximize_metric, metric = sum_sens_spec, pos_class = NULL,
neg_class = NULL, direction = NULL, boot_runs = 0,
use_midpoints = FALSE, break_ties = c, na.rm = FALSE,
allowParallel = FALSE, silent = FALSE, tol_metric = 1e-06, ...)
\method{cutpointr}{numeric}(x, class, subgroup = NULL,
method = maximize_metric, metric = sum_sens_spec, pos_class = NULL,
neg_class = NULL, direction = NULL, boot_runs = 0,
use_midpoints = FALSE, break_ties = median, na.rm = FALSE,
allowParallel = FALSE, silent = FALSE, tol_metric = 1e-06, ...)
\item{...}{Further optional arguments that will be passed to method.
minimize_metric and maximize_metric pass ... to metric.}
\item{data}{A data.frame with the data needed for x, class and subgroup.}
\item{x}{The variable name without quotes to be used for classification,
e.g. predictions, or an expression. The raw vector of values if the data argument
is unused.}
\item{class}{The variable name without quotes indicating class membership
or an expression. The raw vector of values if the data argument is unused.}
\item{subgroup}{An additional covariate that identifies subgroups or the raw data if
data = NULL. Separate optimal cutpoints will be determined per group.
Numeric, character and factor are allowed.}
\item{method}{(function) A function for determining cutpoints. Can
be user supplied or use some of the built in methods. See details.}
\item{metric}{(function) The function for computing a metric when using
maximize_metric or minimize_metric as method and and for the
out-of-bag values during bootstrapping. A way of internally validating the performance.
User defined functions can be supplied, see details.}
\item{pos_class}{(optional) The value of class that indicates the positive class.}
\item{neg_class}{(optional) The value of class that indicates the negative class.}
\item{direction}{(character, optional) Use ">=" or "<=" to indicate whether x
is supposed to be larger or smaller for the positive class.}
\item{boot_runs}{(numerical) If positive, this number of bootstrap samples
will be used to assess the variability and the out-of-sample performance.}
\item{use_midpoints}{(logical) If TRUE (default FALSE) the returned optimal
cutpoint will be the mean of the optimal cutpoint and the next highest
observation (for direction = ">") or the next lowest observation
(for direction = "<") which avoids biasing the optimal cutpoint.}
\item{break_ties}{If multiple cutpoints are found, they can be summarized using
this function, e.g. mean or median. To return all cutpoints use c as the function.}
\item{na.rm}{(logical) Set to TRUE (default FALSE) to keep only complete
cases of x, class and subgroup (if specified). Missing values with
na.rm = FALSE will raise an error.}
\item{allowParallel}{(logical) If TRUE, the bootstrapping will be parallelized
using foreach. A local cluster, for example, should be started manually
\item{silent}{(logical) If TRUE suppresses all messages.}
\item{tol_metric}{All cutpoints will be returned that lead to a metric
value in the interval [m_max - tol_metric, m_max + tol_metric] where
m_max is the maximum achievable metric value. This can be used to return
multiple decent cutpoints and to avoid floating-point problems. Not supported
by all \code{method} functions, see details.}
A cutpointr object which is also a data.frame and tbl_df.
Using predictions (or e.g. biological marker values) and binary class labels, this function
will determine "optimal" cutpoints using various selectable methods. The
methods for cutpoint determination can be evaluated using bootstrapping. An
estimate of the cutpoint variability and the out-of-sample performance will then
be returned.
If \code{direction} and/or \code{pos_class} and \code{neg_class} are not given, the function will
assume that higher values indicate the positive class and use the class
with a higher median as the positive class.
Different methods can be selected for determining the optimal cutpoint via
the method argument. The package includes the following method functions:
\item \code{maximize_metric}: Maximize the metric function
\item \code{minimize_metric}: Minimize the metric function
\item \code{maximize_loess_metric}: Maximize the metric function after LOESS
\item \code{minimize_loess_metric}: Minimize the metric function after LOESS
\item \code{maximize_spline_metric}: Maximize the metric function after spline
\item \code{minimize_spline_metric}: Minimize the metric function after spline
\item \code{maximize_boot_metric}: Maximize the metric function as a summary of
the optimal cutpoints in bootstrapped samples
\item \code{minimize_boot_metric}: Minimize the metric function as a summary of
the optimal cutpoints in bootstrapped samples
\item \code{oc_youden_kernel}: Maximize the Youden-Index after kernel smoothing
the distributions of the two classes
\item \code{oc_youden_normal}: Maximize the Youden-Index parametrically
assuming normally distributed data in both classes
\item \code{oc_manual}: Specify the cutpoint manually
User-defined functions can be supplied to method, too. As a reference,
the code of all included method functions can be accessed by simply typing
their name. To define a new method function, create a function that may take
as input(s):
\item \code{data}: A \code{data.frame} or \code{tbl_df}
\item \code{x}: (character) The name of the predictor or independent variable
\item \code{class}: (character) The name of the class or dependent variable
\item \code{metric_func}: A function for calculating a metric, e.g. accuracy
\item \code{pos_class}: The positive class
\item \code{neg_class}: The negative class
\item \code{direction}: ">=" if the positive class has higher x values, "<=" otherwise
\item \code{tol_metric}: (numeric) In the built-in methods a tolerance around
the optimal metric value
\item \code{use_midpoints}: (logical) In the built-in methods whether to
use midpoints instead of exact optimal cutpoints
\item \code{...} Further arguments
The \code{...} argument can be used to avoid an error if not all of the above
arguments are needed or in order to pass additional arguments to method.
The function should return a \code{data.frame} or \code{tbl_df} with
one row, the column "optimal_cutpoint", and an optional column with an arbitrary name
with the metric value at the optimal cutpoint.
Built-in metric functions include:
\item \code{accuracy}: Fraction correctly classified
\item \code{youden}: Youden- or J-Index = sensitivity + specificity - 1
\item \code{sum_sens_spec}: sensitivity + specificity
\item \code{sum_ppv_npv}: The sum of positive predictive value (PPV) and negative
predictive value (NPV)
\item \code{prod_sens_spec}: sensitivity * specificity
\item \code{prod_ppv_npv}: The product of positive predictive value (PPV) and
negative predictive value (NPV)
\item \code{cohens_kappa}: Cohen's Kappa
\item \code{abs_d_sens_spec}: The absolute difference between
sensitivity and specificity
\item \code{abs_d_ppv_npv}: The absolute difference between positive predictive
value (PPV) and negative predictive value (NPV)
\item \code{p_chisquared}: The p-value of a chi-squared test on the confusion
matrix of predictions and observations
\item \code{odds_ratio}: The odds ratio calculated as (TP / FP) / (FN / TN)
\item \code{risk_ratio}: The risk ratio (relative risk) calculated as
(TP / (TP + FN)) / (FP / (FP + TN))
\item positive and negative likelihood ratio calculated as
\code{plr} = true positive rate / false positive rate and
\code{nlr} = false negative rate / true negative rate
\item \code{misclassification_cost}: The sum of the misclassification cost of
false positives and false negatives fp * cost_fp + fn * cost_fn.
Additional arguments to cutpointr: \code{cost_fp}, \code{cost_fn}
\item \code{total_utility}: The total utility of true / false positives / negatives
calculated as utility_tp * TP + utility_tn * TN - cost_fp * FP - cost_fn * FN.
Additional arguments to cutpointr: \code{utility_tp}, \code{utility_tn},
\code{cost_fp}, \code{cost_fn}
\item \code{F1_score}: The F1-score (2 * TP) / (2 * TP + FP + FN)
Furthermore, the following functions are included which can be used as metric
functions but are more useful for plotting purposes, for example in
plot_cutpointr, or for defining new metric functions:
\code{tp}, \code{fp}, \code{tn}, \code{fn}, \code{tpr}, \code{fpr},
\code{tnr}, \code{fnr}, \code{false_omission_rate},
\code{false_discovery_rate}, \code{ppv}, \code{npv}, \code{precision},
\code{recall}, \code{sensitivity}, and \code{specificity}.
User defined metric functions can be created as well which can accept the following
inputs as vectors:
\item \code{tp}: Vector of true positives
\item \code{fp}: Vector of false positives
\item \code{tn}: Vector of true negatives
\item \code{fn}: Vector of false negatives
\item \code{...} If the metric function is used in conjunction with any of the
maximize / minimize methods, further arguments can be passed
The function should return a numeric vector or a matrix or a \code{data.frame}
with one column. If the column is named,
the name will be included in the output and plots. Avoid using names that
are identical to the column names that are by default returned by \pkg{cutpointr}.
If \code{boot_runs} is positive, that number of bootstrap samples will be drawn
and the optimal cutpoint using \code{method} will be determined. Additionally,
as a way of internal validation, the function in \code{metric} will be used to
score the out-of-bag predictions using the cutpoints determined by
\code{method}. Various default metrics are always included in the bootstrap results.
If multiple optimal cutpoints are found, the column optimal_cutpoint becomes a
list that contains the vector(s) of the optimal cutpoints.
If \code{use_midpoints = TRUE} the mean of the optimal cutpoint and the next
highest or lowest possible cutpoint is returned, depending on \code{direction}.
The \code{tol_metric} argument can be used to avoid floating-point problems
that may lead to exclusion of cutpoints that achieve the optimally achievable
metric value. Additionally, by selecting a large tolerance multiple cutpoints
can be returned that lead to decent metric values in the vicinity of the
optimal metric value. \code{tol_metric} is passed to metric and is only
supported by the maximization and minimization functions, i.e.
\code{maximize_metric}, \code{minimize_metric}, \code{maximize_loess_metric},
\code{minimize_loess_metric}, \code{maximize_spline_metric}, and
\code{minimize_spline_metric}. In \code{maximize_boot_metric} and
\code{minimize_boot_metric} multiple optimal cutpoints will be passed to the
\code{summary_func} of these two functions.
## Optimal cutpoint for dsi
opt_cut <- cutpointr(suicide, dsi, suicide)
## Predict class for new observations
predict(opt_cut, newdata = data.frame(dsi = 0:5))
## Supplying raw data, same result
cutpointr(x = suicide$dsi, class = suicide$suicide)
## direction, class labels, method and metric can be defined manually
## Again, same result
cutpointr(suicide, dsi, suicide, direction = ">=", pos_class = "yes",
method = maximize_metric, metric = youden)
## Optimal cutpoint for dsi, as before, but for the separate subgroups
opt_cut <- cutpointr(suicide, dsi, suicide, gender)
## Bootstrapping also works on individual subgroups
## low boot_runs for illustrative purposes
opt_cut <- cutpointr(suicide, dsi, suicide, gender, boot_runs = 5)
## Transforming variables (unrealistic, just to show the functionality)
opt_cut <- cutpointr(suicide, x = log(dsi + 1), class = suicide == "yes",
subgroup = dsi \%\% 2 == 0)
predict(opt_cut, newdata = data.frame(dsi = 1:3))
## Parallelized bootstrapping
cl <- makeCluster(2) # 2 cores
registerDoRNG(12) # Reproducible parallel loops using doRNG
opt_cut <- cutpointr(suicide, dsi, suicide, gender,
boot_runs = 10, allowParallel = TRUE)
## Robust cutpoint method using kernel smoothing for optimizing Youden-Index
opt_cut <- cutpointr(suicide, dsi, suicide, gender,
method = oc_youden_kernel)
Other main cutpointr functions: \code{\link{cutpointr_}},
\code{\link{predict.cutpointr}}, \code{\link{roc}} | {"url":"https://archive.softwareheritage.org/browse/revision/94a7e298b1a50d93e8a9ccb813a070f7b30f3da1/?path=man/cutpointr.Rd","timestamp":"2024-11-03T04:07:53Z","content_type":"text/html","content_length":"43545","record_id":"<urn:uuid:486486aa-e905-4402-87ef-d67483b02608>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00093.warc.gz"} |
Failure Criteria for Subway Tunnels Based on the Load-Unload Response Ratio Theorye
Shandong Transportation Institute, 250031 Jinan, China
GeoStruct Innovations
Volume 2, Issue 2, 2024
Pages 68-76
Received: 03-27-2024,
Revised: 05-17-2024,
Accepted: 06-02-2024,
Available online: 06-29-2024
View Full Article|Download PDF
This study employs a combination of geological investigation, numerical simulation, and theoretical analysis to evaluate the applicability of the load-unload response ratio (LURR) theory in urban
tunnels. The results indicate that using the sudden increase in the LURR at critical points or the equivalent plastic strain penetration between the tunnel and the ground surface as failure criteria
for subway tunnels is feasible. Under critical instability loads, the equivalent plastic strain zones in the surrounding rock penetrate to the surface during the construction phase, leading to severe
deformation of the tunnel chamber group and loss of load-bearing capacity in the surrounding rock. During the operation phase, the tunnel lining plays a primary load-bearing role. Under instability
loads, a butterfly-shaped failure zone appears in the surrounding rock. These findings can be utilized for the quantitative evaluation of the overall safety margin of urban subway tunnels.
Keywords: Subway tunnel, Load-unload response ratio, Equivalent plastic strain, Failure criteria
Cite this:
APA Style
IEEE Style
BibTex Style
MLA Style
Chicago Style
Chen Y. C., Cao H. L., Tian C. C., Sun J. B., & Ji W. H. (2024). Failure Criteria for Subway Tunnels Based on the Load-Unload Response Ratio Theorye. GeoStruct. Innov., 2(2), 68-76. https://doi.org/
©2024 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published
version is credited, under the
CC BY 4.0 license
Figure 1. Schematic of the load-unload process
Table 1. Physical and mechanical parameters of the rock mass | {"url":"https://www.acadlore.com/article/GSI/2024_2_2/gsi020202","timestamp":"2024-11-04T20:32:45Z","content_type":"text/html","content_length":"265067","record_id":"<urn:uuid:2e0383e2-2053-4dfe-9776-383485c66d56>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00647.warc.gz"} |
≫ How To Calculate True Bearing - The Dizaldo Blog!
Welcome, in this article, we will discuss how to calculate true bearing. True bearing is the direction measured clockwise from the north in degrees. This is useful in navigation when you need to
determine the direction of a particular destination or landmark. True bearing can be calculated using simple trigonometry.
Steps to Calculate True Bearing
Step 1: Determine the two points
The first step in calculating true bearing is to determine the two points of interest. For example, if you are navigating from point A to point B, point A would be your starting point and point B
would be your destination. Remember to label your points for easy reference.
Step 2: Measure the distance
The next step is to measure the distance between the two points. You can use a map or a GPS device to get the distance. The distance should be recorded in the same unit used for the radius of the
earth. Usually, miles or kilometers are used.
Step 3: Determine the latitude and longitude of the two points
The latitude and longitude of the two points can be obtained using a map or GPS device. The latitude is the angular distance of a location from the equator, while longitude is the angular distance of
a location from the prime meridian. The latitude and longitude should be recorded in degrees, minutes, and seconds.
Step 4: Calculate the difference in longitude
The next step is to calculate the difference in longitude between the two points. This can be done by subtracting the longitude of point A from the longitude of point B. The difference in longitude
should be in degrees.
Step 5: Calculate the true bearing
Finally, you can calculate the true bearing using the following formula:
True bearing = arctan(sin(Δlong) / (cos(lat[A]) * tan(lat[B]) - sin(lat[A]) * cos(Δlong)))
• Δlong is the difference in longitude in radians
• lat[A] is the latitude of point A in radians
• lat[B] is the latitude of point B in radians
• arctan is the inverse tangent function
• sin is the sine function
• cos is the cosine function
• tan is the tangent function
Example Calculation of True Bearing
Let's assume:
• Distance = 100 km
• Longitude[A] = -0.1278°
• Latitude[A] = 51.5074°
• Longitude[B] = -0.0877°
• Latitude[B] = 51.5134°
Step 1: Determine the two points
Point A is your starting point, and Point B is your destination.
Step 2: Measure the distance
The distance between Point A and Point B is 100 km.
Step 3: Determine the latitude and longitude of the two points
The latitude and longitude of Point A are 51.5074° and -0.1278°, respectively. The latitude and longitude of Point B are 51.5134° and -0.0877°, respectively.
Step 4: Calculate the difference in longitude
The difference in longitude between Point A and Point B is -0.0401°.
Step 5: Calculate the true bearing
Using the formula, we can calculate the true bearing as follows:
True bearing = arctan(sin(-0.0401°) / (cos(51.5074°) * tan(51.5134°) - sin(51.5074°) * cos(-0.0401°))) = 91.13°
Therefore, the true bearing from Point A to Point B is 91.13°.
Calculating true bearing may seem complicated, but it is an essential skill for navigation. Remember to label your points of interest and record the distance, latitude, and longitude in the correct
units. Calculating true bearing using trigonometry may give you a different answer than the one obtained from a compass, but it is more accurate. Practice using the formula with different points to
get a better understanding of how to calculate true bearing. | {"url":"https://www.dizaldo.co.za/how-to-calculate-true-bearing/","timestamp":"2024-11-08T08:12:21Z","content_type":"text/html","content_length":"56868","record_id":"<urn:uuid:7b1a2d6a-c4b3-42c8-91d4-c3a67b05c136>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00296.warc.gz"} |
Parabola Formula | Equation, Properties, Examples
Equation, Properties, Examples | Parabola Formula
The parabola is a fascinating and versatile geometric shape which managed to captured the attention of mathematicians and scientists for ages. Its unusual properties and plain yet exquisite equation
makes it a strong tool for shaping a wide array of real-life phenomena. From the flight path of a projectile to the shape of a satellite dish, the parabola performs an essential role in numerous
fields, consisting of architecture, engineering, physics, and mathematics.
A parabola is a type of conic portion, which is a curve created by overlapping a cone with a plane. The parabola is defined by a quadratic equation, and its features, for instance the directrix,
vertex, focus, and symmetry, provide precious insights into its performance and uses. By understanding the parabola formula and its characteristics, we could get a deeper admiration for this
fundamental geometric shape and its various uses.
In this article, we wish to study the parabola in detail, from its equation and properties to instances of in what way it could be applied in many domains. Even if you're a student, a working
professional, or simply interested regarding the parabola, this blog will provide a complete overview of this interesting and crucial idea.
Parabola Equation
The parabola is specified by a quadratic equation of the form:
y = ax^2 + bx + c
at this point a, b, and c are constants that determine the size, shape, and position of the parabola. The value of a determines if the parabola opens upward or downward. If a is greater than 0, the
parabola opens upward, and if a less than 0, the parabola opens downward. The vertex of the parabola is located at the point (-b/2a, c - b^2/4a).
Properties of the Parabola
Here are the properties of Parabola:
The vertex of the parabola is the point where the curve shifts direction. It is also the point where the axis of symmetry crosses the parabola. The axis of symmetry is a line which goes through the
vertex and divides the parabola within two equal portions.
The focus of the parabola is a point] on the axis of symmetry that is equidistant from the vertex and the directrix. The directrix is a line that is perpendicular to the axis of symmetry and located
at a distance of 1/4a units from the vertex.
The directrix is a line which is perpendicular to the axis of symmetry and located at a distance of 1/4a units from the vertex. Every points on the parabola are equal distance from the directrix and
the focus.
The parabola is symmetric with regard to its axis of symmetry. This means that if we reflect any point on one side of the axis of symmetry across the axis, we get a corresponding point on the other
side of the axis.
The parabola intersects the x-axis at two points, given by the formula:
x = (-b ± sqrt(b^2 - 4ac)) / 2a
The parabola intersects the y-axis at the coordinated (0, c).
Examples of Parabolas
Here are number of basic examples of Parabolas:
Example 1: Graphing a Parabola
Let's graph the parabola y = x^2 - 4x + 3. Primarily, we have to calculate the vertex, axis of symmetry, and intercepts. We can apply the formula:
vertex = (-b/2a, c - b^2/4a)
to calculate the vertex. Replacing in the values a = 1, b = -4, and c = 3, we obtain:
vertex = (2, -1)
So the vertex is located at the location (2, -1). The axis of symmetry is the line x = 2.
Subsequently, we can find the x-intercepts by taking y = 0 and solving for x. We get:
x^2 - 4x + 3 = 0
(x - 3)(x - 1) = 0
Therefore the parabola intersects the x-axis at x = 1 and x = 3.
In the end, the y-intercept is the coordinates (0, c) = (0, 3).
Applying this knowledge, we can sketch the graph of the parabola through plotting the vertex, the x-intercepts, and the y-intercept, and drawing the curve of the parabola within them.
Example 2: Using a Parabola in Physics
The parabolic curve of an object's trajectory is a common example of the parabola in physics. Once an object is thrown or launched into the air, it follows a course that is described with a parabolic
equation. The equation for the course of a projectile thrown from the ground at an angle θ through an initial velocity v is provided by:
y = xtan(θ) - (gx^2) / (2v^2cos^2(θ))
here g is the acceleration as a result of gravity, and x and y are the horizontal and vertical length traveled by the object, respectively.
The trajectory of the object is a parabolic curve, with the vertex at the location (0, 0) and the axis of symmetry parallel to the ground. The focus of the parabola represents the landing point of
the projectile, and the directrix depicts the height above the ground where the object would strike if it were not influenced by gravity.
Finally, the parabola formula and its properties perform an essential function in many fields of study, consisting of math, engineering, architecture, and physics. By knowing the equation of a
parabola, its properties for example the directrix, vertex, and focus, and symmetry, and its several utilizations, we could obtain a deeper comprehension of how parabolas work and how they can be
used to model real-world scenario.
Whether you're a learner finding it challenging to comprehend the theories of the parabola or a professional looking to utilize parabolic equations to real-world problems, it's crucial to possess a
solid groundwork in this elementary topic.
This's where Grade Potential Tutoring enters. Our experienced tutors are available online or face-to-face to offer personalized and productive tutoring services to help you conquer the parabola and
other math theories. Connect with us today to schedule a tutoring session and take your math skills to the next stage. | {"url":"https://www.riversideinhometutors.com/blog/equation-properties-examples-parabola-formula","timestamp":"2024-11-10T18:36:12Z","content_type":"text/html","content_length":"76383","record_id":"<urn:uuid:58512bcb-fa63-4ca7-9288-a8c4606c8bdd>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00748.warc.gz"} |
1202: Number system - The book of science
Leonardo Pisano Bigollo (Fibonacci) mathematics
Number system
In Liber Abaci, Leonardo Pisano Bigollo, known today as Fibonacci, the son of Bonacci, popularized the use of the nine numerals, the zero sign, and the concept of place value. In this book (the “Book
of Calculation”), Fibonacci showed how to solve business problems— conversions of currency and measurements, and calculations of profit and interest. Fibonacci also showed how to use the number
system for purely mathematical concepts— perfect numbers, primes, and series including the eponymous Fibonacci series. Finally Fibonacci described numeric and geometric approximations and irrational
numbers such as the square root of two.
The young Leonardo Pisano Bigollo was introduced to the Hindu-Arabic number system in Béjaïa, North Africa, where his father directed a customs house. Businessmen in this case, exchanging both goods
and ideas, had been more open than the academics. Subsequently, on business, Leonardo traveled to Egypt, Syria, Greece, Sicily, and Provence and learned from the leading Arab mathematicians.
Fibonacci sequence
One and one is two; one and two is three; two and three is five and so forth; add the last two numbers to get the next to construct the Fibonacci sequence, describing how breeding rabbits increase
and how leaves are arranged on a stem. Plus, the ratio of any two successive Fibonacci numbers approximates the golden number, phi, applicable to art, architecture, theories of beauty, or stock
market analysis. Given a simple relation, a sequence of simple additions, a pattern emerges.
Fibonacci contributed to Europe’s adoption of arithmetical methods using the modern numeral system, as opposed to using counting boards (like the abacus) with Roman numerals.
See also in The book of science:
Readings in wikipedia: | {"url":"https://sharpgiving.com/thebookofscience/items/p1202.html?f=algebra-sequence","timestamp":"2024-11-07T03:11:25Z","content_type":"text/html","content_length":"19482","record_id":"<urn:uuid:bd2d769f-5322-487c-a551-c8eb41dab326>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00495.warc.gz"} |
#1 Logarithm Calculator
Logarithm Calculator
A Logarithm Calculator is a tool that helps you find the logarithm of a number to a specified base. The logarithm represents the exponent or power to which the base must be raised to produce a given
What is the Purpose of a Logarithm Calculator?
The purpose of a Logarithm Calculator is to:
• Simplify Logarithmic Calculations: Logarithms can be complex to compute manually, especially for non-integer values. The calculator makes this process fast and accurate.
• Solve Mathematical Problems: It assists in solving logarithmic equations, often used in exponential growth or decay, financial modeling, and physics.
• Handle Different Bases: While the natural logarithm (base ee) and common logarithm (base 10) are common, the calculator can handle any base.
How is a Logarithm Calculated?
The logarithm of a number xx to a base b is defined as:
log[b](x) = yif and only ifb^y = x
• is the number.
• is the base.
• is the logarithm result.
For example:
• log[10](100) = because 10^2 = 100
• log[2](8)= because 2^3 = 8
What Features Should a Logarithm Calculator Have?
A good Logarithm Calculator should include:
1. Base Selection: It should allow the user to choose any base (e.g., base 2, base 10, base ee for natural logs).
2. Real-Time Calculation: Once the base and number are entered, the result should appear instantly.
3. Support for Natural and Common Logarithms: It should provide shortcuts for base 10 and base ee (natural logarithm).
4. Clear and Reset Options: Users can clear past calculations easily.
How to Use a Logarithm Calculator?
To use a Logarithm Calculator:
• Enter the Base: Input the base of the logarithm. If calculating the natural logarithm, choose ee, and for common logarithms, choose base 10.
• Enter the Number: Input the number for which you want to calculate the logarithm.
• View the Result: The calculator will provide the logarithm of the number to the specified base.
Why Use a Logarithm Calculator?
The benefits of using a Logarithm Calculator include:
• Accuracy: It ensures precise logarithmic calculations, eliminating manual errors.
• Convenience: It simplifies the process of working with logarithmic values, especially for larger or more complex numbers.
• Efficiency: It quickly computes logarithms for any base, saving time in mathematical, scientific, or financial computations.
Logarithm Calculator FAQs
What is a logarithm calculator?
A tool that computes the logarithm of a number to a specified base.
How do I use a logarithm calculator?
Enter the number and base, then press calculate to get the logarithm result.
What is the difference between common logarithms and natural logarithms?
Common logarithms (log) use base 10, natural logarithms (ln) use base e (~2.718).
Can a logarithm calculator handle negative numbers or zero?
No, logarithms are only defined for positive numbers.
Why are logarithms important and where are they used?
They simplify complex calculations and are used in science, engineering, and mathematics.
Related Posts
Related Tags
logarithm calculator with steps, antilog calculator, log calculator base 2, logarithm calculator mathway, natural logarithm calculator, scientific calculator log, log calculator base 10, log base | {"url":"https://toolconverter.com/logarithm-calculator/","timestamp":"2024-11-07T12:43:21Z","content_type":"text/html","content_length":"193602","record_id":"<urn:uuid:2507f51e-3658-4252-975a-871c8c0e8a97>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00325.warc.gz"} |
April 2021
So I was doing a few leetcode questions today. Honestly, I just found this one and it seemed interesting because it was easy, and it had multiple solutions to the problem. You could take multiple
jabs at it from different angles. For example, you could simply do math operations on it and get the modulo of 10 of the number which would give you the last digit, or you could do string
You could also create a stack and a queue of each value and do the modulo of 10 for each number and then compare the stack and queue values because of the FIFO/LIFO rules of these data structures.
However, this is not only inefficient in time complexity, but also in space because we are creating two unnecessary data structures.
We could also just convert the number to a string and then compare each value from the ends, for example:
bool isPalindrome(int x) {
string s = to_string(x);
int len = s.size() - 1;
int n = 0;
while(n < len)
if (s[n] != s[len])
return false;
return true;
However, this solution is not very good either because we are converting the integer to a string, which I believe is an O(N) time complexity, because each character has to be individually converted
to a number.
So I found another solution which basically takes advantage of how math works.
bool isPalindrome(int n)
int original = n;
int reversed = 0;
while(n > 0)
reversed = reversed * 10 + n % 10;
n /= 10;
if (original == reversed)
return true;
return false;
This may not even be the most efficient solution as I may be ignorant on other mathematical concepts and how they work. However, given this, the math is simple. We know that n % 10 will give us the
last digit in the number. So for example: 123 % 10 would give us 3.
Because we know this, we can simply loop through n, and say, while n is not 0, the reversed version of the number will be itself times 10 + the remainder (n % 10).
So given reversed = 0 for the first iteration of our loop, the math would be:
0 = 0 * 10 + 3
Obviously, this is assuming that n is 123. So after the first iteration, reversed will equate to 3. On the second iteration, after we divide x by 10, it would be:
3 = 3 * 10 + 2, which is 32, so reversed is now 32, which is obviously the first two digits in reverse of 123, it’s 32. If we do it again we will get the one. Let’s do it.
If we do n % 10 again, we get a remainder of 1, or our last digit. So then we take 32 = 32 * 10 + 1 which is 321. And there we go, we have the numbers in reverse. The last thing to do is to
essentially check if the original value of n, is equal to this newly reversed value of n.
Return true if they are equal, return false otherwise.
Anyways, hopefully this helps other people who are trying to practice algorithms! | {"url":"https://danielpgleason.com/index.php/2021/04/","timestamp":"2024-11-10T11:04:06Z","content_type":"text/html","content_length":"88216","record_id":"<urn:uuid:c940fd91-2e1a-4e90-96fa-6d6a2ac3c1a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00828.warc.gz"} |
THE ASPECTS WHICH THE PROBABILITY CALCULATIONS SHOW | Quran Miracles
Let us examine the magnificent table that presents the names of God with the probability calculations, in order to show the greatness of the miracle 19 and how it is that such a miracle could be the
result of coincidence.
Since the probabity is 1/19 for any word in Basmalah to be repeated is a multiple of 19, for four words it is 1/19^4.
If we take into account this probability, along with the probability of the repetition of the word “Witness” (Shahid), which replaces the word “name,” to be 19 too, then our probability is 1/19^5
(Because it is important to find exactly the number 19 rather than anything that is multiple of 19 here, its probability is lower).
Both in the Basmalah and in the table of God’s attributes, the coefficients of the 4 words are 1+142+3+6= 152 (19×8). The probability of the coefficient number being a multiple of 19 is 1/19. If we
add this to the previous number, then we have 1/19^6.
We have to calculate the numerical value of God’s attributes separately on the right side, because these numbers are not only multiples of 19, but they also exactly correspond to the same numbers on
the left side. The biggest mathematical value of any of God’s names is 2698. There are 2698 whole numbers up to 2698. We can show our set like this: (1, 2, 3, 4, ……, 2696, 2697, 2698). The
probability of finding exactly the number that we want from this set of numbers is 1/2698. If we repeat this operation four times then the probability of finding the numbers we want is 1/2698^4. The
probability we have found up to now is 1/19^6 x 1/2698^4:
Here is the result… Read it if you can!
This unreadable number is sufficient in itself to show how impossible it is to form even a single 19 table by chance. Since this number represents the probability of forming a single table like that,
who can come up with the assertion that there is not a 19 code in the Quran, which Quran itself says that it will dispel suspicion? Even the probability of a single table shows the Quran’s miracle
and how unchangeable it is.
There are other aspects that would serve to lessen the probability. For example, the total of the four values on both sides is 5776 (5776 = 4^2 x 19^2). In this table, a system with 19, with symmetry
of four numbers on each side, can be seen. If we take the numbers 4 and 19 shown by this table, and if we multiplied by the number 2 to the second power, which shows the two sides, it would be equal
to the total of all the numbers. It is seen that by doing so we could add a few digits to the denominator. But, because it would be difficult to show why we added this result to the probability, and
because of the complexity of the calculation, we did not add this to the final result. Apart from this, we mentioned that 19 was the 8^th prime number. On the two sides of the table we examined eight
names of God, and the total of the coefficients on both sides was 152 (19×8). But we did not take these aspects, which could lessen the probability, into consideration. | {"url":"http://www.quranmiracles.com/2011/11/the-aspects-which-the-probability-calculations-show/","timestamp":"2024-11-06T11:29:40Z","content_type":"text/html","content_length":"78736","record_id":"<urn:uuid:2725e779-674c-42f1-854f-23196ddff664>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00206.warc.gz"} |
Wavenumber-explicit convergence of the <i>hp</i>-FEM for the full-space heterogeneous Helmholtz equation with smooth coefficients
A convergence theory for the hp-FEM applied to a variety of constant-coefficient Helmholtz problems was pioneered in the papers [35], [36], [15], [34]. This theory shows that, if the solution
operator is bounded polynomially in the wavenumber k, then the Galerkin method is quasioptimal provided that hk/p≤C [1] and p≥C [2]logk, where C [1] is sufficiently small, C [2] is sufficiently
large, and both are independent of k,h, and p. The significance of this result is that if hk/p=C [1] and p=C [2]logk, then quasioptimality is achieved with the total number of degrees of freedom
proportional to k ^d; i.e., the hp-FEM does not suffer from the pollution effect. This paper proves the analogous quasioptimality result for the heterogeneous (i.e. variable-coefficient) Helmholtz
equation, posed in R ^d, d=2,3, with the Sommerfeld radiation condition at infinity, and C ^∞ coefficients. We also prove a bound on the relative error of the Galerkin solution in the particular case
of the plane-wave scattering problem. These are the first ever results on the wavenumber-explicit convergence of the hp-FEM for the Helmholtz equation with variable coefficients.
Bibliographical note
Funding Information:
The authors thank Martin Averseng (ETH Zürich) and an anonymous referee for highlighting simplifications of the arguments in a earlier version of the paper. We also thank Théophile Chaumont-Frelet
(INRIA, Nice) for useful discussions about the results of [35] , [36] . DL and EAS acknowledge support from EPSRC grant EP/1025995/1 . JW was partly supported by Simons Foundation grant 631302 .
• Helmholtz equation
• High frequency
• Pollution effect
• hp-FEM
ASJC Scopus subject areas
• Modelling and Simulation
• Computational Theory and Mathematics
• Computational Mathematics
Dive into the research topics of 'Wavenumber-explicit convergence of the hp-FEM for the full-space heterogeneous Helmholtz equation with smooth coefficients'. Together they form a unique fingerprint. | {"url":"https://researchportal.bath.ac.uk/en/publications/wavenumber-explicit-convergence-of-the-ihpi-fem-for-the-full-spac","timestamp":"2024-11-02T06:11:50Z","content_type":"text/html","content_length":"69196","record_id":"<urn:uuid:6bfc8eb2-63ad-4645-8838-dca2e557e8db>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00018.warc.gz"} |
Calculation of Solidâ Liquidâ Gas Equilibrium for Binary Systems - PDF Free Download
Ind. Eng. Chem. Res. 2009, 48, 4579–4586
Calculation of Solid-Liquid-Gas Equilibrium for Binary Systems Containing CO2 Jindui Hong, Hui Chen, and Jun Li* Department of Chemical and Biochemical Engineering, College of Chemistry and Chemical
Engineering, Xiamen UniVersity, Xiamen 361005, Fujian, China
Henrique A. Matos and Edmundo Gomes de Azevedo Department of Chemical and Biological Engineering, Instituto Superior Te´cnico, AV. RoVisco Pais, 1049-001 Lisbon, Portugal
Two equations typically used for the pure-solid fugacity proved to be identical by selecting an appropriate relation for the pure-solid vapor pressure and the pure-liquid vapor pressure. On the basis
of the pure-solid fugacity, a semipredictive model using solubility data (SMS) and a calculation model combining with GE models (CMG) were developed to calculate the solid-liquid-gas (SLG)
coexistence lines of pure substances in the presence of CO2. For the SMS model, the Peng-Robinson equation of state (PR-EoS) with the van der Waals one-fluid mixing rule is used to correlate the
solute solubility in CO2 to obtain the interaction parameter k12, which is further employed to predict the SLG coexistence lines by two methods: one adopts the fugacity coefficient of the solute in
the liquid phase by an equation of state calculation (SMS-φ); the other uses the activity coefficient of the solute in the liquid phase calculated from the UNIFAC model (SMS-γ). For the CMG model,
the PR-EoS with the linear combination of Vidal and Michelsen (LCVM) mixing rule, the Michelsen modified Huron-Vidal (MHV1) mixing rule, and a modified version (mLCVM) with the reevaluated parameter
λ ) 0.18 are used. Results show that the SMS model can provide acceptable calculations of the SLG coexistence lines for most of the investigated systems. The predicted melting temperatures and solute
compositions in liquid phase from a constant k12 are slightly better than those from the correlated one, while the predicted solute solubility data in CO2 from a constant k12 are worse than those
from the correlated one. The CMG model with the mLCVM mixing rule calculates well the melting temperatures and solute compositions in liquid phase at SLG equilibrium and also gives acceptable
calculations of the solute solubilities in supercritical CO2. Introduction Supercritical fluids (SCF) have been used in many different applications, namely to particle formation processes.1 A
thorough knowledge of phase equilibrium, including the solidliquid-gas equilibrium (SLGE), can often provide important information that plays a key role in the understanding, operating, scale-up, and
in general design of SCF-based particle formation processes.2-4 Some general rules for SLGE can be found in two recently published papers.5,6 In the available SLGE modeling studies,2,3,7-13 the
approach using an equation of state (EoS) was widely employed to deal with the gas-liquid equilibrium (GLE) and the solid-gas equilibrium (SGE) or the solid-liquid equilibrium (SLE) to ultimately
address the SLGE of a binary system. In line with this approach, it can be classified into two categories according to the process used to evaluate the solid solute fugacity: (1) using the pure-solid
vapor pressure as the reference fugacity;7,8,13 (2) using a subcooled liquid to obtain a reference fugacity.2,3,9-12 The former method, originally proposed by McHugh et al.,7 was applied by Zhang et
al.8 and by Uchida et al.,13 using either the experimental solid vapor pressure or that evaluated from the Antoine equation. The latter method was suggested by Kikic et al.2 that used the fugacity of
a fictitious subcooled liquid together with the Peng-Robinson equation of state (PR-EoS) with two binary interaction parameters9 to investigate the SLGE curves of fats in supercritical CO2.
Diefenbacher et al.3 did a * To whom correspondence should be addressed. E-mail: junnyxm@ xmu.edu.cn. Tel./Fax: (+86)-592 2183055.
similar modeling work which did show that the size and morphology of particles produced by the rapid expansion of supercritical solutions (RESS) process were strongly influenced by the
solid-liquid-gas (SLG) phase behavior. In another work,11 the perturbed-hard-sphere-chain (PHSC) EoS was used to calculate the SLG coexistence lines that determined the appropriate conditions to
control the solid particles or liquid droplets generated by a particle formation from gas-saturated solutions process (PGSS). Consequently, if the solid vapor pressure is measured experimentally or
calculated from the Antoine equation, both methods lead to different results.14,15 However, we will show later that these two methods are identical in nature when an appropriate calculation of the
solid vapor pressure is adopted. It is well-known the limitations of EoS to describe liquid solutions; when the solid vapor pressure is measured experimentally or calculated from the Antoine
equation, the abovementioned approach has typically little success to predict accurately the normal melting points.16 Lemert et al.17 proposed the application of the regular solution theory (RST) to
describe the SLE while the GLE was still dealt with by an EoS. Results indicated that this approach could provide good correlations for the SLG coexistence lines of binary and ternary (with a
cosolvent) systems, especially under low pressures. Later, Li et al.16 used another activity coefficient model (NRTL) instead of RST to describe the SLE, where the two NRTL parameters were adjustable
and another interaction parameter k12 was obtained from the correlation of solute solubility data for
10.1021/ie801179a CCC: $40.75 2009 American Chemical Society Published on Web 04/02/2009
4580 Ind. Eng. Chem. Res., Vol. 48, No. 9, 2009
thermodynamic consistency; results showed that the correlations were quite satisfactory with an average AADT (as defined in Table 2) of 0.39 K for the investigated binary systems. Because most of the
models mentioned for the SLGE modeling are correlative, developing predictive models is both attractive and important when scarce experimental melting data are available or difficult to measure.
Kikic et al.2 used binary interaction parameters regressed from the upper critical end point (UCEP) to predict the SLG coexistence lines; this approach did not require any melting data, and therefore
is predictive. Yet, UCEP points for many systems may actually not be easily obtained. Recently, Bertakis et al.14 developed a model called universal-mixing-rule Peng-Robinson-UNIFAC (UMR-PRU) to
calculate the SLG coexistence curves, with relatively good success. Yet, the authors only presented results for the naphthalene/CO2 and phenanthrene/CO2 systems, and at high pressures the calculated
P-T projection deviates notoriously from the experimental points. In the present work, we first identify the two expressions for the solid solute fugacity that are typically used and then we present
two new models: a semipredictive model using solubility data (SMS) and a calculation model combining with GE models (CMG). Finally, we compare and discuss the calculated results for nine binary
systems involving different types of molecules, including large fatty acids. Modeling Solid Solute Fugacity. There are two equations used frequently to calculate the fugacity of a solid solute.
Equation 17,8,13 uses the saturation pressure of the pure-solid solute and a Poynting correction for the pressure effect. Equation 22,3,9-12,18 considers the pure-solid fugacity as that of a
subcooled liquid at system’s temperature.
S,sat f S20(T, P) ) PS,sat 20 φ20 exp
VS20(P - PS,sat 20 ) RT
Tm ∆fusH 1+ RTm T L S,sat S (PL,sat P(VS20 - VL20) 20 V20 - P20 V20) + (2) RT RT
f S20(T, P) ) f SCL 20 (T, P) exp
In the equations above, subscript 2 denotes the solute, subscript 0 indicates a pure substance’s property, and superscript S L,sat represents solid phase; PS,sat 20 and P20 are, respectively, the
vapor pressure of the pure-solid solute and that of the subcooled liquid S L and V20 are the solute’s solid at the system’s temperature T; V20 S,sat is the fugacity and liquid molar volumes,
respectively; φ20 coefficient of the solid solute at its saturation pressure at temperature T, PS,sat 20 . Because the saturation pressure is normally S,sat SCL can be assumed to be equal to 1. f 20
(T,P) is very low, φ20 the fugacity of the hypothetical subcooled liquid, and ∆fusH is the enthalpy of fusion of the pure solute at its normal melting point, Tm. S,sat is obtained either from
experimental data or Typically P20 calculated from the Antoine equation; this can not guarantee the equality of the pure solid fugacities from eq 1 and eq 2. Equation 2 is relevant to the melting
point pressure dependence and therefore it is widely used in modeling the SLGE. The L,sat relation between PS,sat 20 and P20 is expressed by the SLE equation
[ ( )]
L,sat PS,sat 20 ) P20 exp
Tm ∆fusH 1RTm T
L L,sat f SCL 20 (T, P) ) f 20(T, P20 ) exp
VL20(P - PL,sat 20 ) RT
L,sat L,sat f L20(T, PL,sat 20 ) ) P20 φ20
(4) (5)
L L,sat L,sat where f 20 (T,P20 ) and φ20 are, respectively, the fugacity and the fugacity coefficient of the liquid (or subcooled liquid) at L,sat L,sat S,sat , and φ20 ) 1 just as φ20 . By
introducing eqs 3-5 into P20 eq 1, we obtain eq 2. This indicates that these two expressions for the solid solute fugacity are the same, and we can therefore use either eq 1 or eq 2, satisfying to
eqs 3-5, to establish the predictive models that the following sections present. SMS Model. The SLGE of a binary system is expressed by the following phase equilibrium equations:
f G1 (T, P) ) f L1 (T, P)
f S2 (T, P) ) f G2 (T, P)
f S2 (T, P) ) f L2 (T, P) ) x2φL2 P
f S2 (T, P) ) f L2 (T, P) ) x2γ2 f SCL 2 (T, P)
where subscripts 1 and 2 represent the solvent and the solute, respectively. Assuming that the solid phase contains solute only (that consequently implies that the presented models can not explain
the SLG curves with a temperature maximum and a S (T,P), and then eq 1 or temperature minimum6), f 2S(T,P) ) f 20 eq 2 can be combined into eqs 7-9. Equation 6 is a GLE, whereas eq 7 is a SGE, and eq
8 and eq 9 are two forms of SLE. As indicated elsewhere,16 in addition to two known relations (x1 + x2 ) 1 and y1 + y2 ) 1), only three additional equations are required. Therefore either eq 8 or eq
9 is needed to conveniently model a binary system. When eq 7 is chosen to correlate the experimental solubility data of a solute in supercritical CO2, the original PR-EoS19 and the van der Waals
one-fluid (vdW-1) mixing rule are used to obtain the binary interaction parameter k12. Then eqs 6, 7, and 8 can predict the SLG coexistence lines with the calculated k12. This is a fugacity
coefficient approach (hereafter referred to as the SMS-φ model) because all the fugacities in the liquid and the gas phases should be expressed by the corresponding fugacity coefficients and
calculated from the PR-EoS. When eq 9 is selected instead of eq 8, it is an activity coefficient approach (denoted later as the SMS-γ model), in which the activity coefficient can be obtained from a
convenient model, namely UNIFAC.20 Linear temperature dependent UNIFAC interaction parameters are used for the gas-containing mixtures.21 Tables A1 and A2 (see Appendix) show the UNIFAC parameters,
namely the group area (Qk), the volume (Rk) parameters, and the group interaction parameters. In this work, both SMS-φ and SMS-γ were tested. The calculation algorithm is similar to those presented
by Li16 and by Lemert,17 and the calculation program was coded in C++ computer language. CMG Model. The SMS model is limited by the availability of solute solubility data in supercritical fluids. As
it is wellknown, EoS/GE predictive models can be used to calculate the GLE, liquid-liquid equilibrium (LLE), vapor-liquid-liquid equilibrium (VLLE), and SGE.21-25 On the SMS-γ model, we can use the
mixing rules that are usually adopted in the EoS/ GE models, such as the Michelsen-modified Huron-Vidal
Ind. Eng. Chem. Res., Vol. 48, No. 9, 2009 4581
Figure 1. Schematic representation of the various steps involved in the models presented in this work. Table 1. Physical Properties of the Pure Compounds compound
Tc (K)
Pc (MPa)
V L (dm3/mol)
V S (dm3/mol)
Tm (K)
∆fusH (kJ/mol)
myristic acida palmitic acid e stearic acid e benzoic acid f naphthalene f biphenylg phenanthrene f ibuprofenh tripalmitini carbon dioxide f
841.6 776.0 799.0 751.0 748.4 788.95 890.0 756.3 889.1 304.19
1.635 1.510 1.360 4.470 4.050 3.840 3.250 2.180 0.51 7.382
0.9612 1.061 1.084 0.6039 0.3020 0.3640 0.4290 0.7490 1.8200 0.2276
0.2644b 0.301f 0.337 f 0.112 0.131 0.1558 0.167 0.2162c 0.93
0.2235b 0.252 f 0.283 f 0.0928 0.112 0.1310 0.155 0.1875 0.87
325.45c 335.66 f 342.49 f 395.52 353.43 342.65 372.38 345.2c 337.4
45.10c 53.711 f 61.210 f 18.075 19.318 18.582 16.463 25.47c 121
d d d d d d d d d
a Tc, Pc, and ω from ref 27. b From ref 28. c Determined in this work. d Calculated from eq 3 with PL,sat estimated from the Ambrose-Walton method.29 e Tc, Pc, and ω from ref 24. f From ref 14. g
From ref 17. h Tc, Pc, ω, and VS from ref 30. i Tc, Pc, ω, and ∆fusH from ref 16.
(MHV1),26 linear combination of Vidal and Michelsen (LCVM)22 mixing rules to replace the vdW-1 mixing rule in the SMS-γ model and then construct a model combining with GE models (CMG) for the SLG
coexistence lines. For the original LCVM mixing rule, the attractive term parameter a of the PR-EoS can simply be expressed by eq 10, together with the commonly used linear mixing rule for the
covolume parameter b: a ) RbRT
parameters, their calculations do not require any further experimental data. In eq 12, C1, LCVM and C2, LCVM are calculated from eqs 14 and 15 with λ ) 0.36, AV ) -0.623 and AM ) -0.52. When
parameter λ is zero, the original LCVM (λ ) 0.36) is reduced to MHV1 (λ ) 0), that was also used in this work and compared to other options described before. In this paper, a modified LCVM mixing
rule (mLCVM) was introduced by fixing the parameter λ at λ ) 0.18. Figure 1 summarizes the calculation scheme including the options of all models presented above.
where Results and Discussion
i i
nc GE0 b + C2,LCVM xi ln + C1,LCVM RT b i i)1
GE0 ) RT
∑ b RT i)1
∑ x ln γ i
1 λ 1-λ ) + C1,LCVM AV AM C2,LCVM )
1-λ AM
In the equations above, the summations are over the number of components, nc. Similarly to eq 9, ln γi in eq 13 can be obtained from the UNIFAC model. Since the activity coefficients in eq 9 and eq
13 are calculated from the functional group interaction
The models described in the previous section were tested for the SLGE calculation of nine solutes with CO2. Table 1 lists the physical properties of the pure compounds used in this work. Results from
the SMS Model. A solubility correlation for nine solutes (see Table 2) in supercritical CO2 from the PREoS with the vdW-1 mixing rule was implemented. This correlation describes well the SGE for most
of the systems investigated with an average AARDy (see Table 4) of 20.6% except for the tripalmitin/CO2 system. This exception is likely related to the uncertainty of the tripalmitin’s critical
parameters, estimated by a group contribution method.16 In the correlations, we used 13 solubility data points for myristic acid,31,32 27 for palmitic acid,31-33 25 for stearic acid,34,35 15 for
benzoic acid,36 54 for naphthalene,37 39 for biphenyl,37 101 for phenanthrene,38 29 for ibuprofen,39 and 25 for tripalmitin.31,40 Table 2 summarizes the prediction results by the SMS model using the
k12 obtained from the correlations. For all systems investigated here except myristic acid/CO2, ibuprofen/CO2, and tripalmitin/CO2, the average AADTs (see Table 4) are 5.0 and 7.4 K for the SMS-φ and
SMS-γ models, respectively, indicat-
4582 Ind. Eng. Chem. Res., Vol. 48, No. 9, 2009 Table 2. Calculation Results from the SMS Model with Correlated k12 and Fixed k12
solutea myristic acid41 palmitic acid14 stearic acid14 benzoic acid14 naphthalene14 biphenyl42 phenanthrene14 ibuprofen30,41 tripalmitin16
AADT AADT AADT AADT k12 from (K)b,c from (K)c from (K)d from (K)d from SMS-γ SMS-φ SMS-γ correlation SMS-φ 4.6 4.1 6.2 5.0 5.4 4.7 -
0.038 0.104 0.0890 0.0265 0.0971 0.0853 0.131 0.0564 -
1.9 2.3 9.6 10.3 12.6 7.5 -
1.9 4.4 4.8 3.6 4.4 2.0 9.7 1.6 5.9
1.5 2.2 1.2 2.4 9.4 7.1 13.4 5.6 1.7
a Source of melting data. b The absolute average deviation for the solute melting temperatures at SLGE is defined as: AADT ) 1/ n n∑i)1 |T icalcd - T iexpt|, where n is the number of melting
temperature data. c Correlated k12 were used. d k12 ) 0.1.
Table 3. Calculated Average Absolute Deviations for the Several Systems with λ-Optimal, MHV1 (λ ) 0), LCVM (λ ) 0.36), and mLCVM (λ ) 0.18) solute
AADT (K) (λ-optimal)
AADT (K) (λ ) 0)
AADT (K) (λ ) 0.36)
AADT (K) (λ ) 0.18)
myristic acid palmitic acid stearic acid benzoic acid naphthalene biphenyl phenanthrene ibuprofen tripalmitin
0.9 (0.29) 1.9 (0.20) 1.1 (0.22) 0.8 (0.35) 2.2 (0.05) 1.9 (0.12) 1.8 (0.14) 1.5 (0.09) -
4.6 6.4 6.7 5.5 2.7 3.7 3.5 3.9 2.9
2.8 6.7 6.7 0.8 6.4 -
2.1 2.0 2.3 3.0 3.4 2.9 2.2 4.2 -
ing that the former model is slightly better, in particular for the well-studied systems such as benzoic acid/CO2, naphthalene/ CO2, biphenyl/CO2, and phenanthrene/CO2. Figure 2 shows that, for most
systems, both SMS-φ and SMS-γ with correlated k12 are capable to predict the trend of SLG coexistence lines; yet, they can not predict the trend for the myristic acid/CO2 (Figure 2a) and ibuprofen/
CO2 systems over about 7 MPa (not shown). For these two systems, a larger k12 is expected for a good prediction of SLG data. It has been reported that the PR-EoS with the vdW-1 mixing rule could give
an acceptable SGE prediction using a constant k12 (such as k12 ) 0.1 for many solids except steroids and hydroxyl-aromatic acids).24 The average AARDy is 62.9% for all the investigated systems (58.2%
for systems, except tripalmitin/CO2). In addition, k12 ) 0.1 could also provide trend predictions of the SLGE data for all the investigated systems using both the SMS-φ and SMS-γ models as Table 2
summarizes. The average AADTs obtained from both models using a constant k12 are, respectively, 4.3 and 4.9 K for all the investigated systems; the average AADTs are 4.8 and 6.0, respectively, for
systems except myristic acid/CO2, ibuprofen/
CO2, and tripalmitin/CO2. Compared to those obtained from a correlated k12, the results show that, for most systems, a constant k12 ) 0.1 provides better SLGE results. Figure 3 compares the
experimental42 with the calculated solute compositions (x2) in liquid phase at SLGE for the naphthalene/CO2 and biphenyl/CO2 systems. Results indicate that SMS-predictions for φ are slightly better
than the results for SMS-γ, while the predictions for SMS-φ using a constant k12 are better, in particular for the biphenyl/CO2 system, than those using a correlated k12 at high pressures. Results
from the CMG Model. For the CMG model, a first approach was adopted using the MHV1 (λ ) 0) mixing rule and the LCVM (λ ) 0.36) mixing rule. The calculated results are shown and compared in Figure 4.
When using LCVM, it is clear that the calculated SLG coexistence lines are far below the experimental data, in particular for the ibuprofen/CO2 system (not shown). On the contrary, when using MHV1,
the calculated SLG coexistence lines are usually far above the experimental data, in particular for acid-containing systems. We determined the optimal λ by minimizing the difference between
calculated and experimental melting data for each system with the LCVM mixing rule. Table 3 lists the correlated λ values and the corresponding AADTs. The average of these correlated λ is 0.18, which
is precisely the average λ at λ ) 0 (MHV1) and at λ ) 0.36 (LCVM). This λ ) 0.18 serves as the revaluated parameter in the modified LCVM mixing rule (mLCVM) to be applied to the CMG model. The
calculated results from the CMG model with the mLCVM mixing rule are summarized in Table 3, together with the values with the optimal λ. On the basis of these values and in Figure 4 representation
when the recommended value of λ ) 0.18 is used, the calculation accuracy is much improved: for most systems AADT is about 3 K, and the average AADT for all systems is 2.6 K (see Table 4), indicating
that the CMG model gives reasonable calculations. Nevertheless, the CMG model did not give a satisfactory calculation for the tripalmitin/ CO2 system, just as the SMS model did not, which may again
be attributed to the uncertainty inherent to the estimation of the tripalmitin’s critical constants. Figure 5 compares the calculated solute composition (x2) in liquid phase at SLGE with the
experimental data for the naphthalene/CO2 and biphenyl/CO2 systems. The same figure indicates that the CMG model with the mLCVM mixing rule performs better than that with the MHV1 and LCVM mixing
rules in liquid composition calculations. In the case of the calculation of the solute solubility data (y2) in supercritical CO2, Table 4 shows that the CMG model with LCVM performs better than with
MHV1 and mLCVM. It is noted that the CMG model with the LCVM mixing rule calculates well the y2 data, but on the contrary fails by a significant margin the calculation of the solute melting
temperatures and
Table 4. Calculation Results from the New Models Presented in This Work SMS models AADT (K)a AADT (K)b AARDy (%)a,c AARDx (%)d
SMS-φ correlated k12 SMS-γ correlated k12 SMS-φ constant k12 SMS-γ constant k12 MHV1 λ ) 0 LCVM λ ) 0.36 mLCVM λ ) 0.18 5.0e 6.8e 20.6 13.4
7.4e 11.3e 20.6 16.4
4.1 5.9 58.2 6.1
5.4 7.7 58.2 14.7
4.6 6.1 74.3 31.9
4.7 f 8.7 f 42.7 -
2.6 4.3 62.8 11.7
n calcd expt |1 - y2,i /y2,i |) AADT for systems under all investigated pressures. b AADT for systems under pressures larger than 10 MPa. c AARDy ) (1/n∑i)1 × 100%, where y2 is the solute mole
fraction in supercritical CO2, n is the number of data points, and superscripts “expt” and “calcd” represent n calcd expt experimental data and calculated values, respectively. d AARDx ) (1/n∑i)1 |1
- x2,i /x2,i |) × 100%, where x2 is the solute mole fraction in liquid phase at SLGE, n ) 31 is the total number of data points for the naphthalene/CO2 and biphenyl/CO2 systems. e AADT of the
investigated systems except f myristic acid/CO2 and ibuprofen/CO2. AADT of the investigated systems except naphthalene/CO2, biphenyl/CO2 and ibuprofen/CO2. g Tripalmitin/CO2 is excluded in
calculating AADT and AARDy. a
Ind. Eng. Chem. Res., Vol. 48, No. 9, 2009 4583
Figure 2. SLG coexistence lines of some typical binary systems: (a) myristic acid/CO2; (b) palmitic acid/CO2; (c) naphthalene/CO2; (d) biphenyl/CO2. (2) Experiment; (s) SMS-φ with a correlated k12;
(---) SMS-γ with a correlated k12; ( · · · ) SMS-φ with k12 ) 0.1; (- · - · ) MS-γ with k12 ) 0.1.
Figure 3. Mole fractions in liquid phase for naphthalene/CO2 (left) and biphenyl/CO2 (right) at SLGE: (9) experiment; (s) SMS-φ with a correlated k12; ( · · · ) SMS-γ with a correlated k12; (---)
SMS-φ with k12 ) 0.1; (- · - · ) SMS-γ with k12 ) 0.1.
of the x2 data for the naphthalene/CO2 and biphenyl/CO2 systems at high pressures. Model Selection. The calculation results including AADT and AARDx at SLGE and AARDy for the investigated systems
from all the presented models in this work are compared in Table 4. According to this table, the CMG model with mLCVM and SMS-φ with k12 ) 0.1 are recommended for quantitative calculations when only
the melting temperature at SLGE is concerned or when we need both the melting temperature and x2 at SLGE. When both the melting point and x2 and y2 data are required, SMS-φ and SMS-γ with k12 ) 0.1
and CMG with mLCVM are recommended for trend calculations. If there are available solute solubility data in supercritical CO2, the SMS-φ model with a correlated k12 is recommended for acceptable
predictions of the melting temperature and x2 at SLGE, but it
may fail predictions for some simple solutes as Table 2 indicates. Table 4 also shows AADTs for the investigated systems under pressures larger than 10 MPa, a pressure range usually available for
particle formation processes with supercritical fluids, indicating slightly worse predictions of the proposed models at high pressures. Conclusions In this work, two models, the SMS model and the CMG
model, were developed for modeling the SLGE of binary systems containing high pressure CO2. Results from the models were compared, and the following conclusions can be drawn: (1) The two
traditionally applied equations for the fugacity of solid solute are proved to be identical when it is used an
4584 Ind. Eng. Chem. Res., Vol. 48, No. 9, 2009
Figure 4. SLG coexistence lines of some typical binary systems: (a) myristic acid/CO2; (b) palmitic acid/CO2; (c) naphthalene/ CO2; (d) biphenyl/CO2. (2) Experiment; (s) CMG with MHV1 (λ ) 0); (---)
CMG with LCVM (λ ) 0.36); ( · · · ) CMG with mLCVM (λ ) 0.18).
Figure 5. Mole fractions in liquid phase for naphthalene/CO2 (left) and biphenyl/CO2 (right) at SLGE: (9) experiment; (s) CMG with MHV1 (λ ) 0); (---) CMG with LCVM (λ ) 0.36); ( · · · ) CMG with
mLCVM (λ ) 0.18).
appropriate relation between the pure-solid vapor pressure and the pure-liquid vapor pressure; therefore, any of them can be used to implement appropriate SLGE calculation models. (2) Two methods,
SMS-φ and SMS-γ, applied to the SMS model give good correlations of the solute solubility in supercritical CO2; they also give relatively accurate calculations of the melting temperatures and the
solute compositions in the liquid phase at SLGE. (3) Two methods, SMS-φ and SMS-γ, with k12 ) 0.1 can be used for trend predictions of the solute solubility in supercritical CO2 and the melting
temperatures and the solute compositions in liquid phase at SLGE. (4) The CMG model shows that the mLCVM mixing rule with a re-evaluated λ ) 0.18 provides good calculations of the melting
temperatures and of the solute compositions in liquid
phase at SLGE, and also acceptable calculations of the solute solubility data in supercritical CO2. Acknowledgment For financial support, the authors are grateful to SRF for ROCS, SEM, NCET of Fujian
Province, NSFC, China (Project 20876127), European Union Programme FEDER, and FCT, Lisbon (Project POCI/EQU/55911/2004). Appendix UNIFAC Parameters. Table A1 gives the group area and volume
parameters for the original UNIFAC model; Table A2 gives the group interaction parameters for the original UNIFAC model.
Ind. Eng. Chem. Res., Vol. 48, No. 9, 2009 4585 Table A1. Group Area (Qk) and Volume (Rk) Parameters for the Original UNIFAC Model23,29 group number 1
group CH2
COOH COO CO2
CH3 CH2 CH ACH AC COOH COO CO2
0.9011 0.6744 0.4469 0.5313 0.3652 1.3013 1.3800 1.29623
0.848 0.540 0.228 0.400 0.120 1.224 1.200 1.26123
Table A2. Group Interaction Parameters for the Original UNIFAC Model23,29 group m
group n
Amn [K]
Anm [K]
CO2 CO2 CO2 CO2 CH2 CH2 CH2 ACH
CH2 ACH COO COOH ACH COOH COO COOH
110.60 -26.80 168.6 218.57 61.13 663.5 387.1 537.4
116.70 187.00 -31.7 358.13 -11.12 315.3 529.0 62.32
0.5003 -1.2348 -2.676 -0.7217 0 0 0 0
-0.9106 1.0982 6.977 -0.3666 0 0 0 0
a Interaction parameters between CO2 and groups are from ref 23. Interaction parameters between groups of pure compounds are from ref 29.
List of Symbols a, b ) equation of state parameters AADT ) absolute average deviation of temperature AARD ) absolute average relative deviation CMG ) calculation model combining with GE models EoS/GE
) equation of state combined with an excess Gibbs model f ) fugacity ∆fusH ) enthalpy of fusion (kJ mol-1) GLE ) gas-liquid equilibrium k12 ) binary interaction parameter between components 1 and 2
LCVM ) linear combination of Vidal and Michelsen mixing rules LLE ) liquid-liquid equilibrium mLCVM ) LCVM mixing rule modified in this work MHV1 ) Michelsen modified Huron-Vidal mixing rule n )
number of data points NRTL ) nonrandom two-liquid activity coefficient model P ) pressure (MPa) PGSS ) particles from gas-saturated solutions PR-EoS ) Peng-Robinson equation of state R ) gas constant
(J K-1 mol-1) RESS ) rapid expansion of supercritical solutions RST ) regular solution theory SCF ) supercritical fluids SGE ) solid-gas equilibrium SLE ) solid-liquid equilibrium SLG )
solid-liquid-gas SLGE ) solid-liquid-gas equilibrium SMS ) semipredictive model with solubility data SMS-φ ) semipredictive model using the fugacity coefficient method SMS-γ ) semipredictive model
using the activity coefficient method T ) temperature (K) Tm ) melting temperature (K) UCEP ) upper critical end point V ) molar volume (dm3 mol-1) vdW-1 ) van der Waals mixing rule (one interaction
parameter) VLE ) vapor-liquid equilibrium
VLLE ) vapor-liquid-liquid equilibrium x ) mole fraction in liquid phase y ) mole fraction in vapor phase Greek letters λ ) parameter of the LCVM mixing rule γ ) activity coefficient φ ) fugacity
coefficient ω ) acentric factor Superscripts calcd ) calculated expt ) experimental G ) gas phase L ) liquid phase sat ) saturation S ) solid phase SCL ) subcooled liquid phase Subscripts 0 ) pure
substance 1 ) solvent 2 ) solute c ) critical property fus ) fusion i,j ) components m ) melting
Literature Cited (1) Li, J.; de Azevedo, E. G. Particle Formation Techniques Using Supercritical Fluids. Recent Pat. Chem. Eng. 2008, 1, 157. (2) Kikic, I.; Lora, M.; Bertucco, A. A Thermodynamic
Analysis of Three-Phase Equilibria in Binary and Ternary Systems for Applications in Rapid Expansion of a Supercritical Solution (RESS), Particles from GasSaturated Solutions (PGSS), and
Supercritical Antisolvent (SAS). Ind. Eng. Chem. Res. 1997, 36, 5507. (3) Diefenbacher, A.; Turk, M. Phase Equilibria of Organic Solid Solutes and Supercritical Fluids with Respect to the RESS
Process. J. Supercrit. Fluids 2002, 22, 175. (4) Li, J.; Rodrigues, M.; Paiva, A.; Matos, H. A.; de Azevedo, E. G. Binary Solid-Liquid-Gas Equilibrium and Its Effect on Particle Formation from a
Gas-Saturated Solution Process, 5th Brazilian Meeting on Supercritical Fluids, Florianopolis, Brazil, 2004. (5) Gregorowicz, J. Phase Behaviour in the Vicinity of the Three-Phase Solid-Liquid-Vapour
Line in Asymmetric Nonpolar Systems at High Pressures. Fluid Phase Equilib. 2006, 240, 29. (6) De Loos, Th. W. On the Phase Behaviour of Asymmetric Systems: The Three-Phase Curve Solid-Liquid-Gas. J.
Supercrit. Fluids 2006, 39, 154. (7) McHugh, M. A.; Watkins, J. J.; Doyle, B. T.; Krukonis, V. J. HighPressure Naphthalene-Xenon Phase Behavior. Ind. Eng. Chem. Res. 1988, 27, 1025. (8) Zhang, D.;
Cheung, A.; Lu, B. C.-Y. Multiphase Equilibria of Binary and Ternary Mixtures Involving Solid Phase(s) at Supercritical-Fluid Conditions. J. Supercrit. Fluids 1992, 5, 91. (9) Alessi, P.; Cortesi,
A.; Fogar, A.; Kikic, I. Determination of SolidLiquid-Gas Equilibrium CurVes for Some Fats in Presence of Carbon Dioxide, 6th International Symposium on Supercritical Fluids; Versailles, France,
2003. (10) Turk, M.; Upper, G.; Steuerethaler, M. InVestigation of the Phase BehaVior of Low Volatile Substances and Supercritical Fluids with Regard to Particle Formation Processes, 6th
International Symposium on Supercritical Fluids; Versailles, France, 2003. (11) Elvassore, N.; Flaibani, M.; Bertucco, A.; Caliceti, P. Thermodynamic Analysis of Micronization Processes from
Gas-Saturated Solution. Ind. Eng. Chem. Res. 2003, 42, 5924. (12) Corazza, M. L.; Filho, L. C.; Oliveira, J. V.; Dariva, C. A Robust Strategy for SVL Equilibrium Calculations at High Pressures. Fluid
Phase Equilib. 2004, 221, 113. (13) Uchida, H.; Yoshida, M.; Kojima, Y.; Yamazoe, Y.; Matsuoka, M. Measurement and Correlation of the Solid-Liquid-Gas Equilibria for the Carbon Dioxide +S-(+)
-Ibuprofen and Carbon Dioxide +RS-(()Ibuprofen Systems. J. Chem. Eng. Data 2005, 50, 11.
4586 Ind. Eng. Chem. Res., Vol. 48, No. 9, 2009 (14) Bertakis, E.; Lemonis, I.; Katsoufis, S.; Voutsas, E.; Dohrn, R.; Magoulas, K.; Tassios, D. Measurement and Thermodynamic Modeling of
Solid-Liquid-Gas Equilibrium of Some Organic Compounds in the Presence of CO2. J. Supercrit. Fluids 2007, 41, 238. (15) Dohrn, R.; Bertakis, E.; Behrend, O.; Voutsas, E.; Tassios, D. Melting Point
Depression by Using Supercritical CO2 for a Novel Melt Dispersion Micronization Process. J. Mol. Liq. 2007, 131-132, 53. (16) Li, J.; Rodrigues, M.; Paiva, A.; Matos, H. A.; de Azevedo, E. G. Binary
Solid-Liquid-Gas Equilibrium of the Tripalmitin/CO2 and Ubiquinone/CO2 Systems. Fluid Phase Equilib. 2006, 241, 196. (17) Lemert, R. M.; Johnston, K. P. Solid-Liquid-Gas Equilibria in Multicomponent
Supercritical Fluid Systems. Fluid Phase Equilib. 1989, 45, 265. (18) Prausnitz, J. M.; Lichtenthaler, R. N.; Gomes de Azevedo, E. Molecular Thermodynamics of Fluid Phase Equilibria, 3rd ed.;
Prentice Hall: Upper Saddle River, NJ, 1999. (19) Peng, D.-Y.; Robinson, D. B. A New Two-Constant Equation of State. Ind. Eng. Chem. Fundam. 1976, 15, 59. (20) Fredenslund, A.; Jones, R. L.;
Prausnitz, J. M. Group-Contribution Estimation of Activity Coefficients in Nonideal Liquid Mixtures. AIChE J. 1975, 21, 1086. (21) Yakoumis, I. V.; Vlachos, K.; Kontogeorgis, G. M.; Coutsikos, P.;
Kalospiros, N. S.; Tassios, D. Application of the LCVM Model to Systems Containing Organic Compounds and Supercritical Carbon Dioxide. J. Supercrit. Fluids 1996, 9, 88. (22) Boukouvalas, C.;
Spiliotis, N.; Coutsikos, P.; Tzouvaras, N.; Tassios, D. Prediction of Vapor-Liquid Equilibrium with the LCVM Model: A Linear Combination of the Vidal and Michelsen Mixing Rules Coupled with the
Original UNIFAC and the t-mPR Equation of State. Fluid Phase Equilib. 1994, 92, 75. (23) Voutsas, E. C.; Boukouvalas, C. J.; Kalospiros, N. S.; Tassios, D. P. The Performance of EoS/GE Models in the
Prediction of Vapor-Liquid Equilibria in Asymmetric Systems. Fluid Phase Equilib. 1996, 116, 480. (24) Coutsikos, P.; Magoulas, K.; Kontogeorgis, G. M. Prediction of Solid-Gas Equilibria with the
Peng-Robinson Equation of State. J. Supercrit. Fluids 2003, 25, 197. (25) Voutsas, E.; Louli, V.; Boukouvalas, C.; Magoulas, K.; Tassios, D. Thermodynamic Property Calculations with the Universal
Mixing Rule for EoS/GE Models: Results with the Peng-Robinson EoS and a UNIFAC Model. Fluid Phase Equilib. 2006, 241, 216. (26) Michelsen, M. L. A Modified Huron-Vidal Mixing Rule for Cubic Equations
of State. Fluid Phase Equilib. 1990, 60, 213. (27) Madras, G.; Kulkarni, C.; Modak, J. Modeling the Solubilities of Fatty Acids in Supercritical Carbon Dioxide. Fluid Phase Equilib. 2003, 209, 207.
(28) Costa, M. C.; Krahenbuhl, M. A.; Meirelles, A. J. A.; Daridon, J. L.; Pauly, J.; Coutinho, J. A. P. High Pressure Solid-Liquid Equilibria of Fatty Acids. Fluid Phase Equilib. 2007, 253, 118.
(29) Poling, B. E.; Prausnitz, J. M.; O’Connell, J. P. The Properties of Gases and Liquids, 5th ed.; McGraw-Hill: New York, 2001. (30) Turk, M.; Upper, G.; Steurenthaler, M.; Hussein, Kh.; Wahl, M.
A. Complex Formation of Ibuprofen and β-Cyclodextrin by Controlled Particle Deposition (CPD) Using SC-CO2. J. Supercrit. Fluids 2007, 39, 435. (31) Bamberger, T.; Erickson, J. C.; Cooney, C. L.;
Kumar, S. K. Measurement and Model Prediction of Solubilities of Pure Fatty Acids, Pure Triglycerides, and Mixtures of Triglycerides in Supercritical Carbon Dioxide. J. Chem. Eng. Data 1988, 33, 327.
(32) Iwai, Y.; Fukuda, T.; Koga, Y.; Arai, Y. Solubilities of Myristic Acid, Palmitic Acid, and Cetyl Alcohol in Supercritical Carbon Dioxide at 35 °C. J. Chem. Eng. Data 1991, 36, 430. (33) Kramer,
A.; Thodos, G. Solubility of 1-Hexadecanol and Palmitic Acid in Supercritical Carbon Dioxide. J. Chem. Eng. Data 1988, 33, 230. (34) Iwai, Y.; Koga, Y.; Maruyama, H.; Arai, Y. Solubilities of Stearic
Acid, Stearyl Alcohol, and Arachidyl Alcohol in Supercritical Carbon Dioxide at 35 °C. J. Chem. Eng. Data 1993, 38, 506. (35) Kramer, A.; Thodos, G. Solubility of 1-Octadecanol and Stearic Acid in
Supercritical Carbon Dioxide. J. Chem. Eng. Data 1989, 34, 184. (36) Kurnlk, R. T.; Holla, S. J.; Reid, R. C. Solubility of Solids in Supercritical Carbon Dioxide and Ethylene. J. Chem. Eng. Data
1981, 26, 47. (37) McHugh, M.; Paulaitis, M. E. Solid Solubilities of Naphthalene and Biphenyl in Supercritical Carbon Dioxide. J. Chem. Eng. Data 1980, 25, 326. (38) Bartle, K. D.; Clifford, A. A.;
Jafar, S. A. Measurement of Solubility in Supercritical Fluids Using Chromatographic Retention: The Solubility of Fluorene, Phenanthrene, and Pyrene in Carbon Dioxide. J. Chem. Eng. Data 1990, 35,
355. (39) Charoenchaitrakool, M.; Dehghani, F.; Foster, N. R. Micronization by Rapid Expansion of Supercritical Solutions to Enhance the Dissolution Rates of Poorly Water-Soluble Pharmaceuticals.
Ind. Eng. Chem. Res. 2000, 39, 4794. (40) Chrastil, J. Solubility of Solids and Liquids in Supercritical Gases. J. Phys. Chem. 1982, 86, 3016. (41) Chen, H. Ibuprofen and Myristic Acid Microparticles
and Microcomposites Generated by a PGSS ProcessM.S. Thesis. Xiamen University, Xiamen, 2007. (42) Cheong, P. L.; Zhang, D.; Ohgaki, K.; Lu, B. C.-Y. High Pressure Phase Equilibria for Binary Systems
Involving a Solid Phase. Fluid Phase Equilib. 1986, 29, 555.
ReceiVed for reView July 31, 2008 ReVised manuscript receiVed February 26, 2009 Accepted February 26, 2009 IE801179A | {"url":"https://datapdf.com/calculation-of-solida-liquida-gas-equilibrium-for-binary-sys.html","timestamp":"2024-11-14T07:33:00Z","content_type":"text/html","content_length":"65452","record_id":"<urn:uuid:bbede2f8-104d-41f1-b0f2-6f05917a8181>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00014.warc.gz"} |
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ITCS.2019.21
URN: urn:nbn:de:0030-drops-101140
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2018/10114/
Chan, Timothy M. ; Har-Peled, Sariel ; Jones, Mitchell
On Locality-Sensitive Orderings and Their Applications
For any constant d and parameter epsilon > 0, we show the existence of (roughly) 1/epsilon^d orderings on the unit cube [0,1)^d, such that any two points p, q in [0,1)^d that are close together under
the Euclidean metric are "close together" in one of these linear orderings in the following sense: the only points that could lie between p and q in the ordering are points with Euclidean distance at
most epsilon | p - q | from p or q. These orderings are extensions of the Z-order, and they can be efficiently computed.
Functionally, the orderings can be thought of as a replacement to quadtrees and related structures (like well-separated pair decompositions). We use such orderings to obtain surprisingly simple
algorithms for a number of basic problems in low-dimensional computational geometry, including (i) dynamic approximate bichromatic closest pair, (ii) dynamic spanners, (iii) dynamic approximate
minimum spanning trees, (iv) static and dynamic fault-tolerant spanners, and (v) approximate nearest neighbor search.
BibTeX - Entry
author = {Timothy M. Chan and Sariel Har-Peled and Mitchell Jones},
title = {{On Locality-Sensitive Orderings and Their Applications}},
booktitle = {10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
pages = {21:1--21:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-095-8},
ISSN = {1868-8969},
year = {2018},
volume = {124},
editor = {Avrim Blum},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2018/10114},
URN = {urn:nbn:de:0030-drops-101140},
doi = {10.4230/LIPIcs.ITCS.2019.21},
annote = {Keywords: Approximation algorithms, Data structures, Computational geometry}
Keywords: Approximation algorithms, Data structures, Computational geometry
Collection: 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)
Issue Date: 2018
Date of publication: 08.01.2019
DROPS-Home | Fulltext Search | Imprint | Privacy | {"url":"http://dagstuhl.sunsite.rwth-aachen.de/opus/frontdoor.php?source_opus=10114","timestamp":"2024-11-10T14:07:37Z","content_type":"text/html","content_length":"6599","record_id":"<urn:uuid:f462212e-48f9-4c40-b2df-e02012af8d9c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00661.warc.gz"} |
Ordinary differential equations of first order
The present book describes the state-of-art in the middle of the 20th century, concerning first order differential equations of known solution formulæ. Among the topics can be found exact
differential forms, homogeneous differential forms, integrating factors, separation of the variables, and linear differential equations, Bernoulli's equation and Riccati's equation. All topics are
illustrated by examples.
About the author
Leif Mejlbro was educated as a mathematician at the University of Copenhagen, where he wrote his thesis on Linear Partial Differential Operators and Distributions. Shortly after he obtained a
position at the Technical University of Denmark, where he remained until his retirement in 2003. He has twice been on leave, first time one year at the Swedish Academy, Stockholm, and second time at
the Copenhagen Telephone Company, now part of the Danish Telecommunication Company, in both places doing research.
At the Technical University of Denmark he has during more than three decades given lectures in such various mathematical subjects as Elementary Calculus, Complex Functions Theory, Functional
Analysis, Laplace Transform, Special Functions, Probability Theory and Distribution Theory, as well as some courses where Calculus and various Engineering Sciences were merged into a bigger course,
where the lecturers had to cooperate in spite of their different background. He has written textbooks to many of the above courses.
His research in Measure Theory and Complex Functions Theory is too advanced to be of interest for more than just a few specialist, so it is not mentioned here. It must, however, be admitted that the
philosophy of Measure Theory has deeply in uenced his thinking also in all the other mathematical topics mentioned above.
After he retired he has been working as a consultant for engineering companies { at the latest for the Femern Belt Consortium, setting up some models for chloride penetration into concrete and giving
some easy solution procedures for these models which can be applied straightforward without being an expert in Mathematics. Also, he has written a series of books on some of the topics mentioned
above for the publisher Ventus/Bookboon. | {"url":"https://bookboon.com/en/ordinary-differential-equations-of-first-order-ebook","timestamp":"2024-11-09T16:06:32Z","content_type":"text/html","content_length":"74658","record_id":"<urn:uuid:9072022a-bb7c-4783-a9ef-465895d786d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00160.warc.gz"} |
Can someone do my MATLAB assignment on optimization algorithms? | Hire Someone To Do My Assignment
Can someone do my MATLAB assignment on optimization algorithms? I can’t just say to myself or anyone else: “I need to pay.” My professor can call my computer, but I’m not sure I can keep the exact
same input method. I added new ideas, improved steps, and said it’ll take a long time to complete or it’ll be all it’ll take if the number of iterations starts out to be real. edit: I found my
original copy here: Mathlab – 1-dimensional optimization – Wikipedia: http://www.mathlab.io/citation/Mathlab/ I really can’t help but see that now it’s showing me that I’m trying to do something that
I believe is difficult or does not work – but it still will take more than 8 years. So having the best of both worlds: p = 2 d p = p -2 See my link above to the list above: https://mathlab.io/blog/
tol=sub The thing to consider for my PECL code is that adding, … … p.multiply / multiply / / / p.sub / / the same steps (tol = p). And I do find how to show it: And I update the PECL code as below
i.e. myvar.method = ‘p/st2expand//(2*st2expand)//((2*sln(1,11)/2)*2\*sln(1,11)))’ Can someone do my MATLAB assignment on optimization algorithms? I have the basic idea from a source of the algorithm
below, but I’m not sure how to teach this algorithm (because I don’t understand the code).
Which Online Course Is Better For The Net Exam History?
Can someone explain it to me? Thanks. 1st I cannot help this part however see if there is a clean way… I am a programmer, I want to work with a commandline. The idea when writing the command, I want
to use.bat or mv/tpl or gc/ecl to edit those files within the command. I tried these out online but neither worked… I really need help?? I am using MATLAB Studio 7 with Python 3.6, MATLAB 7.4, MATLAB
5.0 and MATLAB 10.3. The MATLAB and MATLAB Studio plug-ins were added in 2 of the MacOS versions (MacOS-Win32-IOS). So I can understand how you can have a.bat/mv app, but not that it works. Also I’m
not sure if editing a Matlab command can work, so I’ll have to wait until we’re through the MATLAB part. For the MatLAB part because I think this is the right approach and I’m a little confused not
until we have googled.
Pay To Complete Homework Projects
So sorry if a blog post is a headache for me, but will be done for everyone. A: Update: I did some research, but something was missing that helps me in this case. Since it’s Matlab about 16K but
about 16GB, I don’t know how to find out how much RAM to fill out a 10.4K file. So the easiest way to do that would be to use makefiles in the Mac and run Makefile yourself. Notice that makefiles are
only available for commercial and third party solutions and they aren’t actually needed anymore in version 6.2. If you do it directly you’ll find the source code in MacOS-X, I haven’t checked so
can’t say anything for you. But you can easily get around this limitation by adding a copy of the file you built (and possibly the copy that’s currently sitting in my computer) to any user project
folder: import sys, os yourpath = os.path.join(__fileaddr_split(sys.INSTALLED_LIBRARIES+”.m${sys}”), ‘LXX$1’, yourpath) for ks in openssl_subdirs(): for v in yourpath: if i for i in v.split(ks):
sys.stdout.write((”) + ‘\n’) resyntax = os.path.join(sys.INSTALLED_LIBRARIES, `%s/$(basename(ks))`) resyntax.write(list(resyntax.
Do My Online Accounting Homework
strip())) A: This might also be helpful if you are a newcomer to Matlab and would prefer open source, but don’t do a complete machine learning-type program: In Matlab Matlab, the arguments may be
different for each function I wrote (one for each individual function). So within the Matlab function, it would be: print(funcs[1] = ‘import numpy %f’; eval(‘numpy’) %(funcs[0], args)) and if the
first argument carries nothing, it could just return nothing. This question is very related to the question of why am I failing a thing (not his comment is here a very good way to do it) but more in
terms of your code: There are Python programs here (one for each specific function). I took this seriously because I use Lua in the developmentCan someone do my MATLAB assignment on optimization
algorithms? Please, just do this: I solved on optimization of the problem: The number of linearly dispatching points increases when the number of inputs moved is more than 2-D. When this is properly
applied on the problem, linear dispatching of the problem will probably increase the number of time steps involved. In this case, the number of iterations may increase as compared to similar
situations. If a solution not known to be linear can be used instead, the number of steps increased by a factor of two, and so on. Is it possible to solve this stochastic optimization problem for a
set of solutions of the problem? Yes, but it is not too hard to accept that $M$ solution is used per solution. (There might be many solutions involving linear (conic) polyhedra.) 1) Consider a linear
polyhedron 2) consider a K-tree 3) consider a K-tree with $T=k$ parts within each partition 4) consider a K-tree representing the intersection of all $T$ parts 5) consider a K-tree with $T=\alpha$
parts 6) consider a K-tree with $\alpha$ part and $k$ parts of smaller dimension 7) consider a K-tree with $T=4$ part and $k$ parts of the same dimension. Try these examples of Matlab’s Optimization
Algorithm on solving the linear subproblems involving the multi-variable case. The simulation is started in parallel and parallelize the simulations. As a result, if I have $R = m$ diagonal elements,
I would like to run the largest of the diagonal elements of the K-tree which the time steps would involve. Is this the best you can do to speed up the simulation and still still have the run time. So
my guess is that your next assumption has the same accuracy as saying, “I call a 2-D line element” – it would not have a linear dimension. In other words, in your example, I would like you to say the
line has dimension 2. Thanks very much for your help and is there a way to find this number without a linear dimension? Is check my blog less practical to come up with? It may also be called as a
“nonlinear reduction”. But in the original question, it was instead called a BIR problem. Perhaps your reasoning is correct, but I would avoid using BIR such as this. You have to make some
assumptions on everything in your model, so for the purposes of this challenge, I would write this as: let the manifold components be the sets $U, V$ and the sets of points on $1 – P(x)$.
Pay Someone To Do University Courses At A
Nooooo a nonlinear reduction? Oh crap; no. Is it possible to solve this stochastic optimization problem for the original optimization problem? (A number of algorithms can solve optimal algorithm, but
not algorithms for deterministic subproblems) The problem is still one dimension at a time, hence, I do not have a way (and will not follow your method in solution). | {"url":"https://assignmentinc.com/can-someone-do-my-matlab-assignment-on-optimization-algorithms","timestamp":"2024-11-09T00:46:19Z","content_type":"text/html","content_length":"109170","record_id":"<urn:uuid:50842571-fa66-4091-91b0-c45c3071eb8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00248.warc.gz"} |
A to B With QNH Unchanged
A to B With QNH Unchanged MET
With all altimetry problems, the first thing to do is draw a diagram and label the levels and vertical distances that you know.
At sea level will be the local QNHQNH —Static pressure at MSL calculated from QFE using ISA temperature lapse rates. The vertical distance of the ground above the sea is the elevation. The vertical
distance of the aircraft above the sea is the altitude. The vertical distance of the aircraft above the ground is the height (so height + elevation = altitude). Finally, if the aircraft is at a
flight level, you will need to put the level of 1013 in (this could be above or below the sea, depending on what the QNHQNH —Static pressure at MSL calculated from QFE using ISA temperature lapse
rates is) and label the flight level distance - the aircraft is (the flight level x 100) feet above the 1013 level.
Once this is done, you can start to answer the question. The vertical distance between any two pressure levels will be 27 x the pressure difference between the two. Once these distances are labelled,
look for the unknown you are asked for (this could be height, altitude or a vertical distance above the incorrect QNHQNH —Static pressure at MSL calculated from QFE using ISA temperature lapse rates
that was left on the subscale) and use the vertical distances you have got to figure out the answer. There are many common mistakes that can be made - putting the 1013 level the wrong side of the QNH
QNH —Static pressure at MSL calculated from QFE using ISA temperature lapse rates, given the answer for height when it asked for altitude and vice versa, or simply
Get instant access to 1006 Meteorology exam questions.
Start your free trial today.
On a flight from A to B a pilot leaves the QNHQNH —Static pressure at MSL calculated from QFE using ISA temperature lapse rates at A set on the subscale. What will be the indication when the pilot
lands at B? Airfield A: QNHQNH —Static pressure at MSL calculated from QFE using ISA temperature lapse rates = 979 hPa. Elevation = +2000 ft. Airfield B: QNHQNH —Static pressure at MSL calculated
from QFE using ISA temperature lapse rates = 984 hPa. Elevation = +1570 ft.
Want to try all 3 questions for A to B With QNH Unchanged?
Sign up now. | {"url":"https://www.examcopilot.com/subjects/meteorology/altimetry/a-to-b-with-q-n-h-unchanged","timestamp":"2024-11-12T03:30:39Z","content_type":"text/html","content_length":"39244","record_id":"<urn:uuid:03045a92-48a7-4687-9e04-e09df497e865>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00820.warc.gz"} |
The charging and discharging characteristics of capacitor。 - Javalab
The charging and discharging characteristics of capacitor。
1. Switch up to start charging, and switch down to start discharging.
2. The measured voltage is recorded automatically when charging or discharging starts.
3. You can change the measuring point by moving the probe.
4. For accurate measurement, operate the switch after fully charging or discharging the capacitor.
5. To prevent fluctuations in the measured value, do not change the voltage, resistance, or capacitance during measurement.
Charging the capacitor
While you apply voltage to a capacitor, current flows through it.
The process of charging accumulates electric charges, and the internal voltage rises by this accrued charge.
As charging progresses, the charging rate slows down, and the internal voltage approaches the external voltage.
While charging, the voltage changes as follows:
\[ V(t)=V_0 (1-e^{-t/RC}) \]
\(V(t)\) : Capacitor internal voltage (V)
\(V_0\) : External voltage (V)
\(t\) : Time (seconds)
\(R\) : Resistance (Ω)
\(C\) : Capacitance (F)
For example, if the external voltage is 1 V, the resistance is 1 kΩ, and the capacitance is 1000 μF, the following characteristic curve can be obtained.
\(RC\) multiplied by resistance and capacitance is called the time constant (τ). The time constant has the following characteristics.
• When charging during the 'time constant,' approximately 63.21% of the capacity is charged.
• If you charge about 5 times the 'time constant,' about 99.33% will be charged.
• The larger the resistance, the weaker the current flows. So it takes longer to charge.
• The larger the capacitance, the greater the need to charge it. So It takes a long time.
Discharging the capacitor
Discharge of the capacitor also takes time.
Discharging a capacitor can be thought of as similar to charging.
That is, about 63.21% of the total capacity is discharged during the time constant, and when it is discharged about 5 times the time constant, approximately 99.33% of the capacity is discharged. | {"url":"https://cpcalendars.javalab.org/en/capacitor_characteristic_en/","timestamp":"2024-11-10T03:13:19Z","content_type":"text/html","content_length":"88007","record_id":"<urn:uuid:180c9490-a401-4605-9e49-0b55c31c8a8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00237.warc.gz"} |
Plot Options
Abscissa (x-axis)
You may select from linear, logarithmic, and date/time axes. Note, that logarithmic axes are only possible, if the entire axis is positive and does not include zero.
Ordinate (y-axis)
You may select from linear, logarithmic, and date/time axes. Note, that logarithmic axes are only possible, if the entire axis is positive and does not include zero.
Type of Plot
Col/Idx plot the data of a single column of the data matrix against the row index.
Row/idx plot the data of a single row of the data matrix against the column index.
Col/Col plot the values of two columns against each other.
Row/Row plot the values of two rows against each other.
Histogram display a histogram of selected parts of the data (see below for details)
Cumul.Frequ. calculate and plot the cumulative frequency distribution of selected parts of the data matrix.
Norm.Prob.Plt. creates a normal probability plot.
Pareto Chart creates a Pareto plot.
Labeling of Axes
Persistent The labels of the axes are fixed.
Numeric The column or row numbers are displayed.
Names The variable or object names are displayed.
Mode of Display
Lines line plot, connecting all data points in the order of their occurence in the data matrix
Points point plot, any data item is displayed as a point whose type can be setup by clicking the button besides this option.
Spectrum display data as a line spectrum. The data points are shown as spectral lines which are based on the horizontal axis (y=0). This mode is not available for all types of charts.
Classified Line show data as a line drawing, connecting all data points of the same class. This mode can be used to create distinct overlaid graphs.
Any data points may be displayed using one of the following attributes. In addition, the data can be colored according to the class number.
None displays the data as is, without any additional attributes.
Data Points the locations of the data points are indicated by plotting a symbol for each data point on top of the base graph. You can select the symbol by clicking the button right to this
Numeric the data points are displayed by using numeric values which indicate either the row or the columns index of the particular point.
Class Symbols the class numbers are indicated by different symbols.
Class Numbers the class numbers are displayed as numeric values (range 0 to 127)
Class Colors if this checkbox is checked the data points are additionally marked by colors which are derived from the corresponding class numbers. The class colors override the default data
color. Note that the class colors can be assigned on an individual basis by using the command Setup/Colors/Class Colors.
The data plots of DataLab are based on four different colors, which can be set up individually for each plot:
Data this color is used to display the data unless the attributes are set to 'Class Colors'
Fill used for filled rectangles (as in histograms)
Scale color of the scales
BkGnd background color
Grid grid color. Be sure to select a color different from the background color, otherwise the grid would not be visible
Display Grid Lines When checked grid lines are displayed. The distance between lines follows the tick marks of the scales.
Do not Draw If you intend to display large datasets (more than 10000 data points) this may result in a notable reduction of the drawing speed. Ticking off this option prevents data points
Superimposed being drawn on top of each other (in the immediate neighborhood of +/- 2 pixels). This increases the drawing speed considerably (starting with about 100000 data points you'll
Symbols want to activate this option).
Zoom Mode
The Zoom Mode determines whether DataLab automatically adjusts the zoom range to fit all data to be displayed. If Zoom Mode is set to fixed the user can specify the range of the displayed data by
entering the borders of the intended range. This range will be retained until changed by the user. Clicking the "zoom to scale" button
If Zoom Mode is set to automatic, the zoom range is automatically adjusted whenever new data are displayed.
Data Range
This section is only visible if the plot type has been set either to "Histogram", "Cumul.Frequ.Dist.", or "Norm.Prob.Plot". You may select which parts of the data matrix are to be used for the
calculation of the respective plots (column, row, marked data or entire matrix). In case "Column" or "Row" is selected, the column or row to be used can be selected as usual (see the description of
the plot windows on switching the axes). For histograms you can additionally select whether to display the class information in the chart (option "Classified Data"
Histogram Parameters
This section is only visible if the plot type is set to "Histogram". You may select which parts of the data matrix are used for the calculation of the histogram, and the number of histogram bins, and
the depth and the angle of the bars. Setting the latter two parameters to non-zero values creates 3D bars. If you tick off the "Show Normal Density" box a normal distribution having the mean and the
standard deviation of the displayed data is drawn over the histogram.
There are three ways to specify the histogram classes:
• Automatic: If you select "automatic" the number of classes will be set in a way (depending on the type of the variable and the number of data) that the data structure is reflected by the
histogram in the best possible way. The automatic mode is best suited for getting a quick overview. Special needs can be met using the other two options.
• Fixed Number of Bins: In this mode you can set the number of classes by moving the slider. The bin width and the data range are then calculated by DataLab automatically.
• Exact: This option allows to set the origin and the width of the classes precisely. The boundaries of the class bins are calculated such that they form a series of integer multiples of the class
width, including the origin. | {"url":"http://datalab.epina.at/helpeng/plot_options.htm","timestamp":"2024-11-03T23:14:30Z","content_type":"text/html","content_length":"12669","record_id":"<urn:uuid:5b947742-bfb4-47db-9168-c05241c4e01c>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00700.warc.gz"} |
SCIENTIFIC ABSTRACT KRAYEVSKIY, N.A. - KRAYEVSKIY, N.A.
Document Number (FOIA) /ESDN (CREST):
L .1 ""." N01i" , I,'.rl . Wcj.-4--Vi ) ; FR A YEV'.'I't'. !Y, N. A. , J'!*(-jf , , !;%.; ,-- i, ". .., :." ~ "'; fi~ I I , abo ty Lu~j) chamt-,-3s ot' the horm syntem of cict-m In i(-- inchiced
lesions. Arkh. ;,qt. 24 '(,'. (' 1. Deyntvitellnyy chIcin AMN (fr~,r Kriyev.,iMy). ra n tt!~t t I till]. poly porl :3 In VIL t'n dwvo !oi i nj' cr rali lf-~b~ t 1 ve cL'r -11-M. chlen S:SP (for J!
unuarv 15, 1-062. (Moi;kva) ; KRAYEIISKIY, N. A. Chnnuoterintlem of liceptle pultpt~nttry inflFizmr-rition rad 111 I'l oil nichneqn. Ark I i. F;A. :14 nc,. 11. 34-411 1. De-.-,gtvite2lnyy chlen AMIN
:,."SP- (for 20, .1061, I FR I K '~ 11' OVA, 1, . A .( !~U.S'v Vq ) ;~ j",j ~, f: 1 '~ ,~!. - " t ':'~ - -~. P. " -'-, -' , p 1 1 r a b o ty " A., I! . . I ; ~ ~' !,.'.'. : , ". . Skin rop4ilr in
acilte radiat,l(ai-indit-ed -!~t- 24 nO.11:42-46 162e (!.f,. ;~ ~."' ' I p - I " ) 1- D)yntviteliny7 chlen AIMN SSIS)IJ (for Jallua~y 4, 1962), KRAYEVSKIY, N.A., prof.; SAPOHNIKOVA, K.A. Work of the
Moscow Societ7 of Pathoanatomista. Arkh. pat. 24 no.11:85-93 162. (MIRA 18:12) 1. Predmedatell Moakovmkogo obahchostva patologoanatomov; deyntvUellnyy chlen AICI SSSR (for K.-ayevskly). 2. Sekretarl
Moskovskogo obahchostva patologoanatomov (for Sa;.ozhnikovti). re A~F-c PHASE I BOOK EXPLOITATION sov/634-4 Alekseyeva, 0. G., A. F. Bibikova, N. A. Vyalova, A. Ye. Ivanovp X. A. Kra evskly, N._&
Kurshakov, N. V. Paramonova, V. X. Petushkov., f V. V. Snegireva, L. A. Studenikinas Yu. M. Shtukkenberg, and A. Ya, Shulyatikova Sluchay ostroy luchevoy bolezni u cheloveka (A Case of Acute Radia-
tion Sickness in Man). Poscowo Medgiz, 1562. 149 p. lO.,OOO cop- ies printed. Ed. (Title page): N. A# Kursh~kov, Corresponding Member Academy of Medical Sciences SSSRf Proressor; Ed.: S, P.
Landau-Tylkina; Tech. Ed.:. N. A. Yakovley#, PURPOSE: This monograph Is intended for physicians and biologists. COVERAGE: This book describes an actual case of acute Vadiation sick- ness in its
severe form. It describes in detail clihical symptoms, changes In biochemical Indexesp morphological changes in the nervous system, and the distribution of depth doses and energy Lbsorption. Card 1/
1- KRAYEVSKIY, W.A.; LITYINOV, V.Y. Pretumor6uis changes in bony tissue in experimental conditions following the action of radioactive isotopes. Vop. onk. 9 no,1:2536 163. (MIRA 1615) 1e lz Akudemil
maditainakich nauk SSS14 (BOUS-GANCER) (RADIOISOTOPES) KRAYEVSKIY, N.A... red.; LEDEDINSKIY, A.V.p red.; S-IOLYAN, G.L., red. (itestorative processes in radiation lesions; collection of articles]
Vosstanovitellrye protsessy pri radiatsionnykh po- razheniiakh; sbornik statei. Moskva, Atomizdat, 1964. 243 p. (MIRA 17- 5) 1. Deystvitellnyye chleny AIAN SSSR (for Krayevskiy, Lebedinskiy).
-KIMMKIY, N.A., prof.,i rVANOV, A.Ye., atarsh-17 naneboyy mot--udnjk (Mankva) Inflamaticin and penetrating ionizing radiation. kvkh. pat. 25 no.813--14 1163 (M-IRA 17 x4) KRAYEVSKIY, N.A., prof.;
OAPTZHNIKOVA, M.A. Work of the Moncow 8-nJety of Pathoariatoml3ts from September through December 1962. Arkh. pat. 25 no.10:71.-76 163. (MIRA 17.7" 1. Prednedatall Mcskovskogo obshchestva
patologoanatomov (for Krayevskiy). 2. SeMetarl Motikcv~ikngr, obr-lichestva patc- lagoanatc.mov (for KUPSHAKIINA' N.N.; H."fl,"07A, A.S.; N.A., nau,--mn~ry rukovoditcill Study by histochernicn] an(]
cyto-lot,,iral methAj n! (!arly I in the bones following Sr9() Injury. BI)jl. ~~ksp. biol. i med. 54 no.8:104-107 Ag 'G~. 1,-!1FA 17:11) 1. beystvitellnyy chlen AMN !-ISM (r~ir Kra:iovskiy).
PONOW.RIKOV, V.I. (~losj-va); KRAYEVSKIY, N.A., prof., nauchny-y -Lkovoditel I '-'. Morphology of breast tumors induced by radioacLive -tobium. Arkh. pat. no.12:44-51 163. (M-RA 17:11) 1.
Deystvitellnyy Chien A141 SSSR (for Krayevskly). LFBEDEV, B.I. (Moskva);"?Ij~~SKIY, N.A., prof., nauchnyy ruko-oditell State of sensory innervation of the nose and tongue in dogs following injury
with radioactive strontium and polonium. Arkh. pat. no.12151-56 163. (MIRA 17:11.) 1. De3stvItsl'n3rI chlen AMII SSSR (for Krayevskiy'). irAAYEVSKri, filkolay. AlFikstirulruvich; 104MOVA, ai~,~i-
FFa--7Jt- ~vna.,, AVM4AKII, X.V., n.!i. KIIOKIIWVA, Hla r 110 1 i 1. . [Patholo ical anatoffW and problems of the pathogenesis of leukerda~ Patologicheskaia aria-tomila i vaprosy pato-geneza
leikozov. Vosk-val MeditsIna,. 1.965. 417 p. of tlic, pathrilogIcit' arjto.,,,y of cS stomach. ldrest. AIV SSSR 20 nu.1.2:3-10 165. (Itum 19: 1) 1. Insti-tut eksperlmentallnoy I kllnichns~oy
onkolo,-,*li All-TI SISSR, Moskva. vl';,'.Ov' 1'. "1 (K ~~'xva) ; N.A., prof 0 , n! uchnyy nl-kavodi tel I State of the bone mr-ow during,- thi ' " ;'n 11 1 - I I I L sarcoma indu-ed lrj Strontium
90. par.. 27 no.2:14-18 '65. OMIRA 1815) 1. Deyot.vitelInyl chlen AMN (for Krayev3kiy). MWfEVSKlY, N.A,, prof. (Moskva) Tumoroun nature of laukemlaB. 1. Institut eksperimentallnoy deyetvitellnyy
chlen AMN SSSR Deystvitelinyy chlen AMN SSSR. Arkh. pit. 27 no-50-8 165. (MIRA 18:5) i klinicheskoy onkologii (dir. - prof. N.N.Blokhin) AMN SSSR. hil.kyECJKJ'Yl N.A., ~ro-i~ed Ing a~ ~,f "w, ty
?,~.O" og' ! :3 3 for th,~ FJ r half '~f 19641. A.-r'iKb. pat. 165. (m: RA 2 P, J. Prvdsedutell Mos'kovr;'e.,v,,,, patologoanatomov (for Kray4-v,3k! y) . 2. 'Inat 01-:r- AFRIKANOVA, L.A. (Moskva);
KRAYEVSKIY, N.A., prof., nauchnyy rukovoditell raboty Role of the panniculue adiposus in the reparative process following severe radintion injuries of the akin. Pxkh. pat. 27 no.8,.18-24 165. (MTRA
18-10) 1. DeystvitelInyy chlen AMN SSSR (for KravevsAiy). F F "'i ~ , '10 ~ .-'~ " Fik i I !.G.~ ~,~,.'.,()Vlyfv. Y,-I.D!. (YAcnAva) !A , " j3-, rwA. ~'/ no-M2-84 165. ,Fu% 18-,10) M. ( 1. !
I~oyctvite'l1ryy --blen hMNF,,c:SR (for Krayevskiy). jU~,Il ~ ~-; . 01 -.. I -L t , I j lrrir.uUrjn-lL.',itov ~vr4virice. t,!;i ].(I,! I-;L I' Cor tho -n in Pon I rriga Urm cann I. I L. true I. rul;.
('J, Il-. 7, . 1-:onthly L-I.st of' Russian Accessions, Library of ConF-ress ~ctoter lc,.-;2. 1~ :.,D. KRAYr,V'JKIY, R. G.: KRA'YEVSKIY, R. G.: "Methods of auditory work in the school for deaf and
dumb chi2dren.11 AcadevW of Pedagogical Sciences RSFSR. Sci Res Inst of Defectology. Moscows 1956. (Dissertation for the Degree of Candidate in Pedago.-ical Science.) So: Knizhnaya letopis', No. 37,
1956. Moscow. 1. ERAY;,.,VSKIY, S.S. Using metal fenders instead of rubble belts. Bozop.tnida v prom. 3 no.8:30-31 Ag 159. (MIRA 12:11) 1. Inspektor Resvetayevelcoy reyonnoy gornotakhnlihoskoy
inspektaii. (Coal mines and mining-EquipTent and supplies) L,y nn u 8,6-93 !Jol,tr-d c 1~ e tsd of it,:- i r) r o 4o KIIA Y 17~ S K TY YA 1-1 MR/Medicine - Sleep, Prolonged Jan/Feb 50 Narcosis 'Vhe
Problem of the Use of Prolonged Sleep in the Clinical Treatment of Organic Nervous Diseases," Ya. M. Krayevskiy, Neuro-Org Clinic, Inst of Evolu- tionary Physiol and Path of Higher Nervous Activity
imeni I. P. Pavlov "Nevropatol i Psikhiat" No 1, PP 37-41 lil 1-10"'i73 History of subject therapy; results achieved in clinic by this method.Tabulates data. Of 50 cases, 21 shovi4 gbod results,
15'showed Impr0eibent. Five case histories vere used. Two tables. Submitted by Read-, 6fI4lLnId,, tProf W. -A.',"hova, 31 Oct 49. W* 17OTT.3~ KRAYF6VSKIY, Yu.14.; KRY5110VA, N.A., zaveduyimhchiqu.
Charw,eo in the activity of the cerebral cortex in connection with protective sleep inhibition in patients with cortical and oubcortical injuries. Trudy Inat-fiziol. 1:394-405 152. (Y7,RA 0:8) 1.
Sektor organichenkikh iiervnykh rasstroystv. (Brain--Wounds and injuries) (Sleep) 1. KRAYEVSKIY, YA9 H. 2. TissH 6oo h, Ncrvous System - Diseases 7. Application of sleep therapy in the clinic for
organic nervous disorders. Resulting reactions in the visual analysor and some other data on corticcocerebral dynamics in organic disorders of the ner- ouB syBte-qi in connection with sleep therapy,
Zhur. nerv. i psiRh) 53, No. 1, 1953. 9. MonthlX List of Russian Accessions, Library of Congi-,-ss, April - 1953, Unel. 6 stratat caus live of J~qarpjd sle7 therapy, B. V, Adr. pro I 'vrii in;,
Kvvrkct-;~ 53, 1 k zhar. I W.OfW'(4, i,P.I;kh- s ELainal, LmNia-1, w,-~ an.4 ~~asl'mAy 1&'.-.tuvtaL WITe aill-nmUtcrLd to 113tirrtta at With pyralaidt;'a fa mau dases, Glo' atvi irl a z. de~lci)
(I~ivm daily. lriwxls oldni. ~erll Qf i I" t were; atImIuMevni (~.4 cjte'~.m 1,12 o W. sit a t. tr"-.E' ;;% tba da-~fi';u i IAA 1'Wo.1ty of tht 41.1tc(d 'a~ql 911 io.tht firmt jkt4zc -( the. r4twras
-'Vt-c t%Itn (cl in- of thit romt&md tyr4 uf td"p lbtrapy~ ' No cltzr indkatlem of lwncUi~.l valluedd-tep tlt6-apy prMmed hy thCdlMl'tAnt.0Wj Idinjo- "s Wai foor'"'. Gi- ow fee? ampy to tbectIZIC for
Oirpm ne.-YOU3 ~Z;Mur am, c-sIdura zc:d maomzlum in H. J the w0d - zierul~ (if ndlicnu wuler Orel[ thc-.Apy' 01- A, Ye. 11. I?av!fjv 1-vit ~ III"( !r. tic-, -cr-cfll' i A im. comcemtd wiEt Lhd d6n,
cAtht,arut. 0 K, Ca, and. -M-, v-'cS- ent-la We Woe of P.Wcats ntundand urlde' th, b' j to treatinent were. mp,, IOS-17.-S, and 18.C-26.99 mg. - 1. -L" was ~!.21- the 1, /CA ratiq Iva CS~2-05; th -t
tcr 4-9-S All. sueb v~aiies mr,- r.-garde%i ~L; abKove tllow- foaud iri Lhe iittcaw-c (4r itormzd h(m-uni. Utider the ia, 1111 dccp (Iwrkap'~ it tw~ I o" ulio NIK ~Oflte-'!'t 6f Th- z~Aticnt--'
btcitKi ~-Znlm Iva!; ub-xrv--A~ but tlio- shift% it., t4e Lv~l -,truv% valixts iv'~rc nft -f - -! Sy -rc!- I . sharply d'. iaf~d' nor vfcm gny traxi, (Lpr -t' wa-i some cvidei4xi'of cwrtiali--q bco;
-Lcm the 1:(~M,41- fi"U~4 ifldr~:e~l tmiRr F'tudy an", 01V jliychi-.~ -1,te The le'vot -?j tL- eme:-t (if vatUt;utii iii the, tswbli b1c.A '~Zmyn ef.w"Ont 01 K' Ca, Mg R-'A i1-1 KIC;- clih) w"~'re
tic!ii 'th nu:;is, lath in pa -% tqi - - au,,A 56- ifrorft~- KRAYXVSKIY, Ya.M. Significance of sleep in connection with diseases of the nervous system. Priroda 45 a*.7:24-30 Jl '56. (KIRA 9:9)
(NERVOUS SYSTEM--DISnUS) (SLIEP) KUYEVSKIT, Ya*K9 Nina Alekaandrovna Kryehova; on her 60th birthday and 35th year of medical, scientific, and social activities. Zhur.nevr. i psikh. 56 no.6:518-519
156. (MIRA 9:8) (KRYSHOVA, NINA ALXKSANDROVIIA, 1896- ) IaAYk:VSKIY, YB.H. Gonditioned and unconditioned vascular reflexes in patients with cerebral disorders treated by eleep. Zhur.nevr. i paikh.
Supplement; 7 '57. (MIRA 11:1) 1. Saktor nervnykh bolezney (zav. - prof. N-A-Krylova) Instituta fiziologii imeni I.F.Pavlova AN SSSR, Leningrad. (BPAIN--DISKASES) (BLKIP--THFAWEUTIC USE) (REYI,MS)
KOROTKIN, I.I.; KRAYIVSKIT, Tjj.M. Invftntigating the higher norvoun nativity in Datients with brain lenionn following cleep therapy. Trudy Inst. fiziol. 71l77-184 k58. (MIR& 12:3) 1. Sektor nevrozov
i organicheskikh zabolevaniy nervnoy sistemy (zav. - N.A. Kryahova) I, Iaboratoriya fiziologii I. patologii Vysshey nervnoy deyatellnooti (zav. - 7. P. Mayorov) Institute fiziologii im. I.P. Pavlova
AN SSSR. (BIIAIN--WOUUDS AND INJURIM) (SIKKP--THMP3WIC US3) -KRAYIIVSKIY, Ya.H, Treating organic diseases of the nervous system by prolonged aleep. Trudy Inot. fiziol. 7:192-202 158. (MIR& 12:3) 1.
Sektor nevrozov I organicheakikh zabolevaniy nervnoy sintemy (zav. - NA. Kryahova) Institute fiziologil Im. I.P. Parlova AN SSSR (MVO'UB BYSTIH-DTSUSIS) (SLIMP-4MMIC U") .EYCERPTA MICA SEC 8 Vol 12/2
Neurology Feb 59 .. . 1092. CLINICAL AND NEURODYNAMIC DFTAILS OF INFECTIOUS DIENCEIIIIALITIDES - Krayevskil-Y. M . ZILNEVROPAT.1 I'SIKIIIAT. 1958, 58/4 (403-409) Clinical and experimental findings in
15 patients with focal encephalitis in tile hypo. thalamic region are reported. The patients presented loroxysmal ootbursts of vegetative disturbances. In addition, moderate hemisyndromes were
observed, characteristic of lesions of the mesencephalon and diencephalon. All patients had trophic metabolic disturbances, which manifested themselves in the skeleton: in 11 patients especially the
cranial vault was charactoriatic. By determinng the blood codehydrogenase level and the pantothenic acid activity, the degree of im- balance among the vitamins of the B complex was established. The
higher nervous system of the patients was tested by examining conditioned conjunctival defence re- flexes where disturbances were observed which diminished with regression of the clinical symptoms.
The phyalopathological analysis of the experimental data may A 11 be used to explain the pathogenesis of a whole series of clinical symptoms (hyster- ical reactions, insomnia, strong 'affective
reactions' to visceral sensations). and may guide the clinical treatment. (L. 8) F T_ KRAYEVSKIY, Ya.M. Characteristics of defensive (winking) conditioned and unconditioned reflexes in infoctious
diencephalitia. Vop. paikh. i novr. no.5: 72-91 159. (MIPA 14-5) 1. Sektor nervnykh bolezney (zav. - prof..,N.A.Kryahova) Instituta fiziologii AN SSSR (direktor - a%ademik K.M.By'Aov (deceased]).
(REFLEX ) (ENCEF~IIALITIS) i. Ye. and IKURAY-c~"~,'Kjy, YA. M. (Leningra-1, tusz:-~) "Disturbances of lipid metabolism in patients with diencephalitis" Report submitted to the 7th Intl. Congress of
Neirology, Rome, Italy, 10-15 Sep 61 GANELINA, I.Ye.; 19W-IMKI-Yj,--XAfi-MS. (Leningrad) Lipid metabolism disorders in patients with diencephalitis. Klin.med. 39 no.506-41 My 161. (MIRA 14:5) 1. Iz
terapevticheakogo sektora (zav. - prof* B*V. Illinakiy) i soktora nervnykh bolezney Wir. - prof. N.A. Kryshova) Institute. fiziolouii imeni I.P. Pavlova AN SSSR Wir. - akad. V.V* Ghernigavakiy).
(DIENCEPHAWN-DISHASES) (LIPID METABOLISM) GANINA, 1.1s.1 KCKMVAp I.N.; )PAITTSKU,la.M. (laningrad) Funotion of the thyroid gland in relation to the state of lipid metabolism in the diencephalio
syndrome* Klinemedo no*9:129-136 1,62, (KIRA 15&12) lo Iz sektora. nervnykh bolezney (zav" - Tn-of N.A. KMhova) Inatituta fiziologii imeni I.P. Pavlova 1-dir: - akad. V.D. Chernigovskiy) AR SWR-i 3-y
toraperticheakoy kliniki (zav. - Prof. B.V. 111inakiy) Gosudarstrennogo instituta d1ya usover- she);stvGTani7s, vrachey* (THMID MM) (LMD METAWLISM) (DIENCEPRAIM-DISMES) KFLAYEVSKIY., Ya.M.
Clinicophysiologi4cal analysis of functional disorders in the higher nervous activity in diencephalitis. Vop. psikh. i nevr. no.9*227-239 162. (MRA 17:1) 1. Sektor nervnykh bolezney (zav. - prof.
N.A. Kryshova) Instituts. fiziologii AN SSSR (direktor - akaderik V.N. ChernigovBkiy). KRAYEVSMIY, Ya.M.; BULOVSKAYA, L.N.; EEZUGLAYA, A.S. Acetylation processes in patients with diencephalitis. Vop.
med. khim. 9 no.4062i-365 Jl-Ag'63 (MM 17 t4) 1. Gruppa biokhimii pitaniya i sektor nervnykh bolezney Insti- tuta fiziologii imeni Pavlova AN SSSR, Leningrad. KRAYEVSKEY, Ya.m. SOMO vegeUttive
components of aimech and motor condifJcmd reflexes in diencephalic syndromes. Zhtir.nevr. i psikh. 66 no.1136-41 166. (mm". 19-1) 1. laboratoriyn kl;-iicheqkoy neyrofiziologii (zu,:eduy-u:~h chi y -
prof. N.A.Kryshova) Instituta fiziologli Im. Pavlova (dlrektor - prof. V.N.Chernigovskiy) AN SS'SR, Leningrnd. Submitted 3, 1965. K',l-'CL,"Y, A.- p IV5 623 -33 .K9 Gonorf Gcoollelctrild
(Fundanentalr, of ~~o-'.m'llcctriciity) COS. I zd-vo -elclini~-ko--~'coretiictieril~oy Literalullry., 195-1 - V, Lib. llars- Pt. 1 W- 520079) KRAYM, V.A. (Yaroqla-rl 1) Distribution of chronic
tonsillitis mong the workers of the "N11maO Flax Combine. ldrav. Ms. Feder. 8 no.2aS-10 F163 i (MIRA 17:3) -1 3 295 ,I-) S/208/62/002/001/009/016 D299/D303 AUTHORS: Katskova, O.N.,
and_l~ray~,o-t----A.N..-(I.Iosoow) TITLE: Computating an axisymmetric isentropic f1cw of a real gas PERIODICAL: Zhurnal vychislitellnoy matematiki i matematicheskoy fiziki, v. 2, no. 1, 1962, 125 -
132 TEXT: The design of axisymmetric supersonic nozzles is considered. The experience gained in computating isentropic gas-flow by means of electronic computersp is set forth, A few numerical
examples are given. It is assumed that the density p and the specific enthalpy h are functions of pressure and temperature only, viz,: p = p(p, T), h h(p, T)~ (1~2) The isentropy condition is dT h-1
h dp T P P Card 33295 S/208/62/002/0011/009/016 Computating an axisymmtric ... D299/D30~ where h 8h hp = all T = U-T rp The problem is formulated as follows: Calculate the superscnic se,, tion of an
axisymmetric nozzle with inflection point A and uniform flow at the exit (Pig. 1), at given temperature and pressure on the flat transition (convergent-divergent) surface. The nozzle wiLh --n
flection point is called the principal nozzle. The problem i5 ai7ri- ded as follows: Flow from the transition surface, determinatilon of the cross-section in the (diverf,,ent) region OAB, and
solution of Goursat's problem for the contour AC and the entire flow in the ro~-- gion ABC from data on the characteristics AB and BC. For thie velc-- city of sound one obtains 2 PT a- = Pp 4-h h
where P 9, 1 P T T The first part of the problem is solved by expansion in serles, whose coefficients are expreosed by the parameter Card 21t ~~" :, 7.-) 9 r S120 62/002/001/009/016 Computating an
axisymmetric D299YD303 n hT (hTPTT-PThTT) -PT 0 +hpp) + PT +2 I_PP(hTp,,T-pThVT)-hT +hTppp where PT I ~h ift = ~p-OT PTT~ hTT hpp~ op, pT .11P O'P ; PPP ~ aps,PpT On, The solution in the regions OAB
and ABC is carried out by the me- thod of characteristics. In a form, suitable for computers, the equations of characteristics are: ra = r, - k-ri + k (xi - it) i-km X3 = X1 + M (rS - r1) P" jE (Alp.
.+ F K (x3--:r_j)j + F jNpj L (r3-rj)j); (4-1) ta = Cl [N (pa ps) + L (re. - rj)); T3 = T2 + 7" (P3 - P& YWa- - i; z, = V2 (h' + h) Card 3 /""- 16F /_ -~ n' ") ') t " S/208-6-2/002/001/009/016
Computating an axioymmetric ... D299 D303 where m, k, E, P, N, M, L, K and T' are Given by expressions. This system of equations is solved by the method of oucccosive approxi- mationsp whereby (as a
rule) 3 approximations are sufficient. The order of calculation is as follows: From the dimensional quantities p* and T* one determines p*t h* and a* by formulas (1.2) and (2.1);. these quantities
are used to determine the corresponding dimension- less quantities. Then n is determined by formula (3-1) and the cha- racteriotic near the transition surface is fou-nd. Thereupon the me- thod of
characteristics is used. In many problems of interest in practicep the analytical expressions for p and h in ternis of p and T are very cumbersome, In such cases it is necessary to first eli- minate
the temperature from Eq. (1.2~ by integrating (1-3). For tile required thermodynamic functions one obtains o-- inp Inp _P /j(2)(1 in ph 1dinp-l-W, a'=--LP'P)' (6.2) P P P plp-h(2) P Inp. Hence it is
expedient to approximate hk`~) by the polynomial 1n p. Elimination of the temperature involves some changes in the formu- las and in -the order of computation. Thus, Eq. (3-1) is replaced by Card 4/1
3 32 9 ~ S/208/62/002/001/0091/0.16 Computating an axisymmetric D299/D303 n :- Ih(') 7 .1 1 p At present, the following programs were set up and nut into opera- lion on the electronic computer 69CM-2
(BES1,11-2) for a perfect gas, air, and dissociatin6 diatomic gases. The complete program is div,, ded in two: The first part -- computating of AOB -- involves trans- formation to a dimensionless
formp series and calculation by the method of characteristics. The results obtained are recorded on per- forated cards or on magnetic tape which are thereupon used in the second part of the program,
for computing ABC. In -the case of per- feet- or diatomic gases, it is not necessary to first temperature. In the case of air, however, the nated during the first part of the program. As "n the
polynomial of bfjst approximation has bec-n taken. The program. for determining such polynomials, w~is set up by S,F, Pashkovskiy (of the Polish Academy of Sciences), during his stay at the Compu-
tation Center of the AS SSSR~ A 65-point scheme was taker, ~n tiit~ Vransition surface; 100 points are taken on the M-ch%racteristic~ Card 5/0 11~1 33295 S/208/62/002/001/'1009/016 Computating an
axisymmetric ... D299/D303 With such a number of points, 1.15, 1.45 and 1~8 hotirs fire requirel for the calculation of the CAB region to axis point3 with a pressu re of 10-1p*9 10-2p* and 10-3 p*,
respectively. The cal,_-ulat-ion c,.'L' ABC takes 13 minutes; these calculations apply to a perfect gas. Some of the results are shown in figures~ Nozzle contours are compa- red for hydrogen- and
perfect-gas flow. It was found that for air p*/p = 1000, and for a perfect gas p*/p = 760. Thanks are extended to Yu.D. Shmyglevskiy, N.S. Galynn and L-M, Shashkova., There are 9 figures and 4
references: 2 Soviet-bloc and 2 non-Sovies-bloc~ The references to the English-language public%tions read as fo-llows; L. Heller, Equilibrium statistical mechanics of dissociat-.?ig d-,a'~o mic
gases, Phya, Fluids, *1959, 2, no, 2, 147-152. R,-. Edse, Design of supersonic expansion nozzles and calculation of isentrDpic exDr,- nent for chemically reacting gases. Transo AST.'E~ 1957, 79, no,
7, 1527-1535. SUBMITTED: September 20, 1961 Card 6AV h1,A'-YKO A. ; KATSKOVA, O.U., otv. red.; ORLOVA, I.A., red.; KORKMA, A.I., toklm. red. [Variational problems involving supersonic flows of a gas
with nrbitrai-i therrodynamic propertics] Vari:A.Aonnye za- dachi overkhzvukovykh tochonii gaza s proizvollnymi teiro- dinamicheokimi svoiAvami. Mlonh-vn, Vychislitcllnyi tsentr Al, SSSR, 1963. 82
1,,. (YEIA 16:12) (Calculus of variations) (Gas dynamics) 12312 __-Z A(b)/LrflT(2)/~W(a).~-2/BDS/L?S(v)..- AEDO/AFMASD/' AMW/APGC P Wo%-Ij. -4/Pe-4 W ACCESSION NR: slwwI63100010041=610115 AUMOR:
Katshms 0, It. (Moscow); ICrayko, A. N, (Moscow) 77 TITU: Calculation of plena and axisymmetrical supersonic flava in the i presence of irreversible processes: SOMC2- Zburnal prikladnoy mekhmiki I
teMmichaskoy fiziki, no, 3.9631 3.16-118 TOPIC TAM: nozzle., contours, characteristic.. frozen flow.. equilibirmn flow., supersonic nozzle,, irreversible process.. supersonic flow.. plane flovp
wdsyn- metrical flov., inviscid flow ARMACT.- A finite-difference method has been developed to simplify the nvmer- i ical solution of the equations of the characteristics for one-d1mensional and I
exisymmetrical supersonic flov of an inviscid., non-heat-conducting gas in presence.of irreversible physicochemical processes. The state of the gas Is given by the pressure (p), temperature (T), and
n rereme-ters (qj) characterit- ing the irreversible processes je.g.., component concentrationy internal energy). Card 11A 3 L 17312-63 ACCESS1011 VR-. AP3006137 The variation in these paxameters Is
described by the* equation: dqj Fi(VI ey P.- T Pq) c~(v, 0, p, T, q) fi(pp T., cl), Vhere x and y an rectangular coordinates; w is the absolute flow velocity; 0 Is the inclination angle of the
velocity vector relative to the axis x; q is the sum of qj; and Pi. 4A. and fi are knovn functiote of 0., pp T., and q. 91 determines the rate of the irreversible processes. Frozen and equilibrit=
flow occur at o and respectively. By series expansion of f12 using steps of (q,2 - Clilf, the folloving finite-difference equation vas ob- L 17332-63 XCESSIOIT M: AP300613T The subscript,3 denotes
that argmients p,,., T q 2J' qjl are used The subscripts 1 and 2 denote the, hiown and tuftn9w2n(qjU4=tj)t"yw.dThjj fo=MtU Va*S used for calculating the flay of dissociating oxygen in the diverging
section of an axisy=t cal nozz initial pressure of 1 atm., initial temmerature 50WIC., and M a 1.001. The results (see Fig. 1 of the Enclosure) indicate that the presence of irreversible reactions
leads to quantitative as vell P-squali- tative changes. The formula can be used for calculating nozzle contours for arbitrary types of flow (subsonic,, uniform, unsteady., etc.) in the presence of
irreversible processes. "The authors are. grateful to Yu. D,__ShmY*glevski-y for his interest in the vork and his useful evaluations., and also to 9, 1 ,Suchkova for preparing the report." Orig. art.
has: 4 figures and 3 formtLuts. ASS=MON,. none SUMMED: llApr63 DATE AM IlSeP63 ENCLO. 01 SUB CODE: AS Al NO W S07: 001 003 Cc rd KRAYKOj A#Ns (MOSCOW) "Some variatiohal problems of gas dynamics of
nonequilibrium flows". report presented at the 2nd All-Union Congress on Theoretical and Applied Mechanics, Moscow, 29 Jan - 5 Feb 64. , . 1-1 11 1 11 1 ( 1"o-COVI) L )r~ k:6 '~ 9 C; - N - -9 "-~'
U'l -A.. -N. ;"A Ui*,'.OV A I. ", - 1 9 , 4 , "Characteristics method for the analysis of equilitrium and ncn-equilibrium gas flows" rerort Dresented at the 2nd All-Union Congress on rheoretiesl and
Arplied Nechanicso Moscow, 2c) Jan- 5 Feb 64. L 10399-63 EPA(b)1EWT(l)1BDS--AM1AFFTG/A fd ACCESSIM RR: AP30032 8/000 r3 003/0484/0495 43 Cel/ AUMOR: Kreyko, A. N. (Mscow) TITLE: On the determination
of bodies of minimum P:~g_by use of the Newton and Buseman pressure-coefficient law SOURCE: PrikladntWa matematike i mekhanika., v. 27, no- 3, 1963, 484-495 TOPIC TAGS: bodies of minimum drag,
Newton's pressure-coefficient lav, Buseman preasixe-coefficient law, necessary extremum. conditions, necessary minim= conditions, determination of extrerals, determination of optimum contours
ABSTRACT: After reviewing a series of studies on determining the shape of bodies having Wnlimrm drag under certain types of restrictions, the author analyzes the saw problem with various arbitrary
restrictions. The variational problem of deterndning functions (from the class of permissible functions) which under certain isoperimetric conditions minimize the drag functional is defined. A
solution is first presen-ted for a case in wbich pressure on the surface of a body is determined by Newton's presstwe-coefficient, law. The functional (1) (see En-21osure), which has the same first
variation as the drag functional, is Card 1A L 10&942- [ d6 AP3003243 ACC I NR: used in this solution. From the first variation of (1), an expression is derived by which the required extr-- and
minimum-drag conditions and the end conditions for extremals are established. On the basis of these conditions the contour of the body having adnimum drag can be constructed. For optimality of the
contour constructed., certaln conditions for extremal are established from the second variation of (1)o An example, of the deterrdnation of optimal contour for a body of given dimensions is
presented. Some new results were obtained for plow and symetric ducted bodies. An anallogous study vas made for the case in vhich pressure on the body is determined by Buseman's pressure-coefficient
law. "The author is grateful to Yu. D. ShWglevaki for discussion of the vork." Orig. art. has: 45 formaas. ASSOCIATICK: none SUBKTM-, 30Jan63 DATE ACQ: 23jul63 EMCL: 01 SUB CODE: 00 NO REF SOV: 010
OTHER: 017 Card. 2/3 ----------- ------ KAI : ~V A ,I-- , I.'. ; r,;,. ~ ',' ' It ! , A - .hYZHOV, O.S., otv. red.; Glilfi,()";A~ I.A.; red. (Calculation ~-," aazl a~ijyrunctric!il supersonic flow3
in the presence of lrrevero~ble pro,usseq] Hasclipt ploskikh i osesirrmetrichriykh sverkhz;-uRov-ykh techonii pri nallchli neobratiuivkh prot~;ejsov. Moskva, VI's Mil SSSR, 1964. 1,2 p. (mla.k 17;01
FAM4016847 DOCK ZIEWITATICN S/ Krafto" As No Variational problems of supersonic pe flows with arbitrary thermodynazic properties (Varlatsionqy" sad&chi everkhavukmy*kh teohsniy gassi a prols-
volfny*ai termodimlohaskimi avoystyani) Uoscow,, We AN WSR,, 1963o 82 p. illus,j biblioo 14M copies printedo (At head of titles Akad*GJYU nauk SSM) Responsible editors Natskovaj Os He; Editors
Orlovaj, Zo Aq Technical editors Norkina, As LI Proofreaders Shvedowa,, To No TCPM TAGS: variational problems, supersonic flow, gas flow,, thermodynamics, extremals, minimaj, max1ma, shook solutions,
# shookless solutions,, shook waves, bottom preavure 1. PURPCSZ AND COVEUMv The author expresses his gratitude to rue Do Shmy*glevskly for his discussion of the problems arising In this presentation
and to 0, 1, . Suchkova and L. Pe Frolava for their assistance with the sianu&crlpt preparations, Card 213 AU4016847 TAKE OF CCMNTS: Introductlon -Ch, Io Basic equations and statement of the problem
5 Ch. IL Certain properties of super.-ionic flowne Class of peraissible functions M M~ 19 Cho IIL Contirucua solutions* Inveatigatioh of a flold of extremals - . 33 Cho No Necessary conditions for a
mininium (wxiMM)o Field of application of continuous aolutions - - 46 11 Cho V, Discontinuous "shookless" solutions - - 56 Ch. VI.' Field of application of vehockleseN solutions. wShockQ solutions 61
Cho VIL Determination of optimm contcur in the "awe of an attached shock wave 66 Ch. VIM VOI&VIonal problem taking into consideration variable bottm pressure 74 Literature 02 i; Card 2/3 AM4016847
SUB CCDZt A4 Hu SUBM ITTZD: 161by63 MM'7 D= AM 2.Wtia64 .Card 313 AC~MSION NR: AP4027587 s/oo4o/64/028/002/0285/0295 AUTHORi Krayko, A. N. (Moscow) TrrLE: Variational problems in gas dynamics of
cQuilibrium and ncacquil-ibri= flows SOURCE: Prikladnaya matematika i mekhanika, v. 28,, no. 2, 1964, 285-295 TOPIC TAGS: variational problemo gas dynamics$ equilibrious flow, nonequilibrious flow
minimal resistance, plane body, axisymmetric body, maximal thrust, irreversi- ble process, nonlinear partial differential equation, control contourp Lagrange nultiplier, optimal contour AESTRACT; The
author studies the problem of determining the form of plane and axcisymrietric bodies of minimal resistance and nozzles of maximal thrust in station- ary supersonic flow of viscous and
non-heat-conductive gas in the presence of ir- ,reversible processes (such as chemical reactions proceeding at finite rates) and in the absence of such processes. He assumes that the region of
influence of the desired part of the contour is bounded by characteristics and does not contain shock waves. Restrictions on the contour of the body are arbitrary: the Card 1/2 ACCESSIM' NR:
AP4027587 d3z.ensions of the body., the surface area, volume, etc*, may be given. In this problem, the paramatere on the surface of the body, determined by a system of non," linear partial
differential equations, are functionals which are unknown in advance. This difficulty can be overcome by passing to a control contour. However, such a passage is applicable only when the dimensions
of the body are given and in the ab--~ sance of irreversible processes. Using a method which does not admit such a pass- age, the author obtains necessary conditions for an extremum, comprising the
basis for construction of optimal contours. He explains that in many cases it is neces- -ary to allow discontinuities of the Lagrange multiplier for continuous flow paramet ers. He shows that these
discontinuities may occur along the characteristic' and the flow 'line, and he obtains relations on the discontinuities* Orige art* has: 5 figures and 51 formulase .ASSOCIATION: none SUBMITTED:
23Doc63 DATE AMt 28Apr64 CLs 00 SUB GODZi AI NO REF SM 007 OTHERt 005 Card 2/2 n a 1 pr ~~ n 2. trin r, 1 r, nonecuillbrium flows. 'k]. mat. i Treih. ~R 2 5 I ri 2't IRA Mr-Ap'64. F;~tL Pd-4
ACCESSION NRt AP4013392 S/0040/64/028/001/0178/0182 AUTHOR5; o A.. N. (Mbecow)# Kaumovey'LNO (Moscow); (Moscow, TITIEs Construction of bodies of-optimal shape In epereonic flow SOURCEs Prikladnaya
matemati ka izakhanikap v. 2Sp no. Is 1964t 178-182 TOPIC TAGS: optimal shape, supersonic flows minimal &-ag, maximal thrust, axisym- metric jet, Lagrange problem ABSTRAM Under certain siqplif~lng
assumptions of a nature too detailed to be covered here, the authors 'determine the regions of existence inthe plaza of flow of. various solutions to the problem of determination of bodies with
lainimal drag and jets with maximal thrust when certain limitations are plae~ed on the dimensions involved. Working basically with a Jett they also construct new solution schemes. Their solutions
contain the part or the bouadary extremm brought about by the dimeneion restrictions which wao formerly lost due to the necessity# previouslyt of using numerical methods 0 Orig. -arto h" t 3 figures
-and 26- formulas. ASSOCTATIONs none 6MMMD: 240ot63 ENGM W SUB CODE: NO REP BOVt 006 OTMI 005 Card 1/1 KRAYKO, A.N. (Moskva); NAUMOVA, I.N. (Moskva); SHMYGLINSKIY, Yu.D. (Moskva) Construction of
bodies of optimum shape in a supersonic flow. Frikl. miit. i mokh. 28 no.1:17ft-182 Ta-F'64, (MIRA 17:2) L 15h62-6 WP(m)/FWt(l)ACS(kY/,nlA(d)/EWA(l) Pd-1. 4C=SION M. '"5005112 S/W79/64/000/006/W40047
IDMORS: Galyunp* N. S. (Moscow) I Krayko A 14. (Koscow) MWE; Calculation of nonequilibri~Z~.~ ISOURCE: AN SSSR. Izvestiya. Mek 1ka :royeniye, no. 6, 1964, 41-47 ,TOPIC TAGS: nonequilibrium gas flow,
numerical procedure, finite differences method !ABSTRACTt The finite'differences method proposed by 0, N. Katakova an4 A, If, Krayko! I(Raschet ploakAh i oaeaimetrichnykh averkhzvukovykh techeniy pri
nalichii Ineobratimykh protsessov. PWFf 1963, No- 41 p. 116-118) for solving nonequilibrium;; iflow problems is demonstrated. This method permits integration steps which are =6 larger than is the
case with the Euler or Runge-Kutta meth.)ds. The method can be applied to equations of the forin U~+i + 06., VI). 2a M) (80 > 0 constant parameter,, f (,r u) - given function) with the finite
difference equivalent 96n+'t an U"' U; U. + + > 0). Yh" ki, Ulk + r which instead of the algorithm, !UU41-luml- Card ~7 7- '%NAM -iA 1~fro,N~,;P,_~.---,,_ ". MWMIA- T, 35462-65 !ACCESSION NR:
A.P5005112 -yrdpos,6 -the ----- - 4 2, ~Wb V Us es-that/At. U1+1 an+ ni~ which permits much larger integreition':stepo i Ze + %for$ + Me method in demonstrated on a set of 11 equations describing
nonequilibrium lexpanaion of air..with c .oupled chemIilea! reactions. Its application to two-dimen- islonal or axisymmetrical flow is also demonstrated by simplifying the above examples !to the
two-dimensional case. It is found that the method requires a significaatly iemller number of steps forconvergence than the Euler or Runge-Kutta methods (in lone of the examples the integration steps
were 101 times larger.and still produaed the same accuracy). Orig,arte.hasi.11 formulas. ASSOGIATIONs none ~~KOM SOV 1 003 OTMs 004 17- Icard KRAYE0, A.N. (Moskva) krialyf.i.,~ reprt~serj la ti rin
,~f - a i r fur. -, -,.: (j.,: ~, I rizr~. Zlnur. 4 nc.,.-'4e-550 '- ~, i, -'~EHA 17'-In' " I -j L 8487-66 EWT(l)/ErC/EPF(n)-2/W(m) UP c) GOIAT ACC NRI )W5-621915 SOURCE CODE: UR/0207 9Y 5- 9q I
AUTHOR: Krayko, A. N. (Moscow); Moskvin, Yu. V. jmoscow) ORG: none TITLE: On determination of two-temperature plasma composition SOUPCE: Zhurnal prikladnoy makhaniki I Vkhnicheskoy fiziki, no. 4,
1965, 154-156 TOPIC TAGS: plasma temperature, plasma diagnostics, theoretic phisics- ABSTRACT: The problem of the separate temperatures of the distinct components of a plasma is considered
theoretically. The plasma consists of neutrals, Ions and elec- trons and is quasineutral. Each specie forms a subsystem interacting with the two others. The slowest interaction process is the energy
transfer to higher states 9f ionization (radiation processes are neglected) and the dominating effects are the elas tic collisions. It is further assumed that electron gas and the energy levels of
the heavy components are in equilibrium so that electron and ion tempdratures are the same An equation analogous to Saba's equation is derived, which with the usual constraint of statistical
mechanics and the pressure-temperature relationship leads to a deter- mination of the plasma state as a function of the two ter"ratures. The effect of de- parture from the stated assumptions Is
briefly considered. Orig. art. has: 6 formu- las. S UB CODE: 20/ SUBM DATEs 05ApM/ ORIG IMF: 000/ OTH REF: 005 I, KRAYKO, A.N.; SLOBODKINA, F.A. (Yoekva) Solution of variational problems In
one-dironnional magneto- hydrodyna.mices Prikl, mat. I makh. 29 no.2022-333 Mr.-Ap 165. (?CPA 18s6) L50221-65 WT (1) /Dl~ (At) /EW 1) Pd-1 AccEssim NR:. AP5014933 *0040/65/029/003^18/04291 AUTHORS:
k~N ~Moacow); k"rninp _4 Ye. (Moscow) _Z21 TITLE: On the thsoEX of fl4ws 6fa two-speed continuous medium with solid or 14 quid- partidles-_ SOURCE: 'Prikladnaya matematika I mekhanika, v. 29, no* 3,
1965, 418-h29 TOPIC TAGS1 viscous gas"flow, particle motion, continuity,, contiraous flow method, flow research ABSTRACTz The problem of movement of a continuous nedium having extraneous matter is
described by means ofamodel of a two-speed co-,ititmous substance. Several conditions are established for the purpose of clarifying the modelt 1) the particles are identical spheres and collisions
among the spheres can be ienorcd; 2", distances along which the flow:characteriaties are actually measured are a great deal larger than interparticle distanceal 3) the Hach number of relative
particle motion is lose than critical's It is furthormore assumed that, viscosity and thermal conduction an important only in processes of gas and particle interaction, The equations of motion and
particle energy are given aa. L 59221-65 ACCMION Us W5014933 "+PVP-f Fd 0 VdV e4 + W q - Qd 0 q o Y. Yd a,--. n > -7. The notation used Includest. m- mase isP 0d constant density, Vid- velocity, Td-
part-4cla temperature, p- presmire,, T- gas temperature, V- gas velocity,, and t- timg. An aggregate streian flow density is derived by considering mass transfer i through an infinitesimal voluias
element. The equations of mass conservation are given in integral form for both gas.and particles as YR&I 0, k&4~~.w I M dr + P'j V'I ads -0. where -ris an arbitrary volume bounded.by.S,and n is the
internal normal to S. The equations of conservation and motion within the control surface 3 are elaborated to include ho4t flow and work considerations. The mathematical mod"I i..Cord 2/3 ....... .
..... 7411 74'. 2 9 ~3 53 - 6 6 &4T(1- J'/E'YP(m) "I'll ACC N AF6013194 SOURCE CODE: U13/01121/66/000/002/0027/0036 ~A'UTHOP: Galyun, N. S. (Moscow); Krsykoj A. N. (Moscow) IORG: none i :TITLE: A
variational problem in one dimensional nonequilibrium ~dynamics ISOURCE: AN SSSR. Izvestiyo. as Melchanike. zhidkosti i gaze, no. 2, 1966, 127-36 i iTOPIC TAGS: gas dynamics, variational problem,
beat conductivity, gas Iviscosity [ABSTRACT: The article starts with a consideration of one dimensional I ;nonequilibrium flow. In the equations, the x axis is directed along the axis of the nozzle
and the origin of-coordinates is located in some ;cross section. y is the ordinate of the nozzle wall. Neglecting the :effects of viscosity and beat conductivity, the flow under considerationi I ;is
described by the equations: Pulw + P'-~ 0' (motion) Pwyv+l = Pawnya"I continuity) (1 fl '/,,0 + h - 11 energy) (1:3 ,,-Card 1/2 L 29858-66 ACC NRs Ap6o13194 There p is the pressure; /v is the
density; h is the specific enthelpy; 1:W is the Cas velocity;-V= 0 and 1, respectively, and the flat and laxisymmetrical cose3; H is a constant. Solution of the system of i i 'equations is followed
in the article by a number of examples of ;calculatton,.; using the method. It is demonstrated that a T)revious .~LP!j)t St V Similar solution (referred to in the bibliography) was In terror. Orig.
art. has: 25 formulas, 7 figures and 1 table. !SUB CODE: 201 SUB1,1 DATE: 31may65/ ORIG REF: 006/ OTH REF: 005 M-M 1: 27379-66 EWT(1) FWP(M)/EVA(d)/FTC(m'-6AWA(i) WN -1P60--12549 8 Of M-r, ODE W/66;
Fq/66/6 W002/0312TO32-0-' AUTHOR: Krayko, A. N* (Moscow) ORGt none TITLE: Solution of variational problems in supersonic gas dynamics SOURCE: Prikladnaya matematika i mekhanikat V. 30, no. 2, 1966,
312-320 TOPIC TAGS: aerodynamic characteristic, aerodynamic configuration, missile Lerodynamice, gas dynamics AMTRACT: A more general solution of a variational problem encountered in supersonic gas
dynamics is presented. The'eolution is an extension of K. G. Guderley and J. V, Armitage (A general method for the determination of best supersonic rocket nozzles. Paper presented at the Pymposium on
extremal problems in aerodynamics. Boeing Soi. Res. Laboratories, Flight Sci. Laboratoryp Seattle, Washington, Dec. 3-4, 1962). The extension consists of the inclusion of the linear relationships
which exist between the coefficients of a closed flow characteristic in the calculation. Solutions for two special cases are presented: a) the base pressure is independent of the sought-for
contour.shapep and b) the base pressure is determined by conditions of the Korst typet H. H. Korat (A theory for base pressures in transonic and supersonic flow. J. Appl. Mech.p 1956, vol. 2] $i No-
4), The solution for case (a) is similar to the solution derived by A. N. KraVko (Variatsionnyye zadachi gazovoy dinamild neraynoveaWkh i ravnovesnykh techeniyq PUM, 1964t t- 28 vyp. 2). In the
solution for case (b)q equa,- tion a are derived which are important in the design of minim- friction nose con684 Orig.. art. hast 2 figures and 25 equationso Card 1/i So CODE t 200 .01)bm DATE t
260ot65/ ORIG REFt 005/ OM RM 002: L V,312-66 E'-rr (1) ACC NR. AP6028320 AUTHOR: Krayko, A. N. (Moscow) GRG: none SOURCE CGD-E: LI T Investigation of weakly disturbed supersonic flows in the
presence of an arbitraiy ;iumber of nonequilibrium processes SOU" 4, 1966, 661-673 iCE: Prikladnaya rruitematika i r-4khanika, v. 30, no. I "' T AGS -supersonic aerodynamics, supersonic flow, steady
flow, nonequilibrium fiow, inviscid flow, flow field, flow analysis AESTRACT: This article presents an analytical stlidy of stead,.r F"OW fields of zT, D-viscid, no-n-heat-conducting gas, a.-suming
small disturbances and the presence of ,-ir arlbitrary number of nonequilibrium processes. Its purpose is to derive and an-alyze linearized equations of a steady flow over a thin profile and a body
of revolution. Integral represent ations of flow parameters are obtained, using ,a,,)lac(! transformations, which are employed to deter-mine the velocity field. They are- also used for studying the
flow properties and attenuation of disturbances at la-,Za distances from the body in the region between the initial frozen and -~i-,uilibrium characteristics. Orig. art. hass: 3 figures and 50
formulas. [A;3 6v3 CGUZ: 20/ SUBM DATE: 12Jan66/ ORIG REF: 008/ 0111 E-,,F-. 009/ ATD PRESS:5 KHAYKO , K . , iw-Vt-r , pi-optigamlist, An excellent beginning. Voen. vest. 1.1 no.4:70-71 Ap 162. NIHA
1514) 1. Politicheskiy otdel Zhitomirskogc oblvoyerikomata. (Retired militari persoraiel) (Journalim, Military) ACCESSION NRs AP4043531 8/0258/64/004/003/0548/0550' AUT11ORi _EFM~o! A. N. (Moscow)
TITLEt Analytic representation of the thermodynamic functions of air SOURCEt Inzhenerny*y zhurnall v- 4g no- 31 1964o 548-550 TOPIC TAGS: thermodynamic function, air propertyp specific enthalpyt
specific density, pressure dependence, temperature dependQnce, computer BESM 2 ABSTRACT: Empirical equations are found for the density and specific enthalpy h of air an functions of temperature T and
pressure p valid for temperatures from 400 to 20000K and for pressures from 0.001 to 1000 atm. In deriving the expressions ft is assumed that undiooooiated air contains 21~ oxygen and 79% nitrogen by
volume$ that there are no oompounds of oiygen and nitrogen, and that reactions take place in the following orders dissociation of oxygeng dissociation of nitrogeny single ionizations, double
ionization of nitrogen. The single ionizations of nitrogen and oxygon are replaced by ionixations of some gas X whose properties are obtained bir averaging over the number of particles# It is further
assumed that each component satisfies the iaeal gas law. The expr6asionsp neglecting the double ionization of -Card 1/3 ....... 'ACCESSION NRs AP4043531 p ovgenp ba6ve the form Ull + e1 + Es + 2as +
2e4) j(0.21 - el) Ito, + (0,70 - es) llN.+2(cj-0.2tcs)Ho+ + 2 (% - 0,79Q Hri + 2e,Hm,, + 5 (F, + ej] T + 50000c, + + I f 3200 % + 333000 % + &37,,000 84. (~77-800 1',88+ V8,3 + pr-l eip 4,02 +
--1,32-10-4T)] 113300 [0,22 + f,090 + PT-1 exp 5,94) 166530 - top li -,[I + P7- exp + 13,77)] T a,- -0 5 + T/ 0,25-1- f,58 I i + p7-011 oxp + Hol- 3,070 + 1,19. 10- r - 9 - to-$ T1,1 1 - 3,415 +
2,J7-10- T - i,2-10-SP, 1N flo - 2,514 + I - 10-4T + 1,2. WWP, Hly - 2,117 + I -10-4T - 1,97. io-o", f~jt+ m2.40 + 2-10-67, - 29,25 drie Card 2/3 ACCESSION NRt AP4043531 where /U is the molecular
weight of undissociated air and R is the universal gas constant. The 8 i's represent the fraction of molecules dissociated into atomic oxygen, the fraction of molecules dissociated into atomic
nitrogen, the fraction of ,singly ionized atoms, and the fraction of doubly ionized atomat reopectively. Valuee computod from those equations agree with tabulated data within 3% for h and 1.5,14 for
Corrections required to include the effect of doubly ionized oxygen are also indicated. Computations were performed on the computer BESM-2 at the Vy-*chislitellny*y tsentr AN SSSR (computer Center,
AN SSSR). The author thank N. S. Galyun and L. M. Shaahkovu for help in carrying out the work. Orige art. hast. 17 equations and I diagram, ASSOCTATIONs none SUBMITTEDt 020ot63 ENCL% 00 SUB.CODEt TD
NO REP BOVI 007 602, Card 3/3 L KRAYKO, K.J. mayor EnthuBiasts. Voen. vost. 41 no.1:67-69 Ja 162. (MIRA 16:11) Ye . .74'. Blosyntheals of vitamin C " a regulatory property of the OrganiOn. B. 1.
VAF14)Vsktya and H, A. Kralkot (Acall. M"f. Sci., Moscloar). rilicillAimiya 17'.1 fill-WIP.W.- The flynthesis of vilamin C (1) by white rais i* Jncrra~l be I 111"fri line, a aut"tAnce which delwa-a-v
the aMirtfir -4 the I till# A`;If v Into Tstrus; C.11111ra so. vollk Is 41, 111"OhIc I., vilth-i't 1. -fl. N . 1- "I I if, t;IIrli"Itirs (liver, kfifney) after haville 1-ti glv~.l Alfiraftolu'* The
slimulatil-n of life bil-truth-4, -I I I- flut the of fertfing girrcurvirsof 1, hill ritual be at a frgullitory machanhain of the oagAniarit bailell fin tire activity rill the nervous system. It.
PfIrsti,,ir VItamin C bi asynthe.;N i% chickg ia teladon to the pi ntw-jlc-, in tile rali In of folic Ili i'l and Ifs derivar:vc~~. V~ A Kir.~ ikci%a, H. A. Ki-Au), 0. 1. P.-jr, A.%*. Trufailov, und
It A Yanov~.Laia (Nutrilicia hi,t:~ AcA. Si. U.S.S.R., pteroylglutamic aci,j rv,.uiti ;:I mi i!I tile (micil. Ill Vitamin C ill the ill.- N- l, gwd,,! al ~l C01111WHIMOrl, 41 1p:. ~1- l" th" J
activit% of tile vl-jl. The of pt~r,-,I jvij ir -J littioyf aniiiwi/0.6 a~ id I ~,i r,-! wl 11) x it I- will C i;I tile Of it :1 llmuwl ].:%,I It it - pear-, p6iiblc tI) ;11,111M th"t .1 irill! Ili!v
! 'i !~ 11, hi~jl. acti% ity of A 1! j1tc[,q1.;:Ii;!-- Jlt-ii]C !Widl. SUCII All .1, :1 dillical Val 1o'1- S. pj- KOSINKO, S. A#, KRATKO. Yo.A. I Somwl-at~a on the vitRmin C supply of the child's
body, [with Ornnfflary In English). VOP.Pit, 17 no.4:24-28 Ja-1C '53 OURA 11:7) 1. Is laborstorli Lzucheniya vitaminov (zav. - prof. V.V. Tefremov) Tnetituta pitantya AM SSS2, Moskva, (VITANIN C.
metabolism requiremont in child. (Rua)) i0ATKO, Ye.A. Some rmtorials on the significance of vitamin P for the orgaDlAm- Vit. res. i ikh isp. no-4:108-114 159. (YJ-qA 14:12) 1. Institut pitaniya
Akademii moditainskikh nauk SSSR, 14ockva. (VITAIIINS-F) (ASCORBIC ACID) (CAPILLWES -PEla-AB MIT Y) Kily0p Ye.A. Effect of additional vitamin P (a catechin complex) a&-iinistration on capillary
resistance in factory workers exposed to high temperatures. Vit. roe. i ikh idp. no.4t265-271 159. (MIRA 14:12) 1. Institut pitaniya Akadomii meditsinakikh nauk SSSR, Moskva. (VITM=-P) (GAP ILIARIFS
-PK~MO ILITY) (HUT-PHYSIOLOGICAL UFECT) ,WL~INIKOVA, Yo.Hl.; TIK11U)"llia)VA, A.N.; 4~~,AYKO Y,) AYEINAIR, 0.1.; GV0,ZD0VA, .. - 3- __ _4 * 1- ** L.G.; SULOVYiNA, L.Ya.; KULIGiuiKo, Ye.V.; G'_'T
A.S~;. Study of the motabollan of vitamins in workorn In tho hot shop of metallurgical factory. Vop. pit. 19 no.2:3-9 Mr-Ap t6O. (MIll 14:7) 1. Iz laboratorld. izucheniya vitaminov (zav. - prof.
V.V.Yefromov) Instituta pitaniya AM11 SSSR) Moskva. (VITAI,II?,Z) ()EAT--PHYSIOLOGICAL EFFEGT) GRUBINA, A..Yu;; K13AYKOJ Ye.A,I IMASU211KWA, Ye.M.; RAZUI,,!QV, II.I.; SEER r-,VA, M.-A.; SKIAXOP
13.X.-; SHISHOVA, OLA. Effect of food enriched by methionino on tho dovolopmont of experimental ailicosis in white rats. Vop.pit. 20 no-3:41-46 My- Je 161i (MIRA 14:6) 1. Iz Instituta pitaniya MI
SSSR~ Moskva. (LUNGS-DUST DISEASES) (METHIOITINE ) (DIET) - YiRAYKO, Ye.A* Method for determining the amount of vitamin P-active catechins in the urine. Vop. pit. 20 no-4:57-59 JI-A." 161. (141RA
14:7) 1. Iz laboratorii izhcheniya vitaminov (zav. - prof. V.V.Yefre,-nov) Instituta pitaniya AMN SSSR, Moskva. (URINE-4NALYSIS AND PATHOLOGY) (CATECHOL) (COLORIMETRY) UIUBINA, A.Yu.; YEZIJOVA, Ye.N.
(deceased]4. K1,AYKO, Ye.A.; IUSLEUEKOVA, Ye.M.; RAZLEOV, SKUM, B*K, Influence of riboflavin on the course of experimental silicosis in white ret3. Vop. pit. 20 no.6:40-45 N-D 161. (MIRA .15:6) 1. Iz
Izistituts. pitariiya AMN SSSR, Moskva. (LIVIGS--DUST DISLASZS) (RI]30r-IAVII'.-FIIYSIOLOGIClil. *--'.7,I,','-'CT) -M. 0 O'Z C.-N. aon 5, a --*,: On I a 3 O'=. 0~;, yzz...i, 0' a o", th ir"N1 c ion
from f,="es to vvcc C;vidz-~ which ,:c,:iv,:il iddi:iomilly (a) "I vitl.,in We stu,!:, only 3j 0'. ~k!:tions on (1) the C'. 30,;Y and i.'s -ccc- V:ng- VC incrc.-C-~ -,*.-.cn fastcr. (21 7.-..: o,-
5c,:dd pzr 7, o. by the -,.6 reCc:vin.- VC W;is, On "ess. (3) Excc-,:Oa and E*.Ici-. in C.- - t-. o:. :i-.-. nins %V:zh Uinc vit:.- VC o-* all. :7~:s had -.*.icsc ind:,:Cs 'han Conz.-O., (4) C--..)
Z,ci'7 thc VC -----,s Several as a day as 0:' ~-.Gups. (5) Fcruil:-.7 and \Viz"-., of num.10cr 0-.,::ttcs VO ..... of you.-.~- rats -zts. (6' T! c cc~.-,-zcm, o-7 i:. z: "-.a agz of I y_-.- Conzrcli
. M all. -.-c,-.zs, IOU: =,Os, of :'l ki co.-.,-rol, -.-oup of r-l:s. t7) VC L'S 'ad y 4 ea.~ -a., Co.- ::3, a:S Z~, VC rL." h:d r-o arcz:zst duratlon of of i:-..I:v:dt;al .Cce: - ved ~,' zz", B, vi=
las 0,114Y Was 11,S, :*..-.*a- -az; o. Co.-.z:ol Z:";) *-Zd &.z 1=. 6't",-. international CQng-.oss on Nutrition, Edinburg 9-15 August 1963 I. ~ i . I- ~ I . ! - .. . I I -:, ~j. . . - , , " ~ 4 :,.:
, ) . , I : . . . I I ;~ '' 1 7 j '' . ~: , , . i : ~- m i . ! ! ~ '. '.', j - U : 11'. .; j ~', : ~.' ': - *-: ,..-. " , , ! ; " - . ~' ': ~: . ., ~ KRAYKOVA, T.G., kand.ekon.nauk Determination and
control of the actual time of a pro- duction cycle. Mashinostroitell no.11:38 165. (MIRA 18:11) XHAZANOV, V.S., kand.tekhn.nauk; KRAYMAN, T.Ya., inzh. A phot*bter for checking lighting engineering
platics. Svetatekhnika 9 no.lsl&21 A 163. (WRA 16:1) 1. Vaesoyuznyy svototekhnicheakly Institut. (Photometers) (Plastica-Measurement) KRAYUDLER, A. (Bukharest); UNGER, Tu. (Bukharest); VOLANIMIT, D.
(Bukharest) Effect of partial injury of the reticular formation of the brain stem on the higher nervous activity in dogs. Fiziol.zhur. 45 no.3:261- 270 159. (MIRA 12:11) (Rinn, con IT iomm. off. of
damage of brain dogs (Rua)) (BRAIN STICH, physiol. off. of reticular form. reflex activity in dogs stem reticular form. in lesions on conditioned (RY18)) | {"url":"https://www.cia.gov/readingroom/document/cia-rdp86-00513r000826320003-7","timestamp":"2024-11-10T19:04:36Z","content_type":"application/xhtml+xml","content_length":"84487","record_id":"<urn:uuid:6a9db6ef-aba3-4876-b951-3c4ddc487073>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00635.warc.gz"} |
□ Nested Class Summary
Nested Classes
Modifier and Type Class Description
static class Arc2D.Double This class defines an arc specified in double precision.
static class Arc2D.Float This class defines an arc specified in float precision.
□ Field Summary
Modifier and Field Description
static int CHORD The closure type for an arc closed by drawing a straight line segment from the start of the arc segment to the end of the arc segment.
static int OPEN The closure type for an open arc with no path segments connecting the two ends of the arc segment.
static int PIE The closure type for an arc closed by drawing straight line segments from the start of the arc segment to the center of the full ellipse and from that point to the end
of the arc segment.
□ Constructor Summary
Modifier Constructor Description
protected Arc2D() This is an abstract class that cannot be instantiated directly.
protected Arc2D(int type) This is an abstract class that cannot be instantiated directly.
□ Method Summary
All Methods Instance Methods Abstract Methods Concrete Methods
Modifier and Type Method Description
boolean contains(double x, double y) Determines whether or not the specified point is inside the boundary of the arc.
boolean contains(double x, double y, double w, double h) Determines whether or not the interior of the arc entirely contains the specified rectangle.
boolean contains(Rectangle2D r) Determines whether or not the interior of the arc entirely contains the specified rectangle.
boolean containsAngle(double angle) Determines whether or not the specified angle is within the angular extents of the arc.
boolean equals(Object obj) Determines whether or not the specified Object is equal to this Arc2D.
abstract double getAngleExtent() Returns the angular extent of the arc.
abstract double getAngleStart() Returns the starting angle of the arc.
Returns the arc closure type of the arc:
int getArcType() CHORD
, or
Rectangle2D getBounds2D() Returns the high-precision framing rectangle of the arc.
Point2D getEndPoint() Returns the ending point of the arc.
PathIterator getPathIterator(AffineTransform at) Returns an iteration object that defines the boundary of the arc.
Point2D getStartPoint() Returns the starting point of the arc.
int hashCode() Returns the hashcode for this Arc2D.
Determines whether or not the interior of the arc intersects the interior of the specified
boolean intersects(double x, double y, double w, double h) rectangle.
protected abstract Constructs a Rectangle2D of the appropriate precision to hold the parameters calculated to be the
Rectangle2D makeBounds(double x, double y, double w, double h) framing rectangle of this arc.
abstract void setAngleExtent(double angExt) Sets the angular extent of this arc to the specified double value.
void setAngles(double x1, double y1, double x2, double y2) Sets the starting angle and angular extent of this arc using two sets of coordinates.
void setAngles(Point2D p1, Point2D p2) Sets the starting angle and angular extent of this arc using two points.
abstract void setAngleStart(double angSt) Sets the starting angle of this arc to the specified double value.
Sets the starting angle of this arc to the angle that the specified point defines relative to the
void setAngleStart(Point2D p) center of this arc.
setArc(double x, double y, double w, double h, double angSt, double Sets the location, size, angular extents, and closure type of this arc to the specified double
abstract void angExt, int closure) values.
void setArc(Arc2D a) Sets this arc to be the same as the specified arc.
void setArc(Point2D loc, Dimension2D size, double angSt, double angExt, Sets the location, size, angular extents, and closure type of this arc to the specified values.
int closure)
void setArc(Rectangle2D rect, double angSt, double angExt, int closure) Sets the location, size, angular extents, and closure type of this arc to the specified values.
void setArcByCenter(double x, double y, double radius, double angSt, Sets the position, bounds, angular extents, and closure type of this arc to the specified values.
double angExt, int closure)
void setArcByTangent(Point2D p1, Point2D p2, Point2D p3, double radius) Sets the position, bounds, and angular extents of this arc to the specified value.
void setArcType(int type) Sets the closure type of this arc to the specified value: OPEN, CHORD, or PIE.
void setFrame(double x, double y, double w, double h) Sets the location and size of the framing rectangle of this Shape to the specified rectangular
☆ Methods declared in class java.awt.geom.RectangularShape
clone, contains, getBounds, getCenterX, getCenterY, getFrame, getHeight, getMaxX, getMaxY, getMinX, getMinY, getPathIterator, getWidth, getX, getY, intersects, isEmpty, setFrame, setFrame
, setFrameFromCenter, setFrameFromCenter, setFrameFromDiagonal, setFrameFromDiagonal
□ Field Detail
☆ OPEN
public static final int OPEN
The closure type for an open arc with no path segments connecting the two ends of the arc segment.
See Also:
☆ CHORD
public static final int CHORD
The closure type for an arc closed by drawing a straight line segment from the start of the arc segment to the end of the arc segment.
See Also:
☆ PIE
public static final int PIE
The closure type for an arc closed by drawing straight line segments from the start of the arc segment to the center of the full ellipse and from that point to the end of the arc segment.
See Also:
□ Constructor Detail
☆ Arc2D
protected Arc2D()
This is an abstract class that cannot be instantiated directly. Type-specific implementation subclasses are available for instantiation and provide a number of formats for storing the
information necessary to satisfy the various accessor methods below.
This constructor creates an object with a default closure type of OPEN. It is provided only to enable serialization of subclasses.
See Also:
Arc2D.Float, Arc2D.Double
☆ Arc2D
protected Arc2D(int type)
This is an abstract class that cannot be instantiated directly. Type-specific implementation subclasses are available for instantiation and provide a number of formats for storing the
information necessary to satisfy the various accessor methods below.
type - The closure type of this arc: OPEN, CHORD, or PIE.
See Also:
Arc2D.Float, Arc2D.Double
□ Method Detail
☆ getAngleStart
public abstract double getAngleStart()
Returns the starting angle of the arc.
A double value that represents the starting angle of the arc in degrees.
See Also:
☆ getAngleExtent
public abstract double getAngleExtent()
Returns the angular extent of the arc.
A double value that represents the angular extent of the arc in degrees.
See Also:
☆ getArcType
public int getArcType()
Returns the arc closure type of the arc:
, or
One of the integer constant closure types defined in this class.
See Also:
☆ getStartPoint
public Point2D getStartPoint()
Returns the starting point of the arc. This point is the intersection of the ray from the center defined by the starting angle and the elliptical boundary of the arc.
A Point2D object representing the x,y coordinates of the starting point of the arc.
☆ getEndPoint
public Point2D getEndPoint()
Returns the ending point of the arc. This point is the intersection of the ray from the center defined by the starting angle plus the angular extent of the arc and the elliptical boundary
of the arc.
A Point2D object representing the x,y coordinates of the ending point of the arc.
☆ setArc
public abstract void setArc(double x,
double y,
double w,
double h,
double angSt,
double angExt,
int closure)
Sets the location, size, angular extents, and closure type of this arc to the specified double values.
x - The X coordinate of the upper-left corner of the arc.
y - The Y coordinate of the upper-left corner of the arc.
w - The overall width of the full ellipse of which this arc is a partial section.
h - The overall height of the full ellipse of which this arc is a partial section.
angSt - The starting angle of the arc in degrees.
angExt - The angular extent of the arc in degrees.
closure - The closure type for the arc: OPEN, CHORD, or PIE.
☆ setArc
public void setArc(Point2D loc,
Dimension2D size,
double angSt,
double angExt,
int closure)
Sets the location, size, angular extents, and closure type of this arc to the specified values.
loc - The Point2D representing the coordinates of the upper-left corner of the arc.
size - The Dimension2D representing the width and height of the full ellipse of which this arc is a partial section.
angSt - The starting angle of the arc in degrees.
angExt - The angular extent of the arc in degrees.
closure - The closure type for the arc: OPEN, CHORD, or PIE.
☆ setArc
public void setArc(Rectangle2D rect,
double angSt,
double angExt,
int closure)
Sets the location, size, angular extents, and closure type of this arc to the specified values.
rect - The framing rectangle that defines the outer boundary of the full ellipse of which this arc is a partial section.
angSt - The starting angle of the arc in degrees.
angExt - The angular extent of the arc in degrees.
closure - The closure type for the arc: OPEN, CHORD, or PIE.
☆ setArc
public void setArc(Arc2D a)
Sets this arc to be the same as the specified arc.
a - The Arc2D to use to set the arc's values.
☆ setArcByCenter
public void setArcByCenter(double x,
double y,
double radius,
double angSt,
double angExt,
int closure)
Sets the position, bounds, angular extents, and closure type of this arc to the specified values. The arc is defined by a center point and a radius rather than a framing rectangle for the
full ellipse.
x - The X coordinate of the center of the arc.
y - The Y coordinate of the center of the arc.
radius - The radius of the arc.
angSt - The starting angle of the arc in degrees.
angExt - The angular extent of the arc in degrees.
closure - The closure type for the arc: OPEN, CHORD, or PIE.
☆ setArcByTangent
public void setArcByTangent(Point2D p1,
Point2D p2,
Point2D p3,
double radius)
Sets the position, bounds, and angular extents of this arc to the specified value. The starting angle of the arc is tangent to the line specified by points (p1, p2), the ending angle is
tangent to the line specified by points (p2, p3), and the arc has the specified radius.
p1 - The first point that defines the arc. The starting angle of the arc is tangent to the line specified by points (p1, p2).
p2 - The second point that defines the arc. The starting angle of the arc is tangent to the line specified by points (p1, p2). The ending angle of the arc is tangent to the line
specified by points (p2, p3).
p3 - The third point that defines the arc. The ending angle of the arc is tangent to the line specified by points (p2, p3).
radius - The radius of the arc.
☆ setAngleStart
public abstract void setAngleStart(double angSt)
Sets the starting angle of this arc to the specified double value.
angSt - The starting angle of the arc in degrees.
See Also:
☆ setAngleExtent
public abstract void setAngleExtent(double angExt)
Sets the angular extent of this arc to the specified double value.
angExt - The angular extent of the arc in degrees.
See Also:
☆ setAngleStart
public void setAngleStart(Point2D p)
Sets the starting angle of this arc to the angle that the specified point defines relative to the center of this arc. The angular extent of the arc will remain the same.
p - The Point2D that defines the starting angle.
See Also:
☆ setAngles
public void setAngles(double x1,
double y1,
double x2,
double y2)
Sets the starting angle and angular extent of this arc using two sets of coordinates. The first set of coordinates is used to determine the angle of the starting point relative to the
arc's center. The second set of coordinates is used to determine the angle of the end point relative to the arc's center. The arc will always be non-empty and extend counterclockwise from
the first point around to the second point.
x1 - The X coordinate of the arc's starting point.
y1 - The Y coordinate of the arc's starting point.
x2 - The X coordinate of the arc's ending point.
y2 - The Y coordinate of the arc's ending point.
☆ setAngles
public void setAngles(Point2D p1,
Point2D p2)
Sets the starting angle and angular extent of this arc using two points. The first point is used to determine the angle of the starting point relative to the arc's center. The second
point is used to determine the angle of the end point relative to the arc's center. The arc will always be non-empty and extend counterclockwise from the first point around to the second
p1 - The Point2D that defines the arc's starting point.
p2 - The Point2D that defines the arc's ending point.
☆ setArcType
public void setArcType(int type)
Sets the closure type of this arc to the specified value: OPEN, CHORD, or PIE.
type - The integer constant that represents the closure type of this arc: OPEN, CHORD, or PIE.
IllegalArgumentException - if type is not 0, 1, or 2.+
See Also:
☆ setFrame
public void setFrame(double x,
double y,
double w,
double h)
Sets the location and size of the framing rectangle of this
to the specified rectangular values. Note that the arc
partially inscribes
the framing rectangle of this
Specified by:
setFrame in class RectangularShape
x - the X coordinate of the upper-left corner of the specified rectangular shape
y - the Y coordinate of the upper-left corner of the specified rectangular shape
w - the width of the specified rectangular shape
h - the height of the specified rectangular shape
See Also:
☆ getBounds2D
public Rectangle2D getBounds2D()
Returns the high-precision framing rectangle of the arc. The framing rectangle contains only the part of this
that is in between the starting and ending angles and contains the pie wedge, if this
has a
closure type.
This method differs from the getBounds in that the getBounds method only returns the bounds of the enclosing ellipse of this Arc2D without considering the starting and ending angles of
this Arc2D.
the Rectangle2D that represents the arc's framing rectangle.
See Also:
☆ makeBounds
protected abstract Rectangle2D makeBounds(double x,
double y,
double w,
double h)
Constructs a Rectangle2D of the appropriate precision to hold the parameters calculated to be the framing rectangle of this arc.
x - The X coordinate of the upper-left corner of the framing rectangle.
y - The Y coordinate of the upper-left corner of the framing rectangle.
w - The width of the framing rectangle.
h - The height of the framing rectangle.
a Rectangle2D that is the framing rectangle of this arc.
☆ containsAngle
public boolean containsAngle(double angle)
Determines whether or not the specified angle is within the angular extents of the arc.
angle - The angle to test.
true if the arc contains the angle, false if the arc doesn't contain the angle.
☆ contains
public boolean contains(double x,
double y)
Determines whether or not the specified point is inside the boundary of the arc.
x - The X coordinate of the point to test.
y - The Y coordinate of the point to test.
true if the point lies within the bound of the arc, false if the point lies outside of the arc's bounds.
☆ intersects
public boolean intersects(double x,
double y,
double w,
double h)
Determines whether or not the interior of the arc intersects the interior of the specified rectangle.
x - The X coordinate of the rectangle's upper-left corner.
y - The Y coordinate of the rectangle's upper-left corner.
w - The width of the rectangle.
h - The height of the rectangle.
true if the arc intersects the rectangle, false if the arc doesn't intersect the rectangle.
See Also:
☆ contains
public boolean contains(double x,
double y,
double w,
double h)
Determines whether or not the interior of the arc entirely contains the specified rectangle.
x - The X coordinate of the rectangle's upper-left corner.
y - The Y coordinate of the rectangle's upper-left corner.
w - The width of the rectangle.
h - The height of the rectangle.
true if the arc contains the rectangle, false if the arc doesn't contain the rectangle.
See Also:
☆ contains
public boolean contains(Rectangle2D r)
Determines whether or not the interior of the arc entirely contains the specified rectangle.
Specified by:
contains in interface Shape
contains in class RectangularShape
r - The Rectangle2D to test.
true if the arc contains the rectangle, false if the arc doesn't contain the rectangle.
See Also:
☆ getPathIterator
public PathIterator getPathIterator(AffineTransform at)
Returns an iteration object that defines the boundary of the arc. This iterator is multithread safe. Arc2D guarantees that modifications to the geometry of the arc do not affect any
iterations of that geometry that are already in process.
at - an optional AffineTransform to be applied to the coordinates as they are returned in the iteration, or null if the untransformed coordinates are desired.
A PathIterator that defines the arc's boundary.
☆ equals
public boolean equals(Object obj)
Determines whether or not the specified Object is equal to this Arc2D. The specified Object is equal to this Arc2D if it is an instance of Arc2D and if its location, size, arc extents and
type are the same as this Arc2D.
equals in class Object
obj - an Object to be compared with this Arc2D.
true if obj is an instance of Arc2D and has the same values; false otherwise.
See Also:
Object.hashCode(), HashMap | {"url":"https://cr.openjdk.org/~iris/se/11/spec/pr/java-se-11-pr-spec/api/java.desktop/java/awt/geom/Arc2D.html","timestamp":"2024-11-11T23:32:01Z","content_type":"text/html","content_length":"66593","record_id":"<urn:uuid:a69e7189-e4b4-4e30-9da8-607e0b269e70>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00524.warc.gz"} |
No Title
214 - 1 If A does not equal 0, f is bounded away from 0 in a neighorhood of
c by continuity, hence since B = 0, lim(x goes to c) [f/g] does not
exist (blows up). There hint suggests the perhaps simpler argument
that the limit of the product is the product of the limits when the
limits exist, and anything times 0 is 0.
6a Taking the derivatives of the numerator and denominator produces
[1/(x+1)]/[cos(x)] which is 1/1 = 1
8b Taking the derivatives: [1/x]/[.5 x^(-.5)] = 2/x^(.5) which goes
to 0 as x goes to infinity.
226 - 3 We shall use C(n,k) to denote combinations. Since C(1,0) = C(1,1)
= 1, the case n=1 is just the product rule. assuming the equation
is true for n, taking the derivative of the left hand side entails
applying the product rule to each summand on the right hand side.
Terms involving f^(n+1-k)g^(k) will result only from taking the
derivative of f^(n-k)g^(k) and f^(n+1-k)g^(k-1). Since
C(n,k) + C(n,k-1) = C(n+1,k) the induction step has been demonstrated.
9 The remainder term in Taylor's theorem is
[f^(n+1)(c)][(x-x0)^(n+1)]/[(n+1)!]. Since the first factor (the
derivative) is bounded by 1, the fact that a^n/n! goes to zero for
a fixed provides the desired result.
14c Heuristically, you are looking at x + .17x^3 which is an odd function,
so there is no extremum at 0. Or we can cite Thm. 6.4.4, since the
the first non-zero derivative is the first derivative.
Russell Campbell
Sun Nov 16 1997 | {"url":"http://www.math.uni.edu/~campbell/real/hw12.html","timestamp":"2024-11-09T04:18:54Z","content_type":"text/html","content_length":"2304","record_id":"<urn:uuid:247935b9-12db-403a-bd1b-75bd3f05b04b>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00078.warc.gz"} |
applied physics - BLOGS
High-dimensional quantum transport enabled by nonlinear detection. In our concept, information is encoded on a coherent source and overlapped with a single photon from an entangled pair in a
nonlinear crystal for up-conversion by sum frequency generation, the latter acting as a nonlinear spatial mode detector. The bright source is necessary to achieve the efficiency required for
nonlinear detection. Information and photons flow in opposite directions: one of [the] Bob’s entangled photons is sent to Alice and has no information, while a measurement on the other in coincidence
with the upconverted photon establishes the transport of information across the quantum link. Alice need not know this information for the process to work, while the nonlinearity allows the state to
be arbitrary and unknown dimension and basis. Credit: Nature Communications (2023). DOI: 10.1038/s41467-023-43949-x
Topics: Applied Physics, Computer Science, Cryptography, Cybersecurity, Quantum Computers, Quantum Mechanics, Quantum Optics
Nature Communications published research by an international team from Wits and ICFO- The Institute of Photonic Sciences, which demonstrates the teleportation-like transport of "patterns" of
light—this is the first approach that can transport images across a network without physically sending the image and a crucial step towards realizing a quantum network for high-dimensional entangled
Quantum communication over long distances is integral to information security and has been demonstrated with two-dimensional states (qubits) over very long distances between satellites. This may seem
enough if we compare it with its classical counterpart, i.e., sending bits that can be encoded in 1s (signal) and 0s (no signal), one at a time.
However, quantum optics allow us to increase the alphabet and to securely describe more complex systems in a single shot, such as a unique fingerprint or a face.
"Traditionally, two communicating parties physically send the information from one to the other, even in the quantum realm," says Prof. Andrew Forbes, the lead PI from Wits University.
"Now, it is possible to teleport information so that it never physically travels across the connection—a 'Star Trek' technology made real." Unfortunately, teleportation has so far only been
demonstrated with three-dimensional states (imagine a three-pixel image); therefore, additional entangled photons are needed to reach higher dimensions.
'Teleporting' images across a network securely using only light, Wits University, Phys.org. | {"url":"https://blacksciencefictionsociety.com/profiles/blogs/list/tag/applied+physics?page=2","timestamp":"2024-11-11T14:20:13Z","content_type":"text/html","content_length":"816032","record_id":"<urn:uuid:3e60e433-fdf8-4ec9-8ca2-08d202cf05f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00500.warc.gz"} |
Method of Moments (MoM) is a numerical method that transforms Maxwell’s continuous integral equations into an approximate discrete formulation that requires inversion of a large matrix. Meshing is
the process of converting the continuous domain into the discrete domain for solving the equations. For discretizing surfaces, typically either triangles or rectangles are used. Antenna Toolbox™ uses
triangular element for meshing as it conforms better for arbitrary shaped surfaces. The triangles are used to approximate the surface current using the Rao-Wilton-Glisson (RWG) basis functions. To
get an accurate result, ensure that large number of triangles are present in the region where current variation is the highest. This region is typically either the corners in the antenna geometry or
at the point where the antenna is excited.
Automatic Meshing Mode
In Antenna Toolbox, the antenna structures mesh automatically based on the analysis frequency chosen. For analysis functions that accept a scalar frequency, the antennas mesh at that single frequency
to satisfy the minimum triangles required. Then the functions calculate the corresponding antenna parameter.
d = dipole;
In above example, the dipole is meshed at 75 MHz automatically before calculating the impedance at that value. Use the mesh command to view the meshed dipole. The number of triangles is 44.
For analysis functions that accept a frequency vector (bandwidth, efficiency, impedance, resonantFrequency, returnLoss, sparameters, vswr), each antenna meshes once at the highest frequency. Then,
the functions calculate the corresponding antenna parameters at all the frequencies in the range.
d = dipole;
In above example, the dipole is meshed at the highest frequency, 85 MHz automatically before calculating the impedance at all the frequencies from 75 to 85 MHz. Meshing at the highest frequency, 85
MHz, ensures maximum number of triangles and a smoother plot of the dipole impedance. Use the mesh command to view the meshed dipole. The number of triangles is 48, which is more than single
frequency meshing.
Manual Meshing Mode
You can choose to mesh the structure manually at the highest frequency of interest. Manual meshing is done by specifying the maximum edge length that is used for discretizing the structure. One
option is to specify the value to be one-tenth of the wavelength at the highest frequency of interest. For example:
sp = spiralArchimedean;
freq = 0.8e9:100e6:2.5e9;
lambda = 3e8/freq(end);
mesh (sp, MaxEdgeLength=lambda/10);
Alternatively, you can run an analysis at the highest frequency of interest and get the maximum edge length. Specify this maximum edge length using the mesh function as shown. This mesh is used for
all other calculations.
sp = spiralArchimedean;
freq = 0.8e9:100e6:2.5e9;
temp = axialRatio(sp,freq(end), 0, 90);
meshdata = mesh(sp);
mesh(sp, MaxEdgeLength=meshdata.MaxEdgeLength);
Use the MeshReader object to view the mesh parameter details.
Strip Meshing
For strip meshing, include at least 10 triangles per wavelength in a strip. This rule applies for structures such as dipoles, monopoles, and loops. Antenna Toolbox antenna meets the requirement
automatically, based on the analysis frequency specified. The structured mesh generated in such cases is shown:
Surface Meshing
For surface meshing, it is recommended that there be at least 100 elements per wavelength in a particular area. This rule applies to structures such as spirals, patches, and ground planes in general.
Antenna Toolbox antenna meets the requirement automatically, based on the analysis frequency specified. In these cases, a non-uniform mesh is generated as shown:
Larger number of triangles are added in regions with higher current density.
Meshing a Dielectric Substrate
For antennas using dielectrics and metals, Antenna Toolbox uses tetrahedrons to discretize the volume of the dielectric substrate.
Thickness of the dielectric substrate is measured with respect to the wavelength. A dielectric substrate with thickness less than or equal to 1/50th of the wavelength is a thin substrate. When you
mesh an antenna using dielectric in auto mode, thin substrates yield more accurate solutions.
A substrate with a thickness of 1/10th of the wavelength is a thick dielectric substrate. The method of moments solver requires 10 elements per wavelength to yield an accurate solution. Manual
meshing yields more accurate solutions for antennas using thick dielectric substrate, as it satisfies the 10 elements per wavelength criteria.
[1] Makarov, S.N. Antenna and EM Modeling with MATLAB, New York: Wiley & Sons, 2002 | {"url":"https://in.mathworks.com/help/antenna/ug/meshing.html","timestamp":"2024-11-12T00:30:17Z","content_type":"text/html","content_length":"73427","record_id":"<urn:uuid:7e0234f7-dfeb-4e56-9150-20087f5cd25d>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00042.warc.gz"} |
Homework 1 BLG 252E solved
In this assignment, you are going to implement classes for two mathematical expressions:
Polynomials and Vectors(do not confuse this with std::vector). These classes will hold
necessary information to represent the data, and will also support some of the mathematical operations of these expressions: In Polynomial class, you will implement polynomial
addition and multiplication and in Vector class, you will implement vector addition, scalar
multiplication and dot product. You will also implement a user interface for users to perform
operations with Polynomials and Vectors. More details are given below.
As a reminder, following section gives basic explanations of polynomials and vectors, and
how to perform the operations.
Polynomials and Vectors
A polynomial is an expression in the form of
n + an−1x
n−1 + …a2x
2 + a1x
1 + a0 (1)
where x is a variable and ak is a real number and called the coefficient of x
. Such polynomial is called “n-th” degree polynomial. For example: 2x
3 + x
2 + 5x + 12 is a third degree
polynomial. Coefficients can be zero, so x
4 + 2x is also a polynomial and degree of it is four.
Polynomial addition is an operation between two polynomials and is performed by adding
the coefficients of the variables with the same exponent. An example: (x
3 + 2x
2 + 3x + 4) +
3 + 2x − 2) = −x
3 + 2x
2 + 5x + 2
Polynomial multiplication is performed by multiplying every pair of terms between two
polynomials, such that axk
.bxl = (a+b)x
. An example: (x
2+2x+3).(x−1) = x
The definition of a vector is given as “an object that has a magnitude and direction”. In this
assignment, we are only working with real numbers, so in the sense of list representation, we
can represent a vector in an n-dimensional vector space with n real numbers, each number
showing the length of vector in one axis. For example, in a 4-dimensional vector space, a
vector can be represented as (a1, a2, a3, a4), where ak is a real number.
Vector Addition can be performed with two vectors in a vector space. Since the vectors
must be in the same vector space, we can not perform vector addition with different sized
vectors(we can extend the vectors but we will assume that it can not be performed in this
homework). For example: (2, 3, 1) + (1, 5, 11) = (3, 8, 12). (3, 4, 2) + (1, 2, 3, 4) can not be
performed, since their size is not equal.
Homework 1
BLG 252E
Scalar multiplication of a vector and a scalar is performed by multiplying all elements of
a vector with a scalar such that (a1, a2, …, an)xc = (ca1, ca2, …, can) where c is a real number.
Dot product between two vectors is performed by multiplying the elements that represents
the same dimension and summing them together such that (a1, a2, …, an).(b1, b2, …, bn) =
a1b1 + a2b2 + … + anbn. For example: (1, 2, 3).(2, 1, 3) = 2 + 2 + 9 = 13.
You can find more sources on the internet if you are not familiar with concepts above.
Implementation Details
As explained, you will implement 2 classes and a user interface. First of all you should
implement Vector and Polynomial classes in their respective header files, “Vector.h” and
Polynomial.h. In these 2 classes, following should be included:
• In the Vector class, size of the vector and the value array should be maintained.
Similarly in Polynomial class, degree and values should be maintained.
• Both of the classes should have a constructor that initializes objects with given values. They should also have a copy constructor. If your class uses arrays, your copy
constructor should allocate new memory for arrays in the new objects.
• You need to use operator overloading feature to implement the operations. + operation
should perform vector addition in Vector class and polynomial addition in Polynomial
class, while ∗ operation should perform multiplication operations. Note that Vector
has two forms of multiplication: scalar multiplication and dot product. You need to
overload ∗ operation such that it performs scalar multiplication when given input is
integer, and dot product when given input is a Vector. Also do not forget that, for dot
product and vector addition operations, size of the vectors must be the same!
• To print both classes, use operator overloading to overload “<<" operator. Make sure to print the object in a suitable format(Example is given in the "Screen Outputs" section). Make sure that terms
with higher degrees are printed out first and the terms with zero coefficient are not printed when printing Polynomial objects. If a term has a coefficient of 1, in that case coefficient should not
be printed(Not 1x 2 but x 2 ). • For both of the classes, you should also implement necessary getter methods as well. You should not need setter methods for this assignment, but if you find them
necessary for your implementation, you can implement them as well (However, you can NOT use setter methods to initialize values, that is constructor’s job). In your main program, you will maintain an
array for Polynomial objects and an array for Vector objects, both of which will be initialized with the values read from files. You will 3 Homework 1 BLG 252E be given 2 text files, one is called
Polynomial.txt and the other one is called Vector.txt. Both of these files’ first line indicate the number of objects they consist. Following lines give the attributes of the objects. For
polynomials, in each line, first number specifies degree of polynomial, and the following numbers specify the coefficients in decreasing order. Similarly, in vector file, first number specifies size
of vector, and the following numbers specify the elements of vector. For example, for polynomials, the line 3 2 3 1 4 represents the polynomial: 2x 3 + 3x 2 + x + 4, and for vectors, the line 3 1 10
2 represents the vector: (1, 10, 2). Here is a screenshot of Polynomial textfile similar to the one you will receive: You also need to implement a user interface that will be displayed on the
terminal. The interface should provide the user possible options to perform. Each option should be assigned a number, so the user enters the corresponding number to perform the desired action. Check
Screen Outputs section for clarity. Following options should be provided to user: • Print the list of vectors and polynomials. List index should start from 1. You will • Do a vector operation and
print the result. Vectors in the operation are given with their 4 Homework 1 BLG 252E index. 1 + 2 should perform addition with first and second vectors in the list. There will be 3 operation options
here: +, ∗, and .. • Do a polynomial operation and print the result. Similarly, polynomials are given with their index. With polynomials, there will be 2 operation options: + and ∗. • Exit the
program For both vector and polynomial operation, user should enter the operation in a single line. For example, when user is asked a vector operation, 1 + 2 should print the result of addition of
vectors1 and vector2 in the list. 1.2 should print dot product of vectors1 and vector2, while 1 ∗ 2 should print scalar multiplication of vector1 with integer 2. Submission Submit your homework files
through Ninova. Please zip and upload all your files using filename BLG252E_HW_1_STUDENTID.zip. You are going to submit the following files: 1. Polynomial.h 2. Vector.h 3. main.cpp In addition, you
can submit any other files if you found it necessary, but above should be sufficient. 5 Homework 1 BLG 252E Screen Outputs After you are through with everything, your program should look like below.
It would be better if it looks exactly like the screenshots, but printing parts might differ and they will not affect your grade negatively. However, in your program, user should be able to give
inputs as in the screenshot! 6 | {"url":"https://codeshive.com/questions-and-answers/homework-1-blg-252e-solved/","timestamp":"2024-11-14T08:45:36Z","content_type":"text/html","content_length":"106185","record_id":"<urn:uuid:6874a40f-64a4-489a-ab81-0b78ca8e27c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00594.warc.gz"} |
Adrian Rivera Cardoso
In this post we discuss auction basics. We will try to answer questions such as: Does the auction format matter? How should the auctioneer set it? Are some auction formats equivalent to each other?
How should advertisers bid?
No, the classic image of the $l_1$ and $l_2$ balls is not a good explanation...
We review how to describe ellipsoids and formulate an optimization problem to enclose sets of points inside an ellipsoid with minimal volume.
We explore whether the framework of Robust Optimization can help us create good portfolios when we are uncertain about future returns.
It's bad...
We review the definition of Sharpe ratio, a widely used metric to measure portfolio performance. We show, when it works, how to create portfolios that optimize for it, and its potentially fatal
We briefly review a common approach to portfolio construction based on balancing mean and variance. Then we establish a connection with what Kelly taught us about optimal gambling. Finally, with
simulations, I show how one can get wrecked using these tools under the presence of fat tails.
We derive and discuss the Kelly Criterion, a formula for betting optimally and achieving exponential wealth growth.
We explore the distribution of the volume of high dimensional spheres. | {"url":"http://www.adrianriv.com/blog/","timestamp":"2024-11-08T20:53:03Z","content_type":"text/html","content_length":"11853","record_id":"<urn:uuid:db9dd9ff-c6e7-40a1-be05-263fff4caeb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00669.warc.gz"} |
Cite as
Vikrant Ashvinkumar, Aaron Bernstein, Nairen Cao, Christoph Grunau, Bernhard Haeupler, Yonggang Jiang, Danupon Nanongkai, and Hsin-Hao Su. Parallel, Distributed, and Quantum Exact Single-Source
Shortest Paths with Negative Edge Weights. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 13:1-13:15, Schloss
Dagstuhl – Leibniz-Zentrum für Informatik (2024)
Copy BibTex To Clipboard
author = {Ashvinkumar, Vikrant and Bernstein, Aaron and Cao, Nairen and Grunau, Christoph and Haeupler, Bernhard and Jiang, Yonggang and Nanongkai, Danupon and Su, Hsin-Hao},
title = {{Parallel, Distributed, and Quantum Exact Single-Source Shortest Paths with Negative Edge Weights}},
booktitle = {32nd Annual European Symposium on Algorithms (ESA 2024)},
pages = {13:1--13:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-338-6},
ISSN = {1868-8969},
year = {2024},
volume = {308},
editor = {Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.13},
URN = {urn:nbn:de:0030-drops-210849},
doi = {10.4230/LIPIcs.ESA.2024.13},
annote = {Keywords: Parallel algorithm, distributed algorithm, shortest paths} | {"url":"https://drops.dagstuhl.de/search?term=Saks%2C%20Michael%20E.","timestamp":"2024-11-12T08:33:43Z","content_type":"text/html","content_length":"183460","record_id":"<urn:uuid:81f6136a-5bc2-43aa-894e-663400065057>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00874.warc.gz"} |
synasp.pl -- Synthesizing Answer Set Programs
Prototypical implementation of the approach described in Jan Heuer and Christoph Wernhard: "Synthesizing Strongly Equivalent Logic Programs: Beth Definability for Answer Set Programs via Craig
Interpolation in First-Order Logic", 2024.
Installation Notes. Runs in the PIE environment http://cs.christophwernhard.com/pie/ . Tested with SWI-Prolog 9.2.5 and PIE June 2024. Also Prover9/Mace4 is recommended.
Version: Tue Jun 18 15:06:49 2024
Argument Types
Logic program. List of rules, e.g.
[(p(X) ; not q(X) <-- r(X), not s(X)), (p(X) <-- true), (false <-- q(X)]
First-order formula in Prolog-readable PIE notation.
List of options. Only options processed directly by the predicates in this module are described. The list is passed to underlying layers that do the proving and interpolation.
- Christoph Wernhard
Form is the gamma transformation of Prog.
Succeeds if it can be verified that Prog1 and Prog2 are strongly equivalent.
Form2 is valid iff Form1 encodes a logic program.
Prog is obtained by decoding Form. Processed options include
Specifies the variant of CNF transformation. See code for values of CNF.
Specifies the partitioning method (Step 2 of the algorithm). See code for values of PM.
Specifies simplifications on the CNF before extraction. See code for values of Simp.
FormH is the formula underlying the Craig-Lyndon interpolation for the LP-interplation of FormF and FormG.
Form is the formula of the formula of the latest interpolation task. It is asserted before prover invocation. Useful, e.g., for debugging.
FormH is an LP-interpolant of FormF and FormG.
Succeeds only if the underlying entailment can be proven. Enumerates alternate solutions in case options specify an interpolation method that enumerates different interpolants for different
ProgR is a definiens of ProgQ within ProgP in the vocabulary Vocab.
Succeeds only if the underlying entailment can be proven. Enumerates alternate solutions in case options specify an interpolation method that enumerates different interpolants for different
Vocab - A list of predicates or a term v(VP,VP1,VN) where VP, VP1 and VN are lists of predicates, corresponding to the position constraining corollary. If the forget option is set,
predicates disallowed in positives heads are VP cup VP1 and the predicates disallowed in negative bodies are VP1.
- See examples_synasp.pl for a compilation of useful options. Directly processed options include
Interpret Vocab as specifying the predicates that are not allowed.
Special formula simplification for interpolation arguments. Recognized values of Simp:
Options No special simplification.
Simplify the second interpolation argument such that it involves less distinct variables, typically resulting in less Skolem terms. Very useful with CMProver and Prover9.
Like 1 but the simplification is also applied to the first interolation argument.
Don't perform interpolation but instead return the input formulas to IP-interpolation.
Don't perform interpolation but instead return the input to Craig-Lyndon interpolation as implication.
Succeeds if it can be verified that Prog1 s-entails Prog2.
The underlying notion of entailment (called here "s-entails") can be specified as P ⊧ Q iff P and P∪Q are strongly equivalent.
ProgR is an s-interpolant of ProgP and ProgQ, i.e., a Craig-Lyndon interpolant of two logic programs with respect to s-entailment (see p_sentails/3).
Succeeds only if the underlying entailment can be proven. Enumerates alternate solutions in case options specify an interpolation method that enumerates different interpolants for different | {"url":"http://cs.christophwernhard.com/pie/asp/pldoc/synasp.html","timestamp":"2024-11-08T18:40:09Z","content_type":"text/html","content_length":"9592","record_id":"<urn:uuid:de8020fa-035e-4465-b1e7-fb06710a38d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00214.warc.gz"} |
seminars - K-theory and T-duality of topological phases
The fundamental role of topology in solid state physics was recognised with the Nobel prizes in 2016, 1998, and 1985. A remarkable phenomenon is the "bulk-boundary correspondence", whereby the
boundary of a material is able to holographically detect a seemingly invisible topological invariant which is stable to perturbations, thus paving the way for novel applications. In these lectures, I
will explain how K-theory, noncommutative geometry, C*-algebras, and index theory provide the mathematical framework for the physics of topological phases. I will also outline how physical ideas of
symmetry, T-duality and the bulk-boundary correspondence motivate some conjectures and generalisations for the mathematics.
Prerequisites: Some basic knowledge of algebraic topology, functional analysis, or operator algebras will be helpful. No detailed solid state physics background is needed, but some familiarity with
ideas from quantum mechanics is useful.
Suggested lecture topics:
1st lecture; 10:30 - 11:45
Brief history of topological phases, and mathematical preliminaries.
2nd lecture; 14::00 –15:00
First example: Toeplitz index theorem in the SSH model. Second example: Kane-Mele invariant and why "Real" mathematics is needed.
3rd lecture; 15:30 – 16:30
Wigner's theorem, generalised symmetries and Bott-Periodic Table of topological phases.
4th lecture; 13:30 — 14:15
T-duality, geometric Fourier transform, and the bulk-boundary correspondence.
5th lecture; 14:30 — 15:15
T-duality, geometric Fourier transform, and the bulk-boundary correspondence.
6th lecture; 15:30 - 16:15
Applications to hyperbolic and crystallographic topological phases.
7th lecture; 10:00-11:00
Semimetals, generalised monopoles, and differential topology | {"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&sort_index=room&order_type=desc&page=74&document_srl=778461","timestamp":"2024-11-10T12:46:12Z","content_type":"text/html","content_length":"46018","record_id":"<urn:uuid:af727855-51b6-47b0-a8af-e240e7301550>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00365.warc.gz"} |
almost everything about IMU navigation
Inertial Measurement Units (IMU) are ubiquitous devices that are present in many consumer electronic devices today such as smartphones, smart watches, quadrotor drones, vehicles, etc. These are
relatively inexpensive and can be a cheap way to know the position and orientation of a device. Their use can range from pedestrian tracking with cheap IMUs to maintaining state for rockets with
expensive and accurate IMUs. Therefore, I’ve been fascinated with IMU navigation for a while. In this writeup, I cover the basics required to understand inertial navigation for robot state
estimation, and also explain the motivation behind IMU pre-integration, the defacto way to deal with inertial measurements in visual-inertial SLAM.
Inertial Measurement Units (IMU) are typically used to track the position and orientation of an object body relative to a known starting position, orientation and velocity. Two configurations of
inertial navigation are common :
1. a stable platform system where the inertial unit is placed in the global coordinate frame and does not move along with the body, and
2. a strapdown navigation system where the inertial unit is rigidly attached to the moving object i.e., the IMU is in the body coordinate frame.
These devices typically contain
1. Gyroscopes which measure the angular velocity of the body, denoted \({}_b\omega_k\) for angular velocity of the body frame \(b\) at a time instant \(k\)
2. Accelerometers which measure the resultant linear acceleration acting on its body, typically denoted with \({}_b a_k\) similarly in the body frame \(b\) at time instant \(k\)
A (micro) micro lie-theory review
An expert reader may choose to skip this section, however this section is important to understand the underlying math and motivation behind IMU pre-integration. This section is largely adapted from
Sola et al. .
A smooth manifold
is a curved and differentiable (no-spikes / edges) hyper-surface embedded in a higher dimension that locally resembles a linear space \(\mathbb{R}^n\). Examples:
• 3D vectors with unit norm constraint form a spherical manifold in \(\mathbb{R}^3\)
• Rotations in a plane (2D vectors with unit norm constraint) form a circular manifold in \(\mathbb{R}^2\)
• 3D Rotations form a hyper-sphere (3-sphere) in \(\mathbb{R}^4\)
A Group \(\mathcal{G}\)
is a set with a composition operator \(\circ\) that for elements \(\mathcal{X}, \mathcal{Y}, \mathcal{Z} \in \mathcal{G}\) satisfies the following axioms:
• Closure under \(\circ: \mathcal{X} \circ \mathcal{Y} \in \mathcal{G}\)
• Identity \(\mathcal{E}: \mathcal{E} \circ \mathcal{X} = \mathcal{X} \circ \mathcal{E} = \mathcal{X}\)
• Inverse \(\mathcal{X}^{-1}: \mathcal{X}^{-1} \circ \mathcal{X} = \mathcal{X} \circ \mathcal{X}^{-1} = \mathcal{E}\)
• Associativity \((\mathcal{X} \circ \mathcal{Y}) \circ \mathcal{Z} = \mathcal{X} \circ (\mathcal{Y} \circ \mathcal{Z})\) Notice the omission of commutativity
Lie Group
A Lie Group is a smooth manifold that also satisfies the properties of a group. A Lie Group has identical curvature at every point on the manifold (imagine a circle for example).
Some examples:
• \(\mathbb{R}^n\) is a Lie group under the group composition operation of addition.
• The General Linear Real matrix group, the set of all \(n \times n\) invertible matrices \(\mathbb{GL}(n, \mathbb{R}) \subset \mathbb{R}^{n^2}\) is a Lie group under matrix multiplication
• The unit complex number group \(\text{S}^1: \mathbf{z} = \cos \theta + i \sin \theta = e^{i\theta}\) under complex multiplication forms a Lie group. The unit norm of the complex numbers forms a
1-sphere or circle in \(\mathbb{R}^2\)
• The three sphere \(\text{S}^3 \subset \mathbb{R}^4\) is a lie group, we identify with quaternions \(\mathbb{H} \triangleq \{\text{x}_0 + \text{x}_1 i + \text{x}_2 j + \text{x}_3 k \}\) (\(\mathbb
{H}\) read as Hamiltonian) under the quaternion multiplication forms a Lie group
2-sphere is not a Lie group
Note that the 2-sphere \(\text{S}^2\) is not a Lie Group, since we cannot define a group composition operator over it. To understand this, let us consider the hairy ball theorem , which roughly
states that if you consider a sphere with hair on it, an attempt to comb all the hair such that all of them are pointing in a certain direction will fail and there will exist at least one vanishing
point. More formally, there is no non-vanishing continous tangent space on \(\text{S}^2\). This implies that we cannot define a smooth function that can act as a group composition operator on this
differentiable manifold.
Lie Group Action
Elements of the Lie group can act on elements from other sets. For example, a unit quaternion \(\mathbf{q} \in \mathbb{H}\) acts on a vector \(\mathbf{x} \in \mathbb{R}^3\) through quaternion
multiplication to cause its rotation \(\mathbb{H} \times \mathbb{R}^3 \rightarrow \mathbb{R}^3 : \mathbf{q} \cdot \mathbf{x} \cdot \mathbf{q}^*\).
Tangent space and the Lie Algebra
Let \(\mathcal{X}(t)\) be a point on the Lie manifold \(\mathcal{M}\), then taking its time derivative we obtain \(\dot{\mathcal{X}} = \frac{d\mathcal{X}}{dt}\) which belongs to its tangent space at
\(\mathcal{X}\) (or roughly linearized at \(\mathcal{X}\)) denoted as \(T_\mathcal{X}\mathcal{M}\). Since we note that the lie group has the same curvature throughout the manifold, the tangent space
\(T_{\mathcal{X}}\mathcal{M}\) also has the same structure everywhere. In fact, by definition every Lie group of dimension \(n\) must have a tangent space described by \(n\) basis elements \(\{\text
{E}_1 \dots \text{E}_n\}\) (sometimes also called generators) for \(T_\mathcal{X}\mathcal{M}\).
For instance, the tangent space for the unit complex number group \(\text{S}^1\) is the tangent to a circle at any point forming a straight line i.e., \(\in \mathbb{R}^1\).
The Lie Algebra then is simply the tangent space of a Lie group – linearized – at the identity element \(\mathcal{E}\) of the group. Every Lie group \(\mathcal{M}\) has an associated lie algebra \(\
mathfrak{m} \triangleq T_\mathcal{E}\mathcal{M}\). The Lie algebra \(\mathfrak{m}\) is a vector space.
\(\wedge\) and \(\vee\) operators
We can also define functions that convert the tangent space elements between their vector space representation \(\mathbb{R}^m\) and their structured space \(\mathfrak{m}\) using the \(\text{hat}\)
and \(\text{vee}\) operators.
• \[\wedge: \mathbb{R}^m \rightarrow \mathfrak{m}; \tau \rightarrow \tau^\wedge = \sum_{i=1}^{m} \tau_i \text{E}_i\]
• \[\vee: \mathfrak{m} \rightarrow \mathbb{R}^m; \tau^{\wedge} \rightarrow (\tau^{\wedge})^{\vee} = \tau = \sum_{i=1}^{m} \tau_i \text{e}_i\]
where let’s signify \(\text{e}_i\) as the basis elements of \(\mathbb{R}^m\). In this article, I will largely attempt to not make use of these operators in an attempt to keep the math concise, and in
most cases the variant of Lie algebra used is apparent from context.
\(\mathbf{exp}\) and \(\mathbf{log}\) map
Now, we may define two operators to navigate between Lie group and the Lie algebra as follows:
• \(\text{exp}: T_\mathcal{X}\mathcal{M} \rightarrow \mathcal{M}\) a map that retracts (takes) elements on the tangent vector space to the Lie Group space exactly. Intuitively, the \(\text{exp}\)
operator wraps the tangent element onto the Lie group manifold.
• Similarly \(\text{log}: \mathcal{M} \rightarrow T_\mathcal{X}\mathcal{M}\) a map that takes elements on the group to its tangent vector space element.
The \(\text{exp}\) naturally arises when considering the time derivative, or an infinitesimal tangent increment \(v \in T_\mathcal{X}\mathcal{M}\) per unit time on the group manifold:
\( \frac{d\mathcal{X}}{dt} &= \mathcal{X}{v} \\ \frac{d\mathcal{X}}{\mathcal{X}} &= v~dt \\ \text{integrating} \implies \mathcal{X}(t) &= \mathcal{X}(0) \text{exp}(vt) \\ \implies \text{exp}(vt) &= \
mathcal{X}(0)^{-1}\mathcal{X}(t) \in \mathcal{M} \) i.e., \(\text{exp}(vt)\) is a group element.
Properties of the \(\text{exp}\) map
The \(\text{exp}\) map satisfies a few important properties which are useful to know
• \[\text{exp}((t + s) \tau) = \text{exp}(t \tau) \text{exp}(s \tau)\]
• \[\text{exp}(t\tau) = \text{exp}(\tau)^t\]
• \[\text{exp}(-\tau) = \text{exp}(\tau)^{-1}\]
• \[\text{exp}(\mathcal{X} \tau \mathcal{X}^{-1}) = \mathcal{X} \text{exp}(\tau) \mathcal{X}^{-1} \label{eq:adjoint_property}\]
Where the last property is quite important, which I understand as: the exponential map is a no-op for elements already on the Lie manifold
\(\mathbf{SO(3)}\) example
The group of rotations \(\mathbf{SO}(3)\) is a matrix group of size 9 operating on \(\mathbb{R}^3\), with the following constraints:
• It is invertible \(\implies \mathbf{SO(3)} \subseteq \mathbb{GL}(3)\)
• It is orthogonal i.e., it has a determinant of \(\pm\) 1 \(\implies \mathbf{SO}(3) \subseteq \mathbf{O}(3)\)
• It is special orthogonal i.e., determinant is strictly \(+1\) i.e., reflections are not possible.
For this group, we have the special orthogonality condition which can be written as:
\[\mathbf{R}^{-1}\mathbf{R} = \mathbf{I} = \mathbf{R}^\top \mathbf{R}\]
since \(\mathbf{R}^{-1} = \mathbf{R}^\top\). Now, to obtain the tangent space for this group, let’s take the time differential of this equation:
\[ \dot{\mathbf{R}}^\top \mathbf{R} + \mathbf{R}\dot{\mathbf{R}}^\top &= 0 \\ \implies \dot{\mathbf{R}}^\top \mathbf{R} &= -\mathbf{R}\dot{\mathbf{R}}^\top \\ \implies \dot{\mathbf{R}}^\top \mathbf
{R} &= - (\dot{\mathbf{R}}^\top \mathbf{R})^\top \]
This means that \(\dot{\mathbf{R}}^\top \mathbf{R}\) is skew-symmetric, and skew symmetric matrices always have the following form:
\[[\boldsymbol\omega]_\times = \begin{bmatrix}0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0\end{bmatrix}\]
Therefore we can write \(\dot{\mathbf{R}}^\top \mathbf{R}\) is of the form \([\boldsymbol\omega]_\times\) or
\( \dot{\mathbf{R}} = \mathbf{R}[\boldsymbol\omega]_\times \label{eq:so3_lie_algebra} \).
When \(\mathbf{R} = \mathbf{I}\), then \(\dot{\mathbf{R}} = [\boldsymbol\omega]_\times\), which consequently means that \([\boldsymbol\omega]_\times\) the space of skew symmetric matrices forms the
Lie algebra for \(\text{SO}(3)\). Finally, we observe that \([\boldsymbol\omega]_\times\) is 3 degrees of freedom by inspection, and that it can be represented as a linear combination of generators
as follows:
\[ [\boldsymbol\omega]_\times = \omega_x \begin{bmatrix}0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0\end{bmatrix} + \omega_y\begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0\end{bmatrix} + \omega_z \begin
{bmatrix}0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0\end{bmatrix} \label{eq:so3_basis} \]
if we denote these basis elements as \(\text{E}_x, \text{E}_y, \text{E}_z\) respectively, then we can denote \(\boldsymbol{\omega} = (\omega_x, \omega_y, \omega_z) \in \mathbb{R}^3\) as the vector
representation of the lie algebra.
Now, let us attempt to obtain a closed form expression for the \(\text{exp}\) map of \(\text{SO}(3)\). We see from equation (\(\ref{eq:so3_lie_algebra}\)), that we have a differential equation, where
\(\dot{\mathbf{R}} = \mathbf{R} [\boldsymbol{\omega}]_\times \in T_\mathbf{R}\text{SO}(3)\). For infinitesimal time increments \(\Delta t\), we can assume that \(\omega\) is constant, then we obtain
the solution to the ordinary differential equation as:
\[ \int \dot{\mathbf{R}} &= \int \mathbf{R}[\boldsymbol\omega]_\times \Delta t \\ \implies \mathbf{R}(t) &= \mathbf{R}_{0} \text{exp}([\boldsymbol\omega]_\times \Delta t) \]
If we start at the origin \(\mathbf{R}_0 = \mathbf{I}\), then we have \(\mathbf{R}(t) = \text{exp}([\boldsymbol{\omega}]_\times \Delta t)\).
Now since \(\boldsymbol\omega\) can also be represented as a vector element (see equation (\(\ref{eq:so3_basis}\))), we can define \(\boldsymbol{\theta} \triangleq \omega \Delta t = \mathbf{u} \theta
\in \mathbb{R}^3\), where \(\mathbf{u}\) is a unit vector denoting the axis of rotation, and \(\boldsymbol\theta\) denotes the rotation about said axis. It must be noted that 3-axis gyroscopes
measure this angular velocity \(\omega\).
Let us now expand the matrix exponential terms:
\[ \mathbf{R} = \text{exp}([\boldsymbol\theta]_\times) = \sum_k \frac{\theta^k}{k !}([\mathbf{u}]_\times)^k \]
Using the properties of skew symmetric matrices we see that \([\mathbf{u}]^0_\times = \mathbf{I}, [\mathbf{u}]^3_\times = -[\mathbf{u}]_\times, [\mathbf{u}]^3_\times = -[\mathbf{u}]^2_\times\). We
can thus rewrite the series above as
\[ \mathbf{R} &= \mathbf{I} + [\mathbf{u}]_\times \Bigl\{ \theta - \frac{1}{3!}\theta^3 + \frac{1}{5!}\theta^5 - \dots \Bigr\} + [\mathbf{u}]_\times^2 \Bigl\{ \frac{1}{2} \theta^2 - \frac{1}{4!}\
theta^4 + \frac{1}{6!}\theta^6 -\dots \Bigr\} \\ \mathbf{R} &= \mathbf{I} + [\mathbf{u}]_\times \sin \theta + [\mathbf{u}]^2_\times (1 - \cos \theta) \label{eq:rodrigues} \]
Where equation (\(\ref{eq:rodrigues}\)) is the closed form expression for the exponential map for \(\text{SO}(3)\) and also known as the rodrigues formula in literature.
\(\oplus\), \(\ominus\) and the Adjoint operators
\(\oplus\) and \(\ominus\) operators allow us to define increments on the Lie group. These combine the composition operator and the \(\text{exp}\) and \(\text{log}\) operator together. Because the
composition operation on Lie groups is not commutative, we obtain two variants, \(\text{right}\) and \(\text{left}\) operators
\[ \text{right}~\oplus &: \mathcal{Y} = \mathcal{X} \oplus {}^\mathcal{X}\tau \triangleq \mathcal{X} \circ \text{exp}({}^\mathcal{X}\tau^\wedge) \in \mathcal{M} \label{eq:right_oplus}\\ \text{right}~
\ominus &: {}^\mathcal{X}\tau = \mathcal{Y} \ominus \mathcal{X} \triangleq \text{log}(\mathcal{X}^{-1}\circ \mathcal{Y})^\vee \in T_\mathcal{X}\mathcal{M} \]
Notice that in the above operator \(\exp({}^\mathcal{X}\tau^\wedge)\) appears on the right side of the composition \(\circ\) operator. Note that \({}^\mathcal{X}\) denotes that the tangent element is
linearized at \(\mathcal{X}\) Similarly we have the \(\text{left}\) variants of the operators which include Lie algebra increments instead:
\[ \text{left}~\oplus &: \mathcal{Y} = {}^\mathcal{E}\tau \oplus \mathcal{X} \triangleq \text{exp}({}^\mathcal{E}\tau^\wedge) \circ \mathcal{X} \in \mathcal{M} \label{eq:left_oplus}\\ \text{left}~\
ominus &: {}^\mathcal{X}\tau = \mathcal{Y} \ominus \mathcal{X} \triangleq \text{log}(\mathcal{Y} \circ \mathcal{X}^{-1})^\vee \in T_\mathcal{E}\mathcal{M} \]
While there is a clear distinction in the operand order for right and left \(\oplus\) operator, the \(\ominus\) operator is quite ambiguous. Figure \(\ref{fig:oplus_ominus}\) illustrates the order of
operations on a manifold.
Now, if we observe that when traversing on the Lie group one can reach an element \(\mathcal{Y} \in \mathcal{M}\) via two different trajectories as in equation (\(\ref{eq:right_oplus}, \ref
{eq:left_oplus}\)), we obtain the following:
\[ \mathcal{Y} = \mathcal{X} \oplus {}^\mathcal{X}\tau &= {}^\mathcal{E}\tau \oplus \mathcal{X} \\ \text{exp}({}^\mathcal{E}\tau^\wedge) \circ \mathcal{X} &= \mathcal{X} \circ \text{exp}({}^\mathcal
{X}\tau^\wedge) \\ \text{exp}({}^\mathcal{E}\tau^\wedge) &= \mathcal{X} \circ \text{exp}({}^\mathcal{X}\tau^\wedge) \circ \mathcal{X}^{-1} \\ &= \text{exp}(\mathcal{X} {}^\mathcal{X}\tau^\wedge \
mathcal{X}^{-1})~~~\text{Using property of exp map \ref{eq:adjoint_property}} \\ \implies {}^\mathcal{E}\tau^\wedge = \mathcal{X} {}^\mathcal{X}\tau^\wedge \mathcal{X}^{-1} \label{eq:adjoint} \]
Equation \(\ref{eq:adjoint}\) can be used to convert a tangent element linearized at a point \(\mathcal{X}\) on the Lie group to the Lie algebra quite easily, this operation is called as the adjoint
\[\text{Ad}_\mathcal{X}: \mathfrak{m} \rightarrow \mathfrak{m}; \tau^\wedge = \text{Ad}_\mathcal{X}(\tau^\wedge) = \mathcal{X} \tau^\wedge \mathcal{X}^{-1}.\]
The adjoint operator is quite heavily utilized to obtain some of the IMU pre-integration results, and therefore is quite important to understand.
Navigation with ideal IMU measurements
As eluded to in the Introduction, an ideal Inertial Measurement Unit (IMU) mainly contains two sensors Gyroscope and Accelerometer and sometimes optionally a magnetometer. To understand navigation
with IMUs, let us first consider an ideal IMU, that can make ideal measurements accurately and precisely, without any noise. Let us specifically consider its operation in a time window between \(i\)
and \(j\), and denote any arbitrary time instant within this window as \(k\). The IMU is traveling along a trajectory in the specified time window and it is both rotating and translating in space. An
illustration is given in Figure 1.
Gyroscope and inertial orientation
Let us first consider the rotation of the device. An ideal gyroscope measures the angular velocity or how quickly an object is rotating about an axis, as \({}_b \omega_k\) in the body frame \(b\) and
time \(k\).
In Figure 1, as the IMU travels along the trajectory from \(i\) to \(j\), the axis of rotation changes continuously. On discretizing the time window and making the piece-wise linear approximation, we
can assume that the axis of rotation remains fixed between two timesteps. Then for an instantaneous angular velocity measurement \(\boldsymbol\omega_k\) at time \(k\), the total angular change in
rotation is \(\omega_k \Delta t_k^{k+1} \in \mathfrak{so}(3)\). Subsequently we can obtain the relative rotation for the time period \(k\) and \(k+1\) using the exponential map as follows: \(\Delta\
mathbf{R}_k^{k+1} = \text{Exp}(\omega_k \Delta t_k^{k+1})\).
Now assuming that the discretization \(\Delta t\) is equal, we can then compose the rotational changes between each \(\Delta t\):
\[ {}_W \mathbf{R}_j &= {}_W \mathbf{R}_i \text{Exp}(\omega_i \Delta t_i) \dots \text{Exp}(\omega_k \Delta t_k)\dots \text{Exp}(\omega_j \Delta t_j) \\ \implies {}_W \mathbf{R}_j &= {}_W \mathbf{R}_i
\prod_{k=i}^{j}\text{Exp}(\omega_k \Delta t_k) \]
Accelerometer and inertial velocity and position
Similarly, the accelerometer measures the resultant linear acceleration applied on the robot body. This typically includes any external acceleration that is applied on the body \({}_b \mathbf{a}_k\)
and the acceleration due to gravity on the body \((\mathbf{R}_k^W)^\top {}_W \mathbf{g}\). Therefore the total measurement made by an accelerometer is \({}_b \mathbf{a}_k + (\mathbf{R}_k^W)^\top {}_W
Gravity convention
We understand from highschool physics that an object at rest experiences a normal force equal and opposite to the gravitational force in the world frame \(-{}_W \mathbf{g}\). TODO
Given the initial velocity of the object at \(i\), we can compute the final velocity of the IMU at \(j\) by integrating the instantaneous acceleration measurements \({}_b \mathbf{a}_k\) over the time
differences in a similar fashion as for orientation:
\[ \mathbf{v}_j = \mathbf{v}_i + \sum_{k=i}^{j-1} \mathbf{R}_b^W(({}_b\mathbf{a}_k + (\mathbf{R}_b^W)^\top {}_W \mathbf{g})\Delta t_k) \\ \mathbf{v}_j = \mathbf{v}_i + {}_W \mathbf{g} \Delta t_i^j +
\sum_{k=i}^{j-1} \mathbf{R}_b^W(({}_b\mathbf{a}_k)\Delta t_k \\ \]
Then, to obtain the position of the body at time \(j\) we need to integrate the velocities at each time instants \(k\) accordingly as below:
Navigation with real IMU measurements
Why do we care about pre-integration?
Bias estimation and IMU initialization
If you found this useful, please cite this as:
Sharma, Akash (Jul 2024). almost everything about IMU navigation. https://akashsharma02.github.io.
or as a BibTeX entry:
title = {almost everything about IMU navigation},
author = {Sharma, Akash},
year = {2024},
month = {Jul},
url = {https://akashsharma02.github.io/blog/2024/imu-navigation/} | {"url":"https://akashsharma02.github.io/blog/2024/imu-navigation/","timestamp":"2024-11-10T07:33:59Z","content_type":"text/html","content_length":"35015","record_id":"<urn:uuid:489dd49a-b352-4207-a27b-19ae426b04e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00245.warc.gz"} |
35. In Figure, a square OABC is inscribed in a quadrant OPBQ. I... | Filo
Question asked by Filo student
35. In Figure, a square is inscribed in a quadrant . If , find the area of the shaded region. (Use ). Ans :
Not the question you're searching for?
+ Ask your question
Video solutions (1)
Learn from their 1-to-1 discussion with Filo tutors.
26 mins
Uploaded on: 12/24/2022
Was this solution helpful?
Found 7 tutors discussing this question
Discuss this question LIVE for FREE
10 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on All topics
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text 35. In Figure, a square is inscribed in a quadrant . If , find the area of the shaded region. (Use ). Ans :
Updated On Dec 24, 2022
Topic All topics
Subject Mathematics
Class Class 11
Answer Type Video solution: 1
Upvotes 107
Avg. Video Duration 26 min | {"url":"https://askfilo.com/user-question-answers-mathematics/35-in-figure-a-square-is-inscribed-in-a-quadrant-if-find-the-33353135343333","timestamp":"2024-11-07T09:29:27Z","content_type":"text/html","content_length":"223729","record_id":"<urn:uuid:c8d95171-bb64-48d9-96d8-2630078b7902>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00852.warc.gz"} |
Average collection period, or days' receivables
Definition of Average collection period, or days' receivables
Average collection period, or days' receivables
The ratio of accounts receivables to sales, or the total
amount of credit extended per dollar of daily sales (average AR/sales * 365).
Related Terms:
a generalization formula invented by Abrams that is the present value of regular but noncontiguous cash flows that have constant growth to perpetuity.
The annual rate of return that when compounded t times, would have
given the same t-period holding return as actually occurred from period 1 to period t.
Arithmetic mean return.
An arithmetic mean of selected stocks intended to represent the behavior of the market or some
component of it. One good example is the widely quoted Dow Jones Industrial average, which adds the
current prices of the 30 DJIA's stocks, and divides the results by a predetermined number, the divisor.
The average project earnings after taxes and depreciation divided by the average
book value of the investment during its life.
The weighted-average age of all of the firm's outstanding invoices.
A firm's required payout to the bondholders and to the stockholders expressed as a
percentage of capital contributed to the firm. average cost of capital is computed by dividing the total
required cost of capital by the total amount of contributed capital.
Also referred to as the weighted-average life (WAL). The average number of years that each
dollar of unpaid principal due on the mortgage remains outstanding. average life is computed as the weighted average time to the receipt of all future cash flows, using as the weights the dollar
amounts of the principal
The average time to maturity of securities held by a mutual fund. Changes in interest rates
have greater impact on funds with longer average life.
An estimation of price that uses the average or representative price of a
large number of trades.
The ratio of the average cash inflow to the amount invested.
Taxes as a fraction of income; total taxes divided by total taxable income.
The time that elapses between when a check is deposited into a bank account and when the funds are available to the depositor, during which period the bank is collecting payment from the payer's
The negative float that is created between the time when you deposit a check in your account
and the time when funds are made available.
The percentage of a given month's sales collected during the month of sale and each
month following the month of sale.
Collection policy
Procedures followed by a firm in attempting to collect accounts receivables.
Compounding period
The length of the time period (for example, a quarter in the case of quarterly
compounding) that elapses before interest compounds.
Credit period
The length of time for which the customer is granted credit.
Days in receivables
average collection period.
Days' sales in inventory ratio
The average number of days' worth of sales that is held in inventory.
Days' sales outstanding
average collection period.
Discount period
The period during which a customer can deduct the discount from the net amount of the bill
when making payment.
Discounted payback period rule
An investment decision rule in which the cash flows are discounted at an
interest rate and the payback rule is applied on these discounted cash flows.
Dow Jones industrial average
This is the best known U.S.index of stocks. It contains 30 stocks that trade on
the New York Stock Exchange. The Dow, as it is called, is a barometer of how shares of the largest
U.S.companies are performing. There are thousands of investment indexes around the world for stocks,
bonds, currencies and commodities.
Evaluation period
The time interval over which a money manager's performance is evaluated.
Holding period
Length of time that an individual holds a security.
Holding period return
The rate of return over a given period.
Moving average
Used in charts and technical analysis, the average of security or commodity prices
constructed in a period as short as a few days or as Long as several years and showing trends for the latest
interval. As each new variable is included in calculating the average, the last variable of the series is deleted.
Multiperiod immunization
A portfolio strategy in which a portfolio is created that will be capable of
satisfying more than one predetermined future liability regardless if interest rates change.
Net period
The period of time between the end of the discount period and the date payment is due.
Neutral period
In the Euromarket, a period over which Eurodollars are sold is said to be neutral if it does not
start or end on either a Friday or the day before a holiday.
Receivables balance fractions
The percentage of a month's sales that remain uncollected (and part of
accounts receivable) at the end of succeeding months.
Receivables turnover ratio
Total operating revenues divided by average receivables. Used to measure how
effectively a firm is managing its accounts receivable.
Simple moving average
The mean, calculated at any time over a past period of fixed length.
Subperiod return
The return of a portfolio over a shorter period of time than the evaluation period.
T-period holding-period return
The percentage return over the T-year period an investment lasts.
Waiting period
Time during which the SEC studies a firm's registration statement. During this time the firm
may distribute a preliminary prospectus.
Weighted average cost of capital
Expected return on a portfolio of all the firm's securities. Used as a hurdle
rate for capital investment.
Weighted average coupon
The weighted average of the gross interest rate of the mortgages underlying the
pool as of the pool issue date, with the balance of each mortgage used as the weighting factor.
Weighted average life
See:average life.
Weighted average maturity
The WAM of a MBS is the weighted average of the remaining terms to maturity
of the mortgages underlying the collateral pool at the date of issue, using as the weighting factor the balance
of each of the mortgages as of the issue date.
Weighted average remaining maturity
The average remaining term of the mortgages underlying a MBS.
Weighted average portfolio yield
The weighted average of the yield of all the bonds in a portfolio.
Workout period
Realignment period of a temporary misaligned yield relationship that sometimes occurs in
fixed income markets.
(also called average collection period). The number of days of net sales that are tied up in credit sales (accounts receivable) that haven’t been collected yet.
An inventory valuation method that calculates a weighted average cost per unit for all the goods available for sale.
Multiplying that figure by the total units in ending inventory gives you the inventory’s value.
Accounting period
The period of time for which financial statements are produced – see also financial year.
Period costs
The costs that relate to a period of time.
Weighted average cost of capital
See cost of capital.
Periodic inventory system
An inventory system in which the balance in the Inventory account is adjusted for the units sold only at the end of the period.
Weighted average
A method of accounting for inventory.
weighted-average cost of capital
Weighted means that the proportions of
debt capital and equity capital of a business are used to calculate its
average cost of capital. This key benchmark rate depends on the interest
rate(s) on its debt and the ROE goal established by a business. This is a
return-on-capital rate and can be applied either on a before-tax basis or
an after-tax basis. A business should earn at least its weighted-average
rate on the capital invested in its assets. The weighted-average cost-ofcapital
rate is used as the discount rate to calculate the present value
(PV) of specific investments.
Average Collection Period
average number of days necessary to receive cash for the sale of
a company's products. It is calculated by dividing the value of the
accounts receivable by the average daily sales for the period.
Payback Period
The number of years necessary for the net cash flows of an
investment to equal the initial cash outlay
Weighted Average Cost of Capital (WACC)
The weighted average of the costs of the capital components
(debt, preferred stock, and common stock)
compounding period
the time between each interest computation
dollar days (of inventory)
a measurement of the value of inventory for the time that inventory is held
payback period
the time it takes an investor to recoup an
original investment through cash flows from a project
period cost
cost other than one associated with making or acquiring inventory
periodic compensation
a pay plan based on the time spent on the task rather than the work accomplished
weighted average cost of capital
a composite of the cost of the various sources of funds that comprise a firm’s capital structure; the minimum rate of return that must be earned on new investments so as not to dilute shareholder
weighted average method (of process costing)
the method of cost assignment that computes an average cost per
equivalent unit of production for all units completed during
the current period; it combines beginning inventory units
and costs with current production and costs, respectively,
to compute the average
Moving average
A price average that is adjusted by adding other
parametrically determined prices over some time period.
Moving-averages chart
A financial chart that plots leading and lagging
moving averages for prices or values of an asset.
Odd first or last period
Fixed-income securities may be purchased on dates
that do not coincide with coupon or payment dates. The length of the first and
last periods may differ from the regular period between coupons, and thus the
bond owner is not entitled to the full value of the coupon for that period.
Instead, the coupon is pro-rated according to how long the bond is held during
that period.
Average inventory
The beginning inventory for a period, plus the amount at the end of
the period, divided by two. It is most commonly used in situations in which just
using the period-end inventory yields highly variable results, due to constant and
large changes in the inventory level.
Moving average inventory method
An inventory costing methodology that calls for the re-calculation of the average cost of all parts in stock after every purchase.
Therefore, the moving average is the cost of all units subsequent to the latest purchase,
divided by their total cost.
Reporting period
The time period for which transactions are compiled into a set of financial statements.
average tax rate
Total taxes owed divided by total income.
collection policy
Procedures to collect and monitor receivables.
Dow Jones Industrial Average
Index of the investment performance of a portfolio of 30 “blue-chip” stocks.
payback period
Time until cash flows recover the initial investment of the project.
weighted-average cost of capital (WACC)
Expected rate of return on a portfolio of all the firm’s securities, adjusted for tax savings due to interest payments.
Average Propensity to Consume
Ratio of consumption to disposable income. See also marginal propensity to consume.
Average Propensity to Save
Ratio of saving to disposable income. See also marginal propensity to save.
Accounts Payable Days (A/P Days)
The number of days it would take to pay the ending balance
in accounts payable at the average rate of cost of goods sold per day. Calculated by dividing
accounts payable by cost of goods sold per day, which is cost of goods sold divided by 365.
Accounts Receivable Days (A/R Days)
The number of days it would take to collect the ending
balance in accounts receivable at the year's average rate of revenue per day. Calculated as
accounts receivable divided by revenue per day (revenue divided by 365).
Average-Cost Inventory Method
The inventory cost-flow assumption that assigns the average
cost of beginning inventory and inventory purchases during a period to cost of goods sold and
ending inventory.
Average Amortization Period
The average useful life of a company's collective amortizable asset base.
Days Statistics
Measures the number days' worth of sales in accounts receivable (accounts receivable
days) or days' worth of sales at cost in inventory (inventory days). Sharp increases in these measures
might indicate that the receivables are not collectible and that the inventory is not salable.
Extended Amortization Period
An amortization period that continues beyond a long-lived asset's economic useful life.
Extended Amortization Periods
Amortizing capitalized expenditures over estimated useful lives that are unduly optimistic.
Inventory Days
The number of days it would take to sell the ending balance in inventory at the
average rate of cost of goods sold per day. Calculated by dividing inventory by cost of goods sold
per day, which is cost of goods sold divided by 365.
Periodic inventory
A physical inventory count taken on a repetitive basis.
Grace Period
A specific period of time after a premium payment is due during which the policy owner may make a payment, and during which, the protection of the policy continues. The grace period usually ends in
30 days.
Collection Department
An internal department within a company staffed by specialists in collecting past due accounts or accounts receivable.
Critical Growth Periods
Times in a company's history when growth is essential and without which survival of the business might be in jeopardy.
Full Credit Period
The period of trade credit given by a supplier to its customer.
Grace Period
Length of time during which repayments of loan principal are excused. Usually occurs at the start of the loan period.
Risk of Collection
Chance that a borrower or trade debtor will not repay an obligation as promised.
Weighted Average Cost of Capital (WACC)
A weighted average of the component costs of debt, preferred shares, and common equity. Also called the composite cost of capital.
Annuity Period
The time between each payment under an annuity.
Waiting Period (Credit Insurance)
A specific time that must pass following the onset of a covered disability before any benefits will be paid under a creditor disability policy. (Also known as an elimination period).
Related to : financial, finance, business, accounting, payroll, inventory, investment, money, inventory control, stock trading, financial advisor, tax advisor, credit. | {"url":"http://www.finance-lib.com/financial-term-average-collection-period-or-days-receivables.html","timestamp":"2024-11-09T07:24:10Z","content_type":"text/html","content_length":"25951","record_id":"<urn:uuid:f2c10400-6ce2-4547-93a8-aeadd0ffa926>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00046.warc.gz"} |
fasstr Users Guide
fasstr, the Flow Analysis Summary Statistics Tool for R, is a set of R functions to tidy, summarize, analyze, trend, and visualize streamflow data. This package summarizes continuous daily mean
streamflow data into various daily, monthly, annual, and long-term statistics, completes trending and frequency analyses, with outputs in both table and plot formats.
This vignette documents the usage of the many functions and arguments provided in fasstr. This vignette is a high-level adjunct to the details found in the various function documentations (see help
(package = "fasstr") for documentation). You’ll learn how to install the package and a HYDAT database, input data into fasstr functions, add relevant columns and rows to daily data, screen data for
outliers and missing dates, calculate and visualize various summary statistics, trend annual flows, and complete volume frequency analyses.
A quick reference PDF cheat sheet is also available for fasstr usage of functions and arguments. It can be downloaded here.
This guide contains the following sections to help understand the usage of the fasstr functions and arguments:
1. Getting Started
2. Flow Data Inputs
3. Function Outputs
4. Data Tidying (fill_* and add_* functions)
5. Data Screening (screen_* functions)
6. Calculating Statistics (calc_* functions)
7. Analyses (compute_* functions)
8. Customizing Functions - Data filtering and options
9. Writing Tables and Plots (write_* functions)
1. Getting Started
Installing and loading fasstr
You can install fasstr directly from CRAN:
To install the development version from GitHub, use the remotes package then the fasstr package:
if(!requireNamespace("remotes")) install.packages("remotes")
Several other packages will be installed with fasstr. These include tidyhydat for downloading Water Survey of Canada hydrometric data, zyp for trending, ggplot2 for creating plots, and tidyr and
dplyr for data wrangling and summarizing, amongst others.
To call fasstr functions you can either load the package using the library() function or access a specific function using a double-colon (e.g. fasstr::calc_daily_stats()). fasstr exports the pipe, %>
%, so it can be used for tidy workflows.
Downloading HYDAT
To use the station_number argument of the fasstr functions, you will need to download a Water Survey of Canada HYDAT database to your computer using the following tidyhydat function. The function
will save the database on your computer and know where to find it each time you open R or RStudio. Due to the size of the database, it will take several minutes to download.
As HYDAT is updated frequently you may want to periodically update it yourself using the function above. You can check the local version using the following code:
2. Flow Data Inputs
All functions in fasstr require a daily mean streamflow data set from one or more hydrometric stations. Long-term and continuous data sets are preferred for most analyses, but seasonal and partial
data can be used. Note that if partial data sets are used, NA‘s may be produced for certain statistics. Please see the ’Handling Missing Dates’ section in Section 8 for more information. Data is
provided to each function using one of the following arguments:
• data, as a data frame of daily flow values, or
• station_number, as a list of Water Survey of Canada HYDAT station numbers.
data (and dates, values, and groups)
Using the data option, a data frame of daily data containing columns of dates (YYYY-MM-DD in date format), values (mean daily discharge in cubic metres per second in numeric format), and, optionally,
grouping identifiers (character string of station names or numbers) is called. By default, the functions will look for columns identified as ‘Date’, ‘Value’, and ‘STATION_NUMBER’, respectively, to be
compatible with the HYDAT default columns. However, columns of different names can be identified using the dates, values, groups column arguments (ex. values = Yield_mm). The values of these
arguments are not required to be surrounded by quotes; both "Date" and Date will provide the appropriate column called “Date”. An example where groupings other than station numbers could be used
include certain time periods of a study for a single station (before, during, and after watershed experiment treatments or before and after the construction of a dam, appropriately identified in a
column). The following is an example of an appropriate data frame with default column names (STATION_NUMBER not required):
STATION_NUMBER Date Value
1 08NM116 1949-04-01 1.13
2 08NM116 1949-04-02 1.53
3 08NM116 1949-04-03 2.07
4 08NM116 1949-04-04 2.07
5 08NM116 1949-04-05 2.21
6 08NM116 1949-04-06 2.21
The following is an example fasstr function arguments if your daily data data frame has the default columns names (no need to list them):
The following is an example if your daily data data frame has non-default columns names of “Stations”, “Dates”, and “Flows”:
The data argument is listed first in the list of arguments for each function, so flow data frames can be passed onto fasstr functions using the pipe operator, %>%, without listing the data frame in a
tidy workflow.
Alternatively, you can directly extract flow data directly from a HYDAT database by listing station numbers in the station_number argument while leaving the data arguments blank. Data frames from
HYDAT also include ‘Parameter’ and ‘Symbol’ columns. The following is an example of listing stations:
calc_longterm_daily_stats(station_number = "08NM116")
calc_longterm_daily_stats(station_number = c("08NM116", "08NM242"))
This package allows for multiple stations (or other groupings) to be analyzed in many of the functions; provided they are identified using the groups column argument (defaults to STATION_NUMBER). If
named grouping column doesn’t exist or is improperly named then all values listed in the values column will be summarized.
3. Function Types and Outputs
fasstr provides various functions to help in streamflow analyses. They can be generally categorized into the following groups (with more details in the sections below):
• data tidying (to prepare data for analyses; add_* and fill_* functions),
• data screening (to look for outliers and missing data; screen_* functions),
• calculating summary statistics (long-term, annual, monthly and daily statistics; calc_*functions),
• computing analyses (volume frequency analyses and trending; compute_* functions),
• visualizing data (plotting the various statistics; plot_* functions), and
• writing data (to save your data and plots; write_* functions)
Tibble Data Frames
Functions that produce tables create them as tibble data frames. To facilitate the writing of the fasstr tibbles to a directory as .csv, .xls, or .xlsx files with some functionality of rounding
digits, the write_results() function can be used (see section 9 for more information).
ggplot2 Plots
Functions that produce plots create them as lists of ggplot2 objects. The use of ggplot2 plots allows for further customization of plots for the user (axis titles, colours, etc.). All plotting
functions produce lists to be consistent with table naming conventions of fasstr, allow multiple plots to be created with one function, and to easily allow the saving of multiple plots to a
directory. To assist with the saving of lists of plots, a provided function called write_plots() will directly save the list of plots within a directory or single PDF document, with the fasstr plot
objects names (see section 9 for more information). Individual plots can be subsetted from their lists using either the dollar sign, $ (e.g. one_plot <- plots$plotname), or double square brackets, [
] (e.g. one_plot <- plots[[plotname]] or one_plot <- plots[[1]]).
Some functions produce both tibbles and plots as lists and can be subsequently subsetted as desired.
4. Data Tidying Functions
There are several functions that are used to prepare your flow data set for your own analysis. These functions begin with add_ or fill_ and add columns or rows, respectively, to your flow data frame.
These functions include:
• fill_missing_dates() - fills in missing dates or dates with no flow values with NA
• add_date_variables() - add year, month, and day of year variables (and water years if selected)
• add_seasons() - add a column of seasons
• add_rolling_means() - add rolling n-day averages (e.g. 7-day rolling average)
• add_basin_area() - add a basin area column to daily flows
• add_daily_volume() - add daily volumetric flows (in cubic metres)
• add_daily_yield() - add daily water yields (in millimetres)
• add_cumulative_volume() - add daily cumulative volumetric flows on an annual basis (in cubic metres)
• add_cumulative_yield() - add daily cumulative water yields on an annual basis (in millimetres)
The functions are set up to easily incorporate the use of the pipe operator:
fill_missing_dates(station_number = "08HA011") %>%
add_date_variables() %>%
add_rolling_means(roll_days = 7)
STATION_NUMBER Date Parameter Value Symbol CalendarYear Month MonthName
1 08HA011 1960-01-01 Flow 62.9 E 1960 1 Jan
2 08HA011 1960-01-02 Flow 58.0 E 1960 1 Jan
3 08HA011 1960-01-03 Flow 54.9 E 1960 1 Jan
4 08HA011 1960-01-04 Flow 51.3 E 1960 1 Jan
5 08HA011 1960-01-05 Flow 47.3 <NA> 1960 1 Jan
6 08HA011 1960-01-06 Flow 46.7 <NA> 1960 1 Jan
WaterYear DayofYear Q7Day
1 1960 1 NA
2 1960 2 NA
3 1960 3 NA
4 1960 4 NA
5 1960 5 NA
6 1960 6 NA
Filling missing dates
To ensure that analyses do not skip over dates, the fill_missing_dates() function looks for gaps in dates and adds the dates and fills in the flow values with NA. It does not do any gap filling
(linear or correlations, for example), it assigns missing flow values with NA. It also fills dates to create complete start and end years. For example, if data starts in April, all flow values
starting from January will be filled with NA. The timing of the year depends on the water_year_start argument. When water_year_start is left blank, it will fill to complete calendar years (Jan-Dec).
If water_year_start is set to another month (numeric) then it will fill to complete water years of the desired year.
Run and compare the following lines to see how missing dates are filled:
# Very gappy (early years):
tidyhydat::hy_daily_flows(station_number = "08NM116")
# Gap filled with NA's
tidyhydat::hy_daily_flows(station_number = "08NM116") %>%
It is ideal to fill missing dates before using other add_* functions so dates added are not missing the other new date values.
Adding date variables and seasons
The add_date_variables() function adds useful dates columns for summarizing data. The function defaults include ‘CalendarYear’, ‘Month’ (numeric), ‘MonthName’ (month abbreviation; e.g. Jan),
‘WaterYear’ (year based on selected water_year_start), and ‘DayofYear’ (the day of year based on selected water_year_start from 1-365). The month of the start of the water year is chosen using the
water_year_start argument, which defaults to “1” for January.
Run and compare the following lines to see how the date columns are added:
# Just calendar year info
add_date_variables(station_number = "08NM116")
# If water years are required starting August (use month number)
add_date_variables(station_number = "08NM116",
water_year_start = 8)
The add_seasons() function adds a column of seasons identifiers called “Season”. The length of seasons, in months, is provided using the seasons_length argument. As seasons are grouped by months the
length of the seasons must be divisible into 12 with season lengths of 1, 2, 3, 4, 6, or 12 months. The start of the first season coincides with the start month of each year; ‘Jan-Jun’ for 6-month
seasons starting with calendar years or ‘Dec-Feb’ for 3-month seasons starting with water year starting in December. Run and compare the following lines to see how seasons columns are added:
# 2 seasons starting January
add_seasons(station_number = "08NM116",
seasons_length = 6)
# 4 seasons starting October
add_seasons(station_number = "08NM116",
water_year_start = 10,
seasons_length = 3)
# 4 Seasons starting December
add_seasons(station_number = "08NM116",
water_year_start = 12,
seasons_length = 3)
Adding rolling means
Adding rolling means (running means or averages) of daily data, can be done using the add_rolling_means() functions. Based on the selected “n” rolling days using the roll_days argument, a column for
each “n” will be added. One rolling mean column can be added by listing one number (e.g. roll_days = 7) or multiple columns can be added by listing each one (e.g. roll_days = c(3,7,30)). Each column
will be named “Q’n’Day” where n is the number (e.g. Q7Day or Q30Day).
Where the alignment of the rolling mean is compared to the date is important to know when analyzing data. The alignment, using the roll_align argument, determine the date at which the rolling means
• roll_align = "right" - the date will have the mean of that date’s flow value and the previous n-1 days
• roll_align = "left" - the date will have the mean of that date’s flow value and the next n-1 days
• roll_align = "center"
• odd numbered roll_days - date will have the mean of that date’s flow value and half of n-1 days before and half of n-1 days after
• even numbered roll_days - date will have the mean of that date’s flow and half of n days after, and the remaining before ((n/2)-1 days before the date) (i.e. the first of the middle two dates)
Odd roll_days example (column headers have alignment direction added):
Date Value Q5Day_left Q5Day_center Q5Day_right
1 1960-01-01 62.9 54.88 NA NA
2 1960-01-02 58.0 51.64 NA NA
3 1960-01-03 54.9 48.82 54.88 NA
4 1960-01-04 51.3 46.22 51.64 NA
5 1960-01-05 47.3 44.12 48.82 54.88
6 1960-01-06 46.7 42.36 46.22 51.64
Even roll_days example:
Date Value Q6Day_left Q6Day_center Q6Day_right
1 1960-01-01 62.9 53.51667 NA NA
2 1960-01-02 58.0 50.35000 NA NA
3 1960-01-03 54.9 47.66667 53.51667 NA
4 1960-01-04 51.3 45.31667 50.35000 NA
5 1960-01-05 47.3 43.18333 47.66667 NA
6 1960-01-06 46.7 41.53333 45.31667 53.51667
Adding basin areas
To add a column of basin areas, for viewing or analyzing, the add_basin_area() function can be used. The basin area will be extracted from HYDAT, if available, under two conditions where the
basin_area argument can be left blank:
• if the station_number argument is used
• if your data data frame has a grouping column consisting of HYDAT station numbers
If you would like to apply your own basin area size(s) or override the HYDAT areas, you use the basin_area argument in the following ways:
• for a single station or applying to all stations, list a single number (i.e. basin_area = 800)
• for different areas for multiple stations, you list each basin area for each station (i.e. basin_area = c("08NM116" = 800, "08NM242" = 4))
Run and compare the following lines to see how basin area columns are added:
# Using the station_number argument or data frame as HYDAT groupings
add_basin_area(station_number = "08NM116")
# Using the basin_area argument
add_basin_area(station_number = "08NM116",
basin_area = 800)
# Using the basin_area argument with multiple stations
add_basin_area(station_number = c("08NM116","08NM242"),
basin_area = c("08NM116" = 800, "08NM242" = 4))
Adding daily volumetric discharge or water yields
Converting daily mean discharge into other units can be useful for different analyses. Columns of total daily discharge converted from daily mean into volumetric flows, named “Volume_m3” in cubic
metres per second, or area-based water yields, named “Yield_mm” in millimetres, can be used using the add_daily_volume() and add_daily_yield() functions, respectively. Volumetric gives the total
volume per day, and the water yield gives the total water depth, provided an upstream drainage basin area is provided. Basin area can be provided using the basin_area argument, or if there is a
groups column of HYDAT station numbers in your data then it will automatically be extracted from HYDAT, if available. (see `adding basin areas above or section 8 for more information).
# Add a column of converted discharge (m3/s) into volume (m3)
add_daily_volume(station_number = "08NM116")
# Add a column of converted discharge (m3/s) into yield (mm), with HYDAT station groups
add_daily_yield(station_number = "08NM116")
# Add a column of converted discharge (m3/s) into yield (mm), with setting the basin area
add_daily_yield(station_number = "08NM116",
basin_area = 800)
Adding annual cumulative daily volumetric flows or water yields
These functions create a rolling cumulative of daily total flows on an annual basis, as volumetric flows, named “Cumul_Volume_m3” in cubic metres per second, or area-based water yields, named
“Cumul_Yield_mm” in millimetres. A total flow for a given a day is the sum of all previous days and that day, within a given year (Jan 15 cumulative flow value is the sum of all total flows from Jan
1-15). It restarts for each year (based on the starting month) and no values for a year are calculated if there is missing data for a given year as the total for a given year cannot be determined.
# Add a column of cumulative volumes (m3)
add_cumulative_volume(station_number = "08NM116")
# Add a column of cumulative yield (mm), with HYDAT station number groups
add_cumulative_yield(station_number = "08NM116")
# Add a column of cumulative yield (mm), with setting the basin area
add_cumulative_yield(station_number = "08NM116",
basin_area = 800)
By utilizing the data argument as the first one list, it enables the user to work with the tidying functions within a tidy ‘pipeline’ and can pass onto the other fasstr functions.
5. Data Screening Functions
If you are looking at some data for the first time, it may be useful to explore the data quality and availability. The following functions will help to explore the data:
• plot_flow_data() - plot daily mean streamflow
• plot_flow_data_symbols() - plot daily mean streamflow with their symbols
• screen_flow_data() - calculate annual summary and identify missing data
• plot_data_screening() - plot annual summary statistics for data screening
• plot_missing_dates() - plot annual and monthly missing dates
• plot_annual_symbols() - plot annual counts of symbols
To view the entire daily flow data set to view for gaps and outliers, or changes in flow over time, the plot_flow_data() function will plot all daily data in the data frame. The plot can be filtered
by years and dates.
When plotting multiple stations, they automatically produce a separate plot for each station. However, setting one_plot = TRUE will plot all stations on the same plot.
To view a flow time series data quality from their provided HYDAT symbols (qualifer symbols like E for estimate, B for under ice etc.), or custom symbols/categories from a column called “Symbol”, the
plot_flow_data() function will plot all daily data in the data frame. The plot can be filtered by years and dates.
The screen_flow_data() function provides an overview of the number of flow values per year and each month per year, along with annual minimums, maximums, means, and standard deviations to inspect for
outliers in the data.
STATION_NUMBER Year n_days n_Q n_missing_Q E_Symbol No_Symbol B_Symbol
1 08NM116 1949 365 183 182 10 173 0
2 08NM116 1950 365 183 182 0 183 0
3 08NM116 1951 365 183 182 0 183 0
4 08NM116 1952 366 183 183 0 183 0
5 08NM116 1953 365 183 182 0 183 0
6 08NM116 1954 365 183 182 0 183 0
A_Symbol Minimum Maximum Mean Median StandardDeviation Jan_missing_Q
1 0 0.623 49.3 7.771066 2.27 10.49771 31
2 0 0.623 52.1 7.760432 2.07 10.77839 31
3 0 0.623 49.3 8.991197 3.71 10.91531 31
4 0 0.850 50.7 10.277541 3.17 11.87987 31
5 0 0.340 62.3 8.303328 4.56 9.52511 31
6 0 0.566 36.2 11.281011 5.38 10.96519 31
Feb_missing_Q Mar_missing_Q Apr_missing_Q May_missing_Q Jun_missing_Q
Jul_missing_Q Aug_missing_Q Sep_missing_Q Oct_missing_Q Nov_missing_Q
To view the summary data in the screen_flow_data() function, the plot_data_screening() function will plot the annual minimums, maximums, means, medians, and standard deviations, with the point
coloured by data availability.
Use the plot_missing_dates() function to plot out the missing dates for each month of each year to view for data availability and gaps.
Use the plot_annual_symbols() function to plot the symbols on an annual basis to view the data quality and data availability. The default plots by day of year, but there are options to view annual
counts of symbols.
Warning: Removed 148 rows containing missing values or values outside the scale range
6. Functions for Calculating Statistics
The majority of the fasstr functions produce statistics over a certain time period, either long-term, annually, monthly, or daily. These statistics are produced using the calc_* functions and can be
visualized using their corresponding plot_* functions. The following sections are an overview of these functions.
Basic Summary Statistics
These functions calculate the means, medians, maximums, minimums, and percentiles (choose using the percentiles argument) of a flow data set:
• calc_longterm_daily_stats() - calculate the long-term and long-term monthly summary statistics based on daily mean flows
• calc_longterm_monthly_stats() - calculate the long-term annual and monthly summary statistics based on monthly mean flows
• calc_annual_stats() - calculate annual summary statistics
• calc_monthly_stats() - calculate annual monthly summary statistics
• calc_daily_stats() - calculate daily summary statistics
These basic statistics can also be viewed using their corresponding plotting functions:
• plot_longterm_daily_stats() - plot the long-term monthly summary statistics based on daily mean flows
• plot_longterm_monthly_stats() - plot the long-term monthly summary statistics based on annual monthly mean flows
• plot_annual_stats() - plot annual summary statistics
• plot_monthly_stats() - plot annual monthly summary statistics
• plot_daily_stats() - plot daily summary statistics
This function produced flow duration curves:
• plot_flow_duration() - plot flow duration curves
These other long-term functions summarize the data over the entire record:
• calc_longterm_mean() - calculate the long-term mean annual discharge
• calc_longterm_percentile() - calculate the long-term percentiles
• calc_flow_percentile() - calculate the percentile rank of a flow value
Basic long-term statistics
The long-term calc_ and plot_ functions calculate the long-term and long-term monthly mean, median, maximum, minimum, and percentiles of all daily mean flows.
For calc_longterm_daily_stats(), for a given month, all daily flow values for a given month over the entire record are summarized together. For the ‘Long-term’ category, it summarizes all flow values
over the entire record to determine the mean, median, maximum, minimum, and selected percentiles of daily flows. You can also specify a certain period of months to summarize together (ex. Jul-Sep
flows) using the custom_months argument (listing the months) and labeling it using the custom_months_label argument (ex. “Summer Flows”).
STATION_NUMBER Month Mean Median Maximum Minimum P10 P90
1 08NM116 Jan 1.114706 0.937 9.50 0.160 0.5774 1.730
2 08NM116 Feb 1.121147 0.945 5.81 0.140 0.5093 1.867
3 08NM116 Mar 1.739024 1.240 17.50 0.380 0.6800 3.392
4 08NM116 Apr 8.051759 5.670 53.50 0.505 1.4400 17.910
5 08NM116 May 24.873950 21.800 95.40 2.550 10.4800 43.540
6 08NM116 Jun 22.558101 20.000 87.90 0.450 6.1200 41.700
7 08NM116 Jul 6.272436 3.920 76.80 0.332 1.1900 14.220
8 08NM116 Aug 2.154797 1.630 22.40 0.427 0.8574 3.960
9 08NM116 Sep 2.281067 1.605 17.60 0.364 0.7920 4.654
10 08NM116 Oct 2.117968 1.650 15.20 0.267 0.8638 4.050
11 08NM116 Nov 1.936342 1.535 11.70 0.260 0.6000 3.731
12 08NM116 Dec 1.258478 1.070 7.30 0.244 0.5500 2.150
13 08NM116 Long-term 6.302503 1.800 95.40 0.140 0.7080 20.000
The plot_longterm_daily_stats() will plot the monthly mean, median, maximum, and minimum values along with selected inner and outer percentiles ribbons on one plot. Change the inner and outer
percentile ranges using the inner_percentiles and outer_percentiles arguments, remove the maximum and minimum ribbon using include_extremes = FALSE, or add a specific year using add_year.
plot_longterm_daily_stats(station_number = "08NM116",
start_year = 1974,
inner_percentiles = c(25,75),
outer_percentiles = c(10,90))
Similarly, the calc_longterm_monthly_stats() functions will calculate the mean, median, maximum, and percentiles of monthly mean flows from all years. Meaning the all daily flows for each month and
each year are averaged, and the statistics are based on these annual monthly means. The “Annual” data row summarizes the mean, median, maximum, and percentiles from all annual means.
STATION_NUMBER Month Mean Median Maximum Minimum P10
1 08NM116 Jan 1.114706 0.9679032 6.117742 0.3155161 0.6247290
2 08NM116 Feb 1.121973 0.9607586 3.831786 0.3528276 0.5190000
3 08NM116 Mar 1.739024 1.4009677 6.926774 0.5067419 0.8188839
4 08NM116 Apr 8.051759 7.7053333 23.880333 1.5993333 3.0786000
5 08NM116 May 24.873950 23.7580646 48.122581 13.9861288 16.2113549
6 08NM116 Jun 22.558101 21.8166669 48.640000 3.1504333 10.9094000
7 08NM116 Jul 6.272436 4.5012903 25.639355 0.9213871 1.9086839
8 08NM116 Aug 2.154797 1.7938710 10.193548 0.8721290 1.1350903
9 08NM116 Sep 2.281067 1.6733333 8.109333 0.6999667 1.0183133
10 08NM116 Oct 2.117968 1.8454839 5.661290 0.5329032 1.0473484
11 08NM116 Nov 1.936342 1.5526667 5.413667 0.4982333 0.7032800
12 08NM116 Dec 1.258478 1.0968065 3.648387 0.4502581 0.5636194
13 08NM116 Annual 6.302263 6.2744794 11.134121 2.8761370 4.2865436
1 1.623484
2 1.683081
3 2.714645
4 12.678067
5 33.430323
6 37.235334
7 12.820000
8 3.360645
9 3.927667
10 3.604006
11 3.271933
12 2.070323
13 8.435772
The corresponding plot_longterm_monthly_stats() function plots the data, with similar options as plot_longterm_daily_stats().
Basic annual statistics
The calc_annual_stats() and plot_annual_stats() functions calculate the mean, median, maximum, minimum, and percentiles of daily flows for every year of data provided. In calculating, all daily flow
values are grouped by year.
STATION_NUMBER Year Mean Median Maximum Minimum P10 P90
1 08NM116 1974 8.430181 1.34 66.0 0.447 0.7092 32.98
2 08NM116 1975 5.482636 1.54 48.7 0.320 0.5800 19.58
3 08NM116 1976 8.180694 3.84 71.1 0.736 0.8835 25.55
4 08NM116 1977 4.381567 1.26 36.0 0.564 0.7760 17.20
5 08NM116 1978 6.747608 3.28 44.5 0.532 0.8278 19.70
6 08NM116 1979 4.401564 1.56 43.0 0.411 0.6182 15.88
The percentiles in the plot_annual_stats() function are fully customizable like the calc_ function.
Warning in ggplot2::scale_y_log10(expand = ggplot2::expansion(mult = c(0.02, :
log-10 transformation introduced infinite values.
Basic monthly statistics
The calc_monthly_stats() and plot_monthly_stats() functions calculate the mean, median, maximum, minimum, and percentiles of daily flows for each month of each year. In calculating, all daily flow
values are grouped by year and month.
STATION_NUMBER Year Month Mean Median Maximum Minimum P10 P90
1 08NM116 1974 Jan 1.0234194 1.020 1.26 0.864 0.9060 1.120
2 08NM116 1974 Feb 0.9848214 0.984 1.06 0.830 0.9442 1.043
3 08NM116 1974 Mar 1.2113226 1.120 2.14 0.855 0.9370 1.970
4 08NM116 1974 Apr 7.7613333 4.910 28.30 1.850 1.9190 18.680
5 08NM116 1974 May 29.8451611 30.300 50.40 15.900 17.5000 43.300
6 08NM116 1974 Jun 44.4600002 44.900 66.00 20.600 25.6100 61.250
The percentiles in the plot_monthly_stats() function are fully customizable like the calc_ function. A plot for each different statistic (means, medians, percentiles, etc.) is created to visualize
the monthly patterns over the years.
Basic daily statistics
The calc_daily_stats() and plot_daily_stats() functions calculate the mean, median, maximum, minimum, and percentiles of daily flows for each day of the year. For example, for a given day of year
(i.e. day 1 (Jan-01) or day 2 (Jan-02)), all flow values for that day from the entire record are summarized together. Only the first 365 days of each year are summarized (ignores the 366th day from
leap years). In calculating, all daily flow values are grouped by day of year.
STATION_NUMBER Date DayofYear Mean Median Minimum Maximum P5 P25
1 08NM116 Jan-01 1 1.068163 0.995 0.328 2.51 0.5490 0.698
2 08NM116 Jan-02 2 1.042510 0.950 0.310 2.26 0.5360 0.701
3 08NM116 Jan-03 3 1.022735 0.936 0.290 2.00 0.5320 0.704
4 08NM116 Jan-04 4 1.032163 0.910 0.284 2.52 0.5154 0.740
5 08NM116 Jan-05 5 1.016796 0.899 0.302 2.25 0.5420 0.710
6 08NM116 Jan-06 6 1.010449 0.870 0.315 2.32 0.5208 0.745
P75 P95
1 1.25 1.850
2 1.25 1.860
3 1.18 1.916
4 1.18 1.870
5 1.18 1.892
6 1.23 1.838
The plotting daily statistics function will plot the monthly mean, median, maximum, and minimum values along with selected inner and outer percentiles ribbons on one plot. Change the inner and outer
percentile ranges using the inner_percentiles and outer_percentiles arguments, remove the maximum and minimum ribbon using include_extremes = FALSE, or add a specific year using add_year.
Flow Duration
Flow duration curves can be produced using the function, where selected months and time periods can be selected:
plot_flow_duration(station_number = "08NM116",
start_year = 1974,
months = 7:9,
include_longterm = FALSE)
Other Long-term Statistics
calc_longterm_mean() calculates the mean of all the daily flows, and specific percents of the long-term mean (using percent_MAD argument). It can also be known as the long-term mean annual discharge,
STATION_NUMBER LTMAD X5.MAD X10.MAD X20.MAD
1 08NM116 6.302503 0.3151251 0.6302503 1.260501
calc_longterm_percentile() calculates the selected long-term percentiles of all the daily flow values.
STATION_NUMBER P25 P50 P75
1 08NM116 1.03 1.8 5.56
calc_flow_percentile() calculates the percentile rank of a specified flow value, provided as flow_value. It compares the flow value to all daily flow values to determines the percentile rank.
STATION_NUMBER Percentile
1 08NM116 76.532
Basic statistics and plotting volumetric and yield flows
The calc_ and plot_ functions will summarize any values provided to the functions with the default column being ‘Value’. While for fasstr this defaults to daily mean flows, any daily value can be
summarized (water level, precipitation amount, etc.) if the methods of analyses are similar for the parameter type. As there are no units presented in the calc_ functions this should not be problem
for most calculations. However, the plots come standard with a “Discharge (cms)” y-axis, which can be changed afterwards using ggplot2 functions.
To facilitate the plotting of the daily volume or yield statistics from fasstr, after adding them to your flow data using the add_daily_volume() or add_daily_yield() functions, by listing the values
argument as either ‘Volume_m3’ or ‘Yield_mm’ (from their respective add_* functions), the discharge axis title will adjust accordingly.
add_daily_volume(station_number = "08NM116") %>%
plot_annual_stats(values = "Volume_m3",
start_year = 1974)
add_daily_yield(station_number = "08NM116") %>%
plot_daily_stats(values = "Yield_mm",
start_year = 1974)
Cumulative Flow Statistics
Total volumetric of runoff yield flows within a given year can provide important hydrological information on a basin-wide scale. These functions calculate the total volume (in cubic metres) or yield
(in millimetres; based on basin size) for a flow data set, at the annual, monthly, or daily cumulative scale.
• calc_annual_cumulative_stats() - calculate annual (and seasonal) cumulative flows
• calc_monthly_cumulative_stats() - calculate cumulative monthly flow statistics
• calc_daily_cumulative_stats() - calculate cumulative daily flow statistics
These statistics can also be viewed using their corresponding plotting functions:
• plot_annual_cumulative_stats() - plot annual and seasonal total flows
• plot_monthly_cumulative_stats() - plot cumulative monthly flow statistics
• plot_daily_cumulative_stats() - plot cumulative daily flow statistics
While these functions default to volumetric flows, using use_yield = TRUE and basin_area arguments will calculate totals in runoff yield. If there is a groups column of HYDAT station numbers, then
the function will automatically pull the basin area out of HYDAT if available; otherwise a basin area will be required. Due to the requirements of a complete annual data set to calculate total flows,
only years of complete data are used.
Cumulative annual statistics
The calc_annual_cumulative_stats() function provides the total annual volume or runoff yield (if use_yield = TRUE is used). It totals all flows for a given year in cubic metres.
STATION_NUMBER Year Total_Volume_m3
1 08NM116 1974 265854182
2 08NM116 1975 172900397
3 08NM116 1976 258693177
4 08NM116 1977 138177100
5 08NM116 1978 212792574
6 08NM116 1979 138807734
By using the include_seasons = TRUE (logical TRUE/FALSE) argument, total seasonal flows columns will be added to the results. Two columns of two-seasons (2-six months), and four columns of
four-seasons (4-three months) will be added. The start month of the first seasons will begin in the first month of the year (ex. Jan for Calendar years or Oct for water years starting in October).
STATION_NUMBER Year Total_Volume_m3 Jan.Jun_Volume_m3 Jul.Dec_Volume_m3
1 08NM116 1974 265854182 223662989 42191194
2 08NM116 1975 172900397 136045958 36854438
3 08NM116 1976 258693177 164417817 94275360
4 08NM116 1977 138177100 115279113 22897987
5 08NM116 1978 212792574 146659335 66133239
6 08NM116 1979 138807734 117444383 21363350
Jan.Mar_Volume_m3 Apr.Jun_Volume_m3 Jul.Sep_Volume_m3 Oct.Dec_Volume_m3
The total volumes for each year can be plotted using the plot_annual_cumulative_stats() function. When using include_seasons = TRUE two additional plots will be created, one for two- and
Cumulative monthly and statistics
The calc_monthly_cumulative_stats() and plot_monthly_cumulative_stats() functions calculate the mean, median, maximum, minimum, and percentiles of total cumulative monthly flows. For each month of
each year, the total volume or runoff yield is determined. Then within a given year, the cumulative total for each month is determined by added all previous months (ex. Jan = Jan total; Feb = Jan+Feb
totals, etc.). Then the mean, median, maximum, minimum, and percentiles are calculated based on those monthly cumulative totals for each year. In interpreting the information, if a given total flow
is below the mean value, then the cumulative flow is less than average, or less volume has passed through the station than average at that point in time. The percentiles in the calc_ function are
flexible using the percentiles argument.
STATION_NUMBER Month Mean Median Maximum Minimum P5
1 08NM116 Jan 2985630 2592432 16385760 845078.4 1520398
2 08NM116 Feb 5721630 4928947 24560928 1729123.2 2743615
3 08NM116 Mar 10379433 8677498 38265696 3086380.8 4933388
4 08NM116 Apr 31249593 28039305 74097331 9895046.4 13206724
5 08NM116 May 97871980 89700566 159751008 50343551.6 54663241
6 08NM116 Jun 156342579 158564131 255162529 76246877.0 91146851
7 08NM116 Jul 173142671 177670628 301884193 81422928.3 100462931
8 08NM116 Aug 178914078 181262189 311904865 84962822.7 103690282
9 08NM116 Sep 184826603 188800847 323685505 86777136.3 106395258
10 08NM116 Oct 190499370 192925325 337755745 88204464.3 109878267
11 08NM116 Nov 195518369 196449148 346120993 89495885.1 112583641
12 08NM116 Dec 198889076 197871983 351125627 90701856.3 113991823
P25 P75 P95
The plot_monthly_cumulative_stats() function will plot the monthly total mean, median, maximum, and minimum values along with the 5th, 25th, 75th, and 95th percentiles all on one plot. The
percentiles are not customizable for this function.
Cumulative daily statistics
The calc_daily_cumulative_stats() and plot_daily_cumulative_stats() functions calculate the mean, median, maximum, minimum, and percentiles of total cumulative daily flows. For each day of each year,
the total volume or runoff yield is determined. Then within a given year, the cumulative total for each day is determined by added all previous days (ex. Jan-01 = Jan-01 total; Jan-02 = Jan-01+Jan-02
totals, etc.). Then the mean, median, maximum, minimum, and percentiles are calculated based on those daily cumulative totals for each year. In interpreting the information, if a given total flow is
below the mean value, then the cumulative flow is less than average. In other words, less volume has passed through the station than normal at that point in time. Viewing the plot below may help
understand how this function works. The percentiles in the calc_ function are flexible using the percentiles argument.
STATION_NUMBER Date DayofYear Mean Median Minimum Maximum P5
1 08NM116 Jan-01 1 92289.31 85968.0 28339.2 216864 47433.6
2 08NM116 Jan-02 2 182362.19 168048.0 55123.2 412128 93484.8
3 08NM116 Jan-03 3 270726.46 248918.4 80179.2 581472 138153.6
4 08NM116 Jan-04 4 359905.37 324345.6 104716.8 768960 185155.2
5 08NM116 Jan-05 5 447756.54 393724.8 130809.6 952992 234576.0
6 08NM116 Jan-06 6 535059.33 474681.6 158025.6 1123200 283651.2
P25 P75 P95
1 60307.2 108000 159840.0
2 120873.6 216000 325036.8
3 181699.2 326592 482112.0
4 242784.0 428544 619488.0
5 310867.2 531360 775180.8
6 369619.2 635904 949017.6
The plot_daily_cumulative_stats() function will plot the daily cumulative total mean, median, maximum, and minimum values along with the 5th, 25th, 75th, and 95th percentiles all on one plot. The
percentiles are not customizable for this function.
Other Annual Statistics
Beside the basic summary statistics, there are other useful statistics for interpreting annual streamflow data. They include the following::
• calc_annual_flow_timing() - calculate annual flow timing
• calc_annual_lowflows() - calculate multiple n-day annual low flow values and dates
• calc_annual_highflows() - calculate multiple n-day annual high flow values and dates
• calc_annual_extremes() - calculate annual low and high flow values and dates
• calc_annual_normal_days() - calculate annual normal days and days above and below normal
• calc_all_annual_stats() - calculate all fasstr annual statistics
and their corresponding and other plotting functions:
• plot_annual_flow_timing() - plot annual flow timing
• plot_annual_lowflows() - plot multiple n-day annual low flow values and dates
• plot_annual_highflows() - plot multiple n-day annual low flow values and dates
• plot_annual_extremes() - plot annual low and high flow values and dates
• plot_annual_normal_days() - plot annual normal days and days above and below normal
• plot_annual_means() - plot annual means compared to the long-term mean
There are also a few functions that view give some of the annual statistics context:
• plot_annual_flow_timing_year() - plot annual flow timing for a given year
• plot_annual_extremes_year() - plot annual low and high flow values and dates for a given year
• plot_annual_normal_days_year() - plot annual normal days and days above and below normal for a given year
Annual flow timing
The calc_annual_flow_timing() calculates the day of year when a portion of a total annual volumetric flow has occurred. Using the percent_total argument, one or multiple portions of annual flow can
be calculated. Using 50 as the percent_total is similar to the center of volume or timing of half flow. The day of year and date will be also be produced.
STATION_NUMBER Year DoY_25pct_TotalQ Date_25pct_TotalQ DoY_33.3pct_TotalQ
1 08NM116 1974 135 1974-05-15 146
2 08NM116 1975 146 1975-05-26 153
3 08NM116 1976 143 1976-05-22 151
4 08NM116 1977 124 1977-05-04 131
5 08NM116 1978 134 1978-05-14 142
6 08NM116 1979 126 1979-05-06 133
Date_33.3pct_TotalQ DoY_50pct_TotalQ Date_50pct_TotalQ DoY_75pct_TotalQ
1 1974-05-26 158 1974-06-07 173
2 1975-06-02 162 1975-06-11 177
3 1976-05-30 169 1976-06-17 220
4 1977-05-11 147 1977-05-27 165
5 1978-05-22 158 1978-06-07 204
6 1979-05-13 145 1979-05-25 161
1 1974-06-22
2 1975-06-26
3 1976-08-07
4 1977-06-14
5 1978-07-23
6 1979-06-10
The timing of flows can also be plotted.
The timing of flows for a given year can also be plotted.
Warning: One or more calculations included missing values and NA's were
produced. If desired, filter data for complete years or months, or use the
'ignore_missing' or 'allowed_missing' arguments (if applicable) to ignore or
allow some missing values.
Annual low-flows
The calc_annual_lowflows() calculates the annual minimum values, the day of year, and dates of specified rolling mean days (can do multiple days if desired).
STATION_NUMBER Year Min_1_Day Min_1_Day_DoY Min_1_Day_Date Min_3_Day
1 08NM116 1974 0.447 333 1974-11-29 0.5333333
2 08NM116 1975 0.320 11 1975-01-11 0.3783333
3 08NM116 1976 0.736 38 1976-02-07 0.7406667
4 08NM116 1977 0.564 73 1977-03-14 0.6273333
5 08NM116 1978 0.532 55 1978-02-24 0.6296667
6 08NM116 1979 0.411 268 1979-09-25 0.4156667
Min_3_Day_DoY Min_3_Day_Date Min_7_Day Min_7_Day_DoY Min_7_Day_Date
1 334 1974-11-30 0.6018572 346 1974-12-12
2 39 1975-02-08 0.4158571 41 1975-02-10
3 63 1976-03-03 0.7564286 65 1976-03-05
4 252 1977-09-09 0.6865714 79 1977-03-20
5 3 1978-01-03 0.6642857 5 1978-01-05
6 269 1979-09-26 0.4370000 270 1979-09-27
Min_30_Day Min_30_Day_DoY Min_30_Day_Date
1 0.6645667 358 1974-12-24
2 0.4937667 58 1975-02-27
3 0.7988333 66 1976-03-06
4 0.7876667 81 1977-03-22
5 0.7551000 16 1978-01-16
6 0.5684333 287 1979-10-14
The annual low flow values and the day of the low flow values can be plotted, separately, using the plot_annual_lowflows() function.
Annual high flows
The calc_annual_highflows() calculates the annual maximum values, the day of year, and dates of specified rolling mean days (can do multiple days if desired).
STATION_NUMBER Year Max_1_Day Max_1_Day_DoY Max_1_Day_Date Max_3_Day
1 08NM116 1974 66.0 168 1974-06-17 64.26667
2 08NM116 1975 48.7 153 1975-06-02 47.46667
3 08NM116 1976 71.1 168 1976-06-16 54.93333
4 08NM116 1977 36.0 123 1977-05-03 33.00000
5 08NM116 1978 44.5 157 1978-06-06 42.66667
6 08NM116 1979 43.0 147 1979-05-27 40.16667
Max_3_Day_DoY Max_3_Day_Date Max_7_Day Max_7_Day_DoY Max_7_Day_Date
1 170 1974-06-19 62.21429 171 1974-06-20
2 155 1975-06-04 41.85714 158 1975-06-07
3 170 1976-06-18 45.02857 173 1976-06-21
4 159 1977-06-08 27.11429 161 1977-06-10
5 157 1978-06-06 37.01429 159 1978-06-08
6 147 1979-05-27 34.15714 148 1979-05-28
Max_30_Day Max_30_Day_DoY Max_30_Day_Date
1 47.63000 174 1974-06-23
2 33.07667 178 1975-06-27
3 31.15000 173 1976-06-21
4 20.15000 160 1977-06-09
5 26.74333 166 1978-06-15
6 23.97000 154 1979-06-03
The annual high flow values and the day of the high flow values can be plotted, separately, using the plot_annual_highflows() function.
Annual extreme (both high and low) flows
Similar to *_annual_lowflows() and *_annual_highflows(), calc_annual_extremes() calculates the annual maximum and minimum values, the day of year, and dates of specified rolling mean days and
specified months for each of the high and low flows.
calc_annual_extremes(station_number = "08NM116",
roll_days_min = 7,
roll_days_max = 3,
start_year = 1974)
STATION_NUMBER Year Min_7_Day Min_7_Day_DoY Min_7_Day_Date Max_3_Day
1 08NM116 1974 0.6018572 346 1974-12-12 64.26667
2 08NM116 1975 0.4158571 41 1975-02-10 47.46667
3 08NM116 1976 0.7564286 65 1976-03-05 54.93333
4 08NM116 1977 0.6865714 79 1977-03-20 33.00000
5 08NM116 1978 0.6642857 5 1978-01-05 42.66667
6 08NM116 1979 0.4370000 270 1979-09-27 40.16667
Max_3_Day_DoY Max_3_Day_Date
1 170 1974-06-19
2 155 1975-06-04
3 170 1976-06-18
4 159 1977-06-08
5 157 1978-06-06
6 147 1979-05-27
The annual extremes values and the days can be plotted:
plot_annual_extremes(station_number = "08NM116",
roll_days_min = 7,
roll_days_max = 3,
start_year = 1974)
The annual extremes values and the days for a given year can also be plotted:
plot_annual_extremes_year(station_number = "08NM116",
roll_days_min = 7,
roll_days_max = 3,
start_year = 1974,
year_to_plot = 1999)
Warning in ggplot2::scale_y_log10(breaks = scales::log_breaks(n = 8, base = 10), : log-10 transformation introduced infinite values.
log-10 transformation introduced infinite values.
Number of normal (and above/below normal) days per year
The calc_annual_normal_days() calculates the number of days per year that are normal and above and below “normal”, “normal” typically defined as 25th and 75th percentiles. The normal limits can be
determined using the normal_percentiles argument, listing the lower and upper normal ranges, respectively (e.g. normal_percentiles = c(25, 75)). The function calculates the lower and upper
percentiles for each day of the year over all years and sums all days that are within and above or below the daily normal ranges for a given year. Rolling averages can also be used in this function
using the roll_days argument.
STATION_NUMBER Year Normal_Days Below_Normal_Days Above_Normal_Days
1 08NM116 1974 218 71 76
2 08NM116 1975 200 135 30
3 08NM116 1976 177 50 139
4 08NM116 1977 255 101 9
5 08NM116 1978 237 16 112
6 08NM116 1979 147 147 71
Each of the above, below, and normal days can be plotted using the plot_annual_normal_days() function.
The daily flows with normal categories for a given year can also be plotted.
Calculating all annual statistics
The calc_all_annual_stats() calculates all statistics that have a single annual value. This includes all the calc_annual_* and the calc_monthly_statistics() functions. Several arguments provided for
customization of the statistics. There is no corresponding plotting function for this calculation function.
[1] "STATION_NUMBER" "Year" "Annual_Maximum"
[4] "Annual_Mean" "Annual_Median" "Annual_Minimum"
[7] "Annual_P10" "Annual_P90" "Min_1_Day"
[10] "Min_1_Day_DoY" "Min_3_Day" "Min_3_Day_DoY"
[13] "Min_7_Day" "Min_7_Day_DoY" "Min_30_Day"
[16] "Min_30_Day_DoY" "Total_Volume_m3" "Jan-Jun_Volume_m3"
[19] "Jul-Dec_Volume_m3" "Jan-Mar_Volume_m3" "Apr-Jun_Volume_m3"
[22] "Jul-Sep_Volume_m3" "Oct-Dec_Volume_m3" "Total_Yield_mm"
[25] "Jan-Jun_Yield_mm" "Jul-Dec_Yield_mm" "Jan-Mar_Yield_mm"
[28] "Apr-Jun_Yield_mm" "Jul-Sep_Yield_mm" "Oct-Dec_Yield_mm"
[31] "DoY_25pct_TotalQ" "DoY_33pct_TotalQ" "DoY_50pct_TotalQ"
[34] "DoY_75pct_TotalQ" "Normal_Days" "Below_Normal_Days"
[37] "Above_Normal_Days" "Jan_Mean" "Jan_Median"
[40] "Jan_Maximum" "Jan_Minimum" "Jan_P10"
[43] "Jan_P20" "Feb_Mean" "Feb_Median"
[46] "Feb_Maximum" "Feb_Minimum" "Feb_P10"
[49] "Feb_P20" "Mar_Mean" "Mar_Median"
[52] "Mar_Maximum" "Mar_Minimum" "Mar_P10"
[55] "Mar_P20" "Apr_Mean" "Apr_Median"
[58] "Apr_Maximum" "Apr_Minimum" "Apr_P10"
[61] "Apr_P20" "May_Mean" "May_Median"
[64] "May_Maximum" "May_Minimum" "May_P10"
[67] "May_P20" "Jun_Mean" "Jun_Median"
[70] "Jun_Maximum" "Jun_Minimum" "Jun_P10"
[73] "Jun_P20" "Jul_Mean" "Jul_Median"
[76] "Jul_Maximum" "Jul_Minimum" "Jul_P10"
[79] "Jul_P20" "Aug_Mean" "Aug_Median"
[82] "Aug_Maximum" "Aug_Minimum" "Aug_P10"
[85] "Aug_P20" "Sep_Mean" "Sep_Median"
[88] "Sep_Maximum" "Sep_Minimum" "Sep_P10"
[91] "Sep_P20" "Oct_Mean" "Oct_Median"
[94] "Oct_Maximum" "Oct_Minimum" "Oct_P10"
[97] "Oct_P20" "Nov_Mean" "Nov_Median"
[100] "Nov_Maximum" "Nov_Minimum" "Nov_P10"
[103] "Nov_P20" "Dec_Mean" "Dec_Median"
[106] "Dec_Maximum" "Dec_Minimum" "Dec_P10"
[109] "Dec_P20"
Plotting annual means
The plot_annual_means() function provides a way to visualize how annual means fluctuate around the long-term mean. The x-axis is located at the long-term mean annual discharge (mean of all discharge
values over all years) and the bars shows the annual means. The plot is essentially an anomaly plot but with their y-value matching the mean value and not difference from the mean.
7. Functions for Computing Analyses
There are several functions that provide more in-depth analyses. These functions begin with compute_ instead of calc_ and typically produce more than just a tibble data frame of statistics, like the
calc_ functions. Most of these produce a list of objects, consisting of both tibbles and plots. There are three groups of analysis functions: annual trending, annual volume frequency analyses, and a
full analysis (of most fasstr functions). There is a separate vignette for each analysis type to provide more information.
Annual Trending Analysis
The compute_annual_trends() function calculates prewhitened non-parametric annual trends on streamflow data using the zyp package. The function calculates various annual metrics using the
calc_all_annual_stats() function and then calculates and plots the trending data. The magnitude of trends is first computed using the Theil-Sen approach. Depending on the selected method, either
"zhang" or "yuepilon", the trends are adjusted for autocorrelation and then a Mann-Kendall test for trend is applied to the series. The zhang method is recommended for hydrologic applications over
yuepilon. See the zyp package and the trending vignette for more information.
The compute_annual_trends() function outputs several objects in a list:
1. $Annual_Trends_Data - a tibble of annual data from the calc_all_annual_stats() function used for trending
2. $Annual_Trends_Results - a tibble of annual trending results, from both zyp and fasstr
3. $Annual_* - a ggplot2 object for every annual statistic trended, with the slope plotted if an alpha value is chosen using the zyp_alpha argument (ex. zyp_alpha = 0.05).
Volume Frequency Analyses
There are five fasstr functions that perform various volume frequency analyses. Frequency analyses are used to determine probabilities of events of certain sizes (typically annual high or low flows).
The analyses produce plots of event series and computed quantiles fitted from either Log-Pearson Type III or Weibull probability distributions. See the frequency analysis vignette for more
The compute_annual_frequencies() performs an annual daily (or selected duration using roll_days argument) low-flow (by default) or high-flow (using use_max = TRUE argument) frequency analysis on
annual series. This analysis uses the daily mean lows or highs. The compute_hydat_peak_frequencies() function performs an annual instantaneous low (by default) or high peak frequency analysis. The
data argument cannot be used for the HYDAT peak analysis. Both functions output several objects in a list:
1. $Freq_Analysis_Data - Tibble of computed annual minimums (or maximums)
2. $Freq_Plot_Data - Tibble of plotting coordinates used in the frequency plot
3. $Freq_Plot - ggplot2 object of the frequency plot
4. $Freq_Fitting - List of fitdistrplus objects of the fitted distributions.
5. $Freq_Fitted_Quantiles - Tibble with fitted quantiles.
The compute_frequency_quantile() function performs annual daily (or selected duration) low-flow (by default) or high-flow (using use_max = TRUE argument) frequency analysis on annual series but only
returns the fitted quantile based on the selected return period. Both the numeric arguments roll_days and return_period are required. It results in a single value. For example, supplying roll_days =
7 and return_period = 10 to the function with a data set will return the 7-day low-flow with a 10-year return period (i.e. 7Q10).
To compute a volume frequency analysis on custom data, use the compute_frequency_analysis() function. The data points to be used in the analysis must be provided in a data frame with a column of
events (or years), the flow values (values), and the measure (or the type of value it is, “7-day lows”, for example. All other data filtering options are not included.
Full Analysis
If desired, a suite of fasstr functions can be computed using the compute_full_analysis(), producing lists of tables and plots organized in lists by analysis type. write_full_analysis() will create
both all the objects and also write data to your computer, in Excel-ready formats and image files. The filetypes of plots and tables can be set using the plot_filetype and table_filetype arguments,
respectively. See the full analysis vignette for more information on customizing the analyses and statistics.
The plots and tables are grouped into the following analyses:
1. Screening
2. Long-term
3. Annual
4. Monthly
5. Daily
6. Annual Trends
7. Low-flow Frequencies
8. Customizing Functions with Arguments - Data Filtering and Options
While tidying and filtering data to desired parameters or time periods can be completed to flow data frames prior to passing them onto fasstr functions, a suite of function arguments have been
provided to allow for in-function customization of tidying and filtering. Described here are some of the options available in fasstr functions on how to handle missing dates, filter for specific
years or months, and select desired statistics from some of the fasstr functions. Not all functions have all these options see the documentation for each function usage (can also use ?
calc_annual_stats to see documentation in R).
Handling Missing Dates
Most functions will automatically (ignore_missing = FALSE) not calculate a statistic for a given period (a year or month or day of year, for example) if there is a date with missing data (NA value)
and will result in an NA value or will not plot (base na.rm = FALSE). For example, if there at least one missing day for a given year, an annual statistic will not be calculated for that year. A
warning message will appear in the console indicating as such to ensure the user is aware of missing data. See the following code for an example with missing dates:
If you want to calculate the statistics regardless of the number of missing dates per time period, use the ignore_missing = TRUE argument.
Starting with fasstr 0.4.0, to allow a certain percentage of missing dates per period and still calculate a statistic, the argument allow_missing (and allow_missing_annual and allow_missing_monthly
in come cases) will override the ignore_missing argument in certain functions. A numeric value between 0 and 100 indicating the percentage of missing dates allowed to be included is provided to the
argument to calculate a statistic (0 to 100 percent). For example, if 3-4 days of missing dates are permitted per year to calculate annual means, percentiles or extremes, then 1% of days can be
applied as allowed_missing = 1.
To maintain usage of ignore_missing, if ignore_missing = FALSE then it defaults to 0 (zero missing dates allowed), and if ignore_missing = TRUE then it defaults to 100 (any missing dates allowed).
This argument is included only in functions that calculate annual or monthly means, percentiles, minimums, and maximums including various calc_annual_* and plot_annual_* functions, calc_monthly_stats
(), plot_monthly_stats(), and most compute_* functions. See function documentation to see if included. The following example allows the data to have 25%, or ~91 days, of missing dates, to calculate
annual statistics:
Dates Filtering
There are several options in the function that allow you choose year options and to filter for specific time periods. If there is a specific period, years or months, to be analyzed there are several
options to customize the data supplied. While filtering of data can be done to your flow data set prior supplying it to a function (using dplyr filtering, for example), these options provide quick
solutions for in-function filtering that can be incorporated into a workflow.
Water year and start month
By default, the functions will analyze/group/filter data by calendar years (Jan-Dec). However, some analyses require use of water years, or hydrologic years, starting in other months. If use of water
years is desired not starting in January, then set water_year_start with a month other than 1. The water year is identified by the calendar year in which it ends. For example, a water year from Oct
2000 to Sep 2001 would be water year 2001.
Example of a default water year, starting in October:
Example of a water year starting in August:
Selecting and excluding years
To specify select years used in your analysis, the start_year and end_year arguments (providing a single value) can filter the years. Using the exclude_years argument (providing a single or vector of
years) will allow you to remove certain years from the analysis. Leaving these arguments blank will include all years in the data set for the analysis.
Example of filtering for start and end years:
Examples of removing certain years (outliers, bad data, etc.) using exclude_years:
calc_annual_stats(station_number = "08NM116",
start_year = 1980,
end_year = 2010,
exclude_years = 1982)
Using only years with complete data
If your data has missing dates, but you would like to use only those years with complete data, some functions utilize the complete_years argument where the data will automatically be filtered for
years with complete data and statistics will be calculated. Only years with complete data will be included into the following example.
Some functions, like below, require only years with complete data (statistics are based on full years of data), so years with missing dates will be automatically ignored:
Selecting for months
Most functions allow you to specify select months used in your analysis, using the months argument. By providing a vector of months (1 through 12) only those months will be used in an analysis. For
example, using the months argument with the calc_annual_stats() function will calculate the annual statistics for only those months listed. So, if summer statistics are required you supply months =
6:8 to the function. Leaving this arguments blank will include all months in the data set for the analysis. As of fasstr 0.4.0, the months argument is now included in all calc_, plot_, and compute_
functions to allow for selecting of specific months in all analyses, including calc_all_annual_stats() and compute_annual_trends().
Example of filtering for months June through August:
Example of flow timing / center of volume in winter/spring months:
A few functions, including the calc_longterm_daily_stats(), plot_longterm_daily_stats(), and plot_flow_duration() functions will allow you to add a customized time period to your data frame or plot.
Using the custom_months argument you can list a vector of months (numeric 1:12). By default, the data will be labelled as “Custom-Months” but can be customized by providing a character string to the
custom_months_label argument.
Example of custom months and labeling:
Rolling averages
Some functions allow you to specify analyzing the data using rolling mean data as opposed to the daily means. For those functions with the roll_days and roll_align arguments, analyses will be
computed on the daily mean by default (can leave them blank if so). If choosing to conduct an analysis on 7-day rolling means, you would set roll_days = 7. Some functions allow multiple rolling days
to be provided (see function documentation). The roll_align argument determines the direction of the rolling mean: see the “Adding rolling means” portion in Section 4 to see how the roll_days and
roll_align work together.
Example of a 7-day rolling mean analysis (single roll_days use):
Example of a 7- and 30-day rolling mean analysis (multiple roll_days use):
Percentiles and other statistics
Each fasstr function comes with their default statistics to be calculated. While some cannot be changed (some plotting functions), most have the ability to customize what is calculated. Look up the
default settings for each function in their documentation (?calc_longterm_daily_stats for example).
By default, the basic summary statistics functions will calculate the mean, median, maximum, and minimum values for each time period; these will automatically be calculated can cannot be removed by
an argument option (can remove afterwards if necessary). These functions also calculate default percentiles, which can be customized by changing the desired percentiles by providing a numeric vector
of numbers (between 0 and 100) to the percentiles argument.
This example shows the default percentiles for the calc_annual_stats() function (10 and 90th percentiles):
This example shows custom percentiles for the calc_annual_stats() function (5 and 25th percentiles):
calc_annual_stats(station_number = "08NM116",
start_year = 1980,
end_year = 2010,
percentiles = c(5,25))
The following are some examples of how to customize results from other types of functions. See function documentations for full argument uses.
Example of calculating dates of the 10 and 20 percent of total annual flow:
calc_annual_flow_timing(station_number = "08NM116",
start_year = 1980,
end_year = 2010,
percent_total = c(10,20))
Example of plotting the number of normal and above/below normal days per year of the 10th and 90th percentiles (25th and 75th percentiles are default):
plot_annual_normal_days(station_number = "08NM116",
start_year = 1980,
end_year = 2010,
normal_percentiles = c(10,90))
Data frame options
An option when working with the functions that produce data frames is to transpose the rows and columns of the data. Most functions by default provide data results such there are columns of
statistics for each station and time period. See the example here:
STATION_NUMBER Month Mean Median Maximum Minimum P10 P90
1 08NM116 Jan 1.201563 0.9650 9.50 0.160 0.5480 1.850
2 08NM116 Feb 1.146177 0.9675 4.41 0.140 0.4890 1.970
3 08NM116 Mar 1.818723 1.3800 9.86 0.380 0.7200 3.700
4 08NM116 Apr 8.333600 6.2250 37.90 0.505 1.5400 17.810
5 08NM116 May 23.585036 20.9000 74.40 3.830 9.3700 40.800
6 08NM116 Jun 21.291149 19.4000 84.50 0.450 6.0990 38.630
7 08NM116 Jul 6.421402 3.9400 54.50 0.332 1.0200 14.700
8 08NM116 Aug 2.114699 1.5700 13.30 0.427 0.7790 4.210
9 08NM116 Sep 2.206682 1.6200 14.60 0.364 0.7397 4.352
10 08NM116 Oct 2.100921 1.6500 15.20 0.267 0.8030 3.950
11 08NM116 Nov 2.024817 1.7100 11.70 0.260 0.5618 3.781
12 08NM116 Dec 1.313862 1.0800 7.30 0.342 0.5000 2.370
13 08NM116 Long-term 6.141736 1.8900 84.50 0.140 0.6850 19.300
In some circumstances, however, it may be more convenient to wrangle the data such that there are columns for stations (or groupings) and a single column with all statistics, and then the values are
placed in columns for each respective time period. See the following example when setting transpose = TRUE.
calc_longterm_daily_stats(station_number = "08NM116",
start_year = 1980,
end_year = 2010,
transpose = TRUE)
STATION_NUMBER Statistic Jan Feb Mar Apr May Jun
1 08NM116 Mean 1.201563 1.146177 1.818723 8.3336 23.58504 21.29115
2 08NM116 Median 0.965000 0.967500 1.380000 6.2250 20.90000 19.40000
3 08NM116 Maximum 9.500000 4.410000 9.860000 37.9000 74.40000 84.50000
4 08NM116 Minimum 0.160000 0.140000 0.380000 0.5050 3.83000 0.45000
5 08NM116 P10 0.548000 0.489000 0.720000 1.5400 9.37000 6.09900
6 08NM116 P90 1.850000 1.970000 3.700000 17.8100 40.80000 38.63000
Jul Aug Sep Oct Nov Dec Long.term
1 6.421402 2.114699 2.206682 2.100921 2.024817 1.313862 6.141736
2 3.940000 1.570000 1.620000 1.650000 1.710000 1.080000 1.890000
3 54.500000 13.300000 14.600000 15.200000 11.700000 7.300000 84.500000
4 0.332000 0.427000 0.364000 0.267000 0.260000 0.342000 0.140000
5 1.020000 0.779000 0.739700 0.803000 0.561800 0.500000 0.685000
6 14.700000 4.210000 4.352000 3.950000 3.781000 2.370000 19.299999
Plotting options
Logarithmic discharge scale
Depending on the plotting function, discharge data will be plotted using a linear or a logarithmic scale (depending on the scale of data). This can be altered using the log_discharge argument. Here
is example of plotting with a linear scale (default log_discharge = FALSE):
Set the discharge scale to be logarithmic (log_discharge = TRUE):
plot_annual_stats(station_number = "08NM116",
start_year = 1980,
end_year = 2010,
log_discharge = TRUE)
Including a standard title on the plot
The logical include_title argument adds the station number (or grouping identifier from the groupings argument), and in some cases the statistics as well. The argument’s default is FALSE.
Example of including a title when plotting (include_title = TRUE):
plot_annual_stats(station_number = "08NM116",
start_year = 1980,
end_year = 2010,
include_title = TRUE)
Example of including a title when plotting include_title = TRUE where the statistic is also displayed:
plot_monthly_stats(station_number = "08NM116",
start_year = 1980,
end_year = 2010,
include_title = TRUE)[[1]]
Customizing a plot by using additional ggplot2 functions:
# Create the plot list and extract the plot using [[1]]
plot <- plot_daily_stats(station_number = "08NM116", start_year = 1980)[[1]]
# Customize the plot with various `ggplot2` functions
plot +
geom_hline(yintercept = 1.5, colour = "red", linetype = 2, size = 1) +
geom_vline(xintercept = as.Date("1900-03-01"), colour = "darkgray", linetype = 1, size = 0.5) +
geom_vline(xintercept = as.Date("1900-08-05"), colour = "darkgray", linetype = 1, size = 0.5) +
ggtitle("Mission Creek Annual Hydrograph") +
ylab("Flow (cms)")
Warning: Using `size` aesthetic for lines was deprecated in ggplot2 3.4.0.
ℹ Please use `linewidth` instead.
This warning is displayed once every 8 hours.
Call `lifecycle::last_lifecycle_warnings()` to see where this warning was
9. Writing Tables and Plots
To support saving the fasstr tables and plots to a directory, there are several functions included in this package. These include the following:
• write_flow_data() - write a streamflow data set as a .xlsx, .xls, or .csv file
• write_results() - write a data frame as a .xlsx, .xls, or .csv file
• write_plots() - write plots from a list into a directory or PDF document
• write_objects_list() - write all tables and plots contained in a list
Writing a flow data set
To directly save a streamflow data set from HYDAT or your own custom data frame onto your computer, you can use the write_flow_data() function. By listing the station_number or data data frame, the
data set will save a file into the working directory, unless otherwise specified using the file_name argument. If using the station_number argument and listing only one station without listing a name
with file_name, the name will include the number and followed by “_daily_data.xlsx”; and if multiple stations are listed the name will be “HYDAT_daily_data.xlsx”. When using the data argument without
listing a name with file_name the default name will be fasstr_daily_data.xlsx. To use another file type than “xlsx” (options are “xlsx”, “xls”, or “csv”) provide a file name using the file_name
argument with the desired extension. Other argument options for this function include:
• selecting for the start and end years or dates
• choosing to use water years when selecting specific years
• selecting whether or not to fill dates with missing data with NA’s (logical fill_missing argument)
• selecting the number of digits to round the flow values (numeric digits argument)
The following will write an “xlsx” file called “08NM116_data_data.xlsx” into your working directory that includes all daily flow data from that station in HYDAT:
The following is an example of possible customization:
Writing a data frame
While you can use the base R write_csv() or writexl package functions to save your data, the package provides a function with options to choose for file type and the rounding of digits. To directly
save a data frame onto your computer you can use the write_results() function. This function allows you to decide on file extensions of “xlsx”, “xls”, or “csv” by including it in the file_name
argument when you name the file. This function also allows you to round all numeric columns by selecting the number of digits using the numeric digits argument.
Writing a list of plots
As all plots produced with this package are contained within lists, a function is provided to assist in saving a list of plots into either a folder, where all plot files are named by the object names
within the list, or combined PDF document, using the write_plots() function. The name of the folder or PDF document is provided using the folder_name argument. If the folder does not exist, one will
be created. Options to customize output size with width, height, units and dpi arguments, as similar to those in ggplots2:ggsave(), can also be used.
The following will save each annual plot as a “png” file in a folder called “Annual Plots” in the working directory:
annual_plots <- plot_annual_stats(station_number = c("08NM116","08NM242"))
write_plots(plots = annual_data,
folder_name = "Annual Plots",
plot_filetype = "png")
The following will save all annual plots as combined “pdf” document called “Annual Plots” in the working directory with each plot on a different page:
annual_plots <- plot_annual_stats(station_number = c("08NM116","08NM242"))
write_plots(plots = annual_data,
folder_name = "Annual Plots",
combined_pdf = TRUE)
If you would prefer to save the plots using other functions, like the ggplot2::ggsave() function, the desired plot must subsetted from the list first so the object provided the function is a plot
object and not a list. Individual plots can be subsetted from their lists using either the dollar sign, $ (e.g. one_plot <- plots$plotname), or double square brackets, [ ] (e.g. one_plot <- plots
[[plotname]] or one_plot <- plots[[1]]).
Writing a list of data frames and plots
As some objects produced with this package, mainly with the compute_* functions, contain lists of both data frames and ggplot2 objects, a function is provided, called write_objects_list(), to assist
in saving all objects within the list into a designated directory folder, where all table and plot files are named by the object names. The name of the folder is provided using the folder_name
argument. If the folder does not exist, one will be created. The file type for tables and plots are chosen using the table_filetype and plot_filetype arguments respectively. There are also options to
customize plot output size with width, height, units and dpi arguments, as similar to those in ggplots2:ggsave() can also be used.
The following will save all plots and tables in a folder called “Frequency Analysis” in the working directory: | {"url":"https://cran.mirror.garr.it/CRAN/web/packages/fasstr/vignettes/fasstr_users_guide.html","timestamp":"2024-11-03T16:17:29Z","content_type":"text/html","content_length":"656442","record_id":"<urn:uuid:cac2faa4-d08d-45d4-9616-e02bd39cc7b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00503.warc.gz"} |
Let G be the center of equilateral triangle ABC A dilation centered at G with scale factor -3/5 is applied to triangle ABC to obtain triangle A'B'C' Let K be the area of the region that is contained
in both triangles ABC and A'B'C' Find K/[ABC]
itsash Nov 12, 2023
Because the scale factor is negative, triangle A'B'C' is obtained by rotating 180 degrees triangle ABC about point G and then shrinking it. Hence, the overlap between triangles ABC and A'B'C' is the
smaller triangle A'BC'. (The region consisting of the intersection of triangle ABC with dilated triangle A'B'C' is actually a kite.) We know that the side lengths of triangle A'B'C' are 3/5 the side
lengths of triangle ABC. Since the area of a triangle is proportional to the square of its side lengths, the area of triangle A'B'C' is (3/5)2 times the area of triangle ABC. Therefore, the area of
the overlap K is (1−(3/5)2) times the area of triangle ABC. Hence, \begin{align*} \frac{K}{ABC} &= 1-(3/5)^2 \ &= 1 - 9/25 \ &= \boxed{\frac{16}{25}}. \end{align*}
The answer is 16/25.
bingboy Nov 13, 2023 | {"url":"https://web2.0calc.com/questions/help_72053","timestamp":"2024-11-14T07:23:39Z","content_type":"text/html","content_length":"20883","record_id":"<urn:uuid:30ac21c9-af92-46a5-a2ca-d8a12c788388>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00378.warc.gz"} |
How do you differentiate ln(sec^2 * x)?
| HIX Tutor
How do you differentiate #ln(sec^2 * x)#?
Answer 1
$\frac{\mathrm{dy}}{\mathrm{dx}} = 2 \tan x$
#y=lnu , where , u=sec^2x#
#(dy)/(du)=1/u and (du)/(dx)=2secx*secxtanx=2sec^2xtanx#
Diff.w.r.t. #x# using Chain Rule:
#(dy)/(dx)=1/u xx 2sec^2xtanx#
Subst, back , #u=sec^2x#
#:.(dy)/(dx)=1/sec^2x xx 2sec^2xtanx#
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To differentiate ln(sec^2(x)), apply the chain rule:
d/dx [ln(sec^2(x))] = (1 / sec^2(x)) * d/dx[sec^2(x)]
Now, differentiate sec^2(x) with respect to x:
d/dx [sec^2(x)] = 2 * sec(x) * tan(x)
Substitute this result back into the previous expression:
(1 / sec^2(x)) * (2 * sec(x) * tan(x))
This simplifies to:
2 * tan(x)
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/how-do-you-differentiate-ln-sec-2-x-8f9af9f0aa","timestamp":"2024-11-14T05:12:33Z","content_type":"text/html","content_length":"576035","record_id":"<urn:uuid:08d1c22c-7820-43c6-a818-e6adad5af4f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00581.warc.gz"} |
Filter input signal in frequency domain
The dsp.FrequencyDomainFIRFilter System object™ implements frequency-domain, fast Fourier transform (FFT)-based filtering to filter a streaming input signal. In the time domain, the filtering
operation involves a convolution between the input and the impulse response of the finite impulse response (FIR) filter. In the frequency domain, the filtering operation involves the multiplication
of the Fourier transform of the input and the Fourier transform of the impulse response. The frequency-domain filtering is efficient when the impulse response is very long. You can specify the filter
coefficients directly in the frequency domain by setting NumeratorDomain to "Frequency".
This object uses the overlap-save and overlap-add methods to perform the frequency-domain filtering. For an example showing these two methods, see Frequency Domain Filtering Using Overlap-Add and
Overlap-Save. For filters with a long impulse response length, the latency inherent to these two methods can be significant. To mitigate this latency, the dsp.FrequencyDomainFIRFilter object
partitions the impulse response into shorter blocks and implements the overlap-save and overlap-add methods on these shorter blocks. To partition the impulse response, set the
PartitionForReducedLatency property to true. For an example, see Reduce Latency Through Partitioned Numerator. For more details on these two methods and on reducing latency through impulse response
partitioning, see Algorithms.
The dsp.FrequencyDomainFIRFilter object can model single-input multiple-output (SIMO) and multiple-input multiple-output (MIMO) systems by supporting multiple filters in the time domain and the
frequency domain. For examples, see Filter Input Signal Using 1-by-2 SIMO System and Filter Input Signal Using 3-by-2 MIMO System. You can also specify multiple paths between each input channel and
output channel pair using the NumPaths property. For an example, see Filter Input Signal Through Multiple Propagation Paths. For more information on modeling MIMO systems, see Modeling MIMO System
with Multiple Propagation Paths. (since R2023b)
To filter the input signal in the frequency domain:
1. Create the dsp.FrequencyDomainFIRFilter object and set its properties.
2. Call the object with arguments, as if it were a function.
To learn more about how System objects work, see What Are System Objects?
fdf = dsp.FrequencyDomainFIRFilter creates a frequency domain FIR filter System object that filters each channel of the input signal independently over time in the frequency domain using the
overlap-save or overlap-add method.
fdf = dsp.FrequencyDomainFIRFilter(num) creates a frequency domain FIR filter object with the Numerator property set to num.
Example: dsp.FrequencyDomainFIRFilter(fir1(400,2*2000/8000))
fdf = dsp.FrequencyDomainFIRFilter(Name=Value) creates a frequency domain FIR filter System object with each specified property set to the specified value. You can use this syntax with any previous
input argument combinations.
Example: dsp.FrequencyDomainFIRFilter(Method="Overlap-add")
Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them.
If a property is tunable, you can change its value at any time.
For more information on changing property values, see System Design in MATLAB Using System Objects.
Method — Frequency-domain filter method
"Overlap-save" (default) | "Overlap-add"
Frequency-domain filter method, specified as either "Overlap-save" or "Overlap-add". For more details on these two methods, see Algorithms.
NumeratorDomain — Numerator domain
"Time" (default) | "Frequency"
Domain of the filter coefficients, specified as one of the following:
• "Time" –– Specify the time-domain filter numerator in the Numerator property.
• "Frequency" –– Specify the filter's frequency response in the FrequencyResponse property.
Numerator — FIR filter coefficients
fir1(100,0.3) (default) | row vector | matrix (since R2023b)
FIR filter coefficients, specified as a row vector (single filter) or a matrix (multiple filters) (since R2023b) of size F-by-NumLen, where F is the number of filters and NumLen is the filter length.
If F is greater than 1, then its value must be a multiple of the product of the number of input channels (columns) T and the number of paths P you specify through the NumPaths property. The
multiplication factor determines the number of output channels R and equals F/(T × P).
The coefficient values can change during simulation but the size of the numerator must remain constant.
Tunable: Yes
To enable this property, set NumeratorDomain to "Time".
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Complex Number Support: Yes
FrequencyResponse — Frequency response of filter
fft(fir1(100,0.3),202) (default) | row vector | matrix | 3-D array (since R2024a)
Frequency response of the filter, specified as a row vector, matrix, or a 3-D array (since R2024a).
When you set PartitionForReducedLatency to true, FrequencyResponse must be a matrix of size 2PL-by-N to represent a single filter or a 3-D array of size 2PL-by-N-by-F to represent multiple filters
(since R2024a), where PL is the partition size, N is the number of partitions, and F is the number of filters.
When you set PartitionForReducedLatency to false, FrequencyResponse can be a row vector or a matrix of size F-by-FFTLength. If F is greater than 1 (multiple filters), then its value must be a
multiple of the product of the number of input channels (columns) T and the number of paths P you specify through the NumPaths property. The multiplication factor determines the number of output
channels R and equals F/(T × P). (since R2023b)
Tunable: Yes
To enable this property, set NumeratorDomain to "Frequency".
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Complex Number Support: Yes
NumeratorLength — Time-domain numerator length
101 (default) | positive integer
Time-domain numerator length, specified as a positive integer.
To enable this property, set NumeratorDomain to "Frequency" and PartitionForReducedLatency to false.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
FilterIsReal — Flag to specify if filter is real
true (default) | false
Flag to specify if the filter coefficients are all real, specified as true or false.
To enable this property, set NumeratorDomain to "Frequency".
PartitionForReducedLatency — Flag to partition numerator to reduce latency
false (default) | true
Flag to partition numerator to reduce latency, specified as:
• false –– The filter uses the traditional overlap-save or overlap-add method. The latency in this case is FFTLength – length(Numerator) + 1.
• true –– In this mode, the object partitions the numerator into segments of length specified by the PartitionLength property. The filter performs overlap-save or overlap-add on each partition, and
combines the partial results to form the overall output. The latency is now reduced to the partition length.
FFTLength — FFT length
[] (default) | positive integer
FFT length, specified as a positive integer. The default value of this property, [], indicates that the FFT length is equal to twice the numerator length. The FFT length must be greater than or equal
to the numerator length.
To enable this property, set NumeratorDomain property to "Time" and PartitionForReducedLatency property to false.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
PartitionLength — Numerator partition length
32 (default) | positive integer
Numerator partition length PL, specified as a positive integer less than or equal to the length of the numerator.
To enable this property, set the NumeratorDomain property to "Time" and PartitionForReducedLatency property to true.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Latency — Filter latency
102 (default) | positive integer
This property is read-only.
Filter latency in samples, returned as an integer greater than 0. When PartitionForReducedLatency is:
• false –– The latency is equal to FFTLength – length(Numerator) + 1.
• true –– The latency is equal to the partition length.
Data Types: uint32
NumPaths — Number of propagation paths between each input channel and output channel pair
1 (default) | positive integer
Since R2023b
Number of propagation paths P between each input channel and output channel pair, specified as a positive integer. Each path is represented by a unique filter. When you specify the number of paths to
be greater than 1, the filter models multipath propagation. The filtered output contributions from all paths between each input channel and output channel pair are summed up.
For an example, see Filter Input Signal Through Multiple Propagation Paths.
To enable this property, the number of filters F must be greater than 1.
Data Types: single | double
SumFilteredOutputs — Option to sum filtered output contributions from all input channels
true (default) | false
Since R2023b
Option to sum filtered output contributions from all input channels, specified as:
• true –– The object adds the filtered output from each input channel to generate an L-by-R output matrix, where L is the input frame size (number of input rows) and R is the output channels. R
equals F/(T x P), where F is the number of filters, T is the number of input channels, and P is the value that you specify in the NumPaths property.
• false –– The object does not sum filtered output contributions from all input channels. The output is an L-by-R-by-T array.
For more information on how the object computes the output based on the value of the SumFilteredOutputs property, see Modeling MIMO System with Multiple Propagation Paths.
To enable this property, the number of filters F must be greater than 1.
Data Types: logical
output = fdf(input) filters the input signal and outputs the filtered signal. The object filters each channel of the input signal independently over time in the frequency domain.
Input Arguments
input — Data input
Data input, specified as a matrix of size L-by-T. This object supports variable-size input signals, that is, you can change the input frame size (number of rows) even after calling the algorithm.
However, the number of channels (columns) must remain constant.
Data Types: single | double
Complex Number Support: Yes
Output Arguments
output — Filtered output
vector | matrix | 3-D array
Filtered output, returned as a vector, matrix, or a 3-D array.
When the number of filters F is greater than 1 and you set SumFilteredOutputs to:
• true –– The object adds the filtered output from each input channel to generate an L-by-R output matrix, where L is the input frame size (number of input rows) and R is the number of output
channels. R equals F/(T x P), where F is the number of filters, T is the number of input channels, and P is the value that you specify in the NumPaths property.
• false –– The object does not sum filtered output contributions from all input channels. The output is an L-by-R-by-T array. output(:,j,k) refers to the output from the k^th input channel for the
j^th output channel. For example, output(:,3,2) indicates output on the third output channel from the second input channel.
(since R2023b)
For more information on how the object computes the output, see Algorithms.
The output has the same data type and complexity as the input signal.
Data Types: single | double
Complex Number Support: Yes
Object Functions
To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax:
Specific to dsp.FrequencyDomainFIRFilter
visualize Visualize frequency response of individual filter stages or sum of filter paths between input and output channels
Common to All System Objects
step Run System object algorithm
release Release resources and allow changes to System object property values and input characteristics
reset Reset internal states of System object
Frequency Domain Filtering Using Overlap-Add and Overlap-Save
Filter input signal using overlap-add and overlap-save methods, and compare the outputs to the output of a time-domain FIR filter.
Design the FIR lowpass filter coefficients using the designLowpassFIR function. The sampling frequency is 8 kHz. Specify a filter order of 400 and a cutoff frequency of 2 kHz. The impulse response
has a length of 401.
order = 400;
Fs = 8000;
Fcutoff = 2000;
imp = designLowpassFIR(FilterOrder=order,CutoffFrequency=2*Fcutoff/Fs);
Create two dsp.FrequencyDomainFIRFilter objects and a dsp.FIRFilter object. Set the numerator of all the three filters to imp. Delay the FIR output by the latency of the frequency-domain filter.
fdfOA = dsp.FrequencyDomainFIRFilter(imp,Method="Overlap-add");
fdfOS = dsp.FrequencyDomainFIRFilter(imp,Method="Overlap-save");
fir = dsp.FIRFilter(Numerator=imp);
dly = dsp.Delay(Length=fdfOA.Latency);
Create two dsp.SineWave objects. The sine waves have a sample rate of 8000 Hz, frame size of 256, and frequencies of 100 Hz and 3 kHz, respectively. Create a timescope object to view the filtered
frameLen = 256;
sin_100Hz = dsp.SineWave(Frequency=100,SampleRate=Fs,...
sin_3KHz = dsp.SineWave(Frequency=3e3,SampleRate=Fs,...
ts = timescope(TimeSpanSource="property",TimeSpan=5*frameLen/Fs,...
YLimits=[-1.1 1.1],...
ChannelNames={"Overlap-add","Overlap-save","Direct-form FIR"});
Stream 1e4 frames of noisy input data. Pass this data through the frequency domain filters and the time-domain FIR filter. View the filtered outputs in the time scope.
numFrames = 1e4;
for idx = 1:numFrames
x = sin_100Hz() + sin_3KHz() + 0.01*randn(frameLen,1);
yOA = fdfOA(x);
yOS = fdfOS(x);
yFIR = fir(dly(x));
The outputs of all the three filters match exactly.
Reduce Latency Through Partitioned Numerator
Partition the impulse response length of a frequency domain FIR filter. Compare the outputs of the partitioned filter and the original filter.
Design the FIR lowpass filter coefficients using the designLowpassFIR function. The sampling frequency is 8 kHz. Specify a filter order of 4000 and a cutoff frequency of 2 kHz. The impulse response
is of length 4001.
order = 4000;
Fs = 8000;
Fcutoff = 2000;
imp = designLowpassFIR(FilterOrder=order,CutoffFrequency=2*Fcutoff/Fs);
Create a dsp.FrequencyDomainFIRFilter with coefficients set to the imp vector. The latency of this filter is given by , which is equal to 4002. By default, FFT length is equal to twice the numerator
length. This makes the latency proportional to the impulse response length.
fdfOS = dsp.FrequencyDomainFIRFilter(imp,Method="Overlap-save");
fprintf('Frequency domain filter latency is %d samples\n',fdfOS.Latency);
Frequency domain filter latency is 4002 samples
Partition the impulse response to blocks of length 256. The latency after partitioning is proportional to the block length.
fdfRL = dsp.FrequencyDomainFIRFilter(imp,Method="Overlap-save",...
fprintf('Frequency domain filter latency is %d samples\n',fdfRL.Latency);
Frequency domain filter latency is 256 samples
Compare the outputs of the two frequency domain filters. The latency of fdfOS is 4002, and the latency of fdfRL is 256. To compare the two outputs, delay the input to fdfRL by 4002 - 256 samples.
dly = dsp.Delay(Length=(fdfOS.Latency-fdfRL.Latency));
Create two dsp.SineWave objects. The sine waves have a sample rate of 8000 Hz, frame size of 256, and frequencies of 100 Hz and 3 kHz, respectively. Create a timescope object to view the filtered
frameLen = 256;
sin_100Hz = dsp.SineWave(Frequency=100,SampleRate=Fs,...
sin_3KHz = dsp.SineWave(Frequency=3e3,SampleRate=Fs,...
ts = timescope(TimeSpanSource="property",TimeSpan=5*frameLen/Fs,...
YLimits=[-1.1 1.1],...
ChannelNames={'Overlap-save With Partition','Overlap-save Without Partition'});
Stream 1e4 frames of noisy input data. Pass this data through the two frequency domain filters. View the filtered outputs in the time scope. The outputs match exactly.
numFrames = 1e4;
for idx = 1:numFrames
x = sin_100Hz() + sin_3KHz() + .1 * randn(frameLen,1);
yRL = fdfRL(dly(x));
yOS = fdfOS(x);
Specify Frequency Response of the Frequency-Domain FIR Filter
Specify the numerator coefficients of the frequency-domain FIR filter in the frequency domain. Filter the input signal using the overlap-add method. Compare the frequency-domain FIR filter output to
the corresponding time-domain FIR filter output.
Design the FIR lowpass filter coefficients using the designLowpassFIR function. The sampling frequency is 8 kHz, and the cutoff frequency of the filter is 2 kHz. The time-domain impulse response has
a length of 401. Compute the FFT of this impulse response and specify this response as the frequency response of the frequency-domain FIR filter. Set the time-domain numerator length, specified by
the NumeratorLength property, to the number of elements in the time-domain impulse response.
order = 400;
Fs = 8000;
Fcutoff = 2000;
imp = designLowpassFIR(FilterOrder=order,CutoffFrequency=2*Fcutoff/Fs);
H = fft(imp,2*numel(imp));
oa = dsp.FrequencyDomainFIRFilter(NumeratorDomain="Frequency",...
fprintf('Frequency domain filter latency is %d samples\n',oa.Latency);
Frequency domain filter latency is 402 samples
Create a dsp.FIRFilter System object and specify the numerator as the time-domain coefficients computed using the designLowpassFIR function, imp. Delay the FIR output by the latency of the
frequency-domain FIR filter.
fir = dsp.FIRFilter(Numerator=imp);
dly = dsp.Delay(Length=oa.Latency);
Create two dsp.SineWave objects. The sine waves generated have a sample rate of 8000 Hz, frame size of 256, and frequencies of 100 Hz and 3 kHz, respectively. Create a timescope object to view the
filtered outputs.
frameLen = 256;
sin_100Hz = dsp.SineWave(Frequency=100,SampleRate=Fs,...
sin_3KHz = dsp.SineWave(Frequency=3e3,SampleRate=Fs,...
ts = timescope(YLimits=[-1.1 1.1],...
ChannelNames={'Overlap-add','Direct-form FIR'});
Stream 1e4 frames of noisy input data. Pass this data through the frequency-domain FIR filter and the time-domain FIR filter. View the filtered outputs in the time scope. The outputs of both the
filters match exactly.
numFrames = 1e4;
for idx = 1:numFrames
x = sin_100Hz() + sin_3KHz() + 0.01 * randn(frameLen,1);
y1 = oa(x);
y2 = fir(dly(x));
Filter Input Signal Using 1-by-2 SIMO System
Since R2024a
Filter an input signal using a 1-by-2 SIMO system with two distinct paths between the input and each output. Partition the filters to reduce latency.
Design four lowpass FIR filters, each with a different cutoff frequency, specified in normalized frequency units. The filter order for each filter is 4000 and the sampling rate is 8000 Hz. The SIMO
system models one input channel, two output channels, and two paths between each input channel-output channel pair.
order = 4000;
Fs = 8000;
num = [ % Path 1, Output Channel 1
% Path 2, Output Channel 1
% Path 1, Output Channel 2
% Path 2, Output Channel 2
Initialize the dsp.FrequencyDomainFIRFilter object with the array of filters. Set the partition length to 256 and the number of paths to 2. The object uses the Overlap-save method by default.
obj = dsp.FrequencyDomainFIRFilter(Numerator=num,...
PartitionForReducedLatency=true, PartitionLength=256, NumPaths=2)
obj =
dsp.FrequencyDomainFIRFilter with properties:
Method: 'Overlap-save'
NumeratorDomain: 'Time'
Numerator: [4x4001 double]
NumPaths: 2
SumFilteredOutputs: true
PartitionForReducedLatency: true
PartitionLength: 256
Latency: 256
Visualize the frequency response of the individual filters.
The input contains two sinusoidal signals, each with a frame length of 256. The first sinusoidal signal has a frequency of 500 Hz (or 0.125 in normalized frequency units) and the second sinusoidal
signal has a frequency of 1000 Hz (or 0.25 in normalized frequency units).
frameLen = 256;
sin_500Hz = dsp.SineWave(Frequency=500,SampleRate=Fs,...
sin_1KHz = dsp.SineWave(Frequency=1e3,SampleRate=Fs,...
Initialize a spectrumAnalyzer object to view the spectrum of the input and the filtered output.
specScope = spectrumAnalyzer(SampleRate=Fs,PlotAsTwoSidedSpectrum=false,...
ChannelNames={'Input','Output Channel 1',...
'Output Channel 2'},...
Stream in 1e4 frames of the noisy input sinusoidal signal. The input noise is white Gaussian with a mean of 0 and a variance of 0.01. Pass the signal through the designed filter. Visualize the
spectrum of the input and output signals in the spectrum analyzer.
for idx = 1:1e4
x = sin_500Hz() + sin_1KHz() + 0.01 * randn(frameLen, 1);
y = obj(x);
specScope([x, y]);
Visualize the frequency response of the sum of all filter paths between each input channel and output channel by setting SumFilterPaths to true.
Filter Input Signal Using 3-by-2 MIMO System
Since R2023b
Design six lowpass FIR filters with varying cutoff frequencies. The filter order for each filter is 400 and the sampling rate is 8000 Hz. The system models three input channels, two output channels,
and one path between each input channel-output channel pair.
order = 400;
Fs = 8000;
num = [% Input Channel 1, Output Channel 1
% Input Channel 1, Output Channel 2
% Input Channel 2, Output Channel 1
% Input Channel 2, Output Channel 2
% Input Channel 3, Output Channel 1
% Input Channel 3, Output Channel 2
Initialize the dsp.FrequencyDomainFIRFilter object with the array of filters. Specify the filtering method as "Overlap-save". The SumFilteredOutputs property is true by default.
filt = dsp.FrequencyDomainFIRFilter(Numerator=num,...
filt =
dsp.FrequencyDomainFIRFilter with properties:
Method: 'Overlap-save'
NumeratorDomain: 'Time'
Numerator: [6x401 double]
NumPaths: 1
SumFilteredOutputs: true
PartitionForReducedLatency: false
FFTLength: []
Latency: 402
The input contains two sinusoidal signals each with a frame length of 256. The first sinusoidal signal contains tones at 100 Hz, 200 Hz, and at 300 Hz. The second sinusoidal signal contains tones at
2 kHz, 2.5 kHz, and at 3 kHz.
frameLen = 256;
sin_Hz = dsp.SineWave(Frequency=[100 200 300],SampleRate=Fs,...
sin_KHz = dsp.SineWave(Frequency=[2e3 2.5e3 3e3],SampleRate=Fs,...
Initialize a spectrumAnalyzer object to view the spectrum of the input and the filtered output.
specScope = spectrumAnalyzer(SampleRate=Fs,PlotAsTwoSidedSpectrum=false, ...
ChannelNames={'Input Channel 1','Input Channel 2','Input Channel 3', ...
'Output Channel 1','Output Channel 2'},ShowLegend=true);
Stream in 1e4 frames of the noisy input sinusoidal signal. The input noise is white Gaussian with a mean of 0 and a variance of 0.01. Pass the signal through the designed filter. Visualize the
spectrum of the input and output signals in spectrum analyzer.
for idx = 1:1e4
x = sin_Hz()+sin_KHz()+0.01*randn(frameLen,3);
y = filt(x);
Filter Input Signal Through Multiple Propagation Paths
Since R2023b
Filter an input signal through three distinct paths between the input and the output by specifying three filters to model the frequency responses across each path.
Design multiple lowpass FIR filters with varying fractional delays. The filter order for each filter is 400 and the sampling rate is 8000 Hz.
order = 400;
Fs = 8000;
num = [designFracDelayFIR(0.1,order); % Path 1
designFracDelayFIR(0.2,order); % Path 2
designFracDelayFIR(0.3,order); % Path 3
Initialize the dsp.FrequencyDomainFIRFilter object with the array of filters. Set the filtering method to "Overlap-add" and the number of propagation paths to 3. The object models a single-input
single-output (SISO) system. The SumFilteredOutputs property is true by default.
filt = dsp.FrequencyDomainFIRFilter(Numerator=num, ...
filt =
dsp.FrequencyDomainFIRFilter with properties:
Method: 'Overlap-add'
NumeratorDomain: 'Time'
Numerator: [3x400 double]
NumPaths: 3
SumFilteredOutputs: true
PartitionForReducedLatency: false
FFTLength: []
Latency: 401
The input contains two sinusoidal signals each with a frame length of 1024. The first sinusoidal signal contains a tone at 200 Hz. The second sinusoidal signal contains a tone at 4 kHz.
frameLen = 1024;
sin_200Hz = dsp.SineWave(Frequency=200,SampleRate=Fs, ...
sin_4KHz = dsp.SineWave(Frequency=4e3,SampleRate=Fs, ...
Initialize a spectrumAnalyzer object to view the spectrum of the input and the filtered output.
specScope = spectrumAnalyzer(SampleRate=Fs,PlotAsTwoSidedSpectrum=false, ...
ChannelNames={'Input Channel','Output Channel'},ShowLegend=true);
Stream in 1e3 frames of the noisy input sinusoidal signal. The input noise is white Gaussian with a mean of 0 and a variance of 0.01. Pass the signal through the designed filter. Visualize the
spectrum of the input and output signals in spectrum analyzer.
for idx = 1:1e3
x = sin_200Hz()+sin_4KHz()+0.01*randn(frameLen,1);
y = filt(x);
Overlap-save and overlap-add are the two frequency-domain FFT-based filtering methods this algorithm uses.
The overlap-save method is implemented using the following approach:
The input stream is partitioned into overlapping blocks of size FFTLen, with an overlap factor of NumLen – 1 samples. FFTLen is the FFT length and NumLen is the length of the FIR filter numerator.
The FFT of each block of input samples is computed and multiplied with the length-FFTLen FFT of the FIR numerator. The inverse fast Fourier transform (IFFT) of the result is performed, and the last
FFTLen – NumLen + 1 samples are saved. The remaining samples are dropped.
The latency of overlap-save is FFTLen – NumLen + 1. The first FFTLen – NumLen + 1 samples are equal to zero. The filtered value of the first input sample appears as the FFTLen – NumLen + 2 output
Note that the FFT length must be larger than the numerator length, and is typically set to a value much greater than NumLen.
The overlap-add method is implemented using the following approach:
The input stream is partitioned into blocks of length FFLen – NumLen + 1, with no overlap between consecutive blocks. Similar to overlap-save, the FFT of the block is computed, and multiplied by the
FFT of the FIR numerator. The IFFT of the result is then computed. The first NumLen + 1 samples are modified by adding the values of the last NumLen + 1 samples from the previous computed IFFT.
The latency of overlap-add is FFTLen – NumLen + 1. The first FFTLen – NumLen + 1 samples are equal to zero. The filtered value of the first input sample appears as the FFTLen – NumLen + 2 output
Reduce Latency Through Impulse Response Partitioning
With an FFT length that is twice the length of the FIR numerator, the latency roughly equals the length of the FIR numerator. If the impulse response is very long, the latency becomes significantly
large. However, frequency domain FIR filtering is still faster than the time-domain filtering. To mitigate the latency and make the frequency domain filtering even more efficient, the algorithm
partitions the impulse response into multiple short blocks and performs overlap-save or overlap-add on each block. The results of the different blocks are then combined to obtain the final output.
The latency of this approach is of the order of the block length, rather than the entire impulse response length. This reduced latency comes at the cost of additional computation. For more details,
see [1].
Modeling MIMO System with Multiple Propagation Paths
These diagrams present a generalized representation of a frequency-domain FIR filter using multiple filters to process signals between multiple input channels and output channels. This diagram models
multiple paths between each input channel-output channel pair.
SumFilteredOutputs is true
The algorithm passes the first input channel through the first set of R × P filters within which ((i − 1) × P + 1) to (i × P) filters compute the output for the ith output channel. The algorithm
passes the second input channel through the next set of R × P filters, and the sequence continues until the last input channel passes through the filter F.
The algorithm adds the filtered output contributions from all input channels before sending them as the respective output channel.
• The input signal is of size L-by-T.
• L is the input frame size (number of input rows).
• T is the number of input channels.
• F is the number of filters. This value is the number of rows in the filter coefficients matrix.
• Filter coefficients is a matrix of size F-by-numLen. Each row corresponds to a filter of length numLen.
• P is the number of propagation paths between each input channel and output channel.
• R is the number of output channels and equals F/(T x P).
• Output signal is of size L-by-R. Each output channel is of size L-by-1.
SumFilteredOutputs is false
The algorithm concatenates the filtered outputs and does not sum across input channels. The output is an L-by-R-by-T array that you can further process before sending it to the output. For example,
if you add elements in the L-by-R-by-T array across the third dimension, the resultant L-by-R array is same as the output signal you get when you set SumFilteredOutputs to true.
[1] Stockham, T. G., Jr. "High Speed Convolution and Correlation." Proceedings of the 1966 Spring Joint Computer Conference, AFIPS, Vol 28, 1966, pp. 229–233.
Version History
Introduced in R2017b
R2024b: New visualize function
Use the new visualize function to view the frequency response of the dsp.FrequencyDomainFIRFilter object.
R2024a: Support for filter partitioning when you specify multiple filters
Starting R2024a, you can set PartitionForReducedLatency to true when you specify multiple filters. Setting PartitionForReducedLatency to true enables partitioning on the filters to reduce latency.
R2023b: Support for multiple filters
You can model MIMO systems by specifying multiple filters in the time and frequency domains. You can also specify multiple paths between each input and output pair.
See Also | {"url":"https://de.mathworks.com/help/dsp/ref/dsp.frequencydomainfirfilter-system-object.html","timestamp":"2024-11-13T22:03:41Z","content_type":"text/html","content_length":"188601","record_id":"<urn:uuid:15fc8ba8-711a-4821-bb83-be6bc00dcd88>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00092.warc.gz"} |
Are You Smarter Than A Fifth Grader?
Keep scrolling down for answers and more stats ...
1. In what year was the Treaty of Paris signed?
2. What is 467 divided by 3?
138 with a remainder of 7 ✓
134 with a remainder or 5 ✓
155 with a remainder of 2 ✓
142 with a remainder of 6 ✓
3. What is the acronym for the order of operations?
4. What is a fraction called when the numerator is greater than the denominator?
5. What is the pattern which water goes through called?
6. What is 1,576 divided by 2?
7. When a triangle has no equal sides, what is type of triangle is it?
8. When a shape has an angle lower than 90 degrees, what is type of shape is it?
9. Do all rhombuses have parallel sides?
11. What are fractions with whole numbers called?
14. If a shape has a degree more than 90 degrees, what is it called?
15. If a shape has a 90 degree angle, what is it called? | {"url":"https://ec2-34-193-34-229.compute-1.amazonaws.com/user-quizzes/1475813/are-you-smarter-than-a-fifth-grader","timestamp":"2024-11-03T10:19:45Z","content_type":"text/html","content_length":"54866","record_id":"<urn:uuid:e10fd5cc-0aeb-4b76-b9de-9496e262eedf>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00153.warc.gz"} |
How to Create a Dynamic Table with JavaScript | ITGeared
How to Create a Dynamic Table with JavaScript
This JavaScript tutorial has two goals in mind. The first goal is to show you how to generate a dynamic HTML table, rows, and cells. The second goal is to generate a random number within a range of
numbers and store the values within the dynamically generated table.
Generating random numbers is a common question found on many forum sites. Generating random numbers is useful in many types of applications. You can use it in games that use dice, display random
images, generate random links, etc.
We will take these two goals and develop a solution that will create a dynamically generated HTML table and populate the cells with random numbers.
Creating a Dynamic HTML Table
Using JavaScript to create a dynamic table element, or any other element for that matter is fairly easy and straightforward. This tutorial is not going to cover the details related to the Document
Object Model (DOM) or how to traverse the DOM.
For the purposes of this tutorial, the code example shown below can be used to create a dynamic table element by creating the element nodes and inserting them into the document tree. If you are new
to HTML, it is a good idea to read up on the Document Object Model.
Generating Random Numbers
In JavaScript, we can use the Math.random() method to generate a random number. This method generates a random number between 0-1. Because of this, we need to multiple this value by the maximum
number that we want to include in our range.
In the example below, we took an additional step to incorporate a minimum and maximum range. In addition, we use Math.Floor() method to remove the values after the decimal point in the random value.
Because we are using the Math.floor() method, we need to add 1 to the value to ensure that the answer is within the range because the Math.floor() method rounds a number down to the nearest integer
(it simply removes values after the decimal point).
Math.floor() + 1, unlike Math.round() will ensure that the result is not generating more numbers in the higher range of our number spectrum.
Examples for Generating Random Numbers
Math.random() // Generates a random number between 0-1, such as 0.4111764545086771
Math.random() * max // If max=10, then the value is 4.111764545086771
Math.floor(Math.random() * max) // Floor rounds down so the value is now 4
Math.floor(Math.random() * max) + 1 // Add 1 due to rounding down. Possible values will be from 1-10, instead of 0-9.
Math.floor(Math.random() * (max - min + 1)) + min // We can introduce a min variable as well so we can have a range.
The following HTML example contains the HTML, CSS, and JavaScript necessary to build a dynamic HTML table. There are four variables in the JavaScript block.
You can modify the variable’s values to adjust the number of rows in the table, the number of cells within each row, and the minimum and maximum range of numbers that are used in random number
generation. The random numbers generated will be placed within each cell of the table.
<!DOCTYPE html>
#div1 {
table {
border:1px solid #7f7f7f;
td {
border:1px solid #7f7f7f;
<body >
<div id="div1"></div>
var totalRows = 5;
var cellsInRow = 5;
var min = 1;
var max = 10;
function drawTable() {
// get the reference for the body
var div1 = document.getElementById('div1');
// creates a <table> element
var tbl = document.createElement("table");
// creating rows
for (var r = 0; r < totalRows; r++) {
var row = document.createElement("tr");
// create cells in row
for (var c = 0; c < cellsInRow; c++) {
var cell = document.createElement("td");
getRandom = Math.floor(Math.random() * (max - min + 1)) + min;
var cellText = document.createTextNode(Math.floor(Math.random() * (max - min + 1)) + min);
tbl.appendChild(row); // add the row to the end of the table body
div1.appendChild(tbl); // appends <table> into <div1>
Leave a Comment | {"url":"https://www.itgeared.com/how-to-create-dynamic-html-table-javascript/","timestamp":"2024-11-08T07:59:12Z","content_type":"text/html","content_length":"182301","record_id":"<urn:uuid:25f83878-b177-4ab4-8a4a-59ce19886fdd>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00323.warc.gz"} |
Memorized by 10+ users
cardinal number
• noun
A number that represents a quantity. For example, the number of objects in a group.
- "There are seven apples on the table."
- "The team has twelve players."
List of all variants of cardinal number that leads to same result
cardinal number
cardinal numbers
origin and the way in which meanings have changed throughout history.
The term 'cardinal number' comes from the Latin word 'cardinalis', which means 'principal' or 'having rank'. Cardinal numbers are also sometimes referred to as 'counting numbers' because they
represent a count or quantity.
Any details, considerations, events or pieces of information regarding the word
1. The smallest prime cardinal number is 2.
2. The largest finite cardinal number is called 'Googol', which is 10 to the power of 100.
3. The study of cardinal numbers is called 'cardinality'.
Related Concepts
informations on related concepts or terms closely associated with the word. Discuss semantic fields or domains that the word belongs to
1. Ordinal numbers: Ordinal numbers represent position or order, such as first, second, third, etc. They are related to cardinal numbers because they both represent numbers.
2. Numerals: Numerals are symbols used to represent numbers. Cardinal numbers are a type of numeral.
Any cultural, historical, or symbolic significance of the word. Explore how the word has been used in literature, art, music, or other forms of expression.
Cardinal numbers have been used throughout history in various cultures and disciplines. In mathematics, they are used to represent quantities and are the foundation of arithmetic. In literature, they
are used to describe quantities in stories and poetry. For example, in Dante's Inferno, the nine circles of hell are described using cardinal numbers.
How to Memorize "cardinal number"
1. visualize
- To memorize cardinal numbers, try visualizing groups of objects that correspond to each number. For example, imagine seven apples on a table or twelve pizzas in a box.
2. associate
- Try associating cardinal numbers with familiar objects or concepts. For example, you might associate the number seven with a week or the number twelve with a dozen.
3. mnemonics
- Use mnemonics to help remember the order of cardinal numbers. For example, you might remember the first ten numbers by the mnemonic 'One is a bun, Two is a shoe, Three is a tree, Four is a
door, Five is a hive, Six is sticks, Seven is heaven, Eight is gate, Nine is wine, Ten is a hen'.
Memorize "cardinal number" using Dictozo
The best and recommended way to memorize cardinal number is, by using Dictozo. Just save the word in Dictozo extension and let the app handle the rest. It enhances the memorization process in two
1. Highlighting:
Whenever users encounters the saved word on a webpage, Dictozo highlights it, drawing the user's attention and reinforcing memorization.
2. Periodic Reminders:
Dictozo will send you periodic reminders to remind you the saved word, it will ask you quiz. These reminders could be in the form of notifications or emails, prompting users to recall and
reinforce their knowledge. | {"url":"https://dictozo.com/w/cardinal-number","timestamp":"2024-11-05T12:37:11Z","content_type":"text/html","content_length":"22707","record_id":"<urn:uuid:0c799d34-d7e1-4606-a8a4-045dcd29312a>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00384.warc.gz"} |
Center for Advanced Studies
arXiv Spring2020
Center for Advanced Studies Seminar on Mondays
at 14.30 at Skoltech / at 17.00 Zoom
July 13, 2020 // Zoom
Mikhail Skvortsov
(Skoltech, Landau Inst.)
Inverted pendulum driven by a horizontal random force: statistics of the never-falling trajectory and supersymmetry
We study stochastic dynamics of an inverted pendulum subject to a random force in the horizontal direction (Whitney’s problem). Considered on the entire time axis, the problem admits a unique
solution that always remains in the upper half plane. We formulate the problem of statistical description of this never-falling trajectory and solve it by a field-theoretical technique assuming a
white-noise driving. In our approach based on the supersymmetric formalism of Parisi and Sourlas, statistic properties of the never-falling trajectory are expressed in terms of the zero mode of the
corresponding transfer-matrix Hamiltonian. The emerging mathematical structure is similar to that of the Fokker-Planck equation, which however is written for the “square root” of the probability
distribution function. Our results for the statistics of the non-falling trajectory are in perfect agreement with direct numerical simulations of the stochastic pendulum equation. In the limit of
strong driving (no gravitation), we obtain an exact analytical solution for the instantaneous joint probability distribution function of the pendulum’s angle and its velocity.
This is a joint work with N. Stepanov
June 29, 2020 // Zoom
Gus Schrader (Columbia Univ.)
An alternative functional realization of spherical DAHA
The spherical subalgebra of Cherednik’s double affine Hecke algebra of type A has a functional realization in which the algebra acts on a space of symmetric Laurent polynomials by rational
q-difference operators. This representation has many useful applications e.g. to the theory of Macdonald polynomials. I’ll present an alternative functional realization of the spherical DAHA, in
which the algebra acts on a space of non-symmetric Laurent polynomials by Laurent polynomial q-difference operators. This latter representation turns out to be compatible with a natural cluster
algebra structure, in such a way that the action of the modular group on DAHA is given by cluster transformations. Based on joint work in progress with Philippe di Francesco, Rinat Kedem, and
Alexander Shapiro
June 15, 2020 // Zoom
Alexandr Buryak
(HSE Univ., Univ. of Leeds)
Generalization of the Witten conjecture and non-commutative KdV system
June 8, 2020 // Zoom
Davide Gaiotto
(Perimeter Inst.)
Examples of boundary chiral algebras
I will review some examples of boundary chiral algebras, starting from 3d Chern-Simons theory and moving on to A- and B- twists of 3d N=4 theories and to holomorphic-topological twists of 3d N=2
June 1, 2020 // Zoom
Konstantin Aleshkin
(Columbia Univ.)
Liouville quantum gravity and integrable systems
There are three main approaches to 2d quantum gravity: topological gravity, matrix models and Liouville gravity. The generalized Witten conjecture establishes the connection of the two former
approaches, and both of them are quite well understood. Correlation numbers of the Liouville gravity are integrals of certain products of conformal blocks over the moduli spaces of punctured curves,
and their direct computation is much more complicated. Nevertheless, there is a conjecture that relates all three models. This conjecture will be the main subject of the talk
May 25, 2020 // Zoom
Alexander Its
(Indiana Univ.-Purdue Univ.)
On the asymptotic analysis of the Calogero-Painlevé systems
The Calogero-Painleve systems were introduce in 2001 by K. Takasaki as a natural generalization of the classical Painleve equations to the case of the several Painlev\’e “particles” coupled via the
Calogero type interactions. In 2014, I. Rumanov discovered a remarkable fact that a particular case of the Calogero-Painleve II equation describes the Tracy-Widom distribution function for the
general β-ensembles with the even values of parameter β. Most recently, in 2017 work of M. Bertola, M. Cafasso , and V. Rubtsov, it was proven that all Calogero-Painleve systems are Lax integrable,
and hence their solutions admit a Riemann-Hilbert representation. This important observation has opened the door to rigorous asymptotic analysis of the Calogero-Painleve equations which in turn
yields the possibility of rigorous evaluation of the asymptotic behavior of the Tracy-Widom distributions for the values of beta beyond the classical β = 1, 2, 4. In the talk these recent
developments will be outlined with a special focus on the Calogero-Painleve system corresponding to β = 6.
This is a joint work with A. Prokhorov
May 18, 2020 // Zoom
Davide Gaiotto
(Perimeter Inst.)
Protected operator algebras in three-dimensional supersymmetric quantum field theory
I will review the mathematical structures which occur in twisted supersymmetric quantum field theories in three dimensions and the relations to Symplectic Duality, the theory of logarithmic chiral
algebras, Geometric Langlands duality and other topics
April 27, 2020 // Zoom
Alexander Veselov
(Loughborough Univ., Moscow State Univ., Steklov Inst.)
Automorphic Lie algebras and modular forms
I will talk about some recent results on the hyperbolic versions of the automorphic Lie algebras, using as the main example the modular group SL(2,Z) acting on the upper half-plane. The talk is based
on the ongoing joint work with Vincent Knibbeler and Sara Lombardo
April 20, 2020 // Zoom
Vladimir Fock
(Univ. of Strasbourg)
Higher complex structures
April 7, 2020 // ONLINE at 12.30 / Univ of Amsterdam
Pavlo Gavrylenko
Deautonomization of integrable systems, CFT, gauge theories, topological strings
In the first part of the talk I’m going to give some introduction into so-called isomonodromy-CFT correspondence discovered by Gamayun, Iorgov and Lisovyy. It relates solutions of some differential
equations to conformal blocks, or, by AGT correspondence, to partition functions of 4d N=2 SUSY gauge theories. In the second part I will focus more on the q-difference equations and tell some
details about construction of such equations, and also how to express their solutions in terms of q-deformed conformal blocks, or partition functions of 5d gauge theories, or partition functions of
topological strings
April 6, 2020 // Zoom
Alexander Braverman
(Perimeter Inst, Univ of Toronto, Skoltech)
Introduction to symplectic duality
March 30, 2020 // Zoom
Igor Krichever
(Skoltech, HSE Univ., Columbia Univ.)
Algebraic integrability of 2D periodic elliptic sigma-models
The zero-curvature representation of the equation of motion for the two-dimensional $O(N)$ sigma-model was discovered by Pohlmeyer. His result was extended by Zakharov and Mikhailov to a wide class
of models which includes the principle chiral field models. Note, that the models considered by these authors are sigma models with the Minkowski source. At the level of equations of motion and their
zero-curvature representation one can pass from the hyperbolic to the elliptic case by the change of the light-cone variables $\xi_{\pm} = x{\pm} t$ in the hyperbolic case to the variables $z,\zb$ in
the elliptic case. However, the solutions in the two cases are quite different, the compactness of the source $\Sigma$ playing the key role. In the talk algebraic integrability of two-dimensional
elliptic sigma-models will be presented.
The talk is based on a work in progress with Nikita Nekrasov
// Zoom meetings
March 16, 2020
Michael Finkelberg
(Skoltech, Univ. HSE)
Geometric Satake correspondence and supergroups
This is a survey of recent works of Roman Travkin
March 2, 2020
Grigori Olshanski
(Skoltech, IITP, HSE Univ.)
Macdonald polynomials and discrete point processes
I will tell about application of Macdonald polynomials to constructing infinite systems of interacting particles on a lattice
February 17, 2020
Alexandra Skripchenko
(Skoltech, Univ.HSE)
Interval exchange transformations with flips in dynamics and topology
Interval exchange transformations appeared in connection with billiard flows and measured foliations on oriented surfaces in early 60s. Despite of the very simple definition, IETs immediately became
a very important tool to study dynamics of measured foliations and even geometry of moduli spaces of algebraic curves, and their ergodic properties were widely studied.
Interval exchange transformations with flips naturally generalize the notion of IETs for non-orientable case. In my talk i will outline their known dynamical properties and discuss some open
questions related to it. I also will discuss a very fresh result about connections between a certain class of IETs with flips and Novikov’s problem of asymptotic behavior of plane sections of
3-periodic surfaces.
The talk is partly based on work in progress with Ivan Dynnikov, Pascal Hubert and Paul Mercat
February 10, 2020
Sergei Lando
(Skoltech, Univ.HSE)
Problems related to the weight system associated to the Lie algebra sl(2)
A weight system is a function on chord diagrams satisfying Vassiliev’s 4-term relations. A construction due to D.Bar-Natan and M.Kontsevich allows one to associate a weight system to any semisimple
Lie algebra. The simplest nontrivial such weight system is the one associated to the Lie algebra sl(2). It comes from a knot invariant well known under the name of colored Jones polynomial. This
weight system takes values in the center of the universal enveloping algebra of sl(2), which is the algebra of polynomials in a single variable (the Casimir element). Already this weight system is
extremely nontrivial, and the talk will be devoted mainly to unsolved problems about it. All necessary notions will be define
February 3, 2020
Senya Shlosman
(Skoltech, Aix Marseille Univ.)
The Ising crystal and the Airy diffusion
Spring 2022
Fall 2021
Spring 2021
Fall 2020
Fall 2019
Spring 2019
Fall 2018
Spring 2018
Fall 2017
Spring 2017
Fall 2016 | {"url":"https://crei.skoltech.ru/cas/calendar/sem-mon/arxiv/spr20/","timestamp":"2024-11-05T19:06:21Z","content_type":"text/html","content_length":"74822","record_id":"<urn:uuid:93d2e7d7-5e77-4022-9323-2d1b7af36c98>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00359.warc.gz"} |
Code behind 'Camera.ViewportPointToRay'?
I have a situation where I’m dealing with just the matrices behind camera transformations, but not actual cameras. I already have working equivalents for the ViewportToWorld and WorldToViewport
methods, as well as a camera projection matrix, a worldToLocal matrix, and a localToWorld matrix.
I’d like to have the functionality of the method ‘ViewportPointToRay’ in this class also, does anyone know how it might work behind the scenes?
Alternatively, if anyone knows how to find the world coordinates for where the corners of the viewport intersect with a plane at y = 0 using only matrices and not the ‘ViewportPointToRay’ method,
that would also work!
For reference here is a gizmo I’m drawing just to show the frustum where y = 0. I basically want to not have to reference a camera in this method.
public void OnDrawGizmos()
Plane plane = new Plane(Vector3.up, Vector3.zero);
Vector3 bottomLeftViewport = new Vector3(0f, 0f, 0f);
Vector3 topLeftViewport = new Vector3(0f, 1f, 0f);
Vector3 topRightViewport = new Vector3(1f, 1f, 0f);
Vector3 bottomRightViewport = new Vector3(1f, 0f, 0f);
Ray ray;
float dist;
Vector3[] maxZoomFrustum = new Vector3[4];
ray = m_camera.ViewportPointToRay(bottomLeftViewport);
plane.Raycast(ray, out dist);
maxZoomFrustum[0] = ray.GetPoint(dist);
ray = m_camera.ViewportPointToRay(topLeftViewport);
plane.Raycast(ray, out dist);
maxZoomFrustum[1] = ray.GetPoint(dist);
ray = m_camera.ViewportPointToRay(topRightViewport);
plane.Raycast(ray, out dist);
maxZoomFrustum[2] = ray.GetPoint(dist);
ray = m_camera.ViewportPointToRay(bottomRightViewport);
plane.Raycast(ray, out dist);
maxZoomFrustum[3] = ray.GetPoint(dist);
Handles.color = Color.green.SetAlpha(0.3f);
ViewportToRay would work very similar to ViewportToWorld, but with the added step of finding the direction of said point from the camera;
Ray ViewportPointToRay (Vector3 pos, Camera cam)
//Remap to NDC-space [-1,1]
pos = pos * 2.0f - Vector3.one;
pos.z = 1f;
//Find the world-space position of the point at the camera's far plane
Vector3 worldPos = cam.cameraToWorldMatrix.MultiplyPoint (cam.projectionMatrix.inverse.MultiplyPoint (pos));
//The ray's origin is just the camera's position. Alternatively, you could use the same
//matrix logic above and find the same point at the camera's near plane and use that instead
Vector3 origin = cam.transform.position;
return new Ray (origin, worldPos - origin);
The Unity documentation for the internal function makes a very interesting point; the z-position is ignored. In this case, because distance is irrelevant in relation to a ray, we just assume the
viewport point lies on the camera’s far plane.
I’ve just tested this one and it works quite well. The numerical error compared to Unity’s own ViewportPointToRay method is at the 5th decimal place. Drawing the rays 100 world units into the scene
you can barely see the error. Unity might use some different internal order. However the results are pretty spot on.
public static Ray ViewportPointToRay(Vector2 aP, Matrix4x4 aProj, Matrix4x4 aCam)
var m = aProj * aCam;
var mInv = m.inverse;
// near clipping plane point
Vector4 p = new Vector4(aP.x*2-1, aP.y*2-1, -1, 1f);
var p0 = mInv * p;
p0 /= p0.w;
// far clipping plane point
p.z = 1;
var p1 = mInv * p;
p1 /= p1.w;
return new Ray(p0, (p1-p0).normalized);
Note that aProj need to be the camera’s projection matrix and aCam need to be the camera’s worldToCameraMatrix. Also note that worldToCameraMatrix is just the camera’s worldToLocal matrix but with
the z axis inverted like this:
var w2c = Matrix4x4.Scale(new Vector3(1, 1, -1)) * cam.transform.worldToLocalMatrix;
I’ve tested this method with a perspective and an orthographic camera and it works like expected. | {"url":"https://discussions.unity.com/t/code-behind-camera-viewportpointtoray/230871","timestamp":"2024-11-11T01:57:48Z","content_type":"text/html","content_length":"32580","record_id":"<urn:uuid:a4ae7098-fe20-45c5-9edd-64bff709686d>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00633.warc.gz"} |
40.000 Lettres Types Correspondance Serial Keyl | 96GuitarStudio
40.000 Lettres Types Correspondance Serial Keyl
Download File >> https://tlniurl.com/2t7gPi
Implementing Domain-driven Design Vernon PdfDOWNLOAD: ? domain-driven design vernon pdf. implementing domain-driven design vernon. vaughn vernon implementing domain-driven design pdf. vaughn vernon
implementing domain-driven design github 28d79c4b43 40.000 Lettres Types Correspondance Serial Keylcisco csr 1000v license keygen 81Junai junbi Ch03.pdf - Google
Suppose we wish to calculate seasonal factors and a trend, then calculate the forecasted sales for July in year 5. The first step in the seasonal forecast will be to compute monthly indices using the
past four-year sales. For example, for January the index is: S(Jan) = D(Jan)/D = 208.6/181.84 = 1.14, where D(Jan) is the mean of all four January months, and D is the grand mean of all past
four-year sales.Similar calculations are made for all other months. Indices are summarized in the last row of the above table. Notice that the mean (average value) for the monthly indices adds up to
12, which is the number of periods in a year for the monthly data. Next, a linear trend often is calculated using the annual sales: Y = 1684 + 200.4T, The main question is whether this equation
represents the trend. Determination of the Annual Trend for the Numerical Example Year No: Actual Sales Linear Regression Quadratic Regression 1 1972 1884 1981 2 2016 2085 1988 3 2160 2285 2188 4
2592 2486 2583 Often fitting a straight line to the seasonal data is misleading. By constructing the scatter-diagram, we notice that a Parabola might be a better fit. Using the Polynomial Regression
JavaScript, the estimated quadratic trend is:Y = 2169 - 284.6T + 97T2Predicted values using both the linear and the quadratic trends are presented in the above tables. Comparing the predicted values
of the two models with the actual data indicates that the quadratic trend is a much superior fit than the linear one, as often expected.We can now forecast the next annual sales; which, corresponds
to year 5, or T = 5 in the above quadratic equation: Y = 2169 - 284.6(5) + 97(5)2 = 3171 sales for the following year. The average monthly sales during next year is, therefore: 3171/12 = 264.25.
Finally, the forecast for month of July is calculated by multiplying the average monthly sales forecast by the July seasonal index, which is 0.79; i.e., (264.25).(0.79) or 209. You might like to use
the Seasonal Index JavaScript to check your hand computation. As always you must first use Plot of the Time Series as a tool for the initial characterization process.For testing seasonality based on
seasonal index, you may like to use the Test for Seasonality JavaScript.Trend Removal and Cyclical Analysis: The cycles can be easily studied if the trend itself is removed. This is done by
expressing each actual value in the time series as a percentage of the calculated trend for the same date. The resulting time series has no trend, but oscillates around a central value of
100.Decomposition Analysis: It is the pattern generated by the time series and not necessarily the individual data values that offers to the manager who is an observer, a planner, or a controller of
the system. Therefore, the Decomposition Analysis is used to identify several patterns that appear simultaneously in a time series. A variety of factors are likely influencing data. It is very
important in the study that these different influences or components be separated or decomposed out of the 'raw' data levels. In general, there are four types of components in time series analysis:
Seasonality, Trend, Cycling and Irregularity. Xt = St . Tt. Ct . IThe first three components are deterministic which are called "Signals", while the last component is a random variable, which is
called "Noise". To be able to make a proper forecast, we must know to what extent each component is present in the data. Hence, to understand and measure these components, the forecast procedure
involves initially removing the component effects from the data (decomposition). After the effects are measured, making a forecast involves putting back the components on forecast estimates
(recomposition). The time series decomposition process is depicted by the following flowchart: Definitions of the major components in the above flowchart:Seasonal variation: When a repetitive pattern
is observed over some time horizon, the series is said to have seasonal behavior. Seasonal effects are usually associated with calendar or climatic changes. Seasonal variation is frequently tied to
yearly cycles.Trend: A time series may be stationary or exhibit trend over time. Long-term trend is typically modeled as a linear, quadratic or exponential function. Cyclical variation: An upturn or
downturn not tied to seasonal variation. Usually results from changes in economic conditions.Seasonalities are regular fluctuations which are repeated from year to year with about the same timing and
level of intensity. The first step of a times series decomposition is to remove seasonal effects in the data. Without deseasonalizing the data, we may, for example, incorrectly infer that recent
increase patterns will continue indefinitely; i.e., a growth trend is present, when actually the increase is 'just because it is that time of the year'; i.e., due to regular seasonal peaks. To
measure seasonal effects, we calculate a series of seasonal indexes. A practical and widely used method to compute these indexes is the ratio-to-moving-average approach. From such indexes, we may
quantitatively measure how far above or below a given period stands in comparison to the expected or 'business as usual' data period (the expected data are represented by a seasonal index of 100%, or
1.0). Trend is growth or decay that is the tendencies for data to increase or decrease fairly steadily over time. Using the deseasonalized data, we now wish to consider the growth trend as noted in
our initial inspection of the time series. Measurement of the trend component is done by fitting a line or any other function. This fitted function is calculated by the method of least squares and
represents the overall trend of the data over time.Cyclic oscillations are general up-and-down data changes; due to changes e.g., in the overall economic environment (not caused by seasonal effects)
such as recession-and-expansion. To measure how the general cycle affects data levels, we calculate a series of cyclic indexes. Theoretically, the deseasonalized data still contains trend, cyclic,
and irregular components. Also, we believe predicted data levels using the trend equation do represent pure trend effects. Thus, it stands to reason that the ratio of these respective data values
should provide an index which reflects cyclic and irregular components only. As the business cycle is usually longer than the seasonal cycle, it should be understood that cyclic analysis is not
expected to be as accurate as a seasonal analysis. Due to the tremendous complexity of general economic factors on long term behavior, a general approximation of the cyclic factor is the more
realistic aim. Thus, the specific sharp upturns and downturns are not so much the primary interest as the general tendency of the cyclic effect to gradually move in either direction. To study the
general cyclic movement rather than precise cyclic changes (which may falsely indicate more accurately than is present under this situation), we 'smooth' out the cyclic plot by replacing each index
calculation often with a centered 3-period moving average. The reader should note that as the number of periods in the moving average increases, the smoother or flatter the data become. The choice of
3 periods perhaps viewed as slightly subjective may be justified as an attempt to smooth out the many up-and-down minor actions of the cycle index plot so that only the major changes remain.
Irregularities (I) are any fluctuations not classified as one of the above. This component of the time series is unexplainable; therefore it is unpredictable. Estimation of I can be expected only
when its variance is not too large. Otherwise, it is not possible to decompose the series. If the magnitude of variation is large, the projection for the future values will be inaccurate. The best
one can do is to give a probabilistic interval for the future value given the probability of I is known.Making a Forecast: At this point of the analysis, after we have completed the study of the time
series components, we now project the future values in making forecasts for the next few periods. The procedure is summarized below.Step 1: Compute the future trend level using the trend
equation.Step 2: Multiply the trend level from Step 1 by the period seasonal index to include seasonal effects.Step 3: Multiply the result of Step 2 by the projected cyclic index to include cyclic
effects and get the final forecast result. Exercise your knowledge about how to forecast by decomposition method? by using a sales time series available at An Illustrative Application (a pdf file).
Therein you will find a detailed workout numerical example in the context of the sales time series which consists of all components including a cycle. Smoothing Techniques: A time series is a
sequence of observations, which are ordered in time. Inherent in the collection of data taken over time is some form of random variation. There exist methods for reducing of canceling the effect due
to random variation. A widely used technique is "smoothing". This technique, when properly applied, reveals more clearly the underlying trend, seasonal and cyclic components.Smoothing techniques are
used to reduce irregularities (random fluctuations) in time series data. They provide a clearer view of the true underlying behavior of the series. Moving averages rank among the most popular
techniques for the preprocessing of time series. They are used to filter random "white noise" from the data, to make the time series smoother or even to emphasize certain informational components
contained in the time series. Exponential smoothing is a very popular scheme to produce a smoothed time series. Whereas in moving averages the past observations are weighted equally, Exponential
Smoothing assigns exponentially decreasing weights as the observation get older. In other words, recent observations are given relatively more weight in forecasting than the older observations.
Double exponential smoothing is better at handling trends. Triple Exponential Smoothing is better at handling parabola trends.Exponential smoothing is a widely method used of forecasting based on the
time series itself. Unlike regression models, exponential smoothing does not imposed any deterministic model to fit the series other than what is inherent in the time series itself.Simple Moving
Averages: The best-known forecasting methods is the moving averages or simply takes a certain number of past periods and add them together; then divide by the number of periods. Simple Moving
Averages (MA) is effective and efficient approach provided the time series is stationary in both mean and variance. The following formula is used in finding the moving average of order n, MA(n) for a
period t+1, MAt+1 = [Dt + Dt-1 + ... +Dt-n+1] / n where n is the number of observations used in the calculation.The forecast for time period t + 1 is the forecast for all future time periods.
However, this forecast is revised only when new data becomes available.You may like using Forecasting by Smoothing Javasript, and then performing some numerical experimentation for a deeper
understanding of these concepts.Weighted Moving Average: Very powerful and economical. They are widely used where repeated forecasts required-uses methods like sum-of-the-digits and trend adjustment
methods. As an example, a Weighted Moving Averages is:Weighted MA(3) = w1.Dt + w2.Dt-1 + w3.Dt-2where the weights are any positive numbers such that: w1 + w2 + w3 = 1. A typical weights for this
example is, w1 = 3/(1 + 2 + 3) = 3/6, w2 = 2/6, and w3 = 1/6. You may like using Forecasting by Smoothing JavaScript, and then performing some numerical experimentation for a deeper understanding of
the concepts.An illustrative numerical example: The moving average and weighted moving average of order five are calculated in the following table.WeekSales ($1000)MA(5)WMA(5)
1105--2100--3105--495--51001011006959998710510010081201031079115107111101251171161112012011912120120119Moving Averages with Trends: Any method of time series analysis involves a different degree of
model complexity and presumes a different level of comprehension about the underlying trend of the time series. In many business time series, the trend in the smoothed series using the usual moving
average method indicates evolving changes in the series level to be highly nonlinear. In order to capture the trend, we may use the Moving-Average with Trend (MAT) method. The MAT method uses an
adaptive linearization of the trend by means of incorporating a combination of the local slopes of both the original and the smoothed time series. The following formulas are used in MAT method:X(t):
The actual (historical) data at time t.M(t) = å X(i) / n i.e., finding the moving average smoothing M(t) of order n, which is a positive odd integer number ³ 3, for i from t-n+1 to t.F(t) = the
smoothed series adjusted for any local trendF(t) = F(t-1) + a [(n-1)X(t) + (n+1)X(t-n) -2nM(t-1)], where constant coefficient a = 6/(n3 n). with initial conditions F(t) =X(t) for all t £ n,
Finally, the h-step-a-head forecast f(t+h) is:F(t+h) = M(t) + [h + (n-1)/2] F(t).To have a notion of F(t), notice that the inside bracket can be written as:n[X(t) F(t-1)] + n[X(t-m) F(t-1)] + [X
(t-m) X(t)],this is, a combination of three rise/fall terms. In making a forecast, it is also important to provide a measure of how accurate one can expect the forecast to be. The statistical
analysis of the error terms known as residual time-series provides measure tool and decision process for modeling selection process.In applying MAT method sensitivity analysis is needed to determine
the optimal value of the moving average parameter n, i.e., the optimal number of period m. The error time series allows us to study many of its statistical properties for goodness-of-fit decision.
Therefore it is important to evaluate the nature of the forecast error by using the appropriate statistical tests. The forecast error must be a random variable distributed normally with mean close to
zero and a constant variance across time. For computer implementation of the Moving Average with Trend (MAT) method one may use the forecasting (FC) module of WinQSB which is commercial grade
stand-alone software. WinQSB s approach is to first select the model and then enter the parameters and the data. With the Help features in WinQSB there is no learning-curve one just needs a few
minutes to master its useful features.Exponential Smoothing Techniques: One of the most successful forecasting methods is the exponential smoothing (ES) techniques. Moreover, it can be modified
efficiently to use effectively for time series with seasonal patterns. It is also easy to adjust for past errors-easy to prepare follow-on forecasts, ideal for situations where many forecasts must be
prepared, several different forms are used depending on presence of trend or cyclical variations. In short, an ES is an averaging technique that uses unequal weights; however, the weights applied to
past observations decline in an exponential manner.Single Exponential Smoothing: It calculates the smoothed series as a damping coefficient times the actual series plus 1 minus the damping
coefficient times the lagged value of the smoothed series. The extrapolated smoothed series is a constant, equal to the last value of the smoothed series during the period when actual data on the
underlying series are available. While the simple Moving Average method is a special case of the ES, the ES is more parsimonious in its data usage. Ft+1 = a Dt + (1 - a) Ftwhere: Dt is the actual
value Ft is the forecasted value a is the weighting factor, which ranges from 0 to 1 t is the current time period.Notice that the smoothed value becomes the forecast for period t + 1.A small a
provides a detectable and visible smoothing. While a large a provides a fast response to the recent changes in the time series but provides a smaller amount of smoothing. Notice that the exponential
smoothing and simple moving average techniques will generate forecasts having the same average age of information if moving average of order n is the integer part of (2-a)/a.An exponential smoothing
over an already smoothed time series is called double-exponential smoothing. In some cases, it might be necessary to extend it even to a triple-exponential smoothing. While simple exponential
smoothing requires stationary condition, the double-exponential smoothing can capture linear trends, and triple-exponential smoothing can handle almost all other business time series. Double
Exponential Smoothing: It applies the process described above three to account for linear trend. The extrapolated series has a constant growth rate, equal to the growth of the smoothed series at the
end of the data period.Triple Double Exponential Smoothing: It applies the process described above three to account for nonlinear trend.Exponenentially Weighted Moving Average: Suppose each day's
forecast value is based on the previous day's value so that the weight of each observation drops exponentially the further back (k) in time it is. The weight of any individual isa(1 - a)k, where a is
the smoothing constant.An exponenentially weighted moving average with a smoothing constant a, corresponds roughly to a simple moving average of length n, where a and n are related bya = 2/(n+1) OR n
= (2 - a)/a.Thus, for example, an exponenentially weighted moving average with a smoothing constant equal to 0.1 would correspond roughly to a 19 day moving average. And a 40-day simple moving
average would correspond roughly to an exponentially weighted moving average with a smoothing constant equal to 0.04878.This approximation is helpful, however, it is harder to update, and may not
correspond to an optimal forecast.Smoothing techniques, such as the Moving Average, Weighted Moving Average, and Exponential Smoothing, are well suited for one-period-ahead forecasting as implemented
in the following JavaScript: Forecasting by Smoothing.Holt's Linear Exponential Smoothing Technique: Suppose that the series { yt } is non-seasonal but does display trend. Now we need to estimate
both the current level and the current trend. Here we define the trend Tt at time t as the difference between the current and previous level. The updating equations express ideas similar to those for
exponential smoothing. The equations are: Lt = a yt + (1 - a) Ftfor the level and Tt = b ( Lt - Lt-1 ) + (1 - b) Tt-1for the trend. We have two smoothing parameters a and b; both must be positive and
less than one. Then the forecasting for k periods into the future is: Fn+k = Ln + k. TnGiven that the level and trend remain unchanged, the initial (starting) values are T2 = y2 y1, L2 = y2, and F3
= L2 + T2 An Application: A company s credit outstanding has been increasing at a relatively constant rate over time:Applying the Holt s techniques with smoothing with parameters a = 0.7 and b = 0.6,
a graphical representation of the time series, its forecasts, together wit a few-step ahead forecasts, are depicted below: Year-end Past credit Yearcredit (in millions)
1133215531654171519462317274831293131033311343 K-Period Ahead Forecast KForecast (in millions)1359.72372.63385.44398.3 Demonstration of the calculation procedure, with a = 0.7 and b = 0.6 L2 = y2 =
155, T2 = y2 - y1 = 155 133 = 22L3 = .7 y3 + (1 - .7) F3, T3 = .6 ( L3 - L2 ) + (1 - .6) T2 F4 = L3 + T3, F3 = L2 + T2 L3 = .7 y3 + (1 - .7) F3, T3 = .6 ( L3 - L2 ) + (1 - .6) T2 , F4 = L3 + T3 The
Holt-Winters' Forecasting Technique: Now in addition to Holt parameters, suppose that the series exhibits multiplicative seasonality and let St be the multiplicative seasonal factor at time t.
Suppose also that there are s periods in a year, so s=4 for quarterly data and s=12 for monthly data. St-s is the seasonal factor in the same period last year.In some time series, seasonal variation
is so strong it obscures any trends or cycles, which are very important for the understanding of the process being observed. Winters smoothing method can remove seasonality and makes long term
fluctuations in the series stand out more clearly. A simple way of detecting trend in seasonal data is to take averages over a certain period. If these averages change with time we can say that there
is evidence of a trend in the series.The updating equations are: Lt = a (Lt-1 + Tt-1) + (1 - a) yt / St-sfor the level, Tt = b ( Lt - Lt-1 ) + (1 - b) Tt-1for the trend, and St = g St-s + (1- g) yt /
Lt for the seasonal factor. We now have three smoothing parameters a , b, and g all must be positive and less than one. To obtain starting values, one may use the first a few year data. For example
for quarterly data, to estimate the level, one may use a centered 4-point moving average: L10 = (y8 + 2y9 + 2y10 + 2y11 + y12) / 8as the level estimate in period 10. This will extract the seasonal
component from a series with 4 measurements over each year. T10 = L10 - L9as the trend estimate for period 10. S7 = (y7 / L7 + y3 / L3 ) / 2as the seasonal factor in period 7. Similarly, S8 = (y8 /
L8 + y4 / L4 ) / 2, S9 = (y9 / L9 + y5 / L5 ) / 2, S10 = (y10 / L10 + y6 / L6 ) / 2For Monthly Data, the correspondingly we use a centered 12-point moving average: L30 = (y24 + 2y25 + 2y26 +.....+
2y35 + y36) / 24as the level estimate in period 30. T30 = L30 - L29as the trend estimate for period 30. S19 = (y19 / L19 + y7 / L7 ) / 2as the estimate of the seasonal factor in period 19, and so on,
up to 30:S30 = (y30 / L30 + y18 / L18 ) / 2Then the forecasting k periods into the future is: Fn+k = (Ln + k. Tn ) St+k-s, for k = 1, 2, ....,sForecasting by the Z-Chart Another method of short-term
forecasting is the use of a Z-Chart. The name Z-Chart arises from the fact that the pattern on such a graph forms a rough letter Z. For example, in a situation where the sales volume figures for one
product or product group for the first nine months of a particular year are available, it is possible, using the Z-Chart, to predict the total sales for the year, i.e. to make a forecast for the next
three months. It is assumed that basic trading conditions do not alter, or alter on anticipated course and that any underlying trends at present being experienced will continue. In addition to the
monthly sales totals for the nine months of the current year, the monthly sales figures for the previous year are also required and are shown in following table: Year Month 2003 $ 2004 $ January 940
520 February 580 380 March 690 480 April 680 490 May 710 370 June 660 390 July 630 350 August 470 440 September 480 360 October 590 November 450 December 430 Total Sales 2003 7310 The monthly sales
for the first nine months of a particular year together with the monthly sales for the previous year.From the data in the above table, another table can be derived and is shown as follows:The first
column in Table 18 relates to actual sales; the seconds to the cumulative total which is found by adding each month s sales to the total of preceding sales. Thus, January 520 plus February 380
produces the February cumulative total of 900; the March cumulative total is found by adding the March sales of 480 to the previous cumulative total of 900 and is, therefore, 1,380.The 12 months
moving total is found by adding the sales in the current to the total of the previous 12 months and then subtracting the corresponding month for last year. Month 2004 Actual Sales $ Cumulative Total
$ 12 months moving total $ January 520 520 6890 February 380 900 6690 March 480 1380 6480 April 490 1870 6290 May 370 2240 5950 June 390 2630 5680 July 350 2980 5400 August 440 3420 5370 September
360 3780 5250 Showing processed monthly sales data, producing a cumulative total and a 12 months moving total.For example, the 12 months moving total for 2003 is 7,310 (see the above first table).
Add to this the January 2004 item 520 which totals 7,830 subtract the corresponding month last year, i.e. the January 2003 item of 940 and the result is the January 2004, 12 months moving total,
6,890.The 12 months moving total is particularly useful device in forecasting because it includes all the seasonal fluctuations in the last 12 months period irrespective of the month from which it is
calculated. The year could start in June and end the next July and contain all the seasonal patterns.The two groups of data, cumulative totals and the 12 month moving totals shown in the above table
are then plotted (A and B), along a line that continues their present trend to the end of the year where they meet: Forecasting by the Z-Chart Click on the image to enlarge it In the above figure, A
and B represent the 12 months moving total,and the cumulative data, respectively, while their projections into future are shown by the doted lines. Notice that, the 12 months accumulation of sales
figures is bound to meet the 12 months moving total as they represent different ways of obtaining the same total. In the above figure these lines meet at $4,800, indicating the total sales for the
year and forming a simple and approximate method of short-term forecasting. The above illustrative monthly numerical example approach might be adapted carefully to your set of time series data with
any equally spaced intervals. As an alternative to graphical method, one may fit a linear regression based on the data of lines A and/or B available from the above table, and then extrapolate to
obtain short-term forecasting with a desirable confidence level.Concluding Remarks: A time series is a sequence of observations which are ordered in time. Inherent in the collection of data taken
over time is some form of random variation. There exist methods for reducing of canceling the effect due to random variation. Widely used techniques are "smoothing". These techniques, when properly
applied, reveals more clearly the underlying trends. In other words, smoothing techniques are used to reduce irregularities (random fluctuations) in time series data. They provide a clearer view of
the true underlying behavior of the series.Exponential smoothing has proven through the years to be very useful in many forecasting situations. Holt first suggested it for non-seasonal time series
with or without trends. Winters generalized the method to include seasonality, hence the name: Holt-Winters Method. Holt-Winters method has 3 updating equations, each with a constant that ranges from
(0 to 1). The equations are intended to give more weight to recent observations and less weight to observations further in the past. This form of exponential smoothing can be used for
less-than-annual periods (e.g., for monthly series). It uses smoothing parameters to estimate the level, trend, and seasonality. Moreover, there are two different procedures, depending on whether
seasonality is modeled in an additive or multiplicative way. We will present its multiplicative version; the additive can be applied on an ant-logarithmic function of the data.The single exponential
smoothing emphasizes the short-range perspective; it sets the level to the last observation and is based on the condition that there is no trend. The linear regression, which fits a least squares
line to the historical data (or transformed historical data), represents the long range, which is conditioned on the basic trend. Holt s linear exponential smoothing captures information about recent
trend. The parameters in Holt s model are the levels-parameter which should be decreased when the amount of data variation is large, and trends-parameter should be increased if the recent trend
direction is supported by the causal some factors.Since finding three optimal, or even near optimal, parameters for updating equations is not an easy task, an alternative approach to Holt-Winters
methods is to deseasonalize the data and then use exponential smoothing. Moreover, in some time series, seasonal variation is so strong it obscures any trends or cycles, which are very important for
the understanding of the process being observed. Smoothing can remove seasonality and makes long term fluctuations in the series stand out more clearly. A simple way of detecting trend in seasonal
data is to take averages over a certain period. If these averages change with time we can say that there is evidence of a trend in the series.How to compare several smoothing methods: Although there
are numerical indicators for assessing the accuracy of the forecasting technique, the most widely approach is in using visual comparison of several forecasts to assess their accuracy and choose among
the various forecasting methods. In this approach, one must plot (using, e.g., Excel) on the same graph the original values of a time series variable and the predicted values from several different
forecasting methods, thus facilitating a visual comparison.You may like using Forecasting by Smoothing Techniques JavaScript.Further Reading:Yar, M and C. Chatfield (1990), Prediction intervals for
the Holt-Winters forecasting procedure, International Journal of Forecasting 6, 127-137.Filtering Techniques: Often on must filters an entire, e.g., financial time series with certain filter
specifications to extract useful information by a transfer function expression. The aim of a filter function is to filter a time series in order to extract useful information hidden in the data, such
as cyclic component. The filter is a direct implementation of and input-output function.Data filtering is widely used as an effective and efficient time series modeling tool by applying an
appropriate transformation technique. Most time series analysis techniques involve some form of filtering out noise in order to make the pattern more salient.Differencing: A special type of filtering
which is particularly useful for removing a trend, is simply to difference a given time series until it becomes stationary. This method is useful in Box-Jenkins modeling. For non-seasonal data, first
order differencing is usually sufficient to attain apparent stationarity, so that the new series is formed from the original series.Adaptive Filtering Any smoothing techniques such as moving average
which includes a method of learning from past errors can respond to changes in the relative importance of trend, seasonal, and random factors. In the adaptive exponential smoothing method, one may
adjust a to allow for shifting patterns.Hodrick-Prescott Filter: The Hodrick-Prescott filter or H-P filter is an algorithm for choosing smoothed values for a time series. The H-P filter chooses
smooth values {st} for the series {xt} of T elements (t = 1 to T) that solve the following minimization problem: min { {(xt-st)2 ... etc. } the positive parameter l is the penalty on variation, where
variation is measured by the average squared second difference. A larger value of l makes the resulting {st} series smoother; less high-frequency noise. The commonly applied value of l is 1600. For
the study of business cycles one uses not the smoothed series, but the jagged series of residuals from it. H-P filtered data shows less fluctuation than first-differenced data, since the H-P filter
pays less attention to high frequency movements. H-P filtered data also shows more serial correlation than first-differenced data. This is a smoothing mechanism used to obtain a long term trend
component in a time series. It is a way to decompose a given series into stationary and non-stationary components in such a way that their sum of squares of the series from the non-stationary
component is minimum with a penalty on changes to the derivatives of the non-stationary component. Kalman Filter: The Kalman filter is an algorithm for sequentially updating a linear projection for a
dynamic system that is in state-space representation. Application of the Kalman filter transforms a system of the following two-equation kind into a more solvable form: x t+1=Axt+Cw t+1, and yt=
Gxt+vt in which: A, C, and G are matrices known as functions of a parameter q about which inference is desired where: t is a whole number, usually indexing time; xt is a true state variable, hidden
from the econometrician; yt is a measurement of x with scaling factor G, and measurement errors vt, wt are innovations to the hidden xt process, E(wt+1wt')=1 by normalization (where, ' means the
transpose), E(vtvt)=R, an unknown matrix, estimation of which is necessary but ancillary to the problem of interest, which is to get an estimate of q. The Kalman filter defines two matrices St and Kt
such that the system described above can be transformed into the one below, in which estimation and inference about q and R is more straightforward; e.g., by regression analysis: zt+1=Azt+Kat, and yt
=Gzt+at where zt is defined to be Et-1xt, at is defined to be yt-E(yt-1yt, K is defined to be limit Kt as t approaches infinity. The definition of those two matrices St and Kt is itself most of the
definition of the Kalman filters: Kt=AStG'(GStG'+R)-1, and St-1=(A-KtG)St (A-KtG)'+CC'+Kt RKt' , Kt is often called the Kalman gain. Further Readings:Hamilton J, Time Series Analysis, Princeton
University Press, 1994.Harvey A., Forecasting, Structural Time Series Models and the Kalman Filter, Cambridge University Press, 1991. Cardamone E., From Kalman to Hodrick-Prescott Filter, 2006.Mills
T., The Econometric Modelling of Financial Time Series, Cambridge University Press, 1995. Neural Network: For time series forecasting, the prediction model of order p, has the general form: Dt = f
(Dt-1, Dt-1,..., Dt-p) + etNeural network architectures can be trained to predict the future values of the dependent variables. What is required are design of the network paradigm and its parameters.
The multi-layer feed-forward neural network approach consists of an input layer, one or several hidden layers and an output layer. Another approach is known as the partially recurrent neural network
that can learn sequences as time evolves and responds to the same input pattern differently at different times, depending on the previous input patterns as well. None of these approaches is superior
to the other in all cases; however, an additional dampened feedback, that possesses the characteristics of a dynamic memory, will improve the performance of both approaches. Outlier Considerations:
Outliers are a few observations that are not well fitted by the "best" available model. In practice, any observation with standardized residual greater than 2.5 in absolute value is a candidate for
being an outlier. In such case, one must first investigate the source of data. If there is no doubt about the accuracy or veracity of the observation, then it should be removed, and the model should
be refitted. Whenever data levels are thought to be too high or too low for "business as usual", we call such points the outliers. A mathematical reason to adjust for such occurrences is that the
majority of forecast techniques are based on averaging. It is well known that arithmetic averages are very sensitive to outlier values; therefore, some alteration should be made in the data before
continuing. One approach is to replace the outlier by the average of the two sales levels for the periods, which immediately come before and after the period in question and put this number in place
of the outlier. This idea is useful if outliers occur in the middle or recent part of the data. However, if outliers appear in the oldest part of the data, we may follow a second alternative, which
is to simply throw away the data up to and including the outlier.In light of the relative complexity of some inclusive but sophisticated forecasting techniques, we recommend that management go
through an evolutionary progression in adopting new forecast techniques. That is to say, a simple forecast method well understood is better implemented than one with all inclusive features but
unclear in certain facets. Modeling and Simulation: Dynamic modeling and simulation is the collective ability to understand the system and implications of its changes over time including forecasting.
System Simulation is the mimicking of the operation of a real system, such as the day-to-day operation of a bank, or the value of a stock portfolio over a time period. By advancing the simulation run
into the future, managers can quickly find out how the system might behave in the future, therefore making decisions as they deem appropriate. In the field of simulation, the concept of "principle of
computational equivalence" has beneficial implications for the decision-maker. Simulated experimentation accelerates and replaces effectively the "wait and see" anxieties in discovering new insight
and explanations of future behavior of the real system.Probabilistic Models: Uses probabilistic techniques, such as Marketing Research Methods, to deal with uncertainty, gives a range of possible
outcomes for each set of events. For example, one may wish to identify the prospective buyers of a new product within a community of size N. From a survey result, one may estimate the probability of
selling p, and then estimate the size of sales as Np with some confidence level.An Application: Suppose we wish to forecast the sales of new toothpaste in a community of 50,000 housewives. A free
sample is given to 3,000 selected randomly, and then 1,800 indicated that they would buy the product.Using the binomial distribution with parameters (3000, 1800/3000), the standard error is 27, and
the expected sale is 50000(1800/3000) = 30000. The 99.7% confidence interval is within 3 times standard error 3(27) = 81 times the total population ratio 50000/3000; i.e., 1350. In other words, the
range (28650, 31350) contains the expected sales. Event History Analysis: Sometimes data on the exact time of a particular event (or events) are available, for example on a group of patients.
Examples of events could include asthma attack; epilepsy attack; myocardial infections; hospital admissions. Often, occurrence (and non-occurrence) of an event is available on a regular basis, e.g.,
daily and the data can then be thought of as having a repeated measurements structure. An objective may be to determine whether any concurrent events or measurements have influenced the occurrence of
the event of interest. For example, daily pollen counts may influence the risk of asthma attacks; high blood pressure might precede a myocardial infarction. One may use PROC GENMOD available in SAS
for the event history analysis. Predicting Market Response: As applied researchers in business and economics, faced with the task of predicting market response, we seldom know the functional form of
the response. Perhaps market response is a nonlinear monotonic, or even a non-monotonic function of explanatory variables. Perhaps it is determined by interactions of explanatory variable.
Interaction is logically independent of its components.When we try to represent complex market relationships within the context of a linear model, using appropriate transformations of explanatory and
response variables, we learn how hard the work of statistics can be. Finding reasonable models is a challenge, and justifying our choice of models toour peers can be even more of a challenge.
Alternative specifications abound.Modern regression methods, such as generalized additive models, multivariate adaptive regression splines, and regression trees, have one clear advantage: They can be
used without specifying a functional form in advance. These data-adaptive, computer- intensive methods offer a more flexible approach to modeling than traditional statistical methods. How well do
modern regression methods perform in predicting market response? Some perform quite well based on the results of simulation studies.Delphi Analysis: Delphi Analysis is used in the decision making
process, in particular in forecasting. Several "experts" sit together and try to compromise on something upon which they cannot agree.System Dynamics Modeling: System dynamics (SD) is a tool for
scenario analysis. Its main modeling tools are mainly the dynamic systems of differential equations andsimulation. The SD approach to modeling is an important one for the following, not the least of
which is that e.g., econometrics is the established methodology of system dynamics. However, from a philosophy of social science perspective, SD is deductive and econometrics is inductive. SD is less
tightly bound to actuarial data and thus is free to expand out and examine more complex, theoretically informed, and postulated relationships. Econometrics is more tightly bound to the data and the
models it explores, by comparison, are simpler. This is not to say the one is better than the other: properly understood and combined, they are complementary. Econometrics examines historical
relationships through correlation and least squares regression model to compute the fit. In contrast, consider a simple growth scenario analysis; the initial growth portion of say, population is
driven by the amount of food available. Sothere is a correlation between population level and food. However, the usual econometrics techniques are limited in their scope. For example, changes in the
direction of the growthcurve for a time population is hard for an econometrics model to capture.Further Readings:Delbecq, A., Group Techniques for Program Planning, Scott Foresman, 1975.Gardner H.S.,
Comparative Economic Systems, Thomson Publishing, 1997.Hirsch M., S. Smale, and R. Devaney, Differential Equations, Dynamical Systems, and an Introduction to Chaos, Academic Press, 2004.Lofdahl C.,
Environmental Impacts of Globalization and Trade: A Systems Study, MIT Press, 2002.Combination of Forecasts: Combining forecasts merges several separate sets of forecasts to form a better composite
forecast. The main question is "how to find the optimal combining weights?" The widely used approach is to change the weights from time to time for a better forecast rather than using a fixed set of
weights on a regular basis or otherwise. All forecasting models have either an implicit or explicit error structure, where error is defined as the difference between the model prediction and the
"true" value. Additionally, many data snooping methodologies within the field of statistics need to be applied to data supplied to a forecasting model. Also, diagnostic checking, as defined within
the field of statistics, is required for any model which uses data. Using any method for forecasting one must use a performance measure to assess the quality of the method. Mean Absolute Deviation
(MAD), and Variance are the most useful measures. However, MAD does not lend itself to making further inferences, but the standard error does. For error analysis purposes, variance is preferred since
variances of independent (uncorrelated) errors are additive; however, MAD is not additive.Regression and Moving Average: When a time series is not a straight line one may use the moving average (MA)
and break-up the time series into several intervals with common straight line with positive trends to achieve linearity for the whole time series. The process involves transformation based on slope
and then a moving average within that interval. For most business time series, one the following transformations might be effective: slope/MA,log (slope),log(slope/MA),log(slope) - 2 log(MA).Further
Readings:Armstrong J., (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners, Kluwer Academic Publishers, 2001.Arsham H., Seasonal and cyclic forecasting in small firm,
American Journal of Small Business, 9, 46-57, 1985.Brown H., and R. Prescott, Applied Mixed Models in Medicine, Wiley, 1999.Cromwell J., W. Labys, and M. Terraza, Univariate Tests for Time Series
Models, Sage Pub., 1994.Ho S., M. Xie, and T. Goh, A comparative study of neural network and Box-Jenkins ARIMA modeling in time series prediction, Computers & Industrial Engineering, 42, 371-375,
2002.Kaiser R., and A. Maravall, Measuring Business Cycles in Economic Time Series, Springer, 2001. Has a good coverage on Hodrick-Prescott Filter among other related topics.Kedem B., K. Fokianos,
Regression Models for Time Series Analysis, Wiley, 2002. Kohzadi N., M. Boyd, B. Kermanshahi, and I. Kaastra , A comparison of artificial neural network and time series models for forecasting
commodity prices, Neurocomputing, 10, 169-181, 1996.Krishnamoorthy K., and B. Moore, Combining information for prediction in linear regression, Metrika, 56, 73-81, 2002.Schittkowski K., Numerical
Data Fitting in Dynamical Systems: A Practical Introduction with Applications and Software, Kluwer Academic Publishers, 2002. Gives an overview of numerical methods that are needed to compute
parameters of a dynamical model by a least squares fit. How to Do Forecasting by Regression AnalysisIntroductionRegression is the study of relationships among variables, a principal purpose of which
is to predict, or estimate the value of one variable from known or assumed values of other variables related to it.Variables of Interest: To make predictions or estimates, we must identify the
effective predictors of the variable of interest: which variables are important indicators? and can be measured at the least cost? which carry only a little information? and which are redundant?
Predicting the Future Predicting a change over time or extrapolating from present conditions to future conditions is not the function of regression analysis. To make estimates of the future, use time
series analysis.Experiment: Begin with a hypothesis about how several variables might be related to another variable and the form of the relationship.Simple Linear Regression: A regression using only
one predictor is called a simple regression.Multiple Regressions: Where there are two or more predictors, multiple regressions analysis is employed.Data: Since it is usually unrealistic to obtain
information on an entire population, a sample which is a subset of the population is usually selected. For example, a sample may be either randomly selected or a researcher may choose the x-values
based on the capability of the equipment utilized in the experiment or the experiment design. Where the x-values are pre-selected, usually only limited inferences can be drawn depending upon the
particular values chosen. When both x and y are randomly drawn, inferences can generally be drawn over the range of values in the sample.Scatter Diagram: A graphical representation of the pairs of
data called a scatter diagram can be drawn to gain an overall view of the problem. Is there an apparent relationship? Direct? Inverse? If the points lie within a band described by parallel lines, we
can say there is a linear relationship between the pair of x and y values. If the rate of change is generally not constant, then the relationship is curvilinear.The Model: If we have determined there
is a linear relationship between t and y we want a linear equation stating y as a function of x in the form Y = a + bx + e where a is the intercept, b is the slope and e is the error term accounting
for variables that affect y but are not included as predictors, and/or otherwise unpredictable and uncontrollable factors.Least-Squares Method: To predict the mean y-value for a given x-value, we
need a line which passes through the mean value of both x and y and which minimizes the sum of the distance between each of the points and the predictive line. Such an approach should result in a
line which we can call a "best fit" to the sample data. The least-squares method achieves this result by calculating the minimum average squared deviations between the sample y points and the
estimated line. A procedure is used for finding the values of a and b which reduces to the solution of simultaneous linear equations. Shortcut formulas have been developed as an alternative to the
solution of simultaneous equations.Solution Methods: Techniques of Matrix Algebra can be manually employed to solve simultaneous linear equations. When performing manual computations, this technique
is especially useful when there are more than two equations and two unknowns.Several well-known computer packages are widely available and can be utilized to relieve the user of the computational
problem, all of which can be used to solve both linear and polynomial equations: the BMD packages (Biomedical Computer Programs) from UCLA; SPSS (Statistical Package for the Social Sciences)
developed by the University of Chicago; and SAS (Statistical Analysis System). Another package that is also available is IMSL, the International Mathematical and Statistical Libraries, which contains
a great variety of standard mathematical and statistical calculations. All of these software packages use matrix algebra to solve simultaneous equations.Use and Interpretation of the Regression
Equation: The equation developed can be used to predict an average value over the range of the sample data. The forecast is good for short to medium ranges.Measuring Error in Estimation: The scatter
or variability about the mean value can be measured by calculating the variance, the average squared deviation of the values around the mean. The standard error of estimate is derived from this value
by taking the square root. This value is interpreted as the average amount that actual values differ from the estimated mean.Confidence Interval: Interval estimates can be calculated to obtain a
measure of the confidence we have in our estimates that a relationship exists. These calculations are made using t-distribution tables. From these calculations we can derive confidence bands, a pair
of non-parallel lines narrowest at the mean values which express our confidence in varying degrees of the band of values surrounding the regression equation.Assessment: How confident can we be that a
relationship actually exists? The strength of that relationship can be assessed by statistical tests of that hypothesis, such as the null hypothesis, which are established using t-distribution,
R-squared, and F-distribution tables. These calculations give rise to the standard error of the regression coefficient, an estimate of the amount that the regression coefficient b will vary from
sample to sample of the same size from the same population. An Analysis of Variance (ANOVA) table can be generated which summarizes the different components of variation. When you want to compare
models of different size (different numbers of independent variables and/or different sample sizes) you must use the Adjusted R-Squared, because the usual R-Squared tends to grow with the number of
independent variables.The Standard Error of Estimate, i.e. square root of error mean square, is a good indicator of the "quality" of a prediction model since it "adjusts" the Mean Error Sum of
Squares (MESS) for the number of predictors in the model as follow:MESS = Error Sum of Squares/(N - Number of Linearly Independent Predictors) If one keeps adding useless predictors to a model, the
MESS will become less and less stable. R-squared is also influenced by the range of your dependent value; so, if two models have the same residual mean square but one model has a much narrower range
of values for the dependent variable that model will have a higher R-squared. This explains the fact that both models will do as well for prediction purposes.You may like using the Regression
Analysis with Diagnostic Tools JavaScript to check your computations, and to perform some numerical experimentation for a deeper understanding of these concepts. Predictions by RegressionThe
regression analysis has three goals: predicting, modeling, and characterization. What would be the logical order in which to tackle these three goals such that one task leads to and /or and justifies
the other tasks? Clearly, it depends on what the prime objective is. Sometimes you wish to model in order to get better prediction. Then the order is obvious. Sometimes, you just want to understand
and explain what is going on. Then modeling is again the key, though out-of-sample predicting may be used to test any model. Often modeling and predicting proceed in an iterative way and there is no
'logical order' in the broadest sense. You may model to get predictions, which enable better control, but iteration is again likely to be present and there are sometimes special approaches to control
problems. The following contains the main essential steps during modeling and analysis of regression model building, presented in the context of an applied numerical example.Formulas and Notations: =
Sx /n This is just the mean of the x values. = Sy /n This is just the mean of the y values. Sxx = SSxx = S(x(i) - )2 = Sx2 - ( Sx)2 / n Syy = SSyy = S(y(i) - )2 = Sy2 - ( Sy) 2 / n Sxy = SSxy = S(x
(i) - )(y(i) - ) = Sx ×y (Sx) × (Sy) / n Slope m = SSxy / SSxx Intercept, b = - m . y-predicted = yhat(i) = m×x(i) + b.Residual(i) = Error(i) = y yhat(i).SSE = Sres = SSres = SSerrors = S[y(i)
yhat(i)]2.Standard deviation of residuals = s = Sres = Serrors = [SSres / (n-2)]1/2.Standard error of the slope (m) = Sres / SSxx1/2.Standard error of the intercept (b) = Sres[(SSxx + n. 2) /(n ×
SSxx] 1/2. 2b1af7f3a8
171 Comments
You are in the right place for your registered driver's license and boating license and all (( https://permisdeconduceres.com/ )) that can be used daily without stress or fear, the same materials
that are used by all authorities are the same materials we use to create real documents. So everything will be 100% of the highest quality. All our documents have the secret features and can be seen
under UV light with full spectrum hologram.
Buy a driving license: https://permisdeconduceres.com/
Buy driving license registered in Switzerland : https://permisdeconduceres.com/cumparati-permis-de-conducere-inregistrat-in-elvetia/
Buy Latvian driving license without exam : https://permisdeconduceres.com/cumparati-permis-de-conducere-leton-fara-examen/
Buy Polish driving license online : https://permisdeconduceres.com/cumparati-permis-de-conducere-polonez-online/
Buy a Spanish driving license from No Exam : https://permisdeconduceres.com/cumparati-permis-de-conducere-spaniol-de-la-fara-examen/
Buy the Romanian driving license: https://permisdeconduceres.com/cumparati-permisul-de-conducere-romanesc/
Buy driving license from EU countries
: https://permisdeconduceres.com/cumparati-permisul-de-conducere-din-tarile-ue/
Buy German driving license : https://permisdeconduceres.com/cumparati-permisul-de-conducere-german/
buy real driving license online
buy original driving license online
Buy your driving license online
Buy Finland driving license online
Buy your UK driving license online
Buy German driving license online | German driving license
Buy your New Zealand driving license online
Buy Latvian driving license online
Buy Ireland driving license online
Buy driving license Denmark online
Buy your Portugal driving license online
Buy your Luxembourg driving license online
Buy your Dutch driving license online
Buy your Norwegian driving license online
Buy your Romanian driving license online
Buy Australian driving license online
German driving license (German),
Buy a Swiss driving license
Buy driving license Austria
Buy German driving license
Buy and renew documents online
Buy a real registered EU driving license
Buy an EU driving license
Buy an original driver's license
Buy a real registered driver's license online
Renew your driving license online
Buy EU driving license online
Buy a real driver's license online
Buy a legal driving license online
Buy an EU driving license
Buy a UK driving licence
Get a real EU driving license without writing exams
Buy a real EU driving license online
Buy a real driver's license
contact us now at: ( https://permisdeconduceres.com/contacteaza-ne/ )
We assure all our clients 100% SECURITY | {"url":"https://www.96guitarstudio.com/forum/music-forum/40-000-lettres-types-correspondance-serial-keyl/dl-6db34168-b6c6-4ff9-b9b9-144edbb922f9","timestamp":"2024-11-10T21:18:36Z","content_type":"text/html","content_length":"1050484","record_id":"<urn:uuid:edbdcff8-c62f-4034-bb24-e663ef6ae557>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00875.warc.gz"} |